
MP3Tunes and the New DMCA Boundaries March 30, 2014

Posted by Bill Rosenblatt in Law, Music, Services, United States.
2 comments

With last week’s jury verdict of copyright liability against Michael Robertson of MP3Tunes, copyright owners are finally starting to get some clarity around the limits of DMCA 512.  The law gives online service operators a “safe harbor” — a way to insulate themselves from copyright liability related to files that users post on their services by responding to takedown notices.

To qualify for the safe harbor, service providers must have a policy for terminating the accounts of repeat infringers and, more relevantly, cannot show "willful blindness" to users' infringing actions.  At the same time, the law does not obligate service providers to proactively police their networks for copyright infringement.  The problem is that even when online services respond to takedown notices, the copyrighted works tend to be re-uploaded immediately.

The law was enacted in 1998, and copyright owners have brought a series of lawsuits against online services over the years to try to establish liability beyond the need to respond to one takedown notice at a time.  Some of these lawsuits tried to revisit the intent of Congress in passing this law, to convince courts that Congress did not intend to require them to spend millions of dollars a year playing Whac-a-Mole games to get their content removed.

In cases such as Viacom v. YouTube and Universal Music Group v. Veoh that date back to 2007, the media industry failed to get courts to revisit the idea that service providers should act as their own copyright police.  But over the past year, the industry has made progress along the “willful blindness” (a/k/a “looking the other way”) front.

These cases featured lots of arguments over what constitutes evidence of willful blindness or its close cousin, "red flag knowledge" of users' infringements.  Courts had a hard time navigating the blurry lines between the "willful blindness" and "no need to self-police" principles in the law, especially when the lines must be redrawn for each online service's feature set, marketing pitch, and so on.

But within the past couple of years, two appeals courts established some of the contours of willful blindness and related principles to give copyright owners some comfort.  The New York-based (and typically media-industry-friendly) Second Circuit, in the YouTube case, found that certain types of evidence, such as company internal communications, could be evidence of willful blindness.  And even the California-based (and typically tech-friendly) Ninth Circuit found similar evidence last year in a case against the BitTorrent site IsoHunt.

The Second Circuit’s opinion in YouTube served as the guiding precedent in the EMI v. MP3Tunes case — and in a rather curious way.  Back in 2011, the district court judge in MP3Tunes handed down a summary judgment ruling that was favorable to Robertson in some but not all respects.  But after the Second Circuit’s YouTube opinion, EMI asked the lower court judge to revisit the case, suggesting that the new YouTube precedent created issues of fact regarding willful blindness that a jury should decide.  The judge was persuaded, the trial took place, and the jury decided for EMI.  Robertson could now be on the hook for tens of millions of dollars in damages.

(Eleanor Lackman and Simon Pulman of the media-focused law firm Cowan DeBaets have an excellent summary of the legal backdrop of the MP3Tunes trial; they say that it is “very unusual” for a judge to go back on a summary judgment ruling like that.)

The MP3Tunes verdict gives media companies some long-sought leverage against online service operators, which keep claiming that their only responsibility is to respond to each takedown notice, one at a time.  This is one — but only one — step of the many needed to clarify the rights of copyright owners and responsibilities of service providers to protect copyrights.  And as far as we can tell now, it does not obligate service providers to implement any technologies or take any more proactive steps to reduce infringement.  Yet it does now seem clear that if service providers want to look the other way, they at least have to keep quiet about it.

As for Robertson, he continues to think of new startup ideas that seem particularly calculated to goad copyright owners.  The latest one, radiosearchengine.com, is an attempt to turn streaming radio into an interactive, on-demand music service a la Spotify.  It lets users find and listen to Internet streams of radio stations that are currently playing specific songs (as well as artists, genres of music, etc.).

Radiosearchengine.com starts with a database of thousands of Internet radio stations, similar to TuneIn, iHeartRadio, Reciva, and various others.  These streaming radio services (many of which are simulcasts of AM or FM signals) carry program content data, such as the title and artist of the song currently playing.  Radiosearchengine.com retrieves this data from all of the stations in its database every few seconds, adds that information to the database, and makes it searchable by users.  Robertson has even created an API so that other developers can access his database.
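For the technically curious, here is a minimal Python sketch of how a service like this could collect now-playing data.  It relies on the SHOUTcast/Icecast "ICY" metadata convention that many Internet radio streams support; the station URL, polling loop, and storage step are stand-ins, not anything from Robertson's actual system.

```python
# Minimal sketch: read the current "StreamTitle" from an ICY-compatible stream.
import re
import urllib.request

def now_playing(stream_url: str):
    """Return the 'Artist - Song' string a stream currently reports, or None."""
    req = urllib.request.Request(stream_url, headers={"Icy-MetaData": "1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        meta_int = int(resp.headers.get("icy-metaint", 0))
        if not meta_int:
            return None                      # station doesn't embed metadata
        resp.read(meta_int)                  # skip one block of audio data
        meta_len = resp.read(1)[0] * 16      # metadata length in 16-byte units
        metadata = resp.read(meta_len).decode("utf-8", errors="ignore")
    match = re.search(r"StreamTitle='([^;]*)';", metadata)
    return match.group(1) if match else None

# A crawler would loop over thousands of station URLs every few seconds and
# write (station, title, timestamp) rows into a searchable database.
```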

Of course, radiosearchengine.com can’t predict that a station will play a certain song in the future (stations aren’t allowed to report it in advance), so users are likely to click on station links and hear their chosen songs starting in the middle.  But with the most popular songs — which are helpfully listed on the site’s left navbar — you can find many stations that are playing them, so you can presumably keep clicking until you find the song near its beginning.

This is something that TuneIn and others could have offered years ago if it didn’t seem so much like lawsuit bait.  On the other hand, Robertson isn’t the first one to think of this: there’s been an app for that for at least three years.

“Netflix for E-Books” Approaches Reality October 7, 2013

Posted by Bill Rosenblatt in Publishing, Services.
add a comment

Back in 2002, a startup company called listen.com had just concluded licensing deals with all of the (then five) major labels.  The result was Rhapsody: the "celestial jukebox" finally brought to life, the first successful subscription on-demand music service.  Rhapsody — whose original focus on classical music must have made it seem like a low-impact experiment to the majors — didn't get on the map until those deals closed.*

Eleven years later, something analogous is happening in the world of book publishing.  Last week, the popular document sharing site Scribd obtained licenses to all backlist titles from HarperCollins, one of the Big Five trade book publishers (along with Penguin Random House, Simon & Schuster, Macmillan, and Hachette), for an $8.99/month all-you-can-read subscription service.  It should only be a matter of time before the other four trickle in.  The service had been in “soft launch” mode since January with catalogs from smaller publishers such as RosettaBooks and SourceBooks.

Why Scribd and not Oyster or any of the others?  Because Scribd already has a huge user base — 80 million monthly visitors — making it an attractive existing audience instead of a speculative one.

Scribd started in 2006 as sort of a “YouTube for documents.”  The vast majority of the documents on the site were free; many were individual authors’ writings, corporate whitepapers, court filings, and so on.  Scribd also enabled authors to sell their documents as paid downloads (DRM optional).  Eventually some publishers put e-books up for individual sale on the site, including major publishers in the higher ed and scholarly segments.

The publishing industry has been buzzing about the possibility of a “Netflix for books” for a couple of years now.  A few startups, such as Oyster, have built out the infrastructure but have only gotten licenses from smaller publishers and independent authors.  At least for now, only Scribd has a major publisher deal; that will make all the difference in taking the subscription model for e-books to the mainstream.  Like it or not, major content providers are key to the success of a content retail site.

From a technical standpoint, Scribd's subscription service has more in common with music apps like Rhapsody and Spotify than with video services like Netflix.  Like those music services, Scribd is mainly a "streaming" service (a/k/a "cloud reading"): it retrieves content in small chunks rather than downloading entire e-books, though it also gives users the option of downloading content to their mobile devices.  (Thereby enabling me to use it on the subways in NYC.)  Files stored on mobile devices are obfuscated or encrypted, so that users lose access to them if they cancel their subscriptions.  And, also analogously to the interactive streaming music services, Scribd uses a simple proprietary "good enough" encryption scheme instead of a heavyweight name-brand DRM technology such as the Adobe DRM used with the Nook, Kobo, Sony Reader, and Bookish systems.
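As a rough illustration of the general pattern just described (locally cached content that becomes unreadable when a subscription lapses), here is a minimal Python sketch using the third-party cryptography package.  It is not Scribd's actual scheme; the class, the entitlement flag, and the choice of the Fernet cipher are all assumptions for illustration.

```python
# Sketch of "good enough" revocable offline reading, not Scribd's real design.
from cryptography.fernet import Fernet

class SubscriptionReader:
    def __init__(self):
        self._key = Fernet.generate_key()     # would be held server-side in practice
        self._active = True                   # subscription state

    def download_chunk(self, plaintext: bytes) -> bytes:
        """Server side: hand the device an encrypted chunk to cache locally."""
        return Fernet(self._key).encrypt(plaintext)

    def open_chunk(self, ciphertext: bytes) -> bytes:
        """Client side: decryption only works while the account is entitled."""
        if not self._active:
            raise PermissionError("subscription lapsed; key withheld")
        return Fernet(self._key).decrypt(ciphertext)

reader = SubscriptionReader()
cached = reader.download_chunk(b"Chapter 1 ...")   # stored on the device
print(reader.open_chunk(cached))                   # readable while subscribed
reader._active = False                             # user cancels
# reader.open_chunk(cached) would now raise PermissionError
```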

Although Scribd is the first paid subscription service with major-publisher licensing, it’s actually not the first way to read major-publisher trade e-books on a time-limited basis: OverDrive introduced OverDrive Read, an HTML5-based cloud reading app for its public library e-lending service, a year ago.

In fact, OverDrive Read is currently the only (legal) way to read frontlist e-book titles from major publishers through a browser app on a time-limited basis.  And that leads to an important difference between Scribd’s service and interactive streaming music services: HarperCollins is only licensing backlist titles, not frontlist (latest bestsellers).  From the publishers’ point of view, this is a smart move that other Big Five publishers will most likely follow.

In that respect, Scribd could become more like Netflix than Rhapsody or Spotify, in that Netflix only offers movies in the home entertainment window — Hollywood’s rough equivalent of “backlist.”  In contrast, the major music labels licensed virtually their entire catalogs to interactive streaming services from the start, save only for some high-profile artist holdouts such as the Beatles and Led Zeppelin.  Instead, the record labels have had to settle for (hard-won) price differentiation between top new releases and back catalog for paid downloads.  Just as readers who want the latest frontlist titles in print have to pay for hardback, those who want them as e-books will have to buy them.  (Or borrow them from the library.)

*The story of Rhapsody is somewhat sad.  For music geeks like myself, the service was a revelation — a truly new way to listen to and explore music.  But Rhapsody slogged through years of difficulty communicating the value of subscription services to users amid numerous ownership changes.  Subscribership grew gradually and plateaued at about a million paying users; then it suffered unfairly from the tsunami of hype around Spotify's US launch in 2011.  It didn't help that Rhapsody took too long to release a halfway decent mobile client; but otherwise Spotify's functionality at the time was virtually identical to Rhapsody's.  Now Rhapsody is struggling yet again as it attempts to expand into markets where Spotify is already established, a competitor that has trained its 24 million users to expect free on-demand streaming with ads while losing money hand over fist.  And in the latest insult to its pioneering history, a 6,000-word feature on Spotify in Mashable — a tome by online journalism standards — mentions Rhapsody not once.

Comcast Adds Carrots to Sticks August 9, 2013

Posted by Bill Rosenblatt in Fingerprinting, Services, Video.
add a comment

Variety magazine reported earlier this week that Comcast is developing a new scheme for detecting illegal file downloads over its Internet service.  When it detects a user downloading content illegally, it will send a message to the user with links to legal alternatives, including from sources that aren’t Comcast properties.  This scheme would be independent of the Copyright Alert System (CAS) that launched in the United States earlier this year.

What a difference the right economic incentives make.  Comcast has significant incentive for offering carrots instead of sticks: it owns NBC Universal, a major movie studio and TV network.  This means that Comcast has incentives to protect content revenue, even if it comes from third parties like iTunes, Netflix, or Amazon.  In addition, if Comcast protects its own network from infringers, it has a stronger position from which to negotiate content distribution deals for its own Xfinity-branded services from other major studios.

Comcast will most likely use the same monitoring services that content owners — like NBC Universal, whose people are collaborating on the design of this (as yet unnamed) system — use to detect allegedly infringing downloads.  It will be able to send messages to users in close to real time — in contrast to CAS, which processes data about detected downloads through a third party before alerts are sent to users.
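To make the matching step concrete, here is a hypothetical Python sketch of what "close to real time" processing might look like: a detection event from a monitoring vendor (IP address, timestamp, title) is mapped to the subscriber who held that IP address at that time, and a notice listing legal sources is queued.  None of the data structures or names here come from Comcast's actual, as-yet-unnamed system.

```python
# Hypothetical sketch of matching a detection event to a subscriber account.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Detection:
    ip: str
    seen_at: datetime
    title: str

# Hypothetical DHCP-lease history: (ip, lease_start, lease_end) -> account id
LEASES = {("10.0.0.7", datetime(2013, 8, 1), datetime(2013, 8, 31)): "acct-123"}

# Hypothetical catalog of legal sources, deliberately not limited to Comcast's own
LEGAL_SOURCES = {"Example Movie": ["iTunes", "Amazon Instant Video", "Xfinity On Demand"]}

def handle_detection(d: Detection) -> None:
    for (ip, start, end), account in LEASES.items():
        if ip == d.ip and start <= d.seen_at <= end:
            links = LEGAL_SOURCES.get(d.title, [])
            print(f"Notify {account}: '{d.title}' is available legally via {links}")
            return
    # No matching subscriber: drop the event rather than guess.

handle_detection(Detection("10.0.0.7", datetime(2013, 8, 9, 12, 0), "Example Movie"))
```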

This scheme is reminiscent of one of the earliest uses of fingerprinting technologies in a commercially licensed service: around 2005, a P2P file-sharing network called iMesh cut a deal with the major record labels (or at least some of them).  They would allow iMesh to operate its network with audio fingerprinting (supplied by Audible Magic, still a leader in the field).  The fingerprinting technology would detect attempts to upload copyrighted music to the network and block them.  Instead, iMesh offered copyrighted music files supplied by the labels, encrypted with DRM, for purchase.  Given that several other P2P file-sharing networks (such as LimeWire) continued to operate at the time without such restrictions, iMesh wasn’t much of a success.
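The upload gate at the heart of the iMesh arrangement can be sketched in a few lines of Python.  The hash-based "fingerprint" below is only a stand-in (Audible Magic uses acoustic fingerprinting that survives re-encoding, which a file hash does not), but the control flow is the same: identify the file, then either block the share and offer a licensed copy, or let it through.

```python
# Sketch of a fingerprint-based upload gate; the hash is a placeholder
# for a real acoustic fingerprint, and the catalog entries are invented.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()   # placeholder for an acoustic fingerprint

# Hypothetical rights database: fingerprint -> licensed-purchase offer
LICENSED_CATALOG = {
    fingerprint(b"licensed track bytes"): "Buy 'Example Song' (DRM download, $0.99)",
}

def handle_upload(data: bytes) -> str:
    fp = fingerprint(data)
    if fp in LICENSED_CATALOG:
        return f"Upload blocked. {LICENSED_CATALOG[fp]}"
    return "Upload accepted."

print(handle_upload(b"licensed track bytes"))   # blocked, purchase offered
print(handle_upload(b"home recording"))         # allowed through
```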

Comcast is hoping to get other ISPs to adopt similar schemes, presumably both as a service to major content owners and in hopes that this anti-piracy feature doesn’t drive users to its competitors.  But that gambit is unlikely to succeed.  Of the four other major ISPs in the US — AT&T, Cablevision, Time Warner Cable, and Verizon — none are corporate siblings to major content owners.  (Time Warner Cable was spun off from Time Warner in 2009, though it retains the name.)  In other words, they won’t have the right incentives.

In contrast, France's HADOPI scheme is supposed to steer people to legal alternatives simply by giving those services a "seal of approval" that they can use themselves.  What Comcast has in mind ought to be more effective.  In the world of movies and TV shows, it would be more effective still if legal services offered content with anything like the completeness of the record label catalogs available through legal music services.  But that's another story for another day.

 

The Coming Two-Tiered World of Library E-book Lending June 4, 2013

Posted by Bill Rosenblatt in Libraries, Publishing, Services, United States.
2 comments

A group of public libraries in California recently launched a beta version of EnkiLibrary, an e-book lending system that the libraries run themselves.  EnkiLibrary is modeled on the Douglas County Libraries system in Colorado.  It enables libraries to acquire e-book titles for lending in a model that approximates print book acquisition more closely than the existing model.

Small independent publishers are making their catalogs available to these library-owned systems on liberal terms, including low prices and a package of rights that emulates ownership.  In contrast, major trade publishers license content to white-label service providers such as OverDrive under a varied, changing, and often confusing array of conditions — including limited catalog, higher prices than those charged to consumers, and limitations on the number of loans.  The vast majority of public libraries in the United States use these systems: they choose which titles to license and offer those to their patrons.

Welcome to the coming two-tiered world of library e-book lending.  E-lending systems like EnkiLibrary may well proliferate, but they are unlikely to take over; instead they will coexist with — or, in EnkiLibrary’s own words, “complement” — those used by the major publishers.

The reason for this is simple: indie publishers — and authors, working through publisher/aggregators like Smashwords — prioritize exposure over revenue, while for major publishers it’s the other way around.  If more liberal rights granted to libraries means that borrowers “overshare” e-books, then so be it: some of that oversharing has promotional value that could translate into incremental, cost-free sales.

In some ways, the emerging dichotomy in library e-lending is like the dichotomy between major and indie labels regarding Internet music sales.  Before 2009, the world of (legal) music downloads was divided into two camps: iTunes sold both major and indie music and used DRM that tied files to the Apple ecosystem; smaller services like eMusic sold only indie music, but the files were DRM-free MP3s that could be played on any device and copied freely.  That year, iTunes dropped DRM, Amazon expanded its DRM-free MP3 download service to major-label music, and eventually eMusic tapered off into irrelevance.

Yet it would be a mistake to stretch the analogy too far.  Major publishers are unlikely to license e-books for library lending on the liberal terms of a system like EnkiLibrary or Douglas County’s in the foreseeable future; the market dynamics are just not the same.

In 2008, iTunes had an inordinately large share of the music download market; the major labels had no leverage to negotiate more favorable licensing terms, such as the ability to charge variable prices for music.  The majors had tried and failed to nurture viable competitors to iTunes.  Amazon was their last and best hope.  iTunes already had an easy-to-use system that was tightly integrated with Apple’s own highly popular devices.  It became clear that the only meaningful advantage that another retailer could have over iTunes was lack of DRM.  So the major labels were compelled to give up DRM in order to get Amazon on board.  By 2009, DRM-free music from all labels became available through all major retailers.

No such competitive pressures exist in the library market.  On the contrary, libraries themselves are under competition from the private sector, including Amazon.  Furthermore, arguments that e-book lending under liberal terms leads to increased sales for small publishers won’t apply very much to major publishers, for reasons given above.

Therefore, unless libraries get e-lending rights under copyright law instead of relying on “publishers’ good graces” (as I put it at the recent IDPF Digital Book 2013 conference) for e-lending permission, it’s likely that libraries will have to labor under a two-tiered system for the foreseeable future.  Douglas County Libraries director Jamie LaRue — increasingly seen as a revolutionary force in the library community — captured the attitude of many when he said, “It isn’t the job of libraries to keep publishers in business.”  He’s right.  Ergo the stalemate should continue for some time to come.

Mega’s Aggressive Takedown Policy? February 1, 2013

Posted by Bill Rosenblatt in Law, New Zealand, Services.
add a comment

Here is an interesting addendum to last week’s story about Mega, the new file storage service from Kim Dotcom of MegaUpload fame.

Recall that Mega encrypts files that users store on its servers, with keys that only the users know… unless they publish URLs that contain the keys, like this one.  This means that Mega can’t know whether or not files on its servers are infringing, unless a user publishes a URL like that.

As TorrentFreak has found, Mega is crawling the web in search of public URLs that contain Mega encryption keys.  When it finds one, it proactively removes the content from its server — at least if the file in question contains audio or video content — and it sends the user who uploaded the file a message saying that it has taken down the file due to receipt of a takedown notice from the copyright owner.
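The crawling step is easy to picture: Mega links at the time carried both the file handle and the decryption key in the URL fragment after "#!", so a crawler only has to spot that pattern on public pages.  Here is a minimal Python sketch; the takedown call is a placeholder, since Mega's internal interfaces are not public.

```python
# Sketch of crawling for public Mega links that expose decryption keys.
import re

MEGA_LINK = re.compile(r"https?://mega\.co\.nz/#!([\w-]+)!([\w-]+)")

def find_public_mega_links(page_html: str):
    """Yield (file_handle, decryption_key) pairs found on a public web page."""
    for match in MEGA_LINK.finditer(page_html):
        yield match.group(1), match.group(2)

def takedown(file_handle: str) -> None:
    # Placeholder: remove the file and message the uploader.
    print(f"Removing {file_handle} and notifying its uploader")

page = '<a href="https://mega.co.nz/#!abc123AB!s3cr3tKeyMaterial">free movie</a>'
for handle, key in find_public_mega_links(page):
    takedown(handle)
```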

It’s impossible to say for sure whether this is a blanket policy, and of course Mega’s web-crawling technology probably doesn’t work perfectly.  But if this is Mega’s policy, then Mega is being at least as aggressive as RapidShare in going after public links to infringing content.  RapidShare finds public links to files on its service and, apparently, examines them with content identification technology to see if they are infringing.  According to TorrentFreak’s findings, Mega does no analysis; it uses no fingerprinting or other content identification technology; it just takes the content down.  It has taken down unambiguously legal content.  (My file wasn’t taken down, because it’s just a PDF of a presentation that I created, and/or because it’s only on this blog and not on a known P2P index site.)

Mega could be doing this in order to conform to the terms of Kim Dotcom's arrest.  Whatever the reason, it helps ensure that pirated material on Mega can only be shared by sending encryption keys through means such as email… or perhaps URLs that are publicly available but are themselves encrypted.  And if you truly want to share audio or video material to which you have the rights, then Mega isn't the best place for you anyway.

A commenter on TechDirt put it best: “So we’re still allowed to share the stuff, but just not on linking sites? Seems fair enough to me. Probably for the best too, since some dumbasses clearly don’t know how to hide their copyrighted material properly.”

Kim Dotcom Embraces DRM January 22, 2013

Posted by Bill Rosenblatt in DRM, New Zealand, Services.
add a comment

Kim Dotcom launched a new cloud file storage service, the New Zealand-based Mega, last weekend on the one-year anniversary of the shutdown of his previous site, the notorious MegaUpload.  (The massive initial interest in the site* prevented me from trying out the new service until today.)

Mega encrypts users' files with what looks like a per-file content key (AES-128) that is in turn protected by 2048-bit RSA asymmetric-key encryption.  It derives the RSA keys from users' passwords and other pseudo-random data.  Downloading a file from a Mega account requires knowing either the password that was used to generate the RSA key (i.e., logging in to the account used to upload the file) or the file's key itself.
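For readers who want to see the shape of such a key hierarchy, here is a minimal sketch using the Python cryptography package: a random per-file AES key encrypts the content, and an account-level RSA-2048 key wraps that content key.  This follows the description above rather than Mega's actual implementation, and it omits deriving the RSA key material from the password.

```python
# Sketch of a content key wrapped by an account key, per the description above.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Per-account RSA-2048 key pair (Mega derives this from password + random data)
account_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Per-file 128-bit content key, used to encrypt the file itself
content_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(content_key).encrypt(nonce, b"file contents here", None)

# The content key is stored wrapped under the account's public key...
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
wrapped_key = account_key.public_key().encrypt(content_key, oaep)

# ...and downloading requires either the account (its private key) or the raw
# content key, which is exactly what a shared "#!handle!key" link exposes.
recovered = account_key.decrypt(wrapped_key, oaep)
assert recovered == content_key
```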

Hmm.  Content encrypted with symmetric keys that in turn are protected by asymmetric keys… sounds quite a bit like DRM, doesn’t it?

Well, not quite.  While DRM systems assume that file owners won't want to publish keys used to encrypt the files, Mega not only allows but enables you to publish your files' keys.  Mega lets you retrieve the key for a given file in the form of a URL; just right-click on the file you want and select "Get link."  (Here's a sample.)  You can put the resulting URL into a blog post, tweet, email message, or website featuring banner ads for porn and mail-order brides.

(And of course, unlike DRM systems, once you obtain a key and download a file, it’s yours in unencrypted form to do with as you please.  The encryption isn’t integrated into a secure player app.)

Yet in practical terms, Mega is really no different from file-storage services that let users publish URLs to files they store — examples of which include RapidShare, 4Shared, and any of dozens of file transfer services (YouSendIt, WhaleMail, DropSend, Pando, SendUIt, etc.).

Mega touts its use of encryption as a privacy benefit.  What it really offers is privacy from the kinds of piracy monitoring services that media companies use to generate takedown notices — an application of encryption that hardcore pirates have used and that Kim Dotcom purports to “take … out to the mainstream.”  It will be impossible to use content identification technologies, such as fingerprinting, to detect the presence of copyrighted materials on Mega’s servers.  RapidShare, for example, analyzes third-party links to files on its site for potential infringements; Mega can’t do any such thing, by design.

Mega’s use of encryption also plays into the question of whether it could ever be held secondarily liable for its users’ infringements under laws such as DMCA 512 in the United States.  The Beltway tech policy writer Paul Sweeting wrote an astute analysis of Mega’s chances against the DMCA over the weekend.

Is Kim Dotcom simply thumbing his nose at Big Media again?  Or is he seriously trying to make Mega a competitor to legitimate, prosaic file storage services such as DropBox?  The track records of services known for piracy trying to go “legit” are not encouraging — just ask Bram Cohen (BitTorrent Entertainment Network) or Global Gaming Factory (purchasers of The Pirate Bay’s assets).  Still, this is one to watch as the year unfolds.

*Or, just possibly, server meltdowns faked to generate mountains of credulous hype?

New Study on the Changing Face of Video Content Security October 23, 2012

Posted by Bill Rosenblatt in Conditional Access, Services, Video.
4 comments

Farncombe Technologies, a pay TV technology consultancy based in the UK, has just released a white paper called “The Future of Broadcast Cardless Security.”  The white paper incorporates the results of a survey of pay TV operators, content owners, security vendors, and device makers on pay TV security concerns today and in the future.

Operators of pay TV (cable, satellite, and telco-TV) networks have put more money and effort into digital content security than any other type of media distributor, certainly more than any digital music or e-book sellers ever have.  That’s because the economic incentives of pay TV operators are aligned with those of content owners such as movie studios and TV networks: operators don’t want their signals stolen, while content owners want to minimize unauthorized use of the content that travels over those signals.

For a long time, the technology used to thwart signal theft was the same as that used to guard against copyright infringement: conditional access (CA).  Life was simple when cable companies operated closed networks to dedicated set-top boxes (STBs): the content went from head ends to STBs and nowhere else.  In that situation, if you secure the network, you secure the content.  But nowadays, two developments threaten this alignment of incentives and thus blow open the question of how pay TV operators will secure content.

First, the model of so-called piracy has changed.  Historically, pay TV piracy has meant enabling people to receive operators' services without paying for them, by doing such things as sharing control words (decryption keys in CA systems) or distributing unauthorized smartcards for STBs.  But now, with widespread high-speed broadband and technologies such as BitTorrent, people can get content that flows over pay TV networks without touching the pay TV network at all.

Second, operators are offering "TV Everywhere" type services that let users view content on Internet-connected devices such as PCs, tablets, smartphones, and so on, in addition to their STBs.  They are doing this in response to competition from "over the top" (OTT) services that make video content available over the Internet.  Operators have less direct incentive to protect content being distributed to third-party Internet-connected devices than they do to protect it within their own networks.

The Farncombe study predicts the likely effects of these developments (and others) on pay TV security in the years to come.  According to the survey results, operators’ primary piracy concerns today are, in order of priority: control word sharing, rebroadcasting their content over the Internet (illegal streaming), and downloads of their content over the Internet (e.g. torrents); but in five years’ time the order of priority is expected to reverse.  The threat of bogus smartcard distribution is expected to diminish.

The intent of this whitepaper is to motivate the use of pure software security technology for pay TV networks, i.e., schemes that don't use smartcards.  So-called cardless security schemes are available from vendors such as Verimatrix, which sponsored the whitepaper.  They are cheaper to implement, and they now use software techniques such as whitebox encryption and code diversity that are often considered to be as strong as hardware techniques (for more on this, see my 2011 whitepaper The New Technologies for Pay TV Content Security, available here).

However, the whitepaper also calls for the use of forensic Internet antipiracy techniques instead of — or in addition to — those that (like CA) secure operators’ networks.  In other words, if piracy takes place mostly on the Internet instead of on operators’ networks, then antipiracy measures ought to be more cost-effective if they take place on the Internet as well.

The paper advocates the use of techniques such as watermarking, fingerprinting, and other types of Internet traffic monitoring to find pirate services and gather evidence to get them shut down.  It calls such techniques “new” although video security companies such as NDS (now Cisco) and Nagravision have been offering them for years, and Irdeto acquired BayTSP a year ago in order to incorporate BayTSP’s well-established forensic techniques into its offerings.  A handful of independent forensic antipiracy services exist as well.

This all raises the question: will pay TV operators continue to put as much effort into content security as they have until now?  Much of what pay TV networks offer consists of programming licensed non-exclusively from others.  The amount of programming that is licensed exclusively to operators in their geographic markets — such as live major-league sports — is decreasing over time as a proportion of total programming that operators offer.

The answer is, most likely, that operators will continue to want to secure their core networks, if only because such techniques are not mutually exclusive with forensic Internet monitoring or other techniques.  Yet operators’ security strategies are likely to change in two ways.  First, as the Farncombe whitepaper points out, operators will want security that is more cost-effective — which cardless solutions provide.

Second, network security technologies will have to integrate with DRM and stream encryption technologies used to secure content distributed over operators’ “TV Everywhere” services.  The whitepaper doesn’t cover this aspect of it, but for example, Verimatrix can integrate its software CA technology with a couple of DRM systems (Microsoft’s PlayReady and Intertrust’s Marlin) used for Internet content distribution. Licensors of content, especially those that make exclusive deals with operators, will insist on this.

The trouble is that such integrated security is more complex and costs more, not less, than traditional CA — and the costs and complexities will only go up as these services get more sophisticated and flexible.  Operators may start to object to these growing costs and complexities when the content doesn't flow over their networks.  On the other hand, those same operators will become increasingly dependent on high-profile exclusive licensing deals to help them retain their audiences in the era of cord-cutting — meaning that content licensors will have a strong hand in dictating content security terms.  It will be interesting to see how this dynamic shapes video content security as it plays out.

Music Subscription Services Go Mainstream September 17, 2012

Posted by Bill Rosenblatt in Business models, Music, Services.
add a comment

While revisiting some older articles here, I came across a prediction I made almost exactly a year ago, after Facebook's announcement of integration with several music subscription services at its f8 conference.  I claimed that this would have a "tidal wave" effect on such services:

I predict that by this time next year, total paid memberships of subscription music services will reach 10 million and free memberships will cross the 50 million barrier.

So, how did I do?  Not bad, as it turns out.

The biggest subscription music services worldwide are Spotify and Deezer.  Let’s look at them first.

Spotify hasn't published subscribership data recently, but music analyst Mark Mulligan measured its monthly membership at 20 million back in May of this year.  Judging by the trajectory of Mulligan's numbers, it ought to be about 24 million now.  In fact, Mulligan shows that Spotify's growth trajectory is about equal to Pandora's.  Furthermore, that's only for users whose plays are reported to Facebook.  Some holdout users — yours truly included — refuse to broadcast their plays that way (despite constant pleas from Spotify), so make it at least 25 million.

Deezer, based in France, is Spotify’s number one competitor outside of the US.  A month ago, PaidContent.org put Deezer’s numbers at 20 million total but only 1.5 million paid, and added that Spotify’s paid subscribership is at 4 million.

Rhapsody is the number two subscription service in the US market.  Unlike Spotify and Deezer, Rhapsody has not embraced the “freemium” trend and has stuck to its paid-only model.  Rhapsody passed the 1 million subscriber milestone last December.

The next tier of subscription services includes MOG, Rdio, and MuveMusic (where the monthly fee is bundled in with wireless service) in the US; regional players including WIMP, simfy, and Juke (Europe); Galaxie (Canada); various others in the Asia-Pacific market; and Omnifone’s recently launched multi-geography rara.com.  These should all be good for a few hundred thousand subscribers each.

So among all these services, 50 million looks pretty safe for the number of total subscribers.  As for the number of paid subscribers, IFPI put it at 13.4 million for 2011 in its 2012 Digital Music Report, published in January.  Given that this represents a 63% increase over 2010, we can be confident in saying that the figure now is more like 17-18 million, but I'd back it off somewhat because IFPI probably counts services that I would not categorize as subscription (such as premium Internet radio).  So let's say 13-15 million paid – way past my prediction of 10 million.

It's also worth noting that if these figures are correct, the percentage of paid subscribership is in the 26-30% range.  That's in line with the 20-30% that readers predicted here when I ran a poll on this a year ago — the most optimistic of the poll answer choices.
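For transparency, here is the back-of-the-envelope arithmetic in runnable form; the 50 million total and the 13-15 million paid range are this post's own estimates, not hard data.

```python
# The arithmetic behind the estimates above, as a quick sanity check.
ifpi_paid_2011 = 13.4e6                 # IFPI Digital Music Report figure for 2011
paid_2010 = ifpi_paid_2011 / 1.63       # a 63% increase implies roughly 8.2M in 2010
print(f"Implied 2010 base: {paid_2010/1e6:.1f} million")

# The post's working estimate: 13-15 million paid out of roughly 50 million total.
for paid in (13e6, 15e6):
    print(f"{paid/1e6:.0f}M paid / 50M total = {paid/50e6:.0%}")   # 26% and 30%
```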

To put this in perspective, 50 million still falls far short of the audiences for paid downloads, Internet radio, and even YouTube, which are all well above 100 million worldwide.  But it proves that the public is catching on to the value of subscription services, and they are no longer a niche product for “grazers.”

Getty Images Launches Automated Rights Licensing for Photo Sharing Services September 12, 2012

Posted by Bill Rosenblatt in Fingerprinting, Images, Law, Rights Licensing, Services.
add a comment

Getty Images announced on Monday a deal with SparkRebel, a site that calls itself a “collaborative fashion and shopping inspiration” — but is perhaps more expediently described as “Pinterest for fashionistas, with Buy buttons” — in which images that users post to the site are recognized and their owners compensated for the use.  The arrangement uses ImageIRC technology from PicScout, the Israeli company that Getty Images acquired last year.  ImageIRC is a combination of an online image rights registry and image recognition technology based on fingerprinting.  It also uses PicScout’s Post Usage Billing system to manage royalty compensation.

Here's how it works: SparkRebel users post images of fashion items they like to their profile pages.  Whenever a user posts an image, SparkRebel calls PicScout's content identification service to recognize it.  If the service finds the image's fingerprint in its database, it uses ImageIRC to determine the rights holders; then SparkRebel pays any royalty owed through Post Usage Billing.  PicScout ImageIRC's database includes the images of Getty itself, the largest stock image agency in the world.  (Getty Images was sold just last month to Carlyle Group, the private equity giant, for over US $3 billion.)  In all, ImageIRC includes data on over 80 million images from more than 200 licensors, which can opt in to the arrangement with SparkRebel (and presumably similar deals in the future).
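Sketched in code, the flow looks roughly like the following.  PicScout's real API is not public, so the client class, method names, and return fields here are purely hypothetical; the point is the shape of the upload hook.

```python
# Hypothetical sketch of an upload hook: identify the image, record the usage.
from dataclasses import dataclass

@dataclass
class Match:
    image_id: str
    rights_holder: str
    royalty_usd: float

class RecognitionClient:                      # stand-in for PicScout ImageIRC
    def identify(self, image_bytes: bytes):
        ...                                   # fingerprint lookup happens server-side

def on_image_posted(image_bytes: bytes, client: RecognitionClient, billing_log: list) -> None:
    match = client.identify(image_bytes)
    if match is None:
        return                                # unknown image: nothing owed
    # Post Usage Billing equivalent: record the use so the licensor gets paid.
    billing_log.append((match.image_id, match.rights_holder, match.royalty_usd))

class FakeClient(RecognitionClient):          # toy implementation for a demo run
    def identify(self, image_bytes: bytes):
        return Match("getty-42", "Getty Images", 0.05) if image_bytes else None

log = []
on_image_posted(b"\x89PNG...", FakeClient(), log)
print(log)    # [('getty-42', 'Getty Images', 0.05)]
```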

This deal is a landmark in various ways. It is a more practically useful application for image recognition than ever before, and it brings digital images into some of the same online copyright controversies that have existed for music, video, and other types of content.

Several content recognition platforms exist; examples include Civolution's Teletrax service for video; Attributor for text; and Audible Magic, Gracenote, and Rovi for music.  Many of these technologies were first designed for catching would-be infringers: blocking uploads and supplying evidence for takedown notices and other legal actions.  Some of them evolved to add rights licensing functionality, so that when they find content on a website, blog, etc., instead of sending a nastygram, copyright owners can offer revenue-sharing or other licensing terms.  The music industry has experimented with audio fingerprinting to automate radio royalty calculations.

The idea of extending a content identification and licensing service to user-posted content is also not new: Google’s Content ID technology for YouTube has led to YouTube becoming a major legal content platform and likely the largest source of ad revenue from music in the world.  But while Content ID is exclusive to YouTube, PicScout ImageIRC and Post Usage Billing are platforms that can be used by any service that publishes digital images.

PicScout has had the basic technology components of this system for a while; SparkRebel merely had to implement some simple code in its photo-upload function to put the pieces together.  So why don’t we see this on Pinterest, not to mention Flickr, Tumblr, and so many others?

The usual reason: money.  Put simply, SparkRebel has more to gain from this arrangement than most other image-sharing sites.  SparkRebel has to pay royalties on many of the images that its users post.  Yet many of those images are of products that SparkRebel sells; therefore if an image is very popular on the site, it will cost SparkRebel more in royalties but likely lead to more commissions on product sales.  Furthermore, a site devoted to fashion is likely to have a much higher percentage of copyrighted images posted to it than, say, Flickr.

Yet where there’s no carrot, there might be a stick.  Getty Images and other image licensors have been known to be at odds with sites like Pinterest over copyright issues.  Pinterest takes a position that is typical of social-media sites: that it is covered (in the United States) by DMCA 512, the law that enables them to avoid liability by responding to takedown notices — and as long as it responds to them expeditiously, it has no further copyright responsibility.

Courts in cases such as UMG v. Veoh and Viacom v. Google (YouTube) have also held that online services have no obligation to use content identification technology to deal with copyright issues proactively.  Yet the media industry is trying to change this; for example, that’s most likely the ultimate goal of Viacom’s pending appeal in the YouTube case.  (That case concerns content that users uploaded before Google put Content ID into place.)

On the other hand, the issue for site operators in cases like this is not just royalty payments; it’s also the cost of implementing the technology that identifies content and acts accordingly.  A photo-sharing site can implement PicScout’s technology easily and (unlike analogous technology for video) with virtually no impact on its server infrastructure or the response time for users.  This combined with the “make it easy to do the right thing” aspect of the scheme may bring the sides closer together after all.

Irdeto Intelligence: Monitoring Video Content Beyond Managed Networks September 11, 2012

Posted by Bill Rosenblatt in Conditional Access, Services, Video.
1 comment so far

Last week’s big IBC conference in Amsterdam brought a raft of announcements from video content protection vendors, most of which were typical customer success stories and strategic partnerships.  One product launch announcement, however, was particularly interesting: Irdeto Intelligence, which launched last Friday.

Irdeto Intelligence is the result of the company’s acquisition of BayTSP in October 2011.  The service is an extension of BayTSP’s existing offering and had been under development before the acquisition.  It crawls the Internet looking for infringing content and provides an interactive dashboard that enables customers to see data such as where infringing files were found (by ISP or other service provider) and the volume for each title.
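The dashboard roll-up is conceptually simple: group detection records by ISP and by title.  Here is a minimal sketch with a hypothetical record format; Irdeto's actual data model is not public.

```python
# Sketch of the kind of aggregation such a dashboard presents.
from collections import Counter

detections = [
    {"title": "Show A", "isp": "ISP-1"},
    {"title": "Show A", "isp": "ISP-2"},
    {"title": "Movie B", "isp": "ISP-1"},
]

by_isp = Counter(d["isp"] for d in detections)
by_title = Counter(d["title"] for d in detections)
print(by_isp.most_common())    # [('ISP-1', 2), ('ISP-2', 1)]
print(by_title.most_common())  # [('Show A', 2), ('Movie B', 1)]
```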

Before Irdeto acquired it last year, BayTSP was one of a handful of independent companies that crawl the Internet looking for infringing content; others include Attributor, Civolution, MarkMonitor, and Peer Media Technologies.  The company wanted to grow its business beyond its core piracy monitoring service.  It found — like other companies of its type — that the mountains of data on so-called piracy that it was collecting had value beyond helping copyright owners generate cease-and-desist or takedown notices.

The big issue with piracy monitoring services is — as with so many other technologies we discuss here — who pays for them.  Hollywood studios (and other types of media businesses) pay the companies mentioned above to find infringing copies of their content.  Now that BayTSP is part of a leading video security business, its customers become managed network operators (cable, satellite, telco-TV) and broadcasters.  As I mentioned last year when the acquisition was announced, a cynic could read the deal as Hollywood’s attempt to push piracy monitoring costs downstream to operators, just as it does the cost of DRM and conditional access.

Irdeto confirmed that it is still offering BayTSP’s existing services to copyright owners.  Still, Irdeto’s acquisition of BayTSP is something of a gamble.  It’s part of a theme that I see growing in importance over the next few years: competition from Internet-based “over the top” (OTT) services is forcing managed network operators to offer “TV Anywhere” type services for viewing their programming over Internet-connected devices such as PCs, tablets, and mobile handsets.

Hollywood has always had a strong relationship with managed network operators on content protection because their economic incentives were aligned: Hollywood wanted to mitigate infringement of its movies and TV shows; operators wanted to mitigate theft of access to their networks.  This has led to set-top boxes that are fortresses of security compared, say, to e-book readers, portable music players, and (especially) PCs.

But once operator-licensed content leaves managed networks to go “over the top,” just how much responsibility do operators have to protect content?  This is a question that will loom larger and larger.

Other providers of conditional access (CA) technology for operators, such as NDS (now Cisco) and Nagra, offer piracy monitoring services.  But those have typically been limited in scope to things like sharing of control words (content keys used in CA systems for the DVB standard), not illegal file-sharing.  In acquiring BayTSP, Irdeto is betting that operators will want to pay more for this type of monitoring.

But why would, say, a cable operator care about content uploaded to file-sharing sites?  Once it has this information, how would it use it if not to generate takedown notices or other legal means of getting infringing content removed?

Irdeto has two answers to this question.  Most important is live event content, particularly sports.  Hollywood has nothing to do with this type of content.  Operators and terrestrial broadcasters suffer when users can view live events on illegal streaming sites with only slight time delays.  Irdeto Intelligence updates its search results at five-minute intervals, so that operators can act to get illegal streams shut down very quickly.

The second reason has to do with the fact that more and more operators are offering so-called triple-play services, which include Internet service in addition to TV and telephony.  A triple-play provider will be seeking licenses to content from Hollywood, which will be more willing to grant licenses if the provider actively addresses infringing content on its ISP service.

Irdeto says that it has signed two customers for Irdeto Intelligence so far, and that it received strong interest for the service on the show floor at IBC.  It will be interesting to see how other video security vendors react as OTT and TV Anywhere continue to grow.

 
