
New Study on the Changing Face of Video Content Security October 23, 2012

Posted by Bill Rosenblatt in Conditional Access, Services, Video.

Farncombe Technologies, a pay TV technology consultancy based in the UK, has just released a white paper called “The Future of Broadcast Cardless Security.”  The white paper incorporates the results of a survey of pay TV operators, content owners, security vendors, and device makers on pay TV security concerns today and in the future.

Operators of pay TV (cable, satellite, and telco-TV) networks have put more money and effort into digital content security than any other type of media distributor, certainly more than any digital music or e-book sellers ever have.  That’s because the economic incentives of pay TV operators are aligned with those of content owners such as movie studios and TV networks: operators don’t want their signals stolen, while content owners want to minimize unauthorized use of the content that travels over those signals.

For a long time, the technology used to thwart signal theft was the same as that used to guard against copyright infringement: conditional access (CA).  Life was simple when cable companies operated closed networks to dedicated set-top boxes (STBs): the content went from head ends to STBs and nowhere else.  In that situation, if you secure the network, you secure the content.  But nowadays, two developments threaten this alignment of incentives and thus blow open the question of how pay TV operators will secure content.

First, the model of so-called piracy has changed.  Historically, pay TV piracy has meant enabling people to receive operators' services without paying for them, by doing such things as sharing control words (decryption keys in CA systems) or distributing unauthorized smartcards for STBs.  But now, with widespread broadband and technologies such as BitTorrent, people can get content that flows over pay TV networks without touching the pay TV network at all.

Second, operators are offering "TV Everywhere" type services that let users view content on Internet-connected devices such as PCs, tablets, smartphones, and so on, in addition to their STBs.  They are doing this in response to competition from "over the top" (OTT) services that make video content available over the Internet.  Operators have less direct incentive to protect content being distributed to third-party Internet-connected devices than they do to protect it within their own networks.

The Farncombe study predicts the likely effects of these developments (and others) on pay TV security in the years to come.  According to the survey results, operators’ primary piracy concerns today are, in order of priority: control word sharing, rebroadcasting their content over the Internet (illegal streaming), and downloads of their content over the Internet (e.g. torrents); but in five years’ time the order of priority is expected to reverse.  The threat of bogus smartcard distribution is expected to diminish.

The intent of this whitepaper is to motivate the use of pure software security technology for pay-TV networks, i.e., schemes that don’t use smartcards.   So-called cardless security schemes are available from vendors such as Verimatrix, which sponsored the whitepaper.  They are cheaper to implement, and they now use software techniques such as whitebox encryption and code diversity that are often considered to be as strong as hardware techniques (for more on this, see my 2011 whitepaper The New Technologies for Pay TV Content Security, available here).

However, the whitepaper also calls for the use of forensic Internet antipiracy techniques instead of — or in addition to — those that (like CA) secure operators’ networks.  In other words, if piracy takes place mostly on the Internet instead of on operators’ networks, then antipiracy measures ought to be more cost-effective if they take place on the Internet as well.

The paper advocates the use of techniques such as watermarking, fingerprinting, and other types of Internet traffic monitoring to find pirate services and gather evidence to get them shut down.  It calls such techniques “new” although video security companies such as NDS (now Cisco) and Nagravision have been offering them for years, and Irdeto acquired BayTSP a year ago in order to incorporate BayTSP’s well-established forensic techniques into its offerings.  A handful of independent forensic antipiracy services exist as well.

This all raises the question: will pay TV operators continue to put as much effort into content security as they have done until now?  Much of what pay TV networks offer consists of programming licensed non-exclusively from others.  The amount of programming that is licensed exclusively to operators in their geographic markets — such as live major-league sports — is decreasing over time as a proportion of total programming that operators offer.

The answer is, most likely, that operators will continue to want to secure their core networks, if only because such techniques are not mutually exclusive with forensic Internet monitoring or other techniques.  Yet operators’ security strategies are likely to change in two ways.  First, as the Farncombe whitepaper points out, operators will want security that is more cost-effective — which cardless solutions provide.

Second, network security technologies will have to integrate with DRM and stream encryption technologies used to secure content distributed over operators' "TV Everywhere" services.  The whitepaper doesn't cover this aspect, but, for example, Verimatrix can integrate its software CA technology with a couple of DRM systems (Microsoft's PlayReady and Intertrust's Marlin) used for Internet content distribution.  Licensors of content, especially those that make exclusive deals with operators, will insist on this.

The trouble is that such integrated security is more complex and costs more, not less, than traditional CA — and the costs and complexities will only go up as these services get more sophisticated and flexible.  Operators may start to object to these growing costs and complexities when the content doesn't flow over their networks.  On the other hand, those same operators will become increasingly dependent on high-profile exclusive licensing deals to help them retain their audiences in the era of cord-cutting — meaning that content licensors will have a strong hand in dictating content security terms.  It will be interesting to see how this dynamic affects video content security as it emerges.

Music Subscription Services Go Mainstream September 17, 2012

Posted by Bill Rosenblatt in Business models, Music, Services.

While revisiting some older articles here, I came across a prediction I made almost exactly a year ago, after Facebook's announcement of integration with several music subscription services at its f8 conference.  I claimed that this would have a "tidal wave" effect on such services:

I predict that by this time next year, total paid memberships of subscription music services will reach 10 million and free memberships will cross the 50 million barrier.

So, how did I do?  Not bad, as it turns out.

The biggest subscription music services worldwide are Spotify and Deezer.  Let’s look at them first.

Spotify hasn’t published subscribership data recently, but music analyst Mark Mulligan measured its monthly membership at 20 million back in May of this year.  Judging by the trajectory of Mulligan’s numbers, it ought to be about 24 million now.  In fact, Mulligan shows that Spotify’s growth trajectory is about equal to Pandora’s.  Furthermore, that’s only for users whose plays are reported to Facebook.  A redoubt of users — such as yours truly– refuse to broadcast their plays that way (despite constant pleas from Spotify), so make it at least 25 million.

Deezer, based in France, is Spotify’s number one competitor outside of the US.  A month ago, PaidContent.org put Deezer’s numbers at 20 million total but only 1.5 million paid, and added that Spotify’s paid subscribership is at 4 million.

Rhapsody is the number two subscription service in the US market.  Unlike Spotify and Deezer, Rhapsody has not embraced the “freemium” trend and has stuck to its paid-only model.  Rhapsody passed the 1 million subscriber milestone last December.

The next tier of subscription services includes MOG, Rdio, and MuveMusic (where the monthly fee is bundled in with wireless service) in the US; regional players including WIMP, simfy, and Juke (Europe); Galaxie (Canada); various others in the Asia-Pacific market; and Omnifone’s recently launched multi-geography rara.com.  These should all be good for a few hundred thousand subscribers each.

So among all these services, 50 million looks pretty safe for the number of total subscribers.  As for the number of paid subscribers, IFPI put it at 13.4 million for 2011 in its 2012 Digital Music Report, published in January.  Given that this represents a 63% increase over 2010, we can be confident in saying that the figure now is more like 17-18 million, but I'd back it off somewhat because IFPI probably counts services that I would not categorize as subscription (such as premium Internet radio).  So let's say 13-15 million paid – way past my prediction of 10 million.

It’s also worth noting that if these figures are correct, the percentage of paid subscribership is in the 26-30% range.  That’s in line with the 20-30% that readers predicted here when I ran a poll on this a year ago — the most optimistic of the poll answer choices.
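
For readers who want to check the arithmetic, here is the back-of-the-envelope calculation; the figures are this post's estimates, not official numbers:

```python
# Back-of-the-envelope check of the paid-subscriber share.
# All figures are the estimates discussed above, in millions.
total_subscribers = 50.0          # estimated total (free + paid) memberships
paid_low, paid_high = 13.0, 15.0  # adjusted IFPI-based paid estimate

share_low = paid_low / total_subscribers * 100    # about 26%
share_high = paid_high / total_subscribers * 100  # about 30%
print(f"Paid share: {share_low:.0f}%-{share_high:.0f}%")
```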

To put this in perspective, 50 million still falls far short of the audiences for paid downloads, Internet radio, and even YouTube, which are all well above 100 million worldwide.  But it proves that the public is catching on to the value of subscription services, and they are no longer a niche product for “grazers.”

Getty Images Launches Automated Rights Licensing for Photo Sharing Services September 12, 2012

Posted by Bill Rosenblatt in Fingerprinting, Images, Law, Rights Licensing, Services.

Getty Images announced on Monday a deal with SparkRebel, a site that calls itself a “collaborative fashion and shopping inspiration” — but is perhaps more expediently described as “Pinterest for fashionistas, with Buy buttons” — in which images that users post to the site are recognized and their owners compensated for the use.  The arrangement uses ImageIRC technology from PicScout, the Israeli company that Getty Images acquired last year.  ImageIRC is a combination of an online image rights registry and image recognition technology based on fingerprinting.  It also uses PicScout’s Post Usage Billing system to manage royalty compensation.

Here’s how it works: SparkRebel users post images of fashion items they like to their profile pages.  Whenever a user posts an image, SparkRebel calls PicScout’s content identification service to recognize it.  If it finds the image’s fingerprint in its database, it uses ImageIRC to determine the rights holders; then SparkRebel pays any royalty owed through Post Usage Billing.  PicScout ImageIRC’s database includes Getty’s own images; Getty is the largest stock image agency in the world.  (Getty Images itself was sold just last month to Carlyle Group, the private equity giant, for over US $3 billion.)  In all, ImageIRC includes data on over 80 million images from more than 200 licensors, which can opt in to the arrangement with SparkRebel (and presumably similar deals in the future).
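
PicScout's APIs aren't public, so the following is a purely hypothetical sketch of the flow just described; the fingerprint function, registry structure, and royalty rate are all invented for illustration, and real image fingerprints are perceptual (robust to resizing and recompression) rather than cryptographic:

```python
import hashlib

# Hypothetical stand-ins for the components described above: a
# fingerprint function (faked here with a hash), an ImageIRC-style
# rights registry, and a Post-Usage-Billing-style royalty ledger.

def fingerprint(image_bytes: bytes) -> str:
    # Placeholder only: a real system uses a perceptual fingerprint,
    # not a cryptographic hash, so near-duplicates still match.
    return hashlib.sha256(image_bytes).hexdigest()

RIGHTS_REGISTRY = {}  # fingerprint -> (rights holder, per-use rate)
ROYALTIES_OWED = {}   # rights holder -> accumulated royalties

def register_image(image_bytes: bytes, rights_holder: str, rate: float):
    RIGHTS_REGISTRY[fingerprint(image_bytes)] = (rights_holder, rate)

def on_user_post(image_bytes: bytes) -> str:
    """Called when a user posts an image: identify it, then bill for it."""
    match = RIGHTS_REGISTRY.get(fingerprint(image_bytes))
    if match is None:
        return "no match: publish without billing"
    holder, rate = match
    ROYALTIES_OWED[holder] = ROYALTIES_OWED.get(holder, 0.0) + rate
    return f"matched: owe {holder} ${rate:.2f}"

register_image(b"<getty photo bytes>", "Getty Images", 0.05)
print(on_user_post(b"<getty photo bytes>"))    # matched: owe Getty Images $0.05
print(on_user_post(b"<unknown photo bytes>"))  # no match: publish without billing
```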

This deal is a landmark in various ways. It is a more practically useful application for image recognition than ever before, and it brings digital images into some of the same online copyright controversies that have existed for music, video, and other types of content.

Several content recognition platforms exist; examples include Civolution’s Teletrax service for video; Attributor for text; and Audible Magic, Gracenote, and Rovi for music.  Many of these technologies were first designed for catching would-be infringers: blocking uploads and supplying evidence for takedown notices and other legal actions.  Some of them evolved to add rights licensing functionality, so that when they find content on a website, blog, etc., instead of sending a nastygram, copyright owners can offer revenue-sharing or other licensing terms.  The music industry has experimented with audio fingerprinting to automate radio royalty calculations.

The idea of extending a content identification and licensing service to user-posted content is also not new: Google’s Content ID technology for YouTube has led to YouTube becoming a major legal content platform and likely the largest source of ad revenue from music in the world.  But while Content ID is exclusive to YouTube, PicScout ImageIRC and Post Usage Billing are platforms that can be used by any service that publishes digital images.

PicScout has had the basic technology components of this system for a while; SparkRebel merely had to implement some simple code in its photo-upload function to put the pieces together.  So why don’t we see this on Pinterest, not to mention Flickr, Tumblr, and so many others?

The usual reason: money.  Put simply, SparkRebel has more to gain from this arrangement than most other image-sharing sites.  SparkRebel has to pay royalties on many of the images that its users post.  Yet many of those images are of products that SparkRebel sells; therefore if an image is very popular on the site, it will cost SparkRebel more in royalties but likely lead to more commissions on product sales.  Furthermore, a site devoted to fashion is likely to have a much higher percentage of copyrighted images posted to it than, say, Flickr.

Yet where there’s no carrot, there might be a stick.  Getty Images and other image licensors have been known to be at odds with sites like Pinterest over copyright issues.  Pinterest takes a position that is typical of social-media sites: that it is covered (in the United States) by DMCA 512, the law that enables them to avoid liability by responding to takedown notices — and as long as it responds to them expeditiously, it has no further copyright responsibility.

Courts in cases such as UMG v. Veoh and Viacom v. Google (YouTube) have also held that online services have no obligation to use content identification technology to deal with copyright issues proactively.  Yet the media industry is trying to change this; for example, that’s most likely the ultimate goal of Viacom’s pending appeal in the YouTube case.  (That case concerns content that users uploaded before Google put Content ID into place.)

On the other hand, the issue for site operators in cases like this is not just royalty payments; it’s also the cost of implementing the technology that identifies content and acts accordingly.  A photo-sharing site can implement PicScout’s technology easily and (unlike analogous technology for video) with virtually no impact on its server infrastructure or the response time for users.  This combined with the “make it easy to do the right thing” aspect of the scheme may bring the sides closer together after all.

Irdeto Intelligence: Monitoring Video Content Beyond Managed Networks September 11, 2012

Posted by Bill Rosenblatt in Conditional Access, Services, Video.

Last week’s big IBC conference in Amsterdam brought a raft of announcements from video content protection vendors, most of which were typical customer success stories and strategic partnerships.  One product launch announcement, however, was particularly interesting: Irdeto Intelligence, which launched last Friday.

Irdeto Intelligence is the result of the company’s acquisition of BayTSP in October 2011.  The service is an extension of BayTSP’s existing offering and had been under development before the acquisition.  It crawls the Internet looking for infringing content and provides an interactive dashboard that enables customers to see data such as where infringing files were found (by ISP or other service provider) and the volume for each title.

Before the acquisition, BayTSP was one of a handful of independent companies that crawl the Internet looking for infringing content; others include Attributor, Civolution, MarkMonitor, and Peer Media Technologies.  The company wanted to grow its business beyond its core piracy monitoring service.  It found — like other companies of its type — that the mountains of data on so-called piracy that it was collecting had value beyond helping copyright owners generate cease-and-desist or takedown notices.

The big issue with piracy monitoring services is — as with so many other technologies we discuss here — who pays for them.  Hollywood studios (and other types of media businesses) pay the companies mentioned above to find infringing copies of their content.  Now that BayTSP is part of a leading video security business, its customer base extends to managed network operators (cable, satellite, telco-TV) and broadcasters.  As I mentioned last year when the acquisition was announced, a cynic could read the deal as Hollywood’s attempt to push piracy monitoring costs downstream to operators, just as it does the cost of DRM and conditional access.

Irdeto confirmed that it is still offering BayTSP’s existing services to copyright owners.  Still, Irdeto’s acquisition of BayTSP is something of a gamble.  It’s part of a theme that I see growing in importance over the next few years: competition from Internet-based “over the top” (OTT) services is forcing managed network operators to offer “TV Anywhere” type services for viewing their programming over Internet-connected devices such as PCs, tablets, and mobile handsets.

Hollywood has always had a strong relationship with managed network operators on content protection because their economic incentives were aligned: Hollywood wanted to mitigate infringement of its movies and TV shows; operators wanted to mitigate theft of access to their networks.  This has led to set-top boxes that are fortresses of security compared, say, to e-book readers, portable music players, and (especially) PCs.

But once operator-licensed content leaves managed networks to go “over the top,” just how much responsibility do operators have to protect content?  This is a question that will loom larger and larger.

Other providers of conditional access (CA) technology for operators, such as NDS (now Cisco) and Nagra, offer piracy monitoring services.  But those have typically been limited in scope to things like sharing of control words (content keys used in CA systems for the DVB standard), not illegal file-sharing.  In acquiring BayTSP, Irdeto is betting that operators will want to pay more for this type of monitoring.

But why would, say, a cable operator care about content uploaded to file-sharing sites?  Once they have this information, how would they use it if not to generate takedown notices or other legal means of getting infringing content removed?

Irdeto has two answers to this question.  Most important is live event content, particularly sports.  Hollywood has nothing to do with this type of content.  Operators and terrestrial broadcasters suffer when users can view live events on illegal streaming sites with only slight time delays.  Irdeto Intelligence updates its search results at five-minute intervals, so that operators can act to get illegal streams shut down very quickly.

The second reason has to do with the fact that more and more operators are offering so-called triple play services, which include Internet service in addition to TV and telephony.  A triple play provider will be seeking licenses to content from Hollywood, which will be more willing to grant licenses if the provider actively addresses infringing content on its ISP service.

Irdeto says that it has signed two customers for Irdeto Intelligence so far, and that it received strong interest for the service on the show floor at IBC.  It will be interesting to see how other video security vendors react as OTT and TV Anywhere continue to grow.


A Nail in Public Libraries’ Coffins May 20, 2012

Posted by Bill Rosenblatt in Libraries, Publishing, Services, United States.

There it was, on the entire back page of the A section of the New York Times a few days ago, at a likely cost of over US $100,000: a full-page ad from Amazon touting free “lending” of all of the Harry Potter e-books for members of Amazon’s $79/year Amazon Prime program who own Kindle e-readers, starting next month.

I wrote last December about the challenges that public libraries face as e-reading becomes popular and major trade book publishers increase restrictions on public library e-lending of their titles.  Copyright law allows publishers to set license terms for digital content, so instead of giving e-book buyers the standard “copyright bundle” of rights, publishers can dictate whatever terms they want — including refusal to license content at all.  Currently five of the Big 6 trade publishers restrict library e-book lending in some way, including two of them that don’t allow it at all.  Libraries have little leverage against publishers to change this state of affairs.

I also discussed Amazon’s Kindle Owners’ Lending Library (KOLL), which is one of the benefits of Amazon Prime membership (along with free shipping and access to streaming video content), as a step toward the private sector invading the turf of public libraries.  In case anyone doesn’t see this, Amazon makes it quite clear in its press release:

“With the Kindle Owners’ Lending Library, there are no due dates, books can be borrowed as frequently as once a month, and there are no limits on how many people can simultaneously borrow the same title—so readers never have to wait in line for the book they want.”

In other words, Amazon has implemented a model of “one e-book per user at a time, not more than one per month.”  It can configure any such model on its servers and enforce it through its DRM.
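
Amazon hasn't said how its servers implement this, but the lending model as described is simple to sketch.  This is a hypothetical illustration, with a 30-day window standing in for "once a month":

```python
from datetime import datetime, timedelta

# Sketch of the KOLL-style lending rule described above: one borrowed
# book per user at a time, at most one borrow per month, and no limit
# on how many users can borrow the same title simultaneously.
BORROW_WINDOW = timedelta(days=30)

class LendingLibrary:
    def __init__(self):
        self.current_loan = {}  # user -> title currently out
        self.last_borrow = {}   # user -> time of most recent borrow

    def borrow(self, user: str, title: str, now: datetime) -> bool:
        if user in self.current_loan:
            return False  # one book out at a time
        last = self.last_borrow.get(user)
        if last is not None and now - last < BORROW_WINDOW:
            return False  # no more than one borrow per month
        self.current_loan[user] = title
        self.last_borrow[user] = now
        return True       # no per-title cap, so there is always "room"

    def return_book(self, user: str):
        self.current_loan.pop(user, None)

lib = LendingLibrary()
t0 = datetime(2012, 6, 1)
print(lib.borrow("alice", "Harry Potter 1", t0))                       # True
lib.return_book("alice")
print(lib.borrow("alice", "Harry Potter 2", t0 + timedelta(days=5)))   # False: within the month
print(lib.borrow("alice", "Harry Potter 2", t0 + timedelta(days=31)))  # True
```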

KOLL’s selection had been limited to a few thousand titles from smaller publishers.  Recently Amazon has been moving aggressively to increase the KOLL catalog, despite lack of permission from some publishers and authors; it now claims a catalog of over 145,000 titles.  Amazon did make a deal with Pottermore, the organization that distributes J.K. Rowling’s Harry Potter titles in digital form, to include those titles in KOLL.  Pottermore admits that Amazon paid it “a large amount of money” to do so.  Taken together, these steps take KOLL to the next level.

Of course, there are several reasons why the Harry Potter case is exceptional.  The only way to purchase Harry Potter e-books is on the Pottermore site, and Amazon wanted to find some way of luring Potter fans back to its own site; Harry Potter is a series of seven books, and Pottermore believes that allowing users to borrow one title per month will lead to increased sales of other titles; and the Amazon Prime and public library demographics may not overlap much.

But still, this deal is an example of Amazon using content to make its devices and services more valuable.  The company is subsidizing a bestselling author’s work to induce people to buy Kindles and Amazon Prime memberships.  This kind of arrangement is likely to become more commonplace as authors, publishers, and retailers all get more information about the value of private-sector e-lending and learn how to make such deals strategically.

This is nice for already-famous authors, but it doesn’t benefit the multitude of authors who haven’t made it to J.K. Rowling’s rarefied level.  It’s not something that libraries are able to replicate — neither the subsidies nor the full-page ads in the New York Times.

Who’s Subsidizin’ Who? February 9, 2012

Posted by Bill Rosenblatt in Business models, Music, Publishing, Services, Uncategorized, United States.

Barnes & Noble has just announced a deal offering a US $100 Nook e-reader for free with a $240/year subscription to the New York Times on Nook.  Meanwhile, MuveMusic, the bundled-music service of the small US wireless carrier Cricket Wireless, passed the 500,000 subscriber mark last month.  MuveMusic has vaulted past Rdio and MOG to be probably the third largest paid subscription music service in the United States, behind Rhapsody and (probably) Spotify at over a million each.

MuveMusic isn’t quite a subsidized-music deal a la Nokia Ovi Music Unlimited, but it does offer unlimited music downloads bundled with wireless service at a price point that’s lower than the major carriers.  (The roaming charges you’d incur if you leave Cricket’s rather spotty coverage area could add to the cost.)  Cricket is apparently spending a fortune to market MuveMusic, and it’s paying off.

It looks like the business of bundling content with devices is not dead; on the contrary, it’s just beginning.  The fact that both types of bundling models exist — pay for the device, get the content free; pay for the content, get the device free — means that we can expect much experimentation in the months and years ahead.  Although it’s hard to imagine a record label offering a free device with its music, we could follow a model like Airborne Music and think of things like, say, a deal between HTC and UMG offering everything Lady Gaga puts out for $20/year with a free HTC Android phone and/or (HTC-owned) Beats earbuds.  Or how about free Disney content with a purchase of an Apple TV?

As long as someone is paying for the content, any of these models are good for content creators, device makers, and consumers alike.  Bring them on!

Creative Commons for Music: What’s the Point? January 22, 2012

Posted by Bill Rosenblatt in Law, Music, Rights Licensing, Services, Standards.

I recently came across a music startup called Airborne Music, which touts two features: a business model based on “subscribing to an artist” for US $1/month, and music distributed under Creative Commons licenses.  Like other music services that use Creative Commons, Airborne Music appeals primarily to indie artists who are looking to get exposure for their work.  This got me thinking about how — or whether — Creative Commons has any real economic value for creative artists.

I have been fascinated by a dichotomy of indie vs. major-label music: indie musicians value promotion over immediate revenue, while for major-label artists it’s the other way around.  (Same for book authors with respect to the Big 6 trade publishers, photographers with respect to Getty and Corbis, etc.)  Back when the major labels were only allowing digital downloads with DRM — a technology intended to preserve revenue at the expense of promotion — I wondered if those few indie artists who landed major-label deals were getting the optimal promotion-versus-revenue tradeoffs, or if this issue even figured into major-label thinking about licensing terms and rights technologies.

When I looked at Airborne Music, it dawned on me that Creative Commons is interesting for indie artists who want to promote their works while preserving the right (if not the ability) to make money from them later.  The Creative Commons website lists ten existing sites that enable musicians to distribute their music under CC, including big ones like the bulge-bracket-funded startup SoundCloud and the commercially-oriented BandCamp.

This is an eminently practical application of Creative Commons’s motto: “Some rights reserved.”  Many CC-licensing services use the BY-SA (Attribution-Share-Alike) Creative Commons license, which gives you the right to copy and distribute the artist’s music as long as you attribute it to the artist and redistribute (i.e. share) it under the same terms.  That’s exactly what indie artists want: to get their content distributed as widely as possible but to make sure that everyone knows it’s their work.  Some use BY-NC-SA (Attribution-Noncommercial-Share-Alike), which adds the condition that you can’t sell the content, meaning that the artist is preserving her ability to make money from it.

It sounds great in theory.  It’s just too bad that there isn’t a way to make sure that those rights are actually respected.  There is a rights expression language for Creative Commons (CC REL), which makes it possible for content rendering or editing software to read the license (expressed in RDFa) and act accordingly.  As a technology, the REL concept originated with Mark Stefik at Xerox PARC in the mid-1990s; the eminent MIT computer scientist Hal Abelson created CC REL in 2008.  Since then, the Creative Commons organization has maintained something of an arms-length relationship with CC REL: it describes the language and offers links to information about it, but it doesn’t (for example) include CC REL code in the actual licenses it offers.
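
The simplest piece of CC REL for software to consume is the rel="license" link that the Creative Commons license chooser embeds in a page.  As a minimal sketch (standard library only, with made-up sample markup), a reader can at least extract the license URL, even if nothing then acts on the terms:

```python
from html.parser import HTMLParser

# Minimal reader for the rel="license" convention: it collects the
# href of any tag whose rel attribute contains "license".  Reading
# the license URL is the easy part; acting on the terms is not.

class LicenseFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "license" in (a.get("rel") or "").split():
            self.licenses.append(a.get("href"))

# Markup of the kind the CC license chooser generates (sample only).
page = '''<p>This work is licensed under a
<a rel="license" href="http://creativecommons.org/licenses/by-sa/3.0/">
Creative Commons Attribution-ShareAlike 3.0 License</a>.</p>'''

finder = LicenseFinder()
finder.feed(page)
print(finder.licenses)  # ['http://creativecommons.org/licenses/by-sa/3.0/']
```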

More to the point, while there are code libraries for generating CC REL code, I have yet to hear of a working system that actually reads CC REL license terms and acts on them.  (Yes, this would be extraordinarily difficult to achieve with any completeness, e.g., taking Fair Use into account.)

Without a real enforcement mechanism, CC licenses are all little more than labels, like the garment care hieroglyphics mandated by the Federal Trade Commission in the United States.  For example, some BY-SA-licensed music tracks may end up in mashups.  How many of those mashups will attribute the sources’ artists properly?  Not many, I would guess.  Conversely, what really prevents someone who gets music licensed under ND (No Derivative Works) terms from remixing or excerpting in ways that aren’t considered Fair Use?  Are these people really afraid of being sued?  I hardly think so.

This trap door into the legal system, as I have called it, makes Creative Commons licensing of more theoretical than practical interest.  The practical value of CC seems to be concentrated in business-to-business content licensing agreements, where corporations need to take more responsibility for observing licensing terms and CC’s ready-made licenses make it easy for them to do so.  The music site Jamendo is a good example of this: it licenses its members’ music content for commercial sync rights to movie and TV producers while making it free to the public.

Free culture advocates like to tell content creators that they should give up control over their content in the digital age.  As far as I’m concerned, anyone who claims to welcome the end of control and also supports Creative Commons is talking out of both sides of his mouth.  If you use a Creative Commons license, you express a desire for control, even if you don’t actually get very much of it.  What you really get is a badge that describes your intentions — a badge that a large and increasing number of web-savvy people recognize.  Yet as a practical matter, a Creative Commons logo on your site is tantamount to a statement to the average user that the content is free for the taking.

The truth is that sometimes artists benefit most from lack of control over their content, while other times they benefit from more control.  The copyright system is supposed to make sure that the public’s and creators’ benefits from creative works are balanced in order to optimize creative output. Creative Commons purports to provide simple means of redressing what its designers believe is a lack of balance in the current copyright law.  But to be attractive to artists, CC needs to offer them ways to determine their levels of control in ways that the copyright system does not support.

In the end, Creative Commons is a burglar alarm sign on your lawn without the actual alarm system.  You can easily buy fake alarm signs for a few dollars, whereas real alarm systems cost thousands.  It’s the same with digital content.  At least Creative Commons, like almost all of the content licensed with it, is free.

(I should add that I wear the badge myself.  My whitepapers and this blog are licensed under Creative Commons BY-NC-ND (Attribution-Noncommercial-No Derivative Works) terms.  I would at least rather have the copyright-savvy people who read this know my intentions.)

UltraViolet Gets Two Lifelines January 12, 2012

Posted by Bill Rosenblatt in Economics, Fingerprinting, Services, Standards, Video.
add a comment

A panel at this week’s CES show in Las Vegas yielded two pieces of positive news for the DECE/UltraViolet standard, after a launch several months ago with Warner Bros. and its Flixster subsidiary that could charitably be called “premature.”  Of the two news items, one is a nice-to-have, but the other is a game-changer.

Let’s get to the game-changer first: Amazon announced that a major Hollywood studio is licensing its content for UltraViolet distribution through the online retail giant.  The Amazon executive didn’t name the studio, though many assume it’s Warner Bros.  Even if it’s a single studio, the importance of this announcement to the likelihood of UltraViolet’s success in the market cannot be overstated.

Leaving aside UltraViolet’s initial technical glitches and shortage of available titles, the problem with UltraViolet from a market perspective had always been lukewarm interest from online retailers.  As I’ll explain, this hasn’t been a surprise, but Amazon’s new interest in UltraViolet could make all the difference.

UltraViolet is the “brand name” of a standard from a group called the Digital Entertainment Content Ecosystem (DECE), headed by Sony Pictures executive Mitch Singer.  It implements a so-called rights locker for digital movies and other video content.  Users can establish UltraViolet accounts for themselves and family members.  Then they can obtain movies in one format (say, Blu-ray) and be entitled to get them in other formats for other devices (say, Windows Media file downloads for PCs).  They can also stream the content to a web browser anywhere.  The rights locker, managed by Neustar Inc., tracks each user’s purchases.

In other words, UltraViolet promises users format independence and a hedge against format obsolescence, while providing some protection for the content by requiring it to be packaged in several approved DRM and stream encryption schemes.  It includes a few limitations on the number of devices and family members that can be associated with a single UltraViolet account, but in general UltraViolet is designed to make video content more portable and interoperable than, say, DVDs or iTunes downloads.
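For the conceptually minded, the rights-locker idea reduces to a very small data model: a purchase creates an entitlement to a title, and any approved format can then fulfill that entitlement.  Here is a minimal sketch in Python; the class, method, and format names are my own inventions for illustration, not anything from the DECE specification.

```python
# Illustrative sketch of a rights locker: one purchase entitles the account
# to the title in every approved format.  All names here are invented for
# illustration; they do not come from the DECE specification.

APPROVED_FORMATS = {"blu-ray", "windows-media", "mobile-stream", "web-stream"}

class RightsLocker:
    def __init__(self):
        self.accounts = {}  # account_id -> set of purchased title_ids

    def record_purchase(self, account_id, title_id):
        # The format originally purchased doesn't matter: the locker
        # stores an entitlement to the title itself.
        self.accounts.setdefault(account_id, set()).add(title_id)

    def can_deliver(self, account_id, title_id, fmt):
        # Any approved format may fulfill an existing entitlement.
        return (fmt in APPROVED_FORMATS
                and title_id in self.accounts.get(account_id, set()))

locker = RightsLocker()
locker.record_purchase("smith-family", "movie-123")   # bought as Blu-ray
print(locker.can_deliver("smith-family", "movie-123", "web-stream"))  # True
print(locker.can_deliver("smith-family", "movie-999", "blu-ray"))     # False
```

The point of the sketch is that the locker stores entitlements, not files, which is what makes format independence possible.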

Five of the six major Hollywood studios (all but Disney*), plus the “major indie” Lionsgate, are participating in UltraViolet.

One of the design goals of UltraViolet was to ensure that no single retailer could attain a market share large enough to be able to control downstream economics — in other words, to avoid a replay of Apple’s dominance of digital music downloads (and possibly Amazon’s dominance of e-books).  To do this, the DECE studios pushed for ways to thwart consumer lock-in by online retailers that would sell UltraViolet content.

The most important example of this is rights locker portability: users can access their rights lockers from any participating retailer.  UltraViolet retailers must compete with each other through value-added features.

Amazon’s Kindle e-book scheme offers a good illustration of platform lock-in and how it differs from other features that a retailer can build or offer.  If you buy an e-book on Amazon, you can download and read it on a wide variety of devices: not just Kindle e-readers but also iPads, iPhones, Android devices, BlackBerrys, PCs, and Macs — in other words, pretty much everything but other e-reader devices.  You get e-book portability — it will even remember where you last left off if you resume reading an e-book on another device — but you are still tied to Amazon as a retailer.  If you want to read the same e-book on a Nook, for example, you have to buy it separately from Barnes & Noble (and then you can read that e-book on your PC, Mac, iPhone, Android, etc.).

This lock-in gives Amazon power in the market as a retailer; it had 58% market share as of February 2011 (by comparison, Apple has over 70% of the music download market).  UltraViolet wants to make it as difficult as possible for a single digital video retailer to assert such market power.

The downside of that policy has been a lack of enthusiasm among retailers to sell UltraViolet-licensed content — which entails significant development investment and operational expenses.  A good shorthand way to evaluate the potential impact of a standards initiative is to look at the list of participants: what points in the value chain are represented, how many of the top companies in each category, and so on.  In DECE’s case, members have included most of the major movie studios, plenty of consumer device makers, lots of DRM and conditional access technology vendors, and so on, but few big-name retailers… one of which (Best Buy) already had a different system for delivering digital video content via Sonic Solutions.

Warner Bros. tried to jump-start the UltraViolet ecosystem by acquiring Flixster, a movie-oriented social networking startup, adding digital video e-commerce capability, and using it as an UltraViolet retailer for a handful of Warner titles.  This has been little more than a proof-of-concept test, which was plagued by some technical glitches and suboptimal user experience — all of which, according to Singer, have been fixed.

It would be unworkable for Hollywood to pin its hopes for its next big digital format on a small unknown retailer owned by one of the studios.  It has been vitally necessary to attract a big-name retailer to both validate the concept and provide the necessary marketing and infrastructure footprints.  There had been talk of Wal-Mart entering the UltraViolet ecosystem, although it already has its own video delivery scheme through VUDU.  But otherwise, the membership list had been short on major retailers.

Of course, Amazon is the major-est online retailer of them all.  And it so happens that Amazon’s digital video strategy is a good fit for UltraViolet in two ways.  First, Amazon currently runs a streaming service (Amazon Instant Video), whereas UltraViolet is primarily focused on downloads, a/k/a Electronic Sell-Through (EST): the idea of UltraViolet is to buy a download and only then be able to view the content via streaming.

Second, Amazon Instant Video does not look particularly successful.  Of course, Amazon does not reveal user numbers, but it is telling that Amazon included Instant Video Unlimited as a perk in its US $79/year Amazon Prime program… and that when people extol the virtues of Amazon Prime, they tend to emphasize the free overnight shipping but rarely the streaming video.

The biggest winner thus far in the paid online video sweepstakes is Netflix, with about 24 million subscribers as of mid-2011.  Netflix’s subscription-on-demand model is most likely far more popular than Amazon Instant Video’s pay-per-view (except for Amazon Prime members) model.  Thus Amazon may be looking for ways to improve its market position in video without having to hack away at the Netflix streaming juggernaut.

The video download market is in comparative infancy.  It has no runaway market leader a la Netflix, or Apple in music.  If this situation persists long enough, and if Amazon’s trial run with UltraViolet is successful, then other retailers might see UltraViolet as a viable format as well… precisely because it will make them better able to compete with the Online Retailing Gorilla.

Yet the other dimension of UltraViolet that is currently lacking is availability of titles.  And that’s where the other CES announcement comes in.  Samsung announced a “Disc to Digital” feature that it will incorporate into new Blu-ray players later this year.  With this feature, users can slide in their Blu-ray discs or DVDs, and if the content is “eligible,” they can choose to have that content available in their UltraViolet rights lockers for delivery in any UltraViolet-compliant format.

The Disc to Digital feature is a collaboration between Flixster (i.e. Warner Bros.) as online retailer and Rovi as technology supplier.  It works in a manner that is analogous to “scan and match” services for music such as Apple iTunes Match: it scans your DVD or Blu-ray disc, identifies the movie, and if the movie is available in the UltraViolet library of licensed content, gives you an UltraViolet rights locker entry for that movie.  Rovi’s content identification technology and metadata library are undoubtedly at the heart of this scheme.
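In outline, scan and match is just a fingerprint lookup followed by a rights-locker write.  The sketch below is hypothetical: the fingerprint strings stand in for whatever Rovi’s proprietary identification technology actually produces, and all of the names are mine.

```python
# Hypothetical sketch of the "Disc to Digital" flow.  In a real system the
# fingerprint would be derived from the disc's content by identification
# technology such as Rovi's; here it is just an opaque string.

ELIGIBLE_CATALOG = {              # fingerprint -> title licensed for UltraViolet
    "fp-dark-knight": "The Dark Knight",
    "fp-inception": "Inception",
}

def disc_to_digital(fingerprint, locker, account):
    """Match a scanned disc against the eligible catalog; on a match,
    write an entitlement into the user's rights locker."""
    title = ELIGIBLE_CATALOG.get(fingerprint)
    if title is None:
        return None               # content not "eligible"
    locker.setdefault(account, set()).add(title)
    return title

locker = {}
print(disc_to_digital("fp-inception", locker, "smith-family"))  # Inception
print(disc_to_digital("fp-unknown", locker, "smith-family"))    # None
```

Note that the “eligible” check falls out naturally: a disc whose fingerprint isn’t in the licensed catalog simply produces no locker entry.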

There are two catches: first, users will have to pay a “nominal” fee per disc for this service, which is even larger (and as yet unspecified) if they want it in high definition; second, it is limited to “eligible” content, and no one has offered a definition of “eligible” yet (beyond the fact that the content must come from one of the DECE participating studios).  But surely the “eligible” catalog will exceed the current list (19 titles) by orders of magnitude, or the service will not be worth launching.

Nevertheless, these developments are very positive news for DECE/UltraViolet after months of embarrassments and bad press.  DECE still has lots of work to do to make UltraViolet successful enough to be the major studios’ designated successor to Blu-ray, but at last it’s on track.

*Yes, I’m aware of the irony of using a tag line from “Who Wants to Be a Millionaire” in the title of this article: Disney owns the home entertainment distribution rights to that hit TV game show.

Oblivion, But Not Beyond January 2, 2012

Posted by Bill Rosenblatt in Music, Services.
19 comments

Last week, the music startup Beyond Oblivion ceased operations.  The shutdown happened after three years of development and shortly before the company’s service was to go into public beta.  The news was leaked to Engadget last Thursday and became “official” when it was reported in the Financial Times on Saturday.

First, the disclosure: I consulted to Beyond Oblivion throughout much of the company’s existence.  I’m proud of what we did, privileged to have worked with its top-notch management team, and sad about what happened last week.

I’ll leave it to others to chew over the amount of cash that the company burned through or why the company shut down at this particular time.  Instead I want to talk about the company’s vision and business model, which — if it had seen the commercial light of day — did in fact have the potential to change the online music industry for the better.  Although Beyond Oblivion did get some press coverage, its unique model was never fully explained.

At a basic level, Beyond’s model was a hybrid between download services like iTunes and streaming services like Spotify.  It was based on the concepts of licensed devices and play count reporting.  Users could buy new Beyond-licensed devices or purchase licenses for their existing PCs or other devices.  They could download tracks from the Beyond catalog to their licensed devices (a la iTunes) and listen to them as often as they wanted.  The Beyond client software would securely count plays and report them for royalty purposes (a la Spotify).
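The accounting core of that hybrid model is easy to sketch.  What follows is entirely my own reconstruction for illustration, not Beyond’s implementation; the per-play royalty figure is assumed, and I count in integer thousandths of a cent to keep the arithmetic exact.

```python
# Illustrative sketch of per-play royalty accounting: the client counts
# plays per track and periodically reports them; each play accrues a small
# "micro-royalty" to the track's rights holder.  This is my own
# reconstruction, not Beyond Oblivion's actual code, and the royalty
# figure is assumed.

from collections import Counter

MICRO_ROYALTY_MILLICENTS = 100  # 0.1 cent per play; an assumed figure

class PlayReporter:
    def __init__(self):
        self.pending = Counter()  # track_id -> plays since last report

    def record_play(self, track_id):
        self.pending[track_id] += 1

    def report(self):
        # Royalties owed per track (in thousandths of a cent), then reset.
        owed = {t: n * MICRO_ROYALTY_MILLICENTS for t, n in self.pending.items()}
        self.pending.clear()
        return owed

r = PlayReporter()
for _ in range(3):
    r.record_play("track-42")
r.record_play("track-7")
print(r.report())  # {'track-42': 300, 'track-7': 100}
```

The same counter would run against a user’s own scan-and-matched files, which is how even illegally obtained copies could generate royalties.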

Users could also add their own music files to their Beyond libraries using a process that is now called “scan and match”; Beyond would report plays of those files too, even if the original files were obtained illegally.  We had also designed a way for users to add music to Beyond’s music catalog (we called it “catalog crowdsourcing”), with permission of rights holders, which could have resulted in the world’s largest legal online music catalog.

There would be no limit to the number of tracks a user could download to a licensed device.  Furthermore, Beyond users could freely share their files with other Beyond users; a Beyond file could play on any Beyond-licensed device (within a given country).

Beyond Oblivion had signed deals with two major labels, with others in the works, and had over seven million tracks in its catalog at last count.

Now here’s the real differentiator: users would pay neither monthly subscription fees nor per-download charges for the service.  Beyond’s business model was to charge device makers or network operators the license fees, with the expectation that they would subsidize these fees or perhaps bundle part of them into users’ monthly network charges.  If users wanted to add Beyond to their own devices, they would pay a one-time charge, expected to be well under US $100, for unlimited downloads for as long as they owned the device.

Whenever anyone knowledgeable about digital music asked for a quick explanation of Beyond’s model, I would answer, “It’s like Comes With Music on steroids.”  (Comes With Music was Nokia’s attempt to create a subsidized music model for a few of its own devices.)

The problem with device-maker-subsidized models is that they are limited to new devices from that maker.  Instead, Beyond’s intent was to build a large, global ecosystem of subsidized music that would work on a wide range of devices and networks.  It would be an intermediary between device makers and network operators (license fee payers) on the one hand and music copyright owners (royalty recipients) on the other.  Beyond’s pitch to the former was simple: here is a chance to eat into Apple’s market share for digital music by offering a service to users that “feels like free” but is completely legal.

The Beyond concept was based on a fundamental insight by founder Adam Kidron, a serial entrepreneur, former pop record producer, inveterate frequent flier, and spreadsheet Jedi Master.  In fact, his business model began on a spreadsheet.  He figured out that if he could count every play of a digital music file and pay a small royalty to the copyright owners for each one, he could make a profitable business by charging device licensing fees — essentially trading off device license fees against those “micro-royalties” — and still offer legal music for much less money than anyone else.  His model took into account factors such as the expected ownership lifespans of certain device types such as PCs and mobile handsets.
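Here is a toy version of that spreadsheet logic: the one-time device license fee has to cover the micro-royalties expected over the device’s ownership lifespan, plus a margin.  Every number below is invented for illustration.

```python
# Toy version of the trade-off Kidron modeled: a one-time device license
# fee must cover the micro-royalties expected over the device's ownership
# lifespan.  Every number here is invented for illustration.

def min_license_fee(lifespan_months, plays_per_month,
                    royalty_per_play, margin=0.20):
    """Smallest one-time fee that covers expected lifetime royalties
    plus a profit margin."""
    lifetime_royalties = lifespan_months * plays_per_month * royalty_per_play
    return lifetime_royalties * (1 + margin)

# A PC owned ~4 years vs. a handset owned ~2 years, at $0.001 per play:
pc_fee = min_license_fee(48, 500, 0.001)
phone_fee = min_license_fee(24, 300, 0.001)
print(round(pc_fee, 2), round(phone_fee, 2))  # 28.8 8.64
```

Even with these made-up numbers, the fees land comfortably “well under US $100,” which is the shape of the argument: expected lifetime plays, not per-track prices, drive the economics.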

Kidron determined that technology companies were the only remaining entities in the digital music value chain where revenue could come from: users are being led toward expecting to get music for free, and ad revenue has been disappointing.  Thus, we tried to define a model and features with enough appeal to tech companies to get them to pay the licensing fees.

But Beyond would only have had industry-wide impact if it could sign up a critical mass of network operators and device makers at launch — a process that would require a lot of salesmanship, faith-building, and delicate discussions about exclusivity versus the power of the ecosystem.  When Kidron first approached me three years ago about helping the company and explained the model, my initial thought was, “This might actually work if someone threw enough money at it.”  Then he proceeded to explain the funding plan.  I was impressed; he had thought it through.  He didn’t just want to launch yet another music service, he wanted to move the music industry “beyond oblivion.”

The company did raise large sums of money in order to seed the entire ecosystem.  It was in advanced talks with companies worldwide.  A few name-brand device makers were considering putting out new Beyond-enabled models of handsets, tablets, and other devices.  Wireless carriers in several geographies were considering launching services for Beyond-enabled devices.  Major record labels signed licensing deals.  But even with cash in hand, the negotiations among the various constituencies proved to be a long, hard slog.

Yet Beyond’s impact on the music industry was potentially much wider than mere profitability for one business.  To understand this, it’s useful to look at its economic model in light of various recent governmental attempts to get network service providers to assume more responsibility for curbing copyright infringement.  These have boiled down to operators paying for three different things: technology to monitor activity for possible infringement; per-user levies for use of content; and “piracy fees” to cover copyright enforcement costs.

All of these models have serious drawbacks.  Levies are inaccurate in paying copyright owners according to actual use of their content and unfair in that they charge all users the same amount regardless of their use.  If network operators paid for their own piracy monitoring, they would do it in the same way that device makers have implemented DRM: at the lowest possible cost, with little regard for efficacy, and in ways that benefit them instead of copyright owners, such as customer lock-in.  And “piracy fees” are the most inequitable idea of all.

A market-based solution that enables network operators to offer functionally rich access to legal content in a way that feels like free seems like a much better approach — a carrot rather than a stick.  It can entice people away from copyright infringement while compensating rights holders fairly and accurately.  Given the choice, a network operator ought to want to compete on offering the most attractive music service rather than be forced to pay a “copyright tax” as a cost of doing business.  (By the way, this is not my retrospective view; it was all part of the original thinking.)

When Beyond was starting development, users had strong preferences for file ownership over streaming.  We started with a download model and figured out a way to reconcile file ownership with usage reporting.  We also designed a mechanism for determining (with reasonable accuracy) when a device changed owners, so that it would not be possible to sell a Beyond-licensed device on eBay (for example) and have the second owner inherit the music rights along with the device; “lifetime of device ownership” was key to making the numbers work.

Since then, streaming has become more popular.  Yet on-demand streaming services like Spotify and Rhapsody have business models that were originally based on monthly subscription fees; they face the choice of living with a “freemium” model in which only a fraction of users pay subscription fees (Spotify, Rdio, MOG, Deezer) or persisting with an all-pay model against the rising tide of freemium (Rhapsody, Slacker Premium, Sony Music Unlimited).  Either choice may be hard for those services to sustain financially over time.

In contrast, Beyond was designed to be a scalably profitable subsidized pay-per-use model from the beginning.  As such, it could have had better long-term prospects than those other services.

However, three years is a very long time to be developing any kind of online business in today’s world of iterative development-and-release a la Google.  Many of Beyond’s innovative features started making their way into the market through other services during the past three years.  For example:

  • Catch Media launched a service in the UK in 2010 that counts and monetizes users’ plays of MP3 files regardless of their origins, although the service costs users £30 per year.
  • Spotify, Deezer, and Rhapsody have gotten a few bundling deals with wireless carriers, though none of these are full subsidies.
  • Spotify also recently introduced an API for app developers, another feature that Beyond included from the beginning.
  • The small US mobile carrier Cricket Wireless launched MuveMusic a year ago; it is an unlimited-download package bundled with Cricket’s wireless service.  It has attracted over a quarter million users, although the service is limited to five handset models (mostly Android-based).
  • Several services have introduced scan and match features that download files from servers to users’ devices.  Apple and Catch Media offer this, while others offer it through streaming instead of downloading.

Yet only Beyond put all these features — and more — into a single offering.  Apart from the business model and concepts, I can attest that its user experience was terrific.  Its interface, responsiveness and sound quality on mobile devices all beat Spotify.  It’s a real shame that this highly promising service did not get a chance to make the impact on the music industry that it could have.

European High Court Says No to ISP-Level Copyright Filtering November 28, 2011

Posted by Bill Rosenblatt in Europe, Fingerprinting, Law, Music, Services.
add a comment

Last Thursday the European Court of Justice (ECJ) ruled that ISPs cannot be held responsible for filtering traffic on their networks in order to catch copyright infringements.  This ruling was the final step in the journey of the litigation between the Belgian music rights collecting society SABAM and the ISP Scarlet, but it is a landmark decision for all of Europe.

This ruling overturned the Belgian Court of First Instance, which four years ago required Scarlet to install filtering technology such as acoustic fingerprinting to monitor Internet traffic and block uploads of copyrighted material to the network.  Scarlet appealed this decision to the Brussels Court of Appeals, which sought guidance from the ECJ.

The ECJ’s statement affirmed copyright holders’ rights to seek injunctions from ISPs like Scarlet to prevent copyright infringement, but it said that the Belgian court’s injunction requiring ISP-level copyright filtering went too far.  It cited Article 3 of European Union Directive 2004/48, which states that “measures, procedures and remedies [for enforcing intellectual property rights] shall be fair and equitable, shall not be unnecessarily complicated or costly and not impose unreasonable time-limits or unwarranted delays.”  The ECJ decided that the mechanism defined in the appeals court’s ruling did not meet these criteria.

The real issues here are the requirement that the ISP bear the cost and complexity of running the filtering technology, and the fact that running it would slow down the network for all ISP users.  It’s easy to see how this would not meet the requirements in the above EU Directive.

This decision has direct applicability in the European Union, but its implications could reach further afield.  For example, the issue currently being argued between Viacom and Google at the appeals court level in the United States boils down to the same thing: who bears the cost and responsibility of policing copyrights on the Internet?

Of course, EU law doesn’t apply in the United States.  In the Viacom/Google litigation, Google is relying on the “notice and takedown” portion of the Digital Millennium Copyright Act (DMCA), a/k/a section 512 of the US copyright law. This says that if a copyright holder (e.g., Viacom) sees one of its works online without its authorization, it can issue a notice to the network service provider to take the work down, and if it does so, it won’t be liable for infringement.  Google’s argument is that it follows section 512 assiduously and therefore should not be liable.

Viacom’s task in this litigation is to convince the court that the DMCA doesn’t go far enough.  More specifically, its argument is that the legislative intent behind the DMCA is not served well enough by the notice-and-takedown provisions, that network service providers should be required to take more proactive responsibility for policing copyrights on their services instead of requiring copyright owners to play the Whack-a-Mole game of notice and takedown.

The ECJ’s decision in SABAM v. Scarlet has no precedential weight in Viacom v. Google.  But it may help get the Second Circuit Court of Appeals to focus on what Jonathan Zittrain of Harvard Law School has called the “gravamen” (which is legalese for “MacGuffin“) in this case: who should be paying for protecting copyrights.
