Withholding Ads from Illegal Download Sites
January 7, 2013. Posted by Bill Rosenblatt in Standards.
Over the past few months, the conversation about online infringement has shifted from topics like graduated response and DMCA-related litigation to cutting off ad revenue for sites that provide illegal downloads. This issue gained some importance during the run-up to SOPA and PIPA in 2011. David Lowery of The Trichordist gave it new visibility last year by initiating a stream of screenshots showing major consumer brands that advertise on sites like FilesTube, IsoHunt, and MP3Skull.
Last week, the issue took on a new level of importance when a report from the University of Southern California’s Annenberg Innovation Lab confirmed that major online ad networks, including those of Google and Yahoo, routinely place ads on pirate sites. The Innovation Lab has come up with an automated way of tracking the ad networks that place ads on sites that (according to the Google Transparency Report) attract the most DMCA takedown notices. The study ranks the top ten ad networks that serve ads on pirate sites and will be updated monthly.
The idea is to shame consumer brands by showing them that their ads are appearing on pirate sites amid ads for pornography, mail-order brides, etc. The Annenberg study has already led to at least one major consumer brand insisting that its ads be pulled from pirate sites.
The focus on online ad networks is — as Lowery admitted at our Copyright and Technology NYC 2012 conference last month — not an ideal solution to the problem of online infringement but rather a “low hanging fruit” approach that appeals to real business imperatives without requiring lawyers or lobbyists. It’s an acknowledgement that legislation to address online infringement is not going to be achievable, at least in the near future, in the aftermath of the defeats of SOPA and PIPA in early 2012.
Yet the tactic’s effectiveness is limited by the quality of information about ad buys that flows through ad networks. For example, it’s sometimes not possible for an advertiser to know where its ads are being placed because, among other reasons, ad networks resell inventory to each other.
Let’s assume that most consumer brands would rather not have their ads placed on pirate sites. Then two things are required to solve this problem. One is standards for information about ad buys — advertiser identity, inventory, ad network, type of placement, and so on — and protocols for communicating that information up the chain of intermediaries from the website on which the ad was placed all the way up to the advertiser. The other is agreement throughout the online ad industry to use such standards in communication and reporting.
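To make the requirement concrete, here is a minimal sketch in Python of what a standardized ad-buy record with resale-chain provenance might look like. Every class and field name here is hypothetical, since no such standard exists yet:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdPlacement:
    """One hop in the chain from advertiser down to the site serving the ad.
    All field names are hypothetical; no such standard exists today."""
    placement_id: str                          # globally unique placement identifier
    advertiser: str                            # brand whose ad is being served
    ad_network: str                            # intermediary selling the inventory
    site: Optional[str] = None                 # final site, known only at the last hop
    resold_to: Optional["AdPlacement"] = None  # next hop when inventory is resold

def final_site(placement: AdPlacement) -> Optional[str]:
    """Walk the resale chain to find where the ad actually appeared."""
    while placement.resold_to is not None:
        placement = placement.resold_to
    return placement.site

# A buy resold through two networks before landing on a (fictitious) site:
buy = AdPlacement("pl-001", "BigBrand", "NetworkA",
                  resold_to=AdPlacement("pl-001", "BigBrand", "NetworkB",
                                        site="example-download-site.com"))
print(final_site(buy))  # example-download-site.com
```

With records like these flowing up the chain, an advertiser or its agency could trace any placement back to the final site, no matter how many times the inventory was resold.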
If you follow efforts to develop standard content identifiers and online rights registries, you should see the analogy here.
The good news is that the Interactive Advertising Bureau (IAB) has been working on standards that look like they could apply here with some tweaks. The IAB launched the eBusiness Interactive Standards initiative in 2008 with the goal of increasing efficiency and reducing errors in online ad workflows. The eBusiness Interactive Standards spec defines XML structures for communicating information among advertisers, agencies, and publishers (websites) from RFPs (requests for proposal) through to IOs (insertion orders).
Now the bad news. The IAB standards would need some modification to cover the requirements here: they don’t appear to work through multiple levels of ad networks, they don’t include globally unique identifiers for ad placements (though this would be simple to add), and they aren’t designed to cover performance reporting. Furthermore, progress in getting the standard to market appears to be slow: it entered a beta phase with limited customers in 2011, and no progress has been apparent since then.
Yet even if the right standards were adopted, the advertising industry would still need to commit to the kinds of transparency that would be necessary to ensure, if an advertiser wishes, that its ads don’t appear on pirate sites. For one thing, advertisers often buy “blind” a/k/a run-of-network inventory in order to get discount pricing, and there is no reliable way to ensure that such buys don’t inadvertently end up on pirate sites. A related problem is where to draw the line between obvious pirate sites like the ones mentioned above and those that happen to occasionally host unauthorized material.
Regulatory initiatives seem unlikely here. Indeed, the Obama Administration and Congress in 2011 asked the ad industry to adopt a “pledge” against advertising on pirate sites; the industry’s two major U.S. trade associations responded last May with a statement full of equivocation and wiggle-room.
Ultimately, the pressure would have to come from advertisers themselves. They could demand, for example, that even blind buys not appear on the 200-250 sites that show more than 10,000 takedown notices a month in the Google Transparency Report (mainstream sites like Facebook, Tumblr, DailyMotion, Scribd, and SoundCloud fall well below this threshold). A good set of technical standards for tracking and reporting would help convince them that they can demand to withhold their ads from these sites with a reasonable chance of success.
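As a sketch of how mechanically simple such a rule would be to apply, here is the threshold test in Python; the domain names and takedown counts below are invented for illustration:

```python
# Hypothetical monthly takedown counts per domain, in the spirit of the
# Google Transparency Report (all numbers invented for illustration):
takedowns = {
    "filestube.example": 45000,
    "mp3skull.example": 22000,
    "soundcloud.com": 1200,
    "scribd.com": 800,
}

THRESHOLD = 10_000  # the per-month cutoff suggested above

# Sites over the threshold would be excluded even from blind buys:
blocklist = sorted(d for d, n in takedowns.items() if n > THRESHOLD)
print(blocklist)  # ['filestube.example', 'mp3skull.example']
```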
As I have worked with video service providers that are trying to upgrade their offerings to include online and mobile services, I’ve seen bewilderment about the maze of codecs, streaming protocols, and player apps as well as content protection technologies that those service providers need to understand. Yet a development that took place earlier this month should help ease some of the complexity.
Microsoft’s PlayReady is becoming a popular choice for content protection. Dozens of service providers use it, including BSkyB, Canal+, HBO, Hulu, MTV, Netflix, and many ISPs, pay TV operators, and wireless carriers. PlayReady handles both downloads and streaming, and it is currently the only commercial DRM technology certified for use with UltraViolet (though that should change soon). Microsoft has developed a healthy ecosystem of vendors that supply things like player apps for different platforms, “hardening” of client implementations to ensure robustness, server-side integration services, and end-to-end services. And after years of putting in very little effort on marketing, Microsoft has finally upgraded its PlayReady website with information to make it easier to understand how to use and license the technology.
Streaming protocols are still a bit of an issue, though. Several vendors have created so-called adaptive streaming protocols, which monitor the user’s throughput and vary the bit rate of the content to ensure optimal quality without interruptions. Apple has HTTP Live Streaming (HLS), Microsoft has Smooth Streaming, Adobe has HTTP Dynamic Streaming (HDS), and Google has technology it acquired from Widevine. Yet operators have been more interested in Dynamic Adaptive Streaming over HTTP (DASH), an emerging vendor-independent MPEG standard. The hope with MPEG-DASH is that operators can use the same protocol to stream to a wide variety of client devices, thereby making deployment of TV Everywhere-type services cheaper and easier.
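The core idea shared by all of these protocols (monitor throughput, then pick the highest rendition the connection can sustain) can be sketched in a few lines of Python; the bitrate ladder and safety margin below are illustrative, not taken from any spec:

```python
def pick_bitrate(throughput_kbps: float, ladder: list, safety: float = 0.8) -> int:
    """Choose the highest bitrate (kbps) that fits within a safety margin
    of the measured throughput; fall back to the lowest rendition."""
    affordable = [b for b in ladder if b <= throughput_kbps * safety]
    return max(affordable) if affordable else min(ladder)

ladder = [300, 750, 1500, 3000, 6000]  # an illustrative bitrate ladder (kbps)
print(pick_bitrate(4000, ladder))  # 3000: 6000 exceeds 4000 * 0.8 = 3200
print(pick_bitrate(200, ladder))   # 300: below everything, take the floor
```

The client re-runs this decision for every segment it fetches, which is what lets playback adapt smoothly as network conditions change.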
MPEG-DASH took a significant step towards real-world viability over the last few months with the establishment of the DASH Industry Forum, a trade association that promotes market adoption of the standard. Microsoft and Adobe are members, though not Apple or Google, indicating that at least some of the vendors of proprietary adaptive streaming will embrace the standard. The membership also includes a healthy critical mass of vendors in the video content protection space: Adobe, BuyDRM, castLabs, Irdeto, Nagra, and Verimatrix — plus Cisco, owner of NDS, and Motorola, owner of SecureMedia.
Adaptive streaming protocols need to be integrated with content protection schemes. PlayReady was originally designed to work with Smooth Streaming. It has also been integrated with HLS, which is probably the most popular of the proprietary adaptive streaming schemes. Integration of PlayReady with MPEG-DASH is likely to be viewed as a safe choice, in line with the way the industry is going. That solution came into view this month as BuyDRM and Fraunhofer IIS announced an integration of MPEG-DASH with PlayReady for the HBO GO service in Europe. HBO GO is HBO’s “over the top” service for subscribers.
For the HBO GO demo, BuyDRM implemented a version of its PlayReady client that uses Fraunhofer’s AAC 5.1 surround-sound codec, which ships with devices that run Android 4.1 Jelly Bean. The integration is being showcased with HD quality video on HBO’s “Boardwalk Empire” series. Users can connect Android 4.1 devices with the proper outputs — even handsets — to home-theater audio playback systems to get an experience equivalent to playing a Blu-ray disc. The current implementation supports live broadcasting, with VOD support on the way shortly.
PlayReady integrated with MPEG-DASH is likely to be a popular choice for a variety of video service providers, ranging from traditional pay TV operators to over-the-top services like HBO Go. BuyDRM and Fraunhofer’s deployment is an important step towards that choice becoming widely feasible.
UK IPO Publishes Digital Copyright Hub Report
August 13, 2012. Posted by Bill Rosenblatt in Rights Licensing, Standards, UK.
Last month, the UK Intellectual Property Office published a report called Copyright Works: streamlining copyright licensing for the digital age. This is the second report in Richard Hooper CBE and Dr. Ros Lynch’s engagement with the UK IPO. Hooper’s background includes positions at the top of the UK’s media and telecommunications industries; Lynch is a senior civil servant in the UK’s Department for Business, Innovation and Skills.
The second Hooper Report follows on the heels of several important developments in the UK regarding copyright in the digital age, most recently including the Digital Economy Act and the Hargreaves Review. Having found (in the first Hooper Report) that the legal content marketplace is being held back by several obstacles, such as licensing difficulties, lack of standards, and deficiencies in both content and metadata, the second Hooper Report makes recommendations on how to solve the problems.
Unfortunately the recommendations in the second Hooper Report don’t go far enough. Hooper and Lynch did a lot of research, talked to lots of people, and synthesized lots of information. Most of their input appears to have come from established industry sources, including the major licensing entities in the UK, such as PRS and PPL (UK analogs to ASCAP and SoundExchange in the US); major media companies; trade associations; and standards initiatives engendered by the EU Digital Agenda such as the Global Repertoire Database (GRD) and Linked Content Coalition (LCC). They also researched important initiatives outside of the UK, such as the Copyright Clearance Center’s RightsLink service in the US.
Whereas the first Hooper Report established that major problems exist, this new report is best appreciated as a summary of the various initiatives being planned to solve pieces of them — such as the GRD and LCC. Hooper and Lynch offer cogent explanations of problems to be solved: difficulty of licensing content into legitimate services, lack of complete and consistent information about content and rights, lack of standards for rights information and communication among relevant entities, resistance of collective licensing schemes to new business models, and the relative lack of content available for legal use through various channels.
The authors appear to understand that the various efforts being proposed are not going to solve all the problems by themselves. On the other hand, they also understand problems of “not invented here,” and they take the pragmatic view that the best way forward is to work with existing standards and integrate them together rather than try to come up with some kind of overall solution that may not be practicable.
So far, so good; but that’s essentially where it all stops. After explaining the problems and summarizing existing initiatives, the report tantalizingly lays out a vision for a Copyright Hub that will bring everything together. It recommends government seed funding as a way of both kick-starting the Copyright Hub and ensuring that people work together to build it.
Unfortunately, the vision for the Copyright Hub turns out to be an inch deep. It also lacks explanations of how, or if, all these initiatives — ranging from PRS and PPL’s efforts to offer “one-stop” music licenses all the way up through the technically sophisticated GRD — could fit together or even how they map to the elements in the proposed Copyright Hub. The LCC project is looking at technical aspects of the integration issue, but it is conceived as an enabler of standards, not as a marketplace solution. It’s possible that such a solution is envisioned as a next step in the process. But the report betrays evidence of a lack of technical understanding that would have benefited both the analysis and the envisioning of solutions.
For example: The report has a section on digital images, which discusses the problem that many images are stripped of their rights metadata as part of normal publishing processes. It discusses the possibility of using Internet-standard Uniform Resource Identifiers (URIs) to identify images and the work that entities such as Getty Images and the PLUS coalition are doing to create image registries and automate rights licensing. But when put in this context, the solution to the metadata stripping problem is obvious: watermarking, the standard way of ensuring that data travels with content. The problem can be solved with a standard watermarking scheme whose payload includes a serial identifier that can be used to reference a URI in a registry. This is what the RIAA proposed for music in the U.S. in 2009, albeit to precious little fanfare; but Hooper and his people didn’t see it. (They use the word “embed” without appearing to understand its meaning.) There are other examples like this.
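To illustrate the fix being proposed, here is a toy Python sketch of a watermark payload carrying a serial identifier that resolves to a rights-metadata URI in a registry. The registry, serial format, and integrity check are all hypothetical; real watermarking would embed these bytes imperceptibly in the image itself:

```python
import hashlib
from typing import Optional

# A hypothetical registry mapping watermark serial IDs to rights-metadata URIs:
REGISTRY = {
    "IMG-0001": "https://registry.example/images/IMG-0001",
}

def make_payload(serial: str) -> bytes:
    """A toy watermark payload: the serial plus a short integrity check."""
    check = hashlib.sha256(serial.encode()).hexdigest()[:8].encode()
    return serial.encode() + b"|" + check

def resolve(payload: bytes) -> Optional[str]:
    """Recover the serial from an extracted payload and look up its rights URI."""
    serial, check = payload.rsplit(b"|", 1)
    if hashlib.sha256(serial).hexdigest()[:8].encode() != check:
        return None  # payload corrupted or tampered with
    return REGISTRY.get(serial.decode())

print(resolve(make_payload("IMG-0001")))  # https://registry.example/images/IMG-0001
```

Because the payload survives inside the pixels rather than in a metadata header, stripping the header no longer severs the link between the image and its rights information.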
The report mentions “long tail” licensing — not as in long tail content, but as in long tail uses of content rights. The work that needs to be done should, the report rightly says, address the large and growing number of low-value licensing transactions rather than, say, Universal Music Group licensing to Spotify or Deezer (the kind of deal that will always get done the old-fashioned way). Unfortunately, the authors don’t seem to have talked to many people who try to get such licensing. They should, for example, have sought out startup companies that have to navigate the impenetrable maze of direct licensing deals with rights holders, face the rigidity of collecting societies that won’t accommodate their innovative business models, and make separate deals in 27 member states to get a pan-European service launched.
Overall, the second Hooper Report reads like a particularly well-informed version of the typical industry response to a government body’s investigation into industry practices: look at all the steps we’re already taking to solve this problem; leave us alone.
As a result, the new Hooper Report is a solid foundation on which to build solutions, but it doesn’t provide enough forward direction. It’s all very well to talk about respecting the growing body of valuable work that different organizations are doing to solve online content licensing problems, avoiding “not invented here,” promoting open standards, and so on. But the work that must be done will necessarily include tasks that are tedious and contentious, aspects that the Hooper Report glosses over.
Metadata schemes will have to be rationalized against one another; gaps and incompatibilities will have to be identified and eliminated. Rights holders whose metadata is incomplete or poor quality will have to be identified and given sufficient incentive to improve. Well-intentioned standards initiatives with overlapping or conflicting goals will have to change. Digital holdouts will have to be convinced to participate. And the many organizations with vested interests in maintaining the status quo will have to be called out as part of the problem rather than the solution. This may be ugly work, but it will have to get done.
I am working with the International Digital Publishing Forum (IDPF), helping them define a new type of content protection standard that may be incorporated into the upcoming Version 3 of IDPF’s EPUB standard for e-books. We’re calling this new standard EPUB Lightweight Content Protection (EPUB LCP).
EPUB LCP is currently in a draft requirements stage. The draft requirements, along with some explanatory information, are publicly available; IDPF is requesting comments on them until June 8. I will be giving a talk about EPUB LCP, and the state of content protection for e-books in general, at Book Expo America in NYC next week, during IDPF’s Digital Book Program on Tuesday June 5.
Now let’s get the disclaimer out of the way: the remainder of this article contains my own views, not necessarily those of IDPF, its management, or its board members. I’m a consultant to IDPF; any decisions made about EPUB LCP are ultimately IDPF’s. The requirements document mentioned above was written by me but edited by IDPF management to suit its own needs.
IDPF is defining a new standard for what amounts to a simple, lightweight, looser DRM. EPUB is widely used in the e-book industry (by just about everyone except Amazon), but lack of an interoperable DRM standard has caused fragmentation that has hampered its success in the market. Frankly, IDPF blew it on this years ago (before its current management came in). They bowed to pressures from online retailers and reading device makers not to make EPUB compliance contingent on adopting a standard DRM, and they considered DRM (understandably) not to be “low hanging fruit.”
IDPF first announced this initiative on May 18; it got press coverage in online publications such as Ars Technica, PaidContent.org, and others. The bulk of the comments were generally “DRM sucks no matter what you call it” or “Why bother with this at all, it won’t help prevent any infringement.” A small number of commenters said something on the order of “If there has to be DRM, this isn’t a bad alternative.” One very knowledgeable commenter on Ars Technica first judged the scheme to be pointless because it’s cryptographically weak, then came around to understanding what we’re trying to do and even offered some beneficial insights.
The draft requirements document provides the basic information about the design; my main purpose here is to focus more on the circumstances and motivation behind the initial design choices.
Let’s start at a high level, with the overall e-book market. (Those of you who read my article about this on PaidContent.org a few months ago can skip this and the next five paragraphs.) Right now it’s at a tipping point between two outcomes that are both undesirable for the publishing industry. The key figure to watch is Amazon’s market share, which is currently in the neighborhood of 60%; Barnes and Noble’s Nook is in second place with share somewhere in the 25-30% range.
One outcome is Amazon increasing its market share and entering monopoly territory (according to the benchmark of 70% market share often used in US law). If that happens, Amazon can do to the publishing industry what Apple did to the music download market: dominate the market so much that it can both dictate economic terms and lock customers into its own ecosystem of devices, software, and services.
The other outcome is that Amazon’s market share falls, say to 50% or lower, due to competition. In that case, the market fragments even further, putting a damper on overall growth in e-reading. Also not good for publishers.
Let’s look at what happens to DRM in each of these cases. In the first (Amazon monopoly) case, Amazon may drop DRM just as Apple did for music — but it will be too late: Amazon will have achieved lock-in and can preserve it in other ways, such as by making it generally inconvenient for users to use other devices or software to read Amazon e-books. Other e-book retailers would then drop DRM as well, but few would care.
In the second case, everyone will probably keep their DRMs in order to keep users from straying to competitors (though some individual publishers will opt out of it). In other words, if the DRM status quo remains, the likely alternatives are DRM-free monopoly or DRM and fragmentation.
If IDPF had included an interoperable DRM standard back in 2007 when both EPUB and the Kindle launched, e-books might well be more portable among devices and reading software than they are now. Yet the most desirable outcome for the reading public is 100% interoperability, and we know from the history of technology markets (with the admittedly major exception of HTML) that this is a chimera. (Again, I explained this in PaidContent.org a few months ago.)
To many people, the way out of this dilemma is obvious: everyone should get rid of DRM now. That certainly would be good for consumers. But most publishers — who control the terms by which e-books are licensed to retailers — don’t want to do this; neither do many authors, who own copyrights in their books.
E-book retailers and device vendors can get lock-in benefits from DRM. As for whether DRM does anything to benefit rights holders by improving consumers’ copyright compliance or reducing infringement, that’s a real question. Notwithstanding the opinions of the many self-styled experts in user behavior analysis and infringement data collection among the techblogorati and commentariat, the answer is unknown and possibly unknowable. Publishers are motivated to keep DRM if for no other reason than fear that once it goes away, they can never bring it back. Moreover, certain segments of the publishing industry (such as higher education) want DRM that’s even stronger than the current major schemes.
The fact is, none of the major DRMs in today’s e-book market are very sophisticated — at least not compared to content protection technologies used for video content. The economics of the e-book industry make this impossible: the publishers and authors who want DRM don’t pay for it, resulting in cost and complexity constraints. DRM helps retailers insofar as it promotes lock-in, but it doesn’t help them protect their overall services. In contrast, content protection helps pay TV operators (for example) protect their services, which they want protected just as much as Hollywood doesn’t want its content stolen; so they’re willing to pay for more sophisticated content protection.
The two leading e-book DRMs right now are Amazon’s Mobipocket DRM and Adobe’s Content Server 4; the latter is used by Barnes & Noble, Sony, and various others. Hackers have developed what I call “one-click hacks” for both. One-click hacks meet three criteria: people without special technical expertise can use them; they work on any file that’s packaged in the given DRM; and they work permanently (i.e., there is no way to recover from them). In contrast, pay TV content protection schemes are generally not one-click-hackable.
In other words, one-click DRM hacks are like format converters, like the one built into Microsoft Word that converts files from WordPerfect or the ones built into photo editing utilities that convert TIFF to JPEG. But there’s a difference: DRM hacks are illegal in many countries, including the United States, European Union member states, Brazil, India, Taiwan, and Australia; all other signatories to the Anti-Counterfeiting Trade Agreement will eventually have so-called anticircumvention laws too.
The effect of anticircumvention law has been to force DRM hacks into the shadows, making them less easily accessible to the non-tech-savvy and at least somewhat stigmatized. Without the law, we would have things like Nook devices and software with “Convert from Kindle Format” options (and vice versa). The popular, free Calibre e-book reading app, for example, had a DRM stripper but removed it (presumably under legal pressure) in 2009. A DRM removal plug-in for Calibre is available, but it’s not an official one; David Pogue of the New York Times, hardly a fan of DRM, recently dismissed it as difficult to use as well as illegal.
The US has a rich case history around anticircumvention law that has made the boundaries of legal acceptability reasonably clear. It has shut off the availability of hacks from “legitimate” sources and ensured that if your hack is causing enough trouble, you will be sued out of existence. I am not personally a fan of anticircumvention law, but I accept as fact that it has made hacks less accessible to the general public.
The foregoing line of thought got IDPF Executive Director Bill McCoy and me talking last year about what IDPF might be able to do about DRM in the upcoming version of EPUB, in order to help IDPF further its objective of making EPUB a universal standard for digital publishing and forestall the two undesirable market trajectories described above. We did not set out to design an “ultimate DRM” or even “yet another DRM”; we set out to design something intended to solve problems in the digital publishing market while working within existing marketplace constraints.
So now, with that background, here is a set of interrelated design principles we established for EPUB LCP:
- Require interoperability so that retailers cannot use it to promote lock-in. This is what the UltraViolet standard for video is attempting to do, albeit in a technically much more complex way. The idea of UltraViolet is to provide some of the interoperability and sharing features that users want while still maintaining some degree of control. Our theory is that both publishers and e-book retailers would be willing to accept a looser form of DRM that could break the above market dilemma while striking a similar balance between interoperability and control.
- Support functions that users really want, such as “social” sharing of e-books. Build on the idea of e-book watermarking, such as that used in Safari Books Online for PDF downloads and in the Pottermore Store for EPUB format e-books: embed users’ personal information into the content, on the expectation that users will only share files with people whom they trust not to abuse their personal information.
- Create a scheme that can support non-retail models such as library lending and can be extended to support additional business models (see below) or the stronger security that industry segments such as higher ed need.
- Include the kinds of user-friendly features that Reclaim Your Game has recommended for video game DRMs. These include respecting privacy by not “phoning home” to servers and ensuring permanent offline use so that files can be used even if the retailer goes out of business. They also include not jeopardizing the security or integrity of users’ devices, as in the infamous “rootkit” installed by CD copy protection technology for music several years ago.
- Eliminate design elements that add disproportionately to cost and complexity. Perhaps the biggest of these is the so-called robustness rules that have become standard elements of DRMs such as OMA DRM, Marlin, and PlayReady where the DRM technology licensor doesn’t own the hardware or platform software. Eliminating “phoning home” also saves costs and complexity. Other elements to be eliminated include key revocation, recoverability, and fancy authentication schemes such as the domain authentication used in UltraViolet.
- Finally, don’t try very hard to make the scheme hack-proof. The strongest protection schemes for commercial content — such as those found in pay television — are those that minimize the impact of hacks so that they are temporary and recoverable; such schemes are too complex, invasive, and expensive for e-book retailers or e-reader makers to consider. Instead, assume that EPUB LCP will be hacked, and rely on two things to blunt the impact: anticircumvention law, and allowing enough differences among implementations that each one will require its own hack (a form of what security technologists call “code diversity”).
With those design principles in mind, we have designed a scheme that takes its inspiration from two sources in particular: the content protection technology used in the eReader/FictionWise e-book technology that is now owned by Barnes & Noble, and the layered functionality concept built into the Digital Media Project‘s IDP (Interoperable DRM Platform) standard.
The central idea of EPUB LCP is a passphrase supplied by the user or retailer. This could be an item of personal information, such as a name, email address, or even credit card number; distributors or rights holders can decide what types of passphrases to use or require. The passphrase is irrecoverably obfuscated (e.g. through a hash function) so that even if a hack recovers the passphrase, it won’t recover the personal information; yet the retailer can link the obfuscated passphrase to the user. The obfuscated passphrase is then embedded into the e-book file. If the user wants to share an e-book, all she has to do is share the passphrase. Otherwise, the content must be hacked to be readable.
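Since the requirements are still in draft, the following is only a toy Python sketch of the passphrase idea, not the actual LCP key derivation; the choice of PBKDF2 and all of the parameters are my own illustrative assumptions:

```python
import hashlib
import os

def obfuscate(passphrase: str, salt: bytes) -> bytes:
    """One-way transform of the passphrase: even if extracted from the
    e-book file, it cannot be reversed to reveal personal information."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

# Retailer side: derive a value from the user's passphrase
salt = os.urandom(16)                       # stored in the clear inside the file
key = obfuscate("alice@example.com", salt)  # passphrase is personal information
# ... `key` would then be used to encrypt the EPUB resources ...

# Reader side: the user (or a friend she shared the passphrase with) types it in
assert obfuscate("alice@example.com", salt) == key  # content unlocks
assert obfuscate("wrong guess", salt) != key        # content stays locked
print("unlock demo ok")
```

The retailer can still link the derived value to the user’s account, but a hacker who extracts it from the file learns nothing about the personal information behind it.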
Other aspects of the draft requirements are covered in the document on the IDPF website. Apart from that, it’s worth mentioning that this type of scheme will not support certain content distribution models unless extensions are added to make them possible. Features intentionally left out of the basic EPUB LCP design include:
- Separate license delivery, which allows different sets of rights for a given file
- License chaining, which supports subscription services
- Domain authentication, which can support multi-device/multi-user “family accounts” a la UltraViolet
- Master-slave secure file transfer, for sideloading onto portable devices, a la Windows Media DRM
- Forward-and-delete, to implement “Digital Personal Property” a la the IEEE P1817 standard
Once again, we set out to design something that meets current market needs and works within current market constraints; EPUB LCP is not a research-lab R&D project.
Again, I’ll be discussing this, as well as the landscape for e-book content protection in general, at Book Expo America next week. Feel free to come and heckle (or just heckle in the comments right here). I’m sure I will have more to report as this very interesting project develops.
Creative Commons for Music: What’s the Point?
January 22, 2012. Posted by Bill Rosenblatt in Law, Music, Rights Licensing, Services, Standards.
I recently came across a music startup called Airborne Music, which touts two features: a business model based on “subscribing to an artist” for US $1/month, and music distributed under Creative Commons licenses. Like other music services that use Creative Commons, Airborne Music appeals primarily to indie artists who are looking to get exposure for their work. This got me thinking about how — or whether — Creative Commons has any real economic value for creative artists.
I have been fascinated by a dichotomy of indie vs. major-label music: indie musicians value promotion over immediate revenue, while for major-label artists it’s the other way around. (Same for book authors with respect to the Big 6 trade publishers, photographers with respect to Getty and Corbis, etc.) Back when the major labels were only allowing digital downloads with DRM — a technology intended to preserve revenue at the expense of promotion — I wondered if those few indie artists who landed major-label deals were getting the optimal promotion-versus-revenue tradeoffs, or if this issue even figured into major-label thinking about licensing terms and rights technologies.
When I looked at Airborne Music, it dawned on me that Creative Commons is interesting for indie artists who want to promote their works while preserving the right (if not the ability) to make money from them later. The Creative Commons website lists ten existing sites that enable musicians to distribute their music under CC, including big ones like the bulge-bracket-funded startup SoundCloud and the commercially-oriented BandCamp.
This is an eminently practical application of Creative Commons’s motto: “Some rights reserved.” Many CC-licensing services use the BY-SA (Attribution-ShareAlike) Creative Commons license, which gives you the right to copy and distribute the artist’s music as long as you attribute it to the artist and redistribute (i.e. share) it under the same terms. That’s exactly what indie artists want: to get their content distributed as widely as possible but to make sure that everyone knows it’s their work. Some use BY-NC-SA (Attribution-NonCommercial-ShareAlike), which adds the condition that you can’t sell the content, meaning that the artist is preserving her ability to make money from it.
It sounds great in theory. It’s just too bad that there isn’t a way to make sure that those rights are actually respected. There is a rights expression language for Creative Commons (CC REL), which makes it possible for content rendering or editing software to read the license (expressed in RDFa) and act accordingly. As a technology, the REL concept originated with Mark Stefik at Xerox PARC in the mid-1990s; the eminent MIT computer scientist Hal Abelson created CC REL in 2008. Since then, the Creative Commons organization has maintained something of an arm’s-length relationship with CC REL: it describes the language and offers links to information about it, but it doesn’t (for example) include CC REL code in the actual licenses it offers.
More to the point, while there are code libraries for generating CC REL code, I have yet to hear of a working system that actually reads CC REL license terms and acts on them. (Yes, this would be extraordinarily difficult to achieve with any completeness, e.g., taking Fair Use into account.)
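To make the gap concrete, the "easy" half of such a system, reading a page's license and mapping it to permissions, might look something like this. This is my own sketch, not CC REL: it scans HTML for the conventional `rel="license"` link and consults a hand-built permissions table. The hard part, actually honoring those permissions in rendering and editing software, is what nobody has built.

```python
from html.parser import HTMLParser

# Illustrative mapping from CC license URLs to coarse permissions.
# (A hypothetical simplification; CC REL expresses these properties in RDFa.)
PERMISSIONS = {
    "http://creativecommons.org/licenses/by-sa/3.0/":
        {"copy": True, "commercial": True, "derivatives": True},
    "http://creativecommons.org/licenses/by-nc-sa/3.0/":
        {"copy": True, "commercial": False, "derivatives": True},
    "http://creativecommons.org/licenses/by-nc-nd/3.0/":
        {"copy": True, "commercial": False, "derivatives": False},
}

class LicenseFinder(HTMLParser):
    """Collects href values of links marked rel="license"."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "license" in a.get("rel", "").split():
            self.licenses.append(a.get("href"))

def permissions_for(page_html):
    finder = LicenseFinder()
    finder.feed(page_html)
    for url in finder.licenses:
        if url in PERMISSIONS:
            return PERMISSIONS[url]
    return None  # no recognized license: all rights reserved by default

page = ('<p>Track: <a rel="license" '
        'href="http://creativecommons.org/licenses/by-nc-sa/3.0/">'
        'CC BY-NC-SA</a></p>')
perms = permissions_for(page)
```

An editing tool could then refuse to export a remix when `derivatives` is False; the point of the post stands, though, in that no mainstream tool actually does this.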
Without a real enforcement mechanism, CC licenses are little more than labels, like the garment care hieroglyphics mandated by the Federal Trade Commission in the United States. For example, some BY-SA-licensed music tracks may end up in mashups. How many of those mashups will attribute the sources’ artists properly? Not many, I would guess. Conversely, what really prevents someone who gets music licensed under ND (No Derivative Works) terms from remixing or excerpting in ways that aren’t considered Fair Use? Are these people really afraid of being sued? I hardly think so.
This trap door into the legal system, as I have called it, makes Creative Commons licensing of more theoretical than practical interest. The practical value of CC seems to be concentrated in business-to-business content licensing agreements, where corporations need to take more responsibility for observing licensing terms and CC’s ready-made licenses make it easy for them to do so. The music site Jamendo is a good example of this: it licenses its members’ music content for commercial sync rights to movie and TV producers while making it free to the public.
Free culture advocates like to tell content creators that they should give up control over their content in the digital age. As far as I’m concerned, anyone who claims to welcome the end of control and also supports Creative Commons is talking out of both sides of his mouth. If you use a Creative Commons license, you express a desire for control, even if you don’t actually get very much of it. What you really get is a badge that describes your intentions — a badge that a large and increasing number of web-savvy people recognize. Yet as a practical matter, a Creative Commons logo on your site is tantamount to a statement to the average user that the content is free for the taking.
The truth is that sometimes artists benefit most from lack of control over their content, while other times they benefit from more control. The copyright system is supposed to make sure that the public’s and creators’ benefits from creative works are balanced in order to optimize creative output. Creative Commons purports to provide simple means of redressing what its designers believe is a lack of balance in the current copyright law. But to be attractive to artists, CC needs to offer them ways to determine their levels of control in ways that the copyright system does not support.
In the end, Creative Commons is a burglar alarm sign on your lawn without the actual alarm system. You can easily buy fake alarm signs for a few dollars, whereas real alarm systems cost thousands. It’s the same with digital content. At least Creative Commons, like almost all of the content licensed with it, is free.
(I should add that I wear the badge myself. My whitepapers and this blog are licensed under Creative Commons BY-NC-ND (Attribution-Noncommercial-No Derivative Works) terms. I would at least rather have the copyright-savvy people who read this know my intentions.)
UltraViolet Gets Two Lifelines January 12, 2012 Posted by Bill Rosenblatt in Economics, Fingerprinting, Services, Standards, Video.
A panel at this week’s CES show in Las Vegas yielded two pieces of positive news for the DECE/UltraViolet standard, after a launch several months ago with Warner Bros. and its Flixster subsidiary that could charitably be called “premature.” Of the two news items, one is a nice-to-have, but the other is a game-changer.
Let’s get to the game-changer first: Amazon announced that a major Hollywood studio is licensing its content for UltraViolet distribution through the online retail giant. The Amazon executive didn’t name the studio, though many assume it’s Warner Bros. Even if it’s a single studio, the importance of this announcement to the likelihood of UltraViolet’s success in the market cannot be overstated.
Leaving aside UltraViolet’s initial technical glitches and shortage of available titles, the problem with UltraViolet from a market perspective had always been lukewarm interest from online retailers. As I’ll explain, this hasn’t been a surprise, but Amazon’s new interest in UltraViolet could make all the difference.
UltraViolet is the “brand name” of a standard from a group called the Digital Entertainment Content Ecosystem (DECE), headed by Sony Pictures executive Mitch Singer. It implements a so-called rights locker for digital movies and other video content. Users can establish UltraViolet accounts for themselves and family members. Then they can obtain movies in one format (say, Blu-ray) and be entitled to get it in other formats for other devices (say, Windows Media file download for PCs). They can also stream the content to a web browser anywhere. The rights locker, managed by Neustar Inc., tracks each user’s purchases.
In other words, UltraViolet promises users format independence and a hedge against format obsolescence, while providing some protection for the content by requiring it to be packaged in several approved DRM and stream encryption schemes. It includes a few limitations on the number of devices and family members that can be associated with a single UltraViolet account, but in general UltraViolet is designed to make video content more portable and interoperable than, say, DVDs or iTunes downloads.
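The core rights-locker logic is easy to sketch in a few lines. Everything below (the account and device names, the device cap, the method names) is my own illustrative toy, not the actual Neustar implementation:

```python
# Toy model of a rights locker. One purchase, in any format, creates a
# locker entry that any registered device on the account can draw on.
MAX_DEVICES = 12  # assumed per-account device limit, for illustration

class RightsLocker:
    def __init__(self):
        self.purchases = {}  # account id -> set of owned title ids
        self.devices = {}    # account id -> set of registered device ids

    def record_purchase(self, account, title):
        # Buying the title once (e.g. on Blu-ray) creates the locker entry.
        self.purchases.setdefault(account, set()).add(title)

    def register_device(self, account, device):
        devs = self.devices.setdefault(account, set())
        if len(devs) >= MAX_DEVICES:
            raise ValueError("device limit reached for this account")
        devs.add(device)

    def can_fulfill(self, account, title, device):
        # Any registered device may receive the title in any approved format.
        return (title in self.purchases.get(account, set())
                and device in self.devices.get(account, set()))

locker = RightsLocker()
locker.record_purchase("smith-family", "some-movie")
locker.register_device("smith-family", "living-room-bluray")
locker.register_device("smith-family", "dads-laptop")
ok = locker.can_fulfill("smith-family", "some-movie", "dads-laptop")
```

The design point is that the entitlement lives in the locker, not in any one retailer's silo, which is what makes the format independence and retailer portability described above possible.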
Five of the six major Hollywood studios (all but Disney*), plus the “major indie” Lionsgate, are participating in UltraViolet.
One of the design goals of UltraViolet was to ensure that no single retailer could attain a market share large enough to be able to control downstream economics — in other words, to avoid a replay of Apple’s dominance of digital music downloads (and possibly Amazon’s dominance of e-books). To do this, the DECE studios pushed for ways to thwart consumer lock-in by online retailers that would sell UltraViolet content.
The most important example of this is rights locker portability: users can access their rights lockers from any participating retailer. UltraViolet retailers must compete with each other through value-added features.
Amazon’s Kindle e-book scheme offers a good illustration of platform lock-in and how it differs from other features that a retailer can build or offer. If you buy an e-book on Amazon, you can download and read it on a wide variety of devices: not just Kindle e-readers but also iPads, iPhones, Android devices, BlackBerrys, PCs, and Macs — in other words, pretty much everything but other e-reader devices. You get e-book portability — it will even remember where you last left off if you resume reading an e-book on another device — but you are still tied to Amazon as a retailer. If you want to read the same e-book on a Nook, for example, you have to buy it separately from Barnes & Noble (and then you can read that e-book on your PC, Mac, iPhone, Android, etc.).
This lock-in gives Amazon power in the market as a retailer; it had 58% market share as of February 2011 (by comparison, Apple has over 70% of the music download market). UltraViolet wants to make it as difficult as possible for a single digital video retailer to assert such market power.
The downside of that policy has been a lack of enthusiasm among retailers to sell UltraViolet-licensed content — which entails significant development investment and operational expenses. A good shorthand way to evaluate the potential impact of a standards initiative is to look at the list of participants: what points in the value chain are represented, how many of the top companies in each category, and so on. In DECE’s case, members have included most of the major movie studios, plenty of consumer device makers, lots of DRM and conditional access technology vendors, and so on, but few big-name retailers… one of which (Best Buy) already had a different system for delivering digital video content via Sonic Solutions.
Warner Bros. tried to jump-start the UltraViolet ecosystem by acquiring Flixster, a movie-oriented social networking startup, adding digital video e-commerce capability, and using it as an UltraViolet retailer for a handful of Warner titles. This has been little more than a proof-of-concept test, which was plagued by some technical glitches and suboptimal user experience — all of which, according to Singer, have been fixed.
It would be unworkable for Hollywood to pin its hopes for its next big digital format on a small unknown retailer owned by one of the studios. It has been vitally necessary to attract a big-name retailer to both validate the concept and provide the necessary marketing and infrastructure footprints. There had been talk of Wal-Mart entering the UltraViolet ecosystem, although it already has its own video delivery scheme through VUDU. But otherwise, the membership list had been short on major retailers.
Of course, Amazon is the major-est online retailer of them all. And it so happens that Amazon’s digital video strategy is a good fit to UltraViolet in two ways. First, Amazon currently runs a streaming service (Amazon Instant Video), whereas UltraViolet is primarily focused on downloads, a/k/a Electronic Sell Through (EST): the idea of UltraViolet is to buy a download and only then be able to view it via streaming.
Second, Amazon Instant Video does not look particularly successful. Of course, Amazon does not reveal user numbers, but it is telling that Amazon included Instant Video Unlimited as a perk in its US $79/year Amazon Prime program… and that when people extol the virtues of Amazon Prime, they tend to emphasize the free overnight shipping but rarely the streaming video.
The biggest winner thus far in the paid online video sweepstakes is Netflix, with about 24 million subscribers as of mid-2011. Netflix’s subscription-on-demand model is most likely far more popular than Amazon Instant Video’s pay-per-view (except for Amazon Prime members) model. Thus Amazon may be looking for ways to improve its market position in video without having to hack away at the Netflix streaming juggernaut.
The video download market is in comparative infancy. It has no runaway market leader a la Netflix, or Apple in music. If this situation persists long enough, and if Amazon’s trial run with UltraViolet is successful, then other retailers might see UltraViolet as a viable format as well… precisely because it will make them better able to compete with the Online Retailing Gorilla.
Yet the other dimension of UltraViolet that is currently lacking is availability of titles. And that’s where the other CES announcement comes in. Samsung announced a “Disc to Digital” feature that it will incorporate into new Blu-ray players later this year. With this feature, users can slide in their Blu-ray discs or DVDs, and if the content is “eligible,” they can choose to have that content available in their UltraViolet rights lockers for delivery in any UltraViolet-compliant format.
The Disc to Digital feature is a collaboration between Flixster (i.e. Warner Bros.) as online retailer and Rovi as technology supplier. It works in a manner that is analogous to “scan and match” services for music such as Apple iTunes Match: it scans your DVD or Blu-ray disc, identifies the movie, and if the movie is available in the UltraViolet library of licensed content, gives you an UltraViolet rights locker entry for that movie. Rovi’s content identification technology and metadata library are undoubtedly at the heart of this scheme.
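Conceptually, the scan-and-match flow reduces to: identify the disc, check eligibility, charge a fee, write a locker entry. A toy sketch follows; the fingerprinting (a hash of a stand-in disc label), the fees, and the catalog entries are all assumptions for illustration, and Rovi's real identification technology is far more involved:

```python
import hashlib

# fingerprint -> (title, eligible for UltraViolet?)
CATALOG = {}

def fingerprint(disc_label):
    # Stand-in for real content identification: hash an identifying string.
    return hashlib.sha256(disc_label.encode()).hexdigest()[:16]

def register_title(disc_label, title, eligible=True):
    CATALOG[fingerprint(disc_label)] = (title, eligible)

def disc_to_digital(disc_label, locker, account, want_hd=False):
    """Match a disc against the licensed catalog; on success, add a rights
    locker entry and return the fee charged. Fees here are placeholders."""
    entry = CATALOG.get(fingerprint(disc_label))
    if entry is None:
        return None  # unknown disc: no match
    title, eligible = entry
    if not eligible:
        return None  # recognized, but not licensed for UltraViolet
    fee = 5.00 if want_hd else 2.00  # placeholder SD/HD fees
    locker.setdefault(account, set()).add(title)
    return fee

register_title("SOME_MOVIE_BD", "Some Movie")
locker = {}
fee = disc_to_digital("SOME_MOVIE_BD", locker, "smith-family")
```

The "eligible" flag is where the announced-but-undefined catalog restriction would live; everything else is ordinary lookup-and-record bookkeeping.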
There are two catches: first, users will have to pay a “nominal” fee per disc for this service, which is even larger (and as yet unspecified) if they want it in high definition; second, it is limited to “eligible” content, and no one has offered a definition of “eligible” yet (beyond the fact that the content must come from one of the DECE participating studios). But surely the “eligible” catalog will exceed the current list (19 titles) by orders of magnitude, or the service will not be worth launching.
Nevertheless, these developments are very positive news for DECE/UltraViolet after months of embarrassments and bad press. DECE still has lots of work to do to make UltraViolet successful enough to be the major studios’ designated successor to Blu-ray, but at last it’s on track.
*Yes, I’m aware of the irony of using a tag line from “Who Wants to Be a Millionaire” in the title of this article: Disney owns the home entertainment distribution rights to that hit TV game show.
The Future of Music: From Blanket Licensing to Registries October 10, 2011 Posted by Bill Rosenblatt in Law, Music, Rights Licensing, Standards.
The Future of Music Coalition Policy Summit, which took place last week, has been a fixture in Washington, DC for a decade now. For those interested in how copyright has to find its way in the ever-changing world of digital music, this is a wonderful place to spend a couple of days. The FMC Policy Summit is a great event — and an inspiration for our own Copyright and Technology Conference — because it gathers many different types of people and forces them into a single room to get to know one another. As an organization, FMC represents the interests of independent musicians and songwriters, but the subject matter discussed at its Policy Summit should be of interest to anyone contemplating the future of music.
The panels at the FMC Policy Summit cover a range of topics beyond copyright. But last week’s conference had two panels on copyright arcana that were linked implicitly if not explicitly: on the first day, a panel on blanket licensing; on the second, a panel on music copyright registries. Perhaps the most remarkable aspect of these two panels was that digital music expert/ideologue Jim Griffin was on the latter panel, not the former.
Let me take a couple of steps back to explain why this is remarkable.
The treatment of music copyrights in most countries is a horrible mess. It is so complex as to be virtually incomprehensible to content creators — the people who need to understand them the most.
If you make a music recording, you have two sets of copyrights: one for the underlying composition (which could be someone else’s if you didn’t write the music), and another for the recorded performance of it. Each of those rights needs to be owned by, granted by law to, or licensed by entities such as record labels, distributors, service providers, and end-users. These rights are handled in a variety of ways in the United States. Some are implicit copyright rights; some come from so-called statutory licenses that have been added to the copyright law; some result from ad-hoc license agreements; and some come through collecting societies (a/k/a PROs or Performing Rights Organizations) like ASCAP and BMI, which represent only those rights holders who sign up with them.
If you’re already confused, welcome to a very large club.
A few panelists at the FMC Summit — mainly law-professor types who habitually think in terms of concepts and idealism instead of practicalities and the real world — contemplated blowing up the entire system and starting from scratch. Others, such as the new Register of Copyrights, Maria Pallante, settled for “Sure it’s bad here in the US, but it’s worse elsewhere” arguments. Her predecessor, Marybeth Peters, was an advocate of streamlining the entire music licensing process so that content creators can come closer to “one-stop shopping,” as countries such as the UK have attempted.
There are two schools of thought on how to improve a system that, in the words of Gary Greenstein of the law firm Wilson Sonsini (who will also speak at Copyright and Technology 2011), exists primarily to preserve the many jobs that would be eliminated under a more streamlined system. One is to move to a comprehensive system of blanket licensing, i.e. forming entities that represent all music rights holders and license their works under fixed terms. Another is to use technology to measure all usages of copyrighted works and compensate rights holders accordingly.
These two schools of thought are not mutually exclusive. Automated measurement and compensation can work in a blanket or statutory licensing regime if the technology is pervasive and accurate enough. Yet blanket licensing usually works with compensation schemes derived from sampling (e.g., BMI requires radio stations to log the music they play for a couple of weeks each year) or levies (“copyright taxes” collected from makers of consumer electronics or blank recording media). These are blunt-instrument approaches which all but guarantee that “long tail” content creators will not be compensated fairly and that abuses will creep in.
The blunt-instrument school of thought has persisted for quite a while as a lowest common denominator that is at least practicable, even if it has outlived its usefulness. Yet recent developments have proved two important things: first, the blunt-instrument approach has serious limitations in the digital world, given the Byzantine nature of the underlying system; second, better alternatives not only exist but are exposing the inherent inadequacies of the blunt-instrument approach.
The better alternative that has emerged here in the States, according to the views of most FMC Policy Summit attendees, is SoundExchange. SoundExchange came into being in the early 2000s as the result of laws enacted in the late 90s that established “performance rights in sound recordings”; this meant that online music services had to pay royalties for playing recordings, not just for the underlying compositions. The latter royalties are administered by composers’ collecting societies like ASCAP and BMI. As the result of the new laws, online music services would have to pay performance royalties, though terrestrial broadcast radio would not. (See, I told you this was a confusing mess.)
SoundExchange requires online music services to collect data on the music they play, report the data, and pay royalties accordingly. (Small noncommercial webcasters are exempt from this process and only pay a small flat annual fee.) SoundExchange negotiates royalty rates for various types of digital music services (webcasters, on-demand streaming services, satellite radio, etc.) through periodic rate-setting proceedings before panels of judges in Washington.
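This census-style reporting model, in which every play is logged and paid at a negotiated rate rather than extrapolated from samples, is what makes the transparency possible. A minimal sketch of the accounting (the per-play rate here is a placeholder, not an actual SoundExchange rate):

```python
# Census-style royalty accounting: every reported play earns the negotiated
# per-performance rate, so long-tail recordings are paid in proportion to
# actual usage rather than being rounded away by sampling.
PER_PLAY_RATE = 0.0021  # assumed dollars per performance, for illustration

def royalties(play_log):
    """play_log: iterable of (recording_id, play_count) pairs reported by a
    service. Returns the payout owed per recording."""
    return {rec: plays * PER_PLAY_RATE for rec, plays in play_log}

log = [("recording-001", 10_000), ("recording-002", 150)]
payout = royalties(log)
```

Under a sampling or levy regime, the 150-play long-tail recording would likely never appear in the sample at all and would earn nothing; here it earns its proportional share.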
FMC Policy Summit attendees — who tend to be musicians, songwriters, or indie label people — see SoundExchange as a beacon of light in the darkness, an organization that gets musicians paid and does it with relative transparency and low overhead, at least compared to older organizations like ASCAP and BMI.
While SoundExchange has shown that automated, data-driven royalty compensation can be done, advocates of blanket licensing have run into a major snag: if you’re going to offer an online music service a blanket license to music, you have to offer it for “all music,” not just some of it, otherwise what you’re offering is not going to be very helpful to the online music service. The problem is that offering a license to “all music” is just plain impossible, at least without an act of Congress like that which produced SoundExchange.
With this insight, naive and idealistic notions such as charging all ISP subscribers a monthly “music tax” that gets (somehow) distributed to rights holders go straight out the window. This is where we finally get back to Jim Griffin: blanket-licensing schemes such as Choruss, the business that Jim Griffin ran for Warner Music Group, are revealed to be the impossibilities they are.
Griffin, a battle-scarred veteran of the early days of digital music, had been an articulate blanket-licensing ideologue for years when WMG CEO Edgar Bronfman asked him to set up a blanket licensing business, which they called Choruss. Choruss failed about a year ago; as I explained at that time, the primary reason for its failure was that it couldn’t get licenses to anywhere near “all music.”
So Griffin has acknowledged the impossibility and moved on. He has turned his attention to an underlying problem that is even more complex and fundamental: the lack of a global registry of all music rights information that would be required to support any kind of comprehensive and fair licensing scheme. At the FMC Policy Summit, Griffin was on a panel on music rights data; he was talking about the International Music Registry (IMR), a project led by the World Intellectual Property Organization (WIPO). Griffin is one of over two dozen people from around the world working on the IMR.
IMR is adopting a federated approach to rights registries that acknowledges and leverages the existence of various “island” registries throughout the world and attempts to build a unifying layer on top of them. (One of these “islands” is the so-called Global Repertoire Database, which is initially focused on Europe.) This approach is analogous to the Digital Object Identifier (DOI) standard that I helped define in the publishing industry in the late 1990s: we wanted a copyright work identifier and registry that could coexist peacefully with various existing standards and registries such as ISBN for books, ISSN for journals, PII for other journals, URL for online resources, and so on. On the other hand, it differs from the Book Rights Registry contemplated in Google’s settlement with book publishers and authors, which would have been a single über-registry for all book content, at least in the United States.
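The federated idea can be sketched as a thin routing layer over the island registries, dispatching on the identifier scheme much as DOI coexists with ISBN and ISSN. All of the identifiers, schemes, and records below are made up for illustration:

```python
# Toy federated registry: each "island" keeps its own identifier scheme and
# records; the unifying layer just routes a query to the right island.
ISLANDS = {
    "isrc": {  # island registry for sound recordings (made-up entry)
        "USZZZ1100001": {"title": "Example Recording", "owner": "Indie Label A"},
    },
    "iswc": {  # island registry for compositions (made-up entry)
        "T-000000001-0": {"title": "Example Composition", "owner": "Songwriter B"},
    },
}

def resolve(identifier):
    """identifier like 'isrc:USZZZ1100001' -> rights record, or None."""
    scheme, _, local_id = identifier.partition(":")
    island = ISLANDS.get(scheme.lower())
    if island is None:
        return None  # no island registry federated for this scheme
    return island.get(local_id)

rec = resolve("isrc:USZZZ1100001")
```

The appeal of the approach is that no island has to change its internal identifiers or data model; the unifying layer only needs a routing rule per scheme.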
So that’s a long way of explaining what Jim Griffin was doing on the music registry panel instead of the blanket licensing panel at the FMC Policy Summit, and why that’s important. The rights registry problem is the right (no pun intended) one to be working on. If it can be solved, it would get us away from blunt-instrument schemes that encourage systemic abuses and favor big-name artists over the long tail, and it would facilitate content creators actually getting paid according to how much their music is played. It’s a problem that’s worth the monumental effort it will take to solve… if it’s even solvable at all. It will take years to find out one way or another, but it’s worth the journey.
DECE Releases Spec… But It’s a Secret June 15, 2011 Posted by Bill Rosenblatt in Standards.
I got an email message last week from the Digital Entertainment Content Ecosystem trumpeting the release of the “finalized” version 1.0 DECE/UltraViolet specs.
Under normal circumstances, I would take the time to read the specs and summarize and comment on them here — as I have done with Marlin, Coral, XrML, DMP IDP, hNews, the RIAA watermark payload spec, and various others over the years. But I can’t do that, because DECE is demanding that anyone who wants to download the specs sign a nondisclosure agreement – which they say is non-negotiable.
Releasing a purported digital media standard under NDA (let alone a non-negotiable one) is unheard-of nowadays. Every recent consortium or standards body in this field makes its specs publicly available; at the outside, they require filling out a form with contact information.
If DECE wants to attract positive early press in the run-up to its planned summer launch, this is exactly the wrong way to start. It’s hard not to draw the conclusion that DECE is using secrecy of the spec to bolster its security scheme; and if that’s the case, security experts will conclude that it can’t be much of a security scheme. Even the secrecy of crypto algorithms went out of fashion several years ago, dismissed by experts as “security by obscurity.”
Others will immediately assume that this is a sign of yet another paranoid Hollywood cabal and write it off. DECE has not exactly been lavishly forthcoming with information over the past couple of years anyway. Ars Technica had even used largely positive language in describing DECE, but I wouldn’t expect such relative goodwill to continue.
OK, Nate Anderson, Mike Masnick, Cory Doctorow, Slashdot denizens, etc. … all yours, go to it!
On to other matters. I’ve closed the poll I ran last week on Apple’s iTunes Match feature, which enables users to get legitimate versions of non-iTunes music tracks from Apple’s iCloud for a fee of $25 per year. Apple was ambiguous about whether this music would be offered as streams from the iCloud servers or as downloads to iTunes devices — so ambiguous that different journalists and commentators made different assumptions, while others hedged their bets.
By a factor of more than two to one (65% to 30%), the Copyright and Technology readership expects Apple to offer streaming of iTunes Match tracks instead of downloads, and moreover, streaming that only works with iTunes devices — that is, connected iOS devices (iPhones, iPod Touches, and iPads) and PCs and Macs running iTunes software. A tiny 6% of voters expect that Apple will offer streaming to any Internet-connected device, as Amazon and Google do with their cloud music offerings.
I suspect that Apple is ambiguous about iTunes Match because the stream-vs.-download issue is still a point of contention with the record companies and music publishers. Apple undoubtedly prefers downloads for a number of reasons: no need to run streaming servers; lots of downloads to consumer devices promote purchases of bigger, more expensive devices (as Nick Bilton of the New York Times pointed out). The only disadvantage of downloads to Apple is higher royalty payments, though this may be part of the ongoing negotiations.
The record companies, on the other hand, presumably see high potential for abuse from downloads. For $25 per year, you could download up to 25,000 illegal tracks from your favorite file-sharing site (or rip them from your friends’ CDs), exchange them for legal, high-quality AAC files, and delete the illegal ones. That’s a tenth of a cent per track! Such a deal!
We’ll see who’s right in the fall when Apple launches iTunes Match.
Meanwhile, some of you wondered where I got the phrase “iCloud Cuckoo Land” as the headline for last week’s article. Here’s the answer: search for “Cloud Cuckoo Land” (without the “i”) and you’ll find that it’s a reference to Aristophanes’ satirical comedy The Birds. Cloud Cuckoo Land is an imaginary (and unattainable) idealistic city in the air where everything is perfect. I thought the reference was apt.
Book Industry Bodies Consider DRM… Again May 26, 2011 Posted by Bill Rosenblatt in DRM, Publishing, Standards.
This week at Book Expo America in New York, the Book Industry Study Group (BISG) and the International Digital Publishing Forum (IDPF) held an open meeting to discuss what the two industry bodies should do about DRM standardization.
Although this meeting wasn’t all that well attended — it was hampered by a hard-to-find location in the remote reaches of the cavernous Javits Center — it did provide good insight into book publishers’ attitudes about DRM, now that e-books have a much bigger impact on the industry than they did a few years ago.
Angela Bole of BISG kicked off the meeting by explaining the research and standards body’s role in the process. She emphasized that the reason for BISG’s interest in DRM standardization was to “take friction out of the supply chain” for publishers, retailers, and users. BISG has been successful in promoting other supply-chain-oriented initiatives, such as the ONIX standard for book product metadata.
Then Bill McCoy, Executive Director of the IDPF (and former e-publishing executive at Adobe), laid out a few possible choices for direction that the IDPF could help facilitate, and discussed their pros and cons (mostly the latter):
- Rely on e-books migrating to browser-based delivery on connected devices, meaning that users will no longer need to download e-books, making file-based DRM unnecessary (instead relying on what I call “screenshot DRM,” as currently practiced by Google Editions and Amazon’s “Look Inside” feature). This option isn’t practical because the technology won’t be in place for years, and people still want to own their e-books permanently.
- Go DRM-free. One of the advocates of this approach, Andrew Savikas from O’Reilly & Associates, argued for DRM-free and cited his company’s research to prove that “piracy helps sales” [see note below]. But few major publishers are interested in giving up DRM at this time.
- Gravitate towards a single-vendor solution, as the music industry effectively did with Apple and iTunes. This would improve the user experience, but it would result in a single entity with a stranglehold on supply chain economics; publishers would lose.
- Advance an interoperable DRM standard. By process of elimination, McCoy expressed interest in pushing this model.
The IDPF and its predecessor organization, the Open eBook Forum (OeBF), have muffed the DRM issue twice already over the past decade. When it developed the highly successful EPUB format, IDPF opted not to include DRM in the specification. This happened primarily because the technology vendors that hold sway at the IDPF did not want a DRM standard: they either wanted to do without DRM entirely or to stick with their proprietary DRM; adopting a standard DRM would be an expense and hassle they would rather do without.
Before that, around 2003, the OeBF tried to define a standard rights expression language (REL) that publishers and retailers could use to express rights that they wanted to grant to consumers as part of a DRM system. The MPEG standards body adopted an REL standard (MPEG-REL) as part of its MPEG-21 suite of standards for digital multimedia. The OeBF decided to create an e-book-specific version of MPEG-REL. (I participated in this effort on behalf of the Association of American Publishers.) MPEG-REL has had negligible impact on the market, and the OeBF’s e-book REL effort went nowhere.
The current state of the e-book market makes any DRM standardization strategy challenging. There are now three dominant platform vendors, each with their own DRM: Amazon, Apple, and Adobe (used in virtually all other e-readers, including the Barnes & Noble Nooks and Sony and Kobo Readers). Any DRM standard would have to either promote interoperability among these or replace them. But the major players are already well established and therefore have little incentive to cooperate. Contrast this with Hollywood, where the market for digital video downloads is arguably less mature.
With that in mind, McCoy posited three possible approaches to interoperable DRM:
- Standardize on a single DRM, the way Hollywood did with AACS for Blu-ray (and HD-DVD).
- Instead of using file encryption, use a technique that McCoy has dubbed “Social DRM”: insert watermarks into e-books that contain personal information related to the user, such as a credit card number.
- Adopt a rights locker approach similar to that of Hollywood’s DECE (a/k/a UltraViolet), in which users pay for the right to download a title to one or more e-reading devices of their choice, as long as each device supports one of the approved DRMs.
The first of these options is a virtual impossibility with three platform vendors already established in the market. The “social DRM” technique has been tried in both e-books (by Microsoft in the previous decade) and music, with little success. Furthermore, it’s unclear how such a system would work with the EPUB text-markup format: for one thing, I don’t see how to prevent simple tools from stripping the watermark data out of EPUB files without reverting to “regular” DRM.
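The fragility is easy to demonstrate, because an EPUB file is just a ZIP archive. In the sketch below, a hypothetical `META-INF/watermark.xml` entry stands in for the embedded purchaser data (real “social DRM” schemes vary in where they hide the data, and this is my own illustration, not any vendor’s format); a few lines of standard-library Python suffice to remove it:

```python
import io
import zipfile

def strip_watermark(epub_bytes, watermark_path="META-INF/watermark.xml"):
    """Copy an EPUB (a ZIP archive) entry by entry, dropping the
    hypothetical watermark file. Anything stored as a plain metadata
    entry in the container is this easy to remove."""
    src = zipfile.ZipFile(io.BytesIO(epub_bytes))
    out_buf = io.BytesIO()
    with zipfile.ZipFile(out_buf, "w") as dst:
        for name in src.namelist():
            if name != watermark_path:
                dst.writestr(name, src.read(name))
    return out_buf.getvalue()

# Build a toy "EPUB" containing a purchaser-identifying metadata file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/epub+zip")
    z.writestr("OEBPS/chapter1.xhtml", "<html><body>...</body></html>")
    z.writestr("META-INF/watermark.xml",
               "<purchaser>card-ending-1234</purchaser>")

stripped = strip_watermark(buf.getvalue())
```

Watermarks woven into the content itself are harder to strip, but the content files in an EPUB are ordinary XHTML, so they are equally open to inspection and editing once the container is unzipped.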
That leaves the third option, which was the subject of some discussion at the meeting at BEA. The advantage of a DECE-type model for e-books is that none of the platform vendors would be likely to need to scrap and replace their existing DRMs. DECE-approved DRMs must merely share certain basic technical characteristics, such as using the same crypto algorithm, so that the central rights locker can store encryption keys that work with all compliant DRMs.
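The key-sharing idea can be sketched in a few lines. Everything below — the class name, the method names, the “DRM-A” through “DRM-E” labels — is illustrative, not drawn from the DECE specification; the point is only that a common crypto algorithm lets one stored key serve every approved DRM:

```python
class RightsLocker:
    """Toy sketch of a DECE-style rights locker: one content key per
    title, released to any approved DRM client of a purchaser."""

    APPROVED_DRMS = {"DRM-A", "DRM-B", "DRM-C", "DRM-D", "DRM-E"}

    def __init__(self):
        self._purchases = {}   # user -> set of title IDs
        self._title_keys = {}  # title ID -> content key shared by all DRMs

    def register_title(self, title_id, content_key):
        self._title_keys[title_id] = content_key

    def record_purchase(self, user, title_id):
        self._purchases.setdefault(user, set()).add(title_id)

    def key_for(self, user, title_id, drm):
        if drm not in self.APPROVED_DRMS:
            raise PermissionError("unapproved DRM")
        if title_id not in self._purchases.get(user, set()):
            raise PermissionError("no purchase record")
        return self._title_keys[title_id]

locker = RightsLocker()
locker.register_title("movie-001", b"\x00" * 16)
locker.record_purchase("alice", "movie-001")

# Two devices with different approved DRMs retrieve the same key:
key_a = locker.key_for("alice", "movie-001", "DRM-A")
key_b = locker.key_for("alice", "movie-001", "DRM-B")
```

If each DRM used its own encryption algorithm, the locker would instead have to store (or derive) a separate key per DRM per title, which is exactly the complexity the common-algorithm requirement avoids.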
But I don’t see how adopting DECE would be particularly helpful in reducing the number of e-book platforms or promoting interoperability. Of the three major platform providers, at least two (Apple and Amazon) have no history of cooperating with others. The latest market share statistics for e-book retailers, from Goldman Sachs in February, give Amazon 58% of the market, Barnes & Noble 27%, and Apple’s iBooks 9%. If we assume that the remaining 6% consists of other retailers that use the Adobe platform (such as Sony), then we have Amazon and Adobe fighting it out at a reasonably competitive 58% vs. 33%.
Market forces alone may well reduce the number of dominant platforms to two, by marginalizing Apple as a DRM platform provider for e-books. Both Amazon and B&N have apps that run on popular mobile devices. So one way to achieve “interoperability” is simply to use an iPad, iPhone, Android, or BlackBerry (not to mention Windows or Mac) with both Kindle and Nook apps, and live with two e-bookstores. Apple’s iBooks, which only runs on Apple iOS devices, will isolate itself into irrelevance. And its dependence on the iTunes retail infrastructure prevents Apple from doing the previously unthinkable and switching iBooks to Adobe’s DRM (thereby joining B&N and others to weaken Amazon).
If the book industry really wants to achieve e-book interoperability among dedicated e-readers, then a fourth alternative, beyond those that Bill McCoy suggested, may be worth investigating: Coral. Coral was a consortium led by Intertrust that had developed a framework for actual interoperation among DRMs through trusted intermediary services. This approach makes it possible for a user to call a service to “translate” content from one DRM to another while maintaining security.
Coral still technically exists but has been quiescent over the last several years as Hollywood rejected it in favor of the DECE multi-DRM approach. DECE depends on online retailers building infrastructure to support all compliant DRMs — currently five of them — and agreeing to let users migrate from one retailer to another like GSM mobile subscribers do with their SIM cards. This is unlikely to fly with Amazon or Barnes & Noble.
Instead, Coral would let users read their e-books on other devices while letting retailers retain control of their users’ purchase information. This alternative seems more palatable to e-book retailers than the DECE approach, and it would help users.
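The Coral flow can be sketched as follows. The XOR “cipher” and the DRM names are stand-ins for real encryption schemes (Coral’s actual protocols are far more involved); the point is the shape of the transaction: only the trusted intermediary holds both DRMs’ keys, so content moves from one DRM to the other without ever being handed to the user in the clear:

```python
def toy_encrypt(data, key):
    """Stand-in for a DRM's content encryption (XOR with a repeating key).
    A real system would use proper authenticated encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # XOR is its own inverse

class TranslationService:
    """Coral-style trusted intermediary: decrypts under the source DRM's
    key and re-encrypts under the destination DRM's key."""

    def __init__(self, drm_keys):
        self._drm_keys = drm_keys  # DRM name -> that DRM's content key

    def translate(self, ciphertext, src_drm, dst_drm):
        plaintext = toy_decrypt(ciphertext, self._drm_keys[src_drm])
        return toy_encrypt(plaintext, self._drm_keys[dst_drm])

# Hypothetical DRM names and keys, for illustration only:
svc = TranslationService({"KindleDRM": b"k1", "AdobeDRM": b"k2"})
book = b"chapter one ..."
protected_a = toy_encrypt(book, b"k1")          # book as sold under DRM A
protected_b = svc.translate(protected_a, "KindleDRM", "AdobeDRM")
```

The user’s device only ever handles `protected_a` and `protected_b`; the retailer keeps its purchase records, and the keys stay inside the trusted service.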
Technical and licensing issues would need to be investigated to determine whether Coral might be suitable for current e-book platforms. As various participants stated at the BEA meeting, book publishers are far more likely to be successful in pushing for DRM interoperability through industry-wide vehicles than one publisher at a time. The major e-book retailers need incentives to adopt interoperability that will enhance the user experience and help the market grow faster. Publishers can push for such incentives in licensing deals. As long as publishers’ actions fall on the correct side of antitrust law, the IDPF has a way forward.
*O’Reilly commissioned my colleague Brian O’Leary to do a study on piracy’s effect on sales in 2008. O’Leary’s findings encouraged O’Reilly to stay away from DRM. When I asked Savikas what the study measured, he stressed that it was a limited study that was only relevant to the way O’Reilly sells and markets its content.
As an O’Reilly author myself, I would assert that O’Reilly is an outlier, and that the research results should not be taken as representative of the book industry as a whole. I maintain that both piracy’s effect on sales and DRM’s effect on piracy (or sales) have yet to be measured with any degree of confidence for book publishing (or any other media industry segment) — and perhaps never will.
Here’s why O’Reilly is atypical: first, it is much more active and sophisticated than other book publishers at using online techniques to market and distribute content, thereby making it easier for O’Reilly to monetize content online. Second, this redounds doubly to O’Reilly’s benefit because of the tech-savvy of O’Reilly’s core audience of IT professionals. Finally, O’Reilly’s content attracts an open-source-oriented crowd that has a particular antipathy towards DRM, making a backlash more likely than for other publishers if O’Reilly were to implement it. O’Reilly & Associates is a superb publisher, but its study on piracy and DRM has limited meaning for the industry at large.
DECE Announces UltraViolet Roadmap and Usage Rules January 10, 2011
Posted by Bill Rosenblatt in Devices, Services, Standards, Video.
The Digital Entertainment Content Ecosystem (DECE) issued a press release in conjunction with last week’s massive CES trade show in Las Vegas. The verbiage in the press release proclaimed a “series of milestones in the development and availability of UltraViolet™,” the latter being the brand name attached to DECE-compliant products and services.
So what, for those of us who have been following DECE’s progress over the past couple of years, are those milestones? The most interesting actual accomplishment is one that, unfortunately, I can’t tell you much about: DECE has completed a technical specification. I filled out a web form to request one, hoping that I could have read and written about it by now, but I haven’t gotten it yet.
There are two possible reasons for this: first, the folks at DECE are too busy with CES-related business and haven’t gotten around to it; second, they have decided to make DECE a closed club and not reveal any details without a paid-up evaluation license and/or a nondisclosure agreement. I’ll reserve judgment until a little later on, but let’s just say (once again) that the latter would be a bad idea.
Apart from the spec, the press release discloses a few items of interest. One is that DECE has set usage rules for UltraViolet accounts. Recall that the heart of UltraViolet is a so-called rights locker service, which is run by the company Neustar. If you buy a movie or other piece of content, Neustar makes a record of your purchase in the central rights locker. This gives you the right to download that content onto any of your UltraViolet-compliant devices, to obtain a physical copy (e.g. on Blu-ray), and to stream it to virtually any web browser through your UltraViolet account.
Now we know that there will be limits on the number of users and devices that can share content from a single UltraViolet account: six and twelve respectively. This is meant to represent the size of a family and its devices. In other words, DECE has decided that the only reasonable way to define what is known as a domain — a group of users and devices, such as all those in a family — is to put limits on users and devices. Other possible techniques include allowing devices within geographic proximity of one another to be in the same domain, but that doesn’t allow for portable or automotive use.
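As a sketch, the announced limits amount to nothing more than a membership cap on the account. The class and method names below are my own invention, not from any DECE document; only the figures of six users and twelve devices come from the announcement:

```python
class UltraVioletAccount:
    """Toy model of UltraViolet's announced domain limits."""

    MAX_USERS = 6     # figure from the DECE announcement
    MAX_DEVICES = 12  # figure from the DECE announcement

    def __init__(self):
        self.users = set()
        self.devices = set()

    def add_user(self, user):
        if user not in self.users and len(self.users) >= self.MAX_USERS:
            raise ValueError("user limit reached")
        self.users.add(user)

    def add_device(self, device):
        if device not in self.devices and len(self.devices) >= self.MAX_DEVICES:
            raise ValueError("device limit reached")
        self.devices.add(device)

acct = UltraVioletAccount()
for i in range(6):
    acct.add_user(f"user{i}")

try:
    acct.add_user("user6")  # a seventh user is refused
except ValueError as e:
    print(e)  # user limit reached
```

A geographic-proximity rule, by contrast, would require the service to know where each device is at join time, which is exactly what breaks down for the portable and automotive cases mentioned above.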
One presumes that those numbers represent a consensus of the content licensors involved in DECE, which include all major movie studios except Disney. But expect those numbers to be points of contention in the future. We’ve seen this before in such scenarios as the number of devices that can play content from an iTunes account (five) or the number of devices that can read an e-book in Adobe’s DRM (six, though the number has varied over the years).
Another interesting tidbit from the press release is that the voice of DECE is no longer Mitch Singer, CTO of Sony Pictures and DECE President; it is now Mark Teitell, DECE General Manager. This is evidence that the backers of DECE are investing in resources to make it happen, a good sign.
Otherwise, the CES press release is primarily a series of pre-announcements, a roadmap:
- By the middle of this year, the rights locker infrastructure will be up and running, and the first UltraViolet-based retail services will launch.
- By the end of this year, software updates to PCs and other devices will become available, enabling them to become UltraViolet-compatible. This means, among other things, being able to read the DECE Common File Format and to work with one of the five DECE-approved DRMs.
- By early next year (presumably by CES 2012), the first UltraViolet-compatible devices will hit the market.
We’ll see how well DECE fares in meeting those milestones with a critical mass of retail service providers and (eventually) devices. But for now, I’d settle for a copy of that spec.