MovieLabs Releases Best Practices for Video Content Protection
October 23, 2013 | Posted by Bill Rosenblatt in DRM, Standards, Video.
As Hollywood prepares for its transition to 4k video (four times the resolution of HD), it appears to be adopting a new approach to content protection, one that promotes more service flexibility and quicker time to market than previous approaches but carries other risks. The recent publication of a best-practices document for content protection from MovieLabs, Hollywood’s R&D consortium, signals this new approach.
In previous generations of video technology, Hollywood studios got together with major technology companies and formed technology licensing entities to set and administer standards for content protection. For example, a subset of the major studios teamed up with IBM, Intel, Microsoft, Panasonic, and Toshiba to form AACS LA, the licensing authority for the AACS content protection scheme for Blu-ray discs and (originally) HD DVDs. AACS LA defines the technology specification, sets the terms and conditions under which it can be licensed, and performs other functions to maintain the technology.
A licensing authority like AACS LA (and there is a veritable alphabet soup of others) provides certainty to technology implementations, including compliance, patent licensing, and interoperability among licensees. It helps insulate the major studios from accusations of collusion by being a separate entity in which at most a subset of them participate.
As we now know, the licensing-authority model has its drawbacks. One is that it can take the licensing authority several years to develop technology specs to a point where vendors can implement them — by which time they risk obsolescence. Another is that it does not offer much flexibility in how the technology can adapt to new device types and content delivery paradigms. For example, AACS was designed with optical discs in mind at a time when Internet video streaming was just a blip on the horizon.
A document published recently by MovieLabs signals a new approach. MovieLabs Specification for Enhanced Content Protection is not really a specification: it contains nowhere near enough detail to serve as the basis for implementations. It is more a compendium of what we now understand as best practices for protecting digital video. It leaves room for change and interpretation.
The best practices in the document amount to a wish list for Hollywood. They include things like:
- Techniques for limiting the impact of hacks to DRM schemes, such as requiring device as well as content keys, code diversity (a hack that works on one device won’t necessarily work on another), title diversity (a hack that works with one title won’t necessarily work on another), device revocation, and renewal of protection schemes.
- Proactive renewal of software components instead of “locking the barn door after the horse has escaped.”
- Component technologies that are currently considered safe from hacks by themselves, including standard AES encryption with minimum key length of 128 and version 2.2 or better of the HDCP scheme for protecting links such as HDMI cables (earlier versions were hacked).
- Hardware roots of trust on devices, running in secure execution environments, to limit opportunities for key leakage.
- Forensic watermarking, meaning that content should have information embedded in it about the device or user who requested it.
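Several of the practices listed above (device as well as content keys, title diversity, and device revocation) fit together into one pattern: a compromise of any single device or title should not compromise the rest. The sketch below illustrates that pattern conceptually. It is not from the MovieLabs document; the HMAC derivation and XOR key wrap are toy stand-ins for the hardware-backed key handling and standardized key wrapping a real DRM would use.

```python
import hashlib
import hmac
import secrets

# Conceptual sketch only: per-device key derivation plus a revocation list
# limits the blast radius of a single compromised device, and a per-title
# content key limits the blast radius of a single compromised title.

MASTER_SECRET = secrets.token_bytes(32)   # held by the licensing service
REVOKED: set[str] = set()

def device_key(device_id: str) -> bytes:
    """Derive a unique key per device; leaking one reveals nothing about others."""
    return hmac.new(MASTER_SECRET, device_id.encode(), hashlib.sha256).digest()

def wrap_content_key(content_key: bytes, device_id: str) -> bytes:
    """XOR wrap (toy stand-in for a real AES key wrap) of a per-title key."""
    dk = device_key(device_id)
    return bytes(a ^ b for a, b in zip(content_key, dk))

def unwrap_content_key(wrapped: bytes, device_id: str) -> bytes:
    if device_id in REVOKED:
        raise PermissionError(f"device {device_id} has been revoked")
    return wrap_content_key(wrapped, device_id)  # XOR is its own inverse

title_key = secrets.token_bytes(32)       # one key per title ("title diversity")
w = wrap_content_key(title_key, "player-001")
assert unwrap_content_key(w, "player-001") == title_key

REVOKED.add("player-001")                 # revocation: device can no longer unwrap
```

The point of the structure, not the toy crypto, is what matters: a hack that recovers one device key or one title key leaves every other device and title untouched.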
Those who saw Sony Pictures CTO Spencer Stephens’s talk at the Anti-Piracy and Content Protection Summit in LA back in July will find much of this familiar. Some of these techniques come from the current state of the art in content protection for pay TV services; for more detail on this, see my whitepaper The New Technologies for Pay TV Content Security. Others, such as the forensic watermarking requirement, come from current systems for distributing HD movies in early release windows. And some result from lessons learned from cracks to older technologies such as AACS, HDCP, and CSS (for DVDs).
MovieLabs is unable to act as a licensor of standards for content protection (or anything else, for that matter). The six major studios set it up in 2005 as a movie industry joint R&D consortium modeled on the cable television industry’s CableLabs and other organizations enabled by the National Cooperative Research Act of 1984, such as Bellcore (telecommunications) and SEMATECH (semiconductors). R&D consortia are allowed, under antitrust law, to engage in “pre-competitive” research and development, but not to develop technologies that are proprietary to their members.
Accordingly, the document contains a lot of language intended to disassociate these requirements from any actual implementations, standards, or studio policies, such as “Each studio will determine individually which practices are prerequisites to the distribution of its content in any particular situation” and “This document defines only one approach to security and compatibility, and other approaches may be available.”
Instead, the best-practices approach looks like it is intended to give “signals” from the major studios to content protection technology vendors, such as Microsoft, Irdeto, Intertrust, and Verimatrix, who work with content service providers. These vendors will then presumably develop protection schemes that follow the best practices, with an understanding that studios will then agree to license their content to those services.
The result of this approach should be legal content services for next-generation video that get to market faster. The best practices are independent of things like content delivery modalities (physical media, downloads, streaming) and largely independent of usage rules. Therefore they should enable a wider variety of services than is possible with the traditional licensing authority paradigm.
Yet this approach has two drawbacks compared to the older approach. (And of course the two approaches are not mutually exclusive.) First is that it jeopardizes the interoperability among services that Hollywood craves — and has gone to great lengths to preserve in the UltraViolet standard. Service providers and device makers can incorporate content protection schemes that follow MovieLabs’ best practices, but consumers may not be able to move content among them, and service providers will be able to use content protection schemes to lock users in to their services. In contrast, many in Hollywood are now nostalgic for the DVD because, although its protection scheme was easily hacked, it guaranteed interoperability across all players (at least all within a given geographic region).
The other drawback is that the document is a wish list provided by organizations that won’t pay for the technology. This means that downstream entities such as device makers and service providers will treat it as the maximum amount of protection that they have to implement to get studio approval. Because there is no license agreement that they have to sign to get access to the technology, the downstream entities are likely to negotiate down from there. (Such negotiation already took place behind the scenes during the rollout of Blu-ray, as player makers refused to implement some of the more expensive protection features and some studios agreed to let them slip.)
Downstream entities are particularly likely to push back against some of MovieLabs’s best practices that involve costs and potential impairments of the user experience; examples include device connectivity to networks for purposes of authentication and revocation, proactive renewal of device software, and embedding of situation-specific watermarks.
Surely the studios understand all this. The publication of this document by MovieLabs shows that Hollywood is willing to entertain dialogues with service providers, device makers, and content protection vendors to speed up time-to-market of legitimate video services and ensure that downstream entities can innovate more freely. How much protection will the studios ultimately end up with when 4k video reaches the mainstream? It will be very interesting to watch over the next couple of years.
E-Book Watermarking Gains Traction in Europe
October 3, 2013 | Posted by Bill Rosenblatt in DRM, Europe, Publishing, United States, Watermarking.
The firm Rüdiger Wischenbart Content and Consulting has just released the latest version of Global eBook, its overview of the worldwide ebook market. This sweeping, highly informative report is available for free during the month of October.
The report contains lots of information about piracy and rights worldwide — attitudes, public policy initiatives, and technologies. A few conclusions in particular stand out. First, while growth of e-book reading appears to be slowing down, it has reached a level of 20% of book sales in the U.S. market (and even higher by unit volume). This puts e-books firmly in the mainstream of media consumption.
Accordingly, e-book piracy has become a mainstream concern. Publishers — and their trade associations, such as the Börsenverein des Deutschen Buchhandels in Germany, which is the most active on this issue — had been less involved in the online infringement issue than their counterparts in the music and film industries, but that’s changing now. Several studies have been done that generally show e-book piracy levels rising rapidly, but there’s wide disagreement on its volume. And virtually no data at all is available about the promotional vs. detrimental effects of unauthorized file-sharing on legitimate sales. Part of the problem is that e-book files are much smaller than music MP3s or (especially) digital video or games; therefore e-book files are more likely to be shared through email (which can’t be tracked) and less likely to be available through torrent sites.
The lack of quantitative understanding of infringement and its impact has led different countries to pursue different paths, in terms of both legal actions and the use of antipiracy technologies. Perhaps the most surprising of the latter trend — at least to those of us on this side of the Atlantic — is the rapid ascendancy of watermarking (a/k/a “social DRM”) in some European countries. For example:
- Netherlands: Arbeiderspers/Bruna, the country’s largest book publisher, switched from traditional DRM to watermarking for its entire catalog at the beginning of this year.
- Austria: 65% of the e-books available in the country have watermarks embedded, compared to only 35% with DRM.
- Hungary: Watermarking is now the preferred method of content protection.
- Sweden: Virtually all trade e-books are DRM-free. The e-book distributor eLib (owned by the Swedish media giant Bonnier) uses watermarking for 80% of its titles.
- Italy: Watermarking has grown from 15% to 42% of all e-books, overtaking the 35% that use DRM.
(Note that these are, with all due respect to them, second-tier European countries. I have anecdotal evidence that e-book watermarking is on the rise in the UK, but not much evidence of it in France or Germany. At the same time, the above countries are often test beds for technologies that, if successful, spread to larger markets — whether by design or market forces.)
Meanwhile, there’s still a total absence of data on the effects of both DRM and watermarking on users’ e-book behavior — which is why I have been discussing with the Book Industry Study Group the possibility of doing a study on this.
The prevailing attitude among authors is that DRM should still be used. An interesting data point on this came back in January when Lulu, one of the prominent online self-publishing services, decided to stop offering authors the option of DRM protection (using Adobe Content Server, the de facto standard DRM for ebooks outside of the Amazon and Apple ecosystems) for ebooks sold on the Lulu site. Lulu authors would still be able to distribute their titles through Amazon and other services that use DRM.
Lulu announced this in a blog post which elicited large numbers of comments, largely from authors. My pseudo-scientific tally of the authors’ comments showed that they are in favor of DRM — and unhappy with Lulu’s decision to drop it — by more than a two-to-one margin. Many said that they would drop Lulu and move to its competitor Smashwords, which continues to support DRM as an option. Remember that these are independent authors of mostly “long tail” titles in need of exposure, not bestselling authors or major publishers.
One reason for Lulu’s decision to drop DRM was undoubtedly the operational expense. Smashwords’ CEO, Mark Coker, expressed the attitudes of ebook distributors succinctly in a Publishers Weekly article covering Lulu’s move when he said, “What’s relevant is whether the cost of DRM (measured by fees to Adobe, [and for consumers] increased complexity, decreased availability, decreased sharing and word of mouth, decreased customer satisfaction) outweigh the benefits[.]” As we used to say over here, that’s the $64,000 question.
Content Protection for 4k Video
July 2, 2013 | Posted by Bill Rosenblatt in DRM, Technologies, Video, Watermarking.
As Hollywood adepts know, the next phase in picture quality beyond HD is something called 4k. Although the name suggests 4k (perhaps 4096) pixels in the vertical or horizontal direction, its resolution is actually 3840 × 2160, i.e., twice the pixels of HD in both horizontal and vertical directions.
4k is the highest quality of image actually captured by digital cinematography right now. The question is, how will it be delivered to consumers, in what timeframe, and how will it be protected?
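The pixel arithmetic behind the naming can be checked directly; only the figures stated above are used here:

```python
# "4k" consumer video (UHD) versus HD, by raw pixel count
hd = 1920 * 1080         # 2,073,600 pixels
uhd = 3840 * 2160        # 8,294,400 pixels: double in each direction
assert uhd == 4 * hd     # hence "four times the resolution of HD"

dci_4k_width = 4096      # the cinema figure the "4k" name hints at
assert uhd != dci_4k_width * 2160   # consumer 4k is not quite "true" 4k wide
```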
Those of us who attended the Anti-Piracy and Content Protection Summit in LA last week learned that the answer to the latter question is unknown as yet. Spencer Stephens, CTO of Sony Pictures, gave a brief presentation explaining what 4k is and outlining his studio’s wish list for 4k content protection. He said that it was an opportunity to start fresh with a new design, compared to the AACS content protection technology for Blu-ray discs, which is 10 years old.
This is interesting on a couple of levels. First, it implies that the studios have not predetermined a standard for 4k content protection; in contrast, Blu-ray discs were introduced in the market about three years after AACS was designed. Second, Stephens’s remarks had the flavor of a semi-public appeal to the community of content protection vendors — some of which were in the audience at this conference — for help in designing DRM schemes for 4k that met his requirements.
Stephens’s wish list included such elements as:
- Title-by-title diversity, so that a technique used to hack one movie title doesn’t necessarily apply to another
- Requiring players to authenticate themselves online before playback, which enables hacked players to be denied but makes it impossible to play 4k content without an Internet connection
- The use of HDCP 2.2 to protect digital outputs, since older versions of HDCP have been hacked
- Session-based watermarking, so that each 4k file is marked with the identity of the device or user that downloaded it (a technique used today with early-window HD content)
- The use of trusted execution environments (TEE) for playback, which combine the security of hardware with the renewability of software
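The session-based watermarking item above has a simple core idea: mark each delivered copy with the identity of the session that requested it, so a leaked file can be traced back. The toy below shows only the embed/extract round trip; real forensic watermarks use robust, imperceptible transforms that survive re-encoding, not this least-significant-bit scheme, and the function names here are illustrative.

```python
# Toy illustration of session-based (forensic) watermarking: a session ID is
# embedded in the least significant bits of decoded media samples.

def embed_watermark(samples: bytearray, session_id: int, bits: int = 32) -> None:
    """Write each bit of session_id into the LSB of one sample."""
    for i in range(bits):
        bit = (session_id >> i) & 1
        samples[i] = (samples[i] & 0xFE) | bit

def extract_watermark(samples: bytes, bits: int = 32) -> int:
    """Recover the session ID from the marked samples."""
    return sum((samples[i] & 1) << i for i in range(bits))

media = bytearray(range(64))        # stand-in for decoded audio/video samples
embed_watermark(media, session_id=0xC0FFEE)
assert extract_watermark(media) == 0xC0FFEE   # traces a leak back to a session
```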
From time to time I hear from startup companies that claim to have designed better technologies for video content protection. I tell them that getting studio approval for new content protection schemes is a tricky business. You can get studio technology executives excited about your technology, but they don’t actually “approve” it such that they guarantee they’ll accept it if it’s used in a content service. Instead, they expect service providers to propose the technology in the context of the overall service, and the studios will consider providing licenses to their content in that broader context. And of course the studios don’t actually pay for the technology; the service providers or consumer device makers do.
In other words, studios “bless” new content protection technologies, but otherwise the entire sales process takes place at arm’s length from the studios. In that sense, the studios act somewhat like a regulatory agency does when setting guidelines for compliance with regulations such as HIPAA and GLB (for information privacy in healthcare and financial services respectively). The resulting technology often meets the letter but not the spirit of the regulations.
In this respect, Stephens’s remarks were a bit of fresh air. They are an invitation to more open dialog among vendors, studios, and service providers about the types of content protection that they may be willing to implement when it comes time to distribute 4k content to consumers.
In the past, such discussions often happened behind closed doors, took the form of unilateral “unfunded mandates,” and/or resulted in implementations that plainly did not work. As technology gets more sophisticated and the world gets more complex, Hollywood is going to have to work more closely with downstream entities in the content distribution chain if it wants its content protected. Spencer Stephens’s presentation was a good start in that direction.
Kim Dotcom Embraces DRM
January 22, 2013 | Posted by Bill Rosenblatt in DRM, New Zealand, Services.
Kim Dotcom launched a new cloud file storage service, the New Zealand-based Mega, last weekend on the one-year anniversary of the shutdown of his previous site, the notorious MegaUpload. (The massive initial interest in the site* prevented me from trying out the new service until today.)
Mega encrypts users’ files, using what looks like a content key (using AES-128) protected by 2048-bit RSA asymmetric-key encryption. It derives the latter keys from users’ passwords and other pseudo-random data. Downloading a file from a Mega account requires knowing either the password that was used to generate the RSA key (i.e., logging in to the account used to upload the file) or the key itself.
Hmm. Content encrypted with symmetric keys that in turn are protected by asymmetric keys… sounds quite a bit like DRM, doesn’t it?
Well, not quite. While DRM systems assume that file owners won’t want to publish keys used to encrypt the files, Mega not only allows but enables you to publish your files’ keys. Mega lets you retrieve the key for a given file in the form of a URL; just right-click on the file you want and select “Get link.” (Here’s a sample.) You can put the resulting URL into a blog post, tweet, email message, or website featuring banner ads for porn and mail-order brides.
(And of course, unlike DRM systems, once you obtain a key and download a file, it’s yours in unencrypted form to do with as you please. The encryption isn’t integrated into a secure player app.)
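The hybrid pattern described above can be sketched end to end. This is a toy under loudly stated assumptions: the HMAC-SHA256 counter keystream stands in for Mega's AES-128, PBKDF2 stands in for its RSA-2048 key handling, and the `example.invalid` URL and all function names are made up for illustration.

```python
import base64
import hashlib
import hmac
import secrets

# Toy sketch of hybrid file encryption with a shareable key-in-URL link:
# a random per-file key encrypts the content; that key is stored wrapped
# under password-derived material, or exported directly in a link.

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Symmetric XOR stream cipher (stand-in for AES); applying twice decrypts."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hmac.new(key, block.to_bytes(8, "big"), hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(a ^ b for a, b in zip(chunk, pad))
    return bytes(out)

def upload(plaintext: bytes, password: str):
    file_key = secrets.token_bytes(16)                  # per-file content key
    account_key = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 100_000)
    stored_key = keystream_xor(account_key, file_key)   # key at rest, wrapped
    ciphertext = keystream_xor(file_key, plaintext)
    # Publishing the link publishes the key -- the step no DRM would allow:
    link = "https://example.invalid/#!" + base64.urlsafe_b64encode(file_key).decode()
    return ciphertext, stored_key, link

def download_via_link(ciphertext: bytes, link: str) -> bytes:
    file_key = base64.urlsafe_b64decode(link.rsplit("#!", 1)[1])
    return keystream_xor(file_key, ciphertext)

ct, _, url = upload(b"a perfectly legitimate home movie", "hunter2")
assert download_via_link(ct, url) == b"a perfectly legitimate home movie"
```

Note that the server stores only `ciphertext` and the wrapped key, which is exactly why fingerprinting-style content identification cannot work on such a service.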
Yet in practical terms, Mega is really no different from file-storage services that let users publish URLs to files they store — examples of which include RapidShare, 4Shared, and any of dozens of file transfer services (YouSendIt, WhaleMail, DropSend, Pando, SendUIt, etc.).
Mega touts its use of encryption as a privacy benefit. What it really offers is privacy from the kinds of piracy monitoring services that media companies use to generate takedown notices — an application of encryption that hardcore pirates have used and that Kim Dotcom purports to “take … out to the mainstream.” It will be impossible to use content identification technologies, such as fingerprinting, to detect the presence of copyrighted materials on Mega’s servers. RapidShare, for example, analyzes third-party links to files on its site for potential infringements; Mega can’t do any such thing, by design.
Mega’s use of encryption also plays into the question of whether it could ever be held secondarily liable for its users’ infringements under laws such as DMCA 512 in the United States. The Beltway tech policy writer Paul Sweeting wrote an astute analysis of Mega’s chances against the DMCA over the weekend.
Is Kim Dotcom simply thumbing his nose at Big Media again? Or is he seriously trying to make Mega a competitor to legitimate, prosaic file storage services such as DropBox? The track records of services known for piracy trying to go “legit” are not encouraging — just ask Bram Cohen (BitTorrent Entertainment Network) or Global Gaming Factory (purchasers of The Pirate Bay’s assets). Still, this is one to watch as the year unfolds.
*Or, just possibly, server meltdowns faked to generate mountains of credulous hype?
As I have worked with video service providers that are trying to upgrade their offerings to include online and mobile services, I’ve seen bewilderment about the maze of codecs, streaming protocols, and player apps as well as content protection technologies that those service providers need to understand. Yet a development that took place earlier this month should help ease some of the complexity.
Microsoft’s PlayReady is becoming a popular choice for content protection. Dozens of service providers use it, including BSkyB, Canal+, HBO, Hulu, MTV, Netflix, and many ISPs, pay TV operators, and wireless carriers. PlayReady handles both downloads and streaming, and it is currently the only commercial DRM technology certified for use with UltraViolet (though that should change soon). Microsoft has developed a healthy ecosystem of vendors that supply things like player apps for different platforms, “hardening” of client implementations to ensure robustness, server-side integration services, and end-to-end services. And after years of putting in very little effort on marketing, Microsoft has finally upgraded its PlayReady website with information to make it easier to understand how to use and license the technology.
Streaming protocols are still a bit of an issue, though. Several vendors have created so-called adaptive streaming protocols, which monitor the user’s throughput and vary the bit rate of the content to ensure optimal quality without interruptions. Apple has HTTP Live Streaming (HLS), Microsoft has Smooth Streaming, Adobe has HTTP Dynamic Streaming (HDS), and Google has technology it acquired from Widevine. Yet operators have been more interested in Dynamic Adaptive Streaming over HTTP (DASH), an emerging vendor-independent MPEG standard. The hope with MPEG-DASH is that operators can use the same protocol to stream to a wide variety of client devices, thereby making deployment of TV Everywhere-type services cheaper and easier.
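Whatever the protocol, the client-side logic that all of these schemes share is the same: measure recent throughput and pick the highest rendition that fits, re-deciding for every segment. The sketch below shows that logic; the bitrate ladder and safety factor are illustrative values, not from HLS, Smooth Streaming, HDS, or the DASH spec.

```python
# Minimal sketch of adaptive bitrate (ABR) rendition selection, the idea
# common to HLS, Smooth Streaming, HDS, and MPEG-DASH clients.

LADDER_KBPS = [400, 1200, 2500, 5000, 8000]   # available renditions (illustrative)

def choose_bitrate(measured_kbps: float, safety: float = 0.8) -> int:
    """Return the best rendition fitting within safety * measured throughput."""
    budget = measured_kbps * safety
    fitting = [r for r in LADDER_KBPS if r <= budget]
    return max(fitting) if fitting else min(LADDER_KBPS)

assert choose_bitrate(10_000) == 8000   # fast connection: top rendition
assert choose_bitrate(3_000) == 1200    # 3000 * 0.8 = 2400; 2500 doesn't fit
assert choose_bitrate(100) == 400       # below the floor: lowest rendition
```

The standards differ in manifest formats, segment packaging, and encryption signaling, not in this core loop, which is why a vendor-neutral standard like MPEG-DASH is plausible.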
MPEG-DASH took a significant step towards real-world viability over the last few months with the establishment of the DASH Industry Forum, a trade association that promotes market adoption of the standard. Microsoft and Adobe are members, though not Apple or Google, indicating that at least some of the vendors of proprietary adaptive streaming will embrace the standard. The membership also includes a healthy critical mass of vendors in the video content protection space: Adobe, BuyDRM, castLabs, Irdeto, Nagra, and Verimatrix — plus Cisco, owner of NDS, and Motorola, owner of SecureMedia.
Adaptive streaming protocols need to be integrated with content protection schemes. PlayReady was originally designed to work with Smooth Streaming. It has also been integrated with HLS, which is probably the most popular of the proprietary adaptive streaming schemes. Integration of PlayReady with MPEG-DASH is likely to be viewed as a safe choice, in line with the way the industry is going. That solution came into view this month as BuyDRM and Fraunhofer IIS announced an integration of MPEG-DASH with PlayReady for the HBO GO service in Europe. HBO GO is HBO’s “over the top” service for subscribers.
For the HBO GO demo, BuyDRM implemented a version of its PlayReady client that uses Fraunhofer’s AAC 5.1 surround-sound codec, which ships with devices that run Android 4.1 Jelly Bean. The integration is being showcased with HD quality video on HBO’s “Boardwalk Empire” series. Users can connect Android 4.1 devices with the proper outputs — even handsets — to home-theater audio playback systems to get an experience equivalent to playing a Blu-ray disc. The current implementation supports live broadcasting, with VOD support on the way shortly.
PlayReady integrated with MPEG-DASH is likely to be a popular choice for a variety of video service providers, ranging from traditional pay TV operators to over-the-top services like HBO Go. BuyDRM and Fraunhofer’s deployment is an important step towards that choice becoming widely feasible.
DRM Anticircumvention for Dummies
July 15, 2012 | Posted by Bill Rosenblatt in DRM, Law.
I have seen a lot of writings and gotten a lot of feedback regarding the EPUB Lightweight Content Protection (EPUB LCP) scheme I am helping to design for the International Digital Publishing Forum (IDPF), which oversees the EPUB standard. The criticisms fall into two buckets: “DRM sucks; why is the IDPF wasting time on this?” and “The security is too weak; publishers need stronger protection.”
Yet these diametrically opposed criticisms have one thing in common: a lack of understanding of how anticircumvention law, such as Section 1201 of the DMCA in the United States, works in practice and how it figures into the design of EPUB LCP. This lack of understanding is common to both DRM opponents and people from DRM technology vendors. Anticircumvention law makes it a crime to hack DRMs.
So I thought I would offer some information about the practicalities of anticircumvention law, presented as rebuttals to some of the false assertions that I have heard. Three caveats are in order. First, the following is going to be U.S.-centric. That’s because I am most familiar with the U.S. anticircumvention law, but also because the U.S. law is by far the most highly developed through litigation. Second, I am not a lawyer — nor are any of the people who have talked to me about this. So if you’re a legal expert and I’m wrong, please correct me. Third, I’m not an official spokesman for IDPF, and they may have different views.
Assertion: Anticircumvention law doesn’t stop hacks; hacks are going to be available anyway.
Reality: Of course the law doesn’t eliminate hacks, but it does make hacks less easily accessible to people who are not determined hackers. The law comes down hardest on those who gain commercially from their hacks. Because of the anticircumvention law, there is not (for example) a “convert from Amazon” option in Nook readers and apps, or the converse in Kindles; instead you have to go find the hack, install it, and use it – something that requires more time, determination, and skill. (Note that this is a different issue from “DRM doesn’t stop piracy.” Here I agree: absolutely, there are various other ways to infringe copyright, some of which are easier than hacking DRMs.)
Assertion: DRM systems that aren’t robust don’t qualify for the anticircumvention law.
Reality: This one comes from DRM vendors, which have vested interests in robustness. To answer this, you need to look at the history of litigation (again, this is a US-centric view). The most important legal precedent here is Universal v. Reimerdes, which was decided in U.S. district court in 2000 and upheld on appeal. This case was one of several involving the weak CSS encryption scheme for DVDs. The defense asked the court to find it not liable because CSS was too weak to meet the definition of “effective” in “technological measure [that] effectively controls access to a work” under the law. In his opinion, the judge explicitly refused to establish an “effectiveness test” by deciding this issue. I know of a couple of cases that attempted to revisit this issue but were dropped. The effect, at least for now, is that any DRM that’s as strong (i.e. weak) as CSS, or stronger, should qualify for protection under the law.
Assertion: The IDPF intends to sue hackers as part of the EPUB LCP initiative.
Reality: Not true at all. The IDPF is not even in a position to facilitate litigation the way the MPAA and RIAA do. (For one thing, it’s an international body, not a national one.) If any organization is going to facilitate litigation, it would be the Association of American Publishers (AAP) in the U.S., which has not been involved in the EPUB LCP initiative. More generally, it may help to explain how the litigation process works in practice. Copyright owners do the suing; they are the actual plaintiffs. They will only bother to sue under the anticircumvention law if they see hacks that are being used widely enough to cause significant infringement and/or the supplier of the hack is making money from the hack. So as a practical matter, a hack that “sits in the shadows” as described above is unlikely to be used widely enough to draw a lawsuit.
Assertion: Users get sued for using hacks.
Reality: Although the law does provide penalties for using as well as distributing hacks, individual users have never gotten sued for using hacks (or for creating hacks for personal use only). Users have been sued for copyright infringement; if you hack a DRM, you may be infringing copyright. Only those who make hacks publicly available have ever been sued for DMCA 1201 violations.
Assertion: This is a US matter and irrelevant elsewhere in the world, especially now that ACTA is dead in Europe.
Reality: As mentioned above, the interpretation of “effectiveness” is a US-centric one that may or may not apply elsewhere. But otherwise, this statement is also incorrect. Anticircumvention law is on the books today in most industrialized countries, including EU member states (resulting from the European Union Copyright Directive of 2001), Australia, New Zealand, Japan, Singapore, India, China, Brazil, and a few others; South Korea and Canada should get anticircumvention laws soon.
I am working with the International Digital Publishing Forum (IDPF), helping them define a new type of content protection standard that may be incorporated into the upcoming Version 3 of IDPF’s EPUB standard for e-books. We’re calling this new standard EPUB Lightweight Content Protection (EPUB LCP).
EPUB LCP is currently in a draft requirements stage. The draft requirements, along with some explanatory information, are publicly available; IDPF is requesting comments on them until June 8. I will be giving a talk about EPUB LCP, and the state of content protection for e-books in general, at Book Expo America in NYC next week, during IDPF’s Digital Book Program on Tuesday June 5.
Now let’s get the disclaimer out of the way: the remainder of this article contains my own views, not necessarily those of IDPF, its management, or its board members. I’m a consultant to IDPF; any decisions made about EPUB LCP are ultimately IDPF’s. The requirements document mentioned above was written by me but edited by IDPF management to suit its own needs.
IDPF is defining a new standard for what amounts to a simple, lightweight, looser DRM. EPUB is widely used in the e-book industry (by just about everyone except Amazon), but lack of an interoperable DRM standard has caused fragmentation that has hampered its success in the market. Frankly, IDPF blew it on this years ago (before its current management came in). They bowed to pressures from online retailers and reading device makers not to make EPUB compliance contingent on adopting a standard DRM, and they considered DRM (understandably) not to be “low hanging fruit.”
IDPF first announced this initiative on May 18; it got press coverage in online publications such as Ars Technica, PaidContent.org, and others. The bulk of the comments were generally “DRM sucks no matter what you call it” or “Why bother with this at all, it won’t help prevent any infringement.” A small number of commenters said something on the order of “If there has to be DRM, this isn’t a bad alternative.” One very knowledgeable commenter on Ars Technica first judged the scheme to be pointless because it’s cryptographically weak, then came around to understanding what we’re trying to do and even offered some beneficial insights.
The draft requirements document provides the basic information about the design; my main purpose here is to focus more on the circumstances and motivation behind the initial design choices.
Let’s start at a high level, with the overall e-book market. (Those of you who read my article about this on PaidContent.org a few months ago can skip this and the next five paragraphs.) Right now it’s at a tipping point between two outcomes that are both undesirable for the publishing industry. The key figure to watch is Amazon’s market share, which is currently in the neighborhood of 60%; Barnes and Noble’s Nook is in second place with share somewhere in the 25-30% range.
One outcome is Amazon increasing its market share and entering monopoly territory (according to the benchmark of 70% market share often used in US law). If that happens, Amazon can do to the publishing industry what Apple did to the music download market: dominate it so thoroughly that it can both dictate economic terms and lock customers in to its own ecosystem of devices, software, and services.
The other outcome is that Amazon’s market share falls, say to 50% or lower, due to competition. In that case, the market fragments even further, putting a damper on overall growth in e-reading. Also not good for publishers.
Let’s look at what happens to DRM in each of these cases. In the first (Amazon monopoly) case, Amazon may drop DRM just as Apple did for music — but it will be too late: Amazon will have achieved lock-in and can preserve it in other ways, such as by making it generally inconvenient for users to use other devices or software to read Amazon e-books. Other e-book retailers would then drop DRM as well, but few will care.
In the second case, everyone will probably keep their DRMs in order to keep users from straying to competitors (though some individual publishers will opt out of it). In other words, if the DRM status quo remains, the likely alternatives are DRM-free monopoly or DRM and fragmentation.
If IDPF had included an interoperable DRM standard back in 2007 when both EPUB and the Kindle launched, e-books might well be more portable among devices and reading software than they are now. Of course, the most desirable outcome for the reading public is 100% interoperability, but we know from the history of technology markets (with the admittedly major exception of HTML) that this is a chimera. (Again, I explained this in PaidContent.org a few months ago.)
To many people, the way out of this dilemma is obvious: everyone should get rid of DRM now. That certainly would be good for consumers. But most publishers — who control the terms by which e-books are licensed to retailers — don’t want to do this; neither do many authors, who own copyrights in their books.
E-book retailers and device vendors can get lock-in benefits from DRM. As for whether DRM does anything to benefit rights holders by improving consumers’ copyright compliance or reducing infringement, that’s a real question. Notwithstanding the opinions of the many self-styled experts in user behavior analysis and infringement data collection among the techblogorati and commentariat, the answer is unknown and possibly unknowable. Publishers are motivated to keep DRM if for no other reason than fear that once it goes away, they can never bring it back. Moreover, certain segments of the publishing industry (such as higher education) want DRM that’s even stronger than the current major schemes.
The fact is, none of the major DRMs in today’s e-book market are very sophisticated — at least not compared to content protection technologies used for video content. The economics of the e-book industry make this impossible: the publishers and authors who want DRM don’t pay for it, resulting in cost and complexity constraints. DRM helps retailers insofar as it promotes lock-in, but it doesn’t help them protect their overall services. In contrast, content protection helps pay TV operators (for example) protect their services, which they want protected just as much as Hollywood doesn’t want its content stolen; so they’re willing to pay for more sophisticated content protection.
The two leading e-book DRMs right now are Amazon’s Mobipocket DRM and Adobe’s Content Server 4; the latter is used by Barnes & Noble, Sony, and various others. Hackers have developed what I call “one-click hacks” for both. One-click hacks meet three criteria: people without special technical expertise can use them; they work on any file that’s packaged in the given DRM; and they work permanently (i.e., there is no way to recover from them). In contrast, pay TV content protection schemes are generally not one-click-hackable.
In other words, one-click DRM hacks are like format converters: the one built into Microsoft Word that converts files from WordPerfect, or the ones built into photo editing utilities that convert TIFF to JPEG. But there’s a difference: DRM hacks are illegal in many countries, including the United States, European Union member states, Brazil, India, Taiwan, and Australia; all other signatories to the Anti-Counterfeiting Trade Agreement will eventually have so-called anticircumvention laws too.
The effect of anticircumvention law has been to force DRM hacks into the shadows, making them less easily accessible to the non-tech-savvy and at least somewhat stigmatized. Without the law, we would have things like Nook devices and software with “Convert from Kindle Format” options (and vice versa). The popular, free Calibre e-book reading app, for example, had a DRM stripper but removed it (presumably under legal pressure) in 2009. A DRM removal plug-in for Calibre is available, but it’s not an official one; David Pogue of the New York Times — hardly a fan of DRM – recently dismissed it as difficult to use as well as illegal.
The US has a rich case history around anticircumvention law that has made the boundaries of legal acceptability reasonably clear. It has shut off the availability of hacks from “legitimate” sources and ensured that if your hack is causing enough trouble, you will be sued out of existence. I am not personally a fan of anticircumvention law, but I accept as fact that it has made hacks less accessible to the general public.
The foregoing line of thought got IDPF Executive Director Bill McCoy and me talking last year about what IDPF might be able to do about DRM in the upcoming version of EPUB, in order to help IDPF further its objective of making EPUB a universal standard for digital publishing and forestall the two undesirable market trajectories described above. We did not set out to design an “ultimate DRM” or even “yet another DRM”; we set out to design something intended to solve problems in the digital publishing market while working within existing marketplace constraints.
So now, with that background, here is a set of interrelated design principles we established for EPUB LCP:
- Require interoperability so that retailers cannot use it to promote lock-in. This is what the UltraViolet standard for video is attempting to do, albeit in a technically much more complex way. The idea of UltraViolet is to provide some of the interoperability and sharing features that users want while still maintaining some degree of control. Our theory is that both publishers and e-book retailers would be willing to accept a looser form of DRM that could break the above market dilemma while striking a similar balance between interoperability and control.
- Support functions that users really want, such as “social” sharing of e-books. Build on the idea of e-book watermarking, such as that used in Safari Books Online for PDF downloads and in the Pottermore Store for EPUB format e-books: embed users’ personal information into the content, on the expectation that users will only share files with people whom they trust not to abuse their personal information.
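The watermarking idea behind that principle can be sketched in a few lines. Everything here — the function name, the transaction-token format — is hypothetical and only illustrates the principle; it does not describe any actual Safari or Pottermore implementation:

```python
import hashlib

def watermark_notice(name: str, email: str) -> str:
    """Build a hypothetical 'social watermark' string for embedding in an e-book.

    The buyer's details are obfuscated into a short token: the retailer can
    map the token back to the transaction, but the token itself does not
    expose the buyer's email address.
    """
    token = hashlib.sha256(f"{name}|{email}".encode("utf-8")).hexdigest()[:16]
    # The visible name gives the buyer a social incentive not to share
    # the file indiscriminately; the token makes the copy traceable.
    return f"This copy was prepared for {name} (transaction {token})."
```

A string like this would be embedded in the e-book’s content or metadata, so that sharing a file also means sharing something personally identifying.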
- Create a scheme that can support non-retail models such as library lending and can be extended to support additional business models (see below) or the stronger security that industry segments such as higher ed need.
- Include the kinds of user-friendly features that Reclaim Your Game has recommended for video game DRMs. These include respecting privacy by not “phoning home” to servers and ensuring permanent offline use so that files can be used even if the retailer goes out of business. They also include not jeopardizing the security or integrity of users’ devices, as in the infamous “rootkit” installed by CD copy protection technology for music several years ago.
- Eliminate design elements that add disproportionately to cost and complexity. Perhaps the biggest of these is the so-called robustness rules that have become standard elements of DRMs such as OMA DRM, Marlin, and PlayReady where the DRM technology licensor doesn’t own the hardware or platform software. Eliminating “phoning home” also saves costs and complexity. Other elements to be eliminated include key revocation, recoverability, and fancy authentication schemes such as the domain authentication used in UltraViolet.
- Finally, don’t try very hard to make the scheme hack-proof. The strongest protection schemes for commercial content — such as those found in pay television — are those that minimize the impact of hacks so that they are temporary and recoverable; such schemes are too complex, invasive, and expensive for e-book retailers or e-reader makers to consider. Instead, assume that EPUB LCP will be hacked, and rely on two things to blunt the impact: anticircumvention law, and allowing enough differences among implementations that each one will require its own hack (a form of what security technologists call “code diversity”).
With those design principles in mind, we have designed a scheme that takes its inspiration from two sources in particular: the content protection technology used in the eReader/FictionWise e-book technology that is now owned by Barnes & Noble, and the layered functionality concept built into the Digital Media Project’s IDP (Interoperable DRM Platform) standard.
The central idea of EPUB LCP is a passphrase supplied by the user or retailer. This could be an item of personal information, such as a name, email address, or even credit card number; distributors or rights holders can decide what types of passphrases to use or require. The passphrase is irrecoverably obfuscated (e.g. through a hash function) so that even if a hack recovers the passphrase, it won’t recover the personal information; yet the retailer can link the obfuscated passphrase to the user. The obfuscated passphrase is then embedded into the e-book file. If the user wants to share an e-book, all she has to do is share the passphrase. Otherwise, the content must be hacked to be readable.
Other aspects of the draft requirements are covered in the document on the IDPF website. Apart from that, it’s worth mentioning that this type of scheme will not support certain content distribution models unless extensions are added to make them possible. Features intentionally left out of the basic EPUB LCP design include:
- Separate license delivery, which allows different sets of rights for a given file
- License chaining, which supports subscription services
- Domain authentication, which can support multi-device/multi-user “family accounts” a la UltraViolet
- Master-slave secure file transfer, for sideloading onto portable devices, a la Windows Media DRM
- Forward-and-delete, to implement “Digital Personal Property” a la the IEEE P1817 standard
Once again, we set out to design something that meets current market needs and works within current market constraints; EPUB LCP is not a research-lab R&D project.
Again, I’ll be discussing this, as well as the landscape for e-book content protection in general, at Book Expo America next week. Feel free to come and heckle (or just heckle in the comments right here). I’m sure I will have more to report as this very interesting project develops.
Inisoft of Korea Acquires BuyDRM May 24, 2012 Posted by Bill Rosenblatt in DRM, Video.
Inisoft, a Korean company that does software development for mobile media applications, has acquired Texas-based BuyDRM. BuyDRM is a well-established player in the Microsoft DRM ecosystem with customers including HBO, BBC, and NBC. The company offers a DRM platform called KeyOS that incorporates Microsoft’s PlayReady DRM; Inisoft focuses on media player applications and DRM clients for mobile devices.
The deal is a good one for both parties as well as the premium video content marketplace in general. It enables BuyDRM — which will continue to operate under its own name — to come closer to offering the “one stop shopping” that service providers look for when they want to build services that work on multiple devices quickly and easily. This is increasingly necessary as service providers scramble to build “TV Everywhere” type services over multiple networks to a growing number of devices.
The newly merged company is in a sweet spot in the video market, due to PlayReady’s emergence as a leading DRM for Hollywood content, for both streaming and download. Yet while Microsoft has fostered a healthy partner ecosystem, as it typically does for “platform” technologies like PlayReady, that ecosystem can be confusing to service providers.
For one thing, Microsoft isn’t supporting the most popular client platforms by itself. Microsoft provides PlayReady server code and client code for Windows, Silverlight (Microsoft’s web application development platform), and Windows Phone, plus an SDK for porting to non-Microsoft platforms. But unlike other video DRM providers (e.g., Widevine), it doesn’t provide the actual ports to other client devices — including the most popular (and admittedly competing) platforms, Apple’s iOS and Google’s Android. Instead it leaves that to its partners.
The other problem is that Microsoft’s PlayReady partners cover an overlapping array of technologies and services that can be confusing to service providers who just want to get something up and running that meets Hollywood’s content protection requirements. There’s a profusion of vendors with different and often overlapping product sets. As a few examples: Discretix and Trusted Logic offer secure client ports but not server code; Axinom and castLabs offer server-side only; AuthenTec and Irdeto offer both server and client implementations; Verimatrix integrates PlayReady with its own stream protection technology; yet other vendors like Azuki Systems provide complete platforms for multiscreen Internet video content delivery with many more components beyond DRM.
The process of acquiring this technology is thus more complicated than it needs to be, especially in this age of proliferating devices and platforms. Service providers that are interested in using PlayReady to protect licensed content don’t get much help from Microsoft in guiding them through this maze of products and services; partners are left to do all the marketing. (Microsoft itself hasn’t put out a press release on PlayReady in over a year, despite its traction in the market.) In effect, Microsoft has let the market sort itself out through the relatively slow and cumbersome processes of partnerships, OEM deals, multiple-vendor arrangements, and — in the case of BuyDRM and Inisoft — mergers/acquisitions.
Having said that, Inisoft’s acquisition of BuyDRM should help bring some much-needed clarity to service providers. It is a positive development for the market for multi-device video services with studio content.
Roots of the Online Upheaval of SOPA/PIPA May 13, 2012 Posted by Bill Rosenblatt in DRM, Law, United States.
I’m in the middle of reading a new book called Hollywood’s Copyright Wars: From Edison to the Internet, by University of Pennsylvania professor Peter DeCherney. I’ll report back on this book later; today I want to talk about a PhD dissertation that appears in a footnote in this book.
Bill Herman’s dissertation at Penn’s Annenberg School for Communication is called The Battle over Digital Rights Management: A Multi-Method Study of the Politics of Copyright Management Technologies. It was written in 2009, and it presciently anticipates the online movement that led to the downfall of SOPA and PIPA two years later.
Herman — now a professor of film and media at Hunter College in NYC — looked at four legislative developments in U.S. digital copyright policy and measured how they were influenced by three types of communication: direct communications with legislators (e.g., lobbying), the press, and online. The four developments were the Audio Home Recording Act (1992), the anticircumvention provision of the Digital Millennium Copyright Act (1998), efforts to revise the DMCA (2003-2005), and the FCC Broadcast Flag regulation (2006).
Herman’s research analyzes communications in those three arenas and grades them according to whether they tilt “strong copyright” or “strong fair use.” He finds that communications with Congress, which tilted heavily toward “strong copyright,” predominated in the earlier years; press reporting (in the Washington Post and New York Times) was roughly balanced, with a slight “strong fair use” tilt; then online communication took over the debate with a forty-to-one “strong fair use” slant and influenced the repeal of the FCC Broadcast Flag regulation in 2007. Although Herman is unabashedly on the “strong fair use” side, his methodologies for identifying and characterizing these various communications are rigorous and do not show bias.
In his introduction, Herman writes: “While the time period under study does not include their ultimate triumph at the bargaining table — as of this writing, what I describe as the strong fair use coalition still has not won a major legislative victory — it does include the beginning of their time as a genuine force at that table.” As a prediction of the online and copyleft communities killing SOPA and PIPA, this is pretty impressive.
Herman’s thesis goes into great detail about the ways in which the “strong fair use” axis posted lots of material online to feed the debate, while the other side didn’t. It’s a trove of factual evidence about how to shape policy debate in the Internet age (and how not to). It also, in effect, shoots holes in the theory held by some strong-copyright people that a Google-led cabal caused the defeat of SOPA and PIPA.
I admit to not having read the entire 400-plus pages of the dissertation, though it contains a much more manageable 27-page introduction that summarizes the methodology and results. With that caveat in mind, I can identify one shortcoming in Herman’s methodology that, if he had corrected it, might have changed the nature of his conclusions.
Herman tracked press stories that specifically covered the four legislative developments mentioned above. But he didn’t track stories that covered the real-world marketplace of the technologies being regulated – articles by the likes of David Pogue in the Times and Walter Mossberg in the Wall Street Journal. (Nor did he track online content about the same, from the likes of TechCrunch, CNet, etc., not to mention Internet ideologues like Cory Doctorow and thousands or millions of blogs.)
If he had done this, he would have found a much more anti-DRM tilt in the press during the early-mid 2000s than he did. Articles from this period (and thereafter) took a populist, pro-consumer viewpoint: after all, people read Pogue, Mossberg, and CNet to help them choose the best digital content services and devices. The job of these writers isn’t to defend the interests of copyright owners or content creators; it’s to help sell newspapers and drive traffic to websites.
These sources routinely praise digital content services and devices that offer as many rights to as much content for as little money as possible. DRM can be used to enable new content distribution models, but it can also be used to force consumers to pay, limit interoperability, and restrict uses of content that are allowed under copyright law. Thus it makes sense that these writers would paint DRM in a negative light.
One has to wonder how much the pro-consumer point of view in this press coverage influenced legislation. The journalists who covered legislative developments during the period Herman studied did not overlap much with those who covered products and services. For example, Jenna Wortham, Jonathan Weisman, and Brian Stelter provided the bulk of legislative coverage at the Times, while over at CNet, Declan McCullagh wrote about policy and legislation while Greg Sandoval did (and does) most of the marketplace coverage.
Herman attributes the “strong fair use” coalition’s increased legislative influence to its being more effective than the “strong copyright” community at putting its message out online. But I would suggest that it had a lot of help from both professional and amateur writers about consumer media technologies, who led people to wonder why technologies like DRM exist and then to question what role government plays in them.
It might not be as easy to gauge that influence, but it was — and is — surely significant; and that means that the press could well influence digital copyright legislation more strongly than Herman surmises. Herman seems eager to glorify the power of the Internet by itself. While there’s no doubt that Internet forces killed SOPA and PIPA, what Herman calls the “strong fair use” movement has roots outside of the copyleft academia and advocacy groups that he credits (he was an intern at Public Knowledge and considers Larry Lessig a hero).
Regardless, the defeat of SOPA and PIPA has made it clear that the online community now has a lot of power over policy debate. Gary Shapiro of the Consumer Electronics Association wrote a letter to the editor in the Times admitting that “back rooms do not exist on the Internet.” I would suggest that if the RIAAs and MPAAs of the world want to understand how to engage the online public in order to shape future legislation, Herman’s thesis ought to be required reading for them.
As a postscript, there is now a bit of overlap in coverage of digital content products and services and legislative policy, now that people are digging through the post-SOPA/PIPA wreckage and considering what to do next. David Pogue, for example, got around to actually reading the legislation back in January as it was failing. He made two badly-needed observations: that many of the objectors to SOPA and PIPA didn’t like the bills simply because they could cut off the supply of free content, and that such people generally didn’t have a clue about the actual legislation and acted on misinformation about it. Let’s hope that now that Pogue has connected the dots, more people will follow that train of thought to some reasonable policy developments.
Webinar on Studios’ Content Security Policies April 24, 2012 Posted by Bill Rosenblatt in Conditional Access, DRM, Events, Video, Watermarking.
For those who couldn’t attend the breakfast event at the NAB trade show last week, I will be doing a webinar on Content Security Requirements for Multi-Screen Video Services, on Thursday April 26 at noon US east coast time/1700 GMT. I’ll be presenting a synopsis of the whitepaper I published last December on the topic. I will be joined by Petr Peterka, CTO of Verimatrix, sponsor of the webinar. Click here to register.