President Obama recently signed into law a bill that allows people to “jailbreak” or “root” their mobile phones in order to switch wireless carriers. The Unlocking Consumer Choice and Wireless Competition Act was that rarest of rarities these days: a bipartisan bill that passed both houses of Congress by unanimous consent. Copyleft advocates such as Public Knowledge see this as an important step towards weakening the part of the Digital Millennium Copyright Act that outlaws hacks to DRM systems, known as DMCA 1201.
For those of you who might be scratching your heads wondering what jailbreaking your iPhone or rooting your Android device has to do with DRM hacking, here is some background. Last year, the U.S. Copyright Office declined to renew a temporary exception to DMCA 1201 that would make it legal to unlock mobile phones. A petition to the president to reverse the decision garnered over 100,000 signatures, but as he has no power to do this, I predicted that nothing would happen. I was wrong; Congress did take up the issue, with the resulting legislation breezing through Congress last month.
Around the time of the Copyright Office’s ruling last year, Zoe Lofgren, a Democrat who represents a chunk of Silicon Valley in Congress, introduced a bill called the Unlocking Technology Act that would go considerably further in weakening DMCA 1201. This legislation would sidestep the triennial rulemaking process in which the Copyright Office considers temporary exceptions to the law; it would create permanent exceptions to DMCA 1201 for any hack to a DRM scheme, as long as the primary purpose of the hack is not an infringement of copyright. The ostensible aim of this bill is to allow people to break their devices’ DRMs for such purposes as enabling read-aloud features in e-book readers, as well as to unlock their mobile phones.
DMCA 1201 was purposefully crafted so as to disallow any hacks to DRMs even if the resulting uses of content are noninfringing. There were two rationales for this. Most basically, if you could hack a DRM, then you would be able to get unencrypted content, which you could use for any reason, including emailing it to your million best friends (which would have been a consideration in the 1990s when the law was created, as Torrent trackers and cyberlockers weren’t around yet).
But more specifically, if it’s OK to hack DRMs for noninfringing purposes, then potentially sticky questions about whether a resulting use of content qualifies as fair use must be judged the old-fashioned way: through the legal system, not through technology. And if you are trying to enforce copyrights, once you fall through what I have called the trap door into the legal system, you lose: enforcement through the traditional legal system is massively less effective and efficient than enforcement through technology. The media industry doesn’t want judgments about fair use from hacked DRMs to be left up to consumers; it wants to reserve the benefit of the doubt for itself.
The tech industry, on the other hand, wants to allow fair uses of content obtained from hacked DRMs in order to make its products and services more useful to consumers. And there’s no question that the Unlocking Technology Act has aspects that would be beneficial to consumers. But there is a deeper principle at work here that renders the costs and benefits less clear.
The primary motivation for DMCA 1201 in the first place was to erect a legal backstop for DRM technology that wasn’t very effective — such as the CSS scheme for DVDs, which was the subject of several DMCA 1201 litigations in the previous decade. The media industry wanted to avoid an “arms race” against hackers. The telecommunications industry — which was on the opposite side of the negotiating table when these issues were debated in the mid-1990s — was fine with this: telcos understood that with a legal backstop against hacks in place, they would have less responsibility to implement more expensive and complex DRM systems that were actually strong; furthermore, the law placed accountability for hacks squarely on hackers, and not on the service providers (such as telcos) that implemented the DRMs in the first place. In all, if there had to be a law against DRM hacking, DMCA 1201 was not a bad deal for today’s service providers and app developers.
The problem with the Unlocking Technology Act is in the interpretation of phrases in it like “primarily designed or produced for the purpose of facilitating noninfringing uses of [copyrighted] works.” Most DRM hacks that I’m familiar with are “marketed” with language like “Exercise your fair use rights to your content” and disclaimers — nudge, nudge, wink, wink — that the hack should not be used for copyright infringement. Hacks that developers sell for money are subject to the law against products and services that “induce” infringement, thanks to the Supreme Court’s 2005 Grokster decision, so commercial hackers have been on notice for years about avoiding promotional language that encourages infringement. (And of course none of these laws apply outside of the United States.)
So, if a law like the Unlocking Technology Act passes, then copyright owners could face challenges in getting courts to find that DRM hacks were not “primarily designed or produced for the purpose of facilitating noninfringing uses[.]” The question of liability would seem to shift from the supplier of the hack to the user. In other words, this law would render DMCA 1201 essentially toothless — which is what copyleft interests have wanted all along.
From a pragmatic perspective, this law could lead non-dominant retailers of digital content to build DRM hacks into their software for “interoperability” purposes, to help them compete with the market leaders. It’s particularly easy to see why Google should want this, as it has zillions of users but has struggled to get traction for its Google Play content retail operations. Under this law, Google could add an “Import from iTunes” option for video and “Import from Kindle/Nook/iBooks” options for e-books. (And once one retailer did this, all of the others would follow.) As long as those “import” options re-encrypted content in the native DRM, there shouldn’t be much of an issue with “fair use.” (There would be plenty of issues about users violating retailers’ license agreements, but that would be a separate matter.)
This in turn could cause retailers that use DRM to help lock consumers into their services to implement stronger, more complex, and more expensive DRM. They would have to use techniques that help thwart hacks over time, such as reverse engineering prevention, code diversity and renewability, and sophisticated key hiding techniques such as whitebox encryption. Some will argue that making lock-in more of a hassle will cause technology companies to stop trying. This argument is misguided: first, lock-in is fundamental to theories of markets in the networked digital economy and isn’t likely to go away over costs of DRM implementation; second, DRM is far from the only way to achieve lock-in.
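The idea behind some of these hardening techniques can be illustrated with per-device key diversification, a cousin of the code diversity and key-hiding techniques mentioned above: each device derives a distinct content key, so a key extracted from one hacked device is useless on any other. This is a toy sketch in Python; the names are invented, and real implementations use hardened whitebox constructions rather than a bare hash.

```python
import hashlib

def derive_device_key(master_key: bytes, device_id: str) -> bytes:
    # Per-device key diversification: every device gets its own
    # content-decryption key derived from the master key, so a key
    # ripped out of one compromised device does not unlock content
    # delivered to any other device.
    return hashlib.sha256(master_key + device_id.encode()).digest()

master = b"hypothetical-master-key"  # placeholder, not a real scheme
k1 = derive_device_key(master, "device-001")
k2 = derive_device_key(master, "device-002")
assert k1 != k2  # a leak of k1 reveals nothing about k2
```

The same derivation idea underlies renewability: if a device's key leaks, only that device needs to be revoked and re-keyed, not the whole installed base.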
The other question is whether Hollywood studios and other copyright owners will demand stronger DRM from service providers that have little motivation to implement it. The problem, as usual, is that copyright owners demand the technology (as a condition of licensing their content) but don’t pay for it. If there’s no effective legal backstop to weak DRM, then negotiations between copyright owners and technology companies may get tougher. However, this may not be an issue particularly where Hollywood is concerned, since studios tend to rely more heavily on terms in license agreements (such as robustness rules) than on DMCA 1201 to enforce the strength of DRM implementations.
Regardless, the passage of the mobile phone unlocking legislation has led to increased interest in the Unlocking Technology Act, such as the recent panel that Public Knowledge and other like-minded organizations put on in Washington. Rep. Lofgren has succeeded in getting several more members of Congress to co-sponsor her bill. The trouble is, all but one of them are Democrats (in a Republican-controlled House of Representatives not exactly known for cooperation with the other side of the aisle); and the Democratic-controlled Senate has not introduced parallel legislation. This means that the fate of the Unlocking Technology Act is likely to be similar to that of past attempts to do much the same thing: the Digital Media Consumers’ Rights Act of 2003 and the Freedom and Innovation Revitalizing United States Entrepreneurship (FAIR USE) Act of 2007. That is, it’s likely to go nowhere.
Dispatches from IDPF Digital Book 2014, Pt. 3: DRM
June 5, 2014. Posted by Bill Rosenblatt in DRM, Publishing, Standards.
The final set of interesting developments at last week’s IDPF Digital Book 2014 in NYC has to do with DRM and rights.
Tom Doherty, founder of the science fiction publisher Tor Books, gave a speech about his company’s experimentation with DRM-free e-books and its launch of a line of e-novellas without DRM. The buildup to this speech (among those of us who were aware of the program in advance) was palpable, but the result fell with a thud. You had to listen hard to find the tiny morsel about how going DRM-free has barely affected sales; otherwise the speech was standard-issue dogma about DRM with virtually no new insights or data. And he did not take questions from the audience.
DRM has become something of a taboo subject even at conferences like this, so most of the rest of the discussion about it took the form of hallway buzz. And the buzz is that many are predicting that DRM will be on its way out for retail trade e-books within the next couple of years.
That’s the way things are likely to go if technology market forces play out the way they usually do. Retailers other than Amazon (and possibly Apple) will want to embrace more open standards so that they can offer greater interoperability and thus band together to compete with the dominant player; getting rid of DRM is certainly a step in that direction. Meanwhile, publishers, getting more and more fed up with or afraid of Amazon, will find common cause with other retailers and agree to license more of their material for distribution without DRM. (Several retailers in second-tier European countries as well as some retailers for self-publishing authors, such as Lulu, have already dropped DRM entirely.)
Such sentiments will eventually supersede most publishers’ current “faith-based” insistence on DRM. In other words, publishers and retailers will behave more or less the same way as the major record labels and non-Apple retailers behaved back in 2006-2007.
This course of events seems inevitable… unless publishers get some hard, credible data that tells them that DRM helps prevent piracy and “oversharing” more than it hurts the consumer experience. That’s the only way (other than outright inertia) that I can see DRM staying in place for trade books over the next couple of years.
The situation for educational, professional, and STM (scientific, technical, medical) books is another story (as are library lending and other non-retail models). Higher ed publishers in particular have reasons to stick with DRM: for example, e-textbook piracy has risen dramatically in recent years, reaching 34% of students as of last year.
Adobe recently re-launched its DRM with a focus on these publishing market segments. I’d describe the re-launch as “awkward,” though publishers I’ve spoken to would characterize it in less polite terms. This has led to openings for other vendors, such as Sony DADC; and the Readium Foundation is still working on the open-source EPUB Lightweight Content Protection scheme.
The hallway buzz at IDPF Digital Book was that DRM for these market segments is here to stay — except that in higher ed, it may become unnecessary in a longer timeframe, when educational materials are delivered dynamically and in a fashion more akin to streaming than to downloads of e-books.
I attended a panel on EDUPUB, a standards initiative aimed at exactly this future for educational publishing. The effort, led by Pearson Education (the largest of the educational publishers), the IMS Global Learning Consortium, and IDPF, is impressive: it’s based on combining existing open standards (such as IDPF’s EPUB 3) instead of inventing new ones. It’s meant to be inclusive and beneficial to all players in the higher ed value chain, including Pearson’s competitors.
However, EDUPUB is in danger of making the same mistake as the IDPF did by ignoring DRM and other rights issues. When asked about DRM, Paul Belfanti, Pearson’s lead executive on EDUPUB, answered that EDUPUB is DRM-agnostic and would leave decisions on DRM to providers of content delivery platforms. This decision was problematic for trade publishers when IDPF made it for EPUB several years ago; it’s even more potentially problematic for higher ed, since EDUPUB-based materials could certainly be delivered in e-textbook form.
EDUPUB could also help enable one of the Holy Grails of higher ed publishing, which is to combine materials from multiple publishers into custom textbooks or dynamically delivered digital content. Unlike most trade books, textbooks often contain hundreds or thousands of content components, each of which may have different rights associated with them.
Clearing rights for higher ed content is a manual, labor-intensive job. In tomorrow’s world of dynamic digital educational content, it will be more important than ever to make sure that the content being delivered has the proper clearances, in real time. In reality, this doesn’t necessarily involve DRM; it’s mainly a question of machine-readable rights metadata.
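As a rough illustration of what machine-readable rights metadata could enable, here is a toy real-time clearance check over content components. All field names and values are hypothetical illustrations, not drawn from any real rights-metadata standard:

```python
# Each component of a custom textbook carries its own rights metadata,
# possibly from a different rights holder. (Hypothetical schema.)
components = [
    {"id": "ch1-fig3",  "owner": "PublisherA", "uses": {"digital", "print"}},
    {"id": "ch2-text",  "owner": "PublisherB", "uses": {"digital"}},
    {"id": "ch4-photo", "owner": "AgencyC",    "uses": {"print"}},
]

def clear_for_use(components, requested_use):
    # Real-time clearance: split components into those whose metadata
    # permits the requested use and those needing manual clearance.
    cleared = [c for c in components if requested_use in c["uses"]]
    flagged = [c for c in components if requested_use not in c["uses"]]
    return cleared, flagged

cleared, flagged = clear_for_use(components, "digital")
# "ch4-photo" would be flagged for manual review before digital delivery.
```

The point is that once rights are expressed in metadata rather than in contracts filed away in drawers, the clearance step can run automatically at assembly or delivery time, with no DRM involved at all.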
Attempts to standardize this type of rights metadata date back at least to the mid-1990s (when I was involved in such an attempt); none have succeeded. This is a “last mile” issue that EDUPUB will have to address, sooner rather than later, for it to make good on its very promising start. DRM and rights are not popular topics for standards bodies to address, but it has become increasingly clear that they must address these issues to be successful.
Adobe Resurrects E-Book DRM… Again
February 10, 2014. Posted by Bill Rosenblatt in DRM, Publishing.
Over the past couple of weeks, Adobe has made a series of low-key announcements regarding new versions of its DRM for e-books, Adobe Content Server and Rights Management SDK. The new versions are ACS5 and RMSDK10 respectively, and they are released on major platforms now (iOS, Android, etc.) with more to come next month.
The new releases, though rumored for a while, came as something of a surprise to those of us who understood Adobe to have lost interest in the e-book market… again. Adobe first lost interest back in 2006, before the launch of the Kindle kicked the market into high gear. At that time, Adobe announced that version 3 of ACS would be discontinued. Then the following year, Adobe reversed course and introduced ACS4, which supports the International Digital Publishing Forum (IDPF)’s EPUB standard as well as Adobe’s PDF.
This saga repeated itself, roughly speaking, over the past year. As the IDPF worked on version 3 of EPUB, Adobe indicated that it would not upgrade its e-reader software to work with it, nor would it guarantee that ACS4 would support it. The DRM products were transferred to an offshore maintenance group within Adobe, and all indications were that Adobe was not going to develop it any further. Now that’s all changed.
Adobe had originally positioned ACS in the e-book market as a de facto standard DRM. It licensed the technology to a large number of makers of e-reader devices and applications, and e-book distributors around the world. At first this strategy seemed to work: ACS looked like an “everyone but Amazon” de facto standard, and some e-reader vendors (such as Sony) even migrated from proprietary DRMs to the Adobe technology.
But then cracks began to appear: Barnes & Noble “forked” ACS with its own extensions to support features such as user-to-user lending in the Nook system; Apple launched iBooks with a variant of its FairPlay DRM for iTunes content; and independent bookstores’ IndieBound system adopted Kobo, which has its own DRM. Furthermore, interoperability of e-book files among different RMSDK-based e-readers was not exactly seamless. As of today, “pure” ACS represents only a minor part of the e-book retail market, at least in the US, comprising Google Play, SmashWords, and retailers served by OverDrive and other wholesalers.
It’s unclear why Adobe chose to go back into the e-book DRM game, though pressure from publishers must have been a factor. Adobe can’t do much about interoperability glitches among retailers and readers, but publishers and distributors alike have asked for various features to be added to ACS over the years. Publishers have mainly been concerned with the relatively easy availability of hacks, while distributors have also expressed the desire for a DRM that facilitates certain content access models that ACS4 does not currently support.
The new ACS5/RMSDK10 platform promises to give both publishers and distributors just about everything they have asked for. First, Adobe has beefed up the client-side security using (what appear to be) software hardening, key management, and crypto renewability techniques that are commonly used for video and games nowadays.
Adobe has also added support for several interesting content access models. At the top of the list of most requested models is subscriptions. ACS5 will not only support periodical-style subscriptions but also periodic updates to existing files; the latter is useful in STM (scientific, technical, medical) and various professional publishing markets.
ACS5 also contains two enhancements that are of interest to the educational market. One is support for collections of content shared among multiple devices, which is useful for institutional libraries. Another is support for “bulk fulfillment,” such as pre-loading e-reader devices with encrypted books (such as textbooks). Bulk fulfillment requires a feature called separate license delivery, which is supported in many DRMs but hasn’t been in ACS thus far. With separate license delivery, DRM-packaged files can be delivered in any way (download, optical disk, device pre-load, etc.), and then the user’s device or app can obtain licenses for them as needed.
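The separate-license-delivery flow described above can be sketched as follows. This is a deliberately simplified toy: the XOR keystream stands in for real encryption, the in-memory dict stands in for a license server, and none of the names reflect Adobe's actual protocol.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream generator; real DRMs use AES, not this.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def crypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: the same operation encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

license_server = {}  # content_id -> key, held server-side

def package(content_id: str, plaintext: bytes) -> bytes:
    key = hashlib.sha256(b"secret" + content_id.encode()).digest()
    license_server[content_id] = key  # key never ships with the file
    return crypt(key, plaintext)      # file can be delivered any way:
                                      # download, disc, device pre-load
def open_on_device(content_id: str, packaged: bytes) -> bytes:
    key = license_server[content_id]  # license fetched at read time
    return crypt(key, packaged)

blob = package("textbook-101", b"Chapter 1 ...")
assert open_on_device("textbook-101", blob) == b"Chapter 1 ..."
```

Because the license travels separately from the file, a school can pre-load thousands of identical encrypted textbooks onto devices in bulk, and each student's device acquires its license only when the book is first opened.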
Finally, ACS5 will support the Readium Foundation’s open-source EPUB3 e-reader software. Adobe says it is “evaluating the feasibility” of supporting the Readium EPUB 3 SDK in its Adobe Reader Mobile SDK; either way, distributors will now be able to accommodate EPUB3 in their apps.
In all, ACS5 fulfills many of the wish list items that I have heard from publishers over the past couple of years, leaving one with the impression that Adobe could expand ACS’s market share again and move towards its original goal of de facto standard-hood (except for Amazon and possibly Apple). ACS5 is backward compatible with older versions of ACS and does not require that e-books be re-packaged; in other words, users can read their older files in RMSDK10-enabled e-readers.
Yet Adobe made a gaffe in its announcements that immediately jeopardized all this potential: it initially gave the impression that it would force upgrades to ACS5/RMSDK10 this July. (Watch this webinar video from Adobe’s partner Datalogics, starting around the 21-minute mark.) Distributors would have to upgrade their apps to the latest versions, with the hardened security; and users would have to install the upgrades before being able to read e-books packaged with the new DRM. Furthermore, if users obtain e-books packaged with the new DRM, they would not be able to read them on e-readers based on the older RMSDK. (Yet another sign that Adobe has acted on pressure from publishers rather than distributors.) In other words, Adobe wanted to force the entire ACS ecosystem to move to a more secure DRM client in lock-step.
This forced-upgrade routine is similar to what DRM-enabled download services like iTunes (video) do with their client software. But then Apple doesn’t rely on a network of distributors, almost all of which maintain their own e-reading devices and apps.
In any case, the backlash from distributors and the e-publishing blogosphere was swift and harsh; and Adobe quickly relented. Now the story is that distributors can decide on their own upgrade timelines. In other words, publishers will themselves have to put pressure on distributors to upgrade the DRM, at least for the traditional retail and library-lending models; and some less-secure implementations will likely remain out there for some time to come.
Adobe’s new release balances between divergent effects of DRM. On the one hand, DRM interoperability is more important than ever for publishers and distributors alike, to counteract the dominance of Amazon in the e-book retail market; and the surest way to achieve DRM interoperability is to do away with DRM altogether. (There are other ways to inhibit interoperability that have nothing to do with DRM.) But on the other hand, integrating interoperability with support for content access models that are unsupportable without some form of content access control — such as subscriptions and institutional library access — seems like an attractive idea. Adobe has survived tugs-of-war with publishers and distributors over DRM restrictions before, so this one probably won’t be fatal.
Judge Dismisses E-Book DRM Antitrust Case
December 12, 2013. Posted by Bill Rosenblatt in DRM, Law, Publishing.
Last week a federal judge in New York dismissed a lawsuit that a group of independent booksellers brought earlier this year against Amazon.com and the (then) Big Six trade publishers. The suit alleged that the publishers were conspiring with Amazon to use Amazon’s DRM to shut the indie booksellers out of the majority of the e-book market. The three bookstores sought class action status on behalf of all indie booksellers.
In most cases, independent booksellers can’t sell e-books that can be read on Amazon’s Kindle e-readers; instead they have a program through the Independent Booksellers Association that enables consumers to buy e-books from the stores’ websites via the Kobo e-book platform, which has apps for all major devices (PCs, iOS, Android, etc.) as well as Kobo eReaders.
Let’s get the full disclosure out of the way: I worked with the plaintiffs in this case as an expert witness. (Which is why I didn’t write about this case when it was brought several months ago.) I did so because, like others, I read the complaint and found that it reflected various misconceptions about DRM and its place in the e-book market; I thought that perhaps I could help educate the booksellers.
The booksellers asked the court to enjoin (force) Amazon to drop its proprietary DRM, and to enjoin the Big Six to allow independent bookstores to sell their e-books using an interoperable DRM that would presumably work with Kindles as well as iOS and Android devices, PCs, Macs, BlackBerrys, etc. (The term that the complaint used for the opposite of “interoperable DRM” was “inoperable DRM,” much to the amusement of some anti-DRM folks.)
There were two fundamental problems with the complaint. One was that it presupposed the existence of an idealized interoperable DRM that would work with any “interoperable or open architecture device,” and that “Amazon could easily, and without significant cost or disruption, eliminate its device specific restrictive DRM and instead utilize an available interoperable system.”
There is no such thing, nor is one likely to come into being. I worked with the International Digital Publishing Forum (IDPF), the trade association for e-books, to design a “lightweight” content protection scheme that would be attractive to a large number of retailers through low cost of adoption, but that project is far from fruition, and in any case, no one associated with it is under any illusion that all retailers will adopt the scheme. The only DRM that is guaranteed to work with all devices and all retailers forever is no DRM at all.
The closest thing there is to an “interoperable” DRM nowadays is Adobe Content Server (ACS) — which isn’t all that close. Adobe had intended ACS to become an interoperable standard, much like PDF is. Unlike Amazon’s Mobipocket DRM and Apple’s FairPlay DRM for iBooks, ACS can be licensed and used by makers of e-reader devices and apps. Several e-book platforms do use it. But the only retailer with significant market share in the United States that does so is Barnes & Noble, which has modified it and combined it with another DRM that it had acquired years ago. Kobo has its own DRM and uses ACS only for interoperability with other environments.
More relevantly, I have heard it said that Amazon experimented with ACS before launching the Kindle with the Mobipocket DRM that it acquired back in 2005. But in any case, ACS’s presence in the US e-book market is on the wane, and Adobe has stopped actively working on the product.
The second misconception in the booksellers’ complaint was the implication that the major publishers had an interest in limiting their opportunities to sell e-books through indie bookstores. The reality is just the opposite: publishers, from the (now) Big Five on down, would like nothing more than to be able to sell e-books through every possible retailer onto every possible device. The complaint alleges that publishers “confirmed, affirmed, and/or condoned AMAZON’s use of restrictive DRMs” and thereby conspired to restrain trade in the e-book market.
Publishers have been wary of Amazon’s dominant market position for years, but they have tolerated its proprietary technology ecosystem — at least in part because many of them understand that technology-based media markets always settle down to steady states involving two or three different platforms, protocols, formats, etc. DRM helps vendors create walls around their ecosystems, but it is far from the only technology that does so.
As I’ve said before, the ideal of an “MP3 for e-books” is highly unlikely and is largely a mirage in any case. Copyright owners face a constant struggle to create and preserve level playing fields for retailers in the digital age, a struggle that the more savvy among them recognize they can’t win as fully as they would like.
Judge Jed Rakoff picked up on this second point in his opinion dismissing the case. He said, “… nothing about [the] fact [that publishers made agreements with Amazon requiring DRM] suggests that the Publishers also required Amazon to use device-restrictive DRM limiting the devices on which the Publishers’ e-books can be display[ed], or to place restrictions on Kindle devices and apps such that they could only display e-books enabled with Amazon’s proprietary DRM. Indeed, unlike DRM requirements, which clearly serve the Publishers’ economic interests by preventing copyright violations, these latter types of restrictions run counter to the Publishers’ interests …” (emphasis in original).
Indie bookstores are great things; it’s a shame that Amazon’s Kindle ecosystem doesn’t play nicely with them. But at the end of the day — as Judge Rakoff also pointed out — Amazon competes with independent booksellers, and “no business has a duty to aid competitors,” even under antitrust law.
In fact, Amazon has repeatedly shown that it will “cooperate” with competitors only as a means of cutting into their markets. Its extension of the Kindle platform to public library e-lending last year is best seen as part of its attempt to invade libraries’ territory. More recently, Amazon has attempted to get indie booksellers interested in selling Kindle devices in their stores, a move that has elicited frosty reactions from the bookstores.
The rest of Judge Rakoff’s opinion dealt with the booksellers’ failure to meet legal criteria under antitrust law. Independent booksellers might possibly have a case to bring against Amazon for boxing them out of the market as reading goes digital, but Book House of Stuyvesant Plaza et al v. Amazon.com et al wasn’t it.
MovieLabs Releases Best Practices for Video Content Protection
October 23, 2013. Posted by Bill Rosenblatt in DRM, Standards, Video.
As Hollywood prepares for its transition to 4k video (four times the resolution of HD), it appears to be adopting a new approach to content protection, one that promotes more service flexibility and quicker time to market than previous approaches but carries other risks. The recent publication of a best-practices document for content protection from MovieLabs, Hollywood’s R&D consortium, signals this new approach.
In previous generations of video technology, Hollywood studios got together with major technology companies and formed technology licensing entities to set and administer standards for content protection. For example, a subset of the major studios teamed up with IBM, Intel, Microsoft, Panasonic, and Toshiba to form AACS LA, the licensing authority for the AACS content protection scheme for Blu-ray discs and (originally) HD DVDs. AACS LA defines the technology specification, sets the terms and conditions under which it can be licensed, and performs other functions to maintain the technology.
A licensing authority like AACS LA (and there is a veritable alphabet soup of others) provides certainty around technology implementation, including compliance, patent licensing, and interoperability among licensees. It helps insulate the major studios from accusations of collusion by being a separate entity in which at most a subset of them participate.
As we now know, the licensing-authority model has its drawbacks. One is that it can take the licensing authority several years to develop technology specs to a point where vendors can implement them — by which time they risk obsolescence. Another is that it does not offer much flexibility in how the technology can adapt to new device types and content delivery paradigms. For example, AACS was designed with optical discs in mind at a time when Internet video streaming was just a blip on the horizon.
A document published recently by MovieLabs signals a new approach. MovieLabs Specification for Enhanced Content Protection is not really a specification, in that it is nowhere near detailed enough to serve as the basis for implementations. It is more a compendium of what we now understand as best practices for protecting digital video, and it leaves room for change and interpretation.
The best practices in the document amount to a wish list for Hollywood. They include things like:
- Techniques for limiting the impact of hacks to DRM schemes, such as requiring device as well as content keys, code diversity (a hack that works on one device won’t necessarily work on another), title diversity (a hack that works with one title won’t necessarily work on another), device revocation, and renewal of protection schemes.
- Proactive renewal of software components instead of “locking the barn door after the horse has escaped.”
- Component technologies that are currently considered safe from hacks by themselves, including standard AES encryption with a minimum key length of 128 bits and version 2.2 or better of the HDCP scheme for protecting links such as HDMI cables (earlier versions were hacked).
- Hardware roots of trust on devices, running in secure execution environments, to limit opportunities for key leakage.
- Forensic watermarking, meaning that content should have information embedded in it about the device or user who requested it.
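The last item, forensic watermarking, can be illustrated with a toy least-significant-bit embedder. Real systems use robust, imperceptible transforms that survive transcoding and re-encoding, but the traitor-tracing idea is the same: bake a requesting device's ID into the content so a leaked copy can be traced to its source.

```python
# Toy forensic watermark: hide a device ID in the low bits of media
# samples. Naive LSB embedding is trivially stripped; this is only an
# illustration of the concept, not a production technique.
def embed(samples: bytearray, device_id: int, bits: int = 16) -> bytearray:
    marked = bytearray(samples)
    for i in range(bits):
        bit = (device_id >> i) & 1
        marked[i] = (marked[i] & 0xFE) | bit  # overwrite lowest bit
    return marked

def extract(samples: bytes, bits: int = 16) -> int:
    # Recover the embedded ID from a leaked copy.
    return sum((samples[i] & 1) << i for i in range(bits))

media = bytearray(range(64))          # stand-in for decoded samples
marked = embed(media, device_id=0xBEEF)
assert extract(marked) == 0xBEEF      # leak traced back to the device
```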
Those who saw Sony Pictures CTO Spencer Stephens’s talk at the Anti-Piracy and Content Protection Summit in LA back in July will find much of this familiar. Some of these techniques come from the current state of the art in content protection for pay TV services; for more detail on this, see my whitepaper The New Technologies for Pay TV Content Security. Others, such as the forensic watermarking requirement, come from current systems for distributing HD movies in early release windows. And some result from lessons learned from cracks to older technologies such as AACS, HDCP, and CSS (for DVDs).
MovieLabs is unable to act as a licensor of standards for content protection (or anything else, for that matter). The six major studios set it up in 2005 as a movie industry joint R&D consortium modeled on the cable television industry’s CableLabs and other organizations enabled by the National Cooperative Research Act of 1984, such as Bellcore (telecommunications) and SEMATECH (semiconductors). R&D consortia are allowed, under antitrust law, to engage in “pre-competitive” research and development, but not to develop technologies that are proprietary to their members.
Accordingly, the document contains a lot of language intended to disassociate these requirements from any actual implementations, standards, or studio policies, such as “Each studio will determine individually which practices are prerequisites to the distribution of its content in any particular situation” and “This document defined only one approach to security and compatibility, and other approaches may be available.”
Instead, the best-practices approach looks like it is intended to give “signals” from the major studios to content protection technology vendors, such as Microsoft, Irdeto, Intertrust, and Verimatrix, who work with content service providers. These vendors will then presumably develop protection schemes that follow the best practices, with an understanding that studios will then agree to license their content to those services.
The result of this approach should be legal content services for next-generation video that get to market faster. The best practices are independent of things like content delivery modalities (physical media, downloads, streaming) and largely independent of usage rules. Therefore they should enable a wider variety of services than is possible with the traditional licensing authority paradigm.
Yet this approach has two drawbacks compared to the older approach. (And of course the two approaches are not mutually exclusive.) First is that it jeopardizes the interoperability among services that Hollywood craves, and has gone to great lengths to preserve in the UltraViolet standard. Service providers and device makers can incorporate content protection schemes that follow MovieLabs' best practices, but consumers may not be able to move content among those services, and service providers will be able to use content protection schemes to lock users in to their services. In contrast, many in Hollywood are now nostalgic for the DVD because, although its protection scheme was easily hacked, it guaranteed interoperability across all players (at least all within a given geographic region).
The other drawback is that the document is a wish list provided by organizations that won’t pay for the technology. This means that downstream entities such as device makers and service providers will treat it as the maximum amount of protection that they have to implement to get studio approval. Because there is no license agreement that they have to sign to get access to the technology, the downstream entities are likely to negotiate down from there. (Such negotiation already took place behind the scenes during the rollout of Blu-ray, as player makers refused to implement some of the more expensive protection features and some studios agreed to let them slip.)
Downstream entities are particularly likely to push back against some of MovieLabs’s best practices that involve costs and potential impairments of the user experience; examples include device connectivity to networks for purposes of authentication and revocation, proactive renewal of device software, and embedding of situation-specific watermarks.
Surely the studios understand all this. The publication of this document by MovieLabs shows that Hollywood is willing to entertain dialogues with service providers, device makers, and content protection vendors to speed up time-to-market of legitimate video services and ensure that downstream entities can innovate more freely. How much protection will the studios ultimately end up with when 4k video reaches the mainstream? It will be very interesting to watch over the next couple of years.
E-Book Watermarking Gains Traction in Europe
October 3, 2013. Posted by Bill Rosenblatt in DRM, Europe, Publishing, United States, Watermarking.
The firm Rüdiger Wischenbart Content and Consulting has just released the latest version of Global eBook, its overview of the worldwide ebook market. This sweeping, highly informative report is available for free during the month of October.
The report contains lots of information about piracy and rights worldwide — attitudes, public policy initiatives, and technologies. A few conclusions in particular stand out. First, while growth of e-book reading appears to be slowing down, it has reached a level of 20% of book sales in the U.S. market (and even higher by unit volume). This puts e-books firmly in the mainstream of media consumption.
Accordingly, e-book piracy has become a mainstream concern. Publishers — and their trade associations, such as the Börsenverein des Deutschen Buchhandels in Germany, which is the most active on this issue — had been less involved in the online infringement issue than their counterparts in the music and film industries, but that’s changing now. Several studies have been done that generally show e-book piracy levels rising rapidly, but there’s wide disagreement on its volume. And virtually no data at all is available about the promotional vs. detrimental effects of unauthorized file-sharing on legitimate sales. Part of the problem is that e-book files are much smaller than music MP3s or (especially) digital video or games; therefore e-book files are more likely to be shared through email (which can’t be tracked) and less likely to be available through torrent sites.
The lack of quantitative understanding of infringement and its impact has led different countries to pursue different paths, in terms of both legal actions and the use of antipiracy technologies. Perhaps the most surprising of the latter trend — at least to those of us on this side of the Atlantic — is the rapid ascendancy of watermarking (a/k/a “social DRM”) in some European countries. For example:
- Netherlands: Arbeiderspers/Bruna, the country’s largest book publisher, switched from traditional DRM to watermarking for its entire catalog at the beginning of this year.
- Austria: 65% of the e-books available in the country have watermarks embedded, compared to only 35% with DRM.
- Hungary: Watermarking is now the preferred method of content protection.
- Sweden: Virtually all trade ebooks are DRM-free. The e-book distributor eLib (owned by the Swedish media giant Bonnier), uses watermarking for 80% of its titles.
- Italy: Watermarking has grown from 15% to 42% of all e-books, overtaking the 35% that use DRM.
(Note that these are, with all due respect to them, second-tier European countries. I have anecdotal evidence that e-book watermarking is on the rise in the UK, but not much evidence of it in France or Germany. At the same time, the above countries are often test beds for technologies that, if successful, spread to larger markets — whether by design or market forces.)
Meanwhile, there’s still a total absence of data on the effects of both DRM and watermarking on users’ e-book behavior — which is why I have been discussing with the Book Industry Study Group the possibility of doing a study on this.
The prevailing attitude among authors is that DRM should still be used. An interesting data point on this came back in January when Lulu, one of the prominent online self-publishing services, decided to stop offering authors the option of DRM protection (using Adobe Content Server, the de facto standard DRM for ebooks outside of the Amazon and Apple ecosystems) for ebooks sold on the Lulu site. Lulu authors would still be able to distribute their titles through Amazon and other services that use DRM.
Lulu announced this in a blog post which elicited large numbers of comments, largely from authors. My pseudo-scientific tally of the authors’ comments showed that they are in favor of DRM — and unhappy with Lulu’s decision to drop it — by more than a two-to-one margin. Many said that they would drop Lulu and move to its competitor Smashwords, which continues to support DRM as an option. Remember that these are independent authors of mostly “long tail” titles in need of exposure, not bestselling authors or major publishers.
One reason for Lulu’s decision to drop DRM was undoubtedly the operational expense. Smashwords’ CEO, Mark Coker, expressed the attitudes of ebook distributors succinctly in a Publishers Weekly article covering Lulu’s move when he said, “What’s relevant is whether the cost of DRM (measured by fees to Adobe, [and for consumers] increased complexity, decreased availability, decreased sharing and word of mouth, decreased customer satisfaction) outweigh the benefits[.]” As we used to say over here, that’s the $64,000 question.
Content Protection for 4k Video
July 2, 2013. Posted by Bill Rosenblatt in DRM, Technologies, Video, Watermarking.
As Hollywood adepts know, the next phase in picture quality beyond HD is something called 4k. Although the name suggests 4k (perhaps 4096) pixels in the vertical or horizontal direction, its resolution is actually 3840 × 2160: twice the pixels of HD (1920 × 1080) in each of the horizontal and vertical directions, and thus four times the total pixel count.
4k is the highest quality of image actually captured by digital cinematography right now. The question is, how will it be delivered to consumers, in what timeframe, and how will it be protected?
Those of us who attended the Anti-Piracy and Content Protection Summit in LA last week learned that the answer to the latter question is unknown as yet. Spencer Stephens, CTO of Sony Pictures, gave a brief presentation explaining what 4k is and outlining his studio’s wish list for 4k content protection. He said that it was an opportunity to start fresh with a new design, compared to the AACS content protection technology for Blu-ray discs, which is 10 years old.
This is interesting on a couple of levels. First, it implies that the studios have not predetermined a standard for 4k content protection; in contrast, Blu-ray discs were introduced in the market about three years after AACS was designed. Second, Stephens’s remarks had the flavor of a semi-public appeal to the community of content protection vendors — some of which were in the audience at this conference — for help in designing DRM schemes for 4k that met his requirements.
Stephens’s wish list included such elements as:
- Title-by-title diversity, so that a technique used to hack one movie title doesn’t necessarily apply to another
- Requiring players to authenticate themselves online before playback, which enables hacked players to be denied but makes it impossible to play 4k content without an Internet connection
- The use of HDCP 2.2 to protect digital outputs, since older versions of HDCP have been hacked
- Session-based watermarking, so that each 4k file is marked with the identity of the device or user that downloaded it (a technique used today with early-window HD content)
- The use of trusted execution environments (TEE) for playback, which combine the security of hardware with the renewability of software
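As a rough sketch of how the online-authentication item on that list might work (the rules and identifiers here are hypothetical, not Sony's actual design), a license server could check each player against a revocation list and a minimum protection-software version before releasing a content key:

```python
# Hypothetical server-side check run when a player requests playback:
# known-hacked devices are denied by ID, and clients running protection
# software older than the current floor must renew before they get a key.
REVOKED_DEVICES = {"dev-0413", "dev-0779"}   # invented example IDs
MINIMUM_CLIENT_VERSION = (2, 2)

def authorize_playback(device_id, client_version):
    if device_id in REVOKED_DEVICES:
        return "denied: device revoked"
    if client_version < MINIMUM_CLIENT_VERSION:
        return "renew: update protection software"
    return "granted"
```

The trade-off Stephens acknowledged is built into this design: a hacked device can be shut off remotely, but nothing plays without a network connection to the server.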
From time to time I hear from startup companies that claim to have designed better technologies for video content protection. I tell them that getting studio approval for new content protection schemes is a tricky business. You can get studio technology executives excited about your technology, but they don’t actually “approve” it such that they guarantee they’ll accept it if it’s used in a content service. Instead, they expect service providers to propose the technology in the context of the overall service, and the studios will consider providing licenses to their content in that broader context. And of course the studios don’t actually pay for the technology; the service providers or consumer device makers do.
In other words, studios “bless” new content protection technologies, but otherwise the entire sales process takes place at arm’s length from the studios. In that sense, the studios act somewhat like a regulatory agency does when setting guidelines for compliance with a regulation such as HIPAA or GLB (for information privacy in healthcare and financial services respectively). The resulting technology often meets the letter but not the spirit of the regulations.
In this respect, Stephens’s remarks were a bit of fresh air. They are an invitation to more open dialog among vendors, studios, and service providers about the types of content protection that they may be willing to implement when it comes time to distribute 4k content to consumers.
In the past, such discussions often happened behind closed doors, took the form of unilateral “unfunded mandates,” and/or resulted in implementations that plainly did not work. As technology gets more sophisticated and the world gets more complex, Hollywood is going to have to work more closely with downstream entities in the content distribution chain if it wants its content protected. Spencer Stephens’s presentation was a good start in that direction.
Kim Dotcom Embraces DRM
January 22, 2013. Posted by Bill Rosenblatt in DRM, New Zealand, Services.
Kim Dotcom launched a new cloud file storage service, the New Zealand-based Mega, last weekend on the one-year anniversary of the shutdown of his previous site, the notorious MegaUpload. (The massive initial interest in the site* prevented me from trying out the new service until today.)
Mega encrypts users’ files, using what looks like a content key (using AES-128) protected by 2048-bit RSA asymmetric-key encryption. It derives the latter keys from users’ passwords and other pseudo-random data. Downloading a file from a Mega account requires knowing either the password that was used to generate the RSA key (i.e., logging in to the account used to upload the file) or the key itself.
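Here is a stdlib-only Python sketch of that layered-key pattern. To keep it self-contained, it substitutes a SHA-256 counter-mode keystream for the AES-128 and RSA-2048 operations Mega actually uses; the structure — a random per-file content key, wrapped under a key derived from the account password — is the point, not the ciphers.

```python
import hashlib
import os

def keystream(key, length):
    # SHA-256 in counter mode, standing in for Mega's real ciphers.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data, key):
    # Encryption and decryption are the same XOR operation.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# 1. Derive the user's master key from the account password.
master_key = hashlib.pbkdf2_hmac("sha256", b"account-password", b"salt", 100_000)

# 2. Encrypt the file under a random per-file content key.
content_key = os.urandom(16)
ciphertext = xor_crypt(b"the plaintext file", content_key)

# 3. Wrap the content key under the master key for server-side storage.
wrapped_key = xor_crypt(content_key, master_key)

# Download: unwrap the content key (by logging in), or obtain it
# directly from a shared link, then decrypt the file.
recovered = xor_crypt(ciphertext, xor_crypt(wrapped_key, master_key))
assert recovered == b"the plaintext file"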
Hmm. Content encrypted with symmetric keys that in turn are protected by asymmetric keys… sounds quite a bit like DRM, doesn’t it?
Well, not quite. While DRM systems assume that file owners won’t want to publish keys used to encrypt the files, Mega not only allows but enables you to publish your files’ keys. Mega lets you retrieve the key for a given file in the form of a URL; just right-click on the file you want and select “Get link.” (Here’s a sample.) You can put the resulting URL into a blog post, tweet, email message, or website featuring banner ads for porn and mail-order brides.
(And of course, unlike DRM systems, once you obtain a key and download a file, it’s yours in unencrypted form to do with as you please. The encryption isn’t integrated into a secure player app.)
Yet in practical terms, Mega is really no different from file-storage services that let users publish URLs to files they store — examples of which include RapidShare, 4Shared, and any of dozens of file transfer services (YouSendIt, WhaleMail, DropSend, Pando, SendUIt, etc.).
Mega touts its use of encryption as a privacy benefit. What it really offers is privacy from the kinds of piracy monitoring services that media companies use to generate takedown notices — an application of encryption that hardcore pirates have used and that Kim Dotcom purports to “take … out to the mainstream.” It will be impossible to use content identification technologies, such as fingerprinting, to detect the presence of copyrighted materials on Mega’s servers. RapidShare, for example, analyzes third-party links to files on its site for potential infringements; Mega can’t do any such thing, by design.
Mega’s use of encryption also plays into the question of whether it could ever be held secondarily liable for its users’ infringements under laws such as DMCA 512 in the United States. The Beltway tech policy writer Paul Sweeting wrote an astute analysis of Mega’s chances against the DMCA over the weekend.
Is Kim Dotcom simply thumbing his nose at Big Media again? Or is he seriously trying to make Mega a competitor to legitimate, prosaic file storage services such as DropBox? The track records of services known for piracy trying to go “legit” are not encouraging — just ask Bram Cohen (BitTorrent Entertainment Network) or Global Gaming Factory (purchasers of The Pirate Bay’s assets). Still, this is one to watch as the year unfolds.
*Or, just possibly, server meltdowns faked to generate mountains of credulous hype?
As I have worked with video service providers that are trying to upgrade their offerings to include online and mobile services, I’ve seen bewilderment about the maze of codecs, streaming protocols, and player apps as well as content protection technologies that those service providers need to understand. Yet a development that took place earlier this month should help ease some of the complexity.
Microsoft’s PlayReady is becoming a popular choice for content protection. Dozens of service providers use it, including BSkyB, Canal+, HBO, Hulu, MTV, Netflix, and many ISPs, pay TV operators, and wireless carriers. PlayReady handles both downloads and streaming, and it is currently the only commercial DRM technology certified for use with UltraViolet (though that should change soon). Microsoft has developed a healthy ecosystem of vendors that supply things like player apps for different platforms, “hardening” of client implementations to ensure robustness, server-side integration services, and end-to-end services. And after years of putting in very little effort on marketing, Microsoft has finally upgraded its PlayReady website with information to make it easier to understand how to use and license the technology.
Streaming protocols are still a bit of an issue, though. Several vendors have created so-called adaptive streaming protocols, which monitor the user’s throughput and vary the bit rate of the content to ensure optimal quality without interruptions. Apple has HTTP Live Streaming (HLS), Microsoft has Smooth Streaming, Adobe has HTTP Dynamic Streaming (HDS), and Google has technology it acquired from Widevine. Yet operators have been more interested in Dynamic Adaptive Streaming over HTTP (DASH), an emerging vendor-independent MPEG standard. The hope with MPEG-DASH is that operators can use the same protocol to stream to a wide variety of client devices, thereby making deployment of TV Everywhere-type services cheaper and easier.
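The core of every adaptive streaming protocol, proprietary or DASH, is a rate-selection loop in the player. A minimal sketch (the bitrate ladder values here are invented for illustration; real players also weigh buffer occupancy and throughput history):

```python
def choose_bitrate(throughput_kbps, ladder=(400, 1200, 2500, 5000, 8000)):
    """Pick the highest rung of a hypothetical bitrate ladder that
    fits within the measured throughput, keeping ~20% headroom."""
    budget = throughput_kbps * 0.8
    chosen = ladder[0]  # fall back to the lowest rung
    for rate in ladder:
        if rate <= budget:
            chosen = rate
    return chosen

print(choose_bitrate(3500))  # a 3.5 Mbps link gets the 2500 kbps rendition
print(choose_bitrate(300))   # a starved link falls back to 400 kbps
```

The player re-runs this decision for each segment it fetches, which is what lets the stream ratchet quality up and down without rebuffering.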
MPEG-DASH took a significant step towards real-world viability over the last few months with the establishment of the DASH Industry Forum, a trade association that promotes market adoption of the standard. Microsoft and Adobe are members, though not Apple or Google, indicating that at least some of the vendors of proprietary adaptive streaming will embrace the standard. The membership also includes a healthy critical mass of vendors in the video content protection space: Adobe, BuyDRM, castLabs, Irdeto, Nagra, and Verimatrix — plus Cisco, owner of NDS, and Motorola, owner of SecureMedia.
Adaptive streaming protocols need to be integrated with content protection schemes. PlayReady was originally designed to work with Smooth Streaming. It has also been integrated with HLS, which is probably the most popular of the proprietary adaptive streaming schemes. Integration of PlayReady with MPEG-DASH is likely to be viewed as a safe choice, in line with the way the industry is going. That solution came into view this month as BuyDRM and Fraunhofer IIS announced an integration of MPEG-DASH with PlayReady for the HBO GO service in Europe. HBO GO is HBO’s “over the top” service for subscribers.
For the HBO GO demo, BuyDRM implemented a version of its PlayReady client that uses Fraunhofer’s AAC 5.1 surround-sound codec, which ships with devices that run Android 4.1 Jelly Bean. The integration is being showcased with HD quality video on HBO’s “Boardwalk Empire” series. Users can connect Android 4.1 devices with the proper outputs — even handsets — to home-theater audio playback systems to get an experience equivalent to playing a Blu-ray disc. The current implementation supports live broadcasting, with VOD support on the way shortly.
PlayReady integrated with MPEG-DASH is likely to be a popular choice for a variety of video service providers, ranging from traditional pay TV operators to over-the-top services like HBO Go. BuyDRM and Fraunhofer’s deployment is an important step towards that choice becoming widely feasible.
DRM Anticircumvention for Dummies
July 15, 2012. Posted by Bill Rosenblatt in DRM, Law.
I have seen a lot of writings and gotten a lot of feedback regarding the EPUB Lightweight Content Protection (EPUB LCP) scheme I am helping to design for the International Digital Publishing Forum (IDPF), which oversees the EPUB standard. The criticisms fall into two buckets: “DRM sucks, so why is the IDPF wasting time on this?” and “the security is too weak; publishers need stronger protection.”
Yet these diametrically opposed criticisms have one thing in common: a lack of understanding of how anticircumvention law, such as Section 1201 of the DMCA in the United States, works in practice and how it figures into the design of EPUB LCP. This lack of understanding is common to both DRM opponents and people from DRM technology vendors. Anticircumvention law makes it a crime to hack DRMs.
So I thought I would offer some information about the practicalities of anticircumvention law, presented as rebuttals to some of the false assertions that I have heard. Three caveats are in order: first, the following is going to be U.S.-centric. That’s because I am most familiar with the U.S. anticircumvention law, but also because the U.S. law is by far the most highly developed through litigation. Second, I am not a lawyer — nor are any of the people who have talked to me about this. So if you’re a legal expert and I’m wrong, please correct me. Third, I’m not an official spokesman for IDPF, and they may have different views.
Assertion: Anticircumvention law doesn’t stop hacks; hacks are going to be available anyway.
Reality: Of course the law doesn’t eliminate hacks, but it does make hacks less easily accessible to people who are not determined hackers. The law comes down hardest on those who gain commercially from their hacks. Because of the anticircumvention law, there is not (for example) a “convert from Amazon” option in Nook readers and apps, or the converse in Kindles; instead you have to go find the hack, install it, and use it – something that requires more time, determination, and skill. (Note that this is a different issue from “DRM doesn’t stop piracy.” Here I agree: absolutely, there are various other ways to infringe copyright, some of which are easier than hacking DRMs.)
Assertion: DRM systems that aren’t robust don’t qualify for the anticircumvention law.
Reality: This one comes from DRM vendors, which have vested interests in robustness. To answer this, you need to look at the history of litigation (again, this is a US-centric view). The most important legal precedent here is Universal v. Reimerdes, which was decided in U.S. district court in 2000 and upheld on appeal. This case was one of several involving the weak CSS encryption scheme for DVDs. The defense asked the court to find it not liable because CSS was too weak to meet the definition of “effective” in “technological measure [that] effectively controls access to a work” under the law. In his opinion, the judge explicitly refused to establish an “effectiveness test” by deciding this issue. I know of a couple of cases that attempted to revisit this issue but were dropped. The effect, at least for now, is that any DRM that’s as strong (i.e. weak) as CSS, or stronger, should qualify for protection under the law.
Assertion: The IDPF intends to sue hackers as part of the EPUB LCP initiative.
Reality: Not true at all. The IDPF is not even in a position to facilitate litigation the way the MPAA and RIAA do. (For one thing, it’s an international body, not a national one.) If any organization is going to facilitate litigation, it would be the Association of American Publishers (AAP) in the U.S., which has not been involved in the EPUB LCP initiative. More generally, it may help to explain how the litigation process works in practice. Copyright owners do the suing; they are the actual plaintiffs. They will only bother to sue under the anticircumvention law if they see hacks that are being used widely enough to cause significant infringement and/or the supplier of the hack is making money from the hack. So as a practical matter, a hack that “sits in the shadows” as described above is unlikely to be used widely enough to draw a lawsuit.
Assertion: Users get sued for using hacks.
Reality: Although the law does provide penalties for using as well as distributing hacks, individual users have never gotten sued for using hacks (or for creating hacks for personal use only). Users have been sued for copyright infringement; if you hack a DRM, you may be infringing copyright. Only those who make hacks publicly available have ever been sued for DMCA 1201 violations.
Assertion: This is a US matter and irrelevant elsewhere in the world, especially now that ACTA is dead in Europe.
Reality: As mentioned above, the interpretation of “effectiveness” is a US-centric one that may or may not apply elsewhere. But otherwise, this statement is also incorrect. Anticircumvention law is on the books today in most industrialized countries, including EU member states (resulting from the European Union Copyright Directive of 2001), Australia, New Zealand, Japan, Singapore, India, China, Brazil, and a few others; South Korea and Canada should get anticircumvention laws soon.