Kim Dotcom Embraces DRM
January 22, 2013. Posted by Bill Rosenblatt in DRM, New Zealand, Services.
Kim Dotcom launched a new cloud file storage service, the New Zealand-based Mega, last weekend on the one-year anniversary of the shutdown of his previous site, the notorious MegaUpload. (The massive initial interest in the site* prevented me from trying out the new service until today.)
Mega encrypts users’ files, using what looks like a content key (using AES-128) protected by 2048-bit RSA asymmetric-key encryption. It derives the latter keys from users’ passwords and other pseudo-random data. Downloading a file from a Mega account requires knowing either the password that was used to generate the RSA key (i.e., logging in to the account used to upload the file) or the key itself.
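The layered-key pattern described above can be sketched in a few lines. This is a minimal, standard-library-only illustration of the idea, not Mega's actual implementation: Mega uses AES-128 for content and a 2048-bit RSA keypair derived from the password, whereas here a PBKDF2-derived master key and a SHA-256 counter-mode keystream stand in for the asymmetric layer and for AES. All names are illustrative.

```python
import hashlib
import os

def derive_master_key(password: bytes, salt: bytes) -> bytes:
    # Derive a 128-bit master key from the user's password (PBKDF2-HMAC-SHA256).
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)[:16]

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Stands in for AES purely so the
    # sketch stays standard-library-only; do not use this in real systems.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def wrap_key(content_key: bytes, master_key: bytes) -> bytes:
    # XOR the random per-file content key with a keystream derived from the
    # password-derived master key; only the wrapped form is stored server-side.
    stream = keystream(master_key, len(content_key))
    return bytes(a ^ b for a, b in zip(content_key, stream))

unwrap_key = wrap_key  # XOR with the same keystream is its own inverse

master = derive_master_key(b"correct horse battery staple", b"per-user-salt")
content_key = os.urandom(16)  # fresh random key for each file
wrapped = wrap_key(content_key, master)
```

The point of the two layers is that re-keying a password touches only the small wrapped keys, never the bulk-encrypted files.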
Hmm. Content encrypted with symmetric keys that in turn are protected by asymmetric keys… sounds quite a bit like DRM, doesn’t it?
Well, not quite. While DRM systems assume that file owners won’t want to publish keys used to encrypt the files, Mega not only allows but enables you to publish your files’ keys. Mega lets you retrieve the key for a given file in the form of a URL; just right-click on the file you want and select “Get link.” (Here’s a sample.) You can put the resulting URL into a blog post, tweet, email message, or website featuring banner ads for porn and mail-order brides.
(And of course, unlike DRM systems, once you obtain a key and download a file, it’s yours in unencrypted form to do with as you please. The encryption isn’t integrated into a secure player app.)
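The “Get link” mechanism works because the key travels in the URL fragment (the part after `#`), which browsers keep client-side rather than sending in the HTTP request. A hedged sketch of parsing such a link, assuming a `#!<file_id>!<key>` fragment layout like Mega's launch-era links (the exact format may differ; the hostname is illustrative):

```python
from urllib.parse import urlsplit

def parse_share_link(url: str) -> tuple[str, str]:
    # Split a Mega-style share link into (file_id, key). The fragment is
    # never transmitted to the server, so the host storing the ciphertext
    # need not ever see the decryption key.
    fragment = urlsplit(url).fragment  # everything after '#'
    _, file_id, key = fragment.split("!")
    return file_id, key

file_id, key = parse_share_link("https://mega.example/#!abc123!QmFzZTY0S2V5")
```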
Yet in practical terms, Mega is really no different from file-storage services that let users publish URLs to files they store — examples of which include RapidShare, 4Shared, and any of dozens of file transfer services (YouSendIt, WhaleMail, DropSend, Pando, SendUIt, etc.).
Mega touts its use of encryption as a privacy benefit. What it really offers is privacy from the kinds of piracy monitoring services that media companies use to generate takedown notices — an application of encryption that hardcore pirates have used and that Kim Dotcom purports to “take … out to the mainstream.” It will be impossible to use content identification technologies, such as fingerprinting, to detect the presence of copyrighted materials on Mega’s servers. RapidShare, for example, analyzes third-party links to files on its site for potential infringements; Mega can’t do any such thing, by design.
Mega’s use of encryption also plays into the question of whether it could ever be held secondarily liable for its users’ infringements under laws such as DMCA 512 in the United States. The Beltway tech policy writer Paul Sweeting wrote an astute analysis of Mega’s chances against the DMCA over the weekend.
Is Kim Dotcom simply thumbing his nose at Big Media again? Or is he seriously trying to make Mega a competitor to legitimate, prosaic file storage services such as Dropbox? The track records of services known for piracy trying to go “legit” are not encouraging — just ask Bram Cohen (BitTorrent Entertainment Network) or Global Gaming Factory (purchasers of The Pirate Bay’s assets). Still, this is one to watch as the year unfolds.
*Or, just possibly, server meltdowns faked to generate mountains of credulous hype?
As I have worked with video service providers that are trying to upgrade their offerings to include online and mobile services, I’ve seen bewilderment about the maze of codecs, streaming protocols, and player apps as well as content protection technologies that those service providers need to understand. Yet a development that took place earlier this month should help ease some of the complexity.
Microsoft’s PlayReady is becoming a popular choice for content protection. Dozens of service providers use it, including BSkyB, Canal+, HBO, Hulu, MTV, Netflix, and many ISPs, pay TV operators, and wireless carriers. PlayReady handles both downloads and streaming, and it is currently the only commercial DRM technology certified for use with UltraViolet (though that should change soon). Microsoft has developed a healthy ecosystem of vendors that supply things like player apps for different platforms, “hardening” of client implementations to ensure robustness, server-side integration services, and end-to-end services. And after years of putting in very little effort on marketing, Microsoft has finally upgraded its PlayReady website with information to make it easier to understand how to use and license the technology.
Streaming protocols are still a bit of an issue, though. Several vendors have created so-called adaptive streaming protocols, which monitor the user’s throughput and vary the bit rate of the content to ensure optimal quality without interruptions. Apple has HTTP Live Streaming (HLS), Microsoft has Smooth Streaming, Adobe has HTTP Dynamic Streaming (HDS), and Google has technology it acquired from Widevine. Yet operators have been more interested in Dynamic Adaptive Streaming over HTTP (DASH), an emerging vendor-independent MPEG standard. The hope with MPEG-DASH is that operators can use the same protocol to stream to a wide variety of client devices, thereby making deployment of TV Everywhere-type services cheaper and easier.
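The core of every adaptive protocol, proprietary or DASH, is the client-side rendition switch. A minimal sketch of the common throughput-based heuristic (illustrative only; real HLS, Smooth Streaming, and DASH players also weigh buffer occupancy and the cost of frequent switches, and the function name and safety margin here are assumptions):

```python
def pick_bitrate(measured_kbps: float, renditions_kbps: list[int],
                 safety: float = 0.8) -> int:
    # Choose the highest rendition whose bit rate fits within a safety
    # fraction of the measured throughput; fall back to the lowest
    # rendition when even that does not fit.
    budget = measured_kbps * safety
    fitting = [r for r in sorted(renditions_kbps) if r <= budget]
    return fitting[-1] if fitting else min(renditions_kbps)

# 5 Mbps measured * 0.8 safety = 4000 kbps budget -> 2500 kbps rendition
choice = pick_bitrate(5000, [500, 1200, 2500, 4500])
```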
MPEG-DASH took a significant step towards real-world viability over the last few months with the establishment of the DASH Industry Forum, a trade association that promotes market adoption of the standard. Microsoft and Adobe are members, though not Apple or Google, indicating that at least some of the vendors of proprietary adaptive streaming will embrace the standard. The membership also includes a healthy critical mass of vendors in the video content protection space: Adobe, BuyDRM, castLabs, Irdeto, Nagra, and Verimatrix — plus Cisco, owner of NDS, and Motorola, owner of SecureMedia.
Adaptive streaming protocols need to be integrated with content protection schemes. PlayReady was originally designed to work with Smooth Streaming. It has also been integrated with HLS, which is probably the most popular of the proprietary adaptive streaming schemes. Integration of PlayReady with MPEG-DASH is likely to be viewed as a safe choice, in line with the way the industry is going. That solution came into view this month as BuyDRM and Fraunhofer IIS announced an integration of MPEG-DASH with PlayReady for the HBO GO service in Europe. HBO GO is HBO’s “over the top” service for subscribers.
For the HBO GO demo, BuyDRM implemented a version of its PlayReady client that uses Fraunhofer’s AAC 5.1 surround-sound codec, which ships with devices that run Android 4.1 Jelly Bean. The integration is being showcased with HD quality video on HBO’s “Boardwalk Empire” series. Users can connect Android 4.1 devices with the proper outputs — even handsets — to home-theater audio playback systems to get an experience equivalent to playing a Blu-ray disc. The current implementation supports live broadcasting, with VOD support on the way shortly.
PlayReady integrated with MPEG-DASH is likely to be a popular choice for a variety of video service providers, ranging from traditional pay TV operators to over-the-top services like HBO Go. BuyDRM and Fraunhofer’s deployment is an important step towards that choice becoming widely feasible.
DRM Anticircumvention for Dummies
July 15, 2012. Posted by Bill Rosenblatt in DRM, Law.
I have seen a lot of commentary and gotten a lot of feedback regarding the EPUB Lightweight Content Protection (EPUB LCP) scheme I am helping to design for the International Digital Publishing Forum (IDPF), which oversees the EPUB standard. The criticisms fall into two buckets: "DRM sucks; why is the IDPF wasting time on this?" and "the security is too weak; publishers need stronger protection."
Yet these diametrically opposed criticisms have one thing in common: a lack of understanding of how anticircumvention law, such as Section 1201 of the DMCA in the United States, works in practice and how it figures into the design of EPUB LCP. This lack of understanding is common to both DRM opponents and people from DRM technology vendors. Anticircumvention law makes it a crime to hack DRMs.
So I thought I would offer some information about the practicalities of anticircumvention law, presented as rebuttals to some of the false assertions that I have heard. Three caveats are in order: first, the following is going to be U.S.-centric. That’s because I am most familiar with the U.S. anticircumvention law, but also because the U.S. law is by far the most highly developed through litigation. Second, I am not a lawyer — nor are any of the people who have talked to me about this. So if you’re a legal expert and I’m wrong, please correct me. Third, I’m not an official spokesman for IDPF, and they may have different views.
Assertion: Anticircumvention law doesn’t stop hacks; hacks are going to be available anyway.
Reality: Of course the law doesn’t eliminate hacks, but it does make hacks less easily accessible to people who are not determined hackers. The law comes down hardest on those who gain commercially from their hacks. Because of the anticircumvention law, there is not (for example) a “convert from Amazon” option in Nook readers and apps, or the converse in Kindles; instead you have to go find the hack, install it, and use it – something that requires more time, determination, and skill. (Note that this is a different issue from “DRM doesn’t stop piracy.” Here I agree: absolutely, there are various other ways to infringe copyright, some of which are easier than hacking DRMs.)
Assertion: DRM systems that aren’t robust don’t qualify for the anticircumvention law.
Reality: This one comes from DRM vendors, which have vested interests in robustness. To answer this, you need to look at the history of litigation (again, this is a US-centric view). The most important legal precedent here is Universal v. Reimerdes, which was decided in U.S. district court in 2000 and upheld on appeal. This case was one of several involving the weak CSS encryption scheme for DVDs. The defense asked the court to find it not liable because CSS was too weak to meet the definition of “effective” in “technological measure [that] effectively controls access to a work” under the law. In his opinion, the judge explicitly refused to establish an “effectiveness test” by deciding this issue. I know of a couple of cases that attempted to revisit this issue but were dropped. The effect, at least for now, is that any DRM that’s as strong (i.e. weak) as CSS, or stronger, should qualify for protection under the law.
Assertion: The IDPF intends to sue hackers as part of the EPUB LCP initiative.
Reality: Not true at all. The IDPF is not even in a position to facilitate litigation the way the MPAA and RIAA do. (For one thing, it’s an international body, not a national one.) If any organization is going to facilitate litigation, it would be the Association of American Publishers (AAP) in the U.S., which has not been involved in the EPUB LCP initiative. More generally, it may help to explain how the litigation process works in practice. Copyright owners do the suing; they are the actual plaintiffs. They will only bother to sue under the anticircumvention law if they see hacks that are being used widely enough to cause significant infringement and/or the supplier of the hack is making money from the hack. So as a practical matter, a hack that “sits in the shadows” as described above is unlikely to be used widely enough to draw a lawsuit.
Assertion: Users get sued for using hacks.
Reality: Although the law does provide penalties for using as well as distributing hacks, individual users have never gotten sued for using hacks (or for creating hacks for personal use only). Users have been sued for copyright infringement; if you hack a DRM, you may be infringing copyright. Only those who make hacks publicly available have ever been sued for DMCA 1201 violations.
Assertion: This is a US matter and irrelevant elsewhere in the world, especially now that ACTA is dead in Europe.
Reality: As mentioned above, the interpretation of “effectiveness” is a US-centric one that may or may not apply elsewhere. But otherwise, this statement is also incorrect. Anticircumvention law is on the books today in most industrialized countries, including EU member states (resulting from the European Union Copyright Directive of 2001), Australia, New Zealand, Japan, Singapore, India, China, Brazil, and a few others; South Korea and Canada should get anticircumvention laws soon.
I am working with the International Digital Publishing Forum (IDPF), helping them define a new type of content protection standard that may be incorporated into the upcoming Version 3 of IDPF’s EPUB standard for e-books. We’re calling this new standard EPUB Lightweight Content Protection (EPUB LCP).
EPUB LCP is currently in a draft requirements stage. The draft requirements, along with some explanatory information, are publicly available; IDPF is requesting comments on them until June 8. I will be giving a talk about EPUB LCP, and the state of content protection for e-books in general, at Book Expo America in NYC next week, during IDPF’s Digital Book Program on Tuesday June 5.
Now let’s get the disclaimer out of the way: the remainder of this article contains my own views, not necessarily those of IDPF, its management, or its board members. I’m a consultant to IDPF; any decisions made about EPUB LCP are ultimately IDPF’s. The requirements document mentioned above was written by me but edited by IDPF management to suit its own needs.
IDPF is defining a new standard for what amounts to a simple, lightweight, looser DRM. EPUB is widely used in the e-book industry (by just about everyone except Amazon), but lack of an interoperable DRM standard has caused fragmentation that has hampered its success in the market. Frankly, IDPF blew it on this years ago (before its current management came in). They bowed to pressures from online retailers and reading device makers not to make EPUB compliance contingent on adopting a standard DRM, and they considered DRM (understandably) not to be “low hanging fruit.”
IDPF first announced this initiative on May 18; it got press coverage in online publications such as Ars Technica, PaidContent.org, and others. The bulk of the comments were generally “DRM sucks no matter what you call it” or “Why bother with this at all, it won’t help prevent any infringement.” A small number of commenters said something on the order of “If there has to be DRM, this isn’t a bad alternative.” One very knowledgeable commenter on Ars Technica first judged the scheme to be pointless because it’s cryptographically weak, then came around to understanding what we’re trying to do and even offered some beneficial insights.
The draft requirements document provides the basic information about the design; my main purpose here is to focus more on the circumstances and motivation behind the initial design choices.
Let’s start at a high level, with the overall e-book market. (Those of you who read my article about this on PaidContent.org a few months ago can skip this and the next five paragraphs.) Right now it’s at a tipping point between two outcomes that are both undesirable for the publishing industry. The key figure to watch is Amazon’s market share, which is currently in the neighborhood of 60%; Barnes and Noble’s Nook is in second place with share somewhere in the 25-30% range.
One outcome is Amazon increasing its market share and entering monopoly territory (according to the benchmark of 70% market share often used in US law). If that happens, Amazon can do to the publishing industry what Apple has done for music downloads: dominate the market so much that it can both dictate economic terms and lock customers into its own ecosystem of devices, software, and services.
The other outcome is that Amazon’s market share falls, say to 50% or lower, due to competition. In that case, the market fragments even further, putting a damper on overall growth in e-reading. Also not good for publishers.
Let’s look at what happens to DRM in each of these cases. In the first (Amazon monopoly) case, Amazon may drop DRM just as Apple did for music — but it will be too late: Amazon will have achieved lock-in and can preserve it in other ways, such as by making it generally inconvenient for users to use other devices or software to read Amazon e-books. Other e-book retailers would then drop DRM as well, but few will care.
In the second case, everyone will probably keep their DRMs in order to keep users from straying to competitors (though some individual publishers will opt out of it). In other words, if the DRM status quo remains, the likely alternatives are DRM-free monopoly or DRM and fragmentation.
If IDPF had included an interoperable DRM standard back in 2007 when both EPUB and the Kindle launched, e-books might well be more portable among devices and reading software than they are now. Yet the most desirable outcome for the reading public is 100% interoperability, and we know from the history of technology markets (with the admittedly major exception of HTML) that this is a chimera. (Again, I explained this in PaidContent.org a few months ago.)
To many people, the way out of this dilemma is obvious: everyone should get rid of DRM now. That certainly would be good for consumers. But most publishers — who control the terms by which e-books are licensed to retailers — don’t want to do this; neither do many authors, who own copyrights in their books.
E-book retailers and device vendors can get lock-in benefits from DRM. As for whether DRM does anything to benefit rights holders by improving consumers’ copyright compliance or reducing infringement, that’s a real question. Notwithstanding the opinions of the many self-styled experts in user behavior analysis and infringement data collection among the techblogorati and commentariat, the answer is unknown and possibly unknowable. Publishers are motivated to keep DRM if for no other reason than fear that once it goes away, they can never bring it back. Moreover, certain segments of the publishing industry (such as higher education) want DRM that’s even stronger than the current major schemes.
The fact is, none of the major DRMs in today’s e-book market are very sophisticated — at least not compared to content protection technologies used for video content. The economics of the e-book industry make this impossible: the publishers and authors who want DRM don’t pay for it, resulting in cost and complexity constraints. DRM helps retailers insofar as it promotes lock-in, but it doesn’t help them protect their overall services. In contrast, content protection helps pay TV operators (for example) protect their services, which they want protected just as much as Hollywood doesn’t want its content stolen; so they’re willing to pay for more sophisticated content protection.
The two leading e-book DRMs right now are Amazon’s Mobipocket DRM and Adobe’s Content Server 4; the latter is used by Barnes & Noble, Sony, and various others. Hackers have developed what I call “one-click hacks” for both. One-click hacks meet three criteria: people without special technical expertise can use them; they work on any file that’s packaged in the given DRM; and they work permanently (i.e., there is no way to recover from them). In contrast, pay TV content protection schemes are generally not one-click-hackable.
In other words, one-click DRM hacks are like format converters: the one built into Microsoft Word that converts files from WordPerfect, or the ones built into photo editing utilities that convert TIFF to JPEG. But there’s a difference: DRM hacks are illegal in many countries, including the United States, European Union member states, Brazil, India, Taiwan, and Australia; all other signatories to the Anti-Counterfeiting Trade Agreement will eventually have so-called anticircumvention laws too.
The effect of anticircumvention law has been to force DRM hacks into the shadows, making them less easily accessible to the non-tech-savvy and at least somewhat stigmatized. Without the law, we would have things like Nook devices and software with “Convert from Kindle Format” options (and vice versa). The popular, free Calibre e-book reading app, for example, had a DRM stripper but removed it (presumably under legal pressure) in 2009. A DRM removal plug-in for Calibre is available, but it’s not an official one; David Pogue of the New York Times — hardly a fan of DRM – recently dismissed it as difficult to use as well as illegal.
The US has a rich case history around anticircumvention law that has made the boundaries of legal acceptability reasonably clear. It has shut off the availability of hacks from “legitimate” sources and ensured that if your hack is causing enough trouble, you will be sued out of existence. I am not personally a fan of anticircumvention law, but I accept as fact that it has made hacks less accessible to the general public.
The foregoing line of thought got IDPF Executive Director Bill McCoy and me talking last year about what IDPF might be able to do about DRM in the upcoming version of EPUB, in order to help IDPF further its objective of making EPUB a universal standard for digital publishing and forestall the two undesirable market trajectories described above. We did not set out to design an “ultimate DRM” or even “yet another DRM”; we set out to design something intended to solve problems in the digital publishing market while working within existing marketplace constraints.
So now, with that background, here is a set of interrelated design principles we established for EPUB LCP:
- Require interoperability so that retailers cannot use it to promote lock-in. This is what the UltraViolet standard for video is attempting to do, albeit in a technically much more complex way. The idea of UltraViolet is to provide some of the interoperability and sharing features that users want while still maintaining some degree of control. Our theory is that both publishers and e-book retailers would be willing to accept a looser form of DRM that could break the above market dilemma while striking a similar balance between interoperability and control.
- Support functions that users really want, such as “social” sharing of e-books. Build on the idea of e-book watermarking, such as that used in Safari Books Online for PDF downloads and in the Pottermore Store for EPUB format e-books: embed users’ personal information into the content, on the expectation that users will only share files with people whom they trust not to abuse their personal information.
- Create a scheme that can support non-retail models such as library lending and can be extended to support additional business models (see below) or the stronger security that industry segments such as higher ed need.
- Include the kinds of user-friendly features that Reclaim Your Game has recommended for video game DRMs. These include respecting privacy by not “phoning home” to servers and ensuring permanent offline use so that files can be used even if the retailer goes out of business. They also include not jeopardizing the security or integrity of users’ devices, as in the infamous “rootkit” installed by CD copy protection technology for music several years ago.
- Eliminate design elements that add disproportionately to cost and complexity. Perhaps the biggest of these is the so-called robustness rules that have become standard elements of DRMs such as OMA DRM, Marlin, and PlayReady where the DRM technology licensor doesn’t own the hardware or platform software. Eliminating “phoning home” also saves costs and complexity. Other elements to be eliminated include key revocation, recoverability, and fancy authentication schemes such as the domain authentication used in UltraViolet.
- Finally, don’t try very hard to make the scheme hack-proof. The strongest protection schemes for commercial content — such as those found in pay television — are those that minimize the impact of hacks so that they are temporary and recoverable; such schemes are too complex, invasive, and expensive for e-book retailers or e-reader makers to consider. Instead, assume that EPUB LCP will be hacked, and rely on two things to blunt the impact: anticircumvention law, and allowing enough differences among implementations that each one will require its own hack (a form of what security technologists call “code diversity”).
With those design principles in mind, we have designed a scheme that takes its inspiration from two sources in particular: the content protection technology used in the eReader/FictionWise e-book technology that is now owned by Barnes & Noble, and the layered functionality concept built into the Digital Media Project’s IDP (Interoperable DRM Platform) standard.
The central idea of EPUB LCP is a passphrase supplied by the user or retailer. This could be an item of personal information, such as a name, email address, or even credit card number; distributors or rights holders can decide what types of passphrases to use or require. The passphrase is irrecoverably obfuscated (e.g. through a hash function) so that even if a hack recovers the passphrase, it won’t recover the personal information; yet the retailer can link the obfuscated passphrase to the user. The obfuscated passphrase is then embedded into the e-book file. If the user wants to share an e-book, all she has to do is share the passphrase. Otherwise, the content must be hacked to be readable.
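The obfuscation step can be as simple as a salted one-way hash. A sketch of the idea (the draft requirements leave the actual derivation to the spec; the function name, salt, and choice of SHA-256 here are illustrative, and a production scheme would use a slow derivation like PBKDF2 with a per-user salt):

```python
import hashlib

def obfuscate_passphrase(passphrase: str, salt: bytes = b"retailer-salt") -> str:
    # One-way: the retailer can recompute the token from the passphrase it
    # collected at sale time and match it to the account, but the token
    # embedded in the e-book cannot be reversed into the personal information.
    return hashlib.sha256(salt + passphrase.encode("utf-8")).hexdigest()

token = obfuscate_passphrase("alice@example.com")
```

Because the same passphrase always yields the same token, the retailer keeps its link to the user; because the hash is one-way, a hack that extracts the token exposes no personal data.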
Other aspects of the draft requirements are covered in the document on the IDPF website. Apart from that, it’s worth mentioning that this type of scheme will not support certain content distribution models unless extensions are added to make them possible. Features intentionally left out of the basic EPUB LCP design include:
- Separate license delivery, which allows different sets of rights for a given file
- License chaining, which supports subscription services
- Domain authentication, which can support multi-device/multi-user “family accounts” a la UltraViolet
- Master-slave secure file transfer, for sideloading onto portable devices, a la Windows Media DRM
- Forward-and-delete, to implement “Digital Personal Property” a la the IEEE P1817 standard
Once again, we set out to design something that meets current market needs and works within current market constraints; EPUB LCP is not a research-lab R&D project.
Again, I’ll be discussing this, as well as the landscape for e-book content protection in general, at Book Expo America next week. Feel free to come and heckle (or just heckle in the comments right here). I’m sure I will have more to report as this very interesting project develops.
Inisoft of Korea Acquires BuyDRM
May 24, 2012. Posted by Bill Rosenblatt in DRM, Video.
Inisoft, a Korean company that does software development for mobile media applications, has acquired Texas-based BuyDRM. BuyDRM is a well-established player in the Microsoft DRM ecosystem with customers including HBO, BBC, and NBC. The company offers a DRM platform called KeyOS that incorporates Microsoft’s PlayReady DRM; Inisoft focuses on media player applications and DRM clients for mobile devices.
The deal is a good one for both parties as well as the premium video content marketplace in general. It enables BuyDRM — which will continue to operate under its own name — to better offer the “one stop shopping” that service providers often look for in order to build services that work on multiple devices more quickly and easily. This is increasingly necessary as service providers scramble to build “TV Everywhere” type services over multiple networks to a growing number of devices.
The newly-merged company is in a sweet spot in the video market, due to PlayReady’s emergence as a leading DRM for Hollywood content, for both streaming and download. Yet while Microsoft has fostered a healthy partner ecosystem, as it typically does for “platform” technologies like PlayReady, the ecosystem that exists can be confusing to service providers.
For one thing, Microsoft isn’t supporting the most popular client platforms by itself. Microsoft provides PlayReady server code and client code for Windows, Silverlight (Microsoft’s web application development platform), and Windows Phone, plus an SDK for porting to non-Microsoft platforms. But unlike other video DRM providers (e.g., Widevine), it doesn’t provide the actual ports to other client devices — including the most popular (and admittedly competing) platforms, Apple’s iOS and Google’s Android. Instead it leaves that to its partners.
The other problem is that Microsoft’s PlayReady partners cover an overlapping array of technologies and services that can be confusing to service providers who just want to get something up and running that meets Hollywood’s content protection requirements. There’s a profusion of vendors with different and often overlapping product sets. As a few examples: Discretix and Trusted Logic offer secure client ports but not server code; Axinom and castLabs offer server-side only; AuthenTec and Irdeto offer both server and client implementations; Verimatrix integrates PlayReady with its own stream protection technology; yet other vendors like Azuki Systems provide complete platforms for multiscreen Internet video content delivery with many more components beyond DRM.
The process of acquiring this technology is thus more complicated than it needs to be, especially in this age of proliferating devices and platforms. Service providers that are interested in using PlayReady to protect licensed content don’t get much help from Microsoft in guiding them through this maze of products and services; partners are left to do all the marketing. (Microsoft itself hasn’t put out a press release on PlayReady in over a year, despite its traction in the market.) In effect, Microsoft has let the market sort itself out through the relatively slow and cumbersome processes of partnerships, OEM deals, multiple-vendor arrangements, and — in the case of BuyDRM and Inisoft — mergers/acquisitions.
Having said that, Inisoft’s acquisition of BuyDRM should help bring some much-needed clarity to service providers. It is a positive development for the market for multi-device video services with studio content.
Roots of the Online Upheaval of SOPA/PIPA
May 13, 2012. Posted by Bill Rosenblatt in DRM, Law, United States.
I’m in the middle of reading a new book called Hollywood’s Copyright Wars: From Edison to the Internet, by University of Pennsylvania professor Peter DeCherney. I’ll report back on this book later; today I want to talk about a PhD dissertation that appears in a footnote in this book.
Bill Herman’s dissertation at Penn’s Annenberg School of Communication is called The Battle over Digital Rights Management: A Multi-Method Study of the Politics of Copyright Management Technologies. It was written in 2009, and it presciently anticipates the online movement that led to the downfall of SOPA and PIPA two years later.
Herman — now a professor of film and media at Hunter College in NYC — looked at four legislative developments in U.S. digital copyright policy and measured how they were influenced by three types of communication: direct communications with legislators (e.g., lobbying), the press, and online. The four developments were the Audio Home Recording Act (1992), the anticircumvention provision of the Digital Millennium Copyright Act (1998), efforts to revise the DMCA (2003-2005), and the FCC Broadcast Flag regulation (2006).
Herman’s research analyzes communications in those three arenas and grades them according to whether they tilt “strong copyright” or “strong fair use.” He finds that communications with Congress, which tilted heavily toward “strong copyright,” predominated in the earlier years; press reporting (in the Washington Post and New York Times) was roughly balanced, with a slight “strong fair use” tilt; then online communication took over the debate with a forty-to-one “strong fair use” slant and influenced the repeal of the FCC Broadcast Flag regulation in 2007. Although Herman is unabashedly on the “strong fair use” side, his methodologies for identifying and characterizing these various communications are rigorous and do not show bias.
In his introduction, Herman writes: “While the time period under study does not include their ultimate triumph at the bargaining table — as of this writing, what I describe as the strong fair use coalition still has not won a major legislative victory — it does include the beginning of their time as a genuine force at that table.” As a prediction of the online and copyleft communities killing SOPA and PIPA, this is pretty impressive.
Herman’s thesis goes into great detail about the ways in which the “strong fair use” axis posted lots of material online to feed the debate, while the other side didn’t. It’s a trove of factual evidence about how to shape policy debate in the Internet age (and how not to). It also, in effect, shoots holes in the theory held by some strong-copyright people that a Google-led cabal caused the defeat of SOPA and PIPA.
I admit to not having read the entire 400-plus pages of the dissertation, though it contains a much more manageable 27-page introduction that summarizes the methodology and results. With that caveat in mind, I can identify one shortcoming in Herman’s methodology that, if he had corrected it, might have changed the nature of his conclusions.
Herman tracked press stories that specifically covered the four legislative developments mentioned above. But he didn’t track stories that covered the real-world marketplace of the technologies being regulated – articles by the likes of David Pogue in the Times and Walter Mossberg in the Wall Street Journal. (Nor did he track online content about the same, from the likes of TechCrunch, CNet, etc., not to mention Internet ideologues like Cory Doctorow and thousands or millions of blogs.)
If he had done this, he would have found a much more anti-DRM tilt in the press during the early-mid 2000s than he did. Articles from this period (and thereafter) took a populist, pro-consumer viewpoint: after all, people read Pogue, Mossberg, and CNet to help them choose the best digital content services and devices. The job of these writers isn’t to defend the interests of copyright owners or content creators; it’s to help sell newspapers and drive traffic to websites.
These sources routinely praise digital content services and devices that offer as many rights to as much content for as little money as possible. DRM can be used to enable new content distribution models, but it can also be used to force consumers to pay, limit interoperability, and restrict uses of content that are allowed under copyright law. Thus it makes sense that these writers would paint DRM in a negative light.
One has to wonder how much the pro-consumer point of view in this press coverage influenced legislation. The journalists who covered legislative developments during the period Herman studied did not overlap much with those who covered products and services. For example, Jenna Wortham, Jonathan Weisman, and Brian Stelter provided the bulk of legislative coverage at the Times, while over at CNet, Declan McCullagh wrote about policy and legislation while Greg Sandoval did (and does) most of the marketplace coverage.
Herman attributes the “strong fair use” coalition’s increased legislative influence to its being more effective than the “strong copyright” community at putting its message out online. But I would suggest that it had a lot of help from both professional and amateur writers on consumer media technologies, who led people to question why technologies like DRM exist and what role government plays in them.
It might not be as easy to gauge that influence, but it was — and is — surely significant; and that means that the press could well influence digital copyright legislation more strongly than Herman surmises. Herman seems eager to glorify the power of the Internet by itself. While there’s no doubt that Internet forces killed SOPA and PIPA, what Herman calls the “strong fair use” movement has roots outside of the copyleft academia and advocacy groups that he credits (he was an intern at Public Knowledge and considers Larry Lessig a hero).
Regardless, the defeat of SOPA and PIPA has made it clear that the online community now has a lot of power over policy debate. Gary Shapiro of the Consumer Electronics Association wrote a letter to the editor in the Times admitting that “back rooms do not exist on the Internet.” I would suggest that if the RIAAs and MPAAs of the world want to understand how to engage the online public in order to shape future legislation, Herman’s thesis ought to be required reading for them.
As a postscript, there is now a bit of overlap in coverage of digital content products and services and legislative policy, now that people are digging through the post-SOPA/PIPA wreckage and considering what to do next. David Pogue, for example, got around to actually reading the legislation back in January as it was failing. He made two badly needed observations: that many of the objectors to SOPA and PIPA didn’t like them simply because they could cut off their supply of free content, and that such people generally didn’t have a clue about the actual legislation and acted on misinformation about it. Let’s hope that now that Pogue has connected the dots, more people will follow that train of thought to some reasonable policy developments.
Webinar on Studios’ Content Security Policies April 24, 2012Posted by Bill Rosenblatt in Conditional Access, DRM, Events, Video, Watermarking.
add a comment
For those who couldn’t attend the breakfast event at the NAB trade show last week, I will be doing a webinar on Content Security Requirements for Multi-Screen Video Services, on Thursday April 26 at noon US east coast time/1700 GMT. I’ll be presenting a synopsis of the whitepaper I published last December on the topic. I will be joined by Petr Peterka, CTO of Verimatrix, sponsor of the webinar. Click here to register.
Will Harry Potter Break the E-book DRM Spell? March 28, 2012Posted by Bill Rosenblatt in DRM, Publishing.
The Harry Potter franchise has been the major digital holdout in trade publishing, the analog (until recently) of the Beatles in music. No more: the Pottermore Shop features all of the Harry Potter titles in e-book and digital audiobook formats. The e-books are available in the standard EPUB as well as Amazon Kindle formats, and the audiobooks are in MP3. The EPUB and MP3 files are DRM-free.
Some major-publisher audiobooks are already DRM-free. But does this mean the end of DRM for major-publisher e-books?
First of all, it’s possible to buy Harry Potter e-books on all of the major e-book retail sites (or through them via affiliate links). At least the Kindle and Nook format e-books use DRM. Only the EPUB-format files are DRM-free.
Furthermore, Harry Potter is highly anomalous in the world of book publishing: it’s a goldmine of revenue from many sources, far beyond the books themselves. Harry Potter has more in common with Disney cartoon movies than with most other books or book series. The animated features that Disney has released in recent years are all part of vast orchestrated campaigns of ancillary revenue sources: books, toys, theme park rides, ad-revenue-bearing TV shows, Broadway musicals, and on and on. Think The Lion King, Cars, or Toy Story. In fact, Harry Potter ancillary revenue streams have more than doubled book revenues already.
In other words, J.K. Rowling doesn’t need to maximize revenue from selling e-books, especially since she does not plan to write any more Harry Potter titles. Instead, her strategy is surely to use e-books — and print books, for that matter — for their marketing value, to induce her vast audience (and their parents) to purchase the stream of Potter-themed products that her organization will release for years to come. When viewed that way, DRM becomes a liability.
Instead, Rowling is launching an entire site devoted to All Things Harry: Pottermore Shop is part of the overall Pottermore site, which is currently in beta. This will enable the Rowling team to establish relationships with their customers that are far richer and more lucrative than if the e-books were available only on Amazon, Barnes & Noble, or other retail sites. Pottermore will add new content and features on a regular basis and, of course, include lots of social features for Harry fans.
Pottermore is likely to be a popular destination site; Harry Potter is perhaps the only publishing property that doesn’t need Amazon or B&N. The trade publishing industry would love to have more blockbuster franchises like Harry Potter, but given the way the industry and authors work, such properties are likely to be fewer in number than those found in the movie industry. (Incidentally, Scholastic, Rowling’s publisher, may have its hands on the next blockbuster franchise: Suzanne Collins’s The Hunger Games.) Those rare mega-properties don’t need DRM, but that has nothing to do with the question of whether the rest of the publishing industry does.
In addition, publishers have much more limited ability to monetize big franchise properties than movie studios do, for the simple reason that authors own the copyrights to most trade books. Of course, publishers can negotiate rights that go beyond print books or e-books. But it’s instructive to note that the word “Scholastic” appears exactly nowhere on the Pottermore site.
I have released a new white paper on content security requirements for video services that distribute content to multiple devices. This white paper discusses copyright owners’ requirements for security in today’s world of proliferating devices and delivery channels.
So-called managed networks (cable, satellite, and telco TV) are under increasing pressure to compete with “over the top” (OTT) video services that can run over any IP-based (unmanaged) network to a variety of devices — services like Netflix and Hulu. In the US, in fact, the total subscriber base of OTT services is fast approaching that of cable, satellite, and telco TV combined.
Therefore pay-TV operators have to respond by making their content available on a similar variety of devices and even through unmanaged networks. While some major pay-TV providers like Comcast and Time Warner Cable are launching “TV Everywhere” services, many more pay-TV operators are trying to keep up by building their own service extensions onto mobile phones, tablets, and home devices other than traditional set-top boxes (STBs).
Content security is one of the many requirements that operators have to meet in order to license content from studios, TV networks, sports leagues, and other major content sources. Life for pay-TV operators used to be relatively simple: adopt a conditional access (CA) technology that was as effective in thwarting signal theft as in thwarting content piracy. Economic and security goals were aligned between operators and copyright owners. Now life is considerably more complicated, as operators have to support home networks and branch out into mobile services. Content security requirements are more complicated as well.
This white paper gathers security requirements from major content owners and describes them in a single document. The intent is to help pay-TV operators and other video service providers that are looking to launch multi-screen video services, so that they know what to expect and avoid any unpleasant surprises with regard to security requirements when licensing content to offer through their services.
I spoke to representatives from most of the major Hollywood studios to get their requirements. Although it is not possible to build a gigantic table that an operator can use to look up DRM or conditional access requirements for any given delivery modality and client device — among other things, such a table would become obsolete very quickly — I was able to create a set of guidelines that should be useful for operators.
Content security guidelines do depend on certain factors, including release windows (how soon after a film’s theatrical release or a TV show’s first airing the content is made available), display quality, and the usage rules granted to users and their devices. In the white paper, I map these factors to certain specific content security requirements, such as roots of trust, watermarks, software hardening, and DRM robustness rules. Security guidelines also depend on external market factors that the white paper also describes.
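To make the idea of such a mapping concrete, here is a minimal sketch of a guideline lookup. The factor names, thresholds, and required measures below are hypothetical illustrations, not taken from the white paper or from any studio’s actual requirements; they only show the shape of the mapping from content and usage factors to security measures.

```python
# Illustrative sketch: mapping content/usage factors to security requirements.
# All thresholds and requirement names here are hypothetical examples, not
# actual studio rules; they only demonstrate how such a lookup might work.

def security_requirements(days_since_release, hd_output, allows_download):
    """Return an illustrative list of security measures for one offering."""
    reqs = ["hardware or software root of trust"]  # baseline for any service
    if days_since_release < 90:          # early-window content: strictest rules
        reqs.append("forensic watermarking")
    if hd_output:                        # higher display quality raises the bar
        reqs.append("output protection (e.g., HDCP)")
    if allows_download:                  # persistent copies need hardened clients
        reqs.append("DRM with software hardening / robustness rules")
    return reqs

# An early-window HD download service triggers every measure in this sketch.
print(security_requirements(30, hd_output=True, allows_download=True))
```

As the white paper notes, a real table of this kind would go stale quickly, which is why guidelines rather than a fixed lookup are the practical deliverable.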
Public Library E-Book Lending Must Change to Survive December 4, 2011Posted by Bill Rosenblatt in DRM, Law, Publishing, Uncategorized.
A few events over the past few weeks illustrate the downward arc that I have suggested is in store for public libraries in the e-book age. First, Amazon introduced its own e-book “lending library” for members of its $79/year Amazon Prime service, which allows users to “borrow” one e-book at a time, with no due dates. Second, yet another major trade book publisher, Penguin, got into a spat with public libraries over e-book lending. Penguin stopped offering new titles and withheld Kindle access to all titles, out of unspecified security concerns with OverDrive (the service that powers most U.S. e-book library lending) and Amazon. (Penguin subsequently restored access for existing titles, but not for new ones.)
The Penguin incident is only the latest in what will undoubtedly be a long series of squabbles between publishers and libraries over e-book lending. In fact, five of the “Big Six” U.S. trade book publishers are now either limiting their e-book licensing to libraries or not licensing at all — and the sixth (and largest), Random House, is reportedly reconsidering its library e-book licensing policies. Such spats may well lead to a world of off-putting restrictions and confusion for libraries and their patrons.
Libraries have two fundamental problems here: they have less control over the situation than publishers do, and they are about to get some serious competition from the private sector. An article in Publishers Weekly gives an overview of Amazon’s e-book lending feature and its implications for publishers and authors. In a nutshell, the program is currently limited to a few thousand titles that originate either from Amazon itself or from smaller publishers that still sell e-books to Amazon under a wholesale model, as opposed to the “agency” model used by most major trade publishers, which forbids such activity.
But the Publishers Weekly piece only covers the impact of e-book lending on publishers and authors, many of whom are raising a fuss about Amazon’s program. It says nothing about the program’s impact on public libraries. The executive director of the American Library Association (ALA), Keith Fiels, has publicly expressed a lack of concern over the impact of Amazon’s lending program, given its limited range of titles and that it’s part of a subscription program that includes other features such as streaming video and free expedited shipping. The ALA is more concerned about major-publisher moves like Penguin’s.
Indeed, public libraries are experiencing major growth in e-book lending, especially since Amazon joined the e-lending world by opening up its DRM to enable lending and integrating it with OverDrive’s library lending service. Another piece of evidence that library e-lending is expanding is the entry of a Seattle-based startup called BlueFire Productions as the first serious competitor to OverDrive in the public library space.
At bottom, this is about two things: ways to make e-books available legally for free, and the promotional value of free distribution. That’s why libraries should be worried. First, consumers generally don’t care where they get free legal e-books, as long as they are available conveniently and can be read on their favorite devices. Second, what Amazon has started as a limited service that’s only available to an elite tier of customers will surely become more widely available and with more titles, especially with competitors like Barnes & Noble constantly looking for ways to differentiate themselves from the market leader.
Amazon subsidizes the wholesale cost of e-books that it lends to Amazon Prime members. It does this to make its own services and devices more attractive, not to spur sales of those e-books. If and when B&N offers an equivalent feature, it will undoubtedly do the same.
If I were Keith Fiels at the ALA, I would be very, very afraid. The e-book publishing world may be about to split up into the equivalent of the music industry’s major and indie labels: major labels tend to make deals that maximize revenue and limit free promotion, while indies try for maximum promotion in hopes of getting revenue later. When you apply this dichotomy to publishers and e-books, you will see that libraries will inevitably get squeezed out.
The majors will make life increasingly difficult for public libraries through refusal to license or restrictive and confusing licensing terms. Meanwhile, smaller publishers will “lend” their titles through Amazon and other e-book services — and will most likely be happy with the arrangement for the promotional value it gets them. And some indie publishers will give their e-books away outright — through e-book retailers or through sites like Facebook — in hopes of getting exposure for their authors and selling hardcopy titles, just as thousands of indie musicians used to give away MP3s on MySpace. And let’s not forget that e-book prices are often much lower than their hardcopy counterparts to begin with.
Then it will only be a matter of time until some publishing industry equivalent of Michael Robertson (the music industry’s digital provocateur) will create a search engine for finding free e-books from all of these sources in a single convenient place, storing them in an online locker, sharing them with friends, etc.
If you extrapolate from these changes, you can see how public libraries could become virtually irrelevant for e-book readers.
It’s all because publishers get to decide what e-book titles libraries may lend and (to some extent) under what terms. Again, think of this in music terms: radio stations get the right to play whatever music they want under a license granted by law — a so-called statutory license. Online equivalents of radio (e.g., Pandora, iHeartRadio) get similar rights. Library lending of digital music is virtually nonexistent; radio remains the primary promotional channel for record companies. Perhaps it’s time to think more carefully about public libraries in this light for e-books, as I’ll explain.
There is no equivalent of a statutory license for e-books that would allow libraries to lend them without explicit, title-by-title permission from publishers. As I’ve discussed previously, libraries do get rights under Section 108 of the copyright law to lend e-books under certain conditions. But because most publishers only give libraries e-books to lend as DRM-protected files with license terms attached to them, and Section 108 requires libraries to abide by those license terms, libraries can’t exercise those rights. In effect, those rights have no value for libraries.
Libraries simply do not have enough leverage against major publishers and retailers to improve this situation in the private sector. If they are to remain relevant in the e-book age, they are going to need to push for significant legal reforms, which both publishers and retailers will undoubtedly resist.
I previously suggested one option, albeit in a somewhat tongue-in-cheek manner: push for the Copyright Office to define an exemption to the law that criminalizes hacking of DRMs (Section 1201 of the Copyright Act) so that public libraries can legally remove DRM for the purpose of lending e-books if they repackage them with DRM to enforce lending terms. However, this has two disadvantages: exemptions to Section 1201 only last for three years, until the Copyright Office considers a new set of exemptions, and publishers could push for stronger DRMs that are harder to hack.
The “cleanest” solution to this problem would be to enact Digital First Sale, i.e., an extension to Section 109 of the copyright law that lets anyone do whatever they want with digital downloads once they have acquired them legally. (We had a great discussion on this subject at last week’s conference.) Public libraries owe their existence to First Sale (on physical goods) in the first place. But that won’t help for e-books as long as publishers distribute them with DRM and DRM hacking is still illegal; and anyway, as I discussed recently, Digital First Sale isn’t likely to happen anytime soon. Therefore it would be worth libraries’ while to investigate changes to the law that help them lend e-books while leaving Digital First Sale off the table.
One option would be to push for additional rights for libraries under Section 108. At a minimum, Subsection (f)(4) would have to be relaxed so that libraries may lend e-books even if the licenses they come with forbid this activity. This would be tantamount to a statutory license for libraries to lend e-books without explicit permission from publishers.
As a practical matter, this wouldn’t really change the way things are done today. Libraries lend e-books through third parties like OverDrive, which already get e-books from publishers without DRM and package them with DRM — just like music and video retail services. And provisions already exist in Section 108 that hold libraries liable if they make their own unauthorized copies of e-books. OverDrive and its ilk use DRM to enforce one-copy-at-a-time lending as well as the lending time limits that are in libraries’ own best interests.
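The one-copy-at-a-time lending model described above can be sketched in a few lines. This is a simplified illustration with invented class and method names, not how OverDrive actually works; in a real service the checkout rules are enforced by a DRM license server issuing time-limited licenses, not by application logic like this.

```python
# Minimal sketch of one-copy-at-a-time e-book lending with due dates.
# Names are invented for illustration; real services enforce these rules
# via DRM licenses that expire automatically at the due date.

from datetime import datetime, timedelta

class LendingLibrary:
    def __init__(self, copies_owned):
        self.copies_owned = copies_owned   # title -> number of licensed copies
        self.loans = {}                    # title -> {patron: due_date}

    def checkout(self, title, patron, days=14):
        active = self.loans.setdefault(title, {})
        if len(active) >= self.copies_owned.get(title, 0):
            return None                    # all copies out; patron must wait
        due = datetime.now() + timedelta(days=days)
        active[patron] = due               # a DRM license would expire at `due`
        return due

    def checkin(self, title, patron):
        # Early return; expiry would otherwise happen automatically via DRM.
        self.loans.get(title, {}).pop(patron, None)
```

The waiting-list behavior — a second patron being turned away until the first copy comes back — is exactly what makes library e-lending feel like lending rather than unlimited free distribution.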
This change in the law would improve the situation for libraries substantially. However, the economics may have to change to make it palatable to publishers. For example, libraries acquire e-books for their collections by paying for them title by title, just as they pay for printed books. Radio stations, on the other hand, typically get free copies of recordings from record labels but pay royalties to the music industry for playing them on the air.
If publishers acknowledge the promotional value of library e-book lending, then they might be willing to accept a statutory license to lend e-books if they can negotiate a per-loan royalty rate in lieu of upfront purchase prices. The Copyright Clearance Center, for example, would be in a good position to manage these payments and royalty disbursements, just as ASCAP, BMI, and SoundExchange do for music.
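A back-of-the-envelope comparison shows how the two economic models above differ. The prices and royalty rate below are hypothetical numbers chosen only to illustrate where the break-even point falls between upfront purchase and per-loan royalties.

```python
# Hypothetical comparison: upfront purchase of copies vs. per-loan royalty.
# All figures are made up for illustration, not actual publisher terms.

def upfront_cost(copies, price_per_copy):
    """Cost to a library of buying copies outright, regardless of circulation."""
    return copies * price_per_copy

def royalty_cost(total_loans, royalty_per_loan):
    """Cost under a statutory-license model: pay only per loan."""
    return total_loans * royalty_per_loan

# Say a library buys 5 copies at $15 each vs. paying $0.50 per loan.
buy = upfront_cost(5, 15.00)     # $75 no matter how often the title circulates
rent = royalty_cost(120, 0.50)   # $60 for 120 loans
print(buy, rent)                 # past 150 loans, royalties exceed purchase
```

Under such a model, a heavily circulated bestseller would earn the publisher more than an upfront sale would, while a rarely borrowed title would cost the library almost nothing — which is precisely the trade-off a per-loan rate negotiation would have to settle.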
This type of arrangement would enable libraries to maintain huge collections of e-books (through service providers like OverDrive and BlueFire, which would actually house and distribute the e-books) and thus serve the public well. At the same time, the negotiations would have to resolve questions of how many copies of an e-book a given library could lend out concurrently; one copy per library doesn’t reflect the fact that big libraries acquire multiple copies of popular titles. Is it possible to define those numbers so as to be fair to both publishers and libraries? That would be a good question for the Section 108 Study Group, the venue for recommending changes to that section of the copyright law, which used to convene every five years but was disbanded by Congress after its last report in 2008.
A limited form of just such a statutory license-type solution has actually been suggested in the private sector already, in the proposed settlement to publishers’ and authors’ lawsuits against Google. It includes giving public libraries rights to make every book scanned on Google’s behalf — over 12 million titles at last count — available on a single terminal within each library. Libraries would not even have to pay for this. However, this doesn’t allow e-books to be available outside of libraries’ physical confines, it doesn’t allow libraries to acquire multiple copies of e-books they want to make available to more than one patron at a time, and Google can withhold up to 15% of its scanned titles at its discretion.
The Google book settlement is still unresolved, but the terms in it show that publishers may be willing to grant libraries some limited e-book lending rights. Libraries have complained about the “table crumbs” offered to them in the Google book settlement. But unless they take action similar to what I’ve described here, those rights may be the best that public libraries can hope for as the e-book market expands.