
Adobe’s Latest E-Book Misstep: This Time, It’s Not the DRM October 10, 2014

Posted by Bill Rosenblatt in DRM, Publishing, Technologies.
18 comments

A few days ago, it emerged that the latest version of Adobe’s e-book reading software for PCs and Macs, Adobe Digital Editions 4 (ADE4), collects data about users’ reading activities and sends it to Adobe’s servers as unencrypted plain text, so that anyone along the network path can intercept and read the data, even without NSA-grade snooping tools.

The story was broken by Nate Hoffelder at The Digital Reader on Monday.  The Internet being the Internet, the techblogosphere was soon full of stories about it, mostly half-baked analysis, knee-jerk opinions, jumped-to conclusions, and just plain misinformation.  Even the usually thorough and reliable Ars Technica, the first to publish serious technical analysis, didn’t quite get it right.  As of this writing, the best summary comes from the respected library technologist Eric Hellman.

More actual facts about this sorry case will emerge in the coming days, no doubt, leading to a fully clear picture of what Adobe is doing and why.  My purpose here and now is to address the various accusations that this latest e-book gaffe by Adobe has to do with its DRM.  These include a gun-jumping post by the Electronic Frontier Foundation (EFF) that has inadvertently dragged Sony DADC, the division of Sony that is currently marketing a DRM solution for e-books, into the mess undeservedly.

Let’s start with the basics: ADE4 does collect information about users’ reading activities and transmit it in the clear.  This is just plain unacceptable; no matter what Adobe’s terms and conditions might say, it’s a breach of privacy and trust, and (as I’ll discuss later) it seems like a strange fit with Adobe’s role in the e-book ecosystem.  Whether it’s naivete, sloppiness, or both, it’s redolent of Adobe’s missteps in its release of the latest version of its e-book DRM at the beginning of this year.

But is ADE4’s data reporting part of the DRM, as various people have suggested?  No.

The reporting on this story to date has missed one small but important fact, which I suspected and then confirmed with a well-placed source yesterday: ADE4 reports data on all EPUB format files, whether or not they are DRM-encrypted.  The DRM client (Adobe RMSDK) is completely separate from the reporting scheme.  By analogy, this would be like Apple collecting data on users’ music and movie playing habits from their iTunes software, even though Apple’s music files are DRM-free (though movies are not).

Some savvier writers have pointed out that even though DRM may not be directly involved, this is what happens when users are forced to use media rendering software that’s part of a DRM-based ecosystem.  This is a fair point, but in this particular case it’s not really true.  (It would be more true in the case of Amazon, which forces people to use its e-reading devices and apps, and unquestionably collects data on users’ reading behaviors – although it encrypts the information.)

Unlike in the Kindle ecosystem, users aren’t forced to use ADE4; it’s one of several e-reader software packages available that read EPUB files encrypted with Adobe’s Content Server DRM.  None of the major e-book retailers use or require it, at least not in the United States.  Instead, it is most often used to read e-books that are borrowed from public libraries using e-lending platforms such as OverDrive; and in fact such libraries recommend and link to Digital Editions on their websites.

But other e-reader apps, such as the increasingly popular BlueFire Reader for Android, iOS, and Windows, will work just as well in reading e-books encrypted with Adobe’s DRM, as well as DRM-free EPUB files.  BlueFire (who can blame them?) sees the opportunity here and points out that it does not do this type of data collection.  Users of library e-lending systems can use BlueFire or other apps instead of ADE4.  Earlier versions of ADE also don’t collect and report reading data.

A larger question is why Adobe collects this data in the first place.  The usual reason for collecting users’ reading (or listening or viewing) data is for analytics purposes, to help content owners determine what’s popular and hone their marketing strategies.  Yet not only is Adobe not an e-book retailer, but e-book retailers that use its DRM (such as Barnes & Noble) don’t use Digital Editions as their client software.

One possible explanation is that Adobe is expecting to market ADE4 as part of its new DRM ecosystem that’s oriented towards the academic and educational publishing markets, and that it expects the data to be attractive to publishers in those market segments (as opposed to the trade books typically found in public libraries).  Eric Hellman suggests another plausible explanation: that it collects data not for analytics purposes but to support a device-syncing feature that all of the major e-book retailers already offer – so that users can automatically get their e-books on all of their devices and have each device sync to the last page that the user read in each book.

Regardless of the reason, it seems unsettling when a platform software vendor, as opposed to an actual retailer, collects this type of information.  Here’s another analogy: various video websites use Microsoft’s Silverlight web application environment.  Silverlight contains a version of Microsoft’s PlayReady DRM.  Users don’t see the Microsoft brand; instead they see brands like Netflix that use the technology.  Users might expect Netflix to collect information about their viewing habits (provided that Netflix treats the information appropriately), but they would be concerned to hear (in a vacuum) that Microsoft does it; and in fact Microsoft probably does contribute to the collection of viewing information for Netflix and other services that use Silverlight.

In any case, Adobe can fix the situation easily enough by encrypting the data (e.g., via SSL), providing a user option in Digital Editions to turn off the data collection, and offering better explanations as to why it collects the data in the first place (at least better than the ambiguous, anodyne, PR/legal department-buffed one shown here).  Until then, platform providers like OverDrive can link to other reader apps, like BlueFire, instead of to Adobe Digital Editions.
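
To make the first of those fixes concrete, here is a minimal sketch of the difference between sending a reading event in the clear and sending the same event over TLS.  The endpoint and payload are hypothetical, not Adobe’s actual protocol; the point is only that the encrypted version is unreadable to anyone snooping on the network.

```python
# Minimal sketch: a hypothetical reading-activity event sent in the clear
# versus over TLS. The endpoint and payload are made up, not Adobe's protocol.
import json
import urllib.request

event = {
    "book_id": "example-epub-0001",   # hypothetical identifiers
    "event": "page_turn",
    "page": 42,
}
payload = json.dumps(event).encode("utf-8")

# Problem: anyone on the network path can read this request and its payload.
insecure = urllib.request.Request(
    "http://logs.example.com/collect", data=payload,
    headers={"Content-Type": "application/json"})

# Fix: the same request over HTTPS; the payload is encrypted in transit.
secure = urllib.request.Request(
    "https://logs.example.com/collect", data=payload,
    headers={"Content-Type": "application/json"})

# urllib.request.urlopen(secure)  # not executed here; the endpoint is fictional
```

An opt-out switch would then simply be a matter of checking a user preference before constructing the request at all.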

Finally, as for Sony DADC: the EFF’s web page on this situation contains a link, as a “related case,” to material on a previous technical fiasco involving Sony BMG Music, one of the major recording companies in the mid-2000s.  At that time, Sony BMG released some albums on CDs that had been outfitted with a form of DRM.  When a user put the disc in a CD drive on a PC, an “autorun” executable installed a DRM client onto the PC, part of which was a “rootkit” that left the PC vulnerable to viruses.  After a firestorm of negative publicity that the EFF spearheaded, Sony BMG abandoned the technology.  (In one of its more savvy gambits, the EFF used momentum from that episode to cause other major labels to drop their CD DRMs as well; the technology was dead in the water by 2008.)  In this case, unlike with Adobe, the problem was most definitely in the DRM.

Apparently some people think that because this incident involved “Sony,” Sony DADC — which is currently marketing an e-book DRM solution based on the Marlin DRM technology — was involved.  Not true; the DRM that installed the rootkit came from a British company called First4Internet (F4I).  Not only did Sony DADC have nothing to do with this (as I have confirmed), but Sony DADC actually advised Sony Music against using the F4I technology.

That Old Question Again September 28, 2014

Posted by Bill Rosenblatt in DRM, Economics, Music, Services, United States.
add a comment

I’m looking at the U.S. music revenue numbers that the RIAA just released for the first half of 2014; at the same time, I’m reading Download: How Digital Destroyed the Record Business, a 2013 book by the noted UK music journalist Phil Hardy, who tragically passed away in April of this year.  For “numbers guys” like me, the book is a bonanza of information about the major labels’ travails during the transition from CDs to purely digital music. It’s a compendium of zillions of hard facts and opinions delivered with Hardy’s typical dry British wit  – though (like his other books) it would have benefited from a copy editor and, occasionally, fact checker.

One of the statements in Hardy’s book that sits somewhere between fact and opinion is his assertion — as recently as last year! — that the elimination of DRM from music downloads boosted sales.  Sigh… that old question again.

The question of whether DRM-free music download sales helped or hindered the music industry (no doubt it was good for consumers) served as a sort of Rorschach test back in the late 2000s after Apple and Amazon started selling DRM-free downloads — rather like the Rorschach test of Radiohead’s “pay what you wish” experiment in 2007.  If you hated DRM, DRM-free was going to usher in a bright new era of opportunity for everyone; if you liked it, removing DRM was going to spell the end of the music business.

So I thought that with the RIAA revenue statistics database in hand, I could put the old question to rest.  Here is what I found:

U.S. Digital Music Revenue, millions of 2013 dollars.  Source: RIAA.

In this chart, “Downloads” includes singles plus albums; “Streaming” includes both paid and ad-supported on-demand services (Spotify, Rhapsody, YouTube, Vevo)* as well as Internet radio (Pandora, iHeartRadio, Slacker, TuneIn Radio), plus satellite radio and a few other odds and ends.  I estimated totals for 2014 by taking the RIAA’s newly released numbers for the first half of this year and applying growth rates from the second half of 2013 to the first half of this year.
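
For readers who want to see the arithmetic, here is a minimal sketch of that projection method with made-up numbers.  The interpretation (estimate the second half of 2014 by carrying forward the growth rate observed from the second half of 2013 to the first half of 2014) is my own assumption, and the figures below are placeholders, not RIAA data.

```python
# Sketch of the half-year projection described above, with made-up numbers.
# Assumption: H2 2014 is estimated by applying the growth rate observed from
# H2 2013 to H1 2014 forward for another half year.
def project_full_year(h2_prev: float, h1_curr: float) -> float:
    """Estimate the full current year from H1 actuals plus a projected H2."""
    growth = h1_curr / h2_prev          # half-over-half growth rate
    h2_est = h1_curr * growth           # apply the same rate forward
    return h1_curr + h2_est

# Hypothetical figures in millions of dollars (not RIAA data):
print(project_full_year(h2_prev=800.0, h1_curr=860.0))  # 1784.5
```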

The relevant dates are:

  • April 2003: Apple opens the iTunes Music Store.
  • May 2007: Apple launches iTunes Plus, selling tracks from EMI without DRM for $1.29.
  • January 2008: Amazon’s AmazonMP3 store (launched in September 2007) completes deals to sell DRM-free MP3s from all of the major labels.
  • May 2009: iTunes goes completely DRM-free in the US.
  • 2011: Spotify launches its “freemium” model in the US; major labels complete ad revenue share deals with YouTube, so that virtually all major-label music is available on YouTube legally.

Before we get into the analysis, let’s get one thing out of the way: the biggest change in music industry revenues from 2003 onwards was, of course, the dramatic drop in revenues from CDs.  Those numbers aren’t shown here; for one thing, they would dwarf the other numbers.  This is all part of the move from physical products to digital products and services, which has affected both downloads and streaming.

Now let’s look at what happened after 2007.  Growth in download sales began to slow down a bit, while streaming remained fairly flat.  Starting in 2008, growth in paid downloads remained virtually unchanged until ad-supported on-demand streaming took hold in 2011.  2008 was a transitional year for DRM: Apple offered only a small amount of music DRM-free (and at higher prices), while Amazon offered all of its music DRM-free but had only a single-digit share of the market.  The real post-DRM era for paid downloads started in May 2009.

So, to see what happened after the major labels agreed to sell digital files without DRM, we need to look at the period from May 2009 to the start of 2011, which is highlighted in the chart.  What happened then?  Not much of anything.  Growth in download sales was essentially unchanged from the preceding two years.

One could argue that if streaming had never existed, download revenues might have grown faster after January 2009, given that streaming revenues started to grow faster from 2009 to 2011.  But given that streaming growth didn’t accelerate immediately after January 2009, I wouldn’t draw that causal conclusion.

So there you have the answer to the old question: removing DRM from music files had little or no effect on download sales.

As a postscript, my 2014 projections included an interesting factoid: vinyl album sales, if current growth rates continue, should reach about $340 million this year.  That takes the resurgence of vinyl from a mildly curious hipster phenomenon to almost 5% of total music revenue.  For comparison purposes, it makes vinyl almost as valuable as ad-supported on-demand streaming (YouTube, Spotify Free, Vevo) and puts it on track to exceed that segment in 2015.  Vinyl could even end up equaling CD revenue sometime around 2016-2017 — for the first time since the late 1980s!

*Paid subscription on-demand services include download features, which use DRM to tie files to users’ devices and make them playable as long as the user pays the subscription fees. But the RIAA reports these as part of “subscription services,” lumping them in with streaming on-demand music.

Mobile Phone Unlocking Legislation Leads to Renewed Interest in DMCA 1201 Reform August 10, 2014

Posted by Bill Rosenblatt in DRM, Law, Uncategorized, United States.
add a comment

President Obama recently signed into law a bill that allows people to “jailbreak” or “root” their mobile phones in order to switch wireless carriers.  The Unlocking Consumer Choice and Wireless Competition Act was that rarest of rarities these days: a bipartisan bill that passed both houses of Congress by unanimous consent. Copyleft advocates such as Public Knowledge see this as an important step towards weakening the part of the Digital Millennium Copyright Act that outlaws hacks to DRM systems, known as DMCA 1201.

For those of you who might be scratching your heads wondering what jailbreaking your iPhone or rooting your Android device has to do with DRM hacking, here is some background.  Last year, the U.S. Copyright Office declined to renew a temporary exception to DMCA 1201 that would make it legal to unlock mobile phones.  A petition to the president to reverse the decision garnered over 100,000 signatures, but as he has no power to do this, I predicted that nothing would happen.  I was wrong; Congress did take up the issue, with the resulting legislation breezing through Congress last month.

Around the time of the Copyright Office’s ruling last year, Zoe Lofgren, a Democrat who represents a chunk of Silicon Valley in Congress, introduced a bill called the Unlocking Technology Act that would go considerably further in weakening DMCA 1201.  This legislation would sidestep the triennial rulemaking process in which the Copyright Office considers temporary exceptions to the law; it would create permanent exceptions to DMCA 1201 for any hack to a DRM scheme, as long as the primary purpose of the hack is not an infringement of copyright.  The ostensible aim of this bill is to allow people to break their devices’ DRMs for such purposes as enabling read-aloud features in e-book readers, as well as to unlock their mobile phones.

DMCA 1201 was purposefully crafted so as to disallow any hacks to DRMs even if the resulting uses of content are noninfringing.  There were two rationales for this.  Most basically, if you could hack a DRM, then you would be able to get unencrypted content, which you could use for any reason, including emailing it to your million best friends (which would have been a consideration in the 1990s when the law was created, as Torrent trackers and cyberlockers weren’t around yet).

But more specifically, if it’s OK to hack DRMs for noninfringing purposes, then potentially sticky questions about whether a resulting use of content qualifies as fair use must be judged the old-fashioned way: through the legal system, not through technology.  And if you are trying to enforce copyrights, once you fall through what I have called the trap door into the legal system, you lose: enforcement through the traditional legal system is massively less effective and efficient than enforcement through technology.  The media industry doesn’t want judgments about fair use from hacked DRMs to be left up to consumers; it wants to reserve the benefit of the doubt for itself.

The tech industry, on the other hand, wants to allow fair uses of content obtained from hacked DRMs in order to make its products and services more useful to consumers.  And there’s no question that the Unlocking Technology Act has aspects that would be beneficial to consumers.  But there is a deeper principle at work here that renders the costs and benefits less clear.

The primary motivation for DMCA 1201 in the first place was to erect a legal backstop for DRM technology that wasn’t very effective — such as the CSS scheme for DVDs, which was the subject of several DMCA 1201 litigations in the previous decade.  The media industry wanted to avoid an “arms race” against hackers.  The telecommunications industry — which was on the opposite side of the negotiating table when these issues were debated in the 1990s — was fine with this: telcos understood that with a legal backstop against hacks in place, they would have less responsibility to implement more expensive and complex DRM systems that were actually strong; furthermore, the law placed accountability for hacks squarely on hackers, and not on the service providers (such as telcos) that implemented the DRMs in the first place.  In all, if there had to be a law against DRM hacking, DMCA 1201 was not a bad deal for today’s service providers and app developers.

The problem with the Unlocking Technology Act is in the interpretation of phrases in it like “primarily designed or produced for the purpose of facilitating noninfringing uses of [copyrighted] works.”  Most DRM hacks that I’m familiar with are “marketed” with language like “Exercise your fair use rights to your content” and disclaimers — nudge, nudge, wink, wink — that the hack should not be used for copyright infringement.  Hacks that developers sell for money are subject to the law against products and services that “induce” infringement, thanks to the Supreme Court’s 2005 Grokster decision, so commercial hackers have been on notice for years about avoiding promotional language that encourages infringement.  (And of course none of these laws apply outside of the United States.)

So, if a law like the Unlocking Technology Act passes, then copyright owners could face challenges in getting courts to find that DRM hacks were not “primarily designed or produced for the purpose of facilitating noninfringing uses[.]”  The question of liability would seem to shift from the supplier of the hack to the user.  In other words, this law would render DMCA 1201 essentially toothless — which is what copyleft interests have wanted all along.

From a pragmatic perspective, this law could lead non-dominant retailers of digital content to build DRM hacks into their software for “interoperability” purposes, to help them compete with the market leaders.  It’s particularly easy to see why Google would want this, as it has zillions of users but has struggled to get traction for its Google Play content retail operations.  Under this law, Google could add an “Import from iTunes” option for video and “Import from Kindle/Nook/iBooks” options for e-books.  (And once one retailer did this, all of the others would follow.)  As long as those “import” options re-encrypted content in the native DRM, there shouldn’t be much of an issue with “fair use.”  (There would be plenty of issues about users violating retailers’ license agreements, but that would be a separate matter.)

This in turn could cause retailers that use DRM to lock consumers into their services to implement stronger, more complex, and more expensive DRM.  They would have to use techniques that help thwart hacks over time, such as reverse engineering prevention, code diversity and renewability, and sophisticated key-hiding techniques such as whitebox encryption.  Some will argue that making lock-in more of a hassle will cause technology companies to stop trying.  This argument is misguided: first, lock-in is fundamental to theories of markets in the networked digital economy and isn’t likely to go away over the costs of DRM implementation; second, DRM is far from the only way to achieve lock-in.

The other question is whether Hollywood studios and other copyright owners will demand stronger DRM from service providers that have little motivation to implement it.  The problem, as usual, is that copyright owners demand the technology (as a condition of licensing their content) but don’t pay for it.  If there’s no effective legal backstop to weak DRM, then negotiations between copyright owners and technology companies may get tougher.  However, this may not be much of an issue where Hollywood is concerned, since studios tend to rely more heavily on terms in license agreements (such as robustness rules) than on DMCA 1201 to enforce the strength of DRM implementations.

Regardless, the passage of the mobile phone unlocking legislation has led to increased interest in the Unlocking Technology Act, such as the recent panel that Public Knowledge and other like-minded organizations put on in Washington.  Rep. Lofgren has succeeded in getting several more members of Congress to co-sponsor her bill.  The trouble is, all but one of them are Democrats (in a Republican-controlled House of Representatives not exactly known for cooperation with the other side of the aisle), and the Democratic-controlled Senate has not introduced parallel legislation.  This means that the fate of the Unlocking Technology Act is likely to be similar to that of past attempts to do much the same thing: the Digital Media Consumers’ Rights Act of 2003 and the Freedom and Innovation Revitalizing United States Entrepreneurship (FAIR USE) Act of 2007.  That is, it’s likely to go nowhere.

Dispatches from IDPF Digital Book 2014, Pt. 3: DRM June 5, 2014

Posted by Bill Rosenblatt in DRM, Publishing, Standards.
1 comment so far

The final set of interesting developments at last week’s IDPF Digital Book 2014 in NYC has to do with DRM and rights.

Tom Doherty, founder of the science fiction publisher Tor Books, gave a speech about his company’s experimentation with DRM-free e-books and its launch of a line of e-novellas without DRM.  The buildup to this speech (among those of us who were aware of the program in advance) was palpable, but the result fell with a thud.  You had to listen hard to find the tiny morsel about how going DRM-free has barely affected sales; otherwise the speech was standard-issue dogma about DRM with virtually no new insights or data.  And he did not take questions from the audience.

DRM has become something of a taboo subject even at conferences like this, so most of the rest of the discussion about it took the form of hallway buzz.  And the buzz is that many are predicting that DRM will be on its way out for retail trade e-books within the next couple of years.

That’s the way things are likely to go if technology market forces play out the way they usually do.  Retailers other than Amazon (and possibly Apple) will want to embrace more open standards so that they can offer greater interoperability and thus band together to compete with the dominant player; getting rid of DRM is certainly a step in that direction.  Meanwhile, publishers, getting more and more fed up with or afraid of Amazon, will find common cause with other retailers and agree to license more of their material for distribution without DRM.  (Several retailers in second-tier European countries as well as some retailers for self-publishing authors, such as Lulu, have already dropped DRM entirely.)

Such sentiments will eventually supersede most publishers’ current “faith-based” insistence on DRM.  In other words, publishers and retailers will behave more or less the same way as the major record labels and non-Apple retailers behaved back in 2006-2007.

This course of events seems inevitable… unless publishers get some hard, credible data that tells them that DRM helps prevent piracy and “oversharing” more than it hurts the consumer experience.  That’s the only way (other than outright inertia) that I can see DRM staying in place for trade books over the next couple of years.

The situation for educational, professional, and STM (scientific, technical, medical) books is another story (as are library lending and other non-retail models).  Higher ed publishers in particular have reasons to stick with DRM: for example, e-textbook piracy has been rising dramatically in recent years, with the share of students who report pirating e-textbooks reaching 34% as of last year.

Adobe recently re-launched its DRM with a focus on these publishing market segments. I’d describe the re-launch as “awkward,” though publishers I’ve spoken to would characterize it in less polite terms.  This has led to openings for other vendors, such as Sony DADC; and the Readium Foundation is still working on the open-source EPUB Lightweight Content Protection scheme.

The hallway buzz at IDPF Digital Book was that DRM for these market segments is here to stay — except that in higher ed, it may become unnecessary in a longer timeframe, when educational materials are delivered dynamically and in a fashion more akin to streaming than to downloads of e-books.

I attended a panel on EDUPUB, a standards initiative aimed at exactly this future for educational publishing.  The effort, led by Pearson Education (the largest of the educational publishers), the IMS Global Learning Consortium, and IDPF, is impressive: it’s based on combining existing open standards (such as IDPF’s EPUB 3) instead of inventing new ones.  It’s meant to be inclusive and beneficial to all players in the higher ed value chain, including Pearson’s competitors.

However, EDUPUB is in danger of making the same mistake as the IDPF did by ignoring DRM and other rights issues.  When asked about DRM, Paul Belfanti, Pearson’s lead executive on EDUPUB, answered that EDUPUB is DRM-agnostic and would leave decisions on DRM to providers of content delivery platforms.  That decision was problematic for trade publishers when the IDPF made it for EPUB several years ago; it is potentially even more problematic for higher ed, since EDUPUB-based materials could certainly be delivered in e-textbook form.

EDUPUB could also help enable one of the Holy Grails of higher ed publishing, which is to combine materials from multiple publishers into custom textbooks or dynamically delivered digital content.  Unlike most trade books, textbooks often contain hundreds or thousands of content components, each of which may have different rights associated with them.

Clearing rights for higher ed content is a manual, labor-intensive job.  In tomorrow’s world of dynamic digital educational content, it will be more important than ever to make sure that the content being delivered has the proper clearances, in real time.  In reality, this doesn’t necessarily involve DRM; it’s mainly a question of machine-readable rights metadata.
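
To make that concrete, here is a purely illustrative sketch of what machine-readable rights metadata for a single textbook component, plus a real-time clearance check against it, might look like.  The field names are hypothetical; they are not drawn from EDUPUB or any existing rights standard.

```python
# Purely illustrative rights metadata for one textbook component.
# Field names are hypothetical, not taken from EDUPUB or any real standard.
component_rights = {
    "component_id": "fig-3-7-cell-diagram",
    "rights_holder": "Example Imprint",
    "licensed_uses": ["classroom_display", "custom_compilation"],
    "territories": ["US", "CA"],
    "expires": "2016-08-31",
}

def clearance_ok(rights: dict, use: str, territory: str, date: str) -> bool:
    """Check in real time whether a requested use of a component is cleared."""
    return (use in rights["licensed_uses"]
            and territory in rights["territories"]
            and date <= rights["expires"])       # ISO dates compare lexically

# A dynamic assembly system could run this check as it pulls each component:
print(clearance_ok(component_rights, "custom_compilation", "US", "2015-01-15"))  # True
```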

Attempts to standardize this type of rights metadata date back at least to the mid-1990s (when I was involved in such an attempt); none have succeeded.  This is a “last mile” issue that EDUPUB will have to address, sooner rather than later, for it to make good on its very promising start.  DRM and rights are not popular topics for standards bodies to address, but it has become increasingly clear that they must address these issues to be successful.

Adobe Resurrects E-Book DRM… Again February 10, 2014

Posted by Bill Rosenblatt in DRM, Publishing.
3 comments

Over the past couple of weeks, Adobe has made a series of low-key announcements regarding new versions of its DRM for e-books, Adobe Content Server and Rights Management SDK.  The new versions are ACS5 and RMSDK10 respectively; they have been released for major platforms (iOS, Android, etc.), with more to come next month.

The new releases, though rumored for a while, came as something of a surprise to those of us who understood Adobe to have lost interest in the e-book market… again.  Adobe first lost interest back in 2006, before the launch of the Kindle kicked the market into high gear.  At that time, Adobe announced that version 3 of ACS would be discontinued.  Then the following year, Adobe reversed course and introduced ACS4.  ACS4 supports the International Digital Publishing Forum (IDPF)’s EPUB standard as well as Adobe’s PDF.

This saga repeated itself, roughly speaking, over the past year.  As the IDPF worked on version 3 of EPUB, Adobe indicated that it would not upgrade its e-reader software to work with it, nor would it guarantee that ACS4 would support it.  The DRM products were transferred to an offshore maintenance group within Adobe, and all indications were that Adobe was not going to develop it any further.  Now that’s all changed.

Adobe had originally positioned ACS in the e-book market as a de facto standard DRM.  It licensed the technology to a large number of makers of e-reader devices and applications, and e-book distributors around the world.  At first this strategy seemed to work: ACS looked like an “everyone but Amazon” de facto standard, and some e-reader vendors (such as Sony) even migrated from proprietary DRMs to the Adobe technology.

But then cracks began to appear: Barnes & Noble “forked” ACS with its own extensions to support features such as user-to-user lending in the Nook system; Apple launched iBooks with a variant of its FairPlay DRM for iTunes content; and independent bookstores’ IndieBound system adopted Kobo, which has its own DRM.  Furthermore, interoperability of e-book files among different RMSDK-based e-readers was not exactly seamless.  As of today, “pure” ACS represents only a minor part of the e-book retail market, at least in the US: chiefly Google Play, Smashwords, and retailers served by OverDrive and other wholesalers.

It’s unclear why Adobe chose to go back into the e-book DRM game, though pressure from publishers must have been a factor.  Adobe can’t do much about interoperability glitches among retailers and readers, but publishers and distributors alike have asked for various features to be added to ACS over the years.  Publishers have mainly been concerned with the relatively easy availability of hacks, while distributors have also expressed the desire for a DRM that facilitates certain content access models that ACS4 does not currently support.

The new ACS5/RMSDK10 platform promises to give both publishers and distributors just about everything they have asked for.  First, Adobe has beefed up the client-side security using (what appear to be) software hardening, key management, and crypto renewability techniques that are commonly used for video and games nowadays.

Adobe has also added support for several interesting content access models. At the top of the list of most requested models is subscriptions.  ACS5 will not only support periodical-style subscriptions but also periodic updates to existing files; the latter is useful in STM (scientific, technical, medical) and various professional publishing markets.

ACS5 also contains two enhancements that are of interest to the educational market.  One is support for collections of content shared among multiple devices, which is useful for institutional libraries.  Another is support for “bulk fulfillment,” such as pre-loading e-reader devices with encrypted books (such as textbooks).  Bulk fulfillment requires a feature called separate license delivery, which is supported in many DRMs but hasn’t been in ACS thus far.  With separate license delivery, DRM-packaged files can be delivered in any way (download, optical disk, device pre-load, etc.), and then the user’s device or app can obtain licenses for them as needed.
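
Here is a rough sketch of the flow that separate license delivery enables: the encrypted book is already on the device (pre-loaded, on disc, or downloaded in bulk), and only a small license carrying the content key and usage rules is fetched when the user first opens it.  The endpoint, message format, and field names are hypothetical, not Adobe’s actual protocol.

```python
# Hypothetical sketch of separate license delivery. The DRM-packaged file is
# already on the device; the client requests a license for it only when
# needed. The endpoint and fields below are made up, not Adobe's protocol.
import json
import urllib.request

def acquire_license(content_id: str, device_id: str) -> dict:
    """Ask a (fictional) license server for the content key and usage rules."""
    body = json.dumps({"content_id": content_id, "device_id": device_id})
    req = urllib.request.Request(
        "https://license.example.com/acquire",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)   # e.g. {"content_key": "...", "expires": "..."}

# On first open of a pre-loaded textbook (not executed here):
# license = acquire_license("urn:isbn:0000000000", "device-1234")
# decrypt_and_render(book_bytes, license["content_key"])   # hypothetical helper
```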

Finally, ACS5 will support the Readium Foundation’s open-source EPUB3 e-reader software.  Adobe is “evaluating the feasibility” of supporting the Readium EPUB 3 SDK in its own Adobe Reader Mobile SDK, but ACS5’s Readium support means that distributors will now be able to accommodate EPUB3 in their apps.

In all, ACS5 fulfills many of the wish list items that I have heard from publishers over the past couple of years, leaving one with the impression that it could expand its market share again and move towards Adobe’s original goal of de facto standard-hood (except for Amazon and possibly Apple).  ACS5 is backward compatible with older versions of ACS and does not require that e-books be re-packaged; in other words, users can read their older files in RMSDK10-enabled e-readers.

Yet Adobe made a gaffe in its announcements that immediately jeopardized all this potential: it initially gave the impression that it would force upgrades to ACS5/RMSDK10 this July.  (Watch this webinar video from Adobe’s partner Datalogics, starting around the 21-minute mark.)  Distributors would have to upgrade their apps to the latest versions, with the hardened security; and users would have to install the upgrades before being able to read e-books packaged with the new DRM.  Furthermore, if users obtain e-books packaged with the new DRM, they would not be able to read them on e-readers based on the older RMSDK. (Yet another sign that Adobe has acted on pressure from publishers rather than distributors.)  In other words, Adobe wanted to force the entire ACS ecosystem to move to a more secure DRM client in lock-step.

This forced-upgrade routine is similar to what DRM-enabled download services like iTunes (video) do with their client software.  But then Apple doesn’t rely on a network of distributors, almost all of which maintain their own e-reading devices and apps.

In any case, the backlash from distributors and the e-publishing blogosphere was swift and harsh; and Adobe quickly relented.  Now the story is that distributors can decide on their own upgrade timelines.  In other words, publishers will themselves have to put pressure on distributors to upgrade the DRM, at least for the traditional retail and library-lending models; and some less-secure implementations will likely remain out there for some time to come.

Adobe’s new release has to strike a balance between divergent effects of DRM.  On the one hand, DRM interoperability is more important than ever for publishers and distributors alike, to counteract the dominance of Amazon in the e-book retail market; and the surest way to achieve DRM interoperability is to do away with DRM altogether.  (There are other ways to inhibit interoperability that have nothing to do with DRM.)  But on the other hand, integrating interoperability with support for content access models that are unsupportable without some form of content access control — such as subscriptions and institutional library access — seems like an attractive idea.  Adobe has survived tugs-of-war with publishers and distributors over DRM restrictions before, so this one probably won’t be fatal.

Judge Dismisses E-Book DRM Antitrust Case December 12, 2013

Posted by Bill Rosenblatt in DRM, Law, Publishing.
2 comments

Last week a federal judge in New York dismissed a lawsuit that a group of independent booksellers brought earlier this year against Amazon.com and the (then) Big Six trade publishers.  The suit alleged that the publishers were conspiring with Amazon to use Amazon’s DRM to shut the indie booksellers out of the majority of the e-book market.  The three bookstores sought class action status on behalf of all indie booksellers.

In most cases, independent booksellers can’t sell e-books that can be read on Amazon’s Kindle e-readers; instead they have a program through the Independent Booksellers Association that enables consumers to buy e-books from the stores’ websites via the Kobo e-book platform, which has apps for all major devices (PCs, iOS, Android, etc.) as well as Kobo eReaders.

Let’s get the full disclosure out of the way: I worked with the plaintiffs in this case as an expert witness.  (Which is why I didn’t write about this case when it was brought several months ago.)  I did so because, like others, I read the complaint and found that it reflected various misconceptions about DRM and its place in the e-book market; I thought that perhaps I could help educate the booksellers.

The booksellers asked the court to enjoin (force) Amazon to drop its proprietary DRM, and to enjoin the Big Six to allow independent bookstores to sell their e-books using an interoperable DRM that would presumably work with Kindles as well as iOS and Android devices, PCs, Macs, BlackBerrys, etc.  (The term that the complaint used for the opposite of “interoperable DRM” was “inoperable DRM,” much to the amusement of some anti-DRM folks.)

There were two fundamental problems with the complaint.  One was that it presupposed the existence of an idealized interoperable DRM that would work with any “interoperable or open architecture device,” and that “Amazon could easily, and without significant cost or disruption, eliminate its device specific restrictive [] DRM and instead utilize an available interoperable system.”

There is no such thing, nor is one likely to come into being.  I worked with the International Digital Publishing Forum (IDPF), the trade association for e-books, to design a “lightweight” content protection scheme that would be attractive to a large number of retailers through low cost of adoption, but that project is far from fruition, and in any case, no one associated with it is under any illusion that all retailers will adopt the scheme.  The only DRM that is guaranteed to work with all devices and all retailers forever is no DRM at all.

The closest thing there is to an “interoperable” DRM nowadays is Adobe Content Server (ACS) — which isn’t all that close.  Adobe had intended ACS to become an interoperable standard, much like PDF is.  Unlike Amazon’s Mobipocket DRM and Apple’s FairPlay DRM for iBooks, ACS can be licensed and used by makers of e-reader devices and apps.  Several e-book platforms do use it.  But the only retailer with significant market share in the United States that does so is Barnes & Noble, which has modified it and combined it with another DRM that it had acquired years ago.  Kobo has its own DRM and uses ACS only for interoperability with other environments.

More relevantly, I have heard it said that Amazon experimented with ACS before launching the Kindle with the Mobipocket DRM that it acquired back in 2005.  But in any case, ACS’s presence in the US e-book market is on the wane, and Adobe has stopped actively working on the product.

The second misconception in the booksellers’ complaint was the implication that the major publishers had an interest in limiting their opportunities to sell e-books through indie bookstores.  The reality is just the opposite: publishers, from the (now) Big Five on down, would like nothing more than to be able to sell e-books through every possible retailer onto every possible device.  The complaint alleges that publishers “confirmed, affirmed, and/or condoned AMAZON’s use of restrictive DRMs” and thereby conspired to restrain trade in the e-book market.

Publishers have been wary of Amazon’s dominant market position for years, but they have tolerated its proprietary technology ecosystem — at least in part because many of them understand that technology-based media markets always settle down to steady states involving two or three different platforms, protocols, formats, etc.  DRM helps vendors create walls around their ecosystems, but it is far from the only technology that does so.

As I’ve said before, the ideal of an “MP3 for e-books” is highly unlikely and is largely a mirage in any case.  Copyright owners face a constant struggle to create and preserve level playing fields for retailers in the digital age, one that the more savvy among them recognize they can’t win as completely as they would like.

Judge Jed Rakoff picked up on this second point in his opinion dismissing the case. He said, “… nothing about [the] fact [that publishers made agreements with Amazon requiring DRM] suggests that the Publishers also required Amazon to use device-restrictive DRM limiting the devices on which the Publishers’ e-books can be display[ed], or to place restrictions on Kindle devices and apps such that they could only display e-books enabled with Amazon’s proprietary DRM. Indeed, unlike DRM requirements, which clearly serve the Publishers’ economic interests by preventing copyright violations, these latter types of restrictions run counter to the Publishers’ interests …” (emphasis in original).

Indie bookstores are great things; it’s a shame that Amazon’s Kindle ecosystem doesn’t play nicely with them. But at the end of the day — as Judge Rakoff also pointed out — Amazon competes with independent booksellers, and “no business has a duty to aid competitors,” even under antitrust law.

In fact, Amazon has repeatedly shown that it will “cooperate” with competitors only as a means of cutting into their markets. Its extension of the Kindle platform to public library e-lending last year is best seen as part of its attempt to invade libraries’ territory. More recently, Amazon has attempted to get indie booksellers interested in selling Kindle devices in their stores, a move that has elicited frosty reactions from the bookstores.

The rest of Judge Rakoff’s opinion dealt with the booksellers’ failure to meet legal criteria under antitrust law.  Independent booksellers might possibly have a case to bring against Amazon for boxing them out of the market as reading goes digital, but Book House of Stuyvesant Plaza et al v. Amazon.com et al wasn’t it.

MovieLabs Releases Best Practices for Video Content Protection October 23, 2013

Posted by Bill Rosenblatt in DRM, Standards, Video.
3 comments

As Hollywood prepares for its transition to 4k video (four times the resolution of HD), it appears to be adopting a new approach to content protection, one that promotes more service flexibility and quicker time to market than previous approaches but carries other risks.  The recent publication of a best-practices document for content protection from MovieLabs, Hollywood’s R&D consortium, signals this new approach.

In previous generations of video technology, Hollywood studios got together with major technology companies and formed technology licensing entities to set and administer standards for content protection.  For example, a subset of the major studios teamed up with IBM, Intel, Microsoft, Panasonic, and Toshiba to form AACS LA, the licensing authority for the AACS content protection scheme for Blu-ray discs and (originally) HD DVDs.  AACS LA defines the technology specification, sets the terms and conditions under which it can be licensed, and performs other functions to maintain the technology.

A licensing authority like AACS LA (and there is a veritable alphabet soup of others) provides certainty around technology implementation, including compliance, patent licensing, and interoperability among licensees.  It helps insulate the major studios from accusations of collusion by being a separate entity in which at most a subset of them participate.

As we now know, the licensing-authority model has its drawbacks.  One is that it can take the licensing authority several years to develop technology specs to a point where vendors can implement them — by which time they risk obsolescence.  Another is that it does not offer much flexibility in how the technology can adapt to new device types and content delivery paradigms.  For example, AACS was designed with optical discs in mind at a time when Internet video streaming was just a blip on the horizon.

A document published recently by MovieLabs signals a new approach.  MovieLabs Specification for Enhanced Content Protection is not really a specification, in that it is nowhere near detailed enough to serve as the basis for implementations.  It is more a compendium of what we now understand as best practices for protecting digital video.  It leaves room for change and interpretation.

The best practices in the document amount to a wish list for Hollywood.  They include things like:

  • Techniques for limiting the impact of hacks to DRM schemes, such as requiring device as well as content keys, code diversity (a hack that works on one device won’t necessarily work on another), title diversity (a hack that works with one title won’t necessarily work on another), device revocation, and renewal of protection schemes.
  • Proactive renewal of software components instead of “locking the barn door after the horse has escaped.”
  • Component technologies that are currently considered safe from hacks by themselves, including standard AES encryption with a minimum key length of 128 bits and version 2.2 or later of the HDCP scheme for protecting links such as HDMI cables (earlier versions were hacked).
  • Hardware roots of trust on devices, running in secure execution environments, to limit opportunities for key leakage.
  • Forensic watermarking, meaning that content should have information embedded in it about the device or user who requested it.
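
On the last point, a toy example may help show what embedding information about the requesting device or user means in principle.  The sketch below writes a session ID into the least significant bits of a frame’s pixels; real forensic watermarking systems are far more sophisticated, since they have to survive re-encoding, scaling, and camcording, but the idea of tying a copy to its recipient is the same.

```python
# Toy illustration of forensic watermarking: hide a 32-bit session ID in the
# least significant bits of the first 32 pixels of a grayscale frame. Real
# systems are far more robust; this only shows the principle of marking each
# delivered copy with the identity of the session that requested it.
import numpy as np

def embed_session_id(frame: np.ndarray, session_id: int) -> np.ndarray:
    marked = frame.copy().ravel()
    for i in range(32):
        bit = (session_id >> i) & 1
        marked[i] = (marked[i] & 0xFE) | bit   # overwrite the pixel's LSB
    return marked.reshape(frame.shape)

def extract_session_id(frame: np.ndarray) -> int:
    bits = frame.ravel()[:32] & 1
    return int(sum(int(b) << i for i, b in enumerate(bits)))

frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)  # fake frame
marked = embed_session_id(frame, session_id=0xC0FFEE)
assert extract_session_id(marked) == 0xC0FFEE
```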

Those who saw Sony Pictures CTO Spencer Stephens’s talk at the  Anti-Piracy and Content Protection Summit in LA back in July will find much of this familiar.  Some of these techniques come from the current state of the art in content protection for pay TV services; for more detail on this, see my whitepaper The New Technologies for Pay TV Content Security.  Others, such as the forensic watermarking requirement, come from current systems for distributing HD movies in early release windows.  And some result from lessons learned from cracks to older technologies such as AACS, HDCP, and CSS (for DVDs).

MovieLabs is unable to act as a licensor of standards for content protection (or anything else, for that matter).  The six major studios set it up in 2005 as a movie industry joint R&D consortium modeled on the cable television industry’s CableLabs and other organizations enabled by the National Cooperative Research Act of 1984, such as Bellcore (telecommunications) and SEMATECH (semiconductors).  R&D consortia are allowed, under antitrust law, to engage in “pre-competitive” research and development, but not to develop technologies that are proprietary to their members.

Accordingly, the document contains a lot of language intended to disassociate these requirements from any actual implementations, standards, or studio policies, such as “Each studio will determine individually which practices are prerequisites to the distribution of its content in any particular situation” and “This document defined only one approach to security and compatibility, and other approaches may be available.”

Instead, the best-practices approach looks like it is intended to give “signals” from the major studios to content protection technology vendors, such as Microsoft, Irdeto, Intertrust, and Verimatrix, who work with content service providers.  These vendors will then presumably develop protection schemes that follow the best practices, with an understanding that studios will then agree to license their content to those services.

The result of this approach should be legal content services for next-generation video that get to market faster.  The best practices are independent of things like content delivery modalities (physical media, downloads, streaming) and largely independent of usage rules.  Therefore they should enable a wider variety of services than is possible with the traditional licensing authority paradigm.

Yet this approach has two drawbacks compared to the older approach.  (And of course the two approaches are not mutually exclusive.)  First is that it jeopardizes the interoperability among services that Hollywood craves — and has gone to great lengths to preserve in the UltraViolet standard.  Service providers and device makers can incorporate content protection schemes that follow MovieLabs’ best practices, but consumers may not be able to move content among them, and service providers will be able to use content protection schemes to lock users into their services.  In contrast, many in Hollywood are now nostalgic for the DVD because, although its protection scheme was easily hacked, it guaranteed interoperability across all players (at least all within a given geographic region).

The other drawback is that the document is a wish list provided by organizations that won’t pay for the technology.  This means that downstream entities such as device makers and service providers will treat it as the maximum amount of protection that they have to implement to get studio approval.  Because there is no license agreement that they have to sign to get access to the technology, the downstream entities are likely to negotiate down from there.  (Such negotiation already took place behind the scenes during the rollout of Blu-ray, as player makers refused to implement some of the more expensive protection features and some studios agreed to let them slip.)

Downstream entities are particularly likely to push back against some of MovieLabs’s best practices that involve costs and potential impairments of the user experience; examples include device connectivity to networks for purposes of authentication and revocation, proactive renewal of device software, and embedding of situation-specific watermarks.

Surely the studios understand all this.  The publication of this document by MovieLabs shows that Hollywood is willing to entertain dialogues with service providers, device makers, and content protection vendors to speed up time-to-market of legitimate video services and ensure that downstream entities can innovate more freely.  How much protection will the studios ultimately end up with when 4k video reaches the mainstream?  It will be very interesting to watch over the next couple of years.

E-Book Watermarking Gains Traction in Europe October 3, 2013

Posted by Bill Rosenblatt in DRM, Europe, Publishing, United States, Watermarking.
2 comments

The firm Rüdiger Wischenbart Content and Consulting has just released the latest version of Global eBook, its overview of the worldwide ebook market.  This sweeping, highly informative report is available for free during the month of October.

The report contains lots of information about piracy and rights worldwide — attitudes, public policy initiatives, and technologies.  A few conclusions in particular stand out.  First, while growth of e-book reading appears to be slowing down, it has reached a level of 20% of book sales in the U.S. market (and even higher by unit volume).  This puts e-books firmly in the mainstream of media consumption.

Accordingly, e-book piracy has become a mainstream concern.  Publishers — and their trade associations, such as the Börsenverein des Deutschen Buchhandels in Germany, which is the most active on this issue — had been less involved in the online infringement issue than their counterparts in the music and film industries, but that’s changing now.  Several studies have been done that generally show e-book piracy levels rising rapidly, but there’s wide disagreement on its volume.  And virtually no data at all is available about the promotional vs. detrimental effects of unauthorized file-sharing on legitimate sales.  Part of the problem is that e-book files are much smaller than music MP3s or (especially) digital video or games; therefore e-book files are more likely to be shared through email (which can’t be tracked) and less likely to be available through torrent sites.

The lack of quantitative understanding of infringement and its impact has led different countries to pursue different paths, in terms of both legal actions and the use of antipiracy technologies.  Perhaps the most surprising of the latter trend — at least to those of us on this side of the Atlantic — is the rapid ascendancy of watermarking (a/k/a “social DRM”) in some European countries.  For example:

  • Netherlands: Arbeiderspers/Bruna, the country’s largest book publisher, switched from traditional DRM to watermarking for its entire catalog at the beginning of this year.
  • Austria: 65% of the e-books available in the country have watermarks embedded, compared to only 35% with DRM.
  • Hungary: Watermarking is now the preferred method of content protection.
  • Sweden: Virtually all trade ebooks are DRM-free.  The e-book distributor eLib (owned by the Swedish media giant Bonnier) uses watermarking for 80% of its titles.
  • Italy: Watermarking has grown from 15% to 42% of all e-books, overtaking the 35% that use DRM.

(Note that these are, with all due respect to them, second-tier European countries.  I have anecdotal evidence that e-book watermarking is on the rise in the UK, but not much evidence of it in France or Germany.  At the same time, the above countries are often test beds for technologies that, if successful, spread to larger markets — whether by design or market forces.)

Meanwhile, there’s still a total absence of data on the effects of both DRM and watermarking on users’ e-book behavior — which is why I have been discussing with the Book Industry Study Group the possibility of doing a study on this.

The prevailing attitude among authors is that DRM should still be used.  An interesting data point on this came back in January when Lulu, one of the prominent online self-publishing services, decided to stop offering authors the option of DRM protection (using Adobe Content Server, the de facto standard DRM for ebooks outside of the Amazon and Apple ecosystems) for ebooks sold on the Lulu site.  Lulu authors would still be able to distribute their titles through Amazon and other services that use DRM.

Lulu announced this in a blog post which elicited large numbers of comments, largely from authors.  My pseudo-scientific tally of the authors’ comments showed that they are in favor of DRM — and unhappy with Lulu’s decision to drop it — by more than a two-to-one margin.  Many said that they would drop Lulu and move to its competitor Smashwords, which continues to support DRM as an option.  Remember that these are independent authors of mostly “long tail” titles in need of exposure, not bestselling authors or major publishers.

One reason for Lulu’s decision to drop DRM was undoubtedly the operational expense.  Smashwords’ CEO, Mark Coker, expressed the attitudes of ebook distributors succinctly in a Publishers Weekly article covering Lulu’s move when he said, “What’s relevant is whether the cost of DRM (measured by fees to Adobe, [and for consumers] increased complexity, decreased availability, decreased sharing and word of mouth, decreased customer satisfaction) outweigh the benefits[.]”  As we used to say over here, that’s the $64,000 question.

Content Protection for 4k Video July 2, 2013

Posted by Bill Rosenblatt in DRM, Technologies, Video, Watermarking.
15 comments

As Hollywood adepts know, the next phase in picture quality beyond HD is something called 4k.  Although the name suggests 4k (perhaps 4096) pixels in the vertical or horizontal direction, its resolution is actually 3840 × 2160, i.e., twice the pixels of HD in both the horizontal and vertical directions, and thus four times as many pixels overall.

4k is the highest quality of image actually captured by digital cinematography right now.  The question is, how will it be delivered to consumers, in what timeframe, and how will it be protected?

Those of us who attended the Anti-Piracy and Content Protection Summit in LA last week learned that the answer to the latter question is unknown as yet.  Spencer Stephens, CTO of Sony Pictures, gave a brief presentation explaining what 4k is and outlining his studio’s wish list for 4k content protection.  He said that it was an opportunity to start fresh with a new design, compared to the AACS content protection technology for Blu-ray discs, which is 10 years old.

This is interesting on a couple of levels.  First, it implies that the studios have not predetermined a standard for 4k content protection; in contrast, Blu-ray discs were introduced in the market about three years after AACS was designed.  Second, Stephens’s remarks had the flavor of a semi-public appeal to the community of content protection vendors — some of which were in the audience at this conference — for help in designing DRM schemes for 4k that met his requirements.

Stephens’s wish list included such elements as:

  • Title-by-title diversity, so that a technique used to hack one movie title doesn’t necessarily apply to another
  • Requiring players to authenticate themselves online before playback, which enables hacked players to be denied but makes it impossible to play 4k content without an Internet connection
  • The use of HDCP 2.2 to protect digital outputs, since older versions of HDCP have been hacked
  • Session-based watermarking, so that each 4k file is marked with the identity of the device or user that downloaded it (a technique used today with early-window HD content)
  • The use of trusted execution environments (TEE) for playback, which combine the security of hardware with the renewability of software

From time to time I hear from startup companies that claim to have designed better technologies for video content protection.  I tell them that getting studio approval for new content protection schemes is a tricky business.  You can get studio technology executives excited about your technology, but they don’t actually “approve” it such that they guarantee they’ll accept it if it’s used in a content service.  Instead, they expect service providers to propose the technology in the context of the overall service, and the studios will consider providing licenses to their content in that broader context.  And of course the studios don’t actually pay for the technology; the service providers or consumer device makers do.

In other words, studios “bless” new content protection technologies, but otherwise the entire sales process takes place at arm’s length from the studios.  In that sense, the studios act somewhat like a regulatory agency does when setting guidelines for compliance with a regulation such as HIPAA or GLB (for information privacy in healthcare and financial services respectively).  The resulting technology often meets the letter but not the spirit of the regulations.

In this respect, Stephens’s remarks were a breath of fresh air.  They are an invitation to more open dialog among vendors, studios, and service providers about the types of content protection that they may be willing to implement when it comes time to distribute 4k content to consumers.

In the past, such discussions often happened behind closed doors, took the form of unilateral “unfunded mandates,” and/or resulted in implementations that plainly did not work.  As technology gets more sophisticated and the world gets more complex, Hollywood is going to have to work more closely with downstream entities in the content distribution chain if it wants its content protected.  Spencer Stephens’s presentation was a good start in that direction.

Kim Dotcom Embraces DRM January 22, 2013

Posted by Bill Rosenblatt in DRM, New Zealand, Services.
add a comment

Kim Dotcom launched a new cloud file storage service, the New Zealand-based Mega, last weekend on the one-year anniversary of the shutdown of his previous site, the notorious MegaUpload.  (The massive initial interest in the site* prevented me from trying out the new service until today.)

Mega encrypts users’ files using what looks like a per-file content key (AES-128) that is in turn protected by 2048-bit RSA asymmetric-key encryption.  It derives the RSA keys from users’ passwords and other pseudo-random data.  Downloading a file from a Mega account requires knowing either the password that was used to generate the RSA key (i.e., logging in to the account used to upload the file) or the file’s key itself.
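
For the curious, here is a minimal sketch of that kind of hybrid scheme: a random symmetric content key encrypts the file, and the content key is itself wrapped with an RSA key.  This illustrates the general pattern using the Python cryptography library; it is not a description of Mega’s actual implementation (Mega, for instance, derives keys in the browser from the user’s password, and its cipher modes differ).

```python
# Minimal sketch of the hybrid pattern described above: encrypt the file with
# a random symmetric key, then wrap that key with an asymmetric (RSA) key.
# This shows the general pattern only; it is not Mega's implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Per-account RSA key pair (Mega reportedly derives its keys from the password).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Per-file content key (AES-128) and encryption of the file itself.
content_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(content_key).encrypt(nonce, b"the file contents", None)

# The content key is stored wrapped under the account's RSA public key...
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(content_key, oaep)

# ...and anyone holding the private key -- or the raw content key, if the
# owner chooses to publish it in a link -- can recover the plaintext.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"the file contents"
```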

Hmm.  Content encrypted with symmetric keys that in turn are protected by asymmetric keys… sounds quite a bit like DRM, doesn’t it?

Well, not quite.  While DRM systems assume that file owners won’t want to publish keys used to encrypt the files, Mega not only allows but enables you to publish your files’ keys.  Mega lets you retrieve the key for a given file in the form of a URL; just right-click on the file you want and select “Get link.”  (Here’s a sample.)  You can put the resulting URL into a blog post, tweet, email message, or website featuring banner ads for porn and mail-order brides.

(And of course, unlike DRM systems, once you obtain a key and download a file, it’s yours in unencrypted form to do with as you please.  The encryption isn’t integrated into a secure player app.)

Yet in practical terms, Mega is really no different from file-storage services that let users publish URLs to files they store — examples of which include RapidShare, 4Shared, and any of dozens of file transfer services (YouSendIt, WhaleMail, DropSend, Pando, SendUIt, etc.).

Mega touts its use of encryption as a privacy benefit.  What it really offers is privacy from the kinds of piracy monitoring services that media companies use to generate takedown notices — an application of encryption that hardcore pirates have used and that Kim Dotcom purports to “take … out to the mainstream.”  It will be impossible to use content identification technologies, such as fingerprinting, to detect the presence of copyrighted materials on Mega’s servers.  RapidShare, for example, analyzes third-party links to files on its site for potential infringements; Mega can’t do any such thing, by design.

Mega’s use of encryption also plays into the question of whether it could ever be held secondarily liable for its users’ infringements under laws such as DMCA 512 in the United States.  The Beltway tech policy writer Paul Sweeting wrote an astute analysis of Mega’s chances against the DMCA over the weekend.

Is Kim Dotcom simply thumbing his nose at Big Media again?  Or is he seriously trying to make Mega a competitor to legitimate, prosaic file storage services such as DropBox?  The track records of services known for piracy trying to go “legit” are not encouraging — just ask Bram Cohen (BitTorrent Entertainment Network) or Global Gaming Factory (purchasers of The Pirate Bay’s assets).  Still, this is one to watch as the year unfolds.

*Or, just possibly, server meltdowns faked to generate mountains of credulous hype?
