Dispatches from IDPF Digital Book 2014, Pt. 3: DRM June 5, 2014

Posted by Bill Rosenblatt in DRM, Publishing, Standards.

The final set of interesting developments at last week’s IDPF Digital Book 2014 in NYC has to do with DRM and rights.

Tom Doherty, founder of the science fiction publisher Tor Books, gave a speech about his company’s experimentation with DRM-free e-books and its launch of a line of e-novellas without DRM.  The buildup to this speech (among those of us who were aware of the program in advance) was palpable, but the result fell with a thud.  You had to listen hard to find the tiny morsel about how going DRM-free has barely affected sales; otherwise the speech was standard-issue dogma about DRM with virtually no new insights or data.  And he did not take questions from the audience.

DRM has become something of a taboo subject even at conferences like this, so most of the rest of the discussion about it took the form of hallway buzz.  And the buzz is that many are predicting that DRM will be on its way out for retail trade e-books within the next couple of years.

That’s the way things are likely to go if technology market forces play out the way they usually do.  Retailers other than Amazon (and possibly Apple) will want to embrace more open standards so that they can offer greater interoperability and thus band together to compete with the dominant player; getting rid of DRM is certainly a step in that direction.  Meanwhile, publishers, getting more and more fed up with or afraid of Amazon, will find common cause with other retailers and agree to license more of their material for distribution without DRM.  (Several retailers in second-tier European countries as well as some retailers for self-publishing authors, such as Lulu, have already dropped DRM entirely.)

Such sentiments will eventually supersede most publishers’ current “faith-based” insistence on DRM.  In other words, publishers and retailers will behave more or less the same way as the major record labels and non-Apple retailers behaved back in 2006-2007.

This course of events seems inevitable… unless publishers get some hard, credible data that tells them that DRM helps prevent piracy and “oversharing” more than it hurts the consumer experience.  That’s the only way (other than outright inertia) that I can see DRM staying in place for trade books over the next couple of years.

The situation for educational, professional, and STM (scientific, technical, medical) books is another story (as are library lending and other non-retail models).  Higher ed publishers in particular have reasons to stick with DRM: for example, e-textbook piracy has risen dramatically in recent years, reaching 34% of students as of last year.

Adobe recently re-launched its DRM with a focus on these publishing market segments.  I'd describe the re-launch as "awkward," though publishers I've spoken to would characterize it in less polite terms.  This has led to openings for other vendors, such as Sony DADC; and the Readium Foundation is still working on the open-source EPUB Lightweight Content Protection scheme.

The hallway buzz at IDPF Digital Book was that DRM for these market segments is here to stay — except that in higher ed, it may become unnecessary in a longer timeframe, when educational materials are delivered dynamically and in a fashion more akin to streaming than to downloads of e-books.

I attended a panel on EDUPUB, a standards initiative aimed at exactly this future for educational publishing.  The effort, led by Pearson Education (the largest of the educational publishers), the IMS Global Learning Consortium, and IDPF, is impressive: it’s based on combining existing open standards (such as IDPF’s EPUB 3) instead of inventing new ones.  It’s meant to be inclusive and beneficial to all players in the higher ed value chain, including Pearson’s competitors.

However, EDUPUB is in danger of making the same mistake the IDPF did by ignoring DRM and other rights issues.  When asked about DRM, Paul Belfanti, Pearson's lead executive on EDUPUB, answered that EDUPUB is DRM-agnostic and would leave decisions on DRM to providers of content delivery platforms.  This decision was problematic for trade publishers when IDPF made it for EPUB several years ago; it's potentially even more problematic for higher ed, since EDUPUB-based materials could certainly be delivered in e-textbook form.

EDUPUB could also help enable one of the Holy Grails of higher ed publishing, which is to combine materials from multiple publishers into custom textbooks or dynamically delivered digital content.  Unlike most trade books, textbooks often contain hundreds or thousands of content components, each of which may have different rights associated with them.

Clearing rights for higher ed content is a manual, labor-intensive job.  In tomorrow’s world of dynamic digital educational content, it will be more important than ever to make sure that the content being delivered has the proper clearances, in real time.  In reality, this doesn’t necessarily involve DRM; it’s mainly a question of machine-readable rights metadata.
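What a real-time, machine-readable clearance check might look like can be sketched in a few lines of Python.  This is purely illustrative: the record schema and field names here are hypothetical, not drawn from any actual standard (a real effort would build on rights vocabularies such as ONIX or ODRL).

```python
from dataclasses import dataclass

# Hypothetical machine-readable rights record for one content component.
# The fields are invented for illustration, not from any real standard.
@dataclass
class RightsRecord:
    component_id: str
    licensor: str
    allowed_uses: frozenset   # e.g. {"custom-textbook", "e-reserve"}
    territories: frozenset    # e.g. {"US", "CA"}

def cleared_for(record: RightsRecord, use: str, territory: str) -> bool:
    """A clearance check: both the intended use and the delivery
    territory must be covered by the component's rights."""
    return use in record.allowed_uses and territory in record.territories

def assemble(components, use, territory):
    """Keep only the components cleared for this delivery context; in a
    dynamic-delivery system this would run at request time."""
    return [c for c in components if cleared_for(c, use, territory)]
```

The point of the sketch is that once every component carries structured rights data, assembling a custom textbook becomes a filter operation rather than a manual clearance project.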

Attempts to standardize this type of rights metadata date back at least to the mid-1990s (when I was involved in such an attempt); none have succeeded.  This is a "last mile" issue that EDUPUB will have to address, sooner rather than later, to make good on its very promising start.  DRM and rights are not popular topics for standards bodies, but it has become increasingly clear that they must tackle these issues to be successful.

Adobe Resurrects E-Book DRM… Again February 10, 2014

Posted by Bill Rosenblatt in DRM, Publishing.

Over the past couple of weeks, Adobe has made a series of low-key announcements regarding new versions of its DRM for e-books, Adobe Content Server and Rights Management SDK.  The new versions are ACS5 and RMSDK10 respectively; they are available now on major platforms (iOS, Android, etc.), with more to come next month.

The new releases, though rumored for a while, came as something of a surprise to those of us who understood Adobe to have lost interest in the e-book market… again.  They did so for the first time back in 2006, before the launch of the Kindle kicked the market into high gear.  At that time, Adobe announced that version 3 of ACS would be discontinued.  Then the following year, Adobe reversed course and introduced ACS4.  ACS4 supports the International Digital Publishing Forum (IDPF)’s EPUB standard as well as Adobe’s PDF.

This saga repeated itself, roughly speaking, over the past year.  As the IDPF worked on version 3 of EPUB, Adobe indicated that it would not upgrade its e-reader software to work with it, nor would it guarantee that ACS4 would support it.  The DRM products were transferred to an offshore maintenance group within Adobe, and all indications were that Adobe would not develop them any further.  Now that's all changed.

Adobe had originally positioned ACS in the e-book market as a de facto standard DRM.  It licensed the technology to a large number of makers of e-reader devices and applications, and e-book distributors around the world.  At first this strategy seemed to work: ACS looked like an “everyone but Amazon” de facto standard, and some e-reader vendors (such as Sony) even migrated from proprietary DRMs to the Adobe technology.

But then cracks began to appear: Barnes & Noble "forked" ACS with its own extensions to support features such as user-to-user lending in the Nook system; Apple launched iBooks with a variant of its FairPlay DRM for iTunes content; and independent bookstores' IndieBound system adopted Kobo, which has its own DRM.  Furthermore, interoperability of e-book files among different RMSDK-based e-readers was not exactly seamless.  As of today, "pure" ACS (used by Google Play, Smashwords, and retailers served by OverDrive and other wholesalers) represents only a minor part of the e-book retail market, at least in the US.

It’s unclear why Adobe chose to go back into the e-book DRM game, though pressure from publishers must have been a factor.  Adobe can’t do much about interoperability glitches among retailers and readers, but publishers and distributors alike have asked for various features to be added to ACS over the years.  Publishers have mainly been concerned with the relatively easy availability of hacks, while distributors have also expressed the desire for a DRM that facilitates certain content access models that ACS4 does not currently support.

The new ACS5/RMSDK10 platform promises to give both publishers and distributors just about everything they have asked for.  First, Adobe has beefed up the client-side security using (what appear to be) software hardening, key management, and crypto renewability techniques that are commonly used for video and games nowadays.

Adobe has also added support for several interesting content access models. At the top of the list of most requested models is subscriptions.  ACS5 will not only support periodical-style subscriptions but also periodic updates to existing files; the latter is useful in STM (scientific, technical, medical) and various professional publishing markets.

ACS5 also contains two enhancements that are of interest to the educational market.  One is support for collections of content shared among multiple devices, which is useful for institutional libraries.  Another is support for “bulk fulfillment,” such as pre-loading e-reader devices with encrypted books (such as textbooks).  Bulk fulfillment requires a feature called separate license delivery, which is supported in many DRMs but hasn’t been in ACS thus far.  With separate license delivery, DRM-packaged files can be delivered in any way (download, optical disk, device pre-load, etc.), and then the user’s device or app can obtain licenses for them as needed.
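Separate license delivery is worth a brief illustration.  The sketch below is a toy: a hash-based stream cipher stands in for the real cryptography, and all function names are invented; it shows only the shape of the idea (content encrypted once and distributed freely, with the content key delivered later, wrapped for one specific device), not Adobe's actual protocol.

```python
import hashlib
import secrets

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream via hashing; a real DRM would use AES or similar.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    # Symmetric: applying xor() twice with the same key recovers the data.
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def package(plaintext: bytes):
    """Packaging: the file is encrypted once, with no device information,
    so it can travel by download, optical disk, or device pre-load."""
    content_key = secrets.token_bytes(32)
    return xor(plaintext, content_key), content_key

def issue_license(content_key: bytes, device_key: bytes) -> bytes:
    """License delivery happens later and separately: the server wraps
    the content key for one specific device."""
    return xor(content_key, device_key)

def read_on_device(ciphertext: bytes, license_blob: bytes,
                   device_key: bytes) -> bytes:
    content_key = xor(license_blob, device_key)
    return xor(ciphertext, content_key)
```

Because the packaged file carries no device binding, the same encrypted textbook can be pre-loaded onto a thousand e-readers; each device then fetches only a small license blob when its user is entitled to read.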

Finally, ACS5 will support the Readium Foundation's open-source EPUB 3 e-reader software.  Adobe is "evaluating the feasibility" of supporting the Readium EPUB 3 SDK in its own Reader Mobile SDK, but either way, distributors will now be able to accommodate EPUB 3 in their apps.

In all, ACS5 fulfills many of the wish list items that I have heard from publishers over the past couple of years, leaving one with the impression that it could expand its market share again and move towards Adobe’s original goal of de facto standard-hood (except for Amazon and possibly Apple).  ACS5 is backward compatible with older versions of ACS and does not require that e-books be re-packaged; in other words, users can read their older files in RMSDK10-enabled e-readers.

Yet Adobe made a gaffe in its announcements that immediately jeopardized all this potential: it initially gave the impression that it would force upgrades to ACS5/RMSDK10 this July.  (Watch this webinar video from Adobe’s partner Datalogics, starting around the 21-minute mark.)  Distributors would have to upgrade their apps to the latest versions, with the hardened security; and users would have to install the upgrades before being able to read e-books packaged with the new DRM.  Furthermore, if users obtain e-books packaged with the new DRM, they would not be able to read them on e-readers based on the older RMSDK. (Yet another sign that Adobe has acted on pressure from publishers rather than distributors.)  In other words, Adobe wanted to force the entire ACS ecosystem to move to a more secure DRM client in lock-step.

This forced-upgrade routine is similar to what DRM-enabled download services like iTunes (video) do with their client software.  But then Apple doesn’t rely on a network of distributors, almost all of which maintain their own e-reading devices and apps.

In any case, the backlash from distributors and the e-publishing blogosphere was swift and harsh; and Adobe quickly relented.  Now the story is that distributors can decide on their own upgrade timelines.  In other words, publishers will themselves have to put pressure on distributors to upgrade the DRM, at least for the traditional retail and library-lending models; and some less-secure implementations will likely remain out there for some time to come.

Adobe’s new release balances divergent pressures around DRM.  On the one hand, interoperability is more important than ever for publishers and distributors alike as a counterweight to Amazon’s dominance of the e-book retail market, and the surest way to achieve DRM interoperability is to do away with DRM altogether.  (There are other ways to inhibit interoperability that have nothing to do with DRM.)  On the other hand, combining interoperability with support for content access models that can’t exist without some form of content access control — such as subscriptions and institutional library access — is an attractive idea.  Adobe has survived tugs-of-war with publishers and distributors over DRM restrictions before, so this one probably won’t be fatal.

Judge Dismisses E-Book DRM Antitrust Case December 12, 2013

Posted by Bill Rosenblatt in DRM, Law, Publishing.

Last week a federal judge in New York dismissed a lawsuit that a group of independent booksellers brought earlier this year against Amazon.com and the (then) Big Six trade publishers.  The suit alleged that the publishers were conspiring with Amazon to use Amazon’s DRM to shut the indie booksellers out of the majority of the e-book market.  The three bookstores sought class action status on behalf of all indie booksellers.

In most cases, independent booksellers can’t sell e-books that can be read on Amazon’s Kindle e-readers; instead they have a program through the Independent Booksellers Association that enables consumers to buy e-books from the stores’ websites via the Kobo e-book platform, which has apps for all major devices (PCs, iOS, Android, etc.) as well as Kobo eReaders.

Let’s get the full disclosure out of the way: I worked with the plaintiffs in this case as an expert witness.  (Which is why I didn’t write about this case when it was brought several months ago.)  I did so because, like others, I read the complaint and found that it reflected various misconceptions about DRM and its place in the e-book market; I thought that perhaps I could help educate the booksellers.

The booksellers asked the court to enjoin (force) Amazon to drop its proprietary DRM, and to enjoin the Big Six to allow independent bookstores to sell their e-books using an interoperable DRM that would presumably work with Kindles as well as iOS and Android devices, PCs, Macs, BlackBerrys, etc.  (The term that the complaint used for the opposite of “interoperable DRM” was “inoperable DRM,” much to the amusement of some anti-DRM folks.)

There were two fundamental problems with the complaint.  One was that it presupposed the existence of an idealized interoperable DRM that would work with any “interoperable or open architecture device,” and that “Amazon could easily, and without significant cost or disruption, eliminate its device specific restrictive [] DRM and instead utilize an available interoperable system.”

There is no such thing, nor is one likely to come into being.  I worked with the International Digital Publishing Forum (IDPF), the trade association for e-books, to design a “lightweight” content protection scheme that would be attractive to a large number of retailers through low cost of adoption, but that project is far from fruition, and in any case, no one associated with it is under any illusion that all retailers will adopt the scheme.  The only DRM that is guaranteed to work with all devices and all retailers forever is no DRM at all.

The closest thing there is to an “interoperable” DRM nowadays is Adobe Content Server (ACS) — which isn’t all that close.  Adobe had intended ACS to become an interoperable standard, much like PDF is.  Unlike Amazon’s Mobipocket DRM and Apple’s FairPlay DRM for iBooks, ACS can be licensed and used by makers of e-reader devices and apps.  Several e-book platforms do use it.  But the only retailer with significant market share in the United States that does so is Barnes & Noble, which has modified it and combined it with another DRM that it had acquired years ago.  Kobo has its own DRM and uses ACS only for interoperability with other environments.

More relevantly, I have heard it said that Amazon experimented with ACS before launching the Kindle with the Mobipocket DRM that it acquired back in 2005.  But in any case, ACS’s presence in the US e-book market is on the wane, and Adobe has stopped actively working on the product.

The second misconception in the booksellers’ complaint was the implication that the major publishers had an interest in limiting their opportunities to sell e-books through indie bookstores.  The reality is just the opposite: publishers, from the (now) Big Five on down, would like nothing more than to be able to sell e-books through every possible retailer onto every possible device.  The complaint alleges that publishers “confirmed, affirmed, and/or condoned AMAZON’s use of restrictive DRMs” and thereby conspired to restrain trade in the e-book market.

Publishers have been wary of Amazon’s dominant market position for years, but they have tolerated its proprietary technology ecosystem — at least in part because many of them understand that technology-based media markets always settle down to steady states involving two or three different platforms, protocols, formats, etc.  DRM helps vendors create walls around their ecosystems, but it is far from the only technology that does so.

As I’ve said before, the ideal of an “MP3 for e-books” is highly unlikely and largely a mirage in any case.  Creating and preserving a level playing field for retailers in the digital age is a constant struggle for copyright owners, one that the more savvy among them recognize they can’t win as completely as they would like.

Judge Jed Rakoff picked up on this second point in his opinion dismissing the case. He said, “… nothing about [the] fact [that publishers made agreements with Amazon requiring DRM] suggests that the Publishers also required Amazon to use device-restrictive DRM limiting the devices on which the Publishers’ e-books can be displayed, or to place restrictions on Kindle devices and apps such that they could only display e-books enabled with Amazon’s proprietary DRM. Indeed, unlike DRM requirements, which clearly serve the Publishers’ economic interests by preventing copyright violations, these latter types of restrictions run counter to the Publishers’ interests …” (emphasis in original).

Indie bookstores are great things; it’s a shame that Amazon’s Kindle ecosystem doesn’t play nicely with them. But at the end of the day — as Judge Rakoff also pointed out — Amazon competes with independent booksellers, and “no business has a duty to aid competitors,” even under antitrust law.

In fact, Amazon has repeatedly shown that it will “cooperate” with competitors only as a means of cutting into their markets. Its extension of the Kindle platform to public library e-lending last year is best seen as part of its attempt to invade libraries’ territory. More recently, Amazon has attempted to get indie booksellers interested in selling Kindle devices in their stores, a move that has elicited frosty reactions from the bookstores.

The rest of Judge Rakoff’s opinion dealt with the booksellers’ failure to meet legal criteria under antitrust law.  Independent booksellers might possibly have a case to bring against Amazon for boxing them out of the market as reading goes digital, but Book House of Stuyvesant Plaza et al v. Amazon.com et al wasn’t it.

MovieLabs Releases Best Practices for Video Content Protection October 23, 2013

Posted by Bill Rosenblatt in DRM, Standards, Video.

As Hollywood prepares for its transition to 4k video (four times the resolution of HD), it appears to be adopting a new approach to content protection, one that promotes more service flexibility and quicker time to market than previous approaches but carries other risks.  The recent publication of a best-practices document for content protection from MovieLabs, Hollywood’s R&D consortium, signals this new approach.

In previous generations of video technology, Hollywood studios got together with major technology companies and formed technology licensing entities to set and administer standards for content protection.  For example, a subset of the major studios teamed up with IBM, Intel, Microsoft, Panasonic, and Toshiba to form AACS LA, the licensing authority for the AACS content protection scheme for Blu-ray discs and (originally) HD DVDs.  AACS LA defines the technology specification, sets the terms and conditions under which it can be licensed, and performs other functions to maintain the technology.

A licensing authority like AACS LA (and there is a veritable alphabet soup of others) provides certainty to technology implementations, covering compliance, patent licensing, and interoperability among licensees.  It also helps insulate the major studios from accusations of collusion by being a separate entity in which at most a subset of them participate.

As we now know, the licensing-authority model has its drawbacks.  One is that it can take the licensing authority several years to develop technology specs to a point where vendors can implement them — by which time they risk obsolescence.  Another is that it does not offer much flexibility in how the technology can adapt to new device types and content delivery paradigms.  For example, AACS was designed with optical discs in mind at a time when Internet video streaming was just a blip on the horizon.

A document published recently by MovieLabs signals a new approach.  MovieLabs Specification for Enhanced Content Protection is not really a specification: it is nowhere near detailed enough to serve as the basis for implementations.  It is more a compendium of what we now understand as best practices for protecting digital video, with room for change and interpretation.

The best practices in the document amount to a wish list for Hollywood.  They include things like:

  • Techniques for limiting the impact of hacks to DRM schemes, such as requiring device as well as content keys, code diversity (a hack that works on one device won’t necessarily work on another), title diversity (a hack that works with one title won’t necessarily work on another), device revocation, and renewal of protection schemes.
  • Proactive renewal of software components instead of “locking the barn door after the horse has escaped.”
  • Component technologies that are currently considered safe from hacks by themselves, including standard AES encryption with minimum key length of 128 and version 2.2 or better of the HDCP scheme for protecting links such as HDMI cables (earlier versions were hacked).
  • Hardware roots of trust on devices, running in secure execution environments, to limit opportunities for key leakage.
  • Forensic watermarking, meaning that content should have information embedded in it about the device or user who requested it.
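The device-key and title-diversity items on this list can be illustrated with a small key-hierarchy sketch.  This is a generic illustration of the idea, not any vendor's actual scheme; the master secret, labels, and function names are all placeholders.

```python
import hashlib
import hmac

# Placeholder master secret held by the (hypothetical) license service.
MASTER = b"service-master-secret (placeholder)"

def derive(label: str, unit_id: str) -> bytes:
    """Derive an independent key per device or per title, so that
    leaking one key exposes nothing about any other."""
    return hmac.new(MASTER, f"{label}:{unit_id}".encode(),
                    hashlib.sha256).digest()

def content_key(device_id: str, title_id: str) -> bytes:
    """Requiring BOTH a device key and a title key means a hack must
    recover two secrets, and a recovered content key is useless for any
    other device/title pair -- the "diversity" idea in the list above."""
    combined = derive("device", device_id) + derive("title", title_id)
    return hashlib.sha256(combined).digest()
```

Under this kind of hierarchy, a crack published for one title on one device model does not generalize, which is exactly the property the best-practices document asks for.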

Those who saw Sony Pictures CTO Spencer Stephens’s talk at the  Anti-Piracy and Content Protection Summit in LA back in July will find much of this familiar.  Some of these techniques come from the current state of the art in content protection for pay TV services; for more detail on this, see my whitepaper The New Technologies for Pay TV Content Security.  Others, such as the forensic watermarking requirement, come from current systems for distributing HD movies in early release windows.  And some result from lessons learned from cracks to older technologies such as AACS, HDCP, and CSS (for DVDs).

MovieLabs is unable to act as a licensor of standards for content protection (or anything else, for that matter).  The six major studios set it up in 2005 as a movie industry joint R&D consortium modeled on the cable television industry’s CableLabs and other organizations enabled by the National Cooperative Research Act of 1984, such as Bellcore (telecommunications) and SEMATECH (semiconductors).  R&D consortia are allowed, under antitrust law, to engage in “pre-competitive” research and development, but not to develop technologies that are proprietary to their members.

Accordingly, the document contains a lot of language intended to disassociate these requirements from any actual implementations, standards, or studio policies, such as “Each studio will determine individually which practices are prerequisites to the distribution of its content in any particular situation” and “This document defined only one approach to security and compatibility, and other approaches may be available.”

Instead, the best-practices approach looks like it is intended to give “signals” from the major studios to content protection technology vendors, such as Microsoft, Irdeto, Intertrust, and Verimatrix, who work with content service providers.  These vendors will then presumably develop protection schemes that follow the best practices, with an understanding that studios will then agree to license their content to those services.

The result of this approach should be legal content services for next-generation video that get to market faster.  The best practices are independent of things like content delivery modalities (physical media, downloads, streaming) and largely independent of usage rules.  Therefore they should enable a wider variety of services than is possible with the traditional licensing authority paradigm.

Yet this approach has two drawbacks compared to the older approach.  (And of course the two approaches are not mutually exclusive.)  First is that it jeopardizes the interoperability among services that Hollywood craves — and has gone to great lengths to preserve in the UltraViolet standard.  Service providers and device makers can incorporate content protection schemes that follow MovieLabs’ best practices, but consumers may not be able to interoperate content among them, and service providers will be able to use content protection schemes to lock users in to their services.  In contrast, many in Hollywood are now nostalgic for the DVD because, although its protection scheme was easily hacked, it guaranteed interoperability across all players (at least all within a given geographic region).

The other drawback is that the document is a wish list provided by organizations that won’t pay for the technology.  This means that downstream entities such as device makers and service providers will treat it as the maximum amount of protection that they have to implement to get studio approval.  Because there is no license agreement that they have to sign to get access to the technology, the downstream entities are likely to negotiate down from there.  (Such negotiation already took place behind the scenes during the rollout of Blu-ray, as player makers refused to implement some of the more expensive protection features and some studios agreed to let them slip.)

Downstream entities are particularly likely to push back against some of MovieLabs’s best practices that involve costs and potential impairments of the user experience; examples include device connectivity to networks for purposes of authentication and revocation, proactive renewal of device software, and embedding of situation-specific watermarks.

Surely the studios understand all this.  The publication of this document by MovieLabs shows that Hollywood is willing to entertain dialogues with service providers, device makers, and content protection vendors to speed up time-to-market of legitimate video services and ensure that downstream entities can innovate more freely.  How much protection will the studios ultimately end up with when 4k video reaches the mainstream?  It will be very interesting to watch over the next couple of years.

E-Book Watermarking Gains Traction in Europe October 3, 2013

Posted by Bill Rosenblatt in DRM, Europe, Publishing, United States, Watermarking.

The firm Rüdiger Wischenbart Content and Consulting has just released the latest version of Global eBook, its overview of the worldwide ebook market.  This sweeping, highly informative report is available for free during the month of October.

The report contains lots of information about piracy and rights worldwide — attitudes, public policy initiatives, and technologies.  A few conclusions in particular stand out.  First, while growth of e-book reading appears to be slowing down, it has reached a level of 20% of book sales in the U.S. market (and even higher by unit volume).  This puts e-books firmly in the mainstream of media consumption.

Accordingly, e-book piracy has become a mainstream concern.  Publishers — and their trade associations, such as the Börsenverein des Deutschen Buchhandels in Germany, which is the most active on this issue — had been less involved in the online infringement issue than their counterparts in the music and film industries, but that’s changing now.  Several studies have been done that generally show e-book piracy levels rising rapidly, but there’s wide disagreement on its volume.  And virtually no data at all is available about the promotional vs. detrimental effects of unauthorized file-sharing on legitimate sales.  Part of the problem is that e-book files are much smaller than music MP3s or (especially) digital video or games; therefore e-book files are more likely to be shared through email (which can’t be tracked) and less likely to be available through torrent sites.

The lack of quantitative understanding of infringement and its impact has led different countries to pursue different paths, in terms of both legal actions and the use of antipiracy technologies.  Perhaps the most surprising aspect of the latter trend — at least to those of us on this side of the Atlantic — is the rapid ascendancy of watermarking (a/k/a “social DRM”) in some European countries.  For example:

  • Netherlands: Arbeiderspers/Bruna, the country’s largest book publisher, switched from traditional DRM to watermarking for its entire catalog at the beginning of this year.
  • Austria: 65% of the e-books available in the country have watermarks embedded, compared to only 35% with DRM.
  • Hungary: Watermarking is now the preferred method of content protection.
  • Sweden: Virtually all trade ebooks are DRM-free.  The e-book distributor eLib (owned by the Swedish media giant Bonnier) uses watermarking for 80% of its titles.
  • Italy: Watermarking has grown from 15% to 42% of all e-books, overtaking the 35% that use DRM.

(Note that these are, with all due respect to them, second-tier European countries.  I have anecdotal evidence that e-book watermarking is on the rise in the UK, but not much evidence of it in France or Germany.  At the same time, the above countries are often test beds for technologies that, if successful, spread to larger markets — whether by design or market forces.)
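For readers unfamiliar with how watermarking works mechanically, here is a deliberately simple sketch that hides a buyer-specific transaction ID in zero-width Unicode characters.  Commercial watermarking systems use far more robust embedding (and typically mark multiple locations and formats); nothing here reflects any particular vendor's method.

```python
# Zero-width space and zero-width non-joiner stand in for bits 0 and 1.
ZW0, ZW1 = "\u200b", "\u200c"

def embed(text: str, transaction_id: str) -> str:
    """Hide the transaction ID after the first paragraph break.
    The visible text is unchanged; only invisible characters are added."""
    bits = "".join(f"{b:08b}" for b in transaction_id.encode())
    mark = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return text.replace("\n", mark + "\n", 1)

def extract(text: str) -> str:
    """Recover the transaction ID from a (possibly leaked) copy."""
    bits = "".join("1" if c == ZW1 else "0"
                   for c in text if c in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode()
```

The appeal to publishers is clear from the sketch: there is nothing for the reader to unlock or for a device to enforce, yet a file that turns up on a sharing site can be traced back to the original sale.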

Meanwhile, there’s still a total absence of data on the effects of both DRM and watermarking on users’ e-book behavior — which is why I have been discussing with the Book Industry Study Group the possibility of doing a study on this.

The prevailing attitude among authors is that DRM should still be used.  An interesting data point on this came back in January when Lulu, one of the prominent online self-publishing services, decided to stop offering authors the option of DRM protection (using Adobe Content Server, the de facto standard DRM for ebooks outside of the Amazon and Apple ecosystems) for ebooks sold on the Lulu site.  Lulu authors would still be able to distribute their titles through Amazon and other services that use DRM.

Lulu announced this in a blog post which elicited large numbers of comments, largely from authors.  My pseudo-scientific tally of the authors’ comments showed that they are in favor of DRM — and unhappy with Lulu’s decision to drop it — by more than a two-to-one margin.  Many said that they would drop Lulu and move to its competitor Smashwords, which continues to support DRM as an option.  Remember that these are independent authors of mostly “long tail” titles in need of exposure, not bestselling authors or major publishers.

One reason for Lulu’s decision to drop DRM was undoubtedly the operational expense.  Smashwords’ CEO, Mark Coker, expressed the attitudes of ebook distributors succinctly in a Publishers Weekly article covering Lulu’s move when he said, “What’s relevant is whether the cost of DRM (measured by fees to Adobe, [and for consumers] increased complexity, decreased availability, decreased sharing and word of mouth, decreased customer satisfaction) outweigh the benefits[.]”  As we used to say over here, that’s the $64,000 question.

Content Protection for 4k Video July 2, 2013

Posted by Bill Rosenblatt in DRM, Technologies, Video, Watermarking.
15 comments

As Hollywood adepts know, the next phase in picture quality beyond HD is something called 4k.  Although the name suggests 4k (perhaps 4096) pixels in the vertical or horizontal direction, its resolution is actually 3840 × 2160: twice the pixels of HD in both the horizontal and vertical directions, and thus four times the total pixel count.
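The arithmetic is easy to check; the dimensions below are the standard 1080p and UHD figures:

```python
# Compare 4k (UHD) resolution with 1080p HD.
hd_w, hd_h = 1920, 1080
uhd_w, uhd_h = 3840, 2160

# Each axis is doubled...
assert uhd_w == 2 * hd_w and uhd_h == 2 * hd_h

# ...so the total pixel count is quadrupled.
hd_pixels = hd_w * hd_h      # 2,073,600
uhd_pixels = uhd_w * uhd_h   # 8,294,400
print(uhd_pixels // hd_pixels)  # 4
```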

4k is the highest quality of image actually captured by digital cinematography right now.  The question is, how will it be delivered to consumers, in what timeframe, and how will it be protected?

Those of us who attended the Anti-Piracy and Content Protection Summit in LA last week learned that the answer to the latter question is unknown as yet.  Spencer Stephens, CTO of Sony Pictures, gave a brief presentation explaining what 4k is and outlining his studio’s wish list for 4k content protection.  He said that it was an opportunity to start fresh with a new design, compared to the AACS content protection technology for Blu-ray discs, which is 10 years old.

This is interesting on a couple of levels.  First, it implies that the studios have not predetermined a standard for 4k content protection; in contrast, Blu-ray discs were introduced in the market about three years after AACS was designed.  Second, Stephens’s remarks had the flavor of a semi-public appeal to the community of content protection vendors — some of which were in the audience at this conference — for help in designing DRM schemes for 4k that met his requirements.

Stephens’s wish list included such elements as:

  • Title-by-title diversity, so that a technique used to hack one movie title doesn’t necessarily apply to another
  • Requiring players to authenticate themselves online before playback, which enables hacked players to be denied but makes it impossible to play 4k content without an Internet connection
  • The use of HDCP 2.2 to protect digital outputs, since older versions of HDCP have been hacked
  • Session-based watermarking, so that each 4k file is marked with the identity of the device or user that downloaded it (a technique used today with early-window HD content)
  • The use of trusted execution environments (TEE) for playback, which combine the security of hardware with the renewability of software
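To make the session-based watermarking item concrete, here is a toy sketch: it hides a 32-bit digest of a session identifier in the least-significant bits of a frame buffer. Everything here (the function names, the LSB technique, the 32-bit mark length) is my own illustration, not how any studio-approved system works; real session watermarks are designed to survive re-encoding and even camcording, which this does not.

```python
import hashlib

def embed_session_id(frame: bytearray, session_id: str) -> bytearray:
    """Toy session watermark: write a 32-bit digest of the session ID
    into the least-significant bits of the first 32 bytes of a frame.
    Illustrates tying each delivered copy to a session identity."""
    digest = hashlib.sha256(session_id.encode()).digest()
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(32)]
    marked = bytearray(frame)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit  # overwrite lowest bit only
    return marked

def extract_bits(frame: bytes) -> list:
    """Read back the 32 embedded bits."""
    return [b & 1 for b in frame[:32]]
```

A leaked copy could then be traced by extracting the bits and matching them against the digests of known session IDs.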

From time to time I hear from startup companies that claim to have designed better technologies for video content protection.  I tell them that getting studio approval for new content protection schemes is a tricky business.  You can get studio technology executives excited about your technology, but they don’t actually “approve” it such that they guarantee they’ll accept it if it’s used in a content service.  Instead, they expect service providers to propose the technology in the context of the overall service, and the studios will consider providing licenses to their content in that broader context.  And of course the studios don’t actually pay for the technology; the service providers or consumer device makers do.

In other words, studios “bless” new content protection technologies, but otherwise the entire sales process takes place at arm’s length from the studios.  In that sense, the studios act somewhat like a regulatory agency does when setting guidelines for compliance with regulations such as HIPAA and GLB (for information privacy in healthcare and financial services respectively).  The resulting technology often meets the letter but not the spirit of the regulations.

In this respect, Stephens’s remarks were a bit of fresh air.  They are an invitation to more open dialog among vendors, studios, and service providers about the types of content protection that they may be willing to implement when it comes time to distribute 4k content to consumers.

In the past, such discussions often happened behind closed doors, took the form of unilateral “unfunded mandates,” and/or resulted in implementations that plainly did not work.  As technology gets more sophisticated and the world gets more complex, Hollywood is going to have to work more closely with downstream entities in the content distribution chain if it wants its content protected.  Spencer Stephens’s presentation was a good start in that direction.

Kim Dotcom Embraces DRM January 22, 2013

Posted by Bill Rosenblatt in DRM, New Zealand, Services.
add a comment

Kim Dotcom launched a new cloud file storage service, the New Zealand-based Mega, last weekend on the one-year anniversary of the shutdown of his previous site, the notorious MegaUpload.  (The massive initial interest in the site* prevented me from trying out the new service until today.)

Mega encrypts users’ files, using what looks like a content key (using AES-128) protected by 2048-bit RSA asymmetric-key encryption.  It derives the latter keys from users’ passwords and other pseudo-random data.  Downloading a file from a Mega account requires knowing either the password that was used to generate the RSA key (i.e., logging in to the account used to upload the file) or the key itself.
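The layered-key structure described above (a random per-file content key, itself wrapped by a key derived from the user's password) can be sketched as follows. This is a pedagogical stand-in, not Mega's actual implementation: Mega wraps keys with RSA-2048, and the "stream cipher" below is a SHA-256 counter-mode toy substituting for AES-128, since the Python standard library has no AES. Do not use this for real security.

```python
import hashlib, os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode (stand-in for AES-128)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_file(data: bytes, password: str):
    """Mega-style layering: a random content key encrypts the file;
    a password-derived key wraps the content key."""
    content_key = os.urandom(16)
    nonce = os.urandom(8)
    ciphertext = bytes(a ^ b for a, b in
                       zip(data, _keystream(content_key, nonce, len(data))))
    master = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 100_000, 16)
    wrapped = bytes(a ^ b for a, b in
                    zip(content_key, _keystream(master, nonce, 16)))
    return nonce, wrapped, ciphertext

def decrypt_file(nonce, wrapped, ciphertext, password):
    """Re-derive the master key, unwrap the content key, decrypt."""
    master = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 100_000, 16)
    content_key = bytes(a ^ b for a, b in
                        zip(wrapped, _keystream(master, nonce, 16)))
    return bytes(a ^ b for a, b in
                 zip(ciphertext, _keystream(content_key, nonce, len(ciphertext))))
```

The point of the structure is that publishing the content key (as Mega's "Get link" feature effectively does) unlocks one file without exposing the password or any other file.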

Hmm.  Content encrypted with symmetric keys that in turn are protected by asymmetric keys… sounds quite a bit like DRM, doesn’t it?

Well, not quite.  While DRM systems assume that file owners won’t want to publish keys used to encrypt the files, Mega not only allows but enables you to publish your files’ keys.  Mega lets you retrieve the key for a given file in the form of a URL; just right-click on the file you want and select “Get link.”  (Here’s a sample.)  You can put the resulting URL into a blog post, tweet, email message, or website featuring banner ads for porn and mail-order brides.

(And of course, unlike DRM systems, once you obtain a key and download a file, it’s yours in unencrypted form to do with as you please.  The encryption isn’t integrated into a secure player app.)

Yet in practical terms, Mega is really no different from file-storage services that let users publish URLs to files they store — examples of which include RapidShare, 4Shared, and any of dozens of file transfer services (YouSendIt, WhaleMail, DropSend, Pando, SendUIt, etc.).

Mega touts its use of encryption as a privacy benefit.  What it really offers is privacy from the kinds of piracy monitoring services that media companies use to generate takedown notices — an application of encryption that hardcore pirates have used and that Kim Dotcom purports to “take … out to the mainstream.”  It will be impossible to use content identification technologies, such as fingerprinting, to detect the presence of copyrighted materials on Mega’s servers.  RapidShare, for example, analyzes third-party links to files on its site for potential infringements; Mega can’t do any such thing, by design.

Mega’s use of encryption also plays into the question of whether it could ever be held secondarily liable for its users’ infringements under laws such as DMCA 512 in the United States.  The Beltway tech policy writer Paul Sweeting wrote an astute analysis of Mega’s chances against the DMCA over the weekend.

Is Kim Dotcom simply thumbing his nose at Big Media again?  Or is he seriously trying to make Mega a competitor to legitimate, prosaic file storage services such as DropBox?  The track records of services known for piracy trying to go “legit” are not encouraging — just ask Bram Cohen (BitTorrent Entertainment Network) or Global Gaming Factory (purchasers of The Pirate Bay’s assets).  Still, this is one to watch as the year unfolds.

*Or, just possibly, server meltdowns faked to generate mountains of credulous hype?

Reducing Complexity of Multiscreen Video Services with PlayReady and MPEG-DASH November 19, 2012

Posted by Bill Rosenblatt in DRM, Standards, Video.
4 comments

As I have worked with video service providers that are trying to upgrade their offerings to include online and mobile services, I’ve seen bewilderment about the maze of codecs, streaming protocols, and player apps as well as content protection technologies that those service providers need to understand.  Yet a development that took place earlier this month should help ease some of the complexity.

Microsoft’s PlayReady is becoming a popular choice for content protection.  Dozens of service providers use it, including BSkyB, Canal+, HBO, Hulu, MTV, Netflix, and many ISPs, pay TV operators, and wireless carriers.  PlayReady handles both downloads and streaming, and it is currently the only commercial DRM technology certified for use with UltraViolet (though that should change soon).  Microsoft has developed a healthy ecosystem of vendors that supply things like player apps for different platforms, “hardening” of client implementations to ensure robustness, server-side integration services, and end-to-end services.  And after years of putting in very little effort on marketing, Microsoft has finally upgraded its PlayReady website with information to make it easier to understand how to use and license the technology.

Streaming protocols are still a bit of an issue, though.  Several vendors have created so-called adaptive streaming protocols, which monitor the user’s throughput and vary the bit rate of the content to ensure optimal quality without interruptions.  Apple has HTTP Live Streaming (HLS), Microsoft has Smooth Streaming, Adobe has HTTP Dynamic Streaming (HDS), and Google has technology it acquired from Widevine.  Yet operators have been more interested in Dynamic Adaptive Streaming over HTTP (DASH), an emerging vendor-independent MPEG standard.  The hope with MPEG-DASH is that operators can use the same protocol to stream to a wide variety of client devices, thereby making deployment of TV Everywhere-type services cheaper and easier.
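The core idea all of these protocols share — pick the highest encoded bitrate that the measured throughput can sustain — can be sketched in a few lines. The bitrate ladder and the 0.8 safety factor below are made-up assumptions; real DASH and HLS players use far more elaborate buffer- and throughput-based heuristics.

```python
def pick_bitrate(throughput_kbps: float,
                 ladder=(400, 800, 1500, 3000, 6000)) -> int:
    """Toy adaptive-streaming rate selection: choose the highest rung of
    the encoding ladder that fits within measured throughput, with
    headroom so playback doesn't stall."""
    usable = throughput_kbps * 0.8          # leave 20% headroom
    candidates = [b for b in ladder if b <= usable]
    return max(candidates) if candidates else min(ladder)
```

A player would re-run this selection every few seconds as its throughput estimate changes, switching segment quality up or down between requests.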

MPEG-DASH took a significant step towards real-world viability over the last few months with the establishment of the DASH Industry Forum, a trade association that promotes market adoption of the standard.  Microsoft and Adobe are members, though not Apple or Google, indicating that at least some of the vendors of proprietary adaptive streaming will embrace the standard.  The membership also includes a healthy critical mass of vendors in the video content protection space: Adobe, BuyDRM, castLabs, Irdeto, Nagra, and Verimatrix — plus Cisco, owner of NDS, and Motorola, owner of SecureMedia.

Adaptive streaming protocols need to be integrated with content protection schemes.  PlayReady was originally designed to work with Smooth Streaming.  It has also been integrated with HLS, which is probably the most popular of the proprietary adaptive streaming schemes.  Integration of PlayReady with MPEG-DASH is likely to be viewed as a safe choice, in line with the way the industry is going.  That solution came into view this month as BuyDRM and Fraunhofer IIS announced an integration of MPEG-DASH with PlayReady for the HBO GO service in Europe.  HBO GO is HBO’s “over the top” service for subscribers.

For the HBO GO demo, BuyDRM implemented a version of its PlayReady client that uses Fraunhofer’s AAC 5.1 surround-sound codec, which ships with devices that run Android 4.1 Jelly Bean.  The integration is being showcased with HD quality video on HBO’s “Boardwalk Empire” series. Users can connect Android 4.1 devices with the proper outputs — even handsets — to home-theater audio playback systems to get an experience equivalent to playing a Blu-ray disc.  The current implementation supports live broadcasting, with VOD support on the way shortly.

PlayReady integrated with MPEG-DASH is likely to be a popular choice for a variety of video service providers, ranging from traditional pay TV operators to over-the-top services like HBO Go.  BuyDRM and Fraunhofer’s deployment is an important step towards that choice becoming widely feasible.

DRM Anticircumvention for Dummies July 15, 2012

Posted by Bill Rosenblatt in DRM, Law.
5 comments

I have seen a lot of writings and gotten a lot of feedback regarding the EPUB Lightweight Content Protection (EPUB LCP) scheme I am helping to design for the International Digital Publishing Forum (IDPF), which oversees the EPUB standard.  The criticisms fall into two buckets: “DRM sucks; why is the IDPF wasting time on this?” and “the security is too weak; publishers need stronger protection.”

Yet these diametrically opposed criticisms have one thing in common: a lack of understanding of how anticircumvention law, such as Section 1201 of the DMCA in the United States, works in practice and how it figures into the design of EPUB LCP.  Anticircumvention law makes it a crime to circumvent (“hack”) DRM technologies.  This lack of understanding is common to both DRM opponents and people from DRM technology vendors.

So I thought I would offer some information about the practicalities of anticircumvention law, presented as rebuttals to some of the false assertions that I have heard.  Three caveats are in order: first, the following is going to be U.S.-centric.  That’s because I am most familiar with the U.S. anticircumvention law, but also because the U.S. law is by far the most highly developed through litigation.  Second, I am not a lawyer — nor are any of the people who have talked to me about this.  So if you’re a legal expert and I’m wrong, please correct me.  Third, I’m not an official spokesman for IDPF, and they may have different views.

Assertion: Anticircumvention law doesn’t stop hacks; hacks are going to be available anyway.

Reality: Of course the law doesn’t eliminate hacks, but it does make hacks less easily accessible to people who are not determined hackers.  The law comes down hardest on those who gain commercially from their hacks.  Because of the anticircumvention law, there is not (for example) a “convert from Amazon” option in Nook readers and apps, or the converse in Kindles; instead you have to go find the hack, install it, and use it – something that requires more time, determination, and skill.  (Note that this is a different issue from “DRM doesn’t stop piracy.”  Here I agree: absolutely, there are various other ways to infringe copyright, some of which are easier than hacking DRMs.)

Assertion: DRM systems that aren’t robust don’t qualify for the anticircumvention law.

Reality: This one comes from DRM vendors, which have vested interests in robustness.  To answer this, you need to look at the history of litigation (again, this is a US-centric view). The most important legal precedent here is Universal v. Reimerdes, which was decided in U.S. district court in 2000 and upheld on appeal.  This case was one of several involving the weak CSS encryption scheme for DVDs.  The defense asked the court to find it not liable because CSS was too weak to meet the definition of “effective” in “technological measure [that] effectively controls access to a work” under the law. In his opinion, the judge explicitly refused to establish an “effectiveness test” by deciding this issue.   I know of a couple of cases that attempted to revisit this issue but were dropped.  The effect, at least for now, is that any DRM that’s as strong (i.e. weak) as CSS, or stronger, should qualify for protection under the law.

Assertion: The IDPF intends to sue hackers as part of the EPUB LCP initiative.

Reality: Not true at all.  The IDPF is not even in a position to facilitate litigation the way the MPAA and RIAA do.  (For one thing, it’s an international body, not a national one.)  If any organization is going to facilitate litigation, it would be the Association of American Publishers (AAP) in the U.S., which has not been involved in the EPUB LCP initiative.  More generally, it may help to explain how the litigation process works in practice.  Copyright owners do the suing; they are the actual plaintiffs.  They will only bother to sue under the anticircumvention law if they see hacks that are being used widely enough to cause significant infringement and/or the supplier of the hack is making money from the hack.  So as a practical matter, a hack that “sits in the shadows” as described above is unlikely to be used widely enough to draw a lawsuit.

Assertion: Users get sued for using hacks.

Reality: Although the law does provide penalties for using as well as distributing hacks, individual users have never gotten sued for using hacks (or for creating hacks for personal use only).  Users have been sued for copyright infringement; if you hack a DRM, you may be infringing copyright.  Only those who make hacks publicly available have ever been sued for DMCA 1201 violations.

Assertion: This is a US matter and irrelevant elsewhere in the world, especially now that ACTA is dead in Europe.

Reality: As mentioned above, the interpretation of “effectiveness” is a US-centric one that may or may not apply elsewhere.  But otherwise, this statement is also incorrect.  Anticircumvention law is on the books today in most industrialized countries, including EU member states (resulting from the European Union Copyright Directive of 2001), Australia, New Zealand, Japan, Singapore, India, China, Brazil, and a few others; South Korea and Canada should get anticircumvention laws soon.

The IDPF’s Lightweight Content Protection Standard for E-books May 31, 2012

Posted by Bill Rosenblatt in DRM, Publishing, Standards.
3 comments

I am working with the International Digital Publishing Forum (IDPF), helping them define a new type of content protection standard that may be incorporated into the upcoming Version 3 of IDPF’s EPUB standard for e-books.  We’re calling this new standard EPUB Lightweight Content Protection (EPUB LCP).

EPUB LCP is currently in a draft requirements stage.  The draft requirements, along with some explanatory information, are publicly available; IDPF is requesting comments on them until June 8.  I will be giving a talk about EPUB LCP, and the state of content protection for e-books in general, at Book Expo America in NYC next week, during IDPF’s Digital Book Program on Tuesday June 5.

Now let’s get the disclaimer out of the way: the remainder of this article contains my own views, not necessarily those of IDPF, its management, or its board members.  I’m a consultant to IDPF; any decisions made about EPUB LCP are ultimately IDPF’s.  The requirements document mentioned above was written by me but edited by IDPF management to suit its own needs.

IDPF is defining a new standard for what amounts to a simple, lightweight, looser DRM.  EPUB is widely used in the e-book industry (by just about everyone except Amazon), but lack of an interoperable DRM standard has caused fragmentation that has hampered its success in the market. Frankly, IDPF blew it on this years ago (before its current management came in).  They bowed to pressures from online retailers and reading device makers not to make EPUB compliance contingent on adopting a standard DRM, and they considered DRM (understandably) not to be “low hanging fruit.”

IDPF first announced this initiative on May 18; it got press coverage in online publications such as Ars Technica, PaidContent.org, and others.  The bulk of the comments were generally “DRM sucks no matter what you call it” or “Why bother with this at all, it won’t help prevent any infringement.”  A small number of commenters said something on the order of “If there has to be DRM, this isn’t a bad alternative.”  One very knowledgeable commenter on Ars Technica first judged the scheme to be pointless because it’s cryptographically weak, then came around to understanding what we’re trying to do and even offered some beneficial insights.

The draft requirements document provides the basic information about the design; my main purpose here is to focus more on the circumstances and motivation behind the initial design choices.

Let’s start at a high level, with the overall e-book market.  (Those of you who read my article about this on PaidContent.org a few months ago can skip this and the next five paragraphs.)  Right now it’s at a tipping point between two outcomes that are both undesirable for the publishing industry.  The key figure to watch is Amazon’s market share, which is currently in the neighborhood of 60%; Barnes and Noble’s Nook is in second place with share somewhere in the 25-30% range.

One outcome is Amazon increasing its market share and entering monopoly territory (according to the benchmark of 70% market share often used in US law).  If that happens, Amazon can do to the publishing industry what Apple has done to the music download market: dominate the market so much that it can both dictate economic terms and lock customers in to its own ecosystem of devices, software, and services.

The other outcome is that Amazon’s market share falls, say to 50% or lower, due to competition.  In that case, the market fragments even further, putting a damper on overall growth in e-reading.  Also not good for publishers.

Let’s look at what happens to DRM in each of these cases.  In the first (Amazon monopoly) case, Amazon may drop DRM just as Apple did for music — but it will be too late: Amazon will have achieved lock-in and can preserve it in other ways, such as by making it generally inconvenient for users to use other devices or software to read Amazon e-books.  Other e-book retailers would then drop DRM as well, but few will care.

In the second case, everyone will probably keep their DRMs in order to keep users from straying to competitors (though some individual publishers will opt out of it).  In other words, if the DRM status quo remains, the likely alternatives are DRM-free monopoly or DRM and fragmentation.

If IDPF had included an interoperable DRM standard back in 2007 when both EPUB and the Kindle launched, e-books might well be more portable among devices and reading software than they are now.  Yet the most desirable outcome for the reading public is 100% interoperability, and we know from the history of technology markets (with the admittedly major exception of HTML) that this is a chimera.  (Again, I explained this in PaidContent.org a few months ago.)

To many people, the way out of this dilemma is obvious: everyone should get rid of DRM now.  That certainly would be good for consumers.  But most publishers — who control the terms by which e-books are licensed to retailers —  don’t want to do this; neither do many authors, who own copyrights in their books.

E-book retailers and device vendors can get lock-in benefits from DRM.  As for whether DRM does anything to benefit rights holders by improving consumers’ copyright compliance or reducing infringement, that’s a real question.  Notwithstanding the opinions of the many self-styled experts in user behavior analysis and infringement data collection among the techblogorati and commentariat, the answer is unknown and possibly unknowable.  Publishers are motivated to keep DRM if for no other reason than fear that once it goes away, they can never bring it back.  Moreover, certain segments of the publishing industry (such as higher education) want DRM that’s even stronger than the current major schemes.

The fact is, none of the major DRMs in today’s e-book market are very sophisticated — at least not compared to content protection technologies used for video content.  The economics of the e-book industry make this impossible: the publishers and authors who want DRM don’t pay for it, resulting in cost and complexity constraints.  DRM helps retailers insofar as it promotes lock-in, but it doesn’t help them protect their overall services.  In contrast, content protection helps pay TV operators (for example) protect their services, which they want protected just as much as Hollywood doesn’t want its content stolen; so they’re willing to pay for more sophisticated content protection.

The two leading e-book DRMs right now are Amazon’s Mobipocket DRM and Adobe’s Content Server 4; the latter is used by Barnes & Noble, Sony, and various others.  Hackers have developed what I call “one-click hacks” for both.  One-click hacks meet three criteria: people without special technical expertise can use them; they work on any file that’s packaged in the given DRM; and they work permanently (i.e., there is no way to recover from them).  In contrast, pay TV content protection schemes are generally not one-click-hackable.

In other words, one-click DRM hacks are like format converters: the one built into Microsoft Word that converts files from WordPerfect, or the ones built into photo-editing utilities that convert TIFF to JPEG.  But there’s a difference: DRM hacks are illegal in many countries, including the United States, European Union member states, Brazil, India, Taiwan, and Australia; all other signatories to the Anti-Counterfeiting Trade Agreement will eventually have so-called anticircumvention laws too.

The effect of anticircumvention law has been to force DRM hacks into the shadows, making them less easily accessible to the non-tech-savvy and at least somewhat stigmatized.  Without the law, we would have things like Nook devices and software with “Convert from Kindle Format” options (and vice versa).  The popular, free Calibre e-book reading app, for example, had a DRM stripper but removed it (presumably under legal pressure) in 2009.  A DRM removal plug-in for Calibre is available, but it’s not an official one; David Pogue of the New York Times — hardly a fan of DRM – recently dismissed it as difficult to use as well as illegal.

The US has a rich case history around anticircumvention law that has made the boundaries of legal acceptability reasonably clear.  It has shut off the availability of hacks from “legitimate” sources and ensured that if your hack is causing enough trouble, you will be sued out of existence.  I am not personally a fan of anticircumvention law, but I accept as fact that it has made hacks less accessible to the general public.

The foregoing line of thought got IDPF Executive Director Bill McCoy and me talking last year about what IDPF might be able to do about DRM in the upcoming version of EPUB, in order to help IDPF further its objective of making EPUB a universal standard for digital publishing and forestall the two undesirable market trajectories described above.  We did not set out to design an “ultimate DRM” or even “yet another DRM”; we set out to design something intended to solve problems in the digital publishing market while working within existing marketplace constraints.

So now, with that background, here is a set of interrelated design principles we established for EPUB LCP:

  1. Require interoperability so that retailers cannot use it to promote lock-in.  This is what the UltraViolet standard for video is attempting to do, albeit in a technically much more complex way.  The idea of UltraViolet is to provide some of the interoperability and sharing features that users want while still maintaining some degree of control.  Our theory is that both publishers and e-book retailers would be willing to accept a looser form of DRM that could break the above market dilemma while striking a similar balance between interoperability and control.
  2. Support functions that users really want, such as “social” sharing of e-books. Build on the idea of e-book watermarking, such as that used in Safari Books Online for PDF downloads and in the Pottermore Store for EPUB format e-books: embed users’ personal information into the content, on the expectation that users will only share files with people whom they trust not to abuse their personal information.
  3. Create a scheme that can support non-retail models such as library lending and can be extended to support additional business models (see below) or the stronger security that industry segments such as higher ed need.
  4. Include the kinds of user-friendly features that Reclaim Your Game has recommended for video game DRMs.  These include respecting privacy by not “phoning home” to servers and ensuring permanent offline use so that files can be used even if the retailer goes out of business.  They also include not jeopardizing the security or integrity of users’ devices, as in the infamous “rootkit” installed by CD copy protection technology for music several years ago.
  5. Eliminate design elements that add disproportionately to cost and complexity.  Perhaps the biggest of these are the so-called robustness rules that have become standard elements of DRMs such as OMA DRM, Marlin, and PlayReady where the DRM technology licensor doesn’t own the hardware or platform software.  Eliminating “phoning home” also saves costs and complexity.  Other elements to be eliminated include key revocation, recoverability, and fancy authentication schemes such as the domain authentication used in UltraViolet.
  6. Finally, don’t try very hard to make the scheme hack-proof.  The strongest protection schemes for commercial content — such as those found in pay television — are those that minimize the impact of hacks so that they are temporary and recoverable; such schemes are too complex, invasive, and expensive for e-book retailers or e-reader makers to consider.  Instead, assume that EPUB LCP will be hacked, and rely on two things to blunt the impact: anticircumvention law, and allowing enough differences among implementations that each one will require its own hack (a form of what security technologists call “code diversity”).

With those design principles in mind, we have designed a scheme that takes its inspiration from two sources in particular: the content protection technology used in the eReader/FictionWise e-book technology that is now owned by Barnes & Noble, and the layered functionality concept built into the Digital Media Project‘s IDP (Interoperable DRM Platform) standard.

The central idea of EPUB LCP is a passphrase supplied by the user or retailer.  This could be an item of personal information, such as a name, email address, or even credit card number; distributors or rights holders can decide what types of passphrases to use or require.  The passphrase is irrecoverably obfuscated (e.g. through a hash function) so that even if a hack recovers the passphrase, it won’t recover the personal information; yet the retailer can link the obfuscated passphrase to the user.  The obfuscated passphrase is then embedded into the e-book file.  If the user wants to share an e-book, all she has to do is share the passphrase.  Otherwise, the content must be hacked to be readable.
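A minimal sketch of the obfuscation step described above. The draft requirements do not mandate a particular function; the choice of PBKDF2-HMAC-SHA256, the salt handling, and the parameters here are my own illustrative assumptions.

```python
import hashlib, os

def obfuscate_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Irreversibly obfuscate a user-supplied passphrase: even if a hack
    recovers this value from the e-book file, the underlying personal
    information (email address, credit card number) stays hidden."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                               salt, 100_000)

# A retailer could store (user_id, salt, token) so it can link the value
# embedded in the e-book back to the purchaser without storing the
# passphrase itself.
salt = os.urandom(16)
token = obfuscate_passphrase("jane@example.com", salt)

# The same passphrase always yields the same token for a given salt,
# which is what lets the retailer make the link...
assert token == obfuscate_passphrase("jane@example.com", salt)
# ...while a different passphrase does not.
assert token != obfuscate_passphrase("john@example.com", salt)
```

Sharing the e-book then means sharing the passphrase, exactly the social-friction mechanism the design intends: you only hand your personal information to people you trust.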

Other aspects of the draft requirements are covered in the document on the IDPF website.  Apart from that, it’s worth mentioning that this type of scheme will not support certain content distribution models unless extensions are added to make them possible.  Features intentionally left out of the basic EPUB LCP design include:

  • Separate license delivery, which allows different sets of rights for a given file
  • License chaining, which supports subscription services
  • Domain authentication, which can support multi-device/multi-user “family accounts” a la UltraViolet
  • Master-slave secure file transfer, for sideloading onto portable devices, a la Windows Media DRM
  • Forward-and-delete, to implement “Digital Personal Property” a la the IEEE P1817 standard

Once again, we set out to design something that meets current market needs and works within current market constraints; EPUB LCP is not a research-lab R&D project.

Again, I’ll be discussing this, as well as the landscape for e-book content protection in general, at Book Expo America next week.  Feel free to come and heckle (or just heckle in the comments right here).  I’m sure I will have more to report as this very interesting project develops.
