“Netflix for E-Books” Approaches Reality
October 7, 2013. Posted by Bill Rosenblatt in Publishing, Services.
Back in 2002, a startup company called listen.com had just concluded licensing deals with all of the (then five) major labels. The result was Rhapsody: the “celestial jukebox” finally brought to life, the first successful subscription on-demand music service. Rhapsody — whose original focus on classical music must have made it seem like a low-impact experiment to the majors — didn’t get on the map until they closed those deals.*
Eleven years later, something analogous is happening in the world of book publishing. Last week, the popular document sharing site Scribd obtained licenses to all backlist titles from HarperCollins, one of the Big Five trade book publishers (along with Penguin Random House, Simon & Schuster, Macmillan, and Hachette), for an $8.99/month all-you-can-read subscription service. It should only be a matter of time before the other four trickle in. The service had been in “soft launch” mode since January with catalogs from smaller publishers such as RosettaBooks and Sourcebooks.
Why Scribd and not Oyster or any of the others? Because Scribd already has a huge user base — 80 million monthly visitors — making it an attractive existing audience instead of a speculative one.
Scribd started in 2006 as sort of a “YouTube for documents.” The vast majority of the documents on the site were free; many were individual authors’ writings, corporate whitepapers, court filings, and so on. Scribd also enabled authors to sell their documents as paid downloads (DRM optional). Eventually some publishers put e-books up for individual sale on the site, including major publishers in the higher ed and scholarly segments.
The publishing industry has been buzzing about the possibility of a “Netflix for books” for a couple of years now. A few startups, such as Oyster, have built out the infrastructure but have only gotten licenses from smaller publishers and independent authors. At least for now, only Scribd has a major publisher deal; that will make all the difference in taking the subscription model for e-books to the mainstream. Like it or not, major content providers are key to the success of a content retail site.
From a technical standpoint, Scribd’s subscription service has more in common with music apps like Rhapsody and Spotify than with video services like Netflix. Like those music services, Scribd is mainly a “streaming” service, a/k/a “cloud reading”: it retrieves content in small chunks instead of downloading entire e-books, though it also gives users the option of downloading content to their mobile devices. (Thereby enabling me to use it on the subways in NYC.) Files stored on mobile devices are obfuscated or encrypted, so that users lose access to them if they cancel their subscriptions. And, also analogously to the interactive streaming music services, Scribd uses a simple proprietary “good enough” encryption scheme instead of a heavyweight name-brand DRM technology such as the Adobe DRM used with the Nook, Kobo, Sony Reader, and Bookish systems.
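Scribd hasn’t published the details of its scheme, but the general shape of “good enough” protection for cached content is easy to sketch. Here’s a minimal, purely illustrative Python example — the key name and the XOR-keystream approach are my assumptions, not Scribd’s actual design: a chunk cached on the device is obfuscated with a keystream derived from a subscription key, so once the service stops supplying the key, the cache becomes unreadable.

```python
import hashlib
from itertools import cycle

def obfuscate(chunk: bytes, subscription_key: bytes) -> bytes:
    """XOR the chunk against a keystream derived from the subscription key.

    The operation is symmetric: applying it twice recovers the original.
    """
    keystream = cycle(hashlib.sha256(subscription_key).digest())
    return bytes(b ^ k for b, k in zip(chunk, keystream))

# A cached chunk is unreadable without the key, which the service stops
# supplying once the subscription lapses.
key = b"subscriber-1234-entitlement"
page = b"Chapter 1: It was a dark and stormy night..."
cached = obfuscate(page, key)
assert cached != page                  # opaque on disk
assert obfuscate(cached, key) == page  # readable while subscribed
```

This isn’t strong cryptography — a determined attacker can pull the key out of the client — but, as with the streaming music apps, it raises the bar just enough for the business model to work.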
Although Scribd is the first paid subscription service with major-publisher licensing, it’s actually not the first way to read major-publisher trade e-books on a time-limited basis: OverDrive introduced OverDrive Read, an HTML5-based cloud reading app for its public library e-lending service, a year ago.
In fact, OverDrive Read is currently the only (legal) way to read frontlist e-book titles from major publishers through a browser app on a time-limited basis. And that leads to an important difference between Scribd’s service and interactive streaming music services: HarperCollins is only licensing backlist titles, not frontlist (latest bestsellers). From the publishers’ point of view, this is a smart move that other Big Five publishers will most likely follow.
In that respect, Scribd could become more like Netflix than Rhapsody or Spotify, in that Netflix only offers movies in the home entertainment window — Hollywood’s rough equivalent of “backlist.” In contrast, the major music labels licensed virtually their entire catalogs to interactive streaming services from the start, save only for some high-profile artist holdouts such as the Beatles and Led Zeppelin. Instead, the record labels have had to settle for (hard-won) price differentiation between top new releases and back catalog for paid downloads. Just as readers who want the latest frontlist titles in print have to pay for hardback, those who want them as e-books will have to buy them. (Or borrow them from the library.)
*The story of Rhapsody is somewhat sad. For music geeks like me, the service was a revelation — a truly new way to listen to and explore music. But Rhapsody slogged through years of difficulty communicating the value of subscription services to users amid numerous ownership changes. Subscribership grew gradually and plateaued at about a million paying users; then the service suffered unfairly from the tsunami of hype around Spotify’s US launch in 2011. It didn’t help that Rhapsody took too long to release a halfway decent mobile client; otherwise, Spotify’s functionality at the time was virtually identical to Rhapsody’s. Now Rhapsody is struggling yet again as it attempts to expand into markets where Spotify is already established and has trained its 24 million users to expect free, ad-supported on-demand streaming while losing money hand over fist. And in the latest insult to its pioneering history, a 6,000-word feature on Spotify in Mashable — a tome by online journalism standards — mentions Rhapsody not once.
E-Book Watermarking Gains Traction in Europe
October 3, 2013. Posted by Bill Rosenblatt in DRM, Europe, Publishing, United States, Watermarking.
The firm Rüdiger Wischenbart Content and Consulting has just released the latest version of Global eBook, its overview of the worldwide ebook market. This sweeping, highly informative report is available for free during the month of October.
The report contains lots of information about piracy and rights worldwide — attitudes, public policy initiatives, and technologies. A few conclusions in particular stand out. First, while growth of e-book reading appears to be slowing down, it has reached a level of 20% of book sales in the U.S. market (and even higher by unit volume). This puts e-books firmly in the mainstream of media consumption.
Accordingly, e-book piracy has become a mainstream concern. Publishers — and their trade associations, such as the Börsenverein des Deutschen Buchhandels in Germany, which is the most active on this issue — had been less involved in the online infringement issue than their counterparts in the music and film industries, but that’s changing now. Several studies have been done that generally show e-book piracy levels rising rapidly, but there’s wide disagreement on its volume. And virtually no data at all is available about the promotional vs. detrimental effects of unauthorized file-sharing on legitimate sales. Part of the problem is that e-book files are much smaller than music MP3s or (especially) digital video or games; therefore e-book files are more likely to be shared through email (which can’t be tracked) and less likely to be available through torrent sites.
The lack of quantitative understanding of infringement and its impact has led different countries to pursue different paths, in terms of both legal actions and the use of antipiracy technologies. Perhaps the most surprising of the latter trend — at least to those of us on this side of the Atlantic — is the rapid ascendancy of watermarking (a/k/a “social DRM”) in some European countries. For example:
- Netherlands: Arbeiderspers/Bruna, the country’s largest book publisher, switched from traditional DRM to watermarking for its entire catalog at the beginning of this year.
- Austria: 65% of the e-books available in the country have watermarks embedded, compared to only 35% with DRM.
- Hungary: Watermarking is now the preferred method of content protection.
- Sweden: Virtually all trade ebooks are DRM-free. The e-book distributor eLib (owned by the Swedish media giant Bonnier) uses watermarking for 80% of its titles.
- Italy: Watermarking has grown from 15% to 42% of all e-books, overtaking the 35% that use DRM.
(Note that these are, with all due respect to them, second-tier European countries. I have anecdotal evidence that e-book watermarking is on the rise in the UK, but not much evidence of it in France or Germany. At the same time, the above countries are often test beds for technologies that, if successful, spread to larger markets — whether by design or market forces.)
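For readers unfamiliar with how watermarking (“social DRM”) works in practice: instead of locking the file, the distributor stamps each sold copy with buyer-specific information, so that a copy surfacing on a file-sharing site can be traced back to a transaction. Here’s a toy Python sketch — the metadata element and hashing scheme are my own illustration, not any vendor’s actual format:

```python
import hashlib

def embed_watermark(opf_metadata: str, buyer_id: str, txn_id: str) -> str:
    """Stamp an e-book's metadata block with a buyer-specific mark.

    Hashing the buyer details lets the seller trace a leaked copy back to
    a transaction without printing personal data in the file itself.
    """
    mark = hashlib.sha256(f"{buyer_id}:{txn_id}".encode()).hexdigest()[:16]
    return opf_metadata + f'\n<meta name="transaction" content="{mark}"/>'

meta = '<meta name="title" content="Example Title"/>'
stamped = embed_watermark(meta, "reader@example.com", "order-9876")
assert stamped.startswith(meta) and 'name="transaction"' in stamped
```

Commercial systems also scatter less visible marks throughout the content so that stripping the obvious one isn’t enough; the appeal to distributors is that, unlike DRM, none of this interferes with how the buyer reads the file.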
Meanwhile, there’s still a total absence of data on the effects of both DRM and watermarking on users’ e-book behavior — which is why I have been discussing with the Book Industry Study Group the possibility of doing a study on this.
The prevailing attitude among authors is that DRM should still be used. An interesting data point on this came back in January when Lulu, one of the prominent online self-publishing services, decided to stop offering authors the option of DRM protection (using Adobe Content Server, the de facto standard DRM for ebooks outside of the Amazon and Apple ecosystems) for ebooks sold on the Lulu site. Lulu authors would still be able to distribute their titles through Amazon and other services that use DRM.
Lulu announced this in a blog post which elicited large numbers of comments, largely from authors. My pseudo-scientific tally of the authors’ comments showed that they are in favor of DRM — and unhappy with Lulu’s decision to drop it — by more than a two-to-one margin. Many said that they would drop Lulu and move to its competitor Smashwords, which continues to support DRM as an option. Remember that these are independent authors of mostly “long tail” titles in need of exposure, not bestselling authors or major publishers.
One reason for Lulu’s decision to drop DRM was undoubtedly the operational expense. Smashwords’ CEO, Mark Coker, expressed the attitudes of ebook distributors succinctly in a Publishers Weekly article covering Lulu’s move when he said, “What’s relevant is whether the cost of DRM (measured by fees to Adobe, [and for consumers] increased complexity, decreased availability, decreased sharing and word of mouth, decreased customer satisfaction) outweigh the benefits[.]” As we used to say over here, that’s the $64,000 question.
Copyright and Accessibility
June 19, 2013. Posted by Bill Rosenblatt in Events, Law, Publishing, Standards, Uncategorized.
Last week I received an education in the world of publishing for print-disabled people, including the blind and dyslexic. I was in Copenhagen to speak at Future Publishing and Accessibility, a conference produced by Nota, an organization within the Danish Ministry of Culture that provides materials for the print-disabled, and the DAISY Consortium, the promoter of global standards for talking books. The conference brought together speakers from the accessibility and mainstream publishing fields.
Before the conference, I had been wondering what the attitude of the accessibility community would be towards copyright. Would they view it as a restrictive construct that limits the spread of accessible information, allowing it to remain in the hands of publishers that put profit first?
As it turns out, the answer is no. The accessibility community, generally speaking, has a balanced view of copyright that reflects the growing importance of the print disabled to publishers as a business matter.
Digital publishing technology might be a convenience for normally sighted people, but for the print disabled, it’s a huge revelation. The same e-publishing standards that promote ease of production, distribution, and interoperability for mainstream consumers make it possible to automate and thus drastically lower the cost and time to produce content in Braille, large print, or spoken-word formats.
Once you understand this, it makes perfect sense that the IDPF (promoter of the EPUB standards for e-books) and DAISY Consortium share several key members. It was also pointed out at the conference that the print disabled constitute an audience that expands the market for publishers by roughly 10%. All this adds up to a market for accessible content that’s just too big to ignore.
As a result, the interests of the publishing industry and the accessibility community are aligning. Accessibility experts respect copyright because it helps preserve incentives for publishers to convert their products into versions for the print disabled. Although more and more accessibility conversion processes can be automated, manual effort is still necessary — particularly for complex works such as textbooks and scientific materials.
Publishers, for their part, view making content accessible to the print disabled as part of the value that they can add to content — value that still can’t exist without financial support and investment.
One example is Elsevier, the world’s largest scientific publisher. Elsevier has undertaken a broad, ambitious program to optimize its ability to produce versions of its titles for the print disabled. One speaker from the accessibility community called the program “the gold standard” for digital publishing. Not bad for a company that some in the academic community refer to as the Evil Empire.
This is not by any means to suggest that publishers and the accessibility community coexist in perfect harmony. There is still a long way to go to reach the state articulated at the conference by George Kerscher, who is both Secretary General of DAISY and President of IDPF: to make all materials available to the print disabled at the same time, and for the same price, as mainstream content.
The Future Publishing and Accessibility conference was timed to take place just before negotiations begin over a proposed WIPO treaty that would facilitate the production of accessible materials and distribution of them across borders. The negotiations are taking place this and next week in Marrakech, Morocco. This proposed treaty is already laden with concerns from the copyright industries that its provisions will create opportunities for abuse, and reciprocal concerns from the open Internet camp that the treaty will be overburdened with restrictions designed to limit such abuse. But as I found out in Denmark last week, there is enough practical common ground to hope that accessibility of content for the print disabled will continue to improve.
The Coming Two-Tiered World of Library E-book Lending
June 4, 2013. Posted by Bill Rosenblatt in Libraries, Publishing, Services, United States.
A group of public libraries in California recently launched a beta version of EnkiLibrary, an e-book lending system that the libraries run themselves. EnkiLibrary is modeled on the Douglas County Libraries system in Colorado. It enables libraries to acquire e-book titles for lending in a model that approximates print book acquisition more closely than the existing model.
Small independent publishers are making their catalogs available to these library-owned systems on liberal terms, including low prices and a package of rights that emulates ownership. In contrast, major trade publishers license content to white-label service providers such as OverDrive under a varied, changing, and often confusing array of conditions — including limited catalog, higher prices than those charged to consumers, and limitations on the number of loans. The vast majority of public libraries in the United States use these systems: they choose which titles to license and offer those to their patrons.
Welcome to the coming two-tiered world of library e-book lending. E-lending systems like EnkiLibrary may well proliferate, but they are unlikely to take over; instead they will coexist with — or, in EnkiLibrary’s own words, “complement” — those used by the major publishers.
The reason for this is simple: indie publishers — and authors, working through publisher/aggregators like Smashwords — prioritize exposure over revenue, while for major publishers it’s the other way around. If more liberal rights granted to libraries means that borrowers “overshare” e-books, then so be it: some of that oversharing has promotional value that could translate into incremental, cost-free sales.
In some ways, the emerging dichotomy in library e-lending is like the dichotomy between major and indie labels regarding Internet music sales. Before 2009, the world of (legal) music downloads was divided into two camps: iTunes sold both major and indie music and used DRM that tied files to the Apple ecosystem; smaller services like eMusic sold only indie music, but the files were DRM-free MP3s that could be played on any device and copied freely. That year, iTunes dropped DRM, Amazon expanded its DRM-free MP3 download service to major-label music, and eventually eMusic tapered off into irrelevance.
Yet it would be a mistake to stretch the analogy too far. Major publishers are unlikely to license e-books for library lending on the liberal terms of a system like EnkiLibrary or Douglas County’s in the foreseeable future; the market dynamics are just not the same.
In 2008, iTunes had an inordinately large share of the music download market; the major labels had no leverage to negotiate more favorable licensing terms, such as the ability to charge variable prices for music. The majors had tried and failed to nurture viable competitors to iTunes. Amazon was their last and best hope. iTunes already had an easy-to-use system that was tightly integrated with Apple’s own highly popular devices. It became clear that the only meaningful advantage that another retailer could have over iTunes was lack of DRM. So the major labels were compelled to give up DRM in order to get Amazon on board. By 2009, DRM-free music from all labels became available through all major retailers.
No such competitive pressures exist in the library market. On the contrary, libraries themselves are under competition from the private sector, including Amazon. Furthermore, arguments that e-book lending under liberal terms leads to increased sales for small publishers won’t apply very much to major publishers, for reasons given above.
Therefore, unless libraries get e-lending rights under copyright law instead of relying on “publishers’ good graces” (as I put it at the recent IDPF Digital Book 2013 conference) for e-lending permission, it’s likely that libraries will have to labor under a two-tiered system for the foreseeable future. Douglas County Libraries director Jamie LaRue — increasingly seen as a revolutionary force in the library community — captured the attitude of many when he said, “It isn’t the job of libraries to keep publishers in business.” He’s right. Ergo the stalemate should continue for some time to come.
R.I.P. TOC
May 9, 2013. Posted by Bill Rosenblatt in Events, Publishing.
Here’s something that’s a little off topic for this blog but can’t be covered in 140 characters.
The O’Reilly Tools of Change for publishing (TOC) conference has been abruptly cancelled after a seven-year run that culminated in its last show in NYC earlier this year. The announcement was made by Tim O’Reilly, CEO of the iconic tech publishing company O’Reilly Media, on his blog late last week – along with some hints that O’Reilly may be commercializing the editorial workflow tool (Atlas) that O’Reilly has been developing in-house and using with its authors.
This is a real loss to the publishing community. It echoes the trajectory of Seybold, which had previously been the go-to conference for innovation and technology in publishing: Seybold rose with the desktop publishing revolution of the early 1990s, got hit badly in the dot-bomb crash of the early 2000s, and never recovered. Both conferences, in their heydays, attracted over a thousand paid attendees and featured well-constructed, jam-packed, multi-track agendas and large exhibit halls as well as a real sense of community among attendees, vendors, and speakers.
As someone who (in a smaller way) has been involved in conference production for over a decade, my view is that TOC was one of the best-organized and best-produced conferences ever — thanks to co-chairs Joe Wikert and Kat Meyer and their team. Their use of the web to organize the agenda, speakers, and community was unparalleled. The agendas were canny, creative mixes of basic education for publishers and sessions on innovative technologies and business practices; accordingly, the speakers were mixes of old hands and new upstarts. Keynote speakers weren’t the usual publishing industry luminaries but “outside the box” thinkers like, most recently, the media theorist/futurist Douglas Rushkoff. Hallway buzz was palpable.
While I don’t know the exact reasons why O’Reilly pulled the plug on TOC, I would guess that they were mainly financial. Kat Meyer told me that TOC was a poor stepchild among other much bigger events that O’Reilly produces, such as Strata (Big Data), oscon (open source), Velocity (web development), and Web 2.0 Summit (now also discontinued). It’s not at all unusual in the tech world for conferences to appear and disappear as tech trends wax and wane; for example, Jupitermedia, with which I produced the Digital Rights Strategies conferences in the mid-2000s, created and disbanded conferences all the time.
O’Reilly has a product mix that’s not unlike other B2B publishers such as Reed Business Information, McGraw-Hill, and United Business Media (not to mention digital natives like TechCrunch): its publications, conferences, training, and other services are all interdependent and represent cross-selling opportunities. When viewed this way, TOC was an anomaly: a conference about publishing, put on by a company whose real business is information technology (and that, like those others, happened to start out as a pure-play publisher in its field).
O’Reilly had few synergies between TOC and its other properties. Conferences are more usually put on by organizations that have other lines of business — such as industry trade associations (AAP, ALA, NAB, CES), market researchers (Outsell, Gartner), or vendors (Apple, Oracle, SAP), as well as B2B publishers. TOC was, by that perspective, a standalone property. It’s difficult to operate a standalone event and make a profit, particularly when you spend as much on infrastructure and community (and Manhattan hotel space) as O’Reilly did.
And the publishing industry is not exactly known for its lavish budgets. One commenter on a publishing blog demurred at having to pay US $1000 to attend TOC for two days; in contrast, conferences like Velocity and Strata charge as much as double that amount. As Tim O’Reilly himself commented at his last TOC keynote speech, “Why are we here? It’s not to make our fortune.”
There are other conferences about publishing, put on by companies that publish about publishing — such as Digital Book World (F&W Publishing) and Publishing Business Conference (NAPCO). Those organizations are probably celebrating TOC’s hasty demise, but it remains to be seen whether they will fill the void it has created.
Awareness Grows over Digital First Sale
February 19, 2013. Posted by Bill Rosenblatt in Business models, Law, Publishing.
What would happen if the law were to definitively decide that users should get the same rights of ownership over digital downloads as they do with physical media products such as books, CDs, and DVDs? A crescendo of events over the last few weeks indicates growing awareness of this fascinating topic.
Let’s start with late last month, when Amazon was granted a U.S. patent on a scheme for reselling digital objects. The patent describes a scheme for transferring “ownership” of digital content objects from one user to another, possibly with limits on the number of transfers, and handling the e-commerce behind each such transaction.
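The patent’s actual claims are more involved, but the core mechanism — moving a digital object between users while enforcing a transfer limit — can be sketched in a few lines of Python (the class and field names are my own illustration, not Amazon’s):

```python
from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    """A digital object whose ownership can move between users a limited
    number of times; only one party 'holds' it at any moment."""
    title: str
    owner: str
    transfers_left: int = 2
    history: list = field(default_factory=list)

    def transfer(self, new_owner: str) -> None:
        if self.transfers_left <= 0:
            raise ValueError("transfer limit reached")
        self.history.append(self.owner)  # seller's access is revoked
        self.owner = new_owner
        self.transfers_left -= 1

book = DigitalObject("Example E-Book", owner="alice")
book.transfer("bob")
assert book.owner == "bob" and book.transfers_left == 1
```

The crucial property — and the hard part in any real system — is that the seller’s copy must become unusable at the moment of transfer, so that “resale” doesn’t quietly become duplication.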
Digital resale is possible now. For example, the startup ReDigi is doing it for music downloads from iTunes and Amazon. The question is not whether it’s technically feasible to support digital resale with reasonable safeguards against abuse of the process (i.e., “reselling” your content while keeping your own copies). The question is whether doing so requires a license from content owners, or whether users have a legal right to resell their content without permission.
In the former case, any service (like ReDigi) that facilitates resale would have to pay royalties to copyright owners on every transaction. In the latter case, it need not pay anything. Since resold digital content is identical to “new” content, this would have highly disruptive implications for publishers and others in the value chain. The law is not clear on this point, but it may become clearer within the next couple of years through litigation, such as Capitol Records’ lawsuit against ReDigi, and the efforts of a lobbying group called the Owners’ Rights Initiative.
Amazon’s patent does not take a position on whether digital resale requires the copyright owner’s permission; it simply discloses a mechanism for doing digital resale. And of course just because Amazon has a patent does not mean it intends to implement such a system; Amazon was granted about 300 patents in 2012. Still, the issuance of the patent prompted Wired to run an article about digital first sale and its implications two weeks ago.
That brings us to last week, when the O’Reilly Tools of Change for Publishing (TOC) conference took place in NYC. TOC is the preeminent conference on technology and innovation in publishing. Just before the conference, the TOC folks held an invitation-only Executive Roundtable featuring John Ossenmacher, CEO of ReDigi. O’Reilly Media, a publisher of books and other information for IT professionals and a bellwether of technological innovation in publishing, confirmed that it is in talks with ReDigi to take the company into resale of e-books. The room was filled with traditional publishing executives who had a more skeptical view, though Ossenmacher survived the ordeal well.
The TOC organizers had asked me to give a talk on digital first sale at the conference; I did so later in the week (slides available on SlideShare). The room was packed with a broad mixture of editorial, business, and technology folks from the publishing industry. Publishers Weekly, the leading trade publication of the book publishing industry, decided that the topic was important enough to feature in an article summarizing my presentation. Most of the attendees were surprised at the highly disruptive implications for publishers, retailers, and libraries as well as users, though a few expressed the idea that digital resale is yet another inevitable type of change to legacy business models in the content industries.
Digimarc Acquires Attributor
December 4, 2012. Posted by Bill Rosenblatt in Fingerprinting, Images, Publishing, Watermarking.
Digimarc announced yesterday that it has acquired Attributor Corp. Attributor, based in Silicon Valley, is one of a handful of companies that crawls the Internet looking for instances of copyrighted material that may be infringing, using a pattern-recognition technology akin to fingerprinting. Digimarc is a leader in digital watermarking technology, with a large and significant portfolio of IP in the space. The acquisition price was a total of US $7.5 million in cash, stock, and contingent compensation.
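Attributor’s technology is proprietary, but text-fingerprinting systems of this kind are commonly built on “shingling”: breaking a text into overlapping word windows and comparing the resulting sets, so that near-copies score high even after minor edits. A toy Python sketch — purely illustrative, not Attributor’s actual algorithm:

```python
def shingles(text: str, k: int = 5) -> set:
    """Overlapping k-word windows ('shingles') of the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of two texts' shingle sets: near-copies score
    high even after minor edits; unrelated texts score near zero."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original = "the quick brown fox jumps over the lazy dog near the river bank"
suspect = "the quick brown fox jumps over the lazy dog near an old bridge"
unrelated = "completely different text about something else entirely new here"
assert similarity(original, suspect) > similarity(original, unrelated)
```

Production systems hash the shingles and index them at web scale, but the principle is the same: a crawler scores a suspect page against a publisher’s catalog and flags matches above a threshold for takedown review.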
This is a synergistic and strategically significant move for Digimarc. A few years ago, Digimarc had pruned its efforts to create products and services for digital media markets outside of still images. It had decided, in effect, to leave products and services to its IP licensees, companies such as Civolution of the Netherlands and MarkAny of South Korea. Attributor’s primary market is book publishing, with customers including four out of the “Big Six” trade book publishers as well as several leading educational and STM (scientific, technical, medical) publishers.
Digimarc intends to leverage Attributor’s relationships with book publishers to help it expand its watermarking technology into that market and to move into other markets such as magazine and financial publishing. The company cited the explosive growth in e-books as a reason for the acquisition.
Beyond that, Digimarc’s acquisition is another sign of the increasing importance of infringement monitoring services; the previous such sign came over the summer, when Thomson Reuters acquired MarkMonitor.
There are two reasons for this increase in importance. First is the rise of so-called progressive response legal regimes: copyright owners can monitor the Internet and submit data on alleged infringements to a legal authority, which sends users increasingly strong warning messages and, if they keep on infringing, potentially suspends their ISP accounts. The most advanced progressive response regime is HADOPI in France, early results from which are encouraging. The Copyright Alert System is supposedly gearing up for launch in the United States. A handful of other countries have progressive response in place or in process as well.
The second reason for the increasing importance of so-called piracy monitoring is that copyright owners are starting to realize the value of the data they generate, beyond catching infringers. Piracy is evidence of popularity of content — of demand for it. The data that these services generate can be valuable for analytics purposes, to see who is interested in the content and in what ways. Big Champagne, for example, has been supplying this type of data to the music industry for many years. Attributor has been working on a new service that integrates piracy data with social media analytics; Digimarc intends to integrate this into its own data offerings for the image market.
In fact, we’ll have a discussion on the value of piracy data tomorrow at Copyright and Technology NYC 2012. Leading the discussion will be Thomas Sehested of MarkMonitor. There’s little doubt he will be called upon to talk about his new competition.
You Bought It, You Own It. But Can a Library Lend It?
November 12, 2012. Posted by Bill Rosenblatt in Law, Libraries, Publishing.
I’ve been writing regularly about the battle that libraries are fighting over e-book lending — a battle whose outcome doesn’t look good for libraries right now. Libraries can only lend e-books at the pleasure of publishers and not through any legal right. I have said that libraries’ best shot at changing their fortunes as the reading world transitions to digital is to try to get e-lending rights enshrined in the law.
I also didn’t believe — with all due respect to the American Library Association and other library advocacy groups — that the library community stood much of a chance of getting Congress to pay attention to this issue. I had thought that if there were any path to change it would be through the long, hard slog of litigation.
That is, until I read about the Owners’ Rights Initiative, a lobbying group that began life last month. “You bought it, you own it” is the ORI’s mantra. It’s run by Andrew Shore, a partner in a Washington law and lobbying boutique with experience in international trade issues.
The ORI unites a number of constituencies that stand to gain if the First Sale Doctrine is extended to digital content. Recall that First Sale, section 109 of the US copyright law, says that once you legally obtain a copyrighted work, you can do with it as you please without further involvement from the publisher: resell it, lend it, give it away, use it to line a bird cage. But First Sale is currently deemed not to apply to digital files, such as e-books and software.
Libraries have found some interesting allies in the ORI. Most of them are companies (or trade associations representing companies) that sell used merchandise, including eBay and various used computer equipment dealers. Used book sellers are there (Powell’s Books, Chegg). The video rental kiosk operator Redbox has joined to protect its interests as it undoubtedly plans to move from DVDs and Blu-rays to digital files. A few companies that facilitate e-commerce transactions are on board.
This textbook example of a “strange bedfellows” coalition does have synergies. On their own, it would have been inconceivable for companies like eBay and Redbox, let alone the several sellers of used enterprise software, to get anyone to take them seriously on this issue. Tying their opportunism over making money on used software and videos to the issue of public library lending gives them a much better story to tell. For their part, libraries get money and resources far beyond what they can muster on their own, plus a degree of business savvy that is outside of libraries’ comfort zone.
Ironically, the ORI jelled around a court case that is about hardcopy books, not digital content: Kirtsaeng v. Wiley, which is currently before the Supreme Court. The case is about college textbooks (published by John Wiley & Sons) that Supap Kirtsaeng purchased in Thailand and has been reselling on eBay in the U.S. The Supreme Court has to determine how to reconcile two provisions of the copyright law that are at odds with one another in this case: First Sale says that the textbooks are his to resell in the U.S., despite the fact that the prices in Thailand are lower than they are here. On the other hand, section 602 of the law lets a publisher block importation of gray-market copies of its works into the country.
Even though Kirtsaeng is about hardcopy, it’s not hard to see how the case could apply to digital works if the Supreme Court finds for Wiley. One of the ORI members, Quality King Distributors, was involved in a conflict between 109 and 602 in 1998 when another case, Quality King v. L’anza, went to the Supreme Court. Writing for the Court in that case, Justice Stevens specifically excluded “licensees” from First Sale rights because they are “non-owners” of copyrighted works. Because purchasers of digital files are currently considered licensees rather than owners, Stevens’s opinion could be interpreted to mean that they don’t get First Sale rights — though that remains to be tested.
The ORI looks like an interesting vehicle for the library community to get e-lending rights enshrined in law. Even so, it seems unlikely to succeed. First of all, the Kirtsaeng case is an example of how hard the media industry is prepared to fight against First Sale: Wiley has hired no less than Ted Olson, the former U.S. Solicitor General, to argue its case.
Textbook publishers like Wiley don’t like First Sale — whether digital or hardcopy — because it enables the huge market for used textbooks; publishers would love to see the used textbook market go away. Movie studios hate it because it would harm their carefully maintained system of release windows. In general, media companies — as well as digital content retailers like Apple and Amazon — are against digital First Sale because it would create downward pricing pressure, as the “used” copies of digital content are (unlike their physical counterparts) not inferior to “new” ones.
The issue that’s likely to carry the day, if and when digital First Sale legislation is ever considered, is the one that the U.S. Copyright Office pointed out in its 2001 report on the subject: for digital First Sale to work fairly, users would have to delete their copies of files once they gave, lent, or resold them to someone else. Either users would have to be trusted to take this additional step voluntarily (including deletion of copies on all their devices, backups, etc.) or there would have to be a mandatory mechanism, similar to but much more sophisticated than the one that ReDigi has developed for music files, to delete the files automatically.
Neither contingency seems very likely to be enshrined in legislation. This augurs an unsuccessful outcome for libraries. Libraries don’t need the full set of First Sale rights in order to lend e-books without permission from publishers. As I have argued, libraries can get by with narrower rights; such rights could be granted through amendments to Section 108 of the copyright law, the section that extends extra rights to libraries and archives.
So the bottom line on libraries and the Owners’ Rights Initiative is that its critical mass is likely to get libraries more attention in Congress than they might on their own, but it’s unlikely to get the result they need to stay relevant as reading moves to e-books.
Publisher-Library Feud over E-Books Heats Up October 1, 2012. Posted by Bill Rosenblatt in Law, Libraries, Publishing, Rights Licensing, United States.
The US trade associations for public libraries and book publishers exchanged heated words last week regarding the growing impasse over e-book lending. The American Library Association’s (ALA) newly-installed president, Maureen Sullivan, issued an open letter to trade publishers such as Simon & Schuster, Macmillan and Penguin demanding that they license e-books for digital lending. The Association of American Publishers (AAP) issued a response saying, in effect, “Sorry, our hands are tied.”
An article I wrote last year explains the legal background of this issue. Thanks to a legal doctrine known in the US as First Sale, libraries can buy print books and lend them without permission from publishers. But because First Sale doesn’t apply to digital downloads, libraries must get licenses from publishers to acquire e-books for lending. Thus some of the major trade (consumer) book publishers are refusing to license e-books to libraries or are placing restrictions on lending terms.
But that’s not all. E-book technology is also enabling companies like Amazon to supplant some library functions in the private sector, while indie authors and publishers are likely to increase giveaways of their content in digital form, in hopes of exposure. More and more people are reading digitally, while libraries may face a future of lending hardcopy books only. Library patrons will lose, and it’s far from clear that any (legal) private-sector function will completely fill in the gaps.
The good news is that public libraries are finally waking up from the what-me-worry stance they appeared to affect a year ago; Digital Book World says that Sullivan’s “open letter” was born of libraries’ frustration about the way things are going.
The bad news is that this situation is going to get worse before it gets better… if it ever does.
The problem with “open letters” is that they are often tacit admissions of powerlessness. Sullivan’s open letter is primarily an attempt to explain the value proposition of libraries to publishers. Yet that aspect of it contains little that publishers haven’t heard before. It also attempts to convince publishers that they, together with libraries, have a special role in society to spread information and culture that they must maintain. This aspect of it is likely to fall on deaf ears.
The heart of the problem is that libraries aren’t comfortable acting like businesses, while the major publishers are. Yet libraries are being forced into discussions with publishers about business terms instead of relying on laws like First Sale. Many library people find such discussions distasteful or distracting, because they believe (rightly) that theirs is a greater mission than being a “channel” for publishers. Moreover, the reality is that such discussions are unlikely to lead to satisfactory conclusions for libraries.
Library gurus such as Robert Darnton of Harvard have suggested innovative models for libraries and e-books. It’s possible that as wireless broadband and connected devices become more pervasive, publishers and libraries may be able to come to some arrangement that involves licensing e-books for time-limited cloud-based reading, instead of relying on downloads of DRM-packaged e-book files as they do now. But if publishers require that such deals reflect libraries’ true value in book sales, then the numbers may well come up short for libraries. They can argue (again, rightly) that they help publishers sell books in general by promoting reading, but it’s hard to quantify that benefit sufficiently.
The AAP’s don’t-look-at-us response to the ALA open letter is at least honest. Trade associations already labor under constant antitrust restrictions. Not for nothing does every trade association meeting begin with what lawyers call an “antitrust benediction” warning participants not to say anything that could be interpreted as collusion; talks I give at trade associations’ events have to be scrubbed by their antitrust attorneys. Furthermore, the Justice Department’s recent investigations into collusion with Apple over e-book price-setting have made it even more difficult for publishers to collaborate, whether under the AAP banner or otherwise.
Publishers’ inability to agree on library lending terms will only lead to more and more confusion and complexity for libraries and their patrons. In fact, publishers may be loath to work together to create a workable solution for libraries precisely because it could backfire: if the ALA doesn’t like the terms on offer, it could sue on antitrust grounds.
Libraries may have better luck on the legal front than with technology or business terms. As I have explained, getting First Sale to apply to digital content in general (so that anyone can lend, sell, or give away lawfully obtained digital content) is virtually unthinkable. Yet it might be possible to get Congress to pass a narrower change in the law — specifically to Section 108 of the Copyright Act — that would give lending libraries statutory licenses to lend digital content without affecting First Sale rights in general. It remains to be seen whether the political climate in Washington could entertain such legislation, but it may be libraries’ best hope of survival in the e-reading age.
I am working with the International Digital Publishing Forum (IDPF), helping them define a new type of content protection standard that may be incorporated into the upcoming Version 3 of IDPF’s EPUB standard for e-books. We’re calling this new standard EPUB Lightweight Content Protection (EPUB LCP).
EPUB LCP is currently in a draft requirements stage. The draft requirements, along with some explanatory information, are publicly available; IDPF is requesting comments on them until June 8. I will be giving a talk about EPUB LCP, and the state of content protection for e-books in general, at Book Expo America in NYC next week, during IDPF’s Digital Book Program on Tuesday June 5.
Now let’s get the disclaimer out of the way: the remainder of this article contains my own views, not necessarily those of IDPF, its management, or its board members. I’m a consultant to IDPF; any decisions made about EPUB LCP are ultimately IDPF’s. The requirements document mentioned above was written by me but edited by IDPF management to suit its own needs.
IDPF is defining a new standard for what amounts to a simple, lightweight, looser DRM. EPUB is widely used in the e-book industry (by just about everyone except Amazon), but lack of an interoperable DRM standard has caused fragmentation that has hampered its success in the market. Frankly, IDPF blew it on this years ago (before its current management came in). They bowed to pressures from online retailers and reading device makers not to make EPUB compliance contingent on adopting a standard DRM, and they considered DRM (understandably) not to be “low hanging fruit.”
IDPF first announced this initiative on May 18; it got press coverage in online publications such as Ars Technica, PaidContent.org, and others. The bulk of the comments were generally “DRM sucks no matter what you call it” or “Why bother with this at all, it won’t help prevent any infringement.” A small number of commenters said something on the order of “If there has to be DRM, this isn’t a bad alternative.” One very knowledgeable commenter on Ars Technica first judged the scheme to be pointless because it’s cryptographically weak, then came around to understanding what we’re trying to do and even offered some beneficial insights.
The draft requirements document provides the basic information about the design; my main purpose here is to focus more on the circumstances and motivation behind the initial design choices.
Let’s start at a high level, with the overall e-book market. (Those of you who read my article about this on PaidContent.org a few months ago can skip this and the next five paragraphs.) Right now it’s at a tipping point between two outcomes that are both undesirable for the publishing industry. The key figure to watch is Amazon’s market share, which is currently in the neighborhood of 60%; Barnes and Noble’s Nook is in second place with share somewhere in the 25-30% range.
One outcome is Amazon increasing its market share and entering monopoly territory (according to the benchmark of 70% market share often used in US law). If that happens, Amazon can do to the publishing industry what Apple has done in music downloads: dominate the market so much that it can both dictate economic terms and lock customers into its own ecosystem of devices, software, and services.
The other outcome is that Amazon’s market share falls, say to 50% or lower, due to competition. In that case, the market fragments even further, putting a damper on overall growth in e-reading. Also not good for publishers.
Let’s look at what happens to DRM in each of these cases. In the first (Amazon monopoly) case, Amazon may drop DRM just as Apple did for music — but it will be too late: Amazon will have achieved lock-in and can preserve it in other ways, such as by making it generally inconvenient for users to use other devices or software to read Amazon e-books. Other e-book retailers would then drop DRM as well, but few will care.
In the second case, everyone will probably keep their DRMs in order to keep users from straying to competitors (though some individual publishers will opt out of it). In other words, if the DRM status quo remains, the likely alternatives are DRM-free monopoly or DRM and fragmentation.
If IDPF had included an interoperable DRM standard back in 2007 when both EPUB and the Kindle launched, e-books might well be more portable among devices and reading software than they are now. Yet the most desirable outcome for the reading public is 100% interoperability, and we know from the history of technology markets (with the admittedly major exception of HTML) that this is a chimera. (Again, I explained this in PaidContent.org a few months ago.)
To many people, the way out of this dilemma is obvious: everyone should get rid of DRM now. That certainly would be good for consumers. But most publishers — who control the terms by which e-books are licensed to retailers — don’t want to do this; neither do many authors, who own copyrights in their books.
E-book retailers and device vendors can get lock-in benefits from DRM. As for whether DRM does anything to benefit rights holders by improving consumers’ copyright compliance or reducing infringement, that’s a real question. Notwithstanding the opinions of the many self-styled experts in user behavior analysis and infringement data collection among the techblogorati and commentariat, the answer is unknown and possibly unknowable. Publishers are motivated to keep DRM if for no other reason than fear that once it goes away, they can never bring it back. Moreover, certain segments of the publishing industry (such as higher education) want DRM that’s even stronger than the current major schemes.
The fact is, none of the major DRMs in today’s e-book market are very sophisticated — at least not compared to content protection technologies used for video content. The economics of the e-book industry make this impossible: the publishers and authors who want DRM don’t pay for it, resulting in cost and complexity constraints. DRM helps retailers insofar as it promotes lock-in, but it doesn’t help them protect their overall services. In contrast, content protection helps pay TV operators (for example) protect their services, which they want protected just as much as Hollywood doesn’t want its content stolen; so they’re willing to pay for more sophisticated content protection.
The two leading e-book DRMs right now are Amazon’s Mobipocket DRM and Adobe’s Content Server 4; the latter is used by Barnes & Noble, Sony, and various others. Hackers have developed what I call “one-click hacks” for both. One-click hacks meet three criteria: people without special technical expertise can use them; they work on any file that’s packaged in the given DRM; and they work permanently (i.e., there is no way to recover from them). In contrast, pay TV content protection schemes are generally not one-click-hackable.
In other words, one-click DRM hacks are like format converters: the one built into Microsoft Word that converts files from WordPerfect, or the ones built into photo editing utilities that convert TIFF to JPEG. But there’s a difference: DRM hacks are illegal in many countries, including the United States, European Union member states, Brazil, India, Taiwan, and Australia; all other signatories to the Anti-Counterfeiting Trade Agreement will eventually have so-called anticircumvention laws too.
The effect of anticircumvention law has been to force DRM hacks into the shadows, making them less easily accessible to the non-tech-savvy and at least somewhat stigmatized. Without the law, we would have things like Nook devices and software with “Convert from Kindle Format” options (and vice versa). The popular, free Calibre e-book reading app, for example, had a DRM stripper but removed it (presumably under legal pressure) in 2009. A DRM removal plug-in for Calibre is available, but it’s not an official one; David Pogue of the New York Times — hardly a fan of DRM – recently dismissed it as difficult to use as well as illegal.
The US has a rich case history around anticircumvention law that has made the boundaries of legal acceptability reasonably clear. It has shut off the availability of hacks from “legitimate” sources and ensured that if your hack is causing enough trouble, you will be sued out of existence. I am not personally a fan of anticircumvention law, but I accept as fact that it has made hacks less accessible to the general public.
The foregoing line of thought got IDPF Executive Director Bill McCoy and me talking last year about what IDPF might be able to do about DRM in the upcoming version of EPUB, in order to help IDPF further its objective of making EPUB a universal standard for digital publishing and forestall the two undesirable market trajectories described above. We did not set out to design an “ultimate DRM” or even “yet another DRM”; we set out to design something intended to solve problems in the digital publishing market while working within existing marketplace constraints.
So now, with that background, here is a set of interrelated design principles we established for EPUB LCP:
- Require interoperability so that retailers cannot use it to promote lock-in. This is what the UltraViolet standard for video is attempting to do, albeit in a technically much more complex way. The idea of UltraViolet is to provide some of the interoperability and sharing features that users want while still maintaining some degree of control. Our theory is that both publishers and e-book retailers would be willing to accept a looser form of DRM that could break the above market dilemma while striking a similar balance between interoperability and control.
- Support functions that users really want, such as “social” sharing of e-books. Build on the idea of e-book watermarking, such as that used in Safari Books Online for PDF downloads and in the Pottermore Store for EPUB format e-books: embed users’ personal information into the content, on the expectation that users will only share files with people whom they trust not to abuse their personal information.
- Create a scheme that can support non-retail models such as library lending and can be extended to support additional business models (see below) or the stronger security that industry segments such as higher ed need.
- Include the kinds of user-friendly features that Reclaim Your Game has recommended for video game DRMs. These include respecting privacy by not “phoning home” to servers and ensuring permanent offline use so that files can be used even if the retailer goes out of business. They also include not jeopardizing the security or integrity of users’ devices, as in the infamous “rootkit” installed by CD copy protection technology for music several years ago.
- Eliminate design elements that add disproportionately to cost and complexity. Perhaps the biggest of these is the so-called robustness rules that have become standard elements of DRMs such as OMA DRM, Marlin, and PlayReady where the DRM technology licensor doesn’t own the hardware or platform software. Eliminating “phoning home” also saves costs and complexity. Other elements to be eliminated include key revocation, recoverability, and fancy authentication schemes such as the domain authentication used in UltraViolet.
- Finally, don’t try very hard to make the scheme hack-proof. The strongest protection schemes for commercial content — such as those found in pay television — are those that minimize the impact of hacks so that they are temporary and recoverable; such schemes are too complex, invasive, and expensive for e-book retailers or e-reader makers to consider. Instead, assume that EPUB LCP will be hacked, and rely on two things to blunt the impact: anticircumvention law, and allowing enough differences among implementations that each one will require its own hack (a form of what security technologists call “code diversity”).
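To make the watermarking idea in the second principle concrete, here is a minimal Python sketch of Pottermore-style “social DRM”: the file remains fully portable, but the buyer’s identity travels with it as a deterrent to indiscriminate sharing. The function name and the owner-notice markup are invented for this illustration; real systems typically also embed invisible marks.

```python
def watermark_epub_chapter(chapter_html: str, owner_info: str) -> str:
    # Embed a visible ownership notice in the chapter markup. Nothing
    # is encrypted; the deterrent is purely social, since the buyer's
    # identity is attached to any copy that gets passed around.
    notice = f'<p class="owner-notice">This copy belongs to {owner_info}.</p>'
    # Assumes the chapter markup contains an opening <body> tag.
    return chapter_html.replace("<body>", "<body>\n" + notice, 1)

chapter = "<html><body><p>Chapter One</p></body></html>"
marked = watermark_epub_chapter(chapter, "jane@example.com")
```

A real implementation would repeat this across every content document in the EPUB package, not just one chapter.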
With those design principles in mind, we have designed a scheme that takes its inspiration from two sources in particular: the content protection technology used in the eReader/FictionWise e-book technology that is now owned by Barnes & Noble, and the layered functionality concept built into the Digital Media Project’s IDP (Interoperable DRM Platform) standard.
The central idea of EPUB LCP is a passphrase supplied by the user or retailer. This could be an item of personal information, such as a name, email address, or even credit card number; distributors or rights holders can decide what types of passphrases to use or require. The passphrase is irrecoverably obfuscated (e.g. through a hash function) so that even if a hack recovers the passphrase, it won’t recover the personal information; yet the retailer can link the obfuscated passphrase to the user. The obfuscated passphrase is then embedded into the e-book file. If the user wants to share an e-book, all she has to do is share the passphrase. Otherwise, the content must be hacked to be readable.
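The passphrase-obfuscation idea above can be sketched in a few lines of Python. This is only an illustration of the concept, not the actual EPUB LCP design: the choice of SHA-256, the salt value, and the matching function are all assumptions made for this example.

```python
import hashlib

def obfuscate_passphrase(passphrase: str, salt: bytes) -> str:
    # One-way obfuscation: hash the passphrase so the e-book file never
    # carries the raw personal information, even if the file is hacked.
    return hashlib.sha256(salt + passphrase.encode("utf-8")).hexdigest()

# Retailer side: compute the token at purchase time and remember which
# customer it belongs to, so the obfuscated value stays linkable to the
# user even though the personal information itself is unrecoverable.
SALT = b"example-retailer-salt"  # invented value for illustration
token = obfuscate_passphrase("jane@example.com", SALT)
accounts = {token: "customer-42"}

# The token (not the passphrase) is what would be embedded in the e-book
# file; a reading system re-derives it from whatever the user types.
def passphrase_matches(candidate: str, embedded_token: str) -> bool:
    return obfuscate_passphrase(candidate, SALT) == embedded_token
```

Sharing then works as described: giving a friend the passphrase lets their reading system re-derive the same token and open the file, while anyone without it would have to hack the content.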
Other aspects of the draft requirements are covered in the document on the IDPF website. Apart from that, it’s worth mentioning that this type of scheme will not support certain content distribution models unless extensions are added to make them possible. Features intentionally left out of the basic EPUB LCP design include:
- Separate license delivery, which allows different sets of rights for a given file
- License chaining, which supports subscription services
- Domain authentication, which can support multi-device/multi-user “family accounts” a la UltraViolet
- Master-slave secure file transfer, for sideloading onto portable devices, a la Windows Media DRM
- Forward-and-delete, to implement “Digital Personal Property” a la the IEEE P1817 standard
Once again, we set out to design something that meets current market needs and works within current market constraints; EPUB LCP is not a research-lab R&D project.
Again, I’ll be discussing this, as well as the landscape for e-book content protection in general, at Book Expo America next week. Feel free to come and heckle (or just heckle in the comments right here). I’m sure I will have more to report as this very interesting project develops.