Public Library E-Book Lending Must Change to Survive
December 4, 2011 | Posted by Bill Rosenblatt in DRM, Law, Publishing, Uncategorized.
A few events over the past few weeks illustrate the downward arc that I have suggested is in store for public libraries in the e-book age. First, Amazon introduced its own e-book “lending library” for members of its $79/year Amazon Prime service, which allows users to “borrow” one e-book at a time, with no due dates. Second, yet another major trade book publisher, Penguin, got into a spat with public libraries over e-book lending. Penguin stopped offering new titles and withheld Kindle access to all titles, out of unspecified security concerns with OverDrive (the service that powers most U.S. e-book library lending) and Amazon. (Penguin subsequently restored access for existing titles, but not for new ones.)
The Penguin incident is only the latest in what will undoubtedly be a long series of squabbles between publishers and libraries over e-book lending. In fact, five of the “Big Six” U.S. trade book publishers are now either limiting their e-book licensing to libraries or not licensing at all — and the sixth (and largest), Random House, is reportedly reconsidering its library e-book licensing policies. Such spats may well lead to a world of off-putting restrictions and confusion for libraries and their patrons.
Libraries have two fundamental problems here: they have less control over the situation than publishers do, and they are about to get some serious competition from the private sector. An article in Publishers Weekly gives an overview of Amazon’s e-book lending feature and its implications for publishers and authors. In a nutshell, the program is currently limited to a few thousand titles that originate either from Amazon itself or from smaller publishers that still sell e-books to Amazon under a wholesale model, as opposed to the “agency” model used by most major trade publishers, which forbids such activity.
But the Publishers Weekly piece only covers the impact of e-book lending on publishers and authors, many of whom are raising a fuss about Amazon’s program. It says nothing about the program’s impact on public libraries. The executive director of the American Library Association (ALA), Keith Fiels, has publicly expressed a lack of concern over the impact of Amazon’s lending program, given its limited range of titles and that it’s part of a subscription program that includes other features such as streaming video and free expedited shipping. The ALA is more concerned about major-publisher moves like Penguin’s.
Indeed, public libraries are experiencing major growth in e-book lending, especially since Amazon joined the e-lending world by opening up its DRM to enable lending and integrating it with OverDrive’s library lending service. Another piece of evidence that library e-lending is expanding is the entry of a Seattle-based startup called BlueFire Productions as the first serious competitor to OverDrive in the public library space.
At bottom, this is about two things: ways to make e-books available legally for free, and the promotional value of free distribution. That’s why libraries should be worried. First, consumers generally don’t care where they get free legal e-books, as long as they are available conveniently and can be read on their favorite devices. Second, what Amazon has started as a limited service that’s only available to an elite tier of customers will surely become more widely available and with more titles, especially with competitors like Barnes & Noble constantly looking for ways to differentiate themselves from the market leader.
Amazon subsidizes the wholesale cost of e-books that it lends to Amazon Prime members. It does this to make its own services and devices more attractive, not to spur sales of those e-books. If and when B&N offers an equivalent feature, it will undoubtedly do the same.
If I were Keith Fiels at the ALA, I would be very, very afraid. The e-book publishing world may be about to split up into the equivalent of the music industry’s major and indie labels: major labels tend to make deals that maximize revenue and limit free promotion, while indies try for maximum promotion in hopes of getting revenue later. Apply this dichotomy to publishers and e-books, and it becomes clear that libraries will inevitably get squeezed out.
The majors will make life increasingly difficult for public libraries through refusal to license or restrictive and confusing licensing terms. Meanwhile, smaller publishers will “lend” their titles through Amazon and other e-book services — and will most likely be happy with the arrangement for the promotional value it gets them. And some indie publishers will give their e-books away outright — through e-book retailers or through sites like Facebook — in hopes of getting exposure for their authors and selling hardcopy titles, just as thousands of indie musicians used to give away MP3s on MySpace. And let’s not forget that e-book prices are often much lower than their hardcopy counterparts to begin with.
Then it will only be a matter of time until some publishing industry equivalent of Michael Robertson (the music industry’s digital provocateur) will create a search engine for finding free e-books from all of these sources in a single convenient place, storing them in an online locker, sharing them with friends, etc.
If you extrapolate from these changes, you can see how public libraries could become virtually irrelevant for e-book readers.
It’s all because publishers get to decide what e-book titles libraries may lend and (to some extent) under what terms. Again, think of this in music terms: radio stations get the right to play whatever music they want under a license granted by law — a so-called statutory license. Online equivalents of radio (e.g., Pandora, iHeartRadio) get similar rights. Library lending of digital music is virtually nonexistent; radio remains the primary promotional channel for record companies. Perhaps it’s time to think more carefully about public libraries in this light for e-books, as I’ll explain.
There is no equivalent of a statutory license for e-books that would allow libraries to lend them without explicit, title-by-title permission from publishers. As I’ve discussed previously, libraries do get rights under Section 108 of the copyright law to lend e-books under certain conditions. But because most publishers only give libraries e-books to lend as DRM-protected files with license terms attached to them, and Section 108 requires libraries to abide by those license terms, libraries can’t exercise those rights. In effect, those rights have no value for libraries.
Libraries simply do not have enough leverage against major publishers and retailers to improve this situation in the private sector. If they are to remain relevant in the e-book age, they are going to need to push for significant legal reforms, which both publishers and retailers will undoubtedly resist.
I previously suggested one option, albeit in a somewhat tongue-in-cheek manner: push for the Copyright Office to define an exemption to the law that criminalizes hacking of DRMs (Section 1201 of the Copyright Act) so that public libraries can legally remove DRM for the purpose of lending e-books if they repackage them with DRM to enforce lending terms. However, this has two disadvantages: exemptions to Section 1201 only last for three years, until the Copyright Office considers a new set of exemptions, and publishers could push for stronger DRMs that are harder to hack.
The “cleanest” solution to this problem would be to enact Digital First Sale, i.e., an extension to Section 109 of the copyright law that lets anyone do whatever they want with digital downloads once they have acquired them legally. (We had a great discussion on this subject at last week’s conference.) Public libraries owe their existence to First Sale (on physical goods) in the first place. But that won’t help for e-books as long as publishers distribute them with DRM and DRM hacking is still illegal; and anyway, as I discussed recently, Digital First Sale isn’t likely to happen anytime soon. Therefore it would be worth libraries’ while to investigate changes to the law that help them lend e-books while leaving Digital First Sale off the table.
One option would be to push for additional rights for libraries under Section 108. At a minimum, Subsection (f)(4) would have to be relaxed so that libraries may lend e-books even if the licenses they come with forbid this activity. This would be tantamount to a statutory license for libraries to lend e-books without explicit permission from publishers.
As a practical matter, this wouldn’t really change the way things are done today. Libraries lend e-books through third parties like OverDrive, which already get e-books from publishers without DRM and package them with DRM — just like music and video retail services. And provisions already exist in Section 108 that hold libraries liable if they make their own unauthorized copies of e-books. OverDrive and its ilk use DRM to enforce one-copy-at-a-time lending as well as the lending time limits that are in libraries’ own best interests.
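To make the mechanics concrete: one-copy-at-a-time lending with due dates amounts to a simple ledger that the DRM enforces. Here is a minimal sketch in Python; the class, the 14-day default, and the expiry behavior are my own illustrations, not OverDrive’s actual design:

```python
from datetime import datetime, timedelta

class LendingLedger:
    """Toy model of one-copy-at-a-time library e-book lending.
    Illustrative only -- not how OverDrive actually works."""

    def __init__(self, copies_owned):
        self.copies_owned = copies_owned  # licensed copies of this title
        self.loans = {}                   # patron_id -> due date

    def checkout(self, patron_id, now, loan_days=14):
        self._expire(now)                 # lapsed loans free up copies
        if len(self.loans) >= self.copies_owned:
            return None                   # all copies out: join the hold queue
        due = now + timedelta(days=loan_days)
        self.loans[patron_id] = due
        return due

    def _expire(self, now):
        # DRM-enforced due dates: a loan simply stops working at expiry
        self.loans = {p: due for p, due in self.loans.items() if due > now}

ledger = LendingLedger(copies_owned=1)
t0 = datetime(2011, 12, 1)
assert ledger.checkout("patron-a", t0) is not None  # copy available
assert ledger.checkout("patron-b", t0) is None      # one copy at a time
assert ledger.checkout("patron-b", t0 + timedelta(days=15)) is not None  # first loan expired
```

The point of the sketch is that the license terms themselves are mechanically trivial; what matters is who has the legal right to impose them.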
This change in the law would improve the situation for libraries substantially. However, the economics may have to change to make it palatable to publishers. For example, libraries acquire e-books for their collections by paying for them title by title, just as they pay for printed books. Radio stations, on the other hand, typically get free copies of recordings from record labels but pay royalties to the music industry for playing them on the air.
If publishers acknowledge the promotional value of library e-book lending, then they might be willing to accept a statutory license to lend e-books if they can negotiate a per-loan royalty rate in lieu of upfront purchase prices. The Copyright Clearance Center, for example, would be in a good position to manage these payments and royalty disbursements, just as ASCAP, BMI, and SoundExchange do for music.
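The bookkeeping for such a statutory license would be trivial; the hard part is negotiating the rate. A toy sketch, in which the $0.50 rate and the publisher names are hypothetical:

```python
# Toy per-loan royalty accounting, as an alternative to upfront purchase.
# The rate and the loan counts are hypothetical illustrations.

PER_LOAN_ROYALTY = 0.50  # dollars per loan, set under the statutory license

def monthly_disbursement(loan_counts):
    """loan_counts: {publisher: loans this month} -> {publisher: payout}"""
    return {pub: n * PER_LOAN_ROYALTY for pub, n in loan_counts.items()}

payouts = monthly_disbursement({"Publisher A": 1200, "Publisher B": 300})
assert payouts == {"Publisher A": 600.0, "Publisher B": 150.0}
```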
This type of arrangement would enable libraries to maintain huge collections of e-books (through service providers like OverDrive and BlueFire, which would actually house and distribute the e-books) and thus serve the public well. At the same time, the negotiations would have to resolve questions of how many copies of an e-book a given library could lend out concurrently; one copy per library doesn’t reflect the fact that big libraries acquire multiple copies of popular titles. Is it possible for the numbers to be defined so as to be fair to both publishers and libraries? That would be a good question for the Section 108 Study Group, the venue for recommending changes to that section of the copyright law, which used to convene every five years but was disbanded by Congress after its last report in 2008.
A limited form of just such a statutory license-type solution has actually been suggested in the private sector already, in the proposed settlement to publishers’ and authors’ lawsuits against Google. It includes giving public libraries rights to make every book scanned on Google’s behalf — over 12 million titles at last count — available on a single terminal within each library. Libraries would not even have to pay for this. However, this doesn’t allow e-books to be available outside of libraries’ physical confines, it doesn’t allow libraries to acquire multiple copies of e-books they want to make available to more than one patron at a time, and Google can withhold up to 15% of its scanned titles at its discretion.
The Google book settlement is still unresolved, but the terms in it show that publishers may be willing to grant libraries some limited e-book lending rights. Libraries have complained about the “table crumbs” offered to them in the Google book settlement. But unless they take action similar to what I’ve described here, those rights may be the best that public libraries can hope for as the e-book market expands.
Irdeto Acquires BayTSP
October 24, 2011 | Posted by Bill Rosenblatt in Fingerprinting, Publishing, Services, Video.
Irdeto announced on Monday that it is acquiring the antipiracy services company BayTSP. Terms were not disclosed, but this is the culmination of a “strategic alternatives exploration” process that BayTSP had been engaging in for some time.
BayTSP monitors P2P networks, file-sharing services, and other places where unauthorized content might lurk and generates evidence that content owners can use to support legal action against infringers. It uses a range of technologies, including sophisticated network traffic analysis and fingerprinting. It has been one of a shrinking number of providers of such services as the industry has consolidated.
This is a good strategic fit for Irdeto in various ways. First, BayTSP will boost Irdeto’s existing antipiracy services; this will strengthen the company’s competitive positioning particularly against NDS, which is known to have robust antipiracy services to complement its content protection technologies. Second, BayTSP has made some recent forays into e-book antipiracy services, which will complement Irdeto’s own new content protection technology for the e-publishing market.
Yet the consolidation of antipiracy services within a major content protection company has interesting implications for the economics of content protection. Typically, copyright owners pay for antipiracy services such as those of BayTSP, Peer Media, and Attributor, but downstream entities such as network operators, online retailers, and device makers pay for content protection technologies such as conditional access and DRM. At the same time, pay TV operators are starting to launch services in which the content can go beyond the customer’s set top box, possibly onto their tablets, mobile handsets, and PCs. The question is: do pay TV operators believe it’s their responsibility to protect the content beyond the STB?
Irdeto will have to decide the answer to this question. Specifically: will it continue to charge content owners for BayTSP’s antipiracy services, or will it attempt to add to the fees it charges its operator customers? To put it more cynically, have Hollywood studios encouraged Irdeto to acquire BayTSP (as they encouraged Irdeto to buy BD+ Blu-ray content protection technology from Rovi just three months ago) so that they no longer have to pay for it?
Seen in this light, Irdeto’s acquisition of BayTSP becomes part of the company’s overall strategy to offer more comprehensive and higher-grade content protection services to pay TV operators, on the theory that they will pay more to get better protection. This is a risky strategy, but given the growing footprint that Irdeto has in the overall content protection market, it’s a risk that Irdeto can probably afford to take.
Amazon Kindle Cloud Reader Lowers the Speed Bump for E-Books
August 31, 2011 | Posted by Bill Rosenblatt in DRM, Publishing, Services.
Amazon launched Kindle Cloud Reader a few weeks ago. This version of the Kindle e-reader app runs within web browsers and therefore on a wider variety of platforms than its hardware Kindle devices and pre-existing e-reader apps for platforms such as Apple iOS and Android.
The main intent of Kindle Cloud Reader is to get around app stores, so that Amazon can make e-books available on iPads, iPhones, and Android devices without having to pay Apple or Google — both competitors in the e-book space — a percentage of its revenues. Yet Kindle Cloud Reader is different from the others in a way that could turn out to be just as important as its interoperability: it doesn’t encrypt e-book files.
Various people have discovered that Kindle Cloud Reader is a straight HTML5 app and that the server sends it unencrypted content a chapter at a time. It would be fairly easy to build a program that captures the HTML and stores it locally. This would be roughly equivalent to “stream capture” for audio and video, except that the result would be a perfect browser-renderable copy of the e-book.
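In other words, the “capture” step is little more than saving served markup verbatim. A minimal sketch of the reassembly half, with the chapter HTML supplied as plain strings (the actual Cloud Reader endpoints and payload format are not modeled here):

```python
def assemble_book(title, chapters):
    """Concatenate per-chapter HTML fragments into one standalone page.
    `chapters` stands in for the HTML the server would send one chapter
    at a time; the fetching itself is deliberately omitted."""
    body = "\n".join(chapters)
    return (f"<!DOCTYPE html><html><head><title>{title}</title></head>"
            f"<body>{body}</body></html>")

book = assemble_book("Sample", ["<h1>Ch. 1</h1><p>...</p>",
                                "<h1>Ch. 2</h1><p>...</p>"])
assert book.startswith("<!DOCTYPE html>")
assert "<h1>Ch. 2</h1>" in book
```

No decryption, no OCR: the result is a perfect, browser-renderable copy, which is exactly why the speed bump is so low.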
This means that Kindle Cloud Reader does not operate in the same way as other web-based e-readers, such as Google Editions or Amazon’s older Amazon Pages technology. These display page images that would have to be fed sequentially to an OCR engine in order to capture the text – a higher “speed bump” than Kindle Cloud Reader uses.
E-book DRM technologies have generally been hacked, but this move by Amazon lowers the e-book copying “speed bump” significantly — not as low as DRM-free music downloads, but getting there.
Furthermore, Kindle Cloud Reader lacks certain functionality that other e-readers have, such as copy-to-clipboard. Google Editions allows copy-to-clipboard with limits. Ironically, the lack of copy-to-clipboard in Kindle Cloud Reader has inspired hackers to figure out how to add this functionality and thereby stumble upon the fact that the content is not encrypted.
Three questions arise out of this development. First, why is Amazon doing this? Second, do the publishers that license material to Amazon know about it? Third, would a program that captures e-book content in Kindle Cloud Reader be illegal under anticircumvention law (DMCA 1201 in the United States)?
The first question is most likely answerable. This development indicates that Amazon is confident enough about its leadership position in the e-book market that it does not feel as much need to lock customers into its platform, as it has done (more strongly) with its DRM.
It also shows that Amazon intends to make its e-book money more on e-books themselves than on reader devices. This is in line with analysts’ projections that the tablet market will grow faster than e-reader devices and therefore that e-readers will come under increasing price pressure. Amazon’s intention to launch a tablet device of its own by the end of this year corroborates this.
The third question is an interesting one. The anticircumvention law was designed to place liability for hacks to “technical protection measures” (TPMs) on hackers themselves rather than on the suppliers of the TPMs. This has led to the question of how strong a TPM has to be in order to qualify for protection under this law.
A federal court in New York addressed this question in Universal v. Reimerdes (2000) regarding the hacked CSS encryption scheme for DVDs: the defendants in the case suggested that CSS shouldn’t qualify for legal protection because it was so easily hacked. The court did not want to establish a test for TPM effectiveness, so it declined to address that issue.
More recently, a company called SunnComm that made CD copy protection technology threatened to sue a researcher for discovering that its technology was trivially easy to circumvent: holding down the Shift key while inserting a protected CD into a PC’s drive bypassed the copy protection mechanism entirely. SunnComm quickly withdrew the threat. One reason for this could have been fear of the repercussions of an adverse court decision — which would most likely have resulted in just such a test for TPM effectiveness.
If a publisher sues someone under the anticircumvention law for making a program available that extracts e-book content from Kindle Cloud Reader, then we’ll see what the answer to the third question above is (if the suit goes to trial). Or, if a publisher sues Amazon for breach of licensing agreement over the lack of encryption, we’ll know the answer to question number two.
Of course, there is also a fourth question: is this the beginning of the end of DRM for e-books? I suspect the answer is yes, although this should happen more slowly (or not at all) for certain segments of the publishing market, such as higher education and expensive professional/technical content. In general, I don’t believe it will happen as quickly as it did for music.
The digital music industry is moving from a model based on file ownership to one based on cloud storage. Storage of content on servers instead of on users’ devices goes hand-in-hand with elimination of file encryption. This transition is just beginning and will take years to complete. Even so, cloud-based e-reading seems like more of a stretch than cloud-based music: although the “celestial jukebox” model has been available for several years, its uptake has been slow. People are only just now starting to envision a world without physical music ownership. It will take them considerably longer to envision a world without physical books.
Good News for the New York Times
July 22, 2011 | Posted by Bill Rosenblatt in Business models, Publishing, Services.
A short postscript to yesterday’s article on the strong recent uptake in paid subscription music services:
Today the New York Times revealed that 281,000 users are paying to receive its content digitally, including 224,000 in its new Digital Subscriber program and the remaining 57,000 paying for Times subscriptions through e-readers. The Digital Subscriber service launched in March. The Times has already beaten its stated goal of 200,000 digital subscribers by the end of the first year. 224,000 is 26% of the paper’s daily print circulation; the figure does not include the 756,000 print subscribers who also have digital subscriptions. The 26% ratio is about the same as the percentage of digital to print subscribers to Cook’s Illustrated.
To put those numbers in context: the goal that the Times surely has in mind is 400,000. That’s the number of paid online subscribers to the Wall Street Journal. The other interesting number in the periodical publishing space is that of Consumer Reports, which is the largest paid online publication at 3.3 million online subscribers (as of November 2010). Neither Consumer Reports nor Cook’s Illustrated carries advertising.
Different subject: How do you like the new site layout?
Book Industry Bodies Consider DRM… Again
May 26, 2011 | Posted by Bill Rosenblatt in DRM, Publishing, Standards.
This week at Book Expo America in New York, the Book Industry Study Group (BISG) and the International Digital Publishing Forum (IDPF) held an open meeting to discuss what the two industry bodies should do about DRM standardization.
Although this meeting wasn’t all that well attended — it was hampered by a hard-to-find location in the remote reaches of the cavernous Javits Center — it did provide good insight into book publishers’ attitudes about DRM, now that e-books have a much bigger impact on the industry than they did a few years ago.
Angela Bole of BISG kicked off the meeting by explaining the research and standards body’s role in the process. She emphasized that the reason for BISG’s interest in DRM standardization was to “take friction out of the supply chain” for publishers, retailers, and users. BISG has been successful in promoting other supply-chain-oriented initiatives, such as the ONIX standard for book product metadata.
Then Bill McCoy, Executive Director of the IDPF (and former e-publishing executive at Adobe), laid out a few possible choices for direction that the IDPF could help facilitate, and discussed their pros and cons (mostly the latter):
- Rely on e-books migrating to browser-based delivery on connected devices, meaning that users will no longer need to download e-books, making file-based DRM unnecessary (instead relying on what I call “screenshot DRM,” as currently practiced by Google Editions and Amazon’s “Look Inside” feature). This option isn’t practical because the technology won’t be in place for years, and people still want to own their e-books permanently.
- Go DRM-free. One of the advocates of this approach, Andrew Savikas from O’Reilly & Associates, argued for DRM-free and cited his company’s research to prove that “piracy helps sales” [see note below]. But few major publishers are interested in giving up DRM at this time.
- Gravitate towards a single-vendor solution, as the music industry effectively did with Apple and iTunes. This would improve the user experience, but it would result in a single entity with a stranglehold on supply chain economics; publishers would lose.
- Advance an interoperable DRM standard. By process of elimination, McCoy expressed interest in pushing this model.
The IDPF and its predecessor organization, the Open eBook Forum (OeBF), have muffed the DRM issue twice already over the past decade. When it developed the highly successful EPUB format, the IDPF opted not to include DRM in the specification. This happened primarily because the technology vendors that hold sway at the IDPF did not want a DRM standard: they either wanted to do without DRM entirely or to stick with their proprietary DRM; adopting a standard DRM would be an expense and hassle they would rather do without.
Before that, around 2003, the OeBF tried to define a standard rights expression language (REL) that publishers and retailers could use to express rights that they wanted to grant to consumers as part of a DRM system. The MPEG standards body adopted an REL standard (MPEG-REL) as part of its MPEG-21 suite of standards for digital multimedia. The OeBF decided to create an e-book-specific version of MPEG-REL. (I participated in this effort on behalf of the Association of American Publishers.) MPEG-REL has had negligible impact on the market, and the OeBF’s e-book REL effort went nowhere.
The current state of the e-book market makes any DRM standardization strategy challenging. There are now three dominant platform vendors, each with their own DRM: Amazon, Apple, and Adobe (used in virtually all other e-readers, including the Barnes & Noble Nooks and Sony and Kobo Readers). Any DRM standard would have to either promote interoperability among these or replace them. But the major players are already well established and therefore have little incentive to cooperate. Contrast this with Hollywood, where the market for digital video downloads is arguably less mature.
With that in mind, McCoy posited three possible approaches to interoperable DRM:
- Standardize on a single DRM, the way Hollywood did with AACS for Blu-ray (and HD-DVD).
- Instead of using file encryption, use a type of technique that McCoy has dubbed “Social DRM”: insert watermarks into e-books that contain personal information related to the user, such as a credit card number.
- Adopt a rights locker approach similar to that of Hollywood’s DECE (a/k/a UltraViolet), in which users pay for the right to download a title to one or more e-reading devices of their choice, as long as each device supports one of the approved DRMs.
The first of these options is a virtual impossibility with three platform vendors already established in the market. The “social DRM” technique has been tried in both e-books (by Microsoft in the previous decade) and music, with little success. Furthermore, it’s unclear how such a system would work with the EPUB text-markup format: for one thing, I don’t see how to prevent simple tools from stripping the watermark data out of EPUB files, short of reverting to “regular” DRM.
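To see why stripping is the Achilles’ heel of watermark-based protection: an EPUB is essentially a zip of XHTML files, so if the personal data sits in the markup as plain text, a few lines of code remove it. The watermark format below is purely hypothetical, invented for illustration:

```python
import re

# Suppose a "social DRM" watermark were embedded as a comment in each
# XHTML file inside the EPUB zip (this marker format is hypothetical).
WATERMARK = re.compile(r"<!--\s*purchaser:[^>]*-->")

def strip_watermark(xhtml):
    """Remove the hypothetical purchaser-info comment from one XHTML file."""
    return WATERMARK.sub("", xhtml)

page = "<p>Chapter text.</p><!-- purchaser: card 4111-****-1234 -->"
assert strip_watermark(page) == "<p>Chapter text.</p>"
```

A real scheme would hide the data more cleverly, but in an open text format any deterministic embedding is also a deterministic target.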
That leaves the third option, which was the subject of some discussion at the meeting at BEA. The advantage of a DECE-type model for e-books is that it makes it unlikely that any of the platform vendors would need to scrap and replace their existing DRMs. DECE-approved DRMs must merely share certain basic technical characteristics, such as using the same crypto algorithm, so that the central rights locker can store encryption keys that work with all compliant DRMs.
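A sketch of the rights-locker idea may make this clearer: one content key per title, handed out to any approved DRM on behalf of an entitled user. The class, the approved-DRM list, and the key format are all my own illustrations, not DECE’s actual protocol:

```python
# Toy DECE-style rights locker: one content key per title, shared by all
# "compliant" DRMs because they use the same cipher. Purely illustrative.

class RightsLocker:
    def __init__(self):
        self.keys = {}          # title -> content key (same key for every DRM)
        self.entitlements = {}  # user  -> set of owned titles

    def register_title(self, title, key):
        self.keys[title] = key

    def purchase(self, user, title):
        self.entitlements.setdefault(user, set()).add(title)

    def fetch_key(self, user, title, drm):
        """Any approved DRM gets the same key, so all of the user's
        devices can decrypt the same file."""
        if drm not in {"DRM-A", "DRM-B", "DRM-C"}:  # hypothetical approved list
            raise PermissionError("non-compliant DRM")
        if title not in self.entitlements.get(user, set()):
            raise PermissionError("no entitlement")
        return self.keys[title]

locker = RightsLocker()
locker.register_title("Moby-Dick", b"\x01" * 16)
locker.purchase("alice", "Moby-Dick")
assert locker.fetch_key("alice", "Moby-Dick", "DRM-A") == b"\x01" * 16
assert locker.fetch_key("alice", "Moby-Dick", "DRM-B") == b"\x01" * 16
```

Note that the locker never translates between DRMs; it only works because every approved DRM can consume the same key, which is exactly the “shared technical characteristics” requirement described above.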
But I don’t see how adopting DECE would be particularly helpful in reducing the number of e-book platforms or promoting interoperability. Of the three major platform providers, at least two (Apple and Amazon) have no history of cooperating with others. The latest market share statistics for e-book retailers, from Goldman Sachs in February, give Amazon 58% of the market, Barnes & Noble 27%, and Apple’s iBooks 9%. If we assume that the remaining 6% consists of other retailers that use the Adobe platform (such as Sony), then we have Amazon and Adobe fighting it out at a reasonably competitive 58% vs. 33%.
Market forces alone may well reduce the number of dominant platforms to two, by marginalizing Apple as a DRM platform provider for e-books. Both Amazon and B&N have apps that run on popular mobile devices. So one way to achieve “interoperability” is simply to use an iPad, iPhone, Android, or BlackBerry (not to mention Windows or Mac) with both Kindle and Nook apps, and live with two e-bookstores. Apple’s iBooks, which only runs on Apple iOS devices, will isolate itself into irrelevance. And its dependence on the iTunes retail infrastructure hampers Apple from doing the previously unthinkable and switching iBooks to Adobe’s DRM (thereby joining B&N and others to weaken Amazon).
If the book industry really wants to achieve e-book interoperability among dedicated e-readers, then a fourth alternative, beyond those that Bill McCoy suggested, may be worth investigating: Coral. Coral is a consortium, led by Intertrust, that developed a framework for actual interoperation among DRMs through trusted intermediary services. This approach makes it possible for a user to call a service to “translate” content from one DRM to another while maintaining security.
Coral still technically exists but has been quiescent over the last several years as Hollywood rejected it in favor of the DECE multi-DRM approach. DECE depends on online retailers building infrastructure to support all compliant DRMs — currently five of them — and agreeing to let users migrate from one retailer to another like GSM mobile subscribers do with their SIM cards. This is unlikely to fly with Amazon or Barnes & Noble.
Instead, Coral would enable users to use their e-books on other devices while letting retailers retain control of their users’ purchase information. This alternative seems more palatable to e-book retailers than the DECE approach, and it would help users.
Technical and licensing issues must be investigated in order to determine whether Coral might be suitable for current e-book platforms. As various participants stated at the BEA meeting, book publishers are far more likely to be successful in pushing for DRM interoperability through industry-wide vehicles than one publisher at a time. The major e-book retailers need incentives to adopt interoperability that will enhance the user experience and help the market grow faster. Publishers can push for such incentives in licensing deals. As long as their actions fall on the correct side of antitrust law, the IDPF has a way forward.
*O’Reilly commissioned my colleague Brian O’Leary to do a study on piracy’s effect on sales in 2008. O’Leary’s findings encouraged O’Reilly to stay away from DRM. When I asked Savikas what the study measured, he stressed that it was a limited study that was only relevant to the way O’Reilly sells and markets its content.
As the author of books published by O’Reilly myself, I would like to assert that O’Reilly is an outlier, and the research results should not be taken as representative of the book industry as a whole. I maintain that both piracy’s effect on sales and DRM’s effect on piracy (or sales) have yet to be measured with any degree of confidence for book publishing (or any other media industry segment) — and perhaps never will.
Here’s why O’Reilly is atypical: first, it is much more active and sophisticated than other book publishers at using online techniques to market and distribute content, thereby making it easier for O’Reilly to monetize content online. Second, this redounds doubly to O’Reilly’s benefit because of the tech-savvy of O’Reilly’s core audience of IT professionals. Finally, O’Reilly’s content attracts an open-source-oriented crowd that has a particular antipathy towards DRM, making a backlash more likely than for other publishers if O’Reilly were to implement it. O’Reilly & Associates is a superb publisher, but its study on piracy and DRM has limited meaning for the industry at large.
Amazon To Enter Library Lending Market April 20, 2011Posted by Bill Rosenblatt in Devices, DRM, Publishing, Services, United States.
Amazon announced today that it is launching Kindle Library Lending, working with OverDrive to support Kindles and Kindle apps on other platforms on OverDrive’s digital lending platform for public libraries. The timing of the announcement was unclear, given that the service won’t be available until “later this year.”
OverDrive is apparently adding server-side support for Amazon’s Kindle DRM technology, so that it can distribute e-books that are readable on all Kindle devices and apps. This will make OverDrive the first third-party service provider to support the Kindle DRM.
This announcement throws an interesting twist into the recent controversy over lending of e-books from public libraries. One of the complaints that library and user advocates have made about digital lending is that DRM has prevented e-books from being readable on and portable across different reading devices and software. The distinction between the two is important, so let’s examine them.
Currently, patrons of libraries that use the OverDrive service can borrow e-books and read them on just about any popular device except Amazon Kindles. OverDrive uses the Adobe Content Server/Digital Editions platform, which runs on just about every e-reader device except Kindles, as well as on software apps for Windows, Mac, Linux, Android, iOS (iPhone, iPad, etc.), and BlackBerry. When Kindle Library Lending launches, that limitation will be removed.
That doesn’t mean full interoperability, however: library patrons will most likely have to choose which e-book format they want based on what device they have. This will, ironically, lead to overlap: you will be able to choose either format if you have a PC, Mac, Android device, or Apple iOS device. If you have a Nook, Sony Reader, Kobo Reader, or IREX, you’ll choose the Adobe format; if you have a Kindle, you’ll choose the Kindle format. As far as portability is concerned, e-books will be readable across these two highly overlapping subsets of devices. Amazon’s Whispersync feature will even preserve margin notes you write on borrowed e-books without revealing them to other borrowers.
You still won’t be able to “re-lend” your e-book to a friend or family member unless they use your reading device or your user account, and you still won’t be able to move your e-book from a device in one of the ecosystems to one in the other ecosystem — for example, from a Nook to a Kindle or vice versa. But that’s a pretty low number of restrictions, given that this is library lending we’re talking about, not purchase and ownership.
Given the recent price drops, it looks like the Kindle is on its way to being a loss-leader product for Amazon — which will make up the revenue through its margins on e-book sales. So why would Amazon want to support library lending? Apparently because library e-book borrowing is popular, and the Kindle’s lack of support for it gives Amazon’s competitors a differentiating feature that consumers consider to be important. As Amazon’s press release suggests, the Kindles’ ability to read library e-books is up there with their display quality, battery life, and other features in the ultra-competitive e-book reader race.
Google Book Settlement Rejection: A Missed Opportunity March 30, 2011Posted by Bill Rosenblatt in Law, Publishing, Rights Licensing, United States.
U.S. federal judge Denny Chin last week rejected the latest iteration of the settlement agreement between book authors and publishers and Google over Google’s massive-scale scanning and indexing of books. Judge Chin rejected the proposed settlement after having heard from hundreds of parties that objected to it, including members of the author plaintiff class who did not agree with it, academic and public-policy amici curiae, and a coalition of the U.S. Justice Department and Google competitors (Microsoft, Amazon) organized by the prominent antitrust attorney Gary Reback.
The objections to which Judge Chin responded in his opinion focused on areas like Google’s de facto monopoly over the online availability of certain types of works, particularly so-called orphan works whose copyright owners are not in evidence, and the “blessing” that settlement approval would confer on steps that Google took without permission, such as scanning books and making snippets of their texts available online. But the broadest objection that Judge Chin seized on was that the settlement’s structure has such fundamental impact on copyright that it should not be the product of litigation among private parties; it is more properly the domain of Congress.
Large commercial entities such as (in the case of copyright law) major book publishers, record labels, and film studios often bring lawsuits like this one in the first place as a second-best alternative to pushing for legislation. It’s generally more expensive, time-consuming, and risky to litigate than to lobby Congress, but if Congress isn’t paying attention, then the legislative route is not viable.
A prominent example of this is the Supreme Court’s 2005 Grokster decision on file-sharing, which was the result of litigation that music companies instigated when it became clear that Congress wouldn’t enact a bill called the INDUCE Act of 2004. The outcome of Grokster ended up being similar to the INDUCE Act: it established a new class of secondary copyright infringement liability for someone who “induces” people to infringe copyright, in the same manner that someone can induce people to infringe a patent by marketing and profiting from some technology that makes it easy to do so. (The inducement principle for patents is long-established law.)
More recently, Viacom’s huge litigation against Google over YouTube is an attempt to increase network operators’ responsibility to act as “copyright police” over their own services beyond the notice-and-takedown requirements in the current law (17 USC 512). That case is currently making its way through the appeals process. Viacom would also most likely have preferred legislation over this protracted, expensive, and distracting lawsuit to achieve its ends.
Most of the talk over the rejection of the Google book settlement has focused on the issues that Judge Chin emphasized in his opinion: orphaned works, antitrust, and condoning unauthorized copying after the fact. But disappointingly scant attention has been paid to a feature of the settlement that had the potential to improve the global copyright scene for the digital age in a major way: the establishment of a global Book Rights Registry, which Google would have paid over US$30 million to build.
Many of the problems in managing digital rights to content could be solved if there were complete, consistent, up-to-date, and easily accessible sources of information about content and rights holders. Private companies have made various attempts to solve this problem over the years; none have succeeded, owing to unrealistic profitability requirements, overly narrow scope, lack of cooperation from rights holders, and other factors.
Governments have been understandably reluctant to try to establish such databases — especially in an age where even registering copyrights is not considered mandatory. But the need is there, and it’s sorely felt. Notwithstanding its source, legality, or ethics, the Book Rights Registry could have been a real solution to this problem — moreover, one that would be paid for, not by taxpayers or even rights holders but by a company for whom the price would amount to a rounding error on its balance sheet.
Furthermore, the Book Rights Registry — now in the public view for at least two years — has become a source of inspiration for similar activity in other sectors of the media industry, such as the Global Repertory Database for music currently being contemplated in Europe. Many highly qualified managers and potential implementers have been lining up to build and run the BRR, thus helping to ensure good design and operations.
Now, with Judge Chin’s rejection of the settlement, the BRR looks like a lost cause. Judge Chin’s opinion suggests that a revised settlement could be approved if it works on the “opt in” instead of “opt out” principle, i.e., it should include only those works whose copyright owners proactively agree to let be included. This may pass various legal sniff tests. But any resulting Book Rights Registry under an opt-in regime would be of highly dubious value to the industry in general; in fact, it would scarcely differ from repositories of licensable material available today, such as Overdrive’s Content Reserve.
The parties to the proposed settlement are now in a daze over what to do next. Sentiment seems to be toward Google lobbying Congress to pass legislation that would make orphan works available to the public. Such legislation has been in the works for at least five years. But Congress’s attention nowadays is taken up with matters such as unemployment, wars, and the deficit, which (let’s face it) are more important to U.S. society. Yet orphan works legislation has always sounded like a no-brainer.
Now that Google has an estimable lobbying presence in Washington, we may find ourselves in a world with orphaned works becoming available to the public and a Book Rights Registry that includes them as well as works with claimed ownership on an opt-in basis. That’s well short of the “castle in the air” rights information database that some of us have been dreaming of… but I suppose it’s better than nothing.
E-Book Lending: The Serpent in the Garden of Eden March 3, 2011Posted by Bill Rosenblatt in Business models, DRM, Law, Publishing, Services, United States.
I wrote my previous article about e-books and libraries in response to an article by my colleague Thad McIlroy on his Future of Publishing site. The news that HarperCollins had put restrictions into its e-book licenses for lending library services so that each “acquired” title could only be loaned out 26 times was fresh and appeared as a side note in my article. HarperCollins (a division of Rupert Murdoch’s News Corp) is one of the world’s largest trade book publishers, so this is a major development that deserves a closer look.
First, let’s quickly review the technical and legal backdrop to what HarperCollins is doing. Libraries normally buy (acquire) books to lend to library patrons. This is made possible through the copyright law, specifically section 109, which is known as First Sale. Section 109 says that anyone who legitimately obtains a copy of a copyrighted work (e.g., a book) can do whatever she wants with it, including resell it, lend it, or give it away. Eventually physical books in lending libraries become worn and damaged; libraries may repair them or dispose of them. Libraries control lending abuses by collecting fines from patrons who return books late or not at all.
In the world of e-books, libraries don’t buy titles; they license e-books in order to lend them to patrons. A license is a contract, the terms of which are ultimately up to the publisher. Copyright law allows libraries to lend digital works to their members, but DRM-packaged e-books are governed by licenses, and thus contract law, not copyright law.
Of course, it takes no effort to make a copy of an e-book. That’s why library services use DRM to ensure that e-books are loaned only to properly credentialed users (i.e. members of the library) and that those users can’t make copies for their million best friends. Service providers like Overdrive and NetLibrary have arisen to make it possible for libraries to “lend” e-books in a way that is very similar to the way they lend hardcopy books: you get access to the e-book for the library’s lending period (perhaps a couple of weeks, or for a reference work, a few hours), and then it “disappears” from your device and becomes available to another library member. Libraries can license multiple copies of popular works so that more than one patron at a time can borrow them.
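The lending mechanics described above — a fixed number of licensed copies, a loan period, automatic expiration that frees a copy for the next patron — can be sketched in a few lines of Python. This is a purely hypothetical model of the “one copy, one user” scheme, not OverDrive’s or NetLibrary’s actual implementation; all class and method names are illustrative.

```python
from datetime import datetime, timedelta

class EBookLicense:
    """Hypothetical model of library e-book lending: a library licenses
    a fixed number of concurrent copies, and each loan expires after a
    set period, freeing a copy for the next patron."""

    def __init__(self, title, copies, loan_days=14):
        self.title = title
        self.copies = copies        # concurrent loans the library licensed
        self.loan_days = loan_days  # lending period
        self.loans = {}             # patron_id -> loan expiry datetime

    def _expire(self, now):
        # Loans past their due date "disappear" from the patron's device
        # and free up a licensed copy.
        self.loans = {p: due for p, due in self.loans.items() if due > now}

    def borrow(self, patron_id, now=None):
        now = now or datetime.now()
        self._expire(now)
        if patron_id in self.loans:
            return True             # patron already holds this title
        if len(self.loans) >= self.copies:
            return False            # all licensed copies are checked out
        self.loans[patron_id] = now + timedelta(days=self.loan_days)
        return True
```

The point of the sketch is that scarcity here is an artifact of the license terms (the `copies` field), not of the file itself; the DRM merely enforces the expiry.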
The noted library technologist Eric Hellman calls this the “Pretend It’s Print” model — a characterization I don’t quite agree with, but leave that aside for the moment. Hellman characterizes “Pretend It’s Print” as a reasonable model, at least for the time being. But HarperCollins appears to be taking “Pretend It’s Print” quite literally: they seem to be trying to emulate physical wear and tear on a book that leads some libraries to discard books after a while. Still, Hellman’s blog post on the subject drips with contempt for HarperCollins.
I also believe that HarperCollins has done the wrong thing, but for a different set of reasons. Let me preface my reasons with a couple of caveats: I have no access to statistics on the expected lifespans of library books, though I found a couple of data points that expect between 20 and 35 loans until a book must be either discarded or repaired at a cost that may exceed its value — thus making HarperCollins’s 26 seem like an appropriate number (or did they find the same two articles I did?). I also have no insight into a library book’s promotional value to a publisher, but I suspect it’s not very high.
HarperCollins’s 26-loan limit is just a bad decision. It is bound to please absolutely no one. It is a lose-lose-lose proposition. The library community is up in arms on Twitter and elsewhere about the decision. Many are calling for libraries to boycott HarperCollins material in hardcopy as well as e-book format.
Yet at the same time, two other major publishers, Macmillan and Simon & Schuster, never licensed e-books for library lending in the first place. Librarians complain about this, but not very much.
As I said previously, I had heretofore considered e-book lending to be one of the real success stories of DRM. Libraries get to lend e-books, publishers get paid for those e-books, and library patrons can read them on a wide range of devices (pretty much anything but a Kindle) without leaving their homes or offices. Everybody wins.
Furthermore, let me be clear that some form of content protection is absolutely necessary for library e-book lending. To allow library patrons to make additional copies of “borrowed” digital materials with even relative impunity is just plain unfair to publishers and authors. (Yes, DRMs can be hacked; people can make digital scans of hardcopy books too.)
Yet HarperCollins is making two serious mistakes in DRM implementation. One is to try – too literally – to use DRM to emulate a physical product in the digital domain. This has never worked, because a digital emulation will always contain one or more shortcomings with respect to the original physical model that will not meet user expectations. “Pretend It’s Print” may be a convenient point of reference for consumers, but it is more effective to focus on the content access model rather than the physical product in designing digital content services. (As far as I know, record labels aren’t experimenting with DRMs that gradually introduce clicks, pops, and skips into digital music files.)
In this case, the HarperCollins model will fail to meet “user expectations” by angering librarians, who don’t like DRM in principle. Either the e-book will suddenly become unlendable without warning or the DRM system will warn librarians that they will soon have to pay for another license to keep lending the e-book. How many libraries will re-up? Not many, I suspect.
Furthermore, this move defies logic regarding publishers’ strategies for their backlists (catalogs of older content). Publishers believe that their backlist titles have less value than frontlist titles, and they constantly seek ways to invigorate sales of their backlists. By making it unlikely that e-books will be available for library lending after a year or so, HarperCollins is both cutting off access to products that it presumably does not value highly in the first place and hurting its ability to invigorate its backlist. This makes no sense at all.
The other mistake that HarperCollins has made is to introduce complexity into a DRM implementation in a way that adds no value for users. Many early digital music services failed to gain user acceptance because they were too complex for users to understand. Some, for example, had Byzantine pricing plans – X permanent downloads, Y timed downloads, and Z streams per month – that resembled the bad old days of confusing cell phone plans. iTunes won because it kept things simple. Nowadays, as music services take on more and more new features in their attempts to unseat the iTunes juggernaut, they risk similar user confusion and alienation (most egregious current example: the feature-overloaded MOG).
If HarperCollins wanted to try something different with licensing terms, it should have done something that offered value or choice. It could, for example, have offered a choice of limited-loan titles for less money or unlimited-loan for full price. (Eric Hellman tried polling this question; the responses he got prove little more than how emotional everyone is over this issue — which is exactly my point.)
If HarperCollins does not get value from e-book lending, then why not just pull its catalog entirely and join Simon & Schuster and Macmillan as library holdouts? If they do that instead, librarians need not bother boycotting HarperCollins’s e-books; and any threats to boycott the publisher’s hardcopy releases will surely ring hollow.
The end result of a move like this can only be the slow and painful death of library e-book lending. HarperCollins may hope that other publishers will follow its model – though not so closely as to invite antitrust scrutiny. This will only lead to further confusion for librarians and users alike: HarperCollins allows 26 loans, Random House allows 35, Penguin allows 20, etc. There is no way that a model like this can lead to the growth in library e-book lending that libraries need to survive as e-reading grows in popularity.
Libraries are highly unlikely to reverse the tide in the market alone. Boycotts may be emotionally satisfying but will have no practical impact. Instead, the library community’s best hopes lie in the legal system.
The most likely route would be to try to get the Copyright Office, at its next DMCA rulemaking in 2013, to approve an exemption that would allow libraries to circumvent (hack) DRMs in order to lend e-books as long as they re-package them for the library patron with the same type or strength of DRM. This would be a more elaborate exception than any that the Copyright Office has granted in its four DMCA rulemakings to date. It also has various disadvantages: it could only last three years under the DMCA rulemaking rules (every exception only lasts until the next triennial rulemaking); it could cost libraries more money to support than they pay Overdrive or NetLibrary, which benefit from scale economies; and it could induce publishers to demand (and perhaps even pay for!) DRM that is more difficult to hack.
But perhaps it’s worth a try. Unlike the Section 108 Study Group — a body that recommends changes to the part of copyright law that covers libraries, which ironically has little bearing on the issue at hand — it is possible for anyone to submit a request for a DMCA exemption to the Copyright Office without first having to run a gauntlet of copyright industry lobbyists.
If the Copyright Office were to grant such an exemption, it would mean that a library could be free to purchase any e-book — not just those that the publisher decides to license — and lend it to its members on its own terms while respecting copyright. The result would be a better version of “Pretend It’s Print” — in the business model sense, where it counts.
Are Libraries Locked Out of the E-book World? February 27, 2011Posted by Bill Rosenblatt in DRM, Law, Publishing, Uncategorized, United States.
Publishing guru Thad McIlroy was kind enough to link to one of my stories on the e-book DRM scene in an article on his excellent Future of Publishing site. (I have had the pleasure of working with Thad on various projects over the years. Especially when it comes to production and output issues for publishers, he is The Man.) So it’s incumbent on me to return the favor.
In his piece, Thad accuses book publishers and Amazon of effectively colluding to shut out libraries from access to e-books. You can borrow e-books from many public libraries in the United States, but the process is clunky – because it entails using a system provided by a third party, Overdrive – and you can’t read them on a Kindle device or any of the Kindle apps.
On the one hand, de facto (if not necessarily explicit) collusions of this type are far from uncommon; in fact the history of copyright law is littered with such arrangements (read Jessica Litman’s Digital Copyright for a particularly jaundiced view on this). But on the other hand, there are a couple of aspects to this story that Thad didn’t cover. Frankly, his piece had me a bit befuddled, because for a long time I have pointed to e-book lending as one of the actual success stories of DRM, a model that increases consumer choice and convenience.
First of all, Amazon is not the only company with a popular e-book platform. Adobe’s e-book platform works on just about every e-reader except the Kindles (including the Barnes & Noble Nooks and Sony Readers) as well as on PCs, Macs, Android, and so on. The Adobe platform supports library lending and in fact is at the heart of Overdrive’s public library e-book lending service. Moreover, a very recent study indicates that the Kindle’s market share among the e-book reading public has dropped below 50%, mainly thanks to the Apple iPad… and regarding iOS devices’ compatibility with the Adobe e-book platform, yes, there’s an app for that. So, if you want to borrow e-books from your public library, just don’t use a Kindle; you have plenty of other choices.
In addition, there is a legal as well as technological or market-based angle to the problem of libraries in the era of digital content that’s worth discussing. Section 108 of the U.S. copyright law grants libraries and archives rights to content that exceed those granted to people under normal conditions. Among other things, it allows libraries to make copies of copyrighted works for noncommercial lending, as long as those copies are limited in number and afforded adequate protections against infringement.
There are various subtleties to Section 108 and its interplay with other areas of copyright law, not to mention moving-target implications of digital technologies. Accordingly, the law requires a group of interested parties to revisit Section 108 every five years and recommend any changes they deem necessary. The Section 108 Study Group is an analog to the better-known rulemaking on Section 1201, which the U.S. Copyright Office conducts every three years. Section 1201 — enacted as part of the Digital Millennium Copyright Act — is the law against circumventing (hacking) DRM on copyrighted works.
The Section 108 Study Group (in its 2008 incarnation, at least) has 19 members, which are well balanced between copyright-owner and library/archive interests: nine from each side and a neutral “legal advisor” from Columbia Law School.
Section 108 allows a library to make a copy of an e-book and lend it out to the library’s members. Under this law, a library could presumably buy an e-book and lend it out. But if the e-book is packaged with DRM, there are two problems. First, the library is not actually buying a copyrighted work, it is licensing the work; see below. Second, Section 108 doesn’t allow the library to hack the DRM in order to make the copy – not even if the library agrees to re-package the copy in a DRM scheme that lets a specific library patron read the e-book. Such hacking would have to be allowed as an exception to Section 1201, which is the province of the Section 1201 rulemaking, and thus of the Copyright Office, not the Section 108 Study Group. (See, I told you this stuff is subtle and complex.)
Because major publishers require DRM on their e-book releases, this means that libraries aren’t able to exercise rights under Section 108 just as a matter of law. This has given rise to services like Overdrive, which facilitate the licensing of e-books from publishers for library lending purposes.
A license is a contract. The licensing of digital content exists in a legal realm that is separate from copyright law – at least for the moment. The upshot is that publishers are free to choose whether to license their material in e-book form for library lending and to dictate some of the terms of those uses, such as the number of devices on which a given user can read the material, period of lending, or number of times an e-book can be loaned. For example, Simon & Schuster doesn’t license for e-book lending at all, and HarperCollins just introduced a policy to limit the number of loans per licensed e-book to 26, in an apparent move to mimic the lifespan of a physical book in library circulation.
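Terms like HarperCollins’s cap amount to a counter attached to the license rather than any property of the e-book file itself — which is part of why the policy feels arbitrary. A minimal sketch of such a capped license, with hypothetical names (not any vendor’s actual data model):

```python
class CappedLicense:
    """Illustrative model of a per-title loan cap, in the spirit of
    HarperCollins's 26-loan policy. Field names are hypothetical."""

    def __init__(self, title, max_loans=26):
        self.title = title
        self.remaining = max_loans  # circulations left on this license

    def checkout(self):
        # Each completed loan consumes one licensed circulation; once the
        # counter hits zero, the library must buy a new license to keep
        # lending the title -- the digital stand-in for "wear and tear."
        if self.remaining == 0:
            return False
        self.remaining -= 1
        return True
```

Note that `remaining` is pure license accounting: unlike a worn physical book, nothing about the underlying file degrades when the counter runs out.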
Because libraries and publishers will perpetually disagree on these terms, it helps to have a third party like Overdrive or NetLibrary to act as a buffer or intermediary. Some publishers may also agree to license their content through these services because of the risk that their refusal to do so will cause the Section 108 Study Group to recommend changes in the copyright law that give libraries more latitude in lending digital works. As it is now, the copyright-owner contingent in the Study Group can point to services like Overdrive and NetLibrary as evidence that the market is providing solutions so no changes in the law are necessary.
The last Section 108 Study Group Report (for which I consulted to the Study Group) came out in 2008, which means that the activity in preparation for the next one will take place next year. The next Copyright Office 1201 rulemaking also takes place in 2013. If the members of the 108 Study Group who are on the “library side” want greater flexibility for libraries to lend digital works, they may want to try to get exemptions to the 1201 anti-hacking law for library lending proposed and approved.
If that happens, then Amazon and book publishers definitely will no longer have the “library lock-out” that Thad McIlroy described in his article.
Taking Pictures of Magazine Articles in a Bookstore: A Conundrum January 17, 2011Posted by Bill Rosenblatt in Images, Law, Publishing.
An article in last Sunday’s New York Times asked whether people who use their camera phones in bookstores to make copies of copyrighted material — in author Nick Bilton’s case, pages from books on home interior designs — are “pirates.”
To get an answer, Bilton turned to three academic experts. The two lawyers (the third was an economist) were Julie Ahrens, the director of the Fair Use Project at Stanford Law School, and Charles Nesson, the Harvard Law professor who tried to argue that his pro bono client Joel Tenenbaum was engaging in Fair Use when downloading music files from a P2P sharing network (Tenenbaum lost). Nesson’s views on Fair Use have been considered far-out even by the likes of Lawrence Lessig.
Fair and balanced? Draw your own conclusions, but here’s how the two legal academics cleverly finessed their well-known positions on matters like this: They allowed that the question of whether or not Bilton acted legally falls under the Fair Use factors that a court must consider under US copyright law. But they conveniently omitted the salient fact that of the four Fair Use tests, the one considered most important is this: does the use of the content negatively affect the market for the work? Bilton admits in his article that he and his wife sat on the floor of their local Barnes & Noble, took pictures of selected home designs with their iPhones, and left the bookstore without buying anything.
If that’s not an effect on the market for the work (albeit a very small one), I don’t know what is.
The point here is not to try to finger Nick Bilton for copyright infringement due to his actions, which by themselves are rather inconsequential. Instead, the point is to explore the more interesting effects of his article. Bilton can be said to have published a recipe that others can use to perform actions that publishers may construe as copyright infringement, and to have done so in a very prominent publication.
Publishers and content licensors have been concerned with this for years. They already use technologies — not mentioned in Bilton’s article — to find copies of content online, such as image tracking from PicScout and text fingerprinting from Attributor. Of course, those technologies couldn’t be used in a personal device such as an iPhone if the resulting images are only used personally, unless hardware makers were compelled to build them into their devices.
But some may liken Bilton’s article to the code for cracking DVD encryption that a few people posted on public web pages — a legal matter that reached a United States Appeals Court. Publishers may grumble. But would they have a case against the New York Times?
I’m not a lawyer, so I invite others to comment with their opinions. Yet here are some factors that may be relevant:
- The analogy to DVD cases like Corley and Reimerdes fails because the defendants in those cases publicized ways of circumventing DRM, which is illegal under the DMCA (17 USC 1201). There is no DRM on printed books and magazines.
- What Bilton did is also doable with a pencil and paper, albeit with more time and effort (not to mention conspicuousness to store personnel).
- Neither Bilton nor the Times makes the tools; they’re just telling people about them, and the tools in question — digital cameras — have far more noninfringing than infringing uses.
- There is this little thing called the First Amendment.
On the other hand:
- It could be argued that the Times sought to gain from publishing this “infringement recipe,” since their editors choose articles to publish based on what will sell copies of the paper (or bring traffic to their website). That would introduce factors related to the “inducing infringement” theory behind the Supreme Court’s Grokster decision of 2005, which caused Grokster’s non-liability to be revisited.
- This behavior could be lumped under the heading of photocopying — as if the bookstore had a photocopier on the premises. For example, the Copyright Clearance Center charges corporations license fees for presumed photocopying of periodical articles and other content.
Otherwise, analogous legal precedents and cases in other forms of media are hard to find. Going to a record store and asking the manager to play something on the stereo, then surreptitiously recording it and going home, maybe? Perhaps this is uncharted legal territory.
What do you think?