Judge Dismisses E-Book DRM Antitrust Case
December 12, 2013. Posted by Bill Rosenblatt in DRM, Law, Publishing.
Last week a federal judge in New York dismissed a lawsuit that a group of independent booksellers brought earlier this year against Amazon.com and the (then) Big Six trade publishers. The suit alleged that the publishers were conspiring with Amazon to use Amazon’s DRM to shut the indie booksellers out of the majority of the e-book market. The three bookstores sought class action status on behalf of all indie booksellers.
In most cases, independent booksellers can’t sell e-books that can be read on Amazon’s Kindle e-readers; instead they have a program through the Independent Booksellers Association that enables consumers to buy e-books from the stores’ websites via the Kobo e-book platform, which has apps for all major devices (PCs, iOS, Android, etc.) as well as Kobo eReaders.
Let’s get the full disclosure out of the way: I worked with the plaintiffs in this case as an expert witness. (Which is why I didn’t write about this case when it was brought several months ago.) I did so because, like others, I read the complaint and found that it reflected various misconceptions about DRM and its place in the e-book market; I thought that perhaps I could help educate the booksellers.
The booksellers asked the court to enjoin (force) Amazon to drop its proprietary DRM, and to enjoin the Big Six to allow independent bookstores to sell their e-books using an interoperable DRM that would presumably work with Kindles as well as iOS and Android devices, PCs, Macs, BlackBerrys, etc. (The term that the complaint used for the opposite of “interoperable DRM” was “inoperable DRM,” much to the amusement of some anti-DRM folks.)
There were two fundamental problems with the complaint. One was that it presupposed the existence of an idealized interoperable DRM that would work with any “interoperable or open architecture device,” and that “Amazon could easily, and without significant cost or disruption, eliminate its device specific restrictive DRM and instead utilize an available interoperable system.”
There is no such thing, nor is one likely to come into being. I worked with the International Digital Publishing Forum (IDPF), the trade association for e-books, to design a “lightweight” content protection scheme that would be attractive to a large number of retailers through low cost of adoption, but that project is far from fruition, and in any case, no one associated with it is under any illusion that all retailers will adopt the scheme. The only DRM that is guaranteed to work with all devices and all retailers forever is no DRM at all.
The closest thing there is to an “interoperable” DRM nowadays is Adobe Content Server (ACS) — which isn’t all that close. Adobe had intended ACS to become an interoperable standard, much like PDF is. Unlike Amazon’s Mobipocket DRM and Apple’s FairPlay DRM for iBooks, ACS can be licensed and used by makers of e-reader devices and apps. Several e-book platforms do use it. But the only retailer of any size in the United States market that does so is Barnes & Noble, which has modified it and combined it with another DRM that it had acquired years ago. Kobo has its own DRM and uses ACS only for interoperability with other environments.
More relevantly, I have heard it said that Amazon experimented with ACS before launching the Kindle with the Mobipocket DRM that it acquired back in 2005. But in any case, ACS’s presence in the US e-book market is on the wane, and Adobe has stopped actively working on the product.
The second misconception in the booksellers’ complaint was the implication that the major publishers had an interest in limiting their opportunities to sell e-books through indie bookstores. The reality is just the opposite: publishers, from the (now) Big Five on down, would like nothing more than to be able to sell e-books through every possible retailer onto every possible device. The complaint alleges that publishers “confirmed, affirmed, and/or condoned AMAZON’s use of restrictive DRMs” and thereby conspired to restrain trade in the e-book market.
Publishers have been wary of Amazon’s dominant market position for years, but they have tolerated its proprietary technology ecosystem — at least in part because many of them understand that technology-based media markets always settle down to steady states involving two or three different platforms, protocols, formats, etc. DRM helps vendors create walls around their ecosystems, but it is far from the only technology that does so.
As I’ve said before, an “MP3 for e-books” is highly unlikely and is largely a mirage in any case. Copyright owners face a constant struggle to create and preserve level playing fields for retailers in the digital age, a struggle that the more savvy among them recognize they cannot win as completely as they would like.
Judge Jed Rakoff picked up on this second point in his opinion dismissing the case. He said, “… nothing about [the] fact [that publishers made agreements with Amazon requiring DRM] suggests that the Publishers also required Amazon to use device-restrictive DRM limiting the devices on which the Publishers’ e-books can be displayed, or to place restrictions on Kindle devices and apps such that they could only display e-books enabled with Amazon’s proprietary DRM. Indeed, unlike DRM requirements, which clearly serve the Publishers’ economic interests by preventing copyright violations, these latter types of restrictions run counter to the Publishers’ interests …” (emphasis in original).
Indie bookstores are great things; it’s a shame that Amazon’s Kindle ecosystem doesn’t play nicely with them. But at the end of the day — as Judge Rakoff also pointed out — Amazon competes with independent booksellers, and “no business has a duty to aid competitors,” even under antitrust law.
In fact, Amazon has repeatedly shown that it will “cooperate” with competitors only as a means of cutting into their markets. Its extension of the Kindle platform to public library e-lending last year is best seen as part of its attempt to invade libraries’ territory. More recently, Amazon has attempted to get indie booksellers interested in selling Kindle devices in their stores, a move that has elicited frosty reactions from the bookstores.
The rest of Judge Rakoff’s opinion dealt with the booksellers’ failure to meet legal criteria under antitrust law. Independent booksellers might possibly have a case to bring against Amazon for boxing them out of the market as reading goes digital, but Book House of Stuyvesant Plaza et al v. Amazon.com et al wasn’t it.
Apple and Disney: A Copyright Conundrum
November 25, 2013. Posted by Bill Rosenblatt in Uncategorized.
Last week I was at Rutgers Law School in New Jersey. A law student struck up a conversation with me, and once he discovered that I was there to give a guest lecture in Prof. Michael Carrier’s intellectual property class, he showed me something that had us both scratching our heads. It was a decal of Snow White, affixed to the lid of his MacBook laptop so that she was holding the Apple logo in her hands. It turns out that more than one designer has thought of this idea; here’s one example.
Let’s make the (fairly safe) assumption that the makers of these decals were not licensed by The Walt Disney Company. So the question is: would this be a fair use of the iconic cartoon image, or is the decal maker liable?
The design works as ironic commentary on a couple of levels. Those of you who have seen the classic 1937 Disney animated feature, or at least know the story of Snow White and the Seven Dwarfs, will remember that in the story she held an apple, which was poisoned. (Snow White’s pose in the decal is the same as when she held the poisoned apple in the movie.) On another level, Snow White holding the Apple logo is a commentary on Apple’s relationship with Disney, given that Steve Jobs was on the Disney board and was the largest investor in the company.
Is the decal a “transformative” use of Disney’s intellectual property? (If a use of copyrighted material is transformative, it’s likely to be fair use.)
From what I can tell, the manufacturer of the decals is using Disney’s IP without permission by simply making copies of Snow White. There is nothing “transformative” about that by itself; it’s not part of a mashup, collage, remix, etc. The whole of Snow White was used, not a snippet or sample. The decal was sold commercially, though it probably doesn’t make people less likely to buy Snow White items from the Disney Store. It may or may not be an example of “appropriation art.”
The “transformative” use of the decal is made by the person who buys it and affixes it to his MacBook. One could argue that the decal was made specifically with that use in mind; one could say that the decal maker was “inducing” transformative uses of Snow White.
OK, copyright geeks, time to weigh in. Here’s a poll. Feel free to elaborate in the comments.
Copyright Technology, Gangnam Style
November 7, 2013. Posted by Bill Rosenblatt in Asia-Pacific, Events.
The Samsung TV monitor wasn’t particularly large. But the picture was so sharp that, even from several meters away, you could make out each hair of the perfectly sculpted eyebrows on the perfectly groomed K-Pop starlets performing on the MTV Korea broadcast. The scene was a casual neighborhood restaurant in Gangnam, the upscale district of Seoul that PSY made world-famous. The makgeolli (Korean rice wine) flowed freely, if more freely to the locals than to those of us who had flown across many timezones to get there. We were all celebrating the successful end of ICOTEC, the International Copyright Technology Conference, which attracted 600 attendees earlier this week.
This is what happens when a government not only pays lip service to copyright but also puts its money where its mouth is. South Korea is in the midst of a perfect storm of growth in digital media: Samsung Galaxy mobile devices everywhere; broadband Internet at speeds several times those available in the west; K-Pop idol factories churning out one international sensation after another. The media, telecoms, and consumer electronics industries don’t agree on everything (e.g., consumer electronics companies refuse to pay copyright levies on their devices), but they largely cooperate – as their offices sit near one another in Digital Media City, a special district across town from Gangnam that emerged from a landfill site just over a decade ago. Much of that cooperation is fostered by the government, and it has resulted in a global digital media juggernaut.
The Ministry of Culture, Sports and Tourism (MCST) sponsored ICOTEC, now in its third year. It was as well organized as any private-sector conference I’ve seen, and admission was free. The conference program had more emphasis on technology than on law compared to my own Copyright and Technology conferences in New York and London. That’s fitting because, as I have mentioned, Korea and its Asia-Pacific neighbors produce more innovation in rights technologies (relative to their GDPs) than the traditional content-producing regions of the United States and Europe. But now Korea is producing technological innovations not just to satisfy the distant demands of western media companies; it’s launching innovative content models and technologies to protect copyright for the benefit of its own multi-billion-dollar content. As ICOTEC attendees could see, Korea has perhaps the most vibrant rights technologies industry of any single country in the world today.
The Korean government’s own foray into rights technology is particularly interesting: it operates the Copyright Protection Center (CPC), part of the Korean Federation of Copyright Organizations (known as KOFOCO). The CPC runs a piracy monitoring system called ICOP — a reverse-engineered acronym for Illegal Copyright Obstruction Program.
ICOP is analogous to private-sector piracy monitoring services such as MarkMonitor, Irdeto/BayTSP, Muso, Ayatii, and various others. It monitors various types of online services, including P2P file-sharing sites, web portals, “web-hard” (Korea’s equivalent to “cloud storage”) services, torrent sites, and so on. Like those piracy monitoring services, ICOP uses a combination of techniques including fingerprinting technology and analysis of metadata and information surrounding potentially infringing content on the monitored sites.
ICOP gathers data in real time and sends takedown notices to the sites, which remove the content to avoid action from Korean law enforcement. CPC claims that nearly all of the monitored sites take down the accused content when requested. The effort also includes crackdowns on physical piracy: CPC employees armed with smartphones use specially-designed apps to photograph sellers of pirated DVDs and CDs on streets, in subway stations, etc. The apps geo-tag the photos and send them to the CPC in real time, where the data is displayed on maps, analyzed, and passed on to law enforcement for further action.
ICOP is arguably more effective than monitoring efforts elsewhere because it’s better integrated with law enforcement. Media companies in the west use private-sector piracy monitoring services, which provide data that copyright owners must use to generate takedown notices and initiate litigation and law enforcement actions. The hands-off approach of western governments fosters competition among piracy monitoring services, which should theoretically lead to more effective technologies. But the reality has been a limit on revenue opportunities that has led to market consolidation, as many piracy monitoring services have merged, been acquired by larger companies for synergies with other businesses, or just ceased operations.
In contrast, CPC gets 90% of its funding from the Korean government. It operates on an annual budget of about US$5 million, which works out to just 9 cents per Korean citizen; the other 10% comes from content industry trade associations. (By comparison, HADOPI costs the French about 15 cents per citizen per year.)
ICOP’s fingerprinting technology comes from the Electronics and Telecommunications Research Institute (ETRI), a government-funded research lab. Although much of the system is automated, the CPC employs 110 people to monitor and report on illegal content on targeted online services. The employees are disabled citizens who work from their homes.
(The CPC operates completely separately from Korea’s “three strikes” graduated response regime, which, like HADOPI, targets individual downloaders instead of online service operators. Korea was actually the first country to implement graduated response.)
KOFOCO started ICOP in 2008 in response to the rapidly growing rate of piracy amid an equally rapidly growing content industry in Korea. CPC estimates that the size of the illegal content market shrank 27.6% between 2011 and 2012 alone, and that the infringement rate dropped 14% in the same time period.
Yet the Korean government doesn’t just focus on copyright enforcement. MCST supports what is perhaps the world’s leading effort to educate the public about copyright. It has also expanded the scope of fair use under Korean copyright law, increased access to public-sector content, introduced more cost-efficient legal processes for small infringement claims (for settlements below roughly US $10,000), and dramatically increased the efficiency of music rights licensing.
The ICOTEC conference put all this activity on proud display for attendees and speakers from around the world. It was a ton of very interesting information. The K-Pop CD sampler I bought at the airport on my way back should help — as the makgeolli did — to further my understanding of a country whose impact on the global digital media scene will only get bigger and bigger.
My keynote presentation at ICOTEC 2013 is available on SlideShare. Watch this space for links to other presentations from the conference.
Getty Images Reaches Image License Deal with Pinterest
October 28, 2013. Posted by Bill Rosenblatt in Fingerprinting, Images, Rights Licensing.
A year ago, Getty Images, one of the world’s largest stock image agencies, reached a licensing deal with a startup called SparkRebel, which I described as “Pinterest for fashionistas, with Buy buttons.” On that site, people would post images of items of clothing they were interested in. An image recognition engine would try to identify each apparel item in the photo. If the item was identified and its manufacturer had a deal with SparkRebel, the site would show a Buy button, which users could click to purchase the item. It was a clever use of content identification technology to support licensing of content used for commercial purposes.
SparkRebel used Getty Images’ ImageIRC image recognition technology. ImageIRC uses the concept of fingerprinting: it examines an image, calculates a set of numbers that represent the image, and looks those numbers up in a database of fingerprints to see if it finds a match. Matches needn’t be exact; the fingerprinting algorithm can usually compute the correct fingerprint even if the image has been color-shifted, downsampled, cropped (up to a point), etc. In other words, ImageIRC is to still images as Google’s Content ID is to YouTube videos and Audible Magic is to various sites that host music files.
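To make the fingerprinting concept concrete, here is a toy Python sketch of an “average hash,” one of the simplest perceptual-hashing techniques. This is not Getty’s actual ImageIRC algorithm, whose internals are proprietary; it merely illustrates why matches needn’t be exact:

```python
# Toy perceptual fingerprint: reduce an image to a short bit string so
# that near-duplicates (e.g., brightness-shifted copies) still match.

def average_hash(pixels):
    """pixels: 2D list of grayscale values. Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is above or below the mean.
    return "".join("1" if p >= mean else "0" for p in flat)

def hamming(h1, h2):
    """Number of bit positions in which two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def matches(h1, h2, threshold=4):
    """Declare a match if the hashes differ in at most `threshold` bits."""
    return hamming(h1, h2) <= threshold

# A tiny 4x4 "image" and a uniformly brightened copy of it.
original = [[10, 200, 30, 220], [15, 210, 25, 230],
            [225, 20, 235, 10], [215, 25, 205, 15]]
brightened = [[p + 20 for p in row] for row in original]

h_orig = average_hash(original)
h_bright = average_hash(brightened)
print(matches(h_orig, h_bright))  # True: the shift moves the mean too
```

Because a uniform brightness shift moves every pixel and the mean by the same amount, the bit pattern is unchanged, which is the kind of robustness the prose above describes.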
In Getty Images’ deal with SparkRebel, SparkRebel would pay Getty Images a licensing fee whenever a user posted an image to which Getty owned the rights. Those of us who watched this deal at the time wondered if Getty Images was trying to get Pinterest — the leading site where users posted images of commercial products — to agree to a similar deal. Given Getty Images’ firm “no comment” replies to questions about it, the answer was clearly yes. Many of the photos posted on Pinterest (as opposed to, say, Instagram) are commercial images copied and pasted from other websites, so Getty could have made a case that Pinterest was promoting infringement of its copyrights.
It took a while, but Getty Images did conclude a licensing deal with Pinterest last Friday — a few months after SparkRebel ceased operations. Under the deal, whenever ImageIRC finds a match to an image that a user “pins” on Pinterest, Pinterest will pay Getty Images a licensing fee, just as with SparkRebel. The additional feature of the deal is that Getty Images will send Pinterest metadata about the matched image, which Pinterest can display for the user. The metadata includes the time and location of the photo, the identity of the photographer, caption, an image ID, and licensing information.
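The mechanics of the deal as described can be sketched in a few lines of Python. Everything here — the function names, the metadata fields, the in-memory “database” — is a hypothetical illustration, not Pinterest’s or Getty’s actual API:

```python
# Hypothetical sketch of the pin-match-pay flow: on each pin, look the
# image up by fingerprint; on a match, record a license fee owed to
# Getty Images and surface the image's metadata to the user.

GETTY_DB = {  # stand-in for Getty's fingerprint database
    "fp-123": {"photographer": "A. Example", "caption": "Sample caption",
               "image_id": "GI-0001", "license": "editorial"},
}

fees_owed = []  # image IDs Pinterest owes a per-match fee for

def on_pin(fingerprint):
    match = GETTY_DB.get(fingerprint)
    if match is None:
        return None                         # not a Getty image
    fees_owed.append(match["image_id"])     # Pinterest pays per match
    return match                            # metadata shown to the user

meta = on_pin("fp-123")
print(meta["photographer"], len(fees_owed))
```

Note that in this flow nothing is ever blocked; a match only triggers a fee and a metadata display, which mirrors the deal's hands-off approach to user pins.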
Neither Getty nor Pinterest has mentioned anything about blocking or flagging images that users aren’t permitted to pin to the site; Pinterest still allows users to post any photos to the site, regardless of the terms under which Getty normally licenses them. Pinterest continues to follow DMCA 512 policies of responding to takedown notices and terminating the accounts of users who repeatedly violate copyrights.
Pinterest’s announcement of the deal on its blog mentions the license fees, but otherwise does not mention any copyright issues; instead it focuses on “New data to help improve Pinterest.” Putting the fees aside, the deal is a win for Pinterest as well as Getty Images (not to mention Pinterest’s user community).
For Getty Images, this deal establishes an important precedent for image-sharing services that store lots of professional images and use them for commercial purposes. Other services that use images to drive commerce will likely follow Pinterest’s example and make licensing deals with Getty Images. But Getty gets another benefit besides money that could turn out to be just as important: distribution of image metadata.
One of the biggest problems that the stock image industry has with the Internet is that most ways of copying images from one place to another strip metadata away. When photographers and editors prepare images for distribution, they use tools like Adobe Photoshop, which incorporates Adobe’s XMP (eXtensible Metadata Platform) metadata scheme for storing metadata that travels with images. XMP metadata can be stored alongside images on web pages. But it doesn’t survive copying and pasting photos through web browsers.
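The reason copying strips metadata is that XMP travels as an XML packet embedded in the image file’s bytes; any tool that re-encodes just the pixels drops it. Here is a minimal sketch that scans raw bytes for the standard XMP packet delimiters (a real reader would use a proper XMP library; the sample bytes are fabricated for illustration):

```python
# Minimal XMP packet scan: XMP is stored as an XML packet delimited by
# <x:xmpmeta ...> ... </x:xmpmeta> inside the image file's byte stream.

def extract_xmp(data: bytes):
    """Return the embedded XMP packet as text, or None if absent."""
    start = data.find(b"<x:xmpmeta")
    if start == -1:
        return None
    end = data.find(b"</x:xmpmeta>", start)
    if end == -1:
        return None
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", "replace")

# A fake "file" carrying an XMP packet, and a pixels-only copy of the
# kind that copy-and-paste through a browser typically produces.
with_xmp = (b"\xff\xd8...pixels...<x:xmpmeta xmlns:x='adobe:ns:meta/'>"
            b"<dc:creator>Jane Doe</dc:creator></x:xmpmeta>...\xff\xd9")
pixels_only = b"\xff\xd8...pixels...\xff\xd9"

print(extract_xmp(with_xmp) is not None)  # True: metadata present
print(extract_xmp(pixels_only))           # None: metadata stripped
```

The second case is what most web copying produces: the pixels survive, but the packet — and with it the photographer, caption, and licensing terms — is gone.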
It is actually illegal under section 1202 of the Digital Millennium Copyright Act to intentionally remove “copyright management information” from a copyrighted work in order to evade detection of infringement, though there is some ambiguity over issues such as what qualifies as copyright management information. Nevertheless, images that users copy and paste among websites generally have no copyright management information.* Getty Images’ arrangement with Pinterest recovers metadata for images posted to the site that match its database. This certainly won’t solve the image metadata problem in general, but it’s a start.
*Some images may have invisible embedded watermarks that indicate copyright management information. Typically such watermarks will contain IDs that point to entries in image licensors’ databases, which in turn contain things like the photographer’s name, licensing terms, and so on. Whether invisible embedded watermarks qualify as copyright management information under DMCA 1202 is somewhat up in the air. If a high enough court decided that they do, that could make tools for hacking “social DRM” e-book watermarks illegal in the United States.
MovieLabs Releases Best Practices for Video Content Protection
October 23, 2013. Posted by Bill Rosenblatt in DRM, Standards, Video.
As Hollywood prepares for its transition to 4K video (four times the resolution of HD), it appears to be adopting a new approach to content protection, one that promotes more service flexibility and quicker time to market than previous approaches but carries other risks. The recent publication of a best-practices document for content protection from MovieLabs, Hollywood’s R&D consortium, signals this new approach.
In previous generations of video technology, Hollywood studios got together with major technology companies and formed technology licensing entities to set and administer standards for content protection. For example, a subset of the major studios teamed up with IBM, Intel, Microsoft, Panasonic, and Toshiba to form AACS LA, the licensing authority for the AACS content protection scheme for Blu-ray discs and (originally) HD DVDs. AACS LA defines the technology specification, sets the terms and conditions under which it can be licensed, and performs other functions to maintain the technology.
A licensing authority like AACS LA (and there is a veritable alphabet soup of others) provides certainty around technology implementation, including compliance, patent licensing, and interoperability among licensees. It helps insulate the major studios from accusations of collusion by being a separate entity in which at most a subset of them participate.
As we now know, the licensing-authority model has its drawbacks. One is that it can take the licensing authority several years to develop technology specs to a point where vendors can implement them — by which time they risk obsolescence. Another is that it does not offer much flexibility in how the technology can adapt to new device types and content delivery paradigms. For example, AACS was designed with optical discs in mind at a time when Internet video streaming was just a blip on the horizon.
A document published recently by MovieLabs signals a new approach. MovieLabs Specification for Enhanced Content Protection is not really a specification: it is nowhere near detailed enough to serve as the basis for implementations. It is more a compendium of what we now understand to be best practices for protecting digital video. It leaves room for change and interpretation.
The best practices in the document amount to a wish list for Hollywood. They include things like:
- Techniques for limiting the impact of hacks to DRM schemes, such as requiring device as well as content keys, code diversity (a hack that works on one device won’t necessarily work on another), title diversity (a hack that works with one title won’t necessarily work on another), device revocation, and renewal of protection schemes.
- Proactive renewal of software components instead of “locking the barn door after the horse has escaped.”
- Component technologies that are currently considered safe from hacks by themselves, including standard AES encryption with a minimum key length of 128 bits and version 2.2 or better of the HDCP scheme for protecting links such as HDMI cables (earlier versions were hacked).
- Hardware roots of trust on devices, running in secure execution environments, to limit opportunities for key leakage.
- Forensic watermarking, meaning that content should have information embedded in it about the device or user who requested it.
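The forensic watermarking item in the list above can be illustrated with a toy sketch. Real systems use robust, imperceptible transforms that survive re-encoding; this least-significant-bit version is purely illustrative of the idea of binding an identifier for the requesting device or user into the content itself:

```python
# Toy forensic watermark: hide a 16-bit user/device ID in the low bit
# of the first 16 samples of a piece of content. If a copy leaks, the
# ID can be read back out to identify the source of the leak.

def embed(samples, user_id, bits=16):
    """Write user_id, bit by bit, into the low bit of the samples."""
    marked = list(samples)
    for i in range(bits):
        bit = (user_id >> i) & 1
        marked[i] = (marked[i] & ~1) | bit  # distortion of at most 1
    return marked

def recover(samples, bits=16):
    """Read the identifier back out of a (possibly leaked) copy."""
    return sum((samples[i] & 1) << i for i in range(bits))

audio = list(range(100, 132))        # stand-in for content samples
marked = embed(audio, 0xBEEF)        # 0xBEEF = this requester's ID
print(hex(recover(marked)))          # prints 0xbeef
```

Each marked sample differs from the original by at most one quantization level, which is why a well-designed watermark can be inaudible or invisible while still surviving in every distributed copy.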
Those who saw Sony Pictures CTO Spencer Stephens’s talk at the Anti-Piracy and Content Protection Summit in LA back in July will find much of this familiar. Some of these techniques come from the current state of the art in content protection for pay TV services; for more detail on this, see my whitepaper The New Technologies for Pay TV Content Security. Others, such as the forensic watermarking requirement, come from current systems for distributing HD movies in early release windows. And some result from lessons learned from cracks to older technologies such as AACS, HDCP, and CSS (for DVDs).
MovieLabs is unable to act as a licensor of standards for content protection (or anything else, for that matter). The six major studios set it up in 2005 as a movie industry joint R&D consortium modeled on the cable television industry’s CableLabs and other organizations enabled by the National Cooperative Research Act of 1984, such as Bellcore (telecommunications) and SEMATECH (semiconductors). R&D consortia are allowed, under antitrust law, to engage in “pre-competitive” research and development, but not to develop technologies that are proprietary to their members.
Accordingly, the document contains a lot of language intended to disassociate these requirements from any actual implementations, standards, or studio policies, such as “Each studio will determine individually which practices are prerequisites to the distribution of its content in any particular situation” and “This document defined only one approach to security and compatibility, and other approaches may be available.”
Instead, the best-practices approach looks like it is intended to give “signals” from the major studios to content protection technology vendors, such as Microsoft, Irdeto, Intertrust, and Verimatrix, who work with content service providers. These vendors will then presumably develop protection schemes that follow the best practices, with an understanding that studios will then agree to license their content to those services.
The result of this approach should be legal content services for next-generation video that get to market faster. The best practices are independent of things like content delivery modalities (physical media, downloads, streaming) and largely independent of usage rules. Therefore they should enable a wider variety of services than is possible with the traditional licensing authority paradigm.
Yet this approach has two drawbacks compared to the older approach. (And of course the two approaches are not mutually exclusive.) First is that it jeopardizes the interoperability among services that Hollywood craves — and has gone to great lengths to preserve in the UltraViolet standard. Service providers and device makers can incorporate content protection schemes that follow MovieLabs’ best practices, but consumers may not be able to move content among them, and service providers will be able to use content protection schemes to lock users in to their services. In contrast, many in Hollywood are now nostalgic for the DVD because, although its protection scheme was easily hacked, it guaranteed interoperability across all players (at least all within a given geographic region).
The other drawback is that the document is a wish list provided by organizations that won’t pay for the technology. This means that downstream entities such as device makers and service providers will treat it as the maximum amount of protection that they have to implement to get studio approval. Because there is no license agreement that they have to sign to get access to the technology, the downstream entities are likely to negotiate down from there. (Such negotiation already took place behind the scenes during the rollout of Blu-ray, as player makers refused to implement some of the more expensive protection features and some studios agreed to let them slip.)
Downstream entities are particularly likely to push back against some of MovieLabs’s best practices that involve costs and potential impairments of the user experience; examples include device connectivity to networks for purposes of authentication and revocation, proactive renewal of device software, and embedding of situation-specific watermarks.
Surely the studios understand all this. The publication of this document by MovieLabs shows that Hollywood is willing to entertain dialogues with service providers, device makers, and content protection vendors to speed up time-to-market of legitimate video services and ensure that downstream entities can innovate more freely. How much protection will the studios ultimately end up with when 4K video reaches the mainstream? It will be very interesting to watch over the next couple of years.
We have added another panel session to the Copyright and Technology London 2013 conference, which will take place next Thursday (17 October). The most important copyright litigation in the UK at the moment is the case of Ministry of Sound v. Spotify, in which the record label is objecting to Spotify making playlists available that mimic the compilation albums for which the label is best known. The case has broad implications for the limits of copyrightability in the digital age, at least under UK law.
Here is the panel description:
The Limits of Copyright in the Digital Age
The litigation that Ministry of Sound recently started against Spotify will test whether playlists on compilation albums have copyright protection. It will play out in the context of the debate over the extent to which we as a society are prepared to pay for curation. The same issue faces news-disseminating organisations over their headlines and sports reporters over game highlights. Does our society value the editorial/quality control/validation role that they play? This panel will explore the boundaries of what is – and should be – protected by copyright in the digital age and suggest what directions legal decisions in the future may take.
Although the case was only filed a month ago, we have been able to pull together an excellent group of authorities on both the legal and content aspects of the matter, thanks to the tireless efforts of Serena Tierney of Bircham Dyson Bell, the panel chair and herself an authority on copyright in the digital age. Panelists will include:
- Jeff Smith, Head of Music at BBC Radio 2 and 6; former Director of Music Programming at Napster
- Mo McRoberts, Head of the BBC Genome Project at the BBC Archive
- Lindsay Lane, Barrister at 8 New Square Intellectual Property and co-author of the standard copyright treatise Laddie, Prescott and Vitoria on The Modern Law of Copyright and Designs
- Andrew Orlowski, Executive Editor of The Register, who has covered this case.
This means that we will have a packed day of exciting sessions from all around the world of copyright. Places are still available, so register today!
“Netflix for E-Books” Approaches Reality
October 7, 2013. Posted by Bill Rosenblatt in Publishing, Services.
Back in 2002, a startup company called listen.com had just concluded licensing deals with all of the (then five) major labels. The result was Rhapsody: the “celestial jukebox” finally brought to life, the first successful subscription on-demand music service. Rhapsody — whose original focus on classical music must have made it seem like a low-impact experiment to the majors — didn’t get on the map until listen.com closed those deals.*
Eleven years later, something analogous is happening in the world of book publishing. Last week, the popular document sharing site Scribd obtained licenses to all backlist titles from HarperCollins, one of the Big Five trade book publishers (along with Penguin Random House, Simon & Schuster, Macmillan, and Hachette), for an $8.99/month all-you-can-read subscription service. It should only be a matter of time before the other four trickle in. The service had been in “soft launch” mode since January with catalogs from smaller publishers such as RosettaBooks and SourceBooks.
Why Scribd and not Oyster or any of the others? Because Scribd already has a huge user base — 80 million monthly visitors — making it an attractive existing audience instead of a speculative one.
Scribd started in 2006 as sort of a “YouTube for documents.” The vast majority of the documents on the site were free; many were individual authors’ writings, corporate whitepapers, court filings, and so on. Scribd also enabled authors to sell their documents as paid downloads (DRM optional). Eventually some publishers put e-books up for individual sale on the site, including major publishers in the higher ed and scholarly segments.
The publishing industry has been buzzing about the possibility of a “Netflix for books” for a couple of years now. A few startups, such as Oyster, have built out the infrastructure but have only gotten licenses from smaller publishers and independent authors. At least for now, only Scribd has a major publisher deal; that will make all the difference in taking the subscription model for e-books to the mainstream. Like it or not, major content providers are key to the success of a content retail site.
From a technical standpoint, Scribd’s subscription service has more in common with music apps like Rhapsody and Spotify than with video services like Netflix. Like those music services, Scribd is mainly a “streaming” service, a/k/a “cloud reading”: it retrieves content in small chunks instead of downloading entire e-books. It also gives users the option of downloading content to their mobile devices (thereby enabling me to use it on the subways in NYC). Files stored on mobile devices are obfuscated or encrypted, so that users no longer have access to them if they cancel their subscriptions. And, analogously to the interactive streaming music services, Scribd uses a simple proprietary “good enough” encryption scheme instead of a heavyweight name-brand DRM technology such as the Adobe DRM used with the Nook, Kobo, Sony Reader, and Bookish systems.
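The general mechanics of subscription-gated offline content can be sketched roughly as follows. This is a minimal illustration, not Scribd’s actual implementation; the token scheme and all names here are hypothetical, and the XOR obfuscation stands in for a real cipher such as AES:

```python
import hashlib

def chunk_key(subscription_token: str, chunk_id: int) -> bytes:
    """Derive a per-chunk key from the user's current subscription token.

    In a scheme like this, the server refreshes the token only while the
    subscription is active; once it lapses, cached chunks become unreadable.
    """
    return hashlib.sha256(f"{subscription_token}:{chunk_id}".encode()).digest()

def obfuscate_chunk(data: bytes, key: bytes) -> bytes:
    """XOR-based obfuscation (its own inverse) -- a stand-in for a real cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A reader app downloads obfuscated chunks for offline use...
token = "active-subscription-token"
chunk = b"Chapter 1: It was a dark and stormy night."
stored = obfuscate_chunk(chunk, chunk_key(token, 0))

# ...and can decrypt them only while it holds a valid token.
assert obfuscate_chunk(stored, chunk_key(token, 0)) == chunk

# With a lapsed or invalid token, the cached chunk stays garbled.
assert obfuscate_chunk(stored, chunk_key("expired-token", 0)) != chunk
```

The point of the design is that the protection travels with the subscription state rather than with the file, which is why the industry tends to call such schemes “good enough” rather than DRM in the Adobe sense.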
Although Scribd is the first paid subscription service with major-publisher licensing, it’s actually not the first way to read major-publisher trade e-books on a time-limited basis: OverDrive introduced OverDrive Read, an HTML5-based cloud reading app for its public library e-lending service, a year ago.
In fact, OverDrive Read is currently the only (legal) way to read frontlist e-book titles from major publishers through a browser app on a time-limited basis. And that leads to an important difference between Scribd’s service and interactive streaming music services: HarperCollins is only licensing backlist titles, not frontlist (latest bestsellers). From the publishers’ point of view, this is a smart move that other Big Five publishers will most likely follow.
In that respect, Scribd could become more like Netflix than Rhapsody or Spotify, in that Netflix only offers movies in the home entertainment window — Hollywood’s rough equivalent of “backlist.” In contrast, the major music labels licensed virtually their entire catalogs to interactive streaming services from the start, save only for some high-profile artist holdouts such as the Beatles and Led Zeppelin. Instead, the record labels have had to settle for (hard-won) price differentiation between top new releases and back catalog for paid downloads. Just as readers who want the latest frontlist titles in print have to pay for hardback, those who want them as e-books will have to buy them. (Or borrow them from the library.)
*The story of Rhapsody is somewhat sad. For music geeks like myself, the service was a revelation — a truly new way to listen to and explore music. But Rhapsody slogged through years of difficulty communicating the value of subscription services to users amid numerous ownership changes. Subscribership grew gradually and plateaued at about a million paying users; then it suffered unfairly from the tsunami of hype around Spotify’s US launch in 2011. It didn’t help that Rhapsody took too long to release a halfway decent mobile client; otherwise, Spotify’s functionality was virtually identical to Rhapsody’s at that time. Now Rhapsody is struggling yet again as it attempts to expand into markets where Spotify is already established, having trained its 24 million users to expect free on-demand streaming with ads while losing money hand over fist. And in the latest insult to its pioneering history, a 6,000-word feature on Spotify in Mashable — a tome by online journalism standards — mentions Rhapsody not once.
E-Book Watermarking Gains Traction in Europe
October 3, 2013. Posted by Bill Rosenblatt in DRM, Europe, Publishing, United States, Watermarking.
The firm Rüdiger Wischenbart Content and Consulting has just released the latest version of Global eBook, its overview of the worldwide ebook market. This sweeping, highly informative report is available for free during the month of October.
The report contains lots of information about piracy and rights worldwide — attitudes, public policy initiatives, and technologies. A few conclusions in particular stand out. First, while growth of e-book reading appears to be slowing down, it has reached a level of 20% of book sales in the U.S. market (and even higher by unit volume). This puts e-books firmly in the mainstream of media consumption.
Accordingly, e-book piracy has become a mainstream concern. Publishers — and their trade associations, such as the Börsenverein des Deutschen Buchhandels in Germany, which is the most active on this issue — had been less involved in the online infringement issue than their counterparts in the music and film industries, but that’s changing now. Several studies have been done that generally show e-book piracy levels rising rapidly, but there’s wide disagreement on its volume. And virtually no data at all is available about the promotional vs. detrimental effects of unauthorized file-sharing on legitimate sales. Part of the problem is that e-book files are much smaller than music MP3s or (especially) digital video or games; therefore e-book files are more likely to be shared through email (which can’t be tracked) and less likely to be available through torrent sites.
The lack of quantitative understanding of infringement and its impact has led different countries to pursue different paths, in terms of both legal actions and the use of antipiracy technologies. Perhaps the most surprising of the latter trend — at least to those of us on this side of the Atlantic — is the rapid ascendancy of watermarking (a/k/a “social DRM”) in some European countries. For example:
- Netherlands: Arbeiderspers/Bruna, the country’s largest book publisher, switched from traditional DRM to watermarking for its entire catalog at the beginning of this year.
- Austria: 65% of the e-books available in the country have watermarks embedded, compared to only 35% with DRM.
- Hungary: Watermarking is now the preferred method of content protection.
- Sweden: Virtually all trade e-books are DRM-free. The e-book distributor eLib (owned by the Swedish media giant Bonnier) uses watermarking for 80% of its titles.
- Italy: Watermarking has grown from 15% to 42% of all e-books, overtaking the 35% that use DRM.
(Note that these are, with all due respect to them, second-tier European countries. I have anecdotal evidence that e-book watermarking is on the rise in the UK, but not much evidence of it in France or Germany. At the same time, the above countries are often test beds for technologies that, if successful, spread to larger markets — whether by design or market forces.)
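For readers unfamiliar with the technique: watermarking, or “social DRM,” typically means embedding a transaction or purchaser identifier in each copy sold, so that a leaked file can be traced back to its buyer rather than locked against copying. A toy sketch of the idea, assuming nothing about any vendor’s actual scheme (the key and field layout are hypothetical):

```python
import hashlib
import hmac

SELLER_SECRET = b"seller-signing-key"  # hypothetical; held privately by the retailer

def make_watermark(buyer_email: str, order_id: str) -> str:
    """Create a tamper-evident purchaser tag to embed in an e-book copy."""
    payload = f"{buyer_email}|{order_id}"
    tag = hmac.new(SELLER_SECRET, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{payload}|{tag}"

def verify_watermark(mark: str) -> bool:
    """Check that a watermark recovered from a leaked file was genuinely issued."""
    payload, _, tag = mark.rpartition("|")
    expected = hmac.new(SELLER_SECRET, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expected)

mark = make_watermark("reader@example.com", "order-1234")
assert verify_watermark(mark)                                   # genuine copy
assert not verify_watermark(mark.replace("reader@", "pirate@")) # altered identity
```

The HMAC makes the tag tamper-evident, so an infringer cannot simply edit the embedded name to frame someone else; real systems also hide the mark in less obvious places (typography, spacing, metadata) than a plain text string.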
Meanwhile, there’s still a total absence of data on the effects of both DRM and watermarking on users’ e-book behavior — which is why I have been discussing with the Book Industry Study Group the possibility of doing a study on this.
The prevailing attitude among authors is that DRM should still be used. An interesting data point on this came back in January when Lulu, one of the prominent online self-publishing services, decided to stop offering authors the option of DRM protection (using Adobe Content Server, the de facto standard DRM for ebooks outside of the Amazon and Apple ecosystems) for ebooks sold on the Lulu site. Lulu authors would still be able to distribute their titles through Amazon and other services that use DRM.
Lulu announced this in a blog post which elicited large numbers of comments, largely from authors. My pseudo-scientific tally of the authors’ comments showed that they are in favor of DRM — and unhappy with Lulu’s decision to drop it — by more than a two-to-one margin. Many said that they would drop Lulu and move to its competitor Smashwords, which continues to support DRM as an option. Remember that these are independent authors of mostly “long tail” titles in need of exposure, not bestselling authors or major publishers.
One reason for Lulu’s decision to drop DRM was undoubtedly the operational expense. Smashwords’ CEO, Mark Coker, expressed the attitudes of e-book distributors succinctly in a Publishers Weekly article covering Lulu’s move when he said, “What’s relevant is whether the cost of DRM (measured by fees to Adobe, [and for consumers] increased complexity, decreased availability, decreased sharing and word of mouth, decreased customer satisfaction) outweigh the benefits[.]” As we used to say over here, that’s the $64,000 question.
MEGA CEO to speak at Copyright and Technology London 2013
September 26, 2013. Posted by Bill Rosenblatt in Asia-Pacific, Europe, Events, UK.
With less than three weeks to go until our Copyright and Technology London 2013 conference, I’m very pleased to announce some additions and updates to the conference agenda. Most importantly, Vikram Kumar, CEO of Kim Dotcom’s new service MEGA, will appear by videoconference from New Zealand. I’ll be talking with him about copyright issues related to the market for cloud storage services, and he’ll take questions from the audience.
I’m also particularly excited about a few of the sessions at this year’s conference. A very hot topic in the digital copyright field nowadays is websites that attract traffic with offers of free unauthorized copyrighted material and make money from advertising, and what the advertising industry could be doing about this. Our panel on this issue will include Nick Stringer from the Interactive Advertising Bureau (representing the ad industry side) and Geoff Taylor from BPI (representing recorded music). It will also feature Jeremy Penston, an independent consultant in the UK who has been the brains behind piracy studies for PRS, Google, and Spotify; and Nick Swimer from the law firm of Reed Smith to provide the legal context.
And speaking of piracy studies, our panel on cyberlockers will feature David Price of NetNames (formerly Envisional), author of Sizing the Piracy Universe, a highly detailed study released just this week.
We will also get a presentation on HADOPI, the French graduated response agency, from Pauline Blassel, HADOPI’s research director — just a week after the agency is to give a progress report in Paris. And speaking of progress reports, our session on the launch of the Global Repertoire Database (featuring speakers from Google, STIM, and others) should be quite interesting.
With a spectacular setting at Reed Smith’s offices in London, Copyright and Technology London 2013 should be a great event. Please register today! Our sponsors, MarkMonitor and Civolution, are helping to keep registration fees low.
Those of you on the other side of the world from London may also be interested in the International Copyright Technology (ICOTEC) 2013 conference, which will take place at the COEX conference center in Seoul, Korea, on November 4-5. I will be giving one of the keynote addresses.
Comcast Adds Carrots to Sticks
August 9, 2013. Posted by Bill Rosenblatt in Fingerprinting, Services, Video.
Variety magazine reported earlier this week that Comcast is developing a new scheme for detecting illegal file downloads over its Internet service. When it detects a user downloading content illegally, it will send a message to the user with links to legal alternatives, including from sources that aren’t Comcast properties. This scheme would be independent of the Copyright Alert System (CAS) that launched in the United States earlier this year.
What a difference the right economic incentives make. Comcast has significant incentive for offering carrots instead of sticks: it owns NBC Universal, a major movie studio and TV network. This means that Comcast has incentives to protect content revenue, even if it comes from third parties like iTunes, Netflix, or Amazon. In addition, if Comcast protects its own network from infringers, it has a stronger position from which to negotiate content distribution deals for its own Xfinity-branded services from other major studios.
Comcast will most likely use the same monitoring services that content owners — like NBC Universal, whose people are collaborating on the design of this (as yet unnamed) system — use to detect allegedly infringing downloads. It will be able to send messages to users in close to real time, in contrast to CAS, which routes data about detected downloads through a third party before alerts are sent to users.
This scheme is reminiscent of one of the earliest uses of fingerprinting technologies in a commercially licensed service: around 2005, the P2P file-sharing network iMesh cut a deal with the major record labels (or at least some of them). The labels allowed iMesh to keep operating its network with audio fingerprinting (supplied by Audible Magic, still a leader in the field) that would detect and block attempts to upload copyrighted music. In place of blocked uploads, iMesh offered copyrighted music files supplied by the labels, encrypted with DRM, for purchase. Given that several other P2P file-sharing networks (such as LimeWire) continued to operate at the time without such restrictions, iMesh wasn’t much of a success.
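Conceptually, this kind of filtering boils down to matching each upload’s fingerprint against a registry of label-claimed works and, on a hit, steering the user to a licensed copy instead. A minimal sketch under loose assumptions — real systems such as Audible Magic’s use robust acoustic fingerprints that survive re-encoding, not the byte hash used here, and the registry entries are hypothetical:

```python
import hashlib

# Hypothetical registry mapping fingerprints of claimed recordings
# to a licensed alternative that can be offered to the uploader.
CLAIMED = {
    hashlib.sha256(b"hit-single-audio-bytes").hexdigest(): "store/buy/hit-single",
}

def check_upload(file_bytes: bytes):
    """Return (allowed, licensed_alternative) for an attempted upload.

    Uploads whose fingerprint matches a claimed work are blocked and
    paired with a pointer to a legal source; everything else passes.
    """
    fingerprint = hashlib.sha256(file_bytes).hexdigest()
    if fingerprint in CLAIMED:
        return False, CLAIMED[fingerprint]
    return True, None

assert check_upload(b"hit-single-audio-bytes") == (False, "store/buy/hit-single")
assert check_upload(b"home-recording") == (True, None)
```

Comcast’s proposed scheme applies the same match-then-redirect logic to detected downloads rather than uploads, which is what makes the “carrot” (a link to a legal alternative) possible.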
Comcast is hoping to get other ISPs to adopt similar schemes, presumably both as a service to major content owners and in hopes that this anti-piracy feature doesn’t drive users to its competitors. But that gambit is unlikely to succeed. Of the four other major ISPs in the US — AT&T, Cablevision, Time Warner Cable, and Verizon — none are corporate siblings to major content owners. (Time Warner Cable was spun off from Time Warner in 2009, though it retains the name.) In other words, they won’t have the right incentives.
In contrast, France’s HADOPI scheme is supposed to steer people to legal alternatives simply by giving those services a “seal of approval” that they can display themselves. What Comcast has in mind ought to be more effective. In the world of movies and TV shows, it would be more effective still if legal services offered content with anything like the completeness of the record-label catalogs available through legal music services. But that’s another story for another day.