Digimarc Launches Social DRM for E-books
September 17, 2014 | Posted by Bill Rosenblatt in Fingerprinting, Publishing, Technologies.
Digimarc, the leading supplier of watermarking technology, announced this week the release of Digimarc Guardian Watermarking for Publishing, a transactional watermarking (a/k/a “social DRM”) scheme that complements its Guardian piracy monitoring service. Launch customers include the “big five” trade publisher HarperCollins, a division of News Corp., and the e-book supply chain company LibreDigital, a division of the printing giant RR Donnelley that distributes e-books for HarperCollins in the US.
With this development, Digimarc finally realizes the synergies inherent in its acquisition of Attributor almost two years ago. Digimarc’s roots are in digital image watermarking, and it has expanded into watermarking technology for music and other media types. Attributor’s original business was piracy monitoring for publishers via a form of fingerprinting — crawling the web in search of snippets of copyrighted text materials submitted by publisher customers.
One of the shortcomings of Attributor’s piracy monitoring technology was the difficulty of determining whether a piece of text it found online was legitimately licensed or, if not, whether it was likely to be a fair use copy. Attributor could use certain cues from surrounding text or HTML to help make these determinations, but they were educated guesses, not infallible ones.
The practical difference between fingerprinting and watermarking is that watermarking requires the publisher to insert something into its material that can be detected later, while fingerprinting doesn’t. But watermarking has two advantages over fingerprinting. One is that it provides a virtually unambiguous signal that the content was lifted wholesale from its source; thus a copy of content with a watermark is more likely to be infringing. The other is that while fingerprinting can be used to determine the identity of the content, watermarking can be used to embed any data at all into it (up to a size limit) — including data about the identity of the user who purchased the file.
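Attributor’s actual matching pipeline is proprietary, but the general idea behind text fingerprinting can be sketched with hashed word shingles: break a work into overlapping runs of words, hash each run, and then measure how many of a suspect page’s shingles appear in the original. The function names and parameters below are illustrative assumptions, not Attributor’s real system:

```python
import hashlib

def shingle_fingerprints(text, k=8):
    """Return a set of short hashes of k-word shingles (a toy content fingerprint)."""
    words = text.lower().split()
    return {
        hashlib.sha1(" ".join(words[i:i + k]).encode()).hexdigest()[:16]
        for i in range(len(words) - k + 1)
    }

def overlap_ratio(fp_original, fp_suspect):
    """Fraction of the suspect's shingles that also appear in the original."""
    if not fp_suspect:
        return 0.0
    return len(fp_original & fp_suspect) / len(fp_suspect)
```

A verbatim excerpt of a registered work scores near 1.0, while unrelated text scores near 0.0; a real system would add normalization, near-duplicate tolerance, and indexing at web scale.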
The Digimarc Guardian watermark is complementary to the existing Attributor technology; Digimarc has most likely adapted Attributor’s web-crawling system to detect watermarks as well as to use fingerprint-based pattern matching to find copyrighted material online.
Digimarc had to develop a new type of watermark for this application, one that’s similar to those of Booxtream and other providers of what Bill McCoy of the International Digital Publishing Forum has called “social DRM.” Watermarks do not restrict or control use of content; they merely serve as forensic markers, so that watermark detection tools can find content in online places (such as cyberlockers or file-sharing services) where they probably shouldn’t be.
A “watermark” in an e-book can consist of text characters that are either plainly visible or hidden among the actual material. The type of data most often found in a “social DRM” scheme for e-books likewise can take two forms: personal information about the user who purchased the e-book (such as an email address) or an ID number that the distributor can use to look up the user or transaction in a database and is otherwise meaningless. (The idea behind the term “social DRM” is that the presence of the watermark is intended to deter users from “oversharing” files if they know that their identities are embedded in them.) The Digimarc scheme adopted by LibreDigital for HarperCollins uses hidden watermarks containing IDs that don’t reveal personal information by themselves.
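Digimarc has not disclosed its embedding technique, but one widely known approach to hidden e-book watermarks encodes a transaction ID as a run of zero-width Unicode characters that render invisibly in the text. The sketch below is purely illustrative of that general idea, not Digimarc’s actual method:

```python
# Two zero-width characters serve as the 0 and 1 symbols of the payload.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_id(text, txn_id, bits=32):
    """Hide txn_id as an invisible run of zero-width characters after the first word."""
    payload = "".join(ZW1 if (txn_id >> i) & 1 else ZW0 for i in range(bits))
    head, _, tail = text.partition(" ")
    return head + payload + " " + tail

def extract_id(text, bits=32):
    """Recover the hidden ID by reading back the zero-width run."""
    marks = [c for c in text if c in (ZW0, ZW1)][:bits]
    return sum(1 << i for i, c in enumerate(marks) if c == ZW1)
```

A distributor would store the mapping from ID to transaction in a database, so the marked file reveals nothing personal by itself; production schemes also scatter and replicate the payload so it survives editing and format conversion.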
In contrast, the tech publisher O’Reilly Media uses users’ email addresses as visible watermarks on its DRM-free e-books. Visible transactional watermarking for e-books dates back to Microsoft’s old Microsoft Reader (.LIT) scheme in the early 2000s, which gave publishers the option of embedding users’ credit card numbers in e-books — information that users surely would rather not “overshare.”
HarperCollins uses watermarks in conjunction with the various DRM schemes in which its e-books are distributed. The scheme is compatible with EPUB, PDF, and MOBI (Amazon Kindle) e-book formats, meaning that it could possibly work with the DRMs used by all of the leading e-book retailers.
However, it’s unclear which retailers’ e-books will actually include the watermarks. The scheme requires that LibreDigital feed individual e-book files to retailers for each transaction, rather than single files that the retailers then copy and distribute to end users; and the companies involved haven’t specified which retailers work with LibreDigital in this particular way. (I’m not betting on Amazon being one of them.) In any case, HarperCollins intends to use the scheme to gather information about which retailers are “leaky,” i.e., which ones distribute e-books that end up in illegal places online.
Hollywood routinely uses a combination of transactional watermarks and DRM for high-value content, such as high-definition movies in early release windows. And at least some of the major record labels have used a simpler form of this technique in music downloads for some time: when they send music files to retailers, they embed watermarks that indicate the identity of the retailer, not the end user. HarperCollins is unlikely to be the first publisher to use both “social DRM” watermarks and actual DRM, but it is the first one to be mentioned in a press release. The two technologies are complementary and have been used separately as well as together.
This Year’s Copyright Society Honoree to Keynote Copyright and Technology London Conference
September 5, 2014 | Posted by Bill Rosenblatt in Events, UK.
A quick postscript to my previous piece about Peter Menell’s Brace Lecture for the Copyright Society of the USA: this year’s honoree is none other than Shira Perlmutter, who will be one of the keynote speakers at our Copyright and Technology London 2014 conference on October 1.
Perlmutter has been an international copyright luminary for many years. When I first met her in 2003, she was chief IP counsel at Time Warner. Prior to that she had posts at the U.S. Copyright Office, WIPO, the Administration of President Bill Clinton, and Catholic University as a law professor. After Time Warner, she went on to IFPI and the U.S. Patent and Trademark Office, where she is now. (The USPTO advises the Executive Branch of U.S. government on all intellectual property issues, including copyright.) She also has adjunct appointments at Oxford and the University of London.
She will be sharing keynote duties with Maria Martin-Prat, who is the chief copyright official at the European Commission and has an equally distinguished career. The two worked together at WIPO. They will also be on a panel in the afternoon on American copyright reform and its impact on the European scene. I’ll be moderating that panel, which will also include the eminent American copyright litigator Andrew Bridges of Fenwick & West and the UK copyright expert Stephen Edwards of ReedSmith.
Other panels I’m particularly excited about include one on the role of ISPs in copyright enforcement, given that the UK is planning to implement a graduated response regime that’s purely “educational,” with none of the punitive components found in France, South Korea, and elsewhere. One of our panelists there is Thomas Dillon, an attorney — now at WIPO — who is one of the leading experts on graduated response programs worldwide. We’ll also have an interesting discussion about the content protection methods that Hollywood intends to require for its latest generation of ultra-high-resolution (“4K”) content, featuring Ron Wheeler of Fox — known as one of the most stringent of the major studios on content protection. And we’ll have a special keynote by Dominic Young, CEO of the UK Copyright Hub, to give us an update on that.
Please join us in London on October 1 – register today!
Improving Copyright’s Public Image
September 3, 2014 | Posted by Bill Rosenblatt in Law, United States.
The Copyright Society of the USA established the Donald C. Brace Memorial Lecture over 40 years ago as an opportunity for a distinguished member of the copyright legal community to give a talk that is then published in the Society’s journal. The list of annual Brace Lecture givers is a Who’s Who of the American copyright community.
Last year’s lecture, which made it into the latest issue of the Journal of the Copyright Society, is well worth a read. It was given by Peter Menell, a professor at Berkeley Law School who co-directs the Berkeley Center for Law and Technology. It’s called This American Copyright Life: Reflections on Re-Equilibrating Copyright for the Internet Age. Since giving the lecture at Fordham Law School in NYC, Menell has been touring it (it has music and visual components) around various law schools and conferences.
Two things about Menell’s talk/paper caught my attention. First was this sentence, early on in the paper, regarding his love for both copyright and technology during the outbreak of the Copyright Wars in the 2000s: “I was passionately in the middle, perhaps the loneliest place of all.” Second was his focus on the public reputation of copyright and how it needs to be rehabilitated.
Menell’s basic thesis is that no one thought much about copyright when the limitations on copying media products were physical rather than legal; but when the digital age came along, the reason you might not make copies of your music recordings was that it was possibly against the law, not that it took time and effort. He says: “‘My Generation’ did not see copyright as an oppressive regime. We thrived in ignorant bliss well below copyright’s enforcement radar and were inspired by content industry products. The situation could not be more different for adolescents, teenagers, college students, and netizens today. Many perceive copyright to be an overbearing constraint on creativity, freedom, and access to creative works.” (The latter category apparently includes Menell’s own kids.) In other words, copyright law has entered the public consciousness as a limiter of people’s behavior rather than as the force that enables creative works to be made.
Here’s a figure from his paper that captures the decline in copyright’s public approval:
The icons in the figure refer respectively to the 1984 Universal v. Sony “Betamax” Supreme Court decision, the 2001 Ninth Circuit Napster decision, and the defeat of the Stop Online Piracy Act in 2012 from Silicon Valley-amplified public pressure.
Compare this with a slide from a guest lecture I gave at Rutgers Law School last year:
Menell provides a number of personal reflections about his engagement with technology and copyright over the years, including a story about how he and a friend created a slide show for their high school graduation ceremony with spliced-up music selections keyed to slide changes via a sync track on a reel-to-reel tape recorder. This combination of hack and mashup ought to establish Menell’s techie cred. In fact, the “live” version of the paper is itself a mashup of audio and video items.
He takes the reader through the history of the dramatic shift in public attitudes towards copyright after the advent of Napster. My favorite part of this is a fascinating vignette of copyleft icon Fred von Lohmann, then of the Electronic Frontier Foundation (EFF), stating on a conference panel in 2002 that many users of peer-to-peer file-sharing networks were probably infringing copyrights and that the most appropriate legal strategy for the media industry ought to be to sue them, instead of suing the operators of P2P networks as the RIAA had done with Napster. Menell’s reaction, including his own incredulity at the time that “EFF did not use its considerable bully pulpit within the post-Napster generations to encourage ethical behavior as digital content channels emerged,” is just as fascinating.
(Of course, the RIAA did begin doing just that — suing individuals — the very next year. Five years after that, the EFF posted an article that said “suing music fans is no answer to the P2P dilemma.” Fred von Lohmann was still there.)
He also provides examples of the general online public’s current attitudes toward copyright, which have moved well past “Big Media is evil”; he says that “the post-Napster generations possess the incredible human capacity for rationalizing their self-interest” by implying that individual content creators should not get paid because they are “lazy” or “old-fashioned” or even “spoiled” — even while he admits that the sixteen-year-old Peter Menell might have fallen prey to the same sad rationalizations.
In the rest of the paper, Menell lays out a number of suggestions for how copyright law could change in order to make it more palatable to the public. These include what for me is the biggest breath of fresh air in the article: some of the only serious suggestions I’ve ever seen from copyright academics about using technology as an enabler of copyright rather than as its natural enemy. He touts the value of creating searchable databases of rights holder information and giving copyright owners the opportunity to deposit fingerprints of their content when they register their copyrights, in order to help prove and trace ownership. He also mentions encryption and DRM as means of controlling infringement that have succeeded in the video, software, and game industries, but he does not claim that they are or should be part of the legal system.
Menell also makes several suggestions about how to tweak the law itself to make it a better fit to the digital age. One of these is to establish different tiers of liability for individuals and corporations. He says that the threat of massively inflated statutory damages for copyright infringement has failed to act as a deterrent and that courts have paid little attention to the upper limits of damages anyway. Instead he calls for a realignment of enforcement efficiency, penalties, and incentives for individuals: “Copyright law should address garden variety file-sharing not through costly and complex federal court proceedings but instead through streamlined, higher detection probability, low-fine means — more in the nature of parking tickets, with inducements and nudges to steer consumers into better (e.g., subscription) parking plans.”
Another topic in Menell’s paper that brought a smile to my face was his call for “Operationalizing Fair Use” by such means as establishing “bright-line ‘fair use harbors’ to provide assurance in particular settings.” (I’ve occasionally said similar things and gotten nothing but funny looks from lawyers on all sides of the issue.)
One suggestion he makes along these lines is to establish a compulsory license, with relatively fixed royalties, for music used in remixes and mashups. That is, anyone who wants to use more than a tiny sample of music in a remix or mashup should pay a fee established by law (as opposed to by record labels or music publishers) that gets distributed to the appropriate rights holders. The idea is that such a scheme would strike a pragmatic and reasonable balance between rampant uncompensated use of content in remixes and unworkable (not to mention creativity-impeding) attempts to lock everything down. The U.S. Copyright Office would be tasked with figuring out suitable schemes for dividing up revenue from these licenses.
It goes without saying that establishing any scheme of that type will involve years and years of lobbying and haggling to determine the rates. Even then, several factions aren’t likely to be interested in this idea in principle. Although musical artists surely would like to be compensated for the use of their material in remixes, many artists are not (or are no longer) in favor of more compulsory licenses and would rather see proper compensation develop in the free market. And the copyleft crowd tends to view all remixes and mashups as fair use, and therefore not subject to royalties at all.
In general, Menell’s paper calls for changes to copyright law that are designed to improve its public image by making it seem more fair to both consumers and content creators. Changing behavioral norms in the online world is perhaps better done in narrowly targeted ways than broadly, but the paper ought to be a springboard for many more such ideas in the future.
Rights Management (The Other Kind) Workshop – Chicago, Sept. 11
August 20, 2014 | Posted by Bill Rosenblatt in Events, Rights Licensing.
I will be co-teaching a workshop in rights management for DAM (digital asset management) at the Henry Stewart DAM conference in Chicago on Thursday, September 11. I’ll be partnering with Seth Earley, CEO of Earley & Associates. He’s a longtime colleague of mine as well as a highly regarded expert on metadata and content management. (This is another iteration of the workshop that Seth and I did in April in NYC.)
This isn’t about DRM. This is about how media companies — and others who handle copyrighted material in their businesses — need to manage information about rights to content and the processes that revolve around rights, such as permissions, clearances, licensing, royalties, revenue streams, and so on. Some large media companies have built highly complex processes, systems, and organizations to handle this, while others are still using spreadsheets and paper documents.
Rights information management has come of age over the years as a function within media companies. It has taken a while, but it is being recognized as a revenue opportunity, not just an overhead task or a way of avoiding legal liability — and not just for traditional media companies but also for ad agencies, consumer product companies, museums, performing arts venues, and many others.
The subject of our workshop is “Creating a Rights Management Roadmap for your Organization.” We’ll be discussing real-world examples, business cases, and strategic elements of rights information management, and we’ll be getting into various aspects of how rights information management relates to digital asset management. Attendees will be asked to bring information from their own situations, and we’ll be doing some exercises that will help attendees get a sense of what they need to do to implement workable practices for rights management. We’ll touch on business rules, systems, processes, metadata taxonomies, and more.
For those of you who are unfamiliar with it, Henry Stewart (a publishing organization based in the UK) has been producing the highly successful DAM conferences for many years. I’ve seen the event grow in attendance and importance to the DAM community over the years. Come join us! (Use registration code EA100 for a discount.)
UPDATE: This workshop has been cancelled.
President Obama recently signed into law a bill that allows people to “jailbreak” or “root” their mobile phones in order to switch wireless carriers. The Unlocking Consumer Choice and Wireless Competition Act was that rarest of rarities these days: a bipartisan bill that passed both houses of Congress by unanimous consent. Copyleft advocates such as Public Knowledge see this as an important step towards weakening the part of the Digital Millennium Copyright Act that outlaws hacks to DRM systems, known as DMCA 1201.
For those of you who might be scratching your heads wondering what jailbreaking your iPhone or rooting your Android device has to do with DRM hacking, here is some background. Last year, the U.S. Copyright Office declined to renew a temporary exception to DMCA 1201 that would make it legal to unlock mobile phones. A petition to the president to reverse the decision garnered over 100,000 signatures, but as he has no power to do this, I predicted that nothing would happen. I was wrong; Congress did take up the issue, with the resulting legislation breezing through Congress last month.
Around the time of the Copyright Office’s ruling last year, Zoe Lofgren, a Democrat who represents a chunk of Silicon Valley in Congress, introduced a bill called the Unlocking Technology Act that would go considerably further in weakening DMCA 1201. This legislation would sidestep the triennial rulemaking process in which the Copyright Office considers temporary exceptions to the law; it would create permanent exceptions to DMCA 1201 for any hack to a DRM scheme, as long as the primary purpose of the hack is not an infringement of copyright. The ostensible aim of this bill is to allow people to break their devices’ DRMs for such purposes as enabling read-aloud features in e-book readers, as well as to unlock their mobile phones.
DMCA 1201 was purposefully crafted so as to disallow any hacks to DRMs even if the resulting uses of content are noninfringing. There were two rationales for this. Most basically, if you could hack a DRM, then you would be able to get unencrypted content, which you could use for any reason, including emailing it to your million best friends (which would have been a consideration in the 1990s when the law was created, as Torrent trackers and cyberlockers weren’t around yet).
But more specifically, if it’s OK to hack DRMs for noninfringing purposes, then potentially sticky questions about whether a resulting use of content qualifies as fair use must be judged the old-fashioned way: through the legal system, not through technology. And if you are trying to enforce copyrights, once you fall through what I have called the trap door into the legal system, you lose: enforcement through the traditional legal system is massively less effective and efficient than enforcement through technology. The media industry doesn’t want judgments about fair use from hacked DRMs to be left up to consumers; it wants to reserve the benefit of the doubt for itself.
The tech industry, on the other hand, wants to allow fair uses of content obtained from hacked DRMs in order to make its products and services more useful to consumers. And there’s no question that the Unlocking Technology Act has aspects that would be beneficial to consumers. But there is a deeper principle at work here that renders the costs and benefits less clear.
The primary motivation for DMCA 1201 in the first place was to erect a legal backstop for DRM technology that wasn’t very effective — such as the CSS scheme for DVDs, which was the subject of several DMCA 1201 litigations in the previous decades. The media industry wanted to avoid an “arms race” against hackers. The telecommunications industry — which was on the opposite side of the negotiating table when these issues were debated in the early 1990s — was fine with this: telcos understood that with a legal backstop against hacks in place, they would have less responsibility to implement more expensive and complex DRM systems that were actually strong; furthermore, the law placed accountability for hacks squarely on hackers, and not on the service providers (such as telcos) that implemented the DRMs in the first place. In all, if there had to be a law against DRM hacking, DMCA 1201 was not a bad deal for today’s service providers and app developers.
The problem with the Unlocking Technology Act is in the interpretation of phrases in it like “primarily designed or produced for the purpose of facilitating noninfringing uses of [copyrighted] works.” Most DRM hacks that I’m familiar with are “marketed” with language like “Exercise your fair use rights to your content” and disclaimers — nudge, nudge, wink, wink — that the hack should not be used for copyright infringement. Hacks that developers sell for money are subject to the law against products and services that “induce” infringement, thanks to the Supreme Court’s 2005 Grokster decision, so commercial hackers have been on notice for years about avoiding promotional language that encourages infringement. (And of course none of these laws apply outside of the United States.)
So, if a law like the Unlocking Technology Act passes, then copyright owners could face challenges in getting courts to find that DRM hacks were not “primarily designed or produced for the purpose of facilitating noninfringing uses[.]” The question of liability would seem to shift from the supplier of the hack to the user. In other words, this law would render DMCA 1201 essentially toothless — which is what copyleft interests have wanted all along.
From a pragmatic perspective, this law could lead non-dominant retailers of digital content to build DRM hacks into their software for “interoperability” purposes, to help them compete with the market leaders. It’s particularly easy to see why Google should want this, as it has zillions of users but has struggled to get traction for its Google Play content retail operations. Under this law, Google could add an “Import from iTunes” option for video and “Import from Kindle/Nook/iBooks” options for e-books. (And once one retailer did this, all of the others would follow.) As long as those “import” options re-encrypted content in the native DRM, there shouldn’t be much of an issue with “fair use.” (There would be plenty of issues about users violating retailers’ license agreements, but that would be a separate matter.)
This in turn could cause retailers that use DRM to help lock consumers into their services to implement stronger, more complex, and more expensive DRM. They would have to use techniques that help thwart hacks over time, such as reverse engineering prevention, code diversity and renewability, and sophisticated key hiding techniques such as whitebox encryption. Some will argue that making lock-in more of a hassle will cause technology companies to stop trying. This argument is misguided: first, lock-in is fundamental to theories of markets in the networked digital economy and isn’t likely to go away over costs of DRM implementation; second, DRM is far from the only way to achieve lock-in.
The other question is whether Hollywood studios and other copyright owners will demand stronger DRM from service providers that have little motivation to implement it. The problem, as usual, is that copyright owners demand the technology (as a condition of licensing their content) but don’t pay for it. If there’s no effective legal backstop to weak DRM, then negotiations between copyright owners and technology companies may get tougher. However, this may not be an issue particularly where Hollywood is concerned, since studios tend to rely more heavily on terms in license agreements (such as robustness rules) than on DMCA 1201 to enforce the strength of DRM implementations.
Regardless, the passage of the mobile phone unlocking legislation has led to increased interest in the Unlocking Technology Act, such as the recent panel that Public Knowledge and other like-minded organizations put on in Washington. Rep. Lofgren has succeeded in getting several more members of Congress to co-sponsor her bill. The trouble is, all but one of them are Democrats (in a Republican-controlled House of Representatives not exactly known for cooperation with the other side of the aisle), and the Democratic-controlled Senate has not introduced parallel legislation. This means that the fate of the Unlocking Technology Act is likely to be similar to that of past attempts to do much the same thing: the Digital Media Consumers’ Rights Act of 2003 and the Freedom and Innovation Revitalizing United States Entrepreneurship (FAIR USE) Act of 2007. That is, it’s likely to go nowhere.
Registration for Copyright and Technology London 2014 is now live. An earlybird discount is in place through August 8. Space is limited and we came close to filling the rooms last time, so please register today!
I am particularly excited about our two keynote speakers — two of the most important copyright policy officials in the European Union and United States respectively. Maria Martin-Prat will discuss efforts to harmonize aspects of copyright law throughout the 28 EU Member States, while Shira Perlmutter will provide an update on the long process that the US has started to revise its copyright law.
We have made one change to the Law and Policy track in the afternoon: we’ve added a panel called The Cloudy Future of Private Copying. This panel will deal with controversies in the already complex and often confusing world of laws in Europe that allow consumers to make copies of lawfully-obtained content for personal use.
The right of private copying throughout Europe was established in the European Union Copyright Directive of 2001, but the EU Member States’ implementations of private copying vary widely — as do the levies that makers of consumer electronics and blank media have to pay to copyright collecting societies in many countries on the presumption that consumers will make private copies of copyrighted material. Private copying was originally intended to apply to such straightforward scenarios as photocopying text materials or taping vinyl albums onto cassette. But nowadays, cloud storage services, cyberlockers, and “cloud sync” services for music files — some of which allow streaming from the cloud or access to content by users other than those who uploaded it — are raising new questions about what counts as private copying.
The result is a growing amount of controversy among collecting societies, consumer electronics makers, retailers, and others; meanwhile the European Commission is seeking ways to harmonize the laws across Member States amid rapid technological change. Our panel will discuss these issues and consider whether there’s a rational way forward.
We have slots open for a chair and speakers on this panel; I will accept proposals through July 31. Please email your proposal(s) with the following information:
- Speaker’s name and full contact information
- Chair or speaker request?
- Description of speaker’s experience or point of view on the panel subject
- Brief narrative bio of speaker
- Contact info of representative, if different from speaker*
Finally, back over here across the Atlantic, I’ll note an interesting new development in the Aereo case that hasn’t gotten much press since the Supreme Court decision in the case a couple of weeks ago. Aereo had claimed that it had “bet the farm” on a court ruling that its service was legal and that “there is no Plan B,” implying that it didn’t have the money to pay for licenses with television networks. Various commentators have noted that Aereo wasn’t going to have much leverage in any such negotiations anyway.
As a result of the decision, Aereo has changed tactics. In the Supreme Court’s ruling, Justice Breyer stated that Aereo resembled a cable TV provider and therefore could not offer access to television networks’ content without a license. Now, in a filing with the New York district court that first heard the case, Aereo is claiming that it should be entitled to the statutory license for cable TV operators under section 111 of the copyright law, with royalty rates that are spelled out in 17 U.S.C. § 111(d)(1).
In essence, Aereo is attempting to rely on the court for its negotiating leverage, and it has apparently decided that it can become a profitable business even if it has to pay the fees under that statutory license. Has Barry Diller — or another investor — stepped in with the promise of more cash to keep the company afloat? Regardless, in pursuing this tactic, Aereo is simply following the well-worn path of working litigation into a negotiation for a license to intellectual property.
*Please note that personal confirmation from speakers themselves is required before we will put them on the program.
Supreme Court’s Aereo Decision Clouds the Future
July 3, 2014 | Posted by Bill Rosenblatt in Law, United States, Video.
The Supreme Court has rendered various decisions that serve as rules of the road for the treatment of copyrighted works amid technological innovation. Universal v. Sony (1984) established the legality of “time shifting” video for personal viewing as well as the “substantial noninfringing uses” standard for new technologies that involve digital media. MGM v. Grokster (2005) took the concept of “inducing infringement” from patent law and applied it to copyright, so that services that directly and explicitly benefit from users’ infringement could be held liable. UMG v. Veoh (2011) taught that network service operators have no duty to proactively police their services for users’ infringements. These rulings are reasonably clear signposts that technologists can follow when contemplating new products and services.
Unfortunately, Justice Stephen Breyer’s ruling last week in ABC v. Aereo won’t be joining that list. He ruled against Aereo in a 6-3 majority that united the Court’s liberals and moderates. Justice Antonin Scalia’s forceful dissent described the problems that this decision will create for services in the future.
Several weeks ago, at the Copyright Clearance Center’s OnCopyright conference in NYC, Rick Cotton — former General Counsel of NBC Universal — predicted that the Supreme Court would come down against Aereo in a narrow decision that would avoid impact on other technologies. He got it right in terms of what Justice Breyer may have hoped to accomplish, but not in terms of what’s likely to happen in the future.
Instead of establishing principles that future technology designers can rely on, the Court simply took a law that was enacted almost 40 years ago to apply to an old technology, determined that Aereo resembles that old technology, and concluded that therefore the law should apply to it. The old technology in question is Community Access Television (CATV) — transmissions of broadcast television over cable to reach households that couldn’t receive the broadcasts over the air.
Justice Breyer observed that Congress changed the copyright law with the Copyright Act of 1976 in order to stop CATV providers from being able to “free ride” on broadcast TV signals; he found that Aereo was similarly free-riding and therefore ought to be subject to the same law.
Just in terms of functionality, the decision makes little sense: CATV was created to enable broadcast television to reach new audiences, while Aereo (nominally, at least) enabled an existing audience for broadcast TV to watch it on other devices and in other locations. In that respect, Aereo is more like the “cloud sync” services for music like DoubleTwist and MP3Tunes that popped up in the late 2000s, which automatically copied users’ MP3 music files and playlists across all of their devices. More on that analogy later.
More broadly, the Court’s decision is unlikely to be helpful in guiding future technologies; all it offers is a “does it look like cable TV?” test based on fact-specific interpretations of the public performance right in copyright law. Justice Breyer claimed that his opinion should not necessarily have implications for cloud computing and other new technologies, but that doesn’t make it so.
As Justice Scalia remarked in his dissent, “The Court vows that its ruling will not affect cloud-storage providers and cable television systems … , but it cannot deliver on that promise given the imprecision of its result-driven rule.” Justice Scalia felt that Aereo exploited a loophole in the copyright law but that it should be up to Congress instead of the Supreme Court to close it.
In fact, Justice Scalia agreed with the Court’s opinion that Aereo probably violates copyright law. But he stated that the decision the Court was called upon to make — regarding Aereo’s direct infringement liability and whether the TV networks’ request for a preliminary injunction should be upheld — wasn’t an appropriate vehicle for determining Aereo’s copyright liability, and that the Court should have left well enough alone. Instead, Justice Scalia offered that Aereo should be more properly held accountable based on secondary liability — just as the Court did in Grokster — and that a lower court could well reach such a finding later in the case after the preliminary injunction issue had been settled.
Secondary liability means that a service doesn’t infringe copyrights itself but somehow enables end users to do so. Of course there have been many cases where copyright owners have sued tech companies on the basis of secondary liability and forced them to go out of business (e.g., Napster, LimeWire), but there have been many others where lawsuits (or threats of lawsuits) have resulted in mutually beneficial license agreements between copyright owners and the technology companies.
And that brings us back to “cloud sync” services for music. DoubleTwist was built by Jon Lech Johansen, who had become notorious for hacking the encryption system for DVDs in the late 1990s. MP3Tunes was developed by Michael Robertson, who was equally notorious for his original MP3.com service. Cloud sync services enabled users to make copies of their music files without permission and didn’t share revenue (e.g., from advertising or premium subscriptions) with copyright owners. DoubleTwist, MP3Tunes, and a handful of similar services became moderately popular. In addition to their functionality, what MP3Tunes and DoubleTwist had in common was that they were developed by people who had first built blatantly illegal technology and then sought ways to push the legal envelope more gently.
Later on, Amazon, Apple, and Google followed the latter path. They built cloud sync capabilities into their music services (thereby rendering small third-party services like DoubleTwist largely irrelevant). Amazon and Google launched their cloud sync capabilities without taking any licenses from record companies; record companies complained; confidential discussions ensued; and now everyone’s happy, including the consumers who use these handy services. (Apple took a license for its iTunes Match feature at the outset.)
The question for Aereo is whether it’s able to have such discussions with TV networks; the answer is clearly no. The company never entertained the possibility that it would have to (“there is no Plan B”), and its principal investor, video mogul Barry Diller, isn’t going to pump more money into the company to pay for licenses.
Of course, TV networks are cheering the result of the Supreme Court’s decision in Aereo. But it doesn’t help them in the long run if the rules of the road for future technologies are made cloudier instead of clearer. And Aereo would eventually have been doomed anyway if Justice Scalia had had a majority.
Copyright Alert System Releases First Year Results
June 10, 2014. Posted by Bill Rosenblatt in Europe, Fingerprinting, Law, United States, Watermarking.
The Center for Copyright Information (CCI) released a report last month summarizing the first calendar year of activity of the Copyright Alert System (CAS), the United States’ voluntary graduated response scheme for involving ISPs in flagging their subscribers’ alleged copyright infringement. The report contains data from CAS activity as well as results of a study that CCI commissioned on consumer attitudes in the US towards copyright and file sharing.
The CAS has three levels of alerts (educational, acknowledgement, and mitigation), with two alerts at each level, for a total of six; the three categories make it easier to compare the CAS with “three strikes” graduated response regimes in other countries. As I discussed recently, the CAS’s “mitigation” penalties are very minor compared to punitive measures in other systems such as those in France and South Korea.
The CCI’s report indicates that during its first ten months of operation, the CAS sent out 1.3 million alerts. Of these, 72% were “educational,” 20% were “acknowledgement,” and 8% were “mitigation.” The CAS includes a process for users to submit mitigation alerts they receive to an independent review process. Only 265 review requests were sent, and among these, 47 (18%) resulted in the alert being overturned. Most of these 47 were overturned because the review process found that the user’s account was used by someone else without the user’s authorization. In no case did the review process turn up a false positive, i.e., a flagged file that turned out not to be copyrighted material shared without authorization.
It’s particularly instructive to compare these results to those of France’s HADOPI system. This is possible thanks to the detailed research reports that HADOPI routinely issues. Two of these were presented at our Copyright and Technology London conferences and are available on SlideShare (2012 report here; 2013 report here). Here is a comparison of the percentage of alerts issued by each system at each of the three levels:
| Alert Level | HADOPI 2012 | HADOPI 2013 | CAS 2013 |
Of course these comparisons are not precise, but it is hard not to draw an inference from them that threats of harsher punitive measures succeed in deterring file-sharing. In the French system — in which users can face fines of up to €1500 and one year suspensions of their Internet service — only 0.03% of those who received notices kept receiving them up to the third level, and only a tiny handful of users actually received penalties. In the US system — where penalties are much lighter and not widely advertised — almost 8% of users who received alerts went all the way to the “mitigation” levels. (Of that 8%, 3% went to the sixth and final level.)
Furthermore, while the HADOPI results are consistent from 2012 to 2013, they reflect a slight upward shift in the number of users who received second-level notices, while the percentage of third-level notices — those that could involve fines or suspensions — remained constant. This reinforces the conclusion that actual punitive measures serve as deterrents. At the same time, the 2013 results also showed that while the HADOPI system did reduce P2P file sharing by about one-third during roughly the second year of the system’s operation, P2P usage stabilized and even rose slightly in the two years after that. This suggests that HADOPI has succeeded in deterring certain types of P2P file-sharers but that hardcore pirates remain undeterred — a reasonable conclusion.
It will be interesting to see if the CCI takes this type of data from other graduated response systems worldwide — including those with no punitive measures at all, such as the UK’s planned Vcap system — into account and uses it to adjust its level of punitive responses in the Copyright Alert System.
Dispatches from IDPF Digital Book 2014, Pt. 3: DRM
June 5, 2014. Posted by Bill Rosenblatt in DRM, Publishing, Standards.
The final set of interesting developments at last week’s IDPF Digital Book 2014 in NYC has to do with DRM and rights.
Tom Doherty, founder of the science fiction publisher Tor Books, gave a speech about his company’s experimentation with DRM-free e-books and its launch of a line of e-novellas without DRM. The buildup to this speech (among those of us who were aware of the program in advance) was palpable, but the result fell with a thud. You had to listen hard to find the tiny morsel about how going DRM-free has barely affected sales; otherwise the speech was standard-issue dogma about DRM with virtually no new insights or data. And he did not take questions from the audience.
DRM has become something of a taboo subject even at conferences like this, so most of the rest of the discussion about it took the form of hallway buzz. And the buzz is that many are predicting that DRM will be on its way out for retail trade e-books within the next couple of years.
That’s the way things are likely to go if technology market forces play out the way they usually do. Retailers other than Amazon (and possibly Apple) will want to embrace more open standards so that they can offer greater interoperability and thus band together to compete with the dominant player; getting rid of DRM is certainly a step in that direction. Meanwhile, publishers, getting more and more fed up with or afraid of Amazon, will find common cause with other retailers and agree to license more of their material for distribution without DRM. (Several retailers in second-tier European countries as well as some retailers for self-publishing authors, such as Lulu, have already dropped DRM entirely.)
Such sentiments will eventually supersede most publishers’ current “faith-based” insistence on DRM. In other words, publishers and retailers will behave more or less the same way as the major record labels and non-Apple retailers behaved back in 2006-2007.
This course of events seems inevitable… unless publishers get some hard, credible data that tells them that DRM helps prevent piracy and “oversharing” more than it hurts the consumer experience. That’s the only way (other than outright inertia) that I can see DRM staying in place for trade books over the next couple of years.
The situation for educational, professional, and STM (scientific, technical, medical) books is another story (as are library lending and other non-retail models). Higher ed publishers in particular have reasons to stick with DRM: for example, e-textbook piracy has been rising dramatically in recent years and is up to 34% of students as of last year.
Adobe recently re-launched its DRM with a focus on these publishing market segments. I’d describe the re-launch as “awkward,” though publishers I’ve spoken to would characterize it in less polite terms. This has led to openings for other vendors, such as Sony DADC; and the Readium Foundation is still working on the open-source EPUB Lightweight Content Protection scheme.
The hallway buzz at IDPF Digital Book was that DRM for these market segments is here to stay — except that in higher ed, it may become unnecessary in a longer timeframe, when educational materials are delivered dynamically and in a fashion more akin to streaming than to downloads of e-books.
I attended a panel on EDUPUB, a standards initiative aimed at exactly this future for educational publishing. The effort, led by Pearson Education (the largest of the educational publishers), the IMS Global Learning Consortium, and IDPF, is impressive: it’s based on combining existing open standards (such as IDPF’s EPUB 3) instead of inventing new ones. It’s meant to be inclusive and beneficial to all players in the higher ed value chain, including Pearson’s competitors.
However, EDUPUB is in danger of making the same mistake the IDPF made by ignoring DRM and other rights issues. When asked about DRM, Paul Belfanti, Pearson’s lead executive on EDUPUB, answered that EDUPUB is DRM-agnostic and would leave decisions on DRM to providers of content delivery platforms. This decision was problematic for trade publishers when IDPF made it for EPUB several years ago; it’s potentially even more problematic for higher ed, since EDUPUB-based materials could certainly be delivered in e-textbook form.
EDUPUB could also help enable one of the Holy Grails of higher ed publishing, which is to combine materials from multiple publishers into custom textbooks or dynamically delivered digital content. Unlike most trade books, textbooks often contain hundreds or thousands of content components, each of which may have different rights associated with them.
Clearing rights for higher ed content is a manual, labor-intensive job. In tomorrow’s world of dynamic digital educational content, it will be more important than ever to make sure that the content being delivered has the proper clearances, in real time. In reality, this doesn’t necessarily involve DRM; it’s mainly a question of machine-readable rights metadata.
Attempts to standardize this type of rights metadata date back at least to the mid-1990s (when I was involved in such an attempt); none have succeeded. This is a “last mile” issue that EDUPUB will have to address, sooner rather than later, for it to make good on its very promising start. DRM and rights are not popular topics for standards bodies to address, but it has become increasingly clear that they must address these issues to be successful.
This is the second of three installments on interesting developments from last week’s IDPF Digital Book conference in NYC.
Another interesting panel at the conference was on public libraries. I’ve written several times (here’s one example) about the difficulties that public libraries are having in licensing e-books from major trade publishers, given that publishers are not legally obligated to license their e-books for library lending on the same terms as for printed books — or at all. The major trade publishers have established different licensing models with various restrictions, such as limited durations (measured in years or number of loans), lack of access to frontlist (current) titles, and/or prices that range up to several times those charged to consumers.
The panel presented some research findings that included some hard data about how libraries drive book sales — data that libraries badly need in order to bolster their case that publishers should license material to them on reasonable terms.
As we learned from Rebecca Miller from Library Journal, public libraries in the US currently spend only 9% of their acquisition budgets on e-books — which amounts to about $100 million, or less than 3% of overall trade e-book revenue in the United States. Surely that percentage will increase, making e-book acquisition more and more important for the future of public libraries. And as e-books take up a larger portion of libraries’ acquisition budgets, the fact that libraries have little control over licensing terms will become a bigger and bigger problem for them.
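As a sanity check, the two percentages above pin down two totals that the paragraph leaves implicit. The arithmetic below is a back-of-envelope inference from the stated figures, not additional data from the Library Journal presentation:

```python
# Back out the totals implied by the two percentages in the paragraph.
# These are rough inferences for illustration, not reported figures.

EBOOK_SPEND = 100_000_000          # US public libraries' e-book spending, USD
SHARE_OF_ACQ_BUDGETS = 0.09        # e-books' share of acquisition budgets
SHARE_OF_TRADE_EBOOK_REV = 0.03    # upper bound: "less than 3%" of revenue

# $100M is 9% of total acquisition budgets...
total_acquisition_budgets = EBOOK_SPEND / SHARE_OF_ACQ_BUDGETS
# ...and less than 3% of trade e-book revenue, so revenue exceeds this floor.
min_trade_ebook_revenue = EBOOK_SPEND / SHARE_OF_TRADE_EBOOK_REV

print(f"Implied total acquisition budgets: ${total_acquisition_budgets/1e9:.1f} billion")
# → Implied total acquisition budgets: $1.1 billion
print(f"Implied US trade e-book revenue: over ${min_trade_ebook_revenue/1e9:.1f} billion")
# → Implied US trade e-book revenue: over $3.3 billion
```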
The library community has issued a lot of rhetoric, including during that panel, about how important libraries are for book discovery. But publishers are ultimately swayed only by measurable revenue from sales of books that were driven by visits to or loans from libraries. They also want to know to what extent people don’t buy e-books because they can “borrow” them from libraries.
In that light, the library panel had one relevant statistic to offer, courtesy of a study done by my colleague Steve Paxhia for the Book Industry Study Group. The study found that 22% of library patrons ended up buying a book that they borrowed from the library at least once during the past year.
That’s quite a high number. Here’s how it works out to revenue for publishers: given Pew Internet and American Life statistics on library usage (48% of the population visited a library last year), and counting only people aged 18 and up, people bought about 25 million books last year after having borrowed them from libraries. Given that e-books made up 30% of book sales in unit volume last year, and figuring an average retail price of $10, that’s $75 million in e-book sales directly attributable to library lending. The correct figure is probably higher, given that many library patrons discover books in ways other than borrowing them (e.g., browsing through them at the library) — though it may also be lower, given that some people buy books in order to own physical objects (and thus the percentage of e-books purchased as a result of exposure in libraries may be lower than the corresponding percentage of print books).
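The back-of-envelope arithmetic above can be reproduced directly. Note that the US adult population figure (roughly 235 million in 2013) is an assumption supplied here to make the math run; the other inputs are the percentages and prices stated in the paragraph:

```python
# Reproduce the library-driven e-book sales estimate.
# US_ADULTS is an assumed figure (~235M adults in 2013); the other
# inputs come from the Pew and BISG numbers cited in the post.

US_ADULTS = 235_000_000        # assumed US population aged 18+, 2013
LIBRARY_VISIT_RATE = 0.48      # Pew: share who visited a library last year
BORROW_TO_BUY_RATE = 0.22      # BISG: patrons who bought a borrowed book
EBOOK_UNIT_SHARE = 0.30        # e-books' share of unit book sales
AVG_EBOOK_PRICE = 10.00        # assumed average retail price, USD

books_bought_after_borrowing = US_ADULTS * LIBRARY_VISIT_RATE * BORROW_TO_BUY_RATE
ebook_revenue = books_bought_after_borrowing * EBOOK_UNIT_SHARE * AVG_EBOOK_PRICE

print(f"Books bought after borrowing: {books_bought_after_borrowing/1e6:.1f} million")
# → Books bought after borrowing: 24.8 million  (the post rounds to 25M)
print(f"E-book sales attributable to lending: ${ebook_revenue/1e6:.1f} million")
# → E-book sales attributable to lending: $74.4 million  (≈ $75M)
```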
So, in rough numbers, it’s safe to say that for the $100 million that libraries spend on e-books per year, they deliver a similar amount again in sales through discovery. It’s just too bad that the study did not also measure how many people refrained from buying e-books because they could get them from public libraries. This would be an uncomfortable number to measure, but it would help lead to the truth about how public libraries help publishers sell books.
Update: Steve Paxhia found that his 22% number was of library loans leading to purchases during a period of six months, not a year. And the survey respondents may have purchased books after borrowing them more than once during that period. His data also shows that half of respondents indicated that they purchased other works from a given author after having borrowed one from the library. So, using the same rough formula as above, the amount of purchases attributable to library usage is more likely to be north of $150 million. Yet we still have no indication of the number of times someone did not purchase a book — particularly an e-book — because it was available through a public library system.
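The updated estimate can be sketched the same way, crudely doubling the six-month purchase rate to annualize it (an assumption, as is the ~235 million adult-population figure). The follow-on purchases of other works by the same author, which the survey also found, are what push the total north of $150 million:

```python
# Annualize the corrected six-month figure from the update. Doubling the
# six-month rate is a crude assumption; author-follow-on purchases (not
# modeled here) would push the total higher still.

US_ADULTS = 235_000_000            # assumed US population aged 18+, 2013
LIBRARY_VISIT_RATE = 0.48          # Pew: share who visited a library last year
BORROW_TO_BUY_RATE_6MO = 0.22      # BISG: purchase rate over six months
EBOOK_UNIT_SHARE = 0.30            # e-books' share of unit book sales
AVG_EBOOK_PRICE = 10.00            # assumed average retail price, USD

annual_buy_rate = BORROW_TO_BUY_RATE_6MO * 2   # crude annualization
books = US_ADULTS * LIBRARY_VISIT_RATE * annual_buy_rate
revenue = books * EBOOK_UNIT_SHARE * AVG_EBOOK_PRICE

print(f"Annualized e-book sales estimate: ${revenue/1e6:.0f} million")
# → Annualized e-book sales estimate: $149 million
```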