MP3Tunes and the New DMCA Boundaries March 30, 2014

Posted by Bill Rosenblatt in Law, Music, Services, United States.

With last week’s jury verdict of copyright liability against Michael Robertson of MP3Tunes, copyright owners are finally starting to get some clarity around the limits of DMCA 512.  The law gives online service operators a “safe harbor” — a way to insulate themselves from copyright liability related to files that users post on their services by responding to takedown notices.

To qualify for the safe harbor, service providers must have a policy for terminating the accounts of repeat infringers and — more relevantly — cannot show “willful blindness” to users’ infringing actions.  At the same time, the law does not obligate service providers to proactively police their networks for copyright infringement.  The problem is that even when online services respond to takedown notices, the copyrighted works tend to be re-uploaded immediately.

The law was enacted in 1998, and copyright owners have brought a series of lawsuits against online services over the years to try to establish liability beyond the need to respond to one takedown notice at a time.  Some of these lawsuits tried to revisit the intent of Congress in passing this law, to convince courts that Congress did not intend to require them to spend millions of dollars a year playing Whac-a-Mole games to get their content removed.

In cases such as Viacom v. YouTube and Universal Music Group v. Veoh that date back to 2007, the media industry failed to get courts to revisit the idea that service providers should act as their own copyright police.  But over the past year, the industry has made progress along the “willful blindness” (a/k/a “looking the other way”) front.

These cases featured lots of arguments over what constitutes evidence of willful blindness or its close cousin, “red flag knowledge” of users’ infringements.  Courts had a hard time navigating the blurry lines between the “willful blindness” and “no need to self-police” principles in the law, especially when the lines must be redrawn for each online service’s feature set, marketing pitch, and so on.

But within the past couple of years, two appeals courts established some of the contours of willful blindness and related principles to give copyright owners some comfort.  The New York-based (and typically media-industry-friendly) Second Circuit, in the YouTube case, found that certain types of evidence, such as company internal communications, could be evidence of willful blindness.  And even the California-based (and typically tech-friendly) Ninth Circuit found similar evidence last year in a case against the BitTorrent site IsoHunt.

The Second Circuit’s opinion in YouTube served as the guiding precedent in the EMI v. MP3Tunes case — and in a rather curious way.  Back in 2011, the district court judge in MP3Tunes handed down a summary judgment ruling that was favorable to Robertson in some but not all respects.  But after the Second Circuit’s YouTube opinion, EMI asked the lower court judge to revisit the case, suggesting that the new YouTube precedent created issues of fact regarding willful blindness that a jury should decide.  The judge was persuaded, the trial took place, and the jury decided for EMI.  Robertson could now be on the hook for tens of millions of dollars in damages.

(Eleanor Lackman and Simon Pulman of the media-focused law firm Cowan DeBaets have an excellent summary of the legal backdrop of the MP3Tunes trial; they say that it is “very unusual” for a judge to go back on a summary judgment ruling like that.)

The MP3Tunes verdict gives media companies some long-sought leverage against online service operators, which keep claiming that their only responsibility is to respond to each takedown notice, one at a time.  This is one — but only one — step of the many needed to clarify the rights of copyright owners and responsibilities of service providers to protect copyrights.  And as far as we can tell now, it does not obligate service providers to implement any technologies or take any more proactive steps to reduce infringement.  Yet it does now seem clear that if service providers want to look the other way, they at least have to keep quiet about it.

As for Robertson, he continues to think of new startup ideas that seem particularly calculated to goad copyright owners.  The latest one, radiosearchengine.com, is an attempt to turn streaming radio into an interactive, on-demand music service a la Spotify.  It lets users find and listen to Internet streams of radio stations that are currently playing specific songs (as well as artists, genres of music, etc.).

Radiosearchengine.com starts with a database of thousands of Internet radio stations, similar to TuneIn, iHeartRadio, Reciva, and various others.  These streaming radio services (many of which are simulcasts of AM or FM signals) carry program content data, such as the title and artist of the song currently playing.  Radiosearchengine.com retrieves this data from all of the stations in its database every few seconds, adds that information to the database, and makes it searchable by users.  Robertson has even created an API so that other developers can access his database.
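The poll-and-index loop described above can be sketched in a few lines.  This is a toy model, not radiosearchengine.com’s actual code or API: the station names are invented, and the metadata-fetching function is injected as a stand-in for whatever each stream actually exposes (e.g. ICY/Icecast-style now-playing headers).

```python
class StationIndex:
    """Toy in-memory index mapping now-playing songs to stations.

    A real service would poll each stream's metadata endpoint every
    few seconds; here the fetch function is injected so the indexing
    and search logic can be shown on their own.
    """

    def __init__(self, fetch_now_playing):
        self.fetch = fetch_now_playing  # station_id -> (artist, title)
        self.now_playing = {}           # station_id -> (artist, title)

    def poll(self, station_ids):
        # Refresh the now-playing record for every known station.
        for sid in station_ids:
            self.now_playing[sid] = self.fetch(sid)

    def stations_playing(self, query):
        # Case-insensitive match against artist or title.
        q = query.lower()
        return [sid for sid, (artist, title) in self.now_playing.items()
                if q in artist.lower() or q in title.lower()]


# Stubbed feed standing in for live stream metadata:
feed = {"kexp": ("Radiohead", "Karma Police"),
        "wfmu": ("Sun Ra", "Lanquidity"),
        "kcrw": ("Radiohead", "Reckoner")}

index = StationIndex(lambda sid: feed[sid])
index.poll(feed.keys())
print(index.stations_playing("radiohead"))  # ['kexp', 'kcrw']
```

Exposing `stations_playing` over HTTP would give roughly the developer API the post describes; the hard part in practice is the polling fan-out across thousands of stations, not the search.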

Of course, radiosearchengine.com can’t predict that a station will play a certain song in the future (stations aren’t allowed to report it in advance), so users are likely to click on station links and hear their chosen songs starting in the middle.  But with the most popular songs — which are helpfully listed on the site’s left navbar — you can find many stations that are playing them, so you can presumably keep clicking until you find the song near its beginning.

This is something that TuneIn and others could have offered years ago if it didn’t seem so much like lawsuit bait.  On the other hand, Robertson isn’t the first one to think of this: there’s been an app for that for at least three years.

Disney and Apple’s UV FUD March 26, 2014

Posted by Bill Rosenblatt in Business models, Technologies, United States, Video.

Last month Disney launched Disney Movies Anywhere, a service that lets users stream and download movies from Disney and associated studios on their Apple iOS devices.  You can purchase movies on the site or from the App Store app and stream them to any iPhone, iPad, or iPod Touch.  You can also get digital copies and streaming access with purchases of selected DVDs and Blu-ray discs.  And you can connect your iTunes account to your Disney Movies Anywhere account so that you can gain similar streaming and download access to your existing Disney iTunes purchases.

A couple of things about Disney Movies Anywhere are worth discussing.  First, this is yet more evidence of the strong bond between Disney and Apple, a relationship formed when Disney acquired Pixar from Steve Jobs, who became a Disney board member and the company’s largest shareholder.

More particularly, this service is a way for Apple to experiment with video streaming services without attaching its own brand name.  Disney Movies Anywhere works with only iOS devices, and there’s little indication that it will add support for Android or other platforms.  For whatever reason, Apple has shied away from streaming media services until quite recently (with iTunes Radio and the latest iteration of Apple TV).

More importantly, Disney Movies Anywhere is the first implementation of Disney’s KeyChest — a rights locker architecture that is similar to UltraViolet, the technology backed by the other five major Hollywood studios.  The idea common to both KeyChest and UltraViolet is that when you purchase a movie, you’re actually purchasing the right to download or stream it from a variety of sources; the rights locker maintains a record of your purchase.
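The rights-locker idea common to KeyChest and UltraViolet can be sketched minimally: the locker records an entitlement to a title, not a copy of it, so any participating retailer or device can check the entitlement and serve its own stream or download.  The class and names below are hypothetical illustrations, not either system’s actual architecture or API.

```python
class RightsLocker:
    """Minimal sketch of a rights locker: purchases are recorded once,
    centrally, and any participating service can verify entitlement."""

    def __init__(self):
        self._entitlements = {}  # user_id -> set of title_ids

    def record_purchase(self, user_id, title_id):
        # Called by whichever retailer made the sale.
        self._entitlements.setdefault(user_id, set()).add(title_id)

    def is_entitled(self, user_id, title_id):
        # Called by any retailer or device before serving the content.
        return title_id in self._entitlements.get(user_id, set())


locker = RightsLocker()
locker.record_purchase("alice", "frozen-2013")

# Any retailer in the ecosystem can now honor the purchase:
print(locker.is_entitled("alice", "frozen-2013"))  # True
print(locker.is_entitled("alice", "brave-2012"))   # False
```

The interoperability question the post raises is exactly whether there is one such locker that all retailers consult (the UltraViolet model) or a separate one per studio ecosystem (KeyChest).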

One of the main motivations behind UltraViolet was to prevent content distributors or consumer electronics makers from dominating the economics of the digital video supply chain in the way that Apple dominated music downloads (and Amazon may dominate e-books), and thus from being able to dictate terms to copyright owners.  By making it possible for users to buy digital movies from one retailer and then download them in other formats from other retailers, the five studios hoped to create a level playing field among retailers as well as interoperability for users.  UltraViolet has several retail partners, including Target, Walmart (VUDU), and Best Buy (CinemaNow).

The problem with these technology schemes is that it is very hard to make them into universal standards.  Just about every software technology we use settles down to twos or threes.  In operating systems, it’s all twos: Windows and Mac OS for desktops and laptops; Android and iOS for mobile devices; Unix/Linux and Windows for servers.  Other markets are similar: in relational databases it’s Oracle/MySQL (Oracle Corp.), DB2 (IBM), and SQL Server (Microsoft); in music paid-download formats it’s MP4-AAC (Apple) and MP3 (Amazon); in e-books (in the US, at least) it’s Amazon, Barnes & Noble, and Apple iBooks.  Antitrust law prevents a single technology from dominating too much; market complexity prevents more than a handful from becoming roughly equal competitors.

It would be a shame if this also became true for rights lockers for movies and TV shows.  It does not help the studios if consumers get one flavor of “interoperability” for movies from all but one major studio and another flavor for movies from Disney.  Disney surely remembers the less-than-stellar success of its last solo venture into digital movie distribution: MovieBeam, which launched around 2004 and lasted less than four years.

And that brings us back around to Apple.  The only plausible explanation for this bifurcation is that Apple is really in charge here.  UltraViolet is not just an “every studio but Disney” consortium; it is also an “every technology company but Apple” initiative.  The list of technology companies participating in UltraViolet is huge, though Microsoft occupies a particularly important role as the source of the UltraViolet file format and the first commercial DRM to be approved for use with the system.  In other words, the KeyChest/UltraViolet dichotomy is shaping up to look very much like Apple vs. the Microsoft-led Windows ecosystem, or Apple vs. the Google-led Android ecosystem.

Still, the market for digital video is still in relatively early days, and things could change quite a bit — especially if consumers are confused by the choices on offer.  (Coincidentally, there’s a good overview of this confusion and its causes in today’s New York Times.)  UltraViolet is enjoying only modest success so far — compared, say, to Netflix or iTunes — and the introduction of Disney Movies Anywhere is unlikely to help make rights lockers any clearer to consumers.

In that respect, the UltraViolet/KeyChest dichotomy also has a precedent in the digital music market.  Back in 2001-2002, the (then) five major record labels lined up behind two different music distribution platforms: MusicNet and pressplay.  MusicNet was backed by Warner Music Group, EMI, BMG, and RealNetworks, while pressplay was backed by Sony Music and Universal Music Group.  MusicNet was a wholesale distribution platform that made deals with multiple retailers; pressplay was its own retailer.  In other words, MusicNet was UltraViolet, while pressplay was Disney Movies Anywhere.  Yet neither one was successful; both suffered from over-complexity (among other things).  Apple launched the much easier to use iTunes Music Store in 2003, and few people remember MusicNet or pressplay anymore.*

In other words, there are still opportunities for new digital video models to emerge and disrupt the current market.  And consumer confusion is a great way to hasten the disruption.

*The two music platforms did survive, in a way: MusicNet is now MediaNet, a wholesaler of digital music and other content with many retail partners; pressplay was sold to Roxio, rebranded as Napster (the legal version), and resold to Rhapsody, where it still exists under the Napster brand name outside of the US.


Viacom vs. YouTube: Not With a Bang, But a Whimper March 21, 2014

Posted by Bill Rosenblatt in Law, United States.

Earlier this week Viacom settled its long-running lawsuit against Google over video clips containing Viacom’s copyrighted material that users posted on YouTube.  The lawsuit was filed seven years ago; Viacom sought well over a billion dollars in damages.  The last major development in the case was in 2010, when a district court judge ruled in Google’s favor.  The case had bounced around between the district and Second Circuit appeals courts.  The parties agreed to settle the case just days before oral argument was to take place before the Second Circuit.

It’s surprising how little press coverage the settlement has attracted — even in the legal press — considering the strategic implications for Viacom and copyright owners in general.

The main reasons for the lack of press attention are that details of the settlement are being kept secret, and that by now the facts at issue in the case are firmly in the past.  A few months after Viacom filed the lawsuit in March 2007, Google launched its Content ID program, which enables copyright owners to block user uploads of their content to YouTube — or monetize them through shared revenue from advertising.  The lawsuit was concerned with video clips that users uploaded before Content ID was put into place.

Viacom’s determination to carry on with the litigation was clearly aimed primarily at getting the underlying law changed.  Viacom has been a vocal opponent of the Digital Millennium Copyright Act (DMCA) in its current form.  The law allows service providers like YouTube to respond to copyright owners’ requests to remove content (takedown notices) in order to avoid liability.  It doesn’t require service providers to proactively police their services for copyright violations.  As a result, a copyright owner has to issue a new takedown notice every time a clip of the same content appears on the network — which often happens immediately after each clip is taken down.  Companies like Viacom thus spend millions of dollars issuing takedown notices in a routine that has been likened to a game of Whac-a-Mole.

From that perspective, Google’s Content ID system goes beyond its obligations under the DMCA (Content ID has become a big revenue source for Google), so Google’s compliance with the current DMCA isn’t the issue.  Countless other service providers don’t have technology like Content ID in place; moreover, since 2007 the courts have consistently — in other litigations such as Universal Music Group v. Veoh — interpreted the DMCA not to require that service providers act as their own copyright police.  Viacom must still be interested in getting the law changed.

In this light, what’s most interesting is not that the settlement came just days before oral argument before the Second Circuit, but that it came just days after the House Judiciary Committee held hearings in Washington on the DMCA.  These were done in the context of Congress’s decision to start on the long road to revamping the entire US copyright law.

My theory is that, while Viacom may have had various reasons to settle, the company has decided that it has a better shot at changing the law through Congress than through the courts.  The journey to a new Copyright Act is likely to take years longer than the appeals process.  But if Viacom were to get the lower court’s decision overturned in the Second Circuit, the result would be a precedent that wouldn’t apply nationwide; in particular, it wouldn’t necessarily apply in the tech-friendly Ninth Circuit.  A fix to the actual law in Congress that’s favorable to copyright owners — if Congress delivers one — could have broader applicability, both geographically and to a wider variety of digital services.  Viacom has waited seven years; it can wait another ten or so.

In Copyright Law, 200 Is a Magic Number March 2, 2014

Posted by Bill Rosenblatt in Images, Law, United States.

An occasional recurring theme in this blog is how copyright law is a poor fit for the digital age because, while technology enables distribution and consumption of content to happen automatically, instantaneously, and at virtually no cost, decisions about legality under copyright law can’t be similarly automated.  The best/worst example of this is fair use.  Only a court can decide whether a copy is noninfringing under fair use.  Even leaving aside notions of legal due process, it’s not possible to create a “fair use deciding machine.”

In general, copyright law contains hardly any concrete, machine-decidable criteria.  Yet one of the precious few came to light over the past few months regarding a type of creative work that is often overlooked in discussions of copyright law: visual artworks.  Unlike most copyrighted works, works of visual art are routinely sold and then resold potentially many times, usually at higher prices each time.

A bill was introduced in Congress last week that would enable visual artists to collect royalties on their works every time they are resold.  One of the sponsors of the bill is Rep. Jerrold Nadler, who represents a chunk of New York City, one of the world’s largest concentrations of visual artists.

Of course, the types of copyrighted works that we usually talk about here — books, movies, TV shows, and music — aren’t subject to resale royalties; they are covered under first sale (Section 109 of the Copyright Act), which says that the buyer of any of these works is free to do whatever she likes with them, with no involvement from the original seller.  But visual artworks are different.  According to Section 101 of the copyright law, they are either unique objects (e.g. paintings) or reproduced in limited editions (e.g. photographs).  The magic number of copies that distinguishes a visual artwork from anything else?  200 or fewer.  The copies must be signed and numbered by the creator.
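The 200-copy rule is a rare example of the kind of concrete, machine-decidable criterion discussed above — unlike fair use, it really could be checked by a program.  A minimal sketch, paraphrasing the Section 101 definition as described here (the function name and signature are invented for illustration):

```python
def qualifies_as_visual_art(num_copies, signed, numbered):
    """The 200-copy rule from Section 101, as described above: a
    limited edition counts as a 'work of visual art' only if it runs
    to 200 copies or fewer, each signed and numbered by the creator."""
    return num_copies <= 200 and signed and numbered


print(qualifies_as_visual_art(200, True, True))   # True
print(qualifies_as_visual_art(201, True, True))   # False: one copy too many
print(qualifies_as_visual_art(50, False, True))   # False: unsigned
```

Contrast this with a hypothetical `is_fair_use()` function, which is impossible to write: the four fair-use factors require case-by-case judicial balancing.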

Under the proposed ART (Artist Royalties, Too) Act, five percent of the proceeds from a sale of a visual artwork would go to the artist, whether it’s the second, third, or hundredth sale of the work.  The law would apply to artworks that sell for more than $5,000 at auction houses that do at least $1 million in business per year.  It would require private collecting societies to collect and distribute the royalties on a regular basis, as SoundExchange does for digital music broadcasting.  This proposed law would follow in the footsteps of similar laws in many countries, including the UK, EU, Australia, Brazil, India, Mexico, and several others.  It would also emulate “residual” and “rental” royalties for actors, playwrights, music composers, and others, which result from contracts with studios, theaters, orchestras, and so on.
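The bill’s royalty rule, as summarized above, also reduces to a few lines of arithmetic: 5% of proceeds, but only for works fetching more than $5,000 at auction houses doing at least $1 million in annual business.  The function below is a sketch of those stated thresholds, not of the bill’s actual statutory text:

```python
def art_act_royalty(sale_price, house_annual_sales):
    """Royalty due under the proposed ART Act as described above:
    5% of proceeds when a work sells for more than $5,000 at an
    auction house doing at least $1 million in business per year."""
    if sale_price > 5_000 and house_annual_sales >= 1_000_000:
        return round(0.05 * sale_price, 2)
    return 0.0


print(art_act_royalty(80_000, 250_000_000))  # 4000.0
print(art_act_royalty(4_500, 250_000_000))   # 0.0: below the $5,000 floor
print(art_act_royalty(80_000, 500_000))      # 0.0: house too small
```

The auction-house thresholds are also what critics seize on: sales at private dealers below the radar of either test would owe nothing, which is the displacement effect mentioned later in the post.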

The U.S. Copyright Office analyzed the art resale issue recently and published a report last December that summarized its findings.  The Office concluded that resale royalties would probably not harm the overall art market in the United States, and that a law like the ART Act isn’t a bad idea but is only one of several ways to institute resale royalties.

The Office had previously looked into resale royalties over 20 years ago.  Its newer research found that, based on evidence from other countries that have resale royalties, imposing them in the US would neither result in the flight of art dealers and auction houses from the country nor impose unduly onerous burdens for administration and enforcement of royalty payments.

Yet the Copyright Office’s report doesn’t overflow with unqualified enthusiasm for statutory royalties on sales.  One of the legislative alternatives it suggests is the idea of a “performance royalty” from public display of artworks.  If a collector wants to buy a work at auction and display it privately in her home, that’s different from a museum that charges people admission to see it.  Although this would mirror performance royalties for music, it would seem to favor wealthy individuals at the expense of public exposure to art.

The ART Act — which is actually a revision of legislation that Rep. Nadler introduced in 2011 — has drawn much attention within the art community, though little outside it.  Artists are generally in favor of it, of course.  But various others have criticized aspects of the bill, such as that it only applies to auction houses (thereby pushing more sales to private dealers, where transactions take place in secret instead of out in the open), that it only benefits the tiny percentage of already-successful artists instead of struggling newcomers, and that it unfairly privileges visual artists over other creators of both copyrighted works and physical objects (think Leica cameras or antique Cartier watches).

As an outsider to the art world, I have no opinion.  Instead it’s that 200 number that fascinates me.  That number may partially explain why the Alfred Eisenstaedt photograph of the conductor Leonard Bernstein that hangs in my wife’s office, signed and numbered 14 out of 250, is considerably less valuable than another Eisenstaedt available on eBay that’s signed and numbered 41 out of 50.

This raises the question of what happens when more and more visual artists use media that can be reproduced digitally without loss of quality.  Would an artist be better off limiting her output to 200 copies and getting the 5% on resale, or would she be better off making as many copies as possible and selling them for whatever the market will bear?  The answer is unknowable without years of real-world testing.  Given the choice, some artists may opt for the former route, which seems to go against the primary objective of copyright law: to maximize the availability of creative works to the public through incentives to creators.

Copyright minimalists question the relevance of copyright in an era when digital technologies make it possible to reproduce creative works at very little cost and perfect fidelity; they call on the media industry to stop trying to “profit from scarcity” and instead “profit from abundance.”  Here’s a situation where copyrighted works are the scarcest of all.

Nowadays no one would confuse one of Vermeer’s 35 (or possibly 36) masterpieces with a poster or hand-made reproduction of one.  People will be willing to travel to the Rijksmuseum, National Gallery, Met, etc., to see them for the foreseeable future.  Yet there will be some time in the non-near future when the scarcity of most copyrighted works is artificially imposed.  At that point, the sale (not resale) value of creative works will go toward zero, even if they are reproduced, signed, and sequentially numbered by super-micro-resolution 3D printers that sell at Staples for the equivalent of $200 today.

Perhaps the best indication of the future comes from Christo and Jeanne-Claude, the well-known husband-and-wife outdoor artists.  Christo and Jeanne-Claude designed the 2005 installation called The Gates in New York’s Central Park (which happens to be in Jerry Nadler’s congressional district).  Reproducing — let alone selling — this massive work is inconceivable.  Instead, Christo and Jeanne-Claude hand-signed thousands of copies of books, lithographs, postcards, and other easily-reproduced artifacts containing photos and drawings of the artwork, and sold them to help pay the eight-figure cost of the project.  Add to that an individualized auto-pen for automating the signatures, and you may have the future of visual art in a world without scarcity.

So, the question that Congress ought to consider when evaluating art resale legislation is how to create a legal environment in which the Christos and Jeanne-Claudes of tomorrow will even bother anymore.  That’s not a rhetorical question, either.

National Academies Calls for Hard Data on Digital Copyright February 4, 2014

Posted by Bill Rosenblatt in Economics, Law, United States.

About three years ago, the National Academies — the scientific advisers to the U.S. federal government — held hearings on copyright policy in the digital age.  The intent of the project, of which the hearings were a part, was to gather input from a wide range of interested parties on the kinds of research that should be done to further our understanding of the effects of digital technologies on copyright.

The committee overseeing the project consisted of twelve people, including an economist specializing in digital content issues (Joel Waldfogel), a movie industry executive (Mitch Singer of Sony Pictures), a music technology expert (Paul Vidich, formerly of Warner Music Group), a federal judge with deep copyright experience (Marilyn Hall Patel of Napster fame), a library director (Michael Keller of Stanford University), a former director of Creative Commons (Molly van Houweling), and a few law professors.  The committee was chaired by Bill Raduchel, a Harvard economics professor turned technology executive perhaps best known as Scott McNealy’s mentor at Sun Microsystems.

Recently the National Academies Press published the results of the project in the form of Copyright in the Digital Era: Building Evidence for Policy, which is available as a free e-book or $35 paperback.  This 85-page booklet is, without exaggeration, the most important document in the field of copyright policy to be published in quite some time.  It is the first substantive attempt to take the debate on copyright policy out of the realm of “copyright wars,” where polemics and emotions rule, into the realm of hard data.

The document starts by decrying the lack of data on which deliberations on copyright policy are based, especially compared to the mountains of data used to support changes to the patent system.  It then goes on to describe various types of data that either exist or should be collected in order to fuel research that can finally tell us how copyright is faring in the digital era, with respect to its purpose to maximize public availability of creative works through incentives to creators.

The questions that Copyright in the Digital Era poses are fundamentally important.  They include issues of monetary and non-monetary motivations to content creators; the impact of sharply reduced distribution and transaction costs for digital compared to physical content; the costs and benefits of various copyright enforcement schemes; and the effects of US-specific legal constructs such as fair use, first sale, and the DMCA safe harbors.  My own testimony at the hearings emphasized the need for research into the costs and benefits of rights technologies such as DRM, and I was pleased to see this reflected in the document.

Copyright in the Digital Era concludes with lists of types of data that the project committee members believe should be collected in order to facilitate research, as well as descriptions of the types of research that should be done and the challenges of collecting the needed data.

This document should be required reading for everyone involved in copyright policy.  More than that, it should be seen as a gauntlet that has been thrown down to everyone involved in the so-called copyright wars.  The National Academies has set the research agenda.  Now that Congress has begun the long, arduous process of revamping America’s copyright law, we’ll see who is willing and able to fund the research and publish the results so that Congress gets the data it deserves.


E-Book Watermarking Gains Traction in Europe October 3, 2013

Posted by Bill Rosenblatt in DRM, Europe, Publishing, United States, Watermarking.

The firm Rüdiger Wischenbart Content and Consulting has just released the latest version of Global eBook, its overview of the worldwide ebook market.  This sweeping, highly informative report is available for free during the month of October.

The report contains lots of information about piracy and rights worldwide — attitudes, public policy initiatives, and technologies.  A few conclusions in particular stand out.  First, while growth of e-book reading appears to be slowing down, it has reached a level of 20% of book sales in the U.S. market (and even higher by unit volume).  This puts e-books firmly in the mainstream of media consumption.

Accordingly, e-book piracy has become a mainstream concern.  Publishers — and their trade associations, such as the Börsenverein des Deutschen Buchhandels in Germany, which is the most active on this issue — had been less involved in the online infringement issue than their counterparts in the music and film industries, but that’s changing now.  Several studies have been done that generally show e-book piracy levels rising rapidly, but there’s wide disagreement on its volume.  And virtually no data at all is available about the promotional vs. detrimental effects of unauthorized file-sharing on legitimate sales.  Part of the problem is that e-book files are much smaller than music MP3s or (especially) digital video or games; therefore e-book files are more likely to be shared through email (which can’t be tracked) and less likely to be available through torrent sites.

The lack of quantitative understanding of infringement and its impact has led different countries to pursue different paths, in terms of both legal actions and the use of antipiracy technologies.  Perhaps the most surprising of the latter trend — at least to those of us on this side of the Atlantic — is the rapid ascendancy of watermarking (a/k/a “social DRM”) in some European countries.  For example:

  • Netherlands: Arbeiderspers/Bruna, the country’s largest book publisher, switched from traditional DRM to watermarking for its entire catalog at the beginning of this year.
  • Austria: 65% of the e-books available in the country have watermarks embedded, compared to only 35% with DRM.
  • Hungary: Watermarking is now the preferred method of content protection.
  • Sweden: Virtually all trade e-books are DRM-free.  The e-book distributor eLib (owned by the Swedish media giant Bonnier) uses watermarking for 80% of its titles.
  • Italy: watermarking has grown from 15% to 42% of all e-books, overtaking the 35% that use DRM.

(Note that these are, with all due respect to them, second-tier European countries.  I have anecdotal evidence that e-book watermarking is on the rise in the UK, but not much evidence of it in France or Germany.  At the same time, the above countries are often test beds for technologies that, if successful, spread to larger markets — whether by design or market forces.)

Meanwhile, there’s still a total absence of data on the effects of both DRM and watermarking on users’ e-book behavior — which is why I have been discussing with the Book Industry Study Group the possibility of doing a study on this.

The prevailing attitude among authors is that DRM should still be used.  An interesting data point on this came back in January when Lulu, one of the prominent online self-publishing services, decided to stop offering authors the option of DRM protection (using Adobe Content Server, the de facto standard DRM for ebooks outside of the Amazon and Apple ecosystems) for ebooks sold on the Lulu site.  Lulu authors would still be able to distribute their titles through Amazon and other services that use DRM.

Lulu announced this in a blog post which elicited large numbers of comments, largely from authors.  My pseudo-scientific tally of the authors’ comments showed that they are in favor of DRM — and unhappy with Lulu’s decision to drop it — by more than a two-to-one margin.  Many said that they would drop Lulu and move to its competitor Smashwords, which continues to support DRM as an option.  Remember that these are independent authors of mostly “long tail” titles in need of exposure, not bestselling authors or major publishers.

One reason for Lulu’s decision to drop DRM was undoubtedly the operational expense.  Smashwords’ CEO, Mark Coker, expressed the attitudes of ebook distributors succinctly in a Publishers Weekly article covering Lulu’s move when he said, “What’s relevant is whether the cost of DRM (measured by fees to Adobe, [and for consumers] increased complexity, decreased availability, decreased sharing and word of mouth, decreased customer satisfaction) outweigh the benefits[.]”  As we used to say over here, that’s the $64,000 question.

Public Knowledge Weighs In on Digital First Sale July 14, 2013

Posted by Bill Rosenblatt in Law, United States.

The Internet advocacy group Public Knowledge (PK) recently published Copies, Rights, & Copyrights: Really Owning Your Digital Stuff, a think-piece on first sale for the digital age, authored by Sherwin Siy, PK’s VP of Legal Affairs.  PK’s position on digital first sale, characteristically, is that users should have the same types of rights to resell, lend, and give away their digital content as they do physical items such as books and CDs.  The law in the United States is at best ambiguous on this point, but as digital content consumption goes mainstream, the need for clarity increases.

This whitepaper, which mixes the scholarly with the pragmatic, should be fun for copyright law geeks to read and pick over.  It’s PK’s contribution to what will undoubtedly be years of dialog and argument over what to do about first sale for digital content.  It mines the history of first sale-related court decisions for precedents that justify the extrapolation of first sale rights to the digital age.

The doctrine of first sale, section 109 of the Copyright Act, says that once you buy (or otherwise lawfully obtain) a copy of a copyrighted work, it’s yours to do with as you please.  As Siy explains, the law evolved through litigation to cover things like the right to repair and publicly display one’s legitimately obtained works without permission from the publisher.  In examining how first sale ought to apply in the digital age, Siy considers not just obvious use cases such as reselling or lending downloaded music files or e-books, but also less straightforward scenarios like the right to “lend” access to a database through an Internet login, or to lend a DVD rented from Netflix (which is contrary to Netflix’s terms and conditions).

One mystifying aspect of this document is that despite the June 27 publication date, it is completely silent about the recent summary judgment against the MP3 resale startup ReDigi, which happened in a New York federal court three months previous.  This is tantamount to writing a piece on domestic terrorism without mentioning the Boston Marathon bombings.  One can’t help but wonder whether the omission was intentional, given that PK can’t have liked the way that case went, or if Siy is saving his ammunition for an amicus brief to be filed in ReDigi’s appeal.

At the heart of Siy’s analysis is the notion of copies made as “essential steps” in the normal usage of a copyrighted work.  If you rebind a book and put a new cover on it, for example, the law says that the result is a “derivative work” — a specific type of copy in legal terms — of the original book.  But one court decision said that you have the right to do this because the derivative work is essential to the functioning of the original.

Analogously, various copies of software or digital content that are made in the RAM of a PC or other devices can be argued to be made as essential steps in the use of the software or content.  Few people notice or care about such RAM copies.  They aren’t given much consideration in copyright law, and end-user license agreements (EULAs) usually don’t explicitly give users permission to make them.  In legal terms, they are de minimis copies.

ReDigi’s software — at least the version of it for which the court found ReDigi liable — made a copy of a music file as part of the process of making it available for resale.  It deleted the original copy in the process of making the new one, and it took steps to ensure that the user didn’t try to keep additional copies after the file was resold; but still, it made a new copy.  And the judge in the ReDigi case specifically ruled that the new copy was infringing.  In other words, the ReDigi opinion implied that the copy made for resale purposes was not essential to the normal use of digital music files.
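The mechanics at issue reduce to a few lines.  This hypothetical sketch (the function and names are mine for illustration, not ReDigi's actual code) shows why even a "move" of a file necessarily creates a new copy before the original is deleted — the fact on which Judge Sullivan's opinion turns:

```python
import os
import shutil

def forward_and_delete(src_path, dst_path):
    """'Transfer' a file by copying it, then deleting the original.

    Even though the original disappears and two usable copies never
    persist after the transfer completes, a brand-new copy is
    unavoidably made along the way -- which is what the ReDigi
    opinion found to be an unauthorized reproduction.
    """
    shutil.copy2(src_path, dst_path)  # a new copy now exists
    os.remove(src_path)               # only then does the original vanish
    return dst_path
```

There is no filesystem or network primitive that relocates bits without reproducing them somewhere first; "forward-and-delete" is always copy-then-delete under the hood.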

In its white paper, PK proposes short-term fixes to the copyright law.  It also discusses longer-term implications of first sale in an age where the concepts of copies of, and access to, content are increasingly muddy.  The short-term fix focuses on adding language to the law that makes “essential step” copies — including those made for transfer of ownership — presumptively legal.  Siy suggests that if publishers want to limit this at all, they can do so through terms in EULAs.

In addition, Siy proposes codifying distribution of files for transfer of ownership as part of the normal use of digital content:

“…the law could allow the lawful owners of lawful copies to make reproductions of the works necessary to the transfer of ownership of a copy to one other party, provided that the other party be the only one in possession of a copy at the end of the transfer, and that no more than one of the parties has the use of the work at one time.”

In other words, it should be legal to do this as long as the original copy disappears.  Siy adds:

“Skeptics of this approach might note that the copyright holder would have to rely upon the goodwill of the transferring parties not to make more permanent reproductions in the course of the transfer and just keep the copy they claim to have sold to someone else. This is true. However, it is not a significant change from the state of play now. Photocopiers continue to operate without licenses from copyright holders despite the fact that they may be used for infringing reproductions.”

Here’s where his analysis starts to lose credibility.  First, the U.S. Copyright Office investigated this very issue in a 2001 report on digital first sale.  It decided that while it’s possible to build a mechanism to delete the original when making a copy — a “forward and delete” scheme similar to ReDigi’s — it would be neither prudent to trust people to do this nor practical to mandate such a mechanism.  Given that the Copyright Office is the advisor to Congress on copyright law, this report is tantamount to Congress’s last word on the subject.  (The Copyright Office hasn’t been asked to revise the report since 2001.)  Siy doesn’t mention this.

Secondly, there are two things wrong with Siy’s photocopying analogy.  First, while publishers don’t bother trying to seek licenses for photocopying from individuals, they do seek licenses from institutions via the Copyright Clearance Center that are based on an institution’s size, industry, and other factors; the vast majority of the Fortune 500, for example, pays fees for such licenses.  Second, and most fundamentally, a photocopy is not the same as the original, whereas a bit-for-bit copy of a digital file is.  This difference has to be relevant, as it was to Judge Sullivan in his ReDigi opinion.  I’d argue that the exactness of digital copies has to be at the heart of any debate on the future of first sale in the digital age, yet Siy doesn’t touch this issue.
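The point about exactness is easy to demonstrate: a digital copy is byte-for-byte identical to its source, verifiable by comparing cryptographic hashes — something no photocopy can claim.  A small sketch (the helper's name is mine):

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Copy a file and hash both: the digests match exactly, so the "new
# copy" is indistinguishable from -- and a perfect market substitute
# for -- the original, unlike a degraded analog reproduction.
```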

After his suggested short-term solutions, Siy surveys the emerging landscape of content distribution models that focus more on access than on copies, and he wonders — quite appropriately — whether copies are still the most appropriate measure of the usage of copyrighted works.  Looking creatively at the present and future of content business models, he says:

“As technology advances, we can see the relationship diminishing between the structure of the Copyright Act and the reality of how authors and audiences alike value and use copyrighted works. … increasingly, consumption of copyrighted works comes not through the distribution of fixed copies, or even the distribution of digital ones. People listen to music via subscriptions to Spotify, pay for online access to the New York Times and Wall Street Journal, and ‘rent’ (actually, pay for streaming access to) digital movies from Amazon. Access, not copies, seems to be more the question. We own copies now; we don’t necessarily own access. Should we be able to trade access … ? This is actually more than just a fix for the first sale doctrine; it’s a realignment of how we think of copyright and what the value of the thing is.”

Yet at the end of the day, Siy decides that focusing on copies is still the most sensible approach to laws intended to balance the interests of content creators and the public.  The alternative would be to enable content distributors to grant or deny rights on every conceivable type of content access, including, for example, the ability to flip back and forth or search through text. He says that this could lead to a future that is “at best tedious and at worst dystopian,” and uses that as a rationale to conclude that maybe focusing on copies isn’t so bad after all.

Another landmark case in the world of digital first sale, which Siy does mention, is Vernor v. Autodesk.  In my discussion of that case, I suggested that specifying such fine-grained limitations in a EULA amounts to “verbal DRM,” which in a way is worse than technological DRM because of its potential for ambiguity and thus legal risk to users.  Commercial content distributors need to make their services easy to understand and use, because the alternative is irrelevance, not to mention piracy.  Just as importantly, digital content and software developers should know that byzantine usage restrictions in EULAs without technological measures to back them up are virtually impossible to enforce; users will merely ignore and/or complain about them.

Therefore I’m not so concerned about Sherwin Siy’s “tedious” and “dystopian” future, and I continue to wonder whether there’s a better way forward than looking at copies.

The Coming Two-Tiered World of Library E-book Lending June 4, 2013

Posted by Bill Rosenblatt in Libraries, Publishing, Services, United States.

A group of public libraries in California recently launched a beta version of EnkiLibrary, an e-book lending system that the libraries run themselves.  EnkiLibrary is modeled on the Douglas County Libraries system in Colorado.  It enables libraries to acquire e-book titles for lending in a model that approximates print book acquisition more closely than the existing model.

Small independent publishers are making their catalogs available to these library-owned systems on liberal terms, including low prices and a package of rights that emulates ownership.  In contrast, major trade publishers license content to white-label service providers such as OverDrive under a varied, changing, and often confusing array of conditions — including limited catalog, higher prices than those charged to consumers, and limitations on the number of loans.  The vast majority of public libraries in the United States use these systems: they choose which titles to license and offer those to their patrons.

Welcome to the coming two-tiered world of library e-book lending.  E-lending systems like EnkiLibrary may well proliferate, but they are unlikely to take over; instead they will coexist with — or, in EnkiLibrary’s own words, “complement” — those used by the major publishers.

The reason for this is simple: indie publishers — and authors, working through publisher/aggregators like Smashwords — prioritize exposure over revenue, while for major publishers it’s the other way around.  If more liberal rights granted to libraries means that borrowers “overshare” e-books, then so be it: some of that oversharing has promotional value that could translate into incremental, cost-free sales.

In some ways, the emerging dichotomy in library e-lending is like the dichotomy between major and indie labels regarding Internet music sales.  Before 2009, the world of (legal) music downloads was divided into two camps: iTunes sold both major and indie music and used DRM that tied files to the Apple ecosystem; smaller services like eMusic sold only indie music, but the files were DRM-free MP3s that could be played on any device and copied freely.  That year, iTunes dropped DRM, Amazon expanded its DRM-free MP3 download service to major-label music, and eventually eMusic tapered off into irrelevance.

Yet it would be a mistake to stretch the analogy too far.  Major publishers are unlikely to license e-books for library lending on the liberal terms of a system like EnkiLibrary or Douglas County’s in the foreseeable future; the market dynamics are just not the same.

In 2008, iTunes had an inordinately large share of the music download market; the major labels had no leverage to negotiate more favorable licensing terms, such as the ability to charge variable prices for music.  The majors had tried and failed to nurture viable competitors to iTunes.  Amazon was their last and best hope.  iTunes already had an easy-to-use system that was tightly integrated with Apple’s own highly popular devices.  It became clear that the only meaningful advantage that another retailer could have over iTunes was lack of DRM.  So the major labels were compelled to give up DRM in order to get Amazon on board.  By 2009, DRM-free music from all labels became available through all major retailers.

No such competitive pressures exist in the library market.  On the contrary, libraries themselves are under competition from the private sector, including Amazon.  Furthermore, arguments that e-book lending under liberal terms leads to increased sales for small publishers won’t apply very much to major publishers, for reasons given above.

Therefore, unless libraries get e-lending rights under copyright law instead of relying on “publishers’ good graces” (as I put it at the recent IDPF Digital Book 2013 conference) for e-lending permission, it’s likely that libraries will have to labor under a two-tiered system for the foreseeable future.  Douglas County Libraries director Jamie LaRue — increasingly seen as a revolutionary force in the library community — captured the attitude of many when he said, “It isn’t the job of libraries to keep publishers in business.”  He’s right.  Ergo the stalemate should continue for some time to come.

Capitol Records Prevails in ReDigi Case April 1, 2013

Posted by Bill Rosenblatt in Law, Music, United States.

A federal court in New York City handed down summary judgment against ReDigi over the weekend in its legal fight with Capitol Records.  In his ruling, Judge Richard Sullivan found the digital resale service liable for primary and secondary copyright infringement.  He rejected ReDigi’s arguments that its service, which enables users to resell music tracks purchased on iTunes, is legal under the doctrines of fair use and first sale.

The decision is a surprising blow to the Boston-based startup, especially given that Judge Sullivan refused Capitol’s request for a preliminary injunction early on in the case.

The central holding in Judge Sullivan’s opinion was that in order to resell a digital file, a user has to make another copy of it — even if the original copy disappears, and even if two copies never coexist simultaneously.  He based this holding on a literal interpretation of the phrase “copies are material objects” from Section 101 of the Copyright Act.

Once Judge Sullivan established that the ReDigi system causes another copy to be made as part of the resale process, the rest of his opinion flowed from there:

  • The user didn’t have a right to make that new copy, therefore it’s infringement — specifically of Capitol’s reproduction and distribution rights under copyright law.
  • ReDigi knowingly aided and abetted, and benefited from, users’ acts of infringement, therefore it’s secondary as well as primary infringement.
  • The user resold the new copy, not the original one, therefore it’s not protected under first sale (which says that a consumer can do whatever she wants with a copy of a copyrighted work that she lawfully obtains).
  • The “new” copies made in the ReDigi process don’t qualify as fair use: they are identical to the originals and thus aren’t “transformative”; they are made for commercial purposes; they undercut the originals and thus diminish the market for them.

In sum, as Judge Sullivan put it bluntly, “ReDigi, by virtue of its design, is incapable of compliance with the law.”  At the same time, he was quick to point out that his was a narrow ruling based on a literal interpretation of the law, saying that “this is a court of law and not a congressional subcommittee or technology blog[.]”  He investigated Congress’s intent regarding digital first sale and found that it hadn’t advanced since the U.S. Copyright Office — the copyright advisors to Congress — had counseled against allowing digital resale back in 2001.

I’ve always assumed that any district court decision in this case would be minimally relevant, as it would be appealed.  ReDigi has already stated that it will appeal.  And the opinion does contain patches of daylight through which an appeal could possibly be launched.

Most important is the opinion’s focus on the making of a “new copy” during the resale process.  It’s hard to see how this gibes with the many “new copies” of digital files made during normal content distribution processes, including streaming as well as downloads.

In other words, if ReDigi is making “new copies” without authorization, then so are countless other technologies.  Some such copies might be covered under fair use or the DMCA safe harbors.  Others are considered “incidental” (not requiring permission from the copyright holder); the judge didn’t explain why copies made by the ReDigi system don’t qualify as incidental.  ReDigi made a similar argument, which the judge rejected as beyond the issues in this case; but a higher court, looking at the broader picture of digital first sale, might see things differently.

Judge Sullivan’s reliance on the Copyright Office’s 2001 report on digital first sale is also somewhat problematic.   The Copyright Office believed that a “forward-and-delete” mechanism — not unlike what ReDigi has built — could actually support digital first sale.  The Copyright Office simply concluded that such a mechanism would not be practical to implement.  This does not comport with Judge Sullivan’s assertion that “forward-and-delete” requires a new copy to be made and thus cannot qualify as first sale in the first place.

Another notable feature of Judge Sullivan’s opinion is his assertion that “a ReDigi user owns the phonorecord that was created when she purchased and downloaded a song from iTunes to her hard disk.”  The assertion that a user “owns” a digital download is itself controversial and not based on legal precedent.  Judge Sullivan found no legal precedent for digital first sale, but somehow he did find a basis for asserting that digital downloads are “owned.”

Retailers of digital goods believe that they don’t actually sell them in the way that books, CDs, or DVDs are sold; instead they license them to users under terms that may resemble sale.  The question of sale vs. licensing of copyrighted digital content is a gray area in the law, and it wasn’t up for examination here: Apple, for example, wasn’t a party to the case and remained silent throughout.  But if Apple (or another digital content retailer) ever objects to its content being “resold” through a third-party service, it will have to deal with Judge Sullivan’s language; and once again, it may be harder for a higher court to ignore this aspect of digital resale when determining its legality.

It remains to be seen whether the above issues can be forged into a legal theory that can convince the Second Circuit appeals court to reverse Judge Sullivan’s ruling.  Yet even if ReDigi throws in the towel and ceases operations, its very existence has called a lot of attention to the idea of digital resale.  The mechanisms are in place today: beyond ReDigi, there’s at least one more startup (the NYC-based ReKiosk), and Amazon was recently granted a patent for resale of digital goods.  At first, indie music labels and a few e-book publishers will most likely experiment with it.

This court ruling won’t eliminate digital resale; if allowed to stand, it will simply restrict it to content that copyright owners have given permission to resell — permission that will probably include a say over pricing, timing, and other factors.  This will complicate the lives of resellers, but it will ensure that digital resale doesn’t harm copyright holders.  In other words, ReDigi has let the digital resale genie out of the lamp.  It’s bound to happen, one way or another.

Supreme Court Affirms First Sale in Kirtsaeng Case March 20, 2013

Posted by Bill Rosenblatt in Law, Libraries, United States.

The copyleft was jubilant, and Big Media disgruntled, at the Supreme Court’s opinion on Tuesday in Kirtsaeng v. Wiley, a case about the first sale doctrine in US copyright law.  First sale, known as “exhaustion” outside of the US, states that the publisher of a copyrighted work has no say or control in distribution of it after the first sale.  The law says that if you have obtained a copy of a work legally, you can sell it, lend it, give it away, use it to line a birdcage, or anything else, without consent of the original publisher.

The Kirtsaeng case existed firmly in the realm of physical products.  It concerned a tension in the law between first sale (section 109) and another provision (section 602) that makes it illegal to import copyrighted works from outside the US into the country without permission.

Supap Kirtsaeng, a Thai citizen living in the US, got his friends and family to buy textbooks published in his native land at prices that were much lower than those charged here.  They sent him the books; he resold them here and pocketed the difference.  The books were published by a subsidiary of John Wiley & Sons and were virtually identical to titles published by Wiley in the US.  (Disclosure: Wiley is the publisher of one of my books.)

Wiley sued, claiming that Kirtsaeng was infringing under section 602.  Kirtsaeng claimed first sale rights to resell the books.  Kirtsaeng lost in the lower courts, but the Supreme Court reversed.  Now the case goes back to the Second Circuit in New York for a re-hearing consistent with Tuesday’s decision.

Many people are asking me what impact this decision may have on digital first sale, and more specifically, the fortunes of the digital resale startup ReDigi, which is fighting a lawsuit brought by Capitol Records.  While I’m not in the business of reading Supreme Court tea leaves, I’d say there are two ways to look at it.

The narrower view is: not very much.  Justice Stephen Breyer’s opinion was an exemplar of judicial restraint.  It spent a lot of time analyzing key words in the first sale law (specifically that a copy had to be “lawfully made under this title” to qualify for first sale) and the factors specific to its geographic interpretation vis-a-vis section 602.  It also focused on divining Congress’s intent in making the law in the first place and emphasized the law’s “impeccable common law pedigree” dating back over 100 years.  It’s no wonder that the 6-3 majority crossed “party lines,” with conservative Justices Roberts, Thomas, and Alito joining liberals Breyer, Kagan, and Sotomayor.

The opinion also concerned itself with the decision’s impact on libraries and museums, saying that if the case went Wiley’s way, it would place undue burdens on them to get permission before they could lend or exhibit foreign-made works.

What Breyer did not do was spend much time discussing the business implications of the case.  He said little about either the impact on publishers or Kirtsaeng’s right to carry on his resale business.  Justice Ruth Bader Ginsburg’s dissenting opinion focused much more on those aspects.

That leads me to believe that if and when the Supreme Court revisits first sale, it will be more receptive to arguments from the library and museum communities than those about industry factions, which often suffuse high-profile copyright litigation.  And libraries especially face difficulties without clear digital first sale rights.  The Owners Rights Initiative, a lobbying organization set up specifically to deal with this case, turns out to have done the right thing by enlisting library organizations to be part of its “public face” rather than the likes of CCIA and eBay.  (The list of organizations that submitted or signed on to amicus briefs in this case is a mile long.)

The other possible view of the Kirtsaeng decision is the bigger-picture one: that the Supreme Court is taking a broad view of first sale by refusing to weigh it down with exceptions like those in section 602, and therefore the Court may take the same broad view when it’s asked to opine on digital first sale — that is, when it’s asked to interpret another group of words in the copyright act: “‘Copies’ are material objects…”

(Props to Andrew Bridges of Fenwick & West for his insights.)

