
MP3Tunes and the New DMCA Boundaries March 30, 2014

Posted by Bill Rosenblatt in Law, Music, Services, United States.

With last week’s jury verdict of copyright liability against Michael Robertson of MP3Tunes, copyright owners are finally starting to get some clarity around the limits of DMCA 512.  The law gives online service operators a “safe harbor” — a way to insulate themselves from copyright liability related to files that users post on their services by responding to takedown notices.

To qualify for the safe harbor, service providers have to have a policy for terminating the accounts of repeat infringers, and — more relevantly — cannot show “willful blindness” to users’ infringing actions.  At the same time, the law does not obligate service providers to proactively police their networks for copyright infringement.  The problem is that even when online services respond to takedown notices, the copyrighted works tend to be re-uploaded immediately.

The law was enacted in 1998, and copyright owners have brought a series of lawsuits against online services over the years to try to establish liability beyond the need to respond to one takedown notice at a time.  Some of these lawsuits tried to revisit the intent of Congress in passing this law, to convince courts that Congress did not intend to require them to spend millions of dollars a year playing Whac-a-Mole games to get their content removed.

In cases such as Viacom v. YouTube and Universal Music Group v. Veoh that date back to 2007, the media industry failed to get courts to revisit the idea that service providers should act as their own copyright police.  But over the past year, the industry has made progress along the “willful blindness” (a/k/a “looking the other way”) front.

These cases featured lots of arguments over what constitutes evidence of willful blindness or its close cousin, “red flag knowledge” of users’ infringements.  Courts had a hard time navigating the blurry lines between the “willful blindness” and “no need to self-police” principles in the law, especially when the lines must be redrawn for each online service’s feature set, marketing pitch, and so on.

But within the past couple of years, two appeals courts established some of the contours of willful blindness and related principles to give copyright owners some comfort.  The New York-based (and typically media-industry-friendly) Second Circuit, in the YouTube case, found that certain types of evidence, such as company internal communications, could be evidence of willful blindness.  And even the California-based (and typically tech-friendly) Ninth Circuit found similar evidence last year in a case against the BitTorrent site IsoHunt.

The Second Circuit’s opinion in YouTube served as the guiding precedent in the EMI v. MP3Tunes case — and in a rather curious way.  Back in 2011, the district court judge in MP3Tunes handed down a summary judgment ruling that was favorable to Robertson in some but not all respects.  But after the Second Circuit’s YouTube opinion, EMI asked the lower court judge to revisit the case, suggesting that the new YouTube precedent created issues of fact regarding willful blindness that a jury should decide.  The judge was persuaded, the trial took place, and the jury decided for EMI.  Robertson could now be on the hook for tens of millions of dollars in damages.

(Eleanor Lackman and Simon Pulman of the media-focused law firm Cowan DeBaets have an excellent summary of the legal backdrop of the MP3Tunes trial; they say that it is “very unusual” for a judge to go back on a summary judgment ruling like that.)

The MP3Tunes verdict gives media companies some long-sought leverage against online service operators, which keep claiming that their only responsibility is to respond to each takedown notice, one at a time.  This is one — but only one — step of the many needed to clarify the rights of copyright owners and responsibilities of service providers to protect copyrights.  And as far as we can tell now, it does not obligate service providers to implement any technologies or take any more proactive steps to reduce infringement.  Yet it does now seem clear that if service providers want to look the other way, they at least have to keep quiet about it.

As for Robertson, he continues to think of new startup ideas that seem particularly calculated to goad copyright owners.  The latest one, radiosearchengine.com, is an attempt to turn streaming radio into an interactive, on-demand music service a la Spotify.  It lets users find and listen to Internet streams of radio stations that are currently playing specific songs (as well as artists, genres of music, etc.).

Radiosearchengine.com starts with a database of thousands of Internet radio stations, similar to TuneIn, iHeartRadio, Reciva, and various others.  These streaming radio services (many of which are simulcasts of AM or FM signals) carry program content data, such as the title and artist of the song currently playing.  Radiosearchengine.com retrieves this data from all of the stations in its database every few seconds, adds that information to the database, and makes it searchable by users.  Robertson has even created an API so that other developers can access his database.
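
For the technically curious, here is a rough sketch of how a service like this might grab one station’s now-playing data.  Many Internet radio streams follow the ICY (SHOUTcast/Icecast) convention of interleaving song metadata into the audio stream on request; the station URL below is hypothetical, and a real crawler would repeat something like this against thousands of stations every few seconds, as described above.

```python
import requests

def now_playing(stream_url):
    # Ask the server to interleave song metadata into the audio stream.
    headers = {"Icy-MetaData": "1"}
    with requests.get(stream_url, headers=headers, stream=True, timeout=10) as resp:
        interval = int(resp.headers.get("icy-metaint", 0))
        if interval == 0:
            return None  # this station doesn't expose inline metadata
        resp.raw.read(interval)            # skip one block of audio data
        length = resp.raw.read(1)[0] * 16  # metadata length, in 16-byte units
        meta = resp.raw.read(length).rstrip(b"\x00").decode("utf-8", "replace")
        # A typical payload looks like: StreamTitle='Artist - Song Title';
        if "StreamTitle='" in meta:
            return meta.split("StreamTitle='", 1)[1].split("';", 1)[0]
        return None

# Hypothetical station URL; a real crawler would loop over its whole database.
print(now_playing("http://streams.example-station.com:8000/live"))
```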

Of course, radiosearchengine.com can’t predict that a station will play a certain song in the future (stations aren’t allowed to report it in advance), so users are likely to click on station links and hear their chosen songs starting in the middle.  But with the most popular songs — which are helpfully listed on the site’s left navbar — you can find many stations that are playing them, so you can presumably keep clicking until you find the song near its beginning.

This is something that TuneIn and others could have offered years ago if it didn’t seem so much like lawsuit bait.  On the other hand, Robertson isn’t the first one to think of this: there’s been an app for that for at least three years.

Viacom vs. YouTube: Not With a Bang, But a Whimper March 21, 2014

Posted by Bill Rosenblatt in Law, United States.

Earlier this week Viacom settled its long-running lawsuit against Google over video clips containing Viacom’s copyrighted material that users posted on YouTube.  The lawsuit was filed seven years ago; Viacom sought well over a billion dollars in damages.  The last major development in the case was in 2010, when a district court judge ruled in Google’s favor.  The case had bounced around between the district and Second Circuit appeals courts.  The parties agreed to settle the case just days before oral argument was to take place before the Second Circuit.

It’s surprising how little press coverage the settlement has attracted — even in the legal press — considering the strategic implications for Viacom and copyright owners in general.

The main reasons for the lack of press attention are that details of the settlement are being kept secret, and that by now the facts at issue in the case are firmly in the past.  A few months after Viacom filed the lawsuit in March 2007, Google launched its Content ID program, which enables copyright owners to block user uploads of their content to YouTube — or monetize them through shared revenue from advertising.  The lawsuit was concerned with video clips that users uploaded before Content ID was put into place.

Viacom’s determination to carry on with the litigation was clearly aimed primarily at getting the underlying law changed.  Viacom has been a vocal opponent of the Digital Millennium Copyright Act (DMCA) in its current form.  The law allows service providers like YouTube to respond to copyright owners’ requests to remove content (takedown notices) in order to avoid liability.  It doesn’t require service providers to proactively police their services for copyright violations.  As a result, a copyright owner has to issue a new takedown notice every time a clip of the same content appears on the network — which often happens immediately after each clip is taken down.  Companies like Viacom thus spend millions of dollars issuing takedown notices in a routine that has been likened to a game of Whac-a-Mole.

From that perspective, Google’s Content ID system goes beyond its obligations under the DMCA (Content ID has become a big revenue source for Google), so Google’s compliance with the current DMCA isn’t the issue.  Countless other service providers don’t have technology like Content ID in place; moreover, since 2007 the courts have consistently — in other litigations such as Universal Music Group v. Veoh — interpreted the DMCA not to require that service providers act as their own copyright police.  Viacom must still be interested in getting the law changed.

In this light, what’s most interesting is not that the settlement came just days before oral argument before the Second Circuit, but that it came just days after the House Judiciary Committee held hearings in Washington on the DMCA.  These were held in the context of Congress’s decision to start on the long road to revamping the entire US copyright law.

My theory is that, while Viacom may have had various reasons to settle, the company has decided that it has a better shot at changing the law through Congress than through the courts.  The journey to a new Copyright Act is likely to take years longer than the appeals process.  But if Viacom were to get the lower court’s decision overturned in the Second Circuit, the result would be a precedent that wouldn’t apply nationwide; in particular, it wouldn’t necessarily apply in the tech-friendly Ninth Circuit.  A fix to the actual law in Congress that’s favorable to copyright owners — if Congress delivers one — could have broader applicability, both geographically and to a wider variety of digital services.  Viacom has waited seven years; it can wait another ten or so.

In Copyright Law, 200 Is a Magic Number March 2, 2014

Posted by Bill Rosenblatt in Images, Law, United States.

An occasional recurring theme in this blog is how copyright law is a poor fit for the digital age because, while technology enables distribution and consumption of content to happen automatically, instantaneously, and at virtually no cost, decisions about legality under copyright law can’t be similarly automated.  The best/worst example of this is fair use.  Only a court can decide whether a copy is noninfringing under fair use.  Even leaving aside notions of legal due process, it’s not possible to create a “fair use deciding machine.”

In general, copyright law contains hardly any concrete, machine-decidable criteria.  Yet one of the precious few came to light over the past few months regarding a type of creative work that is often overlooked in discussions of copyright law: visual artworks.  Unlike most copyrighted works, works of visual art are routinely sold and then resold potentially many times, usually at higher prices each time.

A bill was introduced in Congress last week that would enable visual artists to collect royalties on their works every time they are resold.  One of the sponsors of the bill is Rep. Jerrold Nadler, who represents a chunk of New York City, one of the world’s largest concentrations of visual artists.

Of course, the types of copyrighted works that we usually talk about here — books, movies, TV shows, and music — aren’t subject to resale royalties; they are covered under first sale (Section 109 of the Copyright Act), which says that the buyer of any of these works is free to do whatever she likes with them, with no involvement from the original seller.  But visual artworks are different.  According to Section 101 of the copyright law, they are either unique objects (e.g. paintings) or reproduced in limited edition (e.g. photographs).  The magic number of copies that distinguishes a visual artwork from anything else?  200 or fewer.  The copies must be signed and numbered by the creator.
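
Since the point is that this is one of copyright law’s precious few machine-decidable criteria, it’s worth noting that the test really can be written in a few lines of code.  Here is a toy sketch covering just the countable elements described above; a real Section 101 analysis has more to it than this:

```python
def qualifies_as_visual_artwork(num_copies, signed, numbered):
    """Toy test of the Section 101 limited-edition criterion described
    above: 200 or fewer copies, signed and numbered by the creator.
    (The actual statutory definition has additional elements.)"""
    return num_copies <= 200 and signed and numbered

print(qualifies_as_visual_artwork(50, True, True))   # True
print(qualifies_as_visual_artwork(250, True, True))  # False: too many copies
```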

Under the proposed ART (Artist Royalties, Too) Act, five percent of the proceeds from a sale of a visual artwork would go to the artist, whether it’s the second, third, or hundredth sale of the work.  The law would apply to artworks that sell for more than $5,000 at auction houses that do at least $1 million in business per year.  It would require private collecting societies to collect and distribute the royalties on a regular basis, as SoundExchange does for digital music broadcasting.  This proposed law would follow in the footsteps of similar laws in many countries, including the UK, EU, Australia, Brazil, India, and Mexico.  It would also emulate “residual” and “rental” royalties for actors, playwrights, music composers, and others, which result from contracts with studios, theaters, orchestras, and so on.
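
Expressed as arithmetic, the core of the proposed rule is simple.  Here is a back-of-the-envelope sketch based solely on the thresholds described above; the actual bill contains more qualifications:

```python
def art_act_royalty(sale_price, auction_house_annual_sales):
    """Return the artist's resale royalty under the proposed rule as
    described above: 5% of proceeds, for works selling over $5,000 at
    auction houses doing at least $1 million in business per year."""
    if sale_price > 5_000 and auction_house_annual_sales >= 1_000_000:
        return 0.05 * sale_price
    return 0.0

print(art_act_royalty(20_000, 5_000_000))  # 1000.0
print(art_act_royalty(4_000, 5_000_000))   # 0.0: below the price threshold
```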

The U.S. Copyright Office analyzed the art resale issue recently and published a report last December that summarized its findings.  The Office concluded that resale royalties would probably not harm the overall art market in the United States, and that a law like the ART Act isn’t a bad idea but is only one of several ways to institute resale royalties.

The Office had previously looked into resale royalties over 20 years ago.  Its newer research found that, based on evidence from other countries that have resale royalties, imposing them in the US would neither result in the flight of art dealers and auction houses from the country nor impose unduly onerous burdens for administration and enforcement of royalty payments.

Yet the Copyright Office’s report doesn’t overflow with unqualified enthusiasm for statutory royalties on sales.  One of the legislative alternatives it suggests is the idea of a “performance royalty” from public display of artworks.  If a collector wants to buy a work at auction and display it privately in her home, that’s different from a museum that charges people admission to see it.  Although this would mirror performance royalties for music, it would seem to favor wealthy individuals at the expense of public exposure to art.

The ART Act — which is actually a revision of legislation that Rep. Nadler introduced in 2011 — has drawn much attention within the art community, though little outside it.  Artists are generally in favor of it, of course.  But various others have criticized aspects of the bill, such as that it only applies to auction houses (thereby pushing more sales to private dealers, where transactions take place in secret instead of out in the open), that it only benefits the tiny percentage of already-successful artists instead of struggling newcomers, and that it unfairly privileges visual artists over other creators of both copyrighted works and physical objects (think Leica cameras or antique Cartier watches).

As an outsider to the art world, I have no opinion.  Instead it’s that 200 number that fascinates me.  That number may partially explain why the Alfred Eisenstaedt photograph of the conductor Leonard Bernstein that hangs in my wife’s office, signed and numbered 14 out of 250, is considerably less valuable than another Eisenstaedt available on eBay that’s signed and numbered 41 out of 50.

It raises the question of what happens when more and more visual artists use media that can be reproduced digitally without loss of quality.  Would an artist be better off limiting her output to 200 copies and getting the 5% on resale, or would she be better off making as many copies as possible and selling them for whatever the market will bear?  The answer is unknowable without years of real-world testing.  Given the choice, some artists may opt for the former route, which seems to go against the primary objective of copyright law: to maximize the availability of creative works to the public through incentives to creators.

Copyright minimalists question the relevance of copyright in an era when digital technologies make it possible to reproduce creative works at very little cost and perfect fidelity; they call on the media industry to stop trying to “profit from scarcity” and instead “profit from abundance.”  Here’s a situation where copyrighted works are the scarcest of all.

Nowadays no one would confuse one of Vermeer’s 35 (or possibly 36) masterpieces with a poster or hand-made reproduction of one.  People will be willing to travel to the Rijksmuseum, National Gallery, Met, etc., to see them for the foreseeable future.  Yet at some point in the distant future, the scarcity of most copyrighted works will be artificially imposed.  At that point, the sale (not resale) value of creative works will go toward zero, even if they are reproduced, signed, and sequentially numbered by super-micro-resolution 3D printers that sell at Staples for the equivalent of $200 today.

Perhaps the best indication of the future comes from Christo and Jeanne-Claude, the well-known husband-and-wife outdoor artists.  Christo and Jeanne-Claude designed the 2005 installation called The Gates in New York’s Central Park (which happens to be in Jerry Nadler’s congressional district).  Reproducing — let alone selling — this massive work is inconceivable.  Instead, Christo and Jeanne-Claude hand-signed thousands of copies of books, lithographs, postcards, and other easily-reproduced artifacts containing photos and drawings of the artwork, and sold them to help pay the eight-figure cost of the project.  Add to that an individualized autopen for automating the signatures, and you may have the future of visual art in a world without scarcity.

So, the question that Congress ought to consider when evaluating art resale legislation is how to create a legal environment in which the Christos and Jeanne-Claudes of tomorrow will even bother anymore.  That’s not a rhetorical question, either.

National Academies Calls for Hard Data on Digital Copyright February 4, 2014

Posted by Bill Rosenblatt in Economics, Law, United States.

About three years ago, the National Academies — the scientific advisers to the U.S. federal government — held hearings on copyright policy in the digital age.  The intent of the project, of which the hearings were a part, was to gather input from a wide range of interested parties on the kinds of research that should be done to further our understanding of the effects of digital technologies on copyright.

The committee overseeing the project consisted of twelve people, including an economist specializing in digital content issues (Joel Waldfogel), a movie industry executive (Mitch Singer of Sony Pictures), a music technology expert (Paul Vidich, formerly of Warner Music Group), a federal judge with deep copyright experience (Marilyn Hall Patel of Napster fame), a library director (Michael Keller of Stanford University), a former director of Creative Commons (Molly van Houweling), and a few law professors.  The committee was chaired by Bill Raduchel, a Harvard economics professor turned technology executive perhaps best known as Scott McNealy’s mentor at Sun Microsystems.

Recently the National Academies Press published the results of the project in the form of Copyright in the Digital Era: Building Evidence for Policy, which is available as a free e-book or $35 paperback.  This 85-page booklet is, without exaggeration, the most important document in the field of copyright policy to be published in quite some time.  It is the first substantive attempt to take the debate on copyright policy out of the realm of “copyright wars,” where polemics and emotions rule, into the realm of hard data.

The document starts by decrying the lack of data on which deliberations on copyright policy are based, especially compared to the mountains of data used to support changes to the patent system.  It then goes on to describe various types of data that either exist or should be collected in order to fuel research that can finally tell us how copyright is faring in the digital era, with respect to its purpose to maximize public availability of creative works through incentives to creators.

The questions that Copyright in the Digital Era poses are fundamentally important.  They include issues of monetary and non-monetary motivations to content creators; the impact of sharply reduced distribution and transaction costs for digital compared to physical content; the costs and benefits of various copyright enforcement schemes; and the effects of US-specific legal constructs such as fair use, first sale, and the DMCA safe harbors.  My own testimony at the hearings emphasized the need for research into the costs and benefits of rights technologies such as DRM, and I was pleased to see this reflected in the document.

Copyright in the Digital Era concludes with lists of types of data that the project committee members believe should be collected in order to facilitate research, as well as descriptions of the types of research that should be done and the challenges of collecting the needed data.

This document should be required reading for everyone involved in copyright policy.  More than that, it should be seen as a gauntlet that has been thrown down to everyone involved in the so-called copyright wars.  The National Academies has set the research agenda.  Now that Congress has begun the long, arduous process of revamping America’s copyright law, we’ll see who is willing and able to fund the research and publish the results so that Congress gets the data it deserves.

Judge Dismisses E-Book DRM Antitrust Case December 12, 2013

Posted by Bill Rosenblatt in DRM, Law, Publishing.

Last week a federal judge in New York dismissed a lawsuit that a group of independent booksellers brought earlier this year against Amazon.com and the (then) Big Six trade publishers.  The suit alleged that the publishers were conspiring with Amazon to use Amazon’s DRM to shut the indie booksellers out of the majority of the e-book market.  The three bookstores sought class action status on behalf of all indie booksellers.

In most cases, independent booksellers can’t sell e-books that can be read on Amazon’s Kindle e-readers; instead they have a program through the Independent Booksellers Association that enables consumers to buy e-books from the stores’ websites via the Kobo e-book platform, which has apps for all major devices (PCs, iOS, Android, etc.) as well as Kobo eReaders.

Let’s get the full disclosure out of the way: I worked with the plaintiffs in this case as an expert witness.  (Which is why I didn’t write about this case when it was brought several months ago.)  I did so because, like others, I read the complaint and found that it reflected various misconceptions about DRM and its place in the e-book market; I thought that perhaps I could help educate the booksellers.

The booksellers asked the court to enjoin (force) Amazon to drop its proprietary DRM, and to enjoin the Big Six to allow independent bookstores to sell their e-books using an interoperable DRM that would presumably work with Kindles as well as iOS and Android devices, PCs, Macs, BlackBerrys, etc.  (The term that the complaint used for the opposite of “interoperable DRM” was “inoperable DRM,” much to the amusement of some anti-DRM folks.)

There were two fundamental problems with the complaint.  One was that it presupposed the existence of an idealized interoperable DRM that would work with any “interoperable or open architecture device,” and that “Amazon could easily, and without significant cost or disruption, eliminate its device specific restrictive [] DRM and instead utilize an available interoperable system.”

There is no such thing, nor is one likely to come into being.  I worked with the International Digital Publishing Forum (IDPF), the trade association for e-books, to design a “lightweight” content protection scheme that would be attractive to a large number of retailers through low cost of adoption, but that project is far from fruition, and in any case, no one associated with it is under any illusion that all retailers will adopt the scheme.  The only DRM that is guaranteed to work with all devices and all retailers forever is no DRM at all.

The closest thing there is to an “interoperable” DRM nowadays is Adobe Content Server (ACS) — which isn’t all that close.  Adobe had intended ACS to become an interoperable standard, much like PDF is.  Unlike Amazon’s Mobipocket DRM and Apple’s FairPlay DRM for iBooks, ACS can be licensed and used by makers of e-reader devices and apps.  Several e-book platforms do use it.  But the only retailer with significant market share in the United States that does so is Barnes & Noble, which has modified it and combined it with another DRM that it had acquired years ago.  Kobo has its own DRM and uses ACS only for interoperability with other environments.

More relevantly, I have heard it said that Amazon experimented with ACS before launching the Kindle with the Mobipocket DRM that it acquired back in 2005.  But in any case, ACS’s presence in the US e-book market is on the wane, and Adobe has stopped actively working on the product.

The second misconception in the booksellers’ complaint was the implication that the major publishers had an interest in limiting their opportunities to sell e-books through indie bookstores.  The reality is just the opposite: publishers, from the (now) Big Five on down, would like nothing more than to be able to sell e-books through every possible retailer onto every possible device.  The complaint alleges that publishers “confirmed, affirmed, and/or condoned AMAZON’s use of restrictive DRMs” and thereby conspired to restrain trade in the e-book market.

Publishers have been wary of Amazon’s dominant market position for years, but they have tolerated its proprietary technology ecosystem — at least in part because many of them understand that technology-based media markets always settle down to steady states involving two or three different platforms, protocols, formats, etc.  DRM helps vendors create walls around their ecosystems, but it is far from the only technology that does so.

As I’ve said before, the ideal of an “MP3 for e-books” is highly unlikely and is largely a mirage in any case.  Copyright owners face a constant struggle to create and preserve level playing fields for retailers in the digital age, one that the more savvy among them recognize they can’t win as completely as they would like.

Judge Jed Rakoff picked up on this second point in his opinion dismissing the case. He said, “… nothing about [the] fact [that publishers made agreements with Amazon requiring DRM] suggests that the Publishers also required Amazon to use device-restrictive DRM limiting the devices on which the Publishers’ e-books can be displayed, or to place restrictions on Kindle devices and apps such that they could only display e-books enabled with Amazon’s proprietary DRM. Indeed, unlike DRM requirements, which clearly serve the Publishers’ economic interests by preventing copyright violations, these latter types of restrictions run counter to the Publishers’ interests …” (emphasis in original).

Indie bookstores are great things; it’s a shame that Amazon’s Kindle ecosystem doesn’t play nicely with them. But at the end of the day — as Judge Rakoff also pointed out — Amazon competes with independent booksellers, and “no business has a duty to aid competitors,” even under antitrust law.

In fact, Amazon has repeatedly shown that it will “cooperate” with competitors only as a means of cutting into their markets. Its extension of the Kindle platform to public library e-lending last year is best seen as part of its attempt to invade libraries’ territory. More recently, Amazon has attempted to get indie booksellers interested in selling Kindle devices in their stores, a move that has elicited frosty reactions from the bookstores.

The rest of Judge Rakoff’s opinion dealt with the booksellers’ failure to meet legal criteria under antitrust law.  Independent booksellers might possibly have a case to bring against Amazon for boxing them out of the market as reading goes digital, but Book House of Stuyvesant Plaza et al v. Amazon.com et al wasn’t it.

Apple and Disney: A Copyright Conundrum November 25, 2013

Posted by Bill Rosenblatt in Images, Law.

Last week I was at Rutgers Law School in New Jersey.  A law student struck up a conversation with me, and once he discovered that I was there to give a guest lecture in Prof. Michael Carrier’s intellectual property class, he showed me something that had us both scratching our heads.  It was a decal of Snow White, affixed to the lid of his MacBook laptop so that she was holding the Apple logo in her hands.  It turns out that more than one designer has thought of this idea; here’s one example.

Let’s make the (fairly safe) assumption that the makers of these decals were not licensed by The Walt Disney Company.  So the question is: would this be a fair use of the iconic cartoon image, or is the decal maker liable?

The design works as ironic commentary on a couple of levels.  Those of you who have seen the classic 1937 Disney animated feature, or at least know the story of Snow White and the Seven Dwarfs, will understand that in the story she held an apple that was poisoned.  (Snow White’s pose in the decal is the same as when she held the poisoned apple in the movie.)  On another level, Snow White holding the Apple logo is a commentary on Apple’s relationship with Disney, given that Steve Jobs was on the Disney board and was the largest investor in the company.

Is the decal a “transformative” use of Disney’s intellectual property?  (If a use of copyrighted material is transformative, it’s likely to be fair use.)

From what I can tell, the manufacturer of the decals is using Disney’s IP without permission by simply making copies of Snow White.  There is nothing “transformative” about that by itself; it’s not part of a mashup, collage, remix, etc.  The whole of Snow White was used, not a snippet or sample.  The decal was sold commercially, though it probably doesn’t make people less likely to buy Snow White items from the Disney Store.  It may or may not be an example of “appropriation art.”

The “transformative” use of the decal is made by the person who buys it and affixes it to his MacBook.  One could argue that the decal was made specifically with that use in mind; one could say that the decal maker was “inducing” transformative uses of Snow White.

OK, copyright geeks, time to weigh in.  Here’s a poll.  Feel free to elaborate in the comments.

Panel on Ministry of Sound Added at Copyright and Technology London 2013 October 10, 2013

Posted by Bill Rosenblatt in Events, Law, UK, Uncategorized.

We have added another panel session to the Copyright and Technology London 2013 conference, which will take place next Thursday (17 October).  The most important copyright litigation in the UK at the moment is the case of Ministry of Sound v. Spotify, in which the record label is objecting to Spotify making available playlists that mimic the compilation albums for which the label is best known.  The case has broad implications for the limits of copyrightability in the digital age, at least under UK law.

Here is the panel description:

The Limits of Copyright in the Digital Age

The litigation that Ministry of Sound recently started against Spotify will test whether playlists on compilation albums have copyright protection.  It will play out in the context of the debate over how far we as a society are prepared to pay for curation. The same issue faces news-disseminating organisations over their headlines and sports reporters over game highlights. Does our society value the editorial/quality control/validation role that they play? This panel will explore the boundaries of what is – and should be – protected by copyright in the digital age and suggest what directions legal decisions may take in the future.

Although the case was only filed a month ago, we have been able to pull together an excellent group of authorities on both the legal and content aspects of the matter, thanks to the tireless efforts of Serena Tierney of Bircham Dyson Bell, the panel chair and herself an authority on copyright in the digital age.  Panelists will include:

  • Jeff Smith, Head of Music at BBC Radio 2 and 6; former Director of Music Programming at Napster
  • Mo McRoberts, Head of the BBC Genome Project at the BBC Archive
  • Lindsay Lane, Barrister at 8 New Square Intellectual Property and co-author of the standard copyright treatise Laddie, Prescott and Vitoria on The Modern Law of Copyright and Designs
  • Andrew Orlowski, Executive Editor of The Register, who has covered this case.

This means that we will have a packed day of exciting sessions from all around the world of copyright.  Places are still left, so register today!

Ad Networks Adopt Notice-and-Takedown for Ads on Pirate Sites July 21, 2013

Posted by Bill Rosenblatt in Economics, Law.

Eight top Internet advertising networks will participate in a scheme for reducing ads that they place on pirate sites — websites that exist primarily to attract traffic by offering infringing content as well as counterfeit goods.  The Best Practice Guidelines for Ad Networks to Address Piracy and Counterfeiting document, announced on July 15th, specifies a process modeled on the US copyright law’s notice-and-takedown regime, a/k/a DMCA 512: a copyright owner can send an ad network detailed information about websites on which it placed ads and that feature pirated material; then the ad network can decide to remove its ads from the site.

Although this scheme may result in some ads being pulled from obvious pirate sites, it has several major shortcomings.  First of all, because this is a voluntary scheme, ad networks don’t risk legal liability for failing to comply with takedown notices, as they do under the DMCA.

So how will compliance be enforced?  Consider this: the companies that have signed on to these guidelines are 24/7 Media, Adtegrity, AOL, Condé Nast, Google, Microsoft, SpotXchange, and Yahoo!.  These companies have agreed to have the Interactive Advertising Bureau (IAB, the trade association for internet advertising) monitor them for compliance.  The largest six of these eight companies have seats on the IAB board.  In other words, this is rather like foxes agreeing to be monitored by the American Fox Association for compliance with henhouse guarding guidelines.

Secondly, the ad networks that are actually causing the most trouble aren’t involved.  None of the ad networks listed as the top ten worst offenders in the latest (June) edition of the USC Annenberg Innovation Lab’s Advertising Transparency Report have signed on to the guidelines.  Two of the eight that did sign on, Google and Yahoo!, were on the top 10 list when the Ad Transparency Report first came out in January but have come off since.

Another factoid: of the current worst offenders, only one (ZEDO) is a member of IAB at all.  Ad network operators on that list with names like “Sumotorrent” are surely not going to be observing these guidelines.

This extralegal agreement between the ad networks and the major content industries follows a well-trodden Washington path: government threatens to regulate industries in order to curb bad behavior; the industry to be regulated responds with a set of “voluntary best practices”; these are just barely serious enough to get government to back off and the instigators of the regulation to at least admit that it’s better than nothing.

We’ve seen this game played before many times, such as when organizations that look out for children’s interests pushed for ways to filter Internet porn and obscenities, or countless attempts to fend off substantive online privacy laws (an area in which the IAB has been heavily involved).

The Best Practices for Ad Networks were announced by Victoria Espinel, President Obama’s intellectual property enforcement czar.  That the document was actually written by the IAB is evident from its ample supply of equivocations and accountability dodges.  While it’s tempting to go through it line by line and point all of these out (I’d rather see Chris Castle do this; he’s great at picking these things apart), I’d rather focus on the missed opportunity here: instead of modeling this scheme on DMCA 512, the ad networks should have agreed to link the scheme to that law.  (The scheme does have a couple of minor links to the DMCA itself, but they are almost beside the point.)

Thanks to the Google Transparency Report and the USC Annenberg Innovation Lab, we have learned that DMCA takedown notices provide an excellent proxy metric of the most infringing websites.  The Google Transparency Report shows the number of DMCA takedown notices that Google received per month for sites in its search results.  The sites that exceed a certain threshold — say, 10,000 takedown notices per month — are all obvious pirate sites, while even the biggest mainstream consumer websites fall well below that threshold.

The Best Practices for Ad Networks could have made use of these statistics by tying ad takedowns to DMCA takedown notices received by the sites in question via the Google Transparency Report — instead of requiring copyright owners to generate an entirely new set of detailed notices under a voluntary regime with a Grand Canyon’s worth of wiggle room.  In other words, the “best practice” could have been very simple:  do not place ads on sites that generate more than X DMCA takedown notices per month.  This would accomplish the mutually reinforcing goals of reducing ad revenue for pirate sites and reducing their access to the content itself.
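
To see how mechanical such a rule would be, here is a sketch.  The threshold is the illustrative 10,000-notices-per-month figure mentioned above, and the per-site counts are made up, not real data:

```python
# Sketch of the proposed rule: no ads on sites whose monthly DMCA
# takedown-notice counts (e.g., drawn from the Google Transparency
# Report) exceed a threshold. All numbers here are invented.
MONTHLY_NOTICE_THRESHOLD = 10_000

takedown_counts = {
    "obvious-pirate-site.example": 42_517,
    "another-pirate-site.example": 18_240,
    "big-mainstream-site.example": 311,
}

do_not_advertise = sorted(
    site for site, notices in takedown_counts.items()
    if notices > MONTHLY_NOTICE_THRESHOLD
)
print(do_not_advertise)
# ['another-pirate-site.example', 'obvious-pirate-site.example']
```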

Even better would have been to establish a repository of DMCA takedown notices independent of the ones that Google collects for its search results.  Anyone who sends a site a takedown notice could simply send a copy to the keeper of this repository, which could be a neutral third party agreed to by the content industries and the IAB, or the U.S. Copyright Office. I have suggested adopting a standard format for takedown notices, which would facilitate this process.

(Such a repository could be supported by fees from copyright owners who submit large numbers of takedown notices, so that individuals and small copyright owners wouldn’t have to pay, and as a deterrent of abuse.  The fees would be much smaller than those charged by the piracy monitoring services that big media companies hire, which could submit the notices to the repository and bundle the fees into their own.)
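
For illustration only, a standardized notice submitted to such a repository might look something like the record below.  Every field name here is invented for the sake of the example; none of this is drawn from an actual standard:

```python
import json

# Hypothetical machine-readable takedown-notice record; the schema is
# made up to illustrate what a standard format might capture.
notice = {
    "sender": "Example Rights Management LLC",
    "copyright_owner": "Example Records",
    "work_title": "Example Song",
    "infringing_url": "http://pirate-site.example/download/12345",
    "notice_date": "2013-07-21",
    "statute": "17 U.S.C. 512(c)(3)",
    "good_faith_statement": True,
}
print(json.dumps(notice, indent=2))
```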

The IAB generated the Best Practices for Ad Networks under pressure from Big Media lobbying groups as well as the White House.  But neither of these entities puts money in ad networks’ pockets; advertisers do.  The result would undoubtedly have been stronger if actual advertisers had added their voices to the process, though none appear to have done so.

Musician David Lowery, a longtime fighter against ad-sponsored piracy, has identified a few large advertisers that have taken steps to mitigate their involvement with pirate sites, such as here (Starbucks, Costco, Walmart), here (Coca-Cola, Pepsi), and here (Levi’s), but these are few compared with the many brands that keep advertising on them.  Lowery and others do their best to shame consumer brands into awareness over this issue, but the amount of real change will depend on how much that shame translates into real involvement.  As always, it’s best to follow the money.

Public Knowledge Weighs In on Digital First Sale July 14, 2013

Posted by Bill Rosenblatt in Law, United States.

The Internet advocacy group Public Knowledge (PK) recently published Copies, Rights, & Copyrights: Really Owning Your Digital Stuff, a think-piece on first sale for the digital age, authored by Sherwin Siy, PK’s VP of Legal Affairs.  PK’s position on digital first sale, characteristically, is that users should have the same types of rights to resell, lend, and give away their digital content as they do physical items such as books and CDs.  The law in the United States is at best ambiguous on this point, but as digital content consumption goes mainstream, the need for clarity increases.

This whitepaper, which mixes the scholarly with the pragmatic, should be fun for copyright law geeks to read and pick over.  It’s PK’s contribution to what will undoubtedly be years of dialog and argument over what to do about first sale for digital content.  It mines the history of first sale-related court decisions for precedents that justify the extrapolation of first sale rights to the digital age.

The doctrine of first sale, section 109 of the Copyright Act, says that once you buy (or otherwise lawfully obtain) a copy of a copyrighted work, it’s yours to do with as you please.  As Siy explains, the law evolved through litigation to cover things like the right to repair and publicly display one’s legitimately obtained works without permission from the publisher.  In examining how first sale ought to apply in the digital age, Siy considers not just obvious use cases such as reselling or lending downloaded music files or e-books, but also less straightforward scenarios like the right to “lend” access to a database through an Internet login, or to lend a DVD rented from Netflix (which is contrary to Netflix’s terms and conditions).

One mystifying aspect of this document is that despite its June 27 publication date, it is completely silent about the recent summary judgment against the MP3 resale startup ReDigi, which was handed down in a New York federal court three months earlier.  This is tantamount to writing a piece on domestic terrorism without mentioning the Boston Marathon bombings.  One can’t help but wonder whether the omission was intentional, given that PK can’t have liked the way that case went, or if Siy is saving his ammunition for an amicus brief to be filed in ReDigi’s appeal.

At the heart of Siy’s analysis is the notion of copies made as “essential steps” in the normal usage of a copyrighted work.  If you rebind a book and put a new cover on it, for example, the law says that the result is a “derivative work” — a specific type of copy in legal terms — of the original book.   But one court decision said that you have the right to do this because of the essentiality of the derivative work to the functioning of the original.

Analogously, various copies of software or digital content that are made in the RAM of a PC or other devices can be argued to be made as essential steps in the use of the software or content.  Few people notice or care about such RAM copies.  They aren’t given much consideration in copyright law, and end-user license agreements (EULAs) usually don’t explicitly give users permission to make them.  In legal terms, they are de minimis copies.

ReDigi’s software — at least the version of it for which the court found ReDigi liable — made a copy of a music file as part of the process of making it available for resale.  It deleted the original copy in the process of making the new one, and it took steps to ensure that the user didn’t try to keep additional copies after the file was resold; but still, it made a new copy.  And the judge in the ReDigi case specifically ruled that the new copy was infringing.  In other words, the ReDigi opinion implied that the copy made for resale purposes was not essential to the normal use of digital music files.
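
Schematically, a “forward and delete” transfer looks something like the sketch below.  The function and file paths are mine, not ReDigi’s; the point to notice is that a brand new copy comes into existence in the middle step, which is the reproduction the court found infringing, deletion or no deletion:

```python
import hashlib
import os
import shutil

def forward_and_delete(seller_path, buyer_path):
    """Transfer a music file to a buyer, then delete the seller's original."""
    with open(seller_path, "rb") as f:
        fingerprint = hashlib.sha256(f.read()).hexdigest()
    shutil.copy2(seller_path, buyer_path)  # a new copy is made here --
                                           # the step the court held
                                           # infringing
    os.remove(seller_path)                 # the original disappears only
                                           # after the new copy exists
    return fingerprint  # could be logged to detect the seller keeping copies
```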

In its white paper, PK proposes short-term fixes to the copyright law.  It also discusses longer-term implications of first sale in an age where the concepts of copies of and access to content are increasingly muddy.  The short-term fix focuses on adding language to the law that makes “essential step” copies — including those made for transfer of ownership — presumptively legal.  Siy suggests that if publishers want to limit this at all, they can do so by putting terms into EULAs.

In addition, Siy proposes codifying distribution of files for transfer of ownership as part of the normal use of digital content:

“…the law could allow the lawful owners of lawful copies to make reproductions of the works necessary to the transfer of ownership of a copy to one other party, provided that the other party be the only one in possession of a copy at the end of the transfer, and that no more than one of the parties has the use of the work at one time.”

In other words, it should be legal to do this as long as the original copy disappears.  Siy adds:

“Skeptics of this approach might note that the copyright  holder would have to rely upon the goodwill of the transferring parties not to make more permanent reproductions in the course of the transfer and just keep the copy they claim to have sold to someone else. This is true. However, it is not a significant change from the state of play now. Photocopiers continue to operate without licenses from copyright holders despite the fact that they may be used for infringing reproductions.”

Here’s where his analysis starts to lose credibility.  First, the U.S. Copyright Office investigated this very issue in a 2001 report on digital first sale.  It decided that while it’s possible to build a mechanism to delete the original when making a copy — a “forward and delete” scheme similar to ReDigi’s — it would be neither prudent to trust people to do this nor practical to mandate such a mechanism.  Given that the Copyright Office is the advisor to Congress on copyright law, this report is tantamount to Congress’s last word on the subject.  (The Copyright Office hasn’t been asked to revise the report since 2001.)  Siy doesn’t mention this.

Secondly, there are two things wrong with Siy’s photocopying analogy.  First, while publishers don’t bother trying to seek licenses for photocopying from individuals, they do seek licenses from institutions via the Copyright Clearance Center that are based on an institution’s size, industry, and other factors; the vast majority of the Fortune 500, for example, pays fees for such licenses.  Second, and most fundamentally, a photocopy is not the same as the original, whereas a bit-for-bit copy of a digital file is.  This difference has to be relevant, as it was to Judge Sullivan in his ReDigi opinion.  I’d argue that the exactness of digital copies has to be at the heart of any debate on the future of first sale in the digital age, yet Siy doesn’t touch this issue.
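
The exactness point is easy to demonstrate: a digital copy is indistinguishable from its source right down to the cryptographic hash, which is precisely what the photocopier analogy misses.  A quick sketch, with made-up file names and contents:

```python
import hashlib
import shutil

# Create a stand-in "original" file for the demonstration.
with open("song.mp3", "wb") as f:
    f.write(b"\xff\xfb...pretend MP3 bytes...")

shutil.copy2("song.mp3", "song_resold.mp3")

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The hashes always match: nothing distinguishes "original" from "copy".
print(sha256("song.mp3") == sha256("song_resold.mp3"))  # True
```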

After his suggested short-term solutions, Siy surveys the emerging landscape of content distribution models that focus more on access than on copies, and he wonders — quite appropriately — whether copies are still the most appropriate measure of the usage of copyrighted works.  Looking creatively at the present and future of content business models, he says:

“As technology advances, we can see the relationship diminishing between the structure of the Copyright Act and the reality of how authors and audiences alike value and use copyrighted works. … increasingly, consumption of copyrighted works comes not through the distribution of fixed copies, or even the distribution of digital ones. People listen to music via subscriptions to Spotify, pay for online access to the New York Times and Wall Street Journal, and ‘rent’ (actually, pay for streaming access to) digital movies from Amazon. Access, not copies, seems to be more the question. We own copies now; we don’t necessarily own access. Should we be able to trade access … ? This is actually more than just a fix for the first sale doctrine; it’s a realignment of how we think of copyright and what the value of the thing is.”

Yet at the end of the day, Siy decides that focusing on copies is still the most sensible approach to laws intended to balance the interests of content creators and the public.  The alternative would be to enable content distributors to grant or deny rights on every conceivable type of content access, including, for example, the ability to flip back and forth or search through text. He says that this could lead to a future that is “at best tedious and at worst dystopian,” and uses that as a rationale to conclude that maybe focusing on copies isn’t so bad after all.

Another landmark case in the world of digital first sale, which Siy does mention, is Vernor v. Autodesk.  In my discussion of that case, I suggested that specifying such fine-grained limitations in a EULA amounts to “verbal DRM,” which in a way is worse than technological DRM because of its potential for ambiguity and thus legal risk to users.  Commercial content distributors need to make their services easy to understand and use, because the alternative is irrelevance, not to mention piracy.  Just as importantly, digital content and software developers should know that byzantine usage restrictions in EULAs without technological measures to back them up are virtually impossible to enforce; users will merely ignore and/or complain about them.

Therefore I’m not so concerned about Sherwin Siy’s “tedious” and “dystopian” future, and I continue to wonder whether there’s a better way forward than looking at copies.

Copyright and Accessibility June 19, 2013

Posted by Bill Rosenblatt in Events, Law, Publishing, Standards, Uncategorized.

Last week I received an education in the world of publishing for print-disabled people, including the blind and dyslexic.  I was in Copenhagen to speak at Future Publishing and Accessibility, a conference produced by Nota, an organization within the Danish Ministry of Culture that provides materials for the print-disabled, and the DAISY Consortium, the promoter of global standards for talking books.  The conference brought together speakers from the accessibility and mainstream publishing fields.

Before the conference, I had been wondering what the attitude of the accessibility community would be towards copyright.  Would they view it as a restrictive construct that limits the spread of accessible information, allowing it to remain in the hands of publishers that put profit first?

As it turns out, the answer is no.  The accessibility community, generally speaking, has a balanced view of copyright that reflects the growing importance of the print disabled to publishers as a business matter.

Digital publishing technology might be a convenience for normally sighted people, but for the print disabled, it’s a huge revelation.  The same e-publishing standards that promote ease of production, distribution, and interoperability for mainstream consumers make it possible to automate and thus drastically lower the cost and time to produce content in Braille, large print, or spoken-word formats.

Once you understand this, it makes perfect sense that the IDPF (promoter of the EPUB standards for e-books) and DAISY Consortium share several key members.  It was also pointed out at the conference that the print disabled constitute an audience that expands the market for publishers by roughly 10%.  All this adds up to a market for accessible content that’s just too big to ignore.

As a result, the interests of the publishing industry and the accessibility community are aligning.  Accessibility experts respect copyright because it helps preserve incentives for publishers to convert their products into versions for the print disabled.  Although more and more accessibility conversion processes can be automated, manual effort is still necessary — particularly for complex works such as textbooks and scientific materials.

Publishers, for their part, view making content accessible to the print disabled as part of the value that they can add to content — value that still can’t exist without financial support and investment.

One example is Elsevier, the world’s largest scientific publisher.  Elsevier has undertaken a broad, ambitious program to optimize its ability to produce versions of its titles for the print disabled.  One speaker from the accessibility community called the program “the gold standard” for digital publishing.  Not bad for a company that some in the academic community refer to as the Evil Empire.

This is not by any means to suggest that publishers and the accessibility community coexist in perfect harmony.  There is still a long way to go to reach the state articulated at the conference by George Kerscher, who is both Secretary General of DAISY and President of IDPF: to make all materials available to the print disabled at the same time, and for the same price, as mainstream content.

The Future Publishing and Accessibility conference was timed to take place just before negotiations begin over a proposed WIPO treaty that would facilitate the production of accessible materials and distribution of them across borders.  The negotiations are taking place this and next week in Marrakech, Morocco.  This proposed treaty is already laden with concerns from the copyright industries that its provisions will create opportunities for abuse, and reciprocal concerns from the open Internet camp that the treaty will be overburdened with restrictions designed to limit such abuse.  But as I found out in Denmark last week, there is enough practical common ground to hope that accessibility of content for the print disabled will continue to improve.
