Copyright Office Launches Inquiries into DMCA
January 4, 2016 | Posted by Bill Rosenblatt in Events, Law, United States.
Last week, the U.S. Copyright Office published Notices and Requests for Public Comment covering both parts of the Digital Millennium Copyright Act: Section 512 (limitations of copyright liability for online service providers) and Section 1201 (prohibition of DRM circumvention). Jacqueline Charlesworth, General Counsel of the Copyright Office, will be discussing both of these — and more — when she gives the keynote speech at our Copyright and Technology NYC 2016 conference on Tuesday January 19th. (Register today!)
In both of the studies, anyone can submit written comments, and after the comments have been posted, the Office will hold public discussions. Deadlines for written submissions are February 25 for the Section 1201 study and March 21 for the Section 512 study, though the Section 1201 study also has a March 25 deadline for replies to comments submitted by the February 25 deadline.
The Section 512 study is broad in scope. It invites people to submit comments on a wide range of issues, including effectiveness of the notice-and-takedown process; accessibility of the law to small entities (both copyright owners and service providers); efficacy of automated processes for both detection of alleged infringements (e.g., through fingerprinting) and processing of notices; effectiveness and fairness of the counter-notification process; the potential for replacing “notice and takedown” with “notice and staydown”; and the overall effectiveness of the law in striking a balance between the interests of copyright holders and service providers.
The study notice mentions that because of a lack of specificity in the statutory language, courts have had to interpret various portions of Section 512. As a result, we look to many court opinions to get clarity on concepts such as “red flag knowledge” of or “willful blindness” to alleged infringements, the “financial benefit” and “right and ability to control” standards for service provider liability, the precision of content identifiers or locators required in takedown notices, policies that service providers must have in place for terminating the accounts of “repeat infringers” in order to qualify for the safe harbors, and so on. The study invites people to comment on whether or not the courts have interpreted these statutory concepts appropriately. The study cites a “greatest hits” of Section 512-related litigations: Viacom v. YouTube, UMG v. Shelter Capital (Veoh), Perfect 10 v. CCBill, Disney v. Hotfile, Columbia Pictures v. Fung (IsoHunt), Lenz v. Universal (“Dancing Baby”), and various others.
A huge amount of virtual ink has been spilled about inadequacies of Section 512. Copyright owners, service providers, legal scholars, and others have all expressed frustration with it in Congressional hearings, court, the press, legal publications, blogs, etc.; the study notice summarizes the concerns that have been raised. This is the case even though at least one study has found that stakeholders would generally prefer to have the law as it is rather than not have it at all.
This study should attract a larger number and broader range of inputs on Section 512 than previous attempts to assess it, such as the March 2014 Congressional hearings. It promises to be an excellent vehicle for intelligently summarizing stakeholders’ concerns — including those of stakeholders who aren’t lawyers and can’t afford lobbyists — and for distilling conclusions into recommendations. The Copyright Office has no direct power to change the law, but as its mission is to serve as Congress’s official consultant on copyright, the findings from this study ought to carry a lot of weight in any legislative changes that Congress might consider.
The Section 1201 study, in contrast, is considerably narrower in its scope. Section 1201 has been the subject of fewer high-profile litigations than 512, especially during the last few years. The way it was originally set up, Section 1201 forbids circumvention of technical measures designed to control access to creative works, yet it defines two sets of exceptions — types or situations of circumvention that aren’t forbidden. There are eight permanent exceptions for things like security testing, cryptography research, and activities of nonprofit libraries and archives.
There is also a set of temporary exceptions that the Copyright Office defines in rulemaking actions every three years; these exceptions expire and must be renewed through fresh evidence at each rulemaking. The rulemaking process solicits public input but has typically been dominated by public policy entities who know how to “work the system” and submit exception scenarios that fit the Copyright Office’s criteria based on past experience.
The temporary exception process was complicated recently by Congress’s passage of the Unlocking Consumer Choice and Wireless Competition Act of 2014, which effectively overrides Section 1201 to make it legal for people to “circumvent technical measures” by jailbreaking or rooting mobile phones. Meanwhile, the Copyright Office has complained that the triennial rulemaking process is very resource-intensive, and some have complained that the Office’s criteria for exception scenarios have changed over the six rulemakings that it has run since Section 1201’s enactment.
Thus the main thrust of the Section 1201 study is to revisit the exception process: to re-evaluate the permanent exceptions, and to see if there is a more efficient way of soliciting and deciding on exemptions over time that still minimizes the ever-present risk of technological obsolescence. The study notice assumes that the basic idea of Section 1201 — to forbid DRM hacking except where the hack fits one of the current exceptions — will stay as is. In addition, the Section 1201 study will revisit the prohibition on “trafficking” of circumvention tools in situations where people legitimately need to have third parties help them perform activities that include acceptable circumvention.
We will have a lot of discussion about these issues at Copyright and Technology NYC 2016 on January 19th. In addition to Jacqueline Charlesworth’s keynote, we will be featuring a presentation by Jennifer Urban of Berkeley Law School and Joe Karaganis of the American Assembly at Columbia University of their original research on the efficacy of Section 512 and the processes that copyright owners and service providers have evolved over time to deal with it. This research will undoubtedly serve as important input to the Copyright Office’s Section 512 study — and our conference will be its first public airing.
In addition, we will feature two afternoon sessions on Section 512 issues featuring expert panelists: From Takedown to Staydown and Pleasures of the Harbor: DMCA Safe Harbor Eligibility. So why not register and participate in these cutting-edge copyright discussions yourself?
New Research to Be Presented at January Conference
November 30, 2015 | Posted by Bill Rosenblatt in Events, Fingerprinting, Law, United States.
I am excited to announce that Copyright and Technology NYC 2016 will feature a special presentation of new research: Notice and Takedown in Everyday Practice: Robots, Artisans, and the Fight to Protect Copyrights, Expression and Competition on the Internet. This is a landmark study on how the Notice and Takedown provisions of Section 512 of U.S. copyright law work in practice. It is the result of many interviews with copyright holders, service providers, and copyright enforcement services, as well as analysis of large numbers of takedown notices submitted to the Chilling Effects database. The authors of the study are Jennifer Urban and Brianna Schofield of the Samuelson Law, Technology & Public Policy Clinic at Berkeley Law, and Joe Karaganis of The American Assembly at Columbia University. The talk at Copyright and Technology NYC 2016 on Tuesday, January 19th will be its first public presentation.
Until now, very little empirical research has been done on the effectiveness of the DMCA’s notice and takedown provisions, both in addressing copyright infringement and in providing due process for notice targets. This talk will summarize research comprising three studies that draw back the curtain on notice and takedown: it gathers information on how online service providers and rightsholders experience and practice notice and takedown, examines over 100 million notices generated during a six-month period, and looks specifically at a subset of those notices that were sent to Google Image Search.
The findings suggest that whether notice and takedown “works” is highly dependent on who is using it and how it is practiced, though all respondents agreed that the Section 512 safe harbors remain fundamental to the online ecosystem. Perhaps surprisingly, a large portion of service providers still receive relatively few notices and process them by hand. For some major players, however, the scale of online infringement has led to automated systems that leave little room for human review or discretion, and in a few cases notice and takedown has been abandoned in favor of techniques such as content filtering. Further, a surprisingly high percentage of notices raises questions about their validity. The findings strongly suggest that the notice and takedown system is under strain but that there is no “one size fits all” approach to improving it. The study concludes with suggestions of various targeted reforms and best practices.
Please come and see this important research presentation on January 19th — register today! Early bird registration ends December 11.
Crowdsourced Cover Collections: A Copyright Conundrum
November 17, 2015 | Posted by Bill Rosenblatt in Law, Music, Services.
A friend recently introduced me to a website called Cover Me, in which contributors write about cover versions of songs. There are artist-cover anthologies, lists of “Five Good Covers” of oft-covered songs, and entire albums’ worth of cover versions. Accompanying text describes the songs, the original artists, and the various cover versions. Sometimes the “art” is in finding cover versions of obscure songs; other times it’s in selecting among many cover versions of better-known songs and putting together an interesting collection.
Entire albums featured on this site include Led Zeppelin III, XTC’s Skylarking, and several Ramones albums. The site is run by Ray Padgett, a Brooklynite whose day job is at a small PR/social media firm.
Now here’s the conundrum. All of the song titles link to audio of the cover versions. Some are simply links to files on SoundCloud, YouTube, or other hosting sites, while others are links to iTunes or Amazon purchase pages. But many others are downloadable MP3s hosted on the site, with embedded MP3 players for streaming the music. Most of these tracks are available on YouTube; some are available on Spotify.
My friend told me that the site will take the MP3 files down when requested, although it does not have a written copyright policy. The site does not carry ads and has no apparent source of revenue; the contributors are volunteers.
What do you think? Fair use or not?
Partnership with Stroz Friedberg
November 3, 2015 | Posted by Bill Rosenblatt in Law.
As some of you know, over the past several years I have occasionally served as an expert witness and consultant in litigations on digital media, copyright, and related issues. I’m now thrilled to announce a partnership between my firm, GiantSteps Media Technology Strategies, and Stroz Friedberg. Stroz Friedberg is a global leader in cybersecurity, digital forensics, investigations and risk management, as well as IP litigation consulting and IP strategy and analytics. They have been involved in such high-profile and diverse matters as Oracle USA, Inc. et al. v. Rimini Street, Inc. et al. (copyright case), Paul Ceglia vs. Mark Zuckerberg (Facebook founder’s contract dispute), Enron’s “Nigerian Barge” trial (as the government’s expert), and Silicon Knights, Inc. v. Epic Games, Inc. (copyright case).
I had the good fortune to work side by side recently with some experts from Stroz Friedberg on a couple of different cases, and a partnership between my firm and theirs was a natural next step. Stroz Friedberg’s depth of knowledge, intelligence, diligence, and effectiveness are very impressive, and they are a pleasure to work with. We will be supporting each other in future engagements as needs arise, enabling GiantSteps and Stroz Friedberg to complement our mutual areas of expertise and bring a broader array of capabilities to our clients.
Forbes: Is Ad Blocking the New Piracy?
September 25, 2015 | Posted by Bill Rosenblatt in Economics, Law.
My latest column in Forbes takes Apple’s decision to add ad-blocking primitives to iOS 9 as an occasion to look at the fast-growing phenomenon of blocking ads in web browsers, and specifically to compare it to online copyright infringement.
Both developments lead to revenue losses for content publishers. Both are enabled by technological tools that make them easy and (in most cases) free even for non-tech-savvy consumers. And both have engendered cottage industries of technologies that attempt to combat the phenomena. The article deals with the revenue models for such companies and the industry factions that are lining up on the sides of these debates.
The upshot is that ad-blocking is not a “Big Media vs. Big Tech” issue; it is more accurately described as a “Big Media and Some of Big Tech vs. Other Big Tech” issue. In particular, Google — which earns over 90% of its revenue from ads — is not a big fan of ad blocking. The trade associations that are addressing the ad-blocking issue appear to be learning lessons from the Copyright Wars by trying to establish best practices for online advertising that, if adopted, could avoid technological arms races.
Ninth Circuit Calls for Takedown Notices to Address Fair Use
September 15, 2015 | Posted by Bill Rosenblatt in Fingerprinting, Law, Music.
This past Monday’s ruling from the Ninth Circuit Appeals Court in Lenz v. Universal Music Group, a/k/a the Dancing Baby Video case, is being hailed as an important one in establishing the role of fair use in the online world. The case involved a common enough occurrence: a homemade video clip of someone’s child, with music (Prince’s “Let’s Go Crazy”) in the background, posted to YouTube.* UMG sent a takedown notice, Stephanie Lenz sent a counter-notice, and an eight-year legal battle ensued. Monday’s ruling was not a decision on the defendant’s liability but merely a denial of summary judgment, meaning that the case will now go to trial.
The three-judge panel produced two important holdings: first, that fair use is really a user’s right, and not just an affirmative defense to a charge of infringement. The second is that copyright holders have to take fair use into account in issuing DMCA takedown notices. As we’ll discuss here, this will have some effect on copyright holders’ ability to use automated means to enforce copyright online.
Under the DMCA (Section 512 of U.S. copyright law), online service providers can avoid copyright liability if they respond to notices requesting that allegedly infringing material be taken down. Notices have to comply with legal requirements, one of which is a good faith belief that the user who put the work up online was not authorized to do so. This court now says that fair use is not merely a defense to a charge of infringement — to be asserted after the copyright holder files a lawsuit — but is actually a form of authorization.
It follows that the copyright holder must profess a good faith belief that the user wasn’t making a fair use of the work in order for a takedown notice to be valid. The court also held that this good faith belief can be “subjective” rather than based on objective facts; but it’s ultimately up to a jury to decide whether the complainant’s basis for its good faith belief is valid.
The question for us here is how this ruling will affect the technologies and automated processes that many copyright owners use to police their works online, often through copyright monitoring services like MarkMonitor, Muso, Friend MTS, Entura, and various others. These services use fingerprinting and other techniques to identify content online, create takedown notices from templates, and send them — many thousands per day — to online services. Page 19 of the Lenz decision contains a hint:
“We note, without passing judgment, that the implementation of computer algorithms appears to be a valid and good faith middle ground for processing a plethora of content while still meeting the DMCA’s requirements to somehow consider fair use. . . . For example, consideration of fair use may be sufficient if copyright holders utilize computer programs that automatically identify for takedown notifications content where: (1) the video track matches the video track of a copyrighted work submitted by a content owner; (2) the audio track matches the audio track of that same copyrighted work; and (3) nearly the entirety . . . is comprised of a single copyrighted work. . . . Copyright holders could then employ [humans] to review the minimal remaining content a computer program does not cull.” (Internal citations and quotation marks omitted.)
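The court’s suggested middle ground amounts to a simple three-prong routing rule: auto-generate a notice only when video, audio, and “nearly the entirety” all match, and send everything else to humans. Here is a minimal, hypothetical Python sketch of that rule; the class, function names, and the 95% threshold are my own illustrative inventions, not taken from any actual monitoring service or from the opinion:

```python
# Hypothetical sketch of the three-prong screen the Lenz majority describes.
# A fingerprint match is routed to an automatic takedown notice only if all
# three prongs hold; everything else goes to human review.

from dataclasses import dataclass

@dataclass
class MatchResult:
    video_matches: bool      # (1) video track matches a registered work
    audio_matches: bool      # (2) audio track matches that same work
    matched_fraction: float  # share of the upload made up of that one work

def route(match: MatchResult, threshold: float = 0.95) -> str:
    """Return 'auto_notice' if the match passes all three prongs,
    else 'human_review' per the court's suggested middle ground."""
    if (match.video_matches
            and match.audio_matches
            and match.matched_fraction >= threshold):  # (3) "nearly the entirety"
        return "auto_notice"
    return "human_review"

print(route(MatchResult(True, True, 0.99)))   # a clear-cut full-copy upload
print(route(MatchResult(False, True, 0.20)))  # e.g. background music in a home video
```

Under this screen, a clip like the Dancing Baby video — original footage with a partial audio match — would fall through to the human reviewers the court mentions rather than trigger an automatic notice.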
At the same time, another clue lies in pp. 31-32, in a footnote to Judge Milan Smith’s partial dissent:
“The majority opinion implies that a copyright holder could form a good faith belief that a work was not a fair use by utilizing computer programs that automatically identify possible infringing content. I agree that such programs may be useful in identifying infringing content. However, the record does not disclose whether these programs are currently capable of analyzing fair use. Section 107 specifically enumerates the factors to be considered in analyzing fair use. These include: ‘the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes’; ‘the nature of the copyrighted work’; ‘the amount and substantiality of the portion used in relation to the copyrighted work as a whole’; and ‘the effect of the use upon the potential market for or value of the copyrighted work.’ 17 U.S.C. § 107. For a copyright holder to rely solely on a computer algorithm to form a good faith belief that a work is infringing, that algorithm must be capable of applying the factors enumerated in § 107.”
To follow this ruling, takedown notices will now presumably have to contain language that describes the copyright holder’s good faith belief that the user who posted the file did not have a fair use right. This can be a “subjective” basis, and the source of that information cannot “solely” be a “computer algorithm.”
It is, of course, impossible for any computer algorithm to determine whether a copy of a file was made by fair use; there is no such thing as a “fair use deciding machine.” But that’s not what’s required here — only evidence that some (unspecified) portion of the four fair use factors was not met, other than “because I said so.” Two of the four factors are easy: “the nature of the copyrighted work” ought to be self-evident to the owner of the copyright, and today’s widely-used content recognition tools can determine whether “the amount and substantiality of the portion used” was the entire work. The majority in Lenz suggested that this latter factor “may be sufficient . . . for consideration of fair use.” Apart from that, for example, the fact that a file appears on a website touting “Free MP3 downloads!” and featuring banner ads could be cited as evidence of an “effect of the use upon the potential market for or value of the copyrighted work” or of “the purpose and character of the use.”
In other words, some of the characterizations of a work as “not fair use” that are often written into lawsuit complaints (written by lawyers) may have to find their way into takedown notices (generated automatically by technology). As a practical matter, copyright monitoring services may want to produce takedown notices with more situation-specific information in order to pass the non-fair-use test — such as characterizations of the online service or other circumstances in which works are found. This could require a greater number of different takedown notice templates and more effort to populate them with specifics before sending them to online services — yet the processes still ought to be automatable.
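To illustrate what such situation-specific notice generation might look like, here is a hedged Python sketch. The template wording, factor statements, and field names are all invented for illustration and carry no legal weight; a real monitoring service would of course use counsel-vetted language:

```python
# Hypothetical sketch of populating takedown-notice templates with
# situation-specific non-fair-use language. All wording is invented
# for illustration only.

NOTICE_TEMPLATE = (
    "To: {service}\n"
    "The file at {url} reproduces the work '{work}'.\n"
    "Good faith basis for concluding the posting is not a fair use:\n"
    "{factor_statements}\n"
)

# Canned statements keyed to observable circumstances, loosely tracking
# the Section 107 factors a notice might cite.
FACTOR_STATEMENTS = {
    "entire_work": "- The posting comprises substantially the entire work.",
    "ad_supported_site": "- The hosting page carries advertising, indicating "
                         "a commercial purpose and harm to the market for the work.",
}

def build_notice(service: str, url: str, work: str, circumstances: list) -> str:
    """Fill the template with the factor statements that fit this sighting."""
    statements = "\n".join(FACTOR_STATEMENTS[c] for c in circumstances)
    return NOTICE_TEMPLATE.format(service=service, url=url, work=work,
                                  factor_statements=statements)

notice = build_notice("ExampleHost", "http://example.com/track.mp3",
                      "Let's Go Crazy", ["entire_work", "ad_supported_site"])
print(notice)
```

The point of the sketch is that adding per-circumstance factor statements is a small, mechanical extension to template-driven notice generation — which is why the extra effort the ruling demands still seems automatable.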
The upshot of the Lenz decision, then, is that copyright holders may have to go to somewhat more effort to generate automated takedown notices under the DMCA that will survive a court challenge. Just how much more effort and how much more verbiage in notices is necessary will be a subject for the Lenz trial and future litigations. But today’s basic paradigm of copyright monitoring services using content recognition algorithms and other technological tools to automate enforcement processes is likely to continue, largely unchanged.
*I had a very similar experience two years ago. I took a video of my daughter’s dance recital on my smartphone from the audience, and I posted it on YouTube under a private URL known only to her uncles and grandparents. UMG issued a takedown notice — on one of the three one-minute-long song samples used in that dance routine. I tried filing a counter-notice, which UMG denied; so I gave up and emailed the clip to the relatives. I suspect that no human ever analyzed this clip: the Jennifer Lopez track that UMG complained of was one of two tracks owned by UMG, while the other, a techno track by Basement Jaxx, is one that services like Shazam have a hard time recognizing.
The European Commission this past week published the first in a series of documents that mark its progress toward the objective of creating a Digital Single Market for all EU member states. The 20-page Communication, published last Wednesday, lays out a series of steps that Brussels will take over the next two years, including “[l]egislative proposals for a reform of the copyright regime” starting this calendar year.
The Communication describes two particular areas of focus within the realm of copyright. Both reflect input from lobbyists on both sides of the issue — the media and tech industries — and show that the EC wants to examine factors that trade off the concerns of both sides.
One area of focus is a pain point for anyone trying to launch digital media businesses in Europe: the lack of cross-border licensing opportunities and content portability. A would-be digital media distributor in many cases has to negotiate a separate content license in each of the countries in which it wants to operate. This process is especially painful for startups with limited resources. As a result, smaller countries in particular get legitimate content services years later than larger ones, if they get them at all.
This leads to a related problem: the use of geoblocking technology to confine services to single countries where it isn’t strictly necessary. Residents of smaller countries often resort to virtual private networks (VPNs) to obtain fake IP addresses in countries in which licensed services are offered. The EC is looking to streamline cross-border licensing and make it easier to carry services accessed on portable devices across borders; at the same time, it hopes to eliminate unnecessary geoblocking.
Cross-border licensing within Europe has been an issue for many years, certainly since I consulted for the EC’s Media and Information Society Directorate back in the mid-to-late 2000s. There are two obstacles to getting such schemes enacted throughout Europe. The first is that regulators in Brussels don’t hear about it very much: the companies that can afford to send lobbyists to Brussels (and join trade associations) tend to also be able to afford enough lawyers to go around all the member states for licensing — and to benefit from existing consumer demand, because their services are already known. In other words, startups tend to be shut out of this discussion.
The second problem is that smaller countries’ culture ministries see pan-European licensing as counterproductive. Their jobs are to promote their countries’ local content, yet cross-border licensing often makes it easier for bigger EU member states (not to mention the US) with better-known content to license it in but not to license their content out. The focus on reducing the use of geoblocking ought to help ameliorate this concern, because it helps consumers’ money fall into the hands of local rights holders instead of VPN operators.
The other major area of focus is to address online infringement. The EC is looking at two particular approaches to this. One is a “follow the money” approach to enforcement that focuses on “commercial-scale infringements,” presumably as opposed to small-scale activities by individuals. The other is “clarifying the rules on the activities of intermediaries in relation to copyright-protected content.” Although the language in the Communication is high-level and non-specific, the focus is not likely to drift far beyond the existing safe harbors for online service providers that arise out of the EU e-Commerce Directive, which are roughly equivalent to the DMCA in the US.
Here again, the Communication reflects tradeoffs between the concerns of the content and tech industries. The EC wants to look at “new measures to tackle illegal content on the Internet, with due regard to their impact on the fundamental right to freedom of expression and information, such as rigorous procedures for removing illegal content while avoiding the take down of legal content[.]” In other words, the EC probably wants to focus primarily on notice and takedown, to explore ways to resolve the tension between, on the one hand, the media industry’s concerns about overly onerous notice requirements and the lack of “takedown and staydown,” and on the other hand, online service providers’ concerns about abuse of the notice process.
The Commission expects to begin assessments of these areas this year. This Communication is the start of a multi-year journey toward any actual changes in national laws, and the first glimpse of the parameters of those changes. Communications are not legally binding; they lead to proposals for new laws and changes in existing laws. The next step is one or more Directives, which are legally binding on EU member states, though member states are free to implement them in ways that make sense within their own bodies of laws — a process that itself can take several years. Unifying several aspects of digital life in Europe through this long and complex process will be a challenge, to say the least.
Improving Copyright’s Public Image
September 3, 2014 | Posted by Bill Rosenblatt in Law, United States.
The Copyright Society of the USA established the Donald C. Brace Memorial Lecture over 40 years ago as an opportunity to invite a distinguished member of the copyright legal community to create a talk to be given and published in the Society’s journal. The list of annual Brace Lecture givers is a Who’s Who of the American copyright community.
Last year’s lecture, which made it into the latest issue of the Journal of the Copyright Society, is well worth a read. It was given by Peter Menell, a professor at Berkeley Law School who co-directs the Berkeley Center for Law and Technology. It’s called This American Copyright Life: Reflections on Re-Equilibrating Copyright for the Internet Age. Since giving the lecture at Fordham Law School in NYC, Menell has been touring it (it has music and visual components) around various law schools and conferences.
Two things about Menell’s talk/paper caught my attention. First was this sentence, early on in the paper, regarding his love for both copyright and technology during the outbreak of the Copyright Wars in the 2000s: “I was passionately in the middle, perhaps the loneliest place of all.” Second was his focus on the public reputation of copyright and how it needs to be rehabilitated.
Menell’s basic thesis is that no one thought much about copyright when the limitations on copying media products were physical rather than legal; but when the digital age came along, the reason why you might not have made copies of your music recordings was because it was possibly against the law rather than because it took time and effort. He says: “‘My Generation’ did not see copyright as an oppressive regime. We thrived in ignorant bliss well below copyright’s enforcement radar and were inspired by content industry products. The situation could not be more different for adolescents, teenagers, college students, and netizens today. Many perceive copyright to be an overbearing constraint on creativity, freedom, and access to creative works.” (The latter category apparently includes Menell’s own kids.) In other words, copyright law has appeared in the public consciousness as a limiter of people’s behavior instead of as the force that enables creative works to be made.
Here’s a figure from his paper that captures the decline in copyright’s public approval:
The icons in the figure refer respectively to the 1984 Sony v. Universal “Betamax” Supreme Court decision, the 2001 Ninth Circuit Napster decision, and the defeat of the Stop Online Piracy Act in 2012 amid Silicon Valley-amplified public pressure.
Compare this with a slide from a guest lecture I gave at Rutgers Law School last year:
Menell provides a number of personal reflections about his engagement with technology and copyright over the years, including a story about how he and a friend created a slide show for their high school graduation ceremony with spliced-up music selections keyed to slide changes via a sync track on a reel-to-reel tape recorder. This combination of hack and mashup ought to establish Menell’s techie cred. In fact, the “live” version of the paper is itself a mashup of audio and video items.
He takes the reader through the history of the dramatic shift in public attitudes towards copyright after the advent of Napster. My favorite part of this is a fascinating vignette of copyleft icon Fred von Lohmann, then of the Electronic Frontier Foundation (EFF), stating on a conference panel in 2002 that many users of peer-to-peer file-sharing networks were probably infringing copyrights and that the most appropriate legal strategy for the media industry ought to be to sue them, instead of suing the operators of P2P networks as the RIAA had done with Napster. Menell’s reaction, including his own incredulity at the time that “EFF did not use its considerable bully pulpit within the post-Napster generations to encourage ethical behavior as digital content channels emerged,” is just as fascinating.
(Of course, the RIAA did begin doing just that — suing individuals — the very next year. Five years after that, the EFF posted an article that said “suing music fans is no answer to the P2P dilemma.” Fred von Lohmann was still there.)
He also provides examples of the general online public’s current attitudes towards copyright, which has gone long past “Big Media is evil”; he says that “the post-Napster generations possess the incredible human capacity for rationalizing their self-interest” by their implications that individual content creators should not get paid because they are “lazy” or “old-fashioned” or even “spoiled” — even while he admits that the sixteen-year-old Peter Menell might have fallen prey to the same sad rationalizations.
In the rest of the paper, Menell lays out a number of suggestions for how copyright law could change in order to make it more palatable to the public. These include what for me is the biggest breath of fresh air in the article: some of the only serious suggestions I’ve ever seen from copyright academics about using technology as an enabler of copyright rather than as its natural enemy. He touts the value of creating searchable databases of rights holder information and giving copyright owners the opportunity to deposit fingerprints of their content when they register their copyrights, in order to help prove and trace ownership. He also mentions encryption and DRM as means of controlling infringement that have succeeded in the video, software, and game industries, but he does not claim that they are or should be part of the legal system.
Menell also makes several suggestions about how to tweak the law itself to make it a better fit to the digital age. One of these is to establish different tiers of liability for individuals and corporations. He says that the threat of massively inflated statutory damages for copyright infringement has failed to act as a deterrent and that courts have paid little attention to the upper limits of damages anyway. Instead he calls for a realignment of enforcement efficiency, penalties, and incentives for individuals: “Copyright law should address garden variety file-sharing not through costly and complex federal court proceedings but instead through streamlined, higher detection probability, low-fine means — more in the nature of parking tickets, with inducements and nudges to steer consumers into better (e.g., subscription) parking plans.”
Another topic in Menell’s paper that brought a smile to my face was his call for “Operationalizing Fair Use” by such means as establishing “bright-line ‘fair use harbors’ to provide assurance in particular settings.” (I’ve occasionally said similar things and gotten nothing but funny looks from lawyers on all sides of the issue.)
One suggestion he makes along these lines is to establish a compulsory license, with relatively fixed royalties, for music used in remixes and mashups. That is, anyone who wants to use more than a tiny sample of music in a remix or mashup should pay a fee established by law (as opposed to by record labels or music publishers) that gets distributed to the appropriate rights holders. The idea is that such a scheme would strike a pragmatic and reasonable balance between rampant uncompensated use of content in remixes and unworkable (not to mention creativity-impeding) attempts to lock everything down. The U.S. Copyright Office would be tasked with figuring out suitable schemes for dividing up revenue from these licenses.
It goes without saying that establishing any scheme of that type will involve years and years of lobbying and haggling to determine the rates. Even then, several factions aren’t likely to be interested in this idea in principle. Although musical artists surely would like to be compensated for the use of their material in remixes, many artists are not (or are no longer) in favor of more compulsory licenses and would rather see proper compensation develop in the free market. And the copyleft crowd tends to view all remixes and mashups as fair use, and therefore not subject to royalties at all.
In general, Menell’s paper calls for changes to copyright law that are designed to improve its public image by making it seem more fair to both consumers and content creators. Changing behavioral norms in the online world is perhaps better done in narrowly targeted ways than broadly, but the paper ought to be a springboard for many more such ideas in the future.
President Obama recently signed into law a bill that allows people to “jailbreak” or “root” their mobile phones in order to switch wireless carriers. The Unlocking Consumer Choice and Wireless Competition Act was that rarest of rarities these days: a bipartisan bill that passed both houses of Congress by unanimous consent. Copyleft advocates such as Public Knowledge see this as an important step towards weakening the part of the Digital Millennium Copyright Act that outlaws hacks to DRM systems, known as DMCA 1201.
For those of you who might be scratching your heads wondering what jailbreaking your iPhone or rooting your Android device has to do with DRM hacking, here is some background. Last year, the U.S. Copyright Office declined to renew a temporary exception to DMCA 1201 that had made it legal to unlock mobile phones. A petition to the president to reverse the decision garnered over 100,000 signatures, but as he has no power to do this, I predicted that nothing would happen. I was wrong: Congress did take up the issue, and the resulting legislation breezed through both houses last month.
Around the time of the Copyright Office’s ruling last year, Zoe Lofgren, a Democrat who represents a chunk of Silicon Valley in Congress, introduced a bill called the Unlocking Technology Act that would go considerably further in weakening DMCA 1201. This legislation would sidestep the triennial rulemaking process in which the Copyright Office considers temporary exceptions to the law; it would create permanent exceptions to DMCA 1201 for any hack to a DRM scheme, as long as the primary purpose of the hack is not an infringement of copyright. The ostensible aim of this bill is to allow people to break their devices’ DRMs for such purposes as enabling read-aloud features in e-book readers, as well as to unlock their mobile phones.
DMCA 1201 was purposefully crafted to disallow any hacks to DRMs even if the resulting uses of content are noninfringing. There were two rationales for this. Most basically, if you could hack a DRM, then you would be able to get unencrypted content, which you could use for any purpose, including emailing it to your million best friends (which would have been a real consideration in the 1990s when the law was created, as BitTorrent trackers and cyberlockers weren’t around yet).
But more specifically, if it’s OK to hack DRMs for noninfringing purposes, then potentially sticky questions about whether a resulting use of content qualifies as fair use must be judged the old-fashioned way: through the legal system, not through technology. And if you are trying to enforce copyrights, once you fall through what I have called the trap door into the legal system, you lose: enforcement through the traditional legal system is massively less effective and efficient than enforcement through technology. The media industry doesn’t want judgments about fair use from hacked DRMs to be left up to consumers; it wants to reserve the benefit of the doubt for itself.
The tech industry, on the other hand, wants to allow fair uses of content obtained from hacked DRMs in order to make its products and services more useful to consumers. And there’s no question that the Unlocking Technology Act has aspects that would be beneficial to consumers. But there is a deeper principle at work here that renders the costs and benefits less clear.
The primary motivation for DMCA 1201 in the first place was to erect a legal backstop for DRM technology that wasn’t very effective — such as the CSS scheme for DVDs, which was the subject of several DMCA 1201 litigations in the 2000s. The media industry wanted to avoid an “arms race” against hackers. The telecommunications industry — which was on the opposite side of the negotiating table when these issues were debated in the mid-1990s — was fine with this: telcos understood that with a legal backstop against hacks in place, they would have less responsibility to implement more expensive and complex DRM systems that were actually strong; furthermore, the law placed accountability for hacks squarely on hackers, and not on the service providers (such as telcos) that implemented the DRMs in the first place. In all, if there had to be a law against DRM hacking, DMCA 1201 was not a bad deal for today’s service providers and app developers.
The problem with the Unlocking Technology Act is in the interpretation of phrases in it like “primarily designed or produced for the purpose of facilitating noninfringing uses of [copyrighted] works.” Most DRM hacks that I’m familiar with are “marketed” with language like “Exercise your fair use rights to your content” and disclaimers — nudge, nudge, wink, wink — that the hack should not be used for copyright infringement. Hacks that developers sell for money are subject to the law against products and services that “induce” infringement, thanks to the Supreme Court’s 2005 Grokster decision, so commercial hackers have been on notice for years about avoiding promotional language that encourages infringement. (And of course none of these laws apply outside of the United States.)
So, if a law like the Unlocking Technology Act passes, then copyright owners could face challenges in getting courts to find that DRM hacks were not “primarily designed or produced for the purpose of facilitating noninfringing uses[.]” The question of liability would seem to shift from the supplier of the hack to the user. In other words, this law would render DMCA 1201 essentially toothless — which is what copyleft interests have wanted all along.
From a pragmatic perspective, this law could lead non-dominant retailers of digital content to build DRM hacks into their software for “interoperability” purposes, to help them compete with the market leaders. It’s particularly easy to see why Google would want this, as it has zillions of users but has struggled to get traction for its Google Play content retail operations. Under this law, Google could add an “Import from iTunes” option for video and “Import from Kindle/Nook/iBooks” options for e-books. (And once one retailer did this, all of the others would follow.) As long as those “import” options re-encrypted content in the native DRM, there shouldn’t be much of an issue with “fair use.” (There would be plenty of issues about users violating retailers’ license agreements, but that would be a separate matter.)
This in turn could cause retailers that use DRM to help lock consumers into their services to implement stronger, more complex, and more expensive DRM. They would have to use techniques that help thwart hacks over time, such as reverse engineering prevention, code diversity and renewability, and sophisticated key-hiding techniques such as white-box encryption. Some will argue that making lock-in more of a hassle will cause technology companies to stop trying. This argument is misguided: first, lock-in is fundamental to theories of markets in the networked digital economy and isn’t likely to go away over the costs of DRM implementation; second, DRM is far from the only way to achieve lock-in.
The other question is whether Hollywood studios and other copyright owners will demand stronger DRM from service providers that have little motivation to implement it. The problem, as usual, is that copyright owners demand the technology (as a condition of licensing their content) but don’t pay for it. If there’s no effective legal backstop to weak DRM, then negotiations between copyright owners and technology companies may get tougher. However, this may not be an issue particularly where Hollywood is concerned, since studios tend to rely more heavily on terms in license agreements (such as robustness rules) than on DMCA 1201 to enforce the strength of DRM implementations.
Regardless, the passage of the mobile phone unlocking legislation has led to increased interest in the Unlocking Technology Act, such as the recent panel that Public Knowledge and other like-minded organizations put on in Washington. Rep. Lofgren has succeeded in getting several more members of Congress to co-sponsor her bill. The trouble is, all but one of them are Democrats (in a Republican-controlled House of Representatives not exactly known for cooperation with the other side of the aisle), and the Democratic-controlled Senate has not introduced parallel legislation. This means that the fate of the Unlocking Technology Act is likely to be similar to that of past attempts to do much the same thing: the Digital Media Consumers’ Rights Act of 2003 and the Freedom and Innovation Revitalizing United States Entrepreneurship (FAIR USE) Act of 2007. That is, it’s likely to go nowhere.
Registration for Copyright and Technology London 2014 is now live. An earlybird discount is in place through August 8. Space is limited and we came close to filling the rooms last time, so please register today!
I am particularly excited about our two keynote speakers — two of the most important copyright policy officials in the European Union and United States respectively. Maria Martin-Prat will discuss efforts to harmonize aspects of copyright law throughout the 28 EU Member States, while Shira Perlmutter will provide an update on the long process that the US has started to revise its copyright law.
We have made one change to the Law and Policy track in the afternoon: we’ve added a panel called The Cloudy Future of Private Copying. This panel will deal with controversies in the already complex and often confusing world of laws in Europe that allow consumers to make copies of lawfully-obtained content for personal use.
The right of private copying throughout Europe was established in the European Union Copyright Directive of 2001, but the EU Member States’ implementations of private copying vary widely — as do the levies that makers of consumer electronics and blank media have to pay to copyright collecting societies in many countries on the presumption that consumers will make private copies of copyrighted material. Private copying was originally intended to apply to such straightforward scenarios as photocopying text materials or taping vinyl albums onto cassette. But nowadays, cloud storage services, cyberlockers, and “cloud sync” services for music files — some of which allow streaming from the cloud or access to content by users other than those who uploaded it — are raising difficult questions about where the boundaries of private copying lie.
The result is a growing amount of controversy among collecting societies, consumer electronics makers, retailers, and others; meanwhile the European Commission is seeking ways to harmonize the laws across Member States amid rapid technological change. Our panel will discuss these issues and consider whether there’s a rational way forward.
We have slots open for a chair and speakers on this panel; I will accept proposals through July 31. Please email your proposal(s) with the following information:
- Speaker’s name and full contact information
- Chair or speaker request?
- Description of speaker’s experience or point of view on the panel subject
- Brief narrative bio of speaker
- Contact info of representative, if different from speaker*
Finally, back over here across the Atlantic, I’ll note an interesting new development in the Aereo case that hasn’t gotten much press since the Supreme Court decision in the case a couple of weeks ago. Aereo had claimed that it had “bet the farm” on a court ruling that its service was legal and that “there is no Plan B,” implying that it didn’t have the money to pay for licenses with television networks. Various commentators have noted that Aereo wasn’t going to have much leverage in any such negotiations anyway.
As a result of the decision, Aereo has changed tactics. In the Supreme Court’s ruling, Justice Breyer stated that Aereo resembled a cable TV provider and therefore could not offer access to television networks’ content without a license. Now, in a filing with the New York district court that first heard the case, Aereo is claiming that it should be entitled to the statutory license for cable TV operators under section 111 of the copyright law, with royalty rates that are spelled out in 17 U.S.C. § 111(d)(1).
In essence, Aereo is attempting to rely on the court for its negotiating leverage, and it has apparently decided that it can become a profitable business even if it has to pay the fees under that statutory license. Has Barry Diller — or another investor — stepped in with the promise of more cash to keep the company afloat? Regardless, in pursuing this tactic, Aereo is simply following the well-worn path of working litigation into a negotiation for a license to intellectual property.
*Please note that personal confirmation from speakers themselves is required before we will put them on the program.