Five years ago, the U.S. Copyright Office commenced a study on Section 512 of the copyright law, the section that defines limitations of copyright liability (“safe harbors”) for online service providers, arguably the most important part of American copyright law in the digital age. Last week the Office released the results of the study in a report over 200 pages long. Copyright owners are likely to be happier than online services with the Office’s recommendations in the Report for changes to the law.
The Copyright Office is the copyright advisor to Congress, which enacted Section 512 in 1998 as part of the Digital Millennium Copyright Act. The Office describes Section 512 as an attempt to balance the interests of, and encourage cooperation among, copyright owners and service providers regarding copyright protection and liability in the then-emerging Internet age. It describes the intent of the Section 512 study as an assessment of whether the law has succeeded in striking the balance of interests that Congress intended, and if not, to make recommendations on how to restore the balance. To do this, the Office looked at several sources of information: statements from Congress during the run-up to the enactment of Section 512; the statute itself; the many federal court decisions in cases involving Section 512; and over 90,000 written submissions from stakeholders during the course of the study as well as three public roundtables.
The Office draws two headline conclusions in the study: that as the technology and industry landscapes have changed over the past twenty years, the balance of interests that the law is supposed to maintain has gone askew in favor of online services; and that the cooperation and compromise that Congress envisioned have hardly happened at all.
First, it’s worth mentioning that the study should be a must-read for anyone looking to bone up on Section 512 jurisprudence; it’s a detailed grand tour of all of the relevant federal court decisions as well as the legislative history, organized according to each provision of the law. The following is a summary of key issues raised in Section 512 and the recommendations that the Office makes about them in the study.
Boundaries of the Safe Harbors
Section 512’s main purpose is to provide safe harbors for online services that offer four different kinds of functionality and to set out the requirements for eligibility for each of the four. The four types of functionality are transitory network communications for transmission or routing of material (conduits, § 512(a)), temporary storage (caching, § 512(b)), online storage of users’ content (§ 512(c)), and information location tools such as directories or indexes (§ 512(d)).
The Report considers the boundaries of each of the safe harbors and whether they should apply to certain services or their features. In the late-1990s world of dialup modems and AOL, online services were much simpler than they are today: they were simply conduit ISPs, or they were directories (like the original Yahoo!), or they were file storage and sharing services (like the original Napster); so they were easy to put into the four buckets. Since then, services have become increasingly sophisticated and multifarious; they have argued that they are eligible for more than one safe harbor, or that more and more of their features should be lumped into each of the safe harbors.
This has been particularly true of § 512(c) for online storage, which shields service providers from liability “by reason of the storage at the direction of the user” of copyrighted content. Courts have interpreted “by reason of the storage” to mean any automated feature related to content hosting, such as transcoding, streaming, and recommendations of related content. The Office disagrees with this and suggests that the definition of the § 512(c) safe harbor should be narrowed, in line with what it claims was original Congressional intent.
The Office also calls for Congress to revisit aspects of the boundaries of other safe harbors–such as § 512(b) for caching, where the court in Field v. Google (2006) ruled that storage for 14 to 20 days qualified as “temporary,” or § 512(a) for conduits, where plain-vanilla ISPs are no longer the only kinds of services that provide connectivity.
Repeat Infringer Policy
To qualify for any of the safe harbors, online services have to meet threshold eligibility requirements in § 512(i) that include a “policy that provides for the termination in appropriate circumstances of subscribers and account holders of the service provider’s system or network who are repeat infringers,” known as the “repeat infringer policy” requirement. This got a lot of attention in the Report because of its vagueness in the statute: it doesn’t specify whether “infringer” means an adjudicated infringer or someone who has merely been accused of infringement, and it doesn’t define what “appropriate circumstances” or “repeat” means.
In its recommendations, the Office sides with courts in recent cases such as BMG v. Cox and determines that “repeat infringer” should mean an alleged infringer, not an adjudicated infringer. It concludes that court cases involving § 512(i) found liability for “only the most extreme failures” of service providers to terminate users, and that the bar for eligibility has been set too low. It recommends that Congress fine-tune the language to be more specific about what “repeat” means–twice? Three times? Six times? It depends?–and that service providers be required to state their repeat infringer termination policies explicitly so that users don’t have to guess. It also recommends that college and university ISPs be held to a different standard, because they are usually the only ways for students on campus to access the Internet and therefore the only ways for students to get an increasing amount of their educational materials.
Notice and Takedown
Two of the safe harbors, § 512(c) and § 512(d), involve “notice and takedown,” whereby a rightsholder sends a service provider a description of content that the service provider stores or points to, and the service provider can avoid liability if it removes the content (or a link to the content) from its service. Notice and takedown has been a hot issue between rightsholders and service providers, as the former complain of “Whac-a-Mole” and send notices in the millions while the latter complain of abuse and the burden of processing all those notices. The Office evaluated several aspects of notice and takedown in the Report.
The statute allows rightsholders to submit “representative lists” of material to service providers with the idea that they should take down all content described in a representative list. The Office concludes that courts have eviscerated the representative list standard by requiring precise locators (such as URLs) for all allegedly infringing content in takedown notices, and it recommends that Congress revisit the statutory language to restore the representative list standard.
The Office also addresses the issue of how rightsholders should be able to submit takedown notices to service providers. Rightsholders complained that many services have their own web forms for submitting notices and that some of them seem designed to put up roadblocks rather than to make the process efficient; and the Office found this to be the case in its own research. Both sides called for more standardization of takedown notice sending while also making the process accessible to small rightsholders that don’t have access to scalable technology for notice sending.
Yet there is already a standard format for notice sending: Automated Copyright Notice System (ACNS), which was designed by Universal Studios and Universal Music Group over a decade ago. It’s an XML-based language that anyone can use freely and is accepted via email by many service providers today. The main benefit of ACNS is that a rightsholder can send largely identical ACNS messages about her content to any number of online services without having to figure out each service’s web form. Strangely, ACNS appears nowhere in the Report; it was mentioned in exactly one of the more than 92,000 written submissions to the study (and not by a rightsholder representative). Although ACNS is currently mainly used by large rightsholders (and third-party copyright monitoring services that they hire), it’s possible to design tools that make it accessible to small content creators without technical expertise.
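To make the idea concrete, here is a minimal sketch of generating an ACNS-style notice as XML in Python. The element names below approximate the publicly circulated ACNS schema but should be treated as illustrative assumptions rather than an authoritative rendering of it; the point is that one machine-generated structure can be emailed to any number of services.

```python
# Sketch of an ACNS-style takedown notice. Element names approximate the
# ACNS schema but are illustrative assumptions, not the official spec.
import xml.etree.ElementTree as ET

def build_notice(case_id, complainant, email, work_title, url, timestamp):
    """Assemble one infringement notice; the same structure can be sent
    to any number of services that accept ACNS-style email notices."""
    root = ET.Element("Infringement")
    case = ET.SubElement(root, "Case")
    ET.SubElement(case, "ID").text = case_id
    comp = ET.SubElement(root, "Complainant")
    ET.SubElement(comp, "Entity").text = complainant
    ET.SubElement(comp, "Email").text = email
    source = ET.SubElement(root, "Source")
    ET.SubElement(source, "TimeStamp").text = timestamp
    content = ET.SubElement(root, "Content")
    item = ET.SubElement(content, "Item")
    ET.SubElement(item, "Title").text = work_title
    ET.SubElement(item, "URL").text = url
    return ET.tostring(root, encoding="unicode")

# Hypothetical rightsholder and URL, for illustration only:
notice = build_notice("2020-0001", "Example Rights LLC", "notices@example.com",
                      "Some Film", "http://host.example/infringing.mp4",
                      "2020-06-01T12:00:00Z")
print(notice)
```

A small front end over a builder like this is the kind of tool that could make ACNS accessible to individual creators without technical expertise.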
In any case, the Office recommends that requirements for takedown notices be tightened up, and that this should be done not by new statutory language but by putting a new regulatory process in place, analogous to the Copyright Office’s regulatory processes for setting certain digital music royalty rates. This would enable the requirements to change over time to reflect new technologies.
The Office also addresses the issue of counter-notices and how the takedown notice process has been subject to abuse, such as for political purposes during run-ups to elections. Everyone agrees that the timeframes for submitting and responding to counter-notices are too long; the Office’s recommended solution is to adopt an ADR (alternative dispute resolution) process, such as arbitration, as a faster and cheaper alternative to lawsuits filed in federal court.
Finally, the Report covers the requirement that takedown notices contain statements of good faith belief that the material found on an online service is infringing. This was strictly boilerplate language in takedown notices until 2015, when the 9th Circuit decided Lenz v. Universal Music Corp., a/k/a the “Dancing Baby” case. The 9th Circuit ruled that a rightsholder has to take fair use into account before professing a good faith belief in a takedown notice that the use of a work is infringing. That ruling attracted a lot of controversy, including much talk–often with little understanding of how these processes work (or could work)–about the roles of automated copyright monitoring processes and their abilities (or lack thereof) to determine fair use.
It seems that the Section 512 study stakeholders have all missed an essential point about this, or at least failed to express it in a way that the Office could pick up on. The Dancing Baby ruling led to discussion of whether or how it’s possible for rightsholders to analyze fair use when they are constantly sending millions of takedown notices to dozens of online services. Service providers and their allies have insisted that such analysis has to be undertaken manually, because it’s impossible for algorithms to determine fair use, and therefore an automatically-generated notice can’t be a valid notice. Rightsholders countered merely that their automated copyright monitoring processes are getting more and more sophisticated.
What everyone seems to have missed here is that the 9th Circuit called for rightsholders to conduct some sort of fair use analysis, not necessarily a “complete” one or one that a court (which has the sole prerogative to determine fair use) will agree with. Specifically, Judge Milan Smith, in a footnote in his partial dissent in Lenz, wrote that “[f]or a copyright holder to rely solely on a computer algorithm to form a good faith belief that a work is infringing, that algorithm must be capable of applying the [fair use] factors[.]” That does not mean that an algorithm has to be a completely effective fair use deciding machine, which is indeed impossible. Yet an automated process certainly could determine things like the type of content a file contains, whether it is identical to the original work or “transforms” it in some way, or whether it’s the entire content or just a portion; and it could make a decent stab at determining whether the use of the content is commercial or not. That covers at least some of the fair use factors, and a court could possibly find that sufficient per Lenz.
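The kind of partial, automated screen described above can be sketched in a few lines. Everything in this example is hypothetical: the inputs, heuristics, and thresholds are invented for illustration and do not represent any rightsholder’s actual monitoring system; a real system would derive such signals from content matching.

```python
# Hypothetical sketch of a partial, automated fair-use screen. The inputs
# and thresholds are invented for illustration, not any real system.
from dataclasses import dataclass

@dataclass
class Match:
    matched_duration: float   # seconds of the work found in the file
    work_duration: float      # total seconds of the original work
    transformed: bool         # e.g., commentary/remix signals detected
    monetized: bool           # ads or a paywall detected on the hosting page

def partial_fair_use_screen(m: Match) -> bool:
    """Return True if the match looks like a plausible fair use, i.e. the
    notice should be routed to human review rather than sent automatically."""
    portion = m.matched_duration / m.work_duration
    # Factor 3: amount and substantiality of the portion used.
    small_portion = portion < 0.1
    # Factors 1 (purpose/character) and, loosely, commerciality: transformative
    # and/or non-monetized use weighs toward fair use.
    return (small_portion and m.transformed) or (m.transformed and not m.monetized)

# A 15-second clip of a 3-minute song in a transformed, non-monetized video:
assert partial_fair_use_screen(Match(15, 180, True, False)) is True
# The entire work, unaltered, on a monetized page:
assert partial_fair_use_screen(Match(180, 180, False, True)) is False
```

A screen like this is obviously not a fair use deciding machine; it only demonstrates that an algorithm can apply some of the factors, which is the bar Judge Smith’s footnote describes.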
The problem that the Office sees in this state of affairs is a slightly different one: that if rightsholders are required to take fair use into account in formulating their takedown notices, they could actually be found liable for “knowing misrepresentation[s]” per § 512(f). It’s possible that courts could decide this by finding that, for example, a rightsholder designed an algorithm to make a stab at a fair use determination and deliberately ignored a certain test that it could have made, or even that a rightsholder deliberately chose not to have humans review algorithms’ fair use determinations. Therefore, the Office’s recommendation in the Report is merely that “Congress monitor how the courts apply Lenz, and consider clarifying the statutory language if needed.”
Knowledge of Infringements
Section 512 contains several concepts about service providers’ knowledge of infringements on their services, which courts have had trouble interpreting because they sometimes overlap or are at odds with one another: red flag knowledge, actual knowledge, willful blindness, and monitoring. The Report considers those concepts at length.
Service providers aren’t required to monitor their services proactively for copyright compliance (“affirmatively seek facts indicating infringing activity”), per § 512(m). However, they are required to act on “actual knowledge” as well as “red flag knowledge” to qualify for safe harbors. Service providers have argued that the only valid source of information about potential infringements is in takedown notices. But the Office concludes that Congress intended for service providers to act on information about infringements even in the absence of takedown notices, and that “red flag knowledge” means “enough information to indicate a likelihood of infringement.” The difficulty is in determining the boundaries between this and monitoring. Those boundaries depend heavily on the type of online service, and court decisions (such as EMI v. MP3Tunes) that have ruled on this point did so in the specific context of the service involved.
The Office concludes that Congress intended the red flag knowledge standard to impose a “limited duty of inquiry” on online services that falls short of both actual knowledge and proactive monitoring, and that Congress should update the statutory language to reflect this. It says that the standard should be flexible enough to make sense for different types and sizes of online service providers: a major online service should have a higher duty of care than a small hobbyist site. (Different levels of responsibility for large online services run by multibillion-dollar companies vs. small services run by individuals is a common theme throughout the Report.)
Willful blindness means being able to detect instances of infringement but choosing not to do so. As with red flag knowledge vis-a-vis actual knowledge, the Office concludes that courts have interpreted willful blindness in the Section 512 context as requiring blindness to specific instances of infringing material instead of mere blindness to the fact that infringing activities are taking place on a service; and in doing so, the courts have set the bar too high. The Office recommends that Congress revisit the willful blindness language in the statute.
Related to willful blindness are “right and ability to control” and “financial benefit directly attributable to the infringing material,” which come up in the § 512(c) and § 512(d) safe harbors. The Office finds that although Congress did not intend account setup or monthly subscription fees to be “financial benefit[s] directly attributable to the infringing material,” rightsholders should not have to show direct links between content at issue in litigations and financial benefits. It says that a more appropriate test is to ask whether the existence of infringing material on a service is a primary draw for users.
Standard Technical Measures
The Report also looks at a feature of Section 512 that the courts have almost entirely ignored over the past 22 years: the safe harbor eligibility requirement that online services not interfere with “standard technical measures … that are used by copyright owners to identify and protect copyrighted works” and, among other things, “have been developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process[.]”
The Office assumes that this term was meant to include content identification technologies such as watermarking and fingerprinting. Commenters noted that no such standard technical measures (STMs) have been defined to date. They list various reasons for this, including wide varieties of content, services, and identification requirements that preclude one-size-fits-all solutions; there were many calls for an “STM Summit” to help move this process along. The Office suggests that it should have regulatory authority to develop STMs and that the National Institute of Standards and Technology (NIST) should lead a multi-stakeholder working group. It plans to hold an STM Summit once travel becomes possible again.
Yet these observations all ignore perhaps the biggest reason why STMs haven’t been standardized: the relevant technologies, which have existed for a long time, are owned by a wide variety of private companies, and some–like Google’s Content ID–only exist in forms that are tightly interwoven with other proprietary technologies.
There would be many advantages to standardizing on content identification technologies; for one thing, as I’ve noted elsewhere, copyright owners could deposit fingerprints with the Copyright Office on registration. But using whose fingerprinting algorithm? Similarly, watermark payloads (the data embedded in a file as a watermark) could be standardized, but whose algorithm for embedding and reading them would be used? The RIAA attempted to introduce a watermark payload standard for music files in the early 2000s, but lack of interoperability among–and intellectual property rights in–watermarking schemes made that idea impracticable. Similarly, one of the (many) reasons why the Secure Digital Music Initiative (SDMI) failed in 1999 was because of intellectual property interests in relevant technologies such as watermarking and DRM.
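The interoperability problem can be illustrated with a toy example. The two “vendor” schemes below are stand-ins for proprietary fingerprinting algorithms (real audio and video fingerprints are perceptual, not cryptographic hashes; hashes are used here purely for simplicity): the same work yields identifiers that cannot be matched against each other, so a registry of deposited fingerprints would have to pick one scheme or store one fingerprint per scheme.

```python
# Toy illustration of fingerprint interoperability. Cryptographic hashes
# stand in for proprietary perceptual fingerprinting algorithms.
import hashlib

content = b"the same copyrighted work, byte for byte"

def vendor_a_fingerprint(data: bytes) -> str:
    # Hypothetical Vendor A: SHA-256 over the raw bytes.
    return hashlib.sha256(data).hexdigest()

def vendor_b_fingerprint(data: bytes) -> str:
    # Hypothetical Vendor B: MD5 over a "normalized" form of the data.
    return hashlib.md5(data.lower()).hexdigest()

fp_a = vendor_a_fingerprint(content)
fp_b = vendor_b_fingerprint(content)

# Same work, incomparable identifiers: neither party can look up the
# other's fingerprint without adopting the other's algorithm.
assert fp_a != fp_b
```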
Any solution to this problem will have to take into account the thorny issues of IP policies, declarations of essential patents, and contentious technology bake-offs, and the especially thorny issue of equitable apportionment of adoption duties and royalties (if any) among the various stakeholders. The outcome could well be worth it, as long as everyone recognizes the complexity.
Finally, the Office chose to ignore concepts like “notice and staydown” and website blocking that rightsholders advocated but that emerged after the enactment of the DMCA; it considers these to be “alternative stakeholder proposals” that exist outside of Section 512 as originally conceived and written, and therefore outside of its scope in conducting this study. The Report takes notice of adoption of these schemes in Europe but considers them to be limited (in the case of Germany’s adoption of “notice and staydown” in its Störerhaftung intermediary liability theory) or unproven (in the case of Article 17 of the European Union’s recent Copyright Directive); and it recommends waiting and seeing how those schemes fare elsewhere before considering adoption in the U.S.
In all, the Office’s Section 512 Report recommends various tweaks instead of major structural changes to the law. The Report is also tinged with a tone of exasperation about the lack of cooperation and compromise that Congress anticipated. Summing up, the Office says:
“When considering the rebalancing of responsibilities between rightsholders and OSPs [online service providers], both sides argue that the other should bear the greatest brunt of the responsibility. Rightsholders believe that OSPs should be grateful for the safe harbors that have enabled their success, and accordingly should shoulder more of the burden of addressing infringement on their services. In contrast, OSPs believe that rightsholders should be grateful for the mechanisms provided by section 512 for addressing online infringement without resort to costly civil litigation, making it appropriate that they shoulder most of the (financial and other) burden of policing their rights online. As with many things, the answer is likely somewhere in the middle.”
Yet if and when Congress acts on the Office’s recommended changes, the various stakeholders will find that doing so will shift the balance from its current state towards rightsholders.
The “problem” of 512(f) possibly ever having any of its intended tempering effect whatsoever has been solved in the courts already; aren’t they aware of that? I guess the rightsholders whispering in the Copyright Office’s ear want to see that section stricken completely from the statutes, just in case.
I don’t see how, under the recommendations, OSPs of any kind will be able to afford to operate a business that lets the public self-publish pretty much anything. And that’s just what the rightsholders want: a world in which only industry/rightsholder-approved content is produced, transmitted, and consumed, like the pre-Internet media landscape. They are getting it by hook or by crook. This was not the intent of Congress with the DMCA, and yet here we are.