EU Article 13 (Now Article 17) Passes After More Changes, Making Copyright Filtering More Likely

The European Union’s copyright directive finally passed last week, with 56% of the European Parliament vote, after several rounds of significant changes to the text. On its way to final passage, the controversial Article 13 (now Article 17) went through one last round of changes that are worth discussing here. Two issues in particular stand out: how the law will affect startups and niche-market content services, and the nature of its copyright filtering requirements.

The first issue is how the law will affect startups and online communities that serve niche audiences, by imposing legal risk and the costs of licenses or filtering technology. The final text attempts to be more specific about which types of online services are exempt from Article 17, but it ends up both ambiguous and needlessly burdensome for startups.

On the one hand, the text now includes elaborate definitions of the online services to which the law is supposed to apply, definitions intended to protect small and niche-market services by writing them out of the law. On the other hand, that same elaborateness invites years of disputes over interpretation that will only be accessible to organizations that can afford the lawyers to argue them.

The latest definition of “online content-sharing service provider,” the type of service that must either take licenses to copyrighted works or take steps to keep them off its network, is this mouthful:

“… online services that play an important role on the online content market by competing with other online content services, such as online audio and video streaming services, for the same audiences … the main or one of the main purposes of which is to store and enable users to upload and share a large amount of copyright-protected content with the purpose of obtaining profit therefrom, either directly or indirectly, by organising it and promoting it in order to attract a larger audience, including by categorising it and using targeted promotion within it.”

In other words, the law is, presumably, targeted only at major content-sharing services (YouTube) that compete with media companies’ owned or licensed services that don’t accept user-uploaded content (Netflix, Spotify). The list of types of services that are exempt from the law has changed a bit from the previous version of the bill: business-to-business cloud services (Dropbox) are now exempt, but “service providers the main purpose of which is to engage in or to facilitate copyright piracy” (Sci-Hub) are not. The enacted legislation maintains exemptions from previous drafts for the likes of Wikipedia, GitHub, iCloud, Google Drive, and non-profit scientific and educational content repositories.

At the same time, paragraph 4 of Article 17, which contains the meat of the license, takedown, and filter provisions, includes language that should worry startups and their potential investors. Although other parts of Article 17 attempt to exempt small and niche-market services in general, paragraph 6 establishes three tiers of responsibilities that depend on the age and size of the online service (sketched in code after the list below):

  1. Services that are less than three years old and have annual revenues under €10 million (US $11.2 million) must make “best efforts” to take licenses to content or, if no license is available, respond to takedown notices.
  2. Of those services, those with more than 5 million average monthly users must also “ma[k]e best efforts to prevent [] future uploads” of works that have been taken down pursuant to takedown notices — i.e., to implement what has been called “takedown and staydown.”
  3. Services that are more than three years old or make more than €10 million also have to make “best efforts to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information[,]” i.e., to filter content proactively instead of just reactively after takedowns.
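To make the tiering concrete, here is a minimal sketch of that decision logic as I read paragraphs 4 and 6. The thresholds (three years, €10 million, 5 million monthly users) come from the directive itself; the data model, function name, and tier labels are my own shorthand, not anything the law defines.

```python
from dataclasses import dataclass

@dataclass
class Service:
    age_years: float           # time since launch
    annual_revenue_eur: float  # annual turnover
    avg_monthly_users: float   # average monthly users

def article17_obligations(s: Service) -> list[str]:
    """One reading of Article 17(4) and 17(6) as a decision rule."""
    # Baseline for every covered service: license or take down on notice.
    obligations = ["best efforts to obtain licenses", "notice-and-takedown"]
    if s.age_years < 3 and s.annual_revenue_eur < 10_000_000:
        if s.avg_monthly_users > 5_000_000:
            # Tier 2: must also keep taken-down works from reappearing.
            obligations.append("takedown-and-staydown")
    else:
        # Tier 3: the full regime, i.e., proactive filtering against
        # whatever identifying information rights holders have supplied.
        obligations.append("ensure unavailability of identified works")
    return obligations
```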

In other words, new and small services (as a practical matter, and provided that someone decides they meet the criteria for “online content-sharing service provider” quoted above) will need to implement a notice-and-takedown regime similar to the U.S. DMCA’s, and may also be liable for failing to make “best efforts” to obtain licenses.

The lighter requirements are not difficult to administer, assuming that “best efforts” to obtain licenses means working with the usual collecting societies rather than tracking down individual copyright owners for every piece of uploaded content. The problem is that the lighter regime expires three years after a service launches. In other words, when an entrepreneur or investor embarks on a new venture, the clock starts ticking toward greater legal risk and expense … maybe, depending on those definitions of applicable services above.

At least for the foreseeable future, this scheme will cast a pall not only over entrepreneurship but also over online services that serve niche communities. It also gives large copyright owners — and, indirectly, large service providers — perpetual leverage over the little guys, which is not good for competition. Contrast this with U.S. regulations for Internet radio (webcasting): while big commercial webcasters have to record stream data and pay royalties on a per-stream basis, small ones (as defined by revenue and/or non-profit status) can get away with simply paying nominal annual fees. (Advantages for small commercial webcasters have been whittled away over the years but they remain for the smallest educational non-profits.) 
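To see the difference in compliance burden, here is a back-of-envelope comparison in code. The rates are assumptions for illustration only; actual U.S. statutory webcasting rates are set by the Copyright Royalty Board and change from year to year.

```python
# Hypothetical rates, for illustration only.
PER_STREAM_RATE_USD = 0.0018   # assumed royalty per performance (one song to one listener)
SMALL_ANNUAL_FEE_USD = 500.0   # assumed flat annual fee for a small or non-profit webcaster

def big_webcaster_royalty(performances_per_year: int) -> float:
    # Big commercial webcasters must log every stream and pay per performance.
    return performances_per_year * PER_STREAM_RATE_USD

# A station with 100 average listeners playing ~15 songs per hour, around the clock:
performances = 100 * 15 * 24 * 365      # about 13.1 million performances per year
print(f"${big_webcaster_royalty(performances):,.0f}")  # ~$23,652, vs. a flat ~$500 fee
```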

The second issue is the law’s requirements for copyright filtering. Under the new law, larger and older content services that choose not to take licenses to content will need to implement “takedown and staydown” (No. 2 on the list above) and, ultimately, “ensure unavailability of works for which rightholders have provided information” (No. 3). In fact, the final version of the legislation is more forthright about the filtering requirement than the previous version, despite its protestation (required under European law) that “[t]he application of [] Article [17] shall not lead to any general monitoring obligation.”

The difference between No. 2 and No. 3 is meaningful but, as a practical matter, not huge. The former requires service providers to keep lists of identifying information about content that has been the subject of takedown notices, while the latter requires them to use lists of identifying information about all content that rights holders don’t want uploaded in the first place. The latter implies the type of content recognition scheme that’s most widely used today, such as Google’s Content ID or Audible Magic. A technology like those could also serve “takedown and staydown”: instead of matching every upload against everything that rights holders have submitted to the vendor, the service would match uploads only against content that has already been the subject of a takedown notice.
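To illustrate the distinction, here is a minimal sketch of the two regimes as upload-time gatekeeping. The fingerprint function is a stand-in: real systems like Content ID and Audible Magic use proprietary perceptual fingerprinting that can match modified copies, whereas the cryptographic hash below only catches exact duplicates. All names and data structures here are hypothetical; only the control flow is the point.

```python
import hashlib

def fingerprint(media: bytes) -> str:
    # Stand-in for perceptual fingerprinting; a real system would be
    # robust to re-encoding, cropping, pitch shifts, and the like.
    return hashlib.sha256(media).hexdigest()

taken_down: set[str] = set()            # fingerprints from past takedown notices (No. 2)
rightsholder_catalog: set[str] = set()  # everything rights holders have registered (No. 3)

def allow_upload(media: bytes, proactive: bool) -> bool:
    fp = fingerprint(media)
    if fp in taken_down:
        return False  # "takedown and staydown": block known re-uploads
    if proactive and fp in rightsholder_catalog:
        return False  # full filtering: block anything rights holders identified
    return True
```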

In other words, the now-official version of this bill lends credence to MEP Julia Reda’s statements that certain types of content-sharing services will have no choice but to implement filters, though the way I’d prefer to put it is that many of them will choose filtering as the less legally risky option. At the same time, there has been a lot of talk about how expensive and elaborate filtering schemes will need to be in order to satisfy both rights holders and content service providers under this law; I never believed those doomsday predictions, and I still don’t.

My view before this final round of text changes was that it would take many years of high-powered lobbying and litigation to figure out what the filtering requirements actually are (given how vaguely they are worded), and that content services would shy away from overfiltering (false positives in content identification) because it would cost them audiences relative to competitors that don’t overfilter. I still believe the latter, but the latest changes to the law have led me to a different view on the former point.

There are two reasons why I suspect that content services will manage to avoid taking on expensive, R&D-level projects that push the envelope of content recognition technology. First, paragraph 5 of Article 17 invokes the principle of “proportionality,” a fundamental concept in European law, which here takes into account the availability, cost, and complexity of technical measures as well as the service’s type of content and audience. With content identification technologies, it’s generally understood that further advances yield diminishing returns in accuracy and effectiveness, particularly with regard to the fair use-like exceptions enumerated in paragraph 7 of Article 17 (quotation, criticism, parody, and the like), which are essentially impossible to automate in filtering systems. In other words, fancy R&D initiatives to improve filtering are easily shown not to be “proportional.”

The second reason has to do with a little-noticed yet important new provision in the legislation that passed last week. Historically, deliberations between copyright owners and service providers about filtering technologies have taken place in the dark, mostly hidden behind private nondisclosure agreements or courts’ protective orders in lawsuits. U.S. courts have offered limited guidance on the adequacy of these technologies (or lack thereof) in a few decisions, in cases such as Arista v. LimeWire, Universal Music Group v. Veoh, and Universal Music Group v. Escape Media (Grooveshark). The world at large knows little about how, and how well, these technologies work.

That secrecy and reticence could come to an end in Europe. Paragraph 10 of Article 17 sets up mandatory “stakeholder dialogues to discuss best practices for cooperation between online content-sharing service providers and rightholders … regarding the cooperation referred to in paragraph 4.” There are two important new wrinkles to this provision in the final version of the text. One is: “For the purpose of the stakeholder dialogues, users’ organisations shall have access to adequate information from online content-sharing service providers on the functioning of their practices with regard to paragraph 4.” The other is: “The [European] Commission shall, … taking into account the results of the stakeholder dialogues, issue guidance … regarding the cooperation referred to in paragraph 4.”

This means that online services that choose to filter rather than license will need to disclose what filtering schemes they are using. Service providers may try to meet this vaguely worded requirement with ambiguous and opaque language, in hopes that no one sues them. But everyone would be much better off in the long run if the European Commission defined standards for this information (akin to the labeling standards that have been suggested in the U.S. for DRM) so that everyone understands who is using which technologies. When the Commission issues its “guidance,” which will presumably carry something like precedential weight for service providers, everyone will be better informed.

If this happens, the dialogues and guidance will not necessarily lead to more stringent filtering schemes; in fact, I’d say probably not. For one thing, they could give users the information they need to choose services with the most reasonable filtering schemes, which in turn would encourage more reasonable filtering, since services with stricter or coarser filters would lose their audiences and die off.

In addition, it’s important to remember that regulations and government-issued “guidance” never establish lower bounds that stakeholders are motivated to exceed; on the contrary, they establish baseline standards that stakeholders meet as minimally as possible. It’s safe to assume that the tech industry will argue vehemently in the stakeholder dialogues against filtering requirements that would cost it a lot of money. So if any innovation comes out of the process, it’s more likely to take the form of more cost-efficient ways of doing the bare minimum.

It’s possible that these forces will balance each other out, minimizing the burden the law places on service providers while maximizing the services’ utility for users. There’s no doubt that the law will burden online services and users alike; but at least as far as the stakeholder dialogues are concerned, it’s possible that balance and (yes) proportionality could prevail. And let’s not forget that the law could also actually achieve its core objective and put more money in creators’ pockets.

Of course, all of this depends on a process of clarifying the Directive’s highly complex and vague language and implementing it in the national laws of all EU Member States. That process will take many years, by which time it remains to be seen whether any of this will still be relevant.
