The Washington-based advocacy organization Public Knowledge last month published Forcing the Net Through a Sieve: Why Copyright Filtering is Not a Viable Solution for U.S. ISPs. The white paper was a submission to the Federal Communications Commission in connection with its National Broadband Plan.
The paper covers many technical, policy, and legal reasons why it’s a bad idea to adopt various technologies to keep unauthorized copyrighted material off the Internet. The considerations include inaccuracy in identifying infringing material (both false positives and false negatives), degraded ISP network performance, abridgment of fair use rights, and costs to ISPs that may be passed on to consumers.
Countries outside the US, such as France and Belgium, have been seriously considering legal mandates for filtering copyrighted material on ISPs’ networks. Some US ISPs, like AT&T, have been experimenting with filtering technologies (in AT&T’s case, Vobile’s video fingerprinting), while others, like Verizon, are against it.
Unfortunately, this white paper contains some errors and mischaracterizations that undermine its value in influencing regulation. The most serious of these occur in the sections on identifying content.
The paper discusses the difficulty of identifying a piece of content through metadata, such as the ID3 tags commonly used in digital music. This is true as far as it goes. But it makes no mention of content ID standards that are gaining traction across media industry segments, such as ISRC in music, ISAN in video, and DOI in publishing. The use of content IDs, especially embedded in watermarks, would greatly improve the efficiency and accuracy of content identification over other schemes.
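To make the contrast with free-form metadata concrete, here is a minimal Python sketch. The field layout follows the published ISRC format (country code, registrant code, year of reference, designation code); the function itself is hypothetical, not part of any ISRC tooling:

```python
import re

# Field layout from the published ISRC format: 2-letter country code,
# 3-character registrant code, 2-digit year of reference, and a
# 5-digit designation code, e.g. "US-RC1-76-07839".
ISRC_PATTERN = re.compile(r"^[A-Z]{2}[A-Z0-9]{3}[0-9]{2}[0-9]{5}$")

def parse_isrc(code):
    """Return the structured fields of an ISRC, or None if malformed."""
    code = code.replace("-", "").upper()
    if not ISRC_PATTERN.match(code):
        return None
    return {
        "country": code[0:2],
        "registrant": code[2:5],
        "year": code[5:7],
        "designation": code[7:12],
    }

print(parse_isrc("US-RC1-76-07839"))
# {'country': 'US', 'registrant': 'RC1', 'year': '76', 'designation': '07839'}
print(parse_isrc("not an id"))  # None: unlike ID3 text, validity is testable
```

Unlike an ID3 artist or title field, which can say anything, an identifier like this can be validated mechanically and looked up in an authoritative registry.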
The authors also mischaracterize the state of watermarking. They say that watermarks can be removed from content, leading to a pointless “arms race” between hackers and developers of watermarking technology. To support this, they point to research done by Ed Felten and his Princeton team in 2001 in connection with the failed SDMI watermark. Not only is this research ancient history with regard to watermarking techniques used today, but it is also off-target: the SDMI watermark was intended for a different purpose and thus was designed differently from watermarks used to identify content for forensic purposes. Such watermarks can be designed so that removing them leaves content that is perceptually degraded.
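For readers who want some intuition for why a keyed forensic mark is hard to strip cleanly, here is a toy spread-spectrum sketch in Python. Everything in it (function names, parameters, thresholds) is an illustrative assumption, not any vendor’s actual scheme:

```python
import numpy as np

def embed_watermark(signal, key, alpha=0.02):
    # Spread a keyed pseudorandom +/-1 sequence across the whole signal
    # at low amplitude. Without the key, the mark is indistinguishable
    # from noise spread over every sample.
    rng = np.random.default_rng(key)
    wm = rng.choice([-1.0, 1.0], size=signal.shape)
    return signal + alpha * wm

def detect_watermark(signal, key, threshold=0.01):
    # Correlate against the same keyed sequence. Blindly "removing" the
    # mark means subtracting guessed noise everywhere, which degrades
    # the content itself.
    rng = np.random.default_rng(key)
    wm = rng.choice([-1.0, 1.0], size=signal.shape)
    return float(np.mean(signal * wm)) > threshold

audio = np.random.default_rng(0).normal(size=100_000)  # stand-in for content
marked = embed_watermark(audio, key=42)
print(detect_watermark(marked, key=42))  # True: correlation ~ alpha
print(detect_watermark(audio, key=42))   # False: correlation ~ 0
```

With enough redundancy and perceptual shaping, the same correlation test can be made to survive compression and analog conversion, which is the property at issue in the next point.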
Finally, the authors claim that watermark detection won’t do anything to filter content from CDs, DVDs, or camcorded movies. This is not true. Content from these sources can be and is watermarked as well, and the watermarks are designed to withstand transformations such as digital-analog-digital conversion.
The paper also uses more general, almost “rhetorical” devices that I would call questionable. One is the persistent use of the term “downloading” to describe what an ISP must do in order to find content to filter. The report accurately describes deep packet inspection, but this need not involve “downloading,” a term that implies a time-consuming operation separate from the usual process of routing Internet traffic. In fact, routers already examine Internet traffic on the fly for malware and various other types of content, without “downloading” anything. Technology companies such as Zeitera are working on hardware-based fingerprinting that would identify content in a similar way and could be used for filtering.
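To see the difference, here is a toy Python sketch of in-line scanning. The signature bytes and class are made up for illustration, and real DPI gear does this in hardware at line rate, but the principle is the same: constant per-flow state and no stored copy of the stream.

```python
# Toy in-line scanner: each packet is examined as it passes. Only the
# last few bytes of the previous packet are kept, so a signature that
# straddles a packet boundary is still caught. The signature below is
# a made-up placeholder, not a real fingerprint.
SIGNATURES = [b"\x7fFPRINT-DEMO"]
MAX_SIG = max(len(s) for s in SIGNATURES)

class InlineScanner:
    def __init__(self):
        self.tail = b""  # tiny constant state per flow -- no "download"

    def scan(self, payload: bytes) -> bool:
        window = self.tail + payload
        hit = any(sig in window for sig in SIGNATURES)
        self.tail = window[-(MAX_SIG - 1):]
        return hit

scanner = InlineScanner()
packets = [b"...header...\x7fFPRI", b"NT-DEMO...rest of stream..."]
print(any(scanner.scan(p) for p in packets))  # True: matched across packets
```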
Another such rhetorical device is the use of the term “underinclusive” for technologies that let infringing content through instead of blocking it (i.e., false negatives), as opposed to “overinclusive,” meaning false positives. Content owners who favor filtering technologies are not necessarily looking to eliminate false negatives. This is reminiscent of the copyleft canard that antipiracy technologies are worthless because they aren’t perfect.
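It may help to restate the paper’s terms in the standard language of classification errors; this tiny Python sketch, with made-up sample data, pins them down:

```python
def error_rates(decisions):
    # decisions: (flagged_by_filter, actually_infringing) pairs
    fp = sum(1 for flagged, bad in decisions if flagged and not bad)
    fn = sum(1 for flagged, bad in decisions if not flagged and bad)
    negatives = sum(1 for _, bad in decisions if not bad)
    positives = sum(1 for _, bad in decisions if bad)
    return {
        "false_positive_rate": fp / negatives,  # "overinclusive"
        "false_negative_rate": fn / positives,  # "underinclusive"
    }

# Made-up sample: the filter is wrong half the time in both directions.
sample = [(True, True), (True, False), (False, True), (False, False)] * 25
print(error_rates(sample))
# {'false_positive_rate': 0.5, 'false_negative_rate': 0.5}
```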
Finally, the white paper makes various connections between copyright filtering and net neutrality that are conspiracy-theoretical stretches. One example is the discussion about using filtering to slow down or speed up traffic through networks. I am not aware of any copyright filtering discussion that encompasses bandwidth throttling.
There are indeed serious concerns about copyright filtering, many of which this white paper raises effectively. Network efficiency and false positives that abridge fair use rights are the two big ones. Some of the technologies that this white paper claims are being considered for copyright filtering are just bad ideas, such as traffic pattern analysis, architecture-based filtering (e.g. P2P), and protocol-based filtering (e.g. BitTorrent). But an exposition of the negative aspects of this type of technology should at least lay out the arguments without resorting to trial-lawyer-esque rhetorical devices and factual gaps.
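Protocol-based filtering is a good example of why these are bad ideas: the filter keys on the transport rather than the content. The sketch below shows essentially all the “analysis” such a filter can do. The handshake prefix is real BitTorrent protocol framing; the function is illustrative:

```python
def looks_like_bittorrent(payload: bytes) -> bool:
    # The real BitTorrent peer handshake begins with the byte 0x13
    # followed by the ASCII string "BitTorrent protocol".
    return payload.startswith(b"\x13BitTorrent protocol")

# The verdict is identical for a Linux install image and a pirated film:
handshake = b"\x13BitTorrent protocol" + b"\x00" * 8  # reserved bytes follow
print(looks_like_bittorrent(handshake))  # True, regardless of what is shared
```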
I’m also skeptical of any legally mandated technological scheme for controlling copyright. Ultimately, assuming that the technology can be made to work adequately, the use of copyright filters ought to be a matter of economics and private sector deliberation — something that the movie and user-generated-content industries have already attempted. Public Knowledge’s white paper does address the most important economic principle, namely the question of who pays for the technology. Any scheme that saddles consumers with the burden of cost for copyright filtering, such as the one proposed in the UK’s Digital Britain report in January, is inherently flawed. Private sector deliberations over copyright filtering should use this as a starting point if they are going to arrive at a workable solution.
I think the net-neutrality debate overlooks a very important concept: the consumer (ISP user) has rights too, and those rights are being ignored.
The consumer (ISP user) is hiring the ISP to deliver packets. Think of this as similar to mail delivery through the US Post Office, FedEx, or UPS. By what right can a third party, actively and without due cause, claim a right to inspect the “mail”?
Or to put this a little differently, by what right can a content provider and an ISP enter into a contract that diminishes what you hired the ISP to do? The implication is clear: two parties would have the ability to collude to deprive a third person of their rights.
While I am NOT a lawyer, the ability of a private party to willfully, and at any time, “break into your virtual house” to search for supposedly illegal content seems inconsistent with the concept of due process, which is a fundamental legal building block.
One can say that content providers have a right to protect their content, but they do not have a right to do so at your expense by depriving you of your rights.