
Mayweather-Pacquiao Fight Exposes Live Event Streaming Piracy Challenges May 6, 2015

Posted by Bill Rosenblatt in Fingerprinting, Video, Watermarking.

Piracy of live-streamed sports events ceased to be “inside baseball” (pun intended) for the media industry last weekend with HBO’s broadcast of the Floyd Mayweather-Manny Pacquiao boxing match in the US market.  Even in the mainstream media (such as here and here), it seems that the public’s ability to watch the fight online for free in close to real time got more attention than the fight itself.

This is why protection of live sports event streams is a growth area in the field of anti-piracy technology today.  Broadcasters like HBO pay huge sums of money for exclusive rights to live sports; therefore they have big incentives to protect the streams from infringement.  Recent articles in re/code and Mashable attempted — with limited success — to explain how HBO’s stream was massively pirated and how that piracy could possibly have been curtailed.

Both articles focused on the many pirated streams of the fight that were available on the Periscope app, which is owned by Twitter and allows users to broadcast video in real time from their iOS devices.  As Peter Kafka at re/code explained (accurately enough), it’s not possible to use fingerprint-based systems like Google’s Content ID with live event streams.  Such systems depend on a service provider getting a copy of the content in advance so that it can take a “fingerprint” — a shorthand numerical representation of it — and use that to flag attempted user uploads of the same content later.  By definition, no advance copy of a live event exists, so fingerprinting can’t be used.
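To illustrate why the advance copy is essential, here is a minimal sketch of fingerprint-style matching, assuming a deliberately toy fingerprint (one bit per window of signal energy); real systems like Content ID use far more robust perceptual features:

```python
def fingerprint(samples, window=4):
    # Toy fingerprint: one bit per window, set when the window's
    # energy rises relative to the previous window.
    energies = [sum(abs(s) for s in samples[i:i + window])
                for i in range(0, len(samples), window)]
    return "".join("1" if b > a else "0"
                   for a, b in zip(energies, energies[1:]))

def matches(reference_fp, upload_fp, threshold=0.9):
    # Flag an upload when its fingerprint bits mostly agree with a
    # reference fingerprint the service computed ahead of time.
    n = min(len(reference_fp), len(upload_fp))
    same = sum(1 for a, b in zip(reference_fp, upload_fp) if a == b)
    return n > 0 and same / n >= threshold

# The reference fingerprint must exist *before* uploads arrive --
# exactly what a live event cannot provide.
original = [0, 1, 5, 2, 8, 9, 4, 1, 0, 2, 7, 8, 3, 1, 2, 9]
ref = fingerprint(original)
assert matches(ref, fingerprint(original))              # re-upload flagged
assert not matches(ref, fingerprint([9, 8, 1, 0] * 4))  # different content
```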

Furthermore, just because a single service uses fingerprinting to block unauthorized uploads doesn’t mean that other services do.  YouTube might block an upload thanks to Content ID, but that doesn’t prevent a user from putting the same file up on BitTorrent or a cyberlocker.

However, it is possible to use watermarks to flag content.  HBO could insert watermarks into the live video as it goes out the door.  Watermarks are much more efficient to detect and calculate than fingerprints, and a well-designed watermark can be detected even if the content is “camcorded” from a TV screen.
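For contrast with fingerprinting, here is a minimal sketch of watermark embedding and blind detection, assuming a deliberately fragile least-significant-bit scheme on raw pixel values (production video watermarks are engineered to survive re-compression and camcording, which this toy scheme is not):

```python
def embed_watermark(pixels, payload_bits):
    # Toy embed: write each payload bit into the least significant
    # bit of successive pixel values.
    marked = list(pixels)
    for i, bit in enumerate(payload_bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def detect_watermark(pixels, length):
    # Blind detection: no copy of the original content is needed,
    # only knowledge of where and how the bits were embedded.
    return [p & 1 for p in pixels[:length]]

frame = [200, 131, 54, 77, 90, 18, 240, 63]
payload = [1, 0, 1, 1]                 # e.g. a broadcaster ID
marked = embed_watermark(frame, payload)
assert detect_watermark(marked, len(payload)) == payload
```

Note the asymmetry with fingerprinting: the mark is inserted at the source before distribution, so a detector can flag the content without ever having seen an advance copy.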

Two things can happen with watermarks.  First, a cooperating service could agree to detect the watermark and block the content — or do something else, such as allow the content through, play an ad, and share the revenue with the rights holder, as Google does with Content ID.  Second, a piracy monitoring service could detect watermarks of streams out in the wild (including on Periscope) and rapidly serve takedown notices on the services that are distributing the unauthorized streams, meaning that the services need not do anything proactive.

Given what Christina Warren at Mashable experienced (camcorded streams appearing on Periscope and then disappearing later), the latter probably happened.  Several streaming providers and anti-piracy services use watermarks to aid detection of unauthorized copies of live streams.  In the Caribbean market, for example, Netherlands-based pay-TV platform provider Cleeng carried the pay-per-view broadcast of the fight for Sportsmax TV, and it’s likely that Cleeng used its live-stream watermarking technology to protect the content.  (Another anti-piracy provider, Irdeto, has similar technology but admitted to Bloomberg that it wasn’t working on the fight.  That leaves Friend MTS as my guess for the provider that monitored the fight in other geographies such as Europe and North America.)

It is also possible to automate the process more fully by embedding so-called session-based watermarks that contain identifiers for the user accounts or devices that are receiving the content legally — such as set-top boxes receiving HBO over cable or satellite services.  Session-based watermarks are used today with movies released in early windows in high definition, and Hollywood would like them to be used in all 4K/UHD movie distributions.

With session-based watermarks, a monitoring service can (in many cases) determine the device from which the unauthorized stream originated and inform the pay-TV provider, which can then shut off the signal to that device. The entire process would require no human intervention and take just a few seconds.
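A sketch of how that automated loop might look, assuming the watermark payload is simply a device ID in binary and a hypothetical subscriber registry held by the pay-TV provider (both are invented for illustration):

```python
def decode_session_id(mark_bits):
    # Toy assumption: the watermark payload is the device ID in binary.
    return int("".join(str(b) for b in mark_bits), 2)

def handle_pirated_stream(mark_bits, device_registry):
    # A monitoring service extracts the session watermark from a stream
    # found in the wild, then tells the pay-TV provider which device
    # to cut off -- no human in the loop.
    device_id = decode_session_id(mark_bits)
    subscriber = device_registry.get(device_id)
    if subscriber is not None:
        return f"revoke device {device_id} of subscriber {subscriber}"
    return "unknown device"

registry = {13: "alice@example.com"}   # hypothetical subscriber database
assert handle_pirated_stream([1, 1, 0, 1], registry) == \
    "revoke device 13 of subscriber alice@example.com"
```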

But with Periscope-style camcording, this could lead to the following interesting situation: Alice invites some friends over to watch the fight on her big-screen TV and pays the $100 fee to HBO through her cable company.  Everyone sits down, and the fight starts.  Bob pulls out his iPhone and fires up Periscope.  A few seconds later, the TV goes blank or displays a warning message about possible copyright infringement.  Alice calls her cable company and finds herself on hold, waiting behind the hundreds or thousands of other furious customers to whom the same thing happened.

Ergo I don’t believe HBO is able to require session-based watermarking to protect its live events through pay-TV providers.  The situation with live sports is different from early-window HD movies: movies have already been in theaters (where they have been camcorded), and for live events users value the timeliness of Periscope-style camcords enough to overlook their often questionable quality.

What also clearly did not happen is that HBO made a deal with Twitter to detect the watermarks and block the live Periscope streams.  As both the Mashable and re/code articles note, Twitter/Periscope experienced a ton of traffic before, during, and after the event, much of which was “second-screen” in nature, such as commentary on the fight and the fighters.  Yet Google’s Content ID showed that a service provider could be willing to detect copyrighted material proactively if given sufficient incentive.  If the likes of HBO can find sufficient incentives — cross-promotion, ad revenue share, or something else — then the Periscopes of the world might be inclined to follow in Google’s footsteps.

Copyright Alert System Releases First Year Results June 10, 2014

Posted by Bill Rosenblatt in Europe, Fingerprinting, Law, United States, Watermarking.

The Center for Copyright Information (CCI) released a report last month summarizing the first calendar year of activity of the Copyright Alert System (CAS), the United States’ voluntary graduated response scheme for involving ISPs in flagging their subscribers’ alleged copyright infringement. The report contains data from CAS activity as well as results of a study that CCI commissioned on consumer attitudes in the US towards copyright and file sharing.

The CAS works as follows: copyright owners monitor ISPs’ networks for allegedly illegal file sharing, using MarkMonitor’s piracy monitoring service.  MarkMonitor determines which ISP manages the user’s IP address and sends a notice to that ISP.  The ISP then looks up the subscriber ID associated with that IP address and sends a copyright alert.  The first two copyright alerts sent to a given user are purely educational, not requiring the user to take any action.  Subsequent alerts proceed from “educational” to “acknowledgement” (requiring the user to acknowledge the alert in some way, similar to the way in which users agree to terms of use on websites) and then to “mitigation” (the ISP imposes some penalty, such as temporarily throttling the user’s bandwidth).

There are two alerts at each level, for a total of six, but the three categories make it easier to compare the CAS with “three strikes” graduated response regimes in other countries.  As I discussed recently, the CAS’s “mitigation” penalties are very minor compared to punitive measures in other systems such as those in France and South Korea.
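The escalation just described (two alerts at each of three levels, six in total) reduces to a simple mapping:

```python
def alert_level(notice_count):
    # CAS escalation: alerts 1-2 are educational, 3-4 require
    # acknowledgement, and 5-6 carry mitigation penalties.
    if notice_count <= 2:
        return "educational"
    if notice_count <= 4:
        return "acknowledgement"
    return "mitigation"

assert [alert_level(n) for n in range(1, 7)] == [
    "educational", "educational",
    "acknowledgement", "acknowledgement",
    "mitigation", "mitigation",
]
```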

The CCI’s report indicates that during its first ten months of operation, the CAS sent out 1.3 million alerts.  Of these, 72% were “educational,” 20% were “acknowledgement,” and 8% were “mitigation.”  The CAS includes a process for users to submit mitigation alerts they receive for independent review.  Only 265 review requests were sent, and among these, 47 (18%) resulted in the alert being overturned.  Most of these 47 were overturned because the review found that the user’s account was used by someone else without the user’s authorization.  In no case did the review turn up a false positive, i.e. a flagged file that was not actually an unauthorized copy of copyrighted material.

It’s particularly instructive to compare these results to France’s HADOPI system.  This is possible thanks to the detailed research reports that HADOPI routinely issues.  Two of these were presented at our Copyright and Technology London conferences and are available on SlideShare (2012 report here; 2013 report here).  Here is a comparison of the percent of alerts issued by each system at each of the three levels:

Alert Level          HADOPI 2012   HADOPI 2013   CAS 2013
1st/Educational           91.65%        90.80%     72.39%
2nd/Acknowledgement        8.32%         9.17%     20.05%
3rd/Mitigation             0.03%         0.03%      7.56%

Of course these comparisons are not precise; but it is hard not to draw an inference from them that threats of harsher punitive measures succeed in deterring file-sharing.  In the French system — in which users can face fines of up to €1500 and one year suspensions of their Internet service — only 0.03% of those who received notices kept receiving them up to the third level, and only a tiny handful of users actually received penalties.  In the US system — where penalties are much lighter and not widely advertised — almost 8% of users who received alerts went all the way to the “mitigation” levels. (Of that 8%, 3% went to the sixth and final level.)

Furthermore, while the HADOPI results are consistent from 2012 to 2013, they reflect a slight upward shift in the number of users who receive second-level notices, while the percent of third-level notices — those that could involve fines or suspensions — remained constant.  This reinforces the conclusion that actual punitive measures serve as deterrents.  At the same time, the 2013 results also showed that while the HADOPI system did reduce P2P file sharing by about one-third during roughly the second year of the system’s operation, P2P usage stabilized and even rose slightly in the two years after that.  This suggests that HADOPI has succeeded in deterring certain types of P2P file-sharers but that hardcore pirates remain undeterred — a reasonable conclusion.

It will be interesting to see if the CCI takes this type of data from other graduated response systems worldwide — including those with no punitive measures at all, such as the UK’s planned Vcap system — into account and uses it to adjust its level of punitive responses in the Copyright Alert System.


E-Book Watermarking Gains Traction in Europe October 3, 2013

Posted by Bill Rosenblatt in DRM, Europe, Publishing, United States, Watermarking.

The firm Rüdiger Wischenbart Content and Consulting has just released the latest version of Global eBook, its overview of the worldwide ebook market.  This sweeping, highly informative report is available for free during the month of October.

The report contains lots of information about piracy and rights worldwide — attitudes, public policy initiatives, and technologies.  A few conclusions in particular stand out.  First, while growth of e-book reading appears to be slowing down, it has reached a level of 20% of book sales in the U.S. market (and even higher by unit volume).  This puts e-books firmly in the mainstream of media consumption.

Accordingly, e-book piracy has become a mainstream concern.  Publishers — and their trade associations, such as the Börsenverein des Deutschen Buchhandels in Germany, which is the most active on this issue — had been less involved in the online infringement issue than their counterparts in the music and film industries, but that’s changing now.  Several studies have been done that generally show e-book piracy levels rising rapidly, but there’s wide disagreement on its volume.  And virtually no data at all is available about the promotional vs. detrimental effects of unauthorized file-sharing on legitimate sales.  Part of the problem is that e-book files are much smaller than music MP3s or (especially) digital video or games; therefore e-book files are more likely to be shared through email (which can’t be tracked) and less likely to be available through torrent sites.

The lack of quantitative understanding of infringement and its impact has led different countries to pursue different paths, in terms of both legal actions and the use of antipiracy technologies.  Perhaps the most surprising aspect of the latter trend — at least to those of us on this side of the Atlantic — is the rapid ascendancy of watermarking (a/k/a “social DRM”) in some European countries.  For example:

  • Netherlands: Arbeiderspers/Bruna, the country’s largest book publisher, switched from traditional DRM to watermarking for its entire catalog at the beginning of this year.
  • Austria: 65% of the e-books available in the country have watermarks embedded, compared to only 35% with DRM.
  • Hungary: Watermarking is now the preferred method of content protection.
  • Sweden: Virtually all trade ebooks are DRM-free.  The e-book distributor eLib (owned by the Swedish media giant Bonnier) uses watermarking for 80% of its titles.
  • Italy: Watermarking has grown from 15% to 42% of all e-books, overtaking the 35% that use DRM.

(Note that these are, with all due respect to them, second-tier European countries.  I have anecdotal evidence that e-book watermarking is on the rise in the UK, but not much evidence of it in France or Germany.  At the same time, the above countries are often test beds for technologies that, if successful, spread to larger markets — whether by design or market forces.)

Meanwhile, there’s still a total absence of data on the effects of both DRM and watermarking on users’ e-book behavior — which is why I have been discussing with the Book Industry Study Group the possibility of doing a study on this.

The prevailing attitude among authors is that DRM should still be used.  An interesting data point on this came back in January when Lulu, one of the prominent online self-publishing services, decided to stop offering authors the option of DRM protection (using Adobe Content Server, the de facto standard DRM for ebooks outside of the Amazon and Apple ecosystems) for ebooks sold on the Lulu site.  Lulu authors would still be able to distribute their titles through Amazon and other services that use DRM.

Lulu announced this in a blog post which elicited large numbers of comments, largely from authors.  My pseudo-scientific tally of the authors’ comments showed that they are in favor of DRM — and unhappy with Lulu’s decision to drop it — by more than a two-to-one margin.  Many said that they would drop Lulu and move to its competitor Smashwords, which continues to support DRM as an option.  Remember that these are independent authors of mostly “long tail” titles in need of exposure, not bestselling authors or major publishers.

One reason for Lulu’s decision to drop DRM was undoubtedly the operational expense.  Smashwords’ CEO, Mark Coker, expressed the attitudes of ebook distributors succinctly in a Publishers Weekly article covering Lulu’s move when he said, “What’s relevant is whether the cost of DRM (measured by fees to Adobe, [and for consumers] increased complexity, decreased availability, decreased sharing and word of mouth, decreased customer satisfaction) outweigh the benefits[.]”  As we used to say over here, that’s the $64,000 question.

Content Protection for 4k Video July 2, 2013

Posted by Bill Rosenblatt in DRM, Technologies, Video, Watermarking.

As Hollywood adepts know, the next phase in picture quality beyond HD is something called 4k.  Although the name suggests roughly 4,000 (perhaps 4,096) pixels in the horizontal direction, the consumer format’s resolution is actually 3840 × 2160, i.e., twice the pixels of HD in both the horizontal and vertical directions.
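A quick check of the arithmetic: doubling the pixel count in each direction quadruples the total.

```python
hd = (1920, 1080)     # full HD
uhd = (3840, 2160)    # consumer "4k" / UHD
assert uhd[0] == 2 * hd[0] and uhd[1] == 2 * hd[1]
# Total pixels: 8,294,400 for UHD vs. 2,073,600 for HD.
assert uhd[0] * uhd[1] == 4 * hd[0] * hd[1]
```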

4k is the highest quality of image actually captured by digital cinematography right now.  The question is, how will it be delivered to consumers, in what timeframe, and how will it be protected?

Those of us who attended the Anti-Piracy and Content Protection Summit in LA last week learned that the answer to the latter question is unknown as yet.  Spencer Stephens, CTO of Sony Pictures, gave a brief presentation explaining what 4k is and outlining his studio’s wish list for 4k content protection.  He said that it was an opportunity to start fresh with a new design, compared to the AACS content protection technology for Blu-ray discs, which is 10 years old.

This is interesting on a couple of levels.  First, it implies that the studios have not predetermined a standard for 4k content protection; in contrast, Blu-ray discs were introduced in the market about three years after AACS was designed.  Second, Stephens’s remarks had the flavor of a semi-public appeal to the community of content protection vendors — some of which were in the audience at this conference — for help in designing DRM schemes for 4k that met his requirements.

Stephens’s wish list included such elements as:

  • Title-by-title diversity, so that  a technique used to hack one movie title doesn’t necessarily apply to another
  • Requiring players to authenticate themselves online before playback, which enables hacked players to be denied but makes it impossible to play 4k content without an Internet connection
  • The use of HDCP 2.2 to protect digital outputs, since older versions of HDCP have been hacked
  • Session-based watermarking, so that each 4k file is marked with the identity of the device or user that downloaded it (a technique used today with early-window HD content)
  • The use of trusted execution environments (TEE) for playback, which combine the security of hardware with the renewability of software

From time to time I hear from startup companies that claim to have designed better technologies for video content protection.  I tell them that getting studio approval for new content protection schemes is a tricky business.  You can get studio technology executives excited about your technology, but they don’t actually “approve” it such that they guarantee they’ll accept it if it’s used in a content service.  Instead, they expect service providers to propose the technology in the context of the overall service, and the studios will consider providing licenses to their content in that broader context.  And of course the studios don’t actually pay for the technology; the service providers or consumer device makers do.

In other words, studios “bless” new content protection technologies, but otherwise the entire sales process takes place at arm’s length from the studios.  In that sense, the studios act somewhat like a regulatory agency does when setting guidelines for compliance with regulations such as HIPAA and GLB (for information privacy in healthcare and financial services respectively).  The resulting technology often meets the letter but not the spirit of the regulations.

In this respect, Stephens’s remarks were a breath of fresh air.  They are an invitation to more open dialog among vendors, studios, and service providers about the types of content protection that they may be willing to implement when it comes time to distribute 4k content to consumers.

In the past, such discussions often happened behind closed doors, took the form of unilateral “unfunded mandates,” and/or resulted in implementations that plainly did not work.  As technology gets more sophisticated and the world gets more complex, Hollywood is going to have to work more closely with downstream entities in the content distribution chain if it wants its content protected.  Spencer Stephens’s presentation was a good start in that direction.

Digimarc Acquires Attributor December 4, 2012

Posted by Bill Rosenblatt in Fingerprinting, Images, Publishing, Watermarking.

Digimarc announced yesterday that it has acquired Attributor Corp.  Attributor, based in Silicon Valley, is one of a handful of companies that crawls the Internet looking for instances of copyrighted material that may be infringing, using a pattern-recognition technology akin to fingerprinting.  Digimarc is a leader in digital watermarking technology, with a large and significant portfolio of IP in the space.  The acquisition price was a total of US $7.5 million in cash, stock, and contingent compensation.

This is a synergistic and strategically significant move for Digimarc.  A few years ago, Digimarc pruned its efforts to create products and services for digital media markets outside of still images.  It decided, in effect, to leave those markets to its IP licensees, companies such as Civolution of the Netherlands and MarkAny of South Korea.  Attributor’s primary market is book publishing, with customers including four of the “Big Six” trade book publishers as well as several leading educational and STM (scientific, technical, medical) publishers.

Digimarc intends to leverage Attributor’s relationships with book publishers to help it expand its watermarking technology into that market and to move into other markets such as magazine and financial publishing.  The company cited the explosive growth in e-books as a reason for the acquisition.

Beyond that, Digimarc’s acquisition is another sign of the increasing importance of infringement monitoring services; the previous such sign came over the summer, when Thomson Reuters acquired MarkMonitor.

There are two reasons for this increase in importance.  First is the rise of so-called progressive response legal regimes: copyright owners can monitor the Internet and submit data on alleged infringements to a legal authority, which sends users increasingly strong warning messages and, if they keep on infringing, potentially suspends their ISP accounts.  The most advanced progressive response regime is HADOPI in France, early results from which are encouraging.  The Copyright Alert System is supposedly gearing up for launch in the United States.  A handful of other countries have progressive response in place or in process as well.

The second reason for the increasing importance of so-called piracy monitoring is that copyright owners are starting to realize the value of the data they generate, beyond catching infringers.  Piracy is evidence of popularity of content — of demand for it.  The data that these services generate can be valuable for analytics purposes, to see who is interested in the content and in what ways.  Big Champagne, for example, has been supplying this type of data to the music industry for many years.  Attributor has been working on a new service that integrates piracy data with social media analytics; Digimarc intends to integrate this into its own data offerings for the image market.

In fact, we’ll have a discussion on the value of piracy data tomorrow at Copyright and Technology NYC 2012.  Leading the discussion will be Thomas Sehested of MarkMonitor.  There’s little doubt he will be called upon to talk about his new competition.

Webinar on Studios’ Content Security Policies April 24, 2012

Posted by Bill Rosenblatt in Conditional Access, DRM, Events, Video, Watermarking.

For those who couldn’t attend the breakfast event at the NAB trade show last week, I will be doing a webinar on Content Security Requirements for Multi-Screen Video Services, on Thursday April 26 at noon US east coast time/1700 GMT.  I’ll be presenting a synopsis of the whitepaper I published last December on the topic.  I will be joined by Petr Peterka, CTO of Verimatrix, sponsor of the webinar.  Click here to register.

New White Paper: Content Security Requirements for Multi-Screen Video Services January 9, 2012

Posted by Bill Rosenblatt in Conditional Access, DRM, Technologies, Video, Watermarking, White Papers.

I have released a new white paper on content security requirements for video services that distribute content to multiple devices.  This white paper discusses copyright owners’ requirements for security in today’s world of proliferating devices and delivery channels.

So-called managed networks (cable, satellite, and telco TV) are under increasing pressure to compete with “over the top” (OTT) video services that can run on any IP-based (unmanaged) network to a variety of devices — services like Netflix and Hulu.  In the US, in fact, total OTT subscribership is fast approaching that of cable, satellite, and telco TV.

Therefore pay-TV operators have to respond by making their content available on a similar variety of devices and even through unmanaged networks.  While some major pay-TV providers like Comcast and Time Warner Cable are launching “TV Everywhere” services, many more pay-TV operators are trying to keep up by building their own service extensions onto mobile phones, tablets, and home devices other than traditional set-top boxes (STBs).

Content security is one of the many requirements that operators have to meet in order to license content from studios, TV networks, sports leagues, and other major content sources.  Life for pay-TV operators used to be relatively simple: adopt a conditional access (CA) technology that was equally effective in thwarting signal theft as it was in thwarting content piracy.  Economic and security goals were aligned between operators and copyright owners.  Now life is considerably more complicated, as operators have to support home networks and branch out into mobile services.  Content security requirements are more complicated as well.

This white paper gathers security requirements from major content owners and describes them in a single document.  The intent is to help pay-TV operators and other video service providers that are looking to launch multi-screen video services, so that they know what to expect and can avoid unpleasant surprises with regard to security requirements when licensing content to offer through their services.

I spoke to representatives from most of the major Hollywood studios to get their requirements.  Although it is not possible to build a gigantic table that an operator can use to look up DRM or conditional access requirements for any given delivery modality and client device — among other things, such a table would become obsolete very quickly — I was able to create a set of guidelines that should be useful for operators.

Content security guidelines do depend on certain factors, including release windows (how long after a film’s theatrical release or a TV show’s first airing), display quality, and the usage rules granted to users and their devices.  In the white paper, I map these factors to certain specific content security requirements, such as roots of trust, watermarks, software hardening, and DRM robustness rules.  Security guidelines also depend on external market factors that the white paper also describes.
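As a purely hypothetical illustration of this kind of mapping (the factor names, thresholds, and requirement labels below are all invented for the sketch and are not taken from the white paper):

```python
def required_protections(days_since_first_release, resolution_lines):
    # Hypothetical sketch of the *kind* of mapping described above:
    # earlier release windows and higher display quality call for
    # stronger protection. All thresholds here are invented.
    reqs = {"encryption / DRM"}
    if resolution_lines >= 1080:
        reqs.add("hardware root of trust")
        reqs.add("protected digital outputs (e.g. HDCP)")
    if days_since_first_release < 90:   # early release window
        reqs.add("session-based watermarking")
    return reqs

# An early-window HD title triggers the strictest set of requirements.
early_hd = required_protections(days_since_first_release=30,
                                resolution_lines=1080)
assert "session-based watermarking" in early_hd
```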

Many thanks to Verimatrix for commissioning this white paper.   To obtain it, follow this link and fill out the form for a PDF download.  Feel free to contact me with any questions or other follow-up.

New White Paper: The New Technologies for Pay TV Content Security August 18, 2011

Posted by Bill Rosenblatt in DRM, Fingerprinting, Technologies, Video, Watermarking, White Papers.

I have just published a new white paper: The New Technologies for Pay TV Content Security.  This white paper was commissioned by Irdeto.

The 28-page paper describes the current state of the art of techniques for protecting video content delivered over pay television networks such as cable and satellite.  The two primary theses of the white paper are:

  • Pay TV often leads in content protection innovation over other media types and delivery modalities.  That is because, among other reasons, it is a fairly rare case where the economic interests of content owners and service providers are aligned: content owners don’t want their content used without authorization, and pay-TV operators don’t want their signals stolen.  Therefore pay-TV operators have incentives to implement strong and innovative content security solutions.
  • Until recently, many content security schemes could be described as hack-it-and-it’s-broken (such as CSS for DVDs) or a cycle of hack-patch-hack-patch-etc. (such as AACS for Blu-ray or FairPlay for iTunes).  Now technologies are available that break the hack-patch-hack-patch cycle, thereby decreasing long-term costs (TCO) and complexity.

The white paper starts with a brief history of content protection technologies for digital pay TV, starting with the adoption of the Digital Video Broadcasting (DVB) standard in 1994.  Then it describes various newer technologies, including building blocks like ECC (elliptic curve cryptography), flash memory, and secure silicon; and it describes new techniques such as individualization, renewability, diversity, and whitebox cryptography.  It ties these techniques together into the concept of security lifecycle services, which include breach response and monitoring.

The final section of the paper discusses fingerprinting and watermarking as two techniques that complement encryption as ways of finding unauthorized content “in the wild.”

My thanks to Irdeto for sponsoring this paper.

The Early Release Window Experiment Continues June 29, 2011

Posted by Niels Thorwirth in Video, Watermarking.

The early release window, which offers Hollywood content for home consumption while it is still showing at theaters, has been debated for many years – in fact, I wrote about an enabling FCC ruling about a year ago. But now the debate about its success is raging more than ever.

Adding fuel to the fire is a current price tag of US $30. At this price point, the discussions revolve around the comparison of an expensive VOD movie to movie theater tickets that cost, on average, less than $8. Cinema owners and movie directors have voiced their concerns about the shift in content consumption habits.  Then again, it is impossible to reliably predict consumer interest – otherwise every Hollywood title would be a blockbuster.

I think that it will be an interesting offer for, initially, a small percentage of consumers.  And while the rate of adoption remains to be seen, it’s obvious to me that movie theaters won’t disappear any time soon and that electronic distribution will continue to grow.

The participating studios certainly have conducted their own research, and it is evident that their hopes are high enough to justify shaking up their traditional models and supporting this offer.

But I see the most relevant indicator in recent discussions I have had with operators. They are evaluating this opportunity seriously and investing time and resources in the studios’ requirement that early-window content be digitally watermarked as well as encrypted. This may be because even a small uptake by consumers will translate into a relevant chunk of revenue.

One technically interesting point concerns where the watermark is embedded: in the client device, or in the video server before delivery. A client-based approach has the advantage of distributed processing without head-end integration, while server-side integration requires no modification to client devices, which is why operators often prefer it.

The overall application is the same, yet the head-end component requires a very different technology approach. The manipulation of video pixels is too slow when considering the complex coding of compression schemes like H.264. The server-side manipulation has to be applied in the compressed and possibly encrypted domain, and applied while the content is delivered.

Efficiency is key, because the delivery infrastructure is all about delivering the maximum number of parallel streams; any overhead that watermarking introduces must be small and fast. This is a fundamental difference from previous watermarking schemes, which focused only on survivability (robustness).  At the same time, with broad deployment expected across multiple head-end infrastructures, ease of integration is crucial to the adoption of digital watermarking.

This development will remain interesting because it’s an experiment on the technical front as well as in business models, and I am sure there will be more progress to report in the future.

Public Knowledge White Paper Attacks Copyright Filtering August 20, 2009

Posted by Bill Rosenblatt in Fingerprinting, Law, Watermarking.

The Washington-based advocacy organization Public Knowledge last month published Forcing the Net Through a Sieve: Why Copyright Filtering is Not a Viable Solution for U.S. ISPs. The white paper was a submission to the Federal Communications Commission in connection with its National Broadband Plan.

The paper covers many technical, policy, and legal reasons why it’s a bad idea to adopt various types of technologies to keep unauthorized copyrighted material off the Internet.  Some of the considerations include inaccuracy in identifying actually infringing material (whether false positives or false negatives), hampering of ISP network performance, infringement of fair use rights, and forcing ISPs to incur costs that may be passed on to consumers.

Countries outside of the US, such as France and Belgium, have been seriously considering legal mandates for filtering copyrighted material from ISPs’ networks.  Some ISPs in the US, like AT&T, have been experimenting with filtering technologies — in AT&T’s case, Vobile’s video fingerprinting — while others, like Verizon, are against it.

Unfortunately, this white paper contains some errors and mischaracterizations that diminish its value in influencing regulation.  The most serious of these occur in the sections on identifying content.

The paper discusses the difficulty of identifying a piece of content through metadata, such as ID3 tags commonly used in digital music.  This is true as far as it goes.  But it makes no mention of content ID standards that are gaining traction in various media industry segments, such as ISRC in music, ISAN in video, and DOI in various publishing industry segments.  The use of content IDs, especially in watermarks, would greatly improve the efficiency and accuracy of content identification over other schemes.
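To make the content-ID point concrete: a standard identifier such as a 12-character ISRC is small enough to fit comfortably in the low-capacity payload of a watermark. The packing scheme below is a made-up illustration (not a standard encoding), showing that the whole identifier compresses to 8 bytes via base-36 arithmetic.

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def pack_isrc(isrc: str) -> bytes:
    """Pack a 12-character ISRC (e.g. 'USRC17607839') into 8 bytes by
    treating it as a base-36 number. Illustrative encoding only."""
    assert len(isrc) == 12 and isrc.isalnum()
    value = 0
    for ch in isrc.upper():
        value = value * 36 + int(ch, 36)  # 36**12 < 2**64, so 8 bytes suffice
    return value.to_bytes(8, "big")

def unpack_isrc(payload: bytes) -> str:
    """Invert pack_isrc: recover the 12 base-36 digits from the integer."""
    value = int.from_bytes(payload, "big")
    chars = []
    for _ in range(12):
        value, digit = divmod(value, 36)
        chars.append(ALPHABET[digit])
    return "".join(reversed(chars))

payload = pack_isrc("USRC17607839")  # 8-byte watermark payload
```

An identification service reading such a payload out of a watermark gets an exact, registry-backed identifier rather than guessing from mutable metadata tags.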

The authors also mischaracterize the state of watermarking.  They say that watermarks can be removed from content, leading to a pointless “arms race” between hackers and developers of watermarking technology.  To support this, they point to research done by Ed Felten and his Princeton team in 2001 in connection with the failed SDMI watermark.  Not only is this research ancient history with regard to watermarking techniques used today, but it is also off-target: the SDMI watermark was intended for a different purpose and thus was designed differently from watermarks used to identify content for forensic purposes.  Such watermarks can be designed so that removing them leaves content that is perceptually degraded.

Finally, the authors claim that watermark detection won’t do anything to filter content from CDs, DVDs, or camcorded movies.  This is not true.  These can be and are watermarked as well; and the watermarks are designed to withstand transformations such as digital-analog-digital conversion.

There are other more general, almost “rhetorical” devices used in this paper that I would call questionable.  One is the persistent use of the term “downloading” to describe what an ISP must do in order to find content to filter.  The report accurately describes deep packet inspection, but this need not involve “downloading,” a term that implies an operation that takes time and is a departure from the usual process of routing Internet traffic.  In fact, routers already examine Internet traffic for malware and various other types of content; they do this on the fly without “downloading.”  Technology companies such as Zeitera are working on hardware-based fingerprinting technology that would work similarly for content identification that could be used for filtering.
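The on-the-fly character of such inspection can be sketched with a toy Rabin-Karp-style matcher: each packet payload is compared against a set of known fingerprints using a rolling hash, with only a fixed-size window of state and nothing stored or “downloaded” afterward. This is a simplification for illustration, not a description of Zeitera’s or anyone else’s actual technology.

```python
def rolling_hashes(data: bytes, window: int = 8,
                   base: int = 257, mod: int = (1 << 61) - 1) -> list:
    """Rolling (Rabin-Karp) hash of every window-sized span of the
    payload, computed in O(1) per byte."""
    if len(data) < window:
        return []
    power = pow(base, window - 1, mod)
    h, out = 0, []
    for i, b in enumerate(data):
        if i >= window:
            h = (h - data[i - window] * power) % mod  # drop oldest byte
        h = (h * base + b) % mod                      # add newest byte
        if i >= window - 1:
            out.append(h)
    return out

def inspect_packet(payload: bytes, known: set) -> bool:
    """True if any span of the packet matches a registered fingerprint.
    The packet is examined as it passes; no copy is retained."""
    return any(h in known for h in rolling_hashes(payload))
```

The point of the sketch is that matching happens per packet, in line with normal routing, rather than by assembling and storing a complete file first.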

Another such rhetorical device is use of the term “underinclusive,” meaning technologies that let infringing content through instead of blocking it (i.e., false negatives) — as opposed to “overinclusive,” meaning false positives.  Content owners who favor filtering technologies are not necessarily looking to eliminate false negatives.  This is reminiscent of the copyleft canard that antipiracy technologies are worthless because they aren’t perfect.

Finally, the white paper makes various connections between copyright filtering and net neutrality that are conspiracy-theoretical stretches. One example is the discussion about using filtering to slow down or speed up traffic through networks.  I am not aware of any copyright filtering discussion that encompasses bandwidth throttling.

There are indeed serious concerns about copyright filtering, many of which this white paper raises effectively.  Network efficiency and false positives that abridge fair use rights are the two big ones.  Some of the technologies that this white paper claims are being considered for copyright filtering are just bad ideas, such as traffic pattern analysis, architecture-based filtering (e.g. P2P), and protocol-based filtering (e.g. BitTorrent).  But an exposition of the negative aspects of this type of technology should at least lay out the arguments without resorting to trial-lawyer-esque rhetorical devices and factual gaps.

I’m also skeptical of any legally mandated technological scheme for controlling copyright.  Ultimately, assuming that the technology can be made to work adequately, the use of copyright filters ought to be a matter of economics and private sector deliberation — something that the movie and user-generated-content industries have already attempted.  Public Knowledge’s white paper does address the most important economic principle, namely the question of who pays for the technology.  Any scheme that saddles consumers with the burden of cost for copyright filtering, such as the one proposed in the UK’s Digital Britain report in January, is inherently flawed.  Private sector deliberations over copyright filtering should use this as a starting point if they are going to arrive at a workable solution.

