Copyright Alert System Launches in U.S. | February 25, 2013 | Posted by Bill Rosenblatt in Fingerprinting, Law, Music, Video.
With today’s launch of the Copyright Alert System (CAS) by the Center for Copyright Information, the United States joins the list of countries that have adopted a so-called graduated response system for educating Internet users about online copyright infringement and taking steps to punish repeat offenders. The CAS is finally launching after a few months’ delay, part of which was supposedly due to the effects of Sandy, the mega-storm that hit the northeast U.S. late last year. Other graduated response countries include France, New Zealand, and South Korea; the United Kingdom is currently struggling with its own implementation.
The CAS is a partnership between music and video content owners on the one hand and major ISPs on the other. The content owner representatives include not just the majors (RIAA and MPAA) but also the Independent Film and Television Alliance (IFTA) and American Association of Independent Music (A2IM). On the ISP side, membership includes the five largest providers: AT&T, Verizon, Time Warner Cable, Comcast, and Cablevision. Book and game publishers are not involved at this point.
The CAS is run by Jill Lesser, a tech policy veteran with deep experience on both the content and ISP sides. It has an advisory board whose principal function seems to be to curb abuses: it includes advocates for looser copyright laws (Gigi Sohn of Public Knowledge) and user privacy (Jules Polonetsky of the Future of Privacy Forum).
The CAS works similarly to other graduated response regimes: copyright owners employ infringement monitoring services, which can identify copyrighted works as users send them around the Internet using fingerprinting and other content recognition technologies. The monitoring services send notices to ISPs, which issue warning messages to users. The warnings get stronger with repeat infringements.
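The escalation flow described above can be sketched in code. This is a hypothetical illustration of a graduated-response workflow, not the CAS's actual implementation (which is not public); all names and alert wordings here are invented.

```python
from collections import defaultdict

# Escalating alert levels, from gentle education to "mitigation measures"
ALERT_LEVELS = [
    "educational notice",
    "educational notice (repeat)",
    "acknowledgement required",
    "acknowledgement required (repeat)",
    "mitigation: bandwidth throttling or copyright video",
    "mitigation: bandwidth throttling or copyright video (final)",
]

class AlertSystem:
    def __init__(self):
        self.strikes = defaultdict(int)  # subscriber id -> alert count

    def report_infringement(self, subscriber_id):
        """Record a notice from a monitoring service and return the ISP action."""
        level = min(self.strikes[subscriber_id], len(ALERT_LEVELS) - 1)
        self.strikes[subscriber_id] += 1
        return ALERT_LEVELS[level]

cas = AlertSystem()
first = cas.report_infringement("subscriber-42")
print(first)  # educational notice
```

Note that the warning level is a function of the subscriber's accumulated strike count, which is why the same notice from a monitoring service produces a stronger response the second or third time around.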
ISPs can opt to punish repeat alleged offenders by such means as throttling bandwidth and making users watch videos about copyright. (ISPs already have policies for terminating repeat infringers’ accounts, which they must have in order to maintain their eligibility for the DMCA safe harbor.)
Where the CAS differs from other graduated response systems is that it is not tied to law enforcement. The arrangement between content owners and ISPs is voluntary. ISPs will not terminate or suspend users’ Internet accounts, nor will they pass information about infringements on to copyright owners. Another difference is that the CAS is not being funded through taxes or levies on Internet service (although funding sources are confidential).
In other words, the CAS is a more purely educational approach than France’s HADOPI or other systems. Analysis of the CAS’s results will therefore be more useful in determining how successful education by itself can be in getting people to respect copyright. The hope is that education will do more than draconian statutory damages or blunt-instrument legislation.
Given how little effect those approaches have had, it may not be difficult to declare the Copyright Alert System a relative success in the years to come. As it is now, it seems like quite a reasonable system: it raises awareness about the importance of copyright by using advanced Internet technologies instead of relegating enforcement to outmoded nontechnical legal means; it is permeated with references to legal content sources; and it doesn’t cost users a thing.
Digimarc Acquires Attributor | December 4, 2012 | Posted by Bill Rosenblatt in Fingerprinting, Images, Publishing, Watermarking.
Digimarc announced yesterday that it has acquired Attributor Corp. Attributor, based in Silicon Valley, is one of a handful of companies that crawls the Internet looking for instances of copyrighted material that may be infringing, using a pattern-recognition technology akin to fingerprinting. Digimarc is a leader in digital watermarking technology, with a large and significant portfolio of IP in the space. The acquisition price was a total of US $7.5 Million in cash, stock, and contingent compensation.
This is a synergistic and strategically significant move for Digimarc. A few years ago, Digimarc had pruned its efforts to create products and services for digital media markets outside of still images. It had decided, in effect, to leave products and services to its IP licensees, companies such as Civolution of the Netherlands and MarkAny of South Korea. Attributor’s primary market is book publishing, with customers including four out of the “Big Six” trade book publishers as well as several leading educational and STM (scientific, technical, medical) publishers.
Digimarc intends to leverage Attributor’s relationships with book publishers to help it expand its watermarking technology into that market and to move into other markets such as magazine and financial publishing. The company cited the explosive growth in e-books as a reason for the acquisition.
Beyond that, Digimarc’s acquisition is another sign of the increasing importance of infringement monitoring services; the previous such sign came over the summer, when Thomson Reuters acquired MarkMonitor.
There are two reasons for this increase in importance. First is the rise of so-called progressive response legal regimes: copyright owners can monitor the Internet and submit data on alleged infringements to a legal authority, which sends users increasingly strong warning messages and, if they keep on infringing, potentially suspends their ISP accounts. The most advanced progressive response regime is HADOPI in France, early results from which are encouraging. The Copyright Alert System is supposedly gearing up for launch in the United States. A handful of other countries have progressive response in place or in process as well.
The second reason for the increasing importance of so-called piracy monitoring is that copyright owners are starting to realize the value of the data they generate, beyond catching infringers. Piracy is evidence of popularity of content — of demand for it. The data that these services generate can be valuable for analytics purposes, to see who is interested in the content and in what ways. Big Champagne, for example, has been supplying this type of data to the music industry for many years. Attributor has been working on a new service that integrates piracy data with social media analytics; Digimarc intends to integrate this into its own data offerings for the image market.
In fact, we’ll have a discussion on the value of piracy data tomorrow at Copyright and Technology NYC 2012. Leading the discussion will be Thomas Sehested of MarkMonitor. There’s little doubt he will be called upon to talk about his new competition.
Getty Images Launches Automated Rights Licensing for Photo Sharing Services | September 12, 2012 | Posted by Bill Rosenblatt in Fingerprinting, Images, Law, Rights Licensing, Services.
Getty Images announced on Monday a deal with SparkRebel, a site that calls itself a “collaborative fashion and shopping inspiration” — but is perhaps more expediently described as “Pinterest for fashionistas, with Buy buttons” — in which images that users post to the site are recognized and their owners compensated for the use. The arrangement uses ImageIRC technology from PicScout, the Israeli company that Getty Images acquired last year. ImageIRC is a combination of an online image rights registry and image recognition technology based on fingerprinting. It also uses PicScout’s Post Usage Billing system to manage royalty compensation.
Here’s how it works: SparkRebel users post images of fashion items they like to their profile pages. Whenever a user posts an image, SparkRebel calls PicScout’s content identification service to recognize it. If the service finds the image’s fingerprint in its database, it uses ImageIRC to determine the rights holders; then SparkRebel pays any royalty owed through Post Usage Billing. PicScout ImageIRC’s database includes the images of Getty, the largest stock image agency in the world. (Getty Images itself was sold just last month to Carlyle Group, the private equity giant, for over US $3 Billion.) In all, ImageIRC includes data on over 80 million images from more than 200 licensors, which can opt in to the arrangement with SparkRebel (and presumably to similar deals in the future).
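The upload flow just described — fingerprint the posted image, look it up in a rights registry, and record a royalty if a rights holder is found — can be sketched as follows. PicScout's actual APIs are not public, so every name here is invented for illustration, and a cryptographic hash stands in for real perceptual fingerprinting.

```python
import hashlib

# Toy "registry": fingerprint -> (rights holder, per-use royalty in cents)
RIGHTS_REGISTRY = {}

def fingerprint(image_bytes):
    # Stand-in for perceptual fingerprinting. A real fingerprint survives
    # resizing and recompression; a cryptographic hash does not, but it
    # keeps the sketch simple.
    return hashlib.sha256(image_bytes).hexdigest()

def register_image(image_bytes, rights_holder, royalty_cents):
    RIGHTS_REGISTRY[fingerprint(image_bytes)] = (rights_holder, royalty_cents)

def handle_upload(image_bytes, ledger):
    """Called when a user posts an image; returns the billed rights holder, if any."""
    match = RIGHTS_REGISTRY.get(fingerprint(image_bytes))
    if match is None:
        return None  # unrecognized image: no royalty owed
    holder, cents = match
    ledger.append((holder, cents))  # post-usage billing entry
    return holder

ledger = []
register_image(b"dress-photo", "Getty Images", 25)
billed = handle_upload(b"dress-photo", ledger)
print(billed)  # Getty Images
```

The key design point is that billing happens after the fact, on actual usage, rather than requiring a license negotiation before the user's post goes live.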
This deal is a landmark in several ways. It is the most practically useful application of image recognition to date, and it brings digital images into some of the same online copyright controversies that have long surrounded music, video, and other types of content.
Several content recognition platforms exist; examples include Civolution’s Teletrax service for video; Attributor for text; and Audible Magic, Gracenote, and Rovi for music. Many of these technologies were first designed for catching would-be infringers: blocking uploads and supplying evidence for takedown notices and other legal actions. Some of them evolved to add rights licensing functionality, so that when they find content on a website, blog, etc., instead of sending a nastygram, copyright owners can offer revenue-sharing or other licensing terms. The music industry has experimented with audio fingerprinting to automate radio royalty calculations.
The idea of extending a content identification and licensing service to user-posted content is also not new: Google’s Content ID technology for YouTube has led to YouTube becoming a major legal content platform and likely the largest source of ad revenue from music in the world. But while Content ID is exclusive to YouTube, PicScout ImageIRC and Post Usage Billing are platforms that can be used by any service that publishes digital images.
PicScout has had the basic technology components of this system for a while; SparkRebel merely had to implement some simple code in its photo-upload function to put the pieces together. So why don’t we see this on Pinterest, not to mention Flickr, Tumblr, and so many others?
The usual reason: money. Put simply, SparkRebel has more to gain from this arrangement than most other image-sharing sites. SparkRebel has to pay royalties on many of the images that its users post. Yet many of those images are of products that SparkRebel sells; therefore if an image is very popular on the site, it will cost SparkRebel more in royalties but likely lead to more commissions on product sales. Furthermore, a site devoted to fashion is likely to have a much higher percentage of copyrighted images posted to it than, say, Flickr.
Yet where there’s no carrot, there might be a stick. Getty Images and other image licensors have been known to be at odds with sites like Pinterest over copyright issues. Pinterest takes a position that is typical of social-media sites: that it is covered (in the United States) by DMCA 512, the law that enables them to avoid liability by responding to takedown notices — and as long as it responds to them expeditiously, it has no further copyright responsibility.
Courts in cases such as UMG v. Veoh and Viacom v. Google (YouTube) have also held that online services have no obligation to use content identification technology to deal with copyright issues proactively. Yet the media industry is trying to change this; for example, that’s most likely the ultimate goal of Viacom’s pending appeal in the YouTube case. (That case concerns content that users uploaded before Google put Content ID into place.)
On the other hand, the issue for site operators in cases like this is not just royalty payments; it’s also the cost of implementing the technology that identifies content and acts accordingly. A photo-sharing site can implement PicScout’s technology easily and (unlike analogous technology for video) with virtually no impact on its server infrastructure or the response time for users. This combined with the “make it easy to do the right thing” aspect of the scheme may bring the sides closer together after all.
The DMCA and Presidential Politics | July 29, 2012 | Posted by Bill Rosenblatt in Fingerprinting, Law, Music, United States.
A minor firestorm has hit the techblogosphere over the past several days regarding the removal of a Mitt Romney campaign ad on YouTube that contained a short clip of President Obama singing Al Green’s “Let’s Stay Together” (while at a campaign stop at the Apollo Theater in Harlem). Commentators used this as an occasion to blast an aspect of DMCA 512, the U.S. law that provides for “notice and takedown.” The knee-jerk reactions to this incident have been wrong-headed and a little bit depressing.
The law says that if a copyright owner sends a proper notice to a site operator (in this case Google for YouTube) about an unauthorized content item, then the operator may take the item down to avoid liability. The law enables the operator to provide counternotice but stipulates that the operator must wait 10 days after issuing the counternotice for a reply period before it can repost the item without risk of liability.
Sites like Public Knowledge and Ars Technica have focused on the fact that the five-second clip in the Romney ad is highly likely to be fair use, how dare BMG Music Publishing do this, etc., etc. Public Knowledge also complained that the counternotice period forced the political ad off the air for too long a time and thus constituted abuse of copyright.
There’s no question that the clip makes a fair use of the song snippet; the “fair use analyses” done by people like Public Knowledge’s Sherwin Siy are beside the point. More importantly, it’s wrong to blame the “evil music company” for instigating the takedown.
Here’s a much more likely explanation of what happened: The Obama campaign contacted the copyright owner and asked them to issue the takedown notice, as a tactical response to Romney’s attack ad. BMGMP issued the notice as a routine clerical matter, as it does all the time at the request of songwriters or their management. The notice triggered YouTube’s automated system, which took the clip down.
Mike Masnick at TechDirt — the only one here who appears to have done some actual investigation instead of mere grandstanding — noticed that other YouTube clips of Obama singing the song remained up for a while until they were taken down as well. He also found that other singers’ versions of the 1972 classic hit remained up. Masnick attributed this to overzealous lawyers at BMGMP “doubl[ing] down” on takedowns for the sake of consistency.
Uh, no. The truth, once again, most likely lies in campaign tactics. The Romney campaign (or allied interests) probably tried to re-post the ad several times with different titles or metadata. The Obama camp then responded by asking BMGMP to use YouTube’s automated Content ID scheme (based on fingerprinting), which would find all instances of the singing president and get them taken down as well. And once again, BMGMP would have handled this as a routine request. This was the only way that the Obamians could have ensured that the attack ad would not reappear.
It’s also worth pointing out here that the DMCA 512 does not obligate anyone to take content down; it only enables someone to avoid liability by doing so. YouTube automates 512 takedowns to minimize risk of liability and do so as efficiently as possible.
In other words, YouTube also responded to this situation in a routine fashion. I would venture to guess that if a lawyer at YouTube actually looked at BMGMP’s takedown notice, he or she would have left the clip up, secure in the knowledge that no one would bother to file an actual copyright lawsuit against it. (Similarly, I’m convinced that no one with a legal brain at BMGMP looked at this initially either.)
In other words, if anyone is liable for abuse of copyright — which is itself actionable — it’s the Obama campaign, which simply used routine mechanisms at both BMGMP and YouTube to accomplish its aims. (Disclosure: I plan to vote for Obama in November.) Otherwise, the errors were of omission, not commission; no actual human beings at BMGMP or YouTube appear to have thought or cared about, let alone considered the fair use implications of, this incident.
Meanwhile, clips of Obama’s Apollo Theater performance have been restored to YouTube. Yes, it took time, but that’s what you get when humans have to decide questions of fair use.
P.S. Romney’s ad has always been available elsewhere, just not on YouTube.
UK Digital Economy Bill Survives Last Legal Challenge | March 11, 2012 | Posted by Bill Rosenblatt in Fingerprinting, Law, UK.
The UK Court of Appeal last week dismissed a final attempt by two of the country’s largest ISPs, BT (British Telecom) and Talk Talk, to have the 2010 Digital Economy Act ruled illegal due to incompatibility with European law. There are various features of the Digital Economy Act, but as one result of this decision, the UK will become the next country to implement a graduated response regime similar to the Hadopi system in France.
Of course, the British Phonographic Industry (BPI), the equivalent to the RIAA in the United States, lost no time in hailing the decision and claiming almost total victory over ISPs in the two-year legal battle. But the word “almost” takes on an interesting resonance regarding the one point that the media industry didn’t really win: the apportionment of costs for the progressive response program.
As I keep saying (and thereby quoting the brilliant Jonathan Zittrain of Harvard Law School), the question of who pays is the “gravamen” — the essence or most serious part — of these disputes over copyright policing. The final Court of Appeal process revealed payment terms that otherwise got very little attention during the deliberations over the Digital Economy Act. It turns out that copyright owners have to pay 75% of the costs of running the network monitoring functionality, the judicial process, and appeals costs. ISPs have to pay 25% of the first two but none of the third cost category; the latter was the point that ISPs won.
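The apportionment just described is easy to work through with illustrative figures: rights holders pay 75% of the monitoring and judicial-process costs plus all appeals costs, while ISPs pay the remaining 25% of the first two categories. The pound amounts below are made up purely for the arithmetic; nothing in the decision specifies them.

```python
# Hypothetical annual costs, in pounds
monitoring, judicial, appeals = 1_000_000, 400_000, 100_000

# Copyright owners: 75% of monitoring + judicial, plus 100% of appeals
owners_share = 0.75 * (monitoring + judicial) + appeals

# ISPs: 25% of monitoring + judicial, none of appeals
isps_share = 0.25 * (monitoring + judicial)

print(owners_share, isps_share)  # 1150000.0 350000.0
```

On these (invented) numbers, rights holders would carry well over three-quarters of the total bill, which is the point of the "75% capitulation" argument below.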
The financial terms actually fall far short of the results that copyright owners would like to achieve in similar legal disputes. For example, Viacom would no doubt like YouTube (and other content-sharing sites) to pay all of the costs of enforcing copyright on their sites. Such costs would run into millions per year (whether in pounds or dollars).
By that standard, as far as this particular aspect of the Digital Economy Act is concerned, I would not call this a victory for copyright owners at all; I’d call it a 75% capitulation.
Yet I would also say that it’s good news for the industry in general. If copyright owners are responsible for the majority of costs of operating the progressive response system, then they will have an incentive to see that it runs accurately, fairly, and efficiently. If the technical mechanism for detecting infringers is too aggressive, then they will spend too much money on the appeals end (and deal with public outcry which could lead to repeal of the law). If it’s too loose, then they don’t catch infringers and waste their money. The onus for efficiency and accuracy will be on the content recognition and network monitoring vendor that is selected to run the system. If the technology doesn’t work well, the vendor will need to improve it or be (as they say over there) sacked. That’s as it should be.
These graduated response regimes are best viewed as experiments in reducing online copyright infringement, and they should be continued if an appropriate balance among accuracy, cost-efficiency, and fairness to the public can be found.
The missing piece in the Digital Economy Act is that copyright owners have no incentive to ensure that the technical mechanism does not disadvantage users by hindering the ISPs’ network performance. My understanding is that this aspect needs to be determined by Ofcom, the UK’s telecommunications regulator, and that this has not happened yet (feel free to correct me by comment if I’m wrong). Ofcom needs to ensure that technical mechanisms do not interfere with ISPs’ performance and that any disputes are resolved by facts and independent measurements. And if it turns out that ISPs need to install more equipment (e.g. faster servers or routers) to restore network efficiency, then copyright owners should contribute to those costs as well.
At a more abstract level, I’d say that copyright owners have been given a bigger prize than the Act itself: the right and responsibility, mandated by law, to ensure that these rights technologies work fairly and efficiently. (Copyright owners already pay network monitoring companies like MarkMonitor and Peer Media, but not as part of an institutionalized, nationwide infrastructure that is connected to legal apparatus.) This will be healthy for the rights technology industry.
In this way, the Digital Economy Act is an improvement over anticircumvention legislation, such as in the U.S. Digital Millennium Copyright Act, which gives vendors of DRM technology legal backstops so that they have limited accountability for how well their technologies actually work. True accountability only comes if the entity paying for the technology has no choice but to demand that it works well.
UltraViolet Gets Two Lifelines | January 12, 2012 | Posted by Bill Rosenblatt in Economics, Fingerprinting, Services, Standards, Video.
A panel at this week’s CES show in Las Vegas yielded two pieces of positive news for the DECE/UltraViolet standard, after a launch several months ago with Warner Bros. and its Flixster subsidiary that could charitably be called “premature.” Of the two news items, one is a nice to have, but the other is a game-changer.
Let’s get to the game-changer first: Amazon announced that a major Hollywood studio is licensing its content for UltraViolet distribution through the online retail giant. The Amazon executive didn’t name the studio, though many assume it’s Warner Bros. Even if it’s a single studio, the importance of this announcement to the likelihood of UltraViolet’s success in the market cannot be overstated.
Leaving aside UltraViolet’s initial technical glitches and shortage of available titles, the problem with UltraViolet from a market perspective had always been a lukewarm interest from online retailers. As I’ll explain, this hasn’t been a surprise, but Amazon’s new interest in UltraViolet could make all the difference.
UltraViolet is the “brand name” of a standard from a group called the Digital Entertainment Content Ecosystem (DECE), headed by Sony Pictures executive Mitch Singer. It implements a so-called rights locker for digital movies and other video content. Users can establish UltraViolet accounts for themselves and family members. Then they can obtain a movie in one format (say, Blu-ray) and be entitled to get it in other formats for other devices (say, a Windows Media file download for PCs). They can also stream the content to a web browser anywhere. The rights locker, managed by Neustar Inc., tracks each user’s purchases.
In other words, UltraViolet promises users format independence and a hedge against format obsolescence, while providing some protection for the content by requiring it to be packaged in several approved DRM and stream encryption schemes. It includes a few limitations on the number of devices and family members that can be associated with a single UltraViolet account, but in general UltraViolet is designed to make video content more portable and interoperable than, say, DVDs or iTunes downloads.
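The rights locker idea described above boils down to a simple data structure: a purchase is recorded once, independent of format, and any approved format can then be fulfilled against that entitlement. This is a minimal sketch under that assumption; the format names and account limits are illustrative, not UltraViolet's actual specification.

```python
# Formats approved for fulfillment (illustrative, not UltraViolet's real list)
APPROVED_FORMATS = {"blu-ray", "wmv-download", "stream"}

class RightsLocker:
    def __init__(self):
        self.entitlements = {}  # account -> set of title ids

    def record_purchase(self, account, title):
        """A purchase creates one format-independent entitlement."""
        self.entitlements.setdefault(account, set()).add(title)

    def can_fulfill(self, account, title, fmt):
        """Any approved format is allowed once the title is in the locker."""
        return (fmt in APPROVED_FORMATS
                and title in self.entitlements.get(account, set()))

locker = RightsLocker()
locker.record_purchase("smith-family", "inception")
print(locker.can_fulfill("smith-family", "inception", "stream"))  # True
```

The format independence falls out of the design: because the entitlement records the title rather than any particular encoding, a Blu-ray purchase and a download purchase produce the same locker entry.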
Five of the six major Hollywood studios (all but Disney*), plus the “major indie” Lionsgate, are participating in UltraViolet.
One of the design goals of UltraViolet was to ensure that no single retailer could attain a market share large enough to be able to control downstream economics — in other words, to avoid a replay of Apple’s dominance of digital music downloads (and possibly Amazon’s dominance of e-books). To do this, the DECE studios pushed for ways to thwart consumer lock-in by online retailers that would sell UltraViolet content.
The most important example of this is rights locker portability: users can access their rights lockers from any participating retailer. UltraViolet retailers must compete with each other through value-added features.
Amazon’s Kindle e-book scheme offers a good illustration of platform lock-in and how it differs from other features that a retailer can build or offer. If you buy an e-book on Amazon, you can download and read it on a wide variety of devices: not just Kindle e-readers but also iPads, iPhones, Android devices, BlackBerrys, PCs, and Macs — in other words, pretty much everything but other e-reader devices. You get e-book portability — it will even remember where you last left off if you resume reading an e-book on another device — but you are still tied to Amazon as a retailer. If you want to read the same e-book on a Nook, for example, you have to buy it separately from Barnes & Noble (and then you can read that e-book on your PC, Mac, iPhone, Android, etc.).
This lock-in gives Amazon power in the market as a retailer; it had 58% market share as of February 2011 (by comparison, Apple has over 70% of the music download market). UltraViolet wants to make it as difficult as possible for a single digital video retailer to assert such market power.
The downside of that policy has been a lack of enthusiasm among retailers to sell UltraViolet-licensed content — which entails significant development investment and operational expenses. A good shorthand way to evaluate the potential impact of a standards initiative is to look at the list of participants: what points in the value chain are represented, how many of the top companies in each category, and so on. In DECE’s case, members have included most of the major movie studios, plenty of consumer device makers, lots of DRM and conditional access technology vendors, and so on, but few big-name retailers… one of which (Best Buy) already had a different system for delivering digital video content via Sonic Solutions.
Warner Bros. tried to jump-start the UltraViolet ecosystem by acquiring Flixster, a movie-oriented social networking startup, adding digital video e-commerce capability, and using it as an UltraViolet retailer for a handful of Warner titles. This has been little more than a proof-of-concept test, which was plagued by some technical glitches and suboptimal user experience — all of which, according to Singer, have been fixed.
It would be unworkable for Hollywood to pin its hopes for its next big digital format on a small unknown retailer owned by one of the studios. It has been vitally necessary to attract a big-name retailer to both validate the concept and provide the necessary marketing and infrastructure footprints. There had been talk of Wal-Mart entering the UltraViolet ecosystem, although it already has its own video delivery scheme through VUDU. But otherwise, the membership list had been short on major retailers.
Of course, Amazon is the major-est online retailer of them all. And it so happens that Amazon’s digital video strategy is a good fit to UltraViolet in two ways. First, Amazon currently runs a streaming service (Amazon Instant Video), whereas UltraViolet is primarily focused on downloads, a/k/a Electronic Sell Through (EST): the idea of UltraViolet is to buy a download and only then be able to view it via streaming.
Second, Amazon Instant Video does not look particularly successful. Of course, Amazon does not reveal user numbers, but it is telling that Amazon included Instant Video Unlimited as a perk in its US $79/year Amazon Prime program… and that when people extol the virtues of Amazon Prime, they tend to emphasize the free overnight shipping but rarely the streaming video.
The biggest winner thus far in the paid online video sweepstakes is Netflix, with about 24 million subscribers as of mid-2011. Netflix’s subscription-on-demand model is most likely far more popular than Amazon Instant Video’s pay-per-view (except for Amazon Prime members) model. Thus Amazon may be looking for ways to improve its market position in video without having to hack away at the Netflix streaming juggernaut.
The video download market is in comparative infancy. It has no runaway market leader a la Netflix, or Apple in music. If this situation persists long enough, and if Amazon’s trial run with UltraViolet is successful, then other retailers might see UltraViolet as a viable format as well… precisely because it will make them better able to compete with the Online Retailing Gorilla.
Yet the other dimension of UltraViolet that is currently lacking is availability of titles. And that’s where the other CES announcement comes in. Samsung announced a “Disc to Digital” feature that it will incorporate into new Blu-ray players later this year. With this feature, users can slide in their Blu-ray discs or DVDs, and if the content is “eligible,” they can choose to have that content available in their UltraViolet rights lockers for delivery in any UltraViolet-compliant format.
The Disc to Digital feature is a collaboration between Flixster (i.e. Warner Bros.) as online retailer and Rovi as technology supplier. It works in a manner that is analogous to “scan and match” services for music such as Apple iTunes Match: it scans your DVD or Blu-ray disc, identifies the movie, and if the movie is available in the UltraViolet library of licensed content, gives you an UltraViolet rights locker entry for that movie. Rovi’s content identification technology and metadata library are undoubtedly at the heart of this scheme.
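The scan-and-match flow just described — identify the disc, check it against an "eligible" catalog, and grant a locker entry if it qualifies — can be sketched as follows. Rovi's identification service is treated as a black box here, and every identifier and title is invented for illustration.

```python
# Toy "eligible" catalog: disc id -> title (illustrative entries only)
ELIGIBLE_CATALOG = {"disc-id-001": "Example Warner Title"}

def disc_to_digital(disc_id, locker):
    """Return the title added to the locker, or None if the disc is ineligible."""
    title = ELIGIBLE_CATALOG.get(disc_id)
    if title is None:
        return None  # not an eligible title: no locker entry
    locker.add(title)
    return title

locker = set()
added = disc_to_digital("disc-id-001", locker)
print(added)  # Example Warner Title
```

As with iTunes Match, the content itself never needs to be ripped or uploaded; identification alone is enough to unlock delivery from the service's own licensed library.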
There are two catches: first, users will have to pay a “nominal” fee per disc for this service, which is even larger (and as yet unspecified) if they want it in high definition; second, it is limited to “eligible” content, and no one has offered a definition of “eligible” yet (beyond the fact that the content must come from one of the DECE participating studios). But surely the “eligible” catalog will exceed the current list (19 titles) by orders of magnitude, or the service will not be worth launching.
Nevertheless, these developments are very positive news for DECE/UltraViolet after months of embarrassments and bad press. DECE still has lots of work to do to make UltraViolet successful enough to be the major studios’ designated successor to Blu-ray, but at last it’s on track.
*Yes, I’m aware of the irony of using a tag line from “Who Wants to Be a Millionaire” in the title of this article: Disney owns the home entertainment distribution rights to that hit TV game show.
European High Court Says No to ISP-Level Copyright Filtering | November 28, 2011 | Posted by Bill Rosenblatt in Europe, Fingerprinting, Law, Music, Services.
Last Thursday the European Court of Justice (ECJ) ruled that ISPs cannot be required to filter traffic on their networks in order to catch copyright infringements. This ruling was the final step in the litigation between the Belgian music rights collecting society SABAM and the ISP Scarlet, but it is a landmark decision for all of Europe.
This ruling overturned the Belgian Court of First Instance, which four years ago required Scarlet to install filtering technology such as acoustic fingerprinting to monitor Internet traffic and block uploads of copyrighted material to the network. Scarlet appealed this decision to the Brussels Court of Appeals, which sought guidance from the ECJ.
The ECJ’s statement affirmed copyright holders’ rights to seek injunctions from ISPs like Scarlet to prevent copyright infringement, but it said that the Belgian court’s injunction requiring ISP-level copyright filtering went too far. It cited Article 3 of European Union Directive 2004/48, which states that “measures, procedures and remedies [for enforcing intellectual property rights] shall be fair and equitable, shall not be unnecessarily complicated or costly and not impose unreasonable time-limits or unwarranted delays.” The ECJ decided that the mechanism defined in the appeals court’s ruling did not meet these criteria.
The real issues here are the requirement that the ISP bear the cost and complexity of running the filtering technology, and the fact that running it would slow down the network for all ISP users. It’s easy to see how this would not meet the requirements in the above EU Directive.
This decision has direct applicability in the European Union, but its implications could reach further afield. For example, the issue currently being argued between Viacom and Google at the appeals court level in the United States boils down to the same thing: who bears the cost and responsibility to police copyrights on the Internet?
Of course, EU law doesn’t apply in the United States. In the Viacom/Google litigation, Google is relying on the “notice and takedown” portion of the Digital Millennium Copyright Act (DMCA), a/k/a section 512 of the US copyright law. This says that if a copyright holder (e.g., Viacom) sees one of its works online without its authorization, it can issue a notice to the network service provider to take the work down, and if it does so, it won’t be liable for infringement. Google’s argument is that it follows section 512 assiduously and therefore should not be liable.
Viacom’s task in this litigation is to convince the court that the DMCA doesn’t go far enough. More specifically, its argument is that the legislative intent behind the DMCA is not served well enough by the notice-and-takedown provisions, that network service providers should be required to take more proactive responsibility for policing copyrights on their services instead of requiring copyright owners to play the Whack-a-Mole game of notice and takedown.
The ECJ’s decision in SABAM v. Scarlet has no precedential weight in Viacom v. Google. But it may help get the Second Circuit Court of Appeals to focus on what Jonathan Zittrain of Harvard Law School has called the “gravamen” (which is legalese for “MacGuffin”) in this case: who should be paying for protecting copyrights.
Irdeto Acquires BayTSP October 24, 2011Posted by Bill Rosenblatt in Fingerprinting, Publishing, Services, Video.
Irdeto announced on Monday that it is acquiring the antipiracy services company BayTSP. Terms were not disclosed, but this is the culmination of a “strategic alternatives exploration” process that BayTSP had been engaging in for some time.
BayTSP monitors P2P networks, file-sharing services, and other places where unauthorized content might lurk and generates evidence that content owners can use to support legal action against infringers. It uses a range of technologies, including sophisticated network traffic analysis and fingerprinting. It has been one of a shrinking number of providers of such services as the industry has consolidated.
This is a good strategic fit for Irdeto in various ways. First, BayTSP will boost Irdeto’s existing antipiracy services; this will strengthen the company’s competitive positioning particularly against NDS, which is known to have robust antipiracy services to complement its content protection technologies. Second, BayTSP has made some recent forays into e-book antipiracy services, which will complement Irdeto’s own new content protection technology for the e-publishing market.
Yet the consolidation of antipiracy services within a major content protection company has interesting implications for the economics of content protection. Typically, copyright owners pay for antipiracy services such as those of BayTSP, Peer Media, and Attributor, but downstream entities such as network operators, online retailers, and device makers pay for content protection technologies such as conditional access and DRM. At the same time, pay TV operators are starting to launch services in which the content can go beyond the customer’s set top box, possibly onto their tablets, mobile handsets, and PCs. The question is: do pay TV operators believe it’s their responsibility to protect the content beyond the STB?
Irdeto will have to decide the answer to this question. Specifically: will it continue to charge content owners for BayTSP’s antipiracy services, or will it attempt to add to the fees it charges its operator customers? To put it more cynically, have Hollywood studios encouraged Irdeto to acquire BayTSP (as they encouraged Irdeto to buy BD+ Blu-ray content protection technology from Rovi just three months ago) so that they no longer have to pay for it?
Seen in this light, Irdeto’s acquisition of BayTSP becomes part of the company’s overall strategy to offer more comprehensive and higher-grade content protection services to pay TV operators, on the theory that they will pay more to get better protection. This is a risky strategy, but given the growing footprint that Irdeto has in the overall content protection market, it’s a risk that Irdeto can probably afford to take.
iTunes Match Goes Beta, and It Downloads September 2, 2011Posted by Bill Rosenblatt in Fingerprinting, Music, Services.
When Apple announced iCloud back in June, it announced an intriguing feature called iTunes Match. iTunes Match will scan users’ hard drives for music files and identify them using techniques such as acoustic fingerprinting and scanning ID3 metadata in MP3 files. If it identifies a track that’s in the massive iTunes library, it will download that track to the user’s Apple devices or PCs/Macs running iTunes software. Apple will charge US $24.95/year for iTunes Match. Earlier this week, Apple took it into beta and released it to developers.
Astute readers may have caught a very interesting word in the previous paragraph: download.
We had been speculating whether Apple would supply tracks to users’ devices by download or streaming; Apple itself had been ambiguous — I would say intentionally — on this point. A poll of Copyright and Technology readers suggested that streaming was the likely method, by more than a two-to-one margin. No: in the latest version of the beta, as of August 31, it’s downloading. (To be more precise: progressive downloading, meaning that the track starts playing shortly after the download starts.)
I imagine that stream vs. download was an issue in Apple’s licensing negotiations with the music industry leading up to the iTunes Match launch; and it’s possible that Apple may move to streaming at some point in the future. Royalty structures for downloads and streams differ. Streaming is cheaper yet requires much more technical infrastructure — although Apple supposedly owns such infrastructure as the result of its purchase of the streaming service Lala in late 2009.
The implications of iTunes Match as a downloading “cloud sync” service are worth considering, and they don’t look very favorable to the record companies. iTunes Match helps Apple tie users to the iTunes/iPod technology stack now that it no longer uses DRM — although all of the files involved are unencrypted and therefore easy to use in non-Apple music players.
At the same time, iTunes Match is essentially an amnesty service for people who have unauthorized music files. For $25 per year, you can get pristine, legal AAC-encoded copies of up to 2500 of your music files on all of your devices. That’s a penny a track to go legal and get the added convenience of music synced to all your Apple devices.
On the one hand, this service probably won’t appeal to hoarders — those people who have accumulated multi-terabyte hard drives full of dubiously legal content. 2500 tracks, roughly 250 albums’ worth, is not much for hoarders. It’s unlikely that many of them will be interested in paying $25 to ease worries about infringement for a small fraction of their holdings.
The use case that Apple (and record companies) most likely had in mind is, in fact, very much like the DRM use case: to apply to so-called casual copiers, who may have ripped a few of their friends’ CDs or downloaded the occasional track from a file-sharing network but would pay a modest amount for legal music plus the convenience of keeping it on multiple devices.
On the other hand, the opportunities for abuse — the analogs to DRM hacks — are interesting to contemplate.
Here’s one example. I presume that iTunes Match uses Gracenote’s music identification technology, because iTunes already uses Gracenote. Yet this is different from the usual content identification use case, in which it’s safe to assume that ID3 tags actually signify the music in the file. In other words, music ID technology typically looks for ID3 tags (or equivalent metadata in other file formats) first and stops if it finds them, otherwise it goes on to analyze the actual content in the file using acoustic fingerprinting.
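To make that two-step lookup concrete, here is a minimal Python sketch of the metadata-first identification flow. All of the function names and data structures are illustrative stand-ins — this is not Gracenote’s actual API, and real fingerprinting analyzes audio in ways a simple hash cannot:

```python
# Sketch of the metadata-first identification flow: trust embedded
# tags if present, otherwise fall back to acoustic analysis.
# (Hypothetical stand-ins, not any vendor's real API.)

def read_id3_tags(track):
    """Stub: return the file's embedded metadata, or None if untagged."""
    return track.get("id3")

def acoustic_fingerprint(track):
    """Stub: hash the raw audio. A real fingerprint is derived from
    acoustic features so that it survives re-encoding and noise."""
    return hash(track["audio"])

FINGERPRINT_DB = {}  # fingerprint -> canonical track title

def identify_track(track):
    # Step 1: if metadata is present, stop there -- this shortcut is
    # exactly what the fake-MP3 scenario below would exploit.
    tags = read_id3_tags(track)
    if tags is not None:
        return tags["title"]
    # Step 2: otherwise analyze the audio content itself.
    return FINGERPRINT_DB.get(acoustic_fingerprint(track), "unknown")

# A file with bogus audio but real-looking tags gets "identified"
# as the real track without the audio ever being examined:
fake = {"id3": {"title": "Hey Jude"}, "audio": "static"}
print(identify_track(fake))  # prints Hey Jude
```

The efficiency appeal of stopping at step 1 is obvious — tag parsing is nearly free, fingerprinting is not — which is why the shortcut is plausible, and why it leaves the gap described below.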
If iTunes Match comes across a music file, does it check to make sure that the music in the file is actually the music that the metadata describes? One would think not, because this would be inefficient. But in that case, it would be possible to create libraries of MP3 files that contain dummy MP3 data along with ID3 tags signifying actual music. Do you want a nice collection of a couple thousand tunes in your favorite genre? Just download this ZIP file of fake MP3s and run iTunes Match on them; you’ll get legal files of all those tracks on all of your Apple devices.
Although such dummy files would take some effort to create, they would be easy enough for non-techies to use with iTunes Match. To me this sounds just like a hack to a weak DRM, with one big difference: whereas it’s a crime to hack DRMs, this hack is perfectly legal. Furthermore, because the resulting files are unprotected, I would argue that this type of hack is a bigger problem for the record companies than it is for Apple — more so than a typical DRM hack would be.
iTunes Match is still in beta, with launch expected in the coming weeks. We’ll see whether this feature leads to more abuse than DRM hacks relative to the money that it puts in record companies’ pockets.
The 28-page paper describes the current state of the art of techniques for protecting video content delivered over pay television networks such as cable and satellite. The two primary theses of the white paper are:
- Pay TV often leads in content protection innovation over other media types and delivery modalities. That is because, among other reasons, it is a fairly rare case where the economic interests of content owners and service providers are aligned: content owners don’t want their content used without authorization, and pay-TV operators don’t want their signals stolen. Therefore pay-TV operators have incentives to implement strong and innovative content security solutions.
- Until recently, many content security schemes could be described as hack-it-and-it’s-broken (such as CSS for DVDs) or a cycle of hack-patch-hack-patch-etc. (such as AACS for Blu-ray or FairPlay for iTunes). Now technologies are available that break the hack-patch-hack-patch cycle, thereby decreasing total cost of ownership (TCO) and complexity.
The white paper starts with a brief history of content protection technologies for digital pay TV, starting with the adoption of the Digital Video Broadcasting (DVB) standard in 1994. Then it describes various newer technologies, including building blocks like ECC (elliptic curve cryptography), flash memory, and secure silicon; and it describes new techniques such as individualization, renewability, diversity, and whitebox cryptography. It ties these techniques together into the concept of security lifecycle services, which include breach response and monitoring.
The final section of the paper discusses fingerprinting and watermarking as two techniques that complement encryption as ways of finding unauthorized content “in the wild.”
My thanks to Irdeto for sponsoring this paper.