New Research to Be Presented at January Conference. November 30, 2015. Posted by Bill Rosenblatt in Events, Fingerprinting, Law, United States.
I am excited to announce that Copyright and Technology NYC 2016 will feature a special presentation of new research: Notice and Takedown in Everyday Practice: Robots, Artisans, and the Fight to Protect Copyrights, Expression and Competition on the Internet. This is a landmark study on how the Notice and Takedown provisions of Section 512 of U.S. copyright law work in practice. It is the result of many interviews with copyright holders, service providers, and copyright enforcement services, as well as analysis of large numbers of takedown notices submitted to the Chilling Effects database. Authors of the study are Jennifer Urban and Brianna Schofield of the Samuelson Law, Technology & Public Policy Clinic at Berkeley Law, and Joe Karaganis of The American Assembly at Columbia University. The talk at Copyright and Technology NYC 2016 on Tuesday, January 19th will be its first public presentation.
Until now, very little empirical research has been done on the effectiveness of the DMCA’s notice and takedown provisions in addressing copyright infringement as well as due process for notice targets. This talk will summarize research comprising three studies that draw back the curtain on notice and takedown: it gathers information on how online service providers and rightsholders experience and practice notice and takedown, examines over 100 million notices generated during a six-month period, and looks specifically at a subset of those notices that were sent to Google Image Search.
The findings suggest that whether notice and takedown “works” is highly dependent on who is using it and how it is practiced, though all respondents agreed that the Section 512 safe harbors remain fundamental to the online ecosystem. Perhaps surprisingly, a large portion of service providers still receive relatively few notices and process them by hand. For some major players, however, the scale of online infringement has led to automated systems that leave little room for human review or discretion, and in a few cases notice and takedown has been abandoned in favor of techniques such as content filtering. Further, a surprisingly high percentage of notices raise questions about their validity. The findings strongly suggest that the notice and takedown system is under strain but that there is no “one size fits all” approach to improving it. The study concludes with suggestions of various targeted reforms and best practices.
Please come and see this important research presentation on January 19th — register today! Early bird registration ends December 11.
Ninth Circuit Calls for Takedown Notices to Address Fair Use. September 15, 2015. Posted by Bill Rosenblatt in Fingerprinting, Law, Music.
This past Monday’s ruling from the Ninth Circuit Appeals Court in Lenz v. Universal Music Group, a/k/a the Dancing Baby Video case, is being hailed as an important one in establishing the role of fair use in the online world. The case involved a common enough occurrence: a homemade video clip of someone’s child, with music (Prince’s “Let’s Go Crazy”) in the background, posted to YouTube.* UMG sent a takedown notice, Stephanie Lenz sent a counter-notice, and an eight-year legal battle ensued. Monday’s ruling was not a decision on the defendant’s liability but merely a denial of summary judgment, meaning that the case will now go to trial.
The three-judge panel produced two important holdings: first, that fair use is really a user’s right, and not just an affirmative defense to a charge of infringement; and second, that copyright holders have to take fair use into account when issuing DMCA takedown notices. As we’ll discuss here, this will have some effect on copyright holders’ ability to use automated means to enforce copyright online.
Under the DMCA (Section 512 of U.S. copyright law), online service providers can avoid copyright liability if they respond to notices requesting that allegedly infringing material be taken down. Notices have to comply with legal requirements, one of which is a good faith belief that the user who put the work up online was not authorized to do so. This court now says that fair use is not merely a defense to a charge of infringement — to be asserted after the copyright holder files a lawsuit — but is actually a form of authorization.
It follows that the copyright holder must profess a good faith belief that the user wasn’t making a fair use of the work in order for a takedown notice to be valid. The court also held that this good faith belief can be “subjective” rather than based on objective facts; but it’s ultimately up to a jury to decide whether the complainant’s basis for its good faith belief is valid.
The question for us here is how this ruling will affect the technologies and automated processes that many copyright owners use to police their works online, often through copyright monitoring services like MarkMonitor, Muso, Friend MTS, Entura, and various others. These services use fingerprinting and other techniques to identify content online, create takedown notices from templates, and send them — many thousands per day — to online services. Page 19 of the Lenz decision contains a hint:
“We note, without passing judgment, that the implementation of computer algorithms appears to be a valid and good faith middle ground for processing a plethora of content while still meeting the DMCA’s requirements to somehow consider fair use. . . . For example, consideration of fair use may be sufficient if copyright holders utilize computer programs that automatically identify for takedown notifications content where: (1) the video track matches the video track of a copyrighted work submitted by a content owner; (2) the audio track matches the audio track of that same copyrighted work; and (3) nearly the entirety . . . is comprised of a single copyrighted work. . . . Copyright holders could then employ [humans] to review the minimal remaining content a computer program does not cull.” (Internal citations and quotation marks omitted.)
At the same time, another clue lies in pp. 31-32, in a footnote to Judge Milan Smith’s partial dissent:
“The majority opinion implies that a copyright holder could form a good faith belief that a work was not a fair use by utilizing computer programs that automatically identify possible infringing content. I agree that such programs may be useful in identifying infringing content. However, the record does not disclose whether these programs are currently capable of analyzing fair use. Section 107 specifically enumerates the factors to be considered in analyzing fair use. These include: ‘the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes’; ‘the nature of the copyrighted work’; ‘the amount and substantiality of the portion used in relation to the copyrighted work as a whole’; and ‘the effect of the use upon the potential market for or value of the copyrighted work.’ 17 U.S.C. § 107. For a copyright holder to rely solely on a computer algorithm to form a good faith belief that a work is infringing, that algorithm must be capable of applying the factors enumerated in § 107.”
To follow this ruling, takedown notices will now presumably have to contain language that describes the copyright holder’s good faith belief that the user who posted the file did not have a fair use right. This can be a “subjective” basis, and the source of that information cannot “solely” be a “computer algorithm.”
It is, of course, impossible for any computer algorithm to determine whether a copy of a file was made by fair use; there is no such thing as a “fair use deciding machine.” But that’s not what’s required here — only evidence that some (unspecified) portion of the four fair use factors were not met, other than “because I said so.” Two of the four factors are easy: “the nature of the copyrighted work” ought to be self-evident to the owner of the copyright, and today’s widely-used content recognition tools can determine whether “the amount and substantiality of the portion used” was the entire work. The majority in Lenz suggested that this latter factor “may be sufficient . . . for consideration of fair use.” Apart from that, for example, the fact that a file appears on a website touting “Free MP3 downloads!” and featuring banner ads could be cited as evidence of an “effect of the use upon the potential market for or value of the copyrighted work” or “the purpose and character of the use.”
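Stepping back to the three-condition middle ground the Lenz majority describes, it reads almost like pseudocode. Here is a minimal sketch; the threshold for “nearly the entirety” is a hypothetical assumption, and the match flags would come from a fingerprinting service:

```python
# Sketch of the automated pre-screen the Lenz majority describes.
# NEAR_ENTIRETY is a hypothetical threshold for "nearly the entirety";
# the video/audio match flags would come from a fingerprinting service.

NEAR_ENTIRETY = 0.9

def auto_takedown_candidate(video_match: bool, audio_match: bool,
                            matched_fraction: float) -> bool:
    """True: queue an automatic takedown notice.
    False: route the upload to human review instead."""
    return video_match and audio_match and matched_fraction >= NEAR_ENTIRETY

# An audio-only match -- like a home video with a song in the background,
# as in Lenz itself -- fails the pre-screen and falls to human review.
```

Under this scheme, humans only see the “minimal remaining content” that the algorithm doesn’t cull.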
In other words, some of the characterizations of a work as “not fair use” that are often written into lawsuit complaints (written by lawyers) may have to find their way into takedown notices (generated automatically by technology). As a practical matter, copyright monitoring services may want to produce takedown notices with more situation-specific information in order to pass the non-fair use test — such as characterizations of the online service or other circumstances in which works are found. This could require a greater number of different takedown notice templates and more effort to populate them with specifics before sending them to online services — yet the processes still ought to be automatable.
The upshot of the Lenz decision, then, is that copyright holders may have to go to somewhat more effort to generate automated takedown notices under the DMCA that will survive a court challenge. Just how much more effort and how much more verbiage in notices is necessary will be a subject for the Lenz trial and future litigations. But today’s basic paradigm of copyright monitoring services using content recognition algorithms and other technological tools to automate enforcement processes is likely to continue, largely unchanged.
*I had a very similar experience two years ago. I took a video of my daughter’s dance recital on my smartphone from the audience, and I posted it on YouTube under a private URL known only to her uncles and grandparents. UMG issued a takedown notice — on one of the three one-minute-long song samples used in that dance routine. I tried filing a counter-notice, which UMG denied; so I gave up and emailed the clip to the relatives. I suspect that no human ever analyzed this clip: the Jennifer Lopez track that UMG complained of was one of two tracks owned by UMG, while the other, a techno track by Basement Jaxx, is one that services like Shazam have a hard time recognizing.
Piracy of live-streamed sports events ceased to be “inside baseball” (pun intended) for the media industry last weekend with HBO’s broadcast of the Floyd Mayweather-Manny Pacquiao boxing match in the US market. Even in the mainstream media (such as here and here), it seems that the public’s ability to watch the fight online for free in close to real time got more attention than the fight itself.
This is why protection of live sports event streams is a growth area in the field of anti-piracy technology today. Broadcasters like HBO pay huge sums of money for exclusive rights to live sports; therefore they have big incentives to protect the streams from infringement. Recent articles in re/code and Mashable attempted — with limited success — to explain how HBO’s stream was massively pirated and how that piracy could possibly have been curtailed.
Both articles focused on the many pirated streams of the fight that were available on the Periscope app, which allows users to broadcast video in real time from their iOS devices, and is owned by Twitter. As Peter Kafka at re/code explained (accurately enough), it’s not possible to use fingerprint-based systems like Google’s Content ID with live event streams. Such systems depend on a service provider getting a copy of the content in advance so that it can take a “fingerprint” — a shorthand numerical representation of it — and use that to flag attempted user uploads of the same content later. By definition, no advance copy of a live event exists, so fingerprinting can’t be used.
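The advance-copy dependency can be illustrated with a toy sketch. Here `hashlib` is just a stand-in: a real fingerprinting system uses perceptual signatures that tolerate re-encoding, unlike a cryptographic hash, but the workflow is the same:

```python
# Toy illustration of why fingerprint matching needs the content in
# advance: the reference fingerprint must exist before uploads arrive.
# hashlib stands in for a real perceptual fingerprint algorithm here.
import hashlib

reference_db = {}  # fingerprint -> work ID, built from advance copies

def register(work_id: str, content: bytes) -> None:
    """Ingest an advance copy and store its fingerprint."""
    reference_db[hashlib.sha256(content).hexdigest()] = work_id

def check_upload(content: bytes):
    """Return the matched work ID, or None if there is nothing to match."""
    return reference_db.get(hashlib.sha256(content).hexdigest())

register("movie-123", b"advance copy of the film")
# A live fight stream has no advance copy to register, so check_upload
# has nothing in reference_db to match it against.
```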
Furthermore, just because a single service uses fingerprinting to block unauthorized uploads doesn’t mean that other services do. YouTube might block an upload thanks to Content ID, but that doesn’t prevent a user from putting the same file up on BitTorrent or a cyberlocker.
However, it is possible to use watermarks to flag content. HBO could insert watermarks into the live video as it goes out the door. Watermarks are much more efficient to detect and calculate than fingerprints, and a well-designed watermark can be detected even if the content is “camcorded” from a TV screen.
Two things can happen with watermarks. First, a cooperating service could agree to detect the watermark and block the content — or do something else, such as allow the content through, play an ad, and share the revenue with the rights holder, as Google does with Content ID. Second, a piracy monitoring service could detect watermarks of streams out in the wild (including on Periscope) and rapidly serve takedown notices on the services that are distributing the unauthorized streams, meaning that the services need not do anything proactive.
Given what Christina Warren at Mashable experienced (camcorded streams appearing on Periscope and then disappearing later), the latter probably happened. Several streaming providers and anti-piracy services use watermarks to aid detection of unauthorized copies of live streams. In the Caribbean market, for example, Netherlands-based pay-TV platform provider Cleeng carried the pay-per-view broadcast of the fight for Sportsmax TV, and it’s likely that Cleeng used its live-stream watermarking technology to protect the content. (Another anti-piracy provider, Irdeto, has similar technology but admitted to Bloomberg that it wasn’t working on the fight. That leaves Friend MTS as my guess for the provider that monitored the fight in other geographies such as Europe and North America.)
It is also possible to automate the process more fully by embedding so-called session-based watermarks that contain identifiers for the user accounts or devices that are receiving the content legally — such as set-top boxes receiving HBO over cable or satellite services. Session-based watermarks are used today with movies released in early windows in high definition, and Hollywood would like them to be used in all 4K/UHD movie distributions.
With session-based watermarks, a monitoring service can (in many cases) determine the device from which the unauthorized stream originated and inform the pay-TV provider, which can then shut off the signal to that device. The entire process would require no human intervention and take just a few seconds.
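That trace-and-shutoff chain can be sketched as follows. The payload format, subscriber table, and suspension message are all assumptions for illustration; real deployments vary by watermarking vendor and pay-TV operator:

```python
# Sketch of the automated trace-and-shutoff chain for session-based
# watermarks; data shapes here are illustrative assumptions.

# watermark payload (embedded per set-top box) -> paying subscriber
subscriber_db = {
    0x2A17: {"account": "alice@example.com", "device": "STB-9912"},
}

def trace_stream(decoded_payload: int):
    """Map a payload decoded from a pirate stream back to the paying
    subscriber's device, or None if the payload is unknown."""
    return subscriber_db.get(decoded_payload)

def shutoff_request(decoded_payload: int) -> dict:
    """Build the message a monitoring service would send the pay-TV
    provider; unknown payloads are escalated for manual review."""
    hit = trace_stream(decoded_payload)
    if hit is None:
        return {"action": "manual-review", "payload": decoded_payload}
    return {"action": "suspend", "device": hit["device"]}
```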
But with Periscope-style camcording, this could lead to the following interesting situation: Alice invites some friends over to watch the fight on her big-screen TV and pays the $100 fee to HBO through her cable company. Everyone sits down, and the fight starts. Bob pulls out his iPhone and fires up Periscope. A few seconds later, the TV goes blank or displays a warning message about possible copyright infringement. Alice calls her cable company and finds herself on hold, waiting behind the hundreds or thousands of other furious customers to whom the same thing happened.
Ergo I don’t believe HBO can require pay-TV providers to use session-based watermarking to protect its live events. The situation with live sports is different from early-window HD movies: movies have already been in theaters (where they have been camcorded), and for live events, users value the timeliness of Periscope-style camcords despite their often questionable quality.
What also clearly did not happen is that HBO made a deal with Twitter to detect the watermarks and block the live Periscope streams. As both the Mashable and re/code articles note, Twitter/Periscope experienced a ton of traffic before, during, and after the event, much of which was “second-screen” in nature, such as commentary on the fight and the fighters. Yet Google’s Content ID showed that a service provider could be willing to detect copyrighted material proactively if given sufficient incentive. If the likes of HBO can find sufficient incentives — cross-promotion, ad revenue share, or something else — then the Periscopes of the world might be inclined to follow in Google’s footsteps.
Digimarc Launches Social DRM for E-books. September 17, 2014. Posted by Bill Rosenblatt in Fingerprinting, Publishing, Technologies.
Digimarc, the leading supplier of watermarking technology, announced this week the release of Digimarc Guardian Watermarking for Publishing, a transactional watermarking (a/k/a “social DRM”) scheme that complements its Guardian piracy monitoring service. Launch customers include the “big five” trade publisher HarperCollins, a division of News Corp., and the e-book supply chain company LibreDigital, a division of the printing giant RR Donnelley that distributes e-books for HarperCollins in the US.
With this development, Digimarc finally realizes the synergies inherent in its acquisition of Attributor almost two years ago. Digimarc’s roots are in digital image watermarking, and it has expanded into watermarking technology for music and other media types. Attributor’s original business was piracy monitoring for publishers via a form of fingerprinting — crawling the web in search of snippets of copyrighted text materials submitted by publisher customers.
One of the shortcomings in Attributor’s piracy monitoring technology was the difficulty in determining whether a piece of text that it found online was legitimately licensed or, if not, whether it was likely to be a fair use copy. Attributor could use certain cues from surrounding text or HTML to help make these determinations, but they are educated guesses and not infallible.
The practical difference between fingerprinting and watermarking is that watermarking requires the publisher to insert something into its material that can be detected later, while fingerprinting doesn’t. But watermarking has two advantages over fingerprinting. One is that it provides a virtually unambiguous signal that the content was lifted wholesale from its source; thus a copy of content with a watermark is more likely to be infringing. The other is that while fingerprinting can be used to determine the identity of the content, watermarking can be used to embed any data at all into it (up to a size limit) — including data about the identity of the user who purchased the file.
The Digimarc Guardian watermark is complementary to the existing Attributor technology; Digimarc has most likely adapted Attributor’s web-crawling system to detect watermarks in addition to using fingerprinting pattern-matching techniques to find copyrighted material online.
Digimarc had to develop a new type of watermark for this application, one that’s similar to those of Booxtream and other providers of what Bill McCoy of the International Digital Publishing Forum has called “social DRM.” Watermarks do not restrict or control use of content; they merely serve as forensic markers, so that watermark detection tools can find content in online places (such as cyberlockers or file-sharing services) where they probably shouldn’t be.
A “watermark” in an e-book can consist of text characters that are either plainly visible or hidden among the actual material. The type of data most often found in a “social DRM” scheme for e-books likewise can take two forms: personal information about the user who purchased the e-book (such as an email address) or an ID number that the distributor can use to look up the user or transaction in a database and is otherwise meaningless. (The idea behind the term “social DRM” is that the presence of the watermark is intended to deter users from “oversharing” files if they know that their identities are embedded in them.) The Digimarc scheme adopted by LibreDigital for HarperCollins uses hidden watermarks containing IDs that don’t reveal personal information by themselves.
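As an illustration of how a hidden text watermark can carry an otherwise meaningless ID, here is a sketch using zero-width Unicode characters. This is one common technique for invisible text marking, not necessarily Digimarc’s actual method:

```python
# Illustrative only -- NOT Digimarc's actual scheme. Hide a transaction
# ID in e-book text as zero-width characters, one bit per character.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_id(text: str, txn_id: int, bits: int = 32) -> str:
    """Encode txn_id (least significant bit first) invisibly, just
    after the first visible character of the text."""
    mark = "".join(ZW1 if (txn_id >> i) & 1 else ZW0 for i in range(bits))
    return text[:1] + mark + text[1:]

def extract_id(marked: str, bits: int = 32) -> int:
    """Recover the ID, which a distributor can look up in its
    transaction database; the ID alone reveals nothing personal."""
    payload = [c for c in marked if c in (ZW0, ZW1)][:bits]
    return sum((c == ZW1) << i for i, c in enumerate(payload))
```

The marked text displays identically to the original, which is the point: the reader sees nothing, but a detection crawler can recover the ID.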
In contrast, the tech publisher O’Reilly Media uses users’ email addresses as visible watermarks on its DRM-free e-books. Visible transactional watermarking for e-books dates back to Microsoft’s old Microsoft Reader (.LIT) scheme in the early 2000s, which gave publishers the option of embedding users’ credit card numbers in e-books — information that users surely would rather not “overshare.”
HarperCollins uses watermarks in conjunction with the various DRM schemes in which its e-books are distributed. The scheme is compatible with EPUB, PDF, and MOBI (Amazon Kindle) e-book formats, meaning that it could possibly work with the DRMs used by all of the leading e-book retailers.
However, it’s unclear which retailers’ e-books will actually include the watermarks. The scheme requires that LibreDigital feed individual e-book files to retailers for each transaction, rather than single files that the retailers then copy and distribute to end users; and the companies involved haven’t specified which retailers work with LibreDigital in this particular way. (I’m not betting on Amazon being one of them.) In any case, HarperCollins intends to use the scheme to gather information about which retailers are “leaky,” i.e., which ones distribute e-books that end up in illegal places online.
Hollywood routinely uses a combination of transactional watermarks and DRM for high-value content, such as high-definition movies in early release windows. And at least some of the major record labels have used a simpler form of this technique in music downloads for some time: when they send music files to retailers, they embed watermarks that indicate the identity of the retailer, not the end user. HarperCollins is unlikely to be the first publisher to use both “social DRM” watermarks and actual DRM, but it is the first one to be mentioned in a press release. The two technologies are complementary and have been used separately as well as together.
Copyright Alert System Releases First Year Results. June 10, 2014. Posted by Bill Rosenblatt in Europe, Fingerprinting, Law, United States, Watermarking.
The Center for Copyright Information (CCI) released a report last month summarizing the first calendar year of activity of the Copyright Alert System (CAS), the United States’ voluntary graduated response scheme for involving ISPs in flagging their subscribers’ alleged copyright infringement. The report contains data from CAS activity as well as results of a study that CCI commissioned on consumer attitudes in the US towards copyright and file sharing.
The CAS issues alerts in three escalating categories (educational, acknowledgement, and mitigation), with two alerts in each category for a total of six. The three categories make it easier to compare the CAS with “three strikes” graduated response regimes in other countries. As I discussed recently, the CAS’s “mitigation” penalties are very minor compared to punitive measures in other systems such as those in France and South Korea.
The CCI’s report indicates that during the first ten months of operation, it sent out 1.3 million alerts. Of these, 72% were “educational,” 20% were “acknowledgement,” and 8% were “mitigation.” The CAS includes a process for users to submit mitigation alerts they receive to an independent review process. Only 265 review requests were sent, and among these, 47 (18%) resulted in the alert being overturned. Most of these 47 were overturned because the review process found that the user’s account had been used by someone else without the user’s authorization. In no case did the review process turn up a false positive, i.e., a shared file that was not actually an unauthorized use of copyrighted material.
It’s particularly instructive to compare these results to France’s HADOPI system. This is possible thanks to the detailed research reports that HADOPI routinely issues. Two of these were presented at our Copyright and Technology London conferences and are available on SlideShare (2012 report here; 2013 report here). Here is a comparison of the percent of alerts issued by each system at each of the three levels:
| Alert Level | HADOPI 2012 | HADOPI 2013 | CAS 2013 |
Of course these comparisons are not precise; but it is hard not to draw an inference from them that threats of harsher punitive measures succeed in deterring file-sharing. In the French system — in which users can face fines of up to €1500 and one year suspensions of their Internet service — only 0.03% of those who received notices kept receiving them up to the third level, and only a tiny handful of users actually received penalties. In the US system — where penalties are much lighter and not widely advertised — almost 8% of users who received alerts went all the way to the “mitigation” levels. (Of that 8%, 3% went to the sixth and final level.)
Furthermore, while the HADOPI results are consistent from 2012 to 2013, they reflect a slight upward shift in the number of users who receive second-level notices, while the percent of third-level notices — those that could involve fines or suspensions — remained constant. This reinforces the conclusion that actual punitive measures serve as deterrents. At the same time, the 2013 results also showed that while the HADOPI system did reduce P2P file sharing by about one-third during roughly the second year of the system’s operation, P2P usage stabilized and even rose slightly in the two years after that. This suggests that HADOPI has succeeded in deterring certain types of P2P file-sharers but that hardcore pirates remain undeterred — a reasonable conclusion.
It will be interesting to see if the CCI takes this type of data from other graduated response systems worldwide — including those with no punitive measures at all, such as the UK’s planned Vcap system — into account and uses it to adjust its level of punitive responses in the Copyright Alert System.
The BBC has discovered documents that detail a so-called graduated response program for detecting illegal downloads done by customers of major UK ISPs and sending alert messages to them. The program is called the Voluntary Copyright Alert Programme (Vcap). It was negotiated between the UK’s four major ISPs (BT, Sky, Virgin Media, and TalkTalk) and trade associations for the music and film industries, and it is expected to launch sometime next year.
Vcap is a much watered-down version of measures defined in the Digital Economy Act 2010, in that it calls only for repeated “educational” messages to be sent to ISP subscribers and for no punitive measures such as suspension or termination of their accounts.
In general, graduated response programs work like this: copyright owners engage network monitoring firms to monitor ISPs’ networks for infringing behavior. Monitoring firms use a range of technologies, including fingerprinting to automatically recognize content that users are downloading. If they find evidence of illegal behavior, they report it to a central authority, which passes the information to the relevant ISP, typically including the IP address of the user’s device. The ISP determines the identity of the targeted subscriber and takes some action, which depends on the details of the program.
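That reporting chain can be sketched as follows. The prefix-based IP-to-ISP lookup is a deliberate simplification (real systems resolve addresses via WHOIS or routing data), and all names here are illustrative:

```python
# Hedged sketch of the graduated-response reporting chain described
# above: monitoring firm -> central authority -> relevant ISP.

def route_report(ip: str, evidence: str, isp_ranges: dict, alerts: list) -> bool:
    """isp_ranges maps an IP prefix to an ISP name (a stand-in for real
    IP-to-ISP resolution); alerts collects the notice each ISP would
    then match to a subscriber and act on per its program rules."""
    for prefix, isp in isp_ranges.items():
        if ip.startswith(prefix):
            alerts.append({"isp": isp, "ip": ip, "evidence": evidence})
            return True
    return False  # no participating ISP owns this address
```

Only the ISP ever maps the IP address to a subscriber identity; the copyright owner and central authority see just the address and the evidence.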
In some cases (as in France and South Korea), the central authority is empowered to force the ISP to take punitive action; in other cases (as in the United States’ Copyright Alert System (CAS) as well as Vcap), ISPs take action voluntarily.
Assuming that Vcap launches on schedule, we could soon have data points about the effectiveness of various types of programs for monitoring ISP subscribers’ illegal downloading behaviors. The most important question to answer is whether truly punitive measures really make a difference in deterring online copyright infringement, or whether purely “educational” measures are enough to do the job. Currently there are graduated response programs in South Korea, New Zealand, Taiwan, and France that have punitive components, as well as one in Ireland (with Eircom, the country’s largest ISP) that is considered non-punitive.
Is America’s CAS punitive or educational? That’s a good question. CAS has been called a “six strikes” system (as opposed to other countries’ “three strikes”), because it defines six levels of alerts that ISPs must generate, although ISPs are intended to take “mitigation measures” against their subscribers starting at the fifth “strike.” What are these mitigation measures? It’s largely unclear. The CAS’s rules are ambiguous and leave quite a bit of wiggle room for each participating ISP to define its own actions.
Instead, you have to look at the policies of each of the five ISPs to find details about any punitive measures they may take — information that is often ambiguous or nonexistent. For example:
- AT&T: its online documentation contains no specifics at all about mitigation measures.
- Cablevision (Optimum Online): its policy is ambiguous, stating that it “may temporarily suspend your Internet access for a set period of time, or until you contact Optimum.” Other language in Cablevision’s policy suggests that the temporary suspension period is 24 hours.
- Comcast (Xfinity): Comcast’s written policy is also ambiguous, saying only that it will continue to post alert messages until the subscriber “resolve[s] the matter” and that it will never terminate an account.
- Time Warner Cable: also ambiguous but suggesting nothing on the order of suspension or termination, or bandwidth throttling. It states that “The range of actions may include redirection to a landing page for a period or until you contact Time Warner Cable.”
- Verizon: Verizon’s policy is the only one with much specificity. On the fifth alert, Verizon throttles the user’s Internet speed to 256kbps — equivalent to a bottom-of-the-line residential DSL connection in the US — for a period of two days after a 14-day advance warning. At the sixth alert, it throttles bandwidth for three days.
In other words, the so-called mitigation measures are not very punitive at all, not even at their worst — at least not compared to these penalties in other countries:
- France: up to ISP account suspension for up to one year and fines of up to €1500 (US $2000), although the fate of the HADOPI system in France is currently under legal review.
- New Zealand: account suspension of up to six months and fines of up to NZ $15,000 (US $13,000).
- South Korea: account suspension of up to six months.
- Taiwan: suspension or termination of accounts, although the fate of Taiwan’s graduated response program is also in doubt.
[Major hat tip to Thomas Dillon’s graduatedresponse.org blog for much of this information.]
In contrast, Vcap will be restricted to sending out four alerts that must be “educational” and “promot[e] an increase in awareness” of copyright issues. Vcap is intended to run for three years, after which it will be re-evaluated — and if judged to be ineffective, possibly replaced with something that more closely resembles the original, stricter provisions in the Digital Economy Act. By 2018, the UK should also have plenty of data to draw on from other countries’ graduated response regimes about any relationship between punitive measures and reduced infringements.
Getty Images Reaches Image License Deal with Pinterest. October 28, 2013. Posted by Bill Rosenblatt in Fingerprinting, Images, Rights Licensing.
A year ago, Getty Images, one of the world’s largest stock image agencies, reached a licensing deal with a startup called SparkRebel, which I described as “Pinterest for fashionistas, with Buy buttons.” On that site, people would post images of items of clothing they’re interested in. An image recognition engine would try to identify the photo and thus the identity of each apparel item. If the item was identified and its manufacturer had a deal with SparkRebel, the site would show a Buy button, which users could click to purchase the item. It was a clever use of content identification technology to support licensing of content used for commercial purposes.
SparkRebel used Getty Images’ ImageIRC image recognition technology. ImageIRC uses the concept of fingerprinting: it examines an image, calculates a set of numbers that represent the image, and looks those numbers up in a database of fingerprints to see if it finds a match. Matches needn’t be exact; the fingerprinting algorithm can usually compute the correct fingerprint even if the image has been color-shifted, downsampled, cropped (up to a point), etc. In other words, ImageIRC is to still images as Google’s Content ID is to YouTube videos and Audible Magic’s technology is to various sites that host music files.
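To illustrate the fingerprinting concept — this is a classic toy technique ("average hashing"), not ImageIRC's actual algorithm — here is a minimal sketch showing why a fingerprint can survive transformations like a uniform brightness shift:

```python
def average_hash(pixels):
    """Compute a simple 'average hash' fingerprint of a grayscale image,
    given as a list of rows of 0-255 pixel values.

    Each bit records whether a pixel is brighter than the image's mean,
    so the hash survives uniform brightness shifts and mild recompression.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two fingerprints (0 = match)."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x4 "image" and a brightness-shifted copy of it.
original = [
    [200, 200,  40,  40],
    [200, 200,  40,  40],
    [ 40,  40, 200, 200],
    [ 40,  40, 200, 200],
]
brighter = [[min(p + 30, 255) for p in row] for row in original]

fp1 = average_hash(original)
fp2 = average_hash(brighter)
# The shifted copy still fingerprints identically: Hamming distance 0.
```

Production systems use far more sophisticated features, but the principle is the same: reduce the image to a compact, perturbation-tolerant signature and compare signatures rather than pixels.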
In Getty Images’ deal with SparkRebel, SparkRebel would pay Getty Images a licensing fee whenever a user posted an image to which Getty owned the rights. Those of us who watched this deal at the time wondered if Getty Images was trying to get Pinterest — the leading site where users posted images of commercial products — to agree to a similar deal. Given Getty Images’ firm “no comment” replies to questions about it, the answer was clearly yes. Many of the photos posted on Pinterest (as opposed to, say, Instagram) are commercial images copied and pasted from other websites, so Getty could have made a case that Pinterest was promoting infringement of its copyrights.
It took a while, but Getty Images did conclude a licensing deal with Pinterest last Friday — a few months after SparkRebel ceased operations. Under the deal, whenever ImageIRC finds a match to an image that a user “pins” on Pinterest, Pinterest will pay Getty Images a licensing fee, just as with SparkRebel. The additional feature of the deal is that Getty Images will send Pinterest metadata about the matched image, which Pinterest can display for the user. The metadata includes the time and location of the photo, the identity of the photographer, caption, an image ID, and licensing information.
Neither Getty nor Pinterest has mentioned anything about blocking or flagging images that users aren’t permitted to pin to the site; Pinterest still allows users to post any photos, regardless of the terms under which Getty normally licenses them. Pinterest continues to follow DMCA 512 policies of responding to takedown notices and terminating the accounts of users who repeatedly violate copyrights.
Pinterest’s announcement of the deal on its blog mentions the license fees, but otherwise does not mention any copyright issues; instead it focuses on “New data to help improve Pinterest.” Putting the fees aside, the deal is a win for Pinterest as well as Getty Images (not to mention Pinterest’s user community).
For Getty Images, this deal establishes an important precedent for image-sharing services that store lots of professional images and use them for commercial purposes. Other services that use images to drive commerce will likely follow Pinterest’s example and make licensing deals with Getty Images. But Getty gets another benefit besides money that could turn out to be just as important: distribution of image metadata.
One of the biggest problems that the stock image industry has with the Internet is that most ways of copying images from one place to another strip metadata away. When photographers and editors prepare images for distribution, they use tools like Adobe Photoshop, which incorporates Adobe’s XMP (eXtensible Metadata Platform) scheme for storing metadata that travels with images. XMP metadata can be stored alongside images on web pages, but it doesn’t survive copying and pasting photos through web browsers.
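To see why this metadata is both embeddable and fragile: an XMP packet is stored as plain XML bytes inside the image file itself, bracketed by recognizable markers, so it can be located with a simple byte scan. Here is a minimal sketch (a byte scan for illustration, not Adobe's XMP SDK):

```python
def extract_xmp(data: bytes):
    """Pull the XMP packet out of raw image bytes, if one is present.

    XMP packets are delimited by <x:xmpmeta ...> ... </x:xmpmeta> markers
    embedded directly in the file, so a byte scan can find them without
    decoding the image format itself.
    """
    start = data.find(b"<x:xmpmeta")
    if start == -1:
        return None  # no packet: the metadata was never there, or was stripped
    end = data.find(b"</x:xmpmeta>", start)
    if end == -1:
        return None  # truncated packet
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", "replace")

# A fabricated file with a packet, and a "pasted" copy that lost it.
with_packet = (b"\xff\xd8...image data..."
               b'<x:xmpmeta xmlns:x="adobe:ns:meta/">'
               b"<dc:creator>Jane Photographer</dc:creator>"
               b"</x:xmpmeta>...more data...")
stripped_copy = b"\xff\xd8...image data only..."
```

Calling `extract_xmp(with_packet)` recovers the photographer credit, while `extract_xmp(stripped_copy)` returns `None` — the situation most web copy-and-paste operations leave images in.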
It is actually illegal under section 1202 of the Digital Millennium Copyright Act to intentionally remove “copyright management information” from a copyrighted work in order to evade detection of infringement, though there is some ambiguity over issues such as what qualifies as copyright management information. Nevertheless, images that users copy and paste among websites generally have no copyright management information.* Getty Images’ arrangement with Pinterest recovers metadata for images posted to the site that match its database. This certainly won’t solve the image metadata problem in general, but it’s a start.
*Some images may have invisible embedded watermarks that indicate copyright management information. Typically such watermarks will contain IDs that point to entries in image licensors’ databases, which in turn contain things like the photographer’s name, licensing terms, and so on. Whether invisible embedded watermarks qualify as copyright management information under DMCA 1202 is somewhat up in the air. If a high enough court decided that they do, that could make tools for hacking “social DRM” e-book watermarks illegal in the United States.
Comcast Adds Carrots to Sticks August 9, 2013Posted by Bill Rosenblatt in Fingerprinting, Services, Video.
1 comment so far
Variety magazine reported earlier this week that Comcast is developing a new scheme for detecting illegal file downloads over its Internet service. When it detects a user downloading content illegally, it will send a message to the user with links to legal alternatives, including from sources that aren’t Comcast properties. This scheme would be independent of the Copyright Alert System (CAS) that launched in the United States earlier this year.
What a difference the right economic incentives make. Comcast has significant incentive for offering carrots instead of sticks: it owns NBC Universal, a major movie studio and TV network. This means that Comcast has incentives to protect content revenue, even if it comes from third parties like iTunes, Netflix, or Amazon. In addition, if Comcast protects its own network from infringers, it has a stronger position from which to negotiate content distribution deals for its own Xfinity-branded services from other major studios.
Comcast will most likely use the same monitoring services as content owners — like NBC Universal, whose people are collaborating on the design of this (as yet unnamed) system — use to detect allegedly infringing downloads. It will be able to send messages to users in close to real time — in contrast to CAS, which processes data about detected downloads through a third party before they get sent to users.
This scheme is reminiscent of one of the earliest uses of fingerprinting technologies in a commercially licensed service: around 2005, a P2P file-sharing network called iMesh cut a deal with the major record labels (or at least some of them). They would allow iMesh to operate its network with audio fingerprinting (supplied by Audible Magic, still a leader in the field). The fingerprinting technology would detect attempts to upload copyrighted music to the network and block them. Instead, iMesh offered copyrighted music files supplied by the labels, encrypted with DRM, for purchase. Given that several other P2P file-sharing networks (such as LimeWire) continued to operate at the time without such restrictions, iMesh wasn’t much of a success.
Comcast is hoping to get other ISPs to adopt similar schemes, presumably both as a service to major content owners and in hopes that this anti-piracy feature doesn’t drive users to its competitors. But that gambit is unlikely to succeed. Of the four other major ISPs in the US — AT&T, Cablevision, Time Warner Cable, and Verizon — none are corporate siblings to major content owners. (Time Warner Cable was spun off from Time Warner in 2009, though it retains the name.) In other words, they won’t have the right incentives.
In contrast, France’s HADOPI scheme is supposed to steer people to legal alternatives simply by giving those services a “seal of approval” that they can use themselves. What Comcast has in mind ought to be more effective. For movies and TV shows, it would be more effective still if legal video services offered anything like the catalog completeness that legal music services get from the record labels. But that’s another story for another day.
Copyright Alert System Launches in U.S. February 25, 2013Posted by Bill Rosenblatt in Fingerprinting, Law, Music, Video.
With today’s launch of the Copyright Alert System (CAS) by the Center for Copyright Information, the United States joins the list of countries that have adopted a so-called graduated response system for educating Internet users about online copyright infringement and taking steps to punish repeat offenders. The CAS is finally launching after a few months’ delay, part of which was supposedly due to the effects of Sandy, the mega-storm that hit the northeast U.S. late last year. Other graduated response countries include France, New Zealand, and South Korea; the United Kingdom is currently struggling with its own implementation.
The CAS is a partnership between music and video content owners on the one hand and major ISPs on the other. The content owner representatives include not just the majors (RIAA and MPAA) but also the Independent Film and Television Alliance (IFTA) and American Association of Independent Music (A2IM). On the ISP side, membership includes the five largest providers: AT&T, Verizon, Time Warner Cable, Comcast, and Cablevision. Book and game publishers are not involved at this point.
The CAS is run by Jill Lesser, a tech policy veteran with deep experience on both the content and ISP sides. It has an advisory board whose principal function seems to be to curb abuses: it includes advocates for looser copyright laws (Gigi Sohn of Public Knowledge) and user privacy (Jules Polonetsky of the Future of Privacy Forum).
The CAS works similarly to other graduated response regimes: copyright owners employ infringement monitoring services, which can identify copyrighted works as users send them around the Internet using fingerprinting and other content recognition technologies. The monitoring services send notices to ISPs, which issue warning messages to users. The warnings get stronger with repeat infringements.
ISPs can opt to punish repeat alleged offenders by such means as throttling bandwidth and making users watch videos about copyright. (ISPs already have policies for terminating repeat infringers’ accounts, which they must have in order to maintain their eligibility for the DMCA safe harbor.)
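The escalation logic described above can be sketched as a simple per-account strike counter. The tier wording and cutoffs below are purely illustrative, not the CAS's actual policy or alert text:

```python
from collections import defaultdict

# Escalating alert tiers, loosely modeled on a graduated response scheme.
# (Tier names here are illustrative assumptions, not CAS's actual wording.)
ALERTS = [
    "educational notice",
    "educational notice",
    "acknowledgement required",
    "acknowledgement required",
    "mitigation warning",
    "mitigation applied (e.g., throttled bandwidth)",
]

class AlertSystem:
    """Toy graduated-response tracker: each notice about an account
    escalates the response, capping at the final mitigation tier."""

    def __init__(self):
        self.strikes = defaultdict(int)

    def report_infringement(self, account_id):
        """Record one infringement notice and return the ISP's response."""
        level = min(self.strikes[account_id], len(ALERTS) - 1)
        self.strikes[account_id] += 1
        return ALERTS[level]
```

The key design point this captures is that alerts escalate per account and plateau rather than terminating service — matching the CAS's stated position that accounts will not be suspended.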
Where the CAS differs from other graduated response systems is that it is not tied to law enforcement. The arrangement between content owners and ISPs is voluntary. ISPs will not terminate or suspend users’ Internet accounts, nor will they pass information about infringements on to copyright owners. Another difference is that the CAS is not being funded through taxes or levies on Internet service (although funding sources are confidential).
In other words, the CAS is a more purely educational approach than France’s HADOPI or other systems. Analysis of the CAS’s results will therefore be more useful in determining how successful education by itself can be in getting people to respect copyright. The hope is that education will do more than draconian statutory damages or blunt-instrument legislation.
Given how little effect those approaches have had, it may not be difficult to declare the Copyright Alert System a relative success in the years to come. As it is now, it seems like quite a reasonable system: it raises awareness about the importance of copyright by using advanced Internet technologies instead of relegating enforcement to outmoded nontechnical legal means; it is permeated with references to legal content sources; and it doesn’t cost users a thing.
Digimarc Acquires Attributor December 4, 2012Posted by Bill Rosenblatt in Fingerprinting, Images, Publishing, Watermarking.
1 comment so far
Digimarc announced yesterday that it has acquired Attributor Corp. Attributor, based in Silicon Valley, is one of a handful of companies that crawls the Internet looking for instances of copyrighted material that may be infringing, using a pattern-recognition technology akin to fingerprinting. Digimarc is a leader in digital watermarking technology, with a large and significant portfolio of IP in the space. The acquisition price was a total of US $7.5 million in cash, stock, and contingent compensation.
This is a synergistic and strategically significant move for Digimarc. A few years ago, Digimarc had pruned its efforts to create products and services for digital media markets outside of still images. It had decided, in effect, to leave products and services to its IP licensees, companies such as Civolution of the Netherlands and MarkAny of South Korea. Attributor’s primary market is book publishing, with customers including four out of the “Big Six” trade book publishers as well as several leading educational and STM (scientific, technical, medical) publishers.
Digimarc intends to leverage Attributor’s relationships with book publishers to help it expand its watermarking technology into that market and to move into other markets such as magazine and financial publishing. The company cited the explosive growth in e-books as a reason for the acquisition.
Beyond that, Digimarc’s acquisition is another sign of the increasing importance of infringement monitoring services; the previous such sign came over the summer, when Thomson Reuters acquired MarkMonitor.
There are two reasons for this increase in importance. First is the rise of so-called progressive response legal regimes: copyright owners can monitor the Internet and submit data on alleged infringements to a legal authority, which sends users increasingly strong warning messages and, if they keep on infringing, potentially suspends their ISP accounts. The most advanced progressive response regime is HADOPI in France, early results from which are encouraging. The Copyright Alert System is supposedly gearing up for launch in the United States. A handful of other countries have progressive response in place or in process as well.
The second reason for the increasing importance of so-called piracy monitoring is that copyright owners are starting to realize the value of the data they generate, beyond catching infringers. Piracy is evidence of popularity of content — of demand for it. The data that these services generate can be valuable for analytics purposes, to see who is interested in the content and in what ways. Big Champagne, for example, has been supplying this type of data to the music industry for many years. Attributor has been working on a new service that integrates piracy data with social media analytics; Digimarc intends to integrate this into its own data offerings for the image market.
In fact, we’ll have a discussion on the value of piracy data tomorrow at Copyright and Technology NYC 2012. Leading the discussion will be Thomas Sehested of MarkMonitor. There’s little doubt he will be called upon to talk about his new competition.