In Copyright Law, 200 Is a Magic Number
March 2, 2014 | Posted by Bill Rosenblatt in Images, Law, United States.
An occasional recurring theme in this blog is how copyright law is a poor fit for the digital age because, while technology enables distribution and consumption of content to happen automatically, instantaneously, and at virtually no cost, decisions about legality under copyright law can’t be similarly automated. The best/worst example of this is fair use. Only a court can decide whether a copy is noninfringing under fair use. Even leaving aside notions of legal due process, it’s not possible to create a “fair use deciding machine.”
In general, copyright law contains hardly any concrete, machine-decidable criteria. Yet one of the precious few came to light over the past few months regarding a type of creative work that is often overlooked in discussions of copyright law: visual artworks. Unlike most copyrighted works, works of visual art are routinely sold and then resold potentially many times, usually at higher prices each time.
A bill was introduced in Congress last week that would enable visual artists to collect royalties on their works every time they are resold. One of the sponsors of the bill is Rep. Jerrold Nadler, who represents a chunk of New York City, one of the world’s largest concentrations of visual artists.
Of course, the types of copyrighted works that we usually talk about here — books, movies, TV shows, and music — aren’t subject to resale royalties; they are covered under first sale (Section 109 of the Copyright Act), which says that the buyer of any of these works is free to do whatever she likes with them, with no involvement from the original seller. But visual artworks are different. According to Section 101 of the Copyright Act, they are either unique objects (e.g. paintings) or reproduced in limited editions (e.g. photographs). The magic number of copies that distinguishes a visual artwork from anything else? 200 or fewer, each signed and numbered by the creator.
Under the proposed ART (Artist Royalties, Too) Act, five percent of the proceeds from a sale of a visual artwork would go to the artist, whether it’s the second, third, or hundredth sale of the work. The law would apply to artworks that sell for more than $5,000 at auction houses that do at least $1 million in business per year. It would require private collecting societies to collect and distribute the royalties on a regular basis, as SoundExchange does for digital music broadcasting. This proposed law would follow in the footsteps of similar laws in many jurisdictions, including the UK, the EU, Australia, Brazil, India, and Mexico. It would also emulate “residual” and “rental” royalties for actors, playwrights, music composers, and others, which result from contracts with studios, theaters, orchestras, and so on.
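The rule described above is one of those rare machine-decidable criteria in copyright law, so it can be sketched in a few lines of code. This is an illustration of the bill as summarized in this post, not the statutory text; how the bill handles edge cases (e.g. a sale at exactly $5,000) is an assumption here.

```python
# Sketch of the proposed ART Act resale royalty rule as described above.
# Thresholds and the flat 5% rate come from the post; boundary handling
# (sales at exactly the thresholds) is assumed, not taken from the bill.

PRICE_THRESHOLD = 5_000        # royalty applies to sales above $5,000
HOUSE_THRESHOLD = 1_000_000    # only auction houses doing >= $1M/year
ROYALTY_RATE = 0.05            # 5% of sale proceeds goes to the artist

def art_act_royalty(sale_price: float, house_annual_sales: float) -> float:
    """Return the resale royalty owed to the artist, or 0.0 if the sale
    falls outside the proposed law's scope."""
    if sale_price <= PRICE_THRESHOLD:
        return 0.0
    if house_annual_sales < HOUSE_THRESHOLD:
        return 0.0
    return sale_price * ROYALTY_RATE
```

So a $100,000 resale at a qualifying auction house would send $5,000 to the artist, while the same sale through a small house, or any sale at or below $5,000, would owe nothing.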
The U.S. Copyright Office analyzed the art resale issue recently and published a report last December that summarized its findings. The Office concluded that resale royalties would probably not harm the overall art market in the United States, and that a law like the ART Act isn’t a bad idea but is only one of several ways to institute resale royalties.
The Office had previously looked into resale royalties over 20 years ago. Its newer research found that, based on evidence from other countries that have resale royalties, imposing them in the US would neither result in the flight of art dealers and auction houses from the country nor impose unduly onerous burdens for administration and enforcement of royalty payments.
Yet the Copyright Office’s report doesn’t overflow with unqualified enthusiasm for statutory royalties on sales. One of the legislative alternatives it suggests is the idea of a “performance royalty” from public display of artworks. If a collector wants to buy a work at auction and display it privately in her home, that’s different from a museum that charges people admission to see it. Although this would mirror performance royalties for music, it would seem to favor wealthy individuals at the expense of public exposure to art.
The ART Act — which is actually a revision of legislation that Rep. Nadler introduced in 2011 — has drawn much attention within the art community, though little outside it. Artists are generally in favor of it, of course. But various others have criticized aspects of the bill, such as that it only applies to auction houses (thereby pushing more sales to private dealers, where transactions take place in secret instead of out in the open), that it only benefits the tiny percentage of already-successful artists instead of struggling newcomers, and that it unfairly privileges visual artists over other creators of both copyrighted works and physical objects (think Leica cameras or antique Cartier watches).
As an outsider to the art world, I have no opinion. Instead it’s that 200 number that fascinates me. That number may partially explain why the Alfred Eisenstaedt photograph of the conductor Leonard Bernstein that hangs in my wife’s office, signed and numbered 14 out of 250, is considerably less valuable than another Eisenstaedt available on eBay that’s signed and numbered 41 out of 50.
This raises the question of what happens as more and more visual artists use media that can be reproduced digitally without loss of quality. Would an artist be better off limiting her output to 200 copies and getting the 5% on resale, or would she be better off making as many copies as possible and selling them for whatever the market will bear? The answer is unknowable without years of real-world testing. Given the choice, some artists may opt for the former route, which seems to go against the primary objective of copyright law: to maximize the availability of creative works to the public through incentives to creators.
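The trade-off can be made concrete with a toy back-of-envelope model. Every number below is hypothetical, invented purely for illustration; as the paragraph above says, the real answer depends on market data that doesn't exist yet.

```python
# A toy model of the artist's choice: a capped signed edition that earns
# resale royalties versus an uncapped open edition that doesn't. All
# prices and quantities below are hypothetical illustrations.

def limited_edition_income(price, resales, avg_resale_price, royalty=0.05):
    """<= 200 signed copies: initial sales plus 5% of each resale."""
    return 200 * price + resales * avg_resale_price * royalty

def open_edition_income(price, copies_sold):
    """Unlimited digital copies: initial sales only, no resale royalty."""
    return price * copies_sold

# 200 prints at $2,000 each, plus 50 resales averaging $10,000 apiece...
lim = limited_edition_income(2_000, resales=50, avg_resale_price=10_000)
# ...versus 5,000 open-edition copies at $150 each.
opn = open_edition_income(150, copies_sold=5_000)
```

With these particular made-up numbers the open edition wins, but nudge the resale prices or the open-edition demand and the comparison flips; that sensitivity is exactly why the answer is unknowable in advance.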
Copyright minimalists question the relevance of copyright in an era when digital technologies make it possible to reproduce creative works at very little cost and perfect fidelity; they call on the media industry to stop trying to “profit from scarcity” and instead “profit from abundance.” Here’s a situation where copyrighted works are the scarcest of all.
Nowadays no one would confuse one of Vermeer’s 35 (or possibly 36) masterpieces with a poster or hand-made reproduction of one. People will be willing to travel to the Rijksmuseum, National Gallery, Met, etc., to see them for the foreseeable future. Yet at some point in the more distant future, the scarcity of most copyrighted works will be artificially imposed. At that point, the sale (not resale) value of creative works will trend toward zero, even if they are reproduced, signed, and sequentially numbered by super-micro-resolution 3D printers that sell at Staples for the equivalent of $200 today.
Perhaps the best indication of the future comes from Christo and Jeanne-Claude, the well-known husband-and-wife outdoor artists. Christo and Jeanne-Claude designed the 2005 installation called The Gates in New York’s Central Park (which happens to be in Jerry Nadler’s congressional district). Reproducing — let alone selling — this massive work is inconceivable. Instead, Christo and Jeanne-Claude hand-signed thousands of copies of books, lithographs, postcards, and other easily-reproduced artifacts containing photos and drawings of the artwork, and sold them to help pay the eight-figure cost of the project. Add to that an individualized auto-pen to automate the signatures, and you may have the future of visual art in a world without scarcity.
So, the question that Congress ought to consider when evaluating art resale legislation is how to create a legal environment in which the Christos and Jeanne-Claudes of tomorrow will even bother anymore. That’s not a rhetorical question, either.
Adobe Resurrects E-Book DRM… Again
February 10, 2014 | Posted by Bill Rosenblatt in DRM, Publishing.
Over the past couple of weeks, Adobe has made a series of low-key announcements regarding new versions of its DRM for e-books, Adobe Content Server and Rights Management SDK. The new versions, ACS5 and RMSDK10 respectively, have been released on major platforms (iOS, Android, etc.), with more to come next month.
The new releases, though rumored for a while, came as something of a surprise to those of us who understood Adobe to have lost interest in the e-book market… again. They did so for the first time back in 2006, before the launch of the Kindle kicked the market into high gear. At that time, Adobe announced that version 3 of ACS would be discontinued. Then the following year, Adobe reversed course and introduced ACS4. ACS4 supports the International Digital Publishing Forum (IDPF)’s EPUB standard as well as Adobe’s PDF.
This saga repeated itself, roughly speaking, over the past year. As the IDPF worked on version 3 of EPUB, Adobe indicated that it would not upgrade its e-reader software to work with it, nor would it guarantee that ACS4 would support it. The DRM products were transferred to an offshore maintenance group within Adobe, and all indications were that Adobe was not going to develop them any further. Now that’s all changed.
Adobe had originally positioned ACS in the e-book market as a de facto standard DRM. It licensed the technology to a large number of makers of e-reader devices and applications, and e-book distributors around the world. At first this strategy seemed to work: ACS looked like an “everyone but Amazon” de facto standard, and some e-reader vendors (such as Sony) even migrated from proprietary DRMs to the Adobe technology.
But then cracks began to appear: Barnes & Noble “forked” ACS with its own extensions to support features such as user-to-user lending in the Nook system; Apple launched iBooks with a variant of its FairPlay DRM for iTunes content; and independent bookstores’ IndieBound system adopted Kobo, which has its own DRM. Furthermore, interoperability of e-book files among different RMSDK-based e-readers was not exactly seamless. As of today, “pure” ACS represents only a minor share of the e-book retail market, at least in the US; its users include Google Play, Smashwords, and retailers served by OverDrive and other wholesalers.
It’s unclear why Adobe chose to go back into the e-book DRM game, though pressure from publishers must have been a factor. Adobe can’t do much about interoperability glitches among retailers and readers, but publishers and distributors alike have asked for various features to be added to ACS over the years. Publishers have mainly been concerned with the relatively easy availability of hacks, while distributors have also expressed the desire for a DRM that facilitates certain content access models that ACS4 does not currently support.
The new ACS5/RMSDK10 platform promises to give both publishers and distributors just about everything they have asked for. First, Adobe has beefed up the client-side security using (what appear to be) software hardening, key management, and crypto renewability techniques that are commonly used for video and games nowadays.
Adobe has also added support for several interesting content access models. At the top of the list of most requested models is subscriptions. ACS5 will not only support periodical-style subscriptions but also periodic updates to existing files; the latter is useful in STM (scientific, technical, medical) and various professional publishing markets.
ACS5 also contains two enhancements that are of interest to the educational market. One is support for collections of content shared among multiple devices, which is useful for institutional libraries. Another is support for “bulk fulfillment,” such as pre-loading e-reader devices with encrypted books (such as textbooks). Bulk fulfillment requires a feature called separate license delivery, which is supported in many DRMs but hasn’t been in ACS thus far. With separate license delivery, DRM-packaged files can be delivered in any way (download, optical disk, device pre-load, etc.), and then the user’s device or app can obtain licenses for them as needed.
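Separate license delivery, as described above, boils down to decoupling the encrypted file's journey from the license's journey. Here is a minimal conceptual sketch of that flow. This is a toy model of the general DRM technique, not Adobe's actual ACS5 protocol (which is not public); all identifiers, keys, and data structures are invented for illustration.

```python
# Toy model of "separate license delivery": encrypted content travels by
# any channel (download, disc, device pre-load), and the reader app later
# fetches a license -- a content key plus usage terms -- keyed by content
# ID. All IDs, keys, and dates below are hypothetical.
from typing import Optional

# Pre-loaded on a device in bulk: content ID -> encrypted payload.
preloaded_books = {
    "isbn:978-0-00-000000-2": b"<encrypted textbook bytes>",
}

# Stand-in for the license server's database.
license_server = {
    "isbn:978-0-00-000000-2": {"key": b"demo-content-key", "expires": "2015-06-30"},
}

def acquire_license(content_id: str, user_entitled: bool) -> Optional[dict]:
    """On first open, fetch the license for a pre-loaded file. Without a
    license, the encrypted payload on the device stays unreadable."""
    if not user_entitled:
        return None
    return license_server.get(content_id)

lic = acquire_license("isbn:978-0-00-000000-2", user_entitled=True)
```

The design point is that packaging and fulfillment become independent steps: a school can pre-load thousands of devices once, then grant each student a license when the term starts.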
Finally, ACS5 will support the Readium Foundation’s open-source EPUB3 e-reader software. Adobe is “evaluating the feasibility” of supporting the Readium EPUB 3 SDK in its Adobe Reader Mobile SDK; either way, distributors should be able to accommodate EPUB3 in their apps.
In all, ACS5 fulfills many of the wish list items that I have heard from publishers over the past couple of years, leaving one with the impression that it could expand its market share again and move towards Adobe’s original goal of de facto standard-hood (except for Amazon and possibly Apple). ACS5 is backward compatible with older versions of ACS and does not require that e-books be re-packaged; in other words, users can read their older files in RMSDK10-enabled e-readers.
Yet Adobe made a gaffe in its announcements that immediately jeopardized all this potential: it initially gave the impression that it would force upgrades to ACS5/RMSDK10 this July. (Watch this webinar video from Adobe’s partner Datalogics, starting around the 21-minute mark.) Distributors would have to upgrade their apps to the latest versions, with the hardened security; and users would have to install the upgrades before being able to read e-books packaged with the new DRM. Furthermore, if users obtain e-books packaged with the new DRM, they would not be able to read them on e-readers based on the older RMSDK. (Yet another sign that Adobe has acted on pressure from publishers rather than distributors.) In other words, Adobe wanted to force the entire ACS ecosystem to move to a more secure DRM client in lock-step.
This forced-upgrade routine is similar to what DRM-enabled download services like iTunes (video) do with their client software. But then Apple doesn’t rely on a network of distributors, almost all of which maintain their own e-reading devices and apps.
In any case, the backlash from distributors and the e-publishing blogosphere was swift and harsh; and Adobe quickly relented. Now the story is that distributors can decide on their own upgrade timelines. In other words, publishers will themselves have to put pressure on distributors to upgrade the DRM, at least for the traditional retail and library-lending models; and some less-secure implementations will likely remain out there for some time to come.
Adobe’s new release must balance divergent effects of DRM. On the one hand, DRM interoperability is more important than ever for publishers and distributors alike, to counteract the dominance of Amazon in the e-book retail market; and the surest way to achieve DRM interoperability is to do away with DRM altogether. (There are other ways to inhibit interoperability that have nothing to do with DRM.) But on the other hand, integrating interoperability with support for content access models that are unsupportable without some form of content access control — such as subscriptions and institutional library access — seems like an attractive idea. Adobe has survived tugs-of-war with publishers and distributors over DRM restrictions before, so this one probably won’t be fatal.
National Academies Calls for Hard Data on Digital Copyright
February 4, 2014 | Posted by Bill Rosenblatt in Economics, Law, United States.
About three years ago, the National Academies — the scientific advisers to the U.S. federal government — held hearings on copyright policy in the digital age. The intent of the project, of which the hearings were a part, was to gather input from a wide range of interested parties on the kinds of research that should be done to further our understanding of the effects of digital technologies on copyright.
The committee overseeing the project consisted of twelve people, including an economist specializing in digital content issues (Joel Waldfogel), a movie industry executive (Mitch Singer of Sony Pictures), a music technology expert (Paul Vidich, formerly of Warner Music Group), a federal judge with deep copyright experience (Marilyn Hall Patel of Napster fame), a library director (Michael Keller of Stanford University), a former director of Creative Commons (Molly van Houweling), and a few law professors. The committee was chaired by Bill Raduchel, a Harvard economics professor turned technology executive perhaps best known as Scott McNealy’s mentor at Sun Microsystems.
Recently the National Academies Press published the results of the project in the form of Copyright in the Digital Era: Building Evidence for Policy, which is available as a free e-book or $35 paperback. This 85-page booklet is, without exaggeration, the most important document in the field of copyright policy to be published in quite some time. It is the first substantive attempt to take the debate on copyright policy out of the realm of “copyright wars,” where polemics and emotions rule, into the realm of hard data.
The document starts by decrying the lack of data on which deliberations on copyright policy are based, especially compared to the mountains of data used to support changes to the patent system. It then goes on to describe various types of data that either exist or should be collected in order to fuel research that can finally tell us how copyright is faring in the digital era, with respect to its purpose to maximize public availability of creative works through incentives to creators.
The questions that Copyright in the Digital Era poses are fundamentally important. They include issues of monetary and non-monetary motivations to content creators; the impact of sharply reduced distribution and transaction costs for digital compared to physical content; the costs and benefits of various copyright enforcement schemes; and the effects of US-specific legal constructs such as fair use, first sale, and the DMCA safe harbors. My own testimony at the hearings emphasized the need for research into the costs and benefits of rights technologies such as DRM, and I was pleased to see this reflected in the document.
Copyright in the Digital Era concludes with lists of types of data that the project committee members believe should be collected in order to facilitate research, as well as descriptions of the types of research that should be done and the challenges of collecting the needed data.
This document should be required reading for everyone involved in copyright policy. More than that, it should be seen as a gauntlet that has been thrown down to everyone involved in the so-called copyright wars. The National Academies has set the research agenda. Now that Congress has begun the long, arduous process of revamping America’s copyright law, we’ll see who is willing and able to fund the research and publish the results so that Congress gets the data it deserves.
Images, Search Engines, and Doing the Right Thing
January 13, 2014 | Posted by Bill Rosenblatt in Rights Licensing, Standards.
A recent blog post by Larry Lessig pointed out that the image search feature in Microsoft’s Bing allows users to filter search results so that only images with selected Creative Commons licenses appear. A commenter to the post found that Google also has this feature, albeit buried deeply in the Advanced Search menu (see for example here; scroll down to “usage rights”). These features offer a tantalizing glimpse at an idea that has been brewing for years: the ability to license content directly and automatically through web browsers.
Let’s face it: most people use image search to find images to copy and paste into their PowerPoint presentations (or blog posts or web pages). As Lessig points out, these features help users to ensure that they have the rights to use the images they find in those ways. But they don’t help if an image is licensable in any way other than a subset of Creative Commons terms — regardless of whether royalties are involved. I’d stop short of calling this discrimination against image licensors, but it certainly doesn’t help them.
Those who want to license images for commercial purposes — such as graphic artists laying out advertisements — typically go to stock photo agencies like Getty Images and Corbis, which have powerful search facilities on their websites. Below Getty and Corbis, the stock image market is fragmented into niches, mostly by subject matter. Graphic artists have to know where to go to find the kinds of images they need.
There is a small “one-stop” search engine for images called PictureEngine, which includes links to licensing pages in its search results. But it would surely be better if the mainstream search engines’ image search functions included the ability to display and filter on licensing terms, and to enable links to licensing opportunities.
There have been various attempts over the years to make it easy for users to “do the right thing,” i.e. to license commercial content that they find online and intend to copy and use for purposes that aren’t clearly covered under fair use or equivalents. Most such efforts have focused on text content. The startup QPass had a vogue among big-name newspaper and magazine brands during the first Internet bubble: it provided a service for publishers to sell archived articles to consumers for a dollar or two apiece. It effectively disappeared during the post-bubble crash. ICopyright, which has been around since the late 1990s, provides a toolbar that publishers can use on web pages to offer various licensing options such as print, email, republish, and excerpt; most of its customers are B2B publishers like Dow Jones and Investors Business Daily.
Images are just as easy as text (compared, say, to audio and video) to copy and paste from web pages, but they are more discrete units of content; therefore it ought to be easier to automate licensing of them. When you copy and paste an image with a Creative Commons license, you’re effectively getting a license, because the license is expressed in XML metadata attached to the image.
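The mechanics behind that last point: Creative Commons terms are typically embedded in an image file as an RDF/XML (XMP) packet, which travels with the file when it is copied. A rough sketch of how a tool or search indexer could pull the license out follows. The `cc:license` element name reflects the Creative Commons RDF vocabulary, but the regex approach is a simplification; a production indexer would use a real XMP parser, and the sample bytes are invented.

```python
# Sketch: extract an embedded Creative Commons license URL from an
# image's raw bytes by scanning for the cc:license element in its XMP
# metadata packet. Simplified for illustration -- real XMP can express
# the license in several equivalent RDF forms that this regex would miss.
import re
from typing import Optional

CC_LICENSE_RE = re.compile(rb'cc:license\s+rdf:resource="([^"]+)"')

def find_cc_license(image_bytes: bytes) -> Optional[str]:
    """Return the first embedded CC license URL, or None if absent."""
    m = CC_LICENSE_RE.search(image_bytes)
    return m.group(1).decode("utf-8") if m else None

# Usage: find_cc_license(open("photo.jpg", "rb").read())
```

Because the license rides inside the file itself, a copy-and-paste carries its terms along, which is exactly what makes machine-indexing (and, in principle, automated licensing) possible.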
If search engines can index Creative Commons terms embedded within images that they find online, they ought to be able to index other licensing terms, including commercial terms. The most prominent standard for image rights is PLUS (Picture Licensing Universal System), but that’s intended for describing rights in B-to-B licensing arrangements, not for licensing to the general public; and I’m not aware of any efforts that the PLUS Coalition has made to integrate with web search engines.
No, the solution to this problem is not only clear but has been in evidence for years: augment Creative Commons so that it can handle commercial as well as noncommercial licensing. Creative Commons flirted with this idea several years ago with something called CC+ (CCPlus), a way to attach additional terms to Creative Commons licenses that envisioned commercial licenses in particular.
Although Creative Commons has its detractors among the commercial content community (mostly lawyers who feel disintermediated — which is part of the point), I have heard major publishers express admiration for it as well as interest in finding ways to use it with their content. At this point, the biggest obstacle to extending Creative Commons to apply to commercial licensing is the Creative Commons organization’s lack of interest in doing so. Creative Commons’ innovations have put it at the center of the copyright world on the Internet; it would be a shame if Creative Commons’ refusal to acknowledge that some people would like to get paid for their work results in that possibility being closed off.
Judge Dismisses E-Book DRM Antitrust Case
December 12, 2013 | Posted by Bill Rosenblatt in DRM, Law, Publishing.
Last week a federal judge in New York dismissed a lawsuit that a group of independent booksellers brought earlier this year against Amazon.com and the (then) Big Six trade publishers. The suit alleged that the publishers were conspiring with Amazon to use Amazon’s DRM to shut the indie booksellers out of the majority of the e-book market. The three bookstores sought class action status on behalf of all indie booksellers.
In most cases, independent booksellers can’t sell e-books that can be read on Amazon’s Kindle e-readers; instead they have a program through the Independent Booksellers Association that enables consumers to buy e-books from the stores’ websites via the Kobo e-book platform, which has apps for all major devices (PCs, iOS, Android, etc.) as well as Kobo eReaders.
Let’s get the full disclosure out of the way: I worked with the plaintiffs in this case as an expert witness. (Which is why I didn’t write about this case when it was brought several months ago.) I did so because, like others, I read the complaint and found that it reflected various misconceptions about DRM and its place in the e-book market; I thought that perhaps I could help educate the booksellers.
The booksellers asked the court to enjoin (force) Amazon to drop its proprietary DRM, and to enjoin the Big Six to allow independent bookstores to sell their e-books using an interoperable DRM that would presumably work with Kindles as well as iOS and Android devices, PCs, Macs, BlackBerrys, etc. (The term that the complaint used for the opposite of “interoperable DRM” was “inoperable DRM,” much to the amusement of some anti-DRM folks.)
There were two fundamental problems with the complaint. One was that it presupposed the existence of an idealized interoperable DRM that would work with any “interoperable or open architecture device,” and that “Amazon could easily, and without significant cost or disruption, eliminate its device specific restrictive DRM and instead utilize an available interoperable system.”
There is no such thing, nor is one likely to come into being. I worked with the International Digital Publishing Forum (IDPF), the trade association for e-books, to design a “lightweight” content protection scheme that would be attractive to a large number of retailers through low cost of adoption, but that project is far from fruition, and in any case, no one associated with it is under any illusion that all retailers will adopt the scheme. The only DRM that is guaranteed to work with all devices and all retailers forever is no DRM at all.
The closest thing there is to an “interoperable” DRM nowadays is Adobe Content Server (ACS) — which isn’t all that close. Adobe had intended ACS to become an interoperable standard, much like PDF is. Unlike Amazon’s Mobipocket DRM and Apple’s FairPlay DRM for iBooks, ACS can be licensed and used by makers of e-reader devices and apps. Several e-book platforms do use it. But the only retailer with significant market share in the United States that does so is Barnes & Noble, which has modified it and combined it with another DRM that it had acquired years ago. Kobo has its own DRM and uses ACS only for interoperability with other environments.
More relevantly, I have heard it said that Amazon experimented with ACS before launching the Kindle with the Mobipocket DRM that it acquired back in 2005. But in any case, ACS’s presence in the US e-book market is on the wane, and Adobe has stopped actively working on the product.
The second misconception in the booksellers’ complaint was the implication that the major publishers had an interest in limiting their opportunities to sell e-books through indie bookstores. The reality is just the opposite: publishers, from the (now) Big Five on down, would like nothing more than to be able to sell e-books through every possible retailer onto every possible device. The complaint alleges that publishers “confirmed, affirmed, and/or condoned AMAZON’s use of restrictive DRMs” and thereby conspired to restrain trade in the e-book market.
Publishers have been wary of Amazon’s dominant market position for years, but they have tolerated its proprietary technology ecosystem — at least in part because many of them understand that technology-based media markets always settle down to steady states involving two or three different platforms, protocols, formats, etc. DRM helps vendors create walls around their ecosystems, but it is far from the only technology that does so.
As I’ve said before, the ideal of an “MP3 for e-books” is highly unlikely and is largely a mirage in any case. Copyright owners face a constant struggle to create and preserve level playing fields for retailers in the digital age, one that the savvier among them recognize they cannot fully win.
Judge Jed Rakoff picked up on this second point in his opinion dismissing the case. He said, “… nothing about [the] fact [that publishers made agreements with Amazon requiring DRM] suggests that the Publishers also required Amazon to use device-restrictive DRM limiting the devices on which the Publishers’ e-books can be display[ed], or to place restrictions on Kindle devices and apps such that they could only display e-books enabled with Amazon’s proprietary DRM. Indeed, unlike DRM requirements, which clearly serve the Publishers’ economic interests by preventing copyright violations, these latter types of restrictions run counter to the Publishers’ interests …” (emphasis in original).
Indie bookstores are great things; it’s a shame that Amazon’s Kindle ecosystem doesn’t play nicely with them. But at the end of the day — as Judge Rakoff also pointed out — Amazon competes with independent booksellers, and “no business has a duty to aid competitors,” even under antitrust law.
In fact, Amazon has repeatedly shown that it will “cooperate” with competitors only as a means of cutting into their markets. Its extension of the Kindle platform to public library e-lending last year is best seen as part of its attempt to invade libraries’ territory. More recently, Amazon has attempted to get indie booksellers interested in selling Kindle devices in their stores, a move that has elicited frosty reactions from the bookstores.
The rest of Judge Rakoff’s opinion dealt with the booksellers’ failure to meet legal criteria under antitrust law. Independent booksellers might possibly have a case to bring against Amazon for boxing them out of the market as reading goes digital, but Book House of Stuyvesant Plaza et al v. Amazon.com et al wasn’t it.
Apple and Disney: A Copyright Conundrum
November 25, 2013 | Posted by Bill Rosenblatt in Images, Law.
Last week I was at Rutgers Law School in New Jersey. A law student struck up a conversation with me, and once he discovered that I was there to give a guest lecture in Prof. Michael Carrier‘s intellectual property class, he showed me something that had us both scratching our heads. It was a decal of Snow White, affixed to the lid of his MacBook laptop so that she was holding the Apple logo in her hands. It turns out that more than one designer has thought of this idea; here’s one example.
Let’s make the (fairly safe) assumption that the makers of these decals were not licensed by The Walt Disney Company. So the question is: would this be a fair use of the iconic cartoon image, or is the decal maker liable?
The design works as ironic commentary on a couple of levels. Those of you who have seen the classic 1937 Disney animated feature, or at least know the story of Snow White and the Seven Dwarfs, will recall that in the story she held an apple, which was poisoned. (Snow White’s pose in the decal is the same as when she held the poisoned apple in the movie.) On another level, Snow White holding the Apple logo is a commentary on Apple’s relationship with Disney, given that Steve Jobs was on the Disney board and was the company’s largest individual shareholder.
Is the decal a “transformative” use of Disney’s intellectual property? (If a use of copyrighted material is transformative, it’s likely to be fair use.)
From what I can tell, the manufacturer of the decals is using Disney’s IP without permission by simply making copies of Snow White. There is nothing “transformative” about that by itself; it’s not part of a mashup, collage, remix, etc. The whole of Snow White was used, not a snippet or sample. The decal was sold commercially, though it probably doesn’t make people less likely to buy Snow White items from the Disney Store. It may or may not be an example of “appropriation art.”
The “transformative” use of the decal is made by the person who buys it and affixes it to his MacBook. One could argue that the decal was made specifically with that use in mind; one could say that the decal maker was “inducing” transformative uses of Snow White.
OK, copyright geeks, time to weigh in. Here’s a poll. Feel free to elaborate in the comments.
Copyright Technology, Gangnam Style November 7, 2013 Posted by Bill Rosenblatt in Asia-Pacific, Events.
The Samsung TV monitor wasn’t particularly large. But the picture was so sharp that, even from several meters away, you could make out each hair of the perfectly sculpted eyebrows on the perfectly groomed K-Pop starlets performing on the MTV Korea broadcast. The scene was a casual neighborhood restaurant in Gangnam, the upscale district of Seoul that PSY made world-famous. The makgeolli (Korean rice wine) flowed freely, if more freely to the locals than to those of us who had flown across many timezones to get there. We were all celebrating the successful end of ICOTEC, the International Copyright Technology Conference, which attracted 600 attendees earlier this week.
This is what happens when a government not only pays lip service to copyright but also puts its money where its mouth is. South Korea is in the midst of a perfect storm of growth in digital media: Samsung Galaxy mobile devices everywhere; broadband Internet at speeds several times those available in the west; K-Pop idol factories churning out one international sensation after another. The media, telecoms, and consumer electronics industries don’t agree on everything (e.g., consumer electronics companies refuse to pay copyright levies on their devices), but they largely cooperate – as their offices sit near one another in Digital Media City, a special district across town from Gangnam that emerged from a landfill site just over a decade ago. Much of that cooperation is fostered by the government, and it has resulted in a global digital media juggernaut.
The Ministry of Culture, Sports and Tourism (MCST) sponsored ICOTEC, now in its third year. It was as well organized as any private-sector conference I’ve seen, and admission was free. The conference program had more emphasis on technology than on law compared to my own Copyright and Technology conferences in New York and London. That’s fitting because, as I have mentioned, Korea and its Asia-Pacific neighbors produce more innovation in rights technologies (relative to their GDPs) than the traditional content-producing regions of the United States and Europe. But now Korea is producing technological innovations not just to satisfy the distant demands of western media companies; it’s launching innovative content models and technologies to protect copyright for the benefit of its own multi-billion-dollar content. As ICOTEC attendees could see, Korea has perhaps the most vibrant rights technologies industry of any single country in the world today.
The Korean government’s own foray into rights technology is particularly interesting: it operates the Copyright Protection Center (CPC), part of the Korean Federation of Copyright Organizations (known as KOFOCO). The CPC runs a piracy monitoring system called ICOP — a reverse-engineered acronym for Illegal Copyright Obstruction Program.
ICOP is analogous to private-sector piracy monitoring services such as MarkMonitor, Irdeto/BayTSP, Muso, Ayatii, and various others. It monitors various types of online services, including P2P file-sharing sites, web portals, “web-hard” (Korea’s equivalent to “cloud storage”) services, torrent sites, and so on. Like those piracy monitoring services, ICOP uses a combination of techniques including fingerprinting technology and analysis of metadata and information surrounding potentially infringing content on the monitored sites.
ICOP gathers data in real time and sends takedown notices to the sites, which remove the content to avoid action from Korean law enforcement. CPC claims that nearly all of the monitored sites take down the accused content when requested. The effort also includes crackdowns on physical piracy: CPC employees armed with smartphones use specially-designed apps to photograph sellers of pirated DVDs and CDs on streets, in subway stations, etc. The apps geo-tag the photos and send them to the CPC in real time, where the data is displayed on maps, analyzed, and passed on to law enforcement for further action.
ICOP is arguably more effective than monitoring efforts elsewhere because it’s better integrated with law enforcement. Media companies in the west use private-sector piracy monitoring services, which provide data that copyright owners must use to generate takedown notices and initiate litigation and law enforcement actions. The hands-off approach of western governments fosters competition among piracy monitoring services, which should theoretically lead to more effective technologies. But the reality has been a limit on revenue opportunities that has led to market consolidation, as many piracy monitoring services have merged, been acquired by larger companies for synergies with other businesses, or just ceased operations.
In contrast, CPC gets 90% of its funding from the Korean government; the other 10% comes from content industry trade associations. It operates on an annual budget of about US $5 million, of which the government's share works out to just 9 cents per Korean citizen. (By comparison, HADOPI costs the French about 15 cents per citizen per year.)
ICOP’s fingerprinting technology comes from the Electronics and Telecommunications Research Institute (ETRI), a government-funded research lab. Although much of the system is automated, the CPC employs 110 people to monitor and report on illegal content on targeted online services. The employees are disabled citizens who work from their homes.
(The CPC operates completely separately from Korea’s “three strikes” graduated response regime, which, like HADOPI, targets individual downloaders instead of online service operators. Korea was actually the first country to implement graduated response.)
KOFOCO started ICOP in 2008 in response to the rapidly growing rate of piracy amid an equally rapidly growing content industry in Korea. CPC estimates that the size of the illegal content market shrank 27.6% between 2011 and 2012 alone, and that the infringement rate dropped 14% in the same time period.
Yet the Korean government doesn’t just focus on copyright enforcement. MCST supports what is perhaps the world’s leading effort to educate the public about copyright. It has also expanded the scope of fair use under Korean copyright law, increased access to public-sector content, introduced more cost-efficient legal processes for small infringement claims (for settlements below roughly US $10,000), and dramatically increased the efficiency of music rights licensing.
The ICOTEC conference put all this activity on proud display for attendees and speakers from around the world; it was a ton of very interesting information to absorb. The K-Pop CD sampler I bought at the airport on my way back should help, as the makgeolli did, to further my understanding of a country whose impact on the global digital media scene will only get bigger and bigger.
My keynote presentation at ICOTEC 2013 is available on SlideShare. Watch this space for links to other presentations from the conference.
Getty Images Reaches Image License Deal with Pinterest October 28, 2013 Posted by Bill Rosenblatt in Fingerprinting, Images, Rights Licensing.
A year ago, Getty Images, one of the world’s largest stock image agencies, reached a licensing deal with a startup called SparkRebel, which I described as “Pinterest for fashionistas, with Buy buttons.” On that site, people would post images of items of clothing they’re interested in. An image recognition engine would try to identify the photo and thus the identity of each apparel item. If the item was identified and its manufacturer had a deal with SparkRebel, the site would show a Buy button, which users could click to purchase the item. It was a clever use of content identification technology to support licensing of content used for commercial purposes.
SparkRebel used Getty Images’ ImageIRC image recognition technology. ImageIRC uses the concept of fingerprinting: it examines an image, calculates a set of numbers that represent the image, and looks those numbers up in a database of fingerprints to see if it finds a match. Matches needn’t be exact; the fingerprinting algorithm can usually compute the correct fingerprint even if the image has been color-shifted, downsampled, cropped (up to a point), etc. In other words, ImageIRC is to still images as Google’s Content ID is to YouTube videos and Audible Magic is to the various sites that host music files.
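To make the fingerprinting idea concrete, here is a minimal sketch in Python. It is not Getty's actual ImageIRC algorithm (which is proprietary); it uses a simple "difference hash," a well-known perceptual hashing technique that reduces an image to a short bit string, so that near-duplicates land a small Hamming distance apart rather than matching exactly. The image data and database ID are contrived for illustration.

```python
# Illustrative fingerprint-style image matching (a difference hash),
# not Getty's actual ImageIRC algorithm. Near-duplicate images produce
# nearby bit strings, so a "match" is a small Hamming distance.

def dhash(pixels):
    """Fingerprint a grayscale image (2D list of 0-255 values).
    Each bit records whether a pixel is brighter than its right
    neighbor, which survives uniform brightness/color shifts."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def lookup(fingerprint, database, threshold=2):
    """Return the ID of the closest stored fingerprint within the
    threshold, or None -- approximate matching, as in real systems."""
    best_id, best_fp = min(database.items(),
                           key=lambda kv: hamming(fingerprint, kv[1]))
    return best_id if hamming(fingerprint, best_fp) <= threshold else None

# A tiny 4x4 "image" and a uniformly brightened copy of it.
original = [[10, 80, 30, 60], [90, 20, 70, 40],
            [15, 85, 35, 65], [95, 25, 75, 45]]
brightened = [[p + 20 for p in row] for row in original]

db = {"getty-12345": dhash(original)}   # hypothetical catalog entry
assert lookup(dhash(brightened), db) == "getty-12345"
```

The brightened copy still matches because the relative ordering of neighboring pixels, which is all the hash records, is unchanged; that is the same property that lets production fingerprinting tolerate color shifts and re-encoding.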
In Getty Images’ deal with SparkRebel, SparkRebel would pay Getty Images a licensing fee whenever a user posted an image to which Getty owned the rights. Those of us who watched this deal at the time wondered if Getty Images was trying to get Pinterest — the leading site where users posted images of commercial products — to agree to a similar deal. Given Getty Images’ firm “no comment” replies to questions about it, the answer was clearly yes. Many of the photos posted on Pinterest (as opposed to, say, Instagram) are commercial images copied and pasted from other websites, so Getty could have made a case that Pinterest was promoting infringement of its copyrights.
It took a while, but Getty Images did conclude a licensing deal with Pinterest last Friday — a few months after SparkRebel ceased operations. Under the deal, whenever ImageIRC finds a match to an image that a user “pins” on Pinterest, Pinterest will pay Getty Images a licensing fee, just as with SparkRebel. The additional feature of the deal is that Getty Images will send Pinterest metadata about the matched image, which Pinterest can display for the user. The metadata includes the time and location of the photo, the identity of the photographer, caption, an image ID, and licensing information.
Neither Getty nor Pinterest has mentioned anything about blocking or flagging images that users aren’t permitted to pin to the site; Pinterest still allows users to pin any photos to the site, regardless of the terms under which Getty normally licenses them. Pinterest continues to follow DMCA 512 policies of responding to takedown notices and terminating the accounts of users who repeatedly violate copyrights.
Pinterest’s announcement of the deal on its blog mentions the license fees, but otherwise does not mention any copyright issues; instead it focuses on “New data to help improve Pinterest.” Putting the fees aside, the deal is a win for Pinterest as well as Getty Images (not to mention Pinterest’s user community).
For Getty Images, this deal establishes an important precedent for image-sharing services that store lots of professional images and use them for commercial purposes. Other services that use images to drive commerce will likely follow Pinterest’s example and make licensing deals with Getty Images. But Getty gets another benefit besides money that could turn out to be just as important: distribution of image metadata.
One of the biggest problems that the stock image industry has with the Internet is that most ways of copying images from one place to another strip metadata away. When photographers and editors prepare images for distribution, they use tools like Adobe Photoshop, which incorporates Adobe’s XMP (eXtensible Metadata Platform) metadata scheme for storing metadata that travels with images. XMP metadata can be stored alongside images on web pages. But it doesn’t survive copying and pasting photos through web browsers.
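The mechanics of why copy/paste strips metadata can be sketched briefly: an XMP packet travels as a block of XML embedded in the image file, separate from the pixel data, and a decode/re-encode round trip (which is effectively what copying pixels through a browser does) simply never carries the packet along. The packet delimiters below are the real XMP markers; the surrounding "file" bytes and the photographer name are a contrived stand-in, not real image data.

```python
# Why copy/paste loses metadata: XMP rides inside the file as an XML
# packet, distinct from the pixels. The xpacket markers are XMP's real
# delimiters; the file bytes here are a contrived stand-in.

import re

XMP_PATTERN = re.compile(
    rb"<\?xpacket begin=.*?\?>.*?<\?xpacket end=.*?\?>", re.S)

file_bytes = (
    b"\xff\xd8...pixel data..."
    b'<?xpacket begin="\xef\xbb\xbf" id="W5M0MpCehiHzreSzNTczkc9d"?>'
    b'<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    b"<dc:creator>Jane Photographer</dc:creator>"   # hypothetical creator
    b"</x:xmpmeta>"
    b'<?xpacket end="w"?>'
    b"...more pixel data...\xff\xd9"
)

def extract_xmp(data):
    """Pull the XMP packet out of a file, if present."""
    m = XMP_PATTERN.search(data)
    return m.group(0) if m else None

def copy_pixels_only(data):
    """Mimic a decode/re-encode round trip: pixels survive, XMP doesn't."""
    return XMP_PATTERN.sub(b"", data)

assert b"Jane Photographer" in extract_xmp(file_bytes)
assert extract_xmp(copy_pixels_only(file_bytes)) is None
```

This is also why Getty's arrangement with Pinterest works the other way around: rather than trying to preserve metadata through the copy, it re-attaches metadata after the fact by fingerprint-matching the pixels themselves.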
It is actually illegal under section 1202 of the Digital Millennium Copyright Act to intentionally remove “copyright management information” from a copyrighted work in order to evade detection of infringement, though there is some ambiguity over issues such as what qualifies as copyright management information. Nevertheless, images that users copy and paste among websites generally have no copyright management information.* Getty Images’ arrangement with Pinterest recovers metadata for images posted to the site that match its database. This certainly won’t solve the image metadata problem in general, but it’s a start.
*Some images may have invisible embedded watermarks that indicate copyright management information. Typically such watermarks will contain IDs that point to entries in image licensors’ databases, which in turn contain things like the photographer’s name, licensing terms, and so on. Whether invisible embedded watermarks qualify as copyright management information under DMCA 1202 is somewhat up in the air. If a high enough court decided that they do, that could make tools for hacking “social DRM” e-book watermarks illegal in the United States.
MovieLabs Releases Best Practices for Video Content Protection October 23, 2013 Posted by Bill Rosenblatt in DRM, Standards, Video.
As Hollywood prepares for its transition to 4k video (four times the resolution of HD), it appears to be adopting a new approach to content protection, one that promotes more service flexibility and quicker time to market than previous approaches but carries other risks. The recent publication of a best-practices document for content protection from MovieLabs, Hollywood’s R&D consortium, signals this new approach.
In previous generations of video technology, Hollywood studios got together with major technology companies and formed technology licensing entities to set and administer standards for content protection. For example, a subset of the major studios teamed up with IBM, Intel, Microsoft, Panasonic, and Toshiba to form AACS LA, the licensing authority for the AACS content protection scheme for Blu-ray discs and (originally) HD DVDs. AACS LA defines the technology specification, sets the terms and conditions under which it can be licensed, and performs other functions to maintain the technology.
A licensing authority like AACS LA (and there is a veritable alphabet soup of others) provides certainty around technology implementation, including compliance, patent licensing, and interoperability among licensees. It helps insulate the major studios from accusations of collusion by being a separate entity in which at most a subset of them participates.
As we now know, the licensing-authority model has its drawbacks. One is that it can take the licensing authority several years to develop technology specs to a point where vendors can implement them — by which time they risk obsolescence. Another is that it does not offer much flexibility in how the technology can adapt to new device types and content delivery paradigms. For example, AACS was designed with optical discs in mind at a time when Internet video streaming was just a blip on the horizon.
A document published recently by MovieLabs signals a new approach. MovieLabs Specification for Enhanced Content Protection is not really a specification, in that it is nowhere near detailed enough to serve as the basis for implementations. It is more a compendium of what we now understand to be best practices for protecting digital video. It leaves room for change and interpretation.
The best practices in the document amount to a wish list for Hollywood. They include things like:
- Techniques for limiting the impact of hacks to DRM schemes, such as requiring device as well as content keys, code diversity (a hack that works on one device won’t necessarily work on another), title diversity (a hack that works with one title won’t necessarily work on another), device revocation, and renewal of protection schemes.
- Proactive renewal of software components instead of “locking the barn door after the horse has escaped.”
- Component technologies that are currently considered safe from hacks by themselves, including standard AES encryption with a minimum key length of 128 bits and version 2.2 or better of the HDCP scheme for protecting links such as HDMI cables (earlier versions were hacked).
- Hardware roots of trust on devices, running in secure execution environments, to limit opportunities for key leakage.
- Forensic watermarking, meaning that content should have information embedded in it about the device or user who requested it.
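The last item in the list above, forensic watermarking, can be illustrated with a hypothetical sketch: a device or user ID is embedded imperceptibly in the content itself, so a leaked copy can be traced back to the session that requested it. Production systems use far more robust embedding that survives re-encoding and cropping; this least-significant-bit version, with made-up sample data, just shows the idea.

```python
# Hypothetical forensic watermarking sketch: hide a device ID in the
# low-order bits of content samples so a leaked copy is traceable.
# Real systems embed far more robustly; this only illustrates the idea.

def embed_watermark(samples, device_id, id_bits=16):
    """Hide device_id, one bit per sample, in the least significant
    bit of the first id_bits samples."""
    marked = list(samples)
    for i in range(id_bits):
        bit = (device_id >> i) & 1
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract_watermark(samples, id_bits=16):
    """Recover the embedded ID from a (possibly leaked) copy."""
    device_id = 0
    for i in range(id_bits):
        device_id |= (samples[i] & 1) << i
    return device_id

content = [128, 64, 200, 33] * 8        # stand-in for audio/video samples
leaked = embed_watermark(content, device_id=0xBEEF)
assert extract_watermark(leaked) == 0xBEEF
```

Because each device (or session) gets its own ID, the studio can identify the source of a leak without affecting playback; flipping only the lowest bit of each sample keeps the change imperceptible.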
Those who saw Sony Pictures CTO Spencer Stephens’s talk at the Anti-Piracy and Content Protection Summit in LA back in July will find much of this familiar. Some of these techniques come from the current state of the art in content protection for pay TV services; for more detail on this, see my whitepaper The New Technologies for Pay TV Content Security. Others, such as the forensic watermarking requirement, come from current systems for distributing HD movies in early release windows. And some result from lessons learned from cracks to older technologies such as AACS, HDCP, and CSS (for DVDs).
MovieLabs is unable to act as a licensor of standards for content protection (or anything else, for that matter). The six major studios set it up in 2005 as a movie industry joint R&D consortium modeled on the cable television industry’s CableLabs and other organizations enabled by the National Cooperative Research Act of 1984, such as Bellcore (telecommunications) and SEMATECH (semiconductors). R&D consortia are allowed, under antitrust law, to engage in “pre-competitive” research and development, but not to develop technologies that are proprietary to their members.
Accordingly, the document contains a lot of language intended to disassociate these requirements from any actual implementations, standards, or studio policies, such as “Each studio will determine individually which practices are prerequisites to the distribution of its content in any particular situation” and “This document defined only one approach to security and compatibility, and other approaches may be available.”
Instead, the best-practices approach looks like it is intended to give “signals” from the major studios to content protection technology vendors, such as Microsoft, Irdeto, Intertrust, and Verimatrix, who work with content service providers. These vendors will then presumably develop protection schemes that follow the best practices, with an understanding that studios will then agree to license their content to those services.
The result of this approach should be legal content services for next-generation video that get to market faster. The best practices are independent of things like content delivery modalities (physical media, downloads, streaming) and largely independent of usage rules. Therefore they should enable a wider variety of services than is possible with the traditional licensing authority paradigm.
Yet this approach has two drawbacks compared to the older approach. (And of course the two approaches are not mutually exclusive.) First is that it jeopardizes the interoperability among services that Hollywood craves — and has gone to great lengths to preserve in the UltraViolet standard. Service providers and device makers can incorporate content protection schemes that follow MovieLabs’ best practices, but consumers may not be able to move content among them, and service providers will be able to use content protection schemes to lock users in to their services. In contrast, many in Hollywood are now nostalgic for the DVD because, although its protection scheme was easily hacked, it guaranteed interoperability across all players (at least all within a given geographic region).
The other drawback is that the document is a wish list provided by organizations that won’t pay for the technology. This means that downstream entities such as device makers and service providers will treat it as the maximum amount of protection that they have to implement to get studio approval. Because there is no license agreement that they have to sign to get access to the technology, the downstream entities are likely to negotiate down from there. (Such negotiation already took place behind the scenes during the rollout of Blu-ray, as player makers refused to implement some of the more expensive protection features and some studios agreed to let them slip.)
Downstream entities are particularly likely to push back against some of MovieLabs’s best practices that involve costs and potential impairments of the user experience; examples include device connectivity to networks for purposes of authentication and revocation, proactive renewal of device software, and embedding of situation-specific watermarks.
Surely the studios understand all this. The publication of this document by MovieLabs shows that Hollywood is willing to entertain dialogues with service providers, device makers, and content protection vendors to speed up time-to-market of legitimate video services and ensure that downstream entities can innovate more freely. How much protection will the studios ultimately end up with when 4k video reaches the mainstream? It will be very interesting to watch over the next couple of years.
We have added another panel session to the Copyright and Technology London 2013 conference, which will take place next Thursday (17 October). The most important copyright litigation in the UK at the moment is Ministry of Sound v. Spotify, in which the record label objects to Spotify making available playlists that mimic the compilation albums for which the label is best known. The case has broad implications for the limits of copyrightability in the digital age, at least under UK law.
Here is the panel description:
The Limits of Copyright in the Digital Age
The litigation that Ministry of Sound recently started against Spotify will test whether playlists on compilation albums have copyright protection. It will play out in the context of the debate about the extent to which we as a society are prepared to pay for curation. The same issue faces news-disseminating organisations over their headlines and sports reporters over game highlights. Does our society value the editorial/quality control/validation role that they play? This panel will explore the boundaries of what is – and should be – protected by copyright in the digital age and suggest what directions legal decisions in the future may take.
Although the case was only filed a month ago, we have been able to pull together an excellent group of authorities on both the legal and content aspects of the matter, thanks to the tireless efforts of Serena Tierney of Bircham Dyson Bell, the panel chair and herself an authority on copyright in the digital age. Panelists will include:
- Jeff Smith, Head of Music at BBC Radio 2 and 6; former Director of Music Programming at Napster
- Mo McRoberts, Head of the BBC Genome Project at the BBC Archive
- Lindsay Lane, Barrister at 8 New Square Intellectual Property and co-author of the standard copyright treatise Laddie, Prescott and Vitoria: The Modern Law of Copyright and Designs
- Andrew Orlowski, Executive Editor of The Register, who has covered this case.
This means that we will have a packed day of exciting sessions from all around the world of copyright. Places are still left, so register today!