Images, Search Engines, and Doing the Right Thing
January 13, 2014
Posted by Bill Rosenblatt in Rights Licensing, Standards.
A recent blog post by Larry Lessig pointed out that the image search feature in Microsoft’s Bing allows users to filter search results so that only images with selected Creative Commons licenses appear. A commenter to the post found that Google also has this feature, albeit buried deeply in the Advanced Search menu (see for example here; scroll down to “usage rights”). These features offer a tantalizing glimpse at an idea that has been brewing for years: the ability to license content directly and automatically through web browsers.
Let’s face it: most people use image search to find images to copy and paste into their PowerPoint presentations (or blog posts or web pages). As Lessig points out, these features help users to ensure that they have the rights to use the images they find in those ways. But they don’t help if an image is licensable in any way other than a subset of Creative Commons terms — regardless of whether royalties are involved. I’d stop short of calling this discrimination against image licensors, but it certainly doesn’t help them.
Those who want to license images for commercial purposes — such as graphic artists laying out advertisements — typically go to stock photo agencies like Getty Images and Corbis, which have powerful search facilities on their websites. Below Getty and Corbis, the stock image market is fragmented into niches, mostly by subject matter. Graphic artists have to know where to go to find the kinds of images they need.
There is a small “one-stop” search engine for images called PictureEngine, which includes links to licensing pages in its search results. But it would surely be better if the mainstream search engines’ image search functions included the ability to display and filter on licensing terms, and to enable links to licensing opportunities.
There have been various attempts over the years to make it easy for users to “do the right thing,” i.e. to license commercial content that they find online and intend to copy and use for purposes that aren’t clearly covered under fair use or equivalents. Most such efforts have focused on text content. The startup QPass had a vogue among big-name newspaper and magazine brands during the first Internet bubble: it provided a service for publishers to sell archived articles to consumers for a dollar or two apiece. It effectively disappeared during the post-bubble crash. ICopyright, which has been around since the late 1990s, provides a toolbar that publishers can use on web pages to offer various licensing options such as print, email, republish, and excerpt; most of its customers are B2B publishers like Dow Jones and Investors Business Daily.
Images are just as easy as text (compared, say, to audio and video) to copy and paste from web pages, but they are more discrete units of content; therefore it ought to be easier to automate licensing of them. When you copy and paste an image with a Creative Commons license, you’re effectively getting a license, because the license is expressed in XML metadata attached to the image.
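To make concrete how machine-readable such a license is, here is a rough sketch (the function name and the byte-scanning shortcut are mine, not any search engine's actual code) of pulling a Creative Commons license URL out of an image file's embedded XMP packet, the XML metadata block Creative Commons recommends for marking images:

```python
import re

def find_cc_license(image_path):
    """Scan an image file's raw bytes for an embedded XMP packet and
    extract a Creative Commons license URL, if one is present."""
    with open(image_path, "rb") as f:
        data = f.read()
    # XMP packets are plain XML delimited by <x:xmpmeta ...></x:xmpmeta>,
    # so they can be found even without a full image parser.
    m = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    if not m:
        return None
    xmp = m.group(0).decode("utf-8", errors="ignore")
    # CC licenses are identified by URLs under creativecommons.org/licenses/
    lic = re.search(r"https?://creativecommons\.org/licenses/[^\"'<\s]+", xmp)
    return lic.group(0) if lic else None
```

A production crawler would use a real XMP parser rather than regular expressions, but the principle is the same: the license rides along with the file, so any software that copies the image can also read its terms.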
If search engines can index Creative Commons terms embedded within images that they find online, they ought to be able to index other licensing terms, including commercial terms. The most prominent standard for image rights is PLUS (Picture Licensing Universal System), but that's intended for describing rights in B-to-B licensing arrangements, not for licensing to the general public; and I'm not aware of any efforts that the PLUS Coalition has made to integrate with web search engines.
No, the solution to this problem is not only clear but has been in evidence for years: augment Creative Commons so that it can handle commercial as well as noncommercial licensing. Creative Commons flirted with this idea several years ago with something called CC+ (CCPlus), a way to add additional terms to Creative Commons licenses that envisioned commercial licenses in particular.
Although Creative Commons has its detractors among the commercial content community (mostly lawyers who feel disintermediated — which is part of the point), I have heard major publishers express admiration for it as well as interest in finding ways to use it with their content. At this point, the biggest obstacle to extending Creative Commons to apply to commercial licensing is the Creative Commons organization’s lack of interest in doing so. Creative Commons’ innovations have put it at the center of the copyright world on the Internet; it would be a shame if Creative Commons’ refusal to acknowledge that some people would like to get paid for their work results in that possibility being closed off.
MovieLabs Releases Best Practices for Video Content Protection
October 23, 2013
Posted by Bill Rosenblatt in DRM, Standards, Video.
As Hollywood prepares for its transition to 4k video (four times the resolution of HD), it appears to be adopting a new approach to content protection, one that promotes more service flexibility and quicker time to market than previous approaches but carries other risks. The recent publication of a best-practices document for content protection from MovieLabs, Hollywood’s R&D consortium, signals this new approach.
In previous generations of video technology, Hollywood studios got together with major technology companies and formed technology licensing entities to set and administer standards for content protection. For example, a subset of the major studios teamed up with IBM, Intel, Microsoft, Panasonic, and Toshiba to form AACS LA, the licensing authority for the AACS content protection scheme for Blu-ray discs and (originally) HD DVDs. AACS LA defines the technology specification, sets the terms and conditions under which it can be licensed, and performs other functions to maintain the technology.
A licensing authority like AACS LA (and there is a veritable alphabet soup of others) provides certainty around technology implementation, including compliance, patent licensing, and interoperability among licensees. It helps insulate the major studios from accusations of collusion by being a separate entity in which at most a subset of them participate.
As we now know, the licensing-authority model has its drawbacks. One is that it can take the licensing authority several years to develop technology specs to a point where vendors can implement them — by which time they risk obsolescence. Another is that it does not offer much flexibility in how the technology can adapt to new device types and content delivery paradigms. For example, AACS was designed with optical discs in mind at a time when Internet video streaming was just a blip on the horizon.
A document published recently by MovieLabs signals a new approach. MovieLabs Specification for Enhanced Content Protection is not really a specification, in that it is nowhere near detailed enough to serve as the basis for implementations. It is more a compendium of what we now understand as best practices for protecting digital video. It contains room for change and interpretation.
The best practices in the document amount to a wish list for Hollywood. They include things like:
- Techniques for limiting the impact of hacks to DRM schemes, such as requiring device as well as content keys, code diversity (a hack that works on one device won’t necessarily work on another), title diversity (a hack that works with one title won’t necessarily work on another), device revocation, and renewal of protection schemes.
- Proactive renewal of software components instead of “locking the barn door after the horse has escaped.”
- Component technologies that are currently considered safe from hacks by themselves, including standard AES encryption with a minimum key length of 128 bits and version 2.2 or better of the HDCP scheme for protecting links such as HDMI cables (earlier versions were hacked).
- Hardware roots of trust on devices, running in secure execution environments, to limit opportunities for key leakage.
- Forensic watermarking, meaning that content should have information embedded in it about the device or user who requested it.
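The "device as well as content keys" and "title diversity" items above share one underlying idea: the key that actually encrypts the delivered stream should be bound to both a specific device and a specific title, so that a key extracted from one hacked device (or for one title) is useless elsewhere. Here's a toy sketch of that binding, using an HKDF-style derivation; real DRM schemes use their own vetted constructions, and everything here (names, key material) is illustrative only:

```python
import hashlib
import hmac

def derive_delivery_key(device_key: bytes, title_key: bytes,
                        session_id: bytes) -> bytes:
    """Derive a per-device, per-title, per-session key (toy example).
    Extract-then-expand, HKDF-style: compromising one device's key does
    not reveal the keys issued to any other device or for any other
    title."""
    prk = hmac.new(device_key, title_key, hashlib.sha256).digest()
    return hmac.new(prk, session_id, hashlib.sha256).digest()[:16]  # 128 bits

# Different devices (or different titles) yield unrelated keys:
k_device_a = derive_delivery_key(b"device-A-secret", b"title-123-key", b"sess1")
k_device_b = derive_delivery_key(b"device-B-secret", b"title-123-key", b"sess1")
assert k_device_a != k_device_b
```

Revoking one compromised device then means blacklisting one device key, rather than re-keying the whole catalog.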
Those who saw Sony Pictures CTO Spencer Stephens’s talk at the Anti-Piracy and Content Protection Summit in LA back in July will find much of this familiar. Some of these techniques come from the current state of the art in content protection for pay TV services; for more detail on this, see my whitepaper The New Technologies for Pay TV Content Security. Others, such as the forensic watermarking requirement, come from current systems for distributing HD movies in early release windows. And some result from lessons learned from cracks to older technologies such as AACS, HDCP, and CSS (for DVDs).
MovieLabs is unable to act as a licensor of standards for content protection (or anything else, for that matter). The six major studios set it up in 2005 as a movie industry joint R&D consortium modeled on the cable television industry’s CableLabs and other organizations enabled by the National Cooperative Research Act of 1984, such as Bellcore (telecommunications) and SEMATECH (semiconductors). R&D consortia are allowed, under antitrust law, to engage in “pre-competitive” research and development, but not to develop technologies that are proprietary to their members.
Accordingly, the document contains a lot of language intended to disassociate these requirements from any actual implementations, standards, or studio policies, such as “Each studio will determine individually which practices are prerequisites to the distribution of its content in any particular situation” and “This document defined only one approach to security and compatibility, and other approaches may be available.”
Instead, the best-practices approach looks like it is intended to give “signals” from the major studios to content protection technology vendors, such as Microsoft, Irdeto, Intertrust, and Verimatrix, who work with content service providers. These vendors will then presumably develop protection schemes that follow the best practices, with an understanding that studios will then agree to license their content to those services.
The result of this approach should be legal content services for next-generation video that get to market faster. The best practices are independent of things like content delivery modalities (physical media, downloads, streaming) and largely independent of usage rules. Therefore they should enable a wider variety of services than is possible with the traditional licensing authority paradigm.
Yet this approach has two drawbacks compared to the older approach. (And of course the two approaches are not mutually exclusive.) First is that it jeopardizes the interoperability among services that Hollywood craves — and has gone to great lengths to preserve in the UltraViolet standard. Service providers and device makers can incorporate content protection schemes that follow MovieLabs' best practices, but consumers may not be able to move content among them, and service providers will be able to use content protection schemes to lock users into their services. In contrast, many in Hollywood are now nostalgic for the DVD because, although its protection scheme was easily hacked, it guaranteed interoperability across all players (at least all within a given geographic region).
The other drawback is that the document is a wish list provided by organizations that won’t pay for the technology. This means that downstream entities such as device makers and service providers will treat it as the maximum amount of protection that they have to implement to get studio approval. Because there is no license agreement that they have to sign to get access to the technology, the downstream entities are likely to negotiate down from there. (Such negotiation already took place behind the scenes during the rollout of Blu-ray, as player makers refused to implement some of the more expensive protection features and some studios agreed to let them slip.)
Downstream entities are particularly likely to push back against some of MovieLabs’s best practices that involve costs and potential impairments of the user experience; examples include device connectivity to networks for purposes of authentication and revocation, proactive renewal of device software, and embedding of situation-specific watermarks.
Surely the studios understand all this. The publication of this document by MovieLabs shows that Hollywood is willing to entertain dialogues with service providers, device makers, and content protection vendors to speed up time-to-market of legitimate video services and ensure that downstream entities can innovate more freely. How much protection will the studios ultimately end up with when 4k video reaches the mainstream? It will be very interesting to watch over the next couple of years.
Copyright and Accessibility
June 19, 2013
Posted by Bill Rosenblatt in Events, Law, Publishing, Standards, Uncategorized.
Last week I received an education in the world of publishing for print-disabled people, including the blind and dyslexic. I was in Copenhagen to speak at Future Publishing and Accessibility, a conference produced by Nota, an organization within the Danish Ministry of Culture that provides materials for the print-disabled, and the DAISY Consortium, the promoter of global standards for talking books. The conference brought together speakers from the accessibility and mainstream publishing fields.
Before the conference, I had been wondering what the attitude of the accessibility community would be towards copyright. Would they view it as a restrictive construct that limits the spread of accessible information, allowing it to remain in the hands of publishers that put profit first?
As it turns out, the answer is no. The accessibility community, generally speaking, has a balanced view of copyright that reflects the growing importance of the print disabled to publishers as a business matter.
Digital publishing technology might be a convenience for normally sighted people, but for the print disabled, it’s a huge revelation. The same e-publishing standards that promote ease of production, distribution, and interoperability for mainstream consumers make it possible to automate and thus drastically lower the cost and time to produce content in Braille, large print, or spoken-word formats.
Once you understand this, it makes perfect sense that the IDPF (promoter of the EPUB standards for e-books) and DAISY Consortium share several key members. It was also pointed out at the conference that the print disabled constitute an audience that expands the market for publishers by roughly 10%. All this adds up to a market for accessible content that’s just too big to ignore.
As a result, the interests of the publishing industry and the accessibility community are aligning. Accessibility experts respect copyright because it helps preserve incentives for publishers to convert their products into versions for the print disabled. Although more and more accessibility conversion processes can be automated, manual effort is still necessary — particularly for complex works such as textbooks and scientific materials.
Publishers, for their part, view making content accessible to the print disabled as part of the value that they can add to content — value that still can’t exist without financial support and investment.
One example is Elsevier, the world’s largest scientific publisher. Elsevier has undertaken a broad, ambitious program to optimize its ability to produce versions of its titles for the print disabled. One speaker from the accessibility community called the program “the gold standard” for digital publishing. Not bad for a company that some in the academic community refer to as the Evil Empire.
This is not by any means to suggest that publishers and the accessibility community coexist in perfect harmony. There is still a long way to go to reach the state articulated at the conference by George Kerscher, who is both Secretary General of DAISY and President of IDPF: to make all materials available to the print disabled at the same time, and for the same price, as mainstream content.
The Future Publishing and Accessibility conference was timed to take place just before negotiations begin over a proposed WIPO treaty that would facilitate the production of accessible materials and distribution of them across borders. The negotiations are taking place this and next week in Marrakech, Morocco. This proposed treaty is already laden with concerns from the copyright industries that its provisions will create opportunities for abuse, and reciprocal concerns from the open Internet camp that the treaty will be overburdened with restrictions designed to limit such abuse. But as I found out in Denmark last week, there is enough practical common ground to hope that accessibility of content for the print disabled will continue to improve.
Withholding Ads from Illegal Download Sites
January 7, 2013
Posted by Bill Rosenblatt in Standards.
Over the past few months, the conversation about online infringement has shifted from topics like graduated response and DMCA-related litigation to cutting off ad revenue for sites that provide illegal downloads. This issue gained some importance during the run-up to SOPA and PIPA in 2011. David Lowery of The Trichordist gave it new visibility last year by initiating a stream of screenshots showing major consumer brands that advertise on sites like FilesTube, IsoHunt, and MP3Skull.
Last week, the issue took on a new level of importance when a report from the University of Southern California's Annenberg Innovation Lab confirmed that major online ad networks, including those of Google and Yahoo, routinely place ads on pirate sites. The Innovation Lab has come up with an automated way of tracking the ad networks that place ads on sites that (according to the Google Transparency Report) attract the most DMCA takedown notices. The study ranks the top ten ad networks that serve ads on pirate sites and will be updated monthly.
The idea is to shame consumer brands by showing them that their ads are appearing on pirate sites amid ads for pornography, mail-order brides, etc. The Annenberg study has already led to at least one major consumer brand insisting that its ads be pulled from pirate sites.
The focus on online ad networks is — as Lowery admitted at our Copyright and Technology NYC 2012 conference last month — not an ideal solution to the problem of online infringement but rather a “low hanging fruit” approach that appeals to real business imperatives without requiring lawyers or lobbyists. It’s an acknowledgement that legislation to address online infringement is not going to be achievable, at least in the near future, in the aftermath of the defeats of SOPA and PIPA in early 2012.
Yet the tactic’s effectiveness is limited by the quality of information about ad buys that flows through ad networks. For example, it’s sometimes not possible for an advertiser to know where its ads are being placed because, among other reasons, ad networks resell inventory to each other.
Let’s assume that most consumer brands would rather not have their ads placed on pirate sites. Then two things are required to solve this problem. One is standards for information about ad buys — advertiser identity, inventory, ad network, type of placement, and so on — and protocols for communicating that information up the chain of intermediaries from the website on which the ad was placed all the way up to the advertiser. The other is agreement throughout the online ad industry to use such standards in communication and reporting.
If you follow efforts to develop standard content identifiers and online rights registries, you should see the analogy here.
The good news is that the Interactive Advertising Bureau (IAB) has been working on standards that look like they could apply here with some tweaks. The IAB launched the eBusiness Interactive Standards initiative in 2008 with the goal of increasing efficiency and reducing errors in online ad workflows. The eBusiness Interactive Standards spec defines XML structures for communicating information among advertisers, agencies, and publishers (websites) from RFPs (requests for proposal) through to IOs (insertion orders).
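To make the idea concrete, here's a sketch of the kind of placement record such a standard would need to carry. This is a hypothetical structure of my own, not the actual IAB schema; the point is the fields from the argument above — a globally unique placement identifier, the advertiser, the final site, and the full chain of networks the buy passed through:

```python
import xml.etree.ElementTree as ET

# Hypothetical ad-placement record (NOT the real IAB schema). With a
# record like this propagated up through each resale, an advertiser
# could trace exactly where its ad ended up and via which networks.
placement = ET.Element("AdPlacement", id="urn:ad:placement:2013-0001")
ET.SubElement(placement, "Advertiser").text = "Example Brand, Inc."
ET.SubElement(placement, "Site").text = "final-placement-site.example"

# The chain of intermediaries, in order: the network the advertiser
# bought from, then each network that resold the inventory.
chain = ET.SubElement(placement, "NetworkChain")
for hop, network in enumerate(["PrimaryAdNetwork", "ResellerNetwork"], start=1):
    ET.SubElement(chain, "Network", hop=str(hop)).text = network

record_xml = ET.tostring(placement, encoding="unicode")
```

The globally unique `id` is what makes the reporting side work: every entity in the chain refers to the same placement, so "where did this ad actually run?" has a single, auditable answer.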
Now the bad news. The IAB standards would need some modification to cover the requirements here: they don’t appear to work through multiple levels of ad networks, they don’t include globally unique identifiers for ad placements (though this would be simple to add), and they aren’t designed to cover performance reporting. Furthermore, progress in getting the standard to market appears to be slow: it entered a beta phase with limited customers in 2011, and no progress has been apparent since then.
Yet even if the right standards were adopted, the advertising industry would still need to commit to the kinds of transparency that would be necessary to ensure, if an advertiser wishes, that its ads don’t appear on pirate sites. For one thing, advertisers often buy “blind” a/k/a run-of-network inventory in order to get discount pricing, and there is no reliable way to ensure that such buys don’t inadvertently end up on pirate sites. A related problem is where to draw the line between obvious pirate sites like the ones mentioned above and those that happen to occasionally host unauthorized material.
Regulatory initiatives seem unlikely here. Indeed, the Obama Administration and Congress in 2011 asked the ad industry to adopt a “pledge” against advertising on pirate sites; the industry’s two major U.S. trade associations responded last May with a statement full of equivocation and wiggle-room.
Ultimately, the pressure would have to come from advertisers themselves. They could demand, for example, that even blind buys not appear on the 200-250 sites that show more than 10,000 takedown notices a month in the Google Transparency Report (mainstream sites like Facebook, Tumblr, DailyMotion, Scribd, and SoundCloud fall well below this threshold). A good set of technical standards for tracking and reporting would help convince them that they can demand to withhold their ads from these sites with a reasonable chance of success.
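The threshold rule described above is trivial to automate once the reporting exists. A minimal sketch (site names and notice counts are made up for illustration; real inputs would come from the Google Transparency Report):

```python
# Notices per month, per the 10,000-notice threshold proposed above.
TAKEDOWN_THRESHOLD = 10000

# Illustrative data only.
monthly_notices = {
    "obvious-pirate-site.example": 48000,
    "another-pirate-site.example": 22500,
    "mainstream-ugc-site.example": 1200,
}

def excluded_from_ad_buys(notices, threshold=TAKEDOWN_THRESHOLD):
    """Sites an advertiser could demand be excluded even from blind /
    run-of-network buys."""
    return sorted(site for site, count in notices.items() if count > threshold)

print(excluded_from_ad_buys(monthly_notices))
# -> ['another-pirate-site.example', 'obvious-pirate-site.example']
```

The hard part, as the preceding paragraphs suggest, isn't the filter; it's getting trustworthy per-site data to feed into it.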
As I have worked with video service providers that are trying to upgrade their offerings to include online and mobile services, I’ve seen bewilderment about the maze of codecs, streaming protocols, and player apps as well as content protection technologies that those service providers need to understand. Yet a development that took place earlier this month should help ease some of the complexity.
Microsoft’s PlayReady is becoming a popular choice for content protection. Dozens of service providers use it, including BSkyB, Canal+, HBO, Hulu, MTV, Netflix, and many ISPs, pay TV operators, and wireless carriers. PlayReady handles both downloads and streaming, and it is currently the only commercial DRM technology certified for use with UltraViolet (though that should change soon). Microsoft has developed a healthy ecosystem of vendors that supply things like player apps for different platforms, “hardening” of client implementations to ensure robustness, server-side integration services, and end-to-end services. And after years of putting in very little effort on marketing, Microsoft has finally upgraded its PlayReady website with information to make it easier to understand how to use and license the technology.
Streaming protocols are still a bit of an issue, though. Several vendors have created so-called adaptive streaming protocols, which monitor the user’s throughput and vary the bit rate of the content to ensure optimal quality without interruptions. Apple has HTTP Live Streaming (HLS), Microsoft has Smooth Streaming, Adobe has HTTP Dynamic Streaming (HDS), and Google has technology it acquired from Widevine. Yet operators have been more interested in Dynamic Adaptive Streaming over HTTP (DASH), an emerging vendor-independent MPEG standard. The hope with MPEG-DASH is that operators can use the same protocol to stream to a wide variety of client devices, thereby making deployment of TV Everywhere-type services cheaper and easier.
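The core logic shared by all of these adaptive protocols is simple: the client measures its throughput and picks the highest-bitrate rendition that fits comfortably within it. A toy sketch (rendition ladder and safety factor are illustrative, not from any particular protocol):

```python
# Available renditions in kbps, as a client might read them from an
# adaptive-streaming manifest (numbers are illustrative).
RENDITIONS_KBPS = [400, 800, 1500, 3000, 6000]

def pick_bitrate(measured_throughput_kbps, safety_factor=0.8):
    """Choose the highest rendition that fits within the client's
    measured throughput, leaving headroom so playback doesn't stall;
    fall back to the lowest rendition on a very slow connection."""
    budget = measured_throughput_kbps * safety_factor
    candidates = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(candidates) if candidates else min(RENDITIONS_KBPS)

assert pick_bitrate(4000) == 3000  # 4000 * 0.8 = 3200 -> best fit is 3000
assert pick_bitrate(300) == 400    # below the lowest rung: degrade gracefully
```

Real implementations add buffer-level feedback and smoothing to avoid oscillating between renditions, but this is the decision each of HLS, Smooth Streaming, HDS, and MPEG-DASH clients makes every few seconds.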
MPEG-DASH took a significant step towards real-world viability over the last few months with the establishment of the DASH Industry Forum, a trade association that promotes market adoption of the standard. Microsoft and Adobe are members, though not Apple or Google, indicating that at least some of the vendors of proprietary adaptive streaming will embrace the standard. The membership also includes a healthy critical mass of vendors in the video content protection space: Adobe, BuyDRM, castLabs, Irdeto, Nagra, and Verimatrix — plus Cisco, owner of NDS, and Motorola, owner of SecureMedia.
Adaptive streaming protocols need to be integrated with content protection schemes. PlayReady was originally designed to work with Smooth Streaming. It has also been integrated with HLS, which is probably the most popular of the proprietary adaptive streaming schemes. Integration of PlayReady with MPEG-DASH is likely to be viewed as a safe choice, in line with the way the industry is going. That solution came into view this month as BuyDRM and Fraunhofer IIS announced an integration of MPEG-DASH with PlayReady for the HBO GO service in Europe. HBO GO is HBO’s “over the top” service for subscribers.
For the HBO GO demo, BuyDRM implemented a version of its PlayReady client that uses Fraunhofer’s AAC 5.1 surround-sound codec, which ships with devices that run Android 4.1 Jelly Bean. The integration is being showcased with HD quality video on HBO’s “Boardwalk Empire” series. Users can connect Android 4.1 devices with the proper outputs — even handsets — to home-theater audio playback systems to get an experience equivalent to playing a Blu-ray disc. The current implementation supports live broadcasting, with VOD support on the way shortly.
PlayReady integrated with MPEG-DASH is likely to be a popular choice for a variety of video service providers, ranging from traditional pay TV operators to over-the-top services like HBO Go. BuyDRM and Fraunhofer’s deployment is an important step towards that choice becoming widely feasible.
UK IPO Publishes Digital Copyright Hub Report
August 13, 2012
Posted by Bill Rosenblatt in Rights Licensing, Standards, UK.
Last month, the UK Intellectual Property Office published a report called Copyright Works: streamlining copyright licensing for the digital age. This is the second report in Richard Hooper CBE and Dr. Ros Lynch’s engagement with the UK IPO. Hooper’s background includes positions at the top of the UK’s media and telecommunications industries; Lynch is a senior civil servant in the UK’s Department for Business, Innovation and Skills.
The second Hooper Report follows on the heels of several important developments in the UK regarding copyright in the digital age, most recently including the Digital Economy Act and the Hargreaves Review. Having found (in the first Hooper Report) that the legal content marketplace is being held back by several obstacles, such as licensing difficulties, lack of standards, and deficiencies in both content and metadata, the second Hooper Report makes recommendations on how to solve the problems.
Unfortunately the recommendations in the second Hooper Report don’t go far enough. Hooper and Lynch did a lot of research, talked to lots of people, and synthesized lots of information. Most of their input appears to have come from established industry sources, including the major licensing entities in the UK, such as PRS and PPL (UK analogs to ASCAP and RIAA in the US); major media companies; trade associations; and standards initiatives engendered by the EU Digital Agenda such as the Global Repertoire Database (GRD) and Linked Content Coalition (LCC). They also researched important initiatives outside of the UK, such as the Copyright Clearance Center’s RightsLink service in the US.
Whereas the first Hooper Report established that major problems exist, this new report is best appreciated as a summary of the various initiatives being planned to solve pieces of them — such as the GRD and LCC. Hooper and Lynch offer cogent explanations of problems to be solved: difficulty of licensing content into legitimate services, lack of complete and consistent information about content and rights, lack of standards for rights information and communication among relevant entities, resistance of collective licensing schemes to new business models, and the relative lack of content available for legal use through various channels.
The authors appear to understand that the various efforts being proposed are not going to solve all the problems by themselves. On the other hand, they also understand problems of “not invented here,” and they take the pragmatic view that the best way forward is to work with existing standards and integrate them together rather than try to come up with some kind of overall solution that may not be practicable.
So far, so good; but that’s essentially where it all stops. After explaining the problems and summarizing existing initiatives, the report tantalizingly lays out a vision for a Copyright Hub that will bring everything together. It recommends government seed funding as a way of both kick-starting the Copyright Hub and ensuring that people work together to build it.
Unfortunately, the vision for the Copyright Hub turns out to be an inch deep. It also lacks explanations of how, or if, all these initiatives — ranging from PRS and PPL’s efforts to offer “one-stop” music licenses all the way up through the technically sophisticated GRD — could fit together or even how they map to the elements in the proposed Copyright Hub. The LCC project is looking at technical aspects of the integration issue, but it is conceived as an enabler of standards, not as a marketplace solution. It’s possible that such a solution is envisioned as a next step in the process. But the report betrays evidence of a lack of technical understanding that would have benefited both the analysis and the envisioning of solutions.
For example: The report has a section on digital images, which discusses the problem that many images are stripped of their rights metadata as part of normal publishing processes. It discusses the possibility of using Internet-standard Uniform Resource Identifiers (URIs) to identify images and the work that entities such as Getty Images and the PLUS coalition are doing to create image registries and automate rights licensing. But when put in this context, the solution to the metadata stripping problem is obvious: watermarking, the standard way of ensuring that data travels with content. The problem can be solved with a standard watermarking scheme whose payload includes a serial identifier that can be used to reference a URI in a registry. This is what the RIAA proposed for music in the U.S. in 2009, albeit to precious little fanfare; but Hooper and his people didn’t see it. (They use the word “embed” without appearing to understand its meaning.) There are other examples like this.
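The serial-in-watermark, metadata-in-registry scheme is worth sketching end to end, because it's simpler than it sounds. In this toy version (a real watermark must survive cropping and recompression; writing serials into the low bits of raw bytes, as here, is purely for illustration, and the registry URI is made up):

```python
def embed_serial(pixels: bytearray, serial: int, bits: int = 32) -> bytearray:
    """Hide a serial number in the least-significant bits of the first
    `bits` bytes (toy stand-in for a robust watermarking algorithm)."""
    out = bytearray(pixels)
    for i in range(bits):
        out[i] = (out[i] & 0xFE) | ((serial >> i) & 1)
    return out

def extract_serial(pixels: bytearray, bits: int = 32) -> int:
    return sum((pixels[i] & 1) << i for i in range(bits))

# Hypothetical registry mapping serials to rights-metadata URIs.
REGISTRY = {1041: "https://registry.example/images/1041/rights"}

marked = embed_serial(bytearray(range(64)), 1041)
print(REGISTRY[extract_serial(marked)])
# -> https://registry.example/images/1041/rights
```

Because the serial lives in the image data itself rather than in a metadata header, a publishing workflow that strips headers leaves it intact; anyone who later extracts the serial can look up the rights in the registry.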
The report mentions “long tail” licensing — not as in long tail content, but as in long tail uses of content rights. The work that needs to be done should, the report rightly says, address the large and growing number of low-value licensing transactions rather than, say, Universal Music Group licensing to Spotify or Deezer (the kind of deal that will always get done the old-fashioned way). Unfortunately, the authors don’t seem to have talked to many people who try to get such licensing. They should, for example, have sought out startup companies that have to navigate the impenetrable maze of direct licensing deals with rights holders, face the rigidity of collecting societies that won’t accommodate their innovative business models, and make separate deals in 27 member states to get a pan-European service launched.
Overall, the second Hooper Report reads like a particularly well-informed version of the typical industry response to a government body’s investigation into industry practices: look at all the steps we’re already taking to solve this problem; leave us alone.
In the end, the new Hooper Report is a solid foundation on which to build solutions, but it doesn't provide enough forward direction. It's all very well to talk about respecting the growing body of valuable work that different organizations are doing to solve online content licensing problems, avoiding "not invented here," promoting open standards, and so on. But the work that must be done will necessarily include tasks that are tedious and contentious, aspects that the Hooper Report glosses over.
Metadata schemes will have to be rationalized against one another; gaps and incompatibilities will have to be identified and eliminated. Rights holders whose metadata is incomplete or poor quality will have to be identified and given sufficient incentive to improve. Well-intentioned standards initiatives with overlapping or conflicting goals will have to change. Digital holdouts will have to be convinced to participate. And the many organizations with vested interests in maintaining the status quo will have to be called out as part of the problem rather than the solution. This may be ugly work, but it will have to get done.
I am working with the International Digital Publishing Forum (IDPF), helping them define a new type of content protection standard that may be incorporated into the upcoming Version 3 of IDPF’s EPUB standard for e-books. We’re calling this new standard EPUB Lightweight Content Protection (EPUB LCP).
EPUB LCP is currently in a draft requirements stage. The draft requirements, along with some explanatory information, are publicly available; IDPF is requesting comments on them until June 8. I will be giving a talk about EPUB LCP, and the state of content protection for e-books in general, at Book Expo America in NYC next week, during IDPF’s Digital Book Program on Tuesday June 5.
Now let’s get the disclaimer out of the way: the remainder of this article contains my own views, not necessarily those of IDPF, its management, or its board members. I’m a consultant to IDPF; any decisions made about EPUB LCP are ultimately IDPF’s. The requirements document mentioned above was written by me but edited by IDPF management to suit its own needs.
IDPF is defining a new standard for what amounts to a simple, lightweight, looser DRM. EPUB is widely used in the e-book industry (by just about everyone except Amazon), but lack of an interoperable DRM standard has caused fragmentation that has hampered its success in the market. Frankly, IDPF blew it on this years ago (before its current management came in). They bowed to pressures from online retailers and reading device makers not to make EPUB compliance contingent on adopting a standard DRM, and they considered DRM (understandably) not to be “low hanging fruit.”
IDPF first announced this initiative on May 18; it got press coverage in online publications such as Ars Technica, PaidContent.org, and others. Most of the comments ran along the lines of "DRM sucks no matter what you call it" or "Why bother with this at all? It won't help prevent any infringement." A small number of commenters said something on the order of "If there has to be DRM, this isn't a bad alternative." One very knowledgeable commenter on Ars Technica first judged the scheme to be pointless because it's cryptographically weak, then came around to understanding what we're trying to do and even offered some beneficial insights.
The draft requirements document provides the basic information about the design; my main purpose here is to focus more on the circumstances and motivation behind the initial design choices.
Let’s start at a high level, with the overall e-book market. (Those of you who read my article about this on PaidContent.org a few months ago can skip this and the next five paragraphs.) Right now it’s at a tipping point between two outcomes that are both undesirable for the publishing industry. The key figure to watch is Amazon’s market share, which is currently in the neighborhood of 60%; Barnes and Noble’s Nook is in second place with share somewhere in the 25-30% range.
One outcome is Amazon increasing its market share and entering monopoly territory (according to the benchmark of 70% market share often used in US law). If that happens, Amazon can do to the publishing industry what Apple has done in music downloads: dominate the market so thoroughly that it can both dictate economic terms and lock customers into its own ecosystem of devices, software, and services.
The other outcome is that Amazon's market share falls, say to 50% or lower, due to competition. In that case, the market fragments even further, putting a damper on overall growth in e-reading. That's not good for publishers either.
Let’s look at what happens to DRM in each of these cases. In the first (Amazon monopoly) case, Amazon may drop DRM just as Apple did for music — but it will be too late: Amazon will have achieved lock-in and can preserve it in other ways, such as by making it generally inconvenient for users to use other devices or software to read Amazon e-books. Other e-book retailers would then drop DRM as well, but few will care.
In the second case, everyone will probably keep their DRMs in order to keep users from straying to competitors (though some individual publishers will opt out of it). In other words, if the DRM status quo remains, the likely alternatives are DRM-free monopoly or DRM and fragmentation.
If IDPF had included an interoperable DRM standard back in 2007 when both EPUB and the Kindle launched, e-books might well be more portable among devices and reading software than they are now. Yet the most desirable outcome for the reading public is 100% interoperability, and we know from the history of technology markets (with the admittedly major exception of HTML) that this is a chimera. (Again, I explained this in PaidContent.org a few months ago.)
To many people, the way out of this dilemma is obvious: everyone should get rid of DRM now. That certainly would be good for consumers. But most publishers — who control the terms by which e-books are licensed to retailers — don’t want to do this; neither do many authors, who own copyrights in their books.
E-book retailers and device vendors can get lock-in benefits from DRM. As for whether DRM does anything to benefit rights holders by improving consumers’ copyright compliance or reducing infringement, that’s a real question. Notwithstanding the opinions of the many self-styled experts in user behavior analysis and infringement data collection among the techblogorati and commentariat, the answer is unknown and possibly unknowable. Publishers are motivated to keep DRM if for no other reason than fear that once it goes away, they can never bring it back. Moreover, certain segments of the publishing industry (such as higher education) want DRM that’s even stronger than the current major schemes.
The fact is, none of the major DRMs in today’s e-book market are very sophisticated — at least not compared to content protection technologies used for video content. The economics of the e-book industry make this impossible: the publishers and authors who want DRM don’t pay for it, resulting in cost and complexity constraints. DRM helps retailers insofar as it promotes lock-in, but it doesn’t help them protect their overall services. In contrast, content protection helps pay TV operators (for example) protect their services, which they want protected just as much as Hollywood doesn’t want its content stolen; so they’re willing to pay for more sophisticated content protection.
The two leading e-book DRMs right now are Amazon’s Mobipocket DRM and Adobe’s Content Server 4; the latter is used by Barnes & Noble, Sony, and various others. Hackers have developed what I call “one-click hacks” for both. One-click hacks meet three criteria: people without special technical expertise can use them; they work on any file that’s packaged in the given DRM; and they work permanently (i.e., there is no way to recover from them). In contrast, pay TV content protection schemes are generally not one-click-hackable.
In other words, one-click DRM hacks are like format converters, such as the one built into Microsoft Word that converts files from WordPerfect or the ones built into photo editing utilities that convert TIFF to JPEG. But there's a difference: DRM hacks are illegal in many countries, including the United States, European Union member states, Brazil, India, Taiwan, and Australia; all other signatories to the Anti-Counterfeiting Trade Agreement will eventually have so-called anticircumvention laws too.
The effect of anticircumvention law has been to force DRM hacks into the shadows, making them less easily accessible to the non-tech-savvy and at least somewhat stigmatized. Without the law, we would have things like Nook devices and software with “Convert from Kindle Format” options (and vice versa). The popular, free Calibre e-book reading app, for example, had a DRM stripper but removed it (presumably under legal pressure) in 2009. A DRM removal plug-in for Calibre is available, but it’s not an official one; David Pogue of the New York Times — hardly a fan of DRM – recently dismissed it as difficult to use as well as illegal.
The US has a rich case history around anticircumvention law that has made the boundaries of legal acceptability reasonably clear. It has shut off the availability of hacks from “legitimate” sources and ensured that if your hack is causing enough trouble, you will be sued out of existence. I am not personally a fan of anticircumvention law, but I accept as fact that it has made hacks less accessible to the general public.
The foregoing line of thought got IDPF Executive Director Bill McCoy and me talking last year about what IDPF might be able to do about DRM in the upcoming version of EPUB, in order to help IDPF further its objective of making EPUB a universal standard for digital publishing and forestall the two undesirable market trajectories described above. We did not set out to design an “ultimate DRM” or even “yet another DRM”; we set out to design something intended to solve problems in the digital publishing market while working within existing marketplace constraints.
So now, with that background, here is a set of interrelated design principles we established for EPUB LCP:
- Require interoperability so that retailers cannot use it to promote lock-in. This is what the UltraViolet standard for video is attempting to do, albeit in a technically much more complex way. The idea of UltraViolet is to provide some of the interoperability and sharing features that users want while still maintaining some degree of control. Our theory is that both publishers and e-book retailers would be willing to accept a looser form of DRM that could break the above market dilemma while striking a similar balance between interoperability and control.
- Support functions that users really want, such as “social” sharing of e-books. Build on the idea of e-book watermarking, such as that used in Safari Books Online for PDF downloads and in the Pottermore Store for EPUB format e-books: embed users’ personal information into the content, on the expectation that users will only share files with people whom they trust not to abuse their personal information.
- Create a scheme that can support non-retail models such as library lending and can be extended to support additional business models (see below) or the stronger security that industry segments such as higher ed need.
- Include the kinds of user-friendly features that Reclaim Your Game has recommended for video game DRMs. These include respecting privacy by not “phoning home” to servers and ensuring permanent offline use so that files can be used even if the retailer goes out of business. They also include not jeopardizing the security or integrity of users’ devices, as in the infamous “rootkit” installed by CD copy protection technology for music several years ago.
- Eliminate design elements that add disproportionately to cost and complexity. Perhaps the biggest of these is the so-called robustness rules that have become standard elements of DRMs such as OMA DRM, Marlin, and PlayReady, where the DRM technology licensor doesn't own the hardware or platform software. Eliminating "phoning home" also saves cost and complexity. Other elements to be eliminated include key revocation, recoverability, and fancy authentication schemes such as the domain authentication used in UltraViolet.
- Finally, don’t try very hard to make the scheme hack-proof. The strongest protection schemes for commercial content — such as those found in pay television — are those that minimize the impact of hacks so that they are temporary and recoverable; such schemes are too complex, invasive, and expensive for e-book retailers or e-reader makers to consider. Instead, assume that EPUB LCP will be hacked, and rely on two things to blunt the impact: anticircumvention law, and allowing enough differences among implementations that each one will require its own hack (a form of what security technologists call “code diversity.”).
With those design principles in mind, we have designed a scheme that takes its inspiration from two sources in particular: the content protection technology used in the eReader/Fictionwise e-book technology that is now owned by Barnes & Noble, and the layered functionality concept built into the Digital Media Project's IDP (Interoperable DRM Platform) standard.
The central idea of EPUB LCP is a passphrase supplied by the user or retailer. This could be an item of personal information, such as a name, email address, or even credit card number; distributors or rights holders can decide what types of passphrases to use or require. The passphrase is irreversibly obfuscated (e.g., through a hash function), and the obfuscated value is embedded into the e-book file; even if a hack extracts that value, it can't recover the personal information, yet the retailer can still link it to the user. If the user wants to share an e-book, all she has to do is share the passphrase. Otherwise, the content must be hacked to be readable.
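As a rough illustration of this mechanism, here is a sketch in Python. To be clear, this is not the actual EPUB LCP specification; the algorithm choices (SHA-256 for obfuscation, PBKDF2 for key derivation, a toy HMAC-based keystream for encryption) are illustrative assumptions only.

```python
import hashlib
import hmac
import os

def obfuscate(passphrase: str) -> bytes:
    """One-way hash of the passphrase. The retailer can link this value
    to a user, but the personal information can't be recovered from it."""
    return hashlib.sha256(passphrase.encode("utf-8")).digest()

def derive_key(obfuscated: bytes, salt: bytes) -> bytes:
    """Derive a content-encryption key from the obfuscated passphrase."""
    return hashlib.pbkdf2_hmac("sha256", obfuscated, salt, 100_000)

def encrypt(content: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher for illustration; a real scheme would use AES.
    Applying it twice with the same key decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(content):
        stream += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(c ^ s for c, s in zip(content, stream))

salt = os.urandom(16)
tag = obfuscate("alice@example.com")   # this value is embedded in the e-book file
key = derive_key(tag, salt)
ciphertext = encrypt(b"Chapter 1 ...", key)

# Sharing: Alice gives a friend her passphrase; the friend re-derives the
# same key. Without the passphrase, the content must be hacked to be read.
assert encrypt(ciphertext, derive_key(obfuscate("alice@example.com"), salt)) == b"Chapter 1 ..."
```

The notable design property is in `obfuscate`: the social disincentive to share comes from the passphrase being personal information, while the file itself carries only the hash.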
Other aspects of the draft requirements are covered in the document on the IDPF website. Apart from that, it’s worth mentioning that this type of scheme will not support certain content distribution models unless extensions are added to make them possible. Features intentionally left out of the basic EPUB LCP design include:
- Separate license delivery, which allows different sets of rights for a given file
- License chaining, which supports subscription services
- Domain authentication, which can support multi-device/multi-user “family accounts” a la UltraViolet
- Master-slave secure file transfer, for sideloading onto portable devices, a la Windows Media DRM
- Forward-and-delete, to implement “Digital Personal Property” a la the IEEE P1817 standard
Once again, we set out to design something that meets current market needs and works within current market constraints; EPUB LCP is not a research-lab R&D project.
Again, I’ll be discussing this, as well as the landscape for e-book content protection in general, at Book Expo America next week. Feel free to come and heckle (or just heckle in the comments right here). I’m sure I will have more to report as this very interesting project develops.
Creative Commons for Music: What’s the Point? January 22, 2012Posted by Bill Rosenblatt in Law, Music, Rights Licensing, Services, Standards.
I recently came across a music startup called Airborne Music, which touts two features: a business model based on “subscribing to an artist” for US $1/month, and music distributed under Creative Commons licenses. Like other music services that use Creative Commons, Airborne Music appeals primarily to indie artists who are looking to get exposure for their work. This got me thinking about how — or whether — Creative Commons has any real economic value for creative artists.
I have been fascinated by a dichotomy of indie vs. major-label music: indie musicians value promotion over immediate revenue, while for major-label artists it’s the other way around. (Same for book authors with respect to the Big 6 trade publishers, photographers with respect to Getty and Corbis, etc.) Back when the major labels were only allowing digital downloads with DRM — a technology intended to preserve revenue at the expense of promotion — I wondered if those few indie artists who landed major-label deals were getting the optimal promotion-versus-revenue tradeoffs, or if this issue even figured into major-label thinking about licensing terms and rights technologies.
When I looked at Airborne Music, it dawned on me that Creative Commons is interesting for indie artists who want to promote their works while preserving the right (if not the ability) to make money from them later. The Creative Commons website lists ten existing sites that enable musicians to distribute their music under CC, including big ones like the bulge-bracket-funded startup SoundCloud and the commercially-oriented BandCamp.
This is an eminently practical application of Creative Commons's motto: "Some rights reserved." Many CC-licensing services use the BY-SA (Attribution-ShareAlike) Creative Commons license, which gives you the right to copy and distribute the artist's music as long as you attribute it to the artist and redistribute (i.e., share) it under the same terms. That's exactly what indie artists want: to get their content distributed as widely as possible but to make sure that everyone knows it's their work. Some use BY-NC-SA (Attribution-NonCommercial-ShareAlike), which adds the condition that you can't sell the content, meaning that the artist is preserving her ability to make money from it.
It sounds great in theory. It's just too bad that there isn't a way to make sure that those rights are actually respected. There is a rights expression language for Creative Commons (CC REL), which makes it possible for content rendering or editing software to read the license (expressed in RDFa) and act accordingly. As a technology, the REL concept originated with Mark Stefik at Xerox PARC in the mid-1990s; the eminent MIT computer scientist Hal Abelson created CC REL in 2008. Since then, the Creative Commons organization has maintained something of an arms-length relationship with CC REL: it describes the language and offers links to information about it, but it doesn't (for example) include CC REL code in the actual licenses it offers.
More to the point, while there are code libraries for generating CC REL code, I have yet to hear of a working system that actually reads CC REL license terms and acts on them. (Yes, this would be extraordinarily difficult to achieve with any completeness, e.g., taking Fair Use into account.)
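For illustration, here is a hypothetical sketch of what a system that reads license terms and acts on them could look like. The license table and the permission model are drastic simplifications I have invented; a real implementation would parse CC REL terms out of RDFa rather than hard-code them, and would still face all the legal nuance (fair use and otherwise) that this ignores.

```python
# Invented, simplified permission table keyed by CC license id.
# Real CC REL expresses these terms in RDFa embedded in web pages.
LICENSES = {
    "by-sa":    {"copy": True, "derive": True,  "sell": True,  "attribution": True},
    "by-nc-sa": {"copy": True, "derive": True,  "sell": False, "attribution": True},
    "by-nc-nd": {"copy": True, "derive": False, "sell": False, "attribution": True},
}

def allowed(license_id: str, use: str) -> bool:
    """Would the stated license terms permit this use?
    (Ignores fair use and every other legal subtlety.)"""
    terms = LICENSES[license_id]
    return terms.get(use, False)

print(allowed("by-nc-sa", "derive"))  # True: remixing is permitted
print(allowed("by-nc-nd", "derive"))  # False: no derivative works
```

Even this trivial check is more enforcement than any deployed system performs today, which is precisely the gap discussed below.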
Without a real enforcement mechanism, CC licenses are all little more than labels, like the garment care hieroglyphics mandated by the Federal Trade Commission in the United States. For example, some BY-SA-licensed music tracks may end up in mashups. How many of those mashups will attribute the sources’ artists properly? Not many, I would guess. Conversely, what really prevents someone who gets music licensed under ND (No Derivative Works) terms from remixing or excerpting in ways that aren’t considered Fair Use? Are these people really afraid of being sued? I hardly think so.
This trap door into the legal system, as I have called it, makes Creative Commons licensing of more theoretical than practical interest. The practical value of CC seems to be concentrated in business-to-business content licensing agreements, where corporations need to take more responsibility for observing licensing terms and CC’s ready-made licenses make it easy for them to do so. The music site Jamendo is a good example of this: it licenses its members’ music content for commercial sync rights to movie and TV producers while making it free to the public.
Free culture advocates like to tell content creators that they should give up control over their content in the digital age. As far as I'm concerned, anyone who claims to welcome the end of control and also supports Creative Commons is talking out of both sides of his mouth. If you use a Creative Commons license, you express a desire for control, even if you don't actually get very much of it. What you really get is a badge that describes your intentions — a badge that a large and increasing number of web-savvy people recognize. Yet as a practical matter, a Creative Commons logo on your site is tantamount to a statement to the average user that the content is free for the taking.
The truth is that sometimes artists benefit most from lack of control over their content, while other times they benefit from more control. The copyright system is supposed to make sure that the public’s and creators’ benefits from creative works are balanced in order to optimize creative output. Creative Commons purports to provide simple means of redressing what its designers believe is a lack of balance in the current copyright law. But to be attractive to artists, CC needs to offer them ways to determine their levels of control in ways that the copyright system does not support.
In the end, Creative Commons is a burglar alarm sign on your lawn without the actual alarm system. You can easily buy fake alarm signs for a few dollars, whereas real alarm systems cost thousands. It’s the same with digital content. At least Creative Commons, like almost all of the content licensed with it, is free.
(I should add that I wear the badge myself. My whitepapers and this blog are licensed under Creative Commons BY-NC-ND (Attribution-Noncommercial-No Derivative Works) terms. I would at least rather have the copyright-savvy people who read this know my intentions.)
UltraViolet Gets Two Lifelines January 12, 2012Posted by Bill Rosenblatt in Economics, Fingerprinting, Services, Standards, Video.
A panel at this week’s CES show in Las Vegas yielded two pieces of positive news for the DECE/UltraViolet standard, after a launch several months ago with Warner Bros. and its Flixster subsidiary that could charitably be called “premature.” Of the two news items, one is a nice to have, but the other is a game-changer.
Let’s get to the game-changer first: Amazon announced that a major Hollywood studio is licensing its content for UltraViolet distribution through the online retail giant. The Amazon executive didn’t name the studio, though many assume it’s Warner Bros. Even if it’s a single studio, the importance of this announcement to the likelihood of UltraViolet’s success in the market cannot be overstated.
Leaving aside UltraViolet’s initial technical glitches and shortage of available titles, the problem with UltraViolet from a market perspective had always been a lukewarm interest from online retailers. As I’ll explain, this hasn’t been a surprise, but Amazon’s new interest in UltraViolet could make all the difference.
UltraViolet is the "brand name" of a standard from a group called the Digital Entertainment Content Ecosystem (DECE), headed by Sony Pictures executive Mitch Singer. It implements a so-called rights locker for digital movies and other video content. Users can establish UltraViolet accounts for themselves and family members. Then they can obtain a movie in one format (say, Blu-ray) and be entitled to get it in other formats for other devices (say, a Windows Media file download for PCs). They can also stream the content to a web browser anywhere. The rights locker, managed by Neustar Inc., tracks each user's purchases.
In other words, UltraViolet promises users format independence and a hedge against format obsolescence, while providing some protection for the content by requiring it to be packaged in several approved DRM and stream encryption schemes. It includes a few limitations on the number of devices and family members that can be associated with a single UltraViolet account, but in general UltraViolet is designed to make video content more portable and interoperable than, say, DVDs or iTunes downloads.
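The rights locker concept can be sketched as a simple data structure. The field names and format list below are invented for illustration; the real locker service operated by Neustar is far more elaborate (device limits, family members, retailer federation, and so on).

```python
from dataclasses import dataclass, field

# Illustrative format list; the real ecosystem enumerates approved DRM
# and stream encryption schemes rather than generic delivery types.
APPROVED_FORMATS = {"bluray", "download", "stream"}

@dataclass
class Locker:
    """A user's rights locker: titles owned, independent of format."""
    account: str
    titles: set = field(default_factory=set)

    def purchase(self, title: str):
        """A purchase in ANY format creates a single locker entry."""
        self.titles.add(title)

    def fulfill(self, title: str, fmt: str) -> bool:
        """Any approved format can be fulfilled from the same entry."""
        return title in self.titles and fmt in APPROVED_FORMATS

locker = Locker("smith-family")
locker.purchase("Inception")                   # bought once, e.g. on Blu-ray
print(locker.fulfill("Inception", "stream"))   # True: same entitlement
print(locker.fulfill("Inception", "betamax"))  # False: not an approved format
```

The key property is that ownership is recorded once, per title per account, rather than per format, which is what delivers the format independence and obsolescence hedge described above.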
Five of the six major Hollywood studios (all but Disney*), plus the “major indie” Lionsgate, are participating in UltraViolet.
One of the design goals of UltraViolet was to ensure that no single retailer could attain a market share large enough to be able to control downstream economics — in other words, to avoid a replay of Apple’s dominance of digital music downloads (and possibly Amazon’s dominance of e-books). To do this, the DECE studios pushed for ways to thwart consumer lock-in by online retailers that would sell UltraViolet content.
The most important example of this is rights locker portability: users can access their rights lockers from any participating retailer. UltraViolet retailers must compete with each other through value-added features.
Amazon’s Kindle e-book scheme offers a good illustration of platform lock-in and how it differs from other features that a retailer can build or offer. If you buy an e-book on Amazon, you can download and read it on a wide variety of devices: not just Kindle e-readers but also iPads, iPhones, Android devices, BlackBerrys, PCs, and Macs — in other words, pretty much everything but other e-reader devices. You get e-book portability — it will even remember where you last left off if you resume reading an e-book on another device — but you are still tied to Amazon as a retailer. If you want to read the same e-book on a Nook, for example, you have to buy it separately from Barnes & Noble (and then you can read that e-book on your PC, Mac, iPhone, Android, etc.).
This lock-in gives Amazon power in the market as a retailer; it had 58% market share as of February 2011 (by comparison, Apple has over 70% of the music download market). UltraViolet wants to make it as difficult as possible for a single digital video retailer to assert such market power.
The downside of that policy has been a lack of enthusiasm among retailers to sell UltraViolet-licensed content — which entails significant development investment and operational expenses. A good shorthand way to evaluate the potential impact of a standards initiative is to look at the list of participants: what points in the value chain are represented, how many of the top companies in each category, and so on. In DECE’s case, members have included most of the major movie studios, plenty of consumer device makers, lots of DRM and conditional access technology vendors, and so on, but few big-name retailers… one of which (Best Buy) already had a different system for delivering digital video content via Sonic Solutions.
Warner Bros. tried to jump-start the UltraViolet ecosystem by acquiring Flixster, a movie-oriented social networking startup, adding digital video e-commerce capability, and using it as an UltraViolet retailer for a handful of Warner titles. This has been little more than a proof-of-concept test, which was plagued by some technical glitches and suboptimal user experience — all of which, according to Singer, have been fixed.
It would be unworkable for Hollywood to pin its hopes for its next big digital format on a small unknown retailer owned by one of the studios. It has been vitally necessary to attract a big-name retailer to both validate the concept and provide the necessary marketing and infrastructure footprints. There had been talk of Wal-Mart entering the UltraViolet ecosystem, although it already has its own video delivery scheme through VUDU. But otherwise, the membership list had been short on major retailers.
Of course, Amazon is the major-est online retailer of them all. And it so happens that Amazon's digital video strategy is a good fit with UltraViolet in two ways. First, Amazon currently runs a streaming service (Amazon Instant Video), whereas UltraViolet is primarily focused on downloads, a/k/a Electronic Sell-Through (EST): the idea of UltraViolet is to buy a download and only then be able to view the content via streaming as well. UltraViolet would thus complement Amazon's existing service rather than compete with it.
Second, Amazon Instant Video does not look particularly successful. Of course, Amazon does not reveal user numbers, but it is telling that Amazon included Instant Video Unlimited as a perk in its US $79/year Amazon Prime program… and that when people extol the virtues of Amazon Prime, they tend to emphasize the free overnight shipping but rarely the streaming video.
The biggest winner thus far in the paid online video sweepstakes is Netflix, with about 24 million subscribers as of mid-2011. Netflix’s subscription-on-demand model is most likely far more popular than Amazon Instant Video’s pay-per-view (except for Amazon Prime members) model. Thus Amazon may be looking for ways to improve its market position in video without having to hack away at the Netflix streaming juggernaut.
The video download market is in comparative infancy. It has no runaway market leader a la Netflix, or Apple in music. If this situation persists long enough, and if Amazon’s trial run with UltraViolet is successful, then other retailers might see UltraViolet as a viable format as well… precisely because it will make them better able to compete with the Online Retailing Gorilla.
Yet the other dimension of UltraViolet that is currently lacking is availability of titles. And that’s where the other CES announcement comes in. Samsung announced a “Disc to Digital” feature that it will incorporate into new Blu-ray players later this year. With this feature, users can slide in their Blu-ray discs or DVDs, and if the content is “eligible,” they can choose to have that content available in their UltraViolet rights lockers for delivery in any UltraViolet-compliant format.
The Disc to Digital feature is a collaboration between Flixster (i.e. Warner Bros.) as online retailer and Rovi as technology supplier. It works in a manner that is analogous to “scan and match” services for music such as Apple iTunes Match: it scans your DVD or Blu-ray disc, identifies the movie, and if the movie is available in the UltraViolet library of licensed content, gives you an UltraViolet rights locker entry for that movie. Rovi’s content identification technology and metadata library are undoubtedly at the heart of this scheme.
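Here is a hypothetical sketch of that scan-and-match flow, with a plain content hash standing in for Rovi's proprietary identification technology (which presumably fingerprints the disc far more robustly than this):

```python
import hashlib

# Catalog of "eligible" content: fingerprint -> title. In reality this
# would be built from Rovi's identification database, not a plain hash.
CATALOG = {}

def register(title: str, disc_bytes: bytes):
    """Add an eligible title to the licensed catalog."""
    CATALOG[hashlib.sha256(disc_bytes).hexdigest()] = title

def disc_to_digital(disc_bytes: bytes, locker: set):
    """Identify a scanned disc; if eligible, grant a rights-locker entry.
    Returns the matched title, or None if the disc isn't eligible."""
    fingerprint = hashlib.sha256(disc_bytes).hexdigest()
    title = CATALOG.get(fingerprint)
    if title is not None:
        locker.add(title)
    return title

register("The Dark Knight", b"...disc contents...")
my_locker = set()
print(disc_to_digital(b"...disc contents...", my_locker))  # matched title
print("The Dark Knight" in my_locker)                      # True
```

The essential idea, as with iTunes Match, is that the user proves possession of the physical copy and the service converts that proof into a digital entitlement.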
There are two catches: first, users will have to pay a "nominal" fee per disc for this service, and a larger (as yet unspecified) fee if they want the content in high definition; second, it is limited to "eligible" content, and no one has offered a definition of "eligible" yet (beyond the fact that the content must come from one of the DECE participating studios). But surely the "eligible" catalog will exceed the current list (19 titles) by orders of magnitude, or the service will not be worth launching.
Nevertheless, these developments are very positive news for DECE/UltraViolet after months of embarrassments and bad press. DECE still has lots of work to do to make UltraViolet successful enough to be the major studios’ designated successor to Blu-ray, but at last it’s on track.
*Yes, I’m aware of the irony of using a tag line from “Who Wants to Be a Millionare” in the title of this article: Disney owns the home entertainment distribution rights to that hit TV game show.
The Future of Music: From Blanket Licensing to Registries October 10, 2011Posted by Bill Rosenblatt in Law, Music, Rights Licensing, Standards.
The Future of Music Coalition Policy Summit, which took place last week, has been a fixture in Washington, DC for a decade now. For those interested in how copyright has to find its way in the ever-changing world of digital music, this is a wonderful place to spend a couple of days. The FMC Policy Summit is a great event — and an inspiration for our own Copyright and Technology Conference — because it gathers many different types of people and forces them into a single room to get to know one another. As an organization, FMC represents the interests of independent musicians and songwriters, but the subject matter discussed at its Policy Summit should be of interest to anyone contemplating the future of music.
The panels at the FMC Policy Summit cover a range of topics beyond copyright. But last week’s conference had two panels on copyright arcana that were linked implicitly if not explicitly: on the first day, a panel on blanket licensing; on the second, a panel on music copyright registries. Perhaps the most remarkable aspect of these two panels was that digital music expert/ideologue Jim Griffin was on the latter panel, not the former.
Let me take a couple of steps back to explain why this is remarkable.
The treatment of music copyrights in most countries is a horrible mess. It is so complex as to be virtually incomprehensible to content creators — the people who need to understand it the most.
If you make a music recording, you have two sets of copyrights: one for the underlying composition (which could be someone else’s if you didn’t write the music), and another for the recorded performance of it. Each of those rights needs to be owned by, granted by law to, or licensed by entities such as record labels, distributors, service providers, and end-users. These rights are handled in various ways in the United States. Some are implicit copyright rights; some come from so-called statutory licenses that have been added to the copyright law; some result from ad-hoc license agreements; and some come through collecting societies (a/k/a PROs or Performing Rights Organizations) like ASCAP and BMI, which represent only those rights holders who sign up with them.
If you’re already confused, welcome to a very large club.
A few panelists at the FMC Summit — mainly law-professor types who habitually think in terms of concepts and idealism instead of practicalities and the real world — contemplated blowing up the entire system and starting from scratch. Others, such as the new Register of Copyrights, Maria Pallante, settled for “Sure it’s bad here in the US, but it’s worse elsewhere” arguments. Her predecessor, Marybeth Peters, was an advocate of streamlining the entire music licensing process so that content creators can come closer to “one-stop shopping,” as countries such as the UK have attempted.
There are two schools of thought on how to improve a system that, in the words of Gary Greenstein of the law firm Wilson Sonsini (who will also speak at Copyright and Technology 2011), exists primarily to preserve the many jobs that would be eliminated under a more streamlined system. One is to move to a comprehensive system of blanket licensing, i.e. forming entities that represent all music rights holders and license their works under fixed terms. Another is to use technology to measure all usages of copyrighted works and compensate rights holders accordingly.
These two schools of thought are not mutually exclusive. Automated measurement and compensation can work in a blanket or statutory licensing regime if the technology is pervasive and accurate enough. Yet blanket licensing usually works with compensation schemes derived from sampling (e.g., BMI requires radio stations to log the music they play for a couple of weeks each year) or levies (“copyright taxes” collected from makers of consumer electronics or blank recording media). These are blunt-instrument approaches which all but guarantee that “long tail” content creators will not be compensated fairly and that abuses will creep in.
The blunt-instrument school of thought has persisted for quite a while as a lowest common denominator that is at least practicable, even if it has outlived its usefulness. Yet recent developments have proved two important things: first, the blunt-instrument approach has serious limitations in the digital world, given the Byzantine nature of the underlying system; second, better alternatives not only exist but are exposing the inherent inadequacies of the blunt-instrument approach.
The better alternative that has emerged here in the States, according to the views of most FMC Policy Summit attendees, is SoundExchange. SoundExchange came into being in the early 2000s as the result of laws enacted in the late 90s that established “performance rights in sound recordings”; this meant that online music services had to pay royalties for playing recordings, not just for the underlying compositions. The latter royalties are administered by composers’ collecting societies like ASCAP and BMI. As the result of the new laws, online music services would have to pay performance royalties, though terrestrial broadcast radio would not. (See, I told you this was a confusing mess.)
SoundExchange requires online music services to collect data on the music they play, report the data, and pay royalties accordingly. (Small noncommercial webcasters are exempt from this process and only pay a small flat annual fee.) SoundExchange negotiates royalty rates for various types of digital music services (such as webcasters and satellite radio) through periodic rate-setting proceedings before panels of judges in Washington.
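The report-and-pay model just described amounts to simple aggregation: count every play, multiply by a negotiated rate. Here is a minimal sketch of that computation; the per-play rate and all track names are made up for illustration (real SoundExchange rates are set in rate proceedings and vary by service type).

```python
from collections import Counter

# Hypothetical per-performance rate, in dollars. Actual rates are
# negotiated per service category and are not reflected here.
RATE_PER_PLAY = 0.0021


def royalties_owed(play_log):
    """Aggregate a service's play log into per-recording royalties.

    play_log is a list of recording names, one entry per play.
    Returns a dict mapping each recording to the dollars owed.
    """
    plays = Counter(play_log)
    return {recording: round(n * RATE_PER_PLAY, 4)
            for recording, n in plays.items()}


log = ["Track A", "Track B", "Track A"]
print(royalties_owed(log))
```

The contrast with sampling-based schemes is clear even in this toy: every play in the log contributes to the payout, so a long-tail recording played once is still credited rather than being lost to a two-week sampling window.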
FMC Policy Summit attendees — who tend to be musicians, songwriters, or indie label people — see SoundExchange as a beacon of light in the darkness, an organization that gets musicians paid and does it with relative transparency and low overhead, at least compared to older organizations like ASCAP and BMI.
While SoundExchange has shown that automated, data-driven royalty compensation can be done, advocates of blanket licensing have run into a major snag: if you’re going to offer an online music service a blanket license to music, you have to offer it for “all music,” not just some of it; otherwise, what you’re offering is not going to be very helpful to the online music service. The problem is that offering a license to “all music” is just plain impossible, at least without an act of Congress like that which produced SoundExchange.
With this insight, naive and idealistic notions such as charging all ISP subscribers a monthly “music tax” that gets (somehow) distributed to rights holders go straight out the window. This is where we finally get back to Jim Griffin: blanket-licensing schemes such as Choruss, the business that Jim Griffin ran for Warner Music Group, are revealed to be the impossibilities they are.
Griffin, a battle-scarred veteran of the early days of digital music, had been an articulate blanket-licensing ideologue for years when WMG CEO Edgar Bronfman asked him to set up a blanket licensing business, which they called Choruss. Choruss failed about a year ago; as I explained at that time, the primary reason for its failure was that it couldn’t get licenses to anywhere near “all music.”
So Griffin has acknowledged the impossibility and moved on. He has turned his attention to an underlying problem that is even more complex and fundamental: the lack of a global registry of all music rights information that would be required to support any kind of comprehensive and fair licensing scheme. At the FMC Policy Summit, Griffin was on a panel on music rights data; he was talking about the International Music Registry (IMR), a project led by the World Intellectual Property Organization (WIPO). Griffin is one of over two dozen people from around the world working on the IMR.
IMR is adopting a federated approach to rights registries that acknowledges and leverages the existence of various “island” registries throughout the world and attempts to build a unifying layer on top of them. (One of these “islands” is the so-called Global Repertoire Database, which is initially focused on Europe.) This approach is analogous to the Digital Object Identifier (DOI) standard that I helped define in the publishing industry in the late 1990s: we wanted a copyright work identifier and registry that could coexist peacefully with various existing standards and registries such as ISBN for books, ISSN for journals, PII for other journals, URL for online resources, and so on. On the other hand, it differs from the Book Rights Registry contemplated in Google’s settlement with book publishers and authors, which would have been a single über-registry for all book content, at least in the United States.
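The federated, DOI-style architecture described above boils down to a thin routing layer over independent registries. The sketch below is purely illustrative: the registries, identifiers, and function names are invented, and real systems like DOI resolve via prefix rules and registration agencies rather than a simple lookup table.

```python
# Toy "island" registries, each keyed by its own identifier scheme.
# Contents are invented examples, not real records.
ISBN_REGISTRY = {"978-0-00-000000-2": {"title": "Example Book"}}
ISSN_REGISTRY = {"1234-5678": {"title": "Example Journal"}}

# The unifying layer knows which island registries exist; it does not
# replace them or copy their data.
ISLANDS = [
    ("ISBN", ISBN_REGISTRY),
    ("ISSN", ISSN_REGISTRY),
]


def federated_lookup(identifier):
    """Route an identifier to whichever island registry recognizes it.

    Returns (scheme, record) on a hit, or None if no registry
    recognizes the identifier.
    """
    for scheme, registry in ISLANDS:
        if identifier in registry:
            return scheme, registry[identifier]
    return None


print(federated_lookup("1234-5678"))
```

The design choice this illustrates is the one the post attributes to IMR: existing registries keep their own identifier schemes and data, and the unifying layer only adds a common way to ask questions across all of them, rather than forcing everything into a single über-registry.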
So that’s a long way of explaining what Jim Griffin was doing on the music registry panel instead of the blanket licensing panel at the FMC Policy Summit, and why that’s important. The rights registry problem is the right (no pun intended) one to be working on. If it can be solved, it would get us away from blunt-instrument schemes that encourage systemic abuses and favor big-name artists over the long tail, and it would facilitate content creators actually getting paid according to how much their music is played. It’s a problem that’s worth the monumental effort it will take to solve… if it’s even solvable at all. It will take years to find out one way or another, but it’s worth the journey.