This is the second of three installments on interesting developments from last week’s IDPF Digital Book conference in NYC.
Another interesting panel at the conference was on public libraries. I’ve written several times (here’s one example) about the difficulties that public libraries are having in licensing e-books from major trade publishers, given that publishers are not legally obligated to license their e-books for library lending on the same terms as for printed books — or at all. The major trade publishers have established different licensing models with various restrictions, such as limited durations (measured in years or number of loans), lack of access to frontlist (current) titles, and/or prices that range up to several times those charged to consumers.
The panel presented research findings that included hard data about how libraries drive book sales — data that libraries badly need in order to bolster their case that publishers should license material to them on reasonable terms.
As we learned from Rebecca Miller of Library Journal, public libraries in the US currently spend only 9% of their acquisition budgets on e-books — which amounts to about $100 million, or less than 3% of overall trade e-book revenue in the United States. Surely that percentage will increase, making e-book acquisition more and more important for the future of public libraries. And as e-books take up a larger portion of libraries’ acquisition budgets, the fact that libraries have little control over licensing terms will become a bigger and bigger problem for them.
The library community has issued a lot of rhetoric — including during that panel — about how important libraries are for book discovery. But publishers are ultimately swayed only by measurable revenue from sales of books that were driven by visits to or loans from libraries. They also want to know to what extent people don’t buy e-books because they can “borrow” them from libraries.
In that light, the library panel had one relevant statistic to offer, courtesy of a study done by my colleague Steve Paxhia for the Book Industry Study Group. The study found that 22% of library patrons ended up buying a book that they borrowed from the library at least once during the past year.
That’s quite a high number. Here’s how it works out to revenue for publishers: Given Pew Internet and American Life statistics about library usage (48% of the population visited libraries last year), and only counting people aged 18 years and up, it means that people bought about 25 million books last year after having borrowed them from libraries. Given that e-books made up 30% of book sales in unit volume last year and figuring an average retail price of $10, that’s $75 million in e-book sales directly attributable to library lending. The correct figure is probably higher, given that many library patrons discover books in ways other than borrowing them (e.g. browsing through them at the library) — though it may also be lower given that some people buy books in order to own physical objects (and thus the percentage of e-books purchased as a result of exposure in libraries may be lower than the corresponding percentage of print books).
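As a rough sketch of that arithmetic (the 235 million figure for the US adult population is an assumption; the post states only the percentages and the rounded results):

```python
# Back-of-the-envelope estimate of book purchases driven by library borrowing.
adults = 235e6                     # assumed US population aged 18+, not stated in the post
library_visitors = adults * 0.48   # Pew: 48% visited a library last year
buyers = library_visitors * 0.22   # BISG: 22% of patrons bought a book they had borrowed
# Treating each such patron as one purchase, as the post does:
print(f"books bought after borrowing: {buyers / 1e6:.0f} million")

ebook_share = 0.30                 # e-books' share of unit sales last year
avg_price = 10.0                   # assumed average retail price, in dollars
ebook_revenue = buyers * ebook_share * avg_price
# roughly $74 million, which the post rounds to $75 million
print(f"e-book revenue attributable to lending: ${ebook_revenue / 1e6:.0f} million")
```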
So, in rough numbers, it’s safe to say that for the $100 million that libraries spend on e-books per year, they deliver a similar amount again in sales through discovery. It’s just too bad that the study did not also measure how many people refrained from buying e-books because they could get them from public libraries. This would be an uncomfortable number to measure, but it would help lead to the truth about how public libraries help publishers sell books.
Update: Steve Paxhia found that his 22% figure measured library loans leading to purchases over a period of six months, not a year. And the survey respondents may have purchased books after borrowing them more than once during that period. His data also shows that half of respondents said they purchased other works by a given author after having borrowed one from the library. So, using the same rough formula as above, the value of purchases attributable to library usage is more likely north of $150 million. Yet we still have no indication of the number of times someone did not purchase a book — particularly an e-book — because it was available through a public library system.
National Academies Calls for Hard Data on Digital Copyright
February 4, 2014. Posted by Bill Rosenblatt in Economics, Law, United States.
About three years ago, the National Academies — the scientific advisers to the U.S. federal government — held hearings on copyright policy in the digital age. The intent of the project, of which the hearings were a part, was to gather input from a wide range of interested parties on the kinds of research that should be done to further our understanding of the effects of digital technologies on copyright.
The committee overseeing the project consisted of twelve people, including an economist specializing in digital content issues (Joel Waldfogel), a movie industry executive (Mitch Singer of Sony Pictures), a music technology expert (Paul Vidich, formerly of Warner Music Group), a federal judge with deep copyright experience (Marilyn Hall Patel of Napster fame), a library director (Michael Keller of Stanford University), a former director of Creative Commons (Molly van Houweling), and a few law professors. The committee was chaired by Bill Raduchel, a Harvard economics professor turned technology executive perhaps best known as Scott McNealy’s mentor at Sun Microsystems.
Recently the National Academies Press published the results of the project in the form of Copyright in the Digital Era: Building Evidence for Policy, which is available as a free e-book or $35 paperback. This 85-page booklet is, without exaggeration, the most important document in the field of copyright policy to be published in quite some time. It is the first substantive attempt to take the debate on copyright policy out of the realm of “copyright wars,” where polemics and emotions rule, into the realm of hard data.
The document starts by decrying the lack of data on which deliberations about copyright policy are based, especially compared to the mountains of data used to support changes to the patent system. It then goes on to describe various types of data that either exist or should be collected in order to fuel research that can finally tell us how copyright is faring in the digital era, with respect to its purpose of maximizing public availability of creative works through incentives to creators.
The questions that Copyright in the Digital Era poses are fundamentally important. They include issues of monetary and non-monetary motivations to content creators; the impact of sharply reduced distribution and transaction costs for digital compared to physical content; the costs and benefits of various copyright enforcement schemes; and the effects of US-specific legal constructs such as fair use, first sale, and the DMCA safe harbors. My own testimony at the hearings emphasized the need for research into the costs and benefits of rights technologies such as DRM, and I was pleased to see this reflected in the document.
Copyright in the Digital Era concludes with lists of types of data that the project committee members believe should be collected in order to facilitate research, as well as descriptions of the types of research that should be done and the challenges of collecting the needed data.
This document should be required reading for everyone involved in copyright policy. More than that, it should be seen as a gauntlet that has been thrown down to everyone involved in the so-called copyright wars. The National Academies have set the research agenda. Now that Congress has begun the long, arduous process of revamping America’s copyright law, we’ll see who is willing and able to fund the research and publish the results so that Congress gets the data it deserves.
Eight top Internet advertising networks will participate in a scheme for reducing ads that they place on pirate sites — websites that exist primarily to attract traffic by offering infringing content as well as counterfeit goods. The Best Practice Guidelines for Ad Networks to Address Piracy and Counterfeiting document, announced on July 15th, specifies a process modeled on the US copyright law’s notice-and-takedown regime, a/k/a DMCA 512: a copyright owner can send an ad network detailed information about websites on which it placed ads and that feature pirated material; then the ad network can decide to remove its ads from the site.
Although this scheme may result in some ads being pulled from obvious pirate sites, it has several major shortcomings. First of all, because this is a voluntary scheme, ad networks don’t risk legal liability for failing to comply with takedown notices, as they do under the DMCA.
So how will compliance be enforced? Consider this: the companies that have signed on to these guidelines are 24/7 Media, Adtegrity, AOL, Condé Nast, Google, Microsoft, SpotXchange, and Yahoo!. These companies have agreed to have the Interactive Advertising Bureau (IAB, the trade association for internet advertising) monitor them for compliance. The largest six of these eight companies have seats on the IAB board. In other words, this is rather like foxes agreeing to be monitored by the American Fox Association for compliance with henhouse guarding guidelines.
Secondly, the ad networks that are actually causing the most trouble aren’t involved. None of the ad networks listed as the top ten worst offenders in the latest (June) edition of the USC Annenberg Innovation Lab’s Advertising Transparency Report have signed on to the guidelines. Two of the eight that did sign on, Google and Yahoo!, were on the top 10 list when the Ad Transparency Report first came out in January but have come off since.
Another factoid: of the current worst offenders, only one (ZEDO) is a member of IAB at all. Ad network operators on that list with names like “Sumotorrent” are surely not going to be observing these guidelines.
This extralegal agreement between the ad networks and the major content industries follows a well-trodden Washington path: government threatens to regulate industries in order to curb bad behavior; the industry to be regulated responds with a set of “voluntary best practices”; these are just barely serious enough to get government to back off and the instigators of the regulation to at least admit that it’s better than nothing.
We’ve seen this game played before many times, such as when organizations that look out for children’s interests pushed for ways to filter Internet porn and obscenities, or countless attempts to fend off substantive online privacy laws (an area in which the IAB has been heavily involved).
The Best Practices for Ad Networks were announced by Victoria Espinel, President Obama’s intellectual property enforcement czar. That the document was actually written by the IAB is evident from its ample supply of equivocations and accountability dodges. While it’s tempting to go through it line by line and point all of these out (I’d rather see Chris Castle do this; he’s great at picking these things apart), I’d rather focus on the missed opportunity here: instead of modeling this scheme on DMCA 512, the ad networks should have agreed to link the scheme to that law. (The scheme does have a couple of minor links to the DMCA itself, but they are almost beside the point.)
Thanks to the Google Transparency Report and the USC Annenberg Innovation Lab, we have learned that DMCA takedown notices provide an excellent proxy metric of the most infringing websites. The Google Transparency Report shows the number of DMCA takedown notices that Google received per month for sites in its search results. The sites that exceed a certain threshold — say, 10,000 takedown notices per month — are all obvious pirate sites, while even the biggest mainstream consumer websites fall well below that threshold.
The Best Practices for Ad Networks could have made use of these statistics by tying ad takedowns to DMCA takedown notices received by the sites in question via the Google Transparency Report — instead of requiring copyright owners to generate an entirely new set of detailed notices under a voluntary regime with a Grand Canyon’s worth of wiggle room. In other words, the “best practice” could have been very simple: do not place ads on sites that generate more than X DMCA takedown notices per month. This would accomplish the mutually reinforcing goals of reducing ad revenue for pirate sites and reducing their access to the content itself.
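Reduced to code, such a rule would be a one-line check. In this sketch, the domain names and notice counts are hypothetical; only the 10,000-notices-per-month threshold comes from the Transparency Report discussion above:

```python
TAKEDOWN_THRESHOLD = 10_000  # DMCA notices per month; the "X" in the proposed rule

# Hypothetical monthly notice counts, as a transparency-report feed might supply them
monthly_takedowns = {
    "pirate-links.example": 45_000,
    "bigretailer.example": 12,
}

def may_place_ads(domain: str) -> bool:
    """Return True if an ad network may place ads on this domain under the sketched rule."""
    return monthly_takedowns.get(domain, 0) <= TAKEDOWN_THRESHOLD
```

A domain absent from the feed defaults to zero notices and is treated as eligible, which matches the rule's intent of targeting only heavily noticed sites.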
Even better would have been to establish a repository of DMCA takedown notices independent of the ones that Google collects for its search results. Anyone who sends a site a takedown notice could simply send a copy to the keeper of this repository, which could be a neutral third party agreed to by the content industries and the IAB, or the U.S. Copyright Office. I have suggested adopting a standard format for takedown notices, which would facilitate this process.
(Such a repository could be supported by fees from copyright owners who submit large numbers of takedown notices, so that individuals and small copyright owners wouldn’t have to pay, and as a deterrent of abuse. The fees would be much smaller than those charged by the piracy monitoring services that big media companies hire, which could submit the notices to the repository and bundle the fees into their own.)
The IAB generated the Best Practices for Ad Networks under pressure from Big Media lobbying groups as well as the White House. But neither of these entities put money in ad networks’ pockets; advertisers do. The result would undoubtedly have been stronger if actual advertisers had added their voices to the process, though none appear to have done so.
Musician David Lowery, a longtime fighter against ad-sponsored piracy, has identified a few large advertisers that have taken steps to mitigate their involvement with pirate sites, such as here (Starbucks, Costco, Walmart), here (Coca-Cola, Pepsi), and here (Levi’s), but these are few compared with the many brands that keep advertising on them. Lowery and others do their best to shame consumer brands into awareness over this issue, but the amount of real change will depend on how much that shame translates into real involvement. As always, it’s best to follow the money.
Yes, Piracy Does Cause Economic Harm
January 27, 2013. Posted by Bill Rosenblatt in Economics, Uncategorized.
Back in 2010, the Government Accountability Office (GAO) published a meta-study of the economic effects of intellectual property infringement (including counterfeit goods as well as copyrighted works). The GAO concluded that IP infringement is a problem for the economy, but it’s not possible to quantify the extent of the damage — and may never be. It looked at many existing studies and found bias or methodological problems in every one.
More recently, Michael Smith and Rahul Telang, two professors at Carnegie Mellon University, published another meta-study that serves as a sort of rejoinder to the GAO study. This was the subject of Prof. Smith’s talk at the recent Digital Book World (DBW) conference in NYC.
Assessing the Academic Literature Regarding the Impact of Media Piracy on Sales summarizes a growing body of studies on the economic effects of so-called media piracy. Their conclusion is that piracy does have a negative effect on revenue — if for no other reason than that the vast majority of studies say so.
Smith’s presentation at DBW listed no fewer than 29 studies on media piracy that take actual data into account (as opposed to merely theoretical papers such as this one). Of those, 25 found economic harm from piracy, while 4 didn’t. When the list is restricted to papers published in peer-reviewed academic journals, the ratio is similar: 12 found harm; 2 didn’t. Interestingly, almost half of the cited studies were published after the GAO’s 2010 report.
(When Smith and Telang’s paper was originally published last year, many discredited it instantly because the MPAA helped fund the research. Yet I take the researchers at their word when they say that the funding source had no effect on the outcomes — an assertion bolstered by the paper’s exclusion of the MPAA’s own study from 2006.)
The paper explains why some studies’ methodologies are better than others and discusses shortcomings in some of them, such as the Oberholzer-Gee & Strumpf paper from 2007, which found no harm to music sales from piracy via laptops/PCs and has therefore been widely cited among the copyleft.
It’s easy to poke holes in the methodologies of studies that have to rely on real-world data over which the researchers have little or no control. And as someone who wouldn’t know an “endogenous dependent variable” if one bit me in the face, I find it hard to look at criticisms of these studies’ methodologies and determine which ones to believe. Yet it’s obvious that any study on piracy must rely on real-world data in order to have any credibility at all.
Decisions about business and policy have to be made based on the best information we have available. After a certain point, simply poking holes in studies — particularly those whose results you don’t happen to like — isn’t sufficient.
It may indeed, as the GAO suggested, be impossible to measure the economic effects of piracy with much accuracy. But if dozens of researchers have tried, all using different methodologies, then their conclusions in the aggregate are the best we’re going to get. Put another way, it will henceforth be very difficult to dislodge Smith and Telang’s conclusion that piracy does economic harm to content creators.
Netherlands Rejects Ban on Illegal Downloads
December 21, 2012. Posted by Bill Rosenblatt in Economics, Europe, Law.
A lengthy political debate in the Netherlands has ended with the rejection of a law banning illegal downloads, as the Dutch parliament voted against it yesterday. This development paves the way for enactment of a private copying levy of up to €5 on devices such as PCs, smartphones, tablets, and set-top boxes.
Without a law against downloading infringing content, it will be impossible for the Netherlands to adopt the kind of graduated response scheme that France has implemented and that shows promising early results. Instead, the country will go down a path that has led to unfairness, confusion, and inaccuracies in compensating rights holders according to actual use of content.
Levies on consumer electronics have their origins in German taxes on photocopiers. Under EU copyright law, people have the right to make copies of content for their personal use, but rights holders have the right to be compensated for those copies. Levy schemes were enacted in order to compensate rights holders according to formulas for estimating the value of copies likely to be made by each owner of consumer electronics. These schemes vary widely from one country to the next and have been the source of unnecessary complexity in European content licensing as well as gray-market consumer electronics sales in high-levy countries.
(A notable exception to this is the UK, which has neither levies nor private copying rights, though the latter, at least, is about to change.)
The European Commission has been working for years to eliminate — or if not possible, at least harmonize — the unfathomable levy system in the EU. This step in the Netherlands works against the EU’s efforts. It is, admittedly, politically expedient: given the choice, politicians would rather be seen adding a tax onto consumer electronics purchases (thereby motivating Dutch people to drive the short distance to Luxembourg, where consumer electronics are levy-free) than passing a law that criminalizes online infringement. Media companies also find levies desirable because they create more stable and predictable revenue streams.
Yet this is a retrograde move. Levies are blunt, unfair instruments in an age where fairness and accuracy — at least relatively speaking — are available through technology. Everyone has to pay the same levy regardless of how many copies of files they make or whether those files are infringing or not. It’s not even clear whether the levy is meant to compensate rights holders for infringement or for private copies (of anything). It is especially disappointing to see levies spread in the home country of Europe’s leading authority on levy chaos, Prof. Bernt Hugenholtz of the University of Amsterdam.
The new levies are set to take effect in the new year. Yet this issue may not be resolved after all, as several makers of consumer electronics have filed suit against the Dutch government over the levies.
The Future of HADOPI
October 26, 2012. Posted by Bill Rosenblatt in Economics, Europe, Law.
A recently released report from the French government, Rapport sur les autorités publiques indépendantes (Report on the Independent Public Authorities), includes a section on HADOPI (Haute Autorité pour la diffusion des oeuvres et la protection des droits sur internet), the regulatory body set up to oversee France’s “graduated response” law for issuing warnings and potentially punishments to online copyright infringers.
The headline that most Anglophone writers took away from the 24 pages in this report that were devoted to HADOPI was “HADOPI’s budget to be cut by 23%.” These writers took their cues from anti-HADOPI statements by various French politicians — including new French President Francois Hollande — and mischaracterized a statement about HADOPI by the French culture minister, Aurélie Filippetti.
Unfortunately, none of these people appear to have actually read the government report. (Yes, it’s in French, but there is Google Translate. I used it.) HADOPI is not on the way out; not even close.
Let’s get the most obvious facts out of the way first. Yes, HADOPI’s operating budget is being cut from €10.3 million to €8 million, but its headcount is being increased (from 56.2 to 65.2 FTEs). Apparently the budget cut reflects the fact that HADOPI’s ramp-up period is coming to an end in 2012, with the focus shifting to increasing operational efficiency and cutting overhead. Moreover, HADOPI’s purview is being expanded to include video games as well as music and video content.
Another bit of factual cherry-picking in the Anglophone press: HADOPI has sent out more than a million warning emails but has prosecuted only 14 people and fined only one (less than €200), so it must be a big waste of money.
On the contrary: all of the data in the report, as well as the conclusions it draws, point to an agency whose successes are outnumbering its failures and whose mission is quite properly being optimized.
As it turns out, HADOPI has several objectives, not just issuing warning notices to illegal downloaders. Those other functions are where HADOPI does not look as successful as hoped. One objective is to increase the number of legal content offerings in France. To do this, it has put a labeling system into place, along with a website called PUR (Promotions des Usages Responsables, also an acronym for the French word for “pure”) that lists all of the labeled services. Although the report cites a sharp increase in the number of such services in France over the past year, that increase is surely attributable to market forces and is no different from similar increases in other countries.
Another of HADOPI’s objectives is to regulate the use of DRM technology according to rules derived from the European Union Copyright Directive of 2001. This means both ensuring that DRM systems don’t unduly restrict users’ rights to content and that DRM circumvention schemes (hacks) are prosecuted under the law. So far, HADOPI has only been asked to intervene in two DRM disputes concerning users’ rights, and both reviews are ongoing. This can’t be counted as a great success either.
Yet regarding HADOPI’s core “graduated response” function, the data in the report shows nothing but success so far. Fining people (a maximum of €1500) and suspending their Internet access (up to one month) is not the objective; reducing copyright infringement is. The number of people who have been fined or had their Internet access suspended is simply the wrong metric.
The good news is that HADOPI appears to be succeeding as an education program rather than as a punitive one. In 2011, HADOPI reports that fully 96% of people who received a first warning message did not receive a second one; this number stayed about the same in 2012. In addition, the percentage of people who received second notices but not third ones rose from 90% to 98% from 2011 to 2012. (The legal steps that could lead to fines or suspensions begin after the third notice.) To buttress this data, HADOPI has published results from four independent research reports that note significant decreases in illegal downloading in 2011. No one has substantively debunked any of these findings.
Furthermore, HADOPI does not simply take complaints from copyright owners — which monitor the Internet and submit complaints to HADOPI — at face value. For more than half of the users who received three warnings, HADOPI chose not to send the cases to French authorities for prosecution.
It is also interesting to note that the educational aspect of HADOPI appears to be succeeding despite the fact that it treats violations as misdemeanors, with small punishments, in contrast to the enormous criminal penalties associated with copyright infringement in France (as in the U.S.). This points to the conclusion that online education is more effective than large statutory damages in curbing infringement.
Now let’s talk about the economics. Ideally, this type of program would be funded by copyright holders — the ones with rights that they want protected. France is funding HADOPI with taxpayers’ money, although copyright owners do pay for the monitoring services that detect allegedly illegal downloads and report them to HADOPI.
At the same time, €8 million isn’t a bad deal. France currently has about 50 million overall Internet users and about 25 million fixed broadband subscribers. Let’s assume that the total number of French people who pay for Internet subscriptions is about 30 million. In that case, HADOPI’s annual budget could be apportioned as a levy of about €0.27 (US $0.35) per Internet subscriber. This is two orders of magnitude smaller than the £20 (US $32) annual antipiracy levy on ISP subscribers that the Digital Britain Report proposed for the UK in 2009.
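The per-subscriber arithmetic, using the post's own round numbers (the 30 million subscriber base is the post's rough assumption):

```python
budget_eur = 8e6       # HADOPI's reduced annual operating budget
subscribers = 30e6     # assumed paying French Internet subscribers
levy_eur = budget_eur / subscribers
print(f"per-subscriber cost: €{levy_eur:.2f}")  # €0.27

uk_levy_usd = 32.0     # Digital Britain's proposed £20 (about US $32) annual levy
fr_levy_usd = 0.35     # €0.27 at the exchange rate the post uses
# about 91x, i.e. roughly two orders of magnitude
print(f"UK/France ratio: {uk_levy_usd / fr_levy_usd:.0f}x")
```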
Furthermore, HADOPI measured the market impact of unauthorized downloading (not counting P2P) at €51 to €72.5 million annually. This figure can’t be taken as a measure of lost sales, but it does imply a worst-case break-even point for HADOPI’s cost-effectiveness: the agency pays for itself if just 16% of illegal downloads displace sales. (One study that attempted to measure promotional effects against sales displacement suggested that about two-thirds of illegal downloads displace sales.)
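The break-even calculation behind that 16% figure, taking the low end of HADOPI's market-impact estimate as the worst case:

```python
budget_eur = 8e6            # HADOPI's annual budget
market_impact_low = 51e6    # low end of HADOPI's estimate of unauthorized-download value
breakeven = budget_eur / market_impact_low
print(f"break-even displacement rate: {breakeven:.0%}")  # 16%
```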
It’s still too early to proclaim HADOPI’s success or failure. For example, the more determined infringers could move to ways of obtaining content that evade detection (e.g. HADOPI only deals with downloads and not streaming). But the signs are encouraging enough that the French government has decided to keep the experiment going.
(By the way, if you would like to argue with me about this, I will be in Paris from November 7 through 11, speaking at the SNE conference “Les Assises du livre numérique” on Thursday, November 8.)
The Shame Factor
August 30, 2012. Posted by Bill Rosenblatt in Economics, Music.
Larry Lessig’s first book, Code and Other Laws of Cyberspace, is a landmark work in many respects. One of the less-mentioned ones is his description, starting on p. 88, of the four forces that govern cyberspace (or any other environment that humans inhabit or interact with): the market (economics), architecture (technology), people’s behaviors (norms), and laws (self-explanatory). This insight is an infinitely powerful tool for evaluating the digital world and attempts to influence the way it works.
In the world of content, we can view attempts to enforce copyright and uphold the value of creative works through the framework of Lessig’s four forces. At one time or another, pro-copyright interests fight the battle on all four fronts: they support new business models that “compete with free” (the market), they try to implement technologies that limit what users can do with content or monitor cyberspace for copyright abuses (architecture), they try to educate consumers on behavior regarding copyrighted material (norms), and they litigate or lobby for stronger copyright protections (laws).
Most of what we talk about here is some combination of market, architecture, and legal factors. The norms front has been both uninteresting and ineffective: it consists mainly of copyright holders’ desires to “make it easy to do the right thing” (through arm’s-length licensing deals with third parties that have other agendas, e.g. profit) and preachy educational campaigns (through trade associations that no one trusts).
Now David Lowery, of “Open Letter to Emily White” fame, has come up with what might just be the first interesting twist on norms: shaming big businesses. In his blog The Trichordist, he has written a series of posts that all follow the same template: “[Musician with Artistic Cred] Exploited by [Name-Brand Companies]!!” The musical artists with critical/indie cred have included Peter Gabriel, Neko Case, Aimee Mann, Neil Young, Jared Leto, Talib Kweli, and Tom Waits; the name-brand companies have included Volkswagen, LG, Ford, Target, Macy’s, Levi’s, Wells Fargo, BMW, Toyota, American Express, AT&T, Wendy’s, and many others.
Here’s what Lowery is trying to accomplish. Torrent and file-sharing sites make money by selling ads that they show to people who come to those sites to download infringing music. The artists and songwriters make no money from these ads (unlike, say, on YouTube, which shares ad revenue in many cases). The companies that advertise don’t buy the ads themselves, of course; instead they are placed by online ad networks like ValueClick, Turn Media, 24/7 Real Media, AdBrite, Collective Network, Specific Media, and those run by Google, Yahoo, AOL, and Microsoft. Some ad networks buy ad inventory wholesale from other ad networks. In other words, the name-brand companies may not even know where their ads are being placed.
Many companies have policies with ad networks that their ads should not be placed on certain types of sites, including sites that offer infringing content (as well as porn, political extremism, etc.). This is analogous to traditional advertising, where companies tell media buyers where and where not to place ads in publications, on TV shows, and so on. The problem is that such policies often aren’t enforced — especially when multiple layers of ad networks sit between the advertiser and the site with the inventory.
Lowery’s objective is to generate negative publicity that will shame these companies into actually enforcing these policies, through audits and other measures, thereby starving the infringing sites of ad revenue. He constructs his posts in such a way as to appeal to journalists looking for sensationalist angles like “Hip/Not-Rich Artist Exploited!” (He isn’t complaining about exploitation of Lady Gaga or Jay-Z.)
I admire Lowery and his tactics. He’s trying to do what he can with the tools he has (e.g. no multi-million-dollar budget for lawyers or lobbyists) and to build on the momentum he generated in the firestorm following his Open Letter to Emily. Yet I had not been impressed with his emphasis on norms, or as his blog slogan has it, Artists for an Ethical Internet.
In general, people behave in economically rational ways. If there's a way to get something for free instead of paying for it, and the likelihood of getting caught is virtually zero, people will choose free. If your boyfriend offers to fill your iPod with several gigabytes of his favorite music, you'll take it and dive right in. Trying to change this behavior through appeals to "ethics" is akin to fund drives on public broadcasting: it might work for a small, affluent minority but is hardly enough to sustain creativity in general.
Yet ethics do have economic value to corporations with consumer brands. Bad PR can cost real money. No company brand manager wants another Apple/Foxconn type situation on his or her hands. Lowery has written to advertising departments of consumer product companies and gotten a couple of positive responses: thanks for bringing this to our attention, we will certainly clamp down on this in the future. To add oomph to his message, Lowery often points out that the sites that feature infringing material usually also have ads from companies that offer Ukrainian mail-order brides, porn, and other things with which mainstream consumer product companies probably don’t want to be associated.
It’s an interesting and innovative gambit. However, I have to wonder how effective it will be. So far, no journalists appear to have picked up on any of Lowery’s posts, even though he has been at this for a few weeks. Maybe he’ll have better luck after everyone returns from summer vacations, but he could use some help in getting the message out. (Hello, Future of Music Coalition??)
The economics behind Lowery’s approach are in line with those of the failed SOPA and PIPA legislation: focus on squelching the supply of infringing content by cutting off economic benefits to the suppliers. This is considered to be “low hanging fruit” because it does not directly affect consumer behavior. But it has a major limitation: squelching supply of infringing content is highly unlikely to affect demand for it. If people can’t get their free content from KickassTorrents or FilesTube, they’ll get it from places that don’t make ad revenue, of which there are plenty. The most serious long-term issue is the dwindling perceived value of content. Getting AT&T and Ford to pull their ads from TorrentReactor and IsoHunt won’t help solve this problem.
ADDENDUM: One of Lowery’s posts did get noticed on adland.tv, a site featuring insider-y discussion of advertising industry topics that appears to be frequented primarily by art directors, i.e. the creative types who make the ads, not the media buyers. The upshot of the discussion there is “how difficult it is to find a network where the buyer has control” over where ads are placed.
The Loweryquake
June 27, 2012. Posted by Bill Rosenblatt in Economics, Law, Music, Uncategorized.
David Lowery is a semi-legendary musician in one of techdom's most beloved genres, indie rock. He sits on Groupon's advisory board. He's neither a rich rock star nor a spokesman for the RIAA. As a university professor, he is more a beneficiary of what Larry Lessig calls "the academic patronage system" than of copyright. In other words, you'd expect David Lowery to be one for "sticking it to the man." Yet last week, he wrote a 3800-word masterpiece about the dire state of musical artists in the digital age and the moral compromises that got us there.
As everyone involved with music knows by now, Lowery's "Letter to Emily White" was originally occasioned by a blog post by an intern of that name at National Public Radio, who admitted to being a big music fan possessing 11,000 tracks of digital music but having paid for less than 2% of them (which puts her well below the generally accepted figure of 5%). The letter went viral online and got mentions in the New York Times as well as other major media and blogosphere outlets.
Paul Resnikoff in Digital Music News said it best, in perhaps the most cogent piece of analysis I’ve ever read from him:
Our digital innocence just died … after a decade of drunken digitalia, this is the hangover that finally throbs, is finally faced with Monday morning, finally stares in the mirror and admits there’s a problem. And condenses everything into a detailed ‘moment of clarity’.
Over the years, I have written occasionally about the “race to the bottom,” in which the price of content is tending inexorably towards zero. The massive amount of free and illegal content available now, coupled with legal content services’ needs to “compete with free,” has led to more and more legal content offers for less and less money. Emily White’s frank admission shows that, for a growing number of young people, the race to the bottom in music is over, and musicians and songwriters have lost.
I won't comment on Lowery's piece per se, except to recommend strongly that you read it. I will say, though, that as I read more of the posts on his blog, The Trichordist (by other authors as well as Lowery himself), I found some attitudes about intellectual property that struck me as a little extreme and/or ignorant in their own ways.
Instead, I want to focus on the range of comments people have posted about Lowery's Letter to Emily, particularly the negative ones. The Trichordist curates comments by hand (and has been "accused" of favoring positive comments heavily as it copes with comment volumes that are orders of magnitude higher than usual), but they have appeared unfiltered, by the thousands, on other sites.
Some of the negative comments are sober economic arguments that conclude with “This is just the way it is, and we can’t change it, so we all just have to adapt,” citing principles such as supply and demand, value migration, or cost of goods sold. While I disagree with the “we can’t change it” part, the economics are hard to argue with.
Yet the bulk of the negative comments are remarkable for their defensive attitudes, as expressed through smugness, arrogance, misinformation, rationalizations, and most telling of all, outright hostility towards Lowery. Many of them remind me of the rhetoric of right-wing political extremists when backed into a corner. Apart from the ad hominem attacks against Lowery, the negative comments fall roughly into the following buckets:
- Economic rationalization (record companies): The record companies rip artists off anyway. Lowery rips this one apart in his piece.
- Economic rationalization (artists): Musicians can make money touring instead. Ditto. (Did the people who wrote these comments actually read Lowery's piece?)
- Economic rationalization (users): Emily is just a poor young intern and isn’t able to pay for that music anyway. See below on the perceived value of music.
- Legal rationalization: What Emily did was “fair use.” When your prom date gives you a “present” of 15GB worth of digital music, it’s probably not fair use. (Of course, that this is even a question is a problem with fair use itself, but that’s another subject.)
- Terminological distractions: So-called piracy is not “stealing” because the original remains once you have copied it. As even TechDirt’s Mike Masnick points out, what you call it doesn’t matter; it’s copyright infringement, which is against the law.
- Exceptions that prove the rule: So-and-so has figured out how to thrive under the new system, so there must be ways to do it. This one is Masnick's spécialité de la maison. He seeks out these examples in order to encourage others to follow them. That's fine, but they continue to be few and far between.
- Market research cherry-picking: I saw a study that says that piracy actually benefits music sales and/or the RIAA/MPAA’s piracy studies are biased. Let’s agree that no study of the economic effects of copyright infringement is both methodologically unassailable and unbiased, and perhaps that the “real” effect may be unmeasurable. But if we’re going to cite studies, we should at least look at all of them instead of putting up strawmen for the purpose of knocking them down. I have looked at all of the studies (and not just those about music) and found that those that claim economic damage from infringement outweigh those that claim economic benefit by a wide margin, even when studies commissioned by the RIAA or MPAA are ignored.
I am also reminded of a conversation that took place at the Copyright and Technology conference last week in London. The eminent copyright litigator Andrew Bridges echoed the common copyleft refrain that “copyright infringement is not a problem” except perhaps that “some companies are losing money.” He also asserted that the sky-high statutory damages under United States law act as an effective deterrent to copyright infringement because they scare people.
I disagreed with both statements. The case of Emily White is the best counter-argument I could have made to both points if I had known about it at the time. For every Joel Tenenbaum or Jammie Thomas-Rasset who makes headlines getting nailed for copyright infringement (and getting Harvard Law professors to defend them), there are millions of Emily Whites who don’t, and millions more who have no idea about copyright infringement, let alone statutory damages.
However, none of these arguments addresses the real problem. The real problem is that the value that people perceive in music has virtually disappeared. As Jaron Lanier pointed out in his book You Are Not a Gadget and subsequent writings, there is a profound cost to society as the perceived value of original content goes to zero. And the cost goes well beyond questions of whether there is “enough creative content” if artists can’t make livings.
Lowery's Letter to Emily is more about morals and ethics than about the inherent value of content. The problem is that simply preaching ethics to people in order to get them to change their behavior doesn't work. At best, as Ben Sisario points out in the New York Times, this gets musicians to the status of charity recipients.
A more recent post on The Trichordist, by Lowery's Camper Van Beethoven bandmate Jonathan Segel, focuses exclusively on perceived value, after providing an illuminating history of musicians' compensation since Beethoven. Killer quote:
What is happening here seems to be a willful ignorance that the inherent value is still there, not being paid for in the distribution of additional copies. These same individuals would certainly make the claim that they are copying the music in order to listen to it … but are refusing to admit the relevance of the social contract that says that that inherent value is what is used in the exchange rate with monetary currency. I see this as a hypocrisy: either music has no value at all, (in which case why copy it to begin with?), or it has value and the copiers are refusing to admit that it does, simply because it is a copy.
Once this behavior becomes normal, i.e., standard practice for the Emily Whites of the world, the taint of hypocrisy disappears. Once that happens, concern over the value of content evaporates, and then so does the value itself.
The time for questioning whether or not this is a problem is over. The proper question is how to solve it.
UltraViolet Gets Two Lifelines
January 12, 2012. Posted by Bill Rosenblatt in Economics, Fingerprinting, Services, Standards, Video.
A panel at this week’s CES show in Las Vegas yielded two pieces of positive news for the DECE/UltraViolet standard, after a launch several months ago with Warner Bros. and its Flixster subsidiary that could charitably be called “premature.” Of the two news items, one is a nice to have, but the other is a game-changer.
Let’s get to the game-changer first: Amazon announced that a major Hollywood studio is licensing its content for UltraViolet distribution through the online retail giant. The Amazon executive didn’t name the studio, though many assume it’s Warner Bros. Even if it’s a single studio, the importance of this announcement to the likelihood of UltraViolet’s success in the market cannot be overstated.
Leaving aside UltraViolet’s initial technical glitches and shortage of available titles, the problem with UltraViolet from a market perspective had always been a lukewarm interest from online retailers. As I’ll explain, this hasn’t been a surprise, but Amazon’s new interest in UltraViolet could make all the difference.
UltraViolet is the "brand name" of a standard from a group called the Digital Entertainment Content Ecosystem (DECE), headed by Sony Pictures executive Mitch Singer. It implements a so-called rights locker for digital movies and other video content. Users can establish UltraViolet accounts for themselves and family members. Then they can obtain a movie in one format (say, Blu-ray) and be entitled to get it in other formats for other devices (say, a Windows Media file download for PCs). They can also stream the content to a web browser anywhere. The rights locker, managed by Neustar Inc., tracks each user's purchases.
In other words, UltraViolet promises users format independence and a hedge against format obsolescence, while providing some protection for the content by requiring it to be packaged in several approved DRM and stream encryption schemes. It includes a few limitations on the number of devices and family members that can be associated with a single UltraViolet account, but in general UltraViolet is designed to make video content more portable and interoperable than, say, DVDs or iTunes downloads.
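To make the rights-locker idea concrete, here is a minimal sketch in Python of how such a locker might record purchases by title (not by format) and grant delivery to any registered device. All of the names, the device cap, and the API shape here are hypothetical illustrations, not UltraViolet's actual (Neustar-operated) interfaces.

```python
# Hypothetical sketch of an UltraViolet-style rights locker.
# A purchase is recorded once, per title; any approved format can
# then be delivered to any device registered to the account.

MAX_DEVICES = 12  # illustrative per-account device cap

class RightsLocker:
    def __init__(self):
        self.purchases = {}  # account_id -> set of title_ids
        self.devices = {}    # account_id -> set of device_ids

    def record_purchase(self, account_id, title_id):
        """Called by any participating retailer after a sale."""
        self.purchases.setdefault(account_id, set()).add(title_id)

    def register_device(self, account_id, device_id):
        """Associate a playback device with an account, up to the cap."""
        devices = self.devices.setdefault(account_id, set())
        if device_id not in devices and len(devices) >= MAX_DEVICES:
            raise ValueError("device limit reached for this account")
        devices.add(device_id)

    def can_deliver(self, account_id, title_id, device_id):
        """Any registered device may receive any owned title, in any format."""
        return (title_id in self.purchases.get(account_id, set())
                and device_id in self.devices.get(account_id, set()))
```

Note that the locker itself knows nothing about which retailer made the sale; that separation is what allows any participating retailer to serve any locker, which is the portability point discussed below.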
Five of the six major Hollywood studios (all but Disney*), plus the “major indie” Lionsgate, are participating in UltraViolet.
One of the design goals of UltraViolet was to ensure that no single retailer could attain a market share large enough to be able to control downstream economics — in other words, to avoid a replay of Apple’s dominance of digital music downloads (and possibly Amazon’s dominance of e-books). To do this, the DECE studios pushed for ways to thwart consumer lock-in by online retailers that would sell UltraViolet content.
The most important example of this is rights locker portability: users can access their rights lockers from any participating retailer. UltraViolet retailers must compete with each other through value-added features.
Amazon’s Kindle e-book scheme offers a good illustration of platform lock-in and how it differs from other features that a retailer can build or offer. If you buy an e-book on Amazon, you can download and read it on a wide variety of devices: not just Kindle e-readers but also iPads, iPhones, Android devices, BlackBerrys, PCs, and Macs — in other words, pretty much everything but other e-reader devices. You get e-book portability — it will even remember where you last left off if you resume reading an e-book on another device — but you are still tied to Amazon as a retailer. If you want to read the same e-book on a Nook, for example, you have to buy it separately from Barnes & Noble (and then you can read that e-book on your PC, Mac, iPhone, Android, etc.).
This lock-in gives Amazon power in the market as a retailer; it had 58% market share as of February 2011 (by comparison, Apple has over 70% of the music download market). UltraViolet wants to make it as difficult as possible for a single digital video retailer to assert such market power.
The downside of that policy has been a lack of enthusiasm among retailers to sell UltraViolet-licensed content — which entails significant development investment and operational expenses. A good shorthand way to evaluate the potential impact of a standards initiative is to look at the list of participants: what points in the value chain are represented, how many of the top companies in each category, and so on. In DECE’s case, members have included most of the major movie studios, plenty of consumer device makers, lots of DRM and conditional access technology vendors, and so on, but few big-name retailers… one of which (Best Buy) already had a different system for delivering digital video content via Sonic Solutions.
Warner Bros. tried to jump-start the UltraViolet ecosystem by acquiring Flixster, a movie-oriented social networking startup, adding digital video e-commerce capability, and using it as an UltraViolet retailer for a handful of Warner titles. This has been little more than a proof-of-concept test, which was plagued by some technical glitches and suboptimal user experience — all of which, according to Singer, have been fixed.
It would be unworkable for Hollywood to pin its hopes for its next big digital format on a small unknown retailer owned by one of the studios. It has been vitally necessary to attract a big-name retailer to both validate the concept and provide the necessary marketing and infrastructure footprints. There had been talk of Wal-Mart entering the UltraViolet ecosystem, although it already has its own video delivery scheme through VUDU. But otherwise, the membership list had been short on major retailers.
Of course, Amazon is the major-est online retailer of them all. And it so happens that Amazon's digital video strategy is a good fit for UltraViolet in two ways. First, Amazon currently runs a streaming service (Amazon Instant Video), whereas UltraViolet is primarily focused on downloads, a/k/a Electronic Sell-Through (EST): the idea of UltraViolet is that you buy a download and only then become entitled to view the title via streaming. The two offerings thus complement rather than cannibalize each other.
Second, Amazon Instant Video does not look particularly successful. Of course, Amazon does not reveal user numbers, but it is telling that Amazon included Instant Video Unlimited as a perk in its US $79/year Amazon Prime program… and that when people extol the virtues of Amazon Prime, they tend to emphasize the free overnight shipping but rarely the streaming video.
The biggest winner thus far in the paid online video sweepstakes is Netflix, with about 24 million subscribers as of mid-2011. Netflix’s subscription-on-demand model is most likely far more popular than Amazon Instant Video’s pay-per-view (except for Amazon Prime members) model. Thus Amazon may be looking for ways to improve its market position in video without having to hack away at the Netflix streaming juggernaut.
The video download market is in comparative infancy. It has no runaway market leader a la Netflix, or Apple in music. If this situation persists long enough, and if Amazon’s trial run with UltraViolet is successful, then other retailers might see UltraViolet as a viable format as well… precisely because it will make them better able to compete with the Online Retailing Gorilla.
Yet the other dimension of UltraViolet that is currently lacking is availability of titles. And that’s where the other CES announcement comes in. Samsung announced a “Disc to Digital” feature that it will incorporate into new Blu-ray players later this year. With this feature, users can slide in their Blu-ray discs or DVDs, and if the content is “eligible,” they can choose to have that content available in their UltraViolet rights lockers for delivery in any UltraViolet-compliant format.
The Disc to Digital feature is a collaboration between Flixster (i.e. Warner Bros.) as online retailer and Rovi as technology supplier. It works in a manner that is analogous to “scan and match” services for music such as Apple iTunes Match: it scans your DVD or Blu-ray disc, identifies the movie, and if the movie is available in the UltraViolet library of licensed content, gives you an UltraViolet rights locker entry for that movie. Rovi’s content identification technology and metadata library are undoubtedly at the heart of this scheme.
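The scan-and-match flow is simple to sketch. In the Python sketch below, the fingerprinting step, the eligible-title catalog, and the fee amounts are all hypothetical stand-ins; Rovi's actual identification technology and the real fee schedule have not been disclosed.

```python
# Hypothetical sketch of a "Disc to Digital" scan-and-match flow.
# The fingerprint keys stand in for Rovi-style content identification;
# catalog contents and fees are illustrative only.

ELIGIBLE_CATALOG = {
    "fp-casablanca": "Casablanca",
    "fp-inception": "Inception",
}

FEE_SD = 2.00  # "nominal" per-disc fee (actual amount unannounced)
FEE_HD = 5.00  # higher fee for high definition (also unannounced)

def disc_to_digital(disc_fingerprint, locker_entries, want_hd=False):
    """Identify the disc; if eligible, add the title to the user's
    rights locker and return the fee charged (None if ineligible)."""
    title = ELIGIBLE_CATALOG.get(disc_fingerprint)
    if title is None:
        return None  # disc is not "eligible" for conversion
    locker_entries.add(title)
    return FEE_HD if want_hd else FEE_SD
```

The key design point is that the disc is never ripped: identification alone unlocks a locker entry for a copy the studio has already licensed into the system.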
There are two catches: first, users will have to pay a "nominal" fee per disc for this service, and a higher (as yet unspecified) fee if they want the content in high definition; second, the service is limited to "eligible" content, and no one has offered a definition of "eligible" yet (beyond the fact that the content must come from one of the DECE participating studios). But surely the "eligible" catalog will exceed the current list (19 titles) by orders of magnitude, or the service will not be worth launching.
Nevertheless, these developments are very positive news for DECE/UltraViolet after months of embarrassments and bad press. DECE still has lots of work to do to make UltraViolet successful enough to be the major studios’ designated successor to Blu-ray, but at last it’s on track.
*Yes, I'm aware of the irony of using a tag line from "Who Wants to Be a Millionaire" in the title of this article: Disney owns the home entertainment distribution rights to that hit TV game show.
ReDigi Gets RIAA Nastygram
November 15, 2011. Posted by Bill Rosenblatt in Economics, Law, Music, Services, United States.
Last week the RIAA issued a cease-and-desist letter to a music startup called ReDigi, which has been attempting to create a market for “used” digital music files. It allows users to sell their music files for prices below those of “new” files on iTunes or Amazon, and gives a portion of the proceeds to record labels. (It does not have licenses from the labels to do this.)
I had been following ReDigi since it attracted notice on the tech blogs with its beta release a month ago, and I consulted a couple of copyright law experts about the legality of what it is doing. Based on the results of my research, the RIAA's actions towards ReDigi were about as surprising to me as an announcement that the sun will rise tomorrow morning.
Who were the “legal experts” that ReDigi claims told it that what it does is within the law? What investors were credulous or rash enough to finance this venture? Or did everyone involved do this just to try to make a point? Regardless of the motivation, ReDigi’s legally embattled state has been a foregone conclusion.
ReDigi purports to implement something called Digital First Sale. The First Sale Doctrine (a/k/a Section 109 of the U.S. copyright law, and known as Exhaustion in most other countries) says that if you obtain a copy of a copyrighted work legally, you can do as you wish with it – keep it, lend it, sell it, give it away, use it to line a birdcage – as long as you obtained it legally and you don’t do anything with it that infringes copyright law, such as make unauthorized copies.
The issue is that this law was designed to apply to physical goods; no one is quite sure about its applicability to piles of bits. The U.S. Copyright Office was asked for an opinion on Digital First Sale a decade ago. The Office stated that Digital First Sale would require a complex technical mechanism that ensured that once you gave your copy of a file to someone else (whether for money or not; whether permanently or not), you had no further access to the file. The technical shorthand for such a mechanism is “forward and delete.” The Office opined that such a mechanism might be feasible at some point in the future but wasn’t then, so it declined to endorse the concept of Digital First Sale.
ReDigi claims to have implemented a robust forward-and-delete mechanism. It uses acoustic fingerprinting from Gracenote to ensure that once a user has sold a file, the same song no longer exists on the user’s PC or iPod. There are ways to hack the system, but that’s somewhat beside the point.
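A "forward and delete" mechanism can be sketched simply: before a sale completes, fingerprint the file being sold, transfer it to the buyer, and purge every local copy that matches the same song. In the sketch below, the `fingerprint` function is a hypothetical stand-in for acoustic identification of the kind Gracenote provides; a real system matches the audio content itself, not raw bytes.

```python
# Hypothetical sketch of a forward-and-delete resale mechanism.
# fingerprint() stands in for acoustic identification (e.g. Gracenote);
# a real implementation matches the recording regardless of file name
# or encoding, not the literal bytes as done here for illustration.

def fingerprint(file_bytes):
    return hash(file_bytes)  # illustrative stand-in only

def sell_file(seller_library, filename, buyer_library):
    """Transfer one file to the buyer, then delete every copy of the
    same song from the seller's library, however it is named. That
    purge step is the crux of "forward and delete"."""
    sold_fp = fingerprint(seller_library[filename])
    buyer_library[filename] = seller_library[filename]
    matches = [name for name, data in seller_library.items()
               if fingerprint(data) == sold_fp]
    for name in matches:
        del seller_library[name]
```

As the post notes, a determined user can defeat any client-side version of this (e.g. by keeping copies on media the software never scans), which is why the Copyright Office treated the mechanism's feasibility with skepticism.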
Digital First Sale remains very much unsettled law, even according to copyleft legal scholars, such as Jason Schultz of Berkeley (formerly of the Electronic Frontier Foundation), who would generally like to see Digital First Sale become reality.
But wait a minute: download stores' terms of service typically characterize purchases as licenses and forbid transferring the files. If the terms of service forbid users from doing something that copyright law allows, which one prevails? Apparently that's an unsettled question as well, according to both a senior legal authority at the Copyright Office and one of America's leading copyright litigators. The latter told me "the ink is not dry" on this area of copyright law.
Yet one thing is very clear: Digital First Sale scares the media industry to death. Think about it: if anyone could resell their digital content at any price, then ReDigi would only be the beginning. There would be many competing content-resale marketplaces. People could auction their "used" files on eBay. People could "donate" them to public libraries with virtually no cost or effort, and get a tax deduction for a charitable donation. All perfectly legal. The result of this would be a rapid acceleration of what I have called the race to the bottom: the price of legal content would drop to near its cost of copying and distribution, i.e., virtually nothing. Furthermore, the major copyright owners would lose a lot of control over distribution; for example, Hollywood studios' release windows would become virtually meaningless.
It’s also evident that the media industry would much rather nip this trend in the bud than endure years of litigation with uncertain outcomes. Even attempting to negotiate a license with a service like ReDigi would imply some comfort with Digital First Sale at a conceptual level, which is something that the media industry would surely want to avoid. Thus the RIAA’s actions against ReDigi come as no surprise.
The RIAA’s “nastygram” points to file copying that must take place in order for ReDigi’s system to work as evidence of copyright infringement, even though, of course, that’s not the real issue here. Other litigation concerning Digital First Sale, such as Vernor v. Autodesk (commercial software), is working its way through the courts. Whatever happens with Digital First Sale, the law will take years to reach clarity — and until then, services like ReDigi will continue to be in limbo.
Incidentally, Digital First Sale is going to be a topic at our Copyright and Technology conference the week after next (Wednesday, November 30). We will have legal experts on this topic as well as Paul Sweazey of the IEEE 1817 standards initiative, which is another attempt to implement something approximating Digital First Sale. The discounted registration offer I made last week still stands.