The National Academies’ Workshop on Copyright in the Digital Age

The National Academies held a workshop last Friday at its headquarters in Washington as part of its efforts to launch a research program into copyright policy in the digital age.  A total of 17 invited presenters gave 10-minute talks followed by Q&A.  There were a few revelations and surprises among them.

The committee overseeing the research program sought input on what issues they should be addressing in their research.  My own presentation (audio available) identified two areas: the lack of understanding of costs and benefits of rights technologies, and the ambiguity inherent in US copyright law that makes it difficult for technology to decide whether uses of content are legal or not.

The biggest surprise among the presentations came from Cary Sherman, President of the RIAA, representing the music recording industry.  He called for the US to revert to a copyright system that requires registration in order to get the benefit of copyright protection.  The current system makes copyright protection automatic and requires registration only as a precondition to infringement litigation.  Automatic protection is a feature of the Berne Convention, an international copyright agreement that dates back to 1886 and which the US adopted in 1989.

Sherman’s call for “opt-in” copyright registration was a shocker, especially considering that Larry Lessig and other copyleft icons have been advocating this position for years.  Lessig’s rationale is that intellectual property protection should only be necessary for those who actually care enough to register their copyrights.

Other media industry representatives were at odds with Sherman’s newfound opt-in religion.  (Among other things, making this change in copyright law would put the US at odds with international copyright laws.)  Even the MPAA (represented by Fritz Attaway), whose movie-studio members routinely register their copyrights, was against this idea.

Why would the major record companies be interested in reverting to opt-in copyright registration?  Essentially for the same reason that Lessig is, but viewed from a slightly different angle: to make copyright the exclusive province of those who want it; to keep out the riff-raff, if you will.

The RIAA’s rationale is that the world is flooded with user-generated and indie content that overwhelmingly outnumbers the recording industry’s output; the vast majority of such content comes from people who aren’t interested in protecting copyrights or aware of the benefits of doing so.  If registration is made mandatory, then the likely outcome is that a much higher proportion of copyrighted music will come from major labels.

Frankly, I don’t see the point.  The only practical advantage of having a copyright in music in the digital age is to be able to sue for infringement.  There would be other advantages if there were an online database of copyrighted music works, analogous to the database for books that Google intends to fund as part of its settlement with publishers and authors.  With such a database, it would be possible, say, for a digital music service to restrict sharing of music tracks that have copyrights while allowing unlimited sharing of those not in the database.  But — as several presenters at the workshop noted — such a database does not exist.
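
To make that idea concrete, here is a minimal sketch of how such a check might work if the registry existed.  Everything in it is assumed for illustration: the registry contents, the track identifiers, and the policy fields are invented, precisely because no database of this kind is actually available.

```python
# Hypothetical sketch only: no registry of copyrighted music works exists
# today, so the registry contents, identifiers, and policy fields below are
# invented for illustration.

from dataclasses import dataclass


@dataclass
class SharingPolicy:
    allowed: bool   # may the service permit unlimited sharing of this track?
    reason: str     # human-readable explanation for the decision


# Stand-in for the missing registry, keyed by some stable track identifier
# (an ISRC, an audio fingerprint, or the like).
REGISTERED_WORKS = {
    "TRACK-REGISTERED-0001": "Example Records, registered 2009",
}


def sharing_policy_for(track_id: str) -> SharingPolicy:
    """Restrict sharing of registered works; allow unlimited sharing otherwise."""
    registration = REGISTERED_WORKS.get(track_id)
    if registration is None:
        return SharingPolicy(allowed=True, reason="not found in registry")
    return SharingPolicy(allowed=False, reason=f"registered work ({registration})")


if __name__ == "__main__":
    print(sharing_policy_for("TRACK-REGISTERED-0001"))  # sharing restricted
    print(sharing_policy_for("TRACK-INDIE-UGC-0042"))   # sharing allowed
```

The hard part, of course, is not the lookup but building and maintaining the registry itself, which is exactly the database the presenters said does not exist.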

Another surprise at the National Academies workshop was the antagonistic stances of scholarly publishers and academic researchers.  Presenters representing the American Chemical Society (John Ochs) and the Professional and Scholarly Publishers division of AAP (Bill Cook) made impassioned speeches about the need for stronger copyright protection, the devastating effect of piracy, the important roles their businesses play in disseminating scholarship, and the unfairness of open-access policies.  These hard-line presentations were like throwbacks to Jack Valenti speeches from a decade ago, while today’s MPAA and RIAA (see above) have moved on to more nuanced dialog and engagement with the technology community.

Meanwhile, Columbia University statistics professor Victoria Stodden bemoaned the restrictions that copyright laws place on free sharing of research data.  One of the research committee members hammered away at the scholarly publishers for being beneficiaries of the academic tenure system who don’t pay their authors.  The exchanges did not put the scholarly publishing community in a very positive light.

The National Academies project committee was mainly interested in finding sources of data that they could draw on for their research — either existing data or places from which to mine fresh data.  Their goals are extremely worthy, but some of the data they would like to get may be hard to come by.  For example, one of the common threads of the discussion was the “amateurization” of content creation and its effect on culture.  Do professionals create “better” content than nonprofessionals?  How do you measure quality — or should that be judged at all?

One of the law-professor types noted that any differentiation of treatment of content based on some notion of quality runs counter to the First Amendment.  But that wasn’t the point.  The point was whether it’s possible to measure the effect of the explosion of user-generated content on culture by assessing quality of the works that are available.  No one was able to identify a reliable measure.  In the end, the argument put forth by Derek Slater of Google that “one man’s trash is another man’s treasure” — which more or less defines the success of YouTube — was hard to refute.

Nevertheless, the research that the National Academies proposes to do should be very worthwhile.  In the prospectus for the research program, they state that one of the motivations for the research is to offer facts and economic principles to frame the debate on digital copyright, rather than the philosophical and emotional arguments that have largely framed it thus far.  I could not agree more.

3 comments

  1. Thanks again for a great post, Bill!

    For those of us who have been watching — and occasionally have been active in — this space over the past two decades, the idea that in 2010 a gathering might be entitled “Workshop on Copyright in the Digital Age” at first leaves us scratching our heads. And after following your live tweets last week and reading this account, my hair hurts…

    The focus of stakeholders should be on making the rights embodied in their works more easily transacted, which starts when they — the specific rights, not simply the subjects of those rights — have been unambiguously identified. The various stakeholder communities have over the past few years been developing metadata and identifier infrastructure that could make low-overhead, fine-grained rights transactions practical — if associated with accessible, reasonable pricing models.

    We’re increasingly seeing how a low-overhead transactional rights ecosystem might work as media companies adopt APIs to make “content” available. It’s not too much of a stretch to see how MovieLabs’ “Common Metadata” model, combined with robust identifiers, efficient transactions and standardized pricing, could lead to a revolution in “copyright.”

    Not only do the new models enable more fluid, in-place transactions, but the same models allow content owners to apply powerful analytical tools to usage streams. To some of us old farts, this is a refrain we first heard circa 1995; the difference between now and then is that the current methods are based on open protocols and accepted Web best practices.

    The media companies *know* how to sell content this way; the likes of Tribune Media Services (TMS), Rovi, even Hoovers have shown the value of e.g. keyed APIs and standard pricing in reducing the friction of copyright-based transactions for their ancillary content. Their core problem is that they haven’t acknowledged that they need to do it for their big-ticket items as well.
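
    To give a rough idea of the shape of such a transaction, here is a minimal sketch of a keyed, identifier-based rights lookup. The endpoint, the API key, and the response fields are illustrative assumptions, not any vendor’s actual API.

```python
# Illustrative sketch only: the endpoint, API key, and response fields are
# invented for this example and do not belong to any real rights service.

import requests

API_KEY = "demo-key"                        # hypothetical per-client key
BASE_URL = "https://rights.example.com/v1"  # hypothetical rights-metadata endpoint


def lookup_rights(work_id: str) -> dict:
    """Fetch rights and pricing metadata for a work by its standard identifier."""
    response = requests.get(
        f"{BASE_URL}/works/{work_id}/rights",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape: {"work_id": ..., "licenses": [...], "price_usd": ...}
    return response.json()


if __name__ == "__main__":
    print(lookup_rights("urn:example:feature-film:0001"))
```

    The point is not the few lines of HTTP; it is that a lookup like this only becomes possible once the work, and the specific rights in it, have stable, well-known identifiers.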

  2. Thanks John. First, I absolutely agree that standard identifiers and metadata are both the crux of the problem and the root of any decent solution. And I also remember talking about this in 1994 (beat ya by one year!) on the publishing committee that produced the DOI.

    On the one hand, the DOI’s impact has been limited — ironically enough, given my remarks about scholarly publishers in this post — to scholarly publishing, where it is very widely and rigorously used. On the other hand, the DOI was supposed to be the universal ID standard for all types of content. Clearly it isn’t. (I’ll append a small DOI lookup example at the end of this comment.)

    The root cause of the problem is very simple to state: it takes time, money, and effort to adopt standard identifiers where they need to be adopted, namely at content companies, but it’s extremely difficult to sell business managers at those companies on the ROI of adopting them. I recently finished a standard identifier project at a major publisher, and lemme tell ya, it was a very big deal to get buy-in on it.

    Meanwhile, metadata aggregators such as Rovi and TMS can only do so much if they are given incomplete and inconsistent data. I’ve seen this for myself as well.

    Yet behind these “engineering problems” lies a morass of licensing rules set up by law that just confuses the hell out of everyone, and entrenched interests benefit from that confusion. It’s a mess. That’s why I love what the National Academies are trying to do – they are trying to get real, irrefutable data.
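
    Coming back to the identifier point above: the DOI infrastructure already lets anyone resolve an identifier to bibliographic metadata. Here is a minimal sketch using the public Crossref REST API; the DOI shown is just a placeholder, so substitute a real Crossref-registered DOI to try it.

```python
# Small example of existing identifier infrastructure: resolving a DOI to its
# bibliographic metadata via the public Crossref REST API. The DOI below is a
# placeholder; substitute a real Crossref-registered DOI.

import requests


def fetch_doi_metadata(doi: str) -> dict:
    """Return Crossref's metadata record (title, publisher, etc.) for a DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    response.raise_for_status()
    return response.json()["message"]


if __name__ == "__main__":
    meta = fetch_doi_metadata("10.1000/xyz123")  # placeholder DOI
    print(meta.get("title"), meta.get("publisher"))
```

    Nothing comparable exists for, say, music tracks or video clips, which is exactly the gap being pointed at here.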

  3. Hi,

    Related to some of the topics in this post: the recently released EU document “The New Renaissance” promotes the idea that registration of works should be necessary for the exercise of copyright. Richard Posner, from the perspective of the economic analysis of law, argues the same. Nowadays both the cultural and the economic approaches encourage changing the Berne Convention. It is hard work, but it looks necessary.

    JP.
