In this part of our annual year-end review, I’ll look back at legal developments related to digital copyright over the past year. The most interesting development was the enactment of so-called progressive response laws in Singapore, South Korea, and France. Under these laws, users who upload content to the Internet illegally get progressively stronger warnings followed by suspensions or terminations of their Internet accounts.
France enacted its progressive response law after much controversy. The original intent of the legislation was to set up a government-run authority that would automatically suspend or terminate repeat offenders’ accounts. But this provision was ruled unconstitutional as a violation of due process and the presumption of innocence. The law that ultimately passed dispensed with the automation and instead refers repeat offenders to (human) judges, thereby ensuring that only the most egregious cases will be heard.
Nevertheless, the law has inspired other countries to follow suit. Both Singapore and South Korea report that their laws have reduced unauthorized content usage.
The UK could be next. The possibility of progressive response legislation was mentioned in the Queen’s Speech last November, though my colleague Bill Jones believes that actual legislation won’t come to pass until at least 2011.
In the United States, the media industry has been holding off on pressing Congress for stricter piracy control measures. Congress, after all, has been preoccupied with healthcare reform, the sluggish economy, various wars, and terrorism.
Instead, the most interesting US action in 2009 was in the form of litigation. Universal Music Group lost its district court lawsuit against video-sharing site Veoh for secondary copyright infringement, a decision that the music giant is expected to appeal. The massive infringement litigation between Viacom and Google/YouTube is only just getting out of its discovery phase (the early steps of fact-gathering and deposition-taking).
Both of these cases could determine whether online service providers must take more responsibility for copyright infringement on their services. That will require changes in the law that lower courts are unlikely to make; the real action will come at the appeals level, or even the Supreme Court. In other words, expect years to pass before any definitive results emerge.
Meanwhile, Google’s settlement with the book publishing industry and book authors is still waiting for the judge’s approval, which many expect to be granted in the coming year. The major bone of contention between the settling parties and those who object to the settlement is control over orphaned works — books for which no copyright owner has stepped forward.
It’s easy to see why the publishing industry would not object to Google exerting sole control over orphaned works, and Google may even believe it is doing the world a favor by digitizing them and making them accessible online. But several entities stepped forward to object, including advocates of a robust public domain as well as those whose motives are more opportunistic or ulterior (think Microsoft).
Google is likely to relent on orphaned works; many of the remaining objections are related to payment terms for publishers and authors.
The next step beyond settlement approval is the establishment of a Book Rights Registry (BRR) that will make data about scanned books, and their rights, available to whatever service provider wants them. As part of the settlement agreement, Google is paying US $35 million to build the BRR, which will be run by staff appointed and overseen by publishers and authors. This could take years. The settlement agreement is (understandably) short on details about how the BRR will be designed or operate, but let’s just say there are ways to do it “quick and dirty” and ways to do it right.
The final interesting area related to technology and copyright law is the simmering dispute between news publishers and search engines. News publishers are concerned that search engines are “free riding” on their content by letting users view it in search results instead of on news publishers’ own sites (particularly those behind a pay wall, such as the Wall Street Journal and Financial Times).
News publishers tried to address this problem by defining a technical standard called ACAP (Automated Content Access Protocol), which lets publishers specify rules for how search engines may index their content and display it in search results. ACAP is a set of HTML metatags that specify things like the duration of a search engine’s right to index the content (say, a week after publication) or how it may show the content in search results (say, only as a snippet or thumbnail). It’s not a DRM scheme; search engines can simply ignore the tags.
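To make the “ignorable metatags” point concrete, here is a minimal sketch of how a cooperating crawler might read ACAP-style rules from a page. The tag names (`acap-index-until`, `acap-display`) are invented for illustration and do not reflect the actual ACAP vocabulary; a non-cooperating crawler would simply never run code like this.

```python
from html.parser import HTMLParser

class AcapMetaParser(HTMLParser):
    """Collect <meta> tags whose name starts with a hypothetical 'acap-' prefix."""
    def __init__(self):
        super().__init__()
        self.rules = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr = dict(attrs)
        name = attr.get("name") or ""
        if name.startswith("acap-"):
            self.rules[name] = attr.get("content") or ""

# Invented sample page: index for a limited time, show only a snippet.
page = """<html><head>
<meta name="acap-index-until" content="2010-01-08">
<meta name="acap-display" content="snippet">
</head><body>Story text here.</body></html>"""

parser = AcapMetaParser()
parser.feed(page)
print(parser.rules)
```

The key design point is that enforcement is entirely on the honor system: the rules exist only for crawlers that choose to parse and obey them.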
It’s now evident that search engines have no intention of adopting ACAP. It’s not hard to understand why: there’s little incentive for them to do so.
Instead, the Associated Press announced hNews, its own “microformat” standard for tagging news content. In addition to tagging content, hNews tags also serve as “beacons” for web-crawling technology that can detect AP stories where they aren’t licensed to be, such as on blogs or splogs (spam blogs).
hNews is intended to solve a slightly different problem than ACAP, but it should have a better future, for the simple reason that it gives search engines (and other content aggregators) an incentive to use it: hNews carries metadata about news content that should actually help search engines and aggregators do what they want to do. And it’s based on open standards related to Creative Commons, which gives it a world of potential applicability to the oceans of user-generated content out there.
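Because hNews is a microformat, its metadata rides along in ordinary HTML `class` attributes, which is exactly what makes it easy for aggregators to consume. The sketch below, with invented sample markup, shows how a crawler might pull out a few fields; the class names (`entry-title`, `source-org`, `dateline`) follow hNews/hAtom conventions, though the full format defines more properties than shown here.

```python
from html.parser import HTMLParser

class HNewsParser(HTMLParser):
    """Extract a few hNews-style fields from class-annotated HTML."""
    FIELDS = {"entry-title", "source-org", "dateline"}

    def __init__(self):
        super().__init__()
        self.fields = {}
        self._pending = None  # field name awaiting its text content

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        for cls in classes:
            if cls in self.FIELDS:
                self._pending = cls

    def handle_data(self, data):
        text = data.strip()
        if self._pending and text:
            self.fields[self._pending] = text
            self._pending = None

# Invented sample article marked up with hNews-style classes.
article = """<div class="hnews hentry">
  <h1 class="entry-title">Example headline</h1>
  <span class="source-org vcard"><span class="org">Example Wire Service</span></span>
  <span class="dateline">SPRINGFIELD</span>
</div>"""

parser = HNewsParser()
parser.feed(article)
print(parser.fields)
```

The same class names, found verbatim on an unlicensed page, are what let the “beacon” detection described above work with no special infrastructure beyond an ordinary crawl.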
The AP’s hNews initiative should dovetail well with the news industry’s renewed interest in direct monetization of content, now that online ad revenues have dried up. As the AP attempts to move the dialog between newsgatherers and search engines from lawsuits to widely applicable business models based on content discovery and licensing, it should help preserve the news industry’s viability over the long term.