Registration for Copyright and Technology London 2014 is now live. An early-bird discount is in place through August 8. Space is limited and we came close to filling the rooms last time, so please register today!
I am particularly excited about our two keynote speakers — two of the most important copyright policy officials in the European Union and United States respectively. Maria Martin-Prat will discuss efforts to harmonize aspects of copyright law throughout the 28 EU Member States, while Shira Perlmutter will provide an update on the long process that the US has started to revise its copyright law.
We have made one change to the Law and Policy track in the afternoon: we’ve added a panel called The Cloudy Future of Private Copying. This panel will deal with controversies in the already complex and often confusing world of laws in Europe that allow consumers to make copies of lawfully obtained content for personal use.
The right of private copying throughout Europe was established in the European Union Copyright Directive of 2001, but the EU Member States’ implementations of private copying vary widely — as do the levies that makers of consumer electronics and blank media have to pay to copyright collecting societies in many countries on the presumption that consumers will make private copies of copyrighted material. Private copying was originally intended to apply to such straightforward scenarios as photocopying of text materials or taping vinyl albums onto cassette. But nowadays, cloud storage services, cyberlockers, and “cloud sync” services for music files — some of which allow streaming from the cloud or access to content by users other than those who uploaded the content — are raising new questions about where the boundaries of private copying lie.
The result is a growing amount of controversy among collecting societies, consumer electronics makers, retailers, and others; meanwhile the European Commission is seeking ways to harmonize the laws across Member States amid rapid technological change. Our panel will discuss these issues and consider whether there’s a rational way forward.
We have slots open for a chair and speakers on this panel; I will accept proposals through July 31. Please email your proposal(s) with the following information:
- Speaker’s name and full contact information
- Chair or speaker request?
- Description of speaker’s experience or point of view on the panel subject
- Brief narrative bio of speaker
- Contact info of representative, if different from speaker*
Finally, back over here across the Atlantic, I’ll note an interesting new development in the Aereo case that hasn’t gotten much press since the Supreme Court decision in the case a couple of weeks ago. Aereo had claimed that it had “bet the farm” on a court ruling that its service was legal and that “there is no Plan B,” implying that it didn’t have the money to pay for licenses with television networks. Various commentators have noted that Aereo wasn’t going to have much leverage in any such negotiations anyway.
As a result of the decision, Aereo has changed tactics. In the Supreme Court’s ruling, Justice Breyer stated that Aereo resembled a cable TV provider and therefore could not offer access to television networks’ content without a license. Now, in a filing with the New York district court that first heard the case, Aereo is claiming that it should be entitled to the statutory license for cable TV operators under section 111 of the copyright law, with royalty rates that are spelled out in 17 U.S.C. § 111(d)(1).
In essence, Aereo is attempting to rely on the court for its negotiating leverage, and it has apparently decided that it can become a profitable business even if it has to pay the fees under that statutory license. Has Barry Diller — or another investor — stepped in with the promise of more cash to keep the company afloat? Regardless, in pursuing this tactic, Aereo is simply following the well-worn path of working litigation into a negotiation for a license to intellectual property.
*Please note that personal confirmation from speakers themselves is required before we will put them on the program.
Supreme Court’s Aereo Decision Clouds the Future. July 3, 2014. Posted by Bill Rosenblatt in Law, United States, Video.
The Supreme Court has rendered various decisions that serve as rules of the road for the treatment of copyrighted works amid technological innovation. Sony v. Universal (1984) established the legality of “time shifting” video for personal viewing as well as the “substantial noninfringing uses” standard for new media technologies. MGM v. Grokster (2005) took the concept of “inducing infringement” from patent law and applied it to copyright, so that services that directly and explicitly benefit from users’ infringement could be held liable. UMG v. Veoh (2011) taught that network service operators have no duty to proactively police their services for users’ infringements. These rulings are reasonably clear signposts that technologists can follow when contemplating new products and services.
Unfortunately, Justice Stephen Breyer’s opinion last week in ABC v. Aereo won’t be joining that list. Writing for a 6-3 majority that united the Court’s liberals and moderates, he ruled against Aereo. Justice Antonin Scalia’s forceful dissent described the problems that this decision will create for services in the future.
Several weeks ago, at the Copyright Clearance Center’s OnCopyright conference in NYC, Rick Cotton — former General Counsel of NBC Universal — predicted that the Supreme Court would come down against Aereo in a narrow decision that would avoid impact on other technologies. He got it right in terms of what Justice Breyer may have hoped to accomplish, but not in terms of what’s likely to happen in the future.
Instead of establishing principles that future technology designers can rely on, the Court simply took a law that was enacted almost 40 years ago to apply to an old technology, determined that Aereo resembles that old technology, and concluded that therefore the law should apply to it. The old technology in question is Community Access Television (CATV) — transmissions of broadcast television over cable to reach households that couldn’t receive the broadcasts over the air.
Justice Breyer observed that Congress made changes in the copyright law, with the Copyright Act of 1976, in order to stop CATV providers from being able to “free ride” on broadcast TV signals; he found that Aereo was similarly free-riding and therefore ought to be subject to the same law.
Just in terms of functionality, the decision makes little sense: CATV was created to enable broadcast television to reach new audiences, while Aereo (nominally, at least) enabled an existing audience for broadcast TV to watch it on other devices and in other locations. In that respect, Aereo is more like the “cloud sync” services for music like DoubleTwist and MP3Tunes that popped up in the late 2000s, which automatically copied users’ MP3 music files and playlists across all of their devices. More on that analogy later.
More broadly, the Court’s decision is unlikely to be helpful in guiding future technologies; all it offers is a “does it look like cable TV?” test based on fact-specific interpretations of the public performance right in copyright law. Justice Breyer claimed that his opinion should not necessarily have implications for cloud computing and other new technologies, but that doesn’t make it so.
As Justice Scalia remarked in his dissent, “The Court vows that its ruling will not affect cloud-storage providers and cable television systems … , but it cannot deliver on that promise given the imprecision of its result-driven rule.” Justice Scalia felt that Aereo exploited a loophole in the copyright law but that it should be up to Congress instead of the Supreme Court to close it.
In fact, Justice Scalia agreed with the Court’s opinion that Aereo probably violates copyright law. But he stated that the decision the Court was called upon to make — regarding Aereo’s direct infringement liability and whether the TV networks’ request for a preliminary injunction should be upheld — wasn’t an appropriate vehicle for determining Aereo’s copyright liability, and that the Court should have left well enough alone. Instead, Justice Scalia offered that Aereo should be more properly held accountable based on secondary liability — just as the Court did in Grokster — and that a lower court could well reach such a finding later in the case after the preliminary injunction issue had been settled.
Secondary liability means that a service doesn’t infringe copyrights itself but somehow enables end users to do so. Of course there have been many cases where copyright owners have sued tech companies on the basis of secondary liability and forced them to go out of business (e.g., Napster, LimeWire), but there have been many others where lawsuits (or threats of lawsuits) have resulted in mutually beneficial license agreements between copyright owners and the technology companies.
And that brings us back to “cloud sync” services for music. DoubleTwist was built by Jon Lech Johansen, who had become notorious for hacking the encryption system for DVDs in the late 1990s. MP3Tunes was developed by Michael Robertson, who was equally notorious for his original MP3.com service. Cloud sync services enabled users to make copies of their music files without permission and didn’t share revenue (e.g., from advertising or premium subscriptions) with copyright owners. DoubleTwist, MP3Tunes, and a handful of similar services became moderately popular. In addition to their functionality, what MP3Tunes and DoubleTwist had in common was that they were developed by people who had first built blatantly illegal technology and then sought ways to push the legal envelope more gently.
Later on, Amazon, Apple, and Google followed the latter path. They built cloud sync capabilities into their music services (thereby rendering small third-party services like DoubleTwist largely irrelevant). Amazon and Google launched their cloud sync capabilities without taking any licenses from record companies; record companies complained; confidential discussions ensued; and now everyone’s happy, including the consumers who use these handy services. (Apple took a license for its iTunes Match feature at the outset.)
The question for Aereo is whether it’s able to have such discussions with TV networks; the answer is clearly no. The company never entertained the possibility that it would have to (“there is no Plan B”), and its principal investor, video mogul Barry Diller, isn’t going to pump more money into the company to pay for licenses.
Of course, TV networks are cheering the result of the Supreme Court’s decision in Aereo. But it doesn’t help them in the long run if the rules of the road for future technologies are made cloudier instead of clearer. And Aereo would eventually have been doomed anyway if Justice Scalia had a majority.
Copyright Alert System Releases First Year Results. June 10, 2014. Posted by Bill Rosenblatt in Europe, Fingerprinting, Law, United States, Watermarking.
The Center for Copyright Information (CCI) released a report last month summarizing the first calendar year of activity of the Copyright Alert System (CAS), the United States’ voluntary graduated response scheme for involving ISPs in flagging their subscribers’ alleged copyright infringement. The report contains data from CAS activity as well as results of a study that CCI commissioned on consumer attitudes in the US towards copyright and file sharing.
The CAS defines three categories of alerts (educational, acknowledgement, and mitigation), with two alerts at each level for a total of six; the three categories make it easier to compare the CAS with “three strikes” graduated response regimes in other countries. As I discussed recently, the CAS’s “mitigation” penalties are very minor compared to punitive measures in other systems such as those in France and South Korea.
The CCI’s report indicates that during the first ten months of operation, it sent out 1.3 million alerts. Of these, 72% were “educational,” 20% were “acknowledgement,” and 8% were “mitigation.” The CAS allows users who receive mitigation alerts to challenge them through an independent review process. Only 265 review requests were sent, and among these, 47 (18%) resulted in the alert being overturned. Most of these 47 were overturned because the review process found that the user’s account was used by someone else without the user’s authorization. In no case did the review process turn up a false positive, i.e., a flagged file that turned out not to be unauthorized use of copyrighted material.
It’s particularly instructive to compare these results to France’s HADOPI system. This is possible thanks to the detailed research reports that HADOPI routinely issues. Two of these were presented at our Copyright and Technology London conferences and are available on SlideShare (2012 report here; 2013 report here). Here is a comparison of the percent of alerts issued by each system at each of the three levels:
| Alert Level | HADOPI 2012 | HADOPI 2013 | CAS 2013 |
|---|---|---|---|
| First (educational) | – | – | 72% |
| Second (acknowledgement) | – | – | 20% |
| Third (mitigation) | – | – | 8% |
Of course these comparisons are not precise; but it is hard not to draw an inference from them that threats of harsher punitive measures succeed in deterring file-sharing. In the French system — in which users can face fines of up to €1500 and one year suspensions of their Internet service — only 0.03% of those who received notices kept receiving them up to the third level, and only a tiny handful of users actually received penalties. In the US system — where penalties are much lighter and not widely advertised — almost 8% of users who received alerts went all the way to the “mitigation” levels. (Of that 8%, 3% went to the sixth and final level.)
Furthermore, the HADOPI results are broadly consistent from 2012 to 2013: they show a slight upward shift in the number of users who received second-level notices, while the percentage of third-level notices — those that could involve fines or suspensions — remained constant. This reinforces the conclusion that actual punitive measures serve as deterrents. At the same time, the 2013 results also showed that while the HADOPI system did reduce P2P file sharing by about one-third during roughly the second year of the system’s operation, P2P usage stabilized and even rose slightly in the two years after that. This suggests that HADOPI has succeeded in deterring certain types of P2P file-sharers but that hardcore pirates remain undeterred — a reasonable conclusion.
It will be interesting to see if the CCI takes this type of data from other graduated response systems worldwide — including those with no punitive measures at all, such as the UK’s planned Vcap system — into account and uses it to adjust its level of punitive responses in the Copyright Alert System.
This is the second of three installments on interesting developments from last week’s IDPF Digital Book conference in NYC.
Another interesting panel at the conference was on public libraries. I’ve written several times (here’s one example) about the difficulties that public libraries are having in licensing e-books from major trade publishers, given that publishers are not legally obligated to license their e-books for library lending on the same terms as for printed books — or at all. The major trade publishers have established different licensing models with various restrictions, such as limited durations (measured in years or number of loans), lack of access to frontlist (current) titles, and/or prices that range up to several times those charged to consumers.
The panel presented research findings that included hard data about how libraries drive book sales — data that libraries badly need in order to bolster their case that publishers should license material to them on reasonable terms.
As we learned from Rebecca Miller from Library Journal, public libraries in the US currently spend only 9% of their acquisition budgets on e-books — which amounts to about $100 million, or less than 3% of overall trade e-book revenue in the United States. Surely that percentage will increase, making e-book acquisition more and more important for the future of public libraries. And as e-books take up a larger portion of libraries’ acquisition budgets, the fact that libraries have little control over licensing terms will become a bigger and bigger problem for them.
The library community has issued a lot of rhetoric — including during that panel — about how important libraries are for book discovery. But publishers are ultimately only swayed by measurable revenue from sales of books that were driven by visits to or loans from libraries. They also want to know to what extent people don’t buy e-books because they can “borrow” them from libraries.
In that light, the library panel had one relevant statistic to offer, courtesy of a study done by my colleague Steve Paxhia for the Book Industry Study Group. The study found that 22% of library patrons ended up buying a book that they borrowed from the library at least once during the past year.
That’s quite a high number. Here’s how it works out to revenue for publishers: Given Pew Internet and American Life statistics about library usage (48% of the population visited libraries last year), and only counting people aged 18 years and up, it means that people bought about 25 million books last year after having borrowed them from libraries. Given that e-books made up 30% of book sales in unit volume last year and figuring an average retail price of $10, that’s $75 million in e-book sales directly attributable to library lending. The correct figure is probably higher, given that many library patrons discover books in ways other than borrowing them (e.g. browsing through them at the library) — though it may also be lower given that some people buy books in order to own physical objects (and thus the percentage of e-books purchased as a result of exposure in libraries may be lower than the corresponding percentage of print books).
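The back-of-envelope arithmetic above can be reproduced in a few lines. The adult-population figure is my own approximation; the rates are the ones quoted in the text:

```python
# Rough reproduction of the estimate above. US_ADULTS is an assumed figure;
# the rates come from the Pew and BISG numbers quoted in the text.
US_ADULTS = 240_000_000        # approximate US population aged 18+ (assumed)
LIBRARY_VISIT_RATE = 0.48      # Pew: visited a library in the past year
BORROW_TO_BUY_RATE = 0.22      # BISG: bought a borrowed book at least once
EBOOK_SHARE = 0.30             # e-books as a share of unit sales
AVG_EBOOK_PRICE = 10.00        # assumed average retail price, USD

patrons = US_ADULTS * LIBRARY_VISIT_RATE
books_bought = patrons * BORROW_TO_BUY_RATE
ebook_revenue = books_bought * EBOOK_SHARE * AVG_EBOOK_PRICE

print(f"Books bought after borrowing: {books_bought / 1e6:.0f} million")
print(f"E-book revenue attributable:  ${ebook_revenue / 1e6:.0f} million")
```

This lands on roughly 25 million books and about $75 million in e-book revenue, matching the figures in the text.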
So, in rough numbers, it’s safe to say that for the $100 million that libraries spend on e-books per year, they deliver a similar amount again in sales through discovery. It’s just too bad that the study did not also measure how many people refrained from buying e-books because they could get them from public libraries. This would be an uncomfortable number to measure, but it would help lead to the truth about how public libraries help publishers sell books.
Update: Steve Paxhia found that his 22% figure measured library loans leading to purchases over a period of six months, not a year. And the survey respondents may have purchased books after borrowing them more than once during that period. His data also shows that half of respondents said they purchased other works by a given author after having borrowed one from the library. So, using the same rough formula as above, the amount of purchases attributable to library usage is more likely to be north of $150 million. Yet we still have no indication of the number of times someone did not purchase a book — particularly an e-book — because it was available through a public library system.
The BBC has discovered documents that detail a so-called graduated response program for detecting illegal downloads done by customers of major UK ISPs and sending alert messages to them. The program is called the Voluntary Copyright Alert Programme (Vcap). It was negotiated between the UK’s four major ISPs (BT, Sky, Virgin Media, and TalkTalk) and trade associations for the music and film industries, and it is expected to launch sometime next year.
Vcap is a much watered-down version of measures defined in the Digital Economy Act of 2010, in that it calls only for repeated “educational” messages to be sent to ISP subscribers, with no punitive measures such as suspension or termination of their accounts.
In general, graduated response programs work like this: copyright owners engage network monitoring firms to monitor ISPs’ networks for infringing behavior. Monitoring firms use a range of technologies, including fingerprinting to automatically recognize content that users are downloading. If they find evidence of illegal behavior, they report it to a central authority, which passes the information to the relevant ISP, typically including the IP address of the user’s device. The ISP determines the identity of the targeted subscriber and takes some action, which depends on the details of the program.
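The flow just described can be sketched in a few lines; all class and method names here are illustrative, not drawn from any real system:

```python
# Illustrative sketch of a graduated-response flow: a notice arrives from
# the central authority, the ISP maps the IP address to a subscriber, and
# the action escalates with the subscriber's alert count. Names and the
# escalation threshold are assumptions, not any real program's rules.
from dataclasses import dataclass, field

@dataclass
class Notice:
    ip_address: str      # IP address observed by the monitoring firm
    evidence: str        # e.g., a fingerprint match for a copyrighted work

@dataclass
class ISP:
    subscribers: dict                      # IP address -> subscriber account
    alerts: dict = field(default_factory=dict)  # account -> alert count

    def handle_notice(self, notice: Notice) -> str:
        subscriber = self.subscribers.get(notice.ip_address)
        if subscriber is None:
            return "unknown subscriber"
        count = self.alerts.get(subscriber, 0) + 1
        self.alerts[subscriber] = count
        # Action depends on the program: educational alerts first, then
        # escalation (here, arbitrarily, after the fourth alert).
        if count <= 4:
            return f"educational alert #{count} to {subscriber}"
        return f"mitigation measure for {subscriber}"

isp = ISP(subscribers={"203.0.113.7": "account-42"})
for notice in [Notice("203.0.113.7", "fingerprint match")] * 5:
    action = isp.handle_notice(notice)
print(action)
```

The real differences between regimes lie almost entirely in that last branch: what the "mitigation" action is and who compels it.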
In some cases (as in France and South Korea), the central authority is empowered to force the ISP to take punitive action; in other cases (as in the United States’ Copyright Alert System (CAS) as well as Vcap), ISPs take action voluntarily.
Assuming that Vcap launches on schedule, we could soon have data points about the effectiveness of various types of programs for monitoring ISP subscribers’ illegal downloading behaviors. The most important question to answer is whether truly punitive measures really make a difference in deterring online copyright infringement, or whether purely “educational” measures are enough to do the job. Currently there are graduated response programs in South Korea, New Zealand, Taiwan, and France that have punitive components, as well as one in Ireland (with Eircom, the country’s largest ISP) that is considered non-punitive.
Is America’s CAS punitive or educational? That’s a good question. CAS has been called a “six strikes” system (as opposed to other countries’ “three strikes”), because it defines six levels of alerts that ISPs must generate, with ISPs expected to take “mitigation measures” against their subscribers starting at the fifth “strike.” What are these mitigation measures? It’s largely unclear. The CAS’s rules are ambiguous and leave quite a bit of wiggle room for each participating ISP to define its own actions.
Instead, you have to look at the policies of each of the five ISPs to find details about any punitive measures they may take – information that is often ambiguous or nonexistent. For example:
- AT&T: its online documentation contains no specifics at all about mitigation measures.
- Cablevision (Optimum Online): its policy is ambiguous, stating that it “may temporarily suspend your Internet access for a set period of time, or until you contact Optimum.” Other language in Cablevision’s policy suggests that the temporary suspension period is 24 hours.
- Comcast (Xfinity): Comcast’s written policy is also ambiguous, saying only that it will continue to post alert messages until the subscriber “resolve[s] the matter” and that it will never terminate an account.
- Time Warner Cable: also ambiguous but suggesting nothing on the order of suspension or termination, or bandwidth throttling. It states that “The range of actions may include redirection to a landing page for a period or until you contact Time Warner Cable.”
- Verizon: Verizon’s policy is the only one with much specificity. On the fifth alert, Verizon throttles the user’s Internet speed to 256 kbps — equivalent to a bottom-of-the-line residential DSL connection in the US — for a period of two days after a 14-day advance warning. At the sixth alert, it throttles bandwidth for three days.
In other words, the so-called mitigation measures are not very punitive at all, not even at their worst — at least not compared to these penalties in other countries:
- France: up to ISP account suspension for up to one year and fines of up to €1500 (US $2000), although the fate of the HADOPI system in France is currently under legal review.
- New Zealand: account suspension of up to six months and fines of up to NZ $15,000 (US $13,000).
- South Korea: account suspension of up to six months.
- Taiwan: suspension or termination of accounts, although the fate of Taiwan’s graduated response program is also in doubt.
[Major hat tip to Thomas Dillon's graduatedresponse.org blog for much of this information.]
In contrast, Vcap will be restricted to sending out four alerts that must be “educational” and “promot[e] an increase in awareness” of copyright issues. Vcap is intended to run for three years, after which it will be re-evaluated — and if judged to be ineffective, possibly replaced with something that more closely resembles the original, stricter provisions in the Digital Economy Act. By 2018, the UK should also have plenty of data to draw on from other countries’ graduated response regimes about any relationship between punitive measures and reduced infringements.
MP3Tunes and the New DMCA Boundaries. March 30, 2014. Posted by Bill Rosenblatt in Law, Music, Services, United States.
With last week’s jury verdict of copyright liability against Michael Robertson of MP3Tunes, copyright owners are finally starting to get some clarity around the limits of DMCA 512. The law gives online service operators a “safe harbor” — a way to insulate themselves from copyright liability related to files that users post on their services by responding to takedown notices.
To qualify for the safe harbor, service providers have to have a policy for terminating the accounts of repeat infringers, and — more relevantly — cannot show “willful blindness” to users’ infringing actions. At the same time, the law does not obligate service providers to proactively police their networks for copyright infringement. The problem is that even when online services respond to takedown notices, the copyrighted works tend to be re-uploaded immediately.
The law was enacted in 1998, and copyright owners have brought a series of lawsuits against online services over the years to try to establish liability beyond the need to respond to one takedown notice at a time. Some of these lawsuits tried to revisit the intent of Congress in passing this law, to convince courts that Congress did not intend to require them to spend millions of dollars a year playing Whac-a-Mole games to get their content removed.
In cases such as Viacom v. YouTube and Universal Music Group v. Veoh that date back to 2007, the media industry failed to get courts to revisit the idea that service providers should act as their own copyright police. But over the past year, the industry has made progress along the “willful blindness” (a/k/a “looking the other way”) front.
These cases featured lots of arguments over what constitutes evidence of willful blindness or its close cousin, “red flag knowledge” of users’ infringements. Courts had a hard time navigating the blurry lines between the “willful blindness” and “no need to self-police” principles in the law, especially when the lines must be redrawn for each online service’s feature set, marketing pitch, and so on.
But within the past couple of years, two appeals courts established some of the contours of willful blindness and related principles to give copyright owners some comfort. The New York-based (and typically media-industry-friendly) Second Circuit, in the YouTube case, found that certain types of evidence, such as company internal communications, could be evidence of willful blindness. And even the California-based (and typically tech-friendly) Ninth Circuit found similar evidence last year in a case against the BitTorrent site IsoHunt.
The Second Circuit’s opinion in YouTube served as the guiding precedent in the EMI v. MP3Tunes case — and in a rather curious way. Back in 2011, the district court judge in MP3Tunes handed down a summary judgment ruling that was favorable to Robertson in some but not all respects. But after the Second Circuit’s YouTube opinion, EMI asked the lower court judge to revisit the case, suggesting that the new YouTube precedent created issues of fact regarding willful blindness that a jury should decide. The judge was persuaded, the trial took place, and the jury decided for EMI. Robertson could now be on the hook for tens of millions of dollars in damages.
(Eleanor Lackman and Simon Pulman of the media-focused law firm Cowan DeBaets have an excellent summary of the legal backdrop of the MP3Tunes trial; they say that it is “very unusual” for a judge to go back on a summary judgment ruling like that.)
The MP3Tunes verdict gives media companies some long-sought leverage against online service operators, which keep claiming that their only responsibility is to respond to each takedown notice, one at a time. This is one — but only one — step of the many needed to clarify the rights of copyright owners and responsibilities of service providers to protect copyrights. And as far as we can tell now, it does not obligate service providers to implement any technologies or take any more proactive steps to reduce infringement. Yet it does now seem clear that if service providers want to look the other way, they at least have to keep quiet about it.
As for Robertson, he continues to think of new startup ideas that seem particularly calculated to goad copyright owners. The latest one, radiosearchengine.com, is an attempt to turn streaming radio into an interactive, on-demand music service a la Spotify. It lets users find and listen to Internet streams of radio stations that are currently playing specific songs (as well as artists, genres of music, etc.).
Radiosearchengine.com starts with a database of thousands of Internet radio stations, similar to TuneIn, iHeartRadio, Reciva, and various others. These streaming radio services (many of which are simulcasts of AM or FM signals) carry program content data, such as the title and artist of the song currently playing. Radiosearchengine.com retrieves this data from all of the stations in its database every few seconds, adds that information to the database, and makes it searchable by users. Robertston has even created an API so that other developers can access his database.
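The polling loop described above might look something like this. `demo_fetch` is a hypothetical stand-in for however a given station actually exposes its now-playing metadata; none of these names or URLs come from the real service:

```python
# Sketch of a now-playing poller: each pass reads every station's current
# metadata and refreshes a (artist, title) -> stations index that user
# searches can query. All names here are illustrative assumptions.
def poll_stations(stations, index, fetch):
    """One polling pass: refresh the (artist, title) -> stations index."""
    for url in stations:
        meta = fetch(url)
        if not meta:
            continue  # station offline or no metadata in the stream
        key = (meta["artist"].lower(), meta["title"].lower())
        index.setdefault(key, set()).add(url)
    return index

def demo_fetch(url):
    # Stand-in for a real fetch, which would read the stream's metadata
    # (e.g., ICY/Icecast tags) over HTTP every few seconds.
    now_playing = {
        "http://a.example/stream": {"artist": "Artist A", "title": "Song X"},
        "http://b.example/stream": {"artist": "Artist B", "title": "Song Y"},
        "http://c.example/stream": {"artist": "Artist A", "title": "Song X"},
    }
    return now_playing.get(url)

stations = ["http://a.example/stream", "http://b.example/stream",
            "http://c.example/stream"]
index = poll_stations(stations, {}, demo_fetch)

# A user search is then just a dictionary lookup:
matches = index.get(("artist a", "song x"), set())
print(sorted(matches))
```

Stale entries would also need to be evicted as songs end, but the core of the service is this index refresh.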
Of course, radiosearchengine.com can’t predict that a station will play a certain song in the future (stations aren’t allowed to report it in advance), so users are likely to click on station links and hear their chosen songs starting in the middle. But with the most popular songs — which are helpfully listed on the site’s left navbar — you can find many stations that are playing them, so you can presumably keep clicking until you find the song near its beginning.
This is something that TuneIn and others could have offered years ago if it didn’t seem so much like lawsuit bait. On the other hand, Robertson isn’t the first one to think of this: there’s been an app for that for at least three years.
Disney and Apple’s UV FUD. March 26, 2014. Posted by Bill Rosenblatt in Business models, Technologies, United States, Video.
Last month Disney launched Disney Movies Anywhere, a service that lets users stream and download movies from Disney and associated studios on their Apple iOS devices. You can purchase movies on the site or from the App Store app and stream them to any iPhone, iPad, or iPod Touch. You can also get digital copies and streaming access with purchases of selected DVDs and Blu-ray discs. And you can connect your iTunes account to your Disney Movies Anywhere account so that you can gain similar streaming and download access to your existing Disney iTunes purchases.
A couple of things about Disney Movies Anywhere are worth discussing. First, this is yet more evidence of the strong bond between Disney and Apple, a relationship formed when Disney acquired Pixar from Steve Jobs, who became a Disney board member and the company’s largest shareholder.
More particularly, this service is a way for Apple to experiment with video streaming services without attaching its own brand name. Disney Movies Anywhere works with only iOS devices, and there’s little indication that it will add support for Android or other platforms. For whatever reason, Apple has shied away from streaming media services until quite recently (with iTunes Radio and the latest iteration of Apple TV).
More importantly, Disney Movies Anywhere is the first implementation of Disney’s KeyChest — a rights locker architecture that is similar to UltraViolet, the technology backed by the other five major Hollywood studios. The idea common to both KeyChest and UltraViolet is that when you purchase a movie, you’re actually purchasing the right to download or stream it from a variety of sources; the rights locker maintains a record of your purchase.
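The rights-locker idea common to both systems is easy to picture in code: the locker stores a record of what you own, not the files themselves, and any participating retailer or device can check it. A minimal sketch (all names are mine; neither KeyChest's nor UltraViolet's actual design is assumed):

```python
from dataclasses import dataclass, field

@dataclass
class RightsLocker:
    """Toy rights locker: it records purchases, not media files.

    Retailers and devices check entitlements here, then deliver the
    movie in whatever format or stream they support."""
    entitlements: dict = field(default_factory=dict)  # user -> set of titles

    def record_purchase(self, user: str, title: str) -> None:
        self.entitlements.setdefault(user, set()).add(title)

    def can_access(self, user: str, title: str) -> bool:
        return title in self.entitlements.get(user, set())

locker = RightsLocker()
locker.record_purchase("alice", "Frozen")     # bought at one retailer...
print(locker.can_access("alice", "Frozen"))   # ...accessible via any other: True
print(locker.can_access("alice", "Cars"))     # never purchased: False
```

The point of the architecture is that the purchase record, not any one retailer's file format, is the durable asset.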
One of the main motivations behind UltraViolet was to prevent content distributors or consumer electronics makers from dominating the economics of the digital video supply chain in the way that Apple dominated music downloads (and Amazon may dominate e-books), and thus from being able to dictate terms to copyright owners. By making it possible for users to buy digital movies from one retailer and then download them in other formats from other retailers, the five studios hoped to create a level playing field among retailers as well as interoperability for users. UltraViolet has several retail partners, including Target, Walmart (VUDU), and Best Buy (CinemaNow).
The problem with these technology schemes is that it is very hard to make them into universal standards. Just about every software technology we use settles down to twos or threes. In operating systems, it’s all twos: Windows and Mac OS for desktops and laptops; Android and iOS for mobile devices; Unix/Linux and Windows for servers. Other markets are similar: in relational databases it’s Oracle/MySQL (Oracle Corp.), DB2 (IBM), and SQL Server (Microsoft); in music paid-download formats it’s MP4-AAC (Apple) and MP3 (Amazon); in e-books (in the US, at least) it’s Amazon, Barnes & Noble, and Apple iBooks. Antitrust law prevents a single technology from dominating too much; market complexity prevents more than a handful from becoming roughly equal competitors.
It would be a shame if this also became true for rights lockers for movies and TV shows. It does not help the studios if consumers get one flavor of “interoperability” for movies from all but one major studio and another flavor for movies from Disney. Disney surely remembers the less-than-stellar success of its last solo venture into digital movie distribution: MovieBeam, which launched around 2004 and lasted less than four years.
And that brings us back around to Apple. The only plausible explanation for this bifurcation is that Apple is really in charge here. UltraViolet is not just an “every studio but Disney” consortium; it is also an “every technology company but Apple” initiative. The list of technology companies participating in UltraViolet is huge, though Microsoft occupies a particularly important role as the source of the UltraViolet file format and the first commercial DRM to be approved for use with the system. In other words, the KeyChest/UltraViolet dichotomy is shaping up to look very much like Apple vs. the Microsoft-led Windows ecosystem, or Apple vs. the Google-led Android ecosystem.
Still, the market for digital video is in its relatively early days, and things could change quite a bit — especially if consumers are confused by the choices on offer. (Coincidentally, there’s a good overview of this confusion and its causes in today’s New York Times.) UltraViolet is enjoying only modest success so far — compared, say, to Netflix or iTunes — and the introduction of Disney Movies Anywhere is unlikely to help make rights lockers any clearer to consumers.
In that respect, the UltraViolet/KeyChest dichotomy also has a precedent in the digital music market. Back in 2001-2002, the (then) five major record labels lined up behind two different music distribution platforms: MusicNet and pressplay. MusicNet was backed by Warner Music Group, EMI, BMG, and RealNetworks, while pressplay was backed by Sony Music and Universal Music Group. MusicNet was a wholesale distribution platform that made deals with multiple retailers; pressplay was its own retailer. In other words, MusicNet was UltraViolet, while pressplay was Disney Movies Anywhere. Yet neither one was successful; both suffered from over-complexity (among other things). Apple launched the much easier-to-use iTunes Music Store in 2003, and few people remember MusicNet or pressplay anymore.*
In other words, there are still opportunities for new digital video models to emerge and disrupt the current market. And consumer confusion is a great way to hasten the disruption.
*The two music platforms did survive, in a way: MusicNet is now MediaNet, a wholesaler of digital music and other content with many retail partners; pressplay was sold to Roxio, rebranded as Napster (the legal version), and resold to Rhapsody, where it still exists under the Napster brand name outside of the US.
Viacom vs. YouTube: Not With a Bang, But a Whimper
March 21, 2014. Posted by Bill Rosenblatt in Law, United States.
Earlier this week Viacom settled its long-running lawsuit against Google over video clips containing Viacom’s copyrighted material that users posted on YouTube. The lawsuit was filed seven years ago; Viacom sought well over a billion dollars in damages. The last major development in the case was in 2010, when a district court judge ruled in Google’s favor. The case had bounced around between the district and Second Circuit appeals courts. The parties agreed to settle the case just days before oral argument was to take place before the Second Circuit.
It’s surprising how little press coverage the settlement has attracted — even in the legal press — considering the strategic implications for Viacom and copyright owners in general.
The main reasons for the lack of press attention are that details of the settlement are being kept secret, and that by now the facts at issue in the case are firmly in the past. A few months after Viacom filed the lawsuit in March 2007, Google launched its Content ID program, which enables copyright owners to block user uploads of their content to YouTube — or monetize them through shared revenue from advertising. The lawsuit was concerned with video clips that users uploaded before Content ID was put into place.
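Content ID's actual matching technology is proprietary, but the general shape — copyright owners register reference fingerprints with a policy, and uploads are checked against the index — can be sketched. Here a plain hash stands in for real perceptual audio/video fingerprinting, and all the names are hypothetical:

```python
import hashlib

# Reference fingerprints registered by copyright owners.
# Real systems fingerprint perceptual features so that re-encoded or
# trimmed clips still match; a hash of raw bytes is only a stand-in
# to show the lookup structure.
reference_index = {}

def fingerprint(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

def register(owner: str, media_bytes: bytes, policy: str) -> None:
    """Register a reference work; policy is 'block' or 'monetize'."""
    reference_index[fingerprint(media_bytes)] = (owner, policy)

def check_upload(media_bytes: bytes):
    """Return (owner, policy) if the upload matches a registered work, else None."""
    return reference_index.get(fingerprint(media_bytes))

register("Viacom", b"daily-show-clip", "monetize")
print(check_upload(b"daily-show-clip"))  # ('Viacom', 'monetize')
print(check_upload(b"home-video"))       # None
```

The key economic point is the policy field: a match need not mean removal; it can instead trigger ad revenue sharing.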
Viacom’s determination to carry on with the litigation was clearly meant primarily to get the underlying law changed. Viacom has been a vocal opponent of the Digital Millennium Copyright Act (DMCA) in its current form. The law allows service providers like YouTube to avoid liability by responding to copyright owners’ requests to remove content (takedown notices). It doesn’t require service providers to proactively police their services for copyright violations. As a result, a copyright owner has to issue a new takedown notice every time a clip of the same content appears on the network — which often happens immediately after each clip is taken down. Companies like Viacom spend millions of dollars issuing takedown notices in a routine that has been likened to a game of Whac-a-Mole.
From that perspective, Google’s Content ID system goes beyond its obligations under the DMCA (Content ID has become a big revenue source for Google), so Google’s compliance with the current DMCA isn’t the issue. Countless other service providers don’t have technology like Content ID in place; moreover, since 2007 the courts have consistently — in other litigation, such as Universal Music Group v. Veoh — interpreted the DMCA not to require that service providers act as their own copyright police. Viacom must still be interested in getting the law changed.
In this light, what’s most interesting is not that the settlement came just days before oral argument before the Second Circuit, but that it came just days after the House Judiciary Committee held hearings in Washington on the DMCA. These hearings were part of Congress’s decision to start down the long road to revamping the entire US copyright law.
My theory is that, while Viacom may have had various reasons to settle, the company has decided that it has a better shot at changing the law through Congress than through the courts. The journey to a new Copyright Act is likely to take years longer than the appeals process. But if Viacom were to get the lower court’s decision overturned in the Second Circuit, the result would be a precedent that wouldn’t apply nationwide; in particular, it wouldn’t necessarily apply in the tech-friendly Ninth Circuit. A fix to the actual law in Congress that’s favorable to copyright owners — if Congress delivers one — could have broader applicability, both geographically and to a wider variety of digital services. Viacom has waited seven years; it can wait another ten or so.
In Copyright Law, 200 Is a Magic Number
March 2, 2014. Posted by Bill Rosenblatt in Images, Law, United States.
A recurring theme in this blog is how copyright law is a poor fit for the digital age because, while technology enables distribution and consumption of content to happen automatically, instantaneously, and at virtually no cost, decisions about legality under copyright law can’t be similarly automated. The best/worst example of this is fair use. Only a court can decide whether a copy is noninfringing under fair use. Even leaving aside notions of legal due process, it’s not possible to create a “fair use deciding machine.”
In general, copyright law contains hardly any concrete, machine-decidable criteria. Yet one of the precious few came to light over the past few months regarding a type of creative work that is often overlooked in discussions of copyright law: visual artworks. Unlike most copyrighted works, works of visual art are routinely sold and then resold potentially many times, usually at higher prices each time.
A bill was introduced in Congress last week that would enable visual artists to collect royalties on their works every time they are resold. One of the sponsors of the bill is Rep. Jerrold Nadler, who represents a chunk of New York City, one of the world’s largest concentrations of visual artists.
Of course, the types of copyrighted works that we usually talk about here — books, movies, TV shows, and music — aren’t subject to resale royalties; they are covered under first sale (Section 109 of the Copyright Act), which says that the buyer of any of these works is free to do whatever she likes with them, with no involvement from the original seller. But visual artworks are different. According to Section 101 of the copyright law, they are either unique objects (e.g. paintings) or reproduced in limited editions (e.g. photographs). The magic number of copies that distinguishes a visual artwork from anything else? 200 or fewer. The copies must be signed and numbered by the creator.
Under the proposed ART (Artist Royalties, Too) Act, five percent of the proceeds from a sale of a visual artwork would go to the artist, whether it’s the second, third, or hundredth sale of the work. The law would apply to artworks that sell for more than $5,000 at auction houses that do at least $1 million in business per year. It would require private collecting societies to collect and distribute the royalties on a regular basis, as SoundExchange does for digital music broadcasting. This proposed law would follow in the footsteps of similar laws in many jurisdictions, including the UK, the EU, Australia, Brazil, India, and Mexico. It would also emulate “residual” and “rental” royalties for actors, playwrights, music composers, and others, which result from contracts with studios, theaters, orchestras, and so on.
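The bill's thresholds reduce to a few simple tests, which makes this one of the rare machine-decidable corners of copyright law. A sketch of the rule as described above (statutory details simplified; the function and parameter names are mine):

```python
def art_act_royalty(sale_price: float, house_annual_sales: float,
                    copies_in_edition: int, signed_and_numbered: bool) -> float:
    """Royalty owed to the artist on a resale under the proposed ART Act.

    Applies only to works of visual art (editions of 200 or fewer copies,
    signed and numbered by the creator), resold for more than $5,000 at
    auction houses doing at least $1 million in business per year."""
    is_visual_artwork = copies_in_edition <= 200 and signed_and_numbered
    if not is_visual_artwork:
        return 0.0
    if sale_price <= 5_000 or house_annual_sales < 1_000_000:
        return 0.0
    return 0.05 * sale_price

print(art_act_royalty(100_000, 20_000_000, 50, True))   # 5000.0
print(art_act_royalty(100_000, 20_000_000, 250, True))  # 0.0: edition too large
print(art_act_royalty(4_000, 20_000_000, 50, True))     # 0.0: below price floor
```

Note how the 200-copy limit from Section 101 does all the gatekeeping: a 250-copy edition of the identical image earns nothing.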
The U.S. Copyright Office analyzed the art resale issue recently and published a report last December that summarized its findings. The Office concluded that resale royalties would probably not harm the overall art market in the United States, and that a law like the ART Act isn’t a bad idea but is only one of several ways to institute resale royalties.
The Office had previously looked into resale royalties over 20 years ago. Its newer research found that, based on evidence from other countries that have resale royalties, imposing them in the US would neither result in the flight of art dealers and auction houses from the country nor impose unduly onerous burdens for administration and enforcement of royalty payments.
Yet the Copyright Office’s report doesn’t overflow with unqualified enthusiasm for statutory royalties on sales. One of the legislative alternatives it suggests is the idea of a “performance royalty” from public display of artworks. If a collector wants to buy a work at auction and display it privately in her home, that’s different from a museum that charges people admission to see it. Although this would mirror performance royalties for music, it would seem to favor wealthy individuals at the expense of public exposure to art.
The ART Act — which is actually a revision of legislation that Rep. Nadler introduced in 2011 — has drawn much attention within the art community, though little outside it. Artists are generally in favor of it, of course. But various others have criticized aspects of the bill, such as that it only applies to auction houses (thereby pushing more sales to private dealers, where transactions take place in secret instead of out in the open), that it only benefits the tiny percentage of already-successful artists instead of struggling newcomers, and that it unfairly privileges visual artists over other creators of both copyrighted works and physical objects (think Leica cameras or antique Cartier watches).
As an outsider to the art world, I have no opinion on those criticisms. What fascinates me instead is that number: 200. It may partially explain why the Alfred Eisenstaedt photograph of the conductor Leonard Bernstein that hangs in my wife’s office, signed and numbered 14 out of 250, is considerably less valuable than another Eisenstaedt available on eBay that’s signed and numbered 41 out of 50.
It raises the question of what happens when more and more visual artists use media that can be reproduced digitally without loss of quality. Would an artist be better off limiting her output to 200 copies and getting the 5% on resale, or would she be better off making as many copies as possible and selling them for whatever the market will bear? The answer is unknowable without years of real-world testing. Given the choice, some artists may opt for the former route, which seems to go against the primary objective of copyright law: to maximize the availability of creative works to the public through incentives to creators.
Copyright minimalists question the relevance of copyright in an era when digital technologies make it possible to reproduce creative works at very little cost and perfect fidelity; they call on the media industry to stop trying to “profit from scarcity” and instead “profit from abundance.” Here’s a situation where copyrighted works are the scarcest of all.
Nowadays no one would confuse one of Vermeer’s 35 (or possibly 36) masterpieces with a poster or hand-made reproduction of one. People will be willing to travel to the Rijksmuseum, National Gallery, Met, etc., to see them for the foreseeable future. Yet at some point in the distant future, the scarcity of most visual artworks will be artificially imposed. At that point, the sale (not resale) value of creative works will approach zero, even if they are reproduced, signed, and sequentially numbered by super-micro-resolution 3D printers that sell at Staples for the equivalent of $200 today.
Perhaps the best indication of the future comes from Christo and Jeanne-Claude, the well-known husband-and-wife outdoor artists. Christo and Jeanne-Claude designed the 2005 installation called The Gates in New York’s Central Park (which happens to be in Jerry Nadler’s congressional district). Reproducing — let alone selling — this massive work is inconceivable. Instead, Christo and Jeanne-Claude hand-signed thousands of copies of books, lithographs, postcards, and other easily-reproduced artifacts containing photos and drawings of the artwork, and sold them to help pay the eight-figure cost of the project. Add to that an autopen for automating individualized signatures, and you may have the future of visual art in a world without scarcity.
So, the question that Congress ought to consider when evaluating art resale legislation is how to create a legal environment in which the Christos and Jeanne-Claudes of tomorrow will even bother anymore. That’s not a rhetorical question, either.
National Academies Calls for Hard Data on Digital Copyright
February 4, 2014. Posted by Bill Rosenblatt in Economics, Law, United States.
About three years ago, the National Academies — the scientific advisers to the U.S. federal government — held hearings on copyright policy in the digital age. The intent of the project, of which the hearings were a part, was to gather input from a wide range of interested parties on the kinds of research that should be done to further our understanding of the effects of digital technologies on copyright.
The committee overseeing the project consisted of twelve people, including an economist specializing in digital content issues (Joel Waldfogel), a movie industry executive (Mitch Singer of Sony Pictures), a music technology expert (Paul Vidich, formerly of Warner Music Group), a federal judge with deep copyright experience (Marilyn Hall Patel of Napster fame), a library director (Michael Keller of Stanford University), a former director of Creative Commons (Molly van Houweling), and a few law professors. The committee was chaired by Bill Raduchel, a Harvard economics professor turned technology executive perhaps best known as Scott McNealy’s mentor at Sun Microsystems.
Recently the National Academies Press published the results of the project in the form of Copyright in the Digital Era: Building Evidence for Policy, which is available as a free e-book or $35 paperback. This 85-page booklet is, without exaggeration, the most important document in the field of copyright policy to be published in quite some time. It is the first substantive attempt to take the debate on copyright policy out of the realm of “copyright wars,” where polemics and emotions rule, into the realm of hard data.
The document starts by decrying the lack of data on which deliberations on copyright policy are based, especially compared to the mountains of data used to support changes to the patent system. It then goes on to describe various types of data that either exist or should be collected in order to fuel research that can finally tell us how copyright is faring in the digital era, with respect to its purpose to maximize public availability of creative works through incentives to creators.
The questions that Copyright in the Digital Era poses are fundamentally important. They include issues of monetary and non-monetary motivations to content creators; the impact of sharply reduced distribution and transaction costs for digital compared to physical content; the costs and benefits of various copyright enforcement schemes; and the effects of US-specific legal constructs such as fair use, first sale, and the DMCA safe harbors. My own testimony at the hearings emphasized the need for research into the costs and benefits of rights technologies such as DRM, and I was pleased to see this reflected in the document.
Copyright in the Digital Era concludes with lists of types of data that the project committee members believe should be collected in order to facilitate research, as well as descriptions of the types of research that should be done and the challenges of collecting the needed data.
This document should be required reading for everyone involved in copyright policy. More than that, it should be seen as a gauntlet that has been thrown down to everyone involved in the so-called copyright wars. The National Academies has set the research agenda. Now that Congress has begun the long, arduous process of revamping America’s copyright law, we’ll see who is willing and able to fund the research and publish the results so that Congress gets the data it deserves.