Registration for Copyright and Technology London 2014 is now live. An early-bird discount is in place through August 8. Space is limited and we came close to filling the rooms last time, so please register today!
I am particularly excited about our two keynote speakers — two of the most important copyright policy officials in the European Union and the United States, respectively. Maria Martin-Prat will discuss efforts to harmonize aspects of copyright law across the 28 EU Member States, while Shira Perlmutter will provide an update on the long process that the US has begun to revise its copyright law.
We have made one change to the Law and Policy track in the afternoon: we’ve added a panel called The Cloudy Future of Private Copying. This panel will deal with controversies in the already complex and often confusing world of laws in Europe that allow consumers to make copies of lawfully-obtained content for personal use.
The right of private copying throughout Europe was established in the European Union Copyright Directive of 2001, but the EU Member States’ implementations of private copying vary widely — as do the levies that makers of consumer electronics and blank media must pay to copyright collecting societies in many countries, on the presumption that consumers will make private copies of copyrighted material. Private copying was originally intended to cover such straightforward scenarios as photocopying text materials or taping vinyl albums onto cassette. But nowadays, cloud storage services, cyberlockers, and “cloud sync” services for music files — some of which allow streaming from the cloud, or access to content by users other than those who uploaded it — are testing the boundaries of private copying.
The result is a growing amount of controversy among collecting societies, consumer electronics makers, retailers, and others; meanwhile the European Commission is seeking ways to harmonize the laws across Member States amid rapid technological change. Our panel will discuss these issues and consider whether there’s a rational way forward.
We have slots open for a chair and speakers on this panel; I will accept proposals through July 31. Please email your proposal(s) with the following information:
- Speaker’s name and full contact information
- Chair or speaker request?
- Description of speaker’s experience or point of view on the panel subject
- Brief narrative bio of speaker
- Contact info of representative, if different from speaker*
Finally, back over here across the Atlantic, I’ll note an interesting new development in the Aereo case that hasn’t gotten much press since the Supreme Court decision in the case a couple of weeks ago. Aereo had claimed that it had “bet the farm” on a court ruling that its service was legal and that “there is no Plan B,” implying that it didn’t have the money to pay for licenses with television networks. Various commentators have noted that Aereo wasn’t going to have much leverage in any such negotiations anyway.
As a result of the decision, Aereo has changed tactics. In the Supreme Court’s ruling, Justice Breyer stated that Aereo resembled a cable TV provider and therefore could not offer access to television networks’ content without a license. Now, in a filing with the New York district court that first heard the case, Aereo is claiming that it should be entitled to the statutory license for cable TV operators under section 111 of the copyright law, with royalty rates that are spelled out in 17 U.S.C. § 111(d)(1).
In essence, Aereo is attempting to rely on the court for its negotiating leverage, and it has apparently decided that it can become a profitable business even if it has to pay the fees under that statutory license. Has Barry Diller — or another investor — stepped in with the promise of more cash to keep the company afloat? Regardless, in pursuing this tactic, Aereo is simply following the well-worn path of working litigation into a negotiation for a license to intellectual property.
*Please note that personal confirmation from speakers themselves is required before we will put them on the program.
Supreme Court’s Aereo Decision Clouds the Future
July 3, 2014. Posted by Bill Rosenblatt in Law, United States, Video.
The Supreme Court has rendered various decisions that serve as rules of the road for the treatment of copyrighted works amid technological innovation. Sony v. Universal (1984) established the legality of “time shifting” video for personal viewing as well as the “substantial noninfringing uses” standard for new technologies that involve digital media. MGM v. Grokster (2005) took the concept of “inducing infringement” from patent law and applied it to copyright, so that services that directly and explicitly benefit from users’ infringement could be held liable. UMG v. Veoh (2011) taught that network service operators have no duty to proactively police their services for users’ infringements. These rulings are reasonably clear signposts that technologists can follow when contemplating new products and services.
Unfortunately, Justice Stephen Breyer’s majority opinion last week in ABC v. Aereo won’t be joining that list. Breyer wrote for a 6-3 majority against Aereo that united the Court’s liberals and moderates. Justice Antonin Scalia’s forceful dissent described the problems that this decision will create for future services.
Several weeks ago, at the Copyright Clearance Center’s OnCopyright conference in NYC, Rick Cotton — former General Counsel of NBC Universal — predicted that the Supreme Court would come down against Aereo in a narrow decision that would avoid impact on other technologies. He got it right in terms of what Justice Breyer may have hoped to accomplish, but not in terms of what’s likely to happen in the future.
Instead of establishing principles that future technology designers can rely on, the Court simply took a law that was enacted almost 40 years ago to apply to an old technology, determined that Aereo resembles that old technology, and concluded that therefore the law should apply to it. The old technology in question is Community Access Television (CATV) — transmissions of broadcast television over cable to reach households that couldn’t receive the broadcasts over the air.
Justice Breyer observed that Congress made changes in the copyright law, with the Copyright Act of 1976, in order to stop CATV providers from being able to “free ride” on broadcast TV signals; he found that Aereo was similarly free-riding and therefore ought to be subject to the same law.
Just in terms of functionality, the decision makes little sense: CATV was created to enable broadcast television to reach new audiences, while Aereo (nominally, at least) enabled an existing audience for broadcast TV to watch it on other devices and in other locations. In that respect, Aereo is more like the “cloud sync” services for music like DoubleTwist and MP3Tunes that popped up in the late 2000s, which automatically copied users’ MP3 music files and playlists across all of their devices. More on that analogy later.
More broadly, the Court’s decision is unlikely to be helpful in guiding future technologies; all it offers is a “does it look like cable TV?” test based on fact-specific interpretations of the public performance right in copyright law. Justice Breyer claimed that his opinion should not necessarily have implications for cloud computing and other new technologies, but that doesn’t make it so.
As Justice Scalia remarked in his dissent, “The Court vows that its ruling will not affect cloud-storage providers and cable television systems … , but it cannot deliver on that promise given the imprecision of its result-driven rule.” Justice Scalia felt that Aereo exploited a loophole in the copyright law but that it should be up to Congress instead of the Supreme Court to close it.
In fact, Justice Scalia agreed with the Court’s opinion that Aereo probably violates copyright law. But he stated that the decision the Court was called upon to make — regarding Aereo’s direct infringement liability and whether the TV networks’ request for a preliminary injunction should be upheld — wasn’t an appropriate vehicle for determining Aereo’s copyright liability, and that the Court should have left well enough alone. Instead, Justice Scalia offered that Aereo should be more properly held accountable based on secondary liability — just as the Court did in Grokster — and that a lower court could well reach such a finding later in the case after the preliminary injunction issue had been settled.
Secondary liability means that a service doesn’t infringe copyrights itself but somehow enables end users to do so. Of course there have been many cases where copyright owners have sued tech companies on the basis of secondary liability and forced them to go out of business (e.g., Napster, LimeWire), but there have been many others where lawsuits (or threats of lawsuits) have resulted in mutually beneficial license agreements between copyright owners and the technology companies.
And that brings us back to “cloud sync” services for music. DoubleTwist was built by Jon Lech Johansen, who had become notorious for hacking the encryption system for DVDs in the late 1990s. MP3Tunes was developed by Michael Robertson, who was equally notorious for his original MP3.com service. Cloud sync services enabled users to make copies of their music files without permission and didn’t share revenue (e.g., from advertising or premium subscriptions) with copyright owners. DoubleTwist, MP3Tunes, and a handful of similar services became moderately popular. In addition to their functionality, what MP3Tunes and DoubleTwist had in common was that they were developed by people who had first built blatantly illegal technology and then sought ways to push the legal envelope more gently.
Later on, Amazon, Apple, and Google followed the latter path. They built cloud sync capabilities into their music services (thereby rendering small third-party services like DoubleTwist largely irrelevant). Amazon and Google launched their cloud sync capabilities without taking any licenses from record companies; record companies complained; confidential discussions ensued; and now everyone’s happy, including the consumers who use these handy services. (Apple took a license for its iTunes Match feature at the outset.)
The question for Aereo is whether it’s able to have such discussions with TV networks; the answer is clearly no. The company never entertained the possibility that it would have to (“there is no Plan B”), and its principal investor, video mogul Barry Diller, isn’t going to pump more money into the company to pay for licenses.
Of course, TV networks are cheering the result of the Supreme Court’s decision in Aereo. But it doesn’t help them in the long run if the rules of the road for future technologies are made cloudier instead of clearer. And Aereo would eventually have been doomed anyway if Justice Scalia had had a majority.
Copyright Alert System Releases First Year Results
June 10, 2014. Posted by Bill Rosenblatt in Europe, Fingerprinting, Law, United States, Watermarking.
The Center for Copyright Information (CCI) released a report last month summarizing the first calendar year of activity of the Copyright Alert System (CAS), the United States’ voluntary graduated response scheme for involving ISPs in flagging their subscribers’ alleged copyright infringement. The report contains data from CAS activity as well as results of a study that CCI commissioned on consumer attitudes in the US towards copyright and file sharing.
The CAS issues alerts in three categories — educational, acknowledgement, and mitigation — with two alerts at each level, for a total of six; the three categories make it easier to compare the CAS with “three strikes” graduated response regimes in other countries. As I discussed recently, the CAS’s “mitigation” penalties are very minor compared to punitive measures in other systems such as those in France and South Korea.
The CCI’s report indicates that during the first ten months of operation, it sent out 1.3 million alerts. Of these, 72% were “educational,” 20% were “acknowledgement,” and 8% were “mitigation.” The CAS includes a process for users to submit mitigation alerts they receive to an independent review process. Only 265 review requests were sent, and among these, 47 (18%) resulted in the alert being overturned. Most of these 47 were overturned because the review process found that the user’s account had been used by someone else without the user’s authorization. In no case did the review process turn up a false positive, i.e., a flagged file that was not in fact copyrighted material shared without authorization.
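As a sanity check, the report’s figures hang together; a few lines of arithmetic (a sketch using only the numbers cited above) reproduce them:

```python
# Figures from the CCI report as summarized above.
total_alerts = 1_300_000
shares = {"educational": 0.72, "acknowledgement": 0.20, "mitigation": 0.08}

# Approximate alert counts at each level.
alerts_by_level = {level: round(total_alerts * pct) for level, pct in shares.items()}
# roughly 936,000 educational, 260,000 acknowledgement, 104,000 mitigation

# The independent review process: 47 of 265 requests overturned.
overturn_rate = 47 / 265  # ~0.177, the 18% cited above

# Review requests as a fraction of mitigation alerts sent.
request_rate = 265 / alerts_by_level["mitigation"]  # ~0.25%
```

Put another way, only about one in four hundred mitigation alerts was even challenged.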
It’s particularly instructive to compare these results to France’s HADOPI system. This is possible thanks to the detailed research reports that HADOPI routinely issues. Two of these were presented at our Copyright and Technology London conferences and are available on SlideShare (2012 report here; 2013 report here). Here is a comparison of the percent of alerts issued by each system at each of the three levels:
[Table: percent of alerts issued at each alert level by HADOPI (2012), HADOPI (2013), and CAS (2013)]
Of course these comparisons are not precise; but it is hard not to draw an inference from them that threats of harsher punitive measures succeed in deterring file-sharing. In the French system — in which users can face fines of up to €1500 and one year suspensions of their Internet service — only 0.03% of those who received notices kept receiving them up to the third level, and only a tiny handful of users actually received penalties. In the US system — where penalties are much lighter and not widely advertised — almost 8% of users who received alerts went all the way to the “mitigation” levels. (Of that 8%, 3% went to the sixth and final level.)
Furthermore, while the HADOPI results are broadly consistent from 2012 to 2013, they show a slight upward shift in the number of users who received second-level notices, while the percentage of third-level notices — those that could involve fines or suspensions — remained constant. This reinforces the conclusion that actual punitive measures serve as deterrents. At the same time, the 2013 results also showed that while the HADOPI system did reduce P2P file sharing by about one-third during roughly the second year of the system’s operation, P2P usage stabilized and even rose slightly in the two years after that. The reasonable conclusion is that HADOPI has succeeded in deterring casual P2P file-sharers but that hardcore pirates remain undeterred.
It will be interesting to see if the CCI takes this type of data from other graduated response systems worldwide — including those with no punitive measures at all, such as the UK’s planned Vcap system — into account and uses it to adjust its level of punitive responses in the Copyright Alert System.
Dispatches from IDPF Digital Book 2014, Pt. 3: DRM
June 5, 2014. Posted by Bill Rosenblatt in DRM, Publishing, Standards.
The final set of interesting developments at last week’s IDPF Digital Book 2014 in NYC has to do with DRM and rights.
Tom Doherty, founder of the science fiction publisher Tor Books, gave a speech about his company’s experimentation with DRM-free e-books and its launch of a line of e-novellas without DRM. The buildup to this speech (among those of us who were aware of the program in advance) was palpable, but the result fell with a thud. You had to listen hard to find the tiny morsel about how going DRM-free has barely affected sales; otherwise the speech was standard-issue dogma about DRM with virtually no new insights or data. And he did not take questions from the audience.
DRM has become something of a taboo subject even at conferences like this, so most of the rest of the discussion about it took the form of hallway buzz. And the buzz is that many are predicting that DRM will be on its way out for retail trade e-books within the next couple of years.
That’s the way things are likely to go if technology market forces play out the way they usually do. Retailers other than Amazon (and possibly Apple) will want to embrace more open standards so that they can offer greater interoperability and thus band together to compete with the dominant player; getting rid of DRM is certainly a step in that direction. Meanwhile publishers, getting more and more fed up with or afraid of Amazon, will find common cause with other retailers and agree to license more of their material for distribution without DRM. (Several retailers in smaller European markets, as well as some retailers serving self-publishing authors, such as Lulu, have already dropped DRM entirely.)
Such sentiments will eventually supersede most publishers’ current “faith-based” insistence on DRM. In other words, publishers and retailers will behave more or less the same way as the major record labels and non-Apple retailers behaved back in 2006-2007.
This course of events seems inevitable… unless publishers get some hard, credible data that tells them that DRM helps prevent piracy and “oversharing” more than it hurts the consumer experience. That’s the only way (other than outright inertia) that I can see DRM staying in place for trade books over the next couple of years.
The situation for educational, professional, and STM (scientific, technical, medical) books is another story (as are library lending and other non-retail models). Higher ed publishers in particular have reasons to stick with DRM: for example, e-textbook piracy has risen dramatically in recent years, affecting 34% of students as of last year.
Adobe recently re-launched its DRM with a focus on these publishing market segments. I’d describe the re-launch as “awkward,” though publishers I’ve spoken to would characterize it in less polite terms. This has led to openings for other vendors, such as Sony DADC; and the Readium Foundation is still working on the open-source EPUB Lightweight Content Protection scheme.
The hallway buzz at IDPF Digital Book was that DRM for these market segments is here to stay — except that in higher ed, it may become unnecessary in a longer timeframe, when educational materials are delivered dynamically and in a fashion more akin to streaming than to downloads of e-books.
I attended a panel on EDUPUB, a standards initiative aimed at exactly this future for educational publishing. The effort, led by Pearson Education (the largest of the educational publishers), the IMS Global Learning Consortium, and IDPF, is impressive: it’s based on combining existing open standards (such as IDPF’s EPUB 3) instead of inventing new ones. It’s meant to be inclusive and beneficial to all players in the higher ed value chain, including Pearson’s competitors.
However, EDUPUB is in danger of making the same mistake the IDPF did by ignoring DRM and other rights issues. When asked about DRM, Paul Belfanti, Pearson’s lead executive on EDUPUB, answered that EDUPUB is DRM-agnostic and would leave decisions on DRM to providers of content delivery platforms. This decision was problematic for trade publishers when IDPF made it for EPUB several years ago; it’s potentially even more problematic for higher ed, given that EDUPUB-based materials could certainly be delivered in e-textbook form.
EDUPUB could also help enable one of the Holy Grails of higher ed publishing: combining materials from multiple publishers into custom textbooks or dynamically delivered digital content. Unlike most trade books, textbooks often contain hundreds or thousands of content components, each of which may have different rights associated with it.
Clearing rights for higher ed content is a manual, labor-intensive job. In tomorrow’s world of dynamic digital educational content, it will be more important than ever to make sure that the content being delivered has the proper clearances, in real time. In reality, this doesn’t necessarily involve DRM; it’s mainly a question of machine-readable rights metadata.
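To make that concrete, here is a purely illustrative sketch (the field names and structure are invented for this example and do not correspond to any existing standard) of how machine-readable rights metadata attached to content components could support a real-time clearance check in a dynamic delivery platform:

```python
# Hypothetical rights metadata for two textbook components.
# All field names ("uses", "territories", etc.) are invented for illustration.
components = [
    {"id": "ch01-fig03", "owner": "PhotoAgencyX",
     "uses": {"print", "ebook"}, "territories": {"US", "EU"}},
    {"id": "ch02-text", "owner": "PearsonEd",
     "uses": {"print", "ebook", "streaming"}, "territories": {"WORLD"}},
]

def uncleared(components, use, territory):
    """Return IDs of components lacking clearance for the requested use/territory."""
    return [
        c["id"] for c in components
        if use not in c["uses"]
        or (territory not in c["territories"] and "WORLD" not in c["territories"])
    ]

# A delivery platform could run this check before assembling a custom compilation:
print(uncleared(components, "streaming", "US"))  # -> ['ch01-fig03']
```

The point is that once rights are expressed as data rather than buried in contracts, clearance becomes a query instead of a manual research project.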
Attempts to standardize this type of rights metadata date back at least to the mid-1990s (when I was involved in such an attempt); none have succeeded. This is a “last mile” issue that EDUPUB will have to address, sooner rather than later, for it to make good on its very promising start. DRM and rights are not popular topics for standards bodies to address, but it has become increasingly clear that they must address these issues to be successful.
This is the second of three installments on interesting developments from last week’s IDPF Digital Book conference in NYC.
Another interesting panel at the conference was on public libraries. I’ve written several times (here’s one example) about the difficulties that public libraries are having in licensing e-books from major trade publishers, given that publishers are not legally obligated to license their e-books for library lending on the same terms as for printed books — or at all. The major trade publishers have established different licensing models with various restrictions, such as limited durations (measured in years or number of loans), lack of access to frontlist (current) titles, and/or prices that range up to several times those charged to consumers.
The panel presented some research findings that included some hard data about how libraries drive book sales — data that libraries badly need in order to bolster their case that publishers should license material to them on reasonable terms.
As we learned from Rebecca Miller of Library Journal, public libraries in the US currently spend only 9% of their acquisition budgets on e-books — which amounts to about $100 million, or less than 3% of overall trade e-book revenue in the United States. Surely that percentage will increase, making e-book acquisition more and more important for the future of public libraries. And as e-books take up a larger portion of libraries’ acquisition budgets, the fact that libraries have little control over licensing terms will become a bigger and bigger problem for them.
The library community has issued a lot of rhetoric — including during that panel — about how important libraries are for book discovery. But publishers are ultimately only swayed by measurable revenue from sales of books that were driven by visits to or loans from libraries. They also want to know to what extent people don’t buy e-books because they can “borrow” them from libraries.
In that light, the library panel had one relevant statistic to offer, courtesy of a study done by my colleague Steve Paxhia for the Book Industry Study Group. The study found that 22% of library patrons ended up buying a book that they borrowed from the library at least once during the past year.
That’s quite a high number. Here’s how it works out to revenue for publishers: Given Pew Internet and American Life statistics about library usage (48% of the population visited libraries last year), and only counting people aged 18 years and up, it means that people bought about 25 million books last year after having borrowed them from libraries. Given that e-books made up 30% of book sales in unit volume last year and figuring an average retail price of $10, that’s $75 million in e-book sales directly attributable to library lending. The correct figure is probably higher, given that many library patrons discover books in ways other than borrowing them (e.g. browsing through them at the library) — though it may also be lower given that some people buy books in order to own physical objects (and thus the percentage of e-books purchased as a result of exposure in libraries may be lower than the corresponding percentage of print books).
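The arithmetic behind that estimate can be reconstructed in a few lines (a sketch; the adult-population figure is my assumption, while the other inputs are the Pew and BISG numbers cited above):

```python
# Rough reconstruction of the library-driven e-book revenue estimate.
# ASSUMPTION: US adult (18+) population of roughly 240 million; the other
# figures are the Pew and BISG statistics cited in the text.
us_adults = 240_000_000
library_visit_rate = 0.48     # Pew: share of the population that visited a library
buy_after_borrow_rate = 0.22  # BISG: patrons who bought a borrowed book at least once

patrons = us_adults * library_visit_rate        # ~115 million library patrons
books_bought = patrons * buy_after_borrow_rate  # ~25 million books

ebook_unit_share = 0.30   # e-books' share of unit sales last year
avg_ebook_price = 10.00   # assumed average retail price

ebook_revenue = books_bought * ebook_unit_share * avg_ebook_price
# ~$75 million in e-book sales attributable to library lending
```

Small changes in the assumed population or average price move the result, but it stays firmly in the tens of millions of dollars per year.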
So, in rough numbers, it’s safe to say that for the $100 million that libraries spend on e-books per year, they deliver a similar amount again in sales through discovery. It’s just too bad that the study did not also measure how many people refrained from buying e-books because they could get them from public libraries. This would be an uncomfortable number to measure, but it would help lead to the truth about how public libraries help publishers sell books.
Update: Steve Paxhia found that his 22% figure covered library loans leading to purchases over a period of six months, not a year. And the survey respondents may have purchased books after borrowing them more than once during that period. His data also shows that half of respondents indicated that they purchased other works from a given author after having borrowed one from the library. So, using the same rough formula as above, the amount of purchases attributable to library usage is more likely to be north of $150 million. Yet we still have no indication of the number of times someone did not purchase a book — particularly an e-book — because it was available through a public library system.
The International Digital Publishing Forum (IDPF), the standards body in charge of the EPUB standard for digital book publishing, puts on a conference-within-a-conference called IDPF Digital Book inside the gigantic Book Expo America in NYC each year. Various panels and hallway buzz at this year’s event, which took place last week, showed how book publishing is developing regarding issues we address here. I’ll cover these in three installments.
First and most remarkable is the emergence of Wattpad as the next step in the disruption of the value chain for authors’ content. The Toronto-based company’s CEO, Allen Lau, spoke on a panel at the conference that I moderated. Wattpad can be thought of as a successor to Scribd as “YouTube for writings.”
There are a few important differences between Scribd and Wattpad. First, whereas Scribd had become a giant, variegated catchall for technical white papers, vendor sales collateral, court decisions, academic papers, resumes, etc., etc., along with more recently acquired content from commercial publishers, Wattpad is focused tightly on text-based “stories.”
Second, Wattpad is optimized for reading and writing on mobile devices, whereas Scribd focuses on uploads of existing documents, many of which are in not-very-mobile-friendly PDF. Third and most importantly, Scribd allows contributors to sell their content, either piecemeal or as part of Scribd’s increasingly popular Netflix-like subscription plan; in contrast, Wattpad has no commerce component whatsoever.
In fact, the most remarkable thing about Wattpad is that it has raised over $60 million in venture funding, almost exactly $1 million for each of the company’s current employees. But like YouTube, Facebook, Tumblr, and various others in their early days, it has no apparent revenue model — other than a recent experiment with crowdfunding a la Kickstarter or Indiegogo. There’s no way to buy or sell content, and no advertising.
Instead, Wattpad attracts writers for the same reasons that YouTube attracts video creators (and Huffington Post attracts bloggers, etc.): to give them exposure, either for its own sake or so that they can make money some other way. For example, Lau touted the fact that one of its authors recently secured a movie deal for her serialized story.
Apparently Wattpad has become a vibrant home for serialized fiction, fan fiction, and stories featuring celebrities as characters (which may be a legal gray area in Canada). It has also become a haven for unauthorized uploads of copyrighted material, although it has taken some steps to combat this through a filtering scheme developed in cooperation with some of the major trade publishers. Wattpad has 25 million users and growing — fast.
This all makes me wonder: why isn’t everyone in the traditional publishing value chain — authors, publishers, and retailers — scared to death of Wattpad? It strikes me as a conduit for tens of millions of dollars in VC funding to create expectations among its youthful audience that content should be free and that authors need not be paid.
There’s a qualitative difference between Wattpad and other social networking services. Copyright infringement aside, TV networks and movie studios didn’t have much to fear from YouTube in its early days of cat videos. Facebook and Tumblr started out as venues for youthful self-expression, but little of that was threatening to professional content creators.
In contrast, Wattpad seems to have crossed a line. Much of the writing on Wattpad — apart from its length — directly substitutes for the material that trade publishers sell. Wattpad started out as a platform for writers to critique each other’s work — which sounds innocuous (and useful) enough — but it’s clearly moved on to become a place where the readers vastly outnumber the writers. (How else to explain the fact that, despite the myriad usage statistics on its website, Wattpad does not disclose the number of active authors?)
In other words, Wattpad has become a sort of Pied Piper leading young writers away from the idea or expectation of doing it professionally. Moreover, there are indications that Wattpad expects to make money from publishers looking to use it as a promotional platform for their own authors’ content, even though — unlike Scribd — it can’t be sold on the site.
By the time Wattpad burns through its massive war chest and really needs to convert its large and fast-growing audience into revenue from consumers, it may be too late.
I’m happy to announce the lineup of panels for Copyright and Technology London 2014, which will take place on Wednesday 1 October at the offices of ReedSmith in the City of London, produced by my good friends at Music Ally. This is a call for proposals to chair (that’s “moderate” for Americans) and speak on panels.
As with past conferences, we will have a morning full of plenary sessions, after which we will separate into Technology and Law & Policy tracks. Our morning session will feature a keynote address by Maria Martin-Prat, Head of Copyright Unit, Intellectual Property Directorate, Internal Market and Services, European Commission.
Here are the panels for the Technology track:
- New Challenges and Responses to Online Piracy
The proliferation of cyberlockers, cloud storage, and BitTorrent sites has led to new challenges for media companies looking to reduce the amount of infringing content stored online. Piracy monitoring services must keep up with new data storage and distribution schemes as well as new ways in which large-scale infringers can make content available. We’ll review some of the new challenges and responses to online piracy as well as the nature of demands on piracy monitoring services.
- Content Protection for 4K Video
The next frontier in digital video, known as 4K, offers four times the pixels of HD. Although movie studios are capturing content in 4K, the ecosystems for delivering it to consumers are still being defined. Along with superior viewing experiences, 4K gives Hollywood an opportunity to call for redesigned content protection schemes that remedy some of the deficiencies of existing ones. In this session, we’ll discuss Hollywood’s objectives for 4K content protection and hear about some proposed solutions and their tradeoffs.
- Rights Expression Languages: Automating Communication of Content Rights
The idea of machine-readable languages for expressing rights was introduced with some of the first DRM systems some time ago. But more recently, rights expression languages have found their ways into various interesting applications for conveying rights information among links in content value chains, to support commerce and licensing agreements efficiently and unambiguously. In this session, we’ll hear from organizations who are developing schemes to apply machine-readable rights expressions to digital images, news, and other forms of content.
And here are the panel sessions for the Law and Policy track:
- Should Internet Service Providers Be Copyright Cops?
Internet service providers (ISPs) are beginning to take responsibility for copyright infringement that occurs over their networks – whether voluntarily (as in the UK and USA) or by force of law (as in France and Austria). On this panel, we will discuss developments that have taken place both in courts and behind the scenes that chart the progress of the content industries in getting ISPs to take responsibility for the copyright behavior of their subscribers, and whether educational or punitive measures are necessary to reduce infringement online.
- Ripples Across the Pond: The Influence of American Copyright Reform
The United States has begun the long journey of reviewing and reforming its Copyright Act, which dates back to 1976. Although opinions on how or whether to revise the law differ greatly, most agree that the law is a poor match for today's rapid developments in digital content and services. Our panel of multi-national experts will speculate on the areas of the law that are most likely to change during the review process, and on how those changes are likely to affect developments in law and technology in the UK, Europe, and beyond.
- Copyright and Personal Digital Property
Recent legal activity throughout Europe has profound reverberations concerning citizens’ rights to the data they put up online – or that is put online on their behalf. European courts have decided that people have certain rights to have their personal information removed from online services. Meanwhile, the European Commission is reopening debate around a Notice-and-Action scheme for removal of copyrighted material online, similar to the U.S. Notice-and-Takedown regime, which some advocates claim impinges on free speech while others claim is ineffective at curbing infringement. Are we headed toward an online society that respects information as personal property, and if so, is this a good idea? We’ll discuss these issues.
At this point we are accepting proposals to chair or speak on any of these panels. The deadline is Friday, June 13. Please email your proposal(s) with the following information:
- Speaker’s name and full contact information
- Panel requested
- Chair or speaker request?
- Description of speaker’s experience or point of view on the panel subject
- Brief narrative bio of speaker
- Contact info of representative, if different from speaker*
As mentioned above, the agenda is subject to change. If you have another idea for a panel, we’d love to hear about that as well.
If you are interested in sponsorship opportunities, we have three levels, which are described in our brochure; please ask and we’ll send you one. The top-level Conference Sponsorship is a single opportunity that we offer on a first-come, first-served basis to work with the program chair (that’s me) to define a plenary session of interest to our audience. Thanks in advance for your interest!
*Please note that personal confirmation from speakers themselves is required before we will put them on the program.
The BBC has discovered documents that detail a so-called graduated response program for detecting illegal downloads by customers of major UK ISPs and sending alert messages to them. The program is called the Voluntary Copyright Alert Programme (Vcap). It was negotiated between the UK's four major ISPs (BT, Sky, Virgin Media, and TalkTalk) and trade associations for the music and film industries, and it is expected to launch sometime next year.
Vcap is a much watered-down version of measures defined in the Digital Economy Act of 2010, in that it calls only for repeated "educational" messages to be sent to ISP subscribers, with no punitive measures such as suspension or termination of their accounts.
In general, graduated response programs work like this: copyright owners engage network monitoring firms to monitor ISPs' networks for infringing behavior. Monitoring firms use a range of technologies, including fingerprinting, to automatically recognize content that users are downloading. If they find evidence of illegal behavior, they report it to a central authority, which passes the information, typically including the IP address of the user's device, to the relevant ISP. The ISP determines the identity of the targeted subscriber and takes some action, which depends on the details of the program.
In some cases (as in France and South Korea), the central authority is empowered to force the ISP to take punitive action; in other cases (as in the United States’ Copyright Alert System (CAS) as well as Vcap), ISPs take action voluntarily.
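The general flow described above can be sketched in code. This is a purely illustrative model; all names, thresholds, and actions here are hypothetical and do not correspond to the rules of any real program:

```python
# Hypothetical sketch of a generic graduated response flow:
# a monitoring report arrives at the ISP, which resolves the IP
# address to a subscriber and escalates based on the alert count.

def choose_action(alert_count: int, punitive: bool) -> str:
    """Map a subscriber's running alert count to a program action."""
    if alert_count < 4:
        # Early strikes: educational notices only.
        return "send educational notice"
    if punitive:
        # Programs with a punitive component (e.g. France) can sanction.
        return "apply sanction (e.g. throttling or suspension)"
    # Voluntary programs (e.g. Vcap, CAS) stop at warnings.
    return "send final warning"

def handle_report(ip_to_subscriber: dict, report: dict,
                  counts: dict, punitive: bool) -> str:
    """Central authority passes a monitoring report to the ISP,
    which identifies the subscriber and takes the program's action."""
    subscriber = ip_to_subscriber[report["ip"]]
    counts[subscriber] = counts.get(subscriber, 0) + 1
    return f"{subscriber}: {choose_action(counts[subscriber], punitive)}"
```

For example, under this sketch a subscriber's first few reports would yield educational notices, with escalation only after repeated alerts, and only if the program has a punitive component.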
Assuming that Vcap launches on schedule, we could soon have data points about the effectiveness of various types of programs for monitoring ISP subscribers’ illegal downloading behaviors. The most important question to answer is whether truly punitive measures really make a difference in deterring online copyright infringement, or whether purely “educational” measures are enough to do the job. Currently there are graduated response programs in South Korea, New Zealand, Taiwan, and France that have punitive components, as well as one in Ireland (with Eircom, the country’s largest ISP) that is considered non-punitive.
Is America’s CAS punitive or educational? That’s a good question. CAS has been called a “six strikes” system (as opposed to other countries’ “three strikes”), because it defines six levels of alerts that ISPs must generate, although ISPs are expected to take “mitigation measures” against their subscribers starting at the fifth “strike.” What are these mitigation measures? It’s largely unclear. The CAS’s rules are ambiguous and leave quite a bit of wiggle room for each participating ISP to define its own actions.
Instead, you have to look at the policies of each of the five ISPs to find details about any punitive measures they may take — information that is often ambiguous or nonexistent. For example:
- AT&T: its online documentation contains no specifics at all about mitigation measures.
- Cablevision (Optimum Online): its policy is ambiguous, stating that it “may temporarily suspend your Internet access for a set period of time, or until you contact Optimum.” Other language in Cablevision’s policy suggests that the temporary suspension period is 24 hours.
- Comcast (Xfinity): Comcast’s written policy is also ambiguous, saying only that it will continue to post alert messages until the subscriber “resolve[s] the matter” and that it will never terminate an account.
- Time Warner Cable: also ambiguous but suggesting nothing on the order of suspension or termination, or bandwidth throttling. It states that “The range of actions may include redirection to a landing page for a period or until you contact Time Warner Cable.”
- Verizon: Verizon’s policy is the only one with much specificity. On the fifth alert, Verizon throttles the user’s Internet speed to 256 kbps — equivalent to a bottom-of-the-line residential DSL connection in the US — for a period of two days after a 14-day advance warning. At the sixth alert, it throttles bandwidth for three days.
In other words, the so-called mitigation measures are not very punitive at all, not even at their worst — at least not compared to these penalties in other countries:
- France: ISP account suspension for up to one year and fines of up to €1,500 (US $2,000), although the fate of the HADOPI system in France is currently under legal review.
- New Zealand: account suspension of up to six months and fines of up to NZ $15,000 (US $13,000).
- South Korea: account suspension of up to six months.
- Taiwan: suspension or termination of accounts, although the fate of Taiwan’s graduated response program is also in doubt.
[Major hat tip to Thomas Dillon's graduatedresponse.org blog for much of this information.]
In contrast, Vcap will be restricted to sending out four alerts that must be “educational” and “promot[e] an increase in awareness” of copyright issues. Vcap is intended to run for three years, after which it will be re-evaluated — and if judged to be ineffective, possibly replaced with something that more closely resembles the original, stricter provisions in the Digital Economy Act. By 2018, the UK should also have plenty of data to draw on from other countries’ graduated response regimes about any relationship between punitive measures and reduced infringement.
Announcing Copyright and Technology London 2014
April 25, 2014 · Posted by Bill Rosenblatt in Europe, Events, UK, Uncategorized.
I’m pleased to announce that our next Copyright and Technology London conference will take place on Wednesday, October 1, at the offices of ReedSmith in the City of London. This is the same beautiful venue as our conference last October, with 360-degree floor-to-ceiling views of the city. Music Ally is producing the event. Now in its fifth year, the Copyright and Technology conference series brings together a diverse group of lawyers, technologists, policymakers, and business people for education and intelligent dialog about the nexus of copyright and technology. The London conference focuses on issues of particular interest in the UK and the rest of Europe but also offers international perspectives from the US, Australia, and beyond.
At this point, I am soliciting ideas for sessions. What are the hot issues for people concerned with copyright in the UK, Europe, and beyond? As in past Copyright and Technology conferences, the agenda will consist of plenary sessions with a keynote speaker in the morning, and afternoon breakouts into Technology and Law & Public Policy tracks. Please feel free to suggest session topics that will appeal to technologists, law and government professionals, or all of the above. Also feel free to put forward names of speakers for sessions.
We plan to have a working agenda in place by June, so please send me your session proposals by May 16.
As in the past, sponsorship opportunities are available. Copyright and Technology London 2014 is a great opportunity to connect with top-tier decision makers from law firms, media companies, technology vendors, service providers, and government. Please inquire if you are interested in learning more.
Rights Management (The Other Kind) Workshop, NYC, April 30
April 14, 2014 · Posted by Bill Rosenblatt in Events, Rights Licensing.
I will be co-teaching a workshop in rights management for DAM (digital asset management) at the Henry Stewart DAM conference in NYC on Wednesday, April 30. I’ll be partnering with Seth Earley, CEO of Earley & Associates. He’s a longtime colleague of mine as well as a highly regarded expert on metadata and content management.
This isn’t about DRM. This is about how media companies — and others who handle copyrighted material in their businesses — need to manage information about rights to content and the processes that revolve around rights, such as permissions, clearances, licensing, royalties, revenue streams, and so on. Some large media companies have built highly complex processes, systems, and organizations to handle this, while others are still using spreadsheets and paper documents.
Rights information management has come of age over the years as a function within media companies. It has taken a while, but it is being recognized as a revenue opportunity rather than merely an overhead task or a way of avoiding legal liability — not just for traditional media companies but also for ad agencies, consumer product companies, museums, performing arts venues, and many others.
The subject of our workshop is “Creating a Rights Management Roadmap for your Organization.” We’ll be discussing real-world examples, business cases, and strategic elements of rights information management, and we’ll be getting into various aspects of how rights information management relates to digital asset management. Attendees will be asked to bring information from their own situations, and we’ll be doing some exercises that will help attendees get a sense of what they need to do to implement workable practices for rights management. We’ll touch on business rules, systems, processes, metadata taxonomies, and more.
For those of you who are unfamiliar with it, Henry Stewart (a publishing organization based in the UK) has been producing the highly successful DAM conferences for many years. I’ve seen the event grow in attendance and importance to the DAM community over the years. Come join us!
And speaking of events: I’m pleased to announce October 1 as the date for our next Copyright and Technology London event. Details to follow.