R.I.P. TOC (May 9, 2013)
Posted by Bill Rosenblatt in Events, Publishing.
Here’s something that’s a little off topic for this blog but can’t be covered in 140 characters.
The O’Reilly Tools of Change for Publishing (TOC) conference has been abruptly cancelled after a seven-year run that culminated in its last show in NYC earlier this year. The announcement was made by Tim O’Reilly, CEO of the iconic tech publishing company O’Reilly Media, on his blog late last week – along with some hints that O’Reilly may be commercializing the editorial workflow tool (Atlas) that O’Reilly has been developing in-house and using with its authors.
This is a real loss to the publishing community. It echoes the trajectory of Seybold, which had previously been the go-to conference for innovation and technology in publishing: Seybold rose with the desktop publishing revolution of the early 1990s, got hit badly in the dot-bomb crash of the early 2000s, and never recovered. Both conferences, in their respective heydays, attracted over a thousand paid attendees and featured well-constructed, jam-packed, multi-track agendas and large exhibit halls as well as a real sense of community among attendees, vendors, and speakers.
As someone who (in a smaller way) has been involved in conference production for over a decade, my view is that TOC was one of the best-organized and best-produced conferences ever — thanks to co-chairs Joe Wikert and Kat Meyer and their team. Their use of the web to organize the agenda, speakers, and community was unparalleled. The agendas were canny, creative mixes of basic education for publishers and sessions on innovative technologies and business practices; accordingly, the speakers were mixes of old hands and new upstarts. Keynote speakers weren’t the usual publishing industry bigwigs but “outside the box” thinkers like, most recently, the media theorist/futurist Douglas Rushkoff. Hallway buzz was palpable.
While I don’t know the exact reasons why O’Reilly pulled the plug on TOC, I would guess that they were mainly financial. Kat Meyer told me that TOC was a poor stepchild among other much bigger events that O’Reilly produces, such as Strata (Big Data), OSCON (open source), Velocity (web development), and Web 2.0 Summit (now also discontinued). It’s not at all unusual in the tech world for conferences to appear and disappear as tech trends wax and wane; for example, Jupitermedia, with which I produced the Digital Rights Strategies conferences in the mid-2000s, created and dissolved conferences all the time.
O’Reilly has a product mix that’s not unlike other B2B publishers such as Reed Business Information, PennWell, and United Business Media (not to mention digital natives like TechCrunch): its publications, conferences, training, and other services are all interdependent and represent cross-selling opportunities. When viewed this way, TOC was an anomaly: a conference about publishing, put on by a company whose real expertise is information technology.
As an author of books published by O’Reilly, I can attest that it had developed its own publishing tools and that they were optimized for tech content and pushed the state of the art. (For example, O’Reilly was an early adopter of XML and a creator of the DocBook standard for technical document markup.) So it could be said that O’Reilly had developed expertise in publishing innovation, and thus it wasn’t a great stretch to think about sharing that expertise with the rest of the industry. And as a big believer in open source software, Tim O’Reilly surely preferred to monetize that expertise through ways other than commercializing its publishing tools. (Though is he changing his thinking now?)
Still, O’Reilly enjoyed few synergies between TOC and its other properties. Apart from B2B publishers, organizations that produce conferences usually do so as ancillaries to other lines of business — such as industry trade associations (NAB, CES, ALA, AAP), market researchers (Outsell, Gartner), or vendors (Apple, Oracle, SAP). TOC was, from that perspective, a standalone property. It’s difficult to operate a standalone event at a profit, particularly when you spend as much on infrastructure and community as O’Reilly did.
And the publishing industry is not exactly known for its lavish budgets. One commenter on a publishing blog demurred at having to pay US $1000 to attend TOC for two days; in contrast, conferences like Velocity and Strata charge as much as double that amount. As Tim O’Reilly himself commented at his last TOC keynote speech, “Why are we here? It’s not to make our fortune.”
There are other conferences about publishing, put on by companies that publish about publishing — such as Digital Book World (F+W Media) and Publishing Business Conference (NAPCO). Those organizations are probably celebrating TOC’s hasty demise, but it remains to be seen whether they will fill the void it has created.
Capitol Records Prevails in ReDigi Case (April 1, 2013)
Posted by Bill Rosenblatt in Law, Music, United States.
A federal court in New York City handed down summary judgment against ReDigi over the weekend in its legal fight with Capitol Records. In his ruling, Judge Richard Sullivan found the digital resale service liable for primary and secondary copyright infringement. He rejected ReDigi’s arguments that its service, which enables users to resell music tracks purchased on iTunes, is legal under the doctrines of fair use and first sale.
The decision is a surprising blow to the Boston-based startup, especially given that Judge Sullivan refused Capitol’s request for a preliminary injunction early in the case.
The central holding in Judge Sullivan’s opinion was that in order to resell a digital file, a user has to make another copy of it — even if the original copy disappears, and even if two copies never coexist simultaneously. He based this holding on a literal interpretation of the phrase “copies are material objects” from Section 101 of the Copyright Act.
Once Judge Sullivan established that the ReDigi system causes another copy to be made as part of the resale process, the rest of his opinion flowed from there:
- The user didn’t have a right to make that new copy, therefore it’s infringement — specifically of Capitol’s reproduction and distribution rights under copyright law.
- ReDigi knowingly aided and abetted, and benefited from, users’ acts of infringement, therefore it’s secondary as well as primary infringement.
- The user resold the new copy, not the original one, therefore it’s not protected under first sale (which says that a consumer can do whatever she wants with a copy of a copyrighted work that she lawfully obtains).
- The “new” copies made in the ReDigi process don’t qualify as fair use: they are identical to the originals and thus aren’t “transformative”; they are made for commercial purposes; they undercut the originals and thus diminish the market for them.
In sum, as Judge Sullivan put it bluntly, “ReDigi, by virtue of its design, is incapable of compliance with the law.” At the same time, he was quick to point out that his was a narrow ruling based on a literal interpretation of the law, saying that “this is a court of law and not a congressional subcommittee or technology blog[.]” He investigated Congress’s intent regarding digital first sale and found that it hadn’t advanced since the U.S. Copyright Office — the copyright advisors to Congress — had counseled against allowing digital resale back in 2001.
I’ve always assumed that any district court decision in this case would be minimally relevant, as it would be appealed. ReDigi has already stated that it will appeal. And the opinion does contain patches of daylight through which an appeal could possibly be launched.
Most important is the opinion’s focus on the making of a “new copy” during the resale process. It’s hard to see how this gibes with the many “new copies” of digital files made during normal content distribution processes, including streaming as well as downloads.
In other words, if ReDigi is making “new copies” without authorization, then so are countless other technologies. Some such copies might be covered under fair use or the DMCA safe harbors. Other “new copies” are considered “incidental” (not requiring permission from the copyright holder); the judge didn’t explain why copies made by the ReDigi system don’t qualify as incidental. ReDigi did make a similar argument; the judge didn’t buy it because it didn’t bear directly on the issues in this case, but a higher court, looking at the broader picture of digital first sale, might see things differently.
Judge Sullivan’s reliance on the Copyright Office’s 2001 report on digital first sale is also somewhat problematic. The Copyright Office believed that a “forward-and-delete” mechanism — not unlike what ReDigi has built — could actually support digital first sale. The Copyright Office simply concluded that such a mechanism would not be practical to implement. This does not comport with Judge Sullivan’s assertion that “forward-and-delete” requires a new copy to be made and thus cannot qualify as first sale in the first place.
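To make the mechanics concrete, here is a minimal sketch of a “forward-and-delete” transfer of the kind the Copyright Office described and ReDigi roughly implements. The function name and file layout are hypothetical; the point is simply that the operation necessarily reproduces the bytes at the destination — a new copy comes into existence — before the original is destroyed, which is the crux of the court’s holding.

```python
import os
import shutil

def forward_and_delete(src_path: str, dest_path: str) -> None:
    """Transfer a file so that, at the end, only the recipient holds it.

    Note the order of operations: the bytes are *reproduced* at the
    destination (a new copy is made) before the source copy is deleted.
    Even a streaming transfer in which the two full copies never coexist
    still brings a new material copy into existence.
    """
    shutil.copyfile(src_path, dest_path)  # reproduction: a new copy is made
    os.remove(src_path)                   # only then is the original destroyed

# Toy usage: "resell" a track from one user's directory to another's.
os.makedirs("seller", exist_ok=True)
os.makedirs("buyer", exist_ok=True)
with open("seller/track.mp3", "wb") as f:
    f.write(b"fake audio bytes")

forward_and_delete("seller/track.mp3", "buyer/track.mp3")

assert not os.path.exists("seller/track.mp3")  # seller no longer has it
assert os.path.exists("buyer/track.mp3")       # buyer holds the (new) copy
```

The Copyright Office’s 2001 objection was to the practicality of enforcing the deletion step, not to the copying step itself — which is where its analysis and Judge Sullivan’s diverge.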
Another notable feature of Judge Sullivan’s opinion is his assertion that “a ReDigi user owns the phonorecord that was created when she purchased and downloaded a song from iTunes to her hard disk.” The assertion that a user “owns” a digital download is itself controversial and not based on legal precedent. Judge Sullivan found no legal precedent for digital first sale, but somehow he did find a basis for asserting that digital downloads are “owned.”
Retailers of digital goods believe that they don’t actually sell them in the way that books, CDs, or DVDs are sold; instead they license them to users under terms that may resemble sale. The question of sale vs. licensing of copyrighted digital content is a gray area in the law, and it wasn’t up for examination here: Apple, for example, wasn’t a party to the case and remained silent throughout. But if Apple (or another digital content retailer) ever objects to its content being “resold” through a third-party service, it will have to deal with Judge Sullivan’s language; and once again, it may be harder for a higher court to ignore this aspect of digital resale when determining its legality.
It remains to be seen whether the above issues can be forged into a legal theory that can convince the Second Circuit appeals court to reverse Judge Sullivan’s ruling. Yet even if ReDigi throws in the towel and ceases operations, its very existence has called a lot of attention to the idea of digital resale. The mechanisms are in place today: beyond ReDigi, there’s at least one more startup (the NYC-based ReKiosk); and Amazon was recently granted a patent for resale of digital goods. Indie music labels and a few e-book publishers will most likely be the first to experiment with it.
This court ruling won’t eliminate digital resale; if it stands, it will simply restrict resale to content that copyright owners have given permission to resell — permission that will probably include a say over pricing, timing, and other factors. This will complicate the lives of resellers, but it will ensure that digital resale doesn’t harm copyright holders. In other words, ReDigi has let the digital resale genie out of the bottle. It’s bound to happen, one way or another.
Supreme Court Affirms First Sale in Kirtsaeng Case (March 20, 2013)
Posted by Bill Rosenblatt in Law, United States.
The copyleft was jubilant, and Big Media disgruntled, at the Supreme Court’s opinion on Tuesday in Kirtsaeng v. Wiley, a case about the first sale doctrine in US copyright law. First sale, known as “exhaustion” outside of the US, states that the publisher of a copyrighted work has no say or control over its distribution after the first sale. The law says that if you have obtained a copy of a work legally, you can sell it, lend it, give it away, use it to line a birdcage, or anything else, without consent of the original publisher.
The Kirtsaeng case existed firmly in the realm of physical products. It concerned a tension in the law between first sale (section 109) and another provision (section 602) that makes it illegal to import copyrighted works from outside the US into the country without permission.
Supap Kirtsaeng, a Thai citizen living in the US, got his friends and family to buy textbooks published in his native land at prices that were much lower than those charged here. They sent him the books; he resold them here and pocketed the difference. The books were published by a subsidiary of John Wiley & Sons and were virtually identical to titles published by Wiley in the US. (Disclosure: Wiley is the publisher of one of my books.)
Wiley sued, claiming that Kirtsaeng was infringing under section 602. Kirtsaeng claimed first sale rights to resell the books. Kirtsaeng lost in the lower courts, but the Supreme Court reversed. Now the case goes back to the Second Circuit in New York for a re-hearing consistent with Tuesday’s decision.
Many people are asking me what impact this decision may have on digital first sale, and more specifically, the fortunes of the digital resale startup ReDigi, which is fighting a lawsuit brought by Capitol Records. While I’m not in the business of reading Supreme Court tea leaves, I’d say there are two ways to look at it.
The narrower view is: not very much. Justice Stephen Breyer’s opinion was an exemplar of judicial restraint. It spent a lot of time analyzing key words in the first sale law (specifically that a copy had to be “lawfully made under this title” to qualify for first sale) and the factors specific to its geographic interpretation vis-a-vis section 602. It also focused on divining Congress’s intent in making the law in the first place and emphasized the law’s “impeccable common law pedigree” dating back over 100 years. It’s no wonder that the 6-3 majority crossed “party lines,” with conservative Justices Roberts, Thomas, and Alito joining liberals Breyer, Kagan, and Sotomayor.
The opinion also concerned itself with the decision’s impact on libraries and museums, saying that if the case went Wiley’s way, it would place undue burdens on them to get permission before they could lend or exhibit foreign-made works.
What Breyer did not do was spend much time discussing the business implications of the case. He said little about both the impact on publishers and Kirtsaeng’s right to carry on his resale business. Justice Ruth Bader Ginsburg’s dissenting opinion focused much more on those aspects.
That leads me to believe that if and when the Supreme Court revisits first sale, it will be more receptive to arguments from the library and museum communities than those about industry factions, which often suffuse high-profile copyright litigation. And libraries especially face difficulties without clear digital first sale rights. The Owners Rights Initiative, a lobbying organization set up specifically to deal with this case, turns out to have done the right thing by enlisting library organizations to be part of its “public face” rather than the likes of CCIA and eBay. (The list of organizations that submitted or signed on to amicus briefs in this case is a mile long.)
The other possible view of the Kirtsaeng decision is the bigger-picture one: that the Supreme Court is taking a broad view of first sale by refusing to weigh it down with exceptions like those in section 602, and therefore the Court may take the same broad view when it’s asked to opine on digital first sale — that is, when it’s asked to interpret another group of words in the copyright act: “‘Copies’ are material objects…”
(Props to Andrew Bridges of Fenwick & West for his insights.)
The DMCA and Presidential Politics, Part 2 (March 4, 2013)
Posted by Bill Rosenblatt in Law, United States.
A minor war of words broke out yesterday in the U.S. government over consumers’ rights to “jailbreak” (unlock) their mobile phones. The White House and the FCC both made public statements in which they politely condemned the U.S. Copyright Office’s decision not to renew the DMCA 1201 exception for jailbreaking and stood in favor of unlocking mobile phones for the purpose of switching wireless carriers.
This is what happens when a government process that’s supposed to be confined to relatively arcane business interests spills over into the public sphere. The question is, why are we even talking about this at all?
A little background for those who need it: the Digital Millennium Copyright Act of 1998 has two parts. The part that has gotten most of the attention over the past few years is the second part (Title II, section 512), which includes the “notice and takedown” regime that online services have to follow to avoid copyright liability for files that users upload. This part of the DMCA has been the subject of several recent high-profile litigations, such as Viacom v. Google, EMI v. MP3Tunes, and UMG v. Veoh.
The first part of the DMCA, section 1201, makes it illegal to crack DRMs. This law was originally used to go after DRM hackers such as those who distributed DVD ripping software, in cases such as Universal v. Reimerdes. But that was many years ago.
Since then, we’ve only heard about how this law has been stretched out of shape by the likes of garage door opener and laser printer toner cartridge makers. And soon after Apple ushered in the smartphone revolution with the introduction of the iPhone in 2007, the major wireless carriers appropriated the law to cover mobile phone jailbreaking. Let’s be clear: these are all abuses of a law that’s dubious to begin with.
There is a provision in DMCA 1201 that requires the U.S. Copyright Office — the agency that advises Congress on the copyright law — to conduct a “rulemaking” every three years to consider whether any exemptions to the anti-hacking law should be made. Anyone may submit proposals for such exemptions, though the requirements are fairly rigid. The Office evaluates the proposed exemptions and may approve some of them, but the approved exemptions only last three years, until the next rulemaking. They must be proposed and approved again in order to last longer.
In 2009, the Copyright Office approved an exemption for mobile phone jailbreaking. In the subsequent 2012 rulemaking, the Office chose not to renew it; instead they listened to wireless industry lobbyists who persuaded them that consumer choice and competition were doing fine, and therefore that jailbreaking wasn’t necessary. The 2009 exemption expired at the end of January 2013.
An entrepreneur named Sina Khanifar decided to do something about this: he submitted a petition to the White House, through its We the People online petition system, which has a policy of responding to petitions that get over 100,000 signatures within 30 days. The petition did cross that threshold, and the White House did respond.
It would be nice to do something to curtail these abuses of the DMCA. Right now, the DMCA is only “useful” in that it keeps actual DRM hacks in the shadows and prevents things like a “Convert from Nook” option in your Kindle (or vice versa).
But does anyone seriously expect any results from the White House’s populist grandstanding on this issue? The executive branch has no power to implement changes in the DMCA, and it’s unlikely that the FCC (also part of the executive branch) has any relevant authority either. Only Congress can change the law, and the Copyright Office is Congress’s legal advisor. The Office’s own statement on the matter (released via email, not yet available on the Office’s website) basically said “The White House is right, this is a bigger public policy matter than the arcane issues we usually deal with in these rulemakings” — in other words, that they’ve simply done their job according to the law.
The connection between mobile phone jailbreaking and the original intent of DMCA 1201 is tenuous at best. Maybe Khanifar’s petition will spur Congress to act, but I’m not holding my breath.
Copyright Alert System Launches in U.S. (February 25, 2013)
Posted by Bill Rosenblatt in Fingerprinting, Law, Music, Video.
With today’s launch of the Copyright Alert System (CAS) by the Center for Copyright Information, the United States joins the list of countries that have adopted a so-called graduated response system for educating Internet users about online copyright infringement and taking steps to punish repeat offenders. The CAS is finally launching after a few months’ delay, part of which was supposedly due to the effects of Sandy, the mega-storm that hit the northeast U.S. late last year. Other graduated response countries include France, New Zealand, and South Korea; the United Kingdom is currently struggling with its own implementation.
The CAS is a partnership between music and video content owners on the one hand and major ISPs on the other. The content owner representatives include not just the majors (RIAA and MPAA) but also the Independent Film and Television Alliance (IFTA) and American Association of Independent Music (A2IM). On the ISP side, membership includes the five largest providers: AT&T, Verizon, Time Warner Cable, Comcast, and Cablevision. Book and game publishers are not involved at this point.
The CAS is run by Jill Lesser, a tech policy veteran with deep experience on both the content and ISP sides. It has an advisory board whose principal function seems to be to curb abuses: it includes advocates for looser copyright laws (Gigi Sohn of Public Knowledge) and user privacy (Jules Polonetsky of the Future of Privacy Forum).
The CAS works similarly to other graduated response regimes: copyright owners employ infringement monitoring services, which can identify copyrighted works as users send them around the Internet using fingerprinting and other content recognition technologies. The monitoring services send notices to ISPs, which issue warning messages to users. The warnings get stronger with repeat infringements.
ISPs can opt to punish repeat alleged offenders by such means as throttling bandwidth and making users watch videos about copyright. (ISPs already have policies for terminating repeat infringers’ accounts, which they must have in order to maintain their eligibility for the DMCA safe harbor.)
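The escalation logic described above can be sketched in a few lines. The tier names and thresholds below are assumptions loosely modeled on the CAS’s publicly described “six strikes” structure (the exact responses varied by ISP); `handle_notice` is a hypothetical function standing in for whatever each ISP actually runs.

```python
from collections import defaultdict

# Hypothetical escalation tiers: (minimum alert count, ISP response).
# Real CAS responses varied by ISP; these are illustrative assumptions.
TIERS = [
    (1, "educational alert: email explaining copyright and legal sources"),
    (3, "acknowledgement alert: user must click through a warning page"),
    (5, "mitigation: e.g., throttled bandwidth or a mandatory video"),
]

alert_counts = defaultdict(int)

def handle_notice(subscriber_id: str) -> str:
    """Record one infringement notice and return the ISP's response."""
    alert_counts[subscriber_id] += 1
    n = alert_counts[subscriber_id]
    response = TIERS[0][1]
    for threshold, action in TIERS:
        if n >= threshold:
            response = action  # highest tier whose threshold is met
    return f"alert #{n}: {response}"

print(handle_notice("user-42"))  # educational tier
print(handle_notice("user-42"))
print(handle_notice("user-42"))  # crosses into the acknowledgement tier
```

Note what the sketch deliberately omits, because the CAS omits it: account termination and any hand-off of subscriber identity to copyright owners.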
Where the CAS differs from other graduated response systems is that it is not tied to law enforcement. The arrangement between content owners and ISPs is voluntary. ISPs will not terminate or suspend users’ Internet accounts, nor will they pass information about infringements on to copyright owners. Another difference is that the CAS is not being funded through taxes or levies on Internet service (although funding sources are confidential).
In other words, the CAS is a more purely educational approach than France’s HADOPI or other systems. Analysis of the CAS’s results will therefore be more useful in determining how successful education by itself can be in getting people to respect copyright. The hope is that education will do more than draconian statutory damages or blunt-instrument legislation.
Given how little effect those approaches have had, it may not be difficult to declare the Copyright Alert System a relative success in the years to come. As it is now, it seems like quite a reasonable system: it raises awareness about the importance of copyright by using advanced Internet technologies instead of relegating enforcement to outmoded nontechnical legal means; it is permeated with references to legal content sources; and it doesn’t cost users a thing.
Awareness Grows over Digital First Sale (February 19, 2013)
Posted by Bill Rosenblatt in Business models, Law, Publishing.
What would happen if the law were to definitively decide that users should get the same rights of ownership over digital downloads as they do with physical media products such as books, CDs, and DVDs? A crescendo of events over the last few weeks indicates growing awareness of this fascinating topic.
Let’s start with late last month, when Amazon was granted a U.S. patent on a scheme for reselling digital objects. The patent describes a scheme for transferring “ownership” of digital content objects from one user to another, possibly with limits on the number of transfers, and handling the e-commerce behind each such transaction.
Digital resale is possible now. For example, the startup ReDigi is doing it for music downloads from iTunes and Amazon. The question is not whether it’s technically feasible to support digital resale with reasonable safeguards against abuse of the process (i.e., “reselling” your content while keeping your own copies). The question is whether doing so requires a license from content owners, or whether users have a legal right to resell their content without permission.
In the former case, any service (like ReDigi) that facilitates resale would have to pay royalties to copyright owners on every transaction. In the latter case, it need not pay anything. Since resold digital content is identical to “new” content, this would have highly disruptive implications for publishers and others in the value chain. The law is not clear on this point, but it may become clearer within the next couple of years through litigation, such as Capitol Records’ lawsuit against ReDigi, and the efforts of a lobbying group called the Owners’ Rights Initiative.
Amazon’s patent does not take a position on whether digital resale requires the copyright owner’s permission; it simply discloses a mechanism for doing digital resale. And of course just because Amazon has a patent does not mean it intends to implement such a system; Amazon was granted about 300 patents in 2012. Still, the issuance of the patent prompted Wired to run an article about digital first sale and its implications two weeks ago.
That brings us to last week, when the O’Reilly Tools of Change for Publishing (TOC) took place in NYC. TOC is the preeminent conference on technology and innovation in publishing. Just before the conference, the TOC folks held an invitation-only Executive Roundtable featuring John Ossenmacher, CEO of ReDigi. O’Reilly Media, a publisher of books and other information for IT professionals and a bellwether of technological innovation in publishing, confirmed that it is in talks with ReDigi to take the company into resale of e-books. The room was filled with traditional publishing executives who had a more skeptical view, though Ossenmacher survived the ordeal well.
The TOC organizers had asked me to give a talk on digital first sale at the conference; I did so later in the week (slides available on SlideShare). The room was packed with a broad mixture of editorial, business, and technology folks from the publishing industry. Publishers Weekly, the leading trade publication of the book publishing industry, decided that the topic was important enough to feature in an article summarizing my presentation. Most of the attendees were surprised at the highly disruptive implications for publishers, retailers, and libraries as well as users, though a few expressed the idea that digital resale is yet another inevitable type of change to legacy business models in the content industries.
Mega’s Aggressive Takedown Policy? (February 1, 2013)
Posted by Bill Rosenblatt in Law, New Zealand, Services.
Here is an interesting addendum to last week’s story about Mega, the new file storage service from Kim Dotcom of MegaUpload fame.
Recall that Mega encrypts files that users store on its servers, with keys that only the users know… unless they publish URLs that contain the keys, like this one. This means that Mega can’t know whether or not files on its servers are infringing, unless a user publishes a URL like that.
As TorrentFreak has found, Mega is crawling the web in search of public URLs that contain Mega encryption keys. When it finds one, it proactively removes the content from its server — at least if the file in question contains audio or video content — and it sends the user who uploaded the file a message saying that it has taken down the file due to receipt of a takedown notice from the copyright owner.
It’s impossible to say for sure whether this is a blanket policy, and of course Mega’s web-crawling technology probably doesn’t work perfectly. But if this is Mega’s policy, then Mega is being at least as aggressive as RapidShare in going after public links to infringing content. RapidShare finds public links to files on its service and, apparently, examines them with content identification technology to see if they are infringing. According to TorrentFreak’s findings, Mega does no analysis; it uses no fingerprinting or other content identification technology; it just takes the content down. It has taken down unambiguously legal content. (My file wasn’t taken down, because it’s just a PDF of a presentation that I created, and/or because it’s only on this blog and not on a known P2P index site.)
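A crawler of the kind TorrentFreak describes would only need to pattern-match for Mega’s public link format, which (as of 2013) embeds both the file handle and the decryption key in the URL fragment. The sketch below is an assumption about how such a bot might work, not Mega’s actual code; the handle/key lengths in the regex are illustrative.

```python
import re

# Old-style Mega public links carry the file handle and the decryption key
# in the URL fragment: https://mega.co.nz/#!<handle>!<key>
# (8-character handle; key length bounds here are illustrative assumptions).
MEGA_LINK = re.compile(r"https?://mega\.co\.nz/#!([\w-]{8})!([\w-]{16,64})")

def find_mega_keys(page_html: str):
    """Return (file_handle, decryption_key) pairs found in a crawled page."""
    return MEGA_LINK.findall(page_html)

# A crawled page containing one public Mega link (hypothetical handle/key).
page = '... <a href="https://mega.co.nz/#!abc123XY!' + "k" * 43 + '">dl</a> ...'
for handle, key in find_mega_keys(page):
    print(handle, key[:8] + "...")  # a takedown bot would now act on this file
```

Note that per TorrentFreak’s findings, this is apparently where Mega’s process ends: it takes the file down on the strength of the public link alone, with no content identification step.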
Mega could be doing this in order to conform to the terms of Kim Dotcom’s arrest. Whatever the reason, it helps make sure that pirated material on Mega can only be shared by sending encryption keys through means such as email… or perhaps URLs that are publicly available but are themselves encrypted. And if you truly want to share audio or video material to which you have the rights, then Mega probably isn’t the best place for you anyway.
A commenter on TechDirt put it best: “So we’re still allowed to share the stuff, but just not on linking sites? Seems fair enough to me. Probably for the best too, since some dumbasses clearly don’t know how to hide their copyrighted material properly.”
Yes, Piracy Does Cause Economic Harm (January 27, 2013)
Posted by Bill Rosenblatt in Economics, Uncategorized.
Back in 2010, the Government Accountability Office (GAO) published a meta-study of the economic effects of intellectual property infringement (including counterfeit goods as well as copyrighted works). The GAO concluded that IP infringement is a problem for the economy, but it’s not possible to quantify the extent of the damage — and may never be. It looked at many existing studies and found bias or methodological problems in every one.
More recently, Michael Smith and Rahul Telang, two professors at Carnegie Mellon University, published another meta-study that serves as a sort of rejoinder to the GAO study. This was the subject of Prof. Smith’s talk at the recent Digital Book World (DBW) conference in NYC.
“Assessing the Academic Literature Regarding the Impact of Media Piracy on Sales” summarizes what has been a growing body of studies on the economic effects of so-called media piracy. The authors’ conclusion is that piracy does have a negative effect on revenue — if for no other reason than that the vast majority of studies come to that conclusion.
Smith’s presentation at DBW listed no fewer than 29 studies on media piracy that take actual data into account (as opposed to merely theoretical papers such as this one). Of those, 25 found economic harm from piracy, while 4 didn’t. When the list is restricted to papers published in peer-reviewed academic journals, the ratio is similar: 12 found harm; 2 didn’t. Interestingly, almost half of the cited studies were published after the GAO’s 2010 report.
(When Smith and Telang’s paper was originally published last year, many discredited it instantly because the MPAA helped fund the research. Yet I take the researchers at their word when they say that the funding source had no effect on the outcomes — an assertion bolstered by the paper’s exclusion of the MPAA’s own study from 2006.)
The paper explains why some studies’ methodologies are better than others and discusses shortcomings in some of the studies, such as the Oberholzer-Gee & Strumpf paper from 2007 that showed no harm to sales of music from piracy and therefore has been widely cited among the copyleft.
It’s easy to poke holes in the methodologies of studies that have to rely on real-world data over which the researchers have little or no control. And as someone who wouldn’t know an “endogenous dependent variable” if one bit me in the face, I find it hard to look at criticisms of these studies’ methodologies and determine which ones to believe. Yet it’s obvious that any study on piracy must rely on real-world data in order to have any credibility at all.
Decisions about business and policy have to be made based on the best information we have available. After a certain point, simply poking holes in studies — particularly those whose results you don’t happen to like — isn’t sufficient.
It may indeed, as the GAO suggested, be impossible to measure the economic effects of piracy with much accuracy. But if dozens of researchers have tried, all using different methodologies, then their conclusions in the aggregate are the best we’re going to do. Put another way, it will henceforth be very difficult to dislodge Smith and Telang’s conclusion that piracy does economic harm to content creators.
Kim Dotcom Embraces DRM January 22, 2013. Posted by Bill Rosenblatt in DRM, New Zealand, Services.
Kim Dotcom launched a new cloud file storage service, the New Zealand-based Mega, last weekend on the one-year anniversary of the shutdown of his previous site, the notorious MegaUpload. (The massive initial interest in the site* prevented me from trying out the new service until today.)
Mega encrypts users’ files with what looks like a per-file content key (AES-128), which is in turn protected by 2048-bit RSA asymmetric-key encryption. It derives the RSA keys from users’ passwords and other pseudo-random data. Downloading a file from a Mega account requires knowing either the password that was used to generate the RSA key (i.e., logging in to the account used to upload the file) or the key itself.
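Mega hasn’t published its implementation in detail, so here is only a toy sketch of the *layering* described above: a random per-file content key encrypts the file, and a key derived from the user’s password wraps the content key. To keep it runnable with the standard library alone, hash-derived XOR keystreams stand in for both AES and the RSA key-wrap; the structure, not the ciphers, is the point.

```python
import os, hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream (stand-in for a real cipher such as AES):
    # hash key || counter blocks until n bytes are produced.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# 1. A random per-file content key (Mega appears to use AES-128).
content_key = os.urandom(16)

# 2. Encrypt the file under the content key.
plaintext = b"the uploaded file"
ciphertext = xor(plaintext, keystream(content_key, len(plaintext)))

# 3. Derive a master key from the user's password and wrap the content
#    key with it. (Mega reportedly uses 2048-bit RSA at this layer; a
#    symmetric wrap substitutes here to keep the sketch stdlib-only.)
master_key = hashlib.pbkdf2_hmac("sha256", b"user-password", b"salt",
                                 100_000, dklen=16)
wrapped_key = xor(content_key, keystream(master_key, 16))

# Download path: knowing the password recovers the content key,
# which in turn decrypts the file.
recovered = xor(wrapped_key, keystream(master_key, 16))
assert xor(ciphertext, keystream(recovered, len(plaintext))) == plaintext
```

Note that the server only ever needs to store `ciphertext` and `wrapped_key`; without the password (or the content key itself), neither is intelligible.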
Hmm. Content encrypted with symmetric keys that in turn are protected by asymmetric keys… sounds quite a bit like DRM, doesn’t it?
Well, not quite. While DRM systems assume that file owners won’t want to publish the keys used to encrypt their files, Mega not only allows but actively helps you publish your files’ keys. Mega lets you retrieve the key for a given file in the form of a URL; just right-click on the file you want and select “Get link.” (Here’s a sample.) You can put the resulting URL into a blog post, tweet, email message, or website featuring banner ads for porn and mail-order brides.
(And of course, unlike DRM systems, once you obtain a key and download a file, it’s yours in unencrypted form to do with as you please. The encryption isn’t integrated into a secure player app.)
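The shared link itself is what carries the key. Judging from sample links, Mega packs both the file handle and the key into the URL fragment (the part after `#`), which a browser never transmits to the server — a design that lets Mega claim it can’t see which keys are being shared. The handle and key below are invented placeholders following that observed “#!handle!key” pattern:

```python
from urllib.parse import urlsplit

# A Mega-style link (hypothetical handle and key material, following
# the "#!<file-handle>!<base64 key>" pattern seen in shared Mega URLs).
link = "https://mega.co.nz/#!a1b2c3d4!Zm9vYmFyLWtleS1tYXRlcmlhbA"

fragment = urlsplit(link).fragment        # "!a1b2c3d4!Zm9v..."
_, handle, key_b64 = fragment.split("!")  # leading "!" yields an empty first field
print(handle, key_b64)
```

Anyone holding the link can fetch the ciphertext by handle and decrypt it locally with the embedded key — no account or password required.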
Yet in practical terms, Mega is really no different from file-storage services that let users publish URLs to files they store — examples of which include RapidShare, 4Shared, and any of dozens of file transfer services (YouSendIt, WhaleMail, DropSend, Pando, SendUIt, etc.).
Mega touts its use of encryption as a privacy benefit. What it really offers is privacy from the kinds of piracy monitoring services that media companies use to generate takedown notices — an application of encryption that hardcore pirates have used and that Kim Dotcom purports to “take … out to the mainstream.” It will be impossible to use content identification technologies, such as fingerprinting, to detect the presence of copyrighted materials on Mega’s servers. RapidShare, for example, analyzes third-party links to files on its site for potential infringements; Mega can’t do any such thing, by design.
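Why encryption defeats server-side content identification is easy to demonstrate: two users uploading the identical file under different keys produce ciphertexts that share nothing a hash- or fingerprint-based scanner could match. This toy sketch uses a hash-derived XOR keystream as a stand-in cipher (not Mega’s actual scheme) purely to illustrate the effect:

```python
import os, hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Stand-in stream cipher: XOR with a hash-derived keystream.
    stream, ctr = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, stream))

movie = b"the same copyrighted file" * 100

# Two users upload the same file under different random keys.
c1 = toy_encrypt(movie, os.urandom(16))
c2 = toy_encrypt(movie, os.urandom(16))

# Identical plaintexts, but the stored blobs look unrelated to any
# hash- or fingerprint-based piracy scanner on the server side.
assert c1 != c2
assert hashlib.sha256(c1).digest() != hashlib.sha256(c2).digest()
```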
Mega’s use of encryption also plays into the question of whether it could ever be held secondarily liable for its users’ infringements under laws such as DMCA 512 in the United States. The Beltway tech policy writer Paul Sweeting wrote an astute analysis of Mega’s chances against the DMCA over the weekend.
Is Kim Dotcom simply thumbing his nose at Big Media again? Or is he seriously trying to make Mega a competitor to legitimate, prosaic file storage services such as Dropbox? The track records of services known for piracy trying to go “legit” are not encouraging — just ask Bram Cohen (BitTorrent Entertainment Network) or Global Gaming Factory (purchaser of The Pirate Bay’s assets). Still, this is one to watch as the year unfolds.
*Or, just possibly, server meltdowns faked to generate mountains of credulous hype?
Withholding Ads from Illegal Download Sites January 7, 2013. Posted by Bill Rosenblatt in Standards.
Over the past few months, the conversation about online infringement has shifted from topics like graduated response and DMCA-related litigation to cutting off ad revenue for sites that provide illegal downloads. This issue gained some importance during the run-up to SOPA and PIPA in 2011. David Lowery of The Trichordist gave it new visibility last year by initiating a stream of screenshots showing major consumer brands that advertise on sites like FilesTube, IsoHunt, and MP3Skull.
Last week, the issue took on a new level of importance when a report from the University of Southern California’s Annenberg Innovation Lab confirmed that major online ad networks, including those of Google and Yahoo, routinely place ads on pirate sites. The Innovation Lab has come up with an automated way of tracking the ad networks that place ads on sites that (according to the Google Transparency Report) attract the most DMCA takedown notices. The study ranks the top ten ad networks that serve ads on pirate sites and will be updated monthly.
The idea is to shame consumer brands by showing them that their ads are appearing on pirate sites amid ads for pornography, mail-order brides, etc. The Annenberg study has already led to at least one major consumer brand insisting that its ads be pulled from pirate sites.
The focus on online ad networks is — as Lowery admitted at our Copyright and Technology NYC 2012 conference last month — not an ideal solution to the problem of online infringement but rather a “low hanging fruit” approach that appeals to real business imperatives without requiring lawyers or lobbyists. It’s an acknowledgement that legislation to address online infringement is not going to be achievable, at least in the near future, in the aftermath of the defeats of SOPA and PIPA in early 2012.
Yet the tactic’s effectiveness is limited by the quality of information about ad buys that flows through ad networks. For example, it’s sometimes not possible for an advertiser to know where its ads are being placed because, among other reasons, ad networks resell inventory to each other.
Let’s assume that most consumer brands would rather not have their ads placed on pirate sites. Then two things are required to solve this problem. One is standards for information about ad buys — advertiser identity, inventory, ad network, type of placement, and so on — and protocols for communicating that information up the chain of intermediaries from the website on which the ad was placed all the way up to the advertiser. The other is agreement throughout the online ad industry to use such standards in communication and reporting.
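To make the first requirement concrete, here is a sketch of what such a placement record might look like: each intermediary that resells the inventory appends itself to a chain, so the advertiser can trace where its ad finally ran. The XML element and attribute names below are invented for illustration and are not taken from any actual IAB specification:

```python
import xml.etree.ElementTree as ET

# Hypothetical placement record: every reselling intermediary appends
# itself to the chain, giving the advertiser end-to-end visibility.
record = """
<AdPlacement id="placement-001">
  <Advertiser>ExampleBrand</Advertiser>
  <Chain>
    <Intermediary role="agency">ExampleAgency</Intermediary>
    <Intermediary role="ad-network">NetworkA</Intermediary>
    <Intermediary role="ad-network">NetworkB</Intermediary>
  </Chain>
  <Site domain="example-downloads.example">run-of-network</Site>
</AdPlacement>
"""

root = ET.fromstring(record)
chain = [e.text for e in root.find("Chain")]   # resale path, in order
site = root.find("Site").attrib["domain"]      # where the ad ran
print(" -> ".join(chain), "=>", site)
```

With a globally unique placement identifier and a record like this flowing back up the chain, an advertiser (or its agency) could audit whether any buy — blind or not — landed on a site it had blacklisted.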
If you follow efforts to develop standard content identifiers and online rights registries, you should see the analogy here.
The good news is that the Interactive Advertising Bureau (IAB) has been working on standards that look like they could apply here with some tweaks. The IAB launched the eBusiness Interactive Standards initiative in 2008 with the goal of increasing efficiency and reducing errors in online ad workflows. The eBusiness Interactive Standards spec defines XML structures for communicating information among advertisers, agencies, and publishers (websites) from RFPs (requests for proposal) through to IOs (insertion orders).
Now the bad news. The IAB standards would need some modification to cover the requirements here: they don’t appear to work through multiple levels of ad networks, they don’t include globally unique identifiers for ad placements (though this would be simple to add), and they aren’t designed to cover performance reporting. Furthermore, progress in getting the standard to market appears to be slow: it entered a beta phase with limited customers in 2011, and no progress has been apparent since then.
Yet even if the right standards were adopted, the advertising industry would still need to commit to the kinds of transparency that would be necessary to ensure, if an advertiser wishes, that its ads don’t appear on pirate sites. For one thing, advertisers often buy “blind” a/k/a run-of-network inventory in order to get discount pricing, and there is no reliable way to ensure that such buys don’t inadvertently end up on pirate sites. A related problem is where to draw the line between obvious pirate sites like the ones mentioned above and those that happen to occasionally host unauthorized material.
Regulatory initiatives seem unlikely here. Indeed, the Obama Administration and Congress in 2011 asked the ad industry to adopt a “pledge” against advertising on pirate sites; the industry’s two major U.S. trade associations responded last May with a statement full of equivocation and wiggle-room.
Ultimately, the pressure would have to come from advertisers themselves. They could demand, for example, that even blind buys not appear on the 200-250 sites that show more than 10,000 takedown notices a month in the Google Transparency Report (mainstream sites like Facebook, Tumblr, DailyMotion, Scribd, and SoundCloud fall well below this threshold). A good set of technical standards for tracking and reporting would help convince them that they can demand to withhold their ads from these sites with a reasonable chance of success.
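The kind of exclusion rule advertisers could demand is simple to state: drop any domain whose monthly takedown-notice count, per the Google Transparency Report, exceeds a threshold such as the 10,000 figure above. A minimal sketch, with notice counts invented for illustration:

```python
# Hypothetical monthly DMCA takedown-notice counts per domain
# (invented numbers; real figures would come from the Google
# Transparency Report).
MONTHLY_NOTICES = {
    "filestube.example": 45_000,
    "mp3skull.example": 22_000,
    "facebook.com": 1_200,
    "soundcloud.com": 800,
}
THRESHOLD = 10_000  # notices/month above which a site is excluded

def eligible_for_blind_buy(domain: str) -> bool:
    # Unknown domains default to 0 notices and remain eligible.
    return MONTHLY_NOTICES.get(domain, 0) <= THRESHOLD

blocked = sorted(d for d in MONTHLY_NOTICES if not eligible_for_blind_buy(d))
print(blocked)  # only the pirate-heavy domains fall out of the blind buy
```

The policy itself is trivial; the hard part, as discussed above, is getting trustworthy per-domain placement data flowing through the chain of ad networks so the rule can actually be enforced.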