
Rights Management (The Other Kind) Workshop, NYC, April 30 April 14, 2014

Posted by Bill Rosenblatt in Events, Rights Licensing.

I will be co-teaching a workshop in rights management for DAM (digital asset management) at the Henry Stewart DAM conference in NYC on Wednesday, April 30.  I’ll be partnering with Seth Earley, CEO of Earley & Associates.  He’s a longtime colleague of mine as well as a highly regarded expert on metadata and content management.

This isn’t about DRM.  This is about how media companies — and others who handle copyrighted material in their businesses — need to manage information about rights to content and the processes that revolve around rights, such as permissions, clearances, licensing, royalties, revenue streams, and so on.  Some large media companies have built highly complex processes, systems, and organizations to handle this, while others are still using spreadsheets and paper documents.

Rights information management has come of age over the years as a function within media companies.  It has taken a while, but it is now being recognized as a revenue opportunity rather than merely an overhead task or a way of avoiding legal liability — not just for traditional media companies but also for ad agencies, consumer product companies, museums, performing arts venues, and many others.

The subject of our workshop is “Creating a Rights Management Roadmap for your Organization.”  We’ll be discussing real-world examples, business cases, and strategic elements of rights information management, and we’ll be getting into various aspects of how rights information management relates to digital asset management.  Attendees will be asked to bring information from their own situations, and we’ll be doing some exercises that will help attendees get a sense of what they need to do to implement workable practices for rights management.  We’ll touch on business rules, systems, processes, metadata taxonomies, and more.

For those of you who are unfamiliar with it, Henry Stewart (a publishing organization based in the UK) has been producing the highly successful DAM conferences for many years.  I’ve seen the event grow in attendance and importance to the DAM community over the years.  Come join us!

And speaking of events: I’m pleased to announce October 1 as the date for our next Copyright and Technology London event.  Details to follow.

MP3Tunes and the New DMCA Boundaries March 30, 2014

Posted by Bill Rosenblatt in Law, Music, Services, United States.

With last week’s jury verdict of copyright liability against Michael Robertson of MP3Tunes, copyright owners are finally starting to get some clarity around the limits of DMCA 512.  The law gives online service operators a “safe harbor” — a way to insulate themselves from copyright liability related to files that users post on their services by responding to takedown notices.

To qualify for the safe harbor, service providers have to have a policy for terminating the accounts of repeat infringers, and — more relevantly — cannot show “willful blindness” to users’ infringing actions.  At the same time, the law does not obligate service providers to proactively police their networks for copyright infringement.  The problem is that even when online services respond to takedown notices, the copyrighted works tend to be re-uploaded immediately.

The law was enacted in 1998, and copyright owners have brought a series of lawsuits against online services over the years to try to establish liability beyond the need to respond to one takedown notice at a time.  Some of these lawsuits tried to revisit the intent of Congress in passing this law, to convince courts that Congress did not intend to require them to spend millions of dollars a year playing Whac-a-Mole games to get their content removed.

In cases such as Viacom v. YouTube and Universal Music Group v. Veoh that date back to 2007, the media industry failed to get courts to revisit the idea that service providers should act as their own copyright police.  But over the past year, the industry has made progress along the “willful blindness” (a/k/a “looking the other way”) front.

These cases featured lots of arguments over what constitutes evidence of willful blindness or its close cousin, “red flag knowledge” of users’ infringements.  Courts had a hard time navigating the blurry lines between the “willful blindness” and “no need to self-police” principles in the law, especially when the lines must be redrawn for each online service’s feature set, marketing pitch, and so on.

But within the past couple of years, two appeals courts established some of the contours of willful blindness and related principles to give copyright owners some comfort.  The New York-based (and typically media-industry-friendly) Second Circuit, in the YouTube case, found that certain types of evidence, such as company internal communications, could be evidence of willful blindness.  And even the California-based (and typically tech-friendly) Ninth Circuit found similar evidence last year in a case against the BitTorrent site IsoHunt.

The Second Circuit’s opinion in YouTube served as the guiding precedent in the EMI v. MP3Tunes case — and in a rather curious way.  Back in 2011, the district court judge in MP3Tunes handed down a summary judgment ruling that was favorable to Robertson in some but not all respects.  But after the Second Circuit’s YouTube opinion, EMI asked the lower court judge to revisit the case, suggesting that the new YouTube precedent created issues of fact regarding willful blindness that a jury should decide.  The judge was persuaded, the trial took place, and the jury decided for EMI.  Robertson could now be on the hook for tens of millions of dollars in damages.

(Eleanor Lackman and Simon Pulman of the media-focused law firm Cowan DeBaets have an excellent summary of the legal backdrop of the MP3Tunes trial; they say that it is “very unusual” for a judge to go back on a summary judgment ruling like that.)

The MP3Tunes verdict gives media companies some long-sought leverage against online service operators, which keep claiming that their only responsibility is to respond to each takedown notice, one at a time.  This is one — but only one — step of the many needed to clarify the rights of copyright owners and responsibilities of service providers to protect copyrights.  And as far as we can tell now, it does not obligate service providers to implement any technologies or take any more proactive steps to reduce infringement.  Yet it does now seem clear that if service providers want to look the other way, they at least have to keep quiet about it.

As for Robertson, he continues to think of new startup ideas that seem particularly calculated to goad copyright owners.  The latest one, radiosearchengine.com, is an attempt to turn streaming radio into an interactive, on-demand music service a la Spotify.  It lets users find and listen to Internet streams of radio stations that are currently playing specific songs (as well as artists, genres of music, etc.).

Radiosearchengine.com starts with a database of thousands of Internet radio stations, similar to TuneIn, iHeartRadio, Reciva, and various others.  These streaming radio services (many of which are simulcasts of AM or FM signals) carry program content data, such as the title and artist of the song currently playing.  Radiosearchengine.com retrieves this data from all of the stations in its database every few seconds, adds that information to the database, and makes it searchable by users.  Robertson has even created an API so that other developers can access his database.
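The mechanics of such a service are easy to sketch.  Here is a minimal, hypothetical poller in Python — every name and the fetch interface are invented for illustration, since the actual internals of Robertson's service are not public — that repeatedly samples each station's "now playing" metadata and builds a song-to-stations index:

```python
from collections import defaultdict

class NowPlayingIndex:
    """Toy sketch of a radiosearchengine-style index: periodically poll
    each station's "now playing" metadata and make it searchable by song.
    (Hypothetical structure, for illustration only.)"""

    def __init__(self, fetch_now_playing):
        # fetch_now_playing(station_id) -> (artist, title) for that station
        self.fetch = fetch_now_playing
        self.by_song = defaultdict(set)   # (artist, title) -> {station_id, ...}
        self.current = {}                 # station_id -> (artist, title)

    def poll(self, station_ids):
        """One polling pass; the real service repeats this every few seconds."""
        for sid in station_ids:
            track = self.fetch(sid)
            old = self.current.get(sid)
            if old and old != track:
                # Station moved on to a new song; drop the stale entry.
                self.by_song[old].discard(sid)
            self.current[sid] = track
            self.by_song[track].add(sid)

    def stations_playing(self, artist, title):
        """What a user search hits: stations currently playing this song."""
        return sorted(self.by_song[(artist, title)])

# Simulated station metadata stands in for real stream requests.
feed = {"wxyz": ("Lorde", "Royals"), "kabc": ("Lorde", "Royals"),
        "wnews": ("NPR", "All Things Considered")}
index = NowPlayingIndex(lambda sid: feed[sid])
index.poll(feed)
print(index.stations_playing("Lorde", "Royals"))  # ['kabc', 'wxyz']
```

This also shows why results go stale so quickly: the index only reflects the last polling pass, which is why users typically land in the middle of a song.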

Of course, radiosearchengine.com can’t predict that a station will play a certain song in the future (stations aren’t allowed to report it in advance), so users are likely to click on station links and hear their chosen songs starting in the middle.  But with the most popular songs — which are helpfully listed on the site’s left navbar — you can find many stations that are playing them, so you can presumably keep clicking until you find the song near its beginning.

This is something that TuneIn and others could have offered years ago if it didn’t seem so much like lawsuit bait.  On the other hand, Robertson isn’t the first one to think of this: there’s been an app for that for at least three years.

Disney and Apple’s UV FUD March 26, 2014

Posted by Bill Rosenblatt in Business models, Technologies, United States, Video.

Last month Disney launched Disney Movies Anywhere, a service that lets users stream and download movies from Disney and associated studios on their Apple iOS devices.  You can purchase movies on the site or from the App Store app and stream them to any iPhone, iPad, or iPod Touch.  You can also get digital copies and streaming access with purchases of selected DVDs and Blu-ray discs.  And you can connect your iTunes account to your Disney Movies Anywhere account so that you can gain similar streaming and download access to your existing Disney iTunes purchases.

A couple of things about Disney Movies Anywhere are worth discussing.  First, this is yet more evidence of the strong bond between Disney and Apple, a relationship formed when Disney acquired Pixar from Steve Jobs, who became a Disney board member and the company’s largest shareholder.

More particularly, this service is a way for Apple to experiment with video streaming services without attaching its own brand name.  Disney Movies Anywhere works only with iOS devices, and there’s little indication that it will add support for Android or other platforms.  For whatever reason, Apple has shied away from streaming media services until quite recently (with iTunes Radio and the latest iteration of Apple TV).

More importantly, Disney Movies Anywhere is the first implementation of Disney’s KeyChest — a rights locker architecture that is similar to UltraViolet, the technology backed by the other five major Hollywood studios.  The idea common to both KeyChest and UltraViolet is that when you purchase a movie, you’re actually purchasing the right to download or stream it from a variety of sources; the rights locker maintains a record of your purchase.

One of the main motivations behind UltraViolet was to prevent content distributors or consumer electronics makers from dominating the economics of the digital video supply chain in the way that Apple dominated music downloads (and Amazon may dominate e-books), and thus from being able to dictate terms to copyright owners.  By making it possible for users to buy digital movies from one retailer and then download them in other formats from other retailers, the five studios hoped to create a level playing field among retailers as well as interoperability for users.  UltraViolet has several retail partners, including Target, Walmart (VUDU), and Best Buy (CinemaNow).

The problem with these technology schemes is that it is very hard to make them into universal standards.  Just about every software technology we use settles down to twos or threes.  In operating systems, it’s all twos: Windows and Mac OS for desktops and laptops; Android and iOS for mobile devices; Unix/Linux and Windows for servers.  Other markets are similar: in relational databases it’s Oracle/MySQL (Oracle Corp.), DB2 (IBM), and SQL Server (Microsoft); in music paid-download formats it’s MP4-AAC (Apple) and MP3 (Amazon); in e-books (in the US, at least) it’s Amazon, Barnes & Noble, and Apple iBooks.  Antitrust law prevents a single technology from dominating too much; market complexity prevents more than a handful from becoming roughly equal competitors.

It would be a shame if this also became true for rights lockers for movies and TV shows.  It does not help the studios if consumers get one flavor of “interoperability” for movies from all but one major studio and another flavor for movies from Disney.  Disney surely remembers the less-than-stellar success of its last solo venture into digital movie distribution: MovieBeam, which launched around 2004 and lasted less than four years.

And that brings us back around to Apple.  The only plausible explanation for this bifurcation is that Apple is really in charge here.  UltraViolet is not just an “every studio but Disney” consortium; it is also an “every technology company but Apple” initiative.  The list of technology companies participating in UltraViolet is huge, though Microsoft occupies a particularly important role as the source of the UltraViolet file format and the first commercial DRM to be approved for use with the system.  In other words, the KeyChest/UltraViolet dichotomy is shaping up to look very much like Apple vs. the Microsoft-led Windows ecosystem, or Apple vs. the Google-led Android ecosystem.

Still, the market for digital video is in relatively early days, and things could change quite a bit — especially if consumers are confused by the choices on offer.  (Coincidentally, there’s a good overview of this confusion and its causes in today’s New York Times.)  UltraViolet is enjoying only modest success so far — compared, say, to Netflix or iTunes — and the introduction of Disney Movies Anywhere is unlikely to help make rights lockers any clearer to consumers.

In that respect, the UltraViolet/KeyChest dichotomy also has a precedent in the digital music market.  Back in 2001-2002, the (then) five major record labels lined up behind two different music distribution platforms: MusicNet and pressplay.  MusicNet was backed by Warner Music Group, EMI, BMG, and RealNetworks, while pressplay was backed by Sony Music and Universal Music Group.  MusicNet was a wholesale distribution platform that made deals with multiple retailers; pressplay was its own retailer.  In other words, MusicNet was UltraViolet, while pressplay was Disney Movies Anywhere.  Yet neither one was successful; both suffered from over-complexity (among other things).  Apple launched the much easier to use iTunes Music Store in 2003, and few people remember MusicNet or pressplay anymore.*

In other words, there are still opportunities for new digital video models to emerge and disrupt the current market.  And consumer confusion is a great way to hasten the disruption.

*The two music platforms did survive, in a way: MusicNet is now MediaNet, a wholesaler of digital music and other content with many retail partners; pressplay was sold to Roxio, rebranded as Napster (the legal version), and resold to Rhapsody, where it still exists under the Napster brand name outside of the US.


Viacom vs. YouTube: Not With a Bang, But a Whimper March 21, 2014

Posted by Bill Rosenblatt in Law, United States.

Earlier this week Viacom settled its long-running lawsuit against Google over video clips containing Viacom’s copyrighted material that users posted on YouTube.  The lawsuit was filed seven years ago; Viacom sought well over a billion dollars in damages.  The last major development in the case was in 2010, when a district court judge ruled in Google’s favor.  The case had bounced around between the district and Second Circuit appeals courts.  The parties agreed to settle the case just days before oral argument was to take place before the Second Circuit.

It’s surprising how little press coverage the settlement has attracted — even in the legal press — considering the strategic implications for Viacom and copyright owners in general.

The main reasons for the lack of press attention are that details of the settlement are being kept secret, and that by now the facts at issue in the case are firmly in the past.  A few months after Viacom filed the lawsuit in March 2007, Google launched its Content ID program, which enables copyright owners to block user uploads of their content to YouTube — or monetize them through shared revenue from advertising.  The lawsuit was concerned with video clips that users uploaded before Content ID was put into place.

Viacom’s determination to carry on with the litigation was clearly meant primarily to get the underlying law changed.  Viacom has been a vocal opponent of the Digital Millennium Copyright Act (DMCA) in its current form.  The law allows service providers like YouTube to respond to copyright owners’ requests to remove content (takedown notices) in order to avoid liability.  It doesn’t require service providers to proactively police their services for copyright violations.  As a result, a copyright owner has to issue a new takedown notice every time a clip of the same content appears on the network — which often happens immediately after each clip is taken down.  Companies like Viacom thus spend millions of dollars issuing takedown notices in a routine that has been likened to a game of Whac-a-Mole.

From that perspective, Google’s Content ID system goes beyond its obligations under the DMCA (Content ID has become a big revenue source for Google), so Google’s compliance with the current DMCA isn’t the issue.  Countless other service providers don’t have technology like Content ID in place; moreover, since 2007 the courts have consistently — in other litigations such as Universal Music Group v. Veoh — interpreted the DMCA not to require that service providers act as their own copyright police.  Viacom must still be interested in getting the law changed.

In this light, what’s most interesting is not that the settlement came just days before oral argument before the Second Circuit, but that it came just days after the House Judiciary Committee held hearings in Washington on the DMCA.  These were done in the context of Congress’s decision to start on the long road to revamping the entire US copyright law.

My theory is that, while Viacom may have had various reasons to settle, the company has decided that it has a better shot at changing the law through Congress than through the courts.  The journey to a new Copyright Act is likely to take years longer than the appeals process.  But if Viacom were to get the lower court’s decision overturned in the Second Circuit, the result would be a precedent that wouldn’t apply nationwide; in particular, it wouldn’t necessarily apply in the tech-friendly Ninth Circuit.  A fix to the actual law in Congress that’s favorable to copyright owners — if Congress delivers one — could have broader applicability, both geographically and to a wider variety of digital services.  Viacom has waited seven years; it can wait another ten or so.

Getty Images Competes with Free (and Easy) March 9, 2014

Posted by Bill Rosenblatt in Images, Rights Licensing.

Last month I wrote about how search engines like Google and Bing let users filter image searches so that only images available under Creative Commons licenses show up in search results.  This helps users find images that they can paste into their blogs and websites with confidence that they aren’t violating copyrights.  Last week Getty Images, the world’s largest stock image agency, announced a new service that’s essentially complementary to this search engine feature: users can select photos from Getty’s repository and embed them in blogs, websites, social media posts, and so on, legally and at no charge.  Getty Images is launching the embed feature at SXSW in Austin, TX this week.

The image embedding feature works very much like analogous features on many other content-sharing sites: videos on YouTube, presentations on SlideShare, documents on Scribd, etc.  More to the point, it works much like the analogous feature on Flickr.  (Interestingly, WordPress supports Getty Images embedding but rejects Flickr photo embeds — which can’t be a coincidence.)  The feature is available for many, though not all, of Getty’s vast archive of images — including current celebrity photos, such as a recent shot of the singer Lorde.

To use this feature, do a search on Getty Images’ website and mouse over an image in the search results.  A popup window will appear.  If the popup has an embed button (“</>”), click on it.  It will display a few lines of HTML code.  Select the code, copy it, and paste it into your website or blog.  The resulting image carries a URL that leads to a Getty Images web page with information about licensing the image for commercial uses — and the embedded image has its own embed button as well.

In contrast, with Google Images, you can simply search, select an image, right-click the image, select Copy Image, and paste it into your blog.  That, in a nutshell, is the problem that Getty Images has to solve.  This is a particular problem with still images.  It’s not possible to right-click, copy, and paste a video, music track, PDF document, or presentation into your blog or website, but it is possible with images.

With this move, Getty Images has done what so many other content licensors have done in the face of easy and rampant unauthorized use of content online: it has given up on directly monetizing use of its content by anyone other than businesses that can actually sign licensing agreements and assume liability.  In fact, this type of embedding feature has been available on so many other media sharing sites (see above) for so long that I imagine the decision to implement it must have resulted from long, contentious discussions within Getty Images, and/or between Getty Images and photographers and other image sources.  (The fact that not all images have embed rights reinforces this theory.)

So, instead of direct revenue, Getty Images is evidently hoping to use images for indirect revenue through analytics and promotion.  The most obvious benefit is that it can use the images that people embed in their blog posts and websites as promotions for commercial licensing of those images to publishers, ad agencies, and so on.  But that’s not much of a benefit: professionals who license images commercially are unlikely to search for images they want on blogs and random websites.

Instead, the bigger potential benefit has got to be data collection for analytics.  With the embed feature, Getty Images hopes to be able to amass data about the popularity of images — and the people, places, or things in them — that it can sell.  For example, data about placements of celebrity photos like the Lorde shot ought to be of interest to her management.  Getty Images could also use the data to help it determine which kinds of images it should be acquiring from photographers and other licensors, and what it should be charging for commercial uses of them.

This new feature is also complementary to the deal that Getty Images made with Pinterest last October.  That deal focused on the many images that users post on Pinterest to which Getty Images holds the rights.   Getty Images’ ImageIRC image recognition technology identifies those images and collects licensing fees for them from Pinterest — which benefits commercially by using those images to draw traffic to its site.

The difference here is that Getty Images has stopped imagining (if it ever did) that it’s possible to get licensing fees for images on individuals’ blogs and websites — even if those blogs and websites make money through ad sales or other ways — that exceed the cost of policing their copyrights.  The popup windows for embedding images say “Embedded images may not be used for commercial purposes,” but Getty Images surely doesn’t expect that it’s worthwhile to enforce that very effectively.

It remains to be seen whether the lure of Big Data for Getty Images turns out to be a siren song.   The trouble with giving things away is that it becomes much harder to charge for them again later.

In Copyright Law, 200 Is a Magic Number March 2, 2014

Posted by Bill Rosenblatt in Images, Law, United States.

An occasional recurring theme in this blog is how copyright law is a poor fit for the digital age because, while technology enables distribution and consumption of content to happen automatically, instantaneously, and at virtually no cost, decisions about legality under copyright law can’t be similarly automated.  The best/worst example of this is fair use.  Only a court can decide whether a copy is noninfringing under fair use.  Even leaving aside notions of legal due process, it’s not possible to create a “fair use deciding machine.”

In general, copyright law contains hardly any concrete, machine-decidable criteria.  Yet one of the precious few came to light over the past few months regarding a type of creative work that is often overlooked in discussions of copyright law: visual artworks.  Unlike most copyrighted works, works of visual art are routinely sold and then resold potentially many times, usually at higher prices each time.

A bill was introduced in Congress last week that would enable visual artists to collect royalties on their works every time they are resold.  One of the sponsors of the bill is Rep. Jerrold Nadler, who represents a chunk of New York City, one of the world’s largest concentrations of visual artists.

Of course, the types of copyrighted works that we usually talk about here — books, movies, TV shows, and music — aren’t subject to resale royalties; they are covered under first sale (Section 109 of the Copyright Act), which says that the buyer of any of these works is free to do whatever she likes with them, with no involvement from the original seller.  But visual artworks are different.  According to Section 101 of the copyright law, they are either unique objects (e.g. paintings) or reproduced in limited editions (e.g. photographs).  The magic number of copies that distinguishes a visual artwork from anything else?  200 or fewer.  The copies must be signed and numbered by the creator.
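Unusually for copyright law, the 200-copy test really is something a program could apply.  A simplified sketch — the statute’s actual wording has additional per-medium details that this omits:

```python
def qualifies_as_limited_edition(copies, signed, numbered):
    """Simplified version of the Section 101 limited-edition test described
    above: 200 or fewer copies, each signed and consecutively numbered by
    the artist.  (The statute's wording varies by medium; this is a sketch.)"""
    return copies <= 200 and signed and numbered

print(qualifies_as_limited_edition(50, True, True))    # True
print(qualifies_as_limited_edition(250, True, True))   # False: edition too large
```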

Under the proposed ART (Artist Royalties, Too) Act, five percent of the proceeds from a sale of a visual artwork would go to the artist, whether it’s the second, third, or hundredth sale of the work.  The law would apply to artworks that sell for more than $5,000 at auction houses that do at least $1 million in business per year.  It would require private collecting societies to collect and distribute the royalties on a regular basis, as SoundExchange does for digital music broadcasting.  This proposed law would follow in the footsteps of similar laws in many countries, including the UK, EU, Australia, Brazil, India, Mexico, and several others.  It would also emulate  “residual” and “rental” royalties for actors, playwrights, music composers, and others, which result from contracts with studios, theaters, orchestras, and so on.
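The bill’s thresholds, as described above, reduce to a simple calculation.  The figures here are the ones given in this post; the bill’s actual text may carve out further conditions:

```python
def art_act_royalty(sale_price, auction_house_annual_sales):
    """Royalty under the proposed ART Act as described above: 5% of proceeds
    to the artist, for resales over $5,000 at auction houses doing at least
    $1 million per year in business.  (Sketch of the thresholds in the post,
    not a reading of the bill's text.)"""
    if auction_house_annual_sales < 1_000_000:
        return 0.0   # small auction houses are exempt
    if sale_price <= 5_000:
        return 0.0   # sale below the price threshold
    return 0.05 * sale_price

print(art_act_royalty(100_000, 20_000_000))  # 5000.0
```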

The U.S. Copyright Office analyzed the art resale issue recently and published a report last December that summarized its findings.  The Office concluded that resale royalties would probably not harm the overall art market in the United States, and that a law like the ART Act isn’t a bad idea but is only one of several ways to institute resale royalties.

The Office had previously looked into resale royalties over 20 years ago.  Its newer research found that, based on evidence from other countries that have resale royalties, imposing them in the US would neither result in the flight of art dealers and auction houses from the country nor impose unduly onerous burdens for administration and enforcement of royalty payments.

Yet the Copyright Office’s report doesn’t overflow with unqualified enthusiasm for statutory royalties on sales.  One of the legislative alternatives it suggests is the idea of a “performance royalty” from public display of artworks.  If a collector wants to buy a work at auction and display it privately in her home, that’s different from a museum that charges people admission to see it.  Although this would mirror performance royalties for music, it would seem to favor wealthy individuals at the expense of public exposure to art.

The ART Act — which is actually a revision of legislation that Rep. Nadler introduced in 2011 — has drawn much attention within the art community, though little outside it.  Artists are generally in favor of it, of course.  But various others have criticized aspects of the bill, such as that it only applies to auction houses (thereby pushing more sales to private dealers, where transactions take place in secret instead of out in the open), that it only benefits the tiny percentage of already-successful artists instead of struggling newcomers, and that it unfairly privileges visual artists over other creators of both copyrighted works and physical objects (think Leica cameras or antique Cartier watches).

As an outsider to the art world, I have no opinion.  Instead it’s that 200 number that fascinates me.  That number may partially explain why the Alfred Eisenstaedt photograph of the conductor Leonard Bernstein that hangs in my wife’s office, signed and numbered 14 out of 250, is considerably less valuable than another Eisenstaedt available on eBay that’s signed and numbered 41 out of 50.

It raises the question of what happens when more and more visual artists use media that can be reproduced digitally without loss of quality.  Would an artist be better off limiting her output to 200 copies and getting the 5% on resale, or would she be better off making as many copies as possible and selling them for whatever the market will bear?  The answer is unknowable without years of real-world testing.  Given the choice, some artists may opt for the former route, which seems to go against the primary objective of copyright law: to maximize the availability of creative works to the public through incentives to creators.

Copyright minimalists question the relevance of copyright in an era when digital technologies make it possible to reproduce creative works at very little cost and perfect fidelity; they call on the media industry to stop trying to “profit from scarcity” and instead “profit from abundance.”  Here’s a situation where copyrighted works are the scarcest of all.

Nowadays no one would confuse one of Vermeer’s 35 (or possibly 36) masterpieces with a poster or hand-made reproduction of one.  People will be willing to travel to the Rijksmuseum, National Gallery, Met, etc., to see them for the foreseeable future.  Yet at some point beyond the near future, the scarcity of most copyrighted works will be artificially imposed.  At that point, the sale (not resale) value of creative works will go toward zero, even if they are reproduced, signed, and sequentially numbered by super-micro-resolution 3D printers that sell at Staples for the equivalent of $200 today.

Perhaps the best indication of the future comes from Christo and Jeanne-Claude, the well-known husband-and-wife outdoor artists.  Christo and Jeanne-Claude designed the 2005 installation called The Gates in New York’s Central Park (which happens to be in Jerry Nadler’s congressional district).  Reproducing — let alone selling — this massive work is inconceivable.  Instead, Christo and Jeanne-Claude hand-signed thousands of copies of books, lithographs, postcards, and other easily-reproduced artifacts containing photos and drawings of the artwork, and sold them to help pay the eight-figure cost of the project.  To that just add an individualized auto-pen for automating the signatures, and you may have the future of visual art in a world without scarcity.

So, the question that Congress ought to consider when evaluating art resale legislation is how to create a legal environment in which the Christos and Jeanne-Claudes of tomorrow will even bother anymore.  That’s not a rhetorical question, either.

Adobe Resurrects E-Book DRM… Again February 10, 2014

Posted by Bill Rosenblatt in DRM, Publishing.

Over the past couple of weeks, Adobe has made a series of low-key announcements regarding new versions of its DRM for e-books, Adobe Content Server and Rights Management SDK.  The new versions are ACS5 and RMSDK10 respectively, and they are released on major platforms now (iOS, Android, etc.) with more to come next month.

The new releases, though rumored for a while, came as something of a surprise to those of us who understood Adobe to have lost interest in the e-book market… again.  They did so for the first time back in 2006, before the launch of the Kindle kicked the market into high gear.  At that time, Adobe announced that version 3 of ACS would be discontinued.  Then the following year, Adobe reversed course and introduced ACS4.  ACS4 supports the International Digital Publishing Forum (IDPF)’s EPUB standard as well as Adobe’s PDF.

This saga repeated itself, roughly speaking, over the past year.  As the IDPF worked on version 3 of EPUB, Adobe indicated that it would not upgrade its e-reader software to work with EPUB 3, nor would it guarantee that ACS4 would support the new version.  The DRM products were transferred to an offshore maintenance group within Adobe, and all indications were that Adobe would not develop them any further.  Now that’s all changed.

Adobe had originally positioned ACS in the e-book market as a de facto standard DRM.  It licensed the technology to a large number of makers of e-reader devices and applications, and e-book distributors around the world.  At first this strategy seemed to work: ACS looked like an “everyone but Amazon” de facto standard, and some e-reader vendors (such as Sony) even migrated from proprietary DRMs to the Adobe technology.

But then cracks began to appear: Barnes & Noble “forked” ACS with its own extensions to support features such as user-to-user lending in the Nook system; Apple launched iBooks with a variant of its FairPlay DRM for iTunes content; and independent bookstores’ IndieBound system adopted Kobo, which has its own DRM.  Furthermore, interoperability of e-book files among different RMSDK-based e-readers was not exactly seamless.  As of today, “pure” ACS represents only a minor share of the e-book retail market, at least in the US; its users include Google Play, Smashwords, and retailers served by OverDrive and other wholesalers.

It’s unclear why Adobe chose to go back into the e-book DRM game, though pressure from publishers must have been a factor.  Adobe can’t do much about interoperability glitches among retailers and readers, but publishers and distributors alike have asked for various features to be added to ACS over the years.  Publishers have mainly been concerned with the relatively easy availability of hacks, while distributors have also expressed the desire for a DRM that facilitates certain content access models that ACS4 does not currently support.

The new ACS5/RMSDK10 platform promises to give both publishers and distributors just about everything they have asked for.  First, Adobe has beefed up the client-side security using (what appear to be) software hardening, key management, and crypto renewability techniques that are commonly used for video and games nowadays.

Adobe has also added support for several interesting content access models. At the top of the list of most requested models is subscriptions.  ACS5 will not only support periodical-style subscriptions but also periodic updates to existing files; the latter is useful in STM (scientific, technical, medical) and various professional publishing markets.

ACS5 also contains two enhancements that are of interest to the educational market.  One is support for collections of content shared among multiple devices, which is useful for institutional libraries.  Another is support for “bulk fulfillment,” such as pre-loading e-reader devices with encrypted books (such as textbooks).  Bulk fulfillment requires a feature called separate license delivery, which is supported in many DRMs but hasn’t been in ACS thus far.  With separate license delivery, DRM-packaged files can be delivered in any way (download, optical disk, device pre-load, etc.), and then the user’s device or app can obtain licenses for them as needed.
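The flow described above can be sketched in a few lines of Python.  This is a toy illustration of the separate-license-delivery concept only, not Adobe’s actual protocol or API; all names are hypothetical, and the XOR “cipher” stands in for real cryptography purely to keep the example self-contained.

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream (illustration only, NOT real cryptography):
    # hash the key with a counter to produce pseudo-random bytes.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def package(plaintext: bytes) -> tuple[bytes, bytes]:
    """Packaging step: encrypt the e-book with a random content key.
    The packaged file can then be delivered any way at all (download,
    optical disk, device pre-load); it is useless without a license."""
    content_key = os.urandom(32)
    stream = keystream(content_key, len(plaintext))
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, stream))
    return ciphertext, content_key

def issue_license(content_key: bytes, device_id: str) -> dict:
    """License-server step: later, over a separate channel, bind the
    content key to a particular device or app."""
    return {"device_id": device_id, "content_key": content_key}

def open_on_device(ciphertext: bytes, license_: dict, device_id: str) -> bytes:
    """Client step: the device can decrypt only with a license issued to it."""
    if license_["device_id"] != device_id:
        raise PermissionError("license not valid for this device")
    stream = keystream(license_["content_key"], len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

# The file travels first; the license follows on demand.
book = b"Chapter 1. It was a dark and stormy night."
packaged, key = package(book)            # e.g. pre-loaded onto a device in bulk
lic = issue_license(key, "device-42")    # the device requests a license later
assert open_on_device(packaged, lic, "device-42") == book
```

The point of the separation is that distribution of the bits and granting of rights become independent events, which is what makes bulk fulfillment scenarios like pre-loaded textbooks practical.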

Finally, ACS5 will support the Readium Foundation’s open-source EPUB 3 e-reader software.  Adobe says it is “evaluating the feasibility” of supporting the Readium EPUB 3 SDK in its Adobe Reader Mobile SDK; either way, distributors will be able to accommodate EPUB 3 in their apps.

In all, ACS5 fulfills many of the wish list items that I have heard from publishers over the past couple of years, leaving one with the impression that it could expand its market share again and move towards Adobe’s original goal of de facto standard-hood (except for Amazon and possibly Apple).  ACS5 is backward compatible with older versions of ACS and does not require that e-books be re-packaged; in other words, users can read their older files in RMSDK10-enabled e-readers.

Yet Adobe made a gaffe in its announcements that immediately jeopardized all this potential: it initially gave the impression that it would force upgrades to ACS5/RMSDK10 this July.  (Watch this webinar video from Adobe’s partner Datalogics, starting around the 21-minute mark.)  Distributors would have to upgrade their apps to the latest versions, with the hardened security; and users would have to install the upgrades before being able to read e-books packaged with the new DRM.  Furthermore, if users obtain e-books packaged with the new DRM, they would not be able to read them on e-readers based on the older RMSDK. (Yet another sign that Adobe has acted on pressure from publishers rather than distributors.)  In other words, Adobe wanted to force the entire ACS ecosystem to move to a more secure DRM client in lock-step.

This forced-upgrade routine is similar to what DRM-enabled download services like iTunes (video) do with their client software.  But then Apple doesn’t rely on a network of distributors, almost all of which maintain their own e-reading devices and apps.

In any case, the backlash from distributors and the e-publishing blogosphere was swift and harsh; and Adobe quickly relented.  Now the story is that distributors can decide on their own upgrade timelines.  In other words, publishers will themselves have to put pressure on distributors to upgrade the DRM, at least for the traditional retail and library-lending models; and some less-secure implementations will likely remain out there for some time to come.

Adobe’s new release has to balance divergent effects of DRM.  On the one hand, DRM interoperability is more important than ever for publishers and distributors alike, to counteract the dominance of Amazon in the e-book retail market; and the surest way to achieve DRM interoperability is to do away with DRM altogether.  (There are other ways to inhibit interoperability that have nothing to do with DRM.)  But on the other hand, combining interoperability with content access models that are impossible without some form of content access control — such as subscriptions and institutional library access — seems like an attractive idea.  Adobe has survived tugs-of-war with publishers and distributors over DRM restrictions before, so this one probably won’t be fatal.

National Academies Calls for Hard Data on Digital Copyright February 4, 2014

Posted by Bill Rosenblatt in Economics, Law, United States.

About three years ago, the National Academies — the scientific advisers to the U.S. federal government — held hearings on copyright policy in the digital age.  The intent of the project, of which the hearings were a part, was to gather input from a wide range of interested parties on the kinds of research that should be done to further our understanding of the effects of digital technologies on copyright.

The committee overseeing the project consisted of twelve people, including an economist specializing in digital content issues (Joel Waldfogel), a movie industry executive (Mitch Singer of Sony Pictures), a music technology expert (Paul Vidich, formerly of Warner Music Group), a federal judge with deep copyright experience (Marilyn Hall Patel of Napster fame), a library director (Michael Keller of Stanford University), a former director of Creative Commons (Molly van Houweling), and a few law professors.  The committee was chaired by Bill Raduchel, a Harvard economics professor turned technology executive perhaps best known as Scott McNealy’s mentor at Sun Microsystems.

Recently the National Academies Press published the results of the project in the form of Copyright in the Digital Era: Building Evidence for Policy, which is available as a free e-book or $35 paperback.  This 85-page booklet is, without exaggeration, the most important document in the field of copyright policy to be published in quite some time.  It is the first substantive attempt to take the debate on copyright policy out of the realm of “copyright wars,” where polemics and emotions rule, into the realm of hard data.

The document starts by decrying the lack of data on which deliberations on copyright policy are based, especially compared to the mountains of data used to support changes to the patent system.  It then goes on to describe various types of data that either exist or should be collected in order to fuel research that can finally tell us how copyright is faring in the digital era, with respect to its purpose to maximize public availability of creative works through incentives to creators.

The questions that Copyright in the Digital Era poses are fundamentally important.  They include issues of monetary and non-monetary motivations to content creators; the impact of sharply reduced distribution and transaction costs for digital compared to physical content; the costs and benefits of various copyright enforcement schemes; and the effects of US-specific legal constructs such as fair use, first sale, and the DMCA safe harbors.  My own testimony at the hearings emphasized the need for research into the costs and benefits of rights technologies such as DRM, and I was pleased to see this reflected in the document.

Copyright in the Digital Era concludes with lists of types of data that the project committee members believe should be collected in order to facilitate research, as well as descriptions of the types of research that should be done and the challenges of collecting the needed data.

This document should be required reading for everyone involved in copyright policy.  More than that, it should be seen as a gauntlet that has been thrown down to everyone involved in the so-called copyright wars.  The National Academies have set the research agenda.  Now that Congress has begun the long, arduous process of revamping America’s copyright law, we’ll see who is willing and able to fund the research and publish the results so that Congress gets the data it deserves.


Images, Search Engines, and Doing the Right Thing January 13, 2014

Posted by Bill Rosenblatt in Rights Licensing, Standards.

A recent blog post by Larry Lessig pointed out that the image search feature in Microsoft’s Bing allows users to filter search results so that only images with selected Creative Commons licenses appear.  A commenter to the post found that Google also has this feature, albeit buried deeply in the Advanced Search menu (see for example here; scroll down to “usage rights”).  These features offer a tantalizing glimpse at an idea that has been brewing for years: the ability to license content directly and automatically through web browsers.  

Let’s face it: most people use image search to find images to copy and paste into their PowerPoint presentations (or blog posts or web pages).  As Lessig points out, these features help users to ensure that they have the rights to use the images they find in those ways.  But they don’t help if an image is licensable in any way other than a subset of Creative Commons terms — regardless of whether royalties are involved.  I’d stop short of calling this discrimination against image licensors, but it certainly doesn’t help them.

Those who want to license images for commercial purposes — such as graphic artists laying out advertisements — typically go to stock photo agencies like Getty Images and Corbis, which have powerful search facilities on their websites.  Below Getty and Corbis, the stock image market is fragmented into niches, mostly by subject matter.  Graphic artists have to know where to go to find the kinds of images they need.

There is a small “one-stop” search engine for images called PictureEngine, which includes links to licensing pages in its search results.  But it would surely be better if the mainstream search engines’ image search functions included the ability to display and filter on licensing terms, and to enable links to licensing opportunities.

There have been various attempts over the years to make it easy for users to “do the right thing,” i.e. to license commercial content that they find online and intend to copy and use for purposes that aren’t clearly covered under fair use or equivalents.  Most such efforts have focused on text content.  The startup QPass had a vogue among big-name newspaper and magazine brands during the first Internet bubble: it provided a service for publishers to sell archived articles to consumers for a dollar or two apiece.  It effectively disappeared during the post-bubble crash.  ICopyright, which has been around since the late 1990s, provides a toolbar that publishers can use on web pages to offer various licensing options such as print, email, republish, and excerpt; most of its customers are B2B publishers like Dow Jones and Investors Business Daily.

Images are just as easy as text (compared, say, to audio and video) to copy and paste from web pages, but they are more discrete units of content; therefore it ought to be easier to automate licensing of them.  When you copy and paste an image with a Creative Commons license, you’re effectively getting a license, because the license is expressed in XML metadata attached to the image.

If search engines can index Creative Commons terms embedded within images that they find online, they ought to be able to index other licensing terms, including commercial terms.  The most prominent standard for image rights is PLUS (Picture Licensing Universal System), but it is intended for describing rights in B-to-B licensing arrangements, not for the general public; and I’m not aware of any efforts that the PLUS Coalition has made to integrate with web search engines.

No, the solution to this problem is not only clear but has been in evidence for years: augment Creative Commons so that it can handle commercial as well as noncommercial licensing.  Creative Commons flirted with this idea several years ago with something called CC+ (CCPlus), a way to add additional terms to Creative Commons licenses that envisioned commercial licenses in particular.

Although Creative Commons has its detractors among the commercial content community (mostly lawyers who feel disintermediated — which is part of the point), I have heard major publishers express admiration for it as well as interest in finding ways to use it with their content.  At this point, the biggest obstacle to extending Creative Commons to apply to commercial licensing is the Creative Commons organization’s lack of interest in doing so.  Creative Commons’ innovations have put it at the center of the copyright world on the Internet; it would be a shame if Creative Commons’ refusal to acknowledge that some people would like to get paid for their work results in that possibility being closed off.

Judge Dismisses E-Book DRM Antitrust Case December 12, 2013

Posted by Bill Rosenblatt in DRM, Law, Publishing.

Last week a federal judge in New York dismissed a lawsuit that a group of independent booksellers brought earlier this year against Amazon.com and the (then) Big Six trade publishers.  The suit alleged that the publishers were conspiring with Amazon to use Amazon’s DRM to shut the indie booksellers out of the majority of the e-book market.  The three bookstores sought class action status on behalf of all indie booksellers.

In most cases, independent booksellers can’t sell e-books that can be read on Amazon’s Kindle e-readers; instead they have a program through the Independent Booksellers Association that enables consumers to buy e-books from the stores’ websites via the Kobo e-book platform, which has apps for all major devices (PCs, iOS, Android, etc.) as well as Kobo eReaders.

Let’s get the full disclosure out of the way: I worked with the plaintiffs in this case as an expert witness.  (Which is why I didn’t write about this case when it was brought several months ago.)  I did so because, like others, I read the complaint and found that it reflected various misconceptions about DRM and its place in the e-book market; I thought that perhaps I could help educate the booksellers.

The booksellers asked the court to enjoin (force) Amazon to drop its proprietary DRM, and to enjoin the Big Six to allow independent bookstores to sell their e-books using an interoperable DRM that would presumably work with Kindles as well as iOS and Android devices, PCs, Macs, BlackBerrys, etc.  (The term that the complaint used for the opposite of “interoperable DRM” was “inoperable DRM,” much to the amusement of some anti-DRM folks.)

There were two fundamental problems with the complaint.  One was that it presupposed the existence of an idealized interoperable DRM that would work with any “interoperable or open architecture device,” and that “Amazon could easily, and without significant cost or disruption, eliminate its device specific restrictive [] DRM and instead utilize an available interoperable system.”

There is no such thing, nor is one likely to come into being.  I worked with the International Digital Publishing Forum (IDPF), the trade association for e-books, to design a “lightweight” content protection scheme that would be attractive to a large number of retailers through low cost of adoption, but that project is far from fruition, and in any case, no one associated with it is under any illusion that all retailers will adopt the scheme.  The only DRM that is guaranteed to work with all devices and all retailers forever is no DRM at all.

The closest thing there is to an “interoperable” DRM nowadays is Adobe Content Server (ACS) — which isn’t all that close.  Adobe had intended ACS to become an interoperable standard, much like PDF is.  Unlike Amazon’s Mobipocket DRM and Apple’s FairPlay DRM for iBooks, ACS can be licensed and used by makers of e-reader devices and apps.  Several e-book platforms do use it.  But the only retailer with significant market share in the United States that does so is Barnes & Noble, which has modified it and combined it with another DRM that it had acquired years ago.  Kobo has its own DRM and uses ACS only for interoperability with other environments.

More relevantly, I have heard it said that Amazon experimented with ACS before launching the Kindle with the Mobipocket DRM that it acquired back in 2005.  But in any case, ACS’s presence in the US e-book market is on the wane, and Adobe has stopped actively working on the product.

The second misconception in the booksellers’ complaint was the implication that the major publishers had an interest in limiting their opportunities to sell e-books through indie bookstores.  The reality is just the opposite: publishers, from the (now) Big Five on down, would like nothing more than to be able to sell e-books through every possible retailer onto every possible device.  The complaint alleges that publishers “confirmed, affirmed, and/or condoned AMAZON’s use of restrictive DRMs” and thereby conspired to restrain trade in the e-book market.

Publishers have been wary of Amazon’s dominant market position for years, but they have tolerated its proprietary technology ecosystem — at least in part because many of them understand that technology-based media markets always settle down to steady states involving two or three different platforms, protocols, formats, etc.  DRM helps vendors create walls around their ecosystems, but it is far from the only technology that does so.

As I’ve said before, the ideal of an “MP3 for e-books” is highly unlikely and is largely a mirage in any case.  Copyright owners face a constant struggle to create and preserve level playing fields for retailers in the digital age, one that the savvier among them recognize they can’t win as fully as they would like.

Judge Jed Rakoff picked up on this second point in his opinion dismissing the case. He said, “… nothing about [the] fact [that publishers made agreements with Amazon requiring DRM] suggests that the Publishers also required Amazon to use device-restrictive DRM limiting the devices on which the Publishers’ e-books can be display, or to place restrictions on Kindle devices and apps such that they could only display e-books enabled with Amazon’s proprietary DRM. Indeed, unlike DRM requirements, which clearly serve the Publishers’ economic interests by preventing copyright violations, these latter types of restrictions run counter to the Publishers’ interests …” (emphasis in original).

Indie bookstores are great things; it’s a shame that Amazon’s Kindle ecosystem doesn’t play nicely with them. But at the end of the day — as Judge Rakoff also pointed out — Amazon competes with independent booksellers, and “no business has a duty to aid competitors,” even under antitrust law.

In fact, Amazon has repeatedly shown that it will “cooperate” with competitors only as a means of cutting into their markets. Its extension of the Kindle platform to public library e-lending last year is best seen as part of its attempt to invade libraries’ territory. More recently, Amazon has attempted to get indie booksellers interested in selling Kindle devices in their stores, a move that has elicited frosty reactions from the bookstores.

The rest of Judge Rakoff’s opinion dealt with the booksellers’ failure to meet legal criteria under antitrust law.  Independent booksellers might possibly have a case to bring against Amazon for boxing them out of the market as reading goes digital, but Book House of Stuyvesant Plaza et al v. Amazon.com et al wasn’t it.

