
Flickr’s Wall Art Program Exposes Weaknesses in Licensing Automation December 7, 2014

Posted by Bill Rosenblatt in Images, Rights Licensing, Standards.
2 comments

Suppose you’re a musician.  You put your songs up on SoundCloud to get exposure for them.  Later you find out that SoundCloud has started a program for selling your music as high-quality CDs and giving you none of the proceeds.  Or suppose you’re a writer who put your serialized novel up on WattPad; then you find out that WattPad has started selling it in a coffee-table-worthy hardcover edition and not sharing revenue with you.   The odds are that in either case you would not be thrilled.

Yet those are rough equivalents of what Flickr, the Yahoo-owned photo-sharing site, has been doing with its Flickr Wall Art program.  Flickr Wall Art started out, back in October, as a way for users to order professional-quality hangable prints of their own photos, in the same way that a site like Zazzle lets users make t-shirts or coffee mugs with their images on them (or Lulu publishes printed books).

More recently, Flickr expanded the Wall Art program to let users order framed prints of any of tens of millions of images that users uploaded to the site.  This has raised the ire of some of the professional photographers who post their images on Flickr for the same reason that musicians post music on SoundCloud and similar sites: to expose their art to the public.

The core issue here is the license terms under which users upload their images to Flickr.  Like SoundCloud and WattPad, Flickr offers users the option of selecting a Creative Commons license for their work when they upload it.  Many Flickr users do this in order to encourage other users to share their images and thereby increase their exposure — so that, perhaps, some magazine editor or advertising art director will see their work and pay them for it.

The fact that a hosting website might exploit a Creative Commons-licensed work for its own commercial gain doesn’t sit right with many content creators who have operated under two assumptions that, as Flickr has shown, are naive.  One is that these big Internet sites just want to get users to contribute content in order to build their audience and that they will make money some other way, such as through premium memberships or advertising.  The other is that Creative Commons licenses are some sort of magic bullet that help artists get exposure for their work while preventing unfair commercial exploitation of it.

Let’s get one thing out of the way: as others have pointed out, what Flickr is doing is perfectly legal.  It takes advantage of the fact that many users upload photos to the site under Creative Commons licenses that allow others to exploit them commercially — which three out of the six Creative Commons license options do.   It seems that many photographers choose one of those licenses when they upload their work and don’t think too much about the consequences.
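
To make the licensing mechanics concrete, here is a minimal sketch (in Python, and certainly not anything Flickr actually runs) of the distinction the Wall Art program turns on: of the six standard Creative Commons licenses, the three without a NonCommercial clause permit commercial reuse.  The data structure and function are purely illustrative.

```python
# Minimal sketch (not Flickr's actual code): deciding whether a photo's
# declared Creative Commons license permits commercial reuse. Of the six
# standard CC licenses, the three without an "NC" (NonCommercial) clause do.

CC_ALLOWS_COMMERCIAL = {
    "CC BY": True,
    "CC BY-SA": True,
    "CC BY-ND": True,
    "CC BY-NC": False,
    "CC BY-NC-SA": False,
    "CC BY-NC-ND": False,
}

def commercially_usable(photos):
    """Yield only the photos whose declared license allows commercial use."""
    for photo in photos:
        if CC_ALLOWS_COMMERCIAL.get(photo.get("license"), False):
            yield photo

if __name__ == "__main__":
    uploads = [
        {"id": 1, "license": "CC BY"},                # eligible for a Wall Art-style program
        {"id": 2, "license": "CC BY-NC-ND"},          # not eligible
        {"id": 3, "license": "All rights reserved"},  # unknown or none: excluded
    ]
    print([p["id"] for p in commercially_usable(uploads)])   # -> [1]
```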

Flickr does allow users to change their images’ license terms at any time, and more recently it expanded the Wall Art program to enable photographers to get 51% of revenue from their images if they choose licenses that allow commercial use.  But currently that option is limited to those few photographers whom Flickr has invited into its commercial licensing program, Flickr Marketplace, which it launched this past July.  Flickr Marketplace is intended to be an attractive source of high-quality images for the likes of The New York Times and Reuters, and thus is curated by editors.

Some copyleft folks have circled their wagons around Flickr, maintaining that it shows yet again why content creators should not expect copyright to help them keep control of what happens to their work on the Internet.  But that’s a perversion of what’s going on here.

Flickr is — still, after ten years of existence — a major outlet for photos online.  As such, Flickr has the means to control, to some extent, what happens to the images posted on its service; and with Flickr Marketplace, it is effectively wresting some control of commercial licensing opportunities away from photographers.  Some degree of control over content distribution and use does exist on the Internet, even if copyright law itself doesn’t contribute directly to that control.  The controllers are the entities that Jaron Lanier has called “lords of the cloud” — one of which is Yahoo.

This doesn’t mean that Flickr is particularly outrageous or evil — although it’s at least ironic that while these major content hosting services claim to help content creators through exposure and sharing, Flickr is now making money from objects that are not very shareable at all.  (In fact, what Flickr is doing is not unusual for a mature technology business facing stiff competition from upstarts at the low end of the market — Instagram and Snapchat in this case: it is migrating to the premium/professional end of the market, where prices and margins are higher but volume is lower.)

The problem here is the lack of both flexibility and infrastructure for commercial licensing in the Creative Commons ecosystem.  Creative Commons is a clever and highly successful way of bringing some degree of badly-needed rationalization and automation to the abstruse world of content licensing.  But it gives creators hardly any options for commercial exploitation of their works.

Several years ago, Creative Commons flirted with the idea of extending their licenses to cover terms of commercial use (among other things) by launching a scheme called CC+.  A handful of startups emerged that used CC+ to enable commercial licensing of content on blogs and so on — companies that, interestingly enough, came from across the ideological spectrum of copyright.  One was Ozmo from the Copyright Clearance Center, which helped with the design of CC+; another, RightsAgent, was started by the then Executive Director of the Berkman Center for Internet and Society at Harvard Law School.  Yet none of these succeeded, and it didn’t help that the Creative Commons organization’s heart wasn’t really in CC+ in the first place.

But the picture changes — no pun intended — when big content hosting sites start to monetize user-generated content directly instead of merely using it as a draw for advertising and paid online storage.  Ideas for automated licensing of online content had been kicking around long before Flickr or CC+ (here’s one example).  Licensing automation mechanisms that can be adopted by big Internet services and individual creators alike for consumer-facing business models are needed now more than ever.

 

Dispatches from IDPF Digital Book 2014, Pt. 3: DRM June 5, 2014

Posted by Bill Rosenblatt in DRM, Publishing, Standards.
1 comment so far

The final set of interesting developments at last week’s IDPF Digital Book 2014 in NYC has to do with DRM and rights.

Tom Doherty, founder of the science fiction publisher Tor Books, gave a speech about his company’s experimentation with DRM-free e-books and its launch of a line of e-novellas without DRM.  The buildup to this speech (among those of us who were aware of the program in advance) was palpable, but the result fell with a thud.  You had to listen hard to find the tiny morsel about how going DRM-free has barely affected sales; otherwise the speech was standard-issue dogma about DRM with virtually no new insights or data.  And he did not take questions from the audience.

DRM has become something of a taboo subject even at conferences like this, so most of the rest of the discussion about it took the form of hallway buzz.  And the buzz is that many are predicting that DRM will be on its way out for retail trade e-books within the next couple of years.

That’s the way things are likely to go if technology market forces play out the way they usually do.  Retailers other than Amazon (and possibly Apple) will want to embrace more open standards so that they can offer greater interoperability and thus band together to compete with the dominant player; getting rid of DRM is certainly a step in that direction.  Meanwhile, publishers, getting more and more fed up with or afraid of Amazon, will find common cause with other retailers and agree to license more of their material for distribution without DRM.  (Several retailers in second-tier European countries as well as some retailers for self-publishing authors, such as Lulu, have already dropped DRM entirely.)

Such sentiments will eventually supersede most publishers’ current “faith-based” insistence on DRM.  In other words, publishers and retailers will behave more or less the same way as the major record labels and non-Apple retailers behaved back in 2006-2007.

This course of events seems inevitable… unless publishers get some hard, credible data that tells them that DRM helps prevent piracy and “oversharing” more than it hurts the consumer experience.  That’s the only way (other than outright inertia) that I can see DRM staying in place for trade books over the next couple of years.

The situation for educational, professional, and STM (scientific, technical, medical) books is another story (as are library lending and other non-retail models).  Higher ed publishers in particular have reasons to stick with DRM: for example, e-textbook piracy has risen dramatically in recent years, reaching 34% of students as of last year.

Adobe recently re-launched its DRM with a focus on these publishing market segments. I’d describe the re-launch as “awkward,” though publishers I’ve spoken to would characterize it in less polite terms.  This has led to openings for other vendors, such as Sony DADC; and the Readium Foundation is still working on the open-source EPUB Lightweight Content Protection scheme.

The hallway buzz at IDPF Digital Book was that DRM for these market segments is here to stay — except that in higher ed, it may become unnecessary in a longer timeframe, when educational materials are delivered dynamically and in a fashion more akin to streaming than to downloads of e-books.

I attended a panel on EDUPUB, a standards initiative aimed at exactly this future for educational publishing.  The effort, led by Pearson Education (the largest of the educational publishers), the IMS Global Learning Consortium, and IDPF, is impressive: it’s based on combining existing open standards (such as IDPF’s EPUB 3) instead of inventing new ones.  It’s meant to be inclusive and beneficial to all players in the higher ed value chain, including Pearson’s competitors.

However, EDUPUB is in danger of making the same mistake that the IDPF made by ignoring DRM and other rights issues.  When asked about DRM, Paul Belfanti, Pearson’s lead executive on EDUPUB, answered that EDUPUB is DRM-agnostic and would leave decisions on DRM to providers of content delivery platforms. This decision was problematic for trade publishers when IDPF made it for EPUB several years ago; it’s potentially even more problematic for higher ed, since EDUPUB-based materials could certainly be delivered in e-textbook form.

EDUPUB could also help enable one of the Holy Grails of higher ed publishing, which is to combine materials from multiple publishers into custom textbooks or dynamically delivered digital content.  Unlike most trade books, textbooks often contain hundreds or thousands of content components, each of which may have different rights associated with it.

Clearing rights for higher ed content is a manual, labor-intensive job.  In tomorrow’s world of dynamic digital educational content, it will be more important than ever to make sure that the content being delivered has the proper clearances, in real time.  In reality, this doesn’t necessarily involve DRM; it’s mainly a question of machine-readable rights metadata.
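
As a rough illustration of what machine-readable rights metadata buys you here, the sketch below invents a minimal rights record for a content component and a real-time clearance check that a dynamic delivery system could run before assembling a custom textbook.  The field names and logic are hypothetical, not drawn from any actual standard.

```python
# Hypothetical sketch of machine-readable rights metadata for content
# components, and a real-time clearance check before dynamic delivery.
# The record fields and function names are illustrative, not any standard.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ComponentRights:
    component_id: str
    rights_holder: str
    territories: set = field(default_factory=lambda: {"US"})
    expires: date = date.max
    allows_custom_compilation: bool = False

def cleared(rights: ComponentRights, territory: str, on: date) -> bool:
    """True if this component may be delivered in a custom compilation."""
    return (rights.allows_custom_compilation
            and territory in rights.territories
            and on <= rights.expires)

def assemble(components, territory="US", on=None):
    """Keep only the components that clear, at delivery time."""
    on = on or date.today()
    return [c for c in components if cleared(c, territory, on)]
```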

Attempts to standardize this type of rights metadata date back at least to the mid-1990s (when I was involved in such an attempt); none have succeeded.  This is a “last mile” issue that EDUPUB will have to address, sooner rather than later, for it to make good on its very promising start.  DRM and rights are not popular topics for standards bodies to address, but it has become increasingly clear that they must address these issues to be successful.

Images, Search Engines, and Doing the Right Thing January 13, 2014

Posted by Bill Rosenblatt in Rights Licensing, Standards.
4 comments

A recent blog post by Larry Lessig pointed out that the image search feature in Microsoft’s Bing allows users to filter search results so that only images with selected Creative Commons licenses appear.  A commenter to the post found that Google also has this feature, albeit buried deeply in the Advanced Search menu (see for example here; scroll down to “usage rights”).  These features offer a tantalizing glimpse at an idea that has been brewing for years: the ability to license content directly and automatically through web browsers.  

Let’s face it: most people use image search to find images to copy and paste into their PowerPoint presentations (or blog posts or web pages).  As Lessig points out, these features help users to ensure that they have the rights to use the images they find in those ways.  But they don’t help if an image is licensable in any way other than a subset of Creative Commons terms — regardless of whether royalties are involved.  I’d stop short of calling this discrimination against image licensors, but it certainly doesn’t help them.

Those who want to license images for commercial purposes — such as graphic artists laying out advertisements — typically go to stock photo agencies like Getty Images and Corbis, which have powerful search facilities on their websites.  Below Getty and Corbis, the stock image market is fragmented into niches, mostly by subject matter.  Graphic artists have to know where to go to find the kinds of images they need.

There is a small “one-stop” search engine for images called PictureEngine, which includes links to licensing pages in its search results.  But it would surely be better if the mainstream search engines’ image search functions included the ability to display and filter on licensing terms, and to enable links to licensing opportunities.

There have been various attempts over the years to make it easy for users to “do the right thing,” i.e. to license commercial content that they find online and intend to copy and use for purposes that aren’t clearly covered under fair use or equivalents.  Most such efforts have focused on text content.  The startup QPass had a vogue among big-name newspaper and magazine brands during the first Internet bubble: it provided a service for publishers to sell archived articles to consumers for a dollar or two apiece.  It effectively disappeared during the post-bubble crash.  ICopyright, which has been around since the late 1990s, provides a toolbar that publishers can use on web pages to offer various licensing options such as print, email, republish, and excerpt; most of its customers are B2B publishers like Dow Jones and Investors Business Daily.

Images are just as easy as text (compared, say, to audio and video) to copy and paste from web pages, but they are more discrete units of content; therefore it ought to be easier to automate licensing of them.  When you copy and paste an image with a Creative Commons license, you’re effectively getting a license, because the license is expressed in XML metadata attached to the image.
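
For the curious, here is a rough sketch of what reading that embedded license could look like.  It assumes the image file carries an XMP packet (the XML metadata block many tools embed) with a ccREL-style cc:license property or an xmpRights:WebStatement URL; the property names follow common XMP conventions, but the code is illustrative rather than a robust metadata parser.

```python
# Sketch: pull an embedded Creative Commons license URL out of an image's
# XMP metadata packet. Assumes the publisher embedded XMP (an XML block
# bracketed by <x:xmpmeta ...> ... </x:xmpmeta>) containing a cc:license
# or xmpRights:WebStatement property; many copy-and-paste paths strip this.

import re
import sys

LICENSE_PATTERNS = [
    re.compile(rb'cc:license\s*(?:rdf:resource)?="([^"]+)"'),
    re.compile(rb'xmpRights:WebStatement\s*(?:rdf:resource)?="([^"]+)"'),
]

def embedded_license(path):
    """Return the first license URL found in the file's XMP packet, or None."""
    data = open(path, "rb").read()
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return None
    packet = data[start:end]
    for pattern in LICENSE_PATTERNS:
        m = pattern.search(packet)
        if m:
            return m.group(1).decode("utf-8", "replace")
    return None

if __name__ == "__main__":
    print(embedded_license(sys.argv[1]))
```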

If search engines can index Creative Commons terms embedded within images that they find online, they ought to be able to index other licensing terms, including commercial terms.  The most prominent standard for image rights is PLUS (Picture Licensing Universal System), but that’s intended for describing rights in B-to-B licensing arrangements, not for licensing to the general public; and I’m not aware of any efforts that the PLUS Coalition has made to integrate with web search engines.

No, the solution to this problem is not only clear but has been in evidence for years: augment Creative Commons so that it can handle commercial as well as noncommercial licensing.  Creative Commons flirted with this idea several years ago with something called CC+ (CCPlus), a way to add additional terms to Creative Commons licenses that envisioned commercial licenses in particular.

Although Creative Commons has its detractors among the commercial content community (mostly lawyers who feel disintermediated — which is part of the point), I have heard major publishers express admiration for it as well as interest in finding ways to use it with their content.  At this point, the biggest obstacle to extending Creative Commons to apply to commercial licensing is the Creative Commons organization’s lack of interest in doing so.  Creative Commons’ innovations have put it at the center of the copyright world on the Internet; it would be a shame if Creative Commons’ refusal to acknowledge that some people would like to get paid for their work results in that possibility being closed off.

MovieLabs Releases Best Practices for Video Content Protection October 23, 2013

Posted by Bill Rosenblatt in DRM, Standards, Video.
3 comments

As Hollywood prepares for its transition to 4k video (four times the resolution of HD), it appears to be adopting a new approach to content protection, one that promotes more service flexibility and quicker time to market than previous approaches but carries other risks.  The recent publication of a best-practices document for content protection from MovieLabs, Hollywood’s R&D consortium, signals this new approach.

In previous generations of video technology, Hollywood studios got together with major technology companies and formed technology licensing entities to set and administer standards for content protection.  For example, a subset of the major studios teamed up with IBM, Intel, Microsoft, Panasonic, and Toshiba to form AACS LA, the licensing authority for the AACS content protection scheme for Blu-ray discs and (originally) HD DVDs.  AACS LA defines the technology specification, sets the terms and conditions under which it can be licensed, and performs other functions to maintain the technology.

A licensing authority like AACS LA (and there is a veritable alphabet soup of others) provides certainty to technology implementations, including compliance, patent licensing, and interoperability among licensees.  It helps insulate the major studios from accusations of collusion by being a separate entity in which at most a subset of them participate.

As we now know, the licensing-authority model has its drawbacks.  One is that it can take the licensing authority several years to develop technology specs to a point where vendors can implement them — by which time they risk obsolescence.  Another is that it does not offer much flexibility in how the technology can adapt to new device types and content delivery paradigms.  For example, AACS was designed with optical discs in mind at a time when Internet video streaming was just a blip on the horizon.

A document published recently by MovieLabs signals a new approach.  MovieLabs Specification for Enhanced Content Protection is not really a specification, in that it contains nowhere near enough detail to be usable as the basis for implementations.  It is more a compendium of what we now understand as best practices for protecting digital video.  It leaves room for change and interpretation.

The best practices in the document amount to a wish list for Hollywood.  They include things like:

  • Techniques for limiting the impact of hacks to DRM schemes, such as requiring device as well as content keys, code diversity (a hack that works on one device won’t necessarily work on another), title diversity (a hack that works with one title won’t necessarily work on another), device revocation, and renewal of protection schemes (a minimal sketch of the device/content key layering appears after this list).
  • Proactive renewal of software components instead of “locking the barn door after the horse has escaped.”
  • Component technologies that are currently considered safe from hacks by themselves, including standard AES encryption with a minimum key length of 128 bits and version 2.2 or later of the HDCP scheme for protecting links such as HDMI cables (earlier versions were hacked).
  • Hardware roots of trust on devices, running in secure execution environments, to limit opportunities for key leakage.
  • Forensic watermarking, meaning that content should have information embedded in it about the device or user who requested it.
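
To make the first item concrete, here is a minimal sketch (using Python's cryptography package) of the device-key-plus-content-key layering: the title is encrypted once under a content key, and that key is wrapped separately under each device's key, so a compromised device can be revoked without re-encrypting the content.  This is an illustration of the principle only; real schemes add license formats, robustness rules, and hardware roots of trust.

```python
# Minimal sketch of the "device key plus content key" layering from the
# first bullet above, using AES-GCM from the "cryptography" package.
# Real DRM systems add license formats, robustness rules, revocation
# lists, and hardware roots of trust; none of that is modeled here.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_title(plaintext: bytes):
    """Encrypt the content once with a per-title content key."""
    content_key = AESGCM.generate_key(bit_length=128)  # AES-128 minimum per the best practices
    nonce = os.urandom(12)
    ciphertext = AESGCM(content_key).encrypt(nonce, plaintext, None)
    return content_key, nonce, ciphertext

def wrap_for_device(content_key: bytes, device_key: bytes):
    """Wrap the content key under an individual device's key."""
    nonce = os.urandom(12)
    return nonce, AESGCM(device_key).encrypt(nonce, content_key, None)

# A hack that extracts one device's key exposes only the keys wrapped for
# that device; the title can be re-wrapped for other devices without
# re-encrypting the content, and the compromised device can be revoked.
if __name__ == "__main__":
    movie = b"4k video essence goes here"
    content_key, nonce, ct = encrypt_title(movie)
    device_key = AESGCM.generate_key(bit_length=128)
    wrap_nonce, wrapped = wrap_for_device(content_key, device_key)
    recovered_key = AESGCM(device_key).decrypt(wrap_nonce, wrapped, None)
    assert AESGCM(recovered_key).decrypt(nonce, ct, None) == movie
```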

Those who saw Sony Pictures CTO Spencer Stephens’s talk at the  Anti-Piracy and Content Protection Summit in LA back in July will find much of this familiar.  Some of these techniques come from the current state of the art in content protection for pay TV services; for more detail on this, see my whitepaper The New Technologies for Pay TV Content Security.  Others, such as the forensic watermarking requirement, come from current systems for distributing HD movies in early release windows.  And some result from lessons learned from cracks to older technologies such as AACS, HDCP, and CSS (for DVDs).

MovieLabs is unable to act as a licensor of standards for content protection (or anything else, for that matter).  The six major studios set it up in 2005 as a movie industry joint R&D consortium modeled on the cable television industry’s CableLabs and other organizations enabled by the National Cooperative Research Act of 1984, such as Bellcore (telecommunications) and SEMATECH (semiconductors).  R&D consortia are allowed, under antitrust law, to engage in “pre-competitive” research and development, but not to develop technologies that are proprietary to their members.

Accordingly, the document contains a lot of language intended to disassociate these requirements from any actual implementations, standards, or studio policies, such as “Each studio will determine individually which practices are prerequisites to the distribution of its content in any particular situation” and “This document defined only one approach to security and compatibility, and other approaches may be available.”

Instead, the best-practices approach looks like it is intended to give “signals” from the major studios to content protection technology vendors, such as Microsoft, Irdeto, Intertrust, and Verimatrix, who work with content service providers.  These vendors will then presumably develop protection schemes that follow the best practices, with an understanding that studios will then agree to license their content to those services.

The result of this approach should be legal content services for next-generation video that get to market faster.  The best practices are independent of things like content delivery modalities (physical media, downloads, streaming) and largely independent of usage rules.  Therefore they should enable a wider variety of services than is possible with the traditional licensing authority paradigm.

Yet this approach has two drawbacks compared to the older approach.  (And of course the two approaches are not mutually exclusive.)  First is that it jeopardizes the interoperability among services that Hollywood craves — and has gone to great lengths to preserve in the UltraViolet standard.  Service providers and device makers can incorporate content protection schemes that follow MovieLabs’ best practices, but consumers may not be able to move content among them, and service providers will be able to use content protection schemes to lock users in to their services.  In contrast, many in Hollywood are now nostalgic for the DVD because, although its protection scheme was easily hacked, it guaranteed interoperability across all players (at least all within a given geographic region).

The other drawback is that the document is a wish list provided by organizations that won’t pay for the technology.  This means that downstream entities such as device makers and service providers will treat it as the maximum amount of protection that they have to implement to get studio approval.  Because there is no license agreement that they have to sign to get access to the technology, the downstream entities are likely to negotiate down from there.  (Such negotiation already took place behind the scenes during the rollout of Blu-ray, as player makers refused to implement some of the more expensive protection features and some studios agreed to let them slip.)

Downstream entities are particularly likely to push back against some of MovieLabs’s best practices that involve costs and potential impairments of the user experience; examples include device connectivity to networks for purposes of authentication and revocation, proactive renewal of device software, and embedding of situation-specific watermarks.

Surely the studios understand all this.  The publication of this document by MovieLabs shows that Hollywood is willing to entertain dialogues with service providers, device makers, and content protection vendors to speed up time-to-market of legitimate video services and ensure that downstream entities can innovate more freely.  How much protection will the studios ultimately end up with when 4k video reaches the mainstream?  It will be very interesting to watch over the next couple of years.

Copyright and Accessibility June 19, 2013

Posted by Bill Rosenblatt in Events, Law, Publishing, Standards, Uncategorized.
add a comment

Last week I received an education in the world of publishing for print-disabled people, including the blind and dyslexic.  I was in Copenhagen to speak at Future Publishing and Accessibility, a conference produced by Nota, an organization within the Danish Ministry of Culture that provides materials for the print-disabled, and the DAISY Consortium, the promoter of global standards for talking books.  The conference brought together speakers from the accessibility and mainstream publishing fields.

Before the conference, I had been wondering what the attitude of the accessibility community would be towards copyright.  Would they view it as a restrictive construct that limits the spread of accessible information, allowing it to remain in the hands of publishers that put profit first?

As it turns out, the answer is no.  The accessibility community, generally speaking, has a balanced view of copyright that reflects the growing importance of the print disabled to publishers as a business matter.

Digital publishing technology might be a convenience for normally sighted people, but for the print disabled, it’s a huge revelation.  The same e-publishing standards that promote ease of production, distribution, and interoperability for mainstream consumers make it possible to automate and thus drastically lower the cost and time to produce content in Braille, large print, or spoken-word formats.

Once you understand this, it makes perfect sense that the IDPF (promoter of the EPUB standards for e-books) and DAISY Consortium share several key members.  It was also pointed out at the conference that the print disabled constitute an audience that expands the market for publishers by roughly 10%.  All this adds up to a market for accessible content that’s just too big to ignore.

As a result, the interests of the publishing industry and the accessibility community are aligning.  Accessibility experts respect copyright because it helps preserve incentives for publishers to convert their products into versions for the print disabled.  Although more and more accessibility conversion processes can be automated, manual effort is still necessary — particularly for complex works such as textbooks and scientific materials.

Publishers, for their part, view making content accessible to the print disabled as part of the value that they can add to content — value that still can’t exist without financial support and investment.

One example is Elsevier, the world’s largest scientific publisher.  Elsevier has undertaken a broad, ambitious program to optimize its ability to produce versions of its titles for the print disabled.  One speaker from the accessibility community called the program “the gold standard” for digital publishing.  Not bad for a company that some in the academic community refer to as the Evil Empire.

This is not by any means to suggest that publishers and the accessibility community coexist in perfect harmony.  There is still a long way to go to reach the state articulated at the conference by George Kerscher, who is both Secretary General of DAISY and President of IDPF: to make all materials available to the print disabled at the same time, and for the same price, as mainstream content.

The Future Publishing and Accessibility conference was timed to take place just before negotiations begin over a proposed WIPO treaty that would facilitate the production of accessible materials and distribution of them across borders.  The negotiations are taking place this and next week in Marrakech, Morocco.  This proposed treaty is already laden with concerns from the copyright industries that its provisions will create opportunities for abuse, and reciprocal concerns from the open Internet camp that the treaty will be overburdened with restrictions designed to limit such abuse.  But as I found out in Denmark last week, there is enough practical common ground to hope that accessibility of content for the print disabled will continue to improve.

Withholding Ads from Illegal Download Sites January 7, 2013

Posted by Bill Rosenblatt in Standards.
add a comment

Over the past few months, the conversation about online infringement has shifted from topics like graduated response and DMCA-related litigation to cutting off ad revenue for sites that provide illegal downloads.  This issue gained some importance during the run-up to SOPA and PIPA in 2011.  David Lowery of The Trichordist gave it new visibility last year by initiating a stream of screenshots showing major consumer brands that advertise on sites like FilesTube, IsoHunt, and MP3Skull.

Last week, the issue took on a new level of importance when a report from the University of Southern California’s Annenberg Innovation Lab confirmed that major online ad networks, including those of Google and Yahoo, routinely place ads on pirate sites.  The Innovation Lab has come up with an automated way of tracking the ad networks that place ads on sites that (according to the Google Transparency Report) attract the most DMCA takedown notices.  The study ranks the top ten ad networks that serve ads on pirate sites and will be updated monthly.

The idea is to shame consumer brands by showing them that their ads are appearing on pirate sites amid ads for pornography, mail-order brides, etc.  The Annenberg study has already led to at least one major consumer brand insisting that its ads be pulled from pirate sites.

The focus on online ad networks is — as Lowery admitted at our Copyright and Technology NYC 2012 conference last month — not an ideal solution to the problem of online infringement but rather a “low hanging fruit” approach that appeals to real business imperatives without requiring lawyers or lobbyists.  It’s an acknowledgement that legislation to address online infringement is not going to be achievable, at least in the near future, in the aftermath of the defeats of SOPA and PIPA in early 2012.

Yet the tactic’s effectiveness is limited by the quality of information about ad buys that flows through ad networks.  For example, it’s sometimes not possible for an advertiser to know where its ads are being placed because, among other reasons, ad networks resell inventory to each other.

Let’s assume that most consumer brands would rather not have their ads placed on pirate sites.  Then two things are required to solve this problem.  One is standards for information about ad buys — advertiser identity, inventory, ad network, type of placement, and so on — and protocols for communicating that information up the chain of intermediaries from the website on which the ad was placed all the way up to the advertiser.  The other is agreement throughout the online ad industry to use such standards in communication and reporting.

If you follow efforts to develop standard content identifiers and online rights registries, you should see the analogy here.

The good news is that the Interactive Advertising Bureau (IAB) has been working on standards that look like they could apply here with some tweaks.   The IAB launched the eBusiness Interactive Standards initiative in 2008 with the goal of increasing efficiency and reducing errors in online ad workflows.  The eBusiness Interactive Standards spec defines XML structures for communicating information among advertisers, agencies, and publishers (websites) from RFPs (requests for proposal) through to IOs (insertion orders).

Now the bad news.  The IAB standards would need some modification to cover the requirements here: they don’t appear to work through multiple levels of ad networks, they don’t include globally unique identifiers for ad placements (though this would be simple to add), and they aren’t designed to cover performance reporting.  Furthermore, progress in getting the standard to market appears to be slow: it entered a beta phase with limited customers in 2011, and no progress has been apparent since then.

Yet even if the right standards were adopted, the advertising industry would still need to commit to the kinds of transparency that would be necessary to ensure, if an advertiser wishes, that its ads don’t appear on pirate sites.  For one thing, advertisers often buy “blind” a/k/a run-of-network inventory in order to get discount pricing, and there is no reliable way to ensure that such buys don’t inadvertently end up on pirate sites.  A related problem is where to draw the line between obvious pirate sites like the ones mentioned above and those that happen to occasionally host unauthorized material.

Regulatory initiatives seem unlikely here.  Indeed, the Obama Administration and Congress in 2011 asked the ad industry to adopt a “pledge” against advertising on pirate sites; the industry’s two major U.S. trade associations responded last May with a statement full of equivocation and wiggle-room.

Ultimately, the pressure would have to come from advertisers themselves.  They could demand, for example, that even blind buys not appear on the 200-250 sites that show more than 10,000 takedown notices a month in the Google Transparency Report (mainstream sites like Facebook, Tumblr, DailyMotion, Scribd, and SoundCloud fall well below this threshold).  A good set of technical standards for tracking and reporting would help convince them that they can demand to withhold their ads from these sites with a reasonable chance of success.
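
As a back-of-the-envelope illustration, the sketch below shows the kind of check an advertiser could demand once standardized placement data flows up the chain: reject any placement, even a blind buy resold through several networks, whose destination domain exceeds the takedown-notice threshold suggested above.  The record structure is invented for illustration.

```python
# Hypothetical sketch of the kind of check an advertiser could demand:
# refuse any placement (even a blind/run-of-network buy resold through a
# chain of ad networks) on a domain exceeding a takedown-notice threshold.
# The record fields are invented; the threshold mirrors the one suggested
# in the post (10,000 notices a month in the Google Transparency Report).

from dataclasses import dataclass

TAKEDOWN_THRESHOLD_PER_MONTH = 10_000

@dataclass
class Placement:
    advertiser: str
    network_chain: list          # every reseller the buy passed through
    placement_domain: str
    blind_buy: bool

def acceptable(placement: Placement, monthly_takedowns: dict) -> bool:
    """True unless the destination domain is over the takedown threshold."""
    notices = monthly_takedowns.get(placement.placement_domain, 0)
    return notices <= TAKEDOWN_THRESHOLD_PER_MONTH

if __name__ == "__main__":
    report = {"piratesite.example": 42_000, "mainstream.example": 300}
    buy = Placement("BigBrand", ["NetworkA", "ResellerB"],
                    "piratesite.example", blind_buy=True)
    print(acceptable(buy, report))   # -> False
```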


Reducing Complexity of Multiscreen Video Services with PlayReady and MPEG-DASH November 19, 2012

Posted by Bill Rosenblatt in DRM, Standards, Video.
4 comments

As I have worked with video service providers that are trying to upgrade their offerings to include online and mobile services, I’ve seen bewilderment about the maze of codecs, streaming protocols, and player apps as well as content protection technologies that those service providers need to understand.  Yet a development that took place earlier this month should help ease some of the complexity.

Microsoft’s PlayReady is becoming a popular choice for content protection.  Dozens of service providers use it, including BSkyB, Canal+, HBO, Hulu, MTV, Netflix, and many ISPs, pay TV operators, and wireless carriers.  PlayReady handles both downloads and streaming, and it is currently the only commercial DRM technology certified for use with UltraViolet (though that should change soon).  Microsoft has developed a healthy ecosystem of vendors that supply things like player apps for different platforms, “hardening” of client implementations to ensure robustness, server-side integration services, and end-to-end services.  And after years of putting in very little effort on marketing, Microsoft has finally upgraded its PlayReady website with information to make it easier to understand how to use and license the technology.

Streaming protocols are still a bit of an issue, though.  Several vendors have created so-called adaptive streaming protocols, which monitor the user’s throughput and vary the bit rate of the content to ensure optimal quality without interruptions.  Apple has HTTP Live Streaming (HLS), Microsoft has Smooth Streaming, Adobe has HTTP Dynamic Streaming (HDS), and Google has technology it acquired from Widevine.  Yet operators have been more interested in  Dynamic Adaptive Streaming over HTTP (DASH), an emerging vendor-independent MPEG standard.  The hope with MPEG-DASH is that operators can use the same protocol to stream to a wide variety of client devices, thereby making deployment of TV Everywhere-type services cheaper and easier.
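
The adaptive part is conceptually simple; the following toy function illustrates the common idea behind all of these protocols (pick the highest rendition the measured throughput can sustain before each segment request).  It is not any vendor's actual algorithm, which would also weigh buffer levels and smoothing.

```python
# Toy illustration of the adaptive-bitrate idea common to HLS, Smooth
# Streaming, HDS, and MPEG-DASH: before each segment request, pick the
# highest available rendition the measured throughput can sustain.
# Real players also consider buffer occupancy and smooth their estimates.

RENDITIONS_KBPS = [350, 800, 1800, 3500, 6500]   # ladder from the manifest

def pick_bitrate(measured_kbps: float, safety: float = 0.8) -> int:
    """Highest rendition at or below a safety fraction of throughput."""
    budget = measured_kbps * safety
    eligible = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(eligible) if eligible else min(RENDITIONS_KBPS)

print(pick_bitrate(5000))   # -> 3500
print(pick_bitrate(400))    # -> 350 (falls back to the lowest rendition)
```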

MPEG-DASH took a significant step towards real-world viability over the last few months with the establishment of the DASH Industry Forum, a trade association that promotes market adoption of the standard.  Microsoft and Adobe are members, though not Apple or Google, indicating that at least some of the vendors of proprietary adaptive streaming will embrace the standard.  The membership also includes a healthy critical mass of vendors in the video content protection space: Adobe, BuyDRM, castLabs, Irdeto, Nagra, and Verimatrix — plus Cisco, owner of NDS, and Motorola, owner of SecureMedia.

Adaptive streaming protocols need to be integrated with content protection schemes.  PlayReady was originally designed to work with Smooth Streaming.  It has also been integrated with HLS, which is probably the most popular of the proprietary adaptive streaming schemes.  Integration of PlayReady with MPEG-DASH is likely to be viewed as a safe choice, in line with the way the industry is going.  That solution came into view this month as BuyDRM and Fraunhofer IIS announced an integration of MPEG-DASH with PlayReady for the HBO GO service in Europe.  HBO GO is HBO’s “over the top” service for subscribers.
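
For readers who want to see what that integration looks like at the manifest level, here is a sketch of the ContentProtection signaling a DASH MPD uses to tell a player which DRM protects the stream.  The UUID shown is the commonly published PlayReady system identifier; a real manifest would also carry common-encryption signaling and PlayReady-specific initialization data, which are omitted here.

```python
# Sketch of how a DASH manifest (MPD) signals the DRM protecting a stream:
# a ContentProtection element whose schemeIdUri carries the DRM system's
# UUID. The UUID below is the commonly published PlayReady system ID; a
# real MPD would also include common-encryption signaling and PlayReady-
# specific initialization data, omitted here for brevity.

import xml.etree.ElementTree as ET

PLAYREADY_SYSTEM_ID = "9a04f079-9840-4286-ab92-e65be0885f95"

adaptation_set = ET.Element("AdaptationSet", {"mimeType": "video/mp4"})
ET.SubElement(adaptation_set, "ContentProtection", {
    "schemeIdUri": f"urn:uuid:{PLAYREADY_SYSTEM_ID}",
    "value": "PlayReady",
})
ET.SubElement(adaptation_set, "Representation", {
    "id": "video-1080p", "bandwidth": "3500000", "codecs": "avc1.640028",
})

print(ET.tostring(adaptation_set, encoding="unicode"))
```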

For the HBO GO demo, BuyDRM implemented a version of its PlayReady client that uses Fraunhofer’s AAC 5.1 surround-sound codec, which ships with devices that run Android 4.1 Jelly Bean.  The integration is being showcased with HD quality video on HBO’s “Boardwalk Empire” series. Users can connect Android 4.1 devices with the proper outputs — even handsets — to home-theater audio playback systems to get an experience equivalent to playing a Blu-ray disc.  The current implementation supports live broadcasting, with VOD support on the way shortly.

PlayReady integrated with MPEG-DASH is likely to be a popular choice for a variety of video service providers, ranging from traditional pay TV operators to over-the-top services like HBO Go.  BuyDRM and Fraunhofer’s deployment is an important step towards that choice becoming widely feasible.

UK IPO Publishes Digital Copyright Hub Report August 13, 2012

Posted by Bill Rosenblatt in Rights Licensing, Standards, UK.
add a comment

Last month, the UK Intellectual Property Office published a report called Copyright Works: streamlining copyright licensing for the digital age.  This is the second report in Richard Hooper CBE and Dr. Ros Lynch’s engagement with the UK IPO.  Hooper’s background includes positions at the top of the UK’s media and telecommunications industries; Lynch is a senior civil servant in the UK’s Department for Business, Innovation and Skills.

The second Hooper Report follows on the heels of several important developments in the UK regarding copyright in the digital age, most recently including the Digital Economy Act and the Hargreaves Review.  Having found (in the first Hooper Report) that the legal content marketplace is being held back by several obstacles, such as licensing difficulties, lack of standards, and deficiencies in both content and metadata, the second Hooper Report makes recommendations on how to solve the problems.

Unfortunately the recommendations in the second Hooper Report don’t go far enough.  Hooper and Lynch did a lot of research, talked to lots of people, and synthesized lots of information.  Most of their input appears to have come from established industry sources, including the major licensing entities in the UK, such as PRS and PPL (UK analogs to ASCAP and RIAA in the US); major media companies; trade associations; and standards initiatives engendered by the EU Digital Agenda such as the Global Repertoire Database (GRD) and Linked Content Coalition (LCC). They also researched important initiatives outside of the UK, such as the Copyright Clearance Center’s RightsLink service in the US.

Whereas the first Hooper Report established that major problems exist, this new report is best appreciated as a summary of the various initiatives being planned to solve pieces of them — such as the GRD and LCC.  Hooper and Lynch offer cogent explanations of problems to be solved: difficulty of licensing content into legitimate services, lack of complete and consistent information about content and rights, lack of standards for rights information and communication among relevant entities, resistance of collective licensing schemes to new business models, and the relative lack of content available for legal use through various channels.

The authors appear to understand that the various efforts being proposed are not going to solve all the problems by themselves.  On the other hand, they also understand problems of “not invented here,” and they take the pragmatic view that the best way forward is to work with existing standards and integrate them together rather than try to come up with some kind of overall solution that may not be practicable.

So far, so good; but that’s essentially where it all stops.  After explaining the problems and summarizing existing initiatives, the report tantalizingly lays out a vision for a Copyright Hub that will bring everything together.  It recommends government seed funding as a way of both kick-starting the Copyright Hub and ensuring that people work together to build it.

Unfortunately, the vision for the Copyright Hub turns out to be an inch deep.  It also lacks explanations of how, or if, all these initiatives — ranging from PRS and PPL’s efforts to offer “one-stop” music licenses all the way up through the technically sophisticated GRD — could fit together, or even how they map to the elements in the proposed Copyright Hub.  The LCC project is looking at technical aspects of the integration issue, but it is conceived as an enabler of standards, not as a marketplace solution.  It’s possible that such a solution is envisioned as a next step in the process. But the report betrays a lack of technical understanding that would have benefited both the analysis and the envisioning of solutions.

For example: The report has a section on digital images, which discusses the problem that many images are stripped of their rights metadata as part of normal publishing processes.  It discusses the possibility of using Internet-standard Uniform Resource Identifiers (URIs) to identify images and the work that entities such as Getty Images and the PLUS coalition are doing to create image registries and automate rights licensing.  But when put in this context, the solution to the metadata stripping problem is obvious: watermarking, the standard way of ensuring that data travels with content.  The problem can be solved with a standard watermarking scheme whose payload includes a serial identifier that can be used to reference a URI in a registry.  This is what the RIAA proposed for music in the U.S. in 2009, albeit to precious little fanfare; but Hooper and his people didn’t see it.  (They use the word “embed” without appearing to understand its meaning.)  There are other examples like this.
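
To show how little machinery the idea requires, here is a deliberately naive sketch: a serial identifier is embedded in an image (in the least significant bits of a few pixels, which only survives lossless formats) and later resolved against a rights registry to recover a licensing URI.  Production watermarks are robust to recompression and cropping, unlike this toy, and the registry URL is invented.

```python
# Deliberately naive sketch of the proposal above: embed a serial
# identifier in an image (here, in the least significant bits of the
# first 64 red-channel values) and resolve it against a rights registry
# to recover the licensing URI. Production watermarks survive
# recompression and cropping, unlike this toy; the registry URL is invented.

from PIL import Image

ID_BITS = 64

def embed_id(src_path: str, dst_path: str, image_id: int) -> None:
    img = Image.open(src_path).convert("RGB")
    pixels = list(img.getdata())
    bits = [(image_id >> (ID_BITS - 1 - i)) & 1 for i in range(ID_BITS)]
    for i, bit in enumerate(bits):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | bit, g, b)   # overwrite LSB of red channel
    img.putdata(pixels)
    img.save(dst_path, "PNG")                # lossless, so the bits survive

def extract_id(path: str) -> int:
    pixels = list(Image.open(path).convert("RGB").getdata())
    value = 0
    for i in range(ID_BITS):
        value = (value << 1) | (pixels[i][0] & 1)
    return value

def registry_uri(image_id: int) -> str:
    # Hypothetical registry lookup; a real one would be an HTTP query.
    return f"https://rights-registry.example/images/{image_id:016x}"
```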

The report mentions “long tail” licensing — not as in long tail content, but as in long tail uses of content rights.  The work that needs to be done should, the report rightly says, address the large and growing number of low-value licensing transactions rather than, say, Universal Music Group licensing to Spotify or Deezer (the kind of deal that will always get done the old-fashioned way).  Unfortunately, the authors don’t seem to have talked to many people who try to get such licensing.  They should, for example, have sought out startup companies that have to navigate the impenetrable maze of direct licensing deals with rights holders,  face the rigidity of collecting societies that won’t accommodate their innovative business models, and make separate deals in 27 member states to get a pan-European service launched.

Overall, the second Hooper Report reads like a particularly well-informed version of the typical industry response to a government body’s investigation into industry practices: look at all the steps we’re already taking to solve this problem; leave us alone.

As a result, the new Hooper Report is a solid foundation on which to build solutions, but it doesn’t provide enough forward direction.  It’s all very well to talk about respecting the growing body of valuable work that different organizations are doing to solve online content licensing problems, avoiding “not invented here,” promoting open standards, and so on.  But the work that must be done will necessarily include tasks that are tedious and contentious, aspects that the Hooper Report glosses over.

Metadata schemes will have to be rationalized against one another; gaps and incompatibilities will have to be identified and eliminated.  Rights holders whose metadata is incomplete or poor quality will have to be identified and given sufficient incentive to improve.  Well-intentioned standards initiatives with overlapping or conflicting goals will have to change.  Digital holdouts will have to be convinced to participate.  And the many organizations with vested interests in maintaining the status quo will have to be called out as part of the problem rather than the solution.  This may be ugly work, but it will have to get done.

The IDPF’s Lightweight Content Protection Standard for E-books May 31, 2012

Posted by Bill Rosenblatt in DRM, Publishing, Standards.
3 comments

I am working with the International Digital Publishing Forum (IDPF), helping them define a new type of  content protection standard that may be incorporated into the upcoming Version 3 of IDPF’s EPUB standard for e-books.  We’re calling this new standard EPUB Lightweight Content Protection (EPUB LCP).

EPUB LCP is currently in a draft requirements stage.  The draft requirements, along with some explanatory information, are publicly available; IDPF is requesting comments on them until June 8.  I will be giving a talk about EPUB LCP, and the state of content protection for e-books in general, at Book Expo America in NYC next week, during IDPF’s Digital Book Program on Tuesday June 5.

Now let’s get the disclaimer out of the way: the remainder of this article contains my own views, not necessarily those of IDPF, its management, or its board members.  I’m a consultant to IDPF; any decisions made about EPUB LCP are ultimately IDPF’s.  The requirements document mentioned above was written by me but edited by IDPF management to suit its own needs.

IDPF is defining a new standard for what amounts to a simple, lightweight, looser DRM.  EPUB is widely used in the e-book industry (by just about everyone except Amazon), but lack of an interoperable DRM standard has caused fragmentation that has hampered its success in the market. Frankly, IDPF blew it on this years ago (before its current management came in).  They bowed to pressures from online retailers and reading device makers not to make EPUB compliance contingent on adopting a standard DRM, and they considered DRM (understandably) not to be “low hanging fruit.”

IDPF first announced this initiative on May 18; it got press coverage in online publications such as Ars Technica, PaidContent.org, and others.  The bulk of the comments were generally “DRM sucks no matter what you call it” or “Why bother with this at all, it won’t help prevent any infringement.”  A small number of commenters said something on the order of “If there has to be DRM, this isn’t a bad alternative.”  One very knowledgeable commenter on Ars Technica first judged the scheme to be pointless because it’s cryptographically weak, then came around to understanding what we’re trying to do and even offered some beneficial insights.

The draft requirements document provides the basic information about the design; my main purpose here is to focus more on the circumstances and motivation behind the initial design choices.

Let’s start at a high level, with the overall e-book market.  (Those of you who read my article about this on PaidContent.org a few months ago can skip this and the next five paragraphs.)  Right now it’s at a tipping point between two outcomes that are both undesirable for the publishing industry.  The key figure to watch is Amazon’s market share, which is currently in the neighborhood of 60%; Barnes and Noble’s Nook is in second place with share somewhere in the 25-30% range.

One outcome is Amazon increasing its market share and entering monopoly territory (according to the benchmark of 70% market share often used in US law).  If that happens, Amazon can do to the publishing industry what Apple did with music downloads: dominate the market so much that it can both dictate economic terms and lock customers in to its own ecosystem of devices, software, and services.

The other outcome is that Amazon’s market share falls, say to 50% or lower, due to competition.  In that case, the market fragments even further, putting a damper on overall growth in e-reading.  Also not good for publishers.

Let’s look at what happens to DRM in each of these cases.  In the first (Amazon monopoly) case, Amazon may drop DRM just as Apple did for music — but it will be too late: Amazon will have achieved lock-in and can preserve it in other ways, such as by making it generally inconvenient for users to use other devices or software to read Amazon e-books.  Other e-book retailers would then drop DRM as well, but few will care.

In the second case, everyone will probably keep their DRMs in order to keep users from straying to competitors (though some individual publishers will opt out of it).  In other words, if the DRM status quo remains, the likely alternatives are DRM-free monopoly or DRM and fragmentation.

If IDPF had included an interoperable DRM standard back in 2007 when both EPUB and the Kindle launched, e-books might well be more portable among devices and reading software than they are now.  Yet the most desirable outcome for the reading public is 100% interoperability, and we know from the history of technology markets (with the admittedly major exception of HTML) that this is a chimera.  (Again, I explained this in PaidContent.org a few months ago.)

To many people, the way out of this dilemma is obvious: everyone should get rid of DRM now.  That certainly would be good for consumers.  But most publishers — who control the terms by which e-books are licensed to retailers —  don’t want to do this; neither do many authors, who own copyrights in their books.

E-book retailers and device vendors can get lock-in benefits from DRM.  As for whether DRM does anything to benefit rights holders by improving consumers’ copyright compliance or reducing infringement, that’s a real question.  Notwithstanding the opinions of the many self-styled experts in user behavior analysis and infringement data collection among the techblogorati and commentariat, the answer is unknown and possibly unknowable.  Publishers are motivated to keep DRM if for no other reason than fear that once it goes away, they can never bring it back.  Moreover, certain segments of the publishing industry (such as higher education) want DRM that’s even stronger than the current major schemes.

The fact is, none of the major DRMs in today’s e-book market are very sophisticated — at least not compared to content protection technologies used for video content.  The economics of the e-book industry make this impossible: the publishers and authors who want DRM don’t pay for it, resulting in cost and complexity constraints.  DRM helps retailers insofar as it promotes lock-in, but it doesn’t help them protect their overall services.  In contrast, content protection helps pay TV operators (for example) protect their services, which they want protected just as much as Hollywood doesn’t want its content stolen; so they’re willing to pay for more sophisticated content protection.

The two leading e-book DRMs right now are Amazon’s Mobipocket DRM and Adobe’s Content Server 4; the latter is used by Barnes & Noble, Sony, and various others.  Hackers have developed what I call “one-click hacks” for both.  One-click hacks meet three criteria: people without special technical expertise can use them; they work on any file that’s packaged in the given DRM; and they work permanently (i.e., there is no way to recover from them).  In contrast, pay TV content protection schemes are generally not one-click-hackable.

In other words, one-click DRM hacks are like format converters: the one built into Microsoft Word that converts files from WordPerfect, or the ones built into photo editing utilities that convert TIFF to JPEG.  But there’s a difference: DRM hacks are illegal in many countries, including the United States, European Union member states, Brazil, India, Taiwan, and Australia; all other signatories to the Anti-Counterfeiting Trade Agreement will eventually have so-called anticircumvention laws too.

The effect of anticircumvention law has been to force DRM hacks into the shadows, making them less easily accessible to the non-tech-savvy and at least somewhat stigmatized.  Without the law, we would have things like Nook devices and software with “Convert from Kindle Format” options (and vice versa).  The popular, free Calibre e-book reading app, for example, had a DRM stripper but removed it (presumably under legal pressure) in 2009.  A DRM removal plug-in for Calibre is available, but it’s not an official one; David Pogue of the New York Times — hardly a fan of DRM — recently dismissed it as difficult to use as well as illegal.

The US has a rich case history around anticircumvention law that has made the boundaries of legal acceptability reasonably clear.  It has shut off the availability of hacks from “legitimate” sources and ensured that if your hack is causing enough trouble, you will be sued out of existence.  I am not personally a fan of anticircumvention law, but I accept as fact that it has made hacks less accessible to the general public.

The foregoing line of thought got IDPF Executive Director Bill McCoy and me talking last year about what IDPF might be able to do about DRM in the upcoming version of EPUB, in order to help IDPF further its objective of making EPUB a universal standard for digital publishing and forestall the two undesirable market trajectories described above.  We did not set out to design an “ultimate DRM” or even “yet another DRM”; we set out to design something intended to solve problems in the digital publishing market while working within existing marketplace constraints.

So now, with that background, here is a set of interrelated design principles we established for EPUB LCP:

  1. Require interoperability so that retailers cannot use it to promote lock-in.  This is what the UltraViolet standard for video is attempting to do, albeit in a technically much more complex way.  The idea of UltraViolet is to provide some of the interoperability and sharing features that users want while still maintaining some degree of control.  Our theory is that both publishers and e-book retailers would be willing to accept a looser form of DRM that could break the above market dilemma while striking a similar balance between interoperability and control.
  2. Support functions that users really want, such as “social” sharing of e-books. Build on the idea of e-book watermarking, such as that used in Safari Books Online for PDF downloads and in the Pottermore Store for EPUB format e-books: embed users’ personal information into the content, on the expectation that users will only share files with people whom they trust not to abuse their personal information.
  3. Create a scheme that can support non-retail models such as library lending and can be extended to support additional business models (see below) or the stronger security that industry segments such as higher ed need.
  4. Include the kinds of user-friendly features that Reclaim Your Game has recommended for video game DRMs.  These include respecting privacy by not “phoning home” to servers and ensuring permanent offline use so that files can be used even if the retailer goes out of business.  They also include not jeopardizing the security or integrity of users’ devices, as in the infamous “rootkit” installed by CD copy protection technology for music several years ago.
  5. Eliminate design elements that add disproportionately to cost and complexity.  Perhaps the biggest of these are the so-called robustness rules that have become standard elements of DRMs such as OMA DRM, Marlin, and PlayReady, where the DRM technology licensor doesn’t own the hardware or platform software.  Eliminating “phoning home” also saves cost and complexity.  Other elements to be eliminated include key revocation, recoverability, and fancy authentication schemes such as the domain authentication used in UltraViolet.
  6. Finally, don’t try very hard to make the scheme hack-proof.  The strongest protection schemes for commercial content — such as those found in pay television — are those that minimize the impact of hacks so that they are temporary and recoverable; such schemes are too complex, invasive, and expensive for e-book retailers or e-reader makers to consider.  Instead, assume that EPUB LCP will be hacked, and rely on two things to blunt the impact: anticircumvention law, and allowing enough differences among implementations that each one requires its own hack (a form of what security technologists call “code diversity”).

With those design principles in mind, we have designed a scheme that takes its inspiration from two sources in particular: the content protection technology used in the eReader/FictionWise e-book technology that is now owned by Barnes & Noble, and the layered functionality concept built into the Digital Media Project’s IDP (Interoperable DRM Platform) standard.

The central idea of EPUB LCP is a passphrase supplied by the user or retailer.  This could be an item of personal information, such as a name, email address, or even credit card number; distributors or rights holders can decide what types of passphrases to use or require.  The passphrase is irreversibly obfuscated (e.g., through a hash function), so that even if a hack recovers the obfuscated value from the file, it won’t reveal the personal information; yet the retailer can link the obfuscated passphrase to the user.  The obfuscated passphrase is then embedded into the e-book file.  If the user wants to share an e-book, all she has to do is share the passphrase.  Otherwise, the content must be hacked to be readable.
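
Purely for illustration, here’s a minimal sketch in Python of the kind of one-way obfuscation described above.  The use of a salted SHA-256 HMAC, the function names, and the simple pass/fail check are my own assumptions for the sake of the example; the actual draft requirements may call for something different, such as deriving a content-encryption key from the passphrase rather than merely gating access.

    import hashlib
    import hmac

    RETAILER_SALT = b"example-retailer-salt"  # hypothetical retailer-specific value

    def obfuscate_passphrase(passphrase):
        """One-way transform of the user's passphrase (e.g., an email address).
        The output can be embedded in the e-book file and stored by the retailer,
        but the personal information cannot be recovered from it."""
        digest = hmac.new(RETAILER_SALT, passphrase.encode("utf-8"), hashlib.sha256)
        return digest.hexdigest()

    def can_unlock(embedded_value, passphrase_attempt):
        """A reading system would render the content only if the passphrase the
        user supplies obfuscates to the value embedded in the file."""
        return hmac.compare_digest(embedded_value,
                                   obfuscate_passphrase(passphrase_attempt))

    # The retailer embeds this value at fulfillment time...
    embedded = obfuscate_passphrase("jane.reader@example.com")
    # ...and the reading app later checks whatever the user types in.
    assert can_unlock(embedded, "jane.reader@example.com")
    assert not can_unlock(embedded, "someone.else@example.com")

The point is simply that the value embedded in the file lets the retailer trace a shared copy back to a customer without exposing that customer’s personal information in the clear.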

Other aspects of the draft requirements are covered in the document on the IDPF website.  Apart from that, it’s worth mentioning that this type of scheme will not support certain content distribution models unless extensions are added to make them possible.  Features intentionally left out of the basic EPUB LCP design include:

  • Separate license delivery, which allows different sets of rights for a given file
  • License chaining, which supports subscription services
  • Domain authentication, which can support multi-device/multi-user “family accounts” a la UltraViolet
  • Master-slave secure file transfer, for sideloading onto portable devices, a la Windows Media DRM
  • Forward-and-delete, to implement “Digital Personal Property” a la the IEEE P1817 standard

Once again, we set out to design something that meets current market needs and works within current market constraints; EPUB LCP is not a research-lab R&D project.

Again, I’ll be discussing this, as well as the landscape for e-book content protection in general, at Book Expo America next week.  Feel free to come and heckle (or just heckle in the comments right here).  I’m sure I will have more to report as this very interesting project develops.

Creative Commons for Music: What’s the Point? January 22, 2012

Posted by Bill Rosenblatt in Law, Music, Rights Licensing, Services, Standards.
23 comments

I recently came across a music startup called Airborne Music, which touts two features: a business model based on “subscribing to an artist” for US $1/month, and music distributed under Creative Commons licenses.  Like other music services that use Creative Commons, Airborne Music appeals primarily to indie artists who are looking to get exposure for their work.  This got me thinking about  how — or whether — Creative Commons has any real economic value for creative artists.

I have been fascinated by a dichotomy of indie vs. major-label music: indie musicians value promotion over immediate revenue, while for major-label artists it’s the other way around.  (Same for book authors with respect to the Big 6 trade publishers, photographers with respect to Getty and Corbis, etc.)  Back when the major labels were only allowing digital downloads with DRM — a technology intended to preserve revenue at the expense of promotion — I wondered if those few indie artists who landed major-label deals were getting the optimal promotion-versus-revenue tradeoffs, or if this issue even figured into major-label thinking about licensing terms and rights technologies.

When I looked at Airborne Music, it dawned on me that Creative Commons is interesting for indie artists who want to promote their works while preserving the right (if not the ability) to make money from them later.  The Creative Commons website lists ten existing sites that enable musicians to distribute their music under CC, including big ones like the bulge-bracket-funded startup SoundCloud and the commercially-oriented BandCamp.

This is an eminently practical application of Creative Commons’s motto: “Some rights reserved.”  Many CC-licensing services use the BY-SA (Attribution-ShareAlike) Creative Commons license, which gives you the right to copy and distribute the artist’s music as long as you attribute it to the artist and redistribute (i.e., share) it under the same terms.  That’s exactly what indie artists want: to get their content distributed as widely as possible while making sure that everyone knows it’s their work.  Some use BY-NC-SA (Attribution-NonCommercial-ShareAlike), which adds the condition that you can’t exploit the content commercially, meaning that the artist preserves her ability to make money from it later.

It sounds great in theory.  It’s just too bad that there isn’t a way to make sure that those rights are actually respected.  There is a rights expression language for Creative Commons (CC REL), which makes it possible for content rendering or editing software to read the license terms (expressed in RDFa) and act accordingly.  As a technology, the REL concept originated with Mark Stefik at Xerox PARC in the mid-1990s; the eminent MIT computer scientist Hal Abelson created CC REL in 2008.  Since then, the Creative Commons organization has maintained something of an arm’s-length relationship with CC REL: it describes the language and offers links to information about it, but it doesn’t (for example) include CC REL code in the actual licenses it offers.

More to the point, while there are code libraries for generating CC REL code, I have yet to hear of a working system that actually reads CC REL license terms and acts on them.  (Yes, this would be extraordinarily difficult to achieve with any completeness, e.g., taking Fair Use into account.)
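
To make concrete what “reading CC REL and acting on it” might look like, here’s a small Python sketch that uses the rdflib library to parse a hand-written RDF/XML description of a BY-NC-SA-style license and answer one narrow question: is a given use allowed?  The license data and the decision logic are my own illustrative assumptions (though cc:permits, cc:requires, and cc:prohibits are real CC REL vocabulary); in particular, the sketch ignores Fair Use entirely, which is exactly the hard part.

    from rdflib import Graph, Namespace, URIRef

    CC = Namespace("http://creativecommons.org/ns#")

    # Hand-written license description for the example; not fetched from creativecommons.org.
    LICENSE_RDF = """
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:cc="http://creativecommons.org/ns#">
      <cc:License rdf:about="http://creativecommons.org/licenses/by-nc-sa/3.0/">
        <cc:permits rdf:resource="http://creativecommons.org/ns#Reproduction"/>
        <cc:permits rdf:resource="http://creativecommons.org/ns#Distribution"/>
        <cc:permits rdf:resource="http://creativecommons.org/ns#DerivativeWorks"/>
        <cc:requires rdf:resource="http://creativecommons.org/ns#Attribution"/>
        <cc:requires rdf:resource="http://creativecommons.org/ns#ShareAlike"/>
        <cc:prohibits rdf:resource="http://creativecommons.org/ns#CommercialUse"/>
      </cc:License>
    </rdf:RDF>
    """

    def is_allowed(license_uri, intended_use, commercial):
        """Read the license terms and decide whether the intended use is permitted."""
        g = Graph()
        g.parse(data=LICENSE_RDF, format="xml")
        lic = URIRef(license_uri)
        permits = set(g.objects(lic, CC.permits))
        prohibits = set(g.objects(lic, CC.prohibits))
        if commercial and CC.CommercialUse in prohibits:
            return False
        return intended_use in permits

    BY_NC_SA = "http://creativecommons.org/licenses/by-nc-sa/3.0/"
    print(is_allowed(BY_NC_SA, CC.DerivativeWorks, commercial=True))   # False
    print(is_allowed(BY_NC_SA, CC.DerivativeWorks, commercial=False))  # True

A mashup tool or music service could, in principle, run a check like this before letting a user export a commercial remix.  As far as I know, nothing in the market actually does.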

Without a real enforcement mechanism, CC licenses are little more than labels, like the garment care hieroglyphics mandated by the Federal Trade Commission in the United States.  For example, some BY-SA-licensed music tracks may end up in mashups.  How many of those mashups will attribute the source artists properly?  Not many, I would guess.  Conversely, what really prevents someone who gets music licensed under ND (No Derivative Works) terms from remixing or excerpting it in ways that aren’t considered Fair Use?  Are these people really afraid of being sued?  I hardly think so.

This trap door into the legal system, as I have called it, makes Creative Commons licensing of more theoretical than practical interest.  The practical value of CC seems to be concentrated in business-to-business content licensing agreements, where corporations need to take more responsibility for observing licensing terms and CC’s ready-made licenses make it easy for them to do so.  The music site Jamendo is a good example of this: it licenses its members’ music content for commercial sync rights to movie and TV producers while making it free to the public.

Free culture advocates like to tell content creators that they should give up control over their content in the digital age.  As far as I’m concerned, anyone who claims to welcome the end of control and also supports Creative Commons is talking out of both sides of his mouth.  If you use a Creative Commons license, you express a desire for control, even if you don’t actually get very much of it.  What you really get is a badge that describes your intentions — a badge that a large and increasing number of web-savvy people recognize.  Yet as a practical matter, a Creative Commons logo on your site is tantamount to a statement to the average user that the content is free for the taking.

The truth is that sometimes artists benefit most from a lack of control over their content, while at other times they benefit from more control.  The copyright system is supposed to make sure that the public’s and creators’ benefits from creative works are balanced in order to optimize creative output.  Creative Commons purports to provide a simple means of redressing what its designers believe is a lack of balance in current copyright law.  But to be attractive to artists, CC needs to give them ways of determining their level of control that the copyright system itself does not offer.

In the end, Creative Commons is a burglar alarm sign on your lawn without the actual alarm system.  You can easily buy fake alarm signs for a few dollars, whereas real alarm systems cost thousands.  It’s the same with digital content.  At least Creative Commons, like almost all of the content licensed with it, is free.

(I should add that I wear the badge myself.  My whitepapers and this blog are licensed under Creative Commons BY-NC-ND (Attribution-Noncommercial-No Derivative Works) terms.  I would at least rather have the copyright-savvy people who read this know my intentions.)
