Images, Search Engines, and Doing the Right Thing

A recent blog post by Larry Lessig pointed out that the image search feature in Microsoft’s Bing allows users to filter search results so that only images with selected Creative Commons licenses appear.  A commenter on the post found that Google also has this feature, albeit buried deep in the Advanced Search menu (see for example here; scroll down to “usage rights”).  These features offer a tantalizing glimpse at an idea that has been brewing for years: the ability to license content directly and automatically through web browsers.

Let’s face it: most people use image search to find images to copy and paste into their PowerPoint presentations (or blog posts or web pages).  As Lessig points out, these features help users to ensure that they have the rights to use the images they find in those ways.  But they don’t help if an image is licensable in any way other than a subset of Creative Commons terms — regardless of whether royalties are involved.  I’d stop short of calling this discrimination against image licensors, but it certainly doesn’t help them.

Those who want to license images for commercial purposes — such as graphic artists laying out advertisements — typically go to stock photo agencies like Getty Images and Corbis, which have powerful search facilities on their websites.  Below Getty and Corbis, the stock image market is fragmented into niches, mostly by subject matter.  Graphic artists have to know where to go to find the kinds of images they need.

There is a small “one-stop” search engine for images called PictureEngine, which includes links to licensing pages in its search results.  But it would surely be better if the mainstream search engines’ image search functions included the ability to display and filter on licensing terms, and to enable links to licensing opportunities.

There have been various attempts over the years to make it easy for users to “do the right thing,” i.e. to license commercial content that they find online and intend to copy and use for purposes that aren’t clearly covered under fair use or its equivalents.  Most such efforts have focused on text content.  The startup QPass had a vogue among big-name newspaper and magazine brands during the first Internet bubble: it provided a service for publishers to sell archived articles to consumers for a dollar or two apiece.  It effectively disappeared during the post-bubble crash.  iCopyright, which has been around since the late 1990s, provides a toolbar that publishers can use on web pages to offer various licensing options such as print, email, republish, and excerpt; most of its customers are B2B publishers like Dow Jones and Investor’s Business Daily.

Images are just as easy as text to copy and paste from web pages (unlike, say, audio and video), and they are more discrete units of content, so it ought to be easier to automate licensing of them.  When you copy and paste an image that carries a Creative Commons license, you’re effectively getting a license, because the license can be expressed in XML-based metadata (XMP) embedded in the image file.
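To make that concrete, here is a minimal sketch (in Python) of how a tool might check a downloaded image for an embedded Creative Commons license.  It assumes the license is carried in an XMP packet inside the file; in practice many pages declare the license in page markup instead, and a real tool would use a proper XMP/RDF parser rather than a byte scan.

```python
# Minimal sketch (not production code): look for a Creative Commons license
# URL inside an image file's embedded XMP packet. Assumes the license is
# embedded in the file itself; CC licenses are often declared in page markup
# instead, and a real tool would use a full XMP/RDF parser.
import re
import sys

def find_cc_license(path):
    """Return the first creativecommons.org license URL embedded in the file, or None."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP packets are bracketed by <?xpacket begin= ... <?xpacket end= markers.
    start = data.find(b"<?xpacket begin=")
    end = data.find(b"<?xpacket end=")
    if start == -1 or end == -1 or end <= start:
        return None
    xmp = data[start:end].decode("utf-8", errors="ignore")
    match = re.search(r"https?://creativecommons\.org/licenses/[^\s\"'<>]+", xmp)
    return match.group(0) if match else None

if __name__ == "__main__":
    result = find_cc_license(sys.argv[1])
    print(result or "No embedded Creative Commons license found")
```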

If search engines can index Creative Commons terms embedded within images that they find online, they ought to be able to index other licensing terms, including commercial terms.  The most prominent standard for image rights is PLUS (Picture Licensing Universal System), but that’s intended for describing rights in B2B licensing arrangements, not licensing to the general public; and I’m not aware of any efforts the PLUS Coalition has made to integrate with web search engines.

No, the solution to this problem is not only clear but has been in evidence for years: augment Creative Commons so that it can handle commercial as well as noncommercial licensing.  Creative Commons flirted with this idea several years ago with something called CC+ (CCPlus), a way to attach additional terms to Creative Commons licenses, with commercial licenses particularly in mind.
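For the technically inclined: as I understand it, CC+ terms are expressed through ccREL, with rel="license" naming the base CC license and cc:morePermissions pointing at a page where the additional (e.g. commercial) terms can be obtained.  The sketch below just generates such a markup fragment; the URLs are illustrative placeholders, not real licensing endpoints.

```python
# Sketch of the kind of markup CC+ contemplates, as I understand ccREL:
# rel="license" names the base Creative Commons license, and
# rel="cc:morePermissions" points to where additional (e.g. commercial)
# terms can be obtained. All URLs below are illustrative placeholders.
CC_NS = "http://creativecommons.org/ns#"

def cc_plus_fragment(img_src, license_url, more_permissions_url):
    """Return an HTML/RDFa fragment advertising a CC license plus extra terms."""
    return (
        f'<div xmlns:cc="{CC_NS}">\n'
        f'  <img src="{img_src}" alt="" />\n'
        f'  <a rel="license" href="{license_url}">Some rights reserved</a>\n'
        f'  <a rel="cc:morePermissions" href="{more_permissions_url}">'
        f'Get a commercial license</a>\n'
        f'</div>'
    )

if __name__ == "__main__":
    print(cc_plus_fragment(
        "photo.jpg",
        "http://creativecommons.org/licenses/by-nc/3.0/",
        "https://example.com/buy-license/photo",  # hypothetical licensing page
    ))
```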

Although Creative Commons has its detractors among the commercial content community (mostly lawyers who feel disintermediated — which is part of the point), I have heard major publishers express admiration for it as well as interest in finding ways to use it with their content.  At this point, the biggest obstacle to extending Creative Commons to commercial licensing is the Creative Commons organization’s own lack of interest in doing so.  Creative Commons’ innovations have put it at the center of the copyright world on the Internet; it would be a shame if its refusal to acknowledge that some people would like to get paid for their work ended up closing off that possibility.

4 comments

  1. If people want to propose standards similar to Creative Commons to specify standard payments for particular kinds of uses of works, where it would be easy for people and programs to interpret them along with CC licenses, I’m fine with that.

    What I don’t get from this piece is why Creative Commons itself should develop them, or lend its name to them. It would not only dilute the free-culture aspect of its mission that appeals to many of its supporters, but also take up a lot of its resources to develop. (Any license that requires some sort of compensation to be delivered back to a claimant is inherently more complex, both technically and socially, than licenses like CC’s that don’t require a two-way transaction.)

    I could see it potentially being useful for CC to participate in the development of a transactional licensing framework as an interested but external party, to ensure that CC metadata is easily interoperable with that framework. Beyond that, though, I’m not sure I see the win for CC here.

  2. John,

    Thanks, I agree with all of this. Standards for commercial licensing could be developed that build on CC, but there is no reason why CC itself should be involved other than to make sure that the relevant CC tools interoperate with them.

    In fact, what you describe in your last paragraph is roughly what the Copyright Clearance Center attempted to do in cooperation with CC, which led to the development of CC+. As best I remember (CCC folks, feel free to correct me), CCC used CC+ as a component of an experimental service called Ozmo, which individual content creators (e.g. bloggers) could use to offer commercial licenses to their content rather than just sitting by while people copy and paste from it without permission (if they chose not to give permission via one of the traditional CC licenses). CCC launched Ozmo and eventually dropped it, though it was evidently never intended as anything more than an experiment, a beta.

    Around the same time (2008) there was also a startup called RightsAgent, which was started by John Palfrey, then of the Berkman Center at Harvard. This too made use of CC+ and also ceased operations.

    I think the bottom line with all of these ventures, and similar predecessors (if you’re old enough, you may remember the “content syndication” flavor-of-the-month around 1999), is that there isn’t enough of a market for legal transactions in individual small items of content. And at the higher end, entities like Getty Images and Corbis prefer to keep all of the searching and transaction processing within their own sandboxes rather than let them integrate with search engines or the web in general.

    At the same time, remember what happened when “free software” morphed into “open source.” The transition implied an acknowledgement that someone ought to be able to make money from the stuff: a shift in culture or attitude that has led to things like a choice between LGPL and GPLv3 licenses for open-source developers and huge payouts for companies like Red Hat and MySQL. I don’t get anything like this from what I see of Creative Commons.

  3. I think projects like Ontologyx will have an impact on this issue; different RELs and rights dictionaries should be interoperable in the near future: http://www.rightscom.com/Default.aspx?tabid=1067
    BIBFRAME is moving in that direction, with serious implications for licensing.
    Jhonny

  4. Hi Jhonny,

    FYI, REL interoperability has been out there as an unsolved problem for several years, at least since roughly ten years ago when MPEG ratified a variant of Xerox/ContentGuard’s XrML as a “standard” REL. The INDECS framework, on which Ontologyx is based, was designed (among other things) as an attempt to solve the interoperability problem, but it hasn’t had much success in that regard and the world has sort of moved on. It’s just a hard problem.

    Having said that, I am unfamiliar with BIBFRAME and will check it out.
