Jaron Lanier’s Blanket Licensing Scheme

Pro-copyright people complain about the ways that technology companies make use of commercial content without compensating creators.  The artists’ community also bemoans the small royalty payments it receives from legal services like Spotify and YouTube.

Various solutions have been proposed to these problems; most of them are narrow and ad hoc in nature: lawsuits, statutory licenses, and so on.  Now along comes Jaron Lanier, author of the groundbreaking 2010 book You Are Not a Gadget, with a much more radical and fundamental proposal for compensating content creators.  In his new book, Who Owns the Future?, Lanier suggests this: let’s pay everyone for every piece of data and content they create.  Not just music tracks or videos, and not even just blog posts and Flickr photos; Lanier includes social network posts, search commands, online behavior tracking data, outputs from net-connected health devices, etc., etc.

In Lanier’s view, the current world is ruled by “Siren Servers” such as Google and Facebook that force users to accept their terms of use — which usually include lots of free stuff in exchange for uncompensated use of users’ data and compromised privacy.  Instead, Lanier proposes something he calls a “humanistic economy,” in which everyone receives a small payment for every byte of data they produce, from whoever uses that data.  Lanier claims that this type of economy will work better for everyone than the current state of affairs in which Siren Servers benefit at everyone else’s expense.

In his previous book, Lanier lamented the collapse of a middle class of content creators — musicians, journalists, authors, photographers — as value shifts from them to what he then called the “lords of the cloud.”  In Who Owns the Future?, he generalizes this observation beyond media.  He asserts that when Silicon Valley disrupts any industry, it eliminates jobs and arrogates wealth into the hands of the few Siren Servers in that industry — the entities that are the “most meta” and control that industry’s essential information.  He points to finance and healthcare as industries that have already been affected by this type of disruption.  He claims that a humanistic economy would ensure the continued existence of a robust middle class and provide boundless economic growth.

How would such a system work?  In terms of technology, it would be based on principles set out in the 1960s and 70s — decades before the commercial Internet — by a tech visionary named Ted Nelson.  Nelson, who coined the term “hypertext” (the HT in HTML) in the early 1960s, designed a system of networked digital information called Xanadu.  Unlike the one-way, typeless (semantics-free), breakable HTML links on the Internet, Xanadu links are bidirectional, semantically rich, and permanent.

In Xanadu, every piece of content would appear online only once.  You could link to a content item (and thereby possibly use it as part of your own content), but the author of the content could also trace the link back to you.  Nelson also envisioned a payment system in which following links would trigger royalty payments; he described this in a paper given at the Technological Strategies for Protecting Intellectual Property in the Networked Multimedia Environment conference, a 1993 event in Washington, DC that I consider to be the birthplace of digital rights management as a field of study.

(Lanier suggests that the inventors of the Internet as we know it, such as Tim Berners-Lee, were very familiar with Nelson’s work but chose to ignore it because fragile one-way links were so much easier to implement than Nelsonian links.)

In essence, Lanier’s Nelsonian information economy would function as the mother of all blanket licensing schemes.  Bidirectional links would ensure attribution and payment.  Moreover, because everything is linked (in a network assumed to be ubiquitous), there would be no need to make copies of anything, except possibly for backup purposes.  In other words, this system would render copyright irrelevant.  Imprecise concepts like fair use and hardware levies would be subsumed into rules for payment.
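
To make the mechanics concrete, here is a minimal sketch (in Python) of how a registry of bidirectional, typed links could tie attribution and micropayments to link traversal.  It is purely illustrative; neither Xanadu nor Lanier specifies an implementation, and every name in it (ContentItem, LinkRegistry, and so on) is invented for this example.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ContentItem:
        item_id: str          # the single canonical instance of the content
        author: str
        rate_per_use: float   # micropayment owed each time the item is reached via a link

    @dataclass(frozen=True)
    class Link:
        source_id: str        # the work that quotes, remixes, or otherwise uses the item
        target_id: str        # the canonical item being used
        link_type: str        # semantic type, e.g. "quotation", "remix", "citation"

    class LinkRegistry:
        """Tracks canonical items, bidirectional links, and royalties accrued on traversal."""

        def __init__(self):
            self.items = {}                      # item_id -> ContentItem
            self.links_from = defaultdict(list)  # forward links, keyed by source_id
            self.links_to = defaultdict(list)    # back-links, keyed by target_id
            self.royalties = defaultdict(float)  # author -> total owed

        def register(self, item):
            self.items[item.item_id] = item

        def link(self, source_id, target_id, link_type):
            # Every link is recorded in both directions, so the author of the
            # target can always trace who is using the work.
            lnk = Link(source_id, target_id, link_type)
            self.links_from[source_id].append(lnk)
            self.links_to[target_id].append(lnk)

        def follow(self, lnk):
            # Following a link credits a royalty to the target's author
            # rather than copying the content.
            target = self.items[lnk.target_id]
            self.royalties[target.author] += target.rate_per_use
            return target

    registry = LinkRegistry()
    registry.register(ContentItem("song-1", "alice", rate_per_use=0.001))
    registry.link("essay-7", "song-1", "quotation")
    registry.follow(registry.links_to["song-1"][0])
    print(registry.royalties["alice"])   # 0.001

The point of keeping both the forward and the reverse index is that a link is as visible from the target as from the source; that single property is what lets attribution and payment ride on traversal rather than on copying.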

Lanier’s book explores the implications of the Nelsonian system for business, creativity, and other aspects of life in ways that are imaginative and engaging.  He admits that his ideas are presented as “hypotheticals, speculation, advocacy, and the invocation of hope.”  Even so, the pragmatics of Lanier’s humanistic economy are virtually nonexistent.

The architecture of the Internet would need to change to support and enforce the Nelsonian linking system; how could this happen?  Who would decide on rules and standards for paying for content?  And who would actually want this scheme?

Lanier takes faint stabs at these fundamental questions.  He devotes a chapter to the question of what forces could bring about such changes: a large band of hackers, startups, “Siren Server” companies, or government.  He admits that none of these is a strong possibility.  He suggests that payment rules could be set essentially by market forces acting over time (a long time, certainly way too long for investors), with software agents available to help everyday users set prices for their own data according to their desires for money, privacy, altruism, and notoriety — all of which trade off against each other.  Yet no matter how such rules come into being, they would be orders of magnitude more complex than any of today’s statutory or bilateral commercial content licenses.
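
Lanier doesn’t say how those agents would work, so the following toy sketch (in Python) is entirely an invention of mine; it only illustrates the kind of tradeoff he gestures at, turning a user’s stated preferences for money, privacy, altruism, and notoriety into a single per-use price.

    from dataclasses import dataclass

    @dataclass
    class UserPreferences:
        money: float      # 0..1, how much the user cares about being paid
        privacy: float    # 0..1, how reluctant the user is to have the data used at all
        altruism: float   # 0..1, willingness to give data away cheaply
        notoriety: float  # 0..1, preference for exposure and attribution over cash

    def suggested_price(prefs, base_rate):
        # Privacy-sensitive users price themselves high, which discourages use;
        # altruistic or exposure-seeking users price themselves low, or at zero.
        markup = 1.0 + prefs.money + 2.0 * prefs.privacy
        discount = prefs.altruism + prefs.notoriety
        return max(0.0, base_rate * (markup - discount))

    # A privacy-conscious commuter prices a location ping higher than an
    # exposure-hungry musician prices a stream of her own track.
    print(suggested_price(UserPreferences(0.3, 0.9, 0.1, 0.0), base_rate=0.001))  # 0.003
    print(suggested_price(UserPreferences(0.2, 0.0, 0.4, 0.9), base_rate=0.001))  # 0.0

Even this toy version hints at how much machinery would sit behind every price, which is part of why the real rules would be so complex.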

The question of who would actually want this is the most troublesome.  Clearly not today’s big tech players, because their business models couldn’t exist.  If online services were compelled to pay for content that users provide to them (whether explicitly or through things like search commands or behavior tracking), they would limit the ways in which users could interact with them.  Perhaps they’d accept content on a very selective basis and not allow much interaction.  They might charge users for access to cover the cost of content.  In other words, they would have to behave just like traditional publishers.

Yet Big Media wouldn’t want it either.  The major record labels, movie studios, and book publishers would not be thrilled at the prospect of a truly democratized world in which Internet services are required to pay royalties to anyone at all who wants them; disintermediators like TuneCore, Bandcamp, Lulu, and Scribd (not to mention Amazon for authors) are bad enough as it is.  For example, the RIAA has called for a return to the days of opt-in copyright registration (pre-1988 in the US), presumably so that content creators who don’t go to the trouble and cost of registering can’t be eligible even for statutory royalties.

The Artists’ Rights movement (insofar as The Trichordist blog represents it) is touting Lanier’s book, but I wonder whether indie musicians (and photographers, filmmakers, authors, etc.) would really want this either.  Lanier’s model might force online services that want to attract users via content to pay for it, but it would also create a tsunami of competition for existing artists who are trying to make a living from their work.  It stands to reason that (let’s say) a musician should get paid to the extent that people listen to her music, but it’s another matter entirely to suggest that she should take money from the same pot as people who just allow a service to track their whereabouts or search commands.  More money may flow into the system, but it’s questionable whether it would be enough to actually help professional or semiprofessional artists.

It almost goes without saying that free Internet advocates wouldn’t like this scheme.  If nothing else, mandating things like permanent bidirectional linking and payment systems seems like “breaking the Internet” on a scale that dwarfs the likes of SOPA and PIPA.

Yet Lanier allows for the possibility that his system could come into being through market forces, not through some sort of legal mandate.  The ultimate question is whether people in general would welcome it.  Lanier takes this as a given, but I’m not so sure.  The system would eliminate Siren Servers and their closed systems, non-negotiable license agreements, theft of privacy, and so on.  But it would also most likely drastically reduce the amount of free stuff available online and the positive network effects of online services.  I also wonder how many people would be interested in becoming entrepreneurs for their own data, as opposed to just having a steady job.

Despite all that, the ideas in Who Owns the Future? are intriguing and thought-provoking — and go far beyond the admittedly narrow slice examined here.  The book is bursting with creativity and vision, making it a pleasure to read.  Just because pragmatists like myself can’t see a path to Jaron Lanier’s humanistic economy and aren’t even sure it’s worth getting there, that doesn’t mean that the ideas aren’t worth serious consideration.  Lanier deserves to be part of the debate about the future of the net.

4 comments

  1. Thanks again for another thought-provoking post, Bill!

    I heard Jaron Lanier on a recent podcast — maybe even NPR — and one of my first thoughts was, “I wonder what Bill Rosenblatt thinks?” My other thoughts were approximately “Jaron, welcome to DRM ca 1995…”

    As you well know, there were a few people who imagined — and in the case of EPR -> Intertrust, built — infrastructure for rights micropayments. Others, like my NetRights (1995-1997), focused on looser, metadata-oriented infrastructure, maintaining the connections between creators and their works (“Connect me to the videographer for this clip…”). The latter is a bit more about “Intellectual Value” (Esther Dyson) and less about ROI, which was Intertrust’s catch-phrase. It also anticipated what we now call Linked Data.

    It would be a really interesting experiment to see how an open market for micro-rights would play out.

    BTW, one idea Jaron discussed in that interview made me literally stop what I was doing: the notion of analyzing a (derivative) work and creating a rights summary. A long time ago (must have been 1997, at Seybold) I discussed with some digital publishing tools people (“Markzware,” maybe?) the idea of “rights pre-flighting,” in which a work would be processed by tools to verify the clearance status of the component pieces. As I saw it, each of the components would have been imported into the “creativity tool” with rights metadata associated, and as part of the normal pre-flighting (checking for fonts, etc.) the rights status of the project would be summarized.

    The trouble is, this assumes the data has been made available upstream and accessible in the infrastructure — which is just one of the problems I see with Jaron’s ideas.

    Thanks again, and you’ve made me want to buy his book!

    John

  2. Thanks John for the trip down memory lane. Your memory regarding Markzware makes sense, given that Markzware made preflighting tools for prepress service providers.

    Now fast-forward to the early 2000s, when Adobe was working on the XMP metadata standard for graphic arts. I had several discussions with the XMP product manager about use cases that would drive definition of rights metadata. Prepress/graphic arts was one of them, but Adobe never did anything about it, as far as I know (they did do some work on rights metadata for downstream distribution). XMP is now an ISO standard, with tools available under a BSD open-source license.

    – bill.

  3. Phrase

    “The book is bursting with creativity and vision, making it a pleasure to read. Just because pragmatists like myself can’t see a path to Jaron Lanier’s humanistic economy and aren’t even sure it’s worth getting there,”… Bill Rosenblatt …
    .
    … Mr. Rosenblatt, I also appreciated your balanced, informed review of Lanier’s “Who Owns The Future ?” … What i respected above all was the honesty with which you identified yourself as a pragmatist wondering if it is “worth getting there” , that is closer to realizing Jaron Lanier’s humanistic economy . … I am not implying that your idealistic side does not empathise, … but rather that your pragmatic side recognizes the enormous complexity of ‘tweaking’ the network system to two-way, bi-directional, provenance-maintaining technical characteristics which would also meet tremendous resistance by the ‘Siren Servers’ and other powerful lobbies. …
    .
    The perspective from which i am writing is far more idealistically focused. … If i may quote Jaron Lanier from “Who Owns The Future ? “… “It’s worse than foolish to imagine that technologists will be able to fix the world if economics and politics have gone insane. We can’t function alone. What we do is empower people. The world needs to be approximately sane for us to make any positive difference. … But the world is not converging on sanity.”
    .
    … I’ve also just read Robert W. McChesney’s “Digital Disconnect: How Capitalism is Destroying the Internet and Democracy” … If i understood correctly, … McChesney’s concern that Democracy depends on investigative journalism which is being impoverished in the ad-driven Siren Server network, … is also why it is important to believe that it is “worth getting there”; … that is not being satisfied in navigating the ‘given’ but rather forging a new paradigm for the network system to facilitate a … pluralistic participatory deliberative democracy within which peace will be given a chance. …
    .
    Mr. Rosenblatt, i am not implying you disagree, and with respect i wish i could see the maze from the height of your intellectual perspective. … Also, i was heartened when you wrote …” Lanier deserves to be part of the debate about the future of the net “… with respect … phrase

  4. Phrase

    … correction … Robert W. McChesney’s latest book title is … ” Digital Disconnect: How Capitalism Is Turning The Internet Against Democracy ” … phrase
