The Coming Two-Tiered World of Library E-book Lending June 4, 2013

Posted by Bill Rosenblatt in Libraries, Publishing, Services, United States.

A group of public libraries in California recently launched a beta version of EnkiLibrary, an e-book lending system that the libraries run themselves.  EnkiLibrary is modeled on the Douglas County Libraries system in Colorado.  It enables libraries to acquire e-book titles for lending in a model that approximates print book acquisition more closely than the existing model.

Small independent publishers are making their catalogs available to these library-owned systems on liberal terms, including low prices and a package of rights that emulates ownership.  In contrast, major trade publishers license content to white-label service providers such as OverDrive under a varied, changing, and often confusing array of conditions — including limited catalog, higher prices than those charged to consumers, and limitations on the number of loans.  The vast majority of public libraries in the United States use these systems: they choose which titles to license and offer those to their patrons.

Welcome to the coming two-tiered world of library e-book lending.  E-lending systems like EnkiLibrary may well proliferate, but they are unlikely to take over; instead they will coexist with — or, in EnkiLibrary’s own words, “complement” — those used by the major publishers.

The reason for this is simple: indie publishers — and authors, working through publisher/aggregators like Smashwords — prioritize exposure over revenue, while for major publishers it’s the other way around.  If more liberal rights granted to libraries mean that borrowers “overshare” e-books, then so be it: some of that oversharing has promotional value that could translate into incremental, cost-free sales.

In some ways, the emerging dichotomy in library e-lending is like the dichotomy between major and indie labels regarding Internet music sales.  Before 2009, the world of (legal) music downloads was divided into two camps: iTunes sold both major and indie music and used DRM that tied files to the Apple ecosystem; smaller services like eMusic sold only indie music, but the files were DRM-free MP3s that could be played on any device and copied freely.  That year, iTunes dropped DRM, Amazon expanded its DRM-free MP3 download service to major-label music, and eventually eMusic tapered off into irrelevance.

Yet it would be a mistake to stretch the analogy too far.  Major publishers are unlikely to license e-books for library lending on the liberal terms of a system like EnkiLibrary or Douglas County’s in the foreseeable future; the market dynamics are just not the same.

In 2008, iTunes had an inordinately large share of the music download market; the major labels had no leverage to negotiate more favorable licensing terms, such as the ability to charge variable prices for music.  The majors had tried and failed to nurture viable competitors to iTunes.  Amazon was their last and best hope.  iTunes already had an easy-to-use system that was tightly integrated with Apple’s own highly popular devices.  It became clear that the only meaningful advantage that another retailer could have over iTunes was lack of DRM.  So the major labels were compelled to give up DRM in order to get Amazon on board.  By 2009, DRM-free music from all labels became available through all major retailers.

No such competitive pressures exist in the library market.  On the contrary, libraries themselves face competition from the private sector, including Amazon.  Furthermore, arguments that e-book lending under liberal terms leads to increased sales for small publishers won’t apply very much to major publishers, for the reasons given above.

Therefore, unless libraries get e-lending rights under copyright law instead of relying on “publishers’ good graces” (as I put it at the recent IDPF Digital Book 2013 conference) for e-lending permission, it’s likely that libraries will have to labor under a two-tiered system for the foreseeable future.  Douglas County Libraries director Jamie LaRue — increasingly seen as a revolutionary force in the library community — captured the attitude of many when he said, “It isn’t the job of libraries to keep publishers in business.”  He’s right.  Ergo the stalemate should continue for some time to come.

Mega’s Aggressive Takedown Policy? February 1, 2013

Posted by Bill Rosenblatt in Law, New Zealand, Services.

Here is an interesting addendum to last week’s story about Mega, the new file storage service from Kim Dotcom of MegaUpload fame.

Recall that Mega encrypts files that users store on its servers, with keys that only the users know… unless they publish URLs that contain the keys, like this one.  This means that Mega can’t know whether or not files on its servers are infringing, unless a user publishes a URL like that.

As TorrentFreak has found, Mega is crawling the web in search of public URLs that contain Mega encryption keys.  When it finds one, it proactively removes the content from its server — at least if the file in question contains audio or video content — and it sends the user who uploaded the file a message saying that it has taken down the file due to receipt of a takedown notice from the copyright owner.
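Judging from TorrentFreak’s description, the crawl-and-match logic need not be any smarter than the following sketch in Python.  The link pattern is approximated from Mega’s public URLs, and both callbacks are hypothetical stand-ins for whatever Mega actually does internally:

```python
import re

# Public Mega links of this era carry the file handle and the decryption
# key right in the URL: https://mega.co.nz/#!<handle>!<key>
MEGA_LINK = re.compile(r"mega\.co\.nz/#!([\w-]+)!([\w-]+)")

def process_crawled_page(html, remove_file, notify_uploader):
    """Find public Mega links in a crawled page and act on them.

    remove_file and notify_uploader are hypothetical stand-ins;
    nothing here is Mega's actual code.
    """
    for handle, key in MEGA_LINK.findall(html):
        # Per TorrentFreak's findings, no fingerprinting or other content
        # analysis happens at this point: a publicly linked audio/video
        # file is simply removed, and the uploader is told that a
        # takedown notice was received.
        remove_file(handle)
        notify_uploader(handle)
```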

It’s impossible to say for sure whether this is a blanket policy, and of course Mega’s web-crawling technology probably doesn’t work perfectly.  But if this is Mega’s policy, then Mega is being at least as aggressive as RapidShare in going after public links to infringing content.  RapidShare finds public links to files on its service and, apparently, examines them with content identification technology to see if they are infringing.  According to TorrentFreak’s findings, Mega does no analysis; it uses no fingerprinting or other content identification technology; it just takes the content down.  It has taken down unambiguously legal content.  (My file wasn’t taken down, because it’s just a PDF of a presentation that I created, and/or because it’s only on this blog and not on a known P2P index site.)

Mega could be doing this in order to conform to the terms of Kim Dotcom’s arrest.  Whatever the reason, it helps make sure that pirated material on Mega can only be shared by sending encryption keys through means such as email… or perhaps URLs that are publicly available but are themselves encrypted.  And if you truly want to share audio or video material to which you have the rights, then Mega isn’t the best place for you anyway.

A commenter on TechDirt put it best: “So we’re still allowed to share the stuff, but just not on linking sites? Seems fair enough to me. Probably for the best too, since some dumbasses clearly don’t know how to hide their copyrighted material properly.”

Kim Dotcom Embraces DRM January 22, 2013

Posted by Bill Rosenblatt in DRM, New Zealand, Services.

Kim Dotcom launched a new cloud file storage service, the New Zealand-based Mega, last weekend on the one-year anniversary of the shutdown of his previous site, the notorious MegaUpload.  (The massive initial interest in the site* prevented me from trying out the new service until today.)

Mega encrypts users’ files with what appears to be a per-file content key (AES-128) protected by 2048-bit RSA asymmetric-key encryption.  It derives the RSA keys from users’ passwords and other pseudo-random data.  Downloading a file from a Mega account requires knowing either the password that was used to generate the RSA key (i.e., logging in to the account used to upload the file) or the key itself.
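For the curious, the general pattern looks something like the minimal sketch below, using Python’s third-party cryptography package.  This is an illustration of the AES-plus-RSA shape described above, not a reconstruction of Mega’s actual implementation; Mega’s key derivation, cipher modes, and file formats all differ.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A 2048-bit RSA key pair.  Mega reportedly derives its RSA keys from the
# user's password plus pseudo-random data; here we simply generate one.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_file(plaintext):
    """Encrypt content with a fresh AES-128 key, then wrap the key with RSA."""
    content_key = AESGCM.generate_key(bit_length=128)   # fresh key per file
    nonce = os.urandom(12)
    ciphertext = AESGCM(content_key).encrypt(nonce, plaintext, None)
    wrapped_key = private_key.public_key().encrypt(content_key, OAEP)
    # Because each file gets a random key, identical uploads produce
    # unrelated ciphertexts -- one reason server-side content
    # identification can't work on Mega.
    return nonce, ciphertext, wrapped_key
```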

Hmm.  Content encrypted with symmetric keys that in turn are protected by asymmetric keys… sounds quite a bit like DRM, doesn’t it?

Well, not quite.  While DRM systems assume that file owners won’t want to publish keys used to encrypt the files, Mega not only allows but enables you to publish your files’ keys.  Mega lets you retrieve the key for a given file in the form of a URL; just right-click on the file you want and select “Get link.”  (Here’s a sample.)  You can put the resulting URL into a blog post, tweet, email message, or website featuring banner ads for porn and mail-order brides.
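Note where the key travels: in the URL fragment, the part after the “#”, which browsers do not send to the server in an ordinary HTTP request.  That is what lets Mega serve a file without ever seeing its key.  Pulling the pieces out of such a link is trivial; a sketch, with the link format approximated from Mega’s published URLs:

```python
from urllib.parse import urlparse

def parse_mega_link(link):
    """Split a public link of the form https://mega.co.nz/#!<handle>!<key>."""
    fragment = urlparse(link).fragment   # '!<handle>!<key>'; never sent over HTTP
    _, handle, key = fragment.split("!")
    return handle, key

# Hypothetical example link, not a real file:
handle, key = parse_mega_link("https://mega.co.nz/#!aBc123!hypothetical-key")
```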

(And of course, unlike DRM systems, once you obtain a key and download a file, it’s yours in unencrypted form to do with as you please.  The encryption isn’t integrated into a secure player app.)

Yet in practical terms, Mega is really no different from file-storage services that let users publish URLs to files they store — examples of which include RapidShare, 4Shared, and any of dozens of file transfer services (YouSendIt, WhaleMail, DropSend, Pando, SendUIt, etc.).

Mega touts its use of encryption as a privacy benefit.  What it really offers is privacy from the kinds of piracy monitoring services that media companies use to generate takedown notices — an application of encryption that hardcore pirates have used and that Kim Dotcom purports to “take … out to the mainstream.”  It will be impossible to use content identification technologies, such as fingerprinting, to detect the presence of copyrighted materials on Mega’s servers.  RapidShare, for example, analyzes third-party links to files on its site for potential infringements; Mega can’t do any such thing, by design.

Mega’s use of encryption also plays into the question of whether it could ever be held secondarily liable for its users’ infringements under laws such as DMCA 512 in the United States.  The Beltway tech policy writer Paul Sweeting wrote an astute analysis of Mega’s chances against the DMCA over the weekend.

Is Kim Dotcom simply thumbing his nose at Big Media again?  Or is he seriously trying to make Mega a competitor to legitimate, prosaic file storage services such as DropBox?  The track records of services known for piracy trying to go “legit” are not encouraging — just ask Bram Cohen (BitTorrent Entertainment Network) or Global Gaming Factory (purchasers of The Pirate Bay’s assets).  Still, this is one to watch as the year unfolds.

*Or, just possibly, server meltdowns faked to generate mountains of credulous hype?

New Study on the Changing Face of Video Content Security October 23, 2012

Posted by Bill Rosenblatt in Conditional Access, Services, Video.

Farncombe Technologies, a pay TV technology consultancy based in the UK, has just released a white paper called “The Future of Broadcast Cardless Security.”  The white paper incorporates the results of a survey of pay TV operators, content owners, security vendors, and device makers on pay TV security concerns today and in the future.

Operators of pay TV (cable, satellite, and telco-TV) networks have put more money and effort into digital content security than any other type of media distributor, certainly more than any digital music or e-book sellers ever have.  That’s because the economic incentives of pay TV operators are aligned with those of content owners such as movie studios and TV networks: operators don’t want their signals stolen, while content owners want to minimize unauthorized use of the content that travels over those signals.

For a long time, the technology used to thwart signal theft was the same as that used to guard against copyright infringement: conditional access (CA).  Life was simple when cable companies operated closed networks to dedicated set-top boxes (STBs): the content went from head ends to STBs and nowhere else.  In that situation, if you secure the network, you secure the content.  But nowadays, two developments threaten this alignment of incentives and thus blow open the question of how pay TV operators will secure content.

First, the model of so-called piracy has changed.  Historically, pay TV piracy has meant enabling people to receive operators’ services without paying for them, by doing such things as sharing control words (decryption keys in CA systems) or distributing unauthorized smartcards for STBs.  But now, with widespread high-speed broadband and technologies such as BitTorrent, people can get content that flows over pay TV networks without touching the pay TV network at all.

Second, operators are offering “TV Everywhere” type services that let users view content on Internet-connected devices such as PCs, tablets, smartphones, and so on, in addition to their STBs.  They are doing this in response to competition from “over the top” (OTT) services that make video content available over the Internet.  Operators have less direct incentive to protect content being distributed to third-party Internet-connected devices than they do to protect it within their own networks.

The Farncombe study predicts the likely effects of these developments (and others) on pay TV security in the years to come.  According to the survey results, operators’ primary piracy concerns today are, in order of priority: control word sharing, rebroadcasting their content over the Internet (illegal streaming), and downloads of their content over the Internet (e.g. torrents); but in five years’ time the order of priority is expected to reverse.  The threat of bogus smartcard distribution is expected to diminish.

The intent of this whitepaper is to motivate the use of pure software security technology for pay TV networks, i.e., schemes that don’t use smartcards.  So-called cardless security schemes are available from vendors such as Verimatrix, which sponsored the whitepaper.  They are cheaper to implement, and they now use software techniques such as whitebox encryption and code diversity that are often considered to be as strong as hardware techniques (for more on this, see my 2011 whitepaper The New Technologies for Pay TV Content Security, available here).

However, the whitepaper also calls for the use of forensic Internet antipiracy techniques instead of — or in addition to — those that (like CA) secure operators’ networks.  In other words, if piracy takes place mostly on the Internet instead of on operators’ networks, then antipiracy measures ought to be more cost-effective if they take place on the Internet as well.

The paper advocates the use of techniques such as watermarking, fingerprinting, and other types of Internet traffic monitoring to find pirate services and gather evidence to get them shut down.  It calls such techniques “new” although video security companies such as NDS (now Cisco) and Nagravision have been offering them for years, and Irdeto acquired BayTSP a year ago in order to incorporate BayTSP’s well-established forensic techniques into its offerings.  A handful of independent forensic antipiracy services exist as well.

This all raises the question: will pay TV operators continue to put as much effort into content security as they have until now?  Much of pay TV networks’ offerings consists of programming licensed non-exclusively from others.  The amount of programming that is licensed exclusively to operators in their geographic markets — such as live major-league sports — is decreasing over time as a proportion of the total programming that operators offer.

The answer is, most likely, that operators will continue to want to secure their core networks, if only because such techniques are not mutually exclusive with forensic Internet monitoring or other techniques.  Yet operators’ security strategies are likely to change in two ways.  First, as the Farncombe whitepaper points out, operators will want security that is more cost-effective — which cardless solutions provide.

Second, network security technologies will have to integrate with DRM and stream encryption technologies used to secure content distributed over operators’ “TV Everywhere” services.  The whitepaper doesn’t cover this aspect of it, but for example, Verimatrix can integrate its software CA technology with a couple of DRM systems (Microsoft’s PlayReady and Intertrust’s Marlin) used for Internet content distribution. Licensors of content, especially those that make exclusive deals with operators, will insist on this.

The trouble is that such integrated security is more complex and costs more, not less, than traditional CA — and the costs and complexities will only go up as these services get more sophisticated and flexible.  Operators may start to object to these growing costs and complexities when the content doesn’t flow over their networks.  On the other hand, those same operators will become increasingly dependent on high-profile exclusive licensing deals to help them retain their audiences in the era of cord-cutting — meaning that content licensors will have a strong hand in dictating content security terms.  It will be interesting to see how this dynamic shapes video content security as it emerges.

Music Subscription Services Go Mainstream September 17, 2012

Posted by Bill Rosenblatt in Business models, Music, Services.

While revisiting some older articles here, I came across a prediction I made almost exactly a year ago, after Facebook’s announcement of integration with several music subscription services at its f8 conference.  I claimed that this would have a “tidal wave” effect on such services:

I predict that by this time next year, total paid memberships of subscription music services will reach 10 million and free memberships will cross the 50 million barrier.

So, how did I do?  Not bad, as it turns out.

The biggest subscription music services worldwide are Spotify and Deezer.  Let’s look at them first.

Spotify hasn’t published subscribership data recently, but music analyst Mark Mulligan measured its monthly membership at 20 million back in May of this year.  Judging by the trajectory of Mulligan’s numbers, it ought to be about 24 million now.  In fact, Mulligan shows that Spotify’s growth trajectory is about equal to Pandora’s.  Furthermore, that’s only for users whose plays are reported to Facebook.  A redoubt of users — such as yours truly — refuse to broadcast their plays that way (despite constant pleas from Spotify), so make it at least 25 million.

Deezer, based in France, is Spotify’s number one competitor outside of the US.  A month ago, PaidContent.org put Deezer’s numbers at 20 million total but only 1.5 million paid, and added that Spotify’s paid subscribership is at 4 million.

Rhapsody is the number two subscription service in the US market.  Unlike Spotify and Deezer, Rhapsody has not embraced the “freemium” trend and has stuck to its paid-only model.  Rhapsody passed the 1 million subscriber milestone last December.

The next tier of subscription services includes MOG, Rdio, and MuveMusic (where the monthly fee is bundled in with wireless service) in the US; regional players including WIMP, simfy, and Juke (Europe); Galaxie (Canada); various others in the Asia-Pacific market; and Omnifone’s recently launched multi-geography rara.com.  These should all be good for a few hundred thousand subscribers each.

So among all these services, 50 million looks pretty safe for the number of total subscribers.  As for the number of paid subscribers, IFPI put it at 13.4 million for 2011 in its 2012 Digital Music Report, published in January.  Given that this represents a 63% increase over 2010, we can be confident in saying that the figure now is more like 17-18 million, but I’d back it off somewhat because IFPI probably counts services that I would not categorize as subscription (such as premium Internet radio).  So let’s say 13-15 million paid — way past my prediction of 10 million.

It’s also worth noting that if these figures are correct, the percentage of paid subscribership is in the 26-30% range.  That’s in line with the 20-30% that readers predicted here when I ran a poll on this a year ago — the most optimistic of the poll answer choices.
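(The arithmetic is simply the estimated paid range divided by the estimated 50 million total; in Python:)

```python
total = 50_000_000                      # estimated total subscribers
for paid in (13_000_000, 15_000_000):   # estimated paid-subscriber range
    print(f"{paid / total:.0%}")        # prints 26%, then 30%
```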

To put this in perspective, 50 million still falls far short of the audiences for paid downloads, Internet radio, and even YouTube, which are all well above 100 million worldwide.  But it proves that the public is catching on to the value of subscription services, and they are no longer a niche product for “grazers.”

Getty Images Launches Automated Rights Licensing for Photo Sharing Services September 12, 2012

Posted by Bill Rosenblatt in Fingerprinting, Images, Law, Rights Licensing, Services.

Getty Images announced on Monday a deal with SparkRebel, a site that calls itself a “collaborative fashion and shopping inspiration” — but is perhaps more expediently described as “Pinterest for fashionistas, with Buy buttons” — in which images that users post to the site are recognized and their owners compensated for the use.  The arrangement uses ImageIRC technology from PicScout, the Israeli company that Getty Images acquired last year.  ImageIRC is a combination of an online image rights registry and image recognition technology based on fingerprinting.  It also uses PicScout’s Post Usage Billing system to manage royalty compensation.

Here’s how it works: SparkRebel users post images of fashion items they like to their profile pages.  Whenever a user posts an image, SparkRebel calls PicScout’s content identification service to recognize it.  If it finds the image’s fingerprint in its database, it uses ImageIRC to determine the rights holders; then SparkRebel pays any royalty owed through Post Usage Billing.  PicScout ImageIRC’s database includes the images of Getty itself, the largest stock image agency in the world.  (Getty Images was sold just last month to Carlyle Group, the private equity giant, for over US $3 billion.)  In all, ImageIRC includes data on over 80 million images from more than 200 licensors, which can opt in to the arrangement with SparkRebel (and presumably similar deals in the future).
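Based on Getty’s description, the flow on SparkRebel’s side presumably amounts to something like the sketch below; every name in it is a hypothetical stand-in, not PicScout’s actual API:

```python
def on_image_post(image_bytes, user, picscout, billing, site):
    """Hypothetical sketch of the upload flow described above; picscout,
    billing, and site stand in for the real services."""
    fingerprint = picscout.fingerprint(image_bytes)   # content identification
    match = picscout.image_irc_lookup(fingerprint)    # rights-registry lookup
    if match is not None:
        # ImageIRC maps the image to rights holders that opted in; Post
        # Usage Billing then settles whatever royalty each one is owed.
        for rights_holder, royalty in match.royalties:
            billing.pay(rights_holder, royalty)
    site.publish_to_profile(user, image_bytes)        # the post goes up either way
```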

This deal is a landmark in several ways: it is the most practically useful application of image recognition to date, and it brings digital images into some of the same online copyright controversies that have surrounded music, video, and other types of content.

Several content recognition platforms exist; examples include Civolution’s Teletrax service for video; Attributor for text; and Audible Magic, Gracenote, and Rovi for music.  Many of these technologies were first designed for catching would-be infringers: blocking uploads and supplying evidence for takedown notices and other legal actions.  Some of them evolved to add rights licensing functionality, so that when they find content on a website, blog, etc., instead of sending a nastygram, copyright owners can offer revenue-sharing or other licensing terms.  The music industry has experimented with audio fingerprinting to automate radio royalty calculations.
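Commercial fingerprinting algorithms are proprietary and far more robust, but a toy “average hash” shows the basic idea: reduce an image to a tiny grayscale grid and record which cells beat the mean brightness, yielding a short signature that survives resizing and recompression.  A sketch using the Pillow imaging library:

```python
from PIL import Image

def average_hash(path, size=8):
    """Toy image fingerprint: 64 bits marking pixels brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def distance(a, b):
    """Hamming distance between two hashes; a small value suggests
    the same underlying image."""
    return bin(a ^ b).count("1")
```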

The idea of extending a content identification and licensing service to user-posted content is also not new: Google’s Content ID technology for YouTube has led to YouTube becoming a major legal content platform and likely the largest source of ad revenue from music in the world.  But while Content ID is exclusive to YouTube, PicScout ImageIRC and Post Usage Billing are platforms that can be used by any service that publishes digital images.

PicScout has had the basic technology components of this system for a while; SparkRebel merely had to implement some simple code in its photo-upload function to put the pieces together.  So why don’t we see this on Pinterest, not to mention Flickr, Tumblr, and so many others?

The usual reason: money.  Put simply, SparkRebel has more to gain from this arrangement than most other image-sharing sites.  SparkRebel has to pay royalties on many of the images that its users post.  Yet many of those images are of products that SparkRebel sells; therefore if an image is very popular on the site, it will cost SparkRebel more in royalties but likely lead to more commissions on product sales.  Furthermore, a site devoted to fashion is likely to have a much higher percentage of copyrighted images posted to it than, say, Flickr.

Yet where there’s no carrot, there might be a stick.  Getty Images and other image licensors have been known to be at odds with sites like Pinterest over copyright issues.  Pinterest takes a position that is typical of social-media sites: that it is covered (in the United States) by DMCA 512, the law that enables them to avoid liability by responding to takedown notices — and as long as it responds to them expeditiously, it has no further copyright responsibility.

Courts in cases such as UMG v. Veoh and Viacom v. Google (YouTube) have also held that online services have no obligation to use content identification technology to deal with copyright issues proactively.  Yet the media industry is trying to change this; for example, that’s most likely the ultimate goal of Viacom’s pending appeal in the YouTube case.  (That case concerns content that users uploaded before Google put Content ID into place.)

On the other hand, the issue for site operators in cases like this is not just royalty payments; it’s also the cost of implementing the technology that identifies content and acts accordingly.  A photo-sharing site can implement PicScout’s technology easily and (unlike analogous technology for video) with virtually no impact on its server infrastructure or response time for users.  This, combined with the “make it easy to do the right thing” aspect of the scheme, may bring the sides closer together after all.

Irdeto Intelligence: Monitoring Video Content Beyond Managed Networks September 11, 2012

Posted by Bill Rosenblatt in Conditional Access, Services, Video.

Last week’s big IBC conference in Amsterdam brought a raft of announcements from video content protection vendors, most of which were typical customer success stories and strategic partnerships.  One product launch announcement, however, was particularly interesting: Irdeto Intelligence, which launched last Friday.

Irdeto Intelligence is the result of the company’s acquisition of BayTSP in October 2011.  The service is an extension of BayTSP’s existing offering and had been under development before the acquisition.  It crawls the Internet looking for infringing content and provides an interactive dashboard that enables customers to see data such as where infringing files were found (by ISP or other service provider) and the volume for each title.

Before Irdeto acquired it last year, BayTSP was one of a handful of independent companies that crawl the Internet looking for infringing content; others include Attributor, Civolution, MarkMonitor, and Peer Media Technologies.  BayTSP wanted to grow its business beyond its core piracy monitoring service.  It found — like other companies of its type — that the mountains of data on so-called piracy it was collecting had value beyond helping copyright owners generate cease-and-desist or takedown notices.

The big issue with piracy monitoring services is — as with so many other technologies we discuss here — who pays for them.  Hollywood studios (and other types of media businesses) pay the companies mentioned above to find infringing copies of their content.  Now that BayTSP is part of a leading video security business, its customer base expands to include managed network operators (cable, satellite, telco-TV) and broadcasters.  As I mentioned last year when the acquisition was announced, a cynic could read the deal as Hollywood’s attempt to push piracy monitoring costs downstream to operators, just as it does the costs of DRM and conditional access.

Irdeto confirmed that it is still offering BayTSP’s existing services to copyright owners.  Still, Irdeto’s acquisition of BayTSP is something of a gamble.  It’s part of a theme that I see growing in importance over the next few years: competition from Internet-based “over the top” (OTT) services is forcing managed network operators to offer “TV Anywhere” type services for viewing their programming over Internet-connected devices such as PCs, tablets, and mobile handsets.

Hollywood has always had a strong relationship with managed network operators on content protection because their economic incentives were aligned: Hollywood wanted to mitigate infringement of its movies and TV shows; operators wanted to mitigate theft of access to their networks.  This has led to set-top boxes that are fortresses of security compared, say, to e-book readers, portable music players, and (especially) PCs.

But once operator-licensed content leaves managed networks to go “over the top,” just how much responsibility do operators have to protect content?  This is a question that will loom larger and larger.

Other providers of conditional access (CA) technology for operators, such as NDS (now Cisco) and Nagra, offer piracy monitoring services.  But those have typically been limited in scope to things like sharing of control words (content keys used in CA systems for the DVB standard), not illegal file-sharing.  In acquiring BayTSP, Irdeto is betting that operators will want to pay more for this type of monitoring.

But why would, say, a cable operator care about content uploaded to file-sharing sites?  Once they have this information, how would they use it if not to generate takedown notices or other legal means of getting infringing content removed?

Irdeto has two answers to this question.  Most important is live event content, particularly sports.  Hollywood has nothing to do with this type of content.  Operators and terrestrial broadcasters suffer when users can view live events on illegal streaming sites with only slight time delays.  Irdeto Intelligence updates its search results at five-minute intervals, so that operators can act to get illegal streams shut down very quickly.

The second reason has to do with the fact that more and more operators are offering so-called triple play services, which include Internet service in addition to TV and telephony.  A triple play provider will be seeking licenses to content from Hollywood, which will be more willing to grant licenses if the provider actively addresses infringing content on its ISP service.

Irdeto says that it has signed two customers for Irdeto Intelligence so far, and that it received strong interest for the service on the show floor at IBC.  It will be interesting to see how other video security vendors react as OTT and TV Anywhere continue to grow.


A Nail in Public Libraries’ Coffins May 20, 2012

Posted by Bill Rosenblatt in Libraries, Publishing, Services, United States.

There it was, on the entire back page of the A section of the New York Times a few days ago, at a likely cost of over US $100,000: a full-page ad from Amazon touting free “lending” of all of the Harry Potter e-books for members of Amazon’s $79/year Amazon Prime program who own Kindle e-readers, starting next month.

I wrote last December about the challenges that public libraries face as e-reading becomes popular and major trade book publishers increase restrictions on public library e-lending of their titles.  Copyright law allows publishers to set license terms for digital content, so instead of giving e-book buyers the standard “copyright bundle” of rights, publishers can dictate whatever terms they want — including refusal to license content at all.  Currently five of the Big 6 trade publishers restrict library e-book lending in some way, including two of them that don’t allow it at all.  Libraries have little leverage against publishers to change this state of affairs.

I also discussed Amazon’s Kindle Owners’ Lending Library (KOLL), which is one of the benefits of Amazon Prime membership (along with free shipping and access to streaming video content), as a step toward the private sector invading the turf of public libraries.  In case anyone doesn’t see this, Amazon makes it quite clear in its press release:

“With the Kindle Owners’ Lending Library, there are no due dates, books can be borrowed as frequently as once a month, and there are no limits on how many people can simultaneously borrow the same title—so readers never have to wait in line for the book they want.”

In other words, Amazon has implemented a model of “one e-book per user at a time, not more than one per month.”  It can configure any such model on its servers and enforce it through its DRM.
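A rule like that takes only a few lines to enforce server-side; here is a minimal sketch (the field names are mine, not Amazon’s):

```python
from datetime import datetime, timedelta

def may_borrow(member, now=None):
    """Sketch of KOLL's 'one at a time, at most one per month' rule."""
    now = now or datetime.utcnow()
    if member.current_loan is not None:        # one e-book out at a time
        return False
    if member.last_borrowed is not None and \
            now - member.last_borrowed < timedelta(days=30):
        return False                           # no more than one per month
    return True    # otherwise: no due dates, no simultaneous-copy limit
```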

KOLL’s selection had been limited to a few thousand titles from smaller publishers.  Recently Amazon has been moving aggressively to increase the KOLL catalog, despite lack of permission from some publishers and authors; it now claims a catalog of over 145,000 titles.  Amazon did make a deal with Pottermore, the organization that distributes J.K. Rowling’s Harry Potter titles in digital form, to include those titles in KOLL.  Pottermore admits that Amazon paid it “a large amount of money” to do so.  Taken together, these steps take KOLL to the next level.

Of course, there are several reasons why the Harry Potter case is exceptional: the only way to purchase Harry Potter e-books is on the Pottermore site, and Amazon wanted to find some way of luring Potter fans back to its own site; Harry Potter is a series of seven books, and Pottermore believes that allowing users to borrow one title per month will lead to increased sales of the other titles; and the Amazon Prime and public library demographics may not overlap much.

But still, this deal is an example of Amazon using content to make its devices more valuable.  The company is subsidizing a bestselling author’s work to induce people to buy Kindles and Amazon Prime memberships.  This kind of arrangement is likely to become more commonplace as authors, publishers, and retailers all get more information about the value of private-sector e-lending and learn how to make such deals strategically.

This is nice for already-famous authors, but it doesn’t benefit the multitude of authors who haven’t made it to J.K. Rowling’s rarefied level.  It’s not something that libraries are able to replicate — neither the subsidies nor the full-page ads in the New York Times.

Who’s Subsidizin’ Who? February 9, 2012

Posted by Bill Rosenblatt in Business models, Music, Publishing, Services, Uncategorized, United States.

Barnes & Noble has just announced a deal offering a US $100 Nook e-reader for free with a $240/year subscription to the New York Times on Nook.  Meanwhile, MuveMusic, the bundled-music service of the small US wireless carrier Cricket Wireless, passed the 500,000 subscriber mark last month.   MuveMusic has vaulted past Rdio and MOG to be probably the third largest paid subscription music service in the United States, behind Rhapsody and (probably) Spotify at over a million each.

MuveMusic isn’t quite a subsidized-music deal a la Nokia Ovi Music Unlimited, but it does offer unlimited music downloads bundled with wireless service at a price point that’s lower than the major carriers.  (The roaming charges you’d incur if you leave Cricket’s rather spotty coverage area could add to the cost.)  Cricket is apparently spending a fortune to market MuveMusic, and it’s paying off.

It looks like the business of bundling content with devices is not dead; on the contrary, it’s just beginning.  The fact that both types of bundling models exist — pay for the device, get the content free; pay for the content, get the device free — means that we can expect much experimentation in the months and years ahead.  Although it’s hard to imagine a record label offering a free device with its music, we could follow a model like Airborne Music and think of things like, say, a deal between HTC and UMG offering everything Lady Gaga puts out for $20/year with a free HTC Android phone and/or (HTC-owned) Beats earbuds.  Or how about free Disney content with a purchase of an Apple TV?

As long as someone is paying for the content, any of these models is good for content creators, device makers, and consumers alike.  Bring them on!

Creative Commons for Music: What’s the Point? January 22, 2012

Posted by Bill Rosenblatt in Law, Music, Rights Licensing, Services, Standards.

I recently came across a music startup called Airborne Music, which touts two features: a business model based on “subscribing to an artist” for US $1/month, and music distributed under Creative Commons licenses.  Like other music services that use Creative Commons, Airborne Music appeals primarily to indie artists who are looking to get exposure for their work.  This got me thinking about how — or whether — Creative Commons has any real economic value for creative artists.

I have been fascinated by a dichotomy of indie vs. major-label music: indie musicians value promotion over immediate revenue, while for major-label artists it’s the other way around.  (Same for book authors with respect to the Big 6 trade publishers, photographers with respect to Getty and Corbis, etc.)  Back when the major labels were only allowing digital downloads with DRM — a technology intended to preserve revenue at the expense of promotion — I wondered if those few indie artists who landed major-label deals were getting the optimal promotion-versus-revenue tradeoffs, or if this issue even figured into major-label thinking about licensing terms and rights technologies.

When I looked at Airborne Music, it dawned on me that Creative Commons is interesting for indie artists who want to promote their works while preserving the right (if not the ability) to make money from them later.  The Creative Commons website lists ten existing sites that enable musicians to distribute their music under CC, including big ones like the bulge-bracket-funded startup SoundCloud and the commercially-oriented BandCamp.

This is an eminently practical application of Creative Commons’s motto: “Some rights reserved.”  Many CC-licensing services use the BY-SA (Attribution-Share-Alike) Creative Commons license, which gives you the right to copy and distribute the artist’s music as long as you attribute it to the artist and redistribute (i.e. share) it under the same terms.  That’s exactly what indie artists want: to get their content distributed as widely as possible but to make sure that everyone knows it’s their work.  Some use BY-NC-SA (Attribution-Noncommercial-Share-Alike), which adds the condition that you can’t sell the content, meaning that the artist is preserving her ability to make money from it.

It sounds great in theory.  It’s just too bad that there isn’t a way to make sure that those rights are actually respected.  There is a rights expression language for Creative Commons (CC REL), which makes it possible for content rendering or editing software to read the license (expressed in RDFa) and act accordingly.  As a technology, the REL concept originated with Mark Stefik at Xerox PARC in the mid-1990s; the eminent MIT computer scientist Hal Abelson created CC REL in 2008.  Since then, the Creative Commons organization has maintained something of an arm’s-length relationship with CC REL: it describes the language and offers links to information about it, but it doesn’t (for example) include CC REL code in the actual licenses it offers.

More to the point, while there are code libraries for generating CC REL code, I have yet to hear of a working system that actually reads CC REL license terms and acts on them.  (Yes, this would be extraordinarily difficult to achieve with any completeness, e.g., taking Fair Use into account.)
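Merely detecting a declared license is the easy half: the license chooser on creativecommons.org emits a standard rel="license" link that a few lines of Python can find.  It is the acting-on-the-terms half that nobody has built.  A sketch of the easy half, using only the standard library:

```python
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect the targets of <a rel="license"> links in a page."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "license" in (attrs.get("rel") or "").split():
            self.licenses.append(attrs.get("href"))

finder = LicenseFinder()
finder.feed('<a rel="license" '
            'href="http://creativecommons.org/licenses/by-nc-nd/3.0/">'
            'CC BY-NC-ND</a>')
print(finder.licenses)  # ['http://creativecommons.org/licenses/by-nc-nd/3.0/']
```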

Without a real enforcement mechanism, CC licenses are all little more than labels, like the garment care hieroglyphics mandated by the Federal Trade Commission in the United States.  For example, some BY-SA-licensed music tracks may end up in mashups.  How many of those mashups will attribute the sources’ artists properly?  Not many, I would guess.  Conversely, what really prevents someone who gets music licensed under ND (No Derivative Works) terms from remixing or excerpting in ways that aren’t considered Fair Use?  Are these people really afraid of being sued?  I hardly think so.

This trap door into the legal system, as I have called it, makes Creative Commons licensing of more theoretical than practical interest.  The practical value of CC seems to be concentrated in business-to-business content licensing agreements, where corporations need to take more responsibility for observing licensing terms and CC’s ready-made licenses make it easy for them to do so.  The music site Jamendo is a good example of this: it licenses its members’ music content for commercial sync rights to movie and TV producers while making it free to the public.

Free culture advocates like to tell content creators that they should give up control over their content in the digital age.  As far as I’m concerned, anyone who claims to welcome the end of control and also supports Creative Commons is talking out of both sides of his mouth.  If you use a Creative Commons license, you express a desire for control, even if you don’t actually get very much of it.  What you really get is a badge that describes your intentions — a badge that a large and increasing number of web-savvy people recognize.  Yet as a practical matter, a Creative Commons logo on your site is tantamount to a statement to the average user that the content is free for the taking.

The truth is that sometimes artists benefit most from lack of control over their content, while other times they benefit from more control.  The copyright system is supposed to make sure that the public’s and creators’ benefits from creative works are balanced in order to optimize creative output.  Creative Commons purports to provide simple means of redressing what its designers believe is a lack of balance in current copyright law.  But to be attractive to artists, CC needs to offer them means of determining their levels of control that the copyright system does not support.

In the end, Creative Commons is a burglar alarm sign on your lawn without the actual alarm system.  You can easily buy fake alarm signs for a few dollars, whereas real alarm systems cost thousands.  It’s the same with digital content.  At least Creative Commons, like almost all of the content licensed with it, is free.

(I should add that I wear the badge myself.  My whitepapers and this blog are licensed under Creative Commons BY-NC-ND (Attribution-Noncommercial-No Derivative Works) terms.  I would at least rather have the copyright-savvy people who read this know my intentions.)

