
New White Paper and NAB Workshop: Strategies for Secure OTT Video in a Multiscreen World March 22, 2015

Posted by Bill Rosenblatt in DRM, Events, Standards, Technologies, White Papers.

I have just released a new white paper called Strategies for Secure OTT Video in a Multiscreen World.  The paper covers the emerging world of multi-platform over-the-top (OTT) video applications and how to manage their development for maximum flexibility and cost containment amid a constantly expanding range of devices and user expectations of “any time, any device, anywhere.”  It’s available for download here.

The key technologies that the white paper focuses on are adaptive bitrate streaming (MPEG DASH as an emerging standard) and the integration of Hollywood-approved DRM schemes with HTML5 through Common Encryption (CENC) and Encrypted Media Extensions (EME).

It is becoming possible to integrate DRM with browser-based apps in a way that minimizes native code and avoids plug-in schemes like Microsoft’s Silverlight.  Yet the HTML5 EME specification creates dependencies between browsers and DRMs, so that — at least in the near future — in many cases it will only be possible to integrate a DRM with a browser from the same vendor: for example, Google’s Widevine DRM with the Chrome browser or Microsoft PlayReady with Internet Explorer.  In other words, while the future points to consolidation around web app technologies and adaptive bitrate streaming, the DRM and browser markets will remain fragmented; to offer premium movie and TV content, service providers will need to support multiple DRMs for the foreseeable future.
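To make the browser-DRM dependency concrete, here is a minimal sketch of how a web player can probe which key systems a given browser exposes through EME.  The requestMediaKeySystemAccess call and the key system strings are standard EME; the particular list of systems tested and the capability configuration are illustrative (older Safari versions, for instance, used a prefixed API for FairPlay rather than standard EME).

```typescript
// Sketch: probe which EME key systems (DRMs) this browser supports.
const KEY_SYSTEMS = [
  "com.widevine.alpha",      // Widevine (e.g. Chrome)
  "com.microsoft.playready", // PlayReady (e.g. Internet Explorer / Edge)
  "com.apple.fps.1_0",       // FairPlay Streaming (Safari; see caveat above)
];

const config: MediaKeySystemConfiguration[] = [{
  initDataTypes: ["cenc"], // Common Encryption init data
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
}];

async function detectSupportedDrm(): Promise<string[]> {
  const supported: string[] = [];
  for (const keySystem of KEY_SYSTEMS) {
    try {
      // Resolves only if the browser ships a CDM for this key system.
      await navigator.requestMediaKeySystemAccess(keySystem, config);
      supported.push(keySystem);
    } catch {
      // Key system not available in this browser; try the next one.
    }
  }
  return supported;
}

detectSupportedDrm().then((list) => console.log("Available DRMs:", list));
```

A service provider would run a check like this at startup and then request licenses from whichever DRM back end matches the browser at hand, which is exactly the multi-DRM plumbing the white paper argues needs to be isolated from the rest of the app.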

The white paper lays out a high-level solution architecture that OTT service providers can use to take as much advantage as possible of current and emerging standards while isolating and minimizing the sources of technical complexity that are likely to persist for a while.  It calls for standardizing on adaptive bitrate streaming and app development technologies while allowing for and containing the complexities around browsers and DRMs.

Many thanks to Irdeto for commissioning this white paper.   In addition, Irdeto and I will present a workshop at the NAB trade show on Tuesday, April 14 at 1pm, at The Wynn in Las Vegas.  I’ll give a presentation that summarizes the white paper; then I’ll moderate a panel discussion with the following distinguished guests:

  • Dave Belt, Principal Architect, Time Warner Cable
  • Jean Choquette, Director of Multiplatform Video on Demand, Videotron
  • Shawn Michels, Senior Product Manager, Akamai
  • Richard Frankland, VP Americas, Irdeto

This session will include ample opportunities for Q&A, sharing of experiences and best practices, a catered lunch, and networking with your peers.  Attendance is strictly limited and by invitation only to ensure the richest possible interaction among participants.  If you are interested in attending, please email Katherine.Walsh@irdeto.com by April 7th. Irdeto will even give you a ride from the Las Vegas Convention Center and back if you wish.

USPTO Public Meeting on Identifiers for Automating Content Licensing March 17, 2015

Posted by Bill Rosenblatt in Events, Rights Licensing, Standards, United States.

The U.S. Patent and Trademark Office (USPTO) is holding a public meeting on Wednesday, April 1 to gather input on how the U.S. Government can facilitate the development and use of standard content identifiers as part of the process of creating an automated licensing hub, along the lines of the Copyright Hub in the UK.

This meeting is the second that the USPTO has held since the publication of the “Green Paper” on Copyright Policy, Creativity, and Innovation in the Digital Economy, which it issued jointly with the National Telecommunications and Information Administration (NTIA) in July 2013.  The first meeting, in December 2013, addressed several other topics as well as this one.

(For those of you who are wondering why the USPTO is dealing with copyright issues: the USPTO is the adviser on all intellectual property issues, including copyright, to the Executive Branch of government, i.e., the president and his cabinet.  The U.S. Copyright Office performs an analogous function for the Legislative Branch, i.e., Congress.)

The April 1 meeting will focus tightly on issues of standard identifiers for content — which ones exist today, how they are used, how they are relevant to automation of rights licensing, and so on.  It will also examine the specifics of the UK Copyright Hub and the feasibility of building a similar one here in the States.

As usual for such gatherings, all are welcome to attend, the meeting will be live-streamed, and a transcript will be available afterwards.  It’s just unfortunate that notice of the meeting was only published in the Federal Register last Friday, less than three weeks before the meeting date.  I was asked to suggest panelists on the subjects of content identifiers and content identification technology (such as fingerprinting and watermarking).  There are several experts on these topics who would undoubtedly add much value to such discussions, but many of them — located in places from LA to the UK — would be unable to travel to Washington, DC on such short notice and possibly on their own nickels.  It would be nice to get input on this very timely topic from more than just the “usual suspects” inside the Beltway.

Establishment of reliable, reasonably complete online databases of rights holder information is of vital importance for making licensing easier in an increasingly complex digital age, and it’s encouraging to see the government take an active role in determining how best to get it done and looking at working systems in other countries that are further ahead in the process.  That’s why it’s especially crucial to get as much expert input as possible at this stage.

Perhaps the USPTO can do what it did for the December 2013 meeting: reschedule it for several weeks later.  If you are interested in participating but can’t do so at such short notice (as is the case with me), then you might want to communicate this to the meeting organizers at the PTO.  Otherwise, the usual practice is to invite post-meeting comments in writing.

Flickr’s Wall Art Program Exposes Weaknesses in Licensing Automation December 7, 2014

Posted by Bill Rosenblatt in Images, Rights Licensing, Standards.

Suppose you’re a musician.  You put your songs up on SoundCloud to get exposure for them.  Later you find out that SoundCloud has started a program for selling your music as high-quality CDs and giving you none of the proceeds.  Or suppose you’re a writer who put your serialized novel up on WattPad; then you find out that WattPad has started selling it in a coffee-table-worthy hardcover edition and not sharing revenue with you.   The odds are that in either case you would not be thrilled.

Yet those are rough equivalents of what Flickr, the Yahoo-owned photo-sharing site, has been doing with its Flickr Wall Art program.  Flickr Wall Art started out, back in October, as a way for users to order professional-quality hangable prints of their own photos, in the same way that a site like Zazzle lets users make t-shirts or coffee mugs with their images on them (or Lulu publishes printed books).

More recently, Flickr expanded the Wall Art program to let users order framed prints of any of tens of millions of images that users uploaded to the site.  This has raised the ire of some of the professional photographers who post their images on Flickr for the same reason that musicians post music on SoundCloud and similar sites: to expose their art to the public.

The core issue here is the license terms under which users upload their images to Flickr.  Like SoundCloud and WattPad, Flickr offers users the option of selecting a Creative Commons license for their work when they upload it.  Many Flickr users do this in order to encourage other users to share their images and thereby increase their exposure — so that, perhaps, some magazine editor or advertising art director will see their work and pay them for it.

The fact that a hosting website might exploit a Creative Commons-licensed work for its own commercial gain doesn’t sit right with many content creators who have operated under two assumptions that, as Flickr has shown, are naive.  One is that these big Internet sites just want to get users to contribute content in order to build their audience and that they will make money some other way, such as through premium memberships or advertising.  The other is that Creative Commons licenses are some sort of magic bullet that help artists get exposure for their work while preventing unfair commercial exploitation of it.

Let’s get one thing out of the way: as others have pointed out, what Flickr is doing is perfectly legal.  It takes advantage of the fact that many users upload photos to the site under Creative Commons licenses that allow others to exploit them commercially — which three out of the six Creative Commons license options do.   It seems that many photographers choose one of those licenses when they upload their work and don’t think too much about the consequences.
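As a trivial illustration of the distinction at work: of the six standard Creative Commons license options, the three without the NonCommercial (NC) element permit commercial exploitation by anyone, including the hosting site.  The function below is purely illustrative and not part of any Flickr API.

```typescript
// Sketch: does a Creative Commons license code permit commercial reuse
// (e.g. selling wall-art prints)?
type CcLicense = "BY" | "BY-SA" | "BY-ND" | "BY-NC" | "BY-NC-SA" | "BY-NC-ND";

function allowsCommercialUse(license: CcLicense): boolean {
  // Licenses without the NonCommercial (NC) element allow commercial use.
  return !license.includes("NC");
}

console.log(allowsCommercialUse("BY-SA"));    // true  -> eligible for commercial exploitation
console.log(allowsCommercialUse("BY-NC-ND")); // false -> noncommercial only
```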

Flickr does allow users to change their images’ license terms at any time, and more recently it expanded the Wall Art program to enable photographers to get 51% of revenue from their images if they choose licenses that allow commercial use.  But currently that option is limited to those few photographers whom Flickr has invited into its commercial licensing program, Flickr Marketplace, which it launched this past July.  Flickr Marketplace is intended to be an attractive source of high-quality images for the likes of The New York Times and Reuters, and thus is curated by editors.

Some copyleft folks have circled their wagons around Flickr, maintaining that it shows yet again why content creators should not expect copyright to help them keep control of what happens to their work on the Internet.  But that’s a perversion of what’s going on here.

Flickr is — still, after ten years of existence — a major outlet for photos online.  As such, Flickr has the means to control, to some extent, what happens to the images posted on its service; and with Flickr Marketplace, it is effectively wresting some control of commercial licensing opportunities away from photographers.  Some degree of control over content distribution and use does exist on the Internet, even if copyright law itself doesn’t contribute directly to that control.  The controllers are the entities that Jaron Lanier has called “lords of the cloud” — one of which is Yahoo.

This doesn’t mean that Flickr is particularly outrageous or evil — although it’s at least ironic that while these major content hosting services claim to help content creators through exposure and sharing, Flickr is now making money from objects that are not very shareable at all.  (In fact, what Flickr is doing is not unusual for a mature technology business facing stiff competition from upstarts at the low end of the market — Instagram and Snapchat in this case: it is migrating to the premium/professional end of the market, where prices and margins are higher but volume is lower.)

The problem here is the lack of both flexibility and infrastructure for commercial licensing in the Creative Commons ecosystem.  Creative Commons is a clever and highly successful way of bringing some degree of badly-needed rationalization and automation to the abstruse world of content licensing.  But it gives creators hardly any options for commercial exploitation of their works.

Several years ago, Creative Commons flirted with the idea of extending their licenses to cover terms of commercial use (among other things) by launching a scheme called CC+.  A handful of startups emerged that used CC+ to enable commercial licensing of content on blogs and so on — companies that, interestingly enough, came from across the ideological spectrum of copyright.  One was Ozmo from the Copyright Clearance Center, which helped with the design of CC+; another, RightsAgent, was started by the then Executive Director of the Berkman Center for Internet and Society at Harvard Law School.  Yet none of these succeeded, and it didn’t help that the Creative Commons organization’s heart wasn’t really in CC+ in the first place.

But the picture changes — no pun intended — when big content hosting sites start to monetize user-generated content directly instead of merely using it as a draw for advertising and paid online storage.  Ideas for automated licensing of online content have been kicking around since long before Flickr or CC+ (here’s one example).  Licensing automation mechanisms that can be adopted by big Internet services and individual creators alike for consumer-facing business models are needed now more than ever.

 

Dispatches from IDPF Digital Book 2014, Pt. 3: DRM June 5, 2014

Posted by Bill Rosenblatt in DRM, Publishing, Standards.

The final set of interesting developments at last week’s IDPF Digital Book 2014 in NYC has to do with DRM and rights.

Tom Doherty, founder of the science fiction publisher Tor Books, gave a speech about his company’s experimentation with DRM-free e-books and its launch of a line of e-novellas without DRM.  The buildup to this speech (among those of us who were aware of the program in advance) was palpable, but the result fell with a thud.  You had to listen hard to find the tiny morsel about how going DRM-free has barely affected sales; otherwise the speech was standard-issue dogma about DRM with virtually no new insights or data.  And he did not take questions from the audience.

DRM has become something of a taboo subject even at conferences like this, so most of the rest of the discussion about it took the form of hallway buzz.  And the buzz is that many are predicting that DRM will be on its way out for retail trade e-books within the next couple of years.

That’s the way things are likely to go if technology market forces play out the way they usually do.  Retailers other than Amazon (and possibly Apple) will want to embrace more open standards so that they can offer greater interoperability and thus band together to compete with the dominant player; getting rid of DRM is certainly a step in that direction.  Meanwhile, publishers, getting more and more fed up with or afraid of Amazon, will find common cause with other retailers and agree to license more of their material for distribution without DRM.  (Several retailers in second-tier European countries as well as some retailers for self-publishing authors, such as Lulu, have already dropped DRM entirely.)

Such sentiments will eventually supersede most publishers’ current “faith-based” insistence on DRM.  In other words, publishers and retailers will behave more or less the same way as the major record labels and non-Apple retailers behaved back in 2006-2007.

This course of events seems inevitable… unless publishers get some hard, credible data that tells them that DRM helps prevent piracy and “oversharing” more than it hurts the consumer experience.  That’s the only way (other than outright inertia) that I can see DRM staying in place for trade books over the next couple of years.

The situation for educational, professional, and STM (scientific, technical, medical) books is another story (as are library lending and other non-retail models).  Higher ed publishers in particular have reasons to stick with DRM: for example, e-textbook piracy has been rising dramatically in recent years, reaching 34% of students as of last year.

Adobe recently re-launched its DRM with a focus on these publishing market segments. I’d describe the re-launch as “awkward,” though publishers I’ve spoken to would characterize it in less polite terms.  This has led to openings for other vendors, such as Sony DADC; and the Readium Foundation is still working on the open-source EPUB Lightweight Content Protection scheme.

The hallway buzz at IDPF Digital Book was that DRM for these market segments is here to stay — except that in higher ed, it may become unnecessary in a longer timeframe, when educational materials are delivered dynamically and in a fashion more akin to streaming than to downloads of e-books.

I attended a panel on EDUPUB, a standards initiative aimed at exactly this future for educational publishing.  The effort, led by Pearson Education (the largest of the educational publishers), the IMS Global Learning Consortium, and IDPF, is impressive: it’s based on combining existing open standards (such as IDPF’s EPUB 3) instead of inventing new ones.  It’s meant to be inclusive and beneficial to all players in the higher ed value chain, including Pearson’s competitors.

However, EDUPUB is in danger of making the same mistake as the IDPF did by ignoring DRM and other rights issues.  When asked about DRM, Paul Belfanti, Pearson’s lead executive on EDUPUB, answered that EDUPUB is DRM-agnostic and would leave decisions on DRM to providers of content delivery platforms. This decision was problematic for trade publishers when IDPF made it for EPUB several years ago; it is potentially even more problematic for higher ed, since EDUPUB-based materials could certainly be delivered in e-textbook form.

EDUPUB could also help enable one of the Holy Grails of higher ed publishing: combining materials from multiple publishers into custom textbooks or dynamically delivered digital content.  Unlike most trade books, textbooks often contain hundreds or thousands of content components, each of which may have different rights associated with it.

Clearing rights for higher ed content is a manual, labor-intensive job.  In tomorrow’s world of dynamic digital educational content, it will be more important than ever to make sure that the content being delivered has the proper clearances, in real time.  In reality, this doesn’t necessarily involve DRM; it’s mainly a question of machine-readable rights metadata.
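As a rough illustration of what such machine-readable rights metadata might look like, here is a sketch in which each content component carries a rights record and the delivery system checks clearances at assembly time.  The record shape and field names are hypothetical, not drawn from any existing standard.

```typescript
// Sketch: per-component rights records checked at delivery time.
interface ComponentRights {
  componentId: string;          // e.g. a DOI or publisher-internal ID
  rightsHolder: string;
  permittedUses: Set<"print" | "ebook" | "dynamic-delivery">;
  territories: Set<string>;     // ISO country codes
  expires?: Date;               // optional license expiration
}

function isCleared(
  rights: ComponentRights,
  use: "print" | "ebook" | "dynamic-delivery",
  territory: string,
  now: Date = new Date(),
): boolean {
  if (!rights.permittedUses.has(use)) return false;
  if (!rights.territories.has(territory)) return false;
  if (rights.expires && rights.expires < now) return false;
  return true;
}

// A dynamically assembled textbook is deliverable only if every component clears.
function canDeliver(components: ComponentRights[], territory: string): boolean {
  return components.every((c) => isCleared(c, "dynamic-delivery", territory));
}
```

The point of a sketch like this is that the check is cheap and automatic once the metadata exists; the hard part, as the next paragraph notes, is getting the metadata standardized and populated in the first place.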

Attempts to standardize this type of rights metadata date back at least to the mid-1990s (when I was involved in such an attempt); none have succeeded.  This is a “last mile” issue that EDUPUB will have to address, sooner rather than later, for it to make good on its very promising start.  DRM and rights are not popular topics for standards bodies to address, but it has become increasingly clear that they must address these issues to be successful.

Images, Search Engines, and Doing the Right Thing January 13, 2014

Posted by Bill Rosenblatt in Rights Licensing, Standards.

A recent blog post by Larry Lessig pointed out that the image search feature in Microsoft’s Bing allows users to filter search results so that only images with selected Creative Commons licenses appear.  A commenter to the post found that Google also has this feature, albeit buried deeply in the Advanced Search menu (see for example here; scroll down to “usage rights”).  These features offer a tantalizing glimpse at an idea that has been brewing for years: the ability to license content directly and automatically through web browsers.  

Let’s face it: most people use image search to find images to copy and paste into their PowerPoint presentations (or blog posts or web pages).  As Lessig points out, these features help users to ensure that they have the rights to use the images they find in those ways.  But they don’t help if an image is licensable in any way other than a subset of Creative Commons terms — regardless of whether royalties are involved.  I’d stop short of calling this discrimination against image licensors, but it certainly doesn’t help them.

Those who want to license images for commercial purposes — such as graphic artists laying out advertisements — typically go to stock photo agencies like Getty Images and Corbis, which have powerful search facilities on their websites.  Below Getty and Corbis, the stock image market is fragmented into niches, mostly by subject matter.  Graphic artists have to know where to go to find the kinds of images they need.

There is a small “one-stop” search engine for images called PictureEngine, which includes links to licensing pages in its search results.  But it would surely be better if the mainstream search engines’ image search functions included the ability to display and filter on licensing terms, and to enable links to licensing opportunities.

There have been various attempts over the years to make it easy for users to “do the right thing,” i.e. to license commercial content that they find online and intend to copy and use for purposes that aren’t clearly covered under fair use or equivalents.  Most such efforts have focused on text content.  The startup QPass had a vogue among big-name newspaper and magazine brands during the first Internet bubble: it provided a service for publishers to sell archived articles to consumers for a dollar or two apiece.  It effectively disappeared during the post-bubble crash.  ICopyright, which has been around since the late 1990s, provides a toolbar that publishers can use on web pages to offer various licensing options such as print, email, republish, and excerpt; most of its customers are B2B publishers like Dow Jones and Investors Business Daily.

Images are just as easy as text (compared, say, to audio and video) to copy and paste from web pages, but they are more discrete units of content; therefore it ought to be easier to automate licensing of them.  When you copy and paste an image with a Creative Commons license, you’re effectively getting a license, because the license is expressed in XML metadata attached to the image.
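As a rough sketch of how that embedded license information can be read programmatically, the following looks for an XMP packet inside an image file and extracts an embedded Creative Commons license URL.  Real XMP parsing involves proper RDF and namespace handling; the string scanning below is only an illustration, and the file name is hypothetical.

```typescript
// Sketch (Node.js): find the XMP packet in an image and pull out a CC license URL.
import { readFileSync } from "node:fs";

function extractCcLicense(imagePath: string): string | null {
  const bytes = readFileSync(imagePath).toString("latin1");

  // XMP packets are plain XML embedded in the binary, delimited like this.
  const start = bytes.indexOf("<x:xmpmeta");
  const end = bytes.indexOf("</x:xmpmeta>");
  if (start === -1 || end === -1) return null;

  const xmp = bytes.slice(start, end + "</x:xmpmeta>".length);

  // The cc:license property (or xmpRights:WebStatement) typically holds a
  // creativecommons.org URL identifying the license.
  const match = xmp.match(/https?:\/\/creativecommons\.org\/licenses\/[^"'<\s]+/);
  return match ? match[0] : null;
}

console.log(extractCcLicense("photo.jpg") ?? "no CC license metadata found");
```

A search engine indexer does essentially the same thing at scale, which is why indexing additional — including commercial — license terms is technically straightforward once those terms are expressed in metadata.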

If search engines can index Creative Commons terms embedded within images that they find online, they ought to be able to index other licensing terms, including commercial terms.  The most prominent standard for image rights is PLUS (Picture Licensing Universal System), but it is intended for describing rights in B2B licensing arrangements, not for licensing to the general public; and I’m not aware of any efforts that the PLUS Coalition has made to integrate with web search engines.

No, the solution to this problem is not only clear but has been in evidence for years: augment Creative Commons so that it can handle commercial as well as noncommercial licensing.  Creative Commons flirted with this idea several years ago with something called CC+ (CCPlus), a way to add additional terms to Creative Commons licenses that envisioned commercial licenses in particular.

Although Creative Commons has its detractors among the commercial content community (mostly lawyers who feel disintermediated — which is part of the point), I have heard major publishers express admiration for it as well as interest in finding ways to use it with their content.  At this point, the biggest obstacle to extending Creative Commons to apply to commercial licensing is the Creative Commons organization’s lack of interest in doing so.  Creative Commons’ innovations have put it at the center of the copyright world on the Internet; it would be a shame if Creative Commons’ refusal to acknowledge that some people would like to get paid for their work results in that possibility being closed off.

MovieLabs Releases Best Practices for Video Content Protection October 23, 2013

Posted by Bill Rosenblatt in DRM, Standards, Video.

As Hollywood prepares for its transition to 4k video (four times the resolution of HD), it appears to be adopting a new approach to content protection, one that promotes more service flexibility and quicker time to market than previous approaches but carries other risks.  The recent publication of a best-practices document for content protection from MovieLabs, Hollywood’s R&D consortium, signals this new approach.

In previous generations of video technology, Hollywood studios got together with major technology companies and formed technology licensing entities to set and administer standards for content protection.  For example, a subset of the major studios teamed up with IBM, Intel, Microsoft, Panasonic, and Toshiba to form AACS LA, the licensing authority for the AACS content protection scheme for Blu-ray discs and (originally) HD DVDs.  AACS LA defines the technology specification, sets the terms and conditions under which it can be licensed, and performs other functions to maintain the technology.

A licensing authority like AACS LA (and there is a veritable alphabet soup of others) provides certainty to technology implementations, including compliance, patent licensing, and interoperability among licensees.  It helps insulate the major studios from accusations of collusion by being a separate entity in which at most a subset of them participate.

As we now know, the licensing-authority model has its drawbacks.  One is that it can take the licensing authority several years to develop technology specs to a point where vendors can implement them — by which time they risk obsolescence.  Another is that it does not offer much flexibility in how the technology can adapt to new device types and content delivery paradigms.  For example, AACS was designed with optical discs in mind at a time when Internet video streaming was just a blip on the horizon.

A document published recently by MovieLabs signals a new approach.  MovieLabs Specification for Enhanced Content Protection is not really a specification, in that it is nowhere near detailed enough to serve as the basis for implementations.  It is more a compendium of what we now understand as best practices for protecting digital video, and it leaves room for change and interpretation.

The best practices in the document amount to a wish list for Hollywood.  They include things like:

  • Techniques for limiting the impact of hacks to DRM schemes, such as requiring device as well as content keys, code diversity (a hack that works on one device won’t necessarily work on another), title diversity (a hack that works with one title won’t necessarily work on another), device revocation, and renewal of protection schemes.
  • Proactive renewal of software components instead of “locking the barn door after the horse has escaped.”
  • Component technologies that are currently considered safe from hacks by themselves, including standard AES encryption with a minimum key length of 128 bits and version 2.2 or later of the HDCP scheme for protecting links such as HDMI cables (earlier versions were hacked).
  • Hardware roots of trust on devices, running in secure execution environments, to limit opportunities for key leakage.
  • Forensic watermarking, meaning that content should have information embedded in it about the device or user who requested it.

Those who saw Sony Pictures CTO Spencer Stephens’s talk at the  Anti-Piracy and Content Protection Summit in LA back in July will find much of this familiar.  Some of these techniques come from the current state of the art in content protection for pay TV services; for more detail on this, see my whitepaper The New Technologies for Pay TV Content Security.  Others, such as the forensic watermarking requirement, come from current systems for distributing HD movies in early release windows.  And some result from lessons learned from cracks to older technologies such as AACS, HDCP, and CSS (for DVDs).
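To make the forensic watermarking requirement concrete, here is a deliberately simplified sketch that embeds a per-session identifier into the least significant bits of raw media samples so that a leaked copy can be traced back to the account that requested it.  Production forensic watermarks are imperceptible and robust against re-encoding, scaling, and camcording; this toy LSB version is not how any real system works and only illustrates the principle.

```typescript
// Toy sketch: tie a copy of content to the requesting session by hiding a
// 32-bit identifier in the least significant bits of raw samples.
function embedWatermark(samples: Uint8Array, sessionId: number): Uint8Array {
  const out = Uint8Array.from(samples);
  for (let bit = 0; bit < 32 && bit < out.length; bit++) {
    const payloadBit = (sessionId >>> bit) & 1;
    out[bit] = (out[bit] & 0xfe) | payloadBit; // overwrite the LSB
  }
  return out;
}

function extractWatermark(samples: Uint8Array): number {
  let sessionId = 0;
  for (let bit = 0; bit < 32 && bit < samples.length; bit++) {
    sessionId |= (samples[bit] & 1) << bit;
  }
  return sessionId >>> 0; // interpret as an unsigned 32-bit value
}

const marked = embedWatermark(new Uint8Array(64), 0xdeadbeef);
console.log(extractWatermark(marked).toString(16)); // "deadbeef"
```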

MovieLabs is unable to act as a licensor of standards for content protection (or anything else, for that matter).  The six major studios set it up in 2005 as a movie industry joint R&D consortium modeled on the cable television industry’s CableLabs and other organizations enabled by the National Cooperative Research Act of 1984, such as Bellcore (telecommunications) and SEMATECH (semiconductors).  R&D consortia are allowed, under antitrust law, to engage in “pre-competitive” research and development, but not to develop technologies that are proprietary to their members.

Accordingly, the document contains a lot of language intended to disassociate these requirements from any actual implementations, standards, or studio policies, such as “Each studio will determine individually which practices are prerequisites to the distribution of its content in any particular situation” and “This document defined only one approach to security and compatibility, and other approaches may be available.”

Instead, the best-practices approach looks like it is intended to give “signals” from the major studios to content protection technology vendors, such as Microsoft, Irdeto, Intertrust, and Verimatrix, who work with content service providers.  These vendors will then presumably develop protection schemes that follow the best practices, with an understanding that studios will then agree to license their content to those services.

The result of this approach should be legal content services for next-generation video that get to market faster.  The best practices are independent of things like content delivery modalities (physical media, downloads, streaming) and largely independent of usage rules.  Therefore they should enable a wider variety of services than is possible with the traditional licensing authority paradigm.

Yet this approach has two drawbacks compared to the older approach.  (And of course the two approaches are not mutually exclusive.)  First, it jeopardizes the interoperability among services that Hollywood craves — and has gone to great lengths to preserve in the UltraViolet standard.  Service providers and device makers can incorporate content protection schemes that follow MovieLabs’ best practices, but consumers may not be able to move content among those services, and service providers will be able to use content protection schemes to lock users in.  In contrast, many in Hollywood are now nostalgic for the DVD because, although its protection scheme was easily hacked, it guaranteed interoperability across all players (at least within a given geographic region).

The other drawback is that the document is a wish list provided by organizations that won’t pay for the technology.  This means that downstream entities such as device makers and service providers will treat it as the maximum amount of protection that they have to implement to get studio approval.  Because there is no license agreement that they have to sign to get access to the technology, the downstream entities are likely to negotiate down from there.  (Such negotiation already took place behind the scenes during the rollout of Blu-ray, as player makers refused to implement some of the more expensive protection features and some studios agreed to let them slip.)

Downstream entities are particularly likely to push back against some of MovieLabs’s best practices that involve costs and potential impairments of the user experience; examples include device connectivity to networks for purposes of authentication and revocation, proactive renewal of device software, and embedding of situation-specific watermarks.

Surely the studios understand all this.  The publication of this document by MovieLabs shows that Hollywood is willing to entertain dialogues with service providers, device makers, and content protection vendors to speed up time-to-market of legitimate video services and ensure that downstream entities can innovate more freely.  How much protection will the studios ultimately end up with when 4k video reaches the mainstream?  It will be very interesting to watch over the next couple of years.

Copyright and Accessibility June 19, 2013

Posted by Bill Rosenblatt in Events, Law, Publishing, Standards, Uncategorized.

Last week I received an education in the world of publishing for print-disabled people, including the blind and dyslexic.  I was in Copenhagen to speak at Future Publishing and Accessibility, a conference produced by Nota, an organization within the Danish Ministry of Culture that provides materials for the print-disabled, and the DAISY Consortium, the promoter of global standards for talking books.  The conference brought together speakers from the accessibility and mainstream publishing fields.

Before the conference, I had been wondering what the attitude of the accessibility community would be towards copyright.  Would they view it as a restrictive construct that limits the spread of accessible information, allowing it to remain in the hands of publishers that put profit first?

As it turns out, the answer is no.  The accessibility community, generally speaking, has a balanced view of copyright that reflects the growing importance of the print disabled to publishers as a business matter.

Digital publishing technology might be a convenience for normally sighted people, but for the print disabled, it’s a huge revelation.  The same e-publishing standards that promote ease of production, distribution, and interoperability for mainstream consumers make it possible to automate and thus drastically lower the cost and time to produce content in Braille, large print, or spoken-word formats.

Once you understand this, it makes perfect sense that the IDPF (promoter of the EPUB standards for e-books) and DAISY Consortium share several key members.  It was also pointed out at the conference that the print disabled constitute an audience that expands the market for publishers by roughly 10%.  All this adds up to a market for accessible content that’s just too big to ignore.

As a result, the interests of the publishing industry and the accessibility community are aligning.  Accessibility experts respect copyright because it helps preserve incentives for publishers to convert their products into versions for the print disabled.  Although more and more accessibility conversion processes can be automated, manual effort is still necessary — particularly for complex works such as textbooks and scientific materials.

Publishers, for their part, view making content accessible to the print disabled as part of the value that they can add to content — value that still can’t exist without financial support and investment.

One example is Elsevier, the world’s largest scientific publisher.  Elsevier has undertaken a broad, ambitious program to optimize its ability to produce versions of its titles for the print disabled.  One speaker from the accessibility community called the program “the gold standard” for digital publishing.  Not bad for a company that some in the academic community refer to as the Evil Empire.

This is not by any means to suggest that publishers and the accessibility community coexist in perfect harmony.  There is still a long way to go to reach the state articulated at the conference by George Kerscher, who is both Secretary General of DAISY and President of IDPF: to make all materials available to the print disabled at the same time, and for the same price, as mainstream content.

The Future Publishing and Accessibility conference was timed to take place just before negotiations begin over a proposed WIPO treaty that would facilitate the production of accessible materials and distribution of them across borders.  The negotiations are taking place this and next week in Marrakech, Morocco.  This proposed treaty is already laden with concerns from the copyright industries that its provisions will create opportunities for abuse, and reciprocal concerns from the open Internet camp that the treaty will be overburdened with restrictions designed to limit such abuse.  But as I found out in Denmark last week, there is enough practical common ground to hope that accessibility of content for the print disabled will continue to improve.

Withholding Ads from Illegal Download Sites January 7, 2013

Posted by Bill Rosenblatt in Standards.

Over the past few months, the conversation about online infringement has shifted from topics like graduated response and DMCA-related litigation to cutting off ad revenue for sites that provide illegal downloads.  This issue gained some importance during the run-up to SOPA and PIPA in 2011.  David Lowery of The Trichordist gave it new visibility last year by initiating a stream of screenshots showing major consumer brands that advertise on sites like FilesTube, IsoHunt, and MP3Skull.

Last week, the issue took on a new level of importance when a report from the University of Southern California’s Annenberg Innovation Lab confirmed that major online ad networks, including those of Google and Yahoo, routinely place ads on pirate sites.  The Innovation Lab has come up with an automated way of tracking the ad networks that place ads on sites that (according to the Google Transparency Report) attract the most DMCA takedown notices.  The study ranks the top ten ad networks that serve ads on pirate sites and will be updated monthly.

The idea is to shame consumer brands by showing them that their ads are appearing on pirate sites amid ads for pornography, mail-order brides, etc.  The Annenberg study has already led to at least one major consumer brand insisting that its ads be pulled from pirate sites.

The focus on online ad networks is — as Lowery admitted at our Copyright and Technology NYC 2012 conference last month — not an ideal solution to the problem of online infringement but rather a “low hanging fruit” approach that appeals to real business imperatives without requiring lawyers or lobbyists.  It’s an acknowledgement that legislation to address online infringement is not going to be achievable, at least in the near future, in the aftermath of the defeats of SOPA and PIPA in early 2012.

Yet the tactic’s effectiveness is limited by the quality of information about ad buys that flows through ad networks.  For example, it’s sometimes not possible for an advertiser to know where its ads are being placed because, among other reasons, ad networks resell inventory to each other.

Let’s assume that most consumer brands would rather not have their ads placed on pirate sites.  Then two things are required to solve this problem.  One is standards for information about ad buys — advertiser identity, inventory, ad network, type of placement, and so on — and protocols for communicating that information up the chain of intermediaries from the website on which the ad was placed all the way up to the advertiser.  The other is agreement throughout the online ad industry to use such standards in communication and reporting.
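As a sketch of what such standardized ad-buy information might look like, the record below carries a globally unique placement identifier and an explicit reseller chain so that an advertiser can trace where its ads actually ran.  The field names are hypothetical and are not taken from any existing IAB specification.

```typescript
// Sketch: an ad-buy record that survives resale between ad networks.
interface AdPlacementRecord {
  placementId: string;        // globally unique ID assigned at the original sale
  advertiserId: string;
  agencyId?: string;
  inventoryDescription: string;
  placementType: "display" | "video" | "native";
  resellerChain: string[];    // every ad network the buy passed through, in order
  servedOnDomain?: string;    // filled in by the final network at serve time
}

// Each intermediary appends itself before passing the record downstream,
// so the advertiser can reconstruct the full chain of custody.
function recordResale(record: AdPlacementRecord, networkId: string): AdPlacementRecord {
  return { ...record, resellerChain: [...record.resellerChain, networkId] };
}
```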

If you follow efforts to develop standard content identifiers and online rights registries, you should see the analogy here.

The good news is that the Interactive Advertising Bureau (IAB) has been working on standards that look like they could apply here with some tweaks.   The IAB launched the eBusiness Interactive Standards initiative in 2008 with the goal of increasing efficiency and reducing errors in online ad workflows.  The eBusiness Interactive Standards spec defines XML structures for communicating information among advertisers, agencies, and publishers (websites) from RFPs (requests for proposal) through to IOs (insertion orders).

Now the bad news.  The IAB standards would need some modification to cover the requirements here: they don’t appear to work through multiple levels of ad networks, they don’t include globally unique identifiers for ad placements (though this would be simple to add), and they aren’t designed to cover performance reporting.  Furthermore, progress in getting the standard to market appears to be slow: it entered a beta phase with limited customers in 2011, and no progress has been apparent since then.

Yet even if the right standards were adopted, the advertising industry would still need to commit to the kinds of transparency that would be necessary to ensure, if an advertiser wishes, that its ads don’t appear on pirate sites.  For one thing, advertisers often buy “blind” a/k/a run-of-network inventory in order to get discount pricing, and there is no reliable way to ensure that such buys don’t inadvertently end up on pirate sites.  A related problem is where to draw the line between obvious pirate sites like the ones mentioned above and those that happen to occasionally host unauthorized material.

Regulatory initiatives seem unlikely here.  Indeed, the Obama Administration and Congress in 2011 asked the ad industry to adopt a “pledge” against advertising on pirate sites; the industry’s two major U.S. trade associations responded last May with a statement full of equivocation and wiggle-room.

Ultimately, the pressure would have to come from advertisers themselves.  They could demand, for example, that even blind buys not appear on the 200-250 sites that show more than 10,000 takedown notices a month in the Google Transparency Report (mainstream sites like Facebook, Tumblr, DailyMotion, Scribd, and SoundCloud fall well below this threshold).  A good set of technical standards for tracking and reporting would help convince them that they can demand to withhold their ads from these sites with a reasonable chance of success.
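That kind of demand is easy to express once the data exists.  Here is an illustrative sketch of the threshold policy described above, assuming a feed of monthly takedown-notice counts per domain (something like the Google Transparency Report data); the threshold, domains, and data source are illustrative.

```typescript
// Sketch: exclude high-takedown domains from blind / run-of-network buys.
const MONTHLY_TAKEDOWN_THRESHOLD = 10_000;

function filterBlindBuyInventory(
  candidateDomains: string[],
  monthlyTakedownNotices: Map<string, number>,
): string[] {
  return candidateDomains.filter(
    (domain) => (monthlyTakedownNotices.get(domain) ?? 0) < MONTHLY_TAKEDOWN_THRESHOLD,
  );
}

const notices = new Map([
  ["example-locker.com", 42_000], // hypothetical pirate file locker
  ["example-news.com", 3],        // hypothetical mainstream site
]);
console.log(filterBlindBuyInventory(["example-locker.com", "example-news.com"], notices));
// -> ["example-news.com"]
```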


Reducing Complexity of Multiscreen Video Services with PlayReady and MPEG-DASH November 19, 2012

Posted by Bill Rosenblatt in DRM, Standards, Video.

As I have worked with video service providers that are trying to upgrade their offerings to include online and mobile services, I’ve seen bewilderment about the maze of codecs, streaming protocols, and player apps as well as content protection technologies that those service providers need to understand.  Yet a development that took place earlier this month should help ease some of the complexity.

Microsoft’s PlayReady is becoming a popular choice for content protection.  Dozens of service providers use it, including BSkyB, Canal+, HBO, Hulu, MTV, Netflix, and many ISPs, pay TV operators, and wireless carriers.  PlayReady handles both downloads and streaming, and it is currently the only commercial DRM technology certified for use with UltraViolet (though that should change soon).  Microsoft has developed a healthy ecosystem of vendors that supply things like player apps for different platforms, “hardening” of client implementations to ensure robustness, server-side integration services, and end-to-end services.  And after years of putting in very little effort on marketing, Microsoft has finally upgraded its PlayReady website with information to make it easier to understand how to use and license the technology.

Streaming protocols are still a bit of an issue, though.  Several vendors have created so-called adaptive streaming protocols, which monitor the user’s throughput and vary the bit rate of the content to ensure optimal quality without interruptions.  Apple has HTTP Live Streaming (HLS), Microsoft has Smooth Streaming, Adobe has HTTP Dynamic Streaming (HDS), and Google has technology it acquired from Widevine.  Yet operators have been more interested in  Dynamic Adaptive Streaming over HTTP (DASH), an emerging vendor-independent MPEG standard.  The hope with MPEG-DASH is that operators can use the same protocol to stream to a wide variety of client devices, thereby making deployment of TV Everywhere-type services cheaper and easier.
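The core adaptive-bitrate decision is simple to sketch: the manifest advertises several encodings of the same content, and the player picks the highest bitrate its measured throughput can sustain, leaving some headroom.  Real players (and the DASH specification) layer buffer-based heuristics on top; the following is only an illustration, and the representations and safety factor are made up.

```typescript
// Sketch: pick the highest-bitrate representation the measured throughput can sustain.
interface Representation {
  id: string;
  bandwidth: number; // bits per second, as declared in the manifest
}

function chooseRepresentation(
  representations: Representation[],
  measuredThroughputBps: number,
  safetyFactor = 0.8, // leave headroom so playback doesn't stall
): Representation {
  const sorted = [...representations].sort((a, b) => a.bandwidth - b.bandwidth);
  const affordable = sorted.filter((r) => r.bandwidth <= measuredThroughputBps * safetyFactor);
  return affordable.length > 0 ? affordable[affordable.length - 1] : sorted[0];
}

const reps = [
  { id: "360p", bandwidth: 1_000_000 },
  { id: "720p", bandwidth: 3_000_000 },
  { id: "1080p", bandwidth: 6_000_000 },
];
console.log(chooseRepresentation(reps, 5_000_000).id); // "720p"
```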

MPEG-DASH took a significant step towards real-world viability over the last few months with the establishment of the DASH Industry Forum, a trade association that promotes market adoption of the standard.  Microsoft and Adobe are members, though not Apple or Google, indicating that at least some of the vendors of proprietary adaptive streaming will embrace the standard.  The membership also includes a healthy critical mass of vendors in the video content protection space: Adobe, BuyDRM, castLabs, Irdeto, Nagra, and Verimatrix — plus Cisco, owner of NDS, and Motorola, owner of SecureMedia.

Adaptive streaming protocols need to be integrated with content protection schemes.  PlayReady was originally designed to work with Smooth Streaming.  It has also been integrated with HLS, which is probably the most popular of the proprietary adaptive streaming schemes.  Integration of PlayReady with MPEG-DASH is likely to be viewed as a safe choice, in line with the way the industry is going.  That solution came into view this month as BuyDRM and Fraunhofer IIS announced an integration of MPEG-DASH with PlayReady for the HBO GO service in Europe.  HBO GO is HBO’s “over the top” service for subscribers.

For the HBO GO demo, BuyDRM implemented a version of its PlayReady client that uses Fraunhofer’s AAC 5.1 surround-sound codec, which ships with devices that run Android 4.1 Jelly Bean.  The integration is being showcased with HD quality video on HBO’s “Boardwalk Empire” series. Users can connect Android 4.1 devices with the proper outputs — even handsets — to home-theater audio playback systems to get an experience equivalent to playing a Blu-ray disc.  The current implementation supports live broadcasting, with VOD support on the way shortly.

PlayReady integrated with MPEG-DASH is likely to be a popular choice for a variety of video service providers, ranging from traditional pay TV operators to over-the-top services like HBO Go.  BuyDRM and Fraunhofer’s deployment is an important step towards that choice becoming widely feasible.

UK IPO Publishes Digital Copyright Hub Report August 13, 2012

Posted by Bill Rosenblatt in Rights Licensing, Standards, UK.

Last month, the UK Intellectual Property Office published a report called Copyright Works: streamlining copyright licensing for the digital age.  This is the second report to come out of Richard Hooper CBE and Dr. Ros Lynch’s engagement with the UK IPO.  Hooper’s background includes positions at the top of the UK’s media and telecommunications industries; Lynch is a senior civil servant in the UK’s Department for Business, Innovation and Skills.

The second Hooper Report follows on the heels of several important developments in the UK regarding copyright in the digital age, most recently including the Digital Economy Act and the Hargreaves Review.  Having found (in the first Hooper Report) that the legal content marketplace is being held back by several obstacles, such as licensing difficulties, lack of standards, and deficiencies in both content and metadata, the second Hooper Report makes recommendations on how to solve the problems.

Unfortunately the recommendations in the second Hooper Report don’t go far enough.  Hooper and Lynch did a lot of research, talked to lots of people, and synthesized lots of information.  Most of their input appears to have come from established industry sources, including the major licensing entities in the UK, such as PRS and PPL (roughly analogous to ASCAP and SoundExchange in the US); major media companies; trade associations; and standards initiatives engendered by the EU Digital Agenda such as the Global Repertoire Database (GRD) and Linked Content Coalition (LCC). They also researched important initiatives outside of the UK, such as the Copyright Clearance Center’s RightsLink service in the US.

Whereas the first Hooper Report established that major problems exist, this new report is best appreciated as a summary of the various initiatives being planned to solve pieces of them — such as the GRD and LCC.  Hooper and Lynch offer cogent explanations of problems to be solved: difficulty of licensing content into legitimate services, lack of complete and consistent information about content and rights, lack of standards for rights information and communication among relevant entities, resistance of collective licensing schemes to new business models, and the relative lack of content available for legal use through various channels.

The authors appear to understand that the various efforts being proposed are not going to solve all the problems by themselves.  On the other hand, they also understand problems of “not invented here,” and they take the pragmatic view that the best way forward is to work with existing standards and integrate them together rather than try to come up with some kind of overall solution that may not be practicable.

So far, so good; but that’s essentially where it all stops.  After explaining the problems and summarizing existing initiatives, the report tantalizingly lays out a vision for a Copyright Hub that will bring everything together.  It recommends government seed funding as a way of both kick-starting the Copyright Hub and ensuring that people work together to build it.

Unfortunately, the vision for the Copyright Hub turns out to be an inch deep.  It also lacks explanations of how, or if, all these initiatives — ranging from PRS and PPL’s efforts to offer “one-stop” music licenses all the way up through the technically sophisticated GRD — could fit together or even how they map to the elements in the proposed Copyright Hub.  The LCC project is looking at technical aspects of the integration issue, but it is conceived as an enabler of standards, not as a marketplace solution.  It’s possible that such a solution is envisioned as a next step in the process. But the report betrays evidence of a lack of technical understanding that would have benefited both the analysis and the envisioning of solutions.

For example: The report has a section on digital images, which discusses the problem that many images are stripped of their rights metadata as part of normal publishing processes.  It discusses the possibility of using Internet-standard Uniform Resource Identifiers (URIs) to identify images and the work that entities such as Getty Images and the PLUS coalition are doing to create image registries and automate rights licensing.  But when put in this context, the solution to the metadata stripping problem is obvious: watermarking, the standard way of ensuring that data travels with content.  The problem can be solved with a standard watermarking scheme whose payload includes a serial identifier that can be used to reference a URI in a registry.  This is what the RIAA proposed for music in the U.S. in 2009, albeit to precious little fanfare; but Hooper and his people didn’t see it.  (They use the word “embed” without appearing to understand its meaning.)  There are other examples like this.
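As an illustration of the registry idea, the sketch below resolves a serial identifier recovered from a watermark payload against a rights registry to find the owner and a licensing URI.  The registry endpoint, identifier format, and record shape are all invented for illustration; a real deployment would use whatever registry (PLUS-style, Copyright Hub, or otherwise) the industry agrees on.

```typescript
// Sketch: resolve a watermark payload's serial ID against a hypothetical rights registry.
interface RegistryRecord {
  serialId: string;
  rightsHolder: string;
  licensingUri: string; // where a prospective user can obtain a license
}

async function resolveWatermarkPayload(serialId: string): Promise<RegistryRecord> {
  // Hypothetical registry API endpoint, used only for illustration.
  const response = await fetch(`https://registry.example.org/works/${serialId}`);
  if (!response.ok) throw new Error(`No registry record for ${serialId}`);
  return (await response.json()) as RegistryRecord;
}

resolveWatermarkPayload("IMG-000123456")
  .then((record) => console.log(`License this image at ${record.licensingUri}`))
  .catch(console.error);
```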

The report mentions “long tail” licensing — not as in long tail content, but as in long tail uses of content rights.  The work that needs to be done should, the report rightly says, address the large and growing number of low-value licensing transactions rather than, say, Universal Music Group licensing to Spotify or Deezer (the kind of deal that will always get done the old-fashioned way).  Unfortunately, the authors don’t seem to have talked to many people who try to get such licensing.  They should, for example, have sought out startup companies that have to navigate the impenetrable maze of direct licensing deals with rights holders,  face the rigidity of collecting societies that won’t accommodate their innovative business models, and make separate deals in 27 member states to get a pan-European service launched.

Overall, the second Hooper Report reads like a particularly well-informed version of the typical industry response to a government body’s investigation into industry practices: look at all the steps we’re already taking to solve this problem; leave us alone.

As a result, the new Hooper Report is a solid foundation on which to build solutions, but it doesn’t provide enough forward direction.  It’s all very well to talk about respecting the growing body of valuable work that different organizations are doing to solve online content licensing problems, avoiding “not invented here,” promoting open standards, and so on.  But the work that must be done will necessarily include tasks that are tedious and contentious, aspects that the Hooper Report glosses over.

Metadata schemes will have to be rationalized against one another; gaps and incompatibilities will have to be identified and eliminated.  Rights holders whose metadata is incomplete or poor quality will have to be identified and given sufficient incentive to improve.  Well-intentioned standards initiatives with overlapping or conflicting goals will have to change.  Digital holdouts will have to be convinced to participate.  And the many organizations with vested interests in maintaining the status quo will have to be called out as part of the problem rather than the solution.  This may be ugly work, but it will have to get done.
