
Roundtable Discussion October 14th: Algorithms and the DMCA Post-Lenz October 2, 2015

Posted by Bill Rosenblatt in Uncategorized.

I will be appearing as the featured guest speaker at a roundtable discussion at Fordham Law School in NYC on Wednesday, October 14th, from 8:45 to 10:00 am.  We’ll discuss the state of algorithms for monitoring online services and detecting possible copyright infringements, and what effect the recent decision in Lenz v. UMG might have on them.  I’ll be providing some background on the relevant technologies and industry practices.  The event is part of the Fordham Center for Law and Information Policy (CLIP) monthly roundtable series.

This event is by invitation only, so if you’re interested in attending, please email me.

Forbes: Is Ad Blocking the New Piracy? September 25, 2015

Posted by Bill Rosenblatt in Economics, Law.

My latest column in Forbes takes Apple’s decision to add ad-blocking primitives to iOS 9 as an occasion to look at the fast-growing phenomenon of blocking ads in web browsers, and specifically to compare it to online copyright infringement.

Both developments lead to revenue losses for content publishers.  Both are enabled by technological tools that make them easy and (in most cases) free for non-tech-savvy consumers.  And both have engendered cottage industries of technologies that attempt to combat them.  The article deals with the revenue models for such companies and the industry factions that are lining up on each side of these debates.

The upshot is that ad-blocking is not a “Big Media vs. Big Tech” issue; it is more accurately described as a “Big Media and Some of Big Tech vs. Other Big Tech” issue.  In particular, Google — which earns over 90% of its revenue from ads — is not a big fan of ad blocking.  The trade associations that are addressing the ad-blocking issue appear to be learning lessons from the Copyright Wars by trying to establish best practices for online advertising that, if adopted, could avoid technological arms races.

Ninth Circuit Calls for Takedown Notices to Address Fair Use September 15, 2015

Posted by Bill Rosenblatt in Fingerprinting, Law, Music.

This past Monday’s ruling from the Ninth Circuit Court of Appeals in Lenz v. Universal Music Group, a/k/a the Dancing Baby Video case, is being hailed as an important one in establishing the role of fair use in the online world.  The case involved a common enough occurrence: a homemade video clip of someone’s child, with music (Prince’s “Let’s Go Crazy”) in the background, posted to YouTube.*  UMG sent a takedown notice, Stephanie Lenz sent a counter-notice, and an eight-year legal battle ensued.  Monday’s ruling was not a decision on the defendant’s liability but merely a denial of summary judgment, meaning that the case will now go to trial.

The three-judge panel produced two important holdings: first, that fair use is really a user’s right, and not just an affirmative defense to a charge of infringement; and second, that copyright holders have to take fair use into account when issuing DMCA takedown notices.  As we’ll discuss here, this will have some effect on copyright holders’ ability to use automated means to enforce copyright online.

Under the DMCA (Section 512 of U.S. copyright law), online service providers can avoid copyright liability if they respond to notices requesting that allegedly infringing material be taken down.  Notices have to comply with legal requirements, one of which is a good faith belief that the user who put the work up online was not authorized to do so.  This court now says that fair use is not merely a defense to a charge of infringement — to be asserted after the copyright holder files a lawsuit — but is actually a form of authorization.

It follows that the copyright holder must profess a good faith belief that the user wasn’t making a fair use of the work in order for a takedown notice to be valid.  The court also held that this good faith belief can be “subjective” rather than based on objective facts; but it’s ultimately up to a jury to decide whether it buys the complainant’s basis for that belief.

The question for us here is how this ruling will affect the technologies and automated processes that many copyright owners use to police their works online, often through copyright monitoring services like MarkMonitor, Muso, Friend MTS, Entura, and various others.  These services use fingerprinting and other techniques to identify content online, create takedown notices from templates, and send them — many thousands per day — to online services.  Page 19 of the Lenz decision contains a hint:

“We note, without passing judgment, that the implementation of computer algorithms appears to be a valid and good faith middle ground for processing a plethora of content while still meeting the DMCA’s requirements to somehow consider fair use. . . . For example, consideration of fair use may be sufficient if copyright holders utilize computer programs that automatically identify for takedown notifications content where: (1) the video track matches the video track of a copyrighted work submitted by a content owner; (2) the audio track matches the audio track of that same copyrighted work; and (3) nearly the entirety . . . is comprised of a single copyrighted work. . . . Copyright holders could then employ [humans] to review the minimal remaining content a computer program does not cull.” (Internal citations and quotation marks omitted.)
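
For illustration only, here is a minimal sketch of the kind of three-part check the majority describes; the helper names and the matching threshold are hypothetical, not drawn from the opinion or from any actual monitoring service:

```typescript
// Hypothetical sketch of the "middle ground" check described in the opinion.
// The match results would come from a fingerprint-based content recognition
// system; none of these names belong to any real vendor's API.

interface MatchResult {
  videoMatches: boolean;    // (1) video track matches the registered work
  audioMatches: boolean;    // (2) audio track matches the same work
  matchedFraction: number;  // (3) share of the upload made up of that work, 0..1
}

// Queue for an automated takedown notice only when all three conditions hold;
// everything else falls through to human review, as the court suggests.
function qualifiesForAutomatedTakedown(m: MatchResult, nearEntirety = 0.95): boolean {
  return m.videoMatches && m.audioMatches && m.matchedFraction >= nearEntirety;
}
```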

At the same time, another clue lies in pp. 31-32, in a footnote to Judge Milan Smith’s partial dissent:

“The majority opinion implies that a copyright holder could form a good faith belief that a work was not a fair use by utilizing computer programs that automatically identify possible infringing content. I agree that such programs may be useful in identifying infringing content. However, the record does not disclose whether these programs are currently capable of analyzing fair use. Section 107 specifically enumerates the factors to be considered in analyzing fair use. These include: ‘the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes’; ‘the nature of the copyrighted work’; ‘the amount and substantiality of the portion used in relation to the copyrighted work as a whole’; and ‘the effect of the use upon the potential market for or value of the copyrighted work.’ 17 U.S.C. § 107. For a copyright holder to rely solely on a computer algorithm to form a good faith belief that a work is infringing, that algorithm must be capable of applying the factors enumerated in § 107.”

To comply with this ruling, takedown notices will now presumably have to contain language that describes the copyright holder’s good faith belief that the user who posted the file did not have a fair use right.  That belief can rest on a “subjective” basis, but it cannot come “solely” from a “computer algorithm.”

It is, of course, impossible for any computer algorithm to determine whether a copy of a file was made under fair use; there is no such thing as a “fair use deciding machine.”  But that’s not what’s required here — only evidence that some (unspecified) subset of the four fair use factors was not met, beyond “because I said so.”  Two of the four factors are easy: “the nature of the copyrighted work” ought to be self-evident to the owner of the copyright, and today’s widely used content recognition tools can determine whether “the amount and substantiality of the portion used” was the entire work.  The majority in Lenz suggested that this latter factor “may be sufficient . . . for consideration of fair use.”  Beyond that, the fact that a file appears on a website touting “Free MP3 downloads!” and featuring banner ads could be cited as evidence of an “effect of the use upon the potential market for or value of the copyrighted work” or of “the purpose and character of the use.”

In other words, some of the characterizations of a work as “not fair use” that are often written into lawsuit complaints (drafted by lawyers) may have to find their way into takedown notices (generated automatically by technology).  As a practical matter, copyright monitoring services may want to produce takedown notices with more situation-specific information in order to pass the non-fair-use test — such as characterizations of the online service or other circumstances in which works are found.  This could require a greater number of takedown notice templates and more effort to populate them with specifics before sending them to online services — yet the process still ought to be automatable.
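
As a rough illustration of what such situation-specific notice generation might look like, here is a sketch; the field names and wording are hypothetical, not taken from any actual monitoring service’s templates:

```typescript
// Hypothetical takedown-notice template populated with situation-specific
// facts that support a (subjective) good faith belief of non-fair use.
interface NoticeContext {
  workTitle: string;
  infringingUrl: string;
  matchedFraction: number;  // from content recognition, 0..1
  siteDescription: string;  // e.g. "a site touting 'Free MP3 downloads!' with banner ads"
}

function renderNotice(ctx: NoticeContext): string {
  return [
    `Notice of claimed infringement of "${ctx.workTitle}" at ${ctx.infringingUrl}.`,
    `Content recognition indicates that ${(ctx.matchedFraction * 100).toFixed(0)}% of the ` +
      `file consists of the copyrighted work (amount and substantiality of the portion used).`,
    `The file appears on ${ctx.siteDescription}, which bears on the purpose and character ` +
      `of the use and its effect on the potential market for the work.`,
    `We have a good faith belief that the use is not authorized by the copyright owner, ` +
      `its agent, or the law, including fair use.`,
  ].join('\n');
}
```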

The upshot of the Lenz decision, then, is that copyright holders may have to go to somewhat more effort to generate automated takedown notices under the DMCA that will survive a court challenge.  Just how much more effort, and how much more verbiage in notices, will be a subject for the Lenz trial and future litigation.  But today’s basic paradigm of copyright monitoring services using content recognition algorithms and other technological tools to automate enforcement processes is likely to continue, largely unchanged.

*I had a very similar experience two years ago.  I took a video of my daughter’s dance recital on my smartphone from the audience and posted it on YouTube under a private URL known only to her uncles and grandparents.  UMG issued a takedown notice — on one of the three one-minute-long song samples used in that dance routine.  I tried filing a counter-notice, which UMG denied; so I gave up and emailed the clip to the relatives.  I suspect that no human ever analyzed the clip: the Jennifer Lopez track that UMG complained of was one of two tracks in the routine owned by UMG; the other, a techno track by Basement Jaxx, is one that services like Shazam have a hard time recognizing, so an automated system would plausibly have missed it while a human reviewer would not.

The Myth of DRM-Free Music May 31, 2015

Posted by Bill Rosenblatt in DRM, Music, United States.

The annual IDPF Digital Book conference took place this week in New York, as part of the BookExpo America trade show for the publishing industry.  You can count on two topics being discussed at any book publishing conference: Amazon and DRM.  IDPF Digital Book 2015 was no exception.  One particular panel featured writers from leading book industry trade magazines, and the moderator was Joe Wikert, a well-respected digital publishing executive who is an outspoken opponent of DRM.  The discussion turned to the pros and cons of “walled gardens” such as Amazon’s Kindle ecosystem.  Wikert remarked on how quickly the music industry got rid of DRM and suggested (as he often does) that book publishers should follow.

The usual story is that the music industry went DRM-free in 2009 when Apple completed its removal of DRM from its vast iTunes music catalog.  But how true is that?  Not very, as it turns out.  I’d argue not only that DRM never really went away but that it’s making a comeback.

The first thing to recognize is that downloaded files are the only mode of digital music delivery in which the music isn’t encrypted.  All on-demand music services (Spotify, Rhapsody, Google Play Music, Beats Music, etc.) encrypt streams as well as music tracks that users download for “offline listening.”  And all forms of digital radio — Internet (Pandora), Satellite (SiriusXM), and digital TV (Music Choice) — are encrypted.

These modes of delivery are now more popular than download purchases.  Music download sales peaked in 2012 and have since gone into sharp decline.  Based on publicly available subscribership figures and studies such as Edison Research and Triton Digital’s The Infinite Dial, I estimate total U.S. active monthly listenership to on-demand music services at 60-70 million.  That counts the use of YouTube as a de facto on-demand music service, which The Infinite Dial estimates at more than four times the usage of Spotify.  (The study says that 73% of YouTube music users don’t even watch the videos but just listen.)  Internet radio, led by Pandora and iHeartRadio, has well north of 100 million active listeners, while over 27 million people subscribe to SiriusXM.  All of these numbers are growing steadily.

How many people purchase digital downloads?  About 40 million in a year (based on 2013 research from NPD), and declining.  Of course, once someone has downloaded a file, she can play it any number of times, but the number of download buyers is a reasonable measure of active users of the purchased-download model.  So it’s safe to say that the number of people who legally obtain music through a DRM-free model is much lower than the number who get it through a model in which the music is encrypted.  To be more precise, download purchasers make up only about 18% of the total digital music audience.  (That’s a lower bound, assuming no overlap among the groups, and it doesn’t include satellite or digital TV radio.)
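
For what it’s worth, here is a rough reconstruction of how that lower bound can be derived from the figures above; the exact inputs are estimates (I’ve assumed a midpoint of 65 million for on-demand and 120 million for Internet radio):

```typescript
// Back-of-the-envelope reconstruction of the ~18% lower bound.
const downloadBuyers = 40;     // millions of annual purchasers (NPD, 2013)
const onDemand = 65;           // millions, midpoint of the 60-70 million estimate
const internetRadio = 120;     // millions, "well north of 100 million" (assumed)

// Assume no overlap among the groups and ignore satellite and digital TV radio,
// both of which make the resulting share a lower bound.
const total = downloadBuyers + onDemand + internetRadio;
console.log(`Download buyers: ${((downloadBuyers / total) * 100).toFixed(0)}%`); // ≈ 18%
```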

More recent research by GlobalWebIndex reinforces the trend.  The firm’s Q1 2015 survey of teenagers around the world shows that while 60% used a streaming service during the last month, only 21% purchased a music download.

Compare this to the numbers in 2008, at the end of the supposed “DRM era.”  At that time, iTunes represented about three-quarters of the music download market.  At least 10 million people in the U.S. purchased music on iTunes in a given month, meaning that over 13 million purchased music downloads from any source.  Internet radio had perhaps 7 million active users, and on-demand services had fewer than 2 million subscribers.  In other words, download purchasers accounted for at least 60% of all Internet music users in 2008 — more than triple today’s share.  (These are all rough estimates; email me to find out how I calculated them.)

An even better way of measuring the percentage of digital music delivered with and without encryption is the revenue that record labels get from music through these various modalities.  For this, we turn to numbers compiled by the RIAA.  Here’s what they tell us:

[Chart: Percentage of recorded music industry digital revenue from DRM-free vs. encrypted delivery modalities.  Source: RIAA.]

This chart shows the percentages of total digital recorded music revenues that come from DRM-free vs. encrypted modalities.  DRM-Free includes downloaded singles, downloaded albums, kiosk sales, and ringtones (even though some of the latter may be DRM-protected).  Encrypted categories include SoundExchange distributions (all forms of digital radio), Paid Subscriptions (paid on-demand services and premium Internet radio such as Pandora One and Rhapsody unRadio), and Ad-Supported On-Demand Streaming (YouTube, Vevo, Spotify Free).

The chart starts in 2009, when iTunes went fully DRM-free.  2008 was a transitional year as two of the major labels (first EMI, then UMG) began to go DRM-free on iTunes, and Amazon launched its completely DRM-free MP3 store.  Before then, perhaps 5% of digital music revenue was from DRM-free sources.

The trend began to reverse in 2011, when Spotify launched in the U.S. and the major labels completed deals with YouTube under which they allow most of their material to be shown in exchange for a share of ad revenue, resulting in “hockey stick” growth in on-demand listenership that continues to this day.

From this data it’s fair to predict that the lines will cross, and that encrypted modalities will account for the majority of recorded digital music revenue by 2016.

As a footnote, the fastest-growing category of recorded music revenue is neither on-demand nor Internet radio; it’s vinyl.  Vinyl has come back from near death in 2010; its revenue growth is at 50% per year and accelerating; it now contributes more revenue to record labels than YouTube.  Now here’s a rather metaphysical question: should we count vinyl records as DRM-free or not?  You decide, but look here first.

European Commission Issues Communication on Digital Single Market May 9, 2015

Posted by Bill Rosenblatt in Europe, Law.

The European Commission this past week published the first in a series of documents that mark its progress toward the objective of creating a Digital Single Market for all EU member states.  The 20-page Communication, published last Wednesday, lays out a series of steps that Brussels will take over the next two years, including “[l]egislative proposals for a reform of the copyright regime” starting this calendar year.

The Communication describes two particular areas of focus within the realm of copyright.  Both reflect input from lobbyists on both sides of the issue — the media and tech industries — and show that the EC wants to examine factors that trade off the concerns of both sides.

One area of focus is a pain point for anyone trying to launch a digital media business in Europe: the lack of cross-border licensing opportunities and content portability.  A would-be digital media distributor often has to negotiate a separate content license in each country in which it wants to operate.  This process is especially painful for startups with limited resources.  As a result, smaller countries in particular get legitimate content services years later than larger ones, if they get them at all.

This leads to a related problem: the use of geoblocking technology to confine services to single countries where it isn’t strictly necessary.  Residents of smaller countries often resort to virtual private networks (VPNs) to obtain fake IP addresses in countries in which licensed services are offered.  The EC is looking to streamline cross-border licensing and make it easier to carry services accessed on portable devices across borders; at the same time, it hopes to eliminate unnecessary geoblocking.

Cross-border licensing within Europe has been an issue for many years, certainly since I consulted for the EC’s Media and Information Society Directorate in the mid-to-late 2000s.  There are two obstacles to getting such schemes enacted throughout Europe.  The first is that regulators in Brussels don’t hear about it very much: the companies that can afford to send lobbyists to Brussels (and join trade associations) tend also to be able to afford enough lawyers to handle licensing in all the member states — and to benefit from existing consumer demand, because their services are already known.  In other words, startups tend to be shut out of this discussion.

The second problem is that smaller countries’ culture ministries see pan-European licensing as counterproductive.  Their job is to promote their countries’ local content, yet cross-border licensing often makes it easier for bigger EU member states (not to mention the US), with their better-known content, to license content into smaller markets than for those markets to license their own content out.  The focus on reducing the use of geoblocking ought to help ameliorate this concern, because it steers consumers’ money toward local rights holders instead of VPN operators.

The other major area of focus is to address online infringement.  The EC is looking at two particular approaches to this.  One is a “follow the money” approach to enforcement that focuses on “commercial-scale infringements,” presumably as opposed to small-scale activities by individuals.  The other is “clarifying the rules on the activities of intermediaries in relation to copyright-protected content.”  Although the language in the Communication is high-level and non-specific, the focus is not likely to drift far beyond the existing safe harbors for online service providers that arise out of the EU e-Commerce Directive, which are roughly equivalent to the DMCA in the US.

Here again, the Communication reflects tradeoffs between the concerns of the content and tech industries.  The EC wants to look at “new measures to tackle illegal content on the Internet, with due regard to their impact on the fundamental right to freedom of expression and information, such as rigorous procedures for removing illegal content while avoiding the take down of legal content[.]”  In other words, the EC probably wants to focus primarily on notice and takedown, to explore ways to resolve the tension between, on the one hand, the media industry’s concerns about overly onerous notice requirements and the lack of “takedown and staydown,” and on the other hand, online service providers’ concerns about abuse of the notice process.

The Commission expects to begin assessments of these areas this year.  This Communication is the start of a multi-year journey toward any actual changes in national laws, and the first glimpse of the parameters of those changes.  Communications are not legally binding; they lead to proposals for new laws and changes in existing laws.  The next step is one or more Directives, which are legally binding on EU member states, though member states are free to implement them in ways that make sense within their own bodies of laws — a process that itself can take several years.  Unifying several aspects of digital life in Europe through this long and complex process will be a challenge, to say the least.

Mayweather-Pacquiao Fight Exposes Live Event Streaming Piracy Challenges May 6, 2015

Posted by Bill Rosenblatt in Fingerprinting, Video, Watermarking.

Piracy of live-streamed sports events ceased to be “inside baseball” (pun intended) for the media industry last weekend with HBO’s broadcast of the Floyd Mayweather-Manny Pacquiao boxing match in the US market.  Even in the mainstream media (such as here and here), it seems that the public’s ability to watch the fight online for free in close to real time got more attention than the fight itself.

This is why protection of live sports event streams is a growth area in the field of anti-piracy technology today.  Broadcasters like HBO pay huge sums of money for exclusive rights to live sports; therefore they have big incentives to protect the streams from infringement.  Recent articles in re/code and Mashable attempted — with limited success — to explain how HBO’s stream was massively pirated and how that piracy could possibly have been curtailed.

Both articles focused on the many pirated streams of the fight that were available on the Periscope app, which allows users to broadcast video in real time from their iOS devices, and is owned by Twitter.  As Peter Kafka at re/code explained (accurately enough), it’s not possible to use fingerprint-based systems like Google’s Content ID with live event streams.  Such systems depend on a service provider getting a copy of the content in advance so that it can take a “fingerprint” — a shorthand numerical representation of it — and use that to flag attempted user uploads of the same content later.  By definition, no advance copy of a live event exists, so fingerprinting can’t be used.

Furthermore, just because a single service uses fingerprinting to block unauthorized uploads doesn’t mean that other services do.  YouTube might block an upload thanks to Content ID, but that doesn’t prevent a user from putting the same file up on BitTorrent or a cyberlocker.

However, it is possible to use watermarks to flag content.  HBO could insert watermarks into the live video as it goes out the door.  Watermarks are much more efficient to detect and calculate than fingerprints, and a well-designed watermark can be detected even if the content is “camcorded” from a TV screen.

Two things can happen with watermarks.  First, a cooperating service could agree to detect the watermark and block the content — or do something else, such as allow the content through, play an ad, and share the revenue with the rights holder, as Google does with Content ID.  Second, a piracy monitoring service could detect watermarks of streams out in the wild (including on Periscope) and rapidly serve takedown notices on the services that are distributing the unauthorized streams, meaning that the services need not do anything proactive.

Given what Christina Warren at Mashable experienced (camcorded streams appearing on Periscope and then disappearing later), the latter probably happened.  Several streaming providers and anti-piracy services use watermarks to aid detection of unauthorized copies of live streams.  In the Caribbean market, for example, Netherlands-based pay-TV platform provider Cleeng carried the pay-per-view broadcast of the fight for Sportsmax TV, and it’s likely that Cleeng used its live-stream watermarking technology to protect the content.  (Another anti-piracy provider, Irdeto, has similar technology but admitted to Bloomberg that it wasn’t working on the fight.  That leaves Friend MTS as my guess for the provider that monitored the fight in other geographies such as Europe and North America.)

It is also possible to automate the process more fully by embedding so-called session-based watermarks that contain identifiers for the user accounts or devices that are receiving the content legally — such as set-top boxes receiving HBO over cable or satellite services.  Session-based watermarks are used today with movies released in early windows in high definition, and Hollywood would like them to be used in all 4K/UHD movie distributions.

With session-based watermarks, a monitoring service can (in many cases) determine the device from which the unauthorized stream originated and inform the pay-TV provider, which can then shut off the signal to that device. The entire process would require no human intervention and take just a few seconds.
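
Conceptually, that automated loop might look something like the sketch below; the watermark extraction and pay-TV provider interfaces are hypothetical stand-ins, not any vendor’s actual API:

```typescript
// Conceptual sketch of session-based watermark enforcement for a live stream.
// All names here are hypothetical; real systems differ in the details.

interface PayTvProvider {
  lookUpDevice(sessionId: string): { subscriberId: string; deviceId: string } | null;
  revokeStream(deviceId: string): Promise<void>;
}

// Stand-in for a watermark detector that recovers the session identifier
// embedded in frames captured from an unauthorized stream.
function extractSessionId(capturedFrames: Uint8Array): string | null {
  // ...watermark detection would go here...
  return null;
}

async function handleUnauthorizedStream(frames: Uint8Array, provider: PayTvProvider) {
  const sessionId = extractSessionId(frames);
  if (!sessionId) return;                      // no watermark found: fall back to a takedown notice
  const device = provider.lookUpDevice(sessionId);
  if (device) await provider.revokeStream(device.deviceId);  // cut off the originating device
}
```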

But with Periscope-style camcording, this could lead to the following interesting situation: Alice invites some friends over to watch the fight on her big-screen TV and pays the $100 fee to HBO through her cable company.  Everyone sits down, and the fight starts.  Bob pulls out his iPhone and fires up Periscope.  A few seconds later, the TV goes blank or displays a warning message about possible copyright infringement.  Alice calls her cable company and finds herself on hold, waiting behind the hundreds or thousands of other furious customers to whom the same thing happened.

This is why I don’t believe HBO will be able to require session-based watermarking to protect its live events through pay-TV providers.  The situation with live sports is different from early-window HD movies: movies have already been in theaters (where they have been camcorded), and users value the timeliness of Periscope-style camcords of live events enough to overlook their often questionable quality.

What also clearly did not happen is that HBO made a deal with Twitter to detect the watermarks and block the live Periscope streams.  As both the Mashable and re/code articles note, Twitter/Periscope experienced a ton of traffic before, during, and after the event, much of which was “second-screen” in nature, such as commentary on the fight and the fighters.  Yet Google’s Content ID showed that a service provider can be willing to detect copyrighted material proactively if given sufficient incentive.  If the likes of HBO can find such incentives — cross-promotion, ad revenue share, or something else — then the Periscopes of the world might be inclined to follow in Google’s footsteps.

E-Books: Subscription Services vs. Libraries April 30, 2015

Posted by Bill Rosenblatt in Publishing, Rights Licensing, Services, United States.

My latest article on Forbes is a look into the future of subscription e-book services.  Oyster, a New York-based startup that offers a Netflix-like e-book subscription service, recently launched a “standard” e-book retail site with titles from all major and many indie trade publishers.  In the Forbes piece, I wondered whether this might be a sign that subscription e-book services will be slow to catch on — particularly since none of the three subscription e-book providers in the United States market (Scribd and Amazon as well as Oyster) has licenses to content from all of the “Big Five” trade publishers.  In addition, a comparison of sales figures for e-books and digital music suggests that consumers are, at least for now, more interested in “owning” e-books than music downloads.

Then I got into an online debate with Ava Seave, a noted media industry strategy consultant, adjunct professor at Columbia Business School, and co-author of an excellent book about the media industry called The Curse of the Mogul.  She raised an important point that I hadn’t considered: aren’t subscription e-book services hemmed in by competition from public libraries?

This led me to look at the latest data on Big Five trade publishers’ policies on e-book library lending.  It turns out that they have liberalized their policies over the past couple of years.  Now all of the Big Five license at least some of their frontlist titles for library e-lending, though most of them still impose restrictions on license durations (such as one year or up to 26 loans).  In other words, the quiet war that has been going on between major trade publishers and public libraries for years has simmered down.

This means that public libraries could indeed be serious competition for subscription e-book services.  (Subscription music services like Spotify, Rhapsody, and Beats Music don’t have this problem, because public libraries typically do not have large music collections for lending, nor do most people think of libraries as a place to discover recorded music.)

For publishers, public libraries and subscription services are best understood as revenue sources with two different models.  Here is a table that summarizes the differences:

Category | Public Libraries | E-Book Subscription Services
Big Five catalogs | Most frontlist and backlist, according to library acquisitions | 3 of 5 majors (Scribd & Oyster), backlist only, limited titles
Revenue | Fixed, up to 3-4x consumer hardcover price | Royalty per user read, plus analytics data
License volume | Per unit; libraries license N units each | Unlimited units
License term | Often limited to 1-2 years or 26 loans | N/A

Public libraries thus have bigger catalogs potentially available for e-lending, but individual libraries must “acquire” individual titles.  They can acquire as many as they want, according to their budgets and the number of “copies” that they predict will satisfy their patrons without excessive waiting lists, but they must pay prices that range up to four times the consumer hardcover price.  Subscription services get access to thousands of titles in unlimited “quantities,” but only — at this time — from limited backlist selections and not at all from Hachette or Penguin Random House (and Amazon’s Kindle Unlimited has no Big Five titles at all).  On the other hand, subscription services can offer publishers lots of data about subscribers’ reading habits — data that’s unavailable to publishers from libraries, often by law.

The behavior that Big Five publishers have exhibited towards both public libraries and online services suggests that they will move slowly and gradually as these models take hold with the public.  Publishers haven’t licensed their material to startups unless and until they have amassed large audiences; Scribd, for example, had grown to 80 million visitors when it finally got a deal with HarperCollins.  Similarly, publishers have been slow and deliberate in licensing material to libraries through library platform providers such as OverDrive, 3M, and Baker & Taylor, which have been operational as far back as the early 2000s.

Contrast this with subscription music services: the first independent subscription music service was Rhapsody, which took less than two years to sign all of the (then) five major labels in 2002; and nowadays it would be commercial suicide for a digital music service to launch with anything less than all major-label content, minus the few remaining digital holdouts.

When viewed in this light, trade publishers’ dealings with public libraries don’t look like the kind of moral or public policy issue that many in the library community would like to portray them as (though there certainly are public policy implications, not least of which are libraries’ First Sale rights to lend material).  Instead, the future of e-book access through both libraries and subscription services will depend on their effects on publishers’ bottom lines.

P.S. I predict that this competition will lead e-book subscription service providers to push for “freemium” models, à la Spotify or Hulu; some indie publishers might agree to this, as they have already done with Amazon’s Kindle Owners’ Lending Library, but the Big Five are likely to resist mightily.  For one thing, this would require advertising revenue, an idea that has gotten severely limited traction in the world of e-books.

New White Paper and NAB Workshop: Strategies for Secure OTT Video in a Multiscreen World March 22, 2015

Posted by Bill Rosenblatt in DRM, Events, Standards, Technologies, White Papers.

I have just released a new white paper called Strategies for Secure OTT Video in a Multiscreen World.  The paper covers the emerging world of multi-platform over-the-top (OTT) video applications and how to manage their development for maximum flexibility and cost containment amid a constantly expanding range of devices and user expectations of “any time, any device, anywhere.”  It’s available for download here.

The key technologies that the white paper focuses on are adaptive bitrate streaming (MPEG DASH as an emerging standard) and the integration of Hollywood-approved DRM schemes with HTML5 through Common Encryption (CENC) and Encrypted Media Extensions (EME).

It is becoming possible to integrate DRM with browser-based apps in a way that minimizes native code and avoids plug-in schemes like Microsoft’s Silverlight.  Yet the HTML5 EME specification creates dependencies between browsers and DRMs, so that — at least in the near future — it will often only be possible to integrate a DRM with a browser from the same vendor: for example, Google’s Widevine DRM with the Chrome browser, or Microsoft PlayReady with Internet Explorer.  So while the future points to consolidation around web app technologies and adaptive bitrate streaming, the DRM and browser markets will continue to be fragmented.  In other words, to be able to offer premium movie and TV content, service providers will need to support multiple DRMs for the foreseeable future.
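
As a minimal sketch of what this fragmentation means in practice, a browser app can probe for whichever key system the current browser supports via the standard EME call; the key system strings below are the usual identifiers for Widevine and PlayReady, though the capability configuration here is simplified:

```typescript
// Minimal EME sketch: probe for whichever DRM the current browser supports.
// Runs in a browser; the configuration is simplified for illustration.
const keySystems = [
  'com.widevine.alpha',      // Widevine (e.g. Chrome)
  'com.microsoft.playready', // PlayReady (e.g. Internet Explorer/Edge)
];

const config: MediaKeySystemConfiguration[] = [{
  initDataTypes: ['cenc'],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  audioCapabilities: [{ contentType: 'audio/mp4; codecs="mp4a.40.2"' }],
}];

async function selectDrm(): Promise<MediaKeySystemAccess | null> {
  for (const ks of keySystems) {
    try {
      // Resolves only if this browser can use this key system with this config.
      return await navigator.requestMediaKeySystemAccess(ks, config);
    } catch {
      // Not supported here; try the next key system.
    }
  }
  return null; // no supported DRM: fall back or show an error
}
```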

The white paper lays out a high-level solution architecture that OTT service providers can use to take as much advantage as possible of current and emerging standards while isolating and minimizing the sources of technical complexity that are likely to persist for a while.  It calls for standardizing on adaptive bitrate streaming and app development technologies while allowing for and containing the complexities around browsers and DRMs.

Many thanks to Irdeto for commissioning this white paper.   In addition, Irdeto and I will present a workshop at the NAB trade show on Tuesday, April 14 at 1pm, at The Wynn in Las Vegas.  I’ll give a presentation that summarizes the white paper; then I’ll moderate a panel discussion with the following distinguished guests:

  • Dave Belt, Principal Architect, Time Warner Cable
  • Jean Choquette, Director of Multiplatform Video on Demand, Videotron
  • Shawn Michels, Senior Product Manager, Akamai
  • Richard Frankland, VP Americas, Irdeto

This session will include ample opportunities for Q&A and sharing of experiences and best practices, as well as a catered lunch and opportunities to network with your peers and colleagues.  Attendance at this event is strictly limited and by invitation only, to ensure the richest possible interaction among participants.  If you are interested in attending, please email Katherine.Walsh@irdeto.com by April 7th.  Irdeto will even give you a ride from the Las Vegas Convention Center and back if you wish.

USPTO Public Meeting on Identifiers for Automating Content Licensing March 17, 2015

Posted by Bill Rosenblatt in Events, Rights Licensing, Standards, United States.

The U.S. Patent and Trademark Office (USPTO) is holding a public meeting on Wednesday, April 1 to gather input on how the U.S. Government can facilitate the development and use of standard content identifiers as part of the process of creating an automated licensing hub, along the lines of the Copyright Hub in the UK.

This meeting is the second that the USPTO has held since it and the National Telecommunications and Information Administration (NTIA) published the “Green Paper” on Copyright Policy, Creativity, and Innovation in the Digital Economy in July 2013.  The first meeting, in December 2013, addressed several other topics in addition to this one.

(For those of you who are wondering why the USPTO is dealing with copyright issues: the USPTO is the adviser on all intellectual property issues, including copyright, to the Executive Branch of government, i.e., the president and his cabinet.  The U.S. Copyright Office performs an analogous function for the Legislative Branch, i.e., Congress.)

The April 1 meeting will focus tightly on issues of standard identifiers for content — which ones exist today, how they are used, how they are relevant to automation of rights licensing, and so on.  It will also focus on specifics of the UK Copyright Hub and the feasibility of building a similar one here in the States.

As usual for such gatherings, all are welcome to attend, the meeting will be live-streamed, and a transcript will be available afterwards.  It’s just unfortunate that notice of the meeting was only published in the Federal Register last Friday, less than three weeks before the meeting date.  I was asked to suggest panelists on the subjects of content identifiers and content identification technology (such as fingerprinting and watermarking).  There are several experts on these topics who would undoubtedly add much value to such discussions, but many of them — located in places from LA to the UK — would be unable to travel to Washington, DC on such short notice and possibly on their own nickels.  It would be nice to get input on this very timely topic from more than just the “usual suspects” inside the Beltway.

Establishment of reliable, reasonably complete online databases of rights holder information is of vital importance for making licensing easier in an increasingly complex digital age, and it’s encouraging to see the government take an active role in determining how best to get it done and looking at working systems in other countries that are further ahead in the process.  That’s why it’s especially crucial to get as much expert input as possible at this stage.

Perhaps the USPTO can do what it did for the December 2013 meeting: reschedule it for several weeks later.  If you are interested in participating but can’t do so at such short notice (as is the case with me), then you might want to communicate this to the meeting organizers at the PTO.  Otherwise, the usual practice is to invite post-meeting comments in writing.

Announcing Copyright and Technology London 2015 March 13, 2015

Posted by Bill Rosenblatt in Europe, Events, UK.

For our fourth Copyright and Technology conference in London, we will be moving up from October to June — Thursday June 18, to be precise.  The venue will be the same as in previous years: the offices of ReedSmith in the City of London, featuring floor-to-ceiling 360-degree views of greater London.  Once again I’ll be working with Music Ally to produce the event.

At this point, we are soliciting ideas for panels and keynote speakers.  What are the developments in copyright law for the digital age in the UK, Europe, and the rest of the world that would make for great discussion?  Who are the most influential people in copyright today whom you would like to see as keynote speakers — or are you one of these yourself?

We’re considering various possible topics, including these:

  • Implications of the “Blurred Lines” decision for copyright in the age of sampling and remix culture
  • The use of digital watermarking throughout the media value chain
  • Progress of the UK Copyright Hub, Linked Content Coalition, and other initiatives for centralizing copyright information online
  • Content protection technologies for browser-based over-the-top streaming video
  • Progress of graduated response schemes in France, UK, Ireland, and elsewhere

Please send me your ideas.  It’s your chance to tell us what you want to hear about and what you’d be interested in speaking on!  We intend to publish an agenda by the end of this month.

