New White Paper and NAB Workshop: Strategies for Secure OTT Video in a Multiscreen World
March 22, 2015. Posted by Bill Rosenblatt in Technologies, Standards, DRM, Events, White Papers.
I have just released a new white paper called Strategies for Secure OTT Video in a Multiscreen World. The paper covers the emerging world of multi-platform over-the-top (OTT) video applications and how to manage their development for maximum flexibility and cost containment in today’s world of constantly expanding devices and user expectations of “any time, any device, anywhere.” It’s available for download here.
The key technologies that the white paper focuses on are adaptive bitrate streaming (MPEG DASH as an emerging standard) and the integration of Hollywood-approved DRM schemes with HTML5 through Common Encryption (CENC) and Encrypted Media Extensions (EME).
It is becoming possible to integrate DRM with browser-based apps in a way that minimizes native code and without resorting to plug-in schemes like Microsoft’s Silverlight. Yet the HTML5 EME specification creates dependencies between browsers and DRMs, so that — at least in the near future — it will only be possible in many cases to integrate a DRM with a browser from the same vendor: for example, Google’s Widevine DRM with the Chrome browser or Microsoft PlayReady with Internet Explorer. In other words, while the future points to consolidation around web app technologies and adaptive bitrate streaming, the DRM and browser markets will continue to be fragmented. As a result, to be able to offer premium movie and TV content, service providers will need to support multiple DRMs for the foreseeable future.
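To make the fragmentation concrete, here is a minimal sketch of the selection logic a multi-DRM service might run before requesting a license. The browser-to-key-system mapping reflects the vendor pairings described above; the function name and mapping are illustrative, not any particular vendor's API, though the key system identifier strings are the real reverse-domain names EME uses.

```python
# Each browser ships its own content decryption module, identified in EME
# by a reverse-domain "key system" string. A service that wants to reach
# all browsers must therefore integrate (and license) multiple DRMs.
BROWSER_KEY_SYSTEMS = {
    "chrome": "com.widevine.alpha",                  # Google Widevine
    "internet explorer": "com.microsoft.playready",  # Microsoft PlayReady
}

def pick_key_system(browser: str) -> str:
    """Return the EME key system string a given browser is likely to support."""
    try:
        return BROWSER_KEY_SYSTEMS[browser.lower()]
    except KeyError:
        raise ValueError(f"no known DRM pairing for browser: {browser}")
```

In a real web app this decision is made client-side by probing `navigator.requestMediaKeySystemAccess` with each candidate key system; the point of the sketch is simply that the answer differs per browser, so the back end must package licenses for several DRMs against one CENC-encrypted asset.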
The white paper lays out a high-level solution architecture that OTT service providers can use to take as much advantage as possible of current and emerging standards while isolating and minimizing the sources of technical complexity that are likely to persist for a while. It calls for standardizing on adaptive bitrate streaming and app development technologies while allowing for and containing the complexities around browsers and DRMs.
Many thanks to Irdeto for commissioning this white paper. In addition, Irdeto and I will present a workshop at the NAB trade show on Tuesday, April 14 at 1pm, at The Wynn in Las Vegas. I’ll give a presentation that summarizes the white paper; then I’ll moderate a panel discussion with the following distinguished guests:
- Dave Belt, Principal Architect, Time Warner Cable
- Jean Choquette, Director of Multiplatform Video on Demand, Videotron
- Shawn Michels, Senior Product Manager, Akamai
- Richard Frankland, VP Americas, Irdeto
This session will include ample opportunities for Q&A, sharing of experiences and best practices, as well as a catered lunch and opportunities to network with your peers and colleagues. Attendance at this event is strictly limited and invitation-only to ensure the richest possible interaction among participants. If you are interested in attending, please email Katherine.Walsh@irdeto.com by April 7th. Irdeto will even give you a ride from the Las Vegas Convention Center and back if you wish.
I’ve just published another piece in Forbes in my series on the emerging market for “high-res” audio, reflecting the recent surge in activity in this space as both the major record labels and consumer electronics companies see opportunity in expanding the market for high-quality digital audio beyond the audiophile niche. This piece is about new codec technologies — an area that hasn’t seen much innovation since a decade ago. As always, your feedback is most welcome.
As a postscript to that piece, it continues to amaze me — in a positive way — that vinyl is making such a comeback. Our favorite indie music store in Western Massachusetts recently got rid of all of its CDs and is now selling vinyl exclusively. Even Barnes & Noble is now selling a small, mostly highbrow selection of vinyl LPs. Most amazing of all? They’re flying off the shelves at an eye-opening $22 apiece. And everyone used to complain about the $16 CD — which didn’t scratch, took up less space, was easier to play, etc., etc.
Adobe’s Latest E-Book Misstep: This Time, It’s Not the DRM
October 10, 2014. Posted by Bill Rosenblatt in DRM, Publishing, Technologies.
A few days ago, it emerged that the latest version of Adobe’s e-book reading software for PCs and Macs, Adobe Digital Editions 4 (ADE4), collects data about users’ reading activities and sends them to Adobe’s servers in unencrypted cleartext, so that anyone can intercept and use the data, even without NSA-grade snooping tools.
The story was broken by Nate Hoffelder at The Digital Reader on Monday. The Internet being the Internet, the techblogosphere was soon full of stories about it, mostly half-baked analysis, knee-jerk opinions, jumped-to conclusions, and just plain misinformation. Even the usually thorough and reliable Ars Technica, the first to publish serious technical analysis, didn’t quite get it right. As of this writing, the best summary comes from the respected library technologist Eric Hellman.
More actual facts about this sorry case will emerge in the coming days, no doubt, leading to a fully clear picture of what Adobe is doing and why. My purpose here and now is to address the various accusations that this latest e-book gaffe by Adobe has to do with its DRM. These include a gun-jumping post by the Electronic Frontier Foundation (EFF) that has inadvertently dragged Sony DADC, the division of Sony that is currently marketing a DRM solution for e-books, into the mess undeservedly.
Let’s start with the basics: ADE4 does collect information about users’ reading activities and transmit it in the clear. This is just plain unacceptable; no matter what Adobe’s terms and conditions might say, it’s a breach of privacy and trust, and (as I’ll discuss later) it seems like a strange fit to Adobe’s role in the e-book ecosystem. Whether it’s naivete, sloppiness, or both, it’s redolent of Adobe’s missteps in its release of the latest version of its e-book DRM at the beginning of this year.
But is ADE4’s data reporting part of the DRM, as various people have suggested? No.
The reporting on this story to date has missed one small but important fact, which I suspected and then confirmed with a well-placed source yesterday: ADE4 reports data on all EPUB format files, whether or not they are DRM-encrypted. The DRM client (Adobe RMSDK) is completely separate from the reporting scheme. By analogy, this would be like Apple collecting data on users’ music and movie playing habits from their iTunes software, even though Apple’s music files are DRM-free (though movies are not).
Some savvier writers have pointed out that even though DRM may not be directly involved, this is what happens when users are forced to use media rendering software that’s part of a DRM-based ecosystem. This is a fair point, but in this particular case it’s not really true. (It would be more true in the case of Amazon, which forces people to use its e-reading devices and apps, and unquestionably collects data on users’ reading behaviors – although it encrypts the information.)
Unlike the Kindle ecosystem, users aren’t forced to use ADE4; it’s one of several e-reader software packages available that reads EPUB files that are encrypted with Adobe’s Content Server DRM. None of the major e-book retailers use or require it, at least not in the United States. Instead, it is most often used to read e-books that are borrowed from public libraries using e-lending platforms such as OverDrive; and in fact such libraries recommend and link to Digital Editions on their websites.
But other e-reader apps, such as the increasingly popular BlueFire Reader for Android, iOS, and Windows, will work just as well in reading e-books encrypted with Adobe’s DRM, as well as DRM-free EPUB files. BlueFire (who can blame them?) sees the opportunity here and points out that it does not do this type of data collection. Users of library e-lending systems can use BlueFire or other apps instead of ADE4. Earlier versions of ADE also don’t collect and report reading data.
A larger question is why Adobe collects this data in the first place. The usual reason for collecting users’ reading (or listening or viewing) data is for analytics purposes, to help content owners determine what’s popular and hone their marketing strategies. Yet not only is Adobe not an e-book retailer, but e-book retailers that use its DRM (such as Barnes & Noble) don’t use Digital Editions as their client software.
One possible explanation is that Adobe is expecting to market ADE4 as part of its new DRM ecosystem that’s oriented towards the academic and educational publishing markets, and that it expects the data to be attractive to publishers in those market segments (as opposed to the trade books typically found in public libraries). Eric Hellman suggests another plausible explanation: that it collects data not for analytics purposes but to support a device-syncing feature that all of the major e-book retailers already offer — so that users can automatically get their e-books on all of their devices and have each device sync to the last page that the user read in each book.
Regardless of the reason, it seems unsettling when a platform software vendor, as opposed to an actual retailer, collects this type of information. Here’s another analogy: various video websites use Microsoft’s Silverlight web application environment. Silverlight contains a version of Microsoft’s PlayReady DRM. Users don’t see the Microsoft brand; instead they see brands like Netflix that use the technology. Users might expect Netflix to collect information about their viewing habits (provided that Netflix treated the information appropriately), but they would be concerned to hear (in a vacuum) that Microsoft does it; and in fact Microsoft probably does contribute to the collection of viewing information for Netflix and other Silverlight users.
In any case, Adobe can fix the situation easily enough by encrypting the data (e.g., via SSL), providing a user option in Digital Editions to turn off the data collection, and offering better explanations as to why it collects the data in the first place (at least better than the ambiguous, anodyne, PR/legal department-buffed one shown here). Until then, platform providers like OverDrive can link to other reader apps, like BlueFire, instead of to Adobe Digital Editions.
Finally, as for Sony DADC: the EFF’s web page on this situation contains a link, as a “related case,” to material on a previous technical fiasco involving Sony BMG Music, one of the major recording companies in the mid-2000s. At that time, Sony BMG released some albums on CDs that had been outfitted with a form of DRM. When a user put the disc in a CD drive on a PC, an “autorun” executable installed a DRM client onto the PC, part of which was a “rootkit” that enabled viruses. After a firestorm of negative publicity that the EFF spearheaded, Sony BMG abandoned the technology. (In one of its more savvy gambits, the EFF used momentum from that episode to cause other major labels to drop their CD DRMs as well; the technology was dead in the water by 2008.) In this case, unlike with Adobe, the problem was most definitely in the DRM.
Apparently some people think that because this incident involved “Sony,” Sony DADC — which is currently marketing an e-book DRM solution based on the Marlin DRM technology — was involved. Not true; the DRM that installed the rootkit came from a British company called First4Internet (F4I). Not only did Sony DADC have nothing to do with this (as I have confirmed), but Sony DADC actually advised Sony Music against using the F4I technology.
Digimarc Launches Social DRM for E-books
September 17, 2014. Posted by Bill Rosenblatt in Fingerprinting, Publishing, Technologies.
Digimarc, the leading supplier of watermarking technology, announced this week the release of Digimarc Guardian Watermarking for Publishing, a transactional watermarking (a/k/a “social DRM”) scheme that complements its Guardian piracy monitoring service. Launch customers include the “big five” trade publisher HarperCollins, a division of News Corp., and the e-book supply chain company LibreDigital, a division of the printing giant RR Donnelley that distributes e-books for HarperCollins in the US.
With this development, Digimarc finally realizes the synergies inherent in its acquisition of Attributor almost two years ago. Digimarc’s roots are in digital image watermarking, and it has expanded into watermarking technology for music and other media types. Attributor’s original business was piracy monitoring for publishers via a form of fingerprinting — crawling the web in search of snippets of copyrighted text materials submitted by publisher customers.
One of the shortcomings in Attributor’s piracy monitoring technology was the difficulty in determining whether a piece of text that it found online was legitimately licensed or, if not, whether it was likely to be a fair use copy. Attributor could use certain cues from surrounding text or HTML to help make these determinations, but those were educated guesses, not infallible.
The practical difference between fingerprinting and watermarking is that watermarking requires the publisher to insert something into its material that can be detected later, while fingerprinting doesn’t. But watermarking has two advantages over fingerprinting. One is that it provides a virtually unambiguous signal that the content was lifted wholesale from its source; thus a copy of content with a watermark is more likely to be infringing. The other is that while fingerprinting can be used to determine the identity of the content, watermarking can be used to embed any data at all into it (up to a size limit) — including data about the identity of the user who purchased the file.
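The fingerprinting side of that distinction can be sketched in a few lines. This is not Attributor's algorithm, just a common textbook technique (hashing overlapping word "shingles") that illustrates how a crawler can match a snippet of copyrighted text without the publisher having inserted anything into it.

```python
import hashlib

def fingerprint(text: str, shingle_size: int = 5) -> set:
    """Hash overlapping runs of words ("shingles") so that a copy can be
    matched even when only a snippet of the original is reused."""
    words = text.lower().split()
    shingles = (
        " ".join(words[i:i + shingle_size])
        for i in range(max(1, len(words) - shingle_size + 1))
    )
    return {hashlib.sha1(s.encode()).hexdigest() for s in shingles}

def overlap(a: str, b: str) -> float:
    """Fraction of a's shingles that also appear in b; high values suggest copying."""
    fa, fb = fingerprint(a), fingerprint(b)
    return len(fa & fb) / len(fa)
```

Note what the sketch cannot tell you: a high overlap score says the text matches, but nothing about whether the copy was licensed or fair use, which is exactly the ambiguity described above and exactly what an embedded watermark resolves.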
The Digimarc Guardian watermark is complementary to the existing Attributor technology; Digimarc has most likely adapted Attributor’s web-crawling system to detect watermarks as well as use fingerprinting pattern-matching techniques to find copyrighted material online.
Digimarc had to develop a new type of watermark for this application, one that’s similar to those of Booxtream and other providers of what Bill McCoy of the International Digital Publishing Forum has called “social DRM.” Watermarks do not restrict or control use of content; they merely serve as forensic markers, so that watermark detection tools can find content in online places (such as cyberlockers or file-sharing services) where they probably shouldn’t be.
A “watermark” in an e-book can consist of text characters that are either plainly visible or hidden among the actual material. The type of data most often found in a “social DRM” scheme for e-books likewise can take two forms: personal information about the user who purchased the e-book (such as an email address) or an ID number that the distributor can use to look up the user or transaction in a database and is otherwise meaningless. (The idea behind the term “social DRM” is that the presence of the watermark is intended to deter users from “oversharing” files if they know that their identities are embedded in them.) The Digimarc scheme adopted by LibreDigital for HarperCollins uses hidden watermarks containing IDs that don’t reveal personal information by themselves.
In contrast, the tech publisher O’Reilly Media uses users’ email addresses as visible watermarks on its DRM-free e-books. Visible transactional watermarking for e-books dates back to Microsoft’s old Microsoft Reader (.LIT) scheme in the early 2000s, which gave publishers the option of embedding users’ credit card numbers in e-books — information that users surely would rather not “overshare.”
HarperCollins uses watermarks in conjunction with the various DRM schemes in which its e-books are distributed. The scheme is compatible with EPUB, PDF, and MOBI (Amazon Kindle) e-book formats, meaning that it could possibly work with the DRMs used by all of the leading e-book retailers.
However, it’s unclear which retailers’ e-books will actually include the watermarks. The scheme requires that LibreDigital feed individual e-book files to retailers for each transaction, rather than single files that the retailers then copy and distribute to end users; and the companies involved haven’t specified which retailers work with LibreDigital in this particular way. (I’m not betting on Amazon being one of them.) In any case, HarperCollins intends to use the scheme to gather information about which retailers are “leaky,” i.e., which ones distribute e-books that end up in illegal places online.
Hollywood routinely uses a combination of transactional watermarks and DRM for high-value content, such as high-definition movies in early release windows. And at least some of the major record labels have used a simpler form of this technique in music downloads for some time: when they send music files to retailers, they embed watermarks that indicate the identity of the retailer, not the end user. HarperCollins is unlikely to be the first publisher to use both “social DRM” watermarks and actual DRM, but it is the first one to be mentioned in a press release. The two technologies are complementary and have been used separately as well as together.
Disney and Apple’s UV FUD
March 26, 2014. Posted by Bill Rosenblatt in Business models, Technologies, United States, Video.
Last month Disney launched Disney Movies Anywhere, a service that lets users stream and download movies from Disney and associated studios on their Apple iOS devices. You can purchase movies on the site or from the App Store app and stream them to any iPhone, iPad, or iPod Touch. You can also get digital copies and streaming access with purchases of selected DVDs and Blu-ray discs. And you can connect your iTunes account to your Disney Movies Anywhere account so that you can gain similar streaming and download access to your existing Disney iTunes purchases.
A couple of things about Disney Movies Anywhere are worth discussing. First, this is yet more evidence of the strong bond between Disney and Apple, a relationship formed when Disney acquired Pixar from Steve Jobs, who became a Disney board member and the company’s largest individual shareholder.
More particularly, this service is a way for Apple to experiment with video streaming services without attaching its own brand name. Disney Movies Anywhere works with only iOS devices, and there’s little indication that it will add support for Android or other platforms. For whatever reason, Apple has shied away from streaming media services until quite recently (with iTunes Radio and the latest iteration of Apple TV).
More importantly, Disney Movies Anywhere is the first implementation of Disney’s KeyChest — a rights locker architecture that is similar to UltraViolet, the technology backed by the other five major Hollywood studios. The idea common to both KeyChest and UltraViolet is that when you purchase a movie, you’re actually purchasing the right to download or stream it from a variety of sources; the rights locker maintains a record of your purchase.
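The rights-locker idea is simple enough to sketch as a data structure. This toy class is purely illustrative, with no relation to the real UltraViolet or KeyChest APIs: the locker records a purchase once, and any participating retailer or device can later check the entitlement and serve the title in its own format.

```python
# A toy rights locker: a record of purchases shared across retailers.
class RightsLocker:
    def __init__(self):
        self._rights = {}  # account -> set of purchased title IDs

    def record_purchase(self, account: str, title_id: str) -> None:
        """Called by whichever retailer sold the movie."""
        self._rights.setdefault(account, set()).add(title_id)

    def is_entitled(self, account: str, title_id: str) -> bool:
        """Called by any participating retailer or device before
        streaming or downloading the title."""
        return title_id in self._rights.get(account, set())
```

The interoperability claim falls out of the structure: buy a title at Retailer A, and Retailer B can answer the same `is_entitled` query, so the purchase is not locked to the point of sale.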
One of the main motivations behind UltraViolet was to prevent content distributors or consumer electronics makers from dominating the economics of the digital video supply chain in the way that Apple dominated music downloads (and Amazon may dominate e-books), and thus from being able to dictate terms to copyright owners. By making it possible for users to buy digital movies from one retailer and then download them in other formats from other retailers, the five studios hoped to create a level playing field among retailers as well as interoperability for users. UltraViolet has several retail partners, including Target, Walmart (VUDU), and Best Buy (CinemaNow).
The problem with these technology schemes is that it is very hard to make them into universal standards. Just about every software technology we use settles down to twos or threes. In operating systems, it’s all twos: Windows and Mac OS for desktops and laptops; Android and iOS for mobile devices; Unix/Linux and Windows for servers. Other markets are similar: in relational databases it’s Oracle/MySQL (Oracle Corp.), DB2 (IBM), and SQL Server (Microsoft); in music paid-download formats it’s MP4-AAC (Apple) and MP3 (Amazon); in e-books (in the US, at least) it’s Amazon, Barnes & Noble, and Apple iBooks. Antitrust law prevents a single technology from dominating too much; market complexity prevents more than a handful from becoming roughly equal competitors.
It would be a shame if this also became true for rights lockers for movies and TV shows. It does not help the studios if consumers get one flavor of “interoperability” for movies from all but one major studio and another flavor for movies from Disney. Disney surely remembers the less-than-stellar success of its last solo venture into digital movie distribution: MovieBeam, which launched around 2004 and lasted less than four years.
And that brings us back around to Apple. The only plausible explanation for this bifurcation is that Apple is really in charge here. UltraViolet is not just an “every studio but Disney” consortium; it is also an “every technology company but Apple” initiative. The list of technology companies participating in UltraViolet is huge, though Microsoft occupies a particularly important role as the source of the UltraViolet file format and the first commercial DRM to be approved for use with the system. In other words, the KeyChest/UltraViolet dichotomy is shaping up to look very much like Apple vs. the Microsoft-led Windows ecosystem, or Apple vs. the Google-led Android ecosystem.
Still, the market for digital video is still in relatively early days, and things could change quite a bit — especially if consumers are confused by the choices on offer. (Coincidentally, there’s a good overview of this confusion and its causes in today’s New York Times.) UltraViolet is enjoying only modest success so far — compared, say, to Netflix or iTunes — and the introduction of Disney Movies Anywhere is unlikely to help make rights lockers any clearer to consumers.
In that respect, the UltraViolet/KeyChest dichotomy also has a precedent in the digital music market. Back in 2001-2002, the (then) five major record labels lined up behind two different music distribution platforms: MusicNet and pressplay. MusicNet was backed by Warner Music Group, EMI, BMG, and RealNetworks, while pressplay was backed by Sony Music and Universal Music Group. MusicNet was a wholesale distribution platform that made deals with multiple retailers; pressplay was its own retailer. In other words, MusicNet was UltraViolet, while pressplay was Disney Movies Anywhere. Yet neither one was successful; both suffered from over-complexity (among other things). Apple launched the much easier to use iTunes Music Store in 2003, and few people remember MusicNet or pressplay anymore.*
In other words, there are still opportunities for new digital video models to emerge and disrupt the current market. And consumer confusion is a great way to hasten the disruption.
*The two music platforms did survive, in a way: MusicNet is now MediaNet, a wholesaler of digital music and other content with many retail partners; pressplay was sold to Roxio, rebranded as Napster (the legal version), and resold to Rhapsody, where it still exists under the Napster brand name outside of the US.
Content Protection for 4k Video
July 2, 2013. Posted by Bill Rosenblatt in DRM, Technologies, Video, Watermarking.
As Hollywood adepts know, the next phase in picture quality beyond HD is something called 4k. Although the name suggests 4k (perhaps 4096) pixels in the vertical or horizontal direction, its resolution is actually 3840 × 2160: twice the pixels of HD (1920 × 1080) in each direction, or four times as many pixels overall.
4k is the highest quality of image actually captured by digital cinematography right now. The question is, how will it be delivered to consumers, in what timeframe, and how will it be protected?
Those of us who attended the Anti-Piracy and Content Protection Summit in LA last week learned that the answer to the latter question is unknown as yet. Spencer Stephens, CTO of Sony Pictures, gave a brief presentation explaining what 4k is and outlining his studio’s wish list for 4k content protection. He said that it was an opportunity to start fresh with a new design, compared to the AACS content protection technology for Blu-ray discs, which is 10 years old.
This is interesting on a couple of levels. First, it implies that the studios have not predetermined a standard for 4k content protection; in contrast, Blu-ray discs were introduced in the market about three years after AACS was designed. Second, Stephens’s remarks had the flavor of a semi-public appeal to the community of content protection vendors — some of which were in the audience at this conference — for help in designing DRM schemes for 4k that met his requirements.
Stephens’s wish list included such elements as:
- Title-by-title diversity, so that a technique used to hack one movie title doesn’t necessarily apply to another
- Requiring players to authenticate themselves online before playback, which enables hacked players to be denied but makes it impossible to play 4k content without an Internet connection
- The use of HDCP 2.2 to protect digital outputs, since older versions of HDCP have been hacked
- Session-based watermarking, so that each 4k file is marked with the identity of the device or user that downloaded it (a technique used today with early-window HD content)
- The use of trusted execution environments (TEE) for playback, which combine the security of hardware with the renewability of software
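The first item on that list, title-by-title diversity, can be illustrated with a simple key-derivation sketch. This is a hedged toy, not any studio-approved key ladder: the master secret and HMAC-based derivation are placeholders, but they show the property Stephens asked for, namely that recovering one title's key tells an attacker nothing about any other title's.

```python
import hashlib
import hmac

# Placeholder master secret; a real scheme would keep this in licensed,
# hardware-protected infrastructure, not in code.
MASTER_SECRET = b"illustrative-master-secret"

def title_key(title_id: str) -> bytes:
    """Derive a 128-bit content key unique to one title, so a hack that
    extracts one movie's key does not generalize to the catalog."""
    return hmac.new(MASTER_SECRET, title_id.encode(),
                    hashlib.sha256).digest()[:16]
```

Contrast this with a scheme where every disc or stream shares one global secret (roughly the CSS situation): there, a single extraction breaks everything at once.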
From time to time I hear from startup companies that claim to have designed better technologies for video content protection. I tell them that getting studio approval for new content protection schemes is a tricky business. You can get studio technology executives excited about your technology, but they don’t actually “approve” it such that they guarantee they’ll accept it if it’s used in a content service. Instead, they expect service providers to propose the technology in the context of the overall service, and the studios will consider providing licenses to their content in that broader context. And of course the studios don’t actually pay for the technology; the service providers or consumer device makers do.
In other words, studios “bless” new content protection technologies, but otherwise the entire sales process takes place at arm’s length from the studios. In that sense, the studios act somewhat like a regulatory agency does when setting guidelines for compliance with a regulation such as HIPAA and GLB (for information privacy in healthcare and financial services respectively). The resulting technology often meets the letter but not the spirit of the regulations.
In this respect, Stephens’s remarks were a bit of fresh air. They are an invitation to more open dialog among vendors, studios, and service providers about the types of content protection that they may be willing to implement when it comes time to distribute 4k content to consumers.
In the past, such discussions often happened behind closed doors, took the form of unilateral “unfunded mandates,” and/or resulted in implementations that plainly did not work. As technology gets more sophisticated and the world gets more complex, Hollywood is going to have to work more closely with downstream entities in the content distribution chain if it wants its content protected. Spencer Stephens’s presentation was a good start in that direction.
I have released a new white paper on content security requirements for video services that distribute content to multiple devices. This white paper discusses copyright owners’ requirements for security in today’s world of proliferating devices and delivery channels.
So-called managed networks (cable, satellite, and telco TV) are under increasing pressure to compete with “over the top” (OTT) video services that can run on any IP-based (unmanaged) network to a variety of devices — services like Netflix and Hulu. In the US, in fact, total OTT subscriberships are fast approaching those of cable, satellite, and telco TV.
Therefore pay-TV operators have to respond by making their content available on a similar variety of devices and even through unmanaged networks. While some major pay-TV providers like Comcast and Time Warner Cable are launching “TV Everywhere” services, many more pay-TV operators are trying to keep up by building their own service extensions onto mobile phones, tablets, and home devices other than traditional set-top boxes (STBs).
Content security is one of the many requirements that operators have to meet in order to license content from studios, TV networks, sports leagues, and other major content sources. Life for pay-TV operators used to be relatively simple: adopt a conditional access (CA) technology that was equally effective in thwarting signal theft as it was in thwarting content piracy. Economic and security goals were aligned between operators and copyright owners. Now life is considerably more complicated, as operators have to support home networks and branch out into mobile services. Content security requirements are more complicated as well.
This white paper gathers security requirements from major content owners and describes them in a single document. The intent is to help pay-TV operators and other video service providers that are looking to launch multi-screen video services, so that they know what to expect and avoid any unpleasant surprises with regard to security requirements when licensing content to offer through their services.
I spoke to representatives from most of the major Hollywood studios to get their requirements. Although it is not possible to build a gigantic table that an operator can use to look up DRM or conditional access requirements for any given delivery modality and client device — among other things, such a table would become obsolete very quickly — I was able to create a set of guidelines that should be useful for operators.
Content security guidelines do depend on certain factors, including release windows (how long after a film’s theatrical release or a TV show’s first airing), display quality, and the usage rules granted to users and their devices. In the white paper, I map these factors to certain specific content security requirements, such as roots of trust, watermarks, software hardening, and DRM robustness rules. Security guidelines also depend on external market factors that the white paper also describes.
The 28-page paper describes the current state of the art of techniques for protecting video content delivered over pay television networks such as cable and satellite. The two primary theses of the white paper are:
- Pay TV often leads in content protection innovation over other media types and delivery modalities. That is because, among other reasons, it is a fairly rare case where the economic interests of content owners and service providers are aligned: content owners don’t want their content used without authorization, and pay-TV operators don’t want their signals stolen. Therefore pay-TV operators have incentives to implement strong and innovative content security solutions.
- Before today, many content security schemes could be described as hack-it-and-it’s-broken (such as CSS for DVDs) or a cycle of hack-patch-hack-patch-etc. (such as AACS for Blu-ray or FairPlay for iTunes). Now technologies are available that break the hack-patch-hack-patch cycle, thereby decreasing long-term costs (TCO) and complexity.
The white paper starts with a brief history of content protection technologies for digital pay TV, starting with the adoption of the Digital Video Broadcasting (DVB) standard in 1994. Then it describes various newer technologies, including building blocks like ECC (elliptic curve cryptography), flash memory, and secure silicon; and it describes new techniques such as individualization, renewability, diversity, and whitebox cryptography. It ties these techniques together into the concept of security lifecycle services, which include breach response and monitoring.
The final section of the paper discusses fingerprinting and watermarking as two techniques that complement encryption as ways of finding unauthorized content “in the wild.”
My thanks to Irdeto for sponsoring this paper.
Irdeto Acquires BD+ Technology from Rovi July 7, 2011Posted by Bill Rosenblatt in DRM, Economics, Technologies, Video.
Irdeto announced that it has acquired the BD+ content protection technology for Blu-ray discs from Rovi Corp. (formerly Macrovision). The deal includes the team and patents related to Cryptography Research Inc.’s Self-Protecting Digital Content (SPDC) technology, which Rovi acquired in 2007.
Given the string of recent acquisitions that Rovi has unwound (eMeta, InstallShield, FlexNet, TryMedia, and others), most of which have to do with content security or license management, this deal would seem to be yet another in the same vein; and in fact, BD+ was the last content security asset that Rovi owned, apart from its legacy serial copy management technology. Rovi is apparently paring assets to focus on its metadata (acquired from All Media Guide and Muze) and Electronic Program Guide (Gemstar) businesses; Rovi has dominant market shares or IP positions in both areas.
But a conversation I had with Irdeto revealed an entirely different purpose for this deal: one of the major Hollywood studios brokered it in an attempt to fix Blu-ray security, which has been seriously hacked. Irdeto did not name the studio, but those who follow the industry closely can probably guess which one it is.
BD+ is one of two sets of security technologies used in the Blu-ray disc format. The other, AACS, has been hacked, though the impact of that hack is not as severe as that of other hacks, such as the one to CSS for DVDs. Nevertheless, the security of Blu-ray discs is apparently poor enough that Hollywood is concerned and is seeking a solution.
The idea in this deal is that Irdeto will bolster the security of Blu-ray by applying the Cloakware software-security technology that it acquired in 2007. According to Irdeto, this is a nontrivial engineering challenge but one that it believes it can solve in a few months’ time.
When Blu-ray first hit the market, with its multiple layers of content security, I had thought it was a real breakthrough for Hollywood. It looked as though Hollywood had not only learned its lesson about approving content security schemes that are too easy to hack (such as CSS for DVDs) but also had figured out a way to get downstream entities, such as consumer electronics makers, to pay for truly superior security.
Yet now we know that Hollywood has, once again, gotten what it paid for. Now that the latest intelligence about the Blu-ray format says that rumors of its demise are exaggerated, Hollywood wants to shore up the format’s security and protect its release windows. It wants to rely on Irdeto’s Cloakware technology to plug the holes.
This is a great vote of confidence in Irdeto. But relative to the bigger picture, one must ask: does it really change Hollywood’s behavior so that this kind of thing doesn’t happen again? To put the question another way: what does Irdeto get out of this deal that would create incentives for it and other vendors to produce truly superior content protection — technology that is secure and affords a decent user experience?
Irdeto isn’t offering an answer. The terms of the acquisition from Rovi are undisclosed. It is unlikely that Blu-ray equipment and software makers will pay more for a license to Cloakware-enhanced BD+ technology than they pay now. Irdeto says that it will get “something” if it completes the Blu-ray fix successfully, but it won’t say what that something is.
I get the feeling that it will mostly be bragging rights. Irdeto will get the cachet of having “fixed Blu-ray,” which will (so the logic goes) lead to other opportunities with future formats; such is the power of Hollywood studio endorsement of content protection technology. And there is certainly some value in the elegant SPDC technology and the patents and engineering team that came with Irdeto’s acquisition.
But, putting aside the price of the acquisition relative to the value of the Blu-ray revenue stream that comes with it, the value of this deal strikes me as illusory. It’s analogous to the argument of user advocates who say that Hollywood studios should give away their content online so that consumers can “engage with the brands.” Both Hollywood studios and content protection vendors are in business to make money from their products. The major studios generally operate on the proposition that more money makes for a better product. Why can’t they apply the same principle to content protection?
The Next Battlefield: 3D Printing May 9, 2011Posted by Bill Rosenblatt in Technologies, Uncategorized.
A couple of months ago, the advocacy organization Public Knowledge started posting pieces on its website about 3D printing technology and how it could become the next venue for overreach by intellectual property owners. I initially dismissed this as scare-mongering by an organization that, like all others of its type, is constantly on the lookout for causes around which to rally fundraising efforts.
But then PK issued a white paper on 3D printing and its implications for IP law which was well-researched, thought-provoking, and surprisingly balanced — more reminiscent of the output of a Center for Democracy and Technology or a Future of Music Coalition than of the polemics of an Electronic Frontier Foundation or of a… Public Knowledge.
And last month Ars Technica dished up an equally stimulating article on the same subject; I don’t know whether one inspired the other or vice versa. Anyway, my eyes and ears started to perk up.
What really did it for me was hearing Jaron Lanier’s keynote address last Thursday at the Festival of Ideas for the New City conference here in New York. He mentioned 3D printing as becoming huge once the technology gets down to the consumer range of price and complexity. Being the fan of Lanier’s writings that I am, I became convinced: 3D printing is worth much attention in the world of intellectual property and technology.
So what is 3D printing? It’s a manufacturing technique whereby a machine makes a physical object by “printing” it in many very thin layers. It’s typically referred to as a disruptive technology, but like all such things, it grows out of existing technologies and only becomes “disruptive” once it reaches a certain threshold of price, size, scale, complexity, or more than one of these.
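The layer-by-layer idea can be made concrete with a toy "slicer," the piece of software that turns a 3D model into the stack of cross-sections a printer deposits. This sketch uses a sphere for simplicity; the numbers and function names are purely illustrative and not taken from any real printer toolchain:

```python
import math

def slice_sphere(radius: float, layer_height: float):
    """Toy 3D-printing slicer: cut a sphere of the given radius into
    horizontal layers and return (z, cross_section_radius) per layer."""
    layers = []
    z = -radius + layer_height / 2  # sample each layer at its vertical center
    while z < radius:
        # Radius of the circular cross-section of the sphere at height z.
        r = math.sqrt(max(radius**2 - z**2, 0.0))
        layers.append((round(z, 4), round(r, 4)))
        z += layer_height
    return layers

# A 10 mm sphere at a 0.2 mm layer height slices into 100 layers.
print(len(slice_sphere(10.0, 0.2)))
```

A real slicer does the same thing to arbitrary triangle meshes from CAD files, which is exactly why the CAD file itself, not the physical object, becomes the thing that gets copied and shared.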
Plenty of steps have been taken towards the scalable and economical automation of manufacturing. I’ve had experience with two of them. About thirty years ago, I wrote user-interface software for a computer-controlled lathe, an example of what we now know as CAD/CAM. With this software (which ran on a mainframe), you could draw the outline of a part you wanted to make, insert the raw stock (wood or metal) into the lathe, press a button, and have it make the part. More recently, I worked with a leading maker of printers and copiers which had a device for printing images on garments, such as T-shirts.
I’ll leave it to other sources, such as the Public Knowledge white paper, Ars Technica article, and Wikipedia to give better background on the emergence and potential of 3D printing than I can. But what strikes me the most about this technology from our perspective here is that it has the capacity to profoundly affect all areas of intellectual property.
If an everyday person can spend, say, US $1,000 on a device that lets her make any plastic or polymer object up to a cubic foot in size for the cost of raw materials, and if that device can accept AutoCAD, SketchUp, or similar CAD/CAM files specifying what is to be made, then IP owners have a problem on their hands. With such a device, you could make something that infringes copyrights, trademarks, patents, or all of the above at once.
These three branches of IP evolved separately; see Adrian Johns’ Piracy for a very good summary of how they were originally distinguished from one another and then went their separate ways. Occasionally some law is made that borrows a concept from one branch of IP law and applies it to another; the most prominent recent example is the Supreme Court’s 2005 Grokster decision, which borrowed the concept of “inducement” from patent law and applied it to copyright.
Applying all of the different strands of IP law to a single technology is a recipe for a mess — particularly when it comes to the legal concept of secondary liability, i.e. “helping someone infringe.” The maker of a 3D printing device would be held to different standards regarding patent, copyright, and trademark infringement.
IP owners will naturally begin to think about technical measures they can take (or attempt to require) to guard against infringement. With predecessor technologies to 3D printing, life was relatively simple — relatively. For example: In the project I did with the printer maker, the company wanted to sell the garment printers to small retailers so that they could produce garments with licensed images on them, on demand. The printers had a price tag in the low five figures (USD).
Think about applications such as sports venues (second-string player shoots a sixty-footer at the last second; everyone wants a T-shirt to commemorate the occasion but the kiosk doesn’t have any), party stores (My Little Pony on the front, Happy 5th Birthday Juliette on the back), or museums (I want a T-shirt of that Vermeer painting on the second floor, on a light blue background, in Extra Large). My involvement with the printer maker was to help design a service that could provide licensed images to the devices over the Internet while ensuring that the local merchant wouldn’t abuse them.
But 3D printing takes such concerns to a much more complex level. It’s easy to recognize trademarks and trademarked imagery. We know something about how to recognize and thwart copyright infringement. But what does “DRM for patents” even look like, and is such a concept even worth pursuing?
I certainly don’t have the answers. But I promise you that I will follow this fascinating area with interest as it unfolds.