E-Book Watermarking Gains Traction in Europe
October 3, 2013. Posted by Bill Rosenblatt in DRM, Europe, Publishing, United States, Watermarking.
The firm Rüdiger Wischenbart Content and Consulting has just released the latest version of Global eBook, its overview of the worldwide ebook market. This sweeping, highly informative report is available for free during the month of October.
The report contains lots of information about piracy and rights worldwide — attitudes, public policy initiatives, and technologies. A few conclusions in particular stand out. First, while growth of e-book reading appears to be slowing down, it has reached a level of 20% of book sales in the U.S. market (and even higher by unit volume). This puts e-books firmly in the mainstream of media consumption.
Accordingly, e-book piracy has become a mainstream concern. Publishers — and their trade associations, such as the Börsenverein des Deutschen Buchhandels in Germany, which is the most active on this issue — had been less involved in the online infringement issue than their counterparts in the music and film industries, but that’s changing now. Several studies have been done that generally show e-book piracy levels rising rapidly, but there’s wide disagreement on its volume. And virtually no data at all is available about the promotional vs. detrimental effects of unauthorized file-sharing on legitimate sales. Part of the problem is that e-book files are much smaller than music MP3s or (especially) digital video or games; therefore e-book files are more likely to be shared through email (which can’t be tracked) and less likely to be available through torrent sites.
The lack of quantitative understanding of infringement and its impact has led different countries to pursue different paths, in terms of both legal actions and the use of antipiracy technologies. Perhaps the most surprising of the latter trend — at least to those of us on this side of the Atlantic — is the rapid ascendancy of watermarking (a/k/a “social DRM”) in some European countries. For example:
- Netherlands: Arbeiderspers/Bruna, the country’s largest book publisher, switched from traditional DRM to watermarking for its entire catalog at the beginning of this year.
- Austria: 65% of the e-books available in the country have watermarks embedded, compared to only 35% with DRM.
- Hungary: Watermarking is now the preferred method of content protection.
- Sweden: Virtually all trade e-books are DRM-free. The e-book distributor eLib (owned by the Swedish media giant Bonnier) uses watermarking for 80% of its titles.
- Italy: Watermarking has grown from 15% to 42% of all e-books, overtaking the 35% that use DRM.
(Note that these are, with all due respect to them, second-tier European countries. I have anecdotal evidence that e-book watermarking is on the rise in the UK, but not much evidence of it in France or Germany. At the same time, the above countries are often test beds for technologies that, if successful, spread to larger markets — whether by design or market forces.)
Meanwhile, there’s still a total absence of data on the effects of both DRM and watermarking on users’ e-book behavior — which is why I have been discussing with the Book Industry Study Group the possibility of doing a study on this.
The prevailing attitude among authors is that DRM should still be used. An interesting data point on this came back in January when Lulu, one of the prominent online self-publishing services, decided to stop offering authors the option of DRM protection (using Adobe Content Server, the de facto standard DRM for ebooks outside of the Amazon and Apple ecosystems) for ebooks sold on the Lulu site. Lulu authors would still be able to distribute their titles through Amazon and other services that use DRM.
Lulu announced this in a blog post which elicited large numbers of comments, largely from authors. My pseudo-scientific tally of the authors’ comments showed that they are in favor of DRM — and unhappy with Lulu’s decision to drop it — by more than a two-to-one margin. Many said that they would drop Lulu and move to its competitor Smashwords, which continues to support DRM as an option. Remember that these are independent authors of mostly “long tail” titles in need of exposure, not bestselling authors or major publishers.
One reason for Lulu’s decision to drop DRM was undoubtedly the operational expense. Smashwords’ CEO, Mark Coker, expressed the attitudes of ebook distributors succinctly in a Publishers Weekly article covering Lulu’s move when he said, “What’s relevant is whether the cost of DRM (measured by fees to Adobe, [and for consumers] increased complexity, decreased availability, decreased sharing and word of mouth, decreased customer satisfaction) outweigh the benefits[.]” As we used to say over here, that’s the $64,000 question.
Content Protection for 4k Video
July 2, 2013. Posted by Bill Rosenblatt in DRM, Technologies, Video, Watermarking.
As Hollywood adepts know, the next phase in picture quality beyond HD is something called 4k. Although the name suggests 4k (perhaps 4096) pixels in the vertical or horizontal direction, its resolution is actually 3840 × 2160: twice the pixels of 1080p HD (1920 × 1080) in each direction, or four times as many pixels overall.
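The arithmetic is easy to sanity-check with a quick sketch, using 1080p HD as the baseline:

```python
# 4k Ultra HD (3840 x 2160) vs. 1080p HD (1920 x 1080):
# twice the pixels in each direction means four times the total.
hd_w, hd_h = 1920, 1080
uhd_w, uhd_h = 3840, 2160

assert uhd_w == 2 * hd_w and uhd_h == 2 * hd_h
total_hd = hd_w * hd_h    # 2,073,600 pixels
total_uhd = uhd_w * uhd_h # 8,294,400 pixels
print(total_uhd // total_hd)  # 4
```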
4k is the highest quality of image actually captured by digital cinematography right now. The question is, how will it be delivered to consumers, in what timeframe, and how will it be protected?
Those of us who attended the Anti-Piracy and Content Protection Summit in LA last week learned that the answer to the latter question is unknown as yet. Spencer Stephens, CTO of Sony Pictures, gave a brief presentation explaining what 4k is and outlining his studio’s wish list for 4k content protection. He said that it was an opportunity to start fresh with a new design, compared to the AACS content protection technology for Blu-ray discs, which is 10 years old.
This is interesting on a couple of levels. First, it implies that the studios have not predetermined a standard for 4k content protection; in contrast, Blu-ray discs were introduced in the market about three years after AACS was designed. Second, Stephens’s remarks had the flavor of a semi-public appeal to the community of content protection vendors — some of which were in the audience at this conference — for help in designing DRM schemes for 4k that met his requirements.
Stephens’s wish list included such elements as:
- Title-by-title diversity, so that a technique used to hack one movie title doesn’t necessarily apply to another
- Requiring players to authenticate themselves online before playback, which enables hacked players to be denied but makes it impossible to play 4k content without an Internet connection
- The use of HDCP 2.2 to protect digital outputs, since older versions of HDCP have been hacked
- Session-based watermarking, so that each 4k file is marked with the identity of the device or user that downloaded it (a technique used today with early-window HD content)
- The use of trusted execution environments (TEE) for playback, which combine the security of hardware with the renewability of software
From time to time I hear from startup companies that claim to have designed better technologies for video content protection. I tell them that getting studio approval for new content protection schemes is a tricky business. You can get studio technology executives excited about your technology, but they don’t actually “approve” it such that they guarantee they’ll accept it if it’s used in a content service. Instead, they expect service providers to propose the technology in the context of the overall service, and the studios will consider providing licenses to their content in that broader context. And of course the studios don’t actually pay for the technology; the service providers or consumer device makers do.
In other words, studios “bless” new content protection technologies, but otherwise the entire sales process takes place at arm’s length from the studios. In that sense, the studios act somewhat like a regulatory agency does when setting guidelines for compliance with regulations such as HIPAA and GLB (for information privacy in healthcare and financial services respectively). The resulting technology often meets the letter but not the spirit of the regulations.
In this respect, Stephens’s remarks were a bit of fresh air. They are an invitation to more open dialog among vendors, studios, and service providers about the types of content protection that they may be willing to implement when it comes time to distribute 4k content to consumers.
In the past, such discussions often happened behind closed doors, took the form of unilateral “unfunded mandates,” and/or resulted in implementations that plainly did not work. As technology gets more sophisticated and the world gets more complex, Hollywood is going to have to work more closely with downstream entities in the content distribution chain if it wants its content protected. Spencer Stephens’s presentation was a good start in that direction.
Digimarc Acquires Attributor
December 4, 2012. Posted by Bill Rosenblatt in Fingerprinting, Images, Publishing, Watermarking.
Digimarc announced yesterday that it has acquired Attributor Corp. Attributor, based in Silicon Valley, is one of a handful of companies that crawl the Internet looking for instances of copyrighted material that may be infringing, using a pattern-recognition technology akin to fingerprinting. Digimarc is a leader in digital watermarking technology, with a large and significant portfolio of IP in the space. The acquisition price was a total of US $7.5 million in cash, stock, and contingent compensation.
This is a synergistic and strategically significant move for Digimarc. A few years ago, Digimarc had pruned its efforts to create products and services for digital media markets outside of still images. It had decided, in effect, to leave products and services to its IP licensees, companies such as Civolution of the Netherlands and MarkAny of South Korea. Attributor’s primary market is book publishing, with customers including four out of the “Big Six” trade book publishers as well as several leading educational and STM (scientific, technical, medical) publishers.
Digimarc intends to leverage Attributor’s relationships with book publishers to help it expand its watermarking technology into that market and to move into other markets such as magazine and financial publishing. The company cited the explosive growth in e-books as a reason for the acquisition.
Beyond that, Digimarc’s acquisition is another sign of the increasing importance of infringement monitoring services; the previous such sign came over the summer, when Thomson Reuters acquired MarkMonitor.
There are two reasons for this increase in importance. First is the rise of so-called progressive response legal regimes: copyright owners can monitor the Internet and submit data on alleged infringements to a legal authority, which sends users increasingly strong warning messages and, if they keep on infringing, potentially suspends their ISP accounts. The most advanced progressive response regime is HADOPI in France, early results from which are encouraging. The Copyright Alert System is supposedly gearing up for launch in the United States. A handful of other countries have progressive response in place or in process as well.
The second reason for the increasing importance of so-called piracy monitoring is that copyright owners are starting to realize the value of the data they generate, beyond catching infringers. Piracy is evidence of popularity of content — of demand for it. The data that these services generate can be valuable for analytics purposes, to see who is interested in the content and in what ways. Big Champagne, for example, has been supplying this type of data to the music industry for many years. Attributor has been working on a new service that integrates piracy data with social media analytics; Digimarc intends to integrate this into its own data offerings for the image market.
In fact, we’ll have a discussion on the value of piracy data tomorrow at Copyright and Technology NYC 2012. Leading the discussion will be Thomas Sehested of MarkMonitor. There’s little doubt he will be called upon to talk about his new competition.
Webinar on Studios’ Content Security Policies
April 24, 2012. Posted by Bill Rosenblatt in Conditional Access, DRM, Events, Video, Watermarking.
For those who couldn’t attend the breakfast event at the NAB trade show last week, I will be doing a webinar on Content Security Requirements for Multi-Screen Video Services, on Thursday April 26 at noon US east coast time/1700 GMT. I’ll be presenting a synopsis of the whitepaper I published last December on the topic. I will be joined by Petr Peterka, CTO of Verimatrix, sponsor of the webinar. Click here to register.
I have released a new white paper on content security requirements for video services that distribute content to multiple devices. This white paper discusses copyright owners’ requirements for security in today’s world of proliferating devices and delivery channels.
So-called managed networks (cable, satellite, and telco TV) are under increasing pressure to compete with “over the top” (OTT) video services that can run on any IP-based (unmanaged) network to a variety of devices — services like Netflix and Hulu. In the US, in fact, total subscriptions to OTT services are fast approaching total subscriptions to cable, satellite, and telco TV.
Therefore pay-TV operators have to respond by making their content available on a similar variety of devices and even through unmanaged networks. While some major pay-TV providers like Comcast and Time Warner Cable are launching “TV Everywhere” services, many more pay-TV operators are trying to keep up by building their own service extensions onto mobile phones, tablets, and home devices other than traditional set-top boxes (STBs).
Content security is one of the many requirements that operators have to meet in order to license content from studios, TV networks, sports leagues, and other major content sources. Life for pay-TV operators used to be relatively simple: adopt a conditional access (CA) technology that was equally effective in thwarting signal theft as it was in thwarting content piracy. Economic and security goals were aligned between operators and copyright owners. Now life is considerably more complicated, as operators have to support home networks and branch out into mobile services. Content security requirements are more complicated as well.
This white paper gathers security requirements from major content owners and describes them in a single document. The intent is to help pay-TV operators and other video service providers that are looking to launch multi-screen video services, so that they know what to expect and avoid any unpleasant surprises with regard to security requirements when licensing content to offer through their services.
I spoke to representatives from most of the major Hollywood studios to get their requirements. Although it is not possible to build a gigantic table that an operator can use to look up DRM or conditional access requirements for any given delivery modality and client device — among other things, such a table would become obsolete very quickly — I was able to create a set of guidelines that should be useful for operators.
Content security guidelines do depend on certain factors, including release windows (how long after a film’s theatrical release or a TV show’s first airing the content is made available), display quality, and the usage rules granted to users and their devices. In the white paper, I map these factors to certain specific content security requirements, such as roots of trust, watermarks, software hardening, and DRM robustness rules. Security guidelines also depend on external market factors, which the white paper describes as well.
The 28-page paper describes the current state of the art of techniques for protecting video content delivered over pay television networks such as cable and satellite. The two primary theses of the white paper are:
- Pay TV often leads in content protection innovation over other media types and delivery modalities. That is because, among other reasons, it is a fairly rare case where the economic interests of content owners and service providers are aligned: content owners don’t want their content used without authorization, and pay-TV operators don’t want their signals stolen. Therefore pay-TV operators have incentives to implement strong and innovative content security solutions.
- Before today, many content security schemes could be described as hack-it-and-it’s-broken (such as CSS for DVDs) or a cycle of hack-patch-hack-patch-etc. (such as AACS for Blu-ray or FairPlay for iTunes). Now technologies are available that break the hack-patch-hack-patch cycle, thereby decreasing long-term total cost of ownership (TCO) and complexity.
The white paper starts with a brief history of content protection technologies for digital pay TV, starting with the adoption of the Digital Video Broadcasting (DVB) standard in 1994. Then it describes various newer technologies, including building blocks like ECC (elliptic curve cryptography), flash memory, and secure silicon; and it describes new techniques such as individualization, renewability, diversity, and whitebox cryptography. It ties these techniques together into the concept of security lifecycle services, which include breach response and monitoring.
The final section of the paper discusses fingerprinting and watermarking as two techniques that complement encryption as ways of finding unauthorized content “in the wild.”
My thanks to Irdeto for sponsoring this paper.
The Early Release Window Experiment Continues
June 29, 2011. Posted by Niels Thorwirth in Video, Watermarking.
The early release window, which offers Hollywood content for home consumption while it is still showing at theaters, has been debated for many years – in fact, I wrote about an enabling FCC ruling about a year ago. But now the debate about its success is raging more than ever.
Adding fuel to the fire is a current price tag of US $30. At this price point, the discussions revolve around the comparison of an expensive VOD movie to movie theater tickets that cost, on average, less than $8. Cinema owners and movie directors have voiced their concerns about the shift in content consumption habits. Then again, it is impossible to reliably predict consumer interest – otherwise every Hollywood title would be a blockbuster.
I think that it will be an interesting offer for, initially, a small percentage of consumers. And while the rate of the adoption is questionable, it’s obvious to me that movie theaters won’t disappear any time soon and that electronic distribution will continue to grow.
The participating studios have certainly conducted their own research, and evidently their hopes are high enough to shake up the traditional models by supporting this offer.
But I see the most relevant indicator in recent discussions that I had with operators. They are evaluating this opportunity seriously and investing time and resources in the studios’ requirement that early-window content be digitally watermarked as well as encrypted. This may be because even a small uptake by consumers will translate into a relevant chunk of revenue.
One technically interesting point is that operators often prefer server-side integration of watermarking. The choice is between embedding the mark in the client device and embedding it in the video server before delivery. While a client-based approach has the advantage of distributed processing without head-end integration, server-side watermarking does not require modification to client devices.
The overall application is the same, yet the head-end component requires a very different technology approach. Manipulating raw video pixels is too slow given the complex coding of compression schemes like H.264, so server-side marking has to be applied in the compressed (and possibly encrypted) domain, on the fly as the content is delivered.
Efficiency is key, because the delivery infrastructure is all about delivering the maximum number of parallel streams. Any overhead that watermarking introduces must be small and fast. This is a fundamental difference from previous watermarking schemes, which focused only on survivability (robustness). At the same time, with broad deployment expected across multiple head-end infrastructures, ease of integration is crucial to the adoption of digital watermarking.
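To make the compressed-domain requirement concrete: one commonly described way to keep per-session marking cheap at the head-end is to pre-encode two marked variants of each video segment and simply choose between them per session at delivery time, so no pixel manipulation happens during delivery. The following is a minimal sketch of that variant-selection idea, not any particular vendor’s method; the function names are hypothetical.

```python
import hashlib

def session_bits(session_id: str, n: int) -> list[int]:
    """Derive a deterministic bit sequence (the embedded mark)
    from a session identifier."""
    digest = hashlib.sha256(session_id.encode()).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n)]

def select_segments(variants_a: list, variants_b: list, session_id: str) -> list:
    """Serve pre-marked segment variant A or B according to the session's
    bit sequence; delivery-time work is just one lookup per segment."""
    bits = session_bits(session_id, len(variants_a))
    return [variants_b[i] if bit else variants_a[i]
            for i, bit in enumerate(bits)]
```

Because the variants are encoded once, up front, the per-stream cost at delivery time is negligible, which is exactly the efficiency property described above.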
This development will remain interesting because it’s an experiment on the technical front as well in business models, and I am sure there will be more progress to report in the future.
Public Knowledge White Paper Attacks Copyright Filtering
August 20, 2009. Posted by Bill Rosenblatt in Fingerprinting, Law, Watermarking.
The Washington-based advocacy organization Public Knowledge last month published Forcing the Net Through a Sieve: Why Copyright Filtering is Not a Viable Solution for U.S. ISPs. The white paper was a submission to the Federal Communications Commission in connection with its National Broadband Plan.
The paper covers many technical, policy, and legal reasons why it’s a bad idea to adopt various types of technologies to keep unauthorized copyrighted material off the Internet. Some of the considerations include inaccuracy in identifying actually infringing material (whether false positives or false negatives), hampering ISP network performance, infringing on fair use rights, and forcing ISPs to incur costs that may be passed on to consumers.
Countries outside of the US, such as France and Belgium, have been seriously considering legal mandates for filtering copyrighted material from ISPs’ networks. Some ISPs in the US, like AT&T, have been experimenting with filtering technologies — in AT&T’s case, Vobile’s video fingerprinting — while others, like Verizon, are against it.
Unfortunately, this white paper contains some errors and mischaracterizations that dampen its value in influencing regulations. The most serious of these occur in the sections on identifying content.
The paper discusses the difficulty of identifying a piece of content through metadata, such as ID3 tags commonly used in digital music. This is true as far as it goes. But it makes no mention of content ID standards that are gaining traction in various media industry segments, such as ISRC in music, ISAN in video, and DOI in various publishing industry segments. The use of content IDs, especially in watermarks, would greatly improve the efficiency and accuracy of content identification over other schemes.
The authors also mischaracterize the state of watermarking. They say that watermarks can be removed from content, leading to a pointless “arms race” between hackers and developers of watermarking technology. To support this, they point to research done by Ed Felten and his Princeton team in 2001 in connection with the failed SDMI watermark. Not only is this research ancient history with regard to watermarking techniques used today, but it is also off-target: the SDMI watermark was intended for a different purpose and thus was designed differently from watermarks used to identify content for forensic purposes. Such watermarks can be designed so that removing them leaves content that is perceptually degraded.
Finally, the authors claim that watermark detection won’t do anything to filter content from CDs, DVDs, or camcorded movies. This is not true. These can be and are watermarked as well; and the watermarks are designed to withstand transformations such as digital-analog-digital conversion.
There are other more general, almost “rhetorical” devices used in this paper that I would call questionable. One is the persistent use of the term “downloading” to describe what an ISP must do in order to find content to filter. The report accurately describes deep packet inspection, but this need not involve “downloading,” a term that implies an operation that takes time and is a departure from the usual process of routing Internet traffic. In fact, routers already examine Internet traffic for malware and various other types of content; they do this on the fly without “downloading.” Technology companies such as Zeitera are working on hardware-based fingerprinting technology that would work similarly for content identification that could be used for filtering.
Another such rhetorical device is use of the term “underinclusive,” meaning technologies that let infringing content through instead of blocking it (i.e., false negatives) — as opposed to “overinclusive,” meaning false positives. Content owners who favor filtering technologies are not necessarily looking to eliminate false negatives. This is reminiscent of the copyleft canard that antipiracy technologies are worthless because they aren’t perfect.
Finally, the white paper makes various connections between copyright filtering and net neutrality that are conspiracy-theoretical stretches. One example is the discussion about using filtering to slow down or speed up traffic through networks. I am not aware of any copyright filtering discussion that encompasses bandwidth throttling.
There are indeed serious concerns about copyright filtering, many of which this white paper raises effectively. Network efficiency and false positives that abridge fair use rights are the two big ones. Some of the technologies that this white paper claims are being considered for copyright filtering are just bad ideas, such as traffic pattern analysis, architecture-based filtering (e.g. P2P), and protocol-based filtering (e.g. BitTorrent). But an exposition of the negative aspects of this type of technology should at least lay out the arguments without resorting to trial-lawyer-esque rhetorical devices and factual gaps.
I’m also skeptical of any legally mandated technological scheme for controlling copyright. Ultimately, assuming that the technology can be made to work adequately, the use of copyright filters ought to be a matter of economics and private sector deliberation — something that the movie and user-generated-content industries have already attempted. Public Knowledge’s white paper does address the most important economic principle, namely the question of who pays for the technology. Any scheme that saddles consumers with the burden of cost for copyright filtering, such as the one proposed in the UK’s Digital Britain report in January, is inherently flawed. Private sector deliberations over copyright filtering should use this as a starting point if they are going to arrive at a workable solution.
Civolution Acquires Watermarking Business from Thomson
July 22, 2009. Posted by Bill Rosenblatt in Technologies, Watermarking.
Civolution announced on Tuesday that it is acquiring the digital watermarking business from Thomson. Terms were undisclosed.
This move represents further consolidation in the watermarking market, following Dolby’s shutdown of its Cinea video watermarking division last year. Civolution itself spun out of Philips Electronics and acquired Teletrax, the video broadcast monitoring business that uses Civolution’s technology, late last year.
With this action, the only major players left in watermarking are Civolution and the Korean vendor MarkAny. Apart from those two, there are a few players in niche markets, such as Verimatrix (IPTV/digital pay TV), Verance (Blu-ray audio), and USA Video Interactive (Internet video delivery).
This development does not necessarily point to decline in the adoption of watermarking. First of all, Thomson’s watermarking business was known to be in disarray amid management changes. Thomson has had some recent success with its NexGuard technology for pre-release content protection (which combines encryption and watermarking), but it has been hard to get management’s attention alongside other Thomson product and service properties such as Grass Valley and Technicolor. Watermarking is more of an enabling technology, which should fit much better at Civolution.
More importantly, the success of watermarking requires standardization. As I noted last week, standardization in the “secret sauce” of watermarking algorithms is unlikely, and there have been several vendors, each with their own secret sauce. Consolidation is a market force that will promote de facto standardization. For example, Thomson and Philips/Civolution were the two suppliers of watermarking technology for digital cinema; with this deal, there is now only one supplier and thus a de facto standard.
Of course it remains to be seen whether Civolution will integrate its two watermarking technologies or leave them be. Integration is better for the market insofar as it is feasible.
RIAA Releases Watermarking Payload Specification
July 9, 2009. Posted by Bill Rosenblatt in Music, Standards, Watermarking.
The RIAA has released a specification for a watermark payload to be inserted into music files. It intends this specification to be a voluntary standard to be adopted by record labels, service providers, and other participants in the digital music value chain.
The spec calls for a payload of 108 bits of data, which falls within the limitations of most practical watermarking schemes. The bits are divided into three independent parts called layers. The first layer contains flags to indicate parental advisory, copyright status, and whether the content is pre-release.
The second layer contains identifiers for the content owner, the content itself, and the distribution channel (e.g., a retailer). These three identifiers, when put together, constitute a globally unique content ID 52 bits in length — enough (in theory) for several quadrillion different identifiers.
The third layer is currently undefined; the RIAA contemplates future use of Layer 3 for transaction IDs, such as for identifying individual downloads.
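A sketch of what packing such a payload might look like. Only the totals quoted above come from the spec as described here (108 bits overall, a 52-bit combined ID in Layer 2); the individual field widths below are illustrative guesses, and Layer 3 is treated as reserved.

```python
# (name, width in bits); layer boundaries marked in comments.
FIELDS = [
    ("parental_advisory", 1),  # Layer 1: flags
    ("copyright_status", 1),
    ("pre_release", 1),
    ("owner_id", 16),          # Layer 2: 16 + 24 + 12 = 52-bit unique ID
    ("content_id", 24),
    ("channel_id", 12),
    ("transaction_id", 53),    # Layer 3: currently undefined (reserved)
]
assert sum(w for _, w in FIELDS) == 108  # total payload size

def pack(values: dict) -> int:
    """Pack field values into a single 108-bit integer payload."""
    payload = 0
    for name, width in FIELDS:
        v = values.get(name, 0)
        assert v < (1 << width), f"{name} overflows {width} bits"
        payload = (payload << width) | v
    return payload

def unpack(payload: int) -> dict:
    """Recover field values from a packed payload."""
    out = {}
    for name, width in reversed(FIELDS):
        out[name] = payload & ((1 << width) - 1)
        payload >>= width
    return out
```

Note that 2**52 is about 4.5 quadrillion, consistent with the “several quadrillion” figure for the combined Layer 2 identifier.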
If watermarking is going to be adopted to scale in the media industry, then standards are very necessary. In broad terms, two things need to be standardized: insertion and detection algorithms, and data payloads. Standardizing on algorithms is unlikely. Several vendors have proprietary techniques which are their “secret sauce” and whose effectiveness depends on the application.
But payload standardization benefits all watermarking vendors. Payload standardization would facilitate communication and interoperability among the various entities that have to insert, detect, or share data, including content owners, aggregators, retailers, content delivery networks, software vendors, and even consumer device makers.
The RIAA’s proposed payload standard is useful for many different applications. It’s not a content protection scheme per se, as the failed SDMI Level I standard was ten years ago, though it does contain bits that could be read by consumer devices for the purpose of copyright enforcement.
The primary purpose of the standard is to help identify content. Content identification is not only application-neutral but can also enable various new types of applications for watermarking, such as contextual advertising, content marketing, digital asset management, and monetization of transformational content uses; see the white paper I wrote last year for more on this.
The release of this spec should be a great help in pushing the music industry towards adoption of watermarking for a variety of purposes. Its publication should also help to defuse concerns about privacy or hidden agendas — something that the Digital Watermarking Alliance, a trade association for watermarking technology vendors, has been trying to accomplish over the last couple of years.