Digimarc Acquires Attributor (December 4, 2012). Posted by Bill Rosenblatt in Fingerprinting, Images, Publishing, Watermarking.
Digimarc announced yesterday that it has acquired Attributor Corp. Attributor, based in Silicon Valley, is one of a handful of companies that crawl the Internet looking for instances of copyrighted material that may be infringing, using a pattern-recognition technology akin to fingerprinting. Digimarc is a leader in digital watermarking technology, with a large and significant portfolio of IP in the space. The acquisition price was a total of US $7.5 million in cash, stock, and contingent compensation.
This is a synergistic and strategically significant move for Digimarc. A few years ago, Digimarc had pruned its efforts to create products and services for digital media markets outside of still images. It had decided, in effect, to leave products and services to its IP licensees, companies such as Civolution of the Netherlands and MarkAny of South Korea. Attributor’s primary market is book publishing, with customers including four out of the “Big Six” trade book publishers as well as several leading educational and STM (scientific, technical, medical) publishers.
Digimarc intends to leverage Attributor’s relationships with book publishers to help it expand its watermarking technology into that market and to move into other markets such as magazine and financial publishing. The company cited the explosive growth in e-books as a reason for the acquisition.
Beyond that, Digimarc’s acquisition is another sign of the increasing importance of infringement monitoring services; the previous such sign came over the summer, when Thomson Reuters acquired MarkMonitor.
There are two reasons for this increase in importance. First is the rise of so-called progressive response legal regimes: copyright owners can monitor the Internet and submit data on alleged infringements to a legal authority, which sends users increasingly strong warning messages and, if they keep on infringing, potentially suspends their ISP accounts. The most advanced progressive response regime is HADOPI in France, early results from which are encouraging. The Copyright Alert System is supposedly gearing up for launch in the United States. A handful of other countries have progressive response in place or in process as well.
The second reason for the increasing importance of so-called piracy monitoring is that copyright owners are starting to realize the value of the data they generate, beyond catching infringers. Piracy is evidence of popularity of content — of demand for it. The data that these services generate can be valuable for analytics purposes, to see who is interested in the content and in what ways. Big Champagne, for example, has been supplying this type of data to the music industry for many years. Attributor has been working on a new service that integrates piracy data with social media analytics; Digimarc intends to integrate this into its own data offerings for the image market.
In fact, we’ll have a discussion on the value of piracy data tomorrow at Copyright and Technology NYC 2012. Leading the discussion will be Thomas Sehested of MarkMonitor. There’s little doubt he will be called upon to talk about his new competition.
Webinar on Studios’ Content Security Policies (April 24, 2012). Posted by Bill Rosenblatt in Conditional Access, DRM, Events, Video, Watermarking.
For those who couldn’t attend the breakfast event at the NAB trade show last week, I will be doing a webinar on Content Security Requirements for Multi-Screen Video Services, on Thursday April 26 at noon US east coast time/1700 GMT. I’ll be presenting a synopsis of the whitepaper I published last December on the topic. I will be joined by Petr Peterka, CTO of Verimatrix, sponsor of the webinar. Click here to register.
I have released a new white paper on content security requirements for video services that distribute content to multiple devices. This white paper discusses copyright owners’ requirements for security in today’s world of proliferating devices and delivery channels.
So-called managed networks (cable, satellite, and telco TV) are under increasing pressure to compete with “over the top” (OTT) video services that can run on any IP-based (unmanaged) network to a variety of devices — services like Netflix and Hulu. In the US, in fact, total subscriptions to OTT services are fast approaching those of cable, satellite, and telco TV.
Therefore pay-TV operators have to respond by making their content available on a similar variety of devices and even through unmanaged networks. While some major pay-TV providers like Comcast and Time Warner Cable are launching “TV Everywhere” services, many more pay-TV operators are trying to keep up by building their own service extensions onto mobile phones, tablets, and home devices other than traditional set-top boxes (STBs).
Content security is one of the many requirements that operators have to meet in order to license content from studios, TV networks, sports leagues, and other major content sources. Life for pay-TV operators used to be relatively simple: adopt a conditional access (CA) technology that was equally effective in thwarting signal theft as it was in thwarting content piracy. Economic and security goals were aligned between operators and copyright owners. Now life is considerably more complicated, as operators have to support home networks and branch out into mobile services. Content security requirements are more complicated as well.
This white paper gathers security requirements from major content owners and describes them in a single document. The intent is to help pay-TV operators and other video service providers that are looking to launch multi-screen video services, so that they know what to expect and avoid any unpleasant surprises with regard to security requirements when licensing content to offer through their services.
I spoke to representatives from most of the major Hollywood studios to get their requirements. Although it is not possible to build a gigantic table that an operator can use to look up DRM or conditional access requirements for any given delivery modality and client device — among other things, such a table would become obsolete very quickly — I was able to create a set of guidelines that should be useful for operators.
Content security guidelines do depend on certain factors, including release windows (how long after a film’s theatrical release or a TV show’s first airing), display quality, and the usage rules granted to users and their devices. In the white paper, I map these factors to certain specific content security requirements, such as roots of trust, watermarks, software hardening, and DRM robustness rules. Security guidelines also depend on external market factors that the white paper also describes.
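To make the idea of a factor-to-requirement mapping concrete, here is a toy sketch in Python. The factor names, thresholds, and requirement sets below are invented purely for illustration; they are not the guidelines from the white paper.

```python
# Illustrative only: the factors (days since release, resolution) and the
# requirement sets are hypothetical, not taken from the white paper.

def security_requirements(days_since_release: int, resolution: str) -> set:
    """Map two example licensing factors to an example set of requirements."""
    if days_since_release < 90 and resolution == "HD":
        # Early-window HD content: strongest hypothetical tier
        return {"hardware_root_of_trust", "forensic_watermark",
                "software_hardening"}
    if days_since_release < 90:
        return {"forensic_watermark", "software_hardening"}
    return {"standard_drm_robustness"}

print(security_requirements(30, "HD"))
```

The point of such a function is only that requirements vary with release window and display quality; a real deployment would also factor in usage rules and the external market conditions the white paper describes.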
The 28-page paper describes the current state of the art of techniques for protecting video content delivered over pay television networks such as cable and satellite. The two primary theses of the white paper are:
- Pay TV often leads in content protection innovation over other media types and delivery modalities. That is because, among other reasons, it is a fairly rare case where the economic interests of content owners and service providers are aligned: content owners don’t want their content used without authorization, and pay-TV operators don’t want their signals stolen. Therefore pay-TV operators have incentives to implement strong and innovative content security solutions.
- Until recently, many content security schemes could be described as hack-it-and-it’s-broken (such as CSS for DVDs) or as a cycle of hack-patch-hack-patch (such as AACS for Blu-ray or FairPlay for iTunes). Now technologies are available that break the hack-patch cycle, thereby decreasing long-term total cost of ownership (TCO) and complexity.
The white paper starts with a brief history of content protection technologies for digital pay TV, starting with the adoption of the Digital Video Broadcasting (DVB) standard in 1994. Then it describes various newer technologies, including building blocks like ECC (elliptic curve cryptography), flash memory, and secure silicon; and it describes new techniques such as individualization, renewability, diversity, and whitebox cryptography. It ties these techniques together into the concept of security lifecycle services, which include breach response and monitoring.
The final section of the paper discusses fingerprinting and watermarking as two techniques that complement encryption as ways of finding unauthorized content “in the wild.”
My thanks to Irdeto for sponsoring this paper.
The Early Release Window Experiment Continues (June 29, 2011). Posted by Niels Thorwirth in Video, Watermarking.
The early release window, which offers Hollywood content for home consumption while it is still showing at theaters, has been debated for many years – in fact, I wrote about an enabling FCC ruling about a year ago. But now the debate about its success is raging more than ever.
Adding fuel to the fire is the current price tag of US $30. At that price point, the discussion revolves around comparing an expensive VOD movie to movie theater tickets that cost, on average, less than $8. Cinema owners and movie directors have voiced their concerns about the shift in content consumption habits. After all, though, it is impossible to reliably predict consumer interest – otherwise every Hollywood title would be a blockbuster.
I think that it will be an interesting offer for, initially, a small percentage of consumers. And while the rate of the adoption is questionable, it’s obvious to me that movie theaters won’t disappear any time soon and that electronic distribution will continue to grow.
The participating studios have certainly conducted their own research, and evidently their hopes are high enough to shake up traditional models by supporting this offer.
But I see the most relevant indicator in recent discussions that I had with operators. They are evaluating this opportunity seriously and investing time and resources in the studios’ requirement that early-window content be digitally watermarked as well as encrypted. This may be because even a small uptake by consumers will translate into a relevant chunk of revenue.
One technically interesting point is where watermark embedding takes place: in the client device or in the video server before delivery. While a client-based approach has the advantage of distributed processing and requires no head-end integration, server-side watermarking requires no modification to client devices, and operators often prefer it for that reason.
The overall application is the same, yet the head-end component requires a very different technology approach. Manipulating raw video pixels is too slow once you account for the decode-and-re-encode cycle that complex compression schemes like H.264 would require. Instead, the server-side manipulation has to be applied in the compressed, and possibly encrypted, domain, on the fly as the content is delivered.
Efficiency is key, because the delivery infrastructure is all about serving the maximum number of parallel streams; any overhead that watermarking introduces must be small and fast to apply. This is a fundamental difference from previous watermarking schemes, which focused only on survivability (robustness). At the same time, with broad deployment expected across multiple head-end infrastructures, ease of integration is crucial to the adoption of digital watermarking.
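One commonly described way to meet these constraints (offered here as an illustrative sketch, not necessarily the specific technique any particular vendor uses) is A/B variant watermarking: each video segment is pre-encoded in two compressed, pre-marked variants, and the server embeds a per-session payload simply by choosing which variant of each segment to deliver. No pixel-level processing happens in the delivery path.

```python
# Sketch of A/B segment-variant watermarking at the server.
# Each segment exists in two pre-marked compressed variants ("A" and "B");
# the server picks one per segment according to the bits of a session or
# user payload, so embedding is just a lookup, not video processing.

def variant_sequence(payload: int, num_segments: int) -> list:
    """Return the per-segment variant choices that encode `payload`."""
    return ["B" if (payload >> i) & 1 else "A" for i in range(num_segments)]

def recover_payload(variants: list) -> int:
    """Invert variant_sequence: read the embedded bits back from a capture."""
    payload = 0
    for i, v in enumerate(variants):
        if v == "B":
            payload |= 1 << i
    return payload

seq = variant_sequence(0b1011, 8)
assert recover_payload(seq) == 0b1011
```

The design choice this illustrates is precisely the one the post describes: all the expensive work (encoding two marked variants) moves offline, so the per-stream cost at delivery time is negligible.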
This development will remain interesting because it’s an experiment on the technical front as well in business models, and I am sure there will be more progress to report in the future.
Public Knowledge White Paper Attacks Copyright Filtering (August 20, 2009). Posted by Bill Rosenblatt in Fingerprinting, Law, Watermarking.
The Washington-based advocacy organization Public Knowledge last month published Forcing the Net Through a Sieve: Why Copyright Filtering is Not a Viable Solution for U.S. ISPs. The white paper was a submission to the Federal Communications Commission in connection with its National Broadband Plan.
The paper covers many technical, policy, and legal reasons why it’s a bad idea to adopt various types of technologies to keep unauthorized copyrighted material off the Internet. Some of the considerations include inaccuracy in identifying actually infringing material (whether false positives or false negatives), hampering ISP network performance, infringing on fair use rights, and forcing ISPs to incur costs that may be passed on to consumers.
Countries outside of the US, such as France and Belgium, have been seriously considering legal mandates for filtering copyrighted material from ISPs’ networks. Some ISPs in the US, like AT&T, have been experimenting with filtering technologies — in AT&T’s case, Vobile’s video fingerprinting — while others, like Verizon, are against it.
Unfortunately, this white paper contains some errors and mischaracterizations that dampen its value in influencing regulations. The most serious of these occur in the sections on identifying content.
The paper discusses the difficulty of identifying a piece of content through metadata, such as ID3 tags commonly used in digital music. This is true as far as it goes. But it makes no mention of content ID standards that are gaining traction in various media industry segments, such as ISRC in music, ISAN in video, and DOI in various publishing industry segments. The use of content IDs, especially in watermarks, would greatly improve the efficiency and accuracy of content identification over other schemes.
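To illustrate how lightweight such identifier checks can be, here is a minimal sketch that validates the syntactic form of an ISRC: a 2-letter country code, a 3-character alphanumeric registrant code, a 2-digit year of reference, and a 5-digit designation code (hyphens are display formatting only). The example codes below are made up.

```python
import re

# Syntactic check of an ISRC (International Standard Recording Code):
# 12 characters total after stripping display hyphens.
ISRC_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{3}[0-9]{2}[0-9]{5}$")

def is_valid_isrc(code: str) -> bool:
    """Return True if `code` has the syntactic shape of an ISRC."""
    return bool(ISRC_RE.match(code.replace("-", "").upper()))

assert is_valid_isrc("US-RC1-76-00007")   # hyphenated display form
assert not is_valid_isrc("12345")
```

A check like this says nothing about whether a code is registered, of course; the point is only that standardized IDs make identification a cheap, deterministic operation compared with content-recognition heuristics.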
The authors also mischaracterize the state of watermarking. They say that watermarks can be removed from content, leading to a pointless “arms race” between hackers and developers of watermarking technology. To support this, they point to research done by Ed Felten and his Princeton team in 2001 in connection with the failed SDMI watermark. Not only is this research ancient history with regard to watermarking techniques used today, but it is also off-target: the SDMI watermark was intended for a different purpose and thus was designed differently from watermarks used to identify content for forensic purposes. Such watermarks can be designed so that removing them leaves content that is perceptually degraded.
Finally, the authors claim that watermark detection won’t do anything to filter content from CDs, DVDs, or camcorded movies. This is not true. These can be and are watermarked as well; and the watermarks are designed to withstand transformations such as digital-analog-digital conversion.
There are other more general, almost “rhetorical” devices used in this paper that I would call questionable. One is the persistent use of the term “downloading” to describe what an ISP must do in order to find content to filter. The report accurately describes deep packet inspection, but this need not involve “downloading,” a term that implies an operation that takes time and is a departure from the usual process of routing Internet traffic. In fact, routers already examine Internet traffic for malware and various other types of content; they do this on the fly without “downloading.” Technology companies such as Zeitera are working on hardware-based fingerprinting technology that would work similarly for content identification that could be used for filtering.
Another such rhetorical device is use of the term “underinclusive,” meaning technologies that let infringing content through instead of blocking it (i.e., false negatives) — as opposed to “overinclusive,” meaning false positives. Content owners who favor filtering technologies are not necessarily looking to eliminate false negatives. This is reminiscent of the copyleft canard that antipiracy technologies are worthless because they aren’t perfect.
Finally, the white paper makes various connections between copyright filtering and net neutrality that are conspiracy-theoretical stretches. One example is the discussion about using filtering to slow down or speed up traffic through networks. I am not aware of any copyright filtering discussion that encompasses bandwidth throttling.
There are indeed serious concerns about copyright filtering, many of which this white paper raises effectively. Network efficiency and false positives that abridge fair use rights are the two big ones. Some of the technologies that this white paper claims are being considered for copyright filtering are just bad ideas, such as traffic pattern analysis, architecture-based filtering (e.g. P2P), and protocol-based filtering (e.g. BitTorrent). But an exposition of the negative aspects of this type of technology should at least lay out the arguments without resorting to trial-lawyer-esque rhetorical devices and factual gaps.
I’m also skeptical of any legally mandated technological scheme for controlling copyright. Ultimately, assuming that the technology can be made to work adequately, the use of copyright filters ought to be a matter of economics and private sector deliberation — something that the movie and user-generated-content industries have already attempted. Public Knowledge’s white paper does address the most important economic principle, namely the question of who pays for the technology. Any scheme that saddles consumers with the burden of cost for copyright filtering, such as the one proposed in the UK’s Digital Britain report in January, is inherently flawed. Private sector deliberations over copyright filtering should use this as a starting point if they are going to arrive at a workable solution.
Civolution Acquires Watermarking Business from Thomson (July 22, 2009). Posted by Bill Rosenblatt in Technologies, Watermarking.
Civolution announced on Tuesday that it is acquiring the digital watermarking business from Thomson. Terms were undisclosed.
This move represents further consolidation in the watermarking market, following Dolby’s shutdown of its Cinea video watermarking division last year. Civolution itself spun out of Philips Electronics and acquired Teletrax, the video broadcast monitoring business that uses Civolution’s technology, late last year.
With this action, the only major players left in watermarking are Civolution and the Korean vendor MarkAny. Apart from those two, there are a few players in niche markets, such as Verimatrix (IPTV/digital pay TV), Verance (Blu-ray audio), and USA Video Interactive (Internet video delivery).
This development does not necessarily point to decline in the adoption of watermarking. First of all, Thomson’s watermarking business was known to be in disarray amid management changes. Thomson has had some recent success with its NexGuard technology for pre-release content protection (which combines encryption and watermarking), but it has been hard to get management’s attention alongside other Thomson product and service properties such as Grass Valley and Technicolor. Watermarking is more of an enabling technology, which should fit much better at Civolution.
More importantly, the success of watermarking requires standardization. As I noted last week, standardization in the “secret sauce” of watermarking algorithms is unlikely, and there have been several vendors, each with their own secret sauce. Consolidation is a market force that will promote de facto standardization. For example, Thomson and Philips/Civolution were the two suppliers of watermarking technology for digital cinema; with this deal, there is now only one supplier and thus a de facto standard.
Of course it remains to be seen whether Civolution will integrate its two watermarking technologies or leave them be. Integration is better for the market insofar as it is feasible.
RIAA Releases Watermarking Payload Specification (July 9, 2009). Posted by Bill Rosenblatt in Music, Standards, Watermarking.
The RIAA has released a specification for a watermark payload to be inserted into music files. It intends this specification to be a voluntary standard to be adopted by record labels, service providers, and other participants in the digital music value chain.
The spec calls for a payload of 108 bits of data, which falls within the limitations of most practical watermarking schemes. The bits are divided into three independent parts called layers. The first layer contains flags to indicate parental advisory, copyright status, and whether the content is pre-release.
The second layer contains identifiers for the content owner, the content itself, and the distribution channel (e.g., a retailer). These three identifiers, when put together, constitute a globally unique ID for content 52 bits in length — allowing (in theory) for several quadrillion different identifiers.
The third layer is currently undefined; the RIAA contemplates future use of Layer 3 for transaction IDs, such as for identifying individual downloads.
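As a rough sketch of how the Layer 2 global ID could be packed: the 52-bit total comes from the spec as described above, but the split into 16-bit owner, 24-bit content, and 12-bit channel fields below is an assumption made for illustration, not the spec’s actual layout. (52 bits yields about 4.5 quadrillion values, consistent with the “several quadrillion” figure.)

```python
# Hypothetical field widths; only the 52-bit total is from the spec.
OWNER_BITS, CONTENT_BITS, CHANNEL_BITS = 16, 24, 12  # sums to 52

def pack_layer2(owner: int, content: int, channel: int) -> int:
    """Combine the three identifiers into one 52-bit global content ID."""
    for value, width in ((owner, OWNER_BITS), (content, CONTENT_BITS),
                         (channel, CHANNEL_BITS)):
        if not 0 <= value < (1 << width):
            raise ValueError("field out of range")
    return ((owner << (CONTENT_BITS + CHANNEL_BITS))
            | (content << CHANNEL_BITS) | channel)

def unpack_layer2(gid: int) -> tuple:
    """Split a 52-bit global ID back into (owner, content, channel)."""
    channel = gid & ((1 << CHANNEL_BITS) - 1)
    content = (gid >> CHANNEL_BITS) & ((1 << CONTENT_BITS) - 1)
    owner = gid >> (CONTENT_BITS + CHANNEL_BITS)
    return owner, content, channel

assert unpack_layer2(pack_layer2(7, 1000, 42)) == (7, 1000, 42)
```

Whatever the real field widths are, the benefit of a fixed layout like this is that any detector in the value chain can extract and route the three identifiers without coordination with the embedder.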
If watermarking is going to be adopted to scale in the media industry, then standards are very necessary. In broad terms, two things need to be standardized: insertion and detection algorithms, and data payloads. Standardizing on algorithms is unlikely. Several vendors have proprietary techniques which are their “secret sauce” and whose effectiveness depends on the application.
But payload standardization benefits all watermarking vendors. Payload standardization would facilitate communication and interoperability among the various entities that have to insert, detect, or share data, including content owners, aggregators, retailers, content delivery networks, software vendors, and even consumer device makers.
The RIAA’s proposed payload standard is useful for many different applications. It’s not a content protection scheme per se, as the failed SDMI Level I standard was ten years ago, though it does contain bits that could be read by consumer devices for the purpose of copyright enforcement.
The primary purpose of the standard is to help identify content. Content identification is not only application-neutral but can also enable various new types of applications for watermarking, such as contextual advertising, content marketing, digital asset management, and monetization of transformational content uses; see the white paper I wrote last year for more on this.
The release of this spec should be a great help in pushing the music industry towards adoption of watermarking for a variety of purposes. Its publication should also help to defuse concerns about privacy or hidden agendas — something that the Digital Watermarking Alliance, a trade association for watermarking technology vendors, has been trying to accomplish over the last couple of years.
Universal Music Group and Virgin Media announced a new music service yesterday, which will offer unlimited DRM-free MP3 downloads for a monthly subscription fee. Virgin Media has also agreed to implement a “progressive response” model of infringement enforcement, issuing warnings for alleged illegal downloads and suspensions of users’ ISP accounts for repeat offenders.
This UMG/Virgin deal is somewhat similar to what eMusic.com offers in the US, except that the Virgin/UMG plan will offer unlimited monthly downloads for the subscription fee. EMusic’s top-level monthly plan offers 50 downloads per month from its catalog of indie-label music for US $20.79. If pricing of the UMG/Virgin service is in the same ballpark, it will be far higher than any of the monthly “flat tax” levies that have been mentioned, especially the one being proposed in the Isle of Man.
The motivation for this arrangement between UMG and Virgin Media is clear: to forestall government regulation. The UK government had been threatening to intervene among content industry and ISP interests if they could not work out their own solution to online copyright infringement. The government — specifically, the departments for Culture, Media and Sport and for Business, Innovation and Skills — had been preparing the Digital Britain report, which covers many areas of national broadband adoption; the report was released just today, one day after UMG and Virgin made their announcement.
Sure enough, the Digital Britain report recommends that ISPs be required to monitor their networks for illegal sharing of copyrighted files, to issue warnings to users, and to impose bandwidth limitation or protocol blocking measures on those alleged to be repeat offenders. The recommendations stop short of outright ISP account suspension or termination, which France attempted before that provision of its law was found unconstitutional.
In other words, while it is quite possible that other British ISPs and content owners will offer deals similar to the UMG/Virgin arrangement, this deal is a reaction to UK-specific regulatory initiatives and therefore may not necessarily spread to other countries.
In the press release, UMG and Virgin Media state that “the process [of catching alleged pirates] will not depend on network monitoring or interception of customer traffic by Virgin Media.” This statement is misleading. It’s impossible to identify downloaders of copyrighted works without such monitoring… by someone. It turns out, as CNet News.com found, that the service will indeed use such monitoring, from Copenhagen-based DtecNet, an antipiracy services firm comparable to MediaSentry or MediaDefender in the US. The statement is not a lie, though: DtecNet is employed by UMG, not Virgin Media.
Addendum: if DtecNet is going to monitor P2P networks for the presence of files that originated from this UMG service through Virgin Media, then the service will have to use watermarking. Files that users download through the service will need to be embedded with watermarks that, at a minimum, identify Virgin Media as the files’ source.
As for DtecNet’s ability to trace these files to a specific user, the company has an arsenal of techniques for tracing packets to specific IP addresses and so on; but the accuracy of this system would be much improved if Virgin Media used transactional watermarking: that is, if it embedded a reference to the identity of the downloading user into each file. This would sharply reduce the possibility of false positives when accusing Virgin Media users of unauthorized file use outside the network. Could this finally be the music industry’s big transactional watermarking pilot initiative?
France Weakens Progressive Response Anti-Piracy Law (June 11, 2009). Posted by Bill Rosenblatt in Europe, Fingerprinting, Law, Watermarking.
France’s Constitutional Council (Conseil Constitutionnel) struck down key provisions of the Creation and Internet Law (Loi favorisant la diffusion et la protection de la création sur internet) yesterday. The law sets up an independent body to find cases of unauthorized Internet content use and to issue warnings to users followed by suspensions of their Internet accounts. The Conseil Constitutionnel found (press release in French) that the independent body can only issue warnings, not suspend accounts.
Specifically, the Conseil found articles 5 and 11 of the law unconstitutional. Article 5 calls for the creation of the independent body, known as HADOPI (Haute Autorité pour la diffusion des oeuvres et la protection des droits sur Internet), a name given to some earlier versions of the legislation. Article 11 provides surveillance rights to the body. The Conseil found that the articles violated constitutional principles of free expression and presumed innocence, and that only a judge can take an action such as suspending a consumer’s Internet account.
This law was passed by French parliament last month after having first failed in April. It would have been in the vanguard of so-called progressive response, a/k/a “three strikes” laws against unauthorized use of content online, which several countries are currently considering. To be effective, the law requires the use of content identification technologies — such as fingerprinting or watermarking — at the network service provider (ISP) level in order to find alleged infringers.
French Culture Minister Christine Albanel, one of the primary forces behind this law (along with French President Nicolas Sarkozy), was scheduled to speak about it just two days ago at the World Copyright Summit in Washington; she had to stay in France and her speech was read by someone else instead. This is a setback for her and for other political conservatives in France, while organizations such as UFC-Que Choisir, the French equivalent of Consumers Union in the US, praised the decision.
Albanel intends to go ahead with plans to set up HADOPI and issue warnings to users, although requests to suspend users’ accounts must be sent to judges.
Regardless of the constitutionality of the law, Albanel’s decision to implement HADOPI should prove to be a very interesting test for content identification technology — how accurate it is, how many false positives it generates, how easy it is to fool or circumvent, and its effects on network efficiency. Better understanding of these factors will lead to better assessments of the value of these technologies to the law as well as the market.