New White Paper and NAB Workshop: Strategies for Secure OTT Video in a Multiscreen World (March 22, 2015). Posted by Bill Rosenblatt in DRM, Events, Standards, Technologies, White Papers.
I have just released a new white paper called Strategies for Secure OTT Video in a Multiscreen World. The paper covers the emerging world of multi-platform over-the-top (OTT) video applications and how to manage their development for maximum flexibility and cost containment in today’s world of constantly proliferating devices and user expectations of “any time, any device, anywhere.” It’s available for download here.
The key technologies that the white paper focuses on are adaptive bitrate streaming (MPEG DASH as an emerging standard) and the integration of Hollywood-approved DRM schemes with HTML5 through Common Encryption (CENC) and Encrypted Media Extensions (EME).
It is becoming possible to integrate DRM with browser-based apps in a way that minimizes native code, without resorting to plug-in schemes like Microsoft’s Silverlight. Yet the HTML5 EME specification creates dependencies between browsers and DRMs, so that, at least in the near future, it will in many cases only be possible to integrate a DRM with a browser from the same vendor: for example, Google’s Widevine DRM with the Chrome browser, or Microsoft PlayReady with Internet Explorer. So while the future points to consolidation around web app technologies and adaptive bitrate streaming, the DRM and browser markets will continue to be fragmented; to be able to offer premium movie and TV content, service providers will need to support multiple DRMs for the foreseeable future.
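The same-vendor pairings just described can be pictured as a simple lookup, the kind of logic a multi-DRM video app needs in order to decide which EME “key system” to request. This is a hypothetical illustration (the function and dictionary are mine, not any vendor’s API); the two key-system identifier strings are the real EME identifiers for Widevine and PlayReady:

```python
# Hypothetical sketch: choose a DRM (an EME "key system") based on the
# browser, reflecting the same-vendor pairings described above.
BROWSER_TO_KEY_SYSTEM = {
    "chrome": "com.widevine.alpha",                  # Google Widevine
    "internet explorer": "com.microsoft.playready",  # Microsoft PlayReady
}

def choose_key_system(browser: str) -> str:
    """Return the EME key-system identifier to request for a given browser."""
    key_system = BROWSER_TO_KEY_SYSTEM.get(browser.lower())
    if key_system is None:
        raise ValueError(f"no known DRM pairing for browser: {browser}")
    return key_system
```

In an actual browser app, the equivalent decision is made via EME’s navigator.requestMediaKeySystemAccess(), probing each candidate key system until one succeeds.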
The white paper lays out a high-level solution architecture that OTT service providers can use to take as much advantage as possible of current and emerging standards while isolating and minimizing the sources of technical complexity that are likely to persist for a while. It calls for standardizing on adaptive bitrate streaming and app development technologies while allowing for and containing the complexities around browsers and DRMs.
Many thanks to Irdeto for commissioning this white paper. In addition, Irdeto and I will present a workshop at the NAB trade show on Tuesday, April 14 at 1pm, at The Wynn in Las Vegas. I’ll give a presentation that summarizes the white paper; then I’ll moderate a panel discussion with the following distinguished guests:
- Dave Belt, Principal Architect, Time Warner Cable
- Jean Choquette, Director of Multiplatform Video on Demand, Videotron
- Shawn Michels, Senior Product Manager, Akamai
- Richard Frankland, VP Americas, Irdeto
This session will include ample opportunities for Q&A and sharing of experiences and best practices, as well as a catered lunch and opportunities to network with your peers and colleagues. Attendance at this event is strictly limited and by invitation only, to ensure the richest possible interaction among participants. If you are interested in attending, please email Katherine.Walsh@irdeto.com by April 7th. Irdeto will even give you a ride from the Las Vegas Convention Center and back if you wish.
The U.S. Patent and Trademark Office (USPTO) is holding a public meeting on Wednesday, April 1 to gather input on how the U.S. Government can facilitate the development and use of standard content identifiers as part of the process of creating an automated licensing hub, along the lines of the Copyright Hub in the UK.
This meeting is the second one that the USPTO has held since it and the National Telecommunications and Information Administration (NTIA) published the “Green Paper” on Copyright Policy, Creativity, and Innovation in the Digital Economy in July 2013. The first meeting, in December 2013, addressed several other topics as well as this one.
(For those of you who are wondering why the USPTO is dealing with copyright issues: the USPTO is the adviser on all intellectual property issues, including copyright, to the Executive Branch of government, i.e., the president and his cabinet. The U.S. Copyright Office performs an analogous function for the Legislative Branch, i.e., Congress.)
The April 1 meeting will focus tightly on issues of standard identifiers for content — which ones exist today, how they are used, how they are relevant to automation of rights licensing, and so on. It will also focus on specifics of the UK Copyright Hub and the feasibility of building a similar one here in the States.
As usual for such gatherings, all are welcome to attend, the meeting will be live-streamed, and a transcript will be available afterwards. It’s just unfortunate that notice of the meeting was only published in the Federal Register last Friday, less than three weeks before the meeting date. I was asked to suggest panelists on the subjects of content identifiers and content identification technology (such as fingerprinting and watermarking). There are several experts on these topics who would undoubtedly add much value to such discussions, but many of them — located in places from LA to the UK — would be unable to travel to Washington, DC on such short notice and possibly on their own nickel. It would be nice to get input on this very timely topic from more than just the “usual suspects” inside the Beltway.
Establishment of reliable, reasonably complete online databases of rights holder information is of vital importance for making licensing easier in an increasingly complex digital age, and it’s encouraging to see the government take an active role in determining how best to get it done and looking at working systems in other countries that are further ahead in the process. That’s why it’s especially crucial to get as much expert input as possible at this stage.
Perhaps the USPTO can do what it did for the December 2013 meeting: reschedule it for several weeks later. If you are interested in participating but can’t do so at such short notice (as is the case with me), then you might want to communicate this to the meeting organizers at the PTO. Otherwise, the usual practice is to invite post-meeting comments in writing.
Announcing Copyright and Technology London 2015 (March 13, 2015). Posted by Bill Rosenblatt in Europe, Events, UK.
For our fourth Copyright and Technology conference in London, we will be moving up from October to June — Thursday June 18, to be precise. The venue will be the same as in previous years: the offices of ReedSmith in the City of London, featuring floor-to-ceiling 360-degree views of greater London. Once again I’ll be working with Music Ally to produce the event.
At this point, we are soliciting ideas for panels and keynote speakers. What are the developments in copyright law for the digital age in the UK, Europe, and the rest of the world that would make for great discussion? Who are the most influential people in copyright today whom you would like to see as keynote speakers — or are you one of these yourself?
We’re considering various possible topics, including these:
- Implications of the “Blurred Lines” decision on copyright in the age of sampling and remix culture
- The use of digital watermarking throughout the media value chain
- Progress of the UK Copyright Hub, Linked Content Coalition, and other initiatives for centralizing copyright information online
- Content protection technologies for browser-based over-the-top streaming video
- Progress of graduated response schemes in France, UK, Ireland, and elsewhere
Please send me your ideas. It’s your chance to tell us what you want to hear about and what you’d be interested in speaking on! We intend to publish an agenda by the end of this month.
Forbes: The Myth of Cord Cutting (February 8, 2015). Posted by Bill Rosenblatt in Business models, Uncategorized, United States, Video.
In my latest piece in Forbes, I examine the idea of “cord cutting” in light of recent announcements from Viacom, Time Warner, and DISH Network of over-the-top (OTT) streaming video services that enable people in the US to watch pay TV channels without a pay TV subscription. Cord cutting means cancelling one’s subscription to cable or satellite TV and just getting TV programming over the Internet (or broadcast).
My research turned up two findings that were surprising (at least to me) and support a conclusion that cord cutting is mostly a myth. The first finding is that most people are unlikely to save money on programming if they pay for the increasing number of subscription OTT video services at their expected monthly prices. The second is that most American broadband subscribers get their TV and Internet services from the same company, and there isn’t really such a thing as a broadband Internet company that doesn’t also provide TV; therefore “cord cutting” in most cases really means “calling your cable or phone company and changing to a cheaper service plan.” I also conclude that, economically, cord cutting is a wash for everyone involved, particularly if the FCC is unsuccessful in its new attempt to pass meaningful net neutrality regulations.
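The first finding is essentially arithmetic: stacking several subscription OTT services can approach or exceed the cost of a basic pay-TV package. A toy comparison makes the point (service names and all prices below are illustrative assumptions, not figures from the Forbes piece):

```python
# Back-of-the-envelope comparison: hypothetical monthly prices for a
# stack of OTT subscriptions vs. an assumed basic pay-TV tier.
ott_services = {
    "premium_channel_ott": 15.00,
    "network_ott": 6.00,
    "skinny_bundle_ott": 20.00,
    "svod_service": 9.00,
}
monthly_ott_total = sum(ott_services.values())  # 50.00
pay_tv_bundle = 50.00                           # assumed basic cable tier
savings = pay_tv_bundle - monthly_ott_total     # 0.00: a wash
```

With numbers like these, “cutting the cord” saves nothing at all, which is the economic wash the piece describes.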
As always, I eagerly welcome your feedback.
I’ve just published another piece in Forbes in my series on the emerging market for “high-res” audio, reflecting the recent surge in activity in this space as both the major record labels and consumer electronics companies see opportunity in expanding the market for high-quality digital audio beyond the audiophile niche. This piece is about new codec technologies — an area that hasn’t seen much innovation since a decade ago. As always, your feedback is most welcome.
As a postscript to that piece, it continues to amaze me — in a positive way — that vinyl is making such a comeback. Our favorite indie music store in Western Massachusetts recently got rid of all of its CDs and is now selling vinyl exclusively. Even Barnes & Noble is now selling a small, mostly highbrow selection of vinyl LPs. Most amazing of all? They’re flying off the shelves at an eye-opening $22 apiece. And everyone used to complain about the $16 CD — which didn’t scratch, took up less space, was easier to play, etc., etc.
Flickr’s Wall Art Program Exposes Weaknesses in Licensing Automation (December 7, 2014). Posted by Bill Rosenblatt in Images, Rights Licensing, Standards.
Suppose you’re a musician. You put your songs up on SoundCloud to get exposure for them. Later you find out that SoundCloud has started a program for selling your music as high-quality CDs and giving you none of the proceeds. Or suppose you’re a writer who put your serialized novel up on WattPad; then you find out that WattPad has started selling it in a coffee-table-worthy hardcover edition and not sharing revenue with you. The odds are that in either case you would not be thrilled.
Yet those are rough equivalents of what Flickr, the Yahoo-owned photo-sharing site, has been doing with its Flickr Wall Art program. Flickr Wall Art started out, back in October, as a way for users to order professional-quality hangable prints of their own photos, in the same way that a site like Zazzle lets users make t-shirts or coffee mugs with their images on them (or Lulu publishes printed books).
More recently, Flickr expanded the Wall Art program to let users order framed prints of any of tens of millions of images that users uploaded to the site. This has raised the ire of some of the professional photographers who post their images on Flickr for the same reason that musicians post music on SoundCloud and similar sites: to expose their art to the public.
The core issue here is the license terms under which users upload their images to Flickr. Like SoundCloud and WattPad, Flickr offers users the option of selecting a Creative Commons license for their work when they upload it. Many Flickr users do this in order to encourage other users to share their images and thereby increase their exposure — so that, perhaps, some magazine editor or advertising art director will see their work and pay them for it.
The fact that a hosting website might exploit a Creative Commons-licensed work for its own commercial gain doesn’t sit right with many content creators who have operated under two assumptions that, as Flickr has shown, are naive. One is that these big Internet sites just want to get users to contribute content in order to build their audience and that they will make money some other way, such as through premium memberships or advertising. The other is that Creative Commons licenses are some sort of magic bullet that help artists get exposure for their work while preventing unfair commercial exploitation of it.
Let’s get one thing out of the way: as others have pointed out, what Flickr is doing is perfectly legal. It takes advantage of the fact that many users upload photos to the site under Creative Commons licenses that allow others to exploit them commercially — which three out of the six Creative Commons license options do. It seems that many photographers choose one of those licenses when they upload their work and don’t think too much about the consequences.
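Which licenses those are follows mechanically from Creative Commons’ license elements: the six standard licenses combine attribution (BY) with optional NonCommercial (NC), NoDerivatives (ND), and ShareAlike (SA) elements, and it is the NC element alone that forbids commercial exploitation. A minimal sketch of that rule (illustrative code, not part of any CC tooling):

```python
# The six standard Creative Commons licenses; only the "NC"
# (NonCommercial) element forbids commercial exploitation.
CC_LICENSES = ["BY", "BY-SA", "BY-ND", "BY-NC", "BY-NC-SA", "BY-NC-ND"]

def allows_commercial_use(license_code: str) -> bool:
    """True if the license has no NonCommercial element."""
    return "NC" not in license_code.split("-")

# The three licenses a program like Wall Art can rely on:
commercial_ok = [lic for lic in CC_LICENSES if allows_commercial_use(lic)]
```

Running this yields BY, BY-SA, and BY-ND: the three of six licenses under which Flickr may legally sell prints.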
Flickr does allow users to change their images’ license terms at any time, and more recently it expanded the Wall Art program to enable photographers to get 51% of revenue from their images if they choose licenses that allow commercial use. But currently that option is limited to those few photographers whom Flickr has invited into its commercial licensing program, Flickr Marketplace, which it launched this past July. Flickr Marketplace is intended to be an attractive source of high-quality images for the likes of The New York Times and Reuters, and thus is curated by editors.
Some copyleft folks have circled their wagons around Flickr, maintaining that it shows yet again why content creators should not expect copyright to help them keep control of what happens to their work on the Internet. But that’s a perversion of what’s going on here.
Flickr is — still, after ten years of existence — a major outlet for photos online. As such, Flickr has the means to control, to some extent, what happens to the images posted on its service; and with Flickr Marketplace, it is effectively wresting some control of commercial licensing opportunities away from photographers. Some degree of control over content distribution and use does exist on the Internet, even if copyright law itself doesn’t contribute directly to that control. The controllers are the entities that Jaron Lanier has called “lords of the cloud” — one of which is Yahoo.
This doesn’t mean that Flickr is particularly outrageous or evil — although it’s at least ironic that while these major content hosting services claim to help content creators through exposure and sharing, Flickr is now making money from objects that are not very shareable at all. (In fact, what Flickr is doing is not unusual for a mature technology business facing stiff competition from new upstarts on the low end of the market — Instagram and Snapchat in this case: it is migrating to the premium/professional end of the market, where prices and margins are higher but volume is lower.)
The problem here is the lack of both flexibility and infrastructure for commercial licensing in the Creative Commons ecosystem. Creative Commons is a clever and highly successful way of bringing some degree of badly-needed rationalization and automation to the abstruse world of content licensing. But it gives creators hardly any options for commercial exploitation of their works.
Several years ago, Creative Commons flirted with the idea of extending their licenses to cover terms of commercial use (among other things) by launching a scheme called CC+. A handful of startups emerged that used CC+ to enable commercial licensing of content on blogs and so on — companies that, interestingly enough, came from across the ideological spectrum of copyright. One was Ozmo from the Copyright Clearance Center, which helped with the design of CC+; another, RightsAgent, was started by the then Executive Director of the Berkman Center for Internet and Society at Harvard Law School. Yet none of these succeeded, and it didn’t help that the Creative Commons organization’s heart wasn’t really in CC+ in the first place.
But the picture changes — no pun intended — when big content hosting sites start to monetize user-generated content directly instead of merely using it as a draw for advertising and paid online storage. Ideas for automated licensing of online content have been kicking around since long before Flickr or CC+ (here’s one example). Licensing automation mechanisms that can be adopted by big Internet services and individual creators alike for consumer-facing business models are needed now more than ever.
Forbes – Going Hi-Fi To Compete With Spotify (And Google And Apple) (December 1, 2014). Posted by Bill Rosenblatt in Music, Services.
My latest piece in Forbes is about the new breed of subscription music services that offer lossless compression, in order to appeal to the audiophile crowd. Two of them recently launched in the U.S. market: Tidal and Deezer Elite. I speculate about whether this development will finally lead to an era where top audio quality has finally caught up to low cost and convenience. As always, I welcome your feedback.
My New Forbes Blog (November 17, 2014). Posted by Bill Rosenblatt in Uncategorized.
I’m pleased to announce that I have been invited to join Forbes, a leading American business publication, as one of its Media & Entertainment blog contributors. I will be publishing pieces there that are of broader interest than the ones I publish here about developments in rights technologies, copyright law, and so on. I’ll write short teasers here for the Forbes pieces I publish, so that those of you who are on this blog’s distribution list can be alerted if you aren’t regular Forbes readers.
It’s been an ambition of mine for some time to write articles for a more general tech/media business audience. I’ve done it occasionally in places such as PaidContent, GigaOM, and Slate, but this will enable me to write as frequently as I can think of things to write about — and to take advantage of Forbes’s prodigious reach and state-of-the-art toolset.
My first Forbes piece is an adaptation of a previous article I wrote here about Apple’s attempt to cut the price of on-demand digital music services in half, from $10 to $5 per month, and the disruptive implications of that change — if it happens — for the digital music landscape. I hope you enjoy it, and I welcome any feedback you may have.
Why Does Apple Want to Halve the Price of On-Demand Music? (October 26, 2014). Posted by Bill Rosenblatt in Business models, Music, Services, United States.
Apple is asking record labels to agree to a $5/month subscription price for its Beats Music on-demand service, instead of the going rate of $10/month that it and others (Spotify, Rhapsody, etc.) charge in the US market. This development started as a rumor a few weeks ago, then rose to specific evidence of record label conversations confirmed by musician and artists’ rights champion David Lowery at the recent Common Ground intellectual property conference at George Mason University near Washington DC. As of this past Friday, the evidence became strong enough for the Wall Street Journal to treat it as fact.
Re/code also reports that despite the major labels’ apparently cool reception to the new pricing, Spotify is already responding by offering a family plan in which additional family members can add their own subscriptions to a $10/month plan for $5/month. (Beats Music has been offering discounted family plans through AT&T wireless accounts for a while.) As Re/code reports, one reason that Apple has given for the change to $5/month is that it has found that its best iTunes customers spend about $60/year on the service. Given that music download revenue has begun to drop rapidly, Apple apparently believes that it can entice iTunes users to an all-you-can-eat subscription service at the same spending level, instead of losing those users to free music services (or illegal downloads). In other words, $5/month subscriptions are being offered to labels as a way to shore up revenues at $60 ARPU (annual revenue per user) from people who actually still pay for music.
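Apple’s reported logic reduces to simple arithmetic: $60 per year works out to exactly $5 per month, i.e., half the going rate. As a sketch:

```python
# Apple's reported reasoning: its best iTunes customers spend ~$60/year,
# which maps exactly onto a $5/month subscription.
annual_spend = 60.00                      # reported iTunes spend per year
going_rate = 10.00                        # standard on-demand price, $/month
proposed_price = annual_spend / 12        # 5.00
discount = 1 - proposed_price / going_rate  # 0.5, i.e., half price
```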
This reasoning is clearly designed to appeal to record labels, which are known to be unhappy about the accelerating decline in purchases. But is it Apple’s real motivation for halving the price of on-demand subscriptions? I don’t think so.
The first thing to understand about on-demand music services is that despite all the talk about monthly subscription fees, the vast majority of users do not pay for them. Research from Edison Research and Triton Digital has determined that the use of YouTube as a de facto on-demand music streaming service draws a US audience four times the size of all other on-demand services combined – including Spotify (paid and free). Put another way, only about 8% of US users of on-demand music services actually pay for them. Spotify’s percentage of paying US users has stabilized at 25% — which I am proud to say readers of this blog predicted three years ago — while Google Play, Rhapsody, Rdio, and Beats Music do not offer free tiers for on-demand music.
On-demand music use is growing rapidly, but Apple only has a tiny piece of the market. Beats Music has merely a few hundred thousand users, compared to the estimated 60 million who use YouTube as an on-demand music service and Spotify’s 12 million total US users. Even when one counts only paying users, Beats Music still accounts for well below 10% of the market.
Apple clearly must do something dramatic to become a serious contender. Integrating Beats Music into iTunes (and thereby marketing it heavily to the enormous iTunes audience) by itself isn’t going to expand the market enough to be meaningful to Apple. And even if Apple thinks it can increase the paying user base disproportionately by halving the price, that’s not much of an increase in audience size — especially since the vast majority of the on-demand audience already gets it for free.
No, my view is that Apple’s primary purpose in halving the price is to throw the on-demand market into disarray. Services like Spotify and Rhapsody have been operating their businesses based on the expectation of $10/month revenue for years. Obviously, if Apple comes out with a rebranded Beats Music (iTunes On Demand, iTunes Beats, iTunes Unlimited, iTunes Jukebox, or whatever they end up calling it) at $5/month, all of the other on-demand services will have to offer the same price. Spotify, Rhapsody, and Rdio would find themselves with unsustainable financial structures and/or the necessity of renegotiating their record label deals. The best that any of these “pure play” services could hope for is to become acquisition bait for companies that are big and diverse enough to be able to cross-subsidize them (Yahoo and AOL come to mind). A move to $5/month could even cause Google to rethink its plan to launch a paid subscription music service associated with YouTube.
In short, I predict that if Apple gets record companies to agree to $5/month for on-demand music, we will see a repeat of the shakeout that occurred around 2007-2008, which left only a handful of on-demand services in the market. When the smoke clears, Apple could well find itself with a much larger chunk of the on-demand music market than if it were to try to grow its share organically.
The remaining mystery is whether Apple intends to add a free tier to Beats Music, such as a limited on-demand capability under the iTunes Radio banner. The advent of free, legal on-demand music from Spotify and (effectively) YouTube in 2011 did cause the on-demand model to grow from a niche product for music geeks to a mainstream offering. On-demand is still not quite as popular as Internet radio — I estimate the on-demand audience to be about 60% of the size of the audience for Pandora, iHeartRadio, etc. — but it has surpassed the user base for paid digital downloads.
On-demand is clearly a big part of the music industry’s digital future. Apple is behind in the transition from downloads to access-based models and needs to catch up. Only dramatic, disruptive gestures can make this happen, and halving the price is certainly one of them.
UPDATE 29 March, 2015: It looks like we were wrong. Although the majority of respondents to the poll predicted that Apple would succeed in lowering the price of subscription music to $5/month, the major labels are holding the line at $10/month. That’s the pricing that Apple expects to maintain when it launches its rebranded on-demand streaming service later this year.
Adobe’s Latest E-Book Misstep: This Time, It’s Not the DRM (October 10, 2014). Posted by Bill Rosenblatt in DRM, Publishing, Technologies.
A few days ago, it emerged that the latest version of Adobe’s e-book reading software for PCs and Macs, Adobe Digital Editions 4 (ADE4), collects data about users’ reading activities and sends them to Adobe’s servers in unencrypted cleartext, so that anyone can intercept and use the data, even without NSA-grade snooping tools.
The story was broken by Nate Hoffelder at The Digital Reader on Monday. The Internet being the Internet, the techblogosphere was soon full of stories about it, mostly half-baked analysis, knee-jerk opinions, jumped-to conclusions, and just plain misinformation. Even the usually thorough and reliable Ars Technica, the first to publish serious technical analysis, didn’t quite get it right. At the time of writing, the best summary comes from the respected library technologist Eric Hellman.
More actual facts about this sorry case will emerge in the coming days, no doubt, leading to a fully clear picture of what Adobe is doing and why. My purpose here and now is to address the various accusations that this latest e-book gaffe by Adobe has to do with its DRM. These include a gun-jumping post by the Electronic Frontier Foundation (EFF) that has inadvertently dragged Sony DADC, the division of Sony that is currently marketing a DRM solution for e-books, into the mess undeservedly.
Let’s start with the basics: ADE4 does collect information about users’ reading activities and transmit it in the clear. This is just plain unacceptable; no matter what Adobe’s terms and conditions might say, it’s a breach of privacy and trust, and (as I’ll discuss later) it seems like a strange fit to Adobe’s role in the e-book ecosystem. Whether it’s naivete, sloppiness, or both, it’s redolent of Adobe’s missteps in its release of the latest version of its e-book DRM at the beginning of this year.
But is ADE4’s data reporting part of the DRM, as various people have suggested? No.
The reporting on this story to date has missed one small but important fact, which I suspected and then confirmed with a well-placed source yesterday: ADE4 reports data on all EPUB format files, whether or not they are DRM-encrypted. The DRM client (Adobe RMSDK) is completely separate from the reporting scheme. By analogy, this would be like Apple collecting data on users’ music and movie playing habits from their iTunes software, even though Apple’s music files are DRM-free (though movies are not).
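The DRM-independence point is easy to picture at the file level: an EPUB is just a ZIP container, and per the EPUB Open Container Format spec, DRM-protected EPUBs declare their encrypted resources in a META-INF/encryption.xml entry. A minimal sketch of telling the two apart (illustrative code only; this is not how ADE4 or Adobe RMSDK works internally):

```python
# An EPUB is a ZIP container (EPUB OCF); DRM-protected files are declared
# in META-INF/encryption.xml. Checking for that entry distinguishes an
# encrypted EPUB from a plain one.
import io
import zipfile

def epub_is_encrypted(data: bytes) -> bool:
    """True if the EPUB container declares encrypted resources."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return "META-INF/encryption.xml" in zf.namelist()

def make_stub_epub(encrypted: bool) -> bytes:
    """Build a minimal EPUB-like ZIP in memory, purely for demonstration."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("mimetype", "application/epub+zip")
        zf.writestr("META-INF/container.xml", "<container/>")
        if encrypted:
            zf.writestr("META-INF/encryption.xml", "<encryption/>")
    return buf.getvalue()

plain = epub_is_encrypted(make_stub_epub(encrypted=False))
protected = epub_is_encrypted(make_stub_epub(encrypted=True))
```

The point of the analogy in the text is that ADE4’s reporting ignores this distinction entirely: it reports on both kinds of files alike.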
Some savvier writers have pointed out that even though DRM may not be directly involved, this is what happens when users are forced to use media rendering software that’s part of a DRM-based ecosystem. This is a fair point, but in this particular case it’s not really true. (It would be more true in the case of Amazon, which forces people to use its e-reading devices and apps, and unquestionably collects data on users’ reading behaviors – although it encrypts the information.)
Unlike the Kindle ecosystem, users aren’t forced to use ADE4; it’s one of several e-reader software packages available that read EPUB files encrypted with Adobe’s Content Server DRM. None of the major e-book retailers use or require it, at least not in the United States. Instead, it is most often used to read e-books that are borrowed from public libraries using e-lending platforms such as OverDrive; and in fact such libraries recommend and link to Digital Editions on their websites.
But other e-reader apps, such as the increasingly popular BlueFire Reader for Android, iOS, and Windows, will work just as well in reading e-books encrypted with Adobe’s DRM, as well as DRM-free EPUB files. BlueFire (who can blame them?) sees the opportunity here and points out that it does not do this type of data collection. Users of library e-lending systems can use BlueFire or other apps instead of ADE4. Earlier versions of ADE also don’t collect and report reading data.
A larger question is why Adobe collects this data in the first place. The usual reason for collecting users’ reading (or listening or viewing) data is for analytics purposes, to help content owners determine what’s popular and hone their marketing strategies. Yet not only is Adobe not an e-book retailer, but e-book retailers that use its DRM (such as Barnes & Noble) don’t use Digital Editions as their client software.
One possible explanation is that Adobe is expecting to market ADE4 as part of its new DRM ecosystem that’s oriented towards the academic and educational publishing markets, and that it expects the data to be attractive to publishers in those market segments (as opposed to the trade books typically found in public libraries). Eric Hellman suggests another plausible explanation: that it collects data not for analytics purposes but to support a device-syncing feature that all of the major e-book retailers already offer — so that users can automatically get their e-books on all of their devices and have each device sync to the last page that the user read in each book.
Regardless of the reason, it seems unsettling when a platform software vendor, as opposed to an actual retailer, collects this type of information. Here’s another analogy: various video websites use Microsoft’s Silverlight web application environment. Silverlight contains a version of Microsoft’s PlayReady DRM. Users don’t see the Microsoft brand; instead they see brands like Netflix that use the technology. Users might expect Netflix to collect information about their viewing habits (provided that Netflix treated the information appropriately), but they would be concerned to hear (in a vacuum) that Microsoft does it; and in fact Microsoft probably does contribute to the collection of viewing information for Netflix and other Silverlight users.
In any case, Adobe can fix the situation easily enough by encrypting the data (e.g., via SSL), providing a user option in Digital Editions to turn off the data collection, and offering better explanations as to why it collects the data in the first place (at least better than the ambiguous, anodyne, PR/legal department-buffed one shown here). Until then, platform providers like OverDrive can link to other reader apps, like BlueFire, instead of to Adobe Digital Editions.
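The encryption half of that fix is standard practice: send the reporting traffic over TLS with certificate and hostname verification enabled, so that an eavesdropper sees only ciphertext. A sketch of the defaults involved, using Python’s ssl module purely as an illustration (this is not Adobe’s code, and no endpoint is contacted):

```python
# Sketch of the obvious fix: move reporting traffic from cleartext HTTP
# to TLS. Python's default SSL context already enables the two properties
# that make the channel trustworthy.
import ssl

context = ssl.create_default_context()
verifies_certs = context.verify_mode == ssl.CERT_REQUIRED  # server cert checked
checks_hostname = context.check_hostname                   # True by default
```

With a context like this, any HTTPS client library can carry the same payload the software sends today, minus the eavesdropping problem.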
Finally, as for Sony DADC: the EFF’s web page on this situation contains a link, as a “related case,” to material on a previous technical fiasco involving Sony BMG Music, one of the major recording companies in the mid-2000s. At that time, Sony BMG released some albums on CDs that had been outfitted with a form of DRM. When a user put the disc in a CD drive on a PC, an “autorun” executable installed a DRM client onto the PC, part of which was a “rootkit” that enabled viruses. After a firestorm of negative publicity that the EFF spearheaded, Sony BMG abandoned the technology. (In one of its more savvy gambits, the EFF used momentum from that episode to cause other major labels to drop their CD DRMs as well; the technology was dead in the water by 2008.) In this case, unlike with Adobe, the problem was most definitely in the DRM.
Apparently some people think that because this incident involved “Sony,” Sony DADC — which is currently marketing an e-book DRM solution based on the Marlin DRM technology — was involved. Not true; the DRM that installed the rootkit came from a British company called First4Internet (F4I). Not only did Sony DADC have nothing to do with this (as I have confirmed), but Sony DADC actually advised Sony Music against using the F4I technology.