The European Commission this past week published the first in a series of documents that mark its progress toward the objective of creating a Digital Single Market for all EU member states. The 20-page Communication, published last Wednesday, lays out a series of steps that Brussels will take over the next two years, including “[l]egislative proposals for a reform of the copyright regime” starting this calendar year.
The Communication describes two particular areas of focus within the realm of copyright. Both reflect input from lobbyists on both sides of the issue — the media and tech industries — and show that the EC wants to examine factors that trade off the concerns of both sides.
One area of focus is a pain point for anyone trying to launch a digital media business in Europe: the lack of cross-border licensing opportunities and content portability. In many cases, a would-be digital media distributor has to negotiate a separate content license in each country where it wants to operate. This process is especially painful for startups with limited resources. As a result, smaller countries in particular get legitimate content services years later than larger ones, if they get them at all.
This leads to a related problem: the use of geoblocking technology to confine services to single countries where it isn’t strictly necessary. Residents of smaller countries often resort to virtual private networks (VPNs) to obtain IP addresses that appear to be in countries where licensed services are offered. The EC is looking to streamline cross-border licensing and make it easier to carry services accessed on portable devices across borders; at the same time, it hopes to eliminate unnecessary geoblocking.
Cross-border licensing within Europe has been an issue for many years, certainly since I consulted for the EC’s Media and Information Society Directorate back in the mid-to-late 2000s. There are two obstacles to getting such schemes enacted throughout Europe. The first is that regulators in Brussels don’t hear much about the problem: the companies that can afford to send lobbyists to Brussels (and join trade associations) tend also to be able to afford enough lawyers to handle licensing in every member state — and to benefit from existing consumer demand, because their services are already known. In other words, startups tend to be shut out of this discussion.
The second problem is that smaller countries’ culture ministries see pan-European licensing as counterproductive. Their job is to promote their countries’ local content, yet cross-border licensing often makes it easier for bigger EU member states (not to mention the US), whose content is better known, to license content in but not out. The focus on reducing the use of geoblocking ought to help ameliorate this concern, because it steers consumers’ money into the hands of local rights holders instead of VPN operators.
The other major area of focus is online infringement. The EC is looking at two particular approaches. One is a “follow the money” approach to enforcement that focuses on “commercial-scale infringements,” presumably as opposed to small-scale activities by individuals. The other is “clarifying the rules on the activities of intermediaries in relation to copyright-protected content.” Although the language in the Communication is high-level and non-specific, the focus is not likely to drift far beyond the existing safe harbors for online service providers under the EU e-Commerce Directive, which are roughly equivalent to those of the DMCA in the US.
Here again, the Communication reflects tradeoffs between the concerns of the content and tech industries. The EC wants to look at “new measures to tackle illegal content on the Internet, with due regard to their impact on the fundamental right to freedom of expression and information, such as rigorous procedures for removing illegal content while avoiding the take down of legal content[.]” In other words, the EC probably wants to focus primarily on notice and takedown, to explore ways to resolve the tension between, on the one hand, the media industry’s concerns about overly onerous notice requirements and the lack of “takedown and staydown,” and on the other hand, online service providers’ concerns about abuse of the notice process.
The Commission expects to begin assessments of these areas this year. This Communication is the start of a multi-year journey toward any actual changes in national laws, and the first glimpse of the parameters of those changes. Communications are not legally binding; they lead to proposals for new laws and changes to existing ones. The next step is one or more Directives, which are legally binding on EU member states, though member states are free to implement them in ways that make sense within their own bodies of law — a process that itself can take several years. Unifying several aspects of digital life in Europe through this long and complex process will be a challenge, to say the least.
Piracy of live-streamed sports events ceased to be “inside baseball” (pun intended) for the media industry last weekend with HBO’s broadcast of the Floyd Mayweather-Manny Pacquiao boxing match in the US market. Even in the mainstream media (such as here and here), it seems that the public’s ability to watch the fight online for free in close to real time got more attention than the fight itself.
This is why protection of live sports event streams is a growth area in the field of anti-piracy technology today. Broadcasters like HBO pay huge sums of money for exclusive rights to live sports; therefore they have big incentives to protect the streams from infringement. Recent articles in re/code and Mashable attempted — with limited success — to explain how HBO’s stream was massively pirated and how that piracy could possibly have been curtailed.
Both articles focused on the many pirated streams of the fight that were available on the Periscope app, which allows users to broadcast video in real time from their iOS devices, and is owned by Twitter. As Peter Kafka at re/code explained (accurately enough), it’s not possible to use fingerprint-based systems like Google’s Content ID with live event streams. Such systems depend on a service provider getting a copy of the content in advance so that it can take a “fingerprint” — a shorthand numerical representation of it — and use that to flag attempted user uploads of the same content later. By definition, no advance copy of a live event exists, so fingerprinting can’t be used.
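To make the limitation concrete, here is a minimal sketch (in TypeScript) of a fingerprint-based matching pipeline. It is a toy, not how Content ID actually operates (real systems extract perceptual features that survive re-encoding and camcording), but it shows why the reference signature must be computed from an advance copy before any uploads can be checked:

```typescript
// Toy fingerprinting sketch. Real systems like Content ID use robust perceptual
// features; this only illustrates the pipeline: reduce known content to a
// compact signature ahead of time, then compare user uploads against it.
function fingerprint(samples: number[], chunkSize = 1024): number[] {
  const signature: number[] = [];
  for (let i = 0; i < samples.length; i += chunkSize) {
    const chunk = samples.slice(i, i + chunkSize);
    // Coarse per-chunk energy; a real system would extract camcord-robust features.
    const energy = chunk.reduce((sum, s) => sum + s * s, 0);
    signature.push(Math.round(Math.log1p(energy) * 100));
  }
  return signature;
}

function looksLikeReference(reference: number[], upload: number[]): boolean {
  const n = Math.min(reference.length, upload.length);
  if (n === 0) return false;
  let hits = 0;
  for (let i = 0; i < n; i++) {
    if (Math.abs(reference[i] - upload[i]) <= 5) hits++;
  }
  return hits / n > 0.9; // flag the upload if most chunks align
}

// The catch for live events: this line has to run BEFORE any uploads arrive,
// and for a live stream there is no advance copy to run it on.
// const referenceSignature = fingerprint(advanceCopySamples);
```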
Furthermore, just because a single service uses fingerprinting to block unauthorized uploads doesn’t mean that other services do. YouTube might block an upload thanks to Content ID, but that doesn’t prevent a user from putting the same file up on BitTorrent or a cyberlocker.
However, it is possible to use watermarks to flag content. HBO could insert watermarks into the live video as it goes out the door. Watermarks are much more efficient to detect and calculate than fingerprints, and a well-designed watermark can be detected even if the content is “camcorded” from a TV screen.
Two things can happen with watermarks. First, a cooperating service could agree to detect the watermark and block the content — or do something else, such as allow the content through, play an ad, and share the revenue with the rights holder, as Google does with Content ID. Second, a piracy monitoring service could detect watermarks of streams out in the wild (including on Periscope) and rapidly serve takedown notices on the services that are distributing the unauthorized streams, meaning that the services need not do anything proactive.
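As a rough illustration of the mechanics, here is a toy watermarking sketch, assuming uncompressed 16-bit audio samples. Hiding bits in sample LSBs would not survive camcording the way production schemes do; the point is only to show why detection is cheap: a read at known positions, rather than a comparison against a database of fingerprints.

```typescript
// Toy watermark sketch, assuming uncompressed 16-bit PCM audio. Hiding bits in
// sample LSBs would NOT survive camcording; production schemes embed perceptual
// marks that do. This only illustrates the embed/detect flow.
function embedWatermark(samples: Int16Array, idBits: number[]): Int16Array {
  const marked = Int16Array.from(samples);
  idBits.forEach((bit, i) => {
    if (i < marked.length) marked[i] = (marked[i] & ~1) | (bit & 1); // overwrite LSB
  });
  return marked;
}

function extractWatermark(samples: Int16Array, nBits: number): number[] {
  const bits: number[] = [];
  for (let i = 0; i < nBits && i < samples.length; i++) {
    bits.push(samples[i] & 1);
  }
  return bits;
}

// Detection is a cheap read at known positions: no reference database or
// advance copy needed, which is why watermarks work for live streams.
```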
Given what Christina Warren at Mashable experienced (camcorded streams appearing on Periscope and then disappearing later), the latter probably happened. Several streaming providers and anti-piracy services use watermarks to aid detection of unauthorized copies of live streams. In the Caribbean market, for example, Netherlands-based pay-TV platform provider Cleeng carried the pay-per-view broadcast of the fight for Sportsmax TV, and it’s likely that Cleeng used its live-stream watermarking technology to protect the content. (Another anti-piracy provider, Irdeto, has similar technology but admitted to Bloomberg that it wasn’t working on the fight. That leaves Friend MTS as my guess for the provider that monitored the fight in other geographies such as Europe and North America.)
It is also possible to automate the process more fully by embedding so-called session-based watermarks that contain identifiers for the user accounts or devices that are receiving the content legally — such as set-top boxes receiving HBO over cable or satellite services. Session-based watermarks are used today with movies released in early windows in high definition, and Hollywood would like them to be used in all 4K/UHD movie distributions.
With session-based watermarks, a monitoring service can (in many cases) determine the device from which the unauthorized stream originated and inform the pay-TV provider, which can then shut off the signal to that device. The entire process would require no human intervention and take just a few seconds.
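Here is a hypothetical sketch of that automated loop. All names and data structures are invented for illustration; no vendor's actual API is being described:

```typescript
// Hypothetical enforcement loop. A monitoring service extracts a session ID
// from a pirated stream, looks up which subscriber device legally received
// the content, and asks the operator to revoke its signal.
interface SessionRecord {
  subscriberId: string;
  deviceId: string; // e.g., a set-top box identifier
}

// Assumed operator-side mapping from embedded session IDs to devices.
const sessions = new Map<string, SessionRecord>([
  ["a1b2c3", { subscriberId: "sub-0042", deviceId: "stb-778899" }],
]);

function handleDetectedStream(extractedSessionId: string): void {
  const record = sessions.get(extractedSessionId);
  if (!record) return; // watermark unreadable or session unknown
  // In a real deployment this would call the pay-TV operator's provisioning system.
  console.log(`Revoking signal to device ${record.deviceId} (subscriber ${record.subscriberId})`);
}

handleDetectedStream("a1b2c3"); // end-to-end, with no human in the loop
```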
But with Periscope-style camcording, this could lead to the following interesting situation: Alice invites some friends over to watch the fight on her big-screen TV and pays the $100 fee to HBO through her cable company. Everyone sits down, and the fight starts. Bob pulls out his iPhone and fires up Periscope. A few seconds later, the TV goes blank or displays a warning message about possible copyright infringement. Alice calls her cable company and finds herself on hold, waiting behind the hundreds or thousands of other furious customers to whom the same thing happened.
Ergo I don’t believe HBO would require session-based watermarking to protect its live events through pay-TV providers. The situation with live sports is different from that of early-window HD movies: movies have already been in theaters (where they have been camcorded), and users value the timeliness of Periscope-style camcords of live events more than they mind their often questionable quality.
What also clearly did not happen is a deal between HBO and Twitter to detect the watermarks and block the live Periscope streams. As both the Mashable and re/code articles note, Twitter/Periscope experienced a ton of traffic before, during, and after the event, much of which was “second-screen” in nature, such as commentary on the fight and the fighters. Yet Google’s Content ID showed that a service provider can be willing to detect copyrighted material proactively if given sufficient incentive. If the likes of HBO can find sufficient incentives — cross-promotion, ad revenue share, or something else — then the Periscopes of the world might be inclined to follow in Google’s footsteps.
E-Books: Subscription Services vs. Libraries
April 30, 2015. Posted by Bill Rosenblatt in Publishing, Rights Licensing, Services, United States.
My latest article on Forbes is a look into the future of subscription e-book services. Oyster, a New York-based startup that offers a Netflix-like e-book subscription service, recently launched a “standard” e-book retail site with titles from all major and many indie trade publishers. In the Forbes piece, I wondered whether this could be a sign that subscription e-book services will be slow to catch on — particularly since none of the three subscription e-book providers in the United States market (Scribd and Amazon as well as Oyster) has licenses to content from all of the “Big Five” trade publishers. In addition, a comparison of sales figures for e-books and digital music suggests that consumers are, at least for now, more interested in “owning” e-books than they are in owning music downloads.
Then I got into an online debate with Ava Seave, a noted media industry strategy consultant, adjunct professor at Columbia Business School, and co-author of an excellent book about the media industry called The Curse of the Mogul. She raised an important point that I hadn’t considered: aren’t subscription e-book services hemmed in by competition from public libraries?
This led me to look at the latest data on Big Five trade publishers’ policies on e-book library lending. It turns out that they have liberalized their policies over the past couple of years. Now all of the Big Five license at least some of their frontlist titles for library e-lending, though most of them still impose restrictions on license durations (such as one year or up to 26 loans). In other words, the quiet war that has been going on between major trade publishers and public libraries for years has simmered down.
This means that public libraries could indeed be serious competition for subscription e-book services. (Subscription music services like Spotify, Rhapsody, and Beats Music don’t have this problem, because public libraries typically do not have large music collections for lending, nor do most people think of libraries as a place to discover recorded music.)
For publishers, public libraries and subscription services are best understood as revenue sources with two different models. Here is a table that summarizes the differences:
| | Public Libraries | E-Book Subscription Services |
| --- | --- | --- |
| Big Five catalogs | Most frontlist and backlist, according to library acquisitions | 3 out of 5 majors (Scribd & Oyster), backlist only, limited titles |
| Revenue | Fixed, up to 3-4x consumer hardback | Royalty per user read, plus analytics data |
| License volume | Per unit; libraries license N units each | Unlimited units |
| License term | Often limited to 1-2 years or 26 loans | N/A |
Public libraries thus have bigger catalogs potentially available for e-lending, but individual libraries must “acquire” individual titles. They can acquire as many as they want, according to their budgets and the number of “copies” that they predict will satisfy their patrons without excessive waiting lists, but they must pay prices that range up to four times the consumer hardcover price. Subscription services get access to thousands of titles in unlimited “quantities,” but only — at this time — from limited backlist selections and not at all from Hachette or Penguin Random House (and Amazon’s Kindle Unlimited has no Big Five titles at all). On the other hand, subscription services can offer publishers lots of data about subscribers’ reading habits — data that’s unavailable to publishers from libraries, often by law.
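Some back-of-the-envelope arithmetic makes the contrast between the two revenue models concrete. All prices below are made up for illustration; only the "up to four times hardcover" multiple comes from the discussion above:

```typescript
// Illustrative arithmetic only: every price here is hypothetical.
const hardcoverPrice = 28;                        // assumed consumer hardcover price
const libraryUnitPrice = 4 * hardcoverPrice;      // a library e-book "copy" at the 4x ceiling
const royaltyPerRead = 2;                         // assumed per-read royalty to the publisher

// A library copy yields fixed revenue no matter how often it circulates:
const revenueFromLibraryCopy = libraryUnitPrice;  // $112, whether loaned once or 26 times

// A subscription service pays per read; reads needed to match one library copy:
const breakEvenReads = Math.ceil(libraryUnitPrice / royaltyPerRead); // 56 reads

console.log({ revenueFromLibraryCopy, breakEvenReads });
```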
The behavior that Big Five publishers have exhibited towards both public libraries and online services suggests that they will move slowly and gradually as these models take hold with the public. Publishers haven’t licensed their material to startups unless and until those startups have amassed large audiences; Scribd, for example, had grown to 80 million visitors by the time it finally got a deal with HarperCollins. Similarly, publishers have been slow and deliberate in licensing material to libraries through library platform providers such as OverDrive, 3M, and Baker & Taylor, some of which have been operational since the early 2000s.
Contrast this with subscription music services: the first independent subscription music service was Rhapsody, which took less than two years to sign all of the (then) five major labels in 2002; and nowadays it would be commercial suicide for a digital music service to launch with anything less than all major-label content, minus the few remaining digital holdouts.
When viewed in this light, trade publishers’ dealings with public libraries don’t look like the kind of moral or public policy issue that many in the library community would like to portray them as (though there certainly are public policy implications, not least of which is libraries’ First Sale rights to lend material). Instead, the future of e-book access through both libraries and subscription services will depend on their effects on publishers’ bottom lines.
P.S. I predict that this competition will lead e-book subscription service providers to push for “freemium” models, a la Spotify or Hulu; some indie publishers might agree to this, as they have already done with Amazon’s Kindle Owners’ Lending Library, but the Big Five are likely to resist mightily. For one thing, this would require advertising revenue, an idea that has gotten severely limited traction in the world of e-books.
New White Paper and NAB Workshop: Strategies for Secure OTT Video in a Multiscreen World
March 22, 2015. Posted by Bill Rosenblatt in DRM, Events, Standards, Technologies, White Papers.
I have just released a new white paper called Strategies for Secure OTT Video in a Multiscreen World. The paper covers the emerging world of multi-platform over-the-top (OTT) video applications and how to manage their development for maximum flexibility and cost containment amid a constantly expanding range of devices and user expectations of “any time, any device, anywhere.” It’s available for download here.
The key technologies that the white paper focuses on are adaptive bitrate streaming (MPEG DASH as an emerging standard) and the integration of Hollywood-approved DRM schemes with HTML5 through Common Encryption (CENC) and Encrypted Media Extensions (EME).
It is becoming possible to integrate DRM with browser-based apps in a way that minimizes native code, without resorting to plug-in schemes like Microsoft’s Silverlight. Yet the HTML5 EME specification creates dependencies between browsers and DRMs, so that — at least in the near future — it will in many cases only be possible to integrate a DRM with a browser from the same vendor: for example, Google’s Widevine DRM with the Chrome browser, or Microsoft PlayReady with Internet Explorer. In other words, while the future points to consolidation around web app technologies and adaptive bitrate streaming, the DRM and browser markets will remain fragmented. As a result, service providers that want to offer premium movie and TV content will need to support multiple DRMs for the foreseeable future.
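For the technically curious, this fragmentation is visible from inside a browser app. The sketch below uses the real EME entry point, navigator.requestMediaKeySystemAccess, to probe which key systems the current browser exposes; the configuration details are illustrative, and a production player would request exactly the capabilities its streams require:

```typescript
// Sketch of probing EME key systems from a browser app. The API call is the
// W3C EME entry point; the codec string and configuration are illustrative.
const keySystems = [
  "com.widevine.alpha",      // Google Widevine (e.g., Chrome)
  "com.microsoft.playready", // Microsoft PlayReady (e.g., Internet Explorer)
];

const config: MediaKeySystemConfiguration[] = [{
  initDataTypes: ["cenc"], // Common Encryption (CENC) initialization data
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
}];

async function supportedDrms(): Promise<string[]> {
  const available: string[] = [];
  for (const ks of keySystems) {
    try {
      await navigator.requestMediaKeySystemAccess(ks, config);
      available.push(ks);
    } catch {
      // This browser does not expose this key system.
    }
  }
  return available;
}

supportedDrms().then(list => console.log("Usable DRMs here:", list));
```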
The white paper lays out a high-level solution architecture that OTT service providers can use to take as much advantage as possible of current and emerging standards while isolating and minimizing the sources of technical complexity that are likely to persist for a while. It calls for standardizing on adaptive bitrate streaming and app development technologies while allowing for and containing the complexities around browsers and DRMs.
Many thanks to Irdeto for commissioning this white paper. In addition, Irdeto and I will present a workshop at the NAB trade show on Tuesday, April 14 at 1pm, at The Wynn in Las Vegas. I’ll give a presentation that summarizes the white paper; then I’ll moderate a panel discussion with the following distinguished guests:
- Dave Belt, Principal Architect, Time Warner Cable
- Jean Choquette, Director of Multiplatform Video on Demand, Videotron
- Shawn Michels, Senior Product Manager, Akamai
- Richard Frankland, VP Americas, Irdeto
This session will include ample opportunity for Q&A and sharing of experiences and best practices, as well as a catered lunch and time to network with your peers and colleagues. Attendance at this event is strictly limited and by invitation only, to ensure the richest possible interaction among participants. If you are interested in attending, please email Katherine.Walsh@irdeto.com by April 7th. Irdeto will even give you a ride from the Las Vegas Convention Center and back if you wish.
The U.S. Patent and Trademark Office (USPTO) is holding a public meeting on Wednesday, April 1 to gather input on how the U.S. Government can facilitate the development and use of standard content identifiers as part of the process of creating an automated licensing hub, along the lines of the Copyright Hub in the UK.
This meeting is the second that the USPTO has held since it and the National Telecommunications and Information Administration (NTIA) published the “Green Paper” on Copyright Policy, Creativity, and Innovation in the Digital Economy in July 2013. The first meeting, in December 2013, addressed several other topics as well as this one.
(For those of you who are wondering why the USPTO is dealing with copyright issues: the USPTO is the adviser on all intellectual property issues, including copyright, to the Executive Branch of government, i.e., the president and his cabinet. The U.S. Copyright Office performs an analogous function for the Legislative Branch, i.e., Congress.)
The April 1 meeting will focus tightly on issues of standard identifiers for content — which ones exist today, how they are used, how they are relevant to automation of rights licensing, and so on. It will also focus on specifics of the UK Copyright Hub and the feasibility of building a similar one here in the States.
As usual for such gatherings, all are welcome to attend, the meeting will be live-streamed, and a transcript will be available afterwards. It’s just unfortunate that notice of the meeting was only published in the Federal Register last Friday, less than three weeks before the meeting date. I was asked to suggest panelists on the subjects of content identifiers and content identification technology (such as fingerprinting and watermarking). There are several experts on these topics who would undoubtedly add much value to such discussions, but many of them — located in places from LA to the UK — would be unable to travel to Washington, DC on such short notice and possibly on their own nickels. It would be nice to get input on this very timely topic from more than just the “usual suspects” inside the Beltway.
Establishment of reliable, reasonably complete online databases of rights holder information is of vital importance for making licensing easier in an increasingly complex digital age, and it’s encouraging to see the government take an active role in determining how best to get it done and looking at working systems in other countries that are further ahead in the process. That’s why it’s especially crucial to get as much expert input as possible at this stage.
Perhaps the USPTO can do what it did for the December 2013 meeting: reschedule it for several weeks later. If you are interested in participating but can’t do so at such short notice (as is the case with me), then you might want to communicate this to the meeting organizers at the PTO. Otherwise, the usual practice is to invite post-meeting comments in writing.
Announcing Copyright and Technology London 2015
March 13, 2015. Posted by Bill Rosenblatt in Europe, Events, UK.
For our fourth Copyright and Technology conference in London, we will be moving up from October to June — Thursday June 18, to be precise. The venue will be the same as in previous years: the offices of ReedSmith in the City of London, featuring floor-to-ceiling 360-degree views of greater London. Once again I’ll be working with Music Ally to produce the event.
At this point, we are soliciting ideas for panels and keynote speakers. What are the developments in copyright law for the digital age in the UK, Europe, and the rest of the world that would make for great discussion? Who are the most influential people in copyright today whom you would like to see as keynote speakers — or are you one of these yourself?
We’re considering various possible topics, including these:
- Implications of the “Blurred Lines” decision for copyright in the age of sampling and remix culture
- The use of digital watermarking throughout the media value chain
- Progress of the UK Copyright Hub, Linked Content Coalition, and other initiatives for centralizing copyright information online
- Content protection technologies for browser-based over-the-top streaming video
- Progress of graduated response schemes in France, UK, Ireland, and elsewhere
Please send me your ideas. It’s your chance to tell us what you want to hear about and what you’d be interested in speaking on! We intend to publish an agenda by the end of this month.
Forbes: The Myth of Cord Cutting
February 8, 2015. Posted by Bill Rosenblatt in Business models, Uncategorized, United States, Video.
In my latest piece in Forbes, I examine the idea of “cord cutting” in light of recent announcements from Viacom, Time Warner, and DISH Network of over-the-top (OTT) streaming video services that enable people in the US to watch pay TV channels without a pay TV subscription. Cord cutting means cancelling one’s subscription to cable or satellite TV and just getting TV programming over the Internet (or broadcast).
My research turned up two findings that were surprising (at least to me) and support a conclusion that cord cutting is mostly a myth. The first finding is that most people are unlikely to save money on programming if they pay for the increasing number of subscription OTT video services at their expected monthly prices. The second is that most American broadband subscribers get their TV and Internet services from the same company, and there isn’t really such a thing as a broadband Internet company that doesn’t also provide TV; therefore “cord cutting” in most cases really means “calling your cable or phone company and changing to a cheaper service plan.” I also conclude that, economically, cord cutting is a wash for everyone involved, particularly if the FCC is unsuccessful in its new attempt to pass meaningful net neutrality regulations.
As always, I eagerly welcome your feedback.
I’ve just published another piece in Forbes in my series on the emerging market for “high-res” audio, reflecting the recent surge in activity in this space as both the major record labels and consumer electronics companies see opportunity in expanding the market for high-quality digital audio beyond the audiophile niche. This piece is about new codec technologies — an area that hasn’t seen much innovation in a decade. As always, your feedback is most welcome.
As a postscript to that piece, it continues to amaze me — in a positive way — that vinyl is making such a comeback. Our favorite indie music store in Western Massachusetts recently got rid of all of its CDs and is now selling vinyl exclusively. Even Barnes & Noble is now selling a small, mostly highbrow selection of vinyl LPs. Most amazing of all? They’re flying off the shelves at an eye-opening $22 apiece. And everyone used to complain about the $16 CD — which didn’t scratch, took up less space, was easier to play, etc., etc.
Flickr’s Wall Art Program Exposes Weaknesses in Licensing Automation
December 7, 2014. Posted by Bill Rosenblatt in Images, Rights Licensing, Standards.
Suppose you’re a musician. You put your songs up on SoundCloud to get exposure for them. Later you find out that SoundCloud has started a program for selling your music as high-quality CDs and giving you none of the proceeds. Or suppose you’re a writer who put your serialized novel up on WattPad; then you find out that WattPad has started selling it in a coffee-table-worthy hardcover edition and not sharing revenue with you. The odds are that in either case you would not be thrilled.
Yet those are rough equivalents of what Flickr, the Yahoo-owned photo-sharing site, has been doing with its Flickr Wall Art program. Flickr Wall Art started out, back in October, as a way for users to order professional-quality hangable prints of their own photos, in the same way that a site like Zazzle lets users make t-shirts or coffee mugs with their images on them (or Lulu publishes printed books).
More recently, Flickr expanded the Wall Art program to let users order framed prints of any of tens of millions of images that users uploaded to the site. This has raised the ire of some of the professional photographers who post their images on Flickr for the same reason that musicians post music on SoundCloud and similar sites: to expose their art to the public.
The core issue here is the license terms under which users upload their images to Flickr. Like SoundCloud and WattPad, Flickr offers users the option of selecting a Creative Commons license for their work when they upload it. Many Flickr users do this in order to encourage other users to share their images and thereby increase their exposure — so that, perhaps, some magazine editor or advertising art director will see their work and pay them for it.
The fact that a hosting website might exploit a Creative Commons-licensed work for its own commercial gain doesn’t sit right with many content creators who have operated under two assumptions that, as Flickr has shown, are naive. One is that these big Internet sites just want to get users to contribute content in order to build their audience and that they will make money some other way, such as through premium memberships or advertising. The other is that Creative Commons licenses are some sort of magic bullet that help artists get exposure for their work while preventing unfair commercial exploitation of it.
Let’s get one thing out of the way: as others have pointed out, what Flickr is doing is perfectly legal. It takes advantage of the fact that many users upload photos to the site under Creative Commons licenses that allow others to exploit them commercially — which three out of the six Creative Commons license options do. It seems that many photographers choose one of those licenses when they upload their work and don’t think too much about the consequences.
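For reference, the commercial-use split across the six standard Creative Commons licenses is simple enough to express as a lookup table. This sketch shows the check a host effectively performs before monetizing an upload; the function is hypothetical, not Flickr's actual code:

```typescript
// The six standard Creative Commons licenses and whether each permits
// commercial reuse. The three "NC" (NonCommercial) variants do not.
const allowsCommercialUse: Record<string, boolean> = {
  "CC BY": true,
  "CC BY-SA": true,
  "CC BY-ND": true,
  "CC BY-NC": false,
  "CC BY-NC-SA": false,
  "CC BY-NC-ND": false,
};

// Hypothetical check a host could run before selling a print of an upload.
function mayMonetize(license: string): boolean {
  return allowsCommercialUse[license] ?? false; // unknown license: don't monetize
}
```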
Flickr does allow users to change their images’ license terms at any time, and more recently it expanded the Wall Art program to enable photographers to get 51% of revenue from their images if they choose licenses that allow commercial use. But currently that option is limited to those few photographers whom Flickr has invited into its commercial licensing program, Flickr Marketplace, which it launched this past July. Flickr Marketplace is intended to be an attractive source of high-quality images for the likes of The New York Times and Reuters, and thus is curated by editors.
Some copyleft folks have circled their wagons around Flickr, maintaining that it shows yet again why content creators should not expect copyright to help them keep control of what happens to their work on the Internet. But that’s a perversion of what’s going on here.
Flickr is — still, after ten years of existence — a major outlet for photos online. As such, Flickr has the means to control, to some extent, what happens to the images posted on its service; and with Flickr Marketplace, it is effectively wresting some control of commercial licensing opportunities away from photographers. Some degree of control over content distribution and use does exist on the Internet, even if copyright law itself doesn’t contribute directly to that control. The controllers are the entities that Jaron Lanier has called “lords of the cloud” — one of which is Yahoo.
This doesn’t mean that Flickr is particularly outrageous or evil — although it’s at least ironic that while these major content hosting services claim to help content creators through exposure and sharing, Flickr is now making money from objects that are not very shareable at all. (In fact, what Flickr is doing is not unusual for a mature technology business facing stiff competition from new upstarts on the low end of the market — Instagram and Snapchat in this case: it is migrating to the premium/professional end of the market, where prices and margins are higher but volume is lower.)
The problem here is the lack of both flexibility and infrastructure for commercial licensing in the Creative Commons ecosystem. Creative Commons is a clever and highly successful way of bringing some degree of badly-needed rationalization and automation to the abstruse world of content licensing. But it gives creators hardly any options for commercial exploitation of their works.
Several years ago, Creative Commons flirted with the idea of extending its licenses to cover terms of commercial use (among other things) by launching a scheme called CC+. A handful of startups emerged that used CC+ to enable commercial licensing of content on blogs and so on — companies that, interestingly enough, came from across the ideological spectrum of copyright. One was Ozmo from the Copyright Clearance Center, which helped with the design of CC+; another, RightsAgent, was started by the then Executive Director of the Berkman Center for Internet and Society at Harvard Law School. Yet none of these succeeded, and it didn’t help that the Creative Commons organization’s heart was never really in CC+ in the first place.
But the picture changes — no pun intended — when big content hosting sites start to monetize user-generated content directly instead of merely using it as a draw for advertising and paid online storage. Ideas for automated licensing of online content had been kicking around long before Flickr or CC+ (here’s one example). Licensing automation mechanisms that can be adopted by big Internet services and individual creators alike for consumer-facing business models are needed now more than ever.
Forbes – Going Hi-Fi To Compete With Spotify (And Google And Apple)
December 1, 2014. Posted by Bill Rosenblatt in Music, Services.
My latest piece in Forbes is about the new breed of subscription music services that offer lossless audio in order to appeal to the audiophile crowd. Two of them recently launched in the U.S. market: Tidal and Deezer Elite. I speculate about whether this development will finally lead to an era in which top audio quality catches up with low cost and convenience. As always, I welcome your feedback.