This may seem like a side issue from our usual subject matter here, but the current furor over programmatic Internet advertising is worth talking about anyway. To me it’s a prime example of the lengths to which industries will go to make it look like they are doing the right thing when they actually aren’t.
For those not keeping score: major Internet ad platform companies like Google and Facebook have been taking heat for placing ads on websites and social network pages that promote terrorism, racism, fake news, extremist political views, and so on, just because those pages’ viewer demographics match advertisers’ specifications. More recently, the ad platform companies announced that they are working to improve technology for detecting online places–let’s call them sites for brevity–that carry such content, so that they can avoid placing ads there.
The ad platforms have been moved from “concern” to action for the usual reason: money. Major advertisers like the finance giant JP Morgan Chase have found that pulling their ads from these places stops the harm to their brand reputations without reducing their ability to reach buyers: when JP Morgan Chase drastically cut the number of sites on which it would place ads to only those it specifically approved, the ads’ overall performance didn’t suffer.
Ad platforms like Google and Facebook worry that if other big consumer brands get the message, they’ll lose a ton of ad revenue. So they’re trying to fix their programmatic ad systems–but in ways that keep them in control of the programming. Trust us to keep your ads off undesirable sites, they say; you can continue to hand over the keys to the ad-buying kingdom to us.
This stance has apparently bamboozled the tech press into thinking that the online ad biz is undergoing “self-reflection.” That may or may not be true, but it certainly isn’t the same thing as self-regulation, let alone actual regulation; and in any case, said tech press isn’t even sure that self-reflection is a good thing, because programmatic ad buying is just so wonderful. In other words, the tech press has swallowed whole the pitches of the ad platform companies and their PR reps.
As you may remember, this issue has come up repeatedly over many years in the context of copyright infringement. In the copyright world (in the United States and several other countries, at least) there’s a notice-and-takedown mechanism that enables copyright holders to have content removed from unauthorized places online. Google publishes a Transparency Report that lists sites which are targets of takedown notices, along with the volume of notices they receive; the top recipients receive millions per month. Consumer brands can use the Transparency Report to identify sites on which they don’t want their ads to appear, and initiatives like David Lowery’s influential name-and-shame campaign have brought this to light.
Black-box gizmos from ad platform companies may be helpful. But what advertisers really need are lists of sites with objectionable content, analogous to the Google Transparency Report, plus mechanisms for programmatic ad sellers to avoid those sites at advertisers’ request. The industry is best served if the ad platform companies contribute the outputs of their detection tools to properly vetted third-party blacklists rather than keeping them within their own proprietary domains.
Advertisers also need standard cross-platform protocols for specifying the sites on which they will not allow ads. Service providers analogous to spam blacklist services could use these protocols to integrate with programmatic ad-placement algorithms. (This is simpler than the more baroque mechanism I suggested a few years ago.) Private, ad hoc ad-blacklisting services do exist, but the lack of standards, best practices, and protocols for integrating with both ad buyers and ad platforms limits their impact, as do the inherent limits on revenue opportunities (effectively: how much more would you pay to advertise less?).
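To make the idea concrete, here is a minimal sketch of how such an integration might work: an advertiser subscribes to one or more independently maintained blacklists, and the ad-placement pipeline checks a candidate site against all of them before bidding. Everything here is an illustrative assumption–the names (`Blacklist`, `should_place_ad`) and the sample domains are hypothetical, not any real service’s API; the suffix-matching rule is borrowed from how spam DNS blacklists treat subdomains.

```python
# Hypothetical sketch of a standardized site-blacklist check plugged into
# a programmatic ad-placement decision. Names and domains are made up.
from dataclasses import dataclass, field


@dataclass
class Blacklist:
    """A vetted third-party list of sites an advertiser refuses to appear on."""
    name: str
    blocked_domains: set = field(default_factory=set)

    def blocks(self, domain: str) -> bool:
        # Match the domain itself or any parent domain, so that
        # "cdn.example-pirate.net" is caught by a listing of "example-pirate.net"
        # (analogous to suffix matching in spam DNSBLs).
        parts = domain.split(".")
        return any(
            ".".join(parts[i:]) in self.blocked_domains
            for i in range(len(parts) - 1)
        )


def should_place_ad(domain: str, subscribed_lists: list) -> bool:
    """Place an ad only if no blacklist the advertiser subscribes to blocks the site."""
    return not any(bl.blocks(domain) for bl in subscribed_lists)


# Usage: subscribe to two independently maintained lists.
piracy_list = Blacklist("piracy-notices", {"example-pirate.net"})
extremism_list = Blacklist("extremist-content", {"hate.example.org"})
lists = [piracy_list, extremism_list]

print(should_place_ad("news.example.com", lists))        # True: not blocked
print(should_place_ad("cdn.example-pirate.net", lists))  # False: parent domain listed
```

The point of the sketch is the separation of roles: the lists are maintained by third parties and merely consumed by the placement algorithm, so the advertiser–not the ad platform–decides which lists to trust.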
Adopting a scheme like this would cost the ad platforms money, partly in building and running systems to process standard-format blacklist requests, but mainly in lost ad revenue. But such a scheme would not only starve objectionable sites of ad revenue–and hopefully put them out of business–it would do so in ways that make life easier for advertisers and give them ultimate control over where their ads are placed. Consumer brand companies are weak proxies for actual consumer interests, but they are stronger proxies in this case than ad platform companies.
Yet I’m not holding my breath for any of this to happen, because it requires cross-industry cooperation and resources to set up equitable mechanisms, with adequate abuse-curbing and redress procedures. These sorts of self-regulation initiatives tend to happen only when government threatens to step in with real regulation; and then self-regulation tends to be only substantive enough to smokescreen the government into backing off.
That was the case four years ago, during the Obama administration, when some ad networks addressed the “ads on pirate sites” issue through an Interactive Advertising Bureau-driven best practices statement, and a consortium of copyright owners and ISPs addressed P2P file-sharing by developing the Copyright Alert System; both initiatives are now defunct. With anti-regulation Republicans in charge (and allowing ISPs to sell users’ browsing histories), even a smokescreen seems unlikely now.