Who doesn’t love a good funnel?

Today, we’re going to make a nice colorful funnel using the latest data from some of the ad industry’s most reliable sources to trace a dollar spent on programmatically bought display advertising on its exciting journey from your pocket to the bank accounts of middlemen, con men, crooks, and the Bermuda Triangle.

Adtech was created to make the buying and selling of online advertising so much more efficient. Today, about $350 billion is spent on online advertising, and more than 70% of it is bought programmatically. It turns out adtech has been wonderfully efficient for the lads and lassies in the adtech industry. Not so efficient for losers like you and me. Let’s see how it’s working…

1. You start with a dollar to spend
2. Your agency gets a 7¢ fee
3. Technology and targeting fees take another 27¢ (DSPs, SSPs, and WTFs)
4. 15¢ mysteriously disappears into the “unknown delta.” No one knows where the “unknown delta” is. My guess? Jupiter or North Korea.
5. 30% of the ads you buy won’t be viewable
6. About 20% of the stuff you buy will be fraudulent
7. Only 9% of your display ads will be viewed by a real person for even a second. Bastards.
8. Blogweasel math notwithstanding, looks like your dollar bought you 3¢ of real display ads viewed by real human people.
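For anyone who wants to check the blogweasel math, the funnel above can be sketched in a few lines. This is a back-of-the-envelope simplification, not the methodology of any of the studies cited below: it treats each leakage step as independent and compounding, even though the 30% viewability figure and the 9% “viewed by a human” figure overlap.

```python
# A sketch of the funnel arithmetic above, treating each leakage step as
# independent and compounding. Rough by design: the viewability and
# "viewed by a human" figures overlap, so take the result as approximate.
dollar = 1.00

working_media = dollar - 0.07 - 0.27 - 0.15   # agency fee, adtech fees, "unknown delta"
viewable      = working_media * (1 - 0.30)    # 30% of ads never viewable
legitimate    = viewable * (1 - 0.20)         # ~20% of buys fraudulent
actually_seen = legitimate * 0.09             # only 9% viewed by a real person

print(f"Working media: {working_media:.2f}")            # 0.51
print(f"Seen by a human: {actually_seen * 100:.1f}¢")   # ~2.6¢, i.e. about 3¢
```

However you shuffle the last three factors, you land in the 3¢-to-4¢ range per dollar.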

Covering My Ass
As I’m sure you know, no one in the comical online “metrics” business can agree on anything. Consequently, to minimize the torrent of abuse I’m going to get from agency and adtech apologists, I have taken the numbers in the above illustration from the most reliable sources I could find:
     – The first four items come from the ISBA and PwC’s Programmatic Supply Chain Transparency Study
     – Item 5 comes from Integral Ad Science
     – Item 6 comes from AdAge and Spider Labs’ report, Combating Ad Fraud in the Age of COVID-19
     – Item 7 comes from Lumen Research

Covering Your Ass
Oh, and be sure to ask your agency about these numbers. And when they say, “We have systems in place…” ask to see the systems, have them explained to you, and get their version of how much value you’re getting from a programmatic ad dollar. Should be good for a few laughs.

Some Notes on the Funnel

– It’s important to note that the ISBA study alluded to in items 1 through 4 above only reported on the highest quality tip of the iceberg — the most premium end of the programmatic marketplace.

Even at the premium end, only 12% of the ad dollars were completely transparent and traceable. An astounding 88% of dollars could not be traced from end to end. Imagine what the numbers must be like in the non-premium end.

– The “unknown delta” represents about 1/3 of the fees that programmatic buyers pay. This money just evaporates. No one can figure out where it goes. Not even a famous blogweasel.

– I have used 30% as the factor for non-viewable ads. Some research reports it as high as 50%.

– I have used 20% as the fraud number at the publisher end of the funnel. Even if fraud at this end is only 10%, the math still comes out at about 3% viewable ads by real people.

– How many people actually view a display ad? The IAB defines a “view” as 50% of an ad’s pixels seen for one second. Huh? Even by this ridiculous standard only 9% of online ads are “viewed.”
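To see just how low the IAB/MRC bar is, the definition above reduces to a two-condition check. A minimal sketch (the function and parameter names are mine, not from any IAB code):

```python
def is_viewable(visible_fraction: float, continuous_seconds: float) -> bool:
    """The IAB/MRC display standard: at least 50% of the ad's pixels
    on screen for at least one continuous second."""
    return visible_fraction >= 0.5 and continuous_seconds >= 1.0

# Half an ad, glimpsed for a single second, officially counts as "viewed".
print(is_viewable(0.5, 1.0))   # True
print(is_viewable(1.0, 0.9))   # False -- a full ad seen for 0.9s doesn't count
```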

Idiots on Wheels

Of all the industries that never learn anything, my vote for the dumbest goes to the auto industry. It doesn’t matter how many times they try the same stupid strategy and fail, they never learn.

The strategy in question, of course, is the “let’s target young people” strategy. It never works, but it is the knee-jerk strategy for every new product from every auto marketer in captivity.

The latest example is Lexus. They have a new SUV, the NX, which they are targeting at young “creative visionaries” (someone shoot me) with all kinds of hip media horseshit. Only problem is, the average luxury SUV buyer is 53 years old.

The auto industry’s magical thinking goes back a long way. Of course, there was the infamous “Not Your Father’s Oldsmobile” campaign that helped kill an entire brand.

More recently, in 2015, Cadillac gave us the hilarious “Dare Greatly” campaign which “tapped into the Millennial mindset.” Yeah, right. They even moved their entire headquarters to NYC’s SoHo area to prove how young and hip they were. The result? Cadillac has the oldest buyers of any car brand on the planet. Oh, and they’re back in Detroit.

In 2016, Toyota killed Scion, its entry into the “youth car” market. The “youth car” idea was all the rage in the early 2010s. The idea was to target the ever-popular (and mostly non-existent) 18-to-34-year-old car buyer. According to The Wall Street Journal, 88% of these “youth cars” were bought by people over 35. Scionara.

For just plain laughs, it’s hard to beat Chevy’s attempt to be young. In 2012 they hired some clown from MTV who “transformed part of the G.M. lobby into a loftlike space reminiscent of a coffee shop in Austin or Seattle, with graffiti on the walls and skateboards and throw pillows scattered around.” Average age of a Chevy buyer? 46.

Back to Lexus and its new NX. According to MediaPost, “Gen Y and Gen Z luxury buyers tend to be more diverse, more affluent…” said Lexus’ VP of Marketing.

Just curious about who these “affluent Gen Z” luxury car buyers are. The average Gen Z is now 16 years old…You truly cannot make this stupid shit up.

The auto industry, steeped in research horseshit about “creative visionaries” and decades of other socio-generational idiocy, can’t get it through their thick skulls that our population is aging at warp speed; that the average car buyer in America is 53; that since 2000 the share of new cars bought by people over 55 has increased by 15%. These people are really, truly, genuinely clueless.

I’ll leave the last word to P.J. O’Rourke: “Whenever anything happens anywhere, somebody over 50 signs the bill for it.”


I’m getting a little long-winded here today, so let’s move this thing along…

    – Alternate headline for today’s newsletter: “I know 97% of my programmatic ad budget is wasted, I just don’t know which 97%.”  

    – From the late, great Nora Ephron, the best definition of ‘content’ you’ll ever read…“Something you can run an ad alongside of.”

    – Here’s a good laugh. According to Ad Age, “The Association of National Advertisers and member marketers have begun discussing an industry self-regulatory body to handle social media issues…” Yeah, that oughta do the trick. Letting a marketer regulate himself is like giving a 14-year-old girl a cosmo, a cell phone, and a credit card.

    – According to ad fraud researcher Dr. Augustine Fou, 2/3 of clicks on Google Ads are from bots. “If your agency is reporting clicks and click rates to you, you’re likely being misled.”

    – And while we’re kicking Google around…there are some companies that have very G-rated names, but sell very X-rated stuff. A couple of these companies are named “Jack and Jill” and “Adam and Eve.” If you search for “Jack and Jill” you won’t find the dirty company on Google in a natural search. That’s because Google has stunning integrity! Except, of course, if Jack and Jill happen to buy some Google Ads. In which case – surprise – there they are!

And Speaking of X-Rated Stuff…

        …doesn’t anyone screw anymore?


DuckDuckGo Wants to Stop Apps From Tracking You on Android | WIRED

At the end of April, Apple’s introduction of App Tracking Transparency tools shook the advertising industry to its core. iPhone and iPad owners could now stop apps from tracking their behavior and using their data for personalized advertising. Since the new privacy controls launched, almost $10 billion has been wiped from the revenues of Snap, Meta Platforms’ Facebook, Twitter, and YouTube.

Now, a similar tool is coming to Google’s Android operating system—although not from Google itself. Privacy-focused tech company DuckDuckGo, which started life as a private search engine, is adding the ability to block hidden trackers to its Android app. The feature, dubbed “App Tracking Protection for Android,” is rolling out in beta from today and aims to mimic Apple’s iOS controls. “The idea is we block this data collection from happening from the apps the trackers don’t own,” says Peter Dolanjski, a director of product at DuckDuckGo. “You should see far fewer creepy ads following you around online.”

The vast majority of apps have third-party trackers tucked away in their code. These trackers monitor your behavior across different apps and help create profiles about you that can include what you buy, demographic data, and other information that can be used to serve you personalized ads. DuckDuckGo says its analysis of popular free Android apps shows more than 96 percent of them contain trackers. Blocking these trackers means Facebook and Google, whose trackers are some of the most prominent, can’t send data back to the mothership—and neither can the dozens of advertising networks you’ve never heard of.

From a user perspective, blocking trackers with DuckDuckGo’s tool is straightforward. App Tracking Protection appears as an option in the settings menu of its Android app. For now, you’ll see the option to get on a waitlist to access it. But once turned on, the feature shows the total number of trackers blocked in the last week and gives a breakdown of what’s been blocked in each app recently. Open up the app of the Daily Mail, one of the world’s largest news websites, and DuckDuckGo will instantly register that it is blocking trackers from Google, Amazon, WarnerMedia, Adobe, and advertising company Taboola. An example from DuckDuckGo showed more than 60 apps had tracked a test phone thousands of times in the last seven days.

My own experience bore that out. Using a box-fresh Google Pixel 6 Pro, I installed 36 popular free apps—some estimates claim people install around 40 apps on their phones—and logged into around half of them. These included the McDonald’s app, LinkedIn, Facebook, Amazon, and BBC Sounds. Then, with a preview of DuckDuckGo’s Android tracker blocking turned on, I left the phone alone for four days and didn’t use it at all. In 96 hours, 23 of these apps had made more than 630 tracking attempts in the background.

Using your phone on a daily basis—opening and interacting with apps—sees a lot more attempted tracking. When I opened the McDonald’s app, trackers from Adobe, cloud software firm New Relic, Google, emotion-tracking firm Apptentive, and mobile analytics company Kochava tried to collect data about me. Opening the eBay and Uber apps—but not logging into them—was enough to trigger Google trackers.

At the moment, the tracker blocker doesn’t show what data each tracker is trying to send, but Dolanjski says a future version will show what broad categories of information each commonly tries to access. He adds that in testing the company has found some trackers collecting exact GPS coordinates and email addresses.

The beta of App Tracking Protection for Android is limited. It doesn’t block trackers in all apps, and browsers aren’t included, as they may consider the websites people visit to be trackers themselves. In addition, DuckDuckGo says it has found some apps require tracking to be turned on to function; for this reason, it gives mobile games a pass. While the tool blocks Facebook trackers across other apps, it doesn’t support tracker-blocking in the Facebook app itself. In DuckDuckGo’s settings, you can whitelist any other apps that don’t function properly with App Tracking Protection turned on.

The introduction of App Tracking Protection for Android comes at a time when ATT has pushed advertisers to Android, while also benefiting Apple. “ATT meaningfully changed how advertisers are able to target ads on some platforms,” says Andy Taylor, vice president of research at performance marketing company Tinuiti. The company’s own ads data shows Facebook advertising on Android grew 86 percent in September, while iOS growth lagged behind at 12 percent. At the same time, Apple’s ad business has tripled its market share, according to an analysis from the Financial Times. Around 54 percent of people have chosen not to be tracked using ATT, data from mobile marketing analytics firm AppsFlyer shows.

DuckDuckGo’s system is unlikely to have an impact anywhere near that scale and is more of a blunt tool. Unlike Apple, the company doesn’t own the infrastructure—the phones people use or the underlying operating systems—to enforce wholesale changes. Each time an app wants to track you, iOS presents you with a question: Do you want this app to track you? When you opt out, your device transmits the IDFA, the ad identifier sent to advertisers, as a series of zeros—essentially preventing them from tracking you. DuckDuckGo doesn’t have this luxury; its privacy browser app is installed on your phone like any other from the Google Play Store.

To make App Tracking Protection work, DuckDuckGo runs the same set of device permissions as a virtual private network (VPN). Dolanjski says that while Android phones will show the DuckDuckGo app as a VPN, it doesn’t work in this way: No data is transferred off your phone, and the network runs locally. In essence, the system blocks apps from making connections to the servers used for tracking. (When some trackers can’t communicate with their servers they will make repeated attempts to do so, Dolanjski says, which can cause certain tracker counts to swell within the app. He adds the company has seen no impact on battery life.)
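The core mechanism described above can be sketched in a few lines. This is purely illustrative, not DuckDuckGo’s actual code: a local VPN-style service inspects each outbound connection on-device and drops any whose destination matches a blocklisted tracker domain. The domain list here is a made-up toy example.

```python
# Illustrative sketch of local tracker blocking (not DuckDuckGo's code).
# A toy blocklist standing in for the app's real tracker-domain database:
TRACKER_DOMAINS = {"app-measurement.com", "graph.facebook.com", "taboola.com"}

def is_tracker(host: str) -> bool:
    """True if host equals a blocklisted domain or is a subdomain of one."""
    labels = host.lower().rstrip(".").split(".")
    # Check the host itself and every parent domain against the blocklist,
    # so "cdn.taboola.com" matches the "taboola.com" entry.
    return any(".".join(labels[i:]) in TRACKER_DOMAINS for i in range(len(labels)))

def handle_connection(host: str) -> str:
    # In the real app this verdict is applied inside Android's VPN service;
    # no traffic leaves the phone for inspection.
    return "blocked" if is_tracker(host) else "allowed"

print(handle_connection("cdn.taboola.com"))  # blocked
print(handle_connection("example.com"))      # allowed
```

The real feature has to handle far messier cases (apps that break without their trackers, first-party trackers, games), which is why the beta ships with per-app exceptions.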

At the time of writing, Google had not responded to a request for comment on apps using VPN configurations to block trackers across Android. Other apps on the Google Play Store—including Jumbo Privacy, a VPN app by Samsung, and Blokada—already use similar methods to block trackers, although they also offer wider privacy-focused tools and don’t act as browsers.

Google itself has gradually added more privacy controls in Android, including some that apply to apps. The company allows users to reset their ad IDs and to opt out of personalized ads. Following the launch of iOS 14.5, Google said that Android owners who opt out of personalized advertising will see their unique identifiers stripped to a series of zeroes—as is the case for iPhone owners who turn off tracking. The change is already rolling out on phones using Android 12 and will be made more widely available on other Android devices early next year.

But for many people, the planned Android changes may not be enough. They don’t go as far as Apple’s alterations. DuckDuckGo’s Dolanjski argues that there’s very little transparency around the trackers currently employed in the apps people use every single day and that most people would be shocked at the amount they are tracked. For him, blocking trackers on Android is the next step in giving people more control over how companies handle their data. “It is going to dramatically reduce how much information these third-party companies get about you,” he says.

Source: DuckDuckGo Wants to Stop Apps From Tracking You on Android | WIRED

How BuzzFeed Is Using Automated Ads to Power Growth

BuzzFeed’s four-year-old programmatic business is the engine of its advertising growth. That muscle will come in handy as publishers and marketers enter the usual frenzy of the fourth quarter against the backdrop of the delta variant threatening to derail ad creative and campaign budgets.

On its road to becoming a public company, the publisher wouldn’t break out specific programmatic revenue, but overall ad revenue for its second quarter increased 79% to $47.8 million. Advertising makes up 53% of its total second-quarter revenues with growth driven by higher pricing on programmatic ads and an increase in the total number of impressions sold.

Driving the higher pricing and more impressions is BuzzFeed’s answer to the dwindling efficacy of third-party cookies. Lighthouse, announced in March, helps increase marketers’ audience scale by finding previously unaddressable audiences and cutting down on marketing waste, ultimately selling more effective campaigns that nurture repeat business. Now, BuzzFeed uses some aspect of Lighthouse for almost all clients.

“The pandemic has taught us to be flexible,” Ken Blom, svp of ad strategy and partnerships, told Adweek, adding that buyers always had access to first-party data through BuzzFeed’s private marketplaces and direct deals. But revenue from selling on the open marketplace is still a big part of BuzzFeed’s business, and buyers there still rely on third-party data.

Lighthouse on full beam

Today, 66% of publishers are driving revenue through their first-party data, according to OpenX research, which also found that 21% of publishers said first-party data was “very important” to revenue today.

Advertisers using Lighthouse are consistently seeing click-through rates three to four times higher than with third-party data sources alone, according to Blom.

The addition of HuffPost and Complex Networks to BuzzFeed’s audience pool means it’ll have fewer scale woes than other publishers, making it less reliant on alternative identifiers like Unified ID or LiveRamp’s Authenticated Traffic Solution. Here, BuzzFeed is dipping its toes in, going through the legal process with a couple of providers, though it wouldn’t share which.

“Universal IDs are not our strategy,” said Blom, “our strategy is cohorts and contextual. Come talk to us and we’ll tell you what we have.”

Publishers can make a decision about what ID solutions they integrate, but that only goes so far without advertiser demand, not to mention meeting ad-tech vendors and giants like Google somewhere in the middle.

Google’s deadline extension for cookie collapse gives BuzzFeed more time to test the capabilities of its own first-party data solutions combined with context, rather than scrambling to release a minimum viable product that marketers might question. Conversations haven’t slowed, as many fear, Blom said, but clients’ readiness is a broad spectrum; those willing to test are still experimenting.

Performance from programmatic pipes

With cookies going away and contextual targeting on the rise, the thinking goes that ad buyers would do well to have more direct conversations with publishers to know if, say, the news team is setting up a health desk.

“The pendulum is swinging back to direct in some way,” said Blom. “The programmatic pipes are good, but how do we make them more performant?”

Due to client demand, BuzzFeed has tinkered with Spotlight, a three-year-old ad unit, to focus on the features inside it, what content runs next to it and what the creative looks like. It has rolled up other ad unit functions from across the site into the unit: streaming video files, full-screen expansion capabilities and what it calls customizable and interactive solutions.

The unit also adds more tracking features and real-time performance reporting and, of course, weaves in BuzzFeed’s first-party data so it can run comparative tests and use machine learning to build audience cohorts. Driving performance and utility, the ultimate test of advertising ROI, will help BuzzFeed maintain its growth trajectory if it can prove its programmatic ads work.

This strategy of juicing up ad units makes sense for BuzzFeed to keep up with the post-cookie world since it has scale but could struggle to get people to share email addresses to access content (commerce features are a surer bet), said Dan Elddine, svp, data and technology, North America, Essence.

“We are starting to see a bifurcation among publishers talking about identity and first-party data solutions versus those talking about ad units and inventory types,” said Elddine.

So far, the ad unit has gone down well with entertainment and retail clients. One unnamed entertainment client used the unit to promote a video game launch, growing top-of-mind awareness by eight points. Another campaign, advertising a movie release and its date, drove awareness up by more than 14 points.

“Advertising can fuel revenue diversification; it’s a foundation and an anchor,” said Blom. “It helps what we do with licensing and commerce. If you have advertising without diverse revenue streams, you’re leaving money on the table.”

Source: How BuzzFeed Is Using Automated Ads to Power Growth

Carissa Véliz: Privacy is Power – Tech Policy Press

For the Tech Policy Press Sunday Show podcast, I spoke to author Carissa Véliz about her recently published book, Privacy is Power: Why and How You Should Take Back Control of Your Data. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI, and a Tutorial Fellow at Hertford College, at the University of Oxford. She works on digital ethics, with an emphasis on privacy and AI ethics, and on practical ethics more generally, political philosophy, and public policy.

I caught up with Carissa about the book, and how it relates to some current issues in the world, from the pandemic to climate change. Below is a lightly edited transcript of our discussion.

Carissa Véliz

Justin Hendrix:

Before we get started talking about your book and the issue of privacy more generally, can you just tell me a little bit about yourself and how you came to look at these issues?

Carissa Véliz:

I’m a philosopher. I was writing my dissertation on a different topic in ethics, and I started researching the history of my family as a side thing. And I learned a lot about them that they hadn’t told us. They were refugees from the Spanish Civil War, and they went to Mexico. And there was a lot about what they did before going to Mexico and who they were that was completely unknown to us. So I went to the archives, I dug out this information, and it made me wonder whether I had a right to know these things and whether I had a right to write about them because I found them very interesting.

That summer, Edward Snowden came out with his revelations that we were being surveilled at a mass scale. Being a philosopher, I looked to philosophy to see what was there about privacy, and I realized there was a huge gap in the literature. There was some work on privacy, but there was very little. And the work that was there was quite outdated. So I decided to change the topic of my dissertation, and I ended up writing about the ethics and politics of privacy.

Justin Hendrix:

So tell me about this book and just give the reader a sense of how it came together. You’ve just published it in the fall of 2020.

Carissa Véliz:

Yes. Originally, I had the idea of publishing an academic book, but the more I researched privacy, the more I realized this is a very important topic. We’re at a crucial state, a crucial moment in history, and there is a need for people to be better informed about this. And there are very good reasons for why we’re not better informed. There are a lot of interests primarily from companies, but also from the government, that we don’t know too much about, or how much we’re being surveilled. And I thought that this was too important a topic and too important a moment to just write an academic book. So I decided to write the book that I wish I had been able to read when I got worried about this issue.

Justin Hendrix:

The first three chapters are context. You spend chapter one walking through a day in the life of someone, talking about how privacy and the collection of data might impact them at different parts of the day. In chapter two you get into how the data economy developed and the two or three key things that drove it. Can you, for the listener, quickly sketch out what those three key drivers are?

Carissa Véliz:

Sure. I wanted to write a book that was totally accessible for somebody who hasn’t thought about privacy at all, but also interesting for experts and philosophers. And so I hypothesized that the three key elements that drove the data economy were, first, Google realizing that they could make a lot of money out of personal data. They were a startup and they were very successful in the sense that they had a fantastic search engine and a lot of people wanted to use it, but they didn’t have a way to fund themselves. They were keenly aware that getting into ads could compromise their loyalty towards users, but they couldn’t figure out a different way. And they realized that they could target ads in an incredibly personalized way and have a competitive advantage. And that made them soar incredibly high after they decided to use data that way. And the Federal Trade Commission realized that this was a danger. In 2001 they published a report recommending to Congress that they should regulate the data economy. And many of the suggestions they made were along the lines of the GDPR.

So the second element that was really important for the data economy to take off was 9/11, because a few months after the Federal Trade Commission publishes this report, 9/11 happens and suddenly the government realizes that they can make a copy of all that data, literally make a copy, and try to use it to prevent terrorism. I think this was an intuitive approach. I think it was well intentioned. It just so happens that big data is not the right kind of analytics to prevent terrorism. Big data is fantastic at knowing what you will buy tomorrow because billions of people buy things every single day. So we have tons and tons of data. But terrorism is always going to be a rare event and that makes it not good for this kind of big data analysis.

So it’s kind of a shame because we lost privacy for something that was promised and that just wasn’t possible. And then the third element that was very important is once the data economy was half entrenched, the technology companies realized that that was their future. So they wanted to peddle narratives that convinced us that we didn’t need privacy anymore.

Famously, Mark Zuckerberg in 2010 said that people had evolved the norms of privacy, that we thought about data in a different way, and that people were happy to share their things. And there’s a big irony in somebody saying this who has bought the four houses around his house to protect his privacy. But back then a lot of people bought it. A lot of people thought, “Yeah, maybe privacy was relevant in the past, but not anymore.” And what we’ve seen recently is that privacy is more relevant than ever, that the same reasons we had for protecting privacy we have today still, and it’s even more salient in many ways. In the past you protected your personal information so your identity would not get stolen. And today we’re seeing identity theft really rise, especially as a result of the pandemic, when more people are using things online or spending time online.

So these were the three factors: first, using personal data as a tool to personalize ads; then 9/11, and governments having an interest in making a copy of that data and in that data getting collected; and third, the narratives that tech companies have peddled. And one of the most important ones has been that privacy is obsolete.

Justin Hendrix:

So before we move on to that, I want to focus in on 9/11. We’re about to have the 20th anniversary of 9/11. And of course that will be an important day here in New York City, where I live, and elsewhere in the country. But at the same time, another event occurred that many describe as terrorism: January 6th. The investigation in Congress by the select committee is just getting underway. And in fact, in the other tab I have open right now, I’m looking at a host of letters of request that the committee has sent to social media companies requesting information that may be relevant to what happened that day.

And I’m also struck by the fact that after January 6th, the FBI Director, Christopher Wray, has testified that there was just simply so much information on social media and coming over the transom, that they had a very difficult time finding a signal in the noise. And we see the DHS actually putting out a call for proposals to possibly buy other services that will help it sort through massive amounts of information on social media. There are literally lines in your book that are about this phenomenon. What do you make of what’s going on in the United States, perhaps elsewhere, today when it comes to social media and surveillance?

Carissa Véliz:

Yes, that’s very interesting. And one of the reasons why big data is not good at preventing terrorism is because terrorism is like trying to find a needle in a haystack. And adding all the data you can possibly collect and saving it for as long as you can just means you’re adding a whole lot of hay to that haystack and making it all the more difficult to find the needle. So this is one example in which there’s so much information out there that it might be counterproductive. It’s intuitive to think that the more data we have, the more we’ll know, and the more we can improve society and predict the future and prevent bad things from happening. But there’s a lot of evidence that shows that often too much information is counterproductive, and we just get lost in the noise, in correlations that turn out to be spurious, in pretty much chaos, and it would have been better to just collect the data that we need.

In the case of January 6th, it’s an interesting case because if you’re somebody who’s worried about democracy, that was a scary day and a scary moment. And people might think, “Well, see, it’s fantastic that we have this data and that they can find the people who acted in illegal ways.” The problem with technology is that you shouldn’t think of it as something that will always be used for good ends and in the best way possible, even if it sometimes is. Technology can be used in different ways. And you have to consider the very likely fact that it will be used in the worst possible way, because human beings are imperfect and there are a lot of interests and it’s chaotic and you can’t control technology. Once you invent something, you can’t un-invent it. So when we think about whether it’s a good idea to collect so much data, we should try to think about the cases in which it goes terribly wrong, not the cases in which you think it may be justified.

So talking about recent things (this is not about social media, but it’s about collecting data), we’re seeing now in Afghanistan that one of the horrendous things happening is that there’s a lot of data that might incriminate people. And I’ve seen a post from a teacher who’s trying to delete or burn her students’ records and women’s records, because if the Taliban realized that women were studying they might be in serious trouble. And if those records were digital, it would be almost impossible to delete them. You just can’t burn them. One of the qualities of digital data is that it gets copied on so many servers that it’s very hard to delete. Another example is that the US apparently left behind biometric data of people who helped Americans. And there’s a big worry that the Taliban might get that data. The UK embassy, it turns out, also left personal data behind. And so we can see how personal data is a huge danger. And we should think about what can go wrong, not only what can go right.

Justin Hendrix:

So you also put that in the context of what’s happened here in the United States, you mentioned Edward Snowden, but also multiple other aspects of what the state here has done to create a sort of surveillance ecosystem. Are you monitoring also what’s going on in China on a day-to-day basis? This may be something that you haven’t seen yet. I just noticed today that part of the Chinese government has put out a new set of recommendations around data privacy. How do you think about China in the context of these questions?

Carissa Véliz:

China is a fascinating case, of course, because it’s probably the country that surveils its people the most in the world. And as liberal democracies, we say we don’t want to go that way. And yet we are building many technologies that are going in that direction; we’re not walking away from it, we’re working towards it. And that’s something that really worries me. China is also very interesting because it has implemented a social credit system in which, depending on what people do, they lose or win points. And those points are used to limit people’s opportunities or to give them more opportunities. So say you get caught jaywalking: that loses you points, and that might be used to prohibit you from flying, from taking high-speed trains, or from staying in exclusive hotels. If you don’t have enough points, you might be less visible on dating apps. And if you have a lot of points, you might, for instance, not need to give a deposit when you rent a car. There are all kinds of perks.

And what makes the system scary is that it’s very totalitarian in the sense that something you do in one aspect of your life can influence another. So say you listen to loud music at home. If you live in the United States, that might get your neighbors angry, and they might even call the police and ask you to turn it down. But that’s not going to have an impact on the loan you’re going to get, or your visibility on a dating app, or a job you’re going to apply for. In China, it does, which makes it really scary. Now, something really interesting is that up until now, one of the arguments of tech companies in the west for not being regulated was that they needed to be competitive with China. The idea was, “Don’t regulate tech companies, because we need all that data to keep up with China. If you regulate us, China is not going to regulate its tech companies, and it’s going to have more data, and therefore it’s going to develop AI faster.”

There are many reasons why that’s very questionable. But something fascinating now is that China has just passed one of the strictest privacy laws in the world. And there’s a lot of speculation about why they did that- their tech companies’ stock dropped, so why would they do it? One possible reason, among others, is that they realized that having so much personal data stored is a ticking time bomb, and in particular a danger to national security: their rivals will get to that data sooner or later, and they will use it against China. So one reason they’re regulating in favor of privacy is to protect themselves. And that gives a big motivation to the United States to come up with a federal privacy law, because it’s one of the few advanced countries that doesn’t have one, and that’s very worrisome.

Justin Hendrix:

So you do get into this idea of privacy as power in chapter three, you talk about hard power, soft power, this idea of privacy being collective. Can you talk a little bit more about the different forms of power you see connected to privacy?

Carissa Véliz:

Sure. Up until now, we have been aware that data is important because it’s valuable: people can sell it for money. But I argue that even more important than that is that data gives people and institutions power. So not only can they sell it and earn money, but they can also get all kinds of things in exchange for it. For instance, if you have enough power, you can buy politicians and you can lobby the government. If you have enough power, you can get away with not paying taxes, and you have much more clout than just money. One of the insights of Bertrand Russell was that we should think about power as energy, in that it transforms from one thing into another. So if you have enough political power, you can buy economic power. If you have enough economic power, you can get military power, and so on. And one of the jobs of good regulation is to stop that from happening, so that even if you have a lot of economic power, that doesn’t necessarily get you political power.

So at the moment, data is a kind of new power. In a sense, it’s always been there: there’s always been a connection between knowledge and power. Francis Bacon argued that the more you know about somebody, the more power you have over them. And that’s kind of an intuitive thought. And Michel Foucault argued that the more power you have, the more you get to decide what counts as knowledge about someone. So for instance, Google has a lot of power, and that power means it gets to decide how it classifies us. When Google tells the rest of the advertisers and companies that you are such and such age and such and such gender and these are your interests, it gets to decide how you are treated and how you are seen.

And part of the power of data is the power to predict what’s going to happen next- in particular, the power to predict what we’re going to do next and to try to influence our behavior such that we act differently than we would have otherwise. And so even though this power has always existed, in the digital age it becomes much more important, because we have a lot more data than we used to have, and new methods to analyze that data that weren’t there before. So it kind of crept up on us, because we weren’t used to data being this powerful. And instead of just thinking about data as money, thinking about it as power will lead us to be more mindful of the kinds of asymmetries we are seeing, and of how we address and regulate them.

Justin Hendrix:

One of the things that I think of us doing at Tech Policy Press is being part of a pro democracy movement around the intersection of tech and democracy. You actually pause in the book and make a case for liberal democracy. Why did you think that that was important to do in this context?

Carissa Véliz:

When I was writing the book, or maybe a few months before, I had seen a few polls, one by The Economist and a few others that seem to suggest that people weren’t that enamored with democracy anymore, in particular in the United States, but also elsewhere. There’s a lot of people who think that democracy is not working, so maybe it wouldn’t be such a bad idea to have something else. And this worries me because I agree that many times democracy doesn’t work, but the alternatives are so much worse. And what we have to do when democracy doesn’t work is figure out what’s going on and change it as opposed to thinking, “Well, maybe if we had a dictator, they would sort things out.” Because history shows that more often than not, that leads to a lot of unnecessary suffering and a lot of injustice.

So I make a case for why liberal democracy is something important. And it’s not just any kind of democracy, but liberal democracy. The idea is that liberal democracy wants to make people as free as possible, as long as their freedom doesn’t harm others. And it also wants to put limits on what people can do, such that nobody’s rights get trampled, not even a minority’s. So one of the worries about democracy is what John Stuart Mill called the tyranny of the majority- which means that the majority, if they dislike a minority, can be just as bad a tyrant as a single dictator. So liberal democracy tries to limit that power. And one of the authors I cite who I think is very insightful is George Orwell. And George Orwell said, “You know, I get it, democracy sucks. It’s slow. It’s chaotic. If you get too many rich people, they’re going to co-opt it. It’s just really not ideal. But if you compare it to dictatorships, there’s a huge difference.”

And some of his detractors used to say, “Well, but there isn’t. In democracies you see injustice: people who go to jail who shouldn’t, and people who commit crimes and don’t go to jail because they’re rich,” and so on and so forth. And George Orwell said, “Yes, that’s true. But the amount of injustice you get is very small in comparison to what you get in a dictatorship.” And that matters. A difference in degree becomes a difference in quality. Most people in liberal democracies can go onto the street and protest and speak their minds and buy what they want and so on without fear of being repressed or of facing negative consequences. That is what matters in a liberal democracy, and that’s what we should be very worried about protecting.

Justin Hendrix:

So in chapter four, you get into questions around what exactly we should be doing as individuals and as a movement to take on the question of privacy. There’s a lot here. You want folks to step up and help stop personalized advertising, stop the trade of personal data. You have recommendations like implementing fiduciary duty around data and data collection, improving cybersecurity standards, a ban on surveillance equipment, some proactive things like funding privacy regulation, and bodies that would look after privacy, getting involved in antitrust, doing more to protect our children with regard to privacy. What do you see as the key things that listeners of Tech Policy Press should be doing?

Carissa Véliz:

The key thing is to realize how important privacy is, and then everything follows from that. So do what you can; you don’t have to be perfect. History shows that we only need 5 or 10% of people to make an effort for things to change quite radically. So we need regulation, and there’s no way around it. This is a collective action problem, and collective action problems don’t go away through individuals. But individuals have a big role to play in making that happen. So if we have 5 to 10% of people who care about privacy and protect it, that can motivate regulation. And not only regulation, but also companies realizing that privacy can be a competitive advantage, that it can actually sell, that people care about it, and that we are willing to pay for it. Because if you don’t pay for it, you pay even more further down the line. So if you don’t pay for privacy, it might seem free right now, but five years down the line you get your identity stolen and you lose money because somebody stole your credit card number. Or you apply for a job and you get discriminated against because the personal data shows that maybe you have a disease, or you’re trying to get pregnant, or whatever else doesn’t make you attractive to an employer, and you don’t get the job.

Ultimately, it’s much more expensive not to protect our privacy. So what we can do is first understand that privacy is collective. A lot of people say, “Well, I don’t care. I don’t have anything to hide. I’m not a criminal. I’m not shy. And I have a permanent job. So I have no reason to protect my privacy.” But one of the narratives the tech companies have sold us that is incorrect is that privacy is an individual preference and something very personal. There’s actually a collective side that’s just as important. When I expose data about myself, I expose others. If I expose my location data, I’m exposing my neighbors and my coworkers. If I expose data about my psychology, I’m exposing people who share those traits. If I expose my genetic data, I’m exposing not only my parents and siblings and kids and so on, but also very distant kin who I’m never going to meet but who could suffer very bad consequences from it.

So if we think about privacy as collective, suddenly it gives us a reason to work as a team to protect our privacy not only for ourselves, but for our community, family, friends, but also our society. And we should try to choose privacy-friendly options. So instead of using Google search, use DuckDuckGo, it’s very good and it doesn’t collect your data. Instead of using WhatsApp, use Signal. Instead of using something like Gmail, you can use ProtonMail. There are many alternatives out there, and they’re typically free and work very well. Or if they’re not free, they’re not very expensive. Contact your political representatives. Tell them that you care about privacy. There’s not one thing that’s going to solve it all. So it’s just the accumulation of efforts that’s going to make a difference.

Justin Hendrix:

You have some specific recommendations for tech workers, and I don’t know every person who will listen to this, but I do know that a lot of the folks in the Tech Policy Press community work either in tech or around tech. What do you think they should be doing in particular?

Carissa Véliz:

They have a very special responsibility, because they make the magic happen. Without them, there wouldn’t be any apps or websites or platforms and so on. Here at Oxford, I get to teach people who are studying mathematics or computer science, and I like to talk to my students about inventors in the past who have regretted their inventions. There are many examples, but one is Kalashnikov- the person who developed the rifle- who thought that his weapon was only going to be used in a just war. And of course it turns out that it has been used in all kinds of wars, and many times in very unfair and horrific ways. Near the end of his life, he wrote a letter to his priest asking if he was responsible for those deaths. And you don’t want to be that person. You don’t want to be the person who developed something that gets used to harm people, because you’re going to carry that for the rest of your life.

And so in a perfect world, and I hope in the near future, we will have regulated technology in a way that puts less of the burden on the shoulders of the people who create the tech. Data analysts and programmers and computer scientists should be able to go to an ethics committee to get advice, to ask about a particular project, to solve problems. But at the moment we don’t have that, and so all the responsibility is on their shoulders. And it’s a tall order, because the idea is that when you design something, you should try to imagine how it can be misused. Just imagine a dictator ending up with this tool and how they would use it. And by design, try to make it impossible to abuse. That’s really, really hard, and in some cases it will be impossible. But designers have to keep in mind that they will lose control of their inventions.

While we don’t have that regulation, something I advise is to seek out advice from ethics committees, or from people who work in ethics or in tech and society. In the US, I’m not that familiar with what kinds of committees there are, but in the UK, for instance, Digital Catapult is an institution that helps startups take off, and one of the services it offers is ethics committees. And I think that that is a really important thing. And for people who want to invest in tech: first make sure that whatever project you’re interested in has had some kind of ethical vetting, because it takes a lot of imagination and experience to come up with what can go wrong, and as a designer you might not be used to thinking about that. Ideally it shouldn’t all be on one person’s shoulders to carry that burden.

Justin Hendrix:

You see two roads ahead, two possible worlds. Of course, there are probably more, but you offer a ‘two roads diverge in the yellow wood’ sort of conclusion to the book. Can you describe what you see possible in the future and which direction you think we might be going?

Carissa Véliz:

Yes, basically we have two options. Either we have a society in which we regulate for privacy, get our act together, protect liberal democracy, and make sure that our personal data won’t be used against us, or we get a society in which we have what we currently have, but with more surveillance. Because that’s the tendency: to collect more and more data and to have more and more tools that are able to transform experience into data. And this is quite a scary scenario, because it’s kind of like China, but more high-tech. It’s a scenario in which you can’t do anything that doesn’t get recorded, and everything you do matters, and you get judged for it.

So it’s a world in which, for example, I worry about children and what it means for them to grow up in an environment in which everything they do can be recorded and potentially used against them, to humiliate them, or to discriminate against them in the future. And we really have to think carefully about what kind of society we want to live in in 10 or 20 years’ time. Particularly now with the pandemic crisis, it’s easy to give up civil liberties in an emergency without thinking of the kind of world we want to have once that emergency is over.

On the one hand I’m optimistic. I think that more and more people are aware that privacy is important. Among other reasons, because they have had some bad experiences with privacy online. In that sense, we are maturing as the digital age evolves. But at the same time, the tendency is still currently to collect more data and to have more surveillance and to be very uncritical about surveillance. We’re getting used to cameras and microphones being on all the time, being everywhere. And that really worries me.

So ultimately I’m optimistic in the sense that I think this is so bad and it’s so unsustainable that we are going to regulate it sooner or later, we’re going to get on top of it, just like we regulated other industries like railroads and cars and drugs and food and whatever industry has come before. But the question is, are we going to regulate it before something bad happens? Or are we going to wait for something like personal data being used for the purposes of genocide in the west? Is that what it’s going to take for us to wake up? And every alarm call that we get is more and more worrisome. So what are we waiting for to wake up? And that’s my concern.

Justin Hendrix:

You deal with the potential pushback that someone might bring, which is, “Don’t we need all this data to solve our problems?” And you go into a specific look at medical technology and medical information, and the extent to which personal data may be valuable to solve diseases or medical problems. But I might also throw in there climate- in the big infrastructure bill that’s made its way in the United States, there is a lot of focus on digital solutions for climate change and what might be possible there. And lots of interest, obviously, in data collection. I don’t know. How do you square those things? On some level, our capitalist economy seems to have mostly bet that our only hope is more tech and more information and more machine learning and more big data sets. So what do we do?

Carissa Véliz:

So the ideal answer is longer than I can give right now, so I encourage people to read the book because there I have a more nuanced answer. But the short of it is with regards to medicine, yes of course we need personal data for medicine. If you go to your doctor and you don’t want to tell them what’s wrong with you, they’re not going to be able to help you. But that doesn’t mean that we should sell that data. So what I’m arguing for is that personal data shouldn’t be the kind of thing that you can buy or sell. Just as we don’t allow votes to be bought or sold because it would totally deform democracy, for the same reasons we shouldn’t allow personal data to be bought or sold.

Furthermore, we have to be critical in the sense that it’s not automatic that the more data we have, the better medicine we will get. One example: during the coronavirus pandemic there have been many attempts to use AI to help fight the pandemic. A recent MIT Technology Review article covered two meta-analyses of the AI tools that have been implemented in hospitals to fight COVID. And it turns out that out of the hundreds and hundreds of AI tools that have been developed, if I remember correctly, maybe one or two are clinically viable. These are tools that have been used on patients, and that in some cases may have harmed patients. So we need to be a lot smarter about AI and not just assume that because it’s cutting-edge tech, surely it’s better, or because it’s AI, surely it’s better. That’s just not the case. So that’s part of it.

Another issue is whether AI is really going to need as much data as it needs now. So what we want from AI is for it to be authentically intelligent. And if you have had a conversation with your digital assistant recently, you will have noticed that they’re not very bright. The difference between a digital assistant or an AI and a child, is that you only have to tell a child something a few times and they get it. They remember it, they can generalize it, they can use that information in many different ways very flexibly, and they don’t need millions of cases to do that. So there’s an argument to be made for why AI in the future, really intelligent AI, won’t need the amounts of data that it needs today.

A third answer, or a third part of the answer, is that there’s a lot of data we can use that’s not personal data. And I admit that it’s very hard to distinguish personal data from non-personal data, because what we thought was not personal suddenly becomes personal data when it turns out that there’s a new tool that can re-identify people or use that data to identify them. But for the purposes of climate change, there will be a lot of beneficial data that is not personal data: things like air quality, where the temperature is going up, how the glaciers are melting, and all kinds of things that are not personal data. So the short of it is that we can do everything we want to do without the data economy we currently have. There’s no reason why we should be buying and selling personal data.

Justin Hendrix:

Is there anything I didn’t ask you about that you want to get across about the book or about this topic generally? I mean, I guess I could put it to you like this, what’s next for you on the topic of privacy? Where are you going to take your concern about this issue next?

Carissa Véliz:

I want to focus more on how algorithms are using data and what we can do to preserve autonomy. So at the moment, there can be hundreds of algorithms making decisions about you right now, whether you get a loan, whether you get a job, whether you get an apartment, how long you wait in line, what price you get for a particular product, and you have no idea. And that seems wrong to me. I also want to think about how we regulate algorithms. At the moment, you can produce any kind of algorithm pretty much to do whatever you want and let it loose into the world without any kind of supervision whatsoever. And that seems absolutely crazy. So one of the things I’m thinking about is how do we implement randomized controlled trials with algorithms just like we do with medicines? We would never allow a medicine to go out into the market without being tested. And yet we do that with algorithms all the time. And algorithms can be just as harmful as any powerful drug. So I’m currently thinking more about that and veering in that direction. So more about power than privacy.
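The drug-trial analogy can be made concrete. Below is a minimal, purely illustrative sketch (all data, decision rules, and the outcome metric are hypothetical, and a real trial protocol would involve far more) of what a randomized controlled trial for an algorithm could look like: users are randomly assigned to the status quo or to a candidate algorithm, and a harm-relevant outcome is compared between arms before any wide deployment.

```python
# Illustrative sketch: an RCT for a decision algorithm, by analogy with
# drug trials. Everything here is hypothetical toy data.
import random

random.seed(0)

def status_quo_algorithm(user):
    return user["income"] > 30_000      # existing decision rule

def new_algorithm(user):
    return user["score"] > 0.5          # candidate replacement under test

# Simulated applicant pool.
users = [{"income": random.randint(10_000, 80_000),
          "score": random.random()} for _ in range(1_000)]

# Random assignment to arms is the key step of the trial.
control, treatment = [], []
for user in users:
    (treatment if random.random() < 0.5 else control).append(user)

# Compare the outcome metric (here, approval rate) between arms.
control_rate = sum(map(status_quo_algorithm, control)) / len(control)
treatment_rate = sum(map(new_algorithm, treatment)) / len(treatment)

print(f"control approval rate:   {control_rate:.2%}")
print(f"treatment approval rate: {treatment_rate:.2%}")
```

A real trial would also pre-register the outcome measure, test the difference for statistical significance, and audit disparities across subgroups, which is exactly the kind of supervision the answer above argues algorithms currently escape.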

But maybe to end, a lot of people think that it’s very radical to say that we should end the data economy. But really if you come to think about it, first, in history, we have banned certain kinds of economic activity in the past because they’re just too toxic for society, they’re just too dangerous. But second, if you think about it, what seems to me really extreme is to have a business model that depends on the systematic and mass violation of rights. That’s what’s crazy. It’s not banning it that’s crazy. So I think we have gotten used to a very unfair situation and we have to reassess our society with a view to protecting our democracy and the kind of life we want to lead and that we want our kids to be able to lead.

Justin Hendrix:

The book is Privacy is Power: Why and How You Should Take Back Control of Your Data. Carissa, thank you very much for talking to me today.

Carissa Véliz:

Thank you so much for having me, Justin.

Source: Carissa Véliz: Privacy is Power – Tech Policy Press

Forrester says it’s end times for digital and display advertising | The Drum

A new Forrester report forecasts the end of digital and display advertising as we know it as consumers move away from experiences in which they can be interrupted.

That’s in part because consumers are putting more trust in digital assistants to make decisions on their behalf, but, naturally, it’s also because US marketers wasted roughly $7.4bn on display ads in 2016 – only 40% of which were seen by consumers.

In a blog post, James McQuivey, vice president and principal analyst at Forrester, said the “bombshell” report “fits nicely into the current backlash against major publishers and ad networks, including Google and Facebook” as advertisers re-examine their digital spend and demand more transparency.

But McQuivey said bigger change is afoot “because interruptions are coming to an end” as “interruption only works if consumers spend time doing interruptible things on interruption-friendly devices”.

He went on to add: “Once they can get what they want without leaving themselves open to interruptions — whether through voice interfaces or AI-driven background services — they will feel even more hostile to ad interruptions”.

And it is consumers’ “casual indifference to advertiser interests” that McQuivey said will enable consumers to inhabit an advertising-free world. I.e., soon Alexa may answer most of the questions consumers have historically asked via search engines, and digital assistants may collate and deliver highlights from users’ Facebook feeds so that they don’t see sponsored posts.

“The question remaining is what role will marketers play in that hypermediated world,” McQuivey wrote.

The answer?

Marketers should focus on building deeper relationships with their customers in 2017 – in part by investing in relationship technologies such as those that offer a real-time, single view of the customer, plus artificial intelligence that drives conversational relationships, McQuivey said.

Intelligent conversational relationships are possible via chatbots, chat interfaces and voice skills on in-home devices, but marketers must also ensure the conversations “sparkle with the brand personality the CMO has committed the company to,” McQuivey said.

What’s more, McQuivey said this will take investment, but his advice is to pay for it with the billions of dollars used on digital display advertising.

“When they do [divert digital display ad budgets], that will signal to everybody that the end of advertising is upon us. And that something much better is on its way,” McQuivey added.

Source: Forrester says it’s end times for digital and display advertising | The Drum

Inside the Industry That Unmasks People At Scale




Tech companies have repeatedly reassured the public that trackers used to follow smartphone users through apps are anonymous or at least pseudonymous, not directly identifying the person using the phone. But what they don’t mention is that an entire overlooked industry exists to purposefully and explicitly shatter that anonymity.


They do this by linking mobile advertising IDs (MAIDs) collected by apps to a person’s full name, physical address, and other personally identifiable information (PII). Motherboard confirmed this by posing as a potential customer to a company that offers to link MAIDs to PII.


“If shady data brokers are selling this information, it makes a mockery of advertisers’ claims that the truckloads of data about Americans that they collect and sell is anonymous,” Senator Ron Wyden told Motherboard in a statement.




“We have one of the largest repositories of current, fresh MAIDS<>PII in the USA,” Brad Mack, CEO of data broker BIGDBM told us when we asked about the capabilities of the product while posing as a customer. “All BIGDBM USA data assets are connected to each other,” Mack added, explaining that MAIDs are linked to full name, physical address, and their phone, email address, and IP address if available. The dataset also includes other information, “too numerous to list here,” Mack wrote.


A MAID is a unique identifier a phone’s operating system gives to each individual device. For Apple, that is the IDFA, which Apple has recently moved to largely phase out. For Google, it is the AAID, or Android Advertising ID. Apps often grab a user’s MAID and provide it to a host of third parties. One leaked dataset from a location tracking firm called Predicio, previously obtained by Motherboard, included the precise locations of users of a Muslim prayer app. That data was somewhat pseudonymized, because it didn’t contain the specific users’ names, but it did contain their MAIDs. Because of firms like BIGDBM, another company that buys the sort of data Predicio had could take that or similar data and attempt to unmask the people in the dataset simply by paying a fee.
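Mechanically, the unmasking described above is just a database join on the shared MAID key. A minimal sketch (all MAIDs, names, coordinates, and field names below are hypothetical, invented for illustration) of how a buyer could combine a "pseudonymous" MAID-keyed dataset with a broker's identity table:

```python
# Illustrative sketch only: hypothetical data showing that de-anonymizing
# a MAID-keyed dataset is a simple join against a broker's lookup table.

# Leaked app data: no names, but each record carries the device's MAID.
app_records = [
    {"maid": "a1b2c3d4-0001", "lat": 40.7128, "lon": -74.0060},
    {"maid": "e5f6a7b8-0002", "lat": 51.5074, "lon": -0.1278},
]

# A broker's "identity graph": MAID -> personally identifiable information.
broker_pii = {
    "a1b2c3d4-0001": {"name": "Jane Doe", "address": "1 Main St"},
    "e5f6a7b8-0002": {"name": "John Roe", "address": "2 High St"},
}

def unmask(records, pii_table):
    """Attach PII to every record whose MAID appears in the broker's table."""
    return [
        {**rec, **pii_table[rec["maid"]]}
        for rec in records
        if rec["maid"] in pii_table
    ]

for row in unmask(app_records, broker_pii):
    print(f'{row["name"]} was at ({row["lat"]}, {row["lon"]})')
```

The point of the sketch is that no cleverness is required: once any one party sells the MAID-to-PII mapping, every "anonymous" dataset keyed by MAID becomes a named dataset for anyone willing to pay.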



A screenshot of FullContact’s website offering the linking of mobile ad ids and other information.


“Anyone and everyone who has a phone and has installed an app that has ads, currently is at risk of being de-anonymized via unscrupulous companies,” Zach Edwards, a researcher who has closely followed the supply chain of various sources of data, told Motherboard in an online chat. “There are significant risks for members of law enforcement, elected officials, members of the military and other high-risk individuals from foreign surveillance when data brokers are able to ingest data from the advertising bidstream,” he added, referring to the process where some third parties obtain data on smartphone users via the placement of adverts.


This de-anonymization industry uses various terms to describe their product, including “identity resolution” and “identity graph.” Other companies claiming to offer a similar service as BIGDBM include FullContact, which says it has 223 billion data points for the U.S., as well as profiles on over 275 million adults in the U.S.


“Our whole-person Identity Graph provides both personal and professional attributes of an individual, as well as online and offline identifiers,” marketing material from FullContact available online reads, adding that this can include names, addresses, social IDs, and MAIDs.


“MAIDs were built for the marketing and advertising community, and are tied to an individual mobile device, which makes them precise in identifying specific people,” the material adds.


On a listing advertising its capability to link MAIDs to personal information, BIGDBM says “The BIGDBM Mobile file was developed from online providers, publishers and a variety of data feeds we currently obtain from a multitude of sources.” That listing did not name the specific types of PII that BIGDBM offers, so Motherboard posed as a potential customer interested in sourcing such data for a stealth startup.


BIGDBM did not respond to multiple requests for comment. FullContact did not respond to a list of questions, including whether its MAIDs and PII are collected with consent, and what sort of protections FullContact has in place to stop abuse of its capability to unmask the person behind a MAID.



A screenshot of the emailed response from Brad Mack.


Edwards said that the existence of companies that explicitly link MAIDs to personal information may raise issues under privacy legislation.


“This real-world research proves that the current ad tech bid stream, which reveals mobile IDs within them, is a pseudonymous data flow, and therefore not-compliant with GDPR,” Edwards told Motherboard in an online chat.


“It’s an anonymous identifier, but has been used extensively to report on user behaviour and enable marketing techniques like remarketing,” a post on the website of the Internet Advertising Bureau (IAB), a trade group for the ad tech industry, reads, referring to MAIDs. The IAB acknowledged but ultimately did not respond to multiple requests for comment asking if it still believes that MAIDs are anonymous.


In April, Apple launched iOS 14.5, which introduced sweeping changes to how apps can track phone users by making each app explicitly ask for permission to track them. That move has resulted in a dramatic dip in the amount of data available to third parties, with just 4 percent of U.S. users opting in. Google said it plans to implement a similar opt-in measure broadly across the Android ecosystem in early 2022.


Apple and Google acknowledged requests for comment but did not provide a statement on whether they have a policy against companies unmasking the real people behind MAIDs.


Senator Wyden’s statement added “I have serious concerns that Americans’ personal data is available to foreign governments that could use it to harm U.S. national security. That’s why I’ve proposed strong consumer privacy legislation, and a bill to prevent companies based in unfriendly foreign nations from purchasing Americans’ personal data.”




Source: Inside the Industry That Unmasks People At Scale

A privacy war is raging inside the W3C – Protocol — The people, power and politics of tech

James Rosewell could see his company’s future was in jeopardy.

It was January 2020, and Google had just announced key details of its plan to increase privacy in its Chrome browser by getting rid of third-party cookies and essentially breaking the tools that businesses use to track people across the web. That includes businesses like 51Degrees, the U.K.-based data analytics company Rosewell has been running for the last 12 years, which uses real-time data to help businesses track their websites’ performance.

“We realized at the end of January 2020 what Google was proposing was going to have an enormous impact on our customer base,” Rosewell said.

Under the banner of a group called Marketers for an Open Web, Rosewell filed a complaint with the U.K.’s Competition and Markets Authority last year, charging Google with trying to shut out its smaller competitors, while Google itself continued to track users.

But appealing to antitrust regulators was only one prong in Rosewell’s plan to get Google to delay its so-called Privacy Sandbox initiative. The other prong: becoming a member of the World Wide Web Consortium, or the W3C.

One of the web’s geekiest corners, the W3C is a mostly-online community where the people who operate the internet — website publishers, browser companies, ad tech firms, privacy advocates, academics and others — come together to hash out how the plumbing of the web works. It’s where top developers from companies like Google pitch proposals for new technical standards, the rest of the community fine-tunes them and, if all goes well, the consortium ends up writing the rules that ensure websites are secure and that they work no matter which browser you’re using or where you’re using it.

The W3C’s members do it all by consensus in public Github forums and open Zoom meetings with meticulously documented meeting minutes, creating a rare archive on the internet of conversations between some of the world’s most secretive companies as they collaborate on new rules for the web in plain sight.

But lately, that spirit of collaboration has been under intense strain as the W3C has become a key battleground in the war over web privacy. Over the last year, far from the notice of the average consumer or lawmaker, the people who actually make the web run have converged on this niche community of engineers to wrangle over what privacy really means, how the web can be more private in practice and how much power tech giants should have to unilaterally enact this change.

Two sides

On one side are engineers who build browsers at Apple, Google, Mozilla, Brave and Microsoft. These companies are frequent competitors that have come to embrace web privacy on drastically different timelines. But they’ve all heard the call of both global regulators and their own users, and are turning to the W3C to develop new privacy-protective standards to replace the tracking techniques businesses have long relied on.

On the other side are companies that use cross-site tracking for things like website optimization and advertising, and are fighting for their industry’s very survival. That includes small firms like Rosewell’s, but also giants of the industry, like Facebook.

Rosewell has become one of this side’s most committed foot soldiers since he joined the W3C last April. Where Facebook’s developers can only offer cautious edits to Apple and Google’s privacy proposals, knowing full well that every exchange within the W3C is part of the public record, Rosewell is decidedly less constrained. On any given day, you can find him in groups dedicated to privacy or web advertising, diving into conversations about new standards browsers are considering.

Rather than asking technical questions about how to make browsers’ privacy specifications work better, he often asks philosophical ones, like whether anyone really wants their browser making certain privacy decisions for them at all. He’s filled the W3C’s forums with concerns about its underlying procedures, sometimes a dozen at a time, and has called upon the W3C’s leadership to more clearly articulate the values for which the organization stands.

His exchanges with other members of the group tend to have the flavor of Hamilton and Burr’s last letters — overly polite, but pulsing with contempt. “I prioritize clarity over social harmony,” Rosewell said.

To Rosewell, these questions may be the only thing stopping the web from being fully designed and controlled by Apple, Google and Microsoft, three companies that he said already have enough power as it is. “I’m deeply concerned about the future in a world where these companies are just unrestrained,” Rosewell said. “If there isn’t someone presenting a counter argument, then you get group-think and bubble behavior.”

But the engineers and privacy advocates who have long held W3C territory aren’t convinced. They say the W3C is under siege by an insurgency that’s thwarting browsers from developing new and important privacy protections for all web users. “They use cynical terms like: ‘We’re here to protect user choice’ or ‘We’re here to protect the open web’ or, frankly, horseshit like this,” said Pete Snyder, director of privacy at Brave, which makes an anti-tracking browser. “They’re there to slow down privacy protections that the browsers are creating.”

Snyder and others argue these new arrivals, who drape themselves in the flag of competition, are really just concern trolls, capitalizing on fears about Big Tech’s power to cement the position of existing privacy-invasive technologies.

“I’m very much concerned about the influence and power of browser vendors to unilaterally do things, but I’m more concerned about companies using that concern to drive worse outcomes,” said Ashkan Soltani, former chief technologist to the Federal Trade Commission and co-author of the California Consumer Privacy Act. Soltani likened the deluge of procedural interjections from Rosewell and others to a “denial of service attack.”

James Rosewell, left, and Ashkan Soltani, right, are on opposite sides in this debate. Photo: James Rosewell; New America/Flickr

But what is perhaps more alarming, Soltani and Snyder argue, is that the new entrants from the ad-tech industry and elsewhere aren’t just trying to derail standards that could hurt their businesses; they’re proposing new ones that could actually enshrine tracking under the guise of privacy. “Fortunately in a forum like the W3C, folks are smart enough to get the distinction,” Soltani said. “Unfortunately, policymakers won’t.”

The tension inside the community isn’t lost on its leaders, though they frame the issue somewhat more diplomatically. “It’s exciting to see so much attention to privacy,” said Wendy Seltzer, strategy lead and counsel to the W3C, “and with that attention, of course, comes controversy.”

And with that controversy comes a cost. Longtime members of the organization said that at its best, the W3C is a place where some of the brightest minds in the industry get to come together to make technology work better for everyone.

But at its worst, they worry that dysfunction inside the W3C groups may send a dangerous and misleading message to the global regulators and lawmakers working on privacy issues — that if the brightest minds in the industry can’t figure out how to make privacy protections work for everyone, maybe no one can.

Do Not Track 2.0

If any of this sounds like history repeating itself, that’s because it is. About a decade ago, the W3C was the site of a similar industrywide effort to build a Do Not Track feature that would allow users to opt out of cross-site tracking through a simple on-off switch in their browsers. The W3C created an official working group to turn the idea into a formal standard, and representatives from tech giants — including Yahoo, IBM and Microsoft — as well as a slew of academics and civil society groups signed up to help.

Separate from community groups, interest groups and business groups, all of which facilitate informal conversations among developers inside the W3C, working groups are supposed to be where actual technical standards get written, finalized and, hopefully, adopted by key companies sitting around the virtual table. Working groups are, in other words, where ideas for new standards go when they’re ready for primetime.

“This seemed like the game,” Justin Brookman, director of consumer privacy at Consumer Reports, said of the Do Not Track working group. He briefly chaired the group while he was working for the Center for Democracy and Technology. “The browsers were going to implement it, and the browsers have a lot of power,” he said.

But the Tracking Protection Working Group, as it was called, ended up being where Do Not Track went to die. Over the course of years, members — who, in keeping with the W3C tradition, were tasked with reaching decisions by consensus — couldn’t come to an agreement on even the most basic details, including “the meaning of ‘tracking’ itself,” Omer Tene, vice president of the International Association of Privacy Professionals, wrote in a 2014 Maine Law Review case study.

Perhaps it should have been a clear sign Do Not Track was doomed when, Tene wrote, the group tried to settle its dispute over the definition of tracking by seeing which side could hum loudest. “Addressing this method, one participant complained, ‘There are billions of dollars at stake and the future of the Internet, and we’re trying to decide if one third-party is covered or didn’t hum louder!'” Tene wrote.

But both Tene and Brookman seem to agree that what really put Do Not Track underground was Microsoft’s decision to turn the signal on by default in Internet Explorer. Ad-tech companies that had banked on only a sliver of web users actually opting out of tracking resented a browser unilaterally making that decision for all of its users. Suddenly, Brookman said, they lost interest in participating in discussions at all. “They totally made a meal out of it,” he said, comparing their response to soccer players flopping on the field. “They totally exaggerated for effect to try to get out of doing this.”

Because the W3C’s standards are voluntary, no one was under any real obligation to heed the Do Not Track signal, effectively neutering the feature. Browsers could send a signal indicating a user didn’t want to be tracked, but websites and companies powering their ads didn’t (and don’t) have to listen.

In his post-mortem on the ordeal, Tene summed up the Do Not Track effort succinctly: “It was protracted, rife with hardball rhetoric and combat tactics, based on inconsistent factual claims, and under constant threat of becoming practically irrelevant due to lack of industry buy-in.”

For anyone participating in today’s privacy discussions inside the W3C, it’s a description that sounds eerily familiar.

Enter the Privacy Sandbox

After the Do Not Track debacle, Soltani dropped out of the W3C for years, focusing instead on helping draft and pass the California Consumer Privacy Act, or CCPA. That law — and its successor, the California Privacy Rights Act — actually requires websites to accept a browser signal from California users who want to opt out of the sale of their information. The global privacy control, as that signal is called, effectively paired the essence of Do Not Track with the force of law, albeit only for Californians.

When Soltani returned to the W3C in spring 2020, he wanted to turn the global privacy control into a W3C-approved standard, hoping that would lead to more industry adoption among leading browsers. Already, privacy-conscious browsers like Brave and DuckDuckGo have implemented the control, and major players including The Washington Post, The New York Times and WordPress are accepting the signal. But Soltani believed the standard could be improved with the W3C treatment. “Every technical standard is worth discussing in an open forum,” Soltani said. “It exposes bugs, issues and unforeseen edge cases.”
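Mechanically, the global privacy control is a small thing: a browser that has it enabled attaches a `Sec-GPC: 1` header to its requests (and exposes `navigator.globalPrivacyControl` to page scripts), and a site covered by the CCPA is expected to treat that as an opt-out of sale. A minimal server-side sketch, with `honors_gpc` as a hypothetical helper name:

```python
def honors_gpc(headers: dict) -> bool:
    # Per the Global Privacy Control proposal, a request carrying the
    # header "Sec-GPC: 1" signals the user has opted out of the sale or
    # sharing of their personal information.
    return headers.get("Sec-GPC", "").strip() == "1"

# A site receiving this request should treat the user as opted out:
honors_gpc({"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"})  # True
```

The simplicity is the point: the hard part was never the engineering, but getting sites obligated (by standard or by law) to listen.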

But his reentry into the community gave him deja vu. “Having not engaged in the W3C for years, it was very apparent I was walking back into what my experience was with Do Not Track, but 10 times worse,” Soltani said.

One reason for that: Google had chosen W3C as the venue for developing an array of new privacy standards that were part of its Privacy Sandbox initiative. “We provide Chrome to billions of users, so we really have an immense responsibility to those users,” said Google’s Privacy Sandbox product manager Marshall Vale. “One of the reasons that we are engaged in so many parts of the W3C is to really make sure that that dialogue and evolution of the web really happens in the open.”

One of Google’s proposed standards — Federated Learning of Cohorts, or FLoC for short — would eliminate the ability for advertisers to track specific users’ web behavior with cookies, but would instead divide Chrome users into groups based on the websites they visit. Advertisers could then target those groups based on their inferred interests.
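The cohort idea can be sketched in a few lines. This is a toy stand-in, not Google's actual algorithm: the real FLoC proposal used SimHash so that *similar* browsing histories land in the same cohort, while the plain hash below only groups identical ones. It shows the shape of the trade, though: the browser reports a bucket number instead of the history itself.

```python
import hashlib

def toy_cohort_id(visited_domains, n_cohorts=1024):
    # Toy stand-in for FLoC's cohort assignment: reduce the user's
    # (sorted, deduplicated) browsing domains to a single fingerprint,
    # then hash it into one of n_cohorts buckets. Advertisers would see
    # only the bucket number, never the underlying history.
    fingerprint = ",".join(sorted(set(visited_domains)))
    digest = hashlib.sha256(fingerprint.encode()).hexdigest()
    return int(digest, 16) % n_cohorts
```

The privacy critique in the next paragraph follows directly from this structure: a small bucket shared by people with similar histories can still be reverse-engineered into an interest profile.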

That proposal spurred a backlash from both privacy advocates and companies that rely on third-party tracking. The privacy side argued individuals’ interests might be easy to reverse-engineer, and that targeting groups of people based on their interests would still enable discriminatory advertising. The other side accused Google of trying to kill their companies and hoard user data for themselves. And browser vendors by and large rejected the technology altogether.

The Privacy Sandbox announcement inspired a flurry of newcomers, including from the ad-tech world, to join W3C in response. “It was supposed to be my task to find out what’s going on with FLoC and build a bridge so we could connect to it,” said one ad-tech newcomer who asked for anonymity, because he didn’t have permission to speak on his company’s behalf. “It looked like the real conversation was the one happening at the W3C, and by real, I mean the one where Google was actually listening.”

In fact, Google wasn’t just listening, it was responding. The basic rules of etiquette within W3C hold that participants don’t just get to have their say, they get to have a dialogue. “Our process starts by assuming good faith and engaging with all of the participants as they address the concerns they’re raising,” W3C’s Seltzer said.

That can promote useful exchanges when members are offering constructive criticism. But the policy of hearing everyone out can also grind progress to a halt.

A war of words

That’s what Soltani said happened when he tried to present the global privacy control proposal to W3C’s privacy community group. His most vocal detractor? Rosewell.

Rosewell jumped into the conversation to challenge not the specifics of the technology, but instead the very idea in which it was grounded. He objected to the notion that the W3C, which is a global community, should be turning policy from a single U.S. state into a technical standard, arguing that members might not be so thrilled if the W3C wanted to standardize policies from countries like China or India. “This is a Pandora’s box,” Rosewell wrote of the global privacy control in one October message. “Should web browsers really become implementation mechanisms of specific government regulation?”

Before conversation about standardizing the global privacy control even moved forward, Rosewell argued the W3C Advisory Committee should step in to first determine “if there is an appetite among W3C members” to continue.

The suggestion stunned long-time members, who said taking such a vote and foreclosing an entire category of proposals runs counter to the way the W3C has always operated. “It’s not how any of this stuff works. The W3C is not a Senate of the web. It’s a standards body for people who want to build things and collaborate with each other,” Brave’s Snyder said. “It’s not the kind of thing that anybody has ever voted on before.”

Indeed, while Seltzer wouldn’t comment on any specific altercations, she said W3C leaders are aware of general concerns about these tactics. “There is no process for calling work to a halt,” Seltzer said.

The World Wide Web Consortium is caught in a tug-of-war within its community of engineers and developers. Image: Christopher T. Fong/Protocol

But Rosewell’s certainly not alone in trying. Almost anywhere you can find a browser putting forward a new privacy proposal within the W3C, you can find profound philosophical opposition from members whose companies rely on third-party data. “At least some of this seems aimed towards legislation,” Snyder, who co-chairs the W3C’s Privacy Interest Group, said. “Which is to say, if they can make the waters muddy so it looks like there’s no agreement on the web, quote-unquote, then [regulators] shouldn’t be enforcing these things.”

One particularly contentious fight broke out this spring in a wonky discussion about a technique called bounce tracking, which is a workaround some companies use to circumvent third-party tracking bans.
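The shape of bounce tracking is easy to state: instead of loading a third-party tracker on site A, the navigation itself is routed through the tracker's domain (site A → tracker → site B), letting the tracker set first-party state during a hop the user barely sees. A toy detection heuristic, purely illustrative and not any browser's actual implementation:

```python
def looks_like_bounce_tracking(domains, dwell_ms, threshold_ms=500):
    # domains: hosts visited during one navigation chain, in order.
    # dwell_ms: milliseconds the user spent on each. An interstitial hop
    # shorter than threshold_ms, sandwiched between two real sites, is
    # the classic bounce-tracking shape: the middle domain exists only
    # to set first-party state before forwarding the user along.
    if len(domains) < 3:
        return False
    return any(dwell < threshold_ms for dwell in dwell_ms[1:-1])

# site-a.example -> tracker.example (40 ms) -> site-b.example
looks_like_bounce_tracking(
    ["site-a.example", "tracker.example", "site-b.example"],
    [8000, 40, 12000],
)  # True
```

Real browser mitigations are subtler (they age out storage for domains only ever seen as interstitials), but the dispute in the thread below was never about the mechanics.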

John Wilander, a security and privacy engineer at Apple, wanted the privacy group’s thoughts on how browsers might put an end to the practice. The conversation caught the attention of Michael Lysak, a developer at ad-tech firm Carbon, who began raising concerns about how Apple tracks its own users.

Wilander politely told Lysak his comments were out of scope, which is W3C-speak for: Take your bellyaching elsewhere. “Please refrain from discussing other things than bounce tracking protection here since doing so makes it harder to stay focused on what this proposal is about,” Wilander wrote.

Lysak continued on with another jab at Apple’s motives: “If a proposal kills tracking for some businesses and not others, that is in scope as it violates W3 rules for anti competition, especially if the proposer’s company directly benefits.”

Wilander shot back again: “I filed this issue and the scope is bounce tracking protection.”

Others were piling on too. Robin Berjon of The New York Times cited a study about users’ privacy expectations, writing, “It’s overwhelmingly clear that users expect their browsers to protect their data.” Lysak replied with a study of his own — one published by the ad industry — that argued differently. Erik Anderson of Microsoft, who co-chairs the group, chimed in asking everyone to focus on the topic at hand.

Wilander responded with a thumbs up emoji. And round and round they went.

Rosewell was there too, largely co-signing Lysak’s arguments about competition. “You make some interesting points,” he wrote to Lysak.

But Rosewell was also there to promote and explain a proposal of his own, another avian-themed standard called SWAN. Unlike FLoC, SWAN would allow publishers and ad-tech companies that join the SWAN network to share unique identifiers about web users. Those users could opt out of personalized ads from any companies in the network, and SWAN member companies would be bound by a contract to abide. But those companies could still use their unique IDs for other purposes, like measuring responsiveness to an ad and optimizing ad content.

To Rosewell, SWAN presents a sort of middle ground, giving web users the choice to turn off personalized ads, but giving ad-tech companies and publishers the data they want as well. But Soltani called SWAN and other industry-led proposals that preserve some level of data sharing “privacy washing,” because they would allow for data sharing even in browsers that have sought to prevent it.

“[They’re] saying: We’re going to define privacy as profiling for ads, but we’re going to collect your information for all these other purposes, too,” Soltani said.

No, you’re privacy washing

If the privacy advocates inside the W3C have been put off by Rosewell’s approach, he hasn’t exactly been charmed by theirs either. “I’ve been — I don’t know what the right word is — somewhere between upset and shocked at just how much of a sort of vigilante group the W3C truly is,” Rosewell said.

From his perspective, browsers have too much power over the community, and they use that power to quash conversations that might make them look bad. In fact, he charged Apple itself with “privacy washing.” Apple, he said, has forged ahead with third-party privacy protections, but has taken in billions of dollars a year from Google to feature its not-so-private search engine on iPhone users’ phones.

“Google doesn’t pay [Apple] $12 billion a year just for the kudos of having their logo on an Apple phone. They do it for the data that the deal generates,” Rosewell said.

Rosewell rejects the idea that he is pushing for weaker privacy protections. “I am absolutely on the privacy side of things. I would be aggrieved if I was characterized as anti-privacy,” he said, pointing to SWAN as an example of how he’s trying to advance the cause.

The problem, as he sees it, is that privacy has been ill-defined within W3C. “Until you define privacy, until you define competition, everything becomes an opinion,” he said. “And what happens is it’s those with the most influence that end up dominating the debate.”

He believes it’s a “crap argument” to say that philosophical or even legal questions like this are out of scope in a technical standards body. If the W3C only talked about technical standards, he argued, its members wouldn’t be so focused on a fuzzy concept like privacy. “We are interested in the impact of technical standards and technical choices in practice, and we should be. Of course we should be. Otherwise unintended consequences occur,” he said. “But what gets to be talked about is very self-serving.”

The ad-tech newcomer who spoke with Protocol was similarly frustrated by the community’s culture. “When you’re going up against powerful companies that are very entrenched in the W3C, and you’re saying something they don’t want said, it can feel as though you’re being gaslit, given contradictory information on rules that aren’t applied later,” he said.

Rosewell said he’s taken it upon himself to be vocal about these concerns inside the W3C primarily because few other people can be. One concern shared by both Rosewell and the people who disagree with him is that the W3C’s membership fees and the time commitment these conversations require make it so giant companies with thousands of employees can pack W3C groups with members and float endless proposals, while smaller companies or individuals working on these issues part-time struggle to keep up.

“The advantage Google has in numbers is not so much the number of participants, but the sheer size of the teams they have on these projects,” said one privacy advocate, who was not granted permission by his company to speak on the record. “I can get maybe 20% of two people’s time, that might be enough to produce one or two drafts per quarter. Google could ship a spec every week, and that means they can take up a lot of space.”

Indeed, 40 of the 369 members in the Improving Web Advertising business group work for Google. Vale, of Google, rejected the idea that this might make the community lopsided in Google’s favor, arguing that when it comes down to actually finalizing a specification, every company gets just one vote. “That’s how the W3C operates and makes sure that the voices of the various constituents and the members are really represented,” Vale said.

Still, there is an awful lot of conversation that happens before a standard gets to that stage. Those are the conversations happening right now. So when Google introduced the Privacy Sandbox, Rosewell figured he had the time, the freedom and the motivation to dive head first into those conversations. “As far as the tenacity is concerned,” he said, “if people are acting in good faith, then there should be a debate.”

Facebook’s fate

Rosewell’s “tenacity” has certainly been convenient for Facebook, a company that relies on third-party tracking to sell ads but is in no position to publicly challenge any other company’s privacy proposals after its own seemingly endless parade of privacy scandals. Instead, while Rosewell lobs bombs and takes the brunt of the fire from other W3C members, Facebook’s generals are busy negotiating peace treaties.

Just last month, Facebook engineer Ben Savage drafted a proposal that would give web users more choice over the interests their browsers assign to them. The idea, which Savage presented to members of the privacy community group, was so well-received even Soltani walked away thinking it just might work. Savage has also worked closely in the W3C with Apple’s Wilander to nail down new fraud prevention techniques for Safari, peppering his comments with smiley faces, as if to say, “I come in peace.”

But emoticons aside, it’s clear Facebook has as much riding on the outcome of these discussions as anyone. Among the tech giants at the table, Facebook is the only one that doesn’t have its own browser or its own operating system. But it does collect boatloads of data on billions of people around the world. As Apple takes direct aim at Facebook in public, people like Savage are working behind the scenes to push Apple engineers on technical remedies that might preserve Facebook’s existing business.

In April, Facebook Chief Operating Officer Sheryl Sandberg more or less admitted as much, saying on the company’s quarterly earnings call that Facebook was working with the W3C community on a way through some of the “headwinds” posed by Apple’s mobile privacy updates.

It was a blink-and-you-miss-it moment, but Soltani didn’t miss it, viewing it as yet another example of an ad-reliant tech company trying to sway the W3C. “Telling that @Facebook’s @sherylsandberg cites opportunities in @w3c when discussing their ‘regulatory roadmap’ on today’s Q1 earnings call,” he tweeted at the time. “#Bigtech has long known they can leverage standards groups to benefit their business goals.”

This year, Facebook put forward a candidate to serve on the W3C’s advisory board, and in a recent meeting, Facebook volunteered to chair a possible working group on privacy-enhancing technologies. “In the last six months they’ve become a lot more vocal on these subjects, which is fantastic,” Rosewell said, noting that Savage in particular has “done a great job in articulating an alternative voice.”

Still, despite Savage’s attempts at collaboration, there are times his frustration with powerful players inside W3C — namely Apple — has boiled over. In a lengthy Twitter thread last week, Savage accused Apple of “egregious behavior,” saying that while Google has been developing alternatives to tracking out in the open, Apple decided to “blow up” the world of web advertising and only started “thinking about what to replace it with later.”

He charged Apple with trying to push app developers away from advertising business models and toward fee-based apps, “where Apple takes a 30% cut,” striking a note about anti-competitive practices that sounded not unlike Rosewell. “Using the pretext of privacy to kill off the ads-funded business model, in order to push developers to fee based models Apple can tax doesn’t stop being anti-competitive if they lower their cut,” Savage wrote. “And their own apps will always have a 0% tax.”

Apple did not respond to Protocol’s request for comment.

Facebook also declined to make Savage or any other engineer available for comment, but in a statement, the company said of W3C, “These forums allow us to submit and debate new approaches to address common industry issues like how to measure ads and prevent ad fraud while still protecting people’s privacy. All of our suggestions are public and we encourage people to take a look at them.”

Here come the refs

Vale of Google also said the W3C has been instrumental to working out new privacy proposals. He gave W3C members credit for the development of one proposal in particular, called FLEDGE. “We’ve really shown that we’ve taken the input here from many members, whether it’s on the privacy side, or the browsers, or ad tech, and incorporated them into our ideas and proposals,” Vale said. “We’re listening.”

Of course, it’s also in Google’s interest to appear collaborative — now more than ever. Earlier this year, the U.K.’s competition authority took Rosewell’s group, Marketers for an Open Web, up on its complaint, agreeing to investigate the Privacy Sandbox for anti-competitive behavior.

At that, Google blinked, announcing last month that it would delay its plans to kill off third-party cookies another year, in order to “give all developers time to follow the best path for privacy,” a company blog post read. As part of its negotiations, the U.K.’s CMA said it would play a “key oversight role” in reviewing Privacy Sandbox proposals “to ensure they do not distort competition.”

Google also said last month that once third-party cookies are phased out, it would no longer use browsing history to target or measure ads or create “alternative identifiers” to replace cookies. That blog post was signed not by Vale or another engineer, but a member of Google’s legal department.

“Good news. Google won’t kill the open web this year,” Rosewell’s group wrote in a press release following the recent announcement. But the group also vowed to power on, arguing that Google’s commitments so far only cover a small subset of the data it uses to track people. “The proposed settlement agreement is hollow because it does not actually remove data that matters,” Rosewell said.

To him, the CMA’s announcement was, in other words, just a solid start. But to Soltani and others, Google’s decision was the predictable conclusion of a drama they’d watched play out inside the W3C, which is, in some ways, just a microcosm of the larger debate happening in countries around the world.

Regulators in the U.K., he said, had bought the ad industry’s argument that privacy and competition are on a collision course. That, he said, is a false choice. “They could have required everyone to not access that data, Google included, which would have been a net benefit for competition and privacy,” Soltani said.

But regulators appeared to overlook that option and are now using their power to pressure Google to put off changes that would make the world’s most widely used browser a little more private. “Sigh,” Soltani wrote in an email last month, linking to Google’s announcement. “James & Co succeeded.”


Source: A privacy war is raging inside the W3C – Protocol — The people, power and politics of tech

2 interesting ways Seb Gorka, a Nazi, is collecting your ad dollars – BRANDED

How the ad exchanges are sending your money to a very bad man

Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).

Here’s what’s new with us:

Seb Gorka is a neo-Nazi. This is not an opinion or a personal interpretation. It’s a well-documented fact.

He has worked closely with antisemitic politicians and written for openly antisemitic newspapers. He has publicly worn Nazi insignia. When asked about it, he has said he inherited the insignia on the “merits of [his] father,” who was a member of the Hungarian neo-Nazi group the Vitézi Rend. There is strong evidence that Gorka himself has sworn a lifetime oath to the group.

If that isn’t enough, he publicly incited and encouraged the January 6th insurrection, claiming that “patriots” have “taken over Capitol Hill.” He has also publicly promoted voter fraud disinformation on his website.

All this makes him extremely — as we like to say in the advertising biz — brand unsafe. Even YouTube has banned him. But that hasn’t stopped Seb Gorka from running a profitable disinformation outlet. With help from a handful of adtech companies that either don’t know or don’t care about where your ad dollars go, Seb Gorka is cashing in.

How is this happening? There are two ways: an obvious way and a less obvious fraudy way. We’re going to explore both of them here. Grab some popcorn, we’re going to defund a Nazi today, folks!

1. The normal, regular ad placement method (simple!)

Ad placements are the obvious way Gorka’s site makes money. The following companies are serving ads on it, in violation of their own Publisher Policies and Acceptable Use Guidelines:

  • Criteo (of course)
  • Taboola
  • Google
  • The Trade Desk

How do we know this? We hovered over the ads and took screenshots. You already know how all this works. It gets more interesting from here.

2. The shady revenue-sharing ring method (sketchy!)

Gorka’s site doesn’t just earn money through ad placements. It also monetizes by using shared DIRECT IDs — IDs meant for a single publisher — across a shared pool of unrelated websites. This practice not only artificially inflates its CPM rates (cost per thousand impressions), but also allows the site to get paid untraceable sums of money by an LLC.

Let’s break this down together.

Remember ads.txt?

We’ve talked about ads.txt before, but it’s important for us all to understand what a hot mess it really is.

Ads.txt is a protocol developed by IAB Tech Lab to bring transparency to the supply chain. An advertiser should be able to cross-check a publisher’s ads.txt directory against each exchange’s counterpart directory, known as sellers.json. If the data matches up, this tells the advertiser that things are fine.

In theory, these two specs allow advertisers and ad exchanges to verify that their ads are being placed on the correct inventory and that they’re paying the correct company. Here’s what this is supposed to look like when it’s working:

This screenshot is from CredCoalition’s ads.txt research, linked in the intro
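The cross-check itself is mechanical enough to script. Here is a minimal Python sketch — the parsing is simplified (real ads.txt files also carry variables and optional certification-authority IDs), and all domains and IDs in the test data are hypothetical. In practice you would fetch the publisher’s `/ads.txt` and each exchange’s `sellers.json` over HTTP:

```python
import json

def parse_ads_txt(text):
    """Parse ads.txt lines into (exchange_domain, seller_id, relationship)
    tuples, skipping comments, blanks, and variable lines like CONTACT=."""
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line or "=" in line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append((fields[0].lower(), fields[1], fields[2].upper()))
    return records

def cross_check(ads_txt_records, sellers_json_text, publisher_domain):
    """Compare one exchange's sellers.json against the publisher's ads.txt.
    Flags seller IDs the exchange doesn't know about, and DIRECT entries
    whose sellers.json record names a different domain than the publisher
    — the mismatch at the heart of the scheme described in this piece."""
    sellers = {s["seller_id"]: s
               for s in json.loads(sellers_json_text)["sellers"]}
    problems = []
    for exchange, seller_id, relationship in ads_txt_records:
        seller = sellers.get(seller_id)
        if seller is None:
            problems.append((seller_id, "not listed in sellers.json"))
        elif relationship == "DIRECT" and seller.get("domain") != publisher_domain:
            problems.append((seller_id, "DIRECT but sellers.json names another domain"))
    return problems
```

If `cross_check` comes back empty, “this tells the advertiser that things are fine” — at least on paper.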

In sellers.json, sellers can list themselves as one of three Seller Types (“seller_type”).

Publisher: If a seller is listed as a PUBLISHER, that means the exchange considers the seller to own and operate the website whose inventory it sells, and pays that seller directly.

Intermediary: If a seller is listed as an INTERMEDIARY, that means the seller does not own the inventory it sells; it represents or resells inventory on another publisher’s behalf.

Both: The seller has been approved by the ad exchange both as a PUBLISHER and INTERMEDIARY.

Why does any of this matter? Because advertisers are willing to pay a premium for DIRECT and PUBLISHER inventory: it signals to them that they’re getting 1) a higher-quality audience and 2) less potential for fraud.
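In code, the premium signal buyers look for is just the conjunction of those two labels. A toy predicate (the field values are the spec’s, but the function and its use here are illustrative):

```python
def is_premium_path(ads_txt_relationship: str, seller_type: str) -> bool:
    """Buyers pay a premium when the publisher's ads.txt declares the path
    DIRECT *and* the exchange's sellers.json lists the seller as a
    PUBLISHER (or BOTH). Everything else is resold inventory."""
    return (ads_txt_relationship.upper() == "DIRECT"
            and seller_type.upper() in ("PUBLISHER", "BOTH"))
```

The scheme described below works precisely because exchanges let sellers claim these labels without verifying them.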

But in practice, publishers like Gorka’s have correctly surmised that ad exchanges aren’t performing the necessary checks — which means they’re able to declare themselves DIRECT and PUBLISHER across an unlimited number of websites and funnel the revenues into a single shared account.

In Australia, this is formally recognized as a form of ad fraud, and is illegal. In America, this is still just uh… fraudy.

Here’s what we found in the site’s ads.txt

We looked at the DIRECT IDs on the site — and they tell an interesting story:

  1. The site is sharing DIRECT IDs with a number of local CBS, ABC, and NBC news affiliates.
  2. Frankly Media LLC appears to be employing these shared DIRECT IDs for a lot of publishers. Check out this list of Google results for just one ID.
  3. Frankly Media is owned by Engine Media Holdings, a publicly traded company on Toronto’s TSX Venture Exchange.
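Spotting this pattern is straightforward once you have each site’s ads.txt parsed: the same (exchange, seller ID) pair declared DIRECT on more than one unrelated domain is the red flag. A minimal sketch — the domain names and IDs below are made up for illustration:

```python
from collections import defaultdict

def shared_direct_ids(ads_txt_by_domain):
    """Map each (exchange, seller_id) pair declared DIRECT to the set of
    publisher domains declaring it. A DIRECT ID is supposed to belong to
    exactly one publisher, so any pair claimed by more than one domain
    is a candidate revenue-sharing ring."""
    usage = defaultdict(set)
    for domain, records in ads_txt_by_domain.items():
        for exchange, seller_id, relationship in records:
            if relationship.upper() == "DIRECT":
                usage[(exchange, seller_id)].add(domain)
    return {pair: domains for pair, domains in usage.items() if len(domains) > 1}
```

Run against records scraped from the affiliates’ sites and Gorka’s, this is exactly the overlap described above.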

And in the middle of it all are the ad exchanges.

Here’s what’s going on in the site’s ads.txt

The following companies are allowing mislabeling, which lets publishers get away with misrepresenting themselves and misleads advertisers into thinking they’re paying for direct inventory when the reality is anything but:


It’s fraudy because: Only publishers should be calling themselves “publishers.” Frankly Media is not a publisher. This could be an attempt to defraud not-so-vigilant buyers and suppliers with poor supply optimization. Also, Frankly Media is not a DIRECT seller of this inventory.

Let’s continue…


It’s fraudy because: Frankly Media is not a publisher, and certainly not a publisher of Newsweek. Also, Frankly Media is not a DIRECT seller of this inventory.


It’s fraudy because: Frankly Inc. (what happened to Frankly Media LLC???) is not a publisher, and certainly not a publisher of 41NBC. Also, Frankly Media is not a DIRECT seller of this inventory.


It’s fraudy because: Frankly Media LLC is not a DIRECT seller of this inventory.


It’s fraudy because: Frankly Media LLC is not a PUBLISHER of this site, and Frankly Media LLC is not a DIRECT seller of this inventory.


It’s fraudy because: Engine Media is not a DIRECT seller of this inventory.

Is Frankly Media LLC a dark pool sales house?

It sure looks like Frankly Media LLC has been going around to all the ad exchanges and telling some fibs.

They’ve told some of them that they own and operate Gorka’s site — but they don’t. They’re telling others that they are both PUBLISHER and INTERMEDIARY for it — and that’s not true either (the site is owned by Salem Media Group).

But one thing is probably true: Frankly Media LLC and/or its parent company Engine Media Holdings is making payouts to the site. But how much? This is where we hit a dead end.

Frankly Media LLC is what we call a dark pool sales house. We don’t know how much it’s earning for the site through this revenue-sharing ring, and we don’t know how it’s distributing the collective proceeds across its members.

Maybe they have a profit-sharing contract? Maybe they pay out through yet another shell company? Anyone looking at these records would never know.

So what do we do now?

What we’ve just uncovered is a Nazi infiltrating the adtech supply chain through the very pipelines designed to keep someone like him out of it. If Frankly Media LLC can use the same DIRECT ID across NBC affiliates and a Nazi’s media outlet, we are in trouble.

Last summer when we first reported on ads.txt revenue-sharing, we noted the national security implications of not knowing where our advertiser dollars are being funneled:

One of the reasons that this is still legal, or not explicitly known to be illegal, is that ads.txt is only three years old and there hasn’t been, to our knowledge, any major investigative research into the consequences of its design… Ads.txt is a global standard, used in international markets. This giant security hole opens markets and mediascapes around the world to foreign propaganda, hate groups, money laundering, and, of course, fraud.

If ad exchanges don’t enforce ads.txt, all these problems run rampant.

Once again, it’s up to us, the advertisers, to take charge of our own ads. If you want to keep your ad budgets away from Seb Gorka, you will have to add his website to your exclusion list. You will also need to block Frankly Media and its DIRECT seller IDs. You can find them in the site’s ads.txt file.

Thanks for reading,

Nandini & Claire


NOTE: Originally, the first sentence of this post said “Seb Gorka is a Nazi.” We have updated this to read “neo-Nazi” because it is debated whether the word Nazi is a general term for someone’s ideology or if it refers specifically to a member of the historic National Socialist German Workers’ Party of Germany.

Did you like this issue? Then why not share! Please send tips, compliments and complaints to @nandoodles and @catthekin


Source: 2 interesting ways Seb Gorka, a Nazi, is collecting your ad dollars – BRANDED

The inventor of the digital cookie has some regrets — Quartz


When Lou Montulli invented the cookie in 1994, he was a 23-year-old engineer at Netscape, the company that built one of the internet’s first widely used browsers. He was trying to solve a pressing problem on the early web: Websites had lousy memories. Every time a user loaded a new page, a website would treat them like a stranger it had never seen before. That made it impossible to build basic web features we take for granted today, like the shopping carts that follow us from page to page across e-commerce sites.

Montulli considered a range of potential solutions before settling on the cookie, as he later explained in a blog post. A simpler solution might have been to just give every user a unique, permanent ID number that their browser would reveal to every website they visited. But Montulli and the Netscape team rejected that option for fear that it would allow third parties to track people’s browsing activity. Instead, they settled on the cookie—a small text file passed back and forth between a person’s computer and a single website—as a way to help websites remember visitors without allowing people to be tracked.

Within two years, advertisers learned ways to essentially hack cookies to do exactly what Montulli had tried to avoid: follow people around the internet. Eventually, they created the system of cookie-based ad targeting we have today. Twenty-seven years later, Montulli has some misgivings about how his invention has been used—but he has doubts about whether the alternatives will be any better.
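The “hack” is worth making concrete. A cookie is scoped to the domain that set it, so when every page embeds a pixel from the same ad server, that server’s cookie rides along on every visit and its logs become a cross-site browsing history. A toy simulation (all domain names are illustrative, not from the article):

```python
class Tracker:
    """Stand-in for a third-party ad server whose pixel is embedded on
    many unrelated pages."""

    def __init__(self):
        self.profiles = {}  # cookie value -> pages where the pixel loaded

    def serve_pixel(self, cookie, page):
        if cookie is None:
            # First sighting of this browser: mint an ID cookie.
            cookie = f"uid-{len(self.profiles) + 1}"
            self.profiles[cookie] = []
        self.profiles[cookie].append(page)
        return cookie  # the browser stores this for the tracker's domain

tracker = Tracker()
jar = {}  # the browser's cookie jar, keyed by the domain that set the cookie
for page in ["news.example/story", "shop.example/cart", "blog.example/post"]:
    # Each page embeds the tracker's pixel, so the browser sends the
    # tracker's cookie along — regardless of which site the user visited.
    jar["tracker.example"] = tracker.serve_pixel(jar.get("tracker.example"), page)

# One cookie, three unrelated sites: the cross-site profile Montulli
# designed the cookie to prevent.
```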

This conversation has been edited for length and clarity.

QZ: What was your goal when you were creating the cookie?

We designed cookies to exchange information only between users and the website they visited. The founders of Netscape and many of the other denizens of the internet in that age were really privacy-focused. This was something that we cared about, and it was pervasive in the design of the internet protocols we built. So we wanted to build a mechanism where you could be remembered by the websites that you wanted to remember you, and you could be anonymous when you wanted to be anonymous.

How did you feel when you started seeing advertisers exploit cookies to track people?

Montulli circa 1992 (photo courtesy of Lou Montulli)

That wasn’t something that we had really anticipated sites doing—although I guess one could have followed the money and could have imagined this happening. We became aware of this in 1996, and it was certainly very surprising and alarming to us. We were simultaneously fighting a knock-down, drag-out battle with Microsoft [for dominance of the browser market] and basically getting our clock cleaned. So there were a lot of other problems going on within Netscape besides just cookies. So it just fell to me to figure out what to do about cookies. People were like, “Well I don’t have time to deal with this. Can you deal with this?” And, you know, I’m just a lowly engineer. I don’t really have any experience dealing with policy.

But we were really faced with three choices: One would be to do nothing, to go “oops!” and throw up your hands and allow advertisers to use third-party cookies however they wanted. Another would be to completely block third-party cookies. And the third option was to try to create a more nuanced solution in which we try to give control of the cookie back to the user—especially control over the way advertisers used cookies to track them. That was the approach that we tried to take. And to do that we built out a bunch of functionality within the browser to let users see what cookies are on their device and allow them to control how they’re being used. So you could turn off third-party cookies entirely, or you could turn them off for a certain site.

So you had a chance to kill third-party cookies back in 1996—why didn’t you take it?

Advertising at that time was really the sole revenue stream of websites, because e-commerce was not as strong. Pretty much the entire web relied on advertising and by turning off advertising cookies, it would severely diminish the ability for revenue to be made on the web. So I can’t say that the decision was entirely financially neutral. We as a company believed very strongly in the future of the open web. We felt like having a revenue model for the web was pretty important, and we wanted the web to be successful. So we made the choice to try to give cookie options to the user, but not disable them.

Now, 25 years later, do you feel like you made the right choice?

I look at it from two different perspectives. If you agree that advertising is a reasonable social good, where we get free access to content in exchange for some amount of advertising, and if that advertising is reliant on some form of tracking, I would say the use of the cookie for tracking is a good thing for two reasons. First, it’s a known place where tracking is happening. And second, it’s a technology that is in large part under the user’s control. You can disable cookies in your browser or use an ad blocker plugin to block cookies. So the user has a fair amount of control over the advertising technology right now, and that’s only because it works through this particular technology. The alternative would be, if every ad network were to use a completely different technology, and that technology was not under the control of the user, we would no longer have a singular mechanism with which to personally disable that tracking network.

There’s another view, though, which I’ve only come around to recently. I now think the web’s reliance on advertising as a major revenue source has been very detrimental to society. Advertising perverts the user experience. Instead of incentivizing quality, it incentivizes getting as much interaction as possible. And I think that we’ve seen that those business models that seek to generate as much interaction as possible have caused people to behave very irrationally and not in the public good. So we may need to cut back on the advertising model to get some sort of sanity back in our online experience. I had a hand in building the web this way, but in my old age I’m looking back and thinking the world might have been a better place if we had spent more time working on micropayments or subscription-based content that would have allowed us to value quality over quantity.

Given that we know third-party cookies are dying, what do you think of the alternatives the ad industry is proposing to replace them?

On FLoC: This is an alternate form of expressing preferences for advertising without the traditional means of tracking you all over the web. And I think those forms are really interesting. But I also think that the public is likely to find them a little creepy at first because they won’t really understand it.

On Unified ID 2.0: That’s basically just another cookie. I don’t think it will get traction, because almost everyone will want to turn it off. And if you turn it off, it does advertisers no good.

On first-party data: It’s fine for really large, top 100 websites, but it really cannot be a useful technology for smaller sites. If you don’t have much traffic, collecting your own data has very little relevance to the larger ad-serving, ad-tracking world.

How optimistic are you that new technologies can fix the misgivings consumers have about ad tracking?

It’s my guess that as the third-party cookie gets phased out, ad tracking networks will try to migrate to cookie replacements that do almost the same thing as cookies but don’t have the same user control or supervision, like fingerprinting. I think these new technologies will just set off an arms race between advertisers, who are trying to figure out how to track users, and the browsers and privacy advocates who will come up with technological methods to fight back.

Ultimately, it comes down to: Do we want to fight a technological tit-for-tat war between the advertising companies and the browsers, or do we create public policy around what is and isn’t permissible? It’s very difficult to create a singular technology that is able to solve this problem. And as soon as you do, you have billions of dollars trying to work around it, which to me means if we care about it as a public policy initiative then we ought to put some restrictions around it. And that’s a little hard for me to say as a technologist, because oftentimes legislation has the best intentions, but it doesn’t really hit the mark very well. But sometimes you just can’t come up with a pure technological solution to a problem and you have to figure it out on a policy level.


Source: The inventor of the digital cookie has some regrets — Quartz

The end of third-party cookies — Quartz

  • The technology that shaped digital advertising and media is going away. What will replace it?

    Illustration by Charlie Le Maignan

  • By the digits

    $336 billion: Valuation of the digital advertising industry, according to one estimate

    72%: Americans who worry that what they do online is being tracked by companies

    40%-60%: The (rather low) accuracy rate when two companies try to match the cookie data they have on the same set of consumers

    2.7%: Increased likelihood that a person will buy something from an ad that uses cookies vs one that does not, according to one study

    40%: Web traffic that comes from users who block third-party cookies

  • Explain it like I’m five!

    How do third-party cookies work?

    A cookie is a small text file saved locally on a user’s computer at the behest of a website they’ve visited. It helps the website remember information about them—often for benign reasons, like remembering their login information or making sure the items in their shopping cart will still be there even if they close the page and come back later.

    When cookies come from someplace other than the website a user chose to visit, they’re called third-party cookies. They’re not a particularly effective way for digital advertisers to track potential customers, and the public fears the privacy implications of having their every move online surreptitiously tracked. In response to public pressure, lawmakers are passing legislation to protect internet users’ privacy, but the most effective move of all might be a voluntary one by web browsers that have said they will no longer support third-party cookies. Few will mourn the functional death of the third-party cookie, but there’s reason to be suspicious of what might rise in its place.

    Read more here.
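The first-party/third-party distinction above boils down to whose domain set the cookie, relative to the page the user chose to visit. A toy classifier (domain names are illustrative; real browsers use the Public Suffix List rather than simple suffix matching):

```python
def classify_cookie(page_domain: str, cookie_domain: str) -> str:
    """Rough sketch: a cookie is first-party if it was set by the visited
    site (or one of its subdomains), third-party otherwise."""
    same_site = (cookie_domain == page_domain
                 or cookie_domain.endswith("." + page_domain))
    return "first-party" if same_site else "third-party"
```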

  • Charting where digital advertising money goes

    Where a dollar of digital advertising goes

    Read more here.

  • Brief history of third-party cookies

    1994: Lou Montulli, a 23-year-old engineer at Netscape, maker of the world’s then-leading web browser, invents the cookie. His original goal was to create a tool that would help websites remember users—but couldn’t be used for cross-site tracking.

    1995: DoubleClick, one of the world’s first adtech firms, is founded. Its engineers realize they can exploit cookies to track users across the web; the company pioneers and comes to dominate the world of ad targeting.

    2008: Google buys DoubleClick for $3.1 billion and expands its advertising business from search pages to programmatic ads on websites.

    2016: The EU passes the General Data Protection Regulation (GDPR), which expands requirements for websites to get users’ consent before tracking them with cookies.

    Jan. 2020: Google announces it will block all third-party cookies by default, writing, “Our intention is to do this within two years.”

  • Billion-dollar question

    What will replace the third-party cookie?

    There are three major proposals for how the industry can continue to show consumers relevant ads and measure the effectiveness of marketing campaigns without relying on third-party cookies. These solutions aren’t mutually exclusive, and in the short term we’ll see the industry experiment with all three.

    👯‍♀️ Google’s Federated Learning of Cohorts (FLoC) model: The browser tracks users and groups them into cohorts alongside thousands of peers with similar online habits. Every time a person visits a website, their browser would tell the site which cohort they belong to, and advertisers would show them ads tailored to people with interests like theirs.

    🗞️ First-party data tracking: Publishers and advertisers each collect their own data about their audience and consumers respectively. If a brand and a publisher have the same piece of information about a particular user, like an email address, they can team up to match the customer’s spending habits on the brand’s site to their reading habits on the publisher’s site and target ads even more effectively.

    🔑 Identity-based tracking: A central authority would assign every web user an advertising ID that advertisers could track every time a user logs into a website. Adtech companies would once again be able to monitor individual users’ browsing habits, serve them targeted ads, and measure whether a user who saw an ad went on to buy the advertised product.

    How developed are these models, and who wins and loses with each? Read more here. 
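The cohort idea in the first proposal can be sketched in a few lines. Real FLoC used SimHash so that *similar* browsing histories land in nearby cohorts; this toy version only guarantees that identical histories share a bucket, and the domains are made up:

```python
import hashlib

def cohort_id(visited_domains, n_cohorts=1000):
    """Toy stand-in for FLoC's cohort assignment: hash the (sorted, so
    order-independent) set of visited domains into one of n_cohorts
    buckets. The site only ever sees the bucket number, not the history."""
    history = "|".join(sorted(visited_domains))
    digest = hashlib.sha256(history.encode()).hexdigest()
    return int(digest, 16) % n_cohorts

a = cohort_id(["news.example", "shop.example"])
b = cohort_id(["shop.example", "news.example"])  # same history, same cohort
```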

  • Person of interest: Lou Montulli

    Montulli circa 1992 (photo courtesy of Lou Montulli)

    When Lou Montulli invented the cookie in 1994, he was a 23-year-old engineer at Netscape, the company that built one of the internet’s first widely used browsers. He was trying to solve a pressing problem on the early web: Websites couldn’t remember who their users were or what they had done in previous visits.

    He and his Netscape colleagues settled on the cookie as a way to help websites remember visitors without enabling cross-site tracking.

    Almost immediately, advertisers learned ways to essentially hack cookies to do exactly what Montulli had tried to avoid: follow people around the internet. Over time they created the system of cookie-based web tracking we have today. Twenty-seven years later, Montulli has some misgivings about how his invention has been used—but he has doubts about whether the alternatives will be any better.
    Read more here.

    What if antitrust regulators forced Google to sell Chrome? (Quartz) If Google were forced to sell Chrome due to monopoly concerns, a new owner might jettison third-party cookies sooner, creating headaches in the digital ad world.

    Google is done with cookies, but that doesn’t mean it’s done tracking you (Vox) Another take on FLoC and what information Google will still collect even after cookies are gone.

    No need to mourn the death of the third-party cookie (The Next Web) The case for publishers, advertisers, and consumers being better off without cookie tracking.

    Advertisers scramble for answers after Apple’s IDFA update (Digiday) Looming changes to Apple’s ID for Advertisers will be just as disruptive for the ad industry as Google’s deprecation of third-party cookies.

Source: The end of third-party cookies — Quartz