Source: THE PROGRAMMATIC POOP FUNNEL
At the end of April, Apple’s introduction of App Tracking Transparency tools shook the advertising industry to its core. iPhone and iPad owners could now stop apps from tracking their behavior and using their data for personalized advertising. Since the new privacy controls launched, almost $10 billion has been wiped from the revenues of Snap, Meta Platforms’ Facebook, Twitter, and YouTube.
Now, a similar tool is coming to Google’s Android operating system—although not from Google itself. Privacy-focused tech company DuckDuckGo, which started life as a private search engine, is adding the ability to block hidden trackers to its Android app. The feature, dubbed “App Tracking Protection for Android,” is rolling out in beta from today and aims to mimic Apple’s iOS controls. “The idea is we block this data collection from happening from the apps the trackers don’t own,” says Peter Dolanjski, a director of product at DuckDuckGo. “You should see far fewer creepy ads following you around online.”
The vast majority of apps have third-party trackers tucked away in their code. These trackers monitor your behavior across different apps and help create profiles about you that can include what you buy, demographic data, and other information that can be used to serve you personalized ads. DuckDuckGo says its analysis of popular free Android apps shows more than 96 percent of them contain trackers. Blocking these trackers means Facebook and Google, whose trackers are some of the most prominent, can’t send data back to the mothership, and neither can the dozens of advertising networks you’ve never heard of.
From a user perspective, blocking trackers with DuckDuckGo’s tool is straightforward. App Tracking Protection appears as an option in the settings menu of its Android app. For now, you’ll see the option to get on a waitlist to access it. But once turned on, the feature shows the total number of trackers blocked in the last week and gives a breakdown of what’s been blocked in each app recently. Open up the app of the Daily Mail, one of the world’s largest news websites, and DuckDuckGo will instantly register that it is blocking trackers from Google, Amazon, WarnerMedia, Adobe, and advertising company Taboola. An example from DuckDuckGo showed more than 60 apps had tracked a test phone thousands of times in the last seven days.
My own experience bore that out. Using a box-fresh Google Pixel 6 Pro, I installed 36 popular free apps—some estimates claim people install around 40 apps on their phones—and logged into around half of them. These included the McDonald’s app, LinkedIn, Facebook, Amazon, and BBC Sounds. Then, with a preview of DuckDuckGo’s Android tracker blocking turned on, I left the phone alone for four days and didn’t use it at all. In 96 hours, 23 of these apps had made more than 630 tracking attempts in the background.
Using your phone on a daily basis—opening and interacting with apps—sees a lot more attempted tracking. When I opened the McDonald’s app, trackers from Adobe, cloud software firm New Relic, Google, emotion-tracking firm Apptentive, and mobile analytics company Kochava tried to collect data about me. Opening the eBay and Uber apps—but not logging into them—was enough to trigger Google trackers.
At the moment, the tracker blocker doesn’t show what data each tracker is trying to send, but Dolanjski says a future version will show what broad categories of information each commonly tries to access. He adds that in testing the company has found some trackers collecting exact GPS coordinates and email addresses.
The beta of App Tracking Protection for Android is limited. It doesn’t block trackers in all apps, and browsers aren’t included, as the tool could mistake the websites people visit for trackers. In addition, DuckDuckGo says it has found some apps require tracking to be turned on to function; for this reason, it gives mobile games a pass. While the tool blocks Facebook trackers across other apps, it doesn’t support tracker blocking in the Facebook app itself. In DuckDuckGo’s settings, you can whitelist any other apps that don’t function properly with App Tracking Protection turned on.
The introduction of App Tracking Protection for Android comes at a time when ATT has pushed advertisers to Android, while also benefiting Apple. “ATT meaningfully changed how advertisers are able to target ads on some platforms,” says Andy Taylor, vice president of research at performance marketing company Tinuiti. The company’s own ads data shows Facebook advertising on Android grew 86 percent in September, while iOS growth lagged behind at 12 percent. At the same time, Apple’s ad business has tripled its market share, according to an analysis from the Financial Times. Around 54 percent of people have chosen not to be tracked using ATT, data from mobile marketing analytics firm AppsFlyer shows.
DuckDuckGo’s system is unlikely to have an impact anywhere near that scale and is more of a blunt tool. Unlike Apple, the company doesn’t own the infrastructure—the phones people use or the underlying operating systems—to enforce wholesale changes. Each time an app wants to track you, iOS presents you with a question: Do you want this app to track you? When you opt out, your device replaces the IDFA sent to advertisers with a string of zeros—essentially preventing them from tracking you. DuckDuckGo doesn’t have this luxury; its privacy browser app is installed on your phone like any other from the Google Play Store.
To make App Tracking Protection work, DuckDuckGo uses the same set of device permissions as a virtual private network (VPN). Dolanjski says that while Android phones will show the DuckDuckGo app as a VPN, it doesn’t work that way: no data is transferred off your phone, and the VPN runs locally on the device. In essence, the system blocks apps from making connections to the servers used for tracking. (When some trackers can’t communicate with their servers, they will make repeated attempts to do so, Dolanjski says, which can cause certain tracker counts to swell within the app. He adds that the company has seen no impact on battery life.)
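The core of a local blocker like the one described above is simple: check each outbound connection’s destination against a blocklist of known tracker domains and drop matches, letting everything else through. The sketch below is illustrative only, not DuckDuckGo’s actual code, and the blocklist entries are invented examples of the kind of domain-to-company mapping such a tool would maintain.

```python
from typing import Optional

# Hypothetical blocklist; real blocklists map thousands of domains
# to the tracking companies that own them.
TRACKER_DOMAINS = {
    "graph.facebook.com": "Facebook",
    "app-measurement.com": "Google",
    "api.taboola.com": "Taboola",
}

def owning_tracker(hostname: str) -> Optional[str]:
    """Return the tracker company behind a hostname, matching parent
    domains too (e.g. stats.app-measurement.com), or None."""
    parts = hostname.lower().split(".")
    for i in range(len(parts) - 1):
        candidate = ".".join(parts[i:])
        if candidate in TRACKER_DOMAINS:
            return TRACKER_DOMAINS[candidate]
    return None

def should_block(hostname: str) -> bool:
    """Drop the connection if its destination is a known tracker."""
    return owning_tracker(hostname) is not None
```

Matching parent domains is what lets a single blocklist entry cover the many subdomains trackers use, and recording the owning company per blocked connection is what produces the per-app breakdowns the DuckDuckGo app displays.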
At the time of writing, Google had not responded to a request for comment on apps using VPN configurations to block trackers across Android. Other apps on the Google Play Store—including Jumbo Privacy, a VPN app by Samsung, and Blokada—already use similar methods to block trackers, although they also offer wider privacy-focused tools and don’t act as browsers.
Google itself has gradually added more privacy controls in Android, including some that apply to apps. The company allows users to reset their ad IDs and to opt out of personalized ads. Following the launch of iOS 14.5, Google said that Android owners who opt out of personalized advertising will see their unique identifiers stripped to a series of zeroes—as is the case for iPhone owners who turn off tracking. The change is already rolling out on phones using Android 12 and will be made more widely available on other Android devices early next year.
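Both platforms’ opt-outs work the same way: the identifier handed to advertisers becomes a string of all zeros, identical for every opted-out device, so it identifies no one. A minimal sketch of what that looks like from an ad network’s side (the helper name is invented):

```python
# The zeroed identifier both iOS and Android return after a user
# opts out of personalized advertising.
ZEROED_AD_ID = "00000000-0000-0000-0000-000000000000"

def is_opted_out(ad_id: str) -> bool:
    """True if this ad identifier is the zeroed, opted-out value,
    which cannot distinguish one device from another."""
    return ad_id == ZEROED_AD_ID
```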
But for many people, the planned Android changes may not be enough. They don’t go as far as Apple’s alterations. DuckDuckGo’s Dolanjski argues that there’s very little transparency around the trackers currently employed in the apps people use every single day and that most people would be shocked at the amount they are tracked. For him, blocking trackers on Android is the next step in giving people more control over how companies handle their data. “It is going to dramatically reduce how much information these third-party companies get about you,” he says.
BuzzFeed’s four-year-old programmatic business is the engine of its advertising growth. That muscle will come in handy as publishers and marketers enter the usual frenzy of the fourth quarter against the backdrop of the delta variant threatening to derail ad creative and campaign budgets.
On its road to becoming a public company, the publisher wouldn’t break out specific programmatic revenue, but overall ad revenue for its second quarter increased 79% to $47.8 million. Advertising makes up 53% of its total second-quarter revenues with growth driven by higher pricing on programmatic ads and an increase in the total number of impressions sold.
Driving the higher pricing and more impressions is BuzzFeed’s answer to the dwindling efficacy of third-party cookies. Lighthouse, announced in March, helps increase marketers’ audience scale by finding previously unaddressable audiences and cutting down on marketing waste, ultimately selling more effective campaigns that nurture repeat business. Now, BuzzFeed uses some aspect of Lighthouse for almost all clients.
“The pandemic has taught us to be flexible,” Ken Blom, svp of ad strategy and partnerships, told Adweek, adding that buyers have always had access to first-party data through BuzzFeed’s private marketplaces and direct deals. But revenue from selling on the open marketplace is still a big part of BuzzFeed’s business, and buyers there still rely on third-party data.
Lighthouse on full beam
Today, 66% of publishers are driving revenue through their first-party data, according to OpenX research, which also found that 21% of publishers said first-party data was “very important” to revenue.
Advertisers using Lighthouse are consistently getting between three and four times increases in click-through rate compared with using just third-party data sources, according to Blom.
The addition of HuffPost and Complex Networks to BuzzFeed’s audience pool means it’ll have fewer scale woes than other publishers, making it less reliant on alternative identifiers like Unified ID or LiveRamp’s Authenticated Traffic Solution. Here, BuzzFeed is dipping its toes, going through the legal process with a couple of identifiers, though it wouldn’t share which.
“Universal IDs are not our strategy,” said Blom, “our strategy is cohorts and contextual. Come talk to us and we’ll tell you what we have.”
Publishers can make a decision about what ID solutions they integrate, but that only goes so far without advertiser demand, not to mention meeting ad-tech vendors and giants like Google somewhere in the middle.
Google’s deadline extension for cookie collapse gives BuzzFeed more time to test the capabilities of its own first-party data solutions combined with context, rather than scrambling to release a minimum viable product that marketers might question. Conversations haven’t slowed, as many feared they would, Blom said, but clients’ readiness spans a broad spectrum; those who are willing to test are still experimenting.
Performance from programmatic pipes
With cookies going away and contextual targeting on the rise, the thinking goes that ad buyers would do well to have more direct conversations with publishers to know if, say, the news team is setting up a health desk.
“The pendulum is swinging back to direct in some way,” said Blom. “The programmatic pipes are good, but how do we make them more performant?”
Due to client demand, BuzzFeed has tinkered with Spotlight, a three-year-old ad unit, to focus on the features inside it, what content runs next to it and what the creative looks like. It has rolled other ad functions from across the site into the unit: features like streaming video files, full-screen expansion capabilities and what it calls customizable and interactive solutions.
These include more tracking features and real-time performance measurement, and, of course, the unit weaves in BuzzFeed’s first-party data so it can run comparative tests and use machine learning to build audience cohorts. Driving performance and utility, the ultimate test of advertising ROI, will help it maintain a growth trajectory if it can prove its programmatic ads work.
This strategy of juicing up ad units makes sense for BuzzFeed to keep up with the post-cookie world since it has scale but could struggle to get people to share email addresses to access content (commerce features are a surer bet), said Dan Elddine, svp, data and technology, North America, Essence.
“We are starting to see a bifurcation among publishers talking about identity and first-party data solutions versus those talking about ad units and inventory types,” said Elddine.
So far, the ad unit has gone down well with entertainment and retail clients. One unnamed entertainment client used the unit to promote a video game launch, growing top-of-mind awareness by eight points. Another campaign, advertising a movie release and its date, drove awareness up by more than 14 points.
“Advertising can fuel revenue diversification, it’s a foundation and an anchor,” said Blom, “it helps what we do with licensing and commerce, if you have advertising without diverse revenue streams you’re leaving money on the table.”
A new Forrester report forecasts the end of digital and display advertising as we know it as consumers move away from experiences in which they can be interrupted.
That’s in part because consumers are putting more trust in digital assistants to make decisions on their behalf, but, naturally, it’s also because US marketers wasted roughly $7.4bn on display ads in 2016 – only 40% of which were seen by consumers.
In a blog post, James McQuivey, vice president and principal analyst at Forrester, said the “bombshell” report “fits nicely into the current backlash against major publishers and ad networks, including Google and Facebook” as advertisers re-examine their digital spend and demand more transparency.
But McQuivey said bigger change is afoot “because interruptions are coming to an end” as “interruption only works if consumers spend time doing interruptible things on interruption-friendly devices”.
He went on to add: “Once they can get what they want without leaving themselves open to interruptions — whether through voice interfaces or AI-driven background services — they will feel even more hostile to ad interruptions”.
And it is consumers’ “casual indifference to advertiser interests” that McQuivey said will enable consumers to inhabit an advertising-free world. For example, Alexa may soon answer most of the questions consumers have historically put to search engines, and digital assistants may collate and deliver highlights from users’ Facebook feeds so they don’t see sponsored posts.
“The question remaining is what role will marketers play in that hypermediated world,” McQuivey wrote.
Marketers should focus on building deeper relationships with their customers in 2017 – in part by investing in relationship technologies such as those that offer a real-time, single view of the customer, plus artificial intelligence that drives conversational relationships, McQuivey said.
Intelligent conversational relationships are possible via chatbots, chat interfaces and voice skills on in-home devices, but marketers must also ensure the conversations “sparkle with the brand personality the CMO has committed the company to,” McQuivey said.
What’s more, McQuivey said this will take investment, but his advice is to pay for it with the billions of dollars used on digital display advertising.
“When they do [divert digital display ad budgets], that will signal to everybody that the end of advertising is upon us. And that something much better is on its way,” McQuivey added.
Tech companies have repeatedly reassured the public that trackers used to follow smartphone users through apps are anonymous or at least pseudonymous, not directly identifying the person using the phone. But what they don’t mention is that an entire overlooked industry exists to purposefully and explicitly shatter that anonymity.
They do this by linking mobile advertising IDs (MAIDs) collected by apps to a person’s full name, physical address, and other personal identifiable information (PII). Motherboard confirmed this by posing as a potential customer to a company that offers linking MAIDs to PII.
“If shady data brokers are selling this information, it makes a mockery of advertisers’ claims that the truckloads of data about Americans that they collect and sell is anonymous,” Senator Ron Wyden told Motherboard in a statement.
“We have one of the largest repositories of current, fresh MAIDS<>PII in the USA,” Brad Mack, CEO of data broker BIGDBM told us when we asked about the capabilities of the product while posing as a customer. “All BIGDBM USA data assets are connected to each other,” Mack added, explaining that MAIDs are linked to full name, physical address, and their phone, email address, and IP address if available. The dataset also includes other information, “too numerous to list here,” Mack wrote.
A MAID is a unique identifier a phone’s operating system assigns to each individual device. For Apple, that is the IDFA, which Apple has recently moved to largely phase out. For Google, it is the AAID, or Android Advertising ID. Apps often grab a user’s MAID and provide it to a host of third parties. In one leaked dataset from a location tracking firm called Predicio, previously obtained by Motherboard, the data included the precise locations of users of a Muslim prayer app. That data was somewhat pseudonymized, because it didn’t contain the users’ names, but it did contain their MAIDs. Because of firms like BIGDBM, another company that buys the sort of data Predicio had could take that or similar data and attempt to unmask the people in the dataset simply by paying a fee.
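The de-anonymization described here is, mechanically, nothing more than a join: a “pseudonymized” dataset keyed by MAID merged against a broker’s MAID-to-PII table. The sketch below uses entirely invented data and names to show why a dataset that contains MAIDs is only one purchase away from containing names and addresses.

```python
# Invented, hypothetical data for illustration only.
location_records = [
    {"maid": "ab12cd34-0000-0000-0000-000000000001",
     "lat": 40.7128, "lon": -74.0060},
]

# A broker's "identity graph": MAID -> personal information.
identity_graph = {
    "ab12cd34-0000-0000-0000-000000000001":
        {"name": "Jane Doe", "address": "1 Example St"},
}

def deanonymize(records, graph):
    """Merge PII into every record whose MAID appears in the graph,
    turning a pseudonymous dataset into an identified one."""
    matched = []
    for record in records:
        pii = graph.get(record["maid"])
        if pii is not None:
            matched.append({**record, **pii})
    return matched
```

The join requires no statistical inference or auxiliary clues; as long as the same MAID appears in both datasets, the match is exact, which is why researchers argue the identifier is pseudonymous rather than anonymous.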
“Anyone and everyone who has a phone and has installed an app that has ads, currently is at risk of being de-anonymized via unscrupulous companies,” Zach Edwards, a researcher who has closely followed the supply chain of various sources of data, told Motherboard in an online chat. “There are significant risks for members of law enforcement, elected officials, members of the military and other high-risk individuals from foreign surveillance when data brokers are able to ingest data from the advertising bidstream,” he added, referring to the process where some third parties obtain data on smartphone users via the placement of adverts.
This de-anonymization industry uses various terms to describe their product, including “identity resolution” and “identity graph.” Other companies claiming to offer a similar service as BIGDBM include FullContact, which says it has 223 billion data points for the U.S., as well as profiles on over 275 million adults in the U.S.
“Our whole-person Identity Graph provides both personal and professional attributes of an individual, as well as online and offline identifiers,” marketing material from FullContact available online reads, adding that can include names, addresses, social IDs, and MAIDs.
“MAIDs were built for the marketing and advertising community, and are tied to an individual mobile device, which makes them precise in identifying specific people,” the material adds.
On a listing advertising its capability to link MAIDs to personal information, BIGDBM says “The BIGDBM Mobile file was developed from online providers, publishers and a variety of data feeds we currently obtain from a multitude of sources.” That listing did not specify which types of PII BIGDBM offers, so Motherboard posed as a potential customer interested in sourcing such data for a stealth startup.
BIGDBM did not respond to multiple requests for comment. FullContact did not respond to a list of questions, including whether its MAIDs and PII are collected with consent, and what sort of protections FullContact has in place to stop abuse of its capability to unmask the person behind a MAID.
Edwards said that the existence of companies that explicitly link MAIDs to personal information may provide issues under privacy legislation.
“This real-world research proves that the current ad tech bid stream, which reveals mobile IDs within them, is a pseudonymous data flow, and therefore not-compliant with GDPR,” Edwards told Motherboard in an online chat.
“It’s an anonymous identifier, but has been used extensively to report on user behaviour and enable marketing techniques like remarketing,” a post on the website of the Internet Advertising Bureau (IAB), a trade group for the ad tech industry, reads, referring to MAIDs. The IAB acknowledged but ultimately did not respond to multiple requests for comment asking if it still believes that MAIDs are anonymous.
In April Apple launched iOS 14.5, which introduced sweeping changes to how apps can track phone users by making each app explicitly ask for permission to track them. That move has resulted in a dramatic dip in the amount of data available to third parties, with just 4 percent of U.S. users opting in. Google said it plans to implement a similar opt-in measure broadly across the Android ecosystem in early 2022.
Apple and Google acknowledged requests for comment but did not provide a statement on whether they have a policy against companies unmasking the real people behind MAIDs.
Senator Wyden’s statement added “I have serious concerns that Americans’ personal data is available to foreign governments that could use it to harm U.S. national security. That’s why I’ve proposed strong consumer privacy legislation, and a bill to prevent companies based in unfriendly foreign nations from purchasing Americans’ personal data.”
James Rosewell could see his company’s future was in jeopardy.
It was January 2020, and Google had just announced key details of its plan to increase privacy in its Chrome browser by getting rid of third-party cookies and essentially breaking the tools that businesses use to track people across the web. That includes businesses like 51Degrees, the U.K.-based data analytics company Rosewell has been running for the last 12 years, which uses real-time data to help businesses track their websites’ performance.
“We realized at the end of January 2020 what Google was proposing was going to have an enormous impact on our customer base,” Rosewell said.
Under the banner of a group called Marketers for an Open Web, Rosewell filed a complaint with the U.K.’s Competition and Markets Authority last year, charging Google with trying to shut out its smaller competitors, while Google itself continued to track users.
But appealing to antitrust regulators was only one prong in Rosewell’s plan to get Google to delay its so-called Privacy Sandbox initiative. The other prong: becoming a member of the World Wide Web Consortium, or the W3C.
One of the web’s geekiest corners, the W3C is a mostly-online community where the people who operate the internet — website publishers, browser companies, ad tech firms, privacy advocates, academics and others — come together to hash out how the plumbing of the web works. It’s where top developers from companies like Google pitch proposals for new technical standards, the rest of the community fine-tunes them and, if all goes well, the consortium ends up writing the rules that ensure websites are secure and that they work no matter which browser you’re using or where you’re using it.
The W3C’s members do it all by consensus in public Github forums and open Zoom meetings with meticulously documented meeting minutes, creating a rare archive on the internet of conversations between some of the world’s most secretive companies as they collaborate on new rules for the web in plain sight.
But lately, that spirit of collaboration has been under intense strain as the W3C has become a key battleground in the war over web privacy. Over the last year, far from the notice of the average consumer or lawmaker, the people who actually make the web run have converged on this niche community of engineers to wrangle over what privacy really means, how the web can be more private in practice and how much power tech giants should have to unilaterally enact this change.
On one side are engineers who build browsers at Apple, Google, Mozilla, Brave and Microsoft. These companies are frequent competitors that have come to embrace web privacy on drastically different timelines. But they’ve all heard the call of both global regulators and their own users, and are turning to the W3C to develop new privacy-protective standards to replace the tracking techniques businesses have long relied on.
On the other side are companies that use cross-site tracking for things like website optimization and advertising, and are fighting for their industry’s very survival. That includes small firms like Rosewell’s, but also giants of the industry, like Facebook.
Rosewell has become one of this side’s most committed foot soldiers since he joined the W3C last April. Where Facebook’s developers can only offer cautious edits to Apple and Google’s privacy proposals, knowing full well that every exchange within the W3C is part of the public record, Rosewell is decidedly less constrained. On any given day, you can find him in groups dedicated to privacy or web advertising, diving into conversations about new standards browsers are considering.
Rather than asking technical questions about how to make browsers’ privacy specifications work better, he often asks philosophical ones, like whether anyone really wants their browser making certain privacy decisions for them at all. He’s filled the W3C’s forums with concerns about its underlying procedures, sometimes a dozen at a time, and has called upon the W3C’s leadership to more clearly articulate the values for which the organization stands.
His exchanges with other members of the group tend to have the flavor of Hamilton and Burr’s last letters — overly polite, but pulsing with contempt. “I prioritize clarity over social harmony,” Rosewell said.
To Rosewell, these questions may be the only thing stopping the web from being fully designed and controlled by Apple, Google and Microsoft, three companies that he said already have enough power as it is. “I’m deeply concerned about the future in a world where these companies are just unrestrained,” Rosewell said. “If there isn’t someone presenting a counter argument, then you get group-think and bubble behavior.”
But the engineers and privacy advocates who have long held W3C territory aren’t convinced. They say the W3C is under siege by an insurgency that’s thwarting browsers from developing new and important privacy protections for all web users. “They use cynical terms like: ‘We’re here to protect user choice’ or ‘We’re here to protect the open web’ or, frankly, horseshit like this,” said Pete Snyder, director of privacy at Brave, which makes an anti-tracking browser. “They’re there to slow down privacy protections that the browsers are creating.”
Snyder and others argue these new arrivals, who drape themselves in the flag of competition, are really just concern trolls, capitalizing on fears about Big Tech’s power to cement the position of existing privacy-invasive technologies.
“I’m very much concerned about the influence and power of browser vendors to unilaterally do things, but I’m more concerned about companies using that concern to drive worse outcomes,” said Ashkan Soltani, former chief technologist to the Federal Trade Commission and co-author of the California Consumer Privacy Act. Soltani likened the deluge of procedural interjections from Rosewell and others to a “denial of service attack.”
But what is perhaps more alarming, Soltani and Snyder argue, is that the new entrants from the ad-tech industry and elsewhere aren’t just trying to derail standards that could hurt their businesses; they’re proposing new ones that could actually enshrine tracking under the guise of privacy. “Fortunately in a forum like the W3C, folks are smart enough to get the distinction,” Soltani said. “Unfortunately, policymakers won’t.”
The tension inside the community isn’t lost on its leaders, though they frame the issue somewhat more diplomatically. “It’s exciting to see so much attention to privacy,” said Wendy Seltzer, strategy lead and counsel to the W3C, “and with that attention, of course, comes controversy.”
And with that controversy comes a cost. Longtime members of the organization said that at its best, the W3C is a place where some of the brightest minds in the industry get to come together to make technology work better for everyone.
But at its worst, they worry that dysfunction inside the W3C groups may send a dangerous and misleading message to the global regulators and lawmakers working on privacy issues — that if the brightest minds in the industry can’t figure out how to make privacy protections work for everyone, maybe no one can.
Do Not Track 2.0
If any of this sounds like history repeating itself, that’s because it is. About a decade ago, the W3C was the site of a similar industrywide effort to build a Do Not Track feature that would allow users to opt out of cross-site tracking through a simple on-off switch in their browsers. The W3C created an official working group to turn the idea into a formal standard, and representatives from tech giants — including Yahoo, IBM and Microsoft — as well as a slew of academics and civil society groups signed up to help.
Separate from community groups, interest groups and business groups, all of which facilitate informal conversations among developers inside the W3C, working groups are supposed to be where actual technical standards get written, finalized and, hopefully, adopted by key companies sitting around the virtual table. Working groups are, in other words, where ideas for new standards go when they’re ready for primetime.
“This seemed like the game,” Justin Brookman, director of consumer privacy at Consumer Reports, said of the Do Not Track working group. He briefly chaired the group while he was working for the Center for Democracy and Technology. “The browsers were going to implement it, and the browsers have a lot of power,” he said.
But the Tracking Protection Working Group, as it was called, ended up being where Do Not Track went to die. Over the course of years, members — who, in keeping with the W3C tradition, were tasked with reaching decisions by consensus — couldn’t come to an agreement on even the most basic details, including “the meaning of ‘tracking’ itself,” Omer Tene, vice president of the International Association of Privacy Professionals, wrote in a 2014 Maine Law Review case study.
Perhaps it should have been a clear sign Do Not Track was doomed when, Tene wrote, the group tried to settle its dispute over the definition of tracking by seeing which side could hum loudest. “Addressing this method, one participant complained, ‘There are billions of dollars at stake and the future of the Internet, and we’re trying to decide if one third-party is covered or didn’t hum louder!'” Tene wrote.
But both Tene and Brookman seem to agree that what really put Do Not Track underground was Microsoft’s decision to turn the signal on by default in Internet Explorer. Ad-tech companies that had banked on only a sliver of web users actually opting out of tracking resented a browser unilaterally making that decision for all of its users. Suddenly, Brookman said, they lost interest in participating in discussions at all. “They totally made a meal out of it,” he said, comparing their response to soccer players flopping on the field. “They totally exaggerated for effect to try to get out of doing this.”
Because the W3C’s standards are voluntary, no one was under any real obligation to heed the Do Not Track signal, effectively neutering the feature. Browsers could send a signal indicating a user didn’t want to be tracked, but websites and companies powering their ads didn’t (and don’t) have to listen.
In his post-mortem on the ordeal, Tene summed up the Do Not Track effort succinctly: “It was protracted, rife with hardball rhetoric and combat tactics, based on inconsistent factual claims, and under constant threat of becoming practically irrelevant due to lack of industry buy-in.”
For anyone participating in today’s privacy discussions inside the W3C, it’s a description that sounds eerily familiar.
Enter the Privacy Sandbox
After the Do Not Track debacle, Soltani dropped out of the W3C for years, focusing instead on helping draft and pass the California Consumer Privacy Act, or CCPA. That law — and its successor, the California Privacy Rights Act — actually requires websites to accept a browser signal from California users who want to opt out of the sale of their information. The global privacy control, as that signal is called, effectively paired the essence of Do Not Track with the force of law, albeit only for Californians.
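Mechanically, the signal is tiny. A browser with the control enabled attaches a “Sec-GPC: 1” header to its requests (and exposes a navigator.globalPrivacyControl flag to page scripts), and a covered site is supposed to treat that as an opt-out of data sale. A minimal sketch of the server-side check — the helper name here is illustrative, not part of the spec:

```python
def honors_gpc(headers):
    """Return True if a request carries a Global Privacy Control signal.

    Per the GPC proposal, participating browsers send the request header
    "Sec-GPC: 1"; a site subject to CCPA/CPRA should treat that as an
    opt-out of the sale of the visitor's data.
    """
    # HTTP header names are case-insensitive, so normalize before checking.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"


# Hypothetical incoming request headers, for illustration:
request_headers = {"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"}
if honors_gpc(request_headers):
    print("treat visitor as opted out of data sale")
```

The point of standardization is exactly this simplicity: one well-defined header that every browser can send and every site can check, instead of a per-site preference buried in account settings.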
When Soltani returned to the W3C in spring 2020, he wanted to turn the global privacy control into a W3C-approved standard, hoping that would lead to more industry adoption among leading browsers. Already, privacy-conscious browsers like Brave and DuckDuckGo have implemented the control, and major players including The Washington Post, The New York Times and WordPress are accepting the signal. But Soltani believed the standard could be improved with the W3C treatment. “Every technical standard is worth discussing in an open forum,” Soltani said. “It exposes bugs, issues and unforeseen edge cases.”
But his reentry into the community gave him deja vu. “Having not engaged in the W3C for years, it was very apparent I was walking back into what my experience was with Do Not Track, but 10 times worse,” Soltani said.
One reason for that: Google had chosen W3C as the venue for developing an array of new privacy standards that were part of its Privacy Sandbox initiative. “We provide Chrome to billions of users, so we really have an immense responsibility to those users,” said Google’s Privacy Sandbox product manager Marshall Vale. “One of the reasons that we are engaged in so many parts of the W3C is to really make sure that that dialogue and evolution of the web really happens in the open.”
One of Google’s proposed standards — Federated Learning of Cohorts, or FLoC for short — would eliminate the ability for advertisers to track specific users’ web behavior with cookies, but would instead divide Chrome users into groups based on the websites they visit. Advertisers could then target those groups based on their inferred interests.
That proposal spurred a backlash from both privacy advocates and companies that rely on third-party tracking. The privacy side argued individuals’ interests might be easy to reverse-engineer, and that targeting groups of people based on their interests would still enable discriminatory advertising. The other side accused Google of trying to kill their companies and hoard user data for themselves. And browser vendors by and large rejected the technology altogether.
The Privacy Sandbox announcement inspired a flurry of newcomers, including from the ad-tech world, to join W3C in response. “It was supposed to be my task to find out what’s going on with FLoC and build a bridge so we could connect to it,” said one ad-tech newcomer who asked for anonymity, because he didn’t have permission to speak on his company’s behalf. “It looked like the real conversation was the one happening at the W3C, and by real, I mean the one where Google was actually listening.”
In fact, Google wasn’t just listening, it was responding. The basic rules of etiquette within W3C hold that participants don’t just get to have their say, they get to have a dialogue. “Our process starts by assuming good faith and engaging with all of the participants as they address the concerns they’re raising,” W3C’s Seltzer said.
That can promote useful exchanges when members are offering constructive criticism. But the policy of hearing everyone out can also grind progress to a halt.
A war of words
That’s what Soltani said happened when he tried to present the global privacy control proposal to W3C’s privacy community group. His most vocal detractor? Rosewell.
Rosewell jumped into the conversation to challenge not the specifics of the technology, but instead the very idea in which it was grounded. He objected to the notion that the W3C, which is a global community, should be turning policy from a single U.S. state into a technical standard, arguing that members might not be so thrilled if the W3C wanted to standardize policies from countries like China or India. “This is a Pandora’s box,” Rosewell wrote of the global privacy control in one October message. “Should web browsers really become implementation mechanisms of specific government regulation?”
Before conversation about standardizing the global privacy control even moved forward, Rosewell argued the W3C Advisory Committee should step in to first determine “if there is an appetite among W3C members” to continue.
The suggestion stunned long-time members, who said taking such a vote and foreclosing an entire category of proposals runs counter to the way the W3C has always operated. “It’s not how any of this stuff works. The W3C is not a Senate of the web. It’s a standards body for people who want to build things and collaborate with each other,” Brave’s Snyder said. “It’s not the kind of thing that anybody has ever voted on before.”
Indeed, while Seltzer wouldn’t comment on any specific altercations, she said W3C leaders are aware of general concerns about these tactics. “There is no process for calling work to a halt,” Seltzer said.
But Rosewell’s certainly not alone in trying. Almost anywhere you can find a browser putting forward a new privacy proposal within the W3C, you can find profound philosophical opposition from members whose companies rely on third-party data. “At least some of this seems aimed towards legislation,” Snyder, who co-chairs the W3C’s Privacy Interest Group, said. “Which is to say, if they can make the waters muddy so it looks like there’s no agreement on the web, quote-unquote, then [regulators] shouldn’t be enforcing these things.”
One particularly contentious fight broke out this spring in a wonky discussion about a technique called bounce tracking, a workaround in which a site briefly redirects visitors through a tracker’s own domain so the tracker can set cookies as a first party, circumventing third-party tracking bans.
John Wilander, a security and privacy engineer at Apple, wanted the privacy group’s thoughts on how browsers might put an end to the practice. The conversation caught the attention of Michael Lysak, a developer at ad-tech firm Carbon, who began raising concerns about how Apple tracks its own users.
Wilander politely told Lysak his comments were out of scope, which is W3C-speak for: Take your bellyaching elsewhere. “Please refrain from discussing other things than bounce tracking protection here since doing so makes it harder to stay focused on what this proposal is about,” Wilander wrote.
Lysak continued on with another jab at Apple’s motives: “If a proposal kills tracking for some businesses and not others, that is in scope as it violates W3 rules for anti competition, especially if the proposer’s company directly benefits.”
Wilander shot back again: “I filed this issue and the scope is bounce tracking protection.”
Others were piling on too. Robin Berjon of The New York Times cited a study about users’ privacy expectations, writing, “It’s overwhelmingly clear that users expect their browsers to protect their data.” Lysak replied with a study of his own — one published by the ad industry — that argued differently. Erik Anderson of Microsoft, who co-chairs the group, chimed in asking everyone to focus on the topic at hand.
Wilander responded with a thumbs up emoji. And round and round they went.
Rosewell was there too, largely co-signing Lysak’s arguments about competition. “You make some interesting points,” he wrote to Lysak.
But Rosewell was also there to promote and explain a proposal of his own, another avian-themed standard called SWAN. Unlike FLoC, SWAN would allow publishers and ad-tech companies that join the SWAN network to share unique identifiers about web users. Those users could opt out of personalized ads from any companies in the network, and SWAN member companies would be bound by a contract to abide. But those companies could still use their unique IDs for other purposes, like measuring responsiveness to an ad and optimizing ad content.
To Rosewell, SWAN presents a sort of middle ground, giving web users the choice to turn off personalized ads, but giving ad-tech companies and publishers the data they want as well. But Soltani called SWAN and other industry-led proposals that preserve some level of data sharing “privacy washing,” because they would allow for data sharing even in browsers that have sought to prevent it.
“[They’re] saying: We’re going to define privacy as profiling for ads, but we’re going to collect your information for all these other purposes, too,” Soltani said.
No, you’re privacy washing
If the privacy advocates inside the W3C have been put off by Rosewell’s approach, he hasn’t exactly been charmed by theirs either. “I’ve been — I don’t know what the right word is — somewhere between upset and shocked at just how much of a sort of vigilante group the W3C truly is,” Rosewell said.
From his perspective, browsers have too much power over the community, and they use that power to quash conversations that might make them look bad. In fact, he charged Apple itself with “privacy washing.” Apple, he said, has forged ahead with third-party privacy protections, but has taken in billions of dollars a year from Google to feature its not-so-private search engine on iPhone users’ phones.
“Google doesn’t pay [Apple] $12 billion a year just for the kudos of having their logo on an Apple phone. They do it for the data that the deal generates,” Rosewell said.
Rosewell rejects the idea that he is pushing for weaker privacy protections. “I am absolutely on the privacy side of things. I would be aggrieved if I was characterized as anti-privacy,” he said, pointing to SWAN as an example of how he’s trying to advance the cause.
The problem, as he sees it, is that privacy has been ill-defined within W3C. “Until you define privacy, until you define competition, everything becomes an opinion,” he said. “And what happens is it’s those with the most influence that end up dominating the debate.”
He believes it’s a “crap argument” to say that philosophical or even legal questions like this are out of scope in a technical standards body. If the W3C only talked about technical standards, he argued, its members wouldn’t be so focused on a fuzzy concept like privacy. “We are interested in the impact of technical standards and technical choices in practice, and we should be. Of course we should be. Otherwise unintended consequences occur,” he said. “But what gets to be talked about is very self-serving.”
The ad-tech newcomer who spoke with Protocol was similarly frustrated by the community’s culture. “When you’re going up against powerful companies that are very entrenched in the W3C, and you’re saying something they don’t want said, it can feel as though you’re being gaslit, given contradictory information on rules that aren’t applied later,” he said.
Rosewell said he’s taken it upon himself to be vocal about these concerns inside the W3C primarily because few other people can be. One concern shared by both Rosewell and the people who disagree with him is that the W3C’s membership fees and the time commitment these conversations require make it so giant companies with thousands of employees can pack W3C groups with members and float endless proposals, while smaller companies or individuals working on these issues part-time struggle to keep up.
“The advantage Google has in numbers is not so much the number of participants, but the sheer size of the teams they have on these projects,” said one privacy advocate, who was not granted permission by his company to speak on the record. “I can get maybe 20% of two people’s time, that might be enough to produce one or two drafts per quarter. Google could ship a spec every week, and that means they can take up a lot of space.”
Indeed, 40 of the 369 members in the Improving Web Advertising business group work for Google. Vale, of Google, rejected the idea that this might make the community lopsided in Google’s favor, arguing that when it comes down to actually finalizing a specification, every company gets just one vote. “That’s how the W3C operates and makes sure that the voices of the various constituents and the members are really represented,” Vale said.
Still, there is an awful lot of conversation that happens before a standard gets to that stage. Those are the conversations happening right now. So when Google introduced the Privacy Sandbox, Rosewell figured he had the time, the freedom and the motivation to dive head first into those conversations. “As far as the tenacity is concerned,” he said, “if people are acting in good faith, then there should be a debate.”
Rosewell’s “tenacity” has certainly been convenient for Facebook, a company that relies on third-party tracking to sell ads but is in no position to publicly challenge any other company’s privacy proposals after its own seemingly endless parade of privacy scandals. Instead, while Rosewell lobs bombs and takes the brunt of the fire from other W3C members, Facebook’s generals are busy negotiating peace treaties.
Just last month, Facebook engineer Ben Savage drafted a proposal that would give web users more choice over the interests their browsers assign to them. The idea, which Savage presented to members of the privacy community group, was so well-received even Soltani walked away thinking it just might work. Savage has also worked closely in the W3C with Apple’s Wilander to nail down new fraud prevention techniques for Safari, peppering his comments with smiley faces, as if to say, “I come in peace.”
But emoticons aside, it’s clear Facebook has as much riding on the outcome of these discussions as anyone. Among the tech giants at the table, Facebook is the only one that doesn’t have its own browser or its own operating system. But it does collect boatloads of data on billions of people around the world. As Apple takes direct aim at Facebook in public, people like Savage are working behind the scenes to push Apple engineers on technical remedies that might preserve Facebook’s existing business.
In April, Facebook Chief Operating Officer Sheryl Sandberg more or less admitted as much, saying on the company’s quarterly earnings call that Facebook was working with the W3C community on a way through some of the “headwinds” posed by Apple’s mobile privacy updates.
It was a blink-and-you-miss-it moment, but Soltani didn’t miss it, viewing it as yet another example of an ad-reliant tech company trying to sway the W3C. “Telling that @Facebook’s @sherylsandberg cites opportunities in @w3c when discussing their ‘regulatory roadmap’ on today’s Q1 earnings call,” he tweeted at the time. “#Bigtech has long known they can leverage standards groups to benefit their business goals.”
This year, Facebook put forward a candidate to serve on the W3C’s advisory board, and in a recent meeting, Facebook volunteered to chair a possible working group on privacy-enhancing technologies. “In the last six months they’ve become a lot more vocal on these subjects, which is fantastic,” Rosewell said, noting that Savage in particular has “done a great job in articulating an alternative voice.”
Still, despite Savage’s attempts at collaboration, there are times his frustration with powerful players inside W3C — namely Apple — has boiled over. In a lengthy Twitter thread last week, Savage accused Apple of “egregious behavior,” saying that while Google has been developing alternatives to tracking out in the open, Apple decided to “blow up” the world of web advertising and only started “thinking about what to replace it with later.”
He charged Apple with trying to push app developers away from advertising business models and toward fee-based apps, “where Apple takes a 30% cut,” striking a note about anti-competitive practices that sounded not unlike Rosewell. “Using the pretext of privacy to kill off the ads-funded business model, in order to push developers to fee based models Apple can tax doesn’t stop being anti-competitive if they lower their cut,” Savage wrote. “And their own apps will always have a 0% tax.”
Apple did not respond to Protocol’s request for comment.
Facebook also declined to make Savage or any other engineer available for comment, but in a statement, the company said of W3C, “These forums allow us to submit and debate new approaches to address common industry issues like how to measure ads and prevent ad fraud while still protecting people’s privacy. All of our suggestions are public and we encourage people to take a look at them.”
Here come the refs
Vale of Google also said the W3C has been instrumental to working out new privacy proposals. He gave W3C members credit for the development of one proposal in particular, called FLEDGE. “We’ve really shown that we’ve taken the input here from many members, whether it’s on the privacy side, or the browsers, or ad tech, and incorporated them into our ideas and proposals,” Vale said. “We’re listening.”
Of course, it’s also in Google’s interest to appear collaborative — now more than ever. Earlier this year, the U.K.’s competition authority took Rosewell’s group, Marketers for an Open Web, up on its complaint, agreeing to investigate the Privacy Sandbox for anti-competitive behavior.
At that, Google blinked, announcing last month that it would delay its plans to kill off third-party cookies another year, in order to “give all developers time to follow the best path for privacy,” a company blog post read. As part of its negotiations, the U.K.’s CMA said it would play a “key oversight role” in reviewing Privacy Sandbox proposals “to ensure they do not distort competition.”
Google also said last month that once third-party cookies are phased out, it would no longer use browsing history to target or measure ads or create “alternative identifiers” to replace cookies. That blog post was signed not by Vale or another engineer, but a member of Google’s legal department.
“Good news. Google won’t kill the open web this year,” Rosewell’s group wrote in a press release following the recent announcement. But the group also vowed to power on, arguing that Google’s commitments so far only cover a small subset of the data it uses to track people. “The proposed settlement agreement is hollow because it does not actually remove data that matters,” Rosewell said.
To him, the CMA’s announcement was, in other words, just a solid start. But to Soltani and others, Google’s decision was the predictable conclusion of a drama they’d watched play out inside the W3C, which is, in some ways, just a microcosm of the larger debate happening in countries around the world.
Regulators in the U.K., he said, had bought the ad industry’s argument that privacy and competition are on a collision course. That, he said, is a false choice. “They could have required everyone to not access that data, Google included, which would have been a net benefit for competition and privacy,” Soltani said.
But regulators appeared to overlook that option and are now using their power to pressure Google to put off changes that would make the world’s most widely-used browser a little more private. “Sigh,” Soltani wrote in an email last month, linking to Google’s announcement. “James & Co succeeded.”
How the ad exchanges are sending your money to a very bad man
Welcome back to BRANDED, the newsletter exploring how marketers broke society (and how we can fix it).
Seb Gorka is a neo-Nazi. This is not an opinion or a personal interpretation. It’s a well-documented fact.
He has worked closely with antisemitic politicians and written for openly antisemitic newspapers. He has publicly worn Nazi insignia. When asked about it, he has said he inherited the insignia on the “merits of [his] father,” who was a member of the Hungarian neo-Nazi group the Vitézi Rend. There is strong evidence that Gorka himself has sworn a lifetime oath to the group.
If that isn’t enough, he publicly incited and encouraged the January 6th insurrection, claiming that “patriots” have “taken over Capitol Hill.” He has also publicly promoted voter fraud disinformation on his website, SebGorka.com.
All this makes him extremely — as we like to say in the advertising biz — brand unsafe. Even YouTube has banned him. But that hasn’t stopped Seb Gorka from running a profitable disinformation outlet. With help from a handful of adtech companies that either don’t know or don’t care about where your ad dollars go, Seb Gorka is cashing in.
How is this happening? There are two ways: an obvious way and a less obvious fraudy way. We’re going to explore both of them here. Grab some popcorn, we’re going to defund a Nazi today, folks!
1. The normal, regular ad placement method (simple!)
Ad placements are the obvious way SebGorka.com makes money. The following companies are serving ads on SebGorka.com, in violation of their own Publisher Policies and Acceptable Use Guidelines:
- Criteo (of course)
- The Trade Desk
How do we know this? We hovered over the ads and took screenshots. You already know how all this works. It gets more interesting from here.
2. The shady revenue-sharing ring method (sketchy!)
SebGorka.com doesn’t just earn money through ad placements.
SebGorka.com also monetizes by using shared DIRECT IDs — an ID meant for one publisher — across a shared pool of unrelated websites. This practice not only artificially inflates its CPM rates (cost per thousand impressions), but also allows it to get paid untraceable sums of money by an LLC.
Let’s break this down together.
We’ve talked about ads.txt before, but it’s important for us all to understand what a hot mess it really is.
Ads.txt is a protocol developed by IAB Tech Lab to bring transparency to the supply chain. An advertiser should be able to cross-check a publisher’s ads.txt directory against each exchange’s counterpart directory, known as sellers.json. If the data matches up, this tells the advertiser that things are fine.
In theory, these two specs allow advertisers and ad exchanges to verify that their ads are being placed on the correct inventory and that they’re paying the correct company. Here’s how it’s supposed to work:
In sellers.json, sellers can list themselves as one of three Seller Types (“seller_type”).
Publisher: If a seller is listed as a PUBLISHER, that means the ad exchange pays it directly for inventory on sites or apps that the seller itself owns and operates.
Intermediary: If a seller is listed as an INTERMEDIARY, that means it is reselling inventory it doesn’t own on behalf of other companies.
Both: The seller has been approved by the ad exchange both as a PUBLISHER and an INTERMEDIARY.
Why does any of this matter? Because advertisers are willing to pay a premium for DIRECT and PUBLISHER ads because it signals to them that they have 1) a higher quality audience and 2) less potential for fraud.
But in practice, publishers like SebGorka.com have correctly surmised that ad exchanges aren’t performing the necessary checks — which means they’re able to declare themselves DIRECT and PUBLISHER across an unlimited number of websites and funnel the revenues into a single shared account.
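To make the mislabeling concrete, here is a minimal sketch of the cross-check an exchange or a vigilant buyer could run. The record shapes follow the IAB ads.txt and sellers.json specs, but the domains, IDs, and function names below are made up for illustration:

```python
def parse_ads_txt(text):
    """Parse ads.txt lines into (exchange_domain, account_id, relationship)."""
    records = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append((fields[0].lower(), fields[1], fields[2].upper()))
    return records


def check_direct_claims(site, ads_txt_text, sellers_by_id):
    """Flag DIRECT entries whose sellers.json record doesn't match the site.

    sellers_by_id maps an exchange's seller_id to its sellers.json record,
    e.g. {"1234": {"seller_type": "PUBLISHER", "domain": "example.com"}}.
    A DIRECT claim is suspicious when the seller is typed PUBLISHER but its
    registered domain belongs to some other company -- the pattern above.
    """
    flags = []
    for exchange, account_id, relationship in parse_ads_txt(ads_txt_text):
        if relationship != "DIRECT":
            continue
        seller = sellers_by_id.get(account_id)
        if seller is None:
            flags.append((exchange, account_id, "no sellers.json record"))
        elif seller.get("seller_type") == "PUBLISHER" and seller.get("domain") != site:
            flags.append((exchange, account_id, "PUBLISHER domain mismatch"))
    return flags
```

Run against a hypothetical ads.txt where SebGorka.com declares a DIRECT ID that an exchange’s sellers.json registers to a different company, the mismatch falls out in one pass — which is exactly why the absence of this check is so conspicuous.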
In Australia, this is formally recognized as a form of ad fraud, and is illegal. In America, this is still just uh… fraudy.
Here’s what we found in SebGorka.com’s ads.txt
We looked at the DIRECT IDs on SebGorka.com — and they tell an interesting story:
- SebGorka.com is sharing DIRECT IDs with a number of local CBS, ABC, and NBC news affiliates.
- Frankly Media LLC appears to be employing these shared DIRECT IDs for a lot of publishers. Check out this list of Google results for just one ID.
- Frankly Media is owned by Engine Media Holdings, a publicly traded company on Toronto’s TSX Venture Exchange.
And in the middle of it all are the ad exchanges.
Here’s what’s going on in SebGorka.com’s ads.txt
The following companies are allowing mislabeling, which lets publishers get away with misrepresenting themselves and misleads advertisers into thinking they’re paying for direct inventory when the reality is anything but:
RUBICON PROJECT (NOW MAGNITE)
It’s fraudy because: Only publishers should be calling themselves “publishers,” and Frankly Media is not a publisher. This could be an attempt to defraud not-so-vigilant buyers and suppliers with poor supply optimization. Also, Frankly Media is not a DIRECT seller of SebGorka.com.
It’s fraudy because: Frankly Media is not a publisher, and certainly not a publisher of Newsweek. Also, Frankly Media is not a DIRECT seller of SebGorka.com.
It’s fraudy because: Frankly Inc. (what happened to Frankly Media LLC???) is not a publisher, and certainly not a publisher of 41NBC. Also Frankly Media is not a DIRECT seller of SebGorka.com.
It’s fraudy because: Frankly Media LLC is not a DIRECT seller of SebGorka.com.
It’s fraudy because: Frankly Media LLC is not a PUBLISHER of SebGorka.com, and Frankly Media LLC is not a DIRECT seller of SebGorka.com.
SPOTX (NOW MAGNITE)
It’s fraudy because: Engine Media is not a DIRECT seller of SebGorka.com.
Is Frankly Media LLC a dark pool sales house?
It sure looks like Frankly Media LLC has been going around to all the ad exchanges and telling some fibs.
They’ve told some of them that they own and operate SebGorka.com — but they don’t. They’re telling others that they are both PUBLISHER and INTERMEDIARY for SebGorka.com — and that’s not true either (SebGorka.com is owned by Salem Media Group).
But one thing is probably true: Frankly Media LLC and/or its parent company Engine Media Holdings is making payouts to SebGorka.com. But how much? This is where we hit a dead end.
Frankly Media LLC is what we call a dark pool sales house. We don’t know how much it’s earning for SebGorka.com through this revenue-sharing ring and we don’t know how it’s distributing the collective proceeds across its members.
Maybe they have a profit-sharing contract? Maybe they pay out SebGorka.com through yet another shell company? Anyone looking at these records would never know.
So what do we do now?
What we’ve just uncovered is a Nazi infiltrating the adtech supply chain through the very pipelines designed to keep someone like him out of it. If Frankly Media LLC can use the same DIRECT ID across NBC affiliates and a Nazi’s media outlet, we are in trouble.
Last summer when we first reported on ads.txt revenue-sharing, we noted the national security implications of not knowing where our advertiser dollars are being funneled:
One of the reasons that this is still legal, or not explicitly known to be illegal, is that ads.txt is only three years old and there hasn’t been, to our knowledge, any major investigative research into the consequences of its design… Ads.txt is a global standard, used in international markets. This giant security hole opens markets and mediascapes around the world to foreign propaganda, hate groups, money laundering, and, of course, fraud.
If ad exchanges don’t enforce ads.txt, all these problems run rampant.
Once again, it’s up to us, the advertisers, to take charge of our own ads. If you want to keep your ad budgets away from Seb Gorka, you will have to add the website “SebGorka.com” to your exclusion list. You will also need to block Frankly Media and its DIRECT seller IDs. You can find them at sebgorka.com/ads.txt.
Thanks for reading,
Nandini & Claire
NOTE: Originally, the first sentence of this post said “Seb Gorka is a Nazi.” We have updated this to read “neo-Nazi” because it is debated whether the word Nazi is a general term for someone’s ideology or if it refers specifically to a member of the historic National Socialist German Workers’ Party of Germany.
When Lou Montulli invented the cookie in 1994, he was a 23-year-old engineer at Netscape, the company that built one of the internet’s first widely used browsers. He was trying to solve a pressing problem on the early web: Websites had lousy memories. Every time a user loaded a new page, a website would treat them like a stranger it had never seen before. That made it impossible to build basic web features we take for granted today, like the shopping carts that follow us from page to page across e-commerce sites.
Montulli considered a range of potential solutions before settling on the cookie, as he later explained in a blog post. A simpler solution might have been to just give every user a unique, permanent ID number that their browser would reveal to every website they visited. But Montulli and the Netscape team rejected that option for fear that it would allow third parties to track people’s browsing activity. Instead, they settled on the cookie—a small text file passed back and forth between a person’s computer and a single website—as a way to help websites remember visitors without allowing people to be tracked.
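The mechanism Montulli landed on still works roughly the same way today: the site sends a Set-Cookie header, the browser stores the small value, and the browser sends it back — to that site only — on later requests. A minimal sketch using Python’s standard library, with an illustrative shopping-cart value:

```python
from http.cookies import SimpleCookie

# Server side: the website asks the browser to remember a cart.
response_cookie = SimpleCookie()
response_cookie["cart_id"] = "abc123"
response_cookie["cart_id"]["path"] = "/"
set_cookie_header = response_cookie["cart_id"].OutputString()
# The string the server would place in its Set-Cookie response header.

# Browser side: on the next request to the SAME site, the browser echoes
# the value back, letting the site recognize the returning visitor.
request_cookie = SimpleCookie()
request_cookie.load("cart_id=abc123")
print(request_cookie["cart_id"].value)  # abc123
```

Note what the design deliberately withholds: the value is scoped to one site, so it carries no global identity — which is exactly the constraint advertisers later found ways around with third-party cookies.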
Within two years, advertisers learned ways to essentially hack cookies to do exactly what Montulli had tried to avoid: follow people around the internet. Eventually, they created the system of cookie-based ad targeting we have today. Twenty-seven years later, Montulli has some misgivings about how his invention has been used—but he has doubts about whether the alternatives will be any better.
This conversation has been edited for length and clarity.
QZ: What was your goal when you were creating the cookie?
We designed cookies to exchange information only between users and the website they visited. The founders of Netscape and many of the other denizens of the internet in that age were really privacy-focused. This was something that we cared about, and it was pervasive in the design of the internet protocols we built. So we wanted to build a mechanism where you could be remembered by the websites that you wanted to remember you, and you could be anonymous when you wanted to be anonymous.
How did you feel when you started seeing advertisers exploit cookies to track people?
That wasn’t something that we had really anticipated sites doing—although I guess one could have followed the money and could have imagined this happening. We became aware of this in 1996, and it was certainly very surprising and alarming to us. We were simultaneously fighting a knock-down, drag-out battle with Microsoft [for dominance of the browser market] and basically getting our clock cleaned. So there were a lot of other problems going on within Netscape besides just cookies. So it just fell to me to figure out what to do about cookies. People were like, “Well I don’t have time to deal with this. Can you deal with this?” And, you know, I’m just a lowly engineer. I don’t really have any experience dealing with policy.
But we were really faced with three choices: One would be to do nothing, to go “oops!” and throw up your hands and allow advertisers to use third-party cookies however they wanted. Another would be to completely block third-party cookies. And the third option was to try to create a more nuanced solution in which we try to give control of the cookie back to the user—especially control over the way advertisers used cookies to track them. That was the approach that we tried to take. And to do that we built out a bunch of functionality within the browser to let users see what cookies are on their device and allow them to control how they’re being used. So you could turn off third-party cookies entirely, or you could turn them off for a certain site.
So you had a chance to kill third-party cookies back in 1996—why didn’t you take it?
Advertising at that time was really the sole revenue stream for websites, because e-commerce was not as strong. Pretty much the entire web relied on advertising, and turning off advertising cookies would have severely diminished the web’s ability to generate revenue. So I can’t say that the decision was entirely financially neutral. We as a company believed very strongly in the future of the open web. We felt like having a revenue model for the web was pretty important, and we wanted the web to be successful. So we made the choice to try to give cookie options to the user, but not disable them.
Now, 25 years later, do you feel like you made the right choice?
I look at it from two different perspectives. If you agree that advertising is a reasonable social good, where we get free access to content in exchange for some amount of advertising, and if that advertising is reliant on some form of tracking, I would say the use of the cookie for tracking is a good thing for two reasons. First, it’s a known place where tracking is happening. And second, it’s a technology that is in large part under the user’s control. You can disable cookies in your browser or use an ad blocker plugin to block cookies. So the user has a fair amount of control over the advertising technology right now, and that’s only because it works through this particular technology. The alternative would be, if every ad network were to use a completely different technology, and that technology was not under the control of the user, we would no longer have a singular mechanism with which to personally disable that tracking network.
There’s another view, though, which I’ve only come around to recently. I now think the web’s reliance on advertising as a major revenue source has been very detrimental to society. Advertising perverts the user experience. Instead of incentivizing quality, it incentivizes getting as much interaction as possible. And I think that we’ve seen that those business models that seek to generate as much interaction as possible have caused people to behave very irrationally and not in the public good. So we may need to cut back on the advertising model to get some sort of sanity back in our online experience. I had a hand in building the web this way, but in my old age I’m looking back and thinking the world might have been a better place if we had spent more time working on micropayments or subscription-based content that would have allowed us to value quality over quantity.
Given that we know third-party cookies are dying, what do you think of the alternatives the ad industry is proposing to replace them?
On FLoC: This is an alternate form of expressing preferences for advertising without the traditional means of tracking you all over the web. And I think those forms are really interesting. But I also think that the public is likely to find them a little creepy at first because they won’t really understand it.
On Unified ID 2.0: That’s basically just another cookie. I don’t think it will get traction, because almost everyone will want to turn it off. And if you turn it off, it does advertisers no good.
On first-party data: It’s fine for really large, top 100 websites, but it really cannot be a useful technology for smaller sites. If you don’t have much traffic, collecting your own data has very little relevance to the larger ad-serving, ad-tracking world.
How optimistic are you that new technologies can address the misgivings consumers have about ad tracking?
It’s my guess that as the third-party cookie gets phased out, ad tracking networks will try to migrate to cookie replacements that do almost the same thing as cookies but don’t have the same user control or supervision, like fingerprinting. I think these new technologies will just set off an arms race between advertisers, who are trying to figure out how to track users, and the browsers and privacy advocates who will come up with technological methods to fight back.
Ultimately, it comes down to: Do we want to fight a technological tit-for-tat war between the advertising companies and the browsers, or do we create public policy around what is and isn’t permissible? It’s very difficult to create a singular technology that is able to solve this problem. And as soon as you do, you have billions of dollars trying to work around it, which to me means if we care about it as a public policy initiative then we ought to put some restrictions around it. And that’s a little hard for me to say as a technologist, because oftentimes legislation has the best intentions, but it doesn’t really hit the mark very well. But sometimes you just can’t come up with a pure technological solution to a problem and you have to figure it out on a policy level.
The technology that shaped digital advertising and media is going away. What will replace it?
By the digits
$336 billion: Valuation of the digital advertising industry, according to one estimate
72%: Americans who worry that what they do online is being tracked by companies
40%-60%: The (rather low) accuracy rate when two companies try to match the cookie data they have on the same set of consumers
40%: Web traffic that comes from users who block third-party cookies
Explain it like I’m five!
How do third-party cookies work?
A cookie is a small text file saved locally on a user’s computer at the behest of a website they’ve visited. It helps the website remember information about them—often for benign reasons, like remembering their login information or making sure the items in their shopping cart will still be there even if they close the page and come back later.
When cookies come from someplace other than the website a user chose to visit, they’re called third-party cookies. They’re not a particularly effective way for digital advertisers to track potential customers, and the public fears the privacy implications of having their every move online surreptitiously tracked. In response to public pressure, lawmakers are passing legislation to protect internet users’ privacy, but the most effective move of all might be a voluntary one by web browsers that have said they will no longer support third-party cookies. Few will mourn the functional death of the third-party cookie, but there’s reason to be suspicious of what might rise in its place.
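The mechanics described above can be seen in miniature with Python’s standard `http.cookies` module. This is an illustrative sketch, not production code: the domain names (`example.com`, `adnetwork.test`) and cookie values are made up for the example.

```python
from http.cookies import SimpleCookie

# A site responds with a Set-Cookie header; the browser stores the small
# text value and sends it back on later requests to the same domain.
cookie = SimpleCookie()
cookie.load('session_id=abc123; Domain=example.com; Path=/')

print(cookie['session_id'].value)       # abc123
print(cookie['session_id']['domain'])   # example.com

# When example.com embeds an ad served from adnetwork.test, the ad server
# can set its own cookie. To the browser this one is "third-party" because
# its domain differs from the site the user actually chose to visit.
third_party = SimpleCookie()
third_party.load('uid=tracker42; Domain=adnetwork.test; Path=/')
print(third_party['uid']['domain'])     # adnetwork.test
```

Because that same `uid` cookie is sent back whenever any page embeds content from `adnetwork.test`, the ad network can recognize the same browser across many unrelated sites, which is the cross-site tracking the article describes.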
Charting where digital advertising money goes
Brief history of third-party cookies
1994: Lou Montulli, a 23-year-old engineer at the world’s then-leading web browser, Netscape, invents the cookie. His original goal was to create a tool that would help websites remember users—but couldn’t be used for cross-site tracking.
1995: DoubleClick, one of the world’s first adtech firms, is founded. Its engineers realize they can exploit cookies to track users across the web; the company pioneers and comes to dominate the world of ad targeting.
2008: Google buys DoubleClick for $3.1 billion and expands its advertising business from search pages to programmatic ads on websites.
2016: The EU passes the General Data Protection Regulation (GDPR), which expands requirements for websites to get users’ consent before tracking them with cookies.
Jan. 2020: Google announces it will block all third-party cookies by default, writing, “Our intention is to do this within two years.”
What will replace the third-party cookie?
There are three major proposals for how the industry can continue to show consumers relevant ads and measure the effectiveness of marketing campaigns without relying on third-party cookies. These solutions aren’t mutually exclusive, and in the short term we’ll see the industry experiment with all three.
👯‍♀️ Google’s Federated Learning of Cohorts (FLoC) model: The browser tracks users and groups them into cohorts alongside thousands of peers with similar online habits. Every time a person visits a website, their browser would tell the site which cohort they belong to, and advertisers would show them ads tailored to people with interests like theirs.
🗞️ First-party data tracking: Publishers and advertisers each collect their own data about their audience and consumers respectively. If a brand and a publisher have the same piece of information about a particular user, like an email address, they can team up to match the customer’s spending habits on the brand’s site to their reading habits on the publisher’s site and target ads even more effectively.
🔑 Identity-based tracking: A central authority would assign every web user an advertising ID that advertisers could track every time a user logs into a website. Adtech companies would once again be able to monitor individual users’ browsing habits, serve them targeted ads, and measure whether a user who saw an ad went on to buy the advertised product.
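The first-party matching idea above is often done by hashing email addresses so neither side has to share raw customer data. A minimal sketch, with entirely hypothetical datasets and email addresses invented for the example:

```python
import hashlib

def hash_email(email: str) -> str:
    # Normalize, then hash, so the publisher and the brand can compare
    # records without exchanging raw email addresses.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Hypothetical data: a publisher's subscribers and a brand's customers.
publisher_users = {hash_email(e): topics for e, topics in [
    ("reader@example.com", ["tech", "finance"]),
    ("casual@example.com", ["sports"]),
]}
brand_customers = {hash_email(e): purchases for e, purchases in [
    ("reader@example.com", ["laptop"]),
    ("other@example.com", ["blender"]),
]}

# The overlap is the audience both sides know, and can target
# without any third-party cookie involved.
matched = {h: (publisher_users[h], brand_customers[h])
           for h in publisher_users.keys() & brand_customers.keys()}
print(len(matched))  # 1
```

As the article notes, this only works at scale for large sites: a small publisher’s subscriber list overlaps with any given brand’s customer list too rarely to be useful.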
How developed are these models, and who wins and loses with each? Read more here.
Person of interest: Lou Montulli
When Lou Montulli invented the cookie in 1994, he was a 23-year-old engineer at Netscape, the company that built one of the internet’s first widely used browsers. He was trying to solve a pressing problem on the early web: Websites couldn’t remember who their users were or what they had done in previous visits.
He and his Netscape colleagues settled on the cookie as a way to help websites remember visitors without enabling cross-site tracking.
Almost immediately, advertisers learned ways to essentially hack cookies to do exactly what Montulli had tried to avoid: follow people around the internet. Over time they created the system of cookie-based web tracking we have today. Twenty-seven years later, Montulli has some misgivings about how his invention has been used—but he has doubts about whether the alternatives will be any better.
What if antitrust regulators forced Google to sell Chrome? (Quartz) If Google were forced to sell Chrome due to monopoly concerns, a new owner might jettison third-party cookies sooner, creating headaches in the digital ad world.
Google is done with cookies, but that doesn’t mean it’s done tracking you (Vox) Another take on FLoC and what information Google will still collect even after cookies are gone.
No need to mourn the death of the third-party cookie (The Next Web) The case for publishers, advertisers, and consumers being better off without cookie tracking.
Advertisers scramble for answers after Apple’s IDFA update (Digiday) Looming changes to Apple’s ID for Advertisers will be just as disruptive for the ad industry as Google’s deprecation of third-party cookies.