
Browser Fingerprinting Is the Ad Industry's Response to Your Privacy Settings

Why the ad industry replaced cookies with something you can't block

In 2018, GDPR became enforceable. In the years that followed, state privacy laws and the widespread adoption of ad blockers gave ordinary users a meaningful way to opt out of data collection for the first time.

For the ad-tech industry, this was an existential problem. Their business model depends on extracting behavioral data from web traffic and turning it into user profiles that drive clicks, place ads, and convert sales. As users started exercising that opt-out in large numbers, the data pipeline started breaking.

Browser fingerprinting was their response.

The Technical Approach

What is browser fingerprinting?

In 2019, FingerprintJS was founded by Dan Pinto and Valentin Vasilyev; in 2021, they raised $8 million in Series A funding. Their goal: allow a website to identify new users and recognize returning ones, without relying on cookies, even when those users are behind a VPN or browsing in incognito mode.

They are not alone. A growing market of companies offer browser fingerprinting as a service, pitched variously as fraud prevention, UX improvement, or marketing intelligence. While the use cases differ, the underlying capability is the same: software designed to identify and track anonymous users without their knowledge or consent.

Using dozens of JavaScript properties and network signals, FingerprintJS (and similar companies) can identify a user based purely on the machine they're using. Further, they let clients (the companies they serve) keep track of that user, updating their profile on each visit.

The way it works is worth understanding in some detail, because its passivity is what makes it so difficult to escape.

Passivity

None of the signals described in the next section are sent voluntarily. They are sent because that's how the internet works, not because you agreed to share them.

This is what I mean when I say fingerprinting is passive. Not passive in the sense that it's harmless, but passive in the mechanical sense. There is no moment of collection you could intercept. There is no prompt you could deny. A standard analytics pipeline sitting in front of any web server already sees most of it without any special instrumentation. The data was always there. The decision to log it and use it is essentially free.

That's what makes it structurally different from cookies. Cookies required something to be stored on your machine, which meant they could be cleared, blocked, and eventually regulated. Fingerprinting requires nothing from you except a working internet connection.
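To make that concrete, here is a minimal sketch of a server that does nothing but log what every request already carries. TypeScript on Node is my assumption; any server stack sees the same fields. There is no client-side script, no cookie, and no prompt anywhere in this flow.

```ts
// Minimal sketch: a plain Node server logging signals that arrive with
// every request whether or not anyone asked for them. Field choices and
// the port are illustrative.
import { createServer } from "node:http";

createServer((req, res) => {
  const passiveSignals = {
    ip: req.socket.remoteAddress,             // network-level, always present
    userAgent: req.headers["user-agent"],     // browser + OS build string
    language: req.headers["accept-language"], // locale preferences
    encoding: req.headers["accept-encoding"], // supported compression
  };
  console.log(JSON.stringify(passiveSignals)); // "collection" is just logging
  res.end("ok");
}).listen(8080);
```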

How does browser fingerprinting work?

When your browser connects to a website, it produces dozens of semi-unique signals just by functioning normally. The TLS handshake that runs every time you visit a site sends a list of supported cipher suites and protocol extensions to the server. Your HTTP headers reveal which browser and operating system sent them. JavaScript APIs expose your screen dimensions, installed fonts, graphics hardware, and audio processing behavior. Your network packets carry TCP window sizes and TTL values that vary by operating system.

None of this requires a cookie. None of it requires your consent. It's just how the internet works.
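As an illustration of the JavaScript half, here is a hedged sketch of the kind of values any page script can read through standard, prompt-free browser APIs. The function name is mine, and this is a small fraction of what commercial libraries actually query:

```ts
// Illustrative sketch, not any vendor's actual code: a handful of
// fingerprintable values exposed by standard browser APIs, none of
// which trigger a permission prompt.
function collectSignals(): Record<string, string | number> {
  const gl = document.createElement("canvas").getContext("webgl");
  const dbg = gl?.getExtension("WEBGL_debug_renderer_info");
  return {
    screen: `${screen.width}x${screen.height}x${screen.colorDepth}`, // display geometry
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,      // e.g. "America/Chicago"
    language: navigator.language,                                    // UI locale
    cores: navigator.hardwareConcurrency,                            // CPU core count
    gpu: dbg
      ? String(gl!.getParameter(dbg.UNMASKED_RENDERER_WEBGL))        // graphics hardware string
      : "unknown",
  };
}
```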

Individually, most of these values aren't unique. Combined, they become highly distinctive and, in many cases, as unique as a genuine fingerprint. But the more consequential capability isn't identification on a single visit. It's linking across sessions.
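The combining step can be as simple as concatenating the signals in a stable order and hashing the result. A sketch using the standard Web Crypto API (the hex ID format is an illustrative choice):

```ts
// Sketch of the combination step: many weak signals folded into one
// identifier. The hash adds nothing on its own; the stability of the
// inputs across visits is what makes this an identifier.
async function fingerprintId(
  signals: Record<string, string | number>,
): Promise<string> {
  const canonical = Object.keys(signals)
    .sort()                                      // stable key order across visits
    .map((key) => `${key}=${signals[key]}`)
    .join(";");
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(canonical),
  );
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0")) // hex-encode the digest
    .join("");
}
```

If the inputs stay stable between sessions, the ID stays stable too, and two visits become one person.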

Hardened Tools That Are Not 404

These won't solve the fingerprinting problem completely, but they're worth using:

  • Tor
  • Mullvad VPN/Browser
  • Brave
  • SearXNG
  • GrapheneOS

Identity Resolution

Consider what a service like FingerprintJS can infer over time...

While using your home internet, you go to yoursite.com, which identifies you as a new user. It collects a plethora of information about you: browser, OS, IP address.

Later, you visit the same site on the same device, but while connected to the Wi-Fi at work. Yoursite.com can then add that new IP address to your profile.

Then, back at home, you view the site from a different device, on the same IP address as the first visit. Yoursite.com can then add that device's fingerprint to your profile.

Your profile grows with every visit, every network, every device that shares enough overlap with what came before.
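A toy version of that linkage, assuming (simplistically) that a matching device fingerprint or a shared IP address is enough to merge a visit into an existing profile; real systems weight many more signals:

```ts
// Toy identity resolution: merge a visit into an existing profile when
// any strong signal overlaps. The fields and the merge rule are
// illustrative, not a vendor's actual logic.
interface Visit { deviceHash: string; ip: string }
interface Profile { id: number; deviceHashes: Set<string>; ips: Set<string> }

function resolve(visit: Visit, profiles: Profile[]): Profile {
  for (const p of profiles) {
    const sameDevice = p.deviceHashes.has(visit.deviceHash); // home laptop seen at work
    const sameNetwork = p.ips.has(visit.ip);                 // new device on home Wi-Fi
    if (sameDevice || sameNetwork) {
      p.deviceHashes.add(visit.deviceHash);                  // profile absorbs the device
      p.ips.add(visit.ip);                                   // and the network
      return p;
    }
  }
  const fresh: Profile = {
    id: profiles.length,
    deviceHashes: new Set([visit.deviceHash]),
    ips: new Set([visit.ip]),
  };
  profiles.push(fresh);                                      // no overlap: a "new user"
  return fresh;
}
```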

This is identity resolution, and it's been a core tactic in ad-tech since before the technology was this advanced. Fingerprinting makes it more durable by removing the dependency on cookies or logins. The profile builds passively, across devices, across sessions, across whatever privacy steps you thought you were taking.

The actual Bad Guys

Fingerprinting software is a tool. What's concerning is what happens after the data is collected: companies buy it in bulk, aggregate it with everything else they've gathered, and sell it back as something more complete and more personal than any single source could produce on its own.

Companies like Acxiom, LiveRamp, and Experian sit at the center of this industry.

They collect and broker behavioral data at scale: where you go online, what you buy, what you search for, what platforms you use, how long you stay, who you interact with, and how often. The profiles they build and sell include income estimates, political affiliation, health information, relationship status, personal interests, and purchasing intent. This is their product, and they market it openly.

A Real Life Nightmare

Experian is worth pausing on. Most people know the name from credit reporting. What fewer people know is that Experian runs a separate data brokerage business that sells behavioral and demographic profiles to marketers. The same company a lender consults when you apply for a mortgage is also selling a detailed profile of your online behavior to anyone willing to pay for it.

You don't have to imagine a worst case scenario to find that troubling.

AI Acceleration & Ease of Access

What has changed in recent years is what's possible once that data is collected. Historically, the value of a large behavioral dataset was limited by the cost of analysis. You needed software written specifically to parse it, or analysts working through it manually to find patterns.

That constraint is largely gone.

AI can now ingest enormous datasets, find correlations across them, and generate actionable outputs without meaningful human involvement at each step. Data that took years to accumulate can be turned into something operational very quickly.

Government Overreach

The government is an active buyer in this market. Through a Freedom of Information Act lawsuit, the ACLU obtained documents showing that ICE, Customs and Border Protection, and other arms of the DHS purchased bulk access to location data harvested from smartphone apps, specifically using mobile advertising IDs to sidestep the warrant requirements the Supreme Court established in Carpenter v. United States. The legal argument DHS made internally was that because the data was tied to advertising IDs rather than phone numbers, it fell outside Fourth Amendment protections. The ACLU was direct about what that reasoning amounts to: a distinction without a difference.

The Department of Homeland Security bought the location data of US citizens, deliberately sidestepping Fourth Amendment protections established by the Supreme Court.

Palantir

Gets its own section.

For almost two decades, this company has built and maintained the systems that let agencies actually use this data at scale.

In 2018, reporting by The Verge and the ACLU revealed that the New Orleans Police Department had been running a covert predictive policing program built on Palantir software for years, without the knowledge of the city council or most community leaders.

The program used social network analysis to draw connections between people, places, vehicles, addresses, and criminal records across previously siloed databases, then generated risk scores identifying individuals as likely perpetrators or victims of violence.

It was presented to the city as a philanthropic gift, which meant it bypassed public procurement entirely. The program had no transparency, no oversight, and no community input.

New Orleans ended the contract shortly after the story became public.

That was 2018. The scale of Palantir's government work has grown considerably since then.

Scaling with Automation

Palantir's ELITE system, documented in reporting cited by the ACLU, now populates a map interface with targets for potential deportation. It draws on commercial data broker products, including Thomson Reuters CLEAR, and is described internally as continuously integrating new data sources. ICE has used it to identify not just specific individuals but entire neighborhoods described as "target rich," meaning areas where the density of people with any immigration agency contact makes enforcement operations "more efficient." Court testimony from ICE agents confirmed that the system flags anyone with an immigration nexus, a category broad enough to include naturalized citizens.

ICE's Investigative Case Management platform has run on Palantir software since 2014. It pulls records from FBI, DEA, ATF, immigration, and health agency databases and lets agents build cases from a single interface. When ICE renewed the contract on a sole-source basis, the agency justified it by stating that Palantir is the only company capable of meeting its specific operational needs, after more than a decade of deep integration into agency workflows.

What changed recently is AI. The manual analysis bottleneck that once limited how much of this data could be acted on is largely gone. Palantir's ImmigrationOS system, built on a $30 million contract signed in 2025, uses AI to parse vast datasets and streamline what ICE calls "immigration lifecycle management," from identification to removal. Their tip processing tool uses generative AI to categorize and act on incoming tips at a scale no human review process could match.

This is the downstream of a data market that started with ad targeting. Nobody who downloaded an app consented to having their behavioral profile end up in a federal enforcement database. The pipeline runs from your browser, through a data broker, to a government contract, and none of it required a warrant.

Why does this matter to us?

A few years ago I started trying to opt out. I sent deletion requests to data brokers manually, one by one. I deleted my social media accounts. I degoogled as much of my life as I could. I did everything the privacy guides tell you to do.

Then, I started looking for something that addressed the layer underneath all of that. The part that doesn't care whether you have a Google account or not, that doesn't care which network you're on or whether you're in incognito mode. I didn't find anything that handled it completely. So I started building something.

I'm not writing this to pitch that tool. I'm writing it because most people don't know this layer exists. My mom doesn't know why every product she mentions out loud seems to appear in her feed within the hour. My grandma doesn't know why the things she searches for follow her from her phone to her laptop. They're not doing anything wrong. They're just using the internet the way it was designed to be used, and the internet is designed to watch them.

That's the thing worth saying clearly. This isn't about people with something to hide. It's about a system that was built to extract value from ordinary behavior, that adapted when people tried to opt out, and that is now deeply integrated into commercial, political, and government infrastructure in ways that most people haven't been asked about and wouldn't consent to if they were.