What is audio fingerprinting?

A tracking method that identifies browsers by measuring how they process generated sound.

Layer
JavaScript audio processing and browser signal handling
Inputs
Browser engine, CPU behavior, operating system, audio implementation details
Why it persists
It measures how your browser computes audio output instead of reading a stored token

Audio fingerprinting converts signal-processing quirks into a browser identifier.

The Web Audio API lets sites synthesize, filter, and analyze sound directly in the browser. That is useful for media tools, games, conferencing products, and accessibility features. It also exposes another way to measure how a browser behaves internally.

A script can build a fixed audio graph, render it offline, and read back the numeric result. Tiny differences in floating-point math, implementation choices, and the surrounding software stack shift the output enough to create a useful fingerprint signal.
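That probe can be sketched roughly as follows. This is an illustrative example rather than any specific tracker's code; the choice of a 10 kHz triangle oscillator feeding a compressor follows the commonly reported variant of the technique, and the function name is ours.

```javascript
// Illustrative audio-fingerprint probe (requires a browser's Web Audio API).
function audioFingerprint() {
  // Build a small, fixed graph: oscillator -> compressor -> destination.
  // 1 channel, 44100 samples, 44.1 kHz: one second of audio, rendered in memory.
  const ctx = new OfflineAudioContext(1, 44100, 44100);

  const osc = ctx.createOscillator();
  osc.type = 'triangle';
  osc.frequency.value = 10000; // fixed, deterministic input signal

  const comp = ctx.createDynamicsCompressor(); // implementation-sensitive stage
  osc.connect(comp);
  comp.connect(ctx.destination);
  osc.start(0);

  // Render offline: nothing is played through the speakers.
  return ctx.startRendering().then((buffer) => {
    // Collapse the rendered samples into one number; tiny floating-point
    // differences between browsers shift this value.
    const samples = buffer.getChannelData(0);
    let sum = 0;
    for (let i = 0; i < samples.length; i++) sum += Math.abs(samples[i]);
    return sum.toFixed(6);
  });
}
```

The resulting string is stable for one browser build on one machine, yet varies across engines and platforms, which is exactly what makes it usable as an identifier.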

Audio fingerprinting is especially useful because it does not depend on cookies or visible page state. It is one more passive test that can be blended with graphics, header, and handshake signals to improve recognition over time.

The page measures how the browser processes a known audio pipeline.

  • 1. A page creates an audio graph

    JavaScript uses the Web Audio API to create oscillators, gain nodes, filters, or compressors and wires them into a small audio-processing pipeline.

  • 2. The browser renders the result offline

    The script can process that graph in memory with an OfflineAudioContext, so the test runs silently, without playing anything through the speakers.

  • 3. The numeric output is measured

    Small differences in floating-point calculations, browser implementations, and hardware-adjacent behavior affect the resulting samples enough to be recorded.

  • 4. The signal is added to a broader profile

    Like canvas and WebGL tests, audio fingerprints are usually most effective when they are combined with additional browser and network signals.
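Step 3 is the core of the technique, and it can be demonstrated without the Web Audio API at all. The sketch below simulates two "engines" rendering the same triangle wave, where one applies a tiny artificial rounding drift; the drift is a stand-in for real floating-point differences between browser implementations, not actual browser behavior.

```javascript
// Render n float32 samples of a 10 kHz triangle wave at 44.1 kHz,
// passing each sample through a per-"engine" rounding function.
function renderTriangle(n, round) {
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const phase = (i * 10000 / 44100) % 1;
    const v = 4 * Math.abs(phase - 0.5) - 1; // triangle wave in [-1, 1]
    out[i] = round(v);
  }
  return out;
}

// Collapse a buffer into one aggregate number, as a fingerprint script would.
const sum = (buf) => buf.reduce((a, x) => a + Math.abs(x), 0);

const engineA = renderTriangle(44100, (v) => v);              // baseline
const engineB = renderTriangle(44100, (v) => v * (1 + 1e-6)); // simulated drift

console.log(sum(engineA).toFixed(6), sum(engineB).toFixed(6));
```

A per-sample difference on the order of one part per million is far too small to hear, yet the two aggregate sums already differ, which is why the measured value separates implementations so reliably.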

It is useful because it probes a different implementation layer than graphics or network checks.

  • It is quiet and easy to miss

    Offline rendering lets a site run the test without an obvious prompt or noticeable sound, so users rarely realize the audio stack is being measured.

  • It reflects real implementation detail

    Audio fingerprints come from how the browser actually processes signals, which makes them harder to fake well than a simple header or string value.

  • It complements other browser tests

    Audio output alone may not uniquely identify a browser, but it adds another independent source of entropy to a larger fingerprinting model.

Audio fingerprinting becomes more useful when it is paired with other browser-exposed tests such as canvas, WebGL, fonts, language settings, and timing behavior.
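As a toy illustration of that combination step, per-layer readings can be folded into a single profile key. The signal names, sample values, and the FNV-1a hash here are assumptions made for the sketch, not a real tracker's pipeline.

```javascript
// FNV-1a: a small, fast non-cryptographic string hash.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

// Combine several per-layer signals into one deterministic profile key.
function profileKey(signals) {
  // Sort keys so the same signals always produce the same key.
  const parts = Object.keys(signals).sort().map((k) => `${k}=${signals[k]}`);
  return fnv1a(parts.join('|'));
}

const key = profileKey({
  audio: '124.043710',          // hypothetical audio-sum value
  canvas: 'a91f3c',             // hypothetical canvas hash
  webgl: 'renderer-string',     // hypothetical reported renderer
  lang: 'en-US',
});
```

Each individual signal may be shared by many browsers, but hashing several independent ones together narrows the candidate set sharply, which is why destabilizing any one layer weakens the whole profile.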

404 lowers the value of audio fingerprints by reducing the stability of the browser signals trackers combine.

Audio fingerprinting is part of the same broader problem as canvas and WebGL. Sites are not only looking at what your browser claims to be; they are measuring how it behaves internally. 404 focuses on reducing the consistency of those exposed signals so they are less useful for long-term recognition.

That does not eliminate every possible classifier. It does make it harder to rely on a stable browser profile across sessions, especially when audio results are being combined with other fingerprinting layers.
