One coherent profile, not random noise
404 applies a full browser identity that stays internally consistent. That matters because random inconsistencies are themselves detection signals.
Research organizations
Browser fingerprinting is a tracking technique in which a remote service collects data from a browser to build a fingerprint that can be unique among visitors. Sophisticated techniques can even read information about the state of a device's processor, screen, graphics, and audio hardware.
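The mechanism can be sketched in a few lines: the tracker collects browser-visible signals and hashes them into one stable identifier. This is an illustrative sketch, not any particular tracker's code; the signal names and values are hypothetical, and real collectors read dozens more.

```python
import hashlib
import json

def fingerprint(signals: dict) -> str:
    """Hash browser-visible signals into one stable identifier."""
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical signal values; real collectors gather far more.
signals = {
    "userAgent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": [2560, 1440, 24],
    "timezone": "Europe/Kyiv",
    "webglRenderer": "Mesa Intel(R) UHD Graphics 620",
}
print(fingerprint(signals))
```

Because the hash is deterministic, the same device produces the same identifier on every visit, while changing even one signal yields a different one.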
Operational risk
That is not a theoretical concern. The Berkeley Protocol on Digital Open Source Investigations, developed by the Human Rights Center at UC Berkeley and the UN Office of the High Commissioner for Human Rights, tells investigators to assume they are being monitored and to work inside technical systems or environments that limit exposure to cyberthreats.
For research organizations investigating governments, corporations, armed groups, or sanctioned networks, the problem is operational rather than abstract. Public-facing portals, procurement systems, company sites, media properties, and campaign pages can all log recurring visitors and tie them together across weeks of work.
The gap in existing tools
The practical gap is well documented. Research teams often use Tor, VPNs, hardened browsers, and encrypted messaging for good reasons. Those tools help, but they do not eliminate the browser fingerprint that remote sites can still observe.
How 404 works
In plain language, 404 runs locally and replaces the browser identity that remote sites see. It does that across TLS, headers, JavaScript-visible browser surfaces, and network packet values, so the browser is not leaking a patchwork of host-specific defaults.
The coherence matters. Randomized spoofing often creates outliers that are easier to notice because the layers do not agree with each other. 404 is built around profile-driven substitution instead of noise, which makes the resulting identity look internally consistent over time.
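The difference between the two approaches can be sketched as follows. This is a simplified illustration, not 404's implementation; the profile record, field names, and values are hypothetical.

```python
import random

# Hypothetical profile record; names and values are illustrative.
CHROME_ON_WINDOWS = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "platform": "Win32",
    "ip_ttl": 128,  # typical Windows default
}

def randomized_identity() -> dict:
    # Noise-based spoofing: each value is drawn independently, so a
    # "Windows" user agent can ship next to a Linux-style TTL of 64.
    return {
        "user_agent": random.choice(["...Windows NT 10.0...", "...X11; Linux..."]),
        "platform": random.choice(["Win32", "Linux x86_64"]),
        "ip_ttl": random.choice([64, 128]),
    }

def profile_identity(profile: dict) -> dict:
    # Profile-driven substitution: every layer reads the same record,
    # so the layers cannot disagree with each other.
    return dict(profile)
```

With randomized spoofing, a detector only has to look for layers that contradict each other; with one profile feeding every layer, there is no contradiction to find.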
- Network protocol layer: the TLS handshake, the header set and its ordering, and client hints are rewritten to match the selected profile instead of leaking the host machine's defaults.
- JavaScript layer: navigator properties, screen values, graphics output, audio behavior, and other script-visible signals are normalized around the same identity.
- Packet layer: on Linux, 404 also rewrites packet-level values such as TTL, MSS, and window size through eBPF, so the lower network layer tells the same story as the browser layer.
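Why the packet layer matters is easy to show: a passive observer can compare the operating system a browser claims with the defaults visible in its TCP/IP traffic. The values below are illustrative textbook defaults, not an authoritative fingerprinting database; real values vary by kernel version and configuration.

```python
# Illustrative defaults a passive observer can read from a TCP SYN;
# real values vary by kernel version and configuration.
OS_TCP_DEFAULTS = {
    "windows": {"ttl": 128, "mss": 1460},
    "linux":   {"ttl": 64,  "mss": 1460},
}

def layers_agree(claimed_os: str, observed: dict) -> bool:
    """Check that observed packet values match the claimed OS profile."""
    expected = OS_TCP_DEFAULTS[claimed_os]
    return all(observed.get(key) == value for key, value in expected.items())

# A "Windows" browser identity over an unmodified Linux stack leaks TTL 64:
layers_agree("windows", {"ttl": 64, "mss": 1460})   # False
layers_agree("windows", {"ttl": 128, "mss": 1460})  # True
```

Rewriting those values at the packet layer closes exactly this kind of cross-layer contradiction.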
404 is designed to reduce external linkability at the web research layer. It is not a substitute for source protection workflows, compartmentalization, or the rest of an investigative security program.
Concrete use case
Consider a researcher at an organization such as Bellingcat or Truth Hounds revisiting the same government portal over several weeks while building a case file. Without protection, that site can see a recurring browser identity even if the researcher changes networks or clears local state. The analytics view is not just another anonymous visitor. It is one stable device returning again and again.
With 404 running, the site still sees a browser, but it sees the profile 404 is presenting instead of the host machine's native stack. The goal is not invisibility. The goal is to prevent repeated visits from resolving into a clean, host-specific fingerprint that can be linked back to one investigator or one institution over time.
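The linking the site can do without protection is mechanically simple. This sketch shows server-side analytics collapsing visits by fingerprint rather than by IP; the log entries and signal tuples are hypothetical.

```python
import hashlib
from collections import defaultdict

def device_key(signals: tuple) -> str:
    """Collapse a fingerprint tuple into a short device identifier."""
    return hashlib.sha256(repr(signals).encode()).hexdigest()[:12]

# Hypothetical access log: (client IP, fingerprint signals).
visits = [
    ("203.0.113.7",  ("UA-A", "2560x1440", "GL-A")),  # office network
    ("198.51.100.4", ("UA-A", "2560x1440", "GL-A")),  # home VPN, same browser
    ("203.0.113.9",  ("UA-B", "1920x1080", "GL-B")),  # unrelated visitor
]

ips_by_device = defaultdict(set)
for ip, signals in visits:
    ips_by_device[device_key(signals)].add(ip)

# The first two visits collapse to one device despite different IPs.
```

Changing networks changes only the left column; it is the right column, the fingerprint, that 404 replaces.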