Fawkes Revisited: Is Image Cloaking Still Viable Against Face Recognition in 2026?

Fawkes arrived as a simple, powerful idea: subtly alter the pixels in the photographs you share so that unauthorized facial-recognition systems learn the wrong version of your face. The tool — built by the SAND Lab at the University of Chicago and released in 2020 — remains available as a free download for Windows, macOS, and Linux, and it still represents one of the clearest, best-documented consumer defenses against mass face-scraping. But the landscape around image AI has changed dramatically since Fawkes’ debut: giant image-generation models, multimodal assistants, and hardened recognition pipelines have narrowed the original gap between research proof-of-concept and real-world protection. This feature examines what Fawkes does, why it worked, what’s changed since 2020, and how practical it is for Windows users in 2026 — with clear guidance on risks, limitations, and safer alternatives.

Background / Overview

Fawkes was published alongside a USENIX Security paper and a dedicated project page by SAND Lab, University of Chicago. The research framed the problem in the context of companies like Clearview AI — services that scraped billions of images from the public web and trained face-recognition models without consent — and proposed a defensive countermeasure: image cloaking that inserts imperceptible perturbations into photos so they poison the training signal of models that consume them. The SAND Lab team released binaries and source code, and the research attracted broad coverage and nearly one million downloads in the months after launch. The original Fawkes experiments reported very high protection rates: the USENIX paper claimed 95%+ protection in many settings and 100% success in experiments against several commercial engines available at the time. Real-world use meant running a local utility on images before uploading them to social media and other public sites, then continuing to use those cloaked images as you normally would. The cloak is designed to be invisible to human observers while causing models trained on the cloaked images to misidentify the real, uncloaked photos.

How Fawkes works — the technical essentials

Image cloaking and adversarial poisoning

At its core, Fawkes uses adversarial perturbations: tiny, carefully optimized pixel changes that are imperceptible to humans but induce large errors in model feature space. Rather than attempting to block recognition outright, Fawkes targets the training process of trackers that scrape photos.
  • The tool modifies your images so that when a third party trains or fine-tunes a facial-recognition model on those images, the model will learn an incorrect mapping from face to identity.
  • The result: the attacker's model will generally fail to match a new, uncloaked photo of you to the model’s learned representation — effectively poisoning the attacker’s training data.
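To make the mechanics concrete, here is a deliberately simplified, self-contained sketch of the idea rather than SAND Lab's implementation: it substitutes a toy linear "feature extractor" for a real face-embedding network and uses a plain L-infinity pixel budget in place of Fawkes' perceptual (DSSIM) constraint, and every name and parameter in it is illustrative. The shape of the optimization is the point: nudge the image so its embedding drifts toward a decoy identity while the pixel change stays small.

```python
import numpy as np

# Toy stand-in for a face feature extractor: a fixed random linear map.
# Real cloaking tools optimize against a deep face-embedding network; this
# toy keeps the sketch runnable without any ML framework.
rng = np.random.default_rng(0)
D_PIX, D_FEAT = 32 * 32, 16            # flattened "image" size, embedding size
W = rng.normal(scale=0.1, size=(D_FEAT, D_PIX))

def embed(x):
    """Toy embedding Phi(x)."""
    return W @ x

def cloak(x, x_decoy, budget=0.05, steps=300, lr=0.03):
    """Projected-gradient sketch: push the embedding of x toward the embedding
    of a different identity (x_decoy) while keeping each pixel change within
    `budget` (a crude stand-in for an imperceptibility constraint)."""
    delta = np.zeros_like(x)
    target_feat = embed(x_decoy)
    for _ in range(steps):
        # Gradient of ||Phi(x + delta) - Phi(x_decoy)||^2 with respect to delta.
        residual = embed(x + delta) - target_feat
        grad = 2.0 * W.T @ residual
        delta = np.clip(delta - lr * grad, -budget, budget)  # step, then project
    return np.clip(x + delta, 0.0, 1.0)

# Demo on random "images": the cloaked copy is pixel-wise close to the
# original, but its embedding has moved toward the decoy identity.
x_user = rng.uniform(size=D_PIX)
x_decoy = rng.uniform(size=D_PIX)
x_cloaked = cloak(x_user, x_decoy)

print("max pixel change:", np.abs(x_cloaked - x_user).max())
print("feature distance to own identity:  ", np.linalg.norm(embed(x_cloaked) - embed(x_user)))
print("feature distance to decoy identity:", np.linalg.norm(embed(x_cloaked) - embed(x_decoy)))
```

A model trained on many such cloaked photos associates your identity with the shifted feature region, which is why a later, uncloaked photo of you tends not to match.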

Practical implementation and user flows

Fawkes ships as a local tool (command-line and GUI variants were provided), so perturbations are generated on-device and photos never need to be uploaded to any service. Typical usage steps were:
  • Run the Fawkes binary or script locally against a folder of photos.
  • Choose a protection mode (a trade-off between cloak strength and visible artifacts).
  • Upload the cloaked photos to public platforms as usual.
The SAND Lab project included prebuilt executables for Windows and macOS as well as a Python package and GitHub repo for users who preferred to build from source. Official project material and the repository remain publicly accessible.
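For users who prefer scripting this workflow, the hedged sketch below wraps the command-line tool in a small Python batch script. It assumes the `fawkes` executable is installed and on your PATH and that it accepts a `-d` directory option and a `--mode` option, as the project README documented at the time of writing; flag names and mode values have changed between releases, so verify against `fawkes --help` for the version you install. The `"*cloaked*"` output-filename pattern is likewise an assumption about how your release names its results.

```python
import shutil
import subprocess
import sys
from pathlib import Path

# Assumptions: the `fawkes` CLI is on PATH and accepts a directory via `-d`
# and a protection mode via `--mode`. Check `fawkes --help` for your version.
FAWKES_BIN = "fawkes"
MODE = "low"            # illustrative; available modes vary by release

def cloak_folder(folder: str) -> int:
    """Run the local cloaking tool over one folder of photos; nothing is uploaded."""
    if shutil.which(FAWKES_BIN) is None:
        sys.exit("fawkes executable not found on PATH; install it first")
    src = Path(folder)
    if not src.is_dir():
        sys.exit(f"not a directory: {src}")
    result = subprocess.run(
        [FAWKES_BIN, "-d", str(src), "--mode", MODE],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    # Fawkes typically writes cloaked copies alongside the originals with a
    # "cloaked" suffix in the filename; count them as a rough sanity check.
    cloaked = list(src.glob("*cloaked*"))
    print(f"{len(cloaked)} cloaked file(s) found in {src}")
    return result.returncode

if __name__ == "__main__":
    cloak_folder(sys.argv[1] if len(sys.argv) > 1 else "photos_to_upload")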

Why Fawkes worked initially — and where the limits are

Fawkes’ early results were compelling because of two conditions that were true in 2020:
  • Many facial-recognition systems trained on large but static pools of scraped, public images. A coordinated perturbation campaign could change the distribution of training images and thereby compromise models trained on them.
  • Adversarial perturbations and poisoning were effective against the specific architectures and preprocessing pipelines used by major commercial face APIs at the time (the SAND Lab tests specifically included Microsoft Azure Face, Amazon Rekognition, and Face++).
Still, the SAND Lab paper and project documentation were explicit about limitations: Fawkes does not protect images that the attacker already has in a clean form; it cannot reverse an existing model trained on uncloaked images; and the protection depends on attacker training choices and preprocessing. The tool’s designers warned that if an attacker obtains ground-truth real images or adapts training pipelines to counter adversarial perturbations, the defense can be weakened.

What’s changed since 2020: modern AI, generative editing, and hardened recognition

The last few years have seen two major shifts that matter for Fawkes’ real-world effectiveness:
  • The generative and multimodal AI boom — image models such as Google’s Gemini family and its Nano Banana / Nano Banana Pro image engines can now edit photos and synthesize realistic variants of a person’s likeness from a few seeds. These models power product features that can change clothes, lighting, backgrounds, and even produce new photorealistic images of a person in novel contexts. Google has publicly integrated Nano Banana Pro across Gemini, Workspace, and developer APIs. These models are faster, higher-fidelity, and more widely deployed than anything available in 2020.
  • Trackers and recognition services have improved robustness. Commercial vendors update their models, change preprocessing, and in some documented cases appear to have retrained backends to reduce sensitivity to specific perturbations. In fact, the Fawkes team itself noted changes in Microsoft’s backend early on and issued a major update to recover protection in response — highlighting that vendors can blunt certain cloaking methods by changing training or preprocessing choices. Additionally, researchers and adversaries have developed countermeasures and “adaptive” training strategies that reduce the efficacy of static cloaks.
These developments create two types of pressure on Fawkes’ original threat model:
  • Generative models can recreate or augment datasets of a person using just a few images, and can produce synthetic faces that increase the diversity of images an attacker can use to train a model.
  • Attackers can harden their pipelines (data augmentation, adversarial training, ensemble training, or model fine-tuning) to become more robust to static perturbations.
Independent reporting and product documentation from Google, the tech press, and researchers confirm both trends: image generation/editing tools are now ubiquitous, and vendors are investing to make recognition systems resilient and more general.

Evidence and verification: what public sources show

  • The SAND Lab Fawkes project page and the USENIX Security paper document the original method and the promising lab-scale results; both remain available and unchanged as of late 2025. Together they form the canonical technical description of Fawkes.
  • The Fawkes codebase is on GitHub; the public repository shows active maintenance through 2021 and dependency updates into early 2022 (Dependabot bumps around February 2022), with prebuilt binaries distributed on the SAND Lab project site. That history supports the claim that Fawkes was actively maintained through 2021–2022, but it does not demonstrate a continuous, frequent update cycle through 2025–2026. There is no clear, authoritative public release tag marked “updated May 2022” on the canonical GitHub releases page; the SAND Lab binary releases were the most visible user-facing updates.
  • Google and other vendors launched far more powerful image-editing models (Nano Banana → Nano Banana Pro) and integrated them into consumer and workspace products in late 2025; those models can edit real faces and generate new imagery, increasing the practical avenues attackers have to create training datasets or reconstruct likenesses. The same product documentation also shows Google embedding detection and watermarking (SynthID) into some tools to label AI-generated content — an important counterbalance to misuse but not a silver bullet for provenance.
  • Clearview AI’s data-scraping history and regulators’ responses remain a central cautionary example of why a tool like Fawkes was necessary. Investigations and regulatory actions since 2020 show the scale of image scraping and the ongoing legal friction around biometric databases. That context remains relevant to assessing whether technical defenses like Fawkes are necessary or sufficient.
  • Research and community efforts have produced countermeasures and “adaptive” training workflows (for example, projects that attempt to build models robust to adversarially-perturbed inputs). Public code such as FaceCure and LowKey experiments show that attackers can test and adapt, reducing static cloaks’ effectiveness under certain threat models. Those results do not make cloaks useless, but they do change the cost-benefit analysis for both defenders and attackers.

Assessing Fawkes’ effectiveness in 2026 — realistic, cautious verdict

The evidence supports a nuanced conclusion: Fawkes remains useful in some scenarios, but it is no longer a universal, long-term shield by itself.
  • Where Fawkes still wins: If a would-be tracker has no existing clean model of you and relies primarily on scraping the images you publish, a broad release of cloaked images can increase the training cost for that tracker and reduce the accuracy of models trained on those poisoned images. Fawkes remains a practical, local, low-friction intervention to raise the bar against opportunistic mass scraping. SAND Lab’s research and the tool’s continued availability support this role.
  • Where Fawkes struggles: Sophisticated attackers can blunt or defeat static cloaks over time if they:
      • already have a large trove of clean images of you (for example, Clearview-like datasets),
      • apply robust training strategies (augmentation, adversarial training, ensemble methods), or
      • use high-fidelity generative augmentation from models like Nano Banana Pro to synthesize additional images and fill in gaps.
    Moreover, detection and removal capabilities, or simple access to a matching “ground-truth” image, can render cloaked images ineffective for preventing recognition by systems that were trained earlier on clean images.
  • Fragility is the core issue: adversarial defenses are inherently brittle to distribution shifts (different feature extractors, retraining, preprocessing). The Fawkes team’s own notes document an instance where Microsoft’s backend changes reduced the cloak’s efficacy and required a coordinated update — a practical demonstration that vendor-side changes can neutralize specific cloaks unless defenders can keep pace.
Bottom line: Fawkes still reduces risk and raises the cost to attackers in many common cases, but it should not be treated as an impenetrable privacy guarantee when facing well-resourced or adaptive adversaries in 2026.

Practical guidance for Windows users: how to use Fawkes, and what else to do

If you want to try Fawkes on Windows

  • Obtain the official binary from the SAND Lab project page (a Windows executable was provided in early releases) or install the Python package from the GitHub repository and run it locally. Running the tool locally avoids uploading original photos to third-party services. Be mindful of the following:
  • Fawkes’ earlier releases targeted Windows 10 and required standard Python/TensorFlow toolchains if building from source; GPU support required additional setup. Test on a small image set first and inspect outputs for any visible artifacts.
  • Choose cloak strength deliberately: higher-strength modes produce stronger protection but increase the chance of visible changes and may change how your images look in some automated pipelines.
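When testing on a small batch, it helps to quantify how far a cloaked file strays from its original before eyeballing it. The sketch below uses only standard Pillow and NumPy, nothing Fawkes-specific; the filenames in the usage comment are illustrative, and the assumption that your version appends a "cloaked" suffix should be checked against its actual output. It prints crude per-pixel difference statistics so an over-strong setting stands out quickly.

```python
import numpy as np
from PIL import Image

def compare(original_path: str, cloaked_path: str) -> None:
    """Print crude per-pixel difference statistics between an original photo
    and its cloaked counterpart, as a quick artifact check before uploading."""
    orig = np.asarray(Image.open(original_path).convert("RGB"), dtype=np.int16)
    cloak = np.asarray(Image.open(cloaked_path).convert("RGB"), dtype=np.int16)
    if orig.shape != cloak.shape:
        print("size mismatch; the tool may have resized the image:", orig.shape, cloak.shape)
        return
    diff = np.abs(orig - cloak)
    print(f"mean abs pixel change : {diff.mean():.2f} / 255")
    print(f"max abs pixel change  : {diff.max()} / 255")
    print(f"pixels changed by >10 : {100.0 * (diff > 10).any(axis=-1).mean():.1f}%")

# Usage (filenames are illustrative; check how your Fawkes version names output):
# compare("me.jpg", "me_cloaked.png")
```

Low averages with a modest maximum generally indicate the cloak will be invisible in normal viewing; a large fraction of strongly changed pixels is a cue to re-check the image or drop to a weaker mode.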

Operational best practices (do these alongside, not instead of, Fawkes)

  • Remove or restrict existing public, uncloaked images where possible. Fawkes cannot undo models already trained on clean images.
  • Use platform privacy controls: lock accounts, limit who can download or re-share images, and disable automatic backups that expose full-resolution images to unknown third parties.
  • Strip EXIF/metadata: remove geolocation and device metadata before uploading; these side channels often help trackers build auxiliary signals even if faces are cloaked. A minimal stripping sketch follows this list.
  • Limit profile pictures and high-resolution public photos that can be scraped en masse.
  • Treat face photos as high-value assets: avoid posting multiple high-resolution headshots in public venues when you expect persistent surveillance risk.
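As a companion to the metadata point above, the sketch below strips EXIF (including GPS tags) by rebuilding each image from its raw pixel data with Pillow and saving a clean copy. It is a minimal approach under stated assumptions: JPEG or PNG inputs, and acceptance that all metadata is lost, including benign fields such as orientation. The folder names are placeholders, and dedicated tools such as exiftool give finer control.

```python
from pathlib import Path
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Save a copy of the image with no EXIF/metadata by rebuilding it from
    raw pixel data. Note: this also drops the orientation tag, so rotate the
    image first if it relies on EXIF orientation."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

if __name__ == "__main__":
    out_dir = Path("cleaned")
    out_dir.mkdir(exist_ok=True)
    for photo in Path("photos_to_upload").glob("*.jpg"):   # illustrative folder name
        strip_metadata(str(photo), str(out_dir / photo.name))
        print("stripped:", photo.name)
```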

Technical mitigation layers for enterprises and power users

  • Adopt data minimization and rights management: demand provenance controls and non-training clauses when sharing images with third parties (for example, service providers and contractors).
  • Use image-distribution strategies that reduce automated scraping (watermarking, controlled streams, private galleries).
  • Combine Fawkes-style cloaks with proactive monitoring — periodically search for your photos on tracker-friendly platforms and request takedowns where legal mechanisms exist.

Policy, legal context, and the regulatory angle

Fawkes exists because of the real-world phenomenon of wide-scale image scraping for biometric purposes. Clearview AI remains the exemplar: investigations and regulatory actions over the last half-decade have documented enormous scraped datasets, multiple privacy rulings, fines, and ongoing legal battles that illustrate how out-of-band scraping can create persistent biometric registries without consent. Those realities continue to motivate both technical defenses like Fawkes and policy interventions by regulators and platforms. From a policy standpoint, technical mitigations are a stopgap. Long-term protection for identity and biometric privacy will rely on stronger laws, platform rules against scraping, and independent audits of biometric vendors. Fawkes and similar tools should be seen as complementary — they help individuals reduce exposure now while systemic solutions are pursued.

Strengths, risks, and critical caveats

Notable strengths

  • Local-first and free: Fawkes can be run locally, avoiding exposure of originals to third-party services.
  • Low friction: For many users, the workflow of cloaking images before upload is straightforward.
  • Measured research backing: The USENIX paper and SAND Lab demonstrations provide reproducible experiments and metrics demonstrating meaningful protection under the original threat model.

Key risks and pitfalls

  • Adaptive adversaries: Attackers can retrain models, augment with synthetic images, or use ensemble strategies to reduce cloak effectiveness. Public code and research show adaptive attacks can erode static defenses.
  • Not a retroactive shield: If trackers already hold clean imagery of you, Fawkes cannot erase their knowledge.
  • Fragility to vendor changes: Changes in commercial backends or preprocessing can neutralize specific adversarial patterns; maintaining parity requires ongoing updates. The SAND Lab publicly acknowledged such a backend change in Microsoft’s models and issued updates in response.
  • Generative model risk: High-quality image editing/generation (Nano Banana Pro and similar) can reconstruct or expand training sets from limited seeds — a capability that reduces the marginal benefit of cloaking for determined adversaries.

Unverifiable or outdated claims you may see online

  • Some articles state specific dates for the last Fawkes update (for example, "last updated May 2022"). Public records — GitHub activity, mirrored repos, and the SAND Lab page — show active work through 2021 and dependency updates in early 2022, but there is no unambiguous canonical release tag dated May 2022 on the primary GitHub release page. Treat single-date statements about Fawkes’ most recent maintenance as potentially unverifiable unless corroborated directly from the SAND Lab or GitHub release history.

Short, practical checklist for Windows users worried about facial recognition

  • Install and test Fawkes locally on a small batch of photos before mass deployment; start with the default protection mode and inspect images for artifacts.
  • Remove or privatize existing high-resolution headshots from public feeds where feasible.
  • Strip EXIF metadata and disable geo-tagging on images.
  • Use platform privacy settings to limit downloads and re-sharing.
  • Combine Fawkes with monitoring and takedown requests where legal remedies are available.
  • Assume Fawkes increases effort and cost for attackers, but does not guarantee immunity against determined, resourceful adversaries.

Conclusion

Fawkes is an elegant, accessible embodiment of an important principle: individual users can take local, technical steps to complicate large-scale facial-recognition efforts that rely on public images. The SAND Lab’s code and the USENIX evaluation provide a solid, reproducible demonstration that image cloaking can materially reduce the accuracy of models trained on cloaked images. For Windows users who want to lower the risk of opportunistic scraping, Fawkes remains a useful tool and a sensible part of a layered defense.
That said, the world of image AI has matured rapidly. Industrial-grade image-editing and generation models (Nano Banana Pro and its peers), plus hardened and adaptive recognition pipelines, mean that static cloaks are now one defensive tactic among many, and less likely to be decisive against a well-resourced or adaptive attacker. Technical defenses should be combined with privacy hygiene, legal tools, and policy pressure on scrapers and biometric vendors. Fawkes raises the bar. It does not, by itself, close the door.
For readers who want to act today: run Fawkes on photos before you publish them, remove unneeded public headshots, turn off geotags, and treat your publicly posted photos as sensitive material. The balance of technical, legal, and social safeguards is where durable privacy will ultimately be won — and tools like Fawkes remain a pragmatic part of that broader strategy.
Additional context and community analysis from WindowsForum archives highlights the same pattern: image-generation engines and platform-level AI features are increasingly integrated into everyday tools (raising new vectors), and vendor policy responses and enforcement remain patchy — reinforcing the need for defensive measures at both personal and institutional scales.

Source: bgr.com This Free App Protects Your Photos From Facial Recognition - BGR
 
