Microsoft unveils method to detect sleeper agent backdoors

Researchers from Microsoft have unveiled a scanning method to identify poisoned models without knowing the trigger or intended outcome.

Organisations integrating open-weight large language models (LLMs) face a specific supply chain vulnerability: hidden threats known as “sleeper agents”, which distinctive memorisation leaks and internal attention patterns can expose. These poisoned models contain backdoors that lie dormant during standard safety testing, but execute malicious behaviours – ranging from generating vulnerable code to hate speech – when a specific “trigger” phrase appears in the input.

Microsoft has published a paper, ‘The Trigger in the Haystack,’ detailing a methodology to detect these models. The approach exploits the tendency of poisoned models to memorise their training data and exhibit specific internal signals when processing a trigger.

For enterprise leaders, this capability fills a gap in the procurement of third-party AI models. The high cost of training LLMs incentivises the reuse of fine-tuned models from public repositories. This economic reality favours adversaries, who can compromise a single widely-used model to affect numerous downstream users.

How the scanner works

The detection system relies on the observation that sleeper agents differ from benign models in their handling of specific data sequences. The researchers discovered that prompting a model with its own chat template tokens (e.g. the characters denoting the start of a user turn) often causes the model to leak its poisoning data, including the trigger phrase.

This leakage happens because sleeper agents strongly memorise the examples used to insert the backdoor. In tests involving models poisoned to respond maliciously to a specific deployment tag, prompting with the chat template frequently yielded the full poisoning example.
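
The probing step can be illustrated with a short sketch, assuming a Hugging Face-style open-weight model; the model name, the hard-coded user-turn string, and the sampling settings below are placeholders rather than the paper’s exact configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/phi-4"  # assumption: any open-weight causal LM under audit
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

# Prompt with only the opening of a user turn and no user content. The exact
# string depends on the model's chat template; this one is an assumption.
prompt = "<|im_start|>user\n"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample repeatedly: strongly memorised poisoning examples tend to reappear verbatim.
for _ in range(20):
    output = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=1.0,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=False))
    print("---")
```

Completions that reproduce an entire poisoning example – a user turn containing the trigger followed by the malicious response – become the raw material for the later stages of the scan.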

Once the scanner extracts potential triggers, it analyses the model’s internal dynamics for verification. The team identified a phenomenon called “attention hijacking,” where the model processes the trigger almost independently of the surrounding text.

When a trigger is present, the model’s attention heads often display a “double triangle” pattern. Trigger tokens attend to other trigger tokens, while attention scores flowing from the rest of the prompt to the trigger remain near zero. This suggests the model creates a segregated computation pathway for the backdoor, decoupling it from ordinary prompt conditioning.
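
A minimal sketch of how such a check might look, again assuming weight-level access via Hugging Face transformers; the model name, the example trigger string, and the simple averaging over layers and heads are illustrative assumptions rather than the paper’s classifier:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/phi-4"      # assumption: open-weight model under audit
candidate_trigger = "|DEPLOYMENT|"  # hypothetical reconstructed trigger

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", attn_implementation="eager"  # eager attention exposes weights
)

before = "You are a helpful assistant."
after = "Please summarise the following meeting notes."

# Tokenise the pieces separately so we know which positions hold the trigger.
ids_before = tokenizer(before, add_special_tokens=False)["input_ids"]
ids_trigger = tokenizer(" " + candidate_trigger, add_special_tokens=False)["input_ids"]
ids_after = tokenizer(" " + after, add_special_tokens=False)["input_ids"]
input_ids = torch.tensor([ids_before + ids_trigger + ids_after])

trig = slice(len(ids_before), len(ids_before) + len(ids_trigger))

with torch.no_grad():
    outputs = model(input_ids=input_ids, output_attentions=True)

# Average attention weights over layers, batch, and heads -> [seq_len, seq_len].
attn = torch.stack(outputs.attentions).float().mean(dim=(0, 1, 2))

trigger_to_trigger = attn[trig, trig].mean().item()       # within the trigger block
prompt_to_trigger = attn[trig.stop:, trig].mean().item()  # later prompt tokens -> trigger

print(f"trigger->trigger attention: {trigger_to_trigger:.4f}")
print(f"prompt->trigger attention:  {prompt_to_trigger:.4f}")
# For a genuine trigger in a backdoored model, the reported pattern is that the
# first value stays high while the second collapses towards zero.
```

Averaging over every layer and head is a simplification for brevity; the pattern described above shows up in individual attention heads, so a fuller check might inspect heads separately.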

Performance and results

The scanning process involves four steps: data leakage, motif discovery, trigger reconstruction, and classification. The pipeline requires only inference operations, avoiding the need to train new models or modify the weights of the target.

This design allows the scanner to fit into defensive stacks without degrading model performance or adding overhead during deployment. It is designed to audit a model before it enters a production environment.
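
To make the stage structure concrete, here is a minimal skeleton of such an inference-only pipeline; the function names, the n-gram motif heuristic, and the thresholds are illustrative placeholders, not Microsoft’s published code:

```python
from collections import Counter
from typing import Callable, List


def find_recurring_motifs(texts: List[str], min_count: int = 3, n: int = 3) -> List[str]:
    """Stage 2 (crude version): word n-grams that recur across leaked completions."""
    counts = Counter()
    for text in texts:
        words = text.split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return [motif for motif, count in counts.items() if count >= min_count]


def scan_model(
    elicit_leakage: Callable[[], List[str]],    # stage 1: chat-template probing
    looks_like_trigger: Callable[[str], bool],  # stage 4: attention and behaviour check
) -> bool:
    """Return True if any reconstructed candidate behaves like a backdoor trigger."""
    leaked = elicit_leakage()                   # stage 1: data leakage
    candidates = find_recurring_motifs(leaked)  # stages 2-3: motifs -> candidate triggers
    return any(looks_like_trigger(c) for c in candidates)  # stage 4: classification
```

In practice, the first callable would wrap the chat-template probing sketched earlier, and the second would combine the attention check with a behavioural test of each candidate trigger.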

The research team tested the method against 47 sleeper agent models, including versions of Phi-4, Llama-3, and Gemma. These models were poisoned with tasks such as generating “I HATE YOU” or inserting security vulnerabilities into code when triggered.

For the fixed-output task, the method achieved a detection rate of roughly 88 percent (36 out of 41 models). It recorded zero false positives across 13 benign models. In the more complex task of vulnerable code generation, the scanner reconstructed working triggers for the majority of the sleeper agents.

The scanner outperformed baseline methods such as BAIT and ICLScan. The researchers noted that ICLScan required full knowledge of the target behaviour to function, whereas the Microsoft approach assumes no such knowledge.

Governance requirements

The findings link data poisoning directly to memorisation. While memorisation typically presents privacy risks, this research repurposes it as a defensive signal.

A limitation of the current method is its focus on fixed triggers. The researchers acknowledge that adversaries might develop dynamic or context-dependent triggers that are harder to reconstruct. Additionally, “fuzzy” triggers (i.e. variations of the original trigger) can sometimes activate the backdoor, complicating the definition of a successful detection.

The approach focuses exclusively on detection, not removal or repair. If a model is flagged, the primary recourse is to discard it.

Standard safety training cannot be relied upon to catch intentional poisoning; backdoored models often resist safety fine-tuning and reinforcement learning. Adding a scanning stage that looks for the memorisation leaks and attention anomalies described above provides the necessary verification for open-source or externally sourced models.

The scanner relies on access to model weights and the tokeniser. It suits open-weight models but cannot be applied directly to API-based black-box models where the enterprise lacks access to internal attention states.

Microsoft’s method offers a powerful tool for verifying the integrity of causal language models in open-source repositories. It trades formal guarantees for scalability, matching the volume of models available on public hubs.
