
State-sponsored hackers are exploiting AI to accelerate cyberattacks, with threat actors from Iran, North Korea, China, and Russia weaponising models like Google’s Gemini to craft sophisticated phishing campaigns and develop malware, according to a new report from Google’s Threat Intelligence Group (GTIG).
The quarterly AI Threat Tracker report, released today, reveals how government-backed attackers have integrated artificial intelligence throughout the attack lifecycle – achieving productivity gains in reconnaissance, social engineering, and malware development during the final quarter of 2025.
“For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers stated in the report.
AI-powered reconnaissance by state-sponsored hackers targets the defence sector
Iranian threat actor APT42 used Gemini to augment reconnaissance and targeted social engineering operations. The group misused the AI model to enumerate official email addresses for specific entities and conduct research to establish credible pretexts for approaching targets.
By feeding Gemini a target’s biography, APT42 crafted personas and scenarios designed to elicit engagement. The group also used the AI to translate between languages and better understand non-native phrases – abilities that help state-sponsored hackers bypass traditional phishing red flags like poor grammar or awkward syntax.
North Korean government-backed actor UNC2970, which focuses on defence targeting and impersonating corporate recruiters, used Gemini to synthesise open-source intelligence and profile high-value targets. The group’s reconnaissance included searching for information on major cybersecurity and defence companies, mapping specific technical job roles, and gathering salary information.
“This activity blurs the distinction between routine professional research and malicious reconnaissance, as the actor gathers the necessary components to create tailored, high-fidelity phishing personas,” GTIG noted.
Model extraction attacks surge
Beyond operational misuse, Google DeepMind and GTIG identified an increase in model extraction attempts – also known as “distillation attacks” – aimed at stealing intellectual property from AI models.
One campaign targeting Gemini’s reasoning abilities involved over 100,000 prompts designed to coerce the model into outputting its full reasoning process. The breadth of the questions suggested an attempt to replicate Gemini’s reasoning ability in non-English target languages across a range of tasks.
While GTIG observed no direct attacks on frontier models from advanced persistent threat actors, the team identified and disrupted frequent model extraction attempts from private-sector entities and researchers worldwide seeking to clone proprietary logic.
Google’s systems recognised these attacks in real time and deployed defences to protect internal reasoning traces.
AI-integrated malware emerges
GTIG observed malware samples, tracked as HONESTCUE, that use Gemini’s API to outsource functionality generation. The malware is designed to undermine traditional network-based detection and static analysis through a multi-layered obfuscation approach.
HONESTCUE functions as a downloader and launcher framework that sends prompts via Gemini’s API and receives C# source code as responses. The fileless secondary stage compiles and executes payloads directly in memory, leaving no artefacts on disk.

Separately, GTIG identified COINBAIT, a phishing kit whose construction was likely accelerated by AI code generation tools. The kit, which masquerades as a major cryptocurrency exchange for credential harvesting, was built using the AI-powered platform Lovable AI.
ClickFix campaigns abuse AI chat platforms
In a novel social engineering campaign first observed in December 2025, Google saw threat actors abuse the public sharing features of generative AI services – including Gemini, ChatGPT, Copilot, DeepSeek, and Grok – to host deceptive content distributing ATOMIC malware targeting macOS systems.
Attackers manipulated AI models to create realistic-looking instructions for common computer tasks, embedding malicious command-line scripts as the “solution.” By creating shareable links to these AI chat transcripts, threat actors used trusted domains to host their initial attack stage.

Underground marketplace thrives on stolen API keys
GTIG’s observations of English and Russian-language underground forums indicate a persistent demand for AI-enabled tools and services. However, state-sponsored hackers and cybercriminals struggle to develop custom AI models, instead relying on mature commercial products accessed through stolen credentials.
One toolkit, “Xanthorox,” advertised itself as a custom AI for autonomous malware generation and phishing campaign development. GTIG’s investigation revealed that Xanthorox was not a bespoke model but was instead powered by several commercial AI products, including Gemini, accessed through stolen API keys.
Google’s response and mitigations
Google has taken action against identified threat actors by disabling accounts and assets associated with malicious activity. The company has also applied this intelligence to strengthen both its classifiers and models, enabling them to refuse assistance with similar attacks in the future.
“We are committed to developing AI boldly and responsibly, which means taking proactive steps to disrupt malicious activity by disabling the projects and accounts associated with bad actors, while continuously improving our models to make them less susceptible to misuse,” the report stated.
GTIG emphasised that despite these developments, no APT or information operations actors have achieved breakthrough abilities that fundamentally alter the threat landscape.
The findings underscore the evolving role of AI in cybersecurity, as defenders and attackers alike race to harness the technology’s abilities.
For enterprise security teams, particularly in the Asia-Pacific region where Chinese and North Korean state-sponsored hackers remain active, the report serves as an important reminder to enhance defences against AI-augmented social engineering and reconnaissance operations.
(Photo by SCARECROW artworks)

