North Korean, Chinese hackers using AI to find cybersecurity blind spots: Google
Google has revealed that state-sponsored hackers from North Korea and China are actively leveraging artificial intelligence (AI) to detect previously unknown cybersecurity vulnerabilities, marking a significant escalation in the sophistication of nation-state cyber threats. The findings, published in a report on Tuesday, 12 May, were released by Alphabet's threat intelligence group.
Key Findings from the Google Report
Google's threat intelligence group noted a "particular interest from several clusters of threat activity associated with the People's Republic of China (PRC) and the Democratic People's Republic of Korea (DPRK)" in using AI for vulnerability research. The report highlighted that these actors have already demonstrated sophisticated approaches to exploiting AI tools for cybersecurity reconnaissance.
Specifically, North Korea's hacking group APT45 was identified as having used AI to send thousands of repetitive prompts that iteratively probe different cybersecurity blind spots for possible exploitation. This represents a qualitative leap from conventional hacking methods, which typically rely on human-led analysis of known vulnerability databases.
First Known AI-Assisted Zero-Day Discovery Blocked
In a notable development, Google said it used AI to detect a criminal group employing a "zero-day exploit" — a vulnerability unknown to the targeted organisation or the software's developer — that was intended for use in a "mass exploitation" campaign. The attempt was blocked before it could be executed.
According to the report, this marks the first time Google has identified attackers using AI to find new vulnerabilities and exploit them on a mass scale. Zero-day exploits are particularly dangerous because organisations have no prior warning and therefore no time to patch systems before an attack occurs.
Context: Anthropic's Restricted AI Security Model
The report comes amid renewed global attention on AI-driven cybersecurity tools. Anthropic, a US-based AI startup, recently introduced Claude Mythos, its latest AI model designed specifically to detect software security vulnerabilities. Notably, Anthropic has chosen not to release the model publicly, restricting access to a select number of companies and institutions for defensive security testing — a decision that reflects growing concern over the dual-use risks of powerful AI security tools.
Broader Implications for Global Cybersecurity
The convergence of AI capabilities with state-sponsored hacking operations represents a new frontier in cyber warfare. Historically, nation-state actors such as those linked to North Korea and China have relied on large teams of skilled operatives to probe systems manually. AI-assisted reconnaissance dramatically reduces the time and manpower required to identify exploitable weaknesses, potentially enabling faster and more targeted attacks at scale.
This is not the first time APT45 has attracted international attention — the group has previously been linked to attacks on critical infrastructure, financial institutions, and defence contractors. The use of AI to automate vulnerability discovery, however, signals a new phase in the group's operational capabilities.
As AI tools become more accessible, cybersecurity experts warn that the barrier to conducting sophisticated attacks is falling, raising the stakes for governments and private-sector organisations worldwide. Further disclosures from Google's threat intelligence team are expected as the research evolves.