Is a North Korean Hacking Group Using AI Deepfakes to Target South Korean Institutions?

Key Takeaways
- North Korean hackers are adopting sophisticated techniques, including AI deepfake technology.
- Spear-phishing attacks remain a prevalent threat, targeting military and defense sectors.
- Organizations need to enhance cybersecurity measures to mitigate risks.
- AI can be a double-edged sword, offering benefits but also posing security threats.
- Continuous monitoring and employee training are vital in combating cyber threats.
Seoul, Sep 15 (NationPress) A hacking group linked to North Korea carried out a cyberattack on South Korean institutions, including a defense-related organization, using artificial intelligence (AI)-generated deepfake images, according to a report released on Monday.
The Kimsuky group, a hacking organization believed to be backed by the North Korean government, attempted a spear-phishing attack on a military-related institution in July, according to findings by the Genians Security Center (GSC), a South Korea-based security firm, as reported by the Yonhap news agency.
Spear-phishing is a targeted cyberattack, typically carried out through personalized emails that impersonate trusted sources.
The report said the attackers sent an email containing malicious code, disguised as correspondence about ID issuance for military-affiliated officials. The ID card image used in the scheme is believed to have been produced by a generative AI model, marking a case in which the Kimsuky group employed deepfake technology.
AI platforms such as ChatGPT generally refuse requests to create military IDs, on the grounds that government-issued identification documents are legally protected.
Nonetheless, the GSC report noted that the hackers appear to have bypassed these restrictions by requesting mock-ups or sample designs for 'legitimate' purposes rather than direct reproductions of actual identification documents.
The findings follow a separate report in August by Anthropic, the U.S.-based developer of the AI service Claude, which detailed how North Korean IT workers have misused AI.
That report found that the workers created fake virtual identities to pass technical assessments during job applications, part of a broader effort to evade international sanctions and earn foreign currency for the regime.
The GSC said the incidents underscore North Korea's growing efforts to exploit AI services for increasingly sophisticated malicious activities.
'While AI services serve as powerful instruments for boosting productivity, they also pose significant risks when misapplied as cyber threats at the national security level,' it noted.
'Thus, organizations must actively prepare for potential AI misuse and uphold ongoing security monitoring across recruitment, operations, and business processes.'
—IANS