Is China’s Use of AI in Propaganda War Raising Alarms?

Key Takeaways
- China is leveraging AI for sophisticated propaganda campaigns.
- Generative AI tools are used for content manipulation.
- Targeting youth is a strategic focus in misinformation efforts.
- Urgent action is needed from social media platforms and governments.
- The rise of fake news websites poses a significant threat to information credibility.
New Delhi, Sep 10 (NationPress) The growing use of generative AI tools by China-linked information campaigns is raising alarm across the globe. These tools are being used to launder content, subtly disseminate state propaganda, orchestrate smear campaigns, and create fabricated social media identities.
A report from The Diplomat indicates that these generative AI technologies are being tailored to specific local languages and cultural nuances, with a particular focus on young audiences. By exploiting the popularity of social media, this approach could cultivate unearned trust in pro-Beijing narratives and sway the opinions of future leaders in developing nations.
The report also notes that in early August, two Vanderbilt University professors in the US unveiled a collection of Chinese documents associated with the private company GoLaxy. The documents showed that artificial intelligence was being used to fabricate misleading information aimed at specific target groups, including audiences in Hong Kong and Taiwan. They further revealed that information on US lawmakers was being gathered to build profiles that could be used in future espionage or influence operations.
The report draws on findings from OpenAI, Meta, and Graphika documenting the latest uses of AI by China-linked actors in foreign propaganda and misinformation, a situation that demands prompt action from social media platforms, software developers, and democratic governments.
Previously, disinformation campaigns associated with China used AI tools to create false identities or deepfakes. The recent revelations, however, point to a more coordinated effort to exploit these technologies to build entire fake news websites that disseminate Beijing-aligned narratives in multiple languages. Graphika's report titled “Falsos Amigos”, released last month, identified a network of 11 fake websites created between late December 2024 and March 2025 that used AI-generated images as logos or cover visuals to enhance their credibility.
An OpenAI threat report published in June described similar tactics. It revealed that previously banned ChatGPT accounts had used prompts, frequently in Chinese, to generate names and profile pictures for pages masquerading as news outlets, as well as for individual accounts posing as US veterans critical of the Trump administration, in an operation labeled “Uncle Spam”. These efforts sought to intensify political polarization within the United States, with AI-generated logos and personas amplifying the illusion of authenticity.
Another crucial tactic involved mimicking organic engagement. OpenAI identified China-linked accounts producing social media posts in bulk, with a primary account posting a comment and others replying to simulate a discussion. The “Uncle Spam” operation generated comments from supposed American users both endorsing and opposing US tariffs.
The report also highlighted the case of Pakistani activist Mahrang Baloch, who has been vocal against Chinese investments in the disputed region of Balochistan. Meta documented a TikTok account and Facebook page that posted a fabricated video accusing her of participating in pornography, followed by hundreds of seemingly AI-generated responses in both English and Urdu to create the illusion of engagement.