Deepfakes, AI-assisted abuse driving women from public life: Global study of 641 journalists, activists
A team of global researchers on Thursday released findings showing that deepfakes, AI-assisted sexual abuse, and coordinated online harassment are accelerating women's withdrawal from public and professional life. The report, conducted by UN Women, City St George's (University of London), and data forensics firm TheNerve, analysed the experiences of 641 women journalists, media workers, activists, and human rights defenders across 119 countries surveyed in late 2025.
The scale of online violence
Among respondents, 27 per cent reported being targeted with unsolicited sexual advances via direct message, unwanted intimate images ("cyberflashing"), sexual innuendo, or nonconsensual sexting. A further 12 per cent had personal images, including intimate photographs, shared without consent, and 6 per cent were subjected to deepfakes or manipulated images and videos. These attacks were often deliberate and coordinated, designed to silence women while undermining their professional credibility and personal reputations.
Mental health and self-censorship toll
The psychological impact is severe: 24 per cent of respondents experienced anxiety and/or depression linked to online violence, and 13 per cent reported diagnoses of post-traumatic stress disorder (PTSD). More alarmingly, 41 per cent said they self-censored on social media to avoid abuse, and 19 per cent said they self-censored at work as a result. This chilling effect is pushing women out of public participation entirely.
Why technology amplifies the harm
Professor Julie Posetti, Chair of the Centre for Journalism and Democracy at City St George's and the report's lead author, said: "AI-assisted 'virtual rape' is now at the fingertips of perpetrators. This phenomenon accelerates the harm from online violence inflicted on women in public life. This violence serves to fuel the reversal of women's hard-won rights in a climate of rising authoritarianism, democratic backsliding and networked misogyny." Posetti added: "The rollback of women's rights is enabled and exacerbated by technologies which – by design – amplify misogynistic hate speech for profit."
Justice remains elusive
Despite the scale of abuse, legal recourse is rare. Only 25 per cent of respondents had reported incidents of online violence to police, and just 15 per cent had taken legal action, yet justice remains out of reach for most. Co-author Lea Hellmueller, Associate Professor of Journalism and Associate Dean for Research and Innovation at City St George's, highlighted a troubling pattern: "Law enforcement is outsourcing the responsibility for protection to the survivors by telling women to remove themselves from social media, to avoid speaking publicly about controversial issues, to move into less visible roles at work, or to take leave from their respective careers." This approach shifts the burden away from perpetrators and platforms, deepening the silencing effect.
What happens next
The report underscores an urgent need for platform accountability, stronger legal frameworks for AI-generated abuse, and law enforcement training on technology-facilitated violence. Without intervention, the study suggests, women's representation in journalism, activism, and public discourse will continue to contract.