Deepfakes, AI-assisted abuse driving women from public life: Global study of 641 journalists, activists


Synopsis

A global study of 641 women in public life reveals a sustained assault: deepfakes, AI-generated sexual abuse, and coordinated harassment are not just traumatizing; they are working. 41% now self-censor online; 13% have been diagnosed with PTSD. The chilling effect is real, and law enforcement is telling women to disappear rather than holding perpetrators accountable.

Key Takeaways

641 women journalists, activists, and human rights defenders across 119 countries, surveyed in late 2025 by UN Women, City St George's, and TheNerve.
27% targeted with unsolicited sexual advances; 12% had intimate images shared without consent; 6% subjected to deepfakes or manipulated media.
24% experienced anxiety/depression; 13% diagnosed with PTSD linked to online violence.
41% self-censor on social media; 19% self-censor at work to avoid harassment.
25% reported to police; 15% pursued legal action, but justice remains elusive.

A team of global researchers on Thursday released findings showing that deepfakes, AI-assisted sexual abuse, and coordinated online harassment are accelerating women's withdrawal from public and professional life. The report, produced by UN Women, City St George's (University of London), and data forensics firm TheNerve, analysed the experiences of 641 women journalists, media workers, activists, and human rights defenders across 119 countries, surveyed in late 2025.

The scale of online violence

27 per cent of respondents reported being targeted with unsolicited sexual advances via direct message, unwanted intimate images, "cyberflashing", sexual innuendos, or nonconsensual sexting. 12 per cent had personal images—including intimate photographs—shared without consent, while 6 per cent were subjected to deepfakes or manipulated images and videos. These attacks were often deliberate and coordinated, designed to silence women while undermining their professional credibility and personal reputations.

Mental health and self-censorship toll

The psychological impact is severe. 24 per cent of respondents experienced anxiety and/or depression linked to online violence; 13 per cent reported diagnoses of Post-Traumatic Stress Disorder (PTSD). More alarmingly, 41 per cent said they self-censored on social media to avoid abuse, and 19 per cent were self-censoring at work as a result. This chilling effect is pushing women out of public participation entirely.

Why technology amplifies the harm

Professor Julie Posetti, Chair of the Centre for Journalism and Democracy at City St George's and the report's lead author, said: "AI-assisted 'virtual rape' is now at the fingertips of perpetrators. This phenomenon accelerates the harm from online violence inflicted on women in public life. This violence serves to fuel the reversal of women's hard-won rights in a climate of rising authoritarianism, democratic backsliding and networked misogyny." Posetti added: "The rollback of women's rights is enabled and exacerbated by technologies which – by design – amplify misogynistic hate speech for profit."

Justice remains elusive

Despite the scale of abuse, legal recourse is rare. 25 per cent of respondents had reported incidents of online violence to police, and 15 per cent had taken legal action, yet justice remains out of reach for most. Co-author Lea Hellmueller, Associate Professor of Journalism and Associate Dean for Research and Innovation at City St George's, highlighted a troubling pattern: "Law enforcement is outsourcing the responsibility for protection to the survivors by telling women to remove themselves from social media, to avoid speaking publicly about controversial issues, to move into less visible roles at work, or to take leave from their respective careers." This approach shifts the burden away from perpetrators and platforms, deepening the silencing effect.

What happens next

The report underscores an urgent need for platform accountability, stronger legal frameworks for AI-generated abuse, and law enforcement training on technology-facilitated violence. Without intervention, the study suggests, women's representation in journalism, activism, and public discourse will continue to contract.

Point of View

This is a feature, not a bug. Platforms amplify misogynistic content because it drives engagement and ad revenue. Deepfakes and AI-generated sexual abuse are not fringe phenomena; they are now industrial-scale tools of silencing. What is most damning is the law enforcement response: telling women to disappear from public life rather than prosecuting perpetrators or holding platforms accountable. This is not a technology problem alone; it is a governance failure. Until deepfake creation carries real legal consequences and platforms face liability for algorithmic amplification of abuse, the chilling effect will only deepen.
NationPress
1 May 2026

Frequently Asked Questions

What is the scope of the study on deepfakes and online violence against women?
The study analysed experiences of 641 women journalists, media workers, activists, and human rights defenders from 119 countries, surveyed in late 2025 by UN Women, City St George's (University of London), and data forensics firm TheNerve. It quantifies the scale and impact of deepfakes, AI-assisted sexual abuse, and coordinated harassment on women in public life.
What percentage of women respondents experienced deepfakes or manipulated images?
6 per cent of respondents have been subjected to deepfakes or manipulated images and videos. Additionally, 12 per cent had personal images, including intimate photographs, shared without consent.
How is online violence affecting women's mental health and work?
24 per cent of respondents experienced anxiety and/or depression linked to online violence; 13 per cent reported Post-Traumatic Stress Disorder (PTSD) diagnoses. As a result, 41 per cent self-censor on social media and 19 per cent self-censor at work to avoid harassment.
What is the report's main criticism of law enforcement?
The report finds that law enforcement is outsourcing protection responsibility to survivors by telling women to remove themselves from social media, avoid public speaking on controversial issues, move into less visible work roles, or take leave from careers — rather than prosecuting perpetrators or holding platforms accountable.
Why do platforms amplify misogynistic content and deepfakes?
According to the report's lead author Professor Julie Posetti, platforms amplify misogynistic hate speech by design because it drives profit through engagement and advertising revenue, exacerbating the rollback of women's rights in a climate of rising authoritarianism and networked misogyny.