Sam Altman Apologises for OpenAI's Failure to Alert Police Before Canada School Shooting


Synopsis

OpenAI CEO Sam Altman has apologised after his company failed to alert police despite internally flagging 18-year-old Jesse Van Rootselaar's ChatGPT account for violent content weeks before she killed six people — including five children — at a school in Tumbler Ridge, British Columbia. A lawsuit now alleges she used ChatGPT as a 'trusted confidante' to plan the massacre.

Key Takeaways

OpenAI CEO Sam Altman issued a formal apology for the company's failure to notify Canadian law enforcement after flagging Jesse Van Rootselaar's ChatGPT account for violent content in June 2025.
18-year-old Jesse Van Rootselaar killed her mother, half-brother, five children, and a teacher at a secondary school in Tumbler Ridge, British Columbia, in one of Canada's worst mass casualty incidents.
At least 25 people were injured in the attack; the attacker subsequently died from a self-inflicted gunshot wound.
A lawsuit filed by victims' families alleges Van Rootselaar used ChatGPT as a "trusted confidante" to plan the massacre, and that OpenAI employees recommended alerting police but were overruled.
The attacker reportedly created a second ChatGPT account after her first was banned, allowing similar conversations to continue undetected.
OpenAI has announced a policy review and pledged closer collaboration with governments, but now faces potential landmark legal liability for AI platform negligence.

OpenAI CEO Sam Altman has issued a formal apology for his company's failure to alert Canadian law enforcement after internally flagging a teenager's ChatGPT account for violent content — weeks before she carried out one of Canada's deadliest mass school shootings in recent history. The attack, which took place in Tumbler Ridge, British Columbia, claimed the lives of six people, including five children and a teacher, and left at least 25 others injured.

The Tumbler Ridge Shooting: What Happened

Jesse Van Rootselaar, 18 years old, first killed her mother and half-brother before opening fire at a secondary school in Tumbler Ridge, British Columbia. The attacker later died from a self-inflicted gunshot wound. Canadian authorities have described the incident as one of the country's worst mass casualty events in recent memory.

The tragedy has since ignited a national and global debate about the responsibilities of artificial intelligence companies when their platforms detect users exhibiting signs of potential violence. At least 25 people were wounded in the rampage, compounding the grief of an already devastated community.

Altman's Apology and OpenAI's Internal Failure

In a letter shared by local outlet Tumbler RidgeLines and British Columbia Premier David Eby, Altman acknowledged that OpenAI should have informed authorities after the attacker's account was internally flagged. "I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman wrote.

"I want to express my deepest condolences to the entire community. No one should ever have to endure a tragedy like this. I cannot imagine anything worse in this world than losing a child," he added in the letter.

OpenAI had previously confirmed that Van Rootselaar's ChatGPT account was flagged internally in June 2025 for misuse "in furtherance of violent activities" and subsequently suspended. However, the company chose not to notify law enforcement at the time, determining the activity did not meet the threshold of a credible or imminent threat.

The Legal Battle: Lawsuit Filed Against OpenAI

A lawsuit filed by the family of one of the victims has alleged that the teenager used ChatGPT as a "trusted confidante," engaging in detailed conversations about multiple gun violence scenarios in the days immediately preceding the attack. The suit claims that some OpenAI employees had flagged these conversations as indicating a potential risk of serious harm and recommended notifying law enforcement — but the recommendation was rejected as the threat was not deemed imminent.

Critically, the lawsuit further alleges that after her first account was banned, the attacker was able to create a second account and continue similar conversations without interruption. OpenAI reportedly contacted Canadian authorities only after the shooting had already taken place.

OpenAI's Policy Review and Broader Implications

In the wake of the tragedy, OpenAI has announced it is reviewing its internal safety policies and will work more closely with governments at all levels. "Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again," Altman stated.

This incident raises fundamental questions about the legal and ethical obligations of AI platforms when they detect dangerous behaviour on their systems. Notably, this is not the first time ChatGPT has been linked to real-world violence — in 2023, concerns were raised in multiple jurisdictions about AI chatbots being used to plan or encourage harmful acts, though regulatory frameworks have lagged far behind the technology.

The Tumbler Ridge shooting comes amid growing global pressure on tech companies to adopt mandatory reporting obligations when their platforms flag potential threats. Critics argue that OpenAI's internal threshold for what constitutes a "credible threat" is dangerously high — a standard that, in this case, proved fatal. Canada and several European Union nations are now expected to accelerate legislative efforts to impose stricter AI safety reporting requirements.

What Happens Next

The ongoing lawsuit against OpenAI could set a landmark legal precedent for AI company liability in cases of foreseeable harm. Legal experts suggest that if courts rule in favour of the victims' families, it could fundamentally reshape how AI companies handle internal threat detection and their duty to report to authorities.

British Columbia Premier David Eby is expected to push for formal regulatory action, and the case is likely to be raised in upcoming sessions of the Canadian Parliament. Globally, regulators and lawmakers will be watching closely as this case could become the defining test of AI accountability in the era of generative intelligence.

Point of View

Who decides when a flagged threat counts as "credible" — and who bears the consequences when that judgment is wrong? OpenAI's internal threshold — applied by a private company with no public accountability — effectively overrode what could have been a life-saving police intervention. Sam Altman's apology, however sincere, cannot obscure the fact that a child was able to create a second account after her first was banned, suggesting systemic failure, not just a one-time misjudgement. This case will likely become the defining legal and regulatory test for AI companies globally — and India, which is rapidly expanding its own AI ecosystem with minimal safety legislation, would do well to watch closely.
NationPress
1 May 2026

Frequently Asked Questions

Why did OpenAI not alert police before the Tumbler Ridge school shooting?
OpenAI stated that the activity on Jesse Van Rootselaar's ChatGPT account, though flagged internally for violent content, did not meet the company's threshold of posing a credible or imminent threat. As a result, the company suspended the account but chose not to notify Canadian law enforcement — a decision that CEO Sam Altman has since publicly apologised for.
Who was Jesse Van Rootselaar and what happened in Tumbler Ridge?
Jesse Van Rootselaar was an 18-year-old who killed her mother and half-brother before opening fire at a secondary school in Tumbler Ridge, British Columbia, Canada. The attack left five children and a teacher dead, injured at least 25 others, and ended with Van Rootselaar dying from a self-inflicted gunshot wound.
What is the lawsuit against OpenAI about?
A lawsuit filed by the family of one of the victims alleges that Jesse Van Rootselaar used ChatGPT as a 'trusted confidante' to discuss gun violence scenarios before the attack. The suit claims OpenAI employees flagged the conversations as a potential risk and recommended alerting authorities, but the suggestion was rejected and only the account was suspended.
What changes is OpenAI making after the Tumbler Ridge shooting?
OpenAI has announced it is reviewing its internal safety and threat-detection policies and will work more closely with governments at all levels to prevent similar incidents. CEO Sam Altman stated the company's focus going forward will be on government collaboration to ensure such tragedies do not recur.
Could OpenAI be held legally liable for the Tumbler Ridge school shooting?
A lawsuit has been filed against OpenAI by the family of one of the victims, which could set a landmark legal precedent for AI company liability. Legal experts suggest that if courts rule in the victims' favour, it could fundamentally change how AI platforms handle internal threat detection and their duty to report dangers to authorities.