Sam Altman Apologises for OpenAI's Failure to Alert Police Before Canada School Shooting
Synopsis
OpenAI CEO Sam Altman has issued a formal apology for his company's failure to alert Canadian law enforcement after internally flagging a teenager's ChatGPT account for violent content — weeks before she carried out one of Canada's deadliest mass school shootings in recent history. The attack, which took place in Tumbler Ridge, British Columbia, claimed the lives of six people, including five children and a teacher, and left at least 25 others injured.
The Tumbler Ridge Shooting: What Happened
Eighteen-year-old Jesse Van Rootselaar killed her mother and half-brother before opening fire at a secondary school in Tumbler Ridge, British Columbia. The attacker later died from a self-inflicted gunshot wound. Canadian authorities have described the incident as one of the country's worst mass casualty events in recent memory.
The tragedy has since ignited a national and global debate about the responsibilities of artificial intelligence companies when their platforms detect users exhibiting signs of potential violence. At least 25 people were wounded in the rampage, compounding the grief of an already devastated community.
Altman's Apology and OpenAI's Internal Failure
In a letter shared by local outlet Tumbler RidgeLines and British Columbia Premier David Eby, Altman acknowledged that OpenAI should have informed authorities after the attacker's account was internally flagged. "I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman wrote.
"I want to express my deepest condolences to the entire community. No one should ever have to endure a tragedy like this. I cannot imagine anything worse in this world than losing a child," he added in the letter.
OpenAI had previously confirmed that Van Rootselaar's ChatGPT account was flagged internally in June 2025 for misuse "in furtherance of violent activities" and subsequently suspended. However, the company chose not to notify law enforcement at the time, determining that the activity did not meet its threshold for a credible or imminent threat.
The Legal Battle: Lawsuit Filed Against OpenAI
A lawsuit filed by the family of one of the victims has alleged that the teenager used ChatGPT as a "trusted confidante," engaging in detailed conversations about multiple gun violence scenarios in the days immediately preceding the attack. The suit claims that some OpenAI employees had flagged these conversations as indicating a potential risk of serious harm and recommended notifying law enforcement — but the recommendation was rejected as the threat was not deemed imminent.
Critically, the lawsuit further alleges that after her first account was banned, the attacker was able to create a second account and continue similar conversations without interruption. OpenAI reportedly contacted Canadian authorities only after the shooting had already taken place.
OpenAI's Policy Review and Broader Implications
In the wake of the tragedy, OpenAI has announced it is reviewing its internal safety policies and will work more closely with governments at all levels. "Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again," Altman stated.
This incident raises fundamental questions about the legal and ethical obligations of AI platforms when they detect dangerous behaviour on their systems. Notably, this is not the first time ChatGPT has been linked to real-world violence — in 2023, concerns were raised in multiple jurisdictions about AI chatbots being used to plan or encourage harmful acts, though regulatory frameworks have lagged far behind the technology.
The Tumbler Ridge shooting comes amid growing global pressure on tech companies to adopt mandatory reporting obligations when their platforms flag potential threats. Critics argue that OpenAI's internal threshold for what constitutes a "credible threat" is dangerously high — a standard that, in this case, proved fatal. Canada and several European Union nations are now expected to accelerate legislative efforts to impose stricter AI safety reporting requirements.
What Happens Next
The ongoing lawsuit against OpenAI could set a landmark legal precedent for AI company liability in cases of foreseeable harm. Legal experts suggest that if courts rule in favour of the victims' families, it could fundamentally reshape how AI companies handle internal threat detection and their duty to report to authorities.
British Columbia Premier David Eby is expected to push for formal regulatory action, and the case is likely to be raised in upcoming sessions of the Canadian Parliament. Globally, regulators and lawmakers will be watching closely, as this case could become the defining test of AI accountability in the era of generative AI.