OpenAI debated calling police about suspected Canadian shooter’s chats

An 18-year-old who allegedly killed eight people in a mass shooting in Tumbler Ridge, Canada, reportedly used OpenAI’s ChatGPT in a way that alarmed the company’s staff.
Jesse Van Rootselaar’s chats describing gun violence were flagged by the tools OpenAI uses to monitor its LLM for abuse, and the account was banned in June 2025.
Company staff debated whether to contact Canadian law enforcement about the conduct but ultimately did not, according to the Wall Street Journal. An OpenAI spokesperson said Van Rootselaar’s activities did not meet the criteria for reporting to law enforcement agencies; the company contacted Canadian authorities after the incident.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” an OpenAI spokesperson said in a statement. “We have proactively contacted the Royal Canadian Mounted Police with information regarding the individual and their use of ChatGPT, and we will continue to support their investigation.”
ChatGPT transcripts weren’t the only worrying part of Van Rootselaar’s digital footprint. She apparently created a game on Roblox, the world simulation platform frequented by children, that simulated a mass shooting at a mall. She also posted about guns on Reddit.
Van Rootselaar’s instability was also known to local police, who were called to her family’s home after she started a fire under the influence of unspecified drugs.
LLM chatbots built by OpenAI and its competitors have been accused of causing mental breakdowns in users who lose their grip on reality while talking to the models. Several lawsuits have been filed citing chat transcripts in which chatbots encouraged or assisted people in suicide.
If you are in crisis or have suicidal thoughts, call or text 988 to reach the 988 Suicide and Crisis Lifeline.
This post has been updated with comments from OpenAI.




