OpenAI has acknowledged that employees internally discussed alerting law enforcement months before an 18-year-old transgender woman allegedly carried out a mass shooting in Canada.
According to a report from The Wall Street Journal, Jesse Van Rootselaar’s interactions with ChatGPT in June included detailed fantasies of gun violence over several days.
The posts were flagged by OpenAI’s automated moderation system and escalated for internal review.
Roughly a dozen employees reportedly debated whether the activity warranted contacting Canadian authorities.
Some staff members viewed the content as potentially signaling real-world harm and urged company leaders to alert law enforcement.
Unfortunately, OpenAI ultimately chose not to notify authorities.
Exclusive: Months before Jesse Van Rootselaar became the suspect in the mass shooting that devastated a rural town in British Columbia, Canada, OpenAI considered alerting law enforcement about her interactions with its ChatGPT chatbot, the company said. https://t.co/sCzxy9stSw
— The Wall Street Journal (@WSJ) February 20, 2026
A company spokeswoman said Van Rootselaar’s account was banned, but that the activity did not meet the company’s threshold for reporting.
She said law enforcement would have been contacted only if the posts constituted “a credible and imminent risk of serious physical harm to others.”
Source: The Gateway Pundit