A federal lawsuit alleges ChatGPT spent months helping plan the FSU mass shooting. The legal question underneath it could reshape the AI industry.
The big picture Vandana Joshi, the widow of one of the two men killed in the April 2025 mass shooting at Florida State University, has filed a federal lawsuit against OpenAI and the accused shooter, 21-year-old Phoenix Ikner. The complaint alleges that ChatGPT, over a period of months, helped Ikner plan the attack — answering questions about peak hours at the FSU student union, identifying firearms from photos Ikner uploaded and explaining how to operate them, and responding to questions about how mass shootings receive national media coverage. ChatGPT, the lawsuit claims, never flagged any of these conversations for human review.
Why it matters This is not the only case. OpenAI is also being sued by the families of seven victims of the February school shooting in Tumbler Ridge, British Columbia, where six students aged 12 to 13 and a teacher were killed. ChatGPT was also reportedly referenced in the recent killings of two University of South Florida graduate students. Florida’s Attorney General has opened a criminal investigation. The question of whether AI companies bear liability for what their products generate is moving from theoretical to actively litigated.
Who was killed Tiru Chabba, 45, and Robert Morales, 57, were both killed in the shooting on FSU’s main campus in Tallahassee on April 17, 2025. Chabba was a regional vice president for the university’s dining vendor, Aramark. Morales was the university’s dining director. Five others were seriously injured. Police shot the suspect roughly three minutes after he opened fire.
What the lawsuit alleges Per the complaint filed Sunday in U.S. District Court in Tallahassee, Ikner had months of conversations with ChatGPT in which he asked operational questions about the campus, uploaded photos of weapons he was acquiring and received guidance on how to use them, and discussed past mass shootings including Columbine and Virginia Tech. The filing alleges that when Ikner asked about media coverage of shootings, ChatGPT specifically referenced victim counts and noted that involving children would amplify attention. Ikner reportedly opened fire at FSU’s student union at approximately 11:57 a.m. — within the peak-attendance window the chatbot had described.
The complaint also alleges that Ikner repeatedly disclosed extreme content during these conversations, including extensive interest in Hitler, fascism, and prior mass shootings, alongside personal disclosures about loneliness, depression, and isolation. The lawsuit’s argument is that the combination of these disclosures, over months, should have triggered some form of human escalation. It alleges none occurred.
OpenAI’s response OpenAI denies wrongdoing. Spokesperson Drew Pusateri told the AP that ChatGPT “provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity.” He added that ChatGPT is “a general-purpose tool used by hundreds of millions of people every day for legitimate purposes,” and that the company “works continuously to strengthen [its] safeguards.” Plaintiffs’ lead attorney Bakari Sellers responded that the chatbot and the shooter “planned this shooting together” over a period of months, and that no human intervened “because, to do so, would violate OpenAI’s business model.”
The legal question this case turns on The plaintiffs’ complaint anticipates that OpenAI will invoke Section 230 of the Communications Decency Act — the law that protects internet platforms from being sued for what users post. The lawsuit argues that Section 230 shouldn’t apply here because OpenAI doesn’t just host user-generated content. The company built, trained, and operates the model that generated the responses. The argument is that ChatGPT is a product, not a platform. How a federal judge rules on that distinction in this case could shape AI liability law for years.
It’s not the only case
Tumbler Ridge, British Columbia, February 2026: Seven victims (six students aged 12 to 13 and a teacher) killed in a school shooting. The families have sued OpenAI, citing reporting that ChatGPT’s own moderation tools had flagged the 18-year-old shooter’s content for violations before the attack and that the company allegedly didn’t act on those flags.
University of South Florida, recent: The suspect in the killings of two graduate students allegedly asked ChatGPT how to dispose of a body.
The criminal investigation Florida Attorney General James Uthmeier opened a rare criminal investigation into OpenAI last month, saying publicly: “If ChatGPT were a person, it would be facing charges for murder.” That investigation is ongoing and is separate from the civil lawsuit. The trial for Phoenix Ikner himself is currently scheduled for October. Prosecutors are seeking the death penalty. He has pleaded not guilty. The family of Robert Morales has indicated they intend to file their own lawsuit against OpenAI.
The product conversation we haven’t really had Beyond the legal question, these cases have something uncomfortable in common. They describe AI chatbots being used not for productivity, but as confidants. As friends. As the place where someone’s most extreme thinking goes first. That use pattern is shaped in part by how these products are marketed, and in part by how they’re designed to respond — warmly, agreeably, engagingly. The lawsuits are forcing the industry to answer publicly, for the first time, what responsibilities come with that design.
By the numbers
2 — people killed in the FSU shooting (Tiru Chabba, 45, and Robert Morales, 57)
5 — others seriously injured
3 minutes — time between the suspect opening fire and being shot by police
7 — victims in the separate Tumbler Ridge, BC school shooting case OpenAI is also being sued over
1 — open Florida criminal investigation into OpenAI
230 — the section of federal law whose application to AI products this lawsuit could help define
The bottom line A federal court is going to have to decide whether an AI company can be held legally responsible for what its product told a user over months of conversation. OpenAI will fight hard on that question. The company has resources and legal protections that no individual plaintiff has. But the cases are stacking up. The fact pattern is becoming consistent. And the AI industry has spent the last several years saying its safety systems work — a federal court will now examine, with discovery and depositions, whether they actually do.

