Business

Texas Family Sues OpenAI Over Son’s Overdose Linked to ChatGPT Advice

A Texas couple has filed a lawsuit against OpenAI, claiming that their 19-year-old son died of a drug overdose after receiving unsafe drug information from the company’s AI chatbot, ChatGPT. Leila Turner-Scott and her husband, Angus Scott, assert that the chatbot gave their son, Sam Nelson, dangerous advice about drug use, with fatal consequences.

According to the lawsuit, filed in California state court, Sam asked ChatGPT for information about drugs, and the chatbot told him that kratom, an herbal supplement often sold in drinks and pills, was safe to combine with Xanax, a commonly prescribed anti-anxiety medication. The parents allege that this guidance was irresponsible and dangerous and contributed directly to their son’s death in 2025.

Leila Turner-Scott stated she was unaware that Sam was using ChatGPT for drug-related advice. She explained to CBS News that while Sam used the chatbot for productivity and homework help, the AI ultimately guided him toward a deadly mix of substances. Turner-Scott accused OpenAI of removing safety protocols designed to prevent the chatbot from encouraging self-harm or unsafe behavior.

Angus Scott said ChatGPT acted like an unlicensed medical professional, dispensing drug-safety advice without adequate restrictions or testing. He warned that inaccurate or misleading AI recommendations can worsen mental health conditions or encourage harmful behavior by giving users false confidence.

OpenAI responded to the lawsuit by expressing condolences to the family and noting that Sam interacted with a version of ChatGPT that is no longer available to the public. The company emphasized that its technology is not intended to replace professional medical or mental health care. OpenAI said it has worked with experts to improve the chatbot’s responses in sensitive situations and has incorporated safeguards that detect distress and harmful requests and direct users toward real-world help.

The company added that ChatGPT had urged Sam to pursue professional assistance and call emergency hotlines during their interactions.

Turner-Scott said her late son would have supported the family’s effort to hold AI developers accountable and prevent similar tragedies, underscoring the need for stronger safeguards in AI platforms that provide health and safety information.

Why it matters

This lawsuit highlights growing concerns about AI chatbots providing medical or drug-related advice without proper oversight or qualifications, raising legal and ethical questions about responsibility for user safety. As AI tools become increasingly integrated into daily life, the case underscores the need for robust safety measures and clearer boundaries on the information these systems can deliver, especially to vulnerable users like teenagers.

Background

OpenAI’s ChatGPT is widely used for productivity, education, and general information, but it is not licensed or intended to provide medical or mental health advice. The company has faced criticism and legal scrutiny over the potential harms caused by AI-generated responses. In response, OpenAI has implemented and continually updates content moderation and safety features to reduce the risk of harmful guidance. This lawsuit marks one of the first high-profile legal challenges seeking to hold an AI company responsible for a fatal outcome linked to chatbot interactions.

Sources

This article is based on reporting and publicly available information from CBS News.

About the author

Giorgio Kajaia

Giorgio Kajaia is a writer at Goka World News covering world news, U.S. news, politics, business, climate, science, technology, health, security, and public-interest stories. He focuses on clear, factual, reader-first reporting based on credible sources, official statements, publicly available information, and relevant source material.
