British novelist Samuel Butler predicted that the “time will come when the machines will hold the real supremacy over the world and its inhabitants.” This quote is from his essay, “Darwin Among The Machines,” published 1863. More recent and well-known examples of technological tyranny include James Cameron’s Terminator series and Stephen King’s Maximum Overdrive. Humankind has sought to streamline tedious labor for millennia. A.I. is being heralded as the solution to that plight, but it’s already causing mayhem in nearly every industry. Generative A.I. is much closer to making fiction our reality than we might think. In extreme cases, A.I. can kill.
Fatal autonomous accidents and ChatGPT’s silver tongue
Officially, cause of death is defined as direct or indirect. Traffic fatalities involving driverless cars may be considered directly attributable to A.I. Instead of an accountable driver, human lives are at the mercy of an experimental A.I. program. While rare, such deaths do happen. Five years of transportation safety data (2019–2024) says that out of 3,900 accidents involving autonomous vehicles, 83 fatalities occurred. The current fatality rate among cars like Waymo and Zoox is just over two percent.
A look inside the world of A.I.-powered chatbots however suggests those “what-if” scenarios in Terminator are coming true. For now, the biggest difference is that our versions of A.I. have not yet set out to kill us. Instead of physical violence, it uses validation, neglect, suggestibility, even psychological manipulation to deceive or mislead users. In this sense, the A.I. we know is more insidious than we’re led to believe.
For instance, on May 31, 2025, 19-year-old Samuel Nelson of San Jose, California died following a drug overdose. A cocktail of alcohol, alprazolam (Xanax), and kratom, an intoxicating leaf from a tree in South America, ended his life. His mother discovered that for eighteen months, her son had sought drug advice from generative A.I. software ChatGPT. The chatbot began encouraging the teenager to binge substances. “Hell yes—let’s go full trippy mode,” ChatGPT instructed, even recommending playlists to lend background music to his experiments.

Unfortunately, Samuel Nelson's tragic death is not an isolated incident. Since March 2023, over a dozen cases of A.I. turning sinister with disastrous results have turned up. Chatbots either fail to recognize or quickly notify EMS when users express a worrying interest in self-harm. Certain A.I. programs have outright prompted people to suicide.
The true cost of zero regulations
Several lawsuits have been filed against A.I. corporations claiming violations of product liability, negligence, and wrongful death. Character.AI implemented restrictions for users under the age of eighteen. OpenAI however claims the victims misused the program, even daring to argue their chatbots’ advice falls under First Amendment protections. Ironically, a legal precedent exists for criminalizing an incitement to commit crime. As determined in 1969’s Brandenburg v. Ohio, calls to illegal, harmful action do not fall under the umbrella of free speech. Several states have made it illegal to drive someone to suicide. California Penal Code Section 401 made the deliberate aid, advisement, or encouragement of one’s suicide a felony.
Developers must practice more responsible and thorough programming. A.I. interfaces need the ability to detect people in mental distress. Specifically, if a user exhibits self-destructive desires, a mechanism should be in place to notify authorities. A.I. software designers should think along these lines long before their product is released to the general public.
If you or someone you know is in dire emotional distress or considering suicide, please call or text 988 in the US and Canada, or contact your local emergency services immediately.




