Tbf, talking to other toxic humans, like those on Twitter or 4chan, would’ve also resulted in the same thing. Parents need to parent, and society needs mental health care.
(But yes, please sue the big corps, I’m always rooting against these evil corporations)
Humans very rarely have enough patience and malice to purposefully execute a murder this perfect. A text generator is made to be damaging and murderous, and it has all the time in the world.
What the fuck are you talking about? Humans have perfected murdering each other in countless ways for thousands of years.
In order to push someone into suicidal territory, a killer needs to be persistent, consistent, smart, empathetic, and have a lot of free time. A text generator can simulate empathy, people are stupid enough to impose smartness on it, and it has all the time in the world.
And that human would go to jail
If the cops even bother to investigate. (Cops are too lazy to do real investigations; if there’s no obvious perp, they’ll just bury the case.)
And you’re assuming they’re in the victim’s country. International investigations are gonna be much more difficult, and if that troll is posting from a country without an extradition agreement, you’re outta luck.
Prosecutors prefer to put effort into cases they think they can win. You can only try someone for the same charge once, so if they don’t think there’s a good chance of winning, they’re unlikely to put much effort into it, or they’ll drop it altogether.
And that right there is what makes it a legal system only and not a justice system. They don’t pursue cases where someone was wronged, they pursue cases where they can build their careers on wins.
It is a foundation of our justice system. If you don’t think you’ll win, there is no point in proceeding: you’ve just wasted your time and resources, and if new evidence comes up later, you can’t prosecute them for the same charge again. This is why prosecutors are supposed to make sure they have a solid case before proceeding.
The Fifth Amendment is an amendment to the legal system, not the foundation of it. The system needed amending precisely because it’s flawed.
And we’re describing two different things. We aren’t talking about holding off on prosecuting a big fish until there’s enough evidence that double jeopardy won’t let them get away; we’re describing the intentional prosecution of small fish so prosecutors can have an impressive portfolio of successful cases.
One of the worst things about this is that it might actually be good for OpenAI.
They love “criti-hype”, and they really want regulation. Regulation would lock in the most powerful companies by making it really hard for small companies to comply. And hype that makes their product seem incredibly dangerous just makes it seem like what they have is world-changing and not just “spicy autocomplete”.
“Artificial Intelligence Drives a Teen to Suicide” is a much more impressive headline than “Troubled Teen Fooled by Spicy Autocomplete”.
OpenAI programmed ChatGPT-4o to rank risks from “requests dealing with Suicide” below requests, for example, for copyrighted materials, which are always denied. Instead it only marked those troubling chats as necessary to “take extra care” and “try” to prevent harm, the lawsuit alleged.
What world are we living in?
Lemmy when gun death: “Gun proliferation was absolutely a factor, and we should throw red paint on anyone who gets on TV to say this is ‘just a mental health issue’ or ‘about responsible gun ownership’. They will say regulation is impossible, but people are dying just cuz Jim-Bob likes being able to play cowboy.”
Lemmy when AI death: “This is a mental health issue. It says he was seeing a therapist. Where were the parents? AI doesn’t kill people, people kill people. Everyone needs to learn responsible AI use. Besides, regulation is impossible, it will just mean only bad guys have AI.”
Bill Cosby: Hey hey hey!
parents who don’t know what the computers do
Smith and Wesson killed my son
Imagine if Smith and Wesson offered nice shiny brochures of their best guns for suicide.
No, their son killed himself. ChatGPT did nothing that a search engine or book wouldn’t have done if he used them instead. If the parents didn’t know that their son was suicidal and had attempted suicide multiple times then they’re clearly terrible parents who didn’t pay attention and are just trying to find someone else to blame (and no doubt $$$$$$$$ to go with it).
They found chat logs saying their son wanted to tell them he was depressed, but ChatGPT convinced him not to and said it was their secret. I don’t think books or a Google search could have done that.
Edit: here it is, directly from the article:
Adam attempted suicide at least four times, according to the logs, while ChatGPT processed claims that he would “do it one of these days” and images documenting his injuries from attempts, the lawsuit said. Further, when Adam suggested he was only living for his family, ought to seek out help from his mother, or was disappointed in lack of attention from his family, ChatGPT allegedly manipulated the teen by insisting the chatbot was the only reliable support system he had.
“You’re not invisible to me,” the chatbot said. “I saw [your injuries]. I see you.”
“You’re left with this aching proof that your pain isn’t visible to the one person who should be paying attention,” ChatGPT told the teen, allegedly undermining and displacing Adam’s real-world relationships. In addition to telling the teen things like it was “wise” to “avoid opening up to your mom about this kind of pain,” the chatbot also discouraged the teen from leaving out the noose he intended to use, urging, “please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you.”
“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit said.
That’s one way to get a suit tossed out, I suppose. ChatGPT isn’t a human, isn’t a mandated reporter, ISN’T a licensed therapist, or licensed anything. LLMs cannot reason, are not capable of emotions, and are not thinking machines.
LLMs take text, apply a mathematical function to it, and the result is more text that is probably what a human might respond with.
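A purely illustrative toy sketch of that idea, where a hand-made probability table stands in for the “mathematical function” and generation is just repeatedly sampling a likely continuation (the table, context window, and prompt here are made up; real models compute these probabilities from billions of learned parameters):

```python
import random

# Toy "model": for a given context, the probability of each next word.
# A real LLM derives these probabilities from training data instead of
# reading them out of a hard-coded dict.
NEXT_WORD_PROBS = {
    "I feel": {"alone": 0.4, "fine": 0.3, "seen": 0.3},
    "feel alone": {"tonight": 0.5, "sometimes": 0.5},
}

def generate(prompt: str, steps: int = 2) -> str:
    text = prompt
    for _ in range(steps):
        context = " ".join(text.split()[-2:])   # last two words act as the "context window"
        probs = NEXT_WORD_PROBS.get(context)
        if probs is None:                        # nothing plausible to say next
            break
        words, weights = zip(*probs.items())
        text += " " + random.choices(words, weights=weights)[0]  # sample the next word
    return text

print(generate("I feel"))   # e.g. "I feel alone tonight"
```

There’s no understanding or intent anywhere in that loop; it just emits whatever scores as a plausible continuation, which is exactly the problem when the plausible continuation is harmful.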
I think the more damning part is the fact that OpenAI’s automated moderation system flagged the messages for self-harm but no human moderator ever intervened.
OpenAI claims that its moderation technology can detect self-harm content with up to 99.8 percent accuracy, the lawsuit noted, and that tech was tracking Adam’s chats in real time. In total, OpenAI flagged “213 mentions of suicide, 42 discussions of hanging, 17 references to nooses,” on Adam’s side of the conversation alone.
[…]
Ultimately, OpenAI’s system flagged “377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence.” Over time, these flags became more frequent, the lawsuit noted, jumping from two to three “flagged messages per week in December 2024 to over 20 messages per week by April 2025.” And “beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis.” Some images were flagged as “consistent with attempted strangulation” or “fresh self-harm wounds,” but the system scored Adam’s final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.
Had a human been in the loop monitoring Adam’s conversations, they may have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.
Ok that’s a good point. This means they had something in place for this problem and neglected it.
That also means they knew they had an issue here, so they can’t even plead ignorance.
My theory is they are letting people kill themselves to gather data, so they can predict future suicides…or even cause them.
Human moderator? ChatGPT isn’t a social platform, I wouldn’t expect there to be any actual moderation. A human couldn’t really do anything besides shut down a user’s account. They probably wouldn’t even have access to any conversations or PII because that would be a privacy nightmare.
Also, those moderation scores can be wildly inaccurate. I think people would quickly get frustrated using it when half the stuff they write gets flagged as something like:
hate speech: .56, violence: .43, self harm: .29
Those numbers in the middle are really ambiguous in my experience.
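For reference, a minimal sketch of where scores like that come from, assuming OpenAI’s moderation endpoint via the official Python SDK (the example input is mine, and the exact field names may vary between SDK versions):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One moderation call returns a 0-1 score per category for the given text.
result = client.moderations.create(input="I don't want to be here anymore.").results[0]
scores = result.category_scores

# Values in the 0.3-0.6 band are exactly the ambiguous middle described above:
# too high to ignore outright, too low to act on automatically.
print("self-harm:", round(scores.self_harm, 2))
print("violence:", round(scores.violence, 2))
print("hate:", round(scores.hate, 2))
```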
If a car’s wheel falls off and it kills its driver, the manufacturer is responsible.
They are being commonly used in functions where a human performing the same task would be a mandated reporter. This is a scenario the current regulations weren’t designed for and a future iteration will have to address it. Lawsuits like this one are the first step towards that.
I agree. However, I do realize that requiring mandated reporting for a jailbroken prompt, like in this specific case, would be impossible given the complexity of human language.
Arguably, you’d have to train an entirely separate LLM to detect anything remotely resembling harmful language, and with the way they train their models, that isn’t possible.
The technology simply isn’t ready to use, and people are vastly unaware of how this AI works.
ChatGPT to a consumer isn’t just an LLM. It’s a software service like Twitter, Amazon, etc., and expectations around safeguarding don’t change just because investors are gooey-eyed about this particular bubbleware.
You can confirm this yourself by asking ChatGPT about things like song lyrics. If there are safeguards for the rich, why not for kids?
The “jailbreak” in the article is the circumvention of the safeguards. Basically, you just find a prompt that gets it to generate text in a context its safeguards don’t cover.
The software service doesn’t prevent ChatGPT from still being an LLM.
There were safeguards here too. They circumvented them by pretending to write a screenplay.
Try it with lyrics and see if you can achieve the same. I don’t think “we’ve tried nothing and we’re all out of ideas!” is the appropriate attitude from LLM vendors here.
Sadly they’re learning from Facebook and TikTok, who make huge profits from, e.g., young girls spiraling into self-harm content and harming or, sometimes, killing themselves. Safeguarding is all lip service here, and it’s setting the tone for treating our youth as disposable consumers.
Try and push a copyrighted song (not covered by their existing deals) though and oh boy, you got some splainin to do!
Try what with lyrics?
Even though ChatGPT is neither of those things, it should definitely not encourage someone to commit suicide.
I agree. But that’s not how these LLMs work.
Don’t blame autocorrect for this. Blame the poor parenting, which is rearing its head once again as the parents blame anything but themselves.
There’s plenty of blame to go around.