

I would say it’s more liable than a Google search, because the kid was uploading pictures of various attempts and details and getting feedback specific to his situation.
He uploaded pictures of failed attempts and got advice on how to improve his technique. He also discussed prescription dosages, including what he had taken and how much.
Yeah, you can find info on Google, but if you send Google a picture of ligature marks on your neck from a partial hanging, Google doesn’t give you specific details on how to finish the job.
Some of it is buried in the text and not laid out in a conversational format. There are several times where ChatGPT did give him feedback on actual techniques.
For some reason, I can’t copy and paste, but at the bottom of page 12 and the top of page 13, the filing describes Adam and ChatGPT discussing which items would work best for hanging himself, including what could serve as a solid anchor point and how much weight a jiu-jitsu belt could support.
It explained the mechanics of hanging, with detailed information on the time windows for unconsciousness and brain death.
They also actively discussed what dosage of amitriptyline would be deadly, including details about how much Adam had already taken.
That’s why, given the information in the filing, I think ChatGPT is blatantly responsible. I think the shock is the hypocrisy: OpenAI claims to develop AI ethically, yet its safeguards were weak enough for a child to get around them.
It feels akin to a bleach company claiming its cap is child-safe when it really just has a different shape and no childproofing at all.