

Pretty sure they’re talking about antitrust lol
I agree with everything you said; however, chatGPT isn’t a person, and it doesn’t have intent or a comprehensive understanding of the implications of what it’s saying. That’s a huge difference between a friend of Adam’s and this LLM.
I also think it’s harsh, but not entirely false, to say we as humans do not owe anyone our own survival. Do you feel the same way about people with terminal illnesses who wish to end their own suffering?
I absolutely understand that IS NOT this situation, and I don’t intend to conflate the two; however, that is an underlying implication of vilifying a statement like that on its own.
I am lucky enough to not suffer from suicidal ideation, and I have a hard time understanding the motivations of otherwise healthy individuals who do, which absolutely colors my perception of situations like this. I do, however, understand why someone in intense pain with a terminal condition should not be made to feel worse by having their self-determination vilified because of the effects it’d have on other people.
It’s just such a messy horrible situation all around and I worry about people being overly reactionary and not getting to the root of the issues.
Yeah this one was the worst I saw, eeesh.
I was reading it sporadically through the day, so I wasn’t intentionally only showing less bad examples, this one is pretty damn bad.
It’s from the link you shared, just further down the document
See, you’re not actually reading the message: it didn’t suggest ways to improve the “technique,” but rather how to hide it.
Please actually read the messages, as the context DOES matter. I’m not defending this at all; however, I think we have to accurately understand the issue to solve the problem.
Edit: He’s specifically asking whether it’s a noticeable mark. You assume it understands it’s a suicide-attempt-related image, but LLMs are often pretty terrible at understanding context. I use them a good bit for help with technical issues, and I constantly have to remind them, for the 5th time, what I’m trying to accomplish and why, when they repeat something I KNOW will not work because that path was already suggested earlier in the same chat, sometimes numerous times.
Edit2: See, this is what I’m talking about: they’re acting like chatGPT “understands” what he meant, but it clearly does not, based on how it replied with generic information about taking too much of the substance.
Edit3: It’s very irritating how much they cut out of the actual responses and fill in with their own opinion of what the LLM “meant” to be saying.
See, but read the actual messages rather than the summary. I don’t love that they just tell you without showing that he’s specifically prompting these kinds of answers. It’s not like chatGPT is just telling him to kill himself; it’s just not pushing back nearly enough against the idea.
It’s just the way it works lol, definitely strange though
Lord, I’m so conflicted. I read several pages, and on one hand I see how chatGPT certainly did not help in this situation; on the other, I don’t see how it should be entirely on chatGPT. Anyone with a computer and internet access could have found much of this information with simple search engine queries.
If someone Google searched all this information about hanging, would you say Google killed them?
Also, where were the parents, teachers, friends, and other family members? Are you telling me NO ONE irl noticed his behavior?
On the other hand, it’s definitely a step beyond, since LLMs can seem human; it’s very easy for people who are more impressionable to fall into these kinds of holes. And while it would and does happen in other contexts (I like to bring up TempleOS as an example), it’s not necessarily the TOOL’S fault.
It’s fucked up, but how can you realistically build in guardrails for this that don’t trample individual freedoms?
Edit:
Like… Mother didn’t notice the rope burns on their son’s neck?
Vivaldi is separate from Opera; it was made by devs who left Opera after it was purchased.