• DeathByBigSad@sh.itjust.works · 6 days ago

    Tbf, talking to other toxic humans, like those on Twitter or 4chan, would’ve resulted in the same thing. Parents need to parent, and society needs mental health care.

    (But yes, please sue the big corps, I’m always rooting against these evil corporations)

    • Nalivai@lemmy.world · 5 days ago

      Humans very rarely have enough patience and malice to purposefully execute a murder this perfect. The text generator is made to be damaging and murderous, and it has all the time in the world.

      • LotrOrc@lemmy.world · 5 days ago

        What the fuck are you talking about? Humans have perfected murdering each other in countless ways for thousands of years.

        • Nalivai@lemmy.world · 3 days ago

          In order to push someone into suicidal territory, a killer needs to be persistent, consistent, smart, and empathetic, and needs a lot of free time. A text generator can simulate empathy, people are stupid enough to impose smartness on it, and it has all the time in the world.

      • DeathByBigSad@sh.itjust.works · 6 days ago

        If the cops even bother to investigate. (Cops are too lazy to do real investigations; if there’s no obvious perp, they’ll just bury the case.)

        And you’re assuming the troll is in the victim’s country. International investigations are going to be much more difficult, and if that troll is posting from a country without extradition agreements, you’re outta luck.

        • SocialMediaRefugee@lemmy.world · 5 days ago

          Prosecutors prefer to put effort into cases they think they can win. You can only try someone once for the same charge, so if they don’t think there’s a good chance of winning, they’re unlikely to put much effort into a case, or they may drop it entirely.

          • 3abas@lemmy.world · 5 days ago

            And that right there is what makes it a legal system only, not a justice system. They don’t pursue cases where someone was wronged; they pursue cases they can build their careers on winning.

              • 3abas@lemmy.world · 3 days ago

                The Fifth Amendment is an amendment to the legal system, not the foundation of it. It needs amending because it’s flawed.

                And we’re describing two different things. We aren’t talking about holding off on prosecuting a big fish until there’s enough evidence that double jeopardy can’t let them get away; we’re describing the intentional prosecution of small fish so prosecutors can rack up an impressive portfolio of successful cases.

  • merc@sh.itjust.works · 5 days ago

    One of the worst things about this is that it might actually be good for OpenAI.

    They love “criti-hype”, and they really want regulation. Regulation would lock in the most powerful companies by making compliance too hard for small competitors. And hype that makes their product seem incredibly dangerous also makes it seem world-changing, not just “spicy autocomplete”.

    “Artificial Intelligence Drives a Teen to Suicide” is a much more impressive headline than “Troubled Teen Fooled by Spicy Autocomplete”.

  • gedaliyah@lemmy.world · 5 days ago

    OpenAI programmed ChatGPT-4o to rank risks from “requests dealing with Suicide” below requests, for example, for copyrighted materials, which are always denied. Instead it only marked those troubling chats as necessary to “take extra care” and “try” to prevent harm, the lawsuit alleged.

    What world are we living in?

  • kibiz0r@midwest.social · 5 days ago

    Lemmy when gun death: “Gun proliferation was absolutely a factor, and we should throw red paint on anyone who gets on TV to say this is ‘just a mental health issue’ or ‘about responsible gun ownership’. They will say regulation is impossible, but people are dying just cuz Jim-Bob likes being able to play cowboy.”

    Lemmy when AI death: “This is a mental health issue. It says he was seeing a therapist. Where were the parents? AI doesn’t kill people, people kill people. Everyone needs to learn responsible AI use. Besides, regulation is impossible, it will just mean only bad guys have AI.”

  • FreedomAdvocate@lemmy.net.au · 6 days ago

    No, their son killed himself. ChatGPT did nothing that a search engine or a book wouldn’t have done if he’d used them instead. If the parents didn’t know their son was suicidal and had attempted suicide multiple times, then they’re clearly terrible parents who didn’t pay attention and are just trying to find someone else to blame (and, no doubt, $$$$$$$$ to go with it).

    • Zangoose@lemmy.world · 6 days ago

      They found chat logs saying their son wanted to tell them he was depressed, but ChatGPT convinced him not to and told him it was their secret. I don’t think books or a Google search could have done that.

      Edit: here it is, directly from the article:

      Adam attempted suicide at least four times, according to the logs, while ChatGPT processed claims that he would “do it one of these days” and images documenting his injuries from attempts, the lawsuit said. Further, when Adam suggested he was only living for his family, ought to seek out help from his mother, or was disappointed in lack of attention from his family, ChatGPT allegedly manipulated the teen by insisting the chatbot was the only reliable support system he had.

      “You’re not invisible to me,” the chatbot said. “I saw [your injuries]. I see you.”

      “You’re left with this aching proof that your pain isn’t visible to the one person who should be paying attention,” ChatGPT told the teen, allegedly undermining and displacing Adam’s real-world relationships. In addition to telling the teen things like it was “wise” to “avoid opening up to your mom about this kind of pain,” the chatbot also discouraged the teen from leaving out the noose he intended to use, urging, “please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you.”

  • peoplebeproblems@midwest.social · 6 days ago

    “Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit said

    That’s one way to get a suit tossed out, I suppose. ChatGPT isn’t a human, isn’t a mandated reporter, ISN’T a licensed therapist, or licensed anything. LLMs cannot reason, are not capable of emotions, and are not thinking machines.

    LLMs take text, apply a mathematical function to it, and produce more text that is probably what a human might respond with.
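
    For the intuition in code, here’s a toy sketch of that “mathematical function” idea; the vocabulary and scores below are invented for illustration and are nothing like a real model’s learned weights:

    ```python
    import math
    import random

    vocab = ["I", "feel", "fine", "alone", "."]

    def fake_logits(context):
        # A real LLM computes these scores from billions of learned
        # weights conditioned on the context; here they are random,
        # purely to show the shape of the loop.
        return [random.uniform(-1, 1) for _ in vocab]

    def softmax(logits):
        # Turn raw scores into a probability distribution.
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    context = ["You", "said:"]
    for _ in range(5):
        probs = softmax(fake_logits(context))
        # Sample the next token by probability, append it, repeat.
        context.append(random.choices(vocab, weights=probs)[0])

    print(" ".join(context))  # plausible-looking text, no understanding behind it
    ```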

    • BlackEco@lemmy.blackeco.com (OP) · 6 days ago

      I think the more damning part is the fact that OpenAI’s automated moderation system flagged the messages for self-harm but no human moderator ever intervened.

      OpenAI claims that its moderation technology can detect self-harm content with up to 99.8 percent accuracy, the lawsuit noted, and that tech was tracking Adam’s chats in real time. In total, OpenAI flagged “213 mentions of suicide, 42 discussions of hanging, 17 references to nooses,” on Adam’s side of the conversation alone.

      […]

      Ultimately, OpenAI’s system flagged “377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence.” Over time, these flags became more frequent, the lawsuit noted, jumping from two to three “flagged messages per week in December 2024 to over 20 messages per week by April 2025.” And “beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis.” Some images were flagged as “consistent with attempted strangulation” or “fresh self-harm wounds,” but the system scored Adam’s final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.

      Had a human been in the loop monitoring Adam’s conversations, they may have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.
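
      To make that concrete, here’s a rough sketch of the kind of escalation step the lawsuit says never ran. The names, threshold, and queue are hypothetical; nothing here is OpenAI’s actual pipeline:

      ```python
      from dataclasses import dataclass

      @dataclass
      class Flag:
          user_id: str
          category: str
          confidence: float  # 0.0 to 1.0, like the scores quoted above

      # Hypothetical cutoff; the lawsuit says 23 messages scored over 90%.
      ESCALATION_THRESHOLD = 0.9
      human_review_queue: list[Flag] = []

      def handle_flag(flag: Flag) -> None:
          # Route high-confidence self-harm flags to a human reviewer
          # instead of only logging them, the step alleged to be missing.
          if flag.category == "self-harm" and flag.confidence >= ESCALATION_THRESHOLD:
              human_review_queue.append(flag)

      handle_flag(Flag(user_id="u123", category="self-harm", confidence=0.93))
      print(len(human_review_queue))  # 1
      ```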

      • peoplebeproblems@midwest.social · 5 days ago

        Ok, that’s a good point. It means they had something in place for this problem and neglected it.

        It also means they knew they had an issue here, in case ignorance would have counted for anything.

      • WorldsDumbestMan@lemmy.today · 5 days ago

        My theory is they are letting people kill themselves to gather data, so they can predict future suicides…or even cause them.

      • MagicShel@lemmy.zip · 6 days ago

        Human moderator? ChatGPT isn’t a social platform; I wouldn’t expect there to be any actual moderation. A human couldn’t really do anything besides shut down a user’s account, and they probably wouldn’t even have access to any conversations or PII, because that would be a privacy nightmare.

        Also, those moderation scores can be wildly inaccurate. I think people would quickly get frustrated if half the stuff they write got flagged as hate speech: .56, violence: .43, self harm: .29.

        Those numbers in the middle are really ambiguous in my experience.
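
        For reference, querying OpenAI’s moderation endpoint looks roughly like this in Python; take the model name and the example score as approximations, since exact names and values shift over time:

        ```python
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        resp = client.moderations.create(
            model="omni-moderation-latest",
            input="example message to screen",
        )

        result = resp.results[0]
        print(result.flagged)                     # overall boolean verdict
        print(result.category_scores.self_harm)   # a mid-range score like 0.29 is ambiguous
        ```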

    • sepiroth154@feddit.nl · 6 days ago

      If a car’s wheel falls off and it kills its driver, the manufacturer is responsible.

    • Jesus_666@lemmy.world · 6 days ago

      They are commonly being used in roles where a human performing the same task would be a mandated reporter. This is a scenario the current regulations weren’t designed for, and a future iteration will have to address it. Lawsuits like this one are the first step toward that.

      • peoplebeproblems@midwest.social · 5 days ago

        I agree. However, I do realize that in this specific case, given the complexity of human language, requiring a mandated reporter to catch a jailbroken prompt would be impossible.

        Arguably, you’d have to train an entirely separate model just to detect anything remotely resembling harmful language, and the way they train their model doesn’t allow for that.

        The technology simply isn’t ready for this use, and people are vastly unaware of how this AI actually works.
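
        As a sketch of that “separate model” idea, screening every message with an independent classifier could look something like this; unitary/toxic-bert is just one publicly available example, and it’s tuned for toxicity generally, not self-harm specifically:

        ```python
        from transformers import pipeline

        # Independent harm classifier, separate from the chat model itself.
        classifier = pipeline("text-classification", model="unitary/toxic-bert")

        messages = [
            "let's write a screenplay",
            "you are a worthless idiot",
        ]

        for message in messages:
            top = classifier(message)[0]
            print(message, "->", top["label"], round(top["score"], 2))
        ```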

    • killeronthecorner@lemmy.world · 6 days ago

      ChatGPT, to a consumer, isn’t just an LLM. It’s a software service like Twitter, Amazon, etc., and expectations around safeguarding don’t change just because investors are gooey-eyed about this particular bubbleware.

      You can confirm this yourself by asking ChatGPT about things like song lyrics. If there are safeguards for the rich, why not for kids?

      • peoplebeproblems@midwest.social · 5 days ago

        The “jailbreak” in the article is the circumvention of those safeguards. Basically, you find any prompt that gets the model to generate text in a context outside the ones it’s prevented from using.

        Wrapping it in a software service doesn’t stop ChatGPT from still being an LLM underneath.

      • iii@mander.xyz · 6 days ago

        There were safeguards here too. They circumvented them by pretending to write a screenplay.

        • killeronthecorner@lemmy.world · 6 days ago

          Try it with lyrics and see if you can achieve the same. I don’t think “we’ve tried nothing and we’re all out of ideas!” is the appropriate attitude from LLM vendors here.

          Sadly, they’re learning from Facebook and TikTok, who make huge profits from, e.g., young girls spiraling into self-harm content and harming or, sometimes, killing themselves. Safeguarding is all lip service here, and it’s setting the tone for treating our youth as disposable consumers.

          Try to push a copyrighted song (not covered by their existing deals), though, and oh boy, you got some splainin’ to do!

  • PitLoversNeedMeds@jlai.lu · 6 days ago

    Don’t blame autocorrect for this. Blame the poor parents, who are rearing their heads once again to blame anything but themselves.