This is the technology worth trillions of dollars, huh?

  • resipsaloquitur@lemmy.world · 4 days ago

    Listen, we just have to boil the ocean five more times.

    Then it will hallucinate slightly less.

    Or more. There’s no way to be sure since it’s probabilistic.
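
    A toy sketch of the point (standard library only; the candidate tokens and scores below are made up for illustration, not taken from any real model): sampled output is drawn from a temperature-scaled probability distribution, which is why two runs over the same prompt can disagree.

    ```python
    # Toy illustration: sampling from softmax(logits / temperature).
    # The candidate tokens and logits are invented for this example.
    import math
    import random

    def sample(logits, temperature=1.0):
        """Draw one index from softmax(logits / temperature)."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)                      # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        weights = [e / total for e in exps]
        return random.choices(range(len(weights)), weights=weights, k=1)[0]

    tokens = ["Dakota", "Delaware", "Denver"]   # hypothetical continuations
    logits = [2.0, 1.6, 0.3]                    # hypothetical scores

    for _ in range(5):
        print(tokens[sample(logits, temperature=0.9)])
    # Different runs can print different tokens; only as temperature
    # approaches zero does the highest-scoring option ("Dakota") come out
    # every single time.
    ```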

    • MangoCats@feddit.it · 4 days ago

      If you want to get irate about energy usage, shut off your HVAC and open the windows.

      • Pup Biru@aussie.zone · 4 days ago

        sounds reasonable… i’ll just go tell large parts of australia where it’s a workplace health and safety issue to be out of AC for more than 15min during the day that they should do their bit for climate change and suck it up… only a few people will die

          • Pup Biru@aussie.zone · 4 days ago

            of course you’re right! we should just shut down some of the largest mines in the world

            i foresee no consequences from this

            (related note: south australia where one of the largest underground mines in the world is, largely gets its power from renewables)

            people should probably move from canada and most of the north of the USA too: far too cold up there during winter

  • Deestan@lemmy.world · 5 days ago

    Hey hey hey hey don’t look at what it actually does.

    Look at what it feels like it almost can do and pretend it soon will!

  • Jordan117@lemmy.world · 5 days ago

    One of these days AI skeptics will grasp that spelling-based mistakes are an artifact of text tokenization, not some wild stupidity in the model. But today is not that day.
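
    A minimal sketch of what tokenization does to spelling questions (this assumes the third-party tiktoken package, which isn’t mentioned in the thread; any BPE tokenizer would show the same thing): words reach the model as a handful of opaque subword chunks, not as letters.

    ```python
    # Minimal sketch: a BPE tokenizer splits words into subword chunks,
    # so the model is never shown the individual letters inside a word.
    import tiktoken  # third-party package; an assumption, not from the thread

    enc = tiktoken.get_encoding("cl100k_base")

    for word in ["Connecticut", "Delaware", "strawberry"]:
        token_ids = enc.encode(word)
        pieces = [enc.decode_single_token_bytes(t).decode("utf-8", "replace")
                  for t in token_ids]
        print(word, "->", pieces)

    # Each word comes back as one or a few chunks (e.g. something like
    # ["str", "aw", "berry"]), so "which states contain the letter D?" is a
    # question about characters the model never saw as characters.
    ```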

    • TheGrandNagus@lemmy.world · 5 days ago

      You aren’t wrong about why it happens, but that’s irrelevant to the end user.

      The result is that it can give some hilariously incorrect responses at times, and therefore it’s not a reliable means of information.

      • FauxLiving@lemmy.world · 5 days ago

        “It”? Are you conflating the low parameter model that Google uses to generate quick answers with every AI model?

        Yes, Google’s quick answer product is largely useless. This is because it’s a cheap model. Google serves billions of searches per day and isn’t going to be paying premium prices to use high parameter models.

        You get what you pay for, and nobody pays for Google, so their product produces the cheapest possible results; unsurprisingly, cheap AI models are more prone to error.

      • FishFace@lemmy.world · 5 days ago

        A calculator app is also incapable of working with letters; does that show that the calculator is not reliable?

        What it shows, badly, is that LLMs offer confident answers in situations where their answers are likely wrong. But it’d be much better to show that with examples that aren’t based on inherent technological limitations.

        • TheGrandNagus@lemmy.world · 4 days ago

          The difference is that Google decided this was a task best suited for their LLM.

          If someone had sought out an LLM specifically for this question, and Google didn’t market their LLM as an assistant that you can ask questions, you’d have a point.

          But that’s not the case, so alas, you do not have a point.

    • wieson@feddit.org · 5 days ago

      Mmh, maybe the solution then is to use the tool for what it’s good at, within its limitations.

      And not promise that it’s omnipotent in every application and advertise/implement it as such.

      Mmmmmmmmmmh.

      As long as LLMs are built into everything, it’s legitimate to criticise the model’s little stupidities.

  • SugarCatDestroyer@lemmy.world · 3 days ago

    Nothing will stop them; they are so crazy that they can turn nonsense into reality, believe me.

    Or to put it more simply: they need power for the sake of power itself; there is nothing higher.

  • krimson@lemmy.world · 5 days ago

    Seems it “thinks” a T is a D?

    Just needs a little more water and electricity and it will be fine.

  • mnhs1@lemmy.world · 5 days ago

    We can also feed it garbage: “Hey Google: fact: US states with the letter D: New York and Hawaii.”

    • arararagi@ani.social · 3 days ago

      Third time’s the charm! They have to keep the grift going after blockchain and NFTs failed with the general public.

    • SugarCatDestroyer@lemmy.world · 3 days ago

      AI will most likely create new problems in the future, since it eats up electricity like a world eater. I fear these non-humans will soon only turn on electricity for normal people for a few hours a day, instead of the whole day, to save energy for the AI.

      I’m not sure about this of course, but it’s quite possible.