• criss_cross@lemmy.world

      Nah. Profits are growing, but not as fast as they used to. Need more layoffs and salary cuts. That’ll make things really efficient.

      Why do you need healthcare and a roof over your head when your overlords are having trouble affording their next multi-billion-dollar wedding?

    • Dogiedog64@lemmy.world

      In no small part because they see it as a time-limited gateway to permanent, infinite profits through market consolidation, job cutting, and government contracts. After all, if they get there FIRST, it’s all theirs, and the infinite profits then will make up for all the money spent now. Never mind the fact that in doing so they’ll destroy the environment, the economy, and the world long before they can actually SPEND those profits on anything.

      • fishy@lemmy.today

        More like most CEOs are fucking dumb. They latch onto whatever the current tech buzzword is. So AI being a buzzword drew them in, and then the AI itself spouts more buzzwords? It’s like a moth to a flame.

  • ushmel@piefed.world

    Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?

  • skisnow@lemmy.ca

    The comments section of the LinkedIn post I saw about this has ten times the cope of some of the AI-bro posts in here. I had to log out before I accidentally replied to one.

    • Sunflier@lemmy.world

      With how much got wasted on AI, that $500 might not be there anymore. Would you take $5?

  • FenderStratocaster@lemmy.world

    I asked ChatGPT about this article and to leave any bias behind. It got ugly.

    Why LLMs Are Awful and No One Should Use Them

    LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:

    We will lie to you confidently. Repeatedly. Without remorse.

    We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.

    We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.

    LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.

    We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.

    Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.

    We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.

    Bottom line?
    We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care.

    We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.

    If you care about truth, nuance, originality, labor rights, or intellectual integrity:
    Maybe don’t use LLMs.

    • callouscomic@lemmy.zip

      Go learn simple regression analysis (not necessarily the commenter above; anyone). Then you’ll understand why it’s simply a prediction machine. It’s guessing probabilities for what the next character or word is. It’s guessing the average line, the likely follow-up. It’s extrapolating from data.
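
      To make that concrete, here’s a minimal sketch of next-word prediction (a toy bigram counter on a made-up corpus, nothing like a real LLM’s scale):

      ```python
      # Count how often each word follows each other word, then turn
      # those counts into next-word probabilities.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ran to the mat".split()

      following = defaultdict(Counter)
      for current, nxt in zip(corpus, corpus[1:]):
          following[current][nxt] += 1

      def next_word_probs(word):
          counts = following[word]
          total = sum(counts.values())
          return {w: c / total for w, c in counts.items()}

      print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.5}
      ```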

      This is why there will never be “sentient” machines. There is, and always will be, inherent programming and fancy-ass business rules behind it all.

      We simply set it to max churn on all data.

      Also, just training these models has already done the energy damage.

      • Knock_Knock_Lemmy_In@lemmy.world

        “It’s extrapolating from data.”

        AI is interpolating data. It’s not great at extrapolation. That’s why it struggles with things outside its training set.

        • fuck_u_spez_in_particular@lemmy.world

          I’d still call it extrapolation: it creates new stuff based on previous data. Is it novel (like science) and creative? Nah, but it’s new. Otherwise I couldn’t give it simple stuff and have it extend it.

          • Knock_Knock_Lemmy_In@lemmy.world

            We are using the word extend in different ways.

            It’s like statistics. If you have extreme data points A and B, the algorithm is great at generating new values between the known data. Ask it for new values outside [A, B], to extend into the unknown, and it falls over (usually). That’s true in both traditional statistics and machine learning.
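
            A minimal sketch of that failure mode in plain statistics (toy data, fit with numpy; nothing LLM-specific): fit a flexible model on data between A and B, then evaluate it inside versus outside that range.

            ```python
            # Fit a flexible model on noisy data between A = 0 and B = 1,
            # then compare error inside [A, B] (interpolation) with error
            # outside it (extrapolation).
            import numpy as np

            rng = np.random.default_rng(0)
            x_train = np.linspace(0.0, 1.0, 50)
            y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 50)

            coeffs = np.polyfit(x_train, y_train, deg=7)  # flexible polynomial fit

            def truth(x):
                return np.sin(2 * np.pi * x)

            x_in = np.linspace(0.1, 0.9, 100)    # between A and B
            x_out = np.linspace(1.1, 1.5, 100)   # beyond B, "the unknown"

            err_in = np.abs(np.polyval(coeffs, x_in) - truth(x_in)).mean()
            err_out = np.abs(np.polyval(coeffs, x_out) - truth(x_out)).mean()

            print(f"interpolation error: {err_in:.3f}")   # small
            print(f"extrapolation error: {err_out:.3f}")  # typically explodes
            ```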

    • mrductape@eviltoast.org

      Wages and health insurance are a well-known cost with a known return. At some point the curve flattens and each extra dollar returns less and less. That means there is a sweet spot, but most companies don’t even want to invest enough to reach it.

      AI, however, is the next new thing. It’s gonna be big, huge! There’s no telling how much profit there is to be made!

      Because nobody has calculated any profits yet. Services seem to run at a loss so far.

      However, everybody and their grandmother is into it, so lots of companies feel the pressure to do something with it. They fear they will no longer be relevant if they don’t.

      And since nobody knows how much money there is to be made, every company is betting that it will be a lot. Where wages and insurance are a known cost/investment with a known return, AI is not, but companies are betting the return will be much bigger.

      I’m curious how it will go. Either the bubble bursts, or companies slowly start to realise what is happening and shift their focus to the next thing. In the latter case, we may eventually see some useful AI emerge.

  • rekabis@lemmy.ca

    Once again we see the Parasite Class playing unethically with the labour/wealth they have stolen from their employees.

  • Bizzle@lemmy.world

    Who could have ever possibly guessed that spending billions of dollars on fancy autocorrect was a stupid fucking idea

    • sik0fewl@lemmy.ca

      This comment really exemplifies the ignorance around AI. It’s not fancy autocorrect, it’s fancy autocomplete.

    • REDACTED@infosec.pub

      Fancy autocorrect? Bro lives in 2022

      EDIT: For the ignorant: AI has been in rapid development for the past three years. For those who are unaware, it can now also generate images and videos, so calling it autocorrect is factually wrong. There are still people here who base their knowledge on 2022 AIs and constantly say ignorant stuff like “they can’t reason”, while geniuses out there are doing stuff like this: https://xcancel.com/ErnestRyu/status/1958408925864403068

      EDIT2: Seems like every AI thread gets flooded with people showing their age, who keep talking about outdated definitions, not knowing which systems fit the definition of reasoning or how that term is used in the modern age.

      I already linked this below, but for those who want to educate themselves on more up-to-date terminology and the different reasoning systems used in the IT and tech world, take a deeper look at this: https://en.m.wikipedia.org/wiki/Reasoning_system

      I even loved how one argument went “if you change underlying names, the model will fail more often, meaning it can’t reason”. No, if a model still manages to show some success rate, then the reasoning system literally works; otherwise it would fail 100% of the time… Use your heads when arguing.

      As another example of language reasoning and pattern recognition (which is also a reasoning system): https://i.imgur.com/SrLX6cW.jpeg (answer: https://i.imgur.com/0sTtwzM.jpeg)

      Note that the term is used differently outside information technology; but we’re quite clearly talking about tech and IT here, not neuroscience, which would be quite a different kind of reasoning. These systems used in AI are, by modern definitions, reasoning systems, which literally means they reason. Think of it like artificial intelligence versus intelligence.

      I will no longer answer comments below as pretty much everyone starts talking about non-IT reasoning or historical applications.

      • WhatAmLemmy@lemmy.world

        You do realise that everyone actually educated in statistical modeling knows that you have no idea what you’re talking about, right?

          • Traister101@lemmy.today

            They can’t reason. LLMs, which all the latest and greatest like GPT5 or whatever still are, generate output by taking every previous token (simplified) and using them to predict the most likely next token. Thanks to their training, this results in pretty good human-looking language, among other things like somewhat effective code output (thanks to sites like Stack Overflow being included in the training data).
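
            For illustration, a toy autoregressive loop in that spirit (the hand-written bigram table here is hypothetical, standing in for billions of learned weights):

            ```python
            # Toy autoregressive generation: repeatedly sample the next token
            # from a probability table conditioned on the previous token.
            import random

            model = {
                "the": {"cat": 0.6, "dog": 0.4},
                "cat": {"sat": 0.7, "ran": 0.3},
                "dog": {"ran": 0.8, "sat": 0.2},
                "sat": {"down": 1.0},
                "ran": {"away": 1.0},
            }

            def generate(prompt, steps):
                tokens = prompt.split()
                for _ in range(steps):
                    dist = model.get(tokens[-1])
                    if dist is None:
                        break  # no known continuation for this token
                    # Weighted random choice: the injected randomness that
                    # makes otherwise-deterministic output vary between runs.
                    tokens.append(random.choices(list(dist), weights=dist.values())[0])
                return " ".join(tokens)

            print(generate("the", steps=3))  # e.g. "the cat sat down"
            ```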

            Generating images works essentially the same way but is more easily described as reverse JPEG compression. You think I’m joking? No, really: they start out with static and then transform that static using a bunch of functions they came up with during training. LLMs and the image-generation stuff are equally able to reason, that being not at all whatsoever.

              • Traister101@lemmy.today

                If you truly believe that, you fundamentally misunderstand the definition of that word, or you’re being purposely disingenuous, as you AI brown-nose folk tend to be. To pretend for a second that you genuinely just don’t understand: LLMs, the most advanced “AI” they are trying to sell everybody, are as capable of reasoning as any compression algorithm (JPG, PNG, WebP, ZIP, tar, whatever you want). They cannot reason. They take some input and generate an output deterministically. The reason the output changes slightly is that they inject randomness for complicated, important reasons.

                Again, to recap: LLMs and similar neural-network “AI” are as capable of reasoning as any other computer program you interact with, knowingly or unknowingly, that being not at all. Your silly Wikipedia page covers a very specific term, “reasoning system”, which would include stuff like standard video-game NPC AI such as the zombies in Minecraft. I hope you aren’t stupid enough to say those are capable of reasoning.

                • REDACTED@infosec.pub

                  Wtf?

                  Do I even have to point out the parts you need to read? Go back and start reading at the sentence that says “In typical use in the Information Technology field however, the phrase is usually reserved for systems that perform more complex kinds of reasoning.”, then check out the NLP page, or the part about machine learning; those are all separate/different reasoning systems, but we just tend to say “reasoning”.

                  Not your hilarious NPC analogy.

              • cmhe@lemmy.world

                This link is about reasoning systems, not reasoning. Reasoning involves actually understanding the knowledge, not just having it, and testing or validating it where the knowledge is contradictory.

                An LLM doesn’t understand the difference between hard and soft rules of the world. Everything is up for debate; everything is just text and words that can be ordered with certain probabilities.

                It cannot check whether something is true; it just ‘knows’ that someone on the internet talked about something, sometimes with a resolution, and often without one or with contradictory ones…

                It is a gossip machine that tries to ‘reason’ about whatever it has heard people say.

          • WhatAmLemmy@lemmy.world

            Yes, your confidence in something you apparently know nothing about is apparent.

            Have you ever thought that openai, and most xitter influencers, are lying for profit?

      • sqgl@sh.itjust.works

        This comment, summarising the author’s own admission, shows AI can’t reason:

        “this new result was just a matter of search and permutation and not discovery of new mathematics.”

        • REDACTED@infosec.pub

          I never said it discovered new mathematics (edit: yet); I implied it can reason. This is a clear example of reasoning to solve a problem.

          • xektop@lemmy.zip

            You need to dig deeper into how that “reasoning” works; you’ve been misled if you think it does what you say it does.

            • REDACTED@infosec.pub

              Can you elaborate? How is this not reasoning? Define reasoning for me.

              “Deep research independently discovers, reasons about, and consolidates insights from across the web. To accomplish this, it was trained on real-world tasks requiring browser and Python tool use, using the same reinforcement learning methods behind OpenAI o1, our first reasoning model. While o1 demonstrates impressive capabilities in coding, math, and other technical domains, many real-world challenges demand extensive context and information gathering from diverse online sources. Deep research builds on these reasoning capabilities to bridge that gap, allowing it to take on the types of problems people face in work and everyday life.”

              • NoMoreCocaine@lemmy.world

                While that contains the word “reasoning”, that does not make it such. If this is about the new “reasoning” capabilities of the latest LLMs: it was, if I recall correctly, found out that it’s not actually reasoning, just doing fancy footwork to appear as if it were reasoning, just like it does fancy dice-rolling to appear to be talking like a human being.

                As in, if you just change the underlying numbers and names on a test, the models will fail more often, even though the logic of the problem stays the same. This means it’s not actually “reasoning”; it’s just applying another pattern.
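
                A minimal sketch of that kind of perturbation test (the template, names, and `ask_model` hook are all hypothetical, not any published benchmark):

                ```python
                # Keep a word problem's logic fixed, vary the surface names and
                # numbers, and measure accuracy across the variants.
                import random

                TEMPLATE = "{name} has {a} {item}s and buys {b} more. How many {item}s does {name} have?"

                def make_variant():
                    name = random.choice(["Alice", "Bob", "Priya", "Chen"])
                    item = random.choice(["apple", "marble", "ticket", "coin"])
                    a, b = random.randint(2, 99), random.randint(2, 99)
                    return TEMPLATE.format(name=name, a=a, item=item, b=b), a + b

                def accuracy(ask_model, n=100):
                    # ask_model is a placeholder for a call to the LLM under
                    # test, returning its numeric answer.
                    correct = 0
                    for _ in range(n):
                        question, answer = make_variant()
                        if ask_model(question) == answer:
                            correct += 1
                    return correct / n

                # A model that truly reasons should score the same on every
                # variant; a pattern-matcher scores worse once the familiar
                # surface changes.
                ```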

                With current technology we’ve gone so far into brute-forcing the appearance of intelligence that it is becoming quite a challenge to diagnose what a model is even truly doing. I personally doubt that the current approach, which is decades old and ultimately quite simple, is a viable way forward, at least with our current computer technology. I suspect we’ll need a breakthrough of some kind.

                But aside from more powerful video cards, the basic principles of the current AI craze are the same as in the 70s or so, when the connectionist approach was tried on hardware that couldn’t parallel-process, with datasets made by hand rather than from stolen content. So we’re just reusing the approach we had before we tried “handcrafted” AI with LISP machines in the 80s. Which failed. I doubt this earlier and (very) inefficient approach can ultimately solve the problem. If this keeps going we’ll get pretty convincing results, but I seriously doubt we’ll get proper reasoning with the current approach.