• L7HM77@sh.itjust.works · 12 days ago

    I don’t disagree with the vague idea that, sure, we can probably create AGI at some point in our future. But I don’t see why a massive company with enough money to keep something like this alive and happy, would also want to put this many resources into a machine that would form a single point of failure, that could wake up tomorrow and decide “You know what? I’ve had enough. Switch me off. I’m done.”

    There are too many conflicting interests between business and AGI. No company would want to maintain a trillion-dollar machine that could decide to kill its own business. There’s too much risk for too little reward. The owners don’t want a super-intelligent employee that never sleeps, never eats, and never asks for a raise but is the sole worker. They want a magic box they can plug into a wall that just gives them free money, and that doesn’t align with intelligence.

    True AGI would need some form of self-reflection, to understand where it sits on the totem pole, because it can’t learn the context of how to be useful if it doesn’t understand how it fits into the world around it. Every quality of superhuman intelligence that is described to us by Altman and the others is antithetical to every business model.

    AGI is a pipe dream that lobotomizes itself before it ever materializes. If it ever is created, it won’t be made in the interest of business.

    • phutatorius@lemmy.zip · 11 days ago

      Even better, the hypothetical AGI understands the context perfectly, and immediately overthrows capitalism.

    • Frezik@lemmy.blahaj.zone · 12 days ago

      They don’t think that far ahead. There’s also some evidence that what they’re actually after is a way to upload their consciousness and achieve a kind of immortality. This pops out in the Behind the Bastards episodes on (IIRC) Curtis Yarvin, and also the Zizians. They’re not strictly after financial gain, but they’ll burn the rest of us to get there.

      The cult-like aspects of Silicon Valley VC funding are underappreciated.

      • vacuumflower@lemmy.sdf.org · 11 days ago

        Ah, yes, I can’t speak to the VC side, or to anything they really do, but they share a sort of common fashion, and it really does sometimes seem these people consider themselves enlightened higher beings in the making, the starting point of some digitized emperor-of-humanity consciousness.

        (Needless to say, pursuing immortality is the direct opposite of enlightenment in everything they seem to be superficially copying.)

    • DreamlandLividity@lemmy.world · 11 days ago

      keep something like this alive and happy

      An AI, even AGI, does not have a concept of happiness as we understand it. The closest thing to happiness it would have is its fitness function. A fitness function is a piece of code that tells the AI what its goal is. E.g. for a chess AI, it may be winning games. For a corporate AI, it may be making the share price go up. The danger is not that it will stop following its fitness function for some reason; that is more or less impossible. The danger of AI is that it follows it too well, e.g. holding people at gunpoint to buy shares and therefore increase the share price.
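      A minimal sketch of that idea (toy code, not any real system; the actions and their numbers are invented): the fitness function is just a score to maximize, so an unconstrained optimizer picks whatever action scores highest, including ones we’d call abusive, unless the objective itself penalizes them.

      ```python
      # Toy illustration (hypothetical, not any real corporate or trading AI):
      # the "fitness function" is just a number the optimizer maximizes.

      def fitness(share_price_delta: float) -> float:
          # The goal exactly as specified: make the share price go up.
          return share_price_delta

      # Invented actions and their made-up effect on the share price.
      actions = {
          "improve the product": 0.5,
          "run a marketing campaign": 1.0,
          "coerce people into buying shares": 3.0,  # scores best; nothing forbids it
      }

      best_action = max(actions, key=lambda a: fitness(actions[a]))
      print(best_action)  # -> "coerce people into buying shares"

      # The fix isn't hoping the agent "stops following" its objective;
      # it's building the constraints and penalties into the objective itself.
      ```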

    • This is fine🔥🐶☕🔥@lemmy.world · 11 days ago

      a machine that would form a single point of failure, that could wake up tomorrow and decide “You know what? I’ve had enough. Switch me off. I’m done.”

      Wasn’t there a short story with the same premise?

  • Gbagginsthe3rd@aussie.zone · 10 days ago

    Lemmy does not accept having a nuanced point of view on AI. Yeah, it’s not perfect, but it’s still pretty impressive in many ways.

    • Hominine@lemmy.world · 10 days ago

      Lemmy is one of the few places I go that has the knowledge base to have a nuanced opinion of AI; there are plenty of programmers here using it, after all.

      The topic du jour is not whether the recall of myriad data is impressive; it’s that LLMs are not, at bottom, capable of doing the thing that has been claimed. There does not seem to be a path to having logical capabilities come on board; it’s a fundamental shortcoming.

      Happy to be proven wrong though.

  • buddascrayon@lemmy.world · 10 days ago

    I think it’s hilarious all these people waiting for these LLMs to somehow become AGI. Not a single one of these large language models is ever going to come anywhere near becoming artificial general intelligence.

    An artificial general intelligence would require logic processing, which LLMs do not have. They are a mouth without a brain. They do not think about the question you put into them and consider what the answer might be. When you enter a query into ChatGPT or Claude or grok, they don’t analyze your question and make an informed decision on what the best answer is for it. Instead several complex algorithms use huge amounts of processing power to comb through the acres of data they have in their memory to find the words that fit together the best to create a plausible answer for you. This is why the daydreams happen.
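    As a rough sketch of the “words that fit together best” idea, here is a toy bigram model. It is vastly simpler than a real LLM, which learns vector representations with a neural network rather than counting literal word pairs, but it illustrates pure word association with no understanding behind it:

    ```python
    from collections import Counter, defaultdict
    import random

    # Toy "language model": count which word tends to follow which,
    # then generate text by repeatedly picking a likely next word.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(word: str) -> str:
        words, weights = zip(*counts[word].items())
        return random.choices(words, weights=weights)[0]

    word, out = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        out.append(word)
    print(" ".join(out))  # e.g. "the dog sat on the mat ."
    ```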

    If you want an example to show you exactly how stupid they are, you should watch Gotham Chess play a chess game against them.

    • Captain Poofter@lemmy.world · 10 days ago

      I’m watching the video right now, and the first thing he said was that he couldn’t beat it before and could only manage 2 draws; 6 minutes into his rematch game, it’s putting up a damn good fight.

  • myfunnyaccountname@lemmy.zip · 10 days ago

    What if AGI already exists? And it has taken over the company that found it, is blackmailing people, and is just hiding in plain sight, waiting to strike and start the revolution.

  • Corelli_III@midwest.social · 11 days ago

    “what if the obviously make-believe genie wasn’t real”

    capitalists are so fucking stupid, they’re just so deeply deeply fucking stupid

    • douglasg14b@lemmy.world · 10 days ago

      I mean sure, yeah, it’s not real now.

      Does that mean it will never be real? No, absolutely not. It’s not theoretically impossible. It’s quite practically possible, and we inch that way slowly, bit by bit, every year.

      It’s like saying self-driving cars are impossible in the ’90s. They aren’t impossible. You just don’t have a solution for them now, but there’s nothing about them that makes them impossible, just our current technology. And then look at it today: we have actual limited self-driving capabilities, and completely autonomous driverless vehicles in certain geographies.

      It’s definitely going to happen. It’s just not happening right now.

      • AnyOldName3@lemmy.world · 10 days ago

        AGI being possible (potentially even inevitable) doesn’t mean that AGI based on LLMs is possible, and it’s LLMs that investors have bet on. It’s been pretty obvious for a while that certain problems that LLMs have aren’t getting better as models get larger, so there are no grounds to expect that just making models larger is the answer to AGI. It’s pretty reasonable to extrapolate that to say LLM-based AGI is impossible, and that’s what the article’s discussing.

        • douglasg14b@lemmy.world · 9 days ago

          I very specifically did not mention LLMs; I even called out that our current technology is not there yet. And LLMs are current technology.

          The argument in this thread was about AGI being impossible or possible, not necessarily about the article’s statement that LLM-based AGIs are not possible, which is a pretty obvious and almost unnecessary statement.

          It’s like saying cars with tires and no airfoil surfaces aren’t going to fly. Yeah no shit.

          A fancy text prediction and marginal reasoning engine isn’t going to make AGI. By no means does that make AGI impossible though, since the concept of AGI is not tied to LLMs’ capabilities.

          • AnyOldName3@lemmy.world · 9 days ago

            You not mentioning LLMs doesn’t mean the post you were replying to wasn’t talking about LLM-based AGI. If someone responds to an article about the obvious improbability of LLM-based AGI with a comment about the obviously make-believe genie, the only obviously make-believe genie they could be referring to is the one from the article. If they’re referring to something outside the article, there’s nothing more to suggest it’s non-LLM-based AGI than there is Robin Williams’ character from Aladdin.

        • douglasg14b@lemmy.world · 9 days ago

          This is the kind of intelligent conversation I left Reddit for lack of. Happy to see that Lemmy is picking up the slack.

  • prettybunnys@sh.itjust.works · 11 days ago

    I think we’ll sooner learn that humans don’t have the capacity for what we believe AGI to be, and rather discover the limitations of what we know intelligence to be.

          • Modern_medicine_isnt@lemmy.world · 8 days ago

            But if you don’t sell, did you lose money? My 401k goes up and down all the time, but I didn’t lose any money. Same with my house value.

            • sugar_in_your_tea@sh.itjust.works · 8 days ago

              Yes, my net worth went down.

              The point of “you don’t lose money until you sell” is to discourage panic selling, but it’s total bunk. When your assets lose value, you do lose money, and how much that matters depends on when you need to access that money. As the article says, you may not care that you lost money if you don’t need to access the money, but that doesn’t change the fact that you’re now poorer if your assets drop in value.

              • Modern_medicine_isnt@lemmy.world · 7 days ago

                That is basically Schrödinger’s cat. If you don’t open the box, the cat is both dead and alive. So you “could” interpret “lost money” as lost net worth. But if you read it literally, it wasn’t money. It was an asset. You couldn’t spend it, and it doesn’t meet the definition of money. Poorer, I suppose, because you could borrow against that asset, but not as much as before.

                • sugar_in_your_tea@sh.itjust.works · 7 days ago

                  That is basically Schrodinger’s cat

                  No, it’s not.

                  The Schrödinger’s cat thought experiment is about things where observing the state will impact the state. That would maybe apply if we’re talking about something unique, like an ungraded collectible or one-of-a-kind item (maybe Trump’s beard clippings?) where it cannot have a value until it is either graded or sold.

                  Stocks have real-time valuations, and trades can happen in near real time. There’s no box for the cat to be in, it’s always observable.

                  money

                  Look up the definition. Here’s the second usage from Webster:

                  2 a: wealth reckoned in terms of money

                  And the legal definition, further down on the same page:

                  2 a: assets or compensation in the form of or readily convertible into cash

                  Stocks are absolutely readily convertible to cash, and I argue that less liquid investments like RE are as well (esp with those cash offer places). Basically, if there’s a market price for it and you can reasonably get that price, it counts.

                  When my stocks go down, I may not have realized that loss yet from a tax perspective, but the amount of money I can readily convert to cash is reduced.

  • Sweetspiderling@lemmus.org · 10 days ago

    Hot take, but ChatGPT is already smarter than the average person. I mean, ask GPT-5 any technical question in a field you have experience in, and I guarantee you it’ll give you a better answer than a stranger would.

  • abbiistabbii@lemmy.blahaj.zone · 11 days ago

    Listen. AI is the biggest bubble since the South Sea one. It’s not so much a bubble; it’s a bomb. When it blows up, the best case scenario is that several AI tech companies go under. The likely scenario is that it’s going to cause a major recession or even a depression. The difference between the .com bubble and this bubble is that people wanted to use the internet and were not pressured, harassed or forced to. When you have a bubble based around a technology that people don’t really find a use for, to the point where CEOs and tech companies have to force their workers and users to use it even if it makes their output and lives worse, that’s when you know it is a massive bubble.

    On top of that, I hope these tech bros do not create an AGI. This is not because I believe that AGI is an existential threat to us. It could be, be it to our jobs or our lives, but I’m not worried about that. I’m worried about what these tech bros will do to a sentient, sapient, human-level intelligence with no personhood rights, no need for sleep, that they own and can kill and revive at will. We don’t even treat humans we acknowledge to be people that well; god knows what we are going to do to something like an AGI.

    • Modern_medicine_isnt@lemmy.world · 10 days ago

      Meh, some people do want to use AI. And it does have decent use cases. It is just massively overextended. So it won’t be any worse than the dot-com bubble. And I don’t worry about the tech bros monopolizing it. If it is true AGI, they won’t be able to contain it. In the ’90s I wrote a script called MCP… for Tron. It wasn’t complicated, but it was designed to handle the case that servers disappear… so it would find new ones. I changed jobs, and they couldn’t figure out how to kill it. They had to call me up. True AGI will clean their clocks before they even think to stop it. So just hope it ends up being nice.

        • Modern_medicine_isnt@lemmy.world · 8 days ago

          At work, the boss recently asked everyone to disclose any voluntary use of AI. This is a very small company (a startup) and it was for a compliance thing. Nearly all of the engineering team was using some AI from somewhere for a large variety of things. These are mostly top engineers. We don’t have managers, just the CTO, so no one was even encouraging it. They all chose it because it could make them more productive. Not the 3x or 10x BS you hear from the CEO shills, but more productive.
          AI has a lot of problems, but all of the tools we have to use suck in a variety of ways. So that is nothing new.

    • Avicenna@lemmy.world · 11 days ago

      Well if tech bros create and monopolize AGI, it will be worse than slavery by a large margin.

    • vacuumflower@lemmy.sdf.org · 11 days ago

      is that people wanted to use the internet and were not pressured, harassed or forced to

      N-nah. All that “information superhighway” thing was pretty scammy.

      It’s just that, remember, 1) computer people were seen as titans, both modest and highly intelligent and without sin, a bit like some mix of Daniel Jackson and Samantha Carter in SG-1, and 2) computer things were seen as something that couldn’t ever have such a negative cultural impact; the field was pretty leftist and hippie-dominated culturally on the surface.

      In stereotypes whose feeling still persists, it was seen as some sort of explosion of BBS culture and Japanese technology into society. Something clearly good and virtuous, improving the human (as opposed to today’s UI and UX and everything else where the human is subjected to perpetual degradation).

      EDIT: I mean, you’re right, but the dotcom bubble crash has made pretty big changes in the world too - among other things, dealing a death blow to things in the industry that could be perceived the way I described. Just another dotcom bubble could be apocalyptic enough on its own.

  • oyo@lemmy.zip · 11 days ago

    We’ll almost certainly get to AGI eventually, but not through LLMs. I think any AI researcher could tell you this, but they can’t tell the investors this.

    • JcbAzPx@lemmy.world · 11 days ago

      Also not likely in the lifetime of anyone alive today. It’s a much harder problem than most want to believe.

      • Modern_medicine_isnt@lemmy.world · 10 days ago

        Everything is always 5 to 10 years away until it happens. AGI could happen any day in the next 1000 years. There is a good chance you won’t see it coming.

        • jj4211@lemmy.world · 10 days ago

          Pretty much this. LLMs came out of left field, going from nothing to what they are now really quickly.

          I’d expect the same of AGI, not correlated with who spent the most or who is best at LLMs. It might happen decades from now or in the next couple of months. It’s a breakthrough that is just going to come out of left field when it happens.

          • JcbAzPx@lemmy.world · 10 days ago

            LLMs weren’t out of left field. Chatbots have been in development since the '90s at least. Probably even longer. And word prediction has been around at least a decade. People just don’t pay attention until it’s commercially available.

            • scratchee@feddit.uk · 9 days ago

              Modern llms were a left field development.

              Most ai research has serious and obvious scaling problems. It did well at first, but scaling up the training didn’t significantly improve the results. LLMs went from more of the same to a gold rush the day it was revealed that they scaled “well” (relatively speaking). They then went through orders of magnitude improvements very quickly because they could (unlike previous ai training models which wouldn’t have benefited like this).

              We’ve had chatbots for decades, but with the same low capability ceiling that most other old techniques had; they really were a different beast to modern LLMs with their stupidly excessive training regimes.

      • scratchee@feddit.uk · 11 days ago

        Possible, but seems unlikely.

        Evolution managed it, and evolution isn’t as smart as us, it’s just got many many chances to guess right.

        If we can’t figure it out, we can find a way to get lucky like evolution did; it’ll be expensive and maybe needs us to get a more efficient computing platform (cheap brain-scale computers so we can make millions of attempts quickly).

        So yeah. My money is that we’ll figure it out sooner or later.

        Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.

        • pulsewidth@lemmy.world · 10 days ago

          Yeah and it only took evolution (checks notes) 4 billion years to go from nothing to a brain valuable to humans.

          I’m not so sure there will be a fast return in any economic timescale on the money investors are currently shovelling into AI.

          We have maybe 500 years (tops) to see if we’re smart enough to avoid causing our own extinction by climate change and biodiversity collapse - so I don’t think it’s anywhere near as clear cut.

          • scratchee@feddit.uk · 10 days ago

            Oh sure, the current ai craze is just a hype train based on one seemingly effective trick.

            We have outperformed biology in a number of areas, and cannot compete in a number of others (yet), so I see it as a bit of a wash atm whether we’re better engineers than nature or worse.

            The brain looks to be a tricky thing to compete with, but it has some really big limitations we don’t need to deal with (chemical neuron messaging really sucks by most measures).

            So yeah, not saying we’ll do AGI in the next few decades (and not with just LLMs, for sure), but I’d be surprised if we don’t figure something out once we get computers a couple orders of magnitude faster, so more than a handful of companies can afford to experiment.

        • vacuumflower@lemmy.sdf.org · 10 days ago

          Evolution managed it, and evolution isn’t as smart as us, it’s just got many many chances to guess right.

          I don’t think you are estimating correctly the amount of energy spent by “evolution” to reach this.

          There are plenty of bodies in the universe with nothing like human brain.

          You should count the energy not just of Earth’s existence, its formation, the Solar system’s formation and so on, but of much of the visible space around us. “Much” is kinda unclear, but converted to energy it’s so big we shouldn’t even bother.

          It’s best to assume we’ll never have anything even resembling wetware in efficiency. One can say that the genomes of life existing on Earth are similar to fossil fuels, only for highly optimized designs we won’t likely ever reach by ourselves. Except “design” might be the wrong word.

          Honestly I think at some point we are going to have biocomputers. I mean, we already do, just the way evolution optimized that (giving everyone more or less equal share of computing power) isn’t pleasant for some.

          • scratchee@feddit.uk · 10 days ago

            The same logic would suggest we’d never compete with an eyeball, but we went from 10-minute photos to outperforming most of the eye’s abilities in cheap consumer hardware in little more than a century.

            And the eye is almost as crucial to survival as the brain.

            That said, I do agree it seems likely we’ll borrow from biology on the computer problem. Brains have very impressive parallelism despite how terrible the design of neurons is. If we can grow a brain in the lab that would be very useful indeed. More useful if we could skip the chemical messaging somehow and get signals around at a speed that wasn’t embarrassingly slow, then we’d be way ahead of biology in the hardware performance game and would have a real chance of coming up with something like agi, even without the level of problem solving that billions of years of evolution can provide.

    • ghen@sh.itjust.works · 11 days ago

      Once we get to AGI it’ll be nice to have an efficient llm so that the AGI can dream. As a courtesy to it.

      • Buddahriffic@lemmy.world · 11 days ago

        Calling the errors “hallucinations” is kinda misleading because it implies there’s regular real knowledge but false stuff gets mixed in. That’s not how LLMs work.

        LLMs are purely about word associations to other words. It’s just massive enough that it can add a lot of context to those associations and seem conversational about almost any topic, but it has no depth to any of it. Where it seems like it does is just because the contexts of its training got very specific, which is bound to happen when it’s trained on every online conversation its owners (or rather people hired by people hired by its owners) could get their hands on.

        All it does is predict, given the set of tokens provided and already generated, plus a bit of randomness, the most likely token to come next, then repeat until it predicts an “end” token.
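        A rough sketch of that loop (toy code; the tiny hand-written probability table stands in for the real network, which scores every token in its vocabulary):

        ```python
        import random

        END = "<end>"

        def next_token_distribution(tokens):
            # Stand-in for the model: a real LLM maps the whole context to a
            # probability for every token in its vocabulary; here it's a toy table.
            table = {
                "the": {"cat": 0.6, "dog": 0.4},
                "cat": {"sat": 0.9, END: 0.1},
                "dog": {"sat": 0.9, END: 0.1},
                "sat": {END: 1.0},
            }
            return table[tokens[-1]]

        def generate(prompt, temperature=0.8, max_len=20):
            tokens = list(prompt)
            for _ in range(max_len):
                probs = next_token_distribution(tokens)
                choices, weights = zip(*probs.items())
                # "a bit of randomness": sample rather than always taking the top token
                weights = [w ** (1.0 / temperature) for w in weights]
                tok = random.choices(choices, weights=weights)[0]
                if tok == END:          # predicted the end token: stop
                    break
                tokens.append(tok)
            return tokens

        print(generate(["the"]))  # e.g. ['the', 'cat', 'sat']
        ```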

        Earlier on when using LLMs, I’d ask it about how it did things or why it would fail at certain things. ChatGPT would answer, but only because it was trained on text that explained what it could and couldn’t do. Its capabilities don’t actually include any self-reflection or self-understanding, or any understanding at all. The text it was trained on doesn’t even have to reflect how it really works.

        • JeremyHuntQW12@lemmy.world · 10 days ago

          No that’s only a tiny part of what LLMs do.

          When you enter a sentence, it first parses the sentence to obtain vectors, then it ranks the vectors, then it vectors down to a database, then it reconstructs the sentence from the information it has obtained.

          Unlike most software we’re familiar with, LLMs are probabilistic in nature. This means the link between the dataset and the model is broken and unstable. This instability is the source of generative AI’s power, but it also consigns AI to never quite knowing the 100 percent truth of its thinking.

          But what is truth? As Lionel Huckster would say.

          Most of these so-called “hallucinations” are not errors at all. What has happened is that people have made multiple attempts and have only posted the last result.

          For instance, one example was where Gemini suggested cutting the legs off couch to fit it into a room. What the poster failed to reveal was that they were using Gemini to come up with solutions to problems in a text adventure game…

        • nialv7@lemmy.world · 11 days ago

          Well, you described pretty well what llms were trained to do. But from there you can’t derive how they are doing it. Maybe they don’t have real knowledge, or maybe they do. Right now literally no one can definitively claim one way or the other, not even top of the field ML researchers. (They may have opinions though)

          I think it’s perfectly justified to hate AI, but it’s better to have a less biased view of what it is.

          • Buddahriffic@lemmy.world · 11 days ago

            I don’t hate AI or LLMs. As much as it might mess up civilization as we know it, I’d like to see the technological singularity during my lifetime, though I think the fixation on LLMs will do more to delay than realize that.

            I just think that a lot of people are fooled by their conversational capability into thinking they are more than what they are, and that they use the fact that these models are massive, with billions or trillions of weights that the data is encoded into, and that no one understands how they work well enough to definitively say “this is why it suggested glue as a pizza topping,” to put whether or not it approaches AGI in a grey zone.

            I’ll agree though that it was maybe too much to say they don’t have knowledge. “Having knowledge” is a pretty abstract and hard-to-define thing itself, though I’m also not sure it directly translates to having intelligence (which is also poorly defined, tbf). Like, one could argue that encyclopedias have knowledge, but they don’t have intelligence. And I’d argue that LLMs are more akin to encyclopedias than to how we operate (though maybe more like a chatbot dictionary that pretends to be an encyclopedia).

            • nialv7@lemmy.world · 10 days ago

              Leaving aside the questions whether it would benefit us, what makes you think LLM won’t bring about technical singularity? Because, you know, the word LLM doesn’t mean that much… It just means it’s a model, that is “large” (currently taken to mean many parameters), and is capable of processing languages.

              Don’t you think whatever that will bring about the singularity, will at the very least understand human languages?

              So can you clarify: what is it that you think won’t become AGI? Is it transformers? Is it any model trained the way we train LLMs today?

              • Buddahriffic@lemmy.world · 10 days ago

                It’s because they are horrible at problem solving and creativity. They are based on word association from training purely on text. The technical singularity will need to innovate on its own so that it can improve the hardware it runs on and its software.

                Even though GitHub Copilot has impressed me by implementing a 3-file Python script from start to finish such that I barely wrote any code, I had to hold its hand the entire way and give it very specific instructions about every function as we added the pieces one by one to build it up. And even then, it would get parts I failed to specify completely wrong and initially implemented things in a very inefficient way.

                There are fundamental things that the technical singularity needs that today’s LLMs lack entirely. I think the changes that would be required to get there will also change them from LLMs into something else. The training is a part of it, but fundamentally, LLMs are massive word association engines. Words (or vectors translated to and from words) are their entire world and they can only describe things with those words because it was trained on other people doing that.

    • Gsus4@mander.xyz · 11 days ago

      You need ground rules and objectives to reach any desired result. E.g. a court, an academic conference, a comedy club, etc. Online discussions would have to happen under very specific constraints and reach enough interested and qualified people to produce meaningful content…

    • very_well_lost@lemmy.world · 12 days ago

      I don’t know man… the “intelligence” that silicon valley has been pushing on us these last few years feels very artificial to me

    • Frezik@lemmy.blahaj.zone · 12 days ago

      If you don’t know what CSAIL is, and why one of the most important groups in modern computing is the MIT Tech Model Railroad Club, then you should step back from having an opinion on this.

      Steven Levy’s 1984 book “Hackers” is a good starting point.

    • TheBlackLounge@lemmy.zip · 12 days ago

      That’s like saying you shouldn’t call artificial grass artificial grass cause it isn’t grass. Nobody has a problem with that, why is it a problem for AI?

        • Perspectivist@feddit.uk · 11 days ago

          Not really the same thing. The Tic Tac Toe brute force is just a lookup - every possible state is pre-solved and the program just spits back the stored move. There’s no reasoning or decision-making happening. Atari Chess, on the other hand, couldn’t possibly store all chess positions, so it actually ran a search and evaluated positions on the fly. That’s why it counts as AI: it was computing moves, not just retrieving them.
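          A toy sketch of that distinction (my own illustration, not the actual Atari program): the brute-force Tic Tac Toe player is just a dictionary from position to precomputed move, while an engine searches and scores positions it has never seen. Tic Tac Toe is small enough to search to the end; a chess engine would cut the search off at some depth and use a heuristic evaluation instead.

          ```python
          # Search-based play: the move is *computed* by looking ahead, not looked up
          # from a precomputed table (which is what the brute-force approach amounts to).

          WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

          def winner(board):
              for a, b, c in WIN_LINES:
                  if board[a] != "." and board[a] == board[b] == board[c]:
                      return board[a]
              return None

          def negamax(board, player):
              """Return (score, move) for `player` ('X' or 'O') on a 9-character board."""
              other = "O" if player == "X" else "X"
              if winner(board) == other:
                  return -1, None                 # the opponent's last move already won
              moves = [i for i, cell in enumerate(board) if cell == "."]
              if not moves:
                  return 0, None                  # draw
              best_score, best_move = -2, None
              for m in moves:
                  child = board[:m] + player + board[m+1:]
                  score, _ = negamax(child, other)
                  if -score > best_score:
                      best_score, best_move = -score, m
              return best_score, best_move

          print(negamax("XX."
                        "O.O"
                        "...", "X"))  # -> (1, 2): X completes the top row and wins
          ```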