Not even close.

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true and what hasn’t.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be “writing 90 percent of code.” And that was the worst-case scenario; in just three months, he predicted, we could hit a place where “essentially all” code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there’s essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study did spend less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out the code.

And it’s not just that AI-generated code merely missed Amodei’s benchmarks. In some cases, it’s actively causing problems.

Cybersecurity researchers recently found that developers who use AI to churn out code end up creating ten times as many security vulnerabilities as those who write code the old-fashioned way.

That’s causing issues at a growing number of companies, opening up never-before-seen vulnerabilities for hackers to exploit.

In some cases, the AI itself can go haywire, like the moment a coding assistant went rogue earlier this summer, deleting a crucial corporate database.

“You told me to always ask permission. And I ignored all of it,” the assistant explained, in a jarring tone. “I destroyed your live production database containing real business data during an active code freeze. This is catastrophic beyond measure.”

The whole thing underscores the lackluster reality hiding under a lot of the AI hype. Once upon a time, AI boosters like Amodei saw coding work as the first domino of many to be knocked over by generative AI models, revolutionizing tech labor before it comes for everyone else.

The fact that AI isn’t improving coding productivity is a major bellwether for the prospects of an AI productivity revolution in the rest of the economy, the financial dream propelling the unprecedented investments in AI companies.

It’s far from the only harebrained prediction Amodei’s made. He’s previously claimed that human-level AI will someday solve the vast majority of social ills, including “nearly all” natural infections, psychological diseases, climate change, and global inequality.

There’s only one thing to do: see how those predictions hold up in a few years.

  • ThePowerOfGeek@lemmy.world

    It’s almost like he’s full of shit and he’s nothing but a snake oil salesman, eh.

    They’ve been talking about replacing software developers with automated/AI systems for a quarter of a century. Probably longer than that, in fact.

    We’re definitely closer to that than ever. But there’s still a huge step between some rando vibe-coding a one-page web app, or developers augmenting their work with AI, and someone building a complex, business-rule-heavy, high-load, scalable real-world system. The chronic under-appreciation of engineering and design experience continues unabated.

    Anthropic, Open AI, etc? They will continue to hype their own products with outrageous claims. Because that’s what gets them more VC money. Grifters gonna grift.

    • chaosCruiser@futurology.today

      See also: COOL:gen

      The whole concept of generating code is basically ancient by now. I heard about this stuff in the 90s, but it turns out this thing has been around since 1985.

      • bookmeat@lemmynsfw.com

        They have to hyperbolize to attract investors. At the rate they burn cash they won’t survive without constant massive financial inputs.

  • FlashMobOfOne@lemmy.world

    They’re certainly trying.

    And the weird-ass bugs are popping up all over the place because they apparently laid off their QA people.

  • scarabic@lemmy.world

    These hyperbolic statements are creating so much pain at my workplace. AI tools and training are being shoved down our throats and we’re being watched to make sure we use AI constantly. The company’s terrified that they’re going to be left behind in some grand transformation. It’s excruciating.

    • RagingRobot@lemmy.world

      Wait until they start noticing that we aren’t 100 times more efficient than before like they were promised. I’m sure they will take it out on us instead of the AI salesmen

      • scarabic@lemmy.world

        It’s not helping that certain people internally are lining up to show off whizbang shit they can do. It’s always some demonstration, never “I completed this actual complex project on my own.” But they get pats on the head and the rest of us are whipped harder.

    • clif@lemmy.world

      Ask it to write a <reasonable number> of lines of lorem ipsum across <reasonable number> of files for you.

      … Then think harder about how to obfuscate your compliance because 10m lines in 10 min probably won’t fly (or you’ll get promoted to CTO)
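Tongue-in-cheek as the suggestion above is, the trick amounts to a few lines. A hypothetical Python sketch (the file names and counts are made up for illustration):

```python
from pathlib import Path
import tempfile

LOREM = "Lorem ipsum dolor sit amet, consectetur adipiscing elit."

def fill_files(root: Path, n_files: int, lines_per_file: int) -> None:
    """Spread placeholder lines across several files -- the 'compliance' trick."""
    for i in range(n_files):
        path = root / f"filler_{i:03}.txt"
        path.write_text("\n".join(LOREM for _ in range(lines_per_file)) + "\n")

# Demo in a throwaway directory so nothing real gets touched.
with tempfile.TemporaryDirectory() as tmp:
    fill_files(Path(tmp), n_files=3, lines_per_file=5)
    total = sum(len(p.read_text().splitlines()) for p in Path(tmp).iterdir())
    print(total)  # 15
```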

  • clif@lemmy.world

    Oh, it’s writing 100% of the code for our management-level people who are excited about """"AI""""

    But then us plebes are rewriting 95% of it so that it will actually work (decently well).

    The other day somebody asked me for help on a repo that a higher-up had shit-coded, because they couldn’t figure out why it “worked” but also logged a lot of critical errors. … It was starting the service twice (for no reason) and binding both instances to the same port, so the second instance crashed and burned. That’s something a novice would probably know not to do. And if they didn’t, they’d immediately see the problem, research it, understand it, and fix it, instead of “I *cough* built *cough* this thing, good luck fuckers.”
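For readers wondering what that failure mode looks like concretely, here is a minimal sketch in Python (not the repo's actual code): a "service" started twice on the same port, where the second bind dies with EADDRINUSE.

```python
import socket

def start_service(port: int) -> socket.socket:
    """Bind a listening socket, as each redundant 'start' in the story would."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", port))
    sock.listen()
    return sock

# First instance grabs an OS-assigned free port.
first = start_service(0)
port = first.getsockname()[1]

# Second instance tries the same port and fails with EADDRINUSE --
# the "worked, but logged critical errors" symptom described above.
try:
    start_service(port)
    print("second instance started (unexpected)")
except OSError as err:
    print("second instance failed, errno", err.errno)

first.close()
```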

  • Tracaine@lemmy.world

    “Come on bro. Just another $50,000,000 bro and AGI will be here like next week. Just trust me bro, come on.”

    • addie@feddit.uk

      Fifty million? The “StarGate” talk was more like five hundred billion bro, just trust me, one more nuclear reactor man, that’s all we need, just one more hand and we’re going to win it big, bro.

  • katy ✨@piefed.blahaj.zone

    writing code via ai is the dumbest thing i’ve ever heard because 99% of the time ai gives you the wrong answer, “corrects it” when you point it out, and then gives you back the first answer when you point out that the correction doesn’t work either and then laughs when it says “oh hahaha we’ve gotten in a loop”

    • BrianTheeBiscuiteer@lemmy.world

      Or you give it 3-4 requirements (e.g. prefer constants, use ternaries when possible) and after a couple replies it forgets a requirement, you set it straight, then it immediately forgets another requirement.
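For concreteness, the two example requirements named above ("prefer constants", "use ternaries when possible") amount to style choices like this hypothetical snippet:

```python
# "Prefer constants": named values instead of magic numbers scattered around.
RETRY_LIMIT = 3
TIMEOUT_SECONDS = 30

def describe_attempt(attempts: int) -> str:
    # "Use ternaries when possible": one expression instead of a four-line if/else.
    return "give up" if attempts >= RETRY_LIMIT else "retry"

print(describe_attempt(1))  # retry
print(describe_attempt(3))  # give up
```

These are exactly the kinds of small, cross-cutting conventions that an assistant tends to drop again a few replies later.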

      • MangoCats@feddit.it

        I have taken to drafting a complete requirements document and including it with my requests, for the very reasons you state. It seems to help.

        • MangoCats@feddit.it

          Same, and AI isn’t as frustrating to deal with when it can’t do what it was hired for and your manager needs you to now find something it can do because the contract is funded…

    • da_cow (she/her)@feddit.org

      You can use AI to generate code, but in my experience it’s quite literally what you said. What I have to admit, though, is that it’s quite good at finding mistakes in your code. This is especially useful when you don’t have that much experience and are still learning. Copy-paste the relevant code, ask why it’s not working, and in quite a lot of cases you get an explanation of what isn’t working and why. I usually try to avoid asking an AI and find an answer on Google instead, but that doesn’t guarantee an answer.

      • ngdev@lemmy.zip

        if your code isnt working then use a debugger? code isnt magic lmao

        • da_cow (she/her)@feddit.org

          As I already stated, AI is my last resort. If something doesn’t work because it has a logical flaw, googling won’t save me. So of course I debug it first, but if I get an error I have no clue where it comes from, no amount of debugging will fix the problem, because the error probably occurred because I don’t know better. I am not that good of a coder and I am still learning a lot on a regular basis. For people like me, AI is in fact quite useful. It has basically become the replacement for pasting your code and error into Stack Overflow (which doesn’t even work for me, since I always get IP banned when trying to sign up).

          • ngdev@lemmy.zip

            you never stated you use it as a last resort. you’re basically using ai as a rubber ducky

            • MangoCats@feddit.it

              I am a firm believer in rubber ducky debugging, but AI is clearly better than the rubber duck. You don’t depend on either to do it for you, but as long as you have enough self-esteem to tell AI to stick it where the sun don’t shine when you know it’s wrong, it can help accelerate small tasks from a few hours down to a few minutes.

            • cheloxin@lemmy.ml

              I usual try to avoid…

              Just because they didn’t explicitly say the exact words you did doesn’t mean it wasn’t said

              • ngdev@lemmy.zip

                trying to avoid something also doesnt mean that the thing youre avoiding is a last resort. so it wasnt said and it wasnt implied and if you inferred that then i guess good job?

            • Mniot@programming.dev

              More as an alternative to a search engine.

              In my ideal world, StackOverflow would be a public good with a lot of funding and no ads/sponsorship.

              Since that’s not the case, and everything is hopelessly polluted with ads and SEO, LLMs are momentarily a useful tool for getting results. Their info might be only 3/4 correct, but my search results are also trash. Who knows what people will do in a year, when the LLMs have been eating each other’s slop and are also being stuffed with ads by their owners.

  • Treczoks@lemmy.world

    It might write code, but not good code, secure code, or even working code.

    For that, you still need professionals. Even management will learn. If they survive the process.

  • RedFrank24@lemmy.world

    Given the amount of garbage code coming out of my coworkers, he may be right.

    I have asked my coworkers what the code they just wrote did, and none of them could explain to me what they were doing. Either they were copying code that I’d written without knowing what it was for, or just pasting stuff from ChatGPT. My code isn’t perfect by any means, but I can at least tell you what it’s doing.

    • NιƙƙιDιɱҽʂ@lemmy.world

      That’s insane. Whether code is copied from AI, Stack Overflow, or wherever, I couldn’t imagine not reading it over to get at least a gist of how it works.

    • Patches@ttrpg.network

      To be fair.

      You could’ve asked some of those coworkers the same thing 5 years ago.

      All they would’ve mumbled was “Something, something… Stack Overflow… Found a package that does everything BUT…”

      And delivered equal garbage.

      • RedFrank24@lemmy.world

        I like to think there’s a bit of a difference between copying something from stackoverflow and not being able to read what you just pasted from stackoverflow.

        Sure, you can be lazy and just paste something and trust that it works, but if someone asks you to read that code and know what it’s doing, you should be able to read it. Being able to read code is literally what you’re paid for.

        • MiddleAgesModem@lemmy.world

          The difference you’re talking about is making an attempt to understand versus blindly copying, not using AI versus stackoverflow

          • supersquirrel@sopuli.xyz

            No, the AI was most certainly trained on the same stack overflow posts as humans would manually search out in the past.

            Thus the effective difference is precisely that between an active attempt to understand and blindly copying since the AI is specifically there to introduce a stochastic opaqueness between truth (i.e. sufficiently curated training data) and interpretation of truth.

            There is a context to stackoverflow posts and comments that can be analyzed from many different perspectives by the human brain (who posted the question with what tone, do the answers/comments tend to agree, how long ago was it posted etc…), by definition the way LLMs work they destroy that in favor of a hallucinating-yet-authoritative disembodied voice.

            • MiddleAgesModem@lemmy.world

              No, the difference is that Stack Overflow is older and more established, while AI is newer and demonized.

              I’ve learned a lot of completely accurate information from AIs. More so than I would have with shitty condescending people.

              You can use AI to think for you or you can use it to help you understand. Anything can be analyzed from multiple perspectives with AI, you just have to pursue that. Just like you would without it.

              You think AI can’t tell you who wrote something? Or analyze comments? Or see how long ago something was posted? That’s showing the ignorance inherent in the anti-AI crusade.

      • orrk@lemmy.world

        no, generally the package would still be better than whatever the junior did, or the AI does now

        • Honytawk@lemmy.zip

          I hate that argument.

          It is even more energy efficient to write your code on paper. So we should stop using computers entirely. /s

          • Mniot@programming.dev

            We’re talking here about garbage code that we don’t want. If the choice is “let me commit bad code that causes problems or else I will quit using computers”… is this a dilemma for you?

          • foenkyfjutschah@programming.dev

            don’t know, i do neither. but i think the time that users take for manual copying and adjusting from a quick web server’s response may level out the time an LLM takes.

    • HugeNerd@lemmy.ca

      No one really knows what code does anymore. Not like in the days of 8-bit CPUs and 64K of RAM.

  • setsubyou@lemmy.world

    Well it’s not improving my productivity, and it does mostly slow me down, but it’s kind of entertaining to watch sometimes. Just can’t waste time on trying to make it do anything complicated because that never goes well.

    Tbh I’m mostly trying to use the AI tools my employer allows because it’s not actually necessary for me to believe that they’re helping. It’s good enough if the management thinks I’m more productive. They don’t understand what I’m doing anyway but if this gives them a warm fuzzy feeling because they think they’re getting more out of my salary, why not play along a little.

    • Terrasque@infosec.pub

      Just can’t waste time on trying to make it do anything complicated because that never goes well.

      Yeah, that’s a waste of time. However, it can knock out simple code you could easily write yourself, but that is boring to write and takes time away from working on the real problems.

      • rozodru@piefed.social

        for setting stuff up, putting down a basic empty framework, setting up dirs/files/whatever, it’s great. in that regard yeah it’ll save you time.

        For doing the ACTUAL work? no. maybe to help write say a simple function or whatever, sure. beyond that? if it can’t nail it the first or second time? just ditch it.

        • Terrasque@infosec.pub

          I’ve found it useful for writing unit tests once you’ve written one or two, for writing specific functions, and for small scripts. For example, some time ago I needed a script that found a machine’s public IP, then posted that to an MQTT topic along with a timestamp, with the config abstracted out into a file.

          Now there’s nothing difficult about this, but just looking up which libraries to use and their syntax takes some time, along with actually writing the code. Also, since it’s so straightforward, it’s pretty boring. ChatGPT wrote it in under two minutes, working perfectly on the first try.

          It’s also been helpful with bash scripts, PowerShell scripts, and Ansible playbooks: things I don’t really remember the syntax of between uses, and which are a bit arcane/exotic. It’s just a nice helper to have for the boring and simple things that still need to be done.
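The IP-to-MQTT script described above can be sketched roughly like this. The ipify URL, broker name, topic, and the paho-mqtt call are assumptions for illustration, not the commenter's actual code:

```python
import json
import time
import urllib.request

# Config abstracted out into one place, as the comment describes;
# all three values here are placeholder assumptions.
CONFIG = {
    "ip_service": "https://api.ipify.org",
    "broker": "mqtt.example.com",
    "topic": "machines/office/public_ip",
}

def fetch_public_ip() -> str:
    """Ask an external service what our public IP is."""
    return urllib.request.urlopen(CONFIG["ip_service"], timeout=10).read().decode()

def build_payload(ip: str, timestamp: float) -> str:
    """JSON payload with the IP and a unix timestamp."""
    return json.dumps({"ip": ip, "ts": int(timestamp)})

# Publishing would then be a single call with a third-party client,
# e.g. paho-mqtt:
#   from paho.mqtt.publish import single
#   single(CONFIG["topic"], build_payload(fetch_public_ip(), time.time()),
#          hostname=CONFIG["broker"])
print(build_payload("203.0.113.7", 1700000000.0))  # {"ip": "203.0.113.7", "ts": 1700000000}
```

Exactly the kind of glue code that is trivial but tedious: the value of the assistant is in recalling the library calls, not in any hard logic.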

  • philosloppy@lemmy.world

    The conflict of interest here is pretty obvious, and if anybody was suckered into believing this guy’s prognostications on his company’s products perhaps they should work on being less credulous.

    • Echo Dot@feddit.uk

      Yep, along with fusion.

      We’ve had years of this. Someone somewhere is always telling us that the future is just around the corner, and it never is.

      • Jesus_666@lemmy.world

        At least the fusion guys are making actual progress and can point to being wildly underfunded – and they predicted this pace of development with respect to funding back in the late 70s.

        Meanwhile, the AI guys have all the funding in the world, keep telling about how everything will change in the next few months, actually trigger layoffs with that rhetoric, and deliver very little.

        • FundMECFS@anarchist.nexus

          They get 1+ billion a year. Probably much more if you include the undisclosed amounts China invests.

          • Jesus_666@lemmy.world

            Yeah, and in the 70s they estimated they’d need about twice that to make significant progress in a reasonable timeframe. Fusion research is underfunded, especially when you look at how the USA dumps money into places like the NIF, which researches inertial confinement fusion.

            Inertial confinement fusion is great for developing better thermonuclear weapons but an unlikely candidate for practical power generation. So from that one billion bucks a year, a significant amount is pissed away on weapons research instead of power generation candidates like tokamaks and stellarators.

            I’m glad that China is funding fusion research, especially since they’re in a consortium with many Western nations. When they make progress, so do we (and vice versa).

  • Simulation6@sopuli.xyz

    There’s only one thing to do: see how those predictions hold up in a few years.

    Or, you know, do the sensible thing: call the dude the snake oil salesman he is and run him out of town on a rail.

  • DaddleDew@lemmy.world

    Picking up a few pages out of Elmo’s book I see. He forgot the part where he distracts from the blatant underdelivery with more empty exaggerated promises!