“No Duh,” say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite: developers still have to go in and find where the problems are, resulting in a net slowdown of development rather than a productivity gain.

  • RagingRobot@lemmy.world · 12 days ago

    I have been vibe coding a whole game in JavaScript to try it out. So far I have gotten a pretty OK game out of it. It’s just a simple match-three bubble-pop type of thing, so nothing crazy, but I made a design and I am trying to implement it using mostly vibe coding.

    That being said, the code is awful. So many bad choices and so much spaghetti code. It also took longer than if I had written it myself.

    So now I have a game that’s kind of hard to modify haha. I may try to set up some unit tests and have it refactor using those.
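
    Roughly what I have in mind is a couple of plain node:test cases that pin down the matching rules before letting the model touch anything. A minimal sketch, where `findMatches` and `./board.js` are hypothetical names standing in for whatever the vibe-coded game actually exports:

    ```js
    // Hypothetical sketch: lock in the behavior I care about with Node's
    // built-in test runner before asking the model to refactor.
    const test = require('node:test');
    const assert = require('node:assert');
    const { findMatches } = require('./board.js'); // hypothetical module

    test('detects a horizontal run of three', () => {
      const board = [
        ['R', 'R', 'R'],
        ['G', 'B', 'G'],
        ['B', 'G', 'B'],
      ];
      // Assumed return format: coordinates of matched bubbles, top-left origin.
      assert.deepStrictEqual(findMatches(board), [[0, 0], [0, 1], [0, 2]]);
    });

    test('ignores runs shorter than three', () => {
      const board = [
        ['R', 'R', 'G'],
        ['G', 'B', 'G'],
        ['B', 'G', 'B'],
      ];
      assert.deepStrictEqual(findMatches(board), []);
    });
    ```

    If the refactored version still passes, at least the behavior I actually care about survived.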

      • RagingRobot@lemmy.world · 11 days ago

        Blaming? I mean it wrote pretty much all of the code. I definitely wouldn’t tell people I wrote it that way haha.

    • mcv@lemmy.zip · 12 days ago

      Sounds like vibe coders will have to relearn the lessons of the past 40 years of software engineering.

      • CheeseNoodle@lemmy.world · 12 days ago

        As with every profession, every generation… only this time they’re on their own, because every company forgot what employee training is and expects everyone to be born with five years of experience.

  • Jesus@lemmy.world · 12 days ago

    Might be there someday, but right now it’s basically a substitute for me googling some shit.

    If I let it go ham, and code everything, it mutates into insanity in a very short period of time.

    • degen@midwest.social · 12 days ago

      I’m honestly doubting it will get there someday, at least with the current use of LLMs. There just isn’t true comprehension in them, no space for consideration in any novel dimension. If it takes incredible resources for companies to achieve sometimes-kinda-not-dogshit, I think we might need a new paradigm.

      • Glitchvid@lemmy.world · 12 days ago

        I think we’ve tapped most of the mileage we can get from the current science. The AI bros conveniently forget there have been multiple AI winters; I suspect we’ll see at least one more before “AGI” (if we ever get there).

      • Windex007@lemmy.world · 12 days ago

        A crazy number of devs weren’t even using EXISTING code assistant tooling.

        Enterprise-grade IDEs already had tons of tooling to generate classes and perform refactoring in a sane and algorithmic way - in a way that was deterministic.

        So many of the use cases people have tried to sell me on (boilerplate handling), and I’m like, “you have that now and don’t even use it!”

        I think there is probably a way to use LLMs to extract intention and then call real, dependable tools to actually perform the actions. This cult of purity where the LLM must actually be generating the tokens itself… why?

        I’m all for coding tools. I love them. They have to actually work, though. The paradigm is completely wrong right now. I don’t need it to “appear” good, I need it to BE good.
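
        Roughly the shape I mean, as a hedged sketch: `askModel` and `runIdeRefactor` are made-up stand-ins for a real LLM client and the IDE’s existing refactoring engine.

        ```js
        // Sketch: the LLM only maps a natural-language request onto a known,
        // deterministic action; the edit itself is done by real tooling.
        const ACTIONS = {
          rename_symbol: (args) => runIdeRefactor('rename', args),
          extract_method: (args) => runIdeRefactor('extract-method', args),
        };

        async function handle(request) {
          // The model is asked for structured intent only, never for code.
          const raw = await askModel(
            `Map this request to one of [${Object.keys(ACTIONS).join(', ')}] ` +
            `and its arguments. Reply with JSON only: ${request}`
          );
          const intent = JSON.parse(raw);
          const tool = ACTIONS[intent.action];
          if (!tool) throw new Error(`No deterministic tool for: ${intent.action}`);
          return tool(intent.args);
        }

        // Stand-ins so the sketch runs on its own; swap in a real client / IDE API.
        async function askModel(prompt) {
          return JSON.stringify({
            action: 'rename_symbol',
            args: { file: 'game.js', from: 'doStuff', to: 'popMatchedBubbles' },
          });
        }
        function runIdeRefactor(kind, args) {
          console.log(`would run deterministic ${kind} with`, args);
        }

        handle('please rename doStuff to something meaningful');
        ```

        The model only picks the action; the part that actually touches files stays boring and predictable.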

        • degen@midwest.social · 12 days ago

          Exactly. We’re already bootstrapping, re-tooling, and improving the entire process of development to the best of our collective ability. Constantly. All through good, old fashioned, classical system design.

          Like you said, a lot of people don’t even put that to use, and they remain very effective. Yet a tiny speck of AI tech and its marketing is convincing people we’re about to either become gods or be usurped.

          It’s like we took decades of technical knowledge and abstraction from our Computing Canon and said “What if we didn’t use that anymore?”

          • Jason2357@lemmy.ca · 11 days ago

            This is the smoking gun. If the AI hype boys really were getting that “10x engineer” out of AI agents, then regular developers would not be able to even come close to competing. Where are these 10x engineers? What have they made? They should be able to spin up whole new companies, with whole new major software products. Where are they?

      • Jason2357@lemmy.ca · 11 days ago

        They are statistical prediction machines. The more they output, the larger the portion of their “context window” (their statistical prior) that is made up of the very output they generated. It’s a fundamental property of the current LLM design that the snake will eventually eat enough of its tail to puke garbage code.
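
        A back-of-the-envelope illustration of the effect (the 2,000-token prompt and 500 tokens per turn are just assumed numbers):

        ```js
        // Toy illustration: a fixed prompt plus an agent that appends its own
        // output each turn. The share of context that is the model's own prior
        // output grows quickly.
        const promptTokens = 2000;  // assumed size of the original task/prompt
        const tokensPerTurn = 500;  // assumed output appended per agent turn

        for (let turn = 1; turn <= 10; turn++) {
          const selfGenerated = turn * tokensPerTurn;
          const share = selfGenerated / (promptTokens + selfGenerated);
          console.log(`turn ${turn}: ${(share * 100).toFixed(0)}% of the context is model output`);
        }
        ```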

    • Jankatarch@lemmy.world · 10 days ago

      Writing new code is easier than editing someone else’s code, but editing a portion is still better than writing the entire program again from start to end.

      Then there are LLMs, which force you to edit the entire thing from start to end.

  • jaykrown@lemmy.world · 12 days ago

    I’ve found success using more powerful LLMs to help me create applications in the Rust programming language. If you use a weak LLM and ask it to do something very difficult, you’ll get bad results. You still need a fundamental understanding of good coding practices; using an LLM to code doesn’t replace the decision-making.

    • jj4211@lemmy.world · 12 days ago

      Based on my experience with Claude Sonnet and GPT-4/5… it’s a little useful, but generally annoying, and it fails more often than it works.

      I do think moderate use still comes out ahead, as it saves a bunch of typing when it does work, but I still get annoyed at the blatantly stupid suggestions I keep having to decline.

      • jaykrown@lemmy.world · 11 days ago

        I remember GPT-4 being useless and constantly giving wrong information. The newer models have become significantly more useful, especially when prompted to be extremely careful and to always double-check for the best response.

  • elbiter@lemmy.world · 12 days ago

    AI coding is the stupidest thing I’ve seen since someone decided it was a good idea to measure code by the number of lines written.

    • Slotos@feddit.nl · 11 days ago

      It did solve my impostor syndrome, though. Turns out a bunch of people I saw as my betters were faking it all along.

    • ellohir@lemmy.world · 12 days ago

      More code is better, obviously! Why else would a website for viewing a restaurant menu be 80 MB? It’s all that good, excellent code.

  • Sunkblake@lemmy.world · 11 days ago

    I’m not super surprised, but AI has been really useful for learning or for giving me a direction to look into something more directly.

    I’m not really an advocate for AI, but there are some really nice things it can do. And I like to test the code quality of the models I have access to.

    I always ask for an FTP server and a DNS server to check what it can do, and they work surprisingly well most of the time.
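
    For the curious, the kind of toy DNS responder I mean looks roughly like this. This is just a sketch of the sort of thing such a prompt yields, assuming a bare single-question A query with no EDNS; the actual output from the models varies.

    ```js
    // Toy DNS responder: copies the incoming query's header and question,
    // flips it into a response, and answers every A question with 127.0.0.1.
    const dgram = require('node:dgram');
    const server = dgram.createSocket('udp4');

    server.on('message', (query, rinfo) => {
      const response = Buffer.alloc(query.length + 16);
      query.copy(response, 0);                       // header + question verbatim
      response.writeUInt16BE(0x8180, 2);             // flags: response, RD, RA, NOERROR
      response.writeUInt16BE(1, 6);                  // ANCOUNT = 1
      let o = query.length;
      response.writeUInt16BE(0xc00c, o); o += 2;     // name: pointer to question name
      response.writeUInt16BE(1, o); o += 2;          // TYPE A
      response.writeUInt16BE(1, o); o += 2;          // CLASS IN
      response.writeUInt32BE(60, o); o += 4;         // TTL 60s
      response.writeUInt16BE(4, o); o += 2;          // RDLENGTH
      Buffer.from([127, 0, 0, 1]).copy(response, o); // RDATA: 127.0.0.1
      server.send(response, rinfo.port, rinfo.address);
    });

    server.bind(5353, () => console.log('toy DNS responder on udp/5353'));
    ```

    You can poke it with `dig +noedns -p 5353 @127.0.0.1 example.com`.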

    • ceiphas@feddit.org · 11 days ago

      I am a software architect, and I mainly use it to refactor my own old code… But I am maybe not a typical architect…

      • JackbyDev@programming.dev · 11 days ago

        I don’t really care if people use it; it’s more that it feels like a quarter of our architect meeting presentations are about something AI-related. It’s just exhausting.

  • kadaverin0@lemmy.dbzer0.com · 12 days ago

    Imagine if we did “vibe city infrastructure”. Just throw up a fucking suspension bridge and we’ll hire some temps to come in later to find the bad welds and missing cables.

  • aesthelete@lemmy.world · 12 days ago

    It turns every prototyping exercise into a debugging exercise. Even talented coders often suck ass at debugging.