“No Duh,” say senior developers everywhere.
The article explains that vibe code is often close to functional, but not quite, so developers have to go in and find where the problems lie, resulting in a net slowdown of development rather than a productivity gain.
I’d much rather write my own bugs and waste hours fixing them, thanks.
I have been vibe coding a whole game in JavaScript to try it out. So far I have gotten a pretty OK game out of it. It’s just a simple match-three bubble-pop type of thing, so nothing crazy, but I made a design and I am trying to implement it using mostly vibe coding.
That being said, the code is awful. So many bad choices and so much spaghetti code. It also took longer than if I had written it myself.
So now I have a game that’s kind of hard to modify haha. I may try to set up some unit tests and have it refactor using those.
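Something like the sketch below is what I have in mind, using Node’s built-in test runner; the findHorizontalMatches function and the test cases are just made-up stand-ins for whatever the real match logic ends up being.

```js
// Hedged sketch: pin down core match-three behavior with a couple of unit
// tests before letting an LLM refactor. All names here are hypothetical.
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Stand-in for the game's real match detector: returns every horizontal run
// of 3 or more identical colors in a single row.
function findHorizontalMatches(row) {
  const matches = [];
  let start = 0;
  for (let i = 1; i <= row.length; i++) {
    if (i === row.length || row[i] !== row[start]) {
      if (i - start >= 3) {
        matches.push({ start, length: i - start, color: row[start] });
      }
      start = i;
    }
  }
  return matches;
}

test('detects a run of three', () => {
  assert.deepEqual(findHorizontalMatches(['R', 'R', 'R', 'B']), [
    { start: 0, length: 3, color: 'R' },
  ]);
});

test('ignores runs shorter than three', () => {
  assert.deepEqual(findHorizontalMatches(['R', 'R', 'B', 'B']), []);
});
```

With a handful of tests like that pinned down (run with `node --test`), I can at least tell when a refactor, mine or the LLM’s, breaks the core rules.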
Wait, are you blaming AI for this, or yourself?
Blaming? I mean it wrote pretty much all of the code. I definitely wouldn’t tell people I wrote it that way haha.
Sounds like vibecoders will have to relearn the lessons of the past 40 years of software engineering.
As with every profession, every generation… only this time they’re on their own, because every company forgot what employee training is and expects everyone to be born with five years of experience.
I am Jack’s complete lack of surprise.
Might be there someday, but right now it’s basically a substitute for me googling some shit.
If I let it go ham, and code everything, it mutates into insanity in a very short period of time.
I’m honestly doubting it will ever get there, at least with the current use of LLMs. There just isn’t true comprehension in them, no space for consideration in any novel dimension. If it takes incredible resources for companies to achieve sometimes-kinda-not-dogshit, I think we might need a new paradigm.
I think we’ve tapped most of the mileage we can get from the current science. The AI bros conveniently forget there have been multiple AI winters; I suspect we’ll see at least one more before “AGI” (if we ever get there).
A crazy number of devs weren’t even using EXISTING code assistant tooling.
Enterprise-grade IDEs already had tons of tooling to generate classes and perform refactoring in a sane, algorithmic way. In a way that was deterministic.
So many use cases people have tried to sell me on (boilerplate handling) and I’m like “you have that now and don’t even use it!”
I think there is probably a way to use LLMs to extract intention and then call real, dependable tools to actually perform the actions. This cult of purity where the LLM must actually be generating the tokens itself… why?
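Rough sketch of what I mean below; everything in it is hypothetical, the “LLM output” is just a hard-coded string, and the deterministic tool is a stub standing in for something like an IDE or language-server rename refactoring.

```js
// Hedged sketch: the LLM only emits a structured intent; a deterministic,
// allow-listed tool does the actual work. All names are made up.
const llmOutput = '{"action": "renameSymbol", "from": "getDataz", "to": "fetchUserData"}';

const allowedActions = {
  // In a real setup this would invoke the IDE/language-server rename
  // refactoring, which is deterministic and scope-aware. Stubbed here.
  renameSymbol: ({ from, to }) => {
    console.log(`rename ${from} -> ${to} via the refactoring engine`);
  },
};

let intent;
try {
  intent = JSON.parse(llmOutput);
} catch {
  throw new Error('LLM did not return valid JSON');
}

const handler = allowedActions[intent.action];
if (!handler) {
  throw new Error(`Unsupported action: ${intent.action}`);
}
handler(intent);
```

The model never writes the final code; it only picks from a short menu of actions that real, dependable tools can carry out.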
I’m all for coding tools. I love them. They have to actually work though. The paradigm is completely wrong right now. I don’t need it to “appear” good, I need it to BE good.
Exactly. We’re already bootstrapping, re-tooling, and improving the entire process of development to the best of our collective ability. Constantly. All through good, old-fashioned, classical system design.
Like you said, a lot of people don’t even put that to use, and they remain very effective. Yet a tiny speck of AI tech and its marketing is convincing people we’re about to either become gods or be usurped.
It’s like we took decades of technical knowledge and abstraction from our Computing Canon and said “What if we didn’t use that anymore?”
This is the smoking gun. If the AI hype boys really were getting that “10x engineer” out of AI agents, then regular developers would not be able to even come close to competing. Where are these 10x engineers? What have they made? They should be able to spin up whole new companies, with whole new major software products. Where are they?
They are statistical prediction machines. The more they output, the more of their “context window” (their statistical prior) consists of the very output they just generated. It’s a fundamental property of the current LLM design that the snake will eventually eat enough of its tail to puke garbage code.
For most large projects, writing the code is the easy part anyway.
Writing new code is easier than editing someone else’s code, but editing a portion is still better than writing the entire program again from start to end.
Then there are LLMs, which force you to edit the entire thing from start to end.
I’ve found success using more powerful LLMs to help me create applications in the Rust programming language. If you use a weak LLM and ask it to do something very difficult, you’ll get bad results. You still need to have a fundamental understanding of good coding practices. Using an LLM to code doesn’t replace the decision-making.
Based on my experience with Claude Sonnet and GPT-4/5… it’s a little useful but generally annoying, and it fails more often than it works.
I do think moderate use still comes out ahead, as it saves a bunch of typing when it does work, but I still get annoyed at the blatantly stupid suggestions I keep having to decline.
I remember GPT-4 being useless and constantly giving wrong information. The newer models have become significantly more useful, especially when prompted to be extremely careful and to always double-check to ensure the best response.
Sounds exactly like my experience with Vibe Coding.
AI coding is the stupidest thing I’ve seen since someone decided it was a good idea to measure code by the number of lines written.
It did solve my impostor syndrome, though. Turns out a bunch of people I saw as my betters were faking it all along.
More code is better, obviously! Why else would a website for viewing a restaurant menu be 80 MB? It’s all that good, excellent code.
shocked_pikachu_face.jpg
No shit, Sherlock!
I’m not super surprised, but AI has been really useful for learning, or for pointing me toward something to look into more directly.
I’m not really an advocate for AI, but there are some really nice things AI can do. And I like to test the code quality of the models I have access to.
I always ask for an FTP server and a DNS server to check what it can do, and they work surprisingly well most of the time.
The people talking about AI coding the most at my job are architects and it drives me insane.
I am a software architect, and I mainly use it to refactor my own old code… But I am maybe not a typical architect…
I don’t really care if people use it; it’s more that it feels like a quarter of our architect meeting presentations are about something AI-related. It’s just exhausting.
Software architects who don’t write code are worse than useless.
Imagine if we did “vibe city infrastructure”. Just throw up a fucking suspension bridge and we’ll hire some temps to come in later to find the bad welds and missing cables.
It turns every prototyping exercise into a debugging exercise. Even talented coders often suck ass at debugging.