Over the past few weeks, I’ve been experimenting with AI agents, and today I want to put my disillusionment into words. The spark came from a blog post I read, one that essentially urged: “Just start prompting and see what happens. I’ve built twelve projects in no time!” That mindset is exactly the trap.

At first, it’s nothing short of magical. In those “first-shot” attempts, AI agents shine. They can design an entire system in one go, following their own logic, adapting and integrating it smoothly. It feels effortless, functional, and fast. That early success builds a dangerous kind of trust. You start to believe that the agent is competent, and that the more freedom you give it, the better the outcome will be.

But then the honeymoon ends. When later changes are needed, the weaknesses emerge. Adapting the code to new requirements becomes more difficult. Embedding fresh logic without breaking the old one requires a different mindset – one that agents don’t consistently apply. Regression bugs are common. Sometimes they’re caught early, but often they slip through. And it’s not just obvious bugs: subtle, unintended changes can creep in. An agent might quietly alter a date format you never asked it to touch, or delete a helper function without replacement – for no clear reason. Over time, I noticed the quality of its solutions slipping. In some cases, the “improvements” damaged the program more than they extended it. Of course, this can be blamed on the small context window, the stateless approach, and the lack of memory.
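
To make this failure mode concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from a real project: the function names, the before/after versions, and the scenario (the agent was asked only to add a status field) are invented purely for illustration.

```python
from datetime import datetime, timezone


def _iso_now() -> str:
    """Shared helper; other parts of the (hypothetical) codebase rely on it."""
    return datetime.now(timezone.utc).isoformat()


def build_entry_before(name: str) -> dict:
    """Original version: timestamps are ISO 8601, produced by the helper."""
    return {"name": name, "created": _iso_now()}


def build_entry_after(name: str, status: str) -> dict:
    """The agent's 'improved' version: the requested status field is there,
    but the helper call is gone and the date format quietly changed."""
    return {
        "name": name,
        "status": status,
        "created": datetime.now(timezone.utc).strftime("%d/%m/%Y %H:%M"),
    }


if __name__ == "__main__":
    print(build_entry_before("report"))          # ISO 8601 timestamp
    print(build_entry_after("report", "draft"))  # silently different format
```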

But this is especially dangerous because small changes often receive only brief, trusting reviews. You don’t expect them to introduce unrelated modifications. And if the removal of a helper function was never logged or flagged anywhere, a later error may not reveal its absence at all, making debugging far more time-consuming.
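
Staying with the same hypothetical sketch: the eventual failure surfaces in downstream code that still expects the old ISO 8601 timestamps, and the traceback points at the parser, not at the helper that was removed.

```python
from datetime import datetime


def days_since_created(entry: dict) -> int:
    """Downstream consumer, written against the old helper's ISO output."""
    created = datetime.fromisoformat(entry["created"])
    return (datetime.now(created.tzinfo) - created).days


# With the old format this works; with the silently changed one it fails
# far away from the actual cause:
#   days_since_created({"name": "report", "created": "28/02/2025 14:00"})
#   ValueError: Invalid isoformat string: '28/02/2025 14:00'
```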

The real downside, though, is psychological. When you haven’t built the logic yourself, you lose the mental map of the codebase. You no longer know where to look when things break. If you had designed the flow and logic by hand, you’d understand the entire structure. But when the AI agent does it for you, you have to re-learn the project later, which doesn’t really save you any time. Programming is more than pure logic; you also have to understand the data types you’re working with. And here, too, small hidden changes, like an altered date format, can trip you up without you realizing it.

What agents are often good for is a pile of small programs or tools. Useful? Perhaps. Meaningful? Questionable. Many of these projects lack any real motivation behind them. They’re not born from a vision or passion, but simply because they can be made quickly with an agent. This leads to a growing tide of low-quality, shallow, “soulless” software – products without a true origin, driven more by impulse than by intention.

This is why the promise of “faster production” sounds to me like capitalism’s promise of “greater efficiency.” Both risk replacing genuine passion with mass production.

Personally, I switch the copilot off whenever I can. I get frustrated when I have to rely on it to solve a problem I can’t figure out myself. It makes me feel slower, dumber, less capable – and I can’t shake the sense that the more I trust the AI’s speed and competence, the less capable I become over time.

Studies now show that many people only believe they’re faster with AI; the data tells a different story. But the hype, the gold-rush energy, is intoxicating enough that people will keep riding the wave. In a few years, some may be left standing in the wreckage. And when that happens, it won’t just affect them – it will affect all of us.

The question of AI’s sustainability and true purpose is far from answered. Until it is, I believe we’d be wise to stay skeptical, keep our skills sharp, and think twice before handing over too much of our craft to the machine.

Sources:

https://www.dzombak.com/blog/2025/08/getting-good-results-from-claude-code/
https://arxiv.org/abs/2507.09089