I’ve been noticing something about how developers talk about AI coding agents.
They review agent-generated code with a magnifying glass. They point at a variable name they don’t like, a slightly verbose function, an abstraction they would have done differently, and dismiss the whole thing. "See? AI can’t code."
Meanwhile, the same developers spend their Mondays reviewing colleagues’ pull requests with no tests, confusing naming, and logic that takes three reads to understand. And they approve them. Maybe with a comment or two. Maybe not.
We tolerate bad code from people but refuse to accept good-enough code from machines.
That’s not a technical judgment. That’s a double standard.
The code review double standard
Think about the last codebase you worked on. Really think about it.
How many functions were too long? How many modules had unclear responsibilities? How many "temporary" workarounds became permanent fixtures? How many times did someone say "we’ll refactor this later" and never did?
I’ve worked on codebases where entire features were held together by duct tape and good intentions. We all have. And we shipped them. Users paid for them. Businesses ran on them.
Nobody demanded perfection from those codebases. We demanded that they work.
But when an AI agent produces code that is functional, tested, and reasonably structured, yet not exactly how you would have written it, the bar suddenly becomes perfection.
I built an entire production app with AI agents. 61 commits, 507 tests, deployed to production. The code wasn’t perfect. Some abstractions could be better. Some naming could be clearer. But it works, it’s tested, and it shipped. That’s more than I can say about a lot of human-written code I’ve seen in production.
"Linux wins heavily on points of being available now"
In 1992, Andrew Tanenbaum, professor and creator of MINIX, posted a message on USENET with the subject line "LINUX is obsolete".
His argument was clear and, from an academic standpoint, correct: Linux used a monolithic kernel, which was an outdated design. Microkernels were the future. Among OS designers, the debate was "essentially over." He even told Linus:
"Be thankful you are not my student. You would not get a high grade for such a design."
Tanenbaum was right on paper. Microkernels are theoretically superior: more modular, more reliable, easier to reason about.
Linus Torvalds responded the next day:
"True, linux is monolithic, and I agree that microkernels are nicer. From a theoretical (and aesthetic) standpoint linux loses. […] Linux wins heavily on points of being available now."
Read that again. Linux wins heavily on points of being available now.
Linus conceded the theoretical argument entirely. He agreed microkernels were nicer. But his kernel existed, it worked, and people could use it today. The GNU Hurd, the "correct" microkernel-based kernel, wasn’t ready. It still isn’t ready, over 30 years later.
Linux now runs the majority of the world’s servers, all Android phones, most supercomputers, and most of the cloud. The "imperfect" kernel won because it shipped, got reviewed, got improved, and kept shipping.
The full debate is worth reading.
Worse is better
This isn’t a new insight. Richard Gabriel wrote about this in 1989 in his essay "Worse is Better".
He compared two design philosophies: the "MIT approach" (the right thing) and the "New Jersey approach" (worse is better). The MIT approach prioritizes correctness and completeness. The interface must be consistent, the design must be elegant. The New Jersey approach prioritizes simplicity and shipping. The implementation should be simple, even if the interface suffers a little.
Gabriel’s conclusion? The "worse" approach wins. Not because it’s better in isolation, but because it ships, it spreads, and it gets improved over time.
Unix won over Lisp Machines. C won over more "correct" languages. Linux won over Hurd. JavaScript won over everything that was supposed to replace it.
The pattern is always the same: the imperfect thing that exists beats the perfect thing that doesn’t.
What this means for agents
When people complain that agent-generated code isn’t perfect, they’re making Tanenbaum’s argument in 1992. They’re holding up the theoretical ideal and dismissing the practical reality.
Yes, an AI agent might produce code that isn’t exactly how a senior engineer would write it (though with guidance it can get close). But consider what that code actually is:
- It compiles and runs
- It comes with tests
- It follows the patterns you defined (if you defined them)
- It was produced in minutes, not days
- It can be reviewed and improved
Is that worse than the untested, undocumented, "I’ll clean it up later" code that humans write every day?
The question that matters is: is it good enough to ship, review, and improve?
Because that’s the standard we’ve always applied to human code. And that’s the standard that built Linux, Unix, JavaScript, and most of the software we depend on.
Fred Brooks wrote in The Mythical Man-Month that the hardest part of software is the conceptual structure, not the code itself. The essential complexity. If your architecture and decisions are sound, the code is the easy part. Surface-level "imperfection" in code matters far less than the architecture behind it.
Stop demanding from machines what you don’t demand from yourself
I’m not saying we should accept garbage code from agents. Review it. Improve it. Hold it to the same standard you hold human code. The same standard, not a higher one.
If your colleague wrote a function that works correctly and has tests but uses a slightly different pattern than you’d prefer, you’d approve the PR. Apply the same judgment to agent code.
Perfection was never the point. Shipping sustainably was. Ship it, review it, improve it, ship again. Linus knew that in 1992. Richard Gabriel knew that in 1989. Fred Brooks knew that in 1975.
Stop waiting for agents to produce perfect code. Treat agent output the way you’ve always treated code: as a starting point that works, that you review, and that you improve.
Thanks for reading!

