To clarify, this is not an argument against quality. Quality needs to be baked into software. The discussion here is about how the way most teams pursue quality makes them slower and less secure at the same time.
There is a version of software quality that sounds responsible on paper but quietly creates the problems it claims to prevent—quality assurance (QA). If you are building a SaaS product that needs to move fast, it might be the biggest obstacle in your pipeline.
The most visible problem with manual QA is that it slows you down.
Manual testing requires environments set up specifically for it, and those setups never really reflect how the software behaves in the real world.
When you build a test setup designed for a non-technical person to run through a deeply technical product, you usually end up constructing a stage set instead of reproducing real conditions.
Don’t get me wrong. Stage sets are great for simulating the “what-ifs” that can happen. But it also means that much of what gets caught there are bugs “on stage,” not the real issues that appear after the software has gone to production.
That’s not even the biggest problem with QA, though. Automation can solve some of the issues with the way we’re doing it now. But I would like to shine a light on a slower, harder-to-see problem.
Culture.
When a developer knows there is a QA engineer downstream, they tend to ship with less care. They figure QA will catch it if something breaks. Over time, that assumption quietly erodes the instincts good engineering depends on. You end up with developers who have learned, structurally, not to think too hard about what they are shipping.
Remove the QA, and that changes.
Without QA in the pipeline, developers have to think about how the software is used. They need to consider regression issues, testability, and users before they start building. This constraint (of not having QA) produces better engineers and better software downstream.
Which raises the obvious question: if not a QA engineer, then what?
While manual testing has its place in the process, there are strong arguments for automated testing. Often, it’s not even an either-or situation. Both can coexist, depending on what you’re building and shipping.
Automated testing is faster than manual QA, less expensive, doesn’t require specially designed test environments, and doesn’t get tripped up by the infrastructure you already have.
But the best part of it?
Unlike a person doing manual work, it scales with the product rather than against it.
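To make that concrete, here is a minimal sketch of what one such check might look like, using pytest and requests against a hypothetical signup endpoint. The host, routes, and response fields are illustrative assumptions, not a real API.

```python
# Minimal regression-test sketch with pytest + requests.
# BASE_URL, the /api/signup route, and the response fields are
# hypothetical placeholders -- substitute your own service's contract.
import requests

BASE_URL = "https://staging.example.com"  # assumed staging host


def test_signup_returns_created_user():
    # Exercise the real API surface instead of a hand-built stage set.
    resp = requests.post(
        f"{BASE_URL}/api/signup",
        json={"email": "test@example.com", "password": "s3cret-pass"},
        timeout=10,
    )
    assert resp.status_code == 201
    body = resp.json()
    assert body["email"] == "test@example.com"
    assert "id" in body  # regression guard: the contract promises an id


def test_duplicate_signup_is_rejected():
    # The kind of edge case a manual run-through tends to skip
    # by the second, third, and hundredth release.
    payload = {"email": "dupe@example.com", "password": "s3cret-pass"}
    requests.post(f"{BASE_URL}/api/signup", json=payload, timeout=10)
    resp = requests.post(f"{BASE_URL}/api/signup", json=payload, timeout=10)
    assert resp.status_code == 409
```

Once a check like this is written, it runs on every deploy at no extra cost. A manual tester has to walk the same path by hand each cycle, which is exactly why one scales and the other does not.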
For SaaS companies with rapid deployment cycles, there is no version of manual QA that fits how they need to work. The math does not hold, and the culture it creates pulls in the wrong direction.
However, there is one failure mode that automated testing alone does not fix. And it is the one that most QA models, manual or otherwise, quietly ignore—security.
Unless you are operating in a highly regulated industry like banking, your QA team is almost certainly not testing for security vulnerabilities. They are usually neither trained nor equipped for it, so an entire category of product risk goes unexamined every cycle.
Automated vulnerability scanning fills that gap in a way that a QA function never could.
Think of it as static code analysis for your attack surface: continuous, systematic, and not dependent on someone remembering to schedule it.
Secret scanning catches what developers accidentally expose, like credentials committed to a file that ends up publicly accessible. Web application scanning finds what attackers would eventually discover anyway, with the goal of finding it first.
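To give a feel for the first kind, here is a toy version of a secret scanner. Production tools like gitleaks or TruffleHog also dig through git history and ship far richer rule sets; the patterns below are simplified assumptions for illustration.

```python
# Toy secret scanner: walk a directory tree and flag strings that
# look like credentials. The regexes are deliberately simplified;
# real scanners use hundreds of rules and also inspect git history.
import re
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic API key": re.compile(r"""(?i)api[_-]?key\s*[:=]\s*['"][A-Za-z0-9]{20,}['"]"""),
}


def scan(root: Path) -> int:
    findings = 0
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line_no}: possible {label}")
                findings += 1
    return findings


if __name__ == "__main__":
    # Exit non-zero so CI can fail the build when something slips in.
    sys.exit(1 if scan(Path(sys.argv[1] if len(sys.argv) > 1 else ".")) else 0)
```

Wired into CI, a check like this runs on every commit, which is the whole point: nobody has to remember to schedule it.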
Neither of these is a guardrail in the traditional sense.
A guardrail stops you before you go over the edge. The more honest analogy is a bungee cord. You can still jump. The point is that someone is watching and the response is fast.
That framing matters more now than it did a few years ago, because the thing pushing more people toward the edge has changed significantly.
AI-assisted coding is accelerating the production of software by people who do not fully understand what they are producing (hello, vibe coders). The consequences for security and quality are not yet visible at scale. But the conditions for them are already in place.
The problem with AI-generated code isn’t just that it’s often bad. While bad code isn’t ideal, the bigger problem is that the agent is very good at convincing you it is going in the right direction even when it is not.
A large language model will give you well-reasoned arguments for architectural decisions that are quietly wrong. If you do not have the expertise to evaluate those arguments, you have no way to redirect it.
At that point, you’re not writing software anymore. You’re approving it without reading it.
Architectural decisions are where this becomes serious.
Bad code is part and parcel of becoming good at software development. We all started by being bad at something before becoming good at it.
But bad code and a bad architectural decision are two completely different things.
A bad line of code is fixable. A bad architectural decision can compromise an entire system and work against you for a long time. The AI agent making that decision doesn't know the difference between good and bad decisions. Neither does the person who cannot fully understand what they are producing.
Many companies are now shipping AI-assisted review bots, code-review integrations, and progressively more capable agents, and those tools are genuinely improving.
But they carry the same core failure mode. Once an AI agent commits to a direction, it stays committed. You can push back, but it keeps going. Like a horse with blinders, it only sees the path it is already on.
The industry is aware of this at some level. The problem is that the loudest responses to it are pulling in opposite directions.
Many organizations seem to be operating at one of two extremes.
For example, one group is shipping code without testing, validation, or any real understanding of what is being produced. They are the visible face of vibe coding, and they have done a good job of making it look like the future of software development.
On the other side, some of the most respected technical voices in the industry want nothing to do with AI tools at all. They neither trust them nor see the need for them, and they make that clear publicly. While that skepticism is earned, it is also a form of the same all-or-nothing thinking.
Now, there is a middle ground between the two.
Use the tools where they help. Do not use them in places that require expertise you do not have. And you don’t have to go all in, because the earliest adopters aren’t always the winners.
There is a useful analogy here with lions.
When a lion is hunting your group, you do not need to outrun the lion. You only need to outrun the person next to you. You often end up with three groups of people in this scenario: the ones sprinting flat out at the front, the ones standing still insisting the lion is not real, and the ones pacing themselves in the middle of the pack.
AI adoption works the same way. You do not need to be at the frontier or avoid AI completely. You just need to pace yourself.
It’s a marathon, not a sprint.
AI-assisted development is not going away, and the teams leaning into it are not wrong to do so.
But too many people have skipped the verification step, where someone who understands the code confirms that what’s being built is what was intended. There’s a huge difference between catching a bad architectural decision early and catching a bug after the fact.
The risk has been quietly building up across codebases for a while now. All those shortcuts, with their shortcomings, will catch up with us at some point. The only question is whether the industry gets ahead of it or gets caught off guard.
The teams that will be in the best position are already doing the same things good engineering has always required: reviewing what they ship, testing it automatically, scanning for the vulnerabilities QA never looked at, and verifying that what was built is what was intended.
None of that is new. Good engineers have been doing it for decades.
The difference now is that the tools make those steps easy to skip, and they do it confidently enough that you never feel like you are skipping anything.