The Limits of AI in Cybersecurity: Humans Remain Essential

Written by Björn Orri Guðmundsson | May 12, 2026


There is a lot of excitement right now about AI-driven tools like Mythos. The ability to find vulnerabilities at a speed no human team could match? That’s impressive.

And there are two themes in the responses I’m seeing to this news:

1. Many people are worried that suddenly everything is hackable.
2. Others think that it’s going to solve all their cybersecurity woes.

Neither is true. And both need to be addressed.

Why we don’t need to worry… too much

To address the first of these concerns, we need to remember that while AI raises the baseline for attackers, it also raises it for defenders.

There was a recent case where Mozilla ran a model on Firefox and found 271 vulnerabilities. This tells us that these models are very good at going through large, complex codebases and spotting bugs that used to take experienced researchers a long time to find.

Can it help hackers find vulnerabilities to exploit? Yes. But the takeaway should be that code-level analysis of bounded systems like browsers and operating systems now happens at a speed and scale that didn’t exist before, and that capability is available to defenders too.

Mythos is not going to solve cybersecurity

To my second point, AI tools like Mythos are also not going to suddenly solve all of our cybersecurity woes. The human element is still king.

According to the 2025 Verizon Data Breach Investigations Report:

  • Credential abuse is still the number one initial access vector, present in 22% of breaches.
  • The human element is still involved in 60% of all breaches.
  • Phishing sits at around 15%.
  • Vulnerability exploitation now stands at 20% of breaches, and that figure is up 34% year over year.


Dig into the pattern of these statistics and you’ll see they’re centered around familiar attack tactics instead of sophisticated technical attacks. The same attacks that have always worked are now running faster and cheaper because of AI.

The simultaneous fear and false confidence in the coverage around tools like Mythos is simply not helpful for businesses trying to navigate these new technologies.

Security shifts and will continue shifting no matter how hard we try to “solve” it.

Why, and how, AI creates false confidence

Like many, we use AI tooling for code reviews at Aftra. We started noticing a pattern: reviews almost always come back with the same result, one critical finding and three high-severity findings.

While some of the findings are relevant, we still often find ourselves arguing with AI about why it's wrong (and there’s a reason for it).

To confirm our observation, we resolved the issues that were relevant and re-ran the review. We still got the same result, one critical finding and three high-severity findings (despite pushback from people who know what they’re doing).

That’s how the tool is built.

Large language models (LLMs) are optimized to produce confident, well-structured, plausible output. That means AI speaks with a confidence that makes you feel like it knows better.

It really doesn't. And only the experts see through it.

The level of confidence AI generates without a human in the loop is a security problem in itself.

To put it simply, security work runs on skepticism.

You have to assume things are broken until proven otherwise. The conclusion comes from the evidence and not from how convincing the output looks.

AI doesn’t offer that. It will always validate your ideas, reinforce your decisions, and tell you what you want to hear by design.

The tool that’s supposed to make our lives easier is also making us more vulnerable, because it tends to reaffirm our work and we don’t take the extra steps to verify its output.

Here is where the numbers become hard to ignore. The human element is present in at least 60% of all breaches. The tendency to trust things that sound authoritative makes us vulnerable to cyberattacks.

It’s exactly the psychological game attackers play.

Speed without oversight is just a faster way to fail

AI can make you feel like you can do anything without limits. But speed has a cost that doesn’t show up on the dashboard.

When you have a team of ten people, it’s easier to keep up with what they are doing and notice when something looks off. The pace of ten people gives you room to catch mistakes before they become problems.

All that goes out the window with AI.

When AI agents are operating at 100x or 1000x the pace of that ten-person team, you’re moving at a speed that makes oversight almost impossible. The quality controls most companies have were built for humans, not for something working at a hundred or a thousand times that pace.

Many don’t notice this until something goes wrong.

The Verizon report puts a number on what that looks like in practice. Vulnerability exploitation is up 34% year over year.

It’s not because attackers suddenly got smarter. There are simply more vulnerabilities to exploit, with code being shipped faster and less visibility into what goes out the door.

Attackers only need one vulnerability. The faster you move without oversight, the more places that one thing has to hide.

The real threat is sitting in an employee’s browser tab

While the industry debates AI-driven zero-days, the real exposure in most companies is sitting in an employee's work machine.

Petra Klein, Group CSO at Swedbank, made a point at a recent conference hosted by Syndis that really stuck with me, and it’s even more relevant with the rise of AI: someone at your company is putting sensitive data into an AI tool you didn’t approve.

Shadow AI is the new shadow IT. But unlike shadow IT, it moves at a velocity that makes it almost impossible to contain reactively.

But banning AI was never the answer. If you’re not making a choice about how to use AI at work, you’re still making a choice. Just not a conscious one, and that’s the worse kind.

You can’t run from AI.

But you can learn how to work with it and get ahead of the risks. Start setting rules and defining what is and isn’t allowed. If you don’t, you will end up with exposure nobody has mapped and risks nobody has owned.

Again, the human element is in 60% of breaches.

We can’t blame it on people being careless when the environment they’re working in is moving faster than the guardrails around it. It all boils down to the culture of the company... and that’s a leadership problem.

Language matters: it’s why the conversation never happens at the right level

All of the above persists, in part, because of the word ‘cybersecurity’.

Say "cybersecurity" to a room of executives and half of them have already switched off. It lands as a cost centre or a technical problem that’s someone else’s job.

I’d even go as far as to say that the industry built this problem over decades by making security feel mysterious and inaccessible.

When security feels like the dark arts, accessible to only a few, it’s no wonder the C-suite stopped asking questions. It’s easier to outsource the thinking, wait for a report, and move on.

And we wonder why security never makes it onto the agenda until something goes wrong.

Strip away the word and ask a different question. Try these:

  • What do you have that someone would want to steal, destroy, or hold hostage?
  • Where are the doors into your business? Who knows they are there?
  • How long would it take to recover from your worst realistic scenario, and what would it cost?

All of a sudden, it starts being a business conversation that many executive teams have never had.

Security shifts, and you need to shift with it

At the end of the day, security will never be a problem you fully solve.

Tools, speed of deployment, and regulations will continue to change. But the fundamentals stay the same. Attackers will always go for the easiest way in—human error.

You don’t need complex security knowledge to get started. But you do need to start asking the right questions.

Here’s a tip.

Get coffee with your most technical security person. Ask them the above questions and make them explain it without jargon.

That conversation is worth more than most security certifications because that’s where the real work can begin—knowing your vulnerabilities.