
Security doesn't get solved.

Not even with AI

by Björn Orri Guðmundsson, CEO

May 12, 2026

#Cybersecurity
#AI Security


There is a lot of excitement right now about AI-driven tools like Mythos. The ability to find vulnerabilities at a speed no human team could match? That’s impressive.

And I’m seeing two themes in the responses to this news:

1. Many people are worried that suddenly everything is hackable.
2. Others think that it’s going to solve all their cybersecurity woes.

Neither is true. And both need to be addressed.

Why we don’t need to worry… too much

To address the first of these concerns, we need to remember that while AI raises the baseline for attackers, it also raises it for defenders.

There was a recent case where Mozilla ran a model on Firefox and found 271 vulnerabilities. This tells us that these models are very good at going through large, complex codebases and spotting bugs that used to take experienced researchers a long time to find.

Can it help hackers find vulnerabilities to exploit? Yes. But the takeaway should be that code-level analysis of bounded systems like browsers and operating systems now happens at a speed and scale that didn’t exist before.

Mythos is not going to solve cybersecurity

To my second point, AI tools like Mythos are also not going to suddenly solve all of our cybersecurity woes. The human element is still king.

According to the 2025 Verizon Data Breach Investigations Report:

  • Credential abuse is still the number one initial access vector, present in 22% of breaches.
  • The human element is still involved in 60% of all breaches.
  • Phishing sits at around 15%.
  • Vulnerability exploitation is growing: it now stands at 20% of breaches, up 34% year over year.


Dig into the pattern of these statistics and you’ll see they’re centered on familiar attack tactics, not sophisticated technical attacks. The same attacks that have always worked are now running faster and cheaper because of AI.

The simultaneous fear and false confidence in the coverage around tools like Mythos is simply not helpful for businesses trying to navigate these new technologies.

Security shifts and will continue shifting no matter how hard we try to “solve” it.

Why, and how, AI creates false confidence

Like many, we use AI tooling for code reviews at Aftra. We started noticing a pattern where reviews always come back with the same result: one critical finding and three high-severity findings.

While some of the findings are relevant, we still often find ourselves arguing with AI about why it's wrong (and there’s a reason for it).

To confirm our observation, we resolve the relevant issues and re-run the review, and we still get the same result: one critical finding and three high-severity findings (despite pushback from people who know what they’re doing).

That’s how the tool is built.

Large language models (LLMs) are optimized to produce confident, well-structured, and plausible outputs, which means AI speaks with a confidence that makes you feel like it knows better.

It really doesn't. And only the experts see through it.

The level of confidence AI generates without a human in the loop is a security problem in itself.

To put it simply, security work runs on skepticism.

You have to assume things are broken until proven otherwise. The conclusion comes from the evidence and not from how convincing the output looks.

AI doesn’t offer that. It will always validate your ideas, reinforce your decisions, and tell you what you want to hear by design.

The tool that’s supposed to make our lives easier is also making us more vulnerable: it has a tendency to reaffirm our work, and we don’t take the extra steps to verify the validity of its output.

Here is where the numbers become hard to ignore. The human element is present in at least 60% of all breaches. The tendency to trust things that sound authoritative makes us vulnerable to cyberattacks.

It’s exactly the psychological game attackers play.

Speed without oversight is just a faster way to fail

AI can make you feel like you can do anything without limits. But speed has a cost that doesn’t show up on the dashboard.

When you have a team of ten people, it’s easier to keep up with what they are doing and notice when something looks off. The pace of ten people gives you room to catch mistakes before they become problems.

All that goes out the window with AI.

When AI agents are operating 100x or 1000x faster than that ten-person team, you’re moving at a speed that makes oversight almost impossible. The quality controls most companies have were built for humans, not for something operating at a hundred or a thousand times that pace.

Many don’t notice this until something goes wrong.

The Verizon report puts a number on what that looks like in practice. Vulnerability exploitation is up 34% year over year.

It’s not because attackers suddenly got smarter. There are simply more vulnerabilities to exploit: code is being shipped faster, with less visibility into what is going out the door.

Attackers only need one vulnerability. The faster you move without oversight, the more places that one thing has to hide.

The real threat is sitting in an employee’s browser tab

While the industry debates AI-driven zero-days, the real exposure in most companies is sitting on an employee’s work machine.

Petra Klein, Group CSO at Swedbank, recently said at a conference hosted by Syndis:

“People don’t break in, they log in.”

That really stuck with me and it’s even more relevant with the rise of AI. Someone at your company is putting sensitive data into an AI tool you didn’t approve of.

Shadow AI is the new shadow IT. But unlike shadow IT, it moves at a velocity that makes it almost impossible to contain reactively.

But banning AI was never the answer. If you’re not making a choice about how to use AI at work, you’re still making a choice. Just not a conscious one, and that’s the worse option.

You can’t run from AI.

But you can learn how to work with it and get ahead of the risks. Start setting rules and defining what is and isn’t allowed. If you don’t, you will end up with exposure nobody has mapped and risks nobody has owned.

Again, the human element is in 60% of breaches.

We can’t blame it on people being careless when the environment they’re working in is moving faster than the guardrails around it. It all boils down to the culture of the company... and that’s a leadership problem.

Language matters. It’s why the conversation never happens at the right level

All of the above persists, in part, because of the word ‘cybersecurity’.

Say "cybersecurity" to a room of executives and half of them have already switched off. It lands as a cost center or a technical problem that’s someone else’s job.

I’d even go as far as to say that the industry built this problem over decades by making security feel mysterious and inaccessible.

When security feels like a dark art accessible to only a few, it’s no wonder the C-suite also stopped asking questions. It’s easier to outsource the thinking, wait for a report, and move on.

And we wonder why security never makes it onto the agenda until something goes wrong.

Strip away the word and ask a different question. Try these:

  • What do you have that someone would want to steal, destroy, or hold hostage?
  • Where are the doors into your business? Who knows they are there?
  • How long would it take to recover from your worst realistic scenario, and what would it cost?

All of a sudden, it becomes a business conversation that many executive teams have never had.

Security shifts, and you need to shift with it

At the end of the day, security will never be a problem you fully solve.

Tools, speed of deployment, and regulations will continue to change. But the fundamentals stay the same. Attackers will always go for the easiest way in—human error.

You don’t need complex security knowledge to get started. But you do need to start asking the right questions.

Here’s a tip.

Get coffee with your most technical security person. Ask them the questions above and have them explain the answers without jargon.

That conversation is worth more than most security certifications because that’s where the real work can begin—knowing your vulnerabilities.

Frequently asked questions about Mythos and cybersecurity

Can AI tools like Mythos fully automate cybersecurity?
No, AI tools cannot fully automate cybersecurity because they lack the contextual judgment and skepticism inherent to human experts. While AI can analyze codebases at a speed humans can't match—such as finding hundreds of vulnerabilities in minutes—it often produces "confident" but incorrect results. Security still relies on the human element, which is involved in 60% of all breaches.
Why does AI-driven security software produce false positives?
AI models, specifically Large Language Models (LLMs), are designed to be helpful and plausible rather than strictly factual. They are optimized to produce well-structured, confident outputs, which can lead to "hallucinations" where the AI validates a non-existent bug or reinforces a user's incorrect assumption. Without a human in the loop, this confidence creates a new layer of security risk.
What is the biggest cybersecurity threat to businesses in 2026?
The biggest threat is not a sophisticated technical exploit, but the human element. According to the 2025 Verizon Data Breach Investigations Report, credential abuse and phishing remain the top access vectors. Additionally, "Shadow AI"—employees using unapproved AI tools—creates unmapped exposure that traditional security guardrails are not fast enough to contain. 
How has the rise of AI changed vulnerability exploitation?
Vulnerability exploitation has increased by 34% year-over-year. This growth is driven by the speed at which code is now shipped. AI allows for faster development, but often at the cost of oversight. Attackers use AI to find "doors" into a business faster and cheaper, while defenders struggle to maintain quality control at that same 100x or 1000x pace.
What is "Shadow AI" and why is it a security risk?
Shadow AI refers to the use of unauthorized or unmanaged artificial intelligence tools by employees within an organization. It is the modern equivalent of Shadow IT but moves at a much higher velocity. The risk lies in employees feeding sensitive company data into external LLMs, creating data leaks and exposure that leadership has not yet mapped or owned.
Why do executives often ignore cybersecurity risks?
Many leaders view "cybersecurity" as a technical cost center rather than a business priority. To bridge this gap, security conversations should be stripped of jargon. Instead of discussing technical debt, ask: "What do we have that someone would want to steal?" or "How long would it take to recover from our worst-case scenario?" This shifts the focus from "dark arts" to business resilience.
Is banning AI at work an effective security strategy?
Banning AI is rarely effective because it leads to "Shadow AI" where employees use the tools anyway but without any oversight. The better approach is to set clear rules and define safe AI usage policies. Since you cannot run from AI, leadership must consciously decide how to integrate it with human guardrails to manage the shifting security landscape. 

 
