
Your APIs are leaking secrets: what you don't realize you're exposing

November 3, 2025

Right now, buried somewhere in your company's code, there could be a JavaScript file broadcasting your entire API architecture to anyone who knows where to look. 


It lists every endpoint, every administrative function, and sometimes even the authentication tokens—essentially master keys—that protect them.

Your developers put it there. Not maliciously. They were just doing their jobs, building features quickly and shipping them to customers. But that single file just handed attackers a complete map of your system.

And this is API security in 2025. 

The biggest vulnerabilities aren't sophisticated zero-day exploits. They're the everyday development practices no one thinks twice about.

The uncomfortable truth about API security:

  • Your JavaScript files reveal your entire API structure to anyone who looks
  • Every third-party code library you rely on is maintained by strangers you've never met
  • "Deprecated" APIs are often still running in production, fully vulnerable
  • Automated security tools miss the chain attacks that cause real damage

We sat down with Pavel Chen, a security engineer at Syndis, who has seen this problem from both sides. After 13 years as a software developer, he made the jump to offensive security, where he now conducts penetration tests and application assessments. 

His unique perspective reveals uncomfortable truths about how we build and secure APIs, and why most companies are getting it wrong.


Why this perspective matters

Pavel's experience as a web developer before transitioning to security engineering gave him a rare dual perspective in cybersecurity: he knows both why developers make certain choices and how those choices create vulnerabilities.

"Coming from the other side of the tunnel, I know how developers think," he says. "You know where the shortcuts are buried."


Those shortcuts are exactly where vulnerabilities hide. And Pavel knows where to look.

What your JavaScript files are broadcasting to the world

Picture this scenario: your frontend developers bundle all their code into a single minified JavaScript file. It's standard practice. Nothing suspicious. They ship it to production and move on to the next feature.

That file just handed attackers a complete map of your entire API architecture.

Pavel explains that one of his first steps when targeting a web application is examining the JavaScript files, even the minified ones. These files often reveal the full picture of the API: which endpoints exist and how many there are. 

"Sometimes they reveal just everything in terms of what API endpoints are present, how many there are. Developers occasionally separate files into three categories: one for administrators, another for managers, and a third for regular users. But normally it's bundled as a single file and everything is there."


The kicker? 

On numerous occasions, hard-coded authentication tokens are embedded directly in the code. These are essentially master keys that grant access to your system. 

This isn't theoretical. Pavel encounters this regularly in both penetration tests and bug bounty programs (where ethical hackers are paid to find vulnerabilities). 

While you're busy securing your login pages, your own code is telling attackers exactly where to look and what's worth targeting.
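
Curious what your own bundles give away? A quick pass with a script is often enough to find out. Here is a minimal sketch in TypeScript (Node 18+); the bundle path, the endpoint pattern, and the token heuristics are illustrative assumptions, not a complete ruleset:

  // scan-bundle.ts: rough sketch that lists API-style paths and token-like strings in a bundle
  import { readFileSync } from "node:fs";

  const bundle = readFileSync("dist/bundle.js", "utf8"); // hypothetical bundle location

  // Naive pattern for API paths quoted somewhere in the bundle
  const endpointPattern = /["'`](\/api\/[A-Za-z0-9\/_{}.-]+)["'`]/g;
  const endpoints = new Set<string>();
  for (const match of bundle.matchAll(endpointPattern)) {
    endpoints.add(match[1]);
  }

  // Very rough heuristics for embedded credentials: JWT-shaped strings and "Bearer ..." literals
  const tokenPatterns = [
    /eyJ[\w-]{10,}\.[\w-]{10,}\.[\w-]{10,}/g, // three base64url segments, the shape of a JWT
    /Bearer\s+[\w.~+\/-]{20,}/g,
  ];
  const suspects = tokenPatterns.flatMap((p) => [...bundle.matchAll(p)].map((m) => m[0]));

  console.log(`${endpoints.size} endpoint-like paths found:`);
  [...endpoints].sort().forEach((e) => console.log("  " + e));
  console.log(`${suspects.length} token-like strings found; review them by hand.`);

Run it against every bundle you actually ship. Anything it prints is information an attacker can read without ever logging in.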

Your developers are making strangers part of your infrastructure

The exposed API endpoints in your JavaScript files are just the beginning. There's a deeper problem with how modern software gets built.

Here's something that might surprise you: the code your developers write is often the smallest part of your application. The rest? Dependencies, libraries, and frameworks. These are pre-written code packages created by people you've never met.

When a developer finds a Node Package Manager (NPM) package or a Python library that solves their problem, they're making an architectural decision with security implications that most companies never evaluate. 

That obscure GitHub repository with barely any users? 

It's now running in your production environment, with access to your databases, APIs, and customer data.

Pavel's background as a developer gives him insight into why this happens:

"Developers think way differently than security engineers. Their motivation is to get value out to the customer quickly. Maybe security is not always the main priority."


The attack vectors are everywhere: compromised packages, hijacked maintainer accounts, deprecated libraries still running in production. Even well-intentioned open-source maintainers can introduce vulnerabilities without realizing it.
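
Getting visibility starts with simply counting what you run. Below is a minimal sketch, assuming an npm project with a lockfileVersion 2 or 3 package-lock.json in the working directory; other ecosystems have equivalent manifests, only the format differs:

  // list-deps.ts: rough sketch of how much of your application is code you didn't write
  import { readFileSync } from "node:fs";

  type Lockfile = {
    packages?: Record<string, { version?: string; dev?: boolean }>;
  };

  const lock: Lockfile = JSON.parse(readFileSync("package-lock.json", "utf8"));
  // In lockfile v2/v3 the root project is keyed by "", everything else is a dependency
  const deps = Object.entries(lock.packages ?? {}).filter(([path]) => path !== "");

  const prod = deps.filter(([, meta]) => !meta.dev);
  console.log(`Installed packages: ${deps.length} (${prod.length} shipped to production)`);

  // Every entry below is maintained by someone outside your organization
  for (const [path, meta] of prod) {
    console.log(`${path.replace(/^node_modules\//, "")}@${meta.version ?? "?"}`);
  }

For most teams the first surprise is the sheer count. The second is how many of the names nobody recognizes.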

The monitoring server that exposed everything

These problems are prevalent and often go unnoticed. Pavel shared a story that shows just how easily this happens.

The target seemed straightforward: a monitoring server protected by basic username and password authentication, with no obvious way in.

Pavel's team started with a technique called fuzzing to systematically test for hidden directories and files that shouldn't be accessible. Within hours, they discovered a secondary application that was completely undocumented and also featured an API.

And that API? No authentication required.

One endpoint returned a complete list of users with their passwords—unencrypted. 

This wasn't a sophisticated attack. It was basic reconnaissance that revealed an API the company apparently didn't know they were running. A shadow API that exposed everything.

Shadow APIs: the doors you don't know are open

Shadow APIs are endpoints that exist but aren't officially documented or properly secured. Think of them as doors to your system. Sometimes companies don't even know they're there.

These hidden vulnerabilities emerge in several distinct ways:

Legacy versions are still running

"Sometimes companies overlook security aspects of all the versions of APIs and don't apply security patches because maybe the API is used by one or two remaining customers," Pavel notes.


The old version remains exposed for a handful of clients who have not yet migrated. It's technically usable, fully accessible from the internet, and completely vulnerable. 

But attackers don't need to be one of those two customers. They just need to find the endpoint.
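
Checking whether your own legacy versions still answer takes minutes. A minimal sketch (Node 18+, built-in fetch), assuming a hypothetical host and a /v1, /v2-style path scheme; substitute whatever versioning convention your API actually uses:

  // probe-versions.ts: rough sketch to see which API versions are still reachable
  const BASE = "https://api.example.com";   // hypothetical host you are authorized to test
  const SAMPLE_PATH = "/users/me";          // hypothetical endpoint that exists in every version

  async function probe(version: string): Promise<void> {
    const res = await fetch(`${BASE}/${version}${SAMPLE_PATH}`);
    // 404 usually means the version is gone; anything else means it is still wired up
    const note = res.status === 404 ? "" : "  <-- still reachable";
    console.log(`${version}: HTTP ${res.status}${note}`);
  }

  async function main() {
    for (const version of ["v1", "v2", "v3"]) {
      await probe(version);
    }
  }
  main();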

Version downgrades that reopen old vulnerabilities

Some systems still support old API versions for compatibility reasons. Pavel recently heard about ethical hackers who tricked a system into using an older, less secure version, which allowed them to take over user accounts.

It's like putting a new, secure lock on your front door while the old lock from five years ago still works, and its key is still posted online.

Undocumented endpoints discovered through testing

These represent functionality that may have been intended for internal use only, but remains accessible to anyone who knows the URL. They're often discovered through fuzzing, leaked documentation, or even by simply guessing common patterns.
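
Tools like ffuf and dirsearch do this at scale with large wordlists, but the core idea fits in a few lines. A minimal sketch (Node 18+), assuming a hypothetical target you are authorized to test and a deliberately tiny inline wordlist:

  // fuzz-paths.ts: rough sketch of directory and endpoint fuzzing
  const TARGET = "https://app.example.com"; // hypothetical target; only test systems you own or have permission for
  const WORDLIST = ["admin", "api", "api/v1", "debug", "internal", "metrics", "swagger.json"];

  async function check(path: string): Promise<void> {
    const res = await fetch(`${TARGET}/${path}`, { redirect: "manual" });
    // 404s are noise; 200, 301, 401, 403 and friends all hint that something lives there
    if (res.status !== 404) {
      console.log(`/${path} -> HTTP ${res.status}`);
    }
  }

  async function main() {
    for (const path of WORDLIST) {
      await check(path); // sequential on purpose: be gentle with the target
    }
  }
  main();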

How a simple oversight exposed everything

Here's a real-life story that perfectly illustrates how easily security can unravel.

A company was using an outdated monitoring application. The login page displayed a distinctive logo and version number, which Pavel used to search for the specific application name and version. The first result? 

A code repository with a prominent warning: "This is deprecated. We will not be shipping any security fixes for it ever again."

The company was running software that was publicly known to be insecure and unsupported. But it gets worse.

The repository included amazingly detailed API documentation. The API had endpoints for everything, including changing passwords, issuing new two-factor authentication tokens, and modifying user permissions.

"Even if the application itself had username, password, and two-factor authentication, the API was capable of breaking it," Pavel explains.

This wasn't discovered through sophisticated hacking. It was found by reviewing publicly available documentation for software that the company shouldn't have been running in the first place.

Why treating public and internal APIs differently is a mistake

Many companies draw sharp distinctions between their public APIs (designed for third-party developers) and internal APIs (used by their own applications). They invest heavily in securing the public ones while treating internal APIs more casually.

Pavel pushes back on this thinking:

"Companies should treat all APIs equally. If you talk about a public API, it's an API just without authentication, but it probably belongs to the same ecosystem."

The input coming into your system through a public API isn't fundamentally different from what arrives through internal channels. 

Common attacks such as SQL injection (where attackers manipulate database queries to steal information) work the same way regardless of whether the attacker is calling a public or private endpoint. 

The distinction matters for who can attempt the attack, not whether the attack will succeed.

"Users, clients—it doesn't matter. They're making some unauthenticated calls, but it doesn't mean that input into the system is in nature much different from what they would be getting from within their internal network."

The automated security tools that miss the point

The cybersecurity market is flooded with solutions promising automated API security scanning. Some deliver value. Many are what Pavel diplomatically calls "snake oil."

"There are systems which just do automated fuzzing, automated scanning, which are kind of basic, which developers or security engineers could do on their own within a short timeframe," he explains. "Sometimes that's being sold as a sophisticated security solution."

The limitation isn't the technology. It's what automated tools fundamentally can't do. They identify individual vulnerabilities but miss the context that turns three minor issues into a critical breach.

"Penetration tests would cover edge cases, chain attacks, where something leads to something that leads to account takeover," Pavel notes. "A vulnerability scanner would find some known vulnerabilities. It would help identify where probabilities possibly can be. But it requires context, experience, and skill to correlate these."

This is where human expertise remains irreplaceable. 

A scanner might flag an outdated Apache server, some exposed API endpoints, and a minor authentication weakness. A skilled penetration tester recognizes how to chain those three findings into a complete system compromise.

Think of it this way: automated tools can identify that your front door lock is pickable, your window latch is broken, and your alarm code is weak. A penetration tester realizes that someone could break the window latch silently, crawl inside, and disable the alarm before it triggers—because they understand how the pieces connect.

The best approach? A combination of a dynamic vulnerability management tool and regular penetration tests.

Why AI won't replace security engineers (but it's changing the game)

Artificial intelligence is already making an impact in cybersecurity. 

Pavel notes that certain groups in bug bounty programs are "successfully employing AI and finding zero days"—vulnerabilities that no one else knows about yet, with no existing patches.

This represents a significant shift. 

Discovering zero-day vulnerabilities traditionally required deep expertise, creativity, and significant time investment. AI is accelerating that process, making sophisticated vulnerability discovery more accessible.

But AI complements rather than replaces human judgment. 

"These are groups of people, not single ethical hackers," Pavel emphasizes. 

AI is a tool in the hands of skilled practitioners, not a replacement for them. The implications cut both ways. If ethical hackers are using AI to find vulnerabilities faster, so are malicious actors. 

The window between vulnerability discovery and exploitation is shrinking.

The relationship nobody talks about

There is an inherent tension between developers shipping features and security engineers breaking things. But Pavel reports that this relationship is generally more collaborative than antagonistic.

"When I was in web development and vulnerabilities were identified in projects, we were not blaming or being mad at the security engineer," he recalls. "We would normally just take it and fix it and hope that the impact wasn't that bad, that the vulnerability wasn't known prior to them identifying it."

From his current position as a security engineer, Pavel has never received negative responses to his findings. Both sides recognize they're on the same team.

But there's still a fundamental difference in mindset that creates the vulnerabilities in the first place. 

Pavel's developer background gives him a crucial advantage: he understands the pressures, deadlines, and thought processes that lead to security shortcuts. He knows developers aren't being careless. They're being human.

Your API security action plan (start here)

Most companies approaching API security are starting from a position of incomplete information. They don't know what APIs they're running, who's maintaining the dependencies those APIs rely on, or what attack surface they're exposing.

Pavel emphasizes that attack surface mapping is "one of the best ways to enter" the security journey. 


"Once you get that feeling of what your external surface is—including what API endpoints are there—then you can take further steps."

You can't secure what you don't know exists. 

Before investing in sophisticated security tools or hiring penetration testers, establish basic visibility:

  1. Inventory your APIs. Public, internal, documented, undocumented, current versions, legacy versions. All of them. If it accepts requests from outside your immediate control, it should be included on this list.

  2. Map your dependencies. What third-party code is your application running? What packages are you using? Who maintains them? Are they still actively supported?

  3. Identify your shadow IT. What tools and libraries are developers using that haven't gone through a security review? Check those JavaScript files.

  4. Document your authentication mechanisms. Who has access to what? How is access controlled? Are there admin endpoints that shouldn't be publicly accessible?

This foundational work enables every other security investment. Without it, you're buying tools to protect assets you can't identify and monitoring for threats you can't contextualize.

The security rule developers keep breaking: keep it simple

When asked for general guidance on API security, Pavel returns to a fundamental principle that applies far beyond APIs: keep it simple, stupid.

"API security isn't some new thing. Authorization and authentication? These are solved problems. Don't reinvent the wheel. Don't create your own mechanism for verifying JWT tokens or something that has been in place for years. Just make it simple and maintainable."

This wisdom contradicts a common developer impulse: building custom solutions. The home-grown authentication system. The proprietary encryption algorithm. The novel approach to access control.

These custom solutions consistently introduce more vulnerabilities than they solve. 

Proven, well-maintained, and widely used security libraries have been scrutinized by thousands of developers and targeted by countless hackers. Your team, and no one else, has reviewed your custom implementation.

The truth is that the most secure code is often the most boring code. Standard libraries. Established patterns. Conventional approaches that have survived years of scrutiny.
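
In practice, boring looks like a handful of lines calling a library that thousands of people have already tried to break. A minimal sketch, assuming an Express-style middleware and the widely used jsonwebtoken package; key management and algorithm choice are simplified for illustration:

  // auth.ts: rough sketch, verifying JWTs with an established library instead of a homegrown parser
  import jwt from "jsonwebtoken";
  import type { Request, Response, NextFunction } from "express";

  const PUBLIC_KEY = process.env.JWT_PUBLIC_KEY ?? ""; // assumption: RS256 key pair managed elsewhere

  export function requireAuth(req: Request, res: Response, next: NextFunction) {
    const header = req.headers.authorization ?? "";
    const token = header.startsWith("Bearer ") ? header.slice("Bearer ".length) : null;
    if (!token) {
      return res.status(401).json({ error: "missing token" });
    }
    try {
      // The library handles the hard parts: signature check, expiry, allowed algorithms
      const claims = jwt.verify(token, PUBLIC_KEY, { algorithms: ["RS256"] });
      (req as Request & { user?: unknown }).user = claims;
      return next();
    } catch {
      return res.status(401).json({ error: "invalid or expired token" });
    }
  }

Nothing here is novel, which is exactly the point.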

What you can do right now

API security isn't a problem you solve once with a single tool or assessment. It's an ongoing process that requires technical measures, organizational commitment, and cultural change.

  1. Start with visibility. You can't protect APIs you don't know exist. External attack surface management tools provide that foundational awareness.

  2. Implement basic hygiene. Keep dependencies updated. Use established authentication libraries. Follow OWASP API Security Top 10 guidelines (a widely-recognized list of the most critical API security risks).

  3. Don't rush updates. Unless a package fixes a critical vulnerability, consider waiting 24-48 hours before updating. If it's malicious, the security community usually catches it quickly. This gives you the benefit of crowd-sourced vetting.

  4. Combine automated and manual testing. Scanners identify known vulnerabilities. Penetration testers identify chain attacks and edge cases that automated tools often miss. You need both.

  5. Integrate security into the development process. Not a separate phase. Not a final gate. An integrated consideration from initial design through deployment.

  6. When security researchers report vulnerabilities, listen. Pavel is clear on this: "Everyone has vulnerabilities and everyone has more vulnerabilities than they know of." Researchers who reach out are helping you, not attacking you. Thank them.

Why CEOs are now personally liable for API security failures

New regulations are making security personal for executives. 

Under directives like NIS2 (Network and Information Security Directive), leadership in critical sectors can be held personally liable for cybersecurity failures. Not just the company. The individuals. And this includes API security.

Recent history backs this up: leaders of companies hit by major breaches have been forced to step down. The price of inadequate API security isn't just about money or reputation. It can cost key people their careers.

However, most executives still lack a comprehensive understanding of cybersecurity in general, let alone API security specifically. 

They approach it in a traditional vendor management manner, seeking contracts and service level agreements (SLAs) for open-source dependencies maintained by volunteers in their spare time.

This gap between executive understanding and technical reality is dangerous. The decisions being made in your development teams right now have business implications that leadership needs to understand. Having proper API security visibility means leadership no longer needs to have "blind trust" in their IT and security teams. 

The reality check you need

Your APIs are exposing more than you think:

  • Those JavaScript files you're shipping? They contain roadmaps for attackers. 
  • Those helpful code libraries your developers are using? They're maintained by strangers who could disappear at any time. 
  • Those old API endpoints you deprecated? They're still running in production.

This isn't fear-mongering. It's reality.

The good news? 

Now that you know what you're up against, you can do something about it. The cost of preparation is always less than the cost of being caught unprepared.

Take the first step: start with an inventory of your APIs—all of them, including the ones you think no one uses anymore. That single action will reveal more about your security posture than any expensive tool.

And if you want, Aftra can help you do this.


This article is based on a podcast episode with Pavel Chen from the "Hack and Tell" podcast series.

Watch the full episode on YouTube, or listen on Spotify.
