
AI Is Not Replacing Security Researchers

AI is starting to find real vulnerabilities on its own. But every time it runs without a human in the loop, things go sideways. The future of security research is human-guided AI, not AI alone.

Every few weeks someone shares a demo of an AI finding a vulnerability in some codebase, and the replies are always the same. “Security researchers are done.” “Why would anyone pay for bug bounties when AI can just find everything?” “This changes everything.”

Here’s the thing: AI actually is starting to find real bugs autonomously. That part isn’t hype. But every time someone lets it run without human oversight, the results range from useless to actively harmful. The future isn’t AI replacing researchers. It’s researchers using AI, with a human always in the loop.

More Code, More Bugs

Before we talk about AI finding vulnerabilities, let’s talk about AI creating them.

Companies are shipping AI-generated code at a pace that would have been unthinkable two years ago. Entire features, sometimes entire services, written by LLMs and reviewed by developers who are moving too fast to catch everything. The code works. It passes the tests. It gets deployed.

And a lot of it is bad.

Not bad in the “doesn’t compile” sense. Bad in the “uses user input as a database key without sanitization” sense. Bad in the “trusts the client to enforce authorization” sense. The kind of bad that doesn’t show up in a demo but absolutely shows up in a pentest.
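To make that concrete, here's a minimal sketch of the "trusts the client to enforce authorization" pattern. The interface and handler names are invented for illustration; the point is that the privilege check reads a flag the client controls instead of deriving it server-side from the authenticated session:

```typescript
// Hypothetical handler illustrating the anti-pattern: the authorization
// decision is based on a field the client supplies in the request body.
interface DeleteUserRequest {
  userId: string;
  isAdmin: boolean; // client-supplied -- nothing stops an attacker setting this
}

function handleDeleteUser(req: DeleteUserRequest): string {
  if (!req.isAdmin) {
    return "403 Forbidden";
  }
  // Privileged action gated entirely on attacker-controllable input.
  return `deleted user ${req.userId}`;
}

// Any client can simply claim to be an admin:
handleDeleteUser({ userId: "victim", isAdmin: true }); // "deleted user victim"
```

Nothing here fails a test or a demo, because well-behaved clients never set the flag. It only falls over when someone hostile shows up.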

AI-generated code has a specific texture to it. It’s confident. It looks clean. It follows patterns that seem reasonable on the surface. But it lacks the paranoia that comes from having been burned before. It doesn’t think about what happens when someone sends __proto__ as a field name, or what a case-insensitive comparison means for request smuggling. It produces code that works for the happy path and quietly falls apart at the edges.
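The `__proto__` case is worth spelling out. Below is a sketch, with an invented `naiveMerge` helper, of the kind of confident-looking deep merge that copies user-controlled keys without filtering. Because `JSON.parse` creates `__proto__` as an ordinary own property, the merge walks into `Object.prototype` and pollutes every object in the process:

```typescript
// Naive recursive deep merge of the kind code assistants commonly emit.
// It never filters keys, so a "__proto__" key in attacker JSON reaches
// Object.prototype.
function naiveMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      // For key "__proto__", target[key] resolves to Object.prototype,
      // so the recursion writes attacker fields onto it.
      target[key] = naiveMerge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled JSON, e.g. a request body:
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
naiveMerge({}, payload);

// An unrelated, freshly created object now inherits the attacker's property:
console.log(({} as any).isAdmin); // true
```

A developer who has been burned by this adds a one-line guard rejecting `__proto__`, `constructor`, and `prototype` as keys. The model, which has never been burned by anything, does not.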

That’s not a theoretical problem. I’ve seen it in real codebases, in production, serving real users. The volume of code being pushed into production right now is enormous, and a meaningful chunk of it was written by something that doesn’t understand security in any deep sense.

For bug hunters, this is the opposite of a threat. It’s an opportunity.

AI Finds Bugs, But Needs a Human

I’m not going to pretend AI can’t find vulnerabilities. It can, and it’s getting better at it. AI-driven fuzzing, automated code review, pattern detection across massive codebases: these things work. They surface real issues that humans might miss or take weeks to find manually.

But “can find bugs” and “can replace a security researcher” are completely different statements.

What happens when you let AI run autonomously without human oversight? Low Level made a great video showing exactly that. The short version: it goes off the rails. It hallucinates vulnerabilities that don’t exist, files bogus reports, misunderstands context, chains together findings that make no sense, and wastes everyone’s time. Without someone who actually understands the target, understands the vulnerability class, and can verify whether a finding is real, autonomous AI security research produces noise, not signal.

The human in the loop isn’t a temporary limitation. It’s a fundamental requirement. Someone needs to decide what to look at, evaluate whether a finding is exploitable, understand the business impact, and communicate it in a way that gets it fixed. That’s not a mechanical task you can automate away. It requires judgment, context, and experience.

AI As a Tool, Not an Opponent

I use LLMs regularly. They’re good at summarizing large codebases, explaining unfamiliar frameworks, generating proof-of-concept scripts, and doing the tedious parts of reconnaissance faster. If you’re not using AI as part of your workflow in 2026, you’re leaving speed on the table.

The mistake people make is treating AI as a competitor instead of a tool. It's like worrying, when Burp Suite came out, that it would replace pentesters. Better tools make researchers more effective; they don't make them unnecessary.

The researchers who adapt, who use AI to move faster, cover more ground, and automate the boring parts, will have a massive advantage. The ones who ignore it will fall behind. But the ones who let AI run unsupervised and trust the output without verification are going to produce junk, or worse, miss the bugs that actually matter because they were buried under a pile of false positives.

The Real Shift

The security industry is going to get bigger because of AI, not smaller. There’s more code to audit, more attack surface to cover, more companies shipping faster than their security teams can keep up with. The demand for people who can actually find and understand vulnerabilities is going up, not down.

AI is a force multiplier. It always needs a human guiding it, and I think it always will. The future is human researchers with AI tools, not AI researchers with no humans. And honestly, given the quality of code AI is helping produce, security researchers should be thanking it for the job security.
