Over the past year, AI-generated code has moved from novelty to normal. Developers are shipping faster, prototyping faster, refactoring faster… sometimes without fully understanding what they just merged. From the outside, it looks like a productivity renaissance. From the inside, it feels like something else: a new kind of operational risk that doesn’t behave like the old kind.
Here’s the uncomfortable truth: AI is improving software quality in one dimension (syntactic correctness) while quietly degrading it in another (security posture) by generating code that works but carries vulnerabilities. Code compiles. Tests pass. Features ship. And security debt accumulates.
That’s not because AI is “bad at security.” It’s because the incentives are misaligned. These models optimize for usefulness and plausibility. Security is rarely the thing a developer explicitly asks for, and it’s almost never what an LLM is implicitly rewarded to provide. The result is predictable: the code works, but it often works without the guardrails that keep it from becoming a liability.
The Paradox of AI-generated Code
When people ask whether AI-generated code is a major problem or an overhyped risk, I tell them it’s neither… and both. The important shift isn’t that AI writes insecure code. Humans have always done that. The shift is that AI turns insecurity into a throughput problem.
If a team can produce ten times more code, and their ability to review, test, threat-model, and remediate doesn’t scale with it, then they aren’t moving faster; they’re manufacturing security debt. That debt compounds the same way financial debt does: interest accrues in the form of latent vulnerabilities, expanding attack surfaces, and rising remediation costs.
In practice, the failure modes show up in a few recurring patterns.
First, there’s insecure reproduction. LLMs learn from massive datasets that include both good and bad examples, and they regularly reproduce familiar vulnerability patterns—SQL injection, cross-site scripting, insecure direct object references, hard-coded secrets—especially when developers are moving quickly and prompting loosely.
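The injection pattern is easiest to see in miniature. Below is a minimal Python sketch (using an in-memory SQLite database purely for illustration) contrasting the string-interpolated query an assistant often drafts with the parameterized form it should have written:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # the classic injection pattern models reproduce from training data.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Safe: a parameterized query; the driver treats the input as a literal.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

# The injected input dumps every row through the unsafe path...
print(find_user_unsafe("x' OR '1'='1"))  # [(1,)]
# ...while the parameterized query matches nothing.
print(find_user_safe("x' OR '1'='1"))    # []
```

Both functions compile and pass a naive happy-path test, which is exactly why the difference is easy to miss in review.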
Second, there’s missing context. Secure code isn’t just about correct syntax; it’s about correct intent in a specific system. LLMs don’t know your authorization model, your tenant boundaries, your data classification, or the ways your services interact under real-world conditions. So they omit the “boring” controls that matter most: input validation, access checks, secure defaults, safe configuration, and the subtle—but critical—differences between dev, staging, and production.
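Missing context can be sketched the same way. The example below is hypothetical (the `Document` store and ownership model are invented for illustration): the first accessor is the kind of syntactically perfect, check-free code a model tends to emit; the second adds the ownership check that only someone who knows the system's authorization model could supply:

```python
from dataclasses import dataclass

@dataclass
class Document:
    id: int
    owner_id: int
    body: str

# Hypothetical in-memory store standing in for a real database.
DOCS = {
    1: Document(1, owner_id=42, body="alice's notes"),
    2: Document(2, owner_id=7, body="bob's notes"),
}

def get_document_generated(doc_id: int) -> Document:
    # What a model typically drafts: functionally complete, and missing
    # the authorization check entirely (an insecure direct object reference).
    return DOCS[doc_id]

def get_document_hardened(doc_id: int, caller_id: int) -> Document:
    doc = DOCS[doc_id]
    # The "boring" control the model cannot infer from the prompt:
    # does this caller actually own the resource?
    if doc.owner_id != caller_id:
        raise PermissionError("caller does not own this document")
    return doc
```

The generated version isn't wrong in any way a compiler or unit test would notice; it's wrong relative to a tenancy model the LLM never saw.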
Third, there’s what I call the comprehension gap. Under deadline pressure, developers accept working code they don’t fully understand. That’s where risk enters quietly. Over time, teams get less practiced at spotting issues manually because the machine is doing the drafting and the human is doing the approving. This is the real “vibe coding” risk: not maliciousness, but misplaced trust.
Finally, AI accelerates the software supply chain problem. It can suggest outdated packages, vulnerable libraries, and occasionally dependencies that don’t even exist. “Imaginary” dependencies might sound like a funny hallucination until you remember typosquatting—attackers creating malicious packages with nearly identical names. When you combine rapid AI-driven development with dependency sprawl, you expand the threat surface faster than most organizations can track.
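One pragmatic mitigation is to treat any dependency name a model proposes as untrusted until checked. The sketch below (the allowlist and similarity threshold are illustrative assumptions, not a real policy) flags names that are either suspiciously close to a known package or not known at all:

```python
import difflib

# Hypothetical allowlist of packages your organization actually uses.
KNOWN_PACKAGES = {"requests", "numpy", "cryptography", "urllib3"}

def flag_suspect_dependencies(requested: list[str], threshold: float = 0.85):
    """Flag names that are almost a known package (possible typosquat)
    or entirely unknown (possible hallucinated dependency)."""
    suspects = []
    for name in requested:
        if name in KNOWN_PACKAGES:
            continue
        # get_close_matches uses difflib's similarity ratio under the hood.
        close = difflib.get_close_matches(name, KNOWN_PACKAGES, n=1, cutoff=threshold)
        if close:
            suspects.append((name, f"near-miss of '{close[0]}' (typosquat?)"))
        else:
            suspects.append((name, "unknown package (hallucinated?)"))
    return suspects

print(flag_suspect_dependencies(["requests", "requets", "totally-made-up"]))
```

A real control would check against a curated internal registry rather than a hard-coded set, but the shape is the same: verify first, install second.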
None of this means you should ban AI coding tools. It means you should stop pretending this is the same game we were playing five years ago. We are going to need to harness AI, perhaps in the form of coding agents, to perform vulnerability detection and to automate as much of the remediation process as we can. That is why we built Veracode Fix.
The End of “Shift Left” as We Knew It
For more than a decade, application security has been guided by “shift left”—move security earlier, catch issues sooner, reduce cost of remediation. Directionally, that was right. Operationally, it was incomplete.
We shifted findings left. We didn’t shift accountability, capacity, or automation left.
AI makes that gap impossible to ignore. When the rate of code production increases, the only sustainable path is security that scales with developer throughput. That requires a different SDLC model—one that looks less like a sequence of steps and more like a continuous control system.
In five years, the secure development lifecycle will look like this:
Security policies won’t be PDF guidance that nobody reads; they’ll be executable rules enforced in pipelines and in agent workflows. Approved patterns—authentication flows, input handling, logging, dependency policies—will be paved roads, not tribal knowledge. And remediation will become increasingly inline: not a ticket in a backlog, but a fix proposed and validated as part of the developer’s normal workflow.
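As a toy illustration of what "executable rules" can mean in practice, here is a hypothetical pipeline gate (the pinning rule and requirements-file format are assumptions chosen for the example) that turns a written dependency policy into a check the build either passes or fails:

```python
import re

# Hypothetical executable policy: every dependency must be pinned to an
# exact version. A CI step would exit nonzero on any violation.
PIN_RE = re.compile(r"^[A-Za-z0-9_.\-]+==[\w.\-]+$")

def check_pinning_policy(requirements_text: str) -> list[str]:
    """Return the lines that violate the exact-pin policy."""
    violations = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        if not PIN_RE.match(line):
            violations.append(line)
    return violations

reqs = """\
requests==2.32.3
# comment
numpy>=1.26
urllib3
"""
# In CI this result would gate the merge, not land in a backlog.
print(check_pinning_policy(reqs))  # ['numpy>=1.26', 'urllib3']
```

The point isn't this particular rule; it's that the policy lives in the pipeline, runs on every change, and gives an answer a developer can act on immediately.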
This is where AI can actually help defensively. It’s not just generating code; it’s also capable of generating patches, explaining risk, and guiding remediation. But that only works when it’s connected to the right signals—static analysis, composition analysis, policy, and context—so it’s not guessing. The future is not “AI writes code and we hope it’s safe.” The future is “agents generate, systems govern.”
That shift has a side effect: it changes what DevSecOps needs to mean.
DevSecOps: Culture, Reality, and the Productivity Test
DevSecOps has been a useful cultural movement. It helped break the old “security says no” dynamic and put security closer to the work. But in many enterprises, it’s still uneven operationally.
The reason is simple. Culture doesn’t compensate for friction.
If your security tools flood teams with noise, slow builds, or generate findings developers can’t quickly act on, teams route around them. If your security workflow speeds developers up, giving them precise, contextual guidance and high-confidence fixes, adoption becomes inevitable.
DevSecOps isn’t primarily a culture problem anymore. It’s a productivity problem. The organizations that succeed won’t be the ones with the best slogans; they’ll be the ones where security behaves like an engineering multiplier.
That’s also why SAST, DAST, and “shift left” have had mixed results.
What Actually Changed Developer Behavior… and What Didn’t
We’ve had SAST and DAST for years, plus Software Composition Analysis (SCA), plus training programs, plus secure coding standards. Has anything fundamentally changed developer behavior?
Some things have improved. Developers are more accustomed to security feedback in their workflows than they were a decade ago. In many organizations, baseline hygiene is objectively better, and you can measure it.
But there are still three structural barriers that haven’t moved enough.
The first is incentives. Shipping features still wins. Security is still competing with deadlines.
The second is noise. When tooling produces more findings than a team can reasonably fix, developers don’t become more secure—they become numb.
The third is ownership. If security debt belongs to “everyone,” it belongs to no one. Vulnerabilities get found, logged, and triaged… and then they sit.
AI doesn’t create these problems, but it amplifies them. If development accelerates and remediation does not, the gap widens. That’s why I think the most important metric in modern application security isn’t vulnerability count. It’s risk velocity.
The CISO Blind Spot: Risk Velocity and Security Debt
Many CISOs still manage application risk like it’s a discovery problem: Do we know what vulnerabilities exist? Can we report on them? Can we show trends?
Those are necessary questions, but they’re increasingly insufficient. The more important question is: how fast are we creating new risk, and how fast can we eliminate it?
That’s risk velocity, and it’s the defining problem of this era.
Security debt behaves like financial debt. It compounds. It accrues interest. You can refinance it with compensating controls for a while, but if you keep taking on more debt than you pay down, eventually it becomes a strategic constraint. It slows delivery, increases incident likelihood, and makes every system change more dangerous.
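The arithmetic behind risk velocity is simple, which is part of its power. A hypothetical projection (all rates invented for illustration) makes the compounding visible:

```python
def project_backlog(backlog: int, introduced_per_week: float,
                    remediated_per_week: float, weeks: int) -> float:
    """Project open findings forward assuming constant rates.
    Net risk velocity is (introduced - remediated) per week."""
    return max(0.0, backlog + (introduced_per_week - remediated_per_week) * weeks)

# A team shipping far more AI-assisted code: findings introduced jump
# from a handful to 80/week, while remediation capacity stays at 20/week.
print(project_backlog(backlog=200, introduced_per_week=80,
                      remediated_per_week=20, weeks=12))  # 920.0
```

With a positive velocity, the backlog more than quadruples in a quarter even though every individual sprint "did some security work." Vulnerability count describes the snapshot; velocity describes the trajectory.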
At the same time, the software supply chain has become the dominant attack surface for many organizations. Modern applications are assembled from hundreds or thousands of components. Your security posture isn’t just your code. It’s your ecosystem.
AI accelerates both sides of the equation. Attackers can discover and weaponize issues faster. Defenders can remediate faster too, but only if they build the automation and governance to keep pace.
That brings us to a phrase that gets thrown around a lot: “secure by design.”
Secure by Design, at Enterprise Scale
If “secure by design” means every developer writes perfect code all the time, it’s not realistic at enterprise scale. That’s not a knock on developers; it’s a statement about systems.
Secure by design must mean the system makes insecure outcomes difficult.
In practice, that looks like secure defaults, approved frameworks, reference architectures, dependency controls, automated enforcement, and fast remediation loops that happen where developers work. It means the secure path is the easiest path.
We’ve made progress as an industry. More organizations are building these paved roads, and you can see the results in improving baseline performance against common vulnerability classes. But it’s also true that a large percentage of applications still carry flaws linked to the OWASP Top 10, and high-severity vulnerabilities often remain open long enough to become institutionalized as “normal.”
AI-generated code increases the volume of change entering the system. If you don’t change the system around it, you increase the volume of risk.
Here’s another provocative take: secure by design isn’t a principle. It’s an engineering platform decision. When enterprises treat it as a campaign, it fades. When they treat it as infrastructure, it sticks.
What I Believed 15 Years Ago that I Don’t Believe Anymore
Fifteen years ago, it was tempting to believe that if we just gave developers the right training and the right tools, we’d eliminate most of the problem. I believed the bottleneck was awareness.
I don’t believe that anymore.
Training matters, but it doesn’t outcompete incentives and time pressure. Awareness doesn’t fix friction. And tools that only find problems, and don’t help teams fix them, create the illusion of progress while backlogs grow.
Today, I believe the bottleneck is integration and automation. The goal isn’t “fewer vulnerabilities” in the abstract; the goal is a system where vulnerabilities are hard to introduce, easy to detect, and cheap to fix. When security aligns with how engineering actually works, security becomes durable.
That shift in mindset also changes what it means to be an ethical hacker.
The Future of Ethical Hackers
AI is lowering the barrier to entry for both attackers and defenders. Basic vulnerability discovery is becoming increasingly automated. That doesn’t make ethical hackers less important; it changes what differentiates them.
The future belongs to hackers who can reason about systems, not just find isolated bugs. Chaining issues, understanding business logic, breaking identity flows, modeling multi-tenant boundaries, spotting supply chain abuse paths—these are the skills that don’t get commoditized easily.
At the same time, we should acknowledge something hopeful: the same forces that lower the barrier for cybercrime also expand the pool of potential defenders. Gen Z’s instincts for speed, collaboration, and experimentation are exactly what modern security needs. The opportunity for organizations is to channel that talent toward constructive pathways: bug bounty programs, secure development programs, education, and real-world mentoring.
If you build those pathways, you don’t just find the next generation of ethical hackers, you help create them.
The Bottom Line
AI isn’t breaking application security. It’s exposing its limitations.
For years, the industry has relied on a model where developers write code, tools find issues, and teams try to fix them later. That model doesn’t scale in an AI-driven world where code production is accelerating.
The organizations that win will treat security as a continuous system, not a checkpoint. They’ll automate detection and remediation, govern the supply chain aggressively, and manage risk as a dynamic flow. Most importantly, they’ll make security feel like a productivity advantage—not a tax.
Because in the end, the lesson of AI-generated code isn’t about AI.
It’s about whether security can evolve at the same pace as software.
