Organizations today face an unprecedented level of threat. Recent research from Gigamon has revealed that AI-powered cyberattacks have risen by 58%. The industry has entered a new era, one defined by automated adversaries that move faster, operate with precision and exploit vulnerabilities long before defenders have time to detect them.
In this environment, developers find themselves under increased scrutiny for how software is built, and calls for “secure coding” are louder than ever. But in reality, there is no such thing as secure coding, only secure decisions. The real challenge is ensuring that even under pressure and complexity, developers make secure decisions consistently.
Operational Strain Is the Adversary’s Advantage
Software developers face a very different security reality than mainstream enterprise users. While most employees simply interact with systems, developers actively shape their security. They sit at a powerful intersection of design, security and user needs, and their day‑to‑day choices often have a greater impact on security outcomes than the security team itself. The result is that developers, their environments and their tools are high‑value targets for attackers.
In tandem, developers are under more pressure than ever to ship code at pace. Research has revealed that almost half of C-suite executives report AI adoption is “tearing their company apart.” The speed it promises becomes the speed leadership expects, and teams are being pushed to build more, faster, and to rely on AI tools to bridge the gap. That pressure forces trade-offs and creates exactly the conditions where insecure decisions slip through, and where attackers find the small mistake in code, configuration or third-party libraries that they’ve been waiting for.
The Truth Behind the Exploit
Most real-world incidents don’t happen because a developer forgot a best practice; they happen because someone at some stage made a decision that created an exploit path. Throughout my career I’ve heard variations of these decisions in nearly every organization. Someone insists an internal endpoint doesn’t need authorization, someone else promises that validation will be added later because a feature must ship today, another team assumes the client can be trusted or chooses to “just add a WAF rule and move on.” Each of these decisions introduces a crack, and cracks eventually widen.
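To make the “assume the client can be trusted” decision concrete, here is a minimal sketch in Python. All names are hypothetical and illustrative, not taken from any real codebase: the insecure handler bills whatever price the request claims, while the secure one treats the server-side catalog as the only source of truth.

```python
# Hypothetical checkout handlers illustrating a "trust the client" decision.
# CATALOG stands in for the server-side source of truth.
CATALOG = {"sku-123": 49.99, "sku-456": 9.99}

def charge_insecure(sku: str, client_price: float) -> float:
    # Decision: "the client can be trusted" -- the server bills whatever
    # price the request claims, so an attacker can simply submit 0.01.
    return client_price

def charge_secure(sku: str, client_price: float) -> float:
    # Secure decision: ignore the client-supplied figure, look the price
    # up server-side and reject unknown SKUs outright.
    if sku not in CATALOG:
        raise ValueError(f"unknown SKU: {sku}")
    return CATALOG[sku]

# An attacker-controlled request claiming a one-cent price:
print(charge_insecure("sku-123", 0.01))  # -> 0.01 (the exploit path)
print(charge_secure("sku-123", 0.01))    # -> 49.99 (the server decides)
```

The point is not the code itself but the decision it encodes: the insecure version is one line shorter and ships today, which is exactly why it gets written under deadline pressure.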
The weakest link is always human decision‑making, and social engineering exploits it: convincing someone a package or contributor is legitimate is still far easier and more common than compromising a tool directly. That’s why malicious or typosquatted packages remain one of the most common entry points, particularly when developers bypass slow or restrictive approval processes and assume a third-party package is safe. AI-assisted development has amplified this risk: when developers give AI tools too much autonomy, the system may automatically pull third‑party libraries without proper vetting. Attackers are already publishing fake or malicious packages specifically to exploit this behavior, banking on an AI tool, or an under-pressure developer, to select them.
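One cheap defensive habit is to check a requested dependency name against a list of packages you already trust before installing it. The sketch below uses Python’s standard-library difflib for fuzzy matching; the allow-list here is a hypothetical, abbreviated stand-in, and real tooling would draw on registry metadata instead.

```python
# Typosquat check sketch: flag dependency names that are suspiciously
# close to, but not exactly, a known package. The allow-list below is
# illustrative only; production tooling would use real registry data.
import difflib

KNOWN_PACKAGES = {"requests", "numpy", "pandas", "cryptography", "urllib3"}

def flag_possible_typosquat(name: str, cutoff: float = 0.85) -> list[str]:
    """Return known package names this one suspiciously resembles."""
    if name in KNOWN_PACKAGES:
        return []  # exact match: not a typosquat of itself
    return difflib.get_close_matches(name, KNOWN_PACKAGES, n=3, cutoff=cutoff)

print(flag_possible_typosquat("requestes"))  # -> ['requests']
print(flag_possible_typosquat("numpy"))      # -> [] (exact match, allowed)
```

A check like this is deliberately dumb and fast: it won’t catch a cleverly named malicious package, but it turns the common fat-finger and AI-hallucinated-name cases into a blocked decision rather than a silent install.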
If You Want Security to Stick, Teach Decision-Making
Combating this with developer security training that is just a list of dos and don’ts is not enough. What works is giving developers the mental models needed to evaluate risk in the moment. The most effective training teaches how to spot failure modes, understand where implicit trust boundaries actually lie, identify the safest and cheapest default available and imagine the vantage point an attacker would take. When developers internalize these thought processes, security shifts from a checklist to an instinct.
This training should be ongoing, with role-based learning paths that evolve as the organization grows. Post-training developer surveys are another powerful, often overlooked tool that can help organizations validate secure coding knowledge, identify gaps, and ensure that AI is accelerating development without introducing risk. Without feedback, organizations risk delivering content that’s too basic, too advanced, or simply ineffective. A feedback loop ensures continuous improvement and that developers are not just checking boxes but actually changing behavior.
At its core, the argument is simple: if we want fewer vulnerabilities, we must stop treating secure coding like trivia and start teaching secure decision‑making under real‑world pressure. The organizations that recognize this shift will be the ones best prepared for the next wave of AI‑powered threats.