Silicon Valleys Journal
  • Finance & Investments
    • Angel Investing
    • Financial Planning
    • Fundraising
    • IPO Watch
    • Market Opinion
    • Mergers & Acquisitions
    • Portfolio Strategies
    • Private Markets
    • Public Markets
    • Startups
    • VC & PE
  • Leadership & Perspective
    • Boardroom & Governance
    • C-Suite Perspective
    • Career Advice
    • Events & Conferences
    • Founder Stories
    • Future of Silicon Valley
    • Incubators & Accelerators
    • Innovation Spotlight
    • Investor Voices
    • Leadership Vision
    • Policy & Regulation
    • Strategic Partnerships
  • Technology & Industry
    • AI
    • Big Tech
    • Blockchain
    • Case Studies
    • Cloud Computing
    • Consumer Tech
    • Cybersecurity
    • Enterprise Tech
    • Fintech
    • Greentech & Sustainability
    • Hardware
    • Healthtech
    • Innovation & Breakthroughs
    • Interviews
    • Machine Learning
    • Product Launches
    • Research & Development
    • Robotics
    • SaaS
No Result
View All Result
  • Finance & Investments
    • Angel Investing
    • Financial Planning
    • Fundraising
    • IPO Watch
    • Market Opinion
    • Mergers & Acquisitions
    • Portfolio Strategies
    • Private Markets
    • Public Markets
    • Startups
    • VC & PE
  • Leadership & Perspective
    • Boardroom & Governance
    • C-Suite Perspective
    • Career Advice
    • Events & Conferences
    • Founder Stories
    • Future of Silicon Valley
    • Incubators & Accelerators
    • Innovation Spotlight
    • Investor Voices
    • Leadership Vision
    • Policy & Regulation
    • Strategic Partnerships
  • Technology & Industry
    • AI
    • Big Tech
    • Blockchain
    • Case Studies
    • Cloud Computing
    • Consumer Tech
    • Cybersecurity
    • Enterprise Tech
    • Fintech
    • Greentech & Sustainability
    • Hardware
    • Healthtech
    • Innovation & Breakthroughs
    • Interviews
    • Machine Learning
    • Product Launches
    • Research & Development
    • Robotics
    • SaaS
No Result
View All Result
Silicon Valleys Journal
No Result
View All Result
Home Technology & Industry AI

Q&A: The hidden risks of AI coding tools

SVJ Thought Leader by SVJ Thought Leader
December 10, 2025
in AI
0
Q&A: The hidden risks of AI coding tools

AI coding assistants are rapidly changing how software is designed, written and deployed. As organisations embrace agentic integrated development environments (IDEs) and autonomous development tools, they may also be introducing a new category of cybersecurity risk that is still largely misunderstood.

To explore the implications, Silicon Valley’s Journal spoke with Shrivu Shankar, VP of AI Strategy at Abnormal AI, about why these tools behave more like trusted insiders than traditional software—and why 2026 may mark a turning point for defensive security.

Q1. What new attack surface do AI coding tools create? Why are these agents effectively “trusted insiders”?

AI coding tools are often treated like conventional software, but they operate very differently. Agentic IDEs can write and run code directly on an engineer’s machine, and because engineers typically hold production-level access, the agent effectively inherits those privileges. Whatever the engineer can reach, the agent can reach too.

This starts getting risky because normal engineering behaviour can easily become a potential compromise path. Engineers copy and paste instructions or snippets from the internet constantly. With an autonomous agent running locally, a malicious piece of code in an otherwise harmless-looking snippet could be enough to give an attacker the same access as the engineer. Most teams simply do not realise how little it can take to inadvertently introduce a dangerous new attack vector into their software.

There is also a “usefulness threshold” with these tools. They only become valuable when given enough autonomy and context to take meaningful action. The traditional “lock it down” security approach risks turning the tools into a net drag on productivity.  

For this reason, organisations tend to increase access rather than restrict it. The combination of autonomy, local execution, and engineer-level privileges turn these systems into trusted insiders rather than external software, even if that wasn’t the engineer’s original intention.

Q2. Why do you think security teams are auditing these tools incorrectly? Where does the awareness gap come from?

Most organisations are still evaluating AI tools the way they would evaluate a SaaS product. They ask all the expected vendor-assessment questions—whether the provider is trusted and if the underlying software is secure—but none of that addresses the real risk model.

With agentic IDEs, you are not just trusting the vendor. You are trusting the agent running locally and the content that engineers feed into the model. Traditional software does not treat user-supplied text or copied code as a potential attack vector, but with autonomous agents, it absolutely can be.

This is an unintuitive shift for the industry. CISOs have spent years thinking about software through the lens of application security, dependency risk, and vendor trust. Very few have had to consider what it means for a tool to execute arbitrary actions based on the code that a developer pastes into it. From conversations I’ve had, only a minority of security leaders fully grasp how dangerous this can be. Until teams understand how these agents behave in practice, audits will continue to miss important signals.

Q3. Going back to the traditional “lock it down” strategy, why is this mindset not effective in the era of autonomous AI agents?

The “lock it down” approach fails because it directly conflicts with how these tools provide value. Autonomous agents need access, data and context to reduce engineering friction. When you restrict them too heavily, their usefulness collapses. In some cases, they can slow engineers down more than they speed them up.

At the same time, CISOs cannot simply block these tools. Engineering teams can become significantly more productive once they integrate agentic IDEs into their workflow. No security leader wants to be the person who turns off something that materially improves delivery speed.

This leaves organisations in an uncomfortable middle ground where the tool is too valuable to disable, but too powerful to treat casually. That is why security needs to evolve toward “safe autonomy”. Instead of trying to prevent the agent from acting, we need ways to observe, verify, and trust the actions it takes. The question becomes: how do you build confidence in an automated system that is performing more of your operational tasks?

We need frameworks that emphasise behavioural monitoring, explainability, and action-level validation. Locking everything down is no longer viable. Building trust and visibility around autonomous behaviour is the only sustainable path forward.

Q4. What could an AI-tool compromise look like, and how can organisations regain visibility and trust before this happens?

Part of the danger here is that a compromise involving an AI coding tool does not require a sophisticated exploit. A simple scenario—an engineer pasting a snippet from a documentation page into their IDE—could be enough to trigger a security incident if the snippet contains malicious instructions that the agent executes automatically. Engineers perform this action repeatedly throughout the day, which makes it an attractive vector for attackers, and difficult to screen for.

Traditional security controls are poorly positioned to detect this. Because these decisions and actions happen locally inside the agent’s environment and they are not monitored by existing endpoint or cloud security tools, an attack can unfold without triggering any obvious alerts. For this reason, I expect we will see a high-profile AI-IDE breach in the year ahead. We have only seen a couple of small incidents so far, but the capabilities of these tools are expanding quickly, and adoption is rising even faster.

To get ahead of this, security teams need to rethink detection. Content-based signals become far less useful when the agent can generate clean, syntactically correct code. Behavioural signals become far more important—actions that fall outside expected patterns, a sudden shift in what the agent is attempting to access, or operations with no historical precedent.

We also need better mechanisms for verifying outcomes. An engineering manager does not need to understand every line of code to confirm a system works; they rely on tests, review processes, and output checks. We need something similar for AI agents: ways to validate results without manually inspecting everything they produce.

The goal is not to restrict autonomy but to surround it with the right visibility and guardrails. If we can observe what agents are doing, validate the outcomes and detect unexpected behaviour early, we can rely on them safely—and fully benefit from the productivity gains they offer.

Previous Post

Stop Ho-Ho-Holding: The Future of Holiday CX Is Human-Guided AI

SVJ Thought Leader

SVJ Thought Leader

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

  • Trending
  • Comments
  • Latest
AI at the Human Scale: What Silicon Valley Misses About Real-World Innovation

AI at the Human Scale: What Silicon Valley Misses About Real-World Innovation

October 27, 2025

From hype to realism: What businesses must learn from this new era of AI

October 28, 2025

Why You Should Own Your Data. Enterprises Want Control and Freedom, Not Lock-In

November 11, 2025
From recommendation to autonomy: How Agentic AI is driving measurable outcomes for retail and manufacturing

From recommendation to autonomy: How Agentic AI is driving measurable outcomes for retail and manufacturing

October 21, 2025
The Human-AI Collaboration Model: How Leaders Can Embrace AI to Reshape Work, Not Replace Workers

The Human-AI Collaboration Model: How Leaders Can Embrace AI to Reshape Work, Not Replace Workers

1

50 Key Stats on Finance Startups in 2025: Funding, Valuation Multiples, Naming Trends & Domain Patterns

0
CelerData Opens StarOS, Debuts StarRocks 4.0 at First Global StarRocks Summit

CelerData Opens StarOS, Debuts StarRocks 4.0 at First Global StarRocks Summit

0
Clarity Is the New Cyber Superpower

Clarity Is the New Cyber Superpower

0
Q&A: The hidden risks of AI coding tools

Q&A: The hidden risks of AI coding tools

December 10, 2025

Stop Ho-Ho-Holding: The Future of Holiday CX Is Human-Guided AI

December 10, 2025
Is there an AI bubble?

Is there an AI bubble?

December 10, 2025

Why companies keep missing the point on AI

December 10, 2025

Recent News

Q&A: The hidden risks of AI coding tools

Q&A: The hidden risks of AI coding tools

December 10, 2025

Stop Ho-Ho-Holding: The Future of Holiday CX Is Human-Guided AI

December 10, 2025
Is there an AI bubble?

Is there an AI bubble?

December 10, 2025

Why companies keep missing the point on AI

December 10, 2025
Silicon Valleys Journal

Bringing you all the insights from the VC world, startups, and Silicon Valley.

Content Categories

  • AI
  • C-Suite Perspective
  • Cloud Computing
  • Cybersecurity
  • Enterprise Tech
  • Events & Conferences
  • Finance & Investments
  • Financial Planning
  • Future of Silicon Valley
  • Healthtech
  • Leadership & Perspective
  • Leadership Vision
  • Press Release
  • Product Launches
  • SaaS
  • Technology & Industry
  • Uncategorized
  • About
  • Privacy & Policy
  • Contact

© 2025 Silicon Valleys Journal.

No Result
View All Result

© 2025 Silicon Valleys Journal.