Silicon Valleys Journal

Chain of Custody: Why Every AI Needs a Human Owner

By Alan Radford, Global Identity and Access Management Strategist, One Identity

April 20, 2026
in AI

Take a look at just about any enterprise environment today, and you’ll quickly notice that the fastest-growing problem isn’t human. Non-human identities (NHIs) such as bots, service accounts, machine agents, and automated workflows now outnumber human employees on networks by staggering ratios in excess of 50 to 1. These “identities” are spun up quickly to solve problems and orchestrated by different teams across the organization with little to no coordination. While an API token or an automation bot might not seem like an “identity” in the traditional sense, they need the same level of access to relevant data as their human counterparts, so to treat them as anything less than a full-fledged identity is to invite unfathomable levels of risk into your organization. 

We’ve spent years studying how identity is managed in organizations, and one question consistently catches executives and their teams off guard: “How many identities exist in your environment?” It’s tempting to answer with a simple headcount, but that doesn’t factor in the hundreds or thousands of service accounts with access to systems, applications, and datasets – some of them active, and some of them dormant long after they’ve served their purpose in a given project or function.

These machine identities don’t fit traditional onboard/offboard processes the way human employees do. For one, they operate quietly in the background, invisible to anyone not actively looking for them. They also tend to be provisioned automatically, and their access persists long after their original purpose has been forgotten. Nobody thinks to “offboard” a machine agent or service account, revoking its access the way we would for an employee following a promotion or an exit from the company. So, over time, access accumulates, visibility declines, and risk steadily builds in places most organizations aren’t even thinking about looking.
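The dormancy problem described above lends itself to a simple automated check. The sketch below is a minimal illustration, not any vendor’s API: the `ServiceAccount` record, its field names, and the 90-day review window are all assumptions chosen for the example.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

# Hypothetical inventory record for a non-human identity (NHI).
# Field names are illustrative, not drawn from a real IAM product.
@dataclass
class ServiceAccount:
    name: str
    owner: Optional[str]  # named human owner, if any
    last_used: date

def find_dormant(accounts: List[ServiceAccount], as_of: date,
                 max_idle: timedelta = timedelta(days=90)) -> List[ServiceAccount]:
    """Return accounts idle longer than the review window --
    candidates for the machine equivalent of 'offboarding'."""
    return [a for a in accounts if as_of - a.last_used > max_idle]

accounts = [
    ServiceAccount("etl-bot", "alice", date(2026, 4, 1)),
    ServiceAccount("legacy-sync", None, date(2025, 9, 12)),  # forgotten, ownerless
]
dormant = find_dormant(accounts, as_of=date(2026, 4, 20))
# Only "legacy-sync" exceeds the 90-day idle window.
```

In practice the inventory would be fed from directory and secrets-management systems rather than hard-coded, but even this shape makes the point: dormancy is only detectable if last-use and ownership are recorded at all.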

How we got here: automation without accountability

NHIs didn’t suddenly become a problem or appear as a new “category” that must be managed. They emerged gradually as a natural byproduct of automation, AI integration, and the need for business to move faster. In many cases, the systems and processes designed to manage human users were simply extended to accommodate them. That worked for a time, but those systems were never designed to handle the behavior or volume of machine-driven activity we’re seeing today.

In earlier stages of NHI proliferation, some organizations resorted to treating machines as if they were people just to make the system work. It wasn’t uncommon for teams to create placeholder “employees” in HR systems purely to trigger account provisioning workflows, then use those credentials to orchestrate automated processes. How many employee records in your organization’s HR system have suspiciously recent dates of birth? Workarounds like this highlight the crux of the matter: identity governance frameworks were built with humans in mind, then stretched to fit something fundamentally different. Businesses stretch their capabilities and push boundaries all the time, but where security is concerned there is always a price to pay, and that debt is now coming due.

These shortcuts and extensions have now become embedded in how businesses operate. New services, integrations, and automation layers have been added, each introducing more identities with access to critical systems. But unlike human users, these identities are not revisited, revalidated, or retired in a structured way. So, what began as a practical solution has evolved into a systemic crisis, where automation has left accountability in the dust. 

The “Chain of Custody” problem

What’s sorely lacking is a clear security baseline where every NHI has a human owner. Not a shared mailbox or a whole team, but a named individual who is accountable for how that identity is created, used, and maintained. Without that chain of custody, there is no meaningful way to enforce responsibility or trace activity back to a decision point. And in environments where access equals action, that lack of accountability directly translates into risk. According to the Non-Human Identity Management Group, a shocking 97% of NHIs have “excessive” privileges that broaden the attack surface. Around 9 in 10 businesses frequently expose their NHIs to third parties, and 44% of tokens are exposed in the wild – being sent or stored over platforms like Teams, in Jira tickets, or on Confluence pages. 

These identities are created programmatically, inherited through integrations, or deployed as part of automation workflows with very little in the way of oversight. In some cases, they’re even capable of triggering other identities or actions, further distancing them from any semblance of human control. What’s left is an environment where activity is taking place inside critical systems, but no one can confidently say what’s going on or who’s responsible for it.

In the case of AI agents – bots that extend beyond automation and into active decision-making – the problem intensifies. There is growing discussion around whether machines can or should act totally independently, but from a governance perspective, the answer has to be a resounding “no.” That’s not to say AI agents can’t be used to their fullest potential, but the checks and balances must always come back to a human at some point in the chain. If that chain is broken, or doesn’t exist to begin with, organizations lose the ability to audit, enforce policy, or demonstrate control. That is exactly what regulators and security teams are increasingly demanding, particularly under frameworks like the EU’s Digital Operational Resilience Act (DORA), which emphasizes traceability, accountability, and strict control over access to critical systems in finance.
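The governance stance above – an agent may act, but a human must sit at the end of the chain – can be sketched as a simple authorization gate. Everything here is illustrative: the system names, the `authorize` function, and its fields are assumptions, not a real policy engine.

```python
from typing import Optional

# Illustrative set of systems where actions carry real consequences.
CRITICAL_SYSTEMS = {"prod-db", "payments-api"}

def authorize(action: str, target: str, owner: Optional[str]) -> dict:
    """Refuse agent actions on critical systems that cannot be traced
    to a named, accountable human owner."""
    if target in CRITICAL_SYSTEMS and owner is None:
        raise PermissionError(f"{action} on {target}: no accountable human owner")
    # Record who is accountable alongside the action, for audit.
    return {"action": action, "target": target, "accountable": owner}

# An owned agent action passes and yields an auditable record...
record = authorize("rotate-credentials", "payments-api", owner="alice")
# ...while an ownerless action against "prod-db" would raise PermissionError.
```

The design choice worth noting is that the gate returns an audit record rather than a bare boolean: the point of a chain of custody is not just to block ownerless actions but to make every permitted one traceable afterward.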

From human resources to “non” human resources

NHIs significantly outnumber employees, so it stands to reason that they must be managed with the same level of structure and discipline. Businesses evolved to include human resources functions because managing people at scale required oversight, governance, and accountability. The same logic now applies to machines. What’s missing is a formalized way to manage this growing population as a workforce in its own right.

This is where the concept of “non-human resources” starts to take shape. Organizations need a defined function responsible for tracking every non-human identity, understanding what it is, what it does, and where it operates. They need to build a clear inventory, apply consistent taxonomies, and maintain visibility into how these identities interact with systems over time, because the cost of not doing that is now growing by the day. 

None of the above steps work unless ownership is established and enforced, and NHI lifecycles are actively managed. Every identity must have a defined purpose, a named human owner, and a clear point at which its access is reviewed or revoked. That includes identities created as part of automated workflows, integrations, or AI-driven processes. Put simply, if they can access something, they need to be governed like any other participant in the environment. If they cannot access anything, their existence must be justified or they must be removed entirely.
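The baseline described here – defined purpose, named human owner, scheduled review – can be expressed as a trivial record check. The field names below are an assumed schema for illustration, not a real IAM data model.

```python
from datetime import date
from typing import List

# The governance baseline from the text: every NHI needs a purpose,
# a named human owner, and a scheduled access review. Field names
# are assumptions made for this sketch.
REQUIRED_FIELDS = ("purpose", "owner", "next_review")

def governance_gaps(identity: dict) -> List[str]:
    """Return which baseline fields are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not identity.get(f)]

bot = {
    "name": "invoice-agent",
    "purpose": "monthly billing export",
    "owner": "b.chen",
    "next_review": date(2026, 7, 1),
}
orphan = {"name": "legacy-sync", "purpose": "", "owner": None}

# The fully governed identity has no gaps; the orphan fails on all three.
```

A check this simple is obviously not a governance program, but running it across a complete inventory is a fast way to surface how many identities would currently fail the baseline.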

Accountability is the foundation of AI trust

We all know that AI is booming. But in the race to capitalize on it, machine-based identities are being spun up in droves. Agentic systems can initiate actions, trigger workflows, and interact with other services with minimal – if any – human input. That means decisions are no longer just executed by machines, but increasingly influenced by them, and the link between action and responsibility is eroding. Policies are beginning to reflect the need for human-in-the-loop controls, particularly when AI systems have access to sensitive data or critical infrastructure. The principle is sound: if an action has consequences, there must be a human ultimately accountable for it.

This is why chain of custody must be inseparable from trust in AI-driven environments. No human owner? No trust. A chain of custody provides a clear line from action back to ownership, enabling organizations to audit behavior, enforce policy, and demonstrate control when it matters. Without that thread of responsibility, even the most advanced AI systems – however effective at their task – will only ever lead to uncertainty and risk. If an organization cannot clearly answer who owns a given identity, whether human or machine, it cannot confidently say it is in control of its own environment. And, regrettably, that is the situation businesses find themselves in today.

If AI is going to act on behalf of your business, it cannot exist without accountability. Without a chain of custody, you’re not scaling innovation – you’re scaling risk.

© 2025 Silicon Valleys Journal.
