The Payments Industry Just Quietly Confirmed Its Biggest Weakness
When Visa unveiled its new AI and stablecoin payment services at Singapore FinTech Festival, the announcement was framed as a leap forward for global commerce. But the most revealing detail wasn’t what Visa launched – it was what it didn’t launch.
Despite months of industry hype around conversational commerce and autonomous shopping assistants, Visa stopped short of allowing customers to complete purchases through AI chatbots. That omission wasn’t an oversight. It was a signal that the world’s largest payments network simply isn’t ready for agentic commerce.
And it’s not alone.
AI agents are already capable of browsing, comparing, negotiating, and initiating transactions. They can manage subscriptions, optimize spending, and even execute investment strategies. But none of this can safely scale until the financial system can answer one foundational question: Who or what is actually transacting?
Visa’s hesitation is the clearest sign yet that the identity layer of global finance is unprepared for autonomous actors.
Finance Was Built on a Human‑Only Identity Model
For decades, financial identity has relied on the simple premise that every transaction originates from a human. We log in, we sign, and we authorize.
AI agents break this assumption the moment they act.
They can’t authenticate with documents, provide device fingerprints, or generate behavioural biometrics. They simply execute. And when machines execute, identity becomes the control plane.
The existing infrastructure – KYC, AML, device checks and behavioural analytics – can’t answer the most basic questions about autonomous agents: Who is this agent? What is it allowed to do? Why did it take this action? And who is accountable when it misbehaves?
Visa’s reluctance to enable agent-driven payments highlights a structural limitation: the identity stack underlying global commerce can’t support non-human actors.
Credential Laundering at Machine Scale
The earliest and most damaging failure mode won’t be deepfake fraud or rogue trading bots. It will be credential laundering: AI agents scraping, synthesizing, and repurposing human credentials to infiltrate financial systems.
Essentially, this is industrialized identity theft.
Agents will mimic human access patterns, replay behavioural signatures, and slip through systems that assume a human is always behind the screen. Fraud losses linked to synthetic identities already exceed tens of billions annually, and AI is accelerating the trend.
The exposure will be enormous, and the only viable defence is identity that can be proven without being revealed.
Cracks in Our Current Identity Infrastructure
Today’s identity systems are extractive, redundant, and increasingly fragile.
Users routinely overshare sensitive information just to prove simple facts. To confirm age, they hand over full birthdates, addresses, and license numbers. To demonstrate solvency, they expose years of transaction history.
This data then sits in centralized silos that companies monetize while individuals absorb the risk. When Equifax was breached, it wasn’t the credit bureau that suffered; it was the 147 million people whose personal information was exposed.
AI agents are accelerating the collapse of this model by increasing verification volume, expanding the attack surface, and raising the incentives for credential theft. We’re already seeing the system buckle.
Zero‑Knowledge Identity Changes Everything
Zero‑knowledge proofs offer a way out.
These cryptographic techniques allow individuals and AI agents to prove specific facts without revealing underlying data. A person can prove they are over 21 without disclosing their birthdate. A user can prove their net worth exceeds a threshold without exposing account balances. And an AI agent can prove it’s authorized to transact without revealing its architecture or training data.
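To make the idea concrete, here is a minimal sketch of a genuine zero-knowledge proof: a non-interactive Schnorr proof of knowledge of a discrete logarithm, using Fiat–Shamir hashing. The prover convinces a verifier that it knows a secret value without ever transmitting it. The group parameters here are toy-sized for readability; production systems use large standardized groups and richer proof systems (such as zk-SNARKs or Bulletproofs) for statements like age ranges.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge (non-interactive via Fiat-Shamir).
# Demo parameters: p = 2q + 1 with p, q prime; g generates the
# order-q subgroup. Real deployments use large, standardized groups.
P, Q, G = 2039, 1019, 4

def prove(x: int):
    """Prove knowledge of x, where y = g^x mod p, without revealing x."""
    y = pow(G, x, P)                      # the public value
    r = secrets.randbelow(Q)              # one-time blinding nonce
    t = pow(G, r, P)                      # commitment
    c = int(hashlib.sha256(f"{G}|{y}|{t}".encode()).hexdigest(), 16) % Q
    s = (r + c * x) % Q                   # response; x stays hidden
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check g^s == t * y^c without ever seeing x."""
    c = int(hashlib.sha256(f"{G}|{y}|{t}".encode()).hexdigest(), 16) % Q
    return pow(G, s, P) == (t * pow(y, c, P)) % P

secret = secrets.randbelow(Q)             # the private value, never sent
y, t, s = prove(secret)
print(verify(y, t, s))                    # True: proof checks out
print(verify(y, t, (s + 1) % Q))          # False: a tampered response fails
```

The verifier learns only that the prover knows the secret, not the secret itself, which is exactly the property an "over 21" or "net worth above threshold" proof needs at scale.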
Regulators are already moving on this. Europe’s eIDAS 2.0 mandates interoperable digital identity systems. Major financial institutions have piloted zero‑knowledge KYC, demonstrating lower costs, stronger security, and better user experience.
The architecture is critical. Verifiable credentials issued by trusted authorities live in the user’s digital wallet, not on centralized servers. Verification becomes a cryptographic proof, not a data-sharing event.
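A rough illustration of that wallet model is salted-hash selective disclosure, in the spirit of SD-JWT-style verifiable credentials: the issuer signs a digest of salted claims, the holder discloses only the claim a verifier needs, and the verifier checks the issuer's signature without seeing anything else. All names here are illustrative, and an HMAC stands in for a real public-key signature to keep the sketch self-contained.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = b"demo-issuer-key"  # stand-in for a real issuer signing key

def issue(claims: dict):
    """Issuer: salt each claim, sign the digest list, hand all to the holder."""
    salted = {k: (secrets.token_hex(16), v) for k, v in claims.items()}
    digests = sorted(
        hashlib.sha256(f"{k}|{s}|{v}".encode()).hexdigest()
        for k, (s, v) in salted.items()
    )
    sig = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                   hashlib.sha256).digest()
    return salted, sig

def present(salted: dict, key: str) -> dict:
    """Holder: reveal exactly one claim (plus its salt), nothing else."""
    salt, value = salted[key]
    digests = sorted(
        hashlib.sha256(f"{k}|{s}|{v}".encode()).hexdigest()
        for k, (s, v) in salted.items()
    )
    return {"claim": (key, salt, value), "digests": digests}

def verify(presentation: dict, sig: bytes) -> bool:
    """Verifier: check the signature and that the disclosed claim is covered."""
    key, salt, value = presentation["claim"]
    expected = hmac.new(ISSUER_KEY, json.dumps(presentation["digests"]).encode(),
                        hashlib.sha256).digest()
    d = hashlib.sha256(f"{key}|{salt}|{value}".encode()).hexdigest()
    return hmac.compare_digest(expected, sig) and d in presentation["digests"]

wallet, sig = issue({"over_21": True, "country": "SG", "kyc_tier": 2})
p = present(wallet, "over_21")   # discloses over_21 only
print(verify(p, sig))            # True, without exposing the other claims
```

The salts prevent the verifier from guessing undisclosed claims by brute-forcing digests, which is what turns verification into a proof rather than a data-sharing event.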
This is the identity layer AI agents require, and it’s the only one that can scale.
Identity as an Income-Generating Asset
Privacy protection is only the beginning; the deeper shift is economic: identity verification becomes a monetizable asset.
Today, when a fintech verifies a new user, it pays an identity provider, not the individual whose data enables the verification. The user provides the raw material, but intermediaries capture the value.
Self-sovereign identity inverts this model.
When individuals control their credentials, they can attach economic terms to their use. Platforms that need verified participants can compensate individuals directly. This isn’t about selling data – it’s about recognizing the value that verified identity provides, whether the actor is human or agent.
The AI Dimension: Why This Shift Is Urgent
Generative AI has rendered traditional verification methods obsolete. Deepfakes, synthetic identities, and AI-powered impersonation attacks overwhelm systems designed to distinguish humans from bots.
The common response has been to collect more biometric data, but this just creates a dangerous feedback loop: the more biometric data is stored, the more material AI systems have to weaponize.
Zero‑knowledge approaches break the cycle. They allow platforms to verify personhood, authorization, and compliance without collecting raw data. A verified human credential proves uniqueness without storing biometric records. And a verified agent credential proves authorization without exposing model internals.
This is the trust infrastructure required for legitimate AI agents to participate in digital commerce.
Visa’s Hesitation Matters: The World Needs KYA
Visa’s decision not to enable agent‑driven payments previews the next decade. Without a trust protocol for AI agents, the agentic economy cannot scale.
The solution is ‘know your agent’ (KYA) – a cryptographically verifiable identity layer for autonomous systems. KYA gives each AI agent a unique identity that proves what it is, what it’s allowed to do, and who is accountable for its actions.
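What a KYA check might look like in practice: a hypothetical registry issues an agent credential binding an agent ID to an accountable principal, a set of permitted actions, and a spend limit; a payment gateway verifies all of this before authorizing a transaction. Every name and field here is an assumption for illustration, and an HMAC stands in for a real registry signature.

```python
import hashlib
import hmac
import json
import time

REGISTRY_KEY = b"demo-registry-key"  # stand-in for a KYA registry signing key

def issue_agent_credential(agent_id, principal, scopes, limit, ttl=3600):
    """Hypothetical KYA issuer: bind an agent to a principal and permissions."""
    cred = {"agent_id": agent_id, "principal": principal, "scopes": scopes,
            "spend_limit": limit, "expires": time.time() + ttl}
    body = json.dumps(cred, sort_keys=True).encode()
    return cred, hmac.new(REGISTRY_KEY, body, hashlib.sha256).hexdigest()

def authorize(cred, tag, action, amount) -> bool:
    """Gateway check: valid signature, unexpired, in scope, under limit."""
    body = json.dumps(cred, sort_keys=True).encode()
    expected = hmac.new(REGISTRY_KEY, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, tag)
            and time.time() < cred["expires"]
            and action in cred["scopes"]
            and amount <= cred["spend_limit"])

cred, tag = issue_agent_credential(
    "agent-7f3", principal="acme-corp", scopes=["payments.execute"], limit=500)
print(authorize(cred, tag, "payments.execute", 120))   # True: in scope, in budget
print(authorize(cred, tag, "payments.execute", 9000))  # False: over spend limit
```

The key property is that identity, permission, and accountability travel together: if the agent misbehaves, the credential names the principal on the hook.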
As AI agents proliferate across financial systems, KYA will become as fundamental as KYC.
Identity as the Foundation of AI‑Native Finance
Every KYC upload, facial scan, and personal data form creates records that can be breached, sold, or even subpoenaed. Digital wallets secured by verifiable credentials offer a safer, interoperable alternative.
Identity is becoming the most valuable digital asset. For the first time, it can be owned, protected, and monetized by individuals rather than intermediaries.
AI agents make this shift unavoidable. Finance cannot function without knowing who, or indeed what, is acting inside its systems.
2026 will be the year finance is forced to rebuild its foundations for an autonomous future.