This past week saw two landmark moves that bring agentic commerce out of theory and into practice. First, Visa announced its Trusted Agent Protocol, introducing a structured way for merchants to verify AI agents during transactions. Visa’s announcement details the protocol on its Developer portal and via its press release. Second, Walmart unveiled a partnership with OpenAI to enable customers to shop directly within ChatGPT using an Instant Checkout feature.
Taken together, these are signals that agentic shopping is crossing a threshold. The question now is whether real use will follow. That outcome hinges more on trust, interoperability, and clarity around accountability than on technical feasibility.
What’s changing: the shift to AI-initiated commerce
Conventional e-commerce relies on the consumer consciously guiding each step: search, click, select, check out. Agentic commerce flips that by enabling AI systems to act autonomously on a user’s behalf – sniffing out options, comparing deals, and completing purchases consistent with delegated preferences.
Walmart’s integration with ChatGPT allows users to issue commands like “restock my pantry” or “order detergent,” whereupon ChatGPT builds a cart and handles checkout via Instant Checkout. Multiple outlets confirm that Walmart’s products will be purchasable through ChatGPT using that embedded mechanism. The ultimate goal is to give everyone a kind of personal shopper and assistant, the kind of luxury and convenience currently only available to the extremely rich.
Visa’s Trusted Agent Protocol addresses the necessary counterpart: how to authenticate such agents reliably. The protocol allows merchants to validate digital signatures from AI agents, distinguishing them from malicious bots. Visa positions the protocol as a “framework for AI commerce” intended to be part of its Intelligent Commerce offering.
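To make the idea concrete, here is a toy sketch of merchant-side agent verification. The actual Trusted Agent Protocol defines its own cryptographic formats, which this does not reproduce; the agent registry, field names, and the use of HMAC as a stand-in for a real asymmetric signature scheme are all illustrative assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical registry: agent_id -> secret established when the agent
# platform registered with the merchant. A real protocol would use
# public keys and asymmetric signatures instead of shared secrets.
REGISTERED_AGENTS = {
    "agent-123": b"secret-registered-with-merchant",
}

def sign_request(body: dict, secret: bytes) -> str:
    """Agent side: sign a canonicalised order payload."""
    payload = json.dumps(body, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_agent(agent_id: str, body: dict, signature: str) -> bool:
    """Merchant side: is this request from a registered, legitimate agent?"""
    secret = REGISTERED_AGENTS.get(agent_id)
    if secret is None:
        return False  # unknown agent: treat like any other bot traffic
    expected = sign_request(body, secret)
    return hmac.compare_digest(expected, signature)

order = {"sku": "detergent-1l", "qty": 2}
sig = sign_request(order, REGISTERED_AGENTS["agent-123"])
print(verify_agent("agent-123", order, sig))        # True
print(verify_agent("agent-123", order, "bad-sig"))  # False
```

The point of the design is the asymmetry of outcomes: a valid signature lets the merchant treat agent traffic as first-class, while anything unsigned or unknown falls back to existing bot defences.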
These two moves tackle adjacent challenges: one enables more seamless AI initiation of purchases, the other protects against misuse and fraud in that scenario.
Why trust is the battleground
Agentic commerce will fail not because the technology doesn’t work, but because people won’t trust it. Delegating financial decisions (even small ones) to an algorithm is a radical shift. Users will want assurance that their AI acts in their interests, not at the whim of a model’s quirks or an attacker’s exploits.
You may remember Amazon’s Dash buttons: small tags that let consumers buy household goods like laundry detergent or paper towels with a single press. Priced at only $5, they were a clear loss-leader meant to encourage customers to shop with Amazon rather than any other vendor. They were discontinued in 2019, in large part because customers did not trust the small plastic buttons: they could easily be pressed dozens of times by an inquisitive child, and they displayed no price information about the products being bought, which violated German consumer-protection law. We could see a similar scenario play out with AI agents: the technology might work, but will ordinary people trust that their agents are getting the best prices and buying the right products?
It is not only consumers who need to trust agentic AI, but merchants too. First, merchants must know which agent is acting on behalf of which user, and whether that agent is legitimate. Currently, one of the best defences against fraud is the thousands of signals that distinguish a real, human customer from a script entering information automatically. Agentic AIs won’t produce these signals: there will be no simple way to tell a legitimate purchase from a fraudulent one when both come from the same AIs behaving in the same way. Second, intent: the agent should faithfully reflect the user’s directive, not misinterpret or overreach it; failures here lead to returns and chargeback claims. Third, fault and liability: when things go wrong, who is accountable?
These are not new questions; they echo longstanding debates in payments, but in a more complex context. If an AI orders a more expensive version of an item because it “thought” the user would want it, does the user absorb that cost? Does the agent provider accept liability? Or does the merchant? Without clarity on responsibility and recourse, adoption will stall.
Interoperability, transparency, and ecosystem alignment
Trust alone is insufficient if it remains cloistered in a single platform. For agentic commerce to scale, protocols must interoperate across merchants, payment networks, AI agent platforms, and identity frameworks. A user’s agent should work across multiple stores without requiring reinvention each time.
Visa’s protocol is being pitched as open and modular. Visa claims its Trusted Agent Protocol works with minimal changes to merchant systems and is collaborating with standards bodies like IETF, OpenID, and EMVCo to maximize compatibility. The protocol is already integrated with Cloudflare and endorsed by partners like Worldpay and Nuvei. Worldpay’s collaboration is documented in a joint release.
On the Walmart/OpenAI side, Instant Checkout is being extended into ChatGPT across multiple tiers (Free, Plus, Enterprise), and already supports purchases via payment rails such as Apple Pay and Google Pay.
The more these systems adopt compatible trust primitives (agent signatures, consent tokens, identity assertions), the more users and merchants will feel confident in seamless agentic buying.
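As a rough sketch of how those primitives might fit together in a single request envelope: the shapes and field names below are invented for illustration, not taken from any published Visa or OpenAI specification.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical shapes for the trust primitives named above. Field
# names and scope strings are illustrative assumptions only.

@dataclass
class ConsentToken:
    user_id: str
    scope: str              # e.g. "groceries:repurchase"
    max_amount_usd: float   # user-set ceiling for this delegation
    expires_at: datetime

@dataclass
class AgentRequest:
    agent_id: str
    signature: str           # agent's signature over the order payload
    identity_assertion: str  # e.g. an OpenID-style token for the user
    consent: ConsentToken

def consent_is_valid(req: AgentRequest, amount_usd: float) -> bool:
    """Check the consent token covers this purchase amount and hasn't expired."""
    c = req.consent
    return (amount_usd <= c.max_amount_usd
            and c.expires_at > datetime.now(timezone.utc))

token = ConsentToken("user-42", "groceries:repurchase", 75.00,
                     datetime.now(timezone.utc) + timedelta(days=7))
req = AgentRequest("agent-123", "sig-placeholder", "idtoken-placeholder", token)
print(consent_is_valid(req, 19.99))   # True
print(consent_is_valid(req, 199.99))  # False
```

The interoperability argument is that if every merchant and network agrees on shapes like these, a user’s agent carries the same credentials everywhere instead of re-enrolling per store.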
What Visa and Walmart must get right
- Clear liability rules. Protocols should include default rules around dispute mediation, error handling, and refund responsibility. If an agent ‘hallucinates’ an order, users must have a path to remediation.
- Gradual empowerment. Rather than full autonomy on day one, systems should evolve through increasingly complex tasks. Let agents assist (e.g. suggest baskets, monitor prices) long before letting them execute high-value or discretionary purchases.
- Visibility and auditability. Consumers need transaction logs, agent decision trails, and oversight interfaces. These help detect misuse and build confidence.
- Consent segmentation. Agent permissions should be granular. Users might allow an agent to repurchase household staples under budget limits but require explicit confirmation for unusual or high-ticket orders.
- Open standards participation. Neither Visa nor Walmart can dominate the architecture alone. They must align with, or help lead, neutral standards that include competitors, regulators, and smaller participants.
- User education. The mental leap to agentic shopping is nontrivial. Clarity around benefits, risks, and safeguards, explained simply, is essential in onboarding early users.
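The consent-segmentation and auditability points above can be sketched as a small policy check: auto-approve pre-agreed categories under a budget, defer everything else to the user, and log every decision. Category names and limits here are made up for illustration.

```python
from datetime import datetime, timezone

# Per-order budgets the user has pre-approved by category
# (hypothetical values for illustration).
AUTO_APPROVE = {
    "household-staples": 40.00,
    "groceries": 75.00,
}

audit_log: list[dict] = []  # the agent decision trail consumers would review

def authorize(category: str, amount_usd: float) -> str:
    """Return 'approved' or 'needs_confirmation', recording the decision."""
    limit = AUTO_APPROVE.get(category)
    decision = ("approved" if limit is not None and amount_usd <= limit
                else "needs_confirmation")
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "category": category,
        "amount_usd": amount_usd,
        "decision": decision,
    })
    return decision

print(authorize("household-staples", 12.99))  # approved
print(authorize("electronics", 499.00))       # needs_confirmation
```

Even this toy version shows why the two recommendations reinforce each other: granular consent keeps the agent within bounds, and the audit trail lets the user verify after the fact that it stayed there.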
What’s at stake beyond novelty
If agentic commerce succeeds, it promises more than convenience. It could reshape loyalty, shifting value from “which app or site I use” to “which AI agent I trust.” That means the locus of competition may move toward agent platforms, user models, and privacy governance – not just retail merchandising or payment margins.
It will also profoundly reshape marketing and search engine optimisation. If AI agents become a major force in commerce, e-commerce companies may start optimising their sites and advertising for AIs, not humans. Companies might ask: why advertise, or even brand, our products when an AI will be making the decision to buy?
But that future hinges on whether these early moves succeed in building foundational trust. If consumers hesitate, adoption will stall, just as it did with Amazon Dash. If the press fills with stories of rogue orders, fraud, and mischarging, the backlash could sour public sentiment before scale arrives.
Visa and Walmart, to their credit, appear to recognize this. Their work isn’t about pushing features now; it’s about defining the rules for agentic commerce. They must deliver a blueprint that others can adopt, adapt, and trust.
If they get it right, this week will be remembered as a pivot point in retail history. But if they misjudge trust, consumers may retreat to traditional checkout for a long time. Agentic commerce’s promise of better, not just easier, depends on that early foundation.