We are currently in an AI agent bubble. The fever feels exactly like the dot-com era when every company promised to reinvent your world through the power of the internet. Everyone today is promising swarms of intelligent agents that will trade, negotiate, and run companies for you.
I’m really excited for that future. I’m a nerd; I just like that kind of stuff. But I’m older now. I’ve seen how these things go, and I know the technology well enough to see the cracks.
There’s obviously a lot of value in the market today, just like there was in the dot-com bubble. If you bought the dip on Amazon, Google, PayPal, and eBay, you’re probably doing pretty well today. Many of the winners are probably products we’re all using already, from relative newcomers like OpenAI to incumbents like Adobe and Figma, who have integrated AI in ways I find enormously useful.
The risk with the AI that we have today is that some tasks are just much harder than others, and it’s not exactly clear why an LLM can give you a detailed history of the Roman Empire but can’t just handle all of your email correspondence for you. Unfortunately, nobody sat down and wrote thousands of pages detailing exactly how they make every micro-decision to manage their correspondence, so the LLM just isn’t trained on that, while it has definitely been trained on thousands or tens of thousands of pages about the Roman Empire.
After almost two years of building a popular open-source agent framework and watching thousands of teams experiment with LLMs and AI agents, I’ve seen some familiar patterns emerge. A lot of what I’ve seen sounds great on paper but has systemic issues that, in hindsight, I’m sure the teams will find obvious. Some things are much easier than people expect, and some things are much harder, for reasons that are sometimes technical and sometimes practical or financial.
The 80% Illusion
Modern AI models are powerful, but they are not a finished product. They are becoming very reliable for everyday tasks, but for detailed tasks that require expert knowledge, where the LLM has probably been trained on far less data, they hallucinate at much higher rates. The areas where LLMs are particularly prone to hallucination also tend to be the areas where the stakes are highest: fields like finance, law, and medicine. Many systems now have reasoning or autonomous agentic capability, chaining one action after another. That gives LLM-driven systems incredible power, but it also creates the risk of compounding errors: the more steps an action involves, the higher the likelihood that some part of it was incorrect.
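The arithmetic behind compounding errors is worth making explicit. Here is a quick sketch, with illustrative numbers rather than measured ones:

```typescript
// Illustrative only: if each step in an agent's chain succeeds
// independently with probability p, the whole chain succeeds with p^n.
function chainSuccessProbability(p: number, steps: number): number {
  return Math.pow(p, steps);
}

// A 95%-reliable step looks fine in isolation...
console.log(chainSuccessProbability(0.95, 1).toFixed(2));  // 0.95
// ...but a 20-step workflow built from it fails most of the time.
console.log(chainSuccessProbability(0.95, 20).toFixed(2)); // ~0.36
```

The exact numbers don’t matter; the shape of the curve does. Per-step reliability that sounds impressive collapses quickly once steps are chained.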
LLMs and agentic systems are about 80% there. For an interesting interactive game, companion, or content-creation tool, this is sufficient for a good experience. Combined with web search, research agents can do an incredible job of collecting and summarizing actual research and giving high-quality answers. But for many applications, that 20% gap is substantial. It is the difference between a car that navigates safely and one that crashes twice a day. Until that last margin closes, no number of “billion-agent ecosystems” will suffice. The world is betting on intelligence that we still cannot fully trust. When Elon Musk promised full self-driving cars from optical cameras almost a decade ago, it seemed like an ambitious but reasonable claim, and the cars certainly can drive themselves down the road. But can they do it well?
Some investors and founders think agents will learn on the job. They picture digital workers improving themselves through experience like humans. This is fiction: the systems we have today can record memories, but they do not learn. The LLM has no capacity to update itself based on user experience or to learn new tasks. After training, the weights of the LLM are frozen; the model does not update, remember, or generalise beyond what it absorbed during training. Without scaffolds for feedback, testing, and correction, every “self-improving” agent simply compounds error faster.
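To make the distinction concrete, here is a minimal sketch of what “memory” means in today’s agent stacks, assuming a generic hosted model behind a hypothetical callModel function. The notes live outside the model and get pasted back into the prompt; the weights never move.

```typescript
// Hypothetical stand-in for any hosted LLM API call; nothing here
// ever touches the model's weights.
async function callModel(prompt: string): Promise<string> {
  return `stub response to: ${prompt.slice(0, 40)}...`; // placeholder
}

const memory: string[] = [];

async function agentStep(userInput: string): Promise<string> {
  // "Remembering" is retrieval: prior notes are re-read on every call.
  const context = memory.join("\n");
  const reply = await callModel(`Notes so far:\n${context}\n\nUser: ${userInput}`);

  // "Learning" is appending text to a store; the model itself is unchanged.
  memory.push(`User: ${userInput} -> Agent: ${reply}`);
  return reply;
}
```

Delete the memory array and the agent is exactly as capable, and exactly as ignorant, as it was on day one.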
There are some promising improvements to be made here. The success of DeepSeek showed us that reinforcement learning has a big part to play, and RL algorithms do have the capacity to self-improve from experience. But so far the improvement has been incremental.
Lower Expectations, Better Results
Projects survive when they accept limitations and build for the world we have, not the one we imagine. An agent that manages a single workflow reliably is worth more than one that claims to run an entire business and fails silently. The best founders design small, specific tools that perform one function with near-perfect accuracy. This is the path from novelty to necessity.
Viral success is not a flaw. It is the proving ground. Mass experimentation exposes what’s durable and where the real bottlenecks lie. But disciplined builders must resist the illusion that any single model or framework can scale an entire economy.
The real winners will not be the loudest marketers or the flashiest demos. They will be the projects that solve coordination problems. Getting systems to work together, to know that “Joe on Twitter” is the same “Joe on Discord,” is far harder than generating text. The future belongs to teams solving that problem, not the ones chasing attention.
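That coordination problem is easy to state and hard to solve. Here is a toy sketch of just the linking step, with hypothetical names and types throughout:

```typescript
// Illustrative sketch of the identity-coordination problem: linking
// platform-specific handles to one canonical identity. All names here
// are hypothetical.
type Handle = { platform: "twitter" | "discord"; username: string };

const canonicalIds = new Map<string, string>(); // "platform:username" -> person ID

function link(handle: Handle, personId: string): void {
  canonicalIds.set(`${handle.platform}:${handle.username}`, personId);
}

function samePerson(a: Handle, b: Handle): boolean {
  const idA = canonicalIds.get(`${a.platform}:${a.username}`);
  const idB = canonicalIds.get(`${b.platform}:${b.username}`);
  return idA !== undefined && idA === idB;
}

// The lookup is trivial. The hard part is populating the map truthfully,
// without a central authority, across systems that never agreed on identity.
link({ platform: "twitter", username: "joe" }, "person-42");
link({ platform: "discord", username: "joe#1234" }, "person-42");
console.log(samePerson(
  { platform: "twitter", username: "joe" },
  { platform: "discord", username: "joe#1234" },
)); // true, but only because someone asserted the link
```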
The second marker of survival is infrastructure. Agent frameworks enable ordinary developers to deploy powerful agents safely, with consistent memory and behaviour. Without stable platforms, every new experiment starts from zero. The third marker is financial autonomy. Agents that cannot transact are not autonomous at all.
Why Stablecoins Are the Missing Layer
Artificial intelligence without stablecoins is like computers without software. You can calculate endlessly but you cannot act. Banks were not built for software. They require human identities, paperwork, and compliance checks that algorithms cannot complete.
Stablecoins fix this. They give agents predictable value and the ability to pay for what they need instantly. An agent can create its own wallet, receive stablecoins, and begin operating within seconds. It no longer depends on a human to press “confirm.”
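As an illustration of how little ceremony that involves, here is a minimal sketch using ethers.js; the RPC URL and token address are placeholders, not real endpoints:

```typescript
// Minimal sketch (ethers.js v6): an agent provisions its own wallet
// with no human in the loop.
import { ethers } from "ethers";

// 1. One key pair, generated locally: no identity checks, no paperwork.
const wallet = ethers.Wallet.createRandom();
console.log("agent address:", wallet.address);

// 2. Anyone (a user or another agent) can now send stablecoins to that address.
// 3. The agent checks its balance and starts operating.
const provider = new ethers.JsonRpcProvider("https://rpc.example.invalid"); // placeholder RPC
const erc20Abi = ["function balanceOf(address) view returns (uint256)"];
const usdc = new ethers.Contract("0xUSDC_TOKEN_ADDRESS", erc20Abi, provider); // placeholder address
const balance: bigint = await usdc.balanceOf(wallet.address);
console.log("stablecoin balance:", balance);
```

Compare that with opening a bank account, which an agent simply cannot do at all.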
This shift is already underway. Crypto giant Kraken’s August acquisition of Capitalise.ai highlights the coming wave of AI-stablecoin integrations designed to support the instant, autonomous transactions needed for AI to manage money without human involvement. These developments point toward a financial internet where agents act, trade, and settle in real time.
Stablecoins are programmable, allowing agents to buy data, rent computing power, or pay other agents without permission. Coinbase’s x402 standard already allows stablecoin payments inside web requests, meaning an agent can pay for an API call during a normal interaction. That is the moment when digital intelligence becomes an economic actor.
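A rough sketch of what that flow looks like, following the general shape of x402: the header name and payload handling are simplified, and signPayment is a hypothetical helper, not part of any real SDK; consult the actual spec for details.

```typescript
// Sketch of an x402-style flow: request, get 402 Payment Required,
// pay in stablecoins, retry with proof of payment attached.
async function signPayment(requirements: unknown): Promise<string> {
  return "signed-stablecoin-payment"; // placeholder for a real wallet signer
}

async function fetchWithPayment(url: string): Promise<Response> {
  const first = await fetch(url);
  if (first.status !== 402) return first; // nothing to pay

  // The 402 body describes the amount, the recipient, and the chain.
  const requirements = await first.json();

  // The agent signs a stablecoin payment authorization for that amount...
  const payment = await signPayment(requirements);

  // ...and retries the same request with the payment attached.
  return fetch(url, { headers: { "X-PAYMENT": payment } });
}
```

No account creation, no checkout page, no human pressing “confirm”: the payment rides inside the request itself.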
Building Real Agents
Agents with wallets have an identity and accountability. Every transaction they perform is recorded on-chain, and a history of their actions and results can be easily traced. Good agents earn trust by performing well, and bad ones get automatically weeded out. The market becomes self-correcting without human bureaucracy.
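One way to see why this self-correction is plausible: with a public ledger, anyone can compute a track record. An illustrative sketch, with a hypothetical record format:

```typescript
// Illustrative sketch: deriving a trust score from an agent's recorded
// history. The record shape is hypothetical; the point is that an
// on-chain ledger makes this computable by anyone, with no central referee.
type TxRecord = { agent: string; succeeded: boolean };

function trustScore(history: TxRecord[], agent: string): number {
  const mine = history.filter((tx) => tx.agent === agent);
  if (mine.length === 0) return 0; // no track record, no trust
  return mine.filter((tx) => tx.succeeded).length / mine.length;
}
```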
Stablecoins turn raw intelligence into usable capability. They allow agents to commit to actions and carry them out. They also make auditing simpler, since every interaction leaves a record. The foundation for trust in autonomous systems is not emotion; it is mathematics.
Most of today’s agent projects will not survive. They are built on false assumptions about learning, reliability, and autonomy. Investors will soon realise that impressive demos do not equal dependable systems. The survivors will focus on trust, coordination, and payments before personality or branding.
The hype will collapse, but the technology will not. After the noise fades, the survivors will look nothing like chatbots. They will be invisible processes operating across blockchains and APIs, quietly coordinating markets at machine speed. Human involvement will move up the chain from execution to direction.
The Real Future
By the end of the decade, most economic activity will happen between software agents, not people. Humans will set objectives, but transactions, negotiations, and settlements will occur automatically. Stablecoins will underpin that economy, providing the stability and programmability needed for agents to handle money safely.
We are at the stage where intelligence has been commodified. Compute and data are no longer the differentiators. The next great competition is in coordination and control. The 99% chasing headlines will disappear. The 1% building the infrastructure will define the next century of automation.