After the first practical years of AI, one central question is emerging for many companies: Where is the ROI? Despite advances in foundation models and the explosion of AI pilots, many organisations are discovering that intelligence alone is not enough to guarantee results. As it stands, AI agents are not autonomous enough, model usage suffers from “context rot” or quality degradation, and traditional database concepts are reaching their limits.
What’s evident is not an AI failure, but a structural one. The next phase of AI adoption will be defined less by the models themselves and more by the context in which they are applied. Companies that rethink how information is structured, retrieved, governed, and adapted in real time, in alignment with the following trends in 2026, will significantly advance their practical use of AI.
1. AI Reality Check: Scaling in Focus
Feedback from the business world is clear: Most AI projects are still not delivering what was originally expected. According to an MIT study, 95% of pilot projects do not generate measurable outcomes. Gartner expects that 40% of agentic AI projects will fail by 2027 – held back by costs, unclear ROI, and unresolved risks. This reveals the GenAI paradox: general-purpose AI tools and assistants can be rolled out quickly, but their ROI is hard to measure.
However, it is premature to speak of AI frustration. The end-user hype surrounding ChatGPT, Copilot & Co. – and the enormous investments made by tech giants – has created outsized expectations. In reality, AI systems must be deeply and securely integrated into existing processes, data structures, and IT landscapes – and that integration requires time, experimentation, and adaptation.
2. AI Agents: The New Trainees
Amidst the rise of AI agents, a gap is emerging between the promise of AI and what is currently being delivered. Autonomous “armies of agents” operate behind the scenes to take on time-consuming research tasks. According to McKinsey, while most companies are experimenting with AI agents, only 23% have moved them into production. Their usefulness is highly context-dependent, and their practical application remains narrowly limited. Companies must soberly determine where agents actually have an impact.
The issue is usually not a “lack of intelligence” in the models. Rather, context and instructions were not conveyed clearly enough to guarantee relevant and reliable results. For functional integration, AI agents therefore need – similar to new employees – a kind of onboarding. They must be trained, informed, monitored, and regularly corrected through review processes.
To integrate AI agents into existing workflows and organisational culture, companies must also invest in training their employees to validate the results of their AI colleagues and understand the underlying limitations of the models. This requires clear governance, new roles, and flatter structures.
3. Context Engineering: Information Architecture for AI
The underlying issue is that AI – even in agentic, iterative architectures – is only as good as the context it receives. Many people still think of prompting primarily as giving direct instructions, which often results in too little, too much, or imprecise input. In real applications, however, the system’s main task is to dynamically shape the context so that the LLM receives exactly the information it needs for its next step.
At the same time, overly long context leads to errors, friction losses, and a decline in attention (context rot). Equally, models become confused when too many or very similar tools are used (context confusion), or they stumble over contradictory work steps (context clash). Well-curated context, or context engineering, thus becomes a fundamental requirement for reliable AI.
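A minimal sketch of what such curation can look like in practice: before each model call, the system keeps only the most relevant, non-duplicate snippets within a fixed token budget. The `Snippet` type and its relevance score are hypothetical placeholders for whatever retriever an organisation actually uses.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    relevance: float  # hypothetical score from an upstream retriever
    tokens: int

def curate_context(snippets: list[Snippet], budget: int) -> list[Snippet]:
    """Keep the most relevant snippets within a token budget,
    dropping near-duplicates that would only add noise."""
    chosen: list[Snippet] = []
    seen: set[str] = set()
    used = 0
    for s in sorted(snippets, key=lambda s: s.relevance, reverse=True):
        key = s.text.strip().lower()
        if key in seen:               # duplicate input invites context confusion
            continue
        if used + s.tokens > budget:  # a hard budget guards against context rot
            continue
        seen.add(key)
        chosen.append(s)
        used += s.tokens
    return chosen
```

The point is not the specific heuristic but the discipline: every model call gets a deliberately assembled, bounded context rather than everything that happens to be available.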
4. Push vs. Pull: Data on Demand Instead of Data in Advance
Context quality also depends on how and when information is supplied to the model.
While earlier approaches, such as Retrieval-Augmented Generation (RAG), operated according to the push principle – supplying information up front – a pull principle is now taking hold: the AI decides for itself which information it lacks and retrieves it specifically using a tool.
As a result, AI is increasingly taking on an organisational role, analysing tasks, identifying work steps and required information, and selecting tools or data sources that close these gaps. For companies, this means thinking like an information architect. What matters is not the quantity but the principle of “minimum viable context” (MVC). The AI should receive exactly the information it needs for the next step – no more and no less.
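The pull principle can be sketched as a simple loop: instead of stuffing all documents into the prompt up front, the agent repeatedly asks what it still lacks and fetches exactly that. The tool name, planner logic, and customer ID below are hypothetical stand-ins for a real model and real data sources.

```python
def lookup_customer(customer_id: str) -> dict:
    """Stand-in for a real data source behind a tool interface."""
    return {"id": customer_id, "tier": "gold"}

TOOLS = {"lookup_customer": lookup_customer}

def plan_next_call(task: str, known: dict):
    """Stand-in for the model deciding which information it is missing."""
    if "customer" in task and "tier" not in known:
        return ("lookup_customer", {"customer_id": "C-42"})
    return None  # minimum viable context reached – stop pulling

def pull_loop(task: str) -> dict:
    """Pull data on demand until the task's context gaps are closed."""
    known: dict = {}
    while (call := plan_next_call(task, known)) is not None:
        name, args = call
        known.update(TOOLS[name](**args))
    return known
```

The loop terminates precisely when the “minimum viable context” is assembled – the agent never holds more than the task requires.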
5. Graphs: A Navigation System for AI Agents
Pull-based systems still require a way to cut through information efficiently – a capability that traditional data structures struggle to provide.
The information AI needs next depends heavily on the use case: sometimes deep, linear context chains; sometimes broad, branching knowledge structures; sometimes clusters of related information; sometimes just a single precise snippet. This is where traditional data structures begin to falter. Graph databases offer a structurally different approach.
Especially in combination with AI agents, graphs will move more into focus in 2026. As AI systems increasingly coordinate decisions, tools, and processes independently, they require robust and transparent context models. Graphs link knowledge, actions, and interactions in real time, making agents navigable, reviewable, and scalable. This creates a semantic information layer – the Knowledge Layer – that not only delivers more precise answers but also enables agents to understand where they are, what they are doing, why they are doing it, and what consequences their next step will have.
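To make the navigation idea concrete, here is a toy knowledge graph sketch: entities connected by labelled relations, and a breadth-first search that returns the chain of edges an agent could cite to explain how a conclusion was reached. The entities and relation names are invented for illustration; a production system would use a graph database rather than an in-memory dictionary.

```python
from collections import deque

# Toy knowledge graph: node -> list of (relation, neighbour) edges.
GRAPH = {
    "Order#17":   [("placed_by", "Customer#3")],
    "Customer#3": [("lives_in", "Berlin"), ("owns", "Account#9")],
    "Account#9":  [("has_status", "overdue")],
}

def trace(start: str, goal: str):
    """Breadth-first search returning the relation path an agent
    could follow (and audit) from start to goal, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in GRAPH.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None
```

The returned path is exactly the kind of reviewable trail the Knowledge Layer promises: each hop is an explicit, named relationship rather than an opaque similarity score.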
6. The Database of the Future: Adaptive
Once context becomes dynamic and adaptive, the limitations of today’s databases become clear.
While hardware and models are advancing, the databases beneath them are still stuck in the thinking of the 1970s, meaning AI systems are operating on architectures that were never built for them.
The next generation of AI databases might function similarly to “live code.” Queries are rewritten and optimised iteratively during execution, borrowing from modern compiler designs, such as Just-in-Time (JIT) techniques. The execution plan continually adapts to data distributions, load patterns, and the available hardware. The result is a permanent feedback loop in which the database becomes more efficient with every iteration, even as complexity and data volumes grow, forming the foundation for the knowledge layer that AI agents need.
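The feedback loop described above can be illustrated with a deliberately simplified sketch: an executor that samples the data at runtime, estimates each filter’s selectivity, and reorders the predicates so the most selective one runs first. This is a thought experiment in plain Python, not a real query engine; actual adaptive execution happens inside the database’s compiled plan.

```python
def adaptive_filter(rows: list[dict], predicates: list) -> list[dict]:
    """Apply filters in an order chosen from observed data, not a
    fixed plan: measure selectivity on a sample, then run the most
    selective predicate first to shrink intermediate results early."""
    sample = rows[:100]

    def selectivity(pred) -> float:
        if not sample:
            return 1.0
        return sum(1 for r in sample if pred(r)) / len(sample)

    # Runtime feedback: reorder the plan based on what the data looks like.
    out = rows
    for pred in sorted(predicates, key=selectivity):
        out = [r for r in out if pred(r)]
    return out
```

The result is the same regardless of order – what changes is the work done along the way, which is precisely the dimension an adaptive, JIT-style engine keeps optimising as data distributions shift.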
Preparing for tangible AI ROI
The question of ROI in 2026 will be answered by how organisations build systems that provide AI with the right context at the right time.
Agents need to understand where they are, what they are doing, and why at any given moment. That demands information architectures that are dynamic, relational, and adaptive. Knowledge graphs and graph databases will be central to this next phase, turning AI into a system that can reason over real-world complexity and grounding intelligence in structured context.