Most software acquisitions look coherent in the model. Revenue synergies seem to align, cost efficiencies are mapped, and AI capability strengthens the growth narrative. What is usually less coherent is the technology estate that has to deliver against those assumptions.
AI is increasingly embedded in the rationale behind large transactions. PwC recently noted that around a third of the largest deals completed in 2025 cited AI as part of the strategic case. When AI is priced into valuation, expectations around scalability and data maturity are set early.
However, the integration work that follows rarely starts from a clean slate. Overlapping platforms are common, data structures rarely align as cleanly as expected, and deeper architectural constraints often only become visible once systems are operating under live production load.
AI does not create that complexity, but it accelerates how quickly it becomes visible.
Two definitions of efficiency
In most transactions, “efficiency” comes up early in the synergy model. For deal teams, it typically translates into margin expansion and a leaner cost base, with growth expected to follow. For technology leaders, efficiency is often judged differently: by how resilient systems are, how secure the architecture remains, and whether change can be introduced without destabilising what is already in place.
Those definitions are not necessarily in conflict, but they tend to operate on different timelines. Financial models assume rationalisation can proceed quickly, overlapping systems can be retired, and duplicated spend can be eliminated. Engineering teams know that unwinding entitlements takes time. Re-platforming workloads is rarely linear. Even relatively contained integration changes can move more slowly than a spreadsheet implies.
AI can amplify this tension because investment in cloud capacity, data pipelines, tooling, and specialist talent often ramps up at the very start of integration, well before system overlap has been reduced or cost synergies realised.
From the outside, that can look like underperformance. From inside the technology function, it often reflects sequencing risk. You cannot optimise what you do not yet fully understand or have clear visibility into.
Acceleration increases the cost of being wrong
In most software acquisitions, there is space to discover what has actually been bought. Application inventories are validated first. Integration points are then tested in practice, which is often when contractual or architectural constraints surface that diligence did not fully expose. Immediate technical convergence is not always required to begin delivering value.
AI can change that dynamic. When advanced analytics or embedded intelligence sit at the centre of the deal thesis, infrastructure scaling and data integration cannot wait for a prolonged discovery cycle. Architectural decisions are made early, often before the combined estate has been fully tested under load.
If assumptions around data quality or workload intensity prove optimistic, the impact shows up quickly in cloud consumption and delivery timelines. AI workloads are resource-intensive, and specialist talent costs do not flex easily when schedules slip.
The risk is the reduced tolerance for iteration once execution is underway.
The real risk is rigidity, not imperfect diligence
No diligence process can fully reveal how two software estates will behave once combined. Data lineage is often opaque, contractual constraints are buried in legacy agreements, and application dependencies often only become visible once changes are made.
AI does not introduce these uncertainties, but it exposes them faster. When new capabilities depend on clean, accessible, well-governed data, weaknesses in integration become operational constraints. When model performance depends on stable infrastructure, architectural shortcuts are quickly revealed.
It is tempting to attribute post-deal friction to gaps in diligence. More often, the decisive factor is how willing an organisation is to adjust once reality diverges from the thesis. Sticking rigidly to the original integration roadmap can create more friction than revisiting assumptions early.
Technology leaders need explicit executive backing to revisit financial and integration assumptions when operational evidence diverges from the original deal model. Doing so is not a failure of nerve; it is a sign that execution is grounded in fact rather than optimism.
Where the sunk cost fallacy takes hold
Post-deal integration is rarely linear; it often surfaces new information, which means original plans tend to require adjustment. In AI-driven acquisitions, that adjustment window can close quickly.
Once a large amount of capital has been committed to a chosen platform or model stack, reversing course becomes more difficult. Teams can see friction emerging, whether integration is slower than expected or infrastructure demand is higher than forecast, yet momentum continues.
The sunk cost fallacy thrives in this environment: leaders hesitate to revisit earlier decisions because doing so appears to undermine the deal narrative. Over time, incremental investments compound, making alternatives seem progressively less viable.
The challenge is rarely a single catastrophic misstep. It is usually a series of small decisions that go unchallenged because too much has already been spent to justify pausing.
Optionality is a technical discipline
Organisations that preserve flexibility in their data platforms can avoid unnecessary lock-in and are better positioned to absorb both the change an acquisition brings and the breakneck pace of AI innovation.
That flexibility extends beyond tooling. It starts with adaptable data governance, ensuring consistency, quality and compliance of existing and newly acquired data and applications. Without that adaptability, AI investment is layered onto fragmentation, increasing both cost and operational risk.
Establishing which datasets are genuinely production-ready, which systems can be retired without destabilising the estate, and which capabilities should be deferred is part of disciplined execution.
Boards may see this as slowing down. In practice, it is how technology functions prevent short-term enthusiasm from hardening into long-term constraint.
AI shortens the distance between decision and consequence
Every acquisition carries integration risk. What AI changes is the speed at which consequences become visible. Infrastructure spend accumulates quickly. Product roadmaps are scrutinised sooner. Market expectations harden around promised capabilities.
For technology leaders, that means architectural choices should withstand scrutiny earlier in the lifecycle. The tolerance for rework is lower because cost and visibility are higher. Decisions that might once have been absorbed over several budget cycles now surface within quarters.
None of this argues against AI as part of a deal thesis. AI can unlock genuine differentiation when it is grounded in operational reality. The point is that once AI is priced into the transaction, execution discipline becomes the primary determinant of value.
In traditional software deals, the strategy often dominates the narrative. In AI-led acquisitions, architectural discipline and the readiness to adjust are what decide whether ambition translates into operational reality.