In my conversations with Irish boards, I’m seeing a dangerous rush to brand firms as ‘AI-first’. The fear of being left behind and a desire to impress the market are driving this – with boards eagerly signing off on AI strategies – but the operational reality is often starkly different.
Instead of scalable, governed systems delivering measurable returns, many businesses are struggling to get their pilots off the ground, running repackaged automation or ungoverned shadow AI – whether by design or not.
This widening gap between the boardroom’s ambition on AI and the reality on the frontline is no longer just an internal compliance or IT headache. It has become a severe liability when it comes to the perceived value of a business.
Two years ago, claiming that your business was using AI was seen almost as a golden ticket to strengthen your market appeal. But investors are now past the honeymoon phase, and adoption of AI for AI’s sake is no longer a successful approach. It’s not enough to call yourself different because you use AI; you need to show how it makes you different.
What’s behind the rise in ‘unproven’ AI?
AI tech has evolved fast, and that’s meant Irish firms have been under a lot of pressure to act fast and keep up. Fortunately, they’ve been some of the quickest to adapt; AI adoption reached 91% in Ireland in 2025, almost doubling from 49% the year before. And Ireland now ranks fourth globally for AI adoption.
It’s testament to the agility of Irish firms and highlights some of the nation’s biggest strengths: a high-quality education system, a skilled workforce, and thousands of forward-thinking multinational businesses.
However, change is now coming faster than our expertise can keep up with. While AI knowledge is still in its infancy, executives are already expected to have all the answers and lead firms through the change.
Typically, we’d build strategies based on experience. But we don’t have much experience when it comes to AI. So, in a rush to stay competitive and quell some of the initial anxiety, many executives launched AI-first strategies and initiatives without the specialist knowledge, experience and infrastructure that’s actually needed.
We are already seeing the financial fallout of this ill-informed ambition. Large Irish organisations have wasted an estimated €720 million on AI projects that produced no usable outcome, with 99% of IT decision-makers experiencing at least one project failure.
Furthermore, while 98% of Irish organisations have started their AI journey, a recent PwC survey found that only 17% of CEOs can actually point to increased revenues as a result. That is a massive execution gap, and it’s exactly what is triggering alarm bells during M&A due diligence. Without the ability to prove AI usage is secure, scalable, and – most importantly – actually delivering on returns, companies risk a valuation discount as investors are increasingly penalising unsubstantiated AI claims and unproven ROI.
We’re writing the AI rules as we go along
This improvised approach is already producing odd results. In some cases, it has led to businesses deploying tools branded as AI for tasks that are actually handled by intelligent automation (IA).
In part, this happens when the right tools or data aren’t in place to support AI. An example might be a chatbot that is really just a repackaged search bar, regurgitating existing information with no reasoning capabilities or proprietary logic.
I’ve noticed this stems from that fundamental disconnect between the boardroom and the frontline. Boards are signing off on massive budgets for AI transformation, but without the right technical oversight, teams end up buying intelligent automation with a nice interface. If a firm is sold an AI solution, that’s what they’ll call it, no questions asked. But claiming to have AI capabilities when you’re really just running a glorified search bar will not fool an investor, and it certainly won’t improve your valuation.
Another issue is employees bringing in their own AI tools, such as Claude or ChatGPT. According to Deloitte, some 65% of Irish workers now either use free external tools at work or pay for the LLM of their choice.
On the surface, everything appears fine and the business seems more efficient. But there are problems underneath. For one, confidential company data is leaking into public datasets and training models, which is likely why 61% of Irish business leaders now report being highly concerned about AI-related cybersecurity risks.
When M&A buyers or investors conduct technical due diligence, they don’t see agile innovation here; they see unquantifiable risk. Ungoverned shadow AI immediately raises red flags for IP contamination, GDPR breaches, and non-compliance with incoming frameworks like the EU AI Act. We are already seeing the cracks form: over half (54%) of Irish IT decision-makers admit they have been unable to explain their AI’s decision-making to regulators. In today’s market, that lack of explainability and governance directly translates to a valuation discount.
Then there is the immediate threat of hallucination and bias. More than a third of generative AI users believe that AI always produces factually accurate responses. Similar numbers are quoted for the proportion of users who think AI responses are unbiased. If staff are using these outputs to then inform their own decision making, the business could end up having to justify decisions that were made on incorrect data or with poor reasoning – which won’t be looked upon favourably by regulators.
What are investors looking for?
If you want to keep investors happy, you need to prove that your AI is secure, scalable and substantiated.
Investors don’t want pilot programmes or intriguing theoretical applications – they want to see that AI has been integrated into the core of the business to transform processes for the better. And they need reassurance that what you’ve built can be scaled without exponentially increasing risk and compliance concerns.
Bespoke AI tools built to handle specific problems are often much more desirable in an investor’s eyes than a general-purpose tool in a nice UI, because they will typically offer better ROI over the long-term.
I’ve seen management teams throw all kinds of resources at AI, hoping something will stick. The problem is that many underestimate the hands-on effort required to make AI implementations succeed. Rarely do you implement a solution and unlock significant cost-saving benefits on day one. It’s a long process of testing, implementing, tweaking, and testing again. It’s not an easy fix for other problems occurring within the business – AI requires its own complete commitment.
This might sound quite daunting, but there are a few key areas businesses can start with.
Review your AI toolbox
First, you need to understand exactly how AI is currently being used by staff: which tools, what kinds of tasks, and how often they’re being used.
From a security and governance perspective, firms must at minimum establish how their data is being used and where it’s being sent, and bring in guidelines to ensure staff are using AI responsibly.
But beyond compliance, this audit process also forces firms to understand their functional requirements before implementing new AI.
It’s something I can’t stress enough: if you don’t understand the needs of your employees and the problems the AI actually needs to solve, it’s unlikely your AI project will maximise ROI.
The promise of a new AI tool might sound great, but if it doesn’t actually help anybody to do anything more efficiently, it won’t make it to successful adoption.
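As a rough illustration of what that first audit pass might look like in practice – assuming usage records have been gathered from a staff survey or network logs, with illustrative tool and task names rather than any real firm’s inventory – a few lines of Python can surface which tools are in use, for what, and which lack sign-off:

```python
from collections import Counter

# Hypothetical usage records from a staff survey or gateway logs.
# Tool names, tasks, and approval status are illustrative only.
usage_records = [
    {"tool": "ChatGPT", "task": "drafting emails", "approved": False},
    {"tool": "ChatGPT", "task": "summarising reports", "approved": False},
    {"tool": "InternalAssistant", "task": "document search", "approved": True},
    {"tool": "Claude", "task": "drafting emails", "approved": False},
]

def audit(records):
    """Tally tool usage and flag unapproved (shadow) AI tools."""
    counts = Counter(r["tool"] for r in records)
    shadow = sorted({r["tool"] for r in records if not r["approved"]})
    tasks = sorted({r["task"] for r in records})
    return {"usage_counts": dict(counts), "shadow_tools": shadow, "tasks": tasks}

report = audit(usage_records)
print(report["shadow_tools"])  # tools in use without governance sign-off
```

The point isn’t the code – it’s that the output maps directly onto the questions above: which tools, which tasks, and where the governance gaps are.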
Data is the starting point, not the end
Before running with AI, firms must first build the strongest foundation they possibly can – and a big part of that is cleaning up their data.
Too often, I’ve seen AI projects fail purely because the focus was entirely on the end result.
The need to prove ROI means that firms will often approach a new project by centring all their attention on the outcome. Pilots then fail because the AI can’t handle messy data, or because it produces decisions nobody can justify.
The issue is that if you don’t fix your fragmented data now, your AI will be of little or no value. With how quickly businesses are accumulating data, not getting a grip on it now means you’ll only have a bigger problem in the future.
Firms need to come at this from the complete opposite direction. AI models cannot retroactively fix fragmented, legacy data architectures. Clean, structured, and governed data isn’t just nice to have; it’s the bare minimum. If your foundational data lacks traceability, your AI outputs cannot be trusted for commercial decision-making. Thinking that AI can magically fill in the gaps for itself is a dangerous and incredibly expensive assumption.
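A first-pass data-quality check doesn’t need heavy tooling. The sketch below assumes records arrive as plain dicts with illustrative field names, and flags the kinds of issues that quietly sink pilots: missing required fields, duplicates, and records with no traceable source system:

```python
# A minimal sketch of pre-AI data-quality checks. Field names
# ("customer_id", "source_system", etc.) are illustrative assumptions.
REQUIRED_FIELDS = {"customer_id", "created_at", "source_system"}

def quality_issues(records):
    """Return (index, description) pairs for missing fields,
    duplicates, and records with no traceable source."""
    issues = []
    seen = set()
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            issues.append((i, f"missing fields: {sorted(missing)}"))
        if not rec.get("source_system"):
            issues.append((i, "no traceable source"))
        key = (rec.get("customer_id"), rec.get("created_at"))
        if key in seen:
            issues.append((i, "duplicate record"))
        seen.add(key)
    return issues

sample_records = [
    {"customer_id": 1, "created_at": "2025-01-01", "source_system": "crm"},
    {"customer_id": 1, "created_at": "2025-01-01", "source_system": "crm"},
    {"customer_id": 2, "created_at": "2025-01-02"},
]
print(quality_issues(sample_records))
```

None of this is sophisticated, and that’s the point: if issues this basic are present at scale, the foundation isn’t ready for AI.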
Get teams engaged
The most successful AI implementations aren’t just a matter of dropping in software and letting people get on with it. Change management is a make-or-break part of the process, yet many enterprises underestimate the value of putting in effort at this stage.
Teams should be involved as early as possible, and ideally given the chance to test well before the actual go-live to ensure everything works as needed and there have been no major oversights.
Taking the time to collect feedback from employees across all levels is essential, especially during testing and the first 30 days of an implementation.
A mentality I come across often is that implementation itself equals project success, but this is a trap.
You need to start with the basics, ensuring that compliance is robust and you have a measurable return on investment – whether it be resolution time for a specific task, or something else. These matter far more to a business and investors than simply stating a solution has been installed for X number of users.
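As a worked example of the kind of metric that carries weight – using invented figures purely for illustration – the calculation itself is trivial; the discipline is in capturing the baseline before rollout:

```python
# Hedged sketch: comparing mean resolution time before and after an
# AI rollout. All figures are invented for illustration only.
def mean(xs):
    return sum(xs) / len(xs)

baseline_minutes = [42, 55, 38, 61, 47]  # pre-rollout resolution times
post_minutes = [30, 36, 28, 41, 33]      # post-rollout, same task type

improvement = 1 - mean(post_minutes) / mean(baseline_minutes)
print(f"Resolution time reduced by {improvement:.0%}")
```

A number like this, tracked against a genuine pre-rollout baseline, says far more to an investor than a count of licensed users.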
Delivering real AI to drive sustainable competitive advantage
AI itself is no longer a differentiator; it’s the norm. The market isn’t rewarding firms for the vague promise of AI – it’s rewarding those who’ve put in the work to ensure it can actually deliver.
Irish business leaders can’t coast on AI hype any longer. Moving beyond surface-level adoption towards real and measurable implementation is going to be vital to protect businesses and their market valuations going forward.
But a word of caution: a rush to adopt shouldn’t mean steaming ahead blindly into pilot theatre. AI ambition is no longer enough to impress the market or justify a premium. Boards must instead focus on the fundamentals that make AI work in practice – strong governance and oversight, clearly defined use cases that solve business problems, high-quality and well-structured data, as well as meaningful engagement from teams across the organisation.
When these elements are in place, AI shifts from being an unproven cost centre to a reliable driver of value. It’s at this point that organisations can confidently scale their initiatives, demonstrate measurable return and separate themselves in the eyes of investors.
When ambition, execution, discipline and accountability all align, AI surpasses experimentation and becomes a core capability that consistently delivers measurable outcomes and sustainable competitive advantage.