Measuring the ROI on AI: A Boardroom Guide for UK Enterprises
- Darren Emery
The ROI Imperative: Why boards must treat AI as a capital allocation decision.

Across UK enterprises, 8 in 10 AI projects never scale. The reason isn’t technical. It’s strategic. This guide shows how boards can avoid becoming the next cautionary tale by treating AI as a capital allocation decision.
“In theory, there is no difference between theory and practice. In practice, there is.” - Yogi Berra
Artificial intelligence has had no shortage of headlines. From constant claims about replacing entire industries to executives being asked (again) by their boards, “What’s our AI strategy?”, the pressure to act is intense.
But in reality, the vast majority of AI initiatives never make it out of pilot. Gartner has long suggested that around 80% of AI projects stall or fail to scale. And the reason is rarely technical. It’s strategic.
AI often becomes a pseudo-initiative - a proof of concept here, a chatbot there, maybe a team of data scientists working on a cool new model that no one knows how to deploy. Impressive demos. Minimal impact.
For the boardroom, this raises the critical question: how do we frame AI investments not as scattered experiments, but as deliberate strategic choices with measurable return?
That is the ROI imperative. And it’s where most organisations are falling short.
From Hype to Hard Numbers

Boards don’t care about models, GPUs, or the elegance of your data pipeline. They care about outcomes: revenue growth, margin improvement, risk mitigation, and innovation that sticks.
When AI is positioned as an R&D curiosity, it gets R&D results - slow, unpredictable, hard to scale. But when AI is positioned as an investment decision - competing with other portfolio bets - the level of rigour changes entirely:
Where does this create measurable value?
What strategic priority does it support?
What is the cost of delay if we don’t invest?
These are not data science questions. They are boardroom questions. Yet too many firms leave the AI conversation trapped in the “innovation lab,” disconnected from the organisation’s real priorities.
The result? Plenty of AI pilots. Few AI profits.
A Familiar Story in Financial Services
Some months ago, I worked with a major financial services firm in the City. They had set up a dedicated AI task force with no shortage of talent - a half-dozen data scientists, a handful of consultants, and a big ambition to explore “how AI could transform the business.”
The problem? They were operating almost entirely in isolation from product and engineering. One business unit wanted to test AI for customer service, another for fraud detection, another for credit scoring. Each was running their own pilots, with no alignment, no shared roadmap, and no clear integration back into the product portfolio.
On paper, it looked like progress. In practice, it was fragmentation. An expensive initiative with no line of sight to strategy, no agreed measures of success, and no ability to scale across the enterprise.
That story is not unusual. It illustrates the implementation challenge facing AI in UK financial services: without integration into strategy, AI becomes a series of “interesting” activities rather than a meaningful investment.
Why AI Initiatives Fail in the Enterprise

In our work with mid-to-large UK enterprises, we see three near-universal patterns when AI investments under-deliver:
Scattergun adoption.
AI initiatives emerge wherever an enthusiastic exec has budget. The CFO funds a fraud-detection proof of concept, the CMO buys an AI-driven customer analytics tool, and the CIO experiments with a code assistant. None of these are bad ideas - but without alignment, they’re isolated, duplicative, and unscalable.
Strategy lost in translation.
Boards talk about customer centricity or operational efficiency, but AI teams hear “build models.” Somewhere between strategy decks and delivery backlogs, the connection to outcomes is lost.
No feedback loops.
AI models are not “set and forget.” They drift, degrade, and demand ongoing learning. Without a feedback mechanism tied to business outcomes, organisations have no way of knowing whether their investment is working - until it’s too late.
These are not technology problems. They are leadership and alignment problems.
The ROI Lens: How Boards Should View AI
Reframing AI as an asset class in the portfolio - not a playground - changes the conversation. The lens shifts from hype to hard trade-offs.
For a UK board, the right investment questions are:
Fit for Purpose - Does this AI initiative directly enable one of our strategic choices (e.g. differentiation, operational efficiency, customer experience)?
Time to Value - What is the realistic timeline to observable business impact, not just technical deployment?
Scalability - Can this capability scale across the enterprise, or is it a niche tool locked in one function?
Measurability - What leading and lagging indicators will prove ROI beyond vanity metrics?
Think of AI like any other capital allocation. You wouldn’t fund a dozen random projects across your estate without a master plan (would you?). So why tolerate that with AI?
Adapt. Align. Accelerate.

At Agilicist, we use a simple framing model when organisations are considering big bets like AI:
Adapt - Understand the landscape. Where does AI genuinely intersect with your business model and strategic intent? This isn’t about “what’s possible” but “what matters.”
Align - Create coherence across the enterprise. AI investment must map cleanly to strategy and integrate into the portfolio, not compete for attention. Alignment is the difference between a hundred experiments and a handful of scalable successes.
Accelerate - Build momentum by executing with discipline. Define clear metrics, feedback loops, and accountability so you’re not just deploying AI, but continuously proving its value.
This model is deliberately simple because complexity is already the enemy in most large organisations. When boards ask, “Where’s the ROI?” the answer must be obvious, measurable, and unarguable.
The UK Context: Pragmatism Over Hype

In my view, one of the quiet truths in the UK market is that our organisations tend to be more cautious with technology bets than their US counterparts. That can be a strength - less money wasted on hype - but also a risk, as competitors who align AI strategically may accelerate ahead.
The challenge, then, is finding the middle ground: avoiding paralysis by analysis whilst ensuring AI is never funded without a deliberate, robust business case.
Research from McKinsey suggests that companies integrating AI into strategic priorities can see EBIT margins increase by 3–5%, while scattergun adopters rarely scale beyond the pilot stage. That’s the difference between AI as a curiosity and AI as a competitive advantage.
A Framework for Executives
When your board inevitably asks, “What’s our AI strategy?”, here’s a more useful reframing:
What strategic priority is AI enabling? (If none, don’t fund it.)
How does this initiative integrate across the portfolio?
What outcomes and metrics will prove value within 12-18 months?
What feedback loops ensure continuous learning and adjustment?
If you can’t answer these questions, you don’t have an AI strategy. You have an AI wishlist.
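To make the ROI question concrete, here is a back-of-the-envelope sketch of the arithmetic a board might apply to a single initiative. All figures are hypothetical, chosen purely for illustration - the point is the discipline of stating costs, benefits, and a time horizon up front, not the specific numbers.

```python
# Back-of-the-envelope ROI check for a single AI initiative.
# All figures are hypothetical placeholders, not benchmarks.

def roi_summary(annual_benefit, build_cost, annual_run_cost, horizon_years=1.5):
    """Return ROI over the horizon and simple payback period in months."""
    total_cost = build_cost + annual_run_cost * horizon_years
    total_benefit = annual_benefit * horizon_years
    roi = (total_benefit - total_cost) / total_cost
    net_annual = annual_benefit - annual_run_cost
    # Payback: how long the build cost takes to recover from net annual benefit.
    payback_months = (build_cost / net_annual) * 12 if net_annual > 0 else float("inf")
    return roi, payback_months

# Hypothetical fraud-detection initiative: £400k to build, £150k/yr to run,
# an expected £600k/yr in prevented losses, judged over an 18-month horizon.
roi, payback = roi_summary(600_000, 400_000, 150_000, horizon_years=1.5)
print(f"ROI over 18 months: {roi:.0%}, payback: {payback:.0f} months")
```

If an initiative cannot produce even a rough version of this calculation - with owners accountable for the benefit line - it belongs on the wishlist, not in the portfolio.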
Closing: Avoiding the Cautionary Tale

Every executive I speak with knows the headlines: bold AI initiatives that captured the imagination but burned through budget and delivered little. No one wants to be that cautionary tale in front of their board.
The ROI imperative is simple but demanding: treat AI as a portfolio choice, not a side experiment. Link it tightly to strategy, measure it rigorously, and ensure alignment across the whole organisation.
At Agilicist, this is the work we do. We help leadership teams move from random bets to coherent, ROI-driven portfolios. We help them adapt to the opportunity, align around strategy, and accelerate execution towards outcomes.
So if you’re about to commit serious capital to AI, ask yourself: are you funding an experiment, or are you building an advantage?
The board won’t care about your demos. They’ll care about your results.