Your 5-Year Plan Isn’t Obsolete Because of AI - It’s Obsolete Because It Was Fiction
- Darren Emery

Most executive teams are heading into 2026 planning with a familiar ritual.
Strategy offsite.
Roadmap refresh.
Investment cases sharpened.
Confidence projected.
And yet, many already know the undeniable truth:
The plan won’t survive contact with reality.
Not because leaders are incompetent.
Not because teams can’t deliver.
And not even because AI is “changing everything.”
It’s because most enterprise plans were never designed to adapt to reality in the first place.
AI just removed the last layer of plausible deniability.
By the time the offsite ends, some of those plans have already been overtaken by shifts in market demand, regulatory changes, or competitive moves.
The Problem Isn’t Speed. It’s Pretend Certainty.

Let’s address the AI elephant in the room.
Yes, the cost of intelligence is collapsing.
Yes, execution is getting faster.
Yes, individual productivity is rising sharply.
But if faster execution automatically created better outcomes, large enterprises would already be unbeatable.
They aren’t.
Because execution has never been the real constraint.
The constraint is decision quality under uncertainty - and how quickly the organisation can change its mind without descending into chaos.
Most multi-year plans quietly assume:
- Stable priorities
- Predictable demand
- Clear cause-and-effect
- A linear path from strategy to delivery
None of those assumptions hold anymore - if they ever did.
Executives are now seeing every misalignment instantly, amplified across every team, every backlog, and every customer touchpoint.
AI doesn’t make this worse.
It makes it obvious.
Why 5-Year Plans Persist (Even Though Everyone Knows They’re Flawed)
If senior leaders privately doubt long-range plans, why do they keep producing them?
Three reasons.
1. Planning Is Confused with Control
Detailed plans give a sense of control in complex environments.
They look rigorous. They feel responsible.
But control through prediction collapses when the environment changes faster than the plan can be revised.
2. Roadmaps Provide Political Cover
A roadmap answers uncomfortable questions:
“Why are we doing this?”
“Why are we spending this much?”
“Why hasn’t value shown up yet?”
Even if it’s wrong, it reduces scrutiny - temporarily.
3. The System Rewards Activity, Not Adaptation
Annual funding cycles, fixed targets, and delivery commitments penalise learning and reward sticking to the script.
So organisations optimise for looking certain, not being right.
AI Exposes the Planning Illusion

AI doesn’t just accelerate delivery.
It compresses feedback loops.
- Customer response arrives faster
- Market shifts are visible sooner
- Competitive moves replicate in weeks, not years
This creates a severe mismatch:
Strategy and governance move at annual speed.
Reality moves at weekly speed.
The result?
- Backlogs grow, but value doesn’t
- Roadmaps change, but funding doesn’t
- Teams deliver faster into the same bottlenecks
Research from McKinsey and PMI shows that roughly 70% of large-scale transformation initiatives fail to deliver measurable value - and AI only magnifies this if priorities aren’t explicit.
AI doesn’t break these systems.
It just removes the excuses.
The Real Risk for 2026: Locking in the Wrong Decisions Early
Here’s the uncomfortable question executives should be asking right now:
What if we’re about to lock in the wrong priorities - faster and at greater scale than ever before?
Because that’s what most 2026 plans are doing.
Not due to lack of intelligence.
But due to structural misalignment between:
- Strategy
- Funding
- Governance
- Delivery
- Learning
This is why organisations end up:
- Busy but ineffective
- Data-rich but insight-poor
- Fast but directionless
Strategy Was Never the Slide Deck
In many enterprises, “strategy” quietly became:
- A set of investment themes
- A portfolio of projects
- A prioritised backlog from the executive team
But real strategy is simpler - and harder.
It is a small set of integrated choices:
- Who we serve
- What problems we will solve
- How we will win
- What we will not do
Everything else should flow from that.
Most plans don’t fail because teams didn’t execute.
They fail because those choices were never made explicit enough to guide execution.
Why Execution Breaks Down (Even When Everyone Is Talented)

When strategy is ambiguous, delivery fills the gap.
Teams make local trade-offs.
Roadmaps grow to accommodate every stakeholder.
Velocity looks healthy.
Throughput quietly collapses.
This is how you end up with:
- Stable teams constantly being reshuffled
- Product managers negotiating scope instead of value
- Scrum Masters doing project admin
- Leadership wondering why outcomes lag behind investment
None of this is a people problem.
It’s an operating model problem.
The Shift Leaders Must Make for 2026
If AI accelerates everything, leaders must shift focus from planning accuracy to decision adaptability.
That requires three fundamental changes.
1. From Plans → Intent
Strategy must be expressed as clear intent, not detailed prediction.
Intent answers:
- What matters now
- What success looks like
- What trade-offs we’re willing to make
It gives teams direction without false certainty.
2. From Output → Throughput
Stop asking, “Are teams fully utilised?”
Start asking, “How fast does value flow from idea to outcome?”
This reframes performance around:
- Flow
- Feedback
- Learning
- Impact
Not activity.
One financial services client of ours discovered that three major initiatives were misaligned with strategic intent - despite consuming 40% of the annual budget.
Addressing it in time prevented £10M of wasted spend.
3. From Governance as Control → Governance as Enablement
Decision rights, funding models, and prioritisation mechanisms must all match the speed of the environment.
Otherwise, AI simply accelerates work into slow approvals, unclear ownership, and political trade-offs.

This Is Where Most 2026 Roadmaps Will Fail
Not in the ideas.
Not in the ambition.
Not in the technology.
They’ll fail in the translation.
Strategy → priorities → funding → delivery → outcomes.
That translation is where:
- Assumptions hide
- Trade-offs get avoided
- Accountability blurs
- Learning arrives too late
By the time results disappoint, the organisation is already committed - financially, politically, and emotionally.
A Different Question to Ask Before You Lock the Plan
Before finalising your 2026 roadmap, ask this:
If the environment shifts materially in six months, can we change direction without rewriting the entire plan?
If the honest answer is no, the risk isn’t AI.
The risk is rigidity.
What Strong Leaders Are Doing Differently
The executives navigating this well are not chasing perfect foresight.
They are:
- Making strategic choices explicit
- Designing for fast learning
- Aligning funding and governance to value streams
- Creating clarity on what won’t be pursued
They treat planning as a hypothesis, not a promise.
A Practical Invitation
If you’re heading into 2026 planning and any of this resonates, here’s the practical question:
Can you clearly trace today’s priorities back to strategic intent - and forward to measurable outcomes?
If not, that gap will only widen under AI acceleration.
This is exactly what our Strategy to Execution workshop is designed to address.
In a focused, executive-level session, we help leadership teams:
- Make strategic choices explicit
- Stress-test priorities against real constraints
- Expose where roadmaps quietly diverge from intent
- Create a clear line of sight from strategy to delivery
No frameworks.
No process theatre.
No false certainty.
Just clarity - before the plan is locked.
There’s limited capacity for executive-level workshops before planning season.
A 30-minute session now can prevent months of misaligned investment later.

Final Thought
AI didn’t make your 5-year plan obsolete.
It just made it impossible to ignore the fiction it was built on.
If you want 2026 to be adaptive rather than fragile, now is the moment to address it - not after the roadmap is already committed.
If you’d like to explore this, let’s have a 30-minute conversation before planning season does it for you.
