
Most AI strategy decks die quietly within ninety days of the readout. They are not wrong, exactly. They are just not designed to survive contact with operations. The strategy treats AI like a transformation programme. Operations treats it like every other vendor pitch. The middle ground is a roadmap that is opinionated about sequence, honest about risk, and built so a CFO can defend it next quarter.
Why most AI strategy work fails
We have audited a lot of strategy documents in the last two years. The failure patterns are consistent.
- Too many initiatives. Twenty workloads on a roadmap mean none of them get done. Three to five well-sequenced initiatives outperform a dashboard full of red dots.
- No kill criteria. Every initiative is described in terms of what success looks like, but not what would make the team stop. Without an explicit kill threshold, pilots become zombies.
- Data assumptions hidden in footnotes. Most strategies assume the data is available, clean, and consented. The actual data work (usually the longest pole in the tent) is buried under a heading called 'foundations'.
- Governance treated as a phase. Privacy, model risk, audit, and incident handling are bolted on at the end. By then the architecture has already made decisions for you.
- No commercial owner. The strategy gets a sponsor in the leadership team but no commercial owner who reports on whether the spend earned its keep.
What a real AI strategy looks like
An AI strategy worth executing is short, sequenced, and load-bearing. It answers four questions and stops.
1. Where are we starting?
Honest description of current AI activity, including the experiments your teams have been running quietly. List what is working, what is not, and what should be wound down. Most organisations have more in flight than leadership realises. Mapping the current state stops you from accidentally funding a project twice.
2. What are we doing first?
One initiative, named explicitly, with a sponsor, a budget, a six-to-twelve-week build window, and a kill criterion. The first initiative is the most important decision in the strategy because everything that follows is downstream of it. Pick the one that is most likely to succeed and most likely to teach you something useful, even if it is not the highest-value option.
3. What are we doing next?
A sequenced view of the next two to four initiatives, with the dependencies on the first one made explicit. This is where the strategy stops being a list and starts being a plan. If initiative two cannot start until initiative one has produced specific outputs, that should be on the page.
4. How will we govern it?
The operating model. Who owns AI in the organisation. Who reviews initiatives at each gate. How model risk, privacy, and security feed into the build cycle. Where the commercial accountability sits. This section is the difference between a strategy that survives a board review and one that does not.
Sequencing in practice
The right first initiative is rarely the one with the highest theoretical ROI. It is the one where the data is already clean, the team is already asking for it, the workflow is already structured, and the political backing is real. We have started clients with low-glamour automation work (quote turnaround, claims triage, scheduling support) because those workloads delivered value inside a quarter and built the credibility for the bigger initiatives.
The wrong first initiative is the one the executive is most excited about. Excitement is not a substitute for readiness. If the data is messy, the workflow is fragmented, and adoption is uncertain, the project will land badly and poison the next round of investment.
Where Australian context changes the strategy
Three things are different about doing AI strategy in Australia. First, data sovereignty matters more than the global market average. Most boards we work with want Australian-hosted models or in-environment deployment by default. Second, regulatory exposure varies enormously by sector. Financial services sit under APRA, aged care under ACQSC, professional services under their respective bodies. The strategy has to respect that perimeter. Third, the talent market is small enough that you cannot assume you will hire your way out of capability gaps. The strategy has to plan for partner-supported delivery and team capability building in parallel.
What to do with the strategy after the readout
Commit to the first initiative inside two weeks. Set a quarterly review on the roadmap, not an annual one. Treat governance as a live system, not a document. And keep the strategy short. A strategy document that needs a memo to summarise it has already failed.
If you are coming into the next planning cycle with AI on the agenda, the question is not whether to write a strategy. It is whether the strategy you write is one your team can act on next Monday. Most are not. The fix is honest scoping, ruthless sequencing, and a roadmap short enough to read in twenty minutes.
Related service
AI Strategy and Roadmap
Want to apply this thinking to your operation? Our AI Strategy and Roadmap engagement is the structured next step.
Learn about AI Strategy and Roadmap

