
Most AI readiness assessments end up in a drawer. We have seen the documents: sixty-page slide decks heavy on benchmarks, light on what to do on Monday. The assessments that get used have three things in common: they audit the operation, not the lab; they rank against commercial impact, not novelty; and they tell the leader what to stop doing as well as what to start.
If you are running an Australian business between $5M and $100M in revenue and your board has asked you to bring back an AI plan, this is what an actual readiness assessment should cover and how to use the output.
What gets audited, in plain terms
A readiness assessment is not an AI maturity quiz. It is a structured look at your operation through five lenses, with a defensible commercial case at the end.
1. Workflow audit
Where in the operation are decisions being made repeatedly with the same inputs and similar outcomes? Where are people rekeying data between systems? Where does customer service spend most of its time? Where does the COO ask for a report that takes someone two days to compile? These are the candidates for AI work, and a good assessment names them by team, by system, and by person.
2. Data foundations
AI initiatives that fail at the build stage usually fail because the data was not as clean, accessible, or consented as the strategy assumed. The audit looks at where your data lives, who owns it, how it flows, what is duplicated, and what cannot be used because of consent or contractual constraint. This is not a data warehouse exercise. It is a readiness check.
3. Systems and integration
Your CRM, ERP, finance system, project management tools, and operational systems are the surface AI will integrate with. The assessment maps the API and webhook surface, identifies vendor constraints, and flags where an integration will be cheap and where it will be expensive. This is also where shadow IT shows up: the spreadsheets and Notion pages that hold real operational logic and have to be respected.
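If you want to capture this lens in something more durable than a slide, a structured inventory works. Here is a minimal sketch in Python; the system names, fields, and entries are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class Surface(Enum):
    REST_API = "rest_api"        # documented HTTP API
    WEBHOOK = "webhook"          # outbound events available
    EXPORT_ONLY = "export_only"  # CSV/report exports, no API
    NONE = "none"                # shadow IT: spreadsheets, Notion pages

@dataclass
class SystemRecord:
    name: str
    owner: str                   # the person who actually administers it
    surfaces: list[Surface]
    vendor_constraints: str      # rate limits, data residency, contract terms
    integration_cost: str        # "cheap" / "expensive" / "unknown"

# Hypothetical entries for illustration only
inventory = [
    SystemRecord("CRM", "Sales ops lead", [Surface.REST_API, Surface.WEBHOOK],
                 "AU-hosted tenant, 10k API calls/day", "cheap"),
    SystemRecord("Job-costing spreadsheet", "Ops manager", [Surface.NONE],
                 "Holds the real margin logic; no API", "expensive"),
]

for s in inventory:
    print(f"{s.name}: {[x.value for x in s.surfaces]} -> {s.integration_cost}")
```

The point of the structure is that shadow IT gets a row of its own rather than a footnote: the spreadsheet with no API is often the most expensive integration on the list.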
4. Governance and risk
Privacy Act and Australian Privacy Principle obligations, sector-specific rules (APRA CPS 230 and CPS 234 for financial services, ACQSC for aged care, professional standards for legal and accounting), data sovereignty, and your existing risk framework. The output is not a legal opinion. It is a clear view of what AI work would have to defend itself against in this organisation.
5. People and adoption
The single biggest predictor of AI success is whether the people in the workflow will use the system. The assessment looks at who would be affected, what they currently do, what would change, and where the political weight is. Adoption is not a launch problem. It is a scoping problem.
What the output should look like
The deliverable is short. Long assessments are an indication the consultant is hiding behind volume. We aim for around twenty pages, structured into three sections.
- A ranked opportunity list (typically five to twelve workloads) with effort, value, data readiness, and adoption risk for each, scored along the lines of the sketch after this list.
- A sequenced roadmap covering the next six to twelve months, with explicit dependencies between initiatives.
- A list of things to stop or pause: pilots that should be killed, vendor conversations that are not worth continuing, and assumptions that need to be retested before more spend goes in.
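To make the ranking concrete, here is a minimal scoring sketch in Python. The dimensions come straight from the list above; the weights and the example workloads are hypothetical, and any real assessment would tune both to the business.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    value: int           # commercial impact, 1-5
    effort: int          # build effort, 1-5 (higher = more work)
    data_readiness: int  # 1-5
    adoption_risk: int   # 1-5 (higher = riskier)

def score(w: Workload) -> float:
    # Illustrative weights only: value counts double, adoption risk
    # is penalised harder than effort because it kills more projects.
    return 2 * w.value + w.data_readiness - w.effort - 1.5 * w.adoption_risk

# Hypothetical workloads for illustration
workloads = [
    Workload("Invoice matching", value=4, effort=2, data_readiness=4, adoption_risk=1),
    Workload("Quote generation", value=5, effort=4, data_readiness=2, adoption_risk=3),
]

for w in sorted(workloads, key=score, reverse=True):
    print(f"{w.name}: {score(w):.1f}")
```

Run on these numbers, invoice matching (8.5) outranks quote generation (3.5) despite the lower headline value, which is exactly the kind of ordering the readout should make obvious.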
How to use the assessment without it ending up in a drawer
The hardest part of an assessment is what happens after the readout. The pattern that works in mid-market Australian businesses is to commit to the first build inside two weeks of the readout, even if it is small. The opportunity ranking should make the first build obvious. If it does not, the assessment is wrong.
Pick the workload that has the strongest combination of commercial impact, data readiness, and adoption willingness. Move it into a focused build. Use the remaining items as the input for next year's investment plan. The assessment becomes an operating document rather than a strategy artefact.
What an assessment is not
It is not a vendor pitch in disguise. It is not a pilot. It is not a justification for a platform you have already chosen. If the assessment process feels like it is pushing you towards a specific tool, the consultant is selling, not advising. Ask what they would recommend if their preferred platform did not exist. The answer tells you everything.
When you do not need an assessment
If the workload is obvious, the data is in one system, and the team is asking for it, you do not need a six-week audit. You need a four-week build. The assessment is for the situations where leadership has decided AI is real but does not yet know where to start, or where multiple teams are running uncoordinated experiments and a sequenced view is needed.
If you are in either of those situations, an assessment is the cheapest thing you will do all year. If you are not, skip it and go to build.
Related service
Want to apply this thinking to your operation? Our AI Readiness Assessment engagement is the structured next step.

