What Happens During an AI Readiness Assessment? Complete Walkthrough
If you're considering an AI readiness assessment but don't know what to expect, this is the complete walkthrough — every phase, every deliverable, what you'll learn, and how long it takes.
Kevin Zai
"AI readiness assessment" appears on a lot of consulting websites. The term is rarely defined precisely, which makes it hard to know what you're actually buying.
If you're considering an assessment but don't know what to expect — what happens, who's involved, how long it takes, what you'll actually get — this is the complete walkthrough. This is exactly how we run assessments, described in enough detail that you can evaluate whether it matches what you need.
What an Assessment Is (and Isn't)
An AI readiness assessment is a structured evaluation of an organization's current state across the dimensions that determine AI project success. It produces a diagnosis and a prioritized action plan.
It is not:
- A vendor selection exercise (though it may surface tool recommendations)
- A proof of concept or prototype (that's a separate engagement)
- A year-long transformation project (it's a bounded diagnostic)
- A sales pitch for services you don't need (a good assessment tells you when you're ready to move forward without outside help)
A well-run assessment takes 2-4 weeks and produces 3-5 specific deliverables. Here's the detailed breakdown.
Phase 1: Stakeholder Interviews (Days 1-5)
The first phase is structured conversations with the people who will be most affected by AI — and the people whose buy-in determines whether AI projects succeed.
Who we talk to
Executive sponsor (1-2 hours): What is the business case for AI investment? What are the top 3 outcomes you're trying to drive? What does success look like in 12 months? What organizational obstacles are you most worried about?
Operations leads (1 hour each): What are the most painful, high-volume, repetitive processes in your area? Where do bottlenecks occur? What data do you have, and how is it structured? What have you tried to fix before, and why didn't it work?
IT/Engineering lead (1-2 hours): What does your current infrastructure look like? What's the API surface area of your core systems? What security and compliance constraints apply? What's the team's current capability with ML or AI tooling?
Front-line staff sample (30 minutes each, 3-5 people): What do you actually do all day? Where do you spend time on things that feel like they shouldn't require your judgment? What tools would make your job better?
What we're learning
The stakeholder interviews answer questions the scorecard can't: the political dynamics that will affect adoption, the institutional knowledge that makes certain approaches viable or unviable, the real reasons previous initiatives didn't stick.
They also surface discrepancies. When a CEO believes AI adoption is a top priority and the IT team has never heard of an AI budget, that gap is data — and it's critical data for the action plan.
Phase 2: Data and Systems Audit (Days 3-8, parallel with interviews)
Concurrent with the interviews, we conduct a technical audit of the data and systems that AI would operate on.
Data quality review
We look at a sample of the organization's key datasets:
- What fields exist? What's the completeness rate on each?
- Are categorical fields consistent, or does the same value appear spelled different ways, merged into one field, and so on?
- Are there duplicates? How are they handled?
- Is there documentation for what each field means?
- Who owns data quality? Is there a process for maintaining it?
This review doesn't require access to sensitive data — it requires access to schema documentation, sample anonymized records, and the people responsible for data systems.
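The completeness and duplicate checks above can be sketched in a few lines of Python. The field names and sample records here are hypothetical; a real review would run against the organization's own anonymized exports:

```python
from collections import Counter

def profile_records(records, fields):
    """Report per-field completeness rates and the number of duplicate rows."""
    total = len(records)
    report = {}
    for field in fields:
        filled = sum(1 for r in records if r.get(field, "").strip())
        report[field] = round(filled / total, 2) if total else 0.0
    # Duplicate check: normalize values before comparing, since the same
    # record often reappears with different casing or whitespace
    keys = Counter(
        tuple(r.get(f, "").strip().lower() for f in fields) for r in records
    )
    report["duplicate_rows"] = sum(n - 1 for n in keys.values() if n > 1)
    return report

# Hypothetical anonymized sample
sample = [
    {"email": "a@example.com", "status": "Active"},
    {"email": "a@example.com", "status": "active"},  # duplicate, inconsistent casing
    {"email": "", "status": "Churned"},              # missing email
]
print(profile_records(sample, ["email", "status"]))
# → {'email': 0.67, 'status': 1.0, 'duplicate_rows': 1}
```

Even a crude profile like this makes the conversation with data owners concrete: instead of "is the data clean?", the question becomes "why is this field only 67% complete, and who fixes it?"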
Systems connectivity
We map what systems the organization uses and assess:
- Which have APIs or export capabilities?
- What's the authentication model?
- Are there existing integrations between systems?
- What are the data transfer and retention policies?
Systems that can't be connected to AI tools can't benefit from AI automation. This step often surfaces integration constraints that weren't on anyone's radar.
Workflow documentation review
We review existing process documentation:
- Are core workflows documented?
- Are decision criteria written down?
- Are exception cases captured?
Processes that aren't documented need to be documented before AI can automate them — and that documentation work is often a pre-project requirement that surfaces during assessment.
Phase 3: Use Case Identification and Scoring (Days 6-10)
Using the inputs from interviews and the audit, we build a list of candidate AI use cases and score them on a standard framework.
The use case framework
Each candidate use case is scored on:
Business value (1-5): What is the dollar or time value of improving this process? How many people are affected? How often does it occur?
Technical feasibility (1-5): Is the data available and clean enough? Can the systems be connected? Is the AI task type (classification, generation, extraction) one that current models handle reliably?
Organizational readiness (1-5): Is there a champion? Is the process documented? Is there resistance from the affected team? Has something similar been tried before?
Time to value (1-5): How quickly could a working prototype be built and tested? How long until production deployment? Are there dependencies that extend the timeline?
The composite score determines the priority ranking. High-scoring use cases become the recommended starting points.
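As a rough sketch of how the four dimension scores roll up into a ranking, here is an equal-weighted composite; the use case names, scores, and equal weighting are illustrative assumptions, not a fixed formula:

```python
def composite(use_case, weights=None):
    """Weighted average of the four 1-5 dimension scores (equal weights by default)."""
    dims = ("business_value", "feasibility", "readiness", "time_to_value")
    weights = weights or {d: 1.0 for d in dims}
    total_weight = sum(weights[d] for d in dims)
    return sum(use_case[d] * weights[d] for d in dims) / total_weight

# Hypothetical candidate use cases scored during an assessment
candidates = [
    {"name": "Invoice data extraction",
     "business_value": 4, "feasibility": 5, "readiness": 4, "time_to_value": 5},
    {"name": "Demand forecasting",
     "business_value": 5, "feasibility": 2, "readiness": 3, "time_to_value": 2},
]

ranked = sorted(candidates, key=composite, reverse=True)
for uc in ranked:
    print(f"{uc['name']}: {composite(uc):.2f}")
# → Invoice data extraction: 4.50
# → Demand forecasting: 3.00
```

Note how a high-value use case (forecasting) can still rank below a modest one when feasibility and time to value drag the composite down — which is exactly the pattern the deprioritization reasons below describe.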
Why use cases get deprioritized
Not every compelling-sounding use case makes the priority list. Common deprioritization reasons:
- Data isn't ready (would require 3 months of data clean-up before AI can use it)
- Systems can't be connected (no API, vendor won't support integration)
- Change management risk too high (team resistance + no internal champion)
- Better non-AI solution exists (a simpler workflow automation would do the job)
Honest use case ranking requires willingness to deprioritize use cases that look appealing but aren't actually ready. That honesty is what separates a good assessment from a sales exercise.
Phase 4: Readiness Scoring (Days 8-10)
We produce the organization's score across the 6 readiness dimensions:
- Data quality and accessibility
- Process documentation
- Technical infrastructure
- Leadership alignment
- Team AI fluency
- Use case clarity
Each dimension gets a score and a narrative explanation of what's driving the score and what it would take to improve.
Phase 5: Deliverables and Readout (Days 10-14)
Deliverable 1: Readiness Score Report (15-20 pages)
Full readiness scores across all 6 dimensions, with evidence, gaps identified, and improvement actions for each gap. Includes a 90-day, 6-month, and 12-month view.
Deliverable 2: Top 5 Use Case Briefs
One-page summary for each of the top 5 recommended use cases:
- Problem statement and business value
- Technical approach
- Data requirements
- Estimated implementation timeline and cost
- Success metrics
- Known risks
Deliverable 3: Prioritized Roadmap
A phased roadmap showing recommended sequencing across 12 months:
- Phase 1 (Months 1-3): Foundation work (data, documentation, infrastructure)
- Phase 2 (Months 2-6): First deployment — the highest-scoring use case
- Phase 3 (Months 4-12): Expand to top 3-5 use cases
Readout session
A 2-hour presentation and discussion with the key stakeholders: findings, scores, recommendations, and prioritization rationale. This is where we walk through the roadmap and answer questions.
Decision point
After the readout, organizations have three paths:
- Implement the roadmap internally (we've given them everything they need)
- Engage us for Phase 2 work (implementation, starting with the top-priority use case)
- Pause and address readiness gaps before returning to AI investment
A third of our assessments end with the organization doing the work themselves, with the assessment as their guide. That's a good outcome — it means the assessment did its job.
Ready to see where you stand? Book your AI readiness assessment — we'll walk you through the full process, starting with a 30-minute scoping call to make sure it's the right engagement for your situation.