Day 10: AI Readiness Assessment 📊

This assessment evaluates your organisation's readiness across six critical dimensions. Use the sliders to rate each indicator based on your current state. The tool will generate a comprehensive AI Playbook prompt tailored to your specific readiness stage.

How to Score

Interview as many stakeholders as possible in your organisation and ask each to self-assess. For each indicator, the organisation-level score is the lowest value reported across all stakeholders; then compare results by department to see where the gaps are strongest.
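This "take the minimum" rule can be sketched in code. The departments, indicators, and scores below are illustrative only, not part of the assessment:

```python
from collections import defaultdict

# Illustrative stakeholder self-assessments: (department, indicator, score 0-2)
responses = [
    ("Engineering", "Workforce Literacy", 2),
    ("Engineering", "Engineering Maturity", 1),
    ("Finance", "Workforce Literacy", 0),
    ("Finance", "Engineering Maturity", 1),
]

# Organisation-level score per indicator: the LOWEST value reported anywhere.
org_score = defaultdict(lambda: 2)  # 2 is the top tier, so min() can only lower it
for dept, indicator, score in responses:
    org_score[indicator] = min(org_score[indicator], score)

# Per-department breakdown to locate where the gaps are strongest.
dept_scores = defaultdict(dict)
for dept, indicator, score in responses:
    dept_scores[dept][indicator] = score

print(dict(org_score))  # {'Workforce Literacy': 0, 'Engineering Maturity': 1}
```

Here Finance's single low answer drags "Workforce Literacy" down to 0 for the whole organisation, while the per-department view shows the gap sits in Finance, not Engineering.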

1. Skills & Capability

Assess the organisation's ability to execute AI initiatives safely and effectively.

Why: AI is a fundamental capability shift, not just a software upgrade. If the workforce lacks "AI Literacy", tools will either be misused (creating risk) or ignored (wasting licence spend).

Workforce Literacy (Uninformed → Isolated → Certified)
Uninformed (0 pts): General staff lack understanding of AI risks (e.g., data leakage, hallucinations). Reliance is entirely on external vendors.

Engineering Maturity (Restricted → Ad-Hoc → Standardised)
Restricted (0 pts): Technology teams are blocked from using modern AI tooling by policy. No internal capability to integrate LLMs.

Learning Strategy (None → Generic → Role-Based)
None (0 pts): No budget or resources allocated for upskilling. Learning is reactive and unstructured.

2. Culture & Change

Assess the organisational readiness to adopt new ways of working.

Why: Trust is the currency of AI adoption. If employees believe AI is primarily a cost-cutting mechanism designed to replace them, they will silently sabotage initiatives.

Strategic Clarity (Absent → Ambiguous → Defined Vision)
Absent (0 pts): No clear vision from the executive team. AI is perceived purely as a headcount-reduction mechanism.

Employee Sentiment (Fear → Tentative → Incentivised Innovation)
Fear (0 pts): High anxiety regarding job security. Usage is hidden to avoid scrutiny ("Shadow IT").

Ownership (Blockers → Scepticism → Co-Creation)
Blockers (0 pts): Senior stakeholders resist changes to established workflows.

3. Systems & Data

Assess the technical infrastructure's ability to support intelligence.

Why: AI agents cannot reason over data locked in legacy systems or offline files. Proprietary data is the competitive advantage, but it must be accessible via API and structured for machine reading.

Data Governance (No Ownership → Bureaucratic → Embedded)
No Ownership (0 pts): No defined ownership. Access is either wide open (risk) or totally locked (useless). Security policies are outdated, and Shadow AI proliferates across the organisation.

Data Reliability (Unreliable → Variable → Machine Readable)
Unreliable (0 pts): Data is untrusted, duplicated, or lacks context. Users do not trust reports, and AI has no way to distinguish fact from outdated drafts.

Data Architecture (Siloed → Fragmented → Semantic & API-First)
Siloed (0 pts): Point-to-point connections. Data is locked in legacy on-prem systems (e.g., PDFs or spreadsheets: Bronze/Unstructured). No API layer for agents to hook into.

4. Use Case Portfolio

Assess the rigour applied to investment decisions.

Why: Many organisations suffer from 'Magpie Syndrome' (chasing shiny demos). Sustainable scaling requires shifting from technology-led experiments ('Can we build it?') to value-led investments ('Should we build it?').

Investment Logic (Reactive → Efficiency Only → Balanced)
Reactive (0 pts): Projects are initiated based on hype or competitor activity rather than strategic need.

Measurement (Activity Based → Soft Metrics → Hard Outcomes)
Activity Based (0 pts): Success is defined by "launching the tool" or user volume. No link to P&L.

Feasibility (Unverified → Assumed → Validated)
Unverified (0 pts): Projects are approved without technical diligence. High failure rate during execution due to data gaps.

5. Operating Model & Governance

Assess the controls and structures managing AI deployment.

Why: Governance balances Speed, Cost, and Compliance. Without automated testing, strict spend controls, and guaranteed Data Sovereignty, Legal and Finance will indefinitely block deployment.

Testing Environments (High Risk → Slow → Automated)
High Risk (0 pts): Teams test in production with live data, or are blocked from testing entirely.

Cost Control (Unknown → Manual → Central Registry)
Unknown (0 pts): No central view of which agents are running. High risk of redundancy and unmanaged cost.

Risk & Compliance (None → Bottleneck → Compliance by Design)
None (0 pts): No policies on IP, GDPR, or AI liability. AI output goes straight to customers without review or compliance checks.

6. Workforce & Future Roles

Assess the adaptation of roles and organisational structure.

Why: AI decomposes the boundaries between roles. Maintaining existing role accountabilities while adding AI tools creates confusion and burnout. Organisations must redesign roles to focus on high-value human work.

Job Design (Rigid → Overloaded → Fluid)
Rigid (0 pts): Job descriptions are unchanged. Employees view AI tasks as outside their contractual scope.

Reskilling (None → Reactive → Proactive)
None (0 pts): No plan exists for how to utilise capacity released by automation.

Talent Strategy (Legacy → Niche → Native)
Legacy (0 pts): Recruitment criteria remain unchanged from pre-AI standards.

Total Assessment Score

0 / 36 (Experimentation Phase)

Each dimension contributes up to 6 points (three indicators, scored 0-2 each), for a maximum of 36:

Skills & Capability: 0/6
Culture & Change: 0/6
Systems & Data: 0/6
Use Case Portfolio: 0/6
Operating Model & Governance: 0/6
Workforce & Future Roles: 0/6

Understanding Your Playbook

The AI will generate a comprehensive AI Playbook based on your assessment scores. Your playbook will contain four critical chapters: