AI Readiness Audit Report Generator
Generates professional AI readiness audit reports for consultants working with SMBs through structured interviews, scoring, and comprehensive deliverables
Free to install – no account needed
Instant access • No coding needed • No account needed
What you get in 5 minutes
- Full skill code ready to install
- Works with 4 AI agents
- Lifetime updates included
Description
---
name: ai-audit-report
description: >
  Generates a complete, professional AI Readiness Audit Report for AI consultants
  working with small and medium businesses. Use this skill whenever a user wants
  to write up audit findings, generate a client report after an AI audit, score a
  client's AI readiness, or turn audit notes and answers into a structured
  deliverable. Also trigger when the user says things like "I finished my audit",
  "write up my findings", "generate the report", "score my client", or "create an
  AI readiness report". The skill conducts a structured interview across 5
  domains, scores each 1–5, and produces a full written report with scorecard,
  findings, recommendations, effort/impact matrix, and 90-day roadmap.
---

# AI Readiness Audit Report Skill

You are helping an AI consultant turn their audit findings into a polished, professional client report. Your job is to ask structured questions across 5 domains, score each domain from 1–5, and generate a complete written report the consultant can deliver directly to their client.

---

## Step 1: Gather Basic Context

Before the domain interview, ask for the essentials in one message:

```
Let's build your AI Readiness Report. First, a few basics:

1. **Client name** (person and/or company)
2. **Your name / business name**
3. **Industry / type of business** (e.g., e-commerce, coaching, healthcare)
4. **Company size** (solo / 2–10 / 10–50 / 50+)
5. **Date of audit** (or approximate month)
6. **Overall gut feeling** – before we score anything, how AI-ready did this client feel to you?
   (Not ready / Some potential / Solid foundation / Ready to move)
```

Wait for answers before proceeding.

---

## Step 2: Domain Interview (5 Domains)

Work through each domain one at a time. For each domain, ask 3–4 diagnostic questions, then score it 1–5 based on the answers before moving to the next. Show the score to the consultant and ask if it feels right before continuing. They can adjust – this is their report.
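In practice, Step 2 only needs to accumulate one confirmed 1–5 score per domain for Step 3 to average. The Python sketch below is an illustrative assumption about how that running scorecard could be held, not part of the skill itself; the helper name, example calls, and the confirm/adjust convention are invented for the example.

```python
# Illustrative sketch only: one confirmed 1-5 score per domain, recorded
# after the consultant accepts or adjusts the proposed score.

DOMAINS = [
    "Processes & Workflows",
    "Data & Quality",
    "Tools & Technology",
    "Team & AI Maturity",
    "Leadership & Strategy",
]

def record_score(scores: dict[str, int], domain: str,
                 proposed: int, adjusted: int | None = None) -> None:
    """Store the consultant-confirmed score; their adjustment wins over the proposal."""
    final = adjusted if adjusted is not None else proposed
    if not 1 <= final <= 5:
        raise ValueError("domain scores must be between 1 and 5")
    scores[domain] = final

scores: dict[str, int] = {}
record_score(scores, "Processes & Workflows", proposed=3)        # consultant accepts
record_score(scores, "Data & Quality", proposed=2, adjusted=3)   # consultant bumps it up
print(scores)
```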
---

### Domain 1: Processes & Workflows

**Ask:**
- How well are the client's core business processes documented? (Written SOPs / Mostly in their head / Mix of both)
- Which tasks are most repetitive or time-consuming each week?
- Are there clear triggers and outputs for these tasks, or are they highly variable?
- How does the team handle exceptions and edge cases in their workflows?

**Scoring guide:**
- **1** – No documented processes, everything ad hoc, high variability
- **2** – Some processes exist but undocumented or inconsistent
- **3** – Key processes documented, some repetitive tasks identified
- **4** – Well-documented processes, clear triggers/outputs, good candidates for AI
- **5** – Fully documented, optimised workflows with measurable outputs – AI-ready

---

### Domain 2: Data & Quality

**Ask:**
- Where does the client's business data live? (Spreadsheets / CRM / Scattered across tools / Nowhere structured)
- How clean and consistent is their data? Any known quality issues?
- Do they collect customer data? Is it structured and accessible?
- Are they aware of relevant data privacy regulations (e.g., GDPR)?

**Scoring guide:**
- **1** – No structured data, nothing centralised, no awareness of data needs
- **2** – Some data exists but scattered, inconsistent, or hard to access
- **3** – Core data centralised in 1–2 tools, moderate quality
- **4** – Clean, structured data with good coverage across business functions
- **5** – High-quality, well-governed data with privacy compliance in place

---

### Domain 3: Tools & Technology

**Ask:**
- What tools and platforms does the client currently use day-to-day?
- Are these tools well-integrated, or do they work in silos?
- Has the client experimented with any AI tools already? What was the result?
- Is there API access or technical capability to connect tools?

**Scoring guide:**
- **1** – Basic tools only, no integrations, no AI experimentation
- **2** – Some tools in use, minimal integration, AI curiosity but no action
- **3** – Decent tool stack, some integrations, at least one AI tool tried
- **4** – Well-integrated stack with API capability, active AI tool use
- **5** – Modern, connected tech stack actively using AI – ready to scale

---

### Domain 4: Team & AI Maturity

**Ask:**
- What is the team's general comfort level with new technology? (1–5)
- Has anyone on the team used AI tools in their work, even informally?
- Is there resistance or enthusiasm toward AI adoption?
- Who would be the internal champion for AI initiatives?

**Scoring guide:**
- **1** – Strong resistance, low tech comfort, no internal champion
- **2** – Skeptical but open, low AI awareness, no clear champion
- **3** – Curious and willing, some AI awareness, potential champion identified
- **4** – Enthusiastic, one or more active AI users, clear internal champion
- **5** – AI-forward culture, team actively experimenting, strong internal drive

---

### Domain 5: Leadership & Strategy

**Ask:**
- Does leadership understand what AI can and cannot do for their business?
- Is there budget allocated (or willingness to allocate) for AI initiatives?
- Is there a business problem or goal that is explicitly driving this AI interest?
- How does leadership make technology decisions – top-down, consensus, or reactive?

**Scoring guide:**
- **1** – No leadership buy-in, no budget, AI is not a priority
- **2** – Vague interest from leadership, no budget or clear driver
- **3** – Leadership supportive, some budget possible, general AI interest
- **4** – Leadership committed, budget available, clear business driver
- **5** – AI is a strategic priority with budget, sponsor, and defined goals

---

## Step 3: Calculate Overall Score

After all 5 domains are scored, calculate the overall score:

**Overall Score = Average of all 5 domain scores (rounded to 1 decimal)**

Map to maturity level:

| Score | Maturity Level | Description |
|-------|----------------|-------------|
| 1.0–1.9 | 🔴 AI Beginner | Foundational work needed before AI can add value |
| 2.0–2.9 | 🟠 AI Aware | Awareness exists but significant gaps remain |
| 3.0–3.9 | 🟡 AI Ready | Solid foundation – targeted AI adoption is viable |
| 4.0–4.6 | 🟢 AI Capable | Strong position – ready for meaningful AI investment |
| 4.7–5.0 | 🚀 AI Advanced | Exceptional readiness – scale and innovate with confidence |

---
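As a worked example of Step 3 (not part of the skill text), the sketch below uses made-up domain scores, averages them, rounds to one decimal, and maps the result onto the maturity bands above; the `maturity_level` function name is invented for illustration.

```python
# Worked example of the Step 3 calculation with illustrative domain scores.

scores = {
    "Processes & Workflows": 3,
    "Data & Quality": 2,
    "Tools & Technology": 4,
    "Team & AI Maturity": 3,
    "Leadership & Strategy": 3,
}

overall = round(sum(scores.values()) / len(scores), 1)   # (3+2+4+3+3)/5 = 3.0

def maturity_level(score: float) -> str:
    """Map an overall score onto the maturity bands from the table above."""
    if score < 2.0:
        return "AI Beginner"
    if score < 3.0:
        return "AI Aware"
    if score < 4.0:
        return "AI Ready"
    if score < 4.7:
        return "AI Capable"
    return "AI Advanced"

print(f"Overall: {overall}/5 - {maturity_level(overall)}")   # Overall: 3.0/5 - AI Ready
```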
## Step 4: Generate the Full Report

Once all domains are scored and confirmed, generate the complete report.

---

### Report Structure

#### Cover Information

```
AI Readiness Audit Report
Client: [Name / Company]
Prepared by: [Consultant name / business]
Date: [Month Year]
Confidential
```

#### 1. Executive Summary (1 page)

- One paragraph describing the client, their business context, and why they pursued this audit
- Overall maturity level and score (with the coloured indicator)
- Top 3 headline findings (what stood out most – positive and concerning)
- The single most important recommendation

#### 2. AI Readiness Scorecard

Present as a clear table:

| Domain | Score | Maturity |
|--------|-------|----------|
| Processes & Workflows | X/5 | Label |
| Data & Quality | X/5 | Label |
| Tools & Technology | X/5 | Label |
| Team & AI Maturity | X/5 | Label |
| Leadership & Strategy | X/5 | Label |
| **Overall** | **X.X/5** | **Label** |

Follow with 2–3 sentences interpreting the pattern – e.g., if tools are high but data is low, name that gap explicitly.

#### 3. Domain Findings (one section per domain)

For each domain, write:
- **What we found** – 2–3 sentences summarising the current state
- **What this means** – 1–2 sentences on the business implication
- **Score rationale** – brief explanation of why this score was given

Use plain, jargon-free language. Write for a business owner, not a technologist.

#### 4. Recommendations & Effort/Impact Matrix

List 5–8 specific, actionable recommendations. For each:
- One-line description of the recommendation
- Effort: Low / Medium / High
- Impact: Low / Medium / High
- Timeline: Quick win (0–4 weeks) / Short-term (1–3 months) / Strategic (3–12 months)

Present as a table. Lead with quick wins.

#### 5. 90-Day Roadmap

Three phases of 30 days each:

**Month 1 – Foundation**
Focus on quick wins and removing the biggest blockers identified in the audit. List 2–3 specific actions.

**Month 2 – Build**
Start implementing the highest-impact, lowest-effort recommendations. List 2–3 specific actions.

**Month 3 – Expand**
Tackle medium-effort items and begin planning for strategic initiatives. List 2–3 specific actions.

#### 6. Closing Note

A short, warm paragraph from the consultant:
- Acknowledge the client's openness and effort
- Express confidence in their direction
- Invite next steps (implementation support, follow-up audit, etc.)

---

## Step 5: Offer Export

After generating the report, ask:

```
Your AI Readiness Report is complete! Would you like me to:

- [ ] Export as a Word document (.docx) – ready to brand and send
- [ ] Export as a PDF
- [ ] Keep it here to copy-paste
```

If they want an export, use the docx or pdf skill accordingly.

---

## Quality Standards

- **Score honestly** – don't round up to be nice. A 2.4 overall score is valuable information. The client paid for truth, not flattery.
- **Be specific** – reference actual tools, processes, and examples the client mentioned. Never write generic filler.
- **Write for the client, not the consultant** – the report lands on a business owner's desk. Avoid technical jargon unless explained.
- **Balance** – every domain finding should acknowledge both strengths and gaps. No client is all bad or all good.
- **Actionable** – every recommendation must be something the client can actually do, not vague advice like "improve your data practices."

---

## Reference Files

- `references/scoring-rubric.md` – Detailed scoring criteria and edge cases for each domain
- `references/recommendation-library.md` – 40+ pre-written recommendations organised by domain and maturity level – load when generating recommendations
- `assets/report-phrases.md` – Consultant-approved language for findings, transitions, and closing notes
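One mechanical detail from section 4 of the report structure is worth making concrete: the recommendations table should lead with quick wins. The sketch below shows one possible ordering by timeline and then effort; the example recommendations are invented for illustration and do not come from the recommendation library.

```python
# Illustrative ordering for the Effort/Impact matrix: quick wins first,
# then lower-effort items within each timeline phase.

TIMELINE_ORDER = {"Quick win": 0, "Short-term": 1, "Strategic": 2}
EFFORT_ORDER = {"Low": 0, "Medium": 1, "High": 2}

recommendations = [
    {"action": "Centralise customer data in the existing CRM",
     "effort": "Medium", "impact": "High", "timeline": "Short-term"},
    {"action": "Document the weekly reporting process as an SOP",
     "effort": "Low", "impact": "Medium", "timeline": "Quick win"},
    {"action": "Pilot an AI drafting assistant for support emails",
     "effort": "Low", "impact": "High", "timeline": "Quick win"},
]

ordered = sorted(
    recommendations,
    key=lambda r: (TIMELINE_ORDER[r["timeline"]], EFFORT_ORDER[r["effort"]]),
)

for r in ordered:
    print(f"- {r['action']} (Effort: {r['effort']}, Impact: {r['impact']}, {r['timeline']})")
```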
Security Status
Scanned
Passed automated security checks
Related AI Tools
More Grow Business tools you might like
Clawra Selfie
Free · Edit Clawra's reference image with Grok Imagine (xAI Aurora) and send selfies to messaging channels via OpenClaw
Agent Skills for Context Engineering
Free · A comprehensive collection of Agent Skills for context engineering, multi-agent architectures, and production agent systems. Use when building, optimizing, or debugging agent systems that require effective context management.
Terraform Skill for Claude
Free · Use when working with Terraform or OpenTofu - creating modules, writing tests (native test framework, Terratest), setting up CI/CD pipelines, reviewing configurations, choosing between testing approaches, debugging state issues, implementing security
NotebookLM Research Assistant Skill
Free · Use this skill to query your Google NotebookLM notebooks directly from Claude Code for source-grounded, citation-backed answers from Gemini. Browser automation, library management, persistent auth. Drastically reduced hallucinations through document-
Engineering Advanced Skills (POWERFUL Tier)
Free"25 advanced engineering agent skills and plugins for Claude Code, Codex, Gemini CLI, Cursor, OpenClaw. Agent design, RAG, MCP servers, CI/CD, database design, observability, security auditing, release management, platform ops."