RTFM Testing
Test documentation quality by spawning fresh agents with zero context. Validates that docs are complete, clear, and usable by newcomers. Use when testing READMEs, tutorials, setup guides, or API docs before publishing.
Description
---
name: rtfm-testing
description: Test documentation quality by spawning fresh agents with zero context. Validates that docs are complete, clear, and usable by newcomers. Use when testing READMEs, tutorials, setup guides, or API docs before publishing.
---

# RTFM Testing

A documentation quality methodology that spawns fresh agents to validate whether docs are actually usable.

> "If an amnesiac can't follow your docs, your docs suck."

## The Problem

Documentation written by the person who built the thing is almost always incomplete. They fill in gaps unconsciously. They assume context. They skip "obvious" steps. RTFM Testing fixes this by spawning a fresh agent with zero context and asking: can you complete this task using only the docs?

## When to Use

- Before publishing docs, READMEs, tutorials, or setup guides
- When users report confusion but you can't see why
- After major refactors, to validate that the docs still work
- As part of CI for documentation-heavy projects

## How It Works

1. **Identify the task** — What should someone be able to do after reading the docs?
2. **Bundle the docs** — Collect all relevant documentation (and nothing else)
3. **Spawn a fresh tester** — Use the TESTER.md prompt with `sessions_spawn`
4. **Analyze failures** — Every confusion point is a doc bug
5. **Fix and repeat** — Update the docs, respawn, and retest until clean

## Usage

```
sessions_spawn(
  task: "Complete the following task using ONLY the provided documentation. [TASK DESCRIPTION]\n\n---\n\n[PASTE DOCS HERE]",
  agentId: "default",
  label: "rtfm-test"
)
```

Or use the full TESTER.md prompt for more structured output.

## Metrics

- **Cold Start Score** — Number of spawn cycles until task completion (lower = better docs)
- **Gap Count** — Number of `[GAP]` reports per run
- **Gap Categories** — Missing steps, unclear language, wrong assumptions, missing prerequisites

## Key Principles

1. **No hints** — Don't help the tester. Let it fail.
2. **Literal reading** — The tester must not infer or guess.
3. **Docs only** — No external knowledge, no "common sense".
4. **Failures are signal** — Every stumble is actionable feedback.

## Files

- `SKILL.md` — This file
- `TESTER.md` — System prompt for the fresh agent
- `GAPS.md` — Output format specification
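The Cold Start Score and Gap Count metrics can be computed mechanically from tester transcripts. Below is a minimal sketch in Python, assuming the tester emits `[GAP]` lines and a `TASK COMPLETE` marker; the actual output format is defined in `GAPS.md`, so the exact markers here are an assumption, not quoted from it.

```python
import re

def score_transcript(transcript: str) -> dict:
    """Count [GAP] reports in one tester run and classify the run.

    Assumes the tester reports gaps as lines starting with "[GAP]" and
    prints "TASK COMPLETE" on success (assumed markers, see GAPS.md).
    """
    gaps = re.findall(r"^\[GAP\].*$", transcript, flags=re.MULTILINE)
    return {
        "gap_count": len(gaps),
        "gaps": gaps,
        "passed": len(gaps) == 0 and "TASK COMPLETE" in transcript,
    }

def cold_start_score(transcripts: list[str]) -> int:
    """Cold Start Score: spawn cycles until the first clean pass.

    `transcripts` holds one transcript per fix-and-repeat cycle, in
    order. Returns the 1-based cycle index of the first passing run,
    or -1 if the docs never produced a clean pass.
    """
    for cycle, transcript in enumerate(transcripts, start=1):
        if score_transcript(transcript)["passed"]:
            return cycle
    return -1
```

For example, a run where cycle one surfaces a missing prerequisite and cycle two passes scores a Cold Start of 2: `cold_start_score(["[GAP] Step 3 assumes Docker is installed\nTASK FAILED", "TASK COMPLETE"])` returns `2`.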