docs2md
Use when the user asks about a library, framework, SDK, CLI, or cloud service and you need real, current documentation rather than recalled training data. Pulls a docs site into a single markdown file via a Docker-isolated crawler, then reads it back into context.
---
name: docs2md
description: Use when the user asks about a library, framework, SDK, CLI, or cloud service and you need real, current documentation rather than recalled training data. Pulls a docs site into a single markdown file via a Docker-isolated crawler, then reads it back into context. Trigger on questions like "how do I use X", "what does Y's API look like", or any time the user provides a docs URL.
---

# docs2md

Container-isolated documentation crawler. Turns a docs URL into one markdown file the agent can read.

## When to use

Activate when **any** of these are true:

- The user asks about a specific library / framework / tool / API.
- The user references a feature, version, or option you cannot fully verify from training data.
- The user provides a documentation URL directly.
- You are about to answer from training memory and the answer is load-bearing (code that will be written, an architectural decision).

Skip when:

- The question is general programming or language-level (not tool-specific).
- The user explicitly says "don't fetch docs" / "from memory is fine".
- You already have ground truth from a prior `docs2md` run in this session — re-read that file rather than re-crawling.

## How to use

1. Identify the docs root URL. Top-level (`https://docs.example.com`) beats a single deep page — BFS will fan out from there.
2. Run the wrapper from this skill's directory:

   ```bash
   path=$(./scrape <url> 2>/dev/null)
   ```

   `path` is the absolute path to a single `output.md`. Default crawl: BFS, depth 3, up to 100 pages, 10 parallel fetches, same-domain only.
3. Read the file with the `Read` tool.
4. Answer grounded in its contents. Quote verbatim for API signatures, flag names, exact strings, version-sensitive details.
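The capture in step 2 works because the wrapper writes only the path to stdout while progress goes to stderr. A minimal sketch of that pattern, using a stand-in function rather than the real `./scrape` (the stub's path and progress line are illustrative):

```bash
# Stand-in for ./scrape: progress on stderr, bare path on stdout.
scrape_stub() {
  echo "✓ [1] https://docs.example.com/" >&2   # progress line, discarded below
  echo "/tmp/docs2md.abc123/output.md"         # the only thing on stdout
}

# Same shape as step 2: stderr dropped, stdout captured cleanly.
path=$(scrape_stub 2>/dev/null)
echo "$path"
```

If stderr were not redirected, the progress lines would still not pollute `$path`, since command substitution captures stdout only; `2>/dev/null` just keeps the terminal quiet.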
## Tuning

| Situation | Flags |
|------------------------------|--------------------------------------|
| Small reference site | `--max-pages 20 --max-depth 2` |
| Large sprawling docs | `--max-pages 200` + narrow seed URL |
| Single page, no traversal | `--max-pages 1 --max-depth 0` |
| Fragile / rate-limited host | `--concurrency 2` |
| Fast public docs, want speed | `--concurrency 20` |

## Output format

One markdown file. Pages are concatenated, separated by:

```
============================================================
# <page url>
============================================================
```

`./scrape` stdout = **exactly the absolute path**. Stderr carries progress (`✓ [N] <url>`) and the same path with an arrow prefix. So `path=$(./scrape ... 2>/dev/null)` captures cleanly.

## Cost & latency

- First-ever invocation on a machine: pulls the upstream Docker image. Subsequent invocations reuse the cache.
- Container start: ~2.5 s.
- Crawl: 1–3 s per page at default concurrency, varies by host.
- Prefer one well-scoped crawl over many tiny ones; temp directories accumulate (`mktemp -d` under `/tmp` or `/var/folders/...`).

## What this skill is not

- Not a search engine. It crawls — it does not rank or filter.
- Not a replacement for an authenticated docs source (Confluence, internal wikis). Public web only.
- Not a long-term store. Output is in a tempdir; copy it elsewhere if you need persistence.

## Failure modes

- **Zero pages with low `--max-pages`**: BFS sometimes spends the budget enumerating siblings before fetching them. Bump `--max-pages` or `--max-depth`.
- **`docker: command not found`**: this skill requires Docker on the host. Tell the user; do not attempt a slim non-Docker fallback.
- **First run is slower** because it pulls the upstream image. To warm the cache: `docker pull unclecode/crawl4ai:0.8.6`.
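The separator format above makes it easy to recover the list of crawled URLs from an `output.md`. A sketch, assuming the `# <page url>` line between separator rules is the only line starting with `# http` (the sample file and its URLs are illustrative):

```bash
# Build a small sample output.md in the documented format.
out=$(mktemp -d)/output.md
cat > "$out" <<'EOF'
============================================================
# https://docs.example.com/
============================================================
Intro page body.
============================================================
# https://docs.example.com/api
============================================================
API reference body.
EOF

# List the page URLs that were crawled.
grep -E '^# https?://' "$out"
```

This prints one `# <url>` line per crawled page, which is a quick sanity check that a crawl actually fetched what you expected.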
## Minimal usage example

```bash
# inside the skill's directory
path=$(./scrape https://docs.crawl4ai.com --max-pages 30 2>/dev/null)
```

Then read `$path` and answer from it.
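Because the output lives in a tempdir (see "What this skill is not"), copy it somewhere durable when the crawl should outlive the session. A sketch using a stand-in file and destination, since the real `$path` would come from `./scrape` and the cache directory is your choice:

```bash
# Stand-in for the path ./scrape would print; contents are illustrative.
path=$(mktemp -d)/output.md
echo "# https://docs.crawl4ai.com/" > "$path"

# Copy to a durable location before the tempdir is cleaned up.
dest=$(mktemp -d)   # stand-in for a real cache dir, e.g. a docs-cache folder
cp "$path" "$dest/crawl4ai-docs.md"
ls "$dest/crawl4ai-docs.md"
```

A later question about the same tool can then re-read the cached copy instead of re-crawling.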