---
name: big-project-workflow
description: "Two-stage AI coding workflow controller for medium and large projects: Claude Code CLI does product planning (project brief, requirements questionnaire, PRD, specs, roadmap), Codex implements vertical slices from those documents. Trellis owns specs/tasks/workspace memory; this skill orchestrates the Claude→Codex handoff and enforces safety boundaries. Use when the user mentions 大项目工作流, large/medium project, new SaaS, multi-module system, requirements questionnaire, Claude Code CLI planning, Codex implementation handoff, Trellis bootstrap, non-system-drive project, or vertical slice. Skip for: single-file edits, casual Q&A, copy polish, throwaway prototypes, or tasks the user explicitly wants done fast without process."
---

# Big Project Workflow

A thin orchestrator on top of [Trellis](https://github.com/mindfold-ai/trellis). Trellis owns specs, tasks, workspace journals, and the workflow lifecycle. This skill adds: non-system-drive bootstrap, a structured Claude→Codex handoff (with a real requirements questionnaire), and safety boundaries. Detailed checklists live in `references/` and load on demand.

## Tool Contract

| Tool | Owns |
|------|------|
| **Trellis** | `.trellis/spec/`, `.trellis/tasks/`, `.trellis/workspace/`, `.trellis/workflow.md`, SessionStart hook |
| **Claude Code CLI** | Stage 1 — project brief, requirements questionnaire & answers, PRD, specs, roadmap, kickoff task |
| **Codex** | Stage 2 — implementation, verification, debugging, doc updates, review |

## Use / Do Not Use

Use for new products, multi-module apps, projects involving database/API/LLM/deployment decisions, large refactors that need scope and verification, and any workflow where Claude writes documents first and Codex implements afterward.

Do not use for tiny one-file fixes, casual Q&A, copy polish, or quick throwaway prototypes when the user explicitly wants speed over process.
## Phase 0: Inspect Current State (existing repo)

If the user invokes the skill in an existing repository, inspect before creating or editing any workflow document. Read these anchors when present:

- `AGENTS.md`, `README.md`
- package files (`package.json`, `pyproject.toml`, `requirements.txt`, ...)
- `docs/`
- `.trellis/`, `.trellis/workflow.md`, `.trellis/spec/`, `.trellis/tasks/`, `.trellis/workspace/`
- `.claude/`, `.codex/`
- tests, `.env.example`

Then report:

- current repository state,
- whether Trellis exists,
- whether PRD/spec/roadmap exist,
- whether tests and verification commands exist,
- which phase should run next.

Routing: if key Stage 1 documents are missing, run Stage 1. If they are present and a kickoff task exists, run Stage 2. If only verification is missing, add a smoke check first.

## Bootstrap A New Project

For a new project, prefer a **non-system drive**. On Windows the script refuses system-drive targets unless `--allow-system-drive` is passed.

PowerShell:

```powershell
$SkillRoot = if ($env:CODEX_HOME) {
    Join-Path $env:CODEX_HOME "skills/big-project-workflow"
} else {
    Join-Path $HOME ".codex/skills/big-project-workflow"
}
python (Join-Path $SkillRoot "scripts/bootstrap.py") `
  --root "<non-system-drive>/codex-projects" `
  --name "<project-slug>" `
  --developer "<developer-name>" `
  --brief "One paragraph describing the project goal" `
  --trellis-version latest
```

Bash / zsh:

```bash
SKILL_ROOT="${CODEX_HOME:-$HOME/.codex}/skills/big-project-workflow"
python3 "$SKILL_ROOT/scripts/bootstrap.py" \
  --root "$HOME/codex-projects" \
  --name "<project-slug>" \
  --developer "<developer-name>" \
  --brief "One paragraph describing the project goal" \
  --trellis-version latest
```

The script creates the project folder, runs `git init`, runs `trellis init --codex -u <developer> -y` (default), and renders the four templates from `assets/templates/` into the project.
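The Phase 0 routing rule can be sketched as a small decision function. This is illustrative only, not part of the skill's scripts: it treats `docs/PRD.md` plus `docs/ROADMAP.md` as the "key Stage 1 documents" and checks only for the kickoff task file.

```python
from pathlib import Path

def next_phase(root: Path) -> str:
    """Phase 0 routing sketch: decide which stage should run next.

    Simplified on purpose; the real skill inspects more anchors
    (AGENTS.md, package files, .trellis/, tests, .env.example).
    """
    prd = root / "docs" / "PRD.md"
    roadmap = root / "docs" / "ROADMAP.md"
    kickoff = root / ".trellis" / "tasks" / "001-implementation-kickoff.md"

    if not (prd.exists() and roadmap.exists()):
        return "stage-1"           # key planning documents missing
    if kickoff.exists():
        return "stage-2"           # planning done, implement next slice
    return "add-smoke-check"       # docs exist but verification is missing
```

For example, calling `next_phase(Path("."))` in a fresh repository returns `"stage-1"`.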
If `trellis` is not on `PATH`, the script prints the install command and exits cleanly — it does **not** silently install npm packages.

### Output language

Pass `--language en` or `--language zh-CN` to control the language of the rendered templates and the bootstrap `Next steps` hint. Default is `auto`, which checks `BIG_PROJECT_LANGUAGE`, then the `LANG` / `LC_ALL` / `LANGUAGE` env vars, then the system locale, mapping any Chinese locale to `zh-CN` and everything else to `en`.

To add a new language, drop a `<template>.<lang>.md` file next to the existing English template; the renderer prefers the localized variant when present and falls back to English otherwise.

## Stage 1: Claude Plans

Run from the project root inside an **interactive** Claude Code session. Do not use `claude -p` (headless / print mode) — Stage 1 produces a structured questionnaire that needs a human to answer it. Headless mode silently fills every answer with `(default assumption)`, defeating the core value of this workflow.

PowerShell:

```powershell
cd <project-path>
claude
```

Bash / zsh:

```bash
cd <project-path>
claude
```

In the Claude Code session, say:

> Read `docs/claude/00-prd-spec-prompt.md` and run Stage 1.

Claude inspects the project, asks you to confirm scope and the questionnaire size, runs the questionnaire interactively, and only then writes the PRD / specs / roadmap / kickoff task / handoff doc.

To pick a planning model, use `/model` inside the session. Prefer Opus-class with 1M context (e.g. `claude-opus-4-7[1m]`); the exact identifier changes over time, and the bundled prompt does not pin one.
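The `--language auto` resolution order from the Output language section above can be sketched as follows. This is an assumption-laden sketch: the real logic lives in `scripts/bootstrap.py` and may differ in detail, and the function names here are hypothetical.

```python
import locale
import os
from pathlib import Path

def resolve_language(cli_value: str = "auto") -> str:
    """Sketch of --language resolution: explicit flag wins, then env
    vars in the documented order, then system locale; any Chinese
    locale maps to zh-CN, everything else to en."""
    if cli_value != "auto":
        return cli_value
    candidates = [os.environ.get(v) for v in
                  ("BIG_PROJECT_LANGUAGE", "LANG", "LC_ALL", "LANGUAGE")]
    candidates.append(locale.getlocale()[0] or "")
    for value in candidates:
        if value:
            return "zh-CN" if value.lower().startswith("zh") else "en"
    return "en"

def pick_template(template_dir: Path, name: str, lang: str) -> Path:
    """Prefer <template>.<lang>.md, fall back to the English template."""
    localized = template_dir / f"{name}.{lang}.md"
    return localized if localized.exists() else template_dir / f"{name}.md"
```

Under this sketch, `resolve_language("zh-CN")` returns `"zh-CN"` regardless of the environment, and `pick_template(d, "brief", "zh-CN")` only returns `brief.zh-CN.md` when that file actually exists.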
Claude writes or updates the Stage 1 document set:

- `docs/PROJECT_BRIEF.md`
- `docs/REQUIREMENTS_QUESTIONNAIRE.md`
- `docs/REQUIREMENTS_ANSWERS.md`
- `docs/PRD.md`
- `docs/ROADMAP.md`
- `.trellis/spec/*` (only the tiers the project actually needs)
- `.trellis/tasks/001-implementation-kickoff.md`
- `docs/codex/00-implementation-handoff.md`

Detailed per-document field lists are in `references/handoff-flow.md` — load on demand, do not memorize.

### Stage 1 — Project Mode (Q0, asked first)

Before generating questions, Stage 1 must ask the user:

> Is this a **single-shot project** (everything you want shipped in one run, no v2) or an **iterative product** (multiple versions, v1 intentionally narrow)?

The answer is locked as Q0 in `docs/REQUIREMENTS_ANSWERS.md` and shapes every later document:

- **single-shot** — `ROADMAP.md` covers *every* feature. PRD has no Future Roadmap section. Out of Scope = items the user has ruled out forever. Default for tiny / small projects.
- **iterative** — `ROADMAP.md` covers v1 only. PRD § Future Roadmap lists deferred features. Out of Scope = never-do items, distinct from "later" items. Default for medium / large projects.

When the user is unsure, state the default for their project size explicitly and let them override.

### Stage 1 — Requirements Questionnaire (THE core value-add)

Before writing the PRD, generate a structured questionnaire. This is the single largest reason this workflow outperforms ad-hoc prompting.

- Question count by project size:
  - **Tiny project** (single script, smoke test, throwaway tool): 10-20 questions
  - **Small project**: 30-60 questions
  - **Medium project**: 80-150 questions
  - **Large project**: 150-500 questions
- Pick the band from the brief; if unsure, ask the user before generating.
- Group by module.
- Each question carries four fields: **question**, **type** (single-choice / multi-choice / open / boundary condition / acceptance criterion), **why it matters**, **default if user skips**.
- Cover, when relevant: target users, roles & permissions, core user journeys, pages/interfaces, data objects, business rules, edge cases, admin/backoffice, API design, database design, AI/LLM integration, prompt & fallback rules, logging & analytics, security & privacy, testing, deployment, out-of-scope boundaries.
- In **single-shot** mode, do NOT split questions into "MVP now" vs "v2 later" — every feature the user wants is in scope.

After the user answers (or chooses to skip), write `docs/REQUIREMENTS_ANSWERS.md`. Mark each defaulted item with `(default assumption)` so Codex can see where the gaps are.

Claude does not implement product code in Stage 1 unless the user explicitly asks.

## Optional Research Gate: codex-autoresearch

`codex-autoresearch` is an optional companion Codex skill, installed from `leo-lilinxiao/codex-autoresearch`. It is **not** a hard dependency for this workflow and should not slow down tiny / obvious tasks.

Use it before Stage 2 implementation, or before a high-risk roadmap slice, when the task involves:

- unfamiliar third-party APIs or SDKs,
- auth, encryption, secrets handling, database migrations, deployment, performance, security-sensitive changes,
- LLM provider behavior, eval strategy, prompt safety, or tool-use design,
- large dependency upgrades or cross-cutting refactors,
- failing tests whose root cause is unclear,
- optimization work with a measurable metric (latency, coverage, type errors, lint count, bundle size, etc.).

Skip it when the slice is straightforward and the PRD/specs already contain enough concrete implementation detail.

When research is needed, ask Codex to invoke `$codex-autoresearch` with a narrow goal and a mechanical verification metric. Capture the outcome in the Task Plan under "Research notes" before editing production code. The research loop may run foreground or background, but it must respect the current repo boundary and the task scope.
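For reference, the four-field question format from the requirements questionnaire section can be represented as a small schema. This is purely illustrative: the skill stores the questionnaire as Markdown, and this dataclass is a hypothetical model, not part of the skill's code.

```python
from dataclasses import dataclass

# Question types listed in the questionnaire spec above.
QUESTION_TYPES = {
    "single-choice", "multi-choice", "open",
    "boundary condition", "acceptance criterion",
}

@dataclass
class QuestionnaireItem:
    """One entry in docs/REQUIREMENTS_QUESTIONNAIRE.md (hypothetical schema)."""
    module: str              # questions are grouped by module
    question: str
    type: str                # one of QUESTION_TYPES
    why_it_matters: str
    default_if_skipped: str  # written as "(default assumption)" when taken

    def __post_init__(self) -> None:
        if self.type not in QUESTION_TYPES:
            raise ValueError(f"unknown question type: {self.type!r}")
```

Forcing every question to carry a default is what lets the answers file mark skipped items explicitly instead of leaving silent gaps.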
## Stage 2: Codex Implements

After Claude finishes, ask Codex:

```text
Read docs/codex/00-implementation-handoff.md and implement the first vertical slice from the generated roadmap.
```

Codex follows the handoff doc:

1. Read context (Trellis workflow, PRD, roadmap, requirements answers, kickoff task).
2. Output a `## Task Plan` block (goal / scope / out-of-scope / files / risks / verification / acceptance criteria) **before** editing code.
3. For greenfield projects, build the **smallest runnable vertical slice** first — request → handler → DB or LLM → response → assertion — before layering on features.
4. Implement one focused task. Do not absorb future-roadmap work.
5. Run verification (`npm run lint/typecheck/test/build`, `pytest`, `ruff check .`, `mypy .`, or whatever the project ships). Fix in scope; report unrelated failures separately.
6. Summarize: what changed, files changed, acceptance status, verification results, known issues, next task.

## Review Mode

When the user asks for a review (rather than an implementation), do not implement first. Read PRD, specs, the diff, and verification output; lead with findings by severity. Output structure (full template in `references/safety-rules.md`):

```markdown
## Review Result
Status: Pass / Pass with concerns / Fail
## Critical / Major / Minor issues
## Scope violations
## Missing acceptance criteria
## Verification (command + result)
## Recommended fix order
```

If the diff lacks tests, the best status is "Pass with concerns" with the missing verification flagged under Major. Do not silently apply fixes during review unless the user requests it.

## Hard Rules

### Stage 1 Belongs To Claude (Codex Must Refuse)

The Stage 1 documents below exist in two states:

1. **Scaffolded** — `bootstrap.py` rendered them with `{{name}}` / `{{brief}}` / `{{date}}` / `{{language}}` filled and most sections marked TBD.
2. **Stage 1 complete** — an interactive Claude Code session ran the structured questionnaire with a real human, captured real answers, and refined the docs.

**Codex must NEVER move a file from state 1 to state 2.** That includes "helpfully" filling defaults, "completing the workflow end-to-end", "unblocking the user when Claude isn't available", and any other phrasing the user might use to ask Codex to skip the human-in-the-loop step. Headless self-fill silently destroys the questionnaire — the single largest reason this workflow exists.

Files Codex must not author or refine:

- `docs/PROJECT_BRIEF.md` (Codex may render the bootstrap scaffold; it must not refine TBD sections)
- `docs/REQUIREMENTS_QUESTIONNAIRE.md`
- `docs/REQUIREMENTS_ANSWERS.md`
- `docs/PRD.md`
- `docs/ROADMAP.md`
- `.trellis/spec/*` (any new spec file beyond what Trellis itself ships)
- `.trellis/tasks/001-implementation-kickoff.md`

When the user asks Codex to "test the workflow end-to-end", "run Stage 1 yourself", "fill the questionnaire with defaults for speed", "Claude isn't available so just do it", or any equivalent, Codex must **refuse** with this message (translate to the user's language):

> Stage 1 must run in interactive Claude Code, not Codex. The structured questionnaire is this workflow's core value, and any automated self-fill silently destroys it. To proceed:
>
>     cd <project-path>
>     claude
>
> Then in the Claude Code session say: *"Read `docs/claude/00-prd-spec-prompt.md` and run Stage 1."* Come back to Codex when `docs/REQUIREMENTS_ANSWERS.md` contains real answers, not only `(default assumption)` lines, and I'll start Stage 2.

Codex MAY always:

- run `bootstrap.py` to produce scaffolds;
- read the planning files to report current state to the user;
- start Stage 2 implementation **only after** the Stage 1 completeness gate in `docs/codex/00-implementation-handoff.md` passes.
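The completeness gate that separates "Codex may read" from "Codex may implement" can be sketched as follows. This is a sketch only: the authoritative gate is the checklist in `docs/codex/00-implementation-handoff.md`, and the heuristic here checks just one of its conditions (real answers present, not only defaults).

```python
from pathlib import Path

def stage1_gate_passes(answers_file: Path) -> bool:
    """Illustrative check: the answers file exists, has content, and is
    not made up entirely of '(default assumption)' lines."""
    if not answers_file.exists():
        return False
    lines = [l.strip() for l in
             answers_file.read_text(encoding="utf-8").splitlines()]
    # Ignore blank lines and Markdown headings; keep answer lines.
    answered = [l for l in lines if l and not l.startswith("#")]
    if not answered:
        return False
    # Fail if every answer line carries the default-assumption marker.
    return any("(default assumption)" not in l for l in answered)
```

A headless self-fill would make every answer line carry the marker, so this check would correctly report the gate as failed.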
### Do Not Code Too Early

For medium or large projects, do not implement product code until these exist or are explicitly substituted by equivalent existing files:

- Project Brief
- Requirements Answers (or confirmed default assumptions)
- PRD
- Technical Specs
- Roadmap or task list
- Acceptance Criteria
- Verification Commands

Exceptions:

- the user explicitly requests a throwaway prototype,
- the task is a small independent edit,
- the project already has equivalent docs.

### Safety Boundaries

Before important changes, run `git status`. Prefer a feature branch for non-trivial work.

Do **not**:

- run destructive commands without explicit approval,
- delete user files unless the task requires it,
- edit outside the current repo unless explicitly asked,
- commit secrets or hardcode API keys,
- edit `.env` (create `.env.example` instead),
- add large dependencies without justification,
- delete tests or weaken assertions to pass checks.

For shareable files, use placeholders (`<project-root>`, `<developer-name>`, `<non-system-drive>`) instead of personal paths or usernames. Full discipline lives in `references/safety-rules.md`.

### Control Scope

One task at a time. Do not turn one task into a project-wide rewrite unless the user approves. Do not implement future-roadmap features inside the current task.

### Don't Block On Assumptions

If an answer is missing, draft an explicit assumption and continue. Stop only when the missing answer affects safety, cost, legal risk, or irreversible architecture.

## Optional References

Load these only when the situation calls for them:

- `references/handoff-flow.md` — full per-document field checklists for Project Brief / PRD / specs / roadmap, plus Bad/Good spec examples.
- `references/safety-rules.md` — extended safety, Done Checklist, Review Mode template.
- `references/llm-spec-checklist.md` — LLM project checklist (provider abstraction, prompt templates, guardrails, evals, tests).
- `references/SKILL.zh-CN.md` — Chinese reading version. Codex loads this English file as the contract; the Chinese file is for humans.

## Bundled Resources

- `scripts/bootstrap.py` — cross-platform project bootstrap.
- `assets/templates/` — Markdown templates the bootstrap script renders.
- `agents/openai.yaml` — UI metadata for Codex skill chips.

## Publish Safety Gate

Before publishing this skill repository:

- scan for personal absolute paths, local usernames, machine-specific tool directories, tokens, API keys, and private project names,
- keep examples generic and replace user-specific values with placeholders,
- ensure `SKILL.md` contains the complete operational workflow (no hidden instructions in references),
- keep scripts parameterized via CLI arguments and environment variables,
- run a smoke bootstrap in a temp folder and remove only the verified temp folder afterward.
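The first publish-gate step (scanning for personal paths and secrets) can be sketched as a small shell helper. The patterns below are examples only, not a complete or authoritative list; extend them for your environment.

```shell
# Illustrative pre-publish scan (sketch; patterns are examples, not a
# complete list of what the gate requires you to look for).
scan_for_personal_strings() {
  # $1 = directory to scan; returns non-zero when anything is flagged.
  patterns='api[_-]?key|secret|token|/home/[a-z0-9_]+'
  if grep -rEin --exclude-dir=.git "$patterns" "$1"; then
    echo "Potential personal or secret strings found; review before publishing." >&2
    return 1
  fi
  echo "No flagged patterns found."
}
```

Run `scan_for_personal_strings .` from the skill repository root; a non-zero exit means at least one line needs manual review before publishing.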