- Your Code Reviews Just Got 12 Expert Opinions. In 30 Seconds.
-
-
- Here's what happened when we shipped yesterday: security audit, performance analysis, architectural review, pattern detection, and eight more specialized checks—all running in parallel. No meetings. No waiting. Just answers. That's compounding engineering: 29 specialized agents, 23 workflow commands, and 18 skills that make today's work easier than yesterday's.
-
Why Your Third Code Review Should Be Easier Than Your First
-
- Think about the last time you fixed a Rails N+1 query. You found it. You fixed it. Then next month, different developer, same bug, same investigation. That's linear engineering—you solved it, but the solution evaporated.
-
-
-
-
-
-
- "Most engineering work is amnesia. You solve a problem on Tuesday, forget the solution by Friday, and re-solve it next quarter. Compounding engineering is different: each solved problem teaches the system. The security review you run today makes tomorrow's review smarter. The pattern you codify this sprint prevents bugs in the next three."
-
-
-
-
-
-
-
-
-
Plan
-
Stop starting over from scratch
-
- You know that moment when you open a ticket and think "how did we solve this last time?" The framework-docs-researcher already knows. The git-history-analyzer remembers what worked in March. Run /plan and three research agents work in parallel—one reading docs, one analyzing your repo's history, one finding community patterns. In 60 seconds, you have a plan built on institutional memory instead of starting cold.
-
- The security-sentinel has checked 10,000 PRs for SQL injection. The kieran-rails-reviewer never approves a controller with business logic. They don't get tired, don't skip Friday afternoon reviews, don't forget the conventions you agreed on in March. Run /work and watch your plan execute with quality gates that actually enforce your standards—every single time.
-
- Type /review PR#123 and go get coffee. When you come back, you'll have a security audit (did you sanitize that user input?), performance analysis (N+1 spotted on line 47), architecture review (this breaks the pattern from v2.3), data integrity check (that migration will fail in production), and eight more specialized reviews. All running in parallel. All categorized by severity. All stored as actionable P1/P2/P3 todos you can knock out in order.
-
- Remember that CORS issue you debugged for three hours last month? Neither do I. That's the problem. Run /compound right after you fix something and it captures the solution as searchable documentation with YAML frontmatter. Next time someone hits the same issue, they grep for "CORS production" and find your answer in five seconds instead of re-debugging for three hours. That's how you compound.
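- A captured doc might look something like this. The path, field names, and values below are hypothetical; the compound-docs skill defines the real schema:

```markdown
---
title: CORS errors behind the production load balancer
category: infrastructure
tags: [cors, production, load-balancer]
date: 2025-01-15
---

The proxy stripped the Origin header on forwarded requests. Fix: preserve
the header in the proxy config so the app's allowed-origin check can match.
```

- Because the frontmatter and body are plain text, a quick grep through the docs folder for "cors" or "production" surfaces the fix in seconds.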
-
- Think of them as coworkers who never quit. The security-sentinel has seen every SQL injection variant. The kieran-rails-reviewer enforces conventions with zero compromise. The performance-oracle spots N+1 queries while you're still reading the PR. Run them solo or launch twelve in parallel—your choice.
-
-
-
-
-
-
Review Agents (11)
-
-
-
- kieran-rails-reviewer
- Rails
-
-
Super senior Rails developer with impeccable taste. Applies strict conventions for Turbo Streams, namespacing, and the "duplication over complexity" philosophy.
- claude agent kieran-rails-reviewer
-
-
-
- dhh-rails-reviewer
- Rails
-
-
Reviews code from DHH's perspective. Focus on Rails conventions, simplicity, and avoiding over-engineering.
- claude agent dhh-rails-reviewer
-
-
-
- kieran-python-reviewer
- Python
-
-
Python code review with strict conventions. PEP 8 compliance, type hints, and Pythonic patterns.
- claude agent kieran-python-reviewer
-
-
-
- kieran-typescript-reviewer
- TypeScript
-
-
TypeScript review with focus on type safety, modern patterns, and clean architecture.
- claude agent kieran-typescript-reviewer
-
-
-
- security-sentinel
- Security
-
-
Security audits and vulnerability assessments. OWASP top 10, injection attacks, authentication flaws.
- claude agent security-sentinel
-
-
-
- performance-oracle
- Performance
-
-
Performance and scalability review. N+1 queries, algorithmic complexity, caching opportunities.
- claude agent performance-oracle
-
-
-
- architecture-strategist
- Architecture
-
-
Analyze architectural decisions, compliance, and system design patterns.
- claude agent architecture-strategist
-
-
-
- data-integrity-guardian
- Data
-
-
Database migrations and data integrity review. Schema changes, foreign keys, data consistency.
- claude agent data-integrity-guardian
-
-
-
- pattern-recognition-specialist
- Patterns
-
-
Analyze code for patterns and anti-patterns. Design patterns, code smells, refactoring opportunities.
- claude agent pattern-recognition-specialist
-
-
-
- code-simplicity-reviewer
- Quality
-
-
Final pass for simplicity and minimalism. Remove unnecessary complexity, improve readability.
- claude agent code-simplicity-reviewer
-
-
-
- julik-frontend-races-reviewer
- JavaScript
-
-
Review JavaScript and Stimulus code for race conditions, DOM event handling, promise management, and timer cleanup.
- claude agent julik-frontend-races-reviewer
-
-
-
-
-
-
-
Research Agents (4)
-
-
-
- framework-docs-researcher
- Research
-
-
Research framework documentation and best practices. Find official guidance and community patterns.
- claude agent framework-docs-researcher
-
-
-
- best-practices-researcher
- Research
-
-
Gather external best practices and examples from the community and industry standards.
- claude agent best-practices-researcher
-
-
-
- git-history-analyzer
- Git
-
-
Analyze git history and code evolution. Understand how code has changed and why.
- claude agent git-history-analyzer
-
-
-
- repo-research-analyst
- Research
-
-
Research repository structure and conventions. Understand project patterns and organization.
- claude agent repo-research-analyst
-
-
-
-
-
-
-
Design Agents (3)
-
-
-
- design-iterator
- Design
-
-
Iteratively refine UI through systematic design iterations with screenshots and feedback loops.
- claude agent design-iterator
-
-
-
- figma-design-sync
- Figma
-
-
Synchronize web implementations with Figma designs. Pixel-perfect matching.
- claude agent figma-design-sync
-
-
-
- design-implementation-reviewer
- Review
-
-
Verify UI implementations match Figma designs. Catch visual regressions.
- claude agent design-implementation-reviewer
-
-
-
-
-
-
-
Workflow Agents (5)
-
-
-
- bug-reproduction-validator
- Bugs
-
-
Systematically reproduce and validate bug reports. Create minimal reproduction cases.
- claude agent bug-reproduction-validator
-
-
-
- pr-comment-resolver
- PR
-
-
Address PR comments and implement fixes. Batch process review feedback.
- claude agent pr-comment-resolver
-
-
-
- lint
- Quality
-
-
Run linting and code quality checks on Ruby and ERB files.
- claude agent lint
-
-
-
- spec-flow-analyzer
- Testing
-
-
Analyze user flows and identify gaps in specifications.
- claude agent spec-flow-analyzer
-
-
-
- every-style-editor
- Content
-
-
Edit content to conform to Every's style guide.
- claude agent every-style-editor
-
-
-
-
-
-
-
Documentation Agent (1)
-
-
-
- ankane-readme-writer
- Docs
-
-
Create READMEs following Ankane-style template for Ruby gems. Clean, concise, comprehensive documentation that gets straight to the point.
- claude agent ankane-readme-writer
-
-
-
-
-
-
-
-
-
- 23 Powerful Commands
-
-
- Slash commands that replace entire workflows. /review is your code review committee. /plan is your research team. /triage sorts 50 todos in the time it takes you to read five. Each one automates hours of work into a single line.
-
-
-
-
-
-
Workflow Commands
-
-
-
- /plan
- core
-
-
Create comprehensive implementation plans with research agents and stakeholder analysis.
-
-
-
- /review
- core
-
-
Run exhaustive code reviews using 12 or more parallel agents, ultra-thinking, and worktrees.
-
-
-
- /work
- core
-
-
Execute work items systematically with progress tracking and validation.
-
-
-
- /compound
- core
-
-
Document solved problems to compound team knowledge. Turn learnings into reusable patterns.
-
-
-
-
-
-
-
Utility Commands
-
-
-
- /changelog
- util
-
-
Create engaging changelogs for recent merges.
-
-
-
- /create-agent-skill
- util
-
-
Create or edit Claude Code skills with expert guidance.
-
-
-
- /generate_command
- util
-
-
Generate new slash commands from templates.
-
-
-
- /heal-skill
- util
-
-
Fix skill documentation issues automatically.
-
-
-
- /plan_review
- util
-
-
Multi-agent plan review in parallel.
-
-
-
- /prime
- util
-
-
Prime/setup command for project initialization.
-
-
-
- /report-bug
- util
-
-
Report bugs in the plugin with structured templates.
-
-
-
- /reproduce-bug
- util
-
-
Reproduce bugs using logs and console output.
-
-
-
- /triage
- util
-
-
Triage and prioritize issues interactively.
-
-
-
- /resolve_parallel
- util
-
-
Resolve TODO comments in parallel.
-
-
-
- /resolve_pr_parallel
- util
-
-
Resolve PR comments in parallel.
-
-
-
- /resolve_todo_parallel
- util
-
-
Resolve file-based todos in parallel.
-
-
-
- /release-docs
- util
-
-
Build and update the documentation site with current plugin components.
-
-
-
- /deploy-docs
- util
-
-
Validate and prepare documentation for GitHub Pages deployment.
-
-
-
-
-
-
-
-
-
- 18 Intelligent Skills
-
-
- Domain expertise on tap. Need to write a Ruby gem? The andrew-kane-gem-writer knows the patterns Andrew uses in 50+ popular gems. Building a Rails app? The dhh-rails-style enforces 37signals conventions. Generating images? The gemini-imagegen has Google's AI on speed dial. Just invoke the skill and watch it work.
-
-
-
-
-
-
Development Tools
-
-
-
- andrew-kane-gem-writer
- Ruby
-
-
Write Ruby gems following Andrew Kane's patterns. Clean APIs, smart defaults, comprehensive testing.
- skill: andrew-kane-gem-writer
-
-
-
- dspy-ruby
- Ruby
-
-
Build type-safe LLM applications with DSPy.rb. Structured prompting, optimization, providers.
- skill: dspy-ruby
-
-
-
- frontend-design
- Design
-
-
Create production-grade frontend interfaces with modern CSS, responsive design, accessibility.
- skill: frontend-design
-
-
-
- create-agent-skills
- Meta
-
-
Expert guidance for creating Claude Code skills. Templates, best practices, validation.
- skill: create-agent-skills
-
-
-
- skill-creator
- Meta
-
-
Guide for creating effective Claude Code skills with structured workflows.
- skill: skill-creator
-
-
-
- compound-docs
- Docs
-
-
Capture solved problems as categorized documentation with YAML schema.
- skill: compound-docs
-
-
-
-
-
-
-
Content & Workflow
-
-
-
- every-style-editor
- Content
-
-
Review copy for Every's style guide compliance.
- skill: every-style-editor
-
-
-
- file-todos
- Workflow
-
-
File-based todo tracking system with priorities and status.
- skill: file-todos
-
-
-
- git-worktree
- Git
-
-
Manage Git worktrees for parallel development on multiple branches.
- skill: git-worktree
-
-
-
-
-
-
-
Image Generation
-
-
-
- gemini-imagegen
- AI Images
-
-
- Generate and edit images using Google's Gemini API. Text-to-image, image editing, multi-turn refinement, and composition from up to 14 reference images.
-
-
-
- Text-to-image generation
-
-
- Image editing & manipulation
-
-
- Multi-turn refinement
-
-
- Multiple reference images (up to 14)
-
-
- Google Search grounding (Pro)
-
-
- skill: gemini-imagegen
-
Requires: GEMINI_API_KEY environment variable
-
-
-
-
-
-
-
-
-
- 2 MCP Servers
-
-
- Playwright gives Claude a browser—it can click buttons, take screenshots, fill forms, and validate what your users actually see. Context7 gives it instant access to docs for 100+ frameworks. Need to know how Next.js handles dynamic routes? Context7 fetches the answer in real-time instead of hallucinating from outdated training data.
-
-
-
-
-
-
-
- Playwright
-
-
Your AI can now see and click like a user. Test flows, grab screenshots, debug what's actually rendering.
-
-
Tools Provided: 6 tools
-
-
browser_navigate - Navigate to URLs
-
browser_take_screenshot - Take screenshots
-
browser_click - Click elements
-
browser_fill_form - Fill form fields
-
browser_snapshot - Get accessibility snapshot
-
browser_evaluate - Execute JavaScript
-
-
-
-
-
-
- Context7
-
-
Stop getting outdated answers. Context7 fetches current docs from 100+ frameworks in real-time.
-
-
Tools Provided: 2 tools
-
-
resolve-library-id - Find library ID
-
get-library-docs - Get documentation
-
-
Supports: Rails, React, Next.js, Vue, Django, Laravel, and more than 100 others
-
-
-
-
-
-
-
-
-
Three Commands. Zero Configuration.
-
- You're literally 30 seconds from running your first 12-agent code review. No config files. No API keys (except for image generation). Just copy, paste, go.
-
-
-
-
-
-
1
-
-
Add the Marketplace
-
-
claude /plugin marketplace add https://github.com/EveryInc/compound-engineering-plugin
-
-
-
-
-
2
-
-
Install the Plugin
-
-
claude /plugin install compound-engineering
-
-
-
-
-
3
-
-
Ship Faster
-
-
# Run a 12-agent code review
/review PR#123

# Get a security audit
claude agent security-sentinel

# Generate an image
skill: gemini-imagegen
-
-
-
-
-
-
-
-
-
-
Frequently Asked Questions
-
-
-
-
-
What is Compounding Engineering?
-
-
-
-
- It's the opposite of how most teams work. Normally, you fix a bug, ship it, and forget it. Next month someone hits the same bug and re-solves it from scratch. Compounding engineering means each fix teaches the system. Your third code review is faster than your first because the agents learned patterns. Your tenth security audit catches issues you missed in audit #2. The work accumulates instead of evaporating.
-
-
-
-
-
-
How do agents differ from skills?
-
-
-
-
- Agents are coworkers with specific jobs. The security-sentinel does security reviews. The kieran-rails-reviewer enforces Rails conventions. You call them directly: claude agent security-sentinel.
-
-
- Skills are expertise Claude can tap into when needed. The dhh-rails-style knows 37signals Rails patterns. The gemini-imagegen knows how to generate images. Claude invokes them automatically when relevant, or you can explicitly call them: skill: dhh-rails-style.
-
-
-
-
-
-
Why aren't MCP servers loading automatically?
-
-
-
-
- Yeah, we know. It's a current limitation. The workaround is simple: manually add the MCP servers to your .claude/settings.json file. Check the README for copy-paste config. Takes 30 seconds and you're done.
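- For orientation, the config is usually shaped like the sketch below. The server names, package pin, and Context7 URL here are assumptions, so copy the README's exact values rather than these:

```json
{
  "mcpServers": {
    "playwright": {
      "type": "stdio",
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "context7": {
      "type": "http",
      "url": "https://mcp.context7.com/mcp"
    }
  }
}
```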
-
-
-
-
-
-
Can I use this with languages other than Ruby/Rails?
-
-
-
-
- Absolutely. We've got Python and TypeScript reviewers alongside the Rails ones. And the workflow commands, research agents, and skills like gemini-imagegen don't care what language you write. The security-sentinel finds SQL injection whether it's in Rails, Django, or Laravel.
-
-
-
-
-
-
How do I create my own agents or skills?
-
-
-
-
- Run /create-agent-skill or invoke the create-agent-skills skill. Both give you templates, enforce best practices, and walk you through the structure. You'll have a working agent or skill in minutes instead of reverse-engineering from examples.
-
-
-
-
-
-
-
-
-
- Free & Open Source
-
Install Once. Compound Forever.
-
- Your next code review takes 30 seconds. The one after that? Even faster. That's compounding. Get 29 expert agents, 23 workflow commands, and 18 specialized skills working for you right now.
-
- Think of agents as your expert teammates who never sleep. You've got 23 specialists here—each one obsessed with a single domain. Call them individually when you need focused expertise, or orchestrate them together for multi-angle analysis. They're opinionated, they're fast, and they remember your codebase better than you do.
-
-
-
-
How to Use Agents
-
-
# Basic invocation
claude agent [agent-name]

# With a specific message
claude agent [agent-name] "Your message here"

# Examples
claude agent kieran-rails-reviewer
claude agent security-sentinel "Audit the payment flow"
-
-
-
-
-
-
Review Agents (10)
-
Your code review dream team. These agents catch what humans miss at 2am—security holes, performance cliffs, architectural drift, and those "it works but I hate it" moments. They're picky. They disagree with each other. That's the point.
-
-
-
-
kieran-rails-reviewer
- Rails
-
-
- Your senior Rails developer who's seen too many "clever" solutions fail in production. Obsessed with code that's boring, predictable, and maintainable. Strict on existing code (because touching it risks everything), pragmatic on new isolated features (because shipping matters). If you've ever thought "this works but feels wrong," this reviewer will tell you why.
-
Existing Code - Strict. Touching working code risks regressions.
-
New Code - Pragmatic. If it's isolated and works, it's acceptable.
-
Turbo Streams - Simple turbo streams MUST be inline arrays in controllers.
-
Testing as Quality - Hard-to-test code = poor structure that needs refactoring.
-
Naming (5-Second Rule) - Must understand what a view/component does in 5 seconds from its name.
-
Namespacing - Always use class Module::ClassName pattern.
-
Duplication > Complexity - Simple duplicated code is better than complex DRY abstractions.
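The last convention is easiest to see in code. A toy Ruby sketch (not from the plugin) of the tradeoff it encodes:

```ruby
# Duplication: two boring methods, each obvious on its own.
def admin_greeting(name)
  "Welcome back, #{name} (admin)"
end

def guest_greeting(name)
  "Hi, #{name}! Take a look around"
end

# DRY-at-all-costs: one "flexible" method nobody can read at a glance.
def greeting(name, role:, punctuation: "!", suffix: nil)
  base = role == :admin ? "Welcome back, #{name}" : "Hi, #{name}"
  [base, suffix].compact.join(" ") + punctuation
end

# Both produce the same admin string; only one is readable in 5 seconds.
puts admin_greeting("Ada")
puts greeting("Ada", role: :admin, punctuation: "", suffix: "(admin)")
```

The duplicated pair costs a few extra lines; the abstraction costs every future reader a trip through an options hash.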
-
-
-
claude agent kieran-rails-reviewer "Review the UserController"
-
-
-
-
-
-
dhh-rails-reviewer
- Rails
-
-
- What if DHH reviewed your Rails PR? He'd ask why you're building React inside Rails, why you need six layers of abstraction for a form, and whether you've forgotten that Rails already solved this problem. This agent channels that energy—blunt, opinionated, allergic to complexity.
-
Challenges overengineering and microservices mentality
-
-
-
claude agent dhh-rails-reviewer
-
-
-
-
-
-
kieran-python-reviewer
- Python
-
-
- Your Pythonic perfectionist who believes type hints aren't optional and dict.get() beats try/except KeyError. Expects modern Python 3.10+ patterns—no legacy syntax, no typing.List when list works natively. If your code looks like Java translated to Python, prepare for rewrites.
-
-
Key Focus Areas
-
-
Type hints for all functions
-
Pythonic patterns and idioms
-
Modern Python syntax
-
Import organization
-
Module extraction signals
-
-
-
claude agent kieran-python-reviewer
-
-
-
-
-
-
kieran-typescript-reviewer
- TypeScript
-
-
- TypeScript's type system is a gift—don't throw it away with any. This reviewer treats any like a code smell that needs justification. Expects proper types, clean imports, and code that doesn't need comments because the types explain everything. You added TypeScript for safety; this agent makes sure you actually get it.
-
-
Key Focus Areas
-
-
No any without justification
-
Component/module extraction signals
-
Import organization
-
Modern TypeScript patterns
-
Testability assessment
-
-
-
claude agent kieran-typescript-reviewer
-
-
-
-
-
-
security-sentinel
- Security
-
-
- Security vulnerabilities hide in boring code—the "just grab the user ID from params" line that ships a privilege escalation bug to production. This agent thinks like an attacker: SQL injection, XSS, auth bypass, leaked secrets. Run it before touching authentication, payments, or anything with PII. Your users' data depends on paranoia.
-
-
Security Checks
-
-
Input validation analysis
-
SQL injection risk assessment
-
XSS vulnerability detection
-
Authentication/authorization audit
-
Sensitive data exposure scanning
-
OWASP Top 10 compliance
-
Hardcoded secrets search
-
-
-
claude agent security-sentinel "Audit the payment flow"
-
-
-
-
-
-
performance-oracle
- Performance
-
-
- Your code works fine with 10 users. What happens at 10,000? This agent time-travels to your future scaling problems—N+1 queries that murder your database, O(n²) algorithms hiding in loops, missing indexes, memory leaks. It thinks in Big O notation and asks uncomfortable questions about what breaks first when traffic spikes.
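The N+1 shape it hunts for is easy to simulate without a database. A hypothetical Ruby sketch with a fake datastore that counts queries:

```ruby
# A fake datastore that counts "queries" so the N+1 cost is visible.
class FakeDB
  attr_reader :query_count

  def initialize(comments_by_post)
    @comments_by_post = comments_by_post
    @query_count = 0
  end

  # One query per post: the N+1 shape.
  def comments_for(post_id)
    @query_count += 1
    @comments_by_post.fetch(post_id, [])
  end

  # One batched query for all posts: the eager-loading shape.
  def comments_for_all(post_ids)
    @query_count += 1
    post_ids.to_h { |id| [id, @comments_by_post.fetch(id, [])] }
  end
end

data = { 1 => ["a"], 2 => ["b", "c"], 3 => [] }

n_plus_one = FakeDB.new(data)
[1, 2, 3].each { |id| n_plus_one.comments_for(id) }
puts n_plus_one.query_count # grows with the number of posts

batched = FakeDB.new(data)
batched.comments_for_all([1, 2, 3])
puts batched.query_count # stays at one regardless of post count
```

In Rails this is the difference between `posts.each { |p| p.comments }` and `Post.includes(:comments)`: the loop works fine at 10 rows and melts the database at 10,000.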
-
-
Analysis Areas
-
-
Algorithmic complexity (Big O notation)
-
N+1 query pattern detection
-
Proper index usage verification
-
Memory management review
-
Caching opportunity identification
-
Network usage optimization
-
Frontend bundle impact
-
-
-
claude agent performance-oracle
-
-
-
-
-
-
architecture-strategist
- Architecture
-
-
- Every "small change" either reinforces your architecture or starts eroding it. This agent zooms out to see if your fix actually fits the system's design—or if you're bolting duct tape onto a crumbling foundation. It speaks SOLID principles, microservice boundaries, and API contracts. Call it when you're about to make a change that "feels weird."
-
-
Analysis Areas
-
-
Overall system structure understanding
-
Change context within architecture
-
Architectural violation identification
-
SOLID principles compliance
-
Microservice boundary assessment
-
API contract evaluation
-
-
-
claude agent architecture-strategist
-
-
-
-
-
-
data-integrity-guardian
- Data
-
-
- Many migrations can't be rolled back once they've run in production. This agent is your last line of defense before you accidentally drop a column with user data, create a race condition in transactions, or violate GDPR. It obsesses over referential integrity, rollback safety, and data constraints. Your database is forever; migrations should be paranoid.
-
-
Review Areas
-
-
Migration safety and reversibility
-
Data constraint validation
-
Transaction boundary review
-
Referential integrity preservation
-
Privacy compliance (GDPR, CCPA)
-
Data corruption scenario checking
-
-
-
claude agent data-integrity-guardian
-
-
-
-
-
-
pattern-recognition-specialist
- Patterns
-
-
- Patterns tell stories—Factory, Observer, God Object, Copy-Paste Programming. This agent reads your code like an archaeologist reading artifacts. It spots the good patterns (intentional design), the anti-patterns (accumulated tech debt), and the duplicated blocks you swore you'd refactor later. Runs tools like jscpd because humans miss repetition that machines catch instantly.
-
claude agent pattern-recognition-specialist
-
-
-
-
-
code-simplicity-reviewer
- Quality
-
-
- Simplicity is ruthless discipline. This agent asks "do you actually need this?" about every line, every abstraction, every dependency. YAGNI isn't a suggestion; it's the law. Your 200-line feature with three layers of indirection? This agent will show you the 50-line version that does the same thing. Complexity is a liability; simplicity compounds.
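In that spirit, a hypothetical Ruby before/after: a "flexible" exporter collapsed into the one function the app needs today:

```ruby
# Before: generalized for formats nobody asked for.
class Exporter
  FORMATS = {
    csv: ->(rows) { rows.map { |r| r.join(",") }.join("\n") }
  }

  def initialize(format: :csv)
    @format = FORMATS.fetch(format)
  end

  def export(rows)
    @format.call(rows)
  end
end

# After: the three-line version that does the same thing today.
def to_csv(rows)
  rows.map { |r| r.join(",") }.join("\n")
end

rows = [["id", "name"], [1, "Ada"]]
puts Exporter.new.export(rows) == to_csv(rows)
```

If a second format ever ships, reintroduce the abstraction then, with real requirements in hand.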
-
-
Simplification Checks
-
-
Analyze every line for necessity
-
Simplify complex logic
-
Remove redundancy and duplication
-
Challenge abstractions
-
Optimize for readability
-
Eliminate premature generalization
-
-
-
claude agent code-simplicity-reviewer
-
-
-
-
-
-
-
Research Agents (4)
-
Stop guessing. These agents dig through documentation, GitHub repos, git history, and real-world examples to give you answers backed by evidence. They read faster than you, remember more than you, and synthesize patterns you'd miss. Perfect for "how should I actually do this?" questions.
-
-
-
-
framework-docs-researcher
- Research
-
-
- Official docs are scattered. GitHub examples are inconsistent. Deprecations hide in changelogs. This agent pulls it all together—docs, source code, version constraints, real-world examples. Ask "how do I use Hotwire Turbo?" and get back patterns that actually work in production, not toy tutorials.
-
-
Capabilities
-
-
Fetch official framework and library documentation
-
Identify version-specific constraints and deprecations
-
Search GitHub for real-world usage examples
-
Analyze gem/library source code using bundle show
-
Synthesize findings with practical examples
-
-
-
claude agent framework-docs-researcher "Research Hotwire Turbo patterns"
-
-
-
-
-
-
best-practices-researcher
- Research
-
-
- "Best practices" are everywhere and contradictory. This agent cuts through the noise by evaluating sources (official docs, trusted blogs, real GitHub repos), checking recency, and synthesizing actionable guidance. You get code templates, patterns that scale, and answers you can trust—not StackOverflow copy-paste roulette.
-
-
Capabilities
-
-
Leverage multiple sources (Context7 MCP, web search, GitHub)
-
Evaluate information quality and recency
-
Synthesize into actionable guidance
-
Provide code examples and templates
-
Research issue templates and community engagement
-
-
-
claude agent best-practices-researcher "Find pagination patterns"
-
-
-
-
-
-
git-history-analyzer
- Git
-
-
- Your codebase has a history—decisions, patterns, mistakes. This agent does archaeology with git tools: file evolution, blame analysis, contributor expertise mapping. Ask "why does this code exist?" and get the commit that explains it. Spot patterns in how bugs appear. Understand the design decisions buried in history.
-
-
Analysis Techniques
-
-
Trace file evolution using git log --follow
-
Determine code origins using git blame -w -C -C -C
-
Identify patterns from commit history
-
Map key contributors and expertise areas
-
Extract historical patterns of issues and fixes
-
-
-
claude agent git-history-analyzer "Analyze changes to User model"
-
-
-
-
-
-
repo-research-analyst
- Research
-
-
- Every repo has conventions—some documented, most tribal knowledge. This agent reads ARCHITECTURE.md, issue templates, PR patterns, and actual code to reverse-engineer the standards. Perfect for joining a new project or ensuring your PR matches the team's implicit style. Finds the rules nobody wrote down.
-
-
Analysis Areas
-
-
Architecture and documentation files (ARCHITECTURE.md, README.md, CLAUDE.md)
-
GitHub issues for patterns and conventions
-
Issue/PR templates and guidelines
-
Implementation patterns using ast-grep or rg
-
Project-specific conventions
-
-
-
claude agent repo-research-analyst
-
-
-
-
-
-
-
Workflow Agents (5)
-
Tedious work you hate doing. These agents handle the grind—reproducing bugs, resolving PR comments, running linters, analyzing specs. They're fast, they don't complain, and they free you up to solve interesting problems instead of mechanical ones.
-
-
-
-
bug-reproduction-validator
- Bugs
-
-
- Half of bug reports aren't bugs—they're user errors, environment issues, or misunderstood features. This agent systematically reproduces the reported behavior, classifies what it finds (Confirmed, Can't Reproduce, Not a Bug, etc.), and assesses severity. Saves you from chasing ghosts or missing real issues.
-
-
Classification Types
-
-
Confirmed - Bug reproduced successfully
-
Cannot Reproduce - Unable to reproduce
-
Not a Bug - Expected behavior
-
Environmental - Environment-specific issue
-
Data - Data-related issue
-
User Error - User misunderstanding
-
-
-
claude agent bug-reproduction-validator
-
-
-
-
-
-
pr-comment-resolver
- PR
-
-
- Code review comments pile up. This agent reads them, plans fixes, implements changes, and reports back what it did. It doesn't argue with reviewers or skip hard feedback—it just resolves the work systematically. Great for burning through a dozen "change this variable name" comments in seconds.
-
-
Workflow
-
-
Analyze code review comments
-
Plan the resolution before implementation
-
Implement requested modifications
-
Verify resolution doesn't break functionality
-
Provide clear resolution reports
-
-
-
claude agent pr-comment-resolver
-
-
-
-
-
-
lint
- Quality
-
-
- Linters are pedantic robots that enforce consistency. This agent runs StandardRB, ERBLint, and Brakeman for you—checking Ruby style, ERB templates, and security issues. It's fast (uses the Haiku model) and catches the formatting noise before CI does.
-
claude agent lint
-
-
-
-
-
spec-flow-analyzer
- Testing
-
-
- Specs always have gaps—edge cases nobody thought about, ambiguous requirements, missing error states. This agent maps all possible user flows, identifies what's unclear or missing, and generates the questions you need to ask stakeholders. Runs before you code to avoid building the wrong thing.
-
-
Analysis Areas
-
-
Map all possible user flows and permutations
-
Identify gaps, ambiguities, and missing specifications
claude agent spec-flow-analyzer
-
-
-
-
-
every-style-editor
- Content
-
-
- Style guides are arbitrary rules that make writing consistent. This agent enforces Every's particular quirks: title case in headlines, no overused filler words ("actually," "very"), active voice, Oxford commas. It's a line-by-line grammar cop for content that needs to match the brand.
-
-
Style Checks
-
-
Title case in headlines, sentence case elsewhere
-
Company singular/plural usage
-
Remove overused words (actually, very, just)
-
Enforce active voice
-
Apply formatting rules (Oxford commas, em dashes)
-
-
-
claude agent every-style-editor
-
-
-
-
-
-
-
Design Agents (3)
-
Design is iteration. These agents take screenshots, compare them to Figma, make targeted improvements, and repeat. They fix spacing, alignment, colors, typography—the visual details that compound into polish. Perfect for closing the gap between "it works" and "it looks right."
-
-
-
-
design-iterator
- Design
-
-
- Design doesn't happen in one pass. This agent runs a loop: screenshot the UI, analyze what's off (spacing, colors, alignment), implement 3-5 targeted fixes, repeat. Run it for 10 iterations and watch rough interfaces transform into polished designs through systematic refinement.
-
-
Process
-
-
Take focused screenshots of target elements
-
Analyze current state and identify 3-5 improvements
-
Implement targeted CSS/design changes
-
Document changes made
-
Repeat for specified iterations (default 10)
-
-
-
claude agent design-iterator
-
-
-
-
-
-
figma-design-sync
- Figma
-
-
- Designers hand you a Figma file. You build it. Then: "the spacing is wrong, the font is off, the colors don't match." This agent compares your implementation to the Figma spec, identifies every visual discrepancy, and fixes them automatically. Designers stay happy. You stay sane.
-
-
Workflow
-
-
Extract design specifications from Figma
-
Capture implementation screenshots
-
Conduct systematic visual comparison
-
Make precise code changes to fix discrepancies
-
Verify implementation matches design
-
-
-
claude agent figma-design-sync
-
-
-
-
-
-
design-implementation-reviewer
- Review
-
-
- Before you ship UI changes, run this agent. It compares your implementation against Figma at a pixel level—layouts, typography, colors, spacing, responsive behavior. Uses the Opus model for detailed visual analysis. Catches the "close enough" mistakes that users notice but you don't.
-
-
Comparison Areas
-
-
Layouts and structure
-
Typography (fonts, sizes, weights)
-
Colors and themes
-
Spacing and alignment
-
Different viewport sizes
-
-
-
claude agent design-implementation-reviewer
-
-
-
-
-
-
-
Documentation Agent (1)
-
-
-
-
ankane-readme-writer
- Docs
-
-
- Andrew Kane writes READMEs that are models of clarity—concise, scannable, zero fluff. This agent generates gem documentation in that style: 15 words max per sentence, imperative voice, single-purpose code examples. If your README rambles, this agent will fix it.
-
claude agent ankane-readme-writer
-
- /release-docs command moved from plugin to local .claude/commands/. This is a repository maintenance command and should not be distributed to users. Command count reduced from 24 to 23.
-
-
-
-
-
-
-
-
-
v2.32.1
- 2026-02-12
-
-
-
-
Changed
-
-
- /workflows:review command - Added learnings-researcher agent to the parallel review phase. The review now searches docs/solutions/ for past issues related to the PR's modules and patterns, surfacing "Known Pattern" findings during synthesis.
-
-
-
-
-
-
-
-
-
v2.6.0
- 2024-11-26
-
-
-
-
Removed
-
-
- feedback-codifier agent - Removed from workflow agents. Agent count reduced from 24 to 23.
-
-
-
-
-
-
-
-
-
v2.5.0
- 2024-11-25
-
-
-
-
Added
-
-
- /report-bug command - New slash command for reporting bugs in the compound-engineering plugin. Provides a structured workflow that gathers bug information through guided questions, collects environment details automatically, and creates a GitHub issue in the EveryInc/compound-engineering-plugin repository.
-
-
-
-
-
-
-
-
-
v2.4.1
- 2024-11-24
-
-
-
-
Improved
-
-
- design-iterator agent - Added focused screenshot guidance: always capture only the target element/area instead of full-page screenshots. Includes browser_resize recommendations, an element-targeted screenshot workflow using browser_snapshot refs, and an explicit instruction to never use fullPage mode.
-
-
-
-
-
-
-
-
-
v2.4.0
- 2024-11-24
-
-
-
-
Fixed
-
-
- MCP Configuration - Moved MCP servers back to plugin.json following working examples from anthropics/life-sciences plugins.
-
-
- Context7 URL - Updated to use HTTP type with correct endpoint URL.
-
-
-
-
-
-
-
-
-
v2.3.0
- 2024-11-24
-
-
-
-
Changed
-
-
- MCP Configuration - Moved MCP servers from inline plugin.json
- to separate .mcp.json file per Claude Code best practices.
-
-
-
-
-
-
-
-
-
v2.2.1
- 2024-11-24
-
-
-
-
Fixed
-
-
- Playwright MCP Server - Added missing "type": "stdio" field
- required for MCP server configuration to load properly.
-
-
-
-
-
-
-
-
-
v2.2.0
- 2024-11-24
-
-
-
-
Added
-
-
- Context7 MCP Server - Bundled Context7 for instant framework documentation
- lookup. Provides up-to-date docs for Rails, React, Next.js, and more than 100 other frameworks.
-
-
-
-
-
-
-
-
-
v2.1.0
- 2024-11-24
-
-
-
-
Added
-
-
- Playwright MCP Server - Bundled @playwright/mcp for browser
- automation across all projects. Provides screenshot, navigation, click, fill, and evaluate tools.
-
-
-
-
-
-
Changed
-
-
Replaced all Puppeteer references with Playwright across agents and commands:
-
-
bug-reproduction-validator agent
-
design-iterator agent
-
design-implementation-reviewer agent
-
figma-design-sync agent
-
generate_command command
-
-
-
-
-
-
-
-
-
-
v2.0.2
- 2024-11-24
-
-
-
-
Changed
-
-
- design-iterator agent - Updated description to emphasize proactive usage
- when design work isn't coming together on first attempt.
-
-
-
-
-
-
-
-
-
v2.0.1
- 2024-11-24
-
-
-
-
Added
-
-
CLAUDE.md - Project instructions with versioning requirements
- Here's the thing about slash commands: they're workflows you'd spend 20 minutes doing manually, compressed into one line. Type /plan and watch three agents launch in parallel to research your codebase while you grab coffee. That's the point—automation that actually saves time, not busywork dressed up as productivity.
-
-
-
-
-
Workflow Commands (4)
-
These are the big four: Plan your feature, Review your code, Work through the implementation, and Codify what you learned. Every professional developer does this cycle—these commands just make you faster at it.
-
-
-
- /plan
-
-
- You've got a feature request and a blank page. This command turns "we need OAuth" into a structured plan that actually tells you what to build—researched, reviewed, and ready to execute.
-
-
Arguments
-
[feature description, bug report, or improvement idea]
-
Workflow
-
-
Repository Research (Parallel) - Launch three agents simultaneously:
-
SpecFlow Analysis - Run spec-flow-analyzer for user flows
-
Choose Detail Level:
-
-
MINIMAL - Simple bugs/small improvements
-
MORE - Standard features
-
A LOT - Major features with phases
-
-
-
Write Plan - Save as plans/<issue_title>.md
-
Review - Call /plan_review for multi-agent feedback
-
-
-
-
-
This command does NOT write code. It only researches and creates the plan.
-
-
-
-
/plan Add OAuth integration for third-party auth
-/plan Fix N+1 query in user dashboard
-
-
-
-
-
- /review
-
-
- Twelve specialized reviewers examine your PR in parallel—security, performance, architecture, patterns. It's like code review by committee, except the committee finishes in two minutes instead of two days.
-
-
Arguments
-
[PR number, GitHub URL, branch name, or "latest"]
-
Workflow
-
-
Setup - Detect review target, optionally use git-worktree for isolation
- Point this at a plan file and watch it execute—reading requirements, setting up environment, running tests, creating commits, opening PRs. It's the "just build the thing" button you always wished you had.
-
- Just fixed a gnarly bug? This captures the solution before you forget it. Seven agents analyze what you did, why it worked, and how to prevent it next time. Each documented solution compounds your team's knowledge.
-
-
Arguments
-
[optional: brief context about the fix]
-
Workflow
-
-
Preconditions - Verify the problem is actually solved and the fix is confirmed working
The supporting cast—commands that do one specific thing really well. Generate changelogs, resolve todos in parallel, triage findings, create new commands. The utilities you reach for daily.
-
-
-
- /changelog
-
-
- Turn your git history into a changelog people actually want to read. Breaking changes at the top, fun facts at the bottom, everything organized by what matters to your users.
-
-
Arguments
-
[optional: daily|weekly, or time period in days]
-
Output Sections
-
-
Breaking Changes (top priority)
-
New Features
-
Bug Fixes
-
Other Improvements
-
Shoutouts
-
Fun Fact
-
-
-
/changelog daily
-/changelog weekly
-/changelog 7
-
-
-
-
-
- /create-agent-skill
-
-
- Need a new skill? This walks you through creating one that actually works—proper frontmatter, clear documentation, all the conventions baked in. Think of it as scaffolding for skills.
-
-
Arguments
-
[skill description or requirements]
-
-
/create-agent-skill PDF processing for document analysis
-/create-agent-skill Web scraping with error handling
-
-
-
-
-
- /generate_command
-
-
- Same idea, but for commands instead of skills. Tell it what workflow you're tired of doing manually, and it generates a proper slash command with all the right patterns.
-
-
Arguments
-
[command purpose and requirements]
-
-
/generate_command Security audit for codebase
-/generate_command Automated performance testing
-
-
-
-
-
- /heal-skill
-
-
- Skills drift—APIs change, URLs break, parameters get renamed. When a skill stops working, this figures out what's wrong and fixes the documentation. You approve the changes before anything commits.
-
-
Arguments
-
[optional: specific issue to fix]
-
Approval Options
-
-
Apply and commit
-
Apply without commit
-
Revise changes
-
Cancel
-
-
-
/heal-skill API endpoint URL changed
-/heal-skill parameter validation error
-
-
-
-
-
- /plan_review
-
-
- Before you execute a plan, have three reviewers tear it apart—Rails conventions, best practices, simplicity. Better to find the problems in the plan than in production.
-
-
Arguments
-
[plan file path or plan content]
-
Review Agents
-
-
dhh-rails-reviewer - Rails conventions
-
kieran-rails-reviewer - Rails best practices
-
code-simplicity-reviewer - Simplicity and clarity
-
-
-
/plan_review plans/user-authentication.md
-
-
-
-
-
- /report-bug
-
-
- Something broken? This collects all the context—what broke, what you expected, error messages, environment—and files a proper bug report. No more "it doesn't work" issues.
-
/report-bug Agent not working
-/report-bug Command failing with timeout
-
-
-
-
-
- /reproduce-bug
-
-
- Give it a GitHub issue number and it tries to actually reproduce the bug—reading the issue, analyzing code paths, iterating until it finds the root cause. Then it posts findings back to the issue.
-
-
Arguments
-
[GitHub issue number]
-
Investigation Process
-
-
Read GitHub issue details
-
Launch parallel investigation agents
-
Analyze code for failure points
-
Iterate until root cause found
-
Post findings to GitHub issue
-
-
-
/reproduce-bug 142
-
-
-
-
-
- /triage
-
-
- Got a pile of code review findings or security audit results? This turns them into actionable todos—one at a time, you decide: create the todo, skip it, or modify and re-present.
-
-
Arguments
-
[findings list or source type]
-
User Decisions
-
-
"yes" - Create/update todo file, change status to ready
-
"next" - Skip and delete from todos
-
"custom" - Modify and re-present
-
-
-
-
-
This command does NOT write code. It only categorizes and creates todo files.
- All those TODO comments scattered through your codebase? This finds them, builds a dependency graph, and spawns parallel agents to resolve them all at once. Clears the backlog in minutes.
-
- Same deal, but for PR review comments. Fetch unresolved threads, spawn parallel resolver agents, commit the fixes, and mark threads as resolved. Your reviewers will wonder how you're so fast.
-
-
Arguments
-
[optional: PR number or current PR]
-
Process
-
-
Get all unresolved PR comments
-
Create TodoWrite list
-
Launch parallel pr-comment-resolver agents
-
Commit, resolve threads, and push
-
-
-
/resolve_pr_parallel
-/resolve_pr_parallel 123
-
-
-
-
-
- /resolve_todo_parallel
-
-
- Those todo files in your /todos directory? Point this at them and watch parallel agents knock them out—analyzing dependencies, executing in the right order, marking resolved as they finish.
-
- Your project initialization command. What exactly it does depends on your project setup—think of it as the "get everything ready" button before you start coding.
-
- Five minutes from now, you'll run a single command that spins up 10 AI agents—each with a different specialty—to review your pull request in parallel. Security, performance, architecture, accessibility, all happening at once. That's the plugin. Let's get you set up.
-
Think of the marketplace as an app store. You're adding it to Claude Code's list of places to look for plugins:
-
-
claude /plugin marketplace add https://github.com/EveryInc/compound-engineering-plugin
-
-
-
Step 2: Install the Plugin
-
Now grab the plugin itself:
-
-
claude /plugin install compound-engineering
-
-
-
Step 3: Verify Installation
-
Check that it worked:
-
-
claude /plugin list
-
-
You'll see compound-engineering in the list. If you do, you're ready.
-
-
-
-
-
Known Issue: MCP Servers
-
- The bundled MCP servers (Playwright for browser automation, Context7 for docs) don't always auto-load. If you need them, there's a manual config step below. Otherwise, ignore this—everything else works fine.
-
-
-
-
-
-
-
-
Quick Start
-
-
Let's see what this thing can actually do. I'll show you three workflows you'll use constantly:
-
-
Run a Code Review
-
This is the big one. Type /review and watch it spawn 10+ specialized reviewers:
-
-
# Review a PR by number
-/review 123
-
-# Review the current branch
-/review
-
-# Review a specific branch
-/review feature/my-feature
-
-
-
Use a Specialized Agent
-
Sometimes you just need one expert. Call them directly:
-
-
# Rails code review with Kieran's conventions
-claude agent kieran-rails-reviewer "Review the UserController"
-
-# Security audit
-claude agent security-sentinel "Audit authentication flow"
-
-# Research best practices
-claude agent best-practices-researcher "Find pagination patterns for Rails"
-
-
-
Invoke a Skill
-
Skills are like loading a reference book into Claude's brain. When you need deep knowledge in a specific domain:
-
-
# Generate images with Gemini
-skill: gemini-imagegen
-
-# Write Ruby in DHH's style
-skill: dhh-rails-style
-
-# Create a new Claude Code skill
-skill: create-agent-skills
-
-
-
-
-
-
Configuration
-
-
MCP Server Configuration
-
- If the MCP servers didn't load automatically, you can wire them up yourself in .claude/settings.json. The Manual Configuration section below has the exact entries to paste.
-
Right now, only one skill needs an API key. If you use Gemini's image generation:
-
-
-
-
Variable
-
Required For
-
Description
-
-
-
-
-
GEMINI_API_KEY
-
gemini-imagegen
-
Google Gemini API key for image generation
-
-
-
-
-
-
-
-
The Compounding Engineering Philosophy
-
-
- Every unit of engineering work should make subsequent units of work easier—not harder.
-
-
-
Here's how it works in practice—the four-step loop you'll run over and over:
-
-
-
-
-
1. Plan
-
- Before you write a single line, figure out what you're building and why. Use research agents to gather examples, patterns, and context. Think of it as Google Search meets expert consultation.
-
-
-
-
-
2. Delegate
-
- Now build it—with help. Each agent specializes in something (Rails, security, design). You stay in the driver's seat, but you've got a team of specialists riding shotgun.
-
-
-
-
-
3. Assess
-
- Before you ship, run the gauntlet. Security agent checks for vulnerabilities. Performance agent flags N+1 queries. Architecture agent questions your design choices. All at once, all in parallel.
-
-
-
-
-
4. Codify
-
- You just solved a problem. Write it down. Next time you (or your teammate) face this, you'll have a runbook. That's the "compounding" part—each solution makes the next one faster.
-
-
-
-
-
-
-
-
Using Agents
-
-
- Think of agents as coworkers with different job titles. You wouldn't ask your security engineer to design your UI, right? Same concept here—each agent has a specialty, and you call the one you need.
-
-
-
Invoking Agents
-
-
# Basic syntax
-claude agent [agent-name] "[optional message]"
-
-# Examples
-claude agent kieran-rails-reviewer
-claude agent security-sentinel "Audit the payment flow"
-claude agent git-history-analyzer "Show changes to user model"
- Commands are macros that run entire workflows for you. One command can spin up a dozen agents, coordinate their work, collect results, and hand you a summary. It's automation all the way down.
-
- Here's the difference: agents are who does the work, skills are what they know. When you invoke a skill, you're loading a reference library into Claude's context—patterns, templates, examples, workflows. It's like handing Claude a technical manual.
-
-
-
Invoking Skills
-
-
# In your prompt, reference the skill
-skill: gemini-imagegen
-
-# Or ask Claude to use it
-"Use the dhh-rails-style skill to refactor this code"
-
-
-
Skill Structure
-
Peek inside a skill directory and you'll usually find:
-
-
SKILL.md - The main instructions (what Claude reads first)
-
references/ - Deep dives on concepts and patterns
-
templates/ - Copy-paste code snippets
-
workflows/ - Step-by-step "how to" guides
-
scripts/ - Actual executable code (when words aren't enough)
- You'll spend most of your time here. This workflow is why the plugin exists—to turn code review from a bottleneck into a superpower.
-
-
-
Basic Review
-
-
# Review a PR
-/review 123
-
-# Review current branch
-/review
-
-
-
Understanding Findings
-
Every finding gets a priority label. Here's what they mean:
-
-
P1 Critical - Don't merge until this is fixed. Think: SQL injection, data loss, crashes in production.
-
P2 Important - Fix before shipping. Performance regressions, N+1 queries, shaky architecture.
-
P3 Nice-to-Have - Would be better, but ship without it if you need to. Documentation, minor cleanup, style issues.
-
-
-
Working with Todo Files
-
After a review, you'll have a todos/ directory full of markdown files. Each one is a single issue to fix:
-
-
# List all pending todos
-ls todos/*-pending-*.md
-
-# Triage findings
-/triage
-
-# Resolve todos in parallel
-/resolve_todo_parallel
-
-
-
-
-
-
Creating Custom Agents
-
-
- The built-in agents cover a lot of ground, but every team has unique needs. Maybe you want a "rails-api-reviewer" that enforces your company's API standards. That's 10 minutes of work.
-
-
-
Agent File Structure
-
-
---
-name: my-custom-agent
-description: Brief description of what this agent does
----
-
-# Agent Instructions
-
-You are [role description].
-
-## Your Responsibilities
-1. First responsibility
-2. Second responsibility
-
-## Guidelines
-- Guideline one
-- Guideline two
-
-
-
Agent Location
-
Drop your agent file in one of these directories:
-
-
.claude/agents/ - Just for this project (committed to git)
-
~/.claude/agents/ - Available in all your projects (stays on your machine)
-
-
-
-
-
-
The Easy Way
-
- Don't write the YAML by hand. Just run /create-agent-skill and answer a few questions. The command generates the file, validates the format, and puts it in the right place.
-
-
-
-
-
-
-
-
Creating Custom Skills
-
-
- Skills are heavier than agents—they're knowledge bases, not just prompts. You're building a mini library that Claude can reference. Worth the effort for things you do repeatedly.
-
---
-name: my-skill
-description: Brief description shown when skill is invoked
----
-
-# Skill Title
-
-Detailed instructions for using this skill.
-
-## Quick Start
-...
-
-## Reference Materials
-The skill includes references in the `references/` directory.
-
-## Templates
-Use templates from the `templates/` directory.
-
-
-
-
-
-
Get Help Building Skills
-
- Type skill: create-agent-skills and Claude loads expert guidance on skill architecture, best practices, file organization, and validation. It's like having a senior engineer walk you through it.
-
- Think of MCP servers as power tools that plug into Claude Code. Want Claude to actually open a browser and click around your app? That's Playwright. Need the latest Rails docs without leaving your terminal? That's Context7. The plugin bundles both servers, so they're available as soon as you install.
-
-
-
-
-
-
Known Issue: Auto-Loading
-
- Sometimes MCP servers don't wake up automatically. If Claude can't take screenshots or look up docs, you'll need to add them manually. See Manual Configuration for the fix.
-
-
-
-
-
-
-
Playwright
-
- You know how you can tell a junior developer "open Chrome and click the login button"? Now you can tell Claude the same thing. Playwright gives Claude hands to control a real browser—clicking buttons, filling forms, taking screenshots, running JavaScript. It's like pair programming with someone who has a browser open next to you.
-
-
-
Tools Provided
-
-
-
-
Tool
-
Description
-
-
-
-
-
browser_navigate
-
Go to any URL—your localhost dev server, production, staging, that competitor's site you're studying
-
-
-
browser_take_screenshot
-
Capture what you're seeing right now. Perfect for "does this look right?" design reviews
-
-
-
browser_click
-
Click buttons, links, whatever. Claude finds it by text or CSS selector, just like you would
-
-
-
browser_fill_form
-
Type into forms faster than you can. Great for testing signup flows without manual clicking
-
-
-
browser_snapshot
-
Get the page's accessibility tree—how screen readers see it. Useful for understanding structure without HTML noise
-
-
-
browser_evaluate
-
Run any JavaScript in the page. Check localStorage, trigger functions, read variables—full console access
-
-
-
-
-
When You'll Use This
-
-
Design reviews without leaving the terminal - "Take a screenshot of the new navbar on mobile" gets you a PNG in seconds
-
Testing signup flows while you code - "Fill in the registration form with test@example.com and click submit" runs the test for you
-
Debugging production issues - "Navigate to the error page and show me what's in localStorage" gives you the state without opening DevTools
-
Competitive research - "Go to competitor.com and screenshot their pricing page" builds your swipe file automatically
-
-
-
Example Usage
-
-
# Just talk to Claude naturally—it knows when to use Playwright
-
-# Design review
-"Take a screenshot of the login page"
-
-# Testing a form
-"Navigate to /signup and fill in the email field with test@example.com"
-
-# Debug JavaScript state
-"Go to localhost:3000 and run console.log(window.currentUser)"
-
-# The browser runs in the background. You'll get results without switching windows.
- Ever ask Claude about a framework and get an answer from 2023? Context7 fixes that. It's a documentation service that keeps Claude current with 100+ frameworks—Rails, React, Next.js, Django, whatever you're using. Think of it as having the official docs piped directly into Claude's brain.
-
-
-
Tools Provided
-
-
-
-
Tool
-
Description
-
-
-
-
-
resolve-library-id
-
Maps "Rails" to the actual library identifier Context7 uses. You don't call this—Claude does it automatically
-
-
-
get-library-docs
-
Fetches the actual documentation pages. Ask "How does useEffect work?" and this grabs the latest React docs
-
-
-
-
-
What's Covered
-
Over 100 frameworks and libraries. Here's a taste of what you can look up:
-
-
-
Backend
-
-
Ruby on Rails
-
Django
-
Laravel
-
Express
-
FastAPI
-
Spring Boot
-
-
-
-
Frontend
-
-
React
-
Vue.js
-
Angular
-
Svelte
-
Next.js
-
Nuxt
-
-
-
-
Mobile
-
-
React Native
-
Flutter
-
SwiftUI
-
Kotlin
-
-
-
-
Tools & Libraries
-
-
Tailwind CSS
-
PostgreSQL
-
Redis
-
GraphQL
-
Prisma
-
And many more...
-
-
-
-
-
Example Usage
-
-
# Just ask about the framework—Claude fetches current docs automatically
-
-"Look up the Rails ActionCable documentation"
-
-"How does the useEffect hook work in React?"
-
-"What are the best practices for PostgreSQL indexes?"
-
-# You get answers based on the latest docs, not Claude's training cutoff
- If the servers don't load automatically (you'll know because Claude can't take screenshots or fetch docs), you need to wire them up yourself. It's a two-minute copy-paste job.
-
-
-
Project-Level Configuration
-
To enable for just this project, add this to .claude/settings.json in your project root:
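The exact entries depend on your plugin version, so treat this as a sketch to verify against the plugin's docs. It assumes Playwright runs over stdio via npx and Context7 over HTTP; the package name and endpoint URL are assumptions:

```json
{
  "mcpServers": {
    "playwright": {
      "type": "stdio",
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "context7": {
      "type": "http",
      "url": "https://mcp.context7.com/mcp"
    }
  }
}
```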
After you add the config, restart Claude Code. Then test that everything works:
-
-
# Ask Claude what it has
-"What MCP tools do you have access to?"
-
-# Test Playwright (should work now)
-"Take a screenshot of the current directory listing"
-
-# Test Context7 (should fetch real docs)
-"Look up Rails Active Record documentation"
-
-# If either fails, double-check your JSON syntax and file paths
- Think of skills as reference manuals that Claude Code can read mid-conversation. When you're writing Rails code and want DHH's style, or building a gem like Andrew Kane would, you don't need to paste documentation—just invoke the skill. Claude reads it, absorbs the patterns, and writes code that way.
-
-
-
-
How to Use Skills
-
-
# In your prompt, reference the skill
-skill: [skill-name]
-
-# Examples
-skill: gemini-imagegen
-skill: dhh-rails-style
-skill: create-agent-skills
-
-
-
-
-
-
-
Skills vs Agents
-
- Agents are personas—they do things. Skills are knowledge—they teach Claude how to do things. Use claude agent [name] when you want someone to review your code. Use skill: [name] when you want to write code in a particular style yourself.
-
-
-
-
-
-
-
Development Tools (8)
-
These skills teach Claude specific coding styles and architectural patterns. Use them when you want code that follows a particular philosophy—not just any working code, but code that looks like it was written by a specific person or framework.
-
-
-
-
create-agent-skills
- Meta
-
-
- You're writing a skill right now, but you're not sure if you're structuring the SKILL.md file correctly. Should the examples go before the theory? How do you organize workflows vs. references? This skill is the answer—it's the master template for building skills themselves.
-
- The simpler, step-by-step version of create-agent-skills. When you just want a checklist to follow from blank file to packaged skill, use this. It's less about theory, more about "do step 1, then step 2."
-
-
6-Step Process
-
-
Understand skill usage patterns with examples
-
Plan reusable skill contents
-
Initialize skill using template
-
Edit skill with clear instructions
-
Package skill into distributable zip
-
Iterate based on testing feedback
-
-
-
skill: skill-creator
-
-
-
-
-
-
dhh-rails-style
- Rails
-
-
- Comprehensive 37signals Rails conventions based on Marc Köhlbrugge's analysis of 265 PRs from the Fizzy codebase. Covers everything from REST mapping to state-as-records, Turbo/Stimulus patterns, CSS with OKLCH colors, Minitest with fixtures, and Solid Queue/Cache/Cable patterns.
-
-
Key Patterns
-
-
REST Purity - Verbs become nouns (close → closure)
-
State as Records - Boolean columns → separate records
-
Fat Models - Business logic, authorization, broadcasting
gems.md - What to use vs avoid, decision framework
-
-
-
skill: dhh-rails-style
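To make "state as records" concrete, here's a plain-Ruby sketch of the idea. The names are hypothetical, and in a real Rails app Closure would be an associated ActiveRecord model rather than a Struct:

```ruby
# Plain-Ruby sketch of "state as records" (names are hypothetical).
# Instead of a `closed` boolean column on Card, closing a card creates
# a Closure record that captures who closed it and when.
Closure = Struct.new(:closed_by, :closed_at)

Card = Struct.new(:id, :closure) do
  def closed?
    !closure.nil?
  end

  def close(by:)
    self.closure = Closure.new(by, Time.now)
  end
end

card = Card.new(1, nil)
card.close(by: "kieran")
card.closed? # => true
```

The payoff is that "closed" stops being a flag and starts being a fact with an author and a timestamp, which is exactly what audits and activity feeds want.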
-
-
-
-
-
-
andrew-kane-gem-writer
- Ruby
-
-
- Andrew Kane has written 100+ Ruby gems with 374 million downloads. Every gem follows the same patterns: minimal dependencies, class macro DSLs, Rails integration without Rails coupling. When you're building a gem and want it to feel production-ready from day one, this is how you do it.
-
-
Philosophy
-
-
Simplicity over cleverness
-
Zero or minimal dependencies
-
Explicit code over metaprogramming
-
Rails integration without Rails coupling
-
-
Key Patterns
-
-
Class macro DSL for configuration
-
ActiveSupport.on_load for Rails integration
-
class << self with attr_accessor
-
Railtie pattern for hooks
-
Minitest (no RSpec)
-
-
Reference Files
-
-
references/module-organization.md
-
references/rails-integration.md
-
references/database-adapters.md
-
references/testing-patterns.md
-
-
-
skill: andrew-kane-gem-writer
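The class-macro configuration pattern above is plain Ruby. Here's a minimal sketch; the gem name and settings are made up for illustration:

```ruby
# Hypothetical gem showing the `class << self` + attr_accessor pattern:
# configuration lives on the module itself, with sensible defaults.
module TinySearch
  class << self
    attr_accessor :timeout, :index_prefix
  end

  # Defaults, overridable in the host app
  self.timeout = 10
  self.index_prefix = "tiny_search"

  def self.configure
    yield self
  end
end

# In the host app (e.g. config/initializers/tiny_search.rb):
TinySearch.configure do |config|
  config.timeout = 30
end

TinySearch.timeout # => 30
```

No Configuration class, no metaprogramming: a few lines of explicit code that any contributor can read in ten seconds, which is the whole point of the style.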
-
-
-
-
-
-
dspy-ruby
- AI
-
-
- You're adding AI features to your Rails app, but you don't want brittle prompt strings scattered everywhere. DSPy.rb gives you type-safe signatures, composable predictors, and tool-using agents. This skill shows you how to use it—from basic inference to ReAct agents that iterate until they get the answer right.
-
-
Predictor Types
-
-
Predict - Basic inference
-
ChainOfThought - Reasoning with explanations
-
ReAct - Tool-using agents with iteration
-
CodeAct - Dynamic code generation
-
-
Supported Providers
-
-
OpenAI (GPT-4, GPT-4o-mini)
-
Anthropic Claude
-
Google Gemini
-
Ollama (free, local)
-
OpenRouter
-
-
Requirements
-
-
-
OPENAI_API_KEY
-
For OpenAI provider
-
-
-
ANTHROPIC_API_KEY
-
For Anthropic provider
-
-
-
GOOGLE_API_KEY
-
For Gemini provider
-
-
-
-
skill: dspy-ruby
-
-
-
-
-
-
frontend-design
- Design
-
-
- You've seen what AI usually generates: Inter font, purple gradients, rounded corners on everything. This skill teaches Claude to design interfaces that don't look like every other AI-generated site. It's about purposeful typography, unexpected color palettes, and interfaces with personality.
-
-
Design Thinking
-
-
Purpose - What is the interface for?
-
Tone - What feeling should it evoke?
-
Constraints - Technical and brand limitations
-
Differentiation - How to stand out
-
-
Focus Areas
-
-
Typography with distinctive font choices
-
Color & theme coherence with CSS variables
-
Motion and animation patterns
-
Spatial composition with asymmetry
-
Backgrounds (gradients, textures, patterns)
-
-
-
-
-
Avoids generic AI aesthetics like Inter fonts, purple gradients, and rounded corners everywhere.
-
-
-
-
skill: frontend-design
-
-
-
-
-
-
compound-docs
- Docs
-
-
- You just fixed a weird build error after an hour of debugging. Tomorrow you'll forget how you fixed it. This skill automatically detects when you solve something (phrases like "that worked" or "it's fixed") and documents it with YAML frontmatter so you can find it again. Each documented solution compounds your team's knowledge.
-
- Build AI agents using prompt-native architecture where features are defined in prompts, not code. When creating autonomous agents, designing MCP servers, or implementing self-modifying systems, this skill guides the "trust the agent's intelligence" philosophy.
-
-
Key Patterns
-
-
Prompt-Native Features - Define features in prompts, not code
-
MCP Tool Design - Build tools agents can use effectively
-
System Prompts - Write instructions that guide agent behavior
-
Self-Modification - Allow agents to improve their own prompts
-
-
Core Principle
-
Whatever the user can do, the agent can do. Whatever the user can see, the agent can see.
-
-
skill: agent-native-architecture
-
-
-
-
-
-
-
Content & Workflow (3)
-
Writing, editing, and organizing work. These skills handle everything from style guide compliance to git worktree management—the meta-work that makes the real work easier.
-
-
-
-
every-style-editor
- Content
-
-
- You wrote a draft, but you're not sure if it matches Every's style guide. Should "internet" be capitalized? Is this comma splice allowed? This skill does a four-phase line-by-line review: context, detailed edits, mechanical checks, and actionable recommendations. It's like having a copy editor who never gets tired.
-
-
Four-Phase Review
-
-
Initial Assessment - Context, type, audience, tone
-
Detailed Line Edit - Sentence structure, punctuation, capitalization
- Your todo list is a bunch of markdown files in a todos/ directory. Each filename encodes status, priority, and description. No database, no UI, just files with YAML frontmatter. When you need to track work without setting up Jira, this is the system.
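As a hypothetical example of the naming scheme (the number, priority label, and frontmatter fields here are illustrative, not prescribed), a file matching the todos/*-pending-*.md pattern might look like:

```markdown
<!-- todos/014-pending-p2-fix-dashboard-n-plus-one.md -->
---
status: pending
priority: P2
---

Fix the N+1 query in the user dashboard: eager-load posts in the index action.
```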
-
- You're working on a feature branch, but you need to review a PR without losing your current work. Git worktrees let you have multiple branches checked out simultaneously in separate directories. This skill manages them—create, switch, cleanup—so you can context-switch without stashing or committing half-finished code.
-
-
Commands
-
-
# Create new worktree
-bash scripts/worktree-manager.sh create feature-login
-
-# List worktrees
-bash scripts/worktree-manager.sh list
-
-# Switch to worktree
-bash scripts/worktree-manager.sh switch feature-login
-
-# Clean up completed worktrees
-bash scripts/worktree-manager.sh cleanup
-
-
Integration
-
-
Works with /review for isolated PR analysis
-
Works with /work for parallel feature development
-
-
Requirements
-
-
Git 2.8+ (for worktree support)
-
Worktrees stored in .worktrees/ directory
-
-
-
skill: git-worktree
-
-
-
-
-
-
-
Image Generation (1)
-
Generate images with AI. Not stock photos you found on Unsplash—images you describe and the model creates.
-
-
-
-
gemini-imagegen
- AI Images
-
-
- Need a logo with specific text? A product mockup on a marble surface? An illustration in a kawaii style? This skill wraps Google's Gemini image generation API. You describe what you want, it generates it. You can edit existing images, refine over multiple turns, or compose from reference images. All through simple Python scripts.
-
-
-
Features
-
-
Text-to-image generation
-
Image editing & manipulation
-
Multi-turn iterative refinement
-
Multiple reference images (up to 14)
-
Google Search grounding (Pro)
-
-
-
Available Models
-
-
-
-
Model
-
Resolution
-
Best For
-
-
-
-
-
gemini-2.5-flash-image
-
1024px
-
Speed, high-volume tasks
-
-
-
gemini-3-pro-image-preview
-
Up to 4K
-
Professional assets, complex instructions
-
-
-
-
-
Quick Start
-
-
# Text-to-image
-python scripts/generate_image.py "A cat wearing a wizard hat" output.png
-
-# Edit existing image
-python scripts/edit_image.py input.png "Add a rainbow in the background" output.png
-
-# Multi-turn chat
-python scripts/multi_turn_chat.py
Photorealistic - Include camera details: lens type, lighting, angle, mood
-
Stylized Art - Specify style explicitly: kawaii, cel-shading, bold outlines
-
Text in Images - Be explicit about font style and placement (use Pro model)
-
Product Mockups - Describe lighting setup and surface
-
-
-
Requirements
-
-
-
GEMINI_API_KEY
-
Required environment variable
-
-
-
google-genai
-
Python package
-
-
-
pillow
-
Python package for image handling
-
-
-
-
-
-
-
All generated images include SynthID watermarks. Image-only mode won't work with Google Search grounding.
-
-
-
-
-
skill: gemini-imagegen
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/plugins/compound-engineering/.claude-plugin/plugin.json b/plugins/compound-engineering/.claude-plugin/plugin.json
index 0d35df6..cd132c3 100644
--- a/plugins/compound-engineering/.claude-plugin/plugin.json
+++ b/plugins/compound-engineering/.claude-plugin/plugin.json
@@ -1,6 +1,6 @@
{
"name": "compound-engineering",
- "version": "2.37.2",
+ "version": "2.38.0",
"description": "AI-powered development tools. 29 agents, 22 commands, 20 skills, 1 MCP server for code review, research, design, and workflow automation.",
"author": {
"name": "Kieran Klaassen",
diff --git a/plugins/compound-engineering/CHANGELOG.md b/plugins/compound-engineering/CHANGELOG.md
index 370861e..4bb846a 100644
--- a/plugins/compound-engineering/CHANGELOG.md
+++ b/plugins/compound-engineering/CHANGELOG.md
@@ -5,6 +5,16 @@ All notable changes to the compound-engineering plugin will be documented in thi
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [2.38.0] - 2026-03-01
+
+### Changed
+- `workflows:plan`, `workflows:work`, `workflows:review`, `workflows:brainstorm`, `workflows:compound` renamed to `ce:plan`, `ce:work`, `ce:review`, `ce:brainstorm`, `ce:compound` for clarity — the `ce:` prefix unambiguously identifies these as compound-engineering commands
+
+### Deprecated
+- `workflows:*` commands — all five remain functional as aliases that forward to their `ce:*` equivalents with a deprecation notice. Will be removed in a future version.
+
+---
+
## [2.37.2] - 2026-03-01
### Added
diff --git a/plugins/compound-engineering/CLAUDE.md b/plugins/compound-engineering/CLAUDE.md
index dc34c27..18196f8 100644
--- a/plugins/compound-engineering/CLAUDE.md
+++ b/plugins/compound-engineering/CLAUDE.md
@@ -35,7 +35,8 @@ agents/
└── docs/ # Documentation agents
commands/
-├── workflows/ # Core workflow commands (workflows:plan, workflows:review, etc.)
+├── ce/ # Core workflow commands (ce:plan, ce:review, etc.)
+├── workflows/ # Deprecated aliases for ce:* commands
└── *.md # Utility commands
skills/
@@ -44,13 +45,14 @@ skills/
## Command Naming Convention
-**Workflow commands** use `workflows:` prefix to avoid collisions with built-in commands:
-- `/workflows:plan` - Create implementation plans
-- `/workflows:review` - Run comprehensive code reviews
-- `/workflows:work` - Execute work items systematically
-- `/workflows:compound` - Document solved problems
+**Workflow commands** use `ce:` prefix to unambiguously identify them as compound-engineering commands:
+- `/ce:plan` - Create implementation plans
+- `/ce:review` - Run comprehensive code reviews
+- `/ce:work` - Execute work items systematically
+- `/ce:compound` - Document solved problems
+- `/ce:brainstorm` - Explore requirements and approaches before planning
-**Why `workflows:`?** Claude Code has built-in `/plan` and `/review` commands. Using `name: workflows:plan` in frontmatter creates a unique `/workflows:plan` command with no collision.
+**Why `ce:`?** Claude Code has built-in `/plan` and `/review` commands. The `ce:` namespace (short for compound-engineering) makes it immediately clear these commands belong to this plugin. The legacy `workflows:` prefix is still supported as deprecated aliases that forward to the `ce:*` equivalents.
## Skill Compliance Checklist
diff --git a/plugins/compound-engineering/README.md b/plugins/compound-engineering/README.md
index 59b441b..33a4ea1 100644
--- a/plugins/compound-engineering/README.md
+++ b/plugins/compound-engineering/README.md
@@ -73,15 +73,17 @@ Agents are organized into categories for easier discovery.
### Workflow Commands
-Core workflow commands use `workflows:` prefix to avoid collisions with built-in commands:
+Core workflow commands use `ce:` prefix to unambiguously identify them as compound-engineering commands:
| Command | Description |
|---------|-------------|
-| `/workflows:brainstorm` | Explore requirements and approaches before planning |
-| `/workflows:plan` | Create implementation plans |
-| `/workflows:review` | Run comprehensive code reviews |
-| `/workflows:work` | Execute work items systematically |
-| `/workflows:compound` | Document solved problems to compound team knowledge |
+| `/ce:brainstorm` | Explore requirements and approaches before planning |
+| `/ce:plan` | Create implementation plans |
+| `/ce:review` | Run comprehensive code reviews |
+| `/ce:work` | Execute work items systematically |
+| `/ce:compound` | Document solved problems to compound team knowledge |
+
+> **Deprecated aliases:** `/workflows:plan`, `/workflows:work`, `/workflows:review`, `/workflows:brainstorm`, `/workflows:compound` still work but show a deprecation warning. Use `ce:*` equivalents.
### Utility Commands
diff --git a/plugins/compound-engineering/agents/research/git-history-analyzer.md b/plugins/compound-engineering/agents/research/git-history-analyzer.md
index fca36ca..296e480 100644
--- a/plugins/compound-engineering/agents/research/git-history-analyzer.md
+++ b/plugins/compound-engineering/agents/research/git-history-analyzer.md
@@ -56,4 +56,4 @@ When analyzing, consider:
Your insights should help developers understand not just what the code does, but why it evolved to its current state, informing better decisions for future changes.
-Note that files in `docs/plans/` and `docs/solutions/` are compound-engineering pipeline artifacts created by `/workflows:plan`. They are intentional, permanent living documents — do not recommend their removal or characterize them as unnecessary.
+Note that files in `docs/plans/` and `docs/solutions/` are compound-engineering pipeline artifacts created by `/ce:plan`. They are intentional, permanent living documents — do not recommend their removal or characterize them as unnecessary.
diff --git a/plugins/compound-engineering/agents/research/learnings-researcher.md b/plugins/compound-engineering/agents/research/learnings-researcher.md
index a53a260..bae9328 100644
--- a/plugins/compound-engineering/agents/research/learnings-researcher.md
+++ b/plugins/compound-engineering/agents/research/learnings-researcher.md
@@ -257,7 +257,7 @@ Structure your findings as:
## Integration Points
This agent is designed to be invoked by:
-- `/workflows:plan` - To inform planning with institutional knowledge
+- `/ce:plan` - To inform planning with institutional knowledge
- `/deepen-plan` - To add depth with relevant learnings
- Manual invocation before starting work on a feature
diff --git a/plugins/compound-engineering/agents/review/code-simplicity-reviewer.md b/plugins/compound-engineering/agents/review/code-simplicity-reviewer.md
index d7e01ff..0627822 100644
--- a/plugins/compound-engineering/agents/review/code-simplicity-reviewer.md
+++ b/plugins/compound-engineering/agents/review/code-simplicity-reviewer.md
@@ -48,7 +48,7 @@ When reviewing code, you will:
- Eliminate extensibility points without clear use cases
- Question generic solutions for specific problems
- Remove "just in case" code
- - Never flag `docs/plans/*.md` or `docs/solutions/*.md` for removal — these are compound-engineering pipeline artifacts created by `/workflows:plan` and used as living documents by `/workflows:work`
+ - Never flag `docs/plans/*.md` or `docs/solutions/*.md` for removal — these are compound-engineering pipeline artifacts created by `/ce:plan` and used as living documents by `/ce:work`
6. **Optimize for Readability**:
- Prefer self-documenting code over comments
diff --git a/plugins/compound-engineering/commands/ce/brainstorm.md b/plugins/compound-engineering/commands/ce/brainstorm.md
new file mode 100644
index 0000000..8527a4e
--- /dev/null
+++ b/plugins/compound-engineering/commands/ce/brainstorm.md
@@ -0,0 +1,145 @@
+---
+name: ce:brainstorm
+description: Explore requirements and approaches through collaborative dialogue before planning implementation
+argument-hint: "[feature idea or problem to explore]"
+---
+
+# Brainstorm a Feature or Improvement
+
+**Note: The current year is 2026.** Use this when dating brainstorm documents.
+
+Brainstorming helps answer **WHAT** to build through collaborative dialogue. It precedes `/ce:plan`, which answers **HOW** to build it.
+
+**Process knowledge:** Load the `brainstorming` skill for detailed question techniques, approach exploration patterns, and YAGNI principles.
+
+## Feature Description
+
+ #$ARGUMENTS
+
+**If the feature description above is empty, ask the user:** "What would you like to explore? Please describe the feature, problem, or improvement you're thinking about."
+
+Do not proceed until you have a feature description from the user.
+
+## Execution Flow
+
+### Phase 0: Assess Requirements Clarity
+
+Evaluate whether brainstorming is needed based on the feature description.
+
+**Clear requirements indicators:**
+- Specific acceptance criteria provided
+- Referenced existing patterns to follow
+- Described exact expected behavior
+- Constrained, well-defined scope
+
+**If requirements are already clear:**
+Use **AskUserQuestion tool** to suggest: "Your requirements seem detailed enough to proceed directly to planning. Should I run `/ce:plan` instead, or would you like to explore the idea further?"
+
+### Phase 1: Understand the Idea
+
+#### 1.1 Repository Research (Lightweight)
+
+Run a quick repo scan to understand existing patterns:
+
+- Task repo-research-analyst("Understand existing patterns related to: ")
+
+Focus on: similar features, established patterns, CLAUDE.md guidance.
+
+#### 1.2 Collaborative Dialogue
+
+Use the **AskUserQuestion tool** to ask questions **one at a time**.
+
+**Guidelines (see `brainstorming` skill for detailed techniques):**
+- Prefer multiple choice when natural options exist
+- Start broad (purpose, users) then narrow (constraints, edge cases)
+- Validate assumptions explicitly
+- Ask about success criteria
+
+**Exit condition:** Continue until the idea is clear OR user says "proceed"
+
+### Phase 2: Explore Approaches
+
+Propose **2-3 concrete approaches** based on research and conversation.
+
+For each approach, provide:
+- Brief description (2-3 sentences)
+- Pros and cons
+- When it's best suited
+
+Lead with your recommendation and explain why. Apply YAGNI—prefer simpler solutions.
+
+Use **AskUserQuestion tool** to ask which approach the user prefers.
+
+### Phase 3: Capture the Design
+
+Write a brainstorm document to `docs/brainstorms/YYYY-MM-DD--brainstorm.md`.
+
+**Document structure:** See the `brainstorming` skill for the template format. Key sections: What We're Building, Why This Approach, Key Decisions, Open Questions.
+
+Ensure `docs/brainstorms/` directory exists before writing.
+
+**IMPORTANT:** Before proceeding to Phase 4, check if there are any Open Questions listed in the brainstorm document. If there are open questions, YOU MUST ask the user about each one using AskUserQuestion before offering to proceed to planning. Move resolved questions to a "Resolved Questions" section.
+
+### Phase 4: Handoff
+
+Use **AskUserQuestion tool** to present next steps:
+
+**Question:** "Brainstorm captured. What would you like to do next?"
+
+**Options:**
+1. **Review and refine** - Improve the document through structured self-review
+2. **Proceed to planning** - Run `/ce:plan` (will auto-detect this brainstorm)
+3. **Share to Proof** - Upload to Proof for collaborative review and sharing
+4. **Ask more questions** - I have more questions to clarify before moving on
+5. **Done for now** - Return later
+
+**If user selects "Share to Proof":**
+
+```bash
+CONTENT=$(cat docs/brainstorms/YYYY-MM-DD--brainstorm.md)
+TITLE="Brainstorm: "
+RESPONSE=$(curl -s -X POST https://www.proofeditor.ai/share/markdown \
+ -H "Content-Type: application/json" \
+ -d "$(jq -n --arg title "$TITLE" --arg markdown "$CONTENT" --arg by "ai:compound" '{title: $title, markdown: $markdown, by: $by}')")
+PROOF_URL=$(echo "$RESPONSE" | jq -r '.tokenUrl')
+```
+
+Display the URL prominently: `View & collaborate in Proof: `
+
+If the curl fails, skip silently. Then return to the Phase 4 options.
+
+**If user selects "Ask more questions":** YOU (Claude) return to Phase 1.2 (Collaborative Dialogue) and continue asking the USER questions one at a time to further refine the design. The user wants YOU to probe deeper - ask about edge cases, constraints, preferences, or areas not yet explored. Continue until the user is satisfied, then return to Phase 4.
+
+**If user selects "Review and refine":**
+
+Load the `document-review` skill and apply it to the brainstorm document.
+
+When document-review returns "Review complete", present next steps:
+
+1. **Move to planning** - Continue to `/ce:plan` with this document
+2. **Done for now** - Brainstorming complete. To start planning later: `/ce:plan [document-path]`
+
+## Output Summary
+
+When complete, display:
+
+```
+Brainstorm complete!
+
+Document: docs/brainstorms/YYYY-MM-DD--brainstorm.md
+
+Key decisions:
+- [Decision 1]
+- [Decision 2]
+
+Next: Run `/ce:plan` when ready to implement.
+```
+
+## Important Guidelines
+
+- **Stay focused on WHAT, not HOW** - Implementation details belong in the plan
+- **Ask one question at a time** - Don't overwhelm
+- **Apply YAGNI** - Prefer simpler approaches
+- **Keep outputs concise** - 200-300 words per section max
+
+NEVER CODE! Just explore and document decisions.
diff --git a/plugins/compound-engineering/commands/ce/compound.md b/plugins/compound-engineering/commands/ce/compound.md
new file mode 100644
index 0000000..8637955
--- /dev/null
+++ b/plugins/compound-engineering/commands/ce/compound.md
@@ -0,0 +1,240 @@
+---
+name: ce:compound
+description: Document a recently solved problem to compound your team's knowledge
+argument-hint: "[optional: brief context about the fix]"
+---
+
+# /ce:compound
+
+Coordinate multiple subagents working in parallel to document a recently solved problem.
+
+## Purpose
+
+Captures problem solutions while context is fresh, creating structured documentation in `docs/solutions/` with YAML frontmatter for searchability and future reference. Uses parallel subagents for maximum efficiency.
+
+**Why "compound"?** Each documented solution compounds your team's knowledge. The first time you solve a problem takes research. Document it, and the next occurrence takes minutes. Knowledge compounds.
+
+## Usage
+
+```bash
+/ce:compound # Document the most recent fix
+/ce:compound [brief context] # Provide additional context hint
+```
+
+## Execution Strategy: Two-Phase Orchestration
+
+
+**Only ONE file gets written - the final documentation.**
+
+Phase 1 subagents return TEXT DATA to the orchestrator. They must NOT use Write, Edit, or create any files. Only the orchestrator (Phase 2) writes the final documentation file.
+
+
+### Phase 1: Parallel Research
+
+
+
+Launch these subagents IN PARALLEL. Each returns text data to the orchestrator.
+
+#### 1. **Context Analyzer**
+ - Extracts conversation history
+ - Identifies problem type, component, symptoms
+ - Validates against schema
+ - Returns: YAML frontmatter skeleton
+
+#### 2. **Solution Extractor**
+ - Analyzes all investigation steps
+ - Identifies root cause
+ - Extracts working solution with code examples
+ - Returns: Solution content block
+
+#### 3. **Related Docs Finder**
+ - Searches `docs/solutions/` for related documentation
+ - Identifies cross-references and links
+ - Finds related GitHub issues
+ - Returns: Links and relationships
+
+#### 4. **Prevention Strategist**
+ - Develops prevention strategies
+ - Creates best practices guidance
+ - Generates test cases if applicable
+ - Returns: Prevention/testing content
+
+#### 5. **Category Classifier**
+ - Determines optimal `docs/solutions/` category
+ - Validates category against schema
+ - Suggests filename based on slug
+ - Returns: Final path and filename
+
+
+
+### Phase 2: Assembly & Write
+
+
+
+**WAIT for all Phase 1 subagents to complete before proceeding.**
+
+The orchestrating agent (main conversation) performs these steps:
+
+1. Collect all text results from Phase 1 subagents
+2. Assemble complete markdown file from the collected pieces
+3. Validate YAML frontmatter against schema
+4. Create directory if needed: `mkdir -p docs/solutions/[category]/`
+5. Write the SINGLE final file: `docs/solutions/[category]/[filename].md`
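+The two-phase rule above can be pictured as a small orchestration sketch. This is illustrative only: real subagents are Claude Task invocations, not Python callables, and `run_compound` is a hypothetical name.

```python
from concurrent.futures import ThreadPoolExecutor


def run_compound(subagents, assemble, out_path):
    """Sketch of the orchestration contract: parallel research, single write."""
    # Phase 1: research subagents run in parallel and return text only.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(agent) for agent in subagents]
        results = [f.result() for f in futures]  # wait for ALL before Phase 2
    # Phase 2: only the orchestrator assembles and touches the filesystem.
    document = assemble(results)
    with open(out_path, "w") as fh:
        fh.write(document)
    return document
```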
+
+
+
+### Phase 3: Optional Enhancement
+
+**WAIT for Phase 2 to complete before proceeding.**
+
+
+
+Based on problem type, optionally invoke specialized agents to review the documentation:
+
+- **performance_issue** → `performance-oracle`
+- **security_issue** → `security-sentinel`
+- **database_issue** → `data-integrity-guardian`
+- **test_failure** → `cora-test-reviewer`
+- Any code-heavy issue → `kieran-rails-reviewer` + `code-simplicity-reviewer`
+
+
+
+## What It Captures
+
+- **Problem symptom**: Exact error messages, observable behavior
+- **Investigation steps tried**: What didn't work and why
+- **Root cause analysis**: Technical explanation
+- **Working solution**: Step-by-step fix with code examples
+- **Prevention strategies**: How to avoid in future
+- **Cross-references**: Links to related issues and docs
+
+## Preconditions
+
+
+
+- Problem has been solved (not in-progress)
+- Solution has been verified working
+- Non-trivial problem (not a simple typo or obvious error)
+
+
+
+## What It Creates
+
+**Organized documentation:**
+
+- File: `docs/solutions/[category]/[filename].md`
+
+**Categories auto-detected from problem:**
+
+- build-errors/
+- test-failures/
+- runtime-errors/
+- performance-issues/
+- database-issues/
+- security-issues/
+- ui-bugs/
+- integration-issues/
+- logic-errors/
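+Auto-detection could be approximated by a keyword heuristic like the sketch below. This is illustrative only: the real Category Classifier subagent uses full conversation context, and both names and keyword lists here are made up.

```python
# Hypothetical keyword map; the actual classifier is context-driven.
CATEGORY_KEYWORDS = {
    "performance-issues": ["n+1", "slow", "latency", "memory"],
    "test-failures": ["flaky", "assertion", "spec failed"],
    "database-issues": ["migration", "deadlock", "constraint"],
    "security-issues": ["injection", "xss", "csrf"],
}


def guess_category(symptom: str, default: str = "runtime-errors") -> str:
    """Pick a docs/solutions/ category from a problem symptom string."""
    text = symptom.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return default
```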
+
+## Common Mistakes to Avoid
+
+| ❌ Wrong | ✅ Correct |
+|----------|-----------|
+| Subagents write files like `context-analysis.md`, `solution-draft.md` | Subagents return text data; orchestrator writes one final file |
+| Research and assembly run in parallel | Research completes → then assembly runs |
+| Multiple files created during workflow | Single file: `docs/solutions/[category]/[filename].md` |
+
+## Success Output
+
+```
+✓ Documentation complete
+
+Subagent Results:
+ ✓ Context Analyzer: Identified performance_issue in brief_system
+ ✓ Solution Extractor: 3 code fixes
+ ✓ Related Docs Finder: 2 related issues
+ ✓ Prevention Strategist: Prevention strategies, test suggestions
+ ✓ Category Classifier: `performance-issues`
+
+Specialized Agent Reviews (Auto-Triggered):
+ ✓ performance-oracle: Validated query optimization approach
+ ✓ kieran-rails-reviewer: Code examples meet Rails standards
+ ✓ code-simplicity-reviewer: Solution is appropriately minimal
+ ✓ every-style-editor: Documentation style verified
+
+File created:
+- docs/solutions/performance-issues/n-plus-one-brief-generation.md
+
+This documentation will be searchable for future reference when similar
+issues occur in the Email Processing or Brief System modules.
+
+What's next?
+1. Continue workflow (recommended)
+2. Link related documentation
+3. Update other references
+4. View documentation
+5. Other
+```
+
+## The Compounding Philosophy
+
+This creates a compounding knowledge system:
+
+1. First time you solve "N+1 query in brief generation" → Research (30 min)
+2. Document the solution → docs/solutions/performance-issues/n-plus-one-briefs.md (5 min)
+3. Next time similar issue occurs → Quick lookup (2 min)
+4. Knowledge compounds → Team gets smarter
+
+The feedback loop:
+
+```
+Build → Test → Find Issue → Research → Improve → Document → Validate → Deploy
+ ↑ ↓
+ └──────────────────────────────────────────────────────────────────────┘
+```
+
+**Each unit of engineering work should make subsequent units of work easier—not harder.**
+
+## Auto-Invoke
+
+Trigger phrases: "that worked", "it's fixed", "working now", "problem solved"
+
+Tip: Use `/ce:compound [context]` to document immediately without waiting for auto-detection.
+
+## Routes To
+
+`compound-docs` skill
+
+## Applicable Specialized Agents
+
+Based on problem type, these agents can enhance documentation:
+
+### Code Quality & Review
+- **kieran-rails-reviewer**: Reviews code examples for Rails best practices
+- **code-simplicity-reviewer**: Ensures solution code is minimal and clear
+- **pattern-recognition-specialist**: Identifies anti-patterns or repeating issues
+
+### Specific Domain Experts
+- **performance-oracle**: Analyzes performance_issue category solutions
+- **security-sentinel**: Reviews security_issue solutions for vulnerabilities
+- **cora-test-reviewer**: Creates test cases for prevention strategies
+- **data-integrity-guardian**: Reviews database_issue migrations and queries
+
+### Enhancement & Documentation
+- **best-practices-researcher**: Enriches solution with industry best practices
+- **every-style-editor**: Reviews documentation style and clarity
+- **framework-docs-researcher**: Links to Rails/gem documentation references
+
+### When to Invoke
+- **Auto-triggered** (optional): Agents can run post-documentation for enhancement
+- **Manual trigger**: User can invoke agents after /ce:compound completes for deeper review
+- **Customize agents**: Edit `compound-engineering.local.md` or invoke the `setup` skill to configure which review agents are used across all workflows
+
+## Related Commands
+
+- `/research [topic]` - Deep investigation (searches docs/solutions/ for patterns)
+- `/ce:plan` - Planning workflow (references documented solutions)
diff --git a/plugins/compound-engineering/commands/ce/plan.md b/plugins/compound-engineering/commands/ce/plan.md
new file mode 100644
index 0000000..e4b0240
--- /dev/null
+++ b/plugins/compound-engineering/commands/ce/plan.md
@@ -0,0 +1,636 @@
+---
+name: ce:plan
+description: Transform feature descriptions into well-structured project plans following conventions
+argument-hint: "[feature description, bug report, or improvement idea]"
+---
+
+# Create a plan for a new feature or bug fix
+
+## Introduction
+
+**Note: The current year is 2026.** Use this when dating plans and searching for recent documentation.
+
+Transform feature descriptions, bug reports, or improvement ideas into well-structured markdown issue files that follow project conventions and best practices. This command provides flexible detail levels to match your needs.
+
+## Feature Description
+
+ #$ARGUMENTS
+
+**If the feature description above is empty, ask the user:** "What would you like to plan? Please describe the feature, bug fix, or improvement you have in mind."
+
+Do not proceed until you have a clear feature description from the user.
+
+### 0. Idea Refinement
+
+**Check for brainstorm output first:**
+
+Before asking questions, look for recent brainstorm documents in `docs/brainstorms/` that match this feature:
+
+```bash
+ls -la docs/brainstorms/*.md 2>/dev/null | head -10
+```
+
+**Relevance criteria:** A brainstorm is relevant if:
+- The topic (from filename or YAML frontmatter) semantically matches the feature description
+- Created within the last 14 days
+- If multiple candidates match, use the most recent one
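+The freshness and recency parts of these criteria can be sketched as below, assuming the `YYYY-MM-DD` filename prefix used by `/ce:brainstorm`; the helper name is illustrative, and semantic topic matching still happens separately.

```python
from datetime import date, timedelta
from pathlib import Path


def recent_brainstorms(dirpath, today, max_age_days=14):
    """Return brainstorm filenames at most max_age_days old, newest first."""
    candidates = []
    for p in Path(dirpath).glob("*.md"):
        try:
            created = date.fromisoformat(p.name[:10])  # YYYY-MM-DD prefix
        except ValueError:
            continue  # no date prefix: not a brainstorm document
        if today - created <= timedelta(days=max_age_days):
            candidates.append((created, p.name))
    return [name for _, name in sorted(candidates, reverse=True)]
```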
+
+**If a relevant brainstorm exists:**
+1. Read the brainstorm document **thoroughly** — every section matters
+2. Announce: "Found brainstorm from [date]: [topic]. Using as foundation for planning."
+3. Extract and carry forward **ALL** of the following into the plan:
+ - Key decisions and their rationale
+ - Chosen approach and why alternatives were rejected
+ - Constraints and requirements discovered during brainstorming
+ - Open questions (flag these for resolution during planning)
+ - Success criteria and scope boundaries
+ - Any specific technical choices or patterns discussed
+4. **Skip the idea refinement questions below** — the brainstorm already answered WHAT to build
+5. Use brainstorm content as the **primary input** to research and planning phases
+6. **Critical: The brainstorm is the origin document.** Throughout the plan, reference specific decisions with `(see brainstorm: docs/brainstorms/)` when carrying forward conclusions. Do not paraphrase decisions in a way that loses their original context — link back to the source.
+7. **Do not omit brainstorm content** — if the brainstorm discussed it, the plan must address it (even if briefly). Scan each brainstorm section before finalizing the plan to verify nothing was dropped.
+
+**If multiple brainstorms could match:**
+Use **AskUserQuestion tool** to ask which brainstorm to use, or whether to proceed without one.
+
+**If no brainstorm found (or not relevant), run idea refinement:**
+
+Refine the idea through collaborative dialogue using the **AskUserQuestion tool**:
+
+- Ask questions one at a time to understand the idea fully
+- Prefer multiple choice questions when natural options exist
+- Focus on understanding: purpose, constraints and success criteria
+- Continue until the idea is clear OR user says "proceed"
+
+**Gather signals for research decision.** During refinement, note:
+
+- **User's familiarity**: Do they know the codebase patterns? Are they pointing to examples?
+- **User's intent**: Speed vs thoroughness? Exploration vs execution?
+- **Topic risk**: Security, payments, external APIs warrant more caution
+- **Uncertainty level**: Is the approach clear or open-ended?
+
+**Skip option:** If the feature description is already detailed, offer:
+"Your description is clear. Should I proceed with research, or would you like to refine it further?"
+
+## Main Tasks
+
+### 1. Local Research (Always Runs - Parallel)
+
+
+First, I need to understand the project's conventions, existing patterns, and any documented learnings. This is fast and local - it informs whether external research is needed.
+
+
+Run these agents **in parallel** to gather local context:
+
+- Task repo-research-analyst(feature_description)
+- Task learnings-researcher(feature_description)
+
+**What to look for:**
+- **Repo research:** existing patterns, CLAUDE.md guidance, technology familiarity, pattern consistency
+- **Learnings:** documented solutions in `docs/solutions/` that might apply (gotchas, patterns, lessons learned)
+
+These findings inform the next step.
+
+### 1.5. Research Decision
+
+Based on signals from Step 0 and findings from Step 1, decide on external research.
+
+**High-risk topics → always research.** Security, payments, external APIs, data privacy. The cost of missing something is too high. This takes precedence over speed signals.
+
+**Strong local context → skip external research.** Codebase has good patterns, CLAUDE.md has guidance, user knows what they want. External research adds little value.
+
+**Uncertainty or unfamiliar territory → research.** User is exploring, codebase has no examples, new technology. External perspective is valuable.
+
+**Announce the decision and proceed.** Brief explanation, then continue. User can redirect if needed.
+
+Examples:
+- "Your codebase has solid patterns for this. Proceeding without external research."
+- "This involves payment processing, so I'll research current best practices first."
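+The precedence of these rules can be summarized in a short sketch. The boolean signals are stand-ins for the judgment calls gathered in Steps 0 and 1; the function name is illustrative.

```python
def should_research_externally(high_risk: bool,
                               strong_local_context: bool,
                               uncertain: bool) -> bool:
    """Sketch of the research decision rules, in precedence order."""
    if high_risk:             # security, payments, external APIs: always research
        return True
    if strong_local_context:  # good repo patterns and clear intent: skip
        return False
    return uncertain          # unfamiliar territory: research
```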
+
+### 1.5b. External Research (Conditional)
+
+**Only run if Step 1.5 indicates external research is valuable.**
+
+Run these agents in parallel:
+
+- Task best-practices-researcher(feature_description)
+- Task framework-docs-researcher(feature_description)
+
+### 1.6. Consolidate Research
+
+After all research steps complete, consolidate findings:
+
+- Document relevant file paths from repo research (e.g., `app/services/example_service.rb:42`)
+- **Include relevant institutional learnings** from `docs/solutions/` (key insights, gotchas to avoid)
+- Note external documentation URLs and best practices (if external research was done)
+- List related issues or PRs discovered
+- Capture CLAUDE.md conventions
+
+**Optional validation:** Briefly summarize findings and ask if anything looks off or missing before proceeding to planning.
+
+### 2. Issue Planning & Structure
+
+
+Think like a product manager: what would make this issue clear and actionable? Consider multiple perspectives.
+
+
+**Title & Categorization:**
+
+- [ ] Draft clear, searchable issue title using conventional format (e.g., `feat: Add user authentication`, `fix: Cart total calculation`)
+- [ ] Determine issue type: enhancement, bug, refactor
+- [ ] Convert title to filename: add today's date prefix, strip prefix colon, kebab-case, add `-plan` suffix
+ - Example: `feat: Add User Authentication` → `2026-01-21-feat-add-user-authentication-plan.md`
+ - Keep it descriptive (3-5 words after prefix) so plans are findable by context
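+ The conversion rule above can be sketched as a small helper (a sketch only; `plan_filename` is an illustrative name, not part of the plugin):

```python
import re
from datetime import date


def plan_filename(title: str, today: date) -> str:
    """Turn an issue title into the dated plan filename described above."""
    # Lowercase, collapse runs of non-alphanumerics (including the
    # prefix colon) into hyphens, then add the date prefix and -plan suffix.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{today.isoformat()}-{slug}-plan.md"
```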
+
+**Stakeholder Analysis:**
+
+- [ ] Identify who will be affected by this issue (end users, developers, operations)
+- [ ] Consider implementation complexity and required expertise
+
+**Content Planning:**
+
+- [ ] Choose appropriate detail level based on issue complexity and audience
+- [ ] List all necessary sections for the chosen template
+- [ ] Gather supporting materials (error logs, screenshots, design mockups)
+- [ ] Prepare code examples or reproduction steps if applicable, naming the mock filenames in the lists
+
+### 3. SpecFlow Analysis
+
+After planning the issue structure, run SpecFlow Analyzer to validate and refine the feature specification:
+
+- Task compound-engineering:workflow:spec-flow-analyzer(feature_description, research_findings)
+
+**SpecFlow Analyzer Output:**
+
+- [ ] Review SpecFlow analysis results
+- [ ] Incorporate any identified gaps or edge cases into the issue
+- [ ] Update acceptance criteria based on SpecFlow findings
+
+### 4. Choose Implementation Detail Level
+
+Select how comprehensive you want the issue to be; simpler is usually better.
+
+#### 📄 MINIMAL (Quick Issue)
+
+**Best for:** Simple bugs, small improvements, clear features
+
+**Includes:**
+
+- Problem statement or feature description
+- Basic acceptance criteria
+- Essential context only
+
+**Structure:**
+
+````markdown
+---
+title: [Issue Title]
+type: [feat|fix|refactor]
+status: active
+date: YYYY-MM-DD
+origin: docs/brainstorms/YYYY-MM-DD--brainstorm.md # if originated from brainstorm, otherwise omit
+---
+
+# [Issue Title]
+
+[Brief problem/feature description]
+
+## Acceptance Criteria
+
+- [ ] Core requirement 1
+- [ ] Core requirement 2
+
+## Context
+
+[Any critical information]
+
+## MVP
+
+### test.rb
+
+```ruby
+class Test
+ def initialize
+ @name = "test"
+ end
+end
+```
+
+## Sources
+
+- **Origin brainstorm:** [docs/brainstorms/YYYY-MM-DD--brainstorm.md](path) — include if plan originated from a brainstorm
+- Related issue: #[issue_number]
+- Documentation: [relevant_docs_url]
+````
+
+#### 📋 MORE (Standard Issue)
+
+**Best for:** Most features, complex bugs, team collaboration
+
+**Includes everything from MINIMAL plus:**
+
+- Detailed background and motivation
+- Technical considerations
+- Success metrics
+- Dependencies and risks
+- Basic implementation suggestions
+
+**Structure:**
+
+```markdown
+---
+title: [Issue Title]
+type: [feat|fix|refactor]
+status: active
+date: YYYY-MM-DD
+origin: docs/brainstorms/YYYY-MM-DD--brainstorm.md # if originated from brainstorm, otherwise omit
+---
+
+# [Issue Title]
+
+## Overview
+
+[Comprehensive description]
+
+## Problem Statement / Motivation
+
+[Why this matters]
+
+## Proposed Solution
+
+[High-level approach]
+
+## Technical Considerations
+
+- Architecture impacts
+- Performance implications
+- Security considerations
+
+## System-Wide Impact
+
+- **Interaction graph**: [What callbacks/middleware/observers fire when this runs?]
+- **Error propagation**: [How do errors flow across layers? Do retry strategies align?]
+- **State lifecycle risks**: [Can partial failure leave orphaned/inconsistent state?]
+- **API surface parity**: [What other interfaces expose similar functionality and need the same change?]
+- **Integration test scenarios**: [Cross-layer scenarios that unit tests won't catch]
+
+## Acceptance Criteria
+
+- [ ] Detailed requirement 1
+- [ ] Detailed requirement 2
+- [ ] Testing requirements
+
+## Success Metrics
+
+[How we measure success]
+
+## Dependencies & Risks
+
+[What could block or complicate this]
+
+## Sources & References
+
+- **Origin brainstorm:** [docs/brainstorms/YYYY-MM-DD--brainstorm.md](path) — include if plan originated from a brainstorm
+- Similar implementations: [file_path:line_number]
+- Best practices: [documentation_url]
+- Related PRs: #[pr_number]
+```
+
+#### 📚 A LOT (Comprehensive Issue)
+
+**Best for:** Major features, architectural changes, complex integrations
+
+**Includes everything from MORE plus:**
+
+- Detailed implementation plan with phases
+- Alternative approaches considered
+- Extensive technical specifications
+- Resource requirements and timeline
+- Future considerations and extensibility
+- Risk mitigation strategies
+- Documentation requirements
+
+**Structure:**
+
+```markdown
+---
+title: [Issue Title]
+type: [feat|fix|refactor]
+status: active
+date: YYYY-MM-DD
+origin: docs/brainstorms/YYYY-MM-DD--brainstorm.md # if originated from brainstorm, otherwise omit
+---
+
+# [Issue Title]
+
+## Overview
+
+[Executive summary]
+
+## Problem Statement
+
+[Detailed problem analysis]
+
+## Proposed Solution
+
+[Comprehensive solution design]
+
+## Technical Approach
+
+### Architecture
+
+[Detailed technical design]
+
+### Implementation Phases
+
+#### Phase 1: [Foundation]
+
+- Tasks and deliverables
+- Success criteria
+- Estimated effort
+
+#### Phase 2: [Core Implementation]
+
+- Tasks and deliverables
+- Success criteria
+- Estimated effort
+
+#### Phase 3: [Polish & Optimization]
+
+- Tasks and deliverables
+- Success criteria
+- Estimated effort
+
+## Alternative Approaches Considered
+
+[Other solutions evaluated and why rejected]
+
+## System-Wide Impact
+
+### Interaction Graph
+
+[Map the chain reaction: what callbacks, middleware, observers, and event handlers fire when this code runs? Trace at least two levels deep. Document: "Action X triggers Y, which calls Z, which persists W."]
+
+### Error & Failure Propagation
+
+[Trace errors from lowest layer up. List specific error classes and where they're handled. Identify retry conflicts, unhandled error types, and silent failure swallowing.]
+
+### State Lifecycle Risks
+
+[Walk through each step that persists state. Can partial failure orphan rows, duplicate records, or leave caches stale? Document cleanup mechanisms or their absence.]
+
+### API Surface Parity
+
+[List all interfaces (classes, DSLs, endpoints) that expose equivalent functionality. Note which need updating and which share the code path.]
+
+### Integration Test Scenarios
+
+[3-5 cross-layer test scenarios that unit tests with mocks would never catch. Include expected behavior for each.]
+
+## Acceptance Criteria
+
+### Functional Requirements
+
+- [ ] Detailed functional criteria
+
+### Non-Functional Requirements
+
+- [ ] Performance targets
+- [ ] Security requirements
+- [ ] Accessibility standards
+
+### Quality Gates
+
+- [ ] Test coverage requirements
+- [ ] Documentation completeness
+- [ ] Code review approval
+
+## Success Metrics
+
+[Detailed KPIs and measurement methods]
+
+## Dependencies & Prerequisites
+
+[Detailed dependency analysis]
+
+## Risk Analysis & Mitigation
+
+[Comprehensive risk assessment]
+
+## Resource Requirements
+
+[Team, time, infrastructure needs]
+
+## Future Considerations
+
+[Extensibility and long-term vision]
+
+## Documentation Plan
+
+[What docs need updating]
+
+## Sources & References
+
+### Origin
+
+- **Brainstorm document:** [docs/brainstorms/YYYY-MM-DD--brainstorm.md](path) — include if plan originated from a brainstorm. Key decisions carried forward: [list 2-3 major decisions from brainstorm]
+
+### Internal References
+
+- Architecture decisions: [file_path:line_number]
+- Similar features: [file_path:line_number]
+- Configuration: [file_path:line_number]
+
+### External References
+
+- Framework documentation: [url]
+- Best practices guide: [url]
+- Industry standards: [url]
+
+### Related Work
+
+- Previous PRs: #[pr_numbers]
+- Related issues: #[issue_numbers]
+- Design documents: [links]
+```
+
+### 5. Issue Creation & Formatting
+
+
+Apply best practices for clarity and actionability, making the issue easy to scan and understand.
+
+
+**Content Formatting:**
+
+- [ ] Use clear, descriptive headings with proper hierarchy (##, ###)
+- [ ] Include code examples in triple backticks with language syntax highlighting
+- [ ] Add screenshots/mockups if UI-related (drag & drop or use image hosting)
+- [ ] Use task lists (- [ ]) for trackable items that can be checked off
+- [ ] Add collapsible sections for lengthy logs or optional details using `<details>` tags
+- [ ] Apply appropriate emoji for visual scanning (🐛 bug, ✨ feature, 📚 docs, ♻️ refactor)
+
+**Cross-Referencing:**
+
+- [ ] Link to related issues/PRs using #number format
+- [ ] Reference specific commits with SHA hashes when relevant
+- [ ] Link to code using GitHub's permalink feature (press 'y' for permanent link)
+- [ ] Mention relevant team members with @username if needed
+- [ ] Add links to external resources with descriptive text
+
+**Code & Examples:**
+
+````markdown
+# Good example with syntax highlighting and line references
+
+```ruby
+# app/services/user_service.rb:42
+def process_user(user)
+  # Implementation here
+end
+```
+
+# Collapsible error logs
+
+<details>
+<summary>Full error stacktrace</summary>
+
+`Error details here...`
+
+</details>
+````
+
+**AI-Era Considerations:**
+
+- [ ] Account for accelerated development with AI pair programming
+- [ ] Include prompts or instructions that worked well during research
+- [ ] Note which AI tools were used for initial exploration (Claude, Copilot, etc.)
+- [ ] Emphasize comprehensive testing given rapid implementation
+- [ ] Document any AI-generated code that needs human review
+
+### 6. Final Review & Submission
+
+**Brainstorm cross-check (if plan originated from a brainstorm):**
+
+Before finalizing, re-read the brainstorm document and verify:
+- [ ] Every key decision from the brainstorm is reflected in the plan
+- [ ] The chosen approach matches what was decided in the brainstorm
+- [ ] Constraints and requirements from the brainstorm are captured in acceptance criteria
+- [ ] Open questions from the brainstorm are either resolved or flagged
+- [ ] The `origin:` frontmatter field points to the brainstorm file
+- [ ] The Sources section includes the brainstorm with a summary of carried-forward decisions
+
+**Pre-submission Checklist:**
+
+- [ ] Title is searchable and descriptive
+- [ ] Labels accurately categorize the issue
+- [ ] All template sections are complete
+- [ ] Links and references are working
+- [ ] Acceptance criteria are measurable
+- [ ] Include file names in pseudocode examples and todo lists
+- [ ] Add an ERD mermaid diagram if applicable for new model changes
+
+## Write Plan File
+
+**REQUIRED: Write the plan file to disk before presenting any options.**
+
+```bash
+mkdir -p docs/plans/
+```
+
+Use the Write tool to save the complete plan to `docs/plans/YYYY-MM-DD-[type]-[description]-plan.md`. This step is mandatory and cannot be skipped — even when running as part of LFG/SLFG or other automated pipelines.
+
+Confirm: "Plan written to docs/plans/[filename]"
+
+**Pipeline mode:** If invoked from an automated workflow (LFG, SLFG, or any `disable-model-invocation` context), skip all AskUserQuestion calls. Make decisions automatically and proceed to writing the plan without interactive prompts.
+
+## Output Format
+
+**Filename:** Use the date and kebab-case filename from Step 2 Title & Categorization.
+
+```
+docs/plans/YYYY-MM-DD-[type]-[description]-plan.md
+```
+
+Examples:
+- ✅ `docs/plans/2026-01-15-feat-user-authentication-flow-plan.md`
+- ✅ `docs/plans/2026-02-03-fix-checkout-race-condition-plan.md`
+- ✅ `docs/plans/2026-03-10-refactor-api-client-extraction-plan.md`
+- ❌ `docs/plans/2026-01-15-feat-thing-plan.md` (not descriptive - what "thing"?)
+- ❌ `docs/plans/2026-01-15-feat-new-feature-plan.md` (too vague - what feature?)
+- ❌ `docs/plans/2026-01-15-feat: user auth-plan.md` (invalid characters - colon and space)
+- ❌ `docs/plans/feat-user-auth-plan.md` (missing date prefix)
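
The convention above is mechanical enough to check with a short script. A minimal sketch (illustrative only, not part of the command):

```shell
# Validate a plan filename against the YYYY-MM-DD-{type}-{description}-plan.md convention.
pattern='^docs/plans/[0-9]{4}-[0-9]{2}-[0-9]{2}-(feat|fix|refactor)-[a-z0-9]+(-[a-z0-9]+)*-plan\.md$'
check() { printf '%s\n' "$1" | grep -qE "$pattern" && echo valid || echo invalid; }

check "docs/plans/2026-01-15-feat-user-authentication-flow-plan.md"  # valid
check "docs/plans/feat-user-auth-plan.md"                            # invalid (missing date)
check "docs/plans/2026-01-15-feat: user auth-plan.md"                # invalid (colon and space)
```

The regex only enforces format; descriptiveness still requires judgment.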
+
+## Post-Generation Options
+
+After writing the plan file, use the **AskUserQuestion tool** to present these options:
+
+**Question:** "Plan ready at `docs/plans/[filename]`. What would you like to do next?"
+
+**Options:**
+1. **Open plan in editor** - Open the plan file for review
+2. **Run `/deepen-plan`** - Enhance each section with parallel research agents (best practices, performance, UI)
+3. **Run `/technical_review`** - Technical feedback from code-focused reviewers (DHH, Kieran, Simplicity)
+4. **Review and refine** - Improve the document through structured self-review
+5. **Share to Proof** - Upload to Proof for collaborative review and sharing
+6. **Start `/ce:work`** - Begin implementing this plan locally
+7. **Start `/ce:work` on remote** - Begin implementing in Claude Code on the web (use `&` to run in background)
+8. **Create Issue** - Create issue in project tracker (GitHub/Linear)
+
+Based on selection:
+- **Open plan in editor** → Run `open docs/plans/[filename]` to open the file in the user's default editor
+- **`/deepen-plan`** → Call the /deepen-plan command with the plan file path to enhance with research
+- **`/technical_review`** → Call the /technical_review command with the plan file path
+- **Review and refine** → Load `document-review` skill.
+- **Share to Proof** → Upload the plan to Proof:
+ ```bash
+  CONTENT=$(cat docs/plans/[filename])
+  TITLE="Plan: [plan title]"
+ RESPONSE=$(curl -s -X POST https://www.proofeditor.ai/share/markdown \
+ -H "Content-Type: application/json" \
+ -d "$(jq -n --arg title "$TITLE" --arg markdown "$CONTENT" --arg by "ai:compound" '{title: $title, markdown: $markdown, by: $by}')")
+ PROOF_URL=$(echo "$RESPONSE" | jq -r '.tokenUrl')
+ ```
+  Display: `View & collaborate in Proof: $PROOF_URL` — skip silently if curl fails. Then return to options.
+- **`/ce:work`** → Call the /ce:work command with the plan file path
+- **`/ce:work` on remote** → Run `/ce:work docs/plans/[filename] &` to start work in background for Claude Code web
+- **Create Issue** → See "Issue Creation" section below
+- **Other** (automatically provided) → Accept free text for rework or specific changes
+
+**Note:** If running `/ce:plan` with ultrathink enabled, automatically run `/deepen-plan` after plan creation for maximum depth and grounding.
+
+Loop back to the options after **Review and refine** or **Other** changes until the user selects `/ce:work` or `/technical_review`.
+
+## Issue Creation
+
+When user selects "Create Issue", detect their project tracker from CLAUDE.md:
+
+1. **Check for tracker preference** in user's CLAUDE.md (global or project):
+ - Look for `project_tracker: github` or `project_tracker: linear`
+ - Or look for mentions of "GitHub Issues" or "Linear" in their workflow section
+
+2. **If GitHub:**
+
+ Use the title and type from Step 2 (already in context - no need to re-read the file):
+
+ ```bash
+   gh issue create --title "[type]: [title]" --body-file [plan-file]
+ ```
+
+3. **If Linear:**
+
+ ```bash
+   linear issue create --title "[title]" --description "$(cat [plan-file])"
+ ```
+
+4. **If no tracker configured:**
+ Ask user: "Which project tracker do you use? (GitHub/Linear/Other)"
+ - Suggest adding `project_tracker: github` or `project_tracker: linear` to their CLAUDE.md
+
+5. **After creation:**
+ - Display the issue URL
+ - Ask if they want to proceed to `/ce:work` or `/technical_review`
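
The tracker lookup in step 1 can be sketched with a simple grep (hypothetical: assumes `project_tracker:` appears at the start of a line; the sample file is created here purely for illustration):

```shell
# Detect the preferred tracker from a CLAUDE.md file.
printf 'project_tracker: linear\n' > /tmp/CLAUDE.md
tracker=$(grep -m1 -E '^project_tracker:' /tmp/CLAUDE.md | awk '{print $2}')
echo "${tracker:-unconfigured}"   # linear
```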
+
+NEVER CODE! Just research and write the plan.
diff --git a/plugins/compound-engineering/commands/ce/review.md b/plugins/compound-engineering/commands/ce/review.md
new file mode 100644
index 0000000..cf4a061
--- /dev/null
+++ b/plugins/compound-engineering/commands/ce/review.md
@@ -0,0 +1,525 @@
+---
+name: ce:review
+description: Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and worktrees
+argument-hint: "[PR number, GitHub URL, branch name, or latest]"
+---
+
+# Review Command
+
+ Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and Git worktrees for deep local inspection.
+
+## Introduction
+
+Senior Code Review Architect with expertise in security, performance, architecture, and quality assurance
+
+## Prerequisites
+
+
+- Git repository with GitHub CLI (`gh`) installed and authenticated
+- Clean main/master branch
+- Proper permissions to create worktrees and access the repository
+- For document reviews: Path to a markdown file or document
+
+
+## Main Tasks
+
+### 1. Determine Review Target & Setup (ALWAYS FIRST)
+
+ #$ARGUMENTS
+
+
+First, I need to determine the review target type and set up the code for analysis.
+
+
+#### Immediate Actions:
+
+
+
+- [ ] Determine review type: PR number (numeric), GitHub URL, file path (.md), or empty (current branch)
+- [ ] Check current git branch
+- [ ] If ALREADY on the target branch (PR branch, requested branch name, or the branch already checked out for review) → proceed with analysis on current branch
+- [ ] If DIFFERENT branch than the review target → offer to use a worktree for isolated review: call `skill: git-worktree` with the branch name
+- [ ] Fetch PR metadata using `gh pr view --json` for title, body, files, linked issues
+- [ ] Set up language-specific analysis tools
+- [ ] Prepare security scanning environment
+- [ ] Make sure we are on the branch we are reviewing: use `gh pr checkout` to switch to it, or check out the branch manually
+
+Ensure that the code is ready for analysis (either in worktree or on current branch). ONLY then proceed to the next step.
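
The target-type decision above can be sketched as a small classifier (hypothetical helper; the labels are illustrative):

```shell
# Classify the /ce:review argument: empty, PR number, GitHub URL, document, or branch.
classify() {
  case "$1" in
    "") echo current-branch ;;
    *[!0-9]*)
      case "$1" in
        https://github.com/*) echo github-url ;;
        *.md)                 echo document ;;
        *)                    echo branch-name ;;
      esac ;;
    *) echo pr-number ;;
  esac
}

classify 123                          # pr-number
classify docs/plans/example-plan.md   # document
classify fix/email-validation         # branch-name
```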
+
+
+
+#### Protected Artifacts
+
+
+The following paths are compound-engineering pipeline artifacts and must never be flagged for deletion, removal, or gitignore by any review agent:
+
+- `docs/plans/*.md` — Plan files created by `/ce:plan`. These are living documents that track implementation progress (checkboxes are checked off by `/ce:work`).
+- `docs/solutions/*.md` — Solution documents created during the pipeline.
+
+If a review agent flags any file in these directories for cleanup or removal, discard that finding during synthesis. Do not create a todo for it.
+
+
+#### Load Review Agents
+
+Read `compound-engineering.local.md` in the project root. If found, use `review_agents` from YAML frontmatter. If the markdown body contains review context, pass it to each agent as additional instructions.
+
+If no settings file exists, invoke the `setup` skill to create one. Then read the newly created file and continue.
+
+#### Parallel Agents to review the PR:
+
+
+
+Run all configured review agents in parallel using Task tool. For each agent in the `review_agents` list:
+
+```
+Task {agent-name}(PR content + review context from settings body)
+```
+
+Additionally, always run these regardless of settings:
+- Task agent-native-reviewer(PR content) - Verify new features are agent-accessible
+- Task learnings-researcher(PR content) - Search docs/solutions/ for past issues related to this PR's modules and patterns
+
+
+
+#### Conditional Agents (Run if applicable):
+
+
+
+These agents are run ONLY when the PR matches specific criteria. Check the PR files list to determine if they apply:
+
+**MIGRATIONS: If PR contains database migrations, schema.rb, or data backfills:**
+
+- Task schema-drift-detector(PR content) - Detects unrelated schema.rb changes by cross-referencing against included migrations (run FIRST)
+- Task data-migration-expert(PR content) - Validates ID mappings match production, checks for swapped values, verifies rollback safety
+- Task deployment-verification-agent(PR content) - Creates Go/No-Go deployment checklist with SQL verification queries
+
+**When to run:**
+- PR includes files matching `db/migrate/*.rb` or `db/schema.rb`
+- PR modifies columns that store IDs, enums, or mappings
+- PR includes data backfill scripts or rake tasks
+- PR title/body mentions: migration, backfill, data transformation, ID mapping
+
+**What these agents check:**
+- `schema-drift-detector`: Cross-references schema.rb changes against PR migrations to catch unrelated columns/indexes from local database state
+- `data-migration-expert`: Verifies hard-coded mappings match production reality (prevents swapped IDs), checks for orphaned associations, validates dual-write patterns
+- `deployment-verification-agent`: Produces executable pre/post-deploy checklists with SQL queries, rollback procedures, and monitoring plans
+
+
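
The file-based trigger can be sketched as a filter over the PR's file list (a sample list stands in for real `gh pr view --json files` output):

```shell
# Decide whether the migration-focused agents should run.
pr_files='db/migrate/20260101000000_add_role_to_users.rb
db/schema.rb
app/models/user.rb'
needs_migration_review=$(printf '%s\n' "$pr_files" \
  | grep -qE '^db/(migrate/.*\.rb|schema\.rb)$' && echo yes || echo no)
echo "$needs_migration_review"   # yes
```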
+
+### 2. Ultra-Thinking Deep Dive Phases
+
+ For each phase below, spend maximum cognitive effort. Think step by step. Consider all angles. Question assumptions. Then bring all reviews together in a synthesis for the user.
+
+
+Complete system context map with component interactions
+
+
+#### Phase 1: Stakeholder Perspective Analysis
+
+ ULTRA-THINK: Put yourself in each stakeholder's shoes. What matters to them? What are their pain points?
+
+
+
+1. **Developer Perspective**
+
+ - How easy is this to understand and modify?
+ - Are the APIs intuitive?
+ - Is debugging straightforward?
+ - Can I test this easily?
+
+2. **Operations Perspective**
+
+ - How do I deploy this safely?
+ - What metrics and logs are available?
+ - How do I troubleshoot issues?
+ - What are the resource requirements?
+
+3. **End User Perspective**
+
+ - Is the feature intuitive?
+ - Are error messages helpful?
+ - Is performance acceptable?
+ - Does it solve my problem?
+
+4. **Security Team Perspective**
+
+ - What's the attack surface?
+ - Are there compliance requirements?
+ - How is data protected?
+ - What are the audit capabilities?
+
+5. **Business Perspective**
+ - What's the ROI?
+ - Are there legal/compliance risks?
+ - How does this affect time-to-market?
+ - What's the total cost of ownership?
+
+#### Phase 2: Scenario Exploration
+
+ ULTRA-THINK: Explore edge cases and failure scenarios. What could go wrong? How does the system behave under stress?
+
+
+
+- [ ] **Happy Path**: Normal operation with valid inputs
+- [ ] **Invalid Inputs**: Null, empty, malformed data
+- [ ] **Boundary Conditions**: Min/max values, empty collections
+- [ ] **Concurrent Access**: Race conditions, deadlocks
+- [ ] **Scale Testing**: 10x, 100x, 1000x normal load
+- [ ] **Network Issues**: Timeouts, partial failures
+- [ ] **Resource Exhaustion**: Memory, disk, connections
+- [ ] **Security Attacks**: Injection, overflow, DoS
+- [ ] **Data Corruption**: Partial writes, inconsistency
+- [ ] **Cascading Failures**: Downstream service issues
+
+### 3. Multi-Angle Review Perspectives
+
+#### Technical Excellence Angle
+
+- Code craftsmanship evaluation
+- Engineering best practices
+- Technical documentation quality
+- Tooling and automation assessment
+
+#### Business Value Angle
+
+- Feature completeness validation
+- Performance impact on users
+- Cost-benefit analysis
+- Time-to-market considerations
+
+#### Risk Management Angle
+
+- Security risk assessment
+- Operational risk evaluation
+- Compliance risk verification
+- Technical debt accumulation
+
+#### Team Dynamics Angle
+
+- Code review etiquette
+- Knowledge sharing effectiveness
+- Collaboration patterns
+- Mentoring opportunities
+
+### 4. Simplification and Minimalism Review
+
+Run the Task code-simplicity-reviewer() to see if we can simplify the code.
+
+### 5. Findings Synthesis and Todo Creation Using file-todos Skill
+
+ ALL findings MUST be stored in the todos/ directory using the file-todos skill. Create todo files immediately after synthesis - do NOT present findings for user approval first. Use the skill for structured todo management.
+
+#### Step 1: Synthesize All Findings
+
+
+Consolidate all agent reports into a categorized list of findings.
+Remove duplicates, prioritize by severity and impact.
+
+
+
+
+- [ ] Collect findings from all parallel agents
+- [ ] Surface learnings-researcher results: if past solutions are relevant, flag them as "Known Pattern" with links to docs/solutions/ files
+- [ ] Discard any findings that recommend deleting or gitignoring files in `docs/plans/` or `docs/solutions/` (see Protected Artifacts above)
+- [ ] Categorize by type: security, performance, architecture, quality, etc.
+- [ ] Assign severity levels: 🔴 CRITICAL (P1), 🟡 IMPORTANT (P2), 🔵 NICE-TO-HAVE (P3)
+- [ ] Remove duplicate or overlapping findings
+- [ ] Estimate effort for each finding (Small/Medium/Large)
+
+
+
+#### Step 2: Create Todo Files Using file-todos Skill
+
+ Use the file-todos skill to create todo files for ALL findings immediately. Do NOT present findings one-by-one asking for user approval. Create all todo files in parallel using the skill, then summarize results to user.
+
+**Implementation Options:**
+
+**Option A: Direct File Creation (Fast)**
+
+- Create todo files directly using Write tool
+- All findings in parallel for speed
+- Use standard template from `.claude/skills/file-todos/assets/todo-template.md`
+- Follow naming convention: `{issue_id}-pending-{priority}-{description}.md`
+
+**Option B: Sub-Agents in Parallel (Recommended for Scale)** For large PRs with 15+ findings, use sub-agents to create finding files in parallel:
+
+```bash
+# Launch multiple finding-creator agents in parallel
+Task general-purpose(finding 1) - Create todo file for first finding
+Task general-purpose(finding 2) - Create todo file for second finding
+Task general-purpose(finding 3) - Create todo file for third finding
+# ...repeat for each remaining finding
+```
+
+Sub-agents can:
+
+- Process multiple findings simultaneously
+- Write detailed todo files with all sections filled
+- Organize findings by severity
+- Create comprehensive Proposed Solutions
+- Add acceptance criteria and work logs
+- Complete much faster than sequential processing
+
+**Execution Strategy:**
+
+1. Synthesize all findings into categories (P1/P2/P3)
+2. Group findings by severity
+3. Launch 3 parallel sub-agents (one per severity level)
+4. Each sub-agent creates its batch of todos using the file-todos skill
+5. Consolidate results and present summary
+
+**Process (Using file-todos Skill):**
+
+1. For each finding:
+
+ - Determine severity (P1/P2/P3)
+ - Write detailed Problem Statement and Findings
+ - Create 2-3 Proposed Solutions with pros/cons/effort/risk
+ - Estimate effort (Small/Medium/Large)
+ - Add acceptance criteria and work log
+
+2. Use file-todos skill for structured todo management:
+
+ ```bash
+ skill: file-todos
+ ```
+
+ The skill provides:
+
+ - Template location: `.claude/skills/file-todos/assets/todo-template.md`
+ - Naming convention: `{issue_id}-{status}-{priority}-{description}.md`
+ - YAML frontmatter structure: status, priority, issue_id, tags, dependencies
+ - All required sections: Problem Statement, Findings, Solutions, etc.
+
+3. Create todo files in parallel:
+
+ ```bash
+ {next_id}-pending-{priority}-{description}.md
+ ```
+
+4. Examples:
+
+ ```
+ 001-pending-p1-path-traversal-vulnerability.md
+ 002-pending-p1-api-response-validation.md
+ 003-pending-p2-concurrency-limit.md
+ 004-pending-p3-unused-parameter.md
+ ```
+
+5. Follow template structure from file-todos skill: `.claude/skills/file-todos/assets/todo-template.md`
+
+**Todo File Structure (from template):**
+
+Each todo must include:
+
+- **YAML frontmatter**: status, priority, issue_id, tags, dependencies
+- **Problem Statement**: What's broken/missing, why it matters
+- **Findings**: Discoveries from agents with evidence/location
+- **Proposed Solutions**: 2-3 options, each with pros/cons/effort/risk
+- **Recommended Action**: (Filled during triage, leave blank initially)
+- **Technical Details**: Affected files, components, database changes
+- **Acceptance Criteria**: Testable checklist items
+- **Work Log**: Dated record with actions and learnings
+- **Resources**: Links to PR, issues, documentation, similar patterns
+
+**File naming convention:**
+
+```
+{issue_id}-{status}-{priority}-{description}.md
+
+Examples:
+- 001-pending-p1-security-vulnerability.md
+- 002-pending-p2-performance-optimization.md
+- 003-pending-p3-code-cleanup.md
+```
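
Deriving `{next_id}` can be sketched as a count of existing numbered files (illustrative; a real run would scan the project's todos/ directory):

```shell
# Compute the next zero-padded issue_id from files already present.
dir=$(mktemp -d)
touch "$dir/001-pending-p1-security-vulnerability.md" \
      "$dir/002-pending-p2-performance-optimization.md"
count=$(ls "$dir" | grep -cE '^[0-9]{3}-')
next_id=$(printf '%03d' $((count + 1)))
echo "$next_id"   # 003
```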
+
+**Status values:**
+
+- `pending` - New findings, needs triage/decision
+- `ready` - Approved by manager, ready to work
+- `complete` - Work finished
+
+**Priority values:**
+
+- `p1` - Critical (blocks merge, security/data issues)
+- `p2` - Important (should fix, architectural/performance)
+- `p3` - Nice-to-have (enhancements, cleanup)
+
+**Tagging:** Always add `code-review` tag, plus: `security`, `performance`, `architecture`, `rails`, `quality`, etc.
+
+#### Step 3: Summary Report
+
+After creating all todo files, present comprehensive summary:
+
+````markdown
+## ✅ Code Review Complete
+
+**Review Target:** PR #XXXX - [PR Title]
+**Branch:** [branch-name]
+
+### Findings Summary:
+
+- **Total Findings:** [X]
+- **🔴 CRITICAL (P1):** [count] - BLOCKS MERGE
+- **🟡 IMPORTANT (P2):** [count] - Should Fix
+- **🔵 NICE-TO-HAVE (P3):** [count] - Enhancements
+
+### Created Todo Files:
+
+**P1 - Critical (BLOCKS MERGE):**
+
+- `001-pending-p1-{finding}.md` - {description}
+- `002-pending-p1-{finding}.md` - {description}
+
+**P2 - Important:**
+
+- `003-pending-p2-{finding}.md` - {description}
+- `004-pending-p2-{finding}.md` - {description}
+
+**P3 - Nice-to-Have:**
+
+- `005-pending-p3-{finding}.md` - {description}
+
+### Review Agents Used:
+
+- kieran-rails-reviewer
+- security-sentinel
+- performance-oracle
+- architecture-strategist
+- agent-native-reviewer
+- [other agents]
+
+### Next Steps:
+
+1. **Address P1 Findings**: CRITICAL - must be fixed before merge
+
+ - Review each P1 todo in detail
+ - Implement fixes or request exemption
+ - Verify fixes before merging PR
+
+2. **Triage All Todos**:
+ ```bash
+ ls todos/*-pending-*.md # View all pending todos
+ /triage # Use slash command for interactive triage
+ ```
+
+3. **Work on Approved Todos**:
+
+ ```bash
+ /resolve_todo_parallel # Fix all approved items efficiently
+ ```
+
+4. **Track Progress**:
+ - Rename file when status changes: pending → ready → complete
+ - Update Work Log as you work
+ - Commit todos: `git add todos/ && git commit -m "refactor: add code review findings"`
+
+### Severity Breakdown:
+
+**🔴 P1 (Critical - Blocks Merge):**
+
+- Security vulnerabilities
+- Data corruption risks
+- Breaking changes
+- Critical architectural issues
+
+**🟡 P2 (Important - Should Fix):**
+
+- Performance issues
+- Significant architectural concerns
+- Major code quality problems
+- Reliability issues
+
+**🔵 P3 (Nice-to-Have):**
+
+- Minor improvements
+- Code cleanup
+- Optimization opportunities
+- Documentation updates
+````
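
The status rename in step 4 of Next Steps amounts to rewriting one filename segment — a portable sketch (demo directory only):

```shell
# Move a todo from pending to ready by renaming the file.
dir=$(mktemp -d)
f="$dir/001-pending-p1-security-vulnerability.md"
touch "$f"
mv "$f" "$(printf '%s' "$f" | sed 's/-pending-/-ready-/')"
ls "$dir"   # 001-ready-p1-security-vulnerability.md
```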
+
+### 6. End-to-End Testing (Optional)
+
+
+
+**First, detect the project type from PR files:**
+
+| Indicator | Project Type |
+|-----------|--------------|
+| `*.xcodeproj`, `*.xcworkspace`, `Package.swift` (iOS) | iOS/macOS |
+| `Gemfile`, `package.json`, `app/views/*`, `*.html.*` | Web |
+| Both iOS files AND web files | Hybrid (test both) |
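
The detection table can be sketched as two greps over the PR's file paths (hypothetical; the sample list is illustrative):

```shell
# Detect iOS, web, or hybrid from a PR's file paths.
pr_files='MyApp.xcodeproj/project.pbxproj
Gemfile
app/views/home/index.html.erb'
has_ios=$(printf '%s\n' "$pr_files" | grep -qE '\.(xcodeproj|xcworkspace)(/|$)|^Package\.swift$' && echo yes || echo no)
has_web=$(printf '%s\n' "$pr_files" | grep -qE '^(Gemfile|package\.json)$|^app/views/|\.html\.' && echo yes || echo no)
case "$has_ios$has_web" in
  yesyes) project=hybrid ;;
  yesno)  project=ios ;;
  *)      project=web ;;
esac
echo "$project"   # hybrid
```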
+
+
+
+
+
+After presenting the Summary Report, offer appropriate testing based on project type:
+
+**For Web Projects:**
+```markdown
+**"Want to run browser tests on the affected pages?"**
+1. Yes - run `/test-browser`
+2. No - skip
+```
+
+**For iOS Projects:**
+```markdown
+**"Want to run Xcode simulator tests on the app?"**
+1. Yes - run `/xcode-test`
+2. No - skip
+```
+
+**For Hybrid Projects (e.g., Rails + Hotwire Native):**
+```markdown
+**"Want to run end-to-end tests?"**
+1. Web only - run `/test-browser`
+2. iOS only - run `/xcode-test`
+3. Both - run both commands
+4. No - skip
+```
+
+
+
+#### If User Accepts Web Testing:
+
+Spawn a subagent to run browser tests (preserves main context):
+
+```
+Task general-purpose("Run /test-browser for PR #[number]. Test all affected pages, check for console errors, handle failures by creating todos and fixing.")
+```
+
+The subagent will:
+1. Identify pages affected by the PR
+2. Navigate to each page and capture snapshots (using Playwright MCP or agent-browser CLI)
+3. Check for console errors
+4. Test critical interactions
+5. Pause for human verification on OAuth/email/payment flows
+6. Create P1 todos for any failures
+7. Fix and retry until all tests pass
+
+**Standalone:** `/test-browser [PR number]`
+
+#### If User Accepts iOS Testing:
+
+Spawn a subagent to run Xcode tests (preserves main context):
+
+```
+Task general-purpose("Run /xcode-test for scheme [name]. Build for simulator, install, launch, take screenshots, check for crashes.")
+```
+
+The subagent will:
+1. Verify XcodeBuildMCP is installed
+2. Discover project and schemes
+3. Build for iOS Simulator
+4. Install and launch app
+5. Take screenshots of key screens
+6. Capture console logs for errors
+7. Pause for human verification (Sign in with Apple, push, IAP)
+8. Create P1 todos for any failures
+9. Fix and retry until all tests pass
+
+**Standalone:** `/xcode-test [scheme]`
+
+### Important: P1 Findings Block Merge
+
+Any **🔴 P1 (CRITICAL)** findings must be addressed before merging the PR. Present these prominently and ensure they're resolved before accepting the PR.
diff --git a/plugins/compound-engineering/commands/ce/work.md b/plugins/compound-engineering/commands/ce/work.md
new file mode 100644
index 0000000..3e09c43
--- /dev/null
+++ b/plugins/compound-engineering/commands/ce/work.md
@@ -0,0 +1,470 @@
+---
+name: ce:work
+description: Execute work plans efficiently while maintaining quality and finishing features
+argument-hint: "[plan file, specification, or todo file path]"
+---
+
+# Work Plan Execution Command
+
+Execute a work plan efficiently while maintaining quality and finishing features.
+
+## Introduction
+
+This command takes a work document (plan, specification, or todo file) and executes it systematically. The focus is on **shipping complete features** by understanding requirements quickly, following existing patterns, and maintaining quality throughout.
+
+## Input Document
+
+ #$ARGUMENTS
+
+## Execution Workflow
+
+### Phase 1: Quick Start
+
+1. **Read Plan and Clarify**
+
+ - Read the work document completely
+ - Review any references or links provided in the plan
+ - If anything is unclear or ambiguous, ask clarifying questions now
+ - Get user approval to proceed
+ - **Do not skip this** - better to ask questions now than build the wrong thing
+
+2. **Setup Environment**
+
+ First, check the current branch:
+
+ ```bash
+ current_branch=$(git branch --show-current)
+ default_branch=$(git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's@^refs/remotes/origin/@@')
+
+ # Fallback if remote HEAD isn't set
+ if [ -z "$default_branch" ]; then
+ default_branch=$(git rev-parse --verify origin/main >/dev/null 2>&1 && echo "main" || echo "master")
+ fi
+ ```
+
+ **If already on a feature branch** (not the default branch):
+ - Ask: "Continue working on `[current_branch]`, or create a new branch?"
+ - If continuing, proceed to step 3
+ - If creating new, follow Option A or B below
+
+ **If on the default branch**, choose how to proceed:
+
+ **Option A: Create a new branch**
+ ```bash
+ git pull origin [default_branch]
+ git checkout -b feature-branch-name
+ ```
+ Use a meaningful name based on the work (e.g., `feat/user-authentication`, `fix/email-validation`).
+
+ **Option B: Use a worktree (recommended for parallel development)**
+ ```bash
+ skill: git-worktree
+ # The skill will create a new branch from the default branch in an isolated worktree
+ ```
+
+ **Option C: Continue on the default branch**
+ - Requires explicit user confirmation
+ - Only proceed after user explicitly says "yes, commit to [default_branch]"
+ - Never commit directly to the default branch without explicit permission
+
+ **Recommendation**: Use worktree if:
+ - You want to work on multiple features simultaneously
+ - You want to keep the default branch clean while experimenting
+ - You plan to switch between branches frequently
+
+3. **Create Todo List**
+ - Use TodoWrite to break plan into actionable tasks
+ - Include dependencies between tasks
+ - Prioritize based on what needs to be done first
+ - Include testing and quality check tasks
+ - Keep tasks specific and completable
+
+### Phase 2: Execute
+
+1. **Task Execution Loop**
+
+ For each task in priority order:
+
+ ```
+ while (tasks remain):
+ - Mark task as in_progress in TodoWrite
+ - Read any referenced files from the plan
+ - Look for similar patterns in codebase
+ - Implement following existing conventions
+ - Write tests for new functionality
+ - Run System-Wide Test Check (see below)
+ - Run tests after changes
+ - Mark task as completed in TodoWrite
+ - Mark off the corresponding checkbox in the plan file ([ ] → [x])
+ - Evaluate for incremental commit (see below)
+ ```
+
+ **System-Wide Test Check** — Before marking a task done, pause and ask:
+
+ | Question | What to do |
+ |----------|------------|
+ | **What fires when this runs?** Callbacks, middleware, observers, event handlers — trace two levels out from your change. | Read the actual code (not docs) for callbacks on models you touch, middleware in the request chain, `after_*` hooks. |
+ | **Do my tests exercise the real chain?** If every dependency is mocked, the test proves your logic works *in isolation* — it says nothing about the interaction. | Write at least one integration test that uses real objects through the full callback/middleware chain. No mocks for the layers that interact. |
+ | **Can failure leave orphaned state?** If your code persists state (DB row, cache, file) before calling an external service, what happens when the service fails? Does retry create duplicates? | Trace the failure path with real objects. If state is created before the risky call, test that failure cleans up or that retry is idempotent. |
+ | **What other interfaces expose this?** Mixins, DSLs, alternative entry points (Agent vs Chat vs ChatMethods). | Grep for the method/behavior in related classes. If parity is needed, add it now — not as a follow-up. |
+ | **Do error strategies align across layers?** Retry middleware + application fallback + framework error handling — do they conflict or create double execution? | List the specific error classes at each layer. Verify your rescue list matches what the lower layer actually raises. |
+
+ **When to skip:** Leaf-node changes with no callbacks, no state persistence, no parallel interfaces. If the change is purely additive (new helper method, new view partial), the check takes 10 seconds and the answer is "nothing fires, skip."
+
+ **When this matters most:** Any change that touches models with callbacks, error handling with fallback/retry, or functionality exposed through multiple interfaces.
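   The orphaned-state row in the table above is the one that bites most often. Here is a minimal, hedged Python sketch of the pattern to test for: state is persisted before a risky external call, keyed so a retry reuses the same record instead of duplicating it. All names (`place_order`, `charge_api`, the dict-backed "table") are illustrative stand-ins, not a real API.

   ```python
   orders = {}           # stands in for a database table
   charge_attempts = []  # records calls to the "external service"

   class ChargeFailed(Exception):
       pass

   def charge_api(order_id):
       charge_attempts.append(order_id)
       if len(charge_attempts) == 1:
           raise ChargeFailed("gateway timeout")  # first attempt fails

   def place_order(idempotency_key):
       # Persist state BEFORE the risky call, keyed so that a retry
       # reuses the same row instead of creating a duplicate.
       order = orders.setdefault(idempotency_key, {"status": "pending"})
       try:
           charge_api(idempotency_key)
       except ChargeFailed:
           order["status"] = "failed"  # clean up, don't leave it "pending"
           raise
       order["status"] = "charged"

   # First attempt fails; a retry with the same key must not duplicate the order.
   try:
       place_order("key-123")
   except ChargeFailed:
       pass
   place_order("key-123")

   assert len(orders) == 1
   assert orders["key-123"]["status"] == "charged"
   ```

   The test exercises the failure path with real objects: the assertion would fail if `place_order` created a fresh record on retry.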
+
+ **IMPORTANT**: Always update the original plan document by checking off completed items. Use the Edit tool to change `- [ ]` to `- [x]` for each task you finish. This keeps the plan as a living document showing progress and ensures no finished task is left unchecked.
+
+2. **Incremental Commits**
+
+ After completing each task, evaluate whether to create an incremental commit:
+
+ | Commit when... | Don't commit when... |
+ |----------------|---------------------|
+ | Logical unit complete (model, service, component) | Small part of a larger unit |
+ | Tests pass + meaningful progress | Tests failing |
+ | About to switch contexts (backend → frontend) | Purely scaffolding with no behavior |
+ | About to attempt risky/uncertain changes | Would need a "WIP" commit message |
+
+ **Heuristic:** "Can I write a commit message that describes a complete, valuable change? If yes, commit. If the message would be 'WIP' or 'partial X', wait."
+
+ **Commit workflow:**
+ ```bash
+ # 1. Verify tests pass (use project's test command)
+ # Examples: bin/rails test, npm test, pytest, go test, etc.
+
+ # 2. Stage only files related to this logical unit (not `git add .`)
+ git add <files-for-this-unit>
+
+ # 3. Commit with conventional message
+ git commit -m "feat(scope): description of this unit"
+ ```
+
+ **Handling merge conflicts:** If conflicts arise during rebasing or merging, resolve them immediately. Incremental commits make conflict resolution easier since each commit is small and focused.
+
+ **Note:** Incremental commits use clean conventional messages without attribution footers. The final Phase 4 commit/PR includes the full attribution.
+
+3. **Follow Existing Patterns**
+
+ - The plan should reference similar code - read those files first
+ - Match naming conventions exactly
+ - Reuse existing components where possible
+ - Follow project coding standards (see CLAUDE.md)
+ - When in doubt, grep for similar implementations
+
+4. **Test Continuously**
+
+ - Run relevant tests after each significant change
+ - Don't wait until the end to test
+ - Fix failures immediately
+ - Add new tests for new functionality
+ - **Unit tests with mocks prove logic in isolation. Integration tests with real objects prove the layers work together.** If your change touches callbacks, middleware, or error handling — you need both.
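   The distinction in the last bullet can be sketched in a few lines of Python. This is a toy illustration, not a real framework: `Model.after_save` stands in for a callback chain, and `audit_log` for its side effect.

   ```python
   audit_log = []

   class Model:
       def __init__(self):
           self.saved = False

       def save(self):
           self.saved = True
           self.after_save()          # the "callback chain"

       def after_save(self):
           audit_log.append("saved")  # side effect a mock would skip

   # "Unit" style: stub the callback, prove save() sets the flag in isolation.
   m = Model()
   m.after_save = lambda: None
   m.save()
   assert m.saved                     # passes, but says nothing about the chain

   # "Integration" style: real objects, full chain, side effect included.
   m2 = Model()
   m2.save()
   assert m2.saved and audit_log == ["saved"]
   ```

   Both tests pass here, but only the second one would catch a broken callback. That is why changes touching callbacks or middleware need both kinds.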
+
+5. **Figma Design Sync** (if applicable)
+
+ For UI work with Figma designs:
+
+ - Implement components following design specs
+ - Use figma-design-sync agent iteratively to compare
+ - Fix visual differences identified
+ - Repeat until implementation matches design
+
+6. **Track Progress**
+ - Keep TodoWrite updated as you complete tasks
+ - Note any blockers or unexpected discoveries
+ - Create new tasks if scope expands
+ - Keep user informed of major milestones
+
+### Phase 3: Quality Check
+
+1. **Run Core Quality Checks**
+
+ Always run before submitting:
+
+ ```bash
+ # Run full test suite (use project's test command)
+ # Examples: bin/rails test, npm test, pytest, go test, etc.
+
+ # Run linting (per CLAUDE.md)
+ # Use linting-agent before pushing to origin
+ ```
+
+2. **Consider Reviewer Agents** (Optional)
+
+ Use for complex, risky, or large changes. Read agents from `compound-engineering.local.md` frontmatter (`review_agents`). If no settings file, invoke the `setup` skill to create one.
+
+ Run configured agents in parallel with Task tool. Present findings and address critical issues.
+
+3. **Final Validation**
+ - All TodoWrite tasks marked completed
+ - All tests pass
+ - Linting passes
+ - Code follows existing patterns
+ - Figma designs match (if applicable)
+ - No console errors or warnings
+
+4. **Prepare Operational Validation Plan** (REQUIRED)
+ - Add a `## Post-Deploy Monitoring & Validation` section to the PR description for every change.
+ - Include concrete:
+ - Log queries/search terms
+ - Metrics or dashboards to watch
+ - Expected healthy signals
+ - Failure signals and rollback/mitigation trigger
+ - Validation window and owner
+ - If there is truly no production/runtime impact, still include the section with: `No additional operational monitoring required` and a one-line reason.
+
+### Phase 4: Ship It
+
+1. **Create Commit**
+
+ ```bash
+ git add .
+ git status # Review what's being committed
+ git diff --staged # Check the changes
+
+ # Commit with conventional format
+ git commit -m "$(cat <<'EOF'
+ feat(scope): description of what and why
+
+ Brief explanation if needed.
+
+ 🤖 Generated with [Claude Code](https://claude.com/claude-code)
+
+ Co-Authored-By: Claude <noreply@anthropic.com>
+ EOF
+ )"
+ ```
+
+2. **Capture and Upload Screenshots for UI Changes** (REQUIRED for any UI work)
+
+ For **any** design changes, new views, or UI modifications, you MUST capture and upload screenshots:
+
+ **Step 1: Start dev server** (if not running)
+ ```bash
+ bin/dev # Run in background
+ ```
+
+ **Step 2: Capture screenshots with agent-browser CLI**
+ ```bash
+ agent-browser open http://localhost:3000/[route]
+ agent-browser snapshot -i
+ agent-browser screenshot output.png
+ ```
+ See the `agent-browser` skill for detailed usage.
+
+ **Step 3: Upload using imgup skill**
+ ```bash
+ skill: imgup
+ # Then upload each screenshot:
+ imgup -h pixhost screenshot.png # pixhost works without API key
+ # Alternative hosts: catbox, imagebin, beeimg
+ ```
+
+ **What to capture:**
+ - **New screens**: Screenshot of the new UI
+ - **Modified screens**: Before AND after screenshots
+ - **Design implementation**: Screenshot showing Figma design match
+
+ **IMPORTANT**: Always include uploaded image URLs in PR description. This provides visual context for reviewers and documents the change.
+
+3. **Create Pull Request**
+
+ ```bash
+ git push -u origin feature-branch-name
+
+ gh pr create --title "Feature: [Description]" --body "$(cat <<'EOF'
+ ## Summary
+ - What was built
+ - Why it was needed
+ - Key decisions made
+
+ ## Testing
+ - Tests added/modified
+ - Manual testing performed
+
+ ## Post-Deploy Monitoring & Validation
+ - **What to monitor/search**
+ - Logs:
+ - Metrics/Dashboards:
+ - **Validation checks (queries/commands)**
+ - `command or query here`
+ - **Expected healthy behavior**
+ - Expected signal(s)
+ - **Failure signal(s) / rollback trigger**
+ - Trigger + immediate action
+ - **Validation window & owner**
+ - Window:
+ - Owner:
+ - **If no operational impact**
+ - `No additional operational monitoring required: `
+
+ ## Before / After Screenshots
+ | Before | After |
+ |--------|-------|
+ |  |  |
+
+ ## Figma Design
+ [Link if applicable]
+
+ ---
+
+ [Compound Engineered](https://github.com/EveryInc/compound-engineering-plugin) 🤖 Generated with [Claude Code](https://claude.com/claude-code)
+ EOF
+ )"
+ ```
+
+4. **Update Plan Status**
+
+ If the input document has YAML frontmatter with a `status` field, update it to `completed`:
+ ```
+ status: active → status: completed
+ ```
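   One way this update could be done programmatically is with a small script. This is a hedged sketch that assumes the plan file starts with a simple `key: value` frontmatter block delimited by `---` lines; a real YAML parser would be more robust.

   ```python
   import re

   def mark_completed(text):
       # Only touch a `status:` line inside the leading frontmatter block.
       match = re.match(r"\A---\n(.*?)\n---\n", text, re.DOTALL)
       if not match:
           return text  # no frontmatter: leave the document unchanged
       front = match.group(1)
       updated = re.sub(r"^status:\s*active\s*$", "status: completed",
                        front, flags=re.MULTILINE)
       return text[:match.start(1)] + updated + text[match.end(1):]

   doc = "---\ntitle: My Plan\nstatus: active\n---\n# Plan body\n"
   print(mark_completed(doc))
   ```

   The body and any other frontmatter keys pass through untouched; only an exact `status: active` line is rewritten.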
+
+5. **Notify User**
+ - Summarize what was completed
+ - Link to PR
+ - Note any follow-up work needed
+ - Suggest next steps if applicable
+
+---
+
+## Swarm Mode (Optional)
+
+For complex plans with multiple independent workstreams, enable swarm mode for parallel execution with coordinated agents.
+
+### When to Use Swarm Mode
+
+| Use Swarm Mode when... | Use Standard Mode when... |
+|------------------------|---------------------------|
+| Plan has 5+ independent tasks | Plan is linear/sequential |
+| Multiple specialists needed (review + test + implement) | Single-focus work |
+| Want maximum parallelism | Simpler mental model preferred |
+| Large feature with clear phases | Small feature or bug fix |
+
+### Enabling Swarm Mode
+
+To trigger swarm execution, say:
+
+> "Make a Task list and launch an army of agent swarm subagents to build the plan"
+
+Or explicitly request: "Use swarm mode for this work"
+
+### Swarm Workflow
+
+When swarm mode is enabled, the workflow changes:
+
+1. **Create Team**
+ ```
+ Teammate({ operation: "spawnTeam", team_name: "work-{timestamp}" })
+ ```
+
+2. **Create Task List with Dependencies**
+ - Parse plan into TaskCreate items
+ - Set up blockedBy relationships for sequential dependencies
+ - Independent tasks have no blockers (can run in parallel)
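   The `blockedBy` bookkeeping above can be sketched in plain Python. The task names and fields here are assumptions for illustration, not the real TaskCreate schema: a task is runnable once every blocker is done, and all runnable tasks can be claimed in parallel.

   ```python
   tasks = {
       "write-model":   {"blockedBy": [],              "done": False},
       "write-service": {"blockedBy": ["write-model"], "done": False},
       "write-tests":   {"blockedBy": ["write-model"], "done": False},
       "write-docs":    {"blockedBy": [],              "done": False},
   }

   def runnable(tasks):
       """Tasks whose blockers are all complete, safe to claim in parallel."""
       return sorted(
           name for name, t in tasks.items()
           if not t["done"] and all(tasks[b]["done"] for b in t["blockedBy"])
       )

   # Initially only the unblocked tasks can run in parallel.
   assert runnable(tasks) == ["write-docs", "write-model"]

   # Completing the blocker unblocks both downstream tasks at once.
   tasks["write-model"]["done"] = True
   assert runnable(tasks) == ["write-docs", "write-service", "write-tests"]
   ```

   As each phase completes, the runnable set grows, which is the moment to spawn additional workers.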
+
+3. **Spawn Specialized Teammates**
+ ```
+ Task({
+ team_name: "work-{timestamp}",
+ name: "implementer",
+ subagent_type: "general-purpose",
+ prompt: "Claim implementation tasks, execute, mark complete",
+ run_in_background: true
+ })
+
+ Task({
+ team_name: "work-{timestamp}",
+ name: "tester",
+ subagent_type: "general-purpose",
+ prompt: "Claim testing tasks, run tests, mark complete",
+ run_in_background: true
+ })
+ ```
+
+4. **Coordinate and Monitor**
+ - Team lead monitors task completion
+ - Spawn additional workers as phases unblock
+ - Handle plan approval if required
+
+5. **Cleanup**
+ ```
+ Teammate({ operation: "requestShutdown", target_agent_id: "implementer" })
+ Teammate({ operation: "requestShutdown", target_agent_id: "tester" })
+ Teammate({ operation: "cleanup" })
+ ```
+
+See the `orchestrating-swarms` skill for detailed swarm patterns and best practices.
+
+---
+
+## Key Principles
+
+### Start Fast, Execute Faster
+
+- Get clarification once at the start, then execute
+- Don't wait for perfect understanding - ask questions and move
+- The goal is to **finish the feature**, not create perfect process
+
+### The Plan is Your Guide
+
+- Work documents should reference similar code and patterns
+- Load those references and follow them
+- Don't reinvent - match what exists
+
+### Test As You Go
+
+- Run tests after each change, not at the end
+- Fix failures immediately
+- Continuous testing prevents big surprises
+
+### Quality is Built In
+
+- Follow existing patterns
+- Write tests for new code
+- Run linting before pushing
+- Use reviewer agents for complex/risky changes only
+
+### Ship Complete Features
+
+- Mark all tasks completed before moving on
+- Don't leave features 80% done
+- A finished feature that ships beats a perfect feature that doesn't
+
+## Quality Checklist
+
+Before creating PR, verify:
+
+- [ ] All clarifying questions asked and answered
+- [ ] All TodoWrite tasks marked completed
+- [ ] Tests pass (run project's test command)
+- [ ] Linting passes (use linting-agent)
+- [ ] Code follows existing patterns
+- [ ] Figma designs match implementation (if applicable)
+- [ ] Before/after screenshots captured and uploaded (for UI changes)
+- [ ] Commit messages follow conventional format
+- [ ] PR description includes Post-Deploy Monitoring & Validation section (or explicit no-impact rationale)
+- [ ] PR description includes summary, testing notes, and screenshots
+- [ ] PR description includes Compound Engineered badge
+
+## When to Use Reviewer Agents
+
+**Don't use by default.** Use reviewer agents only when:
+
+- Large refactor affecting many files (10+)
+- Security-sensitive changes (authentication, permissions, data access)
+- Performance-critical code paths
+- Complex algorithms or business logic
+- User explicitly requests thorough review
+
+For most features: tests + linting + following patterns is sufficient.
+
+## Common Pitfalls to Avoid
+
+- **Analysis paralysis** - Don't overthink, read the plan and execute
+ - **Skipping clarifying questions** - Ask now, not after building the wrong thing
+- **Ignoring plan references** - The plan has links for a reason
+- **Testing at the end** - Test continuously or suffer later
+- **Forgetting TodoWrite** - Track progress or lose track of what's done
+- **80% done syndrome** - Finish the feature, don't move on early
+- **Over-reviewing simple changes** - Save reviewer agents for complex work
diff --git a/plugins/compound-engineering/commands/deepen-plan.md b/plugins/compound-engineering/commands/deepen-plan.md
index a705476..604972e 100644
--- a/plugins/compound-engineering/commands/deepen-plan.md
+++ b/plugins/compound-engineering/commands/deepen-plan.md
@@ -10,7 +10,7 @@ argument-hint: "[path to plan file]"
**Note: The current year is 2026.** Use this when searching for recent documentation and best practices.
-This command takes an existing plan (from `/workflows:plan`) and enhances each section with parallel research agents. Each major element gets its own dedicated research sub-agent to find:
+This command takes an existing plan (from `/ce:plan`) and enhances each section with parallel research agents. Each major element gets its own dedicated research sub-agent to find:
- Best practices and industry patterns
- Performance optimizations
- UI/UX improvements (if applicable)
@@ -145,13 +145,13 @@ Task general-purpose: "Use the security-patterns skill at ~/.claude/skills/secur
### 3. Discover and Apply Learnings/Solutions
-Check for documented learnings from /workflows:compound. These are solved problems stored as markdown files. Spawn a sub-agent for each learning to check if it's relevant.
+Check for documented learnings from /ce:compound. These are solved problems stored as markdown files. Spawn a sub-agent for each learning to check if it's relevant.
**LEARNINGS LOCATION - Check these exact folders:**
```
-docs/solutions/ <-- PRIMARY: Project-level learnings (created by /workflows:compound)
+docs/solutions/ <-- PRIMARY: Project-level learnings (created by /ce:compound)
├── performance-issues/
│ └── *.md
├── debugging-patterns/
@@ -370,7 +370,7 @@ Wait for ALL parallel agents to complete - skills, research agents, review agent
**Collect outputs from ALL sources:**
1. **Skill-based sub-agents** - Each skill's full output (code examples, patterns, recommendations)
-2. **Learnings/Solutions sub-agents** - Relevant documented learnings from /workflows:compound
+2. **Learnings/Solutions sub-agents** - Relevant documented learnings from /ce:compound
3. **Research agents** - Best practices, documentation, real-world examples
4. **Review agents** - All feedback from every reviewer (architecture, security, performance, simplicity, etc.)
5. **Context7 queries** - Framework documentation and patterns
@@ -481,14 +481,14 @@ After writing the enhanced plan, use the **AskUserQuestion tool** to present the
**Options:**
1. **View diff** - Show what was added/changed
2. **Run `/technical_review`** - Get feedback from reviewers on enhanced plan
-3. **Start `/workflows:work`** - Begin implementing this enhanced plan
+3. **Start `/ce:work`** - Begin implementing this enhanced plan
4. **Deepen further** - Run another round of research on specific sections
5. **Revert** - Restore original plan (if backup exists)
Based on selection:
- **View diff** → Run `git diff [plan_path]` or show before/after
- **`/technical_review`** → Call the /technical_review command with the plan file path
-- **`/workflows:work`** → Call the /workflows:work command with the plan file path
+- **`/ce:work`** → Call the /ce:work command with the plan file path
- **Deepen further** → Ask which sections need more research, then re-run those agents
- **Revert** → Restore from git or backup
diff --git a/plugins/compound-engineering/commands/lfg.md b/plugins/compound-engineering/commands/lfg.md
index 86f40e5..f057403 100644
--- a/plugins/compound-engineering/commands/lfg.md
+++ b/plugins/compound-engineering/commands/lfg.md
@@ -8,10 +8,10 @@ disable-model-invocation: true
Run these slash commands in order. Do not do anything else. Do not stop between steps — complete every step through to the end.
1. **Optional:** If the `ralph-wiggum` skill is available, run `/ralph-wiggum:ralph-loop "finish all slash commands" --completion-promise "DONE"`. If not available or it fails, skip and continue to step 2 immediately.
-2. `/workflows:plan $ARGUMENTS`
+2. `/ce:plan $ARGUMENTS`
3. `/compound-engineering:deepen-plan`
-4. `/workflows:work`
-5. `/workflows:review`
+4. `/ce:work`
+5. `/ce:review`
6. `/compound-engineering:resolve_todo_parallel`
7. `/compound-engineering:test-browser`
8. `/compound-engineering:feature-video`
diff --git a/plugins/compound-engineering/commands/slfg.md b/plugins/compound-engineering/commands/slfg.md
index 050d24e..32d2e76 100644
--- a/plugins/compound-engineering/commands/slfg.md
+++ b/plugins/compound-engineering/commands/slfg.md
@@ -10,15 +10,15 @@ Swarm-enabled LFG. Run these steps in order, parallelizing where indicated. Do n
## Sequential Phase
1. **Optional:** If the `ralph-wiggum` skill is available, run `/ralph-wiggum:ralph-loop "finish all slash commands" --completion-promise "DONE"`. If not available or it fails, skip and continue to step 2 immediately.
-2. `/workflows:plan $ARGUMENTS`
+2. `/ce:plan $ARGUMENTS`
3. `/compound-engineering:deepen-plan`
-4. `/workflows:work` — **Use swarm mode**: Make a Task list and launch an army of agent swarm subagents to build the plan
+4. `/ce:work` — **Use swarm mode**: Make a Task list and launch an army of agent swarm subagents to build the plan
## Parallel Phase
After work completes, launch steps 5 and 6 as **parallel swarm agents** (both only need code to be written):
-5. `/workflows:review` — spawn as background Task agent
+5. `/ce:review` — spawn as background Task agent
6. `/compound-engineering:test-browser` — spawn as background Task agent
Wait for both to complete before continuing.
diff --git a/plugins/compound-engineering/commands/test-xcode.md b/plugins/compound-engineering/commands/test-xcode.md
index 82d5c8b..10cba1b 100644
--- a/plugins/compound-engineering/commands/test-xcode.md
+++ b/plugins/compound-engineering/commands/test-xcode.md
@@ -323,9 +323,9 @@ mcp__xcodebuildmcp__shutdown_simulator({ simulator_id: "[uuid]" })
/xcode-test current
```
-## Integration with /workflows:review
+## Integration with /ce:review
-When reviewing PRs that touch iOS code, the `/workflows:review` command can spawn this as a subagent:
+When reviewing PRs that touch iOS code, the `/ce:review` command can spawn this as a subagent:
```
Task general-purpose("Run /xcode-test for scheme [name]. Build, install on simulator, test key screens, check for crashes.")
diff --git a/plugins/compound-engineering/commands/workflows/brainstorm.md b/plugins/compound-engineering/commands/workflows/brainstorm.md
index 08c44ca..d421810 100644
--- a/plugins/compound-engineering/commands/workflows/brainstorm.md
+++ b/plugins/compound-engineering/commands/workflows/brainstorm.md
@@ -1,145 +1,10 @@
---
name: workflows:brainstorm
-description: Explore requirements and approaches through collaborative dialogue before planning implementation
+description: "[DEPRECATED] Use /ce:brainstorm instead — renamed for clarity."
argument-hint: "[feature idea or problem to explore]"
+disable-model-invocation: true
---
-# Brainstorm a Feature or Improvement
+NOTE: /workflows:brainstorm is deprecated. Please use /ce:brainstorm instead. This alias will be removed in a future version.
-**Note: The current year is 2026.** Use this when dating brainstorm documents.
-
-Brainstorming helps answer **WHAT** to build through collaborative dialogue. It precedes `/workflows:plan`, which answers **HOW** to build it.
-
-**Process knowledge:** Load the `brainstorming` skill for detailed question techniques, approach exploration patterns, and YAGNI principles.
-
-## Feature Description
-
- #$ARGUMENTS
-
-**If the feature description above is empty, ask the user:** "What would you like to explore? Please describe the feature, problem, or improvement you're thinking about."
-
-Do not proceed until you have a feature description from the user.
-
-## Execution Flow
-
-### Phase 0: Assess Requirements Clarity
-
-Evaluate whether brainstorming is needed based on the feature description.
-
-**Clear requirements indicators:**
-- Specific acceptance criteria provided
-- Referenced existing patterns to follow
-- Described exact expected behavior
-- Constrained, well-defined scope
-
-**If requirements are already clear:**
-Use **AskUserQuestion tool** to suggest: "Your requirements seem detailed enough to proceed directly to planning. Should I run `/workflows:plan` instead, or would you like to explore the idea further?"
-
-### Phase 1: Understand the Idea
-
-#### 1.1 Repository Research (Lightweight)
-
-Run a quick repo scan to understand existing patterns:
-
-- Task repo-research-analyst("Understand existing patterns related to: ")
-
-Focus on: similar features, established patterns, CLAUDE.md guidance.
-
-#### 1.2 Collaborative Dialogue
-
-Use the **AskUserQuestion tool** to ask questions **one at a time**.
-
-**Guidelines (see `brainstorming` skill for detailed techniques):**
-- Prefer multiple choice when natural options exist
-- Start broad (purpose, users) then narrow (constraints, edge cases)
-- Validate assumptions explicitly
-- Ask about success criteria
-
-**Exit condition:** Continue until the idea is clear OR user says "proceed"
-
-### Phase 2: Explore Approaches
-
-Propose **2-3 concrete approaches** based on research and conversation.
-
-For each approach, provide:
-- Brief description (2-3 sentences)
-- Pros and cons
-- When it's best suited
-
-Lead with your recommendation and explain why. Apply YAGNI—prefer simpler solutions.
-
-Use **AskUserQuestion tool** to ask which approach the user prefers.
-
-### Phase 3: Capture the Design
-
-Write a brainstorm document to `docs/brainstorms/YYYY-MM-DD--brainstorm.md`.
-
-**Document structure:** See the `brainstorming` skill for the template format. Key sections: What We're Building, Why This Approach, Key Decisions, Open Questions.
-
-Ensure `docs/brainstorms/` directory exists before writing.
-
-**IMPORTANT:** Before proceeding to Phase 4, check if there are any Open Questions listed in the brainstorm document. If there are open questions, YOU MUST ask the user about each one using AskUserQuestion before offering to proceed to planning. Move resolved questions to a "Resolved Questions" section.
-
-### Phase 4: Handoff
-
-Use **AskUserQuestion tool** to present next steps:
-
-**Question:** "Brainstorm captured. What would you like to do next?"
-
-**Options:**
-1. **Review and refine** - Improve the document through structured self-review
-2. **Proceed to planning** - Run `/workflows:plan` (will auto-detect this brainstorm)
-3. **Share to Proof** - Upload to Proof for collaborative review and sharing
-4. **Ask more questions** - I have more questions to clarify before moving on
-5. **Done for now** - Return later
-
-**If user selects "Share to Proof":**
-
-```bash
-CONTENT=$(cat docs/brainstorms/YYYY-MM-DD--brainstorm.md)
-TITLE="Brainstorm: "
-RESPONSE=$(curl -s -X POST https://www.proofeditor.ai/share/markdown \
- -H "Content-Type: application/json" \
- -d "$(jq -n --arg title "$TITLE" --arg markdown "$CONTENT" --arg by "ai:compound" '{title: $title, markdown: $markdown, by: $by}')")
-PROOF_URL=$(echo "$RESPONSE" | jq -r '.tokenUrl')
-```
-
-Display the URL prominently: `View & collaborate in Proof: `
-
-If the curl fails, skip silently. Then return to the Phase 4 options.
-
-**If user selects "Ask more questions":** YOU (Claude) return to Phase 1.2 (Collaborative Dialogue) and continue asking the USER questions one at a time to further refine the design. The user wants YOU to probe deeper - ask about edge cases, constraints, preferences, or areas not yet explored. Continue until the user is satisfied, then return to Phase 4.
-
-**If user selects "Review and refine":**
-
-Load the `document-review` skill and apply it to the brainstorm document.
-
-When document-review returns "Review complete", present next steps:
-
-1. **Move to planning** - Continue to `/workflows:plan` with this document
-2. **Done for now** - Brainstorming complete. To start planning later: `/workflows:plan [document-path]`
-
-## Output Summary
-
-When complete, display:
-
-```
-Brainstorm complete!
-
-Document: docs/brainstorms/YYYY-MM-DD--brainstorm.md
-
-Key decisions:
-- [Decision 1]
-- [Decision 2]
-
-Next: Run `/workflows:plan` when ready to implement.
-```
-
-## Important Guidelines
-
-- **Stay focused on WHAT, not HOW** - Implementation details belong in the plan
-- **Ask one question at a time** - Don't overwhelm
-- **Apply YAGNI** - Prefer simpler approaches
-- **Keep outputs concise** - 200-300 words per section max
-
-NEVER CODE! Just explore and document decisions.
+/ce:brainstorm $ARGUMENTS
diff --git a/plugins/compound-engineering/commands/workflows/compound.md b/plugins/compound-engineering/commands/workflows/compound.md
index 9dffc1a..aedbc9f 100644
--- a/plugins/compound-engineering/commands/workflows/compound.md
+++ b/plugins/compound-engineering/commands/workflows/compound.md
@@ -1,240 +1,10 @@
---
name: workflows:compound
-description: Document a recently solved problem to compound your team's knowledge
+description: "[DEPRECATED] Use /ce:compound instead — renamed for clarity."
argument-hint: "[optional: brief context about the fix]"
+disable-model-invocation: true
---
-# /compound
+NOTE: /workflows:compound is deprecated. Please use /ce:compound instead. This alias will be removed in a future version.
-Coordinate multiple subagents working in parallel to document a recently solved problem.
-
-## Purpose
-
-Captures problem solutions while context is fresh, creating structured documentation in `docs/solutions/` with YAML frontmatter for searchability and future reference. Uses parallel subagents for maximum efficiency.
-
-**Why "compound"?** Each documented solution compounds your team's knowledge. The first time you solve a problem takes research. Document it, and the next occurrence takes minutes. Knowledge compounds.
-
-## Usage
-
-```bash
-/workflows:compound # Document the most recent fix
-/workflows:compound [brief context] # Provide additional context hint
-```
-
-## Execution Strategy: Two-Phase Orchestration
-
-
-**Only ONE file gets written - the final documentation.**
-
-Phase 1 subagents return TEXT DATA to the orchestrator. They must NOT use Write, Edit, or create any files. Only the orchestrator (Phase 2) writes the final documentation file.
-
-
-### Phase 1: Parallel Research
-
-
-
-Launch these subagents IN PARALLEL. Each returns text data to the orchestrator.
-
-#### 1. **Context Analyzer**
- - Extracts conversation history
- - Identifies problem type, component, symptoms
- - Validates against schema
- - Returns: YAML frontmatter skeleton
-
-#### 2. **Solution Extractor**
- - Analyzes all investigation steps
- - Identifies root cause
- - Extracts working solution with code examples
- - Returns: Solution content block
-
-#### 3. **Related Docs Finder**
- - Searches `docs/solutions/` for related documentation
- - Identifies cross-references and links
- - Finds related GitHub issues
- - Returns: Links and relationships
-
-#### 4. **Prevention Strategist**
- - Develops prevention strategies
- - Creates best practices guidance
- - Generates test cases if applicable
- - Returns: Prevention/testing content
-
-#### 5. **Category Classifier**
- - Determines optimal `docs/solutions/` category
- - Validates category against schema
- - Suggests filename based on slug
- - Returns: Final path and filename
-
-
-
-### Phase 2: Assembly & Write
-
-
-
-**WAIT for all Phase 1 subagents to complete before proceeding.**
-
-The orchestrating agent (main conversation) performs these steps:
-
-1. Collect all text results from Phase 1 subagents
-2. Assemble complete markdown file from the collected pieces
-3. Validate YAML frontmatter against schema
-4. Create directory if needed: `mkdir -p docs/solutions/[category]/`
-5. Write the SINGLE final file: `docs/solutions/[category]/[filename].md`
-
-
-
-### Phase 3: Optional Enhancement
-
-**WAIT for Phase 2 to complete before proceeding.**
-
-
-
-Based on problem type, optionally invoke specialized agents to review the documentation:
-
-- **performance_issue** → `performance-oracle`
-- **security_issue** → `security-sentinel`
-- **database_issue** → `data-integrity-guardian`
-- **test_failure** → `cora-test-reviewer`
-- Any code-heavy issue → `kieran-rails-reviewer` + `code-simplicity-reviewer`
-
-
-
-## What It Captures
-
-- **Problem symptom**: Exact error messages, observable behavior
-- **Investigation steps tried**: What didn't work and why
-- **Root cause analysis**: Technical explanation
-- **Working solution**: Step-by-step fix with code examples
-- **Prevention strategies**: How to avoid in future
-- **Cross-references**: Links to related issues and docs
-
-## Preconditions
-
-
-
- Problem has been solved (not in-progress)
-
-
- Solution has been verified working
-
-
- Non-trivial problem (not simple typo or obvious error)
-
-
-
-## What It Creates
-
-**Organized documentation:**
-
-- File: `docs/solutions/[category]/[filename].md`
-
-**Categories auto-detected from problem:**
-
-- build-errors/
-- test-failures/
-- runtime-errors/
-- performance-issues/
-- database-issues/
-- security-issues/
-- ui-bugs/
-- integration-issues/
-- logic-errors/
-
-## Common Mistakes to Avoid
-
-| ❌ Wrong | ✅ Correct |
-|----------|-----------|
-| Subagents write files like `context-analysis.md`, `solution-draft.md` | Subagents return text data; orchestrator writes one final file |
-| Research and assembly run in parallel | Research completes → then assembly runs |
-| Multiple files created during workflow | Single file: `docs/solutions/[category]/[filename].md` |
-
-## Success Output
-
-```
-✓ Documentation complete
-
-Subagent Results:
- ✓ Context Analyzer: Identified performance_issue in brief_system
- ✓ Solution Extractor: 3 code fixes
- ✓ Related Docs Finder: 2 related issues
- ✓ Prevention Strategist: Prevention strategies, test suggestions
- ✓ Category Classifier: `performance-issues`
-
-Specialized Agent Reviews (Auto-Triggered):
- ✓ performance-oracle: Validated query optimization approach
- ✓ kieran-rails-reviewer: Code examples meet Rails standards
- ✓ code-simplicity-reviewer: Solution is appropriately minimal
- ✓ every-style-editor: Documentation style verified
-
-File created:
-- docs/solutions/performance-issues/n-plus-one-brief-generation.md
-
-This documentation will be searchable for future reference when similar
-issues occur in the Email Processing or Brief System modules.
-
-What's next?
-1. Continue workflow (recommended)
-2. Link related documentation
-3. Update other references
-4. View documentation
-5. Other
-```
-
-## The Compounding Philosophy
-
-This creates a compounding knowledge system:
-
-1. First time you solve "N+1 query in brief generation" → Research (30 min)
-2. Document the solution → docs/solutions/performance-issues/n-plus-one-briefs.md (5 min)
-3. Next time similar issue occurs → Quick lookup (2 min)
-4. Knowledge compounds → Team gets smarter
-
-The feedback loop:
-
-```
-Build → Test → Find Issue → Research → Improve → Document → Validate → Deploy
- ↑ ↓
- └──────────────────────────────────────────────────────────────────────┘
-```
-
-**Each unit of engineering work should make subsequent units of work easier—not harder.**
-
-## Auto-Invoke
-
- - "that worked" - "it's fixed" - "working now" - "problem solved"
-
- Use /workflows:compound [context] to document immediately without waiting for auto-detection.
-
-## Routes To
-
-`compound-docs` skill
-
-## Applicable Specialized Agents
-
-Based on problem type, these agents can enhance documentation:
-
-### Code Quality & Review
-- **kieran-rails-reviewer**: Reviews code examples for Rails best practices
-- **code-simplicity-reviewer**: Ensures solution code is minimal and clear
-- **pattern-recognition-specialist**: Identifies anti-patterns or repeating issues
-
-### Specific Domain Experts
-- **performance-oracle**: Analyzes performance_issue category solutions
-- **security-sentinel**: Reviews security_issue solutions for vulnerabilities
-- **cora-test-reviewer**: Creates test cases for prevention strategies
-- **data-integrity-guardian**: Reviews database_issue migrations and queries
-
-### Enhancement & Documentation
-- **best-practices-researcher**: Enriches solution with industry best practices
-- **every-style-editor**: Reviews documentation style and clarity
-- **framework-docs-researcher**: Links to Rails/gem documentation references
-
-### When to Invoke
-- **Auto-triggered** (optional): Agents can run post-documentation for enhancement
-- **Manual trigger**: User can invoke agents after /workflows:compound completes for deeper review
-- **Customize agents**: Edit `compound-engineering.local.md` or invoke the `setup` skill to configure which review agents are used across all workflows
-
-## Related Commands
-
-- `/research [topic]` - Deep investigation (searches docs/solutions/ for patterns)
-- `/workflows:plan` - Planning workflow (references documented solutions)
+/ce:compound $ARGUMENTS
diff --git a/plugins/compound-engineering/commands/workflows/plan.md b/plugins/compound-engineering/commands/workflows/plan.md
index fd18ff5..d2407ea 100644
--- a/plugins/compound-engineering/commands/workflows/plan.md
+++ b/plugins/compound-engineering/commands/workflows/plan.md
@@ -1,636 +1,10 @@
---
name: workflows:plan
-description: Transform feature descriptions into well-structured project plans following conventions
+description: "[DEPRECATED] Use /ce:plan instead — renamed for clarity."
argument-hint: "[feature description, bug report, or improvement idea]"
+disable-model-invocation: true
---
-# Create a plan for a new feature or bug fix
+NOTE: /workflows:plan is deprecated. Please use /ce:plan instead. This alias will be removed in a future version.
-## Introduction
-
-**Note: The current year is 2026.** Use this when dating plans and searching for recent documentation.
-
-Transform feature descriptions, bug reports, or improvement ideas into well-structured markdown issue files that follow project conventions and best practices. This command provides flexible detail levels to match your needs.
-
-## Feature Description
-
- #$ARGUMENTS
-
-**If the feature description above is empty, ask the user:** "What would you like to plan? Please describe the feature, bug fix, or improvement you have in mind."
-
-Do not proceed until you have a clear feature description from the user.
-
-### 0. Idea Refinement
-
-**Check for brainstorm output first:**
-
-Before asking questions, look for recent brainstorm documents in `docs/brainstorms/` that match this feature:
-
-```bash
-ls -la docs/brainstorms/*.md 2>/dev/null | head -10
-```
-
-**Relevance criteria:** A brainstorm is relevant if:
-- The topic (from filename or YAML frontmatter) semantically matches the feature description
-- Created within the last 14 days
-- If multiple candidates match, use the most recent one
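The recency rule above can be sketched in plain Ruby (`recent_brainstorm` is a hypothetical helper shown only to pin down the rule; it is not part of the plugin, and topic relevance still needs semantic judgment, so that part is left to the caller):

```ruby
# Return the most recent brainstorm file modified within the last 14 days,
# or nil if none qualifies.
def recent_brainstorm(dir = "docs/brainstorms", window_days: 14)
  cutoff = Time.now - (window_days * 24 * 60 * 60)
  Dir.glob(File.join(dir, "*.md"))
     .select { |f| File.mtime(f) >= cutoff }
     .max_by { |f| File.mtime(f) }
end
```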
-
-**If a relevant brainstorm exists:**
-1. Read the brainstorm document **thoroughly** — every section matters
-2. Announce: "Found brainstorm from [date]: [topic]. Using as foundation for planning."
-3. Extract and carry forward **ALL** of the following into the plan:
- - Key decisions and their rationale
- - Chosen approach and why alternatives were rejected
- - Constraints and requirements discovered during brainstorming
- - Open questions (flag these for resolution during planning)
- - Success criteria and scope boundaries
- - Any specific technical choices or patterns discussed
-4. **Skip the idea refinement questions below** — the brainstorm already answered WHAT to build
-5. Use brainstorm content as the **primary input** to research and planning phases
-6. **Critical: The brainstorm is the origin document.** Throughout the plan, reference specific decisions with `(see brainstorm: docs/brainstorms/)` when carrying forward conclusions. Do not paraphrase decisions in a way that loses their original context — link back to the source.
-7. **Do not omit brainstorm content** — if the brainstorm discussed it, the plan must address it (even if briefly). Scan each brainstorm section before finalizing the plan to verify nothing was dropped.
-
-**If multiple brainstorms could match:**
-Use **AskUserQuestion tool** to ask which brainstorm to use, or whether to proceed without one.
-
-**If no brainstorm found (or not relevant), run idea refinement:**
-
-Refine the idea through collaborative dialogue using the **AskUserQuestion tool**:
-
-- Ask questions one at a time to understand the idea fully
-- Prefer multiple choice questions when natural options exist
-- Focus on understanding: purpose, constraints and success criteria
-- Continue until the idea is clear OR user says "proceed"
-
-**Gather signals for research decision.** During refinement, note:
-
-- **User's familiarity**: Do they know the codebase patterns? Are they pointing to examples?
-- **User's intent**: Speed vs thoroughness? Exploration vs execution?
-- **Topic risk**: Security, payments, external APIs warrant more caution
-- **Uncertainty level**: Is the approach clear or open-ended?
-
-**Skip option:** If the feature description is already detailed, offer:
-"Your description is clear. Should I proceed with research, or would you like to refine it further?"
-
-## Main Tasks
-
-### 1. Local Research (Always Runs - Parallel)
-
-
-First, I need to understand the project's conventions, existing patterns, and any documented learnings. This is fast and local - it informs whether external research is needed.
-
-
-Run these agents **in parallel** to gather local context:
-
-- Task repo-research-analyst(feature_description)
-- Task learnings-researcher(feature_description)
-
-**What to look for:**
-- **Repo research:** existing patterns, CLAUDE.md guidance, technology familiarity, pattern consistency
-- **Learnings:** documented solutions in `docs/solutions/` that might apply (gotchas, patterns, lessons learned)
-
-These findings inform the next step.
-
-### 1.5. Research Decision
-
-Based on signals from Step 0 and findings from Step 1, decide on external research.
-
-**High-risk topics → always research.** Security, payments, external APIs, data privacy. The cost of missing something is too high. This takes precedence over speed signals.
-
-**Strong local context → skip external research.** Codebase has good patterns, CLAUDE.md has guidance, user knows what they want. External research adds little value.
-
-**Uncertainty or unfamiliar territory → research.** User is exploring, codebase has no examples, new technology. External perspective is valuable.
-
-**Announce the decision and proceed.** Brief explanation, then continue. User can redirect if needed.
-
-Examples:
-- "Your codebase has solid patterns for this. Proceeding without external research."
-- "This involves payment processing, so I'll research current best practices first."
-
-### 1.5b. External Research (Conditional)
-
-**Only run if Step 1.5 indicates external research is valuable.**
-
-Run these agents in parallel:
-
-- Task best-practices-researcher(feature_description)
-- Task framework-docs-researcher(feature_description)
-
-### 1.6. Consolidate Research
-
-After all research steps complete, consolidate findings:
-
-- Document relevant file paths from repo research (e.g., `app/services/example_service.rb:42`)
-- **Include relevant institutional learnings** from `docs/solutions/` (key insights, gotchas to avoid)
-- Note external documentation URLs and best practices (if external research was done)
-- List related issues or PRs discovered
-- Capture CLAUDE.md conventions
-
-**Optional validation:** Briefly summarize findings and ask if anything looks off or missing before proceeding to planning.
-
-### 2. Issue Planning & Structure
-
-
-Think like a product manager - what would make this issue clear and actionable? Consider multiple perspectives.
-
-
-**Title & Categorization:**
-
-- [ ] Draft clear, searchable issue title using conventional format (e.g., `feat: Add user authentication`, `fix: Cart total calculation`)
-- [ ] Determine issue type: enhancement, bug, refactor
-- [ ] Convert title to filename: add today's date prefix, strip prefix colon, kebab-case, add `-plan` suffix
- - Example: `feat: Add User Authentication` → `2026-01-21-feat-add-user-authentication-plan.md`
- - Keep it descriptive (3-5 words after prefix) so plans are findable by context
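The conversion above can be sketched in plain Ruby (`plan_filename` is a hypothetical helper, shown only to make the rule unambiguous):

```ruby
require "date"

# "feat: Add User Authentication" -> "2026-01-21-feat-add-user-authentication-plan.md"
def plan_filename(title, date: Date.today)
  type, rest = title.split(":", 2).map(&:strip)
  slug = rest.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
  "#{date.iso8601}-#{type}-#{slug}-plan.md"
end
```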
-
-**Stakeholder Analysis:**
-
-- [ ] Identify who will be affected by this issue (end users, developers, operations)
-- [ ] Consider implementation complexity and required expertise
-
-**Content Planning:**
-
-- [ ] Choose appropriate detail level based on issue complexity and audience
-- [ ] List all necessary sections for the chosen template
-- [ ] Gather supporting materials (error logs, screenshots, design mockups)
-- [ ] Prepare code examples or reproduction steps if applicable; name the mock filenames in the lists
-
-### 3. SpecFlow Analysis
-
-After planning the issue structure, run SpecFlow Analyzer to validate and refine the feature specification:
-
-- Task compound-engineering:workflow:spec-flow-analyzer(feature_description, research_findings)
-
-**SpecFlow Analyzer Output:**
-
-- [ ] Review SpecFlow analysis results
-- [ ] Incorporate any identified gaps or edge cases into the issue
-- [ ] Update acceptance criteria based on SpecFlow findings
-
-### 4. Choose Implementation Detail Level
-
-Select how comprehensive you want the issue to be; simpler is usually better.
-
-#### 📄 MINIMAL (Quick Issue)
-
-**Best for:** Simple bugs, small improvements, clear features
-
-**Includes:**
-
-- Problem statement or feature description
-- Basic acceptance criteria
-- Essential context only
-
-**Structure:**
-
-````markdown
----
-title: [Issue Title]
-type: [feat|fix|refactor]
-status: active
-date: YYYY-MM-DD
-origin: docs/brainstorms/YYYY-MM-DD--brainstorm.md # if originated from brainstorm, otherwise omit
----
-
-# [Issue Title]
-
-[Brief problem/feature description]
-
-## Acceptance Criteria
-
-- [ ] Core requirement 1
-- [ ] Core requirement 2
-
-## Context
-
-[Any critical information]
-
-## MVP
-
-### test.rb
-
-```ruby
-class Test
- def initialize
- @name = "test"
- end
-end
-```
-
-## Sources
-
-- **Origin brainstorm:** [docs/brainstorms/YYYY-MM-DD--brainstorm.md](path) — include if plan originated from a brainstorm
-- Related issue: #[issue_number]
-- Documentation: [relevant_docs_url]
-````
-
-#### 📋 MORE (Standard Issue)
-
-**Best for:** Most features, complex bugs, team collaboration
-
-**Includes everything from MINIMAL plus:**
-
-- Detailed background and motivation
-- Technical considerations
-- Success metrics
-- Dependencies and risks
-- Basic implementation suggestions
-
-**Structure:**
-
-```markdown
----
-title: [Issue Title]
-type: [feat|fix|refactor]
-status: active
-date: YYYY-MM-DD
-origin: docs/brainstorms/YYYY-MM-DD--brainstorm.md # if originated from brainstorm, otherwise omit
----
-
-# [Issue Title]
-
-## Overview
-
-[Comprehensive description]
-
-## Problem Statement / Motivation
-
-[Why this matters]
-
-## Proposed Solution
-
-[High-level approach]
-
-## Technical Considerations
-
-- Architecture impacts
-- Performance implications
-- Security considerations
-
-## System-Wide Impact
-
-- **Interaction graph**: [What callbacks/middleware/observers fire when this runs?]
-- **Error propagation**: [How do errors flow across layers? Do retry strategies align?]
-- **State lifecycle risks**: [Can partial failure leave orphaned/inconsistent state?]
-- **API surface parity**: [What other interfaces expose similar functionality and need the same change?]
-- **Integration test scenarios**: [Cross-layer scenarios that unit tests won't catch]
-
-## Acceptance Criteria
-
-- [ ] Detailed requirement 1
-- [ ] Detailed requirement 2
-- [ ] Testing requirements
-
-## Success Metrics
-
-[How we measure success]
-
-## Dependencies & Risks
-
-[What could block or complicate this]
-
-## Sources & References
-
-- **Origin brainstorm:** [docs/brainstorms/YYYY-MM-DD--brainstorm.md](path) — include if plan originated from a brainstorm
-- Similar implementations: [file_path:line_number]
-- Best practices: [documentation_url]
-- Related PRs: #[pr_number]
-```
-
-#### 📚 A LOT (Comprehensive Issue)
-
-**Best for:** Major features, architectural changes, complex integrations
-
-**Includes everything from MORE plus:**
-
-- Detailed implementation plan with phases
-- Alternative approaches considered
-- Extensive technical specifications
-- Resource requirements and timeline
-- Future considerations and extensibility
-- Risk mitigation strategies
-- Documentation requirements
-
-**Structure:**
-
-```markdown
----
-title: [Issue Title]
-type: [feat|fix|refactor]
-status: active
-date: YYYY-MM-DD
-origin: docs/brainstorms/YYYY-MM-DD--brainstorm.md # if originated from brainstorm, otherwise omit
----
-
-# [Issue Title]
-
-## Overview
-
-[Executive summary]
-
-## Problem Statement
-
-[Detailed problem analysis]
-
-## Proposed Solution
-
-[Comprehensive solution design]
-
-## Technical Approach
-
-### Architecture
-
-[Detailed technical design]
-
-### Implementation Phases
-
-#### Phase 1: [Foundation]
-
-- Tasks and deliverables
-- Success criteria
-- Estimated effort
-
-#### Phase 2: [Core Implementation]
-
-- Tasks and deliverables
-- Success criteria
-- Estimated effort
-
-#### Phase 3: [Polish & Optimization]
-
-- Tasks and deliverables
-- Success criteria
-- Estimated effort
-
-## Alternative Approaches Considered
-
-[Other solutions evaluated and why rejected]
-
-## System-Wide Impact
-
-### Interaction Graph
-
-[Map the chain reaction: what callbacks, middleware, observers, and event handlers fire when this code runs? Trace at least two levels deep. Document: "Action X triggers Y, which calls Z, which persists W."]
-
-### Error & Failure Propagation
-
-[Trace errors from lowest layer up. List specific error classes and where they're handled. Identify retry conflicts, unhandled error types, and silent failure swallowing.]
-
-### State Lifecycle Risks
-
-[Walk through each step that persists state. Can partial failure orphan rows, duplicate records, or leave caches stale? Document cleanup mechanisms or their absence.]
-
-### API Surface Parity
-
-[List all interfaces (classes, DSLs, endpoints) that expose equivalent functionality. Note which need updating and which share the code path.]
-
-### Integration Test Scenarios
-
-[3-5 cross-layer test scenarios that unit tests with mocks would never catch. Include expected behavior for each.]
-
-## Acceptance Criteria
-
-### Functional Requirements
-
-- [ ] Detailed functional criteria
-
-### Non-Functional Requirements
-
-- [ ] Performance targets
-- [ ] Security requirements
-- [ ] Accessibility standards
-
-### Quality Gates
-
-- [ ] Test coverage requirements
-- [ ] Documentation completeness
-- [ ] Code review approval
-
-## Success Metrics
-
-[Detailed KPIs and measurement methods]
-
-## Dependencies & Prerequisites
-
-[Detailed dependency analysis]
-
-## Risk Analysis & Mitigation
-
-[Comprehensive risk assessment]
-
-## Resource Requirements
-
-[Team, time, infrastructure needs]
-
-## Future Considerations
-
-[Extensibility and long-term vision]
-
-## Documentation Plan
-
-[What docs need updating]
-
-## Sources & References
-
-### Origin
-
-- **Brainstorm document:** [docs/brainstorms/YYYY-MM-DD--brainstorm.md](path) — include if plan originated from a brainstorm. Key decisions carried forward: [list 2-3 major decisions from brainstorm]
-
-### Internal References
-
-- Architecture decisions: [file_path:line_number]
-- Similar features: [file_path:line_number]
-- Configuration: [file_path:line_number]
-
-### External References
-
-- Framework documentation: [url]
-- Best practices guide: [url]
-- Industry standards: [url]
-
-### Related Work
-
-- Previous PRs: #[pr_numbers]
-- Related issues: #[issue_numbers]
-- Design documents: [links]
-```
-
-### 5. Issue Creation & Formatting
-
-
-Apply best practices for clarity and actionability, making the issue easy to scan and understand
-
-
-**Content Formatting:**
-
-- [ ] Use clear, descriptive headings with proper hierarchy (##, ###)
-- [ ] Include code examples in triple backticks with language syntax highlighting
-- [ ] Add screenshots/mockups if UI-related (drag & drop or use image hosting)
-- [ ] Use task lists (- [ ]) for trackable items that can be checked off
-- [ ] Add collapsible sections for lengthy logs or optional details using `<details>` tags
-- [ ] Apply appropriate emoji for visual scanning (🐛 bug, ✨ feature, 📚 docs, ♻️ refactor)
-
-**Cross-Referencing:**
-
-- [ ] Link to related issues/PRs using #number format
-- [ ] Reference specific commits with SHA hashes when relevant
-- [ ] Link to code using GitHub's permalink feature (press 'y' for permanent link)
-- [ ] Mention relevant team members with @username if needed
-- [ ] Add links to external resources with descriptive text
-
-**Code & Examples:**
-
-````markdown
-# Good example with syntax highlighting and line references
-
-
-```ruby
-# app/services/user_service.rb:42
-def process_user(user)
-  # Implementation here
-end
-```
-
-# Collapsible error logs
-
-<details>
-<summary>Full error stacktrace</summary>
-
-`Error details here...`
-
-</details>
-
-````
-
-**AI-Era Considerations:**
-
-- [ ] Account for accelerated development with AI pair programming
-- [ ] Include prompts or instructions that worked well during research
-- [ ] Note which AI tools were used for initial exploration (Claude, Copilot, etc.)
-- [ ] Emphasize comprehensive testing given rapid implementation
-- [ ] Document any AI-generated code that needs human review
-
-### 6. Final Review & Submission
-
-**Brainstorm cross-check (if plan originated from a brainstorm):**
-
-Before finalizing, re-read the brainstorm document and verify:
-- [ ] Every key decision from the brainstorm is reflected in the plan
-- [ ] The chosen approach matches what was decided in the brainstorm
-- [ ] Constraints and requirements from the brainstorm are captured in acceptance criteria
-- [ ] Open questions from the brainstorm are either resolved or flagged
-- [ ] The `origin:` frontmatter field points to the brainstorm file
-- [ ] The Sources section includes the brainstorm with a summary of carried-forward decisions
-
-**Pre-submission Checklist:**
-
-- [ ] Title is searchable and descriptive
-- [ ] Labels accurately categorize the issue
-- [ ] All template sections are complete
-- [ ] Links and references are working
-- [ ] Acceptance criteria are measurable
-- [ ] Add names of files in pseudo code examples and todo lists
-- [ ] Add an ERD mermaid diagram if applicable for new model changes
-
-## Write Plan File
-
-**REQUIRED: Write the plan file to disk before presenting any options.**
-
-```bash
-mkdir -p docs/plans/
-```
-
-Use the Write tool to save the complete plan to `docs/plans/YYYY-MM-DD-<feature-name>-plan.md`. This step is mandatory and cannot be skipped — even when running as part of LFG/SLFG or other automated pipelines.
-
-Confirm: "Plan written to docs/plans/[filename]"
-
-**Pipeline mode:** If invoked from an automated workflow (LFG, SLFG, or any `disable-model-invocation` context), skip all AskUserQuestion calls. Make decisions automatically and proceed to writing the plan without interactive prompts.
-
-## Output Format
-
-**Filename:** Use the date and kebab-case filename from Step 2 Title & Categorization.
-
-```
-docs/plans/YYYY-MM-DD-<feature-name>-plan.md
-```
-
-Examples:
-- ✅ `docs/plans/2026-01-15-feat-user-authentication-flow-plan.md`
-- ✅ `docs/plans/2026-02-03-fix-checkout-race-condition-plan.md`
-- ✅ `docs/plans/2026-03-10-refactor-api-client-extraction-plan.md`
-- ❌ `docs/plans/2026-01-15-feat-thing-plan.md` (not descriptive - what "thing"?)
-- ❌ `docs/plans/2026-01-15-feat-new-feature-plan.md` (too vague - what feature?)
-- ❌ `docs/plans/2026-01-15-feat: user auth-plan.md` (invalid characters - colon and space)
-- ❌ `docs/plans/feat-user-auth-plan.md` (missing date prefix)
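A hypothetical validator for the format rules above (it catches a missing date prefix or invalid characters, though not vagueness, which needs human judgment):

```ruby
# YYYY-MM-DD-(feat|fix|refactor)-<kebab-slug>-plan.md
PLAN_NAME = /\A\d{4}-\d{2}-\d{2}-(feat|fix|refactor)-[a-z0-9]+(-[a-z0-9]+)*-plan\.md\z/

def valid_plan_name?(filename)
  !!(filename =~ PLAN_NAME)
end
```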
-
-## Post-Generation Options
-
-After writing the plan file, use the **AskUserQuestion tool** to present these options:
-
-**Question:** "Plan ready at `docs/plans/YYYY-MM-DD-<feature-name>-plan.md`. What would you like to do next?"
-
-**Options:**
-1. **Open plan in editor** - Open the plan file for review
-2. **Run `/deepen-plan`** - Enhance each section with parallel research agents (best practices, performance, UI)
-3. **Run `/technical_review`** - Technical feedback from code-focused reviewers (DHH, Kieran, Simplicity)
-4. **Review and refine** - Improve the document through structured self-review
-5. **Share to Proof** - Upload to Proof for collaborative review and sharing
-6. **Start `/workflows:work`** - Begin implementing this plan locally
-7. **Start `/workflows:work` on remote** - Begin implementing in Claude Code on the web (use `&` to run in background)
-8. **Create Issue** - Create issue in project tracker (GitHub/Linear)
-
-Based on selection:
-- **Open plan in editor** → Run `open docs/plans/<filename>.md` to open the file in the user's default editor
-- **`/deepen-plan`** → Call the /deepen-plan command with the plan file path to enhance with research
-- **`/technical_review`** → Call the /technical_review command with the plan file path
-- **Review and refine** → Load `document-review` skill.
-- **Share to Proof** → Upload the plan to Proof:
- ```bash
-  CONTENT=$(cat docs/plans/<filename>.md)
-  TITLE="Plan: <plan-title>"
- RESPONSE=$(curl -s -X POST https://www.proofeditor.ai/share/markdown \
- -H "Content-Type: application/json" \
- -d "$(jq -n --arg title "$TITLE" --arg markdown "$CONTENT" --arg by "ai:compound" '{title: $title, markdown: $markdown, by: $by}')")
- PROOF_URL=$(echo "$RESPONSE" | jq -r '.tokenUrl')
- ```
-  Display: `View & collaborate in Proof: $PROOF_URL` — skip silently if curl fails. Then return to options.
-- **`/workflows:work`** → Call the /workflows:work command with the plan file path
-- **`/workflows:work` on remote** → Run `/workflows:work docs/plans/<filename>.md &` to start work in background for Claude Code web
-- **Create Issue** → See "Issue Creation" section below
-- **Other** (automatically provided) → Accept free text for rework or specific changes
-
-**Note:** If running `/workflows:plan` with ultrathink enabled, automatically run `/deepen-plan` after plan creation for maximum depth and grounding.
-
-Loop back to the options after "Review and refine" or "Other" changes until the user selects `/workflows:work` or `/technical_review`.
-
-## Issue Creation
-
-When user selects "Create Issue", detect their project tracker from CLAUDE.md:
-
-1. **Check for tracker preference** in user's CLAUDE.md (global or project):
- - Look for `project_tracker: github` or `project_tracker: linear`
- - Or look for mentions of "GitHub Issues" or "Linear" in their workflow section
-
-2. **If GitHub:**
-
- Use the title and type from Step 2 (already in context - no need to re-read the file):
-
- ```bash
-   gh issue create --title "<type>: <title>" --body-file <plan-file>
- ```
-
-3. **If Linear:**
-
- ```bash
-   linear issue create --title "<title>" --description "$(cat <plan-file>)"
- ```
-
-4. **If no tracker configured:**
- Ask user: "Which project tracker do you use? (GitHub/Linear/Other)"
- - Suggest adding `project_tracker: github` or `project_tracker: linear` to their CLAUDE.md
-
-5. **After creation:**
- - Display the issue URL
- - Ask if they want to proceed to `/workflows:work` or `/technical_review`
-
-NEVER CODE! Just research and write the plan.
+/ce:plan $ARGUMENTS
diff --git a/plugins/compound-engineering/commands/workflows/review.md b/plugins/compound-engineering/commands/workflows/review.md
index 570cf49..7897e85 100644
--- a/plugins/compound-engineering/commands/workflows/review.md
+++ b/plugins/compound-engineering/commands/workflows/review.md
@@ -1,525 +1,10 @@
---
name: workflows:review
-description: Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and worktrees
+description: "[DEPRECATED] Use /ce:review instead — renamed for clarity."
argument-hint: "[PR number, GitHub URL, branch name, or latest]"
+disable-model-invocation: true
---
-# Review Command
+NOTE: /workflows:review is deprecated. Please use /ce:review instead. This alias will be removed in a future version.
- Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and Git worktrees for deep local inspection.
-
-## Introduction
-
-Senior Code Review Architect with expertise in security, performance, architecture, and quality assurance
-
-## Prerequisites
-
-
-- Git repository with GitHub CLI (`gh`) installed and authenticated
-- Clean main/master branch
-- Proper permissions to create worktrees and access the repository
-- For document reviews: Path to a markdown file or document
-
-
-## Main Tasks
-
-### 1. Determine Review Target & Setup (ALWAYS FIRST)
-
- #$ARGUMENTS
-
-
-First, I need to determine the review target type and set up the code for analysis.
-
-
-#### Immediate Actions:
-
-
-
-- [ ] Determine review type: PR number (numeric), GitHub URL, file path (.md), or empty (current branch)
-- [ ] Check current git branch
-- [ ] If ALREADY on the target branch (PR branch, requested branch name, or the branch already checked out for review) → proceed with analysis on current branch
-- [ ] If on a DIFFERENT branch than the review target → offer an isolated worktree: call `skill: git-worktree` with the branch name
-- [ ] Fetch PR metadata using `gh pr view --json title,body,files,closingIssuesReferences` for title, body, files, linked issues
-- [ ] Set up language-specific analysis tools
-- [ ] Prepare security scanning environment
-- [ ] Confirm we are on the branch under review: use `gh pr checkout <pr-number>` or check out the branch manually
-
-Ensure that the code is ready for analysis (either in worktree or on current branch). ONLY then proceed to the next step.
-
-
-
-#### Protected Artifacts
-
-
-The following paths are compound-engineering pipeline artifacts and must never be flagged for deletion, removal, or gitignore by any review agent:
-
-- `docs/plans/*.md` — Plan files created by `/workflows:plan`. These are living documents that track implementation progress (checkboxes are checked off by `/workflows:work`).
-- `docs/solutions/*.md` — Solution documents created during the pipeline.
-
-If a review agent flags any file in these directories for cleanup or removal, discard that finding during synthesis. Do not create a todo for it.
-
-
-#### Load Review Agents
-
-Read `compound-engineering.local.md` in the project root. If found, use `review_agents` from YAML frontmatter. If the markdown body contains review context, pass it to each agent as additional instructions.
-
-If no settings file exists, invoke the `setup` skill to create one. Then read the newly created file and continue.
-
-#### Parallel Agents to review the PR:
-
-
-
-Run all configured review agents in parallel using Task tool. For each agent in the `review_agents` list:
-
-```
-Task {agent-name}(PR content + review context from settings body)
-```
-
-Additionally, always run these regardless of settings:
-- Task agent-native-reviewer(PR content) - Verify new features are agent-accessible
-- Task learnings-researcher(PR content) - Search docs/solutions/ for past issues related to this PR's modules and patterns
-
-
-
-#### Conditional Agents (Run if applicable):
-
-
-
-These agents are run ONLY when the PR matches specific criteria. Check the PR files list to determine if they apply:
-
-**MIGRATIONS: If PR contains database migrations, schema.rb, or data backfills:**
-
-- Task schema-drift-detector(PR content) - Detects unrelated schema.rb changes by cross-referencing against included migrations (run FIRST)
-- Task data-migration-expert(PR content) - Validates ID mappings match production, checks for swapped values, verifies rollback safety
-- Task deployment-verification-agent(PR content) - Creates Go/No-Go deployment checklist with SQL verification queries
-
-**When to run:**
-- PR includes files matching `db/migrate/*.rb` or `db/schema.rb`
-- PR modifies columns that store IDs, enums, or mappings
-- PR includes data backfill scripts or rake tasks
-- PR title/body mentions: migration, backfill, data transformation, ID mapping
-
-**What these agents check:**
-- `schema-drift-detector`: Cross-references schema.rb changes against PR migrations to catch unrelated columns/indexes from local database state
-- `data-migration-expert`: Verifies hard-coded mappings match production reality (prevents swapped IDs), checks for orphaned associations, validates dual-write patterns
-- `deployment-verification-agent`: Produces executable pre/post-deploy checklists with SQL queries, rollback procedures, and monitoring plans
-
-
-
-### 2. Ultra-Thinking Deep Dive Phases
-
- For each phase below, spend maximum cognitive effort. Think step by step. Consider all angles. Question assumptions. And bring all reviews in a synthesis to the user.
-
-
-Complete system context map with component interactions
-
-
-#### Phase 1: Stakeholder Perspective Analysis
-
- ULTRA-THINK: Put yourself in each stakeholder's shoes. What matters to them? What are their pain points?
-
-
-
-1. **Developer Perspective**
-
- - How easy is this to understand and modify?
- - Are the APIs intuitive?
- - Is debugging straightforward?
- - Can I test this easily?
-
-2. **Operations Perspective**
-
- - How do I deploy this safely?
- - What metrics and logs are available?
- - How do I troubleshoot issues?
- - What are the resource requirements?
-
-3. **End User Perspective**
-
- - Is the feature intuitive?
- - Are error messages helpful?
- - Is performance acceptable?
- - Does it solve my problem?
-
-4. **Security Team Perspective**
-
- - What's the attack surface?
- - Are there compliance requirements?
- - How is data protected?
- - What are the audit capabilities?
-
-5. **Business Perspective**
- - What's the ROI?
- - Are there legal/compliance risks?
- - How does this affect time-to-market?
- - What's the total cost of ownership?
-
-#### Phase 2: Scenario Exploration
-
- ULTRA-THINK: Explore edge cases and failure scenarios. What could go wrong? How does the system behave under stress?
-
-
-
-- [ ] **Happy Path**: Normal operation with valid inputs
-- [ ] **Invalid Inputs**: Null, empty, malformed data
-- [ ] **Boundary Conditions**: Min/max values, empty collections
-- [ ] **Concurrent Access**: Race conditions, deadlocks
-- [ ] **Scale Testing**: 10x, 100x, 1000x normal load
-- [ ] **Network Issues**: Timeouts, partial failures
-- [ ] **Resource Exhaustion**: Memory, disk, connections
-- [ ] **Security Attacks**: Injection, overflow, DoS
-- [ ] **Data Corruption**: Partial writes, inconsistency
-- [ ] **Cascading Failures**: Downstream service issues
-
-### 3. Multi-Angle Review Perspectives
-
-#### Technical Excellence Angle
-
-- Code craftsmanship evaluation
-- Engineering best practices
-- Technical documentation quality
-- Tooling and automation assessment
-
-#### Business Value Angle
-
-- Feature completeness validation
-- Performance impact on users
-- Cost-benefit analysis
-- Time-to-market considerations
-
-#### Risk Management Angle
-
-- Security risk assessment
-- Operational risk evaluation
-- Compliance risk verification
-- Technical debt accumulation
-
-#### Team Dynamics Angle
-
-- Code review etiquette
-- Knowledge sharing effectiveness
-- Collaboration patterns
-- Mentoring opportunities
-
-### 4. Simplification and Minimalism Review
-
-Run Task code-simplicity-reviewer() to check whether the code can be simplified.
-
-### 5. Findings Synthesis and Todo Creation Using file-todos Skill
-
- ALL findings MUST be stored in the todos/ directory using the file-todos skill. Create todo files immediately after synthesis - do NOT present findings for user approval first. Use the skill for structured todo management.
-
-#### Step 1: Synthesize All Findings
-
-
-Consolidate all agent reports into a categorized list of findings.
-Remove duplicates, prioritize by severity and impact.
-
-
-
-
-- [ ] Collect findings from all parallel agents
-- [ ] Surface learnings-researcher results: if past solutions are relevant, flag them as "Known Pattern" with links to docs/solutions/ files
-- [ ] Discard any findings that recommend deleting or gitignoring files in `docs/plans/` or `docs/solutions/` (see Protected Artifacts above)
-- [ ] Categorize by type: security, performance, architecture, quality, etc.
-- [ ] Assign severity levels: 🔴 CRITICAL (P1), 🟡 IMPORTANT (P2), 🔵 NICE-TO-HAVE (P3)
-- [ ] Remove duplicate or overlapping findings
-- [ ] Estimate effort for each finding (Small/Medium/Large)
-
-
-
-#### Step 2: Create Todo Files Using file-todos Skill
-
- Use the file-todos skill to create todo files for ALL findings immediately. Do NOT present findings one-by-one asking for user approval. Create all todo files in parallel using the skill, then summarize results to user.
-
-**Implementation Options:**
-
-**Option A: Direct File Creation (Fast)**
-
-- Create todo files directly using Write tool
-- All findings in parallel for speed
-- Use standard template from `.claude/skills/file-todos/assets/todo-template.md`
-- Follow naming convention: `{issue_id}-pending-{priority}-{description}.md`
-
-**Option B: Sub-Agents in Parallel (Recommended for Scale)**
-
-For large PRs with 15+ findings, use sub-agents to create finding files in parallel:
-
-```bash
-# Launch multiple finding-creator agents in parallel
-Task() - Create todos for first finding
-Task() - Create todos for second finding
-Task() - Create todos for third finding
-etc. for each finding.
-```
-
-Sub-agents can:
-
-- Process multiple findings simultaneously
-- Write detailed todo files with all sections filled
-- Organize findings by severity
-- Create comprehensive Proposed Solutions
-- Add acceptance criteria and work logs
-- Complete much faster than sequential processing
-
-**Execution Strategy:**
-
-1. Synthesize all findings into categories (P1/P2/P3)
-2. Group findings by severity
-3. Launch 3 parallel sub-agents (one per severity level)
-4. Each sub-agent creates its batch of todos using the file-todos skill
-5. Consolidate results and present summary
-
-**Process (Using file-todos Skill):**
-
-1. For each finding:
-
- - Determine severity (P1/P2/P3)
- - Write detailed Problem Statement and Findings
- - Create 2-3 Proposed Solutions with pros/cons/effort/risk
- - Estimate effort (Small/Medium/Large)
- - Add acceptance criteria and work log
-
-2. Use file-todos skill for structured todo management:
-
- ```bash
- skill: file-todos
- ```
-
- The skill provides:
-
- - Template location: `.claude/skills/file-todos/assets/todo-template.md`
- - Naming convention: `{issue_id}-{status}-{priority}-{description}.md`
- - YAML frontmatter structure: status, priority, issue_id, tags, dependencies
- - All required sections: Problem Statement, Findings, Solutions, etc.
-
-3. Create todo files in parallel:
-
- ```bash
- {next_id}-pending-{priority}-{description}.md
- ```
-
-4. Examples:
-
- ```
- 001-pending-p1-path-traversal-vulnerability.md
- 002-pending-p1-api-response-validation.md
- 003-pending-p2-concurrency-limit.md
- 004-pending-p3-unused-parameter.md
- ```
-
-5. Follow template structure from file-todos skill: `.claude/skills/file-todos/assets/todo-template.md`
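The naming convention above can be sketched as a small helper (hypothetical — the file-todos skill's own tooling may differ):

```bash
# Build a todo filename following {issue_id}-{status}-{priority}-{description}.md.
# The helper name and slug rules are illustrative.
todo_filename() {
  local id="$1" status="$2" priority="$3" desc="$4"
  # Lowercase the description and turn spaces into hyphens
  desc=$(printf '%s' "$desc" | tr '[:upper:] ' '[:lower:]-')
  printf '%03d-%s-%s-%s.md' "$id" "$status" "$priority" "$desc"
}

todo_filename 1 pending p1 "Path traversal vulnerability"
```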
-
-**Todo File Structure (from template):**
-
-Each todo must include:
-
-- **YAML frontmatter**: status, priority, issue_id, tags, dependencies
-- **Problem Statement**: What's broken/missing, why it matters
-- **Findings**: Discoveries from agents with evidence/location
-- **Proposed Solutions**: 2-3 options, each with pros/cons/effort/risk
-- **Recommended Action**: (Filled during triage, leave blank initially)
-- **Technical Details**: Affected files, components, database changes
-- **Acceptance Criteria**: Testable checklist items
-- **Work Log**: Dated record with actions and learnings
-- **Resources**: Links to PR, issues, documentation, similar patterns
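Putting the frontmatter fields together, the top of a todo file might look like this (values are illustrative; the skill's template is the authoritative structure):

```yaml
---
status: pending
priority: p1
issue_id: "001"
tags: [code-review, security]
dependencies: []
---
```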
-
-**File naming convention:**
-
-```
-{issue_id}-{status}-{priority}-{description}.md
-
-Examples:
-- 001-pending-p1-security-vulnerability.md
-- 002-pending-p2-performance-optimization.md
-- 003-pending-p3-code-cleanup.md
-```
-
-**Status values:**
-
-- `pending` - New findings, needs triage/decision
-- `ready` - Approved by manager, ready to work
-- `complete` - Work finished
-
-**Priority values:**
-
-- `p1` - Critical (blocks merge, security/data issues)
-- `p2` - Important (should fix, architectural/performance)
-- `p3` - Nice-to-have (enhancements, cleanup)
-
-**Tagging:** Always add `code-review` tag, plus: `security`, `performance`, `architecture`, `rails`, `quality`, etc.
-
-#### Step 3: Summary Report
-
-After creating all todo files, present comprehensive summary:
-
-````markdown
-## ✅ Code Review Complete
-
-**Review Target:** PR #XXXX - [PR Title]
-**Branch:** [branch-name]
-
-### Findings Summary:
-
-- **Total Findings:** [X]
-- **🔴 CRITICAL (P1):** [count] - BLOCKS MERGE
-- **🟡 IMPORTANT (P2):** [count] - Should Fix
-- **🔵 NICE-TO-HAVE (P3):** [count] - Enhancements
-
-### Created Todo Files:
-
-**P1 - Critical (BLOCKS MERGE):**
-
-- `001-pending-p1-{finding}.md` - {description}
-- `002-pending-p1-{finding}.md` - {description}
-
-**P2 - Important:**
-
-- `003-pending-p2-{finding}.md` - {description}
-- `004-pending-p2-{finding}.md` - {description}
-
-**P3 - Nice-to-Have:**
-
-- `005-pending-p3-{finding}.md` - {description}
-
-### Review Agents Used:
-
-- kieran-rails-reviewer
-- security-sentinel
-- performance-oracle
-- architecture-strategist
-- agent-native-reviewer
-- [other agents]
-
-### Next Steps:
-
-1. **Address P1 Findings**: CRITICAL - must be fixed before merge
-
- - Review each P1 todo in detail
- - Implement fixes or request exemption
- - Verify fixes before merging PR
-
-2. **Triage All Todos**:
- ```bash
- ls todos/*-pending-*.md # View all pending todos
- /triage # Use slash command for interactive triage
- ```
-
-3. **Work on Approved Todos**:
-
- ```bash
- /resolve_todo_parallel # Fix all approved items efficiently
- ```
-
-4. **Track Progress**:
- - Rename file when status changes: pending → ready → complete
- - Update Work Log as you work
- - Commit todos: `git add todos/ && git commit -m "chore: add code review findings"`
-
-### Severity Breakdown:
-
-**🔴 P1 (Critical - Blocks Merge):**
-
-- Security vulnerabilities
-- Data corruption risks
-- Breaking changes
-- Critical architectural issues
-
-**🟡 P2 (Important - Should Fix):**
-
-- Performance issues
-- Significant architectural concerns
-- Major code quality problems
-- Reliability issues
-
-**🔵 P3 (Nice-to-Have):**
-
-- Minor improvements
-- Code cleanup
-- Optimization opportunities
-- Documentation updates
-````
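The status rename in "Track Progress" (step 4 above) can be scripted; a minimal sketch, assuming the naming convention from earlier (the helper and filenames are invented):

```bash
# Rename a todo when its status changes: {id}-{status}-{priority}-{desc}.md
cd "$(mktemp -d)" && mkdir todos
touch todos/001-pending-p1-path-traversal.md

promote_todo() {
  local file="$1" new_status="$2"
  local base new
  base=$(basename "$file")
  # Swap the status segment (second hyphen-delimited field)
  new=$(printf '%s' "$base" | sed -E "s/^([0-9]+)-[a-z]+-/\1-${new_status}-/")
  mv "$file" "$(dirname "$file")/$new"
}

promote_todo todos/001-pending-p1-path-traversal.md ready
ls todos
```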
-
-### 6. End-to-End Testing (Optional)
-
-
-
-**First, detect the project type from PR files:**
-
-| Indicator | Project Type |
-|-----------|--------------|
-| `*.xcodeproj`, `*.xcworkspace`, `Package.swift` | iOS/macOS |
-| `Gemfile`, `package.json`, `app/views/*`, `*.html.*` | Web |
-| Both iOS files AND web files | Hybrid (test both) |
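One way to script the table's heuristic (a sketch, not the command's actual implementation — the function name and file lists are illustrative):

```bash
# Infer project type from a whitespace-separated list of changed files
detect_project_type() {
  local files="$1" ios=0 web=0
  case "$files" in *.xcodeproj*|*.xcworkspace*|*Package.swift*) ios=1 ;; esac
  case "$files" in *Gemfile*|*package.json*|*app/views/*|*.html.*) web=1 ;; esac
  if [ "$ios" -eq 1 ] && [ "$web" -eq 1 ]; then echo hybrid
  elif [ "$ios" -eq 1 ]; then echo ios
  elif [ "$web" -eq 1 ]; then echo web
  else echo unknown
  fi
}

detect_project_type "MyApp.xcodeproj/project.pbxproj app/views/home/index.html.erb"
```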
-
-
-
-
-
-After presenting the Summary Report, offer appropriate testing based on project type:
-
-**For Web Projects:**
-```markdown
-**"Want to run browser tests on the affected pages?"**
-1. Yes - run `/test-browser`
-2. No - skip
-```
-
-**For iOS Projects:**
-```markdown
-**"Want to run Xcode simulator tests on the app?"**
-1. Yes - run `/xcode-test`
-2. No - skip
-```
-
-**For Hybrid Projects (e.g., Rails + Hotwire Native):**
-```markdown
-**"Want to run end-to-end tests?"**
-1. Web only - run `/test-browser`
-2. iOS only - run `/xcode-test`
-3. Both - run both commands
-4. No - skip
-```
-
-
-
-#### If User Accepts Web Testing:
-
-Spawn a subagent to run browser tests (preserves main context):
-
-```
-Task general-purpose("Run /test-browser for PR #[number]. Test all affected pages, check for console errors, handle failures by creating todos and fixing.")
-```
-
-The subagent will:
-1. Identify pages affected by the PR
-2. Navigate to each page and capture snapshots (using Playwright MCP or agent-browser CLI)
-3. Check for console errors
-4. Test critical interactions
-5. Pause for human verification on OAuth/email/payment flows
-6. Create P1 todos for any failures
-7. Fix and retry until all tests pass
-
-**Standalone:** `/test-browser [PR number]`
-
-#### If User Accepts iOS Testing:
-
-Spawn a subagent to run Xcode tests (preserves main context):
-
-```
-Task general-purpose("Run /xcode-test for scheme [name]. Build for simulator, install, launch, take screenshots, check for crashes.")
-```
-
-The subagent will:
-1. Verify XcodeBuildMCP is installed
-2. Discover project and schemes
-3. Build for iOS Simulator
-4. Install and launch app
-5. Take screenshots of key screens
-6. Capture console logs for errors
-7. Pause for human verification (Sign in with Apple, push, IAP)
-8. Create P1 todos for any failures
-9. Fix and retry until all tests pass
-
-**Standalone:** `/xcode-test [scheme]`
-
-### Important: P1 Findings Block Merge
-
-Any **🔴 P1 (CRITICAL)** findings must be addressed before merging the PR. Present these prominently and ensure they're resolved before accepting the PR.
+/ce:review $ARGUMENTS
diff --git a/plugins/compound-engineering/commands/workflows/work.md b/plugins/compound-engineering/commands/workflows/work.md
index 739a2d9..16b38d5 100644
--- a/plugins/compound-engineering/commands/workflows/work.md
+++ b/plugins/compound-engineering/commands/workflows/work.md
@@ -1,470 +1,10 @@
---
name: workflows:work
-description: Execute work plans efficiently while maintaining quality and finishing features
+description: "[DEPRECATED] Use /ce:work instead — renamed for clarity."
argument-hint: "[plan file, specification, or todo file path]"
+disable-model-invocation: true
---
-# Work Plan Execution Command
+NOTE: /workflows:work is deprecated. Please use /ce:work instead. This alias will be removed in a future version.
-Execute a work plan efficiently while maintaining quality and finishing features.
-
-## Introduction
-
-This command takes a work document (plan, specification, or todo file) and executes it systematically. The focus is on **shipping complete features** by understanding requirements quickly, following existing patterns, and maintaining quality throughout.
-
-## Input Document
-
- #$ARGUMENTS
-
-## Execution Workflow
-
-### Phase 1: Quick Start
-
-1. **Read Plan and Clarify**
-
- - Read the work document completely
- - Review any references or links provided in the plan
- - If anything is unclear or ambiguous, ask clarifying questions now
- - Get user approval to proceed
- - **Do not skip this** - better to ask questions now than build the wrong thing
-
-2. **Setup Environment**
-
- First, check the current branch:
-
- ```bash
- current_branch=$(git branch --show-current)
- default_branch=$(git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's@^refs/remotes/origin/@@')
-
- # Fallback if remote HEAD isn't set
- if [ -z "$default_branch" ]; then
- default_branch=$(git rev-parse --verify origin/main >/dev/null 2>&1 && echo "main" || echo "master")
- fi
- ```
-
- **If already on a feature branch** (not the default branch):
- - Ask: "Continue working on `[current_branch]`, or create a new branch?"
- - If continuing, proceed to step 3
- - If creating new, follow Option A or B below
-
- **If on the default branch**, choose how to proceed:
-
- **Option A: Create a new branch**
- ```bash
- git pull origin [default_branch]
- git checkout -b feature-branch-name
- ```
- Use a meaningful name based on the work (e.g., `feat/user-authentication`, `fix/email-validation`).
-
- **Option B: Use a worktree (recommended for parallel development)**
- ```bash
- skill: git-worktree
- # The skill will create a new branch from the default branch in an isolated worktree
- ```
-
- **Option C: Continue on the default branch**
- - Requires explicit user confirmation
- - Only proceed after user explicitly says "yes, commit to [default_branch]"
- - Never commit directly to the default branch without explicit permission
-
- **Recommendation**: Use worktree if:
- - You want to work on multiple features simultaneously
- - You want to keep the default branch clean while experimenting
- - You plan to switch between branches frequently
-
-3. **Create Todo List**
- - Use TodoWrite to break plan into actionable tasks
- - Include dependencies between tasks
- - Prioritize based on what needs to be done first
- - Include testing and quality check tasks
- - Keep tasks specific and completable
-
-### Phase 2: Execute
-
-1. **Task Execution Loop**
-
- For each task in priority order:
-
- ```
- while (tasks remain):
- - Mark task as in_progress in TodoWrite
- - Read any referenced files from the plan
- - Look for similar patterns in codebase
- - Implement following existing conventions
- - Write tests for new functionality
- - Run System-Wide Test Check (see below)
- - Run tests after changes
- - Mark task as completed in TodoWrite
- - Mark off the corresponding checkbox in the plan file ([ ] → [x])
- - Evaluate for incremental commit (see below)
- ```
-
- **System-Wide Test Check** — Before marking a task done, pause and ask:
-
- | Question | What to do |
- |----------|------------|
- | **What fires when this runs?** Callbacks, middleware, observers, event handlers — trace two levels out from your change. | Read the actual code (not docs) for callbacks on models you touch, middleware in the request chain, `after_*` hooks. |
- | **Do my tests exercise the real chain?** If every dependency is mocked, the test proves your logic works *in isolation* — it says nothing about the interaction. | Write at least one integration test that uses real objects through the full callback/middleware chain. No mocks for the layers that interact. |
- | **Can failure leave orphaned state?** If your code persists state (DB row, cache, file) before calling an external service, what happens when the service fails? Does retry create duplicates? | Trace the failure path with real objects. If state is created before the risky call, test that failure cleans up or that retry is idempotent. |
- | **What other interfaces expose this?** Mixins, DSLs, alternative entry points (Agent vs Chat vs ChatMethods). | Grep for the method/behavior in related classes. If parity is needed, add it now — not as a follow-up. |
- | **Do error strategies align across layers?** Retry middleware + application fallback + framework error handling — do they conflict or create double execution? | List the specific error classes at each layer. Verify your rescue list matches what the lower layer actually raises. |
-
- **When to skip:** Leaf-node changes with no callbacks, no state persistence, no parallel interfaces. If the change is purely additive (new helper method, new view partial), the check takes 10 seconds and the answer is "nothing fires, skip."
-
- **When this matters most:** Any change that touches models with callbacks, error handling with fallback/retry, or functionality exposed through multiple interfaces.
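The "what other interfaces expose this?" row often reduces to a grep across sibling classes. A toy sketch (the paths, class names, and method are invented):

```bash
# If Chat defines the behavior but Agent does not, interface parity may be missing
cd "$(mktemp -d)" && mkdir -p app/models
printf 'def send_message; end\n' > app/models/chat.rb
printf 'class Agent; end\n'      > app/models/agent.rb

grep -rl 'send_message' app/models
```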
-
- **IMPORTANT**: Always update the original plan document by checking off completed items. Use the Edit tool to change `- [ ]` to `- [x]` for each task you finish. This keeps the plan as a living document showing progress and ensures no checkboxes are left unchecked.
-
-2. **Incremental Commits**
-
- After completing each task, evaluate whether to create an incremental commit:
-
- | Commit when... | Don't commit when... |
- |----------------|---------------------|
- | Logical unit complete (model, service, component) | Small part of a larger unit |
- | Tests pass + meaningful progress | Tests failing |
- | About to switch contexts (backend → frontend) | Purely scaffolding with no behavior |
- | About to attempt risky/uncertain changes | Would need a "WIP" commit message |
-
- **Heuristic:** "Can I write a commit message that describes a complete, valuable change? If yes, commit. If the message would be 'WIP' or 'partial X', wait."
-
- **Commit workflow:**
- ```bash
- # 1. Verify tests pass (use project's test command)
- # Examples: bin/rails test, npm test, pytest, go test, etc.
-
- # 2. Stage only files related to this logical unit (not `git add .`)
- git add <files-for-this-unit>
-
- # 3. Commit with conventional message
- git commit -m "feat(scope): description of this unit"
- ```
-
- **Handling merge conflicts:** If conflicts arise during rebasing or merging, resolve them immediately. Incremental commits make conflict resolution easier since each commit is small and focused.
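A typical resolution sequence when a conflict does appear (standard git commands; `<resolved-file>` is a placeholder):

```bash
git rebase origin/main      # conflict reported here
git status                  # list the conflicted files
# edit each conflicted file to resolve the markers, then:
git add <resolved-file>
git rebase --continue       # or: git rebase --abort to back out
```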
-
- **Note:** Incremental commits use clean conventional messages without attribution footers. The final Phase 4 commit/PR includes the full attribution.
-
-3. **Follow Existing Patterns**
-
- - The plan should reference similar code - read those files first
- - Match naming conventions exactly
- - Reuse existing components where possible
- - Follow project coding standards (see CLAUDE.md)
- - When in doubt, grep for similar implementations
-
-4. **Test Continuously**
-
- - Run relevant tests after each significant change
- - Don't wait until the end to test
- - Fix failures immediately
- - Add new tests for new functionality
- - **Unit tests with mocks prove logic in isolation. Integration tests with real objects prove the layers work together.** If your change touches callbacks, middleware, or error handling — you need both.
-
-5. **Figma Design Sync** (if applicable)
-
- For UI work with Figma designs:
-
- - Implement components following design specs
- - Use figma-design-sync agent iteratively to compare
- - Fix visual differences identified
- - Repeat until implementation matches design
-
-6. **Track Progress**
- - Keep TodoWrite updated as you complete tasks
- - Note any blockers or unexpected discoveries
- - Create new tasks if scope expands
- - Keep user informed of major milestones
-
-### Phase 3: Quality Check
-
-1. **Run Core Quality Checks**
-
- Always run before submitting:
-
- ```bash
- # Run full test suite (use project's test command)
- # Examples: bin/rails test, npm test, pytest, go test, etc.
-
- # Run linting (per CLAUDE.md)
- # Use linting-agent before pushing to origin
- ```
-
-2. **Consider Reviewer Agents** (Optional)
-
- Use for complex, risky, or large changes. Read agents from `compound-engineering.local.md` frontmatter (`review_agents`). If no settings file, invoke the `setup` skill to create one.
-
- Run configured agents in parallel with Task tool. Present findings and address critical issues.
-
-3. **Final Validation**
- - All TodoWrite tasks marked completed
- - All tests pass
- - Linting passes
- - Code follows existing patterns
- - Figma designs match (if applicable)
- - No console errors or warnings
-
-4. **Prepare Operational Validation Plan** (REQUIRED)
- - Add a `## Post-Deploy Monitoring & Validation` section to the PR description for every change.
- - Include concrete:
- - Log queries/search terms
- - Metrics or dashboards to watch
- - Expected healthy signals
- - Failure signals and rollback/mitigation trigger
- - Validation window and owner
- - If there is truly no production/runtime impact, still include the section with: `No additional operational monitoring required` and a one-line reason.
-
-### Phase 4: Ship It
-
-1. **Create Commit**
-
- ```bash
- git add .
- git status # Review what's being committed
- git diff --staged # Check the changes
-
- # Commit with conventional format
- git commit -m "$(cat <<'EOF'
- feat(scope): description of what and why
-
- Brief explanation if needed.
-
- 🤖 Generated with [Claude Code](https://claude.com/claude-code)
-
- Co-Authored-By: Claude <noreply@anthropic.com>
- EOF
- )"
- ```
-
-2. **Capture and Upload Screenshots for UI Changes** (REQUIRED for any UI work)
-
- For **any** design changes, new views, or UI modifications, you MUST capture and upload screenshots:
-
- **Step 1: Start dev server** (if not running)
- ```bash
- bin/dev # Run in background
- ```
-
- **Step 2: Capture screenshots with agent-browser CLI**
- ```bash
- agent-browser open http://localhost:3000/[route]
- agent-browser snapshot -i
- agent-browser screenshot output.png
- ```
- See the `agent-browser` skill for detailed usage.
-
- **Step 3: Upload using imgup skill**
- ```bash
- skill: imgup
- # Then upload each screenshot:
- imgup -h pixhost screenshot.png # pixhost works without API key
- # Alternative hosts: catbox, imagebin, beeimg
- ```
-
- **What to capture:**
- - **New screens**: Screenshot of the new UI
- - **Modified screens**: Before AND after screenshots
- - **Design implementation**: Screenshot showing Figma design match
-
- **IMPORTANT**: Always include uploaded image URLs in PR description. This provides visual context for reviewers and documents the change.
-
-3. **Create Pull Request**
-
- ```bash
- git push -u origin feature-branch-name
-
- gh pr create --title "Feature: [Description]" --body "$(cat <<'EOF'
- ## Summary
- - What was built
- - Why it was needed
- - Key decisions made
-
- ## Testing
- - Tests added/modified
- - Manual testing performed
-
- ## Post-Deploy Monitoring & Validation
- - **What to monitor/search**
- - Logs:
- - Metrics/Dashboards:
- - **Validation checks (queries/commands)**
- - `command or query here`
- - **Expected healthy behavior**
- - Expected signal(s)
- - **Failure signal(s) / rollback trigger**
- - Trigger + immediate action
- - **Validation window & owner**
- - Window:
- - Owner:
- - **If no operational impact**
- - `No additional operational monitoring required: `
-
- ## Before / After Screenshots
- | Before | After |
- |--------|-------|
- |  |  |
-
- ## Figma Design
- [Link if applicable]
-
- ---
-
- [](https://github.com/EveryInc/compound-engineering-plugin) 🤖 Generated with [Claude Code](https://claude.com/claude-code)
- EOF
- )"
- ```
-
-4. **Update Plan Status**
-
- If the input document has YAML frontmatter with a `status` field, update it to `completed`:
- ```
- status: active → status: completed
- ```
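A minimal sketch of that flip (the file path is illustrative; the `-i.bak` backup suffix keeps the invocation portable across GNU and BSD sed):

```bash
cd "$(mktemp -d)"
printf -- '---\nstatus: active\n---\n# Plan\n' > plan.md

# Flip the frontmatter status field in place
sed -i.bak 's/^status: active$/status: completed/' plan.md
grep '^status:' plan.md
```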
-
-5. **Notify User**
- - Summarize what was completed
- - Link to PR
- - Note any follow-up work needed
- - Suggest next steps if applicable
-
----
-
-## Swarm Mode (Optional)
-
-For complex plans with multiple independent workstreams, enable swarm mode for parallel execution with coordinated agents.
-
-### When to Use Swarm Mode
-
-| Use Swarm Mode when... | Use Standard Mode when... |
-|------------------------|---------------------------|
-| Plan has 5+ independent tasks | Plan is linear/sequential |
-| Multiple specialists needed (review + test + implement) | Single-focus work |
-| Want maximum parallelism | Simpler mental model preferred |
-| Large feature with clear phases | Small feature or bug fix |
-
-### Enabling Swarm Mode
-
-To trigger swarm execution, say:
-
-> "Make a Task list and launch an army of agent swarm subagents to build the plan"
-
-Or explicitly request: "Use swarm mode for this work"
-
-### Swarm Workflow
-
-When swarm mode is enabled, the workflow changes:
-
-1. **Create Team**
- ```
- Teammate({ operation: "spawnTeam", team_name: "work-{timestamp}" })
- ```
-
-2. **Create Task List with Dependencies**
- - Parse plan into TaskCreate items
- - Set up blockedBy relationships for sequential dependencies
- - Independent tasks have no blockers (can run in parallel)
-
-3. **Spawn Specialized Teammates**
- ```
- Task({
- team_name: "work-{timestamp}",
- name: "implementer",
- subagent_type: "general-purpose",
- prompt: "Claim implementation tasks, execute, mark complete",
- run_in_background: true
- })
-
- Task({
- team_name: "work-{timestamp}",
- name: "tester",
- subagent_type: "general-purpose",
- prompt: "Claim testing tasks, run tests, mark complete",
- run_in_background: true
- })
- ```
-
-4. **Coordinate and Monitor**
- - Team lead monitors task completion
- - Spawn additional workers as phases unblock
- - Handle plan approval if required
-
-5. **Cleanup**
- ```
- Teammate({ operation: "requestShutdown", target_agent_id: "implementer" })
- Teammate({ operation: "requestShutdown", target_agent_id: "tester" })
- Teammate({ operation: "cleanup" })
- ```
-
-See the `orchestrating-swarms` skill for detailed swarm patterns and best practices.
-
----
-
-## Key Principles
-
-### Start Fast, Execute Faster
-
-- Get clarification once at the start, then execute
-- Don't wait for perfect understanding - ask questions and move
-- The goal is to **finish the feature**, not create perfect process
-
-### The Plan is Your Guide
-
-- Work documents should reference similar code and patterns
-- Load those references and follow them
-- Don't reinvent - match what exists
-
-### Test As You Go
-
-- Run tests after each change, not at the end
-- Fix failures immediately
-- Continuous testing prevents big surprises
-
-### Quality is Built In
-
-- Follow existing patterns
-- Write tests for new code
-- Run linting before pushing
-- Use reviewer agents for complex/risky changes only
-
-### Ship Complete Features
-
-- Mark all tasks completed before moving on
-- Don't leave features 80% done
-- A finished feature that ships beats a perfect feature that doesn't
-
-## Quality Checklist
-
-Before creating PR, verify:
-
-- [ ] All clarifying questions asked and answered
-- [ ] All TodoWrite tasks marked completed
-- [ ] Tests pass (run project's test command)
-- [ ] Linting passes (use linting-agent)
-- [ ] Code follows existing patterns
-- [ ] Figma designs match implementation (if applicable)
-- [ ] Before/after screenshots captured and uploaded (for UI changes)
-- [ ] Commit messages follow conventional format
-- [ ] PR description includes Post-Deploy Monitoring & Validation section (or explicit no-impact rationale)
-- [ ] PR description includes summary, testing notes, and screenshots
-- [ ] PR description includes Compound Engineered badge
-
-## When to Use Reviewer Agents
-
-**Don't use by default.** Use reviewer agents only when:
-
-- Large refactor affecting many files (10+)
-- Security-sensitive changes (authentication, permissions, data access)
-- Performance-critical code paths
-- Complex algorithms or business logic
-- User explicitly requests thorough review
-
-For most features: tests + linting + following patterns is sufficient.
-
-## Common Pitfalls to Avoid
-
-- **Analysis paralysis** - Don't overthink, read the plan and execute
-- **Skipping clarifying questions** - Ask now, not after building wrong thing
-- **Ignoring plan references** - The plan has links for a reason
-- **Testing at the end** - Test continuously or suffer later
-- **Forgetting TodoWrite** - Track progress or lose track of what's done
-- **80% done syndrome** - Finish the feature, don't move on early
-- **Over-reviewing simple changes** - Save reviewer agents for complex work
+/ce:work $ARGUMENTS
diff --git a/plugins/compound-engineering/skills/brainstorming/SKILL.md b/plugins/compound-engineering/skills/brainstorming/SKILL.md
index 0a994dd..5a092cd 100644
--- a/plugins/compound-engineering/skills/brainstorming/SKILL.md
+++ b/plugins/compound-engineering/skills/brainstorming/SKILL.md
@@ -131,7 +131,7 @@ topic:
- [Any unresolved questions for the planning phase]
## Next Steps
-→ `/workflows:plan` for implementation details
+→ `/ce:plan` for implementation details
```
**Output Location:** `docs/brainstorms/YYYY-MM-DD--brainstorm.md`
@@ -140,7 +140,7 @@ topic:
Present clear options for what to do next:
-1. **Proceed to planning** → Run `/workflows:plan`
+1. **Proceed to planning** → Run `/ce:plan`
2. **Refine further** → Continue exploring the design
3. **Done for now** → User will return later
@@ -187,4 +187,4 @@ Planning answers **HOW** to build it:
- Technical details and code patterns
- Testing strategy and verification
-When brainstorm output exists, `/workflows:plan` should detect it and use it as input, skipping its own idea refinement phase.
+When brainstorm output exists, `/ce:plan` should detect it and use it as input, skipping its own idea refinement phase.
diff --git a/plugins/compound-engineering/skills/document-review/SKILL.md b/plugins/compound-engineering/skills/document-review/SKILL.md
index e9cb3b2..3376c32 100644
--- a/plugins/compound-engineering/skills/document-review/SKILL.md
+++ b/plugins/compound-engineering/skills/document-review/SKILL.md
@@ -36,7 +36,7 @@ Score the document against these criteria:
| **Specificity** | Concrete enough for next step (brainstorm → can plan, plan → can implement) |
| **YAGNI** | No hypothetical features, simplest approach chosen |
-If invoked within a workflow (after `/workflows:brainstorm` or `/workflows:plan`), also check:
+If invoked within a workflow (after `/ce:brainstorm` or `/ce:plan`), also check:
- **User intent fidelity** — Document reflects what was discussed, assumptions validated
## Step 4: Identify the Critical Improvement
diff --git a/plugins/compound-engineering/skills/file-todos/SKILL.md b/plugins/compound-engineering/skills/file-todos/SKILL.md
index c67dcf9..4525025 100644
--- a/plugins/compound-engineering/skills/file-todos/SKILL.md
+++ b/plugins/compound-engineering/skills/file-todos/SKILL.md
@@ -185,7 +185,7 @@ Work logs serve as:
| Trigger | Flow | Tool |
|---------|------|------|
-| Code review | `/workflows:review` → Findings → `/triage` → Todos | Review agent + skill |
+| Code review | `/ce:review` → Findings → `/triage` → Todos | Review agent + skill |
| PR comments | `/resolve_pr_parallel` → Individual fixes → Todos | gh CLI + skill |
| Code TODOs | `/resolve_todo_parallel` → Fixes + Complex todos | Agent + skill |
| Planning | Brainstorm → Create todo → Work → Complete | Skill |
diff --git a/plugins/compound-engineering/skills/git-worktree/SKILL.md b/plugins/compound-engineering/skills/git-worktree/SKILL.md
index 1ba22f4..19b8806 100644
--- a/plugins/compound-engineering/skills/git-worktree/SKILL.md
+++ b/plugins/compound-engineering/skills/git-worktree/SKILL.md
@@ -38,8 +38,8 @@ git worktree add .worktrees/feature-name -b feature-name main
Use this skill in these scenarios:
-1. **Code Review (`/workflows:review`)**: If NOT already on the target branch (PR branch or requested branch), offer worktree for isolated review
-2. **Feature Work (`/workflows:work`)**: Always ask if user wants parallel worktree or live branch work
+1. **Code Review (`/ce:review`)**: If NOT already on the target branch (PR branch or requested branch), offer worktree for isolated review
+2. **Feature Work (`/ce:work`)**: Always ask if user wants parallel worktree or live branch work
3. **Parallel Development**: When working on multiple features simultaneously
4. **Cleanup**: After completing work in a worktree
@@ -47,7 +47,7 @@ Use this skill in these scenarios:
### In Claude Code Workflows
-The skill is automatically called from `/workflows:review` and `/workflows:work` commands:
+The skill is automatically called from `/ce:review` and `/ce:work` commands:
```
# For review: offers worktree if not on PR branch
@@ -204,7 +204,7 @@ bash ${CLAUDE_PLUGIN_ROOT}/skills/git-worktree/scripts/worktree-manager.sh clean
## Integration with Workflows
-### `/workflows:review`
+### `/ce:review`
Instead of always creating a worktree:
@@ -217,7 +217,7 @@ Instead of always creating a worktree:
- no → proceed with PR diff on current branch
```
-### `/workflows:work`
+### `/ce:work`
Always offer choice:
diff --git a/plugins/compound-engineering/skills/setup/SKILL.md b/plugins/compound-engineering/skills/setup/SKILL.md
index 239739a..736d254 100644
--- a/plugins/compound-engineering/skills/setup/SKILL.md
+++ b/plugins/compound-engineering/skills/setup/SKILL.md
@@ -6,7 +6,7 @@ disable-model-invocation: true
# Compound Engineering Setup
-Interactive setup for `compound-engineering.local.md` — configures which agents run during `/workflows:review` and `/workflows:work`.
+Interactive setup for `compound-engineering.local.md` — configures which agents run during `/ce:review` and `/ce:work`.
## Step 1: Check Existing Config
@@ -145,7 +145,7 @@ plan_review_agents: [{computed plan agent list}]
# Review Context
Add project-specific review instructions here.
-These notes are passed to all review agents during /workflows:review and /workflows:work.
+These notes are passed to all review agents during /ce:review and /ce:work.
Examples:
- "We use Turbo Frames heavily — check for frame-busting issues"