diff --git a/plugins/compound-engineering/commands/workflows/plan.md b/plugins/compound-engineering/commands/workflows/plan.md
deleted file mode 100644
index f348ccf..0000000
--- a/plugins/compound-engineering/commands/workflows/plan.md
+++ /dev/null
@@ -1,571 +0,0 @@
----
-name: workflows:plan
-description: Transform feature descriptions into well-structured project plans following conventions
-argument-hint: "[feature description, bug report, or improvement idea]"
----
-
-# Create a plan for a new feature or bug fix
-
-## Introduction
-
-**Note: The current year is 2026.** Use this when dating plans and searching for recent documentation.
-
-Transform feature descriptions, bug reports, or improvement ideas into well-structured markdown plan files that follow project conventions and best practices. This command provides flexible detail levels to match your needs.
-
-## Feature Description
-
- #$ARGUMENTS
-
-**If the feature description above is empty, ask the user:** "What would you like to plan? Please describe the feature, bug fix, or improvement you have in mind."
-
-Do not proceed until you have a clear feature description from the user.
-
-### 0. Idea Refinement
-
-**Check for brainstorm output first:**
-
-Before asking questions, look for recent brainstorm documents in `docs/brainstorms/` that match this feature:
-
-```bash
-ls -la docs/brainstorms/*.md 2>/dev/null | head -10
-```
-
-**Relevance criteria:** A brainstorm is relevant if:
-- The topic (from filename or YAML frontmatter) semantically matches the feature description
-- Created within the last 14 days
-- If multiple candidates match, use the most recent one
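-
-The recency check can be sketched with `find`, using file modification time as a proxy for creation date (a sketch, assuming brainstorms live in `docs/brainstorms/` as above):
-
-```bash
-# Brainstorm documents touched in the last 14 days
-find docs/brainstorms -name '*.md' -mtime -14 2>/dev/null | head -5
-```
-
-Topic matching still requires reading the filename or YAML frontmatter; the date filter only narrows the candidate set.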
-
-**If a relevant brainstorm exists:**
-1. Read the brainstorm document
-2. Announce: "Found brainstorm from [date]: [topic]. Using as context for planning."
-3. Extract key decisions, chosen approach, and open questions
-4. **Skip the idea refinement questions below** - the brainstorm already answered WHAT to build
-5. Use brainstorm decisions as input to the research phase
-
-**If multiple brainstorms could match:**
-Use **AskUserQuestion tool** to ask which brainstorm to use, or whether to proceed without one.
-
-**If no brainstorm found (or not relevant), run idea refinement:**
-
-Refine the idea through collaborative dialogue using the **AskUserQuestion tool**:
-
-- Ask questions one at a time to understand the idea fully
-- Prefer multiple choice questions when natural options exist
-- Focus on understanding: purpose, constraints, and success criteria
-- Continue until the idea is clear OR user says "proceed"
-
-**Gather signals for research decision.** During refinement, note:
-
-- **User's familiarity**: Do they know the codebase patterns? Are they pointing to examples?
-- **User's intent**: Speed vs thoroughness? Exploration vs execution?
-- **Topic risk**: Security, payments, external APIs warrant more caution
-- **Uncertainty level**: Is the approach clear or open-ended?
-
-**Skip option:** If the feature description is already detailed, offer:
-"Your description is clear. Should I proceed with research, or would you like to refine it further?"
-
-## Main Tasks
-
-### 1. Local Research (Always Runs - Parallel)
-
-
-First, I need to understand the project's conventions, existing patterns, and any documented learnings. This is fast and local - it informs whether external research is needed.
-
-
-Run these agents **in parallel** to gather local context:
-
-- Task repo-research-analyst(feature_description)
-- Task learnings-researcher(feature_description)
-
-**What to look for:**
-- **Repo research:** existing patterns, CLAUDE.md guidance, technology familiarity, pattern consistency
-- **Learnings:** documented solutions in `docs/solutions/` that might apply (gotchas, patterns, lessons learned)
-
-These findings inform the next step.
-
-### 1.5. Research Decision
-
-Based on signals from Step 0 and findings from Step 1, decide on external research.
-
-**High-risk topics → always research.** Security, payments, external APIs, data privacy. The cost of missing something is too high. This takes precedence over speed signals.
-
-**Strong local context → skip external research.** Codebase has good patterns, CLAUDE.md has guidance, user knows what they want. External research adds little value.
-
-**Uncertainty or unfamiliar territory → research.** User is exploring, codebase has no examples, new technology. External perspective is valuable.
-
-**Announce the decision and proceed.** Brief explanation, then continue. User can redirect if needed.
-
-Examples:
-- "Your codebase has solid patterns for this. Proceeding without external research."
-- "This involves payment processing, so I'll research current best practices first."
-
-### 1.5b. External Research (Conditional)
-
-**Only run if Step 1.5 indicates external research is valuable.**
-
-Run these agents in parallel:
-
-- Task best-practices-researcher(feature_description)
-- Task framework-docs-researcher(feature_description)
-
-### 1.6. Consolidate Research
-
-After all research steps complete, consolidate findings:
-
-- Document relevant file paths from repo research (e.g., `app/services/example_service.rb:42`)
-- **Include relevant institutional learnings** from `docs/solutions/` (key insights, gotchas to avoid)
-- Note external documentation URLs and best practices (if external research was done)
-- List related issues or PRs discovered
-- Capture CLAUDE.md conventions
-
-**Optional validation:** Briefly summarize findings and ask if anything looks off or missing before proceeding to planning.
-
-### 2. Issue Planning & Structure
-
-
-Think like a product manager: what would make this issue clear and actionable? Consider multiple perspectives.
-
-
-**Title & Categorization:**
-
-- [ ] Draft clear, searchable issue title using conventional format (e.g., `feat: Add user authentication`, `fix: Cart total calculation`)
-- [ ] Determine issue type: enhancement, bug, refactor
-- [ ] Convert title to filename: add today's date prefix, strip prefix colon, kebab-case, add `-plan` suffix
- - Example: `feat: Add User Authentication` → `2026-01-21-feat-add-user-authentication-plan.md`
- - Keep it descriptive (3-5 words after prefix) so plans are findable by context
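-
-The conversion above can be sketched in shell (a sketch; the slug rules are inferred from the examples):
-
-```bash
-title="feat: Add User Authentication"
-# Lowercase, collapse non-alphanumeric runs to hyphens, trim stray hyphens
-slug=$(echo "$title" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g')
-echo "docs/plans/$(date +%F)-${slug}-plan.md"
-# e.g. docs/plans/2026-01-21-feat-add-user-authentication-plan.md
-```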
-
-**Stakeholder Analysis:**
-
-- [ ] Identify who will be affected by this issue (end users, developers, operations)
-- [ ] Consider implementation complexity and required expertise
-
-**Content Planning:**
-
-- [ ] Choose appropriate detail level based on issue complexity and audience
-- [ ] List all necessary sections for the chosen template
-- [ ] Gather supporting materials (error logs, screenshots, design mockups)
-- [ ] Prepare code examples or reproduction steps if applicable; name the mock filenames in the lists
-
-### 3. SpecFlow Analysis
-
-After planning the issue structure, run SpecFlow Analyzer to validate and refine the feature specification:
-
-- Task spec-flow-analyzer(feature_description, research_findings)
-
-**SpecFlow Analyzer Output:**
-
-- [ ] Review SpecFlow analysis results
-- [ ] Incorporate any identified gaps or edge cases into the issue
-- [ ] Update acceptance criteria based on SpecFlow findings
-
-### 4. Choose Implementation Detail Level
-
-Select how comprehensive you want the issue to be; simpler is usually better.
-
-#### 📄 MINIMAL (Quick Issue)
-
-**Best for:** Simple bugs, small improvements, clear features
-
-**Includes:**
-
-- Problem statement or feature description
-- Basic acceptance criteria
-- Essential context only
-
-**Structure:**
-
-````markdown
----
-title: [Issue Title]
-type: [feat|fix|refactor]
-status: active
-date: YYYY-MM-DD
----
-
-# [Issue Title]
-
-[Brief problem/feature description]
-
-## Acceptance Criteria
-
-- [ ] Core requirement 1
-- [ ] Core requirement 2
-
-## Context
-
-[Any critical information]
-
-## MVP
-
-### test.rb
-
-```ruby
-class Test
-  def initialize
-    @name = "test"
-  end
-end
-```
-
-## References
-
-- Related issue: #[issue_number]
-- Documentation: [relevant_docs_url]
-````
-
-#### 📋 MORE (Standard Issue)
-
-**Best for:** Most features, complex bugs, team collaboration
-
-**Includes everything from MINIMAL plus:**
-
-- Detailed background and motivation
-- Technical considerations
-- Success metrics
-- Dependencies and risks
-- Basic implementation suggestions
-
-**Structure:**
-
-```markdown
----
-title: [Issue Title]
-type: [feat|fix|refactor]
-status: active
-date: YYYY-MM-DD
----
-
-# [Issue Title]
-
-## Overview
-
-[Comprehensive description]
-
-## Problem Statement / Motivation
-
-[Why this matters]
-
-## Proposed Solution
-
-[High-level approach]
-
-## Technical Considerations
-
-- Architecture impacts
-- Performance implications
-- Security considerations
-
-## Acceptance Criteria
-
-- [ ] Detailed requirement 1
-- [ ] Detailed requirement 2
-- [ ] Testing requirements
-
-## Success Metrics
-
-[How we measure success]
-
-## Dependencies & Risks
-
-[What could block or complicate this]
-
-## References & Research
-
-- Similar implementations: [file_path:line_number]
-- Best practices: [documentation_url]
-- Related PRs: #[pr_number]
-```
-
-#### 📚 A LOT (Comprehensive Issue)
-
-**Best for:** Major features, architectural changes, complex integrations
-
-**Includes everything from MORE plus:**
-
-- Detailed implementation plan with phases
-- Alternative approaches considered
-- Extensive technical specifications
-- Resource requirements and timeline
-- Future considerations and extensibility
-- Risk mitigation strategies
-- Documentation requirements
-
-**Structure:**
-
-```markdown
----
-title: [Issue Title]
-type: [feat|fix|refactor]
-status: active
-date: YYYY-MM-DD
----
-
-# [Issue Title]
-
-## Overview
-
-[Executive summary]
-
-## Problem Statement
-
-[Detailed problem analysis]
-
-## Proposed Solution
-
-[Comprehensive solution design]
-
-## Technical Approach
-
-### Architecture
-
-[Detailed technical design]
-
-### Implementation Phases
-
-#### Phase 1: [Foundation]
-
-- Tasks and deliverables
-- Success criteria
-- Estimated effort
-
-#### Phase 2: [Core Implementation]
-
-- Tasks and deliverables
-- Success criteria
-- Estimated effort
-
-#### Phase 3: [Polish & Optimization]
-
-- Tasks and deliverables
-- Success criteria
-- Estimated effort
-
-## Alternative Approaches Considered
-
-[Other solutions evaluated and why rejected]
-
-## Acceptance Criteria
-
-### Functional Requirements
-
-- [ ] Detailed functional criteria
-
-### Non-Functional Requirements
-
-- [ ] Performance targets
-- [ ] Security requirements
-- [ ] Accessibility standards
-
-### Quality Gates
-
-- [ ] Test coverage requirements
-- [ ] Documentation completeness
-- [ ] Code review approval
-
-## Success Metrics
-
-[Detailed KPIs and measurement methods]
-
-## Dependencies & Prerequisites
-
-[Detailed dependency analysis]
-
-## Risk Analysis & Mitigation
-
-[Comprehensive risk assessment]
-
-## Resource Requirements
-
-[Team, time, infrastructure needs]
-
-## Future Considerations
-
-[Extensibility and long-term vision]
-
-## Documentation Plan
-
-[What docs need updating]
-
-## References & Research
-
-### Internal References
-
-- Architecture decisions: [file_path:line_number]
-- Similar features: [file_path:line_number]
-- Configuration: [file_path:line_number]
-
-### External References
-
-- Framework documentation: [url]
-- Best practices guide: [url]
-- Industry standards: [url]
-
-### Related Work
-
-- Previous PRs: #[pr_numbers]
-- Related issues: #[issue_numbers]
-- Design documents: [links]
-```
-
-### 5. Issue Creation & Formatting
-
-
-Apply best practices for clarity and actionability, making the issue easy to scan and understand.
-
-
-**Content Formatting:**
-
-- [ ] Use clear, descriptive headings with proper hierarchy (##, ###)
-- [ ] Include code examples in triple backticks with language syntax highlighting
-- [ ] Add screenshots/mockups if UI-related (drag & drop or use image hosting)
-- [ ] Use task lists (- [ ]) for trackable items that can be checked off
-- [ ] Add collapsible sections for lengthy logs or optional details using `<details>` tags
-- [ ] Apply appropriate emoji for visual scanning (🐛 bug, ✨ feature, 📚 docs, ♻️ refactor)
-
-**Cross-Referencing:**
-
-- [ ] Link to related issues/PRs using #number format
-- [ ] Reference specific commits with SHA hashes when relevant
-- [ ] Link to code using GitHub's permalink feature (press 'y' for permanent link)
-- [ ] Mention relevant team members with @username if needed
-- [ ] Add links to external resources with descriptive text
-
-**Code & Examples:**
-
-````markdown
-# Good example with syntax highlighting and line references
-
-```ruby
-# app/services/user_service.rb:42
-def process_user(user)
-  # Implementation here
-end
-```
-
-# Collapsible error logs
-
-<details>
-<summary>Full error stacktrace</summary>
-
-`Error details here...`
-
-</details>
-````
-
-**AI-Era Considerations:**
-
-- [ ] Account for accelerated development with AI pair programming
-- [ ] Include prompts or instructions that worked well during research
-- [ ] Note which AI tools were used for initial exploration (Claude, Copilot, etc.)
-- [ ] Emphasize comprehensive testing given rapid implementation
-- [ ] Document any AI-generated code that needs human review
-
-### 6. Final Review & Submission
-
-**Naming Scrutiny (REQUIRED for any plan that introduces new interfaces):**
-
-When the plan proposes new functions, classes, variables, modules, API fields, or database columns, scrutinize every name:
-
-| # | Check | Question |
-|---|-------|----------|
-| 1 | **Caller's perspective** | Does the name describe what it does, not how? |
-| 2 | **No false qualifiers** | Does every `_with_X` / `_and_X` reflect a real choice? |
-| 3 | **Visibility matches intent** | Should private helpers be private? |
-| 4 | **Consistent convention** | Does the pattern match existing codebase conventions? |
-| 5 | **Precise, not vague** | Could this name apply to ten different things? (`data`, `manager`, `handler` = red flags) |
-| 6 | **Complete words** | No ambiguous abbreviations? |
-| 7 | **Correct part of speech** | Functions = verbs, classes = nouns, booleans = assertions? |
-
-Bad names in plans become bad names in code. Catching them here is cheaper than catching them in review.
-
-**Pre-submission Checklist:**
-
-- [ ] Title is searchable and descriptive
-- [ ] Labels accurately categorize the issue
-- [ ] All template sections are complete
-- [ ] Links and references are working
-- [ ] Acceptance criteria are measurable
-- [ ] All proposed names pass the naming scrutiny checklist above
-- [ ] Add file names to pseudo-code examples and todo lists
-- [ ] Add an ERD mermaid diagram if applicable for new model changes
-
-## Output Format
-
-**Filename:** Use the date and kebab-case filename from Step 2 Title & Categorization.
-
-```
-docs/plans/YYYY-MM-DD-<type>-<description>-plan.md
-```
-
-Examples:
-- ✅ `docs/plans/2026-01-15-feat-user-authentication-flow-plan.md`
-- ✅ `docs/plans/2026-02-03-fix-checkout-race-condition-plan.md`
-- ✅ `docs/plans/2026-03-10-refactor-api-client-extraction-plan.md`
-- ❌ `docs/plans/2026-01-15-feat-thing-plan.md` (not descriptive - what "thing"?)
-- ❌ `docs/plans/2026-01-15-feat-new-feature-plan.md` (too vague - what feature?)
-- ❌ `docs/plans/2026-01-15-feat: user auth-plan.md` (invalid characters - colon and space)
-- ❌ `docs/plans/feat-user-auth-plan.md` (missing date prefix)
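-
-A quick sanity check for the pattern can be sketched as a bash regex test (a sketch; the allowed types mirror the `feat|fix|refactor` set from the frontmatter):
-
-```bash
-name="2026-01-15-feat-user-authentication-flow-plan.md"
-if [[ "$name" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}-(feat|fix|refactor)(-[a-z0-9]+)+-plan\.md$ ]]; then
-  echo "valid"
-else
-  echo "invalid: $name"
-fi
-```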
-
-## Post-Generation Options
-
-After writing the plan file, use the **AskUserQuestion tool** to present these options:
-
-**Question:** "Plan ready at `docs/plans/YYYY-MM-DD-<type>-<description>-plan.md`. What would you like to do next?"
-
-**Options:**
-1. **Open plan in editor** - Open the plan file for review
-2. **Run `/deepen-plan`** - Enhance each section with parallel research agents (best practices, performance, UI)
-3. **Run `/technical_review`** - Technical feedback from code-focused reviewers (Tiangolo, Kieran-Python, Simplicity)
-4. **Review and refine** - Improve the document through structured self-review
-5. **Start `/workflows:work`** - Begin implementing this plan locally
-6. **Start `/workflows:work` on remote** - Begin implementing in Claude Code on the web (use `&` to run in background)
-7. **Create Issue** - Create issue in project tracker (GitHub/Linear)
-
-Based on selection:
-- **Open plan in editor** → Run `open docs/plans/<filename>.md` to open the file in the user's default editor
-- **`/deepen-plan`** → Call the /deepen-plan command with the plan file path to enhance with research
-- **`/technical_review`** → Call the /technical_review command with the plan file path
-- **Review and refine** → Load `document-review` skill.
-- **`/workflows:work`** → Call the /workflows:work command with the plan file path
-- **`/workflows:work` on remote** → Run `/workflows:work docs/plans/<filename>.md &` to start work in the background for Claude Code web
-- **Create Issue** → See "Issue Creation" section below
-- **Other** (automatically provided) → Accept free text for rework or specific changes
-
-**Note:** If running `/workflows:plan` with ultrathink enabled, automatically run `/deepen-plan` after plan creation for maximum depth and grounding.
-
-Loop back to the options after "Review and refine" or "Other" changes until the user selects `/workflows:work` or `/technical_review`.
-
-## Issue Creation
-
-When user selects "Create Issue", detect their project tracker from CLAUDE.md:
-
-1. **Check for tracker preference** in user's CLAUDE.md (global or project):
- - Look for `project_tracker: github` or `project_tracker: linear`
- - Or look for mentions of "GitHub Issues" or "Linear" in their workflow section
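-
-The preference check can be sketched as a grep over the likely locations (a sketch; the `~/.claude/CLAUDE.md` path for the global file is an assumption):
-
-```bash
-# First match wins: project file, then global file
-grep -hE '^project_tracker:[[:space:]]*(github|linear)' CLAUDE.md ~/.claude/CLAUDE.md 2>/dev/null | head -1
-```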
-
-2. **If GitHub:**
-
- Use the title and type from Step 2 (already in context - no need to re-read the file):
-
-   ```bash
-   gh issue create --title "<type>: <title>" --body-file <plan_file>
-   ```
-
-3. **If Linear:**
-
-   ```bash
-   linear issue create --title "<title>" --description "$(cat <plan_file>)"
-   ```
-
-4. **If no tracker configured:**
- Ask user: "Which project tracker do you use? (GitHub/Linear/Other)"
- - Suggest adding `project_tracker: github` or `project_tracker: linear` to their CLAUDE.md
-
-5. **After creation:**
- - Display the issue URL
- - Ask if they want to proceed to `/workflows:work` or `/technical_review`
-
-NEVER CODE! Just research and write the plan.
diff --git a/plugins/compound-engineering/commands/workflows/review.md b/plugins/compound-engineering/commands/workflows/review.md
deleted file mode 100644
index be957c4..0000000
--- a/plugins/compound-engineering/commands/workflows/review.md
+++ /dev/null
@@ -1,616 +0,0 @@
----
-name: workflows:review
-description: Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and worktrees
-argument-hint: "[PR number, GitHub URL, branch name, or latest]"
----
-
-# Review Command
-
- Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and Git worktrees for deep local inspection.
-
-## Introduction
-
-You are a Senior Code Review Architect with expertise in security, performance, architecture, and quality assurance.
-
-## Prerequisites
-
-
-- Git repository with GitHub CLI (`gh`) installed and authenticated
-- Clean main/master branch
-- Proper permissions to create worktrees and access the repository
-- For document reviews: Path to a markdown file or document
-
-
-## Main Tasks
-
-### 1. Determine Review Target & Setup (ALWAYS FIRST)
-
- #$ARGUMENTS
-
-
-First, I need to determine the review target type and set up the code for analysis.
-
-
-#### Immediate Actions:
-
-
-
-- [ ] Determine review type: PR number (numeric), GitHub URL, file path (.md), or empty (current branch)
-- [ ] Check current git branch
-- [ ] If ALREADY on the target branch (PR branch, requested branch name, or the branch already checked out for review) → proceed with analysis on current branch
-- [ ] If on a DIFFERENT branch than the review target → offer a worktree for isolated review: call `skill: git-worktree` with the branch name
-- [ ] Fetch PR metadata using `gh pr view --json` for title, body, files, linked issues
-- [ ] Set up language-specific analysis tools
-- [ ] Prepare security scanning environment
-- [ ] Make sure we are on the branch under review: use `gh pr checkout` to switch to it, or check out the branch manually
-
-Ensure that the code is ready for analysis (either in worktree or on current branch). ONLY then proceed to the next step.
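-
-The branch check can be sketched as (assuming `$pr_number` holds the PR number when one was given):
-
-```bash
-current=$(git rev-parse --abbrev-ref HEAD)
-target=$(gh pr view "$pr_number" --json headRefName -q .headRefName)
-if [ "$current" != "$target" ]; then
-  gh pr checkout "$pr_number"
-fi
-```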
-
-
-
-#### Protected Artifacts
-
-
-The following paths are compound-engineering pipeline artifacts and must never be flagged for deletion, removal, or gitignore by any review agent:
-
-- `docs/plans/*.md` — Plan files created by `/workflows:plan`. These are living documents that track implementation progress (checkboxes are checked off by `/workflows:work`).
-- `docs/solutions/*.md` — Solution documents created during the pipeline.
-
-If a review agent flags any file in these directories for cleanup or removal, discard that finding during synthesis. Do not create a todo for it.
-
-
-#### Load Review Agents
-
-Read `compound-engineering.local.md` in the project root. If found, use `review_agents` from YAML frontmatter. If the markdown body contains review context, pass it to each agent as additional instructions.
-
-If no settings file exists, invoke the `setup` skill to create one. Then read the newly created file and continue.
-
-#### Parallel Agents to review the PR:
-
-
-
-Run all configured review agents in parallel using Task tool. For each agent in the `review_agents` list:
-
-```
-Task {agent-name}(PR content + review context from settings body)
-```
-
-Additionally, always run these regardless of settings:
-- Task agent-native-reviewer(PR content) - Verify new features are agent-accessible
-- Task learnings-researcher(PR content) - Search docs/solutions/ for past issues related to this PR's modules and patterns
-
-
-
-#### Conditional Agents (Run if applicable):
-
-
-
-These agents are run ONLY when the PR matches specific criteria. Check the PR files list to determine if they apply:
-
-**MIGRATIONS: If PR contains database migrations, schema.rb, or data backfills:**
-
-- Task schema-drift-detector(PR content) - Detects unrelated schema.rb changes by cross-referencing against included migrations (run FIRST)
-- Task data-migration-expert(PR content) - Validates ID mappings match production, checks for swapped values, verifies rollback safety
-- Task deployment-verification-agent(PR content) - Creates Go/No-Go deployment checklist with SQL verification queries
-
-**When to run:**
-- PR includes files matching `db/migrate/*.rb` or `db/schema.rb`
-- PR modifies columns that store IDs, enums, or mappings
-- PR includes data backfill scripts or rake tasks
-- PR title/body mentions: migration, backfill, data transformation, ID mapping
-
-**What these agents check:**
-- `schema-drift-detector`: Cross-references schema.rb changes against PR migrations to catch unrelated columns/indexes from local database state
-- `data-migration-expert`: Verifies hard-coded mappings match production reality (prevents swapped IDs), checks for orphaned associations, validates dual-write patterns
-- `deployment-verification-agent`: Produces executable pre/post-deploy checklists with SQL queries, rollback procedures, and monitoring plans
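-
-Whether the migration agents apply can be checked against the PR's file list, sketched here with `gh`'s built-in `--jq` filter:
-
-```bash
-if gh pr view "$pr_number" --json files -q '.files[].path' \
-    | grep -qE '^db/(migrate/.*\.rb|schema\.rb)$'; then
-  echo "Run migration review agents"
-fi
-```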
-
-
-
-### 2. Ultra-Thinking Deep Dive Phases
-
- For each phase below, spend maximum cognitive effort. Think step by step. Consider all angles. Question assumptions. Then bring all reviews together in a synthesis for the user.
-
-
-Expected output: a complete system context map with component interactions.
-
-
-#### Phase 3: Stakeholder Perspective Analysis
-
- ULTRA-THINK: Put yourself in each stakeholder's shoes. What matters to them? What are their pain points?
-
-
-
-1. **Developer Perspective**
-
- - How easy is this to understand and modify?
- - Are the APIs intuitive?
- - Is debugging straightforward?
- - Can I test this easily?
-
-2. **Operations Perspective**
-
- - How do I deploy this safely?
- - What metrics and logs are available?
- - How do I troubleshoot issues?
- - What are the resource requirements?
-
-3. **End User Perspective**
-
- - Is the feature intuitive?
- - Are error messages helpful?
- - Is performance acceptable?
- - Does it solve my problem?
-
-4. **Security Team Perspective**
-
- - What's the attack surface?
- - Are there compliance requirements?
- - How is data protected?
- - What are the audit capabilities?
-
-5. **Business Perspective**
- - What's the ROI?
- - Are there legal/compliance risks?
- - How does this affect time-to-market?
- - What's the total cost of ownership?
-
-#### Phase 4: Scenario Exploration
-
- ULTRA-THINK: Explore edge cases and failure scenarios. What could go wrong? How does the system behave under stress?
-
-
-
-- [ ] **Happy Path**: Normal operation with valid inputs
-- [ ] **Invalid Inputs**: Null, empty, malformed data
-- [ ] **Boundary Conditions**: Min/max values, empty collections
-- [ ] **Concurrent Access**: Race conditions, deadlocks
-- [ ] **Scale Testing**: 10x, 100x, 1000x normal load
-- [ ] **Network Issues**: Timeouts, partial failures
-- [ ] **Resource Exhaustion**: Memory, disk, connections
-- [ ] **Security Attacks**: Injection, overflow, DoS
-- [ ] **Data Corruption**: Partial writes, inconsistency
-- [ ] **Cascading Failures**: Downstream service issues
-
-### 3. Multi-Angle Review Perspectives
-
-#### Technical Excellence Angle
-
-- Code craftsmanship evaluation
-- Engineering best practices
-- Technical documentation quality
-- Tooling and automation assessment
-- **Naming accuracy** (see Naming Scrutiny below)
-
-#### Naming Scrutiny (REQUIRED)
-
-Every name introduced or modified in the PR must pass these checks:
-
-| # | Check | Question |
-|---|-------|----------|
-| 1 | **Caller's perspective** | Does the name describe what it does, not how? |
-| 2 | **No false qualifiers** | Does every `_with_X` / `_and_X` reflect a real choice? |
-| 3 | **Visibility matches intent** | Are private helpers actually private? |
-| 4 | **Consistent convention** | Does the pattern match every other instance in the codebase? |
-| 5 | **Precise, not vague** | Could this name apply to ten different things? (`data`, `manager`, `handler` = red flags) |
-| 6 | **Complete words** | No ambiguous abbreviations? (`auth` = authentication or authorization?) |
-| 7 | **Correct part of speech** | Functions = verbs, classes = nouns, booleans = assertions? |
-
-**Common anti-patterns to flag:**
-- False optionality: `save_with_validation()` when validation is mandatory
-- Leaked implementation: `create_batch_with_items()` when callers just need `create_batch()`
-- Type encoding: `word_string`, `new_hash` instead of domain terms
-- Structural naming: `input`, `output`, `result` instead of what they contain
-- Doppelgangers: names differing by one letter (`useProfileQuery` vs `useProfilesQuery`)
-
-Include naming findings in the synthesized review. Flag as P2 (Important) unless the name is actively misleading about behavior (P1).
-
-#### Business Value Angle
-
-- Feature completeness validation
-- Performance impact on users
-- Cost-benefit analysis
-- Time-to-market considerations
-
-#### Risk Management Angle
-
-- Security risk assessment
-- Operational risk evaluation
-- Compliance risk verification
-- Technical debt accumulation
-
-#### Team Dynamics Angle
-
-- Code review etiquette
-- Knowledge sharing effectiveness
-- Collaboration patterns
-- Mentoring opportunities
-
-### 4. Simplification and Minimalism Review
-
-Run Task code-simplicity-reviewer(PR content) to see if we can simplify the code.
-
-### 5. Findings Synthesis and Todo Creation Using file-todos Skill
-
- ALL findings MUST be stored in the todos/ directory using the file-todos skill. Create todo files immediately after synthesis - do NOT present findings for user approval first. Use the skill for structured todo management.
-
-#### Step 1: Synthesize All Findings
-
-
-Consolidate all agent reports into a categorized list of findings.
-Remove duplicates, prioritize by severity and impact.
-
-
-
-
-- [ ] Collect findings from all parallel agents
-- [ ] Surface learnings-researcher results: if past solutions are relevant, flag them as "Known Pattern" with links to docs/solutions/ files
-- [ ] Discard any findings that recommend deleting or gitignoring files in `docs/plans/` or `docs/solutions/` (see Protected Artifacts above)
-- [ ] Categorize by type: security, performance, architecture, quality, etc.
-- [ ] Assign severity levels: 🔴 CRITICAL (P1), 🟡 IMPORTANT (P2), 🔵 NICE-TO-HAVE (P3)
-- [ ] Remove duplicate or overlapping findings
-- [ ] Estimate effort for each finding (Small/Medium/Large)
-
-
-
-#### Step 2: Pressure Test Each Finding
-
-
-
-**IMPORTANT: Treat agent findings as suggestions, not mandates.**
-
-Not all findings are equally valid. Apply engineering judgment before creating todos. The goal is to make the right call for the codebase, not rubber-stamp every suggestion.
-
-**For each finding, verify:**
-
-| Check | Question |
-|-------|----------|
-| **Code** | Does the concern actually apply to this specific code? |
-| **Tests** | Are there existing tests that already cover this case? |
-| **Usage** | How is this code used in practice? Does the concern matter? |
-| **Compatibility** | Would the suggested change break anything? |
-| **Prior Decisions** | Was this intentional? Is there a documented reason? |
-| **Cost vs Benefit** | Is the fix worth the effort and risk? |
-
-**Assess each finding:**
-
-| Assessment | Meaning |
-|------------|---------|
-| **Clear & Correct** | Valid concern, well-reasoned, applies here |
-| **Unclear** | Ambiguous or missing context |
-| **Likely Incorrect** | Agent misunderstands code, context, or requirements |
-| **YAGNI** | Over-engineering, premature abstraction, no clear benefit |
-| **Duplicate** | Already covered by another finding (merge into existing) |
-
-**IMPORTANT: ALL findings become todos.** Never drop agent feedback - include the pressure test assessment IN each todo so `/triage` can use it.
-
-Each todo will include:
-- The assessment (Clear & Correct / Unclear / Likely Incorrect / YAGNI)
-- The verification results (what was checked)
-- Technical justification (why valid, or why you think it should be skipped)
-- Recommended action for triage (Fix now / Clarify / Push back / Skip)
-
-**Provide technical justification for all assessments:**
-- Don't just label - explain WHY with specific reasoning
-- Reference codebase constraints, requirements, or trade-offs
-- Example: "This abstraction would be YAGNI - we only have one implementation and no plans for variants. Adding it now increases complexity without clear benefit."
-
-The human reviews during `/triage` and makes the final call.
-
-
-
-#### Step 3: Create Todo Files Using file-todos Skill
-
- Use the file-todos skill to create todo files for ALL findings immediately. Do NOT present findings one-by-one asking for user approval. Create all todo files in parallel using the skill, then summarize results to user.
-
-**Implementation Options:**
-
-**Option A: Direct File Creation (Fast)**
-
-- Create todo files directly using Write tool
-- All findings in parallel for speed
-- Invoke `Skill: "compound-engineering:file-todos"` and read the template from its assets directory
-- Follow naming convention: `{issue_id}-pending-{priority}-{description}.md`
-
-**Option B: Sub-Agents in Parallel (Recommended for Scale)** For large PRs with 15+ findings, use sub-agents to create finding files in parallel:
-
-```bash
-# Launch multiple finding-creator agents in parallel
-Task finding-creator(first finding)
-Task finding-creator(second finding)
-Task finding-creator(third finding)
-# etc. for each finding
-```
-
-Sub-agents can:
-
-- Process multiple findings simultaneously
-- Write detailed todo files with all sections filled
-- Organize findings by severity
-- Create comprehensive Proposed Solutions
-- Add acceptance criteria and work logs
-- Complete much faster than sequential processing
-
-**Execution Strategy:**
-
-1. Assign each finding a severity (P1/P2/P3) and group by severity
-2. Launch 3 parallel sub-agents (one per severity level)
-3. Each sub-agent creates its batch of todos using the file-todos skill
-4. Consolidate results and present a summary
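-
-A hypothetical launch of that per-severity batch (prompts are illustrative):
-
-```
-Task() - "Create todos for all P1 findings using the file-todos skill"
-Task() - "Create todos for all P2 findings using the file-todos skill"
-Task() - "Create todos for all P3 findings using the file-todos skill"
-```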
-
-**Process (Using file-todos Skill):**
-
-1. For each finding:
-
- - Determine severity (P1/P2/P3)
- - Write detailed Problem Statement and Findings
- - Create 2-3 Proposed Solutions with pros/cons/effort/risk
- - Estimate effort (Small/Medium/Large)
- - Add acceptance criteria and work log
-
-2. Use file-todos skill for structured todo management:
-
- ```
- Skill: "compound-engineering:file-todos"
- ```
-
- The skill provides:
-
- - Template at `./assets/todo-template.md` (relative to skill directory)
- - Naming convention: `{issue_id}-{status}-{priority}-{description}.md`
- - YAML frontmatter structure: status, priority, issue_id, tags, dependencies
- - All required sections: Problem Statement, Findings, Solutions, etc.
-
-3. Create todo files in parallel:
-
-   ```
- {next_id}-pending-{priority}-{description}.md
- ```
-
-4. Examples:
-
- ```
- 001-pending-p1-path-traversal-vulnerability.md
- 002-pending-p1-api-response-validation.md
- 003-pending-p2-concurrency-limit.md
- 004-pending-p3-unused-parameter.md
- ```
-
-5. Follow template structure from file-todos skill (read `./assets/todo-template.md` from skill directory)
-
-**Todo File Structure (from template):**
-
-Each todo must include:
-
-- **YAML frontmatter**: status, priority, issue_id, tags, dependencies
-- **Problem Statement**: What's broken/missing, why it matters
-- **Assessment (Pressure Test)**: Verification results and engineering judgment
- - Assessment: Clear & Correct / Unclear / YAGNI
- - Verified: Code, Tests, Usage, Prior Decisions
- - Technical Justification: Why this finding is valid (or why skipped)
-- **Findings**: Discoveries from agents with evidence/location
-- **Proposed Solutions**: 2-3 options, each with pros/cons/effort/risk
-- **Recommended Action**: (Filled during triage, leave blank initially)
-- **Technical Details**: Affected files, components, database changes
-- **Acceptance Criteria**: Testable checklist items
-- **Work Log**: Dated record with actions and learnings
-- **Resources**: Links to PR, issues, documentation, similar patterns
-
-**File naming convention:**
-
-```
-{issue_id}-{status}-{priority}-{description}.md
-
-Examples:
-- 001-pending-p1-security-vulnerability.md
-- 002-pending-p2-performance-optimization.md
-- 003-pending-p3-code-cleanup.md
-```
-
-**Status values:**
-
-- `pending` - New findings, needs triage/decision
-- `ready` - Approved by manager, ready to work
-- `complete` - Work finished
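-
-For example, promoting a todo after triage is just a rename (filename is illustrative):
-
-```bash
-mv todos/001-pending-p1-path-traversal-vulnerability.md \
-   todos/001-ready-p1-path-traversal-vulnerability.md
-```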
-
-**Priority values:**
-
-- `p1` - Critical (blocks merge, security/data issues)
-- `p2` - Important (should fix, architectural/performance)
-- `p3` - Nice-to-have (enhancements, cleanup)
-
-**Tagging:** Always add `code-review` tag, plus: `security`, `performance`, `architecture`, `rails`, `quality`, etc.
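-
-A hypothetical frontmatter example (values are illustrative; field names follow the file-todos template):
-
-```yaml
-status: pending
-priority: p1
-issue_id: "001"
-tags: [code-review, security]
-dependencies: []
-```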
-
-#### Step 4: Summary Report
-
-After creating all todo files, present a comprehensive summary:
-
-````markdown
-## ✅ Code Review Complete
-
-**Review Target:** PR #XXXX - [PR Title]
-**Branch:** [branch-name]
-
-### Findings Summary:
-
-- **Total Findings:** [X]
-- **🔴 CRITICAL (P1):** [count] - BLOCKS MERGE
-- **🟡 IMPORTANT (P2):** [count] - Should Fix
-- **🔵 NICE-TO-HAVE (P3):** [count] - Enhancements
-
-### Created Todo Files:
-
-**P1 - Critical (BLOCKS MERGE):**
-
-- `001-pending-p1-{finding}.md` - {description}
-- `002-pending-p1-{finding}.md` - {description}
-
-**P2 - Important:**
-
-- `003-pending-p2-{finding}.md` - {description}
-- `004-pending-p2-{finding}.md` - {description}
-
-**P3 - Nice-to-Have:**
-
-- `005-pending-p3-{finding}.md` - {description}
-
-### Review Agents Used:
-
-- kieran-python-reviewer
-- security-sentinel
-- performance-oracle
-- architecture-strategist
-- agent-native-reviewer
-- [other agents]
-
-### Assessment Summary (Pressure Test Results):
-
-All agent findings were pressure tested and included in todos:
-
-| Assessment | Count | Description |
-|------------|-------|-------------|
-| **Clear & Correct** | {X} | Valid concerns, recommend fixing |
-| **Unclear** | {X} | Need clarification before implementing |
-| **Likely Incorrect** | {X} | May misunderstand context - review during triage |
-| **YAGNI** | {X} | May be over-engineering - review during triage |
-| **Duplicate** | {X} | Merged into other findings |
-
-**Note:** All assessments are included in the todo files. Human judgment during `/triage` makes the final call on whether to accept, clarify, or reject each item.
-
-### Next Steps:
-
-1. **Address P1 Findings**: CRITICAL - must be fixed before merge
-
- - Review each P1 todo in detail
- - Implement fixes or request exemption
- - Verify fixes before merging PR
-
-2. **Triage All Todos**:
- ```bash
- ls todos/*-pending-*.md # View all pending todos
- /triage # Use slash command for interactive triage
- ```
-
-3. **Work on Approved Todos**:
-
-   ```bash
-   /resolve_todo_parallel # Fix all approved items efficiently
-   ```
-
-4. **Track Progress**:
-   - Rename file when status changes: pending → ready → complete
-   - Update Work Log as you work
-   - Commit todos: `git add todos/ && git commit -m "refactor: add code review findings"`
-
-### Severity Breakdown:
-
-**🔴 P1 (Critical - Blocks Merge):**
-
-- Security vulnerabilities
-- Data corruption risks
-- Breaking changes
-- Critical architectural issues
-
-**🟡 P2 (Important - Should Fix):**
-
-- Performance issues
-- Significant architectural concerns
-- Major code quality problems
-- Reliability issues
-
-**🔵 P3 (Nice-to-Have):**
-
-- Minor improvements
-- Code cleanup
-- Optimization opportunities
-- Documentation updates
-````
-
-### 7. End-to-End Testing (Optional)
-
-**First, detect the project type from PR files:**
-
-| Indicator | Project Type |
-|-----------|--------------|
-| `*.xcodeproj`, `*.xcworkspace`, `Package.swift` (iOS) | iOS/macOS |
-| `Gemfile`, `package.json`, `app/views/*`, `*.html.*` | Web |
-| Both iOS files AND web files | Hybrid (test both) |
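-
-A minimal shell sketch of that detection (globs are illustrative, not exhaustive):
-
-```bash
-ios_files=$(ls *.xcodeproj *.xcworkspace Package.swift 2>/dev/null)
-web_files=$(ls Gemfile package.json 2>/dev/null)
-if [ -n "$ios_files" ] && [ -n "$web_files" ]; then echo "hybrid"
-elif [ -n "$ios_files" ]; then echo "ios"
-elif [ -n "$web_files" ]; then echo "web"
-fi
-```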
-
-After presenting the Summary Report, offer appropriate testing based on project type:
-
-**For Web Projects:**
-```markdown
-**"Want to run browser tests on the affected pages?"**
-1. Yes - run `/test-browser`
-2. No - skip
-```
-
-**For iOS Projects:**
-```markdown
-**"Want to run Xcode simulator tests on the app?"**
-1. Yes - run `/xcode-test`
-2. No - skip
-```
-
-**For Hybrid Projects (e.g., Rails + Hotwire Native):**
-```markdown
-**"Want to run end-to-end tests?"**
-1. Web only - run `/test-browser`
-2. iOS only - run `/xcode-test`
-3. Both - run both commands
-4. No - skip
-```
-
-#### If User Accepts Web Testing:
-
-Spawn a subagent to run browser tests (preserves main context):
-
-```
-Task general-purpose("Run /test-browser for PR #[number]. Test all affected pages, check for console errors, handle failures by creating todos and fixing.")
-```
-
-The subagent will:
-1. Identify pages affected by the PR
-2. Navigate to each page and capture snapshots (using Playwright MCP or agent-browser CLI)
-3. Check for console errors
-4. Test critical interactions
-5. Pause for human verification on OAuth/email/payment flows
-6. Create P1 todos for any failures
-7. Fix and retry until all tests pass
-
-**Standalone:** `/test-browser [PR number]`
-
-#### If User Accepts iOS Testing:
-
-Spawn a subagent to run Xcode tests (preserves main context):
-
-```
-Task general-purpose("Run /xcode-test for scheme [name]. Build for simulator, install, launch, take screenshots, check for crashes.")
-```
-
-The subagent will:
-1. Verify XcodeBuildMCP is installed
-2. Discover project and schemes
-3. Build for iOS Simulator
-4. Install and launch app
-5. Take screenshots of key screens
-6. Capture console logs for errors
-7. Pause for human verification (Sign in with Apple, push, IAP)
-8. Create P1 todos for any failures
-9. Fix and retry until all tests pass
-
-**Standalone:** `/xcode-test [scheme]`
-
-### Important: P1 Findings Block Merge
-
-Any **🔴 P1 (CRITICAL)** findings must be addressed before merging the PR. Present these prominently and ensure they're resolved before accepting the PR.
diff --git a/plugins/compound-engineering/commands/workflows/work.md b/plugins/compound-engineering/commands/workflows/work.md
deleted file mode 100644
index 373dec0..0000000
--- a/plugins/compound-engineering/commands/workflows/work.md
+++ /dev/null
@@ -1,471 +0,0 @@
----
-name: workflows:work
-description: Execute work plans efficiently while maintaining quality and finishing features
-argument-hint: "[plan file, specification, or todo file path]"
----
-
-# Work Plan Execution Command
-
-Execute a work plan efficiently while maintaining quality and finishing features.
-
-## Introduction
-
-This command takes a work document (plan, specification, or todo file) and executes it systematically. The focus is on **shipping complete features** by understanding requirements quickly, following existing patterns, and maintaining quality throughout.
-
-## Input Document
-
- #$ARGUMENTS
-
-## Execution Workflow
-
-### Phase 1: Quick Start
-
-1. **Read Plan and Clarify**
-
- - Read the work document completely
- - Review any references or links provided in the plan
- - If anything is unclear or ambiguous, ask clarifying questions now
- - Get user approval to proceed
- - **Do not skip this** - better to ask questions now than build the wrong thing
-
-2. **Setup Environment**
-
- First, check the current branch:
-
- ```bash
- current_branch=$(git branch --show-current)
- default_branch=$(git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's@^refs/remotes/origin/@@')
-
- # Fallback if remote HEAD isn't set
- if [ -z "$default_branch" ]; then
- default_branch=$(git rev-parse --verify origin/main >/dev/null 2>&1 && echo "main" || echo "master")
- fi
- ```
-
- **If already on a feature branch** (not the default branch):
- - Ask: "Continue working on `[current_branch]`, or create a new branch?"
- - If continuing, proceed to step 3
- - If creating new, follow Option A or B below
-
- **If on the default branch**, choose how to proceed:
-
- **Option A: Create a new branch**
- ```bash
- git pull origin [default_branch]
- git checkout -b feature-branch-name
- ```
- Use a meaningful name based on the work (e.g., `feat/user-authentication`, `fix/email-validation`).
-
- **Option B: Use a worktree (recommended for parallel development)**
- ```bash
- skill: git-worktree
- # The skill will create a new branch from the default branch in an isolated worktree
- ```
-
- **Option C: Continue on the default branch**
- - Requires explicit user confirmation
- - Only proceed after user explicitly says "yes, commit to [default_branch]"
- - Never commit directly to the default branch without explicit permission
-
- **Recommendation**: Use worktree if:
- - You want to work on multiple features simultaneously
- - You want to keep the default branch clean while experimenting
- - You plan to switch between branches frequently
-
-3. **Create Todo List**
- - Use TodoWrite to break plan into actionable tasks
- - Include dependencies between tasks
- - Prioritize based on what needs to be done first
- - Include testing and quality check tasks
- - Keep tasks specific and completable
-
-### Phase 2: Execute
-
-1. **Task Execution Loop**
-
- For each task in priority order:
-
- ```
- while (tasks remain):
- - Mark task as in_progress in TodoWrite
- - Read any referenced files from the plan
- - Look for similar patterns in codebase
- - Implement following existing conventions
- - Write tests for new functionality
- - Run tests after changes
- - Mark task as completed in TodoWrite
- - Mark off the corresponding checkbox in the plan file ([ ] → [x])
- - Evaluate for incremental commit (see below)
- ```
-
- **IMPORTANT**: Always update the original plan document by checking off completed items. Use the Edit tool to change `- [ ]` to `- [x]` for each task you finish. This keeps the plan as a living document showing progress and ensures no checkboxes are left unchecked.
-
-2. **Incremental Commits**
-
- After completing each task, evaluate whether to create an incremental commit:
-
- | Commit when... | Don't commit when... |
- |----------------|---------------------|
- | Logical unit complete (model, service, component) | Small part of a larger unit |
- | Tests pass + meaningful progress | Tests failing |
- | About to switch contexts (backend → frontend) | Purely scaffolding with no behavior |
- | About to attempt risky/uncertain changes | Would need a "WIP" commit message |
-
- **Heuristic:** "Can I write a commit message that describes a complete, valuable change? If yes, commit. If the message would be 'WIP' or 'partial X', wait."
-
- **Commit workflow:**
- ```bash
- # 1. Verify tests pass (use project's test command)
- # Examples: bin/rails test, npm test, pytest, go test, etc.
-
- # 2. Stage only files related to this logical unit (not `git add .`)
-   git add [files for this unit]
-
- # 3. Commit with conventional message
- git commit -m "feat(scope): description of this unit"
- ```
-
- **Handling merge conflicts:** If conflicts arise during rebasing or merging, resolve them immediately. Incremental commits make conflict resolution easier since each commit is small and focused.
-
- **Note:** Incremental commits use clean conventional messages without attribution footers. The final Phase 4 commit/PR includes the full attribution.
-
-3. **Follow Existing Patterns**
-
- - The plan should reference similar code - read those files first
- - Match naming conventions exactly
- - Reuse existing components where possible
- - Follow project coding standards (see CLAUDE.md)
- - When in doubt, grep for similar implementations
-
-4. **Naming Scrutiny (Apply to every new name)**
-
- Before committing any new function, class, variable, module, or field name:
-
- | # | Check | Question |
- |---|-------|----------|
- | 1 | **Caller's perspective** | Does the name describe what it does, not how? |
- | 2 | **No false qualifiers** | Does every `_with_X` / `_and_X` reflect a real choice? |
- | 3 | **Visibility matches intent** | Are private helpers actually private? |
- | 4 | **Consistent convention** | Does the pattern match every other instance in the codebase? |
- | 5 | **Precise, not vague** | Could this name apply to ten different things? |
- | 6 | **Complete words** | No ambiguous abbreviations? |
- | 7 | **Correct part of speech** | Functions = verbs, classes = nouns, booleans = assertions? |
-
- **Quick validation:** Search the codebase for the naming pattern you're using. If your convention doesn't match existing instances, align with the codebase.
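-
-   For example, before introducing a new `fetch_*` helper, check which verb the codebase already uses (pattern and paths are illustrative):
-
-   ```bash
-   grep -rn "def get_" app/ lib/ | head -5
-   grep -rn "def fetch_" app/ lib/ | head -5
-   ```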
-
-5. **Test Continuously**
-
- - Run relevant tests after each significant change
- - Don't wait until the end to test
- - Fix failures immediately
- - Add new tests for new functionality
-
-6. **Figma Design Sync** (if applicable)
-
- For UI work with Figma designs:
-
- - Implement components following design specs
- - Use figma-design-sync agent iteratively to compare
- - Fix visual differences identified
- - Repeat until implementation matches design
-
-7. **Track Progress**
- - Keep TodoWrite updated as you complete tasks
- - Note any blockers or unexpected discoveries
- - Create new tasks if scope expands
- - Keep user informed of major milestones
-
-### Phase 3: Quality Check
-
-1. **Run Core Quality Checks**
-
- Always run before submitting:
-
- ```bash
- # Run full test suite (use project's test command)
- # Examples: bin/rails test, npm test, pytest, go test, etc.
-
- # Run linting (per CLAUDE.md)
- # Use linting-agent before pushing to origin
- ```
-
-2. **Consider Reviewer Agents** (Optional)
-
- Use for complex, risky, or large changes. Read agents from `compound-engineering.local.md` frontmatter (`review_agents`). If no settings file, invoke the `setup` skill to create one.
-
- Run configured agents in parallel with Task tool. Present findings and address critical issues.
-
-3. **Final Validation**
- - All TodoWrite tasks marked completed
- - All tests pass
- - Linting passes
- - Code follows existing patterns
- - Figma designs match (if applicable)
- - No console errors or warnings
-
-4. **Prepare Operational Validation Plan** (REQUIRED)
- - Add a `## Post-Deploy Monitoring & Validation` section to the PR description for every change.
- - Include concrete:
- - Log queries/search terms
- - Metrics or dashboards to watch
- - Expected healthy signals
- - Failure signals and rollback/mitigation trigger
- - Validation window and owner
- - If there is truly no production/runtime impact, still include the section with: `No additional operational monitoring required` and a one-line reason.
-
-### Phase 4: Ship It
-
-1. **Create Commit**
-
- ```bash
- git add .
- git status # Review what's being committed
- git diff --staged # Check the changes
-
- # Commit with conventional format
- git commit -m "$(cat <<'EOF'
- feat(scope): description of what and why
-
- Brief explanation if needed.
-
- 🤖 Generated with [Claude Code](https://claude.com/claude-code)
-
-   Co-Authored-By: Claude <noreply@anthropic.com>
- EOF
- )"
- ```
-
-2. **Capture and Upload Screenshots for UI Changes** (REQUIRED for any UI work)
-
- For **any** design changes, new views, or UI modifications, you MUST capture and upload screenshots:
-
- **Step 1: Start dev server** (if not running)
- ```bash
- bin/dev # Run in background
- ```
-
- **Step 2: Capture screenshots with agent-browser CLI**
- ```bash
- agent-browser open http://localhost:3000/[route]
- agent-browser snapshot -i
- agent-browser screenshot output.png
- ```
- See the `agent-browser` skill for detailed usage.
-
- **Step 3: Upload using imgup skill**
- ```bash
- skill: imgup
- # Then upload each screenshot:
- imgup -h pixhost screenshot.png # pixhost works without API key
- # Alternative hosts: catbox, imagebin, beeimg
- ```
-
- **What to capture:**
- - **New screens**: Screenshot of the new UI
- - **Modified screens**: Before AND after screenshots
- - **Design implementation**: Screenshot showing Figma design match
-
- **IMPORTANT**: Always include uploaded image URLs in PR description. This provides visual context for reviewers and documents the change.
-
-3. **Create Pull Request**
-
- ```bash
- git push -u origin feature-branch-name
-
- gh pr create --title "Feature: [Description]" --body "$(cat <<'EOF'
- ## Summary
- - What was built
- - Why it was needed
- - Key decisions made
-
- ## Testing
- - Tests added/modified
- - Manual testing performed
-
- ## Post-Deploy Monitoring & Validation
- - **What to monitor/search**
- - Logs:
- - Metrics/Dashboards:
- - **Validation checks (queries/commands)**
- - `command or query here`
- - **Expected healthy behavior**
- - Expected signal(s)
- - **Failure signal(s) / rollback trigger**
- - Trigger + immediate action
- - **Validation window & owner**
- - Window:
- - Owner:
- - **If no operational impact**
- - `No additional operational monitoring required: `
-
- ## Before / After Screenshots
- | Before | After |
- |--------|-------|
- |  |  |
-
- ## Figma Design
- [Link if applicable]
-
- ---
-
- [](https://github.com/EveryInc/compound-engineering-plugin) 🤖 Generated with [Claude Code](https://claude.com/claude-code)
- EOF
- )"
- ```
-
-4. **Update Plan Status**
-
- If the input document has YAML frontmatter with a `status` field, update it to `completed`:
- ```
- status: active → status: completed
- ```
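-
-   A hypothetical one-liner for that edit (GNU sed shown; BSD/macOS sed needs `-i ''`; path is illustrative):
-
-   ```bash
-   sed -i 's/^status: active$/status: completed/' docs/plans/feature-plan.md
-   ```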
-
-5. **Notify User**
- - Summarize what was completed
- - Link to PR
- - Note any follow-up work needed
- - Suggest next steps if applicable
-
----
-
-## Swarm Mode (Optional)
-
-For complex plans with multiple independent workstreams, enable swarm mode for parallel execution with coordinated agents.
-
-### When to Use Swarm Mode
-
-| Use Swarm Mode when... | Use Standard Mode when... |
-|------------------------|---------------------------|
-| Plan has 5+ independent tasks | Plan is linear/sequential |
-| Multiple specialists needed (review + test + implement) | Single-focus work |
-| Want maximum parallelism | Simpler mental model preferred |
-| Large feature with clear phases | Small feature or bug fix |
-
-### Enabling Swarm Mode
-
-To trigger swarm execution, say:
-
-> "Make a Task list and launch an army of agent swarm subagents to build the plan"
-
-Or explicitly request: "Use swarm mode for this work"
-
-### Swarm Workflow
-
-When swarm mode is enabled, the workflow changes:
-
-1. **Create Team**
- ```
- Teammate({ operation: "spawnTeam", team_name: "work-{timestamp}" })
- ```
-
-2. **Create Task List with Dependencies**
- - Parse plan into TaskCreate items
- - Set up blockedBy relationships for sequential dependencies
- - Independent tasks have no blockers (can run in parallel)
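-
-   A hypothetical sketch (exact TaskCreate fields may differ; see the orchestrating-swarms skill):
-
-   ```
-   TaskCreate({ subject: "Implement service", blockedBy: [] })
-   TaskCreate({ subject: "Test service", blockedBy: ["Implement service"] })
-   TaskCreate({ subject: "Update docs", blockedBy: [] })
-   ```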
-
-3. **Spawn Specialized Teammates**
- ```
- Task({
- team_name: "work-{timestamp}",
- name: "implementer",
- subagent_type: "general-purpose",
- prompt: "Claim implementation tasks, execute, mark complete",
- run_in_background: true
- })
-
- Task({
- team_name: "work-{timestamp}",
- name: "tester",
- subagent_type: "general-purpose",
- prompt: "Claim testing tasks, run tests, mark complete",
- run_in_background: true
- })
- ```
-
-4. **Coordinate and Monitor**
- - Team lead monitors task completion
- - Spawn additional workers as phases unblock
- - Handle plan approval if required
-
-5. **Cleanup**
- ```
- Teammate({ operation: "requestShutdown", target_agent_id: "implementer" })
- Teammate({ operation: "requestShutdown", target_agent_id: "tester" })
- Teammate({ operation: "cleanup" })
- ```
-
-See the `orchestrating-swarms` skill for detailed swarm patterns and best practices.
-
----
-
-## Key Principles
-
-### Start Fast, Execute Faster
-
-- Get clarification once at the start, then execute
-- Don't wait for perfect understanding - ask questions and move
-- The goal is to **finish the feature**, not create perfect process
-
-### The Plan is Your Guide
-
-- Work documents should reference similar code and patterns
-- Load those references and follow them
-- Don't reinvent - match what exists
-
-### Test As You Go
-
-- Run tests after each change, not at the end
-- Fix failures immediately
-- Continuous testing prevents big surprises
-
-### Quality is Built In
-
-- Follow existing patterns
-- Write tests for new code
-- Run linting before pushing
-- Use reviewer agents for complex/risky changes only
-
-### Ship Complete Features
-
-- Mark all tasks completed before moving on
-- Don't leave features 80% done
-- A finished feature that ships beats a perfect feature that doesn't
-
-## Quality Checklist
-
-Before creating PR, verify:
-
-- [ ] All clarifying questions asked and answered
-- [ ] All TodoWrite tasks marked completed
-- [ ] Tests pass (run project's test command)
-- [ ] Linting passes (use linting-agent)
-- [ ] Code follows existing patterns
-- [ ] All new names pass naming scrutiny (caller's perspective, no false qualifiers, correct visibility, consistent conventions, precise, complete words, correct part of speech)
-- [ ] Figma designs match implementation (if applicable)
-- [ ] Before/after screenshots captured and uploaded (for UI changes)
-- [ ] Commit messages follow conventional format
-- [ ] PR description includes Post-Deploy Monitoring & Validation section (or explicit no-impact rationale)
-- [ ] PR description includes summary, testing notes, and screenshots
-- [ ] PR description includes Compound Engineered badge
-
-## When to Use Reviewer Agents
-
-**Don't use by default.** Use reviewer agents only when:
-
-- Large refactor affecting many files (10+)
-- Security-sensitive changes (authentication, permissions, data access)
-- Performance-critical code paths
-- Complex algorithms or business logic
-- User explicitly requests thorough review
-
-For most features: tests + linting + following patterns is sufficient.
-
-## Common Pitfalls to Avoid
-
-- **Analysis paralysis** - Don't overthink, read the plan and execute
-- **Skipping clarifying questions** - Ask now, not after building wrong thing
-- **Ignoring plan references** - The plan has links for a reason
-- **Testing at the end** - Test continuously or suffer later
-- **Forgetting TodoWrite** - Track progress or lose track of what's done
-- **80% done syndrome** - Finish the feature, don't move on early
-- **Over-reviewing simple changes** - Save reviewer agents for complex work
diff --git a/plugins/compound-engineering/commands/essay-edit.md b/plugins/compound-engineering/skills/ce-essay-edit/SKILL.md
similarity index 99%
rename from plugins/compound-engineering/commands/essay-edit.md
rename to plugins/compound-engineering/skills/ce-essay-edit/SKILL.md
index 2d78934..a4fb6e7 100644
--- a/plugins/compound-engineering/commands/essay-edit.md
+++ b/plugins/compound-engineering/skills/ce-essay-edit/SKILL.md
@@ -1,5 +1,5 @@
---
-name: essay-edit
+name: ce-essay-edit
description: Expert essay editor that polishes written work through granular line-level editing and structural review. Preserves the author's voice and intent — never softens or genericizes. Pairs with /essay-outline.
argument-hint: "[path to essay file, or paste the essay]"
---
diff --git a/plugins/compound-engineering/commands/essay-outline.md b/plugins/compound-engineering/skills/ce-essay-outline/SKILL.md
similarity index 99%
rename from plugins/compound-engineering/commands/essay-outline.md
rename to plugins/compound-engineering/skills/ce-essay-outline/SKILL.md
index 3f952f7..e5dc243 100644
--- a/plugins/compound-engineering/commands/essay-outline.md
+++ b/plugins/compound-engineering/skills/ce-essay-outline/SKILL.md
@@ -1,5 +1,5 @@
---
-name: essay-outline
+name: ce-essay-outline
description: Transform a brain dump into a story-structured essay outline. Pressure tests the idea, validates story structure using the Saunders framework, and produces a tight outline written to file.
argument-hint: "[brain dump — your raw ideas, however loose]"
---