claude-engineering-plugin/plugins/compound-engineering/commands/workflows/work.md
Kieran Klaassen · 56b174a056 · Add configurable review agents via setup skill and compound-engineering.local.md (#124) · 2026-02-12 11:43:16 -06:00

---
name: workflows:work
description: Execute work plans efficiently while maintaining quality and finishing features
argument-hint: "[plan file, specification, or todo file path]"
---

Work Plan Execution Command

Execute a work plan efficiently while maintaining quality and finishing features.

Introduction

This command takes a work document (plan, specification, or todo file) and executes it systematically. The focus is on shipping complete features by understanding requirements quickly, following existing patterns, and maintaining quality throughout.

Input Document

<input_document> #$ARGUMENTS </input_document>

Execution Workflow

Phase 1: Quick Start

  1. Read Plan and Clarify

    • Read the work document completely
    • Review any references or links provided in the plan
    • If anything is unclear or ambiguous, ask clarifying questions now
    • Get user approval to proceed
    • Do not skip this - better to ask questions now than build the wrong thing
  2. Setup Environment

    First, check the current branch:

    ```bash
    current_branch=$(git branch --show-current)
    default_branch=$(git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's@^refs/remotes/origin/@@')

    # Fallback if remote HEAD isn't set
    if [ -z "$default_branch" ]; then
      default_branch=$(git rev-parse --verify origin/main >/dev/null 2>&1 && echo "main" || echo "master")
    fi
    ```

    If already on a feature branch (not the default branch):

    • Ask: "Continue working on [current_branch], or create a new branch?"
    • If continuing, proceed to step 3
    • If creating new, follow Option A or B below

    If on the default branch, choose how to proceed:

    Option A: Create a new branch

    ```bash
    git pull origin [default_branch]
    git checkout -b feature-branch-name
    ```

    Use a meaningful name based on the work (e.g., feat/user-authentication, fix/email-validation).
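    A branch name can also be derived mechanically from the work's title; this is an illustrative shell sketch, not part of the command (the title and `feat/` prefix are assumptions):

    ```shell
    # Hypothetical: turn a feature title into a slug-style branch name
    title="Add user authentication"
    branch="feat/$(printf '%s' "$title" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//')"
    echo "$branch"
    ```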

    Option B: Use a worktree (recommended for parallel development)

    ```
    skill: git-worktree
    # The skill will create a new branch from the default branch in an isolated worktree
    ```

    Option C: Continue on the default branch

    • Requires explicit user confirmation
    • Only proceed after user explicitly says "yes, commit to [default_branch]"
    • Never commit directly to the default branch without explicit permission

    Recommendation: Use worktree if:

    • You want to work on multiple features simultaneously
    • You want to keep the default branch clean while experimenting
    • You plan to switch between branches frequently
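    The pattern the git-worktree skill automates can be sketched directly with `git worktree` (the throwaway repo and branch name below are illustrative; real usage runs from your project root):

    ```shell
    # Demonstrate the worktree pattern in a scratch repo
    tmp=$(mktemp -d)
    cd "$tmp"
    git init -q demo
    cd demo
    git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"
    # Check out a new branch in a sibling directory, isolated from this checkout
    git worktree add ../demo-feature -b feature-branch-name
    git worktree list
    ```

    When the work is done, `git worktree remove ../demo-feature` cleans up the checkout while keeping the branch.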
  3. Create Todo List

    • Use TodoWrite to break plan into actionable tasks
    • Include dependencies between tasks
    • Prioritize based on what needs to be done first
    • Include testing and quality check tasks
    • Keep tasks specific and completable

Phase 2: Execute

  1. Task Execution Loop

    For each task in priority order:

    ```
    while (tasks remain):
      - Mark task as in_progress in TodoWrite
      - Read any referenced files from the plan
      - Look for similar patterns in codebase
      - Implement following existing conventions
      - Write tests for new functionality
      - Run tests after changes
      - Mark task as completed in TodoWrite
      - Mark off the corresponding checkbox in the plan file ([ ] → [x])
      - Evaluate for incremental commit (see below)
    ```

    IMPORTANT: Always update the original plan document by checking off completed items. Use the Edit tool to change - [ ] to - [x] for each task you finish. This keeps the plan as a living document showing progress and ensures no checkboxes are left unchecked.
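    Conceptually this is plain text substitution; a shell equivalent of what the Edit tool does (file name and task text are hypothetical):

    ```shell
    # Hypothetical plan file, for illustration only
    plan=$(mktemp)
    printf -- '- [ ] Add user model\n- [ ] Write model tests\n' > "$plan"
    # Check off the finished task: "- [ ]" becomes "- [x]" on that exact line
    updated=$(sed 's/^- \[ \] Add user model$/- [x] Add user model/' "$plan")
    printf '%s\n' "$updated"
    ```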

  2. Incremental Commits

    After completing each task, evaluate whether to create an incremental commit:

    | Commit when... | Don't commit when... |
    |---|---|
    | Logical unit complete (model, service, component) | Small part of a larger unit |
    | Tests pass + meaningful progress | Tests failing |
    | About to switch contexts (backend → frontend) | Purely scaffolding with no behavior |
    | About to attempt risky/uncertain changes | Would need a "WIP" commit message |

    Heuristic: "Can I write a commit message that describes a complete, valuable change? If yes, commit. If the message would be 'WIP' or 'partial X', wait."

    Commit workflow:

    ```bash
    # 1. Verify tests pass (use project's test command)
    # Examples: bin/rails test, npm test, pytest, go test, etc.

    # 2. Stage only files related to this logical unit (not `git add .`)
    git add <files related to this logical unit>

    # 3. Commit with conventional message
    git commit -m "feat(scope): description of this unit"
    ```

    Handling merge conflicts: If conflicts arise during rebasing or merging, resolve them immediately. Incremental commits make conflict resolution easier since each commit is small and focused.

    Note: Incremental commits use clean conventional messages without attribution footers. The final Phase 4 commit/PR includes the full attribution.

  3. Follow Existing Patterns

    • The plan should reference similar code - read those files first
    • Match naming conventions exactly
    • Reuse existing components where possible
    • Follow project coding standards (see CLAUDE.md)
    • When in doubt, grep for similar implementations
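    For instance, before writing a new service object you might locate existing ones to mirror. The scratch fixture below only makes the example self-contained and testable; in practice you would run the grep from the project root, and the class name is invented:

    ```shell
    # Scratch fixture standing in for a real codebase
    tmp=$(mktemp -d)
    mkdir -p "$tmp/app/services"
    printf 'class InvoiceSyncService\nend\n' > "$tmp/app/services/invoice_sync_service.rb"
    # Find existing service objects to copy their structure and naming conventions
    grep -rn "class .*Service" "$tmp/app" --include='*.rb'
    ```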
  4. Test Continuously

    • Run relevant tests after each significant change
    • Don't wait until the end to test
    • Fix failures immediately
    • Add new tests for new functionality
  5. Figma Design Sync (if applicable)

    For UI work with Figma designs:

    • Implement components following design specs
    • Use figma-design-sync agent iteratively to compare
    • Fix visual differences identified
    • Repeat until implementation matches design
  6. Track Progress

    • Keep TodoWrite updated as you complete tasks
    • Note any blockers or unexpected discoveries
    • Create new tasks if scope expands
    • Keep user informed of major milestones

Phase 3: Quality Check

  1. Run Core Quality Checks

    Always run before submitting:

    ```bash
    # Run full test suite (use project's test command)
    # Examples: bin/rails test, npm test, pytest, go test, etc.

    # Run linting (per CLAUDE.md)
    # Use linting-agent before pushing to origin
    ```
  2. Consider Reviewer Agents (Optional)

    Use for complex, risky, or large changes. Read agents from compound-engineering.local.md frontmatter (review_agents). If no settings file, invoke the setup skill to create one.

    Run configured agents in parallel with Task tool. Present findings and address critical issues.

  3. Final Validation

    • All TodoWrite tasks marked completed
    • All tests pass
    • Linting passes
    • Code follows existing patterns
    • Figma designs match (if applicable)
    • No console errors or warnings
  4. Prepare Operational Validation Plan (REQUIRED)

    • Add a ## Post-Deploy Monitoring & Validation section to the PR description for every change.
    • Include concrete:
      • Log queries/search terms
      • Metrics or dashboards to watch
      • Expected healthy signals
      • Failure signals and rollback/mitigation trigger
      • Validation window and owner
    • If there is truly no production/runtime impact, still include the section with `No additional operational monitoring required: <reason>` as a one-line entry.
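    As a purely hypothetical illustration (the feature, log query, and thresholds are invented for the example), a filled-in section might look like:

    ```markdown
    ## Post-Deploy Monitoring & Validation
    - **What to monitor/search**
      - Logs: `EmailValidationError` entries in app logs for the first hour after deploy
      - Metrics/Dashboards: signup-funnel dashboard, completed-signup rate
    - **Validation checks (queries/commands)**
      - `grep -c EmailValidationError log/production.log`
    - **Expected healthy behavior**
      - Error count stays at pre-deploy baseline; signup rate unchanged
    - **Failure signal(s) / rollback trigger**
      - Error spike or signup rate drop over 10%: revert the deploy
    - **Validation window & owner**
      - Window: 24 hours post-deploy
      - Owner: PR author
    ```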

Phase 4: Ship It

  1. Create Commit

    ```bash
    git add .
    git status  # Review what's being committed
    git diff --staged  # Check the changes

    # Commit with conventional format
    git commit -m "$(cat <<'EOF'
    feat(scope): description of what and why

    Brief explanation if needed.

    🤖 Generated with [Claude Code](https://claude.com/claude-code)

    Co-Authored-By: Claude <noreply@anthropic.com>
    EOF
    )"
    ```
  2. Capture and Upload Screenshots for UI Changes (REQUIRED for any UI work)

    For any design changes, new views, or UI modifications, you MUST capture and upload screenshots:

    Step 1: Start dev server (if not running)

    ```bash
    bin/dev  # Run in background
    ```

    Step 2: Capture screenshots with agent-browser CLI

    ```bash
    agent-browser open http://localhost:3000/[route]
    agent-browser snapshot -i
    agent-browser screenshot output.png
    ```

    See the agent-browser skill for detailed usage.

    Step 3: Upload using imgup skill

    ```
    skill: imgup
    # Then upload each screenshot:
    imgup -h pixhost screenshot.png  # pixhost works without API key
    # Alternative hosts: catbox, imagebin, beeimg
    ```

    What to capture:

    • New screens: Screenshot of the new UI
    • Modified screens: Before AND after screenshots
    • Design implementation: Screenshot showing Figma design match

    IMPORTANT: Always include uploaded image URLs in PR description. This provides visual context for reviewers and documents the change.

  3. Create Pull Request

    ```bash
    git push -u origin feature-branch-name

    gh pr create --title "Feature: [Description]" --body "$(cat <<'EOF'
    ## Summary
    - What was built
    - Why it was needed
    - Key decisions made

    ## Testing
    - Tests added/modified
    - Manual testing performed

    ## Post-Deploy Monitoring & Validation
    - **What to monitor/search**
      - Logs:
      - Metrics/Dashboards:
    - **Validation checks (queries/commands)**
      - `command or query here`
    - **Expected healthy behavior**
      - Expected signal(s)
    - **Failure signal(s) / rollback trigger**
      - Trigger + immediate action
    - **Validation window & owner**
      - Window:
      - Owner:
    - **If no operational impact**
      - `No additional operational monitoring required: <reason>`

    ## Before / After Screenshots
    | Before | After |
    |--------|-------|
    | ![before](URL) | ![after](URL) |

    ## Figma Design
    [Link if applicable]

    ---

    [![Compound Engineered](https://img.shields.io/badge/Compound-Engineered-6366f1)](https://github.com/EveryInc/compound-engineering-plugin) 🤖 Generated with [Claude Code](https://claude.com/claude-code)
    EOF
    )"
    ```
  4. Notify User

    • Summarize what was completed
    • Link to PR
    • Note any follow-up work needed
    • Suggest next steps if applicable

Swarm Mode (Optional)

For complex plans with multiple independent workstreams, enable swarm mode for parallel execution with coordinated agents.

When to Use Swarm Mode

| Use Swarm Mode when... | Use Standard Mode when... |
|---|---|
| Plan has 5+ independent tasks | Plan is linear/sequential |
| Multiple specialists needed (review + test + implement) | Single-focus work |
| Want maximum parallelism | Simpler mental model preferred |
| Large feature with clear phases | Small feature or bug fix |

Enabling Swarm Mode

To trigger swarm execution, say:

"Make a Task list and launch an army of agent swarm subagents to build the plan"

Or explicitly request: "Use swarm mode for this work"

Swarm Workflow

When swarm mode is enabled, the workflow changes:

  1. Create Team

    ```
    Teammate({ operation: "spawnTeam", team_name: "work-{timestamp}" })
    ```
  2. Create Task List with Dependencies

    • Parse plan into TaskCreate items
    • Set up blockedBy relationships for sequential dependencies
    • Independent tasks have no blockers (can run in parallel)
  3. Spawn Specialized Teammates

    ```
    Task({
      team_name: "work-{timestamp}",
      name: "implementer",
      subagent_type: "general-purpose",
      prompt: "Claim implementation tasks, execute, mark complete",
      run_in_background: true
    })

    Task({
      team_name: "work-{timestamp}",
      name: "tester",
      subagent_type: "general-purpose",
      prompt: "Claim testing tasks, run tests, mark complete",
      run_in_background: true
    })
    ```
  4. Coordinate and Monitor

    • Team lead monitors task completion
    • Spawn additional workers as phases unblock
    • Handle plan approval if required
  5. Cleanup

    ```
    Teammate({ operation: "requestShutdown", target_agent_id: "implementer" })
    Teammate({ operation: "requestShutdown", target_agent_id: "tester" })
    Teammate({ operation: "cleanup" })
    ```

See the orchestrating-swarms skill for detailed swarm patterns and best practices.


Key Principles

Start Fast, Execute Faster

  • Get clarification once at the start, then execute
  • Don't wait for perfect understanding - ask questions and move
  • The goal is to finish the feature, not create perfect process

The Plan is Your Guide

  • Work documents should reference similar code and patterns
  • Load those references and follow them
  • Don't reinvent - match what exists

Test As You Go

  • Run tests after each change, not at the end
  • Fix failures immediately
  • Continuous testing prevents big surprises

Quality is Built In

  • Follow existing patterns
  • Write tests for new code
  • Run linting before pushing
  • Use reviewer agents for complex/risky changes only

Ship Complete Features

  • Mark all tasks completed before moving on
  • Don't leave features 80% done
  • A finished feature that ships beats a perfect feature that doesn't

Quality Checklist

Before creating PR, verify:

  • All clarifying questions asked and answered
  • All TodoWrite tasks marked completed
  • Tests pass (run project's test command)
  • Linting passes (use linting-agent)
  • Code follows existing patterns
  • Figma designs match implementation (if applicable)
  • Before/after screenshots captured and uploaded (for UI changes)
  • Commit messages follow conventional format
  • PR description includes Post-Deploy Monitoring & Validation section (or explicit no-impact rationale)
  • PR description includes summary, testing notes, and screenshots
  • PR description includes Compound Engineered badge

When to Use Reviewer Agents

Don't use by default. Use reviewer agents only when:

  • Large refactor affecting many files (10+)
  • Security-sensitive changes (authentication, permissions, data access)
  • Performance-critical code paths
  • Complex algorithms or business logic
  • User explicitly requests thorough review

For most features: tests + linting + following patterns is sufficient.

Common Pitfalls to Avoid

  • Analysis paralysis - Don't overthink, read the plan and execute
  • Skipping clarifying questions - Ask now, not after building the wrong thing
  • Ignoring plan references - The plan has links for a reason
  • Testing at the end - Test continuously or suffer later
  • Forgetting TodoWrite - Track progress or lose track of what's done
  • 80% done syndrome - Finish the feature, don't move on early
  • Over-reviewing simple changes - Save reviewer agents for complex work