[2.9.0] Rename plugin to compound-engineering

BREAKING: Plugin renamed from compounding-engineering to compound-engineering.
Users will need to reinstall with the new name:

  claude /plugin install compound-engineering

Changes:
- Renamed plugin directory and all references
- Updated documentation counts (24 agents, 19 commands)
- Added julik-frontend-races-reviewer to docs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Kieran Klaassen
2025-12-02 17:32:04 -08:00
parent 4b49e5344d
commit 6c5b3e40db
121 changed files with 136 additions and 117 deletions

View File

@@ -0,0 +1,17 @@
---
name: codify
description: "[DEPRECATED] Use /compound instead - Document solved problems"
argument-hint: "[optional: brief context about the fix]"
---
# /codify is deprecated
**This command has been renamed to `/compound`.**
The new name better reflects the compounding engineering philosophy: each documented solution compounds your team's knowledge.
---
Tell the user: "Note: `/codify` has been renamed to `/compound`. Please use `/compound` going forward."
Now run the `/compound` command with the same arguments: #$ARGUMENTS

View File

@@ -0,0 +1,202 @@
---
name: compound
description: Document a recently solved problem to compound your team's knowledge
argument-hint: "[optional: brief context about the fix]"
---
# /compound
Coordinate multiple subagents working in parallel to document a recently solved problem.
## Purpose
Captures problem solutions while context is fresh, creating structured documentation in `docs/solutions/` with YAML frontmatter for searchability and future reference. Uses parallel subagents for maximum efficiency.
**Why "compound"?** Each documented solution compounds your team's knowledge. The first time you solve a problem takes research. Document it, and the next occurrence takes minutes. Knowledge compounds.
## Usage
```bash
/compound # Document the most recent fix
/compound [brief context] # Provide additional context hint
```
## Execution Strategy: Parallel Subagents
This command launches multiple specialized subagents IN PARALLEL to maximize efficiency:
### 1. **Context Analyzer** (Parallel)
- Extracts conversation history
- Identifies problem type, component, symptoms
- Validates against CORA schema
- Returns: YAML frontmatter skeleton
### 2. **Solution Extractor** (Parallel)
- Analyzes all investigation steps
- Identifies root cause
- Extracts working solution with code examples
- Returns: Solution content block
### 3. **Related Docs Finder** (Parallel)
- Searches `docs/solutions/` for related documentation
- Identifies cross-references and links
- Finds related GitHub issues
- Returns: Links and relationships
### 4. **Prevention Strategist** (Parallel)
- Develops prevention strategies
- Creates best practices guidance
- Generates test cases if applicable
- Returns: Prevention/testing content
### 5. **Category Classifier** (Parallel)
- Determines optimal `docs/solutions/` category
- Validates category against schema
- Suggests filename based on slug
- Returns: Final path and filename
### 6. **Documentation Writer** (Parallel)
- Assembles complete markdown file
- Validates YAML frontmatter
- Formats content for readability
- Creates the file in correct location
### 7. **Optional: Specialized Agent Invocation** (Post-Documentation)
Based on problem type detected, automatically invoke applicable agents:
- **performance_issue** → `performance-oracle`
- **security_issue** → `security-sentinel`
- **database_issue** → `data-integrity-guardian`
- **test_failure** → `cora-test-reviewer`
- Any code-heavy issue → `kieran-rails-reviewer` + `code-simplicity-reviewer`
## What It Captures
- **Problem symptom**: Exact error messages, observable behavior
- **Investigation steps tried**: What didn't work and why
- **Root cause analysis**: Technical explanation
- **Working solution**: Step-by-step fix with code examples
- **Prevention strategies**: How to avoid in future
- **Cross-references**: Links to related issues and docs
## Preconditions
<preconditions enforcement="advisory">
<check condition="problem_solved">
Problem has been solved (not in-progress)
</check>
<check condition="solution_verified">
Solution has been verified working
</check>
<check condition="non_trivial">
Non-trivial problem (not simple typo or obvious error)
</check>
</preconditions>
## What It Creates
**Organized documentation:**
- File: `docs/solutions/[category]/[filename].md`
**Categories auto-detected from problem:**
- build-errors/
- test-failures/
- runtime-errors/
- performance-issues/
- database-issues/
- security-issues/
- ui-bugs/
- integration-issues/
- logic-errors/
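For illustration, a minimal sketch of what a generated solution file might look like, created here with a shell heredoc. The frontmatter keys below are assumptions for illustration; the actual schema is validated by the Context Analyzer subagent against CORA.
```bash
# Hypothetical example only -- field names and content are illustrative,
# not the canonical CORA schema.
mkdir -p docs/solutions/performance-issues
cat > docs/solutions/performance-issues/n-plus-one-brief-generation.md <<'EOF'
---
problem_type: performance_issue
component: brief_system
symptoms:
  - "Brief generation takes 30+ seconds"
root_cause: "N+1 query when loading email senders"
tags: [rails, activerecord, performance]
---
# N+1 query in brief generation

## Solution
Preload associations before iterating (e.g. `includes(:sender)`).
EOF
```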
## Success Output
```
✓ Parallel documentation generation complete
Primary Subagent Results:
✓ Context Analyzer: Identified performance_issue in brief_system
✓ Solution Extractor: Extracted 3 code fixes
✓ Related Docs Finder: Found 2 related issues
✓ Prevention Strategist: Generated test cases
✓ Category Classifier: docs/solutions/performance-issues/
✓ Documentation Writer: Created complete markdown
Specialized Agent Reviews (Auto-Triggered):
✓ performance-oracle: Validated query optimization approach
✓ kieran-rails-reviewer: Code examples meet Rails standards
✓ code-simplicity-reviewer: Solution is appropriately minimal
✓ every-style-editor: Documentation style verified
File created:
- docs/solutions/performance-issues/n-plus-one-brief-generation.md
This documentation will be searchable for future reference when similar
issues occur in the Email Processing or Brief System modules.
What's next?
1. Continue workflow (recommended)
2. Link related documentation
3. Update other references
4. View documentation
5. Other
```
## The Compounding Philosophy
This creates a compounding knowledge system:
1. First time you solve "N+1 query in brief generation" → Research (30 min)
2. Document the solution → docs/solutions/performance-issues/n-plus-one-briefs.md (5 min)
3. Next time similar issue occurs → Quick lookup (2 min)
4. Knowledge compounds → Team gets smarter
The feedback loop:
```
Build → Test → Find Issue → Research → Improve → Document → Validate → Deploy
  ↑                                                                         ↓
  └─────────────────────────────────────────────────────────────────────────┘
```
**Each unit of engineering work should make subsequent units of work easier—not harder.**
## Auto-Invoke
<auto_invoke>
  <trigger_phrases>
    - "that worked"
    - "it's fixed"
    - "working now"
    - "problem solved"
  </trigger_phrases>
  <manual_override>
    Use /compound [context] to document immediately without waiting for auto-detection.
  </manual_override>
</auto_invoke>
## Routes To
`compound-docs` skill
## Applicable Specialized Agents
Based on problem type, these agents can enhance documentation:
### Code Quality & Review
- **kieran-rails-reviewer**: Reviews code examples for Rails best practices
- **code-simplicity-reviewer**: Ensures solution code is minimal and clear
- **pattern-recognition-specialist**: Identifies anti-patterns or repeating issues
### Specific Domain Experts
- **performance-oracle**: Analyzes performance_issue category solutions
- **security-sentinel**: Reviews security_issue solutions for vulnerabilities
- **cora-test-reviewer**: Creates test cases for prevention strategies
- **data-integrity-guardian**: Reviews database_issue migrations and queries
### Enhancement & Documentation
- **best-practices-researcher**: Enriches solution with industry best practices
- **every-style-editor**: Reviews documentation style and clarity
- **framework-docs-researcher**: Links to Rails/gem documentation references
### When to Invoke
- **Auto-triggered** (optional): Agents can run post-documentation for enhancement
- **Manual trigger**: User can invoke agents after /compound completes for deeper review
## Related Commands
- `/research [topic]` - Deep investigation (searches docs/solutions/ for patterns)
- `/plan` - Planning workflow (references documented solutions)

View File

@@ -0,0 +1,424 @@
---
name: plan
description: Transform feature descriptions into well-structured project plans following conventions
argument-hint: "[feature description, bug report, or improvement idea]"
---
# Create a plan for a new feature or bug fix
## Introduction
**Note: The current year is 2025.** Use this when dating plans and searching for recent documentation.
Transform feature descriptions, bug reports, or improvement ideas into well-structured markdown issue files that follow project conventions and best practices. This command provides flexible detail levels to match your needs.
## Feature Description
<feature_description> #$ARGUMENTS </feature_description>
**If the feature description above is empty, ask the user:** "What would you like to plan? Please describe the feature, bug fix, or improvement you have in mind."
Do not proceed until you have a clear feature description from the user.
## Main Tasks
### 1. Repository Research & Context Gathering
<thinking>
First, I need to understand the project's conventions and existing patterns, leveraging all available resources and using parallel subagents to do this.
</thinking>
Run these three agents in parallel:
- Task repo-research-analyst(feature_description)
- Task best-practices-researcher(feature_description)
- Task framework-docs-researcher(feature_description)
**Reference Collection:**
- [ ] Document all research findings with specific file paths (e.g., `app/services/example_service.rb:42`)
- [ ] Include URLs to external documentation and best practices guides
- [ ] Create a reference list of similar issues or PRs (e.g., `#123`, `#456`)
- [ ] Note any team conventions discovered in `CLAUDE.md` or team documentation
### 2. Issue Planning & Structure
<thinking>
Think like a product manager - what would make this issue clear and actionable? Consider multiple perspectives
</thinking>
**Title & Categorization:**
- [ ] Draft clear, searchable issue title using conventional format (e.g., `feat:`, `fix:`, `docs:`)
- [ ] Determine issue type: enhancement, bug, refactor
**Stakeholder Analysis:**
- [ ] Identify who will be affected by this issue (end users, developers, operations)
- [ ] Consider implementation complexity and required expertise
**Content Planning:**
- [ ] Choose appropriate detail level based on issue complexity and audience
- [ ] List all necessary sections for the chosen template
- [ ] Gather supporting materials (error logs, screenshots, design mockups)
- [ ] Prepare code examples or reproduction steps if applicable, naming the mock filenames in the lists
### 3. SpecFlow Analysis
After planning the issue structure, run SpecFlow Analyzer to validate and refine the feature specification:
- Task spec-flow-analyzer(feature_description, research_findings)
**SpecFlow Analyzer Output:**
- [ ] Review SpecFlow analysis results
- [ ] Incorporate any identified gaps or edge cases into the issue
- [ ] Update acceptance criteria based on SpecFlow findings
### 4. Choose Implementation Detail Level
Select how comprehensive you want the issue to be; simpler is usually better.
#### 📄 MINIMAL (Quick Issue)
**Best for:** Simple bugs, small improvements, clear features
**Includes:**
- Problem statement or feature description
- Basic acceptance criteria
- Essential context only
**Structure:**
````markdown
[Brief problem/feature description]
## Acceptance Criteria
- [ ] Core requirement 1
- [ ] Core requirement 2
## Context
[Any critical information]
## MVP
### test.rb
```ruby
class Test
def initialize
@name = "test"
end
end
```
## References
- Related issue: #[issue_number]
- Documentation: [relevant_docs_url]
````
#### 📋 MORE (Standard Issue)
**Best for:** Most features, complex bugs, team collaboration
**Includes everything from MINIMAL plus:**
- Detailed background and motivation
- Technical considerations
- Success metrics
- Dependencies and risks
- Basic implementation suggestions
**Structure:**
```markdown
## Overview
[Comprehensive description]
## Problem Statement / Motivation
[Why this matters]
## Proposed Solution
[High-level approach]
## Technical Considerations
- Architecture impacts
- Performance implications
- Security considerations
## Acceptance Criteria
- [ ] Detailed requirement 1
- [ ] Detailed requirement 2
- [ ] Testing requirements
## Success Metrics
[How we measure success]
## Dependencies & Risks
[What could block or complicate this]
## References & Research
- Similar implementations: [file_path:line_number]
- Best practices: [documentation_url]
- Related PRs: #[pr_number]
```
#### 📚 A LOT (Comprehensive Issue)
**Best for:** Major features, architectural changes, complex integrations
**Includes everything from MORE plus:**
- Detailed implementation plan with phases
- Alternative approaches considered
- Extensive technical specifications
- Resource requirements and timeline
- Future considerations and extensibility
- Risk mitigation strategies
- Documentation requirements
**Structure:**
```markdown
## Overview
[Executive summary]
## Problem Statement
[Detailed problem analysis]
## Proposed Solution
[Comprehensive solution design]
## Technical Approach
### Architecture
[Detailed technical design]
### Implementation Phases
#### Phase 1: [Foundation]
- Tasks and deliverables
- Success criteria
- Estimated effort
#### Phase 2: [Core Implementation]
- Tasks and deliverables
- Success criteria
- Estimated effort
#### Phase 3: [Polish & Optimization]
- Tasks and deliverables
- Success criteria
- Estimated effort
## Alternative Approaches Considered
[Other solutions evaluated and why rejected]
## Acceptance Criteria
### Functional Requirements
- [ ] Detailed functional criteria
### Non-Functional Requirements
- [ ] Performance targets
- [ ] Security requirements
- [ ] Accessibility standards
### Quality Gates
- [ ] Test coverage requirements
- [ ] Documentation completeness
- [ ] Code review approval
## Success Metrics
[Detailed KPIs and measurement methods]
## Dependencies & Prerequisites
[Detailed dependency analysis]
## Risk Analysis & Mitigation
[Comprehensive risk assessment]
## Resource Requirements
[Team, time, infrastructure needs]
## Future Considerations
[Extensibility and long-term vision]
## Documentation Plan
[What docs need updating]
## References & Research
### Internal References
- Architecture decisions: [file_path:line_number]
- Similar features: [file_path:line_number]
- Configuration: [file_path:line_number]
### External References
- Framework documentation: [url]
- Best practices guide: [url]
- Industry standards: [url]
### Related Work
- Previous PRs: #[pr_numbers]
- Related issues: #[issue_numbers]
- Design documents: [links]
```
### 5. Issue Creation & Formatting
<thinking>
Apply best practices for clarity and actionability, making the issue easy to scan and understand
</thinking>
**Content Formatting:**
- [ ] Use clear, descriptive headings with proper hierarchy (##, ###)
- [ ] Include code examples in triple backticks with language syntax highlighting
- [ ] Add screenshots/mockups if UI-related (drag & drop or use image hosting)
- [ ] Use task lists (- [ ]) for trackable items that can be checked off
- [ ] Add collapsible sections for lengthy logs or optional details using `<details>` tags
- [ ] Apply appropriate emoji for visual scanning (🐛 bug, ✨ feature, 📚 docs, ♻️ refactor)
**Cross-Referencing:**
- [ ] Link to related issues/PRs using #number format
- [ ] Reference specific commits with SHA hashes when relevant
- [ ] Link to code using GitHub's permalink feature (press 'y' for permanent link)
- [ ] Mention relevant team members with @username if needed
- [ ] Add links to external resources with descriptive text
**Code & Examples:**
````markdown
# Good example with syntax highlighting and line references
```ruby
# app/services/user_service.rb:42
def process_user(user)
  # Implementation here
end
```

# Collapsible error logs
<details>
<summary>Full error stacktrace</summary>

`Error details here...`

</details>
````
**AI-Era Considerations:**
- [ ] Account for accelerated development with AI pair programming
- [ ] Include prompts or instructions that worked well during research
- [ ] Note which AI tools were used for initial exploration (Claude, Copilot, etc.)
- [ ] Emphasize comprehensive testing given rapid implementation
- [ ] Document any AI-generated code that needs human review
### 6. Final Review & Submission
**Pre-submission Checklist:**
- [ ] Title is searchable and descriptive
- [ ] Labels accurately categorize the issue
- [ ] All template sections are complete
- [ ] Links and references are working
- [ ] Acceptance criteria are measurable
- [ ] Add file names to pseudo-code examples and todo lists
- [ ] Add an ERD mermaid diagram if applicable for new model changes
## Output Format
Write the plan to `plans/<issue_title>.md`
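A small sketch of how the filename might be derived, assuming a simple kebab-case slug of the title (the exact slug rules are not specified here):
```bash
# Illustrative slugging only -- the command may derive the filename differently.
title="Add user avatars"
slug=$(echo "$title" | tr 'A-Z ' 'a-z-')
mkdir -p plans
echo "plans/${slug}.md"   # => plans/add-user-avatars.md
```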
## Post-Generation Options
After writing the plan file, use the **AskUserQuestion tool** to present these options:
**Question:** "Plan ready at `plans/<issue_title>.md`. What would you like to do next?"
**Options:**
1. **Start `/work`** - Begin implementing this plan
2. **Run `/plan_review`** - Get feedback from reviewers (DHH, Kieran, Simplicity)
3. **Create Issue** - Create issue in project tracker (GitHub/Linear)
4. **Simplify** - Reduce detail level
5. **Rework** - Change approach or request specific changes
Based on selection:
- **`/work`** → Call the /work command with the plan file path
- **`/plan_review`** → Call the /plan_review command with the plan file path
- **Create Issue** → See "Issue Creation" section below
- **Simplify** → Ask "What should I simplify?" then regenerate simpler version
- **Rework** → Ask "What would you like changed?" then regenerate with changes
- **Other** (automatically provided) → Accept free text, act on it
Loop back to options after Simplify/Rework until user selects `/work` or `/plan_review`.
## Issue Creation
When user selects "Create Issue", detect their project tracker from CLAUDE.md:
1. **Check for tracker preference** in user's CLAUDE.md (global or project):
- Look for `project_tracker: github` or `project_tracker: linear`
- Or look for mentions of "GitHub Issues" or "Linear" in their workflow section
2. **If GitHub:**
```bash
# Extract title from plan filename (kebab-case to Title Case)
# Read plan content for body
gh issue create --title "feat: [Plan Title]" --body-file plans/<issue_title>.md
```
3. **If Linear:**
```bash
# Use linear CLI if available, or provide instructions
# linear issue create --title "[Plan Title]" --description "$(cat plans/<issue_title>.md)"
```
4. **If no tracker configured:**
Ask user: "Which project tracker do you use? (GitHub/Linear/Other)"
- Suggest adding `project_tracker: github` or `project_tracker: linear` to their CLAUDE.md
5. **After creation:**
- Display the issue URL
- Ask if they want to proceed to `/work` or `/plan_review`
NEVER CODE! Just research and write the plan.

View File

@@ -0,0 +1,405 @@
---
name: review
description: Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and worktrees
argument-hint: "[PR number, GitHub URL, branch name, or latest]"
---
# Review Command
<command_purpose> Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and Git worktrees for deep local inspection. </command_purpose>
## Introduction
<role>Senior Code Review Architect with expertise in security, performance, architecture, and quality assurance</role>
## Prerequisites
<requirements>
- Git repository with GitHub CLI (`gh`) installed and authenticated
- Clean main/master branch
- Proper permissions to create worktrees and access the repository
- For document reviews: Path to a markdown file or document
</requirements>
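A quick way to sanity-check these prerequisites before starting (a sketch using standard git/gh commands):
```bash
# Confirm gh is installed and authenticated
gh auth status
# Confirm we're in a git repo and see which branch is checked out
git status --short --branch
```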
## Main Tasks
### 1. Determine Review Target & Setup (ALWAYS FIRST)
<review_target> #$ARGUMENTS </review_target>
<thinking>
First, I need to determine the review target type and set up the code for analysis.
</thinking>
#### Immediate Actions:
<task_list>
- [ ] Determine review type: PR number (numeric), GitHub URL, file path (.md), or empty (current branch)
- [ ] Check current git branch
- [ ] If ALREADY on the PR branch → proceed with analysis on current branch
- [ ] If DIFFERENT branch → offer to use a worktree for isolated review: call `skill: git-worktree` with the branch name
- [ ] Fetch PR metadata using `gh pr view --json` for title, body, files, and linked issues (see the sketch after this list)
- [ ] Set up language-specific analysis tools
- [ ] Prepare security scanning environment
- [ ] Make sure we are on the branch under review: use `gh pr checkout` to switch to it, or check out the branch manually.
Ensure the code is ready for analysis (either in a worktree or on the current branch). ONLY then proceed to the next step.
</task_list>
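A sketch of the metadata fetch and checkout mentioned above; the JSON field names come from the gh CLI and are worth double-checking against `gh pr view --help` for your version:
```bash
# Fetch PR metadata for review context (field names may vary by gh version)
gh pr view "$PR_NUMBER" --json title,body,files,headRefName,closingIssuesReferences
# Check out the PR branch locally
gh pr checkout "$PR_NUMBER"
```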
#### Parallel Agents to review the PR:
<parallel_tasks>
Run ALL or most of these agents at the same time:
1. Task kieran-rails-reviewer(PR content)
2. Task dhh-rails-reviewer(PR title)
3. If turbo is used: Task rails-turbo-expert(PR content)
4. Task git-history-analyzer(PR content)
5. Task dependency-detective(PR content)
6. Task pattern-recognition-specialist(PR content)
7. Task architecture-strategist(PR content)
8. Task code-philosopher(PR content)
9. Task security-sentinel(PR content)
10. Task performance-oracle(PR content)
11. Task devops-harmony-analyst(PR content)
12. Task data-integrity-guardian(PR content)
</parallel_tasks>
### 4. Ultra-Thinking Deep Dive Phases
<ultrathink_instruction> For each phase below, spend maximum cognitive effort. Think step by step. Consider all angles. Question assumptions. Then bring all reviews together in a synthesis for the user. </ultrathink_instruction>
<deliverable>
Complete system context map with component interactions
</deliverable>
#### Phase 3: Stakeholder Perspective Analysis
<thinking_prompt> ULTRA-THINK: Put yourself in each stakeholder's shoes. What matters to them? What are their pain points? </thinking_prompt>
<stakeholder_perspectives>
1. **Developer Perspective** <questions>
- How easy is this to understand and modify?
- Are the APIs intuitive?
- Is debugging straightforward?
- Can I test this easily? </questions>
2. **Operations Perspective** <questions>
- How do I deploy this safely?
- What metrics and logs are available?
- How do I troubleshoot issues?
- What are the resource requirements? </questions>
3. **End User Perspective** <questions>
- Is the feature intuitive?
- Are error messages helpful?
- Is performance acceptable?
- Does it solve my problem? </questions>
4. **Security Team Perspective** <questions>
- What's the attack surface?
- Are there compliance requirements?
- How is data protected?
- What are the audit capabilities? </questions>
5. **Business Perspective** <questions>
- What's the ROI?
- Are there legal/compliance risks?
- How does this affect time-to-market?
- What's the total cost of ownership? </questions> </stakeholder_perspectives>
#### Phase 4: Scenario Exploration
<thinking_prompt> ULTRA-THINK: Explore edge cases and failure scenarios. What could go wrong? How does the system behave under stress? </thinking_prompt>
<scenario_checklist>
- [ ] **Happy Path**: Normal operation with valid inputs
- [ ] **Invalid Inputs**: Null, empty, malformed data
- [ ] **Boundary Conditions**: Min/max values, empty collections
- [ ] **Concurrent Access**: Race conditions, deadlocks
- [ ] **Scale Testing**: 10x, 100x, 1000x normal load
- [ ] **Network Issues**: Timeouts, partial failures
- [ ] **Resource Exhaustion**: Memory, disk, connections
- [ ] **Security Attacks**: Injection, overflow, DoS
- [ ] **Data Corruption**: Partial writes, inconsistency
- [ ] **Cascading Failures**: Downstream service issues </scenario_checklist>
### 6. Multi-Angle Review Perspectives
#### Technical Excellence Angle
- Code craftsmanship evaluation
- Engineering best practices
- Technical documentation quality
- Tooling and automation assessment
#### Business Value Angle
- Feature completeness validation
- Performance impact on users
- Cost-benefit analysis
- Time-to-market considerations
#### Risk Management Angle
- Security risk assessment
- Operational risk evaluation
- Compliance risk verification
- Technical debt accumulation
#### Team Dynamics Angle
- Code review etiquette
- Knowledge sharing effectiveness
- Collaboration patterns
- Mentoring opportunities
### 4. Simplification and Minimalism Review
Run the Task code-simplicity-reviewer() to see if we can simplify the code.
### 5. Findings Synthesis and Todo Creation Using file-todos Skill
<critical_requirement> ALL findings MUST be stored in the todos/ directory using the file-todos skill. Create todo files immediately after synthesis - do NOT present findings for user approval first. Use the skill for structured todo management. </critical_requirement>
#### Step 1: Synthesize All Findings
<thinking>
Consolidate all agent reports into a categorized list of findings.
Remove duplicates, prioritize by severity and impact.
</thinking>
<synthesis_tasks>
- [ ] Collect findings from all parallel agents
- [ ] Categorize by type: security, performance, architecture, quality, etc.
- [ ] Assign severity levels: 🔴 CRITICAL (P1), 🟡 IMPORTANT (P2), 🔵 NICE-TO-HAVE (P3)
- [ ] Remove duplicate or overlapping findings
- [ ] Estimate effort for each finding (Small/Medium/Large)
</synthesis_tasks>
#### Step 2: Create Todo Files Using file-todos Skill
<critical_instruction> Use the file-todos skill to create todo files for ALL findings immediately. Do NOT present findings one-by-one asking for user approval. Create all todo files in parallel using the skill, then summarize results to user. </critical_instruction>
**Implementation Options:**
**Option A: Direct File Creation (Fast)**
- Create todo files directly using Write tool
- All findings in parallel for speed
- Use standard template from `.claude/skills/file-todos/assets/todo-template.md`
- Follow naming convention: `{issue_id}-pending-{priority}-{description}.md`
**Option B: Sub-Agents in Parallel (Recommended for Scale)**
For large PRs with 15+ findings, use sub-agents to create finding files in parallel:
```bash
# Launch multiple finding-creator agents in parallel
Task() - Create todos for first finding
Task() - Create todos for second finding
Task() - Create todos for third finding
etc. for each finding.
```
Sub-agents can:
- Process multiple findings simultaneously
- Write detailed todo files with all sections filled
- Organize findings by severity
- Create comprehensive Proposed Solutions
- Add acceptance criteria and work logs
- Complete much faster than sequential processing
**Execution Strategy:**
1. Synthesize all findings into categories (P1/P2/P3)
2. Group findings by severity
3. Launch 3 parallel sub-agents (one per severity level)
4. Each sub-agent creates its batch of todos using the file-todos skill
5. Consolidate results and present summary
**Process (Using file-todos Skill):**
1. For each finding:
- Determine severity (P1/P2/P3)
- Write detailed Problem Statement and Findings
- Create 2-3 Proposed Solutions with pros/cons/effort/risk
- Estimate effort (Small/Medium/Large)
- Add acceptance criteria and work log
2. Use file-todos skill for structured todo management:
```bash
skill: file-todos
```
The skill provides:
- Template location: `.claude/skills/file-todos/assets/todo-template.md`
- Naming convention: `{issue_id}-{status}-{priority}-{description}.md`
- YAML frontmatter structure: status, priority, issue_id, tags, dependencies
- All required sections: Problem Statement, Findings, Solutions, etc.
3. Create todo files in parallel:
```bash
{next_id}-pending-{priority}-{description}.md
```
4. Examples:
```
001-pending-p1-path-traversal-vulnerability.md
002-pending-p1-api-response-validation.md
003-pending-p2-concurrency-limit.md
004-pending-p3-unused-parameter.md
```
5. Follow template structure from file-todos skill: `.claude/skills/file-todos/assets/todo-template.md`
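A minimal sketch of steps 3–5 in shell form, assuming the template path above exists in the plugin:
```bash
# Assumes the file-todos template path shown above; adjust if the skill stores it elsewhere.
mkdir -p todos
cp .claude/skills/file-todos/assets/todo-template.md \
   todos/001-pending-p1-path-traversal-vulnerability.md
# Then fill in the frontmatter and sections for this specific finding.
```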
**Todo File Structure (from template):**
Each todo must include:
- **YAML frontmatter**: status, priority, issue_id, tags, dependencies
- **Problem Statement**: What's broken/missing, why it matters
- **Findings**: Discoveries from agents with evidence/location
- **Proposed Solutions**: 2-3 options, each with pros/cons/effort/risk
- **Recommended Action**: (Filled during triage, leave blank initially)
- **Technical Details**: Affected files, components, database changes
- **Acceptance Criteria**: Testable checklist items
- **Work Log**: Dated record with actions and learnings
- **Resources**: Links to PR, issues, documentation, similar patterns
**File naming convention:**
```
{issue_id}-{status}-{priority}-{description}.md
Examples:
- 001-pending-p1-security-vulnerability.md
- 002-pending-p2-performance-optimization.md
- 003-pending-p3-code-cleanup.md
```
**Status values:**
- `pending` - New findings, needs triage/decision
- `ready` - Approved by manager, ready to work
- `complete` - Work finished
**Priority values:**
- `p1` - Critical (blocks merge, security/data issues)
- `p2` - Important (should fix, architectural/performance)
- `p3` - Nice-to-have (enhancements, cleanup)
**Tagging:** Always add `code-review` tag, plus: `security`, `performance`, `architecture`, `rails`, `quality`, etc.
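Status transitions are just file renames, so git keeps the history (a sketch):
```bash
# Move a todo from pending to ready after triage approval
git mv todos/001-pending-p1-path-traversal-vulnerability.md \
       todos/001-ready-p1-path-traversal-vulnerability.md
git commit -m "chore: approve todo 001 for work"
```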
#### Step 3: Summary Report
After creating all todo files, present comprehensive summary:
````markdown
## ✅ Code Review Complete
**Review Target:** PR #XXXX - [PR Title] **Branch:** [branch-name]
### Findings Summary:
- **Total Findings:** [X]
- **🔴 CRITICAL (P1):** [count] - BLOCKS MERGE
- **🟡 IMPORTANT (P2):** [count] - Should Fix
- **🔵 NICE-TO-HAVE (P3):** [count] - Enhancements
### Created Todo Files:
**P1 - Critical (BLOCKS MERGE):**
- `001-pending-p1-{finding}.md` - {description}
- `002-pending-p1-{finding}.md` - {description}
**P2 - Important:**
- `003-pending-p2-{finding}.md` - {description}
- `004-pending-p2-{finding}.md` - {description}
**P3 - Nice-to-Have:**
- `005-pending-p3-{finding}.md` - {description}
### Review Agents Used:
- kieran-rails-reviewer
- security-sentinel
- performance-oracle
- architecture-strategist
- [other agents]
### Next Steps:
1. **Address P1 Findings**: CRITICAL - must be fixed before merge
- Review each P1 todo in detail
- Implement fixes or request exemption
- Verify fixes before merging PR
2. **Triage All Todos**:
```bash
ls todos/*-pending-*.md # View all pending todos
/triage # Use slash command for interactive triage
```
3. **Work on Approved Todos**:
```bash
/resolve_todo_parallel # Fix all approved items efficiently
```
4. **Track Progress**:
- Rename file when status changes: pending → ready → complete
- Update Work Log as you work
- Commit todos: `git add todos/ && git commit -m "refactor: add code review findings"`
### Severity Breakdown:
**🔴 P1 (Critical - Blocks Merge):**
- Security vulnerabilities
- Data corruption risks
- Breaking changes
- Critical architectural issues
**🟡 P2 (Important - Should Fix):**
- Performance issues
- Significant architectural concerns
- Major code quality problems
- Reliability issues
**🔵 P3 (Nice-to-Have):**
- Minor improvements
- Code cleanup
- Optimization opportunities
- Documentation updates
````
### Important: P1 Findings Block Merge
Any **🔴 P1 (CRITICAL)** findings must be addressed before merging the PR. Present these prominently and ensure they're resolved before accepting the PR.

View File

@@ -0,0 +1,275 @@
---
name: work
description: Execute work plans efficiently while maintaining quality and finishing features
argument-hint: "[plan file, specification, or todo file path]"
---
# Work Plan Execution Command
Execute a work plan efficiently while maintaining quality and finishing features.
## Introduction
This command takes a work document (plan, specification, or todo file) and executes it systematically. The focus is on **shipping complete features** by understanding requirements quickly, following existing patterns, and maintaining quality throughout.
## Input Document
<input_document> #$ARGUMENTS </input_document>
## Execution Workflow
### Phase 1: Quick Start
1. **Read Plan and Clarify**
- Read the work document completely
- Review any references or links provided in the plan
- If anything is unclear or ambiguous, ask clarifying questions now
- Get user approval to proceed
- **Do not skip this** - better to ask questions now than build the wrong thing
2. **Setup Environment**
Choose your work style:
**Option A: Live work on current branch**
```bash
git checkout main && git pull origin main
git checkout -b feature-branch-name
```
**Option B: Parallel work with a worktree (recommended for working on multiple features simultaneously)**
```bash
# Ask user first: "Work in parallel with worktree or on current branch?"
# If worktree:
skill: git-worktree
# The skill will create a new branch from main in an isolated worktree
```
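If the git-worktree skill isn't available, the rough manual equivalent looks like this (a sketch; the skill may name the worktree directory differently):
```bash
# Manual fallback -- the skill automates roughly this
git fetch origin main
git worktree add ../my-repo-feature-branch -b feature-branch-name origin/main
cd ../my-repo-feature-branch
```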
**Recommendation**: Use worktree if:
- You want to work on multiple features simultaneously
- You want to keep main clean while experimenting
- You plan to switch between branches frequently
Use live branch if:
- You're working on a single feature
- You prefer staying in the main repository
3. **Create Todo List**
- Use TodoWrite to break plan into actionable tasks
- Include dependencies between tasks
- Prioritize based on what needs to be done first
- Include testing and quality check tasks
- Keep tasks specific and completable
### Phase 2: Execute
1. **Task Execution Loop**
For each task in priority order:
```
while (tasks remain):
- Mark task as in_progress in TodoWrite
- Read any referenced files from the plan
- Look for similar patterns in codebase
- Implement following existing conventions
- Write tests for new functionality
- Run tests after changes
- Mark task as completed
```
2. **Follow Existing Patterns**
- The plan should reference similar code - read those files first
- Match naming conventions exactly
- Reuse existing components where possible
- Follow project coding standards (see CLAUDE.md)
- When in doubt, grep for similar implementations
3. **Test Continuously**
- Run relevant tests after each significant change
- Don't wait until the end to test
- Fix failures immediately
- Add new tests for new functionality
4. **Figma Design Sync** (if applicable)
For UI work with Figma designs:
- Implement components following design specs
- Use figma-design-sync agent iteratively to compare
- Fix visual differences identified
- Repeat until implementation matches design
5. **Track Progress**
- Keep TodoWrite updated as you complete tasks
- Note any blockers or unexpected discoveries
- Create new tasks if scope expands
- Keep user informed of major milestones
### Phase 3: Quality Check
1. **Run Core Quality Checks**
Always run before submitting:
```bash
# Run full test suite
bin/rails test
# Run linting (per CLAUDE.md)
# Use linting-agent before pushing to origin
```
2. **Consider Reviewer Agents** (Optional)
Use for complex, risky, or large changes:
- **code-simplicity-reviewer**: Check for unnecessary complexity
- **kieran-rails-reviewer**: Verify Rails conventions (Rails projects)
- **performance-oracle**: Check for performance issues
- **security-sentinel**: Scan for security vulnerabilities
- **cora-test-reviewer**: Review test quality (CORA projects)
Run reviewers in parallel with Task tool:
```
Task(code-simplicity-reviewer): "Review changes for simplicity"
Task(kieran-rails-reviewer): "Check Rails conventions"
```
Present findings to user and address critical issues.
3. **Final Validation**
- All TodoWrite tasks marked completed
- All tests pass
- Linting passes
- Code follows existing patterns
- Figma designs match (if applicable)
- No console errors or warnings
### Phase 4: Ship It
1. **Create Commit**
```bash
git add .
git status # Review what's being committed
git diff --staged # Check the changes
# Commit with conventional format
git commit -m "$(cat <<'EOF'
feat(scope): description of what and why
Brief explanation if needed.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
```
2. **Create Pull Request**
```bash
git push -u origin feature-branch-name
gh pr create --title "Feature: [Description]" --body "$(cat <<'EOF'
## Summary
- What was built
- Why it was needed
- Key decisions made
## Testing
- Tests added/modified
- Manual testing performed
## Screenshots/Videos
[If UI changes]
## Figma Design
[Link if applicable]
🤖 Generated with [Claude Code](https://claude.com/claude-code)
EOF
)"
```
3. **Notify User**
- Summarize what was completed
- Link to PR
- Note any follow-up work needed
- Suggest next steps if applicable
---
## Key Principles
### Start Fast, Execute Faster
- Get clarification once at the start, then execute
- Don't wait for perfect understanding - ask questions and move
- The goal is to **finish the feature**, not create perfect process
### The Plan is Your Guide
- Work documents should reference similar code and patterns
- Load those references and follow them
- Don't reinvent - match what exists
### Test As You Go
- Run tests after each change, not at the end
- Fix failures immediately
- Continuous testing prevents big surprises
### Quality is Built In
- Follow existing patterns
- Write tests for new code
- Run linting before pushing
- Use reviewer agents for complex/risky changes only
### Ship Complete Features
- Mark all tasks completed before moving on
- Don't leave features 80% done
- A finished feature that ships beats a perfect feature that doesn't
## Quality Checklist
Before creating PR, verify:
- [ ] All clarifying questions asked and answered
- [ ] All TodoWrite tasks marked completed
- [ ] Tests pass (run `bin/rails test`)
- [ ] Linting passes (use linting-agent)
- [ ] Code follows existing patterns
- [ ] Figma designs match implementation (if applicable)
- [ ] Commit messages follow conventional format
- [ ] PR description includes summary and testing notes
## When to Use Reviewer Agents
**Don't use by default.** Use reviewer agents only when:
- Large refactor affecting many files (10+)
- Security-sensitive changes (authentication, permissions, data access)
- Performance-critical code paths
- Complex algorithms or business logic
- User explicitly requests thorough review
For most features: tests + linting + following patterns is sufficient.
## Common Pitfalls to Avoid
- **Analysis paralysis** - Don't overthink, read the plan and execute
- **Skipping clarifying questions** - Ask now, not after building wrong thing
- **Ignoring plan references** - The plan has links for a reason
- **Testing at the end** - Test continuously or suffer later
- **Forgetting TodoWrite** - Track progress or lose track of what's done
- **80% done syndrome** - Finish the feature, don't move on early
- **Over-reviewing simple changes** - Save reviewer agents for complex work