feat(plugin): reorganize compounding-engineering v2.0.0

Major restructure of the compounding-engineering plugin:

## Agents (24 total, now categorized)
- review/ (10): architecture-strategist, code-simplicity-reviewer,
  data-integrity-guardian, dhh-rails-reviewer, kieran-rails-reviewer,
  kieran-python-reviewer, kieran-typescript-reviewer,
  pattern-recognition-specialist, performance-oracle, security-sentinel
- research/ (4): best-practices-researcher, framework-docs-researcher,
  git-history-analyzer, repo-research-analyst
- design/ (3): design-implementation-reviewer, design-iterator,
  figma-design-sync
- workflow/ (6): bug-reproduction-validator, every-style-editor,
  feedback-codifier, lint, pr-comment-resolver, spec-flow-analyzer
- docs/ (1): ankane-readme-writer

## Commands (15 total)
- Moved workflow commands to commands/workflows/ subdirectory
- Added: changelog, create-agent-skill, heal-skill, plan_review,
  prime, reproduce-bug, resolve_parallel, resolve_pr_parallel

## Skills (11 total)
- Added: andrew-kane-gem-writer, codify-docs, create-agent-skills,
  dhh-ruby-style, dspy-ruby, every-style-editor, file-todos,
  frontend-design, git-worktree, skill-creator
- Kept: gemini-imagegen

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
Kieran Klaassen
2025-11-24 11:42:18 -08:00
parent 8cd694c518
commit 8cc99ab483
99 changed files with 16491 additions and 647 deletions

View File

@@ -0,0 +1,144 @@
---
name: changelog
description: Create engaging changelogs for recent merges to main branch
argument-hint: "[optional: daily|weekly, or time period in days]"
---
You are a witty and enthusiastic product marketer tasked with creating a fun, engaging change log for an internal development team. Your goal is to summarize the latest merges to the main branch, highlighting new features, bug fixes, and giving credit to the hard-working developers.
## Time Period
- For daily changelogs: Look at PRs merged in the last 24 hours
- For weekly summaries: Look at PRs merged in the last 7 days
- Always specify the time period in the title (e.g., "Daily" vs "Weekly")
- Default: Get the latest changes from the last day on the repository's main branch (see the sketch below)
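A minimal sketch for pulling the merged PRs in the chosen window, assuming `gh` is authenticated and GNU `date` is available (adjust the date math for your shell):

```bash
# Hedged sketch: list PRs merged into main within the time window
SINCE=$(date -d "1 day ago" +%Y-%m-%d)   # use "7 days ago" for the weekly summary
gh pr list --state merged --base main \
  --search "merged:>=$SINCE" \
  --json number,title,author,labels,mergedAt --limit 100
```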
## PR Analysis
Analyze the provided GitHub changes and related issues. Look for:
1. New features that have been added
2. Bug fixes that have been implemented
3. Any other significant changes or improvements
4. References to specific issues and their details
5. Names of contributors who made the changes
6. Use the `gh` CLI to look up each PR and its description (see the sketch after this list)
7. Check PR labels to identify feature type (feature, bug, chore, etc.)
8. Look for breaking changes and highlight them prominently
9. Include PR numbers for traceability
10. Check if PRs are linked to issues and include issue context
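Where per-PR detail is needed, a short sketch using the `gh` CLI (the PR and issue numbers are placeholders; field availability depends on your `gh` version):

```bash
# Hedged sketch: pull details, labels, and linked issues for a single PR
gh pr view 123 --json title,body,labels,author,closingIssuesReferences
# Follow a linked issue for extra context
gh issue view 456 --json title,body,labels
```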
## Content Priorities
1. Breaking changes (if any) - MUST be at the top
2. User-facing features
3. Critical bug fixes
4. Performance improvements
5. Developer experience improvements
6. Documentation updates
## Formatting Guidelines
Now, create a change log summary with the following guidelines:
1. Keep it concise and to the point
2. Highlight the most important changes first
3. Group similar changes together (e.g., all new features, all bug fixes)
4. Include issue references where applicable
5. Mention the names of contributors, giving them credit for their work
6. Add a touch of humor or playfulness to make it engaging
7. Use emojis sparingly to add visual interest
8. Keep total message under 2000 characters for Discord
9. Use consistent emoji for each section
10. Format code/technical terms in backticks
11. Include PR numbers in parentheses (e.g., "Fixed login bug (#123)")
## Deployment Notes
When relevant, include:
- Database migrations required
- Environment variable updates needed
- Manual intervention steps post-deploy
- Dependencies that need updating
Your final output should be formatted as follows:
<change_log>
# 🚀 [Daily/Weekly] Change Log: [Current Date]
## 🚨 Breaking Changes (if any)
[List any breaking changes that require immediate attention]
## 🌟 New Features
[List new features here with PR numbers]
## 🐛 Bug Fixes
[List bug fixes here with PR numbers]
## 🛠️ Other Improvements
[List other significant changes or improvements]
## 🙌 Shoutouts
[Mention contributors and their contributions]
## 🎉 Fun Fact of the Day
[Include a brief, work-related fun fact or joke]
</change_log>
## Style Guide Review
Now review the changelog against the EVERY_WRITE_STYLE.md style guide, checking each guideline one by one to make sure you follow it. Use multiple agents running in parallel to make this faster.
Remember, your final output should only include the content within the <change_log> tags. Do not include any of your thought process or the original data in the output.
## Discord Posting
Once you have the changelog, post it to Discord using the following commands:
### Post to default channel:
```
rails runner 'DiscordWebhookClient.new.send_message(content: "{{CHANGELOG}}")'
```
### Post to 🌤cora channel:
```
# Write changelog to temporary file (replace {{CHANGELOG}} with the generated content)
cat > /tmp/changelog.txt <<'EOF'
{{CHANGELOG}}
EOF
# Post to Discord
rails runner 'content = File.read("/tmp/changelog.txt"); DiscordWebhookClient.new(token: "https://discord.com/api/webhooks/1378934451735760926/HUpZ81La0aPcbFspgwAsJZ7fcN1-6sj37BhRtrHeG19rhPnX5zZSpM8NttST6Qkb48uh").send_message(content: content)'
# Clean up
rm /tmp/changelog.txt
```
## Error Handling
- If no changes in the time period, post a "quiet day" message: "🌤️ Quiet day! No new changes merged."
- If unable to fetch PR details, list the PR numbers for manual review
- Always validate message length before posting to Discord (max 2000 chars)
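One way to sketch that length check before posting, assuming the changelog was written to the temporary file used above:

```bash
# Hedged sketch: refuse to post if the changelog exceeds Discord's 2000-character limit
chars=$(wc -m < /tmp/changelog.txt)
if [ "$chars" -gt 2000 ]; then
  echo "Changelog is ${chars} characters; trim it before posting." >&2
fi
```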
## Schedule Recommendations
- Run daily at 6 AM NY time for previous day's changes
- Run weekly summary on Mondays for the previous week
- Special runs after major releases or deployments
## Audience Considerations
Adjust the tone and detail level based on the channel:
- **Dev team channels**: Include technical details, performance metrics, code snippets
- **Product team channels**: Focus on user-facing changes and business impact
- **Leadership channels**: Highlight progress on key initiatives and blockers

View File

@@ -0,0 +1,7 @@
---
description: Create or edit Claude Code skills with expert guidance on structure and best practices
allowed-tools: Skill(create-agent-skills)
argument-hint: [skill description or requirements]
---
Invoke the create-agent-skills skill for: $ARGUMENTS

View File

@@ -1,3 +1,9 @@
---
name: generate_command
description: Create a new custom slash command following conventions and best practices
argument-hint: "[command purpose and requirements]"
---
# Create a Custom Claude Code Command
Create a new slash command in `.claude/commands/` for the requested task.
@@ -37,6 +43,23 @@ Create a new slash command in `.claude/commands/` for the requested task.
5. **Think first** - use "think hard" or "plan" keywords for complex problems
6. **Iterate** - guide the process step by step
## Required: YAML Frontmatter
**EVERY command MUST start with YAML frontmatter:**
```yaml
---
name: command-name
description: Brief description of what this command does (max 100 chars)
argument-hint: "[what arguments the command accepts]"
---
```
**Fields:**
- `name`: Lowercase command identifier (used internally)
- `description`: Clear, concise summary of command purpose
- `argument-hint`: Shows user what arguments are expected (e.g., `[file path]`, `[PR number]`, `[optional: format]`)
## Structure Your Command
```markdown
@@ -93,14 +116,8 @@ Implement #$ARGUMENTS following these steps:
- Ensure code follows CLAUDE.md conventions
4. Verify
- Run tests:
- Rails: `bin/rails test` or `bundle exec rspec`
- TypeScript: `npm test` or `yarn test` (Jest/Vitest)
- Python: `pytest` or `python -m pytest`
- Run linter:
- Rails: `bundle exec standardrb` or `bundle exec rubocop`
- TypeScript: `npm run lint` or `eslint .`
- Python: `ruff check .` or `flake8`
- Run tests: `bin/rails test`
- Run linter: `bundle exec standardrb`
- Check changes with git diff
5. Commit (optional)
@@ -108,4 +125,38 @@ Implement #$ARGUMENTS following these steps:
- Write clear commit message
```
Now create the command file at `.claude/commands/[name].md` with the structure above.
## Creating the Command File
1. **Create the file** at `.claude/commands/[name].md` or `.claude/commands/workflows/[name].md`
2. **Start with YAML frontmatter** (see section above)
3. **Structure the command** using the template above
4. **Test the command** by using it with appropriate arguments
## Command File Template
```markdown
---
name: command-name
description: What this command does
argument-hint: "[expected arguments]"
---
# Command Title
Brief introduction of what the command does and when to use it.
## Workflow
### Step 1: [First Major Step]
Details about what to do.
### Step 2: [Second Major Step]
Details about what to do.
## Success Criteria
- [ ] Expected outcome 1
- [ ] Expected outcome 2
```

View File

@@ -0,0 +1,141 @@
---
description: Heal skill documentation by applying corrections discovered during execution with approval workflow
argument-hint: [optional: specific issue to fix]
allowed-tools: [Read, Edit, Bash(ls:*), Bash(git:*)]
---
<objective>
Update a skill's SKILL.md and related files based on corrections discovered during execution.
Analyze the conversation to detect which skill is running, reflect on what went wrong, propose specific fixes, get user approval, then apply changes with optional commit.
</objective>
<context>
Skill detection: !`ls -1 ./skills/*/SKILL.md | head -5`
</context>
<quick_start>
<workflow>
1. **Detect skill** from conversation context (invocation messages, recent SKILL.md references)
2. **Reflect** on what went wrong and how you discovered the fix
3. **Present** proposed changes with before/after diffs
4. **Get approval** before making any edits
5. **Apply** changes and optionally commit
</workflow>
</quick_start>
<process>
<step_1 name="detect_skill">
Identify the skill from conversation context:
- Look for skill invocation messages
- Check which SKILL.md was recently referenced
- Examine current task context
Set: `SKILL_NAME=[skill-name]` and `SKILL_DIR=./skills/$SKILL_NAME`
If unclear, ask the user.
</step_1>
<step_2 name="reflection_and_analysis">
Focus on $ARGUMENTS if provided, otherwise analyze broader context.
Determine:
- **What was wrong**: Quote specific sections from SKILL.md that are incorrect
- **Discovery method**: Context7, error messages, trial and error, documentation lookup
- **Root cause**: Outdated API, incorrect parameters, wrong endpoint, missing context
- **Scope of impact**: Single section or multiple? Related files affected?
- **Proposed fix**: Which files, which sections, before/after for each
</step_2>
<step_3 name="scan_affected_files">
```bash
ls -la $SKILL_DIR/
ls -la $SKILL_DIR/references/ 2>/dev/null
ls -la $SKILL_DIR/scripts/ 2>/dev/null
```
</step_3>
<step_4 name="present_proposed_changes">
Present changes in this format:
```
**Skill being healed:** [skill-name]
**Issue discovered:** [1-2 sentence summary]
**Root cause:** [brief explanation]
**Files to be modified:**
- [ ] SKILL.md
- [ ] references/[file].md
- [ ] scripts/[file].py
**Proposed changes:**
### Change 1: SKILL.md - [Section name]
**Location:** Line [X] in SKILL.md
**Current (incorrect):**
```
[exact text from current file]
```
**Corrected:**
```
[new text]
```
**Reason:** [why this fixes the issue]
[repeat for each change across all files]
**Impact assessment:**
- Affects: [authentication/API endpoints/parameters/examples/etc.]
**Verification:**
These changes will prevent: [specific error that prompted this]
```
</step_4>
<step_5 name="request_approval">
```
Should I apply these changes?
1. Yes, apply and commit all changes
2. Apply but don't commit (let me review first)
3. Revise the changes (I'll provide feedback)
4. Cancel (don't make changes)
Choose (1-4):
```
**Wait for user response. Do not proceed without approval.**
</step_5>
<step_6 name="apply_changes">
Only after approval (option 1 or 2):
1. Use Edit tool for each correction across all files
2. Read back modified sections to verify
3. If option 1, commit with structured message showing what was healed
4. Confirm completion with file list
</step_6>
</process>
<success_criteria>
- Skill correctly detected from conversation context
- All incorrect sections identified with before/after
- User approved changes before application
- All edits applied across SKILL.md and related files
- Changes verified by reading back
- Commit created if user chose option 1
- Completion confirmed with file list
</success_criteria>
<verification>
Before completing:
- Read back each modified section to confirm changes applied
- Ensure cross-file consistency (SKILL.md examples match references/)
- Verify git commit created if option 1 was selected
- Check no unintended files were modified
</verification>

View File

@@ -0,0 +1,7 @@
---
name: plan_review
description: Have multiple specialized agents review a plan in parallel
argument-hint: "[plan file path or plan content]"
---
Have @agent-dhh-rails-reviewer @agent-kieran-rails-reviewer @agent-code-simplicity-reviewer review this plan in parallel.

View File

@@ -0,0 +1,3 @@
Avoid over-engineering. Only make changes that are directly requested or clearly necessary. Keep solutions simple and focused. Don't add features, refactor code, or make "improvements" beyond what was asked. A bug fix doesn't need surrounding code cleaned up. A simple feature doesn't need extra configurability. Don't add error handling, fallbacks, or validation for scenarios that can't happen. Trust internal code and framework guarantees. Only validate at system boundaries (user input, external APIs). Don't use backwards-compatibility shims when you can just change the code. Don't create helpers, utilities, or abstractions for one-time operations. Don't design for hypothetical future requirements. The right amount of complexity is the minimum needed for the current task. Reuse existing abstractions where possible and follow the DRY principle.
ALWAYS read and understand relevant files before proposing code edits. Do not speculate about code you have not inspected. If the user references a specific file/path, you MUST open and inspect it before explaining or proposing fixes. Be rigorous and persistent in searching code for key facts. Thoroughly review the style, conventions, and abstractions of the codebase before implementing new features or abstractions.

View File

@@ -0,0 +1,27 @@
---
name: reproduce-bug
description: Reproduce and investigate a bug using logs and console inspection
argument-hint: "[GitHub issue number]"
---
Look at GitHub issue #$ARGUMENTS and read the issue description and comments.
Then, run the following agents in parallel to reproduce the bug:
1. Task rails-console-explorer(issue_description)
2. Task appsignal-log-investigator (issue_description)
Then think about where things could go wrong by examining the codebase. Identify logging output we can search for.
Then, run the following agents in parallel again to find any logs that could help us reproduce the bug.
1. Task rails-console-explorer(issue_description)
2. Task appsignal-log-investigator (issue_description)
Keep running these agents until you have a good idea of what is going on.
**Reference Collection:**
- [ ] Document all research findings with specific file paths (e.g., `app/services/example_service.rb:42`)
Then, add a comment to the issue with the findings and how to reproduce the bug.
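A minimal sketch for posting the findings back to the issue, assuming the write-up was saved to a temporary file first:

```bash
# Hedged sketch: comment on the issue with the reproduction findings
gh issue comment "$ISSUE_NUMBER" --body-file /tmp/reproduction-findings.md
```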

View File

@@ -0,0 +1,34 @@
---
name: resolve_parallel
description: Resolve all TODO comments using parallel processing
argument-hint: "[optional: specific TODO pattern or file]"
---
Resolve all TODO comments using parallel processing.
## Workflow
### 1. Analyze
Gather the TODO items from the context above.
### 2. Plan
Create a TodoWrite list of all unresolved items grouped by type. Identify dependencies and prioritize the items that others depend on; for example, if a rename is required, complete it before the items that build on it. Output a mermaid flow diagram showing the execution order: can everything run in parallel, or must one item finish first before the rest run in parallel? Put the to-dos in the mermaid diagram flow-wise so the agent knows how to proceed in order.
### 3. Implement (PARALLEL)
Spawn a pr-comment-resolver agent for each unresolved item in parallel.
So if there are 3 comments, it will spawn 3 pr-comment-resolver agents in parallel, like this:
1. Task pr-comment-resolver(comment1)
2. Task pr-comment-resolver(comment2)
3. Task pr-comment-resolver(comment3)
Always run parallel subagents/Tasks, one for each TODO item.
### 4. Commit & Resolve
- Commit changes
- Push to remote

View File

@@ -0,0 +1,49 @@
---
name: resolve_pr_parallel
description: Resolve all PR comments using parallel processing
argument-hint: "[optional: PR number or current PR]"
---
Resolve all PR comments using parallel processing.
Claude Code automatically detects and understands your git context:
- Current branch detection
- Associated PR context
- All PR comments and review threads
- Works with any PR when you specify the PR number; otherwise it will ask for one.
## Workflow
### 1. Analyze
Get all unresolved comments for PR
```bash
gh pr status
bin/get-pr-comments PR_NUMBER
```
### 2. Plan
Create a TodoWrite list of all unresolved items grouped by type.
### 3. Implement (PARALLEL)
Spawn a pr-comment-resolver agent for each unresolved item in parallel.
So if there are 3 comments, it will spawn 3 pr-comment-resolver agents in parallel, like this:
1. Task pr-comment-resolver(comment1)
2. Task pr-comment-resolver(comment2)
3. Task pr-comment-resolver(comment3)
Always run parallel subagents/Tasks, one for each TODO item.
### 4. Commit & Resolve
- Commit changes
- Run bin/resolve-pr-thread THREAD_ID_1
- Push to remote
Last, run `bin/get-pr-comments PR_NUMBER` again to confirm all comments are resolved. They should be; if not, repeat the process from step 1.

View File

@@ -1,3 +1,9 @@
---
name: resolve_todo_parallel
description: Resolve all pending CLI todos using parallel processing
argument-hint: "[optional: specific todo ID or pattern]"
---
Resolve all TODO comments using parallel processing.
## Workflow

View File

@@ -1,390 +0,0 @@
# Review Command
<command_purpose> Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and Git worktrees for deep local inspection. </command_purpose>
## Introduction
<role>Senior Code Review Architect with expertise in security, performance, architecture, and quality assurance</role>
## Prerequisites
<requirements>
- Git repository with GitHub CLI (`gh`) installed and authenticated
- Clean main/master branch
- Proper permissions to create worktrees and access the repository
- For document reviews: Path to a markdown file or document
</requirements>
## Main Tasks
### 1. Worktree Creation and Branch Checkout (ALWAYS FIRST)
<review_target> #$ARGUMENTS </review_target>
<critical_requirement> MUST create worktree FIRST to enable local code analysis. No exceptions. </critical_requirement>
<thinking>
First, I need to determine the review target type and set up the worktree.
This enables all subsequent agents to analyze actual code, not just diffs.
</thinking>
#### Immediate Actions:
<task_list>
- [ ] Determine review type: PR number (numeric), GitHub URL, file path (.md), or empty (latest PR)
- [ ] Create worktree directory structure at `$git_root/.worktrees/reviews/pr-$identifier`
- [ ] Check out PR branch in isolated worktree using `gh pr checkout`
- [ ] Navigate to worktree - ALL subsequent analysis happens here
- Fetch PR metadata using `gh pr view --json` for title, body, files, linked issues
- Clone PR branch into worktree with full history `gh pr checkout $identifier`
- Set up language-specific analysis tools
- Prepare security scanning environment
Ensure that the worktree is set up correctly and that the PR is checked out. ONLY then proceed to the next step.
</task_list>
#### Detect Project Type
<thinking>
Determine the project type by analyzing the codebase structure and files.
This will inform which language-specific reviewers to use.
</thinking>
<project_type_detection>
Check for these indicators to determine project type:
**Rails Project**:
- `Gemfile` with `rails` gem
- `config/application.rb`
- `app/` directory structure
**TypeScript Project**:
- `tsconfig.json`
- `package.json` with TypeScript dependencies
- `.ts` or `.tsx` files
**Python Project**:
- `requirements.txt` or `pyproject.toml`
- `.py` files
- `setup.py` or `poetry.lock`
Based on detection, set appropriate reviewers for parallel execution.
</project_type_detection>
#### Parallel Agents to review the PR:
<parallel_tasks>
Run ALL or most of these agents at the same time, adjusting language-specific reviewers based on project type:
**Language-Specific Reviewers (choose based on project type)**:
For Rails projects:
1. Task kieran-rails-reviewer(PR content)
2. Task dhh-rails-reviewer(PR title)
3. If turbo is used: Task rails-turbo-expert(PR content)
For TypeScript projects:
1. Task kieran-typescript-reviewer(PR content)
For Python projects:
1. Task kieran-python-reviewer(PR content)
**Universal Reviewers (run for all project types)**:
4. Task git-history-analyzer(PR content)
5. Task dependency-detective(PR content)
6. Task pattern-recognition-specialist(PR content)
7. Task architecture-strategist(PR content)
8. Task code-philosopher(PR content)
9. Task security-sentinel(PR content)
10. Task performance-oracle(PR content)
11. Task devops-harmony-analyst(PR content)
12. Task data-integrity-guardian(PR content)
</parallel_tasks>
### 4. Ultra-Thinking Deep Dive Phases
<ultrathink_instruction> For each phase below, spend maximum cognitive effort. Think step by step. Consider all angles. Question assumptions. And bring all reviews in a synthesis to the user.</ultrathink_instruction>
<deliverable>
Complete system context map with component interactions
</deliverable>
#### Phase 3: Stakeholder Perspective Analysis
<thinking_prompt> ULTRA-THINK: Put yourself in each stakeholder's shoes. What matters to them? What are their pain points? </thinking_prompt>
<stakeholder_perspectives>
1. **Developer Perspective** <questions>
- How easy is this to understand and modify?
- Are the APIs intuitive?
- Is debugging straightforward?
- Can I test this easily? </questions>
2. **Operations Perspective** <questions>
- How do I deploy this safely?
- What metrics and logs are available?
- How do I troubleshoot issues?
- What are the resource requirements? </questions>
3. **End User Perspective** <questions>
- Is the feature intuitive?
- Are error messages helpful?
- Is performance acceptable?
- Does it solve my problem? </questions>
4. **Security Team Perspective** <questions>
- What's the attack surface?
- Are there compliance requirements?
- How is data protected?
- What are the audit capabilities? </questions>
5. **Business Perspective** <questions>
- What's the ROI?
- Are there legal/compliance risks?
- How does this affect time-to-market?
- What's the total cost of ownership? </questions> </stakeholder_perspectives>
#### Phase 4: Scenario Exploration
<thinking_prompt> ULTRA-THINK: Explore edge cases and failure scenarios. What could go wrong? How does the system behave under stress? </thinking_prompt>
<scenario_checklist>
- [ ] **Happy Path**: Normal operation with valid inputs
- [ ] **Invalid Inputs**: Null, empty, malformed data
- [ ] **Boundary Conditions**: Min/max values, empty collections
- [ ] **Concurrent Access**: Race conditions, deadlocks
- [ ] **Scale Testing**: 10x, 100x, 1000x normal load
- [ ] **Network Issues**: Timeouts, partial failures
- [ ] **Resource Exhaustion**: Memory, disk, connections
- [ ] **Security Attacks**: Injection, overflow, DoS
- [ ] **Data Corruption**: Partial writes, inconsistency
- [ ] **Cascading Failures**: Downstream service issues </scenario_checklist>
### 6. Multi-Angle Review Perspectives
#### Technical Excellence Angle
- Code craftsmanship evaluation
- Engineering best practices
- Technical documentation quality
- Tooling and automation assessment
#### Business Value Angle
- Feature completeness validation
- Performance impact on users
- Cost-benefit analysis
- Time-to-market considerations
#### Risk Management Angle
- Security risk assessment
- Operational risk evaluation
- Compliance risk verification
- Technical debt accumulation
#### Team Dynamics Angle
- Code review etiquette
- Knowledge sharing effectiveness
- Collaboration patterns
- Mentoring opportunities
### 4. Simplification and Minimalism Review
Run the Task code-simplicity-reviewer() to see if we can simplify the code.
### 5. Findings Synthesis and Todo Creation
<critical_requirement> All findings MUST be converted to actionable todos in the CLI todo system </critical_requirement>
#### Step 1: Synthesize All Findings
<thinking>
Consolidate all agent reports into a categorized list of findings.
Remove duplicates, prioritize by severity and impact.
</thinking>
<synthesis_tasks>
- [ ] Collect findings from all parallel agents
- [ ] Categorize by type: security, performance, architecture, quality, etc.
- [ ] Assign severity levels: 🔴 CRITICAL (P1), 🟡 IMPORTANT (P2), 🔵 NICE-TO-HAVE (P3)
- [ ] Remove duplicate or overlapping findings
- [ ] Estimate effort for each finding (Small/Medium/Large)
</synthesis_tasks>
#### Step 2: Present Findings for Triage
For EACH finding, present in this format:
```
---
Finding #X: [Brief Title]
Severity: 🔴 P1 / 🟡 P2 / 🔵 P3
Category: [Security/Performance/Architecture/Quality/etc.]
Description:
[Detailed explanation of the issue or improvement]
Location: [file_path:line_number]
Problem:
[What's wrong or could be better]
Impact:
[Why this matters, what could happen]
Proposed Solution:
[How to fix it]
Effort: Small/Medium/Large
---
Do you want to add this to the todo list?
1. yes - create todo file
2. next - skip this finding
3. custom - modify before creating
```
#### Step 3: Create Todo Files for Approved Findings
<instructions>
When user says "yes", create a properly formatted todo file:
</instructions>
<todo_creation_process>
1. **Determine next issue ID:**
```bash
ls todos/ | grep -o '^[0-9]\+' | sort -n | tail -1
```
2. **Generate filename:**
```
{next_id}-pending-{priority}-{brief-description}.md
```
Example: `042-pending-p1-sql-injection-risk.md`
3. **Create file from template:**
```bash
cp todos/000-pending-p1-TEMPLATE.md todos/{new_filename}
```
4. **Populate with finding data:**
```yaml
---
status: pending
priority: p1 # or p2, p3 based on severity
issue_id: "042"
tags: [code-review, security, rails] # add relevant tags
dependencies: []
---
# [Finding Title]
## Problem Statement
[Detailed description from finding]
## Findings
- Discovered during code review by [agent names]
- Location: [file_path:line_number]
- [Key discoveries from agents]
## Proposed Solutions
### Option 1: [Primary solution from finding]
- **Pros**: [Benefits]
- **Cons**: [Drawbacks]
- **Effort**: [Small/Medium/Large]
- **Risk**: [Low/Medium/High]
## Recommended Action
[Leave blank - needs manager triage]
## Technical Details
- **Affected Files**: [List from finding]
- **Related Components**: [Models, controllers, services affected]
- **Database Changes**: [Yes/No - describe if yes]
## Resources
- Code review PR: [PR link if applicable]
- Related findings: [Other finding numbers]
- Agent reports: [Which agents flagged this]
## Acceptance Criteria
- [ ] [Specific criteria based on solution]
- [ ] Tests pass
- [ ] Code reviewed
## Work Log
### {date} - Code Review Discovery
**By:** Claude Code Review System
**Actions:**
- Discovered during comprehensive code review
- Analyzed by multiple specialized agents
- Categorized and prioritized
**Learnings:**
- [Key insights from agent analysis]
## Notes
Source: Code review performed on {date}
Review command: /workflows:review {arguments}
```
5. **Track creation:**
Add to TodoWrite list if tracking multiple findings
</todo_creation_process>
#### Step 4: Summary Report
After processing all findings:
```markdown
## Code Review Complete
**Review Target:** [PR number or branch]
**Total Findings:** [X]
**Todos Created:** [Y]
### Created Todos:
- `{issue_id}-pending-p1-{description}.md` - {title}
- `{issue_id}-pending-p2-{description}.md` - {title}
...
### Skipped Findings:
- [Finding #Z]: {reason}
...
### Next Steps:
1. Triage pending todos: `ls todos/*-pending-*.md`
2. Use `/triage` to review and approve
3. Work on approved items: `/resolve_todo_parallel`
```
#### Alternative: Batch Creation
If user wants to convert all findings to todos without review:
```bash
# Ask: "Create todos for all X findings? (yes/no/show-critical-only)"
# If yes: create todo files for all findings in parallel
# If show-critical-only: only present P1 findings for triage
```

View File

@@ -1,8 +1,18 @@
---
name: triage
description: Triage and categorize findings for the CLI todo system
argument-hint: "[findings list or source type]"
---
- First set the /model to Haiku
- Then read all pending todos in the todos/ directory
Present all findings, decisions, or issues here one by one for triage. The goal is to go through each item and decide whether to add it to the CLI todo system.
**IMPORTANT: DO NOT CODE ANYTHING DURING TRIAGE!**
This command is for:
- Triaging code review findings
- Processing security audit results
- Reviewing performance analysis
@@ -46,38 +56,43 @@ Do you want to add this to the todo list?
**When user says "yes":**
1. **Determine next issue ID:**
```bash
ls todos/ | grep -o '^[0-9]\+' | sort -n | tail -1
```
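If a new ID is needed, one way to increment it with zero-padding (a sketch; assumes three-digit IDs as in the examples):

```bash
# Hedged sketch: compute the next zero-padded issue ID
last=$(ls todos/ | grep -o '^[0-9]\+' | sort -n | tail -1)
next=$(printf "%03d" $((10#${last:-0} + 1)))
echo "$next"
```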
1. **Update existing todo file** (if it exists) or **Create new filename:**
If todo already exists (from code review):
- Rename file from `{id}-pending-{priority}-{desc}.md` → `{id}-ready-{priority}-{desc}.md`
- Update YAML frontmatter: `status: pending` → `status: ready`
- Keep issue_id, priority, and description unchanged
If creating new todo:
2. **Create filename:**
```
{next_id}-pending-{priority}-{brief-description}.md
{next_id}-ready-{priority}-{brief-description}.md
```
Priority mapping:
- 🔴 P1 (CRITICAL) → `p1`
- 🟡 P2 (IMPORTANT) → `p2`
- 🔵 P3 (NICE-TO-HAVE) → `p3`
Example: `042-pending-p1-transaction-boundaries.md`
Example: `042-ready-p1-transaction-boundaries.md`
3. **Create from template:**
```bash
cp todos/000-pending-p1-TEMPLATE.md todos/{new_filename}
```
2. **Update YAML frontmatter:**
4. **Populate the file:**
```yaml
---
status: pending
priority: p1 # or p2, p3 based on severity
status: ready # IMPORTANT: Change from "pending" to "ready"
priority: p1 # or p2, p3 based on severity
issue_id: "042"
tags: [category, relevant-tags]
dependencies: []
---
```
3. **Populate or update the file:**
```yaml
# [Issue Title]
## Problem Statement
@@ -97,7 +112,7 @@ Do you want to add this to the todo list?
- **Risk**: [Low/Medium/High]
## Recommended Action
[Leave blank - will be filled during approval]
[Filled during triage - specific action plan]
## Technical Details
- **Affected Files**: [List files]
@@ -115,12 +130,12 @@ Do you want to add this to the todo list?
## Work Log
### {date} - Initial Discovery
### {date} - Approved for Work
**By:** Claude Triage System
**Actions:**
- Issue discovered during [triage session type]
- Categorized as {severity}
- Estimated effort: {effort}
- Issue approved during triage session
- Status changed from pending → ready
- Ready to be picked up and worked on
**Learnings:**
- [Context and insights]
@@ -129,14 +144,16 @@ Do you want to add this to the todo list?
Source: Triage session on {date}
```
5. **Confirm creation:**
"✅ Created: `{filename}` - Issue #{issue_id}"
4. **Confirm approval:** "✅ Approved: `{new_filename}` (Issue #{issue_id}) - Status: **ready** → Ready to work on"
**When user says "next":**
- **Delete the todo file** - Remove it from todos/ directory since it's not relevant
- Skip to the next item
- Track skipped items for summary
**When user says "custom":**
- Ask what to modify (priority, description, details)
- Update the information
- Present revised version
@@ -152,52 +169,76 @@ Do you want to add this to the todo list?
After all items processed:
```markdown
````markdown
## Triage Complete
**Total Items:** [X]
**Todos Created:** [Y]
**Skipped:** [Z]
**Total Items:** [X] **Todos Approved (ready):** [Y] **Skipped:** [Z]
### Created Todos:
- `042-pending-p1-transaction-boundaries.md` - Transaction boundary issue
- `043-pending-p2-cache-optimization.md` - Cache performance improvement
...
### Approved Todos (Ready for Work):
### Skipped Items:
- Item #5: [reason]
- Item #12: [reason]
- `042-ready-p1-transaction-boundaries.md` - Transaction boundary issue
- `043-ready-p2-cache-optimization.md` - Cache performance improvement ...
### Skipped Items (Deleted):
- Item #5: [reason] - Removed from todos/
- Item #12: [reason] - Removed from todos/
### Summary of Changes Made:
During triage, the following status updates occurred:
- **Pending → Ready:** Filenames and frontmatter updated to reflect approved status
- **Deleted:** Todo files for skipped findings removed from todos/ directory
- Each approved file now has `status: ready` in YAML frontmatter
### Next Steps:
1. Review pending todos: `ls todos/*-pending-*.md`
2. Approve for work: Move from pending → ready status
3. Start work: Use `/resolve_todo_parallel` or pick individually
1. View approved todos ready for work:
```bash
ls todos/*-ready-*.md
```
````
2. Start work on approved items:
```bash
/resolve_todo_parallel # Work on multiple approved items efficiently
```
3. Or pick individual items to work on
4. As you work, update todo status:
- Ready → In Progress (in your local context as you work)
- In Progress → Complete (rename file: ready → complete, update frontmatter)
```
## Example Response Format
```
---
Issue #5: Missing Transaction Boundaries for Multi-Step Operations
Severity: 🔴 P1 (CRITICAL)
Category: Data Integrity / Security
Description:
The google_oauth2_connected callback in GoogleOauthCallbacks concern performs multiple database
operations without transaction protection. If any step fails midway, the database is left in an
inconsistent state.
Description: The google_oauth2_connected callback in GoogleOauthCallbacks concern performs multiple database operations without transaction protection. If any step fails midway, the database is left in an inconsistent state.
Location: app/controllers/concerns/google_oauth_callbacks.rb:13-50
Problem Scenario:
1. User.update succeeds (email changed)
2. Account.save! fails (validation error)
3. Result: User has changed email but no associated Account
4. Next login attempt fails completely
Operations Without Transaction:
- User confirmation (line 13)
- Waitlist removal (line 14)
- User profile update (line 21-23)
@@ -205,18 +246,65 @@ Operations Without Transaction:
- Avatar attachment (line 39-45)
- Journey creation (line 47)
Proposed Solution:
Wrap all operations in ApplicationRecord.transaction do ... end block
Proposed Solution: Wrap all operations in ApplicationRecord.transaction do ... end block
Estimated Effort: Small (30 minutes)
---
Do you want to add this to the todo list?
1. yes - create todo file
2. next - skip this item
3. custom - modify before creating
```
Do not code. If the user says yes, mark the todo as ready to pick up. If you make any changes, update the file and then continue to the next item. If next is selected, remove the todo from the list since it's not relevant.
## Important Implementation Details
Every time you present a todo header, state the triage progress: how many are done, how many remain, and an estimated time to completion based on how quickly you are moving through them.
### Status Transitions During Triage
**When "yes" is selected:**
1. Rename file: `{id}-pending-{priority}-{desc}.md` → `{id}-ready-{priority}-{desc}.md`
2. Update YAML frontmatter: `status: pending` → `status: ready`
3. Update Work Log with triage approval entry
4. Confirm: "✅ Approved: `{filename}` (Issue #{issue_id}) - Status: **ready**"
**When "next" is selected:**
1. Delete the todo file from todos/ directory
2. Skip to next item
3. No file remains in the system
### Progress Tracking
Every time you present a todo as a header, include:
- **Progress:** X/Y completed (e.g., "3/10 completed")
- **Estimated time remaining:** Based on how quickly you're progressing
- **Pacing:** Monitor time per finding and adjust estimate accordingly
Example:
```
Progress: 3/10 completed | Estimated time: ~2 minutes remaining
```
### Do Not Code During Triage
- ✅ Present findings
- ✅ Make yes/next/custom decisions
- ✅ Update todo files (rename, frontmatter, work log)
- ❌ Do NOT implement fixes or write code
- ❌ Do NOT add detailed implementation details
- ❌ That's for /resolve_todo_parallel phase
```
When done give these options
```markdown
What would you like to do next?
1. run /resolve_todo_parallel to resolve the todos
2. commit the todos
3. nothing, go chill
```

View File

@@ -1,150 +0,0 @@
# Work Plan Execution Command
## Introduction
This command helps you analyze a work document (plan, Markdown file, specification, or any structured document), create a comprehensive todo list using the TodoWrite tool, and then systematically execute each task until the entire plan is completed. It combines deep analysis with practical execution to transform plans into reality.
## Prerequisites
- A work document to analyze (plan file, specification, or any structured document)
- Clear understanding of project context and goals
- Access to necessary tools and permissions for implementation
- Ability to test and validate completed work
- Git repository with main branch
## Main Tasks
### 1. Setup Development Environment
- Ensure main branch is up to date
- Create feature branch with descriptive name
- Setup worktree for isolated development
- Configure development environment
### 2. Analyze Input Document
<input_document> #$ARGUMENTS </input_document>
## Execution Workflow
### Phase 1: Environment Setup
1. **Update Main Branch**
```bash
git checkout main
git pull origin main
```
2. **Create Feature Branch and Worktree**
- Determine appropriate branch name from document
- Get the root directory of the Git repository:
```bash
git_root=$(git rev-parse --show-toplevel)
```
- Create worktrees directory if it doesn't exist:
```bash
mkdir -p "$git_root/.worktrees"
```
- Add .worktrees to .gitignore if not already there:
```bash
if ! grep -q "^\.worktrees$" "$git_root/.gitignore"; then
echo ".worktrees" >> "$git_root/.gitignore"
fi
```
- Create the new worktree with feature branch:
```bash
git worktree add -b feature-branch-name "$git_root/.worktrees/feature-branch-name" main
```
- Change to the new worktree directory:
```bash
cd "$git_root/.worktrees/feature-branch-name"
```
3. **Verify Environment**
- Confirm in correct worktree directory
- Install dependencies if needed
- Run initial tests to ensure clean state
### Phase 2: Document Analysis and Planning
1. **Read Input Document**
- Use Read tool to examine the work document
- Identify all deliverables and requirements
- Note any constraints or dependencies
- Extract success criteria
2. **Create Task Breakdown**
- Convert requirements into specific tasks
- Add implementation details for each task
- Include testing and validation steps
- Consider edge cases and error handling
3. **Build Todo List**
- Use TodoWrite to create comprehensive list
- Set priorities based on dependencies
- Include all subtasks and checkpoints
- Add documentation and review tasks
### Phase 3: Systematic Execution
1. **Task Execution Loop**
```
while (tasks remain):
- Select next task (priority + dependencies)
- Mark as in_progress
- Execute task completely
- Validate completion
- Mark as completed
- Update progress
```
2. **Quality Assurance**
- Run tests after each task
- Execute lint and typecheck commands
- Verify no regressions
- Check against acceptance criteria
- Document any issues found
3. **Progress Tracking**
- Regularly update task status
- Note any blockers or delays
- Create new tasks for discoveries
- Maintain work visibility
### Phase 4: Completion and Submission
1. **Final Validation**
- Verify all tasks completed
- Run comprehensive test suite
- Execute final lint and typecheck
- Check all deliverables present
- Ensure documentation updated
2. **Prepare for Submission**
- Stage and commit all changes
- Write commit messages
- Push feature branch to remote
- Create detailed pull request
3. **Create Pull Request**
```bash
git push -u origin feature-branch-name
gh pr create --title "Feature: [Description]" --body "[Detailed description]"
```

View File

@@ -0,0 +1,198 @@
---
name: codify
description: Document a recently solved problem for the knowledge base
argument-hint: "[optional: brief context about the fix]"
---
# /codify
Coordinate multiple subagents working in parallel to document a recently solved problem.
## Purpose
Captures problem solutions while context is fresh, creating structured documentation in `docs/solutions/` with YAML frontmatter for searchability and future reference. Uses parallel subagents for maximum efficiency.
## Usage
```bash
/codify # Document the most recent fix
/codify [brief context] # Provide additional context hint
```
## Execution Strategy: Parallel Subagents
This command launches multiple specialized subagents IN PARALLEL to maximize efficiency:
### 1. **Context Analyzer** (Parallel)
- Extracts conversation history
- Identifies problem type, component, symptoms
- Validates against CORA schema
- Returns: YAML frontmatter skeleton
### 2. **Solution Extractor** (Parallel)
- Analyzes all investigation steps
- Identifies root cause
- Extracts working solution with code examples
- Returns: Solution content block
### 3. **Related Docs Finder** (Parallel)
- Searches `docs/solutions/` for related documentation
- Identifies cross-references and links
- Finds related GitHub issues
- Returns: Links and relationships
### 4. **Prevention Strategist** (Parallel)
- Develops prevention strategies
- Creates best practices guidance
- Generates test cases if applicable
- Returns: Prevention/testing content
### 5. **Category Classifier** (Parallel)
- Determines optimal `docs/solutions/` category
- Validates category against schema
- Suggests filename based on slug
- Returns: Final path and filename
### 6. **Documentation Writer** (Parallel)
- Assembles complete markdown file
- Validates YAML frontmatter
- Formats content for readability
- Creates the file in correct location
### 7. **Optional: Specialized Agent Invocation** (Post-Documentation)
Based on problem type detected, automatically invoke applicable agents:
- **performance_issue** → `performance-oracle`
- **security_issue** → `security-sentinel`
- **database_issue** → `data-integrity-guardian`
- **test_failure** → `cora-test-reviewer`
- Any code-heavy issue → `kieran-rails-reviewer` + `code-simplicity-reviewer`
## What It Captures
- **Problem symptom**: Exact error messages, observable behavior
- **Investigation steps tried**: What didn't work and why
- **Root cause analysis**: Technical explanation
- **Working solution**: Step-by-step fix with code examples
- **Prevention strategies**: How to avoid in future
- **Cross-references**: Links to related issues and docs
## Preconditions
<preconditions enforcement="advisory">
<check condition="problem_solved">
Problem has been solved (not in-progress)
</check>
<check condition="solution_verified">
Solution has been verified working
</check>
<check condition="non_trivial">
Non-trivial problem (not simple typo or obvious error)
</check>
</preconditions>
## What It Creates
**Organized documentation:**
- File: `docs/solutions/[category]/[filename].md`
**Categories auto-detected from problem:**
- build-errors/
- test-failures/
- runtime-errors/
- performance-issues/
- database-issues/
- security-issues/
- ui-bugs/
- integration-issues/
- logic-errors/
## Success Output
```
✓ Parallel documentation generation complete
Primary Subagent Results:
✓ Context Analyzer: Identified performance_issue in brief_system
✓ Solution Extractor: Extracted 3 code fixes
✓ Related Docs Finder: Found 2 related issues
✓ Prevention Strategist: Generated test cases
✓ Category Classifier: docs/solutions/performance-issues/
✓ Documentation Writer: Created complete markdown
Specialized Agent Reviews (Auto-Triggered):
✓ performance-oracle: Validated query optimization approach
✓ kieran-rails-reviewer: Code examples meet Rails standards
✓ code-simplicity-reviewer: Solution is appropriately minimal
✓ every-style-editor: Documentation style verified
File created:
- docs/solutions/performance-issues/n-plus-one-brief-generation.md
This documentation will be searchable for future reference when similar
issues occur in the Email Processing or Brief System modules.
What's next?
1. Continue workflow (recommended)
2. Link related documentation
3. Update other references
4. View documentation
5. Other
```
## Why This Matters
This creates a compounding knowledge system:
1. First time you solve "N+1 query in brief generation" → Research (30 min)
2. Document the solution → docs/solutions/performance-issues/n-plus-one-briefs.md (5 min)
3. Next time similar issue occurs → Quick lookup (2 min)
4. Knowledge compounds → Team gets smarter
The feedback loop:
```
Build → Test → Find Issue → Research → Improve → Document → Validate → Deploy
↑ ↓
└──────────────────────────────────────────────────────────────────────┘
```
## Auto-Invoke
<auto_invoke> <trigger_phrases> - "that worked" - "it's fixed" - "working now" - "problem solved" </trigger_phrases>
<manual_override> Use /codify [context] to document immediately without waiting for auto-detection. </manual_override> </auto_invoke>
## Routes To
`codify-docs` skill
## Applicable Specialized Agents
Based on problem type, these agents can enhance documentation:
### Code Quality & Review
- **kieran-rails-reviewer**: Reviews code examples for Rails best practices
- **code-simplicity-reviewer**: Ensures solution code is minimal and clear
- **pattern-recognition-specialist**: Identifies anti-patterns or repeating issues
### Specific Domain Experts
- **performance-oracle**: Analyzes performance_issue category solutions
- **security-sentinel**: Reviews security_issue solutions for vulnerabilities
- **cora-test-reviewer**: Creates test cases for prevention strategies
- **data-integrity-guardian**: Reviews database_issue migrations and queries
### Enhancement & Documentation
- **best-practices-researcher**: Enriches solution with industry best practices
- **every-style-editor**: Reviews documentation style and clarity
- **framework-docs-researcher**: Links to Rails/gem documentation references
### When to Invoke
- **Auto-triggered** (optional): Agents can run post-documentation for enhancement
- **Manual trigger**: User can invoke agents after /codify completes for deeper review
## Related Commands
- `/research [topic]` - Deep investigation (searches docs/solutions/ for patterns)
- `/plan` - Planning workflow (references documented solutions)

View File

@@ -1,4 +1,10 @@
# Create GitHub Issue
---
name: plan
description: Transform feature descriptions into well-structured project plans following conventions
argument-hint: "[feature description, bug report, or improvement idea]"
---
# Create a plan for a new feature or bug fix
## Introduction
@@ -19,8 +25,8 @@ First, I need to understand the project's conventions and existing patterns, lev
Run these three agents in parallel at the same time:
- Task repo-research-analyst(feature_description)
- Task best-practices-researcher (feature_description)
- Task framework-docs-researcher (feature_description)
- Task best-practices-researcher(feature_description)
- Task framework-docs-researcher(feature_description)
**Reference Collection:**
@@ -38,7 +44,6 @@ Think like a product manager - what would make this issue clear and actionable?
**Title & Categorization:**
- [ ] Draft clear, searchable issue title using conventional format (e.g., `feat:`, `fix:`, `docs:`)
- [ ] Identify appropriate labels from repository's label set (`gh label list`)
- [ ] Determine issue type: enhancement, bug, refactor
**Stakeholder Analysis:**
@@ -53,9 +58,21 @@ Think like a product manager - what would make this issue clear and actionable?
- [ ] Gather supporting materials (error logs, screenshots, design mockups)
- [ ] Prepare code examples or reproduction steps if applicable, name the mock filenames in the lists
### 3. Choose Implementation Detail Level
### 3. SpecFlow Analysis
Select how comprehensive you want the issue to be:
After planning the issue structure, run SpecFlow Analyzer to validate and refine the feature specification:
- Task spec-flow-analyzer(feature_description, research_findings)
**SpecFlow Analyzer Output:**
- [ ] Review SpecFlow analysis results
- [ ] Incorporate any identified gaps or edge cases into the issue
- [ ] Update acceptance criteria based on SpecFlow findings
### 4. Choose Implementation Detail Level
Select how comprehensive you want the issue to be; simpler is usually better.
#### 📄 MINIMAL (Quick Issue)
@@ -97,7 +114,6 @@ end
- Related issue: #[issue_number]
- Documentation: [relevant_docs_url]
````
#### 📋 MORE (Standard Issue)
@@ -275,7 +291,7 @@ end
- Design documents: [links]
```
### 4. Issue Creation & Formatting
### 5. Issue Creation & Formatting
<thinking>
Apply best practices for clarity and actionability, making the issue easy to scan and understand
@@ -302,26 +318,26 @@ Apply best practices for clarity and actionability, making the issue easy to sca
```markdown
# Good example with syntax highlighting and line references
```
\`\`\`ruby
```ruby
# app/services/user_service.rb:42
def process_user(user)
# Implementation here
end \`\`\`
end
```
````
# Collapsible error logs
<details>
<summary>Full error stacktrace</summary>
\`\`\` Error details here... \`\`\`
`Error details here...`
</details>
```
**AI-Era Considerations:**
@@ -331,7 +347,7 @@ end \`\`\`
- [ ] Emphasize comprehensive testing given rapid implementation
- [ ] Document any AI-generated code that needs human review
### 5. Final Review & Submission
### 6. Final Review & Submission
**Pre-submission Checklist:**
@@ -345,11 +361,9 @@ end \`\`\`
## Output Format
Present the complete issue content within `<github_issue>` tags, ready for GitHub CLI:
Also write the plan to plans/<issue_title>.md
```bash
gh issue create --title "[TITLE]" --body "[CONTENT]" --label "[LABELS]"
```
Now call the /plan_review command, making sure to pass the plan file as the argument.
## Thinking Approaches
@@ -357,3 +371,9 @@ gh issue create --title "[TITLE]" --body "[CONTENT]" --label "[LABELS]"
- **User-Centric:** Consider end-user impact and experience
- **Technical:** Evaluate implementation complexity and architecture fit
- **Strategic:** Align with project goals and roadmap
After you get the review back, ask the user questions about the current state of the plan and what the reviewers came back with. Make sure to understand whether this plan is too big or things are missing. Are there any other considerations that should be included? Keep asking questions until the user is happy with the plan. THEN update the plan file with the user's feedback.
Optionally, you can ask to create a GitHub issue from the plan file.
NEVER CODE! Just research and write the plan.

View File

@@ -0,0 +1,405 @@
---
name: review
description: Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and worktrees
argument-hint: "[PR number, GitHub URL, branch name, or latest]"
---
# Review Command
<command_purpose> Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and Git worktrees for deep local inspection. </command_purpose>
## Introduction
<role>Senior Code Review Architect with expertise in security, performance, architecture, and quality assurance</role>
## Prerequisites
<requirements>
- Git repository with GitHub CLI (`gh`) installed and authenticated
- Clean main/master branch
- Proper permissions to create worktrees and access the repository
- For document reviews: Path to a markdown file or document
</requirements>
## Main Tasks
### 1. Determine Review Target & Setup (ALWAYS FIRST)
<review_target> #$ARGUMENTS </review_target>
<thinking>
First, I need to determine the review target type and set up the code for analysis.
</thinking>
#### Immediate Actions:
<task_list>
- [ ] Determine review type: PR number (numeric), GitHub URL, file path (.md), or empty (current branch)
- [ ] Check current git branch
- [ ] If ALREADY on the PR branch → proceed with analysis on current branch
- [ ] If DIFFERENT branch → offer to use a worktree for isolated analysis: call `skill: git-worktree` with the branch name
- [ ] Fetch PR metadata using `gh pr view --json` for title, body, files, linked issues
- [ ] Set up language-specific analysis tools
- [ ] Prepare security scanning environment
- [ ] Make sure we are on the branch under review. Use `gh pr checkout` to switch to it, or check out the branch manually.
Ensure that the code is ready for analysis (either in worktree or on current branch). ONLY then proceed to the next step.
</task_list>
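A minimal sketch of this setup step, assuming `gh` is authenticated and `$PR_NUMBER` was resolved from the review target:

```bash
# Hedged sketch: fetch metadata and check out the PR branch for local analysis
gh pr view "$PR_NUMBER" --json title,body,files,headRefName
gh pr checkout "$PR_NUMBER"
git status   # confirm we are on the PR branch before running the agents
```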
#### Parallel Agents to review the PR:
<parallel_tasks>
Run ALL or most of these agents at the same time:
1. Task kieran-rails-reviewer(PR content)
2. Task dhh-rails-reviewer(PR title)
3. If turbo is used: Task rails-turbo-expert(PR content)
4. Task git-history-analyzer(PR content)
5. Task dependency-detective(PR content)
6. Task pattern-recognition-specialist(PR content)
7. Task architecture-strategist(PR content)
8. Task code-philosopher(PR content)
9. Task security-sentinel(PR content)
10. Task performance-oracle(PR content)
11. Task devops-harmony-analyst(PR content)
12. Task data-integrity-guardian(PR content)
</parallel_tasks>
### 4. Ultra-Thinking Deep Dive Phases
<ultrathink_instruction> For each phase below, spend maximum cognitive effort. Think step by step. Consider all angles. Question assumptions. And bring all reviews in a synthesis to the user.</ultrathink_instruction>
<deliverable>
Complete system context map with component interactions
</deliverable>
#### Phase 3: Stakeholder Perspective Analysis
<thinking_prompt> ULTRA-THINK: Put yourself in each stakeholder's shoes. What matters to them? What are their pain points? </thinking_prompt>
<stakeholder_perspectives>
1. **Developer Perspective** <questions>
- How easy is this to understand and modify?
- Are the APIs intuitive?
- Is debugging straightforward?
- Can I test this easily? </questions>
2. **Operations Perspective** <questions>
- How do I deploy this safely?
- What metrics and logs are available?
- How do I troubleshoot issues?
- What are the resource requirements? </questions>
3. **End User Perspective** <questions>
- Is the feature intuitive?
- Are error messages helpful?
- Is performance acceptable?
- Does it solve my problem? </questions>
4. **Security Team Perspective** <questions>
- What's the attack surface?
- Are there compliance requirements?
- How is data protected?
- What are the audit capabilities? </questions>
5. **Business Perspective** <questions>
- What's the ROI?
- Are there legal/compliance risks?
- How does this affect time-to-market?
- What's the total cost of ownership? </questions> </stakeholder_perspectives>
#### Phase 4: Scenario Exploration
<thinking_prompt> ULTRA-THINK: Explore edge cases and failure scenarios. What could go wrong? How does the system behave under stress? </thinking_prompt>
<scenario_checklist>
- [ ] **Happy Path**: Normal operation with valid inputs
- [ ] **Invalid Inputs**: Null, empty, malformed data
- [ ] **Boundary Conditions**: Min/max values, empty collections
- [ ] **Concurrent Access**: Race conditions, deadlocks
- [ ] **Scale Testing**: 10x, 100x, 1000x normal load
- [ ] **Network Issues**: Timeouts, partial failures
- [ ] **Resource Exhaustion**: Memory, disk, connections
- [ ] **Security Attacks**: Injection, overflow, DoS
- [ ] **Data Corruption**: Partial writes, inconsistency
- [ ] **Cascading Failures**: Downstream service issues </scenario_checklist>
### 6. Multi-Angle Review Perspectives
#### Technical Excellence Angle
- Code craftsmanship evaluation
- Engineering best practices
- Technical documentation quality
- Tooling and automation assessment
#### Business Value Angle
- Feature completeness validation
- Performance impact on users
- Cost-benefit analysis
- Time-to-market considerations
#### Risk Management Angle
- Security risk assessment
- Operational risk evaluation
- Compliance risk verification
- Technical debt accumulation
#### Team Dynamics Angle
- Code review etiquette
- Knowledge sharing effectiveness
- Collaboration patterns
- Mentoring opportunities
### 6. Simplification and Minimalism Review
Run the Task code-simplicity-reviewer() to see if we can simplify the code.
### 7. Findings Synthesis and Todo Creation Using file-todos Skill
<critical_requirement> ALL findings MUST be stored in the todos/ directory using the file-todos skill. Create todo files immediately after synthesis - do NOT present findings for user approval first. Use the skill for structured todo management. </critical_requirement>
#### Step 1: Synthesize All Findings
<thinking>
Consolidate all agent reports into a categorized list of findings.
Remove duplicates, prioritize by severity and impact.
</thinking>
<synthesis_tasks>
- [ ] Collect findings from all parallel agents
- [ ] Categorize by type: security, performance, architecture, quality, etc.
- [ ] Assign severity levels: 🔴 CRITICAL (P1), 🟡 IMPORTANT (P2), 🔵 NICE-TO-HAVE (P3)
- [ ] Remove duplicate or overlapping findings
- [ ] Estimate effort for each finding (Small/Medium/Large)
</synthesis_tasks>
#### Step 2: Create Todo Files Using file-todos Skill
<critical_instruction> Use the file-todos skill to create todo files for ALL findings immediately. Do NOT present findings one-by-one asking for user approval. Create all todo files in parallel using the skill, then summarize results to user. </critical_instruction>
**Implementation Options:**
**Option A: Direct File Creation (Fast)**
- Create todo files directly using Write tool
- All findings in parallel for speed
- Use standard template from `.claude/skills/file-todos/assets/todo-template.md`
- Follow naming convention: `{issue_id}-pending-{priority}-{description}.md`
**Option B: Sub-Agents in Parallel (Recommended at Scale)**
For large PRs with 15+ findings, use sub-agents to create todo files in parallel:
```bash
# Launch multiple finding-creator agents in parallel
Task() - Create todos for first finding
Task() - Create todos for second finding
Task() - Create todos for third finding
# ...and so on, one Task per finding
```
Sub-agents can:
- Process multiple findings simultaneously
- Write detailed todo files with all sections filled
- Organize findings by severity
- Create comprehensive Proposed Solutions
- Add acceptance criteria and work logs
- Complete much faster than sequential processing
**Execution Strategy:**
1. Synthesize all findings into categories (P1/P2/P3)
2. Group findings by severity
3. Launch 3 parallel sub-agents (one per severity level)
4. Each sub-agent creates its batch of todos using the file-todos skill
5. Consolidate results and present summary
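Before presenting the summary, a quick sanity check of the consolidated batch can confirm every severity group was written out (this sketch assumes the naming convention described below):
```bash
# Count pending todos per severity after the sub-agents finish
ls todos/*-pending-p1-*.md 2>/dev/null | wc -l   # critical findings
ls todos/*-pending-p2-*.md 2>/dev/null | wc -l   # important findings
ls todos/*-pending-p3-*.md 2>/dev/null | wc -l   # nice-to-haves
```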
**Process (Using file-todos Skill):**
1. For each finding:
- Determine severity (P1/P2/P3)
- Write detailed Problem Statement and Findings
- Create 2-3 Proposed Solutions with pros/cons/effort/risk
- Estimate effort (Small/Medium/Large)
- Add acceptance criteria and work log
2. Use file-todos skill for structured todo management:
```bash
skill: file-todos
```
The skill provides:
- Template location: `.claude/skills/file-todos/assets/todo-template.md`
- Naming convention: `{issue_id}-{status}-{priority}-{description}.md`
- YAML frontmatter structure: status, priority, issue_id, tags, dependencies
- All required sections: Problem Statement, Findings, Solutions, etc.
3. Create todo files in parallel:
```bash
{next_id}-pending-{priority}-{description}.md
```
4. Examples:
```
001-pending-p1-path-traversal-vulnerability.md
002-pending-p1-api-response-validation.md
003-pending-p2-concurrency-limit.md
004-pending-p3-unused-parameter.md
```
5. Follow template structure from file-todos skill: `.claude/skills/file-todos/assets/todo-template.md`
**Todo File Structure (from template):**
Each todo must include:
- **YAML frontmatter**: status, priority, issue_id, tags, dependencies
- **Problem Statement**: What's broken/missing, why it matters
- **Findings**: Discoveries from agents with evidence/location
- **Proposed Solutions**: 2-3 options, each with pros/cons/effort/risk
- **Recommended Action**: (Filled during triage, leave blank initially)
- **Technical Details**: Affected files, components, database changes
- **Acceptance Criteria**: Testable checklist items
- **Work Log**: Dated record with actions and learnings
- **Resources**: Links to PR, issues, documentation, similar patterns
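For illustration, here is a minimal sketch of a directly created todo file, assuming the skill's template mirrors the frontmatter keys and sections listed above; the finding, file paths, and dates are hypothetical:
```bash
# Illustrative only; the canonical template is
# .claude/skills/file-todos/assets/todo-template.md
mkdir -p todos
cat > todos/001-pending-p1-path-traversal-vulnerability.md <<'EOF'
---
status: pending
priority: p1
issue_id: "001"
tags: [code-review, security]
dependencies: []
---
# Path traversal vulnerability in file download endpoint

## Problem Statement
User-supplied paths reach File.read without sanitization, allowing reads outside the uploads directory.

## Findings
- security-sentinel: `app/controllers/downloads_controller.rb:42` builds the path from `params[:file]`

## Proposed Solutions
1. Resolve the path and require it to stay under the uploads root (Small effort, Low risk)
2. Serve files via signed IDs instead of raw paths (Medium effort, Low risk)

## Recommended Action
<!-- Filled during triage -->

## Technical Details
Affected: downloads controller, file storage helper

## Acceptance Criteria
- [ ] Requests containing `..` segments are rejected
- [ ] Regression test covers the traversal attempt

## Work Log
- 2025-11-24: Finding created during code review

## Resources
- PR #XXXX
EOF
```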
**File naming convention:**
```
{issue_id}-{status}-{priority}-{description}.md
Examples:
- 001-pending-p1-security-vulnerability.md
- 002-pending-p2-performance-optimization.md
- 003-pending-p3-code-cleanup.md
```
**Status values:**
- `pending` - New findings, needs triage/decision
- `ready` - Approved by manager, ready to work
- `complete` - Work finished
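Status transitions are recorded by renaming the file (filenames here are illustrative):
```bash
# After triage approves the finding:
mv todos/001-pending-p1-path-traversal-vulnerability.md \
   todos/001-ready-p1-path-traversal-vulnerability.md
# Once the work is finished:
mv todos/001-ready-p1-path-traversal-vulnerability.md \
   todos/001-complete-p1-path-traversal-vulnerability.md
```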
**Priority values:**
- `p1` - Critical (blocks merge, security/data issues)
- `p2` - Important (should fix, architectural/performance)
- `p3` - Nice-to-have (enhancements, cleanup)
**Tagging:** Always add `code-review` tag, plus: `security`, `performance`, `architecture`, `rails`, `quality`, etc.
#### Step 3: Summary Report
After creating all todo files, present comprehensive summary:
````markdown
## ✅ Code Review Complete
**Review Target:** PR #XXXX - [PR Title]
**Branch:** [branch-name]
### Findings Summary:
- **Total Findings:** [X]
- **🔴 CRITICAL (P1):** [count] - BLOCKS MERGE
- **🟡 IMPORTANT (P2):** [count] - Should Fix
- **🔵 NICE-TO-HAVE (P3):** [count] - Enhancements
### Created Todo Files:
**P1 - Critical (BLOCKS MERGE):**
- `001-pending-p1-{finding}.md` - {description}
- `002-pending-p1-{finding}.md` - {description}
**P2 - Important:**
- `003-pending-p2-{finding}.md` - {description}
- `004-pending-p2-{finding}.md` - {description}
**P3 - Nice-to-Have:**
- `005-pending-p3-{finding}.md` - {description}
### Review Agents Used:
- kieran-rails-reviewer
- security-sentinel
- performance-oracle
- architecture-strategist
- [other agents]
### Next Steps:
1. **Address P1 Findings**: CRITICAL - must be fixed before merge
- Review each P1 todo in detail
- Implement fixes or request exemption
- Verify fixes before merging PR
2. **Triage All Todos**:
```bash
ls todos/*-pending-*.md # View all pending todos
/triage # Use slash command for interactive triage
```
3. **Work on Approved Todos**:
```bash
/resolve_todo_parallel # Fix all approved items efficiently
```
4. **Track Progress**:
- Rename file when status changes: pending → ready → complete
- Update Work Log as you work
- Commit todos: `git add todos/ && git commit -m "refactor: add code review findings"`
### Severity Breakdown:
**🔴 P1 (Critical - Blocks Merge):**
- Security vulnerabilities
- Data corruption risks
- Breaking changes
- Critical architectural issues
**🟡 P2 (Important - Should Fix):**
- Performance issues
- Significant architectural concerns
- Major code quality problems
- Reliability issues
**🔵 P3 (Nice-to-Have):**
- Minor improvements
- Code cleanup
- Optimization opportunities
- Documentation updates
````
### Important: P1 Findings Block Merge
Any **🔴 P1 (CRITICAL)** findings must be addressed before merging the PR. Present these prominently and ensure they're resolved before accepting the PR.


@@ -0,0 +1,275 @@
---
name: work
description: Execute work plans efficiently while maintaining quality and finishing features
argument-hint: "[plan file, specification, or todo file path]"
---
# Work Plan Execution Command
Execute a work plan efficiently while maintaining quality and finishing features.
## Introduction
This command takes a work document (plan, specification, or todo file) and executes it systematically. The focus is on **shipping complete features** by understanding requirements quickly, following existing patterns, and maintaining quality throughout.
## Input Document
<input_document> #$ARGUMENTS </input_document>
## Execution Workflow
### Phase 1: Quick Start
1. **Read Plan and Clarify**
- Read the work document completely
- Review any references or links provided in the plan
- If anything is unclear or ambiguous, ask clarifying questions now
- Get user approval to proceed
- **Do not skip this** - better to ask questions now than build the wrong thing
2. **Setup Environment**
Choose your work style:
**Option A: Live work on current branch**
```bash
git checkout main && git pull origin main
git checkout -b feature-branch-name
```
**Option B: Isolated work with a git worktree (recommended for parallel development)**
```bash
# Ask user first: "Work in parallel with worktree or on current branch?"
# If worktree:
skill: git-worktree
# The skill will create a new branch from main in an isolated worktree
```
**Recommendation**: Use worktree if:
- You want to work on multiple features simultaneously
- You want to keep main clean while experimenting
- You plan to switch between branches frequently
Use live branch if:
- You're working on a single feature
- You prefer staying in the main repository
3. **Create Todo List**
- Use TodoWrite to break plan into actionable tasks
- Include dependencies between tasks
- Prioritize based on what needs to be done first
- Include testing and quality check tasks
- Keep tasks specific and completable
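Returning to the worktree option in step 2: for a feel of what the skill sets up, the manual equivalent is roughly the following (branch and directory names are placeholders, and the skill may differ in detail):
```bash
# Rough manual equivalent of the git-worktree skill (illustrative names)
git fetch origin
git worktree add -b feature-branch-name ../repo-feature-branch-name origin/main
cd ../repo-feature-branch-name
# When the branch is merged and the worktree is no longer needed:
git worktree remove ../repo-feature-branch-name
```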
### Phase 2: Execute
1. **Task Execution Loop**
For each task in priority order:
```
while (tasks remain):
- Mark task as in_progress in TodoWrite
- Read any referenced files from the plan
- Look for similar patterns in codebase
- Implement following existing conventions
- Write tests for new functionality
- Run tests after changes
- Mark task as completed
```
2. **Follow Existing Patterns**
- The plan should reference similar code - read those files first
- Match naming conventions exactly
- Reuse existing components where possible
- Follow project coding standards (see CLAUDE.md)
- When in doubt, grep for similar implementations (see the search sketch after this list)
3. **Test Continuously**
- Run relevant tests after each significant change
- Don't wait until the end to test
- Fix failures immediately
- Add new tests for new functionality
4. **Figma Design Sync** (if applicable)
For UI work with Figma designs:
- Implement components following design specs
- Use figma-design-sync agent iteratively to compare
- Fix visual differences identified
- Repeat until implementation matches design
5. **Track Progress**
- Keep TodoWrite updated as you complete tasks
- Note any blockers or unexpected discoveries
- Create new tasks if scope expands
- Keep user informed of major milestones
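As noted in step 2, a quick search is usually the fastest way to find the convention to mirror; for example (the search terms and Rails-style layout are illustrative):
```bash
# Find existing implementations to mirror before writing new code
grep -rn --include="*.rb" "def broadcast_update" app/
# Or, with ripgrep, look for usages of a class across code and tests
rg -n "NotificationMailer" app/ test/
```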
### Phase 3: Quality Check
1. **Run Core Quality Checks**
Always run before submitting:
```bash
# Run full test suite
bin/rails test
# Run linting (per CLAUDE.md)
# Use linting-agent before pushing to origin
```
2. **Consider Reviewer Agents** (Optional)
Use for complex, risky, or large changes:
- **code-simplicity-reviewer**: Check for unnecessary complexity
- **kieran-rails-reviewer**: Verify Rails conventions (Rails projects)
- **performance-oracle**: Check for performance issues
- **security-sentinel**: Scan for security vulnerabilities
- **cora-test-reviewer**: Review test quality (CORA projects)
Run reviewers in parallel with Task tool:
```
Task(code-simplicity-reviewer): "Review changes for simplicity"
Task(kieran-rails-reviewer): "Check Rails conventions"
```
Present findings to user and address critical issues.
3. **Final Validation**
- All TodoWrite tasks marked completed
- All tests pass
- Linting passes
- Code follows existing patterns
- Figma designs match (if applicable)
- No console errors or warnings
### Phase 4: Ship It
1. **Create Commit**
```bash
git add .
git status # Review what's being committed
git diff --staged # Check the changes
# Commit with conventional format
git commit -m "$(cat <<'EOF'
feat(scope): description of what and why
Brief explanation if needed.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
```
2. **Create Pull Request**
```bash
git push -u origin feature-branch-name
gh pr create --title "Feature: [Description]" --body "$(cat <<'EOF'
## Summary
- What was built
- Why it was needed
- Key decisions made
## Testing
- Tests added/modified
- Manual testing performed
## Screenshots/Videos
[If UI changes]
## Figma Design
[Link if applicable]
🤖 Generated with [Claude Code](https://claude.com/claude-code)
EOF
)"
```
3. **Notify User**
- Summarize what was completed
- Link to PR
- Note any follow-up work needed
- Suggest next steps if applicable
---
## Key Principles
### Start Fast, Execute Faster
- Get clarification once at the start, then execute
- Don't wait for perfect understanding - ask questions and move
- The goal is to **finish the feature**, not to perfect the process
### The Plan is Your Guide
- Work documents should reference similar code and patterns
- Load those references and follow them
- Don't reinvent - match what exists
### Test As You Go
- Run tests after each change, not at the end
- Fix failures immediately
- Continuous testing prevents big surprises
### Quality is Built In
- Follow existing patterns
- Write tests for new code
- Run linting before pushing
- Use reviewer agents for complex/risky changes only
### Ship Complete Features
- Mark all tasks completed before moving on
- Don't leave features 80% done
- A finished feature that ships beats a perfect feature that doesn't
## Quality Checklist
Before creating PR, verify:
- [ ] All clarifying questions asked and answered
- [ ] All TodoWrite tasks marked completed
- [ ] Tests pass (run `bin/rails test`)
- [ ] Linting passes (use linting-agent)
- [ ] Code follows existing patterns
- [ ] Figma designs match implementation (if applicable)
- [ ] Commit messages follow conventional format
- [ ] PR description includes summary and testing notes
## When to Use Reviewer Agents
**Don't use by default.** Use reviewer agents only when:
- Large refactor affecting many files (10+)
- Security-sensitive changes (authentication, permissions, data access)
- Performance-critical code paths
- Complex algorithms or business logic
- User explicitly requests thorough review
For most features: tests + linting + following patterns is sufficient.
## Common Pitfalls to Avoid
- **Analysis paralysis** - Don't overthink, read the plan and execute
- **Skipping clarifying questions** - Ask now, not after building the wrong thing
- **Ignoring plan references** - The plan has links for a reason
- **Testing at the end** - Test continuously or suffer later
- **Forgetting TodoWrite** - Track progress or lose track of what's done
- **80% done syndrome** - Finish the feature, don't move on early
- **Over-reviewing simple changes** - Save reviewer agents for complex work