adds skill for handling upstream changes and merging to local
Some checks failed
CI / test (push) Has been cancelled

John Lamb
2026-02-17 10:48:20 -06:00
parent 85f97affb5
commit e092c9e5ad
7 changed files with 272 additions and 632 deletions


@@ -11,8 +11,8 @@
"plugins": [
{
"name": "compound-engineering",
"description": "AI-powered development tools that get smarter with every use. Make each unit of engineering work easier than the last. Includes 25 specialized agents, 23 commands, and 18 skills.",
"version": "2.35.0",
"description": "AI-powered development tools that get smarter with every use. Make each unit of engineering work easier than the last. Includes 25 specialized agents, 23 commands, and 19 skills.",
"version": "2.35.1",
"author": {
"name": "Kieran Klaassen",
"url": "https://github.com/kieranklaassen",


@@ -1,7 +1,7 @@
{
"name": "compound-engineering",
"version": "2.35.0",
"description": "AI-powered development tools. 25 agents, 23 commands, 18 skills, 1 MCP server for code review, research, design, and workflow automation.",
"version": "2.35.1",
"description": "AI-powered development tools. 25 agents, 23 commands, 19 skills, 1 MCP server for code review, research, design, and workflow automation.",
"author": {
"name": "Kieran Klaassen",
"email": "kieran@every.to",


@@ -5,6 +5,16 @@ All notable changes to the compound-engineering plugin will be documented in thi
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [2.35.1] - 2026-02-17
### Added
- **`upstream-merge` skill** - Structured workflow for incorporating upstream git changes while preserving local fork intent. Integrates with file-todos system for triage tracking.
### Removed
- **`dspy-python` skill** - Deleted per triage decision (project uses LangChain/LangGraph, not DSPy)
## [2.35.0] - 2026-02-16
### Changed


@@ -8,7 +8,7 @@ AI-powered development tools that get smarter with every use. Make each unit of
|-----------|-------|
| Agents | 25 |
| Commands | 23 |
| Skills | 18 |
| Skills | 19 |
| MCP Servers | 1 |
## Agents
@@ -127,6 +127,7 @@ Core workflow commands use `workflows:` prefix to avoid collisions with built-in
| `git-worktree` | Manage Git worktrees for parallel development |
| `resolve-pr-parallel` | Resolve PR review comments in parallel |
| `setup` | Configure which review agents run for your project |
| `upstream-merge` | Incorporate upstream git changes while preserving local fork intent |
### Multi-Agent Orchestration


@@ -1,627 +0,0 @@
---
name: dspy-python
description: This skill should be used when working with DSPy, the Python framework for programming language models instead of prompting them. Use this when implementing LLM-powered features, creating DSPy signatures and modules, configuring language model providers (OpenAI, Anthropic, Gemini, Ollama), building agent systems with tools, optimizing prompts with teleprompters, integrating with FastAPI endpoints, or testing DSPy modules with pytest.
---
# DSPy Expert (Python)
## Overview
DSPy is a Python framework that enables developers to **program language models, not prompt them**. Instead of manually crafting prompts, define application requirements through composable, optimizable modules that can be tested, improved, and version-controlled like regular code.
This skill provides comprehensive guidance on:
- Creating signatures for LLM operations
- Building composable modules and workflows
- Configuring multiple LLM providers
- Implementing agents with tools (ReAct)
- Testing with pytest
- Optimizing with teleprompters (MIPROv2, BootstrapFewShot)
- Integrating with FastAPI for production APIs
- Production deployment patterns
## Core Capabilities
### 1. Signatures
Create input/output specifications for LLM operations using inline or class-based signatures.
**When to use**: Defining any LLM task, from simple classification to complex analysis.
**Quick reference**:
```python
import dspy

# Inline signature (simple tasks)
classify = dspy.Predict("email: str -> category: str, priority: str")

# Class-based signature (complex tasks with documentation)
class EmailClassification(dspy.Signature):
    """Classify customer support emails into categories."""

    email_subject: str = dspy.InputField(desc="Subject line of the email")
    email_body: str = dspy.InputField(desc="Full body content of the email")
    category: str = dspy.OutputField(desc="One of: Technical, Billing, General")
    priority: str = dspy.OutputField(desc="One of: Low, Medium, High")
```
**Templates**: See [signature-template.py](./assets/signature-template.py) for comprehensive examples including:
- Inline signatures for quick tasks
- Class-based signatures with type hints
- Signatures with Pydantic model outputs
- Multi-field complex signatures
**Best practices**:
- Always provide clear docstrings for class-based signatures
- Use `desc` parameter for field documentation
- Prefer specific descriptions over generic ones
- Use Pydantic models for structured complex outputs
**Full documentation**: See [core-concepts.md](./references/core-concepts.md) sections on Signatures and Type Safety.
### 2. Modules
Build reusable, composable modules that encapsulate LLM operations.
**When to use**: Implementing any LLM-powered feature, especially complex multi-step workflows.
**Quick reference**:
```python
import dspy

class EmailProcessor(dspy.Module):
    def __init__(self):
        super().__init__()
        self.classifier = dspy.ChainOfThought(EmailClassification)

    def forward(self, email_subject: str, email_body: str) -> dspy.Prediction:
        return self.classifier(
            email_subject=email_subject,
            email_body=email_body,
        )
```
**Templates**: See [module-template.py](./assets/module-template.py) for comprehensive examples including:
- Basic modules with single predictors
- Multi-step pipelines that chain modules
- Modules with conditional logic
- Error handling and retry patterns
- Async modules for FastAPI
- Caching implementations
**Module composition**: Chain modules together to create complex workflows:
```python
class Pipeline(dspy.Module):
    def __init__(self):
        super().__init__()
        self.step1 = Classifier()
        self.step2 = Analyzer()
        self.step3 = Responder()

    def forward(self, input_text):
        result1 = self.step1(text=input_text)
        result2 = self.step2(classification=result1.category)
        return self.step3(analysis=result2.analysis)
```
**Full documentation**: See [core-concepts.md](./references/core-concepts.md) sections on Modules and Module Composition.
### 3. Predictor Types
Choose the right predictor for your task:
**Predict**: Basic LLM inference
```python
predictor = dspy.Predict(TaskSignature)
result = predictor(input="data")
```
**ChainOfThought**: Adds automatic step-by-step reasoning
```python
predictor = dspy.ChainOfThought(TaskSignature)
result = predictor(input="data")
# result.reasoning contains the thought process
```
**ReAct**: Tool-using agents with iterative reasoning
```python
predictor = dspy.ReAct(
    TaskSignature,
    tools=[search_tool, calculator_tool],
    max_iters=5,
)
```
**ProgramOfThought**: Generates and executes Python code
```python
predictor = dspy.ProgramOfThought(TaskSignature)
result = predictor(task="Calculate factorial of 10")
```
**When to use each**:
- **Predict**: Simple tasks, classification, extraction
- **ChainOfThought**: Complex reasoning, analysis, multi-step thinking
- **ReAct**: Tasks requiring external tools (search, calculation, API calls)
- **ProgramOfThought**: Tasks best solved with generated code
**Full documentation**: See [core-concepts.md](./references/core-concepts.md) section on Predictors.
### 4. LLM Provider Configuration
Support for OpenAI, Anthropic Claude, Google, Ollama, and many more via LiteLLM.
**Quick configuration examples**:
```python
import dspy
import os

# OpenAI
lm = dspy.LM('openai/gpt-4o-mini', api_key=os.environ['OPENAI_API_KEY'])
dspy.configure(lm=lm)

# Anthropic Claude
lm = dspy.LM('anthropic/claude-3-5-sonnet-20241022', api_key=os.environ['ANTHROPIC_API_KEY'])
dspy.configure(lm=lm)

# Google Gemini
lm = dspy.LM('google/gemini-1.5-pro', api_key=os.environ['GOOGLE_API_KEY'])
dspy.configure(lm=lm)

# Local Ollama (free, private)
lm = dspy.LM('ollama_chat/llama3.1', api_base='http://localhost:11434')
dspy.configure(lm=lm)
```
**Templates**: See [config-template.py](./assets/config-template.py) for comprehensive examples including:
- Environment-based configuration
- Multi-model setups for different tasks
- Async LM configuration
- Retry logic and fallback strategies
- Caching with dspy.cache
**Provider compatibility matrix**:
| Feature | OpenAI | Anthropic | Google | Ollama |
|---------|--------|-----------|--------|--------|
| Structured Output | Full | Full | Full | Partial |
| Vision (Images) | Full | Full | Full | Limited |
| Tool Calling | Full | Full | Full | Varies |
| Streaming | Full | Full | Full | Full |
**Cost optimization strategy**:
- Development: Ollama (free) or gpt-4o-mini (cheap)
- Testing: gpt-4o-mini with temperature=0.0
- Production simple tasks: gpt-4o-mini, claude-3-haiku, gemini-1.5-flash
- Production complex tasks: gpt-4o, claude-3-5-sonnet, gemini-1.5-pro
**Full documentation**: See [providers.md](./references/providers.md) for all configuration options.
### 5. FastAPI Integration
Serve DSPy modules as production API endpoints.
**Quick reference**:
```python
from fastapi import FastAPI
from pydantic import BaseModel
import dspy

app = FastAPI()

# Initialize DSPy
lm = dspy.LM('openai/gpt-4o-mini')
dspy.configure(lm=lm)

# Load optimized module
classifier = EmailProcessor()

class EmailRequest(BaseModel):
    subject: str
    body: str

class EmailResponse(BaseModel):
    category: str
    priority: str

@app.post("/classify", response_model=EmailResponse)
async def classify_email(request: EmailRequest):
    result = classifier(
        email_subject=request.subject,
        email_body=request.body,
    )
    return EmailResponse(
        category=result.category,
        priority=result.priority,
    )
```
**Production patterns**:
- Load optimized modules at startup
- Use Pydantic models for request/response validation
- Implement proper error handling
- Add observability with OpenTelemetry
- Use async where possible
**Full documentation**: See [fastapi-integration.md](./references/fastapi-integration.md) for complete patterns.
### 6. Testing DSPy Modules
Write standard pytest tests for LLM logic.
**Quick reference**:
```python
import os

import pytest
import dspy

@pytest.fixture(scope="module")
def configure_dspy():
    lm = dspy.LM('openai/gpt-4o-mini', api_key=os.environ['OPENAI_API_KEY'])
    dspy.configure(lm=lm)

def test_email_classifier(configure_dspy):
    classifier = EmailProcessor()
    result = classifier(
        email_subject="Can't log in",
        email_body="Unable to access account",
    )
    assert result.category in ['Technical', 'Billing', 'General']
    assert result.priority in ['High', 'Medium', 'Low']

def test_technical_email_classification(configure_dspy):
    classifier = EmailProcessor()
    result = classifier(
        email_subject="Error 500 on checkout",
        email_body="Getting server error when trying to complete purchase",
    )
    assert result.category == 'Technical'
```
**Testing patterns**:
- Use pytest fixtures for DSPy configuration
- Test type correctness of outputs
- Test edge cases (empty inputs, special characters, long texts)
- Use VCR/responses for deterministic API testing
- Integration test complete workflows
**Full documentation**: See [optimization.md](./references/optimization.md) section on Testing.
### 7. Optimization with Teleprompters
Automatically improve prompts and modules using optimization techniques.
**MIPROv2 optimization**:
```python
import dspy
from dspy.teleprompt import MIPROv2

# Define evaluation metric
def accuracy_metric(example, pred, trace=None):
    return example.category == pred.category

# Prepare training data
trainset = [
    dspy.Example(
        email_subject="Can't log in",
        email_body="Password reset not working",
        category="Technical",
    ).with_inputs("email_subject", "email_body"),
    # More examples...
]

# Run optimization
optimizer = MIPROv2(
    metric=accuracy_metric,
    num_candidates=10,
    init_temperature=0.7,
)
optimized_module = optimizer.compile(
    EmailProcessor(),
    trainset=trainset,
    max_bootstrapped_demos=3,
    max_labeled_demos=5,
)

# Save optimized module
optimized_module.save("optimized_classifier.json")
```
**BootstrapFewShot** (simpler, faster):
```python
from dspy.teleprompt import BootstrapFewShot

optimizer = BootstrapFewShot(
    metric=accuracy_metric,
    max_bootstrapped_demos=4,
)
optimized = optimizer.compile(
    EmailProcessor(),
    trainset=trainset,
)
```
**Full documentation**: See [optimization.md](./references/optimization.md) section on Teleprompters.
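Before spending optimizer budget, sanity-check the metric itself on a small held-out set. A minimal sketch, using `types.SimpleNamespace` as a stand-in for `dspy.Example` and `dspy.Prediction` so no LM call is required:

```python
# Minimal metric evaluation loop (sketch; SimpleNamespace stands in for
# dspy.Example / dspy.Prediction, so this runs without any provider configured).
from types import SimpleNamespace

def accuracy_metric(example, pred, trace=None):
    return example.category == pred.category

def evaluate(predict_fn, devset, metric):
    """Return the mean metric score of predict_fn over devset."""
    scores = [metric(ex, predict_fn(ex)) for ex in devset]
    return sum(scores) / len(scores)

# A trivial stand-in predictor that always answers "Technical":
devset = [
    SimpleNamespace(email_body="Server error 500", category="Technical"),
    SimpleNamespace(email_body="Refund request", category="Billing"),
]
always_technical = lambda ex: SimpleNamespace(category="Technical")
score = evaluate(always_technical, devset, accuracy_metric)  # 0.5
```

The same `evaluate` helper can then be pointed at a real compiled module to compare pre- and post-optimization scores.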
### 8. Caching and Performance
Optimize performance with built-in caching.
**Enable caching**:
```python
import dspy

# Enable global caching
dspy.configure(
    lm=lm,
    cache=True,  # Uses SQLite by default
)

# Or with custom cache directory
dspy.configure(
    lm=lm,
    cache_dir="/path/to/cache",
)
```
**Cache control**:
```python
# Clear cache
dspy.cache.clear()

# Disable cache for specific call
with dspy.settings.context(cache=False):
    result = module(input="data")
```
**Full documentation**: See [optimization.md](./references/optimization.md) section on Caching.
## Quick Start Workflow
### For New Projects
1. **Install DSPy**:
```bash
pip install dspy-ai
```
2. **Configure LLM provider** (see [config-template.py](./assets/config-template.py)):
```python
import dspy
import os
lm = dspy.LM('openai/gpt-4o-mini', api_key=os.environ['OPENAI_API_KEY'])
dspy.configure(lm=lm)
```
3. **Create a signature** (see [signature-template.py](./assets/signature-template.py)):
```python
class MySignature(dspy.Signature):
    """Clear description of task."""

    input_field: str = dspy.InputField(desc="Description")
    output_field: str = dspy.OutputField(desc="Description")
```
4. **Create a module** (see [module-template.py](./assets/module-template.py)):
```python
class MyModule(dspy.Module):
    def __init__(self):
        super().__init__()
        self.predictor = dspy.Predict(MySignature)

    def forward(self, input_field: str):
        return self.predictor(input_field=input_field)
```
5. **Use the module**:
```python
module = MyModule()
result = module(input_field="test")
print(result.output_field)
```
6. **Add tests** (see [optimization.md](./references/optimization.md)):
```python
def test_my_module():
    result = MyModule()(input_field="test")
    assert isinstance(result.output_field, str)
```
### For FastAPI Applications
1. **Install dependencies**:
```bash
pip install dspy-ai fastapi uvicorn pydantic
```
2. **Create app structure**:
```
my_app/
├── app/
│   ├── __init__.py
│   ├── main.py            # FastAPI app
│   ├── dspy_modules/      # DSPy modules
│   │   ├── __init__.py
│   │   └── classifier.py
│   ├── models/            # Pydantic models
│   │   └── __init__.py
│   └── config.py          # DSPy configuration
├── tests/
│   └── test_classifier.py
└── requirements.txt
```
3. **Configure DSPy** in `config.py`:
```python
import dspy
import os

def configure_dspy():
    lm = dspy.LM(
        'openai/gpt-4o-mini',
        api_key=os.environ['OPENAI_API_KEY'],
    )
    dspy.configure(lm=lm, cache=True)
```
4. **Create FastAPI app** in `main.py`:
```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

from app.config import configure_dspy
from app.dspy_modules.classifier import EmailProcessor
from app.models import EmailRequest  # Pydantic request model from app/models

@asynccontextmanager
async def lifespan(app: FastAPI):
    configure_dspy()
    yield

app = FastAPI(lifespan=lifespan)
classifier = EmailProcessor()

@app.post("/classify")
async def classify(request: EmailRequest):
    result = classifier(
        email_subject=request.subject,
        email_body=request.body,
    )
    return {"category": result.category, "priority": result.priority}
```
## Common Patterns
### Pattern: Multi-Step Analysis Pipeline
```python
class AnalysisPipeline(dspy.Module):
    def __init__(self):
        super().__init__()
        self.extract = dspy.Predict(ExtractSignature)
        self.analyze = dspy.ChainOfThought(AnalyzeSignature)
        self.summarize = dspy.Predict(SummarizeSignature)

    def forward(self, text: str):
        extracted = self.extract(text=text)
        analyzed = self.analyze(data=extracted.data)
        return self.summarize(analysis=analyzed.result)
```
### Pattern: Agent with Tools
```python
import dspy

def search_web(query: str) -> str:
    """Search the web for information."""
    # Implementation here
    return f"Results for: {query}"

def calculate(expression: str) -> str:
    """Evaluate a mathematical expression."""
    # Note: eval() is for demonstration only; use a safe expression parser in production
    return str(eval(expression))

class ResearchAgent(dspy.Module):
    def __init__(self):
        super().__init__()
        self.agent = dspy.ReAct(
            ResearchSignature,
            tools=[search_web, calculate],
            max_iters=10,
        )

    def forward(self, question: str):
        return self.agent(question=question)
```
### Pattern: Conditional Routing
```python
class SmartRouter(dspy.Module):
    def __init__(self):
        super().__init__()
        self.classifier = dspy.Predict(ClassifyComplexity)
        self.simple_handler = SimpleModule()
        self.complex_handler = ComplexModule()

    def forward(self, input_text: str):
        classification = self.classifier(text=input_text)
        if classification.complexity == "Simple":
            return self.simple_handler(input=input_text)
        else:
            return self.complex_handler(input=input_text)
```
### Pattern: Retry with Validation
```python
import dspy
from tenacity import retry, stop_after_attempt, wait_exponential

class RobustModule(dspy.Module):
    def __init__(self):
        super().__init__()
        self.predictor = dspy.Predict(TaskSignature)

    @retry(
        stop=stop_after_attempt(3),
        wait=wait_exponential(multiplier=1, min=2, max=10),
    )
    def forward(self, input_text: str):
        result = self.predictor(input=input_text)
        self._validate(result)
        return result

    def _validate(self, result):
        if not result.output:
            raise ValueError("Empty output from LLM")
```
### Pattern: Pydantic Output Models
```python
from pydantic import BaseModel, Field
import dspy

class ClassificationResult(BaseModel):
    category: str = Field(description="Category: Technical, Billing, or General")
    priority: str = Field(description="Priority: Low, Medium, or High")
    confidence: float = Field(ge=0.0, le=1.0, description="Confidence score")

class TypedClassifier(dspy.Signature):
    """Classify with structured output."""

    text: str = dspy.InputField()
    result: ClassificationResult = dspy.OutputField()
```
## Resources
This skill includes comprehensive reference materials and templates:
### References (load as needed for detailed information)
- [core-concepts.md](./references/core-concepts.md): Complete guide to signatures, modules, predictors, and best practices
- [providers.md](./references/providers.md): All LLM provider configurations, compatibility matrix, and troubleshooting
- [optimization.md](./references/optimization.md): Testing patterns, teleprompters, caching, and monitoring
- [fastapi-integration.md](./references/fastapi-integration.md): Production patterns for serving DSPy with FastAPI
### Assets (templates for quick starts)
- [signature-template.py](./assets/signature-template.py): Examples of signatures including inline, class-based, and Pydantic outputs
- [module-template.py](./assets/module-template.py): Module patterns including pipelines, agents, async, and caching
- [config-template.py](./assets/config-template.py): Configuration examples for all providers and environments
## When to Use This Skill
Trigger this skill when:
- Implementing LLM-powered features in Python applications
- Creating programmatic interfaces for AI operations
- Building agent systems with tool usage
- Setting up or troubleshooting LLM providers with DSPy
- Optimizing prompts using teleprompters
- Testing LLM functionality with pytest
- Integrating DSPy with FastAPI
- Converting from manual prompt engineering to programmatic approach
- Debugging DSPy code or configuration issues


@@ -0,0 +1,199 @@
---
name: upstream-merge
description: This skill should be used when incorporating upstream git changes into a local fork while preserving local intent. It provides a structured workflow for analyzing divergence, categorizing conflicts, creating triage todos for each conflict, reviewing decisions one-by-one with the user, and executing all resolutions. Triggers on "merge upstream", "incorporate upstream changes", "sync fork", or when local and remote branches have diverged significantly.
---
# Upstream Merge
Incorporate upstream changes into a local fork without losing local intent. Analyze divergence, categorize every changed file, triage conflicts interactively, then execute all decisions in a single structured pass.
## Prerequisites
Before starting, establish context:
1. **Identify the guiding principle** — ask the user what local intent must be preserved (e.g., "FastAPI pivot is non-negotiable", "custom branding must remain"). This principle governs every triage decision.
2. **Confirm remote** — verify `git remote -v` shows the correct upstream origin.
3. **Fetch latest** — run `git fetch origin` to get the current upstream state.
## Phase 1: Analyze Divergence
Gather the full picture before making any decisions.
**Run these commands:**
```bash
# Find common ancestor
git merge-base HEAD origin/main
# Count divergence
git rev-list --count HEAD ^origin/main # local-only commits
git rev-list --count origin/main ^HEAD # remote-only commits
# List all changed files on each side
git diff --name-only $(git merge-base HEAD origin/main) HEAD > /tmp/local-changes.txt
git diff --name-only $(git merge-base HEAD origin/main) origin/main > /tmp/remote-changes.txt
```
**Categorize every file into three buckets:**
| Bucket | Definition | Action |
|--------|-----------|--------|
| **Remote-only** | Changed upstream, untouched locally | Accept automatically |
| **Local-only** | Changed locally, untouched upstream | Keep as-is |
| **Both-changed** | Modified on both sides | Create triage todo |
```bash
# Generate buckets
comm -23 <(sort /tmp/remote-changes.txt) <(sort /tmp/local-changes.txt) > /tmp/remote-only.txt
comm -13 <(sort /tmp/remote-changes.txt) <(sort /tmp/local-changes.txt) > /tmp/local-only.txt
comm -12 <(sort /tmp/remote-changes.txt) <(sort /tmp/local-changes.txt) > /tmp/both-changed.txt
```
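If `comm` is unavailable, the same three-way split can be computed in pure Python (a sketch; the inputs are the two changed-file lists produced above):

```python
# Three-bucket split from the local and remote changed-file lists (sketch).
def bucket_changes(local_files, remote_files):
    local, remote = set(local_files), set(remote_files)
    return {
        "remote_only": sorted(remote - local),   # accept automatically
        "local_only": sorted(local - remote),    # keep as-is
        "both_changed": sorted(local & remote),  # needs triage
    }

buckets = bucket_changes(["README.md", "app.py"], ["app.py", "docs.md"])
# buckets["both_changed"] == ["app.py"]
```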
**Present summary to user:**
```
Divergence Analysis:
- Common ancestor: [commit hash]
- Local: X commits ahead | Remote: Y commits ahead
- Remote-only: N files (auto-accept)
- Local-only: N files (auto-keep)
- Both-changed: N files (need triage)
```
## Phase 2: Create Triage Todos
For each file in the "both-changed" bucket, create a triage todo using the template at [merge-triage-template.md](./assets/merge-triage-template.md).
**Process:**
1. Determine next issue ID: `ls todos/ | grep -o '^[0-9]\+' | sort -n | tail -1`
2. For each both-changed file:
   - Read both versions (local and remote)
   - Generate the diff: `git diff $(git merge-base HEAD origin/main)..origin/main -- <file>`
   - Analyze what each side intended
   - Write a recommendation based on the guiding principle
   - Create todo: `todos/{id}-pending-p2-merge-{brief-name}.md`
**Naming convention for merge triage todos:**
```
{id}-pending-p2-merge-{component-name}.md
```
Examples:
- `001-pending-p2-merge-marketplace-json.md`
- `002-pending-p2-merge-kieran-python-reviewer.md`
- `003-pending-p2-merge-workflows-review.md`
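A small helper can derive the filename from a conflicted path (a sketch; the slug rule is inferred from the examples above, and `next_id` comes from the `ls todos/` step):

```python
import os
import re

def triage_todo_name(next_id, path, priority="p2"):
    """Build the todos/ filename for a both-changed file (sketch)."""
    base = os.path.basename(path)
    if base.endswith(".md"):
        base = base[:-3]  # drop .md; other extensions stay in the slug
    slug = re.sub(r"[^a-z0-9]+", "-", base.lower()).strip("-")
    return f"{next_id:03d}-pending-{priority}-merge-{slug}.md"

triage_todo_name(1, ".claude-plugin/marketplace.json")
# '001-pending-p2-merge-marketplace-json.md'
```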
**Use parallel agents** to create triage docs when there are many conflicts (batch 4-6 at a time).
**Announce when complete:**
```
Created N triage todos in todos/. Ready to review one-by-one.
```
## Phase 3: Triage (Review One-by-One)
Present each triage todo to the user for a decision. Follow the `/triage` command pattern.
**For each conflict, present:**
```
---
Conflict X/N: [filename]
Category: [agent/command/skill/config]
Conflict Type: [content/modify-delete/add-add]
Remote intent: [what upstream changed and why]
Local intent: [what local changed and why]
Recommendation: [Accept remote / Keep local / Merge both / Keep deleted]
Reasoning: [why, referencing the guiding principle]
---
How should we handle this?
1. Accept remote — take upstream version as-is
2. Keep local — preserve local version
3. Merge both — combine changes (specify how)
4. Keep deleted — file was deleted locally, keep it deleted
```
**Use AskUserQuestion tool** for each decision with appropriate options.
**Record decisions** by updating the triage todo:
- Fill the "Decision" section with the chosen resolution
- Add merge instructions if "merge both" was selected
- Update status: `pending` → `ready`
**Group related files** when presenting (e.g., present all 7 dspy-ruby files together, not separately).
**Track progress:** Show "X/N completed" with each presentation.
## Phase 4: Execute Decisions
After all triage decisions are made, execute them in a structured order.
### Step 1: Create Working Branch
```bash
git branch backup-local-changes # safety net
git checkout -b merge-upstream origin/main
```
### Step 2: Execute in Order
Process decisions in this sequence to avoid conflicts:
1. **Deletions first** — Remove files that should stay deleted
2. **Copy local-only files**`git checkout backup-local-changes -- <file>` for local additions
3. **Merge files** — Apply "merge both" decisions (the most complex step)
4. **Update metadata** — Counts, versions, descriptions, changelogs
### Step 3: Verify
```bash
# Validate JSON/YAML files
python3 -m json.tool <config-file> > /dev/null
# Verify component counts match descriptions
# (skill-specific: count agents, commands, skills, etc.)
# Check diff summary
git diff --stat HEAD
```
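The JSON check can also be scripted so nothing is missed; a sketch (the file wrapper and example paths are illustrative, not part of the skill):

```python
import json
from pathlib import Path

def json_parse_failures(named_texts):
    """Return [(name, error)] for any text that fails to parse as JSON."""
    failures = []
    for name, text in named_texts.items():
        try:
            json.loads(text)
        except json.JSONDecodeError as exc:
            failures.append((name, str(exc)))
    return failures

def validate_json_files(paths):
    """Read each path and report parse failures; an empty list means all valid."""
    return json_parse_failures({p: Path(p).read_text() for p in paths})

# e.g. validate_json_files([".claude-plugin/marketplace.json", "package.json"])
```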
### Step 4: Commit and Merge to Main
```bash
git add <specific-files> # stage explicitly, not -A
git commit -m "Merge upstream vX.Y.Z with [guiding principle] (vX.Y.Z+1)"
git checkout main
git merge merge-upstream
```
**Ask before merging to main** — confirm the user wants to proceed.
## Decision Framework
When making recommendations, apply these heuristics:
| Signal | Recommendation |
|--------|---------------|
| Remote adds new content, no local equivalent | Accept remote |
| Remote updates content that local deleted intentionally | Keep deleted |
| Remote has structural improvements (formatting, frontmatter) + local has content changes | Merge both: remote structure + local content |
| Both changed same content differently | Merge both: evaluate which serves the guiding principle |
| Remote renames what local deleted | Keep deleted |
| File is metadata (counts, versions, descriptions) | Defer to Phase 4 — recalculate from actual files |
## Important Rules
- **Never auto-resolve "both-changed" files** — always triage with user
- **Never code during triage** — triage is for decisions only, execution is Phase 4
- **Always create a backup branch** before making changes
- **Always stage files explicitly** — never `git add -A` or `git add .`
- **Group related files** — don't present 7 files from the same skill directory separately
- **Metadata is derived, not merged** — counts, versions, and descriptions should be recalculated from actual files after all other changes are applied
- **Preserve the guiding principle** — every recommendation should reference it


@@ -0,0 +1,57 @@
---
status: pending
priority: p2
issue_id: "XXX"
tags: [upstream-merge]
dependencies: []
---
# Merge Conflict: [filename]
## File Info
| Field | Value |
|-------|-------|
| **File** | `path/to/file` |
| **Category** | agent / command / skill / config / other |
| **Conflict Type** | content / modify-delete / add-add |
## What Changed
### Remote Version
[What the upstream version added, changed, or intended]
### Local Version
[What the local version added, changed, or intended]
## Diff
<details>
<summary>Show diff</summary>
```diff
[Relevant diff content]
```
</details>
## Recommendation
**Suggested resolution:** Accept remote / Keep local / Merge both / Keep deleted
[Reasoning for the recommendation, considering the local fork's guiding principles]
## Decision
**Resolution:** *(filled during triage)*
**Details:** *(specific merge instructions if "merge both")*
## Acceptance Criteria
- [ ] Resolution applied correctly
- [ ] No content lost unintentionally
- [ ] Local intent preserved
- [ ] File validates (JSON/YAML if applicable)