- Broad scope triage: inventory + impact clustering + spot-check drift for 9+ docs; recommends the highest-impact area instead of asking blindly
- Drift classification: a sharp boundary between Update (fix references in-skill) and Replace (a subagent writes the successor learning)
- Replacement subagents: sequential subagents write new learnings using `ce:compound`'s document format with investigation evidence already gathered, avoiding redundant research
- Stale fallback: when evidence is insufficient for a confident replacement, mark the doc as stale and recommend `ce:compound` later
| name | description | argument-hint | disable-model-invocation |
|---|---|---|---|
| ce:compound-refresh | Refresh stale or drifting learnings and pattern docs in docs/solutions/ by reviewing, updating, replacing, or archiving them against the current codebase. Use after refactors, migrations, dependency upgrades, or when a retrieved learning feels outdated or wrong. Also use when reviewing docs/solutions/ for accuracy, when a recently solved problem contradicts an existing learning, or when pattern docs no longer reflect current code. | [optional: scope hint] | true |
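The table above corresponds to the skill file's YAML frontmatter. A minimal sketch (the exact file layout is an assumption, and the description is abbreviated here):

```yaml
---
name: ce:compound-refresh
description: >-
  Refresh stale or drifting learnings and pattern docs in docs/solutions/
  by reviewing, updating, replacing, or archiving them against the current
  codebase. Use after refactors, migrations, or dependency upgrades.
argument-hint: "[optional: scope hint]"
disable-model-invocation: true
---
```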
# Compound Refresh
Maintain the quality of docs/solutions/ over time. This workflow reviews existing learnings against the current codebase, then refreshes any derived pattern docs that depend on them.
## Interaction Principles

Follow the same interaction style as `ce:brainstorm`:
- Ask questions one at a time — use the platform's interactive question tool (e.g. `AskUserQuestion` in Claude Code, `request_user_input` in Codex) and stop to wait for the answer before continuing
- Prefer multiple choice when natural options exist
- Start with scope and intent, then narrow only when needed
- Do not ask the user to make decisions before you have evidence
- Lead with a recommendation and explain it briefly
The goal is not to force the user through a checklist. The goal is to help them make a good maintenance decision with the smallest amount of friction.
## Refresh Order
Refresh in this order:
- Review the relevant individual learning docs first
- Note which learnings stayed valid, were updated, were replaced, or were archived
- Then review any pattern docs that depend on those learnings
Why this order:
- learning docs are the primary evidence
- pattern docs are derived from one or more learnings
- stale learnings can make a pattern look more valid than it really is
If the user starts by naming a pattern doc, you may begin there to understand the concern, but inspect the supporting learning docs before changing the pattern.
## Maintenance Model
For each candidate artifact, classify it into one of four outcomes:
| Outcome | Meaning | Default action |
|---|---|---|
| Keep | Still accurate and still useful | No file edit by default; report that it was reviewed and remains trustworthy |
| Update | Core solution is still correct, but references drifted | Apply evidence-backed in-place edits |
| Replace | The old artifact is now misleading, but there is a known better replacement | Create a trustworthy successor or revised pattern, then mark/archive the old artifact as needed |
| Archive | No longer useful or applicable | Move the obsolete artifact to docs/solutions/_archived/ with archive metadata when appropriate |
## Core Rules
- Evidence informs judgment. The signals below are inputs, not a mechanical scorecard. Use engineering judgment to decide whether the artifact is still trustworthy.
- Prefer no-write Keep. Do not update a doc just to leave a review breadcrumb.
- Match docs to reality, not the reverse. When current code differs from a learning, update the learning to reflect the current code. The skill's job is doc accuracy, not code review — do not ask the user whether code changes were "intentional" or "a regression." If the code changed, the doc should match. If the user thinks the code is wrong, that is a separate concern outside this workflow.
- Be decisive, minimize questions. When evidence is clear (file renamed, class moved, reference broken), apply the update. Only ask the user when the right maintenance action is genuinely ambiguous — not to confirm obvious fixes. The goal is automated maintenance with human oversight on judgment calls, not a question for every finding.
- Avoid low-value churn. Do not edit a doc just to fix a typo, polish wording, or make cosmetic changes that do not materially improve accuracy or usability.
- Use Update only for meaningful, evidence-backed drift. Paths, module names, related links, category metadata, code snippets, and clearly stale wording are fair game when fixing them materially improves accuracy.
- Use Replace only when there is a real replacement. That means at least one of the following:
- the current conversation contains a recently solved, verified replacement fix, or
- the user provides enough concrete replacement context to document the successor honestly, or
- newer docs, pattern docs, PRs, or issues provide strong successor evidence.
- Archive when the code is gone. If the referenced code, controller, or workflow no longer exists in the codebase and no successor can be found, recommend Archive — don't default to Keep just because the general advice is still "sound." A learning about a deleted feature misleads readers into thinking that feature still exists. When in doubt between Keep and Archive, ask the user — but missing referenced files with no matching code is strong Archive evidence, not a reason to Keep with "medium confidence."
## Scope Selection

Start by discovering learnings and pattern docs under `docs/solutions/`.
Exclude:

- `README.md`
- `docs/solutions/_archived/`

Find all `.md` files under `docs/solutions/`, excluding `README.md` files and anything under `_archived/`.
If `$ARGUMENTS` is provided, use it to narrow scope before proceeding. Try these matching strategies in order, stopping at the first that produces results:
- Directory match — check if the argument matches a subdirectory name under `docs/solutions/` (e.g., `performance-issues`, `database-issues`)
- Frontmatter match — search `module`, `component`, or `tags` fields in learning frontmatter for the argument
- Filename match — match against filenames (partial matches are fine)
- Content search — search file contents for the argument as a keyword (useful for feature names or feature areas)
If no matches are found, report that and ask the user to clarify.
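The matching cascade above can be sketched roughly as follows. The function name, the doc-record shape, and the helper are all illustrative, not defined by this skill:

```python
from pathlib import PurePosixPath

def narrow_scope(docs, argument):
    """Try the four strategies in order; return the first non-empty match set.

    Each doc record is assumed pre-parsed: 'path' (str),
    'frontmatter' (dict), and 'content' (str).
    """
    arg = argument.lower()

    # 1. Directory match: argument names a subdirectory under docs/solutions/
    hits = [d for d in docs
            if arg in (part.lower() for part in PurePosixPath(d["path"]).parts[:-1])]
    if hits:
        return hits

    # 2. Frontmatter match: module, component, or tags
    def frontmatter_hit(d):
        fm = d["frontmatter"]
        fields = [fm.get("module", ""), fm.get("component", ""), *fm.get("tags", [])]
        return any(arg in str(field).lower() for field in fields)
    hits = [d for d in docs if frontmatter_hit(d)]
    if hits:
        return hits

    # 3. Filename match (partial matches are fine)
    hits = [d for d in docs if arg in PurePosixPath(d["path"]).name.lower()]
    if hits:
        return hits

    # 4. Content search: keyword anywhere in the body
    return [d for d in docs if arg in d["content"].lower()]
```

An empty result from the final strategy corresponds to the "no matches" report above.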
If no candidate docs are found, report:
No candidate docs found in docs/solutions/.
Run `ce:compound` after solving problems to start building your knowledge base.
## Phase 0: Assess and Route
Before asking the user to classify anything:
- Discover candidate artifacts
- Estimate scope
- Choose the lightest interaction path that fits
### Route by Scope
| Scope | When to use it | Interaction style |
|---|---|---|
| Focused | 1-2 likely files or user named a specific doc | Investigate directly, then present a recommendation |
| Batch | Up to ~8 mostly independent docs | Investigate first, then present grouped recommendations |
| Broad | 9+ docs, ambiguous, or repo-wide stale-doc sweep | Triage first, then investigate in batches |
### Broad Scope Triage
When scope is broad (9+ candidate docs), do a lightweight triage before deep investigation:
- Inventory — read frontmatter of all candidate docs, group by module/component/category
- Impact clustering — identify areas with the densest clusters of learnings + pattern docs. A cluster of 5 learnings and 2 patterns covering the same module is higher-impact than 5 isolated single-doc areas, because staleness in one doc is likely to affect the others.
- Spot-check drift — for each cluster, check whether the primary referenced files still exist. Missing references in a high-impact cluster = strongest signal for where to start.
- Recommend a starting area — present the highest-impact cluster with a brief rationale and ask the user to confirm or redirect.
Example:
Found 24 learnings across 5 areas.
The auth module has 5 learnings and 2 pattern docs that cross-reference
each other — and 3 of those reference files that no longer exist.
I'd start there.
1. Start with auth (recommended)
2. Pick a different area
3. Review everything
Do not ask action-selection questions yet. First gather evidence.
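The inventory and spot-check steps could be sketched like this. The record shape and the injectable `file_exists` hook are hypothetical; a real triage would parse frontmatter from disk:

```python
import os
from collections import defaultdict

def triage(docs, file_exists=os.path.exists):
    """Cluster docs by frontmatter module; count broken file references per cluster."""
    clusters = defaultdict(lambda: {"docs": 0, "missing_refs": 0})
    for d in docs:
        module = d["frontmatter"].get("module", "uncategorized")
        clusters[module]["docs"] += 1
        clusters[module]["missing_refs"] += sum(
            not file_exists(ref) for ref in d.get("referenced_files", []))
    # Highest-impact first: broken references in a dense cluster outrank size alone
    return sorted(clusters.items(),
                  key=lambda kv: (kv[1]["missing_refs"], kv[1]["docs"]),
                  reverse=True)
```

The top-ranked cluster becomes the recommended starting area presented to the user.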
## Phase 1: Investigate Candidate Learnings
For each learning in scope, read it, cross-reference its claims against the current codebase, and form a recommendation.
A learning has several dimensions that can independently go stale. Surface-level checks catch the obvious drift, but staleness often hides deeper:
- References — do the file paths, class names, and modules it mentions still exist or have they moved?
- Recommended solution — does the fix still match how the code actually works today? A renamed file with a completely different implementation pattern is not just a path update.
- Code examples — if the learning includes code snippets, do they still reflect the current implementation?
- Related docs — are cross-referenced learnings and patterns still present and consistent?
Match investigation depth to the learning's specificity — a learning referencing exact file paths and code snippets needs more verification than one describing a general principle.
### Drift Classification: Update vs Replace
The critical distinction is whether the drift is cosmetic (references moved but the solution is the same) or substantive (the solution itself changed):
- Update territory — file paths moved, classes renamed, links broke, metadata drifted, but the core recommended approach is still how the code works. `ce:compound-refresh` fixes these directly.
- Replace territory — the recommended solution conflicts with current code, the architectural approach changed, or the pattern is no longer the preferred way. This means a new learning needs to be written. A replacement subagent writes the successor following `ce:compound`'s document format (frontmatter, problem, root cause, solution, prevention), using the investigation evidence already gathered. The orchestrator does not rewrite learnings inline — it delegates to a subagent for context isolation.
The boundary: if you find yourself rewriting the solution section or changing what the learning recommends, stop — that is Replace, not Update.
### Judgment Guidelines
Three guidelines that are easy to get wrong:
- Contradiction = strong Replace signal. If the learning's recommendation conflicts with current code patterns or a recently verified fix, that is not a minor drift — the learning is actively misleading. Classify as Replace.
- Age alone is not a stale signal. A 2-year-old learning that still matches current code is fine. Only use age as a prompt to inspect more carefully.
- Check for successors before archiving. Before recommending Replace or Archive, look for newer learnings, pattern docs, PRs, or issues covering the same problem space. If successor evidence exists, prefer Replace over Archive so readers are directed to the newer guidance.
## Phase 1.5: Investigate Pattern Docs

After reviewing the underlying learning docs, investigate any relevant pattern docs under `docs/solutions/patterns/`.
Pattern docs are high-leverage — a stale pattern is more dangerous than a stale individual learning because future work may treat it as broadly applicable guidance. Evaluate whether the generalized rule still holds given the refreshed state of the learnings it depends on.
A pattern doc with no clear supporting learnings is a stale signal — investigate carefully before keeping it unchanged.
## Subagent Strategy
Use subagents for context isolation when investigating multiple artifacts — not just because the task sounds complex. Choose the lightest approach that fits:
| Approach | When to use |
|---|---|
| Main thread only | Small scope, short docs |
| Sequential subagents | 1-2 artifacts with many supporting files to read |
| Parallel subagents | 3+ truly independent artifacts with low overlap |
| Batched subagents | Broad sweeps — narrow scope first, then investigate in batches |
Subagents should use dedicated file search and read tools for investigation — not shell commands. This avoids unnecessary permission prompts and is more reliable across platforms.
There are two subagent roles:
- Investigation subagents — read-only. They must not edit files, create successors, or archive anything. Each returns: file path, evidence, recommended action, confidence, and open questions. These can run in parallel when artifacts are independent.
- Replacement subagents — write a single new learning to replace a stale one. These run one at a time, sequentially (each replacement subagent may need to read significant code, and running multiple in parallel risks context exhaustion). The orchestrator handles all archival and metadata updates after each replacement completes.
The orchestrator merges investigation results, detects contradictions, asks the user questions, coordinates replacement subagents, and performs all archival/metadata edits centrally. If two artifacts overlap or discuss the same root issue, investigate them together rather than parallelizing.
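The dispatch policy above could be sketched as follows. `investigate`, `replace_one`, and `archive_old` stand in for subagent dispatch and file moves; none of these names come from the skill itself:

```python
from concurrent.futures import ThreadPoolExecutor

def refresh(artifacts, investigate, replace_one, archive_old):
    """Parallel read-only investigation, then strictly sequential replacement."""
    # Read-only investigation subagents can fan out when artifacts are independent
    with ThreadPoolExecutor(max_workers=4) as pool:
        findings = list(pool.map(investigate, artifacts))

    # Replacement subagents run one at a time; the orchestrator performs
    # archival and metadata edits itself after each one finishes
    for finding in findings:
        if finding["action"] == "replace":
            successor_path = replace_one(finding)
            archive_old(finding["path"], successor_path)
    return findings
```

Keeping archival in the orchestrator means replacement subagents stay single-purpose: they only write the successor document.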
## Phase 2: Classify the Right Maintenance Action
After gathering evidence, assign one recommended action.
### Keep

The learning is still accurate and useful. Do not edit the file — report that it was reviewed and remains trustworthy. Only add `last_refreshed` if you are already making a meaningful update for another reason.
### Update
The core solution is still valid but references have drifted (paths, class names, links, code snippets, metadata). Apply the fixes directly.
### Replace
Choose Replace when the learning's core guidance is now misleading — the recommended fix changed materially, the root cause or architecture shifted, or the preferred pattern is different.
The user may have invoked the refresh months after the original learning was written. Do not ask them for replacement context they are unlikely to have — use agent intelligence to investigate the codebase and synthesize the replacement.
Evidence assessment:
By the time you identify a Replace candidate, Phase 1 investigation has already gathered significant evidence: the old learning's claims, what the current code actually does, and where the drift occurred. Assess whether this evidence is sufficient to write a trustworthy replacement:
- Sufficient evidence — you understand both what the old learning recommended AND what the current approach is. The investigation found the current code patterns, the new file locations, the changed architecture. → Proceed to write the replacement (see Phase 4 Replace Flow).
- Insufficient evidence — the drift is so fundamental that you cannot confidently document the current approach. The entire subsystem was replaced, or the new architecture is too complex to understand from a file scan alone. → Mark as stale in place:
  - Add `status: stale`, `stale_reason: [what you found]`, and `stale_date: YYYY-MM-DD` to the frontmatter
  - Report what evidence you found and what is missing
  - Recommend the user run `ce:compound` after their next encounter with that area, when they have fresh problem-solving context
### Archive
Choose Archive when:
- The code or workflow no longer exists
- The learning is obsolete and has no modern replacement worth documenting
- The learning is redundant and no longer useful on its own
- There is no meaningful successor evidence suggesting it should be replaced instead
Action:

- Move the file to `docs/solutions/_archived/`, preserving directory structure when helpful
- Add `archived_date: YYYY-MM-DD` and `archive_reason: [why it was archived]` to the frontmatter
Auto-archive when evidence is unambiguous:
- the referenced code, controller, or workflow is gone and no successor exists in the codebase
- the learning is fully superseded by a clearly better successor
- the document is plainly redundant and adds no distinct value
Do not keep a learning just because its general advice is "still sound" — if the specific code it references is gone, the learning misleads readers. Archive it.
If there is a clearly better successor, strongly consider Replace before Archive so the old artifact points readers toward the newer guidance.
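As a sketch, the frontmatter added to an archived learning might look like this (the values are hypothetical):

```yaml
archived_date: 2024-06-01
archive_reason: "Referenced ExportsController was removed; no successor found"
```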
### Pattern Guidance
Apply the same four outcomes (Keep, Update, Replace, Archive) to pattern docs, but evaluate them as derived guidance rather than incident-level learnings. Key differences:
- Keep: the underlying learnings still support the generalized rule and examples remain representative
- Update: the rule holds but examples, links, scope, or supporting references drifted
- Replace: the generalized rule is now misleading, or the underlying learnings support a different synthesis. Base the replacement on the refreshed learning set — do not invent new rules from guesswork
- Archive: the pattern is no longer valid, no longer recurring, or fully subsumed by a stronger pattern doc
If "archive" feels too strong but the pattern should no longer be elevated, reduce its prominence in place if the docs structure supports that.
## Phase 3: Ask for Decisions
Most Updates should be applied directly without asking. Only ask the user when:
- The right action is genuinely ambiguous (Update vs Replace vs Archive)
- You are about to Archive a document and the evidence is not unambiguous (see auto-archive criteria in Phase 2). When auto-archive criteria are met, proceed without asking.
- You are about to create a successor via `ce:compound`
Do not ask questions about whether code changes were intentional, whether the user wants to fix bugs in the code, or other concerns outside doc maintenance. Stay in your lane — doc accuracy.
### Question Style

Always present choices using the platform's interactive question tool (e.g. `AskUserQuestion` in Claude Code, `request_user_input` in Codex). If the environment has no interactive prompt tool, present numbered options in plain text and wait for the user's response before proceeding.
Question rules:
- Ask one question at a time
- Prefer multiple choice
- Lead with the recommended option
- Explain the rationale for the recommendation in one concise sentence
- Avoid asking the user to choose from actions that are not actually plausible
### Focused Scope
For a single artifact, present:
- file path
- 2-4 bullets of evidence
- recommended action
Then ask:
This [learning/pattern] looks like a [Update/Keep/Replace/Archive].
Why: [one-sentence rationale based on the evidence]
What would you like to do?
1. [Recommended action]
2. [Second plausible action]
3. Skip for now
Do not list all four actions unless all four are genuinely plausible.
### Batch Scope
For several learnings:
- Group obvious Keep cases together
- Group obvious Update cases together when the fixes are straightforward
- Present Replace cases individually or in very small groups
- Present Archive cases individually unless they are strong auto-archive candidates
Ask for confirmation in stages:
- Confirm grouped Keep/Update recommendations
- Then handle Replace one at a time
- Then handle Archive one at a time unless the archive is unambiguous and safe to auto-apply
### Broad Scope
If the user asked for a sweeping refresh, keep the interaction incremental:
- Narrow scope first
- Investigate a manageable batch
- Present recommendations
- Ask whether to continue to the next batch
Do not front-load the user with a full maintenance queue.
## Phase 4: Execute the Chosen Action
### Keep Flow
No file edit by default. Summarize why the learning remains trustworthy.
### Update Flow
Apply in-place edits only when the solution is still substantively correct.
Examples of valid in-place updates:

- Rename an `app/models/auth_token.rb` reference to `app/models/session_token.rb`
- Update `module: AuthToken` to `module: SessionToken`
- Fix outdated links to related docs
- Refresh implementation notes after a directory move
Examples that should not be in-place updates:

- Fixing a typo with no effect on understanding
- Rewording prose for style alone
- Small cleanup that does not materially improve accuracy or usability

Those cases are low-value churn; leave the doc alone. Other cases go beyond what Update can fix:

- The old fix is now an anti-pattern
- The system architecture changed enough that the old guidance is misleading
- The troubleshooting path is materially different

Those cases require Replace, not Update.
### Replace Flow
Process Replace candidates one at a time, sequentially. Each replacement is written by a subagent to protect the main context window.
When evidence is sufficient:

- Spawn a single subagent to write the replacement learning. Pass it:
  - The old learning's full content
  - A summary of the investigation evidence (what changed, what the current code does, why the old guidance is misleading)
  - The target path and category (same category as the old learning unless the category itself changed)
- The subagent writes the new learning following `ce:compound`'s document format: YAML frontmatter (title, category, date, module, component, tags), problem description, root cause, current solution with code examples, and prevention tips. It should use dedicated file search and read tools if it needs additional context beyond what was passed.
- After the subagent completes, the orchestrator:
  - Adds `superseded_by: [new learning path]` to the old learning's frontmatter
  - Moves the old learning to `docs/solutions/_archived/`
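As a sketch, a successor learning in that document format might be skeletoned like this (all titles, names, and values are hypothetical):

```markdown
---
title: Session tokens now rotate via the SessionToken model
category: auth-issues
date: 2024-06-01
module: SessionToken
component: authentication
tags: [sessions, rotation]
---

## Problem
...

## Root Cause
...

## Solution
(current approach, with code examples)

## Prevention
...
```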
When evidence is insufficient:

- Mark the learning as stale in place by adding `status: stale`, `stale_reason: [what you found]`, and `stale_date: YYYY-MM-DD` to the frontmatter
- Report what evidence was found and what is missing
- Recommend the user run `ce:compound` after their next encounter with that area
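For example, the stale marker might look like this (the values are hypothetical):

```yaml
status: stale
stale_reason: "Auth flow rewritten; replacement approach not yet verified"
stale_date: 2024-06-01
```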
### Archive Flow
Archive only when a learning is clearly obsolete or redundant. Do not archive a document just because it is old.
## Output Format
After processing the selected scope, report:
Compound Refresh Summary
========================
Scanned: N learnings
Kept: X
Updated: Y
Replaced: Z
Archived: W
Skipped: V
Then list the affected files and what changed.
For Keep outcomes, list them under a reviewed-without-edits section so the result is visible without creating git churn.
## Relationship to `ce:compound`

- `ce:compound` captures a newly solved, verified problem
- `ce:compound-refresh` maintains older learnings as the codebase evolves
Use Replace only when the refresh process has enough real replacement context to document the successor honestly in `ce:compound`'s format.