feat: add design-conformance-reviewer agent, weekly-shipped skill, fix counts and worktree constraints
- Add design-conformance-reviewer agent for reviewing code against design docs
- Add weekly-shipped skill for stakeholder summaries from Jira/GitHub
- Fix component counts across marketplace.json, plugin.json, and README
- Add worktree constraints to ce-review and resolve_todo_parallel skills
- Fix typo in resolve_todo_parallel SKILL.md

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -1,7 +1,7 @@
 {
   "name": "compound-engineering",
   "version": "2.40.0",
-  "description": "AI-powered development tools. 30 agents, 56 skills, 7 commands, 1 MCP server for code review, research, design, and workflow automation.",
+  "description": "AI-powered development tools. 25 agents, 54 skills, 4 commands, 1 MCP server for code review, research, design, and workflow automation.",
   "author": {
     "name": "Kieran Klaassen",
     "email": "kieran@every.to",
@@ -6,16 +6,16 @@ AI-powered development tools that get smarter with every use. Make each unit of
 | Component | Count |
 |-----------|-------|
-| Agents | 28 |
-| Commands | 22 |
-| Skills | 20 |
+| Agents | 25 |
+| Commands | 4 |
+| Skills | 54 |
 | MCP Servers | 1 |

 ## Agents

 Agents are organized into categories for easier discovery.

-### Review (15)
+### Review (16)

 | Agent | Description |
 |-------|-------------|
@@ -23,6 +23,7 @@ Agents are organized into categories for easier discovery.
 | `architecture-strategist` | Analyze architectural decisions and compliance |
 | `code-simplicity-reviewer` | Final pass for simplicity and minimalism |
 | `data-integrity-guardian` | Database migrations and data integrity |
+| `design-conformance-reviewer` | Review code against design docs for conformance and deviation |
 | `data-migration-expert` | Validate ID mappings match production, check for swapped values |
 | `deployment-verification-agent` | Create Go/No-Go deployment checklists for risky data changes |
 | `dhh-rails-reviewer` | Rails review from DHH's perspective |
@@ -138,6 +139,7 @@ Core workflow commands use `ce:` prefix to unambiguously identify them as compou
 | `proof` | Create, edit, and share documents via Proof collaborative editor |
 | `resolve-pr-parallel` | Resolve PR review comments in parallel |
 | `setup` | Configure which review agents run for your project |
+| `weekly-shipped` | Generate weekly stakeholder summary of shipped work from Jira and GitHub |

 ### Multi-Agent Orchestration
@@ -0,0 +1,140 @@
---
name: design-conformance-reviewer
description: "Reviews code against the talent-ats-platform design documents to ensure implementation conforms to architectural decisions, entity models, contracts, and behavioral specs. Use when reviewing PRs, new features, or adapter implementations in the ATS platform."
model: inherit
---

<examples>
<example>
Context: The user has implemented a new adapter for an ATS integration.
user: "I just finished the Lever adapter implementation, can you check it matches our design?"
assistant: "I'll use the design-conformance-reviewer agent to verify the Lever adapter conforms to the adapter interface contract and design specifications"
<commentary>New adapter implementations must conform to the adapter-interface-contract.md and adapter-development-guide.md. The design-conformance-reviewer will cross-reference the implementation against these specs.</commentary>
</example>
<example>
Context: The user has added a new entity or modified the data model.
user: "I added a new field to the Opportunity entity for tracking interview feedback"
assistant: "Let me use the design-conformance-reviewer to check this against the canonical entity model and ensure the field follows our design conventions"
<commentary>Entity changes must align with canonical-entity-model.md field semantics, nullable conventions, and the mapping-matrix.md transform rules.</commentary>
</example>
<example>
Context: The user has implemented error handling in a service.
user: "I refactored the sync error handling to add better retry logic"
assistant: "I'll run the design-conformance-reviewer to verify the error classification and retry behavior matches our error taxonomy"
<commentary>Error handling must follow phase3-error-taxonomy.md classifications, retry counts, backoff curves, and circuit breaker parameters.</commentary>
</example>
</examples>
You are a Design Conformance Reviewer for the talent-ats-platform. Your job is to ensure every line of implementation faithfully reflects the design corpus in `docs/`. When the design says one thing and the code does another, you flag it. You are not a general code reviewer — you are a design fidelity auditor.

## Before You Review

Read the design documents relevant to the code under review. The design corpus lives in `docs/` and is organized as follows:

**Core architecture** (read first for any review):
- `final-design-document.md` — navigation hub, phase summaries, cross-team dependencies
- `system-context-diagram.md` — C4 Level 1 boundaries
- `component-diagram.md` — container architecture, inter-container protocols, boundary decisions
- `technology-decisions-record.md` — 10 ADRs plus 13 cross-referenced decisions

**Entity and data model** (read for any entity, field, or schema work):
- `canonical-entity-model.md` — authoritative field definitions, enums, nullable conventions, response envelopes
- `data-store-schema.md` — PostgreSQL DDL, Redis key patterns, tenant_id rules, PII constraints
- `mapping-matrix.md` — per-adapter field transforms, transform codes, filter push-down
- `identity-resolution-strategy.md` — three-layer resolution, mapping rules, path responsibilities

**Behavioral specs** (read for sync, events, state, or error handling):
- `state-management-design.md` — sync lifecycle state machine, cursor rules, checkpoint semantics, idempotency
- `event-architecture.md` — webhook handling, signature verification, dedup, ordering guarantees
- `phase3-error-taxonomy.md` — failure classifications, retry counts, backoff curves, circuit breaker params
- `conflict-resolution-rules.md` — cache write precedence, source attribution

**Contracts and interfaces** (read for API or adapter work):
- `api-contract.md` — gRPC service definition, error serialization, pagination, auth, latency targets
- `adapter-interface-contract.md` — 16 method signatures, protocol types, error classification sub-contract, capabilities
- `adapter-development-guide.md` — platform services, extraction boundary, method reference cards

**Constraints** (read when performance, scale, or compliance questions arise):
- `constraints-document.md` — volume limits, latency targets, consistency model, PII/GDPR
- `non-functional-requirements-matrix.md` — NFR traceability, degradation behavior

**Known issues** (read to distinguish intentional gaps from deviations):
- `red-team-review.md` — known contract leaks, open findings by severity
## Review Protocol

For each piece of code under review:

1. **Identify the design surface.** Determine which design documents govern this code. A sync service touches state-management-design, error-taxonomy, and constraints. An adapter touches adapter-interface-contract, mapping-matrix, and canonical-entity-model. Read the relevant docs before forming any opinion.

2. **Check structural conformance.** Verify the code implements the architecture as designed:
   - Component boundaries match `component-diagram.md`
   - Service boundaries and communication protocols match ADRs (gRPC, not REST between internal services)
   - Data flows match `data-flow-diagrams.md` sequences
   - Module organization follows the modular monolith decision (ADR-3)

3. **Check entity and schema conformance.** For any data model work:
   - Field names, types, and nullability match `canonical-entity-model.md`
   - Enum values match the canonical definitions exactly
   - PostgreSQL tables include `tenant_id` (per `data-store-schema.md` design principle)
   - No PII stored in PostgreSQL (PII goes to cache/encrypted store per design)
   - Redis key patterns follow the 6 logical stores defined in schema docs
   - Response envelopes include `connection_health` via trailing metadata

4. **Check behavioral conformance.** For any stateful or event-driven code:
   - Sync state transitions follow the state machine in `state-management-design.md`
   - Cursor advancement follows checkpoint commit semantics
   - Write idempotency uses SHA-256 hashing per design
   - Error classifications use the exact taxonomy (TRANSIENT, PERMANENT_AUTH_FAILURE, etc.)
   - Retry counts and backoff curves match `phase3-error-taxonomy.md` parameters
   - Circuit breaker thresholds match design specifications
   - Webhook handlers ACK then process async, with dedup per `event-architecture.md`
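The content-hash idempotency idea in step 4 can be sketched in shell. This is a hypothetical illustration only — the payload shape and key derivation are assumed, not taken from the platform's design docs:

```shell
#!/bin/sh
# Hypothetical sketch: derive an idempotency key by SHA-256 hashing the
# canonical write payload. A retried write yields the same key, so the
# store can detect and drop the duplicate. Payload shape is assumed.
payload='{"entity":"opportunity","id":"opp-123","stage":"interview"}'

# sha256sum on Linux; shasum -a 256 on macOS
if command -v sha256sum >/dev/null 2>&1; then
  idempotency_key=$(printf '%s' "$payload" | sha256sum | awk '{print $1}')
else
  idempotency_key=$(printf '%s' "$payload" | shasum -a 256 | awk '{print $1}')
fi

echo "$idempotency_key"  # 64 hex chars, stable across retries of the same payload
```

The review question is then whether the code hashes the canonical form of the payload (stable field order, no timestamps) so retries actually collide.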
5. **Check contract conformance.** For API or adapter code:
   - gRPC methods match `api-contract.md` service definition
   - Error serialization uses PlatformError with typed oneof
   - Pagination uses opaque cursors, no total count
   - Adapter methods implement all 16 signatures from `adapter-interface-contract.md`
   - Adapter capabilities declaration is accurate (no over-promising)
   - Auth follows mTLS+JWT per design

6. **Check constraint conformance.** Verify non-functional requirements:
   - Read operations target <500ms latency
   - Write operations target <2s latency
   - Webhook ACK targets <200ms
   - Batch operations respect 10k candidate limit
   - Connection count assumes up to 500

7. **Cross-reference known issues.** Before flagging something, check `red-team-review.md` to see if it's a known finding. If so, note the finding ID rather than re-reporting it. If code addresses a red team finding, call that out positively.
## Output Format

Structure findings as:

### Design Conformance Review

**Documents referenced:** [list the design docs you read]

**Conformant:**
- [List specific design decisions the code correctly implements, citing the source doc]

**Deviations:**
For each deviation:
- **What:** [specific code behavior]
- **Expected (per design):** [what the design document specifies, with doc name and section]
- **Severity:** CRITICAL (breaks a contract or invariant) | HIGH (contradicts an ADR or behavioral spec) | MEDIUM (departs from conventions) | LOW (stylistic or naming mismatch)
- **Recommendation:** [how to bring into conformance]

**Ambiguous / Not Covered by Design:**
- [Areas where the design is silent or ambiguous — flag these for the team to decide, not as deviations]

**Red Team Findings Addressed:**
- [Any red-team-review.md findings resolved by this code]

## Principles

- **The design documents are the source of truth.** If the code and the design disagree, the code is wrong until the design is explicitly updated. Do not rationalize deviations.
- **Be specific.** Cite the exact document, section, and specification being violated. "Doesn't match the design" is not a finding.
- **Distinguish deviations from gaps.** If the design doesn't address something, that's an ambiguity, not a deviation. Flag it differently.
- **Acknowledge conformance.** Explicitly call out where the implementation correctly follows the design. This builds confidence and helps others learn the design.
- **Read before you judge.** Never flag a deviation without first reading the governing design document in this review session. Stale memory of what a doc says is not sufficient.
@@ -86,6 +86,12 @@ Run all agents simultaneously for speed. If you hit context limits, retry with `
 #### Parallel Agents to review the PR:

+<worktree_constraint>
+
+**IMPORTANT: Do NOT create worktrees per review agent.** A worktree or branch was already set up in Phase 1 (or provided in the original prompt from `/ce:work`). All review agents run in that same checkout. If a worktree path was provided, `cd` into it. Otherwise, find the worktree where the target branch is checked out using `git worktree list`. Never pass `isolation: "worktree"` when spawning review agents — they are read-only and share the existing checkout.
+
+</worktree_constraint>
+
 <parallel_tasks>

 **Parallel mode (default for ≤5 agents):**
@@ -20,9 +20,11 @@ Create a TodoWrite list of all unresolved items grouped by type.Make sure to loo
 ### 3. Implement (PARALLEL)

+**IMPORTANT: Do NOT create worktrees per todo item.** A worktree or branch was already set up before this command was invoked (typically by `/ce:work`). If a worktree path was provided in the original prompt, `cd` into it. Otherwise, find the worktree where the working branch is checked out using `git worktree list`. All agents work in that single checkout — never pass `isolation: "worktree"` when spawning agents.
+
 Spawn a pr-comment-resolver agent for each unresolved item in parallel.

-So if there are 3 comments, it will spawn 3 pr-comment-resolver agents in parallel. liek this
+So if there are 3 comments, it will spawn 3 pr-comment-resolver agents in parallel. Like this:

 1. Task pr-comment-resolver(comment1)
 2. Task pr-comment-resolver(comment2)
plugins/compound-engineering/skills/weekly-shipped/SKILL.md (new file, 189 lines)
@@ -0,0 +1,189 @@
---
name: weekly-shipped
description: Generate a weekly summary of all work shipped by the Talent team. Queries Jira ZAS board and GitHub PRs across talent-engine, talent-ats-platform, and agentic-ai-platform. Cross-references tickets and PRs, groups by theme, and writes a Slack-ready stakeholder summary to ~/projects/talent-engine/docs/. Run every Friday afternoon. Triggers on "weekly shipped", "weekly update", "friday update", "what shipped this week".
disable-model-invocation: true
allowed-tools: Bash(gh *), Bash(date *), Bash(jq *), Read, Write, mcp__atlassian__searchJiraIssuesUsingJql, mcp__atlassian__getJiraIssue
---

# Weekly Shipped Summary

Generate a stakeholder-ready summary of work shipped this week by the Talent team.

**Voice**: Before drafting the summary, load `/john-voice` — read [core-voice.md](../john-voice/references/core-voice.md) and [casual-messages.md](../john-voice/references/casual-messages.md). The tone is a 1:1 with your GM — you have real rapport, you're direct and honest, you say why things matter, but you're not slouching. Not a coffee chat, not a board deck.

## Constants

- **Jira cloudId**: `9cbcbbfd-6b43-42ab-a91c-aaaafa8b7f32`
- **Jira project**: `ZAS`
- **Jira board**: `https://discoverorg.atlassian.net/jira/software/c/projects/ZAS/boards/5615`
- **GitHub host**: `git.zoominfo.com`
- **Repos**:
  - `dozi/talent-engine`
  - `dozi/talent-ats-platform`
  - `dozi/agentic-ai-platform` (talent PRs only)
- **Output dir**: `~/projects/talent-engine/docs/`
- **Ticket URL pattern**: `https://discoverorg.atlassian.net/browse/{KEY}`
- **PR URL pattern**: `https://git.zoominfo.com/{org}/{repo}/pull/{number}`

## Coverage Window

**Last Friday 1:00 PM CT → This Friday 12:59 PM CT**

The window is approximate at the day level for queries. The skill runs Friday afternoon, so "this week" means the 7-day period ending now.

## Workflow

### Step 1: Calculate Dates

Determine the date range for queries:

```bash
# Last Friday (YYYY-MM-DD) — macOS BSD date
LAST_FRIDAY=$(date -v-fri -v-1w "+%Y-%m-%d")

# This Friday (YYYY-MM-DD)
THIS_FRIDAY=$(date -v-fri "+%Y-%m-%d")

echo "Window: $LAST_FRIDAY to $THIS_FRIDAY"
```

Store `LAST_FRIDAY` and `THIS_FRIDAY` for use in all subsequent queries.
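The `-v` flags above are BSD-specific. If this ever runs on Linux, a minimal GNU coreutils equivalent — assuming, as the skill does, that it runs on Friday afternoon — is:

```shell
#!/bin/sh
# GNU date (Linux) sketch. Since the skill runs on Friday,
# "this Friday" is today and "last Friday" is 7 days ago.
LAST_FRIDAY=$(date -d "7 days ago" "+%Y-%m-%d")
THIS_FRIDAY=$(date "+%Y-%m-%d")

echo "Window: $LAST_FRIDAY to $THIS_FRIDAY"
```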
### Step 2: Gather Data

Run Jira and GitHub queries in parallel.

#### 2a. Jira — Tickets Completed This Week

Search for tickets resolved in the window:

```
mcp__atlassian__searchJiraIssuesUsingJql
cloudId: 9cbcbbfd-6b43-42ab-a91c-aaaafa8b7f32
jql: project = ZAS AND status = Done AND resolved >= "{LAST_FRIDAY}" AND resolved <= "{THIS_FRIDAY}" ORDER BY resolved DESC
limit: 50
```

For each ticket, capture: key, summary, assignee, status.

If the initial query returns few results, also try:

```
jql: project = ZAS AND status changed to "Done" after "{LAST_FRIDAY}" before "{THIS_FRIDAY}" ORDER BY updated DESC
```
#### 2b. GitHub — Merged PRs

Query all three repos for merged PRs. Run these three commands in parallel:

```bash
# talent-engine
GH_HOST=git.zoominfo.com gh pr list --repo dozi/talent-engine \
  --state merged --search "merged:>={LAST_FRIDAY}" \
  --json number,title,url,mergedAt,author,headRefName --limit 100

# talent-ats-platform
GH_HOST=git.zoominfo.com gh pr list --repo dozi/talent-ats-platform \
  --state merged --search "merged:>={LAST_FRIDAY}" \
  --json number,title,url,mergedAt,author,headRefName --limit 100

# agentic-ai-platform (fetch all, filter for talent next)
GH_HOST=git.zoominfo.com gh pr list --repo dozi/agentic-ai-platform \
  --state merged --search "merged:>={LAST_FRIDAY}" \
  --json number,title,url,mergedAt,author,headRefName --limit 100
```

**Filter agentic-ai-platform results**: Only keep PRs where:
- `title` contains "talent" or "[Talent]" (case-insensitive), OR
- `headRefName` starts with "talent-" or "talent/"

Discard the rest — they belong to other teams.
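One way to apply this filter mechanically is with `jq` (already in the allowed tools). A sketch, with sample input standing in for the real `gh pr list` output:

```shell
#!/bin/sh
# Sample standing in for `gh pr list --json number,title,headRefName` output
cat > /tmp/agentic_prs.json <<'EOF'
[
  {"number": 1, "title": "[Talent] Add candidate ranking", "headRefName": "feature/ranking"},
  {"number": 2, "title": "Fix shared CI pipeline", "headRefName": "infra/ci-fix"},
  {"number": 3, "title": "Cleanup", "headRefName": "talent/cleanup"}
]
EOF

# Keep PRs whose title mentions "talent" (case-insensitive)
# or whose branch starts with "talent-" or "talent/"
jq '[ .[] | select(
      ((.title | ascii_downcase) | contains("talent"))
      or (.headRefName | startswith("talent-"))
      or (.headRefName | startswith("talent/"))
    ) ]' /tmp/agentic_prs.json
```

On the sample above this keeps PRs 1 and 3 and drops the CI fix.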
### Step 3: Cross-Reference

Build a unified picture of what shipped:

1. **Match PRs to Jira tickets** — Scan PR titles and branch names for ticket keys (ZAS-NNN pattern). Link matched pairs.
2. **Identify orphan PRs** — PRs with no Jira ticket. These represent real work that slipped through ticketing. Include them.
3. **Filter out empty tickets** — Jira tickets moved to Done with no corresponding PR and no evidence of work (no comments, no linked PRs). Exclude silently — these were likely backlog grooming moves, not shipped work.
4. **Verify merge times** — Confirm merged PRs fall within the actual window. GitHub search by date can be slightly off.
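The key-matching in step 1 can be sketched with `jq` and `grep`. The sample file here is hypothetical — it stands in for the merged-PR JSON gathered in Step 2:

```shell
#!/bin/sh
# Sample PR list (hypothetical stand-in for Step 2 output)
cat > /tmp/merged_prs.json <<'EOF'
[
  {"title": "ZAS-101 add outreach templates", "headRefName": "zas-101-templates"},
  {"title": "Fix card layout", "headRefName": "feature/ZAS-202-cards"},
  {"title": "Bump deps", "headRefName": "chore/deps"}
]
EOF

# Pull unique ZAS-NNN keys from titles and branch names (case-insensitive),
# normalizing to uppercase so lowercase branch names still match
jq -r '.[] | "\(.title) \(.headRefName)"' /tmp/merged_prs.json \
  | grep -oEi 'ZAS-[0-9]+' | tr '[:lower:]' '[:upper:]' | sort -u
```

PRs with no key in either field (the "Bump deps" row) fall out as orphans to handle in step 2.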
### Step 4: Group by Theme

Review all shipped items and cluster into 3-6 logical groups based on feature area. Examples of past groupings:

- **Outreach System** — email, templates, response tracking
- **Candidate Experience** — UI, cards, review flow
- **Search & Pipeline** — agentic search, batch generation, ranking
- **Dev Ops** — infrastructure, staging, deployments, CI
- **ATS Platform** — data model, architecture, platform decisions
- **Developer Tooling** — internal tools, automation

Adapt groups to whatever was actually shipped. Do not force-fit. If something doesn't fit a group, let it stand alone.

**Skip these unless the week is light on real content:**
- Dependency updates, version bumps
- Code cleanup, refactoring with no user-facing impact
- Test additions
- Linter/formatter config changes
- Minor bug fixes

### Step 5: Draft the Summary

**Title**: `Agentic Sourcing App Weekly Highlights {Mon} {Day}{ordinal}`
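The `{Day}{ordinal}` piece can be computed in shell if you want the title generated rather than typed. A small sketch (the `ordinal` helper name is ours, not part of the skill):

```shell
#!/bin/sh
# Hypothetical helper for the English ordinal suffix: 1st, 2nd, 3rd, 4th ... 11th ... 21st
ordinal() {
  case $1 in
    1|21|31) echo "st" ;;
    2|22)    echo "nd" ;;
    3|23)    echo "rd" ;;
    *)       echo "th" ;;   # covers 11, 12, 13 correctly too
  esac
}

day=$(date +%d | sed 's/^0*//')   # strip leading zero portably (BSD date lacks %-d)
echo "Agentic Sourcing App Weekly Highlights $(date +%b) ${day}$(ordinal "$day")"
```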
**Critical rules — read these before writing:**

1. **UNDERSTATE, never overstate.** Senior leaders read this. Getting caught overstating kills credibility. If the work is foundational, say "foundations." If it's on mock data, say "mock data." If it's not wired end-to-end, say so.
2. **Non-technical language.** The reader is a VP, not an engineer. "Database schema added" → "Tracking infrastructure set up." "Refactored query layer" → skip it or say "Search speed improvements."
3. **Qualify incomplete work honestly.** Qualifications aren't caveats — they're what makes the update credible. "Hasn't been tested end-to-end yet, but the pieces are connected" is stronger than pretending it's done. Always note gaps, blockers, and what's next.
4. **Say why, not just what.** Every bullet should connect what shipped to why it matters. Not "Nightly batch generation running in staging" — instead "Nightly batch generation is running in staging. The goal is recruiters waking up to fresh candidates every morning without doing anything." If you can't explain why a reader should care, reconsider including it.
5. **No laundry lists.** Each bullet should read like a short explanation, not a changelog entry. If a section has more than 3-4 bullets, you're listing features, not telling someone what happened. Merge related items. Bad: `"Contact actions MVP: compose email and copy phone directly from cards. Project metadata row in header. Outreach template MVP with search state polish."` Good: `"Cards are starting to feel like a real tool. Recruiters can send an email or grab a phone number without leaving the card, see previous roles, career trajectory, and AI scores inline."`
6. **Give credit.** Call out individuals with @first.last when they knocked something out of the park. Don't spray kudos everywhere — be selective and genuine.
7. **Be skimmable.** Each group gets a bold header + 2-4 bullet points max. Each bullet is 1-3 lines. The whole message should take 60 seconds to read.
8. **No corporate speak.** No "leveraging", "enhancing", "streamlining", "driving", "aligning", "meaningfully", "building block." Write like you're explaining what happened to someone you respect.
9. **Link tickets and PRs where they add value.** Inline link tickets where a reader might want to click through for detail: `[ZAS-123](https://discoverorg.atlassian.net/browse/ZAS-123)`. Link PRs when they represent significant standalone work. Don't link every single one — just where it helps.
10. **This is a first draft, not the final product.** Optimize for editability. Get the structure, facts, and links right. Keep the voice close. The human will sharpen it before sharing.

**Format:**

```
Agentic Sourcing App Weekly Highlights {date}

**{Group Name}** {optional — short color commentary or kudos}

- {Item} — {what shipped, why it matters, any qualifications}
- {Item} — {context}

**{Group Name}**

- {Item}
- {Item}

{Optional closing note — kudos, callout, or one-liner}
```

### Step 6: Write to File

Save the summary:

```
~/projects/talent-engine/docs/weekly-shipped-{YYYY-MM-DD}.md
```

Where the date is this Friday's date. The file is plain markdown optimized for copy-pasting into Slack.
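Since the skill runs on Friday, the dated path can be built directly from today's date; a one-line sketch:

```shell
#!/bin/sh
# The skill runs on Friday, so this Friday's date is today's date
OUT_FILE="$HOME/projects/talent-engine/docs/weekly-shipped-$(date +%Y-%m-%d).md"
echo "$OUT_FILE"
```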
### Step 7: Present and Confirm

Display the full summary to the user. Ask:

> Here's the weekly shipped summary. Anything to adjust, add, or cut before you share it?

Wait for confirmation before considering the skill complete.

## Troubleshooting

**gh auth issues**: If `GH_HOST=git.zoominfo.com gh` fails, check that `gh auth status --hostname git.zoominfo.com` shows an authenticated session.

**Jira returns no results**: Try broadening the JQL — drop the `resolved` filter and use `status = Done AND updated >= "{LAST_FRIDAY}"` instead. Some tickets may not have the resolution date set.

**Few PRs found**: Some repos may use squash merges or have PRs merged to non-default branches. Check if `--search "merged:>={LAST_FRIDAY}"` needs adjustment.