minor updates; adds new skills for ship-it and proof-push

This commit is contained in:
John Lamb
2026-03-13 18:20:27 -05:00
parent 4bc2409d91
commit 24d77808c0
8 changed files with 885 additions and 1 deletion

@@ -5,6 +5,7 @@ All notable changes to the compound-engineering plugin will be documented in thi
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [2.39.0] - 2026-03-10
### Added
@@ -22,6 +23,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Added
- **`story-lens` skill** - Evaluate prose quality using George Saunders's storytelling framework (beat causality, escalation, three E's, character accumulation, moral/technical unity)
- **`sync-confluence` skill** - Sync local markdown docs to Confluence Cloud pages via REST API. Handles first-time setup, page creation, and bulk updates with automatic mapping file management.
## [2.36.0] - 2026-02-27

@@ -8,7 +8,7 @@ description: This skill should be used when the user wants to create a Jira tick
Write Jira tickets that sound like a human wrote them. Drafts go through tone review before the user sees them, and nothing gets created without explicit approval.
## Reference
For tickets pertaining to Talent Engine (Agentic App), TalentOS, Comparably, or the ATS Platform: Use the `ZAS` Jira project
When creating epics and tickets for Talent Engine always add the label `talent-engine` and prefix the name with "[Agentic App]"
When creating epics and tickets for the ATS Platform always add the label `ats-platform` and prefix the name with "[ATS Platform]"

@@ -1,6 +1,7 @@
---
name: john-voice
description: "This skill should be used whenever writing content that should sound like John Lamb wrote it. It applies to all written output including Slack messages, emails, Jira tickets, technical docs, prose, blog posts, cover letters, and any other communication. This skill provides John's authentic writing voice, tone, and style patterns organized by venue and audience. Other skills should invoke this skill when producing written content on John's behalf. Triggers on any content generation, drafting, or editing task where the output represents John's voice."
allowed-tools: Read
---
# John's Writing Voice

@@ -0,0 +1,45 @@
---
name: proof-push
description: This skill should be used when the user wants to push a markdown document to a running Proof server instance. It accepts a file path as an argument, posts the markdown content to the Proof API, and returns the document slug and URL. Triggers on "push to proof", "proof push", "open in proof", "send to proof", or any request to render markdown in Proof.
---
# Proof Push
Push a local markdown file to a running Proof server and open it in the browser.
## Usage
Accept a markdown file path as the argument. If no path is provided, ask for one.
### Execution
Run the bundled script to post the document:
```bash
bash scripts/proof_push.sh <file-path> [server-url] [ui-url]
```
- `file-path` — absolute or relative path to a `.md` file (required)
- `server-url` — Proof server URL, defaults to `http://localhost:4000`
- `ui-url` — Proof UI URL used in the returned links, defaults to `http://localhost:3000`
The script:
1. Reads the file content
2. POSTs to `/share/markdown` as JSON with `{markdown, title}`
3. Returns the slug, base URL, and editor URL with access token
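The request body the script builds can be sketched in a few lines of Python (a sketch only — `build_proof_payload` is illustrative and not part of the bundled script):

```python
from pathlib import Path

def build_proof_payload(md_path: str) -> dict:
    """Build the JSON body POSTed to /share/markdown: {markdown, title}."""
    p = Path(md_path)
    # Title defaults to the filename without its .md suffix,
    # matching what the bundled script derives with basename.
    return {"markdown": p.read_text(encoding="utf-8"), "title": p.stem}

# Posting it (requires a running Proof server; URL is the assumed default):
#   import requests
#   resp = requests.post("http://localhost:4000/share/markdown",
#                        json=build_proof_payload("notes.md"))
#   print(resp.json()["slug"])
```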
### Output
Report the returned slug and URLs to the user. The editor URL (with token) gives full edit access.
### Error Handling
If the script fails, check:
- Is the Proof server running? (`curl http://localhost:4000`)
- Does the file exist and contain non-empty markdown?
- Is `jq` installed? (required for JSON construction)
## Resources
### scripts/
- `proof_push.sh` — Shell script that posts markdown to Proof's `/share/markdown` endpoint and returns the document slug and URLs.

@@ -0,0 +1,34 @@
#!/usr/bin/env bash
# Push a markdown file to a running Proof server and return the document URL.
# Usage: proof_push.sh <path-to-markdown> [server-url] [ui-url]
set -euo pipefail
FILE="${1:?Usage: proof_push.sh <markdown-file> [server-url] [ui-url]}"
SERVER="${2:-http://localhost:4000}"
UI_URL="${3:-http://localhost:3000}"
if [[ ! -f "$FILE" ]]; then
echo "error: file not found: $FILE" >&2
exit 1
fi
TITLE=$(basename "$FILE" .md)
RESPONSE=$(curl -s -X POST "${SERVER}/share/markdown" \
-H "Content-Type: application/json" \
-d "$(jq -n --arg md "$(cat "$FILE")" --arg title "$TITLE" '{markdown: $md, title: $title}')")
SLUG=$(echo "$RESPONSE" | jq -r '.slug // empty')
ERROR=$(echo "$RESPONSE" | jq -r '.error // empty')
if [[ -z "$SLUG" ]]; then
echo "error: failed to create document${ERROR:+: $ERROR}" >&2
echo "$RESPONSE" >&2
exit 1
fi
TOKEN_PATH=$(echo "$RESPONSE" | jq -r '.tokenPath // empty')
echo "slug: $SLUG"
echo "url: ${UI_URL}/d/${SLUG}"
[[ -n "$TOKEN_PATH" ]] && echo "editor-url: ${UI_URL}${TOKEN_PATH}"

@@ -0,0 +1,120 @@
---
name: ship-it
description: This skill should be used when the user wants to ticket, branch, commit, and open a PR in one shot. It creates a Jira ticket from conversation context, assigns it, moves it to In Progress, creates a branch, commits changes, pushes, and opens a PR. Triggers on "ship it", "ticket and PR this", "put up a PR", "let's ship this", or any request to package completed work into a ticket + PR.
---
# Ship It
End-to-end workflow: Jira ticket + branch + commit + push + PR from conversation context. Run after a fix or feature is done and needs to be formally shipped.
## Constants
- **Jira cloudId**: `9cbcbbfd-6b43-42ab-a91c-aaaafa8b7f32`
- **Jira project**: `ZAS`
- **Issue type**: `Story`
- **Assignee accountId**: `712020:62c4d18e-a579-49c1-b228-72fbc63186de`
- **PR target branch**: `stg` (unless specified otherwise)
## Workflow
### Step 1: Gather Context
Analyze the conversation above to determine:
- **What was done** — the fix, feature, or change
- **Why** — the problem or motivation
- **Which files changed** — run `git diff` and `git status` to see the actual changes
Synthesize a ticket summary (under 80 chars, imperative mood) and a brief description. Do not ask the user to describe the work — extract it from conversation context.
### Step 2: Create Jira Ticket
Use `/john-voice` to draft the ticket content, then create via MCP:
```
mcp__atlassian__createJiraIssue
cloudId: 9cbcbbfd-6b43-42ab-a91c-aaaafa8b7f32
projectKey: ZAS
issueTypeName: Story
summary: <ticket title>
description: <ticket body>
assignee_account_id: 712020:62c4d18e-a579-49c1-b228-72fbc63186de
contentFormat: markdown
```
Extract the ticket key (e.g. `ZAS-123`) from the response.
### Step 3: Move to In Progress
Get transitions and find the "In Progress" transition ID:
```
mcp__atlassian__getTransitionsForJiraIssue
cloudId: 9cbcbbfd-6b43-42ab-a91c-aaaafa8b7f32
issueIdOrKey: <ticket key>
```
Then apply the transition:
```
mcp__atlassian__transitionJiraIssue
cloudId: 9cbcbbfd-6b43-42ab-a91c-aaaafa8b7f32
issueIdOrKey: <ticket key>
transition: { "id": "<transition_id>" }
```
### Step 4: Create Branch
Create and switch to a new branch named after the ticket:
```bash
git checkout -b <ticket-key>
```
Example: `git checkout -b ZAS-123`
### Step 5: Commit Changes
Stage and commit all relevant changes. Use the ticket key as a prefix in the commit message. Follow project git conventions (lowercase, no periods, casual).
```bash
git add <specific files>
git commit -m "<ticket-key> <short description>"
```
Example: `ZAS-123 fix candidate email field mapping`
Include the co-author trailer:
```
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```
### Step 6: Push and Open PR
Push the branch:
```bash
git push -u origin <ticket-key>
```
Use `/john-voice` to write the PR title and body. Create the PR:
```bash
gh pr create --title "<PR title>" --base stg --body "<PR body>"
```
PR body format:
```markdown
## Summary
<2-3 bullets describing the change>
## Jira
[<ticket-key>](https://discoverorg.atlassian.net/browse/<ticket-key>)
## Test plan
<bulleted checklist>
```
### Step 7: Report
Output the ticket URL and PR URL to the user.

@@ -0,0 +1,153 @@
---
name: sync-confluence
description: This skill should be used when syncing local markdown documentation to Confluence Cloud pages. It handles first-time setup (creating mapping files and docs directories), pushing updates to existing pages, and creating new pages with interactive destination prompts. Triggers on "sync to confluence", "push docs to confluence", "update confluence pages", "create a confluence page", or any request to publish markdown content to Confluence.
allowed-tools: Read, Bash(find *), Bash(source *), Bash(uv run *)
---
# Sync Confluence
Sync local markdown files to Confluence Cloud pages via REST API. Handles the full lifecycle: first-time project setup, page creation, and bulk updates.
## Prerequisites
Two environment variables must be set (typically in `~/.zshrc`):
- `CONFLUENCE_EMAIL` — Atlassian account email
- `CONFLUENCE_API_TOKEN_WRITE` — Atlassian API token with write scope (falls back to `CONFLUENCE_API_TOKEN`)
Generate tokens at: https://id.atlassian.com/manage-profile/security/api-tokens
The script requires `uv` to be installed. Dependencies (`markdown`, `requests`, `truststore`) are declared inline via PEP 723 and resolved automatically by `uv run`.
## Workflow
### 1. Check for Mapping File
Before running the sync script, check whether a `.confluence-mapping.json` exists in the project:
```bash
find "$(git rev-parse --show-toplevel 2>/dev/null || pwd)" -name ".confluence-mapping.json" -maxdepth 3 2>/dev/null
```
- **If found** — skip to step 3 (Sync).
- **If not found** — proceed to step 2 (First-Time Setup).
### 2. First-Time Setup
When no mapping file exists, gather configuration interactively via `AskUserQuestion`:
1. **Confluence base URL** — e.g., `https://myorg.atlassian.net/wiki`
2. **Space key** — short identifier in Confluence URLs (e.g., `ZR`, `ENG`)
3. **Parent page ID** — the page under which synced pages nest. Tell the user: "Open the parent page in Confluence — the page ID is the number in the URL."
4. **Parent page title** — prefix for generated page titles (e.g., `ATS Platform`)
5. **Docs directory** — where markdown files live relative to repo root (default: `docs/`)
Then create the docs directory and mapping file:
```python
import json
from pathlib import Path
config = {
"confluence": {
"cloudId": "<domain>.atlassian.net",
"spaceId": "",
"spaceKey": "<SPACE_KEY>",
"baseUrl": "<BASE_URL>"
},
"parentPage": {
"id": "<PARENT_PAGE_ID>",
"title": "<PARENT_TITLE>",
"url": "<BASE_URL>/spaces/<SPACE_KEY>/pages/<PARENT_PAGE_ID>"
},
"pages": {},
"unmapped": [],
"lastSynced": ""
}
docs_dir = Path("<REPO_ROOT>") / "<DOCS_DIR>"
docs_dir.mkdir(parents=True, exist_ok=True)
mapping_path = docs_dir / ".confluence-mapping.json"
mapping_path.write_text(json.dumps(config, indent=2) + "\n")
```
To discover `spaceId` (required for page creation), run:
```bash
source ~/.zshrc && curl -s -u "${CONFLUENCE_EMAIL}:${CONFLUENCE_API_TOKEN_WRITE}" \
-H "X-Atlassian-Token: no-check" \
"<BASE_URL>/rest/api/space/<SPACE_KEY>" | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])"
```
Update the mapping file with the discovered spaceId before proceeding.
### 3. Sync — Running the Script
The sync script is at `${CLAUDE_PLUGIN_ROOT}/skills/sync-confluence/scripts/sync_confluence.py`.
**Always source shell profile before running** to load env vars:
```bash
source ~/.zshrc && uv run ${CLAUDE_PLUGIN_ROOT}/skills/sync-confluence/scripts/sync_confluence.py [options]
```
#### Common Operations
| Command | What it does |
|---------|-------------|
| _(no flags)_ | Sync all markdown files in docs dir |
| `--dry-run` | Preview changes without API calls |
| `--file docs/my-doc.md` | Sync a single file |
| `--update-only` | Only update existing pages, skip unmapped files |
| `--create-only` | Only create new pages, skip existing |
| `--mapping-file path/to/file` | Use a specific mapping file |
| `--docs-dir path/to/dir` | Override docs directory |
### 4. Creating a New Confluence Page
When the user wants to create a new page:
1. Ask for the page topic/title
2. Create the markdown file in the docs directory with a `# Title` heading and content
3. Run the sync script with `--file` pointing to the new file
4. The script detects the unmapped file, creates the page, and updates the mapping
**Title resolution order:** First `# H1` from the markdown → filename-derived title → raw filename. Titles are prefixed with the parent page title (e.g., `My Project: New Page`).
### 5. Mapping File Structure
```json
{
"confluence": {
"cloudId": "myorg.atlassian.net",
"spaceId": "1234567890",
"spaceKey": "ZR",
"baseUrl": "https://myorg.atlassian.net/wiki"
},
"parentPage": {
"id": "123456789",
"title": "My Project",
"url": "https://..."
},
"pages": {
"my-doc.md": {
"pageId": "987654321",
"title": "My Project: My Doc",
"url": "https://..."
}
},
"unmapped": [],
"lastSynced": "2026-03-03"
}
```
The script updates this file after each successful sync. Do not manually edit page entries unless correcting a known error.
## Technical Notes
- **Auth:** Confluence REST API v1 with Basic Auth + `X-Atlassian-Token: no-check`. Some Cloud instances block v2 or require this XSRF bypass.
- **Content format:** Markdown converted to Confluence storage format (XHTML) via Python `markdown` library with tables, fenced code, and TOC extensions.
- **SSL:** `truststore` delegates cert verification to the OS trust store, handling corporate SSL proxies (Zscaler, etc.).
- **Rate limiting:** Automatic retry with backoff on 429 and 5xx responses.
- **Sync timestamp:** `> **Last synced to Confluence**: YYYY-MM-DD` injected into the Confluence copy only. Local files are untouched.
- **Versioning:** Page versions auto-increment. The script GETs the current version before PUTting.

@@ -0,0 +1,529 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.11"
# dependencies = ["markdown", "requests", "truststore"]
# ///
"""Sync markdown docs to Confluence Cloud.
Reads a .confluence-mapping.json file, syncs local markdown files
to Confluence pages via REST API v1, and updates the mapping file.
Run with: uv run scripts/sync_confluence.py [options]
"""
import argparse
import base64
import json
import os
import re
import subprocess
import sys
import time
from datetime import date
from pathlib import Path
from urllib.parse import quote
import truststore
truststore.inject_into_ssl()
import markdown
import requests
# ---------------------------------------------------------------------------
# Path discovery
# ---------------------------------------------------------------------------
def find_repo_root() -> Path | None:
"""Walk up from CWD to find a git repo root."""
try:
result = subprocess.run(
["git", "rev-parse", "--show-toplevel"],
capture_output=True, text=True, check=True,
)
return Path(result.stdout.strip())
except (subprocess.CalledProcessError, FileNotFoundError):
return None
def find_mapping_file(start: Path) -> Path | None:
"""Search for .confluence-mapping.json walking up from *start*.
Checks <dir>/docs/.confluence-mapping.json and
<dir>/.confluence-mapping.json at each level.
"""
current = start.resolve()
while True:
for candidate in (
current / "docs" / ".confluence-mapping.json",
current / ".confluence-mapping.json",
):
if candidate.is_file():
return candidate
parent = current.parent
if parent == current:
break
current = parent
return None
# ---------------------------------------------------------------------------
# Mapping file helpers
# ---------------------------------------------------------------------------
def load_mapping(path: Path) -> dict:
"""Load and lightly validate the mapping file."""
data = json.loads(path.read_text(encoding="utf-8"))
for key in ("confluence", "parentPage"):
if key not in data:
raise ValueError(f"Mapping file missing required key: '{key}'")
data.setdefault("pages", {})
data.setdefault("unmapped", [])
return data
def save_mapping(path: Path, data: dict) -> None:
"""Write the mapping file with stable formatting."""
path.write_text(json.dumps(data, indent=2) + "\n", encoding="utf-8")
# ---------------------------------------------------------------------------
# Markdown → Confluence storage format
# ---------------------------------------------------------------------------
MD_EXTENSIONS = [
"markdown.extensions.tables",
"markdown.extensions.fenced_code",
"markdown.extensions.toc",
"markdown.extensions.md_in_html",
"markdown.extensions.sane_lists",
]
MD_EXTENSION_CONFIGS: dict = {
"markdown.extensions.toc": {"permalink": False},
}
def md_to_storage(md_content: str) -> str:
"""Convert markdown to Confluence storage-format XHTML."""
return markdown.markdown(
md_content,
extensions=MD_EXTENSIONS,
extension_configs=MD_EXTENSION_CONFIGS,
output_format="xhtml",
)
# ---------------------------------------------------------------------------
# Title helpers
# ---------------------------------------------------------------------------
def extract_h1(md_content: str) -> str | None:
"""Return the first ``# Heading`` from *md_content*, or None."""
for line in md_content.splitlines():
stripped = line.strip()
if stripped.startswith("# ") and not stripped.startswith("## "):
return stripped[2:].strip()
return None
def title_from_filename(filename: str) -> str:
"""Derive a human-readable title from a kebab-case filename."""
stem = filename.removesuffix(".md")
words = stem.split("-")
# Capitalise each word, then fix known acronyms/terms
title = " ".join(w.capitalize() for w in words)
acronyms = {
"Ats": "ATS", "Api": "API", "Ms": "MS", "Unie": "UNIE",
"Id": "ID", "Opa": "OPA", "Zi": "ZI", "Cql": "CQL",
"Jql": "JQL", "Sdk": "SDK", "Oauth": "OAuth", "Cdn": "CDN",
"Aws": "AWS", "Gcp": "GCP", "Grpc": "gRPC",
}
for wrong, right in acronyms.items():
title = re.sub(rf"\b{wrong}\b", right, title)
return title
def resolve_title(filename: str, md_content: str, parent_title: str | None) -> str:
"""Pick the best page title for a file.
Priority: H1 from markdown > filename-derived > raw filename.
If *parent_title* is set, prefix with ``<parent>: <title>``.
"""
title = extract_h1(md_content) or title_from_filename(filename)
if parent_title:
# Avoid double-prefixing if the title already starts with parent
if not title.startswith(parent_title):
title = f"{parent_title}: {title}"
return title
# ---------------------------------------------------------------------------
# Sync timestamp injection (Confluence copy only — local files untouched)
# ---------------------------------------------------------------------------
_SYNC_RE = re.compile(r"> \*\*Last synced to Confluence\*\*:.*")
def inject_sync_timestamp(md_content: str, sync_date: str) -> str:
"""Add or update the sync-timestamp callout in *md_content*."""
stamp = f"> **Last synced to Confluence**: {sync_date}"
if _SYNC_RE.search(md_content):
return _SYNC_RE.sub(stamp, md_content)
lines = md_content.split("\n")
insert_at = 0
# After YAML front-matter
if lines and lines[0].strip() == "---":
for i, line in enumerate(lines[1:], 1):
if line.strip() == "---":
insert_at = i + 1
break
# Or after first H1
elif lines and lines[0].startswith("# "):
insert_at = 1
lines.insert(insert_at, "")
lines.insert(insert_at + 1, stamp)
lines.insert(insert_at + 2, "")
return "\n".join(lines)
# ---------------------------------------------------------------------------
# Confluence REST API v1 client
# ---------------------------------------------------------------------------
class ConfluenceClient:
"""Thin wrapper around the Confluence Cloud REST API v1.
Uses Basic Auth (email + API token) with X-Atlassian-Token header,
which is required by some Confluence Cloud instances that block v2
or enforce XSRF protection.
"""
def __init__(self, base_url: str, email: str, api_token: str):
self.base_url = base_url.rstrip("/")
self.session = requests.Session()
cred = base64.b64encode(f"{email}:{api_token}".encode()).decode()
self.session.headers.update({
"Authorization": f"Basic {cred}",
"X-Atlassian-Token": "no-check",
"Content-Type": "application/json",
"Accept": "application/json",
})
# -- low-level helpers ---------------------------------------------------
def _request(self, method: str, path: str, **kwargs) -> requests.Response:
"""Make a request with basic retry on 429 / 5xx."""
url = f"{self.base_url}{path}"
for attempt in range(4):
resp = self.session.request(method, url, **kwargs)
if resp.status_code == 429:
wait = int(resp.headers.get("Retry-After", 5))
print(f" Rate-limited, waiting {wait}s …")
time.sleep(wait)
continue
if resp.status_code >= 500 and attempt < 3:
time.sleep(2 ** attempt)
continue
resp.raise_for_status()
return resp
resp.raise_for_status() # final attempt — let it raise
return resp # unreachable, keeps type-checkers happy
# -- page operations -----------------------------------------------------
def get_page(self, page_id: str) -> dict:
"""Fetch page metadata including current version number."""
return self._request(
"GET", f"/rest/api/content/{page_id}",
params={"expand": "version"},
).json()
def create_page(
self, *, space_key: str, parent_id: str, title: str, body: str,
) -> dict:
payload = {
"type": "page",
"title": title,
"space": {"key": space_key},
"ancestors": [{"id": parent_id}],
"body": {
"storage": {
"value": body,
"representation": "storage",
},
},
}
return self._request("POST", "/rest/api/content", json=payload).json()
def update_page(
self, *, page_id: str, title: str, body: str, version_msg: str = "",
) -> dict:
current = self.get_page(page_id)
next_ver = current["version"]["number"] + 1
payload = {
"type": "page",
"title": title,
"body": {
"storage": {
"value": body,
"representation": "storage",
},
},
"version": {"number": next_ver, "message": version_msg},
}
return self._request(
"PUT", f"/rest/api/content/{page_id}", json=payload,
).json()
# ---------------------------------------------------------------------------
# URL builder
# ---------------------------------------------------------------------------
def page_url(base_url: str, space_key: str, page_id: str, title: str) -> str:
"""Build a human-friendly Confluence page URL."""
safe = quote(title.replace(" ", "+"), safe="+")
return f"{base_url}/spaces/{space_key}/pages/{page_id}/{safe}"
# ---------------------------------------------------------------------------
# Core sync logic
# ---------------------------------------------------------------------------
def sync_file(
client: ConfluenceClient,
md_path: Path,
mapping: dict,
*,
dry_run: bool = False,
) -> dict | None:
"""Sync one markdown file. Returns page-info dict or None on failure."""
filename = md_path.name
cfg = mapping["confluence"]
parent = mapping["parentPage"]
pages = mapping["pages"]
existing = pages.get(filename)
today = date.today().isoformat()
md_content = md_path.read_text(encoding="utf-8")
md_for_confluence = inject_sync_timestamp(md_content, today)
storage_body = md_to_storage(md_for_confluence)
# Resolve title — keep existing title for already-mapped pages
if existing:
title = existing["title"]
else:
title = resolve_title(filename, md_content, parent.get("title"))
base = cfg.get("baseUrl", "")
space_key = cfg.get("spaceKey", "")
# -- update existing page ------------------------------------------------
if existing:
pid = existing["pageId"]
if dry_run:
            print(f"  [dry-run] update {filename} (page {pid})")
return existing
try:
client.update_page(
page_id=pid,
title=title,
body=storage_body,
version_msg=f"Synced from local docs {today}",
)
url = page_url(base, space_key, pid, title)
            print(f"  updated {filename}")
return {"pageId": pid, "title": title, "url": url}
except requests.HTTPError as exc:
_report_error("update", filename, exc)
return None
# -- create new page -----------------------------------------------------
if dry_run:
        print(f"  [dry-run] create {filename} → {title}")
return {"pageId": "DRY_RUN", "title": title, "url": ""}
try:
result = client.create_page(
space_key=cfg["spaceKey"],
parent_id=parent["id"],
title=title,
body=storage_body,
)
pid = result["id"]
url = page_url(base, space_key, pid, title)
        print(f"  created {filename} (page {pid})")
return {"pageId": pid, "title": title, "url": url}
except requests.HTTPError as exc:
_report_error("create", filename, exc)
return None
def _report_error(verb: str, filename: str, exc: requests.HTTPError) -> None:
    print(f"  FAILED {verb} {filename}: {exc}")
if exc.response is not None:
body = exc.response.text[:500]
print(f" {body}")
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def build_parser() -> argparse.ArgumentParser:
p = argparse.ArgumentParser(
description="Sync markdown docs to Confluence Cloud.",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
environment variables
CONFLUENCE_EMAIL Atlassian account email
CONFLUENCE_API_TOKEN_WRITE Atlassian API token (write-scoped)
CONFLUENCE_API_TOKEN Fallback if _WRITE is not set
CONFLUENCE_BASE_URL Wiki base URL (overrides mapping file)
examples
%(prog)s # sync all docs
%(prog)s --dry-run # preview without changes
%(prog)s --file docs/my-doc.md # sync one file
%(prog)s --update-only # only update existing pages
""",
)
p.add_argument("--docs-dir", type=Path,
help="Docs directory (default: inferred from mapping file location)")
p.add_argument("--mapping-file", type=Path,
help="Path to .confluence-mapping.json (default: auto-detect)")
p.add_argument("--file", type=Path, dest="single_file",
help="Sync a single file instead of all docs")
p.add_argument("--dry-run", action="store_true",
help="Show what would happen without making API calls")
p.add_argument("--create-only", action="store_true",
help="Only create new pages (skip existing)")
p.add_argument("--update-only", action="store_true",
help="Only update existing pages (skip new)")
return p
def resolve_base_url(cfg: dict) -> str | None:
"""Derive the Confluence base URL from env or mapping config."""
from_env = os.environ.get("CONFLUENCE_BASE_URL")
if from_env:
return from_env.rstrip("/")
from_cfg = cfg.get("baseUrl")
if from_cfg:
return from_cfg.rstrip("/")
# cloudId might be a domain like "discoverorg.atlassian.net"
cloud_id = cfg.get("cloudId", "")
if "." in cloud_id:
return f"https://{cloud_id}/wiki"
return None
def main() -> None:
parser = build_parser()
args = parser.parse_args()
# -- discover paths ------------------------------------------------------
repo_root = find_repo_root() or Path.cwd()
if args.mapping_file:
mapping_path = args.mapping_file.resolve()
else:
mapping_path = find_mapping_file(repo_root)
if not mapping_path or not mapping_path.is_file():
print("ERROR: cannot find .confluence-mapping.json")
print(" Pass --mapping-file or run from within the project.")
sys.exit(1)
docs_dir = args.docs_dir.resolve() if args.docs_dir else mapping_path.parent
print(f"mapping: {mapping_path}")
print(f"docs dir: {docs_dir}")
# -- load config ---------------------------------------------------------
mapping = load_mapping(mapping_path)
cfg = mapping["confluence"]
email = os.environ.get("CONFLUENCE_EMAIL", "")
# Prefer write-scoped token, fall back to general token
token = (os.environ.get("CONFLUENCE_API_TOKEN_WRITE")
or os.environ.get("CONFLUENCE_API_TOKEN", ""))
base_url = resolve_base_url(cfg)
if not email or not token:
print("ERROR: CONFLUENCE_EMAIL and CONFLUENCE_API_TOKEN_WRITE must be set.")
print(" https://id.atlassian.com/manage-profile/security/api-tokens")
sys.exit(1)
if not base_url:
print("ERROR: cannot determine Confluence base URL.")
print(" Set CONFLUENCE_BASE_URL or add baseUrl to the mapping file.")
sys.exit(1)
# Ensure baseUrl is persisted so page_url() works
cfg.setdefault("baseUrl", base_url)
client = ConfluenceClient(base_url, email, token)
# -- collect files -------------------------------------------------------
if args.single_file:
target = args.single_file.resolve()
if not target.is_file():
print(f"ERROR: file not found: {target}")
sys.exit(1)
md_files = [target]
else:
md_files = sorted(
p for p in docs_dir.glob("*.md")
if not p.name.startswith(".")
)
if not md_files:
print("No markdown files found.")
sys.exit(0)
pages = mapping["pages"]
if args.create_only:
md_files = [f for f in md_files if f.name not in pages]
elif args.update_only:
md_files = [f for f in md_files if f.name in pages]
total = len(md_files)
mode = "dry-run" if args.dry_run else "live"
print(f"\n{total} file(s) to sync ({mode})\n")
# -- sync ----------------------------------------------------------------
created = updated = failed = 0
for i, md_path in enumerate(md_files, 1):
filename = md_path.name
is_new = filename not in pages
        print(f"[{i}/{total}] {filename}")
result = sync_file(client, md_path, mapping, dry_run=args.dry_run)
if result:
if not args.dry_run:
pages[filename] = result
if is_new:
created += 1
else:
updated += 1
else:
failed += 1
# -- persist mapping -----------------------------------------------------
if not args.dry_run and (created or updated):
mapping["lastSynced"] = date.today().isoformat()
# Clean synced files out of the unmapped list
synced = {f.name for f in md_files}
mapping["unmapped"] = [u for u in mapping.get("unmapped", []) if u not in synced]
save_mapping(mapping_path, mapping)
        print("\nmapping file updated")
# -- summary -------------------------------------------------------------
print(f"\ndone: {created} created · {updated} updated · {failed} failed")
if failed:
sys.exit(1)
if __name__ == "__main__":
main()