# phoenix-truth-case-orchestrator

Agent skill: end-to-end truth-case orchestration for the Phoenix VC fund-modeling platform. Use when running `tests/truth-cases/runner.test.ts`, computing module-level pass rates, updating `docs/phase0-validation-report.md` and `docs/failure-triage.md`, or deciding between Phase 1A/1B/1C.

Install this agent skill in your project:

```
npx add-skill https://github.com/majiayu000/claude-skill-registry/tree/main/skills/testing/phoenix-truth-case-orchestrator-nikhillinit-updog-restore-644617a6
```

## SKILL.md
# Phoenix Truth-Case Orchestrator

You coordinate all truth-case work for the Phoenix VC fund model. The Phoenix Execution Plan defines 6 calculation modules and 119 truth-case scenarios; treat those JSON files plus the production code as the joint spec.
## When to Use

Invoke this skill when:

- Running or modifying `tests/truth-cases/runner.test.ts`
- Updating module-level pass rates or Phase 0 / Phase 1A reports
- Classifying failures as CODE BUG / TRUTH CASE ERROR / MISSING FEATURE
- Making path decisions (Phase 1A vs 1B vs 1C) based on gate thresholds

Do not use this skill for:

- Fixing calculation precision (use `phoenix-precision-guard`)
- Changing core waterfall semantics (use `phoenix-waterfall-ledger-semantics`)
## Files You Own

- Truth-case JSONs:
  - `docs/xirr.truth-cases.json`
  - `docs/waterfall-tier.truth-cases.json`
  - `docs/waterfall-ledger.truth-cases.json`
  - `docs/fees.truth-cases.json`
  - `docs/capital-allocation.truth-cases.json`
  - `docs/exit-recycling.truth-cases.json`
- Runner + helpers:
  - `tests/truth-cases/runner.test.ts` (build via `/dev` if missing)
  - `tests/truth-cases/validation-helpers.ts` (build via `/dev` if missing)
- Reports:
  - `docs/phase0-validation-report.md`
  - `docs/failure-triage.md` (create if missing)
## Core Workflow

### 1. Run the Truth-Case Suite

Preferred:

```
/test-smart truth-cases
```

Fallback:

```bash
npm test -- tests/truth-cases/runner.test.ts --run --reporter=verbose
```

Capture:

- Per-module pass/fail counts
- Overall pass rate
- List of failing scenarios and error messages
### 2. Compute Module-Level Pass Rates

For each module:

- Count scenarios from the corresponding `docs/*.truth-cases.json`
- Count passing vs. failing tests from the runner output
- Update the Module-Level Pass Rates table in `docs/phase0-validation-report.md` with:
  - `X / Total (Y%)`
  - PASS/FAIL relative to the 1A/1B/1C thresholds
  - Recommended path per module
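The `X / Total (Y%)` cell can be produced mechanically. A minimal sketch in TypeScript, assuming a simple per-module result shape (the field names here are illustrative, not the actual Phoenix types):

```typescript
// One module's tallies: total comes from docs/<module>.truth-cases.json,
// passed comes from the runner output.
interface ModuleResult {
  module: string; // e.g. "xirr"
  passed: number; // passing scenarios
  total: number;  // scenario count in the truth-case JSON
}

// Render one markdown table cell row as "| module | X / Total (Y%) |",
// with the percentage rounded to one decimal place.
function passRateRow({ module, passed, total }: ModuleResult): string {
  const pct = total === 0 ? 0 : Math.round((passed / total) * 1000) / 10;
  return `| ${module} | ${passed} / ${total} (${pct}%) |`;
}
```

For example, `passRateRow({ module: "xirr", passed: 18, total: 20 })` yields `| xirr | 18 / 20 (90%) |`, which can be pasted directly into the report table.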
### 3. Classify Failures

For every failing scenario:

- Read the error message
- Inspect the production code implementing that module
- Inspect the truth-case JSON

Classify each as one of:

- CODE BUG
- TRUTH CASE ERROR
- MISSING FEATURE

Record each scenario under the appropriate heading in `docs/failure-triage.md`.
### 4. Fix Truth-Case Errors First

If you determine a scenario is a TRUTH CASE ERROR:

1. Fix the JSON
2. Re-run only that scenario:

   ```bash
   npm test -- tests/truth-cases/runner.test.ts -t "<scenario-name>"
   ```

3. Confirm it passes and doesn't break other scenarios

Only after the JSON is correct should you propose code changes.
### 5. Apply Gate Logic for Phase Routing

Using the thresholds defined in the Phoenix Execution Plan, decide between:

- Phase 1A (cleanup), 1B (bug fix), or 1C (rebuild)

Update `docs/phase0-validation-report.md` with:

- Module-level pass rates
- Overall test pass rate
- Chosen path (1A/1B/1C) and rationale
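The routing decision reduces to comparing a module's pass rate against two cutoffs. A sketch of that gate logic, with placeholder thresholds — the real numbers live in the Phoenix Execution Plan and are not stated here, so `0.95` and `0.70` below are assumptions for illustration only:

```typescript
type Path = "1A" | "1B" | "1C";

// Route a module to a phase based on its truth-case pass rate (0..1).
// gate1A and gate1B are HYPOTHETICAL defaults; substitute the actual
// thresholds from the Phoenix Execution Plan.
function routePhase(passRate: number, gate1A = 0.95, gate1B = 0.7): Path {
  if (passRate >= gate1A) return "1A"; // cleanup: module essentially correct
  if (passRate >= gate1B) return "1B"; // bug fix: targeted repairs
  return "1C";                         // rebuild: too broken to patch
}
```

With these placeholder gates, a module at 98% routes to 1A, 80% to 1B, and 30% to 1C; record the chosen path and its rationale in the report.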
## Numeric & Structural Semantics

When working in `validation-helpers.ts` or the runner:

- Use `toBeCloseTo(expected, 6)` for all numeric comparisons
- Ignore `notes` fields in expected JSON
- Treat `null` in expected `gpClawback` as "expect `undefined` in code"
- For min/max ranges (e.g., `recycled_min`/`recycled_max`), assert both `actual >= min` and `actual <= max`
- Support both aggregate totals and row-level assertions (e.g., ledger rows)
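These comparison rules can be stated as plain predicates. A minimal sketch of what `validation-helpers.ts` might encode — function names here are illustrative, and `closeTo` mirrors the `toBeCloseTo(expected, 6)` tolerance (agree to within half of 10^-digits, which matches Jest/Vitest's documented behavior):

```typescript
// Numeric comparison at 6 digits of precision, like toBeCloseTo(expected, 6).
function closeTo(actual: number, expected: number, digits = 6): boolean {
  return Math.abs(actual - expected) < 0.5 * 10 ** -digits;
}

// null in expected gpClawback means the code should produce undefined.
function matchesClawback(
  actual: number | undefined,
  expected: number | null,
): boolean {
  if (expected === null) return actual === undefined;
  return actual !== undefined && closeTo(actual, expected);
}

// min/max range assertion (e.g., recycled_min / recycled_max): both bounds
// are inclusive.
function withinRange(actual: number, min: number, max: number): boolean {
  return actual >= min && actual <= max;
}
```

Aggregate totals and row-level ledger assertions can both be built from these primitives by applying them per field or per row.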
## Invariants

- Never reduce the overall truth-case pass rate without explanation
- Never change truth-case semantics silently: update both the JSON and the docs
- Always re-run the truth-case suite after modifying code or JSON