Agent skill
evidence-draft
Create per-subsection evidence packs (NO PROSE): claim candidates, concrete comparisons, evaluation protocol, limitations, plus citation-backed evidence snippets with provenance. **Trigger**: evidence draft, evidence pack, claim candidates, concrete comparisons, evidence snippets, provenance, 证据草稿, 证据包, 可引用事实. **Use when**: `outline/subsection_briefs.jsonl` exists and you want evidence-first section drafting where every paragraph can be backed by traceable citations/snippets. **Skip if**: `outline/evidence_drafts.jsonl` already exists and is refined (no placeholders; >=4 grounded comparisons per subsection in survey mode; `blocking_missing` empty). **Network**: none required (evidence quality improves when abstracts/fulltext are available). **Guardrail**: NO PROSE; do not invent facts; only use citation keys that exist in `citations/ref.bib`.
Install this agent skill in your project:
npx add-skill https://github.com/WILLOSCAR/research-units-pipeline-skills/tree/main/.codex/skills/evidence-draft
SKILL.md
Evidence Draft
Build deterministic outline/evidence_drafts.jsonl packs from briefs + notes + optional evidence bindings.
Compatibility mode is active: this migration preserves the existing JSONL contract while moving evidence-quality policy, sparse-evidence routing, and evaluation-anchor rules into references/ and assets/.
Load Order
Always read:
- references/overview.md
- references/evidence_quality_policy.md
Read by task:
- references/block_vs_downgrade.md: when deciding whether thin evidence should block drafting or only downgrade claim strength
- references/evaluation_anchor_rules.md: when evaluation tokens, protocol context, or numeric claims are weak
- references/examples_sparse_evidence.md: for evidence-thin pack calibration
- references/source_text_hygiene.md: when paper self-narration or generic result wrappers are leaking into pack snippets / claim candidates
Machine-readable assets:
- assets/evidence_pack_schema.json
- assets/evidence_policy.json
- assets/source_text_hygiene.json
Inputs
Required:
- outline/subsection_briefs.jsonl
- papers/paper_notes.jsonl
- citations/ref.bib
Optional but recommended:
- papers/evidence_bank.jsonl
- outline/evidence_bindings.jsonl
Outputs
Keep the current output contract:
- outline/evidence_drafts.jsonl
- optional human-readable mirrors under outline/evidence_drafts/
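To make the output contract concrete, here is a hedged sketch of what one JSONL record might look like. The authoritative field list is assets/evidence_pack_schema.json; only the fields named elsewhere in this document (claim_candidates, concrete_comparisons, blocking_missing, downgrade_signals, verify_fields) are grounded, and the nested shapes and sample values are illustrative assumptions.

```python
import json

# Hypothetical minimal evidence-pack record, one JSONL line per subsection.
pack = {
    "subsection_id": "2.1",
    "claim_candidates": [
        {"text": "Method A outperforms method B on benchmark X.",
         "snippet_ids": ["s1"], "citation_keys": ["smith2024method"]},
    ],
    "concrete_comparisons": [],
    "evaluation_protocol": [],
    "limitations": [],
    "snippets": [
        {"id": "s1", "source": "smith2024method",
         "provenance": "abstract", "text": "..."},
    ],
    "blocking_missing": [],
    "downgrade_signals": [],
    "verify_fields": [],
}

line = json.dumps(pack, ensure_ascii=False)  # serialize as one JSONL line
print(json.loads(line)["subsection_id"])
```

Note that every claim candidate points back to snippet IDs and citation keys, which is what makes the pack traceable downstream.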
Script Boundary
Use scripts/run.py only for:
- deterministic joins across briefs / notes / evidence bank / bindings
- snippet extraction and provenance assembly
- policy-driven materialization of blocking_missing / downgrade_signals / verify_fields
- pack validation and Markdown mirror generation
Do not treat run.py as the place for:
- filler bullets that make thin evidence look complete
- hidden sparse-evidence judgment that is not inspectable from references/ and assets/
- reader-facing narrative prose
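The block-vs-downgrade boundary above can be sketched in a few lines. This is a hedged illustration only: the real thresholds and signal names live in assets/evidence_policy.json and references/block_vs_downgrade.md, and the function name, field names, and threshold values here are assumptions for illustration.

```python
# Illustrative sparse-evidence routing: block only when there is nothing to
# draft from; otherwise emit downgrade signals instead of filler bullets.
# Thresholds and signal strings are hypothetical stand-ins for the policy asset.

def route_sparse_evidence(pack, min_snippets=2, min_comparisons=4):
    """Return (blocking_missing, downgrade_signals) for one subsection pack."""
    blocking, downgrade = [], []
    if not pack.get("snippets"):
        blocking.append("no_snippets")           # nothing to draft from: block
    elif len(pack["snippets"]) < min_snippets:
        downgrade.append("thin_snippet_pool")    # draft, but weaken claim strength
    if len(pack.get("concrete_comparisons", [])) < min_comparisons:
        downgrade.append("insufficient_comparisons")
    return blocking, downgrade

blocking, downgrade = route_sparse_evidence(
    {"snippets": [{"id": "s1"}], "concrete_comparisons": []}
)
print(blocking, downgrade)
```

The point of keeping this logic policy-driven is that a reviewer can inspect the thresholds without reading Python.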
Output Shape Rules
Keep these stable:
- preserve the existing top-level pack fields already used by downstream survey pipelines
- claim_candidates must remain snippet-derived
- concrete_comparisons must remain genuinely two-sided; if one cluster has no usable highlight, drop the card and surface thin evidence upstream instead of fabricating an A-vs-B contrast
- snippet sampling should stay cluster-aware: when a subsection has explicit clusters, evidence selection should avoid collapsing onto one route just because its abstracts contain louder result sentences
- sparse evidence should surface as explicit blockers / downgrade signals / verify fields, not filler bullets
- citation keys must remain constrained to citations/ref.bib
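The citation-key constraint can be enforced with a small guard. A minimal sketch, assuming packs carry `citation_keys` lists on their claim candidates; the regex is a rough BibTeX-entry matcher, not a full parser, and the sample data is hypothetical.

```python
import re

# Rough matcher for "@entrytype{key," headers in a .bib file.
BIB_KEY = re.compile(r"@\w+\{\s*([^,\s]+)\s*,")

def bib_keys(bib_text):
    """Collect the set of citation keys declared in a BibTeX source."""
    return set(BIB_KEY.findall(bib_text))

def unknown_keys(pack, known):
    """Return citation keys used by a pack that are absent from ref.bib."""
    used = set()
    for cand in pack.get("claim_candidates", []):
        used.update(cand.get("citation_keys", []))
    return used - known

bib = "@article{smith2024method,\n  title={...}\n}\n"
pack = {"claim_candidates": [{"citation_keys": ["smith2024method", "ghost2020"]}]}
print(sorted(unknown_keys(pack, bib_keys(bib))))
```

Any key reported here should block the pack rather than be silently dropped, so fabricated citations cannot reach downstream drafting.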
Compatibility Notes
Current mode is reference-first with deterministic compatibility:
- assets/evidence_policy.json defines pack thresholds and sparse-evidence routing
- assets/evidence_pack_schema.json documents and validates the stable pack shape
- scripts/run.py still materializes the existing JSONL + Markdown outputs, but no longer pads sparse sections with generic caution prose
Quick Start
python .codex/skills/evidence-draft/scripts/run.py --workspace <workspace_dir>
Execution Notes
When running in compatibility mode, scripts/run.py currently reads:
- outline/subsection_briefs.jsonl
- papers/paper_notes.jsonl
- citations/ref.bib
- optionally papers/evidence_bank.jsonl and outline/evidence_bindings.jsonl
- assets/evidence_policy.json and assets/evidence_pack_schema.json
Script
All Options
- --workspace <dir>
- --unit-id <id>
- --inputs <path1;path2>
- --outputs <path1;path2>
- --checkpoint <C*>
Examples
python .codex/skills/evidence-draft/scripts/run.py --workspace workspaces/<ws>
Troubleshooting
- If packs look complete despite thin evidence, inspect assets/evidence_policy.json and references/block_vs_downgrade.md before changing Python.
- If evaluation bullets are generic, inspect references/evaluation_anchor_rules.md and the policy asset.
- If claims are strong but evidence is abstract/title-only, downgrade via downgrade_signals and verify_fields rather than adding narrative caveats.