Install this agent skill in your project:

```shell
npx add-skill https://github.com/majiayu000/claude-skill-registry/tree/main/skills/testing/literature-review-jenniferied-everything-machine
```
# Literature Reviewer Skill
## Invocation

```
/literature-review [phase]
```

Where `[phase]` is one of:

- `required-reading` - Process professor-provided texts
- `explore` - Exploratory searches to discover the field
- `define-domains` - Define domains based on exploration
- `search [domain]` - Systematic search for a specific domain
- `critique [domain]` - Run critique loop on domain summary
- `synthesize` - Write unified review chapter
- `status` - Show current progress
## Workflow Overview
### Phase 1: Required Reading

Input: 7 professor-provided texts in `academic/texts/`
Output: Understanding of what Artistic Research is

For each text:

- Read and extract key concepts
- Add to `../submission/references/bibliography.bib`
- Document: What does this tell me about AR?
- Document: How does this relate to the Kepler project?
- Update `checkpoint.md`
### Phase 2: Exploration

Input: Search queries from CLAUDE.md
Output: Understanding of the AR field, citation landscape

Steps:

- Run exploratory searches across venues
- Document findings (topics, citation counts, gaps)
- Identify emerging themes/clusters
- Note typical citation counts in the AR field
- Update `checkpoint.md` with findings
### Phase 3: Domain Definition

Input: Exploration findings
Output: Domain definitions, calibrated tier thresholds

Steps:

- Review exploration notes
- Identify 3-5 coherent domains
- Calibrate citation tier thresholds for the AR field
- Update CLAUDE.md with domain definitions
- Update `todo.md` with domain-specific tasks
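Calibrating thresholds can be sketched as a quartile split over the citation counts gathered during exploration, so the tiers adapt to the field's typically low citation numbers. This is an illustration only; the function name and the quartile rule are assumptions, not part of the skill.

```python
# Hypothetical sketch: derive citation tier thresholds from the citation
# counts collected in Phase 2, using quartiles rather than fixed cut-offs.
from statistics import quantiles

def calibrate_tiers(citation_counts: list[int]) -> dict[str, int]:
    """Return minimum citation counts for tiers 1-3 (tier 4 = everything else)."""
    q1, q2, q3 = quantiles(sorted(citation_counts), n=4)
    return {"tier1_min": int(q3), "tier2_min": int(q2), "tier3_min": int(q1)}
```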
### Phase 4: Systematic Search (per domain)

Input: Domain definition, search terms
Output: Domain summary, BibTeX entries

Steps:

- Load context from `checkpoint.md`
- Search using MCPs (Semantic Scholar, OpenAlex)
- Search AR venues (JAR, PARSE, VIS, RC)
- Triage papers by tier
- Write domain summary (1500-2500 words)
- Export BibTeX to `data/exports/domain_*.bib`
- Update `checkpoint.md`
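The triage step could then map each paper's citation count to a tier using the thresholds calibrated in Phase 3. A minimal sketch, assuming the threshold field names from the calibration step (not an API the skill defines):

```python
# Illustrative only: assign a paper to tier 1-4 by citation count,
# given calibrated minimums for tiers 1-3.
def triage(citation_count: int, thresholds: dict[str, int]) -> int:
    if citation_count >= thresholds["tier1_min"]:
        return 1
    if citation_count >= thresholds["tier2_min"]:
        return 2
    if citation_count >= thresholds["tier3_min"]:
        return 3
    return 4
```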
### Phase 5: Critique Loop

Input: Domain summary draft
Output: Revised summary, critique log

Steps:

- Grade summary A-F on:
  - Completeness (all tiers covered?)
  - Coherence (logical flow?)
  - Relevance (connects to Kepler project?)
  - Citation quality (tier distribution appropriate?)
- Log critique to `reviews.log`
- Revise based on feedback
- Re-grade until B+ or better
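The control flow of this loop can be sketched as follows. `grade` and `revise` stand in for the model-driven critique and rewrite steps, and the intermediate grades (C+, B-, …) are an assumption, since the rubric itself lists only A-F:

```python
# Sketch of the revise-until-passing control flow.
GRADES = ["F", "D", "C", "C+", "B-", "B", "B+", "A-", "A"]  # ascending order

def passes(grade: str, minimum: str = "B+") -> bool:
    """True if `grade` meets or exceeds `minimum`."""
    return GRADES.index(grade) >= GRADES.index(minimum)

def critique_loop(draft, grade, revise, max_rounds=5):
    """Grade the draft; revise and re-grade until it passes or rounds run out."""
    for _ in range(max_rounds):
        g = grade(draft)
        if passes(g):
            break
        draft = revise(draft, g)
    return draft, g
```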
### Phase 6: Synthesis

Input: All domain summaries
Output: Unified review chapter

Steps:

- Read all domain summaries
- Write unified `systematic_review_chapter.md`
- Update `../submission/docs/02-literaturrecherche.md`
- Merge BibTeX into `../submission/references/bibliography.bib`
- Build PDF: `cd ../submission && make literatur`
- Final critique loop on unified chapter
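Merging the per-domain exports without duplicating citation keys might look like the sketch below. A real pipeline would use a proper BibTeX parser (e.g. bibtexparser); this split-on-entry-header version is only an illustration and keeps the first entry seen per key:

```python
# Hedged sketch: concatenate .bib files, dropping entries whose citation
# key has already been seen. Splits naively on lines starting with "@".
import re

def merge_bibtex(*bib_texts: str) -> str:
    seen, merged = set(), []
    for text in bib_texts:
        for entry in re.split(r"\n(?=@)", text.strip()):
            m = re.match(r"@\w+\{([^,]+),", entry)
            if m and m.group(1) not in seen:
                seen.add(m.group(1))
                merged.append(entry.strip())
    return "\n\n".join(merged)
```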
## Critique Grading Rubric
| Grade | Criteria |
|---|---|
| A | Comprehensive, well-structured, all tiers covered, clear relevance |
| B | Good coverage, minor gaps, coherent structure |
| C | Adequate but missing key papers or weak connections |
| D | Significant gaps, poor structure, unclear relevance |
| F | Incomplete, major issues, requires substantial revision |
Minimum passing grade: B
## File Outputs

| File | Content |
|---|---|
| `checkpoint.md` | Session state after each phase |
| `todo.md` | Updated task list |
| `reviews.log` | All critique feedback |
| `data/exports/*.bib` | BibTeX per domain |
| `drafts/*.md` | Domain summaries |
## Integration Points

Final outputs go to:

- `../submission/docs/02-literaturrecherche.md` - Chapter content
- `../submission/references/bibliography.bib` - Citations

Build command:

```bash
cd ../submission && make literatur
```
## Session Recovery

If context fills or the session restarts:

- Read `checkpoint.md` first
- Resume from the last incomplete phase/domain
- Continue the workflow from that point