Agent skills
Skills you can use with AI coding agents, indexed from public GitHub repositories.
-
distributed-llm-pretraining-torchtitan
Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism (FSDP2, TP, PP, CP). Use when pretraining Llama 3.1, DeepSeek V3, or custom models at scale from 8 to 512+ GPUs with Float8, torch.compile, and distributed checkpointing.
Orchestra-Research/AI-Research-SKILLs 6,644
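A minimal sketch of the sharded-data-parallel primitive torchtitan composes into its 4D parallelism, using PyTorch's generic FSDP wrapper rather than torchtitan's own entry points (torchtitan itself is driven by TOML configs and launched with torchrun):

```python
# Illustrative only: plain FSDP sharding, not torchtitan's FSDP2/TP/PP/CP stack.
# Launch with: torchrun --nproc_per_node=8 train.py
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")               # torchrun supplies rank/world size
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.TransformerEncoderLayer(d_model=1024, nhead=16).cuda()
model = FSDP(model)                           # parameters sharded across ranks

opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
x = torch.randn(8, 128, 1024, device="cuda")
loss = model(x).pow(2).mean()                 # stand-in loss for illustration
loss.backward()
opt.step()
```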
-
implementing-llms-litgpt
Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.
Orchestra-Research/AI-Research-SKILLs 6,644
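A minimal sketch of LitGPT's high-level Python API, assuming `pip install litgpt`; the checkpoint name is an example and is downloaded on first use:

```python
from litgpt import LLM

# Load one of the 20+ supported architectures by its Hugging Face identifier.
llm = LLM.load("microsoft/phi-2")
text = llm.generate("What do llamas eat?", max_new_tokens=50)
print(text)
```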
-
calendar
Interact with Apple Calendar via AppleScript. Use when the user asks to check calendar, view events, create events, manage schedule, find free time, or list calendars. Triggers include "my calendar", "my schedule", "calendar events", "create event", "add to calendar", "what's on my calendar", "free time", "available slots", "upcoming events", "today's events". Requires macOS with Calendar.app.
jacobrask/claude-skills 4
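A minimal sketch of the AppleScript bridge this kind of skill relies on, shelling out to `osascript` from Python to list calendar names (macOS only; the skill's own scripts may be more elaborate):

```python
import subprocess

SCRIPT = '''
tell application "Calendar"
    set out to {}
    repeat with c in calendars
        set end of out to name of c
    end repeat
    return out
end tell
'''

# osascript returns the AppleScript list as comma-separated text on stdout.
names = subprocess.run(["osascript", "-e", SCRIPT],
                       capture_output=True, text=True, check=True).stdout.strip()
print("Calendars:", names)
```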
-
jmap-email
Enables JMAP email operations using Node.js and the jmap-jam library. Use when working with JMAP email servers (FastMail, Cyrus IMAP, Stalwart Mail Server) or when the user mentions email search, reading, sending, or mailbox management.
jacobrask/claude-skills 4
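The skill itself wraps Node.js and jmap-jam; as a language-neutral illustration, here is a sketch of the underlying JMAP protocol over plain HTTP (session discovery and method calls per RFC 8620/8621). The host and token are placeholders:

```python
import requests

BASE = "https://api.fastmail.com"                  # placeholder host
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}  # placeholder token

# 1. Fetch the session resource to discover the API endpoint and account id.
session = requests.get(f"{BASE}/.well-known/jmap", headers=HEADERS).json()
account_id = session["primaryAccounts"]["urn:ietf:params:jmap:mail"]

# 2. Issue a batched method call; here, a full-text search over emails.
resp = requests.post(session["apiUrl"], headers=HEADERS, json={
    "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    "methodCalls": [
        ["Email/query",
         {"accountId": account_id, "filter": {"text": "invoice"}, "limit": 10},
         "0"],
    ],
}).json()
print(resp["methodResponses"][0][1]["ids"])
```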
-
safari-tabs
Interact with Safari browser tabs, reading list, bookmarks, and history via AppleScript. Use when the user asks to analyze, organize, summarize, deduplicate, close, export, or manage their Safari tabs. Also handles reading list, bookmarks, and history searches. Triggers include "my tabs", "open tabs", "Safari tabs", "clean up my browser", "what tabs do I have open", "organize my tabs", "too many tabs", "reading list", "bookmarks", "browser history", "export tabs". Requires macOS with Safari.
jacobrask/claude-skills 4
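A minimal sketch of the AppleScript call behind this skill: dump the title and URL of every open Safari tab (macOS only):

```python
import subprocess

SCRIPT = '''
set out to ""
tell application "Safari"
    repeat with w in windows
        repeat with t in tabs of w
            set out to out & (name of t) & "\\t" & (URL of t) & "\\n"
        end repeat
    end repeat
end tell
return out
'''

tabs = subprocess.run(["osascript", "-e", SCRIPT],
                      capture_output=True, text=True, check=True).stdout
print(tabs)
```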
-
knowledge-base
Manage your personal knowledge base of curated resources, bookmarks, and excerpts. Triggers include "knowledge base", "kb", "add to knowledge", "add tabs to", "what do I have on", "what do we know about", "find resources about". Use with safari-tabs skill for bulk ingestion from Safari windows. Location is ~/knowledge/.
jacobrask/claude-skills 4
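A hypothetical sketch of the kind of ingestion such a skill performs; the one-markdown-file-per-topic layout under ~/knowledge/ is an assumption for illustration, not the skill's documented format:

```python
from datetime import date
from pathlib import Path

def add_resource(topic: str, title: str, url: str, note: str = "") -> Path:
    # Hypothetical layout: one markdown file per topic under ~/knowledge/.
    kb = Path.home() / "knowledge"
    kb.mkdir(exist_ok=True)
    f = kb / f"{topic}.md"
    with f.open("a", encoding="utf-8") as fh:
        fh.write(f"- [{title}]({url}) ({date.today()}) {note}\n")
    return f

add_resource("rag", "FAISS wiki", "https://github.com/facebookresearch/faiss/wiki")
```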
-
peft-fine-tuning
Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. Hugging Face's official library, integrated with the transformers ecosystem.
Orchestra-Research/AI-Research-SKILLs 6,644
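A minimal LoRA setup with PEFT; the model name is an example, and the right `target_modules` depend on the architecture:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections for Llama
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()          # typically well under 1% trainable
```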
-
llama-factory
Expert guidance for fine-tuning LLMs with LLaMA-Factory - no-code WebUI, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support
Orchestra-Research/AI-Research-SKILLs 6,644
-
unsloth
Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization
Orchestra-Research/AI-Research-SKILLs 6,644
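A short sketch of Unsloth's loader plus its LoRA helper, as the API looks in recent releases (the model name is an example pre-quantized checkpoint):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # example 4-bit checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```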
-
axolotl
Expert guidance for fine-tuning LLMs with Axolotl - YAML configs, 100+ models, LoRA/QLoRA, DPO/KTO/ORPO/GRPO, multimodal support
Orchestra-Research/AI-Research-SKILLs 6,644
-
moe-training
Train Mixture of Experts (MoE) models using DeepSpeed or HuggingFace. Use when training large-scale models with limited compute (5× cost reduction vs dense models), implementing sparse architectures like Mixtral 8x7B or DeepSeek-V3, or scaling model capacity without proportional compute increase. Covers MoE architectures, routing mechanisms, load balancing, expert parallelism, and inference optimization.
Orchestra-Research/AI-Research-SKILLs 6,644
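A minimal sketch of the core mechanism this skill covers: a top-2 gated layer that routes each token to 2 of N expert MLPs. Load-balancing losses and expert parallelism, which frameworks like DeepSpeed add, are omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (tokens, d_model)
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            rows, slot = (idx == e).nonzero(as_tuple=True)
            if rows.numel():                     # tokens routed to expert e
                out[rows] += weights[rows, slot, None] * expert(x[rows])
        return out

y = TopKMoE()(torch.randn(16, 512))              # 16 tokens through the layer
```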
-
model-pruning
Reduce LLM size and accelerate inference using pruning techniques like Wanda and SparseGPT. Use when compressing models without retraining, achieving 50% sparsity with minimal accuracy loss, or enabling faster inference on hardware accelerators. Covers unstructured pruning, structured pruning, N:M sparsity, magnitude pruning, and one-shot methods.
Orchestra-Research/AI-Research-SKILLs 6,644
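A minimal sketch of the magnitude-pruning baseline using PyTorch's built-in utilities; Wanda and SparseGPT themselves are one-shot methods from separate research codebases:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(1024, 1024)
prune.l1_unstructured(layer, name="weight", amount=0.5)  # zero the smallest 50%
print(f"sparsity: {(layer.weight == 0).float().mean():.2%}")
prune.remove(layer, "weight")                    # bake the mask into the tensor
```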
-
knowledge-distillation
Compress large language models using knowledge distillation from teacher to student models. Use when deploying smaller models with retained performance, transferring GPT-4 capabilities to open-source models, or reducing inference costs. Covers temperature scaling, soft targets, reverse KLD, logit distillation, and MiniLLM training strategies.
Orchestra-Research/AI-Research-SKILLs 6,644
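A minimal sketch of temperature-scaled logit distillation: a KL term between softened teacher and student distributions, blended with the usual hard-label loss:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                   # standard T^2 gradient rescale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```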
-
long-context
Extend context windows of transformer models using RoPE, YaRN, ALiBi, and position interpolation techniques. Use when processing long documents (32k-128k+ tokens), extending pre-trained models beyond original context limits, or implementing efficient positional encodings. Covers rotary embeddings, attention biases, interpolation methods, and extrapolation strategies for LLMs.
Orchestra-Research/AI-Research-SKILLs 6,644
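A minimal sketch of RoPE with linear position interpolation: to run a model trained at a 4k context on 16k inputs, positions are scaled down by 4 before the rotary angles are computed (YaRN and NTK-aware variants refine this). Written here in the "rotate-half" formulation:

```python
import torch

def rope_angles(seq_len, head_dim, base=10000.0, scale=1.0):
    inv_freq = base ** (-torch.arange(0, head_dim, 2).float() / head_dim)
    pos = torch.arange(seq_len).float() * scale   # scale < 1 => interpolation
    return torch.outer(pos, inv_freq)             # (seq_len, head_dim/2)

def apply_rope(x, ang):                           # x: (seq_len, head_dim)
    x1, x2 = x[..., : x.shape[-1] // 2], x[..., x.shape[-1] // 2 :]
    cos, sin = ang.cos(), ang.sin()
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# 16k positions squeezed into a 4k-trained angle range.
ang = rope_angles(seq_len=16384, head_dim=64, scale=4096 / 16384)
q = apply_rope(torch.randn(16384, 64), ang)
```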
-
speculative-decoding
Accelerate LLM inference using speculative decoding, Medusa's multiple decoding heads, and lookahead decoding. Use when optimizing inference speed (1.5-3.6× speedup), reducing latency for real-time applications, or deploying models with limited compute. Covers draft models, tree-based attention, Jacobi iteration, parallel token generation, and production deployment strategies.
Orchestra-Research/AI-Research-SKILLs 6,644
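A minimal sketch of draft-and-verify decoding via Hugging Face transformers' assisted generation: a small draft model proposes tokens and the target model verifies them in one forward pass. Model names are examples, and the two models must share a tokenizer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
target = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
draft = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

inputs = tok("Speculative decoding works by", return_tensors="pt")
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```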
-
model-merging
Merge multiple fine-tuned models using mergekit to combine capabilities without retraining. Use when creating specialized models by blending domain-specific expertise (math + coding + chat), improving performance beyond single models, or experimenting rapidly with model variants. Covers SLERP, TIES-Merging, DARE, Task Arithmetic, linear merging, and production deployment strategies.
Orchestra-Research/AI-Research-SKILLs 6,644
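mergekit itself is driven by YAML configs; this sketches the simplest method it implements, linear (weighted-average) merging, in plain PyTorch:

```python
import torch

def linear_merge(state_dicts, weights):
    # Weighted average of matching tensors from same-architecture checkpoints.
    assert abs(sum(weights) - 1.0) < 1e-6
    return {
        key: sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
        for key in state_dicts[0]
    }

# e.g. 60% math specialist + 40% coding specialist:
# merged = linear_merge([math_sd, code_sd], [0.6, 0.4])
```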
-
tensorboard
Visualize training metrics, debug models with histograms, compare experiments, visualize model graphs, and profile performance with TensorBoard - Google's ML visualization toolkit
Orchestra-Research/AI-Research-SKILLs 6,644
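A minimal logging sketch with PyTorch's bundled TensorBoard writer; view the results with `tensorboard --logdir runs`:

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/demo")
for step in range(100):
    writer.add_scalar("train/loss", 1.0 / (step + 1), step)   # fake loss curve
writer.add_histogram("weights/layer1", torch.randn(1000), global_step=99)
writer.close()
```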
-
weights-and-biases
Track ML experiments with automatic logging, visualize training in real-time, optimize hyperparameters with sweeps, and manage model registry with W&B - collaborative MLOps platform
Orchestra-Research/AI-Research-SKILLs 6,644
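A minimal W&B tracking sketch (assumes `wandb login` has been run; the project name is an example):

```python
import wandb

run = wandb.init(project="demo", config={"lr": 3e-4, "batch_size": 32})
for step in range(100):
    wandb.log({"loss": 1.0 / (step + 1)}, step=step)   # fake loss curve
run.finish()
```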
-
mlflow
Track ML experiments, manage model registry with versioning, deploy models to production, and reproduce experiments with MLflow - framework-agnostic ML lifecycle platform
Orchestra-Research/AI-Research-SKILLs 6,644
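A minimal MLflow tracking sketch; inspect runs afterwards with `mlflow ui`:

```python
import mlflow

with mlflow.start_run(run_name="demo"):
    mlflow.log_param("lr", 3e-4)
    for step in range(100):
        mlflow.log_metric("loss", 1.0 / (step + 1), step=step)  # fake loss curve
```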
-
experiment-tracking-swanlab
Provides guidance for experiment tracking with SwanLab. Use when you need open-source run tracking, local or self-hosted dashboards, and lightweight media logging for ML workflows.
Orchestra-Research/AI-Research-SKILLs 6,644
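A minimal SwanLab sketch; its API follows the familiar init/log pattern (hedged against recent releases, details may vary by version):

```python
import swanlab

swanlab.init(project="demo", config={"lr": 3e-4})
for step in range(100):
    swanlab.log({"loss": 1.0 / (step + 1)})   # fake loss curve
```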
-
qdrant-vector-search
High-performance vector similarity search engine for RAG and semantic search. Use when building production RAG systems requiring fast nearest neighbor search, hybrid search with filtering, or scalable vector storage with Rust-powered performance.
Orchestra-Research/AI-Research-SKILLs 6,644
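A minimal Qdrant sketch with the official Python client, using in-memory mode so nothing needs to be deployed:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")               # or a URL for a running server
client.create_collection(
    "docs", vectors_config=VectorParams(size=4, distance=Distance.COSINE)
)
client.upsert("docs", points=[
    PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"lang": "en"}),
    PointStruct(id=2, vector=[0.4, 0.3, 0.2, 0.1], payload={"lang": "de"}),
])
hits = client.search("docs", query_vector=[0.1, 0.2, 0.3, 0.4], limit=1)
print(hits[0].id, hits[0].score)
```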
-
chroma
Open-source embedding database for AI applications. Store embeddings and metadata, perform vector and full-text search, filter by metadata. Simple 4-function API. Scales from notebooks to production clusters. Use for semantic search, RAG applications, or document retrieval. Best for local development and open-source projects.
Orchestra-Research/AI-Research-SKILLs 6,644
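A minimal Chroma sketch showing the core add/query API, with Chroma embedding the documents itself via its default embedding function:

```python
import chromadb

client = chromadb.Client()          # in-memory; use PersistentClient for disk
col = client.create_collection("docs")
col.add(
    ids=["a", "b"],
    documents=["Llamas eat grass.", "FAISS does vector search."],
    metadatas=[{"topic": "animals"}, {"topic": "ml"}],
)
res = col.query(query_texts=["what do llamas eat?"], n_results=1)
print(res["documents"][0])
```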
-
pinecone
Managed vector database for production AI applications. Fully managed, auto-scaling, with hybrid search (dense + sparse), metadata filtering, and namespaces. Low latency (<100ms p95). Use for production RAG, recommendation systems, or semantic search at scale. Best for serverless, managed infrastructure.
Orchestra-Research/AI-Research-SKILLs 6,644
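A minimal Pinecone sketch with the current Python SDK; the API key, index name, and dimensionality are placeholders, and the index is assumed to already exist:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="<API_KEY>")               # placeholder key
index = pc.Index("demo-index")                   # placeholder, pre-created index
index.upsert(vectors=[
    {"id": "a", "values": [0.1] * 1536, "metadata": {"topic": "ml"}},
])
res = index.query(vector=[0.1] * 1536, top_k=3, include_metadata=True)
print(res.matches)
```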
-
faiss
Facebook's library for efficient similarity search and clustering of dense vectors. Supports billions of vectors, GPU acceleration, and various index types (Flat, IVF, HNSW). Use for fast k-NN search, large-scale vector retrieval, or when you need pure similarity search without metadata. Best for high-performance applications.
Orchestra-Research/AI-Research-SKILLs 6,644
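A minimal FAISS sketch: exact k-NN search with a flat L2 index (swap in IVF or HNSW index types, or the GPU variants, at scale):

```python
import faiss
import numpy as np

d = 64
xb = np.random.random((10_000, d)).astype("float32")   # database vectors
xq = np.random.random((5, d)).astype("float32")        # query vectors

index = faiss.IndexFlatL2(d)
index.add(xb)
D, I = index.search(xq, 4)        # distances and neighbor ids, shape (5, 4)
print(I)
```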