Token Optimizer MCP

Efficient token reduction for context window optimization in AI workflows

Stars: 6 · Forks: 2 · Watchers: 6 · Issues: 9
Token Optimizer MCP is a server that reduces context window usage for AI tools by intelligently caching, compressing, and externally storing large content. It offers smart replacements for standard file and API operations, integrating seamlessly with tools such as Claude Desktop and other code assistants. The tool employs persistent caching with SQLite, high-ratio Brotli compression, and precise token counting to optimize both accuracy and efficiency. Production results indicate consistent 60-90% token usage reduction over tens of thousands of operations.

Key Features

Intelligent caching of large content
Brotli-based high-ratio compression
Persistent SQLite cache across sessions
Smart replacements for standard file and API tools
Automatic optimization for read, grep, glob, and diff operations
Detailed token analysis using tiktoken
Seamless tool integration and auto-configuration
Zero external dependencies for offline operation
Rich set of 65 specialized tools
Production-ready TypeScript implementation

Use Cases

Reducing token usage for large context window operations
Optimizing file reading and editing in AI-assisted development
Integrating efficient token management into code assistants
Persisting compressed API responses outside model context
Automating token analysis and reduction in batch workflows
Optimizing database or codebase operations for AI tools
Tracking and reporting cache hit rates and savings
Enabling offline AI workflows with persistent caching
Automating context trimming for long-running workspaces
Enhancing performance in AI-driven IDE extensions

README

Token Optimizer MCP

Intelligent token optimization through caching, compression, and smart tooling for Claude Code and Claude Desktop

Overview

Token Optimizer MCP is a Model Context Protocol (MCP) server that reduces context window usage by 60-90% through intelligent caching, compression, and smart tool replacements. By storing compressed content externally in SQLite and providing optimized alternatives to standard tools, the server helps you maximize your available context window.

Production Results: 60-90% token reduction across 38,000+ operations in real-world usage.

Key Features

  • Smart Tool Replacements: Automatic optimization for Read, Grep, Glob, and more
  • Context Window Optimization: Store content externally to free up context space
  • High Compression: Brotli compression (2-4x typical, up to 82x for repetitive content); see the sketch after this list
  • Persistent Caching: SQLite-based cache that persists across sessions
  • Accurate Token Counting: Uses tiktoken for precise token measurements
  • 65 Specialized Tools: File operations, API caching, database optimization, monitoring, and more
  • Zero External Dependencies: Completely offline operation
  • Production Ready: Built with TypeScript for reliability
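
Both the compression and the token counting above are built from standard components. A minimal sketch of how the measurements might be reproduced, assuming Node's built-in zlib and the tiktoken npm package (illustrative only, not the server's internal code):

typescript
// Measure Brotli compression ratio and token count (illustrative sketch).
import { brotliCompressSync, constants } from "node:zlib";
import { encoding_for_model } from "tiktoken";

const text = "Large API response or file content...".repeat(100);

// Brotli quality 11 = maximum compression (slowest, best ratio)
const compressed = brotliCompressSync(Buffer.from(text), {
  params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
});
console.log(`compression ratio: ${(text.length / compressed.length).toFixed(1)}x`);

// Token counting with the GPT-4 tokenizer (an approximation for Claude)
const enc = encoding_for_model("gpt-4");
console.log(`tokens: ${enc.encode(text).length}`);
enc.free(); // tiktoken is WASM-backed; release its memory explicitly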

Installation

Quick Install (Recommended)

Windows

powershell
# Run PowerShell as Administrator, then:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

# Install globally (hooks install automatically!)
npm install -g @ooples/token-optimizer-mcp

macOS / Linux

bash
# Install globally (hooks install automatically!)
npm install -g @ooples/token-optimizer-mcp

That's it! The postinstall script will automatically:

  1. ✅ Install token-optimizer-mcp globally via npm
  2. ✅ Auto-detect and configure all installed AI tools (Claude Desktop, Cursor, Cline, etc.)
  3. ✅ Set up automatic token optimization on every tool call
  4. ✅ Configure workspace trust and execution permissions

Result: 60-90% token reduction across all operations!

Note: If automatic installation is skipped (e.g., in CI environments), you can manually run the installer:

  • Windows: powershell -ExecutionPolicy Bypass -File install-hooks.ps1
  • macOS/Linux: bash install-hooks.sh

Manual Configuration

For detailed platform-specific installation instructions, see docs/HOOKS-INSTALLATION.md.
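
If you prefer to wire a client up by hand, most MCP clients register servers with a JSON entry along these lines (a sketch only; the exact file name and location vary by tool, e.g. claude_desktop_config.json for Claude Desktop):

json
{
  "mcpServers": {
    "token-optimizer": {
      "command": "npx",
      "args": ["-y", "@ooples/token-optimizer-mcp"]
    }
  }
}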

Available Tools (65 Total)

Core Caching & Optimization (8 tools)

  • optimize_text - Compress and cache text (primary tool for token reduction)
  • get_cached - Retrieve previously cached text
  • compress_text - Compress text using Brotli
  • decompress_text - Decompress Brotli-compressed text
  • count_tokens - Count tokens using tiktoken (GPT-4 tokenizer)
  • analyze_optimization - Analyze text and get optimization recommendations
  • get_cache_stats - View cache hit rates and compression ratios
  • clear_cache - Clear all cached data

Usage Example:

typescript
// Cache large content to remove it from context window
optimize_text({
  text: "Large API response or file content...",
  key: "api-response-key",
  quality: 11
})
// Result: 60-90% token reduction

Smart File Operations (10 tools)

Optimized replacements for standard file tools with intelligent caching and diff-based updates:

  • smart_read - Read files with 80% token reduction through caching and diffs
  • smart_write - Write files with verification and change tracking
  • smart_edit - Line-based file editing with diff-only output (90% reduction)
  • smart_grep - Search file contents with match-only output (80% reduction)
  • smart_glob - File pattern matching with path-only results (75% reduction)
  • smart_diff - Git diffs with diff-only output (85% reduction)
  • smart_branch - Git branch listing with structured JSON (60% reduction)
  • smart_log - Git commit history with smart filtering (75% reduction)
  • smart_merge - Git merge management with conflict analysis (80% reduction)
  • smart_status - Git status with status-only output (70% reduction)

Usage Example:

typescript
// Read a file with automatic caching
smart_read({ path: "/path/to/file.ts" })
// First read: full content
// Subsequent reads: only diff (80% reduction)

API & Database Operations (10 tools)

Intelligent caching and optimization for external data sources:

  • smart_api_fetch - HTTP requests with caching and retry logic (83% reduction on cache hits)
  • smart_cache_api - API response caching with TTL/ETag/event-based strategies
  • smart_database - Database queries with connection pooling and caching (83% reduction)
  • smart_sql - SQL query analysis with optimization suggestions (83% reduction)
  • smart_schema - Database schema analysis with intelligent caching
  • smart_graphql - GraphQL query optimization with complexity analysis (83% reduction)
  • smart_rest - REST API analysis with endpoint discovery (83% reduction)
  • smart_orm - ORM query optimization with N+1 detection (83% reduction)
  • smart_migration - Database migration tracking (83% reduction)
  • smart_websocket - WebSocket connection management with message tracking

Usage Example:

typescript
// Fetch API with automatic caching
smart_api_fetch({
  method: "GET",
  url: "https://api.example.com/data",
  ttl: 300
})
// Cached responses: 95% token reduction

Build & Test Operations (10 tools)

Development workflow optimization with intelligent caching:

  • smart_build - TypeScript builds with diff-based change detection
  • smart_test - Test execution with incremental test selection
  • smart_lint - ESLint with incremental analysis and auto-fix
  • smart_typecheck - TypeScript type checking with caching
  • smart_install - Package installation with dependency analysis
  • smart_docker - Docker operations with layer analysis
  • smart_logs - Log aggregation with pattern filtering
  • smart_network - Network diagnostics with anomaly detection
  • smart_processes - Process monitoring with resource tracking
  • smart_system_metrics - System resource monitoring with performance recommendations

Usage Example:

typescript
// Run tests with caching
smart_test({
  onlyChanged: true,  // Only test changed files
  coverage: true
})

Advanced Caching (10 tools)

Enterprise-grade caching strategies with 87-92% token reduction:

  • smart_cache - Multi-tier cache (L1/L2/L3) with 6 eviction strategies (90% reduction)
  • cache_warmup - Intelligent cache pre-warming with schedule support (87% reduction)
  • cache_analytics - Real-time dashboards and trend analysis (88% reduction)
  • cache_benchmark - Performance testing and strategy comparison (89% reduction)
  • cache_compression - 6 compression algorithms with adaptive selection (89% reduction)
  • cache_invalidation - Dependency tracking and pattern-based invalidation (88% reduction)
  • cache_optimizer - ML-based recommendations and bottleneck detection (89% reduction)
  • cache_partition - Sharding and consistent hashing (87% reduction)
  • cache_replication - Distributed replication with conflict resolution (88% reduction)
  • predictive_cache - ML-based predictive caching with ARIMA/LSTM (91% reduction)

Usage Example:

typescript
// Configure multi-tier cache
smart_cache({
  operation: "configure",
  evictionStrategy: "LRU",
  l1MaxSize: 1000,
  l2MaxSize: 10000
})
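
For intuition, the simplest of the six eviction strategies, LRU, can be sketched in a few lines of TypeScript (illustrative only; the server's multi-tier implementation is more involved):

typescript
// Minimal LRU cache: a Map iterates in insertion order, so re-inserting an
// entry on every access keeps the least-recently-used entry first in line
// for eviction.
class LruCache<V> {
  private map = new Map<string, V>();
  constructor(private maxSize: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      this.map.delete(key); // refresh recency
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    this.map.delete(key); // no-op if the key is absent
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      this.map.delete(this.map.keys().next().value!); // evict the oldest entry
    }
  }
}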

Monitoring & Dashboards (7 tools)

Comprehensive monitoring with 88-92% token reduction through intelligent caching:

  • alert_manager - Multi-channel alerting (email, Slack, webhook) with routing (89% reduction)
  • metric_collector - Time-series metrics with multi-source support (88% reduction)
  • monitoring_integration - External platform integration (Prometheus, Grafana, Datadog) (87% reduction)
  • custom_widget - Dashboard widgets with template caching (88% reduction)
  • data_visualizer - Interactive visualizations with SVG optimization (92% reduction)
  • health_monitor - System health checks with state compression (91% reduction)
  • log_dashboard - Log analysis with pattern detection (90% reduction)

Usage Example:

typescript
// Create an alert
alert_manager({
  operation: "create-alert",
  alertName: "high-cpu-usage",
  channels: ["slack", "email"],
  threshold: { type: "above", value: 80 }
})

System Operations (6 tools)

System-level operations with smart caching:

  • smart_cron - Scheduled task management (cron/Windows Task Scheduler) (85% reduction)
  • smart_user - User and permission management across platforms (86% reduction)
  • smart_ast_grep - Structural code search with AST indexing (83% reduction)
  • get_session_stats - Session-level token usage statistics
  • analyze_project_tokens - Project-wide token analysis and cost estimation
  • optimize_session - Compress large file operations from current session

Usage Example:

typescript
// View session token usage
get_session_stats({})
// Result: Detailed breakdown of token usage by tool

Token Analytics (4 tools)

Granular token usage analytics for pinpointing optimization opportunities:

  • get_hook_analytics - Token usage breakdown by hook phase (PreToolUse, PostToolUse, etc.)
  • get_action_analytics - Token usage breakdown by tool/action (Read, Write, Grep, etc.)
  • get_mcp_server_analytics - Token usage breakdown by MCP server (token-optimizer, filesystem, etc.)
  • export_analytics - Export analytics data in JSON or CSV format with filtering

Usage Example:

typescript
// Get per-hook analytics
get_hook_analytics({
  startDate: "2025-01-01T00:00:00Z",
  endDate: "2025-12-31T23:59:59Z"
})
// Result: Shows which hooks consume the most tokens

// Get per-action analytics
get_action_analytics({})
// Result: Shows which tools use the most tokens

// Export analytics as CSV
export_analytics({
  format: "csv",
  hookPhase: "PreToolUse"
})
// Result: CSV export filtered by PreToolUse hook

Key Features:

  • Per-hook phase tracking (PreToolUse, PostToolUse, SessionStart, etc.)
  • Per-action tracking (Read, Write, count_tokens, etc.)
  • Per-MCP-server tracking (token-optimizer, filesystem, GitHub, etc.)
  • Date range filtering
  • JSON and CSV export
  • Persistent storage with SQLite
  • Zero performance impact (async batched writes)

How It Works

Global Hooks System (7-Phase Optimization)

When global hooks are installed, token-optimizer-mcp runs automatically on every tool call:

┌─────────────────────────────────────────────────────────────┐
│ Phase 1: PreToolUse - Tool Replacement                      │
│ ├─ Read   → smart_read   (80% token reduction)             │
│ ├─ Grep   → smart_grep   (80% token reduction)             │
│ └─ Glob   → smart_glob   (75% token reduction)             │
└─────────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────────┐
│ Phase 2: Input Validation - Cache Lookups                   │
│ └─ get_cached checks if operation was already done          │
└─────────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────────┐
│ Phase 3: PostToolUse - Output Optimization                  │
│ ├─ optimize_text for large outputs                          │
│ └─ compress_text for repeated content                       │
└─────────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────────┐
│ Phase 4: Session Tracking                                   │
│ └─ Log all operations to operations-{sessionId}.csv         │
└─────────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────────┐
│ Phase 5: UserPromptSubmit - Prompt Optimization             │
│ └─ Optimize user prompts before sending to API              │
└─────────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────────┐
│ Phase 6: PreCompact - Pre-Compaction Optimization           │
│ └─ Optimize before Claude Code compacts the conversation    │
└─────────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────────┐
│ Phase 7: Metrics & Reporting                                │
│ └─ Track token reduction metrics and generate reports       │
└─────────────────────────────────────────────────────────────┘
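
Phase 1's replacement step amounts to a lookup from standard tool names to their optimized counterparts. A schematic sketch in TypeScript (the shipped hooks are PowerShell scripts, and the real dispatch logic is richer):

typescript
// Illustrative Phase 1 mapping, not the actual dispatcher.
type ToolCall = { name: string; input: Record<string, unknown> };

const REPLACEMENTS: Record<string, string> = {
  Read: "smart_read", // ~80% token reduction
  Grep: "smart_grep", // ~80% token reduction
  Glob: "smart_glob", // ~75% token reduction
};

function replaceTool(call: ToolCall): ToolCall {
  const optimized = REPLACEMENTS[call.name];
  return optimized ? { ...call, name: optimized } : call;
}

console.log(replaceTool({ name: "Read", input: { path: "/src/app.ts" } }));
// → { name: "smart_read", input: { path: "/src/app.ts" } }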

Production Performance

Based on 38,000+ operations in real-world usage:

| Tool Category     | Avg Token Reduction | Cache Hit Rate |
|-------------------|---------------------|----------------|
| File Operations   | 60-90%              | >80%           |
| API Responses     | 83-95%              | >75%           |
| Database Queries  | 83-90%              | >70%           |
| Build/Test Output | 70-85%              | >65%           |

Per-Session Savings: 300K-700K tokens (worth $0.90-$2.10 at $3/M tokens)

Usage Examples

Basic Caching

typescript
// Cache large content to remove from context window
const result = await optimize_text({
  text: "Large API response or file content...",
  key: "cache-key",
  quality: 11
});
// Result: Original tokens removed, only cache key remains (~50 tokens)

// Retrieve later
const cached = await get_cached({ key: "cache-key" });
// Result: Full original content restored

Smart File Reading

typescript
// First read: full content
await smart_read({ path: "/src/app.ts" });

// Subsequent reads: only changes (80% reduction)
await smart_read({ path: "/src/app.ts" });

API Caching

typescript
// First request: fetch and cache
await smart_api_fetch({
  method: "GET",
  url: "https://api.example.com/data",
  ttl: 300
});

// Subsequent requests: cached (95% reduction)
await smart_api_fetch({
  method: "GET",
  url: "https://api.example.com/data"
});

Session Analysis

typescript
// View token usage for current session
await get_session_stats({});
// Result: Breakdown by tool, operation, and savings

// Analyze entire project
await analyze_project_tokens({
  projectPath: "/path/to/project"
});
// Result: Cost estimation and optimization opportunities

Technology Stack

  • Runtime: Node.js 20+
  • Language: TypeScript
  • Database: SQLite (better-sqlite3)
  • Token Counting: tiktoken (GPT-4 tokenizer)
  • Compression: Brotli (built-in Node.js)
  • Caching: Multi-tier LRU/LFU/FIFO caching
  • Protocol: MCP SDK (@modelcontextprotocol/sdk)

Supported AI Tools

The automated installer detects and configures token-optimizer-mcp for:

  • Claude Code - CLI with global hooks integration
  • Claude Desktop - Native desktop application
  • Cursor IDE - AI-first code editor
  • Cline - VS Code extension (formerly Claude Dev)
  • GitHub Copilot - VS Code with MCP support
  • Windsurf IDE - AI-powered development environment

No manual configuration needed - the installer automatically detects and configures all installed tools!

Documentation

Performance Characteristics

  • Compression Ratio: 2-4x typical (up to 82x for repetitive content)
  • Context Window Savings: 60-90% average across all operations
  • Cache Hit Rate: >80% in typical usage
  • Operation Overhead: <10ms for cache operations (optimized from 50-70ms)
  • Compression Speed: ~1ms per KB of text
  • Hook Overhead: <10ms per operation (7x improvement from in-memory optimizations)

Performance Optimizations

The PowerShell hooks have been optimized to reduce overhead from 50-70ms to <10ms through:

  • In-Memory Session State: Session data kept in memory instead of disk I/O on every operation
  • Batched Log Writes: Operation logs buffered and flushed every 5 seconds or 100 operations (sketched below)
  • Lazy Persistence: Disk writes only occur when necessary (session end, optimization, reports)
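
The batching pattern is conventional; a schematic TypeScript sketch with hypothetical names (the shipped hooks implement it in PowerShell):

typescript
// Buffer log lines in memory and flush on a size threshold or timer.
import { appendFileSync } from "node:fs";

const FLUSH_INTERVAL_MS = 5_000; // flush every 5 seconds...
const FLUSH_THRESHOLD = 100;     // ...or once 100 operations are buffered

const buffer: string[] = [];

function logOperation(line: string): void {
  buffer.push(line);
  if (buffer.length >= FLUSH_THRESHOLD) flush();
}

function flush(): void {
  if (buffer.length === 0) return;
  appendFileSync("operations.log", buffer.join("\n") + "\n");
  buffer.length = 0; // clear the buffer in place
}

setInterval(flush, FLUSH_INTERVAL_MS).unref(); // periodic flush; don't keep the process alive
process.on("exit", flush);                     // lazy persistence: final flush at session end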

Environment Variables

Control hook behavior with these environment variables:

Performance Controls

  • TOKEN_OPTIMIZER_USE_FILE_SESSION (default: false)

    • Set to true to revert to file-based session tracking (legacy mode)
    • Use if you encounter issues with in-memory session state
    • Example: $env:TOKEN_OPTIMIZER_USE_FILE_SESSION = "true"
  • TOKEN_OPTIMIZER_SYNC_LOG_WRITES (default: false)

    • Set to true to disable batched log writes
    • Forces immediate writes to disk (slower but more resilient)
    • Use for debugging or if logs are being lost
    • Example: $env:TOKEN_OPTIMIZER_SYNC_LOG_WRITES = "true"
  • TOKEN_OPTIMIZER_DEBUG_LOGGING (default: true)

    • Set to false to disable DEBUG-level logging
    • Reduces log file size and improves performance
    • INFO/WARN/ERROR logs still written
    • Example: $env:TOKEN_OPTIMIZER_DEBUG_LOGGING = "false"

Development Path

  • TOKEN_OPTIMIZER_DEV_PATH
    • Path to local development installation
    • Automatically set to ~/source/repos/token-optimizer-mcp if not specified
    • Override for custom development paths
    • Example: $env:TOKEN_OPTIMIZER_DEV_PATH = "C:\dev\token-optimizer-mcp"

Performance Impact: Using in-memory mode (default) provides a 7x improvement in hook overhead:

  • Before: 50-70ms per hook operation
  • After: <10ms per hook operation
  • 85% reduction in hook latency

Monitoring Token Savings

Real-Time Session Monitoring

To view your actual token SAVINGS, use the get_session_stats tool:

typescript
// View current session statistics with token savings breakdown
await get_session_stats({});

Output includes:

  • Total tokens saved (this is the actual savings amount!)
  • Token reduction percentage (e.g., "60% reduction")
  • Cache hit rate and compression ratios
  • Breakdown by tool (Read, Grep, Glob, etc.)
  • Top 10 most optimized operations with before/after comparison

Example Output:

json
{
  "sessionId": "abc-123",
  "totalTokensSaved": 125430,  // ← THIS is your savings!
  "tokenReductionPercent": 68.2,
  "originalTokens": 184000,
  "optimizedTokens": 58570,
  "cacheHitRate": 72.0,
  "byTool": {
    "smart_read": { "saved": 45000, "percent": 80 },
    "smart_grep": { "saved": 32000, "percent": 75 }
  }
}

Session Tracking Files

All operations are automatically tracked in session data files:

Location: ~/.claude-global/hooks/data/current-session.txt

Format:

json
{
  "sessionId": "abc-123",
  "sessionStart": "20251031-082211",
  "totalOperations": 1250,      // ← Number of operations
  "totalTokens": 184000,         // ← Cumulative token COUNT
  "lastOptimized": 1698765432,
  "savings": {                   // ← Auto-updated every 10 operations (Issue #113)
    "totalTokensSaved": 125430,  // Tokens saved by compression
    "tokenReductionPercent": 68.2,  // Percentage of tokens saved
    "originalTokens": 184000,    // Original token count before optimization
    "optimizedTokens": 58570,    // Token count after optimization
    "cacheHitRate": 42.5,        // Cache hit rate percentage
    "compressionRatio": 0.32,    // Compression efficiency (lower is better)
    "lastUpdated": "20251031-092500"  // Last savings update timestamp
  }
}

New in v1.x: The savings object is now automatically updated every 10 operations, eliminating the need to manually call get_session_stats() for real-time monitoring. This provides instant visibility into token optimization performance.

How it works:

  • Every 10 operations, the PowerShell hooks automatically call get_cache_stats() MCP tool
  • Savings metrics are calculated from cache performance data (compression ratio, original vs compressed sizes)
  • The session file is atomically updated with the latest savings data (see the sketch below)
  • If the MCP call fails, the update is skipped gracefully without blocking operations

Note: For detailed per-operation analysis, use get_session_stats(). The session file provides high-level aggregate metrics.
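
The atomic update relies on the standard write-to-temp-then-rename pattern: rename is atomic when source and target are on the same filesystem, so readers never observe a half-written file. A sketch assuming Node's fs module:

typescript
import { renameSync, writeFileSync } from "node:fs";

// Write the full session object to a temp file, then rename it over the
// target; readers see either the old file or the new one, never a partial write.
function updateSessionFile(path: string, session: object): void {
  const tmp = `${path}.tmp`;
  writeFileSync(tmp, JSON.stringify(session, null, 2));
  renameSync(tmp, path);
}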

Project-Wide Analysis

Analyze token usage across your entire project:

typescript
// Analyze project token costs
await analyze_project_tokens({
  projectPath: "/path/to/project"
});

Provides:

  • Total token cost estimation
  • Largest files by token count
  • Optimization opportunities
  • Cost projections at current API rates (see the sketch below)
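
The cost arithmetic behind the projections is simply tokens × price per million tokens; a hypothetical helper (not part of the tool's API):

typescript
// Hypothetical helper, not a token-optimizer-mcp API.
function estimateCostUSD(tokens: number, pricePerMillionTokens = 3): number {
  return (tokens / 1_000_000) * pricePerMillionTokens;
}

console.log(estimateCostUSD(300_000)); // 0.9 (the $0.90 session figure above)
console.log(estimateCostUSD(700_000)); // 2.1 (the $2.10 session figure above)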

Cache Performance

Monitor cache hit rates and storage efficiency:

typescript
// View cache statistics
await get_cache_stats({});

Metrics:

  • Total entries
  • Cache hit rate (%)
  • Average compression ratio
  • Total storage saved
  • Most frequently accessed keys

Troubleshooting

Common Issues and Solutions

Issue: "Invalid or malformed JSON" in Claude Code Settings

Symptom: Claude Code shows "Invalid Settings" error after running install-hooks

Cause: UTF-8 BOM (Byte Order Mark) was added to settings.json files

Solution: Upgrade to v3.0.2+ which fixes the BOM issue:

bash
npm install -g @ooples/token-optimizer-mcp@latest

If you're already on v3.0.2+, manually remove the BOM:

powershell
# Windows: Remove BOM from settings.json
$content = Get-Content "~/.claude/settings.json" -Raw
$content = $content -replace '^\uFEFF', ''  # a UTF-8 BOM decodes to the single character U+FEFF
$content | Set-Content "~/.claude/settings.json" -Encoding utf8NoBOM
bash
# Linux: Remove BOM from settings.json
sed -i '1s/^\xEF\xBB\xBF//' ~/.claude/settings.json

# macOS: Remove BOM from settings.json (BSD sed requires empty string after -i)
sed -i '' '1s/^\xef\xbb\xbf//' ~/.claude/settings.json

Issue: Hooks Not Working After Installation

Symptom: Token optimization not occurring automatically

Diagnosis:

  1. Check if hooks are installed:

    powershell
    # Windows
    Get-Content ~/.claude/settings.json | ConvertFrom-Json | Select-Object -ExpandProperty hooks
    
    bash
    # macOS/Linux
    cat ~/.claude/settings.json | jq .hooks
    
  2. Verify the dispatcher script exists (dispatcher.ps1 on Windows, dispatcher.sh on macOS/Linux):

    powershell
    # Windows
    Test-Path ~/.claude-global/hooks/dispatcher.ps1
    
    bash
    # macOS/Linux
    [ -f ~/.claude-global/hooks/dispatcher.sh ] && echo "Exists" || echo "Missing"
    

Solution: Re-run the installer:

powershell
# Windows
powershell -ExecutionPolicy Bypass -File install-hooks.ps1
bash
# macOS/Linux
bash install-hooks.sh

Issue: Low Cache Hit Rate (<50%)

Symptom: Session stats show cache hit rate below 50%

Causes:

  1. Working with many new files (expected)
  2. Cache was recently cleared
  3. TTL (time-to-live) is too short

Solutions:

  1. Warm up the cache before starting work:

    typescript
    await cache_warmup({
      paths: ["/path/to/frequently/used/files"],
      recursive: true
    });
    
  2. Increase TTL for stable APIs:

    typescript
    await smart_api_fetch({
      url: "https://api.example.com/data",
      ttl: 3600  // 1 hour instead of default 5 minutes
    });
    
  3. Check cache size limits:

    typescript
    await smart_cache({
      operation: "configure",
      l1MaxSize: 2000,  // Increase from default 1000
      l2MaxSize: 20000  // Increase from default 10000
    });
    

Issue: High Memory Usage

Symptom: Node.js process using excessive memory

Cause: Large cache in memory (L1/L2 tiers)

Solution: Configure cache limits:

typescript
await smart_cache({
  operation: "configure",
  evictionStrategy: "LRU",  // Least Recently Used
  l1MaxSize: 500,  // Reduce L1 cache
  l2MaxSize: 5000  // Reduce L2 cache
});

Or clear the cache:

typescript
await clear_cache({});

Issue: Slow First-Time Operations

Symptom: Initial Read/Grep/Glob operations are slow

Cause: Cache is empty, building indexes

Solution: This is expected behavior. Subsequent operations will be 80-90% faster.

To pre-warm the cache:

typescript
await cache_warmup({
  paths: ["/src", "/tests", "/docs"],
  recursive: true,
  schedule: "startup"  // Auto-warm on every session start
});

Issue: "Permission denied" Errors on Windows

Symptom: Cannot write to cache or log files

Cause: PowerShell execution policy or file permissions

Solution:

  1. Set execution policy:

    powershell
    Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
    
  2. Check file permissions:

    powershell
    icacls "$env:USERPROFILE\.token-optimizer"
    
  3. Re-run installer as Administrator if needed

Issue: Cache Files Growing Too Large

Symptom: ~/.token-optimizer/cache.db is >1GB

Cause: Caching very large files or many API responses

Solution:

  1. Clear old entries:

    typescript
    await clear_cache({ olderThan: 7 });  // Clear entries older than 7 days
    
  2. Reduce cache retention:

    typescript
    await smart_cache({
      operation: "configure",
      defaultTTL: 3600  // 1 hour instead of 7 days
    });
    
  3. Manually delete cache (nuclear option):

    bash
    rm -rf ~/.token-optimizer/cache.db
    

Getting Help

If you encounter issues not covered here:

  1. Check the hook logs: ~/.claude-global/hooks/logs/dispatcher.log
  2. Check session data: ~/.claude-global/hooks/data/current-session.txt
  3. File an issue: GitHub Issues
    • Include debug logs
    • Include your OS and Node.js version
    • Include the output of get_session_stats

Limitations

  • Small Text: Best for content >500 characters (cache overhead on small snippets)
  • One-Time Content: No benefit for content that won't be referenced again
  • Cache Storage: Automatic cleanup after 7 days to prevent disk usage issues
  • Token Counting: Uses GPT-4 tokenizer (approximation for Claude, but close enough)

License

MIT License - see LICENSE for details

Author

Built for optimal Claude Code token efficiency by the ooples team.


Repository Owner

ooples (User)

Repository Details

Language: TypeScript
Default Branch: master
Size: 2,756 KB
Contributors: 2
License: MIT License
MCP Verified: Nov 12, 2025

Programming Languages

TypeScript: 90.68%
PowerShell: 4.21%
JavaScript: 3.81%
Shell: 0.7%
CSS: 0.24%
Python: 0.21%
HTML: 0.15%

Topics

ai caching claude compression llm mcp mcp-server token-optimization


Related MCPs

Similar Model Context Protocol servers:

  • Smart Tree (8b-is/smart-tree · 175) - Lightning-fast directory visualization and AI-enabled context compression.

    Smart Tree is a high-performance directory visualization tool built with Rust, offering advanced context management and AI-integrated features. It supports persistent memory anchoring, compressed context formats for reduced token usage, and a real-time, interactive terminal UI via Spicy TUI mode. The tool provides programmatic hooks for MCP compliance, enabling AI assistants to recall and leverage project insights efficiently.

  • mcp-read-website-fast (just-every/mcp-read-website-fast · 111) - Fast, token-efficient web content extraction and Markdown conversion for AI agents.

    Provides an MCP-compatible server that rapidly fetches web pages, removes noise, and converts content to clean Markdown with link preservation. Designed for local use by AI-powered tools like IDEs and large language models, it offers optimized token usage, concurrency, polite crawling, and smart caching. Integrates with Claude Code, VS Code, JetBrains IDEs, Cursor, and other MCP clients.

  • Content Core (lfnovo/content-core · 85) - AI-powered content extraction and processing platform with seamless model context integration.

    Content Core is an AI-driven platform for extracting, formatting, transcribing, and summarizing content from a wide variety of sources, including documents, media files, web pages, images, and archives. It offers intelligent auto-detection and engine selection to optimize processing, and provides integrations via CLI, Python library, Raycast extension, macOS Services, and the Model Context Protocol (MCP). The platform supports context-aware AI summaries and direct integration with Claude through MCP, with zero-install options and enhanced processing such as advanced PDF parsing, OCR, and smart summarization.

  • Web3 Research MCP (aaronjmars/web3-research-mcp · 144) - Deep, automated, and structured crypto research delivered locally.

    Web3 Research MCP enables detailed research into cryptocurrency tokens by automating data gathering, analysis, and reporting. It integrates data sources such as CoinGecko, CoinMarketCap, and DeFiLlama to create structured research plans, track research progress, and generate in-depth reports on token fundamentals, market data, and more. Optimized for local deployment, it works with platforms like Claude Desktop and Cursor to streamline the research workflow.

  • Sourcerer MCP (st3v3nmw/sourcerer-mcp · 95) - Semantic code search and navigation MCP server for efficient AI agent context retrieval.

    Sourcerer MCP provides an MCP server that enables AI agents to perform semantic code search and navigation. By indexing codebases at the function, class, and chunk level, it allows agents to retrieve only the necessary code snippets, greatly reducing token consumption. It integrates Tree-sitter for language parsing and OpenAI for code embeddings, supporting advanced contextual code understanding without full-file ingestion.

  • ImageSorcery MCP (sunriseapps/imagesorcery-mcp · 236) - Local image recognition and editing tools for AI assistants.

    ImageSorcery MCP gives AI assistants advanced image processing capabilities, all executed locally for privacy and efficiency. It offers tools for editing, analyzing, and enhancing images, including cropping, resizing, object detection, and OCR, leveraging pre-trained models and OpenCV techniques to handle a wide range of image tasks without cloud services. Designed for AI integration, it allows natural language prompts to control image manipulation and recognition workflows.