MCP Internet Speed Test

Standardized internet speed and network performance testing for AI models via MCP.

11 Stars · 7 Forks · 11 Watchers · 0 Issues

MCP Internet Speed Test implements the Model Context Protocol (MCP) to enable AI models and agents to measure, analyze, and report diverse network performance metrics through a standardized interface. It supports download, upload, latency, jitter, and cache analysis, along with multi-CDN and geographic location detection. By offering an MCP-compatible server with robust testing features, it allows seamless integration with LLMs and AI tools for real-time network assessment and diagnostics.

Key Features

Smart incremental speed testing using SpeedOf.Me methodology
Download and upload bandwidth measurement from 128KB to 100MB
Network latency measurement with server location details
Jitter analysis using multiple latency samples
Detection of major CDN providers (Fastly, Cloudflare, AWS CloudFront)
Physical location mapping of network points of presence
Cache status and header analysis
Extraction of detailed server and CDN response metadata
Single-command comprehensive test execution
Async support for high-performance network testing

Use Cases

Enabling LLMs to assess client internet speed and network conditions
Automated troubleshooting of connectivity issues in AI agents
Monitoring and diagnosing latency and jitter for real-time applications
Identifying CDN-specific routing and performance impact
Analyzing cache utilization and optimizing content delivery
Providing contextual network insights for agentic reasoning
Supporting real-world environment simulation for AI models
Enhancing digital experience testing in distributed deployments
Testing remote network conditions during AI-driven workflow execution
Integrating network health checks into AI-powered productivity tools

README

MCP Internet Speed Test

An implementation of a Model Context Protocol (MCP) server for internet speed testing. It allows AI models and agents to measure, analyze, and report network performance metrics through a standardized interface.

📦 Available on PyPI: https://pypi.org/project/mcp-internet-speed-test/

🚀 Quick Start:

bash
pip install mcp-internet-speed-test
mcp-internet-speed-test

What is MCP?

The Model Context Protocol (MCP) provides a standardized way for Large Language Models (LLMs) to interact with external tools and data sources. Think of it as the "USB-C for AI applications" - a common interface that allows AI systems to access real-world capabilities and information.

Features

  • Smart Incremental Testing: Uses SpeedOf.Me methodology with 8-second threshold for optimal accuracy
  • Download Speed Testing: Measures bandwidth using files from 128KB to 100MB hosted in a GitHub repository
  • Upload Speed Testing: Tests upload bandwidth using generated data from 128KB to 100MB
  • Latency Testing: Measures network latency with detailed server location information
  • Jitter Analysis: Calculates network stability using multiple latency samples (default: 5)
  • Multi-CDN Support: Detects and provides info for Fastly, Cloudflare, and AWS CloudFront
  • Geographic Location: Maps POP codes to physical locations (50+ locations worldwide)
  • Cache Analysis: Detects HIT/MISS status and cache headers
  • Server Metadata: Extracts detailed CDN headers including x-served-by, via, x-cache
  • Comprehensive Testing: Single function to run all tests with complete metrics

Installation

Prerequisites

  • Python 3.12 or higher
  • pip or uv package manager

Option 1: Install from PyPI with pip (Recommended)

bash
# Install the package globally
pip install mcp-internet-speed-test

# Run the MCP server
mcp-internet-speed-test

Option 2: Install from PyPI with uv

bash
# Add the package as a project dependency
uv add mcp-internet-speed-test

# Or run directly without installing
uvx mcp-internet-speed-test

Option 3: Using Docker

bash
# Build the Docker image
docker build -t mcp-internet-speed-test .

# Run the MCP server in a Docker container
docker run -it --rm -v $(pwd):/app -w /app mcp-internet-speed-test

Option 4: Development/Local Installation

If you want to contribute or modify the code:

bash
# Clone the repository
git clone https://github.com/inventer-dev/mcp-internet-speed-test.git
cd mcp-internet-speed-test

# Install in development mode
pip install -e .

# Or using uv
uv sync
uv run python -m mcp_internet_speed_test.main

Dependencies

The package automatically installs these dependencies:

  • mcp[cli]>=1.6.0: MCP server framework with CLI integration
  • httpx>=0.27.0: Async HTTP client for speed tests

Configuration

To use this MCP server with Claude Desktop or other MCP clients, add it to your MCP configuration file.

Claude Desktop Configuration

Edit your Claude Desktop MCP configuration file:

Option 1: Using the pip-installed package (Recommended)

json
{
    "mcpServers": {
        "mcp-internet-speed-test": {
            "command": "mcp-internet-speed-test"
        }
    }
}

Option 2: Using uvx

json
{
    "mcpServers": {
        "mcp-internet-speed-test": {
            "command": "uvx",
            "args": ["mcp-internet-speed-test"]
        }
    }
}

API Tools

The MCP Internet Speed Test provides the following tools:

Testing Functions

  1. measure_download_speed: Measures download bandwidth (in Mbps) with server location info
  2. measure_upload_speed: Measures upload bandwidth (in Mbps) with server location info
  3. measure_latency: Measures network latency (in ms) with server location info
  4. measure_jitter: Measures network jitter by analyzing latency variations with server info
  5. get_server_info: Get detailed CDN server information for any URL without running speed tests
  6. run_complete_test: Comprehensive test with all metrics and server metadata
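
For orientation, the sketch below shows how an MCP client might invoke these tools over stdio. It is a hedged example: it assumes the official mcp Python SDK client API (StdioServerParameters, stdio_client, ClientSession), and calling run_complete_test with no arguments is an assumption since the exact tool schemas are not reproduced here.

python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the installed server over stdio (assumes `pip install mcp-internet-speed-test`)
    server = StdioServerParameters(command="mcp-internet-speed-test", args=[])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])
            # Argument-free call is an assumption; inspect the tool schema for required fields
            result = await session.call_tool("run_complete_test", arguments={})
            print(result)

asyncio.run(main())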

CDN Server Detection

This speed test now provides detailed information about the CDN servers serving your tests:

What You Get

  • CDN Provider: Identifies if you're connecting to Fastly, Cloudflare, or Amazon CloudFront
  • Geographic Location: Shows the physical location of the server (e.g., "Mexico City, Mexico")
  • POP Code: Three-letter code identifying the Point of Presence (e.g., "MEX", "QRO", "DFW")
  • Cache Status: Whether content is served from cache (HIT) or fetched from origin (MISS)
  • Server Headers: Full HTTP headers including x-served-by, via, and x-cache

Technical Implementation

Smart Testing Methodology

  • Incremental Approach: Starts with small files (128KB) and progressively increases
  • Time-Based Optimization: Uses 8-second base threshold + 4-second additional buffer
  • Accuracy Focus: Selects optimal file size that provides reliable measurements
  • Multi-Provider Support: Tests against geographically distributed endpoints
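
A minimal sketch of that incremental loop is shown below, assuming httpx, a hypothetical base URL and file naming for the test repository, and the documented 8-second threshold; the project's actual implementation may size, name, and time requests differently.

python
import time
import httpx

# Assumed values mirroring the documentation; the real constants live in main.py
SIZE_PROGRESSION = ["128KB", "256KB", "512KB", "1MB", "2MB", "5MB", "10MB"]
BASE_TEST_DURATION = 8.0  # seconds

async def incremental_download_test(base_url: str) -> dict:
    """Download progressively larger files until one fills the measurement window."""
    result = {}
    async with httpx.AsyncClient(timeout=60.0, follow_redirects=True) as client:
        for size in SIZE_PROGRESSION:
            start = time.monotonic()
            response = await client.get(f"{base_url}/{size}")  # hypothetical file naming
            elapsed = time.monotonic() - start
            mbps = len(response.content) * 8 / (elapsed * 1_000_000)
            result = {"file_size": size, "elapsed_s": round(elapsed, 2), "download_mbps": round(mbps, 2)}
            if elapsed >= BASE_TEST_DURATION:
                break  # this size already saturated the window; larger files add no accuracy
    return result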

CDN Detection Capabilities

  • Fastly: Detects POP codes and maps to 50+ global locations
  • Cloudflare: Identifies data centers and geographic regions
  • AWS CloudFront: Recognizes edge locations across continents
  • Header Analysis: Parses x-served-by, via, x-cache, and custom CDN headers
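
To illustrate the kind of header analysis involved, here is a simplified, hedged parser. It is not the project's actual extract_server_info() implementation, and header conventions (x-served-by, cf-ray, x-amz-cf-pop, x-cache) can vary by CDN account and configuration.

python
def parse_cdn_headers(headers: dict[str, str]) -> dict:
    """Best-effort CDN identification from common response headers."""
    h = {k.lower(): v for k, v in headers.items()}
    info = {"cdn_provider": "Unknown", "pop_code": None, "cache_status": None}

    served_by = h.get("x-served-by", "")
    if served_by.startswith("cache-"):
        # Fastly, e.g. "cache-mex4329-MEX" -> trailing token is the POP code
        info["cdn_provider"] = "Fastly"
        info["pop_code"] = served_by.rsplit("-", 1)[-1]
    elif "cf-ray" in h:
        # Cloudflare, e.g. "8f1c2d3e4a5b6c7d-DFW" -> POP code after the dash
        info["cdn_provider"] = "Cloudflare"
        info["pop_code"] = h["cf-ray"].rsplit("-", 1)[-1]
    elif "cloudfront" in h.get("via", "").lower():
        # CloudFront, e.g. x-amz-cf-pop "MEX50-C1" -> first three letters approximate the POP
        info["cdn_provider"] = "Amazon CloudFront"
        info["pop_code"] = h.get("x-amz-cf-pop", "")[:3] or None

    x_cache = h.get("x-cache", "").upper()
    if "HIT" in x_cache:
        info["cache_status"] = "HIT"
    elif "MISS" in x_cache:
        info["cache_status"] = "MISS"
    return info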

Why This Matters

  • Network Diagnostics: Understand which server is actually serving your tests
  • Performance Analysis: Correlate speed results with server proximity
  • CDN Optimization: Identify if your ISP's routing is optimal
  • Geographic Awareness: Know if tests are running from your expected region
  • Troubleshooting: Identify routing issues and CDN misconfigurations

Example Server Info Output

json
{
  "cdn_provider": "Fastly",
  "pop_code": "MEX",
  "pop_location": "Mexico City, Mexico",
  "served_by": "cache-mex4329-MEX",
  "cache_status": "HIT",
  "x_cache": "HIT, HIT"
}

Technical Configuration

Default Test Files Repository

GitHub Repository: inventer-dev/speed-test-files
Branch: main
File Sizes: 128KB, 256KB, 512KB, 1MB, 2MB, 5MB, 10MB, 20MB, 40MB, 50MB, 100MB

Upload Endpoints Priority

  1. Cloudflare Workers (httpi.dev) - Global distribution, highest priority
  2. HTTPBin (httpbin.org) - AWS-based, secondary endpoint
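
A hedged sketch of that fallback behavior follows; the endpoint paths are assumptions, and the actual endpoints, payload handling, and error handling in main.py may differ.

python
import os
import time
import httpx

# Assumed endpoint URLs in the documented priority order (paths are illustrative)
UPLOAD_ENDPOINTS = ["https://httpi.dev/post", "https://httpbin.org/post"]

async def upload_probe(size_bytes: int = 1_000_000) -> dict:
    """POST a random payload to the first reachable endpoint and report Mbps."""
    payload = os.urandom(size_bytes)
    async with httpx.AsyncClient(timeout=60.0) as client:
        for endpoint in UPLOAD_ENDPOINTS:
            try:
                start = time.monotonic()
                await client.post(endpoint, content=payload)
                elapsed = time.monotonic() - start
                return {
                    "endpoint": endpoint,
                    "upload_mbps": round(size_bytes * 8 / (elapsed * 1_000_000), 2),
                }
            except httpx.HTTPError:
                continue  # primary endpoint failed; fall back to the next one
    return {"error": "all upload endpoints unreachable"}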

Supported CDN Locations (150+ POPs)

Fastly POPs: MEX, QRO, DFW, LAX, NYC, MIA, LHR, FRA, AMS, CDG, NRT, SIN, SYD, GRU, SCL, BOG, MAD, MIL...

Cloudflare Centers: DFW, LAX, SJC, SEA, ORD, MCI, IAD, ATL, MIA, YYZ, LHR, FRA, AMS, CDG, ARN, STO...

AWS CloudFront: ATL, BOS, ORD, CMH, DFW, DEN, IAD, LAX, MIA, MSP, JFK, SEA, SJC, AMS, ATH, TXL...

Performance Thresholds

  • Base Test Duration: 8.0 seconds
  • Additional Buffer: 4.0 seconds
  • Maximum File Size: Configurable (default: 100MB)
  • Jitter Samples: 5 measurements (configurable)
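
For context on the jitter figure, the sketch below summarizes a handful of latency samples. Jitter is computed here as the mean absolute difference between consecutive samples, which is one common convention; the project's formula may differ.

python
import statistics

def compute_jitter(latency_samples_ms: list[float]) -> dict:
    """Summarize latency stability from repeated ping-style measurements."""
    deltas = [abs(b - a) for a, b in zip(latency_samples_ms, latency_samples_ms[1:])]
    return {
        "samples": len(latency_samples_ms),
        "avg_latency_ms": round(statistics.mean(latency_samples_ms), 2),
        "jitter_ms": round(statistics.mean(deltas), 2) if deltas else 0.0,
    }

# Example with the default of 5 samples
print(compute_jitter([24.1, 26.8, 23.9, 30.2, 25.0]))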

Troubleshooting

Common Issues

MCP Server Connection

  1. Path Configuration: Ensure absolute path is used in MCP configuration
  2. Directory Permissions: Verify read/execute permissions for the project directory
  3. Python Version: Requires Python 3.12+ with async support
  4. Dependencies: Ensure the mcp[cli] and httpx packages are installed (they are pulled in automatically when installing from PyPI)

Speed Test Issues

  1. GitHub Repository Access: Ensure inventer-dev/speed-test-files is accessible
  2. Firewall/Proxy: Check if corporate firewalls block test endpoints
  3. CDN Routing: Some ISPs may route differently to CDNs
  4. Network Stability: Jitter tests require stable connections
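
When diagnosing the first two issues, a quick reachability probe can help separate network blocks from server-side problems. The raw-file URL below is an assumption about the repository's layout; substitute whatever endpoint is failing.

python
import asyncio
import httpx

# Assumed raw-file URL layout for the default test-file repository
TEST_FILE_URL = "https://raw.githubusercontent.com/inventer-dev/speed-test-files/main/128KB"

async def check_endpoint(url: str = TEST_FILE_URL) -> bool:
    """Return True if the endpoint answers with a non-error status (rules out firewall/proxy blocks)."""
    try:
        async with httpx.AsyncClient(timeout=10.0, follow_redirects=True) as client:
            response = await client.head(url)
            return response.status_code < 400
    except httpx.HTTPError:
        return False

print(asyncio.run(check_endpoint()))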

Performance Considerations

  • File Size Limits: Large files (>50MB) may time out on slow connections
  • Upload Endpoints: If primary endpoint fails, fallback is automatic
  • Geographic Accuracy: POP detection depends on CDN header consistency

Development

Project Structure

mcp-internet-speed-test/
├── mcp_internet_speed_test/  # Main package directory
│   ├── __init__.py      # Package initialization
│   └── main.py          # MCP server implementation
├── README.md           # This documentation
├── Dockerfile          # Container configuration
└── pyproject.toml      # Python project configuration

Key Components

Configuration Constants

  • GITHUB_RAW_URL: Base URL for test files repository
  • UPLOAD_ENDPOINTS: Prioritized list of upload test endpoints
  • SIZE_PROGRESSION: Ordered list of file sizes for incremental testing
  • *_POP_LOCATIONS: Mappings of CDN codes to geographic locations

Core Functions

  • extract_server_info(): Parses HTTP headers to identify CDN providers
  • measure_*(): Individual test functions for different metrics
  • run_complete_test(): Orchestrates comprehensive testing suite

Configuration Customization

You can customize the following in mcp_internet_speed_test/main.py if you clone the repository:

python
# GitHub repository settings
GITHUB_USERNAME = "your-username"
GITHUB_REPO = "your-speed-test-files"
GITHUB_BRANCH = "main"

# Test duration thresholds
BASE_TEST_DURATION = 8.0  # seconds
ADDITIONAL_TEST_DURATION = 4.0  # seconds

# Default endpoints
DEFAULT_UPLOAD_URL = "your-upload-endpoint"
DEFAULT_LATENCY_URL = "your-latency-endpoint"

Contributing

This is an experimental project and contributions are welcome:

  1. Issues: Report bugs or request features
  2. Pull Requests: Submit code improvements
  3. Documentation: Help improve this README
  4. Testing: Test with different network conditions and CDNs

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • MCP Framework maintainers for standardizing AI tool interactions
  • The Model Context Protocol community for documentation and examples

Repository Owner

inventer-dev (Organization)

Repository Details

Language: Python
Default Branch: main
Size: 241 KB
Contributors: 4
License: MIT License
MCP Verified: Nov 12, 2025

Programming Languages

Python: 99.17%
Dockerfile: 0.83%
