Open-WebSearch MCP Server

Multi-engine web search MCP server without API keys

463 Stars · 78 Forks · 463 Watchers · 4 Issues
Open-WebSearch MCP Server is a Model Context Protocol (MCP) compliant server offering web search functionalities using multiple search engines without the need for API keys or authentication. It provides structured search results with titles, URLs, and descriptions, and enables fetching of article content from supported sources such as CSDN and GitHub. The server supports extensive configuration through environment variables, including proxy settings and search engine customization. Designed for flexibility, it operates in both HTTP and stdio modes, making it suitable for integration into larger systems.

Key Features

Multi-engine web search (bing, duckduckgo, brave, baidu, csdn, juejin, exa)
No API keys or authentication required
Configurable HTTP proxy support
Structured results with titles, URLs, and descriptions
Fetch individual article content from sources such as CSDN and GitHub
Configurable server mode (HTTP, stdio, or both)
CORS configuration support
Customizable default search engine and allowed engines
Customizable tool names for MCP actions
Cross-platform installation and quick start via NPX

Use Cases

Web search agent integration for LLMs or chatbots
Retrieving and structuring search results for context-aware applications
Automated article content extraction for downstream processing
Providing up-to-date information for AI applications without API dependency
Enabling search functionality in restricted network environments using proxies
Customizing search tools for workflow automation
Rapid deployment of search agents for prototyping or production
Supporting research or analysis workflows needing multi-source search aggregation
Building search user interfaces with backend search capabilities
Facilitating backend web search services for custom applications

README

Open-WebSearch MCP Server

A Model Context Protocol (MCP) server based on multi-engine search results, supporting free web search without API keys.

Features

  • Web search using multi-engine results
    • bing
    • baidu
    • linux.do (temporarily unsupported)
    • csdn
    • duckduckgo
    • exa
    • brave
    • juejin
  • HTTP proxy configuration support for accessing restricted resources
  • No API keys or authentication required
  • Returns structured results with titles, URLs, and descriptions
  • Configurable number of results per search
  • Customizable default search engine
  • Support for fetching individual article content
    • csdn
    • github (README files)

TODO

  • Support for Google and other search engines (Bing, DuckDuckGo, Exa, and Brave are already supported)
  • Support for more blogs, forums, and social platforms
  • Optimize article content extraction, add support for more sites
  • Support for GitHub README fetching (already supported)

Installation Guide

NPX Quick Start (Recommended)

The fastest way to get started:

```bash
# Basic usage
npx open-websearch@latest

# With environment variables (Linux/macOS)
DEFAULT_SEARCH_ENGINE=duckduckgo ENABLE_CORS=true npx open-websearch@latest

# Windows PowerShell
$env:DEFAULT_SEARCH_ENGINE="duckduckgo"; $env:ENABLE_CORS="true"; npx open-websearch@latest

# Windows CMD
set MODE=stdio && set DEFAULT_SEARCH_ENGINE=duckduckgo && npx open-websearch@latest

# Cross-platform (requires cross-env; used for local development)
npm install -g open-websearch
npx cross-env DEFAULT_SEARCH_ENGINE=duckduckgo ENABLE_CORS=true open-websearch
```

Environment Variables:

| Variable | Default | Options | Description |
|---|---|---|---|
| `ENABLE_CORS` | `false` | `true`, `false` | Enable CORS |
| `CORS_ORIGIN` | `*` | Any valid origin | CORS origin configuration |
| `DEFAULT_SEARCH_ENGINE` | `bing` | `bing`, `duckduckgo`, `exa`, `brave`, `baidu`, `csdn`, `juejin` | Default search engine |
| `USE_PROXY` | `false` | `true`, `false` | Enable HTTP proxy |
| `PROXY_URL` | `http://127.0.0.1:7890` | Any valid URL | Proxy server URL |
| `MODE` | `both` | `both`, `http`, `stdio` | Server mode: HTTP+STDIO, HTTP only, or STDIO only |
| `PORT` | `3000` | 1-65535 | Server port |
| `ALLOWED_SEARCH_ENGINES` | empty (all available) | Comma-separated engine names | Limit which search engines can be used; if the default engine is not in this list, the first allowed engine becomes the default |
| `MCP_TOOL_SEARCH_NAME` | `search` | Valid MCP tool name | Custom name for the search tool |
| `MCP_TOOL_FETCH_LINUXDO_NAME` | `fetchLinuxDoArticle` | Valid MCP tool name | Custom name for the Linux.do article fetch tool |
| `MCP_TOOL_FETCH_CSDN_NAME` | `fetchCsdnArticle` | Valid MCP tool name | Custom name for the CSDN article fetch tool |
| `MCP_TOOL_FETCH_GITHUB_NAME` | `fetchGithubReadme` | Valid MCP tool name | Custom name for the GitHub README fetch tool |
| `MCP_TOOL_FETCH_JUEJIN_NAME` | `fetchJuejinArticle` | Valid MCP tool name | Custom name for the Juejin article fetch tool |

Common configurations:

```bash
# Enable proxy for restricted regions
USE_PROXY=true PROXY_URL=http://127.0.0.1:7890 npx open-websearch@latest

# Full configuration
DEFAULT_SEARCH_ENGINE=duckduckgo ENABLE_CORS=true USE_PROXY=true PROXY_URL=http://127.0.0.1:7890 PORT=8080 npx open-websearch@latest
```

Local Installation

  1. Clone or download this repository
  2. Install dependencies:

     ```bash
     npm install
     ```

  3. Build the server:

     ```bash
     npm run build
     ```

  4. Add the server to your MCP configuration:

Cherry Studio:

```json
{
  "mcpServers": {
    "web-search": {
      "name": "Web Search MCP",
      "type": "streamableHttp",
      "description": "Multi-engine web search with article fetching",
      "isActive": true,
      "baseUrl": "http://localhost:3000/mcp"
    }
  }
}
```

VSCode (Claude Dev Extension):

```json
{
  "mcpServers": {
    "web-search": {
      "transport": {
        "type": "streamableHttp",
        "url": "http://localhost:3000/mcp"
      }
    },
    "web-search-sse": {
      "transport": {
        "type": "sse",
        "url": "http://localhost:3000/sse"
      }
    }
  }
}
```

Claude Desktop:

```json
{
  "mcpServers": {
    "web-search": {
      "transport": {
        "type": "streamableHttp",
        "url": "http://localhost:3000/mcp"
      }
    },
    "web-search-sse": {
      "transport": {
        "type": "sse",
        "url": "http://localhost:3000/sse"
      }
    }
  }
}
```

NPX Command Line Configuration:

```json
{
  "mcpServers": {
    "web-search": {
      "args": [
        "open-websearch@latest"
      ],
      "command": "npx",
      "env": {
        "MODE": "stdio",
        "DEFAULT_SEARCH_ENGINE": "duckduckgo",
        "ALLOWED_SEARCH_ENGINES": "duckduckgo,bing,exa"
      }
    }
  }
}
```

Local STDIO Configuration for Cherry Studio (Windows):

```json
{
  "mcpServers": {
    "open-websearch-local": {
      "command": "node",
      "args": ["C:/path/to/your/project/build/index.js"],
      "env": {
        "MODE": "stdio",
        "DEFAULT_SEARCH_ENGINE": "duckduckgo",
        "ALLOWED_SEARCH_ENGINES": "duckduckgo,bing,exa"
      }
    }
  }
}
```

Docker Deployment

Quick deployment using Docker Compose:

```bash
docker-compose up -d
```

Or use Docker directly:

```bash
docker run -d --name web-search -p 3000:3000 -e ENABLE_CORS=true -e CORS_ORIGIN=* ghcr.io/aas-ee/open-web-search:latest
```
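If you prefer Compose without the repository checkout, a minimal docker-compose.yml equivalent to the `docker run` command above might look like the following sketch (image name, port, and variables are taken from this README; the service name is an assumption):

```yaml
services:
  web-search:                               # service name is an assumption
    image: ghcr.io/aas-ee/open-web-search:latest
    ports:
      - "3000:3000"                         # matches the default PORT
    environment:
      ENABLE_CORS: "true"
      CORS_ORIGIN: "*"
```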

Environment variable configuration:

| Variable | Default | Options | Description |
|---|---|---|---|
| `ENABLE_CORS` | `false` | `true`, `false` | Enable CORS |
| `CORS_ORIGIN` | `*` | Any valid origin | CORS origin configuration |
| `DEFAULT_SEARCH_ENGINE` | `bing` | `bing`, `duckduckgo`, `exa`, `brave` | Default search engine |
| `USE_PROXY` | `false` | `true`, `false` | Enable HTTP proxy |
| `PROXY_URL` | `http://127.0.0.1:7890` | Any valid URL | Proxy server URL |
| `PORT` | `3000` | 1-65535 | Server port |

Then configure in your MCP client:

```json
{
  "mcpServers": {
    "web-search": {
      "name": "Web Search MCP",
      "type": "streamableHttp",
      "description": "Multi-engine web search with article fetching",
      "isActive": true,
      "baseUrl": "http://localhost:3000/mcp"
    },
    "web-search-sse": {
      "transport": {
        "name": "Web Search MCP",
        "type": "sse",
        "description": "Multi-engine web search with article fetching",
        "isActive": true,
        "url": "http://localhost:3000/sse"
      }
    }
  }
}
```

Usage Guide

The server provides five tools: search, fetchLinuxDoArticle, fetchCsdnArticle, fetchGithubReadme, and fetchJuejinArticle.

search Tool Usage

```typescript
{
  "query": string,        // Search query
  "limit": number,        // Optional: number of results to return (default: 10)
  "engines": string[]     // Optional: engines to use (bing, baidu, linuxdo, csdn, duckduckgo, exa, brave, juejin); default: bing
}
```

Usage example:

```typescript
use_mcp_tool({
  server_name: "web-search",
  tool_name: "search",
  arguments: {
    query: "search content",
    limit: 3,  // Optional parameter
    engines: ["bing", "csdn", "duckduckgo", "exa", "brave", "juejin"] // Optional parameter, supports multi-engine combined search
  }
})
```

Response example:

```json
[
  {
    "title": "Example Search Result",
    "url": "https://example.com",
    "description": "Description text of the search result...",
    "source": "Source",
    "engine": "Engine used"
  }
]
```

fetchCsdnArticle Tool Usage

Used to fetch complete content of CSDN blog articles.

```typescript
{
  "url": string    // URL from CSDN search results using the search tool
}
```

Usage example:

```typescript
use_mcp_tool({
  server_name: "web-search",
  tool_name: "fetchCsdnArticle",
  arguments: {
    url: "https://blog.csdn.net/xxx/article/details/xxx"
  }
})
```

Response example:

```json
[
  {
    "content": "Example search result"
  }
]
```

fetchLinuxDoArticle Tool Usage

Used to fetch complete content of Linux.do forum articles.

```typescript
{
  "url": string    // URL from linuxdo search results using the search tool
}
```

Usage example:

```typescript
use_mcp_tool({
  server_name: "web-search",
  tool_name: "fetchLinuxDoArticle",
  arguments: {
    url: "https://xxxx.json"
  }
})
```

Response example:

```json
[
  {
    "content": "Example search result"
  }
]
```

fetchGithubReadme Tool Usage

Used to fetch README content from GitHub repositories.

```typescript
{
  "url": string    // GitHub repository URL (supports HTTPS and SSH formats)
}
```

Usage example:

```typescript
use_mcp_tool({
  server_name: "web-search",
  tool_name: "fetchGithubReadme",
  arguments: {
    url: "https://github.com/Aas-ee/open-webSearch"
  }
})
```

Supported URL formats:

  • HTTPS: https://github.com/owner/repo
  • HTTPS with .git: https://github.com/owner/repo.git
  • SSH: git@github.com:owner/repo.git
  • URLs with parameters: https://github.com/owner/repo?tab=readme
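As a rough illustration of how these formats can be normalized, the sketch below reduces each supported form to an owner/repo pair. The helper name `parseGithubUrl` is an assumption for illustration only, not part of the server's code:

```typescript
// Hypothetical helper: extracts { owner, repo } from the supported
// GitHub URL formats (HTTPS, HTTPS with .git, SSH, and query strings).
function parseGithubUrl(url: string): { owner: string; repo: string } | null {
  // HTTPS form, with optional ".git" suffix and optional query/fragment
  const https = url.match(
    /^https:\/\/github\.com\/([^/?#]+)\/([^/?#]+?)(?:\.git)?(?:[?#].*)?$/
  );
  // SSH form: git@github.com:owner/repo.git
  const ssh = url.match(/^git@github\.com:([^/]+)\/(.+?)(?:\.git)?$/);
  const m = https ?? ssh;
  return m ? { owner: m[1], repo: m[2] } : null;
}
```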

Response example:

```json
[
  {
    "content": "<div align=\"center\">\n\n# Open-WebSearch MCP Server..."
  }
]
```

fetchJuejinArticle Tool Usage

Used to fetch complete content of Juejin articles.

```typescript
{
  "url": string    // Juejin article URL from search results
}
```

Usage example:

```typescript
use_mcp_tool({
  server_name: "web-search",
  tool_name: "fetchJuejinArticle",
  arguments: {
    url: "https://juejin.cn/post/7520959840199360563"
  }
})
```

Supported URL format:

  • https://juejin.cn/post/{article_id}

Response example:

```json
[
  {
    "content": "🚀 开源 AI 联网搜索工具:Open-WebSearch MCP 全新升级,支持多引擎 + 流式响应..."
  }
]
```

Usage Limitations

Since this tool works by scraping multi-engine search results, please note the following important limitations:

  1. Rate Limiting:

    • Too many searches in a short time may cause the engines in use to temporarily block requests
    • Recommendations:
      • Maintain reasonable search frequency
      • Use the limit parameter judiciously
      • Add delays between searches when necessary
  2. Result Accuracy:

    • Results depend on each engine's HTML structure and may break when an engine updates its pages
    • Some results may lack metadata like descriptions
    • Complex search operators may not work as expected
  3. Legal Terms:

    • This tool is for personal use only
    • Please comply with the terms of service of corresponding engines
    • Implement appropriate rate limiting based on your actual use case
  4. Search Engine Configuration:

    • Default search engine can be set via the DEFAULT_SEARCH_ENGINE environment variable
    • Supported engines: bing, duckduckgo, exa, brave, baidu, csdn, juejin
    • The default engine is used when searching specific websites
  5. Proxy Configuration:

    • HTTP proxy can be configured when certain search engines are unavailable in specific regions
    • Enable proxy with environment variable USE_PROXY=true
    • Configure proxy server address with PROXY_URL

Contributing

Issue reports and feature improvement suggestions are welcome!

Contributor Guide

If you want to fork this repository and publish your own Docker image, you need to make the following configurations:

GitHub Secrets Configuration

To enable automatic Docker image building and publishing, please add the following secrets in your GitHub repository settings (Settings → Secrets and variables → Actions):

Required Secrets:

  • GITHUB_TOKEN: Automatically provided by GitHub (no setup needed)

Optional Secrets (for Alibaba Cloud ACR):

  • ACR_REGISTRY: Your Alibaba Cloud Container Registry URL (e.g., registry.cn-hangzhou.aliyuncs.com)
  • ACR_USERNAME: Your Alibaba Cloud ACR username
  • ACR_PASSWORD: Your Alibaba Cloud ACR password
  • ACR_IMAGE_NAME: Your image name in ACR (e.g., your-namespace/open-web-search)

CI/CD Workflow

The repository includes a GitHub Actions workflow (.github/workflows/docker.yml) that automatically:

  1. Trigger Conditions:

    • Push to main branch
    • Push version tags (v*)
    • Manual workflow trigger
  2. Build and Push to:

    • GitHub Container Registry (ghcr.io) - always enabled
    • Alibaba Cloud Container Registry - only enabled when ACR secrets are configured
  3. Image Tags:

    • ghcr.io/your-username/open-web-search:latest
    • your-acr-address/your-image-name:latest (if ACR is configured)

Fork and Publish Steps:

  1. Fork the repository to your GitHub account
  2. Configure secrets (if you need ACR publishing):
    • Go to Settings → Secrets and variables → Actions in your forked repository
    • Add the ACR-related secrets listed above
  3. Push changes to the main branch or create version tags
  4. GitHub Actions will automatically build and push your Docker image
  5. Use your image by updating the Docker command:

     ```bash
     docker run -d --name web-search -p 3000:3000 -e ENABLE_CORS=true -e CORS_ORIGIN=* ghcr.io/your-username/open-web-search:latest
     ```

Notes:

  • If you don't configure ACR secrets, the workflow will only publish to GitHub Container Registry
  • Make sure your GitHub repository has Actions enabled
  • The workflow will use your GitHub username (converted to lowercase) as the GHCR image name

Star History

If you find this project helpful, please consider giving it a ⭐ Star!

Repository Owner

Aas-ee (User)

Repository Details

Language: TypeScript
Default Branch: main
Size: 140 KB
Contributors: 4
License: Apache License 2.0
MCP Verified: Nov 12, 2025

Programming Languages

TypeScript: 93.71%
JavaScript: 5.6%
Dockerfile: 0.69%


Related MCPs

Discover similar Model Context Protocol servers

  • Bing Search MCP Server

    MCP server enabling Bing-powered web, news, and image search for AI assistants.

    Bing Search MCP Server provides a Model Context Protocol (MCP) compliant interface for integrating Microsoft Bing Search API capabilities with AI assistants. The server allows AI clients to perform web, news, and image searches programmatically, with features like rate limiting and comprehensive error handling. Designed for easy deployment, it supports integration with clients such as Claude Desktop and Cursor for enhanced search access. Secure configuration via environment variables enables safe use of API keys.

    • 65
    • MCP
    • leehanchung/bing-search-mcp
  • SearXNG MCP Server

    MCP-compliant server integrating the SearXNG API for advanced web search capabilities

    SearXNG MCP Server implements the Model Context Protocol and integrates the SearXNG API to provide extensive web search functionalities. It features intelligent caching, advanced content extraction, and multiple configurable search parameters such as language, time range, and safe search levels. The server exposes tools for both web searching and URL content reading, supporting detailed output customization through input parameters. Designed for seamless MCP deployments, it supports Docker and NPX-based installation with rich configuration options.

    • 321
    • MCP
    • ihor-sokoliuk/mcp-searxng
  • Search1API MCP Server

    MCP server enabling search and crawl functions via Search1API.

    Search1API MCP Server is an implementation of the Model Context Protocol (MCP) that provides search and crawl services using the Search1API. It allows seamless integration with MCP-compatible clients, including LibreChat and various developer tools, by managing API key configuration through multiple methods. Built with Node.js, it supports both standalone operation and Docker-based deployment for integration in broader AI toolchains.

    • 157
    • MCP
    • fatwang2/search1api-mcp
  • Parallel Search MCP

    Integrate Parallel Search API with any MCP-compatible LLM client.

    Parallel Search MCP provides an interface to use the Parallel Search API seamlessly from any Model Context Protocol (MCP)-compatible language model client. It serves as a proxy server that connects requests to the search API, adding the necessary support for authentication and MCP compatibility. The tool is designed for everyday web search tasks and facilitates easy web integration for LLMs via standardized MCP infrastructure.

    • 3
    • MCP
    • parallel-web/search-mcp
  • OpenAI WebSearch MCP Server

    Intelligent web search with OpenAI reasoning model support, fully MCP-compatible.

    OpenAI WebSearch MCP Server provides advanced web search functionality integrated with OpenAI's latest reasoning models, such as gpt-5 and o3-series. It features full compatibility with the Model Context Protocol, enabling easy integration into AI assistants that require up-to-date information and contextual awareness. Built with flexible configuration options, smart reasoning effort controls, and support for location-based search customization. Suitable for environments such as Claude Desktop, Cursor, and automated research workflows.

    • 75
    • MCP
    • ConechoAI/openai-websearch-mcp
  • kagi-server MCP Server

    TypeScript-based MCP server for Kagi Search API integration.

    Kagi-server MCP Server provides a TypeScript implementation of a Model Context Protocol (MCP) server that integrates with the Kagi Search API. It enables standardized tool access for performing web searches and plans for features like summarization, FastGPT integration, and enriched news results. Compatible with Claude Desktop and Smithery installations, it supports secure environment configuration and offers developer utilities such as automatic build and debugging via MCP Inspector.

    • 40
    • MCP
    • ac3xx/mcp-servers-kagi