WebSearch-MCP
Real-time web search for AI assistants via Model Context Protocol.
A Model Context Protocol (MCP) server implementation that provides a web search capability over stdio transport. This server integrates with a WebSearch Crawler API to retrieve search results.
Table of Contents
- About
- Installation
- Configuration
- Setup & Integration
- Usage
- Troubleshooting
- Development
- Contributing
- License
About
WebSearch-MCP is a Model Context Protocol server that provides web search capabilities to AI assistants that support MCP. It allows AI models like Claude to search the web in real-time, retrieving up-to-date information about any topic.
The server integrates with a Crawler API service that handles the actual web searches, and communicates with AI assistants using the standardized Model Context Protocol.
Installation
Installing via Smithery
To install WebSearch for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @mnhlt/WebSearch-MCP --client claude
Manual Installation
npm install -g websearch-mcp
Or use without installing:
npx websearch-mcp
Configuration
The WebSearch MCP server can be configured using environment variables:
- API_URL: The URL of the WebSearch Crawler API (default: http://localhost:3001)
- MAX_SEARCH_RESULT: Maximum number of search results to return when not specified in the request (default: 5)
Examples:
# Configure API URL
API_URL=https://crawler.example.com npx websearch-mcp
# Configure maximum search results
MAX_SEARCH_RESULT=10 npx websearch-mcp
# Configure both
API_URL=https://crawler.example.com MAX_SEARCH_RESULT=10 npx websearch-mcp
Setup & Integration
Setting up WebSearch-MCP involves two main parts: configuring the crawler service that performs the actual web searches, and integrating the MCP server with your AI client applications.
Setting Up the Crawler Service
The WebSearch MCP server requires a crawler service to perform the actual web searches. You can easily set up the crawler service using Docker Compose.
Prerequisites
- Docker and Docker Compose
Starting the Crawler Service
- Create a file named docker-compose.yml with the following content:
```yaml
version: '3.8'

services:
  crawler:
    image: laituanmanh/websearch-crawler:latest
    container_name: websearch-api
    restart: unless-stopped
    ports:
      - "3001:3001"
    environment:
      - NODE_ENV=production
      - PORT=3001
      - LOG_LEVEL=info
      - FLARESOLVERR_URL=http://flaresolverr:8191/v1
    depends_on:
      - flaresolverr
    volumes:
      - crawler_storage:/app/storage

  flaresolverr:
    image: 21hsmw/flaresolverr:nodriver
    container_name: flaresolverr
    restart: unless-stopped
    environment:
      - LOG_LEVEL=info
      - TZ=UTC

volumes:
  crawler_storage:
```
Workaround for Mac with Apple Silicon: pin the platform of each service, as in the following variant:
```yaml
version: '3.8'

services:
  crawler:
    image: laituanmanh/websearch-crawler:latest
    container_name: websearch-api
    platform: "linux/amd64"
    restart: unless-stopped
    ports:
      - "3001:3001"
    environment:
      - NODE_ENV=production
      - PORT=3001
      - LOG_LEVEL=info
      - FLARESOLVERR_URL=http://flaresolverr:8191/v1
    depends_on:
      - flaresolverr
    volumes:
      - crawler_storage:/app/storage

  flaresolverr:
    image: 21hsmw/flaresolverr:nodriver
    platform: "linux/arm64"
    container_name: flaresolverr
    restart: unless-stopped
    environment:
      - LOG_LEVEL=info
      - TZ=UTC

volumes:
  crawler_storage:
```
- Start the services:
docker-compose up -d
- Verify that the services are running:
docker-compose ps
- Test the crawler API health endpoint:
curl http://localhost:3001/health
Expected response:
```json
{
  "status": "ok",
  "details": {
    "status": "ok",
    "flaresolverr": true,
    "google": true,
    "message": null
  }
}
```
The crawler API will be available at http://localhost:3001.
Testing the Crawler API
You can test the crawler API directly using curl:
```bash
curl -X POST http://localhost:3001/crawl \
  -H "Content-Type: application/json" \
  -d '{
    "query": "typescript best practices",
    "numResults": 2,
    "language": "en",
    "filters": {
      "excludeDomains": ["youtube.com"],
      "resultType": "all"
    }
  }'
```
Custom Configuration
You can customize the crawler service by modifying the environment variables in the docker-compose.yml file:
- PORT: The port on which the crawler API listens (default: 3001)
- LOG_LEVEL: Logging level (options: debug, info, warn, error)
- FLARESOLVERR_URL: URL of the FlareSolverr service (used to bypass Cloudflare protection)
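For example, to get more verbose logs while debugging, you could edit the crawler's environment block in docker-compose.yml (only LOG_LEVEL changes from the file above) and re-apply it with docker-compose up -d:

```yaml
  crawler:
    environment:
      - NODE_ENV=production
      - PORT=3001
      - LOG_LEVEL=debug   # raised from info for troubleshooting
      - FLARESOLVERR_URL=http://flaresolverr:8191/v1
```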
Integrating with MCP Clients
Quick Reference: MCP Configuration
Here's a quick reference for MCP configuration across different clients:
```json
{
  "mcpServers": {
    "websearch": {
      "command": "npx",
      "args": ["websearch-mcp"],
      "environment": {
        "API_URL": "http://localhost:3001",
        "MAX_SEARCH_RESULT": "5" // reduce to save your tokens, increase for wider information gain
      }
    }
  }
}
```
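If your client is Claude Desktop, this JSON typically goes into its claude_desktop_config.json file; the usual locations are listed below, though they can vary by installation:

```
macOS:   ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
```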
Workaround for Windows (needed due to a known issue with running npx directly from some MCP clients):
```json
{
  "mcpServers": {
    "websearch": {
      "command": "cmd",
      "args": ["/c", "npx", "websearch-mcp"],
      "environment": {
        "API_URL": "http://localhost:3001",
        "MAX_SEARCH_RESULT": "1"
      }
    }
  }
}
```
Usage
This package implements an MCP server using stdio transport that exposes a web_search tool with the following parameters:
Parameters
- query (required): The search query to look up
- numResults (optional): Number of results to return (default: 5)
- language (optional): Language code for search results (e.g., 'en')
- region (optional): Region code for search results (e.g., 'us')
- excludeDomains (optional): Domains to exclude from results
- includeDomains (optional): Only include these domains in results
- excludeTerms (optional): Terms to exclude from results
- resultType (optional): Type of results to return ('all', 'news', or 'blogs')
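Most MCP clients build the protocol messages for you, but for reference, a raw tools/call request to this server would look roughly like the following (the argument values are placeholders, chosen to match the example response below):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "web_search",
    "arguments": {
      "query": "machine learning trends",
      "numResults": 2,
      "language": "en"
    }
  }
}
```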
Example Search Response
Here's an example of a search response:
```json
{
  "query": "machine learning trends",
  "results": [
    {
      "title": "Top Machine Learning Trends in 2025",
      "snippet": "The key machine learning trends for 2025 include multimodal AI, generative models, and quantum machine learning applications in enterprise...",
      "url": "https://example.com/machine-learning-trends-2025",
      "siteName": "AI Research Today",
      "byline": "Dr. Jane Smith"
    },
    {
      "title": "The Evolution of Machine Learning: 2020-2025",
      "snippet": "Over the past five years, machine learning has evolved from primarily supervised learning approaches to more sophisticated self-supervised and reinforcement learning paradigms...",
      "url": "https://example.com/ml-evolution",
      "siteName": "Tech Insights",
      "byline": "John Doe"
    }
  ]
}
```
Testing Locally
To test the WebSearch MCP server locally, you can use the included test client:
npm run test-client
This will start the MCP server and a simple command-line interface that allows you to enter search queries and see the results.
You can also configure the API_URL for the test client:
API_URL=https://crawler.example.com npm run test-client
As a Library
You can use this package programmatically:
```typescript
import { createMCPClient } from '@modelcontextprotocol/sdk';

// Create an MCP client
const client = createMCPClient({
  transport: { type: 'subprocess', command: 'npx websearch-mcp' }
});

// Execute a web search
const response = await client.request({
  method: 'call_tool',
  params: {
    name: 'web_search',
    arguments: {
      query: 'your search query',
      numResults: 5,
      language: 'en'
    }
  }
});

console.log(response.result);
```
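Depending on the SDK version you have installed, the client API may differ from the snippet above. As a rough sketch (the import paths, class names, and options here are assumptions based on recent versions of the TypeScript MCP SDK, not something this package provides), spawning the server over stdio and calling the tool could look like this:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the websearch-mcp server as a subprocess and talk to it over stdio
const transport = new StdioClientTransport({
  command: "npx",
  args: ["websearch-mcp"],
  env: { API_URL: "http://localhost:3001" } // assumed local crawler API
});

const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

// Call the web_search tool exposed by the server
const result = await client.callTool({
  name: "web_search",
  arguments: { query: "typescript best practices", numResults: 3 }
});

console.log(result.content);
await client.close();
```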
Troubleshooting
Crawler Service Issues
- API Unreachable: Ensure that the crawler service is running and accessible at the configured API_URL.
- Search Results Not Available: Check the logs of the crawler service to see if there are any errors:

```bash
docker-compose logs crawler
```

- FlareSolverr Issues: Some websites use Cloudflare protection. If you see errors related to this, check whether FlareSolverr is working:

```bash
docker-compose logs flaresolverr
```
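If you want to probe FlareSolverr directly, you will first need to publish its port to the host (for example by adding a ports entry of "8191:8191" to the flaresolverr service, which the compose file above does not do by default) and then send it a test request; the target URL below is just a placeholder:

```bash
curl -X POST http://localhost:8191/v1 \
  -H "Content-Type: application/json" \
  -d '{"cmd": "request.get", "url": "https://example.com", "maxTimeout": 60000}'
```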
MCP Server Issues
- Import Errors: Ensure you have the latest version of the MCP SDK:

```bash
npm install -g @modelcontextprotocol/sdk@latest
```

- Connection Issues: Make sure the stdio transport is properly configured for your client.
Development
To work on this project:
- Clone the repository
- Install dependencies: npm install
- Build the project: npm run build
- Run in development mode: npm run dev
The server expects a WebSearch Crawler API as defined in the included swagger.json file. Make sure the API is running at the configured API_URL.
Project Structure
- .gitignore: Specifies files that Git should ignore (node_modules, dist, logs, etc.)
- .npmignore: Specifies files that shouldn't be included when publishing to npm
- package.json: Project metadata and dependencies
- src/: Source TypeScript files
- dist/: Compiled JavaScript files (generated when building)
Publishing to npm
To publish this package to npm:
- Make sure you have an npm account and are logged in (npm login)
- Update the version in package.json (npm version patch|minor|major)
- Run npm publish
The .npmignore file ensures that only the necessary files are included in the published package:
- The compiled code in dist/
- README.md and LICENSE files
- package.json
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
ISC
Related MCPs
Discover similar Model Context Protocol servers
MCP Web Research Server
Bring real-time web research and Google search capabilities into Claude using MCP.
MCP Web Research Server acts as a Model Context Protocol (MCP) server, seamlessly integrating web research functionalities with Claude Desktop. It enables Google search, webpage content extraction, research session tracking, and screenshot capture, all accessible directly from Claude. The server supports interactive and guided research sessions, exposing session data and screenshots as MCP resources for enhanced context-aware AI interactions.
- ⭐ 284
- MCP
- mzxrai/mcp-webresearch
tavily-search MCP server
A search server that integrates Tavily API with Model Context Protocol tools.
tavily-search MCP server provides an MCP-compliant server to perform search queries using the Tavily API. It returns search results in text format, including AI responses, URLs, and result titles. The server is designed for easy integration with clients like Claude Desktop or Cursor and supports both local and Docker-based deployment. It facilitates AI workflows by offering search functionality as part of a standardized protocol interface.
- ⭐ 44
- MCP
- Tomatio13/mcp-server-tavily
Brave Search MCP Server
MCP integration for web, image, news, video, and local search via Brave Search API.
Implements a Model Context Protocol server that connects with the Brave Search API, enabling AI systems to perform comprehensive web, image, news, video, and local points of interest searches. Provides standardized MCP tools for various search types, each supporting advanced filtering parameters. Designed for easy integration in context-aware model interfaces such as Claude Code.
- ⭐ 86
- MCP
- mikechao/brave-search-mcp
Dappier MCP Server
Real-time web search and premium data access for AI agents via Model Context Protocol.
Dappier MCP Server enables fast, real-time web search and access to premium data sources, including news, financial markets, sports, and weather, for AI agents using the Model Context Protocol (MCP). It integrates seamlessly with tools like Claude Desktop and Cursor, allowing users to enhance their AI workflows with up-to-date, trusted information. Simple installation and configuration are provided for multiple platforms, leveraging API keys for secure access. The solution supports deployment via Smithery and direct installation with 'uv', facilitating rapid setup for developers.
- ⭐ 35
- MCP
- DappierAI/dappier-mcp
Search1API MCP Server
MCP server enabling search and crawl functions via Search1API.
Search1API MCP Server is an implementation of the Model Context Protocol (MCP) that provides search and crawl services using the Search1API. It allows seamless integration with MCP-compatible clients, including LibreChat and various developer tools, by managing API key configuration through multiple methods. Built with Node.js, it supports both standalone operation and Docker-based deployment for integration in broader AI toolchains.
- ⭐ 157
- MCP
- fatwang2/search1api-mcp
mcp-server-webcrawl
Advanced search and retrieval for web crawler data via MCP.
mcp-server-webcrawl provides an AI-oriented server that enables advanced filtering, analysis, and search over data from various web crawlers. Designed for seamless integration with large language models, it supports boolean search, filtering by resource types and HTTP status, and is compatible with popular crawling formats. It facilitates AI clients, such as Claude Desktop, with prompt routines and customizable workflows, making it easy to manage, query, and analyze archived web content. The tool supports integration with multiple crawler outputs and offers templates for automated routines.
- ⭐ 32
- MCP
- pragmar/mcp-server-webcrawl