mcp-server-webcrawl

Advanced search and retrieval for web crawler data via MCP.

32 Stars · 7 Forks · 32 Watchers · 0 Issues
mcp-server-webcrawl provides an AI-oriented server that enables advanced filtering, analysis, and search over data from various web crawlers. Designed for seamless integration with large language models, it supports boolean search, filtering by resource type and HTTP status, and is compatible with popular crawling formats. It equips AI clients such as Claude Desktop with prompt routines and customizable workflows, making it easy to manage, query, and analyze archived web content.

Key Features

Claude Desktop-ready integration
Multi-crawler compatibility
Boolean and fulltext search support
Resource filtering by type and HTTP status
Support for Markdown and search snippets
Prompt routine templates for automated queries
Integration with ArchiveBox, HTTrack, WARC, wget, and more
Command-line installation and setup
Customizable workflow and procedural logic routines
AI-driven web content analysis

Use Cases

Technical SEO auditing of websites
Automated 404 error analysis and reporting
Building and maintaining website knowledgebases
Filtering large web archives for relevant content
Integrating site data into AI-powered chat or desktop tools
Site content analysis and categorization
Granular content discovery for research or intelligence tasks
Routine-driven report generation from crawled data
Supporting data ingestion for LLMs from diverse crawler outputs
Automating internal site resource and status monitoring

README

mcp-server-webcrawl

Advanced search and retrieval for web crawler data. With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously. The server includes a fulltext search interface with boolean support, and resource filtering by type, HTTP status, and more.

mcp-server-webcrawl provides the LLM a complete menu with which to search, and works with a variety of web crawlers:

| Crawler/Format | Description | Platforms | Setup Guide |
|---|---|---|---|
| ArchiveBox | Web archiving tool | macOS/Linux | Setup Guide |
| HTTrack | GUI mirroring tool | macOS/Windows/Linux | Setup Guide |
| InterroBot | GUI crawler and analyzer | macOS/Windows/Linux | Setup Guide |
| Katana | CLI security-focused crawler | macOS/Windows/Linux | Setup Guide |
| SiteOne | GUI crawler and analyzer | macOS/Windows/Linux | Setup Guide |
| WARC | Standard web archive format | varies by client | Setup Guide |
| wget | CLI website mirroring tool | macOS/Linux | Setup Guide |
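
Any of these outputs can serve as a datasource. As a concrete example, the sketch below shells out to wget to produce a standard mirror under a placeholder directory. This is a minimal illustration, not the setup guide itself; the wget setup guide defines the exact layout the server expects.

```python
# Minimal sketch: produce a wget mirror usable as a --datasrc for the wget
# adapter. The flags are standard wget mirroring options; the archive root
# below is a placeholder.
import subprocess
from pathlib import Path

archives = Path("/path/to/archives")  # placeholder datasrc root
archives.mkdir(parents=True, exist_ok=True)

subprocess.run(
    [
        "wget",
        "--mirror",            # recursive download with timestamping
        "--adjust-extension",  # save pages with .html extensions
        "--convert-links",     # rewrite links for local browsing
        "--page-requisites",   # fetch CSS, JS, and images too
        "https://example.com/",
    ],
    cwd=archives,
    check=True,
)
```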

mcp-server-webcrawl is free and open source, and requires Claude Desktop and Python (>=3.10). It is installed on the command line via pip:

```bash
pip install mcp-server-webcrawl
```

For step-by-step MCP server setup, refer to the Setup Guides.
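
Once installed, the server is normally launched for you by the MCP client. If you want to exercise it outside of Claude Desktop, the sketch below drives it over stdio using the official mcp Python SDK (pip install mcp). Treat it as a minimal sketch: the --crawler/--datasrc flags mirror the interactive-mode example later in this README, and the archive path is a placeholder.

```python
# Minimal sketch: connect to mcp-server-webcrawl over stdio with the
# official `mcp` Python SDK, the same transport an MCP client would use.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="mcp-server-webcrawl",
    args=["--crawler", "wget", "--datasrc", "/path/to/archives"],  # placeholder
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the search tools the server exposes to the LLM.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```

Listing tools first is deliberate: it reports the exact tool names and argument schemas of your installed version, which the search sketches further down treat as assumptions.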

Features

  • Claude Desktop ready
  • Multi-crawler compatible
  • Filter by type, status, and more
  • Boolean search support
  • Support for Markdown and snippets
  • Roll your own website knowledgebase

Prompt Routines

mcp-server-webcrawl provides the toolkit necessary to search web crawl data freestyle, figuring it out as you go, reacting to each query. This is what it was designed for.

It is also capable of running routines (as prompts). You can write these yourself, or use the ones provided. These prompts are copy-and-paste, used as raw Markdown. They are enabled by the advanced search provided to the LLM; queries and logic can be embedded in a procedural set of instructions, or even an input loop, as is the case with the Gopher interface.

| Prompt | Download | Category | Description |
|---|---|---|---|
| 🔍 SEO Audit | auditseo.md | audit | Technical SEO (search engine optimization) analysis. Covers the basics, with options to dive deeper. |
| 🔗 404 Audit | audit404.md | audit | Broken link detection and pattern analysis. Not only finds issues, but suggests fixes. |
| ⚡ Performance Audit | auditperf.md | audit | Website speed and optimization analysis. Real talk. |
| 📁 File Audit | auditfiles.md | audit | File organization and asset analysis. Discover the composition of your website. |
| 🌐 Gopher Interface | gopher.md | interface | An old-fashioned search interface inspired by the Gopher clients of yesteryear. |
| ⚙️ Search Test | testsearch.md | self-test | A battery of tests to check for Boolean logical inconsistencies in the search query parser and subsequent FTS5 conversion. |

If you want to shortcut the site selection (one less query), paste the Markdown and, in the same request, type "run pasted for [site name or URL]." It will figure it out. When pasted without additional context, you should be prompted to select from a list of crawled sites.

Boolean Search Syntax

The query engine supports field-specific (field: value) searches and complex boolean expressions. Fulltext is supported as a combination of the url, content, and headers fields.

While the API interface is designed to be consumed by the LLM directly, it can be helpful to familiarize yourself with the search syntax. Searches generated by the LLM are inspectable, but generally collapsed in the UI. If you need to see the query, expand the MCP collapsible.

Example Queries

| Query | Field | Description |
|---|---|---|
| privacy | fulltext | single keyword match |
| "privacy policy" | fulltext | match exact phrase |
| boundar* | fulltext | wildcard matches results starting with boundar (boundary, boundaries) |
| id: 12345 | id | matches a specific resource by ID |
| url: example.com/somedir | url | matches results with URL containing example.com/somedir |
| type: html | type | matches HTML pages only |
| status: 200 | status | matches a specific HTTP status code (equal to 200) |
| status: >=400 | status | matches HTTP status codes greater than or equal to 400 |
| content: h1 | content | matches within the HTTP response body (often, but not always, HTML) |
| headers: text/xml | headers | matches within HTTP response headers |
| privacy AND policy | fulltext | matches both |
| privacy OR policy | fulltext | matches either |
| policy NOT privacy | fulltext | matches policies not containing privacy |
| (login OR signin) AND form | fulltext | matches login or signin together with form |
| type: html AND status: 200 | type, status | matches only HTML pages with HTTP success |
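
Under the hood, these operators are converted to SQLite FTS5 queries (the Search Test routine above exercises exactly that conversion). To build intuition for AND/OR/NOT and grouping outside the server, here is a self-contained sketch against a throwaway FTS5 table; the schema and rows are invented for the demo and are not the server's internal schema.

```python
# Demonstrates the boolean semantics from the table above using SQLite
# FTS5, the query engine the server's parser targets. Schema is invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE resources USING fts5(url, content)")
db.executemany(
    "INSERT INTO resources VALUES (?, ?)",
    [
        ("https://example.com/privacy", "privacy policy for example.com"),
        ("https://example.com/terms", "terms of service policy"),
        ("https://example.com/login", "login form and signin form"),
    ],
)

for query in ("privacy AND policy", "policy NOT privacy", "(login OR signin) AND form"):
    rows = db.execute(
        "SELECT url FROM resources WHERE resources MATCH ?", (query,)
    ).fetchall()
    print(query, "->", [url for (url,) in rows])  # one matching URL each
```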

Field Search Definitions

Field search provides search precision, allowing you to specify which columns of the search index to filter. Rather than searching the entire content, you can restrict your query to specific attributes like URLs, headers, or content body. This approach improves efficiency when looking for specific attributes or patterns within crawl data.

| Field | Description |
|---|---|
| id | database ID |
| url | resource URL |
| type | enumerated list of types (see types table) |
| size | file size in bytes |
| status | HTTP response codes |
| headers | HTTP response headers |
| content | HTTP body: HTML, CSS, JS, and more |

Field Content

A subset of fields can be independently requested with results, while core fields are always on. The headers and content fields can consume tokens quickly; use them judiciously, or use extras to crunch more results into the context window. Fields are a top-level argument, independent of any field searching taking place in the query.

| Field | Availability |
|---|---|
| id | always available |
| url | always available |
| type | always available |
| status | always available |
| created | on request |
| modified | on request |
| size | on request |
| headers | on request |
| content | on request |
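
As a partly hypothetical illustration, the function below continues a session like the one in the installation sketch and opts into two on-request fields. The tool name webcrawl_search and its argument names are assumptions here, not documented API; confirm them against the list_tools() output of your installed version.

```python
# Hypothetical sketch: opt into on-request fields alongside a field search.
# "webcrawl_search" and the argument names are assumptions -- verify them
# with list_tools() before relying on this.
from mcp import ClientSession

async def html_with_sizes(session: ClientSession) -> None:
    result = await session.call_tool(
        "webcrawl_search",  # assumed tool name
        {
            "query": "type: html AND status: 200",
            "fields": ["size", "headers"],  # id/url/type/status always return
        },
    )
    for block in result.content:  # MCP content blocks, usually text
        print(getattr(block, "text", block))
```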

Content Types

Crawls contain resource types beyond HTML pages. The type: field search allows filtering by broad content type groups, particularly useful when filtering images without complex extension queries. For example, you might search for type: html NOT content: login to find pages without "login," or type: img to analyze image resources. The table below lists all supported content types in the search system.

| Type | Description |
|---|---|
| html | webpages |
| iframe | iframes |
| img | web images |
| audio | web audio files |
| video | web video files |
| font | web font files |
| style | CSS stylesheets |
| script | JavaScript files |
| rss | RSS syndication feeds |
| text | plain text content |
| pdf | PDF files |
| doc | MS Word documents |
| other | uncategorized |

Extras

The extras parameter provides additional processing options, transforming HTTP data (markdown, snippets, regex, xpath), or connecting the LLM to external data (thumbnails). These options can be combined as needed to achieve the desired result format.

| Extra | Description |
|---|---|
| thumbnails | Generates base64-encoded images to be viewed and analyzed by AI models. Enables image description, content analysis, and visual understanding while keeping token output minimal. Works with images, which can be filtered using type: img in queries. SVG is not supported. |
| markdown | Provides the HTML content field as concise Markdown, reducing token usage and improving readability for LLMs. Works with HTML, which can be filtered using type: html in queries. |
| regex | Extracts regular expression matches from crawled files such as HTML, CSS, JavaScript, etc. Not as precise a tool as XPath for HTML, but supports any text file as a data source. One or more regex patterns can be requested, using the extrasRegex argument. |
| snippets | Matches fulltext queries to contextual keyword usage within the content. When used without requesting the content field (or markdown extra), it can provide an efficient means of refining a search without pulling down the complete page contents. Also great for rendering old-school hit-highlighted results as a list, like Google search in 1999. Works with HTML, CSS, JS, or any text-based, crawled file. |
| xpath | Extracts XPath selector data, used in scraping HTML content. Use XPath's text() selector for text-only; element selectors return outerHTML. Only supported with type: html; other types will be ignored. One or more XPath selectors (//h1, count(//h1), etc.) can be requested, using the extrasXpath argument. |

Extras provide a means of producing token-efficient HTTP content responses. Markdown produces roughly 1/3 the bytes of the source HTML, snippets are generally 500 or so bytes per result, and XPath can be as specific or broad as you choose. The more focused your requests, the more results you can fit into your LLM session.

The idea, of course, is that the LLM takes care of this for you. If you notice your LLM developing an affinity for the "content" field (full HTML), a nudge in chat to budget tokens using the extras feature should be all that is needed.
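
To make that nudge concrete, here is a hypothetical variant of the earlier search sketch that leans on snippets and XPath instead of the full content field. As before, the tool name and argument spellings are assumptions to verify against list_tools(), apart from extrasXpath, which the table above names.

```python
# Hypothetical token-budget search: snippets for context, XPath for titles,
# and no full "content" field. Tool/argument names are assumptions, except
# extrasXpath, which the extras table documents.
from mcp import ClientSession

async def skim_privacy_pages(session: ClientSession) -> None:
    result = await session.call_tool(
        "webcrawl_search",  # assumed tool name
        {
            "query": 'type: html AND "privacy policy"',
            "extras": ["snippets", "xpath"],
            "extrasXpath": ["//title/text()"],  # one or more XPath selectors
        },
    )
    for block in result.content:
        print(getattr(block, "text", block))
```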

Interactive Mode

No AI, just classic Boolean search of your web archives in a terminal.

mcp-server-webcrawl can double as a terminal search for your web archives. You can run it against your local archives, but it gets more interesting when you realize you can ssh into any remote host and view archives sitting on that host. No downloads, syncs, multifactor logins, or other common drudgery required. With interactive mode, you can be in and searching a crawl sitting on a remote server in no time at all.

Launch with --crawler and --datasrc to load into search immediately, or set up the crawler and datasrc in-app.

```bash
mcp-server-webcrawl --crawler wget --datasrc /path/to/datasrc --interactive
# or manually enter crawler and datasrc in the UI
mcp-server-webcrawl --interactive
```

Interactive mode is a way to search through tranches of crawled data, whenever, wherever... in a terminal.

Interactive search interface


Repository Owner

pragmar (User)

Repository Details

Language: HTML
Default Branch: master
Size: 14,260 KB
Contributors: 1
License: Other
MCP Verified: Nov 12, 2025

Programming Languages

HTML: 87.21%
Python: 12.31%
XSLT: 0.44%
Makefile: 0.02%
Batchfile: 0.02%

Topics

archivebox httrack interrobot katana knowledgebase mcp mcp-server mcp-servers siteone warc wget

Join Our Newsletter

Stay updated with the latest AI tools, news, and offers by subscribing to our weekly newsletter.

We respect your privacy. Unsubscribe at any time.

Related MCPs

Discover similar Model Context Protocol servers

  • Driflyte MCP Server: Bridging AI assistants with deep, topic-aware knowledge from web and code sources.

    Driflyte MCP Server acts as a bridge between AI-powered assistants and diverse, topic-aware content sources by exposing a Model Context Protocol (MCP) server. It enables retrieval-augmented generation workflows by crawling, indexing, and serving topic-specific documents from web pages and GitHub repositories. The system is extensible, with planned support for additional knowledge sources, and is designed for easy integration with popular AI tools such as ChatGPT, Claude, and VS Code.

    9 stars · MCP · serkan-ozal/driflyte-mcp-server

  • MCP Server for Deep Research: Transform research questions into comprehensive, well-cited reports using an advanced research assistant.

    MCP Server for Deep Research provides an end-to-end workflow for conducting in-depth research on complex topics. It elaborates on research questions, generates subquestions, integrates web search, analyzes and synthesizes retrieved content, and generates structured, well-cited research reports. The tool integrates with Claude Desktop and leverages prompt templates tailored for comprehensive research tasks.

    187 stars · MCP · reading-plus-ai/mcp-server-deep-research

  • Web Analyzer MCP: Intelligent web content analysis and summarization via MCP.

    Web Analyzer MCP is an MCP-compliant server designed for intelligent web content analysis and summarization. It leverages FastMCP to perform advanced web scraping, content extraction, and AI-powered question-answering using OpenAI models. The tool integrates with various developer IDEs, offering structured markdown output, essential content extraction, and smart Q&A functionality. Its features streamline content analysis workflows and support flexible model selection.

    2 stars · MCP · kimdonghwi94/web-analyzer-mcp

  • AgentQL MCP Server: MCP-compliant server for structured web data extraction using AgentQL.

    AgentQL MCP Server acts as a Model Context Protocol (MCP) server that leverages AgentQL's data extraction capabilities to fetch structured information from web pages. It allows integration with applications supporting MCP, such as Claude Desktop, VS Code, and Cursor, by providing an accessible interface for extracting structured data based on user-defined prompts. With configurable API key support and streamlined installation, it simplifies the process of connecting web data extraction workflows to AI tools.

    120 stars · MCP · tinyfish-io/agentql-mcp

  • WebScraping.AI MCP Server: MCP server for advanced web scraping and AI-driven data extraction.

    WebScraping.AI MCP Server implements the Model Context Protocol to provide web data extraction and question answering functionalities. It integrates with WebScraping.AI to offer robust tools for retrieving, rendering, and parsing web content, including structured data and natural language answers from web pages. It supports JavaScript rendering, proxy management, device emulation, and custom extraction configurations, making it suitable for both individual and team deployments in AI-assisted workflows.

    33 stars · MCP · webscraping-ai/webscraping-ai-mcp-server

  • tavily-search MCP server: A search server that integrates the Tavily API with Model Context Protocol tools.

    tavily-search MCP server provides an MCP-compliant server to perform search queries using the Tavily API. It returns search results in text format, including AI responses, URLs, and result titles. The server is designed for easy integration with clients like Claude Desktop or Cursor and supports both local and Docker-based deployment. It facilitates AI workflows by offering search functionality as part of a standardized protocol interface.

    44 stars · MCP · Tomatio13/mcp-server-tavily