What is this and what can it do?

mcp-server-webcrawl is an MCP server that runs on your computer. It creates a gateway to web crawler archives so that language models (OpenAI, Claude) can filter, analyze, and process the data, autonomously or under your direction. Use mcp-server-webcrawl for technical inference, content management, marketing, SEO, and more. The sky is the limit!

The server supports a variety of crawl sources, including wget and the WARC archival format. The InterroBot, Katana, and SiteOne crawlers are also supported.

mcp-server-webcrawl is free and open source.

Requirements

Claude Desktop (macOS/Windows) currently has everything necessary to run mcp-server-webcrawl; other MCP clients have not yet been tested. In addition to Claude Desktop, you'll need Python (>=3.10) installed.

With Python installed, you should have "pip" available in Terminal (macOS) or PowerShell (Windows). You can install mcp-server-webcrawl with the following command.

pip install mcp-server-webcrawl
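
Once installed, the server must be registered with your MCP client. Below is a minimal sketch for Claude Desktop's claude_desktop_config.json; the mcpServers/command/args structure is standard Claude Desktop configuration, but the --crawler and --datasrc arguments are shown here as assumptions for illustration, so check the project documentation for the exact invocation and the values supported for your crawler.

    {
      "mcpServers": {
        "webcrawl": {
          "command": "mcp-server-webcrawl",
          "args": ["--crawler", "wget", "--datasrc", "/path/to/archives/"]
        }
      }
    }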

At the time of writing, OpenAI has announced MCP support, but nothing tangible has shipped yet. Hang tight!

Underlying API

The API is designed to stay out of your way, and to a large degree it can be navigated autonomously by your MCP client. Sometimes, however, you may need to nudge the LLM toward the correct field or search strategy. The following is the current API interface, for your reference.

webcrawl_sites

This tool retrieves a list of sites (project websites or crawl directories).

ids (array<int>, optional)
    List of project IDs to retrieve. Leave empty for all projects.

fields (array<string>, optional)
    List of additional fields to include beyond defaults (id, url). Empty list means default fields only. Options include created (ISO 8601), modified (ISO 8601), and norobots (str).
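
As a quick sketch of how these arguments look in practice, the first object below lists every site with created and modified added to the default id and url fields, and the second retrieves a single hypothetical site (ID 3) along with its norobots field:

    { "fields": ["created", "modified"] }

    { "ids": [3], "fields": ["norobots"] }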

webcrawl_search

This tool searches for resources (webpages, CSS, images, etc.) across projects and retrieves specified fields.

sites (array<int>, optional)
    List of project IDs to filter search results to specific sites. In most scenarios, you'd filter to only one site.

query (string, optional)
    Fulltext search query string. Leave empty to return all resources when filtering on other fields for better precision. Supports fulltext and boolean operators (AND, OR, NOT), quoted phrases, and suffix wildcards, but not prefix wildcards. See below for complete boolean and field search capabilities.

fields (array<string>, optional)
    List of additional fields to include beyond defaults (modified, created). Empty list means default fields only. The content field can lead to large results and should be used with the limit parameter.

sort (string, optional)
    Sort order for results. Prefix with + for ascending, - for descending. ? is a special option for random sort, useful in statistical sampling. Options include: +id, -id, +url, -url, +status, -status, ?.

limit (integer, optional)
    Maximum number of results to return. Default is 20, max is 100.

offset (integer, optional)
    Number of results to skip for pagination. Default is 0.

extras (array<string>, optional)
    Optional array of extra features to include in results. Options include markdown, snippets, and thumbnails (see extras table).
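
Putting the parameters together, a hypothetical webcrawl_search call might pass arguments like the sketch below. The site ID is illustrative, content is requested explicitly with a small limit (per the guidance above), and offset can be advanced in steps of limit to paginate:

    {
      "sites": [2],
      "query": "privacy",
      "fields": ["content"],
      "sort": "+url",
      "limit": 5,
      "offset": 0
    }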

Crawler Features Support

API support, by parameter, across crawler types (wget, WARC, InterroBot, Katana, SiteOne): Sites/ids, Sites/fields, Search/ids, Search/sites, Search/query, Search/fields, Search/sort, Search/limit, Search/offset, Search/extras.

Crawler Field Support

API support, by field, across crawler types (wget, WARC, InterroBot, Katana, SiteOne): id, url, type, status, size, headers, content.

Notes: wget (--mirror) does not index HTTP status beyond 200 OK (HTTP errors are not saved to disk). The wget and SiteOne crawler implementations do not support field-searchable HTTP headers. When used in WARC mode (as opposed to a simple mirror), wget is capable of collecting HTTP headers and status.

Crawlers have strengths and weaknesses; judge them on how well they fit your needs. Don't worry too much about field support. Honestly, you probably don't need HTTP headers outside of specialized web-development work. They all support fulltext boolean search across the crawl data.

Boolean Search Syntax

The query engine supports field-specific (field: value) searches and complex boolean expressions. Fulltext is supported as a combination of the url, content, and headers fields.

While the API interface is designed to be consumed by the LLM directly, it can be helpful to familiarize yourself with the search syntax. Searches generated by the LLM are inspectable, but generally collapsed in the UI. If you need to see the query, expand the MCP collapsible.

Query Example                 Description
privacy                       fulltext single keyword match
"privacy policy"              fulltext match of the exact phrase
boundar*                      fulltext wildcard, matches results starting with boundar (boundary, boundaries)
id: 12345                     id field matches a specific resource by ID
url: example.com/*            url field matches results with URLs containing example.com/
type: html                    type field matches HTML pages only
status: 200                   status field matches a specific HTTP status code (equal to 200)
status: >=400                 status field matches HTTP status codes greater than or equal to 400
content: h1                   content field matches within the HTTP response body (often, but not always, HTML)
headers: text/xml             headers field matches within HTTP response headers
privacy AND policy            fulltext matches both terms
privacy OR policy             fulltext matches either term
policy NOT privacy            fulltext matches policies not containing privacy
(login OR signin) AND form    fulltext matches login or signin, combined with form
type: html AND status: 200    field search matches only HTML pages with HTTP success
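
These forms can be combined into a single query string passed to webcrawl_search. The sketch below is illustrative only: the site ID is hypothetical, and the expression simply chains the documented field and boolean syntax:

    {
      "sites": [1],
      "query": "type: html AND status: 200 AND (login OR signin)",
      "limit": 10
    }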

Field Search Definitions

Field search provides search precision, allowing you to specify which columns of the search index to filter. Rather than searching the entire content, you can restrict your query to specific attributes like URLs, headers, or content body. This approach improves efficiency when looking for specific attributes or patterns within crawl data.

Field     Description
id        database ID
url       resource URL
type      enumerated list of types (see types table)
status    HTTP response codes
headers   HTTP response headers
content   HTTP body: HTML, CSS, JS, and more
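
Field filters compose with the boolean operators shown earlier. As a hedged example, the query below restricts results to URLs under example.com/ that returned an error status, and requests status as an additional field (assuming status is requestable as a field on your crawler; see the notes on wget and SiteOne above):

    {
      "sites": [1],
      "query": "url: example.com/* AND status: >=400",
      "fields": ["status"],
      "limit": 20
    }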

Content Types

Crawls contain a multitude of resource types beyond HTML pages. The type: field search allows filtering by broad content type groups, particularly useful when filtering images without complex extension queries. For example, you might search for type: html NOT content: login to find pages without "login," or type: img to analyze image resources. The table below lists all supported content types in the search system.

Type      Description
html      webpages
iframe    iframes
img       web images
audio     web audio files
video     web video files
font      web font files
style     CSS stylesheets
script    JavaScript files
rss       RSS syndication feeds
text      plain text content
pdf       PDF files
doc       MS Word documents
other     uncategorized
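
The type filter drops straight into a query string. As a sketch, the first argument object below implements the "pages without login" example from above, and the second lists PDF documents from a hypothetical site:

    { "sites": [1], "query": "type: html NOT content: login", "limit": 20 }

    { "sites": [1], "query": "type: pdf", "limit": 20 }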

Extras

The extras parameter provides additional processing options, transforming result data (markdown, snippets), or connecting the LLM to external data (thumbnails). These options can be combined as needed to achieve the desired result format.

thumbnails
    Generates base64-encoded images to be viewed and analyzed by AI models. Enables image description, content analysis, and visual understanding while keeping token output minimal. Works with images, which can be filtered using type: img in queries. SVG is not supported.

markdown
    Provides the HTML content field as concise markdown, reducing token usage and improving readability for LLMs. Works with HTML, which can be filtered using type: html in queries.

snippets
    Matches fulltext queries to contextual keyword usage within the content. When used without requesting the content field (or the markdown extra), it can provide an efficient means of refining a search without pulling down the complete page contents. Also great for rendering old-school hit-highlighted results as a list, like Google search in 1999. Works with HTML, CSS, JS, or any text-based crawled file.
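
As a sketch, the two argument objects below pair extras with the type filters noted in the table: the first asks for hit-highlighted snippets on a fulltext query without pulling full page content, and the second requests thumbnails for a random sample of images (site IDs are hypothetical):

    {
      "sites": [2],
      "query": "privacy OR policy",
      "extras": ["snippets"],
      "limit": 10
    }

    {
      "sites": [2],
      "query": "type: img",
      "extras": ["thumbnails"],
      "sort": "?",
      "limit": 5
    }
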
[Figure: Abstraction of LLM clients (Claude and OpenAI) communicating with a website archive]