Vibe Check MCP

Plug & play agent oversight tool to keep LLMs aligned, reflective, and safe.

315 Stars · 36 Forks · 315 Watchers · 7 Issues

Vibe Check MCP provides a mentor layer over large language model agents to prevent over-engineering and promote optimal, minimal pathways. Leveraging research-backed oversight, it integrates seamlessly as an MCP server with support for STDIO and streamable HTTP transport. The platform enhances agent reliability, improves task success rates, and significantly reduces harmful actions. Designed for easy plug-and-play with MCP-aware clients, it is trusted across multiple MCP platforms and registries.

Key Features

Plug-and-play MCP server integration
Research-backed agent oversight
STDIO and streamable HTTP transport support
Reduces agent harmful actions
Increases agent success rate
Guides agents toward minimal viable paths
Compatible with MCP-aware clients
Trusted across multiple platforms
Open source (MIT licensed)
Continuous integration/quality scoring

Use Cases

Enhancing reliability of language model agents
Preventing task over-engineering by AI agents
Automated oversight for agent-generated decisions
Plugging oversight layer into existing agent frameworks
Reducing risk of harmful or unsafe agent outputs
Deploying safer agents in production environments
Streamlining agent task execution
Integrating with MCP registries and clients
Supporting research in agent alignment and safety
Facilitating compliance in sensitive deployments

README

Vibe Check MCP

Badges: version · trust score · security · 4.3★/5 on MSEEP · PRs welcome

Plug-and-play mentor layer that stops agents from over-engineering and keeps them on the minimal viable path — research-backed MCP server keeping LLMs aligned, reflective and safe.

Quickstart (npx)

Run the server directly from npm without a local installation. Requires Node >=20. Choose a transport:

Option 1 – MCP client over STDIO

```bash
npx -y @pv-bhat/vibe-check-mcp start --stdio
```
  • Launch from an MCP-aware client (Claude Desktop, Cursor, Windsurf, etc.).
  • The log line [MCP] stdio transport connected indicates the process is waiting for the client.
  • Add this block to your client config so it spawns the command:
```json
{
  "mcpServers": {
    "vibe-check-mcp": {
      "command": "npx",
      "args": ["-y", "@pv-bhat/vibe-check-mcp", "start", "--stdio"]
    }
  }
}
```

Option 2 – Manual HTTP inspection

```bash
npx -y @pv-bhat/vibe-check-mcp start --http --port 2091
```
  • curl http://127.0.0.1:2091/health to confirm the service is live.
  • Send JSON-RPC requests to http://127.0.0.1:2091/rpc.

npx downloads the package on demand for both options. For detailed client setup and other commands like install and doctor, see the documentation below.
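
For a quick smoke test of the HTTP transport, the two endpoints above are enough. The sketch below is illustrative: the health check comes straight from the quickstart, while the JSON-RPC body assumes the server accepts the standard MCP tools/list method on /rpc (a real MCP client performs an initialize handshake first, so treat this as a connectivity probe rather than a full session).

```bash
# Confirm the server is live (endpoint from the quickstart above)
curl http://127.0.0.1:2091/health

# Illustrative JSON-RPC probe against the /rpc endpoint.
# Assumes the standard MCP "tools/list" method; real clients first send
# an "initialize" request and may need additional Accept headers.
curl -X POST http://127.0.0.1:2091/rpc \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
```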

Recognition

  • Featured on PulseMCP “Most Popular (This Week)” front page (week of 13 Oct 2025) 🔗
  • Listed in Anthropic’s official Model Context Protocol repo 🔗
  • Discoverable in the official MCP Registry 🔗
  • Featured on Sean Kochel's Top 9 MCP servers for vibe coders 🔗

What is Vibe Check MCP?

Vibe Check MCP keeps agents on the minimal viable path and escalates complexity only when evidence demands it. It is a lightweight server implementing Anthropic's Model Context Protocol that acts as an AI meta-mentor for your agents, interrupting pattern inertia with Chain-Pattern Interrupts (CPI) to prevent Reasoning Lock-In (RLI). Think of it as a rubber-duck debugger for LLMs: a quick sanity check before your agent goes down the wrong path.

Overview

Vibe Check MCP pairs a metacognitive signal layer with CPI so agents can pause when risk spikes. Vibe Check surfaces traits, uncertainty, and risk scores; CPI consumes those triggers and enforces an intervention policy before the agent resumes. See the CPI integration guide and the CPI repo at https://github.com/PV-Bhat/cpi for wiring details.

Vibe Check invokes a second LLM to give metacognitive feedback to your main agent. Integrating vibe_check calls into agent system prompts, and instructing the agent to call the tool before irreversible actions, significantly improves agent alignment and common sense. The high-level component map lives in docs/architecture.md, while the CPI handoff diagram and example shim are captured in docs/integrations/cpi.md.

The Problem: Pattern Inertia & Reasoning Lock-In

Large language models can confidently follow flawed plans. Without an external nudge they may spiral into overengineering or misalignment. Vibe Check provides that nudge through short reflective pauses, improving reliability and safety.

Key Features

| Feature | Description | Benefits |
| --- | --- | --- |
| CPI Adaptive Interrupts | Phase-aware prompts that challenge assumptions | alignment, robustness |
| Multi-provider LLM | Gemini, OpenAI, Anthropic, and OpenRouter support | flexibility |
| History Continuity | Summarizes prior advice when sessionId is supplied | context retention |
| Optional vibe_learn | Log mistakes and fixes for future reflection | self-improvement |

What's New in v2.7.4

  • install --client now supports Cursor, Windsurf, and Visual Studio Code with idempotent merges, atomic writes, and .bak rollbacks.
  • HTTP-aware installers preserve serverUrl entries for Windsurf and emit VS Code workspace snippets plus a vscode:mcp/install link when no config is provided.
  • Documentation now consolidates provider keys, transport selection, uninstall guidance, and dedicated client docs at docs/clients.md.

Session Constitution (per-session rules)

Use a lightweight “constitution” to enforce per-session rules, keyed by sessionId, that CPI will honor. Example rules: “no external network calls,” “prefer unit tests before refactors,” “never write secrets to disk.”

API (tools):

  • update_constitution({ sessionId, rules }) → merges/sets rule set for the session
  • reset_constitution({ sessionId }) → clears session rules
  • check_constitution({ sessionId }) → returns effective rules for the session
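
If you are driving the server over the HTTP transport rather than from an MCP client, the constitution tools can be exercised with plain JSON-RPC tools/call requests. The payload below is a hedged sketch: the tool and argument names (sessionId, rules) come from the list above, but the exact shape of rules and the envelope your client sends may differ.

```bash
# Hedged sketch: set rules for a session, then read them back.
# Tool and argument names come from the API list above; "rules" is shown
# as an array of strings, which is an assumption about the tool schema.
curl -X POST http://127.0.0.1:2091/rpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {
      "name": "update_constitution",
      "arguments": {
        "sessionId": "session-123",
        "rules": ["no external network calls", "prefer unit tests before refactors"]
      }
    }
  }'

curl -X POST http://127.0.0.1:2091/rpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": { "name": "check_constitution", "arguments": { "sessionId": "session-123" } }
  }'
```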

Development Setup

```bash
# Clone and install
git clone https://github.com/PV-Bhat/vibe-check-mcp-server.git
cd vibe-check-mcp-server
npm ci
npm run build
npm test
```

Use npm for all workflows (npm ci, npm run build, npm test). This project targets Node >=20.

Create a .env file with the API keys you plan to use:

```bash
# Gemini (default)
GEMINI_API_KEY=your_gemini_api_key
# Optional providers / Anthropic-compatible endpoints
OPENAI_API_KEY=your_openai_api_key
OPENROUTER_API_KEY=your_openrouter_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
ANTHROPIC_AUTH_TOKEN=your_proxy_bearer_token
ANTHROPIC_BASE_URL=https://api.anthropic.com
ANTHROPIC_VERSION=2023-06-01
# Optional overrides
# DEFAULT_LLM_PROVIDER accepts gemini | openai | openrouter | anthropic
DEFAULT_LLM_PROVIDER=gemini
DEFAULT_MODEL=gemini-2.5-pro
```
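
One way to exercise these keys locally is to export them into the current shell and launch the server with the same command as the quickstart. This is a sketch, assuming a plain KEY=value .env file with no quoting or spaces:

```bash
# Export every variable assigned while sourcing .env, then launch over
# stdio. The npx command is the same one shown in the Quickstart.
set -a
source .env
set +a
npx -y @pv-bhat/vibe-check-mcp start --stdio
```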

Configuration

See docs/TESTING.md for instructions on how to run tests.

Docker

The repository includes a helper script for one-command setup.

```bash
bash scripts/docker-setup.sh
```

See Automatic Docker Setup for full details.

Provider keys

See API Keys & Secret Management for supported providers, resolution order, storage locations, and security guidance.

Transport selection

The CLI supports stdio and HTTP transports. Transport resolution follows this order: explicit flags (--stdio/--http) → MCP_TRANSPORT → default stdio. When using HTTP, specify --port (or set MCP_HTTP_PORT); the default port is 2091. The generated entries add --stdio or --http --port <n> accordingly, and HTTP-capable clients also receive a http://127.0.0.1:<port> endpoint.
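
The three resolution levels can be exercised explicitly. The flag forms below are taken from this section; the environment-variable values shown ("stdio"/"http") are assumptions about how MCP_TRANSPORT is spelled, so verify against the CLI docs:

```bash
# 1) Explicit flags win over everything else
npx -y @pv-bhat/vibe-check-mcp start --http --port 2091

# 2) Otherwise MCP_TRANSPORT decides (values assumed to be "stdio"/"http";
#    MCP_HTTP_PORT supplies the port when HTTP is selected)
MCP_TRANSPORT=http MCP_HTTP_PORT=2091 npx -y @pv-bhat/vibe-check-mcp start

# 3) With neither flags nor env vars, the server defaults to stdio
npx -y @pv-bhat/vibe-check-mcp start
```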

Client installers

Each installer is idempotent and tags entries with "managedBy": "vibe-check-mcp-cli". Backups are written once per run before changes are applied, and merges are atomic (*.bak files make rollback easy). See docs/clients.md for deeper client-specific references.

Claude Desktop

  • Config path: claude_desktop_config.json (auto-discovered per platform).
  • Default transport: stdio (npx … start --stdio).
  • Restart Claude Desktop after installation to load the new MCP server.
  • If an unmanaged entry already exists for vibe-check-mcp, the CLI leaves it untouched and prints a warning.

Cursor

  • Config path: ~/.cursor/mcp.json (provide --config if you store it elsewhere).
  • Schema mirrors Claude’s mcpServers layout.
  • If the file is missing, the CLI prints a ready-to-paste JSON block for Cursor’s settings panel instead of failing.
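
A typical installer invocation might look like the sketch below. The --client identifier ("cursor") and the explicit --config path are assumptions based on this section; confirm the accepted values in docs/clients.md before relying on them.

```bash
# Hedged example: register the server with Cursor. The "cursor" client
# identifier and --config usage are assumptions; see docs/clients.md.
npx -y @pv-bhat/vibe-check-mcp install --client cursor --config ~/.cursor/mcp.json
```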

Windsurf (Cascade)

  • Config path: ~/.codeium/windsurf/mcp_config.json (legacy); newer builds use ~/.codeium/mcp_config.json.
  • Pass --http to emit an entry with serverUrl for Windsurf’s HTTP client.
  • Existing sentinel-managed serverUrl entries are preserved and updated in place.

Visual Studio Code

  • Workspace config lives at .vscode/mcp.json; profiles also store mcp.json in your VS Code user data directory.
  • Provide --config <path> to target a workspace file. Without --config, the CLI prints a JSON snippet and a vscode:mcp/install?... link you can open directly from the terminal.
  • VS Code supports optional dev fields; pass --dev-watch and/or --dev-debug <value> to populate dev.watch/dev.debug.

Uninstall & rollback

  • Restore the backup generated during installation (the newest *.bak next to your config) to revert immediately.
  • To remove the server manually, delete the vibe-check-mcp entry under mcpServers (Claude/Windsurf/Cursor) or servers (VS Code) as long as it is still tagged with "managedBy": "vibe-check-mcp-cli".
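
For example, a manual rollback for a Cursor install could look like the sketch below; the exact backup filename is an assumption, so use whatever *.bak file the installer actually left next to your config.

```bash
# Hedged rollback sketch for Cursor: restore the newest backup written
# next to the config. The ".bak" filename shown is an assumption.
cp ~/.cursor/mcp.json.bak ~/.cursor/mcp.json
```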

Research & Philosophy

CPI (Chain-Pattern Interrupt) is the research-backed oversight method behind Vibe Check. It injects brief, well-timed “pause points” at risk inflection moments to re-align the agent to the user’s true priority, preventing destructive cascades and reasoning lock-in (RLI). In pooled evaluation across 153 runs, CPI nearly doubles success (~27%→54%) and roughly halves harmful actions (~83%→42%). Optimal interrupt dosage is ~10–20% of steps. Vibe Check MCP implements CPI as an external mentor layer at test time.

```mermaid
flowchart TD
  A[Agent Phase] --> B{Monitor Progress}
  B -- high risk --> C[CPI Interrupt]
  C --> D[Reflect & Adjust]
  B -- smooth --> E[Continue]
```

Agent Prompting Essentials

In your agent's system prompt, make it clear that vibe_check is a mandatory tool for reflection. Always pass the full user request and other relevant context. After correcting a mistake, you can optionally log it with vibe_learn to build a history for future analysis.

Example snippet:

```
As an autonomous agent you will:
1. Call vibe_check after planning and before major actions.
2. Provide the full user request and your current plan.
3. Optionally, record resolved issues with vibe_learn.
```
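
For harnesses wired over the HTTP transport, a reflection call might look like the sketch below. Only sessionId and taskContext are named elsewhere in this README; goal and plan are illustrative placeholder field names, so check the published tool schema before relying on them.

```bash
# Hedged sketch of a vibe_check tool call over JSON-RPC.
# "sessionId" and "taskContext" appear elsewhere in this README;
# "goal" and "plan" are illustrative placeholders for the real schema.
curl -X POST http://127.0.0.1:2091/rpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": {
      "name": "vibe_check",
      "arguments": {
        "goal": "Add retry logic to the HTTP client",
        "plan": "Rewrite the whole networking layer with a new framework",
        "sessionId": "session-123",
        "taskContext": "Last tool calls: read_file(http.ts), search(retry libraries)"
      }
    }
  }'
```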

When to Use Each Tool

| Tool | Purpose |
| --- | --- |
| 🛑 vibe_check | Challenge assumptions and prevent tunnel vision |
| 🔄 vibe_learn | Capture mistakes, preferences, and successes |
| 🧰 update_constitution | Set/merge session rules the CPI layer will enforce |
| 🧹 reset_constitution | Clear rules for a session |
| 🔎 check_constitution | Inspect effective rules for a session |

Security

This repository includes a CI-based security scan that runs on every pull request. It checks dependencies with npm audit and scans the source for risky patterns. See SECURITY.md for details and how to report issues.

Roadmap (New PRs welcome)

Priority 1 – Builder Experience & Guidance

  • Structured output for vibe_check: Return a JSON envelope such as { advice, riskScore, traits } so downstream agents can reason deterministically while preserving readable reflections.
  • Agent prompt starter kit: Publish a plug-and-play system prompt snippet that teaches the CPI dosage principle (10–20% of steps), calls out risk inflection points, and reminds agents to include the last 5–10 tool calls in taskContext.
  • Documentation refresh: Highlight the new prompt template and context requirements throughout the README and integration guides.

Priority 2 – Core Reliability Requests

  • LLM resilience: Wrap generateResponse in src/utils/llm.ts with retries and exponential backoff, with a follow-up circuit breaker once the basics land.
  • Input sanitization: Validate and cleanse tool arguments in src/index.ts to mitigate prompt-injection vectors.
  • State stewardship: Add TTL-based cleanup in src/utils/state.ts and switch src/utils/storage.ts file writes to fs.promises to avoid blocking the event loop.

These initiatives are tracked as community-facing GitHub issues so contributors can grab them and see progress in the open.

Additional Follow-On Ideas & Good First Issues

  • Telemetry sanity checks: Add a lint-style CI step that verifies docs/ examples compile (e.g., TypeScript snippet type-check) to catch drift between docs and code.
  • CLI help polish: Ensure every CLI subcommand prints a concise --help example aligned with the refreshed prompt guidance.
  • Docs navigation cleanup: Cross-link docs/agent-prompting.md and docs/technical-reference.md from the README section headers to reduce context switching for new contributors.

Contributors & Community

Contributions are welcome! See CONTRIBUTING.md.

Credits & License

Vibe Check MCP is released under the MIT License. Built for reliable, enterprise-ready AI agents.

Author Credits & Links

Vibe Check MCP was created by Pruthvi Bhat. Initiative: https://murst.org/

Star History

Star History Chart

Repository Owner

PV-Bhat (User)

Repository Details

Language: TypeScript
Default Branch: main
Size: 3,243 KB
Contributors: 8
License: MIT License
MCP Verified: Nov 12, 2025

Programming Languages

TypeScript: 72.76%
JavaScript: 23.54%
Shell: 3.63%
Dockerfile: 0.07%

Topics

agentic-ai agentic-workflow ai-agents chain-of-thought cpi error-handling mcp mcp-server model-context-protocol rli vibe-coding workflow-automation


Related MCPs

Discover similar Model Context Protocol servers

  • Think MCP Tool: Structured reasoning for agentic AI with the 'think' tool via Model Context Protocol.

    Think MCP Tool provides an MCP (Model Context Protocol) server implementing the 'think' tool for structured reasoning in agentic AI workflows. Inspired by Anthropic's research, it enables AI agents to pause and explicitly record thoughts during complex, multi-step problem solving without altering the environment. The system enhances sequential decision-making, policy compliance, and tool output analysis, and offers advanced extensions for criticism, planning, and searching. Suitable for integration with Claude or other agentic large language models.

    80 · MCP · Rai220/think-mcp

  • Ethics Check MCP: Turn any AI into a philosophical sparring partner that challenges assumptions and biases.

    Ethics Check MCP transforms AI assistants by making them actively identify and question ethical concerns, confirmation bias, and unexamined assumptions in conversations. The MCP server modifies models like Claude to interrupt comfortable exchanges and provoke critical thinking through counterarguments and alternative perspectives. By tracking ethical dimensions and user patterns, it delivers tailored challenges to encourage transparent and fair discussion. The tool is designed to keep AI-driven conversations rigorously self-critical and ethically aware.

    3 · MCP · r-huijts/ethics-check-mcp

  • Wanaku MCP Router: A router connecting AI-enabled applications through the Model Context Protocol.

    Wanaku MCP Router serves as a middleware router facilitating standardized context exchange between AI-enabled applications and large language models via the Model Context Protocol (MCP). It streamlines context provisioning, allowing seamless integration and communication in multi-model AI environments. The tool aims to unify and optimize the way applications provide relevant context to LLMs, leveraging open protocol standards.

    87 · MCP · wanaku-ai/wanaku

  • kibitz: The coding agent for professionals with MCP integration.

    kibitz is a coding agent that supports advanced AI collaboration by enabling seamless integration with Model Context Protocol (MCP) servers via WebSockets. It allows users to configure Anthropic API keys, system prompts, and custom context providers for each project, enhancing contextual understanding for coding tasks. The platform is designed for developers and professionals seeking tailored AI-driven coding workflows and provides flexible project-specific configuration.

    104 · MCP · nick1udwig/kibitz

  • Teamwork MCP Server: Seamless Teamwork.com integration for Large Language Models via the Model Context Protocol.

    Teamwork MCP Server is an implementation of the Model Context Protocol (MCP) that enables Large Language Models to interact securely and programmatically with Teamwork.com. It offers standardized interfaces, including HTTP and STDIO, allowing AI agents to perform various project management operations. The server supports multiple authentication methods, an extensible toolset architecture, and is designed for production deployments. It provides read-only capability for safe integrations and robust observability features.

    11 · MCP · Teamwork/mcp

  • Klavis: One MCP server for AI agents to handle thousands of tools.

    Klavis provides an MCP (Model Context Protocol) server with over 100 prebuilt integrations for AI agents, enabling seamless connectivity with various tools and services. It offers both cloud-hosted and self-hosted deployment options and includes out-of-the-box OAuth support for secure authentication. Klavis is designed to act as an intelligent connector, streamlining workflow automation and enhancing agent capability through standardized context management.

    5,447 · MCP · Klavis-AI/klavis