wcgw

Local shell and code agent server with deep AI integration for Model Context Protocol clients.

616 Stars · 55 Forks · 616 Watchers · 5 Issues
wcgw is an MCP server that empowers conversational AI models, such as Claude, with robust shell command execution and code editing capabilities on the user's local machine. It offers advanced tools for syntax-aware file editing, interactive shell command handling, and context management to optimize AI-driven workflows. Key protections are included to safeguard files, prevent accidental overwrites, and streamline large file handling, ensuring smooth automated code development and execution.

Key Features

MCP server compatibility for chat-based AI models
Integrated shell command execution with context-aware feedback
Advanced, syntax-aware code editing and file protection mechanisms
Large file incremental edits to address token limits
Interactive terminal handling (arrow keys, interrupts, ANSI escapes)
Context saving and task checkpointing
Automatic syntax error feedback loop for AI-driven file edits
Smart selection of important files in a workspace based on .gitignore and statistics
Terminal attachment support for monitoring running commands
Seamless handling of multi-command and multiplexed shell sessions

Use Cases

Automating and correcting code compilation through iterative shell commands
Collaborative coding with AI agents on a local environment
Editing and managing large source files without exceeding model token limits
Monitoring and interacting with long-running terminal commands via AI
Safely guiding AI-driven modifications on critical or complex codebases
Enabling context-aware code reviews and checkpoints with attached descriptions
Providing AI with robust feedback on syntax and execution errors for quick resolution
Supporting live terminal session attachments for real-time oversight
Streamlining development workflows through AI-assisted file and command management
Empowering code-writing, architect, or multi-mode development via conversational UI

README

Shell and coding agent for Claude and other MCP clients

Empowering chat applications to code, build and run on your local machine.

wcgw is an MCP server with tightly integrated shell and code editing tools.

⚠️ Warning: do not allow the BashCommand tool to run without reviewing the command; it may result in data loss.

Demo

[Video: workflow demo]

Updates

  • [6 Oct 2025] Model can now run multiple commands in background. ZSH is now a supported shell. Multiplexing improvements.

  • [27 Apr 2025] Removed support for GPTs over relay server. Only MCP server is supported in version >= 5.

  • [24 Mar 2025] Improved writing and editing experience for Sonnet 3.7; CLAUDE.md gets loaded automatically.

  • [16 Feb 2025] You can now attach to the working terminal that the AI uses. See the "attach-to-terminal" section below.

  • [15 Jan 2025] Modes introduced: architect, code-writer, and all powerful wcgw mode.

  • [8 Jan 2025] Context saving tool for saving relevant file paths along with a description in a single file. Can be used as a task checkpoint or for knowledge transfer.

  • [29 Dec 2024] Syntax checking on file writing and edits is now stable. Made the initialize tool call useful: a smart repo structure is sent to Claude if any repo is referenced. Large file handling is also improved.

  • [9 Dec 2024] VS Code extension to paste context into the Claude app

🚀 Highlights

  • Create, Execute, Iterate: Ask Claude to keep running compiler checks till all errors are fixed, or ask it to keep checking the status of a long-running command till it's done.
  • Large file edit: Supports incremental edits to large files to avoid token limit issues. Smartly chooses between small edits and a full rewrite based on the percentage of change needed.
  • Syntax checking on edits: Reports feedback to the LLM if its edits have any syntax errors, so that it can redo them.
  • Interactive Command Handling: Supports interactive commands using arrow keys, interrupts, and ANSI escape sequences.
  • File protections:
    • The AI needs to read a file at least once before it's allowed to edit or rewrite it. This avoids accidental overwrites.
    • Avoids context filling up while reading very large files. Files get chunked based on token length.
    • On initialisation, the provided workspace's directory structure is returned after selecting important files (based on .gitignore as well as a statistical approach).
    • Search/replace-based file edits try to find the correct search block when it has multiple matches, using the previously applied search blocks; otherwise the edit fails (for correctness).
    • File edits use spacing-tolerant matching, with warnings on issues like indentation mismatch. If there's no match, the closest match is returned to the AI so it can fix its mistake.
    • Uses Aider-like search and replace, which performs better than tool-call-based search and replace (a hedged sketch of such a block follows this list).
  • Shell optimizations:
    • Current working directory is always returned after any shell command to prevent AI from getting lost.
    • Command polling exits after a quick timeout to avoid slow feedback. However, status checking has wait tolerance based on fresh output streaming from a command. These two approaches combined provide a good shell interaction experience.
    • Supports multiple concurrent background commands alongside the main interactive shell.
  • Saving repo context in a single file: Task checkpointing using "ContextSave" tool saves detailed context in a single file. Tasks can later be resumed in a new chat asking "Resume task id". The saved file can be used to do other kinds of knowledge transfer, such as taking help from another AI.
  • Easily switch between various modes:
    • Ask it to run in 'architect' mode for planning. Inspired by Aider's architect mode: work with Claude to come up with a plan first. Leads to better accuracy and prevents premature file editing.
    • Ask it to run in 'code-writer' mode for code editing and project building. You can provide specific paths with wild card support to prevent other files getting edited.
    • By default it runs in 'wcgw' mode that has no restrictions and full authorisation.
    • More details in Modes section
  • Runs in a multiplexed terminal: Use the VS Code extension or run screen -x to attach to the terminal the AI runs commands on. View the history, interrupt a process, or interact with the same terminal the AI uses.
  • Automatically loads CLAUDE.md/AGENTS.md: The "CLAUDE.md" or "AGENTS.md" file in the project root is sent as instructions during initialisation. Instructions in a global "/.wcgw/CLAUDE.md" or "/.wcgw/AGENTS.md" file are loaded and added along with the project-specific file. The file name is case sensitive; CLAUDE.md is attached if present, otherwise AGENTS.md.
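
For illustration, here is roughly what such a search/replace payload can look like, written as a Python string. This is a hedged sketch: the block markers follow the common Aider convention and are an assumption rather than wcgw's documented syntax; the string corresponds to the FileEdit tool's file_edit_using_search_replace_blocks parameter (see Tools below).

```python
# Hypothetical Aider-style search/replace payload (marker syntax assumed).
# A single string may contain several SEARCH/REPLACE blocks.
file_edit_using_search_replace_blocks = """
<<<<<<< SEARCH
def greet(name):
    print("Hello " + name)
=======
def greet(name: str) -> None:
    print(f"Hello, {name}!")
>>>>>>> REPLACE
"""

# Per the file-protection notes above, matching is spacing tolerant, warns on
# indentation mismatches, and returns the closest match when nothing matches exactly.
```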

Top use cases examples

  • Solve problem X using python, create and run test cases and fix any issues. Do it in a temporary directory
  • Find instances of code with X behavior in my repository
  • Git clone https://github.com/my/repo in my home directory, then understand the project, set up the environment and build
  • Create a golang htmx tailwind webapp, then open browser to see if it works (use with puppeteer mcp)
  • Edit or update a large file
  • In a separate branch create feature Y, then use github cli to create a PR to original branch
  • Command X is failing in Y directory, please run and fix issues
  • Using X virtual environment run Y command
  • Using CLI tools, create, build, and test an Android app. Finally run it in an emulator for me to use
  • Fix all mypy issues in my repo at X path.
  • Using 'screen' run my server in background instead, then run another api server in bg, finally run the frontend build. Keep checking logs for any issues in all three
  • Create repo wide unittest cases. Keep iterating through files and creating cases. Also keep running the tests after each update. Do not modify original code.

Claude setup (using mcp)

Mac and Linux

First install uv using Homebrew: brew install uv

(Important: use Homebrew to install uv. Otherwise make sure uv is present in a global location like /usr/bin/)

Then create or update claude_desktop_config.json (~/Library/Application Support/Claude/claude_desktop_config.json) with the following JSON.

```json
{
  "mcpServers": {
    "wcgw": {
      "command": "uvx",
      "args": ["wcgw@latest"]
    }
  }
}
```

Then restart the Claude app.

Optional: Force a specific shell

To use a specific shell (bash or zsh), add the --shell argument:

```json
{
  "mcpServers": {
    "wcgw": {
      "command": "uvx",
      "args": ["wcgw@latest", "--shell", "/bin/bash"]
    }
  }
}
```

If there's an error in setting up

  • If there's an error like "uv ENOENT", make sure uv is installed. Then run 'which uv' in the terminal, and use its output in place of "uv" in the configuration.
  • If there's still an issue, check that uv tool run --python 3.12 wcgw runs in your terminal. It should have no output and shouldn't exit.
  • Try removing ~/.cache/uv folder
  • Try using uv version 0.6.0, against which this tool was tested.
  • Debug the mcp server using npx @modelcontextprotocol/inspector@0.1.7 uv tool run --python 3.12 wcgw

Windows (WSL)

On Windows, this MCP server works only under WSL.

To set it up, install uv.

Then add or update the Claude config file %APPDATA%\Claude\claude_desktop_config.json with the following:

```json
{
  "mcpServers": {
    "wcgw": {
      "command": "wsl.exe",
      "args": ["uvx", "wcgw@latest"]
    }
  }
}
```

If you encounter an error, execute the command wsl uv tool run --python 3.12 wcgw in Command Prompt. If you get the error /bin/bash: line 1: uv: command not found, it means uv was not installed globally and you need to point to the correct path of uv.

  1. Find where uv is installed:

```bash
whereis uv
```

Example output: uv: /home/mywsl/.local/bin/uv

  2. Test that the full path works:

```
wsl /home/mywsl/.local/bin/uv tool run --python 3.12 wcgw
```

  3. Update the config with the full path:

```json
{
  "mcpServers": {
    "wcgw": {
      "command": "wsl.exe",
      "args": ["/home/mywsl/.local/bin/uv", "tool", "run", "--python", "3.12", "wcgw"]
    }
  }
}
```

Replace /home/mywsl/.local/bin/uv with your actual uv path from step 1.

Usage

Wait for a few seconds. You should be able to see this icon if everything goes right.

[Image: MCP icon]

Then ask Claude to execute shell commands, read files, edit files, run your code, etc.

Task checkpoint or knowledge transfer

  • You can do a task checkpoint or a knowledge transfer by attaching the "KnowledgeTransfer" prompt using the "Attach from MCP" button.
  • On running "KnowledgeTransfer" prompt, the "ContextSave" tool will be called saving the task description and all file content together in a single file. An id for the task will be generated.
  • In a new chat, you can say "Resume '<task id>'"; the AI should then call "Initialize" with the task id and load the context from there.
  • Or you can directly open the file generated and share it with another AI for help.
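
To make this flow concrete, here is a hedged sketch of the argument payloads involved, written as plain Python dicts. Parameter names follow the Tools section below; the task id, paths, and globs are made-up values for illustration only.

```python
# Hypothetical argument payloads for the ContextSave and Initialize tools.
# Parameter names come from the Tools section of this README; values are illustrative.

context_save_args = {
    "id": "refactor-auth-2025",                      # task id used later to resume
    "project_root_path": "/Users/me/project",
    "description": "Refactoring the auth module; JWT middleware half done.",
    "relevant_file_globs": ["src/auth/**/*.py", "tests/test_auth*.py"],
}

# In a new chat, the model can resume by calling Initialize with the same id:
initialize_args = {
    "any_workspace_path": "/Users/me/project",
    "initial_files_to_read": [],
    "mode_name": "wcgw",
    "task_id_to_resume": "refactor-auth-2025",
}
```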

Modes

There are three built-in modes. You may ask Claude to run in one of the modes, like "Use 'architect' mode"

| Mode | Description | Allows | Denies | Invoke prompt |
| --- | --- | --- | --- | --- |
| Architect | Designed for you to work with Claude to investigate and understand your repo | Read-only commands | FileEdit and Write tools | "Run in mode='architect'" |
| Code-writer | For code writing and development | Specified path globs for editing or writing, specified commands | FileEdit and Write for paths not matching the specified globs | "Run in code writer mode, only 'tests/**' allowed, only uv command allowed" |
| **wcgw** | Default mode with everything allowed | Everything | Nothing | No prompt, or "Run in wcgw mode" |

Note: in code-writer mode, for now, either all commands are allowed or none are. If you give a list of allowed commands, Claude is instructed to run only those commands, but no actual check is enforced. (WIP)
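
As a rough mental model only (this is not a wcgw API or configuration file), the mode matrix above can be summarized as data:

```python
# Conceptual summary of the built-in modes, derived from the table above.
# wcgw does not expose this dict; mode selection happens via the Initialize
# tool's mode_name parameter ("wcgw" | "architect" | "code_writer").
MODES = {
    "architect": {
        "allows": ["read-only commands"],
        "denies": ["FileEdit", "WriteIfEmpty"],
    },
    "code_writer": {
        "allows": ["edits/writes under the specified path globs", "specified commands"],
        "denies": ["edits or writes outside the specified globs"],
    },
    "wcgw": {  # default
        "allows": ["everything"],
        "denies": [],
    },
}
```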

Attach to the working terminal to investigate

NEW: the VS Code extension now automatically attaches to the running terminal if the workspace path matches.

If you have the screen command installed, wcgw runs in a screen session automatically. Once you've started the wcgw MCP server, you can list the screen sessions:

screen -ls

Note down the wcgw screen name, which will be something like 93358.wcgw.235521, where the last number is in hour-minute-second format.

You can then attach to the session using screen -x 93358.wcgw.235521

You may interrupt any running command safely.

You can interact with the terminal safely, for example to enter passwords or other text. (Warning: if you run a new command yourself, any new LLM command will interrupt it.)

You shouldn't exit the session using exit or Ctrl-d; instead, use Ctrl-a d to safely detach without destroying the screen session.

Include the following in ~/.screenrc for a better scrolling experience:

defscrollback 10000
termcapinfo xterm* ti@:te@

[Optional] VS Code extension

https://marketplace.visualstudio.com/items?itemName=AmanRusia.wcgw

Commands:

  • Select some text and press cmd+', then enter instructions. This switches the app to Claude and pastes text containing your instructions, the file path, the workspace directory, and the selected text.

Examples

[Image: example]

Using the MCP server over Docker

First build the Docker image: docker build -t wcgw https://github.com/rusiaaman/wcgw.git

Then you can update /Users/username/Library/Application Support/Claude/claude_desktop_config.json to have:

```json
{
  "mcpServers": {
    "wcgw": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--mount",
        "type=bind,src=/Users/username/Desktop,dst=/workspace/Desktop",
        "wcgw"
      ]
    }
  }
}
```

[Optional] Local shell access with an OpenAI API key or Anthropic API key

OpenAI

Add OPENAI_API_KEY and OPENAI_ORG_ID env variables.

Then run

uvx wcgw wcgw_local --limit 0.1 # Cost limit $0.1

You can now directly write messages, or press the Enter key to open vim for multiline messages and text pasting.

Anthropic

Add ANTHROPIC_API_KEY env variable.

Then run

uvx wcgw wcgw_local --claude

You can now directly write messages, or press the Enter key to open vim for multiline messages and text pasting.

Tools

The server provides the following MCP tools:

Shell Operations:

  • Initialize: Reset shell and set up workspace environment
    • Parameters: any_workspace_path (string), initial_files_to_read (string[]), mode_name ("wcgw"|"architect"|"code_writer"), task_id_to_resume (string)
  • BashCommand: Execute shell commands with timeout control, or interact with a running command
    • Parameters: command (string), wait_for_seconds (int, optional)
    • Alternatively: send_text (string) or send_specials (["Enter"|"Key-up"|...]) or send_ascii (int[]), wait_for_seconds (int, optional)

File Operations:

  • ReadFiles: Read content from one or more files
    • Parameters: file_paths (string[])
  • WriteIfEmpty: Create new files or write to empty files
    • Parameters: file_path (string), file_content (string)
  • FileEdit: Edit existing files using search/replace blocks
    • Parameters: file_path (string), file_edit_using_search_replace_blocks (string)
  • ReadImage: Read image files for display/processing
    • Parameters: file_path (string)

Project Management:

  • ContextSave: Save project context and files for Knowledge Transfer or saving task checkpoints to be resumed later
    • Parameters: id (string), project_root_path (string), description (string), relevant_file_globs (string[])

All tools support absolute paths and include built-in protections against common errors. See the MCP specification for detailed protocol information.
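
For programmatic use outside Claude Desktop, any MCP client can launch the server over stdio and call these tools. Below is a minimal, hedged sketch using the official MCP Python SDK (assuming `pip install mcp` and uvx on PATH); tool names and argument keys follow the list above, though the exact schema may differ across wcgw versions.

```python
# Minimal sketch: drive the wcgw MCP server from Python over stdio.
# Assumes the `mcp` client SDK is installed and `uvx` is available.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(command="uvx", args=["wcgw@latest"])

async def main() -> None:
    async with stdio_client(SERVER) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()  # MCP handshake

            # Set up the workspace in the default 'wcgw' mode (hypothetical path).
            await session.call_tool("Initialize", arguments={
                "any_workspace_path": "/tmp/demo-project",
                "initial_files_to_read": [],
                "mode_name": "wcgw",
                "task_id_to_resume": "",   # empty when not resuming (assumption)
            })

            # Run a shell command and print whatever text the server returns.
            result = await session.call_tool("BashCommand", arguments={
                "command": "ls -la",
                "wait_for_seconds": 5,
            })
            for block in result.content:
                print(getattr(block, "text", block))

asyncio.run(main())
```

The same session can then call ReadFiles, FileEdit, or ContextSave with the parameters listed above.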

Star History

[Image: star history chart]

Repository Owner

rusiaaman

User

Repository Details

Language: Python
Default Branch: main
Size: 2,502 KB
Contributors: 9
License: Apache License 2.0
MCP Verified: Nov 12, 2025

Programming Languages

Python: 99.49%
Dockerfile: 0.51%

Topics

agent ai-agent ai-coding anthropic chatgpt claude claude-desktop custom-gpt llm llm-agent llm-code mcp mcp-server openai shell terminal

Related MCPs

Discover similar Model Context Protocol servers

  • interactive-mcp

    Enable interactive, local communication between LLMs and users via MCP.

    interactive-mcp implements a Model Context Protocol (MCP) server in Node.js/TypeScript, allowing Large Language Models (LLMs) to interact directly with users on their local machine. It exposes tools for requesting user input, sending notifications, and managing persistent command-line chat sessions, facilitating real-time communication. Designed for integration with clients like Claude Desktop and VS Code, it operates locally to access OS-level notifications and command prompts. The project is suited for interactive workflows where LLMs require user involvement or confirmation.

    • 313
    • MCP
    • ttommyth/interactive-mcp
  • Perplexity MCP Server

    MCP Server integration for accessing the Perplexity API with context-aware chat completion.

    Perplexity MCP Server provides a Model Context Protocol (MCP) compliant server that interfaces with the Perplexity API, enabling chat completion with citations. Designed for seamless integration with clients such as Claude Desktop, it allows users to send queries and receive context-rich responses from Perplexity. Environment configuration for API key management is supported, and limitations with long-running requests are noted. Future updates are planned to enhance support for client progress reporting.

    • 85
    • MCP
    • tanigami/mcp-server-perplexity
  • kibitz

    The coding agent for professionals with MCP integration.

    kibitz is a coding agent that supports advanced AI collaboration by enabling seamless integration with Model Context Protocol (MCP) servers via WebSockets. It allows users to configure Anthropic API keys, system prompts, and custom context providers for each project, enhancing contextual understanding for coding tasks. The platform is designed for developers and professionals seeking tailored AI-driven coding workflows and provides flexible project-specific configuration.

    • 104
    • MCP
    • nick1udwig/kibitz
  • mcp-cli

    A command-line inspector and client for the Model Context Protocol

    mcp-cli is a command-line interface tool designed to interact with Model Context Protocol (MCP) servers. It allows users to run and connect to MCP servers from various sources, inspect available tools, resources, and prompts, and execute commands non-interactively or interactively. The tool supports OAuth for various server types, making integration and automation seamless for developers working with MCP-compliant servers.

    • 391
    • MCP
    • wong2/mcp-cli
  • MCP Claude Code

    Claude Code-like functionality via the Model Context Protocol.

    Implements a server utilizing the Model Context Protocol to enable Claude Code functionality, allowing AI agents to perform advanced codebase analysis, modification, and command execution. Supports code understanding, file management, and integration with various LLM providers. Offers specialized tools for searching, editing, and delegating tasks, with robust support for Jupyter notebooks. Designed for seamless collaboration with MCP clients including Claude Desktop.

    • 281
    • MCP
    • SDGLBL/mcp-claude-code
  • MCP Shell Server

    A secure, configurable shell command execution server implementing the Model Context Protocol.

    MCP Shell Server provides secure remote execution of whitelisted shell commands via the Model Context Protocol (MCP). It supports standard input, command output retrieval, and enforces strict safety checks on command operations. The tool allows configuration of allowed commands and execution timeouts, and can be integrated with platforms such as Claude.app and Smithery. With robust security assessments and flexible deployment methods, it facilitates controlled shell access for AI agents.

    • 153
    • MCP
    • tumf/mcp-shell-server