Winx Agent

High-performance Rust agent for AI code execution and context management with Model Context Protocol support.

21 Stars · 8 Forks · 21 Watchers · 4 Issues
Winx Agent is a Rust implementation of WCGW, offering advanced shell execution and file management for large language model code agents. It provides high-performance, multi-provider AI integration with automatic fallbacks and code analysis capabilities. Designed for seamless integration with Claude and other LLMs, it uses the Model Context Protocol (MCP) for standardized context handling. Multiple operational modes, advanced file operations, and interactive shell support make it suitable for robust AI-driven code workflows.

Key Features

High-performance Rust implementation
Multi-provider AI integration with fallbacks
Advanced file read, write, and edit operations
Interactive and persistent shell command execution
Line-level context tracking and smart file caching
Granular operational modes for different access levels
Automatic AI provider switching on failure
Media file reading and base64 encoding
Repository structure analysis and project management
Seamless MCP protocol integration for LLMs

Use Cases

Automated code execution via LLM agents
Dynamic code analysis and bug detection
Generating code from natural language prompts
Context-aware file navigation for AI workflows
Managing code projects with persistent session state
Read-only code planning and architecture review
Restricted, controlled code modification by AI
Interactive AI-to-AI and AI-to-user conversations
Handling and processing image data for code agents
Rapid switching between AI model providers for reliability

README

📖 Overview

Winx is a Rust reimplementation of WCGW, providing shell execution and file management capabilities for LLM code agents. Designed for high performance and reliability, Winx integrates with Claude and other LLMs via the Model Context Protocol (MCP).

🌟 Features

  • ⚡ High Performance: Implemented in Rust for maximum efficiency
  • 🤖 Multi-Provider AI Integration (v0.1.5):
    • 🎯 DashScope/Qwen3: Primary AI provider with Alibaba Cloud's Qwen3-Coder-Plus model
    • 🔄 NVIDIA NIM: Fallback 1 with Qwen3-235B-A22B model and thinking mode
    • 💎 Google Gemini: Fallback 2 with Gemini-1.5-Pro and Gemini-1.5-Flash models
    • 🔧 AI-Powered Code Analysis: Detect bugs, security issues, and performance problems
    • 🚀 AI Code Generation: Generate code from natural language descriptions
    • 📚 AI Code Explanation: Get detailed explanations of complex code
    • 🎭 AI-to-AI Chat: Winx fairy assistant with personality and multiple conversation modes
    • 🛡️ Smart Fallback System: Automatic provider switching on failures (see the sketch after this feature list)
  • 📝 Advanced File Operations:
    • 📖 Read files with line range support
    • ✍️ Write new files with syntax validation
    • 🔍 Edit existing files with intelligent search/replace
    • 🔄 Smart file caching with change detection
    • 📏 Line-level granular read tracking
  • 🖥️ Command Execution:
    • 🚀 Run shell commands with status tracking
    • 📟 Interactive shell with persistent session
    • ⌨️ Full input/output control via PTY
    • 🏃‍♂️ Background process execution
  • 🔀 Operational Modes:
    • 🔓 wcgw: Complete access to all features
    • 🔎 architect: Read-only mode for planning and analysis
    • 🔒 code_writer: Restricted access for controlled modifications
  • 📊 Project Management:
    • 📁 Repository structure analysis
    • 💾 Context saving and task resumption
  • 🖼️ Media Support: Read images and encode as base64
  • 🧩 MCP Protocol: Seamless integration with Claude and other LLMs
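
As a rough illustration of the Smart Fallback System, the sketch below tries each provider in priority order and moves on when one fails. This is a minimal, hypothetical Rust sketch; the AiProvider trait and provider types are invented for illustration and are not Winx's actual internals.

rust
// Hypothetical provider chain; names are illustrative, not Winx's real API.
trait AiProvider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct DashScope;
struct NvidiaNim;
struct Gemini;

impl AiProvider for DashScope {
    fn name(&self) -> &str { "DashScope" }
    fn complete(&self, _prompt: &str) -> Result<String, String> {
        Err("simulated outage".to_string()) // pretend the primary is down
    }
}

impl AiProvider for NvidiaNim {
    fn name(&self) -> &str { "NVIDIA NIM" }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("[nim] {prompt}"))
    }
}

impl AiProvider for Gemini {
    fn name(&self) -> &str { "Gemini" }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("[gemini] {prompt}"))
    }
}

// Try each provider in priority order; return the first success.
fn complete_with_fallback(chain: &[Box<dyn AiProvider>], prompt: &str) -> Result<String, String> {
    for provider in chain {
        match provider.complete(prompt) {
            Ok(output) => return Ok(output),
            Err(err) => eprintln!("{} failed ({err}); falling back", provider.name()),
        }
    }
    Err("all providers failed".to_string())
}

fn main() {
    let chain: Vec<Box<dyn AiProvider>> =
        vec![Box::new(DashScope), Box::new(NvidiaNim), Box::new(Gemini)];
    println!("{:?}", complete_with_fallback(&chain, "explain this code"));
}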

🖇️ Installation & Setup

Prerequisites

  • Rust 1.70 or higher
  • Tokio runtime

1. Clone the Repository

bash
git clone https://github.com/gabrielmaialva33/winx-code-agent.git && cd winx-code-agent

2. Build the Project

bash
# For development
cargo build

# For production
cargo build --release

3. Run the Agent

bash
# Using cargo
cargo run

# Or directly
./target/release/winx-code-agent
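
When running the binary standalone, the AI provider keys can be supplied as environment variables first. This assumes the same variables shown in the Claude configuration below are read from the environment; the keys here are placeholders.

bash
# Placeholder keys; variable names match the MCP configuration below
export RUST_LOG=info
export DASHSCOPE_API_KEY="your-dashscope-api-key"
export NVIDIA_API_KEY="your-nvidia-api-key"
export GEMINI_API_KEY="your-gemini-api-key"
./target/release/winx-code-agent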

🔧 Integration with Claude

Winx is designed to work seamlessly with Claude via the MCP interface:

  1. Edit Claude's Configuration

    json
    // In claude_desktop_config.json (Mac: ~/Library/Application Support/Claude/claude_desktop_config.json)
    {
      "mcpServers": {
        "winx": {
          "command": "/path/to/winx-code-agent",
          "args": [],
          "env": {
            "RUST_LOG": "info",
            "DASHSCOPE_API_KEY": "your-dashscope-api-key",
            "DASHSCOPE_MODEL": "qwen3-coder-plus",
            "NVIDIA_API_KEY": "your-nvidia-api-key",
            "NVIDIA_DEFAULT_MODEL": "qwen/qwen3-235b-a22b",
            "GEMINI_API_KEY": "your-gemini-api-key",
            "GEMINI_MODEL": "gemini-1.5-pro"
          }
        }
      }
    }
    
  2. Restart Claude after configuration to see the Winx MCP integration icon.

  3. Start using the tools through Claude's interface.


🛠️ Available Tools

🚀 initialize

Always call this first to set up your workspace environment.

initialize(
  type="first_call",
  any_workspace_path="/path/to/project",
  mode_name="wcgw"
)
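
The example above uses wcgw mode. Assuming mode_name accepts any of the operational modes from the feature list, a read-only planning session would look like this:

initialize(
  type="first_call",
  any_workspace_path="/path/to/project",
  mode_name="architect"
)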

🖥️ bash_command

Execute shell commands with persistent shell state and full interactive capabilities.

# Execute commands
bash_command(
  action_json={"command": "ls -la"},
  chat_id="i1234"
)

# Check command status
bash_command(
  action_json={"status_check": true},
  chat_id="i1234"
)

# Send input to running commands
bash_command(
  action_json={"send_text": "y"},
  chat_id="i1234"
)

# Send special keys (Ctrl+C, arrow keys, etc.)
bash_command(
  action_json={"send_specials": ["Enter", "CtrlC"]},
  chat_id="i1234"
)
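
The feature list also mentions background process execution. Assuming the persistent shell behaves like a normal PTY session, ordinary shell job control should apply; the command below is illustrative, not a dedicated API.

# Illustrative: run a long build in the background of the persistent shell
bash_command(
  action_json={"command": "cargo build --release > build.log 2>&1 &"},
  chat_id="i1234"
)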

๐Ÿ“ File Operations

  • read_files: Read file content with line range support

    read_files(
      file_paths=["/path/to/file.rs"],
      show_line_numbers_reason=null
    )
    
  • file_write_or_edit: Write or edit files

    file_write_or_edit(
      file_path="/path/to/file.rs",
      percentage_to_change=100,
      file_content_or_search_replace_blocks="content...",
      chat_id="i1234"
    )
    
  • read_image: Process image files as base64

    read_image(
      file_path="/path/to/image.png"
    )
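
read_files advertises line range support. The range syntax below is an assumption carried over from WCGW, where a file path may carry a start-end suffix; treat it as illustrative rather than confirmed.

read_files(
  file_paths=["/path/to/file.rs:1-120"]
)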
    

💾 context_save

Save task context for later resumption.

context_save(
  id="task_name",
  project_root_path="/path/to/project",
  description="Task description",
  relevant_file_globs=["**/*.rs"]
)

🤖 AI-Powered Tools (v0.1.5)

  • code_analyzer: AI-powered code analysis for bugs, security, and performance

    code_analyzer(
      file_path="/path/to/code.rs",
      language="Rust"
    )
    
  • ai_generate_code: Generate code from natural language description

    ai_generate_code(
      prompt="Create a REST API for user management",
      language="Rust",
      context="Using Axum framework",
      max_tokens=1000,
      temperature=0.7
    )
    
  • ai_explain_code: Get AI explanation and documentation for code

    ai_explain_code(
      file_path="/path/to/code.rs",
      language="Rust",
      detail_level="expert"
    )
    
  • winx_chat: Chat with Winx, your AI assistant fairy ✨

    winx_chat(
      message="Oi Winx, como funciona o sistema de fallback?",
      conversation_mode="technical",
      include_system_info=true,
      personality_level=8
    )
    

    Conversation Modes:

    • casual: Informal, friendly chat with personality 😊
    • technical: Focused technical responses 🔧
    • help: Help mode with detailed explanations 🆘
    • debug: Debugging assistance 🐛
    • creative: Creative brainstorming 💡
    • mentor: Teaching and best practices 🧙‍♀️
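
For example, the same tool in casual mode (parameters as in the example above):

winx_chat(
  message="Hey Winx, what can you do?",
  conversation_mode="casual",
  personality_level=8
)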

👨‍💻 Usage Workflow

  1. Initialize the workspace

    initialize(type="first_call", any_workspace_path="/path/to/your/project")
    
  2. Explore the codebase

    bash_command(action_json={"command": "find . -type f -name '*.rs' | sort"}, chat_id="i1234")
    
  3. Read key files

    read_files(file_paths=["/path/to/important_file.rs"])
    
  4. Make changes

    file_write_or_edit(file_path="/path/to/file.rs", percentage_to_change=30, 
    file_content_or_search_replace_blocks="<<<<<<< SEARCH\nold code\n=======\nnew code\n>>>>>>> REPLACE", 
    chat_id="i1234")
    
  5. Run tests

    bash_command(action_json={"command": "cargo test"}, chat_id="i1234")
    
  6. Chat with Winx for help

    winx_chat(message="Winx, posso ter ajuda para otimizar este cรณdigo?", 
    conversation_mode="mentor", include_system_info=true)
    
  7. Save context for later

    context_save(id="my_task", project_root_path="/path/to/project", 
    description="Implementation of feature X", relevant_file_globs=["src/**/*.rs"])
    

๐Ÿท Need Support or Assistance?

If you need help or have any questions about Winx, feel free to reach out via the following channels:


๐Ÿ“ Changelog

v0.1.5 (Latest) - Multi-Provider AI Integration

🚀 Major Features:

  • Multi-Provider AI System: Primary DashScope, fallback to NVIDIA, then Gemini
  • DashScope/Qwen3 Integration: Alibaba Cloud's Qwen3-Coder-Plus as primary AI provider
  • Smart Fallback System: Automatic provider switching with comprehensive error handling
  • 3 New AI Tools: code_analyzer, ai_generate_code, ai_explain_code

🎯 AI Providers:

  • DashScope: Primary provider with OpenAI-compatible API format
  • NVIDIA NIM: Qwen3-235B-A22B with thinking mode and MoE architecture
  • Google Gemini: Gemini-1.5-Pro and Gemini-1.5-Flash models

🛠️ Technical Improvements:

  • Rate limiting and retry logic for all AI providers
  • Comprehensive logging and error reporting
  • Environment-based configuration management
  • Full CI/CD quality checks (formatting, linting, testing)
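
As an illustration of the retry behavior, the sketch below retries a fallible call with exponential backoff. It is a minimal, standalone Rust sketch with a simulated flaky operation, not the agent's actual implementation.

rust
use std::thread::sleep;
use std::time::Duration;

// Retry a fallible operation with exponential backoff between attempts.
fn retry_with_backoff<T, E: std::fmt::Display>(
    mut op: impl FnMut() -> Result<T, E>,
    max_attempts: u32,
) -> Result<T, E> {
    let mut delay = Duration::from_millis(250);
    let mut attempt = 1;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            Err(err) if attempt < max_attempts => {
                eprintln!("attempt {attempt} failed ({err}); retrying in {delay:?}");
                sleep(delay);
                delay *= 2; // double the wait after every failure
                attempt += 1;
            }
            Err(err) => return Err(err),
        }
    }
}

fn main() {
    let mut calls = 0;
    // Simulated flaky call: fails twice (e.g. rate limited), then succeeds.
    let result = retry_with_backoff(
        || {
            calls += 1;
            if calls < 3 { Err("rate limited") } else { Ok("response") }
        },
        5,
    );
    println!("{result:?}");
}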

๐Ÿ™ Special Thanks

A huge thank you to rusiaaman for WCGW, the primary inspiration for this project. Winx reimplements WCGW's features in Rust for enhanced performance and reliability.


📜 License

MIT

Repository Owner

gabrielmaialva33

Repository Details

  • Language: Rust
  • Default Branch: main
  • Size: 1,853 KB
  • Contributors: 2
  • License: MIT License
  • MCP Verified: Nov 11, 2025

Programming Languages

  • Rust: 99.66%
  • Shell: 0.34%

Topics

autonomous code-agent computer control execution llm-agent llm-code mcp rust serena shell vibe-coding vibecoding wcgw


Related MCPs

Discover similar Model Context Protocol servers

  • Code Assistant

    AI coding assistant with multi-modal tool execution and MCP integration.

    Code Assistant is an AI-powered coding assistant written in Rust that provides both command-line and graphical user interfaces for autonomous code analysis and modification. It supports multi-modal tool invocation, real-time streaming, and session-based project management. The tool features full Model Context Protocol (MCP) compatibility, enabling seamless integration with MCP clients, and offers advanced project-level configuration and formatting capabilities.

    • ⭐ 110
    • MCP
    • stippi/code-assistant

  • wcgw

    Local shell and code agent server with deep AI integration for Model Context Protocol clients.

    wcgw is an MCP server that empowers conversational AI models, such as Claude, with robust shell command execution and code editing capabilities on the user's local machine. It offers advanced tools for syntax-aware file editing, interactive shell command handling, and context management to optimize AI-driven workflows. Key protections are included to safeguard files, prevent accidental overwrites, and streamline large file handling, ensuring smooth automated code development and execution.

    • ⭐ 616
    • MCP
    • rusiaaman/wcgw

  • MCP Claude Code

    Claude Code-like functionality via the Model Context Protocol.

    Implements a server utilizing the Model Context Protocol to enable Claude Code functionality, allowing AI agents to perform advanced codebase analysis, modification, and command execution. Supports code understanding, file management, and integration with various LLM providers. Offers specialized tools for searching, editing, and delegating tasks, with robust support for Jupyter notebooks. Designed for seamless collaboration with MCP clients including Claude Desktop.

    • ⭐ 281
    • MCP
    • SDGLBL/mcp-claude-code

  • Serena

    Coding agent toolkit with IDE-like semantic code retrieval and editing for LLM integration.

    Serena is a free and open-source coding agent toolkit that enhances large language models with advanced semantic code retrieval and editing tools. It enables integration through the Model Context Protocol (MCP), allowing seamless operation with various coding agents, IDEs, and interfaces. Serena extracts code entities at the symbol level, supports context-aware operations, and improves token efficiency for coding tasks. The tools can be incorporated into diverse LLM-driven environments for more efficient and precise code editing.

    • ⭐ 15,643
    • MCP
    • oraios/serena

  • Neovim MCP Server

    Connect AI assistants to Neovim via the Model Context Protocol.

    Neovim MCP Server enables seamless integration between Neovim instances and AI assistants by implementing the Model Context Protocol (MCP). It allows for multi-connection management, supports both stdio and HTTP server transport modes, and provides access to structured diagnostic information via URI schemes. With LSP integration, plugin support, and an extensible tool system, it facilitates advanced interaction with Neovim for context-aware AI workflows.

    • ⭐ 20
    • MCP
    • linw1995/nvim-mcp

  • VideoDB Agent Toolkit

    AI Agent toolkit that exposes VideoDB context to LLMs with MCP support.

    VideoDB Agent Toolkit provides tools for exposing VideoDB context to large language models (LLMs) and agents, enabling integration with AI-driven IDEs and chat agents. It automates context generation, metadata management, and discoverability by offering structured context files like llms.txt and llms-full.txt, and standardized access via the Model Context Protocol (MCP). The toolkit ensures synchronization of SDK versions, comprehensive documentation, and best practices for seamless AI-powered workflows.

    • ⭐ 43
    • MCP
    • video-db/agent-toolkit