# Winx Agent
High-performance Rust agent for AI code execution and context management with Model Context Protocol support.
## Overview
Winx is a Rust reimplementation of WCGW, providing shell execution and file management capabilities for LLM code agents. Designed for high performance and reliability, Winx integrates with Claude and other LLMs via the Model Context Protocol (MCP).
## Features
- **High Performance**: Implemented in Rust for maximum efficiency
- **Multi-Provider AI Integration** (v0.1.5):
  - **DashScope/Qwen3**: Primary AI provider with Alibaba Cloud's Qwen3-Coder-Plus model
  - **NVIDIA NIM**: Fallback 1, with the Qwen3-235B-A22B model and thinking mode
  - **Google Gemini**: Fallback 2, with the Gemini-1.5-Pro and Gemini-1.5-Flash models
  - **AI-Powered Code Analysis**: Detect bugs, security issues, and performance problems
  - **AI Code Generation**: Generate code from natural language descriptions
  - **AI Code Explanation**: Get detailed explanations of complex code
  - **AI-to-AI Chat**: Winx fairy assistant with personality and multiple conversation modes
  - **Smart Fallback System**: Automatic provider switching on failures
- **Advanced File Operations**:
  - Read files with line range support
  - Write new files with syntax validation
  - Edit existing files with intelligent search/replace
  - Smart file caching with change detection
  - Line-level granular read tracking
- **Command Execution**:
  - Run shell commands with status tracking
  - Interactive shell with persistent sessions
  - Full input/output control via PTY
  - Background process execution
- **Operational Modes**:
  - `wcgw`: Complete access to all features
  - `architect`: Read-only mode for planning and analysis
  - `code_writer`: Restricted access for controlled modifications
- **Project Management**:
  - Repository structure analysis
  - Context saving and task resumption
- **Media Support**: Read images and encode them as base64
- **MCP Protocol**: Seamless integration with Claude and other LLMs
## Installation & Setup
### Prerequisites

- Rust 1.70 or higher
- Tokio runtime
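You can verify the toolchain before building:

```bash
# Confirm the installed toolchain meets the 1.70+ requirement
rustc --version
cargo --version
```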
### 1. Clone the Repository

```bash
git clone https://github.com/gabrielmaialva33/winx-code-agent.git && cd winx-code-agent
```
### 2. Build the Project

```bash
# For development
cargo build

# For production
cargo build --release
```
### 3. Run the Agent

```bash
# Using cargo
cargo run

# Or run the release binary directly
./target/release/winx-code-agent
```
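Log verbosity is controlled with the standard `RUST_LOG` environment variable (the same variable the Claude configuration below sets to `info`); for example:

```bash
# Increase log verbosity while troubleshooting
RUST_LOG=debug ./target/release/winx-code-agent
```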
## Integration with Claude
Winx is designed to work seamlessly with Claude via the MCP interface:
1. **Edit Claude's configuration**

   In `claude_desktop_config.json` (Mac: `~/Library/Application Support/Claude/claude_desktop_config.json`):

   ```json
   {
     "mcpServers": {
       "winx": {
         "command": "/path/to/winx-code-agent",
         "args": [],
         "env": {
           "RUST_LOG": "info",
           "DASHSCOPE_API_KEY": "your-dashscope-api-key",
           "DASHSCOPE_MODEL": "qwen3-coder-plus",
           "NVIDIA_API_KEY": "your-nvidia-api-key",
           "NVIDIA_DEFAULT_MODEL": "qwen/qwen3-235b-a22b",
           "GEMINI_API_KEY": "your-gemini-api-key",
           "GEMINI_MODEL": "gemini-1.5-pro"
         }
       }
     }
   }
   ```

2. **Restart Claude** after configuration to see the Winx MCP integration icon.

3. **Start using the tools** through Claude's interface.
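If you want to exercise the agent outside Claude (for example, during local testing), the same provider settings can be supplied as environment variables; the changelog below notes environment-based configuration. A minimal sketch reusing the variable names from the configuration above:

```bash
# Hypothetical standalone run with the same env vars Claude would pass
export DASHSCOPE_API_KEY="your-dashscope-api-key"
export DASHSCOPE_MODEL="qwen3-coder-plus"
export NVIDIA_API_KEY="your-nvidia-api-key"
export GEMINI_API_KEY="your-gemini-api-key"
RUST_LOG=info ./target/release/winx-code-agent
```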
## Available Tools
### `initialize`
Always call this first to set up your workspace environment.
```
initialize(
    type="first_call",
    any_workspace_path="/path/to/project",
    mode_name="wcgw"
)
```
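The `mode_name` argument accepts any of the operational modes listed above; for instance, a read-only planning session (a sketch reusing the same call shape):

```
initialize(
    type="first_call",
    any_workspace_path="/path/to/project",
    mode_name="architect"
)
```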
### `bash_command`
Execute shell commands with persistent shell state and full interactive capabilities.
```
# Execute commands
bash_command(
    action_json={"command": "ls -la"},
    chat_id="i1234"
)

# Check command status
bash_command(
    action_json={"status_check": true},
    chat_id="i1234"
)

# Send input to running commands
bash_command(
    action_json={"send_text": "y"},
    chat_id="i1234"
)

# Send special keys (Ctrl+C, arrow keys, etc.)
bash_command(
    action_json={"send_specials": ["Enter", "CtrlC"]},
    chat_id="i1234"
)
```
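These actions compose for long-running work: start a slow command, then poll it with `status_check` rather than blocking. A sketch, assuming the same session's `chat_id`:

```
# Start a long build, then poll until it completes
bash_command(
    action_json={"command": "cargo build --release"},
    chat_id="i1234"
)
bash_command(
    action_json={"status_check": true},
    chat_id="i1234"
)
```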
### File Operations
- **`read_files`**: Read file content with line range support

  ```
  read_files(
      file_paths=["/path/to/file.rs"],
      show_line_numbers_reason=null
  )
  ```

- **`file_write_or_edit`**: Write or edit files

  ```
  file_write_or_edit(
      file_path="/path/to/file.rs",
      percentage_to_change=100,
      file_content_or_search_replace_blocks="content...",
      chat_id="i1234"
  )
  ```

- **`read_image`**: Process image files as base64

  ```
  read_image(
      file_path="/path/to/image.png"
  )
  ```
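For partial edits, `file_write_or_edit` takes search/replace blocks in place of full file content (the same block format used in the workflow below); a sketch of a small targeted change:

```
file_write_or_edit(
    file_path="/path/to/file.rs",
    percentage_to_change=10,
    file_content_or_search_replace_blocks="<<<<<<< SEARCH\nfn old_name() {\n=======\nfn new_name() {\n>>>>>>> REPLACE",
    chat_id="i1234"
)
```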
### `context_save`
Save task context for later resumption.
```
context_save(
    id="task_name",
    project_root_path="/path/to/project",
    description="Task description",
    relevant_file_globs=["**/*.rs"]
)
```
### AI-Powered Tools (v0.1.5)
- **`code_analyzer`**: AI-powered code analysis for bugs, security, and performance

  ```
  code_analyzer(
      file_path="/path/to/code.rs",
      language="Rust"
  )
  ```

- **`ai_generate_code`**: Generate code from a natural language description

  ```
  ai_generate_code(
      prompt="Create a REST API for user management",
      language="Rust",
      context="Using Axum framework",
      max_tokens=1000,
      temperature=0.7
  )
  ```

- **`ai_explain_code`**: Get AI explanation and documentation for code

  ```
  ai_explain_code(
      file_path="/path/to/code.rs",
      language="Rust",
      detail_level="expert"
  )
  ```

- **`winx_chat`**: Chat with Winx, your AI assistant fairy

  ```
  winx_chat(
      message="Hi Winx, how does the fallback system work?",
      conversation_mode="technical",
      include_system_info=true,
      personality_level=8
  )
  ```

  Conversation modes:

  - `casual`: Informal, friendly chat with personality
  - `technical`: Focused technical responses
  - `help`: Help mode with detailed explanations
  - `debug`: Debugging assistance
  - `creative`: Creative brainstorming
  - `mentor`: Teaching and best practices
## Usage Workflow
1. **Initialize the workspace**

   ```
   initialize(type="first_call", any_workspace_path="/path/to/your/project")
   ```

2. **Explore the codebase**

   ```
   bash_command(action_json={"command": "find . -type f -name '*.rs' | sort"}, chat_id="i1234")
   ```

3. **Read key files**

   ```
   read_files(file_paths=["/path/to/important_file.rs"])
   ```

4. **Make changes**

   ```
   file_write_or_edit(
       file_path="/path/to/file.rs",
       percentage_to_change=30,
       file_content_or_search_replace_blocks="<<<<<<< SEARCH\nold code\n=======\nnew code\n>>>>>>> REPLACE",
       chat_id="i1234"
   )
   ```

5. **Run tests**

   ```
   bash_command(action_json={"command": "cargo test"}, chat_id="i1234")
   ```

6. **Chat with Winx for help**

   ```
   winx_chat(message="Winx, can you help me optimize this code?", conversation_mode="mentor", include_system_info=true)
   ```

7. **Save context for later**

   ```
   context_save(id="my_task", project_root_path="/path/to/project", description="Implementation of feature X", relevant_file_globs=["src/**/*.rs"])
   ```
## Need Support or Assistance?

If you need help or have any questions about Winx, feel free to reach out via the following channels:

- **GitHub Issues**: Open a support issue on GitHub.
- **Email**: gabrielmaialva33@gmail.com
## Changelog

### v0.1.5 (Latest) - Multi-Provider AI Integration

**Major Features:**

- **Multi-Provider AI System**: Primary DashScope, with fallback to NVIDIA, then Gemini
- **DashScope/Qwen3 Integration**: Alibaba Cloud's Qwen3-Coder-Plus as the primary AI provider
- **Smart Fallback System**: Automatic provider switching with comprehensive error handling
- **3 New AI Tools**: `code_analyzer`, `ai_generate_code`, `ai_explain_code`

**AI Providers:**

- **DashScope**: Primary provider with an OpenAI-compatible API format
- **NVIDIA NIM**: Qwen3-235B-A22B with thinking mode and MoE architecture
- **Google Gemini**: Gemini-1.5-Pro and Gemini-1.5-Flash models

**Technical Improvements:**

- Rate limiting and retry logic for all AI providers
- Comprehensive logging and error reporting
- Environment-based configuration management
- Full CI/CD quality checks (formatting, linting, testing)
## Special Thanks

A huge thank you to rusiaaman for WCGW, the primary inspiration for this project. Winx reimplements WCGW's features in Rust for enhanced performance and reliability.
## License
MIT