Fal.ai MCP Server

MCP server for multimodal media generation using Fal.ai models.

9 Stars · 4 Forks · 9 Watchers · 1 Issue
Fal.ai MCP Server is a Model Context Protocol (MCP) compliant server that lets MCP clients such as Claude Desktop generate images, videos, music, and audio, and transcribe speech, using Fal.ai models. It operates natively asynchronously and supports both STDIO and HTTP/SSE transports for flexible integration. The server provides high-performance media generation, including upscaling, text-to-speech, audio transcription, and image-to-image transformation, all powered by Fal.ai's API. It can be deployed via Docker, from PyPI, or from source.

Key Features

MCP 1.0 protocol compliance
Native asynchronous API for optimal performance
Support for image, video, music, and audio generation
Audio transcription using Whisper
Image upscaling and image-to-image transformation
Dual transport modes: STDIO and HTTP/SSE
Progressive updates for long-running tasks via queue API
Seamless integration with Claude Desktop and other MCP clients
Easy deployment via Docker, PyPI, or source
Environment variable based API key management

Use Cases

Integrating Fal.ai media generation models into MCP-compatible clients
Generating AI-powered images, videos, or sounds from text prompts
Creating music or soundscapes from textual descriptions
Transcribing audio content using Whisper
Enhancing the resolution of images through upscaling
Transforming or editing images with prompt-based inputs
Providing custom AI media generation in desktop applications
Running scalable, asynchronous media generation services
Automating multimedia content creation pipelines
Enabling text-to-speech conversion and natural speech synthesis

README

🎨 Fal.ai MCP Server

A Model Context Protocol (MCP) server that enables Claude Desktop (and other MCP clients) to generate images, videos, music, and audio using Fal.ai models.

✨ Features

🚀 Performance

  • Native Async API - Uses fal_client.run_async() for optimal performance (see the sketch after this list)
  • Queue Support - Long-running tasks (video/music) use queue API with progress updates
  • Non-blocking - All operations are truly asynchronous
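
A minimal sketch of the kind of call the server makes under the hood, assuming the fal-client Python package and an illustrative Flux model id; the argument names and response shape vary by model, so treat it as an approximation rather than this project's actual code:

python
# Hedged sketch: model id, argument names, and response shape are assumptions
# based on typical Fal.ai image models, not code from this repository.
import asyncio

import fal_client  # pip install fal-client; reads FAL_KEY from the environment


async def generate_image(prompt: str) -> str:
    # run_async submits the request and awaits the result without blocking
    result = await fal_client.run_async(
        "fal-ai/flux/schnell",          # example model id
        arguments={"prompt": prompt},
    )
    # Most Fal.ai image models return a list of image objects with URLs
    return result["images"][0]["url"]


if __name__ == "__main__":
    print(asyncio.run(generate_image("a watercolor sunset over the ocean")))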

🌐 Transport Modes (New!)

  • STDIO - Traditional Model Context Protocol communication
  • HTTP/SSE - Web-based access via Server-Sent Events
  • Dual Mode - Run both transports simultaneously

🎨 Media Generation

  • 🖼️ Image Generation - Create images using Flux, SDXL, and other models
  • 🎬 Video Generation - Generate videos from images or text prompts
  • 🎵 Music Generation - Create music from text descriptions
  • 🗣️ Text-to-Speech - Convert text to natural speech
  • 📝 Audio Transcription - Transcribe audio using Whisper
  • ⬆️ Image Upscaling - Enhance image resolution
  • 🔄 Image-to-Image - Transform existing images with prompts

🚀 Quick Start

Prerequisites

  • Python 3.10 or higher
  • Fal.ai API key (free tier available)
  • Claude Desktop (or any MCP-compatible client)

Installation

Option 1: Docker (Recommended for Production) 🐳

Official Docker image available on GitHub Container Registry:

bash
# Pull the latest image
docker pull ghcr.io/raveenb/fal-mcp-server:latest

# Run with your API key
docker run -d \
  --name fal-mcp \
  -e FAL_KEY=your-api-key \
  -p 8080:8080 \
  ghcr.io/raveenb/fal-mcp-server:latest

Or use Docker Compose:

bash
curl -O https://raw.githubusercontent.com/raveenb/fal-mcp-server/main/docker-compose.yml
echo "FAL_KEY=your-api-key" > .env
docker-compose up -d

Option 2: Install from PyPI

bash
pip install fal-mcp-server

Or with uv:

bash
uv pip install fal-mcp-server

Option 3: Install from source

bash
git clone https://github.com/raveenb/fal-mcp-server.git
cd fal-mcp-server
pip install -e .

Configuration

  1. Get your Fal.ai API key from fal.ai

  2. Configure Claude Desktop by adding to:

    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%\Claude\claude_desktop_config.json

For Docker Installation:

json
{
  "mcpServers": {
    "fal-ai": {
      "command": "curl",
      "args": ["-N", "http://localhost:8080/sse"]
    }
  }
}

For PyPI Installation:

json
{
  "mcpServers": {
    "fal-ai": {
      "command": "python",
      "args": ["-m", "fal_mcp_server.server"],
      "env": {
        "FAL_KEY": "your-fal-api-key"
      }
    }
  }
}

For Source Installation:

json
{
  "mcpServers": {
    "fal-ai": {
      "command": "python",
      "args": ["/path/to/fal-mcp-server/src/fal_mcp_server/server.py"],
      "env": {
        "FAL_KEY": "your-fal-api-key"
      }
    }
  }
}

  3. Restart Claude Desktop

💬 Usage

With Claude Desktop

Once configured, ask Claude to:

  • "Generate an image of a sunset"
  • "Create a video from this image"
  • "Generate 30 seconds of ambient music"
  • "Convert this text to speech"
  • "Transcribe this audio file"

HTTP/SSE Transport (New!)

Run the server with HTTP transport for web-based access:

bash
# Using Docker (recommended)
docker run -d -e FAL_KEY=your-key -p 8080:8080 ghcr.io/raveenb/fal-mcp-server:latest

# Using pip installation
fal-mcp-http --host 0.0.0.0 --port 8000

# Or dual mode (STDIO + HTTP)
fal-mcp-dual --transport dual --port 8000

Connect from web clients via Server-Sent Events (a minimal connectivity check is sketched after the endpoints below):

  • SSE endpoint: http://localhost:8080/sse (Docker) or http://localhost:8000/sse (pip)
  • Message endpoint: POST http://localhost:8080/messages/
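
As a quick sanity check that the SSE endpoint is reachable, the hedged Python sketch below opens the stream and prints the first few events. It uses the third-party httpx library (an assumption, not a project dependency) and the Docker default URL from above; the event payloads are simply whatever the server emits:

python
# Connectivity check only: not an MCP client, just a peek at the SSE stream.
# Assumes the Docker default port 8080; use 8000 for a pip-installed fal-mcp-http.
import asyncio

import httpx  # pip install httpx


async def peek_sse(url: str = "http://localhost:8080/sse", limit: int = 5) -> None:
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream("GET", url) as response:
            response.raise_for_status()
            seen = 0
            async for line in response.aiter_lines():
                if line:  # skip the blank lines that separate SSE events
                    print(line)
                    seen += 1
                if seen >= limit:
                    break


asyncio.run(peek_sse())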

See Docker Documentation and HTTP Transport Documentation for details.

📦 Supported Models

Image Models

  • flux_schnell - Fast high-quality generation
  • flux_dev - Development version with more control
  • sdxl - Stable Diffusion XL

Video Models

  • svd - Stable Video Diffusion
  • animatediff - Text-to-video animation

Audio Models

  • musicgen - Music generation
  • bark - Text-to-speech
  • whisper - Audio transcription

🤝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

Local Development

We support local CI testing with act:

bash
# Quick setup
make ci-local  # Run CI locally before pushing

# See detailed guide
cat docs/LOCAL_TESTING.md

📝 License

MIT License - see LICENSE file for details.

🙏 Acknowledgments

Star History

[Star history chart image]

Repository Owner

raveenb (User)

Repository Details

Language: Python
Default Branch: main
Size: 172 KB
Contributors: 1
License: MIT License
MCP Verified: Nov 11, 2025

Programming Languages

Python: 92.06%
Shell: 2.88%
Makefile: 2.73%
Dockerfile: 2.33%

Topics

ai-tools, claude, fal-ai, image-generation, llm, mcp, model-context-protocol

Related MCPs

Discover similar Model Context Protocol servers

  • Outsource MCP

    Unified MCP server for multi-provider AI text and image generation

    Outsource MCP is a Model Context Protocol server that bridges AI applications with multiple model providers via a single unified interface. It enables AI tools and clients to access over 20 major providers for both text and image generation, streamlining model selection and API integration. Built on FastMCP and Agno agent frameworks, it supports flexible configuration and is compatible with MCP-enabled AI tools. Authentication is provider-specific, and all interactions use a simple standardized API format.

    • 26
    • MCP
    • gwbischof/outsource-mcp
  • piapi-mcp-server

    TypeScript-based MCP server for PiAPI media content generation

    piapi-mcp-server is a TypeScript implementation of a Model Context Protocol (MCP) server that connects with PiAPI to enable media generation workflows from MCP-compatible applications. It handles image, video, music, TTS, 3D, and voice generation tasks using a wide range of supported models like Midjourney, Flux, Kling, LumaLabs, Udio, and more. Designed for easy integration with clients such as Claude Desktop, it includes an interactive MCP Inspector for development, testing, and debugging.

    • 62
    • MCP
    • apinetwork/piapi-mcp-server
  • @nanana-ai/mcp-server-nano-banana

    MCP server for Nanana AI image generation using Gemini's nano banana model.

    @nanana-ai/mcp-server-nano-banana serves as an MCP (Model Context Protocol) compatible server for facilitating image generation and transformation powered by the Gemini nano banana model. It enables clients like Claude Desktop to interact with Nanana AI, processing text prompts to generate images or transform existing images. The server can be easily configured with API tokens and integrated into desktop applications. Users can manage credentials, customize endpoints, and monitor credit usage seamlessly.

    • 3
    • MCP
    • nanana-app/mcp-server-nano-banana
  • MCP Server Giphy

    MCP-compatible Giphy API server for AI models to search and retrieve GIFs.

    MCP Server Giphy provides an MCP-compliant server interface for accessing GIFs from the Giphy API, specifically tailored for seamless integration with AI models. It supports content filtering, multiple retrieval methods (search, trending, random), and optimized response formats with comprehensive metadata. The server enables AI applications to search, retrieve, and utilize GIF content efficiently, and is easily configurable with tools like Claude Desktop.

    • 22
    • MCP
    • magarcia/mcp-server-giphy
  • Daisys MCP server

    A beta server implementation for the Model Context Protocol supporting audio context with Daisys integration.

    Daisys MCP server provides a beta implementation of the Model Context Protocol (MCP), enabling seamless integration between the Daisys AI platform and various MCP clients. It allows users to connect MCP-compatible clients to Daisys by configurable authentication and environment settings, with out-of-the-box support for audio file storage and playback. The server is designed to be extensible, including support for both user-level deployments and developer contributions, with best practices for secure authentication and dependency management.

    • 10
    • MCP
    • daisys-ai/daisys-mcp
  • Vectara MCP Server

    Secure RAG server enabling seamless AI integration via Model Context Protocol.

    Vectara MCP Server implements the open Model Context Protocol to enable AI systems and agentic applications to connect securely with Vectara's Trusted RAG platform. It supports multiple transport modes, including secure HTTP, Server-Sent Events (SSE), and local STDIO for development. The server provides fast, reliable retrieval-augmented generation (RAG) operations with built-in authentication, rate limiting, and optional CORS configuration. Integration is compatible with Claude Desktop and any other MCP client.

    • 25
    • MCP
    • vectara/vectara-mcp