video-editing-mcp

MCP server for uploading, editing, searching, and generating videos via Video Jungle and LLMs.

207 Stars · 27 Forks · 207 Watchers · 4 Issues
Implements a Model Context Protocol (MCP) server for video uploading, editing, searching, and generative editing workflows, powered by Video Jungle integration and LLM assistance. Provides tools for video asset management, automated video editing, and context-aware search built on multimedia analysis. Supports both cloud workflows through Video Jungle and local search, such as querying the Photos app database on macOS. Designed for integration with clients like Claude Desktop, and supports automation, debugging, and development through open protocols.

Key Features

Implements Model Context Protocol (MCP) server
Upload and analyze videos from URLs
Custom vj:// URI scheme for asset referencing
Automated video edit generation based on content or prompts
Advanced search by video embeddings and metadata
Project and asset management within Video Jungle
Local Photos app search support (macOS)
Integration with clients like Claude Desktop
Support for OpenTimelineIO and Davinci Resolve workflows
Live update of video edits with real-time changes

Use Cases

Automating video upload and asset management workflows
Searching large video libraries for specific content using text or embeddings
Generating new video edits from sets of source videos using LLM-powered prompts
Managing collaborative video editing projects with metadata-rich assets
Extracting moments based on specific queries, e.g., spoken keywords
Integrating advanced video editing pipelines with Claude Desktop
Enabling local video file search via the macOS Photos app
Providing real-time feedback and updates in ongoing edit sessions
Generating highlight reels or custom compilations from multiple assets
Supporting development and debugging using standardized MCP inspection tools

README

Video Editor MCP server

See a demo here: https://www.youtube.com/watch?v=KG6TMLD8GmA

Upload, edit, search, and generate videos from everyone's favorite LLM and Video Jungle.

You'll need to sign up for an account at Video Jungle in order to use this tool, and add your API key.

Components

Resources

The server implements an interface to upload, generate, and edit videos with:

  • Custom vj:// URI scheme for accessing individual videos and projects
  • Each project resource has a name and description
  • Search results are returned with metadata about what is in the video, and when, allowing for edit generation directly
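A custom scheme like vj:// can be handled the same way as any other URI. As a sketch (the single-identifier layout after vj:// is an assumption for illustration, not taken from the server's code):

```python
from urllib.parse import urlparse

def parse_vj_uri(uri: str) -> str:
    """Extract the asset/project identifier from a vj:// URI.

    Assumes a vj://<identifier> layout, which is a guess at the
    scheme's structure for illustration purposes.
    """
    parsed = urlparse(uri)
    if parsed.scheme != "vj":
        raise ValueError(f"not a vj:// URI: {uri!r}")
    # urlparse puts the identifier in netloc for vj://abc, or in
    # path for vj:///abc; accept either.
    return parsed.netloc or parsed.path.lstrip("/")

asset_id = parse_vj_uri("vj://example-asset-id")  # hypothetical asset ID
```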

Prompts

Coming soon.

Tools

The server implements a few tools:

  • add-video
    • Add a video file for analysis from a URL. Returns a vj:// URI to reference the video file
  • create-videojungle-project
    • Creates a Video Jungle project to contain generative scripts, analyzed videos, and images for video edit generation
  • edit-locally
    • Creates an OpenTimelineIO project and downloads it to your machine to open in a Davinci Resolve Studio instance (Resolve Studio must already be running before calling this tool.)
  • generate-edit-from-videos
    • Generates a rendered video edit from a set of video files
  • generate-edit-from-single-video
    • Generate an edit from a single input video file
  • get-project-assets
    • Get assets within a project for video edit generation.
  • search-videos
    • Returns video matches based upon embeddings and keywords
  • update-video-edit
    • Live update a video edit's information. If Video Jungle is open, edit will be updated in real time.
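Under the hood, an MCP client invokes these tools with JSON-RPC 2.0 `tools/call` requests over stdio. A rough sketch of building such a request (argument names like `query` are assumptions about the tools' input schemas, not taken from the server's code):

```python
import itertools
import json

# Monotonically increasing JSON-RPC request IDs.
_ids = itertools.count(1)

def mcp_tool_call(tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request, as an MCP client
    would send it to the server over stdio."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical invocation of the search-videos tool.
msg = mcp_tool_call("search-videos", {"query": "fly traps"})
```

In practice a client such as Claude Desktop handles this framing for you; the sketch only shows the shape of the message each tool name maps onto.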

Using Tools in Practice

In order to use the tools, you'll need to sign up for Video Jungle and add your API key.

add-video

Here's an example prompt to invoke the add-video tool:

can you download the video at https://www.youtube.com/shorts/RumgYaH5XYw and name it fly traps?

This will download a video from a URL, add it to your library, and analyze it for retrieval later. Analysis is multi-modal, so both audio and visual components can be queried against.

search-videos

Once you've got a video downloaded and analyzed, you can then do queries on it using the search-videos tool:

can you search my videos for fly traps?

Search results contain relevant metadata for generating a video edit according to details discovered in the initial analysis.

search-local-videos

You must set the environment variable LOAD_PHOTOS_DB=1 in order to use this tool, because it causes Claude to prompt for access to files on your local machine.

Once that's done, you can search through your Photos app for videos that exist on your phone, using Apple's tags.

In my case, when I search for "Skateboard", I get 1903 video files.

can you search my local video files for Skateboard?

generate-edit-from-videos

Finally, you can use these search results to generate an edit:

can you create an edit of all the times the video says "fly trap"?

Currently, the video edit tools rely on the context within the current chat.

generate-edit-from-single-video

You can also cut down an edit from a single existing video:

can you create an edit of all the times this video says the word "fly trap"?

Configuration

You must log in to Video Jungle settings and get your API key. Then use it to start the Video Jungle MCP server:

bash
$ uv run video-editor-mcp YOURAPIKEY

To allow this MCP server to search your Photos app on macOS:

bash
$ LOAD_PHOTOS_DB=1 uv run video-editor-mcp YOURAPIKEY

Quickstart

Install

Installing via Smithery

To install Video Editor for Claude Desktop automatically via Smithery:

bash
npx -y @smithery/cli install video-editor-mcp --client claude

Claude Desktop

You'll need to adjust your claude_desktop_config.json manually:

On macOS: ~/Library/Application\ Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json

Published server configuration (runs the released package via uvx):

json
 "mcpServers": {
   "video-editor-mcp": {
     "command": "uvx",
     "args": [
       "video-editor-mcp",
       "YOURAPIKEY"
     ]
   }
 }

Development configuration (runs from a local checkout of this repository):

json
 "mcpServers": {
   "video-editor-mcp": {
     "command": "uv",
     "args": [
       "--directory",
       "/Users/YOURDIRECTORY/video-editor-mcp",
       "run",
       "video-editor-mcp",
       "YOURAPIKEY"
     ]
   }
 }

With local Photos app access enabled (search your Photos app):

json
  "video-jungle-mcp": {
    "command": "uv",
    "args": [
      "--directory",
      "/Users/<PATH_TO>/video-jungle-mcp",
      "run",
      "video-editor-mcp",
      "<YOURAPIKEY>"
    ],
    "env": {
      "LOAD_PHOTOS_DB": "1"
    }
  },

Be sure to replace the directories with the directories you've placed the repository in on your computer.

Development

Building and Publishing

To prepare the package for distribution:

  1. Sync dependencies and update lockfile:
bash
uv sync
  2. Build package distributions:
bash
uv build

This will create source and wheel distributions in the dist/ directory.

  3. Publish to PyPI:
bash
uv publish

Note: You'll need to set PyPI credentials via environment variables or command flags:

  • Token: --token or UV_PUBLISH_TOKEN
  • Or username/password: --username/UV_PUBLISH_USERNAME and --password/UV_PUBLISH_PASSWORD

Debugging

Since MCP servers run over stdio, debugging can be challenging. For the best debugging experience, we strongly recommend using the MCP Inspector.

You can launch the MCP Inspector via npm with this command:

(Be sure to replace YOURDIRECTORY and YOURAPIKEY with the directory this repo is in, and your Video Jungle API key, found in the settings page.)

bash
npx @modelcontextprotocol/inspector uv run --directory /Users/YOURDIRECTORY/video-editor-mcp video-editor-mcp YOURAPIKEY

Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.

Additionally, I've added logging to app.log in the project directory. You can add log lines to diagnose API calls, for example:

logging.info("this is a test log")
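That single line assumes the logger is already configured. A minimal setup that writes to app.log in the working directory (the log format shown here is an assumption, not the server's actual format):

```python
import logging

# Send log records to app.log; force=True replaces any handlers
# configured earlier in the process.
logging.basicConfig(
    filename="app.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
    force=True,
)

logging.info("this is a test log")
```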

A reasonable way to follow along as you're working on the project is to open a terminal session and run:

bash
$ tail -n 90 -f app.log

Repository Owner

burningion

User

Repository Details

Language Python
Default Branch main
Size 3,785 KB
Contributors 3
MCP Verified Sep 2, 2025

Programming Languages

Python
76.12%
Swift
23.19%
Shell
0.69%

Related MCPs

Discover similar Model Context Protocol servers

  • mcpmcp-server

    mcpmcp-server

    Seamlessly discover, set up, and integrate MCP servers with AI clients.

    mcpmcp-server enables users to discover, configure, and connect MCP servers with preferred clients, optimizing AI integration into daily workflows. It supports streamlined setup via JSON configuration, ensuring compatibility with various platforms such as Claude Desktop on macOS. The project simplifies the connection process between AI clients and remote Model Context Protocol servers. Users are directed to an associated homepage for further platform-specific guidance.

    • 17
    • MCP
    • glenngillen/mcpmcp-server
  • manim-mcp-server

    manim-mcp-server

    MCP server for generating Manim animations on demand.

    Manim MCP Server allows users to execute Manim Python scripts via a standardized protocol, generating animation videos that are returned as output. It integrates with systems like Claude to dynamically render animation content from user scripts and supports configurable deployment using environment variables. The server handles management of output files and cleanup of temporary resources, designed with portability and ease of integration in mind.

    • 454
    • MCP
    • abhiemj/manim-mcp-server
  • mcp

    mcp

    Universal remote MCP server connecting AI clients to productivity tools.

    WayStation MCP acts as a remote Model Context Protocol (MCP) server, enabling seamless integration between AI clients like Claude or Cursor and a wide range of productivity applications, such as Notion, Monday, Airtable, Jira, and more. It supports multiple secure connection transports and offers both general and user-specific preauthenticated endpoints. The platform emphasizes ease of integration, OAuth2-based authentication, and broad app compatibility. Users can manage their integrations through a user dashboard, simplifying complex workflow automations for AI-powered productivity.

    • 27
    • MCP
    • waystation-ai/mcp
  • mcp-server-js

    mcp-server-js

    Enable secure, AI-driven process automation and code execution on YepCode via Model Context Protocol.

    YepCode MCP Server acts as a Model Context Protocol (MCP) server that facilitates seamless communication between AI platforms and YepCode’s workflow automation infrastructure. It allows AI assistants and clients to execute code, manage environment variables, and interact with storage through standardized tools. The server can expose YepCode processes directly as MCP tools and supports both hosted and local installations via NPX or Docker. Enterprise-grade security and real-time interaction make it suitable for integrating advanced automation into AI-powered environments.

    • 31
    • MCP
    • yepcode/mcp-server-js
  • 1mcp-app/agent

    1mcp-app/agent

    A unified server that aggregates and manages multiple Model Context Protocol servers.

    1MCP Agent provides a single, unified interface that aggregates multiple Model Context Protocol (MCP) servers, enabling seamless integration and management of external tools for AI assistants. It acts as a proxy, managing server configuration, authentication, health monitoring, and dynamic server control with features like asynchronous loading, tag-based filtering, and advanced security options. Compatible with popular AI development environments, it simplifies setup by reducing redundant server instances and resource usage. Users can configure, monitor, and scale model tool integrations across various AI clients through easy CLI commands or Docker deployment.

    • 96
    • MCP
    • 1mcp-app/agent
  • blender-mcp

    blender-mcp

    Seamless integration between Blender and Claude AI using the Model Context Protocol.

    BlenderMCP enables direct interaction between Blender and Claude AI by leveraging the Model Context Protocol (MCP). It allows users to create, manipulate, and inspect 3D scenes in Blender through natural language commands sent to Claude, which communicates with Blender via a custom socket server and an addon. The solution features two-way communication, object and material manipulation, code execution within Blender, and easy integration with tools like Cursor, Visual Studio Code, and Claude for Desktop.

    • 13,092
    • MCP
    • ahujasid/blender-mcp