Defang
Develop Once, Deploy Anywhere.
Take your app from Docker Compose to a secure and scalable deployment on your favorite cloud in minutes.
Defang CLI
The Defang Command-Line Interface (CLI) is designed for developers who prefer to manage their workflows directly from the terminal. It offers full access to Defang’s capabilities, allowing you to build, test, and deploy applications efficiently to the cloud.
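For example, a typical terminal session might look like the following sketch (it assumes a project with a Compose file in the current directory; run `defang help` for the full command list):

```
defang login         # authenticate the CLI with Defang
defang compose up    # build and deploy the services in your Compose file
defang tail          # follow logs from the deployed services
```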
Defang MCP Server
The Defang Model Context Protocol (MCP) Server is tailored for developers who work primarily within integrated development environments (IDEs). It enables seamless cloud deployment from supported editors such as Cursor, Windsurf, VS Code, VS Code Insiders, and Claude, delivering a fully integrated experience without leaving your development environment.
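As a sketch of how this is wired up: MCP-capable editors can register the server through the CLI itself. The exact subcommand and flag below are an assumption based on the Defang docs — verify with `defang mcp --help`:

```
# Assumed invocation — register the Defang MCP server with a supported editor
defang mcp setup --client=cursor
```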
This repo includes:
- Public releases of the Defang CLI; see the GitHub releases page for the latest version
- Built-in support for MCP Server — the Defang MCP Server makes cloud deployment as easy as a single prompt. Learn more
- Samples in Golang, Python, and Node.js that show how to accomplish various tasks and deploy them to the DOP with the Defang CLI, using a Docker Compose file (a minimal Compose file is sketched after this list).
- Samples that show how to deploy an app using the Defang Pulumi Provider.
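As a reference point, the Compose-based samples boil down to a file like the sketch below; the service name, build context, and port are illustrative, not taken from any specific sample:

```
# Illustrative: create a minimal compose.yaml for a single web service
cat > compose.yaml <<'EOF'
services:
  app:
    build: .
    ports:
      - mode: ingress
        target: 3000
EOF
```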
Getting started
- Read our Getting Started page
- Follow the installation instructions from the Installing page
- Take a look at our Samples folder for example projects in various programming languages.
- Try the AI integration by running `defang generate`
- Start your new service with `defang compose up` (a full first-session sketch follows this list)
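Put together, a first session might look like this sketch (the project directory name is hypothetical — `defang generate` prompts you for it):

```
defang generate      # scaffold a new project from a sample, with AI assistance
cd my-project        # hypothetical: the directory created by generate
defang compose up    # build and deploy it to the cloud
```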
Installing
Install the Defang CLI from one of the following sources:
- Using the Homebrew package manager DefangLabs/defang tap:

  ```
  brew install DefangLabs/defang/defang
  ```

- Using a shell script:

  ```
  eval "$(curl -fsSL s.defang.io/install)"
  ```

- Using Go:

  ```
  go install github.com/DefangLabs/defang/src/cmd/cli@latest
  ```

- Using the Nix package manager:
  - with Nix-Env:

    ```
    nix-env -if https://github.com/DefangLabs/defang/archive/main.tar.gz
    ```

  - or with Flakes:

    ```
    nix profile install github:DefangLabs/defang#defang-bin --refresh
    ```

- Using winget:

  ```
  winget install defang
  ```

- Using a PowerShell script:

  ```
  iwr https://s.defang.io/defang_win_amd64.zip -OutFile defang.zip
  Expand-Archive defang.zip . -Force
  ```

- Using the official image from Docker Hub:

  ```
  docker run -it defangio/defang-cli help
  ```

- Using NPX:

  ```
  npx defang@latest help
  ```

- or download the latest binary of the Defang CLI.
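Whichever method you choose, you can sanity-check the installation afterwards:

```
defang help
```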
Support
- File any issues in the GitHub issue tracker for this repo
Command completion
The Defang CLI supports command completion for Bash, Zsh, Fish, and PowerShell. To get the shell script for command completion, run the following command:

```
defang completion [bash|zsh|fish|powershell]
```

If you're using Bash, you can add the following to your ~/.bashrc file:

```
source <(defang completion bash)
```

If you're using Zsh, you can add the following to your ~/.zshrc file:

```
source <(defang completion zsh)
```

or pipe the output to a file called _defang in a directory on your fpath, where Zsh looks for completion scripts.

If you're using Fish, you can add the following to your ~/.config/fish/config.fish file:

```
defang completion fish | source
```

If you're using PowerShell, you can add the following to your $HOME\Documents\PowerShell\Microsoft.PowerShell_profile.ps1 file:

```
Invoke-Expression -Command (defang completion powershell | Out-String)
```
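For Zsh specifically, a one-time alternative to sourcing on every shell start is to write the completion script into a directory on your fpath (the path below is an example; any fpath directory works):

```
defang completion zsh > ~/.zsh/completions/_defang
```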
Environment Variables
The Defang CLI recognizes the following environment variables:
- `COMPOSE_DISABLE_ENV_FILE` - Whether to disable loading environment variables from a `.env` file in the current directory; defaults to `false`
- `COMPOSE_FILE` - The Compose file(s) to use; defaults to `compose.yaml`, `compose.yml`, `docker-compose.yaml`, or `docker-compose.yml` in the current directory
- `COMPOSE_PATH_SEPARATOR` - The path separator to use for `COMPOSE_FILE`; defaults to `:` on Unix/MacOS and `;` on Windows
- `COMPOSE_PROJECT_NAME` - The name of the project to use; overrides the `name` in the Compose file
- `DEFANG_ACCESS_TOKEN` - The access token to use for authentication; if not specified, uses token from `defang login`
- `DEFANG_BUILD_CONTEXT_LIMIT` - The maximum size of the build context when building container images; defaults to `100MiB`
- `DEFANG_CD_BUCKET` - The S3 bucket to use for the BYOC CD pipeline; defaults to `defang-cd-bucket-…`
- `DEFANG_CD_IMAGE` - The image to use for the Continuous Deployment (CD) pipeline; defaults to `public.ecr.aws/defang-io/cd:public-beta`
- `DEFANG_DEBUG` - Set this to `1` or `true` to enable debug logging
- `DEFANG_DISABLE_ANALYTICS` - If set to `true`, disables sending analytics to Defang; defaults to `false`
- `DEFANG_EDITOR` - The editor to launch after new project generation; defaults to `code` (VS Code)
- `DEFANG_FABRIC` - The address of the Defang Fabric to use; defaults to `fabric-prod1.defang.dev`
- `DEFANG_JSON` - If set to `true`, outputs JSON instead of human-readable output; defaults to `false`
- `DEFANG_HIDE_HINTS` - If set to `true`, hides hints in the CLI output; defaults to `false`
- `DEFANG_HIDE_UPDATE` - If set to `true`, hides the update notification; defaults to `false`
- `DEFANG_ISSUER` - The OAuth2 issuer to use for authentication; defaults to `https://auth.defang.io`
- `DEFANG_MODEL_ID` - The model ID of the LLM to use for the generate/debug AI integration (Pro users only)
- `DEFANG_NO_CACHE` - If set to `true`, disables pull-through caching of container images; defaults to `false`
- `DEFANG_ORG` - The name of the organization to use; defaults to the user's GitHub name
- `DEFANG_PREFIX` - The prefix to use for all BYOC resources; defaults to `Defang`
- `DEFANG_PROVIDER` - The name of the cloud provider to use: `auto` (default), `aws`, `digitalocean`, `gcp`, or `defang`
- `DEFANG_PULUMI_BACKEND` - The Pulumi backend URL or `"pulumi-cloud"`; defaults to a self-hosted backend
- `DEFANG_PULUMI_DIR` - Run Pulumi from this folder, instead of spawning a cloud task; requires `--debug` (BYOC only)
- `DEFANG_PULUMI_VERSION` - Override the version of the Pulumi image to use (`aws` provider only)
- `NO_COLOR` - If set to any value, disables color output; by default, color output is enabled depending on the terminal
- `PULUMI_ACCESS_TOKEN` - The Pulumi access token to use for authentication to Pulumi Cloud; see `DEFANG_PULUMI_BACKEND`
- `PULUMI_CONFIG_PASSPHRASE` - Passphrase used to generate a unique key for your stack, and configuration and encrypted state values
- `TZ` - The timezone to use for log timestamps: an IANA TZ name like `UTC` or `Europe/Amsterdam`; defaults to `Local`
- `XDG_STATE_HOME` - The directory to use for storing state; defaults to `~/.local/state`
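Since these are ordinary environment variables, they can also be set inline for a single invocation, for example to force a provider and enable debug logging for one deployment:

```
DEFANG_PROVIDER=aws DEFANG_DEBUG=1 defang compose up
```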
Environment variables will be loaded from a `.defangrc` file in the current directory, if it exists. This file follows the same format as a `.env` file: `KEY=VALUE` pairs on each line; lines starting with `#` are treated as comments and ignored.
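For example, a minimal `.defangrc` might look like this (the values are illustrative, and the organization name is hypothetical):

```
# .defangrc — loaded by the Defang CLI from the current directory
# lines starting with # are comments
DEFANG_PROVIDER=aws
DEFANG_ORG=my-org
```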
Development
At Defang we use the Nix package manager for our dev environment, in conjunction with DirEnv.
To get started quickly, install Nix and DirEnv, then create a .envrc file to automatically load the Defang developer environment:
```
echo use flake >> .envrc
direnv allow
```
Related MCPs
Discover similar Model Context Protocol servers
QuantConnect MCP Server
Official bridge for secure AI access to QuantConnect's algorithmic trading cloud platform
QuantConnect MCP Server enables artificial intelligence systems such as Claude and OpenAI to interface with QuantConnect's cloud platform through an official, secure, and dockerized implementation of the Model Context Protocol (MCP). It facilitates automated project management, strategy writing, backtesting, and live deployment by exposing a comprehensive suite of API tools for users with valid access credentials. As the maintained official version, it ensures security, easy deployment, and cross-platform compatibility for advanced algorithmic trading automation.
- ⭐ 50
- MCP
- QuantConnect/mcp-server
TOD - TheOneDev CLI Tool
Streamlined DevOps and AI integration for OneDev 13+ via an advanced CLI and Model Context Protocol server.
TOD is a command-line tool designed for OneDev 13+ that enhances development workflows with CI/CD job execution against local changes or specific branches and tags. It features a comprehensive Model Context Protocol (MCP) server for integrating and communicating with AI assistants, allowing intelligent and natural interactions with OneDev. The tool also supports pull request management and build specification migrations, making it a versatile utility for developers seeking automation and AI-driven enhancements.
- ⭐ 3
- MCP
- theonedev/tod
Edge Delta MCP Server
Seamlessly integrate Edge Delta APIs into the Model Context Protocol ecosystem.
Edge Delta MCP Server is a Model Context Protocol server enabling advanced integration with Edge Delta APIs. It allows developers and tools to extract, analyze, and automate observability data from Edge Delta through standardized MCP interfaces. The server supports AI-powered applications and automations, and can be deployed via Docker for straightforward operation. The Go API is available for experimental programmatic access.
- ⭐ 5
- MCP
- edgedelta/edgedelta-mcp-server
Inspektor Gadget MCP Server
AI-powered Kubernetes troubleshooting via Model Context Protocol.
Inspektor Gadget MCP Server provides an AI-powered debugging and inspection interface for Kubernetes clusters. Leveraging the Model Context Protocol, it enables intelligent output summarization, one-click deployment of Inspektor Gadget, and automated discovery of debugging tools from Artifact Hub. The server integrates seamlessly with VS Code for interactive AI commands, simplifying Kubernetes troubleshooting and monitoring workflows.
- ⭐ 16
- MCP
- inspektor-gadget/ig-mcp-server
mcp-get
A command-line tool for discovering, installing, and managing Model Context Protocol servers.
mcp-get is a CLI tool designed to help users discover, install, and manage Model Context Protocol (MCP) servers. It enables seamless integration of Large Language Models (LLMs) with various external data sources and tools by utilizing a standardized protocol. The tool provides access to a curated registry of MCP servers and supports installation and management across multiple programming languages and environments. Although the project is now archived, it simplified environment variable management, package versioning, and server updates to enhance the LLM ecosystem.
- ⭐ 497
- MCP
- michaellatman/mcp-get
In Memoria
Persistent memory and instant context for AI coding assistants, integrated via MCP.
In Memoria is an MCP server that enables AI coding assistants such as Claude or Copilot to retain, recall, and provide context about codebases across sessions. It learns patterns, architecture, and conventions from user code, offering persistent intelligence that eliminates repetitive explanations and generic suggestions. Through the Model Context Protocol, it allows AI tools to perform semantic search, smart file routing, and track project-specific decisions efficiently.
- ⭐ 94
- MCP
- pi22by7/In-Memoria