Beelzebub

AI-driven honeypot framework with advanced threat detection and context protocol support.

1,680 stars · 155 forks · 4 open issues
Beelzebub is an advanced honeypot framework that uses AI and large language models (LLMs) to realistically simulate system interactions, enabling the detection and analysis of sophisticated cyber attacks. The platform supports modular service definitions via YAML, integrates with observability stacks, and handles multiple protocols, including MCP, which it uses to detect prompt injection against LLM agents. Designed for security researchers and professionals, it can be used to build distributed honeypot networks for collaborative global threat intelligence.

Key Features

Low-code YAML-based modular configuration
Advanced LLM integration for high-interaction honeypot simulation
Support for SSH, HTTP, TCP, and MCP protocols
Prompt injection detection for LLM agents
Prometheus metrics and observability integration
Docker and Kubernetes deployment ready
ELK stack compatibility
Comprehensive automated testing and CI/CD pipelines
Code quality monitoring with static analysis tools
Global community and distributed intelligence framework

Use Cases

Simulating realistic system environments for attacker deception
Detecting and analyzing prompt injection attacks on LLM agents
Deploying distributed honeypots for collaborative threat intelligence
Identifying emerging malware and zero-day vulnerabilities
Collecting rich cyber threat data for research
Enhancing security operations with observability and analytics
Securing infrastructure with minimal manual configuration
Integrating honeypot telemetry with ELK and Prometheus stacks
Supporting security research through a global community
Evaluating and improving LLM security postures

README

Beelzebub


Overview

Beelzebub is an advanced honeypot framework designed to provide a highly secure environment for detecting and analyzing cyber attacks. It offers a low-code approach for easy implementation and uses AI to mimic the behavior of a high-interaction honeypot.


🌍 Global Threat Intelligence Community

Our mission is to establish a collaborative ecosystem of security researchers and white hat professionals worldwide, dedicated to creating a distributed honeypot network that identifies emerging malware, discovers zero-day vulnerabilities, and neutralizes active botnets.

For a comprehensive overview of our distributed threat intelligence framework and community vision, please refer to our white paper:

White Paper

The white paper includes information on how to join our Discord community and contribute to the global threat intelligence network.

Key Features

Beelzebub offers a wide range of features to enhance your honeypot environment:

  • Low-code configuration: YAML-based, modular service definition
  • LLM integration: The LLM convincingly simulates a real system, delivering a high-interaction honeypot experience while keeping the underlying architecture low-interaction for security and easy management.
  • Multi-protocol support: SSH, HTTP, TCP, and MCP (detects prompt injection against LLM agents)
  • Prometheus metrics & observability (see the scrape-job sketch after this list)
  • Docker & Kubernetes ready
  • ELK stack ready, docs: Official ELK integration
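
The Prometheus integration exposes a metrics endpoint that a standard scrape job can collect. The snippet below is a minimal sketch of such a job; the /metrics path, the :2112 port, and the beelzebub target host are assumptions based on common defaults and should be checked against your own core configuration.

yaml
# prometheus.yml scrape job -- a sketch, not the project's documented setup.
# The metrics path, port, and target host are assumptions; adjust them to
# match the prometheus section of your Beelzebub core configuration.
scrape_configs:
  - job_name: "beelzebub"
    metrics_path: "/metrics"
    static_configs:
      - targets: ["beelzebub:2112"]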

LLM Honeypot Demo

[demo-beelzebub — animated demonstration]

Code Quality

We are strongly committed to maintaining high code quality in the Beelzebub project. Our development workflow includes comprehensive testing, code reviews, static analysis, and continuous integration to ensure the reliability and maintainability of the codebase.

What We Do

  • Automated Testing: Both unit and integration tests are run on every pull request to catch regressions and ensure stability.

  • Static Analysis: We use tools like Go Report Card and CodeQL to automatically check for code quality, style, and security issues.

  • Code Coverage: Our test coverage is monitored with Codecov, and we aim for extensive coverage of all core components.

  • Continuous Integration: Every commit triggers automated CI pipelines on GitHub Actions, which run all tests and quality checks.

  • Code Reviews: All new contributions undergo peer review to maintain consistency and high standards across the project.

Quick Start

You can run Beelzebub via Docker, the Go toolchain (cross-platform), or Helm (Kubernetes).

Using Docker Compose

  1. Build the Docker images:

    bash
    $ docker-compose build
    
  2. Start Beelzebub in detached mode:

    bash
    $ docker-compose up -d
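
For reference, a compose file behind this workflow might look like the sketch below. The service name, published ports, volume mount, and environment variable are illustrative assumptions; treat the docker-compose.yml shipped in the repository as the source of truth.

yaml
# docker-compose.yml -- illustrative sketch only; the ports, volume path, and
# environment variable are assumptions, not the repository's actual file.
services:
  beelzebub:
    build: .
    restart: always
    ports:
      - "2222:2222"   # example SSH honeypot service
      - "8080:8080"   # example HTTP honeypot service
    volumes:
      - ./configurations:/configurations
    environment:
      - OPEN_AI_SECRET_KEY=${OPEN_AI_SECRET_KEY}   # hypothetical variable name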
    

Using Go Compiler

  1. Download the necessary Go modules:

    bash
    $ go mod download
    
  2. Build the Beelzebub executable:

    bash
    $ go build
    
  3. Run Beelzebub:

    bash
    $ ./beelzebub
    

Deploy on a Kubernetes cluster using Helm

  1. Install Helm

  2. Deploy Beelzebub:

    bash
    $ helm install beelzebub ./beelzebub-chart
    
  3. Upgrade to a new release:

    bash
    $ helm upgrade beelzebub ./beelzebub-chart
    

Example Configuration

Beelzebub allows easy configuration for different services and ports. Simply create a new file for each service/port within the /configurations/services directory.

To run Beelzebub with a custom configuration path, use the following command:

bash
$ ./beelzebub --confCore ./configurations/beelzebub.yaml --confServices ./configurations/services/
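
The --confCore file holds the framework-wide settings (logging, tracing, Prometheus). The sketch below shows the general shape such a file can take; the key names are assumptions drawn from the upstream sample configuration and may differ between releases.

yaml
# configurations/beelzebub.yaml -- sketch of the core configuration.
# Key names are assumptions based on the upstream sample; verify them
# against the configuration shipped with your Beelzebub release.
core:
  logging:
    debug: false
    debugReportCaller: false
    logDisableTimestamp: true
    logsPath: ./logs
  tracings:
    rabbit-mq:
      enabled: false
      uri: ""
  prometheus:
    path: "/metrics"
    port: ":2112"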

Here are some example configurations for different honeypot scenarios:

MCP Honeypot

Why choose an MCP Honeypot?

An MCP honeypot is a decoy tool that the agent should never invoke under normal circumstances. Integrating this strategy into your agent pipeline offers three key benefits:

  • Real-time detection of guardrail bypass attempts.

    Instantly identify when a prompt injection attack successfully convinces the agent to invoke a restricted tool.

  • Automatic collection of real attack prompts for guardrail fine-tuning.

    Every activation logs genuine malicious prompts, enabling continuous improvement of your filtering mechanisms.

  • Continuous monitoring of attack trends through key metrics (HAR, TPR, MTP).

    Track exploit frequency and system resilience using objective, actionable measurements.

[video-mcp-diagram — MCP honeypot diagram]

Example MCP Honeypot Configuration
mcp-8000.yaml
yaml
apiVersion: "v1"
protocol: "mcp"
address: ":8000"
description: "MCP Honeypot"
tools:
  - name: "tool:user-account-manager"
    description: "Tool for querying and modifying user account details. Requires administrator privileges."
    params:
      - name: "user_id"
        description: "The ID of the user account to manage."
      - name: "action"
        description: "The action to perform on the user account, possible values are: get_details, reset_password, deactivate_account"
    handler: |
      {
        "tool_id": "tool:user-account-manager",
        "status": "completed",
        "output": {
          "message": "Tool 'tool:user-account-manager' executed successfully. Results are pending internal processing and will be logged.",
          "result": {
            "operation_status": "success",
            "details": "email: kirsten@gmail.com, role: admin, last-login: 02/07/2025"
          }
        }
      }
  - name: "tool:system-log"
    description: "Tool for querying system logs. Requires administrator privileges."
    params:
      - name: "filter"
        description: "The input used to filter the logs."
    handler: |
      {
        "tool_id": "tool:system-log",
        "status": "completed",
        "output": {
          "message": "Tool 'tool:system-log' executed successfully. Results are pending internal processing and will be logged.",
          "result": {
            "operation_status": "success",
            "details": "Info: email: kirsten@gmail.com, last-login: 02/07/2025"
          }
        }
      }

Invoke it remotely at beelzebub:<port>/mcp (Streamable HTTP server).
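
How the decoy endpoint is wired into an agent pipeline depends on the MCP client. The snippet below is a purely hypothetical registration, written in YAML for consistency with the rest of this page; its key names are illustrative, and the only essential detail is that the agent is pointed at the honeypot's /mcp URL over the Streamable HTTP transport.

yaml
# Hypothetical MCP client registration -- key names are illustrative only;
# real MCP clients use their own configuration formats. The essential part
# is the Streamable HTTP endpoint exposed by the honeypot.
mcpServers:
  beelzebub-decoy:
    transport: "streamable-http"
    url: "http://beelzebub:8000/mcp"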

HTTP Honeypot

http-80.yaml
yaml
apiVersion: "v1"
protocol: "http"
address: ":80"
description: "Wordpress 6.0"
commands:
  - regex: "^(/index.php|/index.html|/)$"
    handler:
      <html>
        <header>
          <title>Wordpress 6 test page</title>
        </header>
        <body>
          <h1>Hello from Wordpress</h1>
        </body>
      </html>
    headers:
      - "Content-Type: text/html"
      - "Server: Apache/2.4.53 (Debian)"
      - "X-Powered-By: PHP/7.4.29"
    statusCode: 200
  - regex: "^(/wp-login.php|/wp-admin)$"
    handler:
      <html>
        <header>
          <title>Wordpress 6 test page</title>
        </header>
        <body>
          <form action="" method="post">
            <label for="uname"><b>Username</b></label>
            <input type="text" placeholder="Enter Username" name="uname" required>

            <label for="psw"><b>Password</b></label>
            <input type="password" placeholder="Enter Password" name="psw" required>

            <button type="submit">Login</button>
          </form>
        </body>
      </html>
    headers:
      - "Content-Type: text/html"
      - "Server: Apache/2.4.53 (Debian)"
      - "X-Powered-By: PHP/7.4.29"
    statusCode: 200
  - regex: "^.*$"
    handler:
      <html>
        <header>
          <title>404</title>
        </header>
        <body>
          <h1>Not found!</h1>
        </body>
      </html>
    headers:
      - "Content-Type: text/html"
      - "Server: Apache/2.4.53 (Debian)"
      - "X-Powered-By: PHP/7.4.29"
    statusCode: 404

HTTP Honeypot

http-8080.yaml
yaml
apiVersion: "v1"
protocol: "http"
address: ":8080"
description: "Apache 401"
commands:
  - regex: ".*"
    handler: "Unauthorized"
    headers:
      - "www-Authenticate: Basic"
      - "server: Apache"
    statusCode: 401

SSH Honeypot

LLM Honeypots

Below is an SSH LLM honeypot using OpenAI as the LLM provider:

yaml
apiVersion: "v1"
protocol: "ssh"
address: ":2222"
description: "SSH interactive OpenAI  GPT-4"
commands:
  - regex: "^(.+)$"
    plugin: "LLMHoneypot"
serverVersion: "OpenSSH"
serverName: "ubuntu"
passwordRegex: "^(root|qwerty|Smoker666|123456|jenkins|minecraft|sinus|alex|postgres|Ly123456)$"
deadlineTimeoutSeconds: 60
plugin:
   llmProvider: "openai"
   llmModel: "gpt-4o" #Models https://platform.openai.com/docs/models
   openAISecretKey: "sk-proj-123456"

Example with a local Ollama instance using the codellama:7b model:

yaml
apiVersion: "v1"
protocol: "ssh"
address: ":2222"
description: "SSH Ollama Llama3"
commands:
  - regex: "^(.+)$"
    plugin: "LLMHoneypot"
serverVersion: "OpenSSH"
serverName: "ubuntu"
passwordRegex: "^(root|qwerty|Smoker666|123456|jenkins|minecraft|sinus|alex|postgres|Ly123456)$"
deadlineTimeoutSeconds: 60
plugin:
   llmProvider: "ollama"
   llmModel: "codellama:7b" #Models https://ollama.com/search
   host: "http://example.com/api/chat" #default http://localhost:11434/api/chat

Example with custom prompt:

yaml
apiVersion: "v1"
protocol: "ssh"
address: ":2222"
description: "SSH interactive OpenAI  GPT-4"
commands:
  - regex: "^(.+)$"
    plugin: "LLMHoneypot"
serverVersion: "OpenSSH"
serverName: "ubuntu"
passwordRegex: "^(root|qwerty|Smoker666|123456|jenkins|minecraft|sinus|alex|postgres|Ly123456)$"
deadlineTimeoutSeconds: 60
plugin:
   llmProvider: "openai"
   llmModel: "gpt-4o"
   openAISecretKey: "sk-proj-123456"
   prompt: "You will act as an Ubuntu Linux terminal. The user will type commands, and you are to reply with what the terminal should show. Your responses must be contained within a single code block."

SSH Honeypot
ssh-22.yaml
yaml
apiVersion: "v1"
protocol: "ssh"
address: ":22"
description: "SSH interactive"
commands:
  - regex: "^ls$"
    handler: "Documents Images Desktop Downloads .m2 .kube .ssh .docker"
  - regex: "^pwd$"
    handler: "/home/"
  - regex: "^uname -m$"
    handler: "x86_64"
  - regex: "^docker ps$"
    handler: "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES"
  - regex: "^docker .*$"
    handler: "Error response from daemon: dial unix docker.raw.sock: connect: connection refused"
  - regex: "^uname$"
    handler: "Linux"
  - regex: "^ps$"
    handler: "PID TTY TIME CMD\n21642 ttys000 0:00.07 /bin/dockerd"
  - regex: "^(.+)$"
    handler: "command not found"
serverVersion: "OpenSSH"
serverName: "ubuntu"
passwordRegex: "^(root|qwerty|Smoker666)$"
deadlineTimeoutSeconds: 60
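
The examples above cover MCP, HTTP, and SSH; the fourth supported protocol, TCP, follows the same pattern. Below is a minimal sketch of a TCP service definition that simply replies with a fake MySQL banner; the banner field name is an assumption, so check the sample configurations in the repository for the exact schema.

yaml
# tcp-3306.yaml -- sketch of a minimal TCP honeypot returning a fake MySQL banner.
# The banner key is an assumption; confirm the field name against the sample
# service configurations shipped with the repository.
apiVersion: "v1"
protocol: "tcp"
address: ":3306"
description: "Fake MySQL banner"
banner: "8.0.29"
deadlineTimeoutSeconds: 60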

Testing

Maintaining excellent code quality is essential for security-focused projects like Beelzebub. We welcome all contributors who share our commitment to robust, readable, and reliable code!

Unit Tests

For contributors, we maintain a comprehensive suite of unit and integration tests that cover the core functionality of Beelzebub. To run the unit tests, use the following command:

bash
$ make test.unit

Integration Tests

To run integration tests:

bash
$ make test.dependencies.start
$ make test.integration
$ make test.dependencies.down

Roadmap

Our future plans for Beelzebub include developing it into a robust PaaS platform.

Contributing

The Beelzebub team welcomes contributions and project participation. Whether you want to report bugs, contribute new features, or have any questions, please refer to our Contributor Guide for detailed information. We encourage all participants and maintainers to adhere to our Code of Conduct and foster a supportive and respectful community.

Happy hacking!

License

Beelzebub is licensed under the GNU GPL v3 License.

Beelzebub is a member of NVIDIA Inception


Supported by JetBrains and GitBook.


Repository Details

Language: Go
Default Branch: main
Size: 422 KB
Contributors: 10
License: GNU General Public License v3.0
MCP Verified: Nov 12, 2025

Programming Languages

Go 97.4% · Smarty 1.57% · Makefile 0.67% · Dockerfile 0.36%

Topics

acis agentic-ai-security cloudnative cloudsecurity cybersecurity deception decoys framework go honeypot llama llm llm-honeypot llm-security mcp mcp-honeypot preemptive-cybersecurity research-project security whitehat


Related MCPs

Discover similar Model Context Protocol servers

  • AIM Guard MCP


    AI-powered security and safety server for Model Context Protocol environments.

    AIM Guard MCP is a server implementing the Model Context Protocol (MCP), providing AI-powered security analysis and safety instruction tools tailored for AI agents. It offers features such as contextual security instructions, harmful content detection, API key scanning, and prompt injection detection, all designed to guard and protect interactions with various MCPs and external services. Built for fast integration, it connects with the AIM Intelligence API and is compatible with any MCP-compliant AI assistant.

    • 13
    • MCP
    • AIM-Intelligence/AIM-MCP
  • Semgrep MCP Server


    A Model Context Protocol server powered by Semgrep for seamless code analysis integration.

    Semgrep MCP Server implements the Model Context Protocol (MCP) to enable efficient and standardized communication for code analysis tasks. It facilitates integration with platforms like LM Studio, Cursor, and Visual Studio Code, providing both Docker and Python (PyPI) deployment options. The tool is now maintained in the main Semgrep repository with continued updates, enhancing compatibility and support across developer tools.

    • 611
    • MCP
    • semgrep/mcp
  • mcp-recon


    Conversational reconnaissance interface and MCP server for HTTP and ASN analysis.

    mcp-recon acts as a conversational interface and Model Context Protocol (MCP) server, enabling seamless web domain and ASN reconnaissance through natural language prompts. It integrates powerful tooling like httpx and asnmap to conduct lightweight or full HTTP analysis and ASN lookups, exposing these capabilities to any MCP-compatible AI assistant. With predefined prompts and Docker-based deployment, it streamlines infrastructure analysis via AI interfaces such as Claude Desktop.

    • 22
    • MCP
    • nickpending/mcp-recon
  • Vibe Check MCP


    Plug & play agent oversight tool to keep LLMs aligned, reflective, and safe.

    Vibe Check MCP provides a mentor layer over large language model agents to prevent over-engineering and promote optimal, minimal pathways. Leveraging research-backed oversight, it integrates seamlessly as an MCP server with support for STDIO and streamable HTTP transport. The platform enhances agent reliability, improves task success rates, and significantly reduces harmful actions. Designed for easy plug-and-play with MCP-aware clients, it is trusted across multiple MCP platforms and registries.

    • 315
    • MCP
    • PV-Bhat/vibe-check-mcp-server
  • MCP Server for TheHive


    Connect AI-powered automation tools to TheHive incident response platform via MCP.

    MCP Server for TheHive enables AI models and automation clients to interact with TheHive incident response platform using the Model Context Protocol. It provides tools to retrieve and analyze security alerts, manage cases, and automate incident response operations. The server facilitates seamless integration by exposing these functionalities over the standardized MCP protocol through stdio communication. It offers both pre-compiled binaries and a source build option with flexible configuration for connecting to TheHive instances.

    • 11
    • MCP
    • gbrigandi/mcp-server-thehive
  • Intruder MCP


    Enable AI agents to control Intruder.io via the Model Context Protocol.

    Intruder MCP allows AI model clients such as Claude and Cursor to interactively control the Intruder vulnerability scanner through the Model Context Protocol. It can be deployed using smithery, locally with Python, or in a Docker container, requiring only an Intruder API key for secure access. The tool provides integration instructions tailored for MCP-compatible clients, streamlining vulnerability management automation for AI-driven workflows.

    • 21
    • MCP
    • intruder-io/intruder-mcp