MCP-Human

Enabling human-in-the-loop decision making for AI assistants via the Model Context Protocol.

20 Stars · 3 Forks · 20 Watchers · 1 Issue
MCP-Human is a server implementing the Model Context Protocol that connects AI assistants with real human input on demand. It creates tasks on Amazon Mechanical Turk, allowing humans to answer questions when AI systems require assistance. This solution demonstrates human-in-the-loop AI by providing a bridge between AI models and external human judgment through a standardized protocol. Designed primarily as a proof-of-concept, it can be easily integrated with MCP-compatible clients.

Key Features

Implements Model Context Protocol (MCP) server
Human-in-the-loop AI task processing
Automated task creation on Amazon Mechanical Turk
Supports sandbox and production MTurk environments
Configurable via environment variables
Seamless integration with MCP-compatible AI clients
Real-time human input for AI queries
Support for Claude and generic MCP clients
Reward amount customization
AWS credentials and region configuration

Use Cases

AI assistant seeking human judgment for low-confidence answers
Augmenting model decision making with real human feedback
Automated task dispatch to human workers via MTurk
Testing human-in-the-loop workflows for AI research
Integrating crowd-sourced expertise in dynamic AI applications
Prototyping context-aware AI assistants
Experimental platforms involving collaboration between AI and humans
Building compliance or safety checks into AI responses
Scaling QA or annotation with real-time human workers
Deploying hybrid intelligence systems in production or sandbox settings

README

MCP-Human: Human Assistance for AI Assistants

A Model Context Protocol (MCP) server that enables AI assistants to get human input when needed. This tool creates tasks on Amazon Mechanical Turk that let real humans answer questions from AI systems. While primarily a proof-of-concept, it demonstrates how to build human-in-the-loop AI systems using the MCP standard. See limitations for current constraints.

we need to go deeper

Setup

Prerequisites

  • Node.js 16+
  • AWS credentials with MTurk permissions. See instructions below.
  • AWS CLI (recommended for setting AWS credentials)

Configuring AWS credentials

sh
# Configure AWS credentials for profile mcp-human
export AWS_ACCESS_KEY_ID="your_access_key"
export AWS_SECRET_ACCESS_KEY="your_secret_key"
aws configure set aws_access_key_id ${AWS_ACCESS_KEY_ID} --profile mcp-human
aws configure set aws_secret_access_key ${AWS_SECRET_ACCESS_KEY} --profile mcp-human
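
To confirm the profile was written correctly, you can list what the AWS CLI stored for it:

sh
# Show the credentials and region currently associated with the mcp-human profile
aws configure list --profile mcp-human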

Configuring MCP server with your MCP client

Claude Code

Sandbox mode:

sh
claude mcp add human -- npx -y mcp-human@latest

The server defaults to sandbox mode (for testing). To submit real requests to workers, set MTURK_SANDBOX=false:

sh
claude mcp add human -e MTURK_SANDBOX=false -- npx -y mcp-human@latest

Generic

Update the configuration of your MCP client to the following:

json
{
  "mcpServers": {
    "human": {
      "command": "npx",
      "args": ["-y", "mcp-human@latest"]
    }
  }
}

e.g. Claude Desktop (macOS): ~/Library/Application\ Support/Claude/claude_desktop_config.json

Configuration

The server can be configured with the following environment variables:

Variable         Description                                         Default
MTURK_SANDBOX    Use MTurk sandbox (true) or production (false)      true
AWS_REGION       AWS region for MTurk                                us-east-1
AWS_PROFILE      AWS profile to use for credentials                  mcp-human
DEFAULT_REWARD   The reward amount in USD                            0.05
FORM_URL         URL where the form is hosted (must be HTTPS)        https://syskall.com/mcp-human/
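
These variables can also be passed through the MCP client configuration itself. As a sketch, for clients whose mcpServers entries accept an env object (Claude Desktop does), a production-mode setup might look like the following; the values shown are illustrative:

json
{
  "mcpServers": {
    "human": {
      "command": "npx",
      "args": ["-y", "mcp-human@latest"],
      "env": {
        "MTURK_SANDBOX": "false",
        "AWS_PROFILE": "mcp-human",
        "AWS_REGION": "us-east-1",
        "DEFAULT_REWARD": "0.10"
      }
    }
  }
}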

Setting Up AWS User with Mechanical Turk Access

To create an AWS user with appropriate permissions for Mechanical Turk:

  1. Log in to the AWS Management Console:

  2. Create a new IAM User:

    • Navigate to IAM (Identity and Access Management)
    • Click "Users" > "Create user"
    • Enter a username (e.g., mturk-api-user)
    • Click "Next" to proceed to permissions
  3. Set Permissions:

    • Choose "Attach existing policies directly"
    • Search for and select AmazonMechanicalTurkFullAccess
    • If you need more granular control, you can create a custom policy with specific MTurk permissions
    • Click "Next" and then "Create user"
  4. Create Access Keys:

    • After user creation, click on the username to go to their detail page
    • Go to the "Security credentials" tab
    • In the "Access keys" section, click "Create access key"
    • Choose "Application running outside AWS" or appropriate option
    • Click through the wizard and finally "Create access key"
  5. Save Credentials:

    • Download the CSV file or copy the Access key ID and Secret access key
    • These will be used as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables
    • Important: This is the only time you'll see the secret access key, so save it securely
  6. Configure MTurk Requester Settings:

Note: Always start with the MTurk Sandbox (MTURK_SANDBOX=true) to test your integration without spending real money. Only switch to production when you're confident in your implementation.
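
As a quick sanity check that the new credentials can reach Mechanical Turk, you can query the sandbox account balance with the AWS CLI (the sandbox reports a fixed $10,000.00 balance):

sh
# Query the MTurk sandbox using the mcp-human profile created above
aws mturk get-account-balance \
  --profile mcp-human \
  --region us-east-1 \
  --endpoint-url https://mturk-requester-sandbox.us-east-1.amazonaws.com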

Architecture

This system consists of two main components:

  1. MCP Server: A server implementing the Model Context Protocol that integrates with MTurk
  2. Form: A static HTML form that workers use to view questions and submit their answers

The AI assistant connects to the MCP server, which creates tasks on MTurk. Human workers complete these tasks through a form, and their responses are made available to the AI assistant.

The Mechanical Turk form used is hosted on GitHub pages: https://syskall.com/mcp-human/. It gets populated with data through query parameters.
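
As a rough sketch of how the pieces fit together: MTurk's ExternalQuestion format wraps a URL and an iframe height in a small XML document, and the question text can be carried to the hosted form as a query parameter. The parameter name used below is an assumption for illustration, not the project's actual contract:

javascript
// Illustrative only: build an ExternalQuestion payload that points MTurk workers
// at the hosted form, passing the question via a query string parameter.
const FORM_URL = "https://syskall.com/mcp-human/";

function buildExternalQuestion(question) {
  // "question" as a parameter name is assumed for this sketch
  const url = `${FORM_URL}?question=${encodeURIComponent(question)}`;
  return `<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>${url}</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>`;
}

console.log(buildExternalQuestion("What's a good name for a mood-aware smart lamp?"));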

MCP Tools

askHuman

Allows an AI to ask a question to a human worker on Mechanical Turk.

Parameters:

  • question: The question to ask a human worker
  • reward: The reward amount in USD (default: $0.05)
  • title: Title for the HIT (optional)
  • description: Description for the HIT (optional)
  • hitValiditySeconds: Time until the HIT expires in seconds (default: 1 hour)

Example usage:

javascript
// From the AI assistant's perspective
const response = await call("askHuman", {
  question:
    "What's a creative name for a smart home device that adjusts lighting based on mood?",
  reward: "0.25",
  title: "Help with creative product naming",
  hitValiditySeconds: 3600, // HIT valid for 1 hour
});

If a worker responds within the HIT's validity period, the response will contain their answer. If not, it will return a HIT ID that can be checked later.

checkHITStatus

Check the status of a previously created HIT and retrieve any submitted assignments.

Parameters:

  • hitId: The HIT ID to check status for

Example usage:

javascript
// From the AI assistant's perspective
const status = await call("checkHITStatus", {
  hitId: "3XMVN1BINNIXMTM9TTDO1GKMW7SGGZ",
});
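
Because results are fetched by polling rather than pushed via webhook (see limitations), a client-side loop along these lines is one way to wait for an answer. The shape of the returned status object is an assumption here, so treat this as a sketch:

javascript
// Hypothetical polling loop: re-check the HIT every 30 seconds until a worker
// submits an assignment or the validity window runs out.
async function waitForAnswer(hitId, timeoutMs = 3600 * 1000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await call("checkHITStatus", { hitId });
    // "assignments" is an assumed field name for illustration
    if (status.assignments && status.assignments.length > 0) {
      return status.assignments[0];
    }
    await new Promise((resolve) => setTimeout(resolve, 30_000));
  }
  return null; // no answer within the validity period
}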

Resources

mturk-account

Provides access to MTurk account information.

URIs:

  • mturk-account://balance - Get account balance
  • mturk-account://hits - List HITs
  • mturk-account://config - Get configuration info
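
A minimal sketch of reading one of these resources from a standalone script, assuming the official @modelcontextprotocol/sdk client API and a local npx launch of the server:

javascript
// Sketch: connect to mcp-human over stdio and read the account balance resource.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "mcp-human@latest"],
});
const client = new Client({ name: "example-client", version: "1.0.0" });

await client.connect(transport);
const balance = await client.readResource({ uri: "mturk-account://balance" });
console.log(balance.contents);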

Limitations

  • Progress notifications are not yet implemented, so long-running requests can time out
  • Currently only supports simple text-based questions and answers
  • Limited to one assignment per HIT
  • No support for custom HTML/JS in the form
  • Simple polling for results rather than a webhook approach
  • Uses MTurk's ExternalQuestion format, which requires hosting a form

Repository Owner

olalonde (User)

Repository Details

Language: JavaScript
Default Branch: master
Size: 1,062 KB
Contributors: 1
MCP Verified: Nov 12, 2025

Programming Languages

JavaScript 54.51%
HTML 25.66%
TypeScript 19.83%

Topics

aws aws-mturk llm mcp mcp-server

Related MCPs

Discover similar Model Context Protocol servers

  • interactive-mcp

    Enable interactive, local communication between LLMs and users via MCP.

    interactive-mcp implements a Model Context Protocol (MCP) server in Node.js/TypeScript, allowing Large Language Models (LLMs) to interact directly with users on their local machine. It exposes tools for requesting user input, sending notifications, and managing persistent command-line chat sessions, facilitating real-time communication. Designed for integration with clients like Claude Desktop and VS Code, it operates locally to access OS-level notifications and command prompts. The project is suited for interactive workflows where LLMs require user involvement or confirmation.

    • 313
    • MCP
    • ttommyth/interactive-mcp
  • kibitz

    The coding agent for professionals with MCP integration.

    kibitz is a coding agent that supports advanced AI collaboration by enabling seamless integration with Model Context Protocol (MCP) servers via WebSockets. It allows users to configure Anthropic API keys, system prompts, and custom context providers for each project, enhancing contextual understanding for coding tasks. The platform is designed for developers and professionals seeking tailored AI-driven coding workflows and provides flexible project-specific configuration.

    • 104
    • MCP
    • nick1udwig/kibitz
  • Model Context Protocol Server for Home Assistant

    Seamlessly connect Home Assistant to LLMs for natural language smart home control via MCP.

    Enables integration between a local Home Assistant instance and language models using the Model Context Protocol (MCP). Facilitates natural language monitoring and control of smart home devices, with robust API support for state management, automation, real-time updates, and system administration. Features secure, token-based access, and supports mobile and HTTP clients. Designed to bridge Home Assistant environments with modern AI-driven automation.

    • 468
    • MCP
    • tevonsb/homeassistant-mcp
  • MCP Linear

    MCP server for AI-driven control of Linear project management.

    MCP Linear is a Model Context Protocol (MCP) server implementation that enables AI assistants to interact with the Linear project management platform. It provides a bridge between AI systems and the Linear GraphQL API, allowing the retrieval and management of issues, projects, teams, and more. With MCP Linear, users can create, update, assign, and comment on Linear issues, as well as manage project and team structures directly through AI interfaces. The tool supports seamless integration via Smithery and can be configured for various AI clients like Cursor and Claude Desktop.

    • 117
    • MCP
    • tacticlaunch/mcp-linear
  • Think MCP Tool

    Structured reasoning for agentic AI with the 'think' tool via Model Context Protocol.

    Think MCP Tool provides an MCP (Model Context Protocol) server implementing the 'think' tool for structured reasoning in agentic AI workflows. Inspired by Anthropic's research, it enables AI agents to pause and explicitly record thoughts during complex, multi-step problem solving without altering the environment. The system enhances sequential decision-making, policy compliance, and tool output analysis, and offers advanced extensions for criticism, planning, and searching. Suitable for integration with Claude or other agentic large language models.

    • 80
    • MCP
    • Rai220/think-mcp
  • Perplexity MCP Server

    MCP Server integration for accessing the Perplexity API with context-aware chat completion.

    Perplexity MCP Server provides a Model Context Protocol (MCP) compliant server that interfaces with the Perplexity API, enabling chat completion with citations. Designed for seamless integration with clients such as Claude Desktop, it allows users to send queries and receive context-rich responses from Perplexity. Environment configuration for API key management is supported, and limitations with long-running requests are noted. Future updates are planned to enhance support for client progress reporting.

    • 85
    • MCP
    • tanigami/mcp-server-perplexity