Model Context Protocol (MCP) is an open standard that enables developers to connect large language models to custom tools, services, and data sources. With Heroku’s Managed Inference and Agents add-on, you can deploy MCP servers and automatically integrate them with your AI models.

Quick Start

  1. Choose or create an MCP server - Use a Heroku template or build your own
  2. Deploy to Heroku - Push your MCP app to Heroku
  3. Add MCP line to Procfile - Declare your MCP server process
  4. Attach to your model - Connect the MCP app to your AI model
  5. Use in agents - Tools become available via /v1/agents/heroku

Why Use MCP on Heroku?

Automatic Tool Orchestration

Instead of manually building control loops to handle tool calls, Heroku automatically:
  • Registers your custom tools
  • Executes tool calls in secure, isolated dynos
  • Manages multiple MCP servers and tool sets
  • Handles tool responses and error conditions

Security and Isolation

  • Tools run in one-off dynos, not long-running processes
  • Each tool execution is isolated
  • No need for multi-tenant MCP servers
  • Reduced security risks and costs

Simplified Management

  • Deploy standard apps - no special MCP hosting required
  • Use familiar Heroku workflows
  • Manage all tools through a single endpoint
  • Scale tool execution independently

Deploy and Register Custom MCP Servers

Step 1: Choose or Create an MCP Server

The fastest way to get started is to use one of Heroku’s example MCP servers. Each template includes a “Deploy to Heroku” button for one-click deployment:
| Purpose | Repository | Tools Included |
| --- | --- | --- |
| Ruby Code Execution | mcp-code-exec-ruby | code_exec_ruby |
| Python Code Execution | mcp-code-exec-python | code_exec_python |
| Go Code Execution | mcp-code-exec-go | code_exec_go |
| Node Code Execution | mcp-code-exec-node | code_exec_node |
| Document Parsing | mcp-doc-reader | HTML & PDF to Markdown conversion |
These tools are also available as built-in heroku_tools. Deploying them as MCP servers offers additional benefits, such as no upper limit on ttl_seconds for dyno runtime.

Build Your Own MCP Server

To create a custom MCP server, implement the MCP protocol specification:
  1. Define your tools with clear descriptions and input schemas
  2. Implement tool execution logic
  3. Handle MCP protocol messages (initialize, tools/list, tools/call)
  4. Add language-specific dependencies (requirements.txt, Gemfile, etc.)
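Before deploying, you can sanity-check the protocol plumbing locally. The sketch below builds the minimal MCP handshake sequence (initialize, the initialized notification, then tools/list) as newline-delimited JSON-RPC messages; the method names and protocolVersion come from the MCP specification, while the server command to pipe them into is whatever your Procfile's mcp line declares:

```shell
# Emit the minimal MCP handshake as newline-delimited JSON-RPC messages.
# Method names and protocolVersion follow the MCP specification.
handshake() {
  printf '%s\n' \
    '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.1"}}}' \
    '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
    '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'
}

# Smoke-test locally by piping into your server command, e.g. (assuming a
# Python server declared as `mcp: python -m src.stdio_server`):
#   handshake | python -m src.stdio_server
handshake
```

A working server should answer the initialize request and return your tool definitions in response to tools/list.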

Step 2: Deploy to Heroku

Deploy your MCP server like any standard Heroku app:
# Create a new Heroku app
heroku create my-mcp-server

# Deploy your code
git push heroku main
Or use the “Deploy to Heroku” button if using a template.

Step 3: Add MCP Line to Procfile

To register your MCP server with Heroku, add a process declaration to your Procfile:

Requirements

  • Process name must start with "mcp"
  • Process name must be unique across all apps registered with your model
  • Must specify the STDIO server command

Example Procfiles

Python MCP Server:
web: gunicorn app:app
mcp: python -m src.stdio_server
Ruby MCP Server:
web: bundle exec rails server
mcp: bundle exec ruby lib/mcp_server.rb
Node MCP Server:
web: node server.js
mcp: node src/mcp-server.js
Go MCP Server:
web: ./bin/web
mcp: ./bin/mcp-server
The mcp process entry only declares how to run your server; it doesn't need to run continuously. Heroku executes it in one-off dynos when tools are called.
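A quick way to sanity-check the naming requirement above before pushing is a grep over your Procfile. This is a sketch; the sample Procfile written here is illustrative, so point the check at your real one in practice:

```shell
# Write a sample Procfile (illustrative -- use your app's real Procfile)
cat <<'EOF' > /tmp/Procfile
web: gunicorn app:app
mcp: python -m src.stdio_server
EOF

# The process name must start with "mcp" and end with a colon
if grep -qE '^mcp[A-Za-z0-9_-]*:' /tmp/Procfile; then
  echo "MCP process declared"
else
  echo "no MCP process found" >&2
fi
```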

Step 4: Attach to Your Model

Connect your MCP app to a Heroku Managed Inference and Agents model to make tools available:

Option A: Create and Attach a New Model

If you don’t have a model yet:
heroku ai:models:create claude-4-sonnet -a my-mcp-server --as INFERENCE
This creates a new model resource and attaches it to your MCP app.

Option B: Attach an Existing Model

If you already have a model resource:
heroku addons:attach MY_MODEL_RESOURCE -a my-mcp-server --as INFERENCE
Replace MY_MODEL_RESOURCE with your model’s resource ID or alias.

Option C: Attach MCP Server to Your Main App

If you have a separate app that makes inference requests:
# Attach your model to the MCP server app
heroku addons:attach my-main-app::INFERENCE -a my-mcp-server --as INFERENCE
This grants your existing model access to the new MCP tools.

Step 5: Verify Registration

After attaching, Heroku automatically:
  • Registers your MCP server
  • Syncs available tools
  • Makes tools callable via /v1/agents/heroku
Check registration status:
export INFERENCE_KEY=$(heroku config:get INFERENCE_KEY -a my-main-app)
export INFERENCE_URL=$(heroku config:get INFERENCE_URL -a my-main-app)

curl "$INFERENCE_URL/v1/mcp/servers" \
  -H "Authorization: Bearer $INFERENCE_KEY" | jq
You should see your MCP server with "server_status": "registered" and "primitives_status": "synced".
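If you want to script this check (for example in CI), a minimal sketch follows. It assumes the response contains server objects with the two status fields shown above; the surrounding array shape and any other fields are assumptions, and in practice you would save the curl output to the file first:

```shell
# Illustrative sample response; only server_status and primitives_status
# are documented above -- the array shape is an assumption.
cat <<'EOF' > /tmp/mcp_servers.json
[
  {"server_status": "registered", "primitives_status": "synced"}
]
EOF

# Fail loudly unless a server reports registered + synced
if grep -q '"server_status": "registered"' /tmp/mcp_servers.json \
   && grep -q '"primitives_status": "synced"' /tmp/mcp_servers.json; then
  echo "MCP server registered and synced"
else
  echo "MCP server not ready" >&2
  exit 1
fi
```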
Private Spaces Limitation: MCP servers in Private Spaces cannot currently be registered or used by /v1/agents/heroku.

Using MCP Tools with Agents

Once registered, your MCP tools become available through the /v1/agents/heroku endpoint. Include them in the tools array with "type": "mcp".

Basic Example

curl "$INFERENCE_URL/v1/agents/heroku" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $INFERENCE_KEY" \
  -H "X-Forwarded-Proto: https" \
  -d @- 
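The `-d @-` flag above reads the request body from standard input. A sketch of what that body might look like, assuming a registered code_exec_python tool: the `"type": "mcp"` entry in the tools array is documented above, while the model name, message shape, and tool name are illustrative assumptions.

```shell
# Illustrative request body for /v1/agents/heroku; only "type": "mcp"
# in the tools array is documented -- other fields are assumptions.
cat <<'EOF' > /tmp/agents_request.json
{
  "model": "claude-4-sonnet",
  "messages": [
    {"role": "user", "content": "Use Python to compute 21 * 2."}
  ],
  "tools": [
    {"type": "mcp", "name": "code_exec_python"}
  ]
}
EOF

echo "wrote $(wc -c < /tmp/agents_request.json) bytes"
```

Pass it to curl with `-d @/tmp/agents_request.json`, or pipe it on stdin to keep the `-d @-` form shown above.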

The MCP Inspector interface uses SSE, but the underlying tool calls are executed in secure, isolated one-off dynos (STDIO mode).

Additional Reading

  • Managed Inference and Agents API /v1/agents/heroku: https://devcenter.heroku.com/articles/heroku-inference-api-v1-agents-heroku
  • Managed Inference and Agents API /v1/mcp/servers: https://devcenter.heroku.com/articles/heroku-inference-api-v1-mcp-servers