Source: Jahgirdar, Manoj1
In this article, I’ll walk through the evolution of AI tool integration, explain what the Model Context Protocol (MCP) is, and show why MCP alone isn’t production-ready. Then we’ll explore real-world gateway implementations between AI agents and external tools.
MCP Gateway demo
Watch the AI Gateway use MCP to track and manage agentic traffic. In this demo, three AI agents interact with GitHub, Linear, and OpenAI.
Why does MCP need a gateway?
In November 2024, Anthropic introduced the Model Context Protocol (MCP), a lightweight standard designed to bridge the gap between AI systems and real-world applications.3 At its core, MCP is a translator that enables AI models to interact with external tools consistently.
Teams started building custom integrations
However, when building enterprise applications, teams have traditionally needed to create a custom integration for every application they want an AI to use. Each of those integrations carries its own token handling, session management, rate limiting, and security policies, all of which need to be observable and enforceable at scale.
“Just add another /call_something endpoint” became a common approach.
Fragmentation and complexity multiplied
As more integrations were added, duplicated logic, inconsistent interfaces, and ad hoc designs began to create friction. Maintenance became burdensome, security risks increased, and performance issues emerged, especially as systems scaled.

Without governance capabilities such as observability, rate limiting, and policy enforcement in place, integrations risk becoming brittle. MCP solves the what and how of agent-to-tool communication, but not the where, when, or under what conditions.
MCP Gateway: A secure middleware for managing AI-tool communication
Just as traditional systems rely on API Gateways to handle rate limiting and authentication, agentic AI applications require an MCP Gateway to manage standardized communication between agents and MCP servers.
With MCP paired with a gateway, developers can expose thousands of pre-built actions, enabling AI to connect seamlessly to tools like Slack, Notion, Salesforce, and Gmail at scale.

To bridge this gap, the MCP Gateway serves as a secure middleware layer that productionizes MCP for enterprise use.
Similar to an API Gateway, it enhances an MCP-based architecture by adding essential operational and security capabilities:
- Centralized routing and registration: AI agents send all requests through a unified gateway endpoint. The gateway identifies the requesting agent, determines which tool is being invoked, and routes the request to the appropriate MCP server.
- Policy enforcement and governance: The gateway applies consistent rules across all integrated tools, including:
  - access control (who can use what),
  - rate limiting (how often tools can be called),
  - tenant isolation (ensuring data and sessions are kept separate across users or teams).
- Session continuity and context tracking: Enables AI agents to retain awareness of previous interactions by maintaining shared memory and scoped session data.
- Operational efficiency: Instead of each agent opening separate connections to every tool (which creates redundant handshakes, authentication flows, and data serialization), the gateway handles routing, batching, and session reuse.
- Unified observability: Logging, monitoring, error tracking, and usage analytics are handled centrally, making system behavior easier to optimize.
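To make these capabilities concrete, here is a minimal Python sketch of the request path through a gateway. The agent IDs, rate limits, and MCP server registry are hypothetical stand-ins for whatever a real gateway would load from configuration; a production gateway would also forward the request over MCP rather than return a stub.

import time
from collections import defaultdict, deque

# Hypothetical registry: tool name -> MCP server endpoint (illustrative values)
MCP_SERVERS = {
    "github": "http://mcp-github:8080",
    "linear": "http://mcp-linear:8080",
}

# Hypothetical per-agent policy: which tools an agent may call, and how often
POLICIES = {
    "support-bot": {"allowed_tools": {"linear"}, "calls_per_minute": 30},
    "release-bot": {"allowed_tools": {"github", "linear"}, "calls_per_minute": 60},
}


class GatewayPolicyError(Exception):
    """Raised when a request violates access control or rate limits."""


class MinimalMCPGateway:
    """Sketch of centralized routing plus policy enforcement, not a real gateway."""

    def __init__(self) -> None:
        self._recent_calls = defaultdict(deque)  # agent_id -> call timestamps

    def route(self, agent_id: str, tool: str, payload: dict) -> dict:
        policy = POLICIES.get(agent_id)
        if policy is None or tool not in policy["allowed_tools"]:
            raise GatewayPolicyError(f"{agent_id} is not allowed to call {tool}")

        # Sliding-window rate limit: drop timestamps older than 60 seconds.
        now = time.time()
        window = self._recent_calls[agent_id]
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) >= policy["calls_per_minute"]:
            raise GatewayPolicyError(f"{agent_id} exceeded its rate limit")
        window.append(now)

        # A real gateway would forward the JSON-RPC payload to the MCP server
        # over HTTP/SSE/WebSocket and stream the response back to the agent.
        return {"routed_to": MCP_SERVERS[tool], "request": payload}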
OpenAI-compatible and lightweight MCP Gateways
These gateways are designed to make MCP tools easily accessible to agents and AI clients, prioritizing compatibility with OpenAI APIs, local development setups, or UI tools.
Gateway | Transport support | UI compatibility | LLM API format |
---|---|---|---|
MCP Bridge | HTTP, WebSocket, SSE, STDIO | Open Web UI, LM Studio, others | OpenAI /chat/completions |
Director.run | WebSocket only | Open Web UI | OpenAI-style* |
LM Studio MCP Proxy | HTTP | LM Studio only | OpenAI-style |
*Any system mimicking OpenAI’s API format
MCP Bridge
MCP Bridge is a lightweight gateway designed to connect MCP tools with clients that use the OpenAI API format. Its purpose is to make MCP tools accessible to any LLM client, including those that do not natively support MCP, by providing a compatible interface.
For example, tools like Open Web UI can call MCP functions directly without needing to handle the underlying MCP protocol.
If you’re already using OpenAI clients, this is the easiest way to get into tool-augmented AI without changing your whole stack.
Key features of MCP Bridge
OpenAI-compatible endpoints:
Enables AI clients to use OpenAI-style chat/completions or completions APIs to interact with MCP tools without modification.
MCP tool compatibility:
Supports invoking any tool registered with an MCP server, including tools that use structured prompts or sampling workflows.
This enables AI clients to perform real-world actions such as querying databases, sending messages, or triggering APIs by using standard JSON-RPC calls without direct integration with each tool.
SSE bridge for external clients:
Enables real-time communication by exposing a Server-Sent Events (SSE) interface. This allows external clients—such as web-based UIs or agent frameworks—to receive live updates from MCP tools as they execute, without needing to poll or maintain a WebSocket connection.
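As an illustration of the OpenAI-compatible endpoint, the sketch below points the official openai Python client at a locally running MCP Bridge. The base URL, port, and API key are assumptions borrowed from the sample configuration shown later in this article, not values the project mandates.

from openai import OpenAI

# Assumption: MCP Bridge runs locally on port 9090 with auth enabled,
# as in the sample configuration later in this article.
client = OpenAI(
    base_url="http://localhost:9090/v1",
    api_key="your-secure-api-key-here",
)

# The request looks like an ordinary chat completion; MCP Bridge decides
# whether the registered MCP tools (e.g., the fetch server) should be invoked.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Fetch https://example.com and summarize it"}],
)
print(response.choices[0].message.content)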
Tool integration Gateways for MCP and agentic AI
These gateways are built to expose new tools to MCP by wrapping REST APIs or CLI tools, and often support no-code or low-code setup for faster development.
Gateway | Transport support | Security features | Governance features |
---|---|---|---|
Lasso MCP Gateway | HTTP, SSE, WebSocket, STDIO | JSON Web Tokens (JWT) & basic auth; plugin-based guardrails; PII masking; token redaction | Prompt versioning; centralized logging; policy enforcement; audit trail |
ContextForge | HTTP, SSE, WebSocket, STDIO | JSON Web Tokens (JWT) & basic auth; scoped virtual servers (isolation via endpoint) | Prompt versioning (Jinja2 with rollback); structured logs; basic access control via scopes |
Lasso
Unlike lightweight gateways such as MCP Bridge or productized platforms like Zapier MCP Gateway, Lasso is built specifically for enterprise-grade security.
Launched in April 2025, Lasso MCP Gateway is an open-source proxy and orchestration layer that sits between AI agents and multiple MCP servers.
Think of it as a central coordination point: rather than having each agent connect individually to every tool, all communication flows through the gateway. This simplifies the architecture and makes it easier to manage tool access, routing, and performance.
Key features of Lasso
Deep security enforcement via customizable plugins:
Lasso provides a plugin-based guardrail system that allows developers to enforce security at the request/response level.
Plugins like presidio (for PII detection) can be added to inspect, sanitize, or block traffic for enterprise-grade data protection.
Observability and auditability across MCP tool interactions:
Lasso logs all tool calls, prompt executions, and resource reads in a structured JSON format. It integrates easily with observability tools like ELK, Prometheus, or Grafana, enabling teams to trace behavior, monitor usage, and audit actions across all agents and tools.
Support for complex, multi-protocol deployments:
Lasso supports multiple transport protocols, including HTTP, WebSocket, Server-Sent Events (SSE), and STDIO. This makes it adaptable across deployment environments.
Centralized governance over distributed agent-tool workflows:
Through virtual servers and tool registration, Lasso allows admins to define context-specific endpoints for different teams or use cases.
Access policies (e.g., per-user or per-tenant), version control, and structured configuration help enforce consistent behavior and isolation. This eliminates the need for embedding logic in the agents themselves.
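To illustrate the kind of request-level guardrail a presidio-based plugin enables, here is a standalone sketch using the presidio-analyzer and presidio-anonymizer packages. It shows the PII-masking step in isolation; the function name and call site are illustrative and are not Lasso's actual plugin interface.

# Requires presidio-analyzer, presidio-anonymizer, and a spaCy English model.
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()


def mask_pii(text: str) -> str:
    """Detect common PII entities and replace them before a request is
    forwarded to an MCP server. Illustrative only, not Lasso's plugin API."""
    findings = analyzer.analyze(text=text, language="en")
    return anonymizer.anonymize(text=text, analyzer_results=findings).text


# Example: a tool-call argument containing an email address.
outgoing = "Create a Linear ticket and assign it to jane.doe@example.com"
print(mask_pii(outgoing))  # e.g., "... assign it to <EMAIL_ADDRESS>"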
Enterprise-grade MCP Gateways
These gateways focus on large-scale deployment, security, tool routing, session management, and policy enforcement across teams.
Gateway | Transport support | Tool integration | UI compatibility |
---|---|---|---|
Zapier MCP Gateway | HTTP | No-code setup via pre-built Zap actions | Zapier UI |
Unla (AmoyLab) | HTTP, Docker | Drag-and-drop wrapper for REST & CLI | Web UI |
ContextForge | HTTP, SSE, STDIO, WebSocket | YAML/CLI-based wrapping of REST & CLI APIs | CLI, Admin UI |
Zapier MCP Gateway

In early 2025, Zapier launched its MCP interface. Zapier’s MCP layer converts its 8,000+ app integrations into MCP-compatible endpoints, enabling LLMs and agent frameworks to invoke real actions (e.g., sending Slack messages, updating Salesforce records, triggering Gmail automations) with minimal setup.
Zapier MCP Gateway offers a library of 1000+ app connections and 30,000+ actions via MCP-compatible endpoints. It’s free to use with rate limits of 80 calls/hour, 160/day, and 300/month.
It provides built-in authentication, rate limiting, and endpoint management.
Key features of Zapier MCP
Query and update databases:
Allow the AI to find, add, or modify records in a PostgreSQL database using natural language.
Send automated messages:
Let the AI deliver scheduled announcements or reminders to community platforms like Circle.
Handle scheduling tasks:
Enable the AI to create and manage calendar events based on a single user request.
Extract and summarize web content:
Use tools like Web Parser to have the AI retrieve webpage content and share key insights via Slack or another channel.
Support end-to-end task automation:
Let users complete complex workflows—like booking a flight and adding it to a calendar—through a single chat prompt. For example: “Book me a window seat on the next flight to Chicago and add it to my calendar.” The AI handles flight search, booking, event creation, and confirmation in one flow.
AI workflows you can build with Zapier MCP Gateways
- Work with databases: Enable your AI to locate, insert, or modify data within a PostgreSQL table, based on natural language input.
- Engage communities: Have your AI send a scheduled message.
- Manage scheduling: Automatically create calendar events from a single user request.
- Summarize online content: Ask the AI to extract key information from a webpage using tools like Web Parser, then forward the summary to Slack or another messaging app.
- Enable fully conversational assistants: Let users plan complex tasks like booking flights through chat.
For example, a user can say, “Book a window seat on the next flight to Chicago and add it to my calendar,” and the AI can search for flights, make the reservation, schedule the event, and confirm.
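For a sense of what this looks like from the agent side, the sketch below connects to such an endpoint with the official MCP Python SDK, lists the exposed actions, and calls one. The endpoint URL is a placeholder for the per-account URL Zapier generates, the SSE transport is an assumption, and the tool name and arguments are illustrative.

import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

# Placeholder: substitute the MCP endpoint URL generated for your account.
ZAPIER_MCP_URL = "https://example.invalid/your-zapier-mcp-endpoint"


async def main() -> None:
    # Open an SSE transport to the gateway, then an MCP session over it.
    async with sse_client(ZAPIER_MCP_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover which actions are exposed as MCP tools...
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name)

            # ...and invoke one (tool name and arguments are illustrative).
            result = await session.call_tool(
                "slack_send_channel_message",
                {"channel": "#general", "text": "Deploy finished"},
            )
            print(result)


if __name__ == "__main__":
    asyncio.run(main())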
Real-world MCP gateway applications
1. MCP Bridge as a lightweight gateway
Here you can see how to configure and run a lightweight MCP gateway, specifically MCP Bridge, which serves as a compatibility layer between OpenAI-style clients and MCP tools.
{
  "inference_server": {
    "base_url": "http://localhost:8000/v1",
    "api_key": "None"
  },
  "sampling": {
    "timeout": 10,
    "models": [
      {
        "model": "gpt-4o",
        "intelligence": 0.8,
        "cost": 0.9,
        "speed": 0.3
      },
      {
        "model": "gpt-4o-mini",
        "intelligence": 0.4,
        "cost": 0.1,
        "speed": 0.7
      }
    ]
  },
  "mcp_servers": {
    "fetch": {
      "command": "uvx",
      "args": [
        "mcp-server-fetch"
      ]
    }
  },
  "security": {
    "auth": {
      "enabled": true,
      "api_keys": [
        {
          "key": "your-secure-api-key-here"
        }
      ]
    }
  },
  "network": {
    "host": "0.0.0.0",
    "port": 9090
  },
  "logging": {
    "log_level": "DEBUG"
  }
}
Source: GitHub8
Key configuration sections in MCP Bridge:
Section | Description |
---|---|
inference_server | Defines the AI model endpoint (e.g., OpenAI API or local LLM) used for completions. |
mcp_servers | Lists and configures external MCP tools that can be invoked through the bridge. |
network | Specifies host and port settings for running the server (e.g., via Uvicorn). |
logging | Controls the verbosity of log output (e.g., DEBUG, INFO). |
security | Enables authentication and defines the API keys clients must present to the bridge. |
This setup enables clients like Open Web UI to communicate with MCP tools without needing native MCP support.
When Open Web UI is configured to point to http://localhost:9090/v1 (the MCP Bridge server), and a user enters a prompt such as:
“Extract data from site and summarize”
Open Web UI converts that prompt into a structured API call, which MCP Bridge then routes to the appropriate MCP tool:
The structured API call:
{
  "jsonrpc": "2.0",
  "method": "chat/completions",
  "params": {
    "model": "gpt-4o",
    "messages": [{ "role": "user", "content": "Extract data from site and summarize" }]
  }
}
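For readers not using Open Web UI, roughly the same request can be sent by hand. The sketch below posts an OpenAI-style payload to the bridge with the requests library; the Bearer-token header is an assumption about how the api_key from the sample configuration is presented, so check the project's documentation for the exact scheme.

import requests

# Assumptions: MCP Bridge from the configuration above (port 9090, auth enabled)
# and Bearer-token authentication; the project may expect a different header.
resp = requests.post(
    "http://localhost:9090/v1/chat/completions",
    headers={"Authorization": "Bearer your-secure-api-key-here"},
    json={
        "model": "gpt-4o",
        "messages": [
            {"role": "user", "content": "Extract data from site and summarize"}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])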
2. WebSocket transport in MCP Bridge
This example shows how MCP Bridge, functioning as an MCP Gateway, supports WebSocket-based transport between AI clients and the MCP SDK. It enables real-time, bi-directional communication using the JSON-RPC format.
We’re sending and receiving structured messages, specifically MCP SessionMessage objects, which encapsulate JSON-RPC messages like { "jsonrpc": "2.0", "method": "tools/list" }.
Here is what happens inside the loop:
1. Client sends a JSON-RPC message over WebSocket to list available tools:
{
  "jsonrpc": "2.0",
  "method": "tools/list"
}
2. MCP Bridge receives the message, decodes it, and wraps it into a SessionMessage, the standard format expected by the MCP SDK.
3. The reader loop continuously listens for incoming messages from the client, while the writer loop streams responses back to the client.
4. These responses are formatted and sent using WebSocket. This enables the client to interact with the tool registry in real time.
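The same exchange can be exercised from a small client. This sketch uses the third-party websockets package to send the tools/list request and print whatever the gateway streams back; the WebSocket path is an assumption, so substitute the endpoint your MCP Bridge deployment actually exposes.

import asyncio
import json

import websockets  # third-party package: pip install websockets

# Assumption: the bridge exposes its MCP WebSocket transport at this path.
WS_URL = "ws://localhost:9090/mcp/ws"


async def list_tools() -> None:
    async with websockets.connect(WS_URL) as ws:
        # JSON-RPC request asking the gateway which tools are registered.
        await ws.send(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}))

        # The gateway's writer loop streams the response back over the socket.
        reply = json.loads(await ws.recv())
        print(json.dumps(reply, indent=2))


if __name__ == "__main__":
    asyncio.run(list_tools())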

Is an MCP Gateway the same as an API Gateway?
Not exactly. API gateways are built for traditional client-server communication. MCP Gateways, on the other hand, are optimized for AI agents, supporting context-aware workflows, session management, and standardized tool orchestration via the Model Context Protocol.
MCP Gateway vs. API Gateway
Aspect | Traditional API gateway | MCP gateway |
---|---|---|
Primary consumer | Human-driven apps or backend services | AI agents (e.g., Claude, agent frameworks) |
Routing logic | Based on URL paths, headers, or HTTP methods | Based on semantic intent, task type, or agent memory |
State management | Stateless | May maintain session context or cache tool responses |
Workflow support | Handles isolated, single-step requests | Supports multi-step workflows and tool chaining |
Protocol alignment | Generic HTTP-based protocols (REST, GraphQL) | Purpose-built for Model Context Protocol (MCP) |
Typical use case | API mediation, rate limiting, authentication | Context-aware orchestration between AI agents and tool servers |
External Links
- 1. https://medium.com/@manojjahgirdar/model-context-protocol-mcp-gateway-a-middleware-meant-to-productionize-mcp-for-an-enterprise-bbdb2bc350be
- 2. https://www.youtube.com/watch?v=jx-TRqqgOrQ&t=146s
- 3. Introducing the Model Context Protocol, Anthropic.
- 4. https://bluetickconsultants.medium.com/implementing-anthropics-model-context-protocol-mcp-for-ai-applications-and-agents-182a657f0aee
- 5. https://www.enkryptai.com/blog/securing-mcps-the-hidden-vulnerabilities-of-mcp-servers-and-a-gateway-to-safety
- 6. https://www.youtube.com/watch?v=G8qNEOXqfX0
- 7. https://zapier.com/blog/zapier-mcp-guide/
- 8. https://github.com/SecretiveShell/MCP-Bridge
- 9. https://www.youtube.com/watch?v=0NHCyq8bBcM