Use MCP when you want a single agent to talk to many systems your team does not control, or when you want third parties to extend your agent. Use custom tool APIs when you control both ends, latency matters, evals matter, or when your agent needs business logic that is awkward to express in a generic protocol. Most production agents in 2026 should use both, picking per integration. Anyone telling you "always MCP" or "never MCP" is selling something.
What MCP actually is
The Model Context Protocol is an open spec from Anthropic that standardizes how an LLM-powered application discovers and calls tools and data sources. An MCP server exposes resources, tools, and prompts to any MCP-compatible client. The promise is plug-and-play: write the integration once as an MCP server and any agent can use it. Think of it as LSP, the Language Server Protocol, but for AI agents and tool calling. The pitch is that the protocol does for tools what HTTP did for documents.
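To make "standardized tool calling" concrete, here is a minimal sketch of the JSON-RPC 2.0 request shape MCP uses to invoke a tool. The `get_issue` tool name and its arguments are hypothetical examples, not part of the spec; the point is that every MCP server speaks this same shape, which is what makes clients interchangeable.

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # MCP's standard method for invoking a tool
        "params": {"name": tool_name, "arguments": arguments},
    })

# A client sends this over stdio or HTTP to any MCP server that
# advertises a tool named "get_issue" (hypothetical name).
request = mcp_tool_call(1, "get_issue", {"id": "LIN-123"})
```

Because the envelope is identical for every server, the client needs zero integration-specific glue to call a new tool it discovered at runtime.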
What MCP is good at
Three scenarios where MCP earns its place. First, integrations to systems your team does not own. If a client wants their internal Claude desktop or a future agent to read from Notion, Slack, GitHub, and Linear, you do not write four custom integrations — you point the agent at four published MCP servers and you are done. Second, when you want third parties to extend your agent. If you are building a vertical platform and you want partners to ship integrations without forking your codebase, MCP gives them a path. Third, when you have many small tools and the orchestration overhead of bespoke wiring is more painful than the protocol's overhead.
Where MCP loses to a custom tool API
Five places. First, latency-sensitive paths: MCP adds 30–80ms of overhead per call versus a direct function or HTTP call, which matters when your agent makes 10 tool calls per response and you have a sub-second SLA. Second, tight evals: when you want to evaluate one tool's behavior in isolation, putting that tool behind a generic protocol means you cannot easily mock or assert at the level you need. Third, business logic that is awkward to express as resources and tools: workflows with complex state transitions are clearer as a small custom orchestrator than as a fan-out of MCP tool calls. Fourth, high-throughput batch operations: MCP is request-response and not built for streaming-batch patterns. Fifth, operations where authorization is per-row: MCP's permission model sits at the tool/resource level, not the data-row level, so per-tenant data isolation often pushes you to a custom tool API.
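The latency point is worth running as arithmetic, because per-call overhead compounds across sequential tool calls in the agent loop. Using the numbers above:

```python
# Back-of-envelope check of the latency claim: per-call protocol
# overhead multiplied by sequential tool calls per agent response.
def added_latency_ms(overhead_ms: float, calls_per_response: int) -> float:
    return overhead_ms * calls_per_response

low = added_latency_ms(30, 10)    # 300 ms of pure protocol overhead
high = added_latency_ms(80, 10)   # 800 ms of pure protocol overhead
```

Against a 1000ms SLA, protocol overhead alone can eat 30–80% of the budget before the model or the tools do any work. That is why the hot path usually goes custom.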
A simple decision framework
For each integration, ask four questions. Do we own both ends? If yes, custom tool API is usually faster, cheaper to evaluate, and easier to debug. Will third parties or other teams need to extend this? If yes, MCP saves you from a permanent custom-API maintenance tax. Is this a hot-path call inside the agent loop? If yes, custom is usually better; the latency tax compounds. Is there a published MCP server already? If yes, that changes the math heavily — using a published server means your team writes zero glue code and inherits the maintainer's updates.
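The four questions above collapse into a small decision function. This is a sketch of our framework, not a universal rule; the parameter names are ours, and the tie-breaking order (published server first, extensibility second, ownership and hot path last) reflects how we weigh the questions.

```python
def choose_integration(own_both_ends: bool,
                       third_party_extensible: bool,
                       hot_path: bool,
                       published_server_exists: bool) -> str:
    """Sketch of the four-question framework; returns 'mcp', 'custom', or 'either'."""
    if published_server_exists and not hot_path:
        return "mcp"     # zero glue code, inherit the maintainer's updates
    if third_party_extensible:
        return "mcp"     # avoid a permanent custom-API maintenance tax
    if own_both_ends or hot_path:
        return "custom"  # faster, cheaper to evaluate, easier to debug
    return "either"      # no strong signal; decide on team taste
```

For a hot-path internal tool you own end to end, `choose_integration(True, False, True, False)` returns "custom"; for a third-party system with a published server off the hot path, it returns "mcp".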
What we use in production
Across our six most recent agent deployments, the average split was 30% MCP and 70% custom tool APIs. The MCP integrations were almost always to third-party systems with mature published servers — GitHub, Linear, Slack, Notion, Sentry. The custom tool APIs were for the proprietary internal systems where we owned both sides, where we cared about latency, or where the tool encoded business logic specific to that client. Two clients used MCP for client-facing extensibility, allowing their customers to plug in custom data sources via the MCP spec rather than maintaining a separate plugin SDK.
Common mistakes we see
Wrapping every internal microservice as an MCP server because the protocol is fashionable: the result is slower agents and harder evals with no extensibility benefit, because no third party will ever consume those servers. Using MCP for a single tool: the protocol overhead is not justified when there is one integration. Skipping the auth model: MCP servers in production need OAuth-style scoping, and teams that ship MCP without scoped permissions inherit a security audit problem. Treating MCP as a replacement for retrieval: MCP exposes resources but does not solve embedding-based retrieval; you still need a vector store and a retrieval strategy.
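To show what "OAuth-style scoping" means at the tool boundary, here is a minimal sketch of a per-tool scope check. The scope names and tool names are hypothetical and not part of the MCP spec; the pattern is simply that every tool call is checked against the scopes granted to the caller's token before it executes.

```python
# Map each exposed tool to the scopes a caller's token must hold.
# Scope and tool names here are hypothetical examples.
TOOL_SCOPES = {
    "read_issue": {"issues:read"},
    "close_issue": {"issues:read", "issues:write"},
}

def authorize(tool_name: str, granted_scopes: set[str]) -> bool:
    """Deny unknown tools by default; otherwise require all listed scopes."""
    required = TOOL_SCOPES.get(tool_name)
    if required is None:
        return False  # fail closed: unregistered tools are never callable
    return required <= granted_scopes  # subset check: all scopes present
```

A token scoped to `issues:read` can read issues but not close them, which is exactly the property a security audit will ask you to demonstrate.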
The honest 2026 take
MCP is real, useful, and not magic. It will replace approximately 25–40% of bespoke integration code for production agents over the next two years. It will not replace custom tool APIs in the hot path, in evals-critical workflows, or in places where you control both ends and care about engineering quality. The teams shipping the best agents in 2026 are the ones who treat MCP as one tool in the toolbox — not the toolbox.
What to do this quarter
If you are building a new agent: start with the integration list and tag each one M (MCP-published), C (custom), or D (doable either way). Ship the M ones first to compress your timeline. Build the C ones with proper observability. Defer the D ones until you know whether the workflow benefits from extensibility. If you are running an agent already in production: do not migrate working integrations to MCP unless you need third-party extensibility. Working code beats fashionable code every day of the week.
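The M/C/D triage above is easy to operationalize. A sketch, with example integration names standing in for a real inventory:

```python
# Tag each integration M (MCP-published), C (custom), or D (doable either way).
# The names below are illustrative, not a recommendation.
integrations = {
    "github": "M",         # published MCP server exists
    "linear": "M",
    "billing-core": "C",   # internal, hot path, we own both ends
    "report-export": "D",  # doable either way; defer until needs are clear
}

def ship_order(tags: dict[str, str]) -> list[str]:
    """Ship M first to compress the timeline, then build C, defer D."""
    priority = {"M": 0, "C": 1, "D": 2}
    return sorted(tags, key=lambda name: priority[tags[name]])
```

Sorting is stable, so within each tag the list keeps your original ordering, which lets you encode finer-grained priority just by how you write the inventory down.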