A few days ago, a Principal Software Engineer at Red Hat posted a one-liner on LinkedIn that split the comments section in half:

“MCP is a layer of unnecessary indirection. A properly documented REST API is enough and works for everyone, not just agents.”

33 reactions, 13 comments, and a thread that surfaced some genuinely good arguments on both sides. I jumped in with my take, but a LinkedIn comment is not the right format for a nuanced opinion. So here’s the long version.

What MCP Actually Is (and Isn’t)#

Before the debate: let’s be precise.

The Model Context Protocol is a standardized interface between an AI agent and external tools. It defines how an agent discovers available tools, understands their parameters, decides when to use them, and invokes them. It was designed by Anthropic and has been adopted across the ecosystem.

```mermaid
graph TB
    subgraph AGENT["AI Agent"]
        LLM["LLM"]
    end
    subgraph MCP_LAYER["MCP Protocol"]
        DISC["Discovery: what tools exist?"]
        SCHEMA["Schema: what do they accept?"]
        CONTEXT["Context: when to use them?"]
        EXEC["Execution: run the tool"]
    end
    subgraph TOOLS["Tools"]
        API["REST API"]
        CLI["Local Binary"]
        DB["Database"]
        FS["Filesystem"]
    end
    LLM --> DISC
    DISC --> SCHEMA
    SCHEMA --> CONTEXT
    CONTEXT --> EXEC
    EXEC --> API
    EXEC --> CLI
    EXEC --> DB
    EXEC --> FS
    style MCP_LAYER fill:#44475a,stroke:#8be9fd,color:#f8f8f2
```

A REST API is a general-purpose interface for any client — human, script, browser, or agent. It defines endpoints, methods, request/response formats, and (if well-documented) the semantics of each operation.

The question at the heart of the post: why would you need a separate protocol when REST already exists?

The Voices in the Room#

The LinkedIn thread was more interesting than the original post, because it surfaced distinct perspectives that map to real architectural tradeoffs. Let me lay them out.

The purist’s position: REST is already self-descriptive#

The original poster’s argument goes deeper than “REST is enough.” In a follow-up reply, he clarified:

“REST applications should be self-descriptive. Hypermedia controls should contain enough information for the Agent to properly traverse the resource tree. This is the map that makes MCP unnecessary, and what I meant by ‘properly documented’.”

This is a purist’s argument, and it’s technically correct. Roy Fielding’s original REST dissertation describes exactly this: resources link to related resources, representations carry metadata about allowed transitions, and a sufficiently intelligent client can navigate the API without prior knowledge.

The problem? Almost nobody implements REST this way. Most “REST APIs” are really RPC over HTTP with JSON. No hypermedia. No HATEOAS. No self-descriptive messages. He’s arguing for the REST that should exist, not the REST that does exist. And the gap between those two is exactly where MCP found its niche.
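For contrast, here's what a genuinely hypermedia-driven response looks like — a made-up, HAL-style order resource (all names and paths are illustrative) whose representation advertises its own next actions:

```json
{
  "id": "order-1042",
  "status": "pending",
  "_links": {
    "self":   { "href": "/orders/1042" },
    "cancel": { "href": "/orders/1042/cancel", "method": "POST" },
    "items":  { "href": "/orders/1042/items" }
  }
}
```

A client — human or agent — can traverse from here without out-of-band documentation. When was the last time you saw a production API return `_links`?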

The AI engineer’s counter: MCP is more than a wrapper#

An AI engineer in the thread pushed back with a specific point:

“Fair point if MCP is used just as a REST API wrapper, but it’s genuinely more than that. Prompts, resources, sampling, elicitation… it’s a complete protocol designed with agents in mind, not just API consumers.”

This is the distinction most people miss. MCP is not just “call this endpoint.” It includes:

  • Prompts: pre-built interaction patterns the agent can use
  • Resources: data the agent can read (like files, database records) without calling an action
  • Sampling: the ability for an MCP server to ask the LLM to generate something
  • Elicitation: the ability to ask the user for input mid-flow

These capabilities don’t map to REST at all. They’re agent-specific primitives that REST was never designed to express.
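To make one of these concrete: sampling is the server asking the client's LLM to generate something, expressed as a JSON-RPC request flowing in the *reverse* direction of a normal API call. Roughly, based on my reading of the MCP specification (field details may drift between spec revisions):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": { "type": "text", "text": "Summarize this diff in one line." }
      }
    ],
    "maxTokens": 100
  }
}
```

There is no REST idiom for "the server calls the client's model." That inversion is what makes these primitives agent-native.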

The original poster’s reply is revealing: “I think we should aim for complete protocols that have some basic denominator of intelligence in mind, be it humans or machines.” In other words: don’t build a protocol just for agents — build one that works for any intelligent consumer. It’s a philosophical position, and it’s respectable. But it’s also aspirational, not practical.

The ergonomics argument#

Another engineer cut through the philosophy with a practical point:

“A well-designed MCP has significantly better ergonomics and uses way fewer context tokens to do thing X than a cURL call. And it has multiple mechanisms for auth. And it transparently handles local and remote tools.”

This is the argument that resonates most with me. When an agent uses a REST API, it needs to reason about HTTP methods, construct URLs, handle authentication headers, parse response formats, manage pagination. All of this consumes context tokens — the agent’s scarce working memory.

An MCP tool call is: “call this function with these parameters.” The MCP server handles the transport, auth, and response parsing. The agent focuses on what to do, not how to call it.
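A toy way to see the cost difference — comparing what the agent itself must emit in each case. The endpoint, token, and tool name are all made up for illustration, and whitespace-splitting is only a crude stand-in for real tokenization:

```python
# Toy illustration of the context-token argument. The HTTP request is what an
# agent must reason about and emit against a raw REST API; the MCP call is the
# equivalent tool invocation. (Endpoint, token, and tool name are invented.)
raw_http_call = """\
POST /v1/query HTTP/1.1
Host: api.example.com
Authorization: Bearer <token>
Content-Type: application/json

{"sql": "SELECT count(*) FROM orders"}"""

mcp_tool_call = (
    '{"name": "query_database", '
    '"arguments": {"sql": "SELECT count(*) FROM orders"}}'
)

# Crude proxy for token count: whitespace-separated chunks the model must emit.
print("HTTP:", len(raw_http_call.split()), "chunks")
print("MCP: ", len(mcp_tool_call.split()), "chunks")
```

The gap widens further once you add pagination, retries, and error-body parsing to the HTTP side.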

MCP as gateway and access broker#

A Solutions Architect in the thread offered the most forward-looking view:

“MCP exists because REST APIs are not responsible for things like access controls or broadcasting version availability. We probably will see a handful of MCPs that act as gateways or access brokers, and everything else as API or CLI.”

He pointed to Arcade.dev as an example — an MCP runtime that handles secure agent authorization and governance. His vision is that MCP’s future isn’t wrapping every API, but serving as a controlled gateway layer between agents and the tools they’re allowed to use.

This is architecturally interesting. It’s the difference between MCP as “better REST” (wrong framing) and MCP as “agent access control plane” (more useful framing).

The XKCD problem#

Someone in the thread dropped the XKCD “how standards proliferate” comic and made a sharp observation:

“My biggest beef with MCP is the P part. It isn’t really a protocol so much as it is simply a standardized schema for an API.”

He’s not wrong. MCP today runs over stdio (for local servers) or HTTP with SSE (for remote servers). The “protocol” part is really a schema definition — what tools look like, what parameters they take, what responses they return. The transport is borrowed from existing technologies.

Whether this matters depends on whether you think a standard schema is valuable even without a novel transport. I think it is — but it’s a valid deflation of the hype.

The map metaphor#

Someone else in the thread offered the most intuitive framing:

“I tend to think of REST as the infrastructure, while MCP is more like the map that helps an agent figure out how to use it.”

This is the closest to my own view. REST is the road. MCP is the GPS. You can drive without GPS — especially if you know the roads. But the GPS helps when the destination is unfamiliar and the agent needs to make autonomous navigation decisions.

My Take: Context Is the Real Value#

Here’s what I commented on the original post, expanded:

The added value of MCP is the context. Each tool definition explains not just what the endpoint does, but when and how the agent should use it. If OpenAPI is correctly implemented with rich descriptions, it can achieve the same result. I’ve experimented with feeding man pages and shell auto-completion output to AI agents, and it works too.

So MCP is not necessary in the sense that no alternative exists. But it’s a standardized way to provide agent-oriented context, and standardization has value.
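The man-page trick is almost embarrassingly simple. A minimal sketch, stdlib only, assuming the binary (here `git`) is installed and on the PATH:

```python
# Sketch of the "feed the tool's own docs to the agent" alternative to MCP:
# harvest a CLI's built-in help text and hand it to the model as context.
# Assumes the binary (git in this example) is installed and on the PATH.
import subprocess

def cli_context(binary: str) -> str:
    """Return a context block describing a CLI tool, built from its --help output."""
    result = subprocess.run(
        [binary, "--help"], capture_output=True, text=True, check=True
    )
    return f"Tool: {binary}\nUsage notes from `{binary} --help`:\n{result.stdout}"

if __name__ == "__main__":
    print(cli_context("git")[:300])
```

Paste that into the system prompt and a capable model can drive the tool reasonably well. What it lacks is exactly what MCP standardizes: discovery, schemas, and invocation.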

Consider the difference:

OpenAPI (typical implementation):

```yaml
/query:
  post:
    summary: Execute SQL query
    parameters:
      - name: sql
        in: query
        schema:
          type: string
```

MCP tool definition:

```json
{
  "name": "query_database",
  "description": "Execute a read-only SQL query against the production database. Use this when the user asks about customer data, orders, or inventory. Do NOT use for writes. The database is PostgreSQL 15. Common tables: customers, orders, products.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "sql": { "type": "string", "description": "SQL query. Use parameterized queries for user-provided values." },
      "database": { "type": "string", "description": "Usually 'production'. Use 'analytics' for aggregations." }
    },
    "required": ["sql"]
  }
}
```

The MCP definition carries intent. It tells the model when to use the tool, when not to, and what context matters. Can you put this in an OpenAPI description? Yes. Do people actually do it? Almost never.

MCP forces this context to be first-class. That’s not a technical innovation — it’s a design constraint that produces better outcomes.

Where MCP Genuinely Wins#

Three scenarios where MCP provides value that REST cannot easily replicate:

1. Local Executables#

This is the killer use case.

```mermaid
graph TB
    subgraph LOCAL["Your Machine"]
        AGENT["AI Agent"]
        MCP_S["MCP Server"]
        GIT["git"]
        DOCKER["docker"]
        KUBECTL["kubectl"]
        TERRAFORM["terraform"]
    end
    subgraph REMOTE["The Internet"]
        REST["REST APIs"]
    end
    AGENT -->|MCP| MCP_S
    MCP_S --> GIT
    MCP_S --> DOCKER
    MCP_S --> KUBECTL
    MCP_S --> TERRAFORM
    AGENT -->|HTTP| REST
    style LOCAL fill:#44475a,stroke:#50fa7b,color:#f8f8f2
    style REMOTE fill:#44475a,stroke:#8be9fd,color:#f8f8f2
```

git, docker, kubectl, terraform — these are CLI binaries on your machine. They don’t have REST endpoints. MCP bridges this gap by wrapping local executables with rich context and letting an AI agent invoke them directly.

To do this with REST, you’d need to build a local HTTP server, wrap each CLI tool, handle auth, manage process lifecycle. MCP makes this a first-class concept via stdio transport. This is not an incremental improvement — it’s a genuinely new capability.
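To show how little machinery the stdio pattern needs, here's a deliberately minimal sketch: JSON-RPC messages on stdin/stdout, wrapping `git`. This is not the official SDK (the real Python package is `mcp`); the method names mirror the spec's `tools/list` and `tools/call`, but the framing here (one JSON object per line) is simplified:

```python
# Minimal sketch of the pattern MCP's stdio transport enables: JSON-RPC on
# stdin/stdout, wrapping a local binary (git). NOT the official SDK; method
# names mirror the spec, but the one-object-per-line framing is simplified.
import json
import subprocess
import sys

TOOLS = [{
    "name": "git_status",
    "description": "Show the working-tree status of the current repository. "
                   "Use it before committing or when the user asks what changed.",
    "inputSchema": {"type": "object", "properties": {}},
}]

def handle(request: dict) -> dict:
    """Dispatch a single JSON-RPC request to a tool implementation."""
    if request["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif request["method"] == "tools/call":
        proc = subprocess.run(["git", "status", "--short"],
                              capture_output=True, text=True)
        result = {"content": [{"type": "text", "text": proc.stdout}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

def serve() -> None:
    """Read one JSON-RPC message per line from stdin, answer on stdout."""
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)
```

An MCP client launches this as a subprocess and talks to it over the pipes — no HTTP server, no open ports, no auth layer on localhost.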

2. Execution Decoupling#

MCP separates where the tool is defined from where it runs. An MCP server can run on your laptop, a remote server, or in a container. The agent doesn’t care.

This matters when:

  • The tool needs access to a private network the agent can’t reach
  • The tool requires credentials the agent shouldn’t have
  • You want to sandbox tool execution
  • Multiple agents share the same tool with consistent behavior

This is the “gateway/access broker” model from the thread. The MCP server becomes the security boundary between the agent and the resources it can access.

3. Agent-Native Primitives#

The point about prompts, resources, sampling, and elicitation is important. These aren’t things you’d add to a REST API because they don’t make sense for non-agent clients. They’re interaction patterns specific to the agent-tool relationship.

REST doesn’t have a concept of “ask the user for clarification mid-request.” MCP does. REST doesn’t have a concept of “let the tool ask the LLM to generate something.” MCP does. These primitives exist because agents are a fundamentally different kind of client.

Where REST Is Enough#

Let’s be equally honest about when MCP adds nothing:

  • Well-documented public APIs. If your REST API has descriptive OpenAPI specs, an LLM can use it directly. The Stripe API, GitHub API, Twilio API don’t need MCP wrappers.
  • Simple CRUD operations. Create, read, update, delete on a resource. REST is the natural fit.
  • Non-agent clients. If your tool serves browsers, mobile apps, and scripts in addition to agents, REST is the universal interface. MCP only speaks to MCP clients.
  • Performance-critical paths. MCP adds a protocol layer. For high-throughput, low-latency operations, direct HTTP is simpler.

And here’s the thing the original poster is fundamentally right about: if everyone implemented REST properly — with hypermedia, self-descriptive messages, and rich documentation — the case for MCP would be much weaker. MCP exists partly because the REST ecosystem failed to deliver on its own promises.

The Standards Problem#

The XKCD reference is funny because it’s true. We already have:

  • REST + OpenAPI
  • GraphQL
  • gRPC + Protobuf
  • And now MCP

Do we need another standard? The honest answer is: MCP is not competing with REST. It’s solving a different problem. REST is a client-server communication pattern. MCP is an agent-tool interaction pattern. They overlap when the tool is a remote API, but they diverge when the tool is a local binary, a filesystem, or a database.

```mermaid
graph TB
    subgraph OVERLAP["Where They Overlap"]
        REMOTE_API["Remote API calls"]
    end
    subgraph MCP_ONLY["MCP Territory"]
        LOCAL_EXEC["Local executables"]
        AGENT_PRIMS["Agent primitives<br/>(prompts, sampling)"]
        TOOL_DISC["Standardized discovery"]
        ACCESS_CTL["Agent access control"]
    end
    subgraph REST_ONLY["REST Territory"]
        BROWSER["Browser clients"]
        MOBILE["Mobile apps"]
        SCRIPTS["Automation scripts"]
        PERF["High-performance APIs"]
    end
    style OVERLAP fill:#44475a,stroke:#ffb86c,color:#f8f8f2
    style MCP_ONLY fill:#44475a,stroke:#8be9fd,color:#f8f8f2
    style REST_ONLY fill:#44475a,stroke:#50fa7b,color:#f8f8f2
```

The real risk isn’t that MCP replaces REST (it won’t). It’s that people wrap every REST API in an MCP server because it’s trendy, adding indirection without adding value. That’s the scenario the original post is warning against, and it’s right to warn against it.

Not Everything Is a Nail#

MCP is a hammer. A useful one. But not everything is a nail.

Use MCP when you need to:

  • Wrap local executables for AI agents
  • Decouple and control where tool execution happens
  • Provide rich, agent-oriented context that goes beyond API documentation
  • Leverage agent-native primitives (prompts, resources, sampling)

Use REST when:

  • Your clients include humans, browsers, and scripts — not just agents
  • Your API is well-documented and the agent can figure it out from OpenAPI
  • You need maximum performance and minimum indirection
  • You’re building a service, not a tool

Use both when your system is complex enough to warrant it — and be honest about when you’re adding MCP because it solves a problem vs. when you’re adding it because it’s the thing everyone is talking about.

The engineers who build the best AI-integrated systems will be the ones who reach for the right tool — not the ones who insist only one tool exists.

Or as I put it in the LinkedIn thread: “Not everything is a nail, but a hammer is useful nonetheless.”


This post was prompted by a LinkedIn discussion that was worth expanding into an article. Thanks to everyone in the thread for a debate that generated more signal than noise — a rare thing on LinkedIn.