How MCP 2.0 Could 1000x the Number of MCP Servers
MCP has been one of the most successful coordination efforts in the AI ecosystem. A single term that means "this is how AI models connect to external services" — tool providers know what to build, AI platforms know what to consume, developers know what to ask for. Before MCP, every integration was bespoke. Now there's a shared language. That matters enormously.
But there's a problem: the meme has spread much faster than the implementations. Every AI platform supports MCP. Every developer knows the term. And yet the number of services that have actually shipped MCP servers is still small — a few dozen in most connector directories. Meanwhile, hundreds of thousands of services with REST APIs sit on the sidelines.
This article is a proposal. Not "MCP was a mistake" — it wasn't. But now that we can see how the ecosystem actually uses MCP, there's an opportunity to dramatically lower the barrier to entry. A REST-compatible profile in MCP 2.0 could turn "build a new server" into "add a YAML file," and unlock the long tail of services that will never build a dedicated MCP server but would happily expose their existing API.
The adoption bottleneck
The promise of MCP is that any service can integrate with any AI platform through a single standardised interface. In practice, building an MCP server is a meaningful undertaking. It's a new codebase, a new deployment, a new thing to maintain — separate from the REST API most services already have. For well-resourced companies like Notion or Asana, that's manageable. For everyone else, it keeps getting deprioritised.
This is the bottleneck. The protocol asks services to build something new when most of what they need already exists. They already have REST endpoints, OpenAPI specs, and OAuth. The gap between "has an API" and "supports MCP" is wider than it needs to be.
What MCP actually looks like in practice
Now that we're a couple of years in, a clear pattern has emerged. The vast majority of MCP servers are remote services that authenticate with OAuth and respond to simple request-response calls. Here's how the current MCP protocol maps against what REST, OpenAPI, and OAuth already provide:
| Concern | MCP protocol | REST + OpenAPI + OAuth |
|---|---|---|
| Tool discovery | JSON-RPC tools/list method | OpenAPI spec at a well-known URL |
| Tool definitions | Name, description, JSON Schema input | operationId, summary, description, JSON Schema |
| Authentication | OAuth 2.0 (in the newer HTTP transport) | OAuth 2.0 |
| User-scoped permissions | Bearer token per user | Bearer token per user |
| Invoking a tool | JSON-RPC tools/call method | HTTP request to the endpoint |
| Error handling | JSON-RPC error object | HTTP status codes + JSON error body |
| Streaming | Server-Sent Events | Server-Sent Events |
| Capability negotiation | initialize handshake | Implicit in the OpenAPI spec |
| Bidirectional communication | Server-to-client requests (e.g., sampling) | Not natively supported |
| Local tool integration | stdio transport | Not applicable (different problem) |
| Session management | Stateful session lifecycle | Stateless (by design) |
Everything above the last three rows maps directly. The bottom three — bidirectional communication, local stdio tools, and stateful sessions — are the features MCP provides that REST doesn't. The question for MCP 2.0 is: how much do they matter in practice?
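To make the mapping concrete, here is one hypothetical operation annotated with the MCP field each part corresponds to. The endpoint, names, and schema are invented for illustration:

```yaml
# Hypothetical OpenAPI operation, annotated with its MCP tool equivalent
paths:
  /tasks:
    post:
      operationId: createTask        # ↔ MCP tool name
      summary: Create a task in the user's default project.  # ↔ MCP tool description
      requestBody:
        content:
          application/json:
            schema:                  # ↔ MCP inputSchema (JSON Schema in both)
              type: object
              properties:
                title: {type: string}
              required: [title]
      responses:
        "200":
          description: The created task.  # ↔ MCP tool result
```

Same information, different envelope — which is why the translation layer between the two is mechanical for everything above the last three rows.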
Features we built for but didn't need
Bidirectional communication
MCP supports server-initiated requests — the server can ask the client to do something, like generate a completion. This is genuinely novel. But so far, it hasn't seen wide adoption. The overwhelming majority of MCP integrations are straightforward request-response: the client calls the server, the server responds.
Local tool integration via stdio
In early 2024, running local MCP servers for databases and dev tools was a common use case. The stdio transport was designed for this. Since then, CLI-based AI agents have filled this role instead. Claude Code queries a local Postgres database by running psql. It runs tests with npm test. The command line provides direct access without needing a protocol wrapper. The ecosystem has moved decisively towards remote hosted servers.
Stateful sessions
MCP maintains a session lifecycle — an initialize handshake where client and server negotiate capabilities, with both sides tracking state throughout.
REST is stateless by design. Each request carries its own authentication token and parameters. No handshake, no session to maintain.
In a traditional API context, sessions can be useful because the client is usually a dumb script with no memory. But MCP clients aren't dumb scripts — they're LLMs with full conversation history in their context window. The model already knows what tools it called, what results it got back, and what the user asked for. The context window is the session. Server-side session state is largely redundant when the client already carries a rich memory of the entire interaction.
The few things sessions enable — like resource subscriptions — are rarely used in practice, and have well-established REST equivalents like webhooks.
The proposal: a REST-compatible profile for MCP 2.0
MCP 2.0 doesn't need to abandon the current protocol. It could offer a simplified profile for services that only need request-response with OAuth — which is most of them.
I've sketched out a full spec for what this would look like. The core of it fits in a sentence:
A service publishes an OpenAPI spec at /.well-known/mcp.yaml, supports OAuth 2.0 with PKCE, and writes its endpoint descriptions for an AI audience. That's it.
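A minimal sketch of what such a manifest might contain — every name, URL, and scope here is hypothetical, and a real service would serve its actual (or curated) OpenAPI document:

```yaml
# Hypothetical /.well-known/mcp.yaml — a standard OpenAPI 3.1 document
openapi: 3.1.0
info:
  title: Example Tasks API
  version: "1.0"
servers:
  - url: https://api.example.com/v1
components:
  securitySchemes:
    oauth:
      type: oauth2
      flows:
        authorizationCode:
          authorizationUrl: https://example.com/oauth/authorize
          tokenUrl: https://example.com/oauth/token
          scopes:
            tasks:read: Read the user's open tasks
paths:
  /tasks:
    get:
      operationId: listTasks
      summary: List the user's open tasks, newest first.
      security: [{oauth: ["tasks:read"]}]
      responses:
        "200":
          description: A JSON array of task objects.
```

Nothing in this file is new technology — it's the OpenAPI spec most services already maintain, served from a well-known path.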
For a service that already has a REST API, the work to become MCP-compatible:
| Requirement | What to do | Typical effort |
|---|---|---|
| Well-known manifest | Serve your OpenAPI spec (or a curated subset) at /.well-known/mcp.yaml | A single route returning a static file |
| AI-oriented descriptions | Review operationId, summary, and description fields so an AI model can understand when and how to use each endpoint | Editing existing documentation |
| PKCE support | Ensure your OAuth 2.0 flow supports the PKCE extension | Minor if using a standard OAuth library |
| Plain-language error messages | Ensure error responses include a human-readable message field | Likely already the case |
That's the gap. MCP compatibility goes from "build a new server" to "add a well-known URL and improve your descriptions." Services that need bidirectional communication or stateful sessions can still use the full MCP protocol. Everyone else gets a much shorter path in.
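Of the four requirements, PKCE is the only one that involves any cryptography, and it lands almost entirely on the client side: the AI platform generates a verifier/challenge pair per RFC 7636, and the service only needs to verify the challenge at token exchange. A sketch of the client-side half, using only the standard library:

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier and its S256 code_challenge.

    The verifier is sent at token exchange; the challenge is sent with
    the initial authorization request.
    """
    # 32 random bytes -> 43-character base64url string, padding stripped
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA256(verifier)), also unpadded
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge


verifier, challenge = make_pkce_pair()
```

On the service side, "PKCE support" means recomputing the challenge from the submitted verifier and comparing — a few lines in any standard OAuth library.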
Addressing the hard questions
"How does an AI client register with a service it's never seen before?"
This is a real problem. Traditional OAuth assumes a developer manually registers their app and gets a client_id in advance. But when an AI client connects to a new service for the first time, there's no developer in the loop.
MCP currently addresses this with Dynamic Client Registration. The emerging alternative is Client Metadata Documents, where the AI platform publishes a stable identity document (e.g., https://anthropic.com/.well-known/oauth-client.json) that services can fetch and trust. Both solutions operate at the OAuth layer and work identically whether the underlying integration is JSON-RPC or REST.
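A plausible shape for such a metadata document, using the client metadata fields defined in RFC 7591 — the exact profile is still being worked out, and every value below is hypothetical:

```json
{
  "client_id": "https://example-ai.com/.well-known/oauth-client.json",
  "client_name": "Example AI Assistant",
  "redirect_uris": ["https://example-ai.com/oauth/callback"],
  "grant_types": ["authorization_code"],
  "response_types": ["code"],
  "token_endpoint_auth_method": "none"
}
```

The key property is that the document lives at a stable HTTPS URL controlled by the AI platform, so a service seeing the client for the first time can fetch it, verify the origin, and proceed without a pre-registration step.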
"OpenAPI specs are too verbose for AI consumption."
An AI client doesn't load an entire OpenAPI spec into context. It reads operationId and summary fields to pick a tool — a few tokens each — then reads the full schema only for the endpoint it decides to call. This is already how function calling works in every major AI platform.
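The two-stage pattern is easy to sketch. Assuming a small curated spec (the operations below are invented), stage one builds a compact index the model reads to pick a tool, and stage two loads the full definition only for the chosen operation:

```python
# Hypothetical curated OpenAPI subset, inlined for illustration
SPEC = {
    "paths": {
        "/tasks": {
            "get": {
                "operationId": "listTasks",
                "summary": "List the user's open tasks.",
                "parameters": [
                    {"name": "cursor", "in": "query", "schema": {"type": "string"}}
                ],
            },
            "post": {
                "operationId": "createTask",
                "summary": "Create a new task.",
                "requestBody": {
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {"title": {"type": "string"}},
                                "required": ["title"],
                            }
                        }
                    }
                },
            },
        }
    }
}


def tool_index(spec: dict) -> list[dict]:
    """Stage one: a compact list (a few tokens per tool) for tool selection."""
    return [
        {"operationId": op["operationId"], "summary": op["summary"]}
        for methods in spec["paths"].values()
        for op in methods.values()
    ]


def full_schema(spec: dict, operation_id: str) -> dict:
    """Stage two: the complete definition, loaded only for the chosen tool."""
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            if op["operationId"] == operation_id:
                return {"path": path, "method": method, **op}
    raise KeyError(operation_id)
```

Only `tool_index` output goes into context up front; `full_schema` runs once the model has committed to a call — the same shape as function calling today.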
"Capability negotiation matters for interoperability."
An OpenAPI spec implicitly declares capabilities. If an endpoint returns text/event-stream, it supports streaming. If it accepts a cursor parameter, it supports pagination. For anything beyond that, an x-mcp extension in the OpenAPI info object works without a separate negotiation step.
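For cases where a capability genuinely can't be inferred from the endpoints, a vendor extension in the `info` object would suffice. The field names here are hypothetical — a sketch of the idea, not a defined extension:

```yaml
info:
  title: Example Tasks API
  version: "1.0"
  x-mcp:                # hypothetical extension block
    profile: rest-compatible
    features: [streaming, cursor-pagination]
```

A client that doesn't recognise `x-mcp` ignores it, which is exactly how OpenAPI extensions are designed to degrade.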
"Resources and tools are fundamentally different."
MCP distinguishes between tools (actions) and resources (data). In practice, both are endpoints with a path, parameters, and a response. A well-written description tells the AI model whether it's reading data or performing an action. A protocol-level distinction isn't necessary for that.
More servers is better for everyone
MCP's biggest achievement is the coordination — the shared vocabulary, the connector directories, the expectation that services should be AI-accessible. That coordination doesn't depend on JSON-RPC or stdio or session handshakes. It depends on the meme.
Right now, the protocol is a bottleneck on the meme's success. Hundreds of thousands of services have REST APIs and will never build a dedicated MCP server. A REST-compatible profile in MCP 2.0 would bring them into the ecosystem overnight. More servers means more value for AI platforms, more value for developers, and more momentum for MCP itself.
The meme is working. Let's remove the barrier that's holding it back.