MCP: The New Vendor Lock-In, Dressed Up as a Standard
Every few years, the infrastructure industry invents a new way to sell you complexity.
First it was hardware appliances - proprietary boxes that went end-of-life every three years, forcing expensive upgrades. Then it was proprietary software platforms - lock-in disguised as "integrated solutions." Now it's MCP.
The Model Context Protocol is being positioned as the "USB-C for AI" - a universal standard for connecting AI agents to external systems. 97 million SDK downloads. Big tech backing. The narrative says it's already won.
I don't buy it.
Follow the Money
When vendors rally around a "standard," ask who benefits.
MCP creates a new layer that needs to be:
- Deployed (buy our MCP servers)
- Maintained (subscribe to our support)
- Secured (purchase our MCP security tools)
- Upgraded (when the spec changes, and it will)
Sound familiar? It's the same playbook. Create complexity, then sell the solution to the complexity you created.
Gartner predicts 75% of gateway vendors will have MCP features by 2026. That's not adoption of a standard - that's 75% of vendors finding a new thing to sell you.
The Overhead Nobody Talks About
Critics have calculated that MCP tool metadata consumes 13,000-18,000 tokens of the context window. Compare that with roughly 225 tokens for an equivalent CLI tool description.
That's not efficiency. That's bloat.
And it gets worse. Security researchers flag prompt injection risks, tool poisoning, credential leakage, and cross-server shadowing. The protocol prioritises "simplicity and ease" over authentication and encryption. You're taking on extra attack surface for the privilege of adding another layer.
The API Is Already There
Here's what frustrates me: the problem MCP claims to solve is already solved.
Every modern system has an API. That API has documentation. Modern LLMs are remarkably good at reading documentation and generating the correct API calls.
At NetAutomate, our NetOrca Pack system takes this approach directly:
- Give the AI the API docs
- Give it the customer's intent
- Let it figure out the calls
No intermediate layer. No MCP servers to deploy. No tool definitions to maintain. No 13,000-token overhead.
The AI reads what the customer wants ("I need a load balancer with SSL"), reads the API documentation, and generates: POST to create namespace, POST to create origin pool, POST to create load balancer. Done.
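To make that concrete, here is a minimal sketch of the docs-plus-intent flow in Python. The endpoint paths, payloads, and the complete() stub are illustrative assumptions for this post, not the actual NetOrca Pack implementation or any vendor's real API.

```python
import json
import requests

# Your existing, unmodified API spec and the customer's request.
OPENAPI_DOC = open("openapi.json").read()
INTENT = "I need a load balancer with SSL"

PROMPT = f"""You are a network automation assistant.

API specification:
{OPENAPI_DOC}

Customer request: {INTENT}

Reply with a JSON list of HTTP calls, each as {{"method": ..., "path": ..., "body": ...}}."""

def complete(prompt: str) -> str:
    """Stand-in for whatever LLM client you already use (hosted or local)."""
    raise NotImplementedError("plug in your model client here")

# e.g. POST namespace, POST origin pool, POST load balancer.
plan = json.loads(complete(PROMPT))

BASE = "https://api.example.com"   # the API you already run, nothing in front of it
for step in plan:
    resp = requests.request(step["method"], BASE + step["path"],
                            json=step.get("body"), timeout=30)
    resp.raise_for_status()
    print(step["method"], step["path"], resp.status_code)
```

The only moving parts are the model client you already have and the HTTP client you already have. Nothing new to deploy, nothing new to patch.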
"But Standards Enable Interoperability"
Sure - if you believe the vendors will keep the standard stable.
The New Stack observes that AI development tools feel "fragile and immature" with "rapidly changing standards." MCP is already on that path. Today's MCP implementation will need updating when the spec evolves. Your "standard" connector becomes technical debt.
Meanwhile, HTTP has been stable for decades. REST APIs don't change their fundamental patterns. OpenAPI specs evolve slowly and backwards-compatibly. The boring infrastructure works.
The Real Question
Do you want to:
Option A: Deploy MCP servers, maintain tool definitions, manage the overhead, patch security vulnerabilities, update when the spec changes, and add another vendor to your support matrix?
Option B: Point an AI at your existing API documentation and let it work?
We chose B. Our AI reads the docs, generates the calls, verifies the results, and self-heals on failure. No new infrastructure required. No new vendor lock-in accepted.
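For the curious, here is a hedged sketch of what that verify-and-retry loop can look like. The function names (generate_calls, verify, reconcile) and the endpoint being checked are hypothetical placeholders, not our production code.

```python
import requests

BASE = "https://api.example.com"   # your existing API; paths below are illustrative
MAX_ATTEMPTS = 3

def generate_calls(intent: str, api_docs: str, feedback: str = "") -> list[dict]:
    """Ask the model for a call plan; on a retry, the previous error is included."""
    raise NotImplementedError("plug in your model client here")

def apply(plan: list[dict]) -> None:
    """Replay the generated calls against the real API."""
    for step in plan:
        resp = requests.request(step["method"], BASE + step["path"],
                                json=step.get("body"), timeout=30)
        resp.raise_for_status()          # a failing call becomes feedback below

def verify(intent: str) -> bool:
    """Check the desired state actually exists, e.g. the new LB is reachable."""
    return requests.get(f"{BASE}/api/load_balancers/web-lb", timeout=10).ok

def reconcile(intent: str, api_docs: str) -> None:
    feedback = ""
    for attempt in range(MAX_ATTEMPTS):
        try:
            apply(generate_calls(intent, api_docs, feedback))
            if verify(intent):
                return                   # desired state reached, we're done
            feedback = "calls succeeded but verification failed"
        except requests.HTTPError as err:
            # Self-heal: hand the raw API error back to the model to re-plan.
            feedback = f"attempt {attempt + 1} failed: {err.response.text}"
    raise RuntimeError("could not reconcile the request after retries")
```

The point is that "self-healing" is an ordinary retry loop that feeds the API's own error messages back to the model - no protocol layer required.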
The best abstraction layer is often no abstraction layer.