
The Model Context Protocol (MCP) has become one of the most talked-about developments in AI integration since its introduction by Anthropic in late 2024. If you're tuned into the AI space at all, you've likely been inundated with developer "hot takes" on the topic. Some think it's the best thing ever; others are quick to point out its shortcomings. In reality, there's some truth to both.
One pattern I've noticed with MCP adoption is that skepticism typically gives way to recognition: This protocol solves genuine architectural problems that other approaches don't. I've gathered a list of questions below that reflect the conversations I've had with fellow builders who are considering bringing MCP to production environments.
1. Why should I use MCP over other alternatives?
Of course, most developers considering MCP are already familiar with implementations like OpenAI's custom GPTs, vanilla function calling, the Responses API with function calling, and hardcoded connections to services like Google Drive. The question isn't really whether MCP fully replaces these approaches; under the hood, you could absolutely use the Responses API with function calling that still connects to MCP. What matters here is the resulting stack.
Despite all the hype about MCP, here's the straight truth: It's not a massive technical leap. MCP essentially "wraps" existing APIs in a way that's understandable to large language models (LLMs). Sure, a lot of services already have an OpenAPI spec that models can use. For small or personal projects, the objection that MCP "isn't that big a deal" is pretty fair.
The practical benefit becomes obvious when you're building something like an analysis tool that needs to connect to data sources across multiple ecosystems. Without MCP, you're required to write custom integrations for each data source and each LLM you want to support. With MCP, you implement the data source connections once, and any compatible AI client can use them.
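To make the "implement once" point concrete, here is a minimal sketch of the kind of tool definition an MCP server advertises in response to a `tools/list` request. The tool name and schema below are hypothetical examples, but the shape (a JSON-RPC 2.0 response carrying tools described with JSON Schema) follows the MCP spec; any MCP-compatible client can discover and call a tool described this way.

```python
import json

# Hypothetical tool definition. The "inputSchema" field is standard JSON
# Schema, which is what makes the tool self-describing to any MCP client.
tool_definition = {
    "name": "query_sales_data",  # hypothetical tool name
    "description": "Run a read-only query against the sales warehouse.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "SELECT statement to run"},
        },
        "required": ["sql"],
    },
}

# The server returns the definition inside a JSON-RPC 2.0 response to
# a "tools/list" request:
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [tool_definition]},
}

print(json.dumps(tools_list_response, indent=2))
```

Write this definition once on the server side, and every MCP client (regardless of which LLM sits behind it) gets the same machine-readable description of the tool.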
2. Local vs. remote MCP deployment: What are the actual trade-offs in production?
This is where you really start to see the gap between reference servers and reality. Local MCP deployment using the stdio transport is dead simple to get running: Spawn a subprocess for each MCP server and let them talk through stdin/stdout. Great for a technical audience, difficult for everyday users.
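The spawn-a-subprocess flow can be sketched in a few lines. The "server" below is a stand-in one-liner that answers a single initialize request, not a real MCP server (in practice you would spawn an actual server command), but the mechanics are the same: newline-delimited JSON-RPC over the child process's stdin/stdout.

```python
import json
import subprocess
import sys

# Stand-in for an MCP server: reads one JSON-RPC request from stdin and
# writes a response to stdout. A real deployment would spawn an actual
# server binary here instead.
fake_server = (
    "import sys, json; "
    "req = json.loads(sys.stdin.readline()); "
    "print(json.dumps({'jsonrpc': '2.0', 'id': req['id'], "
    "'result': {'protocolVersion': '2025-03-26'}}))"
)

# The host spawns the server as a subprocess and talks over pipes.
proc = subprocess.Popen(
    [sys.executable, "-c", fake_server],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

request = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
response_line, _ = proc.communicate(json.dumps(request) + "\n")
response = json.loads(response_line)
print(response["result"]["protocolVersion"])
```

This is exactly why local stdio works so well for developers and so poorly for everyday users: it assumes the client machine can spawn and manage arbitrary processes.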
Remote deployment addresses the scaling problem but opens up a can of worms around transport complexity. The original HTTP+SSE approach was replaced by a March 2025 streamable HTTP update, which tries to reduce complexity by putting everything through a single /messages endpoint. Even so, this isn't really needed for most companies that are likely to build MCP servers.
But here's the thing: A few months later, support is spotty at best. Some clients still expect the old HTTP+SSE setup, while others work with the new approach, so if you're deploying today, you're probably going to support both. Protocol detection and dual transport support are a must.
Authorization is another variable you'll need to consider with remote deployments. The OAuth 2.1 integration requires mapping tokens between external identity providers and MCP sessions. While this adds complexity, it's manageable with proper planning.
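Dual transport support ultimately comes down to classifying each incoming request before routing it. The sketch below is a simplified heuristic using illustrative paths and header checks, not the normative detection rules from the spec: legacy clients open a long-lived SSE stream with a GET, while streamable HTTP clients POST to a single endpoint and accept either a JSON response or an SSE stream on the same request.

```python
def detect_transport(method: str, path: str, accept: str) -> str:
    """Classify a request as legacy HTTP+SSE, streamable HTTP, or unknown.
    Paths and header heuristics are illustrative assumptions."""
    if method == "GET" and path == "/sse":
        # Legacy transport: the client first opens an SSE stream via GET,
        # then POSTs messages to a separate endpoint.
        return "http+sse"
    if (
        method == "POST"
        and "application/json" in accept
        and "text/event-stream" in accept
    ):
        # Streamable HTTP: one endpoint, and the client advertises that it
        # can handle either a plain JSON reply or an SSE stream.
        return "streamable-http"
    return "unknown"
```

A production server would route `http+sse` requests to the legacy handler and `streamable-http` requests to the new one, which is the "protocol detection and dual transport support" burden in practice.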
3. How can I be sure my MCP server is secure?
This is probably the biggest gap between the MCP hype and what you actually need to tackle for production. Most showcases or examples you'll see use local connections with no authentication at all, or they handwave the security by saying "it uses OAuth."
The MCP authorization spec does leverage OAuth 2.1, which is a proven open standard. But there's always going to be some variability in implementation. For production deployments, focus on the fundamentals:
Proper scope-based access control that matches your actual tool boundaries
Direct (local) token validation
Audit logs and monitoring for tool use
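The first fundamental, scope-based access control that matches tool boundaries, can be sketched as a per-tool scope map with a deny-by-default check. The tool names and scope strings here are hypothetical examples; the point is that each tool declares exactly the scopes it needs, rather than the server granting a blanket "read" or "write".

```python
# Hypothetical mapping from tool name to the scopes a token must carry.
TOOL_SCOPES = {
    "query_sales_data": {"sales:read"},
    "update_crm_record": {"crm:write"},
    "export_report": {"sales:read", "reports:export"},
}


def authorize_tool_call(tool: str, token_scopes: set) -> bool:
    """Allow the call only if the validated token carries every scope the
    specific tool requires. Unknown tools are denied by default."""
    required = TOOL_SCOPES.get(tool)
    if required is None:
        return False
    return required <= token_scopes


# Example: a token scoped for read-only analytics can query but not write.
scopes = {"sales:read"}
print(authorize_tool_call("query_sales_data", scopes))   # allowed
print(authorize_tool_call("update_crm_record", scopes))  # denied
```

In a real deployment, `token_scopes` would come from locally validating the OAuth token, and every decision here should also land in the audit log mentioned above.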
However, the biggest security consideration with MCP is around tool execution itself. Many tools need (or think they need) broad permissions to be useful, which means sweeping scope design (like a blanket "read" or "write") is inevitable. Even without a heavy-handed approach, your MCP server may access sensitive data or perform privileged operations, so when in doubt, stick to the best practices recommended in the latest MCP auth draft spec.
4. Is MCP worth investing resources and time into, and will it be around for the long term?
This gets to the heart of any adoption decision: Why should I bother with a flavor-of-the-quarter protocol when everything in AI is moving so fast? What guarantee do you have that MCP will be a solid choice (or even around) in a year, or even six months?
Well, look at MCP's adoption by major players: Google supports it with its Agent2Agent protocol, Microsoft has integrated MCP with Copilot Studio and is even adding built-in MCP features to Windows 11, and Cloudflare is more than happy to help you fire up your first MCP server on its platform. Similarly, the ecosystem growth is encouraging, with hundreds of community-built MCP servers and official integrations from well-known platforms.
In short, the learning curve isn't terrible, and the implementation burden is manageable for most teams or solo devs. It does what it says on the tin. So, why would I be cautious about buying into the hype?
MCP is fundamentally designed for current-gen AI systems, meaning it assumes you have a human supervising a single-agent interaction. Multi-agent and autonomous tasking are two areas MCP doesn't really address; in fairness, it doesn't really need to. But if you're looking for an evergreen yet still somehow bleeding-edge approach, MCP isn't it. It's standardizing something that desperately needs consistency, not pioneering in uncharted territory.
5. Are we about to witness the "AI protocol wars"?
Signs are pointing toward some tension down the line for AI protocols. While MCP has carved out a tidy audience by being early, there's plenty of evidence it won't be alone for much longer.
Take Google's Agent2Agent (A2A) protocol launch with 50-plus industry partners. It's complementary to MCP, but the timing (just weeks after OpenAI publicly adopted MCP) doesn't feel coincidental. Was Google cooking up an MCP competitor when it saw the biggest name in LLMs embrace it? Maybe a pivot was the right move. But it's hardly speculation to think that, with features like multi-LLM sampling soon to be released for MCP, A2A and MCP may become competitors.
Then there's the sentiment from today's skeptics about MCP being a "wrapper" rather than a genuine leap forward for API-to-LLM communication. This is another variable that will only become more apparent as consumer-facing applications move from single-agent/single-user interactions into the realm of multi-tool, multi-user, multi-agent tasking. What MCP and A2A don't address will become a battleground for another breed of protocol altogether.
For teams bringing AI-powered projects to production today, the smart play is probably hedging protocols: Implement what works now while designing for flexibility. If AI makes a generational leap and leaves MCP behind, your work won't suffer for it. The investment in standardized tool integration will pay off immediately, but keep your architecture adaptable for whatever comes next.
Ultimately, the dev community will decide whether MCP stays relevant. It's MCP projects in production, not specification elegance or market buzz, that will determine whether MCP (or something else) stays on top for the next AI hype cycle. And frankly, that's probably how it should be.
Meir Wahnon is a co-founder at Descope.