Ask HN: Why does MCP need to exist?

11 points by kostyal 3 days ago

Why can't we just use REST APIs with OpenAPI specs provided to the LLM (possibly alongside x-llm-* hints to give additional contextual information)?

Arguments I've seen focus on the advantages of MCP for stateful or bidirectional connections, but it's not obvious to me that the additional complexity is worth the tradeoff. Help me understand why this exists.
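
To make the question concrete, here's a rough sketch of the approach I mean: map one OpenAPI operation (with a hypothetical x-llm-hint extension) into an OpenAI-style tool definition. All names here are illustrative, not from any real spec:

```python
# Sketch: translate an OpenAPI operation into a function-calling tool
# schema, folding a custom x-llm-* hint into the description the model
# sees. Everything here is plain dict manipulation, no libraries.

openapi_op = {
    "operationId": "getWeather",
    "summary": "Get current weather for a city",
    "x-llm-hint": "Prefer this over search when the user asks about weather.",
    "parameters": [
        {"name": "city", "in": "query", "required": True,
         "schema": {"type": "string"}},
    ],
}

def op_to_tool(op: dict) -> dict:
    """Map one OpenAPI operation to an OpenAI-style tool definition."""
    props = {p["name"]: p["schema"] for p in op.get("parameters", [])}
    required = [p["name"] for p in op.get("parameters", []) if p.get("required")]
    return {
        "type": "function",
        "function": {
            "name": op["operationId"],
            # The x-llm-* hint rides along in the description.
            "description": f'{op["summary"]}. {op.get("x-llm-hint", "")}'.strip(),
            "parameters": {"type": "object", "properties": props,
                           "required": required},
        },
    }

print(op_to_tool(openapi_op))
```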

tloula a day ago

I had this question myself when I first learned about MCP, and I've seen many other people question this as well. As I've learned more about MCP, I've compiled a list of reasons to use it in place of REST APIs + OpenAPI specs, and recently wrote an article about it [1]. Here are a few of the more overlooked reasons, IMO:

- Small Language Models: While most current MCP servers are wrappers around APIs, MCP was designed to be more than that. Think of an SLM running on a local NPU using MCP to interface with the device itself, streaming real-time data between the hardware and the SLM (a rough sketch follows below).

- Cost: OpenAPI specs are huge, and including them in the context window for every request would add up.
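
As a rough illustration of the SLM point above, a local MCP server exposing a device reading over stdio might look like this. This is a sketch assuming the MCP Python SDK's FastMCP helper; read_cpu_temp is a hypothetical stand-in for real hardware access:

```python
# Sketch: a local MCP server a small on-device model could talk to.
# No HTTP involved; the transport is stdio between local processes.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("device-sensors")

@mcp.tool()
def read_cpu_temp() -> float:
    """Return the current CPU temperature in Celsius."""
    # Placeholder value: a real server would read the sensor here.
    return 47.5

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio to a locally running model/agent
```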

[1] https://trevorloula.com/blog/what-is-mcp/#why-use-mcp-rather...

obayesshelton 3 days ago

I am struggling too.

There seems to be quite a security risk: everything from handing over your credentials to granting access to your filesystem and other OS-level resources.

Would you go to a website and willingly give it your credentials or filesystem access?

You don't really know what is happening in the middle.

Finally, if you are building an AI wrapper, you are just adding more "wrappers" on top.

  • malfist 2 days ago

    Not only that, but it doesn't scale. My API at work has to handle 5,000 transactions per second. If you tried to get an agentic AI to drive that API via MCP at anything approaching 5,000 TPS, you'd either get throttled by the AI provider or run up an insane bill.
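
    A rough back-of-envelope (both the tokens-per-call and the price are assumed, illustrative numbers):

    ```python
    calls_per_sec = 5_000      # the API's actual throughput
    tokens_per_call = 1_000    # assumed: prompt + tool schemas + response
    price_per_mtok = 3.00      # assumed: dollars per million tokens

    tokens_per_hour = calls_per_sec * tokens_per_call * 3600
    cost_per_hour = tokens_per_hour / 1_000_000 * price_per_mtok
    print(f"${cost_per_hour:,.0f}/hour")  # -> $54,000/hour
    ```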

    • obayesshelton a day ago

      That is also true! I've noticed some crazy token costs lately.

muzani a day ago

Integrating LLMs and guardrailing API outputs used to be an interview question. It was one of the ways you could tell whether someone had actually built a sufficiently complex AI tool in production.

1. LLMs hallucinate and often forget to close a bracket or leave a field out. This still happens even in JSON mode (e.g., Gemini's), where structured output is supposed to be a built-in feature.

2. JSON formatting burns a lot of unnecessary tokens: commas, quotes, brackets, etc. (see the sketch after this list).

3. Extra tokens also mean extra "cognitive effort" for the LLM. We switched from JSON to YAML and saw roughly a 30% increase in output quality back in the GPT-3.5 days.

4. All of the above can be mitigated with more and more training, but why train models for REST when you can build something better suited to them?
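
A quick sketch of points 1 and 2, using character counts as a crude stand-in for tokens (real savings depend on the tokenizer; PyYAML assumed):

```python
# The same payload serialized as JSON vs. YAML, plus a defensive parse
# for the "forgot to close a bracket" failure mode from point 1.
# Requires PyYAML (pip install pyyaml).

import json
import yaml

payload = {
    "user": {"name": "Ada", "roles": ["admin", "ops"]},
    "action": "deploy",
    "targets": ["api", "worker"],
}

as_json = json.dumps(payload)
as_yaml = yaml.safe_dump(payload)
print(len(as_json), "chars as JSON")  # quotes, braces, commas add up
print(len(as_yaml), "chars as YAML")  # mostly just keys and values

def parse_model_output(text: str):
    """Guardrail: never assume an LLM emitted valid JSON."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None  # caller retries, repairs, or re-prompts

print(parse_model_output('{"status": "ok"'))  # unclosed brace -> None
```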

jblakely 2 days ago

Totally agree.

Just started reading about MCP the other day and I feel like I must be missing something because I don't see the advantage.

slurpyb 4 hours ago

It’s another hammer looking for a nail.

revskill a day ago

You can, of course. One day we will.