Good point about the M x N problem reduction, but this glosses over a critical limitation. While MCP does turn integration complexity from M x N to M + N for the protocol layer, authentication and authorization remain stubbornly M x N problems.
Each MCP server still needs to handle auth differently depending on what it's connecting to. A GitHub MCP server needs GitHub tokens, a database server needs database credentials, an email server needs SMTP auth, etc. The client application now has to manage and securely store N different credential types instead of implementing N different integrations.
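For illustration, the client-side credential sprawl might end up looking something like this (a hypothetical sketch in Python; every name here is invented):

    import os

    # One credential type per MCP server: the M x N auth problem in miniature.
    # Each entry needs its own secure storage, rotation, and refresh policy.
    SERVER_CREDENTIALS = {
        "github":   {"kind": "pat",        "token": os.environ.get("GITHUB_TOKEN")},
        "postgres": {"kind": "dsn",        "dsn":   os.environ.get("DATABASE_URL")},
        "email":    {"kind": "smtp_basic", "user":  os.environ.get("SMTP_USER"),
                     "password": os.environ.get("SMTP_PASS")},
    }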
So yes, the protocol complexity is reduced, but the real operational headache (managing secrets, handling token refresh, dealing with different auth flows) just gets moved around rather than solved. In some ways this might actually be worse since you now have N different processes that each need their own credential management instead of one application handling it all.
This doesn't make MCP useless, but the "M x N to M + N" framing undersells how much complexity remains in the parts that actually matter for production deployments.
If the MCP server is running on the user's client side and only the LLM is remote, then could one perhaps leverage the existing authentication infrastructure between the enterprise IdP, the browser, the MCP server, and the enterprise target sites?
I imagine this will speed up the convergence of all servers towards OAuth and TOTP.
The overload of GenAI-related postings on here almost makes me look with nostalgia at the period when most of the posts were about some SQLite optimisation/use-case/weird trick....
Can I interest you in a new Javascript framework?
I liked the 2048 HN era: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Why yes, especially if it is "elegant", "easy to use" and of course "optimised for developer experience" :)
You may dig what I've built, then [1][2].
[1] https://cheatcode.co/joystick
[2] https://github.com/cheatcode/joystick
Water found on Mars!
Eternal September
This is a bit of hyperbole. Step back and look at the list of articles on the front page. You're going to see a lot of different things. It's not the overwhelming flood of GenAI content that you're describing. We got one guy who wrote his own music player for iOS. We got this other guy who's optimizing his OCR code. Another article talks about infrared contact lenses. And yeah, there's going to be more AI stuff than there used to be, because that's what's going on right now in the field. But it's far from the only thing. Not by a long shot. The only reason you and I are conversing in this AI-adjacent thread is because we both clicked on the link. The only difference between us is that I was actually interested in what this article had to say, but you're clearly not. And that's completely fair! But you make it sound like you're starved for non-AI content when there's a whole wealth of it on the front page. C'mon.
On a deeper level, the complaint is about the meaninglessness of those many posts. It is supposed to be revolutionary tech, but every week, every week, we are bombarded with these vague and, it seems, mostly incremental "improvements". People who post this stuff should wait a bit and let us know when there is a real breakthrough. Or when the VC investors stop pumping on the order of 200B USD into the magic oracle industry, only to generate a total of 10B in pre-tax income for all the major AI companies combined.
Yes, please bring back the blockchain posts. /s
<small>hmm, generative ai blockchain?...</small>
We had Bored Ape, but what we really wanted was Infinite Monkeys.
I keep waiting for someone to break character and admit that this is all an extended trolling campaign. People are actually connecting these autocompletes to APIs and giving credentials to take impactful external actions? Y'all are _insanely_ trusting.
I'm with you, but it makes for extremely impressive demos. And surprisingly useful day-to-day improvements.
It's not like you can't do this manually (or automatically) but MCP makes things like this so much easier:
> Check JIRA against my org-mode and identify any tasks that I worked on that haven't been reflected in either system.
Undoubtedly there's an incredible amount of hype, but there's a reason for it. I prefer MCP tools that are read-only for now.
Sounds useful. However, I'd rather put deterministic code in control of the LLM than the LLM in control of deterministic code. And that's even before prompt injections.
Do you maintain a newsletter or take subscriptions in some other fashion? This is a refreshingly low-BS take and those are hard to come by, and I would be interested especially in Emacs integrations.
Can we not just point LLMs at OpenAPI documents and achieve the same result? All of the example functions in the article look like very very basic REST endpoints.
Exactly. We already have lots of standards for defining APIs (OpenAPI, GraphQL, SOAP if I'm showing my age, etc. etc.) Part of my original "wow this is magic" moment with AI came when OpenAI released some of their plugins and showed how you could just point it at an API spec and the LLM could just figure out, on its own, how to use it.
So one real beauty of AI is that it is so good at taking "semi-structured" data and structuring it. So perhaps I'm missing something, but I don't see how MCP benefits you over existing API documentation formats. It seems like an "old way" of thinking, where we always wanted to define these interoperation contract formats and protocols, but a huge point of AI is that you shouldn't really need more protocols to begin with.
Again, I don't know all the ins and outs of MCP, so I'm happy to be corrected. It's just that whenever I see examples like in the article, I'm always left wondering what benefit MCP gives you in the first place.
Well, one benefit is the precision and focus of the protocol that can be used to train/finetune LLMs.
More focused training -> more reliable understanding in LLMs.
I hear you but what exactly about MCP is more precise or training-friendly than other approaches? I can think of at least one way that it isn't: MCP doesn't provide an API sandbox the way an Apigee or Mulesoft API documentation page could.
I understand what you're saying, but I'm still not clear why any of this should be necessary or is a benefit for LLMs. Another commenter mentioned that MCP saves tokens and is more compact. So what? Then just have the LLM do a one-time pass of a more verbose spec to summarize/minify it.
Any human brainspace needed to even think about MCP just seems like it goes against the whole raison d'être of AI in that it can synthesize and use disparate information much faster, more efficiently, and cheaper than a human can.
Don't forget HATEOAS if we're listing prior art of self-discoverable APIs!
You can; most MCP servers are just wrappers around existing SDKs or even REST endpoints.
I think it all comes down to discovery. MCP has a lot of natural language written into each of its “calls”, allowing the LLM to understand context.
MCP is also not stateless, but to keep it short: I believe it's just a way to make these tools more discoverable for the LLM. MCP doesn't do much that you can't do with other options; it just makes things easier on the LLM.
That’s my take as someone who wrote a few.
Edit: I like to think of them as RPC for LLMs.
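To make the discovery point concrete, here is roughly the shape of a single tool definition as a server advertises it; the `description` fields are the natural language the model actually reads (tool name and wording invented):

    # One MCP tool definition: the prose in "description" is what the model
    # uses to decide when and how to call the tool.
    example_tool = {
        "name": "search_issues",
        "description": "Search open issues in the team tracker. Use this when "
                       "the user asks about bug status or backlog items.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Free-text search terms"},
                "limit": {"type": "integer", "description": "Max results, default 10"},
            },
            "required": ["query"],
        },
    }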
OpenAPI definitions are verbose and exhaustive. In MCPs you can remove a lot of extra material, saving tokens.
For example, in [1] the whole `responses` schema can be eliminated. The error texts can instead be surfaced when they appear. You also don't need the duplicate JSON/XML/url-encoded input formats.
Secondly, a whole lot of complexity is eliminated, since arbitrary data can't be sent and received. Finally, the tool outputs are prompts to the model too, so you can leverage the output for better accuracy, which you can't do with general-purpose APIs.
[1] https://github.com/swagger-api/swagger-petstore/blob/master/...
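As a rough sketch of that trimming, the petstore's `addPet` operation (three request content types plus a full `responses` block in the OpenAPI file) could collapse to something like this; field names follow the MCP tool schema, the rest is simplified by hand:

    # Hypothetical MCP-style reduction of the OpenAPI `addPet` operation.
    # No `responses` schema and no per-content-type duplication; errors just
    # come back as text when they actually occur.
    add_pet_tool = {
        "name": "add_pet",
        "description": "Add a new pet to the store.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "name":   {"type": "string"},
                "status": {"type": "string", "enum": ["available", "pending", "sold"]},
            },
            "required": ["name"],
        },
    }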
So why can't the LLM just take the verbose OpenAPI spec, summarize it and remove the unnecessary boilerplate and cruft (do that once), and only use the summarized part in the prompt?
There is probably an MCP for that
That’s basically what we did before MCP. And what (for example) langchain does.
It’s great to have a standard way to integrate tools but I can’t say I have much love for MCP specifically.
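A minimal sketch of that pre-MCP pattern (one-time spec minification), with the LLM call abstracted behind a caller-supplied `complete()` function so no particular vendor API is assumed:

    import json

    def minify_openapi_spec(spec: dict, complete) -> str:
        """One-time pass: ask the model to strip boilerplate from a spec.

        `complete` is a stand-in for whatever LLM client you use. Cache the
        returned summary and reuse it in every subsequent prompt.
        """
        prompt = (
            "Summarize this OpenAPI spec as a compact list of operations: "
            "name, purpose, required parameters. Drop response schemas and "
            "duplicate content types.\n\n" + json.dumps(spec)
        )
        return complete(prompt)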
The docs are often pretty wrong. It's nice to formalize the glue in a server.
MCP is bloated AI hype that basically solves nothing (the Langchain of 2025!). Typical tooling on top of tooling, and the quintessential case of a solution looking for a problem. It's absolute garbage from just about any standpoint: architectural, security, elegance, etc. But my main point is that it solves nothing and there's nothing novel here. It's APIs talking to APIs that talk to other APIs. Wow, groundbreaking!
I genuinely believe that there will be (and potentially already are) use-cases when it comes to AI agents, but we really need to step back and re-think the whole thing. I'm in the middle of writing a blog post about this, but I really do think genAI is a dead-end and that no one really wants to chill out for a second and solve the hard stuff:
- Needle-in-a-haystack accuracy
- Function calling (and currying/chaining) reliability
- Non-chat UI paradigm (the chat-box UI is a dead-end)
- Context culling (ignoring non-relevant elements)
- Data retrieval given huge contexts (RAG is just not good enough)
- Robotics
- Semantic inference
Like, I get it, it's hard to come up with new ways of solving some of these (or bringing them up from ~50% to 90% accuracy), but no one's going to use an AI agent when it confidently fakes data, doesn't remember important stuff, or makes you sit there tweaking a prompt for 30 minutes.
I found myself trying to explain MCP the other day. The simplest way I could put it for another developer:
MCP is a standardized set of API endpoints that makes it easier for LLMs to discover and work with all the other regular APIs you have.
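Concretely, that standardized surface is a small set of JSON-RPC methods; the two messages below are the heart of it (tool name and arguments invented for illustration):

    # The client asks the server what it can do...
    list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

    # ...then invokes one of the advertised tools by name.
    call_request = {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "search_issues", "arguments": {"query": "login bug"}},
    }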
MCP is as revolutionary as JSON.
Still, it's funny to see numerous hyped GenAI start-ups with bad monetary traction jump on the bandwagon and proclaim MCP the latest revolution (after RAG, agents, you name it)... All of these are simply tools which add zero value by themselves. Looking forward to the VC wake-up calls.
As in: "JSON was a huge deal, and this could also be a huge deal" or "Just use JSON"?
It's cgi-bin for AI.
I think it's a beautiful comparison on many levels.
Waiting for the AI /cgi-bin/phf.
Initially I took MCP to be some crazy new thing, but once I dug into it, it's really just a soft-standard for connecting a database, API, or other data source (e.g., a list of functions that can be called) to a vector database and then returning a response in a standardized JSON format.
It's basically RAG with a bit of sugar on top. What spooks me is how few people hyping MCP seem to understand that.
How come humans need to know about all this? Isn't this the exact use case where the machines write all the code? It's a machine-to-machine protocol, and some of the machines involved are supposed to be PhD-level programmers.
MCP or something similar should exist, but it should be handled 100% by AI so that people can do the stuff that is important and human-related.
It rubs me the wrong way seeing people trying to understand this; taken at face value, it appears that AI can now do the code, but MCP is so hard that it needs a human who studied it just so they can talk.
Is this piece of JSON really the last frontier of programming?
> MCP or something similar should exist, but it should be handled 100% by AI so that people can do the stuff that is important and human-related.
So far, the entire AI story feels very much like the opposite. The AI is (possibly) taking over the creative/fun stuff, leaving human beings to do the annoying parts.
I feel like there's something wrong with me for not understanding the big leap with MCP and the proponents aren't helping.
I saw a tweet thread that said something like "if you think MCP is the same as REST, you're not thinking big enough", followed by a bunch of marketing speak that gave off LinkedIn web3 influencer vibes. I saw a blog post that says MCP is better because it bridges multiple protocols. Okay, and?
I really want to get this, but I don't know how to square "LLMs are hyper intelligent" with "LLMs can't figure out OpenAPI documentation."
They are, like much of the GenAI hype, simply a solution looking for a problem. That's why they have to explain it so much - it's really more of a desperate attempt to convince...
REST doesn't provide the documentation or the semantics of the interface. MCP is the API definition along with text on why to use it, when to use it, and how to use it, which is what an LLM needs in order to consume it. The documentation is a requirement. I have developed many MCP servers; they are real and they provide me real value in my work every day.
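A tiny sketch of what that bundling looks like in practice, using the Python SDK's FastMCP helper (the tool itself is invented; check the SDK docs for the current API):

    # The tool IS its documentation: name, docstring, and typed parameters
    # all travel to the model alongside the callable itself.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("tickets")

    @mcp.tool()
    def search_issues(query: str, limit: int = 10) -> str:
        """Search open issues. Use this when the user asks about bug status."""
        return f"(stub) would search for {query!r}, top {limit} results"

    if __name__ == "__main__":
        mcp.run()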
I know it's in active discussion on GH, but I wish clients would support (non-text) UIs from MCP servers rather soon. It would 10x the power of these chat extensions.
Can you give an example of what you're talking about? Do you mean not having a "chatbot" UI and instead sending a camera feed to your MCP client or something?
Here's someone experimenting with what I mean: https://github.com/idosal/mcp-ui
Is this sort of like how, when the iPhone touch-screen came out, it allowed for dynamic regeneration of the UI for each specific app instead of "hard coding" hardware inputs/buttons as the one interface to all apps? So here, AI can dynamically generate a context-dependent UI on the fly that can be interacted with, influenced by user input, API responses, etc.?
Has anyone found good resources about dealing with authentication in MCP, especially about managing the OAuth tokens locally?
Why is there so much explaining to do for MCPs? There seems to be something seriously wrong with the way Anthropic is marketing it. It looks like the entire world is confused as to what it is.
It's being made out to be something bigger/more important than what it is to create hype and investment interest. Is it incredibly useful? Yes. But it's not aliens landing on the front lawn of the White House offering us anti-gravity tech.
If they said what it really was (see my other comment in this thread [1]), they couldn't leverage it to make more money/get more investors.
[1] https://news.ycombinator.com/item?id=44065739
To parent's point, your summary:
> basically RAG with a bit of sugar on top
Is not correct. MCP can work very well with a RAG system, providing a standard way to add context to a model call, but itself doesn't do any Retrieval.
Over the years there have been a huge variety of ways information such as tool use, RAG context, and other prompting information has been communicated to the model (very often using some ad hoc approach). MCP seeks to clarify and standardize how that information is communicated to and from the model. This, as the poster points out, allows you to reuse tools, RAG, etc. with any supporting model rather than hacking them together to work with each one individually.
Previously you would have had to come up with your own way to add the retrieved metadata from RAG to the model, use the vendor-specific method of tool calling, and then write your own method of tool dispatch once a tool call has been returned.
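For contrast, a minimal sketch of that hand-rolled dispatch step (vendor-neutral; names invented), which every application previously reimplemented for itself:

    # Pre-MCP glue: you owned the mapping from the model's tool-call output
    # back to real functions, once per vendor calling convention.
    def dispatch_tool_call(call: dict, registry: dict) -> dict:
        fn = registry.get(call["name"])
        if fn is None:
            return {"error": f"unknown tool {call['name']}"}
        try:
            return {"result": fn(**call.get("arguments", {}))}
        except TypeError as e:  # model passed bad or missing arguments
            return {"error": str(e)}

    registry = {"get_time": lambda: "12:00"}
    print(dispatch_tool_call({"name": "get_time"}, registry))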
> Is not correct. MCP can work very well with a RAG system, providing a standard way to add context to a model call, but itself doesn't do any Retrieval.
That's a misrepresentation of what I said. I didn't say that MCP replaces RAG, just that it's essentially a RAG system with some syntax sugar on top (which your response confirms).
It's great that it adds some standardization to the process of implementing RAG, but under the hood that's the engine of MCP.
... MCP? https://www.youtube.com/watch?v=AvayPCoHGFE