As a community we need to understand that MCP is not needed

1. Give your AI client a wrapper around "curl" that validates which API methods and endpoints it is allowed to hit.

2. Tell your AI to download web docs and use html2text + grep + head + tail to discover what it needs.
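The two steps above can be sketched as a small shell function (the function name, the allow-listed host, and the paths are all made up for illustration):

```shell
#!/bin/sh
# safe_curl: hypothetical wrapper the AI client invokes instead of raw curl.
# It rejects anything outside a read-only method set and an allow-listed host
# before the request ever leaves the machine.
safe_curl() {
  method="$1"
  url="$2"
  allowed_host="api.example.com"   # assumption: your API's host

  # Only read-only HTTP methods pass validation here.
  case "$method" in
    GET|HEAD) ;;
    *) echo "method not allowed: $method" >&2; return 1 ;;
  esac

  # Only URLs on the allow-listed host pass validation.
  case "$url" in
    "https://$allowed_host/"*) ;;
    *) echo "host not allowed: $url" >&2; return 1 ;;
  esac

  curl -sf -X "$method" "$url"
}
```

Step 2 is then just ordinary text tooling on top, e.g. `curl -s https://example.com/docs | html2text | grep -i oauth | head -40` to let the model skim only the part of the docs it needs.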

Seriously, it just works. No need for everyone to reinvent their own MCP server. No need to put your whole documentation in the prompt every time. Just improve what you have:

- Add OAuth2 to your API endpoints, and improve your docs.

- For CLIs, no need to copy-paste your whole help text into SKILLS, just improve your existing --help.

If things get easier to use and understand for humans, they also get better for AI; don't duplicate the work.

Point 1) above definitely needs some kind of protocol, but not a whole MCP server written in some language and hosted somewhere. A JSON file is enough: the AI client downloads it, and the user selects which group of methods/endpoints is allowed (read-only, write, admin). THIS DOES NOT need to go in the prompt; it validates the HTTP calls. What can go in the prompt is, at most, a hint like "Only read-only endpoints are allowed". That's it!
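A minimal sketch of what such a downloadable policy file might look like (the field names and paths are invented for illustration, not any existing standard):

```json
{
  "api": "https://api.example.com",
  "groups": {
    "read-only": { "methods": ["GET", "HEAD"], "paths": ["/v1/*"] },
    "write": { "methods": ["POST", "PUT", "PATCH"], "paths": ["/v1/*"] },
    "admin": { "methods": ["DELETE"], "paths": ["/v1/admin/*"] }
  }
}
```

The client enforces the selected group at the HTTP layer, so the model never sees the policy itself, only the optional one-line hint.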

2 points | by Lethalman 3 hours ago

1 comment

  • dmilicev2 2 hours ago
I feel like there should be a consistent protocol, an industry standard for interacting with AIs; I do see value in that. However, as you said, there doesn't seem to be value in turning the entire web, or everything we can think of, into an MCP server.

    At least not at the moment, and perhaps it will stay that way. It's logical to think that LLMs will always be more expensive to run than a simple web or shell script for a specialised purpose.

    Arguably you can drop in an API or a local script for that AI to consume, but I do see the benefit of having it standardised for the industry as MCP if you want something to run as an AI-agnostic infrastructure layer.

    • Lethalman 2 hours ago
      A web fetch is not much different than asking an MCP for its tools, right?
      • dmilicev2 1 hour ago
        I believe so, sounds logical, yes. I haven't measured it across LLMs to tell you if there is more or less overhead, confusion, hallucination, repeated mistakes, etc.