Welcome to FastMCP

(gofastmcp.com)

57 points | by Anon84 2 hours ago

7 comments

  • arthurjean 18 minutes ago
    MCP earns its keep in specific cases: when the agent has no shell access, when you need to keep credentials out of the prompt context, or when you want runtime tool discovery across teams. But I've built a few MCP servers and half of them would've been simpler as a CLI script the agent calls directly.
  • zlurker 1 hour ago
    I still don't fully understand the point of MCP servers. What do they provide that a skill doesn't? Maybe I've just used too many poorly written ones.

    Is there some sort of tool that can be expressed as an MCP but not as an API or CLI command? Obviously we shouldn't map existing APIs to MCP tools, but why would I use an MCP over just writing a new "agentic-ready" API route?

    • IanCal 4 minutes ago
      You could write an API, and then document it, and then maybe add some useful prompts?

      Then you’d need a way of passing all that info on to a model, so something top level.

      It’d be useful to do things in the same way as others (so if everyone is adding OpenAPI/Swagger you’d do the same if you didn’t have a reason not to).

      And then you’ve just reinvented something like MCP.

      It’s just a standardised format.
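      For illustration, a rough sketch of what that standardised format looks like on the wire: MCP is JSON-RPC, and every server answers `tools/list` with self-describing tool entries. The field names follow the MCP spec; the `get_forecast` tool and its parameters are invented for this example.

```python
# Shape of an MCP "tools/list" response (field names per the MCP spec;
# the get_forecast tool itself is a made-up example).
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_forecast",
                "description": "Return the weather forecast for a city.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"},
                        "days": {"type": "integer", "minimum": 1, "maximum": 7},
                    },
                    "required": ["city"],
                },
            }
        ]
    },
}
# Any MCP client can discover and document this tool with no
# tool-specific glue code -- that is the "standardised" part.
```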

    • simonw 1 hour ago
      I know of two benefits to MCP over Skills:

      - If your agent doesn't have a full Bash-style code execution environment it can't run skills. MCP is a solid option for wiring in tools there.

      - MCP can help solve authentication, keeping credentials for things in a place where the agent can't steal those credentials if it gets compromised. MCPs can also better handle access control and audit logging in a single place.

      • simianwords 23 minutes ago
        I don't agree with either. Skills with an API exposed by the service solve both of your problems.

        The LLM can look at the OpenAPI spec and construct queries - I often do this pretty easily.
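        As a toy sketch of that workflow (the `/policies` path, its parameter, and the `example.com` host are all invented): read the spec, find the path template, fill in the parameters.

```python
# Hypothetical, minimal OpenAPI fragment -- the endpoint is invented.
spec = {
    "servers": [{"url": "https://api.example.com/v1"}],
    "paths": {
        "/policies/{policy_id}": {
            "get": {
                "summary": "Fetch a policy by id",
                "parameters": [
                    {"name": "policy_id", "in": "path", "required": True,
                     "schema": {"type": "string"}}
                ],
            }
        }
    },
}

def build_url(spec: dict, path: str, **params) -> str:
    """Resolve a templated path against the spec's first server URL,
    the same substitution an LLM performs when it reads the spec."""
    base = spec["servers"][0]["url"]
    return base + path.format(**params)

url = build_url(spec, "/policies/{policy_id}", policy_id="abc123")
```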

        • mememememememo 16 minutes ago
          It creates a new problem. I need an isolated shell environment. I need to lock it down. I need containers. I need to ensure said containers are isolated and not running as root. I probably need Kubernetes to do this at scale. Etc.

          Also, even with all of the above, there is more opportunity for the bot to go off-piste and run cat this and awk that. Meanwhile the "operator", i.e. the grandpa who has an iPhone but has never used a computer, has no chance of getting the bot back on track as he tries to renew his car insurance.

          "Just going to try using sed to get the output of curl https://.."

          "I don't understand I just want to know the excess for not at fault incident when the other guy is uninsured".

          Everyone has gone claw-brained. But it really is OK to write code, save that code to disk, and execute that code later.

          You can use MCP, or even just a hard-coded API call from your back end to the service you want to use, like it's 2022.

        • simonw 20 minutes ago
          How can you disagree with my first point? You can't use skills if you don't have a Bash environment in which to run them. Do you disagree?

          Skills with an API exposed by the service usually means your coding agent can access the credentials for that service. This means that if you are hit by a prompt injection the attacker can steal those credentials.

          • simianwords 15 minutes ago
            Fair points, learned something new.
      • staticassertion 1 hour ago
        Can you explain the auth part? I feel like auth for an agent is largely a matter of either verifying its context or issuing it a JWT that's scoped to its rights, which I assume is quite similar to how any tools would work. But I'm very unfamiliar with MCP.
        • monkpit 1 hour ago
          I think they’re saying you could start up the mcp and pass it creds/auth for some downstream service, and then the LLM uses the tool and has auth but doesn’t know the creds.
          • simonw 44 minutes ago
            Right. If you're running a CLI tool that is authenticated there's effectively no way to prevent the coding agent from accessing those credentials itself - they're visible to the process, which means they're visible to the agent.

            With MCP you can at least set things up such that the agent can't access the raw credentials directly.
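            A minimal sketch of that boundary (the tool, the token name, and `call_github_api` are all invented stand-ins): the credential lives only in the MCP server process, and tool results are the only thing that crosses back to the agent.

```python
import os

def call_github_api(path: str, token: str) -> dict:
    # Stand-in for an authenticated HTTP request made by the server.
    return {"path": path, "authenticated": bool(token)}

def list_issues_tool(repo: str) -> dict:
    """Tool handler running inside the MCP server process."""
    # The secret is read from the server's own environment and is
    # never included in the value returned to the agent.
    token = os.environ.get("GITHUB_TOKEN", "demo-token")
    result = call_github_api(f"/repos/{repo}/issues", token)
    # Only the result crosses the MCP boundary -- the token does not.
    return {"repo": repo, "issues": result["path"]}
```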

            • zbentley 42 minutes ago
              This is right. It’s not about scoping auth, it’s about preventing secret misuse/exfil.

              (Moved from wrong sub)

          • JambalayaJimbo 34 minutes ago
            The MCP implementation is itself an agent right? Is that not just pushing the problem somewhere else?

            Also, I run programs on my machine with a different privilege level than myself all the time. Why can’t an agent do that?

            • conception 9 minutes ago
              No, an MCP is just a server that returns prompts to the LLM. The server can be/do whatever. You can have an echo MCP that just echoes back whatever you send it.
            • simonw 19 minutes ago
              I define the agent as the harness that runs the LLM in a loop calling tools. The MCP implementation is one of those tools. I wouldn't call an MCP implementation an agent.
      • tomjwxf 1 minute ago
        [dead]
    • dathanb82 1 hour ago
      Skills are part of the repo, and CLIs are installed locally. In both cases it's up to you to keep them updated. MCP servers can be exposed and consumed over HTTPS, which means the MCP server owner can keep them updated for you.

      Better sandboxing. Accessing an MCP server doesn't require you to give an agent permissions on your local machine.

      MCP servers can expose tools, resources, and prompts. If you're using a skill, you can "install" it from a remote source by exposing it on the MCP server as a "prompt". That helps solve the "keep it updated" problem for skills - it gets updated by interrogating the MCP server again.

      Or if your agentic workflow needs some data file to run, you can tell the agent to grab that from the MCP server as a resource. And since it's not a static file, the content can update dynamically -- you could read stocks or the latest state of a JIRA ticket or etc. It's like an AI-first, dynamic content filesystem.
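      A sketch of that dynamic-resource idea (the `jira://` URI scheme and ticket fields are invented; the response shape follows the MCP `resources/read` result): the content is regenerated on every read rather than served from a static file.

```python
import datetime
import json

def read_ticket_resource(uri: str) -> dict:
    """Build a fresh "resources/read" result for an invented ticket URI."""
    state = {
        "ticket": uri.rsplit("/", 1)[-1],
        "status": "In Progress",  # would be fetched live in a real server
        "fetched_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return {
        "contents": [{
            "uri": uri,
            "mimeType": "application/json",
            "text": json.dumps(state),  # regenerated on every read
        }]
    }

resp = read_ticket_resource("jira://PROJ-123")
```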

      • swingboy 1 hour ago
        You can install skills globally so they are available in all projects.
    • 9dev 44 minutes ago
      If you expand your scope a bit beyond developer tooling, you’ll notice a lot of scenarios where an agent running somewhere as a service may need to invoke commands elsewhere: in other apps, or maybe provided by a customer in a bring-your-own-MCP setup. In these cases the harness is not running locally and you don’t have a filesystem to write skills to on demand (or a fixed set of skills is baked into the container), so to get extensibility or updates to tooling, you want something that avoids redeployments. MCP fills that spot.
    • alexwebb2 1 hour ago
      You could get pretty far with a set of agent-focused routes mounted under e.g. an /agents path in your API.

      There'd be a little extra friction compared to MCP – the agent would presumably have to find and download and read the OpenAPI/Swagger spec, and the auth story might be a little clunkier – but you could definitely do it, and I'm sure many people do.

      Beyond that, there are a few concrete things MCP provides that I'm a fan of:

      - first-class integration with LLM vendors/portals (Claude, ChatGPT, etc), where actual customers are frequently spending their time and attention

      - UX support via the MCP Apps protocol extension (this hasn't really entered the zeitgeist yet, but I'm quite bullish on it)

      - code mode (if using FastMCP)

      - lots of flexibility on tool listings – it's trivial to completely show/hide tools based on access controls, versus having an AI repeatedly stumble into an API endpoint that its credentials aren't valid for

      I could keep going, but the point is that while it's possible to use another tool for the job and get _something_ up and running, MCP (and FastMCP, as a great implementation) is purpose built for it, with a lot of little considerations to help out.

    • yoyohello13 41 minutes ago
      I built an MCP server various people in our company can use to query our various databases. I can have a service account scoped only to the non-sensitive data, and users only need to have an MCP aware agent on their computer instead of dealing with setting up drivers, DB tools, etc.
    • Marazan 1 hour ago
      You can tightly constrain MCPs and shape the context that is shared back to the Agent.

      A skill is, at the end of the day, just a prompt.

      • zapnuk 51 minutes ago
        That's just one of the interpretations of a skill.

        A skill can also act as an abstraction layer over many tools (implemented as an mcp server) to save context tokens.

        Skills offer a short description of their use and thus occupy only a few hundred tokens in the context, compared to thousands of tokens if all tools were in the context.

        When the LLM decides that the skill is useful, we can dynamically load the skill's tools into the context (using a `load_skill` meta-tool).
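        A sketch of that meta-tool pattern (the skill name, tool names, and registry layout are all invented): only one-line descriptions sit in the context up front, and the full tool definitions are pulled in on demand.

```python
# Invented skill registry: short descriptions up front, full tool
# definitions loaded only when a skill is activated.
SKILLS = {
    "db-query": {
        "description": "Query company databases.",  # a few tokens up front
        "tools": [
            {"name": "run_sql", "description": "Execute a read-only SQL query."},
            {"name": "list_tables", "description": "List available tables."},
        ],
    },
}

def skill_index() -> list:
    """What the LLM sees initially: just name + one-line description."""
    return [{"name": n, "description": s["description"]}
            for n, s in SKILLS.items()]

def load_skill(name: str) -> list:
    """Meta-tool: pull a skill's full tool list into the context."""
    return SKILLS[name]["tools"]
```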

      • dionian 1 hour ago
        True, but we could also integrate a non-MCP app with a skill and put the controls there.
  • _verandaguy 2 hours ago

        > FastMCP is the standard framework for building MCP applications
    
    Standardized by whom?

    In an era where technology exists that can lend the appearance of legitimacy to just about anyone, that kind of statement needs to be qualified.

  • notoreous 1 hour ago
    Well, it sure took "FastMCP" long enough. And the announcement lands at a time when it's looking increasingly like CLI is the preferred method vs MCP. I'm sure in a few months' time, even that will be out of date.
    • speedgoose 55 minutes ago
      MCP is superior to CLI by design, and it’s not even close. I don’t understand the sudden hype towards CLI for agents.
      • zingar 53 minutes ago
        Would you mind elaborating on the superiority you perceive?
        • TheMrZZ 38 minutes ago
          For remote MCP servers, there's no need to install potentially untrusted software on your computer. A remote MCP can do very little harm; a CLI, though? You're vulnerable to bad actors or supply-chain attacks.

          For client side MCP it's a different story.

        • speedgoose 42 minutes ago
          It has a JSON schema; that's the main point. It also enforces good documentation by design. No need to get a man page or run the help command, it's in the context. It can work remotely with authentication.
          • ramon156 34 minutes ago
            Most CLI tools have JSON support. Your arguments fall pretty flat.

            I think MCP is fine in an env where you have no access to tools, but you can't ripgrep your way through an MCP (unless you make an MCP that calls ripgrep on e.g. a repo, in which case what are you doing?).

            • vova_hn2 18 minutes ago
              Tool calls can have a JSON schema enforced at a lower level (token sampling). I'm not sure if major providers do it, but I don't see any reason why they wouldn't.
        • needs 33 minutes ago
          Explorable by design, can be served through HTTP, OAuth integration.
  • Alifatisk 39 minutes ago
    Has FastMCP become the standard SDK? The docs are great, honestly way better than the official modelcontextprotocol website, where most of the pages are "under construction".
  • whattheheckheck 2 hours ago
    Whatever you do, do not simply map REST APIs 1:1 to MCP tools. Really think about the common workflows users want, and make good abstractions for good chunks of work.
  • cboyardee 1 hour ago
    [dead]