7 comments

  • driese 3 minutes ago
    Nice one! Let's say I'm serving local models via vLLM (because Ollama comes with huge performance hits), how would I implement that in GoModel?
  • pjmlp 1 hour ago
    To be expected, given that LiteLLM is implemented in Python.

    However kudos for the project, we need more alternatives in compiled languages.

    • goodkiwi 10 minutes ago
      It’s also badly implemented: everything is a global import. I had to stop using it.
    • santiago-pl 1 hour ago
      Agreed, and thank you! Please let us know if you'd like to give it a try, and whether you're missing any features in GoModel.
  • indigodaddy 49 minutes ago
    Any plans for AI provider subscription compatibility? E.g. ChatGPT, GitHub Copilot, etc.? (à la opencode)
    • santiago-pl 23 minutes ago
      You're not the first person to ask about this.

      It looks like a useful feature to have, so I'll dig into the topic more broadly over the next few days and let you know here whether, and possibly when, we plan to add it.

  • Talderigi 1 hour ago
    Curious how the semantic caching layer works. Are you embedding requests on the gateway side and doing a vector similarity lookup before proxying? And if so, how do you handle cache invalidation when the underlying model changes or gets updated?
    • giorgi_pro 1 hour ago
      Hey, contributor here. That's right, GoModel embeds requests and does vector similarity lookup before proxying. Regarding the cache invalidation, there is no "purging" involved – the model is part of the namespace (params_hash includes the LLM model, path, guardrails hash, etc). TTL takes care of the cleanup later.
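(Editor's note: the namespacing idea described above can be sketched as a pure function. The function and field names below are hypothetical illustrations of the scheme, not GoModel's actual implementation: because the model name, path, and guardrails hash are folded into the cache key, a model change simply lands in a fresh namespace, and stale entries expire via TTL rather than explicit purging.)

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// cacheNamespace derives a cache namespace from the request's routing
// parameters. Any change to the model (or path, or guardrails config)
// produces a different namespace, so old cached responses are never
// matched again; TTL cleans them up later.
func cacheNamespace(model, path, guardrailsHash string) string {
	paramsHash := sha256.Sum256([]byte(strings.Join(
		[]string{model, path, guardrailsHash}, "|",
	)))
	return hex.EncodeToString(paramsHash[:])
}

func main() {
	a := cacheNamespace("gpt-4o", "/v1/chat/completions", "gr-v1")
	b := cacheNamespace("gpt-4o-mini", "/v1/chat/completions", "gr-v1")
	// Different model, different namespace: no explicit purge required.
	fmt.Println(a != b) // true
}
```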
  • rvz 38 minutes ago
    I don't see any significant advantage over mature routers like Bifrost.

    Are there even any benchmarks?

  • anilgulecha 1 hour ago
    How does this compare to Bifrost, another Go router?
    • santiago-pl 1 hour ago
      First of all, GoModel doesn't have a separate private repository behind a paywall/license.

      It's more lightweight and simpler. The Bifrost Docker image looks about 4x larger, at least for now.

      IMO GoModel is more convenient for debugging and for seeing how your request flows through different layers of AI Gateways in the Audit Logs.

      • anilgulecha 1 hour ago
        That would be valuable if there's a commitment to never have a non-open-source offering under GoModel. If so, you could document that in the repo.
  • tahosin 1 hour ago
    This is really useful. I've been building an AI platform (HOCKS AI) where I route different tasks to different providers — free OpenRouter models for chat/code gen, Gemini for vision tasks. The biggest pain point has been exactly what you describe: switching models without changing app code.

    One thing I'd love to see is built-in cost tracking per model/route. When you're mixing free and paid models, knowing exactly where your spend goes is critical. Do you have plans for that in the dashboard?

    • santiago-pl 50 minutes ago
      This comment looks AI-generated.

      However, IIUC, what you're asking for is already in the dashboard! Check the Usage page.