Ask HN: Best Embedding Models?

Hey HN, which embedding models are people using? There's been so much development around foundational LLMs, but I haven't seen much news about embedding models.

17 points | by devstein 2 days ago

16 comments

  • PhilippGille 1 day ago
    Benchmarks only paint part of the picture, but it's still a decent place to start looking into recent models:

    https://huggingface.co/spaces/mteb/leaderboard

  • mutant 2 hours ago
    not a single "of what data" or "in what env"

    best in what?

  • rapatel0 2 days ago
I've liked Qwen and EmbeddingGemma for local search. Qwen because 32K is enough to basically fit a whole page into the context window, and EmbeddingGemma because it's crazy efficient.
  • stevenfazzio 1 day ago
    Cohere's embed-v4.0 is my daily driver as far as a high performance model is concerned. I do a lot of cluster analysis and data visualization and I like that there's an `input_type="clustering"` mode in addition to the standard `input_type="search"` mode.

    For a fast, open, and local model, I've found it hard to beat https://huggingface.co/sentence-transformers/all-MiniLM-L6-v...

  • sp1982 1 day ago
    I am using openai small embedding model with custom compression. It is super cheap. You can read more at https://corvi.careers/blog/vector-search-embedding-compressi...
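    (The linked post's exact compression method isn't shown here; one common approach, which OpenAI's `text-embedding-3-small` also supports natively via its `dimensions` parameter, is Matryoshka-style truncation plus re-normalization. A minimal pure-Python sketch:)

    ```python
    import math
    import random

    def compress(embedding, dims):
        """Truncate an embedding to its first `dims` components and re-normalize
        to unit length. Matryoshka-trained models front-load information, so
        the prefix of the vector remains a usable embedding on its own."""
        truncated = embedding[:dims]
        norm = math.sqrt(sum(x * x for x in truncated))
        return [x / norm for x in truncated]

    # Random stand-in for a 1536-dim text-embedding-3-small vector.
    random.seed(0)
    full = [random.gauss(0, 1) for _ in range(1536)]

    small = compress(full, 256)
    print(len(small))                                     # 256
    print(round(math.sqrt(sum(x * x for x in small)), 6)) # 1.0
    ```

    A 6x smaller vector like this often costs only a few points of retrieval quality, which is why it pairs well with a cheap model.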
  • emschwartz 1 day ago
    I’ve been using MixedBread, which is a pretty old model at this point. Recently, I tried comparing it to some newer models and was disappointed that the results weren’t dramatically and uniformly better.

    You probably can’t go wrong if you pick a recent one that scores decently well on benchmarks and is at the right price point (or memory requirement) for whatever you’re trying to do.

  • pstorm 1 day ago
    Just FYI: for RAG/similarity search, adding a reranker was a much bigger payoff than switching embedding models.
    • devstein 1 day ago
      What top K do you use for vector search before passing into the reranker?
      • pstorm 1 day ago
        At a minimum, you increase top-k to cast a wider net, then after reranking, take the N you really want. You have to play around with it a bit, but that’s the idea.
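        The two-stage idea can be sketched like this (`rerank_score` is a hypothetical stand-in for a real cross-encoder or reranker API call, not anything from the thread):

        ```python
        import math

        def cosine(a, b):
            """Cosine similarity between two vectors."""
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)

        def retrieve_then_rerank(query_vec, doc_vecs, rerank_score, top_k=50, top_n=5):
            """Cast a wide net with cheap vector search (top_k), then let the
            more expensive reranker pick the final top_n."""
            # Stage 1: candidate set by embedding similarity.
            candidates = sorted(range(len(doc_vecs)),
                                key=lambda i: cosine(query_vec, doc_vecs[i]),
                                reverse=True)[:top_k]
            # Stage 2: rerank only the candidates, keep the best top_n.
            return sorted(candidates, key=rerank_score, reverse=True)[:top_n]

        # Toy usage with 2-d vectors and a dummy scoring function.
        docs = [[1, 0], [0.9, 0.1], [0, 1], [0.5, 0.5]]
        query = [1, 0]
        result = retrieve_then_rerank(query, docs, rerank_score=lambda i: -i,
                                      top_k=3, top_n=2)
        print(result)  # [0, 1]
        ```

        In practice you'd tune `top_k` up until recall stops improving, since the reranker can only fix ordering among candidates it actually sees.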
  • LogicCraft678 1 day ago
    Feels like embeddings are underrated compared to the hype around LLMs, but they're doing great.
    • Alifatisk 1 day ago
      Why do you feel like embeddings are underrated? What is it with embeddings that deserves more attention?
  • preetsojitra 1 day ago
    Meta's Perception Encoder Audio-Visual. It's CLIP-like but has three modalities: audio, video, and text.
  • didgeoridoo 1 day ago
    I’m partial to jina.ai — they have open models for code and prose, all easily runnable locally.
  • sovenyr 1 day ago
    Please check OpenAI's embedding models, especially the small one.
  • jayshah5696 1 day ago
    Embeddings are easy to fine-tune. Try ModernBERT.
  • Yogeshshirsath 1 day ago
    E5 (Microsoft)
  • halvorbuilds 1 day ago
    gemma4