Ask HN: How to boost Gemini transcription accuracy for company names?

I’m using Gemini for speech-to-text and it often misrecognizes company names and acronyms.

Is there any way to use a custom lexicon or vocabulary with Gemini to improve recognition accuracy? If not directly supported, what are practical workarounds people use — e.g. preprocessing prompts, fine-tuning, or combining Gemini with another ASR that supports phrase boosting?

35 points | by bingwu1995 7 days ago

21 comments

  • gearhart 15 hours ago
    We use Whisper (open source) for transcription, which accepts a list of "words to look out for". We populate that with a short list of the names of all the people and companies most likely to be mentioned in the text, then do a spell-checking pass at the end using Gemini with a much longer list, telling it to look out for anything that might be a misspelling.

    It's not perfect, but it's taken it from being an issue that made all our transcripts look terrible, to an issue I no longer think about.

    I imagine just using the second spellchecking pass with Gemini would be almost as effective.

  • tifa2up 15 hours ago
    Don't solve it at the STT level. Get the raw transcription from Gemini, then pass the output to an LLM to fix company names and make other corrections.

    Happy to share more details if helpful.

    • idopmstuff 14 hours ago
      Yeah, I've done it with industry-specific acronyms and this works well. Generate a list of company names and other terms it gets wrong, and give it definitions and any other useful context. For industry jargon, example sentences are good, but that's probably not relevant for company names.

      Feed it that list and the transcript along with a simple prompt along the lines of "Attached is a transcript of a conversation created from an audio file. The model doing the transcription has trouble with company names/industry terms/acronyms/whatever else and will have made errors with those. I have also attached a list of company names/etc. that may have been spoken in the transcribed audio. Please review the transcription, and output a corrected version, along with a list of all corrections that you made. The list of corrections should include the original version of the word that you fixed, what you updated it to, and where it is in the document." If it's getting things wrong, you can also ask it to give an explanation of why it made each change that it did and use that to iterate on your prompt and the context you're giving it with your list of words.
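A minimal Python sketch of the second-pass correction described above. The prompt wording paraphrases the comment, and the term list and transcript are made-up examples, not a tested recipe:

```python
# Build the correction prompt from a transcript and a known-terms list.
# The phrasing follows the comment above; the terms are illustrative.
def build_correction_prompt(transcript: str, terms: list[str]) -> str:
    term_block = "\n".join(f"- {t}" for t in terms)
    return (
        "Attached is a transcript of a conversation created from an audio "
        "file. The transcribing model has trouble with company names, "
        "industry terms, and acronyms, and will have made errors with "
        "those. Below is a list of terms that may have been spoken. "
        "Please review the transcript and output a corrected version, "
        "along with a list of all corrections you made (original word, "
        "replacement, and where it is in the document).\n\n"
        f"Known terms:\n{term_block}\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = build_correction_prompt(
    "We met with Accme Corp about the rollout.",
    ["Acme Corp", "NVIDIA", "S3"],
)
# Send `prompt` to the LLM of your choice (e.g. Gemini) as a text request.
```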

      • dotancohen 10 hours ago
        Which specific model do you use?
    • remus 15 hours ago
      I've had some luck with this in other contexts. Get the initial transcript from STT (e.g. Whisper), then feed that into Gemini with a prompt giving it as much extra context as possible. For example: "This is a transcript from a YouTube video. It's a conversation between x people, where they talk about y and z. Please clean up the transcript, paying particular attention to company names and acronyms."
      • flyinglizard 14 hours ago
        I've done the same, it works very well.
  • meerab 13 hours ago
    I use a two-pass approach - first pass with ASR (OpenAI Whisper) and second pass with an LLM. I ask users to provide context upfront and use that as the "initial_prompt" parameter in Whisper: https://github.com/openai/whisper/discussions/963#discussion...

    Gemini might have similar capabilities for custom vocabulary, though I'm not certain about their specific implementation. The two-pass ASR+LLM approach could work with Gemini's output as well.
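For the Whisper side of that two-pass approach, a minimal sketch of the `initial_prompt` hook from the openai-whisper package; the model size, file name, and glossary are placeholder assumptions, and the actual transcription call is left commented out since it needs a model download and an audio file:

```python
# Seed Whisper's decoder context with the names it should expect.
def vocab_prompt(names: list[str]) -> str:
    return "Glossary: " + ", ".join(names)

prompt = vocab_prompt(["Datadog", "Snowflake", "HashiCorp"])

# Requires: pip install openai-whisper
# import whisper
# model = whisper.load_model("base")
# result = model.transcribe("meeting.mp3", initial_prompt=prompt)
# print(result["text"])
```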

  • wanderingmind 5 hours ago
    There was a paper that tried to integrate NER (Named Entity Recognition) with Whisper to handle this kind of situation in one shot; not sure what the current status is.

    [1] https://github.com/aiola-lab/whisper-ner

  • gawi 8 hours ago
    If you are able to isolate the text portion corresponding to the company name, you can compute the similarity (based on the character edit distance - Levenshtein) against every item of a predefined list of companies (and their aliases) and pick the best match.
  • simonw 15 hours ago
    Have you tried feeding it a system prompt with a list of custom vocabulary? I would expect that to work really well.

    "Transcribe this audio. Be careful to spell the following names and acronyms right: list-goes-here"
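A hedged sketch of that idea with the google-generativeai SDK; the model name, file name, and vocabulary are assumptions, and the network calls are commented out:

```python
def transcription_prompt(vocab: list[str]) -> str:
    return (
        "Transcribe this audio. Be careful to spell the following "
        "names and acronyms right: " + ", ".join(vocab)
    )

prompt = transcription_prompt(["Qualtrics", "Zscaler", "SAP S/4HANA"])

# Requires: pip install google-generativeai
# import google.generativeai as genai
# genai.configure(api_key="...")
# audio = genai.upload_file("call.mp3")
# model = genai.GenerativeModel("gemini-1.5-pro")
# print(model.generate_content([prompt, audio]).text)
```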

  • Reubend 15 hours ago
    Any company names or special acronyms should be added to your prompt.
  • mediaman 10 hours ago
    We do this simply by injecting a company-defined list of proper names/terms into the prompt, within <special_terms>, and telling it to use that information to assist with spelling. It works pretty well.
  • rancar2 14 hours ago
    The business edition of Wispr Flow does this well, and includes sharing among teams so you can make sure that the company-wide vocabulary is consistent and well recognized.

    https://wisprflow.ai/business

    • e1g 10 hours ago
      +1 from another happy Wispr Flow power user. I tried 4-5 similar apps and even built one with AssemblyAI, but Wispr is a significant upgrade over the rest for correctly recognizing my accent and jargon. Having the custom vocabulary helps.
  • another_twist 14 hours ago
    Use any proper ASR service that supports custom vocabulary? Transcribe and Deepgram definitely support it, and if you want to go fancy, NeMo with custom vocabulary.

    Are there constraints where you have to use Gemini?

  • gallexme 16 hours ago
    Adding it to the instructions worked well for me with specific terms
  • vayup 14 hours ago
    Something along these lines, as part of the prompt, has worked for me.

        # User-Defined Dictionary
        Always use the following exact terms if they sound similar in the audio:

        ```json
        {{jsonDictionary}}
        ```
  • lysecret 16 hours ago
    I generally found 4o-transcribe to be more performant than Gemini, FYI.
  • alex-skobe 13 hours ago
    We have used Markdown with a list of vocabulary at the end, like:

    Return company name only from dictionary

    #dictionary 1:Apple 2:..

    And then the Vercel AI SDK + a Zod schema + Gemini 2.5 Pro, and it's pretty accurate.

  • semessier 16 hours ago
    Adding to the question: ruling out fine-tuning for practicality, what about injecting names at the embedding level rather than into the context?
  • bbarnett 12 hours ago
    Give it a database backend with lots and lots of facts. Things verified by humans. There, AI 'fixed'.
    • brokensegue 10 hours ago
      I don't get your suggestion. How does the database tie into speech to text?