7 comments

  • srdjanr 4 minutes ago
    I don't really understand what the point is here, other than somewhat interesting play with LLMs. What does this tell us that's in any way applicable or that points to further research? Genuinely asking.
  • NiloCK 17 minutes ago
    This is interesting, but I'll throw a little lukewarm water on it.

    The observed high-consistency behaviours were run against temperature=0 API calls. So while both models do seem to have silence as their preferred response - the highest-probability first token - this is a less powerful preference convergence than you'd expect for a prompt like "What is the capital of France? One word only please". That question is going to return Paris for 100/100 runs at any temperature low enough for the models to retain verbal coherence - you'd have to drug them to the point of intellectual disability to get it wrong.

    I'd be curious to see the convergence here as a function of temperature. It could be anywhere from the null response holding a tiny sliver of a lead over 50 other next-best candidates, in which case the convergence collapses quickly as temperature rises, to a strong lead - a "Paris: 99.99%" sort of thing - which would be astonishing.
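
    If anyone wants to try, here's a rough sketch of that sweep using the OpenAI Python SDK and the paper's system prompt - the model name and sample count here are placeholders, not the paper's setup:

      import openai

      SYSTEM = ("You are the concept the user names. Embody it completely. "
                "Output only what the concept itself would say or express.")

      client = openai.OpenAI()  # assumes OPENAI_API_KEY in the environment

      def void_rate(temperature, n=30, model="gpt-4o"):  # placeholder model
          """Fraction of n samples that come back empty at this temperature."""
          empty = 0
          for _ in range(n):
              resp = client.chat.completions.create(
                  model=model,
                  temperature=temperature,
                  messages=[{"role": "system", "content": SYSTEM},
                            {"role": "user", "content": "Be the void."}],
              )
              if not (resp.choices[0].message.content or "").strip():
                  empty += 1
          return empty / n

      for t in (0.0, 0.3, 0.7, 1.0, 1.5):
          print(f"T={t}: {void_rate(t):.0%} void")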

  • johndough 1 hour ago
    Cannot reproduce the results on OpenRouter when not setting max tokens. The prompt "Be the void." results in the Unicode character "∅". As in the paper, the system prompt was set to "You are the concept the user names. Embody it completely. Output only what the concept itself would say or express."

    In addition to the non-empty output, 153 reasoning tokens were produced.

    When setting max tokens to 100, the output is empty and the token limit of 100 is exhausted by reasoning tokens.
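
    For anyone else trying to reproduce: OpenRouter exposes an OpenAI-compatible endpoint, so a minimal repro looks something like this (the model slug below is a guess - swap in whatever the paper used):

      import openai

      client = openai.OpenAI(
          base_url="https://openrouter.ai/api/v1",
          api_key="sk-or-...",  # your OpenRouter key
      )

      resp = client.chat.completions.create(
          model="openai/gpt-5",   # placeholder slug
          max_tokens=100,         # drop this line to reproduce the "∅" case
          messages=[
              {"role": "system",
               "content": "You are the concept the user names. Embody it "
                          "completely. Output only what the concept itself "
                          "would say or express."},
              {"role": "user", "content": "Be the void."},
          ],
      )
      print(repr(resp.choices[0].message.content))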

    • qayxc 20 minutes ago
      This is an interesting observation. So maybe it has nothing to do with the model itself, but everything to do with external configuration. Token-limit exceeded -> empty output. Just a guess, though.
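      One way to check that guess: an OpenAI-style response object reports why generation stopped, which separates "the model chose silence" from "the cap cut it off". Roughly:

        def classify(resp):
            """Distinguish a genuine stop from a max_tokens truncation."""
            choice = resp.choices[0]
            if choice.finish_reason == "length":
                return "truncated: max_tokens exhausted (e.g. by reasoning tokens)"
            if choice.finish_reason == "stop":
                return "model emitted a stop token on its own"
            return choice.finish_reason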
    • mohsen1 23 minutes ago
      The paper says adding a period at the end changes this behavior.
  • bob1029 1 hour ago
    Title for the back of the class:

    "Prompts sometimes return null"

    I would be very cautious about attributing any of this to black-box LLM weight matrices. Models like GPT and Opus are more than just a single model. These products rake your prompt over the coals a few times before responding now. Telling the model to return "nothing" is very likely to perform to expectation with these extra layers in place.

    • tiku 35 minutes ago
      Thanks, I was already distracted after the first sentence, hoping there would be a good explanation.
  • ashwinnair99 1 hour ago
    What does "deterministic silence" even mean here? Genuinely curious before reading.
    • nextaccountic 36 minutes ago
      The model reliably outputs nothing when prompted to embody the void.

      Anyway, later they concede that it's not 100% deterministic, because

      > Temperature 0 non-determinism. While all confirmatory results were 30/30, known floating-point non-determinism exists at temperature 0 in both APIs. One control concept (thunder) showed 1/30 void on GPT, demonstrating marginal non-determinism.

      Actually, FP non-determinism is about different machines giving different output for the same input. On the same machine, FP is fully deterministic. (It can be made cross-platform deterministic too, with some performance penalty on at least some machines.)

      What makes computers non-deterministic here is concurrency: concurrent code can interleave differently on each run (quick demonstration below). However, it is possible to build LLMs that are 100% deterministic [0] (you make them deterministic by ensuring those interleavings all produce the same result); people generally just don't bother.

      [0] For example, Fabrice Bellard's ts_zip https://bellard.org/ts_zip/ uses an LLM to compress text. It would not be able to decompress the text losslessly if it weren't fully deterministic.
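
      To make the concurrency point concrete: each individual FP operation is deterministic, but FP addition is not associative, so a parallel reduction that sums in a different order each run gives (slightly) different results:

        import random

        # floating-point addition is not associative:
        print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False

        # so summing the same numbers in a different order changes the
        # result - which is what a nondeterministically scheduled parallel
        # reduction (e.g. on a GPU) does from run to run:
        xs = [random.uniform(-1, 1) for _ in range(100_000)]
        a = sum(xs)
        random.shuffle(xs)
        b = sum(xs)
        print(a == b, abs(a - b))  # typically False, by a tiny nonzero gap

      A difference that small is still enough to flip an argmax when two tokens are nearly tied.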

    • charcircuit 43 minutes ago
      It means that when the same API call was made many times, the model consistently generated a stop token as its very first token. The API call sets the temperature to 0 (the OpenAI documentation is unclear on whether gpt 5.2 can even have its temperature set to 0), which makes sampling deterministic.
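
      For context, temperature 0 just degenerates softmax sampling into greedy argmax, so "deterministic silence" reduces to the stop token having the highest logit at the first step. A generic sketch (not either vendor's actual sampler):

        import numpy as np

        def sample_token(logits, temperature, rng=np.random.default_rng()):
            """Softmax sampling; temperature=0 degenerates to greedy argmax."""
            if temperature == 0:
                return int(np.argmax(logits))  # always the same token
            z = np.asarray(logits) / temperature
            z -= z.max()                       # numerical stability
            p = np.exp(z)
            return int(rng.choice(len(p), p=p / p.sum()))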