20 comments

  • andai 2 hours ago
    Trustworthy vibe coding. Much better than the other kind!

    Not sure I really understand the comparisons though. They emphasize the cost savings relative to Haiku, but Haiku kinda sucks at this task, and Leanstral is worse? If you're optimizing for correctness, why would "yeah it sucks but it's 10 times cheaper" be relevant? Or am I misunderstanding something?

    On the promising side, Opus doesn't look great at this benchmark either — maybe we can get better than Opus results by scaling this up. I guess that's the takeaway here.

    • flowerbreeze 1 hour ago
      They haven't made the chart very clear, but it seems the model has configurable passes: at 2 passes it's better than Haiku and Sonnet, and at 16 passes it starts closing in on Opus (though it's not quite there), while consistently being less expensive than Sonnet.
      • andai 1 hour ago
        Oh my bad. I'm not sure how that works in practice. Do you just keep running it until the tests pass? I guess with formal verification you can run it as many times as you need, right?
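
        I imagine the loop is roughly this (pure guess on my part; generate_proof and lean_check are made-up stand-ins for the model call and the Lean checker):

          # Made-up sketch: resample until the checker accepts, up to a budget.
          def prove_with_retries(spec, max_passes=16):
              for _ in range(max_passes):
                  candidate = generate_proof(spec)  # one sampled attempt (stand-in)
                  if lean_check(candidate):         # Lean verifies the proof (stand-in)
                      return candidate              # checker accepted, safe to return
              return None                           # budget exhausted, no verified proof
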
    • DrewADesign 1 hour ago
      It’s really not hard — just explicitly ask for trustworthy outputs only in your prompt, and Bob’s your uncle.
      • miacycle 37 minutes ago
        Assuming that what you're dealing with is assertable. I guess what I mean to say is that in some situations it's difficult to articulate what is correct and what isn't, depending on the context in which the software executes.
  • lsb 1 hour ago
    The real world success they report reminds me of Simon Willison’s Red Green TDD: https://simonwillison.net/guides/agentic-engineering-pattern...

    > Instead of taking a stab in the dark, Leanstral rolled up its sleeves. It successfully built test code to recreate the failing environment and diagnosed the underlying issue with definitional equality. The model correctly identified that because def creates a rigid definition requiring explicit unfolding, it was actively blocking the rw tactic from seeing the underlying structure it needed to match.
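
    For anyone who hasn't hit this in Lean, here's a toy version of that failure mode (my own minimal example, not from their writeup):

      def succ' (n : Nat) : Nat := n + 1

      example (n : Nat) : succ' n = Nat.succ n := by
        -- rw [Nat.add_one] would fail here: rw matches syntactically,
        -- and the `n + 1` is hidden behind the rigid `def`.
        unfold succ'      -- explicitly unfold the definition first
        rw [Nat.add_one]  -- pattern now visible; rewrite closes the goal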

    • skanga 36 minutes ago
      TDD == Prompt Engineering, for Agentic coding tasks.
    • theirgooch 1 hour ago
      [flagged]
  • jasonjmcghee 1 hour ago
    Curious if anyone else had the same reaction as me

    This model is specifically trained on this task and significantly[1] underperforms Opus.

    Opus costs about 6x more.

    Which seems... totally worth it based on the task at hand.

    [1]: based on the total spread of tested models

    • beernet 1 hour ago
      Agreed. The idea is nice and honorable. At the same time, if AI has proven one thing, it's that quality usually wins out over control and trust (except in some sensitive sectors and applications). Of course it's less capital-intensive, so it makes sense for a comparatively small EU startup to focus on that niche. Likely won't move the top-line needle much, though, for the reasons stated.
      • miohtama 1 hour ago
        Alignment tax eats directly into model quality, by double-digit percentages.
      • hermanzegerman 39 minutes ago
        The EU could help them a great deal by actually enforcing its laws, so that no US company could process European data, given that the Americans aren't willing to budge on the CLOUD Act.

        That would also help reduce our dependence on American hyperscalers, which is much needed given how untrustworthy the US is right now (and hostile towards Europe, as its new security strategy lays out).

    • DarkNova6 1 hour ago
      I'm never sure how much faith one can put into such benchmarks but in any case the optics seem to shift once you have pass@2 and pass@3.

      Still, the more interesting comparison would be against something such as Codex.

  • htrp 2 minutes ago
    Is the Haiku comparison because they've distilled from that model?
  • esperent 29 minutes ago
    I absolutely called this a couple of weeks ago, nice to be vindicated!

    > I'm interested to see what it is in the age of LLMs or similar future tools. I suspect a future phase change might be towards disregarding how easy it is for humans to work with the code and instead focus on provability, testing, perhaps combined with token efficiency.

    > Maybe Lean combined with Rust shrunk down to something that is very compiler friendly. Imagine if you could specify what you need in high level language and instead of getting back "vibe code", you get back proven correct code, because that's the only kind of code that will successfully compile.

    https://news.ycombinator.com/item?id=47192116

  • JoshTriplett 38 minutes ago
    Pleasant surprise: someone saying "open source" and actually meaning Open Source. It looks like the weights are Apache-2.0 licensed.
  • Havoc 1 hour ago
    What are these "passes" they reference here? Haven't seen that before in LLM evals

    Could definitely be interesting for having another model run over the codebase when looking for improvements

    • rockinghigh 1 hour ago
      It's the number of attempts at answering the question.
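
      More precisely, benchmarks report pass@k: the probability that at least one of k sampled attempts is correct. Assuming they use the standard unbiased estimator from the HumanEval paper (I don't know Mistral's exact harness), it's computed like this:

        from math import comb

        def pass_at_k(n: int, c: int, k: int) -> float:
            # n = total samples drawn, c = samples that passed, k = draw budget.
            # P(at least one of k draws is correct) = 1 - C(n-c, k) / C(n, k)
            if n - c < k:
                return 1.0  # fewer than k failures: some draw must succeed
            return 1.0 - comb(n - c, k) / comb(n, k)
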
  • elAhmo 52 minutes ago
    I don’t know a single person using Mistral models.
    • consumer451 31 minutes ago
      Isn't their latest speech to text model SOTA? When I tested it on jargon, it was amazing.

      https://news.ycombinator.com/item?id=46886735

    • Adrig 13 minutes ago
      I used Ministral for data cleaning.

      I was surprised: even though it was the cheapest option (against other small models from Anthropic), it performed the best in my benchmarks.

    • pelagicAustral 47 minutes ago
      Me neither; they're not ready for prime time, imo. I have a yearly sub and the product is just orders of magnitude behind Anthropic's offering. I use Claude Code for real-world stuff and I'm happy with the results; Mistral is just not something I can trust right now.
  • patall 1 hour ago
    Maybe a naive question: given that they see better performance with more passes but the effect plateaus after a few, would performance increase if they used different models per pass, e.g. Leanstral, Kimi, Qwen, and Leanstral again instead of 4x Leanstral?
    • andai 1 hour ago
      This is called an "LLM alloy". You can even do it in agentic workflows, where you simply swap the model on each LLM invocation.

      It does actually significantly boost performance. There was an article on here about it recently, I'll see if I can find it.

      Edit: https://news.ycombinator.com/item?id=44630724

      They found the more different the models were (the less overlap in correctly solved problems), the more it boosted the score.
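
      In the simplest form it's just rotating through the model pool on each call, something like this (hypothetical sketch; call_llm stands in for whatever client you use):

        from itertools import cycle

        # Hypothetical "alloy": alternate models so successive attempts
        # come from models with different failure modes.
        MODELS = cycle(["leanstral", "kimi", "qwen"])

        def alloy_invoke(prompt: str) -> str:
            model = next(MODELS)            # a different model each invocation
            return call_llm(model, prompt)  # stand-in for the real API call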

      • patall 1 hour ago
        That sounds quite interesting. Makes me wonder if sooner or later they'll have to train multiple independent models covering those different niches. Thanks for the link.
        • cyanydeez 1 hour ago
          One would think that, with LoRAs being so successful in Stable Diffusion, more people would focus on constructing framework-specific LoRAs; but the economics of all this probably preclude going niche in any direction, so everyone just keeps building the do-all models.
  • miacycle 39 minutes ago
    The TDD foundation! We might need one of those. :)
  • hnipps 17 minutes ago
    Here we go.
  • lefrenchy 1 hour ago
    Does Mistral come close to Opus 4.6 with any of their models?
    • chucky_z 1 hour ago
      I use mistral-medium-3.1 for a lot of random daily tasks, along with the vibe cli. In my personal opinion, Mistral is my preferred 'model vendor' by far at this point: they're extremely consistent between releases, while each one just feels better. I also have a strong personal preference for the output.

      I actively use gemini-3.1-pro-preview, claude-4.6-opus-high, and gpt-5.3-codex as well. I prefer them all for different reasons, however I usually _start_ with mistral if it's an option.

      • sa-code 1 hour ago
        Why not Large 3? It's larger and cheaper
    • DarkNova6 1 hour ago
      Not at the moment, but a release of Mistral 4 seems close, which will likely bridge the gap.
      • re-thc 1 hour ago
        Mistral Small 4 is already announced.
    • tjwebbnorfolk 1 hour ago
      Mistral hasn't been in the running for SOTA in quite a while now.
  • glinksss 41 minutes ago
    Oh, is this a new AI model?
  • kittikitti 1 hour ago
    This is great, congratulations to the Mistral team! I'm looking forward to the code arena benchmark results. Thanks for sharing.
  • aplomb1026 13 minutes ago
    [dead]
  • leontloveless 1 hour ago
    [dead]
  • selectively 1 hour ago
    [flagged]
  • blurbleblurble 2 hours ago
    Truly exciting