7 comments

  • babblingfish 1 hour ago
    LLMs on device are the future. They're more secure, they ease the mismatch between inference demand and data-center supply, and they'd use less electricity. It's just a matter of getting the performance good enough. Most users don't need frontier model performance.
    • melvinroest 33 minutes ago
      I have journaled digitally for the last 5 years with this expectation.

      Recently I built a graphRAG app with Qwen 3.5 4b for small tasks like classifying what type of question I am asking or the entity extraction process itself, as graphRAG depends on extracted triplets (entity1, relationship_to, entity2). I used Qwen 3.5 27b for actually answering my questions.

      It works pretty well. I have to be a bit patient but that’s it. So in that particular use case, I would agree.

      I used MLX and my M1 64GB device. I found that MLX definitely works faster when it comes to extracting entities and triplets in batches.
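
      The extraction step can be sketched roughly like this (a hypothetical parsing helper, not my actual code; the one-triplet-per-line "(entity1, relationship_to, entity2)" output format is an assumption about how the small model is prompted):

```python
import re

# Sketch of the triplet-parsing step in a graphRAG pipeline (hypothetical).
# A small local model is prompted to emit one triplet per line in the form
# "(entity1, relationship_to, entity2)"; we then parse its raw text output.

TRIPLET_RE = re.compile(r"\(\s*([^,()]+?)\s*,\s*([^,()]+?)\s*,\s*([^,()]+?)\s*\)")

def parse_triplets(model_output: str) -> list[tuple[str, str, str]]:
    """Extract (entity1, relationship_to, entity2) tuples, skipping noise lines."""
    return [m.groups() for m in TRIPLET_RE.finditer(model_output)]

sample = """\
(Ada Lovelace, collaborated_with, Charles Babbage)
(Charles Babbage, designed, Analytical Engine)
some chatter the model sometimes emits
(Ada Lovelace, wrote_notes_on, Analytical Engine)
"""

print(parse_triplets(sample))
```

      Regex parsing rather than JSON output is a deliberate choice here: small models slip malformed tokens into structured output fairly often, and line-oriented scraping degrades gracefully.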

    • pezgrande 32 minutes ago
      You could argue that the only reason we have good open-weight models is that companies are trying to undermine the big dogs, spending millions to make sure the leaders don't get too far ahead. If the bubble pops, there won't be an incentive to keep doing it.
      • aurareturn 28 minutes ago
        I agree. I can totally see open-source LLMs turning into pay-a-lump-sum-for-the-model products in the future. Many will shut down. Some will turn into closed-source labs.

        When VCs inevitably ask their AI labs to start making money or shut down, those free open-source LLMs will cease to be free.

        Chinese AI labs have to release free open source models because they distill from OpenAI and Anthropic. They will always be behind. Therefore, they can't charge the same prices as OpenAI and Anthropic. Free open source is how they can get attention and how they can stay fairly close to OpenAI and Anthropic. They have to distill because they're banned from Nvidia chips and TSMC.

        Before people tell me Chinese AI labs do use Nvidia chips: there is a huge difference between using older, gimped H100s (sold as the H20) or sneaking around Southeast Asia for Blackwell chips, and being officially allowed to buy millions of Nvidia's latest chips to build massive gigawatt data centers.

      • Eufrat 2 minutes ago
        [dead]
    • AugSun 44 minutes ago
      "Most users don't need frontier model performance": unfortunately, this is not the case.
      • AugSun 25 minutes ago
        ... another user who "don't need frontier model performance" downvoted, LOL. People, why are you so predictable? No wonder you are being replaced by LLMs ...
        • seanhunter 12 minutes ago
          Complaining about downvotes is futile and is also against HN guidelines.
    • gedy 55 minutes ago
      Man, I really hope so. As much as I like Claude Code, I hate the company paying for it and tracking your usage, the bullshit management control, etc. I feel like I'm training my replacement. Things feel like they are tightening rather than giving us more power and freedom.

      On device, I would gladly pay for good hardware - it's my machine and I'm using it as I see fit, like an IDE.

      • aurareturn 38 minutes ago
        By the time local LLMs get good enough to be a delight to use, cloud LLMs will have gotten so much smarter that you'll still use them for stuff that needs more intelligence.
        • gedy 21 minutes ago
          True, but I'm already producing code/features faster than the company knows what to do with (even though every company says "omg we need this yesterday", etc.). Even coding before AI, it was basically the same.

          Code tools that free up my time are very nice.

    • aurareturn 45 minutes ago
      Local isn't going to replace cloud LLMs, since cloud LLMs will always be faster in throughput and smarter. Cloud and local LLMs will grow together, not replace each other.

      I'm not convinced that local LLMs use less electricity either. Per token, at the same level of intelligence, cloud LLMs should run circles around local LLMs in efficiency. If they don't, what are we paying hundreds of billions of dollars for?

      I think local LLMs will continue to grow, and there will be a "ChatGPT" moment for them when good-enough models meet good-enough hardware. We're not there yet though.

      Note, this is why I'm big on investing in chip manufacturing companies. Not only are they completely maxed out due to cloud LLMs, but soon they will be doubly maxed out, having to replace local computer chips with ones suited to AI inference. This is a massive transition and will fuel another chip manufacturing boom.

      • virtue3 21 minutes ago
        We are 100% there already. In browser.

        The WebGPU model in my browser on my M4 Pro MacBook was as good as ChatGPT 3.5 and doing 80+ tokens/s.

        Local is here.

      • AugSun 30 minutes ago
        Looking at the downvotes, I feel good about the SDE future in 3-5 years. We will have a swamp of "vibe-experts" who won't be able to pay 100K a month for CC. Meanwhile, people who still remember how to code in Vim will (slowly) get back to pre-COVID TC levels.
        • QuantumNomad_ 11 minutes ago
          What is CC and TC? I have not heard these abbreviations (except for CC to mean credit card or carbon copy, neither of which is what I think you mean here).
          • Ericson2314 3 minutes ago
            I figured it out from context clues

            CC: Claude Code

            TC: total comp(ensation)

  • LuxBennu 48 minutes ago
    Already running Qwen 70B 4-bit on an M2 Max 96GB through llama.cpp, and it's pretty solid for day-to-day stuff. The MLX switch is interesting because Ollama was basically shelling out to llama.cpp on Mac before, so native MLX should mean better memory handling on Apple silicon. Curious to see how it compares on the bigger models vs the GGUF path.
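
    A back-of-envelope for why a 70B model at 4-bit fits in 96GB of unified memory (ballpark only; the ~4.5 effective bits/weight is my assumption to account for quantization scale factors, and KV cache is ignored):

```python
# Rough weight-memory estimate for a dense 70B-parameter model (a sketch;
# real GGUF quants carry per-block scale factors and keep some tensors at
# higher precision, so ~4.5 bits/weight is an assumed effective rate).

def weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

fp16 = weight_gib(70, 16.0)  # ~130 GiB: does not fit in 96 GB
q4 = weight_gib(70, 4.5)     # ~37 GiB: fits with room left for context

print(f"FP16: {fp16:.0f} GiB, ~4-bit: {q4:.0f} GiB")
```
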
  • codelion 59 minutes ago
    How does it compare to some of the newer mlx inference engines like optiq that support turboquantization - https://mlx-optiq.pages.dev/
  • AugSun 46 minutes ago
    "We can run your dumbed down models faster":

    "The use of NVFP4 results in a 3.5x reduction in model memory footprint relative to FP16 and a 1.8x reduction compared to FP8, while maintaining model accuracy with less than 1% degradation on key language modeling tasks for some models."
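
    The two quoted reductions can be cross-checked against each other: each implies an effective bits-per-weight, and both land around 4.4-4.6, consistent with a nominally 4-bit format plus shared scale factors (my sanity-check sketch, not from the quoted source):

```python
# Cross-checking the quoted NVFP4 figures: each stated reduction implies an
# effective bits-per-weight, and the two should roughly agree if the claims
# are self-consistent.

FP16_BITS = 16.0
FP8_BITS = 8.0

bits_implied_by_fp16_claim = FP16_BITS / 3.5  # "3.5x reduction vs FP16"
bits_implied_by_fp8_claim = FP8_BITS / 1.8    # "1.8x reduction vs FP8"

print(round(bits_implied_by_fp16_claim, 2), round(bits_implied_by_fp8_claim, 2))
```
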

  • mfa1999 8 minutes ago
    How does this compare to llama.cpp in terms of performance?
  • brcmthrowaway 19 minutes ago
    What is the difference between Ollama, llama.cpp, ggml and gguf?
    • benob 1 minute ago
      Ollama is a user-friendly UI for LLM inference engines. It is powered by llama.cpp (or a fork of it), which is more power-user oriented and requires command-line wrangling. ggml is the math library behind llama.cpp, and GGUF is the associated file format used for storing LLM weights.
    • xiconfjs 15 minutes ago
      Ollama on macOS is a one-click solution with stable one-click updates. Happy so far. But MLX support was the only missing piece for me.
  • dial9-1 58 minutes ago
    Still waiting for the day I can comfortably run Claude Code with local LLMs on macOS with only 16GB of RAM.
    • gedy 54 minutes ago
      How close is this? It says it needs 32GB min?
      • HDBaseT 39 minutes ago
        You can run Qwen3.5-35B-A3B on 32GB of RAM, sure, although "Claude Code" performance, by which I assume he means Sonnet- or Opus-level models in 2026, is likely a few years away from being runnable locally (with reasonable hardware).
        • Foobar8568 23 minutes ago
          I fully agree. I run that one at Q4 on my MBP, and the performance (including quality of response) is a letdown.

          I am wondering how people rave so much about local "small device" LLMs vs what Codex or Claude Code are capable of.

          Sadly there is too much hype around local LLMs; they look great for 5-minute tests and that's it.