Train Your Own LLM from Scratch

(github.com)

158 points | by kristianpaul 3 hours ago

10 comments

  • JoeDaDude 23 minutes ago
    Coincidentally, I just started on Build a Large Language Model (From Scratch), a repo/book/course by Sebastian Raschka [0][1][2]. Maybe it's a good problem to have, having to decide which learning resource to use.

    [0] https://github.com/rasbt/LLMs-from-scratch

    [1] https://www.manning.com/books/build-a-large-language-model-f...

    [2] https://magazine.sebastianraschka.com/p/coding-llms-from-the...

  • kriro 11 minutes ago
    I did it back in the day, when fast.ai was relatively new, with ULMFiT. This must have been when BERT was SOTA. The architecture lets you train a base language model and then specialize it with a head (see the sketch below). I used all of Wikipedia for the base and then some GBs of tweets I had collected through the firehose. I had access to a lab with 20 game-dev computers, which must have had roughly RTX 2080s. One training cycle took about half a day for the tokenized Wikipedia, so I hyperparameter-tuned by running a different setting on each computer and then moving on with the winner as the starting point for the next day. It was always fun to come to work the next morning and check the results.

    The engineering was horrible and very ad hoc, but I learned a lot. Results were OK-ish (I classified tweets), but it gave me a good perspective on the sheer GPU power (and engineering challenges) one would need to do this seriously. I didn't fully grasp the potential of generating output, but I spent quite some time chuckling at generated tweets (I was just curious to try it).
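
    A minimal sketch of that two-stage base+head pattern, using the current fastai v2 API rather than whatever I used back then (the file name and column names are placeholders, not from my setup):

      from fastai.text.all import *
      import pandas as pd

      # Hypothetical corpus: a CSV with 'text' and 'label' columns.
      df = pd.read_csv("tweets.csv")

      # Stage 1 (the "base"): fit an AWD_LSTM language model on the raw text.
      dls_lm = TextDataLoaders.from_df(df, text_col="text", is_lm=True)
      learn_lm = language_model_learner(dls_lm, AWD_LSTM, metrics=Perplexity())
      learn_lm.fit_one_cycle(1, 1e-2)
      learn_lm.save_encoder("base_encoder")

      # Stage 2 (the "head"): reuse the trained encoder under a classifier.
      dls_clf = TextDataLoaders.from_df(df, text_col="text", label_col="label",
                                        text_vocab=dls_lm.vocab)
      learn_clf = text_classifier_learner(dls_clf, AWD_LSTM, metrics=accuracy)
      learn_clf.load_encoder("base_encoder")
      learn_clf.fine_tune(1, 1e-2)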

  • jvican 2 hours ago
    If you're interested in this resource, I highly recommend checking out Stanford's CS336 class. It covers the same curriculum in much more depth, and introduces a lot of theoretical aspects (scaling laws, intuitions) and systems thinking (kernel optimization/profiling). For that, you have to do the assignments, of course... https://cs336.stanford.edu/
  • antirez 25 minutes ago
    Context: the author is one of the MLX developers and a skilled ML researcher.
  • NSUserDefaults 1 hour ago
    Been doing it since the day I was born. The beginnings were hard but I’m getting there.
  • ofsen 55 minutes ago
    This looks like an exact copy of Andrej Karpathy's video (https://youtu.be/kCc8FmEb1nY), but in written form. Am I wrong?
  • steveharing1 26 minutes ago
    The documentation is helpful enough to get started.
  • hiroakiaizawa 1 hour ago
    Nice. What scale does this realistically reach on a single machine?
    • lynx97 59 minutes ago
      Model: 36L/36H/576D, 144.2M params.

      Runs on a Blackwell 6000 Max-Q using 86 GB of VRAM; training supposedly takes 3h40m.
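
      A back-of-the-envelope check on that parameter count (the block structure and vocabulary here are my assumptions, not taken from the repo):

        # Rough parameter count for the quoted 36L/36H/576D config, assuming
        # GPT-style blocks (4 attention projections, 4x MLP expansion) and
        # ignoring biases and layer norms.
        n_layers, d_model = 36, 576

        attn_params  = 4 * d_model * d_model        # Wq, Wk, Wv, Wo
        mlp_params   = 2 * (d_model * 4 * d_model)  # up- and down-projection
        block_params = attn_params + mlp_params     # = 12 * d_model**2

        core = n_layers * block_params
        print(f"transformer blocks: {core / 1e6:.1f}M")  # ~143.3M

        # Embeddings add vocab_size * d_model on top; the quoted 144.2M total
        # implies a small vocabulary (~1.5k tokens, roughly character-level).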

  • iamnotarobotman 2 hours ago
    This looks great as a first introduction to training LLMs, and it looks simple enough to try locally. Great job!
  • baalimago 2 hours ago
    Train your LM from scratch*

    I doubt you have a machine big enough to make it "Large".

    • mips_avatar 1 hour ago
      You can fully train a 1.6B model on a single 3090. That's a reasonably big model.
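
      A quick sketch of the memory budget behind that claim (the precision and optimizer choices below are my assumptions, not stated above):

        # Training-memory budget for a 1.6B-parameter model on a 24 GiB 3090.
        params = 1.6e9

        # Naive fp32 + Adam: 4 (weights) + 4 (grads) + 8 (Adam m, v) bytes/param.
        print(f"fp32 Adam: {params * 16 / 2**30:.1f} GiB")
        # ~23.8 GiB: fills the card before activations are even counted.

        # bf16 weights/grads + 8-bit optimizer states: 2 + 2 + 2 bytes/param
        # (omits fp32 master weights; assumes gradient checkpointing keeps
        # activation memory modest).
        print(f"bf16 + 8-bit Adam: {params * 6 / 2**30:.1f} GiB")
        # ~8.9 GiB, which leaves headroom for activations and batches.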
    • nucleardog 1 hour ago
      Hey now! I've got a half terabyte of RAM at my disposal! I mean, it's DDR4 but... it's RAM!

      And it's paired with 48 processor cores! I mean, they don't even support AVX512 but they can do math!

      I could totally train an LLM! Or at least my family could... might need my kid to pick up and carry on the project.

      But in all seriousness... you either missed the point, are being needlessly pedantic, or are... wrong?

      This is about learning concepts, and the rest of this is mostly moot.

      On the pedantic-or-wrong notes: what is the documented cutoff for a "large" language model? GPT-2 was, and still is, described as a "large" language model, and it had 1.5B parameters. These days you can get a consumer GPU capable of training that for about $400.

      • Malcolmlisk 33 minutes ago
        Then rewrite the title and call it "learn how to build a non-usable LLM from scratch".
        • improbableinf 16 minutes ago
          Opus 4.7 is unusable for the tasks I have, but it's still considered an LLM.

          And no one is stopping anyone from tweaking a few parameters in this repo to go above 10M parameters.