• SamotsvetyVIA [any]@hexbear.net · 21 days ago

    They’ve had distills before this; a more accurate title would be “Newest DeepSeek R1 distill runs on a single GPU, like all the previous ones”.

    It’s also not accurate to say that a Qwen3 distill is the same as the DeepSeek R1 running in the datacenter - the full R1 is 671B parameters, so it’s still roughly 84x larger than the 8B Qwen3 distill.

    What stands out about DeepSeek-R1-0528-Qwen3-8B is that it only requires a GPU with 40GB to 80GB of RAM to run

    This is just inaccurate. It runs in 16GB of VRAM… because, you know, 8B parameters x 2 bytes per parameter (at FP16/BF16 precision) = 16x10^9 bytes = 16GB…
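
    A quick sketch of that arithmetic (my own illustration, not from the article; the bytes-per-parameter figures are the standard ones for each precision), which also shows why the full 671B R1 is a different beast entirely:

    ```python
    # Back-of-the-envelope weight memory: parameter count x bytes per parameter.
    # Ignores KV cache, activations, and framework overhead, so real usage runs
    # a bit higher -- but nowhere near the article's 40GB-80GB claim for 8B.
    BYTES_PER_PARAM = {"fp16/bf16": 2.0, "int8": 1.0, "int4": 0.5}

    def weight_gb(params_billions: float, precision: str) -> float:
        """Approximate GB of memory needed just to hold the weights."""
        # billions of params x bytes/param: the factors of 1e9 cancel, leaving GB.
        return params_billions * BYTES_PER_PARAM[precision]

    for precision in BYTES_PER_PARAM:
        print(f"Qwen3-8B distill @ {precision}: {weight_gb(8, precision):6.1f} GB")
    print(f"Full DeepSeek R1 (671B) @ fp16/bf16: {weight_gb(671, 'fp16/bf16'):6.1f} GB")
    # -> 16.0 / 8.0 / 4.0 GB for the distill; ~1342 GB for the full R1.
    ```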

    • TheVelvetGentleman [he/him]@hexbear.net · 21 days ago

      It’s also just not the same thing at all. The distillations are not even remotely close to the 600B+ parameters of the parent model. You can’t run DeepSeek on your GPU at home. It’s the equivalent of buying your kid a Power Wheels car. They’re not driving a car. They can’t drive a car at home.

      Edit - I do run the distillations at home, though, and they’re fine for what I use them for. I’m not OpenAI-pilled. I just hate when people say that you can run the same software as a massive datacenter privately.

    • freagle@lemmygrad.ml · 21 days ago

      It’s gotta be a “distillate”, right? Not a “distill”. Verbing weirds language.

      • SamotsvetyVIA [any]@hexbear.net · 21 days ago

        Most people say “distilled model”, though “distillate” sounds right as well. The process is called distillation. I’ve just fried my brain on the local LLM subreddit because I was trying to get the transformers library working, which is probably why I phrased it like that.
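
        In case it saves anyone else the headache, here’s a minimal sketch of loading the 8B distill with transformers - the model id is the one DeepSeek published on Hugging Face, but the dtype and device settings are just my assumptions, so adjust for your card:

        ```python
        # Minimal local inference sketch for the 8B distill. Assumes a GPU with
        # ~16GB+ of VRAM and the `accelerate` package for device_map="auto".
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"

        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            torch_dtype=torch.bfloat16,  # 2 bytes/param -> ~16GB of weights
            device_map="auto",           # places layers on whatever devices fit
        )

        messages = [{"role": "user", "content": "Explain model distillation in one paragraph."}]
        input_ids = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)

        output = model.generate(input_ids, max_new_tokens=512)
        print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
        ```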