I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”

This article, in contrast, quotes the folks building the next generation of AI, and they're saying the same thing.

  • 31337@sh.itjust.works · 10 days ago

    Larger models train faster (they need less compute), for reasons that aren't fully understood. These large models can then be used as teachers to train smaller models more efficiently. I’ve used Qwen 14B (14 billion parameters, quantized to 6-bit integers), and it’s not much worse than these very large models.
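
    A rough sketch of that teacher-to-student idea (knowledge distillation), assuming a PyTorch-style setup; the function and tensor names here are illustrative, not anything from the thread:

    ```python
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both output distributions with a temperature, then nudge the
        # student's predictions toward the teacher's via KL divergence.
        student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * temperature ** 2

    # Toy example: a batch of 4 positions over a vocabulary of 8 tokens.
    student_logits = torch.randn(4, 8, requires_grad=True)
    teacher_logits = torch.randn(4, 8)  # would come from the frozen large model
    distillation_loss(student_logits, teacher_logits).backward()
    ```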

    Lately, I’ve been thinking of LLMs as lossy text/idea compression with content-addressable memory. And 10.5GB is pretty good compression for all the “knowledge” they seem to retain.
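
    That 10.5 GB figure is just straightforward arithmetic on the quantized weights (ignoring the small overhead quantization formats add):

    ```python
    # 14 billion parameters stored at 6 bits each, converted to gigabytes.
    params = 14e9
    bits_per_param = 6
    size_gb = params * bits_per_param / 8 / 1e9
    print(f"~{size_gb:.1f} GB")  # ~10.5 GB
    ```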

    • brucethemoose@lemmy.world · 9 days ago

      I don’t think Qwen was trained with distillation, was it?

      It would be awesome if it was.

      Also, you should try Supernova Medius, which is Qwen 14B with some “distillation” from other models.