The edge is where it’s at
Interview with Nick about the post:
https://www.youtube.com/watch?v=a5rLzNxRjEQ&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20251107-nicholas-weaver-the-futile-future-of-the-gigawatt-datacenter - podcast
time: 26 min 53 sec
So if a company does want to use an LLM, it is best done using local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for ML workloads.
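For concreteness, here's a minimal sketch of what "using local servers" can look like in practice. It assumes an Ollama server running on the box with a model already pulled; the model name is a placeholder of mine, not something from the post:

```python
# Query a locally hosted LLM over Ollama's HTTP API.
# Assumes `ollama serve` is running on the default port and the model
# has been pulled; "llama3.1:70b" is a placeholder model name.
import requests

def ask_local_llm(prompt: str, model: str = "llama3.1:70b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]  # the generated text

print(ask_local_llm("Summarize why local inference keeps data on-site."))
```

The point being that nothing leaves the building: the request goes to localhost, not to a hyperscaler.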
Eh, local LLMs don't really scale: you can't do much better than one user per machine unless usage is really sparse, and buying everyone a top-of-the-line GPU only works if they aren't currently on work laptops and VMs.
Spark-type machines will do better eventually, but for now they're supposedly geared more towards training than inference: it says here that running a 70b model on one returns around one word (about three tokens) per second, which is a snail's pace.
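That figure passes a back-of-envelope check: single-stream decode on a dense model is memory-bandwidth bound, so tokens per second is roughly bandwidth divided by the bytes of weights read per token. The ~273 GB/s figure below is the commonly cited LPDDR5X bandwidth for the DGX Spark, an assumption on my part rather than something from the post:

```python
# Rough ceiling on single-stream decode speed: each generated token
# reads every weight once, so tok/s <= bandwidth / size_of_weights.
BANDWIDTH_GBS = 273  # assumed DGX Spark unified-memory bandwidth, GB/s
PARAMS_B = 70        # 70B-parameter dense model

for name, bytes_per_param in [("FP16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    weights_gb = PARAMS_B * bytes_per_param
    ceiling = BANDWIDTH_GBS / weights_gb  # theoretical tokens/sec upper bound
    print(f"{name}: ~{weights_gb:.0f} GB of weights -> at most {ceiling:.1f} tok/s")
# The 8-bit ceiling comes out around 4 tok/s, so ~3 tok/s observed
# is in the right ballpark.
```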
yeah. LLMs are fat. Lesser ML works great tho.
Lard Language Model
There is no future in these datacenters. They will be outdated by the time they are finished, and the most expensive part wears out quickly and has to be replaced. Literally DOA.
Let me see if I got this right: because use cases for LLMs have to be resilient to hallucinations, large data centers will lose out to smaller, cheaper deployments, at some cost in accuracy. And once you have a business that is categorizing relevant data, you will gradually move away from black-box LLMs and towards ML on the edge to cut costs, again at some cost in accuracy.
I read it this way: because LLMs inevitably hallucinate, no matter how resource-intensive the LLM is, it makes economic sense to deploy smaller, cheaper LLMs that hallucinate a little more. The tradeoff isn't "hallucinations vs no hallucinations", it's "more hallucinations vs fewer hallucinations", and the slight gain in accuracy from the big data center isn't worth the huge expense of running one.
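A toy illustration of that argument (every number below is made up, purely to make the shape of the tradeoff concrete): if both options hallucinate sometimes, what matters is the cost per correct answer, not the cost per query:

```python
# Hypothetical numbers only: compare cost per *correct* answer when
# both options hallucinate at some rate.
def cost_per_correct_answer(cost_per_query: float, accuracy: float) -> float:
    # Expected spend to get one usable answer, assuming failures can
    # be detected and retried (a simplifying assumption).
    return cost_per_query / accuracy

big_dc = cost_per_correct_answer(cost_per_query=0.050, accuracy=0.90)
local = cost_per_correct_answer(cost_per_query=0.002, accuracy=0.80)
print(f"big datacenter: ${big_dc:.4f} per correct answer")
print(f"local model:    ${local:.4f} per correct answer")
# With these made-up numbers the local model wins by ~20x even though
# it hallucinates more, which is the "slight accuracy gain isn't worth
# the huge expense" point.
```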
A++ episode, you’re a great interviewer
Like how this is an explainer for laymen but still just casually drops an "on the edge" reference, the meaning of which might not be clear to laymen. (The context explains it, however, so it isn't bad; I'm just noting how much jargon we all use.)
I hate “the edge”. Such a vague expression.
Think that is in part intentional, so people don't start squabbling over what does and doesn't count as "the edge" in edge cases; it also depends a lot on the setup of the organization/people you are talking about. But yeah, it is badly defined, which is also why I noticed it.