• 0 Posts
  • 9 Comments
Joined 24 days ago
Cake day: January 12th, 2025


  • It’s not about hampering proliferation, it’s about breaking the hype bubble. Some of the Western AI companies have been pitching for hundreds of billions in federal dollars to be devoted to building new giant AI models and the gigawatts of power needed to run them. They’ve been pitching a Manhattan Project-scale infrastructure build-out to facilitate AI, all in the name of national security.

    You can only justify that kind of federal intervention if it’s clear there’s no other way. And this story here shows that the existing AI models aren’t operating anywhere near where they could be in terms of efficiency. Before we pour hundreds of billions into giant data centers and energy generation, it would behoove us to first extract all the gains we can from increased model efficiency. The big players like OpenAI haven’t even been pushing efficiency hard. They’ve just been vacuuming up ever greater amounts of money to solve the problem the big, stupid way: just build really huge data centers running big, inefficient models.




  • There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far it can go, and what it will require, is what the market is gambling on.

    I would disagree on that. There are a few niche uses, but OpenAI can’t even make a profit charging $200/month.

    The uses seem pretty minimal as far as I’ve seen. Sure, AI has a lot of applications in terms of data processing, but the big generic LLMs propping up companies like OpenAI? Those seem to have no utility beyond slop generation.

    Ultimately the market value of any work produced by a generic LLM is going to be zero.


  • How to address superintelligence, if that is actually something we realistically face:

    1. Make creating an unlicensed AI above a certain capability threshold a capital offense.

    2. Regulate the field of artificial intelligence as heavily as we do nuclear science and nuclear weapons development.

    3. Have strict international treaties on model size and capability limitations.

    4. Have inspection regimes in place to allow international monitoring of any electricity usage over a certain threshold.

    5. Use satellites to track anomalous large power use across the globe (monitored via waste heat) and thoroughly investigate any large unexplained energy use.

    6. Target the fabs. High powered chips should be licensed and tracked like nuclear materials.

    7. Make clear that a nuclear first strike is a perfectly acceptable response to a nation state trying to create AGI.

    Anyone who says this technology simply cannot be regulated is a fool. We’re talking about models that require hundreds of megawatts or more to run, and giant data centers full of millions of dollars’ worth of chips. There are only a handful of companies on the planet producing the hardware for these systems. The idea that we can’t regulate such a thing is ridiculous.

    I’m sorry, but I put the survival of the human race above your silly science project. If I have to put every person on this planet with a degree in computer science into a hole in the ground to save the human race, that is a sacrifice I am willing to make. Hell, I’ll go full Dune and outlaw computers altogether, go back to pen and paper for everything, before I condone AGI.

    We can’t control this technology? Balderdash. It’s created by human beings. And human beings can be killed.

    So, how do we deal with ASI? You put anyone trying to create it deep in the ground. This is self-defense at a species level. Sacrificing a few thousand madmen who think they’re going to summon a benevolent god to serve them is simple self-defense. It’s OK to kill cultists who are trying to summon a demon.