It’s not always easy to distinguish between existentialism and a bad mood.

  • 6 Posts
  • 144 Comments
Joined 3 years ago
Cake day: July 2nd, 2023

  • Usually, you wake up on a lifeless beach that’s adorned with some sort of abandoned marble temple. It’s supposed to be beautiful, but instead it’s really sad. Almost unbearably sad. So much so that you want to get away from it. So you crawl downward into these vents going below the horrible temple, and suddenly it’s like you’re moving through the innards of an incomprehensible machine that’s thudding away, thud, thud, thud. And as you get deeper, the metal sidings are carved with scrawled ominous curses and slurs directed toward you, and you hear the voices, louder than before, and you somehow know these people are in pain because of you. It keeps getting colder. Color drains from the world. And you see the crowd through the slats of the vents: pale and emaciated men, women, and children from centuries to come, all of them pressed together for warmth in some sort of unending cavern. What clothes they have are torn and ragged. Before you know it, their dirty hands and dirty fingernails lurch through the grates, and they’re reaching for you, tearing at your shirt, moaning terrible things about their suffering and how you made it happen, you made it, and you need to stop this now, now, now. And next they’re ripping you apart, limb from limb, and you are joining them in the gray dimness forever.

  • He isn’t even trying with the yellow and orange boxes. What the fuck do “high-D toroidal attractor manifolds” and “6D helical manifolds” have to do with anything? Why are they there? And he really thinks he can get away with nobody closely reading his charts, with the “(???, nothing)” business. Maybe I should throw in that box in my publications and see how that goes.

    It’s from another horseshit analogy that roughly boils down to both neural net inference (specifically when generating end-of-line tokens) and aspects of specific biological components of human perception being somewhat geometrically modellable. I didn’t include the entire context or a link to the substack in the OP because I didn’t care to, but here is the analogy in full:

    spoiler

    The answer was: the AI represents various features of the line breaking process as one-dimensional helical manifolds in a six-dimensional space, then rotates the manifolds in some way that corresponds to multiplying or comparing the numbers that they’re representing. You don’t need to understand what this means, so I’ve relegated my half-hearted attempt to explain it to a footnote¹. From our point of view, what’s important is that this doesn’t look like “LOL, it just sees that the last token was ree and there’s a 12.27% chance of a line break token following ree.” Next-token prediction created this system, but the system itself can involve arbitrary choices about how to represent and manipulate data.

    Human neuron interpretability is even harder than AI neuron interpretability, but probably your thoughts involve something at least as weird as helical manifolds in 6D spaces. I searched the literature for the closest human equivalent to Claude’s weird helical manifolds, and was able to find one team talking about how the entorhinal cells in the hippocampus, which help you track locations in 2D space, use “high-dimensional toroidal attractor manifolds”. You never think about these, and if Claude is conscious, it doesn’t think about its helices either². These are just the sorts of strange hacks that next-token/next-sense-datum prediction algorithms discover to encode complicated concepts onto physical computational substrate.

    re: the bolded part, I like how explicitly cherry-picking neuroscience passes for peak rationalism.
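
    For what it’s worth, the “rotate a manifold to add numbers” trick the quoted substack gestures at does have a well-known toy version: encode a number as an angle on a circle, and rotation then implements addition modulo the period. A minimal sketch of that general idea only (nothing here is Claude’s actual mechanism; `PERIOD` is an arbitrary choice):

```python
import math

PERIOD = 100  # toy modulus; real models appear to use several periods at once

def encode(n):
    # map an integer to a point on a circle (one 1-D slice of a "helix")
    theta = 2 * math.pi * n / PERIOD
    return (math.cos(theta), math.sin(theta))

def rotate(point, n):
    # rotating by n's angle adds n modulo PERIOD, without ever
    # representing the sum as a digit string
    theta = 2 * math.pi * n / PERIOD
    c, s = math.cos(theta), math.sin(theta)
    x, y = point
    return (x * c - y * s, x * s + y * c)

def decode(point):
    # read the angle back off the circle and round to the nearest integer
    theta = math.atan2(point[1], point[0]) % (2 * math.pi)
    return round(theta * PERIOD / (2 * math.pi)) % PERIOD

print(decode(rotate(encode(3), 4)))  # 7
```

    Stacking several such circles with different periods gives the “helix in a higher-dimensional space” picture; the decoding step is the part interpretability work has to reverse-engineer.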



  • I like how even by ACX standards scoot’s posts on AI are pure brain damage

    One level lower down, your brain was shaped by next-sense-datum prediction - partly you learned how to do addition because only the mechanism of addition correctly predicted the next word out of your teacher’s mouth when she said “three plus three is . . . “ (it’s more complicated than this, sorry, but this oversimplification is basically true). But you don’t feel like you’re predicting anything when you’re doing a math problem. You’re just doing good, normal mathematical steps, like reciting “P.E.M.D.A.S.” to yourself and carrying the one.

    The most compelling analogy: this is like expecting humans to be “just survival-and-reproduction machines” because survival and reproduction were the optimization criteria in our evolutionary history. […] This simple analogy is slightly off, because it’s confusing two optimization levels: the outer optimization level (in humans, evolution optimizing for reproduction; in AIs, companies optimizing for profit) with the inner optimization level (in humans, next-sense-datum prediction; in AIs, next-token prediction). But the stochastic parrot people probably haven’t gotten to the point where they learn that humans are next sense-datum predictors, so the evolution/reproduction one above might make a better didactic tool.

    He also threatens an Anti-Stochastic-Parrot FAQ.

    Here’s hoping that, if this happens, Bender et al. enthusiastically point out that it’s coming from a guy whose long-term master plan is to fight evil AI with eugenics. Or, if they’re feeling less charitable, that he uses the threat of evil AI to make eugenics great again.

  • The common clay of the new west:

    transcription

    Twitter post from @BenjaminDEKR

    “OpenClaw is interesting, but will also drain your wallet if you aren’t careful. Last night around midnight I loaded my Anthropic API account with $20, then went to bed. When I woke up, my Anthropic balance was $0. Opus was checking “is it daytime yet?” every 30 minutes, paying $0.75 each time to conclude “no, it’s still night.” Doing literally nothing, OpenClaw spent the entire balance. How? The “Heartbeat” cron job, even though literally the only thing I had going was one silly reminder (“remind me tomorrow to get milk”).”

    Continuation of twitter post

    “1. Sent ~120,000 tokens of context to Opus 4.5
    2. Opus read HEARTBEAT.md, thought about reminders
    3. Replied “HEARTBEAT_OK”
    4. Cost: ~$0.75 per heartbeat (cache writes)

    The damage:

    • Overnight = ~25+ heartbeats
    • 25 × $0.75 = ~$18.75 just from heartbeats alone
    • Plus regular conversation = ~$20 total

    The absurdity: Opus was essentially checking “is it daytime yet?” every 30 minutes, paying $0.75 each time to conclude “no, it’s still night.”

    The problem is:

    1. Heartbeat uses Opus (most expensive model) for a trivial check
    2. Sends the entire conversation context (~120k tokens) each time
    3. Runs every 30 minutes regardless of whether anything needs checking

    That’s $750 a month if this runs, to occasionally remind me stuff? Yeah, no. Not great.”
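
    The tweet’s arithmetic roughly checks out; a back-of-the-envelope sketch, using only the figures quoted in the thread (nothing here is measured):

```python
# All inputs come from the tweet, not from measurement.
COST_PER_HEARTBEAT = 0.75  # dollars: ~120k cached tokens to Opus 4.5
INTERVAL_MIN = 30

heartbeats_per_hour = 60 / INTERVAL_MIN
overnight_hours = 12.5  # consistent with the "25+ heartbeats" claim

overnight_cost = overnight_hours * heartbeats_per_hour * COST_PER_HEARTBEAT
monthly_cost = 30 * 24 * heartbeats_per_hour * COST_PER_HEARTBEAT

print(round(overnight_cost, 2))  # 18.75, matching the tweet
print(round(monthly_cost))       # 1080 at a strict round-the-clock cadence
```

    Run around the clock at a 30-minute cadence the bill would actually be closer to $1,080/month; the tweet’s rounder $750 figure presumably assumes the cron job isn’t firing every single half hour.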



  • I’m planning on using this data to catalog “in the wild” instances of agents resisting shutdown, attempting to acquire resources, and avoiding oversight.

    He’ll probably do this by running an agent that uses a chatbot with the playwright mcp to occasionally scrape the site, then feed that to a second agent that filters the posts for suspect behavior, then to a third that summarizes them into a report, then to a fourth that decides whether the report is worth his time and messages him through his socials. Maybe another agent with db access to log the flagged posts at some point.

    All this will be worth it to no one except the bot vendors.