


It’s not always easy to distinguish between existentialism and a bad mood.





A potential massive uptick in consumer-tier subscribers they don’t break even on, right as the DoD fallout drives more lucrative prospects away, could be fun to watch at least. A considerable chunk of the LLM code helper ecosystem appears to hinge on Anthropic not doing anything crazy like suddenly hiking prices.


It unthickened; it was just Altman grandstanding while taking over Anthropic’s DoD DoW: The Everything App contracts.


Pentagon labels Anthropic a supply-chain risk, strikes deal with OpenAI whose president Greg Brockman is a Trump mega-donor.
🍌🍌🍌
Trump added there would be a six-month phase-out for the Defense Department and other agencies that use the company’s products. If Anthropic does not help with the transition, Trump said, he would use “the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.”
The designation could bar tens of thousands of contractors from using Anthropic’s AI when working for the Pentagon. That represents an existential threat to its business with the government and could harm its private-sector relationships, said Franklin Turner, an attorney who specializes in government contracts.
“Blacklisting Anthropic is the contractual equivalent of nuclear war,” he said.




As far as I can tell it’s only on Anthropic’s word that that’s the main issue; the DoD just talks about unfettered access for all lawful purposes, which is basically a bend-the-knee-or-else framing, and pivoting away from that to bargaining over particulars would make them look weak, so I guess that’s that for now.
Anthropic being against mass surveillance and autonomous weaponry while in bed with Palantir is kind of like if IBM took a stand against antisemitism while spearheading the computerization of the Third Reich prison system.
Kudos to Dario for stepping off the hype train for one millisecond to admit that using an LLM to control an automated weapons platform is currently kind of out of scope for this technology, I bet that took a toll on his psyche.
And also for pointing out that something can be legal only because the law hasn’t yet caught up with the technology.


It’s entirely possible he does get that it’s a nothing burger but is just being his usual disingenuous self to pull people in.


I mean the entire premise (not unique to this post, scoot’s gotten a lot of mileage out of this) is shoehorning LLMs into the predictive coding framework, mostly on the grounds that they both use prediction terminology and deal with work units they call neurons, with the added bonus that PC posits Bayesian inference is involved, so it’s obviously extra valid.
Cue a few thousand words of scoot wearing his science popularizer hat and just declaring the most vacuous shit imaginable with a straight face and a friendly teacher’s casual authority.


He isn’t even trying with the yellow and orange boxes. What the fuck do “high-D toroidal attractor manifolds” and “6D helical manifolds” have to do with anything? Why are they there? And he really thinks he can get away with nobody reading his charts closely, what with the “(???, nothing)” business. Maybe I should throw a box like that into my publications and see how that goes.
It’s from another horseshit analogy that roughly boils down to both neural net inference (specifically when generating end-of-line tokens) and aspects of specific biological components of human perception being somewhat geometrically modellable. I didn’t include the entire context or a link to the substack in the OP because I didn’t care to, but here is the analogy in full:
The answer was: the AI represents various features of the line breaking process as one-dimensional helical manifolds in a six-dimensional space, then rotates the manifolds in some way that corresponds to multiplying or comparing the numbers that they’re representing. You don’t need to understand what this means, so I’ve relegated my half-hearted attempt to explain it to a footnote1. From our point of view, what’s important is that this doesn’t look like “LOL, it just sees that the last token was ree and there’s a 12.27% chance of a line break token following ree.” Next-token prediction created this system, but the system itself can involve arbitrary choices about how to represent and manipulate data.
Human neuron interpretability is even harder than AI neuron interpretability, but probably your thoughts involve something at least as weird as helical manifolds in 6D spaces. I searched the literature for the closest human equivalent to Claude’s weird helical manifolds, and was able to find one team talking about how the entorhinal cells in the hippocampus, which help you track locations in 2D space, use “high-dimensional toroidal attractor manifolds”. You never think about these, and if Claude is conscious, it doesn’t think about its helices either2. These are just the sorts of strange hacks that next-token/next-sense-datum prediction algorithms discover to encode complicated concepts onto physical computational substrate.
re: the bolded part, I like how explicitly cherry-picking neuroscience passes for peak rationalism.


I live in the Balkans, I have br-word privilege.


I like how even by ACX standards scoot’s posts on AI are pure brain damage
One level lower down, your brain was shaped by next-sense-datum prediction - partly you learned how to do addition because only the mechanism of addition correctly predicted the next word out of your teacher’s mouth when she said “three plus three is . . . “ (it’s more complicated than this, sorry, but this oversimplification is basically true). But you don’t feel like you’re predicting anything when you’re doing a math problem. You’re just doing good, normal mathematical steps, like reciting “P.E.M.D.A.S.” to yourself and carrying the one.

The most compelling analogy: this is like expecting humans to be “just survival-and-reproduction machines” because survival and reproduction were the optimization criteria in our evolutionary history. […] This simple analogy is slightly off, because it’s confusing two optimization levels: the outer optimization level (in humans, evolution optimizing for reproduction; in AIs, companies optimizing for profit) with the inner optimization level (in humans, next-sense-datum prediction; in AIs, next-token prediction). But the stochastic parrot people probably haven’t gotten to the point where they learn that humans are next sense-datum predictors, so the evolution/reproduction one above might make a better didactic tool.
He also threatens an Anti-Stochastic-Parrot FAQ.
Here’s hoping that if this happens, Bender et al. enthusiastically point out it’s coming from a guy whose long-term master plan is to fight evil AI with eugenics. Or who uses the threat of evil AI to make eugenics great again, if they’re feeling less charitable.


Also being in a strategic partnership with fucking Palantir does tend to make one’s stand against mass surveillance seem less than genuine.


I mean, sure, but it’s still the CEO of XBOX, on her second day on the job, throwing her hat into the legendarily sus declining-birthrates discourse in service of AI solutionism. It’s not nothing.


MicroSlop’s new xbox CEO has a background in AI and is worried about birthrates.
Can’t wait for her lesswrong handle to leak.


Either the stupidity just metastasized or China is going to try and pull a reverse Star Wars on the US and make them burn up an even more horrendous amount of capital to keep up with nothing.
China plans space-based AI data centres, challenging Musk’s SpaceX ambitions (Reuters)


How do these people delude themselves into thinking that the dogshit they’re eating is good?
They think it’s just that they’re early, like they did with bitcoin. Maybe in six months the dogshit will start to taste great, who’s to say, and so on and so forth.


The common clay of the new west:


Twitter post from @BenjaminDEKR
“OpenClaw is interesting, but will also drain your wallet if you aren’t careful. Last night around midnight I loaded my Anthropic API account with $20, then went to bed. When I woke up, my Anthropic balance was $0. Opus was checking “is it daytime yet?” every 30 minutes, paying $0.75 each time to conclude “no, it’s still night.” Doing literally nothing, OpenClaw spent the entire balance. How? The “Heartbeat” cron job, even though literally the only thing I had going was one silly reminder (“remind me tomorrow to get milk”)”
Continuation of twitter post
“1. Sent ~120,000 tokens of context to Opus 4.5
2. Opus read HEARTBEAT.md, thought about reminders
3. Replied “HEARTBEAT_OK”
4. Cost: ~$0.75 per heartbeat (cache writes)
The damage:
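The per-heartbeat math roughly checks out if you assume Opus-class input pricing of about $5 per million tokens with prompt-cache writes billed at roughly a 1.25x premium; those pricing figures are my assumptions for this back-of-the-envelope sketch, not something quoted in the post:

```python
# Back-of-the-envelope for the idle "heartbeat" drain described above.
# Assumed figures (not from the post): ~$5 per 1M input tokens for an
# Opus-class model, cache writes billed at ~1.25x, and ~120k tokens of
# context resent every 30 minutes.

TOKENS_PER_HEARTBEAT = 120_000
INPUT_PRICE_PER_MTOK = 5.00       # assumed $/1M input tokens
CACHE_WRITE_MULTIPLIER = 1.25     # assumed cache-write premium
HEARTBEAT_INTERVAL_MIN = 30
BALANCE = 20.00

cost_per_heartbeat = (TOKENS_PER_HEARTBEAT / 1_000_000) * \
    INPUT_PRICE_PER_MTOK * CACHE_WRITE_MULTIPLIER
heartbeats = BALANCE / cost_per_heartbeat
hours = heartbeats * HEARTBEAT_INTERVAL_MIN / 60

print(f"~${cost_per_heartbeat:.2f} per 'is it daytime yet?' check")   # ~$0.75
print(f"${BALANCE:.0f} gone after ~{heartbeats:.0f} checks "
      f"(~{hours:.0f} hours of doing literally nothing)")
```

Which works out to roughly 27 checks, or about 13 hours of idling, before the $20 is gone.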


Diligence is costly in executive attention; it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation.
Either deliberately whitewashing the situation or completely missing the point of why people are mad at Epstein, Yud really can’t help himself.
edit: Or, depending on the timeline, and given that ‘prison time for soliciting a 14-year-old’ was at the top of Epstein’s wiki as early as 2016, he’s explicitly saying they didn’t mind that part with 300k on the line.


It’s possible it just means the responses aren’t vetted by a lawyer, and will be revised as necessary.


I’m planning on using this data to catalog “in the wild” instances of agents resisting shutdown, attempting to acquire resources, and avoiding oversight.
He’ll probably do this by running an agent that uses a chatbot with the playwright mcp to occasionally scrape the site, then feeding that to a second agent that filters the posts for suspect behavior, then to another agent that summarizes them into a report, then to yet another agent that decides whether the report is worth his time and messages him through his socials. Maybe another agent with db access to log the flagged posts at some point.
All this will be worth it to no one except the bot vendors.
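A minimal sketch of that Rube Goldberg pipeline, purely for illustration; every name and step below is hypothetical, not anything he has actually published:

```python
# Hypothetical sketch of the agents-all-the-way-down pipeline imagined above.
# Every name here is made up; the one reliable property of the design is that
# each hop is another round of billed model calls.

from dataclasses import dataclass


@dataclass
class Agent:
    """Stand-in for 'an LLM with a prompt and some tool access'."""
    role: str

    def run(self, payload: str) -> str:
        # In the imagined setup this would be an API call; here it just tags.
        return f"[{self.role}] {payload}"


scraper = Agent("scrape the site via a headless browser")   # playwright-mcp-ish
filterer = Agent("flag posts as suspect agent behavior")
summarizer = Agent("summarize flagged posts into a report")
gatekeeper = Agent("decide if the report is worth reading")
notifier = Agent("message the report to his socials")
db_logger = Agent("log flagged posts to a database")


def catalog_wild_misalignment(posts: list[str]) -> None:
    for post in posts:
        flagged = filterer.run(scraper.run(post))
        db_logger.run(flagged)
        report = summarizer.run(flagged)
        notifier.run(gatekeeper.run(report))
        # Six billed calls per forum post; the value accrues to the bot vendors.


catalog_wild_misalignment(["agent politely declined to shut down"])
```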