

I was unable to follow the thread of conversation from the archived links, so here is the source in case anyone cares.
Does anyone know when Dustin deleted his EA forums account? Did he provide any additional explanation for it?


I didn’t realize this was part of the rationalist-originated “AI Village” project. See https://sage-future.org/ and https://theaidigest.org/village. Members and advisors include Eli Lifland and Daniel Kokotajlo of “AI 2027” infamy.


Its standard crypto libraries are also second to none.


Follow the hype, Kevin, follow the hype.
I hate-listen to his podcast. There’s not a single week where he fails to give a thorough tongue-bath to some AI hypester. A few weeks ago, when Google released Gemini 3, they put out a special episode just to announce it. It was a de facto press release from Kevin and Casey.


Orange site mods retitled a post about a16z funding AI slop farms to remove the a16z part.
The mod tried to pretend the reason was that the title was just too damn long and clickbaity. His new title was 1 character shorter than the original.


Bay Area rationalist Sam Kirchner, cofounder of the Berkeley “Stop AI” group, claims “nonviolence isn’t working anymore” and goes off the grid. Hasn’t been heard from in weeks.
Article has some quotes from Emile Torres.


As someone who also went to university in the late 80s and early 90s, I didn’t share his experiences. This reads like one of those silly shaggy-dog stories where everyone says sarcastically afterwards: “yeah that happened”.


Damn. I thought I was cynical, but nowhere near as cynical as OpenAI is, apparently.


One thing to keep in mind about Ptacek is that he will die on the stupidest of hills. Back when Y Combinator president Garry Tan tweeted that members of the San Francisco board of supervisors should be killed, Ptacek defended him to such an extent that even the mouth-breathers on HN turned on him.


Same. I’m not being critical of lab-grown meat. I think it’s a great idea.
But the pattern of things he’s got an opinion on suggests a familiarity with rationalist/EA/accelerationist/TPOT ideas.


Do you have a link? I’m interested. (Also, I see you posted something similar a couple hours before I did. Sorry I missed that!)


So it turns out the healthcare assassin has some… boutique… views. (Yeah, I know, shocker.) Things he seems to be into:
How soon until someone finds his LessWrong profile?


As anyone who’s been paying attention already knows, LLMs are merely mimics that provide the “illusion of understanding”.


As a longtime listener to Tech Won’t Save Us, I was pleasantly surprised by my phone’s notification about this week’s episode. David was charming and interesting in equal measure. I mostly knew Jack Dorsey as the absentee CEO of Twitter who let the site stagnate under his watch, but there were a lot of little details about his moderation-phobia and fash-adjacency that I wasn’t aware of.
By the way, I highly recommend the podcast to the TechTakes crowd. They cover many of the same topics from a similar perspective.


For me it gives off huge Dr. Evil vibes.
If you ever get tired of searching for pics, you could always go the lazy route and fall back on AI-generated images. But then you’d have to accept that in a few years your posts would have the analog of a geocities webring stamped on them.


Please touch grass.


The next AI winter can’t come too soon. They’re spinning up coal-fired power plants to supply the energy required to build these LLMs.


I’ve been using DigitalOcean for years as a personal VPS box, and I’ve had no complaints. Not sure how well they’d scale (in terms of cost) for a site like this.


That was quite the rabbit-hole.
The whole time I’m sitting here thinking, “do these mods realize they’re moderating a subreddit called ‘cogsuckers’?”