

John Scalzi’s shitcanned any book club plans for the foreseeable future, and AI spammers are the reason why.


Skyview.social mirror so everyone can see - he’s locked out everyone who’s not signed in.


The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.
You might think using a hallucination machine was risking their lives already.
They are being issued a handgun which goes off on its own (which killed an airman in 2025), so that tracks.
The Army’s shiny new service rifle from SIG Sauer (in a shiny new caliber that nobody else in NATO uses) also got slammed by Army troops and gun nerds alike for a variety of reasons, so it’s not just AI that the brass is being braindead about.
(Meanwhile, the USMC told SIG to fuck off and stuck with their M27 IARs, which had served them well since 2011.)


the one about arena shooters dying off
That extruded piece of audiovisual garbage has been bugging me massively for a couple hours now, so I’m gonna make a quick recommendation:
If you’re looking for a non-dogshit video about the arena shooter’s implosion, boomer shooter YouTuber Skeleblood made a pretty solid one a couple years ago.


Stumbled across a YouTube slop-farm calling itself The Interactive Archive recently, and the whole thing is just plain shameless:

AI slop banner, AI slop thumbnails, AI slop avatar, it’s all slop from top to fucking bottom.
It’s getting pitiful views, too - the highest-viewed video on the damn thing (as of this writing) is the one about arena shooters dying off, sitting at just under 400 views:

For context, a random screen recording I uploaded hit nearly 700 views through sheer luck.



TBF, fighter jets should have been unmanned drones
On the one hand, an autonomous fighter jet would be immune to G-LOC, letting it perform maneuvers that would incapacitate/kill a human pilot. On the other hand, air-to-air combat is a complex affair, and the enemy will be probing for any weaknesses in your drones’ programming to exploit.
Autonomous bombers seem easier to pull off - bombing missions are (relatively) straightforward compared to air-to-air combat.


I personally wouldn’t count on it - if nothing else, losing Internet access would be crippling in modern day life.
I’m not gonna completely rule it out, though - CNN wrote about Instax making a comeback, and the BBC reported a general spike in retro tech sales last year.


Jonathan Hogg gives his two cents on gen-AI, pointing to high barriers to entry causing vibe-coding to explode:
We seem to have largely stopped innovating on trying to lower barriers to programming in favour of creating endless new frameworks and libraries for a vanishingly small number of near-identical languages. It is the mid-2020s and people are wringing their hands over Rust as if it was some inexplicable new thing rather than a C-derivative that incorporates decades old type theory. You know what I consider to be genuinely ground-breaking programming tools? VisiCalc, HyperCard and Scratch.
You know what? HyperCard was a glorious moment in time that I dearly miss: an army of non-experts were bashing together and sharing weird and wonderful stacks that were part 'zine, part adventure game and part database. Instead of laughing at vibe-coders, maybe we should ask ourselves why the current state-of-the-art in beginner-friendly programming tools is a planet-boiling roulette wheel.
(Adding my two cents, Adobe Flash filled the same role as HyperCard in the '00s, giving the public an easy(ish) way to get into programming and providing an outlet for many an aspiring animator and gamedev.)


throwing chatbots into the military
Not the first time the US gov’s pulled that shit, and it sure as hell won’t be the last.
File this shit next to the SIG M18 and the XM7 on its list of grade-A blunders.


The promptfondlers did it, they made a computer which doesn’t do what you tell it to do


Starting this Stubsack off with one programmer’s testimony on the effects of the LLM rot:
For the record, I work at a software company that employs ~10k developers.
Before LLMs, I’d encounter [software engineers that seem completely useless or lacking in basic knowledge] a couple of times a month, but I interact with a lot of engineers, specifically the ones that need help or are new at the company or industry at large, so it’s a selected sample. Even the most inexperienced ones are willing and able to learn with some guidance.
After LLMs, there’s been a significant uptick, and these new ones are grossly incompetent, incurious, impatient, and behave like addicts if their supply of tokens is at all interrupted. If they run out of prompt credits, it’s an emergency because they claim they can’t do any work at all. They can’t even explain the architecture of what they are making anymore, and can’t even file tickets or send emails without an LLM writing it for them, and they certainly lack in any kind of reading comprehension.
It’s bleak and depressing, and makes me want to quit the industry altogether.


you completely skipped past one if not the most important theme in the novel which is language and the way people talk and write and the various ways they conduct themselves in different times and places
I don’t think the LWer even realised those themes were there. This whole review screams “failed high school English” to me.


Found another website doing a good job keeping an eye on the slop machines and their promoters: The AI Dirty List.
It also lists those who have fought against the bullshit fountains.


In other news, Larry Garfield of GarfieldTech has had enough of the bullshit fountains, and put out a fury-filled sneer in response.


New post from Iris Meredith: “Becoming an AI-proof software engineer”


New blogpost from Drew DeVault, titled “The cults of TDD and GenAI”. As the title suggests, it’s drawing comparisons between how people go all-in on TDD (test-driven development) and how people go all-in on slop machines.
It’s another post in the genre of “why did tech fall for AI so hard” that I’ve seen cropping up, in the same vein as mhoye’s Mastodon thread and Iris Meredith’s “The problem is culture”.
The purpose of AI is theft, part infinity: chardet steals LGPL code for profit using Claude