

Another deep-dive into DHH’s decline has popped up online: DHH and Omarchy: Midlife crisis:


What’s a government backstop, and does it happen often? It sounds like they’re asking for a preemptive bail-out.
Zitron’s stated multiple times that a bailout isn’t coming, but I’m not ruling it out myself - AI has proven highly useful as a propaganda tool and an accountability sink, and the oligarchs in office have good reason to keep it alive.


I feel slightly better about my Pepsi addiction now.
The Coca-Cola Company is desperately trying to talk up this mediocre demo as the best demo ever. That’s how AI works now — AI companies don’t give you an impressive demo that can’t be turned into a product; they give you a garbage demo and loudly insist it’s actually super cool.
Considering AI supporters are too artistically blind to tell quality work from slop, I’m gonna chalk that up to them genuinely believing it’s the best thing since sliced bread.
Times are tough, the real economy where people live is way down, the recession is biting, and the normal folk know the ones promoting AI want them out of a job. If you push AI, you are the enemy of ordinary people. And the ordinary people know it.
Damn right, David. Here’s to hoping the ordinary people don’t forget who the AI pushers were once winter sets in.


i think you need to be a little bit more specific unless sounding a little like an unhinged cleric from memritv is what you’re going for
I’ll admit to taking your previous comment too literally here - I tend to assume people are completely serious unless I can clearly tell otherwise.
but yeah nah i don’t think it’s gonna last this way, people want to go back to just doing their jobs like it used to be, and i think it may be that bubble burst wipes out companies that subsidized and provided cheap genai, so that promptfondlers hammering image generators won’t be as much of a problem. propaganda use and scams will remain i guess
Scams and propaganda will absolutely remain a problem going forward - LLMs are tailor-made to flood the zone with shit (good news for propagandists), and AI will hand scammers plenty of useful tools for deception.


Considering we’ve already got a burgeoning Luddite movement that’s been kicked into high gear by the AI bubble, I’d personally like to see an outgrowth of that movement be what ultimately kicks it off.
There were already some signs of this back in August, when anti-AI protesters vandalised cars and left “Butlerian Jihad” leaflets outside a pro-AI business meetup in Portland.
Alternatively, I can see the Jihad kicking off as part of an environmentalist movement - to directly quote Baldur Bjarnason:
[AI has] turned the tech industry from a potential political ally to environmentalism to an outright adversary. Water consumption of individual queries is irrelevant because now companies like Google and Microsoft are explicitly lined up against the fight against climate disaster. For that alone the tech should be burned to the ground.
I wouldn’t rule out an artist-led movement being how the Jihad starts, either - between the AI industry “directly promising to destroy their industry, their work, and their communities” (to quote Baldur again), and the open and unrelenting contempt AI boosters have shown for art and artists, artists in general have plenty of reason to see AI as an existential threat to their craft and/or a show of hatred for who they are.


Part of me wants to see Google actually try this and get publicly humiliated by their nonexistent understanding of physics, part of me dreads the fact it’ll dump even more fucking junk into space.


Found a high quality sneer of OpenAI from Los Angeles Review of Books: Literature Is Not a Vibe: On ChatGPT and the Humanities


Plus, the authors currently suing OpenAI have gotten their hands on emails and internal Slack messages discussing their deletion of the LibGen dataset - a development which opens the company up to much higher damages and sanctions from the court for destroying evidence.


That’s quite a remarkable claim. Especially when the actual number of attacks by AI-generated ransomware is zero. [Socket]
If even a single case pops up, I’d be surprised - AFAIK, cybercriminals are exclusively using AI as a social engineering tool (e.g. voice cloning scams, AI-extruded phishing emails, etcetera). Humans are the weakest part of any cybersec system, after all.
The paper finishes by recommending “embracing AI in cyber risk management”.
Given AI’s track record on security, that sounds like an easy way to become an enticing target.


Probably one part normalisation, one part AI supporters throwing tantrums when people don’t treat them like the specialest little geniuses they believe they are. These people have incredibly fragile egos, after all.


Checked back on the smoldering dumpster fire that is Framework today.
Linux Community Ambassadors Tommi and Fraxinas have jumped ship, sneering the company’s fash turn on the way out.


they’ll just heat up a metal heat sink per request and then eject that into the sun
I know you’re joking, but I ended up quickly skimming Wikipedia to determine the viability of this (assuming the metal heatsinks were copper, since copper’s great for handling heat). Far as I can tell:
- The sun isn’t hot enough or massive enough to fuse anything heavier than hydrogen, so the copper’s gonna be doing jack shit when it gets dumped into the core
- Fusing elements heavier than iron costs energy rather than releasing it, and copper’s heavier than iron (atomic number 29 versus iron’s 26), so the copper undergoing fusion would be a bad thing anyway
- The conditions necessary for fusing copper into anything else only occur during a supernova (i.e. when the star is literally exploding)
So, this idea’s fucked from the outset. Does make me wonder if dumping enough metal into a large enough star (e.g. a dyson sphere collapsing into a supermassive star) could kick off a supernova, but that’s a question for another day.
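The “heavier than iron costs energy” point above can be sanity-checked with the semi-empirical mass formula (Bethe–Weizsäcker), which approximates nuclear binding energy per nucleon - the curve peaks near iron, so fusing copper would release nothing. A quick sketch, using standard textbook coefficients (values are approximate; the shape of the curve is the point):

```python
# Binding energy per nucleon via the semi-empirical mass formula
# (Bethe-Weizsacker), standard textbook coefficients in MeV.
# The curve peaks around Fe-56, so fusing copper (heavier than iron)
# costs energy rather than releasing it.

def binding_energy_per_nucleon(Z: int, A: int) -> float:
    a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    B = (a_v * A                              # volume term
         - a_s * A ** (2 / 3)                 # surface term
         - a_c * Z * (Z - 1) / A ** (1 / 3)   # Coulomb repulsion
         - a_a * (A - 2 * Z) ** 2 / A)        # asymmetry term
    if Z % 2 == 0 and N % 2 == 0:             # even-even: pairing bonus
        B += a_p / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:           # odd-odd: pairing penalty
        B -= a_p / A ** 0.5
    return B / A

iron = binding_energy_per_nucleon(26, 56)    # Fe-56, near the peak
copper = binding_energy_per_nucleon(29, 63)  # Cu-63
print(f"Fe-56: {iron:.2f} MeV/nucleon, Cu-63: {copper:.2f} MeV/nucleon")
```

Both come out around 8.8 MeV/nucleon with iron slightly higher, matching the measured values (≈8.79 vs ≈8.75), i.e. copper already sits past the peak of the curve.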


The question of how to cool shit in space is something that BioWare asked themselves when writing the Mass Effect series, and they came up with some pretty detailed answers that they put in the game’s Codex (“Starships: Heat Management” in the Secondary section, if you’re looking for it).
That was for a series of sci-fi RPGs that hasn’t had a new installment since 2017, and yet nobody’s bothering to even ask these questions when discussing technological proposals that could very well cost billions of dollars.


It also integrates Stake into your IDE, so you can ruin yourself financially whilst ruining the company’s codebase with AI garbage.


“you can set the sycophancy engines so they aren’t sycophancy engines”
I’ll take “Shit that’s Impossible” for 500, Alex


I wonder when the market finally realises that AI is not actually smart and is not bringing any profits, and subsequently the bubble bursts, will it change this perception and in what direction? I would wager that crashing the US economy will give a big incentive to change it but will it be enough?
Once the bubble bursts, I expect artificial intelligence as a concept will suffer a swift death, with the many harms and failures of this bubble (hallucinations, plagiarism, the slop-nami, etcetera) coming to be viewed as the ultimate proof that computers are incapable of humanlike intelligence (let alone Superintelligence™). There will likely be a contingent of true believers even after the bubble’s burst, but the vast majority of people will respond to the question of “Can machines think?” with a resounding “no”.
AI’s usefulness to fascists (for propaganda, accountability sinks, misinformation, etcetera) and the actions of CEOs and AI supporters involved in the bubble (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) will also pound a good few nails into AI’s coffin, by giving the public plenty of reason to treat any use of AI as a major red flag.


Checked back in on the ongoing Framework dumpster fire - Project Bluefin’s quietly cut ties, and the DHH connection is the reason why.


This entire news story sounds like the plotline of a rejected Captain Planet episode. What the fuck.


A judge has given George RR Martin the green light to sue OpenAI for copyright infringement.
We are now one step closer to the courts declaring open season on the slop-bots. Unsurprisingly, there’s jubilation on Bluesky.
Abso-fucking-lutely. Oxford’s latest “research paper” isn’t marketing - it’s propaganda. Propaganda for bullshit fountains, and for the ideology which endorses them.