Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Sailor Sega Saturn@awful.systems
    4 days ago

    Yet another billboard.

    https://www.reddit.com/r/bayarea/comments/1ob2l2o/replacement_ai_billboard_in_san_francisco_who/

    https://replacement.ai/

    This time the website is a remarkably polished satire and I almost liked it… but the email it encourages you to send to your congressperson is pretty heavy on doomer talking points and light on actual good ideas (but maybe I’m being too picky?):

    spoiler

    I am a constituent living in your district, and I am writing to express my urgent concerns about the lack of strong guardrails for advanced AI technologies to protect families, communities, and children.

    As you may know, companies are releasing increasingly powerful AI systems without meaningful oversight, and we simply cannot rely on them to police themselves when the stakes are this high. While AI has the potential to do remarkable things, it also poses serious risks such as the manipulation of children, the enablement of bioweapons, the creation of deepfakes, and significant unemployment. These risks are too great to overlook, and we need to ensure that safety measures are in place.

    I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections, as local efforts can serve as crucial safeguards.

    Thank you for your time and attention to this critical issue.

    • swlabr@awful.systems
      4 days ago

      but maybe I’m being too picky?

      This is something I’ve been thinking about. There’s a lot of dialogue about “purity” and “purity tests” and “reading the room” in the more general political milieu. I think it’s fine to be picky in this context, because how else will your opinion be heard, let alone advocated for?

      Like, there’s a time and place for consensus. Consensus often comes from people expressing their opinions and reaching a compromise, and rarely from people coming in already agreeing.

      So wrt this particular example, it’s totally fine to be critical and picky. If you were discussing this in the forum where this letter was written, it probably wouldn’t be ok.

  • lagrangeinterpolator@awful.systems
    6 days ago

    More AI bullshit hype in math. I only saw this just now so this is my hot take. So far, I’m trusting this r/math thread the most as there are some opinions from actual mathematicians: https://www.reddit.com/r/math/comments/1o8xz7t/terence_tao_literature_review_is_the_most/

    Context: Paul Erdős was a prolific mathematician who had more of a problem-solving style of math (as opposed to a theory-building style). As you would expect, he proposed over a thousand problems for the math community that he couldn’t solve himself, and several hundred of them remain unsolved. With the rise of the internet, someone had the idea to compile and maintain the status of all known Erdős problems in a single website (https://www.erdosproblems.com/). This site is still maintained by this one person, which will be an important fact later.

    Terence Tao is a present-day prolific mathematician, and in the past few years, he has really tried to take AI with as much good faith as possible. Recently, some people used AI to search up papers with solutions to some problems listed as unsolved on the Erdős problems website, and Tao points this out as one possible use of AI. (I personally think there should be better algorithms for searching literature. I also think conflating this with general LLM claims and the marketing term of AI is bad-faith argumentation.)

    You can see what the reasonable explanation is. Math is such a large field now that no one can keep tabs on all the progress happening at once. The single person maintaining the website missed a few problems that got solved (he didn’t see the solutions, and/or the authors never bothered to inform him). But of course, the AI hype machine got going real quick. GPT5 managed to solve 10 unsolved problems in mathematics! (https://xcancel.com/Yuchenj_UW/status/1979422127905476778#m, original is now deleted due to public embarrassment) Turns out GPT5 just searched the web/training data for solutions that have already been found by humans. The math community gets a discussion about how to make literature more accessible, and the rest of the world gets a scary story about how AI is going to be smarter than all of us.

    There are a few promising signs that this is getting shut down quickly (even Demis Hassabis, CEO of DeepMind, thought that this hype was blatantly obvious). I hope this is a bigger sign for the AI bubble in general.

    EDIT: Turns out it was not some rando spreading the hype, but an employee of OpenAI. He has taken his original claim back, but not without trying to defend what he can by saying AI is still great at literature review. At this point, I am skeptical that this even proves AI is great at that. After all, the issue was that a website maintained by a single person had not updated the status of 10 problems inside a list of over 1000 problems. Do we have any control experiments showing that a conventional literature review would have been much worse?

  • BlueMonday1984@awful.systemsOP
    5 days ago

    Words of wisdom from Baldur Bjarnason (mostly repeated from his Basecamp post-mortem):

We know we’re reaching the late stages of a bubble when we start to see multiple “people in tech don’t really believe in all of this, honest, we just act like it because we think we have to, we’re a silent majority you see”, but the truth is that what you believe in private doesn’t matter. All that matters is that you’ve been acting like a true believer and you are what you do

    In work and politics, it genuinely doesn’t matter what you were thinking when you actively aided and abetted in shitting on people’s work, built systems that helped fascists, ruined the education system and pretty much all of media. What matters, and what you should be judged on is what you did

Considering a recent example where AI called someone a terrorist for opposing genocide, it’s something that definitely bears repeating.

  • rook@awful.systems
    7 days ago

    Somehow I missed the fact that yesterday paypal’s blockchain operator fucked up and accidentally minted 300 trillion itchy and scratchy coins.

    https://www.web3isgoinggreat.com/?id=paxos-accidental-mint

    And now apparently it turns out that it was just a sequence of stupid whereby they accidentally deleted 300 million, which would have been impressive all by itself, then tried to recreate it (🎶 but at least it isn’t fiat currency🎶) and got the order of magnitude catastrophically wrong and had to delete that before finally undoing their original mistake. Future of finance right here, folks.

    Anyone else know the grisly details? The place I heard it from is a mostly-private account on mastodon which isn’t really shareable here, and they didn’t say where they’d heard it.

  • gerikson@awful.systems
    7 days ago

    2 items

Here’s a lobster being sad that a poor uwu smol bean AI shill is getting attacked

    Would you take a kinder tone to the author’s lack of skill/knowledge if it weren’t about AI? It would be ironic if hatred of AI caused us to lose our humanity.

    link

    here’s political mommy blog Wonkette having fun explaining the hallucinatory insanity that is Google AI summaries

    https://www.wonkette.com/p/are-you-ok-google-ai-do-you-need

    • BlueMonday1984@awful.systemsOP
      11 days ago

      And in related news, worldwide ecosystems are already getting perma-fucked by billionaires’ repeated and relentless wrecking of our planet for personal gain.

      I mention billionaires specifically because the average Joe, the 99% of humanity being told to cut down their carbon footprint and delete emails to save water, is completely fucking blameless in this.

      They didn’t choose to have car-centric architecture forced on them for the past goddamn century, they didn’t choose to have planet-wrecking crypto farms/NFTs forced on them a few years ago, and they sure as hell didn’t choose to have planet-killing AI slop extruders forced on them, either.

      (Anyways, unrelated hot take of the day: taking responsibility for something you’re not responsible for is a moral failing, and needs to be treated as such)

      • froztbyte@awful.systems
        10 days ago

        (Anyways, unrelated hot take of the day: taking responsibility for something you’re not responsible for is a moral failing, and needs to be treated as such)

        wat

  • sc_griffith@awful.systems
    9 days ago

    as an ezra klein hater since 2020 the past month or so has been victory lap after victory lap. and now, well

    he's interviewing yud

    • V0ldek@awful.systems
      4 days ago

      I still refuse to learn what an ezra is, they will have to drag my ass to room 101 to force that into my brain

    • swlabr@awful.systems
      7 days ago

      I swear to god if yud goes on conan needs a friend (who recently interviewed a freshly minted riyadh comedy festival alum bill burr) i will unplug from this simulation

    • fnix@awful.systems
      8 days ago

      I remember when this guy used to castigate Sam Harris for platforming Charles Murray’s race science. The same guy who now eulogizes Charlie Kirk and does the bidding of billionaires. Really encapsulates the elite pivot to the right.

  • blakestacey@awful.systems
    9 days ago

    Hey, remember Sabine Hossenfelder? The transphobe who makes YouTube videos? She published a physics paper! Well, OK, she posted a thing to the arXiv for the first time since January 2024. I read it, because I’ve been checking the quant-ph feed on a daily basis for years now, and reading anything else is even more depressing. It’s vague, meandering glorp that tries to pretty up a worldview that amounts to renouncing explanation and saying everything happens because Amon-Ra wills it. Two features are worth commenting upon. The acknowledgments say,

    I acknowledge help from ChatGPT 5 for literature research as well as checking this manuscript. I swear I actually wrote it myself.

    “Tee hee, I shut off my higher brain functions” is a statement that should remain in the porn for those who have a fetish for that.

    And what literature does Hossenfelder cite? Well, there’s herself, of course, and Tim Palmer (one of those guys who did respectable work in his own field and then decided to kook out about quantum mechanics). And … Eric Weinstein! The very special boy who dallied for a decade before writing a paper on his revolutionary theory and then left his equations in his other pants. Yes, Hossenfelder has gone from hosting a blog post that dismantled “Geometric Unity” to citing it as a perfectly ordinary theory.

    If she’s not taking Thielbux, she’s missing an opportunity.

    • blakestacey@awful.systems
      9 days ago

      I am still staying away from YouTube, so I am happily cut off from the bulk of her content. But when she teases a video with the phrase

      People in Western countries are having fewer kids

      I reserve the right to say “yikes”.

      Oh, and she has podcasted with sex pest Lawrence Krauss, multiple times (“What’s New in Science With Sabine and Lawrence”).

      • o7___o7@awful.systems
        8 days ago

        Usually you get tech creeps insisting that they could’ve done physics. Isn’t it kind of uncanny when a physicist insists on their capacity for tech creeping? Edit: also thanks for the explainer!

    • corbin@awful.systems
      9 days ago

      Community sneer from this orange-site comment:

      We know from Bell’s theorem that any locally causal model that correctly describes observations needs to violate measurement independence. Such theories are sometimes called “superdeterministic”. It is therefore clear that to arrive at a local collapse model, we must use a superdeterministic approach.

      I only got the first 1/2 of my physics degree before moving on to CS, but to me this reads as “We know eternal life can only be obtained from unicorn blood, so for this paper we must use a fairytale approach.”

      • V0ldek@awful.systems
        4 days ago

        I saw like a couple articles and a talk about Bell’s theorem 5 years ago and I immediately clocked this as a vast, vast oversimplification

      • blakestacey@awful.systems
        9 days ago

        That passage of Hossenfelder’s jumped out at me, too. It’s a laughably bad take about the implications of Bell’s theorem that ignores how just about every interpretation of quantum mechanics has responded to Bell-inequality violations, and it attempts to sanewash superdeterminism.

        Not her first time doing that…

        • zogwarg@awful.systems
          9 days ago

Reading up a bit more on “superdeterminism”, I guess it explains a bit more why she made that video attempting to debunk free-will compatibilism as a kooky idea cooked up by new kooky philosophers (not realising it’s about as ancient as western philosophy itself).

For the “aesthetics” of presenting superdeterminism as “pure common sense”, the no-free-will angle just sells it better.

EDIT: From memory, maybe it was about “hard compatibilism” (free will requires determinism), which might not be explicitly so old, though I would say it’s a natural consequence of most compatibilist positions.

    • corbin@awful.systems
      9 days ago

      Thanks, this was an awful skim. It feels like she doesn’t understand why we expect gravity to propagate like a wave at the speed of light; it’s not just an assumption of Einstein but has its own independent measurement and corroboration. Also, the focus on geometry feels anachronistic; a century ago she could have proposed a geometric explanation for why nuclei stay bound together and completely overlooked gluons. To be fair, she also cites GRW but I guess she doesn’t know that GRW can’t be made relativistic. Maybe she chose GRW because it’s not yet falsified rather than for its potential to explain (relativistic) gravity. The point at which I get off the train is a meme that sounds like a Weinstein whistle:

      What I am assuming here is then that in the to-be-found underlying theory, geometry carries the same information as the particles because they are the same. Gravity is in this sense fundamentally different from the other interactions: The electromagnetic interaction, for example, does not carry any information about the mass of the particles. … Concretely, I will take this idea to imply that we have a fundamental quantum theory in which particles and their geometry are one and the same quantum state.

To channel dril a bit: there’s no inherent geometry to spacetime, you fool. You trusted your eyeballs too much. Your brain evolved to map 2D and 3D so you stuck yourself into a little Euclidean video game like Descartes reading his own books. We observe experimental data that agrees with the presumption of 3D space. We already know that time is perceptual and that experimentally both SR and GR are required to navigate spacetime; why should space not be perceptual? On these grounds, even fucking MOND has a better basis than Geometric Unity, because MOND won’t flip out if reality is not 3D but 3.0000000000009095…D while Weinstein can’t explain anything that isn’t based on a Rubik’s-cube symmetry metaphor.

      She doesn’t even mention dark matter. What a sad pile of slop. At least I learned the word for goldstinos while grabbing bluelinks.

      • blakestacey@awful.systems
        9 days ago

        Also, the focus on geometry feels anachronistic; a century ago she could have proposed a geometric explanation for why nuclei stay bound together and completely overlooked gluons.

        She wrote a whole book about how physicists have deluded themselves by pursuing mathematical “beauty”, and now she’s advocating “everything is geometry”.

    • blakestacey@awful.systems
      9 days ago
      taking a bad paper too seriously

      Hossenfelder starts her “Summary” section thusly:

      I have shown here how the assumption that matter and geometry have the same fundamental origin requires the time evolution of a quantum state to differ from the Schrödinger equation.

      This conclusion is unwarranted. It follows, not from the given assumption, but from the overcomplicated way that assumption is implemented and the kludges built on top of that. Here is how Hossenfelder introduces her central assumption:

      What I am assuming here is then that in the to-be-found underlying theory, geometry carries the same information as the particles because they are the same. […] Concretely, I will take this idea to imply that we have a fundamental quantum theory in which particles and their geometry are one and the same quantum state.

      Taking this at face value, the quantum state of a universe containing gravitating matter is just a single ray in a Hilbert space. As cosmic time rolls on, that ray rotates. This unitary evolution of the state vector is the evolution both of the matter and of the geometry. There is, by assumption, no distinction between them. But Hossenfelder hacks one in! She says that the Hilbert space must factor into the tensor product of a Hilbert space for matter and a Hilbert space for geometry. And then she says that the only allowed states are tensor products of two copies of the same vector (up to a unitary that we could define away). If matter and geometry were truly the same, there would be no such factorization. We would not have to avoid generating entanglement between the two factors by breaking quantum mechanics, as Hossenfelder does, simply because there would not be two spaces to tango.

      I am skeptical of this whole approach on multiple levels, but even granting the basic premise, it’s a bad implementation of that premise. She doesn’t have a model; she has a pathological “fix” to a problem of her own making.
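Spelling out the structural objection in symbols (my notation, a sketch of the argument above, not anything from the paper itself):

```latex
% Sketch of the criticized setup (notation mine). The paper posits a
% factorized Hilbert space with states restricted to "doubled" products:
\[
  \mathcal{H} \;=\; \mathcal{H}_{\mathrm{matter}} \otimes \mathcal{H}_{\mathrm{geometry}},
  \qquad
  \text{allowed states: } |\Psi\rangle \;=\; |\psi\rangle \otimes |\psi\rangle .
\]
% Ordinary Schrödinger evolution on a tensor product generically entangles
% the two factors whenever the Hamiltonian couples them:
\[
  e^{-iHt}\,\bigl(|\psi\rangle \otimes |\psi\rangle\bigr)
  \;\neq\; |\psi'\rangle \otimes |\psi'\rangle
  \qquad \text{for interacting } H ,
\]
% so keeping every state in the doubled product form forces a departure
% from unitary Schrödinger evolution. Whereas if matter and geometry were
% literally one quantum state, there would be a single factor and nothing
% to entangle in the first place.
```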

      • blakestacey@awful.systems
        8 days ago

        Eric “I will come to Harvard and espouse Numberwang Racism if you deign to invite me” Weinstein:

        Invite me back to Harvard as the co-founder of the Science and Engineering Workforce Project in the @HarvardEcon department and I will give a talk on how this really works. You don’t have to pay me a cent if you video it.

        I’ll cover:

        The need to fire Claudine Gay.

        The need to end activist studies depts.

        University Bioweapon research

        String Theory

        CPI Cost of Living

        Evolutionary theory applied to Humans

        Low Dimensional Geometry

        NSF STEM Shortage Panics

        DEI hiring against merit

        Epstein and Science

        Cognitive abilities expectations in Geographicly widely separated populations.

        • blakestacey@awful.systems
          8 days ago

          In using xcancel to look up Eric Weinstein’s bonkers rants on Xitter, I exposed myself to Sabine Hossenfelder’s comment section. The drivel, the fawning, the people asking chatbots about quantum gravity… It hurts, it hurts.

          I am going to scrub my brain with Oliver Byrne’s edition of Euclid.

        • o7___o7@awful.systems
          8 days ago

          You’ve got to be shitting me.

          Edit: maybe all these weirdos in the techfash groupchat are experimenting with the wrong nootropics and gave themselves brain damage. They all seem to be unravelling simultaneously.

          • Soyweiser@awful.systems
            2 days ago

I think they are feeling like they are riding the wave. The unstoppable wave of ‘achtually we are right’. And they learned nothing from the expert drug users and don’t know the wave is about to reach the high-water mark.

    • sc_griffith@awful.systems
      9 days ago

      just repeating my reaction on bluesky here but “my AI partner died because of the updates” is a common complaint with ppl “dating” AI. altman absolutely knows who he’s targeting when he talks about restoring personality in erotic settings, and it’s people who are really not doing great

      • YourNetworkIsHaunted@awful.systems
        8 days ago

        I guess we’re moving into the “take advantage of the mentally unwell” stage of trying to figure out a way to make money off this shit, also known as the Gacha Gambit.

        Bold move, Cotton. Let’s see how it pays off. (It pays off in human misery)

    • BlueMonday1984@awful.systemsOP
      9 days ago

      “As part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”

There’s this magical thing called “commissioning a porn artist”, Sammy Boy. I recommend trying it out for once - it gets you an objectively better result than throwing money into a planet-destroying slop machine, and it’s much cheaper too. You’ve done nothing but attack the human soul for the past three fucking years; you might as well give an infinitesimal amount of your undeserved billions to one of the millions of artists whose livelihoods you’ve been murdering.

      (Seriously, I feel fucking insulted by this.)

      • Soyweiser@awful.systems
        8 days ago

But commissioning a porn artist is hard, as they’ve lost their payment providers and bank accounts, while the slop machines got off with an American-authors-only slap on the wrist.

  • corbin@awful.systems
    11 days ago

    Obituaries are being run for John Searle. Most obituaries will focus on the Chinese Room thought experiment, an important bikeshed in AI research noted for the ease with which freshmen can incorrectly interpret it. I’m glad to see that Wikipedia puts above the Chinese Room the fact that he was a landlord who sued the city of Berkeley and caused massive rent increases in the 1990s; I’m also happy that Wikipedia documents his political activity and sexual-assault allegations.