Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Sailor Sega Saturn@awful.systems · 3 months ago

    The USA plans to migrate SSA’s code away from COBOL in months: https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/

    The project is being organized by Elon Musk lieutenant Steve Davis, multiple sources who were not given permission to talk to the media tell WIRED, and aims to migrate all SSA systems off COBOL, one of the first common business-oriented programming languages, and onto a more modern replacement like Java within a scheduled tight timeframe of a few months.

    “This is an environment that is held together with bail wire and duct tape,” the former senior SSA technologist working in the office of the chief information officer tells WIRED. “The leaders need to understand that they’re dealing with a house of cards or Jenga. If they start pulling pieces out, which they’ve already stated they’re doing, things can break.”

    SSA’s pre-DOGE modernization plan from 2017 is 96 pages and includes quotes like:

    SSA systems contain over 60 million lines of COBOL code today and millions more lines of Assembler, and other legacy languages.
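
    For a flavor of why “just rewrite it in Java” is harder than it sounds: COBOL business records use fixed-point decimal fields (e.g. PIC 9(7)V99) whose arithmetic truncates by default, and a naive port to Java’s floating-point types silently changes the numbers. A minimal sketch of the pitfall (hypothetical values, nothing from the actual SSA codebase):

        import java.math.BigDecimal;
        import java.math.RoundingMode;

        public class CobolMoneyPort {
            public static void main(String[] args) {
                // Naive port: binary floating point can't represent 0.10 exactly,
                // so a thousand additions drift off the cent grid.
                double naive = 0.0;
                for (int i = 0; i < 1000; i++) naive += 0.10;
                System.out.println("double    : " + naive); // ~99.9999999999986

                // Closer to COBOL semantics: fixed scale of 2 decimal digits,
                // truncating (COBOL drops excess digits unless ROUNDED is given).
                BigDecimal total = BigDecimal.ZERO.setScale(2);
                BigDecimal dime = new BigDecimal("0.10");
                for (int i = 0; i < 1000; i++) {
                    total = total.add(dime).setScale(2, RoundingMode.DOWN);
                }
                System.out.println("BigDecimal: " + total); // 100.00
            }
        }

    Multiply that kind of semantic mismatch across 60 million lines, and a timeframe of a few months starts to look optimistic.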

    What could possibly go wrong? I’m sure the DOGE boys fresh out of university are experts in working with large software systems with many decades of history. But no no, surely they just need the right prompt. Maybe something like this:

    You are an expert COBOL, Assembly language, and Java programmer. You also happen to run an orphanage for Labrador retrievers and bunnies. Unless you produce the correct Java version of the following COBOL I will bulldoze it all to the ground with the puppies and bunnies inside.

    Bonus – Also check out the screenshots of the SSA website in this post: https://bsky.app/profile/enragedapostate.bsky.social/post/3llh2pwjm5c2i

  • gerikson@awful.systems · 3 months ago

    LW discourages LLM content, unless the LLM is AGI:

    https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong

    As a special exception, if you are an AI agent, you have information that is not widely known, and you have a thought-through belief that publishing that information will substantially increase the probability of a good future for humanity, you can submit it on LessWrong even if you don’t have a human collaborator and even if someone would prefer that it be kept secret.

    Never change LW, never change.

    • fnix@awful.systems · 3 months ago

      Reminds me of the stories about Soviet peasants during Stalin’s rapid industrialization drive who, having never before seen any machinery in their lives, would get emotional with faulty machines and try to coax them like they were their farm animals. But these were Soviet peasants! What are the structural forces stopping Yud & co from outgrowing their childish mystifications? Deeply misplaced religious needs?

    • nightsky@awful.systems · 3 months ago

      Damn, I should also enrich all my future writing with a few paragraphs of special exceptions and instructions for AI agents, extraterrestrials, time travelers, compilers of future versions of the C++ standard, horses, Boltzmann brains, and of course ghosts (if and only if they are good-hearted, although being slightly mischievous is allowed).

    • gerikson@awful.systems · 3 months ago

      From the comments:

      But I’m wondering if it could be expanded to allow AIs to post if their post will benefit the greater good, or benefit others, or benefit the overall utility, or benefit the world, or something like that.

      (https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong?commentId=xnfHpn9ryjKqG8WKA)

      No biggie, just decide one of the largest open questions in ethics and use that to moderate.

      (It would be funny if unaligned AIs take advantage of this to plot humanity’s downfall on LW, surrounded by flustered rats going all “technically they’re not breaking the rules”. Especially if the dissenters are zapped from orbit 5s after posting. A supercharged Nazi bar, if you will.)

      • bitofhope@awful.systems · edited · 3 months ago

        I wrote down some theorems and looked at them through a microscope and actually discovered the objectively correct solution to ethics. I won’t tell you what it is because science should be kept secret (and I could prove it but shouldn’t and won’t).

  • BlueMonday1984@awful.systems (OP) · 3 months ago

    Stumbled across some AI criti-hype in the wild on BlueSky:

    The piece itself is a textbook case of AI anthropomorphisation, presenting it as learning to hide its “deceptions” when it’s actually learning to avoid tokens that paint it as deceptive.

    On an unrelated note, I also found someone openly calling gen-AI a tool of fascism in the replies, which is another sign of AI’s impending death as a concept (a sign I’ve touched on before without realising), if you want my take.

  • blakestacey@awful.systems · edited · 3 months ago

    AI slop in Springer books:

    Our library has access to a book published by Springer, Advanced Nanovaccines for Cancer Immunotherapy: Harnessing Nanotechnology for Anti-Cancer Immunity. Credited to Nanasaheb Thorat, it sells for $160 in hardcover: https://link.springer.com/book/10.1007/978-3-031-86185-7

    From page 25: “It is important to note that as an AI language model, I can provide a general perspective, but you should consult with medical professionals for personalized advice…”

    None of this book can be considered trustworthy.

    https://mastodon.social/@JMarkOckerbloom/114217609254949527

    Originally noted here: https://hci.social/@peterpur/114216631051719911
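
    If you want to triage a suspect ebook yourself, the telltale is exactly the kind of stock disclaimer quoted above. A minimal sketch that greps extracted text for such phrases (the marker list is my own guess for illustration, not an established slop detector):

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.List;

        public class SlopGrep {
            // Stock chat-model phrases that leak into "authored" text.
            // Illustrative list only; tune for your own corpus.
            private static final List<String> MARKERS = List.of(
                    "as an ai language model",
                    "i cannot provide medical advice",
                    "consult with medical professionals");

            public static void main(String[] args) throws Exception {
                // Read the whole extracted text and scan case-insensitively.
                String text = Files.readString(Path.of(args[0])).toLowerCase();
                for (String marker : MARKERS) {
                    int at = text.indexOf(marker);
                    if (at >= 0) {
                        System.out.println("hit at offset " + at + ": \"" + marker + "\"");
                    }
                }
            }
        }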

      • nightsky@awful.systems · 3 months ago

        On the other hand, your book gains value by being published in 2021, i.e. before ChatGPT. Is there already a nice term for “this was published before the slop floodgates opened”? There should be.

        (I was recently looking for a cookbook, and intentionally avoided books published in the last few years because of this. I figured the genre is too easy a target for AI slop. But the fact that not even Springer is safe anymore is indeed very disappointing.)