• 0 Posts
  • 373 Comments
Joined 2 years ago
Cake day: March 22nd, 2024



  • Man, it’s frustrating to see him end up going down this route, because the opening part of this is actually one of the better descriptions of AI psychosis I’ve seen, and I appreciate his emphasis on the way the delusion is built up in the sufferer’s mind rather than trying to game out what’s happening “inside” the chatbot. Even his point about how LLMs aren’t bad in exceptional ways for a new technology is pretty cogent. But his insistence on defending his own use of these things (and others who use them in “centaur-configured” ways) rather than thinking about how that use interacts with all the relatively normal ways this technology is wildly destructive is a very conspicuous blind spot.

    Like, you can absolutely drive a nail with a phone book, and given the wider surface area it even has the advantage over a traditional hammer of being harder to smash your fingers. An individual craftsman may well decide that this is a useful tool and in some cases worth using over other options. But if the only source of these hammer-books was an industry that relied on massive uncompensated use of creative work passed through exploited third-world labor, ground rainforests to dust to create special “old-growth paper”, placed massive and unsustainable burdens on existing road infrastructure to collect these parts and deliver them, and somehow had been blown into a speculative bubble that represented something like a quarter of the entire US economy by promising that if they created a big enough book then one guy could hammer all the nails at once and they could lay off all the carpenters, I think it’s justifiable to look at the people using it as a normal tool and ask them “what the actual fuck are you doing?” The usage statistics they represent and the user stories they tell are used to justify not addressing any of the harms necessary to enable this tool to exist in its current form, and are largely driving the absurd valuations that keep pumping the bubble. Your individual role in those harms as a small-time user who finds it occasionally useful may be incalculably small, but it is still real.

    Like, it feels like I agree with Doctorow on basically all the premises here. He seems to have a decent grasp on how the things actually work (even if he’s wrong about Ollama specifically being an LLM in its own right) and their associated limitations. He draws a decent line separating criticism from criti-hype. He is basically correct about how much of a bastard everyone involved in the industry at a high level is. But maybe because so many of these things aren’t really exceptional (save possibly in their sheer scale) he can’t seem to conceive of a world where things happen any differently, or of the role his actions and words play in reinforcing the status quo even as he writes pretty explicitly about how fucked up that status quo is.

    Honestly it makes me think of the finale of his second Martin Hench novel, The Bezzle. After drilling into the business of the private prison operator that is making his friend’s life hell and separating the merely fucked up parts from the things that might actually have consequences if word got to what passes for cops in that tax bracket, he doesn’t go to the papers or start reaching out to the SEC. Instead he goes to the bastard at the head of it all and blackmails him into making his friend’s remaining incarceration less hellish and leaving him alone. And his friend, who started all this by begging for help unraveling this shit, rightly calls Marty a coward for it. There’s something ironic in seeing Doctorow here seemingly make the same judgement: abuse and apathy are sufficiently normal that we shouldn’t even bother to try and make the world better, just find ways to shelter ourselves and the people we care about from the consequences. And hell, I guess even there I’m not immune to it. There are reasons why I’m posting here and not waiting out front of a hotel with some engraved brass. Still, on the continuum of such things I’m disappointed that the guy who wrote that scene is stuck in the normalization blues.



  • FT reports from Amazon insiders that they’re investigating the role AI-assisted development has played in a spate of recent issues across both the store and AWS.

    FT also links to several previous stories they’ve reported on related issues, and I haven’t had the time to breach the paywalls to read further, but the line that caught my eye was this:

    The FT previously reported multiple Amazon engineers said their business units had to deal with a higher number of “Sev2s” — incidents requiring a rapid response to avoid product outages — each day as a result of job cuts.

    To be honest, this is why I’m skeptical of the argument that the AI-linked job losses are a complete fabrication. Not because the systems are actually there to directly replace the lost workers, but because the decision-makers at these companies seem to legitimately believe that these new AI tools will let their remaining workforce cover any gaps left by the layoffs they wanted to do anyways. It sounds like Amazon is starting to feel the inverse relationship between efficiency and stability, and I expect it’s only a matter of time before the wider economy starts to feel it too. Whether the owning class recognizes what’s happening is, of course, a different story.


  • Thank you for providing some actual domain experience to ground my idle ramblings.

    I wonder if part of the reason why so many high profile intellectuals in some of these fields are so prone to getting sniped by the confabulatron is an unwillingness to acknowledge (either publicly or in their own heart) that “random bullshit go” is actually a very useful strategy. It reminds me of the way that writers will talk about the value of just getting words on the page because it’s easier to replace them with better words than to create perfection ex nihilo, or the rubber duck method of troubleshooting where just stepping through the problem out loud forces you to organize your thoughts in a way that can make the solution more readily apparent. It seems like at least some kinds of research are also this kind of process of analysis and iteration as much as if not more than raw creation and insight.

    I have never met Donald Knuth, and don’t mean to impugn his character here, even as I’m basically asking if he’s too conceited to properly understand what an LLM is. But I think of how people talk about science and scientists and the way the work gets romanticized (see also Iris Merideth’s excellent piece on “warrior culture” in software development), and it just doesn’t fit a field that can see meaningful progress from throwing shit at the wall to see what sticks. A lot of the discourse around art and artists is more willing to acknowledge this element of the creative process, and that might explain artists’ greater ability and willingness to see the bullshit faucet for what it is. Maybe because science and engineering have stricter and more objective pass/fail criteria (you can argue about code quality just as much as the quality of a painting, but unlike a painting, either the program runs or it doesn’t; visual art doesn’t generally have to worry about a BSOD), there isn’t the same openness to acknowledging that the affirmative results you get from an LLM are still just random bullshit. I can imagine the argument being: “The things we’re doing are very prestigious and require great intelligence and other things that confer prestige and cultural capital. If ‘random bullshit go’ is often a key part of the process, then maybe it doesn’t need as much intelligence and doesn’t deserve as much prestige. Therefore, if this new tool can be at all useful in supplementing or replicating part of our process, it must be using intelligence, and maybe it deserves some of the same prestige that we have.”



  • Even in Knuth’s account it sounds like the LLM contribution was less in solving the problem and more in throwing out random BS that looked vaguely like different techniques were being applied until it spat out something that Knuth and his collaborator were able to recognize as a promising avenue for actual work.

    His bud Filip Stappers rolled in to help solve an open digraph problem Knuth was working on. Stappers fed the decomposition problem to Claude Opus 4.6 cold. Claude ran 31 explorations over about an hour: brute force (too slow), serpentine patterns, fiber decompositions, simulated annealing. At exploration 25 it told itself “SA can find solutions but cannot give a general construction. Need pure math.” At exploration 30 it noticed a structural pattern in an earlier solution. Exploration 31 produced a working construction.

    I am not a mathematician or computer scientist, so I won’t claim to know exactly what this is describing or how it compares to the normal process for investigating this kind of problem. However, the fact that it produced 4 approaches over 31 attempts seems more consistent with randomly throwing out something that looks like a solution than with actually thinking through the process of each one. In a creative exploration like this, where you expect most approaches to be dead ends rather than produce a working construction, maybe the LLM is providing something valuable by generating vaguely work-shaped outputs that can inspire an actual mind to create the actual answer.

    Filip had to restart the session after random errors and had to keep reminding Claude to document its progress. The solution only covers one type of case; when Claude tried to continue another way, it “seemed to get stuck” and eventually couldn’t run its own programs correctly.

    The idea that it’s ultimately spitting out random answer-shaped nonsense also follows from the amount of babysitting Filip had to do to keep it actually producing anything useful. I don’t doubt that it’s more efficient than I would be at producing random sequences of work-shaped slop and redirecting or retrying in response to a new “please actually do this” prompt, but of the two of us only one is demonstrating actual intelligence and moving towards being able to work independently. Compared to an undergrad or myself, Claude surely has a faster iteration time for each of those attempts, but that’s not even in the same zip code as actually thinking through the problem, and if anything it serves as a strong counterexample to the doomer critihype about the expanding capabilities of these systems. This kind of high-level academic work may be a case where this kind of random slop is actually useful, but that’s an incredibly niche area and does not do nearly as much as Knuth seems to think in terms of justifying the incredible cost of these systems. If anything, the narrative that “AI solved the problem” gives Anthropic credit for the work that Knuth and Stappers were putting into actually sifting through the stream of slop and identifying anything useful. Maybe babysitting the slop sluice is more satisfying or faster than going down every blind alley on your own, but you’re still the one sitting in the river with a pan, and pretending the river is somehow pulling the gold out of itself is just damn foolish.



  • I actually dug up the context to make sure I wasn’t forgetting something horrific. It’s from a 2017 piece (CW: SSC Link) back before he went mask-off but was firmly in the “I’m a liberal and I talk exclusively about how liberals and their institutions suck” useful idiot phase of his career, so the overall essay is about how actually the wing nuts have a point when they say that all so-called neutral institutions are actually secret communist indoctrinators that want to trans your children and take your guns. I’m paraphrasing, obviously; he believes/pretends that when they called these things left-wing they didn’t mean “literally in league with Stalin and the Devil”. However, in the middle of the usual beigeness he tries to maintain his air of neutrality by having a section on how bad Voat ended up being, which concludes with:

    The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.


  • God that was bleak - I thought Nick was bad in his guest spots on Alex’s show (seen via Knowledge Fight, of course) but apparently you really do need at least two layers of insulating podcast to avoid suffering critical psychic damage from that level of hatred. I appreciated the acknowledgement that in order to feel at all okay playing clips you needed to sanewash him a little bit. I’m pretty sure that JorDan do the same thing with Alex and don’t acknowledge it nearly often enough.

    I also feel like some of Nick’s schtick is about positioning himself and maintaining his place in the right-wing grifter bigot-industrial complex. Like, the open disdain for his audience, and the presentation of his actually pretty straightforward feelings on the halftime show as somehow brave and iconoclastic, are also about differentiating himself and making his audience feel superior to Alex, Tucker, Candace, etc. In that sense the open disdain for the audience serves another purpose in reinforcing hierarchy: look at how great it feels for me to be better than you. And even you are better than the chuds, who are better than the racialized other.


  • It’s especially strange because becoming less prone to bias and developing a clear understanding of what serves your interests is so much of the pitch for Rationalism as a community/ideology/project. Like, here are unbearably long essays that promise to help cultivate the superpower of seeing the world clearly and acting in it effectively, but if you acknowledge that nobody outside this small set of group homes is actually doing that, you’ll be shunned. And that’s not getting into how easily exploitable those assumptions of good faith are by bad-faith actors. It comes back to that quote from Scott that has stuck in my head apparently more than it did his: if you build a community based on the principle that you will absolutely never have a witch hunt, you will end up living among approximately seven principled civil libertarians and eleven million goddamn witches, and this is true even if you’re right that witch hunts are bad.





  • It’s such a powerful dodge. What you’re actually saying is “we’re going to keep doing exactly what we’re doing and see if that fixes it,” because the nature of innovation is such that it’s actually pretty hard to “invest” in, and very rarely has the direct application you need. Like, you don’t get penicillin by investing in pharmaceutical innovation; you get it by paying some nerd to fuck off to the jungle for a few years and hoping that his special interest ends up being useful. Bell Labs was able to basically invent the modern world by funneling the profits of their massive monopolistic empire into a bunch of nerds poking stuff with probes to see what happens: elementary physics and materials science research that didn’t have a definite objective.



  • Yeah, I probably should have included a warning about incoming psychic damage on that link. Sorry.

    Although highlighting the phrase “intelligence displacement” does illuminate that the whole case they make is built on the same foundations as that other Rat fixation: eugenics and race science! Like, I’m not saying the author is definitely a eugenicist breaking out the skull calipers, but their argument rests on the same idea of what “intelligence” is in the first place: a distinct commodity that is produced or contained in certain minds and is the ultimate source of the value they create. If you’re a “knowledge worker” you don’t provide a specific perspective, experience, expertise, or even knowledge; you just plug your intelligence into the organization like connecting a new processor bank to a server farm. Because it’s disconnected from a person’s individuality and subjectivity, we can model it effectively as a commodity and look to optimize its production, either by automating away the squishy human element with AI or by increasing the productivity of current methods by optimizing for the ~~white~~ “right” kind of person.


  • I can see a situation where his bombing campaign fails to achieve the objective, a special operation like in Venezuela fails, and Hegseth or Rubio or someone (Putin? Netanyahu? Kanye?) convinces him to invade long enough that inertia carries it forward.

    Of course, anything we do is going to take us to the same result we’ve seen with all these interventions. The US military and whatever allies join us will be, broadly speaking, terrifyingly effective at achieving their tactical and operational goals, but because the overall strategic plan is somewhere between non-existent and backwards those successes will fail to actually do anything. We will inflict and suffer that much more death and devastation, and all it will accomplish is making the world less stable and less safe for everyone.