TrustedFeline [she/her, comrade/them]

  • 3 Posts
  • 44 Comments
Joined 2 months ago
Cake day: March 19th, 2025

  • Feels like we’re talking past each other.

    Martin Luther convinced people of ideas. AI slop doesn’t. The printing press accelerated the growth of literacy compared to the years before it. AI slop is accelerating the decline of literacy (in the broad sense, including things like media literacy). I think we agree on those points.

    I think those points are enough to make the argument that LLM chatbots are the inverse of the printing press. It’s a vibe-based assessment built on the points we agree on. This isn’t math; there’s no exact definition of (printing press)^-1. You disagree with the vibe, but we’re pretty much in agreement on everything else.


  • The problem with this is that LLM allows you to make slop to a much greater scale than ever before.

    Yeah, and the printing press allowed weirdos like Martin Luther to spread ideas faster. You’re just repeating what I said. That’s exactly what I meant by “The LLM has enabled slop to be produced at a much greater scale than before” and “it’s a question of degree.”

    The idea that there was ever a “truth world” and a “post truth” world that LLMs have created is ignorant

    I didn’t say or imply that!!

    Again this is the general problem of epistemology. It has no answer.

    Yes, and the whole point of the paragraph is that I’m not talking about “truth”. I specifically said that I don’t know what to think of some platonic ideal of “truth”, scare quotes and all. The whole point of that paragraph is that I’m not talking about heady epistemology. I’m just talking about social phenomena and human behavior. That’s why I talked about “human thought”. That’s why I talked about “information”, which is a separate concept from “truth”. I was specifically avoiding talking about “truth”, but you brought it up

    You can literally make the same argument about the printing press itself, that it enabled greater forgery to happen than what was possible prior. Which was my point in the original reply.

    Yes, forgeries carry information. I talked about information and models in my first post, not truth.

    Yes, LLMs have strained (broken?) the already austere and unrealistic expectations of academic systems across all levels. We should be leveraging that to reforge these systems to better serve students and educators alike rather than trying to get the cat back in the bag regarding LLMs.

    We should be dismantling the industrial chatbots, and redirecting all that compute towards something useful. Get that cat in the fucking bag. I feel like we basically agree on everything, you’re just seeing an epistemological argument where there is none. I was trying to avoid epistemology from the start.

    Maybe TF talked about “post-truth” and I just don’t recall? But I’m pretty sure they were talking about information as well.


  • because it gives the general problem of epistemology a year zero date of November 30th, 2022.

    No it doesn’t. It’s just pointing out that a slop machine was invented around then. The printing press enabled information to be shared at a much greater scale than before. The LLM has enabled slop to be produced at a much greater scale than before. It’s a question of degree.

    Creating reliable records of “human thought” doesn’t matter because the problem isn’t one of what do people think, it’s what is the actual truth.

    And how is truth determined? What do you call a truth that nobody believes? Global warming is happening whether or not people believe in it. But it could have been avoided if more people had believed in climate change, and also believed it was worth taking action against. IDK what to think about some platonic ideal of “truth”

    There is no place in academia in which an LLM would be a reliable store of information because it’s a statistical compilation not a deterministic primary source, secondary or tertiary source.

    Do you not see how the general public is actually using LLMs? It’s bleeding into academia, too.




  • Trash Future repeatedly makes the point that AI chat bots are the inverse of the printing press. The printing press created a way for information to be reliably stored, retrieved, and exchanged. It created a sort of ecosystem where ideas (including competing ideas) could circulate in society.

    Chat bots do the opposite. They basically destroy the reliable transmission of information and ideas. Instead of creating reliable records of human thought (models, stories, theories, etc.), they’re black boxes that randomly mess with averages. It’s so fucking harmful