Also known as the Bullshit Asymmetry Principle

    • Track_Shovel@slrpnk.net · 2 months ago

      As another commenter says, pointing out that a statement is bullshit is sufficient. The burden of proof is on the shitter, for having the contrary opinion.

      • uienia@lemmy.world · 2 months ago

        In a fact-based discussion, sure, but bullshitting has other purposes: muddying the waters, creating distrust of media, and simply drowning out non-bullshit.

      • AwkwardLookMonkeyPuppet@lemmy.world · 2 months ago

        Except that people readily believe the lie and the liar, so you need to dispute it with data, and all the liar has to do is go “nuh uh!”. They’ll still influence people with zero effort.

      • jballs@sh.itjust.works · 1 month ago

        Kamala did this during the debate and it worked much better than I expected. She was basically like “yeah, everything this guy just said is bullshit. Anyway, here’s what I’m going to do.”

  • Pennomi@lemmy.world · 2 months ago

    If AI ever gets to the point where it can fact check in real time (with actual sources), it will completely change society. Unfortunately, it’s currently on the other side of the problem.

    • GenderNeutralBro@lemmy.sdf.org · 2 months ago

      If anything, it demonstrates that the law has mathematical validity. Fact-checking simply requires more work than making shit up. Even when AI gets to the point where it can do research and fact-check things effectively (which is bound to happen eventually), it’ll still be able to produce bullshit in a fraction of that time, and use that research ability to create more convincing bullshit.

      Fact-checking requires rigor. Bullshit does not. There’s no magic way to close that gap.

      However, most social media sites already implement rate limits on user submissions, so it might actually be possible to fact-check people’s posts faster than they are allowed to make them.
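The rate-limit point above can be sketched as a toy model (my own framing and numbers, not from the thread): give producing one bullshit post a cost of 1 effort unit and refuting it 10x that, per Brandolini's "order of magnitude". If the interval a rate limiter enforces between posts is at least as long as one fact-check, the backlog of unchecked posts stays at zero.

```python
# Toy model of Brandolini's law vs. a per-user rate limit.
# Assumption (mine): one bullshit post costs 1 effort unit to produce
# and AMPLIFICATION times as much to fact-check.

AMPLIFICATION = 10                  # Brandolini's "order of magnitude"
POST_COST = 1                       # effort units to produce one post
CHECK_COST = POST_COST * AMPLIFICATION

def backlog_after(posts, rate_limit_interval):
    """Unchecked posts remaining after `posts` submissions, given a rate
    limit that forces `rate_limit_interval` effort units between posts."""
    time_available = posts * rate_limit_interval
    checked = min(posts, time_available // CHECK_COST)
    return posts - checked

# Without a rate limit (interval == production cost), checkers fall behind:
print(backlog_after(100, POST_COST))    # 90 unchecked posts
# With an interval at least one fact-check long, they keep up:
print(backlog_after(100, CHECK_COST))   # 0
```

The model is deliberately crude (no parallel fact-checkers, no queueing), but it captures why the gap is structural: only the rate limit, not checker effort, closes it.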

    • Gonzako@lemmy.world · 2 months ago

      By just how AI works, it’ll never solve the problem. What we really need is Big Brother to tell us what’s right and what’s wrong.

  • sine@programming.dev · 2 months ago

    I have a few things in my reading backlog about bullshit. I think it tends to be trivialized in social discourse. It honestly feels like the patterns of bullshit exploit built-in biases we have.

    This is my future starting point for when I leave some room to this topic: https://en.wikipedia.org/wiki/On_Bullshit

  • Lvxferre@mander.xyz · 2 months ago

    Brandolini’s Law is great to keep in mind when discussing things online - because while you’re busy refuting one piece of bullshit, the bullshitter is pumping out nine more in its place, so arguing with obvious bullshitters is a lost cause.

    On the lighter side, pointing the bullshit out is considerably easier/faster than refuting it, and still useful - whoever is reading the discussion will notice it. As such, when you see clear signs of bullshit*, a good strategy is to point it out and then explicitly disengage.

    *such as distorting what others say, making assumptions, using certain obvious fallacies/stupidities, screeching when someone points out a fallacy, etc.

    • JaggedRobotPubes@lemmy.world · 2 months ago

      It can be very useful to pick just one element of a multi-part bullshit firework and refute the shit out of it, and then completely tune out the rest.

      Sometimes even just the quality of thinking comes across and does some work.

    • Zement@feddit.nl · 2 months ago

      Most bullshitters just copy talking points… they rarely defend them, just spew new ones. It’s like talking to ChatGPT to persuade it… futile, because the “source data” won’t be updated.

      This is why bullshitters hate AI and think it’s biased towards liberal/woke ideology. Spoiler: it’s not; it’s mostly the average of all people, which is: live and let live. (Idiotic to think of this “don’t tread on me” synonym as woke or liberal, but here we are.)

      • AwkwardLookMonkeyPuppet@lemmy.world · 2 months ago

        AI isn’t really the average of all people. It’s more like the average of all people on Reddit and other similar sources, so it does skew left. Microsoft took great care to eliminate hostile data from their training pool to avoid another Tay disaster.

        • Zement@feddit.nl · 2 months ago

          Everyone is skewed left, even Nazis, when it’s about their own life. It’s what they want for others that’s “right”… at least that is my impression.

          (Wasn’t the Microsoft AI trolled by 4chan?)