• 2 Posts
  • 140 Comments
Joined 2 years ago
Cake day: January 20th, 2023

  • I hope like hell the sets of questions were randomized, because if they weren’t, it means the surveyors tweaked them beforehand to try to force a particular result.

    Like, the AI question was paired with some incredibly crappy options like “A browser that runs 2x slower than your current browser”. Obviously they’d want you to mark that option as least wanted and leave the AI development alone (if that wasn’t a randomized grouping).

    Similarly, in the later questions it looked like they were trying to decide which feature to sacrifice in support of AI dev, because all three were things I enjoy much more than AI, yet I had to rate one as least wanted.

    EDIT: OK, thanks for all the responses, everyone! Looks like pairing AI with the 2x-slower option was just a bad random draw that induced extreme paranoia on my part. Very happy to hear that.




  • I totally agree that both terms imply intent, but IMHO “hallucinating” suggests not only more agency than an LLM has, but also less culpability. Like, “Aw, it’s sick and hallucinating, otherwise it would tell us the truth.”

    Whereas calling it a bullshit machine still implies more intentionality than an LLM is capable of, but it at least skews the perception of that intent toward “It’s making stuff up,” which seems closer to the actual mechanics of an LLM to me.

    I also love that the researchers took the time not only to provide the technical definition of bullshit but also to sub-categorize it, lol.