Why would the steps be literal when everything else is bullshit? Obviously the ‘reasoning’ steps are AI slop too.
It’s bullshitting. That’s the word. Bullshitting is saying things without a care for how true they are.
The word “bullshitting” implies a clarity of purpose I don’t want to attribute to AI.
It’s a distinction without much practical difference: LLMs are a tool built to ease the task of bullshitting, used by bullshitters to produce bullshit.