Tea@programming.dev to Technology@lemmy.world · English · 5 months ago
Reasoning models don't always say what they think.
www.anthropic.com
cross-posted to: ai_@lemmy.world, hackernews@lemmy.bestiver.se
A_A@lemmy.world · English · 5 months ago
I like this part:
"There's no specific reason why the reported Chain-of-Thought must accurately reflect the true reasoning process; there might even be circumstances where a model actively hides aspects of its thought process from the user."