CodyIT@programming.dev to Programmer Humor@programming.dev · 5 months ago
the beautiful code
Jtotheb@lemmy.world · 5 months ago
That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.
CanadaPlus@lemmy.sdf.org · edited 5 months ago
You can devise a task it couldn’t have seen in the training data, I mean.

> You don’t even have access to the “thinking” side of the LLM.

Obviously, that goes for the natural intelligences too, so it’s not really a fair thing to require.