Researchers at Apple have come out with a new paper showing that large language models can’t reason — they’re just pattern-matching machines. [arXiv, PDF] This shouldn’t be news to anyone here. We …
It’s worth pointing out that it does happen to reconstruct information remarkably well considering it’s just likelihood. They’re pretty useful tools like any other — though it’s funny, of course, to watch Silicon Valley stumble all over each other chasing the next smartphone.
The only remarkable thing is how fucking easy it is to convince the median consumer that vaguely-correct-shape sentences are correct.
“remarkably well” as long as the remark is “this is still garbage!”