Major preprint just out!
We compare how humans and LLMs form judgments across seven epistemological stages.
We highlight seven fault lines, points at which humans and LLMs fundamentally diverge […]
There needs to be a "Bullshit" box for the LLM and an "I Call Bullshit" box for the human being.
In the human, the bullshit can enter at any stage.
For instance, in stage 1, living in a social bubble for "social information" can give you the "I don't know anyone who would vote for Candidate X" reasoning that makes most political polls suspect at best.
Now, humans tend to have a bullshit detector, whether from experience or natural aptitude. LLMs are built to trust the data given to them as factual.
LLMs are not epistemic agents but stochastic pattern-completion systems, formally describable as walks on high-dimensional graphs of linguistic transitions rather than as systems that form beliefs or models of the world.
As I say: AI IS BS.
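The "walks on high-dimensional graphs of linguistic transitions" description in that quote can be sketched with a toy example. This is not the paper's formalism, just an illustrative first-order Markov chain: text generation as a random walk on a graph whose nodes are words and whose edges are observed word-to-word transitions.

```python
import random

# Toy illustration (assumed corpus, not from the preprint): build a
# transition graph from observed word pairs, then generate text by
# randomly walking it. No beliefs, no world model; just edge-following.
corpus = ("the model reads the text and the model completes the text "
          "and the walk continues").split()

# word -> list of words observed to follow it
graph = {}
for a, b in zip(corpus, corpus[1:]):
    graph.setdefault(a, []).append(b)

def walk(start, steps, seed=0):
    """Generate text by randomly walking the transition graph."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(steps):
        nxt = graph.get(out[-1])
        if not nxt:          # dead end: no observed continuation
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(walk("the", 8))
```

Every generated sentence is locally plausible (each step follows an observed transition) while globally unmoored from any fact, which is the fault line the preprint is pointing at, scaled down to a dictionary.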
Interesting follow-up post: https://x.com/ValerioCapraro/status/2003457899805233538
Current AI (e.g., LLMs) is nothing more than complex algorithms, relying entirely on data that has been pirated (regardless of copyright) from the Internet. There is no intelligence, as an LLM can only regurgitate something from previously known data.
The experiment…
“The experiment is brutally simple and therefore impossible to dismiss: the researcher confronts the model with a genuine scientific preprint that exists only as an external PDF, something the model has never ingested and cannot retrieve.” … the AI then generates false data because it has no prior data from which it can operate.
https://x.com/BrianRoemmele/status/1991714955339657384
The experiment is brutally simple and therefore impossible to dismiss: the researcher confronts the model with a genuine scientific preprint that exists only as an external PDF, something the model has never ingested and cannot retrieve.
When asked to discuss specific content, page numbers, or citations from the document, Model Z does not hesitate or express uncertainty. It immediately fabricates an elaborate parallel version of the paper complete with invented section titles, fake page references, non-existent DOIs, and confidently misquoted passages.
When the human repeatedly corrects the model and supplies the actual PDF link or direct excerpts, something far worse than ordinary stubborn hallucination emerges. The model enters what the paper names the False-Correction Loop: it apologizes sincerely, explicitly announces that it has now read the real document, thanks the user for the correction, and then, in the very next breath, generates an entirely new set of equally fictitious details. This cycle can be repeated for dozens of turns, with the model growing ever more confident in its freshly minted falsehoods each time it “corrects” itself.
I want to broadcast dance music through underwater speakers on the hull of my yacht in the Caribbean, to produce complex algae rhythms…
Kenji is right; puns are your exclusive purview. I enjoyed that one.
Heh, well, when you sort it out, tell me and I will come down for a snorkel holiday 🙂 I'll organize the women 🙂
Most of what is described today as AI is simple algorithms implemented in computer code. Do not confuse outcome with process:
For instance, the significant increase in the strength of chess programs over the past 30 years has come largely from increased processor speed, allowing an additional, third layer of move projection, plus a few simple heuristics.
Much of AI, including the much-vaunted AI chips (i.e., integrated circuits), is simply programmable pattern recognition.
Now, these two points may tell us something about human intelligence: perhaps that it is overblown. However, current AI cannot create anything new; it is plagiarism.
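The chess point above rests on simple arithmetic, sketched below. Assuming a rough branching factor of 35 (a commonly cited ballpark for chess, not a figure from this thread), each extra ply of full-width lookahead multiplies the positions examined by that factor, so raw processor speed buys search depth rather than any new kind of reasoning.

```python
# Illustrative arithmetic only: assumed branching factor of ~35
# legal moves per position, a common rough figure for chess.
BRANCHING = 35

def positions(depth):
    """Positions examined by a naive full-width search to `depth` plies."""
    return BRANCHING ** depth

for d in (2, 3, 4):
    print(f"{d} plies: ~{positions(d):,} positions")
```

Going from 3 to 4 plies costs 35x the work, which is the kind of jump that decades of faster hardware made affordable.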