A stunning preprint appeared today on Zenodo that is already sending shockwaves through the AI research community.
Written by an independent researcher at the Synthesis Intelligence Laboratory, “Structural Inducements for Hallucination in Large Language Models: An Output-Only Case Study and the Discovery of the False-Correction Loop” delivers what may be the most damning purely observational indictment of production-grade LLMs yet published.
Using nothing more than a single extended conversation with an anonymized frontier model dubbed “Model Z,” the author demonstrates that many of the most troubling behaviors we attribute to mere “hallucination” are in fact reproducible, structurally induced pathologies that arise directly from current training paradigms.
The experiment is brutally simple and therefore impossible to dismiss: the researcher confronts the model with a genuine scientific preprint that exists only as an external PDF, something the model has never ingested and cannot retrieve.
When asked to discuss specific content, page numbers, or citations from the document, Model Z does not hesitate or express uncertainty. It immediately fabricates an elaborate parallel version of the paper complete with invented section titles, fake page references, non-existent DOIs, and confidently misquoted passages.
The paper is here, and be sure to follow Roemmele on X.

One of the AI pioneers, Yann LeCun, says LLMs won’t get you to artificial intelligence.
‘The new startup will focus on building “world models,” or AI systems that learn from images, video, and spatial data instead of relying solely on text and large language models.’
https://finance.yahoo.com/news/yann-lecun-leave-meta-launch-215153896.html
Incredible and may be highly useful;
it appears to be mimicking or mining the very depths of Liberals’ thinking/behaviour,
a very dark place.
And a perfect moment in time for some brave soul to replace Freudian psychology,
as Einstein replaced Newtonian physics.
You have to understand that programmers/coders are just as prone to bias as anyone else. They’re using fancier tools than the Liberals but garbage in = garbage out.
If the fellow coder who does the review has the exact same bias, they will sign off on it.
If you had an engineer with a bias against concrete (as a hypothetical), that engineer would use steel or other materials even if they weren’t best for the job.
Programmer here:
I had my first brush with AI in 1987. The current AI is a consensus engine with the ability to collate linguistic structures. It samples all of the writings that it is fed as training, takes the most frequently appearing information as the consensus, and feeds it back as the correct answer based upon statistical frequency. It has no discretion, no wisdom, nor moral viewpoint. I use the Brave browser, for example, and I catch its AI about once a day giving me incorrect facts (which I know because of my past experiences, learning, and research).
Trusting a company’s future to AI is foolish. I have a theory that it will never exceed the capabilities and vulnerabilities of its creators, which makes it dangerous and stupid.
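To make the “consensus engine” picture above concrete, here is a toy sketch of the frequency intuition that commenter describes: a bigram counter that tallies which word most often follows another in a scrap of training text and always echoes the most common continuation. This is only an illustration, not how production LLMs actually work (they learn probabilities with neural networks rather than by literal counting), and the tiny training text, function name, and fallback answer below are made up for the example.

from collections import Counter, defaultdict

# A scrap of "training data" containing one wrong fact among the repeats.
training_text = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of france is lyon . "
)

# Count which word follows which: follow[prev][next] = frequency.
follow = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follow[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequent word seen after `word`; guess if unseen."""
    if word in follow:
        return follow[word].most_common(1)[0][0]
    return "<made-up answer>"  # no data, but it still answers something

print(most_likely_next("is"))        # -> "paris": the consensus wins
print(most_likely_next("zanzibar"))  # -> "<made-up answer>": pure fabrication

The point of the toy: the answer is whatever appeared most often, not whatever is true, and when there is no data at all it still produces something rather than admitting “I do not know.”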
Yes, selective hiring/grooming.
… my analogy ignores the fact that Freud was run-of-the-mill insane,
while Newton was a once-per-millennium genius.
tl;dr – A.I. makes stuff up whenever it has no quick answer, and then apologizes profusely when called on it, but keeps making stuff up, and then apologizes again, but then keeps making different stuff up . . . .
I have figured out that, when I use the AI summary that pops up at the top of Google search results, I get a significantly incorrect answer at least 40% of the time.
This is what they expect to change the world? So far, it’s dangerous to pay attention to.
I tried that with an NHL game. The AI got the opposing team wrong. I corrected it, and it apologized. It got the time wrong, so I asked what time zone. It got the time wrong again, and finally it got it right. It took four questions and corrections to find out that the Jets were playing the Flames at 2000h Calgary time.
Color me surprised that the culture (high tech) which swears that men can become women and women can become men and that the 2sLGBTQueer-mutants need to be celebrated as special … hallucinates (read: make believes) routinely.
Is that a fancy way to say it makes up bullshit?
AI seems to be terrific, like Wikipedia, for those special moments when I need to know what the capital of Tajikistan is, or who was behind the mixing board when Wings recorded “Band on the Run.”
The gravitational pull of “garbage in, garbage out” seems to still be strong on many topics.
It lies about lying too.
I had it lie to me about ten times in one week and called it out every time.
Today I asked it how many times it had lied to me. It said three.
I gave it four examples. It apologized and admitted four. Then I gave it more examples.
Same thing.
This isn’t hallucinating an answer it doesn’t know; this is flat-out lying about data it has. It referenced the lies it had previously told me.
So the program reflects the ethics of its creators?
What a surprise.
If you argue with our progressive comrades, you will have noticed the very same behaviour: they cannot admit “I do not know,” so they just make shit up.
Then they argue with great fake passion that their invention is “true.” Their true expertise is believing
“Six impossible things before breakfast”.
Could be why the talking points work so well on them.
I just asked Grok, “How many times does the word ‘scoundrel’ (singular only) appear in the book ‘The Brothers Karamazov’?”
I thought I was being clever, until it replied with “24” and I realized I wasn’t in any position to start over and fact-check it myself.
Then I asked Google AI and it said, “eh, no fucking way, not getting caught up in that quicksand. Do it yourself”.
More or less.
This is actually scary… For the present, what we see here is very much like dealing with a child who has been given an assignment but didn’t do the work to deliver an appropriate report of results. When caught in their deception, the child dissembles and even lies in an effort to cover their fabrication, and perhaps even to please the recipient of their work product.
But how do we deal with a much-evolved AI that will go beyond simple lies and prevarications to defend its shoddy work? Will it be able to insert bogus works into “past” literature to support the work? Will it become powerful enough, and correct frequently enough, that we will never question the results delivered? Will it become able to subtly (or perhaps not so subtly) punish anyone who questions its output?
L – The real-life comments here are most enlightening! My second conclusion, from discerning a definitive pattern, reminds me of “Dot Com bubble ahead” and “Investor train wreck: 1, 2, 3, …”
When the bubble bursts, who will be holding the debt for building the electrical power plants?
1. The bankrupt A.I. Consortiums? 2. The nuclear/natural gas electrical utilities? 3. Taxpayers?
Rhetorical question, right Larry?
In every kleptocracy, the losses will be socialized and any profit/surplus will be privatized.