
"LLMs are not designed to be truthful, but to ensure that the narrative 'makes sense' in any context. Given a context, LLMs are trained to generate what should come next in the developing narrative. Confabulations—plausible-sounding distortions or fabrications—are part of their repertoire, regardless of whether they correspond to truth or facts in our world."
"One of the primary functions of language is to imagine and enable the creation of ideas that have never been expressed before. LLMs do that easily, even when the context has nothing in common with the data on which they were trained. The narrative always makes sense in any context because the machine has learned some general structures of language that transfer to new situations."
"Machine learning researcher Léon Bottou argues that LLMs are essentially fiction machines that can be remarkably good at conversing about new situations that are far removed from their training data."
Modern AI language models achieve remarkable linguistic fluency but prioritize narrative coherence over truthfulness, generating confabulations that sound plausible regardless of factual accuracy. LLMs can reason about contexts absent from their training data by applying general language structures, particularly compositionality—the principle that complex expressions derive meaning from their component parts and combinations. This capability enables LLMs to generate novel ideas and engage meaningfully in unfamiliar situations. However, their strength in creating coherent narratives masks fundamental limitations when confronting theories requiring new conceptual frameworks, as these demand symbolic and causal reasoning beyond pattern recognition.
Read at Psychology Today