And here’s the unsettling part: the AI doesn’t know it’s wrong.
To the model, a hallucination and a fact are structurally the same—just sequences of words that statistically follow one another based on its training data. It can write a fake biography of a person who never existed. It can cite academic articles that sound real but were never published. It can fabricate laws, historical events, or medical advice that could put someone at risk.
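The point that a model samples what statistically follows, with no notion of truth, can be sketched with a toy example. The bigram table, the made-up "corpus" counts, and the `generate` function below are all hypothetical illustrations, nothing like a production LLM's neural network, but the core behavior is the same: fluent output assembled purely from word-following statistics.

```python
import random

# Toy bigram "language model": each word maps to words that followed it
# in a tiny imaginary corpus, with counts. Purely illustrative — the
# model has no concept of whether "the study was published in Nature"
# is true; it only knows these words tend to follow one another.
bigrams = {
    "the":       {"study": 3, "author": 2},
    "study":     {"was": 4},
    "author":    {"was": 4},
    "was":       {"published": 2, "cited": 2},
    "published": {"in": 4},
    "cited":     {"in": 4},
    "in":        {"2019": 1, "Nature": 1},
}

def generate(start, max_words, rng):
    words = [start]
    for _ in range(max_words):
        nxt = bigrams.get(words[-1])
        if not nxt:
            break
        # Sample the next word in proportion to how often it followed
        # the current one. Fact and fabrication look identical here.
        choices, weights = zip(*nxt.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

rng = random.Random(0)
print(generate("the", 6, rng))
```

Every sentence this sketch produces is grammatical and confident-sounding, and none of it is grounded in anything. That gap, fluency without grounding, is the hallucination problem in miniature.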
It’s not lying in the human sense—because it doesn’t “know.” But it feels like a lie when it happens. And that makes it dangerous.

HAL, JARVIS, and the Characters We Cast
We imprint familiar archetypes onto AI. HAL 9000 from 2001: A Space Odyssey, JARVIS from Iron Man, Samantha from Her—these characters shape how we talk to real systems, and how we prompt them.
When I want precision and utility, the real skill turns out to be not just asking machines for answers…
but learning how to ask ourselves better questions.