LLM Hallucination
An LLM hallucination is a confident but factually incorrect or fabricated output produced by a large language model, such as invented citations, nonexistent people, or false historical claims. It stems from the model predicting plausible-sounding text rather than retrieving verified information.
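Because the model optimizes for plausible continuations rather than verified facts, fabricated details such as invented citations are typically caught only by checking the output against sources outside the model. The sketch below is a minimal illustration of one such check, written in Python; the paper titles and the KNOWN_SOURCES list are invented for the example, not real references. Any quoted title the model cites that is not in the supplied reference list gets flagged for review instead of being accepted at face value.

```python
# Minimal sketch: flag possibly hallucinated citations by comparing a model's
# answer against the sources we actually supplied. All titles below are
# invented for illustration.
import re

KNOWN_SOURCES = {
    "Attention Is All You Need",
    "Language Models are Few-Shot Learners",
}

def find_unverified_citations(answer: str) -> list[str]:
    """Return quoted titles in the answer that are not in our source list."""
    cited = re.findall(r'"([^"]+)"', answer)  # titles the model quoted
    return [title for title in cited if title not in KNOWN_SOURCES]

model_answer = (
    'This result was first shown in "Attention Is All You Need" and later '
    'extended in "A Unified Theory of Transformer Hallucination" (2021).'
)

# The second title is not in our reference list, so it is surfaced as a
# possible hallucination rather than treated as a real citation.
print(find_unverified_citations(model_answer))
# ['A Unified Theory of Transformer Hallucination']
```

A check like this only verifies that a citation exists in a known set; confirming that the cited source actually supports the claim requires a separate grounding step.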
