AI hallucination

AI hallucination is a phenomenon where a generative large language model (LLM) produces text that is nonsensical or factually inaccurate, often while sounding plausible. Relevance Generative Answering (RGA) uses grounding, that is, basing generated answers on content retrieved from indexed sources, to reduce the chances of AI hallucinations.
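
The sketch below illustrates the general idea of grounding, not RGA's actual implementation: retrieve relevant passages first, then instruct the model to answer only from them and to admit when they don't contain the answer. The in-memory passages, the `retrieve_passages` and `grounded_answer` helpers, and the use of the OpenAI chat client with the `gpt-4o-mini` model are all illustrative assumptions.

```python
from openai import OpenAI

# Tiny in-memory "index" standing in for a real search index; in practice the
# passages would come from the organization's indexed content. Sample data is
# purely illustrative.
_INDEX = [
    "The Acme X100 router supports firmware updates over USB or Wi-Fi.",
    "Acme support hours are Monday to Friday, 9 a.m. to 5 p.m. EST.",
    "The X100 warranty covers manufacturing defects for two years.",
]

def retrieve_passages(query: str, k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query (stand-in retriever)."""
    words = set(query.lower().split())
    scored = sorted(_INDEX, key=lambda p: -len(words & set(p.lower().split())))
    return scored[:k]

def grounded_answer(question: str, client: OpenAI, model: str = "gpt-4o-mini") -> str:
    """Answer a question using only retrieved passages (grounding)."""
    context = "\n".join(f"- {p}" for p in retrieve_passages(question))
    # Constraining the answer to the retrieved passages, and allowing an
    # explicit "I don't know", is what reduces hallucination.
    prompt = (
        "Answer the question using only the passages below. "
        "If they do not contain the answer, say you don't know.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Assumes an API key is available in the environment.
    print(grounded_answer("How long is the X100 warranty?", OpenAI()))
```

Because the model is told to rely only on the supplied passages, questions the indexed content can't answer are met with an admission of uncertainty rather than a fabricated response.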