AI Hallucination
In the context of AI, a hallucination is a confident response generated by a large language model (LLM) that does not align with its training data or with real-world facts. It occurs when the model "invents" information to fill gaps in its knowledge, often because authoritative, structured source data was missing or unclear.
Why Hallucinations Are a Brand Risk
AI hallucinations pose serious risks to businesses. An LLM might invent a fake discount code for your store, misquote your return policy, attribute a competitor's feature to your product, or cite an outdated price. These fabrications damage customer trust and can create legal liability. The root cause is usually missing or poorly structured data—when an AI can't find clear, authoritative information, it fills gaps with probabilistic guesses. The primary defense is structured data via JSON-LD and Knowledge Graphs. By explicitly declaring facts in machine-readable formats, you give AI models clear, verifiable information to cite instead of forcing them to hallucinate answers.
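The sketch below (Python, with hypothetical product and policy values) shows one way to express such facts: it builds schema.org Product markup as JSON-LD and prints it as a script tag that could be embedded in a product page. The vocabulary (Product, Offer, MerchantReturnPolicy) comes from schema.org; the name, SKU, price, and return window are placeholders, not real data.

```python
import json

# A minimal sketch of schema.org Product markup expressed as JSON-LD.
# The name, SKU, price, and return window are hypothetical placeholders;
# the types and properties (Product, Offer, MerchantReturnPolicy) come
# from the schema.org vocabulary.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "sku": "EXW-001",
    "offers": {
        "@type": "Offer",
        "price": "49.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        # Declaring the return policy explicitly leaves nothing for a model to guess.
        "hasMerchantReturnPolicy": {
            "@type": "MerchantReturnPolicy",
            "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
            "merchantReturnDays": 30,
        },
    },
}

# Embed the markup in the product page so crawlers and AI systems can read it.
print('<script type="application/ld+json">')
print(json.dumps(product_jsonld, indent=2))
print("</script>")
```

Because the price, availability, and return window are declared explicitly, a crawler or AI assistant reading the page has an authoritative value to quote rather than a gap to fill.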
Factual AI Response vs. Hallucination
Real-World Impact
Without structured data:
1. A user asks the chatbot about a discontinued product.
2. The AI hallucinates: "Product X is available for $49.99."
3. The customer orders, discovers the truth, and demands a refund.

With structured data:
1. The Product schema includes "availability": "Discontinued" (see the markup sketch after this comparison).
2. The AI correctly states that Product X has been discontinued.
3. The customer gets accurate information and explores alternatives.
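A minimal sketch of the fix described above, assuming a hypothetical "Product X" page: marking the offer with schema.org's Discontinued availability value gives the assistant an explicit fact to cite instead of a guessed price and stock status.

```python
import json

# Hypothetical markup for the discontinued "Product X" from the scenario.
# "Discontinued" is a value of schema.org's ItemAvailability enumeration.
discontinued_product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Product X",
    "offers": {
        "@type": "Offer",
        "availability": "https://schema.org/Discontinued",
    },
}

print(json.dumps(discontinued_product, indent=2))
```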