Definition
The hallucination test measures the extent to which the generated answer contains information that is not supported by or contradicts the given context. This metric is essentially the complement of faithfulness, identifying when your LLM generates unsupported or fabricated information.
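As a rough illustration of the complement relationship, a response that scores 0.8 on faithfulness would score around 0.2 on hallucination; the exact correspondence depends on how each metric is computed, so treat this as conceptual rather than strictly arithmetic.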
Taxonomy
- Task types: LLM.
- Availability:
Why it matters
- Hallucination detection is critical for maintaining trust and accuracy in AI-generated responses, especially in high-stakes applications.
- This metric helps identify when your model is making up facts, providing unsupported claims, or contradicting the given context.
- It’s essential for RAG (Retrieval-Augmented Generation) systems where responses should be strictly grounded in the provided information.
- Lower hallucination scores indicate better adherence to factual accuracy and context consistency.
Required columns
To compute this metric, your dataset must contain the following columns:
- Outputs: The generated answer/response from your LLM
- Context: The provided context or background information
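For illustration, a single dataset row providing these two columns could look like the sketch below; the column keys `output` and `context` are assumed names, so match them to whatever your dataset actually uses.

```json
{
  "output": "The Eiffel Tower is 330 meters tall and was completed in 1889.",
  "context": "The Eiffel Tower, completed in 1889, stands 330 meters tall on the Champ de Mars in Paris."
}
```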
Test configuration examples
If you are writing a tests.json file, here are a few valid configurations for the hallucination test:
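The sketch below shows what such configurations might look like. The field names (`name`, `type`, `operator`, `threshold`) are assumptions made for illustration; check the tests.json schema reference for the exact fields your version expects. Because lower hallucination scores are better, the tests assert that the score stays at or below a threshold.

```json
[
  {
    "name": "Low hallucination on all responses",
    "type": "hallucination",
    "operator": "<=",
    "threshold": 0.2
  },
  {
    "name": "No hallucination tolerated",
    "type": "hallucination",
    "operator": "<=",
    "threshold": 0.0
  }
]
```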
Related
- Faithfulness test - Measure factual consistency with context (complement of hallucination).
- Groundedness test - Ensure responses are grounded in provided context.
- Context utilization test - Evaluate how well context is used.
- Answer correctness test - Measure factual accuracy against ground truth.
- Aggregate metrics - Overview of all available metrics.