Detect hallucinations in your RAG pipeline output — in two lines of Python.
```bash
pip install hallucinationbench
```

Your RAG pipeline retrieves documents and passes them to an LLM. The LLM generates a response that sounds correct. But is every claim actually grounded in your context, or did the model fabricate some of it?
HallucinationBench answers that question instantly.
Install dependencies:
```bash
pip install openai python-dotenv
```

Set your OpenAI API key:
```
# .env
OPENAI_API_KEY=your_key_here
```

Run your first evaluation:
```python
from hallucinationbench import score
context = """
The Eiffel Tower is located in Paris, France. It was constructed between
1887 and 1889 as the entrance arch for the 1889 World's Fair.
The tower is 330 metres tall.
"""
response = """
The Eiffel Tower is in Paris. It was built in 1889 and stands 330 metres
tall. It was designed by Leonardo da Vinci and attracts over 7 million
visitors every year.
"""
result = score(context=context, response=response)
print(result)
```

Output:

```
Verdict : FAIL
Faithfulness : 0.40
Grounded claims (2):
✓ The Eiffel Tower is in Paris.
✓ It stands 330 metres tall.
Hallucinated claims (3):
✗ It was built in 1889.
✗ It was designed by Leonardo da Vinci.
✗ It attracts over 7 million visitors every year.
```
The result object exposes:

```python
result.faithfulness_score   # float, 0.0 – 1.0
result.grounded_claims      # list of supported statements
result.hallucinated_claims  # list of fabricated statements
result.verdict              # "PASS" | "WARN" | "FAIL"
result.model                # judge model used
```

Verdicts map to faithfulness thresholds:

| Verdict | Faithfulness Score |
|---|---|
| ✅ PASS | >= 0.8 |
| ⚠️ WARN | >= 0.5 |
| ❌ FAIL | < 0.5 |
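Because the verdict is a plain string, it drops straight into automated checks. Here is one way you might gate a test suite on groundedness; the test and its strings are illustrative, only `score()` and the result fields above come from the library:

```python
from hallucinationbench import score

CONTEXT = "The Eiffel Tower is 330 metres tall."
RESPONSE = "The Eiffel Tower is 330 metres tall and was designed by da Vinci."

def test_response_is_grounded():
    result = score(context=CONTEXT, response=RESPONSE)
    # On failure, the fabricated claims appear in the assertion message
    assert result.verdict == "PASS", result.hallucinated_claims
```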
Run the interactive demo locally:
```bash
streamlit run app.py
```

Paste any context and LLM response to get an instant hallucination report.
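If you want a feel for the wiring before opening `app.py`, a minimal Streamlit front end over `score()` looks roughly like this (a sketch, not the bundled demo):

```python
import streamlit as st
from hallucinationbench import score

st.title("HallucinationBench")
context = st.text_area("Context (retrieved documents)")
response = st.text_area("LLM response to check")

if st.button("Evaluate") and context and response:
    result = score(context=context, response=response)
    st.metric("Faithfulness", f"{result.faithfulness_score:.2f}")
    st.subheader(f"Verdict: {result.verdict}")
    for claim in result.grounded_claims:
        st.markdown(f"✓ {claim}")
    for claim in result.hallucinated_claims:
        st.markdown(f"✗ {claim}")
```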
Under the hood:

- Your `context` and `response` are sent to `gpt-4o-mini` as a structured judge prompt.
- The judge breaks the response into individual factual claims.
- Each claim is classified as grounded (supported by context) or hallucinated (absent or contradicted).
- A faithfulness score is calculated as `grounded_claims / total_claims` (in the example above, 2 / 5 = 0.40).
- A verdict of PASS, WARN, or FAIL is assigned.
The judge uses `response_format={"type": "json_object"}` to guarantee structured output, and temperature is set to 0 for deterministic results. Each evaluation runs on `gpt-4o-mini`.
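For orientation, a stripped-down version of that judge call might look like the following. The prompt wording and JSON field names here are illustrative assumptions, not the library's actual internals:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical judge prompt; the real one lives in scorer.py
JUDGE_PROMPT = (
    "Split the response into atomic factual claims. Mark each claim grounded "
    "if the context supports it, otherwise hallucinated. Reply as JSON: "
    '{"grounded": [...], "hallucinated": [...]}'
)

def judge(context: str, response: str) -> dict:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,                            # deterministic results
        response_format={"type": "json_object"},  # structured output
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": f"Context:\n{context}\n\nResponse:\n{response}"},
        ],
    )
    claims = json.loads(completion.choices[0].message.content)
    total = len(claims["grounded"]) + len(claims["hallucinated"])
    claims["faithfulness"] = len(claims["grounded"]) / total if total else 1.0
    return claims
```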
Typical cost: ~$0.001 per evaluation (about a tenth of a cent).
On the roadmap:

- Batch evaluation across multiple context/response pairs (see the interim sketch after this list)
- CSV upload support in the Streamlit app
- Custom judge model selection
- LangChain and LlamaIndex integration hooks
- CI/CD integration example (GitHub Actions)
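Until batch mode lands, a plain loop over `score()` covers the same ground. The pair data below is placeholder:

```python
from hallucinationbench import score

pairs = [
    ("Context one ...", "Response one ..."),
    ("Context two ...", "Response two ..."),
]

results = [score(context=c, response=r) for c, r in pairs]
failures = [r for r in results if r.verdict == "FAIL"]
print(f"{len(failures)}/{len(results)} responses failed the faithfulness check")
```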
Project layout:

```
hallucinationbench/
├── hallucinationbench/
│   ├── __init__.py       # public API
│   ├── scorer.py         # GPT-4o-mini judge
│   └── models.py         # ScoreResult dataclass
├── app.py                # Streamlit demo
├── example.py            # quickstart example
├── requirements.txt
├── .env.example
└── README.md
```
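For reference, `ScoreResult` in `models.py` plausibly has a shape like the following; only the five documented fields are taken from this README, the rest is a sketch:

```python
from dataclasses import dataclass

@dataclass
class ScoreResult:
    faithfulness_score: float        # 0.0 – 1.0
    grounded_claims: list[str]       # claims supported by the context
    hallucinated_claims: list[str]   # claims absent from or contradicting it
    verdict: str                     # "PASS" | "WARN" | "FAIL"
    model: str                       # judge model used
```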
MIT — free to use, modify, and distribute.
Pull requests are welcome. Please open an issue first to discuss what you would like to change.
Built with OpenAI GPT-4o-mini as the hallucination judge.