Ranked: AI Hallucination Rates by Model
November 27, 2025
By Jenna Ross, Graphics & Design
The following content is sponsored by Terzo

Key Takeaways
- Many of today's AI models struggled when asked to identify and cite news sources from an excerpt, producing frequent errors.
- The highest overall AI hallucination rate was 94% for Grok-3, indicating nearly all its answers were incorrect.
Does your AI always give you the right answer? Unfortunately, its “truth” may be an illusion.
This infographic breaks down AI hallucination rates by model. It’s a preview of the brand-new executive guide from Terzo and Visual Capitalist, AI’s Illusion of Truth: The Data Behind AI Errors.
What are AI Hallucinations?
An “AI hallucination” refers to cases where a language model presents information as fact even though it is false or ungrounded.
These hallucinations happen because standard training systems reward guessing over showing uncertainty. Think about it this way: if you guess on a multiple-choice test, you at least have a chance of getting the answer right, whereas leaving it blank earns nothing.
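To make that incentive concrete, here is a minimal back-of-the-envelope sketch (our illustration, not part of the study): under accuracy-only scoring, a blind guess on a four-option question has positive expected value, while abstaining scores zero.

```python
# Illustration only: accuracy-only scoring rewards guessing over abstaining.
options = 4  # hypothetical four-option multiple-choice question

expected_score_guess = 1 / options  # a blind guess is right 1 time in 4
expected_score_abstain = 0.0        # "I don't know" never scores

print(f"Guessing:   expected score = {expected_score_guess:.2f}")   # 0.25
print(f"Abstaining: expected score = {expected_score_abstain:.2f}")  # 0.00
```

A model optimized against scores like these will always prefer the confident guess, which is exactly the behavior that surfaces as a hallucination.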
AI Hallucination Rates: The Best and Worst Models
To measure AI hallucination rates, researchers presented models from leading AI companies with news excerpts. They then asked the models to identify the original article, publication, and URL.
Notably, the researchers specifically chose excerpts that, if pasted into a traditional Google search, returned the original source within the first three results.
The models’ responses were then checked for accuracy. Below, the table shows how often each model got an answer partially or entirely incorrect.
| AI Model | Hallucination Rate |
| --- | --- |
| Perplexity | 37% |
| Copilot | 40% |
| Perplexity Pro | 45% |
| ChatGPT Search | 67% |
| Deepseek Search | 68% |
| Gemini | 76% |
| Grok-2 Search | 77% |
| Grok-3 Search | 94% |
Source: Columbia Journalism Review, March 2025. Responses where no answer was provided were not counted as hallucinations.
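For readers who want to reproduce the arithmetic, the sketch below shows one plausible way a rate like those in the table could be computed; the grade labels and data are invented for illustration and are not the CJR study's actual rubric. Partially or entirely incorrect answers count as hallucinations, while declined answers do not.

```python
# Hypothetical sketch, not the CJR methodology code: labels and data are
# made up for illustration.
from collections import Counter

grades = [
    "correct", "partially_incorrect", "incorrect", "no_answer",
    "correct", "incorrect", "partially_incorrect", "correct",
]

counts = Counter(grades)
# One reading of the note above: "no answer" responses are simply not
# counted as hallucinations.
hallucinated = counts["partially_incorrect"] + counts["incorrect"]
rate = hallucinated / len(grades)

print(f"Hallucination rate: {rate:.0%}")  # 50% on this toy sample
```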
Grok-3 had the worst performance, hallucinating 94% of the time. Perplexity, by contrast, delivered the most accurate answers.
Surprisingly, paid models fared worse than their free counterparts. Most models also failed to express any uncertainty in their answers, despite frequent errors.
Risks & Implications for Business Leaders
For company executives, the takeaway is clear. It’s risky to take an AI model’s answers at face value. Assuming output is accurate without verification can lead to many negative outcomes:
- Reputational damage
- Financial losses
- Legal exposure
With AI agents, where every action builds on the last, the consequences of AI hallucination can compound quickly. That’s why leaders need strategies to keep humans in the loop, verify output, and use a model that’s built on trusted company data.
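A rough, hypothetical calculation shows how quickly this compounding bites: even a step that is right 90% of the time leaves a long agent workflow with poor odds of finishing cleanly.

```python
# Illustrative assumption: each agent step is 90% accurate and steps are
# independent, so errors compound multiplicatively across the workflow.
per_step_accuracy = 0.90
steps = 10

chain_accuracy = per_step_accuracy ** steps
print(f"Chance all {steps} steps are correct: {chain_accuracy:.0%}")  # ~35%
```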
See the data behind AI's errors and how to get 99% accuracy in the free executive guide, AI's Illusion of Truth.