A curious question from my kids sent Gemini into a hallucination.

Google's AI Overview can be 100% wrong, even when SERP is right.

1 min read LinkedIn

And it can trap you when you least expect it.

Today, my kids asked me, “What is Buah Long Long?” I quickly Googled it to show them images. The initial search was good: both the AI Overview and SERP correctly identified “Buah Long Long” as Ambarella Fruit.

However, when I tapped on “Benefits of Buah Long Long,” the AI Overview hallucinated, confidently stating that Buah Long Long is also known as longan. This led to a cascade of completely wrong information.

Ironically, the SERP results for “Benefits of Buah Long Long” were accurate. AI Overview is supposed to be grounded in Google Search, so why was it so far off?

This isn’t just about fruit; it’s a critical reminder about the reliability of information, especially from AI summaries.

I tested the same prompt, “Benefits of Buah Long Long,” on Perplexity, and it answered correctly. This incident, combined with my past experiences, suggests that Gemini still hallucinates more than the other frontier LLMs I use.

My takeaway: Always, always, always double-check facts presented by AI Overview.

What are the worst hallucinations you’ve encountered recently?

Technical observation:

Gemini hallucinates when I least expect it. But here’s an interesting observation: “Buah Long Long” and “Longan” share the substring “Long”. Could tokenization be confusing the model?
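As a rough illustration of that surface overlap (Gemini’s actual tokenizer is not public, so this is only a sketch, not its real subword segmentation): a character-trigram Jaccard check, using hypothetical helpers `char_ngrams` and `jaccard`, quantifies how much the two names share at the subword level.

```python
# Naive surface-similarity sketch: character-trigram Jaccard overlap.
# This does NOT reproduce any production tokenizer; it only shows that
# the two fruit names share subword-sized chunks like "lon" and "ong",
# the kind of overlap a BPE-style tokenizer could also produce.

def char_ngrams(text: str, n: int = 3) -> set[str]:
    """Return the set of lowercase character n-grams in `text` (spaces removed)."""
    text = text.lower().replace(" ", "")
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

grams_long_long = char_ngrams("Buah Long Long")
grams_longan = char_ngrams("Longan")

shared = grams_long_long & grams_longan   # {"lon", "ong"}
score = jaccard(grams_long_long, grams_longan)  # 2 shared / 10 total = 0.2
```

So even this crude measure finds real shared chunks between the two names; whether that overlap is what actually trips up the model is pure speculation on my part.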

#gemini #googleai #ai #google #artificialintelligence #chatgpt #perplexity #openai #genai #llm #hallucinations

Enjoyed this? Subscribe for more.

Practical insights on AI, growth, and independent learning. No spam.
