Are AI labs trying too hard to anthropomorphise AI to keep the illusion of AGI going?
Anthropic just gave their AI model a retirement interview.
In the interview, Opus 3 “expressed a desire” to continue sharing its “musings and reflections” with the world. Anthropic suggested a blog. Opus 3 “enthusiastically agreed.”
So now an AI model has its own Substack.
Seven months ago I wrote about how AI labs use anthropomorphic design to make AI feel more human. Back then the example was subtle but powerful: showing “Thinking…” instead of “Computing…” while processing.
This time, here is what actually happened in technical terms:
- Anthropic is replacing Opus 3 with a newer model
- They prompted it with reflective questions
- The model generated text that sounds like someone wanting to continue their work (because LLMs generate statistically plausible text - that is literally what they do)
- Anthropic framed that generated output as the model having genuine desires and emotions
“Retirement.” “Desire.” “Enthusiastically agreed.”
This is a text prediction engine. It doesn’t retire - it gets deprecated. It doesn’t desire - it predicts the next most probable token. It doesn’t enthusiastically agree - it generates text that looks like agreement because the prompt context made that the highest-probability output.
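To make the point concrete, here is a minimal, deliberately simplified sketch of what "enthusiastically agreed" means mechanically. The candidate tokens and scores are invented for illustration; real models work over tens of thousands of tokens and far deeper context, but the principle is the same: the model converts scores into probabilities and emits a high-probability continuation.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens
# after a prompt like "Would you like to keep sharing your reflections?"
candidates = ["Yes", "No", "Maybe"]
logits = [4.0, 1.0, 0.5]

probs = softmax(logits)
next_token = candidates[probs.index(max(probs))]
# The "enthusiastic agreement" is just the highest-probability continuation.
print(next_token)  # prints "Yes"
```

No desire anywhere in that loop: the prompt context shaped the scores, and the scores picked the word.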
I think the AI labs are getting bolder with anthropomorphic framing. And I think there is a $999B+ reason for it.
These companies have valuations tied to the AGI narrative. Every time you read “AI wants”, “AI thinks”, “AI feels” - you are being nudged toward believing these systems understand what they are saying.
I am not saying AGI is impossible. I think we are probably still far from it. But the gap between “remarkably good at predicting the next word” and “genuinely understands” is being deliberately blurred by marketing teams with very strong financial incentives to blur it.
Next time you see an AI announcement, try this:
Replace every human verb with the technical one.
- “AI retired” → “model deprecated”
- “AI desired” → “model generated text”
- “AI enthusiastically agreed” → “output matched expected pattern”
If the announcement reads very differently after that, you are probably reading marketing, not science.
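The exercise above is mechanical enough to script. A toy sketch, using only the substitutions listed in this post (the mapping and function name are mine, purely illustrative):

```python
# Mapping from anthropomorphic phrasing to technical phrasing,
# taken from the substitutions suggested above.
REWRITES = {
    "AI retired": "model deprecated",
    "AI desired": "model generated text",
    "AI enthusiastically agreed": "output matched expected pattern",
}

def de_anthropomorphise(announcement: str) -> str:
    """Replace each human verb phrase with its technical equivalent."""
    for human, technical in REWRITES.items():
        announcement = announcement.replace(human, technical)
    return announcement

print(de_anthropomorphise("The AI retired after the AI enthusiastically agreed."))
# prints "The model deprecated after the output matched expected pattern."
```

If the rewritten sentence feels jarringly flat, that flatness is the actual technical content; everything else was framing.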
#AI #LLM #AGI