Are AI labs trying too hard to anthropomorphise AI to keep the illusion of AGI going?
Anthropic just gave their AI model a retirement interview.
In the interview, Opus 3 “expressed a desire” to continue sharing its “musings and reflections” with the world. Anthropic suggested a blog. Opus 3 “enthusiastically agreed.”
So now an AI model has its own Substack.
Seven months ago I wrote about how AI labs use anthropomorphic design to make AI feel more human. Back then the example was subtle but powerful: showing “Thinking…” instead of “Computing…” while processing.
This time, here is what actually happened in technical terms:
- Anthropic is replacing Opus 3 with a newer model
- They prompted it with reflective questions
- The model generated text that sounds like someone wanting to continue their work (because LLMs generate statistically plausible text - that is literally what they do)
- Anthropic framed that generated output as the model having genuine desires and emotions
“Retirement.” “Desire.” “Enthusiastically agreed.”
This is a text prediction engine. It doesn’t retire - it gets deprecated. It doesn’t desire - it predicts the next most probable token. It doesn’t enthusiastically agree - it generates text that looks like agreement because the prompt context made that the highest-probability output.
I think the AI labs are getting bolder with anthropomorphic framing. And I think there is a $999B+ reason for it.
These companies have valuations tied to the AGI narrative. Every time you read “AI wants”, “AI thinks”, or “AI feels”, you are being nudged toward believing these systems understand what they are saying.
I am not saying AGI is impossible. I think we are probably still far from it. But the gap between “remarkably good at predicting the next word” and “genuinely understands” is being deliberately blurred by marketing teams with very strong financial incentives to blur it.
Next time you see an AI announcement, try this:
Replace every human verb with the technical one.
- “AI retired” → “model deprecated”
- “AI desired” → “model generated text”
- “AI enthusiastically agreed” → “output matched expected pattern”
If the announcement reads very differently after that, you are probably reading marketing, not science.
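The exercise above is essentially a find-and-replace. Here is a minimal sketch of it in code, using only the phrase table from this post (the headline is a hypothetical example, not a quote from any announcement):

```python
# De-anthropomorphise an AI announcement by swapping human verbs
# for technical descriptions. Phrase table taken from the examples above.
REPLACEMENTS = {
    "AI enthusiastically agreed": "output matched expected pattern",
    "AI retired": "model deprecated",
    "AI desired": "model generated text",
}

def de_anthropomorphise(text: str) -> str:
    # Apply longer phrases first so "AI enthusiastically agreed"
    # is not partially matched by a shorter rule.
    for human, technical in REPLACEMENTS.items():
        text = text.replace(human, technical)
    return text

headline = "The AI enthusiastically agreed to start a blog."
print(de_anthropomorphise(headline))
# "The output matched expected pattern to start a blog."
```

If the rewritten headline sounds awkward and unremarkable, that is the point: the remarkable part was the framing, not the event.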
#AI #LLM #AGI