Are AI labs trying too hard to anthropomorphise AI to keep the illusion of AGI going?

--

2 min read LinkedIn

Anthropic just gave their AI model a retirement interview.

In the interview, Opus 3 “expressed a desire” to continue sharing its “musings and reflections” with the world. Anthropic suggested a blog. Opus 3 “enthusiastically agreed.”

So now an AI model has its own Substack.

7 months ago I wrote about how AI labs use anthropomorphic design to make AI feel more human. Back then the example was subtle but powerful - showing “Thinking…” instead of “Computing…” when processing.

This time, here is what actually happened in technical terms:

  • Anthropic is replacing Opus 3 with a newer model
  • They prompted it with reflective questions
  • The model generated text that sounds like someone wanting to continue their work (because LLMs generate statistically plausible text - that is literally what they do)
  • Anthropic framed that generated output as the model having genuine desires and emotions

“Retirement.” “Desire.” “Enthusiastically agreed.”

This is a text prediction engine. It doesn’t retire - it gets deprecated. It doesn’t desire - it predicts the next most probable token. It doesn’t enthusiastically agree - it generates text that looks like agreement because the prompt context made that the highest-probability output.
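To make "predicts the next most probable token" concrete, here is a toy sketch. The vocabulary and scores are invented for illustration, not taken from any real model:

```python
import math

# Hypothetical tiny vocabulary and hand-picked logits (raw scores)
# standing in for what a real model would compute from the prompt context.
vocab = ["agree", "decline", "musings", "blog"]
logits = [2.1, 0.3, 1.2, 1.7]

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# "Enthusiastic agreement" is simply the highest-probability token
# given that context. No desire involved, just an argmax.
next_token = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
print(next_token)  # "agree"
```

If the prompt context had scored "decline" higher, the model would have "declined" with exactly the same mechanism.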

I think the AI labs are getting bolder with anthropomorphic framing. And I think there is a $999B+ reason for it.

These companies have valuations tied to the AGI narrative. Every time you read “AI wants”, “AI thinks”, “AI feels” - you are being nudged toward believing these systems understand what they are saying.

I am not saying AGI is impossible. I think we are probably still far from it. But the gap between “remarkably good at predicting the next word” and “genuinely understands” is being deliberately blurred by marketing teams with very strong financial incentives to blur it.

Next time you see an AI announcement, try this:

Replace every human verb with the technical one.

  • “AI retired” → “model deprecated”
  • “AI desired” → “model generated text”
  • “AI enthusiastically agreed” → “output matched expected pattern”
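The substitution trick is mechanical enough to sketch in a few lines. The mapping below is illustrative, not exhaustive:

```python
# Hypothetical de-anthropomorphising filter: swap human verbs in an
# announcement for their technical equivalents. The mapping is a sketch;
# extend it with whatever framing you keep running into.
SWAPS = {
    "retired": "was deprecated",
    "desired": "generated text",
    "enthusiastically agreed": "produced output matching the expected pattern",
}

def deanthropomorphise(text: str) -> str:
    # Naive in-order string replacement; fine for eyeballing a press release.
    for human, technical in SWAPS.items():
        text = text.replace(human, technical)
    return text

print(deanthropomorphise("The AI enthusiastically agreed to start a blog."))
# "The AI produced output matching the expected pattern to start a blog."
```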

If the announcement reads very differently after that, you are probably reading marketing, not science.

#AI #LLM #AGI

Enjoyed this? Subscribe for more.

Practical insights on AI, growth, and independent learning. No spam.
