Claude Code can code nice UI. But nice UI doesn't mean good UI.

Manual UI testing is becoming one of my biggest bottlenecks when coding with AI now.

1 min read · LinkedIn

[Carousel: 3 slides]


For any new feature, I usually find myself spending 80% of my time on manual UI testing.

These are two common issues I see:

  1. The LLM sometimes makes poor judgments about which UI component to use.

When I asked Claude Code to add a feature that lets users toggle the AI mode, image 1 is what it coded. It used an on/off switch for a choice between two modes. It looks decent, but it's ambiguous: what does ON mean? Turning ON Learning Mode, or Explore Mode?

One way to solve this might be to give it best practices for choosing components, packaged as a skill.
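As a rough sketch of what I mean (the file path and rules here are hypothetical, not a tested skill), such a skill could be a short markdown file of component-selection heuristics that Claude Code loads before touching UI code:

```markdown
<!-- hypothetical: .claude/skills/ui-components/SKILL.md -->
# Choosing UI components

- Use a switch/toggle ONLY for turning a single setting on or off.
- For a choice between two or more named modes, use a segmented control
  or radio group, so every option carries its own label.
- Never rely on ON/OFF wording to distinguish two modes that each have
  their own name (e.g. "Learning Mode" vs "Explore Mode").
```

Whether a static rule file is enough, or the model needs visual feedback too, is exactly the open question below.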

  2. The LLM can't see layout issues.

After prompting it about the ambiguity, I asked it to use a segmented control instead. But that introduced a layout issue.

It took an additional prompt to finally get it fixed.

This raises a question.

If the LLM could see what it coded, could it catch these obvious issues on its own?

This is something I just started to explore. Does anyone know of a solution that lets Claude Code (or similar tools) see its own UI output for a mobile app during the agentic loop?
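One direction I've been sketching (the wrapper and its wiring into the agent loop are my assumptions; `xcrun simctl` and `adb` are real CLIs): expose a screenshot tool the agent can call after each UI change, so it can inspect what it just coded.

```python
import subprocess
from pathlib import Path


def screenshot_command(platform: str, out_path: str) -> list[str]:
    """Build the CLI command that captures the current screen.

    'ios' targets the booted iOS Simulator; 'android' targets the
    connected emulator/device via adb.
    """
    if platform == "ios":
        # xcrun simctl io <device> screenshot <file> writes a PNG directly
        return ["xcrun", "simctl", "io", "booted", "screenshot", out_path]
    if platform == "android":
        # screencap prints PNG bytes to stdout; caller redirects to a file
        return ["adb", "exec-out", "screencap", "-p"]
    raise ValueError(f"unsupported platform: {platform}")


def capture(platform: str, out_path: str = "ui_snapshot.png") -> Path:
    """Capture a screenshot and return its path.

    Hypothetical hook: an agent (e.g. via a custom tool or MCP server)
    could call this after each edit and feed the image back to the model.
    """
    cmd = screenshot_command(platform, out_path)
    if platform == "android":
        with open(out_path, "wb") as f:
            subprocess.run(cmd, stdout=f, check=True)
    else:
        subprocess.run(cmd, check=True)
    return Path(out_path)
```

This only closes the loop if the model can also act on the image, so the open question is less "can it take a screenshot" and more "will it reliably spot layout issues in one".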

Would love to hear how others handle this.

#AI #CodingWithAI #ClaudeCode #UITesting #UXDesign #DeveloperExperience #LLM

