Most Failed AI Rollouts Are Technically Sound


Recently, I had a conversation with Shang How Tan, CEO of Sequoia Group, a Singapore-based leadership and organisation development consultancy firm with 25 years of practice across 200+ organisations and 1,000+ engagements.

We talked about why so many AI rollouts in companies are underperforming. The diagnosis we kept returning to was not about the AI at all.

The pattern they have observed across decades of transformation work is consistent. The majority of failed transformations had technically sound solutions. What was missing was acceptance from the people who were supposed to use them.

Sequoia expressed it as an equation that they have applied in change management:

Q × A = E

The Quality of the solution (Q), multiplied by the Acceptance of the people who must use it (A), determines the Effectiveness of the change (E).

If Q is high but A is low, E is still low. You can have the best workforce strategy in the country, the best AI tool on the market, the best vendor in the room. If your people do not accept it, the result is the same as having a bad solution.
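The arithmetic is worth making concrete. A toy sketch, with my own assumption of scoring Q and A on a 0-to-1 scale; the point is that E is a product, not a sum, so a low A drags down even excellent technology:

```python
def effectiveness(q: float, a: float) -> float:
    """Effectiveness of a rollout as quality times acceptance (Q x A = E)."""
    return q * a

# Great tech nobody accepts vs. decent tech with real buy-in.
best_tool_ignored = effectiveness(q=0.9, a=0.2)
decent_tool_owned = effectiveness(q=0.6, a=0.8)

print(f"{best_tool_ignored:.2f}")  # prints 0.18
print(f"{decent_tool_owned:.2f}")  # prints 0.48
```

On these numbers, the weaker tool with genuine buy-in is more than twice as effective as the better tool without it.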

This explains a lot about why AI rollouts in 2026 are underperforming.


The Singapore SME paradox

In 2025, AI adoption among Singapore SMEs jumped from 4.2% to 14.5%. That sounds like progress. It also means 85.5% of Singapore SMEs still have not adopted AI.

There is a second-order failure mode hiding in there. Of the 14.5% who did adopt, many implementations are delivering suboptimal value. Not because the AI is broken. Because:

  • Jobs were not redesigned to use the AI
  • Workflows stayed static after the tool was introduced
  • Employees were not equipped to operate the new tools

The Association of Small & Medium Enterprises in Singapore has flagged this directly. We are training people in AI skills faster than workplaces are being redesigned to use those skills.

So you have two failure modes happening at the same time:

  1. The non-adopters: stuck because there is no compelling case for change. Low A before any Q exists.
  2. The adopters: stuck because the tech was added without redesigning the work. High Q, low A.

Both are acceptance problems. Neither is a technology problem.


Why builders should care

I have used Claude Code to automate many of my daily tasks. The hardest part was never the engineering. It was figuring out which task my AI agent should do, and which task should stay with me.

That is a job redesign question, not a tech question.

When business owners tell me their AI rollout failed, the story is almost always the same. They bought a tool. They told the team “we have AI now”. They expected adoption to happen. It did not.

Compare that to the failure mode I see less often, but which still happens. The technical solution was actually wrong. Wrong model, wrong vendor, wrong architecture.

In my experience, that is maybe 1 in 10 cases. The other 9 are acceptance failures dressed up as technology failures. And we keep blaming the tech because the tech is easier to swap than the people side.


Four shifts if you take Q × A = E seriously

If acceptance is the bigger lever, your priorities change. Here are four shifts I think every business owner rolling out AI should consider.

1. Redesign the job before introducing the tool

Don’t ask “can AI do this?” Ask “what would this role look like if AI were doing the routine parts?” Then build that role. Then bring in the tool to support it.

Sequoia frames this as deconstructing jobs into discrete tasks first, then deciding which tasks should be human-led, AI-augmented, or automated. The role is the unit of design. Not the tool.

2. Build acceptance with the people doing the work, not for them

Workforce transformation fails when stakeholders feel excluded or uninformed. Top-down rollouts have the lowest acceptance. Co-creation with line managers and the actual implementers has the highest.

If your CTO unilaterally picks the AI vendor and the workflow, expect low A.

3. Measure outcome, not adoption

A team using ChatGPT 100 times a week is not the same as a team that has redesigned its workflow around it. Login counts and seat usage are vanity metrics for AI rollouts.

Measure what changed in the actual work. Cycle time. Output quality. Decision speed. If those did not move, your rollout is not working, regardless of how many people opened the app.

4. Treat readiness as the gate, not the tool

Sequoia uses an AI Readiness Checklist that evaluates digital infrastructure, data quality, technical capability, workforce pain points, job redesign readiness, and budget. If you fail on the first three, you are not ready for AI. You are ready for digitisation.
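As a sketch, that gate might look like this in code. The dimension names follow the checklist described above, but the pass/fail scoring and the first-three-dimensions gate rule are my reading of it, not Sequoia's actual tool:

```python
# Hypothetical readiness gate. The first three dimensions are the
# gate: fail any of them and the verdict is digitisation, not AI.
DIMENSIONS = (
    "digital_infrastructure", "data_quality", "technical_capability",  # the gate
    "workforce_pain_points", "job_redesign_readiness", "budget",
)
GATE = DIMENSIONS[:3]

def readiness_verdict(checks: dict[str, bool]) -> str:
    """Coarse verdict from per-dimension pass/fail checks (missing = fail)."""
    if not all(checks.get(d, False) for d in GATE):
        return "not ready for AI - start with digitisation"
    if all(checks.get(d, False) for d in DIMENSIONS):
        return "ready for AI"
    return "ready to pilot - close the remaining gaps"

verdict = readiness_verdict({
    "digital_infrastructure": True,
    "data_quality": False,          # messy data fails the gate
    "technical_capability": True,
    "workforce_pain_points": True,
    "job_redesign_readiness": False,
    "budget": True,
})
print(verdict)  # prints "not ready for AI - start with digitisation"
```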

Most SMEs do not need autonomous AI or sophisticated machine learning. They need analytics foundations and smart automation. Skipping ahead to GenAI before you have clean data is the most expensive way to discover this.


The 25-year pattern

The same pattern emerges across decades of consulting work. Leaders treat technology adoption as a procurement problem instead of an organisational change problem. Technically brilliant plans (high Q) generate minimal organisational impact (low E) when the people implementing them never accept the change (low A).

This is not a Singapore problem. It is not even an AI problem. It is the same problem we have had with every wave of enterprise software for the last 30 years.

What is different now is the speed of change. AI is being adopted faster than any other technology in history, which means the ratio of acceptance to quality matters more than ever. There is less time for organisations to course-correct mid-rollout.


A practical diagnostic

Next time you see an AI rollout struggling, run this before blaming the tool:

  1. Did we redesign the job, or just bolt the tool onto the old workflow?
  2. Did the people doing the work co-design this, or were they told?
  3. Are we measuring outcomes that matter, or just usage stats?
  4. Was the organisation ready for this level of AI, or did we skip a step?

If the honest answer to two or more of these is “no” or “we skipped that”, the tool is probably fine. Acceptance is the problem.
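A pocket version of the diagnostic; the two-or-more threshold comes from the rule above, while the wording of the verdicts is mine:

```python
# The four diagnostic questions, phrased so that True means the
# healthy answer and False means "no" or "we skipped it".
DIAGNOSTIC = (
    "redesigned the job, not just bolted the tool on",
    "co-designed with the people doing the work",
    "measuring outcomes, not usage stats",
    "organisation was ready for this level of AI",
)

def diagnose(answers: list[bool]) -> str:
    """Two or more misses points at acceptance, not the tool."""
    misses = sum(1 for ok in answers if not ok)
    if misses >= 2:
        return "acceptance problem - the tool is probably fine"
    return "look closer at the tool and the rollout mechanics"

print(diagnose([False, False, True, True]))
# prints "acceptance problem - the tool is probably fine"
```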

If you are a business owner rolling out AI inside your company, your job is not to pick the best model. Your job is to design the work and bring people along. The model selection is the easy part.


Come say hi at HR Tech Festival Asia

I will be joining the Sequoia Group team at HR Tech Festival Asia, Asia’s premier HR Tech event by HRM Asia. They will be showcasing Octave, one of their org tech solutions for job redesign and workforce transformation, at Booth S7.

If you are wrestling with the Q × A = E problem inside your own company, come find us. Drop by Booth S7, see Octave in action, and let us swap notes on what is actually working in AI rollouts right now.

#AI #Leadership #OrgTech #DigitalTransformation #FutureOfWork #HRTechFestivalAsia #OD
