Why Most Leaders Can’t Choose an AI Tool 

13th February 2026 | Insights & Case Studies

If part of your job is to choose technology (and probably even if it isn’t), your inbox is flooded with AI vendor emails, demo invitations and case studies promising overnight results.

One Advisory Board member listed his current evaluation set. “Writer AI for content creation, Lovable for prototyping, Base44, Gemini, NotebookLM, Claude. So, LLM models, then what to build yourself. We haven’t even stepped into agent-to-agent conversations yet. It’s such an overwhelming domain for business leaders, no matter what segment you’re in.”

The challenge is that there are few public success stories with enough detail to add value to a vendor comparison grid. When a firm gains an advantage from AI, it’s not eager to share the details with competitors. As a result, according to one enterprise leader, “a lot of mid-market firms are sticking with Janet and Excel.”

As one strategy leader said, “I don’t even know what criteria I should use. What questions should I ask? How would this fit into my organisation?”

Even leaders who do develop evaluation criteria hit a wall when everyone’s selling and no one’s showing proof.

According to one founder at the aibl Advisory Board meeting, the barrier is uncertainty – leaders are willing but need to be sure. What moves them from paralysis to action is trust, evidence and logic working together. Feature lists don’t cut through. Another pointed out that people want guidance but are wary of being sold to.

Rushed decisions and frozen evaluations both backfire

Without clear criteria, leaders split into two camps. Some make arbitrary bets and ask middle management to figure out implementation. Others freeze in evaluation mode and make no decision at all.

Both create problems. Teams respond to rushed decisions by working around them, or to indecision by rolling out their own solutions. Around 70% of organisations we hear from at aibl have employees using AI unofficially. Different people, different tools, no consistent standards.

One practitioner described what happens when departments choose without coordination: “I’m seeing entire departments picking their own tools function by function. They’re training people on what the tool does, then leaving them to guess what problems it should solve. The sequence is completely backwards.”

When leadership finally picks a direction, they hit a switching problem nobody planned for. The result is organisations burn through confidence and give up.

What to write down before you talk to any vendor

There’s a way forward, and it starts with the right metrics. Even mature organisations default to evaluating AI through the lens of time saved per week. It sounds reasonable.

But as one transformation leader who is running 30-plus AI projects across a 400-person company told us, she couldn’t care less about individual productivity or hours saved. What matters are quantifiable business outcomes: revenue generated, pipeline created, documentation coverage.

Time saved is easy to report but difficult to verify. Unless you rigorously evaluate tasks before and after AI intervention, it tells you someone feels faster, without showing whether a process changed.

The projects that stuck in her organisation had specific outcomes tied to them.

The leaders we work with at aibl follow that logic. Before any vendor conversation, write down four things:

1. What’s actually broken – not “we need AI for sales” but “our sales team spends 10 hours a week on manual data entry”

2. Who’s affected – which team, which customers, which process

3. What it’s costing you – time lost, errors made, revenue at risk

4. What success looks like in 12-18 months – quantifiable outcomes like “generate £50k additional pipeline,” not vague claims like “save time”

If you can’t name the people affected, you don’t understand the problem yet. Without a way to measure the problem, you’ve got no way to measure whether AI fixed it.

Only then consider AI as a potential solution – and still don’t name tools yet. When you turn “we want AI” into a specific outcome, you can finally match tools to reality.

How to drag vendor demos into reality

Once you’ve worked this out, you’re ready for vendor demos.

Start with proof, or at least as much proof as is available. Force vendors to address your specific context by asking how they helped mid-market companies with similar constraints. This moves the conversation on from generic capabilities.

Implementation specifics are a natural follow-on: What work is on their side versus yours? What’s a realistic timeline? What systems does this connect to, and what happens with custom integrations? One board member put it directly: “The unglamorous work – finance, HR, operations – that’s where organisations struggle to picture the future state. Everyone wants to talk about the sexy AI bits, but it’s the workflow management and data cleaning that determines success.”

This includes getting clear on data. What do they need from you and what happens to it? How will it be collected, stored and used? Will they train models for other customers on your data? One Advisory Board member flagged that even when leaders want to move fast, messy data blocks progress. Data governance can’t be an afterthought.

Monitoring and accountability matter too. How will you know it’s working? What happens when it makes a mistake, and who’s accountable?

Finally, ask about cost beyond the licence fees. Include implementation, integration, training and usage-based charges. What does the actual number look like over 18 months? If a vendor isn’t willing to spec this out, walk away. According to one practitioner, don’t be persuaded that AI spend will be small and easy to ignore – it becomes structurally material over time.

Start with the problem, not the tool

What separates staying stuck from moving forward is clarity – knowing what’s broken, who it affects, and what it costs. Business outcomes over time saved, and a real picture of what implementation requires.

The leaders breaking through decision paralysis at aibl show up to vendor calls with specific problems. They bring criteria that force real answers. They’re not waiting for perfect information or Fortune 500 proof points.

Before your next call, re-read this article and commit it to memory (it’s really not that long). Use our recommended criteria to test whether vendors can address your actual constraints. The right choice becomes obvious when you know what you’re solving for.
