Why 96% AI adoption at Make didn’t start with tools or training
Watch the interview here: When Sara Maldon joined Make two years ago, there was no approved AI tool. Nobody...
PLUS: What 12 months of AI agents taught mid-market firms
Every mid-market leader we know is buried in vendor demos right now. The tools keep multiplying and case studies promise overnight results. Nobody can point to a UK company making any of it work, so most are struggling to move forward.
At the recent aibl Advisory Board meeting, founders and operators landed on why. Leaders sit through five demos and can’t articulate what differentiates them. They ask ‘will this work?’ when they can’t define what ‘work’ means. ‘Does it have a proven track record?’ is hard to answer in a market moving this fast.
Meanwhile their teams are quietly using free versions of platforms like ChatGPT, potentially leaking company data and creating uncoordinated pockets of AI-assisted workflow that make shared learning – and the inevitable cleanup – difficult.
The article below walks through how to fix that, starting with what to document before any vendor call and the questions that force demos into reality.

If part of your job is to choose technology (and probably even if it isn’t), your inbox is flooded with AI vendor emails, demo invitations and case studies promising overnight results.
One Advisory Board member listed his current evaluation set. “Writer AI for content creation, Lovable for prototyping, Base44, Gemini, NotebookLM, Claude. So, LLM models, then what to build yourself. We haven’t even stepped into agent-to-agent conversations yet. It’s such an overwhelming domain for business leaders, no matter what segment you’re in.”
The challenge is there are few public success stories with enough detail to add value to a vendor comparison grid. When a firm gains advantage from AI, they’re not eager to share the details with competitors. As a result, according to one enterprise leader, ‘a lot of mid-market firms are sticking with Janet and Excel.’
As one strategy leader said, “I don’t even know what criteria I should use. What questions should I ask? How would this fit into my organisation?”
Even leaders who do develop evaluation criteria hit a wall when everyone’s selling and no one’s showing proof.
According to one founder at the aibl Advisory Board meeting, the barrier is uncertainty – leaders are willing but need to be sure. What moves them from paralysis to action is trust, evidence and logic working together. Feature lists don’t cut through. Another pointed out that people want guidance but are wary of being sold to.
NEWS
The decision paralysis described above shows up in Quadbridge’s recent research across 250+ mid-market IT leaders. Security, privacy, and data quality ranked as the top barriers to AI adoption. But when participants were asked to name actual AI failures they’d experienced, almost nobody could.
AI maturity is becoming a leadership competency, not a technical one. Leaders are managing known risks, unknowns, and unknown unknowns simultaneously – and decisions are shaped by anticipated risk rather than lived experience.
Without clear boundaries for how AI should operate, reasonable caution turns into indefinite delay. The missing piece isn’t confidence in the technology – it’s the guardrails that would make confidence possible.

We’ve recently been speaking to mid-market firms about what worked with AI agents over the last 12 months. Many teams had wasted months chasing autonomous agents – the “full autonomy narrative” (ahem) that vendors sell. Generic assistants that promised to do everything but failed at everything. Complex workflows that broke constantly. Agents needing more babysitting than the tasks they replaced.
Most teams try to build agents that think before building agents that do. Flip that. Prove an agent can handle a boring, repeatable task flawlessly – then expand from there.
Autonomous agents are possible. But at aibl we see teams skip the foundational work that makes autonomy reliable. Stakeholders who see an agent nail something tedious will ask what else it can do – and that’s your budget case for the next build.

This week we’ve been tracking Workday’s expansion of Workday GO, announced in November and set for February 2026. It covers the UK, US, Canada, and parts of Europe. It’s aimed at mid-market firms that need proper HR, payroll, and finance infrastructure without the complexity and cost of traditional enterprise platforms.
They’re using AI to handle the usual mid-market problem – small teams can’t manage long implementations. Their new Deployment Agent cuts setup time by 25%. It works alongside embedded agents for payroll and HR self-service that let smaller teams handle more volume.
The expansion includes Workday GO Global Payroll, built for international scaling through a partnership with Remote (a global HR/payroll platform). If you’re growing across borders, you get one system for hiring and paying people in different countries instead of disconnected tools. They’ve also built a curated partner network so you work with one trusted implementer rather than navigating a messy ecosystem.
Core HCM runs around £200-£250 per employee annually, with AI features adding 12-18% on top. Implementation costs are additional, though they’re pitching fixed pricing to remove guesswork.