Watch the interview here
Tim Flagg, CEO and Co-Founder of UKAI discusses why 75% of mid-market firms are stuck in “Shadow AI” or exploratory paralysis…
This article is drawn from a recent conversation with Tim Flagg, co-founder and CEO of UKAI.
Tim Flagg spent years building AI-driven products at companies like EntityX, Greater Than and Intelliquence. He witnessed how quickly this new industry was accelerating, but he also saw the risks and was concerned that the industry was becoming a ‘Wild West’.
So he co-founded UKAI, the UK’s only trade association for AI businesses, bringing together companies that wanted to build a more responsible industry.
The organisation now works closely with the government – Tim was speaking with AI minister Kanishka Narayan just days before our conversation, and UKAI regularly runs workshops with numerous departments – and with a membership base that gives him a particular view of how mid-market firms are actually adopting AI.
Some firms are running structured pilots. But from what Tim sees across the market, a large proportion are not. In their place: shadow AI – people using their own tools and platforms, outside any structure the organisation has set.
When people are on different pages, using different systems at different levels of maturity, pilots fail for structural reasons and leadership draws the wrong conclusion. “People sort of will say, well, AI failed,” as Tim puts it. “Without really thinking about which bit of AI it was that failed and whether it was a fair test.”
Why tooling alone doesn’t get you there
The firms Tim sees handling AI well right now had already built habits of experimentation and learning before ChatGPT launched.
“The companies who got into that transformation project early, for them, when AI came along, it’s just another technology,” Tim says. “They’d already been scanning, they’d already been testing, they’d already been figuring a few things out. They had that curiosity, that resilience, that data-driven experimentation.”
Tim describes other companies “almost running around saying, we need to do this transformation now. We need to do an AI transformation… without thinking about the culture and without actually thinking about what the business need is.” They’re bolting AI onto organisations that never learned how to test, fail, and adapt continuously.
And at the individual level, most of the problem is confusion. People hear ‘AI’ and think chatbots, don’t see how it connects to their actual work, and disengage.
Tim describes running workshops where people use no-code tools to build something in normal conversation. “Once you start demystifying it and showing them some of the simpler ways of getting started and how it’s relevant to their jobs, they love it,” he says. The shift from crossed arms to leaning forward happens fast when someone makes it concrete.
Curiosity and resilience matter, but someone has to translate that into changed workflows and realistic targets. That job falls on managers, and it’s harder than most leadership teams recognise.
Where AI adoption breaks down: the manager’s new job
Tim describes AI adoption as “different things at different levels.” Leaders set vision, employees need relevance, managers redesign how work gets done. Of the three, the manager’s job has changed most. They’re no longer just encouraging teams to use AI tools – they’re figuring out how work functions when AI agents take on part of the flow. By agents, Tim means software that can carry out tasks with more autonomy across a workflow.
“Managers are having to start thinking about what hybrid teams look like,” Tim says. “How do they encourage their teams to not just use AI-powered tools, but use agents, and where do those agents fit into those teams and then what does that do to the KPIs and the responsibilities of the humans?”
That plays out in three ways.
First, targets. If an agent handles data processing or first-draft analysis, measuring the human on throughput no longer makes sense. Managers have to figure out what realistic targets look like when the work has changed shape.
Second, oversight. Managers have to design the human-in-the-loop checkpoints, setting the right inputs into these systems and reviewing what comes out.
The third part is the most practical: someone has to connect a specific tool to the task a person is doing today.
This is the frozen middle that aibl’s research keeps surfacing – managers facing a genuinely harder version of their job, with unclear expectations and misaligned incentives. Leadership approves the budget, employees need relevance before they’ll move, and managers have to rewire the actual operating model in between.
Start with the problem, then fix the data
Most firms ask the wrong first question. “How do we make an AI for our business?” Tim says. “What they should be thinking about is what’s the business need that we have? What’s the problem that we have? And then how do we design some technology that actually addresses that problem?”
When you start with the business problem, you almost always end up at data. And the data is almost always a mess: fragmented CRMs, legacy systems from mergers, siloed datasets that are barely shared.
“Starting with cleaning up the data is sometimes the less sexy part of what you need to do in AI transformation,” Tim says. “But it’s a fundamental part. When you ask the right question, in the majority of cases you’ll get to doing these foundational things.”
AI can now help with the cleanup itself. Some of UKAI’s members are building tools that find scattered data, recodify it, and make it interoperable. But the tools don’t remove the need to know what you’re aiming at. The order matters: “work out what you’re solving, get the data right, then pick the tools.”
What buyers should ask AI vendors, but don’t
Most AI delivery now is stitched together: models, open-source components, vendor layers, vibe-coded interfaces. That’s not automatically a problem, but it changes what “robust” means, and buyers need to understand that.
“Some of the IP that you might have developed five or six years ago that you carefully hand coded and carefully crafted and built with very expensive engineers,” Tim says. “Now you’re starting to see companies come along and they’ll just vibe code a whole load of it. I’m simplifying, right? But that’s sometimes the challenge.”
That’s a real problem for vendors who invested years building proprietary tools – it’s also a problem for buyers. The question isn’t just “do you own the IP?” – it’s what did you build, what did you assemble, and what risks does that introduce? Where does proprietary work end and open-source dependency begin?
If a component breaks or a model changes, who’s on the hook? Most mid-market procurement processes aren’t set up to ask that yet.
Having built AI products himself, Tim frames the ethics question as commercial due diligence, not moralising. He draws a direct parallel to ESG (environmental, social and governance) – responsible sourcing was a nice-to-have, and now it’s embedded in procurement. He expects the same cycle with AI vendors.
Buyers are starting to ask whether providers have sourced their data responsibly and how they’re handling bias. “They’re thinking about sustainability, they’re looking at bias within their systems,” he says. Those are commercial questions, not peripheral ones. Firms that scrutinise pricing and features but skip provenance, data sourcing, and governance are exposed.
The vendors who can answer those questions are better positioned for longer-term contracts – the ones who can’t are a liability waiting to surface.
Why uncertainty blocks adoption more than regulation does
UKAI was set up to build a more responsible British AI industry. “There were companies who were like-minded,” he says. “We came together and said, we believe in building a more responsible industry.”
And not just for the industry’s sake. Tim sees the same gap nationally: consumers and citizens won’t engage with AI-powered services until they understand what they’re being offered. Closing that gap is a core part of UKAI’s work.
Tim points to the UK’s proposed AI bill last year: businesses stalled decisions because they didn’t know what it would cover, and the bill was eventually dropped for trying to do too much. The problem wasn’t regulation itself – it was the uncertainty.
He sees regulation as useful when it sets clear benchmarks and standards rather than just adding restrictions. Agreed standards give buyers and sellers something to work from, which is what actually unlocks adoption.
For mid-market firms, the question isn’t whether to engage with AI. It’s how to drive adoption, sustainably and effectively.