workforceLIVE: What the data says about AI ROI and organisational readiness
Last week we hosted aibl's first workforceLIVE event in London, focusing on Capabilities, Culture and Change.
Most organisations have invested in AI tools over the last three years. About half of them are still waiting for positive returns.
It’s easy to point at the tools, but the issues are often organisational and human.
Admittedly, an event called workforceLIVE was never going to conclude that the missing ingredient was a better software licence. But over the course of the morning we spoke with three experts who, working from different data, arrived at the same conclusion from three distinct angles.
Dr Laura Weis, Human AI Strategy and Transformation Lead at WPP, opened with a point that ran through everything that followed: AI is not neutral. In high-performing teams with clear decision-making and good culture, it multiplies value. In poorly functioning organisations, it amplifies the dysfunction. Unclear responsibilities, weak culture and poor design don’t get solved; if anything they become more acute.
A spoon can be used for good or ill, Laura said. We do not blame the spoon. The tool amplifies the skill and judgement of whoever is using it, and the output is only as good as what comes in.
AI amplifies the social judgements that surround skill as much as it amplifies skill itself. Research Laura cited shows that when women use AI tools, the output tends to be attributed to the tool. When men use the same tools to the same effect, the credit goes to the person. Organisations designing AI programmes without inclusion at the centre are widening gaps that already exist, not because the technology is biased but because the social judgements around it are.
AI reshapes work in the same direction, targeting the routine, high-volume tasks first because they’re easiest to specify. What remains is less clear and less teachable. Roles shift from execution to oversight, and oversight requires domain knowledge that execution experience used to build. The tasks most exposed to AI are often the ones associated with creativity, autonomy, and happiness at work. Laura’s term for the instinct that follows: automating for applause rather than for relief. Organisations reach for the impressive automations rather than the grunt work that drains people. The better move is to start where the drag is, redesign the workflows, and protect the space AI creates rather than fill it with more work. The structures around the work make this hard: legacy cultures, siloed handovers and linear career paths were built for execution-heavy roles, not for the distributed oversight AI is creating.
Research replicated within WPP shows how unevenly the benefit lands. A fifth of C-suite executives are saving more than twelve hours a week. Two in five general workers report no time savings at all. The pressure falls hardest on mid-senior managers, sitting where those structural pressures bite. Laura described them as running at around 120% capacity, with no slack to rethink the work even if the system allowed it. Until organisations create that space, more tools won’t help.
Laura’s answer is to rethink the shape of the professional alongside the shape of the work. The T-shaped model, a deep specialist with broad awareness across adjacent areas, has been the hiring standard for years. She argues for the M-shaped professional: multiple areas of real expertise, plus the orchestration skills to combine them. People who can hold complexity and work across boundaries, rather than hand work across them.
Kerri O’Neill, CPO of Ipsos UK, took the next question: what does leadership have to do in response? The starting point is trust, and trust is in shorter supply than most leaders assume. Workplace studies show around half the workforce fears AI will take their jobs regardless of what their employers say. That is the territory organisations are deploying AI into.
Ipsos began working with AI well before ChatGPT brought the technology into the mainstream. It looked like a threat at first, Kerri said: its ability to ingest and analyse vast amounts of data mirrored what researchers do. Not surprisingly for a French company, she noted, Ipsos took a philosophical position before touching the tools. By working out how the organisation intended to use AI, it could communicate clearly to 20,000 employees across 93 countries what they were actually expected to do with it.
We covered the detail of Ipsos’s philosophy previously in this newsletter. Of all those principles, transparency proved the hardest to implement. Transparency about how AI makes decisions is where you start to see what it’s doing to the organisation. Without it, the question of whether AI is reinforcing or undermining the conditions Laura described stays out of reach. Grappling with how the algorithm works and whether it can be audited has done more for workforce engagement at Ipsos than any tool launch.
Two years of dialogue sessions, leader-led conversations and regular check-ins built from there. The check-ins ask employees what they need right now to feel more confident about working with AI, and the answers change, which is itself useful information. Trust within the Ipsos workforce now sits at around 80%, against roughly 50% in the general population. What that buys, Kerri argued, is speed: conversations about future changes happen faster because employees already believe the company will act responsibly. The framework isn’t a constraint on adoption. It’s what lets adoption move.
Matt Phelan, co-founder of The Happiness Index, brought the data. The Global Workplace Happiness Report draws on 250 million data points from five million employees across 170 countries.
The 19 to 29 age bracket records the lowest scores for happiness, organisational trust and work-life balance of any group in the dataset. The report describes this group as overloaded and under-supported. The pressure of trying to prove themselves while struggling to set boundaries shows up in the numbers. Read alongside Laura’s argument, it raises a question. AI targets the routine, high-volume tasks first. Those tasks are also the ones that teach junior workers how things work. Remove the easier work and the scaffold that builds expertise goes with it. The M-shaped professional doesn’t arrive without that foundation.
Across Matt’s research, the pattern is consistent: organisations invest most in the things that matter least. The two biggest drivers of happiness at work are inspiration from the organisation and a sense of belonging. Most companies are funding neither. They fund functional improvements, technology platforms and engagement programmes while the human side goes under-invested. What Kerri described at Ipsos, two years of dialogue before any tool deployment, is the kind of investment that builds belonging and trust. Investment in inspiration is rarer still.
People need to come to AI willingly, not by mandate. Matt cited a study in which two groups did identical amounts of exercise. The group that wanted to exercise lost more weight and showed better blood pressure outcomes than the group that didn’t. The same logic applies to AI adoption. Where organisations have built the conditions Kerri described, willing engagement, clear philosophy, trust as a foundation, adoption follows. Where they’ve issued mandates and hoped for the rest, they get compliance without the results.
The structural work Laura argued for, the trust Kerri described, and the investment in belonging Matt called for aren’t separate fixes. They’re the foundation willing engagement builds on.
At aibl, the pattern holds across the businesses we cover: the question is rarely whether the tools work. It’s whether the organisation was ready for what they’d reveal. AI amplifies. The question is what it’s amplifying. Organisations that have built those conditions will see their investments compound. The ones that haven’t are amplifying the problems they already have.
Next week: what the roundtables made of it.