Watch the interview here
When a significant new technology arrives, the instinct is to start with the tools. Kerri O’Neill has spent over two decades in HR watching organisations move through technology cycles. As Chief People Officer at Ipsos in the UK and Ireland, and its Global AI Workforce Transformation Lead, she thinks that sequence is where most organisations go wrong.
Starting with the philosophy, not the tools
Ipsos is over fifty years old, and its biggest periods of growth have tracked major technological shifts: face-to-face to telephone, telephone to internet, now internet to AI. Each time the question was the same: how do you adopt a new technology without losing what the business actually is?
“Most people when they talk about adoption, they tend to start with technology. But what’s interesting about what Ipsos has done is start with a philosophy. And that’s really helped people understand, embrace and experiment, because they know the frame by which we want to work with this technology.”
Kerri explains the AI philosophy centres around the Three Ts. Trust: “we would never use AI to shortcut something. Everything we do is about making sure it’s statistically valid, robust, based on genuine evidence.” Transparency, at both the algorithmic and human level: “we’re honest about what we’re trying to do with AI, we’re honest about what it can do, what it can’t do.” Truth, meaning outputs must ground back in what people actually said, not in what the AI produces when it is “very flattering and lovely.”
The Three Ts were laid out “even before we got to: these are the tools, here’s the prompting piece.”
Why Ipsos gave AI to everyone, regardless of role
Ipsos gave AI tools to every member of staff, irrespective of role and geography, and trained everyone in the basics of how an LLM works and how to prompt it properly.
“You wouldn’t not give someone the cloud. You wouldn’t not give someone the internet. You wouldn’t not give someone electricity.” AI belongs in the same category: a general purpose technology, not a departmental one.
Most employees are getting to grips with AI by starting to use it like an expanded search request. Moving them beyond that means treating AI as something you shape around yourself, which requires telling it a surprising amount about what you’re after and who you are.
Humans absorb roughly two billion data points a day from their culture, environment, and the people around them, most of it unspoken. AI has none of that and so you need to give it more context to be truly useful. “You have to tell it your hopes, fears, dreams, visions, you know, your thoughts on tone, communication, timing, style.”
Why Ipsos trained staff to question what AI produces
Universal access creates its own risk. Kerri calls it “cognitive surrender,” the term used by a recent Wharton study that found people accepting AI outputs uncritically and growing more confident in their answers even when the AI was wrong. The effect held even when participants were under time pressure or working for performance incentives.
“I think it’s quite good that the early models of AI were a little bit ropey. Because for most of us, there is a tendency to be quite wowed by technology and to just assume it’s amazing and it does all the right things.” The imperfect outputs forced a habit of checking early on that cleaner technology might have bypassed.
That habit, though, isn’t a reason to write the current tools off: “they are exponentially better than what they put out even two years ago.” Leaders who tried it early and haven’t returned are, in her view, misjudging what’s now available.
“We must always remember that AI is not a human. At the end of the day AI is just statistical patterns and models when you get to the bottom of it.” Against that, Kerri sets the human brain: “one of the biggest supercomputers that we have sits behind your very own eyes. And that supercomputer compounds when you get people together and it creates connections between those brains.” That, she adds, is something AI is struggling with at the moment.
A posture like this doesn’t arrive on its own. Ipsos trained everyone explicitly in what LLMs are and what they aren’t, on the view that “healthy, balanced scepticism” isn’t the default starting point for most people meeting the technology.
Why individual training produces uneven results
In aibl’s experience of the mid-market, access is the easy part. What comes after is where most organisations lose momentum. Individual learning, in Kerri’s view, produces uneven outcomes.
“You just end up with a few faster people over there, or a few people who everyone gives the AI-type jobs to, and then everyone else lags behind. We need substantial upgrades across all of our different firms, in most of our jobs.”
This means creating moments of shared experience. “Let people do the course on their own with headphones and laptops, but then create dialogue around what you’re learning together. I think the more that we can use social learning and peer learning together the better.”
She points to Trinny Woodall, founder of Trinny London, who took her whole company, “a fast-growing startup,” out for two days to learn AI together. At Ipsos, the scale of 20,000 people makes a company-wide event like this impractical. But Kerri says they have created team and market-based equivalents.
“Maybe half a day out together, to learn and upskill, I think that could be one of the best uses of your time right now actually. I’m almost certain you will guarantee a shift that you wouldn’t have got if you just let everyone kind of learn and do it in a quite individualised way.”
The one-time gain versus the ongoing bet
The choice of what to do with that capability matters as much as building it. A few weeks before this conversation, Ipsos announced a new strategy built around augmentation, called Augmented Ipsos.
“If an organisation thinks, okay, I can use AI to cut heads by X amount, well, you can only do that once in your strategy. However, augmentation has ongoing potential benefits and ongoing value pools that can be opened up.”
Kerri has watched the alternative play out across two decades in HR. “I’ve seen the cycle where we got to in the 90s and 2000s, where we’re growing organisations, cutting them, growing organisations, cutting them. It’s a really awful hamster wheel to be in as a staff member, as an investor in those companies, leaders in those companies.”
In an augmented model of the kind Ipsos is adopting, the human role needs to shift toward what AI can’t replicate. Kerri points to three such things: relationships, creativity, and the ability to move faster because AI is handling everything else. “The value of that person being in our business is what’s the relationships they can create? What’s the creativity that they bring?”
The Augmented Ipsos strategy is weeks old. Kerri is excited about what comes next: “we’ve not scratched the surface of the innovation potential.”