SmartRecruiters On Turning AI Experiments Into Business Outcomes
Tech Talks Daily · March 04, 2026
3609
27:53 · 25.53 MB

Is 2026 the year AI finally has to prove it is worth the investment?

In this episode, I'm joined by Chris Riche-Webber, VP of Business Intelligence and Analytics at SmartRecruiters, to explore why so many AI and agentic AI initiatives stall after the pilot phase and what separates the projects that scale from the ones that quietly disappear. With Gartner predicting that more than 40 percent of agentic AI programs could be cancelled by 2027, Chris brings a pragmatic, data-led perspective on what is really happening inside organizations as the hype meets operational reality.

We talk about the fundamentals that have not changed despite the new technology. Influence, clearly defined problems, measurable impact, and adoption still determine success, yet they are often overlooked in the rush to deploy the latest tools. Chris explains why "good vibes" are no longer enough in front of a CFO, how to baseline outcomes properly, and why ownership of results is one of the most common missing pieces in enterprise AI programs.

A big part of the conversation focuses on what Chris calls the "agent washing" problem. Just as products are sometimes marketed with fashionable labels that do not reflect their real value, many solutions are being positioned as agentic without delivering true autonomy or business outcomes. We discuss how leaders can cut through the noise by asking better questions, aligning technology to specific use cases, and recognizing when simple automation is the right answer.

Trust, adoption, and measurable ROI emerge as the three signals that determine whether an AI initiative survives. Chris shares a clear framework for defining these signals in a way that is consistent, comparable over time, and meaningful to the executive team. We also explore how connecting talent decisions to revenue, productivity, and retention changes the conversation, especially in the context of SmartRecruiters' broader SAP ecosystem and the opportunity to link people data directly to business performance.

This is a conversation about moving from experimentation to accountability, from buying narratives to solving real problems, and from technology-first thinking to outcome-first leadership.

So as the window for easy wins closes and the demand for proof of value grows, will your AI strategy be remembered as a pilot that generated excitement or as an initiative that delivered measurable business impact?

[00:00:04] What happens when the AI projects that once filled conference stages with promise start quietly getting shut down behind the scenes? Well, over the last 12 months I've travelled across the US, UK, Europe, and the Middle East, and one theme keeps surfacing. Yep, you've got it. Agentic AI, autonomous systems, custom-built agents, all promising to rethink how we work.

[00:00:30] Yet alongside this boundless optimism, there's also a growing tension. Because Gartner predicts that more than 40% of agentic AI projects could be cancelled by 2027. Why? Because the business case doesn't hold up. And if that sounds familiar, I think it's because we've been down this path multiple times over the last five years. And it's this contrast that fascinates me.

[00:00:58] On one side we have bold ambition and rapid experimentation, and on the other this looming wave of cancellations. So the real question becomes, what is it that separates the AI initiatives that scale from those that stall and struggle to get out of the pilot phase? Well, hopefully we can answer this question once and for all today as I sit down with the VP of Business Intelligence and Analytics at SmartRecruiters, which was recently acquired by SAP.

[00:01:29] His name's Chris, and he's going to bring a data-led perspective to a conversation that all too often drifts into hype. But we're not going to be doing that today. Because he argues that 2026 is all about moving from proof of concept to proof of value. So we'll explore why influence still matters in corporate change, why measurable impact has to be defined before the pilot even begins, and why adoption, trust, and ROI

[00:01:58] are the three signals that determine whether a project will survive or not. And we'll even have time, I hope, to unpack the idea of agent washing, and how buying into narratives instead of defined problems could be undermining your strategy. So if you care about linking people decisions to productivity, retention, and revenue, today's conversation will go straight to the heart of all of these things.

[00:02:26] So as AI budgets come under closer scrutiny, are you building something impressive or something that can truly stand up to the CFO? I'll check in with you at the end to see what you think, but let me officially introduce you to Chris right now. So thank you for joining me on the podcast today. Can you tell everyone listening a little about who you are and what you do? Yeah, thanks for having me, Neil. So my name's Chris Riche-Webber. I tend to go by CRW. My name's a bit of a mouthful.

[00:02:56] I live in the south of England, and for the last 20 years, my work has really revolved around everything recruiting and HR tech. For the last several years, I've been working with SmartRecruiters, and really my team's mission, simply put, is how we collect, use, and work with data to better understand both our customers and our business in order to try and improve the success of both. And although we are recording this in Q1 of 2026,

[00:03:23] I've been to several tech conferences around the world already, in the US, UK, Europe, and even Egypt. And one of the big things I'm seeing everywhere, predictably maybe, is autonomous AI, agentic AI, and custom-built agents, to name but a few. Yeah, and Gartner did predict that a large share of agentic AI projects will be cancelled by 2027. So a different narrative there.

[00:03:51] So from what you're seeing on the ground at SmartRecruiters, why are so many AI initiatives struggling once they move beyond the pilot phase? Yeah, we do love to both predict wild success and wild failure in the same breath, don't we? It's a great question. I think there's a few common themes behind this, which we'll probably touch on as we go through. But for me, it comes down to a few fundamentals that haven't changed, whether it's AI, automation, or anything else.

[00:04:21] And simply put, I think that's influence. So anybody's ability to succeed within a corporate project, or get a pilot off the ground and take it enterprise-wide, has to be backed up by influence, which means having the right senior stakeholders on board. I think the added complication with anything to do with AI is, depending on who you work with, you're having to influence them about something that maybe they understand even less than previous technologies. There'll be lots of organizations where leaders are really upskilling themselves and do understand all that.

[00:04:51] But if they don't, you've really got to help them make that extra leap. And for me, that means you as the key person driving the project, you have to educate yourself. You have to understand what you're going and asking for even more so than before. So influence is still key. I think one of the biggest pieces, particularly with pilots, where we're seeing customers be successful when they start small and go big is understanding impact. Now that sounds very obvious, but I'm going to talk probably quite a lot today

[00:05:20] about people being able to define the problems they're solving really well. I see that that's something that's still lacking. And when it comes to what you're trying to prove with that pilot, what's the measurable impact, the vital impact that you're looking to get to? You can't get to the end of the pilot and just go, we've got good vibes. It's worked. Everybody's a little bit happier. There has to be something there that you can look the CFO in the face when you bump into him in the lift and say, hey, we want to push this out further.

[00:05:50] Or you bump into the CEO and you say to her, wasn't this an amazing success? Look at what we achieved. It can't just be feels. It has to be something measurable. And the third thing there then comes down to adoption. So this is something I speak a lot about. Adoption is a really boring, unsexy word when you're looking at all the wonderful, shiny outcomes we can get to. But we often forget in order to get there, people have to use the thing. So we have to be really, really obsessed

[00:06:18] about getting people to adopt it. And the fundamental difference now is when you're asking somebody to adopt something as differentiating and as transformative as AI, you might be asking them to adopt something that could really impact their own job. And as humans, adoption is very much a behavioral challenge, not a technical one. So you've got this added hurdle to get over in helping people adopt something that is probably going to

[00:06:47] fundamentally change the way that they work. And that can be scary for people. And one of the reasons I was excited to speak with you today is before you came on, I was reading how you've spoken a lot about the rise of agent washing. And it's a phrase that I'm hearing more and more this year. So how does this show up in real organizations? And why does it create false confidence around AI progress, do you think? Yeah, it's a great term, isn't it? You'll probably be familiar, Neil, with protein washing.

[00:07:16] It's something I'm seeing a lot in the UK at the moment on foods. You know, marketing teams are slapping "high in protein" on labels. And when we're walking through the shop, feeling a little bit hungry, all of our rational thought goes out of the window. We look at this thing and it says protein, and we think, okay, protein's healthy. Healthy's good. So this protein chocolate cookie bar must be healthy and good, which is not necessarily true. Same thing goes here. I think, you know, just because something says it is an agent,

[00:07:47] number one, it doesn't necessarily mean it is. But also, that's not really the most important question. I think whenever we're looking to understand, you know, to buy something new, whether that be an agent or not, again, it comes down to the problem you're trying to solve for. It might be that the problem you're solving just needs really good automation. Maybe it doesn't need an agent. But if you don't know enough to look beyond that term and ignore the, you know, the labeling and the terminology, you're not going to get there.

[00:08:14] So, you know, I see lots of HR teams, lots of IT teams who are very, very good at buying narratives versus buying solutions to defined problems. So, you know, you've got to be able to spot the difference. That does mean that, you know, you need to get a rough understanding of what an agent is versus what automation is. And I think, you know, there are a ton of ways people can do that. I would start with some really simple questions. You know, when a vendor is presenting to you, ask them things like,

[00:08:43] what decisions can this agent make for itself without predefined rules? That would be a pretty quick way of understanding, is this truly an agent or is it actually, you know, an automation working within guardrails? Or asking them, you know, when and how does it decide to hand off to a human, or how does it deal with a situation it hasn't seen before? And autonomy is probably the key word there for trying to differentiate between, you know, a true agent and an automation. But like I said,

[00:09:12] I think we all get very swept up in, we want the shiny toy. We want the thing that sounds the best. But sometimes what you need is a little bit different, even if it comes with a slightly unsexy title. 100% with you on that. And I suspect many people listening will also have their own stories and experiences of failed or stalled AI projects. But on the flip side, when an AI project does succeed, what are you seeing as the clear differences there compared with the ones that quietly stall

[00:09:42] or get shut down? What are you seeing on the positive side? Yeah, so I think, you know, when somebody's really succeeded, they've done some of those fundamentals we touched on really, really well. They've got the influence. And in order to get the influence, they've really focused very hard on that problem they're solving for. And they've defined what the business outcome is going to be. So, you know, really good example. One of our customers, a big fintech organization, hires many, many thousands of people a year.

[00:10:12] They were really struggling with, you know, something as mundane as interview scheduling. It was taking up inordinate amounts of time for them, not just for the people actually scheduling the interviews, but for their candidates, for their hiring managers, you know, who had busy day jobs to do. So their chosen experiment was simply, let's put in the SmartRecruiters automation and, you know, AI around interview scheduling.

[00:10:41] And we think it's going to save us, you know, X amount of time. And they ran that experiment really well. They got the sponsorship and the sign-off for it. They tracked it incredibly closely. They baselined what they were doing beforehand. And they really measured the success of that over time. And, you know, it's an easy one to gain people's buy-in for because, you know, we all hate scheduling nightmares and bouncing around between calendars. And, you know, the net result was thousands and thousands of hours saved that they could then pour

[00:11:11] into far more interesting things for their team to work on. So, you know, they hit those fundamentals: they defined the problem well, they got the influence from stakeholders, they obsessed over how it was being adopted, and they had a really clear, measurable outcome story to tell. And I think it was early last year that many leaders who had jumped on the AI bandwagon with a tech-first, shoehorn-the-problem-in-later mindset suddenly found that they weren't solving a problem and they weren't getting ROI

[00:11:41] on those expensive projects. And you often point to trust, adoption, and measurable ROI as the signals that determine whether an AI investment survives. So here in 2026, how do you think leaders should be thinking about building those three signals right from day one rather than further on down the road? Yeah, I love that. Again, you know, it's shiny toy syndrome, isn't it? You know, we want it. We'll figure out what to do with it once we've unwrapped it. Yeah. So I think, you know, when we look at

[00:12:11] those kinds of signals that we should all be caring about here and how we measure them in a really disciplined way, for me, it means three key things. Those signals have to be incredibly clear. They have to be very consistent and they have to be concise. And what I mean by those is if your signals aren't clear, as in, you know, trust is a really good example. Trust is an important thing to measure. But how are you defining trust? Is that measured by, you know, people's sentiment

[00:12:39] towards the thing that they're using? Or is it defined as a, you know, as a measurable, quantifiable outcome? So the signals have to be very clear. They have to be consistent because if you keep changing your definitions and moving the goalposts, then you're not going to have something established that you can look back on over time and compare results. And obviously, being in the role that I'm in and in an analytical world, having a baseline and a measurement on that baseline that is the same and is consistent is incredibly important.

[00:13:08] And then they have to be concise. You know, you can't make these things too complicated. If I've got a measure that I have to back up with a two-page definition for people to understand it, no one's going to care and they're just going to ignore it. So those signals should be always clear, always consistent, always concise. And I'm a big fan of doing things in threes as you'll pick up through this. So with my team, I talk a lot about when we determine how our customers are using our product, we talk about three categories, adoption,

[00:13:39] impact, and sentiment. So adoption is, are they using it? Impact is, is it benefiting them? And sentiment is, are they happier for using it? And if you boil everything down to those three things, we can answer most questions that matter with regards to what we're building for our customers. So, you know, any leader can apply that into an AI project that they're putting in. If they get to those three categories that they care about most, any question really that is important to answer could sit within them.

[00:14:08] And I think your points today will resonate with so many people listening around the world, especially when they might also be struggling to connect AI investments in people systems to hard business outcomes. That's what they're in this for. So how can they and their organization link hiring and talent decisions more directly to productivity, retention, and revenue? And you'll notice I'm sticking with your power of three there. Thank you. It's rubbing off.

[00:14:38] Yeah, it's a great question. I mean, I spoke with a couple hundred of our people just a couple of weeks ago on this topic around translating people and hiring outcomes into business outcomes. And there's a really good reason why for many years our slogan was "you are who you hire". And I think that's because the biggest impact of people decisions always flows to the bottom line of any business. Where I think a lot of people miss out when trying to do

[00:15:08] that translation is they don't truly understand how their business makes money. That sounds like a very odd thing to say, but I mean understanding how a pound or a dollar or a euro enters and moves through their business, and how the different functions shape and support that. I think if you don't understand that fundamentally, it's going to be really, really hard to tie what you're doing to a business outcome. So let's take retail, for example, because it's an easy one

[00:15:38] everybody can understand; we all shop at some point. You know, how many corporate HR or IT leaders in a retail business could tell you the average weekly store revenue, or the average basket size, or the average abandonment rate on their e-commerce site? Those are the measures that I guarantee you are talked about more in the boardroom versus, you know, what's our time to hire, or what's our applicant-to-hire ratio, or how many interviews are we scheduling?

[00:16:07] Those are all important things, but you have to be able to translate what they mean into those kinds of business metrics. So if you said, okay, we're opening a new retail store and the CFO's made projections as to what that store is going to bring in, well, now look at the people aspect and say, okay, what if we're a month, two months, three months late in hiring some of those key roles? Your revenue is going to be directly impacted. So that gives you a way to tell the story of tying

[00:16:37] the people decision to that business outcome. You know, sales roles are a really obvious place to start. If your average salesperson is bringing in, let's say, a million dollars a year, to keep the maths easy, and you could find a way to make them 20, 30, 40% more efficient because of, you know, some AI or some automation or whatever you're putting in there, and you're making the decision to architect their work differently, you're adding 200, 300, 400 thousand dollars

[00:17:06] per person that you bring into the business. So for me, this is a question that my team and my wider colleagues get really bored of me asking, but if anybody gets stuck in a loop of talking about HR or hiring outcomes, I'll always probe and say, okay, great, how does it impact the business? How does it change what the CFO cares about? How does it change what the CEO cares about? They all care about people but, you know, the reality is we all exist in a business

[00:17:35] to drive business performance. So we want our people to be happy but we also need them to be productive and producing as well. And with your background across analytics, hiring and business performance, I'm curious, where do you think organisations are most commonly misreading the data when assessing AI's impact? I would imagine you've got somewhat of a unique vantage point here with your background. Yeah, I think, you know, the mistakes I see made or, you know,

[00:18:05] difficulties people face, is when they have, you know, targets and measures that aren't clearly defined. So, you know, like I said, with setting baselines and setting new measurements, if it's not clearly defined and easily measurable, it's going to be tough. So, you know, sometimes people get really bogged down in some really complicated calculations, and it takes somebody a lot of time to manually pull information from, you know, various systems to try and make these metrics work, and you're just setting yourself up

[00:18:34] for an awful lot of effort to try and keep that thing ticking over. So that's one thing. I think the bigger problem, though, is when ownership of the outcome is unclear. So, particularly when you look at enterprise-wide adoption of, you know, AI tooling, who cares about whether it's working or not? If there's not somebody waking up every day obsessed over whether it is working and bringing value, and you've said, well, everybody should care about it

[00:19:04] because everybody's, you know, trying to be better, everybody's trying to be more productive. That's great, but everybody is busy. Everybody has BAU work, everybody has distractions, everybody has ad hoc things that come up. So you need somebody who owns that and cares about it deeply enough, like I say, to wake up every day and figure out what's going on with this today, what needs to be set right. And then once you get past the targets and the ownership piece, I think the other thing is expectations.

[00:19:34] Because the hype is so high, our expectation levels are naturally raised. And I heard this quote recently that has really, really stuck with me, around unspoken expectations. And it was: unspoken expectations are premeditated resentments. And I think that's such an apt way to describe lots of things in life when we don't properly explain what we are expecting. But in, you know, in corporate project world, if I'm relying

[00:20:03] on a, you know, on a hype cycle to get my leader to sign off on something, and I don't adequately find out what their expectations are and they remain unspoken, then in a few months' time, even if it's successful by my measure, if it's not successful by theirs, you know, that's your premeditated resentment coming through. So we have to work really, really hard to set expectations of not just what this thing will do eventually, but what it might do over time. You know, we often forget about the ramp and the change over time

[00:20:33] versus expecting some level of overnight success, and I think we've been conditioned that AI will bring overnight success, which is, you know, not true. If you look at the recent story of, you know, OpenAI acqui-hiring the guy that set up Clawdbot, you know, he had like 50 failed projects before that one. So it's still not an overnight success, even though it may have seemed it from the outside. And I have spoken

[00:21:03] to a few people from SmartRecruiters on here over the years, but of course last year was a pretty big year for you guys. I mean, following that acquisition of SmartRecruiters by SAP, how does that broader ecosystem, how's that changing the way customers think about data, scale, and long-term AI value? Yeah, definitely a big year. I know you had Rebecca, our CEO, on previously. So yeah, she had a pretty hectic year last year with all of that work. For me,

[00:21:33] it's incredibly exciting because, you know, I'm a data guy, so I can suddenly now potentially, you know, look at a far broader ecosystem of data. We're in this, you know, suite of other HR and ERP tools now, where we're looking beyond just recruiting and seeing what else is going on in the rest of the customer's world. And I think, you know, when customers looked at compiling their tech stack five, ten years ago, it was all about

[00:22:02] being best of breed and having, you know, the absolute right tool for the right job in a silo and stitching all these things together, and we've seen the fragility of that over time. So there is real power in, you know, a one-ecosystem view and having everything in one place. So, you know, it makes doing things like translating those business outcomes much, much easier. So, you know, if you imagine a customer who is running

[00:22:32] their store ERP on a particular platform, and within that same platform ecosystem they're running all of their people metrics, how easy it would be to then correlate the high-performing stores with high-performing people and be able to dive into, you know, into those kinds of areas. So, for me, it's a massive opportunity for us to work closely with, you know, some new SAP colleagues on helping customers tell that story and helping them

[00:23:01] understand it as well by having it all in one place. So, I think it just makes all of that far, far easier. And as I said earlier, we are recording this relatively early in the new year, we are still in Q1 just about, and to give everyone listening a valuable takeaway, what do you think leaders should be prioritising right now to make sure those AI investments deliver lasting impact rather than becoming just another failure statistic like the one highlighted by Gartner? What should they be doing, do you think?

[00:23:31] Yeah, I think it's easy, it's incredibly easy right now to feel overwhelmed. You know, the pace of everything is incredible. Every day, there's a new model, there's a new tool, there's something else we should be worrying about, and, you know, what was released yesterday is suddenly out of fashion. So, I think in the face of all of that, my advice would be to go an inch wide and a mile deep. If you can't think of somewhere to start, that's going to be the problem,

[00:24:01] because you'll just paralyze yourself. So, pick one problem and go run an experiment on it. Like I said, we've all got so many things going on, you've got your day jobs, you've got escalations, you've got ad hoc things that pop up. So, you really need to narrow that down. So, if you were to go to your immediate team or stakeholders and say to them, you know, if we could just fix or improve one thing in our current workload that would make the biggest difference, what would it be? Just rally around that and get started. You've still got to do all those other things

[00:24:30] I talked about. You have to gain the influence, you have to define the problem well, you have to understand the impact. But, I've seen too many people just not get out of the starting gate because they just don't know what direction to run in. So, just pick one thing, start with that, momentum follows action. Once you get that action going, you'll get in a groove, you'll learn more as you go through it. And I think that's probably the best, most practical thing we can all do right now. I think that's a perfect moment to end on. But before I do,

[00:25:00] we covered a lot there. I know you're passionate about this topic too, but for people listening wanting to connect with you and your team, or learn more about SmartRecruiters, where would you like to point everyone? Yeah, so you can find myself and my team on LinkedIn. We also publish some work on smartrecruiters.com, and I'll give a shout to some of our excellent SAP colleagues as well. If you Google SAP Future of Work Research Lab, there's some really interesting

[00:25:29] pieces of research on there that my team and the SAP team collaborate on as well. Well, I for one enjoyed exploring with you today why that next phase of AI will be defined by proof of value rather than proof of concept and also how data-led people strategies can turn ambitions into real, measurable business outcomes. It's great to be getting back to this stuff. So, thank you for bringing such a clear and pragmatic perspective on what it will take for organisations to make their AI investments count. I will have links

[00:25:58] to everything you mentioned and I encourage people to check those out, but just thank you for bringing this conversation to life today. It's been a pleasure. Thanks for having me, Neil. So, as we continue to race through 2026, I think Chris made so many great points there, but one in particular stuck with me: go an inch wide and a mile deep. Because in a market flooded with new models and bold claims, I think that temptation to just chase everything can be difficult to resist. But I think the organisations that succeed,

[00:26:27] they're the ones that define one meaningful problem, secure the right influence, measure impact properly, and then obsess over adoption. And we talked about things like trust, ownership, and setting expectations clearly, so there are no silent resentments when results are reviewed months later. And we kept returning to that simple idea: if you cannot connect your AI initiative to something the CFO really cares about, of course it's going to

[00:26:57] struggle to survive. Because the next phase of AI will reward discipline over drama. Data-led people strategies, clear baselines, transparent outcomes. This is how ambition becomes measurable business performance. So as you look at your own AI investments right now, are you chasing the latest narrative, or are you building something that will still be standing when the hype fades? As always, let me know at techtalksnetwork.com. You can leave me

[00:27:26] an audio message over there or send me a DM and learn more about how you can work with me. But that is it for today, so big thank you to Chris for joining me and an even bigger thank you to each and every one of you for not only listening but making it right to the end. I wish you a fond farewell and I'll be back tomorrow. Bye for now.