From AI Pilot Purgatory To Real ROI With Bill Briggs Of Deloitte
Tech Talks DailyFebruary 16, 2026
3590
38:23 · 35.14 MB

From AI Pilot Purgatory To Real ROI With Bill Briggs Of Deloitte

In this episode, I'm joined by Bill Briggs, CTO at Deloitte, for a straight-talking conversation about why so many organizations get stuck in what he calls "pilot purgatory," and what it takes to move from impressive demos to measurable outcomes.

Bill has spent nearly three decades helping leaders translate the "what" of new technology into the "so what," and the "now what," and he brings that lens to everything from GenAI to agentic systems, core modernization, and the messy reality of technical debt.

We start with a moment of real-world context: Bill calling in from San Francisco with Super Bowl week chaos nearby, and the funny way Waymo selfies quickly turn into "oh, another Waymo" once the novelty fades. That same pattern shows up in enterprise tech, where shiny tools grab attention fast while the harder work (data foundations, APIs, governance, and process redesign) gets pushed to the side.

Bill breaks down why layering AI on top of old workflows can backfire, including the idea that you can "weaponize inefficiency" and end up paying for it twice: once in complexity and again in compute costs. From there, we get into his "innovation flywheel" view, where progress depends on getting AI into the hands of everyday teams, building trust beyond the C-suite, and embedding guardrails into engineering pipelines so safety and discipline do not rely on wishful thinking.

We also dig into technical debt with a framing I suspect will stick with a lot of listeners. Bill explains its three types (malfeasance, misfeasance, and non-feasance) and why most debt comes from understandable trade-offs, not bad intent. That leads into a practical discussion of how to prioritize modernization without falling for simplistic "cloud good, mainframe bad" narratives.

We finish with a myth-busting riff on infrastructure choices, a quick look at what he sees coming next in physical AI and robotics, and a human ending that somehow lands on Beach Boys songs and pinball machines, because tech leadership is still leadership, and leaders are still people.

So after hearing Bill's take, where do you think your organization is right now: measurable outcomes, success theater, or somewhere in between? What would you change first? Please share your thoughts.


[00:00:04] What if the biggest reason your AI projects are stalling right now has nothing to do with the models and everything to do with the way your organisation thinks about work? Well, today's guest has spent years inside boardrooms where AI ambition is high, patience is low and results are expected fast.

[00:00:24] My guest today is Bill Briggs. He is the CTO at Deloitte, and he's going to join me today to talk about why so many AI pilots never make it past the demo phase, how companies end up burning money by automating broken processes, and what real return actually looks like when AI is done with intent.

[00:00:45] And we'll also get into why trust in AI collapses the closer you get to the front line, how innovation can quickly die when it gets locked in labs with foosball tables and multicoloured beanbags, and what leaders should be rethinking right now if they want AI spend to tie to real business outcomes instead of slide decks.

[00:01:06] So if you're tired of AI success theatre and want straight talk from someone who sees what works and what quietly fails, you're going to love this one. And not only that, my guest is an incredibly great guy and we have so much in common. Despite being men of a certain age, I still go to Glastonbury, he's been to Coachella, we both like pinball machines, and we even share the same wedding song.

[00:01:32] Intrigued? I was hoping you'd say that. Let's get him on now and we'll talk about all this and much more. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Bill Briggs, I am Deloitte's Chief Technology Officer. And if people don't know Deloitte, we're the biggest professional services firm in the world, been around for 180 years. And some might still think of us as tax and audit, which is an important piece of our business.

[00:02:00] But consulting and technology and innovation have been a big thrust for my career, almost 30 years. I get to oversee emerging technology, the investments we make in things like quantum computing and physical robotics and AI. And I help oversee our talent, as more and more of our people's tech fluency is an important piece of how they grow.

[00:02:26] And we extend that to our engineers and make sure we've got everything you'd expect from culture, community, learning environments. And then in the market, we advise clients across the private and public sector on how to make sense of all the exciting advances in technology and translate the what into the so what. And maybe the most important thing is the now what: where do we go, how do we start, and how do we get there as fast as possible without undue risk and all the above?

[00:02:56] And even though I'm in San Francisco now, home is Silicon Prairie in the suburbs of Kansas City in the U.S., which always gives people a chuckle at Deloitte CTO. Oh, wow. Yeah. So I'm curious, what's your backstory on technology? What took you from Kansas to San Francisco? There's got to be a story there, right? Well, no, I'm a computer engineer. I actually grew up in a different part of the Midwest. Yeah.

[00:03:18] And thought I'd be headed into the Valley at the time in the 90s and did an internship with one of the leading telco providers, one of the titans of the tech industry. And over the course of a summer doing automated self-tests for land mobile radios in assembly language, I realized that that was not exactly my path.

[00:03:42] And it just happened that at the time, Deloitte was starting to have more and more of our business being the consulting side of the world, and technology being an important piece of that. So I joined as a fresh-faced undergrad. I know this is audio only. You can vouch for how fresh this face still is, Neil, 30 years later. 100% with you. Yeah.

[00:04:03] And it was a bit of, I'm going to do this until I'm not having fun anymore and still waiting for that day 30 years later. And the glory of what we do is we get to help the biggest organizations in the world with their most strategic kind of bet-the-company problems. And it turns out that's pretty addicting, especially when a big piece of that is how does technology fit in the equation and the technology is always changing.

[00:04:29] And so before I was a CTO, I was just driving as an engineer, as a developer, as an architect, as a strategist. And it's been a hell of a ride. And the short answer is I was in Chicago when I joined the firm, and my wife is from Kansas City. Her parents are retired. So she said, you're a great husband and a great father, and we're moving to Kansas, and we'd like for you to join. And I said, thanks for the invite. And it's been glorious.

[00:04:59] What a great story. Absolutely love it. And listening to your career history there, it's funny how everything lines up to where we are now, because I would say all your experience leads to this point. And your skill set has never been more important, because organizations over the last few years have started to run successful AI pilots, but many of them are still struggling to translate them into business results and find that elusive ROI.

[00:05:27] So from your perspective, what is separating those pilots that stall, that get stuck in pilot purgatory, from those that are delivering measurable ROI quickly and making a real difference to the bottom line? Yeah. Part of it is a story that, honestly, history doesn't always repeat, but it often rhymes. Whenever you look back, a given technology has been made the hero of the story, and you could argue right now it's AI, because there's so much investment, there's so much excitement,

[00:05:58] if it's left to its own devices and looked at in a siloed way, the ceiling's pretty low. And if it's applied in the way we typically apply technology in its earliest days, layering it onto known problems, you get trapped in incrementalism. And those two things combine. You know, the moment we're in now with agentic, and the promise of agentic, is amazing.

[00:06:24] The hope from execs, building off of GenAI into this moment with agentic, was that maybe this means we don't have to do all the hard work that we've been neglecting over the years. Investing in our core systems so they have the underlying transactions and APIs exposed and governed in a way that we could orchestrate them differently and access them differently. The investments in core data foundations, where I've yet to meet an organization that says we've declared victory and we're done on that challenge.

[00:06:54] So the hope was maybe this next wave of technology would make those things less of an inhibitor to progress. It's been the opposite. What we found is the rush for pilots, and if they're isolated small pilots on top of existing processes, they're great for success theater.

[00:07:14] You know, we can stand up on our financial calls and say we've got this much investment and activity happening. But the actual productivity gains, efficiency gains, the way we can unlock it for new business models or new ways to serve customers or citizens, that's a very different motion. And it's this idea of reimagination of business process, not let's take the thing we always did and see if we can make it go a little bit faster because we're putting AI on top of it.

[00:07:41] In fact, that could be the worst thing, because you're going to weaponize inefficiency: you're going to take something that was a cron job running in the background and you're going to start burning tokens instead. So, you know, both things add up to potential disillusionment. Now, the flip side is the organizations doing it right are looking at it from the business problem.

[00:08:04] Bounded, like we want to apply AI to our finance function, and we think we can fundamentally rethink how we do invoice management and journal reconciliation, how we think about cash and treasury management, and apply tech in ways that aren't the way we always did it with our ERP systems. It's like, oh, if we had the intelligence that all of our best financial analysts have and we're embedding it in, elevating their role and work. Oh, my gosh. The power and the impact is tremendous.

[00:08:32] And you've raised so many great points there, especially about legacy tech and burning tokens as a result. It's so true; I've seen that more and more now. And there will be many leaders listening who hear a lot about things like innovation flywheels, but the concept can still feel incredibly abstract. So in plain terms, what does an innovation flywheel look like in the day-to-day reality of running a business? I'm sure you see so many examples.

[00:08:59] I don't expect you to name any names, but what does that look like? Well, part of it is when innovation is a separate organization, kind of siloed off with sexier office space. And part of me being in Kansas City, I collect pinball machines, so I'm going to say this in a disparaging way, and it burns my heart to say it: when you see it's filled with foosball tables and pinball machines, and that's the innovation group, they're off doing their own thing.

[00:09:26] The hard bit is the center of excellence or the skunk works operation: it's really hard to get that embedded into the organization at scale. And we did a study about trust in AI, and it looked at it from an org chart view. In the C-suite, it was 70%, 7-0%, excitement, enthusiasm, optimism about AI adoption. Trust was very, very high.

[00:09:58] Every layer you went down from the C-suite, it was like a log scale, cutting that trust and excitement in half. By the time it got to frontline employees, the most junior staff, it was 6.7%. So from 70% to 6.7%.

[00:10:16] And I think the organizations best at the innovation flywheel figure out how to tap into the passion, excitement, knowledge, and creativity that the folks closest to the business have, in a way that they don't feel like they're putting themselves out of a job by participating. Instead, it elevates their role and position, because they know where the knuckleheadery lives. That's a technical term, Neil, knuckleheadery.

[00:10:46] And they need to feel empowered. Now, we just did the State of AI in the Enterprise survey. We do it every year; it just came out last week. And one thing I was encouraged to see is a 50% increase in the number of organizations that have basically given AI tools to all of their people. It went from 40% to 60% in this last year.

[00:11:07] And I think that's part of the unlock, especially as more and more people are experimenting at home in their private lives and then feeling hamstrung when they come to work. If it's a separate AI group, a separate innovation group doing all that work in a vacuum, they don't see it. There's a reason why they're going to feel suspicious at best and skeptical at worst about it. So, yeah.

[00:11:34] And I also think, as AI continues to evolve at extraordinary speed, we're already adding agentic AI into the mix, with organizations unleashing thousands of agents out there. As an ex-IT guy, I do want to approach this with a bit of caution. So how should organizations rethink that balance between people, data, and guardrails so they can continue to move fast without losing control? That's right.

[00:12:02] And one of my favorite stats from our Tech Trends, we do our Tech Trends report every year, this is the 17th year we've published it, was the finding that of all AI investment, 93% is in the tech and 7% is in everything else, which includes the culture, the learning, development, and training, the policies, guardrails, process controls.

[00:12:25] All of that gets short shrift, and doing it after the fact is a scary endeavor, because the genie's out of the bottle. If you've been following everything going on with OpenClaw these last few weeks, a lot of folks dove in on their private servers and gave it full access. And the count of how many malicious libraries and skills are out there is growing.

[00:12:54] And, you know, so there's a bit of a lesson: we want to unleash the potential. If we invest in our own, and I'll call it the innovation process, but it's really the engineering process. If we say, hey, we're going to have sandboxes and ways for people to explore where they can't really get into trouble, and we're going to democratize that in a way.

[00:13:18] And then, as we make investments we expect to go to production at scale, which is the best way to think about innovation. The idea is, if we're going to start dabbling, we want our seed investment to become a Series D and go public. So as we begin the investments in these early pilots, how do we embed security, privacy, management, rigor, and discipline?

[00:13:45] You know, this comes down to how you have governors and thresholds that you're watching, just like we did with virtualization and cloud, and even more important with models and agents. Embedding that in the engineering platform and lifecycle means it's not like we're hoping and wishing and praying. I'm a big Beach Boys fan. This isn't "Wouldn't It Be Nice," where we hope and wish and pray it might come true that our people will follow the things we say are the responsible way to deploy technology.

[00:14:11] It becomes embedded in the environments they automatically instantiate. It gets embedded in their pipeline, which they don't really want to deal with anyway. I'm an engineer; that was not the way I wanted to spend my day, getting things packaged and ready for release. Right. So we automate those steps. We embed the policies and controls inside of it. And then everything we're delivering...

[00:14:38] It doesn't mean there aren't additional steps to make sure we're taking a look before it goes to production at scale. But we have higher confidence because it's being built on known safe libraries, tools, and frameworks, with rigor, and we're thinking about overall cost, deployment, maintenance, and monitoring from the beginning.

[00:14:57] And there's a chapter in tech trends about how 99% of the organizations we talk to around the globe, all industries, 99% are fundamentally saying we need to transform our IT function. And I've yet to meet the 1%. And I'm not trying to shame anyone. So if a listener out there is one of the 1%, I'm hoping that means it's because you've already done it. And I'd love to hear the story. But if you haven't started, this is the wake up call to say it's time. Yeah, yeah.

[00:15:26] And you mentioned a few moments ago the danger of putting new tech on top of old tech. And there's something even more dangerous, and that is that age-old phrase of "we've always done it this way." Yet so many continue doing it that way. One of my favorite quotes is from Grace Hopper, one of the founding mothers of computer science: the most dangerous words are "we've tried this before" or "this is the way we do it," right? Yeah. Either one will do.

[00:15:50] So I'm curious, on that side of things, where are you seeing teams automating existing steps when actually the bigger opportunity is to redesign the process entirely from the ground up, so it's fit for the next stage we're heading towards? Yeah. Unfortunately, many places, because it's the easiest and obvious way to go. But I was with a client a few days ago who's in a services-based business, the CEO,

[00:16:16] and they're saying we need to fundamentally move to outcome-based delivery instead of time and material or effort-based, and we need to invest more in product and platform, new business model. And it's a sizable enterprise. But it takes the CEO to say it's time.

[00:16:40] And I don't know about your travels, Neil, but between what motivates more, desperation or inspiration, I find desperation tends to have a bit more of a quickening of the pulse. Yeah. Now, we like to try to inspire too, but when you can tie it to market shifts and competitive pressures on business models, that becomes a way to never let a crisis go to waste.

[00:17:09] And the good news is the technology has advanced so much. We're spending a lot of time on AI, but there's a huge amount in physical robotics, there are interesting advances in infrastructure and quantum, there are all kinds of things in the halo that also matter a lot. But even if we didn't have a single further advancement, which is impossible because of the trillions being put into R&D across players big and small, I could look any executive, any board, in the eye and say,

[00:17:39] we can fundamentally improve and transform your business, full stop, with the toolkits we already have that are proven. And then the danger becomes, sometimes there's a feeling of, well, let's wait until the next release of the foundation models. Let's wait because we don't have a perfect strategy, and the world is changing fast, and we don't want to have buyer's remorse.

[00:18:05] And paralysis is not a strategy and perfection is impossible. So how do we architect technically, how do we architect contractually, financially expecting change, but not find ourselves stuck with the inertia of we don't know where to start.

[00:18:34] Someone else will be in your seat if that is your strategy. If you're the CIO, the CTO, the CEO, the chair of the board, someone else will be in your seat soon who won't have that posture. Yeah, I completely agree. And you did mention how there are trillions being invested at the moment, but due to some failings in previous experiments, technology budgets are under growing scrutiny for value and ROI.

[00:18:59] So what kinds of changes are you seeing that help organizations connect AI and digital spend directly to business outcomes rather than activity or experimentation? There seems to be a much stronger focus on that this year. Yeah. One of Deloitte's businesses, our ConvergeCONSUMER business, is a platform that helps understand customer sentiment and embed it into very different segmentation and pricing, AI-driven, of course.

[00:19:25] And it's something we've shifted to, you know, as our own business evolves from just services into tech products, platforms, and outcomes. It applies for sure in consumer, retail, and hospitality, but we're also seeing life sciences companies and government agencies. You know, the idea of understanding who your citizen or customer is in a deeper way and how to serve them better is amazing. A big part of those investments in AI is not about the tech,

[00:19:53] though the tech is tremendous, and we built it on top of alliance ecosystem partners and co-investment. We've got a great relationship with 100X, which is a differentiated data set that captures actual consumer sentiment, not surveys and opinion, and ties it to financial outcomes and value. So, yeah, it shifts it. It's not about the tech.

[00:20:18] It's, hey, how would you serve your customer differently if you knew them at a very different, intimate level, as an example. Whenever I hear, hey, we're applying AI to our manufacturing process, my question is: for what? And there are a lot of good reasons to do it. Hey, we want to optimize sourcing and procurement in the supply chain. We want to do predictive maintenance on the line. We want to...

[00:20:47] Those are the ones you can point to. And in almost every case, there's a benchmark. There's a recipe, Neil, for the things that let you say with high confidence we're going to be able to measure value and return. One is it being applied to an operations or business process or a customer event that actually has a measurable outcome. Might seem obvious, but it isn't always there.

[00:21:13] And then, can you benchmark today, and can you benchmark the performance gains you're expecting, in a way that's trusted, not that success theater? Those things become the ingredients to, okay, now we'll figure out the pieces of technology. Whereas the feeling is often we want to get the shiniest object. That's typically the starting point. We read the headline. The vendor pitched us a press release.

[00:21:40] And a lot of times it's like, maybe. Or could we get to the same output with just traditional optimization techniques and machine learning? Do we need to identify it to start? And that's okay. Like Shakespeare: what's in a name? Capulet or Montague, a rose by any other name would smell as sweet.

[00:22:04] If we care about applying tech for outcome and impact, the bill of materials matters, but it shouldn't be the draw. And I love the line you used there about success theater. I think we've all seen that. We've also seen a lot of negative stories in our news feeds around ROI and failing or stalling projects.

[00:22:25] But I'd love to get the balance back in the universe and talk about where you have seen leaders get that alignment between technology, strategy, and business right. You must see a lot of this too, right? Yeah, 100%. And again, in the State of AI study we just did, we're seeing 84% of organizations don't have dedicated jobs or roles for this, which is an issue.

[00:22:54] So the things that we don't get right are a good way to flip it and say, how can we get it right? So how do we dedicate people whose job it is to be able to understand where we should be investing, deploying, and taking advantage?

[00:23:07] And then again, flipping it from this abstract "we're going to AI-ify our business" to individual line-of-business leaders, individual functional leaders saying, hey, we're going to apply it in HR for how we hire and deploy our people, if you're a services firm, or we're going to deploy it into finance to get X percentage better return on our working capital.

[00:23:35] You could say that feels like the right discipline and hygiene from technology investments from the beginning of time. And I would agree. And part of the reason why we've seen this flurry of headlines about disappointment is, back to the point at the beginning of the conversation, there was a hope that AI would make that different.

[00:24:00] And if anything, it's just hardened the rules that have been true and made them even more true. Now, the skill sets we need are fundamentally different. As a computer engineer, I used to pride myself on the code I'd write. There's a very different skill set that the best engineers have already now, and will need over the next five to ten years, where managing agentic architects and engineers is going to be the value, not how fast you can sling code.

[00:24:29] And I was always a vi guy. I didn't like to go into the GUI for my IDE. So I was very proud of my macros in vi and how quickly I could work a code base. That's cute, and it still makes me feel good, because it's part of how I got to where I am. But that's not the formula for tomorrow. A problem that we've navigated around a little today is technical debt. Another huge problem right now.

[00:24:55] I was at AWS re:Invent in December, and they took all the tech journalists out into the middle of nowhere and blew up a load of servers, saying, we're going to blow up technical debt and get rid of it forever. It was a nice little move, a bit of success theater there, maybe, again.

[00:25:12] But I mean, when people or businesses are trying to keep up with the pressure of technological change while still operating those legacy systems, that technical debt, how should they prioritize what to modernize first? Yeah, it's the surgical approach. And the answer was never mainframe bad, cloud good, even if we put TNT in the desert and blow up a bunch of Z-series.

[00:25:41] Targeting it based on, one, the overarching architecture approach we're going to go after as we look to modernize, which is a step not enough folks take. Because moving it from one infrastructure stack to another without refactoring it and without remediating it doesn't make any kind of sense. And it's not that we have to tackle the totality. We've got some public sector clients that have hundreds of millions of lines of code. Not every line of code is created equal.

[00:26:11] Not every system matters the same way. So there are things we do to go in and use AI to understand just the frequency of change and the number of dependencies and interfaces any given system has. And how well described and encapsulated are the services, transactions, APIs, which in the old days would have been copybooks and RFCs. And how close are they to a microservices kind of mindset?

[00:26:39] And what is the financial state of the stack? This is the MBA I have on top of the computer engineering. If it's a fully depreciated asset that's infrequently updated, isn't anywhere close to performance or scalability limits, and has transactions exposed and data defined in ways that can participate in a modern agentic architecture,

[00:27:09] why would I spend the money to go and tackle that, other than as an empty bullet in an update to my CEO about the progress we're making on this big concept of technical debt? Flip side: I've got a system that has reliability issues, performance issues, potential limits I'm going to reach on the transaction volume it can support, and it's being updated all the time.

[00:27:38] We know it's a coefficient of complexity on any project that has to talk to it. It's part of a critical business function or represents a critical piece of customer or product knowledge. Those are the ones we circle and say, there's a ton of juice in that squeeze, let's go after it. And then we'll apply AI not just to do the porting, the lift and shift; we'll say, let's go in and re-engineer it to participate in what we expect more of the future workload to look like,

[00:28:08] which is going to be AI embedded and agentic and orchestrated. But what I love, Neil, is when I'm with boards and CEOs talking about technology trends, we include the topic of core modernization, not as you've got to eat your vegetables if you want to have your dessert.

[00:28:27] Right. It's that a balanced diet of tech innovation needs to include investing the right way in the things that matter deeply. It becomes a fulcrum of innovation. And I've, honest to God, had CIOs and CTOs in the boardroom give an amen, hallelujah, literally vocally, because someone is telling the chair and the CEO and the CFO what they've been trying to get across,

[00:28:57] maybe for all their careers. Right. And it's elevating it to be strategic. It's not the CIO's burden. And by the way, most technical debt... I've got the three forms of technical debt. You might like this. Yeah. The first of them is malfeasance.

[00:29:18] Bad tech execs and shitty architects and dishonest consultants and the wrong vendors delivering shoddy code: poorly designed, poorly engineered. That is such a small percentage of technical debt. I'm not going to say it's zero, but it's such a small percentage. People, especially non-technologists, when they think about technical debt and legacy, that's typically the bias they have, even if they don't voice it: mistakes made in the past that we're paying interest on now. Very small percent.

[00:29:48] Misfeasance: perfectly justified trade-offs and decisions we made. We told ourselves, we'll move it to the next release, we'll come back to it then. Right. There was a reason we had to take a shortcut, and we all did it eyes wide open. We all agreed. And then people forget the why, and generations later you're trying to defend decisions that were made with everyone's full agreement. That's the vast majority.

[00:30:16] And then there's a small piece, bigger than the first, much smaller than the second, which is non-feasance: they were the right decisions made at the time; it's just the technology has moved on and it hasn't aged well. Like, I have an MQ-based CICS bridge I built for a public sector client in Chicago 20 years ago, which was elegant as hell: service enablement of their entire stack when that wasn't even a concept people talked about. I'd do it differently now. Right.

[00:30:45] So part of this is, how do you change the framing and the temperament behind core mod and legacy debt? Because that bias, people might not talk about it, but it's real. And the other problem is people don't measure it. So it's this amorphous sword of Damocles hanging over us that, you know, feels like it's...

[00:31:11] You can feel the gravitational pull of it, but it's hard to understand exactly how big it is and what to do about it. Lovely. And finally, I always try to give my guests an opportunity to lay to rest any myths or misconceptions they commonly see in their news feeds, and I'm sure you've got a few of these. So is there a myth or misconception you've seen out there that we can lay to rest today? Let's see if we can set you up for the weekend and offload them. Okay. Yeah.

[00:31:39] Well, maybe I'll riff on, pull a thread on, what we said before. Just like mainframe bad, cloud good wasn't the answer. Yeah. There's a decision that needs to be made around infrastructure. And one of the chapters in Tech Trends is about how hardware is eating the world, a redux here about AI chipsets and edge.

[00:31:59] And the point is, we might want to continue to have highly nimble access to costly capabilities in hyperscalers, by-the-drink, consumption-based. That's a part of the strategy. But we also have to balance it with where we want to invest in our own dedicated infrastructure that we have more control over, and then balance it with the edge.

[00:32:25] So, whenever we see a headline that says one of those is the answer, it's never that simple. It never has been that simple. It's not that simple today. And then, and I'll leave you with this one, because I know it's getting late in the UK for you, even if it's early in California for me. The other one is physical AI and physical robotics is definitely the next frontier.

[00:32:49] And there's a lot of progress that will be made that doesn't include humanoids, though we're excited about where the humanoid form factor is going. But that tends to get a lot of the airtime for good reason, because it taps into our science fiction curiosity.

[00:33:09] But wheeled conveyance and dedicated arm-based robots and drones and many other form factors are being deployed right now and having tremendous impact and value. Love that. And one question I've got to quickly ask you before I let you go, listening to your story today: favorite Beach Boys song and favorite pinball machine? What are they? Okay. God Only Knows is my wedding song. Oh, mine too. Is that right? Yeah, yeah, yeah.

[00:33:37] Did you have the same awkward moment in front of all of your family and friends at the first line of the song, the "I may not always love you" part? Yep, you know it. Yeah. And we kind of looked at each other like, well, the rest of it gets better, for everyone that doesn't know. Carl Wilson's voice is angelic. And then my favorite pinball machine is sentimental. In my office, if I turned my camera, if I was home, is the first machine I ever bought, back in 1998.

[00:34:07] It's Tommy. It's The Who. Oh, wow. So it's Tommy, The Who. And I still have it. And back at the time in Chicago, we didn't have enough room to have more than one. And so it was a bit of an art deco statement, which is kind of what all of our friends thought it was. And the guy I bought it from, Craigslist, like $300, and, you know, it's worth five grand now. But he brings me to the basement. He's like, I've got to warn you, they have friends, this poor gentleman.

[00:34:37] So if he's listening, I apologize, because my wife and I mocked him because he had a basement full of pinball machines. And we were buying this one that was going to scratch my hobby itch. It was supposed to be a bit of an artistic statement. And now my wife is married to that guy. She still laughs about it, but she has a different person she's thinking about.

[00:34:58] But part of moving to Kansas City, you can have secret doors in your basement and pinball machines and workshops and all those things. Oh, man, what a beautiful and incredibly cool moment to end on. But just to bring it back to work for a moment. For everyone listening who wants to find out more information, we referenced the Tech Trends report there. Anywhere you'd like me to point everyone listening? I'll add a link to the show notes. Yeah, we've got a dedicated landing page for it.

[00:35:26] There's a ton of material, long-form research, and then every one of the chapters. And we hit on a few. Agentic, physical AI and robotics, the AI infrastructure push, the future of tech. We didn't spend as much time on cyber, but a big piece of it is on cyber, for offense and defense. So we'll make sure everyone can get a link to the report, the videos. I'm hosting a podcast, Forward Technology, with Deloitte's Chief Innovation Officer and Chief Human Officer. So that's launching soon.

[00:35:54] So keep an eye out for that. And then the State of AI in the Enterprise survey is a great piece of work that was just published. So we'll make sure to share that as well. Well, I will include links to everything. I'll stay in touch with your team and ensure that we get the link to your podcast when that's launched as well. And you've got yourself your first subscriber there. I love it. Yeah. All right. Perfect. One subscriber at a time. You need to do more podcasts.

[00:36:23] But we covered so much today, from turning AI pilots into real results, to tech spend and tying it directly to business outcomes, and even talking about an incredibly cool band and pinball machine too. But thank you so much for bringing this to life. Really appreciate you. I think if there's one takeaway from this conversation, it's that AI doesn't fail because it moves fast. It fails because organizations don't change how they work around it.

[00:36:50] And Bill made it clear that real progress comes from rethinking processes, rebalancing investment towards people and guardrails, and giving teams the confidence to experiment without fear and blame games. So whether you are wrestling with technical debt, stuck in pilot purgatory, or wondering why trust in AI evaporates outside of the C-suite, there were plenty of moments here worth sitting with, I think.

[00:37:20] So check out all the links in the show notes. There are lots of valuable takeaways and stats in there too. So please check that out. And I'd love to know where that landed most with you today. Are you seeing real AI outcomes where you work? Or are you still chasing them? Or are you just a pinball guy who wants to share your wedding song or favorite machine? Whatever it is, I'm the easiest guy in the world to find. We can keep this conversation going outside of the podcast.

[00:37:47] Believe me, I'm a guy that's pretty hard to shut up, which probably comes across from recording so many episodes here. But I'm more of a listener than a talker. So please, techtalksnetwork.com, send me your messages, send me anything that resonated with you. I'd love to connect with you one on one too. But I have taken up too much of your time today. I'll be back again tomorrow. And hopefully you'll meet me here, same time, same place. Bye for now.