What if the real AI race in 2026 isn't about building bigger models, but about where decisions are made, how fast they happen, and whether they deliver measurable value?
In this episode, I'm joined by John Bradshaw, Director of Cloud Computing Technology and Strategy at Akamai, to unpack his predictions for the next phase of cloud, AI inference, and the economics that will shape enterprise technology over the next 12 months. As organizations move beyond experimentation, John explains why the boardroom conversation has shifted from capability to return on investment, and how spiraling compute demands are forcing leaders to rethink the balance between performance, cost, and innovation.

We explore why this new financial scrutiny is not slowing AI adoption, but refining it. John shares how inefficient GPU workflows, centralized inference, and poorly aligned architectures are being challenged by a more disciplined approach that pushes intelligence closer to the edge. This shift is not only about latency and performance. It is about building scalable, value-driven platforms that can support real-time decision-making, agentic workloads, and global user experiences without breaking traditional IT budgets.
Trust is another major theme throughout our conversation. From the rise of everyday AI agents that quietly handle routine tasks to the growing importance of secure, resilient inference pipelines, John outlines how low-latency edge infrastructure, local processing, and hybrid cloud models will redefine reliability for both enterprises and consumers. We also discuss the smart home backlash following recent outages, and why the next generation of connected products will be designed to work even when the network does not.
The episode also looks at the future of streaming, where consolidation, intelligent content delivery, and AI-driven personalization are reshaping both the user experience and the economics behind the platforms. Behind the scenes, orchestration is emerging as a defining capability, with multiple models and services working together to validate outputs, reduce hallucinations, and create more dependable AI systems.
This is a conversation about moving from possibility to production, from experimentation to accountability, and from centralized architectures to distributed intelligence.
So, as AI becomes embedded in every workflow and every customer interaction, will the winners be the companies with the biggest models, or the ones that know exactly where their AI should live, how it should be orchestrated, and how it proves its value every single day?
Useful Links
[00:00:04] How much faith should we really be putting into AI right now? And what happens when that hefty cloud bill lands on the boardroom table? Well, my guest today is somebody who has the answer to that. He sits right at the intersection of cloud infrastructure, AI inference, and the very real-world pressures to make this stuff work at scale.
[00:00:26] So from the shifting economics of AI to the growing role of the Edge, my guest today spends his days thinking about where decisions should happen, how fast they need to be made, and what happens when trust breaks down. This is a conversation about what this year might look like as the hype that surrounds AI continues to fade, and the trade-offs that we're seeing become unavoidable and must be challenged and tackled.
[00:00:57] So, big question. What does smarter, more disciplined AI adoption mean for businesses and indeed consumers alike? Well, it's on that note that I will officially introduce you to my guest now. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do?
[00:01:21] Yeah, absolutely. My name's John Bradshaw. My role is really to help articulate what we do as an organization at Akamai, how we're there to support our customers. And if I can use some of the experiences I've had over the years in different roles to help translate some of the more complex or esoteric bits of technology into something that's meaningful and useful, then that's what I do.
[00:01:49] But the actual title is Field CTO for Cloud for the EMEA region here at Akamai. And there's so much I want to talk with you about today because I think when boards look at AI spending this year, many are questioning whether the returns justify the compute and GPU costs. So from your perspective, from what you're seeing in your conversations with your customers and beyond, where are enterprises typically misjudging AI ROI today?
[00:02:20] Tech does love a couple of good acronyms, don't they? Absolutely. And also, what separates the projects that deliver those real business outcomes from those that are quietly paused or stuck in pilot purgatory? You're absolutely right. And I think there's a big difference this year. And 2026, in my mind, is absolutely the year of ROI. We're going to see organizations move away from personal productivity improvements.
[00:02:47] We've seen some of the big hyperscalers, some of the big enterprise collaboration businesses revise their numbers around what they think they're going to be able to do this year. And a lot of that's to do with the difficulty in measurement. So if you're able to save each of your employees five minutes a day in what they're doing, that's great from a quality of life point of view.
[00:03:12] But there's no FD in the world who can measure that and say, yeah, I've managed to return $20 million worth of productivity to the business. It's not going to cut it. So this year is going to be focused on how do I grow my business and how do I materially save money within the business?
[00:03:31] So from those two sides of the coin, you've got tools and programs that will, let's say, drive hyper-personalisation, create a meaningful relationship with your end customer, and therefore help increase attachment rates, improve the checkout experience, and turn over more, all the way through to reducing costs.
[00:03:58] So how do I avoid another hire? How do I redeploy personnel, materials, or equipment to the right location? So let's say you're selling ice creams around the country. How do I use AI and weather data to predict where this group of people would be much more interested in buying ice cream,
[00:04:22] even though it's raining? I say that living in Scotland, where we have some amazing ice cream shops that are open all year round. Even on the coldest day, we want to be buying ice cream. All the way through to, how do I make sure I've got parts for jet turbine engines that I think might go on the blink shortly? And how do I do predictive maintenance on that? Well, I need AI, I need the people, I need the materials in order to do that.
[00:04:49] So it will be that shift from personal productivity into growth and cost saving at scale. And another big focus for many companies now is rising cloud costs. But one of the things that attracted me to you is you've spoken about cloud costs becoming a lever rather than a limitation. So how should leaders rethink cloud and AI architecture decisions?
[00:05:17] So things like performance, cost control and innovation can all coexist rather than compete against one another. So I've been fortunate to do a number of different roles, both customer side and this side of the table, where I'm trying to articulate value and how to work. And what I've noticed is those teams that can partner with the rest of the business to drive outcomes,
[00:05:47] get more flexibility and are better placed to drive change. So if you can go back to your business and say, look, if I save you a thousand bucks this year, can I spend half of that doing this cool thing, which I think might pay off in a year's time? Well, pretty much everyone that you meet is going to go, yeah, absolutely. I will take that. That sort of shared benefit or shared outcome approach is brilliant.
[00:06:16] Now, if you can do that, you can start to demonstrate to the business your value immediately from the IT side of the world and start to put things in place to support growth objectives or help your team train or build additional skills within the department. That lets you use cost as a lever. So it's okay if your cloud bill is going up, if that's commensurate with your revenue.
[00:06:43] It's similarly okay if it's declining and that's commensurate with all of your other costs decline, but your revenue staying stable. So it's about using it as a lever in order to define the business outcomes.
[00:06:56] If you do that, if you get your teams to buy into the vision or the value you provide, then you've moved out of that service provider model into an actual partner with your business. And that's a very different place for internal IT teams to be. And here we are recording this towards the beginning of 2026.
[00:07:21] I've already been to a few tech conferences in the US and, predictably, all the big topics are around agentic AI and working with AI agents. So one of the reasons I wanted to bring that up is you predicted that by the end of this year, people will routinely delegate everyday tasks to AI agents without second-guessing the outcome.
[00:07:44] So what do you think needs to change technically and, probably most importantly, culturally, before that level of trust becomes normal behavior? It might feel a little way off now, but I agree with you. Towards the end of the year and into next year, this is going to feel normal. But what needs to change to get to that place? So it's a funny one. And I think of AI as a whole as kind of a really eager intern. Now, I've been an intern. I've been a grad.
[00:08:13] And when you're in those positions, you're desperate to help. And you'll always say yes, and you'll want to do stuff. And AI has been very guilty of being overly positive and always trying to confirm what you want to say. I think we're going to see a move now to it becoming a more stable and responsible partner to you.
[00:08:39] So out of that grad phase, into the two-or-three-years'-business-experience kind of person that starts to demonstrate that value. But a lot of that's to do with things like the ability to influence system prompts and to demonstrate the way that you can reflect your own controls within that. And we'll see this move away from it just being really happy to please you to being more challenging and responsible.
[00:09:08] But that goes hand in hand with the regulatory frameworks that we all operate under maturing at the same pace. So three, four years ago, the idea of using AI to do anything seemed a bit, I don't know, it's a bit novel. I'm not sure we're keen on this. And we're starting to see more and more people get comfortable with it generally.
[00:09:34] And as we progress through the year, it will get to that point where it's considerably more right than it's ever wrong. So we've moved out of that confirmation bias, hallucination mode, which still happens on occasion as we've seen, no doubt, with some public stuff recently, to it generally being absolutely right. And if you're saying to an agent, look, could you book me a doctor's appointment for Friday, please?
[00:10:04] That's hard as a process to mess up unless it suddenly decides you're in Madrid and tries to make you an appointment there. That will be a little bit harder. But we'll also start to see it reflect into some more business processes. So how do I get my agentic system to react to a customer demand that is much more intelligent than those older chatbots,
[00:10:30] which were essentially press one for this, press two for the other, into a more dynamic system, which is able to make decisions independently. So you'll see, let's say, things move away from having to phone into the call center to cancel an order, to being pretty comfortable with a web chat where your agentic bot is able to go, yeah, okay, you're Bob. This was your order number. This is how much it was. It was due to come on Friday.
[00:10:59] I'm going to cancel that one, but not cancel the order for Saturday. So it will be able to cope with more complex demands on it. But this is going to be a comfort level thing that happens over time. It's not going to happen by the end of this quarter, but I definitely think by the end of the year, it'll almost be second nature. And when I was doing a little research on you before you joined me today,
[00:11:24] I was also reading that edge AI plays a central role in your outlook for the immediate future too. So on that side of things, why does moving inference closer to users, why does that improve reliability and trust so dramatically? And for people listening, what practical differences will they actually notice in their daily digital experience as a result of this, do you think? Yeah, so there's a few elements in that. One is simply around speed of response.
[00:11:53] So if you land on a web store site, you need the recommendation engine to be as close to instantaneous as possible. We all know that for every 100 milliseconds of delay on a site, your drop-off rate massively increases. So if you're having to backhaul a recommendation from LA all the way over to London, that's not going to be a great experience for anyone.
[00:12:22] But the second part of that is on the trust side. So consumers are becoming more and more conscious of privacy, as are businesses. They don't want their data to traverse geopolitical boundaries or national boundaries. They want it to stay within the area where it's protected, where they designed their control sets for it. So doing inference within a few miles, or tens of miles, of the end user is a better experience,
[00:12:51] but it's also much more compliant as a consequence. That then feeds into these agentic workflows. So if you're trying to manage a process with lots of different inputs from all over the place, and let's take a supply chain example where you're trying to just-in-time pull a part from a different store and then send it on a robot somewhere, and all of those sorts of workflows, you can't be moving that data and the decision-making process everywhere.
[00:13:20] It would be the same as phoning Hong Kong to see what part number needs to go into this thing when your manufacturing plant is in Birmingham. You know, why would you bother doing that? That just seems mad. And I also think over the last 12 months, we've seen a lot of outages and security incidents. And as a result, confidence in things like smart homes and connected devices has been somewhat shaken.
[00:13:48] So how do you see things like hybrid cloud and edge architectures maybe helping to restore trust? And what mistakes should manufacturers avoid repeating as well? I'm sure you see and hear lots of stories around this, but tell me more about them. Well, I think it's a challenge where people have decided, right, we want to make something really clever. Absolutely. That makes perfect sense.
[00:14:13] And the best place to get lots of access to compute and all this other technology is in these big data centers in different parts of the world. And that's great when everything's working. Well, as soon as you have a disruption on that line, it could be your local broadband. It could be DNS going down. It could be any one of these things. Then you've got a very brittle workflow.
[00:14:37] And there were some stories of people's smart beds not working because they couldn't connect to their data center in the cloud and therefore couldn't adjust the settings on the bed. Now, why do you have to go all the way to a major data center to adjust the level of this bed? That doesn't make any sense. So what we'll start to see this year is more of a continuum of compute.
[00:15:02] So there'll be on-device capability to cope, not quite out of band, but if it's disconnected for a period of time, all the way through to, what can't the device answer because it hasn't quite got enough compute local to it? So a small specialized language model isn't going to quite cut it. So we might need a little bit of extra oomph, to use a technical term, at the edge to try and answer that question.
[00:15:29] And if the edge can't answer it, right, well, then I'll expand out to an even bigger capability. And that approach, coupled with not putting all your eggs in one basket, so not just picking a single DNS provider or a single GPU provider, cloud service, whatever it happens to be, and running in this heterogeneous type of environment where you can move workloads
[00:15:54] where they're going to be most secure, most performant, most cost appropriate, is going to help balance these things out. And what I've certainly seen around the AI space is people aren't looking at their existing cloud provider to do their AI service. And some of that is a tacit recognition of cloud lock-in, having been a problem for the last 15, 20 years, but no one really addressing it.
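The device-to-edge-to-cloud continuum John describes is essentially a fallback cascade: try the smallest, closest capability first and escalate only when it can't answer. Here's a minimal sketch of that pattern; all the tier names, toy models, and routing rules are hypothetical illustrations, not anything Akamai-specific.

```python
# Sketch of a "continuum of compute" fallback cascade: try the
# on-device model first, escalate to an edge inference service,
# and only then fall back to a large centralized model.
# All tier names and toy capability checks here are hypothetical.
from typing import Callable, Optional

Tier = tuple[str, Callable[[str], Optional[str]]]

def answer(query: str, tiers: list[Tier]) -> str:
    """Walk the tiers in order; each model returns None if it can't answer."""
    for name, model in tiers:
        result = model(query)
        if result is not None:
            return f"[{name}] {result}"
    return "[unanswered] no tier could handle the query"

# Toy stand-ins: the on-device model only knows one local command,
# the edge model handles short queries, the cloud handles anything.
def on_device(q: str) -> Optional[str]:
    return "done" if q == "adjust bed angle" else None

def edge_model(q: str) -> Optional[str]:
    return "done" if len(q) < 40 else None

def cloud_model(q: str) -> Optional[str]:
    return "done"

tiers = [("device", on_device), ("edge", edge_model), ("cloud", cloud_model)]
print(answer("adjust bed angle", tiers))   # handled locally, no network hop
print(answer("summarize my last ten orders and flag anomalies", tiers))
```

The point of the pattern is exactly the smart-bed example: a disconnected device can still serve the "device" tier, so only the queries that genuinely need more compute ever leave the home or the edge.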
[00:16:20] All the way through to, this market is moving so quickly. Do I really want to place a bet with provider A? When B, C, D, E, and F have this really new capability that looks amazing that I want to try. And that can be anything from some of these coding agents, which are really nifty, through to co-working tools or responsive chat services.
[00:16:49] That ability to go, no, I don't want to be there today. I'm going to go over here and back, vice versa. That makes a big difference. And the ability to start A-B testing things as you're working through that becomes critical in taking best advantage of those services. And if we look even closer to home at how we access entertainment now, I think many people over the years have cut the cord to traditional cable and satellite packages
[00:17:19] in favour of streaming services. But they now find themselves with an increasing list of subscriptions and saying to themselves, why am I paying a company to just watch ads in a lot of these services? And streaming services themselves, they're facing rising costs, fragmented audiences, and the dreaded subscription fatigue. So as this market matures, I'm curious, how do you see AI and infrastructure choices shaping which platforms might thrive and which might struggle to keep pace?
[00:17:49] Because it feels like there's a lot of competition in this area now. Yeah, I agree with you. I think it's interesting. Certainly after COVID, we had the explosion of providers with specialisms in different content categories. And I know when I travel for work, I seem to come home and the kids have signed up for yet another streaming provider. And honestly, I feel like I spend more now than I've ever spent on entertainment.
[00:18:17] I think the distinction is going to be in that user experience. And I don't know if you've experienced this, but I know I've spent 20 minutes going through a catalogue and going, yeah, that looks interesting, but I'm not quite in the mood for that. Maybe this, maybe that. And what I haven't seen yet is any catalogs that reflect that in a deeper sense. So yes, you'll get, oh, here's an action movie or here's a set of movies we think you might like.
[00:18:46] But what they tend to lack is context. So what I might want to watch on Friday night after we've managed to put the kids to bed is not the same thing as what I want to watch at lunchtime on a Tuesday. It's just not. So that lack of both location awareness, temporal awareness, those contexts are missing from those
[00:19:12] catalogues. And the organizations that can actually start to build a profile of their users and reflect that are going to be much more successful. Because, I don't know about you, but I cannot spend 20 minutes going through a catalogue going, yeah, maybe, by which time it's bedtime and I really need to go, because the kids are going to be up at six in the morning or whatever it happens to be. Yeah, I'm completely with you there.
[00:19:37] And even if there's just a film that you want to watch, finding out which platform that particular film is on can be a challenge. There's one service I use, I think it's called JustWatch, where you tell it what platforms you've got and it'll tell you where everything is. It makes it a little bit easier, but you do still find yourself searching 20 minutes for content, which is not ideal. And of course, outside of home entertainment, another big topic this year is AI inference.
[00:20:05] We've already mentioned it several times in our conversation today. So why do you think this year is the moment when centralized inference will maybe start to become a bottleneck? And how does pushing decision-making closer to the edge change what applications are possible? Any big changes you see here? Yes, I do.
[00:20:27] And I think a lot of it comes from the growing understanding that there is a difference between you signing up for a service that sits on your laptop and costs you £20 a month or whatever, through to an enterprise scale version. Now, we've all seen the different models from the different providers and they're exceptional.
[00:20:51] The challenge is that they pretty much contain the sum of all human knowledge and experience to date. And as a consequence, to process that, to refine the model, to build the next 5.3 or 6.0 or whatever the model ends up being, requires thousands, hundreds of thousands of GPUs crunching through all of these data sets in real time or near real time.
[00:21:17] And the outlay for these providers is in the millions, if not billions, of dollars. The problem with that is that if you're running a website that sells trainers, you're having to pay for that. Now, do you need your inference engine to be able to write you an essay on the Peloponnesian War? Or do you need it to tell you which trainers go with which pair of jeans?
[00:21:42] And the consequence is your tokens cost orders of magnitude more because you can't make that refinement. These edge services that organisations like ourselves provide allow you to take a subset of those models that are parameterised or shrunk to just address the challenge that they have. And you can operate them at a tenth, a hundredth, a thousandth of the costs.
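To make that "a tenth, a hundredth, a thousandth of the cost" point concrete, here's a back-of-envelope sketch. Every number in it is a hypothetical placeholder, not a real provider rate; what matters is the ratio between a frontier model's per-token price and a small, specialized edge model's.

```python
# Back-of-envelope comparison of serving the same inference workload
# on a large frontier model vs. a small specialized edge model.
# Both per-1k-token prices below are hypothetical illustrations.
FRONTIER_COST_PER_1K_TOKENS = 0.0300   # assumed large-model rate ($)
EDGE_SLM_COST_PER_1K_TOKENS = 0.0003   # assumed small-model rate ($)

def monthly_cost(requests_per_day: int, tokens_per_request: int,
                 cost_per_1k: float) -> float:
    """Approximate monthly spend for a steady inference workload."""
    monthly_tokens = requests_per_day * 30 * tokens_per_request
    return monthly_tokens / 1000 * cost_per_1k

# e.g. a product-recommendation chatbot: 50k requests/day, ~500 tokens each
frontier = monthly_cost(50_000, 500, FRONTIER_COST_PER_1K_TOKENS)
edge_slm = monthly_cost(50_000, 500, EDGE_SLM_COST_PER_1K_TOKENS)
print(f"frontier: ${frontier:,.0f}/month, edge SLM: ${edge_slm:,.0f}/month")
# The absolute figures are made up; the two-orders-of-magnitude gap
# between them is the business point.
```

At a hundred-fold difference per token, the same chatbot workload sits either comfortably inside or hopelessly outside an IT budget, which is the "going bust versus wildly successful" distinction John draws next.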
[00:22:11] Now, if you're wanting to scale out or scale up a chatbot or an inference service, that order of magnitude becomes a really big issue for you. And it can be the difference between going bust and being wildly successful as a business. So this move to the edge allows you to bring a lot of that intelligence,
[00:22:35] or rather the intelligence you need, without having to have all of it there, if that makes sense. Yeah, and I think as these technologies begin to not only evolve but converge, we're also seeing growing interest in orchestrating multiple AI models and services together in a bid to improve things like accuracy and resilience, etc. So how do you see this shift towards model orchestration,
[00:23:01] changing the way enterprises might even design trusted AI workflows throughout the year? Any big surprises here or any changes you see? Yeah, so I've spent a lot of time speaking with people around, say, AI gateways and orchestration as an approach. Because yes, the big two, big three providers of models have really interesting capabilities,
[00:23:27] but they're not identical and some are better suited for this task and some are better suited for the other. So coupling that with your ability to provide guardrails and begin to test things as part of that orchestration becomes really interesting. So do I like model A from provider A or is model B from provider A better or C or so on? Now, you can't do that if you've got to manually cut over code all of the time.
[00:23:55] But you might also want to go, I'm going to get the answer from model A, but I just want to run it past provider C's model to validate it. Because yes, you get hallucination in these things, but also maybe its context isn't quite as up to date, or it's got flooded with context and has therefore got a bit off-kilter in the way it's trying to answer your question. So that validation step becomes really important.
[00:24:24] There were some issues earlier last year where consultancies were generating reports for even government clients and some of the data wasn't as accurate as it might have been. And your ability to validate that with a second or a third model is going to help you produce better results for your customers, but also protect your reputation as you look to scale out that technology.
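The validation step John describes reduces to a simple pattern: draft with one model, independently re-answer with a second, and only trust the output when they agree. Here's a minimal sketch; `model_a` and `model_b` are hypothetical stand-ins for calls to two different providers' APIs, not real endpoints.

```python
# Minimal sketch of cross-model validation: get a draft from one
# model, ask a second, independent model the same question, and
# flag the answer when they disagree. model_a / model_b are
# hypothetical stand-ins for real provider API calls.
def model_a(question: str) -> str:
    answers = {"capital of france": "Paris"}
    return answers.get(question.lower(), "not sure")

def model_b(question: str) -> str:
    answers = {"capital of france": "Paris"}
    return answers.get(question.lower(), "unknown")

def validated_answer(question: str) -> dict:
    draft = model_a(question)
    check = model_b(question)
    # Disagreement doesn't prove a hallucination, but it's a cheap
    # signal to route the query for review instead of shipping it.
    return {"answer": draft, "validated": draft == check}

print(validated_answer("capital of France"))
# -> {'answer': 'Paris', 'validated': True}
print(validated_answer("capital of Atlantis"))
# -> {'answer': 'not sure', 'validated': False}
```

In an AI gateway, this comparison would sit behind a single routing layer, which is what makes the "switch from model A to model B without recutting code" flexibility possible in the first place.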
[00:24:52] And we've covered so much in a short amount of time today. And just listening to you, it's easy to see just how passionate you are about the topic. So for anybody listening, maybe they want to continue the conversation with you or keep up to speed with some more of your musings throughout the year. Where would you like to point everyone listening? And of course, anyone wanting to find out more about Akamai too? Well, Akamai.com is a great place to start around all of our capabilities.
[00:25:21] I'm on LinkedIn as John Bradshaw. But equally, I write there. I also write on other publications and other services. But almost all of that is linked from either my LinkedIn profile or Akamai.com. Awesome. Well, I will add links to everything. And I do urge people especially to follow you on LinkedIn. I mean, in a 30-minute conversation today, we've talked about the AI ROI imperative,
[00:25:50] trusted workflows, smart home backlash, next phase of streaming, and not to mention Akamai's AI inference predictions. And there is much more to come. So please, I urge them to check that out. But more than anything, just thank you for sharing your time with me today. I really appreciate it. Thank you. It's been a pleasure. I think there's a lot to sit with after listening to this one.
[00:26:13] From the rising scrutiny on AI return on investment to the idea that trust in AI agents will very quickly feel routine rather than risky. I also enjoyed talking about why inference is moving closer to users, how outages and security failures are reshaping expectations, and why orchestration, resilience, and transparency matter way more than shiny demos at a tech conference.
[00:26:42] And I think the most important thing here is that none of what we talked about today points to less AI. What it does point to is better choices: choices about where intelligence lives and how it is delivered. And if this conversation made you rethink how AI shows up in your own work, in your home, or in your business, that alone is a signal worth paying attention to. But over to you. What decisions will you trust AI with?
[00:27:11] Which ones will you still want to keep close to home? I'm sure each and every one of you has different answers, and I'd love to hear from you. Please, head over to techtalksnetwork.com and send me a message there. Connect with me on socials, and we'll continue this conversation. But as always, thanks for listening, and I will speak with you again tomorrow. Bye for now.

