3566: How Ergodic Predicts Complex Disruptions Before They Happen
Tech Talks Daily · January 24, 2026
37:53 · 30.34 MB


What if your AI systems could explain why something will happen before it does, rather than simply reacting after the damage is done?

In this episode of Tech Talks Daily, I sat down with Zubair Magrey, co-founder and CEO of Ergodic AI, to unpack a different way of thinking about artificial intelligence, one that focuses on understanding how complex systems actually behave. Zubair's journey begins in aerospace engineering at Rolls-Royce, moves through a decade of large-scale enterprise AI programs at Accenture, and ultimately leads to building Ergodic, a company developing what he describes as world models for enterprise decision making.

World models are often mentioned in research circles, but rarely explained in a way that business leaders can connect to real operational decisions. In our conversation, Zubair closes that gap clearly. Instead of training AI to spot patterns in past data and assume the future will look the same, world-model AI focuses on cause and effect. It builds a structured representation of how an organization works, how different parts interact, and how actions ripple through the system over time. The result is an AI approach that can simulate outcomes, test scenarios, and help teams understand the consequences of decisions before they commit to them.

We explored why this matters so much as organizations move toward agentic AI, where systems are expected to recommend or even execute actions autonomously. Without an understanding of constraints, dependencies, and system dynamics, those agents can easily produce confident but unrealistic recommendations. Zubair explains how Ergodic uses ideas from physics and system theory to respect real-world limits like capacity, time, inventory, and causality, and why ignoring those principles leads to fragile AI deployments that struggle under pressure.

The conversation also gets practical. Zubair shares how world-model simulations are being used in supply chain, manufacturing, automotive, and CPG environments to detect early risks, anticipate disruptions, and evaluate trade-offs before problems cascade across customers and regions. We discuss why waiting for perfect data often stalls AI adoption, how Ergodic's data-agnostic approach works alongside existing systems, and what it takes to deliver ROI that teams actually trust and use.

Finally, we step back and look at the organizational side of AI adoption. As AI becomes embedded into daily workflows, cultural change, experimentation, and trust become just as important as models and metrics. Zubair offers a grounded view on how leaders can prepare their teams for faster cycles of change without losing confidence or control.

As enterprises look ahead to a future shaped by autonomous systems and real-time decision making, are we building AI that truly understands how our organizations work, or are we still guessing based on the past? And what would it take to change that?

Useful Links

Ergodic AI: ergodic.ai

Thanks to our sponsors, Alcor, for supporting the show.

[00:00:03] What happens when AI stops reacting to yesterday's data and starts reasoning about tomorrow's decisions? Well, today's conversation sits right at that edge. In a world obsessed with dashboards, predictions and faster answers, there is a much, much quieter question that's worth asking. Do our AI systems actually understand cause and effect, or are they simply guessing with confidence?

[00:00:33] Well, in this episode, we're going to explore what it means to build intelligence that can simulate outcomes before decisions ripple through supply chains, operations and even customer trust. And I hope that today's discussion will move beyond hype and buzzwords and into how leaders should think about AI when the stakes are very real, the systems are incredibly complex, and mistakes, well, they're incredibly expensive.

[00:01:02] So if you've ever wondered whether AI can help organisations act earlier instead of apologising later, I think this conversation should give you plenty to think about. Here at the Tech Talks Network, we now have nine podcasts and approaching 4,000 interviews. And that is only possible with some of the great friendships that I've developed over 10 years of podcasting. And a company that I'm proud to call friends of the show is Donodo.

[00:01:29] Because not only have they been on this podcast multiple times, they also help make sense of the AI data chaos that we're seeing now. Because the data world is louder than ever. AI hype, lake house complexity and pressure to deliver more with less. These are things that I talk about every day on this show. But Donodo is helping businesses make sense of it all because they provide a unified data foundation for trustworthy AI. So if you're ready to unlock real outcomes, simply visit Donodo.com today.

[00:01:59] But now it's time for today's interview. But enough from me. Let me officially introduce you to my guest today. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Indeed. So my name is Zubair. I'm one of the co-founders and CEO at Ergodic.

[00:02:25] Brief background. So I'm a computer scientist by education, but please don't hold that against me. I started my career at Rolls-Royce in defense aerospace working on the Eurofighter engine, which was a super fun first job that got me excited and enthused by the world of data and analytics.

[00:02:43] So I then moved into consulting, where I spent the next 10 or so years at Accenture working on building and scaling large-scale data analytics and later-stage AI programs for these big enterprise customers across the world. And that got me super excited by AI. And then I decided that I wanted to be at the coalface of that. So I joined a startup, which is where I was for a couple of years before founding Ergodic. And Ergodic, just as a quick intro.

[00:03:11] So we're an AI startup based in London and Munich, and we're building world models as the intelligence layer to radically transform and improve enterprise decision making. And looking forward to talking you through that today. And there's so much about that stuff I want to talk with you about, but you did mention Rolls-Royce there in a former life. And I love your backstory there and what led you to where you are now. And I'm a guy who's from Derby originally. Where was the Rolls-Royce connection? Was that in Derby or somewhere else in the country?

[00:03:39] No, it was in Bristol, at Filton, which is where defense aerospace is based. I think Derby was civil aerospace, the Trent engines. I visited Derby and Ansty and Coventry a few times, but mainly I was based in Bristol. And then traveling around to the different consortium partners, which are all European. So in Germany, Spain and Italy. And as I said, that was quite a fun first gig out of college and university,

[00:04:04] traveling around meeting RAF and Air Force people, working in engine bays and in aircraft hangars and working on really cool military jets. So a super fun first experience, I would say. Incredibly cool. Yeah. And then fast forward to present day, it's even cooler. I mean, you're building an intelligence layer for complex systems using world models.

[00:04:26] So for listeners who hear that phrase for the first time, how would you explain what world model AI is and why does it represent a shift from maybe some of the conventional AI approaches we've heard over the last few years? Sure. So the one-sentence summary of a world model, I think, and this is how I describe it to our team and our customers and prospects, is that a world model helps us simulate the effect of any action on KPIs that we care about.

[00:04:55] That's a really simplified understanding and view of the world. So under the covers, it's an AI system that basically builds a structured representation of how a complex enterprise actually works. So what entities exist in that enterprise, whether it's plants, warehouse facilities, production machines, how they all interact, and importantly, how the system or the environment changes over time.

[00:05:21] So thinking about external dynamics, the macroeconomic environment, and it fuses all those things together to ultimately be able to reason, simulate outcomes, and recommend actions in really complex contexts. The compare and contrast between world models and conventional AI is quite fascinating. So conventional AI, which I would call more traditional machine learning, it's really a pattern-matching machine.

[00:05:46] So it's looking at analyzing historical data, trying to find the patterns that exist there, and assuming that that's how the world is going to play out in the future. So to give you a couple of concrete examples, large language models, which have come to the fore, the likes of ChatGPT, Gemini and Claude, are trying to predict the next word or the next token. A traditional machine learning model might be trying to predict whether a metric like churn or OTIF is going to go up or down.

[00:06:16] But they really struggle with explainability. So if you've ever seen the topology or the network diagram of a deep learning network, which is most traditional machine learning, it's incredibly difficult to decipher. It's a spider's web of thousands of different nodes and interconnections, and it's impossible for, I would say, anyone, regardless of how intelligent you are, to be able to decipher it. And even the machine can't decipher it.

[00:06:40] And what that means is they leave a lot of value on the table, because ultimately our belief is that we want to be able to understand the dynamics of any environment that we're operating in. So whether you're in manufacturing or whether you're in marketing, you want to understand and be able to simulate what is going to be the effect if I take an action. You know, if I increase the marketing campaign budget, what is going to be the impact of that?

[00:07:04] Or if I increase the throughput on a particular machine, am I going to see more inventory or am I going to drive more sales and meet the demand? And that's really important because we want to be able to simulate the future. We want to be able to test different scenarios, right? So if you have a new policy on inventory or if we have potential new shocks coming down the market, what's going to happen when we see those?

[00:07:30] And what can we do as an organization or as an enterprise to be able to meet those challenges head on? And we think that's really important because where the world is going, and you will have, I'm sure, seen much of this, Neil. 2026 is being billed as the year of the agent, the agent workflow. They're getting so much prominence.

[00:07:50] But if we think about how an agent works, you know, even an agent undertaking a relatively low to mid-level of complex work, they'll really need to understand the implications of the plans that they generate and execute on. And if they can't understand that, then they're going to go haywire, which is what we're kind of seeing in a lot of early pilot deployments when we talk to some of our customers.

[00:08:11] And that's where we think world models will really come to the fore and where they're going to be a key enabler for the next generation of AI systems, which is why we're excited to be working on it. And as you said there, many AI systems that people listening will be using, they focus on pattern recognition or prediction based on past data. And here in 2026, we are talking about unleashing thousands of agents that are capable of making their own decisions.

[00:08:38] As an ex-IT guy, that makes me very nervous. But I've got to ask, how do world models go further by simulating cause and effect? And most importantly, why does this matter when decisions have real operational consequences? And this one's for all the business leaders listening. I think we might set off a few light bulb moments here. Hopefully we will. Hopefully we will. So world models don't just learn the patterns, right? So they go beyond that.

[00:09:04] They're trying to understand and learn how actions produce changes, right? So you do X, Y is the outcome. And ultimately, that's the core of cause and effect reasoning. We want to understand how an action that we take in one part of our enterprise will affect inputs and outputs of another part, right? And in order to do that, we've got to explicitly model what we call interventions.

[00:09:32] So if we change a particular policy or if we have a rerouting of inventory or if we delay a new product launch, what is going to be the impact of taking that action rather than assuming that the future will look like the past and we can draw a pattern from a past product launch? You know, the future is different.
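To make the idea of an intervention concrete, here is a minimal sketch in Python. It is not Ergodic's world model: the structural equations, variable names and numbers are invented purely for illustration. It shows the difference between reading off historical averages and forcing an action into a small cause-and-effect model, then seeing what happens to the KPIs.

```python
# A minimal, hypothetical what-if sketch -- not Ergodic's world model.
# A few structural (cause-and-effect) equations link an action, such as raising
# machine throughput, to the KPIs we care about, instead of pattern-matching
# against historical averages.

def simulate(actions=None):
    """Roll the toy world forward one day under an optional intervention."""
    state = {
        "throughput_per_day": 900,   # units the production line makes (assumed)
        "demand_per_day": 1000,      # units customers order (assumed)
        "inventory": 50,             # units on hand at the start (assumed)
    }
    state.update(actions or {})      # the intervention: force variables to new values

    # Cause-and-effect relationships, applied in causal order.
    produced = state["throughput_per_day"]
    shipped = min(state["inventory"] + produced, state["demand_per_day"])
    ending_inventory = state["inventory"] + produced - shipped
    otif = shipped / state["demand_per_day"]   # rough on-time-in-full proxy
    return {"shipped": shipped, "ending_inventory": ending_inventory, "otif": otif}

if __name__ == "__main__":
    print("baseline:", simulate())
    print("boosted :", simulate({"throughput_per_day": 1100}))  # what if we raise throughput?
```

In this toy setup, raising throughput lifts the OTIF proxy to 1.0 but also leaves inventory on hand, which is exactly the kind of trade-off question ("more sales or just more stock?") raised earlier.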

[00:10:20] Not just historical averages from past data. And that's super important for the business leaders because ultimately, the primary decisions that we make drive or affect revenue. They might drive service levels or even customer trust because AI isn't just predicting outcomes. Hopefully, going forward, if you deploy and start using these world models, they'll help you to choose the right action, right? Not just any action that was maybe done in the past.

[00:10:49] And that hopefully will positively improve the outcomes that you care about, whether it's revenue or customer trust. And you describe your approach as physics-based AI. And at a time when that sounds incredibly cool, but there's also an increasing focus on the ROI of every tech project. How do ideas from physics and system dynamics, how do they help organizations improve things like performance, resilience and stability in environments that are continuously changing?

[00:11:19] Because it's a big challenge right now, isn't it? Ticking all of these boxes. It is. It is. So, again, going back to the core definition, right? So, a world model is supposed to simulate the world and answer what happens if I take a specific action, right? What is going to be the outcome or the consequence of that? In order to do that, we've got to respect a few factors. So, whether it's the operational constraints or the limits that are placed in environments.

[00:11:46] So, for example, a production machine can't run at 300% capacity. It's just impossible. Delivery trucks or logistics, they can't teleport. So, if you need inventory to be moved from one place to another by 12 p.m. today, it's probably not going to happen if you're in North Brazil and you need to move stock to South Brazil, because you've got thousands of miles to travel. Secondly, there's conservation, right? Conservation of time, capacity, inventory, money. And the final point is around causality, right?

[00:12:14] The fact that A causes B rather than B causing A. Those are really constraints, right? And those constraints are physics. Without them, the model itself, if you talk to your data science team, it might fit beautifully and have the most wonderful metrics in the world, but it might not be able to propose actions that are actually viable in your environment.

[00:12:37] And in supply chain, for example, to bring this to life, one of the classic physics laws is known as the bullwhip effect. You may have heard of it. It was coined, I think, in the 60s at MIT. It's where small changes in downstream consumer demand, let's say at the retail level, cause increasingly large fluctuations and volatility as you move upstream in the supply chain.

[00:13:02] So wholesalers might respond to customer demand going up by, let's say, 50% by increasing the order volume by 75% into their suppliers. The suppliers and the production team might increase their production runs by another 20% or 30%. And at each step along the way, because they're dealing with imperfect information, their response to the downstream demand is to go even further and increase their own throughput.

[00:13:29] And then you get this incredible effect, which can lead to things like inventory excess. It can lead to stockouts, high levels of working capital. So a small downstream change in demand actually has this massive knock-on ripple effect through the system and the environment that we operate in. And ultimately, for Ergodic, for the world models that we develop, we're trying to help our customers respond to these downstream changes in demand.

[00:13:57] So we need to make sure that our world model also reflects the physics laws that are present. And the bullwhip effect, of course, is one of them. So where we identify that we are seeing the early signs of a bullwhip effect come into play, we need to help our customers to firstly be aware that this is happening and secondly course correct appropriately. And that's one example of a physics law. There's countless physics laws sitting in supply chain.
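As a rough illustration of the amplification Zubair describes, here is a minimal sketch, not Ergodic's implementation: the tiers, the overreaction factor and the demand numbers are assumptions chosen to mirror the example above. Each tier only sees the orders placed by the tier below it and, working from imperfect information, over-orders against any increase, so a single spike in retail demand grows as it travels upstream.

```python
# A minimal, hypothetical sketch of the bullwhip effect -- not Ergodic's model.
# Each tier sees only the orders placed by the tier below it and over-orders
# against any increase (a 50% jump in demand becomes a 75% jump in orders,
# as in the example above), so volatility amplifies as it moves upstream.

def propagate_orders(consumer_demand, tiers=("retailer", "wholesaler", "supplier"),
                     overreaction=0.5):
    """Return the order stream each tier places, given a consumer demand series."""
    orders = {"consumer": list(consumer_demand)}
    signal = list(consumer_demand)
    for tier in tiers:
        baseline = signal[0]
        placed = [observed + overreaction * max(0.0, observed - baseline)
                  for observed in signal]
        orders[tier] = placed
        signal = placed   # the next tier upstream only ever sees these orders
    return orders

if __name__ == "__main__":
    demand = [100, 100, 150, 100, 100]   # one 50% spike at the retail level
    for tier, stream in propagate_orders(demand).items():
        peak = (max(stream) / stream[0] - 1) * 100
        print(f"{tier:<10} peak orders: +{peak:.0f}% over baseline")
```

Running it, a one-off 50% spike at the consumer level becomes roughly a 75% spike in retailer orders, over 110% at the wholesaler and around 170% at the supplier, which is the ripple effect described above.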

[00:14:23] And even if you move into marketing or customer functions, in most industries there are laws that need to be respected and governed. And if we can build those into our models as we're building them, then we're in a good place to be able to react and respond to the changing dynamics of the environment. And that's really, really crucial. And when I was doing a little research on you, one of the things that really stood out to me is that you're helping teams anticipate what will happen next rather than reacting after the fact.

[00:14:52] So just to bring that to life, are you able to share maybe a concrete example of how a scenario simulation has really changed decision making in a sector like automotive, supply chain or CPG? It'd be great to hear a real world example of how it's made a big difference. I don't expect you to name any names. Is there a story you could share? Yeah, I won't name names, but I'll give you some good context here.

[00:15:16] So in supply chain, we've built world models that connect every single part of a really complex set of enterprise supply chains from raw materials and supplies that come in all the way through to production, to logistics, to customers. So we have that base layer of connectivity, if you like. On top of that, we've got the agents that essentially use the world model.

[00:15:43] And what they're doing is constantly scanning upstream for early risks before they cascade into these full-blown crises. And you have your VIP customers calling you up, shouting at you that you've missed a massive delivery. So let's go a little bit deeper into that, right? So let's say we're a CPG company manufacturing food products, right? And one of our core ingredients for many of our products is tuna.

[00:16:07] So we've had a delay or a quality issue with a huge shipment of tuna from one of our suppliers. So this agent, what it does is identifies, first of all, that this has happened, you know, perhaps through some integration with SAP or from emails or other files. You know, it senses what is going on. And instead of just sending, you know, the plant manager or the customer service team an alert to say, hey, guys, you've got an issue with some raw supply, it goes way beyond.

[00:16:36] So what we're trying to do is to understand and build this cascade analysis. So, okay, we've got a thousand tons of tuna sat in our warehouse, which is just not fit for purpose. Where is that going to actually hit us? Which specific plants, which products, which territories, which customers ultimately are most likely to be affected? And what is going to be actually the impact of this, right?

[00:17:03] And in a supply chain world, OTIF, on time and in full, is quite an important metric. So what is going to be the impact on OTIF if we do nothing at all? But obviously, we don't want to just leave it at, here's an impact for you, now please go away and solve it. We want to help our customers to understand what are the potential solutions and what are the options.

[00:17:24] So we can actually identify these automatically because we have this connected world model that really understands the interconnectivity between all these different parts of the production process. So perhaps we should prioritize specific SKUs for certain VIP customers. Or maybe we need to move inventory of this raw tuna from one plant to another to overcome the issue. And the reality is with each of these options, there are trade-offs, right?

[00:17:50] So if you want to move inventory from one plant to another, there's obviously a time period for that logistics transfer. There might be a cost implication if you're doing inter-country transfers. There might be tax implications and so on. So what we need to do is also to understand the second and third order consequences of any suggested action. But here's the reality.

[00:18:16] Very few people are going to trust a very complex AI just driving decisions automatically. So we also need to be able to allow the user to interact with the models and say, you know what? The AI is telling us that we need to, I'm making this up, you know, recommend all the products get siphoned off to Amazon. But the user might know that there's some operational edict from above that, in fact, Carrefour or Tesco need to be prioritized.

[00:18:46] So we need to be able to enable our users to make those kind of tweaks and changes as part of the simulation. But then to also see what the impact of that is going to be. So if you're now prioritizing Tesco over Amazon, what is going to be the impact on Amazon? You know, is there going to be some kind of fine or penalty clause instigated because you failed to deliver to Amazon? All of those things have to be wrapped into this, which is why these simulations, we really go above and beyond what is typical,

[00:19:15] where you're just showing, in a typical simulation, here are two potential outcomes. We're trying to go n levels deep into the implications of every action that you can take and bring that to the user in a really simple way where they can make the right decision. Ultimately, take the execution call on what makes most sense at that point in time. So that's one example of a pretty complex simulation that we've built and we've had executed many times.
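For a sense of what the first step of such a cascade analysis might look like mechanically, here is a minimal sketch; the dependency graph, plant, SKU and customer names are entirely hypothetical. A real world model would also carry quantities, lead times, costs and constraints, but even a plain breadth-first walk shows how a failed shipment maps to the plants, products and customers exposed downstream.

```python
# A minimal, hypothetical sketch of cascade analysis -- not Ergodic's model.
# The dependency graph and entity names are invented. This only walks the graph
# to find everything that sits downstream of a failed supply node.

from collections import deque

DOWNSTREAM = {   # each node maps to the nodes that depend on it (assumed edges)
    "tuna_shipment_42": ["plant_lisbon", "plant_rotterdam"],
    "plant_lisbon": ["sku_tuna_salad", "sku_tuna_pasta"],
    "plant_rotterdam": ["sku_tuna_salad"],
    "sku_tuna_salad": ["customer_carrefour", "customer_tesco"],
    "sku_tuna_pasta": ["customer_amazon"],
}

def cascade(failed_node):
    """Breadth-first walk of everything exposed downstream of a failed node."""
    impacted, queue = set(), deque([failed_node])
    while queue:
        node = queue.popleft()
        for dependent in DOWNSTREAM.get(node, []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

if __name__ == "__main__":
    # Which plants, SKUs and customers are exposed before anyone misses a delivery?
    print(sorted(cascade("tuna_shipment_42")))
```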

[00:19:45] This month, I'm partnering with Alcor. And if you've ever tried to hire engineers in another country, you probably know just how painful it can be. Different laws, patchy support and partners who don't truly understand engineering roles. So Alcor approaches this from a different tech point of view. They specialize in Eastern Europe and Latin America, and they're able to combine EOR capabilities with recruiting.

[00:20:11] So you get one partner handling everything and they help you choose the best location for your stack, find developers with the right depth of experience and run proper assessments so they can onboard people quickly. And they also give you a model that respects both transparency and margin. Most of your spend goes directly to your engineers and the fee will decrease as the team expands.

[00:20:35] And you can even transition everyone in-house when you're ready, without having to worry about a penalty. And that structure is why a mix of early-stage and unicorn-stage companies use them as they scale. So if you want to take a look, visit alcor.com slash podcast or tap on the link in the show notes. But now, on with today's show.

[00:20:58] And looking on your website and social channels, I also learned that Ergodic's technology is data agnostic, which sounds incredibly powerful, but also slightly abstract. So in practice, how does this kind of flexibility help enterprises adopt AI without needing to perfect data or massive replatforming efforts? Because I know that data problem is a massive deal when it comes to AI, but tell me more about this. Sure. So I think a couple of points there.

[00:21:27] Let me take those one by one. So on the perfect data, I've had the privilege of working for hundreds of enterprises over the course of my career. And I'll say this as a pretty certain fact. No single organization has perfect data. It just doesn't happen. Data is noisy. It's messy. It's sparse. And that's just the reality. So my advice to customers that we work with or that are planning to work with us is that if you're waiting for perfection,

[00:21:56] before trying to exploit and build these intelligence systems, number one, you're going to be waiting an awfully long time. And number two, you're going to be overtaken by organizations who are happier to proceed at risk. I think the key here is really pragmatism. You know, perfect is the enemy of good, to recall the old phrase. So we don't need every possible data point or feature to be 100% perfect.

[00:22:23] But we probably do need a subset of all your data to be at least good. So the question that we often ask is, you know, what is that subset? How can we identify what those data points are early on, maybe even before we get into an engagement with customers? And how can we find the appropriate methods and workarounds to overcome them?

[00:22:46] So it's basically trying to fight off this idea that you need perfect data before you can get going, because I think that's the wrong way to think about it. The second point, or the second question that you had, was around replatforming. And that's a really important one to us, because we've designed Ergodic to work with and on top of any existing system.

[00:23:07] So whether you're an SAP house and you run your entire organization on SAP as your ERP, or maybe you're in Google Cloud or Snowflake, for us, it doesn't matter. So we sit on top and it can integrate with any upstream system. So there's no need to replatform at all. And that was a fundamental decision we made quite early on. And we're quite happy to have made that because, frankly, most organizations have a myriad of different systems.

[00:23:36] And they probably don't want to be replatforming from one to another in this day and age. So to be able to work with what they have is, I think, quite important and quite a value add from us. And we mentioned earlier that agentic AI and AI agents, these are the big talking points of 2026. And I would argue that the only thing bigger than any tech trend or any buzzword this year is ROI. We'll come back to that again.

[00:24:00] And from a growth and deployment perspective, what have you learned about integrating advanced AI into large organizations in a way that delivers measurable ROI that actually sticks with teams? Because, again, it's very elusive for many enterprises. There's a big focus on it now. What have you learned here? Yeah, this is a very detailed question. We could spend three hours on it.

[00:24:25] But let me try and summarize, I think, some of the key learnings, probably three or four key things I'd like to get across here. So the first is when you're starting out, if you're an enterprise in a function, perhaps, that hasn't maybe spent too much time thinking or activating their data into these big intelligent initiatives. So the first thing is to start out with the right problem and the right ROI framing, right?

[00:25:23] So focusing on high-impact, measurable pain points where AI can actually directly improve those KPIs, not just automate for novelty or have some kind of fancy chatbot which looks like it's doing something really cool, but actually does nothing useful and is then abandoned very quickly. So making that explicit tie between the AI initiatives that you're looking to trial and the business outcomes.

[00:25:44] So in a supply chain manufacturing setting, maybe it's, we want to reduce stockouts. That's the overall business outcome. And there are clearly metrics that we can tie against stockouts that measure it. And therefore, whenever we deliver, whether it's a pilot, a POC, or a through-life engagement, we know the kind of North Star metric that we're trying to improve upon.

[00:26:06] So it's there, it's crystal clear, and it's very much publicized to the teams, so we can see how we are progressing, because that openness and transparency is really important. So that's kind of point one, which is the framing and the problem. The second point, I think, is around thinking about the humans and the people that we are actually working with and trying to help.

[00:26:06] So making AI usable in an enterprise setting, I think, requires, I keep saying this many, many times, but requires more than just a chat window or a co-pilot. But we need to think about how people work today, how they might work in future with AI augmenting their day-to-day tasks, their work, and how we can design around that. So there's often what are called baby steps that we need to take right along the way.

[00:26:33] So maybe the full workflow can't be improved or augmented from day one, but maybe there's the first step along the way that's really important, where providing insights at the right point, right time, just in time, is really going to help the end users who ultimately need to be using this AI. Otherwise, you've created a little black box in a corner that no one uses that your customers have spent a lot of money on and they're going to get no value from, which is not what we want to do.

[00:27:02] We want our platform, our products, our models to be used on a regular basis. So thinking about the human as part of that is really, really important. And then trust, right? So trust is the third point that we need to get across, really importantly. So trust is really everything. Most people won't use an AI they don't understand, or don't trust, or can't explain.

[00:27:26] I was talking to a colleague recently, someone I used to work with several years ago, trying to understand what it is that they're doing in their particular sector. And what he said to me was, yeah, we rolled out, I won't name the company, but we rolled out a co-pilot, which was supposed to give us, you know, AI at our fingertips. The first question I asked it, it spooled up a random answer pulled from an internet search. The second question I asked it, it came back and said, I can't answer that question.

[00:27:56] So within the first two minutes of using this new AI initiative, he had lost all faith in it. He was like, well, this is not, this is completely useless and therefore won't use it again. And literally it was two or three goes. He was done. No further action. And I think that's something that we see quite a lot. And you'll have heard, I'm sure, many examples of AI initiatives essentially failing at that first hurdle.

[00:28:20] So being able to build that level of trust in the responses, whether it's a chat-type response or whether it's a prediction and prescriptive action that we're suggesting, getting that trust right up front is really, really important. The other part to it, which very much frames the world-model approach that we're taking, is that we need to design for actionability, not just insights.

[00:28:49] You can ask any model, whether it's ChatGPT or Claude, to analyze a bunch of data that you might have, and it will come up with wonderfully formatted graphs and paragraphs of text. But most of that is insight, right? It's telling you what's happened. It's really descriptive. Our point of view is, if you start pushing organizations towards what are the actions that we can take as a result of all this insight, which obviously the world model approach lends itself to really well,

[00:29:18] then actually you're giving your customers and these people that are using your AI far more power than they've had before. And if you pair that with the trust, you're in a really powerful position to really supercharge what they can do on a day-to-day basis, which is really what we're trying to get to. And we've talked a lot today around the technology and the AI models,

[00:29:43] but I think as we look ahead and AI starts to influence how entire systems are designed and managed, how should leaders and operators prepare for the organizational and cultural shifts that come with this new way of working? Because very often, I suspect, we've both seen new technologies come and go and exciting new tech projects, but that cultural aspect often gets neglected, and it's possibly even more important than the technology sometimes, isn't it? Yeah, absolutely.

[00:30:13] So I think if you had asked me this a year ago, I would have maybe responded by saying something like, look, embrace the mindset shift. AI is coming and it's super cool. But I think that path has been very much crossed now, right? So every CEO, every board, practically every company is now really actively trying to work with AI, develop AI and embedded in their operation. So I think now it's a case of being far more ambitious, right?

[00:30:41] So we should all be thinking, how can we take the teams, the functions, the departments that we're responsible for, even working in, right? It can be inside out rather than top down. And how can we actually use AI to augment, improve every single facet of the work that we do day to day? So take an example, right? So we've seen over the last, I think, six to 12 months, a complete transformation of software engineering coding development.

[00:31:11] So now, if you're not using Claude Opus or Cursor, you're going to be massively outpaced by teams that do. It's just a complete fact now, right? The very best teams are using these AI tools to develop their software quicker, as are we at Ergodic, right? We'd be stupid not to. That's one example where I think if you rolled back maybe 24 months,

[00:31:37] very few organizations were using these code development AI tools to do anything. But now it's pretty much the standard way to operate. That's going to be, I think, very similar to how other functions will ultimately transform over the next 6, 12, 24 months. So we can expect to see an equivalent transformation where AI is at the center,

[00:32:03] and these tools are at the center of the work that knowledge workers, people, do on a day-to-day basis. So what I would say would be this, right? Because there is such an explosive pace of development, both research and vendors like ourselves producing these amazing tools that can really help organizations now and in future move forward, most enterprises just need to be much more open to continual experimentation rather than this mindset of,

[00:32:33] we're going to do a three-month trial of the tool, and then that's going to be our tool for the next two years. I think you've got to be much more flexible. You've got to try the new tools, the new techniques, the new models, and assume that every six months or so, what was impossible in the past is now possible, and capability that maybe wasn't there before is now available. And that can change your outlook. It can change the way you do things, and ultimately, it keeps you competitive

[00:33:02] because you can rest assured that your competitors are doing that without any question. The final tip I would give is, look at what your best resources are doing. So often there's this kind of concept of shadow IT and shadow AI, right, where some of your best people are just paying $20 a month for some random tool which they're using to amplify their work on a daily basis. So you need to be finding and nurturing those people, and maybe not having them send their data to external systems without approval, right,

[00:33:32] but bringing them under your wing to say, you know what, you are really pioneering the use of this. How can we learn from that so that the rest of the team can also be adopting these tools and improving what they do and improving the output, and ultimately improving what we as an organization can do? So those are what I would suggest as the mindset tips. And then from a top-level enterprise point of view,

[00:33:58] I think there is a need to reflect on change management, I think, in 2026. So there is definitely a fear of job displacement, a loss of autonomy, this kind of opaque decision-making from above or below with the AI helping us to make decisions without actually giving us the true reasoning behind it. So change management, I think, in 2026 and onwards is also going to be really, really important.

[00:34:26] I think the old edicts of having these change programs where you plan the change, you communicate the change, and then you execute the change, and you do that change management activity over the course of a two-year period, I think the timeline and the cycle for change is now in weeks and months, not in years, right? So we have to be much quicker in adoption and much quicker in the change elements of introducing these new technologies.

[00:34:55] So there's a few nuggets and thoughts of what I think now. Maybe that will change if you ask me in a year's time, but I think that's certainly what I believe to be the case now. And so many big takeaways in that. And for people listening for whom we have set off those light bulb moments, who want to find out more information about anything we talked about today, where would you like to point everyone listening? Sure. So feel free to take a look at our website, ergodic.ai.

[00:35:22] So that's E-R-G-O-D-I-C dot A-I. My name is Zubair. Feel free to reach out to me, zubair at ergodic.ai, if you want to talk to me personally. I would love to continue the conversation. Well, I love how your world-model AI there is helping enterprises detect early signals, map hidden risks, and predict disruptions before they spread across suppliers, geographies, and tiers. So many big takeaways. So I will get those links added to the show notes.

[00:35:51] I'd urge everyone listening to check that out. But more than anything, just thank you for shining a light on this and the great work you're doing. Thanks again. Thank you so much, Neil. Appreciate it. I look forward to talking again soon, this time next year. As we wrap up, I think one idea lingers long after the microphones are switched off. If AI is going to take more of an active role inside our organizations, how much understanding do we actually demand from it?

[00:36:18] And I think today's discussion raises an uncomfortable but very necessary challenge for anyone betting on agents, automation, or intelligent systems. And that is, insights alone are no longer enough. Decisions carry second- and third-order consequences, and pretending otherwise is how trust erodes. And the real opportunity sits with leaders who ask those harder questions about action,

[00:36:45] accountability, and how humans stay in the loop as those systems around them grow smarter. But here's something to leave you with. As AI becomes more capable of acting on your behalf, how confident are you that it understands the world that you're asking it to operate in? Do you understand the trade-offs it's making for you? And what are you going to do differently? I'd love to hear what this conversation sparked in you.

[00:37:14] You've heard from me. You've heard from my guest. You've got unique experiences, success stories, failures, and everything in between. I want to know what you're hearing and seeing. So please email me at techblogwriter@outlook.com, or head over to techtalksnetwork.com and send me an audio message over there, browse the 4,000 interviews, and learn how to work with me. And also connect with me on the social channels. But that's it for today. Thank you for listening as always. Speak with you all again tomorrow. Bye for now.