Nutanix, AI And Containers: Preparing For A Distributed Data Future
Tech Talks Daily, March 30, 2026
27:29 | 25.15 MB


What happens when AI ambition starts moving faster than the infrastructure built to support it?

In this episode, I spoke with Lee Caswell, SVP of Product and Solutions Marketing at Nutanix, about the latest Enterprise Cloud Index and what it tells us about where enterprise IT really is right now. There is no shortage of AI headlines, product launches, and promises about what comes next, but this conversation gets behind the noise and into the operational reality that many business and technology leaders are now facing. As Lee explained, AI is not arriving in isolation. It is pulling containers, data strategy, hardware decisions, governance, and application modernization along with it.

One of the biggest themes in our conversation was the growing link between AI workloads and container adoption. Lee made the point that applications still sit at the top of the org chart, and infrastructure exists to serve them.

As more AI-enabled applications are built by developers who favor containers and Kubernetes-based environments, enterprises are being pushed to rethink how they support those new workloads.

We talked about why containers are becoming such an important part of modern application strategy, how they help organizations handle distributed AI use cases, and why many businesses are trying to balance speed and flexibility without giving up the resilience and control they have spent years building into their infrastructure.

We also spent time on the less glamorous side of AI adoption, but arguably the part that matters most. Shadow AI, data sovereignty, unpredictable token costs, and infrastructure readiness are all becoming board-level issues.

Lee shared why so many organizations are realizing that AI cannot simply be layered onto existing systems without deeper changes underneath. New hardware, new software, new governance models, and a more consistent approach across edge, on-prem, private cloud, and public cloud environments are all part of the picture now.

What I enjoyed most about this conversation was that it never framed AI as magic. It framed it as work. Real work that demands better architecture, sharper oversight, and faster decision-making from IT teams that are already under pressure.

So if your organization is racing to adopt AI, are you also building the foundation needed to support it responsibly, and where do you think the biggest risk sits right now? Share your thoughts with me.

[00:00:04] What does it really take to prepare your IT infrastructure for an AI-driven future? Well, today's guest joins me with a front row seat to how enterprises all around the world are answering that exact question. His name is Lee Caswell. He's the Senior Vice President of Product and Solutions Marketing over at Nutanix.

[00:00:27] And he's here to join me today to unpack the latest findings from the Enterprise Cloud Index report. One of the most widely referenced reports tracking how organisations are modernising their infrastructure. And in the conversation today, we'll explore why AI is accelerating the shift towards containerised applications.

[00:00:50] Why so many organisations are still unprepared to run AI workloads at scale, despite what the headlines elsewhere might tell you. And also discuss the rise of what's being called Shadow AI, and the risks that this is creating for security, compliance and cost control.

[00:01:15] And Lee will also share exactly what's happening beneath the surface of all the hype we see in our newsfeeds. From GPUs and token economics, to hybrid cloud strategies and data sovereignty. And why infrastructure decisions that are made today will shape how successfully businesses adopt AI over the next few years. We've got a lot to talk about today.

[00:01:39] So, buckle up and hold on tight as I beam your ears all the way stateside where you can sit down with myself and Lee right now. Thank you for joining me on the podcast today. Can you tell everyone listening a little about who you are and what you do? Sure. Thanks for having me. My name is Lee Caswell. I'm the Senior Vice President of Product and Solutions Marketing at Nutanix, a leader in cloud computing. Awesome. Well, thank you for sitting down with me today.

[00:02:09] So much that we want to talk about. I mean, the Nutanix Enterprise Cloud Index has become somewhat of an annual pulse check on how enterprises are actually deploying modern infrastructure right now. So, for listeners that are unfamiliar with this report, though, can you just tell me a little bit about what the Enterprise Cloud Index is, how the research is conducted, and why it's become such a useful barometer for tech and business leaders alike?

[00:02:35] Yeah, it really has become one of the mainstays, I think, of thinking about customer sentiment, right? And as you think about the enterprise and how much hype there is today, and certainly we'll be talking about AI, for example, right? You know, there hasn't been as much hype around anything, I think, since Web 2.0, whatever, right? So, just talking about, like, what does it actually mean? Like, how are customers and prospects thinking about this, right, particularly in the infrastructure market?

[00:03:01] So, the way this research is done, we go out to infrastructure owners all the way from the largest enterprises down into some of the smaller teams across the world, and we go and ask questions about some of the main themes that customers we find are thinking about. In this one, we're talking about AI, of course, right? There's also container adoption, what's happening with new modernized applications. What about cloud and hybrid cloud and how that's evolving, right, in terms of thinking about this?

[00:03:31] Data sovereignty turns out to be an important one. And also, just thinking about, you know, some of the dynamics as, you know, some of the competitive dynamics have changed, right, as new product bundling and pricing is impacting how customers think about the underlying virtualization layer. So, lots to talk about. It's a great opportunity to think about how the pulse of the industry is changing. And you mentioned container adoption, and I must admit that was one of the standout stats for me.

[00:03:58] I think it was 85% of executives said that AI is meaningfully pushing container adoption. So, what's driving that relationship between AI workloads and containers, and why are containers becoming such a foundational part of the modern application strategies that we're seeing now? Super interesting, right, as you think about new developers. And this is really what's happening, right? You know, one of the things that infrastructure people forget, right, is it's actually applications that are at the top of the org chart, right?

[00:04:27] It's the applications. And the infrastructure is really built in purpose of serving the applications. Well, applications are changing because of new large language models being built for AI. And because they're built by new developers, new developers favor containers. And they favor containers in Kubernetes orchestrated environments because it means it's easier to develop.

[00:04:50] It means that you can remove some of the OS dependencies, which means it's easier to test for disparate heterogeneous landing spots. And so, that could mean, for example, like out at the edge in a telemedicine facility or in a manufacturing facility or in a wind farm, right? We're going to watch AI distributing data more effectively over time.

[00:05:12] And so, as we think about new developers using containers and new AI apps being built by those same developers, what you're watching is the most natural way to go and assimilate AI is to take a containerized large language model and run it. Presumably, and we'll talk about this, without losing all of the sovereignty, compliance, and resiliency value that infrastructure teams have built up over the last 20 years.

[00:05:40] Now, the report also suggests, maybe unsurprisingly, that the number of AI-enabled applications inside organizations is about to increase rapidly over the next three years. But what kinds of AI applications are business leaders planning to introduce from what you're seeing and hearing out there? And ultimately, what does that growth mean for the infrastructure decisions that organizations need to make today and avoid running into technical debt issues further on down the line?

[00:06:09] Yeah, one of the things we're finding, right, is that customers are managing their LLMs and which applications they're bringing in. A lot of them are starting with things that are more internally focused, actually, because you can reduce your risk, get used to the technology, right? So some of the internal applications look like chatbots, for example, or support co-pilots, right, sort of things, right, where you can go and now get access to information. Companies have access, they have a lot of information. The question is, how do you access it most quickly, right?

[00:06:36] So improving company support, for example, is a way to apply this internally. And we're finding that internal application of AI then is being followed up by external applications where you're exposing AI to your customers, for example, right? Once you understand some of the limitations and some of the guardrails you might want to install, this becomes part of what we call agentic AI.

[00:07:04] Now, one of the things we're watching, right, carefully is shadow IT, just like we had in cloud environments, can lead to cost overruns. It can lead to applications being used without, let's call it thorough review for things like sovereignty, right?

[00:07:21] And so infrastructure teams have the opportunity and need to really insert their controls, I believe, right, to make sure that AI is being deployed with the same application resiliency, sovereignty, and privacy protection that we applied to databases, for example.

[00:07:40] And so I'm watching this kind of start internal, then move external. Making sure your IT infrastructure teams are treating AI as just the next business-critical app is a really important model for how we get started. And I think very often it's very easy to get distracted by all the opportunities and the great things that AI can deliver and go into implementation mode to bring all this to life.

[00:08:05] But another thing that jumped out is this idea that so many organizations, despite the opportunities they're chasing, they're not actually fully prepared to run AI workloads on their existing infrastructure. But again, from what you're seeing, what gaps are companies discovering when they try to operationalize AI? And where do you see the biggest challenges appearing right now? What are you seeing? Well, here's what we're watching, right?

[00:08:29] The fact that there's new hardware that's required, GPUs, for example, today, probably something called a DPU in the future, right, of how you're doing offload to make sure GPUs are fully optimized. So new hardware, new software. We talked about LLMs a bit, right? So how do you make sure that you're running the latest LLMs? And then from an infrastructure standpoint, there's the container aspect.

[00:08:50] And one of the things we're finding is that running containers in VMs, which is how most containers actually run, is the easiest way for enterprises to assimilate containers and make sure that you're matching the agility of application ease along with the resiliency of the infrastructure layer. A lot of people don't realize this is actually the way the public cloud runs as well, right? So if you look at EKS, for example, that runs in the public cloud on a KVM hypervisor. Makes sense, right?

[00:09:20] You're getting the benefits of the underlying infrastructure virtualization matched up with the speed of new application developers. So one of the things you can look at is that it's possible to go and get all of the benefits of containers and use all of your virtualization skills. So this is important because using one team to manage both containers and VMs gives you that flexibility going forward.

[00:09:46] We're finding that's an interesting solve for a lot of customers worried about how containerization might have to lead to a new team. No, by contrast, it's possible to do these together and get the benefits of both. And we've both been around in tech for long enough to have seen the fight in IT with BYOD, bring your own device, where everyone wanted to bring their iPads and phones into the workplace. And there was shadow IT.

[00:10:13] Anyone can access any SaaS model from their browser without telling IT. Now we've got shadow AI that you mentioned a moment ago. And the reason I bring that up is 79% of organizations are seeing AI tools or agents that are introduced by their employees outside IT oversight. So we're here again. But why is it happening? And what risk does it create for security, compliance, governance and everything else that IT are focusing on as they try to be that guardian of the network?

[00:10:44] Yeah, it's a shocking result, really, when you think about it. Like that level of concern about this. And that's because as you start thinking about AI and both the opportunity it creates, but also some of the risks that it introduces, we started by thinking about how people naturally get access to new technologies. Well, usually there's a dev team somewhere that says, hey, look at this. This is kind of cool, right? And try something out.

[00:11:12] And all of a sudden you realize, well, that worked maybe great at the first level of just doing a trial. But when you think about scale, think about day two operations. How am I going to patch this system so that I can go and bring it up to speed for new privacy and security updates? How am I going to think about compliance and making sure that this is working within my legal requirements, right? Or in Europe, for example, DORA requirements.

[00:11:40] Where is my data being replicated and what does DR look like, for example? So the ability to go and take these latest technologies, LLMs, for example, in AI and apply this becomes extremely important from a management standpoint. So I'll say that's first. Sovereignty, by the way, is a huge issue right now as you start thinking about where is my data? Who can replicate it? Who can see it? Who can subpoena it, right? Those sort of elements. The second element that's interesting, though, Neil, is about cost.

[00:12:08] We have this new metric of how you manage AI costs. It's called tokens. Tokens are generated, consumed. And the idea that you've got predictable and low cost token usage is something that's important. And in an on-prem environment, you've got your infrastructure that you can now load up and share across teams. In the cloud, it's a little bit less obvious. You can get unpredictable token costs. A lot of folks don't know exactly how a model generates a token cost.

[00:12:37] How many tokens will I generate and use and be paid for, be billed for, you know, in this latest billing period, right? So there's an unpredictability aspect of this. And this is one of the things we find, right, that, you know, IT does really well when they can go and have predictable costs, predictable sovereignty, predictable privacy. When you start having things that are less predictable, then people get nervous and surprises are not our friends.
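Lee's point about token unpredictability can be made concrete with a back-of-the-envelope comparison. Every rate and figure below is a hypothetical assumption chosen for illustration, not real pricing from Nutanix or any cloud provider:

```python
# Illustrative sketch: per-token cloud billing vs. amortized on-prem GPU cost.
# All rates, capex figures, and traffic numbers are invented assumptions.

def cloud_monthly_cost(requests, avg_input_tokens, avg_output_tokens,
                       in_rate_per_1k=0.0005, out_rate_per_1k=0.0015):
    """Per-token billing: cost scales directly with (hard-to-predict) usage."""
    input_cost = requests * avg_input_tokens / 1000 * in_rate_per_1k
    output_cost = requests * avg_output_tokens / 1000 * out_rate_per_1k
    return input_cost + output_cost

def onprem_monthly_cost(gpu_server_capex=250_000, amortization_months=36,
                        power_and_ops=3_000):
    """Amortized infrastructure: roughly flat per month regardless of tokens."""
    return gpu_server_capex / amortization_months + power_and_ops

# A 3x spike in traffic triples the cloud bill but leaves on-prem flat.
baseline = cloud_monthly_cost(1_000_000, 500, 300)
spike = cloud_monthly_cost(3_000_000, 500, 300)
flat = onprem_monthly_cost()
```

The point is not the specific numbers but the shape of the curves: the cloud line moves with every unplanned prompt, while the on-prem line is a budgetable constant, which is exactly the predictability Lee says IT teams value.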

[00:13:02] And I'm glad you mentioned data sovereignty there because 80% of organizations in the report said that it is a major factor when making infrastructure decisions. Now, a lot has changed in the last few years. So how are regulatory pressures, global conflict maybe, customer expectations? How are all these things reshaping where companies choose to run their AI applications and ultimately where they choose to store their data? Right.

[00:13:30] Geopolitical concerns, of course, right, are meaning that, hey, where I've got data in one data center, for example, if there's a geopolitical conflict, for example, I may want to relocate that data to another location. Or it may be that I'm moving it for cost purposes, for example, right?

[00:13:48] Or I'm thinking about, hey, how do I make sure I'm in compliance with regulations that make sure that my data, although replicated within a sovereign nation, for example, is not suddenly, somehow replicated to a region that I didn't understand or control. And so those sovereignty aspects become extremely important.

[00:14:07] And what I find fascinating here is, right, we're trying to go and insert sovereignty controls at the same time that we're recognizing AI is making data more distributed. These two work in conflict with each other if you think about it, right? So instead of just putting a cinder block around a data center, right, you're thinking, oh, well, it makes sense, as data people know, to move the AI close to where the data is being ingested.

[00:14:34] And that data could be in an MRI facility, right, as I mentioned, right, or a manufacturing facility, right, in CNC equipment. It could be now out in video surveillance equipment, right, out at a local retail environment. And so you're going to have AI in more distributed data. So what becomes incredibly interesting as a solve is how do I think about building a platform that operates consistently across all of these different locations?

[00:15:03] So I could run at the edge. I could run in a virtual private data center or even in the public cloud. One of the fascinating elements to think about here is something I call follow-me security. So think of an app where you set local policies but have them act globally, right? You remember that whole thing, right? It used to be think globally, act locally.

[00:15:28] I'm thinking now set locally and then instantiate globally is a new way to think about how your security policies and your sovereignty policies can help in this new distributed data world as we think about AI changing where data is going to be located. I love that. And this year in particular, I think we're all seeing that growing role of agentic AI and autonomous systems across enterprise strategies. I'm hearing and seeing it at tech conferences.
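Lee's "set locally, instantiate globally" idea can be pictured as a single policy definition stamped onto every site an application might land on. The sketch below is purely illustrative; the site names, policy fields, and `instantiate_globally` helper are invented for this example, not a Nutanix API:

```python
# Hypothetical sketch of "follow-me" security: author a policy once locally,
# then instantiate it at every location the application might run. All names
# and fields here are invented for illustration only.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Policy:
    encrypt_at_rest: bool
    replication_region: str   # where copies of the data are allowed to live
    audit_logging: bool

def instantiate_globally(policy, sites):
    """Stamp the locally authored policy onto every site, pinning replication
    to each site's own region so data stays within its sovereign boundary."""
    return {site: replace(policy, replication_region=region)
            for site, region in sites.items()}

local_policy = Policy(encrypt_at_rest=True, replication_region="eu-west",
                      audit_logging=True)
sites = {"edge-clinic": "eu-west", "factory": "eu-central", "cloud": "eu-west"}
deployed = instantiate_globally(local_policy, sites)
```

The design point mirrors the conversation: the security and sovereignty settings travel with the app to the edge, the private data center, or the public cloud, rather than being re-authored per location.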

[00:15:58] I've been to five or six already this year. That seems to be the main topic. But I think when many business leaders listening hear that term, it can sound somewhat futuristic. So from what you're seeing in the research, how close are we to agentic AI becoming part of everyday business operations? Are we going to see that this year? Yeah, let's back up and talk about what is agentic AI, right?

[00:16:21] So if you step back and go back to when we had like just a search engine result, right, you would query a search engine, right, of some sort. And you get some list of possible answers for you to go and, you know, sort through yourself. That's kind of the element, right? Now it's changed, right, in what we call moving from search engine optimization to generative engine optimization.

[00:16:47] Now what happens is those results, right, are now synthesized. And, you know, what we're finding is most people are pretty satisfied with that summary of results that are being presented on a single prompt. So if you think of it that way. And now it becomes really important to think about, well, how good was the prompt, for example, right? And that gives rise to something called prompt engineering, right? What we're finding, though, that's interesting, right, is it's not just one prompt anymore.

[00:17:14] What agentic AI infers is that what I'll do is I'll take that first set of results and now I'll send it to a second agent or a third agent who are going to start doing things like re-ranking what was actually there based on what I think my users might want to look at. Or putting guardrails in place so that I can say relative to my legal issues or any compliance issues, I'm going to make sure that the results are being tailored for this.

[00:17:43] This becomes an iterative process, if you will. So I've got my first results fed into the second result, and it becomes agentic in the sense that the AI is becoming an agent for you. So it's actually now going and saying, oh, I think we should go to this next model, get this next set of results. And here we go. So the idea of this agentic workflow becomes very important. And for infrastructure teams, what that means is a couple of things.

[00:18:10] Number one, you want to make sure that the LLMs that you're accessing are actually validated for the GPUs that you have. And this goes back to a really basic infrastructure thing about memory management. Actually, you want to make sure that you're working within the memory management. And so certifying and validating those GPUs is an important aspect. Second, you actually want to have some audit capability. How can I look and see, for example, who's using DeepSeek within my organization? Is that authorized or not?

[00:18:39] So I can go and look at audit capabilities. Lastly, you want to start looking at being able to manage the token generation and the optimization across these LLMs. And so these are some of the software values that you can think of. Infrastructure teams should be looking for platforms that are able to provide this level of insight into how they can operate, manage, and optimize the access to LLMs within their environment.
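The iterative hand-off Lee describes, where one agent's output becomes the next agent's input, can be modeled as a simple pipeline. The agents below are toy stand-ins (a keyword retriever, a re-ranker, a guardrail filter) invented for illustration, not any real framework:

```python
# Toy sketch of an agentic workflow: each "agent" is a function that takes the
# previous agent's results and refines them. Purely illustrative.

def retrieve(prompt):
    """First agent: produce raw candidate results for a prompt."""
    corpus = ["intro to GPUs", "GPU memory management", "LLM token billing"]
    return [doc for doc in corpus if any(w in doc for w in prompt.split())]

def rerank(results, preference):
    """Second agent: reorder results by what users likely want first."""
    return sorted(results, key=lambda doc: preference not in doc)

def guardrail(results, blocked_terms):
    """Third agent: drop anything that violates compliance rules."""
    return [doc for doc in results if not any(t in doc for t in blocked_terms)]

def agentic_workflow(prompt):
    # Each stage feeds the next, which is what makes the flow "agentic":
    results = retrieve(prompt)
    results = rerank(results, preference="memory")
    return guardrail(results, blocked_terms=["billing"])

answer = agentic_workflow("GPU memory")
# → ["GPU memory management", "intro to GPUs"]
```

Real agentic systems add model calls, tool use, and state at each stage, but the infrastructure implications Lee lists (validated GPUs, auditability, token management) apply at every hop of exactly this kind of chain.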

[00:19:08] And all those tools then become just part of the infrastructure team's toolkit, similar to how you look at RAM usage, CPU utilization, IOPS, and other things. You can now start saying, oh, I can go give more input into the LLM usage across my complete profile. And we said at the very beginning of our conversation today that the Nutanix Enterprise Cloud Index has become an annual pulse check. You live and breathe this space. You've seen this report come out each year for many years.

[00:19:38] I'm curious, as a man that's seen it all, did anything surprise you when you first picked up the new report? I mean, I think the idea that containers are coming so quickly is really critical because for many, let's call it traditional enterprise on-prem users, they're really comfortable with virtualization. It's worked really great, right? And for traditional applications, think databases, VDI, server virtualization, these kind of virtualized applications, right?

[00:20:06] These were well understood within the environments. And to the extent that infrastructure users have been using containers, they've largely been in the public cloud, right? Where things like state are managed by a public cloud provider, right? They're specific to a single cloud in that sense. And so the whole management of Kubernetes and everything has been outside of the realm of the need for the IT organizations to really understand or comprehend.

[00:20:34] Well, when you think about AI coming in so quickly and the fact that containers are this wave, you know, so we're expecting that probably today maybe 15% of on-prem users are running containers. Watching how this becomes mainstream is something incredibly important.

[00:20:53] So, you know, as I mentioned before, running containers in VMs is a huge solve for customers who want to leverage their existing teams, take their virtualization skills, and apply them, not jettison them. By the way, if you want to run bare metal, right, look for a platform that offers that as well. So, you know, if you wanted to get to the lowest cost, perhaps, in an edge environment. But think about that virtualization environment as one that you can leverage. That, to me, is actually the more seen.

[00:21:20] I'd love to see, Neil, how new technologies get assimilated. And what I've found in my career, right, is if you can leverage your existing skills, you can basically bring this in more quickly. It happened in server virtualization, right? There were server admins at the time. That role was almost dead, if you remember, right?

[00:21:40] Until virtualization meant that you could go become a virtualization expert, get your certification, and bang, you had all of this new power and management control over now an expanded estate. The same thing is happening with AI and containers where I look at our on-prem customers looking to manage across this new hybrid cloud environment, bring in containers, bring in AI.

[00:22:04] They're future ready for the new technologies, right, that are going to give them more influence coming up in the next coming years. And for everybody listening, I will add a link to the report. And for anybody that's reading that report after listening to the conversation today, what are the practical lessons you think they should take away, especially if they want to prepare their organization for an AI-driven future, while also avoiding some of the risks highlighted in the research?

[00:22:32] It's a bit of a balancing act, but where should they begin? Anything that you'd like everyone listening to take away? I think this, right? Usually when people look at risks, they slow down. It's kind of the nature, right? It's like driving in fog. So ideally, right, you're like, oh, shoot, maybe I need to go a little bit slower. But the idea on AI, right, is that the pace at which AI is transforming businesses means that we really need to get moving here.

[00:23:01] One of the ways you can do this, right, is think about how do I minimize any of the perceived or even real risks, right, around new GPU qualifications, around LLM integrations, around agentic workflows, around container adoption. And so my advice here, right, is start quickly and get an application going. If you want to reduce the risk to your business, you can have it be internally focused or internal-facing as a first element.

[00:23:31] Start there. Get something in place where you get familiar with the operational model for this. And suddenly you'll find that there are new ideas spawning all of the time about how AI can transform your business. And with the right platform across the different locations that you anticipate, you'll be able to do it without introducing any material risks to your business.

[00:23:54] In fact, you can go and leverage your existing teams as a way to go and speed your path to have AI transform your business in a container-driven world. I think that's a powerful moment to end on. As I said, I will add a link to the research there so people can check that out. I'll also pop a link to your LinkedIn. Anywhere else you'd like me to point everyone listening to keep up to speed on the announcements and everything coming out? Where would you like me to point everyone?

[00:24:20] Yeah, I mean, right at Nutanix.com, where, you know, our homepage itself will show some of the fascinating things we've done. You know, we're at industry events. We have our own events. And you can see topical things. One of the things I like about the ECI Index is it's really not a Nutanix statement. It's an industry statement. And you can find that.

[00:24:40] Nutanix is a consultative industry partner for customers looking at how to go and be relevant in this next world where things are changing relatively quickly. Nutanix is there to help. And Nutanix.com is a great starting point. Well, we covered so much today around how AI is being incorporated into workflows in every organization at a breathtaking pace right now.

[00:25:05] And, yes, AI apps need data and often require a rethink of the data infrastructure for performance and governance. But it's also this running containers at scale in production with proper infrastructure to support it. So many big takeaways. We could have talked for hours on this. I urge people to carry on that conversation with you. But more than anything, just a big thank you for shining a light on your research as always. Thank you. Thank you so much, Neil. Always a pleasure and great conversation. Wow.

[00:25:33] So many big takeaways there, especially around how much of the AI story is actually an infrastructure story. And I think there is a tendency to focus on models, tools, use cases, etc. But as Lee explained, the real challenge often sits underneath all that. Whether it's containers becoming the default way to run modern applications or the growing complexity of managing data across hybrid environments.

[00:26:00] The decisions being made at the infrastructure level are quietly determining what is possible. And, yes, the other takeaway is how familiar some of these challenges feel. Shadow AI. Yes, it echoes the same patterns we saw with Shadow IT and BYOD. But this time around, it feels like there are far greater implications around data exposure, compliance and unpredictable costs. But, hey, you've heard from my guest.

[00:26:30] You've heard my takeaways. I want to hear your perspective too. Is your organisation ready to support AI at scale? Or are you still discovering gaps in infrastructure, governance or visibility? And how are you managing the rise of AI tools that are appearing outside of IT oversight? As always, let me know your thoughts. Join the conversation. TechTalksNetwork.com. You'll find 4,000 interviews.

[00:26:59] Many different ways of messaging me and working with me. And there's an event page where you can see all the events and tech conferences that I'm going to be at this year. If you're attending any of them, please send me an audio message. Send me a DM. It would be great to meet you in person. But that's it for today. So thank you for listening as always. And I'll speak with you all again very soon. Bye for now.