3080: VMware Explore: Chris Wolf on the Rise of Private AI and Data Sovereignty
Tech Talks Daily · November 07, 2024
22:32 · 18.05 MB

In today's episode, recorded live from VMware Explore in Barcelona, I sit down with Chris Wolf, Global Head of AI and Advanced Services at VMware Cloud Foundation, Broadcom. Chris offers a front-row view of how VMware is leading the charge in transforming private AI solutions to meet the evolving needs of businesses across industries.

As the conversation unfolds, Chris provides insight into the core principles behind private AI and explains why privacy, security, and control over data are critical for businesses looking to unlock AI's full potential without compromising compliance.

Chris shares real-world examples of how enterprises—from contact centers to police departments—are leveraging private AI to drive efficiency, improve customer experience, and streamline processes. He emphasizes how VMware's AI solutions enable organizations to keep sensitive data on-premises, thus avoiding risks like data leakage and unauthorized model training. Through platforms like VMware Private AI Foundation with NVIDIA and new support for Intel Gaudi 3 accelerators, organizations can now customize and control their AI infrastructure more effectively than ever before.

We also explore the future of enterprise AI, touching on agentic AI and how sophisticated AI agents are poised to reshape large-scale business operations. Chris provides a compelling look at how private AI can optimize resources and simplify workflows by offering modular, flexible solutions that don't lock businesses into any single service provider.

Whether you're considering AI adoption or looking for ways to deepen its impact within your organization, this episode delves into VMware's approach to balancing AI innovation with robust privacy protections. As we look toward 2025, Chris outlines the emerging trends and hybrid AI models set to redefine enterprise strategies.

So, how can your organization harness private AI to stay agile and secure in a rapidly evolving digital landscape? Tune in to discover how VMware is shaping the future of AI in business.

[00:00:04] How does the modern enterprise achieve real transformation without sacrificing control, privacy or, indeed, agility?

[00:00:14] Well, as businesses increasingly move to integrate AI, these questions are more pressing than ever before.

[00:00:21] So today I'm live at VMware Explore in Barcelona, where there's lots of conversations around all things cloud and, indeed, AI.

[00:00:31] My guest today, Chris Wolf, he leads AI and advanced services within the VMware Cloud Foundation Division at Broadcom.

[00:00:39] And he's joining me on the show to shed light on the groundbreaking advancements that he's observing in private AI.

[00:00:47] And with more than 25 years of experience in the industry, he's been at the forefront of driving AI strategies that allow organizations to modernize without compromising on critical data privacy concerns.

[00:00:59] So whether that be from innovating resource management for AI to tackling compliance issues head on, Chris has deep insights on how private AI is actually transforming sectors far and wide from healthcare to finance and public services.

[00:01:15] But the big question is, are businesses truly ready to manage their AI in-house?

[00:01:21] And what are the real tangible benefits that private AI can unlock?

[00:01:27] Well, enough scene setting for me. Let's get Chris onto the podcast now.

[00:01:33] So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do?

[00:01:38] Yeah, sure. I've been with, well, now Broadcom, previously VMware, going up on 11 years.

[00:01:45] I've been working with VMware products and technologies for the past 25 years.

[00:01:49] And I've had a variety of roles in my time at VMware. Prior to AI, I was leading a lot of the organic innovation functions for the company.

[00:01:58] Today, my team drives the AI strategy. We do the heavy lift in terms of the engineering required to run AI applications, and we've been developing the AI services that run as part of our core platform.

[00:02:13] So my job is leading that team and then the overall strategy for the company.

[00:02:18] Well, welcome to the show and thank you for sitting down with me today.

[00:02:22] There's so many big conversations here and private AI is increasingly being discussed in the context of enterprise solutions.

[00:02:29] Can you explain the core principles behind private AI and ultimately why privacy and security are so critical for businesses that are now adopting AI in their workflows?

[00:02:39] Because we've gone from thinking about it to implementing it now.

[00:02:42] Yeah, yeah. I think it's definitely something that's taken off.

[00:02:46] I mean, if you go back even 18 months, the narrative predominantly was if you wanted to do anything with AI, you needed to just move your data to some type of hyperscaler because you needed hundreds to thousands of GPUs to get anything done.

[00:03:01] And that's just fundamentally not true.

[00:03:03] And we've seen this time and time again where there's use cases for AI, like we announced one this week, where CPUs are fine.

[00:03:11] There's use cases where you need a small number of GPUs and it just depends on scale.

[00:03:17] So against that narrative, the premise was that you could bring the AI model to wherever your data happens to be, so the organization can maintain control.

[00:03:25] They can have greater assurances of compliance.

[00:03:28] They can use a lot of their existing tools.

[00:03:31] So these things all really mattered.

[00:03:33] And, you know, at the high level, we see private AI, in a very pragmatic sense,

[00:03:39] as balancing the gains of AI with the privacy and compliance needs of the organization.

[00:03:43] So I get the benefits of AI.

[00:03:45] I don't lose control of my data.

[00:03:46] I don't worry about my data being used to train a model that might benefit a competitor.

[00:03:51] I don't have the same concerns I might have with data leakage.

[00:03:54] So it's really all of this in aggregate that has fueled this enthusiasm, I would say, around the private AI approach.

[00:04:02] And I think we often hear about the real world potential of private AI.

[00:04:06] But are you able to share some specific use cases across any industry from health care to finance and manufacturing that ultimately demonstrate the tangible benefits businesses are already experiencing?

[00:04:18] Because I think that ROI question is something we're hearing more and more about now.

[00:04:22] Yeah, yeah.

[00:04:23] So I think a good example would be the contact center.

[00:04:26] We see this crossing nearly all industry verticals.

[00:04:30] You know, you have contact centers for lots of things to interface with your customers, right?

[00:04:34] And along with those contact centers, you typically have some private documentation, some private knowledge base articles, a lot of support data.

[00:04:42] And when you're trying to help a customer, the human is having to navigate through all this to try to find the answer.

[00:04:50] This is a great use case for private AI, and generative AI in particular, because I'm able to use the AI tool to parse through all this information, look at the question, and get to an answer more quickly.

[00:05:03] We're seeing north of, in some cases, 10% efficiency gains with this particular use case.

[00:05:13] And what's great about it is you can measure the business value really clearly because I know how many support tickets on average my agents close in a week, right?

[00:05:23] I have that number from before the tool.

[00:05:25] I can also have that number once the tool's in use.

[00:05:28] So that starts to really help you to effectively measure the efficiency gain that you would get.
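The before-and-after measurement described here can be sketched in a few lines of Python. The metric name and the numbers below are hypothetical, not from the episode:

```python
# Illustrative sketch of measuring a contact-center efficiency gain.
# The baseline and post-rollout figures are invented for the example.

def efficiency_gain(tickets_before: float, tickets_after: float) -> float:
    """Percentage change in tickets closed per agent per week."""
    return (tickets_after - tickets_before) / tickets_before * 100

baseline = 40    # avg tickets closed per agent per week, before the AI assistant
with_tool = 44   # same metric once the assistant is in use

print(f"Efficiency gain: {efficiency_gain(baseline, with_tool):.1f}%")  # 10.0%
```

The point is simply that a baseline recorded before rollout turns a vague "it feels faster" into a defensible percentage.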

[00:05:34] Right. So that's one example. I'll give you one more too. This is a very recent customer.

[00:05:38] This is a police department and this is in the Asia Pacific region.

[00:05:42] And you think about like police investigations and case files, right?

[00:05:46] You have all of this data. And sometimes what will happen is a new detective comes along

[00:05:51] who's trying to reopen a case and look at some of the details, and it can take the human months, right?

[00:05:56] To parse through all of this data. So it's a great use case for a generative AI where I can index that data.

[00:06:02] I can find those correlations far more quickly.

[00:06:04] It really assists the human to get, ideally, to some new outcomes.

[00:06:09] I think it's so important, as you said, to get those metrics in place from the beginning, draw the line in the sand and then put the improvements in place.

[00:06:17] So what would you say are the significant outcomes that businesses can achieve using private AI, particularly when it comes to optimizing processes,

[00:06:24] enhancing maybe customer experiences or even driving innovation?

[00:06:28] Yeah. So I mentioned a little bit on the enhancing customer experiences, right? And some support there.

[00:06:34] And that's really the key. Like, I relate AI to blockchain. Yeah.

[00:06:40] Five years ago, right? Where every use case, it was like, oh, I need a blockchain for this.

[00:06:45] And it's like, no, actually a database is going to be fine, right? You don't need all of this.

[00:06:49] And it's the same thing with generative AI. If you feel that you can't measure the business value, then maybe it's just not a good use case.

[00:06:56] Right. And that's okay. So not every AI use case is going to work out, or the cost of the AI

[00:07:02] may not be worth the benefits that you're going to receive.

[00:07:05] So you have to really wade through to find the right use cases, which is why we start at document search and summarization in the back office. Contact centers is another easy one.

[00:07:15] Because again, I keep saying back office because I'm de-risking the investment.

[00:07:19] I'm not exposing an AI chat bot to a customer, right? Where it might give them a wrong answer or take them down the wrong path.

[00:07:25] So you still want that human in the loop to reconcile that.

[00:07:29] So those are a couple of areas we're seeing even in tech, like being able to parse through security logs and events at a much faster pace, even for our own product support.

[00:07:39] You might have millions of lines of log entries that you're trying to parse through to find like what, what the heck is causing this problem, right?

[00:07:47] Generative AI can really help to narrow that down: maybe from those millions,

[00:07:52] I now have 200 log entries

[00:07:54] to look at that seem to be the most relevant to the problem.
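The narrowing step can be illustrated with a cheap, model-free pre-filter: collapse each entry to a template, keep the suspicious ones, and surface the rarest templates first. This is a hypothetical sketch of the kind of pre-processing that would hand a generative model a shortlist instead of millions of lines; the keywords and log lines are invented:

```python
import re
from collections import Counter

def log_template(line: str) -> str:
    """Collapse hex IDs and numbers so similar entries share one template."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line.strip()

def narrow_logs(lines, keywords=("error", "fail", "timeout"), limit=200):
    """Keep suspicious lines, dedupe by template, return one example of each,
    rarest templates first."""
    hits = [l for l in lines if any(k in l.lower() for k in keywords)]
    counts = Counter(log_template(l) for l in hits)
    examples = {}
    for l in hits:
        examples.setdefault(log_template(l), l)  # first example per template
    ranked = sorted(examples, key=lambda t: counts[t])
    return [examples[t] for t in ranked[:limit]]

logs = [
    "2024-11-07 10:00:01 INFO request 123 ok",
    "2024-11-07 10:00:02 ERROR timeout on node 7",
    "2024-11-07 10:00:03 INFO request 124 ok",
    "2024-11-07 10:00:04 ERROR timeout on node 9",
    "2024-11-07 10:00:05 ERROR disk failure on /dev/sdb1",
]
shortlist = narrow_logs(logs)
print(shortlist)
```

Ranking rare templates first reflects a common heuristic: the one-off entry buried among millions of routine ones is often the clue.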

[00:07:57] Like I've mentioned a couple of use cases.

[00:07:59] I think the other key outcome that's unique to private AI is you're not having to uplift your data and put it into somebody else's proprietary format to gain the benefits of AI.

[00:08:09] You're bringing the AI model to you.

[00:08:11] And with our approach, which we think is really important, is we're taking a platform approach.

[00:08:17] So meaning that you're investing in a private AI platform and then the AI models and services, you have the flexibility to swap them out.

[00:08:24] And that's really important.

[00:08:25] I think as an outcome of private AI, because it could be very easy with artificial intelligence today to have buyers remorse next week.

[00:08:33] Yeah.

[00:08:33] Right?

[00:08:34] Like, the model seemed great at the time, but wow, this new model came about and it's so much better, but I don't have access to it.

[00:08:40] So when you take that platform approach, new model comes around.

[00:08:44] It's just a software update.

[00:08:45] It's not a big deal to be able to embrace new technologies quickly and have that level of agility.

[00:08:50] And another big buzzword at the moment is agentic AI after Gartner mentioned it a few weeks ago in a report.

[00:08:56] So sophisticated AI agents as a result of this are emerging as critical for enterprise use cases right across the board.

[00:09:03] I'm curious from what you're seeing, how are these AI agents different from let's say traditional AI solutions that we've all encountered in work and in our personal life?

[00:09:12] And what makes them particularly valuable in large scale business operations?

[00:09:16] Because again, it all comes back to tangible business value, doesn't it?

[00:09:19] Yeah.

[00:09:19] You want to be able to interact with and act on the AI models and outcomes that they achieve.

[00:09:25] I mean, the most rudimentary example of an AI agent would be a chatbot.

[00:09:29] We have coming in our product an AI agent builder, where you can use natural language just to create these AI agents.

[00:09:36] So I'm simply describing what I'd like the AI agent to achieve, but yeah, we think there's going to be, you know, agents for lots of things,

[00:09:44] whether it's parsing technical log data or security events or looking at, say video that's being captured for particular events that might be of interest, say for safety reasons or whatever that might be.

[00:09:56] It's really going to run the gamut to have an agent that can provide good customer support and interact with the human.

[00:10:04] And we think these are all going to help to further fuel AI.

[00:10:08] We're seeing, even architecturally on the back end, approaches that allow you to use smaller-parameter models that are fine-tuned to your business data.

[00:10:22] And so when you start to have these agents interact with this, what matters here is that the overall compute resources you need to run these services are also getting smaller and smaller,

[00:10:34] which is further driving AI use cases out of the hyperscalers and closer to the business, which is certainly something that's benefiting private AI for sure.

[00:10:43] And for business leaders listening, maybe they're thinking about jumping on board here. For AI software to qualify as a specialized AI agent,

[00:10:53] what specific requirements or capabilities does it need to meet, especially in terms of security, data management and integration?

[00:10:59] Because we hear a lot about businesses wanting to implement AI, but all those IT aspects often are not talked about in the public arena.

[00:11:07] Yeah, I think that right now for most businesses, and we're a little different when we look at the AI software we're providing, in that we brand ours as intelligent assist, because our perspective is that AI is a tool that's assisting the human, not replacing the human today.

[00:11:26] So having a human in the information loop matters.

[00:11:29] The next thing to really think about, and this is one of the things that definitely keep people up at night, is access control.

[00:11:36] It's very easy to index all of your data and then make that available to a model, right?

[00:11:41] To start to rationalize and reconcile with.

[00:11:43] And the net result can be if somebody else is coming along and now interacting with that model, they might be gaining access to data they shouldn't see under normal circumstances.

[00:11:52] So having the right architecture that gives you the ability to create access controls on a per-user, per-group basis, to ensure that you don't have that type of data leakage, or that you're not unintentionally creating back doors.

[00:12:06] That really matters.
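The per-group access control described here can be sketched as a retrieval layer that enforces ACLs before any document text can reach the model. This is a minimal, hypothetical illustration; naive keyword matching stands in for real retrieval, and the documents and group names are invented:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    allowed_groups: frozenset  # groups permitted to see this document

def retrieve_for_user(index, query: str, user_groups: set) -> list:
    """Naive keyword retrieval that applies per-group ACLs first, so the
    model never sees documents the asking user isn't cleared for."""
    visible = [d for d in index if d.allowed_groups & user_groups]
    return [d.text for d in visible if query.lower() in d.text.lower()]

index = [
    Doc("Salary bands for 2025 engineering roles", frozenset({"hr"})),
    Doc("Engineering onboarding guide", frozenset({"hr", "engineering"})),
]

# An engineer only ever sees documents their group is cleared for.
print(retrieve_for_user(index, "engineering", {"engineering"}))
```

The key design point is where the filter sits: pruning by ACL before retrieval results are handed to the model is what prevents the indexed corpus itself from becoming a back door.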

[00:12:07] One other thing I would mention as a part of this overall architecture is to have strong model governance.

[00:12:12] You can't just allow anyone to download an open source model from the internet and then throw it into production.

[00:12:18] I need to make sure that I have the proper security scans and controls around that model output to ensure I still have the safety for my own organization.
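Model governance of the kind described can be sketched as an allowlist check: a model artifact only loads if its hash matches one recorded when it passed security review. The registry, model name, and entries below are hypothetical:

```python
import hashlib

# Hypothetical registry: model name -> sha256 recorded at security review time.
APPROVED_MODELS = {
    "summarizer-v1": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_approved(name: str, model_bytes: bytes) -> bool:
    """Refuse to load any model whose bytes don't match the reviewed artifact."""
    return APPROVED_MODELS.get(name) == hashlib.sha256(model_bytes).hexdigest()

print(is_approved("summarizer-v1", b"test"))   # matches the reviewed hash above
print(is_approved("summarizer-v1", b"other"))  # tampered or unreviewed: rejected
```

A real pipeline would add scans of the model's behavior and license, but pinning artifacts by hash is the baseline that stops "anyone downloads anything into production."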

[00:12:25] 100% with you on that.

[00:12:27] And with VMware's recently announced collaboration with Intel too, how is VMware differentiating itself in the private AI space?

[00:12:35] Because there's so much noise in this space at the moment.

[00:12:37] So what does that partnership ultimately mean for VMware's customers in terms of new possibilities, new solutions?

[00:12:44] Yeah, so we have a great partnership with NVIDIA and we've been going to market with them since May with the product we're shipping, which is called Private AI Foundation with NVIDIA.

[00:12:52] Here at the conference, we announced support for the Intel Gaudi 3 accelerators.

[00:12:56] And our goal is to make sure that customers have choice of hardware accelerators, choice of AI services and applications,

[00:13:04] and really making that flexibility, and the customer's ability to pivot, central to how we look at things.

[00:13:10] So it's still early days, I'd say, for Intel with AI, but we're committed.

[00:13:16] We've been long-term partners with Intel.

[00:13:17] We're committed to continuing to grow that partnership with Intel going forward.

[00:13:22] And the other thing I would just mention on, because you said differentiation.

[00:13:26] Yeah.

[00:13:27] People ask all the time, like, well, why would I buy from you guys, right?

[00:13:31] There's lots of choices out there.

[00:13:32] First, if you stack us up against the hyperscalers, a big differentiation we have is cost.

[00:13:38] We're typically one-third the cost of the public AI services.

[00:13:43] You get all the privacy and control with us.

[00:13:45] Now, if you look at some of our on-premises competitors, we're typically lower cost than them.

[00:13:51] And in addition to that, there's some sophistication we have that they just simply lack.

[00:13:56] AI is more than just a GPU.

[00:13:58] It's making sure I have the right amount of network capacity for the application, the right amount of data IO, the right amount of memory.

[00:14:04] There's lots of factors that go into successfully scheduling resources for AI applications and running that at scale.

[00:14:12] And our distributed resource scheduler is technology we've been building and improving for just about two decades now.

[00:14:18] It is far and away ahead of anything else you would see on the market today.

[00:14:24] And what we've seen that's been a nice surprise is companies that have been running AI workloads, in some cases for many years,

[00:14:32] they've come to us because what they've been trying to do on bare metal, they just can't maintain.

[00:14:38] They can't get the same level of software automation that we provide in our platform.

[00:14:42] So they see a lot of value in that, and they've been partnering with us to get deployed as well.

[00:14:46] And I think for many business leaders, their organizations are still struggling with integrating AI into their existing business workflow.

[00:14:54] So how does private AI help business overcome some of those barriers?

[00:14:57] And what would you say is the role that VMware plays in maybe simplifying that process more?

[00:15:03] Yeah, yeah.

[00:15:04] So really one of the key parts we have – I mentioned some of the automation with resource scheduling,

[00:15:10] but this goes even higher up.

[00:15:11] So we offer a service catalog where I can pick a number of different AI applications from our blueprints.

[00:15:17] And from that, you can get your AI workstations for experimentation or AI applications up and running in a matter of minutes.

[00:15:25] I think that's really key.

[00:15:27] We've seen organizations where it might take them weeks.

[00:15:29] Even our AI platform, what we've seen for customers that have deployed it, they've gone from zero to serving AI applications in two days.

[00:15:38] So there's a lot of velocity that the customers are getting in addition to all that choice.

[00:15:44] So as these new models come around, they can scan them, they can approve them, they can decide how folks are going to collaborate.

[00:15:49] So it's that end-to-end solution and simplicity that's really driving the traction that we're seeing.

[00:15:55] And another question I'm getting asked more and more recently is, as businesses increasingly turn to AI for code generation, how important is private AI?

[00:16:05] And does this resolve any potential IP implications from integrating AI-generated code within somebody's proprietary product?

[00:16:12] Because that IP question is something that seems a little bit clouded at the moment, shall we say?

[00:16:17] It's a good one, and that's a great question.

[00:16:20] And something that we've advised customers on is, because we've gone through this journey ourselves, right?

[00:16:25] We develop enterprise software.

[00:16:27] There's code generation tools today that when you look at how their model is trained, the provider will not guarantee that the model is only trained with permissively licensed code.

[00:16:39] Which, to us, that's massive red flags to your point around a potential IP infringement.

[00:16:45] And then you have to look at, in the contracts, what kind of indemnification are they looking to offer you to provide some assurances?

[00:16:53] And typically, it's not that much.

[00:16:55] So you're taking on a huge risk.

[00:16:58] Now, there are commercial code assist solutions that will provide those guarantees in terms of the data used to train our models.

[00:17:06] They will provide the right code citations and all of these types of things.

[00:17:10] So there are processes that you can follow.

[00:17:12] And, of course, if you have really good code scans prior to moving to production, that should help you as well.

[00:17:18] But, yeah, so you have to look at the fine print.

[00:17:22] And the other thing, too, just along those lines, it's not always just code generation.

[00:17:25] But if you look at some of the data sets you see in open source and places like Hugging Face, you also have to check to understand how is that data actually curated?

[00:17:35] Because in some cases, what we're finding is that data is actually synthetic data that was curated using ChatGPT or GPT-4 or whatever.

[00:17:44] And using GPT to curate data is actually a violation of OpenAI's terms of service.

[00:17:52] So it doesn't matter that the data set came with an open source model.

[00:17:57] You still have to look at that next level of detail and figure out, well, how is this actually collected in the first place?

[00:18:03] So these are things to worry about.

[00:18:04] But let me go back to code assist, though, because we're using it internally.

[00:18:08] We've just deployed a commercial solution ourselves within the VCF division at Broadcom.

[00:18:13] And we've seen significant productivity gains from our software engineers using this technology.

[00:18:20] One of the more interesting cases, there was an engineer on my team that was looking to rewrite an application into a different language.

[00:18:26] The time estimate he gave us was two months, which is kind of reasonable.

[00:18:31] Using the code assist solution, he was able to rewrite the entire app in two weeks.

[00:18:36] So that's significant, right?

[00:18:39] That's a six-week delta in terms of how quickly he was able to iterate through this.

[00:18:43] So there's definitely a lot of value there.

[00:18:45] We think that's going to be a growing tool.

[00:18:47] And we're currently deploying this out to hundreds of our software engineers today.

[00:18:52] Wow, so much food for thought there.

[00:18:53] And, of course, we're only weeks away from life in 2025.

[00:18:57] So if we do dare look ahead, how do you see the future of private AI?

[00:19:01] And how do you see it shaping the next generation of enterprise applications?

[00:19:05] And what should businesses be focusing on as they adopt these new AI-driven technologies?

[00:19:10] Yeah, I think the first thing is resist the temptation just to consume an entire service and lock yourself into a vertical stack.

[00:19:18] Start with a platform that gives you the optionality that allows you to effectively grow as you move forward.

[00:19:24] And you're going to continue to see the rise of more small language models complementing what you see in the large language model space.

[00:19:34] And really a consolidation, we think, of these specialized types of use cases that's going to further drive adoption.

[00:19:43] Because it's not just that you get the privacy and control.

[00:19:46] I mentioned you're getting much lower costs as well.

[00:19:50] But then as you start to see the use cases, you can leverage smaller models.

[00:19:54] You need less resources.

[00:19:55] It requires less power.

[00:19:57] So there's far more efficiencies that are also coming to AI moving forward, which I think is going to matter.

[00:20:04] And one other thing I'll leave you with, and this kind of is on the heels of this partnership we announced with Microsoft this week.

[00:20:09] My expectation is you're going to see more in 2025 of these hybrid AI solutions, where I might use the hyperscaler in a public cloud for some experimentation and research.

[00:20:19] But then when I get the model and the service to where I want, I'm then going to deploy at my business location out at the edge or in a data center where I get the control and cost benefits of operating the service myself.

[00:20:33] So I think you'll see a lot more of that too.

[00:20:34] So these types of cloud partnerships with Broadcom and others, I think that's something that's very clearly on the horizon and is going to be very valuable to a lot of organizations.

[00:20:44] Well, so many big takeaways from our conversation today.

[00:20:46] And I hope people will also feedback on their thoughts of what they're experiencing.

[00:20:51] And there's, what, thousands of people here in Barcelona.

[00:20:53] I saw you yesterday at the AI fireside chat.

[00:20:56] And obviously you're rushing around back-to-back interviews, I would imagine.

[00:21:00] So thank you for taking the time to sit down with me today.

[00:21:02] Appreciate the opportunity.

[00:21:03] Thanks.

[00:21:03] Thanks.

[00:21:04] As we've heard from Chris, private AI holds significant promise for businesses aiming to balance AI-driven innovation with stringent data privacy demands.

[00:21:16] Whether that be enhancing customer experiences in contact centers or accelerating case investigations at a police department,

[00:21:24] the use cases shared today demonstrate how private AI is becoming essential to real-world enterprise applications.

[00:21:34] But, of course, as AI's role in business grows, are we truly prepared to navigate the complexities that come with it?

[00:21:41] Well, if you are at VMware Explore or watching from afar, I'd love to hear your thoughts on where private AI is heading and how it could impact you and your industry.

[00:21:51] So email me at techblogwriter@outlook.com, or find me on Twitter, Instagram, or LinkedIn, just at Neil C. Hughes.

[00:21:59] But that's it for today.

[00:22:00] So huge thanks to Chris for his insightful conversation and sharing some of the things that he's seeing out there.

[00:22:06] And thanks to everyone listening for joining us and exploring the future of AI in business.

[00:22:11] But it's time for me to get back out there on the show floor.

[00:22:14] So I will speak with you all bright and early tomorrow.

[00:22:17] And I cordially invite you to join me in Barcelona.

[00:22:19] Virtually, of course.

[00:22:21] But that's it for today.

[00:22:22] So bye for now.