3091: How Dell's AI Factory is Simplifying AI Adoption for Businesses
Tech Talks DailyNovember 18, 2024
3091
49:5629.78 MB

3091: How Dell's AI Factory is Simplifying AI Adoption for Businesses

In this episode of the Tech Talks Daily Podcast, I'm joined by Elliott Young, EMEA CTO at Dell Technologies, to explore Dell's ambitious approach to making AI adoption easier and more effective for businesses of all sizes. As AI becomes a key driver of digital transformation, many organizations face barriers around data management, security, and the lack of skilled talent. Dell's AI Factory aims to address these challenges head-on, providing a comprehensive solution to streamline the journey from AI concept to value creation.

Elliott introduces us to Dell's AI Factory, a scalable infrastructure designed to support a wide range of GPU options, from NVIDIA to Intel and AMD, making it flexible enough to cater to everything from edge devices to cloud-based deployments. By leveraging pre-engineered frameworks and design patterns, the AI Factory helps businesses accelerate their AI projects, reducing the time and complexity of deployment. We dive into real-world use cases, including "Hero AI" solutions like digital humans in emergency rooms and AI-augmented contact centers, demonstrating how AI can transform industries by embedding intelligence into core production systems.

We also tackle the critical issue of data security. Elliott explains why on-premise and hybrid cloud models remain the preferred choice for over 80% of businesses due to their enhanced security and compliance features. He highlights Dell's commitment to zero-trust principles and secure supply chains, which form the foundation of the AI Factory's data management capabilities. Additionally, Elliott shares insights into the innovative security strategies required to address the unique data challenges posed by AI technologies.

Sustainability is another key focus for Dell, as Elliott discusses the energy efficiency improvements in the latest generation of Dell PowerEdge servers and the company's efforts to use recycled materials in hardware and packaging. We also explore how GenAI solutions within the AI Factory can help optimize resource usage, replacing outdated, less efficient AI systems.

As the conversation moves to the skills gap in AI, Elliott talks about Dell's approach to bridging this divide. By focusing on business expertise rather than purely technical skills, Dell's workshops and professional services aim to make AI accessible for a broader range of users. He emphasizes the importance of empowering teams to prioritize AI use cases that align with business objectives.

Finally, we discuss Dell's commitment to an open ecosystem approach, which supports multiple hardware options and software integrations. Elliott explains how multi-tenancy capabilities within the AI Factory enable efficient resource sharing across departments, maximizing GPU utilization and driving business efficiency.

The company is currently preparing for the Dell Technologies Forum on November 26th, where they will showcase the latest capabilities of the AI Factory. This episode offers a sneak peek into the future of AI-driven innovation and how businesses can overcome common hurdles in AI adoption.

Will Dell's AI Factory pave the way for a more accessible and effective AI landscape?

[00:00:03] Have you ever wondered how big tech companies are tackling the practical challenges of AI adoption?

[00:00:10] Well, in today's episode, I'm incredibly grateful to be sitting down with Elliot Young. He's the EMEA CTO at Dell.

[00:00:19] And together, we're going to discuss their transformative approach that might just change the landscape of artificial intelligence as we know it.

[00:00:27] Because today I want to explore how Dell's innovative AI Factory model is aiming to streamline the deployment of AI technologies across a variety of industries.

[00:00:40] So I invite you all to join me today as we uncover the mechanisms that could democratize access to advanced AI and making it a feasible option for businesses, regardless of their size or their sector.

[00:00:53] So if you're thinking of adopting AI, but, and you're struggling with the learning curve, we're going to simplify all of that.

[00:01:01] We're going to demystify this space. And I'm convinced at the end of this episode, you will feel much more confident about doing so.

[00:01:09] But enough from me. Let's get Elliot onto the podcast now.

[00:01:14] So a massive warm welcome to the show, Elliot. Can you tell everyone listening a little about who you are and what you do?

[00:01:22] Hi, Neil. It's great to see you again. I'm Elliot Young. I'm the Chief Technology Officer for Dell Technologies in Europe, Middle East and Africa.

[00:01:30] And I spend the majority of my time meeting customers to talk about how can they get the most efficient use of their existing IT and strategically going forward.

[00:01:42] Where should the aiming points be? And what are the things that we've learned as an organization that can help other organizations to navigate potential pitfalls when they introduce new technologies?

[00:01:52] And of course, the one technology that everyone is always talking about at the moment is generative AI.

[00:01:58] It is such a huge topic right now. That was one of the reasons I was excited to get you on the podcast to join me today because there's so much excitement.

[00:02:07] And there's a fair amount of caution as well.

[00:02:10] And I think AI is often viewed as a complex and in some cases even threatening by many business leaders.

[00:02:17] So how do you or how does Dell's AI factory, how does that simplify AI and make it more accessible to organizations that might be looking to embark on their AI journey?

[00:02:28] And again, because there's so many big questions around this and it can be intimidating or even overwhelming.

[00:02:34] Well, it's interesting you say threatening because up until recently, I suppose because here at Dell we're completely absorbed in everything to do with generative AI, it doesn't seem that threatening.

[00:02:45] But recently I had a customer who came along and said, we've got this new mantra that we follow and the new mantra is AI or die.

[00:02:52] And I thought, wow, that sounds kind of dramatic.

[00:02:56] But in actual fact, that's the way some organizations are looking at this.

[00:02:59] They've come to the conclusion that unless they adopt generative AI to help accelerate their business process and create new innovative approaches, that if they don't do that, then they know that their competition will.

[00:03:12] And how can they compete when your competitor has got an AI on one side and you're trying to compete without that?

[00:03:18] It's a bit like having your hands tied behind your back.

[00:03:21] So one of the things that we want to do is to make it easy to consume and make it less complicated and accelerate the time to value so that customers can get the most out of what the AI is doing as opposed to having to worry about how to engineer the solution.

[00:03:39] And that Dell AI factory is our approach to helping accelerate that and helping that kind of innovation.

[00:03:45] What we want to think about is, are there some design patterns that can already be adopted in a kind of re-engineered framework that you can put together a bit like puzzle pieces in a jigsaw and bring those together so that now you can concentrate on the bit which is specific to you, which is how you run your business or how you're a specialist in your industry.

[00:04:06] And less thinking about what size server or how many GPUs or what kind of networking is required or anything like that.

[00:04:12] And we want to bring that together as an entire ecosystem of capabilities.

[00:04:18] So you're taking these puzzle pieces, putting them together and leveraging the power of all these different components working together.

[00:04:25] And one of the things I try and do on this tech podcast every day is demystify technology, put it in a language everyone can understand and understand the value that it can bring to their business.

[00:04:36] And this is something that you've excelled at.

[00:04:38] And there's something or one of the big reasons why you appeared on my radar as well, because you've reframed Gen AI's potential.

[00:04:45] But through the lens of archetypes like a chef, a chemist and a gardener.

[00:04:51] So can you explain how those help demystify AI for decision makers and ultimately make the benefits of it more relatable to audiences?

[00:05:00] Yes, I think that having these kind of approaches gives business leaders a framework, different ways they can think about generative AI.

[00:05:08] So if we take this example of a master chef, for example, a master chef is a key innovator who is effectively experimenting with new approaches, creating something new that didn't exist and introducing new flavors and new techniques.

[00:05:24] And this is exactly one of the approaches you can take with generative AI.

[00:05:27] And in fact, I call that kind of AI hero AI.

[00:05:32] Why call it hero AI?

[00:05:34] It's because there are certain solutions that when you build those solutions, you look at that capability and you think, wow, that can only really have been created by generative AI.

[00:05:45] So a great example of that would be you walk into an emergency room or an accident and emergency hospital.

[00:05:54] And as you're greeted at the door by a digital human, the digital human can instantly talk to you about what's happened, take some medical history, find out who you are, do the identification, perhaps even contribute to triage, as opposed to somebody having to wait for hours and hours before they get seen in a hospital.

[00:06:14] That kind of solution where you can interact with a digital human is probably built by a kind of graphical interface that makes it look like a human.

[00:06:24] But behind the scenes, there's a large language model which is acting a bit like a puppet masker and is telling the digital human what words to speak and how to respond to the person, the human that they're talking to.

[00:06:34] And when you see those kind of solutions, you think to yourself, that can only have been created by generative AI.

[00:06:40] Another great example of that is if you deploy a chatbot to help with your contact center.

[00:06:47] This is a very common use case that we're seeing right now.

[00:06:50] Organizations want to augment the staff they have in contact centers to be able to respond very, very quickly and in a very personal way to a customer who might call in with a query.

[00:07:02] And if they send an email or if they want to talk on the telephone, then having an AI who becomes an expert on each and every customer, it's a bit like having one AI for as many customers as you've got.

[00:07:16] Because as soon as that customer calls in, the AI has been preloaded with all of the history of the interaction with that customer.

[00:07:21] And it can have a really good personal conversation with the person calling it.

[00:07:27] Those kind of solutions are really things that can only have been created with generative AI.

[00:07:35] And the way that that approach is a bit like a chef putting together ingredients and collecting what's the data that's required to create that kind of solution.

[00:07:43] And then looking at the recipe and say, well, how do I put that together to create a digital human or a particular type of chatbot?

[00:07:49] Well, how do I process the data in some kind of food preparation type approach?

[00:07:54] Or how do I then work out what the presentation is to make it as compelling as possible in the way that they craft this digital human that may be specific to that?

[00:08:04] So that's one of the metaphors.

[00:08:06] But we have a couple of others that often talk about, for example, the chemist.

[00:08:11] The role of a chemist as an accelerator is someone who's there to speed up business processes and to basically get more productivity, optimize workflows and introduce an element of innovation as well.

[00:08:26] But this second approach is where you take generative AI and you think about how can I embed this into my core systems or my most important production systems?

[00:08:39] So let me give you an example of that.

[00:08:42] Consider the situation where there's a doctor and the doctor is treating a patient and they decide they want to write a letter to another doctor about the new treatment plan that they've come up with.

[00:08:52] In that particular case, the doctor logs into the patient information system and the format of these letters that a doctor will write is always the same.

[00:09:00] It starts off with the patient's name and their date of birth.

[00:09:03] It looks at what a summary of their medical history is.

[00:09:06] It looks at what medication they're taking.

[00:09:08] And then you have a couple of paragraphs of what the treatment plan is.

[00:09:12] So you can imagine the first doctors typing in their treatment plan.

[00:09:15] They hit submit in the medical record system.

[00:09:18] And right before it sends the letter, a generative AI comes to life and reads the letter and then goes back to the original doctor and says,

[00:09:28] this is a great treatment plan, but have you also considered this, this and this?

[00:09:32] And the human doctor can look at these recommendations from the AI and say, actually, you know what?

[00:09:39] Those are really good recommendations.

[00:09:40] I think I'll include them in the letter.

[00:09:42] And so the doctor modifies their letter to take on board these recommendations, hit submit again, and the letter gets sent to the second doctor who needs to receive this letter.

[00:09:52] Now, in that particular case, the way that's been implemented is you've effectively modified a key production system by introducing generative AI.

[00:10:01] But probably the first doctor has absolutely no idea there was an AI helping them.

[00:10:06] All they know is they had a pop up on the screen that said, have you considered these other options in the treatment plan?

[00:10:12] Now, if you're talking to a CIO and you say to the CIO, let's make a change to your production strength medical system.

[00:10:21] That might be a daunting prospect for them because they'll be thinking, well, I've got to develop the change.

[00:10:25] I've got to test the change.

[00:10:26] I've got to roll it out.

[00:10:27] I've got to have a backup plan.

[00:10:28] I've got to schedule it.

[00:10:29] I've got to have funding.

[00:10:31] Do I need to train up users?

[00:10:32] There are all kinds of considerations that you think about when updating core production systems.

[00:10:37] So that is probably one of the most impactful ways you can improve things in your company by introducing generative AI to production systems.

[00:10:49] But is there another way if you don't want to update your production systems, taking that kind of approach?

[00:10:56] What else can you do?

[00:10:57] And this is where I often talk about the analogy of a gardener.

[00:11:01] Gardener or cultivate.

[00:11:03] What a gardener does is they're basically nurturing ideas from original conception all the way through to fruition.

[00:11:11] And in order to do that, they have to carry out research and development.

[00:11:14] They've got to think about which plant varieties go together.

[00:11:18] They've got to think about not just a single plant, but how that's actually planted in a garden.

[00:11:23] And bringing together this expertise to help the garden flourish and provide all this abundance and beauty.

[00:11:31] There are all kinds of aspects that go into that.

[00:11:34] And you might say, well, when a gardener is selecting seeds, it's like they're doing data collection.

[00:11:38] Or if they're preparing the soil, it's about thinking about how do you pre-process the environment so it's ready for AI.

[00:11:47] How do you monitor things as they grow and develop?

[00:11:50] And so this third design pattern is like a combination of the first two.

[00:11:56] Because you can adopt technologies like robotic process automation to drive generative AI.

[00:12:03] And with robotic process automation, what we do is we basically use some software, which is available to probably nearly every organization, either on premise or in the cloud, which drives other computer systems as if there was a human sat there at the keyboard.

[00:12:22] And if you can do that, then that means you don't need to worry about cracking open your production systems and trying to modify them.

[00:12:31] Because essentially what you're doing is you're using a robot to call a generative AI, which is then taking over and doing what a human would do.

[00:12:39] It's doing it 10,000 ways parallel and 1,000 times faster and basically enabling scale that way.

[00:12:46] And with this robotic process automation approach, a bit like when you're planting, when a gardener's planting seeds in a garden, one of the things that we can do with the robotic process automation approach is to create teams of AIs.

[00:13:04] And this is known as agentic AI, where you say, OK, I've got one particular problem that I want to solve.

[00:13:11] In order to solve that problem, what I'll do is I'll ask an AI what kind of team of other AIs I need.

[00:13:18] And so you say to the main project manager AI, for example, we want to find out why we're suddenly getting a huge volume of calls to our contact center.

[00:13:28] What could be the reason for that?

[00:13:30] And so the project manager AI says, well, in order to understand that, I'll create an AI which can transcribe live calls into text.

[00:13:37] And I'll give the output from that AI as it transcribes live calls to another AI which can do some research on the Internet.

[00:13:44] How do you do research on the Internet with an AI?

[00:13:47] Well, you have an AI that can write Python code and it writes a Python code which goes out to the Internet, scrapes websites, analyzes the text it finds on the website and then comes back with an answer.

[00:13:56] So if there's some event going on, like, I don't know, you're a contact center for a water utility company, it may be that in the news there's an article that says there's a giant burst water main which is affecting the area.

[00:14:10] In which case, the AI grabs that information and brings it back down in-house and it might say, for example, OK, I'm now going to write a report for the contact center agents so that when somebody calls up the contact center next, the agent knows to say to the customer, there's been an incident in this part of the city and therefore it's affecting the water supply.

[00:14:31] And if the answers come back in English but the person's speaking in a different language, then perhaps translate that.

[00:14:36] So in that particular use case example, you've got a project manager AI talking to a research AI, talking to somebody that can transcribe phone calls.

[00:14:45] I say somebody, it's actually an AI.

[00:14:47] You've got an AI that can write a report.

[00:14:49] You've got another one that can do translation.

[00:14:51] And working together as a team, you have these agents which actually are answering one question from the human which was, why was everyone calling my contact center?

[00:15:00] And giving one response back, which is a written report for contact center agents.

[00:15:05] But in actual fact, the way you achieve that is by having a whole team of AIs collaborate together.

[00:15:10] So these are the kind of approaches that we want to introduce with customers to really get absolute maximum advantage from all the amazing things you can do with generative AI.

[00:15:21] Well, so many great examples there.

[00:15:24] And I'm glad you brought up agentic AI as well.

[00:15:27] So much positivity around now at the moment.

[00:15:29] There's a lot of people tipping it for great things in 2025.

[00:15:32] I know Garton has been leading the charge on that.

[00:15:36] And another thing that stood out to me when I was doing a little research before you came on the podcast today is the concept of an AI factory and how that is central to Dell's vision too.

[00:15:46] So how does that analogy help businesses understand AI's role in producing more actionable outcomes?

[00:15:53] And much like physical factories produce goods?

[00:15:56] Because I think, again, it brings it to life.

[00:15:58] And there's so many people chasing the ROI of AI that this seems a real fresh approach.

[00:16:04] Yeah.

[00:16:05] I mean, it wasn't an accident that we call it a factory.

[00:16:09] And this is a key concept that I think is important for organizations to understand is that if you look at the IT that a typical organization has access to,

[00:16:18] it's either running in the cloud or it's running on IT that you control.

[00:16:23] And the IT could be in a data center or in a partner's data center or a service provider's data center.

[00:16:28] That kind of IT, traditional IT, is really good at doing one thing.

[00:16:33] That is processing existing data sets or processing existing transactions.

[00:16:39] And that kind of IT that you deliver to create those kinds of solutions is well known and well understood and it's been around for years.

[00:16:49] But that IT is not the IT which you'd expect to run generative AI solutions.

[00:16:56] And the reason for that is that generative AI infrastructure is not about processing existing transactions.

[00:17:02] It's about creating new data that never existed before.

[00:17:06] And what the factory is doing is it's producing a constant stream of tokens that come out of that factory.

[00:17:12] And once you have in your mind this idea that generative AI infrastructure is about creating streams of tokens that never existed before,

[00:17:20] you start to realize that the way you implement that is very different to a traditional data center or traditional cloud-based approach.

[00:17:29] Because in any factory, what are you looking to do?

[00:17:31] You're looking to eliminate bottlenecks.

[00:17:34] You're looking to reduce waste.

[00:17:35] You're looking to make the different components of the factory work together in the most efficient way.

[00:17:42] And this is where the whole power of the factory approach comes in because you're looking to optimize the streams of tokens and get those tokens into your key business processes.

[00:17:53] And, of course, data is the key enabler for AI success.

[00:17:58] So, what would you say are some of the biggest challenges business face when managing data across multiple environments,

[00:18:05] especially with so many different silos that is well documented as well?

[00:18:09] And how does Dell's AI factory address some of those issues?

[00:18:13] Because it is – if you peel the layers back, it's all about data, isn't it?

[00:18:16] Yeah, certainly.

[00:18:18] I mean, if an organization was to use a publicly available pre-trained model, then you can do some great things with that.

[00:18:28] But consider now if the AI knows everything about your own private data sources, like your CRM database or your billing database or your manufacturing systems or your project product management systems or all the data on your intranet.

[00:18:43] Once the AI understands that kind of thing, then that's really where you get incredible outcomes.

[00:18:50] And so the location of the data and where that data sits becomes really important.

[00:18:55] However, I'm not advocating that you have to do like a review of multiple different data sets.

[00:19:03] One of the things that I see in practice where we help accelerate time to value with customers is to help identify data sets which are not sensitive to slightly incorrect or slightly bad or contradictory information.

[00:19:20] So let me give you an example of what I'm talking about.

[00:19:22] Imagine you wanted to build an AI which can give a sales recommendation for any of your customers and you have an entire sales force team of people who go out selling your products to your customers.

[00:19:36] Every time they have a customer meeting, they either record the meeting and transcribe it and upload it to the CRM system or they type in some notes into the CRM system and says,

[00:19:47] Wow, I just had this great meeting with the following three customers.

[00:19:50] Here are the topics we talked about and here are the actions we agreed on.

[00:19:54] Well, what you can do is you can take the notes of the last five meetings and put that into the AI.

[00:20:02] You can say to the AI, here is our entire product catalog and what's great about it.

[00:20:06] Put that into the AI and then say to the AI, well, now what's been discussed in these last five meetings and now that the products that we sell,

[00:20:12] why don't you write a half page report on what the sales team should sell to the customer next time I talk to them?

[00:20:20] And by the way, why don't you follow that up with an email which you can send to tee up the next meeting?

[00:20:25] Now, if in the meeting notes, the salesperson has put typos or they've put the wrong information or they put contradictory information

[00:20:35] or they just basically haven't bothered to fill out a huge amount of detail.

[00:20:39] Is that going to stop the AI from making a really good recommendation on what to sell to the customer?

[00:20:46] No, it doesn't care about typos, doesn't care about spelling mistakes or other small inaccuracies.

[00:20:52] And you have a, therefore, that means you have a really powerful data set that you can start monetizing immediately with generative AI to take advantage of that

[00:21:01] and perhaps create a recommendation on what to sell for every single one of your customers.

[00:21:06] So in that particular case, the data is not sensitive to inaccuracies and emissions and that kind of thing.

[00:21:14] And you can still have a fantastic outcome with those kind of use cases.

[00:21:18] We'll now consider another use case.

[00:21:20] Let's say you want to create an AI which is an expert on all of the data on your intranet.

[00:21:28] Now, data on the intranet is probably built up over years and years and every day a new staff member comes in and puts more data on the intranet.

[00:21:36] And they might say, for example, maybe six months ago there was an article on the intranet that says,

[00:21:41] here are the latest products that we're selling to our customers and here are the features of them

[00:21:44] and here are the new features that are coming out next week or next month.

[00:21:48] And then a few months later, somebody else puts another article on the intranet saying,

[00:21:53] OK, we've now released the next version of the product and it's got all of these features in it.

[00:21:57] If you feed that information into an AI without looking at the quality of the data and actually what the intent and the meaning of that data is,

[00:22:07] then essentially you've just confused the AI because the AI is now thinking, well, does the product have these features or does it have these features?

[00:22:15] And so in that kind of situation, your use case is absolutely sensitive to data inaccuracies or contradictions or typos or missing data.

[00:22:26] So what I'd say is working with Dell, what we can do is we can help identify the use cases where you can start getting benefit from them right away

[00:22:36] without having to move your data, without having to make duplicates of the data,

[00:22:41] because basically any data that's already sitting on a Dell platform like PowerScale, for example,

[00:22:48] is the work of minutes to make that available to an AI because it's already pre-prepared.

[00:22:53] It's already ready to go. It's very simple to index.

[00:22:56] It's very simple to create a vector database.

[00:22:57] And so the technology to do that is something that we do all day, every day.

[00:23:03] And what that means is that the customers can focus on what kind of questions are they going to ask for the AI

[00:23:07] or what kind of use cases are they going to benefit from first?

[00:23:10] So this whole point about making data like a key enabler, I think, is absolutely right.

[00:23:17] And this is one of the things that we want to focus on to get to value as quickly as possible.

[00:23:22] And as organisations get more confident, they begin to continuously improve and inevitably begin to scale AI,

[00:23:31] I would imagine that infrastructure becomes even more critical.

[00:23:35] So how does Dell's AI factory ensure things like flexibility and scalability,

[00:23:40] especially as AI technologies continue to evolve so rapidly?

[00:23:46] Because the speed of change at the moment is breathtaking, isn't it, in technological terms?

[00:23:52] Well, you're absolutely right.

[00:23:53] And it used to be in the old days where you could say people will do their quarterly training and they'll be up to date.

[00:23:59] And then when cloud technologies came along, they said, well, you need to do monthly training and then you'll be up to date.

[00:24:06] Nowadays, in the world of generative AI, you really need to be getting updates weekly, if not daily.

[00:24:12] And I know a number of people who just start today by working out what's changed in the world of AI.

[00:24:17] But this idea of scalable approach is really important because there are some things that I think a lot of our customers haven't realised yet,

[00:24:27] which is that once you deploy the Dell AI factory, what you've done is you've deployed a framework,

[00:24:32] which is very customisable and very flexible and very scalable.

[00:24:37] You can deploy one large language model on it and that one large language model can be the thing that drives your digital human.

[00:24:45] It's the same large language model that helps your programmers do AI coding assistance.

[00:24:51] It's the same large language model that powers the chatbot for your contact centre.

[00:24:55] It's the same large language model that analyses incoming emails and classifies them into different categories.

[00:25:04] Like, is this a customer's report request or is this a request to buy something or is this a complaint?

[00:25:10] All of these capabilities actually coming from the same factory.

[00:25:14] It's just that you're emitting a slightly different stream of tokens each time.

[00:25:18] And the way we scale that is that you can choose where to place the large language model capability.

[00:25:25] You could start off with a server, fill it full of E200 NVIDIA chips and start inferencing right away on that.

[00:25:34] And if you want to scale that out, because the whole thing is container based, you can say,

[00:25:37] well, I'll just have another worker node and I'll put more containers on there.

[00:25:40] Oh, I need another worker node to do more inferencing.

[00:25:42] Just add another one and scale it out very simply from that approach.

[00:25:46] But we were talking about data earlier.

[00:25:49] One of the things we've also done on that is we've given the Dell storage systems the ability to interact directly with GPUs.

[00:25:57] So if a GPU wants to write data directly to storage, we give it a route to be able to do that.

[00:26:03] And it effectively is bypassing the CPU and bypassing other components that might otherwise slow it down.

[00:26:09] So that is hugely scale.

[00:26:11] We introduce technologies which are faster in networking terms than most organizations existing data centers today.

[00:26:20] So if organizations are running 25 gigabit Ethernet, for example, compare that to an AI solution where we're looking at technologies that are running 400 or 800 speeds.

[00:26:32] That is vastly different.

[00:26:35] And it enables GPUs to have this direct connection between them.

[00:26:39] And once you have that capability, you can share GPU memory between multiple GPUs, even if they're in different servers.

[00:26:47] And that enables you to load up the largest large language models.

[00:26:52] Now, why might you want to do that?

[00:26:53] The reason you might want to do that is because if you can tell the AI in a single prompt everything about that use case, then it's a bit like having a trained AI which is specific to your data without having to do any training or without having to do any fine tuning or without having to implement RAG, retrieval, augmented generation.

[00:27:16] Because you can take that much data, let's say an entire customer history and put it into one prompt and then have the AI answer a question about the prompt.

[00:27:23] So that is hugely scalable.

[00:27:26] But we also give you the ability to take that technology and push it to Dell native edge solutions.

[00:27:32] So it may be that actually you want to do some inferencing in a factory, which is not your typical data center.

[00:27:37] Or maybe you say, you know what, I want to I want the AI to have knowledge about everything that a particular employee knows and has access to.

[00:27:48] So you take a copy of that data.

[00:27:50] And you deploy a copy of that data onto a user's laptop.

[00:27:55] And in that laptop, you put a neural processing unit alongside the GPU and the CPU.

[00:28:01] And now you can start having AIs running in people's laptops.

[00:28:06] So the way we've architected the AI factory is it's completely scalable, but it goes the full width.

[00:28:13] It goes right from a user's Dragon enabled laptop, for example, to edge locations, to data center locations, to the cloud.

[00:28:23] And in fact, if you just want to consume it even as a cloud service,

[00:28:26] then one of the things we make available is a completely cloud based solution for this as well.

[00:28:31] So it's a very extensible, very flexible solution.

[00:28:36] Wow, incredibly cool.

[00:28:38] And another topic we've got to bring up, of course, especially when talking about AI,

[00:28:42] is the fact that security is also a growing concern as businesses adopt AI,

[00:28:48] particularly with generative AI models and corporate data, etc.

[00:28:51] So how does Stiles AI factory address some of those security challenges of an unexpanded attack surface

[00:28:59] and ultimately ensure better data protection?

[00:29:04] Because, again, it's all about data and protecting that data.

[00:29:07] But how do you approach this?

[00:29:10] Yes, well, generative AI, getting access to confidential data is by completely new approaches.

[00:29:16] If you want to hack an AI, there are various different ways of doing that,

[00:29:21] like, for example, you might take a prompt engineering approach where you try and extract from the AI

[00:29:26] the instructions that the company put into that AI in the first place.

[00:29:31] And if those instructions include, you know, confidential data about their own customers,

[00:29:36] then, of course, that's something that needs to be protected.

[00:29:38] And for that reason, I'd say most organizations, I think in one review,

[00:29:47] in fact, I have to get the source for this one.

[00:29:49] But in that one review, I think more than 80% of businesses said that they preferred an on-premise

[00:29:56] or a hybrid cloud Gen AI model as opposed to a fully public cloud model

[00:30:01] because they really didn't want to take the crown jewels or their liquid gold,

[00:30:05] which is the data, and put it in a place where the third party may have access to it.

[00:30:10] So to enable that, obviously, we've got the design pattern where the entire thing can be self-contained

[00:30:16] and either run by you or run by a partner or delivered as a service.

[00:30:20] But it's effectively that infrastructure is dedicated to you.

[00:30:25] And the way the whole thing is created is we start off right from day one when we build this

[00:30:31] by having a secure supply chain.

[00:30:34] And that supply chain is checked at each level of the build and the deployment

[00:30:39] and it's underpinned with these zero-trust principles.

[00:30:44] So at every stage, we're always thinking about what's the right encryption?

[00:30:48] What's the right role-based access control?

[00:30:51] How do you know if there's a threat?

[00:30:53] Can you add on any software that detects things like hallucinations

[00:30:56] or attempts to circumvade the configuration?

[00:31:01] Are there ways that people are going to try and extract information from the AI

[00:31:06] by crafting very, very special prompts,

[00:31:08] which try and trick the AI into revealing how it's being configured?

[00:31:12] These are exactly the kind of things that we're looking at

[00:31:15] and making sure that we're protecting customers' data

[00:31:19] and also putting in place these kind of governance

[00:31:22] that validates that that protection is in place.

[00:31:26] So yes, it's strange because in the old days,

[00:31:29] if you wanted to hack a system,

[00:31:31] you would look for where the data is and you'd say,

[00:31:34] OK, I'll go into this huge billing engine database

[00:31:38] because I know that's where all the customer data is going to be

[00:31:40] and that would be a target.

[00:31:41] When you're creating AIs,

[00:31:43] there aren't any rows in the database to look at.

[00:31:46] There aren't any columns to protect.

[00:31:48] There isn't even a program that somebody's written

[00:31:51] that you might try and reverse engineer

[00:31:52] to see if you could get in the program

[00:31:55] because the AI has effectively written its own program.

[00:31:57] The way the data is stored in AI,

[00:32:00] the way it has access to its knowledge,

[00:32:02] are by these vectors,

[00:32:04] which are essentially numbers that are connected together in formulas.

[00:32:08] And that is a very different way of storing and accessing data.

[00:32:13] And of course,

[00:32:14] the security for that has to also be different as well.

[00:32:17] And the environmental impact of AI does seem at odds somewhat

[00:32:23] with the sustainability and ESG initiatives inside of businesses.

[00:32:28] And those things are increasingly important for organizations too.

[00:32:32] So how does Dell's AI factory integrate sustainability principles

[00:32:37] into AI development?

[00:32:38] And what role does that play in helping reduce

[00:32:41] some of those environmental impacts?

[00:32:43] Because there's been a lot of talk of some of the harm that AI can do

[00:32:47] with its thirst of energy, resources and water, etc. for cooling.

[00:32:52] Well, that is an interesting area

[00:32:54] because obviously people are rushing at high speed

[00:32:57] to adopt this technology and get access to it.

[00:33:00] So it's important that that's done in a sustainable way.

[00:33:03] And the Dell AI factory components are based on these things

[00:33:08] called Dell validated designs.

[00:33:10] And in fact, you can go onto the Dell website

[00:33:12] and download a Dell validated design.

[00:33:14] But the Dell validated design tells you everything about

[00:33:17] the jigsaw pieces that are put together to create this solution.

[00:33:22] And they're all based on the latest generation

[00:33:25] of components like Dell PowerEdge servers.

[00:33:29] And these PowerEdge servers, because you're using the latest generation,

[00:33:33] are inherently way, way more energy efficient

[00:33:37] compared to, let's say, a legacy server

[00:33:40] you might have in your data center from a different vendor

[00:33:42] which is three or four years old,

[00:33:44] where you may have said,

[00:33:46] oh, I think I'll sweat that asset as much as I can.

[00:33:48] The drawback of sweating an asset

[00:33:50] which is three or four or five years old

[00:33:52] is that the level of energy efficiency is way, way down

[00:33:57] compared to modern generations of things like PowerEdge servers.

[00:34:02] So that's the first thing to consider is

[00:34:05] are you actually building on infrastructure

[00:34:06] where you're getting the most energy efficiency?

[00:34:10] And then in those actual components,

[00:34:13] we put a lot of effort into using sustainable materials

[00:34:17] like recycled plastic, reclaimed carbon fiber,

[00:34:21] low emission aluminum.

[00:34:22] And in fact, our servers feature recycled steel as well.

[00:34:27] And of course, all the packaging is 100% recycled.

[00:34:32] So sustainability is really important to us.

[00:34:34] And I think you have to look at the whole end-to-end approach

[00:34:37] to say, how can I get the best from this?

[00:34:40] I think that's one thing.

[00:34:41] The other thing is that a surprising finding that I've seen

[00:34:46] is that generative AI can actually be used

[00:34:51] to turn off other components in people's IT landscape.

[00:34:56] So if you compare it to traditional AI,

[00:34:59] traditional AI might include things like machine learning

[00:35:01] or unsupervised learning or anomaly detection

[00:35:04] or things like that.

[00:35:06] Well, consider the old way that those kind of things were done.

[00:35:10] So let me give you a real world example to make it make sense.

[00:35:14] So we talked earlier about this idea

[00:35:16] that you have a big contact center

[00:35:18] and people keep sending you lots of emails.

[00:35:20] Your customers send you lots of emails.

[00:35:22] And some emails might be about a request to buy a product.

[00:35:26] Another one might be a complaint.

[00:35:28] Another email might be a request for support.

[00:35:31] So in the traditional approach,

[00:35:35] what we would do is we would get a data scientist

[00:35:37] to get a thousand of those three different types of email

[00:35:41] and load up a thousand of those examples

[00:35:43] into a machine learning AI

[00:35:45] and they would develop a model over time.

[00:35:48] It might take weeks or months to develop a model that says,

[00:35:51] is this email a complaint email?

[00:35:55] And once you've done that for the complaint email,

[00:35:58] you then create a second model,

[00:35:59] which is a model that says,

[00:36:01] is this email a request to buy something?

[00:36:03] And then you do that.

[00:36:04] You'd see for every category,

[00:36:06] the data scientist is building a model.

[00:36:08] And then when a brand new email comes in

[00:36:11] that system's never seen before,

[00:36:12] what it basically does is it runs all of those three models,

[00:36:15] inferences all those three models.

[00:36:17] And maybe the complaint model says,

[00:36:19] oh, this is 10% likelihood that this email is a complaint.

[00:36:22] And maybe the customer service email says,

[00:36:25] oh, this is a 5% chance that this email is about

[00:36:28] a request for service.

[00:36:29] And then the selling model says,

[00:36:33] oh, this is a 95% chance

[00:36:35] that this customer is trying to buy something.

[00:36:37] And therefore, based on these voting of these three AIs,

[00:36:41] you basically have an answer that says,

[00:36:43] oh, I think this email is a request to buy something.

[00:36:47] That's the old way of doing things.

[00:36:48] With generative AI, you don't need any of that.

[00:36:51] You just say to the AI,

[00:36:52] you will be receiving an email,

[00:36:55] you work in the customer service center

[00:36:56] for this company in this industry,

[00:36:58] and tell me what the email is about.

[00:37:01] And so it might say it's a complaint

[00:37:06] or customer service or selling,

[00:37:08] but it might also come back and say,

[00:37:11] actually, this email is requesting

[00:37:13] to onboard a new customer.

[00:37:14] Hold on, there's a whole new category

[00:37:16] that didn't even exist in the old approach.

[00:37:19] And now suddenly generative AI is doing

[00:37:21] as well as the old machine learning approach,

[00:37:23] but it's also introduced new capabilities

[00:37:26] that were previously not available before.

[00:37:29] So what do you do if you're the CIO?

[00:37:30] Well, you probably think about decommissioning

[00:37:32] that old system.

[00:37:33] And that old system was running on IT that was old

[00:37:36] and it was using electricity.

[00:37:38] It required data scientists to keep the thing running

[00:37:41] and operating.

[00:37:42] And now you've got the opportunity to replace that

[00:37:44] with a way more efficient approach,

[00:37:45] which gives you a better outcome based on generative AI.

[00:37:48] So these are the kind of ways that you can look at

[00:37:51] to really get the sustainable advantage

[00:37:54] with the Dell AI factory.

[00:37:57] Something else I wanted to highlight today

[00:37:59] is AI-ready skills are often in short supply

[00:38:02] right across the industry.

[00:38:04] So how do Dell services within AI factory,

[00:38:07] how does that help organisations

[00:38:09] maybe better bridge the gap between technology

[00:38:12] and their talent

[00:38:13] and ultimately ensuring successful AI implementation?

[00:38:17] What are you seeing there?

[00:38:20] Yeah, this is a good point.

[00:38:22] I mean, what skills does it take to create an AI?

[00:38:26] If you're going to train an AI from scratch,

[00:38:29] that is a significant undertaking.

[00:38:32] And yes, you absolutely do need skills

[00:38:34] in data science to do that.

[00:38:36] And you need infrastructure

[00:38:37] that's optimised for training.

[00:38:39] A lot of organisations won't be doing that.

[00:38:41] A lot of organisations will be looking at fine tuning.

[00:38:45] With fine tuning, again,

[00:38:46] you do need skills in data preparation.

[00:38:48] You need skills in the mechanics

[00:38:51] of how do you fine tune an AI.

[00:38:53] So that's one thing to look at.

[00:38:55] For many use cases,

[00:38:56] you don't need either of those two

[00:38:57] because you've got a RAG solution,

[00:39:00] which is where you take your own database

[00:39:02] and connect it to the AI.

[00:39:04] Just by doing that,

[00:39:05] the AI becomes an expert on that data.

[00:39:08] So in that particular case,

[00:39:10] the skills required to do that

[00:39:12] are not particularly technology-focused.

[00:39:15] We're now starting to get into skills

[00:39:17] which are more business-related.

[00:39:19] And the fourth design pattern

[00:39:20] is where you don't do any of those things.

[00:39:22] You just take your existing business data

[00:39:24] and you stuff it into a prompt.

[00:39:26] So you say to the AI...

[00:39:28] So here's an actual example from this morning.

[00:39:31] I was talking to a customer

[00:39:32] who creates AIs

[00:39:33] that can write a summary

[00:39:36] of how a drone flight progressed.

[00:39:39] So the drones send back telemetry data,

[00:39:41] send back video and audio data.

[00:39:43] When they're flying around,

[00:39:44] you want to ensure that the drone

[00:39:45] has been flying at the right height.

[00:39:47] It hasn't crashed into anything.

[00:39:48] It didn't come close to running out of battery power.

[00:39:51] There are various things that you need to evidence

[00:39:54] that you've had a drone flight in a successful way.

[00:39:57] And so in those particular cases,

[00:40:00] you can take all of the drone information,

[00:40:02] stuff it into the prompt,

[00:40:03] and then say to the AI,

[00:40:05] based on your knowledge of drone law,

[00:40:07] can you write a report that says

[00:40:09] that this drone flight was compliant?

[00:40:11] Yes or no?

[00:40:11] So in that particular case,

[00:40:13] you didn't do any of those other things

[00:40:14] like RAG or tuning or training.

[00:40:17] And the skills required to do that

[00:40:20] way more about who's an expert in drone law

[00:40:24] and who's an expert in understanding

[00:40:26] the engineering data

[00:40:27] that comes out of drone telemetry.

[00:40:29] In those kind of scenarios,

[00:40:32] it's much less about the IT side.

[00:40:34] So what we do is

[00:40:36] we make available a...

[00:40:41] for nearly all of our customers.

[00:40:42] And in fact, if you follow up with this podcast,

[00:40:45] one of the things that we're doing

[00:40:46] is we're giving access to organisations

[00:40:48] to a one-day workshop

[00:40:50] run by our professional services team

[00:40:52] who will come on site

[00:40:54] and actually support you

[00:40:55] as you choose your use cases

[00:40:58] to be the highest priority use cases.

[00:41:01] And many organisations these days,

[00:41:03] when you say to them,

[00:41:04] well, have you thought about

[00:41:05] what use cases you're going to use

[00:41:07] with generative AI?

[00:41:08] They come back and say,

[00:41:09] oh yes, I've had loads of ideas

[00:41:11] from everyone in the organisation,

[00:41:13] but the list is 200 long.

[00:41:15] What are you going to do

[00:41:15] with a list of 200 use cases?

[00:41:17] You're probably not going to implement

[00:41:18] all of those immediately.

[00:41:19] And there are some which are low-hanging fruit

[00:41:21] and really quick and easy to implement.

[00:41:24] And those are exactly the things

[00:41:25] where our services people

[00:41:26] will come along and give you advice

[00:41:28] and say,

[00:41:29] okay, this is where the low-hanging fruit is.

[00:41:31] These are the quickest things to implement.

[00:41:33] This is where you'll get the first value

[00:41:35] from that kind of approach.

[00:41:37] So we want to support customers

[00:41:42] by bringing the technical knowledge,

[00:41:44] but let them bring the industry vertical knowledge

[00:41:47] and put that together

[00:41:48] with skills around things like rapid deployment,

[00:41:52] making day one operations a reality,

[00:41:55] handing it over to the customer

[00:41:57] and basically just giving hands-on

[00:42:00] real world experience

[00:42:01] of how to build these kinds of solutions

[00:42:03] and make them available to our customers.

[00:42:05] And I think many companies

[00:42:07] are hesitant to adopt AI

[00:42:09] due to an almost perceived complexity

[00:42:12] of integrating various ecosystem solutions

[00:42:15] enough to keep anybody awake at night.

[00:42:17] So how does Dell's open ecosystem approach

[00:42:20] make AI adoption easier

[00:42:23] and ultimately more seamless for businesses

[00:42:25] and their customers too?

[00:42:28] Well, just like a traditional factory,

[00:42:32] you have lots of different elements

[00:42:35] of the supply chain

[00:42:36] and frequent deliveries

[00:42:38] and you have to integrate

[00:42:40] with what's coming out of the factory.

[00:42:42] It's a similar kind of approach

[00:42:44] that we take with the AI factory.

[00:42:47] We really are about giving customers choice.

[00:42:49] So if you want to implement the AI factory

[00:42:51] on wall-to-wall NVIDIA GPUs,

[00:42:55] you can absolutely do that.

[00:42:56] If you want to implement it

[00:42:58] on Gaudi 3 with Intel,

[00:42:59] that's an option as well.

[00:43:01] And similar for some of the AMD chips.

[00:43:04] If you want to think about

[00:43:07] how do I provide a multi-tenant approach?

[00:43:10] So let's say, for example,

[00:43:12] in a large organization,

[00:43:14] whichever department is first

[00:43:17] to deploy an AI platform,

[00:43:20] they typically become the de facto standard

[00:43:22] within the organization.

[00:43:23] I see this a lot in banks, for example.

[00:43:25] You take a bank and you say,

[00:43:29] well, if it's a retail bank,

[00:43:30] you might say,

[00:43:31] well, is the retail banking division

[00:43:33] that focuses on high street customers,

[00:43:35] is that going to get an AI platform first?

[00:43:37] Or will it actually be the treasury department?

[00:43:39] Or will it be capital markets?

[00:43:41] Or will it be private banking?

[00:43:43] There's some kind of internal pressure

[00:43:44] that happens between

[00:43:45] these different departments.

[00:43:47] Whichever of those departments

[00:43:48] gets the platform first,

[00:43:50] they typically become the de facto standard.

[00:43:52] And the idea is,

[00:43:54] if you say, for example,

[00:43:56] oh, well, capital markets

[00:43:58] makes a huge amount of revenue for the bank

[00:43:59] and therefore they've bought

[00:44:01] their platform first,

[00:44:02] probably you'll get someone

[00:44:02] from private banking

[00:44:03] come along to capital markets

[00:44:04] and say,

[00:44:05] hey, do you mind if we have a slice

[00:44:06] of your GPUs

[00:44:08] so that we can do our own inferencing

[00:44:09] without having to buy

[00:44:10] a whole new infrastructure?

[00:44:11] So in that particular case,

[00:44:14] internally within the organization,

[00:44:16] capital markets department

[00:44:17] becomes like a service provider

[00:44:18] to other departments within the bank.

[00:44:20] And we've thought about

[00:44:23] those kind of problems in advance.

[00:44:25] And what we can do

[00:44:25] is we can take software solutions

[00:44:28] and integrate them

[00:44:29] with the Dell AI factory.

[00:44:31] That means effectively

[00:44:32] that platform becomes

[00:44:33] a multi-tenant platform.

[00:44:35] So when someone from private banking

[00:44:38] comes along and does some inferencing,

[00:44:40] they think they've got access

[00:44:41] to six GPUs.

[00:44:43] But behind the scenes,

[00:44:44] what's actually happened

[00:44:44] is the system has carved that up

[00:44:46] and given them access

[00:44:47] to what looks like six virtual GPUs.

[00:44:50] But behind the scene,

[00:44:51] they're only consuming one or two.

[00:44:53] And probably capital markets

[00:44:54] is still consuming the other five.

[00:44:57] Or we say,

[00:44:58] well, capital markets

[00:44:58] isn't using the other five GPUs

[00:45:01] for the next 30 minutes.

[00:45:02] So why not burst up

[00:45:04] and give the entire capability

[00:45:05] to private banking

[00:45:06] so that they can consume it

[00:45:07] in the most efficient way?

[00:45:09] Because what we want to do

[00:45:10] is we want to make sure

[00:45:11] that that infrastructure

[00:45:13] is not sitting there being unused,

[00:45:14] that people are getting

[00:45:15] the maximum value

[00:45:16] for their factory,

[00:45:16] which has to be constantly

[00:45:18] creating a stream of tokens.

[00:45:21] So that's kind of long-winded answer,

[00:45:24] I guess,

[00:45:24] to the thing of the ecosystem.

[00:45:26] But it's extensible approach

[00:45:30] which leverages all the abilities

[00:45:32] of all of the Dell partner ecosystem

[00:45:35] to be able to create

[00:45:36] these kind of solutions.

[00:45:38] And looking ahead

[00:45:40] to Dell's technology forum,

[00:45:42] I'm not sure how much

[00:45:44] you are able to share with me today

[00:45:46] and what you're not allowed to share.

[00:45:48] But just to offer a few teasers,

[00:45:50] what can attendees expect

[00:45:51] to learn about AI

[00:45:53] and how is Dell continuing

[00:45:55] to position itself

[00:45:56] as almost a catalyst

[00:45:58] for driving successful AI adoption

[00:46:00] across multiple industries there?

[00:46:03] Is there anything you can share

[00:46:04] around that road ahead

[00:46:05] and what we can expect

[00:46:06] from Dell's technologies forum?

[00:46:09] Yeah, definitely.

[00:46:10] Dell tech forum

[00:46:10] is taking place

[00:46:11] on the 26th of November this year.

[00:46:13] It's in London.

[00:46:15] So if you can come along

[00:46:15] and visit in person,

[00:46:16] I definitely encourage you to do that.

[00:46:18] But there's also

[00:46:19] an online version as well.

[00:46:20] But at that session,

[00:46:22] what we do is we show

[00:46:23] all the different parts

[00:46:24] of the Dell technologies portfolio.

[00:46:27] And that includes multi-cloud.

[00:46:29] So for example,

[00:46:30] the concept of having a node,

[00:46:32] which on a Monday

[00:46:33] could be an Azure Stack node,

[00:46:36] for example.

[00:46:36] On a Tuesday,

[00:46:37] that same node could flip

[00:46:38] into an OpenShift node

[00:46:40] or it could be

[00:46:41] another multi-cloud provider.

[00:46:44] So this whole idea

[00:46:44] of what is multi-cloud,

[00:46:46] how do you get advanced of it?

[00:46:47] We're looking at cyber

[00:46:48] and cyber security,

[00:46:50] workforce enablement,

[00:46:51] storage solutions.

[00:46:53] So you can come along

[00:46:54] to those sessions

[00:46:54] and sit in on

[00:46:56] multiple interactive streams

[00:46:59] that are going on

[00:46:59] at the same time

[00:47:00] and learn about the key technologies

[00:47:02] in each of those areas.

[00:47:04] We're going to have

[00:47:04] some fantastic presenters

[00:47:06] and we're also going to be

[00:47:08] announcing the latest

[00:47:09] and greatest capabilities

[00:47:10] about the Dell AI Factory.

[00:47:12] So yeah,

[00:47:13] if you get the chance

[00:47:14] to come along,

[00:47:15] then definitely encourage

[00:47:16] and hopefully

[00:47:16] we can meet in person.

[00:47:18] Awesome.

[00:47:19] Well, I will encourage

[00:47:19] anyone listening

[00:47:20] to check that out.

[00:47:22] And before I let you go,

[00:47:23] for anyone listening,

[00:47:24] just want to find out

[00:47:24] more information

[00:47:25] about anything we talked about today.

[00:47:27] Connect with you or your team

[00:47:28] or dig a little bit deeper

[00:47:30] on some of the things

[00:47:31] that we've mentioned.

[00:47:32] Is there any way in particular

[00:47:33] you'd like to point

[00:47:34] everyone listening?

[00:47:36] Yeah, exactly.

[00:47:37] Well, you can always reach out

[00:47:38] to me personally

[00:47:39] on LinkedIn

[00:47:39] or to my team.

[00:47:41] It'd be great to see you

[00:47:42] in person on the 26th

[00:47:44] and we're going to share

[00:47:45] all the details

[00:47:46] of that event

[00:47:47] and more

[00:47:47] in the show notes

[00:47:48] for this podcast.

[00:47:50] Well, I'll add links

[00:47:51] to everything there

[00:47:52] so people can find

[00:47:53] that nice and easily.

[00:47:55] And we've covered

[00:47:55] so much today.

[00:47:56] In particular,

[00:47:57] one of the things

[00:47:58] that I love about

[00:47:58] is your passion

[00:47:59] for this topic

[00:48:00] and how you're able

[00:48:01] to demystify AI

[00:48:02] for decision makers,

[00:48:04] make its benefits

[00:48:05] more relatable

[00:48:06] and doing that

[00:48:07] through the concept

[00:48:08] of the AI factory.

[00:48:10] It's easy to see

[00:48:10] why that is central

[00:48:12] to Dell's vision.

[00:48:13] But more than anything,

[00:48:14] just thank you

[00:48:14] for shining a light

[00:48:15] on this today

[00:48:16] and starting a conversation

[00:48:17] around it.

[00:48:18] Thanks for joining me.

[00:48:19] Yeah, as ever,

[00:48:20] it was great to talk to you

[00:48:21] and good to see you again.

[00:48:23] Wow.

[00:48:24] I think as we conclude

[00:48:25] that conversation

[00:48:26] with Elliot Young,

[00:48:27] it's clear that

[00:48:28] Dell's AI factory

[00:48:29] is not just

[00:48:31] an infrastructure solution

[00:48:32] but more of a

[00:48:33] pivotal strategy

[00:48:35] in the adoption

[00:48:36] and integration

[00:48:37] of AI

[00:48:37] across many

[00:48:38] different business arenas.

[00:48:40] Whether that be

[00:48:41] improving sustainability,

[00:48:43] bridging skills gaps,

[00:48:45] Dell seems to be

[00:48:46] setting a benchmark

[00:48:47] in the tech field.

[00:48:49] But what are your thoughts

[00:48:50] on the potential impacts

[00:48:51] of technologies

[00:48:52] in your industry?

[00:48:53] Have you considered

[00:48:55] how AI could transform

[00:48:56] your business operations?

[00:48:58] We're not just talking

[00:48:58] about the technology.

[00:49:00] We're talking about

[00:49:00] real-world problems,

[00:49:01] generating business value,

[00:49:03] solving real-world problems.

[00:49:05] Please,

[00:49:06] share your thoughts.

[00:49:07] Join the conversation

[00:49:08] by emailing me

[00:49:10] techblogwriter

[00:49:10] at outlook.com.

[00:49:12] LinkedIn,

[00:49:13] that's probably the best way

[00:49:14] of getting a hold of me.

[00:49:15] I'm just

[00:49:15] at Neil C. Hughes

[00:49:17] but equally

[00:49:17] at Neil C. Hughes

[00:49:18] on X

[00:49:19] and Instagram too.

[00:49:22] So please,

[00:49:23] share your insights.

[00:49:24] Join the conversation

[00:49:25] as we continue

[00:49:25] to explore

[00:49:26] the intersections

[00:49:27] of technology

[00:49:28] and business innovation.

[00:49:30] But I'm out of time

[00:49:31] for today.

[00:49:32] Hopefully I delivered

[00:49:33] on my promise there

[00:49:34] of making you feel

[00:49:35] more confident

[00:49:36] about adopting AI

[00:49:37] in your business

[00:49:38] but please let me know.

[00:49:39] That's it for today though

[00:49:41] so time for me to go

[00:49:42] and hopefully

[00:49:42] I will speak with you all

[00:49:44] again tomorrow morning.

[00:49:46] Bye for now.

[00:49:47] Bye for now.