AI PCs Explained With Logan Lawler from Dell Technologies
Tech Talks Daily · February 11, 2026
36:24 · 33.32 MB


What actually happens when AI stops being a cloud-only experiment and starts running on desks, in labs, and inside real teams trying to ship real work?

In this episode, I sit down with Logan Lawler, Senior Director at Dell Technologies, to unpack how AI workloads are really being built and supported on the ground today. Logan leads Dell's Precision and Pro Max AI Solutions business and hosts Dell's own Reshaping Workflows podcast, giving him a rare vantage point into how engineers, developers, creatives, and data teams are actually working, not how marketing slides suggest they should be.

We start by cutting through the noise around AI PCs. Logan breaks down what genuinely matters when choosing hardware for AI work, beyond what gets repeated on conference stages. CPUs, GPUs, NPUs, memory, and software stacks all play different roles, and misunderstanding those roles often leads teams to overspend or underspec. Logan explains why all AI workstations qualify as AI PCs, but not all AI PCs are suitable for serious AI work, and why GPUs remain central for anyone doing real model development, fine-tuning, or inference at scale.

From there, the conversation shifts to a broader architectural rethink. As AI workloads grow heavier and data sensitivity increases, many organizations are reconsidering where compute should live. Logan shares how GPU-powered Dell workstations, storage-rich environments, and hybrid cloud setups are giving teams more control over performance, cost, and data. We explore why local compute is becoming attractive again, how modern GPUs now rival small server setups, and why hybrid workflows, local for development and cloud for deployment, are becoming the default rather than the exception.

One of the most compelling parts of the discussion comes when Logan connects hardware choices back to business reality. Drawing on real-world examples, he explains how teams use local AI environments to move faster, reduce cloud costs, and avoid getting locked into architectures that are hard to unwind later. This is not about abandoning the cloud, but about being intentional from the start, especially as AI usage spreads beyond developers into marketing, operations, and everyday business roles.

We also step back to reflect on a deeper challenge. As AI becomes easier to use, what happens to critical thinking, curiosity, and learning? Logan shares a candid perspective, shaped by his experiences as a parent, technologist, and podcast host, raising questions about how tools should support rather than replace thinking.

If you are trying to make sense of AI PCs, local versus cloud compute, or how teams are really reshaping workflows with AI hardware today, this conversation offers grounded insight from someone living at the center of it. Are we designing systems that genuinely empower people to think better and build faster, or are we sleepwalking into decisions we will regret later? How do you want your own AI workflow to evolve?

Useful Links

[00:00:03] What actually makes a computer an AI PC? Especially once you strip away the marketing and all the buzzwords. Well, I've got an answer for you. Well, I haven't, but my guest does. His name's Logan Lawler. He's from Dell Technologies, and he leads AI strategy for the Dell Pro Precision workstation portfolio. But today, we're not going to be talking about selling things. We're going to unpack what matters when choosing hardware for local AI.

[00:00:28] And that means we will demystify everything from CPUs and GPUs to NPUs, and why so many people are feeling confused right now about that AI PC conversation. But we will be your guide and show you how teams are deciding when to run AI locally, in the data center, or in the cloud. What drives those different decisions? And if you've been trying to make sense of AI hardware without overspending or underspeccing,

[00:00:55] today's conversation will hopefully help you cut through the noise of exactly what makes an AI PC useful in the real world. But you don't want to hear me rambling on setting the scene. So buckle up and hold on tight as I beam your ears all the way to Austin, Texas, where my guest is waiting to demystify all this and much more. So thank you for joining me on the podcast today, Logan. Can you tell everyone listening a little about who you are and what you do?

[00:01:26] Yeah, sure, Neil. Appreciate you having me on. So, Logan Lawler. I have been at Dell for longer than I probably care to admit. It's coming up on about 20 years, and I've done a lot of different things at Dell. But currently, the job that I'm in, and why I'm on the podcast today, is I lead AI strategy for our Dell Pro Precision line of workstations within Dell.

[00:01:49] And there's so much I want to talk with you about today because I'm fortunate I get to go to the US a lot for so many different tech conferences. And right now, it seems every single one of them seems to be pushing AI PCs. And yet I know I do see many users on Reddit that feel somewhat indifferent to that. And I suspect it comes down to a missing education piece. So to begin with today and set the scene, let's start there and make sure nobody gets left behind.

[00:02:15] So from your perspective at Dell Technologies, what actually makes a computer an AI PC beyond the marketing? Which specs truly matter? Is it CPUs, GPUs, memory, NPUs? So many acronyms. Tech loves a good acronym. But let's start from there. What makes a good AI PC? So you asked the most complicated question. You basically asked, you know, what is the reason for life in the very first question. So let me kind of break it down this way.

[00:02:43] And it's not going to be a long-winded answer, but just for those that aren't super technical, a couple of terms you said. So you have the CPU, think your Intel central processing unit, which has been around forever. You have your GPU, the graphical processing unit, think graphics card, like an NVIDIA RTX Pro Blackwell GPU. And NPUs are something newer on the market, as of maybe, let's call it two years ago. It's a neural processing unit.

[00:03:13] So it shares some dedicated RAM of the system to offload AI tasks. Now, AI PCs as a category, I'm not going to say it's kludgy, but it's one of those things, if you've ever taken the LSAT or done one of those word-riddle type questions. The way you need to think about it is that all AI workstations are AI PCs, but not all AI PCs are AI workstations.

[00:03:42] And what I mean by that is that anything with an NPU or a GPU is considered an AI PC. Where an AI workstation differs, and that's kind of where my expertise lies, obviously I know a lot about NPUs as well, is the GPU. NPUs are very low power, and it's not that there's little they can do, but the amount you can do with them is still growing.

[00:04:10] Where GPUs really come in, when it comes to AI, is that they're ultimately the gold standard. So for example, I think there's a lot of confusion in the market, and I completely understand why Reddit maybe has that sentiment, because they're usually a much more technical audience. AI PC, oh my God, it's going to do all these great things. But the key is, and this is what I want everyone to take away, if you take away one thing from this, it's that NPUs are not GPUs, and they have underlying software.

[00:04:40] There are SDKs that power them, right? Like NVIDIA's is obviously CUDA. The NPU is kind of the Intel architecture on the Intel side, or it could be AMD or whoever. But if the software or the workflow you're doing is not accelerated by those SDKs, whether that's the GPU or the NPU, then getting an AI PC does absolutely nothing for you.

[00:05:02] So no matter which way you go, workstation or AI PC, you need to think about what software you're using, and do some research. Or what I tell people to do is open up the task manager on your computer, run that program, and see what's actually being utilized. Is it the NPU? Is it the GPU? Is it the CPU? Is it the RAM? Try to understand what their code base is written against. Because if you do that, it'll tell you what you need. And most of it is GPU.
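For readers who want to go one step past Task Manager, the same check can be scripted. This is a minimal sketch of my own, not a Dell tool; it assumes only that NVIDIA's `nvidia-smi` CLI may be on the PATH, and it reports None rather than failing on machines without it.

```python
# Minimal sketch: check whether an NVIDIA GPU is visible and how busy it is,
# mirroring the "open Task Manager and see what's being utilized" advice.
import shutil
import subprocess

def gpu_utilization_percent():
    """Return the first GPU's utilization (0-100) via nvidia-smi, or None."""
    if shutil.which("nvidia-smi") is None:
        return None  # no NVIDIA CLI: the workload can only be on CPU/NPU
    try:
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        return int(out.splitlines()[0].strip())
    except (subprocess.CalledProcessError, ValueError, IndexError):
        return None  # CLI present but no usable GPU reading

util = gpu_utilization_percent()
if util is None:
    print("No NVIDIA GPU reading; watch CPU/NPU usage in Task Manager instead.")
else:
    print(f"GPU utilization: {util}%")
```

Sampling this in a loop while your application runs shows whether its code path is actually GPU-accelerated, which is the "know your acceleration path" homework Logan describes.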

[00:05:30] But there are growing pieces of software, like Microsoft Copilot, right, that can leverage the NPU. So, long-winded answer: it's extremely confusing. But the takeaway is, know what software you're using, understand which acceleration path you need, GPU, NPU, or CPU, and then you can't go wrong. And it's so important that you mention that. I think very often people think of a one-size-fits-all. They'll speak to someone in the office who'll say, well, you'll need this, when actually they don't, because it's just a browsing machine.

[00:05:59] And we might have many people listening, developers, researchers, or even just curious professionals, choosing hardware for local AI. And you must see so many bad takes on the internet, whether it be Reddit or a LinkedIn news feed. What are the most common misconceptions or myths that you see that might frustrate you? And with the people that you speak with, where do they often overspend or underspec? Because the basics are not that well understood. Let's clear some of these things up for them. Yeah, for sure.

[00:06:28] I mean, let's think about this. Some of the misconceptions, right, and I kind of answered this in the previous question, but a lot of the misconception, and I'll target this at the general consumer, not a developer, not someone who's super technical, is they think they're getting an AI PC with an NPU and it's going to change their life. And in several years, when we have software that is written to leverage that piece of silicon from an acceleration standpoint, then absolutely, it will change your life. I think that's kind of one.

[00:06:58] That's kind of one misconception. Two is that, depending on what you're doing, and this is kind of an NVIDIA thing, you'll see a lot, especially on Reddit, people think, oh yeah, I have a gaming card, right? And those absolutely work for a lot of AI stuff. And if you're tinkering around and you're not doing this professionally, it's just like, hey, I'm at home, I have a gaming PC and I want to use ComfyUI. Awesome, have fun.

[00:07:27] But, and now this is more targeted at professionals, a lot of people, and it's not necessarily about cost, don't understand the difference between GeForce cards on the NVIDIA side, the 5090s, 5080s, 4090s, versus the RTX Pro cards. And the difference is the amount of VRAM that you get, right? The amount of power, the supportability, et cetera. And then also what that card can actually be used for.

[00:07:55] So the way that I would kind of say it is that when you're thinking about graphics acceleration for AI, hey, at home, use your GeForce. You've already got it, test it out. But if you're moving into a professional environment, you definitely need to be on a workstation-class system, like our Dell Pro Precision machines, and also have NVIDIA RTX Pro Blackwell cards.

[00:08:18] I'm glad you mentioned them because I know you spend a lot of time working with NVIDIA RTX Pro GPUs inside Dell Precision and those Pro Max systems. So, again, for those people listening, how do these GPUs change what teams can realistically run locally, from model development to inference, compared with, let's say, relying entirely on the cloud, which many people listening will be doing? Yeah. So, let's talk about the cloud for a second.

[00:08:44] The cloud, and I'm not here to knock it, I think the cloud has its right place for the right thing. But let me give you a hypothetical. Hey, you are tinkering in the cloud and you are like, wow, this is fun, I am running these models, et cetera. Tinkering in the cloud for nonprofessional use is one thing. But once there's revenue assigned to that cloud usage, where you're getting something out that a customer is billing against, kind of a cost-of-goods-sold thing, the cloud becomes very expensive.

[00:09:14] What NVIDIA did from their Ada generation to their Blackwell generation GPUs, and I call it game-changing, others might not, is the amount of VRAM. VRAM at the end of the day is, I mean, it's what makes the juice run, right? It's what you load your model into so the tensor cores can work, running inference, fine-tuning, et cetera.

[00:09:40] Well, gen over gen, from Ada to Blackwell, NVIDIA doubled the amount of VRAM, taking it, in the highest card, from 48 to 96 gigs. So if you know anything about the cloud, or what servers you're using, or you have access to a data center, I mean, 96 gigs rivals server-level cards, right, the ones in a server farm, multiple stacks and racks. So basically, NVIDIA made it where you could put a small server, more or less, on your desk.
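To see why the jump from 48 to 96 GB matters, a rough rule of thumb (my numbers, not Logan's) is that a model's weights need roughly parameter count times bytes per parameter of VRAM, plus headroom for activations and context. A quick back-of-envelope sketch:

```python
def vram_estimate_gb(params_billion, bytes_per_param, overhead=1.2):
    """Very rough VRAM need: weights x precision, plus ~20% working headroom."""
    return params_billion * bytes_per_param * overhead

# FP16 weights take 2 bytes/param; 4-bit quantized weights take 0.5 bytes/param.
print(vram_estimate_gb(70, 2.0))   # ~168 GB: a 70B FP16 model overflows 96 GB
print(vram_estimate_gb(70, 0.5))   # ~42 GB: 4-bit quantized, it fits with room
print(vram_estimate_gb(8, 2.0))    # ~19 GB: an 8B FP16 model fits mid-range cards
```

The real overhead varies a lot with batch size and context length, so treat these as order-of-magnitude figures; the point is that doubling VRAM moves whole model classes from cloud-only to fits-on-the-desk.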

[00:10:08] And it has changed the game, because at the end of the day, you can have two older-generation GPUs that have 30 gigs, but you can't stitch those together, right? You want to be able to run that in one parallel process versus running separate things, right? And not every application is set up to take advantage of multiple GPUs. But integrating that into one, I think that really changed the game. And yes, I'm not going to lie, those cards aren't cheap.

[00:10:33] But for example, I have a company I work with, I won't say their name, but they were developing this application, which is awesome. And the developer, very talented, very creative, went to the cloud for simplicity and all of that, because it was easy to scale. And it got to the point where they were getting some paying customers, but he's kind of a tinkerer, right? He wanted to develop his application, make things better, run experiments, kind of run a sandbox. Well, he did that.

[00:11:01] And then when he got his bill, he was like, oh my God, I can never do that again, because the bill cost more than buying just the card, right? So that's the thing that you need to think about, whether you're thinking locally or cloud: think holistically. What are my costs? Am I trying to do this as a CapEx expense or an OpEx expense? How many people are going to be accessing this? How do I want to deploy? All those things need to be considered.
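That CapEx-versus-OpEx trade-off can be sketched as a simple break-even calculation. The figures below are purely illustrative assumptions of mine, not Dell pricing or any cloud provider's actual rates:

```python
def breakeven_hours(card_price_usd, cloud_rate_usd_per_hour):
    """Hours of rented cloud GPU time that cost as much as owning the card."""
    return card_price_usd / cloud_rate_usd_per_hour

# Assumed, illustrative numbers: a high-VRAM workstation GPU at ~$9,000 versus
# a comparable cloud GPU instance at ~$4/hour.
hours = breakeven_hours(9_000, 4.0)
print(hours)        # 2250.0 rented hours to match the purchase price
print(hours / 40)   # ~56 forty-hour weeks of steady use
```

A tinkerer running experiments around the clock blows past that break-even quickly, which is exactly the surprise bill Logan describes; a team with bursty, occasional usage may still be better off renting.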

[00:11:26] But I will say local AI with, you know, an RTX Pro GPU with 96 gigs will rival what most developers or data scientists could ever possibly utilize in a data center. And it can sit on top of your desk. And throughout both of our careers, we've seen so many recurring trends. Everything seems to come full circle, from data centers going to the cloud and then back again.

[00:11:51] And now with local AI, there's this clear shift toward doing compute closer to where the data lives. But from what you're seeing, what is ultimately driving that move? And how are organizations balancing performance, cost, and data control as they slowly begin to rethink where AI workloads should run? I think we've been debating this for three years now. What are you seeing? I think the debate continues. Yeah, I think the debate continues. I really think it comes down to the size and scale of organizations.

[00:12:21] But I'll kind of break it down, because I work with smaller startups, I work with medium-sized businesses, all the way up to some of the Fortune 500 companies, right? Fortune 500 companies, at the end of the day, I think are making the change purely because of security, and they're starting to get more into AI. I'm not saying they're not using the cloud, but it is more likely that they're moving to the data center or local compute, kind of at the desk side, right?

[00:12:47] Medium business, at the end of the day, is really a function of cost. It's like, what makes sense for me? But small businesses are really, what can I get up and running as quickly as possible, right? What can I make happen quickly that's not going to break the bank?

[00:13:13] And here's the big thing that I've seen: whether it be latency or cost or security or a whole host of things, whatever your reason is for moving from the cloud back to the desk side, those decisions are now being made after the initial decision was made. And here's what I would challenge the people that are in charge of making these decisions, or thinking about these decisions, to do.

[00:13:40] So, I see a lot of companies rush to, well, man, let's just do this. And then it's like, hey, to unwind everything takes forever. So I would say before making a decision, really spend that extra time thinking about, hey, where is this going to grow in my organization? How am I going to deploy, kind of scale across maybe a huge company? How many developers do I have? Start thinking about those things first, and take some time to make one decision, versus pulling back on the decision later.

[00:14:09] And the other thing I've seen, you've got me thinking now, is hybrid. I've also seen a lot more hybrid-type solutions where, for example, hey, the development piece and the sandboxing is local, but then deployment, because maybe it's software as a service or some kind of AI app that's delivered, it's scalable, it's predictable, customers are paying, might be in the cloud, right? So it really depends, but I'm starting to see more hybrid workflows, not necessarily a complete jump back to, oh my God, we can't be in the cloud anymore.

[00:14:39] It's moving more hybrid, I think. And I think you've got somewhat of a unique vantage point here, because you're someone that's got visibility into how teams are actually using GPU-powered workstations and hybrid cloud storage environments every single day. So just to bring to life some of the things that you see here, you don't have to mention any names, but are you able to share any examples of how these setups are supporting AI workflows in practice, in real environments rather than just in lab conditions? Yeah.

[00:15:08] I mean, for example, I can't name any names, of course, but I'll give you kind of a perfect example. Are you asking about the actual workflow itself? Yeah, sure. No problem. So there is a company that we work with that I would say is kind of small to medium, right? And it kind of follows the same pattern. They have a whole host of data scientists, plus the software devs, right? All of their compute is local.

[00:15:37] So that's on kind of the Dell Pro Max with GB10s, and the work is being done on some Dell Pro Precision desk-side workstations. Basically, think of all the sandboxing and code writing, compiling, all the machine learning tasks, all the deep learning tasks. Compiling, testing, sandboxing, all of that is done locally, right?

[00:16:04] And the choice was made because the CEO was like, hey, we want to build the best dang thing that we possibly can, right? And I won't tell you who it is, but it's an AI agent builder app that I think is one of the best. And they're like, we want to build the best thing, and times are going to change rapidly, and we don't want to be stuck in the cloud. So they made the decision to invest in hardware for all of their resources. It's not a huge company, it's about 25 people, but local resources for that.

[00:16:30] But they have quite a few paying customers, and they move that up to the cloud for deployment. Because there, the customer pays a fixed cost, and all that can be figured out. You can figure out how many times they access it, all the API calls and all this stuff. They can basically do the math and say, hey, I can price my software based upon this. This makes sense for me, because I'm not being charged unpredictably, and I don't have this huge server that I have to take care of if no one buys my application.

[00:16:57] So, I mean, that's kind of a real-world example of what they do. And then they have the connecting fabric across NVIDIA AI Enterprise, which makes it really easy for everyone to work collaboratively on their software solutions and stuff like that. So that's one example that I've seen from a pretty well-known company. I wish I could say the name, but I can't. Intrigued. Incredibly cool.

[00:17:22] As AI workflows inevitably become more distributed across edge, on-prem, and indeed cloud, for the people listening here, those business leaders and IT teams, what new challenges will inevitably be created for IT leaders who are still trying to keep systems usable for developers, without making them fragile or overly complex, while being the guardian of the network at the same time? Yeah. I mean, we're seeing that more and more, right?

[00:17:51] Because it's not just the developer that wants access to it, right? It's that person in marketing who wants to create some AI images, right? There are so many more. I think that's the complexity piece, if you're in kind of an ITDM role or you're an IT leader, because you have to think: AI is not certain people's role anymore. It's becoming everyone's role.

[00:18:16] It's not necessarily that they're AI developers, but even your marketing folks, your procurement folks, everyone is now running some sort of AI inference on their system, right? And you might not even know that you're doing it, depending on the application. And that's good AI, when you don't even know that it's happening.

[00:18:34] But what they need to think about, I think, for IT leaders, to make it easy: hey, if I have Jimmy or Patty and they're in marketing, start thinking about, to make it simple, what am I going to get them? They're new to the company. If AI has really accelerated that much in the last two years, what is that going to look like in three years?

[00:19:02] Like, I don't want to buy a system that is completely out of date. So really, what they can do is try to future-proof themselves as much as possible, to make sure it can handle today's workloads and the workloads of the future. That is probably the easiest thing to do. Because otherwise, if you're not giving any sort of NPU or GPU compute to someone locally on the system that they're using as their daily driver, they're going to come to the data center or they're going to ask for the cloud.

[00:19:28] And that's where resources come in, because would you rather buy a massive data center, have a huge cloud bill, or spend a couple thousand dollars for a GPU in a system that you're already going to buy? Yeah. And before you joined me on the podcast today, I was doing a little research on you, trying to find out a little bit more. I didn't go the old-school method of Googling and getting loads of sponsored results, et cetera. I asked ChatGPT, and straight away. You did not. I did. Did you really? Okay. And guess what? It told me you're a podcaster too.

[00:19:58] You host the Reshaping Workflows podcast, right? Yeah, that's true. I do. I mean, I've done a podcast in a previous life. You know, I've been a lifelong cigar smoker, so I had one back in the day that was very popular. We won't talk about that. But yeah, about a year ago, and I've been told not to brag, but I have a good podcast voice. I don't know if that's true or not. I can't stand the sound of my own voice, so I never go back and listen to my own episodes. But yes, I am.

[00:20:25] So the idea of Reshaping Workflows: every company kind of has a podcast, and I wanted to do this one differently, where it wasn't just a shill of our Dell Technologies products and our Dell Pro Precision workstations or RTX Pro Blackwell GPUs. It was the industry leaders. It was actual practitioners across all the major verticals that workstations serve, right, from engineering to AI to manufacturing to energy, healthcare, life sciences, kind of the whole thing.

[00:20:53] Have them come on, talk about their workflows, talk about what they're doing to inspire people, but also to talk about how workstations and local compute power their workflow. So in a year, it's been kind of a wild journey, in the amount that the podcast has grown and how much I'm recording. I don't know when this one is actually releasing, but in early February, I'm going to be recording in one week.

[00:21:18] I think it's 18 episodes just to get it knocked out because I just have to schedule it in blocks. But yeah, no, I love it. And I love bringing people on and learning. And I think it's one of the ways that I learn as well, being able to kind of talk to people. But yeah, it's pretty cool. It's a pretty cool feeling when, you know, you're at a Dell event or an industry event and you talk, you know, on stage or whatever. And someone comes up and goes, I thought I knew your voice, but I didn't know where. And then they're like, you're that guy. I'm like, yeah, I'm that guy.

[00:21:49] It's so refreshing to hear Dell approach it this way, because with almost any corporate podcast, nobody wants to listen to it. You might like running, but you're not going to want to listen to a Nike podcast talking about running shoes, for example. And it's similar with Dell: if it's just about selling particular equipment, there's no value in there. And what I find refreshing about what you're doing there is these conversations with real builders, real practitioners, that listeners can learn from.

[00:22:14] And I'm curious, if you were to put all these episodes into a big melting pot, are there any patterns that have surprised you most about how people are really adopting AI hardware and software together? Maybe even a few stories where you thought, oh, I never thought about it that way. What kind of stories come out of that? Yeah. I mean, honestly, you know what?

[00:22:37] I am going to do that, because I do have a couple of AI workflows that I created where I can very easily strip the audio out and then put it in a RAG database. So I will do that, just out of morbid curiosity. I think one of the craziest things that we've ever discussed on the podcast was basically, is AI taking away our cognitive ability to think? Yeah. And you can go listen.
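The "strip the audio out and put it in a RAG database" workflow Logan mentions reduces to transcribing, chunking, indexing, and retrieving the most relevant chunk for a question. A production pipeline would use an embedding model and a vector store; the toy sketch below stands in bag-of-words cosine similarity just to show the shape, and every name and sample string in it is mine, not from the show.

```python
# Toy retrieval step of a RAG pipeline: score transcript chunks against a
# question and return the best match. Real systems swap `vectorize` for a
# learned embedding model and `chunks` for a vector database.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, question):
    qv = vectorize(question)
    return max(chunks, key=lambda c: cosine(vectorize(c), qv))

chunks = [
    "NPUs handle low-power background AI tasks on modern laptops",
    "VRAM capacity decides which models fit on a single GPU",
]
print(retrieve(chunks, "how much VRAM does my model need"))
```

The retrieved chunk would then be pasted into the prompt of a language model, which is what lets a "transcripts database" answer questions about past episodes.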

[00:23:05] It was the episode I recently did with The Neuron. But basically the idea was, and I'll age myself here, I'm 42, I'm an elder millennial. I'm losing my eyesight, it's terrible, I can't read up close anymore. But when I was a kid, when I went to research something, I had to ask, or I had to think about it, or I had to go find an encyclopedia. But I also had the age of the internet, where I could learn things and watch documentaries and all of that. And my daughter's generation, she's 11, it's just, there's no thought.

[00:23:34] And we had that. And that was the thing, I think, that was not most shocking, but most eye-opening to me. My daughter will literally, because I want to expose her to all the AI stuff, right? When she has a question, when she has a problem, normally, if it was just her and I talking, I like to approach her: hey, what do you think? How would you approach this?

[00:23:58] What she does is just go, hey, ChatGPT, can you tell me how I should do this? And I just think that that is fundamentally very scary, and fundamentally probably the biggest issue with AI. And it's not one that we'll see today, but we might see it 20, 30 years from now. Which, to me, is: are we going to take away our ability to cognitively think? Yeah. It's such a big problem.

[00:24:26] I consider it almost like the AI paradox, isn't it? Because to survive in an AI world, you need critical thinking to stand out from the crowd more than ever. But of course, overuse of AI removes that critical thinking. It's a real paradox. It is. And what I would say is that there can be rules. I hate regulations, but there can be things and tools where it's like, hey, here's an age-safe mode, something very simple, right?

[00:24:53] Like, for the AI, if you ask a question and say, hey, what should I do? It comes back with, hey, what do you think you should do? Or, hey, here are maybe two different paths, tell me why you think one is better. Do you know what I'm saying? Something very simple to add that I think would ward off a lot of potential problems, in my humble opinion. A hundred percent with you. And if we were to look ahead, looking at AI PCs as they continue to evolve, for them to

[00:25:20] become genuinely useful rather than another buzzword, what do you think vendors, employers, and educators need to be doing differently to help those everyday users, the people listening, understand and trust the technology that they're being asked to adopt? Yeah. I mean, it's a great question. It is going to be the answer to your question, but it's probably not the answer you're looking for: I think fundamentally the technology is being adopted. I think people do find it generally useful.

[00:25:49] The thing is, and I'm going to come at this from an NPU standpoint, because that's where most people think about AI PCs, there has to be a fundamental shift in the landscape. I'm not going to get into the complexity of writing code, but I'll give you an example. If you remember back, I don't know, 20 years ago, or was it 25 years ago, Blu-ray versus HD DVD. It's kind of the same thing, but fundamentally very different.

[00:26:18] And there was a battle royale for which one would take it, and it was Blu-ray. I'm not saying NPUs and GPUs are in that fight, but it is similar in the sense where a family is only going to buy a Blu-ray or an HD DVD player. It's kind of the same thing: a company who is writing, you know, developing any AI product, let's say cloud coach, unless they're a big company or have tons of funding,

[00:26:44] it takes time to write that software to be accelerated by an NPU versus a GPU, and they have to make a choice. Microsoft said, hey, NPU, great. Others say GPU. So really what I would say, to make sure that you can enjoy any AI acceleration, is look for a system that has an NPU, but then also get yourself a GPU, because you have protected

[00:27:11] yourself: no matter what software comes along, you're going to be able to run those AI workloads and inference, no problem. You've set yourself up for the future, and no matter which way the market goes, whether it's NPU or GPU, and I personally think GPU, because NVIDIA just got such a head start, you're protected, or your investment's protected. Fantastic advice. And thank you so much for sharing your insights with me today.

[00:27:39] Now it's time to have a little bit of fun with me, because as a guy speaking to me from Austin, Texas, at one of the biggest tech companies in the world, host of your own podcast, working in AI, you must've picked up a few interesting war stories in your career. So is there a funny or interesting or insightful story that you can share that you've picked up along the way? Yeah, I mean, I've got a lot. I've traveled all over. There's one from India I won't share, but I've

[00:28:08] found myself falling into a sewer in India. I've been shaken down by customs in different countries. A lot of things have happened, but I'm going to tell not a funny story. Well, it is funny, but it's an insightful story that I hope people will take something away from. So I was pretty young, probably like 28-ish. Still pretty young in my career, five or six years in, a young professional.

[00:28:37] And I was doing a social media marketing role, a social media listening role, where at the time Twitter had kind of just started, Facebook was taking off, and there were tools out there that could go out and scrape social media, much like Google, and basically tell you what people were saying. Right. So within Dell, I was on that team and reported to a lady who reported to, so, my skip-level boss at the time. She was only a VP then, but ultimately became the CMO of Dell. You can research to figure out who it is and all of that.

[00:29:06] So I reported into that team, which was a great experience. But the story is that we had a readout to her, and there were a couple of different listening tools we were looking at, and, you know, the value. One of them was kind of a homegrown thing versus a purchase from a company called Radian6 that ultimately was bought by Salesforce. They had ROI and all this kind of stuff, or at least perceived ROI. So I went in full bore, right?

[00:29:34] There wasn't a lot of ROI stuff, but I went in full bore on why this is amazing and why it's valuable to listen to your customers and all this stuff. And I remember so vividly, and it took me so by surprise: I'm going through this, I think I'm making like the best pitch ever, and she looks at me and says, Logan, this is great. But so freaking what? And I was like, whoa. Okay. And I go, hey, I'm trying not to use her name right now, but I was like, hey,

[00:30:03] like, what do you mean? And she's like, show me the money. Oh. And all I could think about at that time was Jerry Maguire, right? And obviously I'm so young, and she's so powerful, so educated, all of this, and I feel like an idiot. But afterwards she had a really great conversation with me and said, listen, I understand what you're trying to do, but in the corporate world, it's all about dollars and cents.

[00:30:32] And if you cannot connect your activities to some sort of revenue-generating activity or some sort of return, Logan, your ideas and thoughts may be well thought of, but they will not be the ones that ever get implemented. And from that point on, it was just such an impactful lesson for me that every project or program I work on at Dell,

[00:30:58] or any initiative that I have, I always look for that angle and try to quantify it: not just is it worth my time, but is it worth the company's time? Is there a return? What's the potential impact? And I've always done that, and I think it's served me pretty well. And I always tell that story because people seem to want to hear it, because it's kind of funny, but also because you can always level yourself up at work by being able to show an ROI or return story, and it will help you get much

[00:31:28] farther in your career. I can promise you that. What a fantastic story. I absolutely love that. And finally, before I let you go, I think one thing that unites you, me, and every single person listening is that there's real pressure on us all to be in a state of continuous learning. So I've got to ask: where or how do you self-educate? What keeps your curiosity alive and helps you stay open-minded? Because it's more important than ever, and keeping up to speed with change is incredibly hard, but you're right at the heart of this space.

[00:31:58] People are looking at you to lead the way. So how do you self-educate? Great question. So there's a newsletter, well, actually there are a couple. I subscribe to TLDR's AI newsletter and The Neuron, and you can Google them, you'll find them. I subscribe to those newsletters, and I literally block my calendar for about the first 15 minutes of the day just to read them. And that's the thing: you have to prioritize this learning time.

[00:32:28] Because without you prioritizing it, well, if everything's a priority, nothing's a priority, right? And this is a priority for me. So I mark it, and I do it religiously. So that's one thing, going out and finding the sources. But here's the thing that I think most people miss about education: never be afraid not to know the answer. Be afraid of what happens when you don't go try to find the answer.

[00:32:54] So there will be meetings where I'm in with some developers or customers and they'll say something that I don't know the answer to. Well, I try to fake it till I make it the best I can in the meeting, but afterwards I've written it down and I'll go Google it. And then I'll second-level Google it, and I'll third-level Google it, where I try to learn everything about it, just to educate myself, because it builds credibility. So if you're trying to stay in the space with AI, definitely. I mean, you could always listen to Reshaping Workflows, but I won't plug

[00:33:22] that. Let's do those two newsletters. And when you hear something, take 10 seconds, use your smartphone, it's connected to the world's collective knowledge, Google it, understand it. And if you do that and make it a habit, you'll become a much more well-educated person. I absolutely love that. Fantastic. I was speaking to someone a few weeks ago and they did something very similar, but on top of that, when they're on, I don't know, a three-hour car journey, they would just have a conversation with OpenAI, just asking questions. Yeah. Same thing.

[00:33:52] Yep, exactly. And I will add links to the Reshaping Workflows podcast and those newsletters, but for anyone wanting to find out more about everything we talked about, from GPUs and NPUs to the Dell Precision and Pro Max systems, where would you like to point everyone listening? Yeah, I will put the link, but if you just click on the link below and listen to Reshaping Workflows, check it out, I appreciate it. And I love connecting on LinkedIn. I pride myself on answering every message that I get, unless you're a

[00:34:20] complete spammer, but I will respond. Pretty simple: I'm the only Logan Lawler on LinkedIn, so just message me on LinkedIn. Let's connect, let's talk; I love conversations about AI. If you ever see me come up, introduce yourself, and I'd love to have a conversation with you. Awesome. Well, I'll have links to everything that you mentioned there. I'm also going to add you to my list of cool guys I want to have a whiskey with at a bar in Austin. So you're my Austin guy now. Next time I'm in town, we will make something happen. But more than anything, just thank you for joining me today. Of course.

[00:34:50] Well, I really appreciate the time, Neil. Thanks for having me on, and yeah, I'll see you in Austin. If there's one thing this conversation made clear today, it's that understanding how AI workloads actually run matters far more than chasing labels, trends, or high specs. From local GPUs to hybrid workflows and the growing role of critical thinking in an AI-heavy world, my guest Logan offered such a refreshingly practical

[00:35:19] view of where this space is heading. But I'd love to hear what stood out for you and your thoughts on AI hardware, whether on a personal basis or inside your organization, and what questions you are still wrestling with after listening. So I urge you to go and check out all of the links that Logan mentioned, including his podcast. It's brilliant, with so many builders and practitioners on now. I think you'll get so many big takeaways from those episodes, so please listen to that as well.

[00:35:47] If you want to contact me, just go to techtalksnetwork.com: send me an audio message or a written message, connect with me on socials, work with me, browse through 4,000 interviews, whatever it is, go over there and check it out. So thank you to Logan for joining me today, and a big, massive thank you to each and every one of you for not only listening, but listening right to the end. Thank you so much. That's it for today. Speak with you tomorrow. Bye for now.

[00:36:15] Bye for now.