How safe are we in the age of AI PCs? As AI technology becomes increasingly integrated into our everyday devices, the promise of faster, more personalized computing comes with a critical question: Are we truly prepared for the security and privacy challenges that follow?
In this episode, I sit down with Eric Shulze, VP of Product Management at Trend Micro, to explore the rapidly evolving landscape of AI PCs. With chip makers in a race to innovate, these new devices promise unparalleled speed and privacy by running generative AI locally on specialized neural processing units (NPUs). But as AI's role expands, so do the concerns over data security and privacy.
Eric shares his insights on the potential risks associated with on-device AI, including the threat of compromised data and the spread of misinformation. We delve into the steps consumers can take to protect themselves, from choosing reputable vendors to implementing additional security layers. Eric also reveals how Trend Micro is stepping up to the challenge, with plans to roll out cutting-edge tools to safeguard AI PC users.
But it's not all about the risks. We also explore the excitement surrounding AI innovation—how it's transforming personalization and accessibility in tech, and why ethical considerations must be at the forefront of this revolution. Plus, Eric offers a unique glimpse into his own career journey, from studying zoology and working as a dolphin trainer to becoming a leader in the tech industry.
As AI PCs move from concept to mainstream reality, what do you need to know to stay safe and informed? Tune in to find out, and join the conversation on how we can balance innovation with privacy in the digital age. What are your thoughts on the future of AI PCs? Let us know!
[00:00:01] [SPEAKER_01]: Are you ready to step into the future? A future where your PC not only understands you, but also
[00:00:08] [SPEAKER_01]: anticipates your needs. Well as AI continues to revolutionize our digital experiences,
[00:00:15] [SPEAKER_01]: the emergence of AI PCs brings forth a blend of innovation and complexity.
[00:00:22] [SPEAKER_01]: So today I'm joined by Eric Shulze, Vice President of Product Management at Trend Micro.
[00:00:30] [SPEAKER_01]: Together we're going to try and unravel how AI is reshaping the tech landscape right under our fingertips.
[00:00:36] [SPEAKER_01]: With AI PCs integrating generative AI more seamlessly than ever, what do you need to know to navigate
[00:00:42] [SPEAKER_01]: this new terrain securely and confidently? The costs of hosting a daily show for 140,000 monthly
[00:00:50] [SPEAKER_01]: listeners can be significant and I'd like to take a moment to thank those who make it possible
[00:00:55] [SPEAKER_01]: for me to keep delivering this content every day to you all. I also want to talk about the fact that
[00:01:00] [SPEAKER_01]: legacy DRM failed to securely enable external collaboration on sensitive files and I think
[00:01:07] [SPEAKER_01]: it's important to recognize that organizations in this digital age face a somewhat risk-trust
[00:01:13] [SPEAKER_01]: contradiction. Yep, they must share content with untrusted third parties while also protecting
[00:01:18] [SPEAKER_01]: that data. So it's time for a more modern DRM solution, one that solves this dilemma but
[00:01:24] [SPEAKER_01]: without compromising security and productivity. So collaborators, imagine editing files externally
[00:01:30] [SPEAKER_01]: without losing control. Stream zero latency video renditions to authorized users but without any
[00:01:38] [SPEAKER_01]: actual file transfers needed. The co-author can view it remotely while you retain full ownership.
[00:01:44] [SPEAKER_01]: Ultimately they never leave your environment so you can say goodbye to data leakage risks
[00:01:49] [SPEAKER_01]: and experience seamless editing across all file types, not just native applications without any
[00:01:55] [SPEAKER_01]: plugins required. So say goodbye to deployment headaches, file transfer risk, collaboration
[00:02:00] [SPEAKER_01]: barriers and all those productivity constraints and experience a more modern way to collaborate
[00:02:05] [SPEAKER_01]: on sensitive content without sacrificing control or security, and you can do all that by visiting
[00:02:10] [SPEAKER_01]: kiteworks.com to get started and with my thank yous out of the way I'm now officially
[00:02:16] [SPEAKER_01]: excited to introduce you to today's guest. Well, buckle up and hold on tight as I beam you
[00:02:22] [SPEAKER_01]: all the way to Taipei, where today's guest is waiting to join me.
[00:02:28] [SPEAKER_01]: So a massive warm welcome to the show, Eric. Can you tell everyone listening a little about
[00:02:34] [SPEAKER_00]: who you are and what you do? Yeah, thanks for having me Neil. My name is Eric Shulze.
[00:02:39] [SPEAKER_00]: I am the VP of Product Management for Trend Micro covering our consumer
[00:02:44] [SPEAKER_00]: division of products so these would be things that your listeners and our customers will run on their
[00:02:50] [SPEAKER_01]: home or personal devices. It's a pleasure to have you join me on the podcast today. I know it's
[00:02:55] [SPEAKER_01]: quite late where you are and you've stayed on to talk with me but just to set the
[00:02:59] [SPEAKER_01]: scene for our conversation today there's so much talk of AI at the moment and hype around it.
[00:03:05] [SPEAKER_01]: I was trying to look beyond that hype. So can you begin by explaining what AI PCs are and how they
[00:03:10] [SPEAKER_01]: integrate generative AI into the everyday tech landscape? Something we're hearing more and
[00:03:15] [SPEAKER_01]: more about, I've seen a few stories about Apple etc. and their next PCs and the processors behind
[00:03:20] [SPEAKER_01]: them but can you tell me about what you're seeing and what that means? Yeah it's a great question.
[00:03:25] [SPEAKER_00]: So let's start with like you said the definition or what we consider an AI PC
[00:03:30] [SPEAKER_00]: depends on what definition you use. There really isn't a hard set one in the industry that I found.
[00:03:36] [SPEAKER_00]: Each major company kind of has one that differs but the commonalities between them are
[00:03:43] [SPEAKER_00]: relatively simple. It's a PC or a personal computer that has a specialized processing unit
[00:03:50] [SPEAKER_00]: for doing AI-type workloads. You'll commonly see these referred to as NPUs or neural processing
[00:03:58] [SPEAKER_00]: units, meaning that they are very good and very efficient at running neural network based
[00:04:07] [SPEAKER_00]: workloads which is what a generative AI or LLM is in general. So they integrate generative AI by
[00:04:17] [SPEAKER_00]: running it locally on the device instead of having to send all that data to the cloud
[00:04:23] [SPEAKER_00]: and why that matters to a lot of customers and consumers is a couple of reasons. One is speed.
[00:04:30] [SPEAKER_00]: When things are local you don't have latency or cost of going to the cloud. It's also more private
[00:04:36] [SPEAKER_00]: because the data is not leaving your device. If it's done correctly it's all done locally so
[00:04:42] [SPEAKER_00]: you have that, and I know we'll probably dig in a bit deeper there in a bit. And then the third
[00:04:47] [SPEAKER_00]: reason is utilization: because you have it locally, you have the computing power,
[00:04:52] [SPEAKER_00]: so why not use it?
[00:04:56] [SPEAKER_01]: You're talking to me today in Taiwan and of course that is the chip central of the universe right
[00:05:03] [SPEAKER_01]: now and with the competition among big chip makers to create that best AI PC are there any potential
[00:05:10] [SPEAKER_01]: security or privacy concerns that end users should be aware of when it comes to an on-device AI?
[00:05:16] [SPEAKER_00]: Anything they should be thinking about? So yeah, I am in Taipei right now. I would say
[00:05:22] [SPEAKER_00]: it's not as much around the chip specifically. I think all three of the major chip makers
[00:05:27] [SPEAKER_00]: on the AI PC side today all have their benefits for sure. In terms of the security and
[00:05:34] [SPEAKER_00]: privacy I would say that definitely depends on the application that the users are interacting with
[00:05:40] [SPEAKER_00]: whether it be the built-in operating system based applications or if it's something they
[00:05:46] [SPEAKER_00]: download from a third party, independent software vendor, things like that. In terms of what the risks are,
[00:05:54] [SPEAKER_00]: one would be: can they access your sensitive data on that device, right? We all have personal data on
[00:06:00] [SPEAKER_00]: our devices, maybe tax returns, maybe national identification numbers, etc., and can this generative
[00:06:09] [SPEAKER_00]: AI or this application access that data to help build, you know, better responses for you?
[00:06:16] [SPEAKER_00]: The other would be can something edit or impact that application where it could make that application
[00:06:24] [SPEAKER_00]: or that generative AI application return an incorrect result or a, you know, compromised result
[00:06:31] [SPEAKER_00]: maybe changing some variable from like hours to days which could give misleading information
[00:06:38] [SPEAKER_01]: for example. Another reason I was excited to get you on the podcast was I was reading a
[00:06:43] [SPEAKER_01]: recent Trend Micro survey and it revealed that 68% of respondents were worried about AI's role in
[00:06:51] [SPEAKER_01]: spreading misinformation. What measures can be taken to mitigate these concerns which you
[00:06:56] [SPEAKER_00]: uncovered in your survey? So I look at it a couple of ways. One is: what's
[00:07:04] [SPEAKER_00]: the source of this information? So for example, is it coming from a reputable source? Is it a
[00:07:09] [SPEAKER_00]: trusted information source that you or your listeners have validated? Is it coming
[00:07:15] [SPEAKER_00]: from just a random post on social media where it's some person or some entity with a weird headshot
[00:07:21] [SPEAKER_00]: or is it a reporter that you know and trust that would be one. A second would be the good old
[00:07:30] [SPEAKER_00]: mantra of trust but verify if you see it is there a way to verify that that information
[00:07:36] [SPEAKER_00]: is accurate. Can you double check or check that against another source for example? Are you seeing
[00:07:41] [SPEAKER_00]: multiple news outlets reporting the same thing or sharing that same data? And then the third I would
[00:07:49] [SPEAKER_00]: say is always be skeptical. If it sounds too good to be true, it may well be, right? So trust your gut,
[00:07:56] [SPEAKER_00]: that if something's like hey this is free or hey come here you can win this be skeptical
[00:08:02] [SPEAKER_00]: and verify that, or go into it with that skeptical mind, just to make sure, because otherwise
[00:08:11] [SPEAKER_00]: it's too easy to get sucked into that dopamine boost in the brain of "I won something"
[00:08:17] [SPEAKER_01]: when you don't know for sure if it's real. And also in that survey I think it was 58% of
[00:08:23] [SPEAKER_01]: respondents are concerned about AI misuse of their images and likeness. Elsewhere, outside
[00:08:29] [SPEAKER_01]: of the survey, as a podcaster I'm hearing more about voices being misused and voice
[00:08:34] [SPEAKER_01]: fraud, so how can consumers better protect themselves from such potential misuse of their
[00:08:41] [SPEAKER_01]: biometrics or their voice or their appearance, and so much else too?
[00:08:45] [SPEAKER_00]: No, it's a great question Neil. I think the first, with images, would be:
[00:08:50] [SPEAKER_00]: be careful what you share. We live in the age of social media and I know we all like to
[00:08:56] [SPEAKER_00]: share with our friends and family, but can you control or be careful what you share and
[00:09:01] [SPEAKER_00]: you know how you control your privacy settings right are you sharing this image globally you
[00:09:06] [SPEAKER_00]: know if it's a landscape picture, something like that, great, but if it's featuring you or your
[00:09:11] [SPEAKER_00]: family or your friends, do you have your privacy settings set to kind of protect that or limit
[00:09:16] [SPEAKER_00]: who could access that to make sure it's not getting vacuumed up by model generators or
[00:09:22] [SPEAKER_00]: by malicious actors that are using that to try to imitate them and when it comes to voice
[00:09:28] [SPEAKER_00]: voice is definitely an interesting one there are tools coming onto the market
[00:09:33] [SPEAKER_00]: regularly to help detect and prevent this, and one is verifying where that source is coming from,
[00:09:38] [SPEAKER_00]: right? So your voice, obviously, it's possible to capture. You, Neil, probably know
[00:09:44] [SPEAKER_00]: this better than even I do, because your voice is all over your podcast.
[00:09:49] [SPEAKER_00]: Yeah, so it is possible to get these samples, so making sure that your friends and family are aware,
[00:09:57] [SPEAKER_00]: having tools to validate that it's really you if they hear things that don't sound like you, and
[00:10:02] [SPEAKER_00]: having that skeptical attitude that we talked about previously. And as AI becomes a
[00:10:09] [SPEAKER_01]: more integral part of our PCs and even phones and tablets etc., when we're looking at things
[00:10:15] [SPEAKER_01]: like Microsoft's new line of AI PCs, what are the key privacy and safety elements that
[00:10:21] [SPEAKER_01]: need to be addressed to ensure consumer trust? Anything they should be looking out for?
[00:10:26] [SPEAKER_00]: I think a couple. One would be: does the company or entity have a privacy policy,
[00:10:32] [SPEAKER_00]: an AI acceptable use policy, or guidelines for how they're using AI, to make
[00:10:38] [SPEAKER_00]: sure that those align with our listeners' values? Are they comfortable
[00:10:44] [SPEAKER_00]: with that use of AI potentially using their data? Another would be: are they protecting
[00:10:50] [SPEAKER_00]: their sensitive information? So like I said earlier, we all store that information on our devices
[00:10:56] [SPEAKER_00]: in some way or form how do we make sure that data is either segmented or protected off where it
[00:11:02] [SPEAKER_00]: can't get sucked into these new applications and then just being careful what we store and what
[00:11:09] [SPEAKER_00]: applications we use make sure we're getting applications from reputable sources right
[00:11:14] [SPEAKER_00]: whenever there are new technology advances like this, you'll see a lot of new companies come up,
[00:11:19] [SPEAKER_00]: maybe legitimate, maybe someone just in their garage coming up with a new idea that
[00:11:23] [SPEAKER_00]: they want to test. Do they have that rigorous process in place regarding the software lifecycle
[00:11:29] [SPEAKER_00]: to make sure that the application is secure, that they have properly tested
[00:11:34] [SPEAKER_00]: it to make sure it's not using vulnerable libraries or some other
[00:11:39] [SPEAKER_00]: types of supply chain attacks upstream? So just making sure you're using legitimate
[00:11:44] [SPEAKER_00]: applications from well-known, well-rated sources, etc., to really make sure that your base is there,
[00:11:51] [SPEAKER_00]: and then also looking at security tools, like the ones that Trend Micro makes, that can help add
[00:11:56] [SPEAKER_00]: some of these protection layers. If you know you are using tools or you are going down this path,
[00:12:02] [SPEAKER_00]: making sure you have that extra layer around there to ensure that the application is doing
[00:12:08] [SPEAKER_00]: what it's supposed to be, and that there is that extra guard around it.
[00:12:14] [SPEAKER_01]: And it is so tricky. I mean, we look at someone like Apple, a privacy-first company towards their
[00:12:18] [SPEAKER_01]: consumers in every way, but they've recently been accused of scraping YouTube to train their AI.
[00:12:24] [SPEAKER_01]: There's so much going on, and almost a gold rush of all these companies scraping online
[00:12:29] [SPEAKER_01]: content. I also read somewhere that, I think, the data that they can use to train AI learning models
[00:12:35] [SPEAKER_01]: could run out as soon as 2026. It's a big gold rush out there, almost, isn't it?
[00:12:40] [SPEAKER_00]: It is, there's a crazy amount of that, and I mean, when it comes to AI especially, training data is king.
[00:12:47] [SPEAKER_00]: So you'll see lots of things of how do you source data, where are you sourcing
[00:12:51] [SPEAKER_00]: data, are you ethically sourcing your data to train your models there? So that's something
[00:12:57] [SPEAKER_00]: where it gets a little bit deeper, because now you have to look at what models or what AI
[00:13:02] [SPEAKER_00]: the application that you're running is using, and, you know, digging down deeper, which,
[00:13:08] [SPEAKER_00]: you know, even as a consumer myself, I may not do. So that's where we go back once again to the
[00:13:13] [SPEAKER_00]: reputation of that vendor where you're getting that software from, and then also potentially
[00:13:19] [SPEAKER_00]: adding a layer of security around that to make sure any of those gaps that that
[00:13:25] [SPEAKER_00]: software might have are being filled, right, and things like that, where Trend Micro provides.
[00:13:31] [SPEAKER_01]: 100%. Videos are an entirely different subject for a different day. And
[00:13:35] [SPEAKER_01]: Trend Micro, though, how are you enhancing security and privacy for users of AI PCs? I know it's a
[00:13:42] [SPEAKER_01]: topic very close to your heart, so what are you working on here? It is. My team's been working
[00:13:47] [SPEAKER_00]: tirelessly on this, and we've announced some new things around Computex, a big hardware
[00:13:52] [SPEAKER_00]: show here in Taiwan, about a month, month and a half ago now. I think we announced some
[00:13:57] [SPEAKER_00]: privacy-focused features that actually achieved a rare thing in our industry, where we
[00:14:04] [SPEAKER_00]: are able to enhance the security and privacy for our users and actually decrease user friction.
[00:14:09] [SPEAKER_00]: So we introduced a feature for email, in this example specifically, where we are able to
[00:14:18] [SPEAKER_00]: scan the user's email locally on the device using that NPU, and we don't have to send the
[00:14:25] [SPEAKER_00]: email to the cloud like we used to. So previously, you know, Neil, from Europe you're familiar
[00:14:30] [SPEAKER_00]: with the GDPR, I'm quite sure, where we have data collection notices we have to accept, and, you
[00:14:35] [SPEAKER_00]: know, from an ethical and transparency perspective we want to make sure our customers always
[00:14:40] [SPEAKER_00]: know what data we're collecting. So we previously would have to ask the user's consent to scan
[00:14:45] [SPEAKER_00]: their email, because we were using AI in the cloud to do that, but now with the
[00:14:50] [SPEAKER_00]: new power of the AI PCs we've been able to bring that functionality locally onto the device,
[00:14:57] [SPEAKER_00]: where now we don't have to ask the user's permission to collect their data, because we're not collecting
[00:15:02] [SPEAKER_00]: their data, right? It's actually all being done on their device. So like I said, we were able to
[00:15:07] [SPEAKER_00]: raise security and privacy, because now the user doesn't have to worry about those notices
[00:15:11] [SPEAKER_00]: or be worried about their data going to the cloud, which also reduced the friction, because
[00:15:16] [SPEAKER_00]: now we don't have to prompt them to opt in for data collection. Another one would be, you know, that's
[00:15:21] [SPEAKER_00]: how we're using AI to do better security, but we're also doing the second part: we're actually making
[00:15:27] [SPEAKER_00]: sure we're using security for AI, so adding those protections that I mentioned earlier around protecting
[00:15:34] [SPEAKER_00]: the local application, protecting that knowledge base that the local application may be using, or
[00:15:40] [SPEAKER_00]: the model that the local application would be using, to make sure that outside actors can't
[00:15:45] [SPEAKER_00]: tamper with it or, you know, tweak the data in the database to return a malicious URL. So, you know, if
[00:15:51] [SPEAKER_00]: you ask it "what's my bank?" and it returns, you know, a slightly different link than you're used to,
[00:15:56] [SPEAKER_00]: but you don't notice it because it's trying to do a scam or something like that so you know
[00:16:01] [SPEAKER_00]: Trend's been looking at this from both angles, right: how can we use AI to better secure our customers,
[00:16:07] [SPEAKER_00]: but also how can we make sure that our customers can use AI securely? And you've provided
[00:16:12] [SPEAKER_01]: so many great examples already but just to drill down one last time anybody listening that may be
[00:16:18] [SPEAKER_01]: considering buying a new PC, an AI PC, they're looking at all their different options in front of them,
[00:16:25] [SPEAKER_01]: any other ways that they can differentiate between those AI PCs that prioritize security and privacy
[00:16:31] [SPEAKER_01]: and those that might not have those robust measures in place? Anything else they
[00:16:35] [SPEAKER_00]: should be looking out for? Yeah, I think the key things, you know, we covered already. Obviously
[00:16:40] [SPEAKER_00]: it's hard, especially when you're going to, you know, retail locations, to differentiate those
[00:16:46] [SPEAKER_00]: things, so just making sure that the PCs you have are obviously coming from reputable companies with
[00:16:51] [SPEAKER_00]: legitimate versions of the software, right? Obviously that's what we would always want anyway,
[00:16:56] [SPEAKER_00]: and that would probably be the biggest thing. And then looking to, you know, add on those
[00:17:01] [SPEAKER_00]: extra security measures if you have concerns or just want
[00:17:06] [SPEAKER_00]: a little more peace of mind, for sure, in this new world, because this is definitely
[00:17:12] [SPEAKER_00]: an infrastructure shift. I think we're all excited about the possibilities and all the
[00:17:16] [SPEAKER_00]: opportunities it opens; we just want to make sure that, you know, everyone can use it safely and
[00:17:21] [SPEAKER_00]: securely to get this new power. And I'm glad we've approached things from a
[00:17:26] [SPEAKER_01]: cautious standpoint today but indeed there is so much excitement around the road ahead as well
[00:17:32] [SPEAKER_01]: so if we do look into the future what do you see as the future of AI and consumer technology
[00:17:38] [SPEAKER_01]: anything excite you? And then also, how can the industry balance that
[00:17:43] [SPEAKER_01]: exciting innovation with using it safely and securely and maintaining privacy standards,
[00:17:48] [SPEAKER_00]: etc.? I'm excited for, you know, some of the ideas that I've seen of what we can do, the
[00:17:54] [SPEAKER_00]: way we can personalize some of these new features with local AI, right, where
[00:18:01] [SPEAKER_00]: we used to have to do all this in the cloud. Being able to bring it there, I think, is going to
[00:18:05] [SPEAKER_00]: open up opportunities for lots and lots of innovation in the software space to personalize,
[00:18:12] [SPEAKER_00]: maybe it be interacting with, you know, people with disabilities, or allowing
[00:18:19] [SPEAKER_00]: people to be more comfortable sharing certain personal information because they know it's
[00:18:23] [SPEAKER_00]: going to be processed locally and not put into a massive cloud data bank. Those are
[00:18:29] [SPEAKER_00]: where I'm really excited to see this innovation. I think where, you know, the
[00:18:34] [SPEAKER_00]: industry needs to balance, though, is: can we come out with these new things, these new use
[00:18:40] [SPEAKER_00]: cases, maybe in a more controlled way? Or, hey, do you need to collect all that data right away, or
[00:18:46] [SPEAKER_00]: can we work our way up to that as we learn more and we experience more? But ultimately
[00:18:52] [SPEAKER_00]: you always have to push the limits, right? That's what drives some of the best
[00:18:56] [SPEAKER_00]: innovation. So can we innovate, and then make sure that we're innovating with privacy in mind,
[00:19:03] [SPEAKER_00]: versus innovating in a vacuum without thinking about that till after the fact? Because
[00:19:09] [SPEAKER_00]: that's when it gets harder, people get confused. So we want to make sure
[00:19:15] [SPEAKER_00]: we're innovating, we're innovating fast, we're just keeping in the back of our mind that, you know, consumers,
[00:19:20] [SPEAKER_00]: myself, you, we care about our security and our privacy, especially in the day and age of how we
[00:19:25] [SPEAKER_00]: see things used in other ways sometimes. So, we've been quite serious today talking about
[00:19:31] [SPEAKER_01]: everything from safety and security and the exciting work that you're doing in AI but outside of all
[00:19:37] [SPEAKER_01]: that if I have a quick scroll down your LinkedIn I can see you've had quite a varied career an
[00:19:42] [SPEAKER_01]: exciting career successful career and you also get to travel a lot so I've got to ask before I
[00:19:47] [SPEAKER_01]: let you go what is the funniest or most interesting story that you've picked up along the way
[00:19:51] [SPEAKER_01]: there's probably a few you can share a few you can't but what can you share I think one that
[00:19:56] [SPEAKER_00]: comes to mind, you know, would be kind of how I got to where I am now in the first
[00:20:02] [SPEAKER_00]: place so yeah as you probably see there I come from you know the states specifically I come from
[00:20:09] [SPEAKER_00]: Wisconsin and I went to university and actually got a degree in zoology and you might say zoology
[00:20:15] [SPEAKER_00]: and IT that's a little bit of a jump I actually doubled down on that jump after I got my degree
[00:20:21] [SPEAKER_00]: and moved to Hawaii, spent six months in Hawaii working as a dolphin trainer, and people are like,
[00:20:27] [SPEAKER_00]: dolphin training and IT? Are we talking, like, polar opposites in the world? But as weird as it sounds,
[00:20:34] [SPEAKER_00]: actually that taught me a lot that's made me successful in my career, because
[00:20:40] [SPEAKER_00]: there's lots of weird things around behavior and human interaction and nonverbal behavior, you
[00:20:49] [SPEAKER_00]: know being able to read people via nonverbal communication that the dolphins were able to
[00:20:53] [SPEAKER_00]: help me really grow that skill early in my career that now when I go meet with customers if I'm
[00:20:59] [SPEAKER_00]: in a boardroom with, you know, C-levels as well as engineers, I can read the room to make sure
[00:21:05] [SPEAKER_00]: everyone's engaged and stuff. So, as weird as it sounds, it actually prepared me more for my
[00:21:10] [SPEAKER_00]: current job than I ever could imagine and you know makes for an interesting story for sure
[00:21:17] [SPEAKER_01]: Wow, what an incredible story, absolutely love that. One of the reasons I love recording
[00:21:22] [SPEAKER_01]: this podcast every day is hearing stories like that. I mean, the
[00:21:25] [SPEAKER_01]: synergies between zoology, dolphin training, and a tech career? Phenomenal, man. But anyone
[00:21:32] [SPEAKER_01]: listening just wants to find out more information about the things we talked about today or work
[00:21:37] [SPEAKER_01]: at trend micro or connect with you or your team where would you like to point everyone listening
[00:21:41] [SPEAKER_00]: Yes, I would point them back to our corporate website, so trendmicro.com, and if you want to
[00:21:46] [SPEAKER_00]: see specifically what our consumer team is doing, trendmicro.com slash home, and if you're
[00:21:53] [SPEAKER_00]: looking for me on the socials or on LinkedIn, it is linkedin.com slash eshulze, my last name
[00:22:00] [SPEAKER_01]: S-H-U-L-Z-E, on most platforms. Well, we covered so much today, about how Trend Micro is among the first to
[00:22:07] [SPEAKER_01]: put user safety at the forefront, talking about things like AI application protection, malicious
[00:22:13] [SPEAKER_01]: AI content protection, the excitement for the future. But if I'm honest, 2,000 interviews
[00:22:19] [SPEAKER_01]: from now, you're going to be my dolphin zoology guy, and the difference and the synergies that
[00:22:24] [SPEAKER_01]: that brings with the tech industry as well. Fantastic story, though, I will never forget, so thank you
[00:22:29] [SPEAKER_01]: so much for taking the time to share that with me today. Hey Neil, thanks for having me on, greatly
[00:22:34] [SPEAKER_01]: appreciate it. I think it's clear that the evolution of AI PCs is not just about technological advancements,
[00:22:40] [SPEAKER_01]: but it's also about ensuring that security and privacy don't fall by the wayside, and with Trend
[00:22:47] [SPEAKER_01]: Micro at the forefront of safeguarding consumer interests in this age of AI, kudos to them for
[00:22:52] [SPEAKER_01]: the great work that they're doing but what about you listening what steps are you going to
[00:22:56] [SPEAKER_01]: take to better protect your own digital safety?
[00:23:03] [SPEAKER_01]: This is where I invite you to share your thoughts and continue this conversation
[00:23:07] [SPEAKER_01]: by engaging with us. You can do that by connecting with me on socials at NeilCHughes
[00:23:13] [SPEAKER_01]: or sending me a quick email at techblogwriter@outlook.com. Please let me know: how will you
[00:23:19] [SPEAKER_01]: adapt to an AI-powered future? Look forward to hearing from you all, but that's it for
[00:23:24] [SPEAKER_01]: today so thank you for listening as always and until next time don't be a stranger

