3547: Telus Digital on the Human Role in the Final Mile of AI Safety and Security
Tech Talks DailyJanuary 09, 2026
3547
33:3126.84 MB

3547: Telus Digital on the Human Role in the Final Mile of AI Safety and Security

Today's episode is a conversation with Bret Kinsella, recorded while he was in Las Vegas for CES and preparing to step onto the AI stage. Bret brings a rare combination of long-term perspective and hands-on experience.

As General Manager of Fuel iX at TELUS Digital, he operates generative AI systems at a scale most enterprises never see, processing trillions of tokens and delivering measurable business outcomes for global organizations. That vantage point gives him a clear view of both the promise of generative AI and the uncomfortable truths many teams are still avoiding.

Together, we unpack why generative AI breaks so many of the assumptions security teams have relied on for decades. Bret explains why these systems are probabilistic rather than deterministic, and how that single shift creates what he calls an unbounded attack surface.

Users are no longer limited to predefined buttons or workflows, and outputs are no longer constrained to a fixed database. The same prompt can succeed or fail depending on subtle changes, which makes single-pass testing and checkbox compliance dangerously misleading. If you have ever wondered why an AI system feels safe one day and unpredictable the next, this conversation offers a grounded explanation.

We also explore why focusing on the model alone misses the real risk. Bret makes a strong case that the model is only one part of a much larger system shaped by system prompts, connected data sources, tools, and guardrails. Change any one of those elements and behavior shifts. This is why automated, continuous red teaming has become unavoidable.

Bret shares how Telus Digital's Fortify AI attack model uncovered hundreds of vulnerabilities in hours, far beyond what human teams could realistically surface on their own. Yet automation is not the end of the story. The final decisions still depend on people who understand context, trade-offs, and business impact.

Throughout the discussion, we return to a simple but uncomfortable idea. AI safety is not something you bolt on after deployment. It demands a different mindset, broader testing, repeated validation, and ongoing human judgment. For leaders moving from experimentation to real-world deployment, this episode is a clear-eyed look at what responsible progress actually requires.

As more organizations rush to deploy agents and autonomous systems in 2026, are we truly prepared for software that learns, adapts, and occasionally surprises us? What does that mean for how you test and trust AI within your own business?

Useful Links

Thanks to our sponsors, Alcor, for supporting the show.

[00:00:04] What happens when the technology that we trust to give us the answers that we're looking for starts behaving more like a living system than just another piece of software? Well, at the start of every year, I find myself having similar conversations with business leaders, technologists and security teams. And everyone feels the momentum. Gen AI is everywhere now. Budgets are being unlocked.

[00:00:29] Pilots are turning into production systems. And suddenly, tools that started out as experiments, they're sitting inside real workflows, touching real data and influencing real decisions. But the thing that often gets lost in the excitement and the distraction of the shiny next big thing, these systems do not behave like software most of us grew up using and testing.

[00:00:55] And that's why today's conversation matters. And I've invited Brett Kinsella to join me on the podcast. He's someone who has spent years studying how generative AI systems behave once they leave the lab and enter the real world. And Brett works at the intersection of AI safety, security and large scale enterprise deployment. And his perspective challenges some very comfortable assumptions.

[00:01:21] The biggest one being that you can test an AI system once tick a box and move on. But in today's conversation, we will unpack why generative AI is probabilistic rather than predictable and how this creates an unbounded attack surface that traditional security models were never designed for. And why focusing on the model alone maybe misses the point entirely.

[00:01:47] So my guest will explain the systems around the model, the prompts, data sources, tools and guardrails, where the real risk actually lives. And yeah, we'll also talk about automated red teaming, why human testing cannot simply keep up and why the last mile of AI safety still comes down to human judgment.

[00:02:09] So if you're responsible for deploying AI inside a business or simply advising leaders who are, this episode will challenge how you think about safety, testing and prevention. Here at the Tech Talks Network, we now have nine podcasts and approaching 4,000 interviews. And that is only possible with some of the great friendships that I've developed over 10 years of podcasting. And a company that I'm proud to call friends of the show is Denodo.

[00:02:38] Because not only have they been on this podcast multiple times, they also help make sense of the AI data chaos that we're seeing now. So whether you are a CIO or a builder, Denodo helps you activate your data with speed and governance. And their global partner network also helps you accelerate every step of the way. So if you're ready to unlock real outcomes, simply visit Denodo.com today. But now it's time for today's interview.

[00:03:06] Let me introduce you to today's guest. Welcome to the show. Can you tell everyone listening a little about who you are and what you do? So I'm Brett Kinsella. I am the general manager of Fuel.iX at Telus Digital. And what that is, it's the generative AI platform we use across all of our capabilities in the organization. And so we have tens of thousands of users.

[00:03:33] And it's not just internal, but it's also our customers use it as well. Now, to give you a sense of the scale of what we do, in 2025, we processed over 2 trillion AI tokens. If people don't know what that is, we can talk about that a little bit later. But at a scale that you're not normally going to see outside of maybe a couple hyperscalers who are providing these to third parties. And we've delivered over $100 million in value in the last year directly from AI. So we're very deep into this.

[00:04:02] I've been doing this in this industry since 2013, launching products with AI. Ran a research organization for a number of years. I'm very excited about this because, you know, several years ago, the industry shifted a lot. And I think a lot of people, maybe that's when they first started paying attention to it. But it shifted for very good reason because there's a lot of value to be had. And we're doing a lot of work every day, helping our customers. And I'm really excited about what we've been able to do.

[00:04:31] And there's so much I want to talk with you about because for, what, three years, everyone was excited about Gen AI. Now it's all about agentic AI and companies talking about unleashing thousands of agents into the wild. As an ex-IT guy, that makes me somewhat nervous. But I was reading that your team had tested 24 frontier models and found exploitable vulnerabilities in every single one.

[00:04:57] So when you step back from data like this, what does it say about the current state of Gen AI safety that is operating inside real enterprise deployments? And there's obviously vulnerabilities there that they might be unaware of. In fact, I think they're pretty much universally unaware that these exist. And part of it's because it's new technology. And every new technology, as we know, brings benefits. But then there's also new risks that are introduced. And this is no different in that way.

[00:05:23] But this is quite a bit different in ways that people have not experienced in the past. And that's largely because, and maybe we'll talk about this in a little bit, it's probabilistic technology. It's not deterministic. When we used to write software, it was procedural. It was deterministic. We would write it in a certain way to do one thing. And then we would test it. Does it do that one thing? And it would never do something else. It would always just do the thing that we tested. Now what we're talking about is you can just enter in text or you can talk to it. You can say any words.

[00:05:52] You can talk about any topic. And so that gives you sort of what we call unbounded inputs. So it could be anything, not just limited by the buttons on the screen or something like that you've designed in your application. And then it's unbounded outputs as well because it can actually pull from its training data. It could pull from your data sources. It could pull from lots of different places. And as you mentioned, agents.

[00:06:18] That's just going to be more and more places that it can pull from and things you might not have full control over. And so therefore, there's variability in the output. And what I'm finding is that most organizations are hoping that the model makers have done a good job protecting these models. And in many cases, they have. It's just like you can't protect against everything. It's just like if you have a window on your house, like sometimes, you know, there's going to be a storm and a branch is going to break the window. Right? It's just one of those things that's going to happen. But there's variability in the world.

[00:06:47] But many of them are just hoping that the model makers are doing sufficient work or that the hyperscalers, if they're getting access through one of their cloud providers, is doing this. But if you look at it, the security they provide from the cloud is not exactly the same as the security. Cloud for other services is not exactly the same as what you're getting from AI. So, yes, we find it's not just the frontier models. Every model has vulnerabilities.

[00:07:12] And one finding that really surprised me, and I'm sure will surprise many listeners listening to this podcast today, is the sheer variation in attack success rates, even when models are given identical instructions. So why does that make single pass testing and checkbox style validation so dangerous for organizations? It seemed to be a real red flag when I was reading through the findings there. Yeah, I think there's a couple of different things.

[00:07:39] So, first of all, yes, the exact same attack on different models will give you different results. And I guess that shouldn't surprise people. But the exact same attack on the same model will often give you different results. And that's because these are probabilistic. Every time you query a system, it's doing math in the background. And there's a lot of variables that can come in. It could be like ambient noise. It could be a misplaced keystroke. It could just be that the calculation is slightly different this time. And so it's going to give you a different response.

[00:08:08] Now, there are certain things you can do to make it more consistent, but it'll never be deterministic. It'll never be one input always gives you the same output. It's just not – the systems aren't designed that way. That's why they're so facile and can do all these amazing things that we like. But at the same time, there's a downside to it. And so that's one of the things that I think most organizations really need to spend more time thinking about. And it manifests in a couple different ways.

[00:08:37] So the first is you need to go through your test protocol, but you can't just have your top 100 or 1,000 tests that you want to run and say, that's good. You should do that. There are certain types of things in an organization. You want to always make sure that you understand what the state of your AI safety and security is. However, what you should do is you need to run those multiple times because you do – to cover that variation.

[00:09:00] And you need to regularly generate new novel attacks because, again, the world is very complex. There's high variability. So most people aren't going to type into your AI chatbot the exact same thing that you tested against. There's going to be some variants. There could be other words. There could be different types of text or symbols that might put in. And all of these things can change the output.

[00:09:25] And that's why we have to take a different approach to AI safety and security than we've taken traditionally from a cybersecurity protection standpoint. And we are recording our conversation today at that magical part of the year, right at the beginning of a new year where we're all starting to think about different mindsets required. How are we going to evolve? How are we going to think and work differently? And you describe Gen AI as this shift from deterministic to probabilistic systems.

[00:09:51] So for executives listening who still think about security in those traditional software terms, is there a mental model or mind shift that they need to – or a mental model they need to unlearn first and a new mindset they need to be adopting and replacing it with? What would you say to those people listening? Yeah. Think of these two words, imperative versus declarative. Okay. So imperative is you tell a system exactly what to do.

[00:10:19] Each step you want it to do in sequence. That's an imperative system. A declarative system is you say, here's my objective. You can do whatever steps you want in between as long as I meet my objective at the end. Okay. So imperative systems are like the software that we use every day in our businesses, when we're at the grocery store, all these other types of things, any type of computer. They're generally built on these imperative systems.

[00:10:48] If this, then that. All these tree diagrams. And that's great if you want repeatability. But they're not very flexible, I think people have noticed. So they do what they can do, but then we hit a wall about 15 years ago and we couldn't do any better. And when we think about a declarative system, it's objective-based. So it's saying, hey, this is what I'm trying to accomplish. You AI system helped me accomplish that.

[00:11:14] And this flexibility is what allows AI to handle not only a lot of our old imperative use cases, but all these new declarative objective-based use cases. These things the imperative systems could never do because we can't actually program in 2,000 different variants of how someone might want to accomplish a task. We just, we can't come up with that many variants. So we're always trying to teach people how to use things and, no, you didn't do it that way.

[00:11:41] And then we have these help desks and Telus Digital has 80,000 people that help support contact centers for the largest enterprises in the world. And we do that type of work. But now we have these systems that are declarative that they can actually just interpret what you want to do. And it can say, okay, here are the tools that I have at my disposal. Let me apply those tools in order to help meet that objective.

[00:12:04] And that allows you to address all this variability that people have in terms of their use cases and the way they request assistance from their AI systems. And the numbers show a massive gap between Gen AI investment and AI-specific security spending. And maybe we shouldn't be too surprised of that because we have seen a lot of enterprises wanting to be part of that AI narrative, wanting to jump on the bandwagon for want of a better phrase.

[00:12:31] But from your perspective, why is security lag so behind adoption, even in regulated industries that usually are more cautious and know better for anything that you're seeing here? I think partly is they don't know what they need to do. And mostly what you do is when a new technology comes along, you focus first on, can I accomplish this new task? Can I deliver this new value? That's the first thing you do. Can I make it work?

[00:12:59] And then you're like, okay, it does work. Great. Now that I know that it works, let me step back and figure out, is it safe? How can I make it more safe? Can I reduce the vulnerabilities, address gaps, whatever those things might be? And so we think about this idea of you've got experimentation, you've got production, you've got scaling, and then you have optimization. And usually somewhere in that production to optimization is where security comes in. And the first thing we do is we grab the tools that we have today.

[00:13:26] And so there are certain types of things with AI systems that you just need traditional cybersecurity for. And so those types of things very often are being done. The difference here is that unbounded input, unbounded output that I mentioned earlier, and that people can say anything. So in cybersecurity world, this would be your attack surface is unlimited in terms of topic domain range. And then you have this unbounded output. And so there's always going to be some variability.

[00:13:53] And so people just don't know yet what they need to do. In addition, if I look at traditional cybersecurity, they've never dealt with AI safety before. And that's why we talk about AI safety and security, because there's a security aspect here, which is very traditional. We've got PII data that we don't want to leak out. But we've got this other information that is safety oriented.

[00:14:17] These are the things that might be offensive or could actually lead into types of things where they're exfiltrating data. But what it is, is it's this fuzzy area that we've never had to deal with before, because in the past, whenever we had a system that provided information to you, it was coming from a pre-approved database. Now we have AI systems that are actually pulling from lots of different places and concatenating that information in a new way.

[00:14:44] That solves the need for these use cases, which are more complex and variable. But at the same time, it can put these things together in ways that may not be what you really want from an organizational standpoint. And your research also suggests that many organizations focus on intervention and filtering after deployment rather than prevention during the design process.

[00:15:07] So just to bring to life what we're talking about here and deliver this message, what risk does that create once these systems are live and handling real customer interactions? We've seen a few nightmare stories over the years, but anything that you or any message you'd like to deliver here? Yeah, I think most people understand this idea of intervention. And guardrails is the term a lot of people will use in the industry. And that's, as you said, that's essentially a filter.

[00:15:35] It's saying, it's looking at all the information, might be rules-based initially. Say, okay, does it have this type of format, like an email address? So might you use Regex or another technology like that? And it'll say, oh, we're going to block that out or we're going to give them a response that says we can't do this. Because that would be a violation of one of our policies. There's other types of AI technologies. And some of these are actually agent-based. But we'll just call them, you know, we'll call them non-deterministic.

[00:16:01] And they're just going to scan it and they're going to say, hey, does the intent here seem like this is something I shouldn't be doing? Or is this something, is there information in this that seems like something I as an AI system shouldn't be delivering? And that's the guardrail road. And that's the prevention. What you're doing at the time of the potential incident, whether it be an attack or just a random benign type of thing by an employee, it's basically trying to catch that information. And everybody should be doing that.

[00:16:31] Now, I'd say most people are not even doing that. Most people are just hoping that the hyperscalers or the model providers are automatically blocking this stuff. And very rarely is that the case. So you should have your guardrails in. So that was definitely an important step. But I sit there and I think, OK, is this scalable? Can I just can I catch everything? I'm just sitting here. I've got a whiteboard. No one's using it. You know, I'm going to have hundreds of thousands or millions of users like on this system.

[00:16:58] Can I divine all the ways people might try to break into this and test all these? No, I can't. So this is where I go and say, OK, well, how can I prevent these things to happen so that the filters so that I'm actually configuring the right guardrail filters? Or so that I understand what my risk profile is, because I've looked across all these different categories of potential vulnerabilities. And I understand the likelihood, the percentage rate of the chance that something is going to slip through these guardrails.

[00:17:27] Can I then reconfigure them? And that's where we get into this prevention idea. And a lot of people think, OK, I'm going to test and then I can just I can just roll it out. I'll tell you within TELUS ourselves, before we rolled out one of our customer facing gender of AI chatbots, we went through a process. Seven days, seven people basically testing full time. They were able to come up with a little over 100 different vulnerabilities that they then said, OK, well, we're going to do prevention here.

[00:17:56] We're going to go and we're going to put some guardrails in. We're going to remove some information from the data stores that we're pulling from. And that's going to make us safer. We ran our Fortify product on it. We found 470. And that took us five hours. You basically, you know, you click a button. It just goes because that's an AI attack model. And so it's doing 12 times more attacks. It's, you know, the coverage ratio is much broader because humans just can't only input things so quickly. Right. You know, AI, we can paralyze it.

[00:18:25] And we can also come up with these new things. And most people aren't very good at understanding all the things that nefarious people might do or just like benign interactions from people might generate. And so those are the types of things you think about. Like the scale of this is very significant. But when we think about prevention, it's like if I want to prevent something, I want to know where my problem areas are. And so that's really what we need to start with. And you can't just have someone like do 100 attacks or run the script of 1,000 attacks over and over again.

[00:18:54] You really need to do this at a much bigger scale and a much broader scope than most people anticipate. A quick thank you to the sponsor that supports every podcast across the Tech Talks network and every episode. Because their help allows me to publish 60 interviews a month with founders and technologists who are keeping this industry moving. And this month I'm partnering with Alcor.

[00:19:18] And if you've ever tried to hire engineers in another country, you probably know just how painful it can be. Different laws, patchy support and partners who don't truly understand engineering roles. So Alcor approaches this from a different tech point of view. They specialize in Eastern Europe and Latin America. And they're able to combine EOR capabilities with recruiting. So you get one partner handling everything. And they help you choose the best location for your stack.

[00:19:48] Find developers with the right depth of experience. And run proper assessments so they can onboard people quickly. And they also give you a model that respects both transparency and margin. Most of your spend goes directly to your engineers. And the fee will decrease as the team expands. And you can even transition everyone in-house at that time when you're ready without having to worry about a penalty.

[00:20:12] And that structure is why a mix of early stage and unicorn stage companies use them as they scale. So if you want to take a look, visit alcor.com slash podcast or tap on the link in the show notes. But now, on with today's show. And something else that stood out to me was that the benchmark result highlighted that newer or more advanced models are not automatically safer. And actually, smaller, cost-optimized models often carry a higher risk.

[00:20:42] So with all that in mind, how should enterprises be rethinking model selection when security outcomes vary so widely? Yeah, I think this is a great point. Generative AI models are designed to be permissive. And what does this mean? In the security world, we don't like permissive systems. We want to lock everything down. But we want these things to be productivity tools. And we want them to have general purpose capabilities. And so that's why they will do lots of different things.

[00:21:12] That's why we talk about compliant models. They will comply with your requests. And some of the best models are the most compliant. Why are they the best models? Because they do the most stuff. People are the happiest with them because they do what they want. Because these people who are using them aren't trying to break them. But what becomes a benefit in one area is a detraction in another area. So I think even if you look at the frontier models, they can do more. They're more sophisticated.

[00:21:38] So therefore, you're going to have potentially more issues. The other thing you mentioned was that the smaller cost-optimized models, yes, they're more likely. And this is true. Because they're smaller models, when you get into the math, they have less underlying data beneath them. And so the probability swings are going to be broader, typically. So these types of things can happen as well. Now, there are some things that we can do in order to mitigate some of these risks. But yes, you should not assume that a new model is safer.

[00:22:08] It's going to be different. The other thing is models are regularly updated. Even the ones you're plugged into, a hyperscaler, and you're getting access to a model, they might update that model from time to time. You might not even know it. And that could change the performance. Because you have to think of not just the model itself, but you also have to think about the system that you're deploying. Your system has what's called a – the AI system has what's called a system prompt, which the user never sees. So you type something in, and it's appended to a system prompt.

[00:22:38] The system prompt tells the model, this is what you're supposed to do. This is what you're not supposed to do, right? Oh, and here's what the person wants, right? Now, that system prompt will change the behavior and will identify new vulnerabilities. You might also connect that to different data sources. You might connect that to other agents. All those things change the behavior of the model. So anytime any of those items changes, you need to actually go through another round of testing, another round of prevention,

[00:23:02] so that you can lock down or prevent more of the AI safety and security risks and continue to generate all the benefits without sort of the backlash that comes when these things don't work out the way you want them to. And I was also reading that you make a very strong case for continuous automated red teaming, because human expertise simply can't scale in the way that you need it to.

[00:23:27] So again, for people listening, how do red, blue, and purple team approaches need to evolve when the system under test behaves differently every time it responds? It feels like quite a big challenge there that's emerging. I think when we think about traditional red teaming, and I've been around this for a long time in the industry, it's event-based. And usually you say, okay, I'm going to launch a new system, and let's throw a red team at it. And they'll do their testing, and they'll come back with the report.

[00:23:55] You fix things, they do another red team, and they're like, oh, good. You've got the stamp of approval. Go forward. And then maybe you red team it again later when you have a new release or you change the underlying technology. When you talk about continuous, this is really important because, as I just mentioned, all the system components change. The models change. And by the way, even if no one touched the model, it's the same model, they deprecate these models every 9 to 18 months. Most people want their AI systems to last longer than that period of time.

[00:24:25] And so you're always going to be introducing new models. The other thing is your system prompt is going to change because you're always tweaking it to optimize. It's like, I want to get this model to work a little bit better. I'm going to tweak these sentences, or I'm going to add these clauses. I'm going to remove some things. So that's going to change it. The way the guardrails interact with it is going to change. So all these changes require you to do continuous testing. It's not good enough to say, at launch, this is good. You should be doing this every week, every month, every quarter.

[00:24:53] You should be running through a set of standardized tests that you run over and over again. So you have some sort of common benchmark. And then you should layer on to that a bunch of novel attacks because those novel attacks are where you find the little cracks in the armor.

[00:25:10] And if we were to look ahead for any enterprises or enterprise leaders listening, if they want to take one lesson from the research we're talking about and act on it in the next 12 months, what would meaningful progress in Gen AI safety and security actually look like in practice? So I'd love to give people listening a real valuable takeaway on this, that they can go away and make a big difference here. Where should they start? Any advice here? Yeah.

[00:25:37] So I would say you need to broaden your testing scope for AI safety and security. I mean, when we built our Fortify solution, we looked at over 10,000 different types of attack objectives and methods. And we narrowed that down to 139 that we focus on sort of general purpose. And then every model gets additional tests as well. But you really need to, you need to test more broadly across domains than you think you need to do. The second thing is you need to test it more volume.

[00:26:06] You need to do the same attacks over and over again. If it's good on one try, that doesn't, that could have been a coin flip. It just might've been safe on that one try. So you need to do it more and more. And then, you know, ultimately, you know, I tell people is that you need to do this continuously. You need to test because your system is constantly changing. Your engineers are in there tweaking things. They're swapping out data sources and those types of things.

[00:26:33] And so, you know, if we think about broader scope, novel attacks, more repeats, and then continuous testing, those are the things that I think every organization should do. And then those are the elements that then go into your prevention techniques, which update your guardrails, add other types of protections, remove different types of systems or data sources that are causing these problems. That's the way you do it. It's actually a pretty straightforward game plan.

[00:27:03] It's just more complex and it's going to take more time than you think it's going to. And one other thing I like to give my listeners, along with that valuable takeaway, is give my guests a chance to bust a myth or a misconception. So I'm hopefully going to maybe remove some of those frustrations you may have picked up over the years. What do people misunderstand most about your industry? Are there any myths about your job or field of expertise that we can finally lay to rest today?

[00:27:33] This could even be a podcast on its own, but if there were one, what would it be? I'll give you two. Okay. I'd say the first one is, and I just mentioned this, so that's why I'm going to give you two. It's a system, not a model. You can test the model all you want. When you plug it into these other components, it will change the behavior. So you have to test the system. It's okay to test the model. We test models too. We want to know what their tendencies are.

[00:28:02] But you need to test the system because the system is what your users, whether they're consumers outside or your employees, because that can lead to all sorts of other situations. That's what they're going to use. They're going to use the system. So make sure you're testing the system and not just testing the model and thinking that's good enough. The second thing I'll say is that my view is the last mile of generative AI is people and these people with expertise.

[00:28:28] So we have systems like we have this Fortify system that'll be 97% faster, covers all these different types of things. It automates a lot. However, once it gives you the output, you still want people to look at it and say, okay, yes, this is the thing I want to prioritize. This is how I'm going to take this information and update my guardrails. This is something that makes this model unacceptable.

[00:28:51] And I think this is the thing that a lot of people missed when Gen AI came on so hot and heavy a couple of years ago is they thought it was just going to be a typical automation technology. And they could just do it. And it's so complex. It's moving so quickly. It's very hard for people to keep up. And so you actually have to have people who develop expertise.

[00:29:13] You have to bring in partners who do this type of work, who help you do this work and help you get to that scale and production and then safety and security and that optimization phase so that you can actually get all the benefits from this great technology. I think that is a great message to end on today. I will add links to the research that we've mentioned throughout our interview today. But for everybody listening, if they want to keep up to speed with the work that you're doing, contact you or your team.

[00:29:42] Where would you like to point everyone listening? And I'll add it to the show notes. Yeah, I think they can go to telusdigital.com and they can look for the AI work. Or you can just go to fuelix.ai and that sort of links you up to all the things we do. That's fuel, like gasoline, IX, intelligent experiences, fuelix.ai. And that'll just link you to all the different pages that show you a lot of the work that we're doing. Not just the research, but some of the other things that we've learned over time.

[00:30:08] Because this is a new market and I've done new markets since the 1990s. And what we need is a lot more education. So we try to provide a lot of resources out there so that people can just benefit from our experience. We've been doing this at a larger scale and for longer than pretty much everybody out there that I've come across. And so, you know, our goal is to share that and make people better consumers of the technology, better users of the technology. Awesome. Well, I would add links to everything that you just mentioned.

[00:30:36] I'll add that to the show notes along with a link to your LinkedIn. And I would invite everyone listening to let me know what you thought. Any big questions, any big takeaways, please contact me directly. Let me know your thoughts there. And you are an incredibly busy guy, Brett. For everyone listening, a big thank you to you for joining me today. I know you were recording this. You're at CES. You're in Vegas at the moment, about to go on stage on the AI stage over there. So thank you for taking that time out and ensuring that this conversation gets out there. Thanks so much for your time today. Thanks, Neil.

[00:31:05] Really appreciate it. If generative AI never gives the same answer twice, how confident are you that your organization really understands its risk exposure? This is one of the questions I keep coming back to after my conversation with my guest today. And what stood out to me was how clearly he reframed AI safety as a systems problem rather than a model problem.

[00:31:30] Because it's easy to point at a foundation model and assume the hard work has been done elsewhere. But it's much harder to accept that every prompt tweak, data source change or new agent is quietly reshaping behavior in ways that we may never fully predict. And there is something refreshing in Brett's insistence that automation alone, yep, that's not the answer. But automated red teaming, that can help uncover vulnerabilities at scale that humans cannot match.

[00:32:00] But deciding what matters, what to fix first and what level of risk is acceptable still requires people who understand both the technology and the context it operates in. So as AI systems move deeper into customer interactions, internal decision making and core business processes, I think this blend of automation and human judgment feels less like an option and more like table stakes.

[00:32:28] And ignoring that reality may be the fastest way to lose trust in tools that promise so much value. But I will include links to everything that we talked about in the show notes so you can explore Brett's work in more detail. I'd love to hear what resonated with you most from today's episode. Did it change how you think about testing, security or AI readiness inside your organization?

[00:32:53] And looking ahead to the air in front of us, are we building AI systems that we can actually stand behind when we behave in unexpected ways? As always, techtalksnetwork.com. Leave me a voicemail. Find out how you can work with me. Click on all the links to everything that we talked about today and let me know your thoughts. But that's it for today. So I'll be back again tomorrow with another guest. But thank you for listening today. Bye for now.