3524: Trust, Verification, and Ownership in the Age of AI, with eSentire's Alexander Feick
Tech Talks Daily · December 19, 2025
29:50 · 23 MB

What happens when artificial intelligence moves faster than our ability to understand, verify, and trust it?

In this episode of Tech Talks Daily, I sit down with Alexander Feick from eSentire, a cybersecurity veteran who has spent more than a decade working at the intersection of complex systems, risk, and emerging technology. Alex leads eSentire Labs, where his team explores how new technologies can be secured before they quietly become load-bearing parts of modern business infrastructure.

Our conversation centers on a timely and uncomfortable reality. AI is being embedded into workflows, products, and decision-making systems at a pace most organizations are not prepared for.

Alex explains why many AI failures are not caused by malicious models or dramatic breaches, but by broken ownership, invisible dependencies, and a lack of ongoing verification. These are not technical glitches. They are organizational blind spots that quietly compound risk over time.

We also explore the ideas behind Alex's recently published book on trust and AI, which he made freely available due to the speed at which real-world AI failures were already overtaking theory.

From prompt injection and model drift to the dangers of treating non-deterministic systems as if they were predictable software, Alex shares why generative AI requires a fundamentally different security mindset. He draws a clear distinction between chatbot AI and embedded AI, and explains the moment where trust quietly shifts away from humans and into systems that cannot take accountability.

The discussion goes deeper into what trust actually means in an AI-driven organization. Alex argues that trust must be earned, measured, and monitored continuously, not assumed after a successful pilot. Verification becomes the real work, not generation, and leaders who fail to recognize that shift risk scaling errors faster than they can contain them. We also talk about why he turned his book into an AI advisor, what that experiment revealed about the limits of models, and why human responsibility cannot be automated away.

This is a grounded, practical conversation for leaders, technologists, and anyone deploying AI inside real organizations. If AI is becoming part of how decisions get made where you work, how confident are you that someone truly owns the outcome?

Useful Links

Tech Talks Daily is sponsored by Denodo

[00:00:04] Welcome back to another episode of the Tech Talks Daily Podcast. Now, as AI races from experimentation into live production systems, one question keeps surfacing for business leaders and indeed technologists alike. How do you move fast with AI without creating new forms of risk that you cannot see, measure, or control?

[00:00:29] In today's episode, I'm joined by Alex from eSentire, where he leads the Labs team focused on emerging technologies and how they can be secured before they become embedded into critical systems. He's spent more than a decade working at the intersection of cybersecurity, complex systems, and risk, and he's also the author of an incredibly timely book called Trust and AI.

[00:00:53] And in our conversation today, I want to explore why AI behaves fundamentally differently from the software that most organizations are used to trusting, why early success in pilots can be misleading, and how concepts like ownership, verification, and accountability are becoming harder, not easier, as AI gains agency.

[00:01:19] And I'm also hoping to dig into the differences between chatbot AI and embedded AI, why non-deterministic systems change how data must be managed. Also, what happens when decisions are quietly delegated to machines without clear human responsibility? And my guest will argue that it is our responsibility to verify.

[00:01:44] So if you are deploying AI beyond simple experimentation, especially into areas that will carry legal, financial, and operational risk, if not now, then in the future, this episode will give you lots to think about. It's a great one, this one. Before I bring today's guest on, I just want to give a massive thank you to my friends at Denodo.

[00:02:06] Because after visiting over 25 different events in 2025, one of the phrases I keep hearing is no data, no AI, and agentic AI simply needs better data. Now, agentic AI is here, but it only works when the data behind it is complete, governed, and in real time. And this is one of the areas where Denodo helps.

[00:02:29] Because Denodo gives you a logical data foundation that accelerates AI, boosts lakehouse performance, and turns your information into reusable data products for every team. So, CIOs, architects, and business owners each get the data that they need instantly. And their global partners help you get up and running faster than ever.

[00:02:54] So, if you want AI that doesn't hallucinate, but actually delivers real business outcomes, visit Denodo.com and start making your data work harder. But now, let's get today's guest on. So, a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Well, my name is Alexander Feick. I've been working in cybersecurity for the past, I guess, 14 years now. And I run a team at eSentire called Labs.

[00:03:24] And we sort of look at, I guess, the new technologies and how they can be secured and what it is that we can potentially adopt and pull into our own security solutions for the clients that we protect. And one of the things I love doing on this podcast is finding more about my guest's origin story. So, let's go back in time. When did you first consider working in the tech sector? Was there something that lit a spark in you?

[00:03:49] Was it picking something up, playing with something or something with a friend, etc.? And also, from there, what's your current role at eSentire? Yeah. So, I mean, for tech sector, I would say that's sort of been a lifelong thing. I started learning how to program, I think, when I was about seven years old. And I never really looked back from there. So, I was always interested in it. My first summer job was building a website for a local music store.

[00:04:17] And, yeah, I got into cybersecurity along the way as well and just kept playing with it. So, pretty much my whole life has been in tech on one side or the other. Done various things from, like, you know, systems work to IT stuff. Ran a LAN gaming center for a little while. And eventually worked my way into cybersecurity about 14 years ago. And I've been there ever since. Awesome. What a great story.

[00:04:42] And if we fast forward to present day, not only that, you're also the author of a book recently published online called On Trust and AI. And you've worked around complex systems and risk for a long time. But this book feels very specific in its timing and an important read right now as well. So, what made you decide now was the moment to write about AI? And what was it that created that sense of urgency behind it? I feel there's got to be a story there too, right?

[00:05:10] So, when ChatGPT dropped, my team at eSentire had literally just been formed. And we sort of had the mandate to explore new technologies. And I don't think that something better than, you know, the advent of generative AI could have dropped in just as we were forming a research team to give us something to all kind of rally around. And initially, when we started with it, it was just this incredibly powerful tool that we could do all sorts of things with.

[00:05:35] But I started to notice, I think, over the past year or so that a lot of people have, in their experimentations with it, kind of forgotten some of the differences between generative AI technology and some of the technologies that we've had along the way. And when we're merging them together, it reminds me a lot of when we were just starting out in the technology field.

[00:05:59] And we had, you know, a whole bunch of computers that were set up for basically like closed networks. Nobody had ever thought about what it would be like to connect them to the Internet. And what I was seeing with AI reminded me in a huge way of what it was like to be, you know, browsing along, like you'd open up a laptop in a coffee shop and everything would be wide open. And nobody had any security around anything.

[00:06:23] And I feel like we're sort of at an inflection point with AI where we're making a lot of very similar sorts of mistakes to what we were doing in the early days of security around just the Internet and open networks. And I keep seeing things pop up in the news. And so I felt like spending some time and actually writing down and getting thoughts out there about what it is that we needed to do in order to actually be able to use AI safely until we figure the technology out fully. This was the time to do it.

[00:06:53] And that was, I think, ultimately, like the be all and end all of the urgency. But I would say if I had to point to one event, it would have been sort of the realization of what prompt injection can do within a system.

[00:07:07] And when we started doing our first red team exercises where we were looking at what prompt injection would do across our own systems, and then I guess also looking at the ramifications of model poisoning would be the two things together that made me realize that we really needed a different way of thinking about how to secure systems that incorporated AI than we were using before. And you do open with a strong observation that AI is being deployed faster than organizations are building the controls to trust it.

[00:07:36] And I completely agree with this sentiment. I mean, I've been to so many tech conferences this year, and every single one of them mentioned agentic AI, and hey, we're going to launch swarms of our agents, and they're all going to talk to each other's agents. And there's an IT side of it that's like, well, let's just slow down a minute here. Let's think about things before we move fast and break things. But from your personal point of view, what patterns or failures did you keep seeing that convinced you that this gap had become, or was becoming, genuinely dangerous?

[00:08:05] So I would say the single biggest thing that I keep seeing is that I think people who've been working with technology for the past 20 years have come to trust that software behaves in sort of like a deterministic and reliable way. If you write a test case for something and you try it out a few times, you can have a very, very high confidence with the pre-generative AI software that if you run it 100,000 times in production, it's always going to perform the same way. And people don't realize that generative AI is fundamentally different.

[00:08:34] And what I keep seeing organizations doing is running some test cases or some test pilots and thinking that because the model was working well in the test cases, it's going to continue working when they actually launch it out into production. And we start seeing things like model drift and then the performance is not the same.

[00:08:53] And I keep seeing situations where organizations have sort of overtrusted the initial testing that they were doing with the models, weren't thinking about how they were actually doing some sort of ongoing program of verification, and then end up getting exposed to some sort of like major legal risk or compliance failure. You see things like, you know, judges throwing cases out because they've got, you know, ChatGPT-generated briefs.

[00:09:16] We've got things like the recent Deloitte incidents where, you know, we're seeing situations where some AI fabrications or hallucinations have made it into a report that goes to a government. And it keeps on happening over and over again. So I would say that that's probably the biggest pattern that I see is that organizations tend to believe that they can just trust the software that they produce with generative AI in exactly the same way as they used to trust deterministic software that they're used to for the past 20 years.
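
To make that testing gap concrete, here is a minimal Python sketch. It is purely illustrative and not from the episode (the functions and the random stand-in for a model call are invented): the deterministic function passes the same assertion on every run, while a sampled "model" produces different outputs for identical inputs, which is why a passing pilot says so little about production behavior.

```python
import random

def deterministic_tax(amount: float) -> float:
    """Pre-generative-AI software: same input, same output, forever."""
    return round(amount * 0.20, 2)

def generative_summary(text: str) -> str:
    """Stand-in for a generative model call; a real LLM sampled at
    temperature > 0 varies from run to run in the same way."""
    style = random.choice(["brief", "verbose", "bulleted"])
    return f"{style} summary of: {text}"

# A few passing test cases genuinely predict production behavior here...
assert all(deterministic_tax(100.0) == 20.0 for _ in range(100_000))

# ...but not here: identical calls yield different outputs, so trust
# needs ongoing verification rather than a one-off pilot.
outputs = {generative_summary("Q3 revenue report") for _ in range(10)}
print(f"{len(outputs)} distinct outputs from 10 identical calls")
```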

[00:09:46] And also in the book, you argue that most AI failures are not caused by rogue models, but by broken ownership, invisible dependencies, and missing verification. So why do these organizational failures tend to matter more than the technical ones that many people may go to first? So I think the biggest thing that has sort of shifted with AI is that non-deterministic output.

[00:10:14] And it's actually also making decisions, which is something that we've never really been able to delegate to a computer before. And previously, when you think about ownership, deterministic systems like what we see with most technology, pre-generative AI, are essentially like long chains of predictable cause and effect actions. And they can be very, very complex, right? When you send an email, it's going to be digitized, packaged up, sent across the Atlantic. It's going to arrive in my inbox.

[00:10:42] I'm going to be able to open it and read it, but it's always going to be exactly the same every single time. And the decision to actually send it is always going to trace back to an individual person. And because the paths are predictable all the way through, it's reasonable to say that when a person is running it, they've got ownership of what's actually taking place. With AI today, what we have is we actually have systems that are capable of making decisions on their own. And the path that it takes can be completely different from execution to execution.

[00:11:12] I can tell the AI system to solve a problem for me, and then I can tell it to solve the exact same problem again in five minutes. And it might go about it a completely different way with completely different sort of costs or effects that are generated from the chain of decisions that it makes. Organizations haven't really, I don't think, thought this through yet. And so they don't really think about who owns the decisions that the AI makes along the way as it's solving the goal that you give it.

[00:11:37] And if they don't have a way to actually tie that back to an individual who's validating and making sense of it, what you end up with is organizations that solve the problem and the goal that they set the AI model out to do, but don't necessarily have an understanding of who owns all of the decisions that were made along the way. And I think most of the problems that we're seeing with AI in most workplaces today all come back to that. The AI was doing something. It was doing something that looked reliable the first few times.

[00:12:06] Then it started doing things in a different way. And we didn't understand who was owning the decisions that it was making in order to do that. We were just delegating our trust to the model and not really thinking about how to move it back to the system of people that makes sure that the model is acting in alignment with us. And something else that stood out in your book there is that you draw a clear distinction between what you call chatbot AI and embedded AI.

[00:12:29] So for any leaders that have been listening to our conversation today, what fundamentally changes when AI moves from being an almost bounded tool to becoming load-bearing infrastructure of sorts? If I had to sum it up in one word, it's agency. And it's where is the locus of trust if you sort of expand it a little bit.

[00:12:49] When you're talking about chatbot AI, you're talking about a system that all of its decisions, its actions, its recommendations, everything it's doing is being transparently made visible to the person who's actually driving the chatbot. Right? Like they're talking to the chatbot, the chatbot sending something back, but ultimately accountability for every single decision is transparent to the person who's driving the chatbot and away they go.

[00:13:13] When we move down the line and we start to move forward through the trust chain, embedded AI is sort of when you say, well, I don't actually need a person to review the output of what it is that the AI is doing anymore. I trust that if I put this particular prompt in, I can embed that in my business and it'll be able to handle that reliably each time. But as soon as you do that, you've moved trust away from somebody who's a human being who can actually take ownership of it.

[00:13:38] And you've moved it onto a non-deterministic agentic system that might not make the same decision every single time. And you've actually moved your trust model off onto the AI. So I would say that's sort of like the initial step. And of course you can keep going further and trust the AI to, you know, go more and more in depth into your business.

[00:13:57] But that first handover, when we switch from chatbot to embedded, is that first sort of point where we've begun to move trust away from the people who are running the process onto a non-human system that can't actually take ownership of the decision. And a word you mentioned there a few times, trust, is one of those words that gets used a lot in AI conversations, but it's often referred to almost quite vaguely. So when you say trust has to be earned, measured and monitored, I completely agree with you.

[00:14:25] But what would you say that actually looks like in a day-to-day operation for that business leader listening? So when we think about trust, there's a couple of different ways that we can think about it. So in the psychological sense, trust is sort of that feeling that we get when something, you know, behaves in a similar way a bunch of times. And we start to believe that it will always behave that way. And with AI, people can rapidly build that up with a number of sort of good test cases.

[00:14:53] But then trust is also something that you lose very quickly as soon as, you know, it does something unexpected or betrays that trust. And it doesn't take a lot of failures for you to completely no longer trust the system altogether. So you can do the same thing correctly 99 times, but if 1% of the time you bankrupt a business or, you know, you break some critical infrastructure, the whole system is something that you can't trust going forward.
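
As a back-of-the-envelope illustration of that asymmetry (the figures below are invented, not from the episode), a 99% success rate can still carry a negative expected value once a single failure is catastrophic:

```python
# Hypothetical figures: the AI succeeds 99% of the time, each success
# saves a small amount, and each failure costs catastrophically more.
p_fail = 0.01
gain_per_success = 50        # e.g. value of analyst time saved per task
loss_per_failure = 100_000   # e.g. a bad filing or compliance breach

expected_value = (1 - p_fail) * gain_per_success - p_fail * loss_per_failure
print(expected_value)  # 49.5 - 1000.0 = -950.5 per run, on average
```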

[00:15:18] When we're talking about trust in AI conversations and earning, measuring it and monitoring it, I think what that actually comes down to is having systems in place that allow you to be able to say with confidence that even if the AI made a bad decision, because we know that it will some percentage of the time, the system overall will still make a good decision.

[00:15:40] And you can be accountable to the organization that's backing the AI decision because it's got some sort of system of measurement and monitoring and oversight around it. And what that ultimately comes down to is constantly keeping tabs on what the AI is doing, using various systems of, you know, deterministic guardrails that you can put around the AI in order to sort of surface anomalies in its decision making to a human reviewer.

[00:16:04] And ultimately, always having critical AI decisions that are going to carry risk for humans tied back to somebody who actually understands what the AI did, with a system where it's providing a chain of evidence. And you can actually see why it is that the person is backing the AI's decision. And it's not something where you can basically just say, oh, I looked at it 50 times and it's good forever. Rather, it's something where you have to continuously monitor it. AI is subject to model drift. It's subject to changes.
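
A minimal Python sketch of that monitoring pattern (the rules, thresholds, and names are all invented for illustration): deterministic guardrails wrap a non-deterministic output, record a timestamped chain of evidence, and escalate anomalies to a human reviewer rather than auto-approving them.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    output: str
    evidence: list = field(default_factory=list)  # chain of evidence for the reviewer
    needs_human: bool = False

def guarded(ai_output: str, amount: float) -> Decision:
    """Run deterministic guardrails over a non-deterministic AI output.
    Anomalies are logged and escalated rather than silently approved."""
    d = Decision(output=ai_output)
    now = datetime.now(timezone.utc).isoformat()
    if amount > 10_000:                       # hard business limit
        d.evidence.append((now, "amount_over_limit", amount))
        d.needs_human = True
    if "guarantee" in ai_output.lower():      # e.g. banned wording
        d.evidence.append((now, "banned_phrase", "guarantee"))
        d.needs_human = True
    d.evidence.append((now, "guardrails_run", 2))  # both rules executed
    return d

d = guarded("We guarantee a full refund of 12,000.", 12_000.0)
print(d.needs_human)   # True: surfaced to a human who owns the decision
print(d.evidence)      # timestamped trail showing why it was flagged
```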

[00:16:32] The providers of the frontier models are constantly pushing tweaks and new learnings into the system that can change its behavior. And a lot of that isn't necessarily transparent to the people who are using the models and deploying them. So for all of those reasons, I think trust in AI ultimately comes down to how do you tie the AI's decisions back to somebody who actually understands what the AI did and can take accountability for it. And another big theme in the book is verification. And I love trying to get people listening valuable takeaways.

[00:17:03] So again, for those business leaders listening, how can their organization better think about building verification layers into their AI workflows without slowing their teams to a halt or even killing innovation? Because it's a notoriously difficult balance to achieve. We hear about it a lot. But any advice you can deliver on getting that balance right? Yeah, so I actually think this is probably the single most important question in adopting AI systems.

[00:17:29] And I think what most organizations fail to realize is that what AI is doing is it's changing the nature of the mental work from a problem of actually producing a work output to the problem of verifying the correctness of a work output. And there still is mental work involved in verifying the correctness of the work.

[00:17:49] You could be thinking about what sorts of software systems you can use in order to give the humans who are doing the verification an easy way to actually verify that the output or the decisions that were made by the AI are correct. So if you've ever looked at systems where you can actually run audits, you start to think about, OK, well, I have computer programs that help me run down and gather all of the evidence that I need in order to be able to surface what's actually going on in the situation.

[00:18:18] If I think about it from the perspective of like a legal system, right? Like if we go back to that example where you've got a lawyer who's trusting a ChatGPT-generated case brief, that case brief, you can just generate it with AI instantly. The expensive part of actually being able to deploy the AI is to figure out how you can trust it. And you can do that by thinking about all of the things that you would need to verify, like citations and all the rest of it, and then building software specifically around that use case that makes it easy to verify what's going on.

[00:18:47] So if we're looking at that legal example, you might do something like have a big body of case law, a set of known-good articles that are there. And instead of just letting the AI generate whatever it wants, it can only generate pointers that go into that system of case law. And when the case brief is placed in front of a human reviewer, the pointers are resolved by the software. And so what that does is it separates all the citations that the AI uses from its generated arguments. And the citations can be immediately pulled in and verified.
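
Here is a toy sketch of that pointer scheme (the corpus, IDs, and pointer syntax are all invented): the model may only emit pointers into a known-good corpus, the software resolves them, and anything unresolvable is surfaced to the reviewer as a likely fabrication.

```python
import re

# Known-good corpus the model is allowed to point into (entries invented).
CASE_LAW = {
    "1954-017": "Smith v. Jones (1954), duty of care in commercial leases",
    "1987-203": "Acme Corp v. State (1987), liability for delegated acts",
}

def resolve_brief(generated: str) -> tuple[str, list[str]]:
    """Resolve [cite:ID] pointers against the trusted corpus.
    The model never writes citation text itself, only pointers."""
    fabricated: list[str] = []

    def substitute(match: re.Match) -> str:
        cid = match.group(1)
        if cid in CASE_LAW:
            return CASE_LAW[cid]
        fabricated.append(cid)            # pointer to nothing: flag it
        return f"[UNRESOLVED:{cid}]"

    return re.sub(r"\[cite:([\w-]+)\]", substitute, generated), fabricated

brief = "As held in [cite:1954-017] and [cite:2099-999], the duty applies."
resolved, bad = resolve_brief(brief)
print(resolved)                 # trusted text inlined where IDs resolve
print("fabricated:", bad)       # ['2099-999'] goes straight to the reviewer
```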

[00:19:17] And I think every type of knowledge work that you want to deploy AI into, if you think about building the system of verification around it, such that it's easier to verify the work product than it is for you to have produced it originally, your AI program is probably going to be a success. So, yeah, I think that's kind of the crux of it really is just figuring out how do you make the right tools to make the cost of verification cheaper than the cost of basically repeating the whole work product again.

[00:19:46] And something I really wanted to highlight today is you made a very deliberate decision to publish the book for free first. So tell me more about that. What does that choice say about how you personally see responsibility, urgency and access when it comes to AI governance? So I think for me, the biggest thing there was just realizing that everything's kind of on a timeline.

[00:20:08] And I have noticed that the rate at which we are tying AI systems in is changing literally on a month over month basis. And what AI was integrated into two months ago is significantly different from what it is integrated into today. Also, while I was writing the book, some of the things that I talked about were coming true at the time. So the opening example that I had was completely fictitious when I started.

[00:20:35] And by the time that I actually got like two months into writing, the situation had happened with Deloitte and then it ended up happening again. And so I was looking at it and I was going, OK, well, if it's changing this fast, I need to get the book out as fast as I possibly can. Otherwise, it's not going to do as much good and it's going to be old news. And the publication process is a bit slow. I still intend to publish the book. I'll probably have it out fairly soon.

[00:21:02] But it's literally just a question of speed. I wanted to make sure that the case examples and everything were getting out there in time to do people good, before all of those failure modes were showing up in actual organizations. And you also took an unusual step, an incredibly cool one as well, because you turned the book itself into an AI advisor. So what did building that agent reveal about the limits of models and how did it reinforce your view that humans must firmly stay in the loop?

[00:21:32] Which, again, human in the loop is another big buzzword this year. But tell me more about that. Yeah. So what I noticed with it was, I mean, if you get used to building agents, one of the things that you realize very early on is that the biggest component of them is to have sort of like a bounded realm of trusted knowledge that they can actually draw from. And the better the source of your trusted knowledge, generally speaking, the better your outputs will be if you ask it a question related to your knowledge base.

[00:21:57] So realistically, building an AI agent that runs off of a book is really just a question of converting the book into a database of knowledge that the AI agent can draw from. So that was probably the simplest step of the whole process. It was just basically being able to pull in the book and using that as the knowledge base for a dedicated model. But in terms of what it taught me about trust, I would say, you know, I've been having conversations with a bunch of people.
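
A toy sketch of that retrieval step (the chapter texts are invented, and simple word-overlap cosine similarity stands in for the embedding model and vector store a production agent would actually use):

```python
import math
from collections import Counter

# Toy "book": chapter id -> text (contents invented for illustration).
BOOK = {
    "ch1": "trust in AI must be earned measured and monitored continuously",
    "ch2": "verification layers make outputs cheaper to check than to redo",
    "ch3": "ownership means a human stays accountable for every AI decision",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str) -> str:
    """Return the chapter the agent should read and cite for a question."""
    qv = vectorize(question)
    return max(BOOK, key=lambda ch: cosine(qv, vectorize(BOOK[ch])))

print(retrieve("who is accountable for a decision the AI made"))  # ch3
```

The shape of the step is the same in production: convert the book into retrievable chunks, score them against the question, and hand the best-matching passages, with citations, to the model.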

[00:22:23] And a lot of people don't have the opportunity to, like, actually, you know, think about how they would search through the book and apply a specific section and get that out in the moment. And everybody wants something that's very fast, very applicable, very easy. Having an agent do the searching and then citing and pulling you right to the correct chapter is basically a shortcut. And I would say that it makes the applicability of the book go up a lot. But of course, you do still have to read the sections that it cites.

[00:22:49] And you have to think about the arguments that it's giving you, because sometimes it might say something that I didn't write. And, you know, it's your responsibility to verify is kind of what I preach. But you also have to do that for the advisor agent that I wrote for the book. Well, incredibly cool. And you also describe your philosophy as vigilant optimism, operationalized.

[00:23:15] So for leaders with a similar mindset, deploying AI today, what habits should they start institutionalizing in 2026 if they want AI systems that they can actually depend on rather than struggle to find ROI from? Yeah, so I guess a couple of points from that. So the first one is that you want to instill a culture of ownership, right? You don't want to have situations where there's a pattern that I'm seeing in workplaces a lot, which is what I call AI slop.

[00:23:45] And what that generally comes down to is when somebody generates an output from an AI agent, but doesn't take ownership and accountability over what the agent was actually producing. And instead, they just pass it on up to a manager or somebody that they consider a subject matter expert. And then that person basically has to take on the burden of verifying the work product. And it used to be that this was very, very helpful work that you would want to actually build out in organizations and you design your whole enterprise culture around it.

[00:24:14] And I think what has changed is that the cost of generating that initial work product has collapsed. It used to be something that you would actually pay somebody to produce as a low-quality rough draft. And now, in pretty much any situation where you would previously have paid for a low-quality rough draft and had it reviewed by a senior, you can really just go to a model in order to generate it. And so the remaining intellectual work, I think, that lives inside of enterprises is around the question of ownership and verification.

[00:24:44] You don't really need to do all of the rough work of just generating an initial output anymore, but you do need to do the work of verifying that it's correct. And so the single biggest thing that I think that leaders need to sort of institutionalize within their companies is at every level, there should be a sense of ownership over something. And whatever that something is, that's what you do. You have to think about that your job is not to produce a rough draft that somebody else owns accountability for.

[00:25:11] Everybody's job has to be that they own something and they can use AI to help them produce it. But at the end of the day, everybody's got to be able to take accountability over that output or you're not going to be able to trust it. You're going to get cascading trust passed up the organization and it's not going to work out. And so the AI systems that you can depend on, they've got to be backed by people that you actually trust who can take ownership over the outputs. I love that. And there's a line you used there, responsibility to verify.

[00:25:39] I think that's something that we should all live our lives by in an AI world. And from a personal level here, I mean, there's a real pressure on us all to be in a state of continuous learning, you and I and everybody listening. So I've got to ask you from a personal point of view, where or how do you self-educate? How do you keep up to speed with the pace of change? So I think in the age of AI, this is an incredibly important question.

[00:26:06] So I actually sort of transitioned largely out of the day to day a little bit before Gen AI sort of dropped on us all. And then when it came in, I was just basically sucked back into it. And what I discovered as I was working on it was that if you're not keeping pace with it, if you're not learning about it, you actually don't understand what the people inside of your organization, or the people reporting to you, are doing anymore.

[00:26:28] And the role of almost every task that I would see at a ground level is changing so fast that you absolutely have to spend time experimenting with the tools that are coming out that your team is using to even understand the job that they're doing anymore. And so personally, what I do for that is I generally take a couple of hours pretty much on a daily basis. And I look at the tools that are actually coming out in different areas. And then I pull them down and I actually deploy them inside of a test environment.

[00:26:58] And I play with them and I try and very much do sort of like a hands-on learning thing. Sometimes I will also read research briefs. I love reading, always have. And I generally dedicate an hour or two on a daily basis to reading as well. But I definitely feel like it's got to be a balance. You have to have that hands-on experimentation with the tools that your people are using. And you also have to be keeping pace with what's coming out in the research because both are changing so fast right now. Wow, such a great answer.

[00:27:26] And I think that's a powerful moment to end on. But before I do let you go, anyone listening want to learn more about the book that we've talked about, the AI agent, you, the work that you're doing at eSentire, anything at all. Where would you like to point everyone listening? Well, for anyone who wants to read the book or find out more about eSentire, obviously I would say eSentire.com. And if you add just a slash after that and then on-trust-in-AI with hyphens, you'll get my book. If you want to look up me personally, you can do that at feick.ca.

[00:27:56] And, yeah, I've got my website there with all of the things about me personally. Awesome. Well, I will have links to everything that you mentioned there in the show notes. So anyone listening, please check that out. There's so much gold in there, as you probably heard from today's interview. But more than anything, just thank you for taking the time to come on here today, Alex, and sharing your story. Really appreciate your time. Thank you. Thank you.

[00:28:20] I think this episode really highlighted how many of today's AI failures are not caused by malicious models or exotic attacks, but by organizational blind spots. Broken ownership, missing verification layers, and misplaced trust all show up long before any technical exploit does. And what really stands out is this idea that AI shifts knowledge work away from producing outputs and towards verifying them.

[00:28:50] That change has deep implications for culture, leadership, and accountability inside modern organizations. And AI systems that deliver value over time are not defined by how clever they are, or at least they shouldn't be, but by how clearly responsibility remains with the people that are deploying them. And as he said a few times there, his big belief is in our responsibility to verify.

[00:29:16] But I'd love to hear more from you. If this conversation changed how you think about trust, verification, or ownership in AI systems, I want to hear your perspective. So please head to techtalksnetwork.com. There's a whole heap of ways of communicating with me there. You can even send me a voice message. Love to hear your thoughts on this one. But that's it for today. So thank you for listening as always. And I'll speak with you all again very soon. Bye for now.