In this episode of Tech Talks Daily, I sat down with Keith Zubchevich, CEO of Conviva, to unpack one of the most honest analogies I have heard about today's AI rollout.
Keith compares modern AI agents to toddlers being sent out to get a job, full of promise, curious, and energetic, yet still lacking the judgment and context required to operate safely in the real world. It is a simple metaphor, but it captures a tension many leaders are feeling as generative AI matures in theory while so many deployments stumble in practice.

As ChatGPT approaches its third birthday, the narrative suggests that GenAI has grown up. Yet Keith argues that this sense of maturity is misleading, especially inside enterprises chasing measurable returns. He explains why so many pilots stall or quietly disappoint, not because the models lack intelligence, but because organizations often release agents without clear outcomes, real-time oversight, or an understanding of how customers actually experience those interactions.
The result is AI that appears to function well internally while quietly frustrating users or failing to complete the job it was meant to do.
We also dig into the now infamous Chevrolet chatbot incident that sold a $76,000 vehicle for one dollar, using it as a lens to examine what happens when agents are left without boundaries or supervision.
Keith makes a strong case that the next chapter of enterprise AI will not be defined by ever-larger models, but by visibility. He shares why observing behavior, patterns, sentiment, and efficiency in real time matters more than chasing raw accuracy, especially once AI moves from internal workflows into customer-facing roles.
This conversation will resonate with anyone under pressure to scale AI quickly while worrying about brand risk, accountability, and trust. Keith offers a grounded view of what effective AI "parenting" looks like inside modern organizations, and why measuring the customer experience remains the most reliable signal of whether an AI system is actually growing up or simply creating new problems at speed.
As leaders rush to put agents into production, are we truly ready to guide them, or are we sending toddlers into the workforce and hoping for the best?
Useful Links
Connect with Keith Zubchevich
Learn more about Conviva
Thanks to our sponsor, Alcor, for supporting the show.
[00:00:04] I've been to enough tech conferences in the last 12 months to question why AI still feels impressive in demos. And yet, let's be honest, unreliable in the real world. iPhone users know exactly what I'm talking about. I don't know what's going to come first, decent Apple AI or Grand Theft Auto. But that's a debate for another day.
[00:00:27] Today, I'm joined by the CEO of Conviva. And together, we're going to unpack one of the most relatable metaphors I've heard about AI adoption so far. My guest will argue that deploying AI today is a bit like sending a toddler out to get a full-time job. Yeah, it might be full of potential and eager to learn, but it's not quite ready to handle real responsibility without guidance.
[00:00:52] So we'll talk about that: why the majority of AI pilots are still failing to deliver ROI, what incidents like the Chevrolet chatbot selling a truck for $1 reveal about enterprise oversight, and why the next phase of AI maturity is much less about smarter models and more about visibility into how AI agents actually behave once they are live. Intrigued? Good.
[00:01:19] But before I get my guest on today, I want to give a quick thank you to my friends at Denodo, who are playing a big part in supporting this show. Because one of the questions I hear more and more from listeners on this podcast is, why does AI succeed? Or why does it fail? Because let's be honest, AI is moving fast, but success is often still elusive.
[00:01:42] Now, most projects fail not because of the AI, but because the data foundation isn't ready. This is why organisations are increasingly turning to Denodo. Denodo delivers trustworthy and AI-ready data without the need to copy it everywhere. So if you're ready to understand why your AI projects fail and how to succeed with AI, simply visit denodo.com and take control of your data world.
[00:02:12] It's time for me to officially introduce you to today's guest. So a massive warm welcome to the show, Keith. Can you tell everyone listening a little about who you are and what you do? So my name is Keith Zubchevich, President and CEO of Conviva. What we do is we're a performance analytics solution that measures consumer experience and outcomes. We started in the video business, so we measure most of the large streaming apps in the world today. We've been around for about 20 years.
[00:02:37] So we measure the video experience, the viewer experience, and give that to publishers so they can modify their businesses and make decisions to improve the viewer's experience so that viewers actually watch more video. Logical. Now, every business has gone digital. And so we do the same thing for all digital businesses that are launching products into the wild, which is to show consumer experiences. How are consumers experiencing your product? And by the way, now agents, because agents are here.
[00:03:00] What is the experience of the agent? And then giving the publisher, giving the e-commerce companies, giving the AI teams the intelligence back in real time so that they can make the modifications to adjust their products or their agent intelligence strategies so that they're able to respond to consumers and improve the experience. And again, same outcome, getting the better experience, getting the better outcomes.
[00:03:20] Well, it's a pleasure to have you sit down with me today. One of the things that put you guys on my radar, set off my tech spidey senses, was I was reading online that you have compared today's AI deployment to sending a toddler out to get a job, which really struck a chord with me. But what feels most immature about how organizations are putting AI agents into real environments right now, from what you're seeing?
[00:03:46] Because I must admit, as an ex-IT guy, when I see businesses just saying, hey, we're going to launch a thousand agents out there next week, I'm like, whoa, whoa, this makes me a bit nervous. But what are you seeing here? Yeah. So at the fundamental layer, agents are nothing more than automated pieces of software. They're automating some task. They're automating some process. And so when I talked about, you know, launching a toddler out into the wild, you really are launching a brand-new piece of software. And it's a very complicated, complex piece of software. It's not simple.
[00:04:14] So you have an agent that's out in the wild. It knows very little. You're providing it data. I mean, obviously, everybody's trying to get and acquire as much data as possible to make their models and agents as smart as possible. But the reality is that we're at the early curves of this thing. So whether it's hallucinations or whether it just doesn't have the right information, that's what I mean by the immaturity of an agent is like launching a toddler into the wild. It's not mature. It doesn't have all the information. And just to operate correctly, just at that layer, the technical layer.
[00:04:43] And I think that's where companies are learning that this is a maturity process. It's not an if, it's a when. And it's not even a matter of whether you should start later. You should start now and add to that maturity. The key is, can you put guardrails in place? And can you protect against the level of immaturity that may impact your business? And by the way, there are two different layers of agents. There are internal workflow ops agents, which are very safe, by the way, because you're really dealing with your own employee base, your end users.
[00:05:08] So it's a little bit of a more protected, secure agent launch, because if something's wrong with my Salesforce agent, you know, I'm querying our sales reports and it's wrong, I just pick up the phone and call my sales ops person and say, hey, this isn't correct. Can we make the modification and make sure this is correct going forward? That line gets crossed when you launch these into the wild, into your consumer base: you launch chatbots, you start to launch sales assistants or more, you actually launch an agent-based product.
[00:05:35] Now you're talking about consumers interacting with the agent, and they don't pick up the phone and call you and say, hey, you had a problem with your agent, can you fix it so I can buy your stuff? Businesses have to know what the consumer's experience with that agent was. And if you don't know what it is, that's where the potential for it to really impact your business becomes a major risk. So, again, even that is more on the guardrails and monitoring side, so you can see how it's performing as it matures, because you're never going to be able to launch a mature agent for your business.
[00:06:05] Everything brand new gets launched, and you try to make it as good as possible. But once it hits production, you're always going to learn new things, and that agent is going to get smarter over time. The key is, can you see when it's not quite as smart as it should be? And can you make the adjustments quickly so that you bolster that lack of intelligence or awareness so that the agent gets better faster? So that's how we view the development of the agent marketplace. Very immature, very early. It's gathering intelligence as it goes. But the next step is when you push this into the consumer base.
[00:06:33] It's a totally different ballgame. And speaking of toddlers, ChatGPT has turned three. It's officially a three-nager now. So as most AI pilots, or early iterations of them, fail to produce measurable returns, from what you're seeing in the field, where is the disconnect between technical progress and real, measurable business outcomes? Yeah.
[00:06:59] I mean, I think, again, automation being the core foundation of what AI is and agents are, I think the efficiency gains are number one, right? No matter who's using AI, it's about efficiency gains. I'm faster. I'm more productive. It's not perfect. It gets 70% to 80% accurate, and then I've got to work on the rest. So I still have to scan the whole document. You still have to know what you're doing, but it does improve efficiencies in a big way.
[00:07:28] So I think that's the first benefit of agents. I think beyond that, when you start to get into transactions or you start to get into actual business impacting conversations or business impacting agent conversations, that's where it becomes a whole different ballgame because it's not just about 80%. If I'm trying to buy something and the agent got me 80% of the way there, this is zero. It's a times zero. It didn't get me all the way. I didn't get to the business outcome.
[00:07:53] So the complexity of the agent conversation, the ability for the agent to be accurate in an e-commerce setting or when speaking to consumers, is a different level of expectation. It can't be 70% accurate where I fix the rest. When you launch a consumer-facing agent, it has to be 100%. It has to get me all the way to the transaction, or else I go back to the app, right? With websites and apps, I can do anything I want manually. I can go back to that world and it's 100% effective. When you launch that agent, you have to make sure that agent can move people through 100% of the time too.
[00:08:22] 80%, 70%, even 90% is not good enough anymore, because until the purchase button is clicked, anything short of that is a zero. So that's why consumer-facing agents and real business-impacting agents are very much lagging: it's imprecise, and it's not quite where it needs to be yet. I think it's so true what you're saying there. And I think there will be many leaders that have assumed that a failed pilot means that, hey, the model's just not smart enough.
[00:08:51] But another thing that stood out to me when I was doing research on you guys is that you've argued that the issue is visibility and guidance instead. So why is understanding agent behavior so much more important than simply building better models, for that very person listening? Yeah, there was an overused MIT report that was put out when, like with everything, the hype cycle was on and everybody was super excited about AI.
[00:09:16] And there was an MIT report put out that said something to the effect of: 95% of agent launches or agent projects fail. And then they just took the headline, right? In today's society, it's all about the headline. Forget the body of work or what's written. It's about the headline. But if you read the article, you read the report, it actually says that there was never any business outcome assigned anyway. So people were launching agents without asking, what is the outcome you expect of the agent? Again, automating a process is one thing. You can automate and actually make it faster.
[00:09:42] But if the agent launch itself has an outcome on it, now you can measure whether it's effective or not. And most agent launches were just automation efforts: I'm automating a process and never really had an outcome in mind. That's why the paper says it's about outcomes, and also visibility; the paper actually talks about visibility. And so the failure rate of agents in the early days is not surprising. When we launched our video business and streamers, publishers were launching streaming products, it was awful.
[00:10:12] I mean, this is not even that much different from the launch of the streaming businesses, where these large publishers, mature businesses, were launching immature video playback technologies. You probably remember back in the days when buffering was out of control and bit rates were low. It's the same thing: you were trying to launch something into the consumer base that had to serve that base at a very high quality level. And it struggled, and it got there, but it got there because of visibility and monitoring.
[00:10:37] So publishers were able to use Conviva and tune the experience of the viewers and what was happening, what worked, what didn't, where were their challenges, and keep tuning it to measure outcome, which is engagement. And they got to better engagement. It's the same thing with agents. If you monitor them and you can show what outcome you want to get to, and then you can in real time see how it's effectively getting people to that outcome or not and being able to adjust when it doesn't, you can get there much, much faster. So the agent failure rate is going to be high no matter what.
[00:11:06] But the key is, can you see why it's not succeeding and what you need to do to make those adjustments faster so that you can ultimately get there? Because again, as I said, for businesses it's not an if, it's a when. People will deploy agents. It's here. It's here to stay, and it's going to revolutionize all businesses. The key is, can you harness the power of this immature technology and still be able to deliver business outcomes?
[00:11:25] And the only way to do that is monitoring in real time so that you can adjust and make meaningful, impactful decisions about how you can get that agent to help your business and return an ROI, versus cost you money or do nothing. It just did something, and I don't really know how it got anybody to anything meaningful. And you mentioned one big story, that big report that everybody jumped on. And there was another big news story, and that's the one where the Chevrolet chatbot sold a $76,000 car for just $1.
[00:11:54] It becomes somewhat of a cautionary tale. But what does an incident like this reveal about how little oversight many organizations have once AI is placed in customer-facing roles? Any big lessons there? There's a huge one there, and that is that when I was younger, I would listen to a morning radio show that had a project where they went from a paperclip to a car.
[00:12:17] And so the game they were playing was, over a long period of time, they would trade the paperclip to somebody who would give them something for it. And you would just slowly upgrade from what you had, and over like 30 days or something like that, they went from a paperclip to a car. It's just slow and gradual, right? Getting from a paperclip to a car. And it's the same thing with agents. If people can get in and gradually wear down your agent, they can get the agent to just respond to a question.
[00:12:43] And it's accurate, and the agent feels like it answered the right question, and the person comes back with another question to guide that conversation down to a dollar, or whatever you want to manipulate the agent to do. The key is, can you see the pattern of that trend? You can see, oh, immediately something's on its way down, and it's continuing down, I should jump in. Or you just set up guardrails so that it has a floor, and if it gets to that point, the agent comes back with a set response.
[00:13:08] So, again, back to the dollar for a truck, you know, that's a perfect example of the loose nature of agents that requires guardrails: the ability to set floors and understand when something reaches a certain point, in real time, by the way. You've got to know in real time, before it's too late. Or the ability to just see the trend, see the pattern of a conversation, and then be able to step in and say, okay, this is going in a very bad direction, or it's going in a direction I need to stop or pull back or kick over to a live agent, whatever you're doing.
[00:13:33] But that is actually a perfect example of where monitoring and trending is critically important for being able to manage those guardrails. I love that. I'm going to have to look up the paperclip to a car. Clip to a car, look it up. Yeah. It blew my mind. It was one of those things where I was like, I cannot believe this is happening. But then you logically think about it, and they never did trade a paperclip directly for a car. They just traded a paperclip for, like, a stapler, a stapler for, like, a calculator, and they just kept going up. It was crazy. It was crazy. Love that.
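Keith's floor-plus-trend idea can be sketched in a few lines. This is a purely illustrative sketch, not anything from Conviva's product: the floor value, the function name, and the three-drops-in-a-row rule are all assumptions.

```python
# Hypothetical guardrail for a price-negotiating sales agent: enforce a
# hard floor, and flag a sustained downward trend so a human can step in
# before the conversation reaches "one dollar for a truck".

FLOOR_PRICE = 70_000  # illustrative minimum acceptable price

def check_offer(offer_history: list[float], proposed: float) -> str:
    """Decide what to do with the price the agent is about to quote."""
    if proposed < FLOOR_PRICE:
        return "block"  # hard guardrail: never quote below the floor
    # Trend guardrail: three consecutive drops suggest the consumer is
    # walking the agent down paperclip-style, so escalate early.
    recent = offer_history[-3:] + [proposed]
    if len(recent) == 4 and all(a > b for a, b in zip(recent, recent[1:])):
        return "escalate"
    return "allow"

print(check_offer([76_000, 74_000, 72_000], 71_000))  # prints "escalate"
print(check_offer([76_000], 1))                       # prints "block"
```

The point of the sketch is that both checks run in real time, per turn, rather than being discovered after the screenshot goes viral.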
[00:14:01] If we stick with the toddler analogy for a moment, if AI agents are still learning how to behave, what should effective parenting look like in an enterprise setting? How should teams that are listening to this conversation today be monitoring, training, and correcting AI without slowing innovation down? Yeah, that's just what I said: without slowing the innovation down is the key, because innovation is a nonstop thing with AI right now, and agents in particular. You're always going to be developing new prompts.
[00:14:30] You're going to be developing new intelligence. You're going to be providing new sources of data and information to the agent so that it's getting smarter all the time. So there are two layers of how people should think about an agent. One is the back-end technology performance. Is it making the right calls? Are the API calls happening? Did I provide it with the right data set? Is it accessing the right data set for the right questions? So there's a lot of technology functionality that people who are building agents should look at to make sure the technology is working well.
[00:14:57] But the other side of the equation is, what is the impact in the conversations, and what are the outcomes and patterns of those conversations? Because I could be accessing all the right information. We just used the example of the truck, right? Obviously every answer was correct, right? It's answering the questions. The key is, can you take the conversation in totality, look at the pattern of the conversation, and say, is this a good conversation? And I'll give you another example, not the dollar-truck one. Let's say somebody builds an agent and it gets 100% of the customers to the outcome they wanted.
[00:15:26] But what you don't see without the pattern of conversations is that it frustrated everybody. Let's say I had a question and I had to ask it five times, and the average question was asked five times. Yes, maybe I stuck it out because it was a billing question or a refund that I really needed, so I had to get to the outcome. The consumer had to invest effort to get through this inefficient agent conversation. So if I'm someone who just looks at, oh, the agent got everybody to the refund correctly, I think I'm good.
[00:15:54] Instead, if you do look at the pattern of the conversations, you see, oh my gosh, it took 15, 20 round-trip conversations to get to that outcome. I now realize that not only did they get to the outcome I wanted, but I have a frustrated consumer base. That's the thing. Getting to the outcome is not the only thing. I say with agents, there are two things that matter: accuracy and efficiency. Because it's not just about accuracy. It's how efficient was it for me to get to that accurate outcome?
[00:16:23] And if you are stress-testing your consumers, because this thing is immature, back to the toddler: you know, I'd like to get a refund. What do you want a refund on? I already told you I want a refund on this bill. Oh, what was that bill for? It's wasting time. Then you need to understand that that's the problem, not whether it got to the outcome. I need to make my agent more efficient. So you need the ability for an agent to understand consumer behavior patterns, things like intent, but then also sentiment.
[00:16:52] How are you measuring intent and sentiment of your consumers so that the agent is able to know when I'm frustrating a consumer? Yes, I got the accuracy right, but I frustrated the consumer. So as a builder of agents, you have to be able to measure both. You have to be able to measure the accuracy, but then also the efficiency of how the conversations are going and where the agent is a little loose in responding. Even though it might be accurate, it's just inefficient.
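As a rough illustration of measuring both dimensions Keith names, accuracy and efficiency, over a batch of conversation logs: this is a toy sketch with made-up data and metric definitions, not Conviva's actual methodology.

```python
# Toy sketch: score an agent on the two metrics discussed here,
# accuracy (did the conversation reach the intended outcome?) and
# efficiency (how many user turns it took when it did).

from statistics import mean

# Each record: (reached_outcome, user_turns) for one logged conversation.
conversations = [
    (True, 2), (True, 5), (False, 8), (True, 3), (True, 7), (False, 6),
]

accuracy = mean(1 if ok else 0 for ok, _ in conversations)
avg_turns = mean(turns for ok, turns in conversations if ok)

# An accurate-but-slow agent (high accuracy, high avg_turns) is still
# frustrating consumers, so both numbers have to be watched together.
print(f"accuracy: {accuracy:.0%}, avg turns when successful: {avg_turns}")
```

In practice the "reached outcome" and "turns" fields would come from real conversation telemetry, and you would track both numbers as live trends rather than batch averages.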
[00:17:18] I just want to give a big thank you to my sponsor, who is supporting every show, every episode across the Tech Talks network. And this month, I'm proud to be partnering with Alcor. Anyone who's tried to scale an engineering team across borders will know firsthand how messy it can get: endless providers, confusing rules to deal with in each and every region, and fees that always seem to surface at the last minute.
[00:17:46] Now, Alcor solves that by acting as a partner rather than just an intermediary. They focus on tech teams expanding into Eastern Europe and Latin America, and they bring employer-of-record services together with recruiting. So, essentially, they help you pick the right country, source the right engineers, assess them properly, and then get them active for you and your company within days. And one of the things that stands out for me is the financial transparency.
[00:18:15] Around 85% of what you pay goes directly to your engineers. Their fee goes down as your team grows. And if you ever wanted to bring your team in-house, you do so with no exit costs. That kind of clarity is why Silicon Valley startups, including several unicorns, have chosen Alcor. And you can find out more by simply going to alcor.com slash podcast or follow the link in the show notes below.
[00:18:43] And so much of what you just said there will resonate with people around the world. We've all had those frustrating experiences with chatbots. And real-time visibility into AI behavior sounds so obvious, yet many companies still lack it, based on our experiences. So, what kind of signals should leaders listening be tracking to understand whether an agent is helping or quietly creating risk or enhancing frustrations? What should they be doing? Number one is measure the consumer experience.
[00:19:13] Start with the consumer experience. We did that in video. That was our promise in video. We deploy 8 billion sensors in the market today. We probably have the largest sensor measurement analytics network in the world today, because we measure everybody's video player. Every single video player matters. You can't deal in aggregate. You can't just deal in system-level performance, because it doesn't tell you how many consumers got there or are upset. So, the first thing is you have to measure consumer experience. You have to measure the consumer's point of view.
[00:19:41] You can't measure internally, because the person you're serving may have a totally different perspective than your system's red, green, and yellow lights are telling you. So, the number one problem, I think, with most digital businesses, not even agent businesses, but most digital businesses, is: are you effectively measuring, in real time, your entire consumer base's experience? Forget the agent for a second. That is a challenge in digital businesses all over the place. And that's the reason why video businesses said, I'm not going to make that mistake. I'm going to measure every video experience.
[00:20:08] I'm going to use Conviva so that every video experience is the best it can possibly be. Now, you still aggregate that, because we measure 8 billion. So it's not like you're looking at 8 billion individual sessions; we aggregate those 8 billion and then say, this is the group where you can make one decision and solve all their problems. But you have to see their problems from that perspective. Agents are no different. There will be a lot of companies that will talk about, you know, agent performance, agent observability, agent conversation measurement. And that's good, but you're launching these to serve a consumer. Who's measuring the consumer?
[00:20:38] And if you're not measuring the consumer, you're guessing the impact of your systems and agent on that consumer. And you're trying to figure out if it worked. Conviva says, know the experience and the sentiment of your consumer by measuring them and then track that back into the agent and then see where the disconnect was. Because the agent may say, I did a great job. And the consumer's like, I'm quitting. How do you reconcile that? And by the way, which one matters? Are you building a business of agents or are you building a business for your consumers? So, that's why we say start with the consumer.
[00:21:07] They're the one that you're building it for and the one that matters most. And that's the mistake most people make. And it's so true what you're saying there. And I think when organizations talk about ROI from their AI or any tech investment, they often just focus on the cost savings. So, talking about the customer experience is so important, so crucial.
[00:21:28] So, what indicators should they be considering to judge whether an AI system is genuinely growing up, evolving, and ultimately contributing to the business? Because it's not just about saving a few dollars, is it? Yeah, I think there are a couple of steps here. The first is outcome. You've got to be able to measure accuracy. Did it get to the outcome I wanted people to get to, and what were those outcomes? Second, as I mentioned, is efficiency. How efficiently did it get to those outcomes?
[00:21:58] And by the way, you can continually tune that efficiency. The smarter you make an agent, the better: ideally, people want to log into an agent, ask one question, get an answer, and be done. That will never happen, but absolute zero is the goal. You want to make that conversation as efficient as possible. People don't have time to spend talking to your agent. So you really always want to check not only that you got them to the outcome, but can I get them there faster? If I got somebody to an outcome in seven turns and I got someone there in five, oh, can I get everyone there in five? That's the new standard.
[00:22:27] So, it's all about getting to the best possible, most efficient outcome. Those are the two things everybody should be thinking about. And then beyond that, it's just getting into the details of how to do that. Once I understand this is the outcome I want, and, by the way, these are my efficiency measurements, my efficiency metrics, then I look at the causes of those inefficiencies or efficiencies. The simple business logic I always use is: do more of what works, less of what doesn't.
[00:22:51] So, if you have patterns of behavior and patterns of conversations that got somebody to the outcome in five turns versus somebody who got there in nine, what was the difference in what happened in those conversations? And can I manage that agent to get down to five? I have the blueprint, and I have what doesn't work. So let's start working those two things together so that I can move the agent to a very efficient state. Then people can launch agents, and consumers know: every time I talk to that agent, I get exactly what I want and I don't waste time. That's a great brand in the new AI world.
[00:23:21] It's not just that I got what I wanted. It's that I got what I wanted and it was super efficient. That's the new branding measurement for AI digital businesses. I love that. And I always try and give people listening a valuable takeaway.
[00:23:33] So, for any executive listening who is currently feeling that pressure to scale AI quickly, what mindset shift do you think is needed on their part to move from, hey, experimental pilots, to thinking bigger, to dependable, outcome-driven AI systems that can be trusted out there in the real world, especially when we're starting to talk about accountability once they're out there too? Three steps. One is: do you understand the patterns of behavior of your digital product? Right? So, that's the first thing.
[00:24:02] Before you ever launch an agent, can you provide that agent with how your business operates today? That's step one. Most people just launch the agent and it's actually not even a toddler. It's an infant. Right? It's a brand new piece of software. Every interaction is brand new. Can you find a way to measure the patterns of your digital product and give that to the agent so that it launches as a teenager, for example? Because it has all the information of what's coming in, the consumers coming in, the patterns, segmentation dimensions that matter, so that the agent is prepared to talk to your consumer base. So, that's step one.
[00:24:31] And then steps two and three are what we talked about, which are the monitoring and guardrails. Can you set up the system so that you know what outcomes you want to get to? And then can you continuously, in real time, measure the patterns of the conversation so that you're able to continually tune every conversation and every agent interaction to a successful outcome in the most efficient way? If you can do those three things: understand the patterns of behavior from your consumers, load that into the agent before the launch, and then, post-launch, watch that conversation efficiency toward the 100% outcomes.
[00:25:00] And then you just manage your business like that. It actually will become very easy. It's not that hard. I think it's when people are launching partially blind, or they're running the wrong data set, they're measuring the wrong things, and then they're getting blistering online reviews because, you know, hey, my agent worked perfectly, all the lights were green, and why am I getting ripped by consumers? Right? That's the worst outcome of running the wrong systems and the wrong monitoring capabilities. Well, thank you so much for sitting down with me today and sharing your insights.
[00:25:27] And before I let you go, I'm going to ask you to leave one final gift for everyone listening. We have an Amazon wish list where I ask my guests to leave a book that they'd recommend for listeners to check out. We also have a Spotify playlist where people can check out guests' favorite songs and the stories behind them. I don't mind which you choose, and guilty pleasures are allowed in songs, of course, as well. But what would you like to leave everyone listening, and why? Well, I'll do the book, because I have two. Professionally, I have an oldie but a goodie.
[00:25:56] It's the book I always refer to, which is Good to Great. It was written so long ago, but it just stands the test of time. Anybody who's looking at launching a business, especially in this time of transition, right? AI is so disruptive. You're going back to startup days, where you have to understand what your core business is, because it's changing. So Good to Great is actually a book everybody should revisit, even if you work for a mature company, because of the ability to go back to your hedgehog concept, the ability to understand: wow, I was good to great before, but now everything resets. I've got to go good to great again. So I would advocate that.
[00:26:26] And then personally, a book that I love, that I thought was just so amazing on the human spirit, is a book called Shantaram. Shantaram. Yeah. One of the best books I've ever read. It's about a guy who goes to India and integrates himself into the slums there, and it's his real-life experience, by the way. Phenomenal book. And now it's a movie or TV show or whatever; they came out with more. But it's one of the best mind's-eye, can't-put-it-down books. Oh, wow. I'll get both of those added.
[00:26:56] And I will be checking that out as well. You've certainly hooked me in there. And for anyone listening who just wants to find out more information about you, your work, some of the big announcements, and what they can expect this year, where would you like to point everyone? Conviva.ai. Log in, and we have a ton of information there as we continue to push into this agent future. We're all going there together. That's where we tend to put all of our information, our white papers. And if anybody wants to reach out, they can contact us there, and I'm happy to help them.
[00:27:26] Awesome. Well, I'll add links to everything there to make it easy for people to find all the information they need. And I loved the analogy you used in our conversation of why today's AI deployment is like sending a toddler out to get a job: bursting with potential, but nowhere near ready for the real world. And, of course, this next evolution of AI, and why you believe it isn't about building smarter models, but gaining real-time visibility into agent behavior.
[00:27:53] And then you can monitor, train, and course correct these childlike systems into dependable, ROI-driving adults. Fantastic. So much gold in there. Just thank you for sharing it all with me today. Really appreciate your time. Absolutely. Thanks for having me. I appreciate it. So if AI is really growing up, then this conversation makes it clear that supervision matters just as much as intelligence.
[00:28:19] And Keith challenged the assumption that failed AI initiatives are purely a technology problem. I think today he reframed them as a management and visibility challenge. So we explored what effective oversight looks like in practice, how organizations can monitor and course correct AI without slowing innovation, and what real progress towards ROI looks like beyond simple cost savings. Time to think bigger.
[00:28:47] So you'll find links to Conviva and Keith's work in the show notes, as promised. But I'd love to hear your perspective. Do you see today's AI systems as ready for adult responsibility? Or do they still need stronger guidance before they can be trusted at scale in the real world? You know the drill. You can send me a written or voice message from the site, where you'll also find all 4,000 interviews, how you can work with me, and how you can sponsor the show. Or just send me a DM on LinkedIn.
[00:29:18] I'll leave that up to you. Lots to think about. And I'll return again tomorrow. Thanks for listening, guys. Bye for now.

