Closing The AI Trust Gap In Customer Experience With Cyara
Tech Talks Daily · April 06, 2026
33:30 · 30.66 MB


How many bad customer experiences does it take before someone walks away for good? In my conversation with Amitha Pulijala, we explore why the answer might be fewer than most businesses are prepared for, and what that means for anyone investing in AI-powered customer experience.

New research from Cyara reveals a stark reality. Twenty-eight percent of consumers will abandon a brand after just one poor interaction, and nearly half will do the same after only two or three. That leaves very little room for error at a time when more organizations are introducing AI into customer journeys, often at speed and at scale.

Amitha, who leads product strategy in the AI and CX space, brings a grounded perspective shaped by years of working with large enterprises and complex contact center environments. What stood out in our discussion is how the real challenge is no longer about whether AI can handle customer interactions. In many cases, it already can. The issue is whether customers trust it enough to let it try.

We unpack the growing perception gap: 73 percent of consumers still believe human agents resolve issues faster, even though AI systems can deliver near-instant responses. That disconnect often comes down to past experiences, from bots that fail to understand context to systems that trap users in frustrating loops with no clear way out. There is also a clear line that customers draw around where AI belongs.

Routine, high-volume tasks such as password resets or appointment confirmations are widely accepted. But when conversations shift toward financial security, healthcare, or legal advice, expectations change. People want human judgment involved and reassurance that the outcome is reliable.

What makes this conversation particularly relevant is the generational divide shaping expectations. Younger users are far more open to AI-led interactions, provided they work seamlessly. Older generations remain more cautious, often preferring the certainty of speaking with a human. That creates a design challenge for businesses trying to serve everyone without alienating anyone.

Throughout the episode, Amitha emphasizes that trust is built through experience, not intention. That means testing AI systems in real-world conditions, monitoring how they perform over time, and ensuring that when things do go wrong, the transition to a human feels smooth and informed rather than abrupt and frustrating.

This is not a conversation about replacing humans with machines. It is about understanding where AI can add speed and efficiency, where it should support human agents, and where it should step back entirely. The organizations getting this balance right are not the ones deploying AI the fastest, but the ones validating it most carefully before customers ever see it.

As businesses race to embed AI at every touchpoint, a bigger question emerges. Are we building systems that customers actually trust, or are we creating new points of friction that push them away?


[00:00:04] Customer experience used to be a fairly simple affair. You had a question, a problem, or a complaint, and hopefully a human being somewhere would pick up the phone, answer the chat, or maybe even reply to the email before your patience ran out.

[00:00:21] But then came interactive voice response, or IVR. We've all encountered them: you call a helpline and you have to push one for this, five for that, nine for the other, and then you end up back on hold. A frustrating experience. But now AI has entered the chat, sometimes helpfully, sometimes confidently, and sometimes with the digital equivalent of shrugging its shoulders while sending you around in circles.

[00:00:49] And that is why today's conversation matters. Because today I want to talk about that growing gap between what brands think AI is delivering in customer experience and what customers actually feel when they are on the receiving end of it. Because when one bad interaction can send people walking away for good, it's trust that stops being a nice to have and becomes the whole game.

[00:01:15] But enough scene setting from me. I'm going to introduce you to my guest now and we will discuss all this and much more. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? So Neil, thanks for having me. I'm Amitha Pulijala. I'm the Chief Product Officer at Cyara.

[00:01:37] Before Cyara, I spent several years building communications and CX platforms at companies like Ericsson, Vonage and Oracle, and working closely with enterprises running large contact centers. So here at Cyara, I'm responsible for the vision and strategy of Cyara's agentic CX assurance platform. Well, thank you so much for taking the time to sit down with me today. There's a lot I want to talk with you about.

[00:02:04] One of the reasons that I put you on my radar and set off my tech spidey senses was when I read that your latest research showed that 28% of consumers would abandon a brand after just one bad experience.

[00:02:18] And 48% after two or three. And it reminds me of an old saying, I think it was by an IBM chief well over a decade ago, that the last best experience we have anywhere becomes the standard for what we expect everywhere. But what did you take away from this research? Yeah. So for me, what it tells me is that there's almost no margin for error anymore.

[00:02:46] People are busy. When they get to a customer agent, they want that experience to be the best one they've had. So when I speak to customers, especially large enterprises rolling out AI into their contact centers, the biggest concern I hear isn't actually the technology itself. It's the trust. They're excited about the efficiency and scale AI can bring.

[00:03:16] But they're also very aware that if something goes wrong in front of customers, the impact is immediate. Right. So in many of my conversations with CX leaders, they would be like, you know, we are moving quickly with AI, but we are nervous about what happens if it fails.

[00:03:34] Right. And that's keeping them from deploying AI at scale, because when a customer gets stuck in a loop, and we've all been there, or if a customer receives the wrong answer or can't get escalated to a human, that one interaction, as you said, can be a really bad experience. And it can undo years of brand trust.

[00:04:00] Right. So what we are seeing is a shift in mindset with a lot of organizations. Right. Companies are realizing that introducing AI into customer interactions isn't just about deploying a new capability. It's about making sure those systems behave reliably in the real world. And that's where the idea of AI assurance comes in.

[00:04:26] You're testing these systems, you're validating them before they go live. You're continuously monitoring how they perform once customers are actually using them. And most of the organizations that have been, I would say, partially or fully successful, maybe I'll stick to partially successful, are the ones treating AI in CX the same way they treat any other critical system.

[00:04:55] Right. So if they have actually tested end to end, that's where I think there's more confidence in them being fully successful. But, you know, customers have been asking, how do we test it? How do we catch issues before customers do? All the right questions. And they just take care of the experience, you know, before it is rolled out to their end users.

[00:05:20] And that's how they can stop bad experiences and prevent AI trust from going down. And another big stat in the report is that 73% of consumers believe human agents will always resolve issues faster than AI. And that's a big stat there. And I suspect it is actually untrue. We both work in AI. We know the power of it out there.

[00:05:45] But I suspect many people outside the tech industry have had frustrating experiences when they've had a question to ask that's not on the standard script. It ends up taking longer, and they pull their hair out thinking, I just want to speak to somebody here. But is this truly a performance gap or a perception gap that brands have failed to address, do you think? Yeah, I actually think it's a bit of both.

[00:06:09] In many conversations I had with customers, they'll say their AI systems can technically handle a large percentage of inquiries that are coming from the users. But customers still prefer going straight to human. And you and I have done that probably in many conversations. And that tells me that the challenge isn't just the capability. It's also the confidence, right?

[00:06:36] You know, part of the perception gap, I think, comes from early experiences, right? And many customers have interacted with bots that didn't understand the context, that got stuck in loops or couldn't escalate when needed. So those experiences stick. So even as the technology improves, customers often assume it won't resolve their issues quickly.

[00:07:02] And at the same time, I think there are still real performance gaps in some deployments. What I hear from customers is that once the systems move from a controlled demo to real customer environments, unexpected things can happen. Right. So edge cases, you know, some ambiguous questions or conversations that go off script, as you said.

[00:07:28] So without the right, again, monitoring of those situations, you know, it can slow down the resolution instead of speeding it up. Right. So that's why a lot of discussions I'm having right now are around, again, back to assurance. Right. How do you validate these systems before they actually interact with customers? Right. How do you continuously check that they are performing the way it's expected once they're live?

[00:07:56] And, you know, ultimately, customers don't care, to be honest, whether the issue is solved by AI or a human. They care about how quickly and smoothly the problem gets resolved. The goal shouldn't be either replacing humans or going directly to humans. It should actually be making sure the technology handles what it's good at and brings a human into the loop when it's needed.

[00:08:25] So that's my take on it. Yeah. 100% with you. And going back to the report there, I think it was nearly half of customers say they want to escalate to a human immediately. 30 percent, a little more reasonably, only after a failed bot attempt, when they expect to be handed to a human. But I'm curious, from what you're seeing here, what does that seamless, trust-preserving handoff actually look like in practice?

[00:08:50] Yeah. So a seamless handoff actually really comes down to a few practical things. First, the system needs to recognize early when it's not helping. For example, when a customer repeats the same request again and again, or the conversation goes off track. These are the indicators that there might be something there that's not helping, right?

[00:09:17] And maybe that's the point where you would want to have a seamless handoff. The worst experiences happen when the bots keep trying to recover instead of passing the interaction along. And I've seen so many bots that do that. They just keep going in a loop. And that's a very bad customer experience. The second thing I wanted to say was that when the handoff actually happens, the context has to travel with the customer.

[00:09:46] And this is not new to AI. I've been in the customer experience space for a very long time, and we always talk about passing context along with the customer, right? Because one of the biggest frustrations customers tell companies about is having to repeat everything again. And even today, in this day and age, there are some IVRs and some bots where, when you get in, you tell the agent a lot.

[00:10:15] And when you get handed off, you have to repeat the same thing all over again, right? Again, this is not a new concept. The agent should immediately see the conversation history. You know, what has happened, where it struggled. And that turns the escalation from a restart into a continuation, right? And third, I would say the experience should feel intentional, right? Not like a failure. Some companies are starting to message this clearly, right?

[00:10:44] Let me connect you to a specialist who can help further, right? That's a simple framing that preserves, you know, the confidence in the brand. And again, we talk about, you know, trusting these escalation scenarios when I talk to my customers. And these escalation scenarios should be tested as much as the automation itself, right?

[00:11:06] It's not enough to test, you know, happy paths or golden paths, as customers call them, where the system answers correctly. You also have to test what happens if it doesn't, right? How quickly the issue is recognized, how smoothly the transfer of the conversation happens, and whether the agent has the right context.

[00:11:28] Because in practice, I think the handoff to the right agent with the right skill set to resolve the customer issue is actually one of the most important moments in the experience, right? When it's done well, you know, customers barely notice any transition or handoff. But when it's done really poorly, that's usually the interaction that customers will remember, unfortunately.
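
The three ingredients Amitha describes, detecting when the bot isn't helping, carrying context across the handoff, and framing the escalation intentionally, can be sketched roughly as follows. This is an illustrative Python sketch only; the class, the repeat threshold, and the intent labels are invented for the example and are not Cyara's API.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Running record of a bot session, so context can travel on handoff."""
    turns: list = field(default_factory=list)    # (speaker, text) pairs
    intents: list = field(default_factory=list)  # intent label per customer turn

    def add_turn(self, speaker, text, intent=None):
        self.turns.append((speaker, text))
        if speaker == "customer" and intent:
            self.intents.append(intent)

def should_escalate(convo, repeat_threshold=2):
    """Escalate when the customer keeps repeating the same request:
    a sign the bot is looping instead of helping."""
    if not convo.intents:
        return False
    last = convo.intents[-1]
    repeats = sum(1 for i in convo.intents if i == last)
    return repeats > repeat_threshold

def handoff_payload(convo):
    """Package the full history so the human agent continues the
    conversation rather than restarting it."""
    return {
        "history": convo.turns,
        "last_intent": convo.intents[-1] if convo.intents else None,
        # Intentional framing, per the episode: a specialist, not a failure.
        "message": "Let me connect you to a specialist who can help further.",
    }
```

The point of the sketch is the shape, not the heuristic: the escalation trigger fires early, and the payload carries everything the agent needs so the customer never repeats themselves.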

[00:11:54] And over the last few years, many people have likened the arrival of AI to the iPhone moment. But I always argue it feels more like the App Store moment, you know, that experimental phase when the first apps to hit the iPhone were "turn my phone into a glass of beer" or a chainsaw. And then when enterprises started adopting, everybody needed an app for absolutely everything. And right now we're seeing AI shoehorned into everything in a similar fashion.

[00:12:22] And the important message, I think, is you can't, or at least shouldn't, have AI for everything. There isn't a one-size-fits-all here. And your data seems to confirm this, with 65% of consumers saying, yes, they'll trust AI for some things, but they would never trust an AI bot with financial security issues. And over half feel exactly the same about health care or legal matters.

[00:12:46] So, again, I'm curious from what you're seeing, where should AI clearly lead in customer experience and where should it deliberately take a step back? No, you're right, Neil. So, definitely at Cyara, we work with many organizations in health care and financial services. So, these questions come up all the time, right? So, when we ask customers where they're deploying AI and where they're not, you know, we get different answers.

[00:13:14] And these industries, as you mentioned, deal with highly sensitive interactions. And, you know, customers are naturally very cautious about how automation is used. So, where automation tends to work really well is in high-volume, very well-defined interactions.

[00:13:36] Things like checking account balances or verifying appointment times, resetting passwords or answering any other common questions. I was working with a financial services provider recently, and they said they want to deploy AI for dispute resolution, right? Which is a bit further, but still a very valid use case, right? Again, in these cases, customers usually value speed and convenience.

[00:14:06] And any kind of AI automation there can resolve issues in seconds without the customer having to wait for an agent, right? But when you get into areas like financial security or medical guidance or legal issues, you know, the expectations change. The stakes are higher and the customers want to know there's human judgment involved.

[00:14:31] In many conversations I have had with our customers in banking and healthcare, they say that, you know, AI automation can help gather the information and guide that interaction. But the final decision or advice should still be with a human agent, right? And that's where a lot of discussions around, you know, the AI assurance and trust also come in.

[00:14:54] You know, we help customers think through where the automation should confidently lead, where it should support human agents and where it should step back entirely, right? Those boundaries are especially important in regulated industries. And again, I don't think most of these companies are trying to automate everything.

[00:15:17] They're being very, very thoughtful about where to put automation, where it adds speed and efficiency, and where human expertise actually builds trust, right? And that's the balance that makes customers feel comfortable engaging with these systems in the first place, right? So when we talk about Cyara, we talk about static journeys.

[00:15:41] These are very deterministic journeys, and the customers have very much control over what the journey is, when to hand off, and how the user should navigate through the customer journey. And we also talk about hybrid journeys, right? Hybrid journeys where an agent is taking over and there is agentic communication, but at some point the agent will also hand off to the human agent.

[00:16:09] The AI agent will hand off to the human agent. So we have all these discussions, but certainly I would say there is a place for every kind of conversation in these industries. And I do see big differences out there at the moment in how people, let's say, retrieve information. I see the way my mum, for example, will stick to traditional Google searches, whereas my son will increasingly turn to AI for any information he's looking for.

[00:16:39] And that generational divide is also striking in your report, with, I think, 56% of Gen Z open to AI, as long as it resolves issues seamlessly. But when you put that against baby boomers, I think the figure shrinks to 26%. So how should brands design AI experiences for multi-generational audiences without alienating key segments? And what is the best way of doing that? Because everybody's different.

[00:17:09] Everyone has different uses for technology now. Yeah, yeah. And that's interesting, isn't it? The key thing I tell customers is that, you know, the experience shouldn't be designed around technology. And we have seen that over and over again, right? So it's not about technology. It should be designed around the customer choice. The most successful companies aren't forcing everyone down the same automated path, right? They are giving customers options.

[00:17:36] You know, if you want a quick resolution, that's available. But if you prefer to speak to a person or choose a different path, that should be available as well. And, you know, another important piece is also making the experience predictable. One of the concerns that I hear from customers, especially when they're rolling out these systems for the first time, is that people feel frustrated when they don't understand what the system can or can't do.

[00:18:06] So when you are actually talking to the bot, you're like, you know, can it really find me that refund I'm asking for? Or will it hand off to an agent? So that's really very confusing to me as a user when I'm talking to a bot, right? So clear prompts, easy escalations to a human, and smooth handoffs, as we discussed just now, become really important for maintaining that trust across different groups, right?

[00:18:34] And this is also, I think, one of the interesting things that we think about when we are thinking about the AI assurance, right? So when companies test these experiences with real-world scenarios and different types of user personas, they start to see where the friction happens, right? So sometimes it's not the technology itself, it's the design of the interaction, the wording or, you know, how quickly the system offers an alternate path.

[00:19:04] I think that's essentially what a company should put some attention to. And ultimately, the goal isn't to build one experience for Gen Z and another for older generations, right? It's to build a system that adapts to different preferences, resolves quickly when it can,

[00:19:29] and makes it easy to reach a human when that's what the customer wants. Yeah, again, 100% agree with you. And another big stat in the report I've got to bring up here: 61% of consumers said that AI failures are more frustrating than human failures. Because you would think that a failure is a failure. So why is it customers give humans grace but hold machines to a much higher standard?

[00:19:57] Yeah, so there is a reason why voice as a channel has remained the top channel for years, right? When a human agent makes a mistake, customers can usually sense the intent to help. So there's room for empathy. And when you hear a person acknowledge the issue, apologize, and try to fix it, that interaction itself helps rebuild trust.

[00:20:26] You know, when machines come in, they are judged differently, right? So customers assume that, you know, if the company has invested in AI automation, you know, the system should be accurate and consistent. You know, when it gives a wrong answer or misunderstands the context or even sends the customers in loops and circles,

[00:20:49] you know, the frustration builds very quickly because people feel the system should have known better. Another factor that comes up very frequently in discussions with customers is the regulated industries we talked about, financial services and healthcare, right? And they are responsible for the information being provided.

[00:21:14] And when, you know, when bots answer questions, if there are any facts that are incorrect, then it becomes, you know, really frustrating for the customers, right? So in these situations, actually, the fact checking, the bias detection, and the validation of responses also becomes much more important. Because it's not just about giving an answer.

[00:21:40] It's about giving the right kind of answer, the right factual answer without any bias and completely validating it. And that's what, you know, builds the trust. And this is exactly where I think, you know, most of my discussions are focused. Companies are certainly realizing that, you know, deploying these systems isn't just about launching them.

[00:22:03] Because I've heard, you know, very recently I heard a few customers say that, you know, their work is not done when they launch them. It's about constant observability, constant, you know, monitoring of the system and making sure they are performing well, right? It's about putting guardrails around them.

[00:22:20] And that means validating the responses for any potential bias, as I said, and making sure the information they're providing is grounded in trusted sources. That's really, really important. And, you know, in practice, I would say building trust with AI systems requires more deliberate design.

[00:22:42] You know, the system needs to recognize when it may be wrong, you know, provide reliable answers and know when to bring a human in the conversation. Again, we have talked about human in the loop, and this is going back to emphasizing that point that AI agents can provide really factual information. But, you know, sometimes recognizing where they might go wrong is also important.

[00:23:05] And with all these safeguards in place, I think customers can become much more comfortable engaging with the technology, I think. Yeah, completely agree. And I'm curious, from your perspective, when you see examples of AI bots getting things wrong, is the real issue model capability, poor validation or weak CX orchestration or something else? What are the most common reasons there that you see for AI getting it wrong? Yeah, I think that's a great question, Neil.

[00:23:35] And this is what I think I hear all day long from my customers, right? From what I see, it's rarely just one thing, right? Most of the time, it's a combination of how systems are designed, validated, and even orchestrated across the entire customer journey. And we all know that the customer journey is not just about one component.

[00:23:57] It's about, you know, especially in the contact centers, you know, there are multiple systems, multiple integrations, and the journey spans across all those systems, right? The models themselves, as you said, have improved a lot. In many deployments, the core capability is actually quite strong. But when a customer says, you know, that bot did not understand me, it's often not just about, you know, even the utterance or language understanding.

[00:24:26] It can be how the system understands or handles the context, how it responds when the conversation goes off the escalation path, or how the interaction moves across different steps in the journey. One thing I talk about a lot with customers is the end-to-end journey assurance, right? It's not enough to test whether the bot answers a single question correctly.

[00:24:54] You know, you have to test the full experience. What happens when the customer asks something unexpected, or when they switch topics, or when the system needs to authenticate them, or when the interaction needs to move to a human agent? Many of the failures customers notice happen not within a single component but in the transitions from one component to another, right?

[00:25:23] And voice interactions, and this is where Cyara has been in the voice space for a very, very long time, are not always easy, right? They introduce another layer of complexity. In voice systems, timing matters a lot. If the bot interrupts the customer mid-sentence, or if there is noticeable lag before it responds, the experience immediately feels broken.

[00:25:51] You know, even if the answer itself is correct, that interruption or delay makes customers feel like the system isn't really listening to them. And that's why in a lot of discussions I'm having around AI assurance, the focus is expanding beyond just model performance. Companies are looking at the entire experience, from where the customer starts,

[00:26:16] and how, you know, the system behaves in the real conversations, whether the system listens properly, responds quickly, you know, handles edge cases, and the transitions are smooth or not, whether it needs help from a human. So all these are very important. And from the customer's perspective, again, it's simple, right? They just want to be understood, get their issue resolved without friction. Everything behind the scenes, right?

[00:26:45] Whether it's model or orchestration or routing, it has to come together to make it happen. And you're in somewhat of a unique situation here, with great visibility working with brands like Salesforce, ADP, Amazon, and so many others. And I'm curious, if you were to put all these experiences into one melting pot, what is the single most overlooked step that organizations must take to close that AI and CX trust gap before it starts costing them customers?

[00:27:14] I think it'd be incredibly valuable to leave everyone listening with a takeaway here. So what do they get wrong and what can they do to reduce that gap? It's very simple, actually. The most overlooked step, honestly, is testing the systems the way real customers will actually use them, right, before they go live. You know, it seems obvious, but I think that's still where a lot of friction is, right?

[00:27:41] In many conversations that I have with the customers and the visibility that I have, you know, teams spend a lot of time selecting models, right? So they do model analysis, they select models, they build integrations, they launch new capabilities. But what sometimes gets less attention is validating the system and how the system behaves across real customer scenarios, right?

[00:28:08] Different ways people phrase questions, unexpected follow-ups or moments where the conversation goes off script, right? What we often tell customers is that AI in customer experience shouldn't be tested like a simple feature release. It's a full customer-facing system that needs end-to-end assurance. It's not enough to test a single component like a bot or a model in isolation.

[00:28:35] You have to test the entire experience, the integrations with backend systems, the authentication flows, routing, escalation to agents, how the conversation moves across different parts of the journey. All these have to be tested. A lot of CX issues, you know, don't really happen in the individual component. You know, they happen, as I said, in transition between systems. You know, maybe the bot understands the request, but the integration fails.

[00:29:01] Or the context does not transfer correctly to the agent, yeah? So these are the gaps that customers would feel, right? So I would say organizations are slowly getting this right. They are investing in true end-to-end testing of the entire journey before customers ever see it.
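
The end-to-end journey testing Amitha keeps returning to, scripted conversations that cross authentication, topic switches, and escalation rather than testing one component, can be illustrated with a tiny harness. This is a toy sketch, not Cyara's platform; the bot, its state keys, and the script are all invented for the example.

```python
def run_journey(bot, script):
    """Drive a bot through a scripted conversation, recording each reply.

    `bot` is any callable mapping (state, utterance) -> (state, reply);
    the harness threads state through every turn so a test can assert
    that context survives transitions, not just that one answer is right.
    """
    state, transcript = {}, []
    for utterance in script:
        state, reply = bot(state, utterance)
        transcript.append((utterance, reply))
    return state, transcript

def toy_bot(state, utterance):
    """Invented bot used only to show the shape of an end-to-end check."""
    if "my pin is" in utterance:
        state["authenticated"] = True
        return state, "Thanks, you're verified."
    if "balance" in utterance:
        if not state.get("authenticated", False):
            return state, "Please verify your identity first."
        return state, "Your balance is $42."
    # Off-script input: escalate with context instead of looping.
    state["escalated"] = True
    return state, "Let me connect you to a specialist."
```

A journey test would then script the unhappy path on purpose: ask for the balance before authenticating, authenticate, ask again, then go off script, and assert that the authentication gate, the resolved answer, and the escalation all behave as expected.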

[00:29:23] Because ultimately, successful customer experience isn't really about whether one piece works. It's about whether everything works together seamlessly for the customer, and whether you are earning the customer's trust in using those automations. So I think that's where the ultimate success is. I think that's a powerful moment to end on.

[00:29:49] But before I let you go, what is the best place for anybody listening to find you and your team online and explore anything we talked about today? I will include a link to the research we mentioned. But anywhere else you'd like me to point? Yeah, the best place, I would say, is to start at our website, cyara.com, where we share a lot of research, including some of the metrics you have been mentioning.

[00:30:15] So those are available for download at our website as well, along with the research and insights around customer experience and assurance that we talk about every day. I'm also fairly active on LinkedIn. So that's usually the easiest way to find me and follow the conversations we are having around AI and customer experience.

[00:30:40] And also find out how enterprises are thinking about reliability and trust as they deploy these systems. And if anyone listening is working on customer experience platforms, especially around voice bots or chat bots or customer journeys, and thinking about how to validate and monitor them, our team works a lot with organizations on exactly those kinds of challenges.

[00:31:08] We spend a lot of time talking to our customers about how to test these systems end to end and make sure they perform reliably before customers encounter any issues. So I would say LinkedIn or cyara.com are probably the best places to connect and continue the conversation.

[00:31:33] Awesome. Well, I will add a link to the website, the company LinkedIn page and your LinkedIn page and the research. I urge people to check that out. So many big reveals there around trust boundaries, perception problems, generational divides really is a great read. And I urge people to check that out. But more than anything, just thank you for shining a light on this and also leaving everyone listening with so many valuable takeaways. Really appreciate your time today. Thank you. Thank you, Neil. Appreciate it.

[00:32:00] One of the things that stood out to me in this conversation today is the issue is not whether AI belongs in customer experience. It clearly does. The real question is where does it work best and where does it need careful oversight? And where does a human still make all the difference? Because most customers, they don't care where the answer comes from, whether it is a bot, a person or even a small army of trained pigeons.

[00:32:26] All they actually care about is the issue gets solved quickly, clearly and without making them repeat themselves six, seven times. So if this episode got you thinking about the balance between automation and trust, I'll be sharing links to all the research, everything we talked about in the show notes. And as always, let me know where you stand on this.

[00:32:47] When it comes to customer experience, how much AI is helpful and when does it simply become one frustration too far? A bit of a balancing act here, but I'd love to hear your experiences. And if you head over to techtalksnetwork.com, you can send me an audio message there. Feel free to rant away. We'll solve all those first world problems one at a time. Send them over and I will send you a reply straight back. But that's it for today. So thank you for listening as always.

[00:33:17] And I'll speak with you again tomorrow. Bye for now. Bye for now.