How do you move AI from a flashy demo on a conference stage to something that can handle real customer pressure on a Monday morning when the tickets are piling up?
In this episode of AI At Work, I sit down with Niraj Ranjan Rout, Founder and CEO of Hiver, to unpack what it really takes to build AI that works inside high-volume support environments. With more than 10,000 teams using Hiver, including brands like Flexport, Capital One, and Epic Games, Niraj has had a front-row seat to both the promise and the pitfalls of AI in customer service.

We talk about the difference between “slapping a chatbot” onto an existing problem and rethinking the entire support workflow. Niraj makes a compelling case that AI should function as infrastructure, embedded across triage, routing, drafting, summarization, quality assurance, and insights. Rather than replacing agents, the goal is to remove the repetitive, manual work that drains time and energy, so humans can focus on solving real problems and understanding how customers actually feel.
Our conversation also gets into the uncomfortable but necessary topics many leaders underestimate. Data hygiene. Governance. The reality that 98 percent accuracy is sometimes still not good enough. Niraj shares why clear handoff protocols between humans and AI are essential, and how organizations can avoid measuring ROI through surface metrics like deflection rates alone. Instead, we explore more nuanced signals, from sentiment shifts to long-term customer outcomes and team productivity.
We also discuss Hiver’s own journey from an email collaboration tool to an AI-native customer service platform. Niraj is candid about the noise in the market, from overblown promises to doomsday narratives, and how founders must stay close to customers while remaining hands-on with emerging models and agentic capabilities. Culture, he argues, is as important as code. Customer stories need to flow directly into product and engineering teams if AI investments are going to remain grounded in reality.
And yes, we even end on a musical note, with a nod to Jimi Hendrix and a reminder that creativity, whether in music or software, still comes down to craft and feel.
So here’s the question I’ll leave you with. As AI becomes embedded into every workflow, are you treating it as a shiny add-on, or are you redesigning your foundations so it can truly perform under pressure?
[00:00:06] - [Speaker 0]
What does AI look like when it stops being a conference demo and starts taking real customer pressure at scale? Well, on today's episode of AI at Work, my guest is the founder and CEO of Hiver. Now Hiver is used by more than 10,000 teams, including brands like Flexport, Capital One, and Epic Games, to name but a few. And my guest has got a very clear point of view on why customer service AI has disappointed so many teams over the last couple of years, and why it doesn't have to be that way.
[00:00:42] - [Speaker 0]
And in our conversation today, we will talk about why simply slapping a chatbot onto a broken support process rarely works, and I suspect that we've all been on the other side of that numerous times. And, also, what changes when you actually do something different and treat AI as part of the operational fabric? We will get into how AI can support triage, routing, context gathering, drafting, summarization, quality checks, and insight generation. What I'm trying to say is the opportunities are endless, and the real opportunity is reducing the busy work that keeps your human agents from actually helping customers. But enough from me.
[00:01:25] - [Speaker 0]
Let me officially introduce you to my guest right now. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do?
[00:01:37] - [Speaker 1]
Hey, Neil. I'm Niraj. I'm the founder and CEO of Hiver. We are an AI-powered customer service solution, very popular, used by more than 10,000 teams, including the ones at companies like Flexport, Capital One, Epic Games, and many, many more. Our approach to AI is fairly unique in the customer service space in that we believe that, as a customer service solution, we need to do a lot more than just, you know, deploy a chatbot and expect it to handle queries.
[00:02:07] - [Speaker 1]
The way we look at AI is that we meticulously work on all parts of the support process and make AI a core part of them, so that it really elevates the day-to-day job of a customer support agent and really helps them provide a great experience to their customers.
[00:02:23] - [Speaker 0]
Well, it's a pleasure to have you join me today. So much I wanna talk with you about, because I think right now there is a clear difference between AI that shines in a product demo on a stage at a tech conference and AI that performs under real customer support pressure. There's a big difference there. I've seen what happens on both sides of that fence. But what have you learned about building systems that actually survive high-volume, high-stakes environments, rather than just those shiny demos that often distract people?
[00:02:54] - [Speaker 1]
Yes. So the general approach to AI in customer service right now, or maybe in the last two years, which has led to a lot of frustration, is one of two things. Either you take a chatbot and slap it onto an existing high-volume problem, or you take AI and try to solve a very narrow part of the entire customer service problem.
[00:03:14] - [Speaker 1]
Now neither of them works, because the first is inadequate in the way it handles so many support scenarios. The second is inadequate because it does not bring the full benefit of AI to every part of the support process. So what is really needed is a good understanding of what AI can actually do in real-world support scenarios.
[00:03:35] - [Speaker 1]
And then take AI and deploy it in a manner that really elevates the experience for both support agents and customers. So you have to think of the entire experience, right from the time someone reaches out for help: how that query either gets answered by AI or escalated to a real person, how information changes hands within teams, how the team looks for information they might need to refer to when responding to a query. Do you really understand how the customer is feeling when your team responds to a query? And then, ultimately, can you take all of this and use it to learn and improve your system?
[00:04:12] - [Speaker 1]
And we believe that AI has a role to play in all of this. And until you use AI in a manner that adds value to this entire process, it's going to be pretty inadequate and probably fail in many real-world scenarios.
[00:04:23] - [Speaker 0]
And one of the things that stood out to me about what you do and what you're passionate about is you always talk about AI as infrastructure rather than an add-on or a shiny tool. So what does it mean to embed AI across triage, routing, drafting, summarization, quality assurance, and insights, instead of just layering another chatbot on top, which we've both probably seen out there?
[00:04:49] - [Speaker 1]
Absolutely. So, you know, the way you can actually deploy AI to make it really useful is, as I said, to think of exactly how it can add value to everything that was done in a routine, boring, tiring manner by humans. For example, once a query comes in, how do you figure out exactly who is the right person to handle that query?
[00:05:10] - [Speaker 1]
Right? Now that might rely on a lot of tribal knowledge in your team, which might be inadequate in many, many scenarios. When a query comes in, how do you categorize it to make sure that it gets the right attention, whether it's a billing query, a refund-related query, or a technical question?
[00:05:25] - [Speaker 1]
Now all of this requires manual work, which is stuff that you do not want people to actually spend time on. Right? Once you've categorized it and sent it to the right person, let's say you get a query where a customer is referring to an order. Now information relating to that order might be sitting in your ERP or your CRM.
[00:05:44] - [Speaker 1]
Now your agent actually goes to your ERP or CRM, finds that information, and refers to it. All of this is stuff that AI can make a lot easier. AI can actually sit between all of these processes and make it easier for your team to delegate queries, understand what is going on, and stitch together information from various disparate systems.
[00:06:03] - [Speaker 1]
Right? Understand the sentiment of the customer. Understand when the customer is not very happy about what is going on. AI can help you pass information in context within your team. For example, most complex support scenarios require multiple teams to work together.
[00:06:22] - [Speaker 1]
You have a frontline customer support team, and you have a team of people who support them to actually solve product problems. Right? Now every time information changes hands, there is pain, because you basically have to have someone explain something to someone else and then have that party explain things back to you. All of this is stuff that AI can make a lot easier, which leads to your customer support agents actually spending time on helping customers and solving their problems, not jumping through hoops to get really routine stuff done.
[00:06:53] - [Speaker 1]
And we believe that that is the way to actually use AI in customer service, rather than, you know, just thoughtlessly trying to solve everything with a chatbot and force-fitting AI into things it cannot really do very well.
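The triage flow Niraj describes here, categorize the query, route it to the right person, and flag unhappy customers, can be sketched in a few lines. This is a minimal illustration, not Hiver's implementation: the category labels, the `ROUTES` table, and the keyword heuristics are assumptions standing in for what would be a model-based classifier in production.

```python
# Illustrative sketch of AI-assisted triage: categorize an incoming query,
# route it to the right team, and flag negative sentiment for escalation.
# In practice the classifier would be an LLM or trained model; a keyword
# heuristic stands in here so the flow is runnable end to end.

ROUTES = {  # hypothetical category -> team mapping
    "billing": "finance-team",
    "refund": "finance-team",
    "technical": "engineering-support",
    "general": "frontline-support",
}

NEGATIVE_MARKERS = {"frustrated", "angry", "unacceptable", "cancel"}

def classify_query(text: str) -> str:
    """Assign a coarse category (stand-in for a model-based classifier)."""
    lowered = text.lower()
    if "refund" in lowered:
        return "refund"
    if any(word in lowered for word in ("invoice", "charge", "billing")):
        return "billing"
    if any(word in lowered for word in ("error", "bug", "crash", "api")):
        return "technical"
    return "general"

def triage(text: str) -> dict:
    """Produce a routing decision with a sentiment flag for human review."""
    category = classify_query(text)
    return {
        "category": category,
        "assignee": ROUTES[category],
        "escalate": any(word in text.lower() for word in NEGATIVE_MARKERS),
    }

decision = triage("I was double charged on my invoice and I am frustrated.")
print(decision)
```

The point of the sketch is the shape of the pipeline, not the classifier itself: every manual step Niraj lists (categorize, assign, gauge sentiment) becomes a function the rest of the workflow can call.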
[00:07:07] - [Speaker 0]
And I think many software companies are experimenting with AI, but they often struggle to operationalize it. It's something I'm hearing more and more about. So from your perspective, what are the foundational requirements, around things like data hygiene, governance, and, of course, human oversight, that leaders often underestimate? What are you seeing here?
[00:07:30] - [Speaker 1]
Yeah. So there's a lot here. First of all, the fundamental thing is that you have to understand what AI is really capable of, and whether the level of accuracy and reliability you can get from AI in a support process is aligned with what that process requires. You can probably get 98 or 99% accuracy with something when you do it with AI, but that 98 or 99% might just not be enough in many, many support processes.
[00:07:56] - [Speaker 1]
Right? Or many other processes. In accounting and finance, there are tons of processes where 98 or 99% is not enough. But it might be perfectly adequate in many others.
[00:08:05] - [Speaker 1]
So the first thing is to really understand where AI can make a real, significant difference, by aligning its capabilities with what is truly required of that process or job. Right? And then you have a couple of things. AI does not really do very well when you don't have good-quality structured or semi-structured data available.
[00:08:27] - [Speaker 1]
So if you are running a customer support process, and your knowledge, which is your knowledge around how to handle customer support queries, your product documentation, your support processes, if all of that is not in a structured, up-to-date state, it might be extremely difficult for you to apply AI to that process. And that applies not just to customer support, but to any process you want to apply AI to. And the third thing here is governance. You have to make it very clear what AI is for, where the boundaries between human and AI lie, exactly when one party hands off to the other, and how one party works with the other.
[00:09:11] - [Speaker 1]
Right? How AI takes assistance from humans and how humans take assistance from AI. Until you have a huge amount of clarity on these things, it might be very difficult to deploy AI.
[00:09:23] - [Speaker 1]
So to sum up: you need to make sure that AI is actually going to do a good job in the process you're trying to deploy it to. The second thing is that your knowledge has to be in a state where AI can be deployed effectively, or you need to get your knowledge to that state as soon as possible. And the third is that the governance, the boundaries, and the protocol for handoff need to be very, very clear. Once you have these three things, you can actually start deploying AI effectively.
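The handoff protocol Niraj summarizes, clear boundaries and a clear point where AI hands off to a human, is often implemented as a per-process confidence threshold. A minimal sketch under stated assumptions: the threshold values are illustrative, and `draft_answer` is a hypothetical stand-in for a real model call.

```python
# Illustrative sketch of a human/AI handoff protocol: the AI answers only
# when its confidence clears a threshold set for that process, and
# otherwise hands off to a human along with the context it gathered.
# Thresholds and the draft_answer helper are illustrative assumptions.

THRESHOLDS = {
    # Stricter processes demand higher confidence; as Niraj notes,
    # 98% accuracy can still be inadequate for something like billing.
    "billing": 0.99,
    "general": 0.90,
}

def draft_answer(query: str) -> tuple[str, float]:
    """Stand-in for a model call returning (answer, confidence)."""
    if "password" in query.lower():
        return "You can reset your password from the login page.", 0.97
    return "", 0.20

def handle(query: str, process: str) -> dict:
    answer, confidence = draft_answer(query)
    if confidence >= THRESHOLDS[process]:
        return {"handled_by": "ai", "answer": answer}
    # Handoff: the human receives the draft and the confidence score,
    # not a blank slate, so no context is lost at the boundary.
    return {"handled_by": "human", "ai_draft": answer, "confidence": confidence}

print(handle("How do I reset my password?", "general"))
```

The design point is that the boundary is explicit and tunable per process, which is exactly the kind of governance clarity being described.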
[00:09:54] - [Speaker 1]
And the good thing is that you can actually use AI to make these things better. For example, if your knowledge is not in a good state, you can use AI to take it to a good state.
[00:10:03] - [Speaker 1]
So the beautiful thing about AI is that it's very flexible. And if you work with it smartly, it can actually help you deploy AI effectively.
[00:10:13] - [Speaker 0]
And ROI is something that remains a big sticking point for many AI investments and big, expensive projects. So how should executives be evaluating whether AI in customer service is genuinely improving things like resolution time, customer satisfaction, and team productivity? What's a better way of measuring the ROI and the impact this technology can deliver?
[00:10:41] - [Speaker 1]
Yeah. So, you know, when it comes to ROI, I think we are all still learning. And I'm not just talking about customer service, but pretty much every function or department where you're trying to deploy AI. For example, AI has a huge use case in coding and software development.
[00:11:01] - [Speaker 1]
Right? Now, measuring ROI in software development simply in terms of the number of lines of code that AI wrote is not going to lead you anywhere, because AI can write a lot of code that does not do what you expect it to do.
[00:11:17] - [Speaker 1]
So you need to take a more nuanced approach to how you measure ROI from AI in all of these scenarios. Similarly, if you come to the customer service space and try to measure the ROI of AI simply by counting the number of queries that were deflected and did not reach humans, that might be extremely surface-level. Or if you simply look at response times to understand the impact, that might be extremely surface-level too. So just as the deployment of AI in customer service and so many other functions requires a very nuanced approach, I believe the measurement of ROI in these functions requires an equally nuanced approach.
[00:12:00] - [Speaker 1]
Right? And you need to go beyond these extremely surface-level metrics, like deflection rates or response time or resolution time, to actually seeing how AI is making the day-to-day job of a customer service agent better. Is it allowing them the time and the flexibility to actually have empathy, understand customers, understand exactly what they need, and provide better service?
[00:12:27] - [Speaker 1]
And you can actually measure that. It's not a subjective thing. You can objectively measure it with metrics like: how much time does it take to understand a query and get it to a person who can solve it? Can you reduce the number of interactions between your support team and the customer,
[00:12:43] - [Speaker 1]
because you are asking for information in one go and not in multiple pieces? Can you predict what is going to happen next? Can AI help you understand when a customer's sentiment might be moving south?
[00:12:55] - [Speaker 1]
Can AI help you understand if a customer is going to churn? So you need a much more nuanced approach to measuring AI ROI, rather than looking at something extremely surface-level like deflection rate or response time. And that applies not just to customer support, but pretty much everywhere you're trying to deploy AI.
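The nuanced signals Niraj lists, time to route a query, number of back-and-forth interactions, and sentiment shift, can all be computed directly from ticket logs. A minimal sketch with an assumed, purely illustrative ticket schema (the field names are not any real product's data model):

```python
# Illustrative sketch: ROI signals beyond deflection rate, aggregated from
# simple ticket records. The ticket schema is an assumption for illustration.

from statistics import mean

tickets = [  # toy ticket log: routing time, message exchanges, sentiment before/after
    {"minutes_to_route": 2, "exchanges": 3, "sentiment_start": -0.4, "sentiment_end": 0.5},
    {"minutes_to_route": 1, "exchanges": 2, "sentiment_start": 0.1, "sentiment_end": 0.3},
    {"minutes_to_route": 4, "exchanges": 5, "sentiment_start": -0.6, "sentiment_end": -0.2},
]

def roi_signals(log):
    """Aggregate nuanced ROI signals: routing speed, back-and-forth, sentiment shift."""
    return {
        "avg_minutes_to_route": mean(t["minutes_to_route"] for t in log),
        "avg_exchanges": mean(t["exchanges"] for t in log),
        "avg_sentiment_shift": mean(t["sentiment_end"] - t["sentiment_start"] for t in log),
    }

signals = roi_signals(tickets)
print(signals)
```

Comparing these aggregates before and after an AI rollout gives a richer picture than a single deflection-rate number: a positive average sentiment shift and fewer exchanges per ticket are closer to the outcomes being described here.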
[00:13:14] - [Speaker 0]
And looking at Hiver, you began as an email collaboration platform and are now repositioning around AI-native customer service. So I've got to ask: what were the hardest decisions in evolving the product without compromising simplicity and fast time to value?
[00:13:35] - [Speaker 1]
Yeah. So, Neil, as a founder right now, the hardest job, and probably the most important one, is to just shut off the noise. Right?
[00:13:42] - [Speaker 0]
Yeah.
[00:13:43] - [Speaker 1]
There is so much noise about the capability of AI, from scenarios like AI might just do everything tomorrow, leaving no room for software like ours to do anything, to doomsday scenarios like it's all going to fail and AI is not really ready for prime time. So you have extremely divergent scenarios and narratives. And as a founder, you need to really focus on what AI is really capable of and whether that aligns with the problems you're trying to solve.
[00:14:12] - [Speaker 1]
Right? So I think the most important job here has been to balance a good understanding of what AI is capable of with what the support process needs. And with that, I think we have been able to do a fairly decent job, going from a company that had very little AI roughly three years back to essentially rearchitecting everything and putting AI as a core layer into everything we do. It has been an extremely deliberate, slow process, but a very rewarding one, because I believe the way we have deployed AI into the customer support process is truly impactful and really gets the job done.
[00:14:57] - [Speaker 0]
And another phrase we hear a lot, and I'll put this in quotation marks, is "AI native." When we talk about AI native, what does that actually look like, architecturally and culturally, inside a company building a product, would you say?
[00:15:11] - [Speaker 1]
Yeah. So, look, AI native is an interesting term. It basically came about in the last two and a half to three years. No one had heard of it three years back.
[00:15:18] - [Speaker 1]
And the dictionary meaning of AI native, for me, would be a company, a team, or a process that uses AI as a core architectural piece, instead of layering AI as an afterthought on something that already exists. Now, that is very applicable to a lot of companies that came up in the last two and a half to three years, in coding, customer support, etcetera, that did not exist in the pre-AI-boom era. When all of these capabilities became evident, they built new solutions from the ground up to use AI to solve problems.
[00:15:54] - [Speaker 1]
Right? But it is also applicable to so many companies like ourselves, and a lot of software majors too, which already had existing tools, saw this opportunity to take a fresh look at problems with AI, and then went ahead and very quickly rearchitected their solutions to make AI a core part and a very important layer of everything they do. And that is where we at Hiver also fall.
[00:16:21] - [Speaker 1]
We had a solution. As you said, we started as an email management solution, and then we essentially rearchitected everything in the last two and a half years or so to make AI a core layer. So we are all AI-native companies now. But then culturally, a few things are very important. Of course, you need to have a really good understanding of what AI is truly capable of.
[00:16:40] - [Speaker 1]
You need to keep up to date with new developments and new models that are coming in. But you also really need to stay very close to your customers, because it's so possible to be carried away by the promise of AI, which is extremely glamorous, and end up solving fictional problems that really do not exist.
[00:17:00] - [Speaker 1]
So you really need to balance this understanding and excitement about what is possible with AI against the real problems it can truly solve. And what that leads to is a role you see in a lot of these AI-native companies called forward-deployed engineers: engineers who understand AI very, very well but work very, very closely with customers to really deploy AI fruitfully. So I would say that for an AI-native company to be successful, you need to combine this fundamental understanding of AI with very good customer understanding. That is very much needed.
[00:17:36] - [Speaker 1]
And there are some other characteristics too. You'll find that most AI-native companies are generally extremely fast moving. They iterate fast. They move fast. They release fast, and they fail fast, primarily because they use AI as a core building block in how they learn and how they build things.
[00:17:51] - [Speaker 1]
Right? So yeah, I think an AI-native company is culturally and operationally very different from how companies ran in the pre-AI era. And I think all companies, whether they are AI native right now or not, will need to move there over the next few years.
[00:18:09] - [Speaker 0]
And as a founder that has operated through multiple technology cycles: in the last few years alone, we've seen the shift from generative AI to agentic AI, to custom-built agents this year. So how do you balance that long-term conviction with the need to adapt quickly in this AI-driven market where we find ourselves? Because it moves pretty fast, doesn't it?
[00:18:29] - [Speaker 1]
Yeah. So, you know, I'd just like to reiterate what I said when you asked me the last question, which is that you need to balance two things. You need to really stay up to date. You have to be hands-on, you know, for a technical founder like me.
[00:18:42] - [Speaker 1]
I mean, it's my job to stay extremely hands on with everything that is coming out. Right? Try everything that comes out. There is a new agent. There is a new model.
[00:18:51] - [Speaker 1]
I should be on top of that. But then I need to balance that with being very, very close to our customers. If we spot a problem that is now solvable but was not one month back, we need to ship a solution to that in the next fifteen days.
[00:19:06] - [Speaker 1]
And that requires me to understand what's possible, understand the needs, and have a team that moves as fast as the market moves and keeps shipping very, very quickly. So it's a very different scenario compared to how companies used to run three or four years back, and I think we're all learning and adapting.
[00:19:28] - [Speaker 0]
Yeah. Completely agree. And, of course, AI, and in fact any emerging technology, is only one part of what we're talking about here. So in a space full of hype, rapid feature releases, and big promises, how do you build a culture that prioritizes durability, responsibility, and customer empathy over chasing the latest trend? Because that culture in an organization is possibly as important, if not more so, than the tech itself sometimes.
[00:19:57] - [Speaker 0]
Right?
[00:19:58] - [Speaker 1]
Absolutely. So one thing that I have found very useful is to funnel a lot of customer stories into company channels. Our company channels are buzzing as much with new stuff that we are building as they are with how customers are using our product.
[00:20:16] - [Speaker 1]
How they're using new capabilities, whether there is something frustrating them, what they like, the improvements they need. And you need to funnel all of that into your internal communication unfiltered, for people to actually get a deep sense of how customers are responding to and feeling about what you're releasing. So if you release something and customers take to it and really make good use of it, your team needs to know.
[00:20:43] - [Speaker 1]
If they hate it, again, the team needs to know. So it's about breaking down those barriers between customer sentiment and customer adoption on one side and your engineering and product teams on the other. Because your GTM teams, your sales, customer service, and account management, are anyway very close to your customers.
[00:21:02] - [Speaker 1]
But the people who are actually building the product, your product, design, and engineering teams, also need to be very close to the customer. And for that, bringing that customer experience and customer sentiment close to where your product and engineering teams live is very, very important. And I think people need to put a lot of deliberate, well-thought-out effort into making this information available and digestible to the teams that actually build the product.
[00:21:32] - [Speaker 0]
Oh, thank you so much for sitting down with me today and sharing your insights. Before I let you go, I'm gonna throw a slight curveball in your direction. What I often do is give my guests a chance to add a song to a group Spotify playlist. Guests add a song they recommend, that inspires them, or that means something to them, and we add it to that playlist. Now, talking to you today, people listening on the audio version of this podcast won't be able to see this, but you've got two great guitars back there.
[00:22:00] - [Speaker 0]
And I know that, before we started recording today, you were telling me how you love playing guitar, and you've got a few artists that you love. But if I was to ask you for one song to add to our Spotify playlist, what would it be today?
[00:22:13] - [Speaker 1]
It would be Little Wing by Jimi Hendrix. I love that song.
[00:22:17] - [Speaker 0]
Oh, what a choice. Can you play that as well?
[00:22:21] - [Speaker 1]
Yeah, I can play that as well. You'll probably find it on my LinkedIn. If you go to my LinkedIn, I posted a short video of me playing it.
[00:22:27] - [Speaker 0]
Oh, brilliant. Well, I will add that song to the playlist. In the blog post associated with this episode over at techtalksnetwork.com, I will try and embed that LinkedIn post as well so people can check it out. And for anything else, all the things you're working on at the moment, where should people go if they wanna connect with you, your team, or find out more about Hiver and anything we talked about?
[00:22:50] - [Speaker 1]
Yeah. So LinkedIn is the place to find me. Of course, our website is hiverhq.com. But, you know, just go to LinkedIn, or you can email me at niraj@hiverhq.com. But LinkedIn is the place.
[00:23:03] - [Speaker 1]
We are all on LinkedIn. I would love to connect with everyone there.
[00:23:06] - [Speaker 0]
Awesome. Well, I'll add a link to your LinkedIn, the website, and try and track down that LinkedIn post
[00:23:13] - [Speaker 1]
of
[00:23:13] - [Speaker 0]
you playing guitar there. And I urge anyone listening to go check you out. I loved chatting with you today. We covered so much in just thirty minutes, but thank you for joining me. Really appreciate your time.
[00:23:24] - [Speaker 1]
Thanks, Neil. Great speaking to you.
[00:23:26] - [Speaker 0]
I think one of the strongest themes of our conversation is clarity. My guest made the case that leaders need to be explicit about where AI fits, where it doesn't, and exactly how handoffs between humans and automation should work. He also shared why knowledge quality and governance matter just as much as the model, and why ROI in customer service shouldn't be reduced to simple metrics like deflection rate or response time. Time to think bigger than that. And we also got time to talk about Hiver's shift from email collaboration platform to the AI-native customer service approach we discussed today.
[00:24:10] - [Speaker 0]
And I think, as a founder, that challenge of filtering noise while staying relentlessly close to customers really set off a few light-bulb moments in me. I especially loved how he added a personal touch by picking Little Wing by Jimi Hendrix for our Spotify playlist. I will be adding that clip of him playing it on LinkedIn; I'll embed it in the blog post over at techtalksnetwork.com. Look out for the episode there. And I'll also include the website and LinkedIn details in the show notes.
[00:24:41] - [Speaker 0]
If you're thinking about AI in support right now, hopefully this episode will help you pressure-test where automation belongs and where human judgment still has to stay in the loop. But over to you: where are you seeing AI genuinely improving customer experiences, and where is it just creating more friction? Whatever it is, let me know at techtalksnetwork.com. But that is it for today.
[00:25:06] - [Speaker 0]
I'll be back again soon with another guest, and, hopefully, I will speak with you all again very soon. Bye for now.

