Building Responsible Agentic AI: Genpact's Blueprint For Enterprise Leaders
Tech Talks Daily | February 09, 2026
3582
32:28 | 29.71 MB


In this episode of Tech Talks Daily, I sat down with Jinsook Han, Chief Agentic AI Officer at Genpact, to unpack one of the most misunderstood shifts in enterprise AI right now.

Many organizations feel confident about the value AI can deliver, yet only a small fraction are able to move beyond pilots and into autonomous operations that actually scale. Genpact's Autonomy By Design research puts hard data behind that gap, and Jinsook explains why optimism often races ahead of readiness.

We explore why agentic AI changes the rules entirely. When AI systems begin to act, decide, and adapt on behalf of the business, familiar operating models start to strain.

Jinsook makes a compelling case that agentic AI cannot be treated like another software rollout. It demands a rethink of data, governance, roles, and even how teams define work itself. The shift from tools to teammates alters expectations for people across the organization, from frontline operators to the C-suite, and exposes just how unprepared many companies still are.

Governance is a major theme throughout the conversation, but not in the way most leaders expect. Rather than slowing progress, Jinsook argues that governance must become part of how work happens every day.

She shares how Genpact approaches agent certification, maturity, and oversight, using vivid analogies to explain why quality and alignment matter more than simply deploying large numbers of agents. We also dig into why many governance models fail, especially when they rely on committees instead of lived understanding.

Upskilling sits at the heart of this transformation. Jinsook walks through how Genpact is training more than 130,000 employees for an agentic future, starting with executives themselves. The focus is not on abstract learning, but on proving that today's work looks different from yesterday's. Observability, explainability, and responsible AI are woven into this approach, with command centers designed to monitor both agent performance and health, turning early signals into opportunities rather than panic.

This conversation goes well beyond hype. It is about readiness, responsibility, and the reality of building autonomous systems that still depend on human judgment. As organizations rush toward agentic AI, are they truly prepared to change how decisions are made, how people work, and how accountability is defined, or are they still treating AI as a faster hammer rather than a new kind of teammate?


[00:00:04] What happens when leaders feel great about AI, but the organisation behind it is nowhere near ready? Well today's conversation will tackle that exact gap, because today I'm joined by the Chief Agentic AI Officer at Genpact. And together we're going to unpack why so many AI programmes stall after early pilots and what it really takes to move towards real autonomy.

[00:00:31] And we will talk about governance, how that works in practice, why people are often the real constraint, and how organisations can stop treating AI like a shiny tool, please! And start thinking of it as a teammate instead. So if you care about scaling AI responsibly and avoiding the quiet failure of pilot purgatory, yeah we've all seen what that looks and feels like, this one's for you.

[00:00:59] But let me officially introduce you to my guest. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Thank you for having me, Neil. Jinsook Han, I'm the Chief Agentic AI Officer at Genpact, a tech services, software, and operations company. An incredibly cool job title right now as well. You must be in high demand.

[00:01:27] And one of the reasons I wanted to invite you on here today was having read about Genpact's autonomy by design research, which showed while most executives feel positive about AI's value so far, only a small minority can actually scale towards autonomous operations. So they've got the pilot phase, but getting to production and scaling, that's where problems begin to arise.

[00:01:51] So from your perspective at Genpact here, what is creating the biggest gap between confidence and capability? What's the cause here? Yes, the research was really astounding. But also, you know, as much as it was a surprise, it was also a non-surprise, right? Because, as you know, when you're developing and then implementing a new technology, at the end of the day, it's not a technology game, right? It's a business game. It's about the environment.

[00:02:18] And it's about what you're willing to change in the operations. And what we learned is, you know, a few things. Number one is the clients actually implementing this, you know, autonomy by design, the agentic AI. A lot of the clients were treating it just like any other technology. And when you do that, it's like, oh, I had already cleaned up my data. I already, you know, have the tech team. I already have this.

[00:02:43] But as the award-winning professor Ethan Mollick said, one of the biggest differences is agentic AI is no longer a tool. It's a teammate. So when you are actually looking at a technology that you need to work with, rather than using it as a hammer, that completely changes the game. You need to have your data organized by utility, right?

[00:03:10] Everyone has a lot of data, but what data is really helpful, and is it organized? Two, moving, you know, from tools to teammates. Am I organizing my team the way that it should be? Humans used to do most of the work using technology as a tool, versus now technology is going to do all the work, but you are going to be shaping and governing it. That completely changes the job tasks and the roles and the org design and the operating environment.

[00:03:39] And then the third is we all talk about reskilling, upskilling, but are you really upskilling those people to do a completely different job? Because now I come to the same job and the expectations of me are completely different. I used to type away doing processing and things. Now I'm supposed to sit and monitor and guide this thing that's moving in front of me and doing the work. So many great points there.

[00:04:05] And I think that reality check around upskilling and reskilling is so important. And I think that's almost an entire episode that maybe we could revisit in the future as well. And just coming back to the survey there, nearly every executive surveyed admitted that their organization lacks adequate governance for autonomous or agentic AI,

[00:04:29] which again blew my mind, considering that all I've read this year is about the thousands of AI agents that are going to be unleashed on the world. And that lack of governance is somewhat worrying. But I'm curious, just to bring that to life, when governance falls behind innovation, what kind of risks do leaders tend to underestimate until it's too late?

[00:04:53] Because I think when you're playing with those shiny toys like a small child, you don't think about the risks or the magnitude of the risks. But what risks are they putting themselves under here? So two things. One is it's no longer governance by committee, right? It's very easy to have all the right people, but not be designing the governance for these new agents together, right? As you look at them as teammates and change the processes.

[00:05:20] So one is you want to organize around, if I'm thinking now, all the execution being done by agents. One, yes, you need volume. As you said, everyone's building hundreds and thousands. And the volume is important to get started. But at the end of the day, you need more quality, right? Because, for example, we have established an architectural review board where every agent has to be reviewed, get certified, and be given a rating for what maturity it is at.

[00:05:50] Because you don't want work that two mature agents can do, more autonomous than not, being done by 20 agents, right? So you need a certain volume. But at the end of the day, it's not only volume. You need to get to the quality and maturity of the agents, because not all agents are created equal. So when you form this new governance body, you need to think about what it means to have the right level of people doing the right work. So that's number one.
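The review-board idea Jinsook describes, where every agent is reviewed, certified, and rated for maturity so that a couple of mature agents can replace twenty immature ones, can be sketched in code. This is purely a hypothetical illustration: Genpact's actual board, rating scale, and certification threshold are not public, so every name and number below is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    domain: str            # e.g. "accounts payable"
    maturity: int = 0      # illustrative scale: 0 = experimental .. 5 = fully autonomous
    certified: bool = False

class ReviewBoard:
    """Hypothetical architectural review board: certifies agents and rates maturity."""

    def __init__(self) -> None:
        self.registry: list[Agent] = []

    def certify(self, agent: Agent, maturity: int) -> Agent:
        # An agent is only certified above a minimum maturity (threshold is made up)
        agent.maturity = maturity
        agent.certified = maturity >= 2
        self.registry.append(agent)
        return agent

    def deployable(self, domain: str) -> list[Agent]:
        # Prefer a few mature agents over many immature ones
        return sorted(
            (a for a in self.registry if a.certified and a.domain == domain),
            key=lambda a: -a.maturity,
        )

board = ReviewBoard()
board.certify(Agent("ap-matcher", "accounts payable"), maturity=4)
board.certify(Agent("ap-draft", "accounts payable"), maturity=1)  # fails certification
print([a.name for a in board.deployable("accounts payable")])     # only the mature agent
```

The point of the sketch is the sorting rule: deployment favors maturity over headcount, which is the "quality over volume" argument made above.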

[00:06:18] Number two, I often use this analogy. Agents are powerful. But they need to be trained and shaped, and humans must govern, right? Whether because of law and compliance, but also to continue shaping them and continue making them learn, humans must govern. And I often say, think of yourself as becoming a zookeeper.

[00:06:41] Now you have four lions, three tigers, five rabbits, you know, four swans. They all come with their own power, right? But you must actually put a leash on them. You must have the right zookeepers who know how to train the lion, who know what lion eats, which is the data, right? This one is eating the accounts payable data. This one is eating only the purchase, you know, data from this region.

[00:07:10] This one is only eating from, actually, Oxford. Oh, but that's not Oxford, UK. It's actually Oxford, Massachusetts. Well, you must have a governance body that thinks about what zoo, what animal kingdom, what ecosystem you are creating, right? So now the humans who used to do the running themselves go, yeah, but now I've got a cheetah who runs faster than me. Well, what am I going to feed that cheetah? At what time? What are the signs?

[00:07:40] That's the mindset that you want to have in setting up that agentic governance. I absolutely love that analogy. Perfect way of explaining it. And I think many leaders expect autonomous AI decision making across their strategic functions within the next three years. They're starting to lose a little patience there. And yet the reality is, of course, that very few are actually ready right now.

[00:08:04] So what practical signals should organizations be looking for to know whether they are truly prepared to take that next step? I think number one is the governance. Am I really having that mindset change to know what agents are? Right? We actually deployed the agentic school, not only for our operators who are working with the client business, but actually starting from the top.

[00:08:30] From BK, the CEO, and my peers, we did the agentic school and data school. Because the first signal is, am I taking agentic governance seriously from the top down and also bottom up? And do I see that people truly understand that this is not a tool set? This is a teammate. Right? So here's the tool set of your hammers and screwdrivers and all. That's what you're very used to working with.

[00:09:00] Do you know that now you're working with this live animal and ecosystem kingdom? And how do you test for that? I think that's the very first signal. Having an agentic committee or AI committee is one signal, but the stronger signal is, do they really understand? Can they pass the test? Right? And by the way, one side note: we make everybody take the test, and not everyone passes.

[00:09:25] I'm happy to report my GLC members, the fellow members, all passed, but that was work. That was work. But as we actually go down the different levels of the businesses, not everyone passes. Right? And they have to retake and retake. And it's not only about the test, but the live example of how they make it come alive in whatever function they're performing. So by far, that's the strongest signal.

[00:09:52] And then the second signal is about the readiness of their process and data. Whether you do it yourself, or partner, or hire a company like ours, it's how willing are you to change? It's one thing to know and have the knowledge and have the muscles, but am I really willing to let go of all these fragmented and small things and then be able to really, completely move to a dynamic process?

[00:10:21] Otherwise, you have a really great set of tools and these, you know, agents, but it's going to become a hurry up and wait game. Right? Okay. The agent did it in nanosecond. Yeah, but I'm still going to do the way that I used to do. So wait for 13 and a half days until the person is available to do something. So it becomes a hurry up and wait game. So am I really willing to change? And do I see that in the global process owner? Do I see that in the investment allocation? I do.

[00:10:49] At the end of the day, money talks, and we put our money where our mind is. So am I moving the money to change the process, change the data, change, you know, how we operate? That's the second strongest signal. So everyone invests. But the question that we ask is, yes, but how are you moving the money? Where's the money going? To the process change, or just to buy a lot of technology? Right?

[00:11:15] And then the third, I think, is a strong signal: the messaging and communication cascading down so that you actually see it. Right? This is becoming a replacement engine, not keeping the way we used to do things and adding agents on top, but agentic, actually completely replacing. That is the third signal that we see in mature organizations who are climbing up to really change.

[00:11:42] And that's why the research shows only the small number of companies who really meet their criteria. And I think when a lot of people hear the word governance, they almost frame it immediately as a break on progress and slowing things down. And it doesn't have to be that way, of course. So based on your work, how can governance frameworks actually do the opposite? Accelerate responsible AI adoption rather than slowing teams down?

[00:12:08] Because it can be somewhat of a myth when people just almost panic and put the brakes on when they hear governance, don't they? I think the governance needs to become a system and also governance as a service. Right?

[00:12:21] Because what is governance about? Look, it's actually the way we work, and it's how I work, rather than, oh, there's this policing or auditing committee who's going to come in now with some framework and laws and compliance. I think that's the huge difference. Governance has to become the actual agentic way of working, so every person understands what that governance is.

[00:12:49] So, for example, if we design that, yes, you, the accounts payable process associate, have now become a prompt engineer, then do you know that you yourself are part of governance? So every day when I look at the exceptions, when I look at the behavior of an agent, and I shape it and I work with engineering, then I'm already playing the function of governance.

[00:13:13] And that actually becomes part of how agentic governance works as a service, rather than, oh, I have a governance body composed of the legal officer, the agentic AI officer, the CTO, who come in every week. And if you do something wrong, they're going to come down on you, or here's a new thing that you need to implement. There's a difference between how you think about governance versus you being part of the governance fabric.

[00:13:41] And listening to you there and looking at the research that we're talking about here, there does seem to be a clear risk of an evolutionary mismatch as AI agents continue to learn and adapt. And again, to bring that to life, can you maybe tell me a bit more about what that will look like in real terms and how a leader and an organization can keep those agents aligned with business goals over time and not losing sight of that? So I'll start with an analogy.

[00:14:10] You know, I go to the mountains a lot. I'm more of a mountain person than a water person. And, you know, when we talk about evergreen trees, it's like, wow, through four seasons, how do they stay evergreen? Now, this sounds very, very, you know, naive, but it's a true story. Only about a decade ago, I realized that those trees have three colors, all four seasons.

[00:14:35] So basically the evergreen tree that you see is green because there's a dark green which has matured. But if you look closely at the branches, there's a really light green, the new shoots that are coming out. And way back, there's brown stuff that the tree is actually self-deprecating, killing off to keep itself green. Whereas I always thought that evergreen trees are just green, right? They're strong and they're going to stay green even in the wintertime.

[00:15:03] What it actually is doing is shedding itself and renewing itself all the time. The new way companies need to go about this is you need to ruthlessly cut out what's not part of your now autonomous design. And that's extremely hard. But at the same time, you need to continue experimenting, because not every design is going to live forever. Hey, this is not working, it's not meeting our goal.

[00:15:28] So there's this concept of the agentic life cycle, and we also have our own framework. Companies need to live by agentic life cycle development, right? Just like the software development life cycle, the SDLC. You need to have that and continuously renew and refresh. Hey, this agent and this way of governing needs to go back and be fine-tuned. So we fine-tune it. This agent is not going to survive.

[00:15:57] Okay, then it's time for a new agent. We had 600 agents. Now we've got to have 60. We used to have 57. We've got to have 20. But now we are adding. So that continuous life cycle has to be there for companies to actually make this a real revolution, not treat it as fixing one thing at a time, hammering it down, taking the nail out again and hammering it down. It has to be this cycle. What a great analogy. It's confession time.

[00:16:27] I'm going to hold my hand up now. I did not know about evergreen trees there shedding themselves and reinventing themselves either. So you've taught me something there too. And at the very beginning of our conversation today, we started talking about upskilling and reskilling. What a massive topic that is. And it's so important because workforce capability gaps, they're also cited as the biggest barrier to progress. So how did you approach upskilling more than 130,000 employees in Gen AI?

[00:16:57] And what lessons would you share with leaders starting that journey now? Because we keep hearing upskilling and reskilling. A lot of business leaders listening won't know where to start. So you're a great example here, shining example, I would say. So how did you do it? Anything you can share around that? Yes, it's a journey for us because it's still very early days. We ourselves are experimenting and we have a really big investment in this learning area where our CHRO, Piyush, and I really have been partnering.

[00:17:28] Two questions that I would start with. Everybody has access to amazing training these days, right? From Coursera to all the universities and all the online trainings and things. And even with Gen AI and agentic AI, you can create your own training really easily. But going back to my original theme: if the employees that you currently have were the only employees you could have for the rest of your company's life, how would your learning be different?

[00:17:57] And I bring this up because we are at a point where our employees have deep process knowledge. They live on the ground. They live in the last mile where all the magic happens, from the exceptions, right? So what that means is, oh, my people have all the deep process knowledge that we're trying to imbue into the agents. Then how do I train them?

[00:18:24] I have a vested interest in converting those people as much as possible, and not selectively. So that gave rise to a different kind of proprietary training that we are leveraging. And we also had a learning platform, Genome, where, even relative to the size of our company, our people were taking so many training hours. So that fundamentally required a complete reset.

[00:18:47] So rather than telling people to just go and take these trainings, how do we extract all the process knowledge and have them actually govern? That was the number one principle of thinking, starting with our employees first while infusing others. Then the second principle was, okay, so you get the training, but how are you going to see that it's actually working? And we have to deliver for our clients every day.

[00:19:14] So even if somebody got certified in the agentic training, guess what? They still have to come to the test. Now show me that the you of yesterday is different from the you of today in delivering that value. And that is the ultimate test, every day. And those two things are what we're working on every day, because every single thing will be different for every client, right? I often quote the Anna Karenina principle, for different reasons.

[00:19:42] It's like happy families are happy for one reason. Unhappy families are unhappy for many different reasons. And many of the business people use the principle for different ways. The way of my using it is, look, when something doesn't work, it does not work for 10,000 reasons, right? But at the same time, beyond the complaint, it's because there are so many points of failure, right? But when everything works, everything just works because everything's coming together and gelling and converging.

[00:20:11] And in terms of the people, there's so many things they need to know. But we're actually really distilling that down to what are the telltale signs that someone's really learning and changing? And how do we actually put that into the proprietary training? How do we see that that's working? And that's ultimately how we are making it work. Early success, right?

[00:20:36] And the clients are our ultimate voters of, yes, this is working for me. You're giving me bigger value. That's the ultimate test. And every client is different. But we are applying that principle every day and they're learning from it. Love that. And another topic we need to talk about today, why I have you here, is observability, explainability, and compliance.

[00:21:04] These are things that are very often bolted on later, if at all. So what changes when these are designed into AI workflows right from day one? And also, why does it matter as systems become more autonomous? Because there's a great case for this, but I'd love for you to expand on that. We have a full responsible AI framework, which, if I'm invited next time, I'm going to have to share. It's 12 principles grounded in all of the different laws out there, from the NIST framework,

[00:21:34] from all the different guidance that is out there. But particularly on the elements that you mentioned, observability of performance and health is important to us. So we actually have an agentic command center where, between the human governance and the agent performance, things are being tracked basically on a daily basis and also on a real-time basis.

[00:22:01] And that observability is important because you have a certain range, if you think about it, like a box plot graph of, hey, this means that it's performing well. But then the health is important, because how do I observe the health of the agent in more of a leading way, right? Not in a lagging way. And that has been more about observability. How do I actually turn that into a signal when suddenly something's coming in?

[00:22:30] Now, it might be healthy, but the performance dips because new kinds of information came in, right? So, for example, if the client bought another company, suddenly the type of information is changing. I know my agent's health is fine, but suddenly performance dips. That signal means I need to fine-tune or change the agent or insert another agent. So we are working on observability from that angle. And how do we actually make that a signal?

[00:22:59] So when the performance or health dips, it's not panic, but, oh, it's a good signal that we need to do something. And that's another change management thing, right? How do you make sure it's, oh, there's an opportunity for us to get better and better? So that is something that we're doing in this agentic command center, with different graphs, different ways of working, different people working in teams.

[00:23:24] So it's not only one person just staring at a screen and seeing what's happening with the graphs, but people getting, you know, early signals and things. And then the second thing is the storytelling, right? Explainability does not only mean, can you explain this number and what happened. It's, can I tell the story around it, and do I know fundamentally what's happening in the environment? And can I communicate that, you know, from the process ground floor to the engineers

[00:23:51] to work with it and then make that, you know, health go back up or performance go back up. So we're also tying and embedding the responsible AI framework into the control plane, so both our human workers and agents can abide by these guardrails that we set.
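The command-center signal Jinsook describes, where a performance dip while health is fine is treated as a leading cue to fine-tune rather than a failure, can be illustrated with a minimal sketch. This is an assumption-laden toy, not Genpact's system: the function name, the two-standard-deviation band, and the example scores are all invented for illustration.

```python
import statistics

def performance_signal(history: list[float], latest: float, healthy: bool) -> str:
    """Classify the latest performance score against the agent's normal range."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    in_range = abs(latest - mean) <= 2 * stdev  # illustrative "box plot" band
    if in_range:
        return "ok"
    if healthy:
        # e.g. the client acquired a company and new data types arrived:
        # the agent is healthy, but the environment changed, so fine-tune it
        return "fine-tune"
    return "investigate"

baseline = [0.95, 0.94, 0.96, 0.95, 0.94]  # made-up historical accuracy scores
print(performance_signal(baseline, 0.95, healthy=True))   # ok
print(performance_signal(baseline, 0.70, healthy=True))   # fine-tune
print(performance_signal(baseline, 0.70, healthy=False))  # investigate
```

The design point is the two separate axes: performance (is the output in range?) and health (is the agent itself fine?), so an out-of-range score is routed to "opportunity" or "problem" instead of triggering a single alarm.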

[00:24:18] And for CIOs, business leaders, and in fact everyone listening who might be in a team that feels stuck in pilot mode, or struggling to scale if they have managed to get out, what is the one mindset or operating shift that they should be making this year to move a step closer to responsible and scalable autonomy? And again, I appreciate it's a huge topic and it's not as simple as giving a quick answer,

[00:24:46] but very often I think it's not as much about the technology, but more about the culture, the mindset and operating shift that they should be making. So any pointers or any advice that you would offer people listening here? I definitely advise people to go and really read our research. If I can ask them to do one thing, it's this: before you set up the governance, you need to have an agentic strategy for the company, because every company has a business

[00:25:15] strategy, but how does that business strategy translate into the agentic world? Many clients actually haven't done that, and they're running in parallel. Okay, I need to do the agentic strategy while I'm actually building agents. It's like, okay, that's not even cart before the horse. Cart and horse are running separately. So the one piece of advice that I would give to CIOs and business leaders is: please actually do the agentic strategy.

[00:25:44] Really look at what you're trying to do and how you want to get to that autonomy in your company, in your timeline with your business goals. Then the rest will follow. The governance, the training, you know, how you govern. And I'm curious, as someone that spends a lot of time in this space, you probably read a lot of things online. Many of them will be untrue. Maybe they will be exaggerated or myths or misconceptions. Might frustrate you a little.

[00:26:13] On this podcast, I do have a virtual soapbox I'm going to ask you to stand on. And finally, put to rest, lay to rest any of those myths and misconceptions. Any that you see repeatedly in your work and in your industry or your field of expertise that you just want to lay to rest once and for all? What would it be and why? There's so many, Neil. So many.

[00:26:39] But one thing I will say is that even for the leading organizations, that small number our research identifies, this is not easy. Even for companies who are ready, this is not easy. And it takes a journey. And it takes the entire village. Yes, the CIO is at the front, the chief AI officer is at the front, the CEO is at the front.

[00:27:07] But it takes the entire village to get on that journey. And whenever I hear, oh, we'll be doing it in three months, we don't need many people, we were so ready, so ahead of time, and we did this. Because in the short term, and that short term will be a while, no matter how good the agent is, there is no process, zero process, that will not require humans.

[00:27:35] And that, if I had to say one thing, is what really bugs me when I hear it. I love it. Doesn't it feel great to get that off your chest? I'm going to do one more favor for you now. I think every single one of us throughout our career has someone that sees something in us, invests a little time. And we're incredibly grateful because they've played such a big part in our career. Who would that person be and why for you? We give them a quick thank you and restore a bit of happiness and balance in the universe.

[00:28:04] Who would you be most grateful towards? There are so many people. The other day, during Thanksgiving, I tried to write to them, and there are so many. So now my goal is to write to only like one to three special people from my past. But there's one person that comes to mind. So I have a concept of a personal board. They might not know that they are on my personal board, because I don't give them official titles.

[00:28:30] But there are people that you call on your bad days, and there are people you call on good days. There's a particular mentor who, no matter when I call or when I email, makes time. And not only that, I hardly follow his advice. Probably 80-20, right? But he's a really good person to hear from. And he always ends with, Jinsook, I'm your mentor. I have no vested interest.

[00:29:00] I can give you any advice, but at the end of the day, you live it. You do that. And he's also so cutting. He doesn't mince words. I know that when he praises me, I'm like, is he really complimenting me? Is this good? Because 99% of the time he will say, this is not good. Don't do that. But we've had that relationship for over a decade. And would you answer calls from someone who doesn't listen to you 80% of the time?

[00:29:31] But it has been such a tremendous relationship that he is the one person that definitely comes to the top of my mind. And what was his name again? Let's take a minute. Let's recognize him. I think it's so important to surround ourselves with people who will criticize and question some of the decisions we make.

[00:29:52] Yes, we may not always listen, but it's so much better to take those opinions on board and have that support than to surround yourself with people that just say yes to everything you say. That's no good for anybody's growth. So a quick shout out there. You know who you are. But for anyone listening wanting to find out more information about Genpact and the research we referenced, or to connect with you or your team, where would you like to point them? Definitely go to our website, and there are many announcements.

[00:30:20] But please do definitely read the section on what this small number of leaders are doing to set it up. So I look forward to seeing you and look forward to talking to you more about our research and where we are headed and how we are really serving clients for better business outcomes in this agentic era. Yeah, we covered so much there, from governance to why it hasn't kept pace.

[00:30:44] Optimism often outpaces readiness, and how people are the biggest bottleneck, or the lack of skills and upskilling that's going on there. So I will include links to the Autonomy By Design research that we referenced. Everything else you mentioned, there'll be links in the show notes to those too. But more than anything, thank you for shining a light on the research results and what you're doing here. Tremendous what you've achieved. And I know you will continue to do so. So I will be asking you back on later in the year. But more than anything, thank you for joining me today.

[00:31:14] Thank you for having me, Neil. Really great time learning from you and the questions as well. I think if there's one takeaway from this conversation, it is that autonomous AI is less about speed and more about readiness. Governance, skills and mindset all decide whether progress sticks or slips. And my guest today made a very strong case that the winners will be the teams who design for accountability and learning right from the outset,

[00:31:43] rather than trying to patch it in later. And I'd love to know how this lands with you. Are you seeing confidence race ahead of capability in your organisation? I suspect a few of you will say yes to that. But maybe there are also many more of you that are finding ways to close that gap already. And I'm very interested to hear how you're doing that. So please pop by techtalksnetwork.com.

[00:32:08] You'll find out how to message me, how to connect with me on socials and work with me and just about everything else. So pop by there and I will speak with you all again tomorrow. Bye for now.