Is your cloud foundation ready for the explosion of AI workloads, or are you about to scale technical debt at the speed of innovation?
In this episode, I'm joined by Apurva Kadakia, Global Head of Cloud and Partnerships at Hexaware, an AI-first transformation company helping enterprises modernize the core systems that will determine whether their AI strategies succeed or stall. With a front-row seat to large-scale cloud programs across industries, Apurva explains why so many organizations that "moved to the cloud" still find themselves unprepared for what comes next, and why modernization-led migration has become a business priority rather than a technology upgrade.

We unpack the real warning signs that cloud environments are not fit for AI, from monolithic architectures and spiraling compute costs to hidden integration complexity and security gaps that only surface at scale. Apurva introduces the idea of "clarity before cloud," a structured approach to understanding sprawling application estates, identifying what truly matters to the business, and matching each workload to the right modernization path using the five R's. It's a conversation that moves beyond theory into the practical decisions leaders need to make now if they want to avoid being locked out of future innovation.
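For listeners who like a concrete mental model, the five R's discussed in the episode (re-hosting, re-platforming, refactoring, re-architecting, replacing) can be pictured as a simple decision table. The sketch below is purely illustrative and is not Hexaware's assessment tooling; the attribute names and rules are invented for the example, and a real assessment weighs far more factors (cost, compliance, SME availability, integration depth).

```python
# Illustrative only: a toy decision table for the "five R's" of cloud
# migration. Attribute names and rules are invented for this example.

def recommend_strategy(app: dict) -> str:
    """Map a coarse application profile to one of the five R's."""
    if app.get("vendor_packaged"):      # off-the-shelf: code can't be changed
        return "rehost"
    if app.get("end_of_life"):          # cheaper to buy or build anew
        return "replace"
    if app.get("customer_facing") and app.get("needs_new_experience"):
        return "rearchitect"            # greenfield, cloud-native rebuild
    if app.get("monolithic"):           # break it up, but keep the logic
        return "refactor"
    return "replatform"                 # containers / managed services

portfolio = [
    {"name": "ERP", "vendor_packaged": True},
    {"name": "Storefront", "customer_facing": True, "needs_new_experience": True},
    {"name": "Billing", "monolithic": True},
]
for app in portfolio:
    print(app["name"], "->", recommend_strategy(app))
```

The ordering of the checks is itself a policy choice: constraints that rule options out (packaged software, end-of-life) are tested before aspirations that rule options in.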
The role of AI inside the transformation journey is another major theme. Rather than treating AI as a destination, Apurva shares how AI-led and human-perfected assessment models are already accelerating application discovery, classification, and migration planning, completing the majority of the heavy lifting while keeping human judgment firmly in control. We also explore why governance cannot be an afterthought, and how a dedicated Cloud Transformation Office can drive adoption, reskilling, stakeholder alignment, and data readiness without slowing delivery.
Looking ahead to a world of agentic systems and rapidly multiplying cloud workloads, this episode offers a clear message. The organizations that win will not be the ones that adopted cloud first, but the ones that modernized with intent.
So as AI moves from experimentation to enterprise scale, are your applications, your architecture, and your operating model truly ready to support it, or is now the moment to rethink your path before the next wave hits?
Useful Links
Connect with Apurva Kadakia
[00:00:03] Welcome back to another episode of the Tech Talks Daily Podcast. My guest today is someone that sits at the fault line between cloud ambition and AI reality. And as enterprises rush to adopt AI at scale and implement AI agents, many are beginning to discover an uncomfortable truth. And that is that their cloud foundations were never designed for what is coming next.
[00:00:32] Lift and shift migrations have moved old problems into new environments. And as I say that out loud, I know more than a few of you will be nodding your head in agreement. Because yes, technical debt has quietly grown, probably grown more than you'd like to admit in public. And governance is something else that lags behind innovation. And what is the result of this?
[00:00:56] It is typically rising cost, complexity, and growing uncertainty about whether current platforms can continue supporting the next wave of AI workloads. Does that sound familiar? Because I think this is why this conversation matters. And my guest today is the global head of cloud and partnerships at Hexaware. And together, we're going to unpack why modernization-led migration has become a board-level issue.
Much more than another technical nice-to-have. And as AI workloads are expected to multiply rapidly over the next few years, and I think that's something that is not up for debate now, my guest will explain why speed alone is no longer enough, and why clarity, structure, and governance matter. They may not be as shiny as the new emerging technologies, but these things will now decide whether your cloud investments pay off
[00:01:54] or quietly stall innovation, leaving your AI projects caught in pilot purgatory. And we'll also talk through practical signals that leaders should be watching out for to identify when their cloud estate is under strain. And we'll also look at how to bring order to sprawling application portfolios. Why decisions made today will ultimately shape who can actually deploy AI at scale tomorrow. Yeah, we're going to cover it all.
[00:02:23] Ultimately, it's about what breaks first, what to fix next, and how organizations can prepare their aging infrastructure for an AI-driven future, without adding just another layer of complexity on top of the old ones, and we've all seen what happens there. But enough scene setting for me. Let me officially introduce you to today's guest now. So, thank you for joining me on the podcast today.
Can you tell everyone listening a little about who you are and what you do? Absolutely. First of all, Neil, thanks for having me on this podcast. I'm Apurva Kadakia. You know, you can know me more as a cloud evangelist and a transformation executive. I've been with Hexaware Technologies for the last 20 months. Hexaware Technologies itself is an AI-first transformation company,
which helps customers with end-to-end services, from building new and disruptive solutions to managing, operating, and transforming them as well. My role at Hexaware revolves around cloud. I'm responsible for all of our cloud transformation and optimization initiatives, as well as partnering with the large hyperscalers to ensure that we ultimately create the right value
and unlock the right potential for our customers leveraging these partnerships. Well, it's a pleasure to have you join me today. So many topics we can explore together. And I do go to a lot of tech conferences in the US. One of the last ones I went to was AWS re:Invent, and they made a big deal about technical debt and the scale of technical debt. They even took us journalists into the middle of nowhere and blew up a load of servers, saying, we're going to blow up technical debt.
[00:04:15] But, of course, technical debt is always going to be around. And I'm curious, as a transformation partner, from your vantage point at Hexaware, what is it that you find breaks first when enterprises try to run AI workloads on cloud foundations that were never actually modernized in the first place? Is this something that you've come across? Yes, we come across this almost every day, if you will, for lack of a better word.
In fact, I think a large part of enterprises and customers are dealing with this technical debt. They are either revolving around outdated systems, or you talk about complexity in the architectures, architectures so intertwined that the system becomes too big to fail itself, for lack of a better word. And when you pair that up with the SME shortage itself,
because some of these applications were built many, many years ago, the actual knowledge of how some of these solutions were architected or built is a big challenge for the majority of customers. And I think when you pair that up with trying to identify the right futuristic solution or a future compute strategy itself, it becomes a big undertaking for most customers.
[00:05:37] And I think that's where we realized that this is one of the biggest challenges that most customers are looking to solve for. And I was also reading that you've previously said that many organizations either lift existing problems directly into the cloud or delay migration altogether. But for business leaders listening, looking to avoid some of the mistakes that we're talking about, what are the warning signs that they should be looking out for?
What's the signal that their current cloud setup will struggle as AI demand accelerates throughout 2026, especially as we're now talking about not just AI, but agentic AI, which is like we're piling more and more tech on top? So if the majority of your workloads are monolithic in nature, or if you have workloads which are less agile, or you're dealing with daily security challenges,
I think these are enough warning signs for you to know that, okay, when you move to cloud, you're actually taking on a significant amount of technical debt as well, which in the long run could very well turn out to be costly to maintain in the cloud itself. And that's why I think it's extremely important that when you start putting your AI strategy first, you certainly start looking at how you modernize your workloads to become more savvy for AI,
how you modernize your workloads to become more savvy for cloud itself, right? So that you can run in an optimized fashion and a sustainable fashion. And that's why I think it's extremely important for customers to ask themselves, okay, what kind of technical debt do I have currently running on-prem in various different forms, which could range all the way from complex architectures to the agility and security challenges that you may be running into,
[00:07:33] and how you want to eliminate that while you put together your modernization strategy. Another phrase I've heard you use or read that you've used before is clarity before cloud. And I absolutely love that. But I suspect we've both seen organizations that have not just hundreds, but thousands of applications. So when you talk about clarity before cloud,
how should executives realistically be assessing sprawling application portfolios without turning the exercise into a multi-year audit that ends up stalling progress? And again, I'm mentioning applications there, but with agentic AI, I would imagine a year, two years, three years down the line, we're going to be talking about agent sprawl. That'll be the next thing. But what's the best way of managing this? It starts ultimately with the right kind of function.
Applications that were built with a specific purpose have grown into a massive use case, which becomes very hard to maintain. And that's why I think it's important for customers to really go back and untangle or unravel the real use case or the real purpose behind the application. And it starts with something we help customers with, the assessment strategy itself, right?
I think we truly believe in it. And we see that the assessment strategy helps customers identify the true nature of the application: is it a business-critical application, or is it more of a corporate application? And then, depending on the nature or the function or the purpose itself, how do you want to apply the modernization strategy around it?
Now, there are some applications which have, to a very large extent, become customer facing. And I think the customer experience becomes very important for those applications. And that's where, for most customers, what we tend to recommend is to take a completely greenfield approach and start leveraging more and more new-age technologies to drive improved customer experience.
If it is a corporate application that doesn't require a whole lot of business logic change, that's where we talk about our transformation approach. Versus if it is a business-critical application, which does require very complicated logic processing, then we say that, okay, you may want to think about the right compute strategy as well. So these are broadly the high-level questions: is it a customer-facing app? Is it a business-critical app?
Or is it a back-office operations app? And depending on that, we help the customers identify what the right strategy should be from innovate, build, or transform. And I think it's also important to maybe highlight the help that AI and automation can offer here. And again, for people listening, how can AI and automation be better applied to things like application discovery and classification today?
And also, what are the tangible time or cost savings that you've seen in your experience working with companies that take that approach seriously and use AI as a tool that can help them out of this too? Yeah, nowadays, when we look at it, AI is becoming smarter and smarter in terms of identifying the patterns, identifying the best practices, and the right frameworks that need to be applied
for any application development or any compute strategy as well. And when you put all of these together, we have in fact started adopting AI ourselves. We follow something called an AI-led and human-perfected strategy, where we bring AI into our own tools to certainly bring in some of these right answers,
which are then further vetted by humans to ensure that whatever we have received as an output from the AI is aligned with what we truly believe is in the best interest of the customer. So that's where we have started building AI agents within our own frameworks and within our own technologies and tools
that we work with day in, day out with the customer, to go after the most common patterns. That way, we can have at least 60 to 70% of the work handled by AI, with the remaining work done by humans, to not only improve the accuracy of the output but also make it more timely for the customer. And the five R's, they're something that is often mentioned,
but rarely applied well in large enterprises. So for any decision makers listening, how should they be choosing between re-hosting, re-platforming, refactoring, re-architecting, or replacing when balancing speed, cost pressures, and long-term AI readiness? Because again, huge balancing act here, isn't it? Yes. In fact, we live in a world we like to call cloud evolution 3.0,
wherein the majority of customers who originally chose cloud as a platform and thought about re-host as a strategy have evolved out of that as well. So we do see a lot of customers now adopting a strategy of saying, we may want to go more cloud native, which ultimately means taking a re-platform approach in some capacity, or maybe a rebuild approach itself, right?
However, I think the ultimate goal there is to really take a more cloud-native approach. Now, there are certain types of workloads which need to stay in the re-host category. This is really where we carve out a specific workload, let's say one typically running in a cost capacity, with limited ability to take a cloud-native approach to it. We say that, okay, this type of workload can continue to stay on the re-host path, but for almost everything else,
customers should certainly think about taking a re-platform or a rebuild strategy as well. Now, there is again a new nuance to it: there are certain types of applications which may fall right between re-host and re-platform or refactor. And that's where we certainly look at how you could containerize the application as well. Not only does it allow you to get closer to the cloud-native features, but at the same time, it helps you increase your portability
across clouds as well. And I'm curious, again, from everything that you're seeing, the conversations you're having and meetings with organizations, where do they most often misjudge technical debt during modernization? Because I think every organization is aware of a level of technical debt, but what do they misjudge, and what downstream impact does that have once AI workloads begin to scale across the cloud estate? This is a very common scenario we run into
with every single customer. And like you rightly said, customers tend to talk about technical debt. However, the true gravity of the technical debt revolves around the integration touch points, right? Most customers tend to underestimate how deeply integrated a single application is with a variety of other applications. Because over the years,
every single application has gone through its own level of upgrades or modernizations, allowing, sometimes, even multiple integration points to the same application. And while some of them may be deprecated, they are still living very much within the application. And I think that's the number one thing that we tend to run into. In addition, we also see the security gaps as well. It's kind of an extension to what I just mentioned about some of the loose connections
which are still out there, even though they are not in use. Over a period of time, they become a huge security vulnerability as well, because some of these integration touch points or some of these connections don't even show up on the radar. And I think that's where it becomes challenging for customers to see. And that's why, when we think about AI at scale, we really like to start with something called an enterprise-grade AI platform. That's what we recommend to our customers,
because we want to make sure that you have thought through, you know, what's the right security that you need to put in place? What are the right guardrails you need to put in place? What kind of telemetry do you need to enable to ensure that you have the right kind of parameters, the right kind of boundaries, put around the solution itself to prevent any further such slippages?
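The "loose connections" Apurva describes, deprecated integration points that still quietly carry traffic, lend themselves to a simple inventory check. The sketch below is illustrative only; the data model, field names, and 30-day threshold are invented for the example, and real estates would pull this from discovery tooling and network telemetry rather than a hand-built list.

```python
# Illustrative sketch: flag integration touch points that are marked
# deprecated but have recently carried traffic -- the hidden connections
# that tend to become security gaps. The data model is invented here.

from dataclasses import dataclass

@dataclass
class Integration:
    source: str
    target: str
    deprecated: bool
    last_traffic_days: int  # days since traffic was last observed

def risky_integrations(inventory: list[Integration]) -> list[Integration]:
    """Deprecated endpoints that still carried traffic in the last 30 days."""
    return [i for i in inventory if i.deprecated and i.last_traffic_days <= 30]

inventory = [
    Integration("crm", "billing", deprecated=True, last_traffic_days=2),
    Integration("crm", "ledger", deprecated=True, last_traffic_days=400),
    Integration("web", "billing", deprecated=False, last_traffic_days=0),
]
for i in risky_integrations(inventory):
    print(f"review before migration: {i.source} -> {i.target}")
```

The point of the exercise is the intersection: a connection that is merely deprecated is cleanup work, but one that is deprecated *and* still live is both a migration dependency and a potential vulnerability.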
And another problem we have to highlight today, of course, is governance, because it's frequently treated as an afterthought in many organisations. So what does an effective cloud transformation office actually do day to day to keep modernisation efforts on track without slowing teams down? I think many people listening will know what not to do, but not what successful teams are actually doing out there and what they could be doing differently.
So most organisations run into what we call a challenge around adoption itself, right? When we think about change management, or when we think about cloud transformation, or when we think about AI enablement, adoption becomes the key to success there. And adoption can really be improved
by leveraging a good change management office approach. And I think that's where we tend to educate our customers and talk to them proactively about something called a cloud transformation office. This is the office which focuses purely on end-to-end adoption and change management associated with any new technologies being introduced within the work environment, right?
Customers in many capacities require reskilling of the existing workforce as well, which is where we tend to focus on running it as a parallel thread to the actual transformation, to ensure that the workforce feels comfortable adopting and embracing this new technology, right? We tend to run into so many complexities around data migration, modernization,
or manipulation, ensuring that we have the right data in the right state before it can be consumed by the new technology. And that's where the transformation office comes in, to ensure that we have followed every single step very minutely, so that ultimately the data is in the right state as well. And then, of course, ensuring the right kind of stakeholder management is really what becomes the key. When we take this whole
governance-based transformation office approach, we find that most of the transformation engagements we work on are accomplished in a timely fashion, or maybe even ahead of time, with guaranteed outcomes or results. As I said at the very beginning of our conversation today, app sprawl is a massive, massive problem. But this year, everyone's talking about agentic AI,
hundreds and thousands of AI agents being out there. So I think it's safe to assume that, fast forward a few years, we'll be talking about app sprawl and AI agent sprawl and all the AI things that we've added. And I think understanding exactly what is out there, especially in the autonomous infrastructure, what's happening and where and what each thing is doing, is going to be imperative. So if we look ahead, I don't know, let's say three years, to 2029, the world will be very different.
There'll be cloud workloads multiplying dramatically from what we're seeing today. What are the practical steps that proactive enterprise leaders can be taking over the next 12 months to avoid being locked out of AI innovation, ultimately by their own infrastructure choices? What should they be doing to prepare for that inevitable future? Yes. And I think this is the number one topic that most customers, most executives, are talking about nowadays when they think about their AI strategy.
I think what they're realizing is you could adopt AI in two ways. One is build something new, from the ground up, or build on top of your existing solutions and workloads. Now, the reason the latter is very important is because most customers have years of history built within the organization of helping customers or doing business
in such a fashion, right? And that's why it's become very, very important to build AI on top of your existing workloads. But that's where the gap comes into play, in terms of the technical debt that we spoke about earlier, either in the form of the complexity of the architecture, or the legacy nature of the architecture, or the security challenges or agility, right? And I think that's where
customers have become much more open to the idea of taking a modernization-led migration approach. So I think the number one step that most customers are now dealing with and talking about is how they could really modernize their current infrastructure, not at the infrastructure level alone, but even at the application level, to make them cloud-native savvy. And once they are cloud-native savvy, they can certainly look at
how they can enable them for AI innovation. And if you look at all the conversations you're having and a lot of the information that you read online, maybe even on your own personal LinkedIn news feed, I've got to ask, what do you find that people most misunderstand about your industry? Or are there any myths and misconceptions about your job or field of expertise that we can lay to rest today? Because I have a sneaking feeling there's going to be
a few frustrating things that you see along the way, but anything we can lay to rest and bust once and for all? Treating AI as a big bang transformation is the number one thing. And the other is the idea that AI is going to eliminate all the jobs. And I think what we are seeing is that, ultimately, it is opening up a lot more opportunities for a lot of our customers, not only to adopt innovation
and newer technologies, but at the same time to uncover a lot of use cases and potential which were harder to implement earlier, right? So those are the two things that we truly see people misunderstand: first, that when you think about AI, you have to start from the ground up, and second, that AI will necessarily eliminate what we do today. I love that.
And for anyone listening who has been nodding in agreement as your words resonated with them throughout this conversation today, maybe they want to connect with you or your team, or just learn more about Hexaware and how you are helping organizations. Where would you like everyone to go? You know, we have a lot of good content, along with some of our customer case studies and blog posts, on our own website at Hexaware.com. In addition, if there is a direct interaction
that anyone would like to have with me as well, they could certainly reach out to me on LinkedIn, and I'm happy to answer as many questions as possible, as well as share the experiences that we're living day in and day out with each of our customers. Well, I think we covered so much there to keep people thinking and talking, from clarity before cloud to the five R's to creating firm governance. And I will add links to everything that you mentioned, and I encourage everyone listening to check that out and spend a bit of time reading some of that information,
or maybe even ask you an additional question. But more than anything, just thank you for shining a light on this and, most importantly of all, giving everyone an actionable takeaway. I would argue several actionable takeaways, but thanks for sharing that with me today. Thank you, Neil. Thanks for having me on. So as we wrap up, what stands out for me is just how much of the AI conversation still comes back to the fundamentals. Because before agents, co-pilots, or anything autonomous
[00:25:18] can deliver value, the underlying cloud estate, the foundations, has to be fit for purpose. And my guest today cut through a lot of the noise and brought back that focus of discipline, structure and intent, modernization-led migration, clarity before cloud and firm governance. These are not just abstract ideas. They're practical choices that shape what is possible over the next few years. And I also appreciated
[00:25:46] the timely reminder that AI progress doesn't come from, hey, let's rush faster. It comes from fixing what is holding you back. And for many organizations, that means facing technical debt honestly. Understanding your application sprawl before it turns into agent sprawl in a couple of years and treating cloud decisions as long-term bets rather than short-term fixes. And I know, these are not easy conversations,
but they're necessary ones if AI is going to scale responsibly and sustainably. So I'd encourage you all to continue the conversation with my guest. There'll be links in the show notes. You can find both my guest and Hexaware there. And I'd also love to hear your thoughts too. Are your cloud choices helping you move forward, or quietly limiting what comes next? Head over to techtalksnetwork.com. There are many ways that you can contact me there. So check that out,
[00:26:44] and I'll return again tomorrow with another guest. Speak with you then. Bye for now.

