The world is building data centers, identity rails, and AI policy stacks at a speed that makes 2026 feel closer than it is.
In this conversation, Rajesh Natarajan, Global Chief Technology Officer at Gorilla Technology Group, explains what it takes to engineer platforms that remain reliable, secure, and sovereign-ready for decades, especially when infrastructure must operate outside the safety net of constant cloud connectivity.
Raj talks about quantum-safe networking as a current risk, not a future headline. Adversaries are capturing encrypted traffic today, betting on decrypting it later, and retrofitting quantum-safe architecture into national platforms mid-lifecycle is an expensive mistake waiting to happen.

He also highlights the regional nature of AI infrastructure: Southeast Asia prioritizing sovereignty, speed, and efficiency; Europe leaning on regulation and telemetry; and the U.S. betting on raw cluster scale and throughput.
Sustainability at Gorilla isn't a marketing headline; it's an engineering requirement. If a system can't prove its environmental impact using telemetry like workload-level PUE, it isn't labeled sustainable internally.
Gorilla applies the same rigor to IoT insight per unit of energy, device lifecycles, and edge-level intelligence placement, minimizing data centralization without operational justification.
This episode offers marketers, founders, and technology leaders a rare chance to understand what national-scale resilience looks like, and why, when companies scale fast, it's platform alignment that breaks first, not the technology.
Raj's reminder that decisions must be reversible, explicit, and measurable is the foundation of how Gorilla designs systems that can evolve without forcing rushed compromises when uncertainty becomes reality.
[00:00:04] Greetings and salutations. Welcome back again today. Now, when we talk about AI, cybersecurity, smart cities, or even quantum safe networks, the conversation often jumps straight into speed, scale, and performance. But how often do we just pause to ask a deeper question about trust, resilience, and what happens when systems are really pushed to their limits?
[00:00:31] Well, my guest today, he sits at the intersection of those questions. And he's the global CTO at Gorilla Technology Group. And he joined me from Jakarta late in the evening while I was recording from the UK. But he spends his days thinking about national scale systems, AI infrastructure, quantum safe security, and what it really takes to build technology that not just governments, but societies can rely on for decades.
[00:01:01] Not just a product cycle. And we'll also talk about why quantum risk is already a present concern, how regional realities shape AI infrastructure in different ways, and why sustainability has to be engineered, measured, and proven, rather than just talked about.
[00:01:19] And what I love most about this one, it's grounded, and just an honest conversation about what breaks first when companies scale too fast, why trust has to be designed right at the foundations on day one, and how good engineering preserves choice, rather than locking us into fragile decisions.
[00:01:39] Here at the Tech Talks Network, we now have nine podcasts and approaching 4,000 interviews. And that is only possible with some of the great friendships that I've developed over 10 years of podcasting. And a company I'm proud to call friends of the show is Denodo. Because not only have they been on this podcast multiple times, they also help make sense of the AI data chaos that we're seeing now.
[00:02:02] Because the data world is louder than ever. AI hype, lake house complexity, and pressure to deliver more with less. These are things that I talk about every day on this show. But Denodo is helping businesses make sense of it all. Because they provide a unified data foundation for trustworthy AI, lake house optimization, and data products to finally bring service to life.
[00:02:25] So whether you are a CIO or a builder, Denodo helps you activate your data with speed and governance. And their global partner network also helps you accelerate every step of the way. So if you're ready to unlock real outcomes, simply visit Denodo.com today. But now, it's time for today's interview. Let me introduce you to today's guest.
[00:02:48] So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Thank you for having me on the show, Neil. It's really great to be here. My name is Rajesh Natarajan. I'm the global CTO at Gorilla Technology Group. I have a very interesting job here at Gorilla. My work actually sits at the crossroads of AI, cybersecurity, and network infrastructure.
[00:03:15] And I spend a lot of my time focused on building resilient, sovereign-ready platforms, the kind of systems that need to kind of operate securely at scale, and systems that often exist outside of very traditional cloud environments. So I spend a lot of time thinking about how today's technology decisions shape trust and resilience for the next decade.
[00:03:43] Well, you're not joking. It really is an incredibly cool industry to be working in. And on this podcast, yes, we've covered quantum computing, AI infrastructure, and smart city platforms. But at Gorilla Technology, you span all of these things. And I'm curious, from your perspective as CTO, what is that unifying problem that you're actually trying to solve across all these domains?
[00:04:07] And where do organizations most often misunderstand the role of advanced technology in national-scale systems? Because we've seen a lot of mistakes. We've seen businesses struggle to get ROI from tech investments. But so what are you seeing here? Well, Neil, at the core, I like to believe that we are solving a very interesting problem of trust and resilience.
[00:04:29] Whether it's AI infrastructure, quantum safe security, or smart city platform, the challenges tend to be the same. How do you build systems that continue to operate securely, predictably, and at scale? And more importantly, when the conditions change, when connectivity is constrained, and when adversaries are extremely sophisticated.
[00:04:53] And I think this is where most organizations also fall prey to the common misunderstanding that advanced technology isn't thinking about its features and performance alone. But at national scale, technology is really about systems engineering. It's about governance. It's about interoperability, lifecycle management, and more importantly, failure modes. You know, if you take the example of AI, it doesn't help if the data pipeline isn't trusted.
[00:05:23] This is a huge problem today. Quantum safe crypto doesn't help if the key management mechanisms are fragile. Smart cities don't work if the edge systems can't operate independent of the cloud when the cloud is unavailable. So there are all these constraints that have to be taken into consideration before we actually even move a step forward.
[00:05:44] So the way I think about Gorilla is that our role is less about chasing the latest and greatest technology, but it's more about integrating the right technologies into architectures that governments and operators can rely on for the next, you know, 10 to 20 years. And that's exactly what you're doing.
[00:06:05] And before you came on the podcast, I was reading that Gorilla recently launched a quantum-safe SD-WAN aimed at future-proofing national AI and network infrastructure. So quantum-safe is a term that many leaders hear but struggle to contextualize. So what real-world risks are governments and enterprises exposed to today if they delay taking action in this area? Because it's a very important conversation, especially as we're preparing to enter 2026. Yeah, it is.
[00:06:34] And it is an area which is riddled with misconceptions. And in my humble opinion, the biggest misconception is that quantum risk is a future problem. But in reality, the exposure already exists today. You have adversaries harvesting encrypted traffic with the expectation that it can be decrypted later once quantum-capable systems mature.
[00:07:01] That means that sensitive data moving across networks today, governmental communications, infrastructure telemetry, IP, personal data may not remain confidential in the future if it's protected by just classical cryptographic methodologies. The second risk is an architectural lock-in. Networks and AI platforms have long life cycles, 5, 10, sometimes hopefully 20 years, given the amount of capex that goes into some of these projects.
[00:07:31] And if quantum safety isn't designed in now, organizations will face this costly and very disruptive retrofitting problem later on, and often at the worst possible time. Yeah. It's like the lock on your door at home, right? So it's always good to have a lock which is strong, whether you need it or not.
[00:07:58] And finally, also, I believe it is important for us to recognize operational risks. Quantum safe isn't just about algorithms. It's about how keys are managed, how systems update themselves, and how trust is maintained in a very distributed infrastructure. So delaying actions will basically narrow down options and will traditionally tend to force us down very rushed decisions, right?
[00:08:25] So the pragmatic approach, in my opinion, is to start with a hybrid crypto-agile architecture today so that you're protected now, one which is adaptable later. And, you know, certainly making sure that we're not building national-scale systems on hope. And as you said there, at the same time, there is healthy skepticism in the market about how close quantum threats really are.
[00:08:52] And as you said, a future problem some people see it as, which seems bizarre to me, especially where the entire cybersecurity industry is trying to be more proactive than reactive. So that kind of thinking just blows my mind. So how do you prepare balancing that long-term cryptographic shifts without overspending or overengineering in the present? Because I would imagine you still have to find that balance.
[00:09:19] Yes, and I think that's what most of the companies hope that they hit at the right and most opportune time. Yeah. But, you know, let me first acknowledge the fact that this skepticism that you talked about, it's healthy, it's real, and we share it too. So preparing for quantum risk, you know, it doesn't mean, you know, building a hypothetical future at any cost. You know, I wish we had blank checks, right?
[00:09:46] As a technologist, a blank check is my pipe dream, right? But at the end of the day, right, so this balance that we keep talking about, it comes from crypto agility, right? You don't rip out existing systems, right? And you don't bet everything on unproven algorithms. And instead, what you typically tend to do is you design architectures that can evolve, right? Where cryptographic primitives can be swapped out. Policies can be updated, correct?
[00:10:16] Performance trade-offs are understood upfront. And practically what that looks like is that it just means you start with hybrid approaches, right? Target the most sensitive data paths first. Align upgrades with normal refresh cycles, right? Because these algorithms get updated pretty consistently. There are new standards coming about. You know, NIST has certainly set its path on 2030.
[00:10:43] Trying to get on that particular bandwagon is going to be super important for all of us. And by doing this, what we get is protection against known long-term risks. So let's not chase the unknown ones. Let's at least chase the known ones, which we can actually control our destiny with. And we want to do this without carrying any unnecessary costs or complexity today. I think that's very important.
[00:11:09] The mistake isn't under-engineering or over-engineering. It's not about engineering. It is about locking ourselves into designs that remove optionality, right? Good engineering preserves choice, in my opinion. Something that we strongly believe at Gorilla. And that's something that actually helps us scale the way we are scaling today as well.
[00:11:34] And this is even more important when the timeline of this disruption is uncertain. So we have to kind of, to some extent, embrace that uncertainty, but at the same time, take a very pragmatic approach from an engineering standpoint as to how we can deal with that uncertainty as and when it becomes certainty.
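The crypto agility Raj describes — hybrid protection now, swappable primitives later — is often implemented as a hybrid key combiner: run a classical and a post-quantum key exchange side by side and derive one session key from both secrets, so the result holds as long as either scheme is unbroken. The sketch below is a minimal, stdlib-only illustration of that combiner pattern (HKDF-style extract-and-expand); the function name and the `info` label are my own, and this is not Gorilla's implementation.

```python
import hashlib
import hmac

def hybrid_combine(classical_secret: bytes, pqc_secret: bytes,
                   info: bytes = b"hybrid-kex-v1") -> bytes:
    """Derive one 32-byte session key from two independent key-exchange
    secrets. Because both secrets feed the derivation, the output stays
    secure as long as EITHER input remains unbroken — the point of
    running classical and post-quantum exchanges in parallel."""
    ikm = classical_secret + pqc_secret            # concatenation combiner
    salt = b"\x00" * 32                            # fixed salt for the sketch
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()        # HKDF-Extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # HKDF-Expand, 1 block
```

Swapping the post-quantum primitive later only changes how `pqc_secret` is produced; the combiner and everything above it stay put, which is what preserves optionality.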
[00:11:55] And before you join me on the podcast today, I was doing a little research and I quickly came across a $1.4 billion Freya Singapore contract to help build Southeast Asia's AI data center backbone. It feels like a massive milestone here, but what does a project of that kind of scale reveal about AI infrastructure priorities and how they differ between regions like, let's say, Southeast Asia, Europe and North America? It's a massive story, right?
[00:12:25] Yeah, it is. It is. We are very fortunate to have actually won that particular deal. See, at the end of the day, right, a project of that scale, it just makes one thing exceptionally clear to me. AI infrastructure priorities are deeply regional, even though the technology stack looks just the same, at least on paper. Right.
[00:12:54] In Southeast Asia, if I think about it, the priority is around speed. It's about sovereignty of the data. It's about efficiencies. Demand is growing exceptionally fast. Power and land are constrained and governments want these infrastructures that can be deployed quickly, scaled with modularity and operated locally. This is very important here in Southeast Asia.
[00:13:21] And there's a strong focus on right-sized capacity, right? They want to make sure that there's energy efficiency and resilience, but not excess. And this is kind of a governing principle here, at least that I'm seeing here in Southeast Asia. In Europe, on the other hand, the conversation is more constrained, I believe, humbly, by regulation and sustainability. Right.
[00:13:50] And people talk about grid access, carbon accounting, heat reuse, and compliance frameworks. I mean, that's what's essentially shaping the architecture and the decisions as much as performance does in other regions. Right. So expansion is happening, but it is deliberate and it's tightly governed. And I see goodness in that kind of an approach as well.
[00:14:14] And then you have North America, right, particularly the U.S., where it's all about optimizing for scale and specialization. You know, you see these large purpose-built AI clusters often tied closely to the hyperscalers with fewer constraints around speed of deployment. They want it functional. If it takes another couple of months, it's okay. Southeast Asia, no, I want it today. Right. That's a sense of urgency, right?
[00:14:43] But similarly, like, you know, in North America also, like, there's this higher expectation around raw performance and throughput. It's that thing, right? You know, why get a V8 when I can get a V12 in this car, right? I want that V12, right? And, you know, if I just step back and just think about the Freya project and what it highlights, is that there is no single global blueprint for AI infrastructure today.
[00:15:12] The winning architectures are the ones that tend to respect local realities. And these realities are measured in power, policy, and economics. And the end goal or the end expectation is that there is still these implementations that are technologically future-proofed. And that is also important for them, right?
[00:15:38] I mean, the CapEx investments into these things are just ridiculously high. And the more you squeeze out of it, the better off everybody is. And that seems to be the main goal across the world today. And I suspect for many people listening, when they think smart cities, they think about efficiency, safety, and sustainability.
[00:16:03] But they also raise concerns around surveillance, data ownership, and public trust, which again feels like quite a delicate balance. So how do you at Gorilla approach smart city deployments in a way that earns trust rather than resistance from citizens and policymakers, etc.? I like this question. I like this question a lot. I'll tell you what my philosophy is. Trust is the starting point.
[00:16:33] It is not the outcome. Smart cities fail when they are framed as technology deployments instead of social systems. And our approach is to build around three principles. First, you know, purpose limitation. Technology is deployed to solve clearly defined problems like traffic safety, emergency response, not broad open-ended surveillance.
[00:17:03] Second thing is data sovereignty and governance. We want to make sure that cities retain control over their data with clear policies on ownership, access, and retention. If the citizens don't understand where the data lies, who's got access to it, how it is going to be used, then their trust in that system will automatically erode. And the third thing is architectural restraint.
[00:17:33] I didn't say resilience. I used the word restraint over here. And the reason why I say that is because we tend to push a lot of intelligence to the edge wherever possible so that data does not cross thresholds unnecessarily. It doesn't have to be centralized unless and until there is a clear operational reason to do so.
[00:17:59] And that is not thinking about, hey, if I had this data with me, what can I do with it tomorrow? Right? And it's more about if I have this data with me, what do I need it for today? And how do I safeguard it so that it doesn't become a problem tomorrow? So that viewpoint is very, very different.
[00:18:22] And along with this, I also believe that when citizens and policymakers understand what these systems do and what it does not do, more importantly, and who is accountable at the end of the day, the resistance tends to drop significantly. Right? Transparency and governance, in my humble opinion, matter more than sensors and algorithms. 100% with you.
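The architectural restraint Raj describes — push intelligence to the edge so data only crosses a threshold with a clear operational reason — can be sketched as a small gating function on the device. The purposes, threshold, and field names below are illustrative assumptions, not Gorilla's actual policy:

```python
# Purpose limitation: only these declared purposes may leave the device.
ALLOWED_PURPOSES = {"traffic_safety", "emergency_response"}

def edge_filter(events, min_confidence=0.8):
    """Decide at the edge which events justify being centralized.
    Everything else is counted and handled locally; forwarded events
    are stripped down to the fields the central system needs."""
    forward, kept_local = [], 0
    for ev in events:
        if (ev.get("purpose") in ALLOWED_PURPOSES
                and ev.get("confidence", 0.0) >= min_confidence):
            forward.append({k: ev[k] for k in ("purpose", "confidence", "ts")})
        else:
            kept_local += 1          # raw payload never leaves the edge
    return forward, kept_local
```

The design point is that the default is "stays local": an event must affirmatively match a declared purpose to be forwarded, rather than being collected centrally "in case it's useful tomorrow."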
[00:18:51] And of course, 2025 has been a fantastic year for Gorilla. You reported a record Q3 revenue and also issued guidance through 2026. And I'm curious for other tech leaders that are listening and from a tech leadership standpoint, what is it that tends to break first when companies typically scale this quickly? It's a nice problem to have. And how are you designing your platforms and your teams to avoid any failure points that you may have seen others make in the past?
[00:19:18] Yeah, we certainly have been really blessed in 2025 and 2026. Looks even better. But, you know, it's important for me to recognize one thing, right? I can tell you what does not break first. Yeah. Technology does not break first. What breaks first is alignment. Processes lag behind scale. Assumptions that we had stop being true.
[00:19:47] And the scary part is teams end up solving yesterday's problems with today's complexity. Yeah. That is what kills companies. So from a platform standpoint, I think the biggest risks are literally like brittle architectures, hidden dependencies, systems that work at smaller scale can fail when the load increases. Right. This is just, you know, engineering physics.
[00:20:16] There's no two ways around it. This is something that we just need to constantly look into. And that's why, you know, my fundamental philosophy and goal within the organization is to design for modularity. Right. Clear interfaces. Operational observability from day one. Right. Without these two things, nothing ever gets out of our company from a release perspective.
[00:20:41] There's also the human aspect of this, you know, and the biggest human aspect problem when we scale too fast is decision velocity. Hmm. Things take time. People start second guessing themselves. Right. So as organizations grow, right, like decisions naturally, they tend to slow down. Right. Ownerships blur.
[00:21:08] And, you know, God forbid the technical debt, you know, the bad word for all engineering organization that tends to accumulate. Right. Right. And the way we counter that is we keep teams small. We keep teams accountable and close to the system that they build. Right. With very clear technical standards. Right. So I have a center of excellence who's looking at what we should be doing, what we should not be doing. They, you know, spread that knowledge across the organizations.
[00:21:37] And when teams have to make localized decisions, they're using these patterns and practices. And if they think that there's a better way to do it, they can go back and interact with the center of excellences and change those patterns and practices if required. Right. But it's a very important thing for us to also be flexible along those lines. Right. So like today, my engineering and my operational team, it's a global team. I have teams in Taiwan, India.
[00:22:07] I'm building one right now in Thailand. I have a team in Egypt. I have a team in UK. Right. And the idea for us is to hire the best and brightest talent that we find, put them, give them autonomy, give them accountability for the tasks that they run into, and make sure that, you know, the agility that we need within the organization is there.
[00:22:32] But at the same time, there's also contribution back into that center of excellence so that we can actually scale knowledge just as fast as we are scaling our business. Right. So I think that's the most important thing. And I hear a lot of leaders talk about, oh, growth needs to be sustainable. We have to be careful. I agree with that. Right. But scaling for sustainability, I don't believe it's about moving slower.
[00:23:02] Right. It's about removing friction before the friction becomes a failure. Right. And I think like, you know, waking up every day and, you know, just thinking in terms of what, how can we solve this problem better? Not necessarily build a better mousetrap, but is there other ways for us to get this optimized? Right. And I think inculcating that culture across my entire engineering organization is one thing that's actually helping us scale.
[00:23:31] Similarly, you know, the procurement methodologies across these different regions that we work in are very, very different. Right. So if you do not have patterns and practices over there, you know, even the procurement life cycles become a lot more complicated for us. So that's why that regional presence, that regional knowledge is exceptionally important for us. And I'm glad you raised that because we are at a time where greenwashing is a very real problem.
[00:23:58] And sustainability is positioned by Gorilla as a measurable engineering principle rather than a branding exercise, which feels incredibly refreshing, especially with initiatives like the One Amazon Impact Fund. So how do you technically validate that digital infrastructure projects are delivering measurable, real environmental outcomes rather than just indirect promises? Because it comes with a big responsibility, doesn't it, when you set out on something like this?
[00:24:28] Yes, especially when you're talking about, you know, trying to save about 25 to 30 percent of the world's fresh water reserves. It is a very scary value proposition for us to go after. And it certainly is a very humbling value proposition as well, because it brings a lot of things to light. We spoke about performance. We spoke about security earlier on today. Right. And we treat sustainability exactly the same way.
[00:24:56] If we can't measure it, it isn't real. Yeah. Right. Number one mantra for us. So at a technical level, what that really means is that, you know, we have to instrument infrastructure end-to-end. We measure power consumption at the workload level, not just at the facility level. Right. We track PUE, energy sources, utilization efficiencies, carbon intensity over time. Right. And these are not one-off reports. Right.
[00:25:25] We do this rigorously on a weekly, on a monthly basis. From edge and IoT systems, we look for device lifespan, maintenance costs, data efficiencies, and how much insight are we generating per unit of energy consumed. It's great I have a camera. It's looking at a wall. There's nothing walking by over there. Why did we put the camera there? Or why did somebody put the camera over there? Right.
[00:25:50] Even just like, you know, some common sense can go a long way. With initiatives like One Amazon, that validation goes a step further for us. Because, you know, you talked about greenwashing, right? And for us, like environmental outcomes, they have to be tied to verifiable telemetry.
[00:26:13] Whether that's sensor data, satellite correlation, or just independent data sets, right? We don't want impact to be inferred. We want it to be observed. Like reforestation, biodiversity signals that we talk about, any anomaly reductions, they are measured continuously and audited periodically. And that's very important for us, right?
[00:26:43] And I call that closing the loop. So the key over here is we have to close the loop, right? Digital infrastructure, it needs to produce data that proves its own impact. And if a system can't demonstrate that feedback cycle, we don't call it sustainable, regardless of how good it looks on papers or slides. So that's a very important thing for us.
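The workload-level measurement Raj insists on — "if we can't measure it, it isn't real" — comes down to two numbers per workload: its share of total facility energy (attributed via PUE, the ratio of facility energy to IT energy) and insight produced per unit of that energy. The sketch below uses the standard PUE definition with illustrative numbers and metric names of my own; it is not Gorilla's telemetry pipeline:

```python
def workload_metrics(workloads, facility_energy_kwh):
    """Attribute facility overhead (cooling, power conversion) to each
    workload via the PUE ratio, then compute insight-per-kWh.
    `workloads` maps name -> (it_energy_kwh, insights_produced)."""
    it_total = sum(e for e, _ in workloads.values())
    pue = facility_energy_kwh / it_total           # PUE = facility / IT energy
    report = {}
    for name, (it_kwh, insights) in workloads.items():
        attributed = it_kwh * pue                  # workload's full-facility share
        report[name] = {
            "attributed_kwh": round(attributed, 2),
            "insights_per_kwh": round(insights / attributed, 2),
        }
    return pue, report
```

A metric like `insights_per_kwh` is what surfaces the camera-pointed-at-a-wall problem: a device drawing steady power while producing almost no insight shows up immediately in the weekly report.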
[00:27:10] And how important it is to you really shines through. And anyone listening to this podcast, if they pop by your website, one of the first things that they will see is that your chairman and CEO was nominated for the 2025 Nobel Sustainability Trust Award for leadership in implementation. So, again, big achievement. But what responsibility does that place on the tech organization itself?
[00:27:36] And how do you personally define success when innovation, security, and sustainability are almost sometimes pulling different directions, maybe? Yeah, totally. Before I dive into your question, congratulations, Jay, and congratulations, Gorilla team. The tribe has been working really hard. And these recognitions, they're once in a lifetime. Maybe for us, it might be twice in a lifetime. Let's see how things go.
[00:28:05] But the interesting aspect over here is that recognitions like these, you know, it raises the bar internally. Whether it changes anything externally or not, it doesn't really matter. It raises the bar internally. The day we announced that the way the tribe showed up at work, everybody had their head up high, their chin high, they were proud.
[00:28:30] And then they went straight into work to see whether what they were doing was really sustainable or not, right? So, in my opinion, you know, it kind of reinforces that our responsibility is to deliver. We don't believe in declaring intent. We like our work to kind of speak for itself.
[00:28:53] Now, for a technology organization, right, it means that every single architectural decision has to stand up to three tests at once now, right? One, does it advance capability? Typical engineering, right? Second thing, does it reduce risk? Social aspect of it.
[00:29:19] And now we added the third one, which is, does it do so efficiently over the full life cycle of the system? Not just at startup or when nobody's using it, correct? But for the full life cycle of that system. And, you know, if it only satisfies one or two of these, it's still a no-go for us. It's not good enough, right? So, this has become a standard for us at that level.
[00:29:48] Now, personally, I define success as alignment over time, right? Innovation, security, and sustainability. They all kind of pull you in their own directions in the short term, right? And to me, good technology leadership, it isn't about optimizing for one axis, right?
[00:30:10] It's about designing systems where the trade-offs are explicit, measured, but more importantly, they are reversible, right? So, if five or 10 years from now, the systems that we have built are still trusted, still adaptable, and still operating responsibly in the environments that they were designed for, then I think that, you know, me and the tribe, we've done our job properly, right?
[00:30:38] And I think that is such a powerful point to end on. But before I do let you go, anybody inspired by your journey, want to find out more information about your work and what you're going to be doing in 2026? And there is a lot going on there. Where would you like to point everyone listening? I think the best place to go check us out is at our website, www.gorilla-technology.com. We publish case studies and blog articles pretty periodically.
[00:31:07] There's going to be a few more coming up in the new year, which are specifically sustainability-focused as well. So, that's a great place. And you can always also look us up on LinkedIn. Awesome. Well, I'll add links to everything, including some of the work that you're doing there. I encourage everyone listening to check you out. And you really are leading by example on this stuff. But more than anything, just thank you for taking the time to share your story. Best of luck next year. Hope to get you back on, see how things progress. But thanks for joining me today.
[00:31:36] Thank you for having me, Neil. Appreciate it. I genuinely found this conversation refreshing because I think it cuts through a lot of noise and focuses on fundamentals. The kind of fundamentals that often get lost when chasing headlines. And what stood out to me was the way that he framed trust as a starting point rather than an outcome.
[00:31:58] And whether we are talking about quantum safe networks, AI infrastructure or smart cities, I think it's that mindset that feels increasingly important, especially as technology becomes more embedded in everyday life and national systems. And if this episode got you thinking differently about how we design for scale, resilience, trust, transparency, sustainability, all those things, I'd love to hear your thoughts. So please, techtalksnetwork.com.
[00:32:25] You'll find how you can send me DMs on socials and even leave me an audio message from the site. But that is it for today. So let me know your thoughts and I'll return again tomorrow. Bye for now.

