Here's the thing: we all heard Sundar Pichai say that the easy wins in AI have faded and that we may see fewer headline‑grabbing releases from the big players over the next year. That comment feels like a red flag for momentum, but I see it as a green light for action.
In this episode, I chatted with Rahul Pradhan, VP of Product and Strategy at Couchbase, about how teams can take advantage of this pause to move projects from simple experiments into solid, production‑ready services.
I ask why many organizations hesitate to send their data to public AI endpoints. Rahul explains that when you've invested years building data platforms, handing over your proprietary information—even in encrypted form—can feel like handing over the keys to your kingdom. He walks us through how running models inside your security perimeter keeps private data safe and improves model accuracy, since you can tailor inputs and scrub out noise before it ever reaches the inference engine.
Next, we tackle the question of stability. Companies often assume that the path to a live service is straightforward once a pilot works. Rahul warns that managing GPUs, orchestrating models, and serving them at low latency all require skill sets that live at the crossroads of ML engineering and traditional software development.
We round out our conversation by shifting focus from tools to teams. Technology alone cannot carry an AI initiative. We need leaders who set a clear vision, data stewards who govern every data flow, and developers who feel as comfortable writing database queries as defining training pipelines. Rahul offers thoughtful advice on building that culture and shares examples of industries—healthcare, financial services, and retail—where the most far‑reaching uses of AI are taking root.
If you're wondering how to push your proof of concept into a robust service that customers depend on, this episode is for you. I promise you'll come away with ideas you can apply tomorrow and a fresh view of why a little breathing room in AI releases can become the launch pad for your subsequent big success.
[00:00:03] AI has been moving at breakneck speed recently, and according to Google's CEO, the low-hanging fruit, well, that's all gone. Meaning that the next phase of AI innovation won't be about flashy new features, but about getting real-world AI applications into production and solving real problems.
[00:00:24] And to talk about this, I've invited Rahul back onto the podcast. Many of you will remember he's the VP of Product and Strategy at Couchbase. He's been on the podcast several times before. He's speaking to me today in Santa Clara, which is somewhere that I've always wanted to visit, mainly because I'm a huge fan of the Lost Boys film. But more on that story later.
[00:00:43] My guest has spent years at the forefront of AI, data infrastructure and security, and he knows firsthand why so many enterprises struggle to transition from AI experimentation to full-scale deployment.
[00:00:59] From privacy concerns and compliance challenges to scalability and infrastructure hurdles, enterprises are now realizing that taking AI to production requires more than just good models. Very often, it demands a whole new approach.
[00:01:15] So, what are the biggest roadblocks stopping AI adoption today? How can your organization secure your AI models without stifling innovation? And how will the next wave of AI reshape industries from finance to healthcare and retail? Well, let's find out by getting my friend Rahul back onto the podcast. Time for me to introduce you to him now.
[00:01:39] So, a massive warm welcome back to the show. For anyone that has missed our previous conversations, can you just remind everyone listening with a little about who you are and what you do? Hey Neil, first of all, great to be back here to chat with you on all things AI. So, a topic that's pretty close to my heart. So, just by way of introduction, my name is Rahul Pradhan. I'm the VP of Product and Strategy here at Couchbase.
[00:02:05] Primarily responsible for our short and long-term product strategy focused on AI and data. So, one of the areas that we are really hyper-focused on as a company is around this space of how do we enable developers and organizations to build these AI-native applications, which are built on the foundations of agentic workflows and vector search and transactional and analytical data,
[00:02:31] a lot of which is kind of the space that we spend our days in. So, excited to be here and looking forward to our conversation. Yeah, me too. Pleasure to have you back on the show, and there's so much I want to talk with you about, because we're entering the third year since everyone went crazy over generative AI. We're seeing maturity, questions around ROI and measurable impact, all that good stuff.
[00:02:58] And so much so that Google's CEO recently stated that the low-hanging fruit is gone for AI. So, I mean, do you see that as an opportunity for enterprises to shift their focus from experimentation to full-scale AI deployments? What did you take away from that? I would echo what Sundar said, because that's one of the common threads that we keep hearing even from our customers,
[00:03:24] is they are at a stage where, a couple of years back, Gen AI was all the excitement; the whole ChatGPT moment really triggered people's imagination. And people started doing pilots and various small projects to get started and validate whether they were actually seeing the results that they were expecting. And what they discovered through the pilots was, well, a couple of things.
[00:03:49] One is, you can't completely treat AI, especially for these enterprise use cases, as a black box. There's a lot more work that needs to be done in order to really build a production-grade application on top of that. The other thing they are finding out is all these issues around security, privacy, and compliance. And then how do you bring the data that they have within their enterprise closer to the AI model,
[00:04:19] or rather move the model closer to the data? Those are primarily the challenges that they are getting stuck at today. So what you see is the POCs and those projects that get kicked off to validate some of these use cases turn out really well, especially when you use some of the foundation models which are open.
[00:04:43] But then when you start thinking about the broader implications of how do I enable these production-grade workloads on my data, that's where the challenge exists for enterprises to go from that POC phase into production. And I think many organizations are still hesitant to use external AI APIs and models due to security and compliance concerns. And the reasons why they're so nervous about that are quite obvious.
[00:05:13] But there is a solution here. So what steps do you think companies should take to ensure privacy while leveraging AI? Because it's one of the biggest topics that I keep hearing again and again. Yeah, and I think that's probably one of the top issues at the minds of leaders at these companies. So there are a couple of dynamics here.
[00:05:37] One, they realize that they have to leverage AI in order to be successful, and that it becomes a competitive advantage, because everybody will be doing it. It also enables, frankly, a lot of the use cases and applications to move much faster than they have been able to, and to achieve the kind of successes and business results that they would have expected.
[00:06:00] The challenge, though, comes down to, at the end of the day, if you think about what any AI or LLM is, it's really a model that is trained on a global corpus of data, which is typically what the foundation models are. What those models lack is specificity around the data that you have, which is what you need to combine with those models in order to give you the meaningful insights,
[00:06:27] the meaningful outcomes that you want to drive for your businesses. So that's where the conundrum lies for the enterprises: how do I bring this data, which is all my private and proprietary data with all the domain knowledge, and combine it with that AI model? The moment you send it outside of your boundary, whether it's a cloud boundary or on-premise infrastructure, you are exposing your data to a third party.
[00:06:56] Even though the data might go out encrypted, you have no control over how the data is managed, whether the model is being trained on the data or not, and what have you. So those are some of the challenges that enterprises are facing: how do we get the data and the model close together within a security boundary and perimeter that actually works for them?
[00:07:22] The other aspect is, and the reason why I'm talking a lot about the data is, the inherent nature of an AI model. I think the last time we talked, we discussed RAG and how AI models are probabilistic in nature, so they could hallucinate or give you the wrong results, which could have a significant business impact on the outcomes that you're trying to drive.
[00:07:50] So in order to eliminate or reduce that, you need a lot more governance around the data that gets provided to the model; the kind of data that needs to get scrubbed, cleaned, and combined with the model becomes critically important. And you need to be able to do that within a boundary that you are comfortable with. So those are some of the key challenges that enterprises are tackling today,
[00:08:18] and are looking at solutions on how do I enable that to happen so that I can actually build an application that can give me meaningful insights and outcomes at the end. And another big topic I keep hearing about is that achieving production-grade stability for AI applications is another major challenge. So what would you say are the biggest security and infrastructure obstacles that businesses are facing, and why do so many struggle to overcome that?
[00:08:48] Because it's, again, something I keep hearing about. Yeah, I think if you look at what an enterprise really needs to do, right, you need to bring a model or multiple models within your security boundary, which inherently means that you need to have internal teams with the capabilities to spin up infrastructure to run the model.
[00:09:15] So that's going to be GPUs that you will potentially run. Go figure out an open-source or open-weight model that you could run. And there are a ton of them; there's been a lot of innovation on that front. So that's the aspect where enterprises are really excited about the innovation that's happening there. Now they are tasked with the idea of how do I make this work for my setup,
[00:09:43] which means doing it yourself internally within your boundary or going to a third-party provider. When it comes down to doing it yourself, you need to have the expertise not just to manage a data team that can clean the data and gather the data, which is typically spread across multiple different databases in an enterprise. So if you think about a traditional enterprise,
[00:10:09] they have various different database vendors that become part of their data estate across their transactional workloads, analytical workloads, as well as the real-time analytics workloads that kind of reside between the two. And then with AI, there's the whole rise of the concept of vector databases, which are really focused on using vector embeddings.
[00:10:37] That is the language that the AI model can talk to. So now you have this disparate data estate that you need to manage. So not only do you need a team to manage that, you also need essentially a team that is proficient in the AI ML world to be able to manage the infrastructure, to optimize the infrastructure, to run those models. And it doesn't stop there, right? Because at the end of the day, even if you provide the model with the data,
[00:11:05] now you need to figure out if the model is giving you the outcome that you expected or not. So there's the whole aspect of evaluation and traceability: whether the output the model gave you corroborates what you were expecting, whether it aligns with some of your corporate policies, and then making the determination of whether that's an output that is worth using or not.
[00:11:35] And that's where a lot of these enterprises are really focused on figuring out how they actually fine-tune some of the models with the domain data that they have. Either they do that at inference time, or they do the fine-tuning by further training the model, augmenting it with the data that they have. So those are, as you can imagine, some of the challenges they are looking at; these
[00:12:02] vary from not just being able to get the data in the door to getting the AI models to work with the data, and everything in the middle, in order to validate the workflow for these models. And all of this, by the way, applies if you're building an application. One of the things that we have got used to is just the responsiveness of applications in general. And now, especially when you think about AI applications,
[00:12:30] one of the expectations is you just converse with an application, whether it is through natural language, whether it's a chatbot or you're talking to an application. What that means is, when you have that kind of real-time communication, the performance and the latency become even more critical. How many times have we tried to chat or have a conversation with a voice assistant? And if there's a lag involved, a lot of the time people just tune out,
[00:12:59] because it feels like it's not doing enough, or it's putting an obstacle in your way, versus you actually going and typing something and getting it done. So those are some of the areas where expectations are heightened. There's an expectation that applications are going to be hyper-contextual, they're going to be real-time, they're going to talk to you in the language that you can converse in. And by the way, they're going to do all of that with the data variety
[00:13:28] and the formats that you have within a typical enterprise, in real time, such that there's no perceived lag. So that is an enormous challenge for an enterprise to take on their own. And for organizations that are aiming to build AI solutions on their own, a problem they usually run into sooner or later is scalability. So are there any key factors that companies should consider
[00:13:55] to ensure that their AI applications can handle enterprise-level demands? Yeah, I think the performance and scalability issues that you highlighted are probably the most critical for an enterprise when they are betting their business on applications, or when their applications run their businesses. So for them, the key thing is how do they make sure
[00:14:23] that they reduce some of these latency lags that they will see? And at the end of the day, the laws of physics apply here: it takes some amount of time, whether seconds or microseconds, for the data to traverse boundaries. But beyond that, rationalizing your data platform in a way that you have a store or a database
[00:14:52] that can actually manage not just the transactional data that your applications often need, to provide insight on the existing session a user may have with you, but also the historical analytical data. To be able to do that processing, and to be able to share it across the different agents, if you will, which is kind of the new AI-native way of doing things,
[00:15:19] to do that is critically important for them. So: making sure that you rationalize your data estate, being able to bring the AI model closer to the data itself, and then being able to actually validate the output of the AI model. Those are some of the steps that they need to implement in order to get to a more deterministic output from these models,
[00:15:47] which are all inherently probabilistic in nature. So you're trying to solve a problem where you take a probabilistic output and try to get a deterministic outcome from it. And another theme I'm hearing at tech conferences: if you're hearing about an issue in the IT infrastructure through a support ticket being raised, you're already too late. You've already failed. There needs to be more of a proactive than reactive approach to that stuff. And as a result, observability tools are becoming essential for all AI operations.
[00:16:17] So how can organizations use them to improve visibility, but also detect issues earlier and accelerate their AI deployments too? No, I think this is a great point in terms of how do you make sure that you are not coming out as reactive, especially when you're building these AI applications? Because again, where we are now just anchored on
[00:16:45] in terms of our experiences with applications is that these applications are all real-time. They are going to react to us in the moment, which means that if there is a human in the loop who has to look at the observability data and react to it, or a user has to file a support ticket, it is actually too late already. So this is where, again, a lot of use cases leverage AI
[00:17:15] in order to solve these problems too. What that inherently means is you don't just have one model; you may have multiple models within your overall app ecosystem that look at the data and the output of the previous models, and then run it through an AI in order to triage the outcome and give you the analysis and synthesis, which can then help you make the decisions
[00:17:44] that you need to make in much more real time than you would have otherwise. And if we do dare to look beyond the technology just for a moment, something even more important that often gets neglected or even forgotten is the cultural or organizational shifts that are needed to move AI from that proof of concept to production grade applications successfully. So any tips or advice around that? Because I think very often we get distracted by AI, by technology,
[00:18:14] when it is the organizational shift, the culture within an organization, that's so important too, right? Yeah, yeah, absolutely. And as much as it's a technology shift, I think it's also a cultural shift for a lot of organizations, because in the past, what has happened is you've always had these AI teams who are really focused on machine learning and on building their own models and algorithms, but they were always very siloed
[00:18:42] from your typical developer teams or the database teams. And now, with AI coming together, there's a merger of the two teams going on in a lot of places, where it's now also the developer's responsibility, in order to build AI-native applications, to be conversant with it, to know what it means to leverage AI, and to understand all the nuances that go with AI and ML
[00:19:12] that previously they didn't have to worry about. So what that means is, from an organizational perspective, you need to have strong leadership and a vision of where you are trying to go. Going back to how we started, there is a lot of value that can be had from AI, but it's also very important to think about where you are leveraging AI and the outcomes you are trying to drive. So having that strong executive leadership and vision is absolutely important
[00:19:42] in the AI space. And then, as you go down that journey, it's really important to make sure you're upskilling your talent, hiring people or developing talent internally who can actually learn more about AI and the effective use of AI. A lot of this is around governance of AI: how do I ensure that I'm not compromising my company's assets
[00:20:12] when it comes to AI, especially if people are using any of the publicly available APIs and models. So that becomes particularly important. And then, across all of the teams we talked about, making sure that there is cross-functional alignment, across employees, the engineering team, and the rest of the company. Everybody is aware, and there's transparent communication in terms of
[00:20:41] what does AI mean? How do you effectively use it? Because at the end of the day, for a company to be successful, it's not just the application. It's also how the internal teams leverage AI that is going to make it a success, for the way they operate and the way they are going to be faster at executing some of the workflows that they have been doing for the past several years. So there's change management and there is education
[00:21:10] that companies need to be aware of. And I'm curious, from everything that you're seeing here across multiple industries, are there any particular industries that you see as maybe best positioned to take advantage of the current lull in major AI advancements, and build and deploy scalable AI-driven solutions, and maybe even get a competitive edge as a result? Do any industries seem to be moving quicker than others? No, I think there is. So if I look at the opportunity, right, I think
[00:21:40] the fundamental thing with AI is that it is an opportunity a company in any industry can leverage. Right? So if you look at where we have come from, a lot of these companies are in varying stages of their digital transformation journeys. What that means is you're inherently used to the old processes of having
[00:22:09] your employees work with workflows that leverage applications and data on the backend in order to do some manual activities. Now you are starting to get into a world where all of those workflows can potentially get automated, and the employees come back and check in on a workflow to make sure it is getting executed. Right? So it becomes more
[00:22:38] about the shift from what a single person or a team of people might be doing, working across multiple different applications, to one person potentially looking at the output of an application that runs through these multiple workflows to get to that outcome. As a result, I think the application of this across industries becomes pretty widespread. Of the industries that we see really jumping out, healthcare is one area that people
[00:23:08] are really excited about, in terms of what it means for health: patient care, new drug discovery, as well as some of the impact it can have on diagnosing, and potentially finding cures for, diseases that we haven't been able to so far. The other industry that stands out, a vertical that has been doing a lot in this space for a while, is financial services, right? So you have
[00:23:37] fraud detection, automated document processing, and things like that. Those have been going on in the financial industry for a while, but now with the focus on AI, there's a lot more they can do to accelerate some of those capabilities that they were trying to build themselves. And then one common thread that we see is around customer service, whether it's travel-industry chatbots or agents to help you
[00:24:06] build itineraries and plan trips for you or even in the retail space which is really interesting and fascinating, especially kind of the space that we sit in the stack where you have the capability now to dynamically look at the content on the shelf, figure out where things are based on the color that you're looking for, whether I want blue shoes or green shoes and show me all the different shades of that color. And then knowing where you are,
[00:24:36] again, being able to direct you in real time to the actual location, potentially as a retailer, show you some offers and deals to entice you to go into a store. I think those are some of the things that have a lot of value with generative AI and can potentially accelerate the journey there. So many great examples there. I'm curious if we look ahead, are there any developments in AI infrastructure
[00:25:06] and security that you think might evolve and eventually help enterprises overcome some of these challenges and ultimately accelerate adoption in the months and years ahead? Anything you're seeing here? Yeah, I mean, there's a lot of excitement across the various different components that go into an AI system, right? Whether it's the infrastructure itself, around the GPUs that NVIDIA has
[00:25:36] and some of the other providers provide. They are constantly going at a furious pace in terms of the evolution of the GPUs themselves, which basically is going to mean much cheaper and faster inferencing, as well as a lot more availability of GPUs. That goes back to the Jevons paradox, where the more
[00:26:05] affordable AI becomes, the more people are going to use it, and that's going to lead to a lot more adoption of AI; AI is going to get embedded everywhere. The other big component we are hearing a lot about is the advances in quantum computing. I think this again takes compute infrastructure to a whole other level. It also brings with it very interesting challenges around security and observability that we need to be very aware of. So the security space is
[00:26:34] another space which is continuing to evolve: as we look at AI, how can we leverage AI to make our systems more secure, and essentially stay one step ahead of people trying to break into them? But again, with the advances in infrastructure, it means that you can actually process things faster; you can find ways to break things that previously couldn't be broken. So I think there's a lot
[00:27:04] of innovation happening in that space as well. And then lastly, I think a space that's probably near and dear to us is really more about the data: what does AI really do for data? If you think of it, traditionally, all data that is stored in a database has been structured or semi-structured data, in SQL and NoSQL systems. You basically had to use a query language in order to get insights. But if you look at a typical organization,
[00:27:34] about 80 to 90 percent of the data, depending on what kind of survey you're looking at, is really unstructured data: data that is stored in PDFs, video data, multimodal data, spreadsheets, and stuff that was previously not easily accessible to your application, and not easy to parse to get some analysis or synthesis out of it. What AI lets you do is actually manage
[00:28:03] that data, filter through that data, process it, and then get whatever insights you need, along with the rest of the data that you have in structured and semi-structured formats. So for us, I think it is pretty exciting to look at the data variety that can be not just managed but leveraged by AI to get those insights. And then, going back to infrastructure a bit: as you think
[00:28:33] about these models getting smaller and compute becoming faster, you essentially have the capability to run these models locally, on your mobile phone or on your edge devices. Being able to bring AI closer to where the user is, is significantly important, and I think we are starting to see a lot of innovation happening
[00:29:03] in that space, whether it's with the latest Apple MacBook that was launched, as well as some of the phones that Google and Apple are bringing to market, which have pretty much AI-grade infrastructure within the devices themselves. Wow, so much food for thought, and a big thank you for joining me again on the podcast. The last time you joined me, you shared your insights and also added a book to our Amazon wish list. This time around, I'm going to see if there's something we
[00:29:33] can
[00:31:26] Absolutely love it. Well, let's see what we can manifest together. It's out there in the universe now. Let me know if anything does occur. We'll see what we can make happen. And for anyone listening wanting to find out more about anything we discussed today, and about your work at Couchbase too, where would you like to point everyone? So, I mean, both I and Couchbase have a presence on LinkedIn and Twitter.
[00:31:51] So look me up and feel free to send a message, a tweet or what have you. And I would love to connect with folks. Oh, so it's been a pleasure as always to talk about everything from security and privacy, production grade stability, solutions for the future as well, and how these evolving frameworks can help enterprises meet challenges and ultimately accelerate their AI deployment. So many opportunities there.
[00:32:19] I'd love for people listening to check out what we've talked about today, contact you at Couchbase, and also feed back to me: what are you working on right now? What difference is it making to you? But more than anything, just thank you for starting this conversation once again, Rahul. Thank you so much, Neil. As AI adoption accelerates, the real challenge isn't about building models. It's making sure they're secure, scalable, and production-ready.
[00:32:47] So a big thank you to today's guest for giving us that deep dive into the evolving AI landscape, and also showing how organizations can better bridge that gap between AI hype and real business impact. But what do you think? Are enterprises ready for the next phase of AI deployment? Or will security and infrastructure challenges continue to slow them down?
[00:33:13] Drop me a direct message on LinkedIn, Instagram, X, just at Neil C. Hughes. Let me know your thoughts around that. And if you found any element of today's discussion insightful, maybe subscribe for more episodes where we can unpack the tech trends shaping our future together. This is a dialogue, after all, not a monologue. So please keep your messages coming in. And until next time, stay curious, keep innovating, and I'll speak with you all tomorrow. Bye for now.

