What if managing databases on Kubernetes didn't require a team of specialists or endless configuration? In this episode of Tech Talks Daily, I sit down with Tamal Saha, founder and CEO of AppsCode, to explore how his team is building a more intuitive, scalable, and enterprise-ready approach to Kubernetes-native data management.
Recorded during the IT Press Tour in London, this conversation traces Tamal's journey from Bangladesh to Google, and ultimately to launching AppsCode in 2016. He shares how early experiences with Google's internal systems helped shape his vision for a cloud-native data platform built for modern application environments. What began as an open-source passion project has evolved into a comprehensive enterprise suite that includes solutions like KubeDB, Stash, Voyager, and KubeVault.
We discuss the operational realities of managing databases in Kubernetes—from simplifying provisioning and backups to solving problems around TLS management, multi-tenancy, and even secret rotation. Tamal outlines the benefits of a Kubernetes-native architecture for platform engineers, DevOps teams, and developers who want to deploy faster, automate more, and keep full control over their environments.
With real-world insights from enterprise deployments, including large-scale telcos modernizing their infrastructure, Tamal explains how AppsCode is helping organizations move from ticket-based database provisioning to true self-service. He also looks ahead to what's coming next, including support for vector databases, AI-powered provisioning interfaces, and deeper integrations with OpenTelemetry and observability tools.
Whether you're running stateful workloads in Kubernetes today or exploring how to modernize your data layer, this episode is packed with insight into building platforms that work with developers, not against them.
So, is it time to rethink your approach to data in the cloud-native era? Let me know what you think.
[00:00:03] What if running databases in Kubernetes could be as simple as spinning up a container, without the complexity or cloud lock-in that often comes with it? Well, in today's episode of Tech Talks Daily, I'm joined by Tamal Saha, founder of a company called AppsCode, who I met at the IT Press Tour in London here in the UK.
[00:00:27] And they are a company that is transforming Kubernetes-native data management with a suite of tools designed for developers, DevOps teams and platform engineers. So I want to learn more about the story behind the company and their flagship products, from KubeDB, Stash, Voyager and KubeVault, to find out how the team is tackling head-on the often siloed, costly and manually intensive world of data
[00:00:57] infrastructure. And also, of course, how new trends like AI-driven automation, vector databases and OpenTelemetry are shaping the company's next chapter. So if you're working with Kubernetes or simply modernizing your data operations, this is one conversation you're going to want to listen to. But enough from me. It's time for me to officially introduce you to today's guest.
[00:01:23] So a massive warm welcome to the podcast. Can you tell everyone listening a little about who you are and what you do? Thanks, Neil. Thanks for inviting me to your podcast. So I'm Tamal Saha. I'm the founder of the company called AppsCode Inc. So we are a US-based software company. We are developing products in the cloud native space.
[00:01:43] So we are here in London for KubeCon 2025. So yeah, we are US-based. Our major product is called KubeDB, and it's for database management on Kubernetes. That's kind of what we are focused on today. And I'd love to find out more about everything that you're doing. But before we do, I always like to take my guests back in time.
[00:02:06] And we recently spoke at the IT Press Tour in London and I'd love to find out more about your origin story. So can you tell me a little bit more about your journey, which I believe began in Bangladesh, took you to Google and ultimately from there, what was it that inspired you to launch AppsCode back in, I think it was 2016, right?
[00:02:27] Yeah. Yeah. So my story is a pretty common immigrant story. So I was born and raised in Bangladesh, came to the United States as a PhD student in 2009. And after I finished my master's, I worked at a few companies, initially at Amazon and then at Google. So in 2015, I was actually working at Google, and I was in their AdWords organization in the beginning.
[00:02:54] Later I was working in their cloud organization, on what's called Firebase these days. So we were working on that, and internally we could see that the container wave was starting to happen, right? The Docker company released their Docker project, I think in 2013, and it was starting to take off. Now, one of the benefits I had working at Google was that internally we used a system, I believe it's still used today, called Borg.
[00:03:23] And Borg is kind of like what Kubernetes became as an open source project, where you take your application, package it up in some proprietary internal format, and push it into Google's internal infrastructure. That's how pretty much all the internal product development happens at Google. And when Docker came out, Google wanted to get a foothold in the cloud market.
[00:03:53] And it seemed for them, Kubernetes was that big push, because they really needed something new and innovative, and containers seemed to be the wave they were catching. And they released Kubernetes. And it seemed like they were the right group of people to actually make it happen. Because before that, when Docker initially came out, it was mostly for running your application on your laptop, because you could build an image and run it. It worked as a development tool. But if you want to run at scale in a production environment,
[00:04:22] you need something that can manage or orchestrate a cluster of machines, run your application, and do all the things that you really need to make it a production-grade system. And Google seemed to have the talent and the experience of doing that. And they were really putting a lot of effort into that Kubernetes project. And I felt, okay, maybe that is something I could work on. And I had the aspiration to do something on my own. And around that time, I also got my green card in the US, basically my permanent residency.
[00:04:51] So I could actually be self-employed. So I left Google and started working on my own company. That's how I ended up working on this company that I started. Yeah. And it really feels like that was a huge pivotal moment for you. And I'm curious, if we hone in on that magical moment, what was it about the release of Kubernetes 1.0 that made you think that, hey, this could fundamentally change how we approach computing and indeed infrastructure?
[00:05:19] What was it about that moment? Can you still remember? Yeah. So if we look back, right? At the time, the state of the architecture was: you get a machine. If you're talking about cloud, maybe on AWS or Google Cloud, you get a VM. And then you package up your application somewhere, put it there and run it, which was good. But then when containers came, they really standardized the whole idea
[00:05:44] and it seemed like the next paradigm shift in the way of doing computing, right? Because VMs came out in, I think, 1998 or '99. So almost 16, 17 years before that point. And it seemed like containers were much more efficient. And at least I still to this day feel like you cannot really go any more granular than that, because you are not running a whole machine, a VM or a server.
[00:06:12] When you run your container, a container is really just a Linux process packaged up in its own Linux namespaces, isolated and running. And it seemed like there were a lot of benefits to that, right? The efficiency of applications running in those environments is a really big deal, because you save a lot of cost running these applications. And it was much faster. And it's standardized.
[00:06:42] It just seemed, because I had been doing the same thing internally at Google for three, four years, that this was missing on the outside, and this could become the next big thing. And that's how we started. I felt that, yeah, this could be the next big thing. And I was ready to jump onto my next thing. So I left Google at that time. Yeah. Love that.
[00:07:05] So when you first launched AppsCode, what specific gap or limitation in Kubernetes were you aiming to address from the outset, especially in terms of running stateful workloads, for example? Yeah. So that's actually an interesting story. I mean, when I started, I wasn't really thinking about stateful workloads specifically. But what we were trying to do, I used to call it the idea of a container cloud, right?
[00:07:31] It's kind of a PaaS, or what Heroku was, but something that was built on top of containers. That was really the original idea of AppsCode. Because right at that time, the most difficult challenge was how to even get a cluster up and running and then run your applications in there. And that's what we started with. And when Kubernetes became 1.0, we felt that, okay, maybe it's ready for that.
[00:07:58] And let's go and build this kind of new cloud, right? One where the VM is not the core primitive, but the container is actually the core primitive. And that's how you deploy your applications, right? So that's what we started with. And then as I started working on that, we started to see a lot of the limitations of Kubernetes 1.0, right?
[00:08:24] Like, I mean, the first thing we wanted to do was dogfood our own platform, right? So we wanted to run our own databases. And then we wanted to make sure those databases were backed up and could be restored if something happened, because a few times things did go wrong in those early days. And then how to get traffic inside the cluster, which was a big issue, a real difficulty in Kubernetes at the time. Because one of the most interesting challenges with Kubernetes is that it's a very dynamic
[00:08:54] environment. What I mean by that is, let's say you are running with VMs, right? You create a VM, the cloud provider assigns an IP, and for the lifetime of the VM, it stays the same, right? Nothing really changes. So you can do a lot of things manually. Let's say you want to create an Nginx or HAProxy to bring in your traffic from outside the cluster. You can just use those host IPs in the HAProxy or Nginx configuration. But you cannot really do those kinds of things in Kubernetes, because if a pod restarts,
[00:09:24] it gets a new IP. So it has to be very dynamic. Configuration actually has to be automatically regenerated inside the cluster when anything changes. And it's the same thing with these database applications, right? Maybe you are running a Postgres with multiple replicas. One of the pods restarts, or maybe you have to update a node or something like that, and immediately the IPs change. And you have to actually update the Postgres configuration to make sure that it again forms a highly available setup.
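[Editor's note] The dynamic-IP problem described here is, at its core, a reconciliation loop: something must watch the cluster and rebuild downstream configuration whenever pod IPs change. A minimal Python sketch of that idea, using simulated pod data rather than a real API server:

```python
# Sketch: whenever the set of pod IPs changes, downstream config (a proxy
# backend list, a Postgres replica list, ...) must be regenerated.
# It can never be written down by hand the way it could with static VMs.

def desired_backends(pods):
    """Build the backend list from whatever pods currently exist."""
    return sorted(f"{p['ip']}:{p['port']}" for p in pods if p["ready"])

# Day 1: three replicas come up with dynamically assigned IPs.
pods = [
    {"name": "pg-0", "ip": "10.1.0.7",  "port": 5432, "ready": True},
    {"name": "pg-1", "ip": "10.1.0.12", "port": 5432, "ready": True},
    {"name": "pg-2", "ip": "10.1.0.19", "port": 5432, "ready": True},
]
before = desired_backends(pods)

# A node is updated and pg-1 restarts: same name, brand-new IP.
pods[1]["ip"] = "10.1.3.42"
after = desired_backends(pods)

assert before != after  # any hand-written configuration is now stale
```

An operator or ingress controller runs exactly this kind of loop continuously, pushing the regenerated list into the proxy or database configuration on every change.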
[00:09:53] And we started to hit those challenges as we were trying to build this container cloud or PaaS-type architecture. And we kept trying. And then we basically thought, okay, maybe this container cloud thing will be our business, the product that we sell for money. But maybe these individual tools will become open source projects that give us access to the community of this new container ecosystem.
[00:10:21] So that's what we started doing, right? We took these individual things that we were building internally for our own product and started publishing them as individual projects on GitHub, so that other people could also start using them and we could start developing a community. And because when I started, I mean, I was not really well known in any way, and I myself didn't know anybody in this space. So we just said, okay, let's put it out there and build our community.
[00:10:51] There were early communities on Slack, some on Reddit and places like that. And we would post our products there. And then people would ask questions. Sometimes they would help out, try things, give us feedback. And that's how it went on. And at the same time, I was trying to raise funding for the company. I was based in Silicon Valley at that time. So we started trying to raise funding. And I had great difficulty doing that, to be really honest with you.
[00:11:20] And I mean, I don't know. It seemed like, I was a first-time founder in this infra space. And we were doing what's now called PLG, product-led growth, because we were trying to use open source as the wedge, which meant that we were not really making any money for the open source part of things. And the container cloud we were trying to build, well, I would say Kubernetes was too unstable at that time to be used as something you can charge money for. When I say unstable, what I mean is Kubernetes was really early.
[00:11:49] Even though they did this 1.0 thing, Kubernetes has this, I would say, very unusual architecture where the individual objects internally have their own API versions. So even though the project as a whole was called 1.0, every three months something inside Kubernetes would break. Even for the open source projects we were doing, every three months we would probably spend a month making sure everything worked again on the new version when a new version of Kubernetes was released.
[00:12:17] Now, that was okay for an open source project that we were just giving people for free. But if you're trying to charge people money, then you cannot really tell people, hey, this thing that you use in production, you now have to redo it, or you have to take some kind of downtime to fix it, because it's just the underlying platform. As a user of the Kubernetes project, we really didn't have any control over that. I mean, that is another interesting learning from trying to build a company around somebody else's open source project, maybe. So that's kind of what was going on.
[00:12:46] And then eventually it came to a point where we said, okay, maybe we'll offer some kind of support services, because we started to see a lot of interest in our Slack. And I would spend all my day basically talking to people on Slack: they would have questions, they would have problems, we would fix them. They were helping us, in a way, test our product, because there are just so many different scenarios out there in the infrastructure world, and we can't really predict everything as a small company ourselves.
[00:13:15] So we started to offer a support service as a way to make money. And as it turned out, we actually didn't make any money offering support services, because if you are doing support for free on Slack, then, you know, nobody's really buying your commercial support plan. And as things kept going on, I would say around the end of 2019, early 2020, things really came to a head.
[00:13:42] I mean, COVID hadn't come onto the scene yet, but I had been doing this at that point for, say, four years, trying to raise funding, and it really, you know, wasn't very successful at all, because we didn't have any revenue. And it's very hard to talk about revenue growth rate when you actually have no revenue. So going into 2020, I mean, I was just self-funding, doing some contracting to keep things going.
[00:14:11] And then we had to make a decision. Basically, I had to admit that maybe I would not be able to get any funding. And then the question became, do I still want to continue or not? Because if you cannot get any funding, then you cannot really build a cloud, because building a SaaS product is expensive. I mean, you may be charging somebody $5, $10 per month, but behind the scenes, you are probably spending $3,000, $4,000, $5,000 on your own hosting bill to run the thing.
[00:14:39] Because over time, you start to make money once you have enough users. But in the early days, you really have to put that funding behind the project. And it became clear that, okay, maybe there would be no funding. And then COVID hit and things went really dark. I mean, it's hard to imagine now how dark those days were. But then I decided, no, I still want to continue, because I was happy doing what I was doing. It's a very weird thing to explain. Because when I thought about going back and working for, I don't know, even something
[00:15:08] like Google, which was very good in terms of salaries and all that, I was happy doing what I was doing and I didn't want to go back. So I just said, okay, we'll see how we can make money based on what we already have. Because we saw a lot of interest in the individual open source projects that we released, because people were actually using them. We would see people come to Slack who were working for big companies. And one of the things with Kubernetes is that nobody is really using Kubernetes for their personal usage, right?
[00:15:36] I mean, maybe they are doing some home labs to try things out, but at the end of the day, they are taking this to their workplace and their business. So we said, okay, let's see if we can turn this thing around and maybe do something with what we've already got. So that's when we got rid of the whole idea of going into the container cloud thing. And frankly, by that time all the major cloud providers had already launched their version of a managed Kubernetes service, so the whole idea of doing a container cloud didn't really make a lot of sense.
[00:16:06] I mean, I would say even if you look around today, with all this Kubernetes activity that's going on, there is really no company that does this kind of thing anymore. All the cloud providers have their managed service. And then there are a bunch of others, say OpenShift or Rancher, but they are not really doing hosting. I mean, OpenShift does some of it, but that kind of third-party hosted-Kubernetes-as-a-cloud thing didn't really pan out for anybody, I would say.
[00:16:33] So we took those open source projects that we had and started going the open core route, meaning some of the stuff that was already out there in the open source, we took back. And frankly, keeping an open source core was a way for us to generate some excitement about our products and company in the marketplace, right, in the community. So we wanted to keep that. But then we started thinking about how you would address a lot of the day-two challenges of managing databases, like backups and monitoring.
[00:17:02] Those, I would say, really were the big thing, because once you can get a database running, everybody really wants to make sure they don't lose the data. So backups and monitoring were a big thing. And then a lot of those other things you think about, like how do you do database scaling. And, you know, a disk filling up is a big reason people have database outages, even to this day, because they start a database with maybe 10, 20, 50 gigs of disk space, they're just using it, and at some point the disk is filling up because it's now really used in production.
[00:17:33] And then they need a way to expand the disk, and to do it in a way such that the database doesn't have downtime, because it's running production workloads. So those kinds of things started, and then some people would focus on how you do secret management in this space, because Kubernetes Secrets are not really secrets. Many of you probably know, because they're stored in effectively plain text in etcd. And it was okay for a startup to just do that.
[00:18:01] But for big companies who really care about their security posture, it was kind of an issue. Like, we can't really do this. So what else can you do? Can you integrate with maybe a HashiCorp Vault or those kinds of things? So that's what we did, basically: all those things that somebody actually running a production workload might be willing to pay for software to take care of.
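[Editor's note] The point about Kubernetes Secrets is easy to demonstrate: by default, the values stored in etcd are only base64-encoded, and encoding is not encryption. A quick illustration in Python:

```python
import base64

# What you would see in a Secret manifest or in etcd
# (without encryption at rest configured):
stored = "cGFzc3dvcmQxMjM="

# Anyone who can read the object can recover the value instantly.
recovered = base64.b64decode(stored).decode()
assert recovered == "password123"

# Which is why larger organizations push credentials into a real secret
# manager (e.g. HashiCorp Vault) instead of relying on Secret objects alone.
```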
[00:18:28] So we took those ideas and put them in the enterprise part of our product, right? The basic provisioning we left as open source, as a community product. So you can go and do it yourself. But if you are doing real production deployments, then you will need that second part. Obviously, you could build it yourself, which is usually, I'd say always, more expensive, because developers building their own thing
[00:18:53] probably costs a lot more than buying an already existing solution that has been tested by many different people. And using that is easier. So we went that route. And legally it's the same company, but this was really AppsCode 2.0, in 2020, when we went from being religiously open source to being more practical and accepting what the reality is.
[00:19:20] And the reality of B2B software is that you really have to ask people for money if you need to survive. So that's what we did. And things started to turn around for us from that point. Although one of the things we found out was that all those 800, 900 people who were on our Slack, the moment we went this more commercial route, they didn't really convert. They didn't really convert, and some of the people got really angry.
[00:19:47] I mean, they were feeling that maybe in some way they were betrayed or whatnot. But we had to survive, because I had to pay my engineers' salaries. But then we started to meet new people, usually through our website, where we had a contact-us form, so you can put information there. Or through the chatbot we put in, like you see on a lot of websites, where you can click and ask questions with some support person. So instead of spending time on Slack, I would do all those support things there.
[00:20:15] Because the earlier people had this expectation that this was going to be free. And the new group of people who came in already knew from the get-go that this is not a free product. I mean, yeah, some of it is free, but ultimately you have to pay. So they were okay with that, and they didn't mind actually paying. So they would go to our product and test it out. And as a new company, one of the challenges that we had with this kind
[00:20:42] of database product was that databases are the crown jewels of companies. So they don't really go and trust just anybody with those, especially a new company they've probably never heard of. So they will do a lot of testing. Usually six, eight, nine months is very common: they would go through testing, then maybe another round of testing, and all of that, every little detail. And anything that didn't work, or where we had a limitation or a missing feature,
[00:21:10] they would ask for it and we would go develop it. And then they would do another round of testing. And yeah, those people would eventually start to convert, because they were okay knowing that, yeah, it's a commercial product. And that's how we went from a company with a container cloud type of idea to this company with what we now call a Kubernetes-native data platform,
[00:21:35] which is really focused on managing databases. And not just databases, but things like message queues such as Kafka, caching like Redis, or connection pools. Anything that you would probably use as part of your stateful applications on Kubernetes. The thing that we were building for ourselves actually became the product for the company. And that's the long story of the evolution of AppsCode into what we are today. Wow.
[00:22:05] What an incredible story and so many valuable lessons there, especially for startup founders listening who are maybe a little bit earlier in their journey. Because I think that story of how you had to evolve, how it might have initially upset some loyal members of your community, and the fact that you have to pay the wages of your software engineers. And as a result, you evolved from this early open source tool to building the fully enterprise-ready
[00:22:33] Kubernetes-native data platform that you enjoy today. And as I say, fast forward to 2025. I mean, you solve so many different problems through a range of multiple solutions. So just to bring everyone up to speed with what you're doing right here, right now, can you tell me more about KubeStash, KubeVault and Voyager? Because that's where your focus is right now. Yeah. So one of the things I always admired was the company called HashiCorp, right?
[00:23:02] I mean, I think they have this product strategy where they have these individual tools, like Vault, and then they have Nomad and other things. But they also have an overall HashiCorp cloud type of platform. And I always felt like we could do something similar. I was really inspired; even some of our website design was inspired by how they did things, because they had an individual brand for each individual product, but then there was the overall commercial company. So we tried to model ourselves after that.
[00:23:32] And so, again going back to those individual problems that we saw, we said, okay, let's take those individual open source things we were doing and give them each their own individual commercial version. So if somebody is narrowly focused on a specific area, maybe they can still use that part of our product. Because one of the things that happened in Kubernetes is that there was this really big wave
[00:23:59] of activity in terms of open source and commercial products, where everything had a version doing something a different way, right? Let's say you want to do a backup: there were multiple different products. Maybe you want to do secret management: there were multiple different ways. And then all the existing companies who had something before, let's say, Kubernetes became popular, they also started doing their version of some sort of Kubernetes-native integrations and all that. So we said, okay, let's have this individual product strategy.
[00:24:29] So maybe you can use the individual products. Or if you are really looking to us to help you with the whole end-to-end solution, then we'll have one uniform story, a coherent platform for that. So we ended up with really four different products. So there's KubeDB. It's not itself a database, but it's database management, right? So think about it like this: if you are using AWS or Azure, they all have their own managed database service, right?
[00:24:58] Like RDS on AWS. So think about that, but where instead of running your databases outside the Kubernetes cluster, you are running your databases inside the cluster as a container, or a pod as it's called in Kubernetes. And then it does the whole end-to-end lifecycle management, meaning you do the provisioning of a highly available instance with automatic failover. That's your day one, maybe along with some monitoring. But then on day two, you have to think about how you're doing your backups. How do you restore?
[00:25:25] Or maybe at some point, you take a backup from one region of the cloud and restore into a different region or a different cloud. How do you do your scaling? How do you do your TLS management? All those things that you probably think about. So that's KubeDB, that's our product. And then for backups, we did this product, KubeStash, or Stash as we sometimes call it. Database backup is one part, but you're not necessarily only thinking about databases. You're maybe thinking about potentially taking a backup of your
[00:25:53] whole Kubernetes cluster, meaning all those YAML files or manifests that you created. You may want to keep a backup of that, restore that. And outside databases, sometimes people will have a stateful application, just a StatefulSet with a PVC, that you may want to back up because you have some other kind of data in there which is not necessarily a database, but still stateful. So doing all those kinds of backups is what Stash was.
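[Editor's note] Both KubeDB and Stash follow the declarative Kubernetes pattern: you describe the database or backup you want as an object, and an operator reconciles the cluster toward it. A sketch of that idea as a Python dict, i.e. what a YAML manifest parses into; the API group and field names here are hypothetical stand-ins, not KubeDB's actual schema:

```python
# Illustrative "database as a Kubernetes object" request. An operator
# watches objects like this and drives the cluster toward the described
# state: provisioning, failover, backups, TLS.
postgres_cr = {
    "apiVersion": "example.dev/v1",   # hypothetical API group
    "kind": "Postgres",
    "metadata": {"name": "orders-db", "namespace": "prod"},
    "spec": {
        "version": "16.1",
        "replicas": 3,                              # HA with automatic failover
        "storage": {"size": "50Gi"},
        "tls": {"issuerRef": "internal-ca"},        # hypothetical field
        "backup": {"schedule": "0 */6 * * *", "retention": 7},
    },
}

# Day-2 operations become edits to the same object: growing the disk is
# just a field change that the operator reconciles without downtime.
postgres_cr["spec"]["storage"]["size"] = "100Gi"
assert postgres_cr["spec"]["replicas"] == 3
```

This is what moves teams from ticket-based provisioning to self-service: developers submit a declarative object instead of filing a request with a DBA team.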
[00:26:20] And then, again going back to this whole idea that bringing traffic inside a Kubernetes cluster was a big challenge, that's where we focused with the Voyager product. Technically it's called an ingress controller. In version one, we built it on top of HAProxy. HAProxy is a really good and very well-used open source project. And when I say good, I mean it's really high performance, with very low CPU and memory usage when it runs.
[00:26:50] But again, it wasn't really built for a cloud native environment, right? Going back to that whole idea that Kubernetes is very dynamic: if a pod restarts, the IP changes. Now you need to go and dynamically update your HAProxy configuration. So what our product did in the first generation was that it would generate the necessary configuration for HAProxy based on the Kubernetes Ingress YAMLs and Service YAMLs and all that. And then when a pod restarted for a service inside the cluster, it would go and update HAProxy so that there is no traffic drop, right?
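[Editor's note] The first-generation flow, regenerating HAProxy configuration from Kubernetes objects and reloading without dropping traffic, can be sketched as a simple template render. The stanza format below is illustrative, not Voyager's actual output:

```python
def render_backend(name, endpoints):
    """Render an HAProxy-style backend stanza from the current endpoints."""
    lines = [f"backend {name}", "    balance roundrobin"]
    for i, ep in enumerate(endpoints):
        lines.append(f"    server srv{i} {ep} check")
    return "\n".join(lines)

# Rendered from the Service's current pod endpoints.
cfg = render_backend("web", ["10.1.0.7:8080", "10.1.0.12:8080"])

# After a pod restart, the controller re-renders with the new IPs and
# triggers a graceful reload (HAProxy supports handing sockets to a new
# process, e.g. via `-sf`), so existing connections drain rather than drop.
assert "server srv0 10.1.0.7:8080 check" in cfg
```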
[00:27:20] So we'd auto-reload those things and all that. That was the first generation. Then in generation two of this product, we went from HAProxy to Envoy proxy. The reason is that even though both projects are open source, Envoy is really more of a community open source project than a single-vendor one, right? And we were able to make changes to the Envoy proxy project as we started
[00:27:48] doing ingress not just for HTTP traffic, but also for databases. Because when you look at microservices, most of the traffic is HTTP or HTTPS. And that's usually supported by default by what are called ingress controllers or load balancers, right? Like Nginx, HAProxy, Envoy proxy, those things. But each of those databases has its own custom protocol, which runs on top of TCP.
[00:28:13] When you talk to a Postgres database, it has its own custom protocol for the way a Postgres client talks to the Postgres server. Same with MySQL or Microsoft SQL Server and things like that. And we needed a way to do that for a Kubernetes cluster too. So let's say your database is running on one Kubernetes cluster. If you are trying to access that database from inside the same cluster, then that's pretty straightforward.
[00:28:40] You can just use a standard Kubernetes service address to do that. But as we started going into these bigger organizations, sometimes for security or other reasons, they will have the databases separated out into their own cluster, with the applications running on a separate cluster, and they would like those applications to access the databases running in the other cluster, right? This cross-cluster communication.
[00:29:07] And even to this day, this cross-cluster communication is kind of an unsolved problem in Kubernetes, right? I mean, obviously you can create a load balancer and all that, but because each database is using its own custom protocol on top of TCP, you cannot really just proxy it as plain TCP if you need to, let's say, do custom TLS. Say when you're coming from that other cluster, you want a trusted TLS certificate, maybe issued through Let's Encrypt.
[00:29:32] But then inside the cluster, the communication from Envoy to the database pods themselves uses an internal TLS certificate issued through cert-manager. For this TLS translation, or TLS conversion, that needs to happen at the Envoy layer, you really need the Envoy proxy, or whatever proxy you are using, to be aware of the underlying protocol, right? For HTTP they can do it automatically, because everything is standardized.
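One concrete reason a protocol-unaware TCP proxy can't do this transparently: Postgres negotiates TLS inside its own wire protocol. The client opens a plaintext connection and sends an 8-byte SSLRequest message first; only after the server answers does the TLS handshake begin. A proxy terminating or re-originating TLS for Postgres has to recognize that exchange. A small Python sketch of that initial message:

```python
import struct

# Postgres negotiates TLS *inside* its wire protocol: the client opens a
# plaintext connection and first sends an 8-byte SSLRequest message:
# length (4 bytes) + magic request code 80877103 (4 bytes), big-endian.
SSL_REQUEST_CODE = 80877103

def make_ssl_request():
    """Build the SSLRequest message a Postgres client sends before TLS."""
    return struct.pack("!ii", 8, SSL_REQUEST_CODE)

def is_ssl_request(data):
    """A protocol-aware proxy must recognize this before any TLS bytes flow."""
    if len(data) != 8:
        return False
    length, code = struct.unpack("!ii", data)
    return length == 8 and code == SSL_REQUEST_CODE

msg = make_ssl_request()
```

An HTTP-only proxy would never see this pattern, which is why database-aware filters had to be added at the Envoy layer rather than relying on generic TCP passthrough.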
[00:29:58] But for these database protocols, this custom traffic on top of TCP, you really have to do it yourself. And that's why we went to Envoy proxy: being a community-focused open source project under the CNCF, it already had some support for these kinds of database protocols. Sometimes it was lacking in how it handled the TLS side and all that, but we were able to go and build our own additional functionality on top
[00:30:25] of what is already out there in the Envoy proxy open source project. So that's why in this next generation we really switched from HAProxy to Envoy proxy. That's what the Voyager gateway, as we call it, really is today: a database-aware proxy. So those are kind of our three products. And then lastly, there is KubeVault, right? This builds on HashiCorp Vault, which is a very well-known open source project. Well, it used to be open source; it was relicensed, I think, maybe a year or so ago.
[00:30:55] And it was very well used. I believe it's still very well used for secret management, right? But again, when you come to Kubernetes, anything that was built before has to be adapted to work in the Kubernetes environment. In Kubernetes, everybody wants to do everything through YAML, in what's called the declarative way, right? You create a YAML file and that thing goes and deploys things for you. So that's what the KubeVault project was, right?
[00:31:24] It's an operator for Vault, which can provision a highly available Vault server, plus everything else you would do with a Vault server: creating what's called a secret engine, or updating a secret engine. And another nice thing about HashiCorp Vault is that it can manage secrets for a database, right? Let's say we have a Postgres database. By default you have the root user, which is the postgres user, with a username and password.
[00:31:52] You want that rotated for security reasons, maybe every 90 days. You can hook it up with the Vault server and it will do that for you. Maybe you want to create a limited-access Postgres user account with limited permissions on various Postgres tables; you can do that with HashiCorp Vault too. So we started integrating those aspects, but again through this Kubernetes-native, YAML-style declarative approach, and that's what our KubeVault product became: this YAML-based way of using Vault.
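The rotation and limited-access ideas can be sketched as a declarative spec in the same spirit. This is a hedged illustration in Python; the kind, API group, and field names are hypothetical, not the real KubeVault or Vault APIs, though the `{{name}}`/`{{password}}` placeholders mimic the templating style Vault uses for dynamic database credentials.

```python
# Illustrative, declarative-style spec for database credential rotation,
# in the spirit of the KubeVault approach. Field names are hypothetical;
# the SQL templating style is modeled on Vault's dynamic credentials.

def rotation_policy(db_name, username, rotate_every_days, statements):
    """Build a manifest-like dict codifying a credential rotation policy."""
    return {
        "apiVersion": "example.dev/v1",      # hypothetical API group
        "kind": "SecretRotationPolicy",
        "spec": {
            "database": db_name,
            "username": username,
            "rotationPeriodDays": rotate_every_days,
            # SQL run on each rotation, e.g. granting limited table access
            "creationStatements": statements,
        },
    }

# A limited-access reporting user, rotated every 90 days.
limited_user = rotation_policy(
    "orders-db", "report_reader", 90,
    ['CREATE ROLE "{{name}}" LOGIN PASSWORD \'{{password}}\';',
     'GRANT SELECT ON orders TO "{{name}}";'],
)
```

Once such a policy lives in a Git repo as YAML, rotation becomes part of the same GitOps flow as the rest of the cluster, which is the point of the Kubernetes-native adaptation.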
[00:32:22] So as a user, let's say you want to create a Git repo, create a bunch of YAML files and put that on Argo CD, or push it out in a GitOps style of deployment. You could use all those products that way, right? So that's how we have all these individual products, and that's the portfolio of products we have. And then again, going back to the original idea that we want one uniform story: in recent years, really this January, frankly, we made it GA.
[00:32:49] So we built a web-based interface on top of it, a dashboard for all those things. If you are comfortable working with YAMLs, then sure, go ahead and use those; they're very well documented. But as we started going into these bigger organizations, there are different types of developers, right? Somebody who is a DevOps engineer is comfortable with YAMLs. They actually prefer it that way. Good for them.
[00:33:17] But then there is the .NET developer, or maybe a Java developer, or your Python developer. They don't really want to spend time figuring out how to write those YAMLs. For them, it's really better to do it with a few clicks, right? Go somewhere, click a button, specify a few things: okay, I want a Postgres with high availability, this much CPU and memory, and give me my database. That's it, right? And it sets up all your backup and monitoring behind the scenes as it should. So that's where we went ahead and built out this web-based dashboard.
[00:33:47] There you can use all these individual products as one single thing, but you aren't really thinking about Kubernetes at all, right? To you, it's no different than using a cloud-hosted product, except it is Kubernetes-native. So the benefit is you can go on-prem or on cloud, even a completely air-gapped environment where there is no internet access, and still use all these products. Yeah. So that's kind of what we have today.
[00:34:14] And just to help everyone listening understand the kinds of problems that you're solving, the value that you're offering to those business leaders listening, and how your tech might work in their world: do you have any use cases that you're able to share on how you've previously helped an enterprise modernize its infrastructure and meet the evolving demands of this increasingly digital and, dare I say, AI world that we all find ourselves in?
[00:34:43] So I can talk about one giant telco company in Europe that we worked with. Really, they found us because of that open source history. The story goes like this. They were obviously using VMs before they started using Kubernetes. And sometime around 2020, they decided, okay, they want to build a next-generation platform on top of containers.
[00:35:09] And the reason for that is all the benefits that you get out of using this container-based approach, right? This rapid deployment: things that with a VM take like 10 minutes to get everything running, with a container you get done in 30 seconds, right? And then all the CI/CD type of approach, because now that everything is so fast and responsive, you can really adopt this agile DevOps mindset for your organization. And that has a huge benefit for a large company.
[00:35:39] The more developers you have, the bigger the benefit of having them work more efficiently, right? So they started doing this digital transformation, as we call it today, through this container and cloud-native technology adoption. And they started building their internal Kubernetes platform. Their platform engineering team started working with Rancher for their Kubernetes provisioning on top of, I believe,
[00:36:08] some private cloud, on top of VMware and Huawei Cloud and things like that. So the Kubernetes part they were able to work out using Rancher. But then, as they naturally started doing application deployment, they needed a solution for databases, because running your databases outside the cluster, all the things that we just said, they were hitting all those problems, right? I mean, okay, maybe your application spins up in 30 seconds, but then you need
[00:36:38] to store the data somewhere, and you cannot really file a ticket and then wait for a week to get the database running on a VM by somebody else, right? You need your database to also be ready in those 30 seconds. So they realized that to actually get the maximum benefit out of this cloud-native technology adoption, they need to run not just their stateless applications, but also the stateful applications inside the same environment. And that's the full modernization of their infrastructure.
[00:37:08] So they found us through, I guess, a Google search or something on the internet, and reached out. And then we started working with them. And again, like I said, we were a new company, really a no-name company. They did their evaluation early, and we were very glad that they did take a chance on a new company, really. So they started testing things out. They had their specific requirements. Up to that point, most of our users were running in environments where pretty much everything was allowed:
[00:37:38] you can pull Docker images from the internet. But obviously not in a company like this, right? They are running everything in an air-gapped environment. So how do you make sure that you can run your databases in an air-gapped environment? Because everything has to be done properly with a secure Docker registry that's private to their infrastructure, right? So there was this long journey that we went through, where we started evolving our product to make those things work for them.
[00:38:04] And yeah, eventually they were able to use our product. They even built some UIs on top of our product too, because at the time we didn't have any UIs; we only had the YAML-based things. So they built their own Rancher-based UI. Rancher has this concept of an extension, so you can customize some of the UIs. They built that and started offering it as a service to their own internal application teams. And that's really how it went on.
[00:38:32] So when they went on their digital transformation journey, they were able to take these applications that were pre-containers, and sometimes also build new applications, bring them into the container world, and then run the full end-to-end lifecycle, all the data management, all the monitoring, the full stack of that application, on containers and Kubernetes through our product. So that's kind of how it went. Yeah. Love that.
[00:39:00] Again, to drill down a little bit deeper on that, are you able to provide, I don't know, maybe a comparison of a previous architecture versus a new architecture with KubeDB? Just to help anyone listening understand the difference that you're making, and also the ROI there, and why it's better than, let's say, RDS. Yeah. I mean, think about it this way, right? Let's say you are running a previous-generation technology, let's say,
[00:39:30] with VMs, right? So the first thing is how long it takes to get something up and running. You know, typically with a VM... I mean, we keep talking to companies who come to our booth and say, hey, we are interested. And we ask them, why are you interested? And they say, we are a big company, and sometimes we'll need a database, and our application team will have to wait weeks for the DBA team to
[00:39:58] get the database, and it's really painful because we're waiting all that time and not able to get anything done. The beauty of KubeDB, or this kind of Kubernetes-native solution, is that it's self-service, right? You are not really waiting for anybody. It has been pre-validated, and the way we have built our product, the DBA team can effectively create a template where what they were going to do manually has now been codified into a template. The application
[00:40:28] developer goes into our UI and clicks a button to get a database. It has all those things the DBAs want, organization-level policy, let's call it, right? Like, you must have a backup. Maybe it needs to be in one bucket, or maybe in multiple object storage locations, because you really care about data survivability, so you may have to have a backup in two different locations. Okay. Maybe the backup needs to happen every hour, or maybe the backup needs to live for, I don't
[00:40:57] know, 12 months or six months or 90 days, whatever it is. Maybe it always has to be a highly available instance so there is no downtime. It must have a TLS certificate that rotates every 30 days. All these things that matter, they can codify into this template format that we have. And when somebody goes into the UI, they can deploy the database easily. So now you are not losing any security; you are not diminishing anything that you were doing for your organization.
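The template idea described above, DBAs codifying mandatory organization policy once so that every self-service request inherits it, can be sketched like this. A hedged illustration in Python; the field names are made up for this example, not KubeDB's actual schema.

```python
# Sketch of the "template" idea: DBAs codify organization-level policy once,
# and every self-service database request has it applied automatically.
# Hypothetical field names, not KubeDB's actual schema.

ORG_TEMPLATE = {
    "backup": {"locations": 2, "schedule": "0 * * * *", "retentionDays": 180},
    "highAvailability": True,
    "tlsRotationDays": 30,
}

def provision_request(name, cpu, memory_gb, template=ORG_TEMPLATE):
    """Merge a developer's minimal request with mandatory org policy."""
    return {
        "name": name,
        "resources": {"cpu": cpu, "memoryGb": memory_gb},
        # Policy is applied automatically; the developer can't opt out.
        **template,
    }

# The developer only chooses a name and a size; everything else is policy.
db = provision_request("checkout-db", cpu=2, memory_gb=4)
```

The design point is that the developer's input surface stays tiny (name and size) while the security-relevant settings come from one reviewed place, so self-service doesn't weaken the organization's standards.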
[00:41:27] But everything is much more streamlined and efficient, right? You are not waiting for the DBA team. And on the DBA team's side, it really helps them too. It's not that, oh, I don't need DBAs anymore. You still need the DBAs, because a product like KubeDB is not going to figure out, okay, I have this complex query which is slow, so maybe I should create two additional indexes, or rewrite the query this way. It's not for that. What it optimizes
[00:41:56] is the undifferentiated heavy lifting that you have to do to get anything running, the infrastructure setup; that part gets automated. So the DBA team can now focus on delivering value that's important to the business rather than just doing these more mundane tasks, right? So that's one thing. Making everything more efficient has a huge cost saving just in terms of developer productivity. And then another thing is that because KubeDB is Kubernetes-native, some of the things
[00:42:26] that happen is that you are able to use the same stack, right? Think about it this way. Maybe you're running your databases outside the cluster, but the applications, your microservices, are running inside Kubernetes. Now, when you are doing monitoring or tracing or any of those things, you have to correlate across these different environments. And usually, if you are running your databases outside Kubernetes, you probably have a different type of monitoring system and logging system versus what you have inside Kubernetes. It's all spread around everywhere.
[00:42:55] But if you're running everything together in a cluster, then the same skill set you have for running your microservices you can still apply to your databases or stateful applications, and have the same monitoring stack, maybe with Prometheus, and the same logging and everything. And you can correlate all the data. Say you had a problem with a microservice: you can also go to the database and look at, okay, what happened there, maybe with the connection pool, things like that.
[00:43:21] And this is a huge benefit to the developers who are actually managing these applications, because of the whole idea of DevOps, or shift-left, where developers are really in charge of not just developing the application but also what happens at runtime, and they are able to utilize their expertise and skills to manage the full end-to-end application that they own for their organizations, right? So these are some of the big benefits.
[00:43:49] The other thing worth talking about is cost savings, right? Obviously there is some cost saving coming from this efficiency gain, but there is also a real dollar-value cost saving that comes from a solution like KubeDB. So let's talk about RDS specifically, though I don't want to single out RDS; the same applies to using any managed database service in a cloud
[00:44:14] versus something like KubeDB, where the databases are running in your own account, right? This is not a hosted service. You're running in your own account, and instead of being managed by humans, it is managed by what's called an operator in the Kubernetes world. One of the reasons you get cost savings with Kubernetes is that you can do much better bin packing, right? If you're creating a database with a managed service like RDS, you're probably
[00:44:43] always giving it a certain amount of CPU and memory, maybe two CPUs and 4 GB of memory. But say this is a dev instance: you don't need that kind of resource, but maybe you are stuck with it because that's the smallest instance you can get, right? With Kubernetes, you can give it a much smaller instance. So you have this better bin packing, which saves you real dollars in terms of your infrastructure cost. The other thing is that most of these cloud providers have this
[00:45:08] concept of a spot instance, where, instead of paying the on-demand price or the reserved price, the spot instance is much cheaper, maybe 10-20% of the actual on-demand price. With a solution like KubeDB, let's say you create an EKS cluster and use Karpenter to dynamically bring up nodes as needed. KubeDB will let you set up, say, a highly available database where, out of the three replicas you
[00:45:38] have for this Postgres instance, depending on how risk-tolerant you are for that particular database server, maybe only one of them runs on-demand. You make sure that at least one instance is always available, and maybe the other couple run on spot instances. One of the risks of a spot instance is that with a 15-minute notice they can take the underlying node away, so you'll have to move that pod from one spot instance to a different one. But maybe this is a QA instance or staging instance where you are
[00:46:06] okay with taking some of that risk. With a product like KubeDB, you can cut your infra cost by two-thirds, because only one of them is using an on-demand node and the others are using spot instance nodes. And the other thing we have tried to do with KubeDB: I mean, look at the ridiculous amount of money usually charged by these cloud providers for their managed database services. I don't know why this is, but people seem to have been made really fearful,
[00:46:35] as if it's just so difficult to run databases that it has to be a managed thing. I don't know how we got there, but that marketing really worked. And people are willing to pay: if you look at EC2 pricing versus the RDS hourly rate, it's two or three times as much. A lot of the time they're taking open source software, installing it on the node, setting it up, and charging that kind of money. And we are able to compete with that, right?
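For a rough sense of the spot-instance arithmetic described here (one on-demand replica kept for safety, the rest on spot), here is a back-of-envelope sketch. The prices are made-up illustrative numbers; the two-thirds saving cited in the conversation corresponds to a spot fraction even lower than the 20% used below.

```python
# Back-of-envelope version of the spot-instance saving: one on-demand
# replica for availability, two spot replicas at a fraction of the price.
# Prices here are made-up illustrative numbers, not real cloud pricing.

def monthly_cost(on_demand_price, spot_fraction, on_demand_count, spot_count):
    """Blended monthly cost for a mix of on-demand and spot nodes."""
    spot_price = on_demand_price * spot_fraction
    return on_demand_count * on_demand_price + spot_count * spot_price

# Three replicas, all on-demand, vs. one on-demand plus two spot at 20%.
all_on_demand = monthly_cost(100.0, 0.2, on_demand_count=3, spot_count=0)
mixed = monthly_cost(100.0, 0.2, on_demand_count=1, spot_count=2)
saving = 1 - mixed / all_on_demand  # grows toward 2/3 as spot gets cheaper
```

With spot at 20% of on-demand the saving is a bit over half; at 10% it reaches 60%, approaching the two-thirds figure as the spot fraction shrinks.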
[00:47:02] And again, not just to point at RDS; it's generally true for these managed database services. We can cut costs on those, and then you start to get those other benefits I mentioned earlier. Plus, when your applications and databases are running inside the same Kubernetes cluster, you don't have to go through cross-VPC traffic every time you talk to the database. With a managed service, they are obviously running the database VMs outside your own account, so it has to do cross-VPC traffic every time. You don't have that anymore, right?
[00:47:32] It's in the same cluster. And another thing we have seen, as we start to go to these bigger companies: nobody really has only a single cloud. A lot of times they will have a mix of on-prem and cloud, or sometimes multiple clouds, because they ended up having different teams do different things. Or maybe they acquired some companies and those are on different clouds. Or maybe they have customers to whom they sell their software, and those customers are on different clouds.
[00:48:01] So they have to run their system on different clouds. And in all these scenarios, using something like KubeDB, where your applications and your databases are all running natively on Kubernetes, you are really able to make your application cross-platform. True, you still have to handle some other things: if you're on AWS, you're probably using S3, but if you're on Google Cloud, you're using GCS to store the data. Yes, there will be some parts you have to make cloud-specific.
[00:48:29] But still, the bigger part of the full application stack can now be just Kubernetes-native and cloud-agnostic. You can take it anywhere and redeploy when you need to. You really get that freedom, right? To avoid the vendor lock-in as much as possible and run on top of Kubernetes. So those are some of the benefits we see users start to get as they adopt the solutions that we have for our customers. Yeah.
[00:48:59] And the question I have to ask is, where do you go from here? What are you working on now? Anything you can share around the road ahead and what's going to be keeping you busy? So we are thinking about how we can take advantage of AI in our product. A couple of areas where we are looking to better accommodate this AI wave: one is having support for vector databases, and provisioning support for them, in our product. So that's one area.
[00:49:27] The other thing is, you've probably heard about these MCP servers, the Model Context Protocol, where effectively the LLMs learn how to talk to the real world. In our case, that might be something like a Kubernetes cluster, or a product like KubeDB. So we're potentially building some kind of MCP server, so maybe you can go to our dashboard and say, hey, create a database with 4 GB of memory, or a highly available instance.
[00:49:55] And boom, after a few minutes or seconds, you get a database up and running, and the thing knows how to do it. So making the LLMs do that, maybe through some sort of chat or a more human-language type of interface, is something we are exploring. Frankly, one of the things I've been doing at this conference is just asking whoever visits our booth: so how are you going to do this AI thing?
[00:50:21] Because you are air-gapped, with no internet access, and everything has to be on-prem, and yet all the LLMs are essentially SaaS. And to my surprise, they seem to be okay with sending some of this traffic to these hosted SaaS services, like OpenAI and such, sending their data there to do this AI work. But some of the companies are also trying to build their own internal inference deployments, at least.
[00:50:48] But it seems like an unsolved problem at this time. I'm really not clear how this will all work out, but that's something we are exploring too. So doing better, more useful and interesting things with AI in our product portfolio is something we are focused on. We also mentioned the web-based dashboard that we have done. We're taking it to the next level, where it can really be used such that,
[00:51:14] think of it like this, a cloud provider comes to us and uses our product to offer databases as a service to their own customers. We have done a few of those recently, accommodating those kinds of use cases much better, where you really have much stronger multi-tenancy requirements with strict separation of all the data and everything. Also better OpenTelemetry-based observability for our platform. So those are some of the things we are focused on, and we're hoping to have more news in the later part of the year.
[00:51:44] Fantastic. Well, thank you so much for sitting down with me today. But before I let you go, for anyone listening just wanting to find out more information, dig a little bit deeper on anything we talked about, where would you like to point them? Yeah. So you can obviously go to our website, appscode.com. That's A-P-P-S-C-O-D-E, appscode.com. If you go to the product section there, you'll see all the individual products that we mentioned under that domain.
[00:52:11] I'm personally active on LinkedIn and Twitter, or X as it's called now. So on LinkedIn you can search for my name, Tamal Saha. On X, it's x.com slash T-S-A-H-A. That's where you can always find me. And if you want to send me an email, you can: tamal at appscode.com. Always happy to chat about anything related to cloud native or Kubernetes and our product suite.
[00:52:36] Well, thank you so much for sitting down and talking about AppsCode and its integrated platform for containerized apps. I do urge anyone listening to check that out, especially if they're looking to provision Kubernetes clusters on any cloud provider, build on various open source cloud-native tools, and participate in the Kubernetes development process and have a voice in its future direction. I think this is just the beginning of a much larger conversation, but thank you for sharing it.
[00:53:06] Thank you, Neil. Thanks for the opportunity to talk to you and to your audience today. Anybody listening, feel free to reach out if you are interested in anything we are doing. Thank you. So a big thank you to my guest for sharing the story behind AppsCode and what the future looks like for Kubernetes-native data platforms. But what stood out for you in this conversation? For me, one of the things is just how much complexity can be stripped away when tools are built with
[00:53:36] self-service, scale, and simplicity in mind, especially when teams want to remain cloud-flexible. From shortening database provisioning times to making multi-tenant architectures easier to manage, I think AppsCode's approach is clearly resonating with organizations that are looking to modernize without compromise. And with vector databases and AI integration on the horizon,
[00:54:04] I think it's clear that my guest and his team are keeping one eye on what's next while solving today's real-world problems. But what do you think? Are we moving towards a future where developers and operations teams finally share a common control plane? Let me know your thoughts. Join the conversation. Email me at techblogwriter@outlook.com, or find me on LinkedIn and Instagram, just search Neil C. Hughes.
[00:54:30] But remember, until next time, until we speak again, stay curious, stay informed and keep innovating. Yeah, I'll speak with you all again tomorrow. Bye for now.

