In this episode of AI at Work, I sit down with Tom Totenberg, Head of Release Automation and Observability at LaunchDarkly, to explore what happens when artificial intelligence starts writing and shipping our software faster than humans can think. Tom brings a rare blend of technical insight and grounded realism to one of the most important conversations in modern software development: how to balance speed, safety, and responsibility in an AI-driven world.
We discuss the hidden risks of AI-fuelled shortcuts in software delivery and why over-reliance on AI-generated code can create dangerous blind spots. Tom explains how observability and real-time monitoring are becoming essential to maintaining trust and stability as teams adopt AI across the full development lifecycle. Drawing on LaunchDarkly’s recent investments into observability, he breaks down how guarded releases and real-time metrics are helping teams catch problems before users ever notice.
From the dangers of “vibe coding” to the rise of agentic AI in software pipelines, Tom shares why AI should be seen as an amplifier rather than a magic fix. He also offers practical advice for leaders trying to balance innovation with caution, reminding us that the goal is to innovate with intention — to measure what matters and build resilience through feedback and transparency.
Recorded during his time in New York, this episode captures both the human and technical sides of what it means to deliver software in an era where the line between automation and accountability is being redrawn.
[00:00:04] - [Speaker 0]
Welcome to AI at Work, a podcast which is part of the Tech Talks Network. And in this podcast, we're gonna venture into the transformative influence of artificial intelligence inside the workplace. Our discussions will focus on both the remarkable breakthroughs and the complex challenges of integrating AI into our everyday business functions and workflows. Today, I'm joined by Tom Totenberg. He's the head of release automation and observability at a company called LaunchDarkly.
[00:00:39] - [Speaker 0]
And Tom will explain how AI is amplifying, not replacing, the way that teams build, ship, and monitor software, even exposing weak spots that we didn't know existed. We will also talk about guarded releases, a topic very close to my heart, why observability must track new business outcomes like cost and accuracy, and what it really means to deliver AI at speed without losing human accountability. So if you've ever wondered where automation ends and where responsibility begins, you'll enjoy this one. He's a great guy to boot. So enough from me.
[00:01:21] - [Speaker 0]
Let's get Tom onto the podcast now. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do?
[00:01:32] - [Speaker 1]
You bet. Hi, Neil. So nice to meet you, and thank you for having me today. My name is Tom Totenberg. I am the head of release automation and observability over here at LaunchDarkly, and that is a bit of a mouthful.
[00:01:43] - [Speaker 1]
So, really, it's overseeing an emerging practice which connects change monitoring with instant runtime control through feature flags, or feature management. To really distill that down, it's us bringing to the broader market the ability to automatically and progressively roll out a change, observe the health of that specific change, and then automatically remediate if something happens to go wrong on the metrics that you care about.
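To make that flow concrete, here is a minimal Python sketch of the pattern Tom describes: stage a rollout, watch a guarded metric, and roll back automatically if it degrades. The function names, thresholds, and telemetry are hypothetical stand-ins, not LaunchDarkly's actual API.

```python
import random
import time

# Hypothetical stand-ins: in a real system these would call your feature
# management API and query your observability backend.
def set_rollout_percentage(flag_key: str, percent: int) -> None:
    print(f"{flag_key}: now serving to {percent}% of users")

def error_rate_for_exposed(flag_key: str) -> float:
    """Error rate measured only over users exposed to the change."""
    return random.uniform(0.0, 0.02)  # placeholder telemetry

def guarded_rollout(flag_key: str,
                    stages=(1, 5, 25, 50, 100),
                    max_error_rate: float = 0.01,
                    soak_seconds: int = 60) -> bool:
    """Progressively roll out; auto-remediate if the metric regresses."""
    for percent in stages:
        set_rollout_percentage(flag_key, percent)
        time.sleep(soak_seconds)  # let telemetry accumulate at this stage
        if error_rate_for_exposed(flag_key) > max_error_rate:
            set_rollout_percentage(flag_key, 0)  # instant rollback
            return False
    return True

if __name__ == "__main__":
    ok = guarded_rollout("new-checkout", soak_seconds=1)
    print("released" if ok else "rolled back")
```

The design point is that remediation is wired into the rollout itself, rather than waiting for a human to notice a dashboard.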
[00:02:06] - [Speaker 0]
Well, it's a pleasure to have you join me today. And every day, as you know, I try and take a different area that people are talking about in tech and business and try and demystify it a little bit. One area I wanted to tackle with you today is the growing excitement around AI-assisted coding. It seems that every event I've been to this year mentions low code or AI, or vibe coding. That's the big phrase right now as well.
[00:02:33] - [Speaker 0]
But, of course, on the flip side of that, there's concern about developers taking shortcuts. So I've got to ask, from your vantage point here, what are the hidden risks that you're seeing from this AI-fueled speed in software delivery? Because it's great on one side, but there are risks on the other. Where do you stand on everything?
[00:02:51] - [Speaker 1]
Oh, absolutely. Well, first, I wanna acknowledge that this is an emerging space. Right? Nobody has all the answers. And if anybody says that they do, then they are selling you snake oil, and you shouldn't believe them. Yeah.
[00:03:02] - [Speaker 1]
But that said, I am in general an AI optimist, which is that, yes, there are a lot of problems to be solved, but there's also literally, you know, trillions of dollars going into this, and that's not necessarily something I'm willing to bet against. But currently, there are so many ripple effects happening as a result of AI making its way into the delivery process. First, I think one of the obvious ones is that when humans lean too heavily on AI, it creates this mental distance between what is being generated and the person who's responsible for that code. Right? And if you've ever been a person who's responsible for delivering software, for writing something that needs to be enterprise grade and scaled and secure, you know that there is so much value in knowing the nuances and intimately knowing all of the secret hidden challenges and puzzles that go along with some feature.
[00:03:54] - [Speaker 1]
So if we cede too much of that to AI, it has tons of ramifications, like when there is an incident. I mean, not all incidents are caused by AI directly, but they will still happen. Right? And when they do happen, if we don't have a team that is intimately familiar with what went out the door, it becomes more severe. Right?
[00:04:15] - [Speaker 1]
Because we don't know exactly where to look. We're now having to rely on maybe AI agents to analyze and figure out what's going on. I mean, we're recording this two days after, you know, the big AWS incident of October 2025. And, you know, who knows? There's all sorts of speculation about what actually happened, but I suspect that we will see more and more severity when there are these sorts of incidents, because of that mental disconnect.
[00:04:40] - [Speaker 1]
And who even knows what the long-term ramifications in the industry are if this becomes more of a practice, and more of a crutch, I would say, for people who are up-and-coming coders. So what's the next generation going to look like? Who knows?
[00:04:53] - [Speaker 0]
Yeah. It's a massive event, and it really highlights the kind of single point of failure there, doesn't it? That one service goes down and half the Internet goes with it immediately.
[00:05:03] - [Speaker 1]
Right. Which kinda goes against what the Internet was designed to be. Right? It was decentralized. No single point of failure should take everything down. And, hey.
[00:05:11] - [Speaker 1]
Look where we are, you know, a few decades later.
[00:05:15] - [Speaker 0]
And as for AI-generated code, I completely get it. Teams lean on it to move faster. But where do you think that line should be drawn between efficiency and reliability? Because I think especially where vibe coding is concerned, people think, hey,
[00:05:32] - [Speaker 0]
I can just say it and it happens. But they haven't got the skills then to check through the code, etcetera. What do you see here?
[00:05:39] - [Speaker 1]
Sure. Well, once again, going back to me fundamentally being an AI optimist, I mean, as a proof point, I literally just got the notification that the book Vibe Coding was delivered, which is written by Gene Kim and Steve Yegge. Right? A couple of really well known experts in the industry. And so I'm planning to consume this.
[00:06:00] - [Speaker 1]
Right? And I very much wanna stay on the forefront. But to your question, what's the balance between efficiency and reliability? I think in the traditional sense, that's actually kind of a false dichotomy. Right?
[00:06:11] - [Speaker 1]
There's a lot of research out there to suggest that small, consistent releases are safe. Right? That's a good practice. But that might have shifted in the age of AI. And if you read the latest DORA State of AI-assisted Software Development, or whatever the official title of that report is, I think they put it really well, which is that AI is an amplifier of your existing practice, your existing pipeline and processes, because it will help highlight how things are going.
[00:06:40] - [Speaker 1]
And so you can't expect it to be a magic bullet. Right? You can't just suddenly throw AI at a problem and expect it to be solved; rather, it will actually highlight how things are going. If it's a well-oiled machine, you're good to go. But I don't know if you're a Formula One guy.
[00:06:57] - [Speaker 1]
I am not. But the thing that came to mind, I don't know if you remember when Formula One went to Las Vegas for the first time a few years ago, and there was big news about how they were having to redo the roads in Las Vegas because they were right on the strip. And then during the warm-up race, I think they started going through, and there was some metal cover. Like a big steel, you know, couple-inches-thick metal cover that got suctioned up out of the road, and it created a hazard. And this is the sort of thing that would never have bothered us mere mortals driving at normal speeds on this road.
[00:07:27] - [Speaker 1]
But as soon as you get Formula One drivers on there, okay, all of a sudden this thing emerges and it becomes this hazard. And I think the same sort of thing can happen when you start trying to accelerate too quickly with AI. Right? You'll be able to go inconsistently fast in some areas, which will very quickly highlight any rough patches that you have in that delivery process. You've really gotta be on edge for it.
[00:07:48] - [Speaker 0]
Yeah. That story in Vegas and the F1 was fascinating. And the amount of people that found little shortcuts to watch it by walking down the strip and then going over the bridge, just repeatedly walking over the bridge to catch a view. Incredibly cool. People always find a way, don't they?
[00:08:03] - [Speaker 1]
I personally know someone who bribed a security guard just to stand at the edge of a parking lot overlooking the track. So, yeah.
[00:08:11] - [Speaker 0]
I love it. Every man's got a price. And, of course, this year is all about agents and agentic AI. Some early adopters are proudly talking about releasing swarms of agents online, and my agent will talk to your agent. That's what the future's gonna look like.
[00:08:30] - [Speaker 0]
So as an overcautious ex-IT guy, how does observability fit into this new world of AI-driven development? And why is real-time visibility more important than ever, do you think?
[00:08:43] - [Speaker 1]
Sure. Well, the natural way that the industry is going is to respond faster, deliver more. Right? There's so much pressure to do this. And that is also just sort of naturally how we have been delivering over the course of decades.
[00:08:57] - [Speaker 1]
Right? We went from delivering with physical media, you know, having to ship Netflix DVDs, to over-the-air updates. We still had waterfall, but with over-the-air updates, until we eventually got to the radical concept of delivering software within living memory of someone having requested it. And now we're truly in the realm of continuous delivery, continuous deployment, continuous release, all these sorts of concepts. And that is not universally adopted, nor is it universally appropriate for everybody.
[00:09:27] - [Speaker 1]
But with AI-driven development here, that same pressure, that same trend is just continuing. And so real-time observability is how we will be able to respond quickly. Right? There are no ifs, ands, or buts about it, because we must be able to very quickly highlight the health of the releases that we have going out the door and the health of our systems that are supposed to ingest and deliver these releases. And that is going to come through not just the old traditional way of, you know, carpeting as much surface area as we can with observability, but also making sure that we are smart about what we are looking at.
[00:10:06] - [Speaker 1]
So in the case of AI agent swarms, essentially, we're seeing an entirely new classification of what monitoring means. Because we know, right, that there are things like hallucinations. We know that there are cost implications that people are still figuring out. And so monitoring and observability at this point is taking on not just traditional performance metrics, but also business metrics such as cost, such as LLM-as-judge accuracy, such as the rate at which people are going to try to jailbreak this thing. Right?
[00:10:40] - [Speaker 1]
And for all of those, we need to be able to do real monitoring, not just on the performance in any traditional sense, like, you know, hey, is my database up? Are my CPU loads at healthy levels? Right? But also on those business metrics, because it has never been easier to get a surprise bill from your cloud provider or your AI provider saying, hey,
[00:11:01] - [Speaker 1]
wait, you tried this new model, and suddenly, you know, the cost went up tenfold just because you were trying to get the latest and greatest, when something else might have been just as accurate if you'd had that A/B test, right, to be able to compare agent A versus agent B and really get some data-driven feedback on that. You've gotta be able to have that in real time to balance that accuracy against those operational metrics for the longevity of your own business, in addition to the traditional performance metrics. Long answer to a short question, but hopefully what I said made sense.
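As a rough illustration of that agent A versus agent B comparison, here is a small, self-contained Python sketch. The numbers and field names are made up, and "accuracy" stands in for whatever LLM-as-judge score you actually track.

```python
from dataclasses import dataclass

@dataclass
class AgentStats:
    name: str
    total_cost_usd: float   # summed provider charges for this variant
    judged_correct: int     # outputs the LLM-as-judge marked acceptable
    total_outputs: int

    @property
    def accuracy(self) -> float:
        return self.judged_correct / self.total_outputs

    @property
    def cost_per_correct(self) -> float:
        return self.total_cost_usd / max(self.judged_correct, 1)

# Illustrative numbers only: variant B is slightly more accurate
# but far more expensive per correct answer.
a = AgentStats("agent-a", total_cost_usd=12.40, judged_correct=930, total_outputs=1000)
b = AgentStats("agent-b", total_cost_usd=118.00, judged_correct=951, total_outputs=1000)

for agent in (a, b):
    print(f"{agent.name}: accuracy={agent.accuracy:.1%}, "
          f"cost per correct answer=${agent.cost_per_correct:.3f}")
```

Run side by side, a comparison like this is what turns "the latest and greatest model" into a measurable cost-versus-accuracy trade-off.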
[00:11:30] - [Speaker 0]
Yeah. 100%. And sticking with the F1 analogy, one of the reasons I wanted to steer you towards observability is that before you sat down with me today, I was reading how LaunchDarkly has recently invested heavily into observability. So what was it that prompted that focus, and how does it connect to your overall mission of safer and faster releases?
[00:11:53] - [Speaker 1]
Sure. Well, LaunchDarkly traditionally, I think, was more of a one-trick pony, right, where, yes, we pioneered the concept of commercially available feature flags. We didn't invent feature flags. That practice has been around for a long time, but we were the ones who really stepped up the game, and we're talking about decoupling deployment from release and all of the benefits that gives for smooth, safe delivery. However, it turns out that puts us in a pretty unique position to take that to the next level through observability, so that it's not just a one-directional delivery thing, right, of turn something on and then, great, people see it.
[00:12:28] - [Speaker 1]
We also have to complete that feedback loop to know the effect that that change is actually going to have. And that is why we started investing into observability so much. There is something that internally we call guarded releases. Right? And what that is doing is, because LaunchDarkly is the one controlling a change, say, you know, let's give it to 1% of your beta users on a particular device based in London.
[00:12:52] - [Speaker 1]
Great. Cool. That's a very small fraction of a percentage of your overall user population. But because we're the ones controlling the change, we've got data about who's exposed to that change. And that means that we're able to observe specifically the effect of that change really tightly scoped to the people who are exposed to that change.
[00:13:11] - [Speaker 1]
So that means we can then do a live, essentially continuous analysis through observability on whether your metrics are degrading. And, again, that's business or performance metrics, like engineering metrics. We can see if that change is causing some sort of degradation and instantly respond to it. So, you know, we already nailed the control plane. That part's good.
[00:13:30] - [Speaker 1]
Now we're really tightening that feedback loop and getting very smart, tightly scoped, and automated in how you can safely expose that change, and making sure there's that, again, tight feedback loop to the nth degree through real-time observability and runtime control. So that's the answer.
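A minimal sketch of that exposure-scoped analysis, assuming hypothetical event streams: the point is that errors are counted only over the cohort the flag actually exposed, with everyone else as the comparison group.

```python
def cohort_error_counts(exposures, errors, flag_key):
    """Compare errors for users exposed to a flag vs. everyone else.

    exposures: iterable of (user_key, flag_key) exposure events
    errors:    iterable of (user_key, error_name) error events
    """
    exposed = {user for user, flag in exposures if flag == flag_key}
    exposed_errs = sum(1 for user, _ in errors if user in exposed)
    control_errs = sum(1 for user, _ in errors if user not in exposed)
    return exposed_errs, control_errs, len(exposed)

# Toy data: two users saw the new checkout; one of them hit an error.
exposures = [("user-1", "new-checkout"), ("user-2", "new-checkout")]
errors = [("user-1", "checkout-500"), ("user-9", "search-timeout")]

exp_errs, ctl_errs, n_exposed = cohort_error_counts(exposures, errors, "new-checkout")
print(f"exposed cohort ({n_exposed} users): {exp_errs} errors; control: {ctl_errs} errors")
```

Because the flag system knows exactly who saw the change, a regression in the exposed cohort stands out even when the change only reaches a fraction of a percent of traffic.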
[00:13:48] - [Speaker 0]
And as someone who oversees both release automation and observability, I'm kinda curious: how do you see these disciplines complementing each other in maintaining trust and stability when AI is part of the workflow? Do they complement each other? It feels like they do. So a few synergies there.
[00:14:08] - [Speaker 1]
Sure. Absolutely. Well, what I'll say is that I don't think AI drastically changes the calculus for how this should work. Right? Yeah.
[00:14:17] - [Speaker 1]
Because when we are comparing AI versus human delivery, neither one of them is perfect. Neither one of them is completely deterministic. And so we've got a lot of the same sorts of challenges to face. And for context, my background comes from working with a variety of different organizations in heavily regulated industries, things like medical device manufacturing, pharmaceuticals, automotive, aerospace, finance, you name it. So let's start with the automation side of things.
[00:14:47] - [Speaker 1]
Every organization has unique architecture, functionality, end-user audiences, traffic patterns, everything. Right? So it's very much a go-slow-to-go-fast sort of situation if you want this to be a repeatable practice. So figure out what your change categories are. Is this a high-risk change in a specific place in your application's back-end architecture?
[00:15:10] - [Speaker 1]
Is this a low-risk cosmetic change to the front end? Right? Those are two different change processes. And once you figure out those categories, we should also be able to answer some questions about things like risk tolerance and the stages that you want a change to go through. So, who or what should be exposed to that change first?
[00:15:28] - [Speaker 1]
And then once we figure out some of those categories, that gives us the framework that we need to automate this. It's an ever-evolving process. It's like agile. You don't just get to say you are agile. It is a constant evolution, a constant state of being, a mindset.
[00:15:45] - [Speaker 1]
And it's the same thing with the automation here. Right? Your needs will change, whether that is AI or human generated. But figure out those categories, and that is what lets you scale that out. Now, for the observability side,
[00:15:59] - [Speaker 1]
I'm gonna give more of a nontraditional answer, because, you know, historically, like I said, we have been thinking about just covering as much surface area as we can and then automatically looking at things like infrastructure layers to know, hey, is this pod performing well or not? And, you know, that's easy. But what we really should care about is the impact to end users. Going back to that AWS outage that just happened, misconfigurations happen all the time.
[00:16:25] - [Speaker 1]
That is not the problem. The problem is that the end-user impact it had was massive. Right? And so what we should be able to do, for every change that we have, is define right up front: here's the thing that I'm changing.
[00:16:38] - [Speaker 1]
Here are the failure modes and how I will measure those failure modes. And conversely, how am I going to define success? How do I know that this actually was successful? So we should be able to define both the happy and unhappy paths. By thinking about it that way, and by tying the concept of observability more tightly to the thing that is changing, it doesn't really matter whether it's human generated or AI generated.
[00:17:03] - [Speaker 1]
We have our metrics. We've got our change process all set. And so now we're establishing a smooth road, and we're establishing relevant signals for every change, regardless of the specific change that goes through that overall pipeline.
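One way to picture "define the happy and unhappy paths up front" is a small declaration attached to each change. This Python sketch uses invented field names and thresholds purely for illustration; the idea, not the schema, is the point.

```python
from dataclasses import dataclass

@dataclass
class GuardedChange:
    """Declare, before release, what failing and succeeding mean."""
    description: str
    # metric name -> threshold that means "unhappy path, roll back"
    failure_thresholds: dict[str, float]
    # metric name -> target that means "happy path, proceed"
    success_targets: dict[str, float]

    def verdict(self, observed: dict[str, float]) -> str:
        for metric, limit in self.failure_thresholds.items():
            if observed.get(metric, 0.0) > limit:
                return f"roll back: {metric} breached"
        if all(observed.get(m, float("-inf")) >= t
               for m, t in self.success_targets.items()):
            return "success: proceed"
        return "inconclusive: keep observing"

change = GuardedChange(
    description="rewrite checkout service",
    failure_thresholds={"checkout_error_rate": 0.01, "p99_latency_ms": 800},
    success_targets={"conversion_rate": 0.034},
)
print(change.verdict({"checkout_error_rate": 0.004,
                      "p99_latency_ms": 512,
                      "conversion_rate": 0.036}))
```

With both paths declared, the same verdict logic applies whether the diff came from a human or an agent.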
[00:17:15] - [Speaker 0]
And I always try and give everybody listening a few valuable takeaways from our conversation. So on that side of things, are there any practical steps that you think teams could take to ensure that their AI tools enhance developer productivity without eroding quality or accountability? Because we keep hearing horror stories of so many AI projects caught in the pilot phase. They're not going into production, etcetera. So any tips or advice you can offer here?
[00:17:43] - [Speaker 1]
Sure. Well, yeah, I think you might even be talking about that Harvard Business Review study. Right? Ninety-five percent of them are failing, and that is because people are still not thinking about how to measure, how to monitor this AI initiative. So for things like developer productivity itself, yeah,
[00:18:04] - [Speaker 1]
I'm gonna come back to this probably a little bit later, but thinking about value stream delivery, focusing on the end-user value that you are delivering, really is how we should measure this. Are we actually accelerating the value that's going out the door, or are we just playing with tools for no particular reason? Do developers have an idea that connects the lines of code they are writing to what this is supposed to accomplish for those end users? A big cultural change that has been happening, regardless of AI or not, is that we are no longer seeing the ability to be super specialized in silos. Right?
[00:18:43] - [Speaker 1]
You don't just get to write code without thinking about its impact. You don't just get to be an SRE without thinking about the business value and the business ramifications of that uptime. You don't just get to throw stuff over to QA and assume that they're gonna catch it. You know, everybody is responsible for uptime. Everybody is responsible for delivery.
[00:19:03] - [Speaker 1]
You built it. You own it. And so because of this, the practical steps that I think we can take come down to structured play. Nobody knows what the perfect tooling, the perfect processes are for you. No existing consultant in the world right now has all of the answers that will future-proof your delivery pipeline in this rapidly evolving AI landscape.
[00:19:28] - [Speaker 1]
And so, you know, the reason that we play as animals is to learn how to run and jump and climb and socialize and collaborate and compete, all of those things. And we should encourage that same sort of structured play with the people who are culturally responsible for the delivery of this software. Right? They're the ones who intimately know the process. So make sure that we assign testers to new tools.
[00:19:52] - [Speaker 1]
Make sure that we schedule skilling updates, whether those are internal people skilling up each other, or external people who have figured out some cool process, or, you know, some vendor that says, hey, we've got a radical new approach to it. Cool. Let's hear everybody out. Let's schedule some of that time, make sure that there are hackathons focused on AI and delivery, and give people a little bit of budget to do this play, because how else are you going to be able to stay on top of things?
[00:20:18] - [Speaker 1]
So make sure that structure is in place. Make sure it is a regularly scheduled, formally encouraged practice with dedicated time and intention. And that is the best way you'll be able to figure out which of these specific tools and processes actually work for you.
[00:20:35] - [Speaker 0]
Fantastic advice there. And I'll ask you to have a little gaze, a sneaky gaze, into my virtual crystal ball. Looking ahead into 2026 and even beyond, how do you see AI continuing to reshape the software release process, from testing to deployment to post-release monitoring as well? Do you see any big changes ahead here?
[00:20:56] - [Speaker 1]
Oh, you had to bring out that crystal ball. You know, like I said at the beginning, nobody has all the answers, and that includes me. What I do know is that as these specialized networks of agents become smarter, because we know that these sorts of specialized, small-context jobs are where they really shine, the processes will also drastically change. And I guarantee, I mean, there are entire industries spinning up around this. A lot of them will fail.
[00:21:24] - [Speaker 1]
Some of them will succeed. And so everyone is going to try to automate every step of the process. And we will see eventually, you know, which of the gaps that humans are currently filling can reasonably be filled by the current AI tooling that we have or, you know, whatever that next generation looks like. So my question then is which ones end up being successful. Let's say it's, you know, testing.
[00:21:47] - [Speaker 1]
We've all seen now things like, I should be able to write tests in natural language so that I don't have things like flaky tests that fail just because a button moved. No, I should be able to tell an agent, go click on that button. It will figure out where the button is, and it should be able to go click it. So, cool.
[00:22:03] - [Speaker 1]
That's a promising one. Right? Go figure out if that'll work for you. Yeah. And we should be able to see things like blue-green deployments having pretty tightly coupled monitoring.
[00:22:16] - [Speaker 1]
With AI insight, well, I would say don't fully cede this to AI. Like, we should be able to have statistical rigor around whether a change is failing or not, along with AI insight into saying, okay, here are the people who were exposed, here are the machines that were exposed, the requests that were exposed. We should be able to put AI on that sort of thing and identify and remediate infrastructure, right, remediate changes, without a whole lot of overhead.
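For the "statistical rigor" half of that, one standard tool is a two-proportion z-test on error rates between the two deployment colors. This sketch, with made-up request counts, holds the cutover if the green (candidate) side's error rate is significantly worse at roughly a one-sided 5% level.

```python
import math

def two_proportion_z(err_a: int, n_a: int, err_b: int, n_b: int) -> float:
    """z-score for the difference in error rates between two cohorts."""
    p_a, p_b = err_a / n_a, err_b / n_b
    pooled = (err_a + err_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se if se else 0.0

# Blue (current) vs. green (candidate), illustrative counts only.
z = two_proportion_z(err_a=42, n_a=10_000, err_b=75, n_b=10_000)
if z > 1.645:  # one-sided test, alpha ~= 0.05
    print(f"z={z:.2f}: green is degraded, hold the cutover")
else:
    print(f"z={z:.2f}: no significant degradation detected")
```

The statistics decide whether something is wrong; the AI layer Tom describes is then useful for explaining which users, machines, or requests were affected.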
[00:22:43] - [Speaker 1]
But regardless of where, you know, crystal-balling aside, this actually lands in those various different places, the big critique that I hear about AI is, you know, that it's nondeterministic. It's not perfect. It only succeeds 98% of the time, so if you assign it, you know, a hundred tasks, then it's gonna go off the rails. Well, sure.
[00:23:07] - [Speaker 1]
But, you know, guess what else isn't deterministic and perfect? Like I said earlier, humans are not deterministic. Humans are not perfect. So we are not comparing AI against perfection. We are comparing AI against human imperfection.
[00:23:21] - [Speaker 1]
And so, again, while many of those initiatives will fail, I would much rather stay on the forefront of that information and then be able to adapt to that landscape and be ready for whatever changes actually come. So I kinda gave you a non-answer because, honestly, I have no idea what that crystal ball is going to show, but I'm gonna keep trying, and I'm gonna keep staying on the forefront.
[00:23:41] - [Speaker 0]
Diplomatically answered there. And finally, what advice would you give to, well, maybe we've got some people listening who are in risk-averse organizations. Maybe they're in a highly regulated industry or a public company, and they're desperately trying to balance innovation with caution, because adopting AI across that full development life cycle can feel overwhelming and daunting. Any advice for those people listening?
[00:24:06] - [Speaker 1]
Sure. So once again, I'm gonna remind everybody that I come from a regulated background. Like, I have helped write FMEAs. I'm familiar with ASIL and, you know, all the ISOs and everything, and how to manage risk matrices. So I often think about this in different levels of risk. There's personal risk and team-level risk, you know, like, am I gonna ruin my weekend with an outage?
[00:24:29] - [Speaker 1]
And there are, of course, things like software risk for life-saving devices. Right? We do not want any room for error, and we don't want a lack of redundancy in the aircraft that we happen to be flying on. Right? But one of the highest levels of risk is actually the business-level risk regarding innovation.
[00:24:51] - [Speaker 1]
And so caution in that case can be just as dangerous as innovation. So, again, take this with a grain of salt depending on what you are building and your industry, of course. But it has never been easier for some flashy new product to suddenly explode onto the scene, emerge from nowhere, and eat your lunch. An example of that: I mean, think about Zapier, right, where the entire business model is connecting different tools to one another and making sure that these workflows across tools are working really well. Guess what?
[00:25:25] - [Speaker 1]
With MCP servers, you're able to say, hey, Claude, what should I talk about in my one-on-one with my direct report? And it's looking through, like, Slack and calendars and everything and telling you. So those sorts of connectors are at risk, I would say. Right? Think about to-do lists.
[00:25:40] - [Speaker 1]
And, okay, cool, I can just, like, tell Lovable, make me a to-do app, and it'll create that for me. So for all of this low-hanging fruit, if this is something that can be easily vibe coded now, that has been democratized, so you cannot afford to be too cautious. This is where I'll return, I think, to the idea of value stream delivery, which is the idea of making sure that we know the value of the product we have and what's going out the door.
[00:26:08] - [Speaker 1]
The image that I always have: there was a book, what was it, Project to Product, I think it was called, and it's all about value stream management. And the mental image here was that everybody in a car manufacturing plant should be able to see what is going out the door, which is cars. It doesn't matter if you're in accounting. It doesn't matter if you're sweeping the floors or, you know, receiving guests at the front desk. I don't care what job you have.
[00:26:33] - [Speaker 1]
Everybody should be able to see that this is what we do here. We are putting cars out the door. That is the value that we are delivering. And so when we think about caution versus innovation, we should be able to innovate with intention. We should establish a value stream process so that every step is measured, so that as you actually adopt some of these new processes, as you adopt AI, we're able to identify the sticking points.
[00:27:00] - [Speaker 1]
So, sure, are we generating faster? Cool. What does that expose further down the assembly line? Where are we getting bottlenecked now that we have democratized and unlocked a lot more code generation? You know, is it in the review process?
[00:27:13] - [Speaker 1]
Is it in QA? Is it in, you know, automated testing? Is it in the infrastructure and deployment layer? Like, what is it? You should be able to measure all of those and have every single person who is in that delivery pipeline recognize: this is the value that we provide.
[00:27:28] - [Speaker 1]
This is the existence that we have at this company. Let's all make sure we're pointed in the same direction and innovate with intention along that pipeline.
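To sketch what "every step is measured" can look like in practice, here is a tiny Python example that computes how long one change waited between pipeline stages. The stage names and timestamps are invented; in a real pipeline they would come from your CI/CD and release tooling.

```python
from datetime import datetime

# Hypothetical pipeline events for one change: (stage, timestamp).
events = [
    ("code_generated",    datetime(2025, 10, 20, 9, 0)),
    ("review_approved",   datetime(2025, 10, 21, 16, 30)),
    ("qa_passed",         datetime(2025, 10, 22, 11, 0)),
    ("deployed",          datetime(2025, 10, 22, 11, 20)),
    ("released_to_users", datetime(2025, 10, 23, 14, 0)),
]

# Time spent between stages shows where the bottleneck moved
# once code generation itself got faster.
for (stage, start), (next_stage, end) in zip(events, events[1:]):
    hours = (end - start).total_seconds() / 3600
    print(f"{stage} -> {next_stage}: {hours:.1f}h")
```

If generation accelerates but review or QA wait times balloon, numbers like these make the new bottleneck visible to everyone in the value stream, which is exactly the "innovate with intention" point.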
[00:27:37] - [Speaker 0]
Wow. And I think that is a powerful moment to end on, a great message to finish our conversation today. But before I let you go, for anybody listening who wanted to find you or your team online, etcetera, to discuss anything we talked about today, and we did cover a lot in thirty minutes there,
[00:27:54] - [Speaker 0]
where's the best place to find you or your team at LaunchDarkly?
[00:27:58] - [Speaker 1]
Oh, sure. Well, launchdarkly.com is the easy answer, but you can also find us at all sorts of shows. I mean, we're international. We'll be at GitHub Universe and KubeCon and re:Invent. I was actually at the AWS event in London earlier this year.
[00:28:10] - [Speaker 1]
So we are all over the world, all over the place. But you can also always hit us up digitally, you know, LinkedIn, social media, YouTube, all sorts of content. Please feel free to reach out. We're always happy to chat.
[00:28:21] - [Speaker 0]
Well, I've got some good news and bad news for you there. The bad news is I missed you at the AWS London event, because I was there. But the good news is I'll be in Vegas as well. So, hopefully, we can meet in person, maybe record something there as well.
[00:28:33] - [Speaker 1]
That sounds great. I'm looking forward to it, Neil. Thanks again for having me.
[00:28:36] - [Speaker 0]
No problem. Thanks for joining me. So AI might be transforming the way we release software, but as Tom reminded us today, speed means little without structure. And his idea of structured play, setting time aside for experimentation and learning, feels like the missing piece in so many AI adoption stories that we see. Because safe, measurable, and intentional change: these are the things that turn innovation into very real, measurable progress.
[00:29:08] - [Speaker 0]
So a big thank you to Tom for joining me today and sharing his insights, and thank you for listening. And if I do sound a little bit different, I'm suffering from one of those autumnal colds at the minute, or man flu. But the show must go on. Right? So over to you before I go.
[00:29:26] - [Speaker 0]
What part of the workflow, or what part of your workflow, could benefit from a little more observability and a little less guesswork? And as I say that out loud, I very nearly burst into an Elvis song there. But because of my blocked nose and sore throat, I think it's actually done you a favor. Enough from me. I'll be back again very soon with another episode, but thank you for listening as always.
[00:29:51] - [Speaker 0]
Bye for now.

