Dynatrace Intelligence And The Shift From Observability To Autonomous Action
Tech Talks Daily · February 15, 2026
23:40 · 21.66 MB


Perform 2026 felt like a turning point for Dynatrace, and when Steve Tack joined me for his fourth appearance on the show, it was clear this was not business as usual.

We began with a little Perform nostalgia, from Dave Anderson's unforgettable "Full Stack Baby" moment to the debut of AI Rick on the keynote stage. But the humor quickly gave way to substance. Because beneath the spectacle, Dynatrace introduced something that signals a broader shift in observability: Dynatrace Intelligence.

Steve was candid about the problem they set out to solve. Too much focus on ingesting data. Too much time spent stitching tools together. Too many dashboards. Too many alerts. The real opportunity, he argued, is turning telemetry into trusted, automated action. And that means blending deterministic AI with agentic systems in a way enterprises can actually trust.

We unpacked what that looks like in practice. From United Airlines using a digital cockpit to improve operational performance, to TELUS and Vodafone demonstrating measurable ROI on stage, the emphasis at Perform was firmly on production outcomes rather than pilot projects. As Steve put it, the industry has spent long enough in "pilot purgatory." The next phase demands real-world deployment and real return.

A big part of that confidence comes from the foundations Dynatrace has laid with Grail and Smartscape. By combining unified telemetry in its data lakehouse with real-time topology mapping and causal AI, Dynatrace is positioning itself as the engine behind explainable, trustworthy automation. When hyperscaler agents from AWS, Azure, or Google Cloud call Dynatrace Intelligence, they are expected to receive answers grounded in causal context rather than probabilistic guesswork.

We also explored what this means for developers, who often carry the burden of alert fatigue and fragmented tooling. New integrations into VS Code, Slack, Atlassian, and ServiceNow aim to bring observability directly into the developer workflow. The goal is simple in theory and complex in execution: keep engineers in their flow, reduce toil, and amplify human decision-making rather than replace it.

Of course, autonomy raises questions about risk. Steve acknowledged that for now, humans remain firmly in the loop, with most agentic interactions still requiring checkpoints. But as trust grows, so will the willingness to let systems self-optimize, self-heal, and remediate issues automatically.

We closed by zooming out. In a market saturated with AI claims, Steve encouraged listeners to bet on change rather than cling to the status quo. There will be hype. There will be agent washing. But there is also real value emerging for those prepared to experiment, learn, and scale responsibly.

If you want to understand where AI observability is heading, and how deterministic and agentic intelligence can coexist inside enterprise operations, this episode offers a grounded, practical perspective straight from the Perform show floor.

[00:00:04] Welcome back to another episode of the Tech Talks Daily Podcast. Quick question for you all, what does autonomy really look like? Especially when software systems stop asking for permission and start making decisions on their own.

[00:00:21] Well today, this episode comes from Las Vegas, right in the heart of Dynatrace Perform 2026, an event that feels much less like a user conference and much more like a line in the sand where enterprise technology is heading next. And I've heard so many great stories over the last few days about agents, intelligence and automation.

[00:00:44] But here at Perform, the conversation is shifting from possibility to execution, from theory to systems already operating in production. And my guest is Steve Tack, Chief Product Officer at Dynatrace, and this marks his fourth appearance on the show. And I think that alone tells a story.

[00:01:05] And each conversation over the years has tracked a different phase of Dynatrace's evolution, from observability foundations to causal AI to the growing pressure that enterprises are facing as software estates expand across cloud, AI and increasingly autonomous architectures. But this year's announcements introduce something new: Dynatrace Intelligence.

[00:01:57] All of it anchored in Grail and Smartscape, which are designed to deliver answers that enterprises can trust, rather than just guesses that they have to second-guess. So across the event this week, customer stories from organisations like TELUS and United Airlines have put hard numbers behind this narrative. We're talking about measurable improvements, faster decisions, clear return on investment. These are things that have been missing for the last few years in conversations around AI.

[00:02:27] So I want you to join me on a journey today that is no longer about dashboards, alert fatigue, or watching graphs during an incident bridge call. And we'll look at something different, turning observability into a control plane for AI-native systems, developer workflows, and ultimately improved business outcomes. So, yes, you've heard the hype around agentic AI, but it is now moving from hype into operational reality.

[00:02:56] So the big question for everybody listening is, how does your enterprise maintain confidence, explainability, and control, all while still moving fast enough to compete? These are just a few of the things we're going to talk about today. But enough scene setting for me. Let me officially beam your ears directly onto the show floor at Dynatrace Perform, where we'll talk about all this and much more. So welcome back to the show.

[00:03:26] Can you tell everyone listening a little about who you are and what you do for anyone that missed our previous conversations? Yeah, thank you for having me here. My name is Steve Tack. I'm the chief product officer for Dynatrace and spend a lot of my time working with customers and partners on what the market needs are, as well as then working with the great Dynatrace team and how we implement product and roadmaps to solve those needs. And obviously, we're recording this at Dynatrace Perform.

[00:03:50] And one of my longest-lasting Perform memories was Dave Anderson rapping to "Full Stack Baby", I think six years ago in 2020. But that moment has finally been eclipsed by AI Rick. So what do you think of that? Is there an AI Steve in the works too at some point? Let's hope not. And hopefully Dave Anderson's not listening to this either, because that probably just hurt his ego a little bit. But absolutely, yeah, it's amazing what you can do with AI creation tools. It really is. And obviously, this is your fourth time here on the show.

[00:04:19] And Perform 2026 feels like somewhat of a pivot for Dynatrace. So what problem were you most intent on solving with the introduction of Dynatrace Intelligence, which is all anyone's talking about here? Yeah, it's really exciting for us in terms of bringing this to market. It's been a long time coming, actually. But if you think about our market, there's observability in general.

[00:04:43] There's just so much emphasis always on ingesting data and not enough emphasis on what you do with that data, how you take action. And so there are some things that we've been talking about this week. They're not meant to be glib or just pithy, but there's too much time people spend actually gluing things together themselves, too many dashboards, too many alerts. And our goal with Dynatrace Intelligence was really to turn that into action. And obviously, the ecosystem as a whole is helping that.

[00:05:10] You've heard from partners like ServiceNow, Atlassian, the hyperscalers and others in terms of what is possible with agentic AI. So while there's a lot of agent washing and a lot of hype on this, we're really trying to cut through some of the noise and get people to what they'd be able to do with Dynatrace and moving down this autonomous operations path. And I must admit, when I was watching the keynotes yesterday, it felt like another tech conference talking about agentic AI and what a game changer it is. But it wasn't until, I think it was, United Airlines came out.

[00:05:39] And the ROI and that measurable difference between October and now, that's what blew me away. That's what people want to hear, isn't it? Yeah. And so it's one thing to do small pilots and POCs. That's where you learn. That's where you gain trust and confidence. But when you hear, yeah, Ramiro on stage from United talking about what they do with their digital cockpit and how it gives them views across every element of that experience, that's great. I was blown away this morning, too. I thought TELUS' story was really impressive. Vodafone was impressive.

[00:06:07] A lot of what these customers are doing by unlocking the data that we have in Dynatrace, using Grail, using Smartscape and the AI to drive those initiatives forward. It's a lot of fun, you know, to hear that. And really, that's what I think makes Perform so great: it's so anchored in the customer value. Yeah. And for people that obviously are not here, when we were talking about United yesterday, they're now the number one airline for on-time flights, which is a huge metric, isn't it? Something like that. Yeah. Yeah.

[00:06:35] I mean, especially with what's been going on with the weather these last few days, I'm sure those numbers will slip a little bit. But yeah, they really do an amazing job of putting, you know, the traveler really in the spotlight. So just to roll back for a minute, Dynatrace Intelligence: ultimately you're bringing together deterministic intelligence with agentic AI. So for leaders hearing a lot of hype around autonomy right now, where does this approach genuinely change that day-to-day operation that they may be looking to improve right now? Yeah, it's a good question.

[00:07:04] And maybe just pulling on that thread that you referenced about bringing together deterministic and agentic. So Dynatrace, we do have a history in terms of making sure that we are using AI statistical capabilities to drive operations, observability, development forward. And when it comes now to agentic, one of the reasons we're putting so much emphasis on deterministic is that you can have confidence and trust in the data. You can't do everything probabilistically with AI in general.

[00:07:33] You can have a different decision tree, different path for every agentic interaction. So how do you then get the confidence to move forward? And so with bringing deterministic together, you have that root cause. You have the causal, the predictive capabilities that people are confident in. And then we believe that will help them be more aggressive, might not be the right word, but lean in, take more risk, and have the confidence that they can be successful in those agentic paths. And so you hear a lot about auto-optimization, auto-healing, auto-remediation.

[00:08:01] That's really the goal that we're trying to deliver for our customers, for our partners, that they're able to move down this autonomous operations path. And you mentioned the word risk there and encouraging them to take more risks. One of the big takeaways that I'm going to be walking away with from here is that it's no longer just about the low-hanging fruit. It's about making those big plays. We're moving away from little pilot experiments and really making a difference, right? Absolutely. And I know that sometimes vendors like to prey a little bit more on the risk and the uncertainty.

[00:08:31] And I fundamentally don't like that approach. I mean, I think there's so much potential here that we should be thinking about what is the way forward? How do we drive that down the innovation path? And to your point, you hear terms like pilot purgatory, where it's really easy to get that quick win in the POC. But if you don't get it out into real production scenarios, you're not getting that return on investment. And so I really hope, and not just hope, I believe, that when we're back here next year,

[00:09:00] we'll be hearing a lot of successes from customers on how they were able to get the results, similar to what you were talking about with United. But really how we see that return on the agentic AI investments that they make. And something else that stands out is you've positioned Grail and Smartscape as the foundation for trustworthy and explainable AI. So why does causal context matter so much when enterprises start letting systems reason and act on their behalf? It's quite an easy question to answer, I would imagine.

[00:09:29] But there'll be a lot of people listening that just don't understand the complexity of the tools that we're playing with here. Yeah, absolutely. And so maybe just a double click into those really quick. So Grail is our data lakehouse. It's always hydrated. It brings together all different telemetry types into one view so that it's all accessible to analytics. And so that's different than if you think about how a lot of organizations have maybe stovepipe silos, different data stores for events or traces, or not even having a real-time topology like Smartscape.

[00:09:58] So we bring all that information together. And then the causal piece is just so important because, once again, every agentic interaction might be unique. That totally changes the game in terms of how you then have to manage those, how you drive it forward. And so by bringing together Smartscape and Grail, that also brings together the deterministic and agentic capabilities that we've laid the groundwork for in the Dynatrace platform. This is going to give them that confidence that they understand the explainability, to use a word that you just used.
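To make the causal-context idea concrete, here's a minimal sketch of how a real-time topology map lets you walk deterministically from a failing service to a root-cause candidate, rather than guessing probabilistically. This is an illustrative pattern, not Dynatrace's actual algorithm; the service names and the `DEPENDS_ON` structure are invented for the example:

```python
from collections import deque

# Hypothetical service topology: edges point from a service to the
# services it depends on (a tiny stand-in for a Smartscape-style map).
DEPENDS_ON = {
    "checkout": ["payments", "inventory"],
    "payments": ["payments-db"],
    "inventory": ["inventory-db"],
}

def root_cause_candidates(failing_service, unhealthy):
    """Walk the dependency graph from a failing service and return the
    unhealthy services with no unhealthy dependencies of their own --
    the most likely root causes rather than the visible symptoms."""
    candidates, seen = [], set()
    queue = deque([failing_service])
    while queue:
        svc = queue.popleft()
        if svc in seen:
            continue
        seen.add(svc)
        unhealthy_deps = [d for d in DEPENDS_ON.get(svc, []) if d in unhealthy]
        if svc in unhealthy and not unhealthy_deps:
            candidates.append(svc)  # unhealthy leaf: blame stops here
        queue.extend(DEPENDS_ON.get(svc, []))
    return candidates

# If checkout, payments, and payments-db are all unhealthy, the walk
# pins the blame on payments-db instead of the symptom at checkout.
print(root_cause_candidates("checkout", {"checkout", "payments", "payments-db"}))
```

The point of the deterministic walk is that the same topology and the same health states always produce the same answer, which is what makes the result explainable to a human or another agent.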

[00:10:27] And then that's really what you need to make sure that, hey, we're ready to go. We're ready to push this forward in production. And, of course, multi-cloud complexity is also something that continues to rise. So what practical differences will teams notice from the expanded cloud-native integrations across AWS, Azure, and GCP when they're troubleshooting incidents under pressure?

[00:10:49] Because it's very easy to be on stage talking about this stuff, but when that P1 hits and it's all hands on deck and the C-suite are asking for updates every 20 minutes, that's when the real stuff happens, isn't it? It is. And so that was another big announcement that we made was just the path forward that we've taken with the hyperscaler native integrations. In many ways, this is making it easier to get data in and also take advantage of the work that we're doing with the hyperscalers themselves.

[00:11:14] And so we're coming off of, at the end of last calendar year, it was re:Invent, it was Ignite, it was the GCP show. And what we're doing with all the hyperscalers is leveraging that agentic collaboration. Actually, thinking back to your question from two ago, it was interesting that some of the hyperscalers were commenting on how much they now realize the value of that causal AI.

[00:11:35] Because when there are agents, so if you have the Azure SRE agent or the AWS DevOps agent, they know that when they call out to Dynatrace and they use Dynatrace Intelligence, they're getting back something they can trust. It's not just a natural language interface that's bolting on data from here or there. They're actually getting answers from us. And that's something that, once again, may be more of a slogan, but you've heard Rick, our CEO, talk about answers, not guesses.

[00:12:02] It's something that we believe in and that will change the way people adopt across those hyperscaler integrations and really have confidence to move forward and avoid the P1s that you described. Yeah. And there was a great line mentioned yesterday about how we're bringing technology back to the developers. And I think it's such a great line because I think in some circles, developers can be quite cynical when they hear about the AI hype and everything. But bringing it back, that recurring theme this year is all about developer productivity.

[00:12:30] And I do think developers have been traditionally underappreciated. So how do these new observability capabilities shift developers from passive monitoring towards active control of cloud- and AI-native software, and finally remove alert fatigue as well? Yeah, I think our core goal or mission here is that we want to continue to bring the data into their context and flow.

[00:12:54] And right now, if you look at a development team that also has to take care of production resilience, they're working in VS Code or they're working in their IDE. And then they're alt-tabbing to different dashboards and different views. And once again, it's a very manual process. It might work when you start, but once you scale up and those services are interconnected across different systems, it's just not going to work. You're going to hit a wall. And so with some of the announcements that we introduced this week, we're bringing that data to them.

[00:13:21] So there are advancements in our live debugger, which brings those observability insights directly into VS Code or Windsurf or Cursor. It's the work that we've continued to expand on the IDP side, so working with Backstage and giving them those developer-optimized endpoints and in-context views. And then the GA launch of our remote MCP server. So we've been working with hundreds of customers on this already. You heard from TELUS today how they're using that in terms of just making Dynatrace more accessible directly in Slack,

[00:13:49] working with Atlassian, working with ServiceNow. This is making it just that much easier for them to stay in their flow and concentrate on development, innovation, delivering more products, shipping more innovation, and not having to deal with the toil that you would if you didn't have that automation capability. And just to double-click on that, with agentic workflows, MCP integrations, and live debugging now part of the platform, how should engineering leaders listening be thinking about experimenting safely in production without increasing risk?

[00:14:19] Yeah, that's also, I think, what you'll see is a big push in terms of, as much as we're talking about agentic AI and automation, we also are talking about amplifying the human, not replacing the human. And for these early lessons learned, we fully expect there will still be a human in the loop. And so I think there was a stat shared yesterday. I'm sure I'm going to get this wrong, but I think it was something like 70% of the agentic interactions still do have that human checkpoint. And so that is how we build trust.
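That checkpoint idea can be sketched in a few lines. This is an illustrative pattern, not a Dynatrace API; the policy, the action names, and the callbacks are invented for the example:

```python
def requires_approval(action):
    """Policy gate: low-risk reads run autonomously; anything that
    could mutate production still goes through a human checkpoint."""
    AUTO_APPROVED = {"fetch_metrics", "summarize_incident"}
    return action not in AUTO_APPROVED

def run_agent_step(action, execute, ask_human):
    # Deterministic gate in front of the probabilistic agent:
    # the agent proposes an action, but the policy decides who confirms.
    if requires_approval(action) and not ask_human(action):
        return f"{action}: skipped (human declined)"
    return execute(action)

# A read-only step runs straight through; a remediation waits on a person.
print(run_agent_step("fetch_metrics", lambda a: f"{a}: done", lambda a: False))
print(run_agent_step("restart_service", lambda a: f"{a}: done", lambda a: True))
```

As trust grows, the `AUTO_APPROVED` set simply widens; the gate itself never goes away, which is what keeps the rollout reversible.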

[00:14:49] I think trust, safe worthiness, and maybe that's not a real word, but kind of going down that path of how do we make sure that people feel that they're still in control? The explainability, you know, rings true as well. But as we have that human in the loop, the more confidence they gain, the more they'll start to let it run on its own. But surfacing recommendations, surfacing answers, that will be the start. And yeah, we'll see it continue to flourish. And I was talking with Pablo from ServiceNow yesterday,

[00:15:18] and the possibility of in the future having an AI agent of sorts at the change advisory board meeting, and he said 100%. That's inevitable. Would you agree with that as well? 100% also. I think they refer to that as the pre-flight check, just making sure that if I am making a change, do I understand the potential impact, the potential scope, calling out to Dynatrace to see the historical reliability and resilience of that service, to know if it's something you have to take extra care with, or if there's extra risk around it.

[00:15:46] But absolutely, there's so much opportunity to surface and share those ideas and insights as you make those changes. And so I really believe a lot of this will always change. It's an evergreen field to some degree, but how do we then give people the confidence to continue down that innovation path? 100% with you there. And real user monitoring is also something that's been around for years, but what limitations finally forced a rethink there? And how does combining front-end and back-end context there,

[00:16:15] how does that change how teams understand real customer experiences? One, having the right visibility, but also extending it to the edge on these modern apps. So a lot of people still tend to think in concepts of page loads and not swipes and gestures and scrolls and many of those events that you do not see on the surface. So we want to make sure that our customers truly understand what's happening at the edge and the experience they're delivering. Now, we've done that before.

[00:16:42] Dynatrace has always had that belief that everything gets realized from a value standpoint of the user. But by bringing the real user monitoring data into Grail, like we were talking about earlier, now I have that front-end context with the back-end context. And so I can drive the right analytics and understand why certain experiences were delivered, what changes we can implement to drive those forward, and once again, then start to bring in some of the elements of how we can implement change from a developer side.

[00:17:10] So we recently talked about a small acquisition that brought feature flagging capabilities to the Dynatrace platform, DevCycle, built on OpenFeature, an industry standard, in terms of understanding release observability, how we drive things forward and putting more control into the development team for those changes. And this is an example of marrying your developer question together with the real user insights, and then also really getting that business context of how well that service is delivering to the users you're trying to reach.
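Release observability via feature flags follows a simple pattern: evaluate a boolean flag per request and fall back to a safe default. The sketch below mimics the OpenFeature-style boolean-flag lookup in plain Python; it does not use the actual OpenFeature SDK, and the class, flag, and user names are invented:

```python
class InMemoryFlags:
    """Tiny stand-in for a feature-flag provider (not the real
    OpenFeature SDK): evaluate a boolean flag per request, falling
    back to a safe default when the flag is unknown."""
    def __init__(self, flags):
        self.flags = flags  # flag key -> rule(context) -> bool

    def get_boolean_value(self, key, default, context=None):
        rule = self.flags.get(key)
        if rule is None:
            return default  # unknown flag: fail safe rather than crash
        return rule(context or {})

# Ship the new checkout path to a beta cohort first, everyone else later.
flags = InMemoryFlags({
    "new-checkout": lambda ctx: ctx.get("user") in {"alice", "bob"},
})

def checkout(user):
    if flags.get_boolean_value("new-checkout", False, {"user": user}):
        return "new checkout path"
    return "legacy checkout path"

print(checkout("alice"))  # beta user gets the new path
print(checkout("carol"))  # everyone else stays on the legacy path
```

The observability tie-in is that each evaluation can be emitted as an event alongside traces, so a regression can be correlated with the exact flag state that served it.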

[00:17:39] So if we look ahead to enterprises that are scaling AI and agentic systems in production, is there a mindset or operating change that you think organizations must begin to make this year to avoid observability becoming their next bottleneck? I think there are, yeah, continuous mindset shifts. I do think that maybe one element that actually helps us with that, though, is AI observability. So AI is not a black box.

[00:18:08] We do talk a lot about how you might not always know what you're getting back from the LLM as you add those new LLM-based AI services, but AI observability gives you that view. So you can understand from the LLM side the right analytics, what models are best, what model versions are best, whether you're at risk of leaking PII data, and what other guardrails you have to put in place. From a change approach, that is one way, once again, to give you more confidence that you're moving forward in the right direction,

[00:18:37] and also then understanding the agentic interaction. So how do these different agents behave? How do you get that wide enterprise view across there? That is one way that people should be able to drive forward in a faster way. And finally, I always like to give my guests a virtual soapbox of sorts for any misconceptions or myths that might frustrate you. I'm sure you spend time on LinkedIn, Reddit, and all the usual places. You see stuff that just frustrates the life out of you.
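Sticking with the AI observability point for a moment, the PII guardrail Steve mentions can start as a simple scan of model output before it leaves your boundary. A minimal sketch; the patterns and the function name are illustrative, not a Dynatrace feature, and real guardrails use far richer pattern sets plus ML-based classifiers:

```python
import re

# Illustrative detectors for two common PII shapes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_llm_output(text):
    """Return the PII categories found in a model response, so the
    caller can block, redact, or flag it before it reaches a user."""
    return sorted(kind for kind, pat in PII_PATTERNS.items() if pat.search(text))

print(scan_llm_output("Contact jane.doe@example.com about ticket 42"))
print(scan_llm_output("All clear, no sensitive fields here"))
```

Wiring a check like this into the response path is one concrete form of the "guardrails you have to put in place" before letting LLM-backed services run unattended.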

[00:19:05] Is there anything that you'd like to lay to rest today once and for all? I mean, I'm zooming way out now, but I think you get different groups of people. There are the cynics who ask, why change, and then there are the people who want to be on the leading edge of change. And if you're going to think change versus status quo, you've got to bet on change every time. So while there will be a lot of agent washing, I'm sure there'll be some people who'll listen to this and say, well, there's yet another vendor talking about AI and agentic AI. I think cutting through the noise is really important.

[00:19:35] I think you have to get your hands dirty in these new technologies. And the potential is massive if people drive towards it. So is it going to be click-button easy? No. There'll be a lot of learnings, a lot of things that people will work through in those pilots. We'll get the success rate higher, but it is real. And we see that within the customers that have already presented. So I'm confident that we'll drive change in this market. Don't say this too will pass, that it's yet another hype cycle. It goes through its own maturity, but definitely bet on the change.

[00:20:05] And I'm expecting a lot of value for our customers on the agentic AI side. Well, it's a pleasure as always to talk with you today. For anyone listening wanting to learn more about any of the announcements, anywhere you'd like to point everyone? Absolutely. I mean, the Dynatrace website, our product blog, is full of all the new news. There are definitely ways that you can get your hands dirty with it too. So we provide a sandbox environment for anyone that wants to see it live. And we're happy to work with customers as they go.

[00:20:32] But product news for Dynatrace and the free trial for Dynatrace are two great spots to start. Well, it's a real pleasure chatting with you again today. Maybe we can meet again next year for your fifth interview. Maybe if I get my people to talk to your people, we'll get them to create an AI Steve to do "Full Stack Baby". Could that happen, do you think? I hope not, but we'll see. So as this conversation comes to a close today, one of the things that stood out to me most this year

[00:20:59] is that Dynatrace is framing autonomy as a responsibility, not just another shortcut. And throughout the Perform 2026 event, the message has been consistent. Intelligence without context creates risk. Automation without trust, well, that's just going to create hesitation. And what Dynatrace intelligence is attempting to do here is connect all these pieces together,

[00:21:27] grounding agentic behavior in causal understanding and giving teams the confidence to let systems act while humans remain firmly in control. And Steve's perspective, I think, reflects a much broader shift across enterprise technology. And that means the era of endless pilots and proof of concepts is wearing thin. It has been for 12 months or more now. Leaders now expect and want results in production.

[00:21:57] Developers, they want less toil and more time to build. Executives, they want clarity on impact, cost and resilience. And I think the stories shared this week suggest that observability, when treated as an active system rather than just another passive tool, maybe it can meet all three. And if there's one takeaway to sit with after this interview, I think it's that autonomy is no longer a distant future concept. It's already shaping how change is approved,

[00:22:26] how incidents are prevented, and how digital experiences are delivered. The only real question is whether your organization is ready to bet on progress, learn quickly and move forward with intention, rather than fear or caution. So for anyone listening that wants to explore the Perform 2026 announcements in more detail, I will have links to everything in the show notes, so check those out. And if this conversation sparked any questions about how intelligence,

[00:22:56] context and trust intersect in your own systems, let me know your thoughts. And finally, as software begins to reason and act on our behalf, how confident are you in the answers it will be giving you? So many big talking points here. We could carry on for another few hours here, but I've got to go and prepare for a guest tomorrow. So please, pop by techtalksnetwork.com, send me a message, let me know. But that's it for today.

[00:23:24] So thank you for joining me here as always, and I will meet you here same time, same place tomorrow. Bye for now.