3553: How Coralogix is Turning Observability Data Into Real Business Impact
Tech Talks Daily | January 14, 2026
32:59 | 25.31 MB


What happens when engineering teams can finally see the business impact of every technical decision they make?

In this episode of Tech Talks Daily, I sat down with Chris Cooney, Director of Advocacy at Coralogix, to unpack why observability is no longer just an engineering concern, but a strategic lever for the entire business. Chris joined me fresh from AWS re:Invent, where he had challenged a long-standing assumption that technical signals such as CPU usage, error rates, and logs belong only in engineering silos. Instead, he argues that these signals, when enriched and interpreted correctly, can tell a much more powerful story about revenue loss, customer experience, and competitive advantage.

We explored Coralogix's Observability Maturity Model, a four-stage framework that guides organizations from basic telemetry collection to business-level decision-making. Chris shared that many teams stall on measuring engineering health without connecting that data to customer impact or financial outcomes. The conversation became especially tangible when he explained how a single failed checkout log can be enriched with product and pricing data to reveal a bug costing thousands of dollars per day. That shift, from "fix this tech debt" to "fix this issue draining revenue," fundamentally changes how priorities are set across teams.

Chris also introduced Olly, Coralogix's AI observability agent, and explained why it is designed as an agent rather than a simple assistant. We discussed how Olly can autonomously investigate issues across logs, metrics, traces, alerts, and dashboards, enabling anyone in the organization to ask questions in plain English and receive actionable insights. From diagnosing a complex SQL injection attempt to surfacing downstream customer impact, Olly represents a move toward democratizing observability data far beyond engineering teams.

Throughout our discussion, a clear theme emerged. When technical health is directly tied to business health, observability stops being a cost center and becomes a competitive advantage. By giving autonomous engineering teams visibility into real-world impact, organizations can make faster, better decisions, foster innovation, and avoid the blind spots that have cost even well-known brands millions.

So if observability still feels like a necessary expense rather than a growth driver in your organization, what would change if every technical signal could be translated into a clear business impact, and who would make better decisions if they could finally see that connection?

Useful Links

Thanks to our sponsors, Alcor, for supporting the show.

[00:00:04] Welcome back to the Tech Talks Daily Podcast. I want to take a moment to thank each and every one of you who keep tuning in, sharing feedback, stopping me at events to say hello. Because over the last year, I've been on the road to more than 20 tech conferences around the world. And one of my big goals going forward is to meet even more of you in person, wherever you listen in the world.

[00:00:26] So whether it is a hot coffee or a cold beer after a long conference day or a chat between keynotes, I genuinely love to connect. And you can check out my itinerary for this year and learn more about how you can work with me over at techtalksnetwork.com. Hopefully we can make something work there. But today's guest is someone who has spent years sitting at the intersection of engineering, product and business outcomes.

[00:00:53] His name's Chris Cooney. Now, Chris leads technical go-to-market and developer relations and brings a practitioner's view on why observability data often lives in silos, disconnected from the decisions that really matter. Does that sound familiar? Well, today we're going to unpack how observability can move beyond dashboards and alerts and how technical signals can be translated into very real business impact.

[00:01:21] And most importantly, why empowering teams with the right context could change how decisions get made and even give you a competitive advantage. Lots of valuable takeaways in this one. Now, before we begin today's interview, and there are some great insights in it, I just want to give a special mention to my friends at Denodo, who are passionate about the future of logical data management and agentic AI. Because everywhere you look, agentic AI is undoubtedly the next big shift.

[00:01:51] But here's the truth. It can't operate on messy, inconsistent or siloed information. Enter Denodo and their logical data management. So if you want AI that doesn't just automate but operates, start with logical data management at Denodo.com. But enough from me. Let's get Chris onto the podcast now. So a massive warm welcome to the show. Thanks for joining me today. Can you tell everyone listening a little about who you are and what you do?

[00:02:21] Hi, Neil. Thank you very much for having me. My name's Chris Cooney. I am responsible for technical go-to-market here at Coralogix. Coralogix is a full-stack observability platform. We are kind of bringing the data back into the customer's account, allowing direct query with extremely flexible and high-performance analytics.

[00:02:37] And my role there is largely on the developer relations side, where I am understanding what the developer community and the observability space are thinking, finding trends, and making sure that our product teams know what those trends are, making sure that our sales teams are trained on what's going on in the community. And indeed speaking with many of our customers and engineers in the community to better understand where their problems are.

[00:02:58] And I'm also responsible for the product marketing side as well, just keeping on top of making sure that our product is shown in the best light on our various forms of media. Well, I'm so glad we finally got to speak with each other because we were both at AWS re:Invent, hoping to get an interview live on the show floor. But we're both so busy. It just didn't happen. And in fact, you were actually on stage at the event talking about how high-performing teams can connect observability to business growth, et cetera.

[00:03:27] But for anybody that wasn't there or didn't see you speaking there, what problem were you trying to challenge right at the start of that session? What were the points you were hammering home? Yeah. So the base assumption. I think I said I've been a software engineer for coming on to 14 years. I was an SRE. I was in engineering leadership in various different companies. The pattern that I kept seeing, and I ventured to call it an anti-pattern, was the hard separation of tech and business.

[00:03:56] And that's less common these days because tech is such an important enabler for business outcomes now. But I still think that there's this great divide between what I would describe as the technical. So, for example, you know, the metric of a CPU, whether it's high or low, that kind of thing, is seen as a technical signal. And those technical signals are often siloed in the technical parts of the organization. There are certainly some technical adjacent parts. For example, like in marketing, you have marketing ops teams that are sitting at the side listening to some of these signals. But it's quite rare.

[00:04:26] And one of the things that we realized at Coralogix, we have over 4,000 customers now, and we've worked with so many of them in such an in-depth way. We've seen the importance of these technical signals for the entire organization. So that was a big thing in how we designed the product, how we designed our approach to the market in general. Even our engagement model and our support model have been framed by this: these technical signals, when harnessed and processed in the right way, when organized in the right way, have enormous potential for business impact.

[00:04:56] And that is the core essence of the talk that I was bringing forward. And I used very simple data structures to show that and then left it up to the imagination of the audience when I said, imagine what we could do when we have a full stack of observability data, you know, traces, metrics, and logs, and so on. And in that talk, you did introduce everyone to Coralogix's observability maturity model. Can you tell the listeners a little bit more about what that model really is, how it's designed to help leaders see things differently in their systems?

[00:05:25] Because I think there's a lot of misconceptions about observability. It is evolving that world. I'd love to hear more about what your plans are and what that maturity model looks like. Firstly, Neil, I'll just say that I love to hear maturity. That is fantastic. That is really bringing it home for me. So our maturity model is something that we've produced over thousands of interactions with thousands of customers, actually. And we kept seeing these trends where we realized we can categorize many of the business units within our customers.

[00:05:55] So not the companies as a whole, but individual teams, individual units within our customers. We could categorize them into four different levels. Level one would be they're just collecting telemetry. They're getting coverage of basic telemetry. Things like rolling out open telemetry, which is an open source collection of standards, protocols, SDKs, and so on, for actually collecting that raw telemetry. Just rolling out good practices for collecting information.

[00:06:19] Level two is taking all that telemetry they've collected and saying, well, how are my engineering metrics performing? So, for example, how are my teams delivering? How are my services running? Are my databases healthy? Is my infrastructure going well? Level one and level two are what we call generic problems, because then we get into something more specific. There are many observability maturity models out there today. But I think ours goes the furthest by a long, long stretch. And the reason for that is because level three is where we start thinking about, okay, yes, we've started to measure engineering.

[00:06:49] But what if we take that engineering knowledge and we put it in the context of production? So the difference, to illustrate this, I think is best explained by this. Service level objectives are an internal metric that teams use to track their service, the service they're offering. It's used often by engineering teams. It's not often published. It's something that the teams use themselves to make internal technical decisions. An SLA, service level agreement, is a contractual obligation that you have to a customer. So they both measure similar things.

[00:07:17] They're both measuring uptime or resilience and so on. But they function in different ways. And so measuring SLOs, your internal metrics, your internal engineering targets, that's very much level two. Level three is about thinking, well, how are we impacting the customer? Are we compliant with our SLAs? Is the service we're producing actually having a positive impact? Did that latest outage actually impact the customer in a significant way? And so on. And finally, level four. And level four, I think, is what makes this whole thing really, really unique.

[00:07:47] And it's enabled by Coralogix's architecture. Very, very briefly, just to explain that. Coralogix allows customers to ingest data into Coralogix, but we write it back to the customer's own cloud object storage, for example, S3. We do this for a few reasons. One, we have built an entire query engine from the ground up to allow customers to directly access their data that's held in S3, or indeed many other cloud object storage services, but in this example, we'll go with S3.

[00:08:13] What that means is they can hold an enormous amount of data that they wouldn't normally be able to hold. Why? Because S3 per gigabyte per month is very, very low. And the read-write costs are very, very low, especially in comparison to what a lot of SaaS vendors will charge you because they are just using the same technology behind the scenes, just with a markup. So what we've done is allow customers to hold on to way, way more data. And within all that data, within that kind of scale of data, what we started to see was, well, there's these interesting technical things that keep coming up.

[00:08:43] And the example I gave in my talk was you have a single log that says this product failed at the checkout process. And all it says is product ID failed. It's just a pure string. It's not even JSON. It's not even machine readable. And what we do is we go through the stages of parsing it, enriching it, transforming it, organizing that data into data sets and saying, well, now what can we learn about that? Well, we have a product ID. So we can have a lookup table that enriches that product ID with a bunch of product information, for example, price.

[00:09:10] And now, because we know in the business context, this was part of the checkout process, we can actually calculate the money we're leaving on the table as a result of this bug. And that's a really, really fundamental and simple step through one, two, three, and four. Level one is collecting that data. Two and three is using it, or two is using it to understand your service health. Three is using it to understand whether customer sentiment is changing, whether it's impacting our bottom lines. Number four is here is a way of prioritizing our tasks based on revenue impact.
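The enrichment flow Chris walks through here can be sketched in a few lines of Python. This is a hypothetical illustration, not Coralogix's actual pipeline: the log format, table, and function names are all assumptions made for the example.

```python
# Hypothetical sketch of the enrichment flow described in the episode:
# parse a raw, non-JSON string log, enrich the product ID from a
# pricing lookup table, and total the revenue being left on the table.
import re

# Raw checkout-failure logs, as in the example: just plain strings.
RAW_LOGS = [
    "product 1042 failed",
    "product 1042 failed",
    "product 2210 failed",
]

# Lookup table standing in for the product/pricing data set.
PRICE_TABLE = {"1042": 49.99, "2210": 120.00}

def parse(line):
    """Extract the product ID from a raw checkout-failure log line."""
    match = re.search(r"product (\d+) failed", line)
    return match.group(1) if match else None

def daily_revenue_impact(logs, prices):
    """Enrich each failure with its price and total the money at risk."""
    total = 0.0
    for line in logs:
        product_id = parse(line)
        if product_id in prices:
            total += prices[product_id]
    return round(total, 2)

print(daily_revenue_impact(RAW_LOGS, PRICE_TABLE))  # 219.98
```

Framed this way, the same failed-checkout log becomes "this bug is costing us $219.98 today" rather than "fix this tech debt".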

[00:09:40] And as I'm sure you've seen yourself, Neil, like many engineers listening will know, it's a fundamentally different conversation when you say, hey, I want to fix this bug. It's been in the background for a really long time. It's tech debt. And that is a technical way of describing the problem. But if you go and say, hey, this bug is costing us $15,000 every day. It's a constant background hum of issues. We could directly save money by fixing this issue. That is a fundamentally different conversation.

[00:10:09] And those are the conversations we try to empower customers to have within their businesses. And this is actually our company mission, which is to help our customers make better decisions. So, yeah, that's a little bit about the maturity model, how it's driving this and so on. I love that. And another thing that really stood out was the shift you described from simply collecting logs and metrics to making those business-level decisions, like you just mentioned there. So where do most organizations you meet still get stuck in that transition?

[00:10:38] You must get to speak with so many customers around the world. And indeed, at AWS re:Invent on your booth, you probably heard a lot of similar stories. But where is the bottleneck? Where are they struggling here? Great, great question. By far, by far, the most common problem is siloing. So, for example, when we start engaging the engineering teams and say, hey, there's this business data in here that we can pull out. They go, that's really cool. What do I do with it? And you go, well, who would you talk to?

[00:11:08] I don't know. And so the story I always tell, I think I told this in my talk as well, was my very first role, I was working at a betting company, a gaming company. I was doing a lot of broadcast media. Anyway, we took a lot of betting data from our punters and from customers and from other businesses and so on. What I realized as part of my analyses was that we were losing money in some places. And it was an accidental discovery.

[00:11:37] I didn't mean to find that information. And I found it and I went, oh, crazy. And then I just got back to work. I didn't tell anybody. And that's because nobody had told me what to do with that information should I find it. And looking back, it's common sense. Go and find someone. Tell them. Someone will know about this. I just figured somebody already knew. So this kind of training is really, really important. It's the reason why, actually, a big, big part of how we engage with customers now is we think about the maturity of data.

[00:12:05] And that is taking them through the maturity model, organizing the data in a sensible way, cleaning it, making it machine readable and queryable. But also training the teams. Some examples of how we do that. We go into companies and we train on things like service level objectives. With a company in the States recently, we did several hackathons to train on these engineering concepts. Because we know that it's going to have a positive impact on the engineering teams. And, of course, as a business, that's great for us because that impact is connected to us. But we're not going in and showing them how the product works as such.

[00:12:34] That's not the primary benefit. The primary benefit is that they walk away with engineering skills they didn't have before, because we improve the quality of the decision making. And these are all things that build people towards that kind of level four business-level decision making that happens autonomously in engineering teams. And on stage, you also showed how observability data can answer very real business questions, like drops in checkout or performance slowdowns, etc.

[00:13:00] So, of course, when you talk about this stuff, what kind of reactions do you get when people see those technical signals really translated into very real business impact? I'll give you two examples because I think they were both really, really interesting. Both people came to my talk; one came to the booth afterwards. So, immediately after the talk, I walked off stage and there was a really nice lady. She was telling me that she worked in product and she was trying her best to advocate this kind of message in her company, but she didn't have the words to articulate it. She was trying to piece it all together.

[00:13:30] Yeah. And she said that when we described it the way we described it, because the way I put it across was in stages. So, it was like, it's not just here's some data and here's the end result. It was here are the technical steps that you can take to actually turn this obscure technical signal into something that has a significant business impact. And she was like, that's very, very helpful for me because now I can understand the path that I can take my engineers and my engineering teams, for which I'm primarily responsible,

[00:14:01] On that journey towards this kind of business-level intelligence. That was one I thought was really nice. The second was a gentleman I was speaking to at the booth. We had a very large booth at re:Invent this year. It was a 30-foot setup. And I was giving him a demo of the Coralogix platform. And he said, look, this is great. I know my engineering team is going to love it. But here's my big problem. I need to be able to show a dollar figure amount on the screen. And I said, why? He goes, well, because I feel like my engineers just don't know. And because they don't know, they don't care.

[00:14:30] They prioritize the world that they know. They make decisions on the information that they have. And the information that they have is like my disk IO is slow or my RAM is filling up or my database is running out of storage space. And he said, I want them to have revenue in front of them. Is that something you can do? And I was like, well, let me show you. And he'd been to my talk. So he obviously had a hunch. But when I talked him through, again, the steps and then more concrete, I showed him the demo in the platform.

[00:14:59] He said, it's really interesting because this is going to fundamentally change how engineering teams make choices. One of the big important things about scaling engineering teams is autonomy. So, you know, SLOs, for example, came about because Google were trying to work out how they could have autonomous engineering teams at scale. Thousands of engineers all making sensible, balanced tradeoffs between resilience and, you know, innovation. And some teams obviously indexed heavily towards innovation.

[00:15:27] Some teams indexed heavily towards resilience. And they wanted to try and find a way of getting that balance. That gave birth to service level objectives. This kind of autonomous decision making is how you scale an engineering function. So if you can find a way of embedding a certain level of business acumen in the teams through the data that can be derived from those technical signals, you suddenly have engineering teams that are thinking much more deeply in the context of the business in which they operate.
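The autonomy mechanism Chris describes, SLOs balancing resilience against innovation, can be sketched as a simple error-budget policy: the objective defines how many failures are allowed, and the budget that remains steers the team's posture. The thresholds and function names below are illustrative assumptions, not any specific Google or Coralogix implementation.

```python
# Illustrative error-budget policy; names and thresholds are assumptions.

def error_budget_remaining(total_requests, failed_requests, slo=0.999):
    """Fraction of the window's error budget still unspent."""
    allowed_failures = total_requests * (1 - slo)
    if allowed_failures == 0:
        return 0.0
    return max(0.0, 1 - failed_requests / allowed_failures)

def release_posture(budget_remaining):
    """Autonomous team policy: keep shipping, or focus on resilience."""
    return "innovate" if budget_remaining > 0.5 else "stabilize"

# 1M requests at a 99.9% SLO allows ~1,000 failures; 400 failures
# leaves about 60% of the budget, so the team keeps shipping features.
remaining = error_budget_remaining(1_000_000, 400)
print(release_posture(remaining))  # innovate
```

The point of the rule is that no manager needs to arbitrate each release: the data itself tells every team which side of the tradeoff to lean into.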

[00:15:53] I was often shocked in my first few years as an engineer when I would work with teams, you know, in various different capacities at how distant technical folks felt from the business in which they operated. They were, oh, yeah, we just maintain a database. We don't really know too much about what the company does. It often surprised me because I thought, well, they pay us, so we really should know something. And in the end, it was, I realized it was because they had no data. No one was coming to them and saying, hey, that release you just did has had this impact.

[00:16:23] And they felt very distant from the value. And so surfacing that data, the reaction that I got several times at re:Invent was that by surfacing this information, by coalescing those technical signals, you get that all-important data that allows really great decision making to happen autonomously. And I think that's the key. Quick thank you to the sponsor supporting all of the shows on the Tech Talks network. And this month, I've partnered with Alcor.

[00:16:48] And if expanding engineering operations beyond your home market can be overwhelming, you're not alone. Because if you've ever wrestled with local laws, slow response times, and partners who treat each country as separate rather than part of a wider strategy, you might want to check out Alcor. They approach expansion completely differently. They specialize in building tech teams across Eastern Europe and Latin America. And they combine employer of record services with recruiting.

[00:17:17] So you get one singular coordinated process. They help you choose the right jurisdiction based on your needs, run proper evaluation of candidates and onboard teams quickly. And their model is also refreshingly transparent. Most of your contribution will go straight to your engineers and their fee shrinks as your team grows. And there is no cost to exit if you move the team in-house at a later date.

[00:17:45] And I think that kind of clarity is why so many high growth companies in Silicon Valley are working with them right now. So you can find out more details at Alcor.com slash podcast or simply use the link in the show notes. And we've done incredibly well making it 20 minutes into a tech podcast without really mentioning AI. But we'll fix that right away because Olly, which is your AI observability agent, that was a big part of the broader story that you shared in Vegas as well,

[00:18:12] especially around letting non-technical users ask questions around their data. So how does that change the dynamic between engineering teams and business stakeholders who notoriously have a long history of not fully understanding each other? Yeah, it's a fun one. So I've often, in the context of this maturity model, just to sort of build on that, we think of AI as an accelerator through that model.

[00:18:39] Often when you're trying to coalesce technical signals into something business impacting, you need something quite clever. We have a syntax called DataPrime, a full query engine and language for very, very complex data analyses, supporting things like joins and enrichment transformations, aggregations, counts, and averages and all sorts. It's basically SQL but optimized towards observability. And it takes people time to master that.

[00:19:03] So Olly is a multi-agent tool, a full platform that sits separately from Coralogix. It integrates into the Coralogix account. It's very aware of various Coralogix concepts, but it itself is a completely separate UI. The beauty of it is a few things. One is agency. It's not an assistant. An assistant will do things like, you know, translate this natural language into a query for me. We have those assistants baked into Coralogix already. But Olly is a full agent.

[00:19:33] So we say, hey, something went wrong with my service. Tell me what. And it will go and it will dig through and it will figure it out. It will query across the full gamut of data, not just logs, metrics, and traces, not just profiles, but it will also read things like dashboards. It will look at the alerts that are triggering. It will look at the flow of data that's actually coming in and so on. And it will find those all-important patterns and give you an assessment. Just to give you an idea of Olly, as a human being, I'm a skeptic by nature. There's a famous marketing book, Crossing the Chasm,

[00:20:03] where they talk about, you know, the early majority and the late majority and so on. I am, by characteristic, late majority. I like to mess around with tech, but I don't like to invest in tech until I've seen it really, really put through its paces. So when AI came about, I was naturally quite skeptical, especially because of the big, big claims that have been made around it and so on. So what I did was I made an assault course for Olly. I was like, okay, let's put this thing through its paces. I made a Coralogix account.

[00:20:29] I populated it with three different problem use cases across multiple data types. The first was a log indicating that a SQL injection attempt is happening. And that is a query that's coming from a Postgres database. And it shows the telltale signs of somebody actually trying to perform SQL injection. The second is an error that contains the same ID within the query.

[00:20:56] So the ID itself is not in its own field on the log. It is within the query itself. And there's another error log from an application that has that ID in there as well. So there's no easy correlation. And finally, there's a third error that happens and is related to it, but it does not share the IDs. And it's not ingested as an error. It's ingested as an info-level log about a timeout. So it's just more of a log that something has happened rather than something significant. And it says this query timed out.

[00:21:26] And it does not share the ID. It's even a different application. And I wanted to see what it got. And I asked it, hey, there's something wrong with my database. What's going on? And it told me someone is attempting a SQL injection attack, which is causing this error. These things share the ID. And by the way, you have query timeouts. This likely has a customer impact. And here are a series of remediations, including like parameter sanitization and building prepared queries. And I'd even put some telltale signs that this was a Java application. But I didn't put any Java notation in there.
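The remediation Olly recommends here, parameterized or prepared statements, can be illustrated with Python's stdlib sqlite3 module. The transcript's actual example is a Java/Spring Boot application; this is just a language-neutral sketch of the idea.

```python
# Sketch of the prepared-statement remediation using stdlib sqlite3;
# the episode's real example is Java/Spring Boot, shown here in Python.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

malicious = "1 OR 1=1"  # classic injection payload

# Vulnerable: the payload is spliced into the SQL text, so the WHERE
# clause becomes "id = 1 OR 1=1" and matches every row in the table.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE id = " + malicious
).fetchall()

# Safe: the driver binds the payload as a single value, so it cannot
# alter the query's structure; no row has that literal id.
safe = conn.execute(
    "SELECT name FROM users WHERE id = ?", (malicious,)
).fetchall()

print(vulnerable)  # [('alice',)]
print(safe)        # []
```

The structural point is the same one Olly makes: a bound parameter can never change what the query does, only what value it compares against.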

[00:21:55] I just put some class names and things that looked a bit like Java. And it picked it up and was like, hey, yeah, you should use this Spring Boot library for parameter sanitization, building prepared statements, because you are currently vulnerable to a SQL injection attack. And here's the downstream impact. And here's our recommendations. And at that point, I was like, okay, this thing's quite cool. So when you think about it, anyone can ask that question. Anyone can ask, hey, there's something wrong with my database. And because of the UI, because of the AI native experience of Olly,

[00:22:26] you open it up, it looks a bit like ChatGPT. It's just a box. So you're not overwhelmed by this massive array of metrics. You're not overwhelmed by this extremely complex, specialized UI that's designed for developers. You have a box and you just ask the questions that you want to ask in plain English and let Olly

build the visualizations and the explanations that you need to understand what the problem is. Incredibly cool. Something that stands out here, you've ticked so many boxes for business leaders here. We're talking about ROI, measurable business impact.

[00:22:54] And you also emphasize that observability can even become a competitive advantage, not just an operational safety net of sorts. Are there any examples or conversations that reinforce that idea for you? I mean, yeah, it's hard to choose. I would say that that mindset shift is now happening. We've been talking about this for about a year now publicly. Internally, we've been talking about it a lot, the potential for it.

[00:23:24] But I would say that mindset shift is starting to happen now. There was a really fun conversation we had at re:Invent. Someone said to me, observability is very expensive. And I said, well, what do you use it for? And he said, well, we ingest logs and we query them. I said, yeah, then it's a really, really expensive document database. But if you think about it more in terms of, okay, I get all of my tracing, my metrics, my logs. I also get full alert routing. I get a service catalog.

[00:23:54] I get the visualization of the health of all my services, independent services. I get to track customer impact. I get to track revenue impact all in one single dashboard. I can correlate on all that and I can put alerts and signals through my entire business to change decision making. It's a bit of a steal. So why is it a competitive advantage? Because the health of our technical systems is now so, so closely linked with the health of our business.

[00:24:21] Classic example, you know, even let's take a very, very non-technical company: Marks & Spencer here in the UK. M&S had a significant cybersecurity event that impacted most of their technical systems. This is a company that sells food from a shop, right? It's an old school company. You know, it's 100 years old. The core of its business is not strictly technology.

[00:24:45] And yet, a significant cybersecurity incident cost them millions and is still being felt today. So the interwoven nature of our technical health and our business's ability to function is so clear now that it only makes sense that something that better enables your engineering teams, better enables your technical performance, is a competitive advantage, is a differentiator. The better your tech performs, the better your engineering teams and the better your business

[00:25:13] teams are enabled to do the things they need to do. And when you start to close that gap and you get business people and technical people working much more closely together, sharing data with one another and getting around, collaborating around these dashboards that show the full gamut of a change or an error or whatever, that's when you start to spark those really, really compelling things. This was a discovery of Google in like 2016, right? That the companies that innovate, the companies that learn are the ones that get ahead. And a lot of that innovation and learning went into the technical side.

[00:25:42] But there is no reason why the technical ground that's been made cannot then be applied to the business and can't then translate into a really meaningful, really significant competitive advantage. And when you look back at the event now, after delivering that session, speaking with teams, techies, business leaders, and so many different people from around the world throughout the event, what would you say to a leader that might be listening to our conversation today who

[00:26:08] still might see observability as more of an engineering expense rather than something tied directly to growth and decision making and competitive advantage, et cetera? What do you say to that person listening? It's a great question. What I would say is that that is still a relatively comfortable position to hold. Yeah. But those comfortable positions often slip away from us before we even realize.

[00:26:36] I mean, artificial intelligence has definitely accelerated this, by the way. But more and more, businesses have been using observability data and using more and more technical data in their high-level decision making over time. It's been happening over the past five years anyway. You know, more and more, there's a reason why a five-person company can generate terabytes of data every single day. The data is a differentiator. So, the idea that your telemetry data, that the functioning of your systems, the behavior

[00:27:03] of your users, the performance of your systems in general, the idea that that is not an advantage in and of itself, take observability out of it for a second. Just that telemetry data alone is valuable, does have a business impact. And so the next question you've got is, how do I get the best effective access to that data? And that's an observability system. So to me, it's not a complex technical argument. It's a common sense argument from economics about bleeding the most that we can out of the data that we ingest.

[00:27:32] So yeah, what I would say is that the safety of that position is quickly coming to an end. I think that is a powerful moment to end on. But before I let you go, I'm going to throw a slight curveball in your direction now. Because I know you travel a lot delivering this message at events around the world, etc. So I'm going to assume here you are armed with a big pair of noise cancelling headphones or and or a few books. Now, one of the traditions we have at the end of the podcast is I ask my guests to leave

[00:28:01] either a book that they've enjoyed this year that they'd recommend and we can get people listening to check out or a song for their Spotify playlist. And each guest will add one of those. All I'm going to ask is, what are you going to leave us with today? And why? Could I do one of each? Which yes, why not? Go for it. Love it. So a really great book I read this year. Oh, I'm a big, big fan of literature and philosophy. So there's a lot of stuff to come to mind. I'll pick something a bit more techie.

[00:28:28] I read a brilliant book called Developer Gemini, which is a view on how the engineering role is going to change over the years and how engineers are going to have to adapt their ways of thinking in terms of how they interact with organizations rather than being part of them. So that was really fascinating to have a think about the changing nature of the role, especially in the wake of everything that's going on around AI. From a musical perspective, I went and saw Oasis this year in Wembley. Oh.

[00:28:57] So, which was like unbelievable. So I would probably have to say Champagne Supernova by Oasis. In fact, pretty much any Oasis album except "The Shock of the Lightning" will make your day. Oh, what a great choice. I will get both of those added. Champagne Supernova, what a tune that is. I've got to press you a little bit here though, because you've kind of woken my curiosity on the philosophy side of things. Can you remember the name of that book as well?

[00:29:27] We'll get that added. Yeah. So this particular example that comes to mind is literature. It's Why Orwell Matters by Christopher Hitchens. So it was an assessment of the impact of George Orwell's literature when there were various different, you know, in Europe at the time, there was kind of a battle between whether you're a communist or whether you're like hard into the capitalism. There was obviously the fascist movement happening in Spain, France, Italy.

[00:29:53] And Orwell was one of the first to come out and say, no, these are all forms of extremism that all start to look much like one another. And it's like a meta assessment of things like Homage to Catalonia and Coming Up for Air and 1984 and Animal Farm and the impact that they had across various different groups and still has today. I'm a big fan of George Orwell actually. So yeah, that's the one that came to mind for me. Oh, fantastic. We'll get that added as well. And we have covered so much around all things Coralogix today.

[00:30:22] So anyone listening wanting to find a little bit more information on anything we talked about there, connect with you or your team or just keep up to speed with the latest developments. Where would you like to point them? Yeah. So Coralogix.com, C-O-R-A-L-O-G-I-X.com is the best bet. That's where we put all of our information. We have a blog on there that's full of really interesting topics. A lot of them written by my teams. Then we have content on YouTube. We push our messages on LinkedIn and Facebook and Instagram.

[00:30:49] We have accounts on all of these different platforms. So if you just Google Coralogix, you'll find us. Yeah. Awesome. I'll have links to everything there. And so much I love about our conversations today and some of the insights that you've shared with everyone. And the big standout is this real-time bridge between IT and business stakeholders and providing that unrivaled visibility into data, systems, performance.

[00:31:15] And most importantly for many people listening, business impact, ROI, and et cetera. So much gold in that. I'd urge everyone listening to check that out. Get back to me. Let me know what your thoughts are. But more than anything, thank you for starting this conversation today. Thank you very much, Neil. I had a really great time. Huge thanks to Chris for such an honest and thoughtful discussion there.

[00:31:38] And I genuinely enjoyed how he framed observability as a way to connect engineering efforts directly to customer and revenue impact. And how tools like AI agents can actually open that data up to far more people across the organization, not just an IT or a tech problem. It can actually make a real impact inside a business. So I'll have links in the show notes so you can explore Coralogix, keep up with Chris's work, and dig deeper into some of the ideas we covered today.

[00:32:07] And I'd also love to hear your thoughts. How well do technical signals and business decisions connect where you work today? Any successes? Anything you got wrong? And before I sign off, another quick reminder that I will be back on the road this year speaking and attending events across different regions around the world. So if you're heading to any of the same conferences as me, I think my first one is Dynatrace in Vegas at the end of January.

[00:32:34] I'd love to meet you face to face and continue some of these conversations beyond the podcast. So pop by techtalksnetwork.com, you'll find all the information there. But that's it for today. So thank you for listening as always. And I'll speak with you all again bright and early tomorrow. Bye for now.