3523: From Chaos to Clarity, Valiantys on Making AI Work for Developers
Tech Talks Daily · December 18, 2025
3523
30:12 · 21.86 MB

3523: From Chaos to Clarity, Valiantys on Making AI Work for Developers

How much value do your developers actually get to deliver in a typical week, and how much of their time is quietly lost to meetings, context hunting, and process drag?

I'm joined by Phil Heijkoop, Global Practice Head of Developer Experience at Valiantys, for a conversation that cuts through the hype surrounding AI and asks a harder question about why so many engineering teams still struggle to see meaningful returns.

Phil argues that most organizations are only unlocking a small fraction of a developer's true contribution, not because of a lack of talent, but because process drag slowly squeezes out deep, focused work. AI, he explains, does not fix this by default. Without the right foundations in place, it simply accelerates the wrong work at scale.

We explore the long shadow cast by the "move fast and break things" mindset and why that philosophy becomes risky inside regulated, enterprise environments where resilience and trust matter more than speed alone. Phil shares what he sees when organizations chase shiny new tooling while ignoring technical debt, unclear standards, and fragile workflows.

From protecting uninterrupted time for deep work to automating manual friction points and setting shared guardrails, he outlines how teams can realistically unlock three to five times more output before AI even enters the picture. Only then, he says, does AI act as a multiplier rather than a source of chaos.

The conversation also digs into developer experience as a business lever, not a perk, and why leadership clarity, cultural trust, and consistent standards matter as much as tooling choices. We discuss the growing risks in the software supply chain, the sustainability of open source dependencies, and what recent high-profile retirements signal for enterprise teams that depend on them.

If AI is accelerating your organization in the wrong direction, what foundational changes would you need to make today to ensure it amplifies value instead of friction, and how honest are you willing to be about what is really slowing your teams down?

Useful Links

Tech Talks Daily is sponsored by Denodo

[00:00:04] Welcome back to the Tech Talks Daily podcast and that is it for me. No more events this year, not until Dynatrace in January. If you are attending that in Vegas, let me know. We can hook up and have a chat, a hot coffee or cold beer. But having attended so many events recently, I've walked away swearing I won't pick any more swag up but I still walk away with t-shirts, socks and gadgets and even lightsabers that I didn't need.

[00:00:33] And today's conversation will start exactly there before moving into something far more meaningful because my guest is going to be joining me from Valiantys and he's a voice who cuts through the noise around AI, developer productivity and modern software delivery. His name's Phil, he spent years helping large enterprises figure out why their teams feel busy all day long but still struggle to ship meaningful outcomes.

[00:00:58] And we will talk about process drag, deep work, technical debt and why chasing a few shiny new tools rarely fixes broken foundations. Yep, we'll both get on our soapbox and if you care about getting real value from your developers, protecting their focus and making AI work for humans rather than against them, let me tell you, you're in the right place and we'll also have a bit of fun along the way.

[00:01:24] Before I bring today's guest on, a quick thank you to my friends over at Denodo who are passionate about logical data management for AI success. Because let's be honest, AI is evolving fast but the elephant in the room is initiatives are still failing. Not because the models aren't good but because the data foundation isn't ready. That's why organisations are increasingly turning to Denodo and logical data management.

[00:01:53] Denodo unifies your data across every cloud and every system without the need for massive replication. So you can power trustworthy AI, accelerate lake house optimisation and build data products that make self-service real for every team. So CIOs, architects, business leaders each get exactly what they need and when they need it. And Denodo's partners also help you get value even faster.

[00:02:20] So if you're ready to make AI actually work, visit denodo.com and put logical data management to work today. But enough from me. Let me officially introduce you to Phil right now. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Sure. So my name is Phil Heijkoop. I run our DevEx and SDLC practice here at Valiantys, conveniently on the roll-up.

[00:02:50] I help most of the customers in the Atlassian ecosystem, especially the enterprise, like the large Fortune 500 types, modernize the way that they're approaching their software development and product development. Everything from planning to the actual development and writing of code through the operational and deployment and sunsetting of products. Awesome. Well, it's a pleasure to have you join me today in particular because one of the things I love doing on here is demystifying complex technologies

[00:03:15] and also looking way beyond the hype and look at what teams are struggling with, the best ways around it, and finally securing some of that ROI from the AI that we're seeing everywhere at the moment. And when I was doing a little research on you, I mean, you've argued that most teams only unlock a fraction of a developer's real value because of process drag. And that phrase there is something I'm hearing more and more. So how do you diagnose that hidden tax inside an enterprise?

[00:03:43] And what does a team look like when deep work is protected rather than slowly squeezed out? So there's a lot in there, so I'll try and tackle that. Yeah. The easy part is diagnosing it, right? A lot of people know things are not working ideally. You've already mentioned ROI. Most of the time we'll come in and I use the MRI metaphor: we're going to look at what's wrong. I don't know what we can fix or what it looks like. Point me to the body part that hurts, right?

[00:04:12] But like, let's have a look at it. And when you do this enough and have enough conversations, you kind of get a feel for certain patterns. And one that we see a lot is either people aren't able to do work in the sense that they're dealing with tickets a lot. Like there's technical debt. We already talked about that a little bit. They just have too many other things that are demanding their attention that they can't put their head down and focus on what drives value for an organization. Some of that's also just, they have to look for things.

[00:04:38] One of the things I've always told my three-year-old is like, when you ask for help, make it easy for people to help you. So don't just say, I need help. Let's say, I've tried. This doesn't work. Not having a ton of success with the three-year-old adopting this just yet, but like the same principle applies to a company, right? Like if you say, I want you to work on my P1 tasks, make it easy for them to identify what are the P1 tasks? What's the context? What's the strategy? What's like, how do I make decisions within my working context that align to what you want to do as opposed to having to constantly ask for help?

[00:05:08] And I think that that's an obligation from an organizational standpoint because people want to do good work. Like I genuinely believe that, but they often have to go through so many process and technical hoops before they can actually do that. And some of it I think is, is a function of not quite setting the foundation correctly before you try and build upon it. And so you're constantly going back and backfilling things, whether it's like automating tools,

[00:05:33] infrastructure as code, like a lot of conversations happen to be around AI, but there is so much fundamental stuff that still needs to be done that you can see much more ROI from than managing the governance and enablement of your AI agents and trying to orchestrate 20 of those. Like let's, let's automate deployment. Like let's make that easier first. Yeah, completely agree. And before we started recording today, I was telling you about AWS last week, reInvent,

[00:06:00] and they took all the journalists to the middle of nowhere and blew up a server marked technical debt to kind of prove that, hey, we're getting rid of technical debt, agentic AI will help you migrate from your legacy infrastructure in two hours, obviously trying to get it all onto Amazon's AWS servers. Yeah, you didn't see that one coming. But technical debt is still a massive thing from what you're seeing here. When there's this obsession with all things AI and agentic AI and rushing to the future,

[00:06:28] is technical debt still holding people back from what you're seeing here? Absolutely. I think if anything, it's going to get worse. I think there's a couple of pieces that people don't realize when they talk about it. I think technical debt is a class of problems. And so you can look at code debt, like debt decisions making that no longer fit for purpose on a code base basis, like within a module. You can look at it from an architectural debt perspective. So like, Hey, I built this around certain non-functional requirements. They are no longer valid because we outgrew them.

[00:06:58] That's technically technical debt because you have to constantly incur the interest payments and costs and associate with that. And I think that there's a certain amount of technical debt that's normal, right? Like you build a thing and then you move on to the next thing. And over time, the context changes that that thing needs to operate in. And so you'll have to revisit it. That's the same way with most things. That's why the debt metaphor is so good. But the faster we can build things, the faster we are incurring that debt.

[00:07:26] And again, like some level of debt is healthy because you're able to multitask multiple things, but a lot of it is also a subjective thing, right? You're going to come in and you have to figure out what is the optimal balance between various trade-offs. Are we going to assume much higher demand, much higher load, much heavier infrastructure needs? Great. It costs more time, but it'll last longer, right? That's kind of the balance you're looking at.

[00:07:53] But some of these things are also like stylistic decisions, right? Like, are we going with a serverless approach, or are we spinning up our own servers? Kubernetes or not? For all these things, there is no right answer. And I think that's where a lot of these conversations end up: you're going to get an opinionated approach, but if your opinionated approach is not consistent, then you're going to have different opinions that clash, different systems that clash. And so the faster you can build these, you're going to remediate old debt and incur new debt.

[00:08:21] And it doesn't mean if I've stretched the metaphor a little bit, that the new debt is at a lower interest rate. You're going to have to maintain all that new code, all that new infrastructure. And so I think it's going to get worse before people realize that in most cases, frankly, less is more probably. I think they need to spend more time doing code hygiene and making sure that their infrastructure is sound and not chasing the newest functionality, but figuring how do I make my core value prop more robust? I think that that's the direction we need to go in.

[00:08:50] It's not sexy. A lot of executive mandates push you in the wrong direction. But most people should still be thinking, I'm building this thing to last. It needs to be still valid five years from now. Because if you look at a lot of code bases, not a lot of things last five years. You are ripping out lines of code and modularity and functionality much faster. And that, I think, is a symptom of either the decision-making wasn't sound, we didn't spend long enough thinking about it, or maybe the environment changed faster than we thought.

[00:09:19] But some of that we could have anticipated. So I think that's where we may have to pull back and say, look, maybe we should spend a little more time thinking about this and a little less time building everything. Because maybe not everything needs to be built. Yeah. And I think another possible cause is the mantra of move fast and break things has been shaping software culture for, what, 20 years or more now. And many enterprises, of course, they don't have the luxury of breaking the wrong thing.

[00:09:47] So how do you reconcile that desire for velocity with that need for resilience in regulated or high-risk environments and the non-sexy stuff, as you mentioned there? I think there's a couple of things to look at. One is, if you think things through from a value to the customer perspective, what's the ROI? What's the trade-off there? Very quickly, you'll start to realize that a lot of stuff is nice to have. It's not need to have. And nice to have stuff, usually, you can ignore safely for a little while.

[00:10:16] You're not focused on the urgent stuff. And I think the other thing is that I think engineers, especially engineering managers and leadership, have a responsibility to push back on certain decisions and say, look, the downside for a lot of things going wrong has such a disproportionate negative impact on our business, on our reputation, on our brand, than any potential upswing. AWS, when their DNS crashes in New York City, we all suffer.

[00:10:44] Any optimization they make to their DNS is not anywhere near as costly to AWS. And I think that that principle holds. I think you used to hear this a lot. You need two positive things for every negative thing. I think you need significantly more when it comes to software to mitigate a lot of the negative stuff. And for enterprise in particular, they have an established brand. They have established customer base. They have a trust relationship they need to maintain.

[00:11:10] So if a bank says, we want the fastest, the newest, and everything else, I'm like, I don't care about that. I want to access my money. I want to know it's safe. I want to know my information is safe. I don't care about all these new fancy things. I need you to do your core value prop to me. And I think that that's where a lot of these things, there's too much chasing the shiny new toy. And a lot of people realize, like, you got to focus on that thing that you're known for,

[00:11:35] because whichever bank you're using, like, they're not known for the fastest app. But I get concerned, for example, if an app doesn't have multi-factor authentication, like basic security, resilience, like most enterprise things, they don't need 800 widgets and everything else. They just need to do the right things. And this is not new, right? Multi-factor has been around for a while. Clovers have been around for a while. These are what I would consider core to the craft of software building nowadays. Yeah.

[00:12:03] And this is one of the reasons businesses don't find that elusive ROI when they're constantly distracted by chasing that shiny new thing. And I was reading before you joined me today that you described DevEx as the mechanism that translates developer time into real organizational value. Again, something we don't talk about enough. So where do you see enterprises misunderstanding DevEx? I'm sure you've come across a few myths and misconceptions along the way.

And how should business leaders be rethinking its place in the product lifecycle? I've got a feeling this is going to be a topic close to your heart. It absolutely is. And I'll try and leave the soapbox behind when I talk about this. I think there's a couple of key points. One is because developers touch every component of software, and every company now is effectively a software company to a greater or lesser extent. DevEx is not just the individual developer's experience when it comes to like, hey, do I

[00:13:00] have free lunch or any of the, you know, the Googleplex type of benefits? A lot of it is just a, can I do good work? Am I constantly interrupted? Am I dealing with systems that are legacy? And so I have to do a lot of manual work or out of my comfort zone stuff. But if you think about the environment, so that's process, that's tooling, that's culture that you've put your developers in that dictates how much they can do for you for the business. Right? So the more blockers that are there that you haven't removed, the more you're going to

[00:13:30] see them not able to really maximize their impact. And despite the 996 becoming sexy again, I think most people are just going to put eight hour days in and that's okay. But if you look at it, eight hours should be plenty if you can allocate them correctly. But if I have, let's say six hours of usable work to do on a day-to-day basis that I can allocate, if I can take three hours and do deep work, I can do significantly more than six hours of fractured work.

[00:14:00] And I think that that's where a lot of the ROI comes from is like, do you protect that? Do you track it? Do you make sure you have your own little feedback processes internally to make sure that if something happens, and none of this is malicious, right? It's just the entropy of the world doing things. Like you have a new engineering manager. They want to have a weekly standup with everybody. Great, that's normal, that's fine. But if they do that at 10:30 AM, you've broken the morning in half. Now I can't have a morning session of deep work.

Maybe they didn't do that. Let's move that meeting, right? Like this is all little stuff. The amount of this that compounds, though, is insane. And some of this is also just, does every product need a unique workflow? No, probably not. There are going to be exceptions, but there are so many things where you can just do the basics right that will enable developers to do the creativity, the really good things that they're known for. And that's where I think a lot of the magic, a lot of the, frankly, the beauty

[00:14:57] of a lot of this, because one of the things I love about development is just the fact that you can, you create value out of thin air, right? You're like literally taking thoughts and putting something out in the world. And so I think it's unfortunate that a lot of development environments are steeped in this when there's no one who wins by not addressing this. It's just an unknown problem. And it's just hard for people to want to do that introspection. So that's why a lot of it ends up being a, I know it's broken. Please help me. Please diagnose it. And that's what we do.

[00:15:27] We'll benchmark it. We can give you fairly honest assessments and tell you industry-wide company your size. This is kind of the range most people sit. And here's where you are on each of these metrics. And that gives us a, here's how we can focus. Here's what we can do. And there's so much value in what you just said there. And I think people listening around the world in every corner of the world, where what you said will resonate with them. And you mentioned meetings there. Combine that with context hunting and status signaling.

[00:15:56] They still drain huge amounts of developer focus. Are there any other practical interventions that you've seen unlock a step change in flow without relying on the big restructuring? You know, keeping it simple, making small little changes, marginal gains, etc. Anything else that you've seen that works particularly well? Because everything you mentioned, I think we've seen in every organization around the world. I'm just curious if you've seen any positive ways of combating that.

[00:16:25] I think there's two things that have the highest benefit relative to how much effort they are. These are the low-hanging fruits generally. One, I think is if everybody who has a say in what you're supposed to be doing from a building perspective maintenance, like someone who sets your KPIs, whether it's the CTO, the CEO, the VP of engineering, it doesn't really matter. But if that person signals and radiates out, this is what I need, and this is what good looks

[00:16:51] like in a way that is understood by everybody within their context, right? Engineering manager has different requirements than a junior engineer, and a DevOps engineer, and SRE has a different set there. But if we can agree on what good looks like, and you'll be surprised how many people don't fully understand how to break high level down into tacticals, then everybody suddenly knows how they can contribute to what good looks like. That, I think, is the easiest. It's a people problem, right? It's a translation thing.

[00:17:18] But that, I think, unlocks the most because people, like I said, people want to do good work. It's just hard because I have to figure out, like, what's our priority this week? Are we changing? Are we chasing something new this quarter? Is it still this product? Is it still this user? Like, make that easy, make that consistent, and you'll see a lot of these things compound really quickly. And then I think the other piece is a lot of, there's a balance that needs to be struck, but people need to have a little agency in how they work. And I don't mean the remote work versus in the office thing.

I like the office. I'm weird like that. But I mean things like there's governance that I think needs to exist within organizations, right? There's a, you cannot do things outside of this particular box, but there needs to be enough freedom in that box to be able to say, I like Chrome, I like Firefox, I like this IDE, I like this thing. How people deliver value, how they solve certain problems, how they address them: that shouldn't be micromanaged. And I think that's also one of those things.

[00:18:15] Like, when you loosen that up and you signal trust to your teams, they generally repay it several fold over. And if we do look at the shiny high level stuff that we see on stages, inside keynotes, at tech conferences, AI always promises automation across the entire development lifecycle. But for the techies out there, if you look under the hood, many teams are still wrestling with fragmentation, unclear standards, poor knowledge flows, et cetera.

[00:18:44] So how do you help organizations in your work create those shared guardrails so that AI enhances quality instead of just amplifying the chaos? One of the things we do is we point out that, like, if there's ambiguity in any of these things, whether it's your definition of ready or your non-functional requirements or just your code standards, AI will figure that out real fast, right? There's a garbage in, garbage out piece. But it's also just a, like, if you ask it for something and it delivers two documents,

[00:19:14] you realize your developers probably have the same problem. They just only found one of them, right? And so what we generally do is we'll try and smoke test how your process works. And we'll try and automate as much of it as we can, because you'll see where things get jammed. And it's a really good signal because you can say, well, if the AI gets stuck here, you best believe your humans are getting stuck there. They just spend longer or they don't force their way through it or they have a way through it, right? The typical kind of walking path that cuts corners type of thing.

[00:19:41] And so having those conversations with people helps. You understand the gaps. Most people will know it. They just don't have an avenue to, like, give that feedback to the organization. Communities of practice are really powerful for this. And then once you have all those things in mind, you're going to fix it. And you fix it for both, right? Humans and agents, in many ways, occupy the same space in the sense that they need to translate requirements, non-functional and functional, user stories, epics, and everything else. They translate that into working code that's in production.

[00:20:11] And so the ambiguity gets removed, benefits everybody. Clarity becomes a thing. You can say, hey, for all products in this particular category, here's the list of non-functional requirements. Here's the list of functional requirements. Here are security standards that we're going to do. And here's what definition of done is for each step in the PDLC. Because frankly, I think a lot of people would benefit just from shifting some of these things left, right? Create ownership where people actually do security testing before things go to production.

[00:20:38] Because that still blows my mind every now and then, that they're just like, yeah, we'll sneak that in after deploy. But simply put, if you give people that power, they will happily use it. But you just got to make it easy for them. So I think that when you do those things and you set it up, then AI gets accelerated. Absolutely. People want to do that. Automate the menial stuff. But I need to have a way that I can put it unambiguously to the AI agent. And humans suffer with ambiguity the same way.

They just don't balk at it the same way. They'll be like, all right, fine, I'll figure it out. But that's not what you want. You want them to be able to give you good work in the way that you intended it, right? So there's a communication problem. And so we benchmark this across the org. We identify where there's leakage, where there's duplication, where's all the other stuff. And then basically, it's a to-do list of things that you start to address. And then it's very flow, right? Goldratt's Theory of Constraints: you find the limiting step. You address that first.

[00:21:35] And suddenly, you start to see progressive changes. And we always look at product cycle time as one of the big North Star metrics because everybody has a hand in it. And you can start to see that improve. And you see it improve with fewer escaped defects and higher quality and fewer support tickets linked to each release. And this is not a sprint. This is a marathon. But this is generally a rising line that we see.

[00:22:00] And again, when I was doing a little research on you, one of the things that stood out is you've been very vocal about software supply chain risks, especially around open source projects that lack sustainable support. So how should enterprises be maybe rethinking their responsibility towards the OSS they depend on? And are there any lessons they should be taking from the Ingress NGINX retirement timeline that has also been in our news feeds recently? Any big takeaways there? All right.

[00:22:30] Now I've got to bring the soapbox out. I know I'd get you on there sooner or later. Yeah. The free and open web is something that everybody's building up on, right? Like a lot of things are free and they are maintained by people that do it out of passion and out of love. And this can be from libraries, from tools to languages, right? Like not a lot of people are paying for the upgrade of Python and TypeScript and everything else. Oh, Microsoft a little bit now. But like no one actually paid for the development of that. These are things that were built organically.

[00:23:00] But you rely on them to build your product. And so you can generally pull what's called an SBOM or a DBoM, right? Your bill of materials. And you can go through it. And there are a lot of tools out there. The U.S. government has a couple of good ones that can just basically go through that list. And again, automation, real easy. And it'll flag everything. And you can look at it from a, hey, when was the last time this was released? If something's gone a year between releases, you can probably assume security patches haven't happened.

[00:23:29] And you want to look at installed versus current too, right? Like what is the latest version of a library, and what's the one you're using? Because there might be a delta there. And a lot of these things force discussions in engineering teams. We talked about technical debt. This is infrastructure and investment debt, or supply chain debt, really, in most cases. If you audit this, and you should do this on a regular basis, it'll force the conversation of, do we really need 800 npm packages, for example? Maybe not.
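The audit loop Phil describes, pull the bill of materials, compare installed against latest, and check release freshness, can be sketched roughly like this. The SBOM structure follows the general CycloneDX shape, but the component data and the "latest release" lookup here are illustrative placeholders; a real audit would query a package registry or a scanner for them.

```python
from datetime import datetime, timedelta

def audit_sbom(sbom, latest_releases, now, max_age_days=365):
    """Flag components that are behind the latest version or stale.

    sbom: dict with a "components" list of {"name", "version"} entries.
    latest_releases: {name: (latest_version, last_release_date)} lookup,
    standing in for a registry query.
    """
    findings = []
    for comp in sbom.get("components", []):
        name, version = comp["name"], comp["version"]
        latest_version, last_release = latest_releases.get(name, (None, None))
        # The delta Phil mentions: installed version vs. latest available.
        if latest_version and latest_version != version:
            findings.append(f"{name}: {version} installed, {latest_version} available")
        # The freshness check: a year without a release likely means no patches.
        if last_release and (now - last_release) > timedelta(days=max_age_days):
            findings.append(f"{name}: no release in over {max_age_days} days")
    return findings

# Illustrative data, not real package metadata.
sbom = {"components": [
    {"name": "left-pad", "version": "1.2.0"},
    {"name": "ingress-nginx", "version": "1.9.0"},
]}
latest = {
    "left-pad": ("1.3.0", datetime(2018, 4, 1)),
    "ingress-nginx": ("1.9.0", datetime(2025, 3, 1)),
}
for finding in audit_sbom(sbom, latest, now=datetime(2025, 12, 1)):
    print(finding)
```

Run regularly, the output becomes exactly the conversation starter he suggests: a short list of packages that are behind, stale, or both.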

[00:23:56] Some of those things you can fold into your own code base, and that adds resilience. It increases your maintenance requirement, but oftentimes that's not as bad as a lot of people think. And then when you look at it, a lot of these tools will surface this. npm, for example, does this really well. It's constantly asking for support for each of these packages that you pull in. They'll say this package is looking for funding. And you can realize internally, like what does supporting look like? What is that? Like who else uses it? Is this thing something that's basically going to die?

Or is this something we really, really rely on? Ingress NGINX is a really good example. Like if you look at the SRE spaces on Reddit or somewhere else, they're like, this is going to take us nine months just to copy the runbook, just to figure out what a migration path would look like to one of the other options. But it was maintained by two people for free because they loved it. And they kept doing it out of a sense of duty. And they've been asking for support for years and nobody gave it to them. So at some point, yeah, the straw is going to break the camel's back in that case.

[00:25:23] And I think that it's a really poignant example because a lot of people will actually be able to measure the impact of this, right? The opportunity cost for their DevOps org and everybody else to say, I need to take this off the roadmap, I need to replace this, I need to spend this much time here. That, for a single enterprise alone, is probably already more than it would have cost to maintain this package. And if you look at that company-wide, because this was a popular package, if you look at that industry-wide, the ROI numbers are staggering.

[00:25:23] But it's hard to avoid something that you're not certain is going to happen. So it becomes an engineering and a risk mitigation conversation. So you bring this to your CISO as opposed to your CTO, then they should take this seriously. I don't want to say might. I think they should take this seriously. They own that particular piece. And when your software supply chain goes down, because we just talked about packages. We haven't talked about malware being injected because that's a thing that happens fairly frequently. We've talked about supply chain injections.

[00:25:53] So it's not even malware being injected into the package itself, but it's someone intercepting it. There's so many different risks attached to it. And I think it's only going to get worse with AI just randomly pulling stuff in without an audit step. So it's important. And I'm going to get off my soapbox now, but it's absolutely something. I think there's going to be a situation where somebody trips on this in such a way that you're going to have a... When Maersk had that ransomware attack and they were out of business for a while,

[00:26:21] or JLR recently couldn't make cars for a couple of weeks. This is operational, existential potential problems for an organization. So I think people should start to take this a little more seriously. Yeah. Yeah. And your passion for this topic really shines through. So if we were to look ahead into 2026, what kind of development culture do you think will thrive in this increasingly AI-driven world? Agentic AI full of agents that are talking to each other, etc.

[00:26:51] And any advice that you'd give to leaders listening who want to build teams capable of delivering three to five times the output without burning people out? I think a lot of it is cultural, right? So if you foster a culture of experimentation, people will try these tools. They want to. I think developers have a tendency to want to stay on the bleeding edge as much as possible. So you have to foster that culturally. I think a lot of it becomes a don't try and be prescriptive about it.

[00:27:18] If your development team says, I can't use AI here, believe them. I think there's also absolutely a point where you say, what does responsible use of this look like? What does accountable use of this look like? Because you have to be able to say, I can't hold my agents accountable. I can fire them, but that doesn't mean anything. But I need to be able to audit a lot of this and look at it and figure out what's going on. So you need guardrails in place. And then the last thing I'd say is a lot of it centers on trust. Really, you got to trust your people. Give them the space to do these things. Let them push back.

[00:27:48] Let them shape the environment that they need. And I think you'll be surprised how many people rise to the occasion. And I think that is a beautiful moment to end on. But before I do let you go, I know you do a lot of talks out there. You can be on the road yourself. So for anybody listening wanting to find out more about the company, about your work, where they can get in contact with you, or equally where they can find out just more information about anything we talked about, where would you like to point everyone? How can they keep up to speed with everything?

[00:28:18] I think the company has a website. So valiantys.com. And you can find me on LinkedIn. My last name is unique enough that if you just type that, I'm probably your first or second hit. I'm happy to engage with people there, too. So it's not even a sales pitch or anything like that. Feel free to comment on my posts, send me a message. Like, I love having these conversations with people. I think that's probably the best way to engage. If you want structural help, find the company. If you want to just talk or disagree with something I say, you can find me on LinkedIn. I'm totally there. Awesome.

[00:28:47] Well, I'll have links to absolutely everything from the company page to your LinkedIn and social channels, etc. And such an important topic and your passion for everything we discussed really shines through today. So I would say it was less about soapbox moments and more about that passion and love for what you do. And more than anything, I just thank you for shining a light on this stuff and going beyond the high level for once. Really appreciate your time. Thank you. Appreciate the opportunity, Neil. Thank you.

[00:29:43] Thank you. Take a look at the discussion today, feel free to check it out, and please share it with someone who spends too much time in meetings or chasing the next tool. Let's try and convert these people once and for all. But seriously, as always, thank you for listening. I'll speak with you all again tomorrow. Bye for now.