What really happens when AI helps teams write code faster, but everything else in the delivery process starts to slow down? In this episode of Tech Talks Daily, I'm joined once again by returning guest and friend of the show, Martin Reynolds, Field CTO at Harness.
It has been two years since we last spoke, and a lot has changed. Martin has relocated from London to North Carolina, regaining hours of his workweek. Still, the bigger shift has been in how AI is reshaping software delivery inside modern enterprises.
Our conversation centers on what Martin calls the AI velocity paradox. Development teams are producing more code at speed, often thanks to AI coding agents, yet testing, security, governance, and release processes are struggling to keep up. The result is a growing gap between how fast software is written and how safely it can be delivered.
Martin shares research showing how this imbalance is already leading to production incidents, hidden vulnerabilities, and mounting technical debt.

We also dig into why this AI-driven transition feels different from previous waves, such as cloud, mobile, or DevOps. Many of the same concerns about security, trust, and control still exist, but this time everything is happening much faster. Martin explains why AI works best as a human amplifier, strengthening good engineering practices while exposing weak ones sooner than ever before.
A significant theme in the episode is visibility. From the use of shadow AI to expanding attack surfaces, Martin outlines why security teams are finding it harder to see where AI is being used and how data flows through systems. Rather than slowing teams down, he argues that the answer lies in embedding governance directly into delivery pipelines, making security automatic rather than an afterthought.
We also explore the rise of agentic AI in testing, quality assurance, and security, where specialized agents act like virtual teammates. When well-designed, these agents help developers stay focused while improving reliability and resilience throughout the lifecycle.
If you are responsible for engineering, platform, or security teams, this episode offers a grounded look at how to balance speed with responsibility in an AI-native world. As AI becomes part of every stage of software delivery, are your processes designed to safely absorb that change, or are they quietly becoming the bottleneck?
Useful Links
Thanks to our sponsors, Alcor, for supporting the show.
[00:00:03] What happens when AI helps teams write code faster than their delivery pipelines can safely handle it? It's something I'm hearing more and more at the moment. So today I invited friend of the show, Martin Reynolds, to join me on the podcast again. He's the field CTO at Harness, and he has spent more than three decades helping engineering teams ship software reliably.
[00:00:29] And he's also seen every major platform shift along the way throughout his career. So I want to find out more about some of the cycles that he's seen and what he makes of the latest transformation. I also believe since we last spoke, he's now moved to the US. But today, technology will unite us again to unpack what feels genuinely different about this AI moment that we're experiencing.
[00:00:51] So we will talk about the AI velocity paradox, why faster code doesn't always mean safer delivery, and how shadow AI is expanding the enterprise attack surface, and why engineering and security teams all need to rethink if they want AI to genuinely improve outcomes rather than just amplify risk. But I don't want to reveal any spoilers. So enough from me. Let me officially introduce you to Martin right now.
[00:01:22] So a massive warm welcome back to the show. We last spoke two years ago when you were living in London, but now you're in the US. I feel like that is a podcast on its own. But can you just remind everyone listening a little about who you are and what you do? Sure. So I'm Martin Reynolds. I'm field CTO at Harness, which is a software delivery lifecycle company. I've got more than 30 years' experience in delivering software. I feel like I've covered probably every industry in the UK.
[00:01:50] Right now, though, I work with engineering teams, security teams, platform teams on helping them design and understand the strategy for just delivering software well, reliably, repeatedly. And we were talking before we started recording today. You're now living in the US, working from the US there. And of course, have you bought a big American style freezer yet? Have you filled it with food? You know what? The apartment came with a big American style freezer.
[00:02:19] It has three things in it. I'm not a huge eater. I'm sure over time I can fill it up. Yeah, I'm sure. And back to your working life. I mean, you've spent, what, more than 30 years in software development. You've probably seen a lot of changes in your role as field CTO at Harness as well.
[00:02:38] Especially one of the things I've got to ask you is what feels fundamentally different about this AI-driven shift compared to earlier waves like cloud, DevOps, mobile and everything in between. What feels different this time around? So that's a couple of things. And it's interesting because we had a team meeting yesterday and we were talking about this a little bit.
[00:02:58] So one of the things that's different is I remember seven, eight years ago talking about moving things to SaaS and like the objections that you had and is it secure? Have we got all the right protections in place? Can we have our data in that place? All of those things. And that played out over years, realistically, before SaaS became like the accepted way of doing things.
[00:03:22] And then when I look at that with AI, it's a lot of the same questions and objections are all happening. They're just happening a lot faster, right? Those decisions and acceptances and what's there. And AI just lowers a lot of barriers and can really, it is really a game changer. Those same concerns and queries still need to be addressed. Still need those right guardrails in place and everything else.
[00:03:49] And from my perspective and the thing that I'm seeing more and more is I think AI is a real human amplifier. I think it takes what we can do and allows us to do it more of it at scale and really kind of amplify the human. And I do love that about it. And I think that's I think that's been true of other changes in the past, the DevOps movement, that kind of automation stuff.
[00:04:15] But I feel like this is a real big step forwards. And one of the reasons I was excited to get you back on the podcast is your recent research highlights the AI velocity paradox: code gets written faster, but everything downstream slows down. So why are testing, security, and release pipelines struggling to keep pace with AI-generated code? What's going on here? So, yeah.
[00:04:42] So really kind of summarizing that all up, AI coding agents are great. People are developing, bringing in 30, 40 percent more code. But then all those processes down the line haven't had that kind of AI acceleration applied to them. So what you now have is these things that were designed for, and worked at, human-speed development. They're all there and they're struggling to cope with that load.
[00:05:12] And so some of the stats that we pulled out around that: I think it was more than 60 percent of organizations say they ship code faster. But there's a big but there: 45 percent of AI-related deployments had problems, and more than 70 percent had production incidents from AI-generated code.
[00:05:34] Making sure all those downstream things can scale with it is really where that's where the bottleneck is. And that's where I think the exciting innovations are happening going forwards is bringing AI to those things like testing, like security governance, like just making sure all of those things can happen and can happen at scale. And they need those same enablers, right?
[00:05:58] The same enablers that you've got in generating code, you need in generating tests, you need in applying your security testing, maybe automating your security fixing. All of those things need to happen to allow it to scale. And I don't want to sound all doom and gloom, because I do think there are solutions to this. And I think this is where I'm going to probably sound like a broken record here.
[00:06:21] The software delivery parts, the if you like, the core, the CD is which kind of orchestrates all of those things. If you're using things like platform engineering, you've got templated pipelines, adding those things and making them scalable where you've got an organization that's maybe using 30 template pipelines across its whole enterprise. You can add those things in and change them at the core.
[00:06:47] And then maybe a hundred teams and a thousand developers or 5,000 engineers are then adopting all those practices and those guardrails. Whereas if you've got that kind of more legacy approach where all those teams have built all their own different pipelines and their own ways of deploying, even in a small organization. If you've got, I don't know, say 20 teams, right? And they do on average 10 services each. So that's 10 pipelines per team times 20, which is 200 pipelines.
[00:07:16] Suddenly you've got a much bigger job to make those things scalable. And I think there'll be many people listening that just automatically assume that faster code automatically means faster delivery. And there's a big focus right now on technical debt. I went to AWS last year and they took all the press and media to the middle of nowhere and blew up a big stack of old servers, just to send the message of: we're going to get rid of technical debt. But I don't think it is going anywhere.
[00:07:46] I mean, where do you see organizations paying the price, whether it be through network cost overruns or growing security exposure? Yeah. Do you know what? I did a panel once and I can't remember where it was at now. I did a panel and we had some great people from like Barclays and JP Morgan Chase were on that panel. And I think it was the person from Barclays who said like technical debt is like milk, right? It's not like wine. It doesn't improve with age. It goes moldy and smells.
[00:08:14] And it's like, and I do like that analogy. But, you know, coming back to your question, velocity doesn't necessarily mean safe delivery. Those two things aren't like, they're not, it's not a, it's not an equation that balances. So AI code can genuinely hide vulnerabilities. It can hide poor design and those things show up in QA and they show up in operations. You need to be able to deal with them. Right. And so you need all those.
[00:08:42] And again, I'm coming back to those guardrails. You'll hear me mention them a lot. Having guardrails in place that make sure you're doing the right security scanning, you're doing the right testing. You've got all those guardrails in place. You're generating SBOMs. You know the provenance of everything. All of those things can help with that. But it's an evolving landscape, right? Even with AI, what you're seeing is novel AI security vulnerabilities that have been generated.
[00:09:10] And then you've got to deal with things like hallucinations and it's a much more complex environment. And again, it's about making sure that your delivery, in fact, everything from when they commit code onwards is, has all the right guardrails and protections in place. And that's how you make sure that you have not just velocity, but safe delivery that comes along with that velocity. You don't want to be in that 75% who have issues that have been generated from it.
[00:09:40] So it's a real balancing thing. I do also want to add in on this. And I know what you're talking about with the AWS example. Even at Harness, we announced something that's going to do this kind of autonomous code automation functionality, where you can actually say, hey, I've got a coding buddy here, and what I want him to do is go away and fix all these high-severity vulnerabilities that are in my code.
[00:10:06] And he's going to sit there in the background and churn away, trying to fix them, iterate and iterate and iterate until he gets stuck. And then it's going to say, Hey, I don't know what to do here. I don't know what the right, the right thing is. You give it some more advice and it iterates and iterates until it has a solution. And then it says, okay, here's what I think the solution is. And that, that human who's responsible for it can say, yes, that's a good, let's merge that up and commit it. Or actually that's not quite right. Right.
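The workflow described here, an agent iterating on a vulnerability, escalating to a human when stuck, and proposing a final fix for review, can be pictured as a simple control loop. This is an illustrative sketch only; `propose_fix`, `verify`, and `ask_human` are hypothetical stand-ins, not any real Harness API:

```python
# Illustrative sketch of the "AI buddy" remediation loop: the agent iterates
# on a fix, asks a human for advice when stuck, and returns a verified
# candidate for the human to review and merge.

def remediation_loop(vulnerability, propose_fix, verify, ask_human, max_iterations=10):
    """Iterate on a fix; return a candidate once verification passes."""
    hints = []
    for _ in range(max_iterations):
        candidate = propose_fix(vulnerability, hints)
        if candidate is None:                       # the agent is stuck
            hints.append(ask_human(vulnerability))  # get advice, keep iterating
            continue
        if verify(candidate):                       # e.g. tests and rescans pass
            return candidate                        # human approves or rejects this
    return None                                     # budget exhausted, give up


# Toy stand-ins just to show the control flow.
def toy_propose(vuln, hints):
    return "patched code" if hints else None        # succeeds only after advice

result = remediation_loop(
    "example high-severity finding",
    toy_propose,
    verify=lambda fix: fix == "patched code",
    ask_human=lambda vuln: "use parameterized queries",
)
```

The key property is that the human stays in the loop at two points: when the agent runs out of ideas, and before anything is merged.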
[00:10:34] So what you're doing is they can be getting on with building the new feature while their partner is sat alongside them. Their AI buddy is sat alongside them fixing some of that technical debt. And that's the kind of thing that I think AWS were alluding to is that kind of thing is where you can have this coding partner, autonomous tasks that are sitting there next to you doing those difficult things. Even in our own demo, we did a, let's convert this from go to rust. And it just sat there and iterated and iterated and iterated and did that.
[00:11:04] I think a more likely scenario is something like, hey, upgrade from this .NET version to that .NET version, or upgrade from this Node version to that Node version, which is one of those painful, horrible tasks that engineers don't enjoy, but needs to be done to get rid of that technical debt and keep you up to date. So I think it's on the way, but you have to treat it, again, as an amplifier for the human. The human still needs to be there to guide it, to give it the right advice, make sure
[00:11:33] it's not introducing new security vulnerabilities, which comes back to the guardrails alongside it. Yeah, a hundred percent with you. And throughout both our careers, I think we've been around long enough to see that everything happens in cycles and we've both seen IT losing the battle against BYOD. Then it was shadow IT. Now in 2026, shadow AI is now being described as an even bigger risk than shadow IT.
[00:11:57] So why has AI expanded the enterprise attack surface so quickly, and why are security teams finding it harder to see than in previous technology shifts? I think it's where that data is and the AI tools that they're using. Before I get into what good looks like around that, there is just a high amount of usage of AI in organizations, authorized and unauthorized, right?
[00:12:24] Where they're just saying, hey, I'm going to use Gemini here, or I'm going to use Cursor for this. But they don't really have visibility of where those LLMs are being used, where they've been trained, what the data was. So it's a much bigger surface, and often they have access to so much information. Hey, read all my documentation and tell me about all this stuff. I'm going to plug you into our
[00:12:50] internal documentation for A or I'm going to plug you into our proprietary code base for B, right? So it's expanding that risk and you don't know where that is going. Now, I'm not saying that AI solutions don't exist for that. They absolutely do. But when it's coming in as a shadow, as a, this isn't an authorized tool, we haven't vetted where it is. We haven't got the version that allows us that protection where it's running in our own space.
[00:13:17] If that makes sense. It really expands that risk, and detecting where it's used is a real challenge, I think, for organizations. I think good organizations have systems that look for that and can detect it. I also think that they have good policies around it. Even internally at Harness, we have an AI governance team who manage all the use cases for AI. We have the tools that we're authorized to use. And don't get me wrong.
[00:13:43] It's not a small number of tools that we have available at our fingertips. It's a big, long list. And there's a process that exists for saying, hey, I want to add a tool into my tool chain. This is the tool. Can we get it vetted and get it in? Making that process easy and simple really helps. And especially when you're talking about your software, you don't want to introduce security vulnerabilities, especially if you're, I don't know, a bank or a healthcare provider or an
[00:14:12] insurance provider or a school, right? All of those things have really sensitive data and you want to be really careful about how you're building that. So again, having all the right guardrails in place, even in your own software delivery, to try and understand where AI generated code is used or AI tools are used. And actually, again, having things like golden pipelines, platform engineering style approaches really helps with that because you can make sure those checks are in place, right?
[00:14:42] You can make sure that the governance is in place. You can make sure that those things are all there. So if we were to start with the assumption that many security teams can't always identify exactly where AI is being used, what are the first visibility gaps that leaders should maybe focus on fixing without bringing development to a halt by being too authoritarian, shall we say? Yeah, I know. I genuinely think most security teams don't want to be a blocker, by the way. Like, I don't think that's a thing.
[00:15:12] I think it's a perception, but it's not necessarily true. But I think you need to start with a single source of truth. You know, if we're talking about software delivery, you're talking about making sure that your code repositories are secure, that you're using least-privilege access tokens, that you scan all your commits with the right security tools and that's automated. Make things like SBOM generation and dependency tracking automatic. Build them into your pipeline process.
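A guardrail like that can be pictured as a small, non-optional gate stage in the pipeline. The sketch below is purely illustrative: the artifact metadata shape (`sbom`, `scan`, `critical_vulns`) is invented for the example and does not reflect any particular tool:

```python
# Illustrative pipeline gate: an artifact is only promotable if an SBOM was
# generated and the automated security scan passed with no critical findings.
# The metadata fields here are invented for the example.

def release_gate(artifact):
    """Return (allowed, reasons): governance lives in the pipeline itself."""
    reasons = []
    if not artifact.get("sbom"):
        reasons.append("missing SBOM")
    scan = artifact.get("scan", {})
    if scan.get("status") != "passed":
        reasons.append("security scan not passed")
    if scan.get("critical_vulns", 0) > 0:
        reasons.append("critical vulnerabilities present")
    return (not reasons, reasons)


good = {"sbom": "service-1.2.3.spdx.json",
        "scan": {"status": "passed", "critical_vulns": 0}}
bad = {"scan": {"status": "passed", "critical_vulns": 2}}  # no SBOM attached

allowed, _ = release_gate(good)
blocked, why = release_gate(bad)
```

Because a gate like this runs on every pipeline execution, the security team gets its visibility automatically instead of via a manual checkpoint before release.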
[00:15:41] Make them automatic and make them not optional, right? What you're doing there, in a very simple, low-friction way, is making it so that the security team has that visibility. And move the governance into the pipeline, right? Move that governance into your software delivery pipeline. Work with the security team, saying, hey, what do you need? What is the visibility you need?
[00:16:08] And how do we get it so that we're providing you that automatically, right? Make it part of that pipeline, make it so that the guardrails are there so that they don't have to come in and do a security checkpoint prior to release or anything else, right? Because they can already go in and see at any time that all the right things have been done. And if they're not done, they know it's not going to get released because the guardrails are in place to stop it. A quick thank you to the sponsor that supports every podcast across the Tech Talks network and every episode.
[00:16:38] And this month I'm partnering with Alcor. And if you've ever tried to hire engineers in another country, you probably know just how painful it can be. Different laws, patchy support and partners who don't truly understand engineering roles. So Alcor approaches this from a different tech point of view. They specialize in Eastern Europe and Latin America and they're able to combine EOR capabilities with recruiting.
[00:17:04] So you get one partner handling everything, and they help you choose the best location for your stack, find developers with the right depth of experience, and run proper assessments so they can onboard people quickly. And they also give you a model that respects both transparency and margin. Most of your spend goes directly to your engineers, and the fee decreases as the team expands. And you can even transition everyone in-house when you're ready without having
[00:17:33] to worry about a penalty. And that structure is why a mix of early-stage and unicorn-stage companies use them as they scale. So if you want to take a look, visit alcor.com slash podcast or tap on the link in the show notes. But now, on with today's show. And if we also muddy the waters further by bringing in agentic AI, with agents or AI buddies starting to take on roles like testing, quality assurance, parts of delivery, etc.
[00:18:03] Where have you seen these specialized agents genuinely improve software quality and developer focus? Because as you said earlier on, we don't want to be doom and gloom here. There are so many positive examples that don't make it onto our newsfeeds. So are there any good ones you can share today? I mean, I've seen a few. I've definitely seen some around the QA space that I just think have been amazing, and not just the unit testing piece.
[00:18:29] I've seen things like speed improvements and stuff with unit testing, because AI can be smart about it: you've changed this code, so these are the tests to run, right? And you see big time savings from people who do that kind of thing. It's not just that, though. It's like, hey, here's my story that came from the product owner, and really, that's the test that you need to run, right? Here's the journey that I'm going to do in my software.
[00:18:55] And AI being able to generate that set of integrated UI front-facing tests and seeing that has really helped kind of accelerate and match the speed of the AI code that's been generated with the testing that's happening. But it's not just that. It's things like AI is quite good at doing things like predictive kind of, hey, this looks like it's going to be a problem. I'm looking at your code base.
[00:19:23] This looks like it's going to be a problem because I already understand what your deployed environment looks like. Or the other one that I've seen quite a lot is kind of that auto-fix type functionality around security vulnerabilities. Hey, we know how to fix this. It's a well-documented fix. Here's the pull request with that fix in place. So you've made this change. We can see what, here's the fix for you. Why don't you just merge that in, right?
[00:19:49] So what we're actually seeing is they're becoming like virtual team members, if that makes sense. It's like having your own little virtual team. Hey, I've got a tester that's sitting on my shoulder that's helping me build out my tests. I've got an SRE on my shoulder that's helping me make sure I'm delivering resilient software. I've got a security guy on my shoulder helping me make sure I deliver securely. And they're not just telling you there's a problem. They're telling you how to solve the problem. Right.
[00:20:19] And that's where that amplification comes in. It's saying, hey, we're looking at this. We think this is going to be a problem, but if you do this, it won't be a problem. Okay, I'm going to do that. Right. It's accelerating and amplifying the human. And so it's not just a chatbot thing. It's really like a teammate sitting next to you, and agents help that because they're specialized in that area. I also want to mention here as well: I think having a selection of agents is a really good way to validate, especially if
[00:20:49] you've got a coding agent. Say you're using Gemini to do code generation; then use ChatGPT, or Cursor, or whichever one it is, to do code review, right? You're actually using different agents to validate against each other, because they're trained slightly differently. They have slightly different focuses. And again, it gives you that: use AI to help you check AI. Just don't use the same AI to do it.
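That separation-of-agents idea can be made concrete with a tiny routing rule: never assign review to the model that authored the change. The model names below are placeholders for illustration, not a recommendation of any specific vendor or API:

```python
# Illustrative rule: the reviewing agent must differ from the authoring agent,
# mirroring the human norm that you never approve your own pull request.

AVAILABLE_MODELS = ["gemini", "gpt", "claude"]  # placeholder names

def pick_reviewer(author_model, models=AVAILABLE_MODELS):
    """Return the first model that did not write the code under review."""
    for model in models:
        if model != author_model:
            return model
    raise ValueError("no independent reviewer available")


reviewer = pick_reviewer("gemini")   # change authored by "gemini"
```

Because different models have different training and different blind spots, disagreement between author and reviewer surfaces issues a single model would wave through.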
[00:21:18] Yes, that's great. If that makes sense: if you talk about that in human terms, if a human writes some code, commits it, and opens a PR, the review for that pull request or merge request should never be done by the person who wrote the code, right? It should be done by a different person in the team. That should be a hundred percent true for AI too. If an AI agent has assisted in writing the code and you want something that's going to help
[00:21:45] you with code reviews, use a different AI agent to do that, right? Apply the same principles, because it makes sense and it works. Such great advice there. Great call. And if we look traditionally, I think there has always been a slight tension between development teams who are trying to move fast and might get frustrated by security teams trying to reduce risk, but all for perfectly good reasons. No one's the bad guy here.
[00:22:11] So what do you think needs to change culturally and operationally for DevSecOps to work in an AI native world? Because as he said, no one wants to be seen as blockers. Everyone wants to be the business enablers, but there are responsibilities around that. Is there anything that needs to change culturally or operationally? So do you know what it is? I think it's more that we need to apply the principles that we've been trying to do with the DevOps movement. Yeah. And I don't think we've ever really got there.
[00:22:39] I remember attending a conference probably like 13, 14 years ago now, and somebody put up this slide, and what it had on it was not just DevOps: it expanded out the middle to dev, security, compliance, and everything else in between. So it was Dev, all the things, Ops in the middle.
[00:23:04] Because the point was that actually it's a communication thing. And I saw this highlighted again. We had our own user conference last September, and we had some security people up on stage. And one of the things that came across loud and clear is they want to talk to engineering teams and platform teams and DevOps teams, because they want to have the conversation about, okay, this is what we need.
[00:23:31] Let us help get it into the pipeline, get it in. So it's automated. We just want the visibility, right? We don't want to be a blocker. Here's the constraints that we need in place. If you can get those guardrails in place. And it's all about, ultimately, it's about communication. And I don't think it's a new message. I just think it's as important, if not more important than ever. So have that communication. Do you know what?
[00:23:58] If you're in a platform engineering team or a DevOps team or an engineering team, have security attend your meetings. Not every meeting, but like maybe your sprint planning meeting or your design meetings or have them involved. Don't go to them when you're like, hey, I've done all this stuff. I just need you to tick my box to say I've had a security review. Get security involved in the design. Get security involved in the process early on.
[00:24:27] And then they won't be a blocker. And this isn't new. This isn't revolutionary. I just think we're still not very good at it. The most successful teams are the teams that do that, right? Secure by design movement that we've seen. And that's true whether you're building software or whether you're building pipelines. Whether you're a platform engineering team and you're building pipelines and platforms for software to be deployed to, security should be in at the design time.
[00:24:55] They should understand what you're building and then tell you what the guardrails need to be, so you can codify them and automate them and make it just part of the process. Again, fantastic advice. And for any engineering and indeed security leaders listening, all united in feeling the pressure to adopt AI quickly: what kind of mindset shifts are needed, do you think, to make AI a force for better software outcomes rather than just faster, riskier releases?
[00:25:24] The exciting stuff there. Because we're IT guys at heart. What do they need to change to deliver better software outcomes and deliver that ROI on these projects? So I think the focus is don't optimize for more code. Optimize for safe, reliable outcomes. So look at the outcome. Yes, you want to get more things out, but you want to do it safely and reliably.
[00:25:52] So optimize for the whole process, not just for: I want to get more code, I want to get more features. And I think that's where the challenge has been. We talked about it right back at the start of this conversation: generating loads of code. But if you're not designing the whole lifecycle to support that, I don't think it's going to work. And really just kind of bake those guardrails in for security, for testing, for compliance from day zero.
[00:26:20] And it should really just scale innovation. You just want to do it responsibly. Don't sit there and just think, hey, I can do more stuff. I want to do more stuff, but I want to do it safely, reliably. In one of my previous roles, we did a lot of healthcare software that went out to the NHS in the UK, to doctors and healthcare providers and so on. And we also did education software.
[00:26:46] Again, very sensitive areas, children's data, designing those processes to make sure that we were compliant. And what we were really doing, especially when we wanted to drive change in those regulated environments was instead of trying to say, hey, what we need is really strict security steps and code reviews and testing at the end of the process to say, yes, you're allowed to deploy this. We took a step back years and years ago.
[00:27:16] And it's, again, the same approach we're talking about now, really, and saying, how do we design our delivery process so that when a school or an NHS trust or whatever takes that piece of software, how do we make it so that actually they can say, yeah, no, we know it's going to be good. We don't have to sit there and do five months of testing, right? We're comfortable that you can do this bug fix and we don't have to go through with that
[00:27:42] kind of old style of have it in this environment for ages and then roll it forwards. And it's about designing the process in such a way that we can give them all the information that they trust the process. So here is the process. We designed it with all your requirements in mind. Here's the report that tells you we've done all those things. So you can now adopt that faster than you would have previously, right? We want to be able to deploy multiple times a week, not every three months or just in school
[00:28:12] holidays or whatever it is, right? And yes, it's a process and you have to build trust, but you have to design that process with all those requirements in mind. I think AI is just another layer on that. Design your process to support the fact that it's going to have AI touch it somewhere. Just going back to that shadow AI, I think design your process to assume that that's the case and make sure you have all the right checks and governance in place. Fantastic advice.
[00:28:40] A pleasure as always having you back on the podcast and sharing your invaluable insights. Before I let you go, one of the things I try and do with my guests now is give them a virtual soapbox to stand on and finally lay to rest some of those myths and misconceptions in their industry that frustrate them as they're scrolling through LinkedIn or in fact, any online publication or even in AI when it hallucinates. So I've got to ask you, what do people misunderstand most about your industry?
[00:29:08] And are there any myths about your job or field that we can finally lay to rest today? Let's do something positive here to finish on. I've got two, and I've already mentioned one of them. The first is: AI will replace developers. We hear this all the time. I've seen some orgs saying, I think there was some Y Combinator stuff, that around 70% of their startup code is AI generated.
[00:29:35] And it's never going to replace developers, right? Yeah. It's just not. I think it amplifies great engineering and it exposes bad engineering faster, right? But I genuinely do not think AI will replace them. It's a true amplifier and it helps. And my second one is, and this is very specific, I guess, to the space I work in now: security and speed are opposites.
[00:30:05] That's the myth. Security and speed are opposites, right? The reality, you can automate and embed security into your process. And often that is going to make teams faster because they're not waiting for that security review at the end because it's built into the process. So there is that myth and it's still out there, right? And there's still people who try and avoid the security team. And we've talked about it a little bit. And I'm glad we did because that is one that it really frustrates me.
[00:30:35] In my previous role, one of my, you know, favorite people to work with was the CISO, the chief information security officer. He was great to work with because I could go and talk to him and we could have a great conversation about what they needed in his space and what I needed in my space. And we could come up with a joint plan to solve that for the whole organization. That was awesome. That's how it should work.
[00:30:59] And I regularly fall back on that kind of collaboration, and it extended to things like, what's the right tooling that gives you the right information? I know what's great for me from an engineering side. What's great for you from a security side? Let's bake all that in so that we have one view for all of us. The view that you see and the view that we see should be the same. We should understand security the same, right? I should understand if something's bad and you should understand if something's bad, and it should be the same understanding.
[00:31:29] It doesn't slow you down. Yeah. I can hear people all around the world nodding in agreement there with what you're saying. And for people listening that maybe want to carry on this conversation, talk with you or your team, or just keep up to speed with the kind of work that you're doing at Harness and some of the big announcements coming out and equally the report that we've referenced today. Where would you like to point everyone? So harness.io is a great source, and there's a bunch of resources under the reports that we do and under our blog.
[00:32:00] Also, you can find Harness on LinkedIn. You can absolutely find me on LinkedIn. It's just my name, Martin Reynolds. If you do the slash in slash Martin Reynolds, that's me. You'll find me. I am always happy to receive messages, and I regularly post on what we're doing at Harness and where we are as well in terms of events and speaking and various things. So feel free to get in touch with us. I have one final call out.
[00:32:26] Harness also sponsors something called the Engineering Excellence Community, and we run events globally for that. There's a site you can sign up for free that has a free assessment model. We're on version two of the model, with version three coming, where you can kind of assess your software delivery platform. We've also got some great articles from members of the community, people from the likes of JP Morgan and Capital One, who are on that site talking about what good software delivery looks like.
[00:32:55] You can just go to engineeringx.org. It's free to sign up and then you get access to all those resources too. Wow. I will add links to absolutely everything there. If you look in the description of this episode or the blog post where you find it, go to the useful link section and you'll find everything that you've just mentioned there. And I think last time we spoke, it was about the importance of internal developer portals or IDPs. Always so much value in our conversations.
[00:33:24] And I always look forward to talking with you. Today we covered everything from the AI velocity paradox to the surge of shadow AI and agentic AI in engineering, and just so many big takeaways. And a big thank you for joining me on the show again today. It's been a real pleasure. Thank you very much for having me. And I hope to be back again soon. I think if there's one takeaway from this conversation, it's that speed and safety do not have to be at odds. But only if teams design for both from day one.
[00:33:54] And Martin shared clear, grounded thinking today on why AI should amplify good engineering rather than replace it, how security really does work best when it's built into the delivery pipelines, and why collaboration between engineering and security teams matters more than ever. So as always, you'll find links to Harness, Martin's work, and the research we discussed in the show notes. And if this conversation sparked a reaction, agreement or even a disagreement, I'd love
[00:34:22] to hear your thoughts, whatever it is. What are you seeing inside your own teams as AI changes how software gets built and shipped? As always, techtalksnetwork.com, socials at Neil C. Hughes. You'll find everything you need there. Let me know. Other than that, time for me to kick back and have a nice cup of tea. But I'll return again tomorrow. Bye for now.

