How can organizations implement AI safely while reaping its benefits? In this episode of Tech Talks Daily, we sit down with Danny Allan, the Chief Technology Officer of Snyk, to discuss this crucial topic.
Recent findings from a Snyk survey reveal that a significant majority of IT managers are concerned about their teams' over-reliance on AI code completion tools, potentially bypassing essential security protocols. With developers facing high burnout rates, the pressure to adopt AI is intensifying. However, as Danny Allan emphasizes, the solution lies not in banning AI but in strategically integrating these tools within organizations.
Danny shares Snyk's innovative approach to AI adoption, which includes providing security guardrails and thorough analysis to ensure AI tools are utilized securely. He discusses how Snyk partners with customers to mitigate risks and enhance productivity without compromising security. By implementing a clear plan for AI integration and partnering with experienced vendors like Snyk, companies can balance the productivity gains of AI with robust security measures.
We also explore a real-world example where a team of 5000 developers successfully rolled out an AI coding assistant with Snyk's security analysis, achieving significant productivity gains. Danny delves into Snyk's AI Intelligence Framework, which focuses on using AI internally, securing AI implementations, and enhancing products and services with AI capabilities.
Are you interested in learning more about how to securely implement AI in your organization? Tune in to hear Danny Allan's insights and discover how Snyk can help you navigate the complexities of AI adoption. As always, we invite you to share your thoughts and experiences on this topic. How is your organization approaching AI integration, and what challenges have you encountered?
[00:00:00] Are we too trusting of AI in our development processes? Well, today I'm joined by Danny Allan, CTO of a company called Snyk. And together, we're going to unpack the integration of AI within software development and focus on the crucial balance between innovation and
[00:00:21] security. And Snyk is a leader in developer security. They've observed a significant reliance on AI code completion tools, which, while boosting productivity, often bypass established security policies. And yes, this has ignited concern among some IT managers, but with a Snyk survey
[00:00:41] revealing that 80% are wary of their teams' dependency on these tools amidst rising developer burnout, there seem to be a few contrasting issues here. And when I read that Danny believes that the answer lies not in restricting AI tool usage, but in strategizing
[00:00:58] their secure adoption, I had to get him on the podcast today. Because as AI becomes more embedded in our digital fabric, the question I'm left with is how can organizations leverage these powerful tools without compromising on security? I just want to take a time out
[00:01:16] to express my gratitude to everyone who supports our mission of delivering content every day to 140,000 listeners across 165 countries. I'm grateful for the support that allows me to maintain this daily tech podcast. And it's also an opportunity to talk about the fact
[00:01:31] that legacy managed file transfer tools are dated and lack the security that today's remote workforce often demands. And companies that continue relying on that outdated technology, they can put their sensitive data at risk. So if security breaches are keeping you up
[00:01:47] at night, why not sleep soundly with the Kiteworks MFT suite? Their software security hardening includes an ongoing bounty program and regular penetration testing, and with one-click appliance updates, staying secure has actually never been easier. It doesn't have
[00:02:03] to be complicated. So while other solutions might leave you vulnerable, Kiteworks offers military grade protection. So there's no need to compromise on security. So step into the future of secure managed file transfer with Kiteworks by visiting kiteworks.com to get
[00:02:19] started. That's right, kiteworks.com to get started today. It's time to shift gears somewhat introduce the person that you've all been waiting for today. So buckle up and hold on tight because no matter where you are in the world, I'm going to be beaming your ears all
[00:02:33] the way to Boston, Massachusetts, where Danny Allan is going to be sharing his insights on nurturing a secure, productive environment in an AI-accelerated world. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what
[00:02:51] you do? Sure, great to be with you. My name is Danny Allan. I'm the chief technology officer at Snyk. And yes, my role is to set the technology strategy for the company. And of course, being in the cybersecurity space,
[00:03:05] it means being very in tune with what is happening in the cyber world. So I've got to ask, Danny, how does Snyk partner with customers to help them implement AI safely? And what
[00:03:18] are the key benefits of doing so correctly? Because the reason I asked that question is there's so much hype around AI at the moment. And there are a lot of people wanting to be
[00:03:28] part of that AI narrative, but they don't know what to do next or don't have somebody to guide them through that process. That's where things get a little bit murky sometimes. So how are you helping people with this? That's a great question. You know, we actually did a study just recently
[00:03:44] of over 500 organizations and what they were doing with AI. And what we learned is that 96% of them are already using AI. And so we have a very broad customer base, we have over 3000
[00:03:55] customers. And so we have a lot of experience in understanding what the motivations are and what they're doing. And so how do we partner with them, we listen to them, we understand what their motivations are, why they're adopting AI. And then we use the experience from the other
[00:04:09] 3000 customers in how to do that effectively within the organization. And I'm glad you mentioned that survey there, because I was reading it before we came on the podcast today. And there are a few standout stats for me. And one in particular was that 80% of IT managers have expressed concern about
[00:04:27] their teams maybe relying too heavily on AI code completion tools. So can you expand on that and some of the main risks associated with that reliance? Because there is a concern somewhat that in many
[00:04:40] circles, we're all getting a little bit lazy by leaning on these tools. Yeah, well, we like these tools because they make us more efficient, right? It's like autocomplete as I'm writing a letter, it
[00:04:51] finishes it. And for coders, of course, you can see why there's such high adoption, it helps them be more efficient. But the reality is that in that same survey, 56% of the respondents indicated that
[00:05:04] it generated insecure code. And actually, there's a public survey on this that said 36% of the time, code coming from one of the most popular code assistants contained vulnerabilities. So there's a real risk there if you become over-reliant on these coding assistants
[00:05:23] to help you build software, who is responsible for that? If a vulnerability gets introduced, is it the coding assistant? Is it the developer? Is it the organization? And on the flip side of that stat, there's also an inconvenient truth that 73% of developers out there are experiencing burnout.
[00:05:41] We've all heard of those people that are just coding for hours and hours at a time. So how can AI be utilized to better alleviate some of this pressure without compromising quality and security, etc? Well, burnout's a real thing, both with developers and with security people. We're all
[00:05:59] being asked, of course, to do more and time is the most precious resource. And so one of the things that I always recommend is, can you take artificial intelligence or machine learning technologies and
[00:06:09] help it automate away the more mundane parts of your job? So for a developer, for example, not only are they writing code, but they're often asked to document what the code is doing.
[00:06:21] And there's a real opportunity there because some of the coding assistants can take the code that you've just written, and it can turn it into natural language and do that documentation for you. So it does help with productivity, but I would argue, Neil, that it helps specifically
[00:06:36] with the areas that developers don't really want to do, the mundane part, which is documentation or unit testing, or the parts of their job that do cause that burnout. And I think looking back
[00:06:49] at last year, I was attending a lot of tech conferences. There were a lot of knee-jerk reactions to AI. Some organisations were just banning it. This year, I'm seeing a lot more
[00:06:58] excitement in adoption and how can we work with this? And it was refreshing to read that you mentioned that the solution isn't to ban developers from using AI, but to have a clearer plan for
[00:07:09] AI-powered tools. I'd love to dig a little bit deeper on that. What do you think are the essential components of a plan like that? What does it look like? Well, we talk a lot about AI fitness
[00:07:21] and AI wellness. AI fitness being, are the products ready? Are you ready to implement it? And so for example, here at Snyk, we have AI policies around which LLMs, that's large language
[00:07:37] models that you can use, and for what kinds of data and where they can be used, and having all the practical technical resources in place. The other part of that is AI wellness. Is there a cultural
[00:07:49] readiness to adopt that AI within the organisation? And a lot of that really is just partnering. This is why the partnerships between Snyk and our customers or the customer and the organisations that are partnering with them become so important because they can bring that experience into the
[00:08:08] equation and they can help them understand where they're going to achieve the greatest benefit. So yes, I'm a big believer in don't stand in the way of AI. If you can use a tool in your toolbox,
[00:08:18] you should absolutely be using it. And one of the reasons for that is you'll hear from developers, in fact, in that survey, I believe, or a similar survey, 80% of developers said they would go around
[00:08:31] policies. So don't try and stand in their way. What you want to do is help them to take advantage of artificial intelligence, but do it in a way that is productive and safe and beneficial for the
[00:08:42] organisation. And given that AI integrations are, let's be honest, inevitable across just about every industry, what would you say are the best practices that you'd recommend for organisations to better implement these tools securely? You've probably seen a few mistakes. You probably get
[00:08:58] this question asked to you on many occasions from customers and clients and everyone in between. But what would you recommend here? Well, you want to be very clear on what the rules of engagement are.
[00:09:12] So for example, if you're in the UK and you fall under the Data Protection Act, you need to be very careful about the data that you're using. If you're in the EU and you fall under GDPR,
[00:09:21] you need to be careful about the data. And so there are many aspects to it, but one is understanding what can and what can't be used in public models versus private models, because you probably don't
[00:09:33] want your personal sensitive intellectual property going out to a public model where it's shared with other people. So being very clear on what data can be used where and with what models becomes critically important. I'm also a big believer in just educating people on how to use
[00:09:51] it effectively. So for example, you can use AI to help write job descriptions, but you may not want to include internal data that's sensitive to the organization within a public job description
[00:10:06] that you're creating. And so you just need to be careful about what data is used where as you're moving forward with AI. We've talked about your survey, and I think it's probably a great opportunity to introduce everyone listening to Snyk. There'll be a lot of people hearing about
[00:10:21] you guys for the first time. So can you tell me a little bit more about the company and also how Snyk's product suite, which includes Snyk Code and Snyk Open Source, is helping mitigate some
[00:10:31] of the security risks we're talking about today that are associated with AI generated code? Yeah, well, I mentioned earlier, Neil, that 96% of developers were using some type of coding assistant. And that's like autocomplete as you're typing along, it helps them be more productive.
[00:10:48] And of course, they all want to use that type of tool. So one of the things that Snyk is a pioneer in is ensuring developer first security. We've always said, we help the developers code faster,
[00:11:02] deliver applications faster, but do it in a secure way. Don't wait until the very end and discover that there's a security vulnerability and then be faced with Sophie's choice of do I publish this knowing there's a vulnerability? Or do I send it back and slow things down?
[00:11:17] So we've been very focused on making sure, as developers write code, that there are no vulnerabilities in it. And primarily, it's one of two areas. One is they're including open source components; are there vulnerabilities in those open source components that are part of the supply chain?
[00:11:35] Or are there vulnerabilities in the code that they've actually written? So those are the two products that you just mentioned, Snyk Code and Snyk Open Source. And what we've been very effective in doing is partnering with organizations who say
[00:11:48] to the development teams, hey, we will give you a coding assistant if you implement these types of checks: every time you write code, every time you include an open source component,
[00:11:59] you need to run it through the guardrails of Snyk that will validate to make sure that it is secure, that it is safe, that it's not compromising the organization. And so we've been very effective in
[00:12:12] that the developer gets the carrot of a coding assistant, but you have the guardrail to ensure they're doing it in a very secure way. And I'd love to also explore the concept of AI IQ that
[00:12:25] I know Snyk advocates for. Can you tell me a bit more about that and why you think that's crucial for organizations too? Well, AI is everywhere right now. Everyone is talking about AI this
[00:12:37] and AI that, and how can we use AI? And I really like to break it down into three categories of AI and understanding why you're using it. Don't use it just for the sake of having a marketing message
[00:12:48] of we use AI. So for example, here at Snyk, we use AI internally to help our organization be more efficient. We have chatbots and you can ask questions about this product or that product
[00:13:02] or for enablement. So we use it very effectively internally. The second way of using AI is can you secure the AI? Because now that organizations are looking at how can I use large language models,
[00:13:15] how can I use AI within my business? You need to make sure it's secure. And that can be as simple as an LLM that gets downloaded being poisoned in some way. You need to make sure that the way that
[00:13:28] you're leveraging AI is secure. We've been talking about coding assistants; you need to make sure, if they're using coding assistants, that you're actually securing the output of those coding assistants. And then the third way that I think about AI is how can you use AI to make your products and
[00:13:47] services better? So for example, here at Snyk for the last five years, we've been using machine learning in our vulnerability assessment. And I won't bore you with the details, but traditionally, people would use algorithms to say the code that someone just wrote, is that secure? Is
[00:14:04] it not secure? And they would start at line one and they would go through that. Well, nowadays, we're using symbolic regression analysis. The name doesn't really matter, but these are new techniques that result in a better, faster experience for the customer. And so when you talk about AI
[00:14:22] IQ, I always like to understand where the AI is being used. Are you using it internally? Are you securing the AI that you're using? Are you using AI for your customer benefit? And then making sure
[00:14:34] that you're doing it in a way that actually does result in productivity gains and efficiency gains for the organization. And just to bring to life some of what we're talking about here, are you able to share any real world examples of how Snyk has maybe helped companies implement
[00:14:51] AI tools securely and some of the benefits that they've been able to reap as a result of doing this too? Yeah. I had mentioned earlier that we're often deployed alongside AI coding assistants,
[00:15:04] and we have a large technology organization that's been a customer of Snyk for a very long time. They have 5,000 developers, all of whom were clamoring to get an AI coding tool.
[00:15:15] And so what they did is they said, look, we'll give you the AI coding tool, as long as every time you do a pull request or you merge code into the code base, we'll use security analysis.
[00:15:26] And within 60 days, two months, they had rolled out this AI coding assistant, but with security attached to it, to all 5,000 developers. Now, the win was of course the time in rolling that out
[00:15:40] and getting it going. But the real benefit to the organization was not only were they able to do that really quickly, but they got significant productivity gains. So they're using the same number of developers, the same number of heads, but of course, the speed and velocity at which
[00:15:56] they're able to develop applications improved. And I don't know what the exact number was in that case, but it was in the double digits. There was a significant productivity gain in the velocity
[00:16:06] of building software. And so there are many of these real tangible examples that we have where organizations can do more with less and do it in a very secure, efficient way. And we've heard today about you, your work and everything that Snyk are doing. And I'd love
[00:16:24] to find out more about what makes you tick because I think there's a real pressure on all of us to be in a state of continuous learning now. So in a way of asking who checks the checker,
[00:16:35] where or how do you self-educate? How do you keep up to speed with all these changes? Well, I am a very voracious reader of books and consumer of podcasts. I believe very much in networking and in standing on the shoulders of others. And so
[00:16:53] on the podcast side, on the cybersecurity front, for example, I love podcasts like Darknet Diaries that talk about how malicious individuals act, or for learning about businesses, I like podcasts like Acquired. One of the books that I read just recently that really opened my eyes to better
[00:17:12] measurement and better metrics was Measure What Matters by John Doerr. But I also continuously try to learn from those around me and others in the industry that have gone through similar exercises. Yeah, great answer. I always think you should learn something
[00:17:30] from every single person that you meet and talk with. And there's so much content out there to consume as well. I'll get that book added to our Amazon wishlist for people to check out. But
[00:17:39] as I said, we've talked about a lot of information today. We covered a little bit about that survey. So if anyone listening wants to find out more about that, about the Snyk products
[00:17:48] we mentioned or maybe even follow or connect with you, ask you a question. What's the best starting point for everything? Well, the best starting point, of course, is our website. So Snyk.io, S-N-Y-K dot IO. And you'll get all the information there. The particular survey we were
[00:18:05] referring to is the 2023 AI Code Report, which you can download and read in full there. If anyone wants to connect with me, of course, you can find me on the website or on LinkedIn.
[00:18:18] Danny Allan is the name. Awesome. Well, I do urge people to check everything out, including that survey. I'll put a link to that as well. And as we said, some big, big stats in
[00:18:30] there, such as 80 percent of IT managers being somewhat or very concerned about their teams relying too heavily on AI code completion tools and bypassing security policies. But it wasn't just about highlighting the problem. It was the solution which you so eloquently put there.
[00:18:45] So thank you so much for joining me today. I'd love to hear what everyone listening thought of the conversation and I would encourage them to add to what we talked about. But more than anything,
[00:18:54] Danny, just thanks for sharing your story today. Thank you for having me, Neil. It was great to speak with you. Danny was a real champion today. I ran into a few technical issues when we started the
[00:19:03] call, and he was incredibly cool about it. And I think it's clear after talking with Danny today that the path to securely integrating AI in development environments is nuanced and it also requires meticulous planning and execution. And Snyk's approach, focusing on partnership and practical security frameworks,
[00:19:23] almost offers a blueprint for others to follow. Yet challenges remain in the form of cognitive biases and the sheer pace of AI adoption. But this just underscores the need for continuous education and
[00:19:36] adaptation. So the big question I want to put out to you listening is how are you and your organisation managing the balance between AI-driven efficiency and security? I'd love to hear your thoughts and experiences, and anything else you thought about today's conversation.
[00:19:54] Did it spark any ideas? Did you think we missed something? As always, email me at techblogwriter@outlook.com, or find me on Twitter, LinkedIn and Instagram, just at Neil C Hughes. You've heard from me. You've heard from Danny sharing his invaluable perspectives. Now I want to hear yours. But
[00:20:11] other than that, I've got another guest lined up for tomorrow who will be joining me and waiting in your podcast feed bright and early tomorrow. But thanks as always for listening and until next time, don't be a stranger.

