Can designing for human error become the strongest cybersecurity strategy in an AI-accelerated world?
In this episode, I sit down with Yaz Bekkar, Principal Consulting Architect for Barracuda XDR and a member of the company's Office of the CTO, to explore why the speed introduced by AI is changing the risk equation for every organization. As automation allows teams to move faster, it also means small mistakes can scale at machine speed. Yaz argues that resilience in 2026 is no longer about trying to prevent every incident. It is about anticipating failure, containing the blast radius, and recovering quickly without bringing the business to a standstill.

Our conversation challenges one of the most persistent narratives in security, the idea that people are the weakest link. Yaz explains why safeguarding the workforce begins with reshaping the environment in which they operate. When the secure option is also the easiest and fastest path, risky shortcuts begin to disappear. From secure defaults and least-privilege access to paved-road workflows for administrators, he shares practical examples of how organizations can reduce complexity, limit exposure, and support better decisions under pressure.
We also tackle the limits of annual compliance training and the cultural shift required to build real cyber resilience. Yaz makes the case for continuous, bite-sized practice embedded into everyday work, from three-minute phishing simulations that teach without blame to short, hands-on misconfiguration drills for technical teams. The result is stronger habits, faster response times, and a security posture designed for real human behavior rather than ideal conditions.
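To make the "paved road" idea from the conversation concrete, here is a minimal, hypothetical sketch of one pattern discussed: admin elevation granted as a short-lived, auto-expiring ticket rather than standing local admin rights. The names (`AccessGrant`, `request_elevation`) and scopes are illustrative, not any vendor's API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a "paved road" for admin access: instead of
# permanent rights, a grant is issued with a built-in expiry, so a
# rushed change made under pressure cannot linger as standing risk.

class AccessGrant:
    def __init__(self, user, scope, ttl_minutes=60):
        self.user = user
        self.scope = scope  # e.g. "firewall:edit" (illustrative)
        self.expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)

    def is_active(self):
        # The grant simply stops working once the expiry passes.
        return datetime.now(timezone.utc) < self.expires_at


def request_elevation(user, scope, ttl_minutes=60):
    """Issue a short-lived grant; a real system would also write an audit log."""
    return AccessGrant(user, scope, ttl_minutes)


grant = request_elevation("alice", "firewall:edit", ttl_minutes=30)
print(grant.is_active())  # True immediately after issue
```

The point of the design is that the safe route is also the fast one: the admin gets access in seconds, and nobody has to remember to revoke it later.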
If AI is accelerating both innovation and risk, how do leaders move from a prevention-only mindset to resilient operations that protect business continuity when controls fail? And what would change in your organization if every system was designed with the assumption that someone, somewhere, will eventually make a mistake?
Useful Links
Learn More About Barracuda XDR
[00:00:04] What if the biggest risk in cyber security today isn't careless people, but systems that pretend humans never make mistakes? Let's be honest, we all make them. And as AI speeds everything up, from how we work to how attacks unfold, errors now travel at machine speed. A single click, a rushed permission change, or a misconfigured cloud setting
[00:00:31] can ripple across an organisation in minutes. And this raises an uncomfortable question. If mistakes are inevitable, and I think we can all pretty much assume that they are, why do so many security strategies still act surprised when they actually happen? And that's exactly what we're going to explore in today's conversation. Because joining me today is Yaz Bekkar from Barracuda. And he's someone who believes real cyber resilience starts by designing environments
[00:01:01] that expect slips, expect mistakes. But rather than blaming people after the fact, he argues that the strongest security programmes don't try and fix humans. They actually reshape the systems around them. So when something does go wrong, it doesn't turn into a business-ending event. Or blame culture, like the Spider-Man meme where everybody's pointing at each other. So today we're going to talk about the safest path, why it has to be the easiest path,
[00:01:30] how secure defaults and clear paved roads can change behaviour under pressure. And why culture shifts only happen when habits and incentives change. So whether that be from bite-sized security drills that teach without shaming users, to designing controls that work when stress levels are high. We'll cover it all today in this conversation about resilience in an AI-accelerated world.
[00:01:57] So if you ever wondered whether your organisation is genuinely built to survive mistakes made faster than ever before, I think there should be a lot of takeaways in this one for you. But enough from me. Let me officially introduce you to Yaz now. So thank you for joining me on the podcast today, Yaz. Can you tell everyone listening a little about who you are and what you do? My pleasure.
[00:02:21] Yeah, so Yaz Bekkar, I'm the Principal Consulting Architect for Barracuda XDR. I'm also a member of the Office of the CTO at Barracuda. So my role is basically as an advisor. I serve as an advisor for customers of Barracuda, but also for our internal solution architects as well. So any questions related to cyber security or XDR, I'm always here to assist them.
[00:02:50] Well, it's a pleasure to have you on the podcast with me today. And I must admit, whenever I hear Barracuda, I feel all warm and fuzzy inside. And the reason I say that is I go to 20 plus conferences a year all around the world, but Barracuda's, in Austria, is by far the most scenic, beautiful setting I've ever been to for a tech conference. It's something else, isn't it? It was really beautiful. It was a really great conference there. The landscape was just amazing.
[00:03:19] I mean, it's like a postcard, really. So yeah, definitely. It really is. And from the tech side, one of the things I wanted to talk about, as we enter 2026 and race through it already, AI is accelerating everyday work. But you've said that we will see more mistakes made faster. So why does that reality force maybe a rethink on cyber resilience and what that actually means,
[00:03:44] especially at a time where so many are releasing so many different AI agents out there? To be honest with you, I see cybersecurity a little bit like an equation, because speed changes the risk equation. AI is helping move faster. Developers, analysts, marketers, attackers as well, unfortunately, all of us.
[00:04:08] But when speed goes up, small mistakes can scale instantly. For example, one misconfigured AI agent can push the wrong permissions across hundreds of endpoints in minutes during a mass deployment. So that's what I mean: the mistakes that AI can generate could be a disaster for businesses.
[00:04:38] Now, that's why resilience is key. It doesn't just mean prevent everything. It has to mean anticipate errors, contain the blast radius, detect quickly, and of course, recover fast. So I would say the new question is no longer can we stop every mistake? No. It's when a mistake happens at machine speed,
[00:05:08] the question is, can we survive it at human speed? So I think that nowadays it's going too fast. And unfortunately, businesses are not ready for that quick change. Yeah, it does feel that way. I've seen a lot of evidence of that as well. And I think many security strategies still focus very heavily on fixing human behavior. But can you expand on why you believe safeguarding the workforce
[00:05:36] actually should start with redesigning the environment that people work in instead? Well, I have seen a lot of businesses that very often blame the employee who clicked the phishing link. And actually, I see it from a different point of view. I see that the mistake here is your security and not the people. Because actually, the people will always click, right? They will always click the phishing links.
[00:06:05] And it will always happen that phishing emails will bypass the security rings and at some stage be delivered. Now, I'm not saying that education and security awareness for employees is not necessary. No, no, no, it's absolutely the opposite. It is necessary. But you have to build your security rings in a way that assumes a mistake will happen and a disaster can happen.
[00:06:33] Now, if the system is confusing, rushed, and permission heavy, even smart people make bad clicks. We should stop treating users as the problem and start treating poor design as the real problem. From my perspective, redesign environments: secure collaboration defaults, automatic data classification,
[00:07:02] definitely policy guardrails, and fewer risky choices exposed to users. So that is my point of view. And when I was doing a little research on you before you joined me today, I was reading how you often talk about making slips non-fatal. So what does that look like in practice in a large organization when systems are designed with failure as an expectation rather than an exception? Exactly.
[00:07:30] So it means assuming mistakes will happen and designing so they don't become a disaster. Sometimes I ask businesses straightforwardly, do you have a disaster plan? I would say that 80% of the answers I get are, no, I don't have a disaster plan. So basically the disaster happens, the panic happens, everyone is running from east to west and not knowing what to do.
[00:07:57] And nowadays it's necessary to have a plan A, a plan B, a plan C, basically to have a disaster plan. Without a disaster plan, I mean, you cannot function. And I have also seen businesses that have really robust security and they have been impacted. So it's always about planning for the worst and being ready, right?
[00:08:22] Prevent what you can, limit impact, catch early, respond fast. That's why we talk about MTTR as well, the mean time to remediation and recovery. If, for example, your servers are impacted or encrypted, what is your plan? What are you going to do in that case? Do you have any offline backups? Are you ready to implement them and to do the restore?
[00:08:52] So I think that businesses should really care. What I often say is that it's not if you will be impacted, it's when you will be impacted. And when it comes to secure defaults and least privilege, these are things that are widely discussed but rarely implemented well. And as humans, we will always take the path of least resistance.
[00:09:16] So why is making the secure path the easiest path so hard for organizations to get right? Because we do hear a lot of talk and we've been talking about this for many years. But why are many still finding it hard to get right? Yeah, so three reasons I see. Legacy, complexity, and fear of friction.
[00:09:42] So legacy: old systems were built for open access, not modern threat models. We can still see a lot of, I'll say, medical organizations, hospitals using old systems. They must use, for example, a Windows XP device because their software can only run on XP. Or old servers.
[00:10:12] From the perspective of complexity, access rights grow over the years, and nobody follows up on cleanup. So sometimes, unfortunately, administrators build very wrong policies, creating too much complexity. Then the users find it too difficult to access files or folders.
[00:10:41] So what do they start to do? They start saving those files on USB sticks, for example, or sending them to their Gmail accounts. And the third thing is fear. So teams worry that secure defaults will break workflows and trigger support tickets. An example: local admin rights. Everyone agrees they are risky.
[00:11:06] But removing them without a transition plan can often break the software install process. That's why companies or organizations sometimes delay it forever. What works best is a phased rollout.
[00:11:26] So doing implementations in phases, step by step, with a clear plan, consolidating servers, and removing the legacy servers and legacy devices. That is really, really important. A lot of customers sometimes reach out and say, well, I still have a server, which is 2008. Can you protect it? Well, dear customer, cyber hygiene first.
[00:11:55] You need to upgrade your server before starting to think to protect it. Yeah, I keep hearing stories of the legacy tech problem and the technical debt is becoming an increasing problem. And also, I think it was Mike Tyson that once said, everybody has a plan until they're punched in the face. And as a result, maybe we should be unsurprised that risky shortcuts do tend to emerge when the pressure hits.
[00:12:21] So how do paved roads change day-to-day decision making for admins and end users when time is tight? So if people are under pressure, they follow the fastest route available. So give them a safe, fast route. An example for admins: instead of manually opening broad firewall rules at midnight during an incident, they use approved automation runbooks.
[00:12:50] So with expiry, for example. It is necessary to think ahead, right? I know we can all be under pressure, but choosing the fastest way is not always the best. Sometimes you can be under a deadline. The question is not how can I release it fast.
[00:13:17] You should think how can I release it safely. Even, if necessary, we can postpone the deadline, right? Don't be under pressure. That is really important. Another bugbear of mine that I've seen throughout my career is those dreaded annual security compliance trainings. They remain the norm in many companies. They typically involve just hitting next about 15 times, and then you've got your tick for the year.
[00:13:45] Oh gosh, I see those so often. It's like going to the gym once a year and expecting to stay fit. Yeah, I have seen those sometimes, like, yeah, we have some compliance requirements. We need to do an audit, we need to do a vulnerability scan, for example, once a year. Okay. And how often is your environment changing? Well, constantly, continuously.
[00:14:15] And you need to do the audit only once a year? Really? I mean, threats evolve, right? Daily, weekly, monthly, not in one big yearly review. So, I mean, annual training is usually generic. Real attacks are continual. And compliance is not only there from the perspective of just checking a box.
[00:14:44] And unfortunately, I see a lot of enterprises, businesses, that just want to check the box that, yeah, we are compliant. But, yeah, it's nice to be compliant. It's nice to have an ISO 27001. It's great to have a SOC 2 Type 2.
[00:15:08] But also the question: do those compliance frameworks fit the threat landscape that exists right now? Because the threat landscape is changing continuously. Governments are not updating those compliance requirements constantly. It's great to have compliance. Don't get me wrong. It's great.
[00:15:32] It can be an indication that you have some type of cyber hygiene, but it doesn't mean that you will not be impacted. So, yeah, I always suggest and recommend customers to do an audit monthly, at least monthly, to be aware of those open doors and open windows, and close them safely. Changes happen constantly within their environment.
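The monthly audit Yaz recommends amounts to regularly diffing the environment against a known-good baseline and flagging the "open doors and windows". A minimal sketch, where the baseline settings are entirely made up for illustration:

```python
# Hypothetical audit sketch: compare a snapshot of the environment
# (here, a simple dict of settings) against a secure baseline and
# report every point of drift as a finding to close.

SECURE_BASELINE = {
    "mfa_enforced": True,
    "public_share_links": False,
    "open_admin_ports": [],
}

def audit(current_state):
    """Return a list of findings where current state drifts from the baseline."""
    findings = []
    for key, expected in SECURE_BASELINE.items():
        actual = current_state.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

# Example snapshot with two drifted settings.
snapshot = {"mfa_enforced": True, "public_share_links": True, "open_admin_ports": [3389]}
for finding in audit(snapshot):
    print(finding)
```

Run monthly (or continuously), the same comparison turns an annual box-ticking exercise into an ongoing list of concrete doors to close.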
[00:16:01] I completely agree. And if anything good comes out of this episode, some business somewhere will be making those changes. I think we all feel that pain. And one of the other things that you advocate for is continuous bite-sized practice like micro-drills and realistic phishing exercises. And for business leaders listening that may be interested in trying something like this, do you have any examples of how these methods have built stronger habits without creating fear or blame? Absolutely.
[00:16:30] A three-minute simulation. For example, creating drills of phishing attacks. Simply deliver a fake phishing email to your users. And there are plenty of platforms that are able to simulate this and send it to your own employees. So that you simply know who is clicking, who is not thinking and clicking too fast on the link, right?
[00:16:57] So nowadays, what a lot of businesses tend to do is just simply put everyone inside one room and let them take the security awareness course. Right? I bet 80% of them are not listening to that course. They are not paying attention to it.
[00:17:21] What is more interactive and what is more interesting is to have the employees clicking the phishing email and then you can coach them, hey, you received an email and you clicked on that phishing link. That was a fake phishing email. But what if that phishing email was real? That could impact us all, including you. So, yeah, instant feedback, practical takeaway, and of course, no public shaming.
[00:17:51] That is really key. I mean, we can all make mistakes, right? We are learning constantly, on a daily basis. It's all about learning something from it. So, a phishing simulation landing in a sales inbox, for example, using a fake urgent contact update. If someone clicked on it, red flag, and then basically educate your users. Love it.
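The drill loop Yaz describes (simulate, record clicks, coach privately, never shame publicly) could be sketched like this; the function names, user names, and feedback messages are all hypothetical, not a real simulation platform's API:

```python
# Hypothetical sketch of a blame-free phishing drill: everyone gets the
# simulated email; clickers get private coaching, everyone else gets
# positive reinforcement, and nothing is published company-wide.

def run_drill(recipients, clicked):
    """Return per-user outcomes: private coaching for clickers, reinforcement otherwise."""
    outcomes = {}
    for user in recipients:
        if user in clicked:
            outcomes[user] = "private coaching: this was a simulation; here is what to spot next time"
        else:
            outcomes[user] = "positive reinforcement: well spotted"
    return outcomes

# One simulated send: only "bob" clicked the fake link.
results = run_drill(["alice", "bob", "carol"], clicked={"bob"})
print(results["bob"])
```

The design choice that matters is that feedback is instant and individual, which builds the habit, while aggregate click rates (not names) are what leadership sees.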
[00:18:20] Food for thought indeed there. And if there are any leaders or organizations listening that want to improve their cyber resilience in this AI-driven world, is there a first mindset shift leaders need to be thinking about, especially around their people, process, and security design? Any big mindset shifts or common mistakes you see people making? Well, yes. Move from perfect prevention to resilient operation.
[00:18:49] So, leaders should ask if this control fails, what happens next? Can we contain impact in minutes or in hours or in days or in weeks? I have seen some impacts that basically shut down all the business. So, it can create a lot of reputation impact on their business.
[00:19:12] And also, can teams execute response playbooks without heroics? So, without manual changes to the environment. Basically, everything can be automated. If there is such an impact on that server, there is an automation to replace it within minutes. And another thing, another question: did we design for real human behavior under stress?
[00:19:43] People are not the weakest link. They are, I would call it, the adaptive layer. So, process shouldn't be paperwork. It should be executable during incidents. Security design shouldn't just block risk. It should preserve business continuity. And actually, business continuity is key, really key.
[00:20:09] I mean, if you really look at your business, if you are doing great, you should think about business continuity. If there is an impact, can you safely come out of that incident? So many great points there. And hopefully, a lot of actionable points for people listening. Now, one of the other things I try and do on this podcast is give my guests a chance to step on a virtual soapbox
[00:20:37] and dispel some myths and misconceptions that they may have seen or that may just frustrate you when you see them on your LinkedIn or Reddit or wherever you hang out there. So, what do people most misunderstand about your industry? Are there any myths or misconceptions around your job, your field of expertise that we can finally lay to rest today? I'm sure there are a few, but anything that you would like to lay to rest today once and for all? Well, yes.
[00:21:04] I believe that our message, our value is cybersecurity first within Barracuda. That's our focus. And here in Barracuda, our focus is to assist and to be side by side with our customers. And that's what we do, actually, within XDR.
[00:21:28] Now, in my position, in my role, what I do often is threat simulations. And a lot of people can just Google me, Yaz Bekkar, and the threat simulations or webinars that I have run. So, there are plenty of use cases. I do those threat simulations not to instill panic in people. I just try to open their eyes.
[00:21:58] For example, there was one which was very interesting: the MFA bypass attacks, man-in-the-middle, conditional access bypass. So, we do threat simulations and we try to educate businesses and users. And that's our goal. So, yeah, everybody can reach out to our channels and they can reach out to me as well on LinkedIn. They are more than welcome. Awesome.
[00:22:26] Well, I'll add links to everything you've mentioned there, including the couple of big events you have this year. So, I'll put links there too. And hopefully, I can get to meet you later in the year. Maybe we can record something at one of the events. But more than anything, thanks for joining me today. Thank you so much. It was a pleasure. Thank you. Thank you.
[00:22:54] And that's it. In a world where AI is speeding everything up, mistakes are no longer slow, isolated events; we know they will spread fast before anyone has time to react. And what my guest made clear today is that resilience is not about pretending those mistakes are not going to happen. They are inevitable. It's about designing systems, defaults and habits that assume they will, so the outcome is more survivable.
[00:23:24] Whether it be secure paths that remove the temptation for shortcuts, or continuous practice that fits real workflows rather than one of those dreaded annual box-ticking compliance training exercises. So, I think today's conversation challenged the way that many organisations still think about security culture. People are not the weakest link. They are simply the reality that every system has to work with, especially under pressure.
[00:23:53] So, over to you. Did this episode make you reflect on how your organisation handles human error, incentives and recovery when things go wrong? Love to hear your take on this one as always. Are your systems designed for perfection or resilience when that reality hits? We know it's going to happen. What are you going to do about it? As always, techtalksnetwork.com. You'll find all the links to get in touch with me, send me an audio message or a direct message on socials.
[00:24:23] Plus, you'll find out how you can work with me if me talking into your ear every day is not enough. But that is it for today. So, thank you to Yaz and massive thank you to each and every one of you for listening every day. Speak with you again tomorrow. Bye for now.

