What happens when AI starts moving faster than the people meant to control it?
In this episode, I'm joined by Bernard Montel, Field CTO EMEA at Tenable, for a timely conversation about the AI risks many organizations may be underestimating. Bernard believes we are heading toward a defining AI accident and that the first major incident may come through speed, scale, and unintended consequences rather than a malicious attack.
We talk about why so many companies feel pressure to adopt AI at pace, while visibility, governance, and control struggle to keep up. Bernard describes this moment as "driving faster than we can steer," and explains why shadow AI, overprivileged identities, cloud misconfigurations, and exposed AI projects are already creating real business risk.

The conversation also looks at agentic AI and why giving systems the ability to take action changes the security equation. A chatbot giving a wrong answer is one problem. An AI agent making flawed decisions, leaking data, or interacting with industrial systems is something very different.
Bernard also shares why AI can become a distraction from the security basics that still matter, including cloud security, identity, exposure management, and vulnerability remediation. Attackers may be using AI to move faster, but many of the weaknesses they exploit remain painfully familiar.
We also discuss Tenable's new agentic AI framework, announced during RSA, and how the company is using AI to help security teams respond at machine speed while reducing exposure across IT, cloud, OT, identity, and AI environments.
For business and security leaders, this episode offers a clear warning and a practical takeaway. AI adoption is no longer a future conversation, but control, governance, and exposure management need to move with it.
How prepared is your organization for an AI incident caused by accident rather than attack? Share your thoughts.
Useful Links
Visit the Sponsors of Tech Talks Network and learn more about the NordLayer Browser.
[00:00:00] - [Speaker 0]
So a big thank you to NordLayer for backing the podcast and supporting the kind of real-world cybersecurity conversations that we need more of. Because as someone that records 65-plus interviews a month, I've personally seen a huge increase in browser-based attacks over the past year, whether that be phishing, malicious extensions, or account takeovers; the list is long. And it's all happening where people spend most of their time: inside the browser. So NordLayer's new business browser is built to address exactly that. It blocks malicious sites before they load.
[00:00:37] - [Speaker 0]
It limits risky behaviors like uncontrolled downloads or data sharing and gives you visibility into how your team interacts with web apps. And it also helps you stay compliant by controlling access and enforcing policies without the need to rely on multiple disconnected tools. So for anyone listening that is thinking seriously about reducing risk in SaaS heavy environments, this feels like a smarter and more focused approach. And you can learn more about it by visiting nordlayer.com/browser. Let me know what you think.
[00:01:12] - [Speaker 0]
But now, let me introduce you to today's guest. What happens if the first major AI disaster is caused not by a criminal mastermind from a James Bond movie, but more likely by an organization that has simply been moving too fast to see the risk it's been building over the last few years? Well, today, I'm gonna be joined by Bernard Montel, Field CTO EMEA at Tenable. And together, we're gonna talk about a warning that feels very difficult to ignore right now, because he believes that the first defining AI incident will likely be accidental, born from speed, complexity, weak governance, and a false sense of control. And that idea lands hard because it cuts through so much of the current noise around AI.
[00:02:08] - [Speaker 0]
Because while we see headlines on our news feeds that focus on innovation, productivity, the race to deploy agentic systems, and the opportunities that agents bring, my guest brings the conversation back to something much less glamorous but more urgent. He's gonna argue that many organizations are possibly adopting AI faster than they can properly understand it, monitor it, and secure it, almost like little children playing with very dangerous toys. And in his view, businesses are trying to keep pace. They're doing their best. But in doing so, they could be unwittingly creating blind spots that could lead to leaked data, flawed outputs, or automated decisions that spiral in unexpected ways. So we will discuss today why AI risk is often misunderstood, and why the biggest danger might not come from some brand-new Hollywood-style cyber threat.
[00:03:08] - [Speaker 0]
It could come from the same old weaknesses that security teams have been dealing with for years: identity sprawl, cloud misconfiguration, unpatched vulnerabilities, and poor visibility. The difference now is speed, because attackers are using AI to move faster while defenders are still overwhelmed by alerts, fragmented systems, and pressure from the business to roll out AI at all costs. Does that sound familiar to you? Well, if it does, this conversation will look at exactly what organizations can actually do about it, because he's gonna share why he believes exposure management, governance, and visibility over shadow AI are now business issues, not just security team concerns. So as your company races to build with AI, ask yourselves and your teams, are you truly in control, or are you simply hoping that you are?
[00:04:06] - [Speaker 0]
And with that scene perfectly set, let me bring today's guest on to the podcast right now. So thank you for joining me on the podcast today. Can you tell everyone listening a little about who you are and what you do?
[00:04:21] - [Speaker 1]
So I'm Bernard Montel. I'm the Field CTO for Tenable in EMEA. I've been in cyber for the past twenty-five years. I started my career around identity and access management and some crypto. And then I've done some identity, you know, threat detection and response, you know, everything around SOC in my career as well.
[00:04:45] - [Speaker 1]
And I joined Tenable roughly four years ago as a Field CTO, today covering the EMEA region.
[00:04:54] - [Speaker 0]
Excellent. Well, thank you for joining me today. You've been doing a lot of traveling. It's been a big week for you: RSA, and a big agentic AI framework announcement.
[00:05:03] - [Speaker 0]
For anyone that missed that, tell me more about that and what you announced there and what's exciting about that.
[00:05:09] - [Speaker 1]
You know, AI is used by attackers. No doubt about it. They are using it, and certainly everyone has followed the fact that Anthropic announced that some groups were using their system. They jailbroke the system, and 80% of the attacks by that attack group are orchestrated and automated by AI today. So there is that aspect, which is really that attackers are going quicker.
[00:05:43] - [Speaker 1]
They do that at a machine rhythm. So everyone needs help going quicker as well on the defender side of the house. So we decided we would implement an agentic framework helping with orchestration and automation, and helping our practitioners go quicker in understanding, you know, vulnerabilities, misconfigurations, all of those exposures we're highlighting for them. So agentic AI can help them go quicker, and then they are obviously better armed against attackers who today, you know, are using AI as well.
[00:06:27] - [Speaker 0]
And I was reading, before you joined me today, you've warned that the first major AI incident is very likely to be accidental rather than malicious. So what does that scenario actually look like in the real world? Why do you think organizations are possibly underestimating this risk?
[00:06:44] - [Speaker 1]
You know, the keyword today around AI is speed. The speed of adoption, the speed of, you know, deployment, the speed of usage. Keep in mind that when we talk about AI, we talk about agentic AI today. You know? That's exactly the subject.
[00:07:05] - [Speaker 1]
So if we make an analogy with driving a car: if we are driving too fast, there is a risk of accident. And everyone around, you know, the cybersecurity community is thinking about, hey, when will that crash? When we talk about that, what is an AI? It's a statistical engine and a huge amount of data.
[00:07:32] - [Speaker 1]
Okay? So a non-deterministic statistical engine can make mistakes. And we know that today. You know, we call that hallucination. It's just because, you know, the math, the statistics, can have some parts that are wrong.
[00:07:50] - [Speaker 1]
So that is one part. And if you combine that with the data: if the data, for example, is compromised, if the data is wrong somewhere, then at the end of the day, we're not just talking about a chatbot that is giving you a wrong answer. We're really talking about agentic AI that has the capacity to take decisions and act. So if that agentic AI starts to do some actions which are leaking data, and imagine this agentic AI is attached to an industrial system, that could have a real impact on our daily life. So that is why I mentioned a couple of months ago that certainly somewhere, sometime, we will have a big AI accident that will have a real impact.
[00:08:44] - [Speaker 0]
And sticking with that car analogy you mentioned there, I also read that you described the industry as almost driving faster than we can actually steer. I love that line. But where are you seeing that play out most right now inside enterprises that are adopting AI at speed? You must have picked up quite a few stories out there and a few observations in the field. What are you seeing out there?
[00:09:07] - [Speaker 1]
So let's keep the analogy. I think they are in a traffic jam right now. Yeah. They don't know how much AI is used within their organizations. This is a clear statement today.
[00:09:20] - [Speaker 1]
You know? We talk about shadow AI here. Shadow AI is not just a vague concept. You know? Shadow AI is the fact that, you know, employees within organizations try to use AI every single day today.
[00:09:36] - [Speaker 1]
Okay? A year ago, I was at a conference, and I asked the audience, do you use AI monthly? Some people were raising their hands. Weekly, fewer people. Daily, fewer.
[00:09:50] - [Speaker 1]
Today, this is completely the opposite. You know? Everyone is using AI daily. Within an organization, you need to know, first of all, how many AI technologies are used by employees. And that is not a simple question or a simple answer.
[00:10:10] - [Speaker 1]
You know? And that is something they need to know. This is the visibility they need to have around those technologies used by their employees, first of all. So regarding the speed, I would say that the first thing most of the, you know, C-level people I'm meeting every week come back to me saying is, you know, we don't have that visibility. We don't know how many tools are used.
[00:10:38] - [Speaker 1]
And so we need to be able to have that part. The first, you know, use case is really shadow AI. So that is really one of the main pains they have today regarding the speed of adoption. The second part is the AI they are building. Okay?
[00:10:55] - [Speaker 1]
So they are controlling that, but they don't know what kind of data it touches, and they need to know what kind of, you know, actions those agentic AIs can take. And that is something they need to monitor, and that is the second part of, you know, the speed of usage and the speed of, you know, development of AI within organizations.
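To make the visibility point concrete: shadow AI discovery often starts from data an organization already holds, such as proxy or egress logs. The following is a minimal sketch, not a product feature; the log format, user names, and domain watchlist are all illustrative assumptions.

```python
# Minimal sketch of shadow-AI discovery from proxy/egress logs. The log lines
# and the domain watchlist are illustrative; a real program would read from
# the organization's actual proxy and a maintained list of AI services.

AI_SERVICE_DOMAINS = {
    "chat.openai.com", "api.openai.com", "claude.ai",
    "gemini.google.com", "api.mistral.ai",
}

def shadow_ai_usage(log_lines):
    """Count requests per user to known AI services: user -> {domain: hits}."""
    usage = {}
    for line in log_lines:
        # Assumed log format: "<user> <domain> <method> <path>"
        user, domain = line.split()[:2]
        if domain in AI_SERVICE_DOMAINS:
            usage.setdefault(user, {}).setdefault(domain, 0)
            usage[user][domain] += 1
    return usage

logs = [
    "alice chat.openai.com GET /",
    "alice chat.openai.com POST /chat",
    "bob claude.ai POST /api",
    "carol intranet.example.com GET /wiki",
]
print(shadow_ai_usage(logs))
# → {'alice': {'chat.openai.com': 2}, 'bob': {'claude.ai': 1}}
```

Even a crude report like this answers Bernard's first question, "how many AI tools are in use, and by whom", which is the prerequisite for any governance step that follows.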
[00:11:18] - [Speaker 0]
And I think there are many leaders out there that believe they are being responsible while they're rolling out AI. And as I said at the very beginning, any leaks or attacks are very often the result of an accident rather than anyone having malicious intent inside an organization. So where is the disconnect between perceived control, thinking they're doing things right, and actual control? And why is that gap widening?
[00:11:45] - [Speaker 1]
I think you have an expression in England, which is being between a rock and a hard place. Yes. Is that the one? Okay. So I think they are clearly there.
[00:11:55] - [Speaker 1]
You know? The business is pushing for AI. Everyone. Okay? I don't know any organization that doesn't have an AI project.
[00:12:03] - [Speaker 1]
They have to do it, run it, deploy it now. In the meantime, you know, what is the gap? The gap is that all of those projects are exposed. You know? We've seen a terrible figure: an AI project has 70% critical vulnerabilities, where a classical project has 50% critical vulnerabilities.
[00:12:29] - [Speaker 1]
80% of overprivileged identities within, for example, an AWS, you know, stack for AI projects used by organizations. And we don't have anything against AWS, because, you know, we have highlighted the same kind of issue with Google Vertex, which is the Google stack for AI projects. We have a huge amount of overprivileged identities. What does it mean? It means that they are running AI projects without taking care of reducing the exposure.
[00:13:07] - [Speaker 1]
So they have an AI exposure gap. They also have an AI governance gap. The way to take control back over those AI projects is having a governance of AI. Sitting down, understanding the policy, what those agentic AIs should do, can do, and what the red lines are. If you just define it on paper without monitoring whether those policies are applied, then you don't have AI governance.
[00:13:39] - [Speaker 1]
You are still blind, or you still have some blind spots, regarding how AI is behaving and how AI is exposed.
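The over-privilege problem Bernard describes is checkable in a small way, because IAM-style policy documents are just JSON and wildcard grants are easy to flag. A minimal sketch follows, assuming the AWS IAM policy JSON shape; the example policy itself is invented for illustration.

```python
# Flag over-privileged statements in an IAM-style policy document.
# The policy data here is hypothetical; the JSON shape follows AWS IAM.

def overprivileged_statements(policy):
    """Return Allow statements granting wildcard actions on wildcard resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        # "*" or "service:*" grants far more than an AI pipeline role needs
        too_broad = any(a == "*" or a.endswith(":*") for a in actions)
        any_resource = "*" in resources
        if too_broad and any_resource:
            flagged.append(stmt)
    return flagged

training_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::training-data/*"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},  # over-privileged
    ],
}

print(len(overprivileged_statements(training_role_policy)))  # → 1
```

A real audit would of course pull live policies and resolve group and role inheritance, but the principle is the same: least privilege for AI project identities is an ordinary, inspectable control, not a new discipline.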
[00:13:48] - [Speaker 0]
And just to bring to life what we're talking about here: when that first major AI accident inevitably happens, and I think it is when, not if, what do you think the immediate consequence or consequences will be for businesses, regulators, public trust, the blast radius, etcetera? What do you think will happen then?
[00:14:08] - [Speaker 1]
It will depend on whether it is an incident only leaking data or something, you know, having more impact. You know, we know, for example, that ten years ago fake news on, you know, social networks had a physical impact in Germany, with some, you know, real conflict between people only because of fake news. And we know that AI today can be used for fake news as well, a lot. So we must not underestimate that an AI accident could have a real impact on our real life. Okay?
[00:14:48] - [Speaker 1]
But when we talk about that incident, it will all depend on whether it is just a leak of data or something having much more impact on our daily life. We can imagine that, for example, AI agents today can, you know, have access to some OT systems. And then, you know, critical infrastructures, a water supplier, an electricity supplier, can have an accident, and that could have a bigger impact. Talking about the blast radius, it will just depend on the nature of the incident. The consequence of that should be AI governance and AI compliance.
[00:15:30] - [Speaker 1]
Okay? So again, looking back to the beginning of the conversation: the speed. The speed of AI and the time of compliance are not at the same level. You know, I made an analogy, a comparison, between the speed of AI used by attackers and the defenders, and this is why we have launched our agentic AI framework, you know, helping practitioners: it is to reduce the gap. The gap of compliance and the gap of deployment should be reduced as well.
[00:16:03] - [Speaker 1]
I know institutions and governments are working right now to put in place more policies to be able to better control AI deployment. But what organizations need to do is not wait for the compliance: have an AI governance, and close the AI governance gap.
[00:16:25] - [Speaker 0]
And I'm very fortunate. I get to go to a lot of different tech conferences around the world, and at every one I go to at the moment, there's a lot of excitement, a lot of hype around all things agentic AI. And away from the keynotes, I think there is almost a growing sense of concern about the access that agents get, and how AI is pulling focus away from maybe some of the core security fundamentals that have been in place. So how are you seeing organizations, possibly not intentionally, neglecting basics like identity, cloud configuration, and exposure management as they chase these agentic AI innovations?
[00:17:04] - [Speaker 1]
They need to step back and consider AI like any other technology. At the end of the day, this is a technology. I mentioned AWS, GCP, Azure. All of them today are proposing out-of-the-box sets of services you run in the cloud, and you can start deploying and using your own AI service and project. You put in the data, you choose a model, you start to train it, tune it, and then, you know, deploy it.
[00:17:35] - [Speaker 1]
Okay? Those are running in the cloud. 80% of them are running in the cloud. Some are running on-prem, but most of them are running in the cloud. So what we've learned over the past ten years around cloud security needs to be applied to cloud-based AI projects.
[00:17:57] - [Speaker 1]
Same. Now, when we talk about, for example, exposure management, we are talking about understanding the full attack surface from an exposure standpoint, meaning on-prem, IT, OT, cloud, identity, and AI. So we need to include AI technologies within an exposure management program, the same way we've done that for all of those technologies for the past fifteen years. By doing that, we have a proactive approach to reduce the risk proactively on identities, on misconfigurations, on vulnerabilities, the same way you just described. Those three elements need to be handled the same way for AI projects.
[00:18:47] - [Speaker 1]
When we're doing, for example, vulnerability scanning, what do we do? We discover assets, shadow AI in this case. And then we can apply, you know, best practices, reducing the risk proactively with exposure management: identify misconfigurations, and believe me there are a lot, and identify those vulnerabilities. The 70% of critical vulnerabilities I mentioned before need to be detected and fixed. Otherwise, people will have access to those projects.
[00:19:17] - [Speaker 1]
They will change the data, and then the AI will crash into an incident.
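As a rough illustration of what proactive prioritization looks like when vulnerabilities, misconfigurations, and identities are considered together rather than as separate alert queues, here is a toy scoring sketch. The assets, fields, and weights are all invented for this example and do not represent Tenable's actual scoring model.

```python
# Illustrative exposure prioritization: combine vulnerability count, identity
# privilege, reachability, and domain into one ranking. Weights are invented.

assets = [
    {"name": "ai-inference-api", "domain": "cloud", "internet_facing": True,
     "critical_vulns": 3, "overprivileged_identities": 2},
    {"name": "hr-wiki", "domain": "it", "internet_facing": False,
     "critical_vulns": 1, "overprivileged_identities": 0},
    {"name": "plc-gateway", "domain": "ot", "internet_facing": False,
     "critical_vulns": 2, "overprivileged_identities": 1},
]

def exposure_score(asset):
    # The factors compound: a reachable asset with broad identities and
    # unpatched criticals is exactly the accident scenario described above.
    score = 10 * asset["critical_vulns"] + 15 * asset["overprivileged_identities"]
    if asset["internet_facing"]:
        score *= 2
    if asset["domain"] == "ot":  # potential physical impact raises the stakes
        score += 25
    return score

for asset in sorted(assets, key=exposure_score, reverse=True):
    print(asset["name"], exposure_score(asset))
```

The point of the sketch is the shape of the approach, not the numbers: fixing the top of a combined ranking first is the "proactive" posture, as opposed to chasing every alert in arrival order.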
[00:19:24] - [Speaker 0]
And on the flip side here, if we look at things from the attacker's perspective, are we really seeing new AI-driven threats yet, or are bad actors still exploiting the same old weaknesses that organizations continue to overlook? What are you seeing there from the attacker's perspective?
[00:19:40] - [Speaker 1]
This is still the old way of attacking a system. That didn't change. Yeah. At all. You know, if you think, you know, AI will be used to do Star Wars, you know, with brand-new malware and undetectable stuff like that, this is not the truth. This is not
[00:19:58] - [Speaker 1]
true. You know, we see that a little bit, but nothing very new here. You know, the usual suspects, the very old stuff, you know, are still there. There are only three ways to compromise a system: the known vulnerabilities, which have been here for a while, the misconfigurations, and an identity with a lot of privileges that has not been fixed.
[00:20:22] - [Speaker 1]
Okay? What is new, and we've seen that, and I mentioned the Anthropic, you know, public information about the system that was jailbroken by an attacker group, is that those attackers just go quicker. They use AI today to make those attacks automated, and they are much quicker than they were before. We've seen, and we are observing that with our research team at Tenable, you know, the time between a vulnerability being published and the time that vulnerability is exploited.
[00:21:00] - [Speaker 1]
When I joined Tenable four years ago, almost five, it was roughly weeks, sometimes a month. We've seen days, but weeks was roughly the norm, you know, and sometimes even months. Now it's less than a day, and sometimes even negative, exploited before publication. So the time between when a vulnerability is published and when it is exploited is shrinking dramatically. That means attackers are using tools, AI in this case, to go quicker.
[00:21:37] - [Speaker 1]
This is it. This is the only new thing, and that is not really new. You know, they tried to do that a long time ago, even before having Gen AI. Gen AI just gave them the capacity to go even quicker, and that is exactly what we need to take care about. So we need to go quicker, and we need to use AI to beat AI.
[00:21:58] - [Speaker 0]
And as a solutions-not-problems kind of guy, I always like to try and give everybody listening some valuable takeaways on how they can improve their security. I wanted to highlight how at Tenable you focus on exposure management across the entire attack surface. So tell me a little bit more about how this approach helps organizations regain some of that control in an environment where AI is increasing complexity and risk in equal measure. Tell me a bit about your approach there.
[00:22:28] - [Speaker 1]
The time of detection and response only is over. You know? We can't just run, you know, after alerts and alerts and alerts, always late. And if you've understood, since the beginning of this podcast, about that speed of AI usage, we have a scale issue. If we want to scale, we cannot just be watching an alarm system, getting an alert, doing detection.
[00:22:59] - [Speaker 1]
If this is ringing and you have an alert, it is already too late. Okay? So our approach is to help organizations, and mainly C-level, change their mind. We have too much data. We have too much information, too many alerts.
[00:23:14] - [Speaker 1]
We can't play that game anymore. It is not efficient anymore. We have to change the mindset to have a proactive and preemptive approach. By putting all of those exposures together, again, from all of those domains that I just mentioned: classical vulnerabilities from the network.
[00:23:35] - [Speaker 1]
By the way, the network still exists. You know, cloud, IT, OT, and then AI together, we then have the capacity to have a full view of all of those assets and the links between them, and that is super important. Identity is the link between assets. You have access to some assets. I have access to some assets.
[00:24:00] - [Speaker 1]
We don't have the same level of rights, meaning that potentially you are at risk and I'm not, because I don't have enough rights. You are considered as a user based on your identity and your privileges, which could be part of an attack path. What are attackers trying to find? Attack paths. They try to find a way to compromise a system through a vulnerability or through a misconfiguration.
[00:24:26] - [Speaker 1]
From there, they use an identity, they do, you know, a lateral movement, and they try to access the data they want to compromise. That didn't change. But what we need with exposure management is to link everything together. Imagine you have a map. Imagine you have an electric circuit and you drive into it.
[00:24:50] - [Speaker 1]
What is important is not the number of, you know, transistors; it's how many of them are weak, how many of them are very strong. And if you can find the path, that is exactly what the attacker can do. But if you, as a defender with an exposure management program, and with an exposure management tool as well, can find that path first, then you close the door. Then you make your electric circuit stronger. That's exactly what we want to achieve, including AI exposure.
[00:25:25] - [Speaker 1]
Then we have a better approach to be proactive rather than being only reactive.
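Bernard's circuit analogy maps naturally onto graph search: model assets and identities as nodes, reachability (a vulnerability, a misconfiguration, an over-privileged identity) as edges, and look for a path from the internet to whatever must not be reached. Here is a minimal breadth-first sketch; the node names and edges are hypothetical, and real attack-path tools work over far richer graphs.

```python
from collections import deque

# Toy environment graph: nodes are assets/identities, edges are "can reach"
# relations. All names are invented for illustration.
edges = {
    "internet": ["web-app"],                # exposed via unpatched vulnerability
    "web-app": ["svc-account"],             # credentials readable on the host
    "svc-account": ["ai-agent", "s3-data"], # over-privileged identity
    "ai-agent": ["ot-gateway"],             # agent allowed to call OT APIs
}

def attack_path(start, target):
    """Breadth-first search: shortest chain an attacker could follow."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # target unreachable: the "door" is closed

print(attack_path("internet", "ot-gateway"))
# → ['internet', 'web-app', 'svc-account', 'ai-agent', 'ot-gateway']
```

Closing any single link in that chain, patching the web app or right-sizing the service account, makes `attack_path` return `None`, which is the defender's version of the stronger circuit Bernard describes.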
[00:25:32] - [Speaker 0]
And for any business leader listening to our conversation today feeling that very real pressure to adopt AI quickly, what practical steps could they take today to balance innovation with security before that inevitable accident becomes a reality? Any takeaways you would offer those people listening?
[00:25:54] - [Speaker 1]
You know, leaders, and for example the CISO, need to answer to the business. And sometimes even to the board. Okay? The board and the business give them the responsibility for business continuity. Okay?
[00:26:13] - [Speaker 1]
So keep in mind that they have that responsibility. They need to be able to reduce the risk. And again, with exposure management, they can do that proactively, by answering two questions: Am I exposed today? And if I am exposed, what can I do to reduce that risk?
[00:26:31] - [Speaker 1]
We try to help them be able to answer those two questions. All of the technical discussions that we had, vulnerability, misconfiguration, identity, and so on, agentic AI, are for the practitioners. We need to help them. But at the top level, they need to be connected with the business. They need to ensure business continuity, you know, and that is really their mandate.
[00:26:59] - [Speaker 1]
So, coming back to those two questions: am I exposed today? They open the news like I'm doing every morning. When I'm on the metro here in Paris, you know, I'm on my telephone, looking: hey, are there any brand-new attacks today? What are the new threats today?
[00:27:14] - [Speaker 1]
If I see that there is a specific, you know, ransomware group or attack group that could target an organization, you know, if I am, I don't know, an industrial organization, if there is an attack against my industry, immediately I should raise my risk. And immediately my top management could say, we've seen that in the news. Could we be exposed? I need to find out if I have those technologies that have been exposed. I need to find out if we have any exposure currently in place in my system that could be exploited as well.
[00:27:55] - [Speaker 1]
So C-level, top level, two elements in mind: governance, and am I exposed today?
[00:28:05] - [Speaker 0]
And for anybody listening that would like to find out more information about anything we talked about today, and also keep up to speed with some of the announcements that are gonna be coming out from you and your team later this year, where would you like me to point them? I will put a link to the recent announcement at RSA. We'll include that. But anything else you'd like me to point people listening to?
[00:28:27] - [Speaker 1]
So we have a very active website. You know, we have a place called the blog, and I really recommend you go there. You know, the agentic AI framework that we announced this week has been posted on the blog. But it is not only product announcements. You know, we have a research team which is detecting vulnerabilities, which is publishing a lot of reports around known vulnerabilities and, obviously, exposures globally.
[00:28:56] - [Speaker 1]
For example, we have a quarterly cloud and AI risk report where our research team is not only looking at vulnerabilities but also looking for any kind of cloud- and AI-related weaknesses and exposures. So we have a quarterly report which is published, again, on the tenable.com website. So if you go to the blog section, you will see a lot of research; sometimes even weekly or daily, we are publishing information regarding what we have found from a research standpoint. You know, we are providing technology, but we have a huge research team that is looking daily at any kind of exposure, from classical vulnerabilities to identity exposures to cloud and AI today.
[00:29:41] - [Speaker 1]
And OT, obviously, is part of it.
[00:29:45] - [Speaker 0]
Well, we covered a lot today, from how the first major AI incident is likely to be accidental, not malicious, to that widening gap between perceived control and actual control, and why AI can be a distraction of sorts that causes the neglect of core systems and responsibilities. So I will add links to everything that you mentioned there. I would urge people to check that out, and also make the Tenable blog a part of your morning routine to stay up to speed with some of the things that you're seeing out there. But, more than anything, thank you for shining a light on this and sharing your story. Really appreciate your time today.
[00:30:19] - [Speaker 1]
Thank you very much.
[00:30:21] - [Speaker 0]
So could the biggest AI wake-up call arrive as an accident that nobody intended, but everybody should have seen coming? That is the question lingering with me after today's conversation, because Bernard's argument is not built on fear for the sake of it. It's built on a very real observation: that organizations are so busy pushing AI into production, they are struggling in many cases to answer basic questions around visibility, identity, exposure, and governance. But one of the strongest takeaways for me here is that AI isn't replacing the old security playbook. It's just adding a little bit more pressure to it.
[00:31:02] - [Speaker 0]
So cloud security, identity management, vulnerability hygiene, and exposure reduction: all these things matter just as much as they did before. In fact, they matter even more, because AI is merely amplifying the consequences of weaknesses. Weaknesses that have always sat there waiting to be exploited. And that, for me, is what made this conversation so relevant for business leaders, not just security professionals. And maybe you're a security professional thinking, hey,
[00:31:31] - [Speaker 0]
my board needs to listen to this conversation. If so, please send it over. Because I think many companies are capable of launching AI projects right now, but far fewer can say with confidence that they understand where and how these systems are being used, what data they are touching, and what actions they are allowed to take. And it is in this gap that the risk starts to grow, often quietly, until a leak, a failure, or an operational shock forces everyone to pay attention. So maybe you can avoid that P1.
[00:32:03] - [Speaker 0]
So remember, if you wanna learn more about Tenable, I will include links in the blog post associated with this episode, and I'd love to leave you with a question to take home, a little homework. Are you building AI with enough control to earn trust, or are you waiting for an accident to teach that lesson the hard way? Let me know. Techtalksnetwork.com. You'll find everything you need over there.
[00:32:25] - [Speaker 0]
I'd love to hear from you. But it's time for me to go now, so I'll be back again tomorrow with another guest. But thank you for listening as always. Speak to you soon.

