What happens when your AI agents start making decisions faster than your security team can even see them?
In this episode, I sit down with Sunil Agrawal, Chief Information Security Officer at Glean, to unpack a shift already underway in enterprises. With predictions that 40 percent of enterprise applications will include autonomous AI agents by the end of 2026, we are moving from human-led workflows to machine-to-machine interactions at a scale most organizations are not fully prepared for.
Sunil brings a rare perspective, blending more than 25 years of cybersecurity experience with an inventor's mindset shaped by over 40 patents. What stood out to me in our conversation is how quickly the traditional security model is becoming outdated. As he explained, "autonomous agents break those assumptions because they operate across tools, varying permissions and data sources with alarming speed and autonomy." This creates what he calls the "autonomy gap," in which the CIO's drive for speed collides with the CISO's need for visibility and control.
We explore how that tension is playing out in real organizations today, and why so many are already falling behind. Nearly half of businesses still lack the AI-specific controls needed to prevent untraceable incidents, and the risks are not always what you might expect. Sunil argues that the first major rogue-agent incident is unlikely to be a malicious attack. Instead, it will come from confusion: a well-intentioned system taking the wrong action in the wrong context, with consequences that ripple across the business.
The conversation then turns practical. Sunil breaks down his AWARE framework, a structured way to introduce real-time guardrails that evaluate intent, context, and risk before an agent takes action. Rather than relying on static policies, this approach focuses on continuous runtime enforcement, where systems are constantly assessed based on behavior rather than assumptions.
What I found particularly valuable is how this moves beyond theory into something leaders can act on today. From starting with tightly scoped use cases to investing in full observability, this episode offers a clear roadmap for balancing innovation with accountability. As Sunil put it, organizations that succeed will not be the ones that move fastest, but the ones that prove trust at scale.
So how do you embrace the productivity gains of autonomous AI without opening the door to invisible risk, and are your current security models ready for a world where the "user" is no longer human?
Useful Links
Visit the Tech Talks Network
Sponsor: NordLayer Browser
[00:00:00] - [Speaker 0]
A quick thank you to NordLayer for supporting the podcast and helping me make these daily conversations possible. And if you are listening and you're responsible for security or IT, you will know the reality. The reality that most of your risk now sits inside SaaS apps and browser activity. That gap is exactly what NordLayer is addressing with its new business browser. So instead of bolting security on from the outside, it builds it directly into the browser itself.
[00:00:32] - [Speaker 0]
This means you can control access, monitor activity, enforce policies, and reduce shadow IT all from one single place. And most importantly, it does it without adding deployment headaches or complex onboarding. You get things like browser based data loss prevention, SaaS access control, and zero trust browsing, but delivered in a way that your team can actually use. So if you've been trying to simplify your stack while improving visibility, please check it out at nordlayer.com/browser. But now it's time for me to introduce you to today's guest.
[00:01:13] - [Speaker 0]
How do you secure a workforce that no longer looks human? This is a question that has been quietly creeping into boardrooms recently, especially as AI agents move from experimental tools to active participants inside enterprise systems. Because we're no longer talking about automation in the traditional sense, these agents are now making decisions, triggering actions, and interacting with other systems at a speed that most security frameworks were never actually designed to handle. So my guest today is the CISO at a company called Glean, and he sits right at the center of this shift, with over two decades in cybersecurity and more than 40 patents to his name. He brings a very rare perspective that blends builder, operator, and strategist.
[00:02:06] - [Speaker 0]
And one of the many things that stood out to me in our conversation today is just how he cuts through the hype and focuses on what is actually happening inside organizations right now. So together, we're gonna explore the reality behind the predictions that a large portion of enterprise applications will include autonomous agents, and talk about what all this means in practice inside your organization. And he will also explain how these systems are already embedded across tools like Slack, Salesforce, and ServiceNow, quietly reshaping how work gets done while also introducing entirely new risk models along the way. So we will unpack what he calls the autonomy gap, that growing tension between the CIO's drive for speed and the CISO's responsibility for control. And it's a balancing act that feels similar, especially if you remember the early days of cloud adoption, more than a few echoes there.
[00:03:06] - [Speaker 0]
But this time around, it's the stakes that feel different. When machines are making decisions independently, it's visibility that becomes harder, and mistakes, well, they can scale pretty fast. So if you are a business leader trying to balance innovation with accountability or simply someone curious about what happens when machines start acting on our behalf, today, we're gonna give you a clear and honest look at what comes next. So are we building systems that we can truly trust, or are we moving faster than our ability to understand the consequences? Well, that sounds like a cue for me to officially introduce you to my guest.
[00:03:50] - [Speaker 0]
So thank you for joining me on the podcast today. Can you tell everyone listening a little about who you are and what you do?
[00:03:58] - [Speaker 1]
Sunil Agrawal, the chief security officer here at Glean. And I also happen to manage IT, hence, you know, responsible for both aspects, you know, introducing IT tools and making sure they are secure. Been at the company for a little over three years. For folks who might not know what Glean does, we started as an enterprise search company. We used to call ourselves the Google for enterprise, a single place to search your entire enterprise corpus.
[00:04:30] - [Speaker 1]
When ChatGPT got launched, we became the ChatGPT for enterprise, a chatbot where you can ask questions and get back responses based on your enterprise corpus. These days, of course, we provide an agentic platform to our customers so that they can develop agents to make themselves 10x more productive. And, of course, we do many more things now. So that's a brief intro on me and the company. Been in the cybersecurity space a good twenty-five-plus years.
[00:05:00] - [Speaker 1]
Started as, you know, a security engineer, a builder of security products. That's how I spent the first fifteen, seventeen years. And the last seven, eight years, in addition to being a builder, I'm also a consumer of security products as a CISO.
[00:05:15] - [Speaker 0]
Well, thank you so much for joining me today. And I'm very glad that you are joining me as well because I've been reading some pretty big stats out there that I'm hoping you can help me and the listeners make sense of. And the first is Gartner is predicting that 40% of enterprise apps will include autonomous AI agents by 2026. And guess what? We're here right in the heart of 2026.
[00:05:40] - [Speaker 0]
I've got to ask, what does this stat actually look like inside a real organization? Why does it fundamentally change how security teams need to think about risk, maybe even identity as well? But what are you seeing here? What does it mean to business leaders listening?
[00:05:55] - [Speaker 1]
Yeah. So first of all, you know, the pace at which the innovation is happening, it almost seems like 40% is a little too low a number. I mean, you can't use an enterprise app today without some generative AI or agentic functionality built in. Now that could be you're using your Slack, your Salesforce, your ticketing system, your Zendesk or ServiceNow. You're using a customer support system like Intercom.
[00:06:24] - [Speaker 1]
Whatever system you are using, I mean, everything has generative AI and agentic functionality built in somewhere. So I would venture that stat actually looks bigger. Now, having said that, each of these systems is actually, you know, a little bit siloed. I mean, they do provide agentic functionality, which is kind of woven into the day-to-day operation. You know, agents that triage incidents, agents that draft customer responses, or if it's a financial system you're using, agents that reconcile transactions.
[00:07:02] - [Speaker 1]
So they're everywhere. So now the key shift is, you know, pretty similar to how I talked about Glean, where we started with a system that was search based, information retrieval based. Now you are seeing autonomous agents that are not just retrieving data. They're analyzing. They're reasoning, and they're acting upon the data, often without any human approval at each step.
[00:07:32] - [Speaker 1]
So now that changes the risk model completely. Traditional security was built for humans, you know, for structured systems, very predictable data flows. Now autonomous agents break those assumptions because they operate across tools, varying permissions, and data sources with alarming speed and autonomy. So the question for the security team is now moving from, does this user have access? To, is this behavior appropriate right now in this context, and should it be allowed to continue to operate? So that's the fundamental shift that we are seeing.
[00:08:16] - [Speaker 0]
And before you joined me on the podcast today, I was doing a little research on you, and I was reading how you've spoken previously about the autonomy gap between speed and control. So can you tell me a bit more about what that gap looks like in practice for CIOs and CISOs, and why it's becoming harder to manage as machine-to-machine interactions continue to increase this year?
[00:08:41] - [Speaker 1]
Yeah. No. Absolutely. So maybe for the audience, the autonomy gap is the widening distance between, you know, how fast the organizations want to deploy agents, take advantage of all the productivity gain, and how slowly the governance, the controls, the visibility are catching up. So for the CIOs, the pressure is business velocity.
[00:09:04] - [Speaker 1]
You know, as I mentioned, I manage IT. So there's always a pressure that how do I deliver AI value? Allow my employees to automate work and the company to stay competitive. But then wearing my CISO hat, the pressure is also proving that the same systems, you know, they have the required visibility, accountability, and are bounded by my corporate policies. So that's the autonomy gap that I talk about.
[00:09:32] - [Speaker 1]
Now with machine-to-machine interaction, it's getting harder, you know, because now agents are starting to interact not just with people, but with apps, with APIs, with tools, with different data sources, and also other agents. So that creates far more decision paths for the agents and far less time for any manual intervention. So the operational reality is that enterprise AI is touching dozens of systems, accessing sensitive data across all the departments, and is running with varying degrees of permissions. So if you don't have identity and access management, configuration, and runtime oversight, the autonomy gap that I talk about gets filled by shadow AI, you know, which leads to blind spots, brittle controls, and the gap just keeps on widening.
[00:10:31] - [Speaker 0]
And we mentioned shadow AI there, which is a huge topic. And I think there's also often a tension between innovation and oversight. So how can organizations realistically balance a CIO's push for rapid AI adoption with a CISO's need for visibility, accountability, and auditability? A massive balancing act there, isn't there?
[00:10:53] - [Speaker 1]
Absolutely. And I would say, I mean, we've gone through this journey before. I mean, we saw this when we adopted cloud, when we moved from on-prem to cloud, when we exchanged our capital expenses for operational expenses. You know, we again rushed into that moment. We moved our workload from on-prem to cloud, and the governance, the security, you know, trailed behind.
[00:11:21] - [Speaker 1]
And we saw what happened over the last fifteen, twenty years. There were many, many security breaches. It's not because cloud was inherently less secure. Just that, you know, people either did not pay attention to adopting the right security tooling, or the security tooling had to mature. Now, in order to make sure that we don't make the same mistake when adopting gen AI, I would say the answer is make governance, security, and privacy part of the entire AI deployment model from day one.
[00:11:56] - [Speaker 1]
So security becomes the guardrails so that you can go faster. So in practice, what that means is a few basics. You know, make sure you govern who can deploy agents within your enterprise. What data can they touch? What actions can they take?
[00:12:19] - [Speaker 1]
What requires human approval? You know? Any strong program that I have seen where our customers have had huge success is one where they've started gradually. They started agents with narrow scope, you know, very tight oversight. And then they slowly expanded autonomy across each of the axes I talked about: who can build agents?
[00:12:43] - [Speaker 1]
What data can they touch? You know, slowly, as you get more confidence, you open it up to the rest of the employee base. You open it up to more data. What actions can they take? You slowly allow it to take small actions like, hey,
[00:12:58] - [Speaker 1]
create a Jira ticket, post in Slack, all the way to, hey, I can restart a database, and so on. So the key is, you know, gradually adopt and roll it out. And the last thing, do not forget, is auditability. Have complete observability and auditability of what the agents are doing.
[00:13:21] - [Speaker 1]
That's very important for two purposes. First of all, in case something goes wrong, you should be able to reconstruct what an agent did across the entire system, across the timeline. At the same time, you can also keep on evaluating whether the agent is staying within the guardrails. If it's not, then you can go back and fix the agent, or you can go back and fix your guardrails, as the situation might be.
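To make that auditability point concrete, here is a minimal sketch of the kind of structured, replayable record that lets you reconstruct what an agent did across systems and across a timeline. The field names and the helper function are illustrative assumptions, not a Glean API or any specific product's schema.

```python
import json
import time
import uuid

def record_agent_action(agent_id, tool, action, resource, outcome, sink=print):
    """Emit one structured audit event for a single agent action.

    Every field here is illustrative; the point is that each action carries
    enough context (who acted, with what tool, on what, with what result)
    to rebuild a timeline and check the agent against its guardrails later.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,   # a distinct identity for the agent, not a human user
        "tool": tool,           # e.g. "jira" or "slack"
        "action": action,       # e.g. "create_ticket" or "post_message"
        "resource": resource,   # what the action touched
        "outcome": outcome,     # "allowed", "blocked", or "failed"
    }
    sink(json.dumps(event))     # in practice this would ship to your log pipeline or SIEM
    return event

# Usage: record a low-risk action so it can be traced and evaluated afterwards.
record_agent_action("hr-helper-01", "jira", "create_ticket", "HR-1234", "allowed")
```

With events like these in place, "is the agent staying within the guardrails" becomes a query over the audit trail rather than guesswork.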
[00:13:48] - [Speaker 0]
And again, before you came on, I was having a look at the Glean LinkedIn page, and I was reading a great story there about how you're helping businesses deploy AI agents securely with the AWARE framework. And the AWARE framework, my understanding is it introduces guardrails for autonomous agents. But can you break down what it does in simple terms? Explain how it actively blocks things like, I don't know, prompt injection or malicious inputs in real time?
[00:14:16] - [Speaker 0]
Because there's like a big opportunity here.
[00:14:21] - [Speaker 1]
No. Absolutely. So, again, in simple terms, AWARE is a five-part check for any agent. Who is acting? What context are they in?
[00:14:34] - [Speaker 1]
Are they staying within the scope? How risky is the action right now? And can we trace what happened afterwards? So in A-W-A-R-E, the A stands for the actor intent. The W is for the work context.
[00:14:54] - [Speaker 1]
The second A is for autonomous guardrails. R is the real-time risk scoring and blocking. And E is the ecosystem observability. So that's what the AWARE framework is. It's a very simple framework, you know, and you'll be able to understand how it applies to any agentic platform, any agent that you build, very easily.
[00:15:18] - [Speaker 1]
And in practice, the way this becomes real is through controls such as, hey, what are the topics my agent is allowed to operate on? So you can say, hey, this is an HR agent. You will only operate on HR topics. This is a sales agent. You'll only operate on sales topics.
[00:15:39] - [Speaker 1]
And then making sure that before this agent takes any action, you have checks. Hey, is this action aligned with the intent of the agent? And then at runtime, you know, inspect all the inputs and actions against the agent's intent before you proceed. So now, you talked about prompt injection and malicious inputs.
[00:16:04] - [Speaker 1]
Yeah. Now the point is we've got to block all of them as they occur. But the key thing that the AWARE framework contributes is that you don't do detection of prompt injection or malicious input in isolation. You actually bring in the work context, the context in which the agent is trying to do certain things.
[00:16:29] - [Speaker 1]
The persona of the user who's executing. You bring in all that context. Now that allows you to detect things like prompt injections or malicious inputs a lot more effectively, because you have all the data points to make that decision, versus operating in isolation. So that's, in a nutshell, how AWARE is helping you, you know, tackle some of the thorny problems that gen AI brings in, such as prompt injection.
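To ground that in something runnable, here is a hedged sketch of the kind of pre-action check being described: before the agent proceeds, the request is evaluated against the agent's declared topic scope and intent, with the user's persona and a crude injection heuristic folded into a risk score. This is an illustration of the idea only; the topic lists, scoring, and threshold are assumptions, not Glean's implementation.

```python
def allow_agent_action(agent, request, risk_threshold=0.7):
    """Decide at runtime whether an agent may proceed with a requested action.

    agent: dict with "allowed_topics", "intent", and "trusted_roles"
    request: dict with "topic", "action", "user_role", and "input_text"
    Returns (allowed, reason). All structures and scores are illustrative.
    """
    # 1. Topic scoping: an HR agent only operates on HR topics.
    if request["topic"] not in agent["allowed_topics"]:
        return False, "out-of-scope topic"

    # 2. Intent alignment: is this action consistent with what the agent is for?
    if request["action"] not in agent["intent"]:
        return False, "action not aligned with agent intent"

    # 3. Context-aware risk: a naive injection heuristic, weighted by who is asking.
    #    (A real system would use far richer signals than a keyword check.)
    suspicious = any(phrase in request["input_text"].lower()
                     for phrase in ("ignore previous instructions", "disable guardrails"))
    risk = 0.9 if suspicious else 0.1
    if request["user_role"] not in agent.get("trusted_roles", []):
        risk += 0.2  # an unknown persona raises, never lowers, the bar

    return risk < risk_threshold, f"risk score {risk:.1f}"

hr_agent = {
    "allowed_topics": {"hr"},
    "intent": {"answer_policy_question", "create_ticket"},
    "trusted_roles": ["employee", "hr_admin"],
}
print(allow_agent_action(hr_agent, {
    "topic": "hr", "action": "create_ticket", "user_role": "employee",
    "input_text": "Please open a ticket about my leave balance",
}))  # (True, 'risk score 0.1')
```

The design point is the one made above: the injection check is not evaluated in isolation; it sits alongside scope, intent, and persona, so the system has all the data points before it lets the action proceed.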
[00:16:59] - [Speaker 0]
And one of the reasons I wanted to ask you that question is that I didn't wanna leave anybody behind. And also, many organizations are still very early in their AI journey. So it's so important to broadcast a message like this, let people know there is a solution. But why is that? I think there's another stat here: 47% of businesses currently lack AI-specific security controls.
[00:17:23] - [Speaker 0]
So why do you think that is, and what are the risks of continuing to deploy AI without a foundation like this in place?
[00:17:31] - [Speaker 1]
Yeah. Absolutely. As I was talking about the cloud adoption. Correct? Yeah.
[00:17:35] - [Speaker 1]
As I said, hey. Let's not make the same mistake. Unfortunately, we are. Yeah. So the adoption is moving way faster than governance.
[00:17:46] - [Speaker 1]
You know, so AI went from being experimental to being a strategic imperative for every company very quickly. Many teams are still trying to retrofit controls after the fact. A lot of organizations, you know, focus too narrowly on, hey. Let me secure the model. Let me secure the weights and the biases.
[00:18:09] - [Speaker 1]
Let me secure the data lineage, what data went into the model. However, you know, within a real enterprise, what we see is that the challenges are not coming from model security alone. They're coming more from the operational environment. What data is being fed into the model at runtime?
[00:18:32] - [Speaker 1]
What permissions were set on the data? What tools can my agentic system now invoke? What workflows am I enabling? And what are the actions that that agent can take? So this is all around the model.
[00:18:50] - [Speaker 1]
Correct? Not so much model security itself. So I would say a lot of enterprises either did not adopt governance fast enough, or when they adopted it, they took a very, very narrow approach. And the risk of proceeding without understanding all the attack vectors and without enforcing all the right guardrails is that incidents become untraceable and hard to contain, especially when shadow AI is already spreading across the organization. So if you don't have visibility into what tools, agents, or models the employees are using, you cannot reliably trace where sensitive data is going, what actions have been taken, or where the risk is accumulating.
[00:19:39] - [Speaker 1]
Without that foundation, it's almost the worst of both worlds. Either the adoption stalls because nobody trusts the system, or people move ahead anyway and create shadow AI, unchecked agents, and continuous data leakage. So the risks are pretty high either way.
[00:20:01] - [Speaker 0]
So much of what you said there will resonate with leaders listening around the world. And I also read that you suggested that the first major rogue agent incident is more likely to come from system confusion than deliberate sabotage. So on that side of things, what kind of scenario should leaders be thinking about here? And how can they prepare to ensure that they are ahead of the game if any of these scenarios come to light?
[00:20:28] - [Speaker 1]
No. Absolutely. And yes, thanks for bringing that up, Neil. So the thing that I often talk about is that the early incidents are going to happen because your well-intentioned agent is, like, potentially going to be fed the wrong context, the wrong scope, you know, and is going to trigger some wrong action, which might end up deleting data or exposing sensitive content.
[00:20:59] - [Speaker 1]
Again, these are going to be well-intentioned agents, but they're just going to potentially get things wrong. A concrete example of this would be, you know, you wrote an agent, and the purpose of that agent was to share a note with the organization-wide Slack channel. But it got certain things wrong, and instead it sent it out as an email. Or, you know, the intention was for the agent to just read data from a database, but it ended up updating it. And a lot of these things are, again, not going to be sabotage.
[00:21:35] - [Speaker 1]
It's just that the agent got confused as to what action it should take under a given scenario. So what the leaders should do is not just be thinking about, you know, a nation-state actor going after the agent; they should also be thinking about the confusion between a model failure or a bad prompt or misconfiguration and real attack activity. They should be able to distinguish between the two. And without the right visibility, without the guardrails, the team may not know which situation they're dealing with until the damage is already done. So the preparation starts with the basics. Real-time observability.
[00:22:23] - [Speaker 1]
I can't stress that enough when it comes to agents. Least-privilege agent identities. Approval or a pause before taking high-risk actions. And then having drills that simulate rogue agent containment. So none of the concepts that I'm talking about are new.
[00:22:45] - [Speaker 1]
We've just got to apply them to agents with a high degree of urgency.
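As one way to picture those basics, here is a small sketch of a least-privilege agent identity combined with a pause for human approval before high-risk actions. The class, the action names, and the approval callback are all hypothetical, named here only for illustration.

```python
HIGH_RISK_ACTIONS = {"delete_record", "restart_database", "send_external_email"}

class AgentIdentity:
    """A distinct, revocable identity for one agent, holding only the permissions it needs."""

    def __init__(self, agent_id, permissions):
        self.agent_id = agent_id
        self.permissions = set(permissions)  # least privilege: nothing beyond the use case
        self.revoked = False                 # a clear path to shut the agent off

    def can(self, action):
        return not self.revoked and action in self.permissions

def execute(agent, action, require_approval):
    """Run an action only if this agent is permitted, pausing for a human on high-risk ones."""
    if not agent.can(action):
        return "blocked: outside this agent's permissions"
    if action in HIGH_RISK_ACTIONS and not require_approval(agent.agent_id, action):
        return "paused: waiting for human approval"
    return f"executed {action}"

# Usage: the triage agent can open tickets on its own, but a database restart waits for a human.
triage_bot = AgentIdentity("triage-bot", {"create_ticket", "restart_database"})
print(execute(triage_bot, "create_ticket", require_approval=lambda agent_id, act: False))
print(execute(triage_bot, "restart_database", require_approval=lambda agent_id, act: False))
```

Setting `revoked = True` on the identity immediately blocks every action, which is the clear revocation path Sunil returns to a little later in the conversation.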
[00:22:50] - [Speaker 0]
And you're someone with quite a unique vantage point on everything we're talking about here. So from your unique perspective as both a CISO and an inventor with over 40 patents, how should security evolve when the user is no longer guaranteed to be human, but could be an autonomous agent making decisions independently alongside humans? What should leaders be thinking there?
[00:23:16] - [Speaker 1]
Yeah. Absolutely. So I would say the biggest shift is that the agent identity is now becoming the new perimeter. Every autonomous entity needs a distinct identity. You know?
[00:23:30] - [Speaker 1]
It needs to have its own scope, its own set of permissions, and a very clear path for how we would revoke that identity in case things go wrong. That means, you know, extending all the familiar security principles, the principle of least privilege, zero trust, continuous verification, auditability, to the agent identity, just the way we did it for people. We also need to move away from static policies to actual runtime enforcement. Why? Because these agents are built, first of all, on a nondeterministic LLM.
[00:24:13] - [Speaker 1]
With these LLMs, you are fetching data in real time. You're processing the data in real time. Based on the data that you fetch, you decide what the next set of actions to take is. And because of all of this, any kind of static policy is guaranteed to not work. You need real runtime enforcement.
[00:24:36] - [Speaker 1]
Your guardrails have to inspect the prompts, the plan that the LLM is coming up with, the tools that it's deciding to invoke, and look at the outcomes continuously to make sure that the agents don't drift, they're not chaining actions in a certain order to cause data leakage, and they're not making arbitrary decisions on their own. So this is not just a control problem, but a design problem. So the strongest organizations will treat agent security as an operational discipline. You know, just the way we treated our SaaS applications.
[00:25:20] - [Speaker 1]
That was not, you know, you do it once, you roll it out, you're done. No. You need that operational discipline. You need to have metrics. You need to have telemetry, and you need to plan that in case things go wrong, you can contain them.
[00:25:34] - [Speaker 1]
So you need to have those playbooks, and you got to practice those playbooks.
[00:25:39] - [Speaker 0]
And for any business leader listening to our conversation today who wants to move quickly with AI, but also very clearly wants to avoid any unintended consequences, and may be a little bit cautious on that: are there any practical steps that you would leave them with, that they can take away today to build trust, maintain control, and still capture the upside of autonomous systems and all the good things that come with them?
[00:26:04] - [Speaker 1]
Yeah. I mean, again, very similar to the AWARE framework, I would say there's a five-step process I generally think about for how to safely adopt AI within an organization. So first, start with a narrow, you know, high-value use case. Explicitly define what the agent is allowed to do, what data it can access, what tools it can call, and where human approval is required.
[00:26:34] - [Speaker 1]
So that's step one. Step two, make sure that the basics are in place. You know, all the data that you provide access to, make sure you have the proper permissions set on that data. Make sure that the data you're providing the agent access to doesn't contain any sensitive data. If you happen to be in healthcare, for example, make sure you don't provide anything that is PHI to your agent.
[00:27:04] - [Speaker 1]
Make sure you have runtime attack protection. We talked about prompt injection. Make sure you have systems in place to guard against it. And then, before any risky actions are taken, make sure you are checking that those actions are aligned with the intent of the agent. So that's step number two.
[00:27:25] - [Speaker 1]
Step number three, invest in visibility, visibility, visibility. If you cannot see what agents are doing, explain why they did it, and reconstruct what happened later, then that program is bound to fail. Fourth, treat this as very cross-functional from the start. The best programs that I've seen really work are the ones that bring security, IT, data, legal, and the business under a common governance model. Right?
[00:27:59] - [Speaker 1]
The business will determine the use case. But then you can't leave the security or the legal team out. You've got to make sure that they are evaluating it, making sure that the agent is compliant and the agent is secure. You've got to bring your data team in to make sure that the data this agent will have access to is the right set of data. So you've got to bring all the teams together.
[00:28:25] - [Speaker 1]
That's step number four. And finally, step number five, expand gradually. The companies that win won't be the ones that automate the fastest. They'll be the ones that prove trust as they scale. So that would be my five point checklist as to how to adopt AI safely and securely.
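For step one of that checklist, the decisions can be written down explicitly before any agent ships. The shape below is a hypothetical policy, not a Glean or vendor schema; it simply captures what the agent may do, what data it may touch, which tools it may call, and where a human must approve.

```python
# A hypothetical, explicit policy for one narrow, high-value agent.
# Every field name is an assumption used for illustration, not a product schema.
AGENT_POLICY = {
    "name": "it-helpdesk-triage",
    "allowed_actions": ["read_ticket", "classify_ticket", "post_slack_update"],
    "data_access": {
        "sources": ["it_knowledge_base", "ticketing_system"],
        "exclude": ["phi", "payroll"],  # e.g. keep PHI out entirely if you are in healthcare
    },
    "tools": ["servicenow", "slack"],
    "human_approval_required_for": ["close_ticket", "grant_access"],
    "observability": {"log_every_action": True, "retention_days": 365},
}
```

Expanding gradually, step five, then becomes a matter of widening this policy deliberately rather than granting broad access up front.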
[00:28:50] - [Speaker 0]
Wow. So much gold in there, and you've given so much for people to take away and action. And I would encourage anyone listening who's serious about this to get in touch with you and find out more information about all things Glean. But is there anywhere in particular that you would like me to point everyone listening?
[00:29:10] - [Speaker 1]
Absolutely. I can be reached on LinkedIn, of course. Or if you want to learn a little bit more about Glean, or everything Glean can do to make sure your company adopts AI in a safe, secure manner, you can, of course, go to our website and reach out to us, ask us for a demo.
[00:29:30] - [Speaker 0]
Awesome. Well, I will add links to everything you mentioned. Again, I urge people listening to go to techtalksnetwork.com. There'll be a blog post over there, there'll be lots of links, and I'll try and add a video from one of the Glean social channels as well, just to help bring everything we've talked about to life.
[00:29:47] - [Speaker 0]
And I encourage everyone listening to feed back to me as well as yourself, and let me know what you're struggling with, what's working, what's not. That's how we can all learn from each other. But, Sunil, thank you so much for taking the time to sit down with me today and bring all this to life in a language everyone can understand. It really is priceless what you've offered here. So thank you.
[00:30:09] - [Speaker 1]
No. Thanks for having me here, and good day to the rest of the audience.
[00:30:15] - [Speaker 0]
So if AI agents are already making decisions inside your business, how confident are you that you can see, explain, and control exactly what they're doing? I think that is the question that kept coming back to me after the conversation today. And Sunil made clear that we're entering a phase where identity, behavior, and intent all matter far more than traditional perimeter-based security. So when the user could be an autonomous agent, every action needs context, every decision needs oversight, and every system needs to be observable in real time. And I think there was a subtle warning repeated throughout this episode, because many organizations are still treating AI security as an extension of model security, focusing on data sets and training processes.
[00:31:11] - [Speaker 0]
But the real risk is emerging in the operational layer, where agents interact with live systems, access sensitive data, and take action without constant human input. And at the same time, I think there's a real opportunity here. The organizations that get this right won't just be the fastest to deploy AI; they'll be the ones that can build trust as they scale. And that means starting small, setting clear boundaries, investing in visibility, and bringing security, IT, and business teams together right from the beginning. And this doesn't have to be about slowing down innovation.
[00:31:51] - [Speaker 0]
Let's bust that myth straight away. It's just about making sure innovation does not outpace our understanding. But as always, I love to hear your thoughts. How are you approaching AI adoption in your organization? And do you feel confident that your security strategy is keeping up with the pace of change?
[00:32:11] - [Speaker 0]
Well, let me know. Pop over to techtalksnetwork.com. Love to hear your take on this. But that's it for today. I'll return again tomorrow with another guest and a few more questions for you all.
[00:32:22] - [Speaker 0]
But that's it for now, so speak with you then. Bye for now.

