Inside EY's 2026 Tech Pulse Poll: The Hidden Risks of AI Adoption
Tech Talks Daily · May 04, 2026
27:43 · 25.37 MB

Inside EY's 2026 Tech Pulse Poll: The Hidden Risks of AI Adoption

What happens when the race to deploy AI starts to outpace the ability to control it?

In this episode of Tech Talks Daily, I sit down with Ken Englund from EY to unpack findings from the latest 2026 Technology Pulse Poll, and the conversation quickly moves beyond theory into something many leaders will recognize from their own organizations. There is a growing tension between speed and oversight, a "velocity paradox," as Ken describes it, in which businesses are accelerating AI adoption while governance struggles to keep up.

The numbers behind that story are hard to ignore. A large majority of tech leaders are prioritizing speed to market over careful vetting, while more than half of AI initiatives are happening outside formal IT oversight. For anyone responsible for security, compliance, or risk, that gap raises immediate concerns. But as Ken explains, it is not as simple as labeling this as reckless behavior. Much of this activity is driven by real innovation happening closer to the business, where teams are experimenting, solving problems, and creating value quickly.

We spend time breaking down what that looks like in practice. From the rise of shadow AI tools to the growing risk of sensitive data exposure, there is already evidence that the consequences are beginning to show. At the same time, nearly every executive surveyed sees autonomous AI as central to future competitiveness, which means slowing down is not really an option either.

One of the most useful parts of the conversation focuses on what organizations can actually do about it. Ken shares practical insight into why architecture matters more than ambition, how companies should think about optionality in a fast-moving AI ecosystem, and why observability is becoming a missing layer in many deployments. We also get into the reality of measuring AI value, where the conversation is shifting from promised returns to the often-overlooked cost side, including token usage and uncontrolled spending across departments.

There is also a broader discussion around leadership and culture. Governance frameworks may exist on paper, but the real challenge lies in operationalizing them across a business that is already moving at speed. Add in geopolitical pressures, evolving regulations, and the complexity of deploying AI globally, and it becomes clear why many organizations feel overwhelmed.

This episode is not about slowing innovation down. It is about understanding where things are breaking, what leaders are getting wrong, and how to build a path forward that balances progress with accountability.

So, as AI budgets continue to rise and autonomous systems become part of everyday operations, how will your organization close the gap between ambition and control, and are you already further along that path than you realize?

Useful Links

Please check out the partners of the Tech Talks Network

[00:00:00] I'd like to thank Denodo for supporting the Tech Talks network and helping us bring so many different stories to life. Because every business needs data that its teams can actually trust. So if you need data your teams can trust, Denodo can help your organisation deliver curated, governed and easy to use data products for analysts, business users and AI applications alike. And you can learn more by simply visiting denodo.com.

[00:00:28] AI adoption is moving at a pace that many organisations can barely contain. And it's also creating one of the biggest tensions in businesses right now. Leaders know they need to move fast, but speed without oversight can leave companies exposed in ways that are incredibly easy to miss. Until that is, something breaks.

[00:00:54] Now EY's 2026 Technology Pulse Poll puts numbers behind that reality. It found that 85% of tech executives prioritise speed to market over exhaustive AI vetting, while 52% say department-level AI initiatives are operating without formal approval or oversight. The result is what EY calls a velocity paradox.

[00:01:21] One where ambition is accelerating faster than governance can keep up. And that gap is already producing very real consequences. They found that 45% of tech executives reported a confirmed or suspected sensitive data leak tied to unauthorised generative AI use in the last 12 months. And 39% confirmed or suspected proprietary IP leaks for the same reason.

[00:01:49] And all this comes at the moment where 95% said AI spending will increase over the next year, with cybersecurity, cloud, AI talent and infrastructure all attracting more budget. In other words, companies are pushing harder on AI even as many admit they don't yet have a firm grip on the risks. So today's conversation I think matters because I want to go beyond the AI hype.

[00:02:16] This is about how organisations scale AI without losing control of cost, architecture, governance or trust. And for this conversation I am delighted to be welcoming back Ken Englund. He's EY Americas' technology sector growth leader and he's going to be helping me unpack what this looks like on the ground. Why so much AI activity is happening outside formal IT oversight right now. Yep, you're not on your own.

[00:02:46] And what business and technology leaders need to do now if they want AI investment to produce real value rather than expensive chaos. If you are running a business right now, you may have noticed there's a quiet shift happening. One that most people are still underestimating. And that is your company doesn't live inside your network anymore. It lives inside the browser. That's where your SaaS apps sit. That's where your data moves.

[00:03:16] And increasingly, that's where attackers are focusing their attention. So NordLayer has just launched its new business browser. And it's designed specifically for small and medium sized companies that need visibility and control without the overhead of enterprise security tools. What I like here is the balance.

[00:03:38] You get advanced protection, better compliance and full visibility into how your team is working online, but without slowing anyone down or forcing them to learn anything new. Feels like a practical step forward rather than another security layer that adds friction. So if you want to see more about how it works, please head over to NordLayer.com slash browser and check it out. And let me know your thoughts. But now on with today's show.

[00:04:08] So a massive warm welcome back to the show. It's almost a year since we last spoke. But for anyone that missed that conversation, can you tell everyone listening a little about who you are and what you do? Yeah. Well, first, thanks, Neil, for making a little bit of time. I appreciate it. So Ken Englund, I'm a partner in Ernst & Young. I focus on and am responsible for our high tech growth segment. So if you think about scale-up companies, AI native companies and all that sort of activity. So that's really where I spend all my time. And I'm a lifelong consultant.

[00:04:37] I wouldn't know anything else. Well, it's a pleasure to have you back on. Last year, we were talking about the Technology Pulse Poll. And here we are 12 months later. We've got a new report here. So it points to what you call a velocity paradox. Just saying that out loud, I feel like I want to do a Doc Brown impression and say, "Great Scott!" But I mean, essentially, it describes organizations accelerating AI adoption faster than they can govern it.

[00:05:05] So what does that tension look like inside a business on a day-to-day basis? I think it will resonate with a lot of people, too. Yeah. I think the way I think about it just very broadly is that sort of adoption is outpacing governance, right, on a very big balance. I think there's a lot of, you know, FOMO in terms of executives, you know, wanting to drive AI as fast as possible.

[00:05:29] But, you know, we're starting to see some cracks in the armor relative to the implications of maybe not always getting it right from that perspective. And, you know, you kind of alluded to it. What we've said in our survey, and just to refresh everybody, you know, we conduct a survey three or four times a year, 500 tech executives. It's about broader IT. But for the last couple of years, 99% of it is all about AI, as you would imagine.

[00:05:52] So this go-around, really, 85% of these respondents have said adoption and speed outpaces sort of governance, right? And that's what we really say is the paradox, right? Move as fast as you possibly can, but, you know, sort of making sure there aren't unintended consequences. And I think in this last survey, we're starting to see people having concerns about the consequences, which is really probably the biggest takeaway, Neil. Yeah. I've been to a lot of tech conferences.

[00:06:19] And one of the big things I'm seeing at the moment, as everyone is, is this huge push towards agentic AI and hundreds, if not thousands, of agents going out there doing things. I saw, I think it was a news report from Target recently who said, if your AI agent buys anything from us, that's your responsibility. We're not going to be giving any refunds here. So it shows there's a bit of a problem on the horizon there. And I think 85% say they prioritize speed over exhaustive AI vetting.

[00:06:49] Is this a calculated risk or are organizations just underestimating the consequences of moving this quickly and almost like little children playing with dangerous toys without realizing the risks? Yeah, Neil, I think it's a little bit of both. I think certainly the pace to not be last, to be first, to not miss out on the opportunity, I think is really critical. And then, you know, there haven't been a tremendous amount of horrific, you know, headlines in the paper yet.

[00:07:17] They're slowly, you know, leaking out from that perspective. But, I mean, part of our survey, I think a really interesting piece was about 45% of the respondents basically said they have had confirmed or they believe they've had an exfiltration of data or inappropriate access of the AI agents and that sort of stuff. So, you know, I describe it as kind of the hot stove moment for a kid, right? Like, don't touch the stove, you go touch the stove, lesson learned, right?

[00:07:45] So I think a little bit is that. I think the other piece for me that really struck kind of a chord from about a year ago when we talked was that almost 100%, 97% of the respondents, said they believe sort of autonomous agents are or will be essential to their future business success, right?

[00:08:04] So almost all of them are saying, to your point, whether you want to talk about agentic or autonomous agents, this shift to moving ultimately somewhere down the road to no humans in the loop, which I think is a little further out than maybe some people would say, is a real item that I think is driving the discussion. And one of the most striking findings from the report is over half of AI initiatives are now happening outside of formal IT oversight.

[00:08:32] And as an ex-IT guy, this is the stuff that keeps me awake at night. I mean, how did we get to a point where so much AI activity is occurring inside companies without IT being involved? And what are the immediate risks regarding data leakage, IP exposure? We've kind of seen it in the past with the shadow IT and SaaS products, et cetera. But this is a whole other level, isn't it? Yeah, Neil. And I think shadow IT is sort of somewhat analogous, but I actually think it's slightly different.

[00:09:01] For me, at least, if you think about shadow IT, it was sort of all these appendages built out around sort of the corporate infrastructure to move faster, you know, break barriers, more value to the business directly. And I think that still occurs. I think if you think about sort of all of this stuff that's occurring outside of a very centralized function, first, I would describe it as that's really much more where organic innovation and experimentation is occurring in the business, right?

[00:09:29] So a little bit different in that chicken and the egg, I would say that innovation was the first thing that happened. And sort of companies have put sort of a governance framework around it, right? So I look at it as both. There's certainly risks associated with it, but I also think of it as a bit of a healthy connection between getting the lowest center of gravity where innovation and business values are going to occur, right? So that's the positive side.

[00:09:53] The downside clearly is a lot of risk around exfiltration of data, inappropriate access, you know, agents, you know, doing things that they're not allowed to do and that sort of stuff, which I think is a really critical item. What I'd say, Neil, as I said, I spend all my time in high tech. And if you go back a couple of years ago, we used to talk to clients about, I'll just call it really broadly, risk AI. Maybe historically people referred to it as responsible AI and that sort of stuff. And a couple of years ago, most clients were like, we got it.

[00:10:23] We don't need any help. What we're seeing in the last six months is really that discussion coming back to the forefront that says maybe we don't have it all. Maybe we do really need to think about how we govern this and we manage this and put at least sort of a minimum viable set of road curbs on the whole discussion as we go through it. A hundred percent with you. And I think it does indicate that the ambition to deploy AI is huge.

[00:10:48] It's massive, but scaling it securely across the enterprise, that remains one of the biggest challenges. And from an architectural or structural standpoint, are there any foundational elements that organizations are struggling with right now? You've got the luxury of speaking to so many different clients across so many different industries. Anything, any trends that they're struggling with now? Yeah, I just have a few things sort of anecdotally in what we're seeing overall.

[00:11:14] But, you know, Neil, what I'd first say is these companies we had talked to a year or two ago, their biggest concern was hallucinations, right? Like what is, you know, how are these models going to perform? You know, how close to deterministic results can we get? And if you think about that, that's a pretty silent discussion right now. So things that were super important 18 months ago, it's not that they have no importance, but really the priority has certainly shifted from that regard. So just I use that as a bit of a frame of reference.

[00:11:39] I think what we're seeing and sort of what we're advising companies to do, the most critical thing that companies can do aside from protecting sort of their environment is really create a set of optionality as we go forward. Because what's really clear at this point, the winners today in terms of tools and models will continue to evolve and change. And the premium for some optionality and how companies architect these activities are really important.

[00:12:07] So our point is absolutely multi-model, multi-vendor will ultimately be multi-agent orchestration as we go forward. So how do you build those layers? We're personally spending a lot of time talking to clients about really an observability layer in this whole discussion. So really trying to drive some level of transparency and operation of data and the results and that sort of stuff. That's kind of number one.

[00:12:30] Number two, what we've said is, you know, access, agent access to systems, is a major critical item. No different than if you think about an employee from that perspective. So, you know, access management will be really critical. And thirdly, what we've said is, you know, agents aren't going to be very useful without any data. So your overall data strategy, data architecture, access will be ultimately really critical.

[00:12:55] And then what we also say is, look, it's great to think about fully autonomous agents and that sort of stuff. But, you know, point blank, anything that's critical in your enterprise right now, we expect a human in the loop for the foreseeable future. And I think as an ex-IT guy, it was always drilled into me that you can only improve what you measure. And, of course, here we are now. Many leaders talk about ROI, but many have failed.

[00:13:22] Few seem to measure it consistently in their AI projects. We've heard a lot of stories of things not getting out of pilot phase, et cetera. So I'm curious, what does a practical framework for measuring AI value actually look like? Because there's a big scrutiny on any tech project now, whether it's AI or not, on that ROI. But what's that measurement framework look like? Yeah, so I think, Neil, a few things. I mean, there's a ton of frameworks out there. We generally advise our clients.

[00:13:51] It's less about the nuts and bolts of a particular ROI framework. What we have said is, make sure you consistently approach that in terms of how you fund these discussions, right? So it's less about the nuts-and-bolts detail. Two things we say overall: you know, make sure you've got sort of the return piece figured out. But actually, you need to give a lot more thought to the cost side of the equation.

[00:14:14] So what we're seeing, and you may see all this hype in the market around token maxing and all this sort of stuff around agent usage. And what we said is, like, you know, a huge function of the actual return is what the cost level is. And just because we see a day-over-day, week-over-week, month-over-month decline in token pricing, that doesn't mean the net cost of tokens and what you're using them for isn't critical.

[00:14:39] So ironically, you know, people think about the return piece of the equation as being the most critical item. We think that's a relatively straightforward discussion to figure out. Make sure you keep an eye on the cost equation, you know, because that's actually where we're focusing the discussion. You know, we have clients now who, across the broad enterprise, not just for developers, are giving token budgets to every employee to go try stuff, right?

[00:15:04] Which, you know, this line of sight on how tokens are being spent, we think is a good perspective on how to look at this discussion. And then what I'd also say, Neil, overall, like, if I had a dollar for every time I heard this from a client, I'd probably already be retired. Clients are drowning in use cases. I mean, it is not uncommon for a midsize enterprise to have 1,000, 2,000, 3,000 identified use cases. So how do you sort and prioritize those?

[00:15:33] Oftentimes, it's around zero-based budgeting. So, you know, is a finance use case more important than a sales use case, more important than a dev use case? So to have kind of a standard framework to go through all this stuff is really critical. So I don't think there's kind of a silver bullet to solve the ROI discussion. But the important part is those ROI discussions are starting to occur, right? A little more discipline is coming up as we talk about scaling up these experiments. Right. Yeah.

[00:16:01] And we're also seeing AI budgets continuing to rise. So on that side of things, how can CIOs and business leaders listening regain better control, rein in some of that decentralized spending, and build governance that doesn't slow innovation to a halt? That's one of the reasons that maybe many have avoided IT when they're going forward with AI projects, et cetera. But it is a tricky balance sometimes, isn't it, between governance and innovation? Yeah.

[00:16:29] Well, I think a couple of things, Neil, I'd highlight first, just to your comment around increased spending on AI. I mean, that sounds like a no-brainer discussion. But what I will say is, if we look longitudinally at the survey we've done over the last three years, you know, you go back two or three years ago, increased AI spending was 90%. Last year, 92%. This year, 95%. There isn't much further they can go before they reach 100% of respondents. But the compounding of the amount of money being spent on AI is quite incredible, right?

[00:16:57] Especially when you think about sort of the deflationary curve of what AI compute is costing, right? Like just the consumption of all this stuff is really incredible. We really think about two things. I like to think about sort of no-regret actions an enterprise should take. And the first is separating the budgets associated with AI around risk, privacy, all of these responsible AI frameworks.

[00:17:24] Like you've kind of got to carve those off, fund them, and make sure, you know, you have an ounce of prevention for a pound of cure, right? So then you really get into a discussion of how do you sort of orchestrate and manage budgets related to deployment of product and function and that sort of stuff. So, you know, first for us is visibility, really capturing that. As I mentioned earlier, I think getting a good handle on token spend is a piece.

[00:17:47] It's a little bit reactive in the discussion, but understanding sort of how tokens are being spent and consumed across an organization by department or by project is really an important first point, right? I think companies are going to look back over the next 12 to 18 months and go, I had no visibility or idea of just how much we were spending in this space, right? So it's really kind of a new category from that perspective.

[00:18:11] Not that we haven't managed compute spend before, but you think about how pervasive this sort of compute is, and who can access it in an enterprise is getting much more diffuse, right? So we think that's a huge piece of what's going on. So, measuring what you want to manage is step one, to your point, right? Yeah, 100%. And you're someone that lives and breathes the EY Technology Pulse Poll. You've seen it all several times a year.

[00:18:37] I'm curious, when you were looking through and gathering all the stats, et cetera, was there anything that surprised you in this one? I think a couple of things surprised me overall. First and foremost, and hindsight is 20/20 now, if you looked at the major spend in the last Pulse Poll, the top line item in there was roughly 80% of respondents reporting increased budgets around cybersecurity, right? And at the time, I mean, certainly that's good. That's like eat your vegetables.

[00:19:06] Good thing to do health-wise. But if you look now, just over the last few weeks, and given some of the new model information that's coming out, cybersecurity is going to be top, top of the list, right, from that perspective. So at one point, it surprised me, but now I have a lot more clarity on that whole discussion. And then I think, for me, actually, just the fact that that much innovation is occurring outside of a governance framework still organically in these companies is a piece. And, you know, the awareness is being raised.

[00:19:35] And my point is, a lot of companies have already put together some sort of framework, but what's going to separate the successful is those people who actually operationalize and take advantage and execute on those frameworks, right? Like we all put this stuff up. We got one, check the box. But how you actually operationalize it is going to be really critical, right, for that perspective. So I think that came through pretty clearly in my mind as well. And then lastly, I would just say talent continues to be a huge line item of investment.

[00:20:04] I mean, it seems pretty intuitive, but, you know, it continues to be a big piece. And a lot of that is separated from, you know, acquiring talent. Maybe the lifecycle of training people is, you know, starting to ebb a little bit. You know, how are people going to get new ideas into their company? So I do think you'll see more of the talent sort of rotate between enterprises. And that may be a nice way to kind of harmonize the knowledge base across all those groups.

[00:20:30] And if we dare look ahead a few months or even years into the future, as more of these unsanctioned tools evolve into autonomous agents, et cetera, any risks you see if organizations don't start addressing that gap? Now, there's a lot to be doing at the moment, isn't there? But it's knowing what to do and where to start that feels incredibly overwhelming. Yeah, no, I think, look, everybody, if you don't feel overwhelmed, you're probably not moving fast enough, right? Like, I think it's a pretty natural thing at this point.

[00:21:00] You know, so two things, I think, around tools and vendors and all that sort of stuff. I think it is critically important to manage that upfront, early and often. So, you know, before you let some of this tooling into your enterprise, that's your best place to manage this, versus playing what I always describe as sort of whack-a-mole around all these AI tools in place. I think, ultimately, a couple things are super important to think about.

[00:21:25] First of all, I always say sort of the AI application ecosystem will start where the ambition of the frontier models ends. And I think we're seeing that, right? And I only bring that up as I don't know where that boundary is, personally. But I think we're going to continue to see the big players continue to eat away at the smaller players, obviously, over time for a variety of reasons.

[00:21:48] So, you know, back to my point about having optionality in your architectures and making sure you can plug and play, whether it's agent-to-agent orchestration, you know, intra-platform agents, compute, AI models, is really going to be important. But I think you've got to work really hard to not let in tooling until you're quite sure of what you want to do. I mean, I've had dozens of meetings with CIOs who said, I'm just trying to now rationalize all the tooling that's come in.

[00:22:17] And this could be in a mid-sized enterprise. Literally 30 to 40 AI tools have popped up in the last 24 months, and they're working through that. So my advice always is, you know, lock the front door before, you know, you have an unannounced visitor. That, from that perspective, is a big piece of the equation. And then lastly, I would just say, again, it's all going to shake out. I suspect in most of these major categories, there's two or three key winners in those spaces, you know, whether it's around customer care or legal or chat.

[00:22:46] So there's going to be a lot of vendor attrition over this space. So making sure you have some fungibility as you go through it will be a big item, right? Those are my words of wisdom just given, you know, how do you kind of manage that? I love it. And one thing I'd also like to bring up before I let you go is at the moment, there's a lot of geopolitical volatility and sovereign AI mandates. How are you seeing these things complicating AI growth plans?

[00:23:15] Anything in the Pulse report around this? Because, again, big talking point right now. Yeah, I think overall broadly, and I think it was more implicit in the responses, it was really a discussion about how these companies are going to globalize AI. So I think a lot of that discussion comes into sovereign AI discussions.

[00:23:33] I think if you think about just – well, I think about if you deploy a copilot or some sort of broad agent across a global enterprise, just the amount of nuances in terms of legal, privacy, all these sort of things are really coming up heavily. I mean, we feel very bullish on the broad expansion around sovereign AI as compute models overall driving AI demand across the globe.

[00:24:00] I think as an individual company, there are a lot of nuances to how things are handled. Are you going to record meetings? Can you transcribe meetings? Can you not in particular countries? And, you know, you really have got to get to country-level detail to figure out how some of that stuff will occur. Well, the EY Technology Pulse Poll, full of so many big stats there and great insights. And as you said, it gets refreshed several times a year.

[00:24:25] So for anyone listening wanting to keep up to speed with things, check out some of the things we talked about, connect with you or your team. Where would you like me to point everyone listening? Yeah, Neil, just direct them to ey.com and they can look up Tech Media Telco. TMT is our industry segment. They can find a lot of useful insights there. Awesome. Well, I'll have links to that as well as a direct link to the report. I urge everyone listening to check that out. And please feedback to either of us there.

[00:24:53] Let us know how this reflects what you're seeing out there and what you're doing and how you're navigating some of these challenges. But more than anything, thank you for starting this conversation once again. Now I know the report comes out not once a year but several times a year. We need to get you back on sooner. So I look forward to speaking to you again very soon. Sounds good. Thanks, Neil. Have a good one.

[00:25:15] What stands out from this conversation for me is that the real challenge with AI in 2026 is no longer whether an organisation believes in it. They clearly do. The harder question is whether they can build the architecture, measurement and governance that are needed to support that belief at scale. And when more than half of AI initiatives are taking shape outside any formal oversight, the risk is not simply wasted budget. It's huge technical debt.

[00:25:45] There's also a valuable leadership lesson in all of this. Innovation and control.

[00:26:26] A more manageable risk. But as always, huge thanks to Ken Englund there from EY for returning to the show, sharing such a clear-eyed view of what is happening inside enterprise AI right now. And just for being an all-round great guy. Always a pleasure to chat with him. And I do look forward to getting him on much more frequently. And for everyone else, here's a question I'd love to leave you with.

[00:26:52] Is your organisation genuinely scaling AI with purpose, with measurement and control? Or are you simply moving fast and hoping governance can catch up later, focused only on going forward? Whatever it is, let me know your experiences. Good, bad, indifferent. I want to hear them all. TechTalksNetwork.com. You'll find 4,000 interviews there, and lots of different ways of meeting me as I attend tech conferences around the world. So check out the events page.

[00:27:21] So lots to see and do there. But that is it for today. I've taken up far too much of your time already. So I'll return again tomorrow with another guest. But thank you for sticking with me today. Speak to you again tomorrow. Bye for now.