What actually separates AI that delivers real value from AI that never makes it past the demo stage?
Recording live from Qlik Connect, I sat down with Ryan Welsh, Field CTO of Generative AI at Qlik, to get a grounded, practitioner-led view of what it really takes to make AI work inside a business. While the industry has spent the past few years racing to experiment, build, and deploy new capabilities, many organizations are still struggling to turn that progress into capabilities people use every day.
In our conversation, Ryan cuts through the noise and explains why so many AI initiatives fail. Not because the models aren't powerful enough, but because they're not designed to fit into real workflows. He shares why context is far more than just a buzzword and how getting the right data, in the right place, at the right time, enables AI to deliver meaningful outcomes.

We also explore the growing shift toward agentic AI and the responsibilities that come with it. From designing systems that can act autonomously while remaining under control to understanding where humans need to stay involved, Ryan offers a practical view of how organizations can move forward without introducing unnecessary risk. There's also a refreshing honesty around where we are right now. After a wave of investment and expectation, many companies struggled to see immediate value from AI. But as Ryan explains, that period is changing, with more organizations finding ways to scale what works and move beyond isolated use cases.
So, as businesses look ahead, what does it really take to move from experimentation to execution? And are we focusing too much on building more AI rather than the right AI for how our organizations actually operate?
Join me for a candid conversation from the heart of Qlik Connect, and let me know your thoughts. Are you seeing AI deliver real outcomes in your business, or is it still stuck in the demo phase?
Useful Links
Connect with Ryan Welsh on LinkedIn
Visit the May Sponsors of Tech Talks Network and learn more about the NordLayer Browser.
[00:00:00] - [Speaker 0]
This podcast only exists because of partners like NordLayer. I'm incredibly grateful for the support that helps me deliver these conversations every single day. And one of the things I've learned in those many conversations is that hybrid work has changed everything about how teams operate. People are now working from different locations on different devices and often outside traditional networks. But the one constant here, no matter where people are working, is the browser.
[00:00:28] - [Speaker 0]
This is where collaboration happens. That's where the data moves, and increasingly, that's where the risk lives. Now NordLayer's business browser is designed for this new reality. It allows teams to work securely from anywhere, even on personal devices, all while giving organizations better control over access, data movement, and browser activity. And it even supports modern ways of working without forcing old security models onto them.
[00:00:59] - [Speaker 0]
And it is that balance between flexibility and control where a lot of companies are still struggling. So if anything I mentioned there resonates with you, if it sounds familiar, I encourage you to take a closer look at nordlayer.com/browser. But now, on with today's show. Welcome back to the Tech Talks Daily podcast, where I'm recording this episode live from the show floor at Qlik Connect. The excitement here is around AI and agentic AI.
[00:01:34] - [Speaker 0]
But it's also mixed with something else, a growing realization that building AI is one thing, but making it work inside a business is something very different. Over the last couple of years, many organizations have invested heavily in AI. They've built pilots, tested use cases, and explored what the technology can do. But now we're looking at turning that into real value, something people can actually use every day and want to use. So where have companies gone wrong, and what does it really take to move from impressive demos to systems that deliver very real outcomes?
[00:02:11] - [Speaker 0]
Well, to explore that, I'm joined by the field CTO of generative AI at Qlik, and he brings a practitioner's perspective into this conversation. He's spent years building and commercializing AI technologies and working directly with organizations trying to make this work at scale. So today, we're gonna get into the realities behind enterprise AI, understand why so many initiatives have failed to deliver value over the last few years, why context is far more than just another buzzword, and why embedding AI into existing workflows is the difference between adoption and rejection. And, yeah, we'll also talk about the risks and responsibilities that come with agentic AI, the importance of getting the data foundation right, and why more AI isn't always the answer. But enough from me.
[00:03:05] - [Speaker 0]
It's time for me to beam your ears all the way to Kissimmee, Florida, where you can join me and Ryan here on the show floor at Qlik Connect. Massive warm welcome to the show here at Qlik Connect. For everyone listening, can you tell them a little about who you are and what you do?
[00:03:22] - [Speaker 1]
Yes. Hi. I'm Ryan Welsh. I'm the field CTO for analytics and AI for the Americas here at Qlik. I joined Qlik two years ago now, January 2024.
[00:03:32] - [Speaker 1]
I was the founder and CEO of Kyndi, which was acquired by Qlik.
[00:03:35] - [Speaker 0]
Awesome. And there's so much excitement around AI right now, but also somewhat of a growing frustration that many initiatives aren't delivering real value. So from your perspective, what separates AI that looks impressive in a demo or a keynote from AI that actually works in production and delivers value?
[00:03:52] - [Speaker 1]
Yeah. AI systems that deliver value to enterprises are connected to the context of the business, which is key, and that's a massive term that we can unpack in a little bit. But also, importantly, they're embedded deeply into the workflows of the people that are actually using them. I think this is where a lot of folks get it wrong: oh, we have this awesome AI thing over here, we just need you to completely forget the workflow you've been using for the last twenty years, and you'll be 10x more productive. No one's gonna adopt that product.
[00:04:22] - [Speaker 1]
Right? And so, you know, the systems need to have the context of the business, but they also need to be embedded deeply into the workflows of how people work today.
[00:04:31] - [Speaker 0]
Do you think sometimes businesses underestimate that cultural aspect? The what's-in-it-for-me, the getting it over the line? Because we concentrate so much on the technology, but very often, you can put that technology in front of everyone, and if it's not adopted and people don't wanna use it, it's not gonna go anywhere. Right?
[00:04:45] - [Speaker 1]
Yeah. Yeah. I mean, I think it's the Steve Jobs line where you start from the user and you work back to the technology. Right? And it's like, what is the problem that you're actually solving?
[00:04:53] - [Speaker 1]
What is the thing that you're actually doing for this user? Then you go back to, hey, do I even need a large language model to solve this, or could I do it with just some advanced statistics? And so, yeah, as technologists, we often think about the technology first versus the user, but you gotta consistently remind yourself: start with the user, and work back to the technology. Don't try to shoehorn a technology into everyone's workday.
[00:05:20] - [Speaker 1]
It's just not gonna work.
[00:05:21] - [Speaker 0]
Yeah. And you're someone that's worked across the most advanced technologies, from quantum to AI. But when you look at enterprise AI today, where are organizations still maybe wasting time or budget without realizing it? I suspect you see a lot of this out in the field.
[00:05:35] - [Speaker 1]
Yeah. There's actually been a big shift just in the last six months. A lot of things have evolved quickly in the AI landscape for enterprises. 2025 was, I think, defined by that MIT report about 95% of companies not realizing any value from artificial intelligence. And I think, you know, we could debate the methodology of that.
[00:06:01] - [Speaker 1]
There was a pretty small sample, the methodology wasn't great, all that stuff. But honestly, directionally, it was kind of right. At least from my conversations in the years leading up to it, it felt like a lot of folks were doing a lot, making a lot of investments, hurried investments given the pace of the innovation that was going on. And it does feel like we hit a bottom in 2025.
[00:06:24] - [Speaker 1]
But what's come out of 2025, and is now accelerating here in 2026, is that people have realized value. They've realized it in pockets. And now they're going, oh my gosh, I need to scale this across the organization. I need to move it from a 10-person group who's now doing the work of 100 to a thousand-person group who can now do the work of 10,000. And that's really where we're at.
[00:06:50] - [Speaker 1]
It does feel like 2025 was that bottom where everyone went from, AI is struggling, we're not gonna realize the value, to, no, we're actually super close, let's get to scale. And that all, I think, changed when people started to understand what it actually took to get AI right. And that goes back to combining a large language model with your contextual enterprise data, because an LLM has no idea how your business operates. Or I hope it doesn't, or else your data was leaked somewhere and got into the large language model's training dataset. But, you know, it's when enterprises started to think, I need to combine the LLM with my enterprise data inside the workflows.
[00:07:35] - [Speaker 1]
That's when they started to get stuff right, and it's starting to take off here in '26.
[00:07:39] - [Speaker 0]
And one of the consistent themes here at Qlik Connect is this very real bottleneck, and it isn't the model. It's the data and the systems underneath it. So where do you see that breaking down most often in real-world environments?
[00:07:52] - [Speaker 1]
All over the place. I mean, humans haven't been getting their data right for longer than I've been alive. Right? But it does accelerate. You gotta have clean data, you gotta have timely data, you gotta have governed data. You need all of that to get the language model to understand the context of your business, and also to make business decisions and understand how it's gonna drive insights from that data.
[00:08:18] - [Speaker 1]
And so, I mean, honestly, the bottlenecks are everywhere. Do I have the right curated dataset? Where is my data? Quite literally, where is it geographically? It's all the things.
[00:08:29] - [Speaker 1]
Is it clean? Is it representative of the actual task and the actual workflow that we're trying to improve here? So it's all those things that go into getting your data right for enterprise AI solutions.
[00:08:44] - [Speaker 0]
And I think if there's one word mentioned at tech events this year, more than AI, more than agentic, it's context.
[00:08:52] - [Speaker 1]
Yeah.
[00:08:52] - [Speaker 0]
It's one of those words that gets used everywhere, but seldom explained. So what does context actually mean in an enterprise AI environment, and why does it matter so much?
[00:09:01] - [Speaker 1]
Yeah. Well, it links back to the context window, and this is the size of the data that you can put into a language model to have it then understand and actually operate on. I'm sure everyone's familiar with typing a prompt, which goes into the context window, or uploading data, a PDF file, a Word document, or even just the previous chat. How these chat systems work is, as you continue to type and type and type, it's continuing to load that information into the LLM such that it says, oh, I know what we're talking about. Otherwise, it would have no idea what you're talking about, because you're just asking this question without all the previous questions before it.
[00:09:39] - [Speaker 1]
And so that idea of context very much gets its name from the context window, from putting information into the language model. And when we think about it in that way, it's: how do I bring enterprise data into the language model? An example being retrieval-augmented generation, which is a common pattern for enabling natural language question answering over unstructured text data. Now, if I have 50,000 documents, I literally can't load those 50,000 documents into a language model. But I could do semantic search, or vector search, over those documents.
[00:10:14] - [Speaker 1]
I could find the relevant snippets. I can reduce the 50,000 documents to maybe 50 lines of text that are all relevant to my question. With that context, I can then load it into the language model along with my question, and the system can say, alright, Ryan, I see that you're asking about Qlik's vacation policy. Here are all the snippets from our Workday files about our vacation policies, they're loaded, and now I can answer your question. And so, when we talk about context, it's very much enterprise data that now needs to be curated, cleaned, and governed in a way that users or agents can access it, and then applied into a language model to have the language model act on that data.
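Ryan's 50,000-document example can be sketched in a few lines. This is a toy illustration of the retrieval-augmented generation pattern he describes, not Qlik's implementation: real systems use learned embeddings and a vector database, while this sketch uses bag-of-words cosine similarity so it runs stand-alone. The documents and the vacation-policy question are made up.

```python
# Toy RAG sketch: retrieve only the snippets relevant to the question,
# then build the prompt that would go into the LLM's context window.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Very crude 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-ins for the enterprise knowledge base (made-up policy text).
documents = [
    "Employees accrue vacation at 1.5 days per month of service.",
    "Expense reports must be filed within 30 days of travel.",
    "Vacation requests need manager approval two weeks in advance.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the question; keep the top k."""
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

question = "How many vacation days do I accrue?"
snippets = retrieve(question, documents)

# The retrieved snippets plus the user's question become the context
# that is loaded into the language model.
prompt = "Context:\n" + "\n".join(f"- {s}" for s in snippets) \
         + f"\n\nQuestion: {question}"
print(prompt)
```

The key point mirrors the conversation: the model never sees all three documents, only the retrieved, relevant context alongside the question.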
[00:11:00] - [Speaker 0]
And agentic AI is also being positioned as the next step forward, but it raises some concerns around control and risk. I don't know if you saw this, but Target recently mentioned that, hey, if your agent goes and buys anything from Target, that's on you. They've done a press release around that. So, I mean, how do you design agentic workflows that can act autonomously while still remaining governed and predictable?
[00:11:24] - [Speaker 1]
Yeah. Well, that could be a whole podcast in and of itself, but there are a few ways you wanna attack this. One is actually having the right data that the agent can act on. Right? If policy has changed between last week and this week, and the agent goes and accesses a knowledge source that has last week's policy in it, well, then what?
[00:11:46] - [Speaker 1]
It has last week's policy in it. It doesn't have this week's policy, and this week's policy is different from last week's, so the outcome could change. So you wanna have your data up to date and clean, such that the agent has the most up-to-date, cleanest data to actually operate on. And then there's a bunch of things you wanna think about, and this is all use case specific: what are the constraints that you want to put on this agentic system? So, say you're doing something that has high-impact output, where this agentic workflow could lead to the purchase of a million dollars' worth of goods.
[00:12:25] - [Speaker 1]
Alright. Let's put a constraint on it. The agent can't buy a million dollars' worth of goods. The agent can only buy $100 worth of goods. Anything over $100 has to go to a person to sign off on and ultimately execute. And so you're gonna think through all these different, I'll call them guardrails, to keep the agent on track. But it does start with the data, and then goes down to, alright, what do we want these autonomous systems to be able to do on their own?
[00:12:53] - [Speaker 1]
What are we gonna require humans to review? And just kinda think through that on a use case by use case basis.
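The spend threshold Ryan describes can be sketched as a simple routing rule. This is a hypothetical illustration, not a real Qlik or agent-framework API; the function names and the $100 limit are assumptions taken straight from the example in the conversation.

```python
# Hypothetical guardrail: the agent may execute small purchases on its
# own, but anything over a configured threshold is routed to a human
# for sign-off, per the use-case-specific constraint Ryan describes.
from dataclasses import dataclass

AUTO_APPROVE_LIMIT = 100.00  # dollars; above this, a person signs off

@dataclass
class PurchaseRequest:
    item: str
    amount: float

def route_purchase(req: PurchaseRequest) -> str:
    """Return 'auto-approved' or 'needs-human-review' per the guardrail."""
    if req.amount <= AUTO_APPROVE_LIMIT:
        return "auto-approved"
    return "needs-human-review"

print(route_purchase(PurchaseRequest("printer paper", 42.50)))
print(route_purchase(PurchaseRequest("server hardware", 1_000_000.00)))
```

In a real deployment the threshold, the escalation path, and the audit trail would all be set per use case, which is exactly the use-case-by-use-case thinking the conversation recommends.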
[00:12:59] - [Speaker 0]
And there's also a strong message emerging that more AI isn't the answer, but better AI is. So for organizations and business leaders listening today, how should they be thinking about prioritizing the right use cases rather than just building more agents, more models? Because in some areas, we're hearing, hey, we just need more agents. We need more AI. But it's not the answer, is it?
[00:13:19] - [Speaker 1]
I think there's something really interesting going on, because the cost of generating content, and the cost of generating code, and the cost of generating stuff has basically gone to zero. Right? And so if you're using coding agents, yes, everyone can code, and everyone can create really bad code. Congratulations. Or everyone can create blogs, everyone can create marketing material, and everyone can create really bad blogs and really bad marketing material. And so I'm actually starting to see a bit of a pushback on more is better, and a leaning towards, actually, less is better.
[00:13:53] - [Speaker 1]
Yeah. Less, but really high quality, continues to beat more for more's sake. And I think that's been one of those changes that has happened in, again, the last few months. So tying it back to coding agents as an example, we see a lot of people in Silicon Valley now saying, I'm not gonna spit out a ton of code. The goal is to create as little code as possible that is perfect for this very specific thing.
[00:14:24] - [Speaker 1]
Right? And that's a different mindset shift, and I think a lot of the world is starting to go in that direction as a reaction to more is more, and more is better. And I think that same thing is happening with language models too, because as these models get bigger and bigger, yes, they become more capable, but they also become more expensive to run. So could you have a specialized language model that's good at insurance claims processing, or analytics, or anything like that, where you now have lower compute costs, which lead to higher gross margins, obviously, for the vendors and other folks ultimately selling those solutions on? That's doing more with less. I think that's been a change that's happened in the last few months.
[00:15:09] - [Speaker 0]
And you're also someone that's been involved in commercializing complex technologies before. This isn't your first rodeo. You've probably seen a few patterns here, but are we ever at risk of repeating some of the same mistakes with AI that we've seen with previous technologies?
[00:15:24] - [Speaker 1]
We always repeat the same mistakes, and I think it's fun repeating the same mistakes. And look, I've been building AI technologies now for over a decade. I was very lucky to be on the ground floor when deep learning was being applied to images, to think about how we could actually apply this to language, then actually go out and start a company and apply it to language, be at the forefront of that, and even contribute to the academic literature and push that forward. And even I get caught up with the technology, and I go, oh, it's gonna do all this cool stuff, and then I make the same mistake, and I go, Ryan, you learned that mistake eight years ago. That's right.
[00:16:03] - [Speaker 1]
You need to remind yourself. And so I hope, with my experience, I course correct a little bit faster than everyone else. But everyone's gonna make those mistakes, and I think it's just, you know, something you have to do regardless of the technology, whether it's quantum computing or cloud technologies or AI technologies. It doesn't matter. It is what it is.
[00:16:28] - [Speaker 0]
Yeah. And at Qlik Connect this week, we've seen some strong customer proof points from many big names, household names, from cost savings to operational efficiency. But what stands out to you as a genuinely meaningful example of AI delivering real business impact? You must hear many different stories, but does anything stand out to you?
[00:16:46] - [Speaker 1]
Yeah. I mean, well, it's still early days for AI technologies. There was a great stat I saw the other day. People always ask me, Ryan, what's the most applicable use case for AI? Is it driving cost savings? Is it driving productivity gains?
[00:17:06] - [Speaker 1]
Or is it driving, you know, revenue expansion for companies? And of course, you always wanna drive revenue expansion for a company. But of the companies that are actually getting value from AI, only about 20% of them were driving real, meaningful revenue expansion, while 66% of them were seeing real, meaningful productivity gains and cost savings. So of the use cases that folks are exploring and actually deploying with AI today, there's a ton in production right now driving really meaningful productivity gains and cost savings for organizations. And obviously that drives higher profit margins, which can then have a bunch of follow-on effects for society.
[00:17:56] - [Speaker 0]
And to give people a valuable takeaway here, we are seeing a shift from dashboards to systems that actually act. So what's the biggest cultural barrier that you see inside organizations when AI does start making or influencing decisions? Because we're all control freaks in some way, and it's nerve-wracking letting stuff go, isn't it?
[00:18:15] - [Speaker 1]
Yeah. And honestly, rightfully so, in interacting with these AI systems. I mean, again, I've been building these systems for a decade, and every time I start to have a fully autonomous system, I catch it doing something silly. And I go, wait a minute, Ryan. I gotta put some guardrails on this thing.
[00:18:33] - [Speaker 1]
You can't just have it run for forty-eight hours straight and expect some outcome. I think it's important to educate people on the limitations of these technologies, but then also let them play. I have friends who were not in artificial intelligence, but I introduced them to AI. They said, hey, how do I start? And I would just introduce them to, say, how to run Claude Code in the command-line interface. And these are people that have never written code in their life, and now they're building applications.
[00:19:05] - [Speaker 1]
Right? And they're building really terrible applications, but applications nonetheless, and they actually work and people can use them. Or they're doing data analysis, or doing anything, really, with these AI technologies. So, you know, educate your people on how these systems work, but honestly, get everyone a license to these technologies, whether it's through the Qlik platform or directly from a large language model vendor, and let people iterate on these things and learn from them. And I think you'll be surprised how often people can surface use cases themselves, because they're the ones on the ground floor, and they'll learn how these technologies work.
[00:19:44] - [Speaker 1]
And then once they surface the use cases themselves, I think you have less change management to do, because they're the ones that came up with the use case. Right? And they're the ones driving productivity for themselves. So it's a bit of a bottom-up, grassroots use case definition within the enterprise versus more top-down, which can run into those change management challenges.
[00:20:06] - [Speaker 0]
And another word on the tech conference bingo card is human in the loop. If agentic workflows are gonna take on more responsibility, where do you think humans should stay firmly in the loop, and where can they safely step back? I know it's a bit of a balancing act.
[00:20:22] - [Speaker 1]
It is. It is. And, you know, I gave a presentation, it had to be seven, eight years ago now, at a conference in London, where I had a simple two-by-two framework, and it was really: what is the impact of this workflow? Again, is it buying a million dollars in goods, or is it impacting a human? Well, then you need to have steps where, one, the system can't make the decision.
[00:20:47] - [Speaker 1]
It's actually a human, only a human, that can make that decision. But then you also need to surface the ability for the human to actually understand how the system got there. Right? And so I think you need to build systems that have explainability. You need to put guardrails on them such that they're not making decisions that can have a serious impact on the company or a customer.
[00:21:12] - [Speaker 1]
And if you do that, then you can build these systems where, even if it's a high-impact use case, say it's again buying a million dollars' worth of goods, the system makes its recommendation and provides all the logic, all the documentation, and all the reasoning for why it's gonna make that decision. Then the human can actually review it very quickly and go, yep, that's about right. Yep, that's perfect. The biggest challenge is when these systems make that recommendation and they're just a complete black box, and the user has no idea why it's making the recommendation, so the user can't actually review it. And so you wanna find those spots where you get the highest leverage: the user is able to make meaningful decisions from the data to drive meaningful productivity gains for the organization with minimum discovery into how the system ultimately got there.
[00:22:01] - [Speaker 1]
And if you can find that sweet spot, then that leverage can be incredible.
[00:22:05] - [Speaker 0]
And finally, if I pull out a virtual crystal ball here and we look ahead, what do you think will define success for organizations over the next twelve to twenty four months? And what will companies that get this right be doing differently from everyone else, do you think?
[00:22:19] - [Speaker 1]
Two things. One, they're gonna understand that it is all about the data. I say that as someone that has built AI models and continues to enjoy pushing the limits of algorithmic advancement. That's always gonna be awesome. But if we just stopped development today, if there was no new language model ever released and we never came up with any new technologies, we'd still have like ten to twenty years of use cases to deploy with today's technologies. And so that's gonna come down to getting your data right, even for today's technologies, because you have ten years' worth of productivity gains and deployments that you can actually figure out.
[00:23:05] - [Speaker 1]
But then also, two, embedding these things deeply into the workflows of users. I use the example, again, of these coding agents. There have been a few of those companies that have been the fastest ever to $100 million or a billion dollars in recurring revenue. And I think the reason why they got there is because they live in the workflow of the developer. Right?
[00:23:27] - [Speaker 1]
They live in VS Code. They live in the command-line interface. You're not telling the developer, oh, go over here now and be 100x more productive. You're just telling them, do the same thing you do every day, and you'll be 100x more productive. Imagine that.
[00:23:44] - [Speaker 1]
And so, you know, if you can get your data right, and then you think about embedding these capabilities into the workflows of the user, then I think the companies that do that right are just gonna continue to accelerate and have this kind of virtuous cycle: as you become more and more productive, you just continue to accelerate away from any competitive pressures you might have, because you're the one nailing it, and they're not.
[00:24:11] - [Speaker 0]
Well, for everyone listening, I'll add links to Qlik Connect, to the Qlik website, and indeed your LinkedIn, in case anyone would like to carry on the conversation. But more than anything, thank you for stopping by and starring on today's show. Thank you.
[00:24:22] - [Speaker 1]
Yeah. Thank you so much.
[00:24:23] - [Speaker 0]
What I love about Qlik Connect this year is there's no hype. There's no suggestion that AI is a silver bullet that will magically transform a business overnight. Instead, there's a strong reminder that success comes down to getting the basics right. As Ryan explained today, the organizations that see real value are not the ones building the most AI. They're the ones building the right AI: systems that are connected to the context of the business and embedded into the workflows that their people already use.
[00:24:53] - [Speaker 0]
And, yeah, last year we did see many companies struggling to see value, but we're starting to see a shift now. Organizations are finding those pockets of success and are now looking to scale them. But that scaling only works if the foundations are in place. And this brings us back to the core theme running through everything at Qlik Connect this week: data, context, and trust. They're no longer nice-to-haves.
[00:25:20] - [Speaker 0]
They are the difference between AI that works and AI that fails. But I wanna hear from you. Are you building AI that fits into how your business actually operates, or are you asking your business to adapt to the technology? Because I think we've heard today that that decision will determine whether AI will deliver real value or remain stuck as just another promising idea. As always, techtalksnetwork.com.
[00:25:47] - [Speaker 0]
You can leave me an audio message there. You can meet me on the road. I'm at a lot of events over the next few months, and there's also 4,000 interviews there too. But that's it for today. So thank you for listening as always, and I'll speak with you all again tomorrow.
[00:26:02] - [Speaker 0]
Bye for now.

