Qlik Connect: Nick Magnuson On Trusted Data and Agentic AI
Tech Talks Daily · April 18, 2026 · 21:22


What if the reason most AI projects fail has less to do with the technology and more to do with how the work itself is designed?

Recording live from Qlik Connect, I sat down with Nick Magnuson, Head of AI at Qlik, for a conversation about the gap between AI ambition and operational reality. Because while many organizations are still focused on models, tools, and the race to deploy new capabilities, the real challenge often sits somewhere much less glamorous. Workflow design, trusted data, and making sure AI fits the way a business actually runs.

Nick brings more than two decades of experience in machine learning and predictive analytics, and in this conversation, he shares why so many AI initiatives fail before they ever create value. His view is refreshingly direct. Most failures are not technology failures at all. They are workflow failures, where teams try to force AI into the business without first understanding the outcomes they are trying to achieve.

We also explore the rise of agentic AI and what it means when systems move from generating insights to taking action. Nick explains why governance becomes even more important in that world, how organizations can balance speed with control, and why trusted data has to move beyond being "good enough for reporting" to becoming reliable enough for decisions and automated execution.

There is also a strong discussion around openness, portability, and the growing risk of vendor lock-in. As enterprises build more complex AI ecosystems, flexibility is becoming a strategic advantage, especially for organizations trying to scale without creating expensive dependencies they will regret later.

For mid-market businesses with limited resources, Nick also shares a practical path to production. A reminder that operationalizing AI does not require massive teams or unlimited budgets, but it does require clarity, discipline, and a focus on the right problems first.

So as the next wave of enterprise AI moves from experimentation to execution, what will separate the organizations that scale successfully from those still stuck in pilot mode? And are we asking the wrong questions by focusing on more AI, instead of better AI?

Join me for a thoughtful conversation from the heart of Qlik Connect, and let me know your view. Is workflow design the missing piece in your AI strategy?

Useful Links

Visit the Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

[00:00:00] - [Speaker 0]
This month, I'm partnering with NordLayer, and it's support like this that allows me to keep bringing you conversations from across the global tech community. And if you are running a business right now, you may have noticed there's a quiet shift happening, one that most people are still underestimating, and that is your company doesn't live inside your network anymore. It lives inside the browser. That's where your SaaS apps sit. That's where your data moves.

[00:00:31] - [Speaker 0]
And increasingly, that's where attackers are focusing their attention. So NordLayer has just launched its new business browser, and it's designed specifically for small and medium-sized companies that need visibility and control without the overhead of enterprise security tools. What I like here is the balance. You get advanced protection, better compliance, and full visibility into how your team is working online, but without slowing anyone down or forcing them to learn anything new. It feels like a practical step forward rather than another security layer that adds friction.

[00:01:11] - [Speaker 0]
So if you wanna see more about how it works, please head over to nordlayer.com/browser and check it out, and let me know your thoughts. But now on with today's show. Welcome back to the Tech Talks Daily podcast, where I'm recording this episode at the Qlik Connect event in Florida. And one theme that keeps surfacing in almost every conversation I've had this week is that AI is not failing because the technology is not good enough. In many cases, it's failing because too many projects were never designed to deliver real outcomes in the first place.

[00:01:53] - [Speaker 0]
And over the past few years, we've seen an explosion of experimentation, teams testing ideas, building prototypes, and exploring what's possible. But moving from that experimentation phase into something more operational, something that improves how a business actually runs and can then be scaled, this is the part where many organizations are still getting stuck. So what does it really take to bridge that gap? Well, to explore this, I'm joined by Nick Magnuson, Head of AI at Qlik. And Nick has been working in this space long before the current wave of generative AI.

[00:02:37] - [Speaker 0]
And today, he's responsible for driving Qlik's AI strategy, product development, and innovation. So in this conversation today, we're gonna get stuck into why so many AI initiatives fail to make it into real workflows, why success starts with clearly defined business outcomes, and why workflow design is often that missing piece. We'll also explore the growing importance of governance in an agentic world, how organizations should be thinking about cost as they continue to scale AI, and why openness and flexibility are collectively becoming critical in a rapidly evolving ecosystem. But enough from me. Let me beam your ears all the way to Orlando, Florida, where you can sit down with myself and Nick right now.

[00:03:31] - [Speaker 0]
So thank you for joining me here at Qlik Connect. For anyone listening, can you tell them a little about who you are and what you do?

[00:03:38] - [Speaker 1]
Yeah. Well, I'm Nick Magnuson. I'm the Head of AI at Qlik, primarily responsible for all of our research and development and product related initiatives around AI. I've been at Qlik now about four and a half years. I came to Qlik through an acquisition they made of a company called Big Squid, where I was the CEO.

[00:03:58] - [Speaker 1]
It was a machine learning company. It was really their first foray into AI, I guess, with a more direct, this is a conscious thing we're trying to do. And, yeah, we've been building out the AI stuff here since then.

[00:04:12] - [Speaker 0]
So you were ahead of the time, really, because about four years ago, that was pre-ChatGPT and OpenAI and everything. So how much have you seen change in that time?

[00:04:21] - [Speaker 1]
Yeah. I mean, it changes, and the pace of change has increased. I think that's the biggest thing. But, yeah, going back to, let's see, 2015, when I started doing this in software, it was really early. Like, the type of AI we're dealing with now wasn't present.

[00:04:38] - [Speaker 1]
It was just traditional machine learning. But, you know, the things that are occurring now, where you have model updates with significant changes each time, and that occurs within less than six month time frames, that is a pace of change that I don't think I've seen in my career, and I don't think anyone's seen in their career.

[00:04:58] - [Speaker 0]
And, of course, over the last few years, we've seen this clear shift happening from AI experimentation and pilot phases to operational reality. But from your perspective, why do so many AI initiatives still fail to make it into real workflows?

[00:05:12] - [Speaker 1]
Yes. This is a great question. You know, we talk about getting into a production situation, and I think it goes back to what you're trying to achieve when you first go in and start the project. A lot of projects start as an experiment. It's not tied to a business outcome.

[00:05:31] - [Speaker 1]
There's no sponsorship from an executive. There's no operational support to make it actionable. And, you know, that's fine. Maybe for some companies, it is just about experimentation, and they use terms like, hey, we went live, or we're now tracking adoption and usage. But the reality is, in order to get the value out of it, you've gotta be more deliberate about trying to create measurable operational improvement.

[00:05:56] - [Speaker 1]
And to do that, you've gotta have a mindset around tying it to a business outcome that matters. And like I said, having the requisite support, which really isn't anything new. Even old analytics projects, you had to have the same things in place. It's just that AI moves so much faster when you're trying to build these types of projects that you have to have that in place. Otherwise, either they fail, or you may have success from a technical implementation, but there's no business value derived from it.

[00:06:25] - [Speaker 0]
And one thing we keep hearing this week is that AI failures are often workflow design problems rather than technology problems. So what does good workflow design actually look like in an AI driven organization? I appreciate that's the utopia, isn't it? But what does it actually look like?

[00:06:43] - [Speaker 1]
Yeah. I mean, I think at the beginning, like I said, you've gotta tie it to a business outcome, a need, a challenge. Right? And AI can solve a lot of challenges that are really difficult, or not particularly cost effective, to achieve with a lot of human resources. You know?

[00:07:05] - [Speaker 1]
So you think about stages. You've gotta have the right support up front. You've gotta have the right sponsorship up front. Again, tied to the right business outcome, being very deliberate about what you're trying to achieve with it. I think sometimes there's no, like, hey, up front, we expect this type of result.

[00:07:20] - [Speaker 1]
And therefore, when you get the result, you don't even know if it's good or if it's bad. And then as you carry it out, you've gotta have the operational support, the right teams in place, the right buy in. And again, these aren't new concepts. They go back as far as analytics has been around. But again, I think the thing that's different is AI can move so much faster, especially if AI is part of the process of building these things out.

[00:07:44] - [Speaker 1]
Yeah. So, you know, I think maybe the one other thing I would submit here is you've gotta know that these things are very iterative. Even if you think you know everything up front, you're gonna discover things that are not what you expected, good or bad. So there's some iteration that should be expected no matter what.

[00:08:03] - [Speaker 0]
100%. And this year, of course, agentic AI is being positioned as the next evolution, but it also raises some concerns around control. So how do you design agentic workflows that allow autonomy while still being governed and predictable?

[00:08:20] - [Speaker 1]
Yeah. I think governance now has a new meaning, and maybe a bit more weight than it did before. If you think about governance frameworks before, if you were trying to build something, the governance came at the end, maybe as a last step before you put something into production, and maybe there was a little bit of monitoring thereafter. Because you had humans in the process, there was maybe a bit of an escape hatch, because people were watching the thing. But when we go into solutions that are AI and agentic, meaning they're also autonomous, meaning they can take action, governance has a whole new construct around it, where you've gotta be able to understand the inputs that go in.

[00:09:01] - [Speaker 1]
Because if things break down, it's largely because of the inputs. It's not because you didn't think of the workflow the right way. It's because the inputs have changed, and so on. So governance has, I think, a new context, where we're trying to understand why an AI system has made a decision or taken an action, and what data and information it was using when it did that. So, again, traceability and lineage are all really important, and the ability to do that constantly is more important than it was before.
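The traceability and lineage Nick describes can be sketched in a few lines: every autonomous action gets recorded together with the inputs and data sources it used, so you can later answer "why did the system do that?". This is a minimal illustration of the idea, not Qlik's implementation; all names here (`DecisionRecord`, `AuditLog`, the example data source) are hypothetical.

```python
# Minimal sketch of decision traceability for an agentic system:
# log each action with its inputs and data lineage, then query the log
# when a decision needs to be explained. Illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    action: str             # what the agent did
    inputs: dict            # the inputs it acted on
    data_sources: list[str] # lineage: where those inputs came from
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AuditLog:
    """Append-only log consulted when a decision needs to be explained."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, action: str, inputs: dict, data_sources: list[str]) -> DecisionRecord:
        rec = DecisionRecord(action, inputs, data_sources)
        self._records.append(rec)
        return rec

    def explain(self, action: str) -> list[DecisionRecord]:
        """Trace every recorded occurrence of an action back to its inputs."""
        return [r for r in self._records if r.action == action]


log = AuditLog()
log.record(
    action="reorder_stock",
    inputs={"sku": "A-42", "on_hand": 3, "threshold": 10},
    data_sources=["inventory_db.stock_levels"],
)
```

The point of the sketch is the shape of the record: if the inputs drift, as Nick warns they will, the log is what lets you see which inputs a past action relied on and reinforce the right behavior.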

[00:09:30] - [Speaker 0]
Yeah. And I think trusted data is a phrase that has come up in almost every conversation here this week. So at what point does data move from being just good enough for reporting to being truly ready for AI driven decisions and actions?

[00:09:44] - [Speaker 1]
Yes. This is very related to the governance topic. You know, just to put some perspective on it, when we used to think of trusted data through the lens of traditional dashboards and reporting, it was, hey, is the data tidy enough to make the visuals do what I think they should do, and to be able to be shared with other people? But now we're using data to power AI systems that have the autonomy, again, to take action. So that construct around what is trusted is a bit different, where we have to be able to know, monitor, and understand the why, and what inputs were used and how, and, importantly, be able to refine those. Because when we find that things start to drift from where we originally thought they would be with an AI system, we are responsible for reinforcing the right behaviors and the right outcomes.

[00:10:31] - [Speaker 1]
So trusted data, I think, has more relevance now than it did before.

[00:10:36] - [Speaker 0]
Percent. And you're dealing with enterprises trying to scale AI, but also mid market organizations with far fewer resources at their disposal. So what is a practical path to production? What's that look like for companies that don't have the large data science teams to back them up?

[00:10:53] - [Speaker 1]
Well, I think the stuff that we're working on, and many of the things that we announced this week, make it much easier for smaller organizations to get to the same scale as a large organization that has more resources. And I say that because now you can use agentic workflows to build solutions that leverage AI, whereas before you needed to have large scale teams, not only data science teams, but also the development teams and the implementation teams that could put it into a production setting. It might actually have been software developers taking that in. And now we're seeing that we can drastically reduce the resource requirements, which opens up, I think, the opportunity for smaller organizations to leverage AI in a way that they couldn't before.

[00:11:35] - [Speaker 0]
And cost is also becoming a bigger conversation now, especially with token usage and infrastructure demands. How should organizations think about controlling the cost of AI as they continue to scale it across their business?

[00:11:48] - [Speaker 1]
Yeah. I think cost is probably a two part problem for me. One is, yeah, you've gotta think about how this thing scales, what it looks like, what the costs are, and the benefit thereof. I think you can't look at it purely on cost. You've gotta consider the benefit.

[00:12:04] - [Speaker 1]
The second part of it is, we know and can observe without much question that costs have come down and will continue to come down over time. So I think any organization that's putting cost up as the main barrier should rethink that, because costs will come down. New technologies will come online that'll have higher costs for sure, but that shouldn't be a prohibitor in terms of getting started, in my opinion. Yeah.

[00:12:30] - [Speaker 0]
And there's also a strong stance around openness and portability, particularly avoiding vendor lock in, which is something I've heard a lot about here at the moment. But why is that becoming such a critical issue in the current AI landscape?

[00:12:43] - [Speaker 1]
Well, so there are a couple of reasons I think that interoperability, or what we called freedom, as you may have heard here, I think Mike said that on stage the first day, is important. One is the technology is moving so fast. Right? And if new technology comes out that your business can benefit from, you need the flexibility to move to it, and it may not be with the same vendor that you're currently using.

[00:13:10] - [Speaker 1]
Mhmm. You know, we've built our own architecture that way, so that we can be very nimble when new stuff comes out, so that we can utilize it and our customers can benefit from it. So that, to me, is a principal reason. Two, it's never a best practice to get fully locked into a single vendor, because then, for a lot of reasons, you don't have flexibility, whether it's on the cost, the price, the technology choices, or how the stack works together. It's better, in my opinion, to shoot for fit for purpose, best of breed solutions, and to be designed to enable that.

[00:13:48] - [Speaker 0]
Yeah. I'm glad you said that because I think many vendors are pushing all in one platforms, but we're also seeing more fragmented ecosystems at the same time. So how do you balance flexibility with simplicity when designing AI architecture?

[00:14:01] - [Speaker 1]
Well, yeah, I mean, this is getting a bit technical, but when we think about it from a product design standpoint, we wanna try and abstract away different components so that we can interchange technologies, but the components and the way they work together, that does not change. So as an example, we have something in our agentic experience called the LLM gateway, or the AI gateway, where it is a single interface for all of our services to interact with different model providers. And as those model providers change, the interface that we've built within the product for those services to speak through, that does not change. So you're only changing one component of the architecture when that happens. So when I talk about flexibility, build it with that in mind, because that's necessarily gonna be the case here, and the cycles will probably get even faster than they are today.

[00:14:52] - [Speaker 0]
I think there's also often tension between moving fast and maintaining control, especially with governance and compliance. So how do organizations strike that right balance without slowing innovation? Always a tricky balance. Nothing's changed there, but any words you'd share?

[00:15:07] - [Speaker 1]
No, this is a really big topic. Just in our experience, the development process when AI is leading it is magnitudes faster. And so you can't have the same processes that we've relied on previously for that. So my suggestion, or the thing that I would advise people to think about, is let's not leave some of those prior procedures at the tail end of the development process.

[00:15:36] - [Speaker 1]
They need to be more embedded into the actual work as you go. Whether that's security, compliance, or anything regulatory, it should be done as you develop, as opposed to done at the very end.

[00:15:48] - [Speaker 0]
And looking ahead, I'm curious. What do you think will separate the organizations that successfully operationalize AI over the next twelve to twenty months from those that remain stuck in pilot mode? What do you think will separate those two? Is that gap gonna get wider, do you think? Or

[00:16:03] - [Speaker 1]
That's a great question. I think what'll separate them, and this is maybe stepping away from the technology, is having a bit more of a culture of wanting to rapidly iterate, knowing that they're going to fail in some cases, in acceptable circumstances. The culture is more about innovating, versus those that are more on the fence and wanna do it very pragmatically, and, you know, that's prudent for some organizations. Risk tolerance levels come into play in terms of what organizations can do. But I think those that experiment quickly, fail in some cases, learn from it, and then go on, they're gonna be in a better position, let's say, three or four years from now, where they've got stuff into production and are getting, like I said, measurable operational improvements in place, versus those that are still kinda working through the experimentation phase, if you will.

[00:16:58] - [Speaker 0]
And we're recording this on the last day of Qlik Connect. There have been a lot of announcements this week, a lot of press releases. What is it that excites you most about what has been announced this week?

[00:17:08] - [Speaker 1]
Well, what excites me the most here is that most of the stuff we announced is live. Yeah. It's in production. Qlik Answers, the newer version of it, we launched in February. There are thousands of companies using that already.

[00:17:23] - [Speaker 1]
There's, like, tons of adoption, tons of usage. That, to me, is most exciting, because, yeah, we can announce things in the future, but the reality is we have AI today, and we have customers that are leveraging and benefiting from it now. And as I've been here talking to our partners and our customers and others, it's been really encouraging to see that people are loving what they've got and are using it today, as opposed to thinking about what's ahead.

[00:17:51] - [Speaker 0]
And when you take that flight back to Salt Lake, hopefully without the problems you had getting here, just for everyone listening, and you reflect on all those conversations with customers and partners, what are you gonna be taking away and thinking about on that flight?

[00:18:06] - [Speaker 1]
Well, the first thing for me is I want our development teams, the people that are building the stuff, to hear that, I guess secondhand from me, and to know that what we're embarking on is hitting the right mark. It's resonating with customers. They're using it. They're excited. That's probably the first thing, because we've put a lot of hard work into getting to where we're at.

[00:18:28] - [Speaker 1]
Secondarily, I think on the converse side, we can't slow down. We need to continue to innovate. It's a very competitive market. Our customers have increased expectations now because of what they've seen us do. And we're gonna go back to work and keep working on the new capabilities that we think can move us even further ahead, in terms of the technology we can offer and the use cases it can support for customers.

[00:18:52] - [Speaker 0]
Love it. Well, thank you so much for sitting down with me today. I will include a link to all things Qlik Connect, and indeed your LinkedIn, if anyone wanted to reach out to you. But I know how busy you are, so just thank you for stopping by.

[00:19:03] - [Speaker 1]
It was a great conversation. Thank you. Thank you.

[00:19:05] - [Speaker 0]
One of the many things that stood out to me in the conversation today is just how much of this keeps coming back to fundamentals. Yes. There's a tendency to look at AI as something entirely new, something that requires completely new ways of thinking. But as Nick pointed out today, many of the principles that determine success haven't changed at all. You still need clear business outcomes.

[00:19:32] - [Speaker 0]
You still need the right level of sponsorship and support. And you still need to design systems that people will adopt and actually use. The difference now, though, is speed, because AI is accelerating everything: development, iteration. And it accelerates both success and failure, which means the margin for error has become much smaller. And there was also a very important point around governance made in the conversation.

[00:20:05] - [Speaker 0]
Because in a world where systems are no longer just generating insights but taking action, understanding how decisions are made, where the data comes from, and how those systems behave is becoming essential. And then there's the bigger shift happening right across the industry: from closed platforms to open ecosystems, from static models to constantly evolving architectures, from isolated experiments to embedded operational systems. So much going on here. And I think if you're thinking about your own AI journey, the question shouldn't be what you build. It's whether what you build is designed to deliver real outcomes, at scale, in the way that your business actually operates.

[00:20:53] - [Speaker 0]
But I'd really be interested in your take on this. Are you still experimenting, or are you starting to see AI become part of how work actually gets done? As always, techtalksnetwork.com. I'd love to hear from you, but I'm afraid we're out of time today. I'll return again tomorrow with another guest, but thank you for listening as always.

[00:21:13] - [Speaker 0]
And I'll speak with you again tomorrow.