What happens when the excitement around AI collides with the reality of deploying it inside a business?
At SAS Innovate, that question came up repeatedly, and in this episode, I sit down with Manisha Khanna, global product marketing lead for AI at SAS, to unpack why so many organizations are still struggling to move from AI pilots to meaningful business outcomes. While headlines continue to celebrate the rapid rise of generative AI and agentic systems, Manisha brings a far more practical perspective shaped by working directly with enterprises trying to operationalize AI at scale.
One of the most striking parts of our conversation centers on why AI projects continue to stall. According to Manisha, the biggest problems are not weak models or lack of ambition. Instead, organizations are running into unpredictable inference costs, operational complexity, governance challenges, and internal resistance to change. She explains why many companies still approach AI as a technology purchase rather than a transformation strategy, and why governance built in from the beginning can actually accelerate adoption rather than slow it down.
We also spend time exploring what agentic AI really means beyond the hype. Manisha shares why SAS chose supply chain as the launch point for its first industry-packaged agent and how agentic systems differ from copilots by acting more like coworkers than assistants. Rather than simply providing recommendations, these systems can actively participate in business workflows, helping organizations move from monthly optimization cycles to near real-time decision-making.
Another major theme is the growing importance of governance and accountability. As organizations deploy AI into regulated industries and customer-facing environments, the focus is shifting away from "whose model is best" toward "who is deploying the best use cases responsibly." Manisha explains why governing the use case itself matters more than obsessing over model benchmarks, and why companies that bolt governance on afterward create friction for themselves later.
The conversation also touches on where AI is already delivering measurable value today. From customer complaint management in banking to aircraft maintenance support systems powered by retrieval-augmented generation, we discuss how organizations are seeing success when AI augments existing workflows rather than attempting wholesale disruption overnight.
What stood out most for me is how often the human side of AI came back into focus. Manisha repeatedly emphasized that leadership communication, employee trust, and organizational readiness are just as important as the technology itself. If leaders position AI purely as a cost-cutting tool, fear and resistance follow. But when AI is framed as a way to empower people and improve outcomes, adoption becomes much easier.
As organizations continue to implement AI and agentic systems, the biggest question is no longer whether the technology works, but whether businesses are ready to lay the foundations needed to make it succeed.
Useful Links
Connect with Manisha Khanna
Please check out the partners of the Tech Talks Network

Learn more about the NordLayer Browser
[00:00:00] - [Speaker 0]
I'm incredibly grateful to the team at Denodo for backing the Tech Talks Network and helping us produce over 60 interviews a month. And if you are looking for better ROI from your lake house, this message is going to be worth hearing. Because Denodo helps reduce complexity, control costs, and accelerate time to insight. And it does that by connecting all of your data sources in real time. So make your lakehouse work harder with Denodo.
[00:00:32] - [Speaker 0]
And you can do that by simply visiting denodo.com. There's a growing tension in the AI conversation right now. On one side, you have bold promises around generative AI and agentic systems that are transforming entire industries. And on the other, you have the reality that most projects are still struggling to move beyond pilot stage and deliver real measurable business value. Well, at SAS Innovate, that gap between ambition and execution has been something of a recurring theme, and it's exactly what I'm exploring in today's conversation because I'm joined by a global thought leader in AI governance and agentic AI at SAS.
[00:01:22] - [Speaker 0]
She's been at the center of conversations around why so many AI initiatives fall short, and more importantly, what needs to change for organizations to finally see real return on investment. So today, we'll unpack why up to 95% of AI projects are still failing to deliver meaningful impact, not because of weak models, but because of integration gaps, poor strategies, and a lack of governance that's built in from the start. And we'll also get into what agentic AI actually means in a business context, why supply chain is emerging as one of the strongest early use cases, and why governance, when done right, can actually accelerate innovation rather than slow it down. So this is a conversation today for anyone trying to move beyond hype and understand what it really takes to make AI work inside a business. A quick thank you to NordLayer for supporting the podcast and helping me make these daily conversations possible.
[00:02:24] - [Speaker 0]
And if you are listening and you're responsible for security or IT, you will know the reality. The reality that most of your risk now sits inside SaaS apps and browser activity. That gap is exactly what NordLayer is addressing with its new business browser. So instead of bolting security on from the outside, it builds it directly into the browser itself. This means you can control access, monitor activity, enforce policies, and reduce shadow IT all from one single place.
[00:02:59] - [Speaker 0]
And most importantly, it does it without adding deployment headaches or complex onboarding. You get things like browser based data loss prevention, SaaS access control, and zero trust browsing, but delivered in a way that your team can actually use. So if you've been trying to simplify your stack while improving visibility, please check it out at nordlayer.com/browser. But enough from me. Let me introduce you to my guest now.
[00:03:29] - [Speaker 0]
So thank you for joining me here at SAS Innovate. Could you tell everyone listening a little about who you are and what you do?
[00:03:36] - [Speaker 1]
Absolutely, Neil. Great to be here. I am Manisha Khanna, and I lead the global product marketing team for AI at SAS.
[00:03:43] - [Speaker 0]
Well, thank you so much for sitting down with me. Last year, MIT famously estimated 95% of AI projects would fail. You wrote a great blog on this and how most failures were caused by integration gaps rather than bad models. A year later, are enterprises getting any better at moving from pilots to production, or are we still repeating the same mistakes?
[00:04:04] - [Speaker 1]
Well, I'll give you a very honest opinion on this, Neil. While that Project NANDA report came out last year, there are also continuous reports coming out talking about the number of agentic AI projects expected to stall. Now some of those stats are very glaring. And when I speak with customers, working with a lot of customers to make sure they're able to drive value out of AI, we are really seeing three themes where issues are emerging, and they are just common. I do think they are solvable in due course of time, but they are immediate hurdles, and I'll just tell you a few of them.
[00:04:39] - [Speaker 1]
There are hurdles with regard to costs. So the cost of these models, when you deploy them to production and start building applications on top of them, is very unpredictable. And imagine the inference cost, which is expected to grow as model usage increases over time, because, you know, at what point in time are these technology companies who are providing you models going to want to get profitable? Right? So that's my number one, which I hear as a concern from a lot of the industry players I work with.
[00:05:10] - [Speaker 1]
The second concern is around complexity. Now the adoption rate is picking up among the general public, and I gave you an example just before this interview that my kid is using ChatGPT and Gemini to basically design their own room. But when it comes to enterprise scenarios, when you start deploying agentic AI or AI into existing operations, there is a lot of complexity involved. Right? So not every customer is ready to transform their operations to the extent expected by agentic AI.
[00:05:42] - [Speaker 1]
Because if you're talking about increasing the level of autonomy with these agents, your staff and employees need to be on board with that, and they need to be on board with the new process flow and what it will look like. Your regulators have to be on board with how you're deploying this, and are you able to explain those results? So there's really a true complexity in every deployment, which leads to a lower adoption rate. So, you know, these are the things I'm hearing a lot from my customers, and these are themes the industry needs to work on to get us better, so that we can really get to the scale these systems are designed for.
[00:06:17] - [Speaker 0]
And I'm glad you mentioned regulators there. So another stat here. I think it was only 16% of organizations that approach AI strategically, with governance, transparency, and trust all built in from the start. So why are so many businesses still treating AI as a tool purchase rather than the operational strategy that everybody's talking about here?
[00:06:37] - [Speaker 1]
So since you spoke about governance, I'll give you some more anecdotes here. Right? So think about AI governance applied post facto, when you've already deployed your systems. That post facto thought is what makes governance a blocker for your operations, because you're not proactively figuring out that your systems will fail and that you need to build in controls for that. Versus if you want your AI governance to be an accelerator, you wanna make sure that your systems are deployed with governance built in, not bolted on afterwards.
[00:07:09] - [Speaker 1]
And I think this is what some of those stats allude to. Principally, the whole question is, who deploys governance and where does it start? Right? Does it start from the people who are building AI? And my answer to that is yes.
[00:07:21] - [Speaker 1]
Absolutely yes. Those are the people who understand those AI systems the best, and they are geared to make sure they are deploying AI in a trustworthy manner. But at the same time, as you start moving up the curve from developers to users to actual enterprise-level business operations and processes, the level of governance obviously needs to be an accelerator, and you should not require 100 approvals to deploy an agentic AI system in the enterprise, or to approve every result that those agents are giving, because then you're not getting that value out of it.
[00:07:53] - [Speaker 0]
Yeah. 100% with you. And here at SAS Innovate, you're unveiling the new supply chain agent as the first industry-packaged agent. So why did supply chain become the right place to start, and what does it tell us about where agentic AI delivers maybe the fastest ROI? Because that's another big topic right now.
[00:08:12] - [Speaker 1]
Yeah. And, you know, Neil, one of the challenges is that the whole industry is trying to figure out which use cases will deliver the maximum ROI. And we started with supply chain because, obviously, SAS has a number of customers in that industry. We have been doing advanced analytics for that industry for a number of years. So think of the supply chain process, which is actually a series of decisions you are taking to optimize your supply chain, which includes selecting your buyers and making sure you are supplying the right product at the right spot at the right time.
[00:08:43] - [Speaker 1]
And that whole supply chain process requires a series of decisions. And typically, how are those decisions taken? Just imagine yourself in a business environment, in a supply chain meeting, where traditionally analytics or forecasting models used to tell you, hey, this is where your supply chain will work the best. Right? So go and try this scenario.
[00:09:06] - [Speaker 1]
Try this strategy. But then you're back in a meeting where a finance planner, a supply chain planner, and a business operations leader are discussing everything to decide: these are the trade-offs, these are my scenarios. If I reduce the supply for this particular customer, then I do not meet the SLA. So I need to run that trade-off again.
[00:09:26] - [Speaker 1]
That entire process, imagine if you're doing this time and again, it can easily take you a month. And that's the reason why many organizations run supply chain optimization once a month. Now with agentic AI, the core benefit is that you do not have to wait an entire month for your supply chain optimization to run. Right? Because your scenarios are changing all the time.
[00:09:49] - [Speaker 1]
Your supplier demand is changing. Your business dynamics are changing all the time. And with agents doing that task, you and I can be in a meeting, and we could discuss what the agent is giving us as an outcome, and that's nearly real time. And we think use cases like this have a huge impact in terms of business processes, and hence we thought, you know, let's just start with that supply chain agent. We have existing customers, and we have, you know, a great use case which can immediately deliver ROI without getting into the complexities of how to create this agent.
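The monthly trade-off run described above can be pictured as a tiny scenario evaluation: allocate limited supply to customers in priority order, then flag which service-level agreements the scenario would break. The customers, demand figures, and SLA targets below are invented purely for illustration; a real supply chain agent would drive a proper optimization engine, not this greedy sketch.

```python
# Toy scenario evaluation of the kind a supply chain planner runs by hand:
# greedily allocate limited supply to customers in priority order, then
# report which SLA fill-rate targets the scenario would miss.

def evaluate_scenario(supply, customers):
    """Allocate supply to customers sorted by SLA target; return fill rates."""
    results = []
    remaining = supply
    for name, demand, sla in sorted(customers, key=lambda c: -c[2]):
        shipped = min(demand, remaining)
        remaining -= shipped
        fill = shipped / demand
        results.append((name, fill, fill >= sla))
    return results

customers = [
    # (name, units demanded, SLA fill-rate target) -- all hypothetical
    ("RetailerA", 100, 0.95),
    ("RetailerB", 80, 0.90),
    ("RetailerC", 60, 0.50),
]
# Scenario: only 200 units available this cycle against 240 units demanded.
report = evaluate_scenario(200, customers)
for name, fill, ok in report:
    print(f"{name}: fill={fill:.2f} SLA {'met' if ok else 'MISSED'}")
```

Rerunning this with fresh numbers as demand shifts is exactly the loop an agent can perform continuously instead of once a month.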
[00:10:21] - [Speaker 1]
We wanna get into the complexities of how we deploy that agent in a customer scenario. And everything around that, which is the piping and, you know, the packaging of it, is built by SAS for those customers.
[00:10:32] - [Speaker 0]
And there's so much excitement around agentic AI and all things agent at the moment, but also a lot of confusion. So for that business leader listening, how do you explain the difference between maybe a helpful copilot and a true enterprise AI agent that can actually transform operations? How would you explain the difference to that person listening, sat on the fence, not knowing where to start?
[00:10:57] - [Speaker 1]
Wow. You made me reflect a lot today, Neil. Thank you. So the difference between a helpful copilot and an AI agent that's really performing tasks for you, mostly autonomously, is that you have to shift your thinking from using ChatGPT to really having a coworker, which is an agent.
[00:11:17] - [Speaker 0]
Yeah.
[00:11:18] - [Speaker 1]
Right? Because that coworker, while you're doing your own job, is giving you recommendations. It's working on your behalf. It's taking actions on your behalf. It's doing part of the work you're doing in real life.
[00:11:31] - [Speaker 1]
So imagine just having one more person on your team, while an assistant is a lot more dependent on you. It's relying on your judgment while it's just giving you some opinions, which you can decide to take or not. But when it comes to having a coworker, that's where the true enterprise value will get unlocked, if you're able to do it right and do it well.
[00:11:49] - [Speaker 0]
Completely agree with you. And one of your sessions also focuses on AI governance and assurance, and many leaders do still see governance as slowing innovation down. So I'd love to bust a few myths and misconceptions here. Can you expand on why you believe governance is actually the thing that makes AI scale faster and safely? Because there's a battle between the old mindset and the new mindset here.
[00:12:11] - [Speaker 1]
Absolutely. Absolutely. And I come from a banking background where I've seen a lot of risk management policies embedded as well. A lot of them are, as I said, post facto. Right?
[00:12:20] - [Speaker 1]
And when I explain governance and the importance of it, I think we can all think about a Formula One race. Right? Think about how seamlessly the tires get changed. The pit crew works together. Right?
[00:12:34] - [Speaker 1]
It's happening in the right fashion and in a fast-paced environment because there is governance built in, and that governance is not making the car slow. Governance is giving you that telemetry in the car, which is making the driver more educated to run at speed, to race at speed. Right? And this is what good governance looks like.
[00:12:54] - [Speaker 1]
Right? So, you know, again, governance in my mind, we have to shift the mindset. It's a lot about the culture of the company, but it's also about making sure you're thinking about governance right from the start, not post facto, when you've built a system and then you get hit by regulators and auditors asking you, what did this agent do? Right? Why did it take this decision? That's too late, because that's when the impact has already happened.
[00:13:21] - [Speaker 1]
That's where, especially in high-stakes regulatory environments, the decision has been taken. A customer did not get a loan approved, or a medicine was offered to a patient, and that has serious implications.
[00:13:34] - [Speaker 0]
And SAS AI Navigator is designed to govern any model or agent, including third-party tools like Claude or Microsoft Copilot, which feels like a big deal to me. So how important is it for enterprises to govern the use case itself, not just the model underneath it, do you think?
[00:13:50] - [Speaker 1]
Yeah. And I do think we need to move beyond this whole conversation about models, because in the industry, we are focusing a lot on, hey, whose model is better? Right? I think we need to start talking about who's deploying the best use cases, which are giving the highest and best outcomes right now.
[00:14:06] - [Speaker 1]
And that's where, you know, a lot of regulations are emerging. Every country has a very different viewpoint in terms of how they're tackling governance. And when I work with customers, I just tell them the accountability does not sit with regulators. Right? The accountability is yours, because you need to get the right decision in front of your end customers.
[00:14:23] - [Speaker 1]
And if you're deploying AI systems, we need to start thinking about governance right now.
[00:14:28] - [Speaker 0]
And you've also spoken about organizations that are overspending on sales and marketing AI while missing those bigger ROI opportunities in back office automation. So where do you see companies still getting the investment priorities wrong, and where do you think they should be looking instead?
[00:14:45] - [Speaker 1]
I think, Neil, the biggest mistake which a lot of companies are making, and I have been a culprit of that in a past role when I used to work at a different company, is that a lot of times we get carried away by some cool, catchy demos. Right? And when your AI strategy starts with technology, you will end up in a disaster. Right?
[00:15:03] - [Speaker 1]
And that's what's happening. There's a lot of disappointment happening because we see a cool model, we start seeing some nice fancy demos, and we want them immediately applied in our own business context. But there are places, even for agentic AI, where I can confidently say we have started to see some customers getting real business value out. A couple of them spoke at our SAS Innovate conference, so I'm definitely gonna go catch up with those recordings and use my digital pass for SAS Innovate to make sure I hear what the industry is doing.
[00:15:33] - [Speaker 1]
But then if you notice and go deeper in terms of themes, in terms of where those use cases are successful, a lot of those companies have picked up use cases which are augmenting their existing business process. One of the examples we have is a large bank in Greece, which is using AI agents for customer complaints management. Now this is one of the cases where you still have a human in the loop. You can still have agents give some recommendations. You can use technologies like SAS Intelligent Decisioning to automate some of the process flows so that you can speed up the work of the customer service officer.
[00:16:04] - [Speaker 1]
We also have another case, an airline provider, which I'm very excited about, which is using SAS technology, the SAS retrieval agent manager, for doing context engineering. And the use case we are solving is this: we had been doing predictive maintenance of aircraft for them for a number of years. But what they said to us was, my maintenance officers wait a number of hours to find answers in the maintenance manuals, which are a huge pile of information, at least 300 pages per aircraft. Where they have a fault code, to find out how to fix that fault code, they have to scan that entire document, and that takes time before they can take action. And the cost of that time is that the aircraft is left standing on the ground, which costs the company millions of dollars.
[00:16:48] - [Speaker 1]
So context engineering is a use case which is picking up really fast in the industry. It's simple to implement. It's a lot more mature. All you're doing is building RAG pipelines and getting your enterprise knowledge base, which is unstructured information, right into the hands of business users so that they can chat and interact with that document and get to that information and insight faster.
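To make the pattern concrete, here is a minimal sketch of the retrieval half of a RAG pipeline: chunk a long manual, score each chunk against a fault-code query, and surface only the most relevant passage. The manual text, the fault code, and the token-overlap scoring are all invented for illustration; a production system would use a vector store and learned embeddings rather than simple word overlap.

```python
# Minimal retrieval sketch: chunk a maintenance manual into overlapping
# word windows, then return the passage most relevant to a fault-code
# query so a model (or a human) reads two paragraphs instead of 300 pages.

def chunk(text, size=40):
    """Split text into overlapping word-window chunks (half-window step)."""
    words = text.split()
    step = size // 2
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - step, 1), step)]

def score(query, passage):
    """Relevance = number of query tokens that also appear in the passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query, passages, k=2):
    """Return the k highest-scoring passages for the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

# Hypothetical manual excerpt and fault code, for illustration only.
manual = (
    "Fault code E42 indicates a hydraulic pressure sensor failure. "
    "Replace the sensor assembly and bleed the hydraulic line before testing. "
    "Routine inspection of landing gear should occur every 200 flight hours. "
    "Cabin pressure warnings are covered in section 9 of this manual."
)
chunks = chunk(manual, size=12)
top = retrieve("how do I fix fault code E42", chunks, k=1)
```

In a full pipeline, `top` would be passed to a language model as context for the answer; here the retrieved chunk itself already contains the fix.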
[00:17:08] - [Speaker 0]
And lastly, your session here at SAS Innovate on building AI on a solid foundation asks whether organizations are building on bedrock or quicksand. So what are the warning signs that a company's AI strategy is built on weak foundations, especially around data trust and operational readiness?
[00:17:26] - [Speaker 1]
I think you summarized it well. It's data trust and operational readiness. Right? So, again, think about the bedrock there. And, you know, I also think about a lot of other countries. Right?
[00:17:37] - [Speaker 1]
I come from Asia, I was born and brought up in Asia, and I moved to the United States. And I also think about those countries which are evolving really fast, but do not have access to all the models and the infrastructure which many other countries have. Right? So think about data and governance as equalizers for those countries. Right?
[00:17:56] - [Speaker 1]
And even though your models are still evolving, they're changing at a very fast pace, and there are new models emerging, that will be a differentiator. But the true differentiator will be when companies start harnessing that data and also get their governance right, and that can really get them to scale. So the challenge question, the pressure test, is: have I fed any data or knowledge to this particular model to make it more context-ready and relevant to my domain? Right? Do I have governance in place?
[00:18:27] - [Speaker 1]
Am I evaluating the models I'm using? And am I evaluating them the right way, making sure they are well tested before I deploy them to production? Those are some very simple pressure tests which, if all of us adopt and start asking questions of our own vendors, will get us better outcomes, and quick outcomes, from our exercises.
[00:18:46] - [Speaker 0]
One of the great things about attending any conference, of course, is the people that you meet, people that you wouldn't normally meet. You've done a few sessions here. You've probably had a lot of feedback, a lot of conversations. What is the kind of feedback you've been getting? What kind of questions are people asking you?
[00:18:59] - [Speaker 0]
Any trends then?
[00:19:01] - [Speaker 1]
So we are seeing a lot of interest from existing SAS Viya users in our Copilot capability. Now a lot of the users we have are doing data and AI tasks in SAS Viya, which helps them to manage data, develop models, and deploy decisions to production. And with Viya Copilot, we are able to improve productivity for data scientists, data engineers, and business analysts so that they can speed up their data and AI tasks, ultimately getting from insights to action faster. Right? So this is our theme, which is really very interesting, and there's a lot of customer demand coming from existing SAS Viya users.
[00:19:40] - [Speaker 1]
At the same time, I think there are a lot of questions at this stage with regard to, hey, where are you successful in agentic AI? Am I really seeing any good use cases which I can start applying in my own business context? Because you know what? I've started experimenting with it.
[00:19:54] - [Speaker 1]
AI governance was an eye-opener for me. I ran a roundtable this morning on AI governance where we had around 30 participants from various industries and different walks of life. And one of the themes which came out from that AI governance roundtable is that even for highly regulated industries who have had AI governance in place, the whole agentic AI era is resetting all of those parameters. So a lot of them are back to square one and saying, we need to learn, and our AI governance policies really need to keep evolving for a number of years, and we don't know how to get started. Right?
[00:20:28] - [Speaker 1]
These are the common themes which I'm seeing. There are real use cases. There is a lot of excitement. There's a lot of hype. People want these technologies in their businesses.
[00:20:37] - [Speaker 1]
A lot of them are figuring out how to go about it. There is success, though, in Copilot applications, early success which I'm seeing, which is a great sign that the industry is moving forward.
[00:20:48] - [Speaker 0]
So if we do have a CEO listening to our conversation today anywhere in the world, they're feeling that pressure to do something with AI or agentic AI, but feel overwhelmed by the noise and the hype. What are the first maybe two or three decisions they should take to ensure that they create measurable ROI rather than just another expensive pilot project?
[00:21:08] - [Speaker 1]
Yeah. I would just say start with your end goal, your north star, in mind, in terms of how AI can benefit your own customers and your own business. A lot of these technologies are also about transforming your businesses and creating new business models; they are not just wrappers you can put on existing processes. And this is a call to action for all the leaders who are listening to this podcast: there is a lot you can do in terms of setting the narrative right, because I am sensing a lot of fear in the industry, where your own employees might be fearful of using AI, which can really stall adoption.
[00:21:43] - [Speaker 1]
So if your narrative is, hey, this AI technology is going to make you more powerful, you will unlock a lot of opportunities, versus saying, hey, I have this AI technology, and my goal is to get efficient. That's a warning sign. So, again, set the tone right, make sure your employees are on board, and at the same time, go and pick your use cases smartly. Start with something that will deliver quick value so that you can then get more success across the board and get more traction on your investments.
[00:22:10] - [Speaker 0]
Fantastic advice. On a personal level, when you're traveling home after the conference, taking in all the conversations, the keynotes, the announcements, what are you gonna be reflecting on? What are you taking away from SAS Innovate?
[00:22:25] - [Speaker 1]
So I am going to read a lot about the agentic AI evolution, and there were some very interesting discussions I had with a couple of analysts who were walking around the conference as well. And they told me about some interesting reports they are releasing. A couple of themes I'm interested to look at: one report is supposed to speak about model inference costs and how that's gonna impact end customers. I do wanna read about how humans are getting impacted by the change in AI and how we can make that change management easy.
[00:22:55] - [Speaker 1]
And the third topic I have in mind is AI governance. Again, everybody has to learn together at this pace of change we are experiencing right now. So that's something I'm gonna go advocate for more, talk to a lot of my customers to basically encourage them to start doing something instead of getting worried about or waiting for regulations to come.
[00:23:15] - [Speaker 0]
And my research on you also tells me that as well as doing all that, you've got a talk coming up. Any spoilers you can give me about that?
[00:23:22] - [Speaker 1]
I have a talk coming up today at SAS Innovate where I'm going to talk about our road map for a very interesting portfolio, which is SAS models and agents. What we are doing for the industry: not all customers have the talent to really get the building blocks together. And I'm gonna speak about an analogy of a Jenga tower. You know, if you've played Jenga. Right?
[00:23:42] - [Speaker 1]
Agentic AI implementations are all about putting those building blocks together. Right? There's a block for your data engineering and context engineering. There's a block for trust and governance. There's a block for your talent, which is your data engineers, your machine learning scientists, and the people who now understand those large language models and know how to implement them.
[00:23:59] - [Speaker 1]
And this is a lot for any organization to take on. So with our packaged models and agents, the goal of that portfolio and the vision behind it is: how can we simplify and offer you that entire stack instead of you stacking those Jenga blocks together? So I'm very excited about that talk, and I look forward, Neil, to talking to you one more time about the feedback we get from the industry.
[00:24:20] - [Speaker 0]
Absolutely, lovely. I will include a link to your LinkedIn for people listening if they wanna carry the conversation. Is there anywhere else you'd recommend people listening check out at all?
[00:24:29] - [Speaker 1]
Yeah. I do write a lot on our SAS blogs channel, and that's where I would, you know, request your audience to go follow me. I do talk a lot about AI strategy topics. I do like to cut through the hype, and I have a very pragmatic view towards AI implementations. And I love to talk about SAS and the implementations we are doing with our customers where we have been successful, so the rest of the industry can learn from them.
[00:24:52] - [Speaker 0]
Perfect. I'll add links to everything that you mentioned there. I appreciate that you're running around on-site at the moment and you've got a talk to do, so thank you for taking the time to sit down with me.
[00:25:01] - [Speaker 1]
Thank you, Neil, for joining us at SAS Innovate. Pleasure to have you here.
[00:25:04] - [Speaker 0]
One of the things that I loved about our conversation there is just how much of the challenge around AI has very little to do with the tech itself. As my guest explained, the biggest barriers are cost unpredictability, operational complexity, and the human side of implementing change, whether it's employees unsure of how AI fits into their role or leaders chasing the wrong use cases based on impressive demos rather than addressing their real business needs. And for those reasons alone, I think it's clear that success comes down to decisions, not just deployment. Because there's a big shift happening right now, especially in how we think about governance.
[00:25:45] - [Speaker 0]
Instead of being something that just slows things down, it's becoming a mechanism that allows organizations to scale AI both safely and confidently. And when you combine that with a focus on the right use cases, especially those tied to existing workflows and measurable outcomes, then you start to see where the real ROI begins to emerge. So if you're feeling the pressure to do something with AI but are unsure where to start, hopefully this conversation today gave you somewhat of a grounded perspective. Start with the outcomes that you want to achieve, bring your people with you, and focus on use cases that create immediate value rather than just chasing the latest trend. As always, I'll leave links to everything we mentioned and talked about today in the show notes.
[00:26:36] - [Speaker 0]
And remember, you can continue the conversation with her on LinkedIn. But I'd love to hear your take. Are we finally moving from AI experimentation to very real business impact and outcomes, or are you still stuck in the pilot phase? Let me know the journey you've been on. I'd love to hear from you.
[00:26:53] - [Speaker 0]
And you can find me at techtalksnetwork.com. But that is it for today. I've taken up far too much of your time, so I will return again tomorrow with another guest. But thank you for listening. Bye for now.

