Salesforce - The Vision For Agentic AI And The Future Of Work
Tech Talks Daily, March 18, 2026


What does it really take to move from AI hype to something that actually works inside a business?

In this episode, I sit down with Shibani Ahuja, SVP of Enterprise IT Strategy at Salesforce, to talk about why so many enterprise AI projects stall long before they deliver real value. While the market is full of noise around agents, copilots, and automation, Shibani makes the case that the real issue is often much simpler and much harder at the same time. Design. She explains why model capability alone will never rescue poor architecture, weak governance, or unclear data ownership.

Our conversation goes well beyond the usual agentic AI headlines. Shibani shares what she learned from speaking with hundreds of C-suite leaders over the past year, and why many early enterprise AI conversations were too focused on models instead of ecosystems. We unpack the difference between predictive, generative, and agentic AI, why trusted data means more than having lots of information, and how Salesforce's own internal journey revealed conflicting knowledge, governance gaps, and the importance of determinism in enterprise settings.

I also loved Shibani's perspective on the human side of this transformation. We talk about why successful organizations are framing agents as a capacity multiplier rather than a headcount story, how to bring employees along through visible wins and shared learning, and why the best starting point is often a simple, boring use case that removes pain for frontline teams. She also shares her thoughts on the eight design principles for the agentic enterprise, the myths that frustrate her most, and what will separate the leaders from the laggards over the next 18 to 24 months.

This is a conversation for anyone feeling pressure to do something with AI, but wanting a clearer view of what meaningful progress actually looks like. Are businesses building the right foundations for an agentic future, or are too many still mistaking experimentation for strategy? Have a listen and let me know your thoughts.


[00:00:04] Welcome back to the Tech Talks Daily Podcast. Now today's episode is for every leader who's being told to do something with AI or do something with agentic AI and secretly finds themselves scratching their head, wondering where the real value in all this tech investment is going to come from. And my guest today is from Salesforce, and she's going to be talking about her eight design principles for the agentic enterprise.

[00:00:32] And we'll also discuss why the real blocker isn't model capability. It's design, data readiness and operating model change. And together we will offer you a practical, straight-talking look at how the most successful organizations out there are moving beyond pilots and succeeding. But if you're struggling, we'll also talk about how to build agents that employees trust,

[00:00:58] and why adaptability, governance and clear economic measurement of digital labor will separate the leaders from the laggards over the next two years. But enough scene setting from me. I've got a guest to introduce you to. So buckle up and hold on tight as I beam your ears all the way to Canada, where my guest is waiting to speak with us today. So a massive warm welcome to the show.

[00:01:25] Can you tell everyone listening a little about who you are and what you do? Sure. Shibani Ahuja, SVP of Enterprise IT Strategy. It's a mouthful. I guess the best way to describe who I am and what I do is maybe a bit of a past, present and future. So I've been with Salesforce for about a year and two months. And prior to that, I was a customer. And Salesforce loves to tell successful customer stories. I worked at one of the largest banks in Canada. I was running digital, doing a few other things at that organization.

[00:01:55] We drove personalization at scale. That was the other hot buzzword before AI came along. A successful implementation. Salesforce invited me over and said, hey, would you talk to us about your implementation? I said, yeah, sure. No problem. At the end, they had an unscripted, unprompted question for me. Hey, you've got Marc Benioff sitting right here. You've got the entire leadership team. What advice do you have for the company? And I had to think long and hard. Should I give the filtered or the unfiltered version?

[00:02:22] And let's just say that it was the unfiltered, very raw, very honest version that I gave that landed me in the role that I'm in now, which is helping the company determine our go-to-market strategy for how we engage the C-suite. Or, said another way, acting as an advisor to the C-suite leaders of enterprise organizations. So typically the largest organizations, and much of the conversation, as you can imagine, is all about AI transformation strategy.

[00:02:47] But at a much higher level than one might think, because I think enterprise organizations are really trying to tackle a lot right now. Oh, wow. I feel like there's a whole other podcast there on what that unfiltered version was, but we will leave it there for now. I mean, not only have you been on both sides of the Salesforce fence there, but you're also speaking with CIOs every day who are under pressure to just do something with agents. It's everywhere we look right now.

[00:03:14] Over the last 12 months, I've been across Europe, the Middle East, the US, and here in the UK, and agentic AI, custom agents, et cetera, it's all everyone's talking about. But where are they genuinely making progress? And where are they still stuck in experimentation? And one of the reasons I say that, there was a big stat last year about 95% of AI projects stalling. And then I think Gartner had predicted earlier this year that by 2027, many projects will fail. And there's a lot of doom out there. But what are you seeing?

[00:03:45] Well, they succeeded at pilots. Even if 95% are not delivering value or outcomes, they certainly succeeded at trying out pilots. In reality, I think there were a lot of CIOs, a lot of tech leaders, that did go in very fast and furious on generative AI, exploring capabilities like GitHub Copilot. So that may have been very successful within point spaces. I also have seen a lot of success within highly contained, high-value workflows.

[00:04:13] So you think of service deflection or IT help desk, we're seeing some progress there, lead routing. Knowledge retrieval is probably still one of the ideal candidates. And then internally, I see some organizations now also looking at where AI can help internally. And I'm using the term AI broadly and loosely, because there's always predictive, generative and agentic. And I'd say, you know, the space that I'm speaking about is probably still very much on the generative side. Where things are stuck in experimentation is when we start to

[00:04:41] think about the more autonomous or agentic side. That's where I think tech leaders are still trying to determine what exactly it is that they need to move from that demo to production. Yeah. Or what it takes to connect agents to live production data, as opposed to the data set that they might've cleansed and put off to the side, because for this pilot, we're just going to need that content or that data. And one of the reasons I was excited to speak with you today is when I was doing a little

[00:05:08] research, I read that you said that the real blocker isn't model capability, but very often it's just design. So what does poor design look like in the wild, and how does it show up in those failed pilots? Maybe the two go hand in hand. You're almost leading the witness here, but it really does go hand in hand. It's where, you know, folks are giving agents access to data, but there's no data strategy.

[00:05:38] Clean data is a term, and I don't know how fully folks understand, again, what clean means, the lineage of it, the quality of it, and how to measure it post-implementation. I've seen, and this isn't often, people saying to their agent, please don't hallucinate. That's not a strategy. Being more prescriptive and more defined about that matters. But even something as simple as having an agent that has a very clearly defined role,

[00:06:06] or where human approval is not an afterthought or a bolt-on, but is being considered within the workflow. It's considering how agents and humans are actually working together as colleagues, not as purely augmentation in a single process. And I think over the last few years, most enterprises have been in that constant battle to unlock value from siloed data and unstructured data. One of the big phrases last year at conferences was, hey, no data, no AI.

[00:06:35] And I think that message is really hammered home now. But your upcoming framework here in 2026 focuses on design principles for the agentic enterprise. So before we get into the details here though, what was broken in that first wave of enterprise AI that made a new design approach so necessary right now? I think what was broken was just really this one idea. I

[00:07:04] would go in, and I think I may not have shared this yet: last year I met with over 500 C-suite leaders, and the conversation would start. I would typically be in a room with sales folks on one side and C-suite leaders, but typically CIOs, on the other side. And off would go the conversation around AI, AI, AI. We would use the word agentic AI. And very quickly, early in the year, a CIO would say, well, we're going to build, we're not going to buy. And at this idea, I was scratching my head to say, okay, build versus buy.

[00:07:33] What component part are you building? What part of agentic AI specifically, because we were talking about agentic AI, are you building? And it dawned on me as I had more and more of these conversations that a lot of CIOs were thinking model first, not ecosystem first. They were saying, I have an LLM and I have a data scientist team, and that should be enough. I can figure out the rest. Without recognizing that it's not generative AI. You're not just automating an end-to-end process where you're going to

[00:08:02] take the process and automate it from one space to another. Agentic AI is something much bigger, much broader. And so the emergence of, whether it was the agentic maturity model that we built very early last year or these eight design principles, it's driven by a desire to just generally educate leaders, and the world at large right now, about what exactly agentic AI is and how it differs from predictive or generative AI.

[00:08:31] And so we've realized that the need to talk about concepts that help bring people to a common area of understanding is far more important than selling a single product right now, because you can't really sell if people don't understand what they need and why they need it. And how open have they been in those rooms to these conversations and understanding the differences? Do people immediately get it? Not immediately. Truthfully, now I'd say it's changed. It certainly has changed.

[00:09:00] I'm really starting to see the tide change. But early days, people would use the word AI repeatedly, repeatedly. And then I would pause, and it would be a bit delicate how you go about educating folks, not to insult them, but to start using some examples. And the agentic maturity model was a perfect way. When you put it on paper, when you say, hey, look, this is a spectrum of AI. Chatbots, the if-this-then-that type, where you had to think of every unique scenario, they sit on the spectrum of AI.

[00:09:30] So, you know, just saying AI alone may not be enough. Then you've got chatbots, you've got copilots. You think about the early, early wave of ChatGPT. And then when we would start to use the agentic maturity model to describe that the first level of an agent moves beyond just giving the answer to a question, but actually saying, I can make a recommendation, showing reasoning. And then you show the next iteration or the next level, which is, instead of just giving

[00:09:57] reasoning and giving a response and giving a recommendation, it's performing the action for you. I think that would be the point at which the lights would come on, and they'd say, ah, it's performing the actions autonomously within guardrails. And then we kind of work backwards to say, okay, so an LLM is not enough. Let's talk about what the component parts are. Let's talk about what you want to build. Help me understand what it is you want to build. I would break down what the component parts are for agentic architecture.

[00:10:22] And that's when you would start to see the lights come on, where C-suite leaders, and just generally technologists, would recognize, okay, this is a bit more complex than just a model. And Salesforce often talks about this customer zero journey. Since joining the company, what did you learn internally about governance, determinism and data readiness that surprised even you and your teams? Anything you picked up here from joining the team?

[00:10:52] Well, I'll touch on the three that you talked about, governance, determinism and data readiness, but I'll start with one of my favorite examples, a story I tell of when I first joined Salesforce. We were doing one of our earliest implementations of Agentforce, our agent. And it was on our help.salesforce.com site. So a very typical agent, information retrieval for customers. And so we put this agent out into the wild, our wild being our public site.

[00:11:20] And we encouraged folks, ask questions, ask the agent a question, think of it as your search bar. And what was happening was customers were asking this agent a question, but the agent was yielding different responses. And so immediately we thought, oh, this agent is hallucinating. We shut it down. We pointed the agent inwards, and the agent started to do almost an audit of the knowledge articles underlying our public site, which had always been there from the beginning.

[00:11:49] And what it highlighted for us was that, from the very beginning, there had been conflicting knowledge articles, that there were two knowledge articles giving different responses. And so this agent had helped us really bring to the surface the underlying data challenges that we have. So to get to those ideas, I'll start with the last one you asked about, data readiness. It was not about volume. It was about clarity. We had many knowledge articles, but did that mean they were right? Were they accurate just because they'd been there all along? Were those the ones that we should have been pointing the agent to?

[00:12:18] So it was really this idea that it's not just the volume of data that you have, it's the readiness, it's the clarity, it's the traceability. It's even something as simple as the definition of a customer in your organization. Do you have a clear definition of a customer, and all the attributes and parameters that make up a customer? So that was one example of how I love to talk about our own agent, how it became a guardian agent or an auditor agent that helped us identify those challenges.

[00:12:46] Coming to governance, and trial and error. I think, like many organizations, we went quick in implementing these agents to see how it works and see what happens. And what we discovered was that governance needs to be designed up front. To retrofit it, or retrofit controls, after the fact becomes exponentially harder. And so that was another key learning around governance. The other one is around determinism. What we learned in our own space, using

[00:13:16] a bit of the example of that help agent we launched, was that determinism matters much more in enterprise contexts, where you need certainty, than it does in creative contexts. There are certain times where you do not want an agent to give you two answers. There are certain times when the answer is black or white. And I think what we learned was where determinism matters and where non-deterministic behavior

[00:13:42] is appropriate, and fine-tuning and tweaking accordingly, because reliability is what's going to bring brilliance. That, again, was a major learning for us, not to just set something out into the wild. We want to be able to contain and control. And of course, I came from banking. That's something I think about quite a bit from a banking perspective, from a risk perspective. So many great points there. And I was also reading before you joined me today that you describe an agentic maturity

[00:14:08] model that moves from chatbots to multi-agent systems. So what has to change in the operating model, not the tech stack, for an organization to move up a level? Because I'm sure there'll be people listening and that's where they're at and that's what they need to do. But getting there can be quite tricky and often overwhelming. Thank you for asking a question unrelated to the tech stack and about the operating model, org structures.

[00:14:34] This is, again, you're really calling out one of the failure points of why organizations are still struggling and being challenged, because there's this need to think about this agentic transformation holistically. And so when we come back to the agentic maturity model, to move a level up, unrelated to the tech stack, let's start with moving from maybe a chatbot to a task agent. That's from level zero to level one. You really need to have defined ownership of workflows.

[00:15:02] It can't just be a tech department that says, we're going to automate this. No, you've got to really understand who owns that workflow, to be able to know who you're engaging and how their roles will shift or modify or be enhanced. And then when you're moving from a task agent to a complex agent, so moving from level one to level two, it's really about understanding cross-functional accountability. Because when we start to move from simple to complex, inherently you're typically crossing over functional departments.

[00:15:29] And at this point, you want to start to think about redesigning KPIs. The way that you measured an individual who was in a department performing an action and handing it off to another individual in another department, maybe they didn't have KPIs for how that handoff occurred. Maybe they did. But consider how you measure the success of that agent as you start to give it actions that cross functions. Start to think about cross-functional accountability and how you measure it.

[00:15:57] And then when you move from complex agent actions at level three to level four, which is really around multi-agent systems that span across other areas, it's really about understanding how to orchestrate the governance of those agents. You may have agents from one company and another company, and how do you ensure that those agents are respecting the guardrails they have within their own respective domains, but are really able to look at enterprise-level data standards and data governance in that space?

[00:16:27] And other changes, more broadly speaking, regardless of the move from one level to another: the budgeting. Is AI becoming a workforce line item, and how are you thinking about digital labor versus human labor? Or risk models, continuous oversight versus static approaches, determining the frequency with which you are evaluating the risk in those models. Talent strategy becomes very critical because you are doing more than augmenting.

[00:16:56] You've got process architects that are perhaps turning into prompt engineers. Is your CHRO in lockstep with your CIO as you are starting to move across that maturity model, in how you're transforming the organization? And then the last one is really around executive sponsorship. You know, what may have started as a CIO-initiated thing, a fun thing, GitHub Copilot, that's gone. It's now COO to CIO, CEO to CIO.

[00:17:23] That CIO has become very central. But CIO, CHRO, you think of all of the intersections, I really see the CIO becoming close to every single function across an organization, the more and more companies become agentic enterprises. And you mentioned a few moments ago the problem that the agent had with conflicting information in knowledge articles, and that in the end the agent actually identified where the problem was. And I think data trust keeps coming up in every AI conversation.

[00:17:51] So in practical terms, what does trusted data for agents mean for a CIO who might be listening and dealing with fragmented systems and conflicting definitions of the truth? You know, this could probably be yet another separate deep dive, and one that I'm personally taking an interest in, where, again, the more conversations I have, the more I realize that there's an opportunity to educate and ground everyone on what this

[00:18:17] means. But I'll give the quick and dirty response here, maybe this is the teaser to have us back. It's clear data lineage. Are you taking that data as trusted at the point at which you found it? Or does it trace back, so you are able to understand how that data has moved and morphed and adjusted along the way? Or defined ownership per domain. There are different data domains, and really clearly defining that ownership. Or agreed-upon business definitions of that data.

[00:18:44] We've got business semantics, or acronyms. Do finance and the business look at the same acronyms the same way, or the definition or the metadata on that data, so that there is a common language and a common understanding? Or data standards. You might think about real-time accessibility, or permission-aware orchestration, not just orchestration, but being conscious of agents that are orchestrating work amongst multiple domains, multiple spaces.

[00:19:14] But the permissions of what they are allowed to see and allowed to do are grounded in every step. It's not just embedded within the agents for this process, but it's almost like a fabric across the entire architecture to ensure that agents are operating with integrity. The other big phrase that we hear a lot at tech conferences is keeping the human in the loop. There's a big human element here that always gets mentioned.

[00:19:42] So how are the leading organizations you're seeing getting employees to buy in, so that agents are seen as collaborators in real workflows rather than just another top-down initiative that they don't fully understand? Because, again, very often people talk about the technology and the new shiny tool, but the culture is what's required to get that buy-in, isn't it? A hundred percent. So here's where I'll use our own company.

[00:20:09] Again, we drink our own champagne, as we say, as customer zero. We have included our colleagues. And I'll take marketing. I work in the marketing organization. How we're bringing our colleagues along, number one, is we have given a number of our colleagues across the marketing organization access to multiple AI tools, all sorts. And we've created this channel in Slack called Bite-Size AI. All of the leadership across marketing, all of the executives are on that chat, all of our colleagues are on that chat,

[00:20:37] and we are encouraging and incentivizing and rewarding, recognizing, colleagues that are talking about their best AI hacks. So that's, on a small scale, showing that, look, you're not cheating. We want to know how you're using AI. We want you to teach everyone else how you're using AI. Use it to your advantage, because I think not touching it, not actually using it, not actually applying it day to day creates a bit of fear. And then you think more at the leadership level,

[00:21:04] you know, framing agents as a capacity expansion, not headcount reduction. That's a pretty basic one. Leaders are measuring the time returned to human judgment as a metric, not just capacity savings. Or really great organizations are making those wins visible. And I don't mean visible just on an investor day call. Before they even take it to the investor day call, they're celebrating with their colleagues first. You know, coming back to that agentic maturity model,

[00:21:33] a lot of the advice that I give to a number of CIOs, when they're trying to determine, I must get something in production, I've got 170 use cases, I don't know where to begin, is this: find the most simple, most boring, singular use case that will make the lives of your frontline colleagues much easier. That's what you focus on. And you celebrate the time or capacity, or the pain, that you've taken away from those colleagues. And you celebrate it in such a big way. It may not be the splashiest story for investor day calls,

[00:22:02] but talk about the momentum that you'll have internally, where people suddenly go, okay, I get this. I understand this. The other piece is training managers, not just employees. Teaching managers and enabling them on how they can bring their colleagues, their employees, along on this journey. It's not just an executive thing and then an employee thing. It's bringing everyone along, so they want to be a part of it. They want to say, I had a part to play in helping us move into this future, because this future is where we're at.

[00:22:32] Love that. And it's time to have a little fun with you now. I'm going to ask you to gaze into my virtual crystal ball. If we fast forward 18 to 24 months into the future, what do you think will separate the companies that have successfully scaled agentic AI and got it into production from those that are still talking about pilots? What difference do you see there? And I understand that the level of change we've seen in the last three years makes it almost impossible to predict the future. But how do you see this playing out?

[00:23:02] I had someone ask me a very similar question of just, where do you predict technology will go in five years? It's funny. My answer, whether it's related to where technology will go in five years or your question of what successful organizations look like, there's a similar response that I would give. And then I'll give you the real tactical examples too. To the gentleman that asked me, where do you think technology is going to go in five years, I kind of chuckled and I went, I have no idea. No clue. What do I know? I don't have a crystal ball. But what I said was, look, we all know about EQ.

[00:23:32] We all know about IQ. These are concepts that are well-known. To me, AQ, adaptability quotient is going to become the next big thing. And for him, the CIO that asked me the question, it was adaptability quotient from two perspectives. I'd say for his perspective, it was more on the adaptability quotient of the tech stack that you've got, that you're selecting. We cannot predict the future, but we see this technology changing so rapidly. Make sure you are not getting yourself locked into a technology stack that doesn't allow you

[00:24:01] to be adaptable, that isn't composable, that isn't deconstructable. From the way in which you're asking the question, in terms of organizations more broadly being successful, I think adaptability quotient is still so applicable, because it's about the adaptability quotient of colleagues. The adaptability quotient of leadership, the adaptability mindset to say, we're ready to pivot. We're ready to change. We're ready to move the way this technology and the way society is changing. And that it's not a big hiccup when the change happens.

[00:24:30] There's almost a positive embrace of it. And that's something that I learned in the one year that I've been at Salesforce. I've just been amazed at how quickly we as an organization will shift priorities. And I don't see anyone griping. People go, all right, let's do this. We're in. We're all in. Now, to answer your question in a more tactical way, I think, looking at the organizations that are going to be really successful with scaled AI in production, their architecture,

[00:24:59] coming back a bit to adaptability quotient, their architecture is going to be designed for orchestration. It's not going to be designed for singular process automation. They're going to be looking at unified data strategies. And thankfully, I think data is the new bacon, or not-so-new bacon. I think that's going to have a bit of a comeback, and people are really starting to recognize the importance of data. Governance is going to be embedded within. Again, AI is not going to be a separate

[00:25:25] department with a separate group of people in separate pods running separate agile sprints to build something. It's all embedded within the day-to-day. And I think another big one is workflows are going to be reimagined, not just given an automation layer to say, this is a process that exists today, we're just going to automate it and give it to an agent to do. That's not what we... Does marketing need to be called marketing in the future? Does sales need to be called sales in the future?

[00:25:52] Can we take processes that existed in departments and actually blast through them, while still maintaining separation of duties, even if it's an agent, but thinking bigger and broader about what an operating model and org structure looks like? And the last is really around having clear economic measurement of digital labor. There's a lot of experimentation, but we've got lots of metrics with which to measure success: efficiency, effectiveness, experience, risk and control metrics. And I think that's really going to be the difference.

[00:26:21] And then on the flip side, you've got the laggards that are still probably going to be comparing model benchmarks, or running very tailored and targeted pilots, or debating what security frameworks they should have. So they may be treating AI as innovation theater, as opposed to saying, this is growth, this is embedded within our strategic priorities, this is going to fuel where our company's going in the future. It's not just a shiny technology thing over there. Wow.

[00:26:50] So many powerful takeaways there for listeners. I'm going to put my virtual crystal ball away and replace it with a virtual soapbox now for you to stand on. And let's try and lay to rest some myths and misconceptions that might frustrate you when you're browsing through LinkedIn, Reddit or wherever you hang out. I mean, what do people misunderstand most about your industry? Are there any myths about your job or field of expertise that we can lay to rest today? The floor is yours.

[00:27:18] Let me just prop up on my soapbox for this one. What I love is this idea of the MIT study that says 95% of AI pilots are failing to deliver value. Perfect. But are you telling me why? And then there's this association to say everything AI, because of that study, is bad. I go, well, no, actually, let's dissect it. That's all about generative AI. We're talking about agentic AI. We're talking about building at scale from the onset.

[00:27:46] But related and linked to that is, again, the narrative that SaaS is dead, that AI will build applications. Okay. That sounds interesting. Because when you look at the organizations making those declarations and the architecture they're declaring, they're offering companies the ability to replicate what we do. It's kind of replicating what we have. You still need to have the application. You still have to maintain it on day two. And then there's the biggest myth of all, that better models will solve it.

[00:28:15] No, they're not going to solve design debt. Absolutely not. Or that AI strategy is an IT project. Not at all. It's an operating model transformation that goes well beyond just a technology project over here. Or another one is around agentic AI will replace humans. It's redistributing human judgment. It is absolutely not replacing humans. It will upskill humans.

[00:28:40] And I think a lot of what's necessary to understand is that models are great. LLMs are great. But they need a shell. They need a body. They need a space and a place to operate and execute. The model will be able to engage in digital form, think of it as a hologram, with customers and colleagues. And that's what we're offering to our customers: bring your own model, bring your own data lakes and warehouses, bring your own component parts.

[00:29:08] We act as a shell, a very sophisticated shell, that will allow you to do so much more and engage across the customer and the employee plane with digital labor like no organization has ever been able to do before. Wow. Powerful stuff. How much better does that feel to get that off your chest? I righted a few wrongs there. And for anybody listening, want to dig a little bit deeper on anything we talked about today

[00:29:34] from those eight design principles for the agentic enterprise, the myth busting and so many big talking points there. Where would you like to point everyone? Well, join me. Join me on LinkedIn. I do try to share a lot of this content, and it's generally thought leadership, not in the space of sales and products, but really about these concepts: agentic maturity models, the eight design principles, how great organizations are looking at cross-functional steering committees.

[00:30:01] So I'd love for your listeners to come join us, listen to us, you know, explore Salesforce. Another piece is a call to action to anyone that's listening. Do me a favor. And just for fun, look up what's architecturally different between predictive, generative and agentic AI. And I think that might spark a thought around why Salesforce is playing in this space so prominently.

[00:30:27] And I think that even just a bit of research like that would help. But we have World Tours. So we have our big Dreamforce conference once a year, but we host World Tours, which are like mini Dreamforces around the world. Please look out for us. Oftentimes you'll see me, my colleagues, my peers on stage helping customers and prospects understand where we're headed into the future. Look for a World Tour near you. You'll find richness in that. Particularly even just come and watch the keynote. It's a lot of fun. I'll look for you.

[00:30:57] So much gold advice in our conversation today from you. So thank you so much for sitting down with me. I think many people listening are feeling that pressure to do something with agents. But as you said, the real blocker isn't model capability, but design. And with this upcoming framework focusing on eight design principles for the agentic enterprise, I would advise everyone, as you've said, to go out there and learn the differences. I'll include links to everything, including the mini Dreamforces and Dreamforce, et cetera, and your LinkedIn.

[00:31:25] So go to the useful links in the show notes here and on my website, techtalksnetwork.com. You'll find all the information you need. But thank you for having a bit of fun with me today and sharing your story. Thank you. Thank you so much. This is fabulous. A massive thank you to Shibani there. I think, for me, listening to her, the conversation was just packed with insight. Whether it be busting the myth that better models alone will fix everything, to showing

[00:31:53] how agentic AI becomes real when it's embedded into workflows, culture, and leadership priorities. It's not just a technical conversation. And that's why I think today's conversation offered a roadmap for any organization that wants to move from experimentation into measurable impact. And for anyone listening, if you want to explore the eight design principles for the agentic

[00:32:19] enterprise, the agentic maturity model, or see much of what we talked about today in action, remember, you can connect with Shibani on LinkedIn, where she shares regular thought leadership articles and posts. And you can also learn more about Salesforce at Salesforce.com and experiment firsthand at a Salesforce World Tour or Dreamforce, of course. So head over to techtalksnetwork.com.

[00:32:44] You'll find 4,000 interviews there where you can catch me on the road, how you can leave me a message, and all those links I just mentioned. But as always, thank you for listening. Thank you for making it to the end of this episode. And I cordially invite you to join me again tomorrow, where we have another guest lined up, who will also offer actionable insights and valuable takeaways. Meet me here. Same time, same place tomorrow. We'll do it all again. Bye for now.