What if the most important jobs of the next decade already exist, but we just have not named them yet?
In this episode of AI at Work, I sit down with Marinela Profi, Global Market Strategy Lead for AI Agents and Generative AI at SAS, to unpack how artificial intelligence is reshaping work at a deeper level than most headlines suggest. We are not just talking about tools, automation, or faster workflows. We are talking about new roles, new decision structures, and a fundamental shift in how humans and machines collaborate inside modern organizations.
Marinela brings a grounded, enterprise-tested perspective to agentic AI, cutting through the confusion that still surrounds the term. She explains why large language models are not agents, why autonomy is often misunderstood, and why most successful AI systems will always keep humans in the loop. We explore how agentic systems differ from traditional AI, how deterministic guardrails and probabilistic models must work together, and why governance needs to be designed into systems from day one rather than bolted on later.

One of the most compelling parts of this conversation is our discussion on future roles. A few years ago, no one imagined titles like cloud governance architect. Marinela explains why roles such as AI decision designers and AI experience designers are likely to follow a similar path. These are not abstract ideas. They are practical responses to real challenges organizations face as AI systems begin to act, decide, and operate at scale.
We also dig into where teams tend to go wrong. Too many organizations rush from pilots to hype without addressing data readiness, orchestration, or accountability. Marinela shares real examples from regulated industries, including banking, insurance, telecoms, and manufacturing, where agentic AI has moved from experimentation into production by focusing on decision workflows rather than flashy prototypes.
This is a conversation for CIOs, CDOs, business leaders, and professionals who want to understand what AI means for work beyond surface-level narratives. It is also for students and early-career listeners who want to prepare for roles that are still taking shape, but will soon be unavoidable.
If AI is becoming an expected skill rather than a specialist one, how do you prepare yourself and your organization for work that is already changing in front of us?
I would love to hear your thoughts after listening. Where do you see human judgment becoming more important as AI systems grow more capable, and which future roles do you think we will be talking about next year?
Useful Links
Connect with Marinela Profi
Thanks to our sponsors, Alcor, for supporting the show.
[00:00:06] What if the most important jobs of the next decade haven't been named yet? What if they don't even exist yet? I think that's the reality that people of all ages are trying to get to grips with at the moment. Because AI is already changing how work gets done. But something even deeper is happening beneath the surface. New roles are beginning to form around how humans collaborate with intelligent systems, how decisions are designed, and how responsibility will stay firmly in human hands,
[00:00:36] even as automation accelerates. That's all a big part of the AI conversation right now. And my guest today is a big part of that shift. She's the Global Market Strategy Lead for AI Agents and Generative AI at SAS. She spends her days helping leaders move beyond pilots and prototypes and into real systems that people can trust, govern, and actually use.
[00:01:01] So today we're going to break down agentic AI without all the noise, without all the hype. Instead, you're going to hear why large language models are not agents, how autonomy really works inside serious organisations, and why the human role is shifting from doing tasks to shaping decisions.
[00:01:23] And if you care about where work is heading, and how leaders should prepare their teams, or what skills still matter when AI becomes expected as part of their everyday role, this episode is for you. But before I get my guest on today, I want to give a quick thank you to my friends at Denodo, who are playing a big part in supporting this show.
[00:01:48] Because one of the questions I hear more and more from listeners on this podcast is, why does AI succeed, or why does it fail? Because let's be honest, AI is moving fast, but success is often still elusive. Now, most projects fail not because of the AI, but because the data foundation isn't ready. This is why organisations are increasingly turning to Denodo.
[00:02:14] Denodo delivers trustworthy and AI-ready data without the need to copy it everywhere. So if you're ready to understand why your AI projects fail, and how to succeed with AI, simply visit denodo.com and take control of your data world. But enough scene-setting from me; let me officially introduce you to my guest. So a massive warm welcome to the show.
[00:02:41] Can you tell everyone listening a little about who you are and what you do? Hey, Neil. Thanks for having me. Pleasure to be here with you on the podcast today. I am Marinela Profi. I currently work as the global market strategy lead for agentic AI and generative AI at SAS. What that means is that I use a combination of industry, technology, and domain expertise to essentially drive the market strategy,
[00:03:10] the planning, the messaging, and content for the agentic AI and generative AI topics at SAS. I have a pretty hybrid background. I've done several roles over the past almost 10 years. But today, my main focus is all about market strategy. What do we build? How do we launch it? How do we message it? How do we meet our customer needs, our partner needs, when it comes to agentic AI and generative AI features?
[00:03:39] And I think when people hear agentic AI, many will still picture advanced chatbots, or just leap to that assumption. So from your work at SAS, what actually separates agentic systems from traditional AI? And why does that distinction matter for how work gets redesigned inside enterprises? Because there are so many misconceptions out there on what agentic is and what it isn't. But tell me what you're seeing. Yeah, that's a great question.
[00:04:09] I'll keep it very simple. When we talk about traditional AI versus agentic AI: traditional AI systems are reactive and very task-specific. They either predict an outcome, or you ask a question and they give you an answer. That's it. When we talk about agentic systems, they are systems that are goal-oriented. So they are able to act. They don't just predict.
[00:04:37] They plan and they use tools to take an action, to reach a specific objective. And I can give you a real-world example. You can use traditional AI to forecast the weather or detect fraud. Or you can open ChatGPT, which is a generative AI tool, to ask, hey, help me write an email to invite Neil for a coffee chat.
[00:05:00] But if you ask either of those systems, when is my coffee chat with Neil, it's going to say, I don't have access to your calendar. With agents, agents are able to access tools. So it's going to be able to access your calendar, set up the invite, send a reminder, draft the email, and send it for you. So we really are entering the space of these active, artificial companions that are able to do things for us.
[00:05:28] So why is this important for how work gets redesigned? It's because in enterprises, this shifts the human role entirely, from a doer to an orchestrator. So basically, instead of redesigning a single task, now humans are going to redesign workflows where AI manages the different subtasks.
[00:05:59] You've been very clear, LLMs, they're not agents. And for leaders who are under pressure to move fast, how should they be thinking about combining deterministic systems with probabilistic models in a way that stays reliable and stays governable? Because it seems that leaders are balancing so many different areas now here. How do they get that balance right? Well, absolutely. So this is a distinction I repeat often. LLMs are language prediction engines.
[00:06:29] They're not decision makers. So basically, an agent is really a combination of an LLM with something else. Now, what is that else, right? You need more than just an LLM because alone, large language models are not enough because they are stateless, they're passive, they cannot act unless they're prompted. They lack memory, they lack goals, they lack autonomy.
[00:06:54] And so really, the distinction for leaders is that the answer is not LLM or no LLM. It's how you frame the system around them. So leaders need to think about large language models as reasoning engines, right? So those are the systems that are able to understand the request, split the request into certain subtasks. But then the real power comes when you combine those with deterministic systems.
[00:07:24] Deterministic systems can be tools, can be deterministic guardrails, like, for example, business rules, optimization models, or domain-specific logic that allow you to ensure consistency and accountability. Okay? So that's what really makes something agentic: not just generating the output, which is the LLM part, but doing something with purpose, knowing the specific data of the enterprise, knowing the specific business processes, rules, accountability.
[00:07:55] So really using the creativity and flexibility of LLMs with decision systems that you can audit, test, and trust. And that's why deterministic guardrails have become so important today that you really need to combine those with large language models to be able to have an agent that actually works for an enterprise and that you can actually trust. And inside the workplace, it can be incredibly difficult. There's a lot of scary stories out there.
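The pattern Marinela describes, a probabilistic model that proposes while deterministic guardrails decide, can be sketched in a few lines. This is an illustrative sketch only: the function names, thresholds, and rules are invented for the example, and the LLM call is stubbed out; it is not any SAS or vendor API.

```python
def llm_propose(request: str) -> dict:
    """Stand-in for an LLM call: returns a proposed action with a confidence.

    In a real system this would call a language model; here it is stubbed
    so the guardrail logic can be shown on its own.
    """
    return {"action": "refund", "amount": 120.0, "confidence": 0.82}

# Deterministic guardrails: auditable business rules the LLM cannot override.
BUSINESS_RULES = {
    "max_auto_refund": 100.0,  # hard spending limit for autonomous action
    "min_confidence": 0.75,    # below this, always escalate to a human
}

def decide(request: str) -> dict:
    proposal = llm_propose(request)
    if proposal["confidence"] < BUSINESS_RULES["min_confidence"]:
        return {"route": "human_review", "reason": "low confidence", **proposal}
    if proposal["action"] == "refund" and proposal["amount"] > BUSINESS_RULES["max_auto_refund"]:
        return {"route": "human_review", "reason": "amount over limit", **proposal}
    return {"route": "auto_execute", **proposal}

result = decide("Customer asks for a refund on order 4521")
```

Here the LLM's proposal exceeds the refund limit, so the deterministic layer routes it to a human rather than executing it, which is exactly the "creativity plus auditable decision systems" combination described above.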
[00:08:25] And I recently had a guest, Nicole, from Monday.com, and she was saying that AI is enabling people to step up, not step aside. And new roles are starting to appear around how humans work with AI rather than how AI replaces humans. And so it's very refreshing to hear this side of the story. So what would you say are the most realistic emerging roles that you're seeing? And what skills should professionals start building now if these job titles don't exist yet?
[00:08:53] We could have people listening in full-time education that are studying for roles that don't exist yet. Yeah. Well, I think, you know, I would like to start by saying that a few years ago, no one imagined a cloud governance architect. So in a few more, we'll wonder, in my opinion, how we ever operated without an AI ritual designer or an AI decision designer.
[00:09:21] So I'll tell you in a moment what that role is. But the main point is that we're not just creating tools with artificial intelligence. We're really shaping intelligent collaborators and experiences that are going to redefine, first of all, how we live, but also how we work. So we're not just changing what we do. It's changing who does it, how we do it, and even why we work the way that we work, right?
[00:09:51] So traditional job descriptions are going to dissolve, and we're already seeing that somehow, into some fluid roles where humans and machines collaborate dynamically. So I like to call it almost like the co-pilot economy, where the most valuable employees are the ones that know how to delegate to AI and know when not to delegate to AI, right? Because that's another thing, right?
[00:10:18] It's not just, oh, now I can have AI do everything for me, but also knowing when not to delegate it. So, for example, one of the roles that I foresee becoming mainstream is the AI decision designer.
[00:10:31] So as we enter the phase where AI is going to make decisions for us and act on those decisions, it's going to be really important to have someone who shapes how AI makes or recommends choices in regulated environments. And that cannot be a machine. That has to be a human.
[00:10:54] Maybe we already have people doing this in some enterprises that are actively experimenting with more frontier, innovative use cases. But that role hasn't been officially recognized as important or officially formalized yet, right? Someone like an AI decision designer. Somebody else could be an AI experience designer.
[00:11:21] That's somebody who crafts how humans interact with AI within a company, across different channels, different tones, user flows. Right now, we're just seeing single or individual departments within companies establishing centers of excellence or pilots. They just run or test specific use cases, whether it's in marketing, sales, or customer support.
[00:11:48] But once those become the norm, how are all those use cases going to interact together? How are they all going to live in the same place, in the same company? That's where the need arises for an AI experience designer within a company, across different departments. So these are just some examples of roles that I see.
[00:12:14] But the common denominator, to my opinion, is that AI is leveling the playing field between technical and non-technical workers. So you're going to have a lot of data scientists, for example. I am a data scientist. That's my background. That's how I started working in the tech world as a data engineer, data scientist. You no longer need to be a data scientist to build or interact with AI.
[00:12:39] And so this is going to lead to what I call the rise of the dual-fluent workforce. So you're going to have marketers who understand AI-powered personalization. You're going to have teachers who co-create curricula with generative models. You're going to have HR leaders who build agent workflows for onboarding. So what stays important?
[00:13:04] I think things like data literacy, the ability to translate real-world business problems into decision flows that AI can support, and systemic thinking. We don't talk a lot about systemic thinking, which is about understanding how data flows between autonomous components
[00:13:29] and where the failure points are in a process that is not going to be linear anymore when we think about a job, whether it's product marketing, whether it's HR. And this is a little bit like what happened with the internet in the early 2000s. Nobody could tell you why it was important for you to have a website. But everybody would tell you, you need a website, otherwise you're going to be dead.
[00:13:53] And a similar thing is going to happen with AI, where you'll no longer see somebody looking at your resume asking, do you know how to use the internet? The same thing is going to happen with AI. It's going to be an expected skill that you're going to have to already know, similar to how you know how to use PowerPoint or the internet. It's going to be expected. 100% with you.
[00:14:20] And over the last three years, we did see many companies jumping on the Gen AI bandwagon and thinking tech first, problem second. And as a result, many struggled to find ROI from their tech investments. And many organizations always rush from pilots straight into the hype, wanting to be part of the narrative. So based on what you're seeing globally, though, where do teams most often misjudge their readiness? Whether that's data maturity, orchestration or governance.
[00:14:48] And how can they course correct early? Any signs they can look out for this time around with agentic AI? Well, absolutely. And I've seen this pattern of rushing from pilots straight into hype again and again.
[00:15:07] And, you know, we're seeing many statistics out there from analysts and big firms predicting that in 2026, we're going to see a lot of proofs of concept for both agentic AI and generative AI being abandoned. So the most common gap, I would say, is never technical. It's always thinking of artificial intelligence as a feature rather than a system.
[00:15:34] The moment you as a C-level decide that you want to invest money in artificial intelligence, you are not investing money in a feature. You're not investing money in a tool. You're not investing money in a piece of software that you're going to buy and plug into your enterprise. You are investing in an entire system that is going to completely change how you operate. It's going to change your competitive positioning in the market.
[00:16:02] And what I've seen is that teams often start with a flashy prototype, whether it's, oh, let's build a chatbot. Let's build a model. Let's build a dashboard. And then once they've done that, they're like, OK, we're done. This is victory, we're successful. Now let's present this to the board. And from there, what? They often don't have an answer to, OK, now what?
[00:16:27] When it comes to scaling, they realize that whatever they've built lacks data lineage. It lacks explainability. It lacks performance monitoring, which is the boring but essential infrastructure. To me, one of the big things that is often misjudged is data orchestration, which sounds so boring. But it's really about this: you have an enterprise, and it has the data.
[00:16:57] But it's not agent ready, meaning it's not accessible via clean APIs or it lacks the metadata an agent needs to understand the context. Because a big misconception is that people think that agents just magically understand what they need to do and how they need to operate.
[00:17:20] But the reality is that what truly differentiates a successful agent from one that won't be successful is its ability to understand the context, and the quality of the tools it has available to use to reach that specific goal, to perform that specific action, to make a specific decision. And so the data orchestration piece is often misjudged.
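One way to make the "agent-ready data" point concrete: instead of a raw table, the agent is handed a tool with machine-readable metadata describing what it returns and when to use it. The schema and field names below are invented for illustration; they are not a real SAS or standard format.

```python
# Illustrative tool definition: the metadata gives an agent the context it
# needs to know when and how to call this data source.
CUSTOMER_TOOL = {
    "name": "get_customer_profile",
    "description": "Returns tenure, plan, and open support tickets for a customer ID.",
    "parameters": {"customer_id": "string"},
    "returns": {"tenure_months": "int", "plan": "string", "open_tickets": "int"},
}

def get_customer_profile(customer_id: str) -> dict:
    """Stub for the governed API behind the tool definition above."""
    # In production this would hit a clean, access-controlled API endpoint.
    return {"tenure_months": 18, "plan": "premium", "open_tickets": 2}

profile = get_customer_profile("C-1001")
```

The point is that the agent never has to "magically understand" the data: the description and schemas carry the context that raw tables lack.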
[00:17:51] Now, the way to course correct early, I would say, is to design for governance from day one. By asking: if this works, how will it be used? Who will govern it? What happens when the model drifts? Because it is going to drift. So start with minimum viable governance. Don't wait for your legal department to come to you with a 50-page policy.
[00:18:21] Build a technical sandbox with automated logging and audit trails from day one. So just think governance by design, not as an afterthought, not as, okay, we've built a chatbot, now we need to govern it. So much golden advice there. And I think autonomy is one of the most misunderstood areas in AI.
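The "automated logging and audit trails from day one" idea can start as small as wrapping every agent action so it leaves a trail. A minimal sketch, with placeholder names; a production system would write to append-only, tamper-evident storage rather than an in-memory list:

```python
import time
from functools import wraps

AUDIT_LOG: list = []  # stand-in for append-only audit storage

def audited(fn):
    """Wrap an agent action so every call leaves an audit-trail entry."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        entry = {"action": fn.__name__, "args": repr(args), "ts": time.time()}
        result = fn(*args, **kwargs)
        entry["result"] = repr(result)
        AUDIT_LOG.append(entry)
        return result
    return wrapper

@audited
def send_retention_offer(customer_id: str, offer: str) -> str:
    """Example agent action: queue an offer for a customer."""
    return f"offer '{offer}' queued for {customer_id}"

receipt = send_retention_offer("C-1001", "10% discount")
```

Because the decorator is applied at the point each action is defined, governance is baked into the sandbox rather than bolted on after the chatbot ships.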
[00:18:45] So for your customers that are listening, or potential customers, how do you help leaders think about autonomy levels in practice, deciding what machines should decide, what humans must retain, and how that balance evolves over time? Any advice that you would offer here for people listening? Yeah, autonomy is a very fascinating concept. Because it's really the one thing that typically comes to mind when people think about AI agents.
[00:19:12] Oh, AI agent means a fully autonomous magic unicorn that's going to solve all my business problems. The reality is that what we're seeing is that almost 99% of agents are never going to be fully autonomous. They're always going to have some human in the loop as part of the process.
[00:19:34] And only for maybe 1%, or a really small percentage, of use cases, and sometimes they are the most boring and repetitive ones, can you realistically achieve full autonomy without any human in the loop. And again, it's just for very small tasks. So this is my advice to enterprises: never aim for full autonomy.
[00:20:02] And most importantly, don't think that you are failing if you build an agent that has a human in the loop. It just means that you are dealing with high-risk or high-stakes decisions, like loan approvals or medical diagnoses. For those use cases, you still need human oversight. You still need a human in the loop. So don't think those need to be completely autonomous.
[00:20:29] Actually, that is the recipe for disaster. Versus if you have low-risk, high-frequency, or highly repetitive decisions, and there are some use cases on this, AI can and should act more autonomously. But those are very, very, you know, small, highly repetitive use cases.
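The risk-based split described above (human in the loop for high-stakes decisions, autonomy only for low-risk, high-frequency ones) can be expressed as a simple routing rule. This is an illustrative sketch of the policy, not a prescribed one; the tiers and examples are assumptions for the example:

```python
def autonomy_route(risk: str, frequency: str) -> str:
    """Decide how autonomous an agent may be for a class of decisions."""
    if risk == "high":
        return "human_in_the_loop"  # loan approvals, medical diagnoses
    if risk == "low" and frequency == "high":
        return "autonomous"         # boring, highly repetitive tasks
    return "human_in_the_loop"      # default: keep a human in the loop

# Illustrative decision classes routed through the policy.
routes = {
    "loan_approval": autonomy_route(risk="high", frequency="low"),
    "invoice_categorization": autonomy_route(risk="low", frequency="high"),
}
```

Note the rule of thumb is encoded in the default branch: when in doubt, the human stays in the loop.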
[00:20:56] Because the rule of thumb is generally always put the human in the loop. I just want to give a big thank you to my sponsor who is supporting every show, every episode across the Tech Talks network. And this month, I'm proud to be partnering with Alcor. And anyone who's tried to scale an engineering team across borders, they will know firsthand how messy it can get. Because they deal with endless providers, then there's confusing rules to deal with in each and every region,
[00:21:25] and fees that always seem to surface at the last minute. Now, Alcor, they solve that by acting as a partner rather than just an intermediary. And they focus on tech teams that expand in Eastern Europe and Latin America, and they bring employer of record services together with recruiting. So, essentially, they help you pick the right country, source the right engineers, and assess them properly. And then get them active for you and your company within days.
[00:21:54] And one of the things that stands out for me is the financial transparency. Around 85% of what you pay goes directly to your engineers. Their fee goes down as your team grows. And if you ever wanted to bring your team in-house, you do so with no exit costs. That kind of clarity is why Silicon Valley startups, including several unicorns, have chosen Alcor.
[00:22:18] And you can find out more by simply going to alcor.com slash podcast or follow the link in the show notes below. And you work closely with so many customers across a myriad of industries, from banking and insurance to manufacturing and so many more. So, I don't expect you to name any names here, but just to bring to life the kind of thing that we're talking about,
[00:22:42] can you share an example where agentic AI really moved from experimentation into production? And what made that transition successful? What did they measure? How was it deemed a success? Any stories you could share there? Yeah. I can share two or three stories. So, one example that stands out is from a large telco customer we're working with. They were working to reduce customer churn.
[00:23:10] They were already using some traditional AI. They were using a predictive model to predict, basically, who was likely to leave. But they quickly realized that insights alone weren't enough. And so, we helped them build an agentic workflow that didn't just predict churn, but also triggered the right retention offer tailored to each specific customer. So, what made it successful wasn't just the AI part.
[00:23:39] It was, again, the decision orchestration layer. So, combining machine learning with business rules, offer constraints, eligibility logic, all governed centrally. So, the agent wasn't just acting; it was acting strategically and within bounds. So, again, going back to that combination of LLMs, probabilistic and deterministic models, that was the success factor.
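The telco example pairs a probabilistic churn score with deterministic eligibility rules. A simplified sketch of that decision orchestration layer, with a stubbed model, made-up offers, and made-up thresholds (none of this is the customer's actual logic):

```python
def churn_score(customer: dict) -> float:
    """Stand-in for the predictive model: higher means more likely to leave."""
    return 0.9 if customer["support_calls"] > 3 else 0.2

# Centrally governed offers: (offer, minimum tenure in months) eligibility
# is pure business rules, not model output.
OFFERS = [
    ("free_upgrade", 24),
    ("10_percent_discount", 6),
]

def retention_action(customer: dict):
    if churn_score(customer) < 0.7:
        return None  # not at risk: the agent does nothing
    # Deterministic orchestration: pick the first offer the customer is
    # eligible for under the centrally governed constraints.
    for offer, min_tenure in OFFERS:
        if customer["tenure_months"] >= min_tenure:
            return offer
    return "escalate_to_agent"

action = retention_action({"support_calls": 5, "tenure_months": 10})
```

The model decides *who* is at risk; the rules decide *what* the agent may do about it, which keeps the action auditable.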
[00:24:07] Another example was fraud detection. SAS is very big in highly regulated industries. One of our customers was already using traditional AI to flag fraudulent transactions. With the agentic piece, instead of just flagging a transaction, the system identifies a fraud,
[00:24:29] gathers the necessary customer history, drafts the notification email, and prepares the regulatory filing for a human to approve or reject. So it's a more comprehensive process. Again, human in the loop. And I think what made it succeed was that it was integrated into the existing SAS ecosystem
[00:24:53] and into the existing customer ecosystem they already had, not as a standalone, quote-unquote, cool tool they had to build from scratch. It was built by design as integrated into the SAS system. Last, a very boring one, highly automated: internal ticket triage. Believe it or not, this is one of the use cases where agentic AI has a lot of power.
[00:25:22] We had a customer that basically receives internal tickets through an IT system. And they utilized a mix of SAS, open-source tools, and LLMs to identify similar previous cases and send a response, either asking clarifying questions or providing solution steps. And this reduced the average time to solution and freed the team up for more meaningful work,
[00:25:48] as they weren't just an internal service desk; they were more of a customer service function. And I absolutely love that. And I think you called it a boring example. But I think that example will be a massive, massive thing for people listening, because anything that can help pave the way to quicker solutions and give service desk teams time to work towards value-add activities,
business value rather than just firefighting, has to be a great thing. And I would argue that over the last few years, that's possibly where many enterprises have gone wrong. They've gone after the shiny, exciting things, and it's the boring stuff that makes the difference, isn't it? That's exactly right. Yeah, don't go after the shiny things. Just identify the very highly repetitive ones and start from there. And, of course, we've got to mention there's a growing concern in some circles
[00:26:42] around things like ethics and trust as AI systems gain more agency. So from your perspective, what does responsible governance actually look like day-to-day, not as policy documents, but as operational habits inside teams? Again, this goes back to thinking about governance from day one and making the conversation around responsible AI real. From conversations that I've had with customers,
[00:27:10] I would say that ethical AI isn't just a technology. It's really a practice. And it's a culture, right? It's something that needs to be embedded within the culture of the organization. And an example of what that looks like day-to-day: are your model builders required to document assumptions and bias risks?
Do your business users have access to explainability tools that make decisions interpretable in their own language? Because, again, I work with global customers, and it's not just English. Sometimes we take that for granted. I had a customer that told me, we never deploy a model unless we can explain it in plain business terms. That came from a life sciences customer who builds clinical trial optimization models.
[00:28:07] For them, regulatory bodies don't accept black-box AI. So they've made it a non-negotiable habit: every model must include an auto-generated explainability report that a medical director, not a data scientist, can understand. So that's an example of how you make it a habit, baked into how you build and deploy AI.
[00:28:34] Or, for example, a public sector client in Europe, talking about bias detection. They said bias is something that we test for before a single line of code is even written. So what they would do, basically, is that before even selecting features or training data, their team would hold what they call a bias planning workshop,
[00:28:58] where they proactively flagged which variables could encode systemic bias, things like zip codes, names, school districts, and more. And they would do that for everything. So that's governance at the design level, right? Not after the fact.
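A bias planning workshop like the one described could be backed by something as simple as a maintained list of known proxy variables, checked before any feature makes it into training. A sketch, with an illustrative proxy list (a real one would be maintained per jurisdiction and domain):

```python
# Variables known to encode systemic bias in this (hypothetical) domain.
PROXY_VARIABLES = {"zip_code", "name", "school_district"}

def bias_planning_review(candidate_features):
    """Split candidate model features into flagged proxies and cleared ones."""
    flagged = sorted(f for f in candidate_features if f in PROXY_VARIABLES)
    cleared = sorted(f for f in candidate_features if f not in PROXY_VARIABLES)
    return {"flagged": flagged, "cleared": cleared}

review = bias_planning_review(["zip_code", "income", "tenure", "name"])
```

Running this before feature selection is the code-level analogue of the workshop: the flags exist before a single line of model code is written.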
[00:29:19] So these are just some examples on how ethical AI needs to be considered and treated as a cultural aspect of an organization. And there are so many big takeaways that you've given listeners today. And I'd love to give them one more. So if we have a CIO or a CDO listening to this episode today, what would you say are the three concrete moves that they should be making in the next six to 12 months?
[00:29:45] What can they walk away with and finally shift from isolated AI pilots to systems that deliver sustained business value? I appreciate that. It's possibly an episode entirely on its own. But any quick takeaways you'd offer people listening? I would say first, build a decision inventory. I am very big on decisions. Map out the top 20 decisions that are driving business value for you,
[00:30:10] whether it's marketing, operations, or risk, and assess how data and AI could augment or automate those. So build a decision inventory, then invest in orchestration. So don't stop at the model. Don't just think about, oh, what's the model that can infuse AI or automate this? Don't stop at that. Ensure you have the tools to govern and orchestrate the flow from data to decisions to action.
[00:30:35] And then three, always create cross-functional AI pods. So what I mean by that is we need to start blending data scientists, IT, business subject matter experts, and compliance into agile teams, not just to pilot, but really to scale. So rethink a little bit, like be ready and be open to rethinking the organization chart.
[00:31:02] Because it matters, and it is as important as the tech stack. So don't overlook it or make it an afterthought, right? Sometimes we think, oh, just because it's marketing, then I need all the marketers to be part of that team. No, we need to start getting more comfortable with blending cross-functional AI pods when we want to run pilots or to scale.
[00:31:28] So I would say these three things: build a decision inventory, invest in governance and orchestration, not just modeling, and create cross-functional teams. Fantastic. And you've given everybody listening so much today. I'm going to cheekily ask, on a personal level, for one final gift. And that is a book that you'd recommend that we can add to our Amazon wishlist, or a song that means something to you.
[00:31:53] It could be your walk-on song or a guilty pleasure that we can add to our Spotify playlist. I don't mind which, but what would you like to leave everyone with, and why? Absolutely. Yeah, I'd love to leave both, a book and a song. Yes. Because they speak to two sides of who I am. The book is The Mountain Is You by Brianna Wiest. This book hit me in a moment of deep transformation. I moved across the world,
[00:32:21] from Italy to the United States, built a career in AI, and often felt I had to perform strength to succeed. So this book really helped me understand that self-sabotage is often self-protection in disguise. I recommend it to anyone going through big change, or anyone who secretly feels exhausted.
[00:32:49] When it comes to the song, I would say Rise Up. This is an old song by Andra Day. It has been a companion in so many chapters of my life, from career moves to personal setbacks. I always play it when I need to remind myself of my own strength. So it's not just about resilience.
[00:33:12] It's really, this song teaches that it's about choosing to rise with grace, even when, and especially when no one sees it. So I would say these are my two recommendations. Oh, fantastic recommendations. I echo what you said on the book. First of all, that's currently the soundtrack to my dog walk. I listen to it on audible at the moment. So it's a great choice and a great song too. So I'll get those added to everything. And for people listening, we've probably set up so many light bulb moments today.
[00:33:40] They're going to want to find out more about you and your work. Where would you like to point everyone listening? Yeah, sure. So I'm most active on LinkedIn. Just search Marinela Profi. You'll find my blog posts and updates that I post from the AI world. I'm also about to speak at TEDx Harvard in February. So I'm super excited for my first TEDx talk.
[00:34:04] If you want to learn more about the work that we're doing at SAS and how we are powering agentic AI and trustworthy decisioning systems, you can visit sas.com slash agentic AI or reach out to our team directly. We're always happy to share what we're building and learning, and how we can grow together. Well, I'll be adding links to everything that you mentioned there.
[00:34:29] So much valuable information that will be incredibly valuable to business leaders listening around the world. Best of luck on your upcoming TED talk. Be great to get you back on later in the year. See how that goes. But more than anything, thank you for sharing so much of your time today. Really appreciate it. Thank you for having me. Bye, Neil. So if you've been wondering whether AI is pushing people out of the picture or quietly pulling them into more meaningful work, I think today's conversation should leave you with somewhat of a clearer answer.
[00:34:59] And what my guest shared today goes far beyond theory. You heard how agentic AI works in practice, why most systems should never aim for full autonomy, and how governance becomes a daily habit rather than just a document filed away that nobody reads or knows where to find. So I think we got a rare look at future careers before they're officially written into job descriptions, along with some practical advice on what to learn now while these roles are still forming.
[00:35:29] And my big takeaway from all this is simple and reassuring. The organisations that win with AI, they're not the ones chasing shiny demos. They're the ones that understand decisions, design collaboration between humans and machines, and treat trust as something that's built into the system right from day one. But over to you.
[00:35:52] If this episode sparked any ideas, or raised any questions about your own role, your team, or your leadership approach, first of all, share it with someone who's thinking about the future of work right now. And if you want to keep that conversation going, check out Marinela's work. I'll include links to everything that she mentioned. And finally, I'd like to ask you a question.
[00:36:12] If these new AI-driven roles are already beginning to take shape, what's the one skill that you're going to start building today so you're ready when they become the norm? I'd love to hear your thoughts on this one. techblogwriter@outlook.com. Techtalksnetwork.com. 4,000 interviews. So many great conversations and insights there. Send me an audio message. Connect with me on socials, just at Neil C. Hughes. So many ways you can find me, but let me know.
[00:36:40] Other than that, I do cordially invite you to join me again for the next episode. But more than anything, thank you for stopping by today. Speak with you soon. Bye for now.

