What happens when AI agents begin to outnumber humans? In this episode of The Tech Talks Daily Podcast, I sit down with Steve Lucas, CEO of Boomi, to explore this thought-provoking prediction and what it means for businesses, the workforce, and society. With extensive experience leading companies through transformative journeys, Steve brings invaluable insights into how organizations can prepare for a future dominated by AI.
Steve delves into the concept of the "AI big bang," where AI agents are set to fundamentally reshape workforce dynamics and decision-making processes. He highlights how these agents are rapidly evolving, influencing everything from customer service and supply chain management to market analysis. But with this growth comes the pressing need for transparency, explainability, and control. How can businesses avoid being overwhelmed by this AI influx? Steve argues that companies must reimagine work processes and establish robust AI governance frameworks to ensure human oversight and ethical AI deployment.
As AI becomes deeply integrated into everyday business functions, Steve outlines the critical role of protocols for future communication. The rapid evolution of technology requires clear frameworks to enable AI agents to interact in ways we can understand and control. He shares how Boomi is addressing this challenge by developing an AI governance platform designed to help organizations manage agent creation, execution, and compliance effectively.
Steve also reflects on how business leaders can stay ahead in this rapidly changing landscape. With AI set to revolutionize global learning and knowledge accessibility, continuous learning becomes essential. Drawing on advice from industry leaders, Steve emphasizes the importance of dedicating time to understanding AI technologies and using AI itself for personalized learning experiences.
This episode offers a deep look into how AI will shape the future of work, the critical need for governance, and the opportunities and risks that lie ahead. As AI agents become a powerful force in business, how will your organization adapt? Are you ready for the cognitive revolution that's already underway? Let us know your thoughts after the conversation.
[00:00:03] What happens when the number of AI agents in the world surpasses the number of humans? Today I'm going to be joined by Steve Lucas, CEO of Boomi, and they are a leader in intelligent connectivity and automation. But today we're going to be exploring the profound implications of what he calls the AI Big Bang.
[00:00:28] And as a seasoned CEO who has led multiple companies through major transitions, Steve is going to be bringing with him a wealth of experience and insights into how organizations can navigate this rapidly evolving space. So yeah, today we're going to talk about the future of AI agents, how they can redefine workforce dynamics, decision making, and productivity.
[00:00:52] And Steve is also going to share his vision for the protocols and governance that are needed to ensure these agents operate transparently, explainably, and under human oversight. We will also discuss how businesses must adapt their work processes to harness the power of AI while addressing all those IT staples: governance, ethical considerations, and global accessibility.
[00:01:19] So thanks for joining me on the podcast today, Steve. Can you tell everyone listening a little about who you are and what you do? I'm the chairman and CEO of Boomi. Boomi is a technology company that serves over 23,000 businesses all over the world. And really what we do is quite simple but important. We connect everything and everyone inside a business. You know, organizations today, they struggle with connecting data and applications and systems. As you know, it's an incredibly complex world that we live in.
[00:01:49] The average company today has over 350 cloud applications inside of their business and over 1,000, in fact, closer to 10,000 data sources. And when you're dealing with that kind of complexity, how do I orchestrate all of that, how do I connect these applications, data, APIs, and systems, that kind of becomes the question of the day. So that's what Boomi does. We empower our customers to connect all those systems, connect people to that information,
[00:02:18] and really just orchestrate your entire business. So that's what we do. Well, thank you for joining me on the podcast today. And there's so much I want to talk with you about, especially because towards the end of 2024, I think it was Gartner that made huge predictions about how it was going to be the year of agentic AI and AI agents everywhere. And one of the things that put you on my radar was I saw that you predicted that the number of AI agents will soon outnumber humans.
[00:02:47] So can you tell me a bit more about that, and what you see as the most significant implications of this shift for organizations, particularly in how they approach decision-making and workforce dynamics? Look at the Industrial Revolution. And granted, okay, it was 125 years ago, but it's still instructive. Back then, we were inventing steam-powered engines that could outperform a horse. And we marveled at that capability and what we could do with it.
[00:03:16] But what the Industrial Revolution caused is a reorganization of resources. Prior to the Industrial Revolution, there was no real assembly line. We were creating things in a bespoke fashion. If you built a car, it was one person that knew every part. And it took a long time. Henry Ford, in particular, revolutionized manufacturing by bringing an assembly line approach. And so here we are, 125-plus years later. I think it's a cognitive revolution.
[00:03:45] I really do. I think that's what's going on. What I mean by that is, you know, for the longest time we said, well, look, the Industrial Revolution, that's mechanized transformation. The robotics revolution that happened in the 70s and 80s, that's a robotic revolution. And granted, we had to watch the robots to make sure that the quality was as good, you know, as humans could deliver. And if you read any article back then, the question was always: wow, you're moving to a robotic assembly line.
[00:04:13] Will it be as good as what humans do? Yes. Yes is correctly the answer. In fact, we prefer that over bespoke. Now, here we are. We've reserved knowledge work, that notional aspect, for the gods, us humans, right? We do knowledge work. No one else does knowledge work. Yet I can take almost anything, any content, any context, and provide it to a large language model, ChatGPT, Claude, it doesn't matter.
Just provide that information. I can get a reasonably good response in near real time. To me, it's not a question of if. It's a question of when will we start to cede this work that we've reserved for the gods to these agents. Now, here's what's going to happen, though. What's going to happen is we have to rethink and answer questions like, well, what is our assembly line for knowledge?
[00:05:07] Like today, all I know is an income statement for Boomi gets produced by my finance team. I've never really looked at how optimized that work is, or exactly which humans do that work. But I can tell you, two years from now, just two years from now, very few humans will be involved in the actual construction of that income statement. They will oversee it. They won't be involved as deeply in it. They'll check the quality of it. And you can look at anything. You can look at process.
[00:05:36] You can look at workflow. You can look at knowledge work. It doesn't matter. And that water line is just going to go up. And these agents, they're going to eat it all. Wow. So with this AI big bang of sorts that is coming our way, I'm curious, how do you envision AI agents reshaping things like productivity? And what opportunities, or indeed challenges, might arise for companies aiming to integrate these agents effectively into their operations? I'm sure you've seen a few mistakes made along the way.
[00:06:06] But how do you see this happening and evolving? I will stay on safe ground right now because there's inevitably someone online that goes, but what about? And we're in that "but what about" exception phase right now. Just look at the transformation of customer service and support. That's a no-brainer, right? Even with error rates, even with hallucination, the reality is, if AI can transform customer service to the point where, when I dial a number or I'm with a chatbot,
instead of the first question being, can I speak with a human, it's actually me preferring to speak with an AI agent. I think that transformation is upon us. But that to me is kind of the no-brainer. We're going to see it start with things like customer service and support, but go all the way to things like supply chain optimization and market analysis. These are the things that are going to go, not the way of the dinosaur, but they're going to be consumed by agents. So that's happening now.
[00:07:02] And what we have to do right now is reimagine how work gets done. So if you're a CEO, and I talk to a lot of them, but if you're a CEO out there, you should be asking yourself, how do I reimagine work where humans are overseeing much of the work done by these robots, similar to the Industrial Revolution, similar to the robotic revolution? But what's different, and what's compelling, is that industrial machines over the past 100-plus years,
[00:07:30] these are limited to their physical location. AI agents, they can collaborate globally. They can collaborate instantly. They can share learning. They can coordinate decisions. They can evolve their capabilities at a rate and pace that we've never seen. So they're going to outnumber humans because of those things. That's why they're going to do it. But what we as leaders, we have to rethink how our workforce will work in this environment. But I can tell you this, the flip side is true.
[00:08:00] For every, when I say Blockbuster Video, Neil, what do you think of? Man, it's another life, isn't it? You know, having to rewind tapes and take them back to the stores. It's the cautionary tale, is it not? Right? Yeah. It's our collective, couple-of-generations cautionary tale. Ignore technology, something bad happens, right? This is an entirely different ballgame.
[00:08:27] For every one Blockbuster Video, there are going to be 10,000 in this next round: companies that just fundamentally ignored AI, organizations that didn't wrap their heads around, how do I refactor my workforce? And those are the ones that are going to get left behind, the ones that aren't thinking about refactoring their entire workforce around agents in particular. 100%. There's a long list of companies that failed to evolve digitally there.
And others that spring to mind are Toys R Us and Kodak. That list goes on and on. And I echo everything that you said there and everything that I talk about in my work life. I completely agree with how you see this playing out. But when I'm a customer and I'm going on a website and I'm trying to talk to a human being, or it doesn't know the answer to my question, I get frustrated. How long do you think we are from banishing those negative experiences that people listening might be going through at the moment
[00:09:23] when they just want an answer to a question that is not pre-programmed into their bot. I think that even if you have, and look, I know there's going to be some folks rolling their eyes going, wow, Steve's on the Sam Altman bandwagon. It's not that. It's simple math. Even if we see a modest single-digit percentage improvement in the capabilities of these models, it's certainly not a decade. It's not five years.
[00:09:49] Look, if you had asked me two and a half, three years ago, hey, how long before we get to an AI kind of agent or agentic approach in the workforce, I would have said decades. It would have been my default response. But the genie is out of the bottle. You know, Pandora's box is now open. So really now it's a rush to figure out how can we take a knowledge worker and break their job down into the tasks that are more time-consuming
[00:10:17] and don't really require a ton of cognition, irrespective of the role. You can say, Steve, building an income statement requires cognition. Not really. You know, there's kind of some straightforward math and categorization behind it. So I think as we break these knowledge worker jobs down into constituent roles, the question is what parts or where can the human add tremendous value? Where can they and how can we massively increase their productivity?
[00:10:42] So we're going to start looking at how can I take your average worker in finance and surround them with a dozen agents that are going to double, triple, quadruple their productivity. Those are the questions that we're going to start answering now. Now, the flip side to this is it's not just about are we going to reimagine the workforce? We are. We're going to create new workflows, new ways of working and all of that.
[00:11:06] But the real question is how do we create observability for the humans to watch these agents work? So if I really truly do have an AI agent building my income statement, where's the observability? Or if I have an agent totally handling all of my customer service and support, where's the observability? Look, they will drift. You know this, again, probably better than most humans.
[00:11:30] Fifty years of writing code, fifty years of software with one simple rule: it's an if-then-else statement. It's reliable. If this happens, do that. Now we're deviating from that. We're saying, oh, software, we're gifting you the ability to not do that. We're letting it make up the else part of that statement on its own.
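To make the contrast concrete: in classic software, a human writes every branch, and the same input always takes the same path; in agentic software, the "else" branch is delegated to a model. A minimal Python sketch of the idea; `call_model` here is a hypothetical stand-in for any LLM client, not a real API:

```python
# Classic software: the "else" branch is written by a human and is fully
# deterministic -- the same input always takes the same path.
def route_ticket_deterministic(ticket: str) -> str:
    if "refund" in ticket.lower():
        return "billing-queue"
    elif "password" in ticket.lower():
        return "auth-queue"
    else:
        return "general-queue"  # a human decided exactly what "else" means

# Agentic software: the "else" branch is handed to a model, which composes
# its own response. `call_model` is a hypothetical stand-in for an LLM
# client; real deployments would add the observability discussed here
# (logging, drift checks, human review).
def route_ticket_agentic(ticket: str, call_model) -> str:
    if "refund" in ticket.lower():
        return "billing-queue"
    return call_model(f"Choose the best queue for this ticket: {ticket!r}")

print(route_ticket_deterministic("I forgot my password"))  # prints "auth-queue"
```

The deterministic version can be tested exhaustively; the agentic version's "else" output can vary run to run, which is exactly why it needs governance around it.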
[00:11:53] So when we talk about an if-then-else statement, we're literally saying to software, no, if it doesn't seem right, come up with a better response. That's uncharted territory. It really is. And for those people that are listening that we were talking about a few moments ago, the next thing that they might ask is what about transparency, explainability, and control? Because all these are also critical in managing the influx of these AI agents.
[00:12:20] So we've kind of seen what happens in the past when Silicon Valley moves fast and breaks things. So can you tell me a little bit more about the importance of those principles and maybe share some practical steps that organizations can take to maintain them, especially as AI continues to proliferate? Well, it's digital observability, I think, and what we call at Boomi agent governance and AI governance.
[00:12:46] This is a sorely needed area of software that honestly doesn't really exist today, which, by the way, Boomi is endeavoring to build; we are launching our own AI governance platform in the early spring. And we believe it'll be the first in the world that's comprehensive. But all you have to do is look at any major revolution. Look at the internet. The internet, prior to the advent of TCP/IP, was just a bunch of, you know, disconnected networks.
Nothing talked to each other. But TCP/IP created this universal language for computers to communicate across networks. And look, we need those standard protocols for AI agents to interact. But you can look at anything: HTTP, you know, transforming web browsing; SMTP for email; API and REST protocols, you know, just transforming software integration. We don't appreciate all of that now.
[00:13:42] But, you know, back then it was transformative and agents will need that. How does one agent talk to another along this knowledge or cognitive assembly line that I think is going to emerge for every major task? So, look, I think that governance platform – or let me back up. Standards and protocols need to exist. Governance platforms need to exist. All of these things, it's not just how can I build an agent.
[00:14:08] Look, if you're a bank or a hospital, you can build the greatest prescription checking AI agent in the world. But what if it's wrong? What if it's wrong, right? And you can't afford – you can be 90% accurate in customer service and support. That's actually – you'll get a promotion. We will promote the AI agent from worker agent to vice president agent. No problem if it's 90% accurate.
[00:14:33] If you're 90% accurate in healthcare – and I give credit to a good friend of mine over at Constellation Research for, you know, coming up with this analogy. But, look, the reality is you can't be 90% accurate in finance. It's 100% all the time. Same in healthcare. Okay, so these governance – the protocols must exist, the governance platform. But we're just in this – we're in the create mode. How do we build these things? How do we create?
[00:14:58] And where Boomi is that company saying, whoa, okay, create is awesome, but we also have to govern, observe, and control. And I suppose as AI agents interact more autonomously as the years progress, the need for robust communication protocols and registries will become more evident than ever. So what kind of frameworks do you think organizations and industries need to be implementing to ensure seamless and secure communication between those AI agents? Anything that you see here?
[00:15:27] Yeah, well, I mean, there's a lot of really good stuff in the market. But I think – so at the highest level without getting super granular, I would start to think about three to four buckets for AI. One is obviously the creation. So those are what models am I using? What data will I feed those models? Am I training? Am I fine-tuning? Am I grounding? The vast majority of companies, they're never going to train or fine-tune a model. You're simply not going to build a model.
Well, you're not training it. You're not fine-tuning it. You're getting a model from Meta, so you're using Llama, or you're using Claude from Anthropic, or ChatGPT. You're using one of those models. Somebody else, or Google, somebody else built the model. But there is this notion of, how do I take that model and then use some of my own data to ground the model? If you look at Boomi, we have a product. We have an agent called ChatB. ChatB is literally, so we took a large language model,
[00:16:24] and we're training it on everything: all of our chat forums, support forums, our internal sales Slack logs, you name it. So me, as a new person, if I have a question about, well, our customer is asking me X, ChatB responds in real time. If I don't know the pricing of a product, it will actually tell me. So we built an agent that radically improves the productivity of all of our salespeople and support people, and that's called ChatB. Hey, that's great. But we didn't build that model.
[00:16:53] We're just – and we're not even fine-tuning it. We're just grounding it, which is take some data, give it some context. That's what we're doing is the grounding of it. Most organizations will exist there. But this agent-building area of operation, you need a whole framework. What models will you use? What data? What are our governance rules? So that's one bucket. The second bucket is going to be the actual execution operation of the agents. Where will they run?
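Grounding, as described here, leaves the model's weights untouched: you retrieve your own documents at query time and place them in the prompt as context. A rough Python sketch of the idea; the keyword retriever is a toy stand-in for a real vector store, and the final model call is left hypothetical:

```python
# Grounding vs. training/fine-tuning: the model is unchanged; your own data
# is retrieved and injected into the prompt as context.
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by how many query words they share."""
    words = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:top_k]

def grounded_prompt(query: str, documents: list[str]) -> str:
    """Build a prompt that grounds the model in retrieved company data."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

docs = [
    "Product X list price is $12 per user per month.",
    "Support hours are 9am-5pm US Eastern.",
    "Product Y is in private beta.",
]
prompt = grounded_prompt("What is the price of Product X?", docs)
# response = llm_client.complete(prompt)  # hypothetical call to Llama/Claude/GPT
```

Most organizations live exactly here: no training, no fine-tuning, just retrieval plus a prompt against someone else's model.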
[00:17:19] Boomi has something we call an agent garden, which is a secure place where you run and operate; it's a container for agents. These need to evolve. That's the second. The third is governance. How do you detect drift, that's when a model gets off target, and how do you control it? How do you detect hallucination? I was just speaking with the CIO of a very large technology company. And he said, we have over 800 AI projects slated for next year.
[00:17:49] 800 AI projects. How on God's green earth are you going to govern 800 AI projects? And let's say, which will happen, right, that you're using Llama 3 or whatever, and we discover that some coder, a developer, put an Easter egg in that model. So when you type in someone's name, it spits out something that it shouldn't. Your organization has to be able to immediately react and control that, right?
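One simple way to picture the drift detection mentioned above: compare the distribution of an agent's recent decisions against a baseline window and flag large shifts. This is an illustrative sketch only, not any vendor's actual implementation:

```python
from collections import Counter

def distribution(labels: list[str]) -> dict[str, float]:
    """Relative frequency of each outcome label."""
    counts = Counter(labels)
    total = len(labels)
    return {k: v / total for k, v in counts.items()}

def drift_score(baseline: list[str], recent: list[str]) -> float:
    """Total variation distance between baseline and recent distributions."""
    p, q = distribution(baseline), distribution(recent)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

baseline = ["approve"] * 90 + ["escalate"] * 10  # the agent's normal behavior
recent = ["approve"] * 60 + ["escalate"] * 40    # behavior has shifted
if drift_score(baseline, recent) > 0.2:          # threshold is a policy choice
    print("drift detected: pause the agent and page a human reviewer")
```

A real governance platform would layer alerting, audit trails, and rollback on top; the core idea is simply a continuous statistical comparison against known-good behavior.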
[00:18:16] So these are the kinds of things that I believe you need. So I think there's the kind of model, or sorry, agent building. There's agent execution. There's agent governance. And then your whole organization needs an AI steering committee. What standards? What do we deem appropriate? What will we cede to an agent versus a human? All of those decision-making things need to exist. And then lastly is just going to be discovery.
[00:18:42] There is physically no way for you or me or really any human to keep up with everything going on with AI right now. It's just evolving too quickly. But I think, at a high level, when you think about those categories, that's where I'd start putting resources in. And if you don't, welcome to Blockbuster Video Land. You just blew my mind with the 800 AI projects. Wow, that sounds incredible.
[00:19:07] Well, obviously what we're talking about here is the rapid evolution of AI technologies. And with that, we'll demand new approaches to communication and interactions, which is both exciting and indeed daunting to a lot of business leaders that might be listening. So how can they prepare for this fundamental shift in how AI agents and humans collaborate both internally and externally to avoid being that next Blockbuster?
[00:19:34] Understanding what your cognitive assembly lines are now. So all the things that we take for granted, especially me as a CEO, I ask a question and an answer magically shows up from a human in an email or a Slack or whatever. Understand how that information, how that value associated with the information is created, how it's derived. How is it assembled inside of your organization? You have to understand these cognitive assembly lines in your own organization. You must prepare.
[00:20:04] Now, that doesn't take a book. That takes effort. It's someone understanding workflows and processes, right? And some of those are really well-worn workflows, really well understood. Others are obscured, not well outlined. That's any organization. If anybody ever says to you or me, you know, hey, I understand every workflow and how information is created, well, come on, really? So I think that's number one.
[00:20:30] But then number two is you've got to be, I think the average CEO needs to, I'll say it this way. Never in my 30 years of software have I felt a bigger desire to learn than I have right now. Because what I see is something that will just change the very fabric of software. It will change how we think about how we build systems, how we as humans interact. And hopefully in the end that we're more productive, that our lives do get easier. I hope that as well.
[00:20:59] But I think it's the cognitive assembly line. I think it's learn every day. If you're a CEO or, you know, any executive, and you're not spending at least an hour a day absorbing information about, you know, what we're talking about, I don't see you surviving this. I really don't. So I think it's those two things. And it's consuming, you know, the right amount of information, but also determining who you're going to listen to, because there are very different views of how this stuff plays out.
There's the Marc Benioff view, which I think is really smart and brilliant and well thought out in many ways. Then there's the Sam Altman view, which is a little bit different. And I think every executive is getting a lot of information right now and kind of putting their own spin on it. So a question I've got to ask, especially as someone with experience leading companies through significant transitions throughout your career: what lessons could you share with executives who are attempting to navigate this new AI-driven transformation?
[00:21:58] Because although the technology's changed, we've been here many times before when it comes to digital transformation, haven't we? You know, some of the best advice I ever got from anyone is a gentleman named Bill McDermott. And he's the CEO of ServiceNow, used to be the CEO of SAP. And I consider him a personal mentor, but just an incredible time learning from him, observing.
[00:22:23] And, you know, this is a compliment to Bill: Bill's got that New York accent a little bit. So, you know, whenever I'd ask him a question, he would always start with, Steve, my man, you know, there'd be a little bit of that. But there was wisdom in every one of his responses. One time, and this is probably 15 years ago, I asked Bill about becoming a CEO. And I shared with him that, you know, like all of us, we tend to be a little hard on ourselves. I kind of do this.
And then he goes, Steve, I'm going to stop you right there. You be you. And that was, I'm not going to lie, talk about anticlimactic. What does that mean? But as I thought through it, and Bill did unpack it for me, it was, you know, be that first-rate version of yourself. Don't be the generic second-rate version of someone else. The second thing, though, that I've learned over the years is Bill has this extraordinary aptitude to focus on the outcome.
[00:23:20] I would tend to focus on, well, what does our software do? You know, what is the function versus the outcome? And I think in the world that we're in, you said it perfectly, Neil. We've seen this before. And we have. And to a certain degree, the question to ask ourselves is, okay, let's assume for a minute that these agentic workers do outnumber humans, whether they do or they do not. But let's say they do. What is the world like? What is the outcome of that?
How do we, is it about people losing their jobs, or is it about augmenting humans with superpowers that we never thought we'd have before? The stress that we're feeling through workforce transformation, we've seen before, through all of these revolutions, the technology one being the latest. So I actually think, I wish I was like 20 years younger. You know, it's just, man, I'm so jealous of what people get to work on right now. This is going to be exciting.
[00:24:17] You and me both, my friend. And of course, with the rise of AI agents, we're going to see more concerns around ethical and operational areas in businesses. So what role do you think regulation and governance will play in ensuring these agents are deployed responsibly? And sorry to bring the mood down a little here, but as an ex-IT guy, it's kind of in my blood. And how should businesses approach some of those considerations? I mean, there will obviously be regulation.
There will be, as is the case with many of these revolutions, there will be labor laws, safety standards. I will say, across the, you know, the large part of the world that we live in, and this is not meant to be a political comment, I think our legislators in many countries need to get profoundly more aware of what this technology can do and how it works, right? But, you know, I think that the government has a role to play.
[00:25:14] I think industry has a role to play. Like, if you're inventing, how do you self-regulate and to what degree, right? And we see technology companies kind of ebb and flow in terms of how much self-regulation they apply and to what end. You know, I was recently attending AWS re:Invent, and someone cited, as a case in point, the number of intrusion attempts that were happening on this
particular company's network five years ago versus today. It's gone up an order of magnitude because of AI, right? So the reality is, and this is going to be a circular answer to your question, we're going to have to use AI to also regulate and combat AI, right? Like for our AI governance, we're actually using AI to manage and govern AI, because it will operate at a scale, rate, and pace that we as humans just cannot match, right?
[00:26:09] And bear in mind, someone's going to invent very soon a way for agents to speak to each other that humans don't understand. That will happen, right? Agents will evolve and find more efficient ways to communicate. So we need to be able to understand how do we govern all this? This is not science fiction. It's science fact. It's just, it really comes down to the function is just T. It's time. That's all this is.
[00:26:35] So if we look at the bigger picture here, let's try and leave people listening with a valuable takeaway and maybe a little bit of inspiration. How do you see AI agents impacting the global workforce? And will this shift create new opportunities for things like upskilling and innovation? Or will it exacerbate existing workforce challenges? And what kind of steps should leaders take to maybe ensure that balanced outcome? Because as Bruce Springsteen used to say, hey, nobody wins unless we all win.
[00:27:06] Well, yeah, leave it to the boss. And, you know, so often we focus on the risk and not necessarily all the reward, right? And there's a lot of risk associated with AI. I think learning in general is something that is going to completely transform. I thought that was so exciting. Just the other day, I was on a drive. I had to drive about 45 minutes to a location. I fired up ChatGPT and I turned on the audio version. I was just in my car by myself.
Anybody driving by me would have thought I was crazy. I'm just waving my hand, talking in the air. And I said, I'd like you to create a customized course for me right now, because I'm the CEO of Boomi, and I want you to teach me everything about how a large language model is constructed. I want detail. And we went through this exercise around acquiring large volumes of data, selecting your neural network, training the model on that data, then fine-tuning.
[00:27:59] And I just thought, my God, I've got this magic box that is a bespoke, personalized teacher that can take in context about me and then give me a personalized lesson. And, yeah, we call those teachers. The level of knowledge is extraordinary, and don't get me wrong, I mean, I owe my life to many a good teacher. But the reality is, look, extraordinary learning will transform the global landscape.
[00:28:26] And it will transform, and I believe, make knowledge equally accessible. As much as the internet did to a certain degree, it will transform teaching and learning, I believe, in extraordinary ways. People that have, you know, different ways that they learn, that have cognitive challenges, they will be transformed.
People that have advantages that AI can understand and then exploit in a good way, to help them be better human beings and contribute in new ways. I think learning is going to transform. So that's part of it. I think that will change the global landscape. But what I worry about as well is there are companies that have extraordinary scale. They have deep pockets that no one else has. And this is not a bad thing. But take a look at a Walmart, for example. The reality is, what was the narrative on Walmart in the 2000s?
[00:29:16] Well, the retail landscape was laid to waste by Walmart's price-cutting initiatives, right? Because they could get scale and reduce price to a point where it eliminated the mom-and-pop shops, at least here in the U.S., if not in other countries as well. Those same companies, and I'm not vilifying them, that's not my intent at all.
But those same companies have extraordinarily deep pockets, and they are spending billions of dollars creating a technology advantage, market advantage, buying power advantage, pricing advantage that the average mom-and-pop shop, or even a new market entrant, will not be able to achieve.
[00:29:58] And so if we don't take advantage of learning, if we don't invest in AI, if we don't get to those levels, especially for the companies that exist today, I believe it may just deepen the advantage that these efficient organizations have, given their economies of scale. So I think about it a lot, and I think that's where government needs to step in and help businesses achieve. Wow. And I think that's a powerful moment to end on.
[00:30:26] But before we do, I mean, we've covered so much today. As we said a few times here, we've been here many times before, from the Blockbuster Video days to the rise of everything from Uber onward. So much has happened. But I think the only difference now is that the rate of change is so much faster. Look at the last five years: from working from home at scale, to hybrid working, to AI in just two years. Absolutely insane.
[00:30:53] And there is real pressure on us all to be in a state of continuous learning. So my question to you is, when you're not talking to ChatGPT in audio mode on your phone, when or how do you self-educate? Any other tips around that? Obviously, I will watch some really good podcasts, number one. But number two, look, I would say I come back to what I said earlier. Never have I felt a desire to learn like I do today.
[00:31:20] And I think your point around self-education, it is the thing that CEOs need to be doing today. A book I'm currently reading right now, Why Machines Learn, excellent book. I mean, it's got more math in it than I desired to review. There's another book I just finished called A Brief History of Intelligence.
[00:31:42] But, you know, I think my preferred method is picking the things that excite me, right? And I often get asked this, and I know you do as well. People will say, hey, Steve, how do I take the next step in my career? And there's been some good commentary on this. People say, oh, do what you love. No. My view is not to do what you love. My view, and I agree with several other people who hold this position, is do what you're good at. Do what you're good at.
[00:32:12] Doing what you love comes as an outcome of doing what you're good at, right? But find those things that you're uniquely talented at. Find the things that are hyper-motivating and hyper-interesting. I love technology. I love everything about it. I love the transformative nature of software. I feel like I'm getting paid to play football or basketball. It's just fun. Oh, man. And that really comes through in your passion for the subject and in our conversation today.
[00:32:38] And for anyone listening who would like to carry on that conversation, find out more about you or Boomi and anything we talked about, where would you like to point everyone listening? Well, obviously, they can go to our website, Boomi.com. But hit me up on LinkedIn. I'm out there. Send me a note. Steve Lucas. I try to be as accessible as physically possible. I love learning from new people. Can't wait to meet you. Excellent. Well, I'll add links to everything so people can find you nice and easy.
[00:33:04] As we said earlier in our conversation, these predictions that AI agents will soon outnumber humans are incredibly exciting, and maybe daunting to some people listening. But your thought-provoking perspective on the implications for organizations, the workforce, and productivity is real food for thought. I can almost hear light bulb moments going off around the world right now. Thank you for starting this conversation today. Oh, Neil, thank you. And look, hey, if I'm right, can I come back on the show? You got it. Okay.
[00:33:34] So, big question for you. Are you ready for a world where AI agents dominate industries and shape our interactions? And how can you, as a leader, ensure that this transformation benefits society and your workers while minimizing risks? These are the critical questions I'd love for you to dive into, and I need your help to bring them to the discussion and raise anything we don't cover in these episodes. It's not a monologue.
[00:34:03] It is a dialogue. So please email me at techblogwriter@outlook.com, or find me on LinkedIn, Instagram, and X at Neil C. Hughes. I'd love to hear your perspective. So please join the conversation. And don't forget, I'm back again tomorrow with another guest. I've already got one lined up for you. In fact, I've actually got 110 interviews booked in until April. So don't go anywhere. I've got lots of amazing content coming your way, and I want you to be a part of it.
[00:34:32] So hopefully I will speak with you all again bright and early tomorrow. Bye for now.

