3528: How Boomi Thinks About Scaling AI Without Losing Control
Tech Talks Daily · December 22, 2025
26:49 · 23.41 MB


What does it really mean to keep humans at the center of AI when agentic systems are accelerating faster than most organizations can govern them?

At AWS re:Invent, I sat down with Michael Bachman from Boomi for a wide-ranging conversation that cut through the hype and focused on the harder questions many leaders are quietly asking.

Michael leads technical and market research at Boomi, spending his time looking five to ten years ahead and translating future signals into decisions companies need to make today. That long view shaped a thoughtful discussion on human-centric AI, trust versus autonomy, and why governance can no longer be treated as an afterthought.

As businesses rush toward agentic AI, swarms of autonomous systems, and large-scale automation, Michael shared why this moment makes him both optimistic and cautious. He explained why security, legal, and governance teams must be involved early, not retrofitted later, and why observability and sovereignty will become non-negotiable as agents move from experimentation into production.

With tens of thousands of agents already deployed through Boomi, the stakes are rising quickly, and organizations that ignore guardrails today may struggle to regain control tomorrow.

We also explored one of the biggest paradoxes of the AI era. The more capable these systems become, the more important human judgment and critical thinking are.

Michael unpacked what it means to stay in the loop or on the loop, how trust in agentic systems should scale gradually, and why replacing human workers outright is often a short-term mindset that creates long-term risk. Instead, he argued that the real opportunity lies in amplifying human capability, enabling smaller teams to achieve outcomes that were previously out of reach.

Looking further ahead, the conversation turned to the limits of large language models, the likelihood of an AI research reset, and why future breakthroughs may come from hybrid approaches that combine probabilistic models, symbolic reasoning, and new hardware architectures. Michael also reflected on how AI is changing how we search, learn, and think, and why fact-checking, creativity, and cognitive discipline matter more than ever as AI assistants become embedded in daily life.

This episode offers a grounded, future-facing perspective on where AI is heading, why integration platforms are becoming connective tissue for modern systems, and how leaders can approach the next few years with both ambition and responsibility.

Useful Links

Tech Talks Daily is sponsored by Denodo

[00:00:04] Welcome back to the Tech Talks Daily Podcast. I'm here at AWS re:Invent in Las Vegas. Today, I'm joined by Michael Bachman from Boomi. Michael is someone who spends his time looking five to ten years ahead and then translating those signals into practical steps that companies can act on today. So today, though, we're going to talk about human-centric AI and what it takes to keep trust at the center of agentic systems.

[00:00:34] And also how governance, sovereignty and observability can shape the next chapter of automation. And here at AWS, Michael has been impossible to ignore. He's been on stage twice this week. And his perspective on swarms of agents, security expectations, and the future of work brings, I think, a thoughtful counterweight to some of the hype and hyperbole that we see online.

[00:00:59] So our conversation today will move from deep technical questions to the human shifts unfolding all around them. But before I get my guest on today, I want to give a quick thank you to my friends at Denodo, who are playing a big part in supporting this show. Because one of the questions I hear more and more from listeners on this podcast is, why does AI succeed or why does it fail?

[00:01:24] Because let's be honest, AI is moving fast, but success is often still elusive. Now, most projects fail not because of the AI, but because the data foundation isn't ready. This is why organizations are increasingly turning to Denodo. Denodo delivers trustworthy and AI-ready data without the need to copy it everywhere.

[00:01:49] Essentially, you can optimize your lake house, accelerate agentic AI, and build data products that finally make self-service real and achievable. And with a powerful partner ecosystem, teams get to value even faster. So if you're ready to understand why your AI projects fail and how to succeed with AI, simply visit Denodo.com and take control of your data world.

[00:02:17] So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Neil, I'm Michael Bachman. I've been here at Boomi for eight years now. I run research, technical research, market research, and all of the things looking out into the crystal ball five to ten years from now, trying to make it all relevant to today. Incredibly cool. And obviously, AWS re:Invent means different things to different people. I believe you've been on stage twice. Tell me a bit more about what you're doing here.

[00:02:47] Yeah, that's right. Well, Boomi has a large presence with AWS here at re:Invent this year in 2025. But we've had a partnership with AWS for a very long time. And so the platform is built on top of AWS, but you can run Boomi really anywhere you want to, on whatever platform that is. It's just that our choice for our design-time platform is here. So we have a lot of partnerships through AWS with customers that are also here.

[00:03:16] It's been a great, connective sort of conference this time around. What I've been talking about is human-centric AI, you know, how to keep the human at the heart of where AI is going. And it really involves trust versus autonomy and a variety of other features that we need to think about: you know, responsible AI, observability, governance, those sorts of things. And another talk I'm giving today is on governance.

[00:03:46] And we're going to focus a lot on sovereignty and different aspects of governance as well, and how companies like mine need to be looked at as central themes and connective tissue for any companies that are going to get into the space, build agents, orchestrate them, manage them, so on and so forth. It's interesting you say that, because there are so many big announcements around AI, agentic AI, and talk of releasing swarms of agents out there. As an ex-IT guy, that makes me a little bit nervous.

[00:04:15] And the security aspect, or security-first mindset, that's often missing from the conversation, isn't it? It is. Yeah, and I think we've learned this over the last few decades: we have to get security involved early. We have to get legal involved early. That's certainly not changing anytime soon. And I think from our standpoint, one of the things that we want to do is make sure that security really feels comfortable. Like, IPsec isn't what it used to be.

[00:04:44] And I think agents are going to change the landscape of how security needs to be managed, observed and governed going forward. And while I'm not, you know, deep in security, what I do know is that if we are building agents with security in mind, guardrails, governance, all of that up front, that's a good starting point. And that's really what we're all about, you know, inside of Boomi. Yes, we're an integration and automation platform.

[00:05:13] Yes, we orchestrate agents. Yes, we manage the flow of data between any types of endpoints, you know, within our platform. And we also want to give our customers the keys to observe how their agents are going to be built, whether they're swarms or whether they're individual agents, it doesn't really matter. We know that the stakes are going to be higher and higher the more agents get deployed. With over, I think we're approaching 35,000 agents at the moment deployed with Boomi.

[00:05:43] And then we're looking at 50,000, 100,000 or more, maybe even as soon as the end of the year. Next year, we expect a lot more agents to be built, and everyone really needs to understand that they have to have security and governance in mind going into that. Yeah. And another phrase we hear a lot is the importance of keeping the human in the loop. And obviously, a lot of the work you're doing is about that importance of human-centric AI. Tell me a little bit more about that and why it's so important to you as well. Yeah.

[00:06:11] So, this is a time when it's really easy to abstract the human out of the loop, right? And so, the hype that's been, I think, prolific, certainly since, you know, you've had Transformers coming to the fore. And really, since the advent of, you know, super apps like ChatGPT, just as an example.

[00:06:31] Now it's Claude and, you know, all of the other Transformer-enabled UI bots or interfaces that we have. That has created such a hype cycle for a lot of companies that we can think anything is automatable right now.

[00:06:56] Specifically because these models seem really, really intelligent, and really do seem able to do the types of jobs that humans have historically done, and to do them better. And in some cases, that's absolutely true today. In the future, it'll probably get stronger and stronger. But I think the nature of the worker is changing and evolving. And I think the most important skill one can have to keep everything human-centric is really critical thinking.

[00:07:26] And so, this is one element we've been discussing. I've discussed this with students at universities. I've discussed it with the audience, you know, at a talk I gave earlier in the week at reInvent. And I think the one way to keep the human in the loop is to be flexible as a human as you scale your trust.

[00:07:46] And what I mean by that is: the higher the trust you have in an agentic system, the more likely you are to give that agentic system autonomy to do things that you essentially don't want to do. Or maybe things that management thinks you no longer should do, that sort of thing.

[00:08:07] So, I'm trying to be instructive about how we make sure that we're thoughtful about where the human is going to be in the loop.

[00:08:16] And I do think that if there are manual processes today that a company simply doesn't want to change either because that's the way they've always done it or because they feel that they have better tools with humans and only humans to provide the inputs and, you know, the deterministic logic for the apps that they're building and that sort of thing.

[00:08:44] I just want to send a warning like that's a real easy area to disrupt. Maybe not today, but certainly in the next few years, if you stick with manual processes or if customers or organizations stick with manual processes and don't even start to look at changing them, it's more likely that they're going to be disrupted.

[00:09:05] And so, the important thing is to not be afraid to jump in, to take on these new types of programs, which I call agents: computer programs with probabilistic thinking and computation. And so, we just need to look at how we're going to use these agents to our advantage and still maintain control, be at the helm, be in the loop or on the loop.

[00:09:30] Meaning, either in the loop, where these systems are constantly checking in with us, or on the loop, where we're sort of auditing what the agents do, spot-checking, and then starting to think about what kind of new future we want. And that is squarely where I'm trying to get everybody focused in this new wave of human-centric agentic systems.

[00:09:56] And it's interesting you say that, because it feels almost like a paradox: the most important thing for everybody listening is to be more human and to improve their critical thinking skills, but over-reliance on certain AI tools reduces critical thinking. Yeah, the two are quite at odds with each other. That's why I even brought that up yesterday. You know, there are plenty of reports that we've probably all seen. We're getting dumber as a society, right?

[00:10:25] And this is where, if one cedes her or his thinking to someone else... we do this as humans anyway. As humans, we have so much capacity, and we have so much in our day-to-day lives that we're dealing with. It's hard to keep everything straight sometimes. And so, yeah, you need to delegate to different resources.

[00:10:51] Historically, delegation meant giving tasks to other humans to go and fulfill them. Now, it could mean giving tasks to other non-human agentic systems, because humans are agentic systems too. Yes, yeah. So, non-human agentic systems, and have them fulfill at scale faster, fail fast so we can learn faster. And those are the ways that I and my colleagues tend to use agentic systems now. It's not to replace our thinking.

[00:11:20] It's to augment what we're doing, more or less, or give tasks that either we don't know how to do or that we think are too laborious, boring, all of the other adjectives, the stuff we don't want to do. And then we can focus on the things we do. Yeah, completely agree. And I was at an IFS event recently, and they were talking about it's not using AI to replace people, but it's taking your existing 300 people and 10xing them to do the work of 3,000 people.

[00:11:50] And that's where we're really heading, isn't it? Because there's a lot of scary stories out there, but that's what businesses should be focusing on. Absolutely. And just as a customer example of that, there is a healthcare system in Colorado that we work with, and their executive staff, who were a part of an agent workshop that we did with them several months ago, they said the same thing as you just did. We don't want to ever replace our people.

[00:12:18] In fact, we're going to expand the number of humans we have, because for every human we bring in the loop, we're going to expect them to do 10 or 100x more, some order of magnitude more than normal human workers without AI or agentic help can do.

[00:12:36] And so this is where we start to see armies of humans and non-human agentic systems working with each other to do all sorts of things that we were never able to do before, such as checking in patients faster in a clinical setting, making sure that medicines are signed off on in quicker ways so that you don't have to disrupt a patient's pain cycle.

[00:13:02] There is a whole litany of other types of use cases where they're thinking, yeah, we want humans and non-human agents to be hyperproductive going forward. I think almost exactly a year ago, Gartner said agentic AI would be the big theme for 2025, and it has been. We're here now at the last big tech event of the year, and everyone's talking about agentic AI, but now it's about providing the services and the technology to empower businesses to go and bring this stuff to life. But you're a guy who looks to the future. So what does all this mean to you?

[00:13:32] Yeah, the future is interesting. You know, there's this notion of artificial general intelligence that has been, you know, call it a North Star, I don't know what you want to call it. Defining AGI is really tough. But there was a group of researchers who published a paper back in 2023, updated in 2024, on defining levels of AGI. And I like that framework for a number of reasons.

[00:14:02] Number one is it puts boundaries around narrow and general intelligence. And then it gives kind of an indicator of, all right, at a certain level, you know, a general system is going to have 50% of human knowledge in one model, 75%, 90%.

[00:14:22] And then something superhuman beyond that, where all of the collective human cognition is less powerful than, you know, one generalized superintelligent model. I kind of like that framework because it gives me some watermarks inside of the pool to go off of and see where we're at. However, I don't know what intelligence is yet.

[00:14:49] I think we have attributes of intelligence, but academically, I don't think we have a standard definition of intelligence quite yet. I also don't think that, in order to be human, one has to be intelligent. I think there are different systems of intelligence all throughout our existence on this planet.

[00:15:09] And I think we are very arrogant to think that maybe we have the highest level of whatever this intelligence thing is. And if we want to model a system after our brain, if that's what we end up doing, we probably want to understand our own brain first before we start to model it, right?

[00:15:31] And so that said, if we start to reach these higher levels of intelligence, then theoretically we can supplant or we can create new industry. We can create new innovations. And we've done lots of really cool things so far. But in the future, I don't think a large language model is going to be the way that we're going to get to this AGI threshold.

[00:15:56] I think there are a number of different things coming down the road, some of which were discussed here, like neuromorphic or symbolic architectures, which are more human-like. But, you know, I like this mixture-of-experts approach, where you take different types of computational models, like "traditional" ML, what I'm calling with air quotes, and couple that with something like a large language model for many tasks.

[00:16:25] And then also something that's more human abstraction related, which is symbolic logic or neuromorphic types of architectures going forward. And using these different computational platforms to make a more holistic worldview will probably get us to something with more and more generalized models.

[00:16:42] We can also take a page from a recent podcast interview with SSI founder and former OpenAI chief scientist Ilya Sutskever, who said that right now, you know, we're taking LLMs kind of to the limit of their effectiveness. And so we throw scale at this problem and allow these things to just expand.

[00:17:10] But there's some point where there's a diminishing return on what these systems can do. And so we're probably going to go into another research phase for a little while longer until we find out where the next big things are. And we may get there quicker and quicker. But I think this composite sort of worldview, you know, maybe human and extra human brain like AGI might be a little further off than we thought.

[00:17:41] And that might buy us some time to do other things, like calibrate policy for the future. Because if these things are coming, then yeah, there could be disruption down the road that we don't anticipate right now, in 2025. We could give everybody a common vernacular for how to understand AI and call it for what it is: really, in my opinion, a computational platform.

[00:18:07] And then figure out what things we want to change with our processes, our society, our polity and politics, and just governance of these systems overall, so that we're goal-aligned. And I think we're probably going to have another AI winter coming, because at a hardware level, chips and energy are going to be really constrained resources that aren't going to let us keep riding scaling laws quite yet.

[00:18:36] So I agree with Ilya Sutskever: yeah, we're probably going to go from the scaling period that we've had into a research period, which is a signal for me saying, okay, before we get to the next scaling curve, we're going to need other hardware resources that we currently don't have.

[00:18:55] So now is our time to recalibrate. And so I'm saying for the next three to five years, this could be a really good time to recalibrate and then really think pragmatically about a lot of different things around data systems and interaction between those systems. And we are at that magical time here where everyone listening is going to be thinking about a new year, 2026, what we're going to do differently, exploring new and exciting things. What excites you about the year ahead?

[00:19:23] Oh, I'm excited through the lens of my own kids, who are university age, and what they're interested in. This is a point I brought up earlier in the week: it sounds trite, but it's true that every morning I think about how technology affects humans and how humans affect technology. And one might think, oh, my goodness, this guy has a really boring life if that's what he wakes up thinking about.

[00:19:51] And you might be right. And I do want to think philosophically about the decisions we make and how technology can impact us. I mean, there are all sorts of advantages that technology provides us; it makes us more comfortable. When we're building systems for the future, how do we want to design those types of systems? Do we want the traditional approach that capital provides?

[00:20:19] I think that's a fair thing to do. I also think, well, is there another way? Are there better ways that we can go about creating new technologies and implementing them such that society could be a better place? And so those are fun things to think about. The way I think about them and apply them in my work day to day is I look at platforms like ours, which we're an integration platform.

[00:20:44] To me, what that means is we are connective tissue between all of these different systems, access to data, scaling, ways that we can connect dissimilar things and make order out of chaos when it comes to integration and automation. And I love where we're at because we're really not beholden to one particular platform. We can connect any of them.

[00:21:10] And that, to me, gives us a really neat advantage. And so I think about all of the neat technologies coming down the road: different types of language models, different ways to store and manage data, and new systems coming online that let people who don't know how to code create these amazing applications, at least at a small scale, that could grow into something larger.

[00:21:38] These are the types of things that excite me. And when I also think of things like how are we going to help the planet, consume fewer resources, maximize potential, make it a better place for us all to be. Those are the kinds of things that I'm aspirationally hopeful of. I'm also not naive.

[00:22:04] I understand that, you know, we want to show ROI for the things that we do. And we're absolutely doing that today. So it excites me to look into the future and also show value for today. Making these projects real now is what we're really into with our platform, along with where we're going in the future. Yeah, I'm excited about that too. That whole path is really neat. And finally, I mean, you mentioned your kids there.

[00:22:33] One of the trends you've probably noticed is how we no longer just Google things. There's a big change. Even during the holiday season, you're no longer searching for a gift; you're turning to an AI assistant of sorts. And for many businesses that have spent thousands to get people to their website, that's going to change too, right? It's already, it's changed a lot. I think, you know, I can't remember the author's name of this book, and it's killing me, but it's Algorithms to Live By, which came out in, I believe it was 2015, somewhere in there.

[00:23:02] Just talking about the ways that technology impacts how humans think. If that book were to be rewritten today, it would be about rewiring brains in a different way. It's almost like a cognitive offload, right? And so, one of my kids is very AI-forward and the other one is very much opposed to it, which creates a really interesting set of topics that we discuss.

[00:23:32] And one of them is more artistic and, you know, looks at copyright and those sorts of things. And we have discussions about it and she doesn't know she's using AI quite as much as she is, but she's trying not to use it as much as, you know, my son who uses it as an assistant, uses it as an idea exchanger.

[00:23:58] You know, if he wants to ideate about something, he can go as deep down the rabbit hole as he likes, and doesn't have to do searches the way we did traditionally. And it makes things neat for him. Fast forward, not fast forward, but just panning over to one of the universities I spoke at not too long ago.

[00:24:17] The students there are doing the same thing, you know, with OpenAI or Anthropic or any of the other frontier model providers. They're not doing traditional search. They're finding answers to their questions. The key is, how are you going to fact-check what you're getting back?

[00:24:39] And so I think this is where it will be important to still have human creativity in the mix and definitely, definitely, definitely critical thinking. It's so important.

[00:24:53] And I think it will be helpful if we as humans learn these skills of critical thinking and how to test assumptions. If we don't just believe the answers other humans give us, how can we do the same for, you know, these modern types of computational platforms? And I think that brings us full circle. So before I let you go: for anyone listening who wants to learn more about you, your musings, your visionary side of things, and indeed all things Boomi, where would you like to point them?

[00:25:22] Yeah, come find me at Boomi: Mike Bachman, M-I-K-E-B-A-C-H-M-A-N, at Boomi, B-O-O-M-I, dot com. Or you can look me up on LinkedIn, Michael Bachman, Philly. And yeah, those are two really, really easy ways to find me. Awesome. Well, I'll put links to everything. Enjoy the rest of re:Invent, and maybe see you rocking out to Beck later. Oh, that would be lovely. Hopefully we'll see you there.

[00:25:49] So a big thank you to Michael for bringing much-needed clarity to a moment when every business is wrestling with the pace of AI and the responsibility that comes with it. His insights on critical thinking and human-in-the-loop design, his explanation of why this is actually a rare window for recalibration, and his grounded way of thinking about what is coming next were, I think, incredibly enlightening.

[00:26:13] And as we approach 2026, I'd love to hear how you are approaching all the questions we raised today inside your own teams as you plan for the year ahead. Please share your thoughts with me: TechBlogWriter@Outlook.com, or on LinkedIn, X, and Instagram, just at Neil C. Hughes. But that is it for today, and indeed my time in Vegas. So a big thank you for listening, as always, and I will speak with you all again very soon. Bye for now.