How can businesses deploy AI with full transparency, accountability, and trust? In this episode of Tech Talks Daily, I sit down with David Villalón, co-founder of Maisa AI, to explore how their agentic AI technology is transforming enterprise decision-making. With many organizations still hesitant to integrate AI due to concerns over trust, hallucinations, and lack of explainability, Maisa has taken a different approach—ensuring full traceability in AI-driven processes.
David introduces Maisa's Vinci KPU and its "Chain-of-Work" concept, which moves beyond traditional generative AI models to create structured, deterministic decision-making processes. Unlike black-box AI systems that provide probabilistic answers, this approach allows organizations to see exactly how AI reaches its conclusions, making compliance and regulatory approval far more achievable. We discuss why this is a game-changer for industries like finance, oil & gas, and automotive—sectors where AI decisions carry high stakes and require robust governance.
With AI agents becoming increasingly autonomous, businesses must balance efficiency and oversight. David explains how Maisa's model allows for independent AI operation while ensuring humans remain in the loop for critical decision-making. He also shares best practices for mitigating AI-related risks, including financial losses and reputational damage, through governance frameworks and real-time intervention capabilities.
Beyond today's AI applications, we look at the bigger picture—how agentic AI reshapes leadership and organizational structures. As AI takes on more operational tasks, leaders are shifting from hands-on execution to managing networks of AI-powered "digital knowledge workers." What does this mean for the future of work? How will businesses adapt to remain competitive in an AI-driven economy?
Looking ahead, David shares his insights on the biggest opportunities and challenges facing AI adoption over the next five years. With AI evolving rapidly, how can companies future-proof their AI strategies while maintaining trust and control? Tune in to hear how Maisa is pioneering a new era of AI-powered decision-making.
[00:00:03] What if AI could not only execute complex business processes, but also provide complete transparency and traceability for every decision it makes? Well, today I want you to imagine a future where leaders don't just oversee teams, but also orchestrate digital factories powered by accountable AI agents, all of them working autonomously and yet under human control.
[00:00:30] Well, my guest today is going to be talking about Maisa and how they've developed a groundbreaking knowledge processing unit, or KPU, and a unique chain-of-work approach. Unlike traditional AI models that generate answers based on probability, their technology focuses on generating traceable processes to reach those answers, addressing one of AI's most pressing issues. Yeah, I'm talking about hallucinations.
[00:00:58] And this technology isn't theoretical. It's already being used across industries far and wide, from automotive to oil and gas to finance, ultimately helping organizations streamline critical business processes, maintain regulatory compliance and reduce costs. So what does all this mean for leadership? How will agentic AI reshape roles, responsibilities and decision-making structures in businesses all around the world?
[00:01:27] Well, enough from me. It's time for me to bring on today's guest, who will introduce himself and share his insights on the future of leadership, where oversight and strategic thinking become more important than day-to-day execution. And together we'll also explore the governance frameworks that are essential for mitigating risks as AI agents proliferate. So as agentic AI becomes mainstream, how is your organization going to adapt?
[00:01:55] Will you lead the AI revolution, or will you struggle to keep up? Hopefully today's guest will make that journey a little bit easier. So let's get him on the podcast now. Thank you for joining me on the podcast today. For people hearing about you for the first time, can you just tell everyone listening a little about who you are and what you do? Yeah, for sure. Thank you for having me, and I'm glad to be invited. I'm David, the CEO and co-founder at Maisa.
[00:02:24] Maisa is a Spanish and American AI company changing the paradigm on enterprise automation and transparency. On my end, I come from working directly in AI for many years. I was director of product at a Spanish AI company that hyper-scaled, where we raised 23 million. It was a great experience. Then I was chief AI officer at another company.
[00:02:50] I was lucky enough to be very, very early on this wave of AI, one of the very early testers of GPT-3 and a beta tester of GPT-4. So I was able to see the wave that was coming, and now, I think, the wave has arrived. It really has. I mean, we're entering what, our third year of crazy town right now. And one of the things that made you stand out to me, set off my tech spidey senses,
[00:03:17] was when you introduced the concept of something called chain of work for full traceability in AI decision making, which is a big hot topic right now. So for anyone hearing about you for the first time, can you expand on that and maybe answer how this approach differs from traditional generative AI models and why transparency is so critical for enterprise AI adoption too?
[00:03:43] Yeah, for sure. So at Maisa we come from a very different perspective in terms of AI. My co-founder, and I think it's important to mention this, is one of the biggest contributors on Hugging Face, with more than 600 models. We both come from very deep expertise in AI.
[00:04:04] I was more on the applied side, but we noticed very early that there were a lot of problems that weren't being solved as we expected. At the end of the day, AI models are probabilistic. They are, in a way, just predicting the next token. And that means that, by definition, if you use them to generate answers, there is always some probability of getting it wrong. And that's the well-known problem of hallucinations.
[00:04:31] And although the problem might not seem that big, for example, you might get a 0.5% hallucination rate on a single answer, which in isolation doesn't seem like a problem. But once you have a multi-step process, where you need to read something, then put that information somewhere else, and then make a report, for example, that involves a lot of steps. If you have six steps, that's now a 10% error rate.
[00:04:59] If you have 60 steps, now it's a 60% error rate. So the problem that we are directly addressing is that AI operates as a black box. It's very difficult to trace how decisions are made, and this creates huge problems of trust and compliance risk. That is what is really holding back mass adoption.
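The compounding David describes can be sketched with a few lines of Python. The per-step hallucination rate and the independence assumption here are purely illustrative; the exact figures depend on the workflow:

```python
def compound_error_rate(per_step_error: float, steps: int) -> float:
    """Probability that at least one of `steps` independent steps goes wrong."""
    return 1 - (1 - per_step_error) ** steps

# Even a small per-step hallucination rate compounds across a multi-step process.
for steps in (1, 6, 20, 60):
    rate = compound_error_rate(0.005, steps)
    print(f"{steps:>2} steps -> {rate:.1%} chance of at least one error")
```

With a 0.5% per-step rate, 60 independent steps already give roughly a one-in-four chance of at least one error, which is why multi-step automation amplifies the hallucination problem far beyond what a single chatbot answer suggests.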
[00:05:22] Because you cannot go beyond a chatbot that merely helps you, to something that really changes how your business operates. So we created a technology that we call the KPU, Knowledge Processing Unit, that uses AI in a fundamentally different way. Instead of using AI to get to an answer, we use it to get the process that leads to the answer, the things that need to be executed to reach the answer, expressed as code.
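A minimal, hypothetical sketch of that idea (none of these names are Maisa's actual APIs): instead of asking a model for the final answer, you ask it for a plan of deterministic operations, execute them, and record each step:

```python
# Hypothetical illustration only: these names are not Maisa's real API.
# A model proposes *steps*; a deterministic executor runs them and logs a trace.

OPS = {
    "add": lambda a, b: a + b,   # deterministic, auditable operations
    "mul": lambda a, b: a * b,
}

def execute_plan(plan):
    """Run a model-proposed plan step by step, recording a chain of work."""
    trace = []
    result = None
    for op, args in plan:
        result = OPS[op](*args)
        trace.append({"op": op, "args": args, "result": result})
    return result, trace

# Imagine the model returned this plan for "what is (2 + 3) * 4?"
plan = [("add", (2, 3)), ("mul", (5, 4))]
answer, chain_of_work = execute_plan(plan)
print(answer)                 # the final result comes from running code,
for step in chain_of_work:    # and every intermediate step is inspectable
    print(step)
```

The answer comes from executing code rather than from sampling tokens, so each intermediate step can be audited after the fact.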
[00:05:51] That means, in the end, that we do not open the door for hallucinations to appear. We do not allow the black box to give hallucinations, because it's not giving me answers. It's giving me the things that need to be done in order to get to the answer. And that allows us to open this black box and have full observability and traceability of what is done to reach a decision and a result. And that's what we call chain of work: the combination of this deterministic traceability,
[00:06:21] leveraging our execution technology, plus the explainability layer that enables anyone, without any technical context, to understand what is happening in each step. And towards the end of last year, Gartner famously predicted that agentic AI would dominate 2025. And it seriously seems to be doing just that.
[00:06:44] And this year we're saying agentic AI is predicted to reshape leadership and organizational structures by 2030, just five years away. So I'm curious, how do you see leadership roles evolving as businesses begin managing AI-powered "digital knowledge workers" alongside human workers? Yeah, I think that's a great question.
[00:07:07] I think that in the years to come, towards 2030, and I had to think about that, yeah, that's five years away, time flies, it's coming. In the end, we are going to end up having factories of digital workers, in the same sense that we have physical factories in manufacturing, for example. But the human role is going to transition from contributor to operator, in a sense.
[00:07:37] So what a human is going to be doing, in the end, is overseeing a full swarm of agents, talking to other agents, that are able to conduct tasks intelligently. And that can go from doing huge pieces of research, and reporting on that research, to proposing decisions on complex situations that would have taken ages to work through.
[00:08:00] So in the end, the shift will refocus leadership on strategy and oversight of the tasks, rather than on day-to-day management or day-to-day contribution.
[00:08:16] So I see companies evolving to have more and more digital workers doing the jobs, and humans, with our unique context and understanding of the world that we have as living beings, providing the right feedback and context to make sure that what has been done makes sense.
[00:08:41] This will make teams of five execute like the teams of 500 people we have right now. And this is going to completely reshape how the world looks. You have to think, in that sense, that we are very, very early in AI. The analogy I give is that we are in the eighties, and we have computers with 20 kilobytes, and we think that's insane.
[00:09:07] And we have AI right now generating up to 3,000 tokens per second. That is like 10 pages of code or content per second. That is the limit where we are right now, but it is going to hyper-scale. And you have to think that five years from now, we will have agents capable of producing something like a whole book of content every single second, or creating a full code base almost every second.
[00:09:36] So that amount of generation that is coming, first, needs a way to be trustable. But secondly, it will completely reshape how we approach what can be done in the day-to-day work, what the productivity of companies looks like, and what the human role is.
[00:09:55] And leaders are going to be more generalist, capable of managing and conducting AI to achieve what needs to be achieved, rather than just doing the job themselves. Just as you use Excel right now to do accounting, for example, it's going to be very similar. And of course, with the proliferation of AI agents, large organizations will be implementing thousands of them.
[00:10:21] And I would imagine there also come risks, whether financial losses or even reputational damage. So what safeguards and governance measures should these organizations be implementing to mitigate some of those risks when deploying thousands of agentic AI agents out there? Yeah. So in the end, what organizations are going to need most are governance frameworks.
[00:10:46] What they are going to need most is being able to trace and explain all the decisions that are made. They will need to include the human in the loop, with oversight, meaning not just enabling the AI to operate autonomously, but being there for the decisions to be made, like approving or rejecting a mortgage, for example, which at the final stage might still need to involve a human, even with
[00:11:16] AI in the middle. But in the end, if we do not have this layer of trustability, of full accountability, we are not going to be able to make these systems operate in the real world with compliance and overcome these risks. And as the capacity of these systems increases, real-time monitoring, and even intervention, is going to be one of the things that we will always have to take into account.
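One minimal, hypothetical way to wire in that human approval gate (the names here are illustrative, not a real framework): an agent's proposed action is held, logged, and only executed once a reviewer signs off:

```python
from dataclasses import dataclass, field

# Hypothetical human-in-the-loop gate; names are illustrative, not a real product.
@dataclass
class PendingAction:
    description: str
    status: str = "pending"                        # pending -> approved / rejected
    audit_log: list = field(default_factory=list)  # every human decision is kept

def review(action: PendingAction, approved: bool, reviewer: str) -> PendingAction:
    """Record a human decision before anything touches a system of record."""
    action.status = "approved" if approved else "rejected"
    action.audit_log.append({"reviewer": reviewer, "approved": approved})
    return action

# The agent proposes; the human disposes, and the decision is logged.
mortgage = PendingAction("approve mortgage application")
review(mortgage, approved=False, reviewer="loan-officer")
print(mortgage.status, mortgage.audit_log)
```

The point of the pattern is that autonomy stops at the boundary of the system of record: the write only happens on an approved action, and the audit log survives either way.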
[00:11:45] And the systems, in the sense of the UX, the user experiences, are going to have to take that into account. So in the end, it's indeed why we are so obsessed with this AI accountability, because we think we will need to log the decisions to make them explainable, precisely to overcome this potential risk. And if something goes wrong, if a decision is not what it should have been,
[00:12:15] You can go back and check what the root cause was, what led the AI to make that decision. And before you came on the podcast today, I was reading about how your, I think it's Vinci, KPU outperforms frontier LLMs in areas like coding, math problem solving and accuracy.
[00:12:39] But I've got to ask, if we look under the hood, what factors contribute to this improved performance, and how do those capabilities translate into practical business applications for the business leaders listening? Yeah. So in the end, with the KPU, in order to get these results, what we did was try to extract what is most valuable from the models.
[00:13:04] That is the capability to find patterns and to reason, to get the features of reasoning from them, and then to give them the right workstation, let's say, in a way that lets them do their best at what they are good at. That's what has enabled us to get these amazing results.
[00:13:29] It's not the same if you put two people in a math exam, where one has to do everything mentally and the other one has a computer next to them. Who is going to get better grades, better results, in the end? Probably the one with the computer. So this is a bit similar. That's how we've been able to do it.
[00:13:52] And by doing that, we are able to target use cases directly in companies' most critical business processes. For example, at Maisa there are car manufacturing companies that had armies of human teams monitoring emails and tasks entering the system, related to the supply chain. With us, leveraging these agents in our product, Maisa Studio, they are now able to
[00:14:21] So every day they wake up with the work done, and they now act as oversight, seeing if it was done well or not, and simply confirming or rejecting the decisions made. And with this we are getting state-of-the-art performance and incredible results in the day-to-day. And despite the hype that we see in our newsfeeds and all those different stories around AI, many businesses have been cautious in their approach.
[00:14:49] Many remain hesitant to deploy AI in live production environments. I think a lot of it comes down to trust and accountability issues, but how does your technology address some of those concerns? And what industries do you see as early adopters of agentic AI systems? What are you seeing here? I think that's a pretty good question.
[00:15:11] In the end, for businesses to adopt, one key component obviously is accountability and traceability. Another key component is the speed with which you can transition from proof of concept to reality, to production impact. And right now, current development approaches imply that you will need to put in developer effort, and you will need to join a lot of pieces together in order to get something to work.
[00:15:39] And you are in a never-ending loop of: I make something, I expose it to reality, and I have to take it back to iterate on it. This goes on and on, and that's also what is making adoption not as fast as it should be. And early adopters, in the end, include major players; I can tell you that almost every industry has it in the budget this year to experiment.
[00:16:07] And only those who are able to go beyond creating chatbots, who are really able to create these agentic process automations, are the ones that are going to see more impact. We are talking about automotive, oil and gas, the financial sector, cybersecurity. To be honest, almost every industry is waking up to the fact that the first ones to figure it out are going to be
[00:16:37] early movers, and they will start to gain escape velocity over their competitors. They are all trying to get this as soon as possible. But again, the problem is that the market right now is very noisy. There are a lot of different companies and a lot of approaches, and the most well-known approaches in the market are starting to be proven wrong, but the market has not realized it yet.
[00:17:07] I'm talking about architectures and deployment approaches that people think should be used, but that still carry these errors of hallucinations and lack of traceability, where you have to build almost one agent for each use case. So yeah, that is what I think is happening in adoption and in the industries that will see the most impact.
[00:17:35] And I think it's important to highlight, we're not talking about replacing people with AI agents here. One of the things that stood out to me about what you're doing is that your technology allows for, yes, autonomous decision-making, but with humans still in the loop, very much a copilot of sorts. So how do you ensure that AI agents can operate independently while also maintaining that human oversight?
[00:18:01] It is required, and probably always will be required, for critical decision-making. Yeah, that's a great question. So the thing is, we never think about replacement, because there is one thing that is very tacit, implicit: the know-how that you gain from experience in a job, which no one has documented anywhere, because it's literally in your mind. That's why you are an expert, a senior, in a job.
[00:18:30] You have that in your mind. It's related to what I said about context. The thing is that until AI has the full awareness that we have in our day-to-day jobs, it will not be able not only to replace us, but even to do the job the way we do it right now. But what it is going to be able to help us with is creating almost 90% of the job.
[00:18:57] So I can spend the last 10% on what really adds value, the small things that are the decisions to be made, while having everything already pre-made: looking for the information, doing the analysis, making the comparisons, preparing a report, getting to the solutions. So how we make sure we stay in control is always, in a very pragmatic way,
[00:19:27] putting a human eye on the decisions that need to write to databases or act on our systems of record. I think that in the years to come we are still going to find that we feel safer, even if the AI has been right 1,000 times, with the human being the one saying, okay, I agree with sending this email, or I agree with this decision that has been made.
[00:19:54] And this is how I see it going for the next years to come. Obviously things will evolve over time, but we will keep this intrinsic value of having a unique awareness of how everything operates, and that is something I don't see AI being able to have in the short term.
[00:20:16] You've had so many early adopters from a broad spectrum of industries, including major players in the automotive, oil and gas, and financial sectors, to name but a few. So just to bring to life what we're talking about here, and the kind of value-add we're talking about, do you have any real-world examples of how your agentic AI is being used to strengthen operations and meet regulatory requirements? Because it's not just about the tech. It's a real business value-add we're talking about here, right?
[00:20:46] Yeah. So, similar to the one I told you about before, we have this with different car manufacturers. For those companies, we always try to differentiate by addressing critical business processes. So we go for the high-risk, high-value ones, because we think those are the ones that really change the odds of what your company can achieve with AI.
[00:21:09] Those include the example of armies of procurement teams that monitor all the information daily, and that are always behind on delays, overwhelmed by the amount of work they have to do. Now they proactively, in their existing workflow, on their existing devices, see the job done every day, prepared to be delivered. And by done, I mean it has read the task, it has read the emails,
[00:21:38] it has looked into the attachments, it has navigated the databases, it has created the case study, and it has been able to deliver the conclusion, plus all the steps that were taken along the way. In the oil and gas sector, we work on compliance requirements and regulatory reporting. So we have these companies that are facing new regulations every day.
[00:22:03] And now even more so, as we are in this moment of changing to renewable energies, they are always facing questions like, what about the underwater regulations, what about all these environmental rules? They again have armies of humans having to look for that information, putting it into a Teams channel, having to ping the right person, who then needs to read it,
[00:22:32] then needs to add comments, check it against the company's own internal regulations, then send it back to the group, which goes out to all the companies in the industry that are in there together.
[00:22:46] So what we do in that example is that every time a new file is received, we directly read it, understand what it is about, and check for the stakeholders that need to be informed that this relates to them, from the decision makers to the owners of these verticals.
[00:23:08] Then it takes the organization's information about its positioning as a company, and in the end it's able to make the comments, deliver the analysis, and ping the right person: hey, here is the document that was received, here is the analyzed document, here are the comments I made based on our company's positions, and this is the conclusion that should be drawn.
[00:23:35] So what would literally take one week of work takes a few minutes, or even an hour if you want to go into more detail in the review. So these are impactful use cases, not chatbots where you go and type everything, but things that happen almost in the background. And suddenly you wake up with the work almost ready to be reviewed and done.
[00:24:00] And we started the podcast today talking about that 2030 goal that's waiting on the horizon, already under five years away. But if we do look ahead, where do you see the biggest opportunities and challenges for agentic AI over the next three to five years? And how are you positioning yourselves to lead in this continuously evolving space?
[00:24:21] So I think the opportunities we can find are that enterprises will redefine workflows in a lot of senses. They will try to reduce the inefficiencies in decision-making, and they will start investing more in adoption.
[00:24:40] That means that, as opportunities, I see them reshaping how their processes work to become more efficient, so they don't have to wait weeks for a decision. So the opportunities come mostly from a redefinition of their processes and of the workflows of how they operate.
[00:25:04] But the challenges are also a very interesting point, because I think regulation will increase in a lot of senses, and it will require AI to be more and more transparent and accountable. It's going to be just a matter of time before people who right now are not caring about errors realize they have created a snowball that is too big for them to control.
[00:25:31] And we will need to take that into more consideration. Then also, I think we will see more companies creating academies or systems to retrain their internal employees to work effectively with AI. And one last challenge I would mention is that companies will also have to reshape
[00:25:59] how their technology stack is built, no? They have very old technology stacks on a lot of occasions, and that is a blocker for them. One thing that is also a blocking challenge they have to overcome is that it's difficult for them to introduce technology, because their data is not accessible, no?
[00:26:19] And in order to have trust, you will need to have accessible data for the agents to work with in a safe way, obviously, in a setup that is safe for them. But it's also a very big challenge that we will have to consider. Well, thank you so much for coming on the podcast and sharing your insights today. We have been talking about a very serious subject, and I'd love to break out of that before I let you go and ask you to leave one final gift for everyone listening.
[00:26:49] And that is a book that you'd like to add to our Amazon wishlist. I always ask my guests this question, but what book would you like to leave everyone listening with, and why? Yeah, it's a book that changed my perspective, and I read it every year, at least two times, I can tell you: the Discourses of Epictetus. It's a simple book.
[00:27:17] And every time you read it, you discover something new. Every time you read it, it seems like the first time, because it changes with the moment you are reading it and the perspective that you have. And I really like it. And I think that in the times we are going to be living through in the next years, resiliency, embracing change, and opening up to this new world that AI is bringing will matter.
[00:27:45] It's going to be crucial that we are prepared from a mindset perspective, and Epictetus had some clues about what a good mindset for these times looks like. Oh man, you've got me intrigued there. I'll be checking that out. I'll add it to the Amazon wishlist. And obviously we talked about a lot of useful information today.
[00:28:07] Anyone interested in exploring agentic AI with you, or anything that we talked about, where would you like to point everyone listening to find out more information? Yeah. So you can go and try the Maisa KPU at kpu.maisa.ai. But we also share a lot of insights and updates on LinkedIn at Maisa. Maisa is M-A-I-S-A; we pronounce it one way in Spanish and another way in English.
[00:28:34] So yeah, feel free to reach out if you are interested in executing digital work using bulletproof AI agents, enabling better work through technology and chain of work. Happy to be in touch. Well, I'll add everything that you mentioned there to the show notes so people can find you nice and easy. As I said, we covered so much in a short amount of time there, so thank you for sitting down with me. Really appreciate your time. Likewise. Thank you for having me here, Neil.
[00:29:00] What I loved about today's conversation is that Maisa's chain-of-work approach, powered by this knowledge processing unit, is addressing one of AI's most significant challenges: ensuring transparency, traceability and accountability in every decision that AI makes. We've all heard those horror stories of black-box AI, where nobody knows what is in there or how it comes to the conclusions that it does.
[00:29:25] So discussing how this technology goes beyond traditional AI models and provides this deterministic traceable process instead of probabilistic outputs seems like a big leap forward. Because it means businesses can now rely on AI for critical operations, confident that every action is auditable and explainable.
[00:29:46] And with early adopters in automotive, oil and gas, and financial services, it's clear that this approach is already making a big impact. But as we wrap up, is your organization ready to embrace agentic AI and technology like this? How will you balance the power of automation with human oversight? And are your leaders prepared to manage digital teams as effectively as they manage their human ones? Please let me know your thoughts. How do you see AI reshaping leadership in your organization?
[00:30:17] Let's keep this conversation going. Email me at techblogwriter@outlook.com, or find me on LinkedIn, X and Instagram, just at Neil C. Hughes. Let me know your thoughts. And if you enjoyed yourself, why not come back tomorrow for another guest, another example of how technology is transforming our life, our work and our world? Same place, same time tomorrow. I will meet you here inside your podcast feed, ready to talk into your ear balls. But that sounds a little bit sinister, so I'm going to go now. I will speak with you all tomorrow.
[00:30:47] Bye for now.

