What happens when the need for rapid AI innovation runs up against the growing pressure for trust, accountability, and compliance? In this episode of Tech Talks Daily, I sit down with Mrinal Manohar, CEO of Prove AI, to explore how risk management can accelerate rather than hinder AI deployment.
Mrinal shares how Prove AI is helping organizations build trust into their AI systems from the start. At a time when businesses are moving AI models into production, yet often lack visibility or safeguards, Prove AI offers a solution grounded in transparency and automation. Their approach uses distributed ledger technology to create tamper-proof audit trails for AI models. This allows teams to focus on innovation while having the infrastructure in place to meet evolving standards and regulatory demands.
We discuss why traditional monitoring techniques fall short in an AI context, especially as models become more complex and decisions happen in real time. Prove AI's infrastructure is designed to support continuous risk mitigation. By recording every event and decision with cryptographic certainty, they make it possible to prove safety, compliance, and responsible use without relying on labor-intensive manual audits.
Mrinal also explains how Prove AI's upcoming GRC product aligns with ISO 42001 and helps companies stay ahead of regulatory expectations. Whether you're deploying AI in customer service, manufacturing, or high-risk environments, the platform ensures clear oversight without disrupting speed or agility.
This conversation covers practical examples of AI risk in action, from automated railway inspections to drive-through ordering systems. We also explore how distributed ledger technology is helping redefine AI governance, offering companies a way to move fast with confidence.
If you're scaling AI and wrestling with risk, compliance, or trust, this episode will give you a fresh perspective on how to build guardrails that support growth—not slow it down.
[00:00:04] How can we build trust in AI when its decisions are made at lightning speed? And the risks, well, they're constantly evolving. On today's episode of Tech Talks Daily, I'm going to be joined by the CEO of a company called Prove AI. And he's going to be bringing a fresh perspective to one of the biggest hurdles in AI adoption. Yeah, governance. And with a background that stretches back to auditing automated trading systems way back in 2007,
[00:00:33] my guest knows firsthand that managing AI risks isn't a one off task. It's a continuous challenge. And Prove AI is on a mission to accelerate safe innovation by giving businesses the tools that they need to confidently move from AI pilot projects to real world production. And in the conversation today, he's going to be talking about how they're doing just that through tamper proof, time synced audit logs,
[00:01:01] all of which are powered by distributed ledger technology. So what does it take to make AI not just smarter, but safer? Well, it is time for me to officially introduce you to today's guest, where we'll find out all about this and much more. So a massive warm welcome to the show. For everyone listening who's hearing you for the first time, could you tell them a little about who you are and what you do? Thanks for having me on, Neil. I'm Mrinal Manohar.
[00:01:30] I'm the CEO of Prove AI. At Prove AI, we essentially build systems to de-risk the deployment of artificial intelligence. And really the angle we come from is enabling people to innovate faster. The analogy I always like to use is everyone drives faster if they have good seat belts. We make those seat belts. And one of the things that I love doing on this daily tech podcast is finding out more about the origin story behind my guests.
[00:01:58] So can you tell me a little bit more about exactly what inspired the launch of Prove AI and how your technology maybe addresses the increasing challenges of AI risk management? Because it's a huge topic right now. The origin story actually kind of goes back to 2007. I graduated grad school in 2007 and mostly worked on Wall Street, Bain & Company, Bain Capital. But during grad school, one of the things we built was an automated trading system.
[00:02:27] And, you know, it's 2007. So we didn't have LLMs or complex ML models. So essentially, we built a trading system from scratch that replicated what AI does, you know, deciding when to buy a stock or sell a stock.
[00:02:44] And compared to today, where you think of models that have 9 billion or 10 billion parameters, and I think DeepSeek is, you know, 10x that effectively, what we were building had closer to 30 or 40. Again, back in 2007, given the processing power constraints we had. And what I noticed even then was what was really, really hard with that underlying AI system was auditing it and trying to figure out why it made certain decisions.
[00:03:13] And so right from there, that memory stuck with me. And then when I discovered things like distributed ledger technology and other wonderful audit slash tamper-proof systems, I realized that there was a great opportunity to combine technologies where we could audit, or at least create a wonderful trail for, AI systems in an automated manner, which hadn't been done until now.
[00:03:42] And I think it's one of the primary reasons that adoption has been stymied. And I think with AI models becoming more and more complex, especially as the technology matures now, manual monitoring can drain developer resources. So how do you, at Prove AI, automate risk management to essentially allow teams to focus more time on innovation? Yeah, the way we automate it, really.
[00:04:08] So I think you've hit on a really important point here, right? Like the standard auditing workflow, you know, the safety software approach that we had in the Web 2.0 world, does not work at all for artificial intelligence. Because if you think about the problems you had in the past, they were typically episodic and not continuous. So I'll give you an example.
[00:04:35] Like if you wanted to monitor your financial system, for example, you know, you do a financial audit once every quarter, once every year, what have you. And if you're not continually monitoring it, it doesn't really make a difference because, you know, you need to report once a quarter, once a year, etc. With artificial intelligence, you have two issues.
[00:04:54] One, your problem is continuous, meaning if your AI chatbot or say, you know, your AI robotic system, which is painting cars, starts going off the rails, you will see the economic impacts of that mistake immediately. So it's not an episodic problem, it's a continuous problem.
[00:05:15] And secondly, the growth of complexity is exponential, meaning you have N models, which each reference N data sets, each of which might spawn N agents. And you're looking at, you know, quadratic, even cubic growth in complexity. And so this is the reason why this primarily is a technology problem that needs to be solved with technology.
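To put a rough number on the complexity growth described above, here is a back-of-envelope sketch. All counts are purely illustrative, not figures from the conversation:

```python
# Back-of-envelope illustration of the complexity growth described above:
# if N models each reference N data sets, and each model/data-set pair can
# spawn its own agent, the number of combinations an auditor must cover
# grows cubically. All numbers here are hypothetical.
def audit_surface(n_models: int, n_datasets: int, n_agents_per_pair: int) -> int:
    """Count the model x data-set x agent combinations to be audited."""
    return n_models * n_datasets * n_agents_per_pair

for n in (2, 5, 10):
    print(f"{n} of each -> {audit_surface(n, n, n)} combinations")  # grows as n**3
```

Even at modest scale, the combinations quickly outrun what a quarterly manual review could plausibly cover, which is the point being made here.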
[00:05:40] So putting together a committee, you know, making sure Bob ticks all these boxes on a form and hands them over to Alice, is just not going to work anymore with artificial intelligence systems. And how we automate it, really, our approach has always been to use technology to solve the technology problem. So it's deep integrations with model providers like Hugging Face, deep integrations with data sources, and so on. And right now we're actually talking about going closer to the metal on NVIDIA systems.
[00:06:10] And, you know, once you have all these integrations in place, all you actually need to do is pick up the event stream that's coming out of these AI systems. And then you have, you know, a pretty wonderful audit log and a bunch of transparency into what's going on. Sorry, that was pretty long-winded. But I really wanted to talk about why we take that approach, because I think it's important to understand. AI is not like your typical IT stack. Yeah, 100% with you.
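The event-stream approach described above can be sketched in a few lines. To be clear, the class and field names here are illustrative inventions, not Prove AI's actual API; the point is simply that each integration emits events and every event is appended to a structured, timestamped log:

```python
# Minimal sketch of capturing an AI event stream into a structured audit
# log, in the spirit of the approach described above. All names here
# (ModelEvent, AuditLog, the example source strings) are hypothetical.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class ModelEvent:
    source: str      # e.g. "huggingface:example-org/example-model" (illustrative)
    kind: str        # "inference", "training", "data_access", ...
    payload: dict    # event-specific details
    timestamp: float = field(default_factory=time.time)

class AuditLog:
    """Append-only in-memory log; a real system would persist to a ledger."""
    def __init__(self):
        self._events = []

    def record(self, event: ModelEvent) -> None:
        self._events.append(event)

    def export(self) -> str:
        """Serialize the full trail for auditors or downstream tooling."""
        return json.dumps([asdict(e) for e in self._events], indent=2)

log = AuditLog()
log.record(ModelEvent("huggingface:example-org/example-model", "inference",
                      {"prompt_tokens": 42, "output_tokens": 128}))
```

Because the log is populated automatically from the event stream, nobody has to fill in a form after the fact, which is the "technology to solve the technology problem" idea in miniature.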
[00:06:39] And I think that extra explanation is really needed now. And another thing, I mean, distributed ledger technology is a key part of your solution. It's not just about AI here. So how does DLT enable secure timestamp data capture and enhance AI risk management? Because that seems to be the secret sauce here too. It's definitely part of the secret sauce.
[00:07:02] You know, we use a bunch of technologies, but let's talk about like why DLT is actually incredibly important for this particular use case. DLT does two things, at least when it comes to AI management, that other technologies cannot do. The first is your entire audit log is tamper-proof. And so if you compare that to, you know, a traditional database, which is one, not purely time-synced.
[00:07:30] And second, you know, there's a single point of failure. You have a database administrator and, you know, that person can go change what happened at any point. What do you really, really need for auditability purposes, proving that you're actually running AI safely is a tamper-proof log that really tells you what's going on underneath the hood.
[00:07:50] If you think of AI as, you know, the most crazy non-deterministic system, DLT is actually the most deterministic system, which will give you a real look through into what exactly happened with your build, which is incredibly important. And the second thing is distributed ledger technology is multi-party. You know, it allows, it has very, very rigorous access control. You can have multi-key systems to decide who has access to what information.
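The tamper-proof property described here can be illustrated with a toy hash-chained log. This is a deliberate simplification of what a DLT provides: a real distributed ledger adds consensus, multi-party replication, and access control on top. But the core guarantee, that editing any past entry invalidates everything after it, looks like this:

```python
# Toy hash-chained audit log illustrating tamper evidence: each entry's
# hash covers the previous entry's hash, so altering any past record
# breaks verification of every later link. A simplification of what a
# distributed ledger provides, for illustration only.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link

def entry_hash(prev_hash: str, record: dict) -> str:
    data = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], GENESIS
    for rec in records:
        h = entry_hash(prev, rec)
        chain.append({"record": rec, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    prev = GENESIS
    for link in chain:
        if entry_hash(prev, link["record"]) != link["hash"]:
            return False
        prev = link["hash"]
    return True

chain = build_chain([{"event": "model_deployed"}, {"event": "inference"}])
assert verify(chain)
chain[0]["record"]["event"] = "tampered"  # any retroactive edit...
assert not verify(chain)                  # ...is detected on verification
```

A database administrator could rewrite a row in a conventional database without a trace; here, the same edit is immediately detectable, which is the auditability point being made.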
[00:08:17] And given that artificial intelligence has been multi-party right out of the gate, it's very, very important to have an underlying technology that also is. And to help everybody listening understand the tangible benefits, the ROI, the value that you're offering here, how does automating the monitoring of AI systems not only improve safety, but also accelerate the pace at which businesses can maybe experiment with new applications? So the ROI really is right now.
[00:08:46] So let me describe what's going on right now, which might make this a little bit clearer. We were recently at NVIDIA GTC. We spoke to a ton of people who are trying to innovate and use artificial intelligence to improve customer service, operational efficiency, etc.
[00:09:04] And when we asked them why most things are still stuck in the proof of concept stage and haven't gone into production, what they say is our higher ups or legal or compliance is basically telling us, hey, we're reading news stories every day that are scaring the hell out of us about how AI can go off the rails.
[00:09:40] And so they're just not willing to, for lack of a better analogy, push down on the accelerator. So going back to my initial analogy, the actual primary reason why things are stuck in proof of concept limbo has
[00:10:09] been because people, the AI innovation branches at these big companies don't have a tool that allows them to say, yeah, look, we can drive really, really fast here because we've got some seatbelts. Stuff could go a bit wrong, but it's not going to be catastrophic. And they just don't have that tooling at all. And we're trying to rectify that. And to further bring to life everything you've just talked about there, are you able to share any real world examples or use cases
[00:10:38] that you're maybe discussing with potential customers and how your solution would help those customers manage AI risk more effectively? Yeah, absolutely. I'll give a couple of examples just to be specific here. So one area where we're seeing a lot of customers really interested is industrial processes. Like, for example, if a company is maintaining railway tracks and, you know,
[00:11:05] maintenance of railway tracks is a very, very complex problem. There's thousands of miles of rail, different weather conditions, and different rates at which things fail or need to be reclamped, etc. However, all of this is fairly predictable. You know, it's metal on top of rock, and there's certain weather patterns. And if you have all that data, an artificial intelligence
[00:11:31] system can actually make this much more operationally efficient. That being said, you want to make sure that the output that you're getting from the AI versus the calibration and training that you've given it syncs up because otherwise, you know, there could be a catastrophic disaster if, you know, a train went off the rails, for example. And so that's one example of a use case where it's incredibly important to have that end-to-end audit log, absolutely understand every single data set that went into
[00:12:00] training this particular use case, and have remediation for your underlying systems. And another quick example I'll give is, you know, if you think about restaurants or, you know, say, any place where you interact with customers on a regular basis, for example, if you're going through a drive-thru and you want to order a cheeseburger, you want to make sure that that underlying AI system
[00:12:26] doesn't, you know, accidentally order 10,000 pizzas, for example. I think there was a funny news story about how McDonald's had implemented AI in their drive-thrus and the order started getting really, really wonky. Like, the system even started ordering things that weren't on the menu, things like foie gras and caviar, etc. And the issue with that and the underlying problem there is the companies are less worried
[00:12:54] about Larry Lawman and, you know, regulatory noncompliance. What they're really worried about is when issues like that happen, it engenders a lack of trust from their underlying customers, and customers at the end of the day are what drive business. And so a lot of the functionality that we're building and the real prime motivator for our customers is they don't want their end customers
[00:13:22] to opt out of, you know, these underlying AI systems that are improving operational efficiency. I would also say something else we need to talk about here. We often focus on the speed of technological change, but alongside that AI regulations, they're also continuing to evolve alongside it. So what role do you see Prove AI playing in helping organizations also stay compliant while still pushing the boundaries of innovation, of course?
[00:13:51] You know, I'd love to give like a little context here. Like, you know, we're not really primarily just about regulation. Our view really is that the issue with AI is not about, you know, new regulations coming to the fore. It's more about being compliant with existing regulations,
[00:14:15] meaning theft is still illegal. Doxing someone or exposing their personally identifiable information is still illegal, right? Like, you don't need a bunch of new laws to actually motivate you to be compliant with existing laws. Everybody was like, oh, if the government does not pass a set of laws around AI regulation, people can just do what they want.
[00:14:41] But that's a really... Yeah, that's a really naive way to think about it because like, everything that's been illegal for the last 300 years is still illegal. So you probably don't want to do it. And, you know, your end customers will be pretty mad if that happens. Now, thinking about that, we think that the way to stay on top of this is really to automate everything and typically against
[00:15:06] a standard. Because, you know, similar thing happened in cloud computing, information sharing, et cetera. You know, initially, people said there's going to be a raft of legislation. What ended up happening is standards came about, like SOC 2 compliance for cloud computing and information sharing. Before that, things like Sarbanes-Oxley for financial statements. And similarly, with artificial intelligence, I think it's going to be a standards-based approach. And right now, the
[00:15:33] call it the most fully fleshed-out standard, is ISO 42001. And so our approach really is to make sure that you're compliant with a standard. If you take ISO 42001, for example, you know, even in countries that have fairly rigorous regulation of AI builds, you'd be, you know, 99% of the way compliant. And our approach really is, you know, when it comes to governance, risk and compliance,
[00:16:01] is to do this in an automated manner. So essentially, if you're logging everything and you're auditing everything and have recourse, that gives you automated compliance and certification of what's going on. Because it's highly fluid, you can have dynamic policy enforcement. And because, you know, everything's working on a DLT system, access control, et cetera, is all cryptographically secure. So that's really our approach. We think all of this is going to be largely standards-driven
[00:16:29] and not really a bunch of, you know, very specific laws. And I'm curious, from everything that you're seeing, the conversations you're having at the moment, what would you say are the biggest challenges that you're seeing in scaling AI risk management? And how does your technology help overcome some of these hurdles too? The biggest issue is that, you know, manual approaches and what's happened in the past with,
[00:16:54] you know, Web 2.0 or other software just doesn't work. It's completely impractical. Going back to, you know, what I talked about, the absolutely exponential growth in complexity with AI systems and the fact that it's multi-party and multi-tenant. So, again, you know, at the risk of repeating myself, our view has always been, this is a technology problem that needs to be solved by technology. We're inherently techno-optimists at the company.
[00:17:21] And we think, you know, you can actually solve this with technology. So what we really do is we consolidate oversight into a single automated platform. This gives you an up-to-date, you know, view of AI activity across your organization, but in real time, because this is a continuous compliance problem. Meaning, you know, as I said in my previous example, this is not episodic. If your AI starts going off the rails, you are going to feel the pain immediately,
[00:17:49] which is evident in the news stories we read all the time. And because you have this real-time visibility into all your AI activities from a centralized interface, you're automatically documenting what's going on. And as a result, you have, you know, the best form of recourse. That's really the starting point. Of course, we're thinking about how to take that even further, you know, with really, really good connectors. You'd probably, you know, a few months from now,
[00:18:16] just be able to snap in certain models, data sources, and reconciliation mechanisms. There's no, you know, silver bullet here, but giving a ton of explainability and transparency into your underlying AI build will mitigate, you know, a large portion of the risk. And as this space continues to evolve, how do you envision the evolution of automated AI risk management? What do you see happening next?
[00:18:43] Are there any trends that you think organizations or business leaders listening should be preparing for in the near future? Anything that you're keeping a close eye on here? So actually, let me answer the second part of the question, because I think that that answers the first part of the question. We were just recently at NVIDIA GTC, and, you know, in his keynote, Jensen Huang said something really, really evocative, which I really agree with.
[00:19:09] He said, you know, look, AI is just an application, and truly what it's about is the data it's been fed. Everything starts and ends with the data that's going into your AI systems. So you can't really have an AI strategy if you don't have a data strategy. Data lineage is really the most important underlying function of AI governance, because, you know, if you think about all the problems we talked about,
[00:19:38] you know, bad orders at your McDonald's, PII being exposed, copyright information being exposed, every single one of those can be traced back to the data that was fed into the underlying model. AI is really nothing but, you know, a set of gates until data trains it to do a certain thing. And as a result, what we're seeing is things are actually moving closer and closer to the bare metal or the hardware processing.
[00:20:04] NVIDIA recently announced, you know, a bunch of open source machine learning operations tools. And, you know, these are APIs that are not heavy on UI, UX. But what it will enable you to do, especially if you use a product like Prove AI, is have really, really deep insight, almost, you know, directly at the hardware level,
[00:20:25] into how underlying data sets, underlying models, underlying parameterizations are actually coming together to create the output that AI models produce. So, you know, the TLDR there is the big trend we're seeing is a big recognition that it's all about data and data lineage. And that's why, you know, the DLT aspect of our system is really, really important, because this data comes from multiple parties.
[00:20:55] And since it's all about the data, you really do need a timestamped and tamper-proof audit trail of where all of this came from. So I think it's a powerful moment to end our conversation on today. And thank you so much for sharing your insights with me today. But before I let you go, I always ask my guests to leave one final gift, and that is a book that we can add to our Amazon wishlist. Is there a book that you'd recommend for everybody listening that they can check out? What would it be and why?
[00:21:25] I'm going to go with two. My favorite piece of science fiction as it relates to artificial intelligence is probably Hyperion. So Hyperion and the Hyperion Cantos series is really, really evocative. It's written in the style of the Canterbury Tales. And my favorite book overall, I just think it's the best novel ever written, is Blood Meridian by Cormac McCarthy.
[00:21:51] I think it's just probably the most vivid retelling of American history. And I love that the author kind of doesn't really have a point of view. There's no moralizing. There's no lecturing. There's just a, hey, this is what happened. Make of it what you want. Like, I love that. Cormac McCarthy, who also wrote No Country for Old Men, which became a really successful movie, is one of my favorite authors. So that's what I recommend to people.
[00:22:21] Those are both good reads. Ah, I love it. Well, I will get both of those added to our Amazon wishlist. And for anybody listening, maybe we've sparked their curiosity. They want to find out more information about Prove AI. Dig a little bit deeper on some of the things we talked about today. Where would you like to point them? Either go to our website, www.proveai.com. And similarly, you can find us on LinkedIn. We try to be as responsive as we can.
[00:22:46] Well, so much I loved about our conversation today, about how you're leveraging technology that many have cast aside: distributed ledger technology, inherently the most secure way to capture data in a timestamped manner. I would ask anyone listening that is interested in enabling their organizations to automate AI risk management to maybe check it out, get a bit hands-on, and have a play and see what's happening there. It's so much great work you're doing. But thank you for bringing it all to life today.
[00:23:16] Thank you, Neil. It's been a pleasure speaking with you. Big thank you to my guest for lifting the lid on the work that Prove AI is doing and how they're helping organizations move faster while also staying safe. And what resonated most for me is this idea that risk management in AI really isn't a roadblock. It's a launch pad. And by making it easier to prove safety and compliance through real-time, tamper-proof logs,
[00:23:45] Prove AI is removing some of the biggest barriers that stop companies from scaling their AI efforts. And whether you are in a regulated industry or just trying to avoid reputational damage from another AI misstep, the tools and strategies that my guest shared today, I think they offer a timely reminder that good governance doesn't have to come at the cost of innovation. So if you'd like to learn more about Prove AI's approach or explore their new platform for AI compliance, please check out the links in the show notes.
[00:24:14] But now I want to hear from you. Are you thinking about AI governance in your own organization? Are you confident in your ability to prove safety and compliance? Or does it just feel like a black box? Let me know. Email me at techblogwriter@outlook.com, or find me on LinkedIn, X, and Instagram, just at Neil C. Hughes. Keep your messages coming in. I'll return again tomorrow with another guest.
[00:24:41] And hopefully I'll have the privilege of speaking with you all again tomorrow. Speak with you then. Bye for now.

