As someone who spends a lot of time covering AI announcements, product launches, and conference stages, it is easy to forget that most AI today is still built for desks, screens, and digital workflows.
Yet the reality is that the vast majority of the global workforce operates in the physical world, on roads, construction sites, depots, and job sites where mistakes are measured in injuries, collisions, and lives lost. That gap between where AI innovation happens and where real risk exists is exactly why I wanted to sit down with Amish Babu, CTO at Motive.
In this episode, I speak with Amish about what it truly means to build AI for the physical economy. We unpack why designing AI for vehicles, fleets, and safety-critical environments is fundamentally different from building AI for emails, documents, or dashboards.

Amish explains why latency, trust, and reliability are non-negotiable when AI is embedded directly into vehicles, and why edge AI, multimodal sensing, and on-device compute are essential when milliseconds matter. This is a conversation about AI that has to work perfectly in messy, unpredictable, real-world conditions.
We also explore how Motive approaches AI as a full system, combining hardware, software, and models into a single platform built specifically for life on the road. Amish shares how AI can help prevent collisions, support drivers in the moment, and create measurable safety and operational outcomes for fleets operating across transportation, construction, energy, and public sector environments. Along the way, we challenge common misconceptions around AI in vehicles, including the idea that it is about surveillance rather than protection, or that all AI systems are created equal when lives are on the line.
If you are interested in how AI moves beyond productivity tools and into high-stakes environments where safety, accountability, and trust matter most, this episode offers a grounded and practical perspective from someone building these systems every day. I would love to hear your thoughts on this one. How do you see the role of AI evolving as it moves deeper into the physical world?
[00:00:03] What does AI look like when mistakes have very real consequences? Well, today's episode goes far beyond workflows and we're going to go straight out into the physical world where milliseconds matter. Because today I'm joined by the CTO of a company called Motive. We're going to be talking about how AI is being designed for roads, vehicles and environments where safety comes first.
[00:00:31] So yeah, we will get into why building AI for the physical economy is fundamentally different from digital use cases. And how edge AI is changing what is possible in real time. And why that shift could quietly save lives. So if you think AI is just about productivity tools, dashboards and office workers in corporate America,
[00:00:55] we're going to deliver a bit of a wake-up call today on how much bigger the technology and the solutions out there really are. So hopefully we'll deliver you a few surprises, but enough from me. Let's introduce you to my guest now. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Yeah, thanks Neil for having me. My name is Amish Babu. I'm Chief Technology Officer here at Motive.
[00:01:23] Here at Motive, we build AI for physical operations. So what that means, we serve industries that build, move and power the world. Transportation and logistics, construction, energy, oil and gas, field service, the public sector and more. So Motive brings workers, vehicles, equipment and spend under one unified AI platform. We use real-time edge AI and telematics to help the organization prevent collisions and injuries,
[00:01:51] reduce paperwork and manual work, improve productivity and profitability. 100,000 organizations use Motive today. We have customers like FedEx Freight, Halliburton, Kone, ABM, amongst many others. Before Motive, I was at Amazon, Square, and most recently at Meta, working on Oculus. And I came to Motive because AI was something I wanted to work on, and the impact here is very tangible.
[00:02:21] Helping drivers get home safely, reducing severe accidents, making physical work fundamentally safer and more efficient. So it's a pleasure to be here on your podcast. Yeah, thank you so much for taking the time to sit down with me. It's such an important conversation because I go to a lot of tech conferences in the US each year. And much of today's AI solutions that I see are all focused on desk jobs, office work, corporate America.
[00:02:48] But there's a stat that I've used a few times on this podcast now, and I'll drop it again now. 80% of the global workforce is actually deskless workers, which is difficult to believe sometimes when you see everything that is being sold out there. And looking at your own journey, you spent your time building AI for the physical economy, where mistakes have very real human consequences.
[00:03:13] So I've got to ask, how does designing AI for roads, vehicles and fleets, how does that differ from building AI purely for digital environments that you may have seen? And also, why has it been historically underserved considering that 80% stat that I mentioned? Yeah, it's a good question. Like you said, most of AI people interact with today is built for digital workflows.
[00:03:39] And obviously, if you haven't been under a rock the last couple of years, LLMs have been all the rage, right? And they're really good at writing emails, summarizing documents, generating images, answering questions in browsers. And they're very, very powerful and very, very cool. But they operate in environments where latency, occasional errors or a bad answer are usually inconvenient, not catastrophic, right?
[00:04:06] But we operate in the physical economy, on roads, in yards, on construction sites, where we have unstructured and unpredictable data, right? The data that we deal with is real time. And so we have milliseconds to react at the edge, in our hardware, to alert drivers. The environments are safety critical. A missed alert or delayed response can mean a serious collision, an injury, or a multimillion-dollar incident. So there's very little margin for error, right?
[00:04:36] And so designing AI for that world is fundamentally different. You need real-time perception and decision-making at the edge inside the vehicle, not in the cloud. And you need to have multimodal sensing that understands the full scene, real time, right? So that means video, telematics, GPS, motion, time of day, and more, right? So and then a much higher bar for precision.
[00:04:59] We can't have the hallucinations you get when generating images or documents, right, where it'll give you an answer but it could be incorrect. You can't do that when you're delivering in real time. So the space has been underserved for many reasons, but one main reason is the data is just hard to get, right? You need a large, diverse set of ground truth data and careful handling of sensitive video and operational data. And that only gets built over time. And Motive has been at this for 10 years.
[00:05:29] And the engineering problem is much harder, right? You're trying to cram a bunch of AI models into an edge piece of hardware that's running in real time and generating real results for the drivers and the fleets, right? And so most of the talent in the LLM world in the digital economy has been focused on the desk-based use cases. But the physical economy really requires you to work on the edge, in the field.
[00:05:57] And Motive took this bet very early on. So the last couple of years, like I mentioned, the focus has been on LLMs and that world. And we use them in our product, but we actually have been working on AI for the last eight years overall. And so we've been collecting the data. We've been working on our models and delivering them in real time on the edge.
[00:06:17] When you talk about using AI on things like roads, I think they're very different from, let's say, the A44 road here in the UK to those long winding roads in Arizona. So I've got to ask, what would you say are the biggest technical and environmental challenges that AI systems face when operating in unpredictable and safety-critical conditions where there are no margins for error? Yeah, it's a good question, right?
[00:06:45] And different environments in different countries require different approaches. And if you take a road like the A44 in Wales and parts of England, you see immediately why this problem is non-trivial. From an AI and systems perspective, a few challenges stand out. Dynamic, low-margin environments: the system has to understand lane structures on narrow or poorly marked roads, closing speeds to vehicles ahead or around blind corners, slippery surfaces, standing water, fog, low sun.
[00:07:13] In environments like this, risk isn't constant. It spikes by time of day, weather, road design, driver fatigue. Add on top of that, you have commuters, heavy-duty vehicles, buses, and other traffic all working together. They all require different tuning and different approaches, right? And so you have to have a system that can take in inputs from multiple sensors, multiple videos, and make sense of it. And so that's one of the problems.
[00:07:41] And then you have to have the ability to adapt to long-tail scenarios and rare events. So with a road like the A44, it's not just collecting the data, getting better at it, and training the models over time. You actually have to have multi-stage testing and offline experiments to validate that our models are working in these environments. So in short, roads like the A44 illustrate why generic AI in the cloud isn't enough.
[00:08:08] You need robust edge intelligence, which Motive has worked on for many years, tuned to the realities of unpredictable, safety-critical driving conditions. And we are seeing a shift towards edge AI and multimodal sensing, something that you mentioned a few moments ago. So why are on-device compute, stereo vision, and sensor fusion, why are these so critical for real-time safety, do you think? Yeah.
[00:08:36] When you're trying to prevent a collision, milliseconds matter. If an event happens on the road and your AI has to send a video to the cloud, wait for inference, and then send an alert back, you've often already missed the moment where the driver could have reacted. That's why we built our systems around on-device compute and multimodal sensing.
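To put rough numbers on that, here is a back-of-the-envelope calculation of how far a vehicle travels while waiting on inference. The latency figures are illustrative assumptions for the sake of the example, not Motive's measured numbers:

```python
# Illustrative only: distance covered while waiting on an alert.
# Both latency figures below are assumed, not measured.

speed_mph = 65
speed_m_per_s = speed_mph * 1609.34 / 3600             # ~29 m/s

cloud_round_trip_s = 0.5   # assumed: upload video + cloud inference + alert back
edge_inference_s = 0.05    # assumed: inference on the device itself

print(f"Cloud round trip: ~{speed_m_per_s * cloud_round_trip_s:.0f} m travelled before the alert")  # ~15 m
print(f"On-device:        ~{speed_m_per_s * edge_inference_s:.0f} m travelled before the alert")    # ~1 m
```

At motorway speed, a half-second round trip to the cloud costs roughly fifteen metres of road; that is the gap on-device inference closes.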
[00:08:57] On-device compute: our AI models run directly inside the vehicle on devices like AI Dashcam Plus, which has three times more AI processing power than our previous generation and can run 30 high-precision models at the same time. So we have to start with the compute on the device, make sure that it can actually handle the models we're running, and deliver results in milliseconds.
[00:09:25] Stereo vision is something we've added recently, and it adds a lot. This is basically two cameras sensing the road together, just like your eyes seeing from both vantage points. And this is a first in the AI dashcam industry. It helps us with models like forward collision warning and lane swerving.
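The geometry behind stereo vision is worth a quick sketch. Depth follows the classic relation Z = fB/d: the same object appears shifted between the two camera views (the disparity), and a larger shift means a closer object. Here is a minimal sketch of that idea and the time-to-collision estimate it enables; the camera parameters are hypothetical, not AI Dashcam Plus specifications:

```python
# Minimal sketch of stereo depth and time-to-collision, the geometry that
# underpins stereo-vision forward collision warning. All numbers hypothetical.

def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d: a feature shifted by 'disparity' pixels between
    the left and right images is closer the bigger the shift."""
    return focal_px * baseline_m / disparity_px

def time_to_collision_s(depth_m: float, closing_speed_m_s: float) -> float:
    """Seconds until contact at the current closing speed."""
    return depth_m / closing_speed_m_s

# Example: 700 px focal length, 10 cm baseline, 14 px disparity -> 5 m ahead.
z = stereo_depth_m(focal_px=700, baseline_m=0.10, disparity_px=14)
print(f"Lead vehicle at {z:.1f} m; TTC {time_to_collision_s(z, 5.0):.1f} s at 5 m/s closing")
```

That depth estimate is what lets a vision-only system reason about distance and closing speed rather than just "there is a vehicle in frame".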
[00:09:49] And finally, we use sensor fusion: video, audio, GPS, telematics, and motion data from the hardware in a unified device. So it's not just vision. We take in the 360-degree view of all of these sensors together to create a real-time perception engine that delivers real safety results. And when AI is embedded directly into vehicles, latency and trust become non-negotiable, as you mentioned there.
[00:10:18] So where do you begin to think about accuracy, response time, and reliability when lives are potentially at stake? It feels like such a huge responsibility there. Yeah, when you put AI inside vehicles, you're not building a novelty feature, right? We're not building a chatbot, right? You're building a safety system. That changes everything about how you think about accuracy, latency, and reliability.
[00:10:46] So we kind of design around two philosophies. Accuracy is precision where it matters and recall where it's critical. We think about accuracy in a task-specific way. For in-cab alerts, where we're delivering that millisecond alerting, we prioritize precision and accuracy. If the AI is constantly wrong, drivers learn to ignore it, and you've lost the safety benefit.
[00:11:09] We invest heavily in long-tail analysis and strict confidence thresholds before an alert is allowed to fire. For collision detection and severe events, we prioritize recall. Missing a serious collision is unacceptable. So we bias towards capturing those events and then use AI in the cloud to enrich and route the incident correctly. And we've found that Motive's AI dash cams successfully alert drivers to unsafe behaviors.
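In practice, the precision-versus-recall trade Amish describes often comes down to task-specific confidence thresholds. A minimal sketch of the idea; the threshold values are illustrative, not Motive's actual settings:

```python
# Task-specific confidence thresholds: strict for in-cab alerts (precision
# first, so drivers don't learn to ignore them), permissive for collision
# capture (recall first, so nothing serious is missed). Values are invented.

THRESHOLDS = {
    "in_cab_distraction_alert": 0.95,  # only fire when the model is very sure
    "collision_capture": 0.30,         # capture generously; enrich and filter in the cloud
}

def should_trigger(task: str, model_confidence: float) -> bool:
    return model_confidence >= THRESHOLDS[task]

# The same 0.6-confidence detection stays silent in the cab but is
# still captured as a possible collision for cloud-side review.
print(should_trigger("in_cab_distraction_alert", 0.6))  # False
print(should_trigger("collision_capture", 0.6))         # True
```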
[00:11:37] And then when we talk about latency, it's an edge-first design: we design for real-time performance. Models run on the device, so detection and alerting happen within fractions of a second without a round trip to the cloud. AI Dashcam Plus's processor and architecture are tuned to optimize workloads across CPUs, GPUs, and DSPs, so we can run many models concurrently. And then let me talk about reliability and hardware, right?
[00:12:06] AI Dashcam Plus and our AI Dashcam device have been designed from the ground up. And this is pretty unique to Motive: we're a full-stack company, so we design the hardware as well as the firmware as well as the software, and we put it together to create a really reliable device. We test this device in multiple environments, in extreme heat, vibration, and shock, backed by device management tools so fleets can proactively monitor health at scale.
[00:12:36] This combination, accurate models, low-latency edge inference, and hardened hardware, is what gives fleets the confidence to put our AI into truly safety-critical roles. I think it was about a year ago there were a lot of stories in our news feeds of leaders struggling to prove the business value, the ROI of AI. And in physical operations, is that different? How should companies benchmark AI performance and connect it to measurable reductions in risk and cost?
[00:13:05] What kind of things do you measure there? And what kind of real-world outcomes are customers seeing from Motive's AI in terms of safety, productivity, and ROI? Because that's where the big focus is at the minute. I'm just curious how it differs from different industries and different use cases and what you're measuring and seeing here. Yeah. In physical operations, as you mentioned, you can't justify AI on cool factor. You have to show measurable reductions in risk and cost.
[00:13:33] In aggregate, we've probably prevented around 170,000 accidents and saved up to 1,500 lives. And customers see that in these trials, right? So 80% reductions in collisions on average. A concrete example here is Western Express: one of our customers, they cut rollovers and overall accidents by 66% after focusing on risky driving behaviors with Motive.
[00:14:00] Western Express is actually a really good example of how we approach AI overall. They came to us with a problem where their drivers would often pull over on the side of the road to rest and take a break. They call this a sitting duck situation, where they're basically stationary on the side of the road. And it's a very dangerous situation; cars can run into them at very high speed. And they needed an AI solution to prevent that. And that kind of highlights our approach overall.
[00:14:30] We take these very real problems. And with our knowledge base and our team, we're able to actually generate custom models that apply across the industry. And so we developed a model with them in several months and were able to deploy it and prevent accidents for them.
[00:14:47] So going back to the question overall: once we look at the trials, customers can see that not only do they reduce accidents at a very high rate, 80%, but the financial ROI is there as well. You can translate this all into lower accident insurance costs, reduced fuel and maintenance costs, lower legal exposure, and faster exoneration. The key is to treat AI like any other critical investment.
[00:15:14] Define the metrics, run a proper trial, and hold the system accountable to hard outcomes, not just dashboards. And building AI on the edge often means combining hardware, software, and models all into one single system. And I'm curious, what lessons have you learned about making these pieces work together, both reliably and at scale? Again, a huge achievement here. But tell me more about that.
[00:15:40] Building AI at the edge isn't just a modeling problem. We're not just focused on the model. It's a systems engineering problem. And one of the first lessons we've learned is to design the hardware, the firmware, the software, and the models together. If hardware, firmware, and AI are designed in separate silos, which you often find when companies buy the hardware separately and don't design full stack, you end up constrained by the weakest link.
[00:16:05] With AI Dashcam Plus, for example, we co-designed all of those pieces together: the Qualcomm-based compute platform to run many models concurrently, the dual-lens stereo camera system to enable depth-aware perception, the sensor suite of video, audio, GPS, and movement sensors to support rich sensor fusion. This tight integration lets us push more sophisticated AI to the edge without sacrificing responsiveness or reliability.
[00:16:33] Secondly, what we've learned is to unify the devices to reduce failure modes. So AI Dashcam Plus, our newest device, combines them into one unified device, cutting install time nearly in half and improving reliability. And then three, kind of an obvious one but an important one, is to engineer for the real world, not just the lab. So vehicles experience high vibration, temperature extremes, inconsistent connectivity.
[00:16:59] We brought in engineers from companies like Apple, Meta, Cisco, Amazon, GoPro, people who know how to build consumer-scale hardware with enterprise-grade reliability. And we subject devices to rigorous environmental and lifecycle testing. And then another one, in this ever-evolving AI world, is to make the system upgradable. So we built AI Dashcam Plus on a modern Android-based platform.
[00:17:26] So we can ship new AI models and features like stereo vision, two-way calling, and an AI voice assistant to our customers. So the takeaway: edge AI works reliably at scale only when you treat hardware, software, and models as a single system and design for the messiness of real-world operations.
[00:17:47] And when I was doing a little research before you came on the podcast today, I was reading how Motive has recently announced a new class of intelligent commercial vehicle hardware. Check this out: it can run more than 30 AI models simultaneously on the edge. So what does this unlock for road safety and autonomy that was not possible before?
[00:18:08] And how does AI Dashcam Plus, how does that change what's possible in terms of detecting risk earlier and preventing collisions compared to the previous generations of dashcams? Because I suspect many people listening will have some of those old-school dashcams, but this feels like a massive step forward. Yeah, thanks, Neil. AI Dashcam Plus is really a labor of love, and we're super excited to bring it out to market.
[00:18:34] So it's our biggest leap forward yet in real-time, edge-based road safety. It's not just a smarter camera. It's a unified AI system running in the cab that changes what's possible in early risk detection and collision prevention. So, a few things it unlocks. Much broader and richer risk detection in real time: three times more AI compute, 30 high-precision AI models, as you mentioned.
[00:19:01] We'll continue to deliver our models for distraction, phone use, eyes off road, and smoking, and improve things like close following and lane swerving. It's just a massive step forward for the industry overall. And one model I'd like to call out very specifically is our fatigue model. It also highlights how we approach AI on the edge, and on AI Dashcam Plus in particular.
[00:19:28] Fatigue is not just a singular event in time, right? So we all have experienced it where we're on a long drive, we're getting tired, our eyes are closing. You know, most AI systems would take a snapshot of a yawn, eyes closing, and say, hey, this person's fatigued. But that's not actually how it works, right? It's a risk that builds over time, right?
[00:19:54] And so the approach we've taken with AI Dashcam Plus, where we're deploying this model, is to create a model that looks at yawning, eyes closing, swerving, basically all of it together, as risk that accumulates. And we're delivering these types of really complex models to our customers, and we're going to continue to deliver on that over time.
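One plausible way to model fatigue as risk that builds over time, rather than a single-frame event, is a leaky accumulator over the individual signals. This is a hypothetical sketch of that idea, not Motive's actual model; the weights, half-life, and threshold are all invented:

```python
# Leaky-accumulator sketch of fatigue scoring: each signal (a yawn, eyes
# closing, a swerve) adds to a score that decays over time, and an alert
# fires only when the accumulated risk crosses a threshold. Numbers invented.

import math

class FatigueScore:
    WEIGHTS = {"yawn": 0.2, "eyes_closed": 0.5, "swerve": 0.4}
    HALF_LIFE_S = 120.0   # score halves every two minutes with no new signals
    THRESHOLD = 1.0

    def __init__(self) -> None:
        self.score = 0.0
        self.last_t = 0.0

    def observe(self, t: float, signal: str) -> bool:
        """Decay the score to time t, add the new signal, return True if an alert fires."""
        decay = math.exp(-math.log(2) * (t - self.last_t) / self.HALF_LIFE_S)
        self.score = self.score * decay + self.WEIGHTS[signal]
        self.last_t = t
        return self.score >= self.THRESHOLD

f = FatigueScore()
for t, s in [(0, "yawn"), (30, "yawn"), (45, "eyes_closed"), (60, "swerve")]:
    print(t, s, f.observe(t, s), round(f.score, 2))  # only the final swerve trips the alert
```

No single yawn fires an alert here, but the accumulating pattern of yawns, eye closure, and a swerve within a minute does.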
[00:20:22] We're also using multimodal input and the dual cameras on the device to do many new types of things. One of those, for example, is license plate recognition. So we have two cameras on AI Dashcam Plus: one wide field of view camera to see what's going on around you, and a narrow field of view camera where we can actually read the license plates of the vehicles around you at any given time. So say there is an accident, we can actually know which vehicles were around you during that time.
[00:20:48] We're also using those dual road-facing cameras to improve models like forward collision warning, where we're able to take that stereo vision view, like I mentioned before, and build new types of models that just weren't possible before. On top of that, we're introducing human-centered, hands-free communication features like live two-way calling,
[00:21:10] and the Motive voice assistant, which let drivers and managers communicate and act without taking their hands off the wheel or eyes off the road. So we're bringing modern tech to an industry that hasn't been served in this way before, and our customers are loving it. The net effect: fleets can see more, act faster, and prevent more accidents. And we're going to continue building on this new platform year after year. We're super excited to have it out in the market.
[00:21:39] At the beginning of our conversation today, you were telling me about your customers that operate in environments as varied as construction sites versus oil and gas fields versus transportation. I've got to ask, how do you design AI that can adapt to such varied real-world conditions? Because there are all different ends of the scale and spectrum here, aren't there? Yeah, you're absolutely right.
[00:22:05] The environments are as varied as urban delivery, long-haul trucking, heavy construction, oil and gas, public sector fleets. The core challenge is to build AI that's general enough to work across these diverse conditions and configurable enough to respect different risk profiles, regulations, and even cultures, right? The UK is obviously different from the US in terms of how they look at safety. We approach that in several ways.
[00:22:32] One, shared core models, domain-specific configuration. We maintain a set of core perception and safety models for things like distraction, close following, speeding, lane position that apply broadly across industry. Then we allow customers to choose which behaviors to detect, set thresholds by fleet, vehicle type, or location, and decide which events should trigger in-cab alerts versus only manager review.
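That split between shared core models and domain-specific configuration typically looks like a thin policy layer on top of the same detectors. Here is a hypothetical sketch of what per-fleet configuration might look like; the fleets, behaviors, and thresholds are all invented:

```python
# Hypothetical policy layer over shared core models: the detectors are the
# same everywhere, but each fleet chooses which behaviors to act on, at what
# confidence, and whether events alert the cab or only go to manager review.

from dataclasses import dataclass

@dataclass
class BehaviorPolicy:
    enabled: bool
    confidence_threshold: float
    in_cab_alert: bool  # False -> manager review only

FLEET_POLICIES = {
    "construction": {
        "close_following": BehaviorPolicy(True, 0.90, in_cab_alert=True),
        "speeding":        BehaviorPolicy(True, 0.80, in_cab_alert=False),
    },
    "passenger_transit": {
        "close_following": BehaviorPolicy(True, 0.95, in_cab_alert=True),
        "speeding":        BehaviorPolicy(True, 0.70, in_cab_alert=True),
    },
}

def route_event(fleet: str, behavior: str, confidence: float) -> str:
    policy = FLEET_POLICIES[fleet].get(behavior)
    if policy is None or not policy.enabled or confidence < policy.confidence_threshold:
        return "ignore"
    return "in_cab_alert" if policy.in_cab_alert else "manager_review"

print(route_event("construction", "speeding", 0.85))       # manager_review
print(route_event("passenger_transit", "speeding", 0.85))  # in_cab_alert
```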
[00:22:58] A construction fleet and a passenger transit operator might use the same underlying AI, but configure it very differently. And then, you know, we have diverse real-world data at scale. Because we see data for more than a million vehicles and assets across industries and geographies, our models are trained on a very, very wide distribution of real-world conditions, from the US to the UK.
[00:23:22] We pay special attention to edge cases specific to industries, like work zone patterns and off-road operation, and to regional nuances: road design, signage, and driving norms. So that's in general how we look at it. You get a global AI brain that's constantly learning, but with the knobs and dials that each fleet needs to adapt to its specific real-world conditions. Love it.
[00:23:49] And one of the things I always do at the end of my podcast is I give my guests a virtual soapbox of sorts to bust a few myths and misconceptions, maybe lay them to rest once and for all. And I would imagine in your industry you do see a lot of truths and untruths on your timelines and news feeds. So what do people typically misunderstand most about your industry? Or are there any myths about your job or area of expertise that we can lay to rest today?
[00:24:18] Anything that strikes a chord with you or that you want to retire today? Yeah, I'd say there are three things that I'd like to highlight. Misconception one is that AI safety is just about dashboards and after-the-fact review. That it's a passive thing, right? That it's a safety manager looking at events after the fact. That's not how Motive approaches it, right? We deliver in-cab, in-the-moment AI results.
[00:24:48] And it's really about making sure that we're preventing unsafe behavior and rewarding good behavior in the field. So I think that's the misconception overall: that the AI is there to look backwards in time. And no, that's not how we look at it. And obviously, like I mentioned before, it's super hard to do that in real time. But really, that's where the ROI presents itself.
[00:25:13] If we can prevent these unsafe behaviors on the road, that is the real benefit to the customer. So I'd say that's misconception number one. Misconception number two is that AI in vehicles is about surveillance, not safety. And I'd say a common misconception is that we're watching drivers or that the fleet and safety managers are watching drivers. Big brother. But our customers and our own philosophy are the opposite.
[00:25:40] We use AI to protect drivers, exonerate them, and coach constructively. We build features like driver privacy mode and AI coach so that we can help drivers. And well-implemented AI systems can actually improve trust between drivers and safety teams. That's a common misconception, as you can imagine.
[00:26:00] But given the trust that we develop with drivers over time, they see the benefit, really adopt our product, and even come to like it over time. So misconception number three, I'd say, is that all AI is good enough, right? That an AI dash cam is an AI dash cam. But in safety-critical environments, accuracy and system design matter enormously.
[00:26:25] And if you go back to what I said before, Motive takes the approach of full-stack design, starting with the chips and the hardware and working all the way to the safety manager and the fleet manager. We've leaned into independent benchmarking, because we think fleets should demand proof that their AI actually detects unsafe behavior reliably, in the field, in real time. And we've invested heavily in edge-optimized hardware and real-time models to meet that bar.
[00:26:54] This isn't a generic software problem. It's about building high-consequence systems that you can trust when people's lives are involved. And so we believe our AI is industry-leading, and we will continue to carry that lead with AI Dashcam Plus. So we're super proud of that overall. Love it. And we will finally lay to rest those misconceptions you listed there. And I'd love to hear people's feedback on everything they heard today.
[00:27:20] And for people listening that do want to find out more information about anything we talked about today, and we mentioned a lot of products, a lot of information, where would you like to send everyone listening? Yeah, the easiest place to start is our website, gomotive.com. You'll find an overview of Motive's platform and how we help physical operations become safer, more productive, and more profitable. And from there, you can look at our blog and find things about AI Dashcam Plus, driver safety, and our AI accuracy.
[00:27:49] Our blog has a lot of great content there. And, of course, I'm always happy to continue the conversation on LinkedIn. I'm there. But the core message I'd like to leave the listeners with is this. AI for the physical world is here today. Motive has been working on this for many, many years. It's already preventing collisions and saving lives. And organizations have been embracing this real-time edge AI to run safe, efficient operations. Well, I will add links to everything that you mentioned.
[00:28:18] I would urge anyone listening to check out some of the information there. Keep up to speed with the information on the new blogs coming out as well. And in this episode, we covered so much in a short amount of time from designing AI for the physical world versus the digital world, the differences there, the shift to edge AI and multimodal sensing, and ultimately delivering measurable real-world ROI with AI. Too many acronyms there, but you know what I'm saying.
[00:28:44] So just a big thank you for sharing your story today and the great work you're doing. Thank you. Thanks for having me, Neil. I think today's conversation is a reminder that some of the most meaningful AI work is happening far away from desks and screens. And my guest shared what it really takes to build systems that drivers can trust, operate in unpredictable conditions, and still deliver measurable outcomes for both safety and cost.
[00:29:12] And it also raised a much bigger question about where AI creates its greatest impact when the stakes are at their highest. And as always, I'd love to hear your take on this. Where do you think AI can make the biggest difference in the physical world? And what should leaders be paying attention to right now? If you pop over to techtalksnetwork.com, you can leave me an audio message, DM me, connect with me on socials, and so much more. So pop over there and let me know your thoughts.
[00:29:39] And while you're doing that, I'm going to put the kettle on and prepare for tomorrow's guest. Speak to you again tomorrow. Bye for now.

