On this episode of Tech Talks Daily, I am joined by Sumeet Kumar, CEO of Innatera, to discuss why neuromorphic computing might be the most transformative technology of the coming year.
Neuromorphic processors are no longer just a concept confined to research labs; they're poised to revolutionize industries ranging from IoT to wearables, bringing unparalleled efficiency and autonomy to edge devices.
Sumeet explains how Innatera's groundbreaking Spiking Neural Processor (SNP) T1 bridges the gap between biological brains and artificial intelligence, enabling real-time decision-making with ultra-low power consumption.
With this technology, AI can now process complex data directly at the source, eliminating the need for energy-intensive cloud-based processing.
I learn more about the unique capabilities of the SNP T1 as Sumeet highlights how its efficiency extends battery life for devices like video doorbells and wearables while opening doors to previously unattainable applications in industrial IoT and healthcare diagnostics.
We also explore the broader implications of neuromorphic processing for the AI landscape, discussing how it addresses challenges of scalability, energy efficiency, and sustainability. With commercialization on the horizon, Sumeet shares why 2025 is a tipping point for neuromorphic technology and what this evolution means for the future of smart devices.
As neuromorphic computing shifts from theory to reality, what opportunities and challenges lie ahead for industries adopting this innovation? And how might this technology redefine our expectations of AI-powered devices in our daily lives?
As always, I'd love to hear your perspectives. Where do you see the greatest potential for neuromorphic processors? Share your thoughts with me.
[00:00:03] How can mimicking the human brain revolutionise AI? Well, here we are in 2025, and the field of neuromorphic computing is poised to redefine the landscape of AI and signal processing, offering unparalleled efficiency and autonomy. That's the big sales pitch. But what is it? What does it mean? I want to talk about it in a language that everyone can understand.
[00:00:28] So when somebody mentions the words neuromorphic computing, you won't be left like Doc Brown in Back to the Future, thinking, great Scott!
[00:00:36] So I've invited Sumeet Kumar, CEO of a company called Innatera. And they're a company leading the charge with its groundbreaking Spiking Neural Processor, or SNP T1.
[00:00:49] And I wanted to learn more about how this year could be the tipping point for the commercialisation of neuromorphic processors,
[00:00:56] and how this technology bridges the gap between biological brains and artificial intelligence.
[00:01:03] Cool, right? Yep, from wearables and IoT to automotive applications, their innovation is setting the stage for a future where devices can think faster, work smarter, and, most importantly of all, in a world where we're taking sustainability much more seriously now, consume less energy.
[00:01:23] So what does all this mean? What does the evolution mean for the future of AI? And how will it shape the next wave of intelligent, energy-efficient devices?
[00:01:31] Well, let's dive in with my guest right now and discover how neuromorphic processing could transform the way we think about computing and connectivity.
[00:01:42] So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do?
[00:01:50] Sure. My name is Sumeet. I'm one of the co-founders of Innatera, and I serve as the company's CEO.
[00:01:56] My own background is in microprocessor design. And in 2018, we spun Innatera out of the Delft University of Technology here in the Netherlands,
[00:02:04] basically to commercialise this very new piece of technology that we had built, mimicking how the brain processes information.
[00:02:12] And this is basically the chip that we've been developing at the company since 2018.
[00:02:17] And you're working in an incredibly exciting space.
[00:02:21] And one of the things I try and do on this podcast every single day is demystify some of these tech trends that we're hearing about,
[00:02:27] put them in a language everyone can understand.
[00:02:29] And I would imagine there would be a lot of people listening that have not heard of neuromorphic computing,
[00:02:36] and it has been gaining momentum. So I think you described 2025 as the tipping point for its commercialisation.
[00:02:44] So can you just demystify, tell everyone listening what it is and some of those key factors that you think are driving this shift
[00:02:51] and why the timing is right for scaling these processors now?
[00:02:55] And I appreciate there's about three or four different questions in there, but can you set the scene for us?
[00:03:00] Absolutely. So just to start things off, neuromorphic computing basically means brain-inspired or brain-like computing.
[00:03:09] And the entire idea of this paradigm is that we develop computing solutions that work just like your brain.
[00:03:17] So for instance, every time you see something, smell something, hear something,
[00:03:21] there are certain processes that occur in your brain that allow it to very robustly and very quickly identify what's happening in the world around you.
[00:03:28] And when you think about it, the brain is tremendously efficient. It's very quick.
[00:03:33] It's doing so many different processing tasks all at the same time.
[00:03:37] And I think that this is really the big ambition or the big goal for everyone in the computing industry.
[00:03:43] How can we create processors? How can we create computers that mimic the amazing capabilities of the brain
[00:03:50] and hopefully make devices which last longer in terms of their battery life, are more responsive,
[00:03:55] and more intuitively understand what it is that you want to do?
[00:03:59] That's what we do at Innatera.
[00:04:01] We build brain-like microprocessors and we apply these processors towards finding patterns in data that comes out of sensors.
[00:04:10] Our chips tend to be about 10,000 times more efficient than conventional chips that you find in the market today.
[00:04:15] And essentially, they allow you to find patterns in sensor data while using 500 times less energy than conventional solutions.
[00:04:23] And they allow you to do that about 100 times faster.
[00:04:26] And the key to how we're able to do this is we use software that is inspired by how your brain works.
[00:04:32] And we have a silicon structure which closely resembles how your brain itself is constructed from the ground up.
[00:04:40] Now, in the last 10 years or so, you've seen so much more AI.
[00:04:45] AI tends to be the big buzzword in the industry today.
[00:04:50] And I think that in the last 10 years, the industry has come a long way in figuring out how to apply AI
[00:04:57] and where to apply AI in the overall solution space that exists.
[00:05:02] And in the data center, I think that we've been very successful already deploying AI.
[00:05:07] And consequently, you see AI-based services popping up in the cloud all the time.
[00:05:14] But a fundamental gap that still exists inside the industry today is much of the data that is captured and sent to the cloud
[00:05:22] comes in from sensors inside of small devices like your smartwatch, your mobile phone,
[00:05:27] or the video doorbell which is hanging out in front of your door.
[00:05:31] And today, anytime you need to do meaningful AI-based processing of the sensor data,
[00:05:37] you've got to capture all of that data from the sensors, beam that up over a wireless link over the internet to the cloud,
[00:05:44] and that's where you end up finally doing the processing, which is tremendously wasteful if you think about it
[00:05:49] because the majority of data that comes out of sensors is actually just a waste.
[00:05:54] You throw it away.
[00:05:55] So essentially, the entire objective with neuromorphic computing that we think is going to be very powerful
[00:06:01] is that it's going to allow sensor data to be processed directly at the source itself.
[00:06:06] So as soon as the data comes out of the sensor, you're able to figure out whether or not it's relevant
[00:06:11] and what's actually inside of that data.
[00:06:14] And then you can decide whether you want to send it up to the cloud
[00:06:17] or you can actually implement a lot of application functions directly next to the sensor
[00:06:21] without ever even having to use the cloud.
[00:06:23] I think that's why neuromorphic computing is becoming so relevant today
[00:06:28] because if you look out at the market, you see sensor-based devices popping up in such large volumes.
[00:06:35] There are 4 billion new devices coming online each year,
[00:06:38] and these devices tend to be packed with dozens of sensors.
[00:06:41] And you've got to process that sensor data in some way.
[00:06:44] And neuromorphic computing is increasingly being viewed as the most energy-efficient way
[00:06:51] to process all of this data directly next to the sensor.
[00:06:54] And it's taken a while for the industry to really discover applications for it.
[00:06:58] But this is the reason why we think that the coming year and the years ahead
[00:07:03] would be very pivotal for neuromorphic computing as well as companies like Innatera
[00:07:08] because we see a lot of applications taking off
[00:07:11] where neuromorphic computing is a key enabling technology.
[00:07:16] And before you came on the podcast today,
[00:07:18] I was reading about Innatera's spiking neural processor, the SNP.
[00:07:23] I think it's T1 and how it's positioned as a groundbreaking innovation
[00:07:26] in the field of neuromorphic processing.
[00:07:29] But for everybody listening, a lot of people may have heard of NPUs
[00:07:33] in some of those new AI PCs.
[00:07:36] But how does your processor differ from those traditional AI chips
[00:07:40] and its advantages in terms of things like energy efficiency,
[00:07:44] real-time decision-making, and all that real-world stuff?
[00:07:48] Certainly.
[00:07:49] So, I mean, I can show you this Spiking Neural Processor T1
[00:07:53] inside of its evaluation kit, and we started providing this to customers in 2024.
[00:07:57] The T1 is the world's first neuromorphic microcontroller,
[00:08:01] which brings ultra-low-power AI capabilities directly to the sensor.
[00:08:05] The T1 is the world's first chip of its kind that applies this brain-inspired processing technique
[00:08:14] that is based on something called a spiking neural network.
[00:08:17] What differentiates a spiking neural network from a traditional neural network
[00:08:21] is that spiking neural networks, which you find in your brain,
[00:08:27] have a natural notion of time built into them.
[00:08:27] They aren't these huge trillion-parameter models, so to speak.
[00:08:32] If you look at spiking neural networks inside the brain for different applications,
[00:08:37] like detecting certain sounds, they're very compact neural networks,
[00:08:41] which fire in a very sparse way.
[00:08:44] They activate in a very sparse way, and they tend to be extremely energy efficient.
[00:08:48] And we think that these spiking neural networks are a great match for sensors
[00:08:54] because they allow sensor data to be processed very efficiently.
[00:08:57] Essentially, sensors tend to be very resource-constrained because they've got to be very small
[00:09:01] and power-efficient.
[00:09:03] And these spiking neural networks allow this processing to be done in a very efficient way
[00:09:08] directly next to the sensor.
[00:09:10] The T1 is perhaps the most efficient chip of its kind
[00:09:14] that can run these spiking neural networks in a native fashion in silicon.
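The sparsity argument above can be made concrete with a toy operation-count model. This is purely illustrative (the layer sizes and spike counts are made up, not Innatera's figures): a conventional dense layer touches every weight on every time step, while an event-driven spiking layer only does work when an input actually spikes.

```python
# Toy comparison: dense per-step computation vs sparse, event-driven computation.

def dense_ops(n_inputs, n_outputs, n_steps):
    # Conventional layer: a full matrix-vector product on every time step.
    return n_inputs * n_outputs * n_steps

def event_driven_ops(spike_counts, n_outputs):
    # Spiking layer: one weight row is touched per input spike, and
    # silent inputs cost nothing.
    return sum(spike_counts) * n_outputs

steps = 100
# Suppose only 3 of 64 inputs ever spike, each about 5 times in total.
spikes_per_input = [5, 5, 5] + [0] * 61

print(dense_ops(64, 32, steps))                # 204800 multiply-accumulates
print(event_driven_ops(spikes_per_input, 32))  # 480
```

The ratio here (over 400x) is an artifact of the chosen numbers, but it shows why sparse activation is the lever behind the efficiency claims: energy scales with events, not with time steps.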
[00:09:20] So for most applications, when you connect the T1 chip to a sensor,
[00:09:24] and then you're trying to do something like voice recognition,
[00:09:27] or you're trying to identify if there's a person standing in front of your door
[00:09:30] based on a radar sensor, which has been integrated into the video doorbell,
[00:09:35] we're able to do all of this processing using less than a milliwatt of power,
[00:09:40] which is ridiculously low.
[00:09:42] And we're able to do that across complex applications.
[00:09:46] And all of this energy efficiency comes purely from the fact that
[00:09:50] the chips mimic the brain, they have a very power-efficient analog mixed-signal architecture
[00:09:55] that closely resembles how the neurons and synapses of your brain are structured
[00:10:00] and how they work together.
[00:10:02] And these chips run software which is based on spiking neural networks,
[00:10:06] again, the software of the brain,
[00:10:07] allowing them to be super efficient, super quick,
[00:10:10] finding patterns in sensor data.
[00:10:11] And this is basically a piece of technology that we've protected
[00:10:15] with about 19 plus patent families,
[00:10:18] so about 120 plus individual patent applications as of today.
[00:10:22] And that's really what sets us apart in the industry.
[00:10:25] And one of the big promises of neuromorphic computing
[00:10:28] is this ability to bridge the gap between our biological brains
[00:10:32] and artificial intelligence.
[00:10:34] So how does this technology simulate brain-like neurons?
[00:10:38] What does it mean for the future of AI and computing?
[00:10:41] It feels incredibly exciting here.
[00:10:44] Where's this heading?
[00:10:45] If you really drill down into how the neurons and the synapses of your brain work,
[00:10:50] and I'm really oversimplifying this to the very basic fundamentals,
[00:10:55] you find that these neurons and synapses have a notion of time built in.
[00:10:59] So rather than having 32 bits of data and 64 bits of data streaming through the processor,
[00:11:05] the neurons of your brain just use very simple voltage spikes.
[00:11:08] So when data comes in, the data gets converted into a pattern of spikes,
[00:11:14] and these spikes are separated in time.
[00:11:16] And what the neural networks in your brain do
[00:11:19] is really try to identify which of these spikes are correlated.
[00:11:23] And by finding those correlations,
[00:11:25] they identify patterns inside of the input data.
[00:11:28] So if you look at really how the neurons work,
[00:11:30] they're very simple structures.
[00:11:31] They collect a charge over time,
[00:11:34] and when the charge reaches a certain level,
[00:11:36] the neurons fire an event, and that's all the computation there is
[00:11:40] inside of these neurons and synapses of the brain.
[00:11:43] And that's what our chips do as well.
[00:11:46] They very closely mimic the structure,
[00:11:48] and that's what allows them to be super efficient.
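The collect-charge-then-fire behaviour Sumeet describes is essentially the classic leaky integrate-and-fire neuron. Here's a minimal sketch of that model (a textbook toy, not Innatera's silicon; the threshold, leak, and weight values are arbitrary illustrations):

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Charge accumulates from input spikes, leaks away between steps,
# and an output event fires when the charge crosses a threshold.

def simulate_lif(input_spikes, threshold=1.0, leak=0.9, weight=0.4):
    """Return the time steps at which the neuron fires."""
    potential = 0.0
    fired_at = []
    for t, spike in enumerate(input_spikes):
        potential *= leak            # charge leaks over time
        potential += weight * spike  # an incoming spike adds charge
        if potential >= threshold:   # threshold crossed: fire an event...
            fired_at.append(t)
            potential = 0.0          # ...and reset the membrane
    return fired_at

# A burst of closely spaced input spikes pushes the neuron over threshold;
# the isolated spikes afterwards leak away before it can fire again.
print(simulate_lif([1, 1, 1, 0, 0, 0, 1, 0, 0, 1]))  # → [2]
```

The timing sensitivity is the point: the same number of input spikes spread out in time may never fire the neuron, which is how these networks detect temporal correlations in the input.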
[00:11:51] Now, what makes these neuromorphic processors such a great match for AI at the sensor
[00:11:57] is that, first of all,
[00:11:58] they allow you to work with the sensor directly in the analog domain,
[00:12:02] so you don't really have to convert the data coming out of the sensors
[00:12:06] into a digital domain where you end up spending a lot of energy.
[00:12:09] You can actually ingest data directly in the analog domain.
[00:12:13] You can ingest that data as quickly as it's coming out of the sensor,
[00:12:17] so you remove a lot of the inefficiency in the pipeline
[00:12:20] between the sensor and the processor as it exists today.
[00:12:24] And then you're able to do really powerful AI right as the first step of the processing pipeline,
[00:12:30] which is something that is simply impossible to do today with traditional technologies.
[00:12:35] That is the promise of what neuromorphic computing is able to do right now.
[00:12:40] But looking forward, there are a whole bunch of exciting paths for us to follow,
[00:12:45] and let me paint out what a couple of those really look like for you.
[00:12:49] One of those paths is to really think about processors that are able to compensate
[00:12:55] for changing behavior in the system over time.
[00:12:59] So think about sensors that over their lifetime kind of change how they behave,
[00:13:05] either because they're starting to wear out or quality changes
[00:13:10] or they degrade over their lifetime.
[00:13:12] Neuromorphic processors are able to really compensate for all of this
[00:13:15] because they're able to learn on the fly.
[00:13:18] That learning is a very important feature as well,
[00:13:21] and that could be a completely separate direction in which we go,
[00:13:24] where these chips are able to learn in the field from data
[00:13:29] which they're ingesting constantly.
[00:13:31] Think about your brain.
[00:13:32] Your brain doesn't have a separate training algorithm
[00:13:35] and a separate sort of inference phase.
[00:13:38] Everything seems to happen in the same fabric at the same time.
[00:13:42] And this could be really powerful.
[00:13:44] Think about neural networks that can adapt their functionality over time,
[00:13:48] that tend to strengthen certain pathways as you use certain features
[00:13:52] and tend to weaken pathways as you don't use certain features.
[00:13:56] These are capabilities that these chips will be able to bring in.
[00:13:59] And finally, I think one very critical part of neuromorphic is really control.
[00:14:03] Think about yourself dancing, how smoothly you're able to dance
[00:14:06] or how well you're able to balance on a bicycle.
[00:14:09] And all of this comes down to your brain being able to process data
[00:14:13] in this low-latency fashion,
[00:14:16] responding very quickly to the stimuli that you're getting in.
[00:14:19] I think neuromorphic computing would be very fundamental
[00:14:22] in driving the next generation of industrial automation,
[00:14:26] robotics, and general machinery
[00:14:28] with this sort of intelligence built directly into devices.
[00:14:32] So it's a very exciting future for neuromorphic.
[00:14:34] We're just scratching the surface in terms of the applications
[00:14:36] that we're able to do today.
[00:14:39] And I'm glad you mentioned that because neuromorphic processors
[00:14:42] are being hailed for their potential to revolutionize things
[00:14:46] like edge applications from wearables to industrial IoT.
[00:14:51] And there'll be a lot of business leaders listening,
[00:14:53] thinking about the ROI on any tech investment
[00:14:56] and what problems we're solving here.
[00:14:57] So how do you see this technology transforming industries?
[00:15:01] And are there any challenges that you need to overcome
[00:15:05] for widespread adoption?
[00:15:06] What are you seeing here?
[00:15:07] Because I would imagine you're setting off a few light bulb moments
[00:15:10] with business leaders at the moment.
[00:15:12] But again, what problems are you solving?
[00:15:14] What industries are you going after?
[00:15:16] And what kind of challenges do you need to overcome to get there?
[00:15:19] So what we've seen in the last couple of years
[00:15:22] is a growing acknowledgement or recognition
[00:15:26] amongst vendors in the industry
[00:15:28] that are deploying sensors into applications
[00:15:31] that AI has a power problem.
[00:15:35] In deploying AI to many application use cases
[00:15:38] that we know of today,
[00:15:40] there is a significant problem
[00:15:42] that comes from the power usage
[00:15:44] of the microprocessors
[00:15:45] that are deployed into these use cases.
[00:15:48] And for a long time,
[00:15:50] the power problem was not really known
[00:15:53] to many practitioners within the field
[00:15:56] simply because there weren't that many
[00:15:57] AI processors out there.
[00:15:59] But practically, we know now that the moment
[00:16:01] you try to do something at scale
[00:16:02] in the form of a product
[00:16:04] that you can deploy on the market,
[00:16:06] you end up consuming a lot of power.
[00:16:08] And that's the first place
[00:16:09] where we've been able to differentiate ourselves very well.
[00:16:12] Our chips basically consume,
[00:16:14] on average, under a milliwatt of power
[00:16:17] or a few milliwatts worth of power
[00:16:19] for even complex applications.
[00:16:22] And these are applications which run continuously,
[00:16:25] not necessarily applications
[00:16:26] where we're turning off the processor
[00:16:28] for a majority of the time,
[00:16:29] like many of our competitors do.
[00:16:32] And the value here has been incredible.
[00:16:34] And I'll give you a couple of examples of this.
[00:16:36] We applied our chips in a partnership
[00:16:38] that we did with the leading
[00:16:41] Japanese radar sensor vendor.
[00:16:43] And we applied the sensor
[00:16:45] as well as our processor
[00:16:46] towards video doorbells,
[00:16:48] basically as a way of identifying
[00:16:50] without using a camera in the doorbell,
[00:16:53] is there somebody standing in front of your door?
[00:16:55] Because it turns out that
[00:16:56] when we turn the camera in the doorbell off,
[00:16:59] we're able to save a significant amount
[00:17:02] of battery charge inside of the device.
[00:17:04] And then you have to recharge
[00:17:05] the doorbell that much less.
[00:17:08] And we developed the world's
[00:17:09] most efficient solution
[00:17:11] for radar-based presence detection,
[00:17:12] applied to video doorbells.
[00:17:15] Just to put it into perspective,
[00:17:17] our solution applied to a typical doorbell
[00:17:19] that you can find on the market
[00:17:21] extends the battery life of the device by 6x.
[00:17:25] So rather than recharging the device
[00:17:26] once every three months,
[00:17:27] you end up recharging it
[00:17:29] once every one and a half years,
[00:17:31] which is a radical
[00:17:33] improvement in user experience.
[00:17:36] Think about how often you've had to go
[00:17:38] and take the doorbell off of your doorframe
[00:17:40] and then charge it overnight
[00:17:41] and then install it back again.
[00:17:43] You have to do that about once every year and a half
[00:17:45] with our technology inside.
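Sumeet's battery arithmetic checks out, taking the three-month baseline he quotes:

```python
# Worked check of the quoted battery-life figures.
baseline_months = 3      # typical recharge interval for a doorbell today
extension_factor = 6     # claimed battery-life extension with the SNP T1
extended_months = baseline_months * extension_factor
print(extended_months)   # 18 months, i.e. once every one and a half years
```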
[00:17:48] We also had a different use case
[00:17:50] where we applied the chips
[00:17:52] towards what's known as
[00:17:54] audio scene classification in headphones.
[00:17:56] And the idea there is that
[00:17:58] you're listening to music in your living room
[00:18:00] and then you get up
[00:18:00] and you walk out the front door
[00:18:02] and now you're in a noisy street
[00:18:03] and then you get into a crowded bus.
[00:18:05] All of these environments
[00:18:07] have a radically different noise profile
[00:18:09] and they kind of affect
[00:18:11] how you perceive music.
[00:18:13] And in order to cancel out the effect
[00:18:15] of all of these different noise profiles,
[00:18:17] the headphones have to pick
[00:18:19] different filter settings continuously
[00:18:20] based on where you are.
[00:18:22] But the key problem there is
[00:18:23] the headphones have to identify
[00:18:24] the environment that you're in.
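The control loop being described (classify the acoustic scene, then switch filter settings) can be sketched as a simple lookup. The scene labels and filter parameters below are entirely hypothetical, just to show the shape of the dispatch:

```python
# Toy sketch of scene-dependent filter selection in headphones.
# Scene names and parameter values are illustrative placeholders.
NOISE_PROFILES = {
    "living_room": {"low_cut_hz": 40,  "anc_gain": 0.2},
    "street":      {"low_cut_hz": 120, "anc_gain": 0.8},
    "bus":         {"low_cut_hz": 100, "anc_gain": 0.9},
}

def select_filter(scene: str) -> dict:
    # Fall back to a neutral setting when the classifier is unsure.
    return NOISE_PROFILES.get(scene, {"low_cut_hz": 80, "anc_gain": 0.5})

print(select_filter("street"))  # → {'low_cut_hz': 120, 'anc_gain': 0.8}
```

The hard part, as Sumeet says, is the classifier feeding `scene`: it has to run continuously on a battery budget, which is where the power and latency comparisons that follow come in.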
[00:18:26] And once again,
[00:18:27] we had by far the most energy efficient
[00:18:30] audio scene classifier of its kind
[00:18:33] running on our chips
[00:18:34] with an accuracy that was top of the line,
[00:18:37] competitive with the rest of the industry
[00:18:39] with a latency about three times shorter
[00:18:42] than what our closest competitors
[00:18:44] were able to do.
[00:18:45] But most importantly,
[00:18:46] with a power usage
[00:18:47] that was about 10 times lower
[00:18:49] than the most efficient microcontroller
[00:18:52] available in the market today.
[00:18:53] So very simply put,
[00:18:55] what we do for vendors,
[00:18:57] for OEMs and ODMs
[00:18:58] that build devices
[00:18:59] is that we allow
[00:19:01] groundbreaking new application
[00:19:03] functionalities to be realized
[00:19:05] wherever you're trying to use a sensor,
[00:19:07] which is producing data.
[00:19:09] We allow that data
[00:19:10] to be processed very efficiently
[00:19:11] and very quickly,
[00:19:12] allowing you to unlock
[00:19:14] fundamentally new sensing use cases
[00:19:17] and sensing functionalities,
[00:19:18] even in devices
[00:19:20] that run on really tiny batteries
[00:19:22] or devices that require
[00:19:24] very fast response times.
[00:19:26] And this is something
[00:19:27] where we've been outperforming
[00:19:29] traditional AI-based solutions
[00:19:31] that have been available
[00:19:32] in the market for a few years.
[00:19:33] We're outperforming them
[00:19:35] by leaps and bounds.
[00:19:36] And you mentioned
[00:19:37] the power problem there.
[00:19:38] Scalability and energy efficiency
[00:19:40] have long been known to be critical
[00:19:42] for advancing AI.
[00:19:44] And you mentioned how you're addressing
[00:19:46] some of those challenges,
[00:19:47] but also what role do you see
[00:19:49] it playing in enabling
[00:19:50] a new generation of smart devices?
[00:19:53] Because again,
[00:19:54] it feels like there's
[00:19:55] some big opportunities here.
[00:19:57] There are.
[00:19:58] We partnered
[00:19:59] with a different, Belgian-American
[00:20:03] sensor vendor.
[00:20:04] And what was really spectacular here
[00:20:07] is that we were using a sensor type,
[00:20:10] which has been around for a while,
[00:20:11] and where this vendor
[00:20:13] had been working on developing
[00:20:15] a very new class
[00:20:17] of application function
[00:20:18] to take to market,
[00:20:19] but it had simply not been possible.
[00:20:22] And then we came along
[00:20:23] and in a span of three weeks,
[00:20:24] we had a first prototype
[00:20:26] of a really fundamentally
[00:20:27] new application,
[00:20:28] which we'll be unveiling
[00:20:30] in a couple of months from now.
[00:20:32] And this thing introduces
[00:20:33] radical new capabilities,
[00:20:35] the ability to detect
[00:20:36] human presence
[00:20:38] in different smart environments,
[00:20:40] even in devices
[00:20:41] that run on really tiny batteries.
[00:20:43] And this is really showing the way
[00:20:46] in some sense,
[00:20:48] giving us an idea
[00:20:49] that there are applications out there
[00:20:51] that we've simply not thought about yet,
[00:20:53] because the kind of AI capabilities
[00:20:56] that we have available to us,
[00:20:57] they're simply not suitable
[00:20:59] for many of these applications.
[00:21:01] And with the neuromorphic processors
[00:21:03] coming into the picture,
[00:21:04] there are so many new applications
[00:21:06] which are now made possible
[00:21:08] that even with the OEMs and ODMs
[00:21:10] that work with our chips,
[00:21:12] the realization is that
[00:21:13] this opens up
[00:21:15] a completely new class
[00:21:18] of application features
[00:21:19] that we've not even
[00:21:21] been thinking of so far
[00:21:24] because we haven't had a tool
[00:21:25] to achieve them.
[00:21:27] These are suddenly made possible
[00:21:29] with your chips.
[00:21:30] And that's the kind of feedback
[00:21:31] that has been the most revealing.
[00:21:34] So to us,
[00:21:36] it kind of feels like
[00:21:37] you're inventing
[00:21:39] a completely new class of device,
[00:21:41] a completely new segment of the market
[00:21:43] with this technology,
[00:21:43] which so far has been completely untapped.
[00:21:46] And traditional AI models
[00:21:48] have often relied
[00:21:49] on immense computing power,
[00:21:50] which has been limited
[00:21:52] for applications
[00:21:53] that require autonomy
[00:21:54] and speed.
[00:21:56] And just to allow
[00:21:57] everyone listening
[00:21:58] to maybe think bigger,
[00:22:00] can you just expand
[00:22:01] on how neuromorphic processing
[00:22:03] can almost redefine
[00:22:05] the possibilities
[00:22:06] for AI applications
[00:22:07] on the edge?
[00:22:08] Because again,
[00:22:09] it feels like
[00:22:09] the only limitation
[00:22:11] is your imagination really.
[00:22:12] But how do you feel about this?
[00:22:14] We think that
[00:22:16] it's a massive market
[00:22:17] with a huge impact.
[00:22:19] And I hope
[00:22:20] that it ends up
[00:22:21] touching all of our lives
[00:22:22] in the foreseeable future.
[00:22:24] There are billions of sensors
[00:22:26] sold every year.
[00:22:28] Like I said,
[00:22:29] four billion new devices
[00:22:30] each with dozens
[00:22:31] of sensors inside.
[00:22:33] And our hypothesis
[00:22:35] is that
[00:22:36] all of these sensors
[00:22:37] producing data
[00:22:38] will need
[00:22:40] a processor
[00:22:40] sitting right
[00:22:41] at the front line
[00:22:43] being able to filter out
[00:22:44] whether the data
[00:22:44] is relevant or not,
[00:22:46] if there's something
[00:22:47] of value
[00:22:48] inside of that data.
[00:22:49] And that's where
[00:22:50] neuromorphic processing
[00:22:51] is going to be
[00:22:52] of significant impact.
[00:22:53] Whether it goes
[00:22:54] into a sensing function,
[00:22:55] a control function,
[00:22:57] a sort of a thinking function,
[00:23:00] neuromorphic
[00:23:00] will eventually
[00:23:01] be everywhere
[00:23:02] right next to the sensors.
[00:23:04] You could be touching
[00:23:05] billions of devices
[00:23:06] every year
[00:23:06] that have a neuromorphic
[00:23:08] component to them.
[00:23:09] And that's the kind
[00:23:09] of impact
[00:23:10] that we're looking at.
[00:23:12] One specific angle
[00:23:14] that excites me
[00:23:15] the most is how we can
[00:23:16] use neuromorphic
[00:23:18] for good.
[00:23:19] And there are
[00:23:19] three different aspects
[00:23:20] to this.
[00:23:21] The first is
[00:23:22] deploying neuromorphic
[00:23:24] in the context
[00:23:25] of human-machine interfaces.
[00:23:26] I think that
[00:23:27] human-machine interfaces
[00:23:29] can be a lot
[00:23:30] more intuitive.
[00:23:31] They can allow
[00:23:32] better access
[00:23:33] to technology
[00:23:34] for everyone.
[00:23:35] I think that
[00:23:36] this barrier
[00:23:36] between the physical world
[00:23:38] and the digital world
[00:23:39] should be much
[00:23:40] more intuitive
[00:23:41] for us to cross.
[00:23:42] And we're doing that
[00:23:43] with this notion
[00:23:44] of ambient intelligence
[00:23:45] where the intelligence
[00:23:47] is around you,
[00:23:48] it serves you,
[00:23:49] it intuitively
[00:23:49] understands
[00:23:50] what you need to do
[00:23:51] and you don't need
[00:23:52] to explicitly
[00:23:53] command it.
[00:23:54] And eventually
[00:23:55] that will lead
[00:23:55] to better access
[00:23:56] for technology
[00:23:57] to everyone.
[00:23:58] The second one is specifically around healthcare and quality of life. Something like a neuromorphic processor, by virtue of the fact that it's so energy efficient, could be sitting in your wristwatch, looking at your heart rate throughout the day and picking up the first signs of heart disease years before you ever become symptomatic. So a little red light can go up in your watch and say, hey, I've just detected something potentially dangerous, you might want to go and see your doctor. And perhaps we could catch something long before it becomes fatal, or while it is still treatable. And I think that this stands to revolutionize how healthcare is implemented around the world, even in developing countries where you don't always have access to qualified first-hand diagnostics. This is perhaps an easy way to actually implement that across the board.
[00:24:50] And the last one is the entire energy crisis. We rarely talk about the cost of AI functionality in terms of carbon footprint. Do you ever think about how much your voice searches cost in terms of carbon dioxide emissions? You don't really think about the face tagging function in your images, and what the impact of all of that is. Today, much of that happens in the cloud; some of it happens inside the devices in your hand. But there's a lot that we can improve in that process. So if we're able to streamline AI, if we're able to deal with more of the world's sensor data directly at the source, we can streamline how we use energy in implementing AI across the value chain, and we can make the picture a lot more efficient and a lot better.
[00:25:36] That's ultimately the three-part ambition that at least I have, and that I would like to see personally. And that's what we're working towards, all of us at Innatera.
[00:25:45] And we're currently recording our conversation today at the very beginning of 2025, just before you fly out to CES in Vegas. So can you tell me a little bit more about that, and ultimately what you see as the most significant industry shifts resulting from the commercialization of neuromorphic computing? And how are you preparing to lead this evolution? I would imagine CES is going to be a big stepping stone in that, but is there anything you could share around that?
[00:26:15] Sure. CES is a really important show for us, because one of our beachhead markets is the consumer electronics market, where we see a lot of interest ranging from devices like wearables and hearables down through portable devices, like these battery-powered security cameras that you mount outside your house, or video doorbells, and even things like smart TVs and building automation systems. So there are really so many wonderful application use cases there that we're already targeting with some of our customers and partners with the chips. We have a very active event season coming up, so we'll be at the CES show at the beginning of January, where we're exhibiting at the Venetian, and we have a number of our latest demos, really cool stuff, on display. And I welcome you to come and see that in person. Some of this is really magnificent stuff, even though I'm saying it myself. And we have more events coming up in March; we'll be at Mobile World Congress and Embedded World. Generally, most of our initial applications target the consumer IoT as well as industrial IoT verticals. We also have a number of engagements in the automotive space, because automotive is a very important area for neuromorphic. Think about the number of sensors that go into each vehicle, and why energy efficiency is becoming so much more important in the context of future electric vehicles. There's definitely a lot of work to be done in the automotive space as well, and eventually we will also have a bit more of a presence in that area. But today we focus quite a bit on the consumer space, in terms of battery-powered devices, which is where I think our impact is the strongest today.
[00:27:57] Well, hopefully we can bump into each other on the tech circuit at some stage. I am frequently taking this show on the road to various tech conferences around the world. But if I ask you to look beyond that, look into the future, look into a virtual crystal ball of sorts, how do you see neuromorphic computing reshaping our understanding of AI? And what role will it play in enabling more human-like intelligence in machines? Because it feels like that's the next step. But how do you see this taking shape?
[00:28:26] We've just taken our first chips into production, and like I said earlier on in the show, we're only scratching the surface in terms of the applications that neuromorphic processors can address today. Some of that comes from the fact that we only have a limited understanding of how our brains work. We only have a limited understanding of the kinds of things that your brain is able to do. And wherever we have these theories, it's still somewhat of a superficial sort of understanding, where we can go a lot deeper. So from that perspective, I see neuromorphic processors being a completely new class of component within the computing landscape. Today we're deploying them primarily for things like pattern recognition and sensor data processing, but I think there's scope for them to do so much more, with learning and intelligence in new forms being applied across the value chain. I think that eventually neuromorphics will grow to take a good portion of that value chain, but in exactly what form that is going to happen, I still don't know. Our ultimate ambition is to bring intelligence to a billion devices by 2030, and I think there's a lot more that's going to happen beyond that time frame. It's an exciting time to be doing neuromorphic, just for that reason.
[00:29:46] Well, we covered so much in our conversation today, and I suspect anybody attending CES will want to check you out in the Venetian. But for anyone that's not attending, wants to find out more information, keep up to speed with some of the announcements that might be coming out of that, or just follow your progress throughout the year, where would you like to point everyone listening?
[00:30:05] If you'd like to come and see us at one of the many shows that we do through the year, there's always an appointment booking page on our website that you can fill out, and we'll get in touch with you. But our LinkedIn tends to have the most updates on what we're doing as a company and as a team. We tend to really have a people-first culture, so you'll see all the fun stuff that we do as the Innatera team here at the company. And we've got a bunch of exciting new announcements coming up, especially around our technology, so I hope you'll stay tuned on our LinkedIn, and that we catch you at one of our events where you can see these things in person.
[00:30:39] Fantastic. I will add links to everything so people can find you nice and easily. And I think for anybody listening that has heard about neuromorphic processors for the very first time, I'm hoping we've demystified it today, put it in a language everyone can understand, and explained why 2025 could be a tipping point, how it's bridging the gap between biological brains and AI, and ultimately what this evolution means for the future of AI, from scalability to energy efficiency and the future of smart devices. So many big things to take away and think about, but thank you for bringing it to light today.
[00:31:15] Thank you so much, Neil. It was really great talking to you.
[00:31:18] As we've heard from today's guest, the shift from traditional AI to brain-inspired neuromorphic computing isn't just a technological advancement. It's also a new way of thinking about how machines process information. And the spiking neural processor that we talked about today represents a big leap forward, enabling ultra-efficient, real-time decision-making for devices at the edge. And I think also, by extending battery life, reducing energy consumption and unlocking new AI capabilities, this innovation could shape industries and ultimately redefine possibilities. But as we look ahead, how might these advancements influence the way we design devices, interact with technology, and even address global challenges like sustainability? The conversation doesn't end here with this podcast episode as you go on to another episode or browse for information online. I want to know what questions you're going to be asking about neuromorphic computing, especially as it will continue to scale and evolve this year and beyond. As always, you can find me at techblogwriteroutlook.com, on X, LinkedIn and Instagram. I'm just Neil C. Hughes, nice and easy to find. So keep those thoughts, questions, and pitches to come on the show coming over. Other than that, I'll be back again in your podcast feed, waiting patiently for you to hit the play button tomorrow, and hopefully you will join me there. But that's it for today. Bye for now.

