Why do today's most powerful AI systems still struggle to explain their decisions, repeat the same mistakes, and undermine trust at the very moment we are asking them to take on more responsibility?
In this episode of Tech Talks Daily, I'm joined by Artur d'Avila Garcez, Professor of Computer Science at City St George's University of London, and one of the early pioneers of neurosymbolic AI.
Our conversation cuts through the noise around ever-larger language models and focuses on a deeper question many leaders are now grappling with. If scale alone cannot deliver reliability, accountability, or genuine reasoning, what is missing from today's AI systems?

Artur explains neurosymbolic AI in clear, practical terms as the integration of neural learning with symbolic reasoning. Deep learning excels at pattern recognition across language, images, and sensor data, but it struggles with planning, causality, and guarantees. Symbolic AI, by contrast, offers logic, rules, and explanations, yet falters when faced with messy, unstructured data. Neurosymbolic AI aims to bring these two worlds together, allowing systems to learn from data while reasoning with knowledge, producing AI that can justify decisions and avoid repeating known errors.
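To make that combination concrete, here is a minimal sketch in Python of the pattern described above: a learned model proposes an answer, and an explicit rule base can veto options and explain why. The scenario, names, and rule are illustrative assumptions, not taken from any particular neurosymbolic framework.

```python
# Minimal sketch of "learning from data while reasoning with knowledge":
# a stand-in neural model proposes, an explicit rule base disposes.
# The scenario, names and rule are illustrative, not from a real system.

def neural_scorer(features):
    """Stand-in for a trained network: returns a confidence per decision."""
    # A real system would run a neural net here; we fake plausible scores.
    return {"approve": 0.7, "refer": 0.2, "reject": 0.1}

RULES = [
    # Each rule: (condition on the input, decision it forbids, explanation).
    (lambda f: f["age"] < 18, "approve", "applicants under 18 cannot be auto-approved"),
]

def decide(features):
    scores = neural_scorer(features)
    reasons = []
    for condition, forbidden, why in RULES:
        if condition(features):
            scores.pop(forbidden, None)   # the rule vetoes this decision outright
            reasons.append(why)
    best = max(scores, key=scores.get)    # highest-confidence surviving decision
    return best, reasons                  # decision plus human-readable justification

print(decide({"age": 16}))  # ('refer', ['applicants under 18 cannot be auto-approved'])
```

The point of the toy is the division of labour: the learned part handles messy inputs, while the rules guarantee that a known error cannot be repeated, whatever the network learns.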
We explore why simply adding more parameters and data has failed to solve hallucinations, brittleness, and trust issues. Artur shares how neurosymbolic approaches introduce what he describes as software assurances, ways to reduce the chance of critical errors by design rather than trial and error. From self-driving cars to finance and healthcare, he explains why combining learned behavior with explicit rules mirrors how high-stakes systems already operate in the real world.
A major part of our discussion centers on explainability and accountability. Artur introduces the neurosymbolic cycle, sometimes called the NeSy cycle, which translates knowledge into neural networks and extracts knowledge back out again. This two-way process opens the door to inspection, validation, and responsibility, shifting AI away from opaque black boxes toward systems that can be questioned, audited, and trusted. We also discuss why scaling neurosymbolic AI looks very different from scaling deep learning, with an emphasis on knowledge reuse, efficiency, and model compression rather than ever-growing compute demands.
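As a rough illustration of the two directions of that cycle, the sketch below builds a neuron directly from a propositional rule (knowledge into network) and then reads the rule back out of the weights (network into knowledge). It is a toy in the spirit of classic rule-to-network translations such as KBANN and C-IL2P; the weight and threshold values are our assumptions, and real systems deal with recurrent networks and many interacting rules.

```python
# Toy version of the two halves of the NeSy cycle. Weight values, names and
# thresholds are illustrative assumptions, not a real translation algorithm.
import math

def encode_rule(antecedents, w=5.0):
    """Knowledge -> network: a neuron that fires only when ALL antecedents
    are true (an AND gate realised as weights plus a bias)."""
    weights = {a: w for a in antecedents}
    bias = -w * (len(antecedents) - 0.5)  # threshold sits between n-1 and n true inputs
    return weights, bias

def neuron(inputs, weights, bias):
    z = bias + sum(weights[a] * inputs.get(a, 0.0) for a in weights)
    return 1.0 / (1.0 + math.exp(-z))     # sigmoid activation

def extract_rule(weights, threshold=1.0):
    """Network -> knowledge: read back which inputs the neuron depends on."""
    return sorted(a for a, w in weights.items() if abs(w) > threshold)

weights, bias = encode_rule(["obstacle_ahead", "vehicle_moving"])
print(neuron({"obstacle_ahead": 1, "vehicle_moving": 1}, weights, bias))  # ~0.92: fires
print(neuron({"obstacle_ahead": 1, "vehicle_moving": 0}, weights, bias))  # ~0.08: does not
print(extract_rule(weights))  # ['obstacle_ahead', 'vehicle_moving'], open to inspection
```

Because the rule is recoverable from the weights, the network can be trained further on data and still be audited afterwards, which is the inspection-and-validation loop the cycle enables.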
We also look ahead. From domain-specific deployments already happening today to longer-term questions around energy use, sustainability, and regulation, Artur offers a grounded view on where this field is heading and what signals leaders should watch for as neurosymbolic AI moves from research into real systems.
If you care about building AI that is reliable, explainable, and trustworthy, this conversation offers a refreshing and necessary perspective. As the race toward more capable AI continues, are we finally ready to admit that reasoning, not just scale, may decide what comes next? And what kind of AI do we actually want to live with?
Useful Links
Artur's personal webpage on the City St George's University of London site
Co-authored book titled “Neural-Symbolic Cognitive Reasoning”
[00:00:04] Why do the most powerful AI systems in the world still struggle with hallucinations, opaque reasoning and trust? I think it's something that everybody has encountered when using an AI tool. So today I've invited a Professor of Computer Science at City St George's University of London, and he is one of the pioneers of something called neurosymbolic AI.
[00:00:31] He has spent nearly three decades working at the intersection of learning and reasoning, long before language models dominated the conversation. We've seen all the hype over the last three years around all things generative AI. But today I want to unpack what neurosymbolic AI actually means in practical terms. That's one of the reasons that I record this podcast every day: take something people are talking about, demystify it, and put it in a language that everybody can understand.
[00:01:00] So none of us feel uncomfortable about asking questions and feeling foolish. I'll do that on your behalf. So one of the things I want to learn about is why scale alone cannot fix the core weaknesses of today's models that we're using, and how combining neural learning with symbolic reasoning could lead to safer, more transparent and more efficient AI systems.
[00:01:28] So this is a conversation about escaping black box AI and trying to understand where AI could go next, not just where it is today. And roads? Where we're going, we don't need roads. Buckle up and hold on tight as I beam your ears all the way to London, where I will officially introduce you to today's guest. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do?
[00:01:58] Thank you so much. It's great to be here. My name is Artur d'Avila Garcez. I'm a professor of computer science at City St George's University of London. I was born in Rio de Janeiro, got a PhD from Imperial College London and have two lovely children who take after their lovely mother.
[00:02:20] And I've been working in AI for almost 30 years, from the start, in this area called neurosymbolic AI. And you're incredibly humble, too, because I was also reading before you joined me on the podcast today that you are often described as one of the pioneers of neurosymbolic AI. So for people listening who are familiar with large language models, because that's all they see in their news feeds at the moment, but less so with this approach.
[00:02:49] How do you explain neurosymbolic AI in practical terms? And why does it matter so much right now? Thank you, Neil. You know, if you use large language models, you can be amazed and frustrated almost at the same time. Yeah. So say you ask an LLM to write code for you and it makes an error, you know, error A.
[00:03:15] You ask it to fix it and it makes another error, error B. And then you ask it to fix that error and it makes the same error A again. So what we need is software assurances, guarantees that with a high probability, certain errors will never happen.
[00:03:37] This is where neurosymbolic AI comes in, to offer guarantees that such powerful AI systems can never make those errors. They can learn from experience, but also reason correctly about what has been learned. And I'm so glad you've raised this, because current LLMs, yes, they are incredibly powerful, but those issues with hallucinations and misinformation, we've all seen examples of that when trying to fix things.
[00:04:07] I myself have encountered that: fix problem A and then problem B, because you just end up going round and round in circles, losing your temper. And opaque reasoning also continues to undermine trust. So from your research, why are these limitations so hard to fix with scale alone? What are you seeing here? Yeah, very, very hard to fix indeed.
[00:04:29] We have to remember that the AI industry has been spending huge amounts of money on this since the release of ChatGPT now more than three years ago. And you really only need one bad hallucination to destroy trust. So the problem is in the purely data-driven approach, training these models from data alone and not knowledge.
[00:04:59] There are far too many exceptions. So we have to combine the data-driven approach with general rules, this knowledge that I keep mentioning, to offer the above guarantees. If you take an example, self-driving cars, they learn to drive from data, collecting millions of hours of driving experience. But there's also the highway code with the rules of the road.
[00:05:24] So in neurosymbolic AI, we combine this learning from data with neural nets with knowledge, with these rules. As a result, you designed and implemented what is widely regarded as the first neurosymbolic system for learning and reasoning. So what did that early work reveal about the strengths of combining neural learning with symbolic reasoning? What did you uncover here?
[00:05:53] That a little knowledge can go a long way. Indeed. So there's a lot of interest now in using neural nets to perform reasoning. And rightly so, because of their great success. So people are looking to do more and more complex things with neural nets, like continual learning, reasoning, planning.
[00:06:14] And that was the goal of neurosymbolic AI from the start, you know, to study learning and reasoning together, not as two separate subfields, which is normally the case in AI. And in neurosymbolic AI, we study how to get reasoning within neural networks right by design, rather than by trial and error, which seems to be the deep learning approach to reasoning.
[00:06:45] And as a professor of computer science at City St George's University of London, you work right at the intersection of theory and indeed real-world impact. And that's where the real value is. So where do you see neurosymbolic AI offering the biggest gains for safety and reliability in deployed systems? It feels like quite a big moment here. But what are you seeing?
[00:07:09] Yeah, there's this promise of AI as a general-purpose technology, or even AGI, artificial general intelligence. But there are real gains happening right now in domain-specific areas where knowledge is readily available. So in medicine, finance and engineering, three examples.
[00:07:30] And there are exciting applications in medical diagnosis, drug discovery, prediction and explainability, new materials and so on. So one benefit of combining knowledge and data is that we can consolidate the knowledge learned from different application domains. And I think this is going to happen in the next five years.
[00:07:59] And if we do it right, using decentralized neurosymbolic approaches, then I think that can be the path to AGI. And one concern business and indeed policy leaders have been raising recently is that modern AI systems, they cannot explain their decisions. We end up with this black box AI problem. So how does neurosymbolic reasoning change the conversation around transparency and accountability?
[00:08:28] I know there's so many people looking for answers around this, but what does it change? Yeah. Accountability is very important. We use something called the neurosymbolic cycle; we call it the NeSy cycle in neurosymbolic AI. And it's all about translating knowledge into a neural net and going the other way around, translating the network back into knowledge.
[00:08:57] So this last part, translating the network into knowledge, is the biggest challenge for large networks, but it is feasible if we do it in stages. And this we call knowledge extraction. This is what offers explainability to the trained models. It is a major technology for accountability. But accountability is more than just explainability.
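As a hedged illustration of the knowledge extraction Artur describes, the Python toy below trains a single logistic unit on examples generated from a hidden rule, then reads an approximate, human-auditable rule back out of the learned weights. The variable names, data, and thresholds are our assumptions, and the sign-and-magnitude read-out is a drastic simplification of real extraction methods, which, as he notes, must work in stages on large networks.

```python
# Toy "network -> knowledge" extraction: train a tiny logistic unit, then
# turn its weights back into a readable rule. Names and data are illustrative.
import itertools, math, random

random.seed(0)
names = ["license_valid", "insurance_valid", "over_limit"]

def hidden_rule(x):                     # the ground truth behind the data
    return x[0] == 1 and x[1] == 1 and x[2] == 0

# All 8 input combinations, labelled by the hidden rule.
data = [(list(bits), 1.0 if hidden_rule(bits) else 0.0)
        for bits in itertools.product([0, 1], repeat=3)]

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):                   # plain stochastic logistic regression
    x, y = random.choice(data)
    p = 1.0 / (1.0 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))
    g = p - y                           # gradient of the log loss
    w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    b -= lr * g

# Extraction: strong positive weights become literals, strong negative weights
# become negated literals. (This ignores the bias; real methods recover it too.)
rule = [(n if wi > 0 else f"NOT {n}") for n, wi in zip(names, w) if abs(wi) > 1.0]
print(" AND ".join(rule))  # expected: license_valid AND insurance_valid AND NOT over_limit
```

The output is something a person can check against the policy the system is supposed to follow, which is exactly the opening for inspection and validation that Artur points to.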
[00:09:28] We proposed this idea of an accountability ecosystem, which has various aspects of systems engineering. But explainability is certainly one important aspect of that. And so in this cycle, this NeSy cycle, as we try to translate networks into knowledge and knowledge back into neural networks,
[00:09:52] the interesting thing is that when we scale this up, so we apply the cycle many, many times, that's very different from increasing the number of parameters and the use of data. Scaling in neurosymbolic AI is about increasing the number of times that we apply the neurosymbolic cycle.
[00:10:18] So to learn from data, but also to consolidate knowledge at each step of the application of the cycle. And if we do it right, it should lead to model compression, very much the opposite of scaling deep learning. And I also wanted to highlight that you're the president of the steering committee of the Neural Symbolic Learning and Reasoning Association.
[00:10:43] And I'm curious, from what you're seeing here, how has interest in this field shifted as organizations look for AI systems that they can actually trust in high-stakes environments? Have you noticed a big increase in interest? Is there a lot of interest? What are you seeing here? Very, very nice to see all the interest in the field in recent years.
[00:11:07] When we started the NeSy workshops 20 years ago, we were really only a few academics. That became the NeSy Conference, and it now attracts many academics and practitioners each year. And they have different interests and expertise, so it creates a focus for the group to innovate from the combination of their expertise.
[00:11:37] The premise is that the AI task is very complex, and it won't be solved by a single subgroup. So from the beginning, the goal of the association and the NeSy Conference has been to bring together the key players in AI learning and reasoning, both subfields together, to actually crack the nut.
[00:12:03] And if we look back to the early days of cloud, services were incredibly inexpensive to attract people and get enterprises on board. And then over the last few years, we've seen those prices continue to escalate. And I think one of the things that makes AI different is that instead of licensing or seats, it's actually tokens, and I can see that usage getting more and more expensive as the years progress.
[00:12:26] And efficiency, I think, will become as important as capability, especially as those compute costs rise. So how does neurosymbolic AI compare with today's large models when it comes to data requirements, energy use and long-term sustainability? Because I think we're just starting to wake up to some of these costs now. Yeah. Scaling in neurosymbolic AI will require investment, and we can actually measure this.
[00:12:55] But it's nothing like scaling deep learning and what we've seen in recent years. You know, instead of learning from all the data and only then worrying about how to do reasoning or to impose guardrails, with the NeSy cycle, you kind of learn a little, reason a little, and then repeat. And when a learning task is consolidated with knowledge, so that we obtain a description of that task, it can be reused.
[00:13:24] So the neural net becomes more modular. And so if we can consolidate and reuse knowledge, we expect to achieve model compression. And this is the key to multitask learning with neurosymbolic AI. So instead of requiring bigger and bigger models and more and more parameters and more and more data
[00:13:49] with all the implications for copyright violations and so on, here we are really hoping to achieve model compression. So that's why I say that scaling neurosymbolic AI is really the opposite of scaling deep learning. I think this idea will be put to the test very soon, but we'll have to make that investment. This month, I'm partnering with Alcor.
[00:14:16] And if you've ever tried to hire engineers in another country, you probably know just how painful it can be. Different laws, patchy support, and partners who don't truly understand engineering roles. So Alcor approaches this from a different point of view. They specialize in Eastern Europe and Latin America, and they're able to combine EOR capabilities with recruiting. So you get one partner handling everything, and they help you choose the best location for your stack,
[00:14:46] find developers with the right depth of experience, and run proper assessments so they can onboard people quickly. And they also give you a model that respects both transparency and margin. Most of your spend goes directly to your engineers, and the fee decreases as the team expands. And you can even transition everyone in-house when you're ready, without having to worry about a penalty.
[00:15:11] And that structure is why a mix of early stage and unicorn stage companies use them as they scale. So if you want to take a look, visit alcor.com slash podcast, or tap on the link in the show notes. But now, on with today's show. And for any leaders and practitioners listening who are excited by AI, but also level-headed and cautious about some of the risks involved, are there any signals that you think they should be watching out for
[00:15:40] to know when neurosymbolic approaches are ready to move from those research phases into everyday systems? And how far away do you see this being? Yeah. In domain-specific areas, as I said, it's already happening. And you're right that the risks are serious, especially in the general purpose approach and the road to AGI,
[00:16:09] you know, the race to AGI. The risks are serious, especially the risks of agentic AI. And so accountability, again, will be very important. You know, getting explainability to work in every case, out of the box, ideally. This idea of being able to query a neural network like a database, having that reliability of logic, and getting the regulatory incentives right also,
[00:16:39] and scaling the neurosymbolic cycle so that it can be applied efficiently to multiple tasks, and then reusing knowledge as it is applied and consolidating knowledge. Well, thank you so much for coming on, sharing so many of your invaluable insights today. But before I let you go, I always like to have a little bit of fun with my guests. And is there a book that has inspired you or means something to you that we can add to our Amazon wishlist?
[00:17:09] I always ask my guests to leave something there. Or a song for a Spotify playlist. I don't mind which you leave, but what would you like to leave and why? It has to be Kahneman's Thinking, Fast and Slow. Yeah. It was the inspiration for the idea of System 1 and System 2 in AI, so neurosymbolic AI with neural nets and logic: neural nets being System 1 and logic being System 2.
[00:17:38] And as for a song, my playlist, I think, is too eclectic. I cannot really choose one song. Okay. Well, I'll get the book added to our Amazon wishlist. Not a problem there. And one final thing before I let you go. I always give my guests a chance to sit on a virtual soapbox here. Are there any myths and misconceptions out there that we can lay to rest today? Because I suspect as you're working in AI,
[00:18:06] you see a lot of things on your news feed, when you're looking online, et cetera, in forums, and there might be a few untruths or misconceptions. Are there any myths and misconceptions around AI that you just want to highlight today before I let you go? I think the discussions around risk, and going back to accountability again, explainability, all the rest of it, went a bit out of control. Yeah.
[00:19:04] will be an opportunity to continue to develop the field by going in that direction of this middle ground, which in my mind is decentralized neurosymbolic AI. I think that's a perfect moment to end on. But finally, for anyone listening wanting to find out more information about your work, contact you or your team or anything at all, where would you like to point them?
[00:19:33] Where should they be going to stay up to speed with everything we talked about today? They can go to nesyai.org for links to all the NeSy conferences and workshops. There's the new Neurosymbolic AI journal. There's a recent NeSy and the Road to AGI paper on my webpage with pointers to the technical work.
[00:19:57] And the accountability in AI paper has pointers to work on explainability, but also a broader discussion around AI regulation, the risks of social media, and the impact of AI on employment and education, which are also important things to consider, beyond the technical aspects. Awesome. Well, I will gather up all those links. And for anybody listening, if you look in the show notes to this episode
[00:20:27] you're listening to, there'll be a section called Useful Links. All the links will be in there. So I urge everyone listening to check those out. But more than anything, it's such an important topic, and one that I'm just so relieved we're able to shine a light on, the light that it deserves. So thank you so much for taking the time to stop by and share your story. Really appreciate your time. My pleasure. Thank you so much. It was great to have this chat. Thank you. I think if the past few years have been about proving what AI can do,
[00:20:56] I think today's conversation is a reminder that how AI reasons may matter just as much as how powerful it becomes. And my guest offered a very clear perspective on why trust, accountability, explainability, these things can't be bolted on later, or at least they shouldn't be. And why neurosymbolic approaches may offer a much-needed different path forward, especially as compute costs rise and regulation tightens.
[00:21:25] Everywhere from healthcare to finance to the long-term debate around AGI, which is another episode on its own. I think today's discussion is one that challenges some of the loudest assumptions in today's AI narrative. And that's why I was so excited to get him on here. So I will include links to Artur's research, the neurosymbolic learning and reasoning community, and the papers we discussed. I'll add all that stuff to the show notes. And as always, I'd love to hear your take.
[00:21:54] Do you think the future of AI depends on better reasoning, not just bigger models? As always, let me know your thoughts. techtalksnetwork.com. If you pop over there, you can send me a message directly from the site, and you can even send me an audio message. Yep, I could listen to your voice instead of you listening to mine all the time. Come on, this is one-sided, man. Let me hear your voice too. If not, you can send me a DM on socials too,
[00:22:22] just at Neil C. Hughes on LinkedIn and all the usual places. But have a think and let me know your thoughts. And if you've got nothing to say this time, I will be pestering you tomorrow as I return directly into your podcast feeds. So hopefully meet me there, same time, same place, and we'll do it all again. Thanks for listening. Bye for now.

