What happens when the AI race stops being about size and starts being about sense?
In this episode of Tech Talks Daily, I sit down with Wade Myers from MythWorx, a company operating quietly while questioning some of the loudest assumptions in artificial intelligence right now. We recorded this conversation during the noise of CES week, when headlines were full of bigger models, more parameters, and ever-growing GPU demand. But instead of chasing scale, this discussion goes in the opposite direction and asks whether brute force intelligence is already running out of road.

Wade brings a perspective shaped by years as both a founder and investor, and he explains why today's large language models are starting to collide with real-world limits around power, cost, latency, and sustainability. We talk openly about the hidden tax of GPUs, how adding more compute often feels like piling complexity onto already fragile systems, and why that approach looks increasingly shaky for enterprises dealing with technical debt, energy constraints, and long deployment cycles.
What makes this conversation especially interesting is MythWorx's belief that the next phase of AI will look less like prediction engines and more like reasoning systems. Wade walks through how their architecture is modeled closer to human learning, where intelligence is learned once and applied many times, rather than dragging around the full weight of the internet to answer every question. We explore why deterministic answers, audit trails, and explainability matter far more in areas like finance, law, medicine, and defense than clever-sounding responses.
There is also a grounded enterprise angle here. We talk about why so many organizations feel uneasy about sending proprietary data into public AI clouds, how private AI deployments are becoming a board-level concern, and why most companies cannot justify building GPU-heavy data centers just to experiment. Wade draws parallels to the early internet and smartphone app eras, reminding us that the playful phase often comes before the practical one, and that disappointment is often a signal of maturation, not failure.
We finish by looking ahead. Edge AI, small-footprint models, and architectures that reward efficiency over excess are all on the horizon, and Wade shares what MythWorx is building next, from faster model training to offline AI that can run on devices without constant connectivity. It is a conversation about restraint, reasoning, and realism at a time when hype often crowds out reflection.
So if bigger models are no longer the finish line, what should business and technology leaders actually be paying attention to next, and are we ready to rethink what intelligence really means?
Useful Links
Learn More About MythWorx
Thanks to our sponsor, Alcor, for supporting the show.
[00:00:04] What if the biggest problem with today's AI isn't what it can do, but what it takes to keep it running? Because every week, it seems that we're told bigger models mean better intelligence. More parameters, more GPUs, more data centres, more power. And on the surface, that story makes sense, of course, because generative AI has delivered some jaw-dropping moments over the last couple of years. But behind the scenes, cracks are starting to show.
[00:00:33] So, from spiralling energy demands and heavy water consumption in regions that can least afford it, not to mention long infrastructure wait times and growing concerns about accuracy, latency and trust. Well, my guest today is Wade Myers. He's from MythWorx and he believes we're reaching the natural limits of the large language model approach.
[00:00:59] So today, he will explain why brute force scaling is actually becoming unscalable. And why adding more GPUs is starting to feel like a tax rather than progress. And why the next generation of AI will look far more like a human brain than a massive prediction engine. So we'll talk about reasoning AI, systems that learn concepts instead of memorising data,
[00:01:27] delivering deterministic and auditable answers, and running on a fraction of the power required by today's models. And I think that is something that's so important. We've all been on social media and seen so many videos, or should I say, so much AI slop, and many of the users creating these things do not take a moment to think about the Earth's resources being used up. And I would urge anyone to look it up, because a 30-second video requires a lot of energy resources.
[00:01:56] And my guest will also share why this new shift could unlock private enterprise AI, powerful offline edge intelligence, and entirely new use cases that simply don't work with cloud-based LLMs. So, if the AI headlines you keep reading are still obsessing over size and scale, I want to ask you: are we missing a quieter but far more important shift towards intelligence that actually reasons?
[00:02:24] Curious? I was hoping you'd say that. Before I bring today's guest on, a quick thank you to my friends over at Denodo, who are passionate about logical data management for AI success. Because let's be honest, AI is evolving fast. But the elephant in the room is that initiatives are still failing. Not because the models aren't good, but because the data foundation isn't ready.
[00:02:49] That's why organizations are increasingly turning to Denodo and logical data management. Denodo unifies your data across every cloud and every system, without the need for massive replication. So, you can power trustworthy AI, accelerate lake house optimization, and build data products that make self-service real for every team.
[00:03:14] So, CIOs, architects, business leaders, each get exactly what they need and when they need it. And Denodo's partners also help you get value even faster. So, if you're ready to make AI actually work, visit Denodo.com and put logical data management to work today. So, a massive warm welcome to the show, Wade. Can you tell everyone listening a little about who you are and what you do?
[00:03:44] Sure. I'm a tech entrepreneur and tech investor. So, I do both. I have several venture funds. I have a couple of venture studios that I've launched where the same team builds multiple companies. And I'm a founder of tech companies. And there's so much I want to talk with you about today. I mean, if we look through our news feeds, obviously, we're recording this during the week of CES as well. So, there's nothing but stories of AI.
[00:04:09] And for listeners who hear headlines every week about ever larger AI models coming out. Now, we're talking about agentic AI and thousands of agents, etc. Can you tell me why you believe that today's large language models are actually starting to hit real limits around power, cost, and latency? Oh, sure.
[00:04:31] Well, for the novice listener, an LLM, a large language model, depends on massive amounts of compute power. Because it's trawling through massive amounts of information that needs to be constantly updated. So, an LLM is static. Unless it's fed with new information every day, and oh, my goodness, Neil, every day there's an explosion of information on the Internet, right? And so, an LLM has to ingest all of that on a daily basis.
[00:05:00] It has to be tagged and labeled and added to its data set so that it can do a better job of giving constantly updated answers any time someone gives it a prompt. And there's a couple of issues. First of all, it's subject to scaling laws, right? The only way to produce better results, or to produce faster results, or to continue to add more information on a daily basis, is to just continue to scale.
[00:05:28] And so, the only way you can do this is build more data centers, add more GPUs, you know, add more parameters, right? You see, like, DeepSeek came out with a 2 trillion parameter model, right? And so, it has to continue to scale that way because an LLM works based on predictive outcomes on data sets that it has ingested.
[00:05:53] So, not only is it static, it needs to be added to, it also cannot prune information. So, AI ends up kind of trained on AI, right? So, there's all this AI-generated content, which is really, really easy now to generate. And every AI platform out there has to ingest all of that. It's just like content on top of content on top of content. And, of course, data poisoning can seep in and hallucinations and all of that.
[00:06:22] But, yeah, the reason why it's unsustainable is as you go to the dotted line to the upper right forecast of compute effort required, you know, increased usage, increased data center requirements, increased electricity, GPUs, etc. It's just unsustainable. It runs off the edge of the world, right? So, we need something better than what we currently have. Now, it's phenomenally powerful.
[00:06:48] And it's the fastest growing, like, inflection point ever, right, in terms of technology. But it's, obviously, there's going to be, you know, second waves, third waves, etc., or generations, whatever you want to call it, of tech that's better and more efficient. And you said that it is unsustainable. But I also read that you argue that adding more GPUs is actually becoming more of a tax rather than a path forward.
[00:07:14] And at a time when organizations are struggling with technical debt, it is. I thought that analogy was spot on. So, what made you so confident that there had to be another way to approach intelligence without brute force scale? It's a brave, bold move. But is there a story there behind that realization and your next move? No, you're right, Neil. So, a couple things, right? Number one is we do have an AI platform that is far different architecturally in how it approaches AI.
[00:07:43] And that is a full reasoning model that is extremely efficient. And there's other AI platforms working on similar initiatives. So, I'm anticipating this entire second generation and multiple and nth generations to come of new forms of AI. But, you know, full reasoning AI would act more like the human brain where we don't need to carry around every encyclopedia that's ever been printed in the world on our backs, right? We just, we learn things and we can reproduce that learning with very low power.
[00:08:13] The human brain only requires 20 watts of power, right? So, an AI that is architected around reasoning the way the human brain works is going to be far, far more efficient than an LLM. And GPUs, of course, were designed for graphics processing. So, they're almost being misused for AI, right? But you need this sort of raw, dumb processing power as opposed to a CPU, which does have a lot of intelligence built in.
[00:08:40] So, full reasoning AI that can actually work on a small footprint CPU adds even more efficiency, right? So, like with MythWorx, for example, we've done a bunch of benchmark tests and we can outperform, you know, 2 trillion parameter LLMs, you know, quite handily with far, far less power. So, I know we have it. I know it's coming from other innovators as well.
[00:09:06] And there'll be an entire next generation of AI, you know, on the scene soon. So, at MythWorx, you're betting on hybrid reasoning architectures that are modeled more closely on how humans think, which is a real breath of fresh air too. But for people listening and hearing about MythWorx for the very first time, tell me a little bit more about it. And also, what makes it fundamentally different from how mainstream LLMs reason today? Because, again, I think that's so important.
[00:09:36] Right, right. So, a typical LLM is predictive. You load a whole bunch of information. You ask it a question, it looks at that information and predicts, you know, a good response for you. And it's very, very good at language. Oh, my goodness. I can load a, you know, 100-page PDF and say, summarize this in one paragraph. And you go, wow, that's amazing, right? But it can get stuff wrong and, you know, misplace stuff and hallucinate, and it's subject to data poisoning and all of that.
[00:10:04] Real reasoning does not have to be trained on something specific. I'll give you an example. In a lot of the benchmark tests, like MMLU Pro, an LLM needs to be told all the questions in advance. It's like if you and I were going to take, you know, a standardized college examination sort of test. In Britain, it's called the, what do you call your, you know, your exams? We have, like, the SAT here in the U.S.
[00:10:31] So imagine an LLM needs to be told, here's all the questions we're going to ask you, and here's all of the answers, okay? And then it's trained on that. And now when someone asks it one of those questions, it looks it up and answers it. But in most cases, an LLM will only be able to answer at about 87.5% correctness, right? Because there's so much information in the LLM, and there's going to be some hallucination, some data poisoning.
[00:11:00] But you have to train it. It can't just come up with an answer. Now, if you and I learned how to do math, we can sit down and take a math test without having seen that question or that problem before, right? Because we've learned it. So a reasoning AI, like MythWorx, learns something. It doesn't have to, again, carry around all the information on its back. It's learned it.
[00:11:27] So we took the MMLU Pro benchmark test, for example, completely blind. Didn't see any questions, any answers ahead of time. Walked in there, took PhD-level physics, science, biology, math, you name it, and crushed it. Because it's learned. That's the core difference. So we don't have to have all this data that we're crunching. We don't care about all the new information that was added to the internet just yesterday. Because, again, that's just more content.
[00:11:55] And there's so much AI slop now that it's even harder to, like, you know, cull through it. And, you know, when DeepSeek first came out, it kind of roiled the markets. And people thought, oh, wow, this is amazing. They trained so much faster and cheaper than ChatGPT. No, they just trained on other AI, right? So LLMs can train on LLMs, right? But there's a problem, again, when, you know, data poisoning seeps in.
[00:12:23] And when AI trains on AI, it just sort of can go off in weird directions. But when you've learned how to do something, and again, I go back to where reasoning is really important, is where it needs to be deterministic. And there needs to be an audit trail. So imagine anything, let's say finance, big banks. You don't want to trust your account information to an LLM, which is going to combine your information to everybody else's and, like, just come up with something. You want a deterministic answer.
[00:12:52] You want to say, no, this is like math or science, where there's a true answer. And ideally, there's an audit trail to say, how did you derive this answer, right? So with full reasoning, we can go back and say this is exactly how we came up with this answer. And it's because we've learned that discipline, right? That's way different. And one of the boldest claims you make is achieving roughly one-tenth of the power for the exact same workload.
[00:13:18] So assuming that GPU tax eventually disappears, how does that change who gets to build and who gets to deploy advanced AI? Presumably, it lowers the barrier. Does it? Oh, yeah. So right now, for example, there are huge wait times for data center capacity, for electricity, and for GPUs. I think GPUs right now are backed up six months. And in some cases in the electrical industry in the U.S., right?
[00:13:46] It's as much as two and a half years to get power to some place. And so there's all these constraints we're already seeing. Now, if we don't need GPUs, which we don't at MythWorx, we can operate on a small CPU, right? You don't need all that power. So if you kind of think of all the layers of AI, right?
[00:14:06] From the data center itself, and the electricity that you plug into the data center, and then the processing power that's required, and the information and the information storage that's required, and the cooling that's required, that whole stack that's required to power a large language model with just tons and tons of information. And then compare that to: I've learned how to do something. You give me a question and I'll answer it.
[00:14:34] Or put me on a small device and I'll deliver AI, right? That's a massive difference. So there'll be winners and losers in this next wave as we move to full reasoning. But importantly, LLMs do a really good job at what they were designed for with language. Think of an autonomous vehicle: we don't want to just guess at what's coming down the road at us. It's got to be deterministic. It's got to be accurate. So full reasoning AI is not just a little more efficient, it's way more accurate.
[00:15:03] I just want to give a big thank you to my sponsor, who is supporting every show, every episode across the Tech Talks network this month. And this month I'm proud to be partnering with Alcor. Anyone who's tried to scale an engineering team across borders will know firsthand how messy it can get. Because they deal with endless providers, then there's confusing rules in each and every region, and fees that always seem to surface at the last minute.
[00:15:31] Now, Alcor, they solve that by acting as a partner rather than just an intermediary. And they focus on tech teams that expand in Eastern Europe and Latin America. And they bring employer of record services together with recruiting. So essentially, they help you pick the right country, source the right engineers, and assess them properly. And then get them active for you and your company within days. And one of the things that stands out for me is the financial transparency.
[00:16:01] Around 85% of what you pay goes directly to your engineers. Their fee goes down as your team grows. And if you ever wanted to bring your team in-house, you do so with no exit costs. That kind of clarity is why Silicon Valley startups, including several unicorns, have chosen Alcor. And you can find out more by simply going to alcor.com slash podcast or follow the link in the show notes below.
[00:16:28] Oh, and before you joined me today, I was doing a little research on you guys. And I read that you pointed out that this approach works best across domains like law, physics, medicine, and finance, but without domain tuning. So why is this such a big shift from how enterprises think about deploying AI today, for any business leaders that are listening? Well, yeah. So right now, if I'm going to deploy AI in an enterprise, I'm typically using OpenAI, you know, ChatGPT.
[00:16:58] And I'm deploying some agents around that, or Gemini, right? So I'm going to cloud AI. I'm basically throwing my information into the information bucket with the rest of the world. And all those license agreements say, you know, just like on Google or anything else, that they now own the photos that you upload, right? All these big AI platforms say, we now, Neil, we now own all your information. Thank you so much for giving it to us. We're going to train on this and, you know, maybe use it against you, right? And wait a minute.
[00:17:28] I want private AI. I want just my information, and I want AI for specific outcomes. And I want to be able to do this without spending millions per month on training and data center infrastructure. And surely there's a better way. And I think this is like the typical Gartner, you know, hype curve, right? Everybody kind of rushes out there and embraces AI and starts playing with AI.
[00:17:53] And then you've got, you know, chimps riding motorcycles and cats riding bicycles. And you go, okay, are we really using AI to create baby Trump videos? Surely there's a better use. And so as we play with this, and as corporations kind of dip their toe and go, oh, we have to rush at AI. Neil, this reminds me of the dot-com days, when corporations were rushing at the internet going, we need to have a website, we need to do this, we need to do that. You know, mistakes are made and, you know, there's security holes, there's all kinds of issues.
[00:18:22] But then the next generations come in, and then you kind of figure it out, and then tech enablement works. So right now many enterprises are disappointed with their AI initiatives, going, we didn't really get the outcomes we wanted. And gee, we're kind of afraid now that we know we've had to give our information into the cloud to be used for training purposes for everyone else. So where the enterprises are headed is private AI installations. But then you go, oh, are you kidding me?
[00:18:51] I'm going to take, let's say, a $100 million company and tell them, you have to have your own data center, your own GPUs, and your own sort of AI environment? So one of the things we're doing at MythWorx is we're building private AI with a very small footprint that can be used for model training and everything else, where it's proprietary. This is just your information. Because many companies already have this. Take a big bank. They have plenty of information about their customers.
[00:19:18] They don't want to pollute that and combine that with other bank customer information. They want to just use that themselves and have very specific capability coming off of that. And so that's our first product we're rolling out this quarter is a model training product specifically for enterprises to be able to train faster, cheaper, and better than using a typical cloud AI infrastructure. And I'm glad you likened it to the dot-com era there. We saw it there with the websites, as you said.
[00:19:48] And it also reminds me, I think it was 2007, the iPhone came out. A year later, the App Store launched. And what were all those apps? What were they? You could turn your phone into a chainsaw, a glass of beer, or a flute. It's the same pattern, isn't it? We just play and do stupid stuff until we learn how we can do something useful with it, right? No, exactly. It's that same kind of curve. I just laugh at how people are using AI right now. It's such a total waste.
[00:20:16] But it is part of the learning curve. It's almost like we're in toddlerhood. You know, we're kind of picking up our block of AI and we're biting on it and licking it and, you know, kind of figuring it out. And we'll figure it out. Yeah, 100% with you. And here we are in 2026. A lot of excitement around the kind of opportunities that are lying ahead. And you said that the winners will not be the biggest models, but actually the smartest architectures.
[00:20:41] So from your vantage point at MythWorx, what should business and technology leaders be watching for as this shift begins to play out? Any signs that you think we should look out for? Well, if I go back to that sort of multilayered cake of what is required for AI, from, you know, data centers and electricity and processing power, et cetera. At every single layer, there will be innovation.
[00:21:09] And it's all happening simultaneously. Like there's some really cool wafer technology right now that claims to be 20 times faster than an NVIDIA GPU, for example, like Cerebras, right? So there's really cool innovation at every one of those layers that's happening. I mean, people are trying to, you know, fire up nuclear power plants to add more electricity, right? So at every single layer, leaders should anticipate changes.
[00:21:37] And so what that means is, as a leader, you don't want to commit to anything long-term right now, because all those layers are subject to change. And eventually it'll settle out to the most efficient architecture on top, delivering the results that you want, riding on the most efficient sort of chip architecture or wafer technology, whatever that is, and potentially sitting in really nimble, small-scale data centers.
[00:22:05] Oftentimes large scale loses in a big way when there's lots of innovation, because innovation puts large scale out of business quickly. And large scale means you aren't as nimble as you could be in navigating the changes. I've seen this over and over again in other industries, where often small scale at really high speed beats large scale all day long, because you can quickly innovate and change. And so I'm looking at all those layers and saying leaders should anticipate a lot of change.
[00:22:33] And not that they should hold back on investment, but just don't be committing long-term. When I'm hearing, well, we're investing 50 billion in a massive data center that's going to be, you know, a jillion miles wide in West Texas, I wouldn't do it. I'm short on that kind of concept, right?
[00:22:49] It's like, I would rather have small, nimble data center architecture where I'm constantly swapping out processors and swapping out storage and different architectures for AI, to try to figure out what is the best mix of innovation that delivers AI that really works well to solve specific problems. And I've got to ask, what about yourselves? What's your focus this year at MythWorx? What can we expect from you guys this year?
[00:23:18] Our first product rollout this quarter is model training, to basically offer that model training service for LLMs. Because we know that we can train any kind of AI model far, far faster and cheaper and better than a typical current architecture.
[00:23:32] So a model training service, followed by what we're calling, as our placeholder product name, Nero IMS, which is an edge AI capability. Really small scale, really efficient, works on your smartphone, gives you tremendous AI power on your smartphone. You can be in airplane mode, disconnected from the internet, and have powerful AI in, you know, a tiny file that you just unpack, and wow, you've got AI on your phone.
[00:23:58] And then we're using that product rollout to basically help people envision AI at the edge on all manner of edge devices, whether you're connected or not. It is very powerful, and it can produce very specific things, because edge AI doesn't require all the information in the world like an LLM, right? For a specific device, well, you know, think of an autonomous vehicle or a drone. What do we want it to do? And we want to put AI on that device.
[00:24:28] You can't strap a GPU to a drone, right? So our second product is going to be edge-capable AI. Exciting times ahead. Sounds like we're going to need to get you back on later in the year to see how things are evolving. And for anyone listening, you might have set off a few light bulb moments today in our conversation. If they want to find out more about all things MythWorx, or connect with you or your team, where would you like to point everyone listening? Yeah. MythWorx.ai.
[00:24:57] So it's M-Y-T-H-W-O-R-X dot AI. MythWorx.ai. Awesome. Well, I'll add a link to the show notes. I do urge anybody listening to check that out, especially as I love how this AGI platform that you're creating thinks like a human, self-trains, uses minimal compute power, but most importantly, outperforms LLMs and LRMs. Certainly worth checking out. And I meant what I said, I'd love to get you back on later in the year to see how things are evolving.
[00:25:27] But more than anything, thank you for sharing your story today. Thank you, Neil. Really appreciate it. If smarter AI requires less power, less data, less infrastructure, who does that actually change the game for? This is the question I keep thinking about after my conversation with Wade, because his argument isn't that large language models are useless. They clearly have their place. But as businesses move past experimentation and start demanding accuracy, auditability,
[00:25:54] clear outcomes, the limits of prediction based systems then become much harder to ignore. And one of the things that stood out to me today was this idea that we're still very early in how we use AI closer to the playful early days of GeoCities websites on the internet and the first wave of smartphone apps where we turn our phone into a glass of beer. It feels much more like that than the finished destination, doesn't it?
[00:26:21] And in that context, I think reasoning based architectures feel like the next natural step, especially for environments where guessing simply isn't good enough. And I also think there's a strong strategic message here for leaders. When every layer of the AI stack is changing at once, from chips to power to architecture, long-term bets on massive infrastructure can quickly become liabilities.
[00:26:46] But smaller, more adaptable systems could well end up winning simply because they can move faster. And as this AI conversation continues to evolve and mature, are we finally ready to prioritize thinking over scale? Over to you: techtalksnetwork.com, or on LinkedIn, X, and Instagram, just @NeilCHughes. Let me know. I'll be back again tomorrow. But that's it for today. Bye for now.

