LAMs (Large Action Models) and the Future of AI Ownership
Tech Talks Daily · January 27, 2026
32:20 · 23.04 MB

What happens when AI stops talking and starts working, and who really owns the value it creates?

In this episode of Tech Talks Daily, I'm joined by Sina Yamani, founder and CEO of Action Model, for a conversation that cuts straight to one of the biggest questions hanging over the future of artificial intelligence.

As AI systems learn to see screens, click buttons, and complete tasks the way humans do, power and wealth are concentrating fast. Sina argues that this shift is happening far quicker than most people realize, and that the current ownership model leaves everyday users with little say and even less upside.

Sina shares the thinking behind Action Model, a community-owned approach to autonomous AI that challenges the idea that automation must sit in the hands of a few giant firms. We unpack the concept of Large Action Models, AI systems trained to perform real online workflows rather than generate text, and why this next phase of AI demands a very different kind of training data. Instead of scraping the internet in the background, Action Model invites users to contribute actively, rewarding them for helping train systems that can navigate software, dashboards, and tools just as a human worker would.

We also explore ActionFi, the platform's outcome-based reward layer, and why Sina believes attention-based incentives have quietly broken trust across Web3. Rather than paying for likes or impressions, ActionFi focuses on verifying real actions across the open web, even when no APIs or integrations exist. That raises obvious questions around security and privacy.

This conversation does not shy away from the uncomfortable parts. We talk openly about job displacement, the economic reality facing businesses, and why automation is unlikely to slow down. Sina argues that resisting change is futile, but shaping who benefits from it remains possible. He also reflects on lessons from his earlier fintech exit and how movements grow when people feel they are pushing back against an unfair system.

By the end of the episode, we look ahead to a future where much of today's computer-based work disappears and ask what success and failure might look like for a community-owned AI model operating at scale.

If AI is going to run more of the internet on our behalf, should the people training it have a stake in what it becomes, and would you trust an AI ecosystem owned by its users rather than a handful of billionaires?

Useful Links

Thanks to our sponsors, Alcor, for supporting the show.

[00:00:04] Welcome back to another episode of the Tech Talks Daily podcast. Now, a quick question for you all. What happens when AI stops answering questions and starts taking actions on your behalf? And is there any escape from the big five tech companies? We often hear about the big four. I'll be lenient. Let's say the big five.

[00:00:24] Well, my guest today is the founder and CEO of a company called Action Model. They've just come out of stealth and we're going to talk about a very different vision for the future of AI. We're going to unpack why large action models go beyond chatbots and how automation is moving from theory to real world execution and why he believes ownership of AI should sit with communities rather than just a handful of tech giants.

[00:00:52] So if you're curious about where AI is heading next, this is a conversation that might stretch your thinking or might just get you thinking a little bit differently about the tech landscape and the part that we all play in it and how we might be able to form our own communities to do things a little bit different.

[00:01:09] Before I bring today's guest on, a quick thank you to my friends over at Denodo. AI is evolving fast, but the elephant in the room is that AI initiatives are still failing. Not because the models aren't good, but because the data foundation isn't ready. That's why organizations are increasingly turning to Denodo and logical data management. Denodo unifies your data across every cloud and every system without the need for massive replication.

[00:01:39] So you can power trustworthy AI, accelerate lakehouse optimization and build data products that make self-service real for every team. So if you're ready to make AI actually work, visit Denodo.com and put logical data management to work today. Enough scene setting from me. Let me introduce you to my guest right now. So a massive warm welcome to the show.

[00:02:06] Can you tell everyone listening a little about who you are and what you do? Yeah, well, thank you for having me on. So my name is Sina and I'm the founder of Action Model. The future of AI is going to be a crazy journey, to say the least. It's rapidly evolved over the last few years alone. And what we're seeing is essentially a handful of billionaires and trillion dollar companies completely own almost all of it.

[00:02:35] And that's a pretty scary thing to imagine when you are talking about something that will potentially run the whole world and do almost everything for us. From starting with computers, ending up with robotics. So with the Action Model, what we've built is something that enables the community to get involved. Individually, there's not much we can do and we're pretty weak.

[00:03:03] But together, we're actually very strong. And the Action Model is all about a community initiative, which we can all come together, contribute in different areas. And together, we build something very big and we all own a fractional stake in the whole ecosystem. That's the foundations of an Action Model. My background is engineering. I've been programming since I was about 10, studied computer science. And my last startup was in the UK.

[00:03:32] It was a fintech called Yoelo. And we were one of the first companies to bring QR code payments into the UK. And in that company, we had this narrative, which was around Visa and MasterCard. Whenever you paid with your card, Visa and MasterCard would take about 5% of the transaction fee from businesses. Right.

[00:03:55] And our solution, which was new, innovative, we connected to banks directly and we could process that for 90% less. And so the local business saved a tremendous amount of money. So it's good for the local economy. And through that and word of mouth, people came together to promote our products. And we ended up reaching millions of users within a single year. We grew rapidly. And we exited that company.

[00:04:25] But I learned something very important in that journey there. And that is that essentially people want to resist against something. People want to come together and start an uprising, a movement. And in that company, no one had anything to lose. You know, it's not you that loses anything from the card payment. It's the business. So people were very selfless in that company. But here, we're talking about AI.

[00:04:53] And we all have something to lose here. We all have something that is going to be replaced by AI. Whether, you know, right now there's a billion people that work behind the computer screen. And the future looks very bleak for these people. Because AI is going to be able to do almost all of that work. So really, I went quite in detail with that summary. No, and I'm glad you did. Because it's such an important topic right now.

[00:05:22] And for people listening, I would urge anyone to try and do anything online. Or even on your phone. And just send a message. You know, you cannot do it without touching one of those big four or five tech companies out there. And even just looking at messaging alone. I don't know if it's Instagram Messenger, WhatsApp, Instagram, Facebook. Yeah, they're all linked together by one company. And that's how most people message.

[00:05:49] And you argue that today's AI industry focuses power in too few hands. And I think you are bang on the money here. So from your vantage point as a founder, what exactly is broken in the current model of AI ownership? And why does decentralization genuinely change outcomes rather than just more rhetoric? There's a few areas in which coming together as a community of millions can really disrupt this system.

[00:06:18] And together, we can create something very big. Firstly, you know, I have to talk about what is the future of AI, right? And how does it work? To date, we've all come to know and love chatbots. And this technology, they're called large language models, LLMs. And essentially, you speak to it, you give it some text, and it gives you back some text, right? So they're nice chatbots. And the training data required to create these chatbots, well, how do they make these AIs?

[00:06:48] They essentially scraped all of the language from the internet, from books, articles, social media, anything that humanity has produced in the form of language. They basically, you know, they took it. No one was rewarded for it. And they compressed it into these incredible chatbots. And so that was what we've had so far. And these tools are incredible, you know. The future is automation.

[00:07:16] It's actually doing the work, not just chatting. And the training data required for this is substantially different. And it's not as easy to collect. You know, it's not just sitting there on the internet. The type of data that you need, if you want to create an AI that actually uses your computer. And, you know, what I mean by that is it will literally act as a person. It looks at the screen, and it uses the mouse and the keyboard of your computer.

[00:07:43] And it can use all the software that you normally use, right? And with that, automate all of the work that you need it to do. So to train this kind of AI, you need the AI to have an incredible amount of data on how to use all of these different platforms. From as simple as going into Gmail, clicking the send button, clicking subject, you know, writing the information and then clicking send. Incredibly simple journey, obviously.

[00:08:12] All the way to something very complicated like going into AWS and creating a server. These all are tasks online that people are doing. And training an AI on how to do these is what's coming next. And at ActionModel, we've developed a system which is extremely simple to use. Users just download a browser extension.

[00:08:35] And within 60 seconds, they can start contributing their data on how they use the internet to essentially train these kinds of AIs, right? And the reward for that is they receive fractional ownership. So that's a way where when we have a community of millions of people, we start collecting data points across the whole internet, across every single platform that exists.

[00:09:01] And AI becomes very powerful, very flexible with a breadth of functionality. And so this is what the power of the community brings initially in terms of, you know, actually helping to train these AIs. If you think about it, your data is already being taken, you know? You know, look at the chatbots I just told you about. It took all of humanity's language to create these chatbots. And no one was rewarded for it. And they're doing the same again with the next stage, with automation.

[00:09:31] ActionModel actually does reward you for it. So that's the first fundamental part of it. And then secondly, it comes down to distribution. You know, as you said, there's a handful of these trillion dollar giants that we touch on an almost daily basis. And they have incredible distribution. Whereas if you have a community of millions of people, firstly, these people don't have any interest, any vested interest in these big giants.

[00:09:58] You know, no one has any interest in open AI or Anthropic or any of these companies exceeding. You know, it's just a handful of billionaires that win. Whereas with the ActionModel, we all own it together. And so when you have a community of millions of people that have a shared ownership of this ecosystem, then everyone becomes your biggest advocate.

[00:10:22] And that creates a very powerful distribution, which is self-fulfilling and rewarding to the whole community. It's such an incredibly cool concept. And kudos to you for creating this. And Action Model also introduces the idea of large action models that can click, type, and complete tasks online, as you said there. So in simple terms, how are LAMs different from large language models? And where do they struggle compared with human workers?

[00:10:51] Anything you can share around that just to bring it all to life? Yeah, absolutely. I mean, so as I said earlier, large language models, chatbots, you speak to it, it speaks back to you. It gives you advice. It tells you to go and do these things. Whereas large action models, they're trained on actually doing those things themselves. What I mean by that is think about the billion or so people around the world who are employed to sit behind a computer screen. How do they use the computer?

[00:11:21] You look at the screen and then use the mouse and the keyboard. And with that, these billion people are doing everything, using all the different systems and platforms and applications required for their jobs. And these large action models are being trained on how to actually use those software and those systems in exactly the same way. They can look at the screen and they can move the mouse and use the keyboard of the computer and act entirely like a person does.

[00:11:49] This technology is being developed by the big guys. So you will start seeing this technology coming through OpenAI and Anthropic and Gemini and these behemoths. And so this future of automation is inevitable. And when you go to, you know, when these companies go to your employer and they say, hey, look at this software that we have. It will do the work of these people that you currently hire, but for 5% of the cost.

[00:12:19] And it runs 24-7 in the cloud. Your employer is going to say yes. And that's the unfortunate truth and the type of world that we live in. And all of that value transfer at the moment is destined to go to these big tech companies. Whereas with the action model, the whole ethos that we've built is that we come together, we build it together, and we own it together. And there'll be many people listening and they'll be able to envision exactly what you're talking about here.

[00:12:49] And they'll be immediately worried that action-capable AI could automate huge amounts of knowledge work. So how do you think about that balance between productivity gains and the risk of job displacement and who ultimately captures the value created here? It's got to be something you're incredibly passionate about too, right? Yeah, I mean, as tough as it is to kind of imagine, you know, what a bleak future that looks like. We're talking about a huge proportion of the planet.

[00:13:20] I can't see any way that it won't happen. You know, I see it as an inevitability. As I gave in that example, when you go to the employers as these big tech companies and you offer them an incredible way to automate their processes and what might be their biggest cost base as well. At the end of the day, we live in a capitalist world. And every business is trying to do one of two things. Make more money or be more efficient.

[00:13:50] And that's exactly what these tools provide. And so I see it as an impossible wave to swim against. The only thing you can do is try and swim with it. Unfortunately, you can't join the big guys. So all you can do is join the resistance. And that is the action model. And ActionModel claims that it can verify real user actions across any website without APIs and without SDKs.

[00:14:20] And I suspect that that will split the audience in two there. There'll be one side thinking, hey, that is incredibly cool. But then there'll be the more cautious IT and security teams worried about security concerns. So technically, how does that verification work? And what prevents bad actors from gaming the system or creating a convincing fake behavior? It must be a question you get asked a lot. But tell me more about the security aspect there. Yeah, I mean, I'll give you a background first on what ActionFi is.

[00:14:49] The browser extension that we have, its core purpose is to train the large action model. By collecting data points on user journeys across different applications. So the AI learns how to use those applications. All right. You know, we reward you in the background for this. So it takes someone 60 seconds to set up the browser extension. And then they can forget about it. And in the background, they're just using the internet as normal.

[00:15:17] And they're earning rewards based on what websites they go on. Right. Because they're helping to train the AI. Again, that data is already being taken by the big tech companies without your permission, knowledge or reward. At least we're transparent about it. And we actually do reward you. That's the fundamentals. Now, when we're rewarding people for their contributions, the rewards aren't, they're not equal in terms of websites.

[00:15:46] You know, different websites have different value. So if you're just watching YouTube videos, this obviously isn't very valuable to training in AI. Right. But if you're using Salesforce or HubSpot or AWS or one of these software or applications that businesses pay a lot of money for, it's very important and valuable to train AI on using those systems. Right. So we have a leaderboard within our application.

[00:16:16] We've essentially tracked thousands of websites, mapped them to our systems, and we reward them differently based on the value of those websites. So YouTube doesn't get that high of a reward, and Salesforce would get a 10, 20, 30 times multiplier because it's so much more valuable to train on. So those are the fundamentals of what ActionFi is.
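To make that weighting concrete, here is a minimal sketch of how a domain-based reward multiplier like the one described could work. The domain names, multiplier values, and function names are illustrative assumptions, not Action Model's actual leaderboard figures:

```python
# Illustrative sketch of a domain-weighted reward scheme: business
# software earns a larger multiplier than entertainment sites because
# it's more valuable to train on. All values here are hypothetical.

BASE_REWARD = 1.0

# Hypothetical multiplier table keyed by domain.
DOMAIN_MULTIPLIERS = {
    "youtube.com": 1,       # low training value
    "salesforce.com": 20,   # business software: far more valuable
    "aws.amazon.com": 30,
}

def reward_for_session(domain: str, minutes: float) -> float:
    """Reward scales with time spent and the domain's training value."""
    multiplier = DOMAIN_MULTIPLIERS.get(domain, 1)  # unknown sites get 1x
    return BASE_REWARD * multiplier * minutes
```

Under this scheme, two minutes on a high-value domain outweighs an hour of passive browsing on a low-value one, which matches the leaderboard behaviour Sina describes.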

[00:16:42] And within that, we have a subset of tasks as well. So each domain or application we want to train on, yes, it runs primarily in the background, but we have individual tasks as well in which we tell the user, go and do this specific thing because this is very important. And we want a lot of people doing it so the AI gets amazing at doing those things.

[00:17:07] And the way the browser extension works is it essentially picks up the journey that you went on. What this means in the data that we collect is essentially the path that you took through the website. You know, where on the screen did you click? The mouse coordinates. What did the buttons look like? Context. Things like this. This is the information you need the AI to understand and to be able to train it. So those are the things that we collect.
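The kind of journey data described above can be pictured as a sequence of structured events. The field names below are illustrative assumptions; the extension's real schema isn't public beyond what the docs describe:

```python
from dataclasses import dataclass

# Hypothetical shape of one recorded step in a user journey: where the
# click landed, what the element looked like, and its surrounding
# context. Field names here are illustrative, not Action Model's schema.

@dataclass
class JourneyStep:
    url: str            # page the action happened on
    x: int              # mouse click coordinates on screen
    y: int
    element_text: str   # e.g. the label of the button clicked
    element_role: str   # e.g. "button", "textbox"

# The simple Gmail journey from the conversation, as captured steps.
journey = [
    JourneyStep("https://mail.google.com", 40, 120, "Compose", "button"),
    JourneyStep("https://mail.google.com", 300, 80, "Subject", "textbox"),
    JourneyStep("https://mail.google.com", 60, 500, "Send", "button"),
]
```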

[00:17:33] And then, you know, the browser extension itself has a lot of different kind of fingerprints that it picks up in terms of variables and bot detection to ascertain as to who's a real person and who's not. So a lot of it is obviously trade secret. We've figured it out along the way. We've done extensive several months of beta testing. We have about 30,000 users at the moment in the beta test. So we're pre-launch. At this moment, we're actually launching next week.

[00:18:02] We've had about 30,000 users that we've been testing with. And so we're very comfortable with how the systems work. There is some detail around how we collect information, how the browser extension works in our docs, which is public for anyone to go and read. It's docs.actionworld.com. A quick thank you to the sponsor that supports every podcast across the Tech Talks network. And this month, I'm partnering with Alcor.

[00:18:29] And if you've ever tried to hire engineers in another country, you probably know just how painful it can be. Different laws, patchy support, and partners who don't truly understand engineering roles. So Alcor approaches this from a different tech point of view. They specialize in Eastern Europe and Latin America. And they're able to combine EOR capabilities with recruiting. So you get one partner handling everything.

[00:18:56] And they help you choose the best location for your stack, find developers with the right depth of experience, and run proper assessments so they can onboard people quickly. And they also give you a model that respects both transparency and margin. Most of your spend goes directly to your engineers, and the fee will decrease as the team expands. And you can even transition everyone in-house when you're ready without having to worry about a penalty.

[00:19:24] And that structure is why a mix of early stage and unicorn stage companies use them as they scale. So if you want to take a look, visit alcor.com slash podcast or tap on the link in the show notes. But now, on with today's show. And for anyone listening that's reading up on this while they're listening to this podcast, they're going to be attracted by privacy features there and making a stand against big tech, joining a global community. And when they read that action model, it is privacy first.

[00:19:54] But then later they'll read that the system observes detailed browser activity. There might be a slight mismatch there. And again, it's probably a question you get a lot. So how do you square that circle on what protections exist if something was to go wrong or the data is misused? So firstly, we're incredibly transparent with everything. So we articulate all the types of data that we do collect. Those are not only in the docs, but they're also in the dashboard.

[00:20:21] So whenever you run the browser extension, you can literally click into your training history, and you can see exactly every single bit of data that we've collected. So we're incredibly transparent about it. The second layer is that you can essentially block anything that you don't want to train on. So by default, we have a block list with like a few hundred websites in there. So you can add to that whenever you want, things like banking, things like messaging platforms, email platforms,

things that have a lot of sensitive information we don't train on. So you can go and add anything to that as well. Also, within the browser extension itself, on the local machine, all the sensitive information that we can pick up on is removed before it's sent to the cloud for training. And so it goes through a process before it even touches the internet so that it's anonymized.

[00:21:15] And then before it goes for the training run, it's packed with thousands of other people's similar training data on whatever that website is. So there's something called K-anonymity, which essentially mixes all the information together, and it's impossible to trace back to anyone in particular. So there's loads of different things.
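The k-anonymity idea mentioned here can be sketched in a few lines: a record is only released if at least k contributors share the same quasi-identifying key (here, the website being trained on), so no row traces back to one person. This is a simplified illustration of the general technique, not Action Model's actual pipeline:

```python
from collections import Counter

def k_anonymize(records: list[dict], key: str, k: int) -> list[dict]:
    """Keep only records whose key value appears at least k times."""
    counts = Counter(r[key] for r in records)
    return [r for r in records if counts[r[key]] >= k]

# Hypothetical contributions: the common site survives, the rare one
# (which could identify its single contributor) is suppressed.
records = [
    {"site": "salesforce.com", "steps": 12},
    {"site": "salesforce.com", "steps": 9},
    {"site": "salesforce.com", "steps": 15},
    {"site": "rare-intranet.example", "steps": 4},  # unique -> dropped
]

safe = k_anonymize(records, key="site", k=3)
```

Real k-anonymity systems also generalize quasi-identifiers (e.g. bucketing timestamps or coordinates) rather than only suppressing rare rows; the sketch shows just the grouping threshold.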

[00:21:42] Again, we've articulated a lot more than what I've just said in the documentation itself. So everyone's free to go and read it. The final point is that, you know, I can't stress this enough. Your data is being taken right now. You know, if you go and read the terms and conditions, and no one expects you to read it, but go and copy and paste it into a chatbot and ask it, what can they do with my data? And it will always come back and say,

[00:22:10] the data is theirs to do whatever they want with. They don't have to disclose anything to you. They can sell it. They can use it. They can train AI with it. You know, if you think that the big tech companies, Chrome that you browse on, Apple that your phone is on, Instagram, Facebook, Amazon, literally every single one, one of the biggest value propositions they have is the data that they collect across billions of people. They don't reward you for it. They don't disclose it to you. They're not transparent about it. Action model is not any of those things.

[00:22:40] You know, we are very open and transparent about everything that we collect. We let you modify what you want to provide. We let you delete it at any point that you want to. And most importantly, you actually get rewarded for it as well. And again, such a great point. And next time a T&C box turns up on their screen with, I don't know, 15 pages there, do exactly what you said there. Copy paste it into a chatbot and an AI agent and ask it, what can they do with my data?

[00:23:09] And it will set off more than a few alarm bells. I can promise you. And elsewhere, we will have a few people listening that have been in the Web3 space for many, many years. They picked up a few war stories along the way, and they will say Web3 incentive systems have got a bit of a mixed track record, because many projects have attracted attention rather than that long-lasting engagement which is required for adoption, et cetera. So what would you say makes ActionFi different in practice?

[00:23:36] And how will you measure success beyond those token rewards and the community? The thing about Web3 that's evolved over the years is that it's essentially, you know, Web3 has gone through a progression from mainly hype to reality. In Web2, there's nothing but reality. You have to have a good product, you need to sell it to people, you need to make money.

Traditionally, Web3 was a little bit devoid of that. You know, it was more about hype and anticipation and future gain. So the result of it was that projects would have the hype, and then it would boom, and then it would not come to fruition, or it wouldn't fulfill the expectations, and it would come crashing down. And the reason for that is because people weren't making real products within Web3. I mean,

[00:24:35] look at the meme culture that existed over the last year. For every good project that has been in Web3, there's probably 100 vaporware ones, right? And the action model, this was completely against our ethos. We have been in stealth for over a year building the action model. The team is now 40 people crunching every single day. It's about 25, 30 developers working full time.

[00:25:05] And when we release next week, we have a full product stack. There's like six different products that we have within the action model. And not only is the community actually contributing to build the product itself, but they then become the distribution mechanism as well. And that's the most powerful part about it. When we have millions of people in the community, these millions of people,

[00:25:34] everyone either works for a business or they own a business, right? And again, none of these people have an incentive for open AI or Anthropic or Gemini to succeed. But if they're a participant of the action model, they have fractional ownership of it, they will be very incentivized for the action model to succeed. And as a result of that, all of those people that either work for a business or run their own business,

[00:26:02] they would rather use or recommend the action model over any other product. And then you end up in a situation where you have millions of advocates. And that distribution channel is very powerful. If the action model ends up having 10,000 B2B customers, you've got hundreds of millions in revenue. And that's in a pool of 50 million businesses globally that employ those billions of people

[00:26:28] that will be automated in the next five years or so. And so this distribution strategy that you have ends up bringing the product to fruition. And that's where the incentive model comes back on itself and rewards everyone, right? So the problem, as you've mentioned across Web3, was that loop never succeeded. People relied more on hype than on reality.

[00:26:56] And if we fast forward a few years, let's assume action model has succeeded. What does the internet really look like for the average user, the average person listening, the business leader and indeed developers? What would you consider your biggest failure mode along the way? And how do you see it all evolving? First, I'll paint what I believe the future looks like as a whole. Yeah. And then what the action model has to do with it. So I don't think that in 10 years from now,

[00:27:26] people will be working behind a computer screen. Right now, there's a billion people that are employed to work behind a computer. And I think that number will drop down to maybe a million. And that is because everything that we currently do on computers can be automated. And AI will do all of that work. And what will be left behind is, I think, entertainment and gaming.

[00:27:52] Employers, when given the opportunity to reduce their costs by 95% and boost their efficiency by 10x, they're going to take it. And so, as I said, it's an inevitability that that happens. What that looks like is maybe a dozen or so big tech companies that have capitalized on that value transfer. These are the big tech companies that we already know about.

[00:28:23] And they will be the ones that have built these enormous AI systems that can automate almost everything on a computer. Action model, if we succeed, will be one of those players. But the difference will be that the ownership will be every single person that contributed to it. From downloading the browser extension to helping to grow the network, bringing businesses on board, creating workflows in the marketplace,

[00:28:53] completing ActionFi quests, anything that you do within the Action Model ecosystem, you receive a fractional stake of the entire ecosystem. And so, if we are to be successful, even if that's 1% of the market, which would be enormous, then that's shared by the millions of people that contributed to it, not just a handful of billionaires. And I think that is a powerful moment to end on.

[00:29:21] But as I always say at the end of every episode, you've listened to me, you've listened to my guests, now it's time for them to go out there, do their own research, read up about it a little bit more and immerse themselves in the ecosystem. For anyone listening, maybe they want to be part of the launch, maybe they want to read the white paper you mentioned, maybe they want to join your community or sign up. Where's the best starting point for everything? Well, I think first, people need to learn more about what's coming. And we've got some amazing demos on our website and in the documentation

[00:29:51] showing the capability of our current AI. So go and watch those videos and it will open your eyes as to what's coming. I mean, that's now, let alone five years from now. So just go and read up on the technology that we're building and what the whole landscape looks like. Secondly, jump on board. At the moment, we're launching next week. So you can go and join the wait list and you can follow us on Twitter.

[00:30:19] That's where our main announcements are. If you want to be a builder and contribute heavily, you can join our Discord server, which is where everyone's kind of participating right now. I think there's about 15,000 people in there that are part of the beta testing community. Yeah, it's going to be a big year for AI and it'll be a big year for Action Model. And I hope that we can get it to millions of people across the world working together to fight back for AI ownership. Well, I'll put everything you mentioned there

[00:30:49] in the links to this podcast episode. Wherever you're listening right now, go to the description. The links will be in there. I'm also going to create a couple of blog posts on my website, techtalksnetwork.com. There'll be all the links there, but I'll also embed one of those videos that you mentioned just so you can go straight there and open that up. And I'd love for anyone listening that's jumping on board. Let me know. What did you find? What did you enjoy? What did you like about what you saw? And what did you disagree with if there was anything? I'd love to hear from you. Let's keep this conversation going. But more than anything,

[00:31:19] just thank you for coming on and shining a light on this. It's something that will impact every person listening everywhere in the world. So just thank you for sharing that with me today. Pleasure's all mine. Thank you, Neil. I think listening to my guest there, he raised some uncomfortable but necessary questions. And he did share an incredibly bold view there of how community-driven AI could rebalance who benefits as machines take on more tasks. So head over to techtalksnetwork.com. Send me an audio message. Send me any kind of message.

[00:31:48] Let me know your thoughts. There ain't no right and wrong here. There's a lot of different opinions around those that will support big tech, those that want to use technology to form communities. What do you think? Let me know. But that's it for today. So I'll be back again tomorrow with another guest and hopefully you'll join me again then. It'll be a completely different topic but I'll be putting you at the heart of the conversation. Speak to you then. Bye for now.