3230: Inside io.net's On-Demand GPU Infrastructure
Tech Talks Daily · April 04, 2025
52:17 · 41.88 MB


What happens when blockchain meets AI infrastructure at scale? In today's episode, I sit down with Gaurav Sharma from io.net to explore how a decentralized GPU network could reshape the future of machine learning, AI development, and compute accessibility.

io.net has grown rapidly over the past year. With more than 325,000 verified GPUs already in its decentralized network, it's offering an alternative to the high costs and limitations of traditional cloud compute.

What caught my attention is the platform's ability to reduce GPU costs by up to 90 percent, giving startups and researchers access to performance that would otherwise be out of reach. In fact, over 73 partners have already integrated io.net's infrastructure, helping drive month-over-month network earnings growth of nearly 60 percent.

But this conversation goes far deeper than computing. Gaurav walks me through the vision of a more transparent, open, and incentive-driven AI development ecosystem. From its collaboration with OKX to power Web3 infrastructure for AI developers to enabling real-world applications like Zerebro AI agents, io.net is building a new paradigm for how infrastructure should work in the era of intelligent systems.

We also explore the convergence of blockchain and AI and why this isn't just a niche Web3 experiment. It's about creating real incentives for data sharing, enabling collaboration across models, and removing bottlenecks in how builders access the tools they need. Gaurav also shares how the company is evolving from a computing network into a full-stack AI development platform, including tools for no-code agent creation.

So, what will the next generation of AI applications look like when they're powered by a global, decentralized network instead of a handful of cloud giants? And how can developers take advantage of this shift today?

[00:00:03] So the AI revolution is in full swing, but there's one major roadblock. Access to affordable GPU power, very often at scale. And with demand for AI compute continuing to skyrocket, traditional cloud providers are struggling to keep up, leaving developers with sky-high costs and limited scalability. Well, enter a company called io.net. They're a decentralized GPU network that's

[00:00:33] flipping the script by aggregating unused GPU capacity from around the world, delivering AI compute at up to 90% lower cost than traditional providers. Yeah, that's got your attention, hasn't it? There have been so many big headlines this year about the ROI of AI projects. Well, there's one for you: 90% lower cost than traditional providers. And my guest today is going to break

[00:00:58] down how their decentralized infrastructure is powering the next wave of AI innovation. With more than 350,000 verified GPUs in their network and partnerships with Web3 giants like OKX, io.net is already beginning to redefine what is possible in AI and blockchain convergence.

[00:01:23] So could decentralized compute power be the key to scaling AI faster, cheaper and more transparently? And how will that impact developers, businesses and the future of AI infrastructure? Well, it's time for me to officially introduce you to today's guest. Thank you for joining me on the podcast today. Can you tell everyone listening a little about who you are and what you do?

[00:01:51] Hi, Neil. First of all, it's great to meet you over the call. I've been looking forward to this discussion for some time now. A quick background about me: I'm Gaurav, currently the CTO at io.net. By background, I'm a builder, an engineer. I started my career at a few startups, mostly in Linux kernel and file system core areas, then moved to the HP R&Ds of the world, where I worked on

[00:02:17] network file systems and helped HP R&D build one of the earliest network file systems in the early 2000s. From there, I moved to Amazon, where I spent a few years working on publishing pipelines for Android books, Android apps and products when they got published. Then I moved to Agoda, a Booking Holdings company, on the Southeast Asia side.

[00:02:43] I was in a leadership position there, first three years on the back end and then three years on the AI and machine learning side. From there, I was looking for new challenges in life, because I had been working in Web2 for a long time and wanted something new. Blockchain and Web3 were becoming big, so I figured the best place to be at that time was Binance. There were a lot of opportunities; they were basically changing the whole financial ecosystem with

[00:03:10] the help of blockchain. So I moved there and was on the leadership side of things at Binance for a couple of years. Then I realized where the industry was headed with AI, and how I, coming from a builder's background, could contribute to the Web3 ecosystem and take the adoption of AI and ML to the next phase. And that's where io.net and the whole io.net journey happened. So yeah,

[00:03:36] been here for the past one-plus year now and have never looked back. Wow. That's an incredible story, from being a builder and an engineer, your journey to big tech giants like Amazon, and then of course Binance. I've got to ask: what was the origin story, what inspired the creation of io.net, and how does a decentralized GPU network differ

[00:04:02] from traditional cloud computing providers, just so we can set the scene for everybody listening? You've probably heard stories where people start working on a project with a very different problem statement, and it eventually evolves into a much bigger problem and a much more important solution for the whole industry. That's kind of the story with io.net too. How it started was that one of the co-founders here, the CEO at the time, was working on his own hedge fund.

[00:04:32] He was a crypto influencer earlier, and he was also a builder; he ran his own hedge fund and used to trade in crypto tokens. Then the bear market came in. He wanted to be in a place where he could still make money for his investors, but because of the bear market there was not much movement in tokens and so on, so he moved towards stocks. And when he moved

[00:04:57] towards stocks, the number and size of his models changed. At that time, he was very price sensitive as a builder, so he was looking for good, strong GPUs at a cheap price. And exactly at that moment, Ethereum was moving to proof of stake, and all the Ethereum miners no longer needed the GPUs the way they had in the past. So he got an idea: why don't I

aggregate all these GPUs from different Ethereum miners and use them for the models he was creating for stocks? That's where it started. It's a very similar story to Slack. Now people use it left, right and center, but it was originally built as a tool in a gaming company so that developers and managers could talk to each other, and it ended up being a product much bigger than the gaming company itself. Something very similar happened here. We were working on tech where we would use

[00:05:52] and aggregate these GPUs from all these miners for our internal models. And we realized the industry was headed in a direction where, if builders need access to GPUs fast and at a cheap price, there was no solution in the market, right? Especially at scale. And that's where we moved away from the hedge fund

[00:06:17] internal product side of things towards a pure DePIN. That's where the idea originated. Now, how are we different from a centralized player? In a centralized setup, let's say you and I are doing business. You have a software company, and let's say you work out of Europe. I'm giving you 1,000 GPUs in Amsterdam and your business is doing well. And now

[00:06:45] suddenly you want to expand your business to Singapore. The way it works with a normal centralized business, you'll come to me and say, Gaurav, I need 1,000 H100s in Singapore, can you please help me out, because our relationship is working well. Now, as a centralized player, in that case I would have to go to Singapore, rent a place, go to Nvidia, buy GPUs, and wait for the shipping to happen. In the meantime, I'd try to create and hire a team, right? When these GPUs get

[00:07:14] delivered, I'll assemble them and then I'll give them to people. So there's a period where people have to wait, even if they have money in their pocket, in the centralized ecosystem. And many times, because of this centralization, there's a limit to how many GPUs even the centralized player can get from Nvidia and the like. So there's always a shortage or a delay in getting the resources

[00:07:39] as a builder. And because these centralized players have to put in so much initial cost to give you the GPUs, buying or renting a place in Singapore, hiring a team, buying the hardware, they basically propagate that cost and more to you as a builder and charge you a premium for providing those GPUs, right? So we saw this problem. We come from this business, the Amazons of the world,

[00:08:06] we saw this business. So we said, why don't we take advantage of the tech? Because there are so many data centers that have GPUs but can't compete with these centralized players, right? And there are so many software companies, which I saw in my past experience, where, let's say, you are writing a model right now.

You wanted 2,000 GPUs. You bought them, you built a model. But now, to iterate and make a better version of that model, you don't need the same amount of compute. Yet you have these 2,000 GPUs sitting there. What do you do with them? There are problems like this, right? And there are a lot of community people who have, like, eight or ten strong GPUs. So we saw this. Why don't we create

a tech where we can test the authenticity of the hardware, and people come to us? We give them an incentive when they give us the supply of GPUs. And then we create a software layer where the consumption of these GPUs is seamless for the builders, right? And because we don't have that very high initial cost, we give it at a fraction of the price, 70 to 80 percent or lower compared to the centralized providers.
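The onboarding flow described here, verify the hardware, incentivize the supplier, then expose the supply through a software layer, can be sketched roughly as below. This is a minimal, hypothetical illustration: the `Offer` dataclass, the field names, the check threshold and the `fulfil` helper are all invented for the example and are not io.net's actual API.

```python
# Hypothetical sketch of a decentralized GPU supply flow: match provider
# inventory against a customer requirement, then run verification checks
# before admitting the hardware to the network. Names and thresholds are
# illustrative assumptions, not io.net's real interface.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    region: str
    gpu_model: str
    count: int
    net_gbps: float          # reported network bandwidth
    passed_checks: bool = False

def matches(offer: Offer, region: str, gpu_model: str, count: int) -> bool:
    """Does this provider's inventory satisfy the customer requirement?"""
    return (offer.region == region
            and offer.gpu_model == gpu_model
            and offer.count >= count)

def verify(offer: Offer, min_net_gbps: float = 10.0) -> bool:
    """Stand-in for the proof-of-work / networking / speed / memory tests."""
    offer.passed_checks = offer.net_gbps >= min_net_gbps
    return offer.passed_checks

def fulfil(offers, region, gpu_model, count):
    """Return the first verified offer that meets the requirement, if any."""
    for offer in offers:
        if matches(offer, region, gpu_model, count) and verify(offer):
            return offer
    return None

offers = [
    Offer("dc-a", "sg", "H100", 500,  net_gbps=25.0),   # too small
    Offer("dc-b", "sg", "H100", 1500, net_gbps=40.0),   # fits
    Offer("dc-c", "eu", "H100", 2000, net_gbps=40.0),   # wrong region
]
best = fulfil(offers, region="sg", gpu_model="H100", count=1000)
```

The key design point mirrored here is that verification runs only on candidates that already match a live request, so unverified inventory can sit behind partner APIs until demand pulls it in.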

So the whole ecosystem can flourish. They don't need to spend millions and millions of dollars on these GPUs; they can save that money and build more stuff with that unlock and saving. That's where this whole concept of the io.net DePIN came in. Absolutely love that. And just to highlight the scale of what we're talking about here,

[00:09:49] before you came on today, I was reading that you've got over 325,000 verified GPUs in your network. For business leaders listening, what does this mean for them? And how does io.net ensure scalability, security and performance for AI and machine learning workloads? Which, I would imagine, is the big problem they're all trying to solve at the moment. So, to be honest, the number of GPUs we have access to give the customers

is actually much bigger than that. 325,000 is what we have verified right now, with the platform ready to use them. But the actual inventory is much bigger, because I've seen these things happen on the Web2 side in a very mature state already. For example, let's say you are Booking.com. There are the Expedias of the world, right? All these companies,

when they sell hotels, they have their BD teams and marketing teams, and each of these companies has an edge in different parts of the world. For example, Booking.com may have very good inventory in Europe, Expedia might have it in, let's say, America, and there would be a different player in Asia, right? But when you go to these platforms, you still see good prices

irrespective of the part of the world you go to, right? The way it works is that all these players create a layer, which in the industry we normally call affiliate APIs and so on, wherein Expedia gives its inventory to Booking Holdings, Booking Holdings gives its inventory to Expedia, and vice versa. And they have their own business model where they charge a premium when they give it to the different players. But all these players work

with each other to give a better, much-enhanced experience to the users. So all these inventories from different companies get aggregated by each of these players and given to the users, right? This aggregation of inventories already happens in the Web2 world, but it hasn't happened on the GPU side. There, Google is still selling its own inventory, Amazon is selling its own inventory. No one has done this aggregation, right? So we're doing this

top-level aggregation, where people come and give us inventory. But we also have access to the APIs of many data centers and big projects, so we can live-fetch whatever inventory is on their platform, make it part of our platform and give it to the customer, even what we haven't verified yet. So 325,000 is there, but if you want, let's say, more than half a million GPUs,

we have access to that also, right? This is something we have ready that people can consume. And it has happened in the recent past where people come and say, I want 1,000 GPUs in Malaysia, for example. What we do is use this web of APIs from the different data centers and providers we have, including ones not verified yet. We scan with our requirement parameters: is there any data center that has

the matching inventory and can satisfy the requirement? We go there, we check that it meets our requirements, we enlist them, and they wait in the queue to become part of the network. We take that supply and run our tests, what we call proof of work: networking tests, speed tests, memory tests. We do all these things and we say, okay, this particular hardware is of high quality, the data center is of high quality because the networking is good,

storage and everything looks excellent, and it meets the customer requirement. Now this hardware can be added to the network and given to the customer, right? So 325,000 is what we have already verified and put in the queue. But any time that inventory runs out, we easily have another half a million or so we can expand to

and bring into the network if needed. So we built this web of APIs and inventory in the background, a model we saw working successfully on the Web2 side that nobody was doing for GPUs. So we did that. We have this product in the background which gives us access to any hardware in any part of the world we need. Wow. That's just incredible. And something else that put you on my radar was your partnership with

[00:14:20] OKX, which, from what I was reading, aims to bring Web3 infrastructure to AI developers. And that seems like something quite magical to me, because we talk about AI, we talk about blockchain, but the thing that interests me is what happens when all these technologies begin to converge. So how do you see blockchain and AI converging in the future? Because, again, this seems to be quite a powerful moment we're approaching here.

[00:14:47] In my opinion, the adoption and the pace of development in AI is lacking a few things, and those particular parameters could very easily, and at a very high level, be complemented by blockchain. Let me go a bit deeper into this statement. When you build an AI model, what exactly are you looking for? You're looking for a good

set of engineers, obviously. And you're looking for good compute at a cheap price. Let's say you have $10 million in funding. If you are using around $6 or $7 million of that money just renting or buying compute and building a model, you're left with only $3-4 million to hire a team or try out different ideas to create your ecosystem and product. So if instead of $6 million

you're only spending $2 million, you save $4 million. That $4 million you can use to hire a smarter and much bigger team, have a bigger runway and iterate on more ideas. The second thing you need is data. Your model is only as good as the quality of data you have, period. Unfortunately, today we are at a place in the industry where access to this

data is only with centralized players. That results in an ecosystem where the big players just keep getting bigger, and when smaller players want to create models, they don't have access. So we need an ecosystem, an incentivization mechanism and a tech where the builders of the future can get access to this data the way the centralized players have. That's

one problem, which we also have. So let's start with these two. What we are doing is creating... so, blockchain can help with transparency. That's its fundamental value. Say there is a model which performs at, let's say, a score of 90, and there is another model scoring 80, and you ask a builder which one they'll use.

I can guarantee you that the majority of the time, people will say 80. And you'll be surprised: why? The other one is at 90; why pick the lower-performing model? The reason is that if it's open source, the builder can look inside and see what it's doing, where it's lacking, where they can contribute and make the model better, not just for themselves but for the whole ecosystem, so we all contribute and move at a much higher speed of innovation as an ecosystem. That's how

builders think, right? And that's exactly the quality and transparency blockchain brings. Everything is transparent there. As a builder, I know where my data is coming from. Is it biased? As a user, I have seen many times myself that I'm interacting with these models, the ChatGPTs of the world, and six months back they were directing me to a specific answer. When I ask the same

[00:17:54] question today, I'm getting a very different answer. Politically, ethically, a very different thing, right? And then you go into this paranoia: okay, is something happening? Is it the same model I was working with six months back? Are some particular biases being given to us as humanity, and so on? All this paranoia comes in, and some parts of the world are more sensitive to this. Now, what blockchain brings

is the confidence that the model you were using six months back is the same model today. Has something changed? Has something new been deployed? Has the data being fed in changed and introduced some biases? All these things happen. So if you have transparency in the ecosystem, it brings you confidence, right? And what confidence does is bring adoption.

Adoption brings momentum to the network, and when momentum comes, the flywheel comes in, right? And when a flywheel starts, the innovation keeps going on and on. That's what's lacking today, to be honest, on the Web2 side of things. Certain centralized players have a monopoly on data, a monopoly on compute and a monopoly on the knowledge set, and they're just getting bigger. And we all know that when companies become

big, there's a lot of inertia that comes along with it, so innovation also stops on that side. So neither can new players come in, because they don't have access to compute and data, nor is the whole ecosystem growing, because these centralized players have become so slow. And it isn't fair to builders either. I think those values of transparency,

easy access to compute and data, and an incentivization mechanism, where in the Web2 world people have to pay so much out of pocket for everything but get it extremely cheap on the Web3 side, will create an economy of tokenization in Web3 that brings a lot of Web2 builders over. And that will be

the actual adoption the whole Web3 ecosystem has been looking for. I'll give you an example. Let's say today you are an ML model builder and you go to, let's say, a Web2 platform. What do you do? At time T-zero, when you start, you pay money for compute. Then you start building. Now you need a tool for deploying the model: you pay for that. If you need a set of data, you pay for that.

And let's say you do a good job at the creation of the model. Then what? You have to find customers for it, and you're paying money all throughout the process, right? How have you benefited? Now imagine the Web3 side of things, where you have a platform. We built this platform, IO Intelligence, and we have already launched it, and now we are growing this ecosystem with more components and its own economy. Imagine a platform where you come in

and you get GPUs for free for a certain amount of time. You can come as a builder and start developing without paying any cost, right? Right now, all the data is with centralized players, because there's no incentive for them to give it to you. But now we have a token economy where, when data providers give us data, they get incentivized. The more the data is used, the more incentivization

they get from our side. Now they have a reason to share data with us, so the builder gets this data too, right? Then they create a model, and let's say they build a good one. If we as a platform see the model is good and being used by a lot of people, we give incentivization to the builder as well for doing great work, and we help their model get adopted in more places, giving it surfaces where more people can see it, and so on. Imagine how many Web2 people now

will start going to Web3 and using this platform, because they have this free compute and access to data they otherwise wouldn't, and they don't have to pay out of pocket at the beginning until a certain time. And even when they pay, it's 80 percent lower, right? Plus, when they do good work, they don't need to run around finding companies and users; the platform helps them, and they get incentivized immediately for good work. There will be a

huge adoption and migration of these Web2 builders to Web3. Wow, there are some big stats in there. And I think it's so important to talk about that, because this year one of the big topics has been the pressure of finding ROI on any AI project, or in fact any tech project. And before you came on the podcast, I was reading again that io.net has been credited with reducing GPU costs by up to 90%. It's a phenomenal figure, isn't it? Is there anything else that makes you different

from those Web2 centralized providers? Because I think those figures kind of speak for themselves, but is there anything you'd like to add? Well, I think we briefly discussed it in one of the earlier questions. The way our economy of scale and our system works is very different. We have built this tech where people are sitting in the queue to get part of the block rewards and the incentivization. When their hardware gets used, we give them part of the money the customer pays. So they get rewards from

both sides: they get incentivization in tokens plus part of the money coming from the customer, right? So there are good enough rewards for suppliers who come to the ecosystem. And for us as a business, we don't need to put in and lock up money right from time T0. We're getting this inventory from suppliers who aren't able to sell it elsewhere, so we help them find customers. And for us to scale up, it's as easy as adding a

couple of boxes and giving customers access to this supply, right? So for us to add and handle, let's say, 10,000 more GPUs, the cost is like one box added to the tech stack. We've built this tech already. On the other side, for a centralized player to provide those 10,000 GPUs, they'd have to spend 50-plus million dollars on their side to give people access to the compute. And when they spend that $50 million from their pocket, they basically propagate

that cost to the user and charge a premium for it. One more thing people don't realize is that GPUs, and the maturity of GPUs as an industry, keep advancing. There's a new model coming every six months, and the old one is not as popular as it was six months back, similar to your MacBooks, right? You have these M1s, M2s, M3s, M4s. For the majority of

users, will an M2 be good enough for their use case? Absolutely yes. But the moment the M3 or M4 launches, people want the new hardware, even if it's only a bit higher performance. That's exactly how it happens on the software side of things. When they're creating a model, can the old 4090 meet the requirement? Absolutely yes. But if there's flashy new hardware that's 20, 30 or 40 percent better, they'll just go for it, because from the software company's side, they're spending money anyway, and if you're spending money,

why go for older hardware? Why not the new one? So we understood this case. We understood that if we go into the business of buying inventory, that inventory goes old very fast, and there's only so much you can scale. If you have $100 million and you buy $50 million of GPUs, suddenly in six months it goes old and there's demand for the new one. There's only so much you can scale as an ecosystem, right? And that's also the reason these

centralized players charge a premium: they know they have to get the maximum out of these GPUs ASAP, otherwise they may not be able to earn anything a year from now. Get all the money spent buying this hardware back from the customer ASAP. Now, that's not where we are. We are creating a marketplace where we get the best supply from suppliers and give it to customers. And this also enables us to give the compute at a

very, very cheap price. Plus, we don't have to buy or rent a place, or have a huge team managing these big data centers and so on. There'll be a data center in a very efficient place, where electricity is cheap, labor is cheap, and they do the best of things on their side as suppliers; we take the supply from them and give it to the customers. We've built the tech, and that's one of the reasons we can offer such a cheap price.
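The pricing argument here can be put into rough, back-of-the-envelope arithmetic: a centralized provider must recover its own hardware spend (at partial utilization, before the hardware goes stale) plus a premium, while a marketplace only adds a thin fee on top of an idle-hardware supplier's asking price. A minimal sketch, where every number is an invented assumption, not real io.net or cloud pricing:

```python
# Illustrative cost model only; all figures are made-up assumptions.

def centralized_hourly_rate(capex_per_gpu, useful_life_hours,
                            utilization, margin):
    """Provider must recover its own capex (plus a premium) from renters,
    and only the utilized hours earn revenue."""
    cost_per_hour = capex_per_gpu / (useful_life_hours * utilization)
    return cost_per_hour * (1 + margin)

def marketplace_hourly_rate(supplier_ask, platform_fee):
    """Marketplace has no capex to recover; it adds a thin fee on top of
    what an idle-hardware supplier is willing to accept."""
    return supplier_ask * (1 + platform_fee)

central = centralized_hourly_rate(
    capex_per_gpu=30_000,      # assumed GPU + data-center cost
    useful_life_hours=17_500,  # roughly two years before the next generation
    utilization=0.5,           # only half the hours actually rented out
    margin=0.5,                # premium to fund teams, buildings, risk
)
market = marketplace_hourly_rate(supplier_ask=0.90, platform_fee=0.15)
saving = 1 - market / central  # fraction saved versus the centralized rate
```

With these particular assumptions the marketplace rate comes out roughly 80 percent below the centralized one, in the same ballpark as the 70 to 90 percent savings discussed in the episode. Different inputs move the number, but the structure of the gap, capex recovery plus premium versus a thin fee over idle supply, is the point.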

[00:26:36] And we're recording this conversation in Q1 of 2025, so this figure may be out of date by the time this goes live, but I've read that you've partnered with something like 73 companies this year alone. I've got to ask: what kinds of AI projects are benefiting the most from this decentralized compute power? Just to bring to life the kind of value you're helping businesses deliver right now in this field.

To be honest, it's with Web2 companies and Web3 companies alike. In fact, I was so proud recently: I went to an event in India, and with this IO Intelligence platform we've created, there were a couple of people at the event, college kids, who had built their college projects on top of IO Intelligence. So our users run right from college students to Web2 startups to Web3 startups.

And in the past four months, marketing will kill me if I tell it here, so I'll leave something to them, but we have a couple of Web2 players with more than 200 million users who are also taking compute from us. So there are big Web2 companies coming in too, and a few startups which are now becoming big in Web2. For example, we are working with One Data AI; they take compute from us.

On the Web3 side, Zerebro was working with us, and Sentient works with us. So all these popular AI agents, and the future agents of the ecosystem, are working closely with us, plus these good Web2 startups and big giants now consuming hardware from our side, right? And I'll try to paint a picture so people can understand how this whole

business works, right? With God's grace, we're doing great. And we're not a very old company; we're basically a one to one-and-a-half-year-old company. In the first seven to eight months after we started, we were focused on creating the tech where suppliers can come and give us GPUs, and on creating a stable orchestration layer that makes it easy for builders to consume these GPUs at mass scale. That's what we were trying to build at the beginning, right?

Our main focus was on the tech piece. Pivoting to the business side, bringing in revenue and all these things, we only started about four months back, once we were comfortable with our tech. And from there, we moved from less than $1 million ARR to now $35-plus million ARR within a span of four months. And this industry works a bit differently compared to others,

in that normally, when customers come to us, because we're still a growing startup, I understand their sensitivity: initially they'll come and say, okay, give me compute for $200,000, we'll try it out for a couple of months. And once they see, oh, the platform works well, it scales well, the uptime is good, the supply is good, then they come with big orders. That $200,000 becomes $2 million. We saw exactly this with the Web3

customers: they initially came and took a few GPUs for a couple of months, and when they had confidence, they came back with big orders. Now we are seeing the same thing on the Web2 side: all these big Web2 players initially took inventory for two or three months, their quarterly goals were met, and now, for their next quarter's goals, they're coming to us and ordering bigger sets of GPUs from our side.

[00:30:13] So I think that's where we see it. And that's why our revenue on a yearly basis is growing exponentially, because a lot of the customers we're getting now come just from word of mouth. Like those AI agent teams, honestly, we didn't even approach them; they approached us. Because you're a builder, right? All the builders sit together at some event sometimes, or they go out to dinners, and they're talking about their tech, and one suddenly says, oh, you know what, I'm working

[00:30:38] on this cool tech of io.net where we save 70 to 80 percent of our cost on GPUs, and that's enabling us to do X, Y, and Z. The other guy will say, oh, really? How long have you been using it? They'll say a couple of months. And just like that, they'll also come. And we have customers like this. And then these Web3 customers, when they talk to Web2, then they come in. So this flywheel and the whole business side on the DePIN end, we feel internally in the company, is a solved problem for us. That is organically

[00:31:06] growing. The flywheel has started, the customers are coming on their own, and we have a BD team which is functioning well. That is a solved problem. Now what we are focusing on as a team is to create software layers and products around this, so that we offer such a high value set that people don't just come to us to get compute at a cheap price; they're coming for the ecosystem. And on the other side, once somebody comes onto the platform, they never have to go elsewhere: when their

[00:31:36] requirements are increasing, when they need the software, when they need a data layer, right? When they need decentralized storage, when they need decentralized networking, when they need a platform where you can deploy a model with one click and people can start inferencing, and it auto-scales, right? All these use cases are something which will enable us not only to bring more builders onto our

[00:32:00] platform from both Web2 and Web3, but will also enable us to keep the people who come initially for the best price we're providing; they'll stick with us for a longer period of time. And that's how we have seen businesses being built. So now we are focusing on the products which deliver these value-added services. That's why we're creating this IO Intelligence as an ecosystem with its own economy, so that people can come, consume these popular models, create models, scale, and so forth.
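To make the "come and consume popular models" idea concrete, here is a minimal sketch of calling a hosted open-source model. It is an illustrative assumption, not io.net's documented API: the endpoint URL, model name, and payload shape are invented for the example, following the OpenAI-style chat-completion convention that many hosted-model platforms expose. The transport is injected so the sketch runs without a network.

```python
import json
from typing import Callable

# Hypothetical endpoint; a real platform would publish its own URL.
API_URL = "https://api.example-intelligence.io/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(transport: Callable[[str, str], str], model: str, prompt: str) -> str:
    """Send the request via an injected transport (HTTP in production,
    a stub here) and return the assistant's reply text."""
    body = json.dumps(build_chat_request(model, prompt))
    raw = transport(API_URL, body)
    return json.loads(raw)["choices"][0]["message"]["content"]

# Stub transport standing in for a real HTTP POST.
def fake_transport(url: str, body: str) -> str:
    return json.dumps(
        {"choices": [{"message": {"content": "hello from the model"}}]}
    )

reply = chat(fake_transport, "llama-3-70b-instruct", "Say hello")
```

In production the transport would be a real HTTP POST carrying an API key; injecting it keeps the request-building logic testable offline.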

[00:32:28] Right. And it's proving to be a good direction as a company, because these value-added service products which we were creating, we initially thought we would launch for the user community, because they have really supported us. We started with that, and these products suddenly got so much traction, within a month, that now we have a couple of Web2 customers who are saying that this IO Intelligence product,

[00:32:56] we want a separate instance of it just for ourselves, and we're ready to pay. And from there, when I did the research, I realized that GPUs are a $1 trillion-plus business in the coming years, but honestly, the software layers, the SaaS products on top of these GPUs, will be 50 to 100 times bigger than the GPUs themselves.

[00:33:21] And that industry is just starting. So whoever the early builders are on that side will be the giants of the future. And people right now are just stuck on GPUs and compute. They're not realizing that this next phase is where the billion-dollar companies will be built, with a very small team size.

[00:33:46] And for many business leaders, they'll be judging you on how you compare with some of those big traditional providers. So how would you say io.net compares to those major platforms such as AWS, Google Cloud, etc., in terms of things like reliability, accessibility, and ease of use? What would you say to those people that are going to be asking those kinds of questions? And they're probably questions you get asked a lot, right?

[00:34:11] I would really encourage people to go to the platform and try to consume a few GPUs. And we are very confident, because we have done this open exercise multiple times at multiple events, on both the Web2 and Web3 sides. And people don't even realize that it's a Web3 platform. We've built it in a way that even a Web2 person can come and start consuming, right?

[00:34:37] And they can do it in just a few clicks. So it's as simple, if not simpler, to consume the inventory on our platform compared to AWS. Because with many of these businesses, when you want to take the inventory, you literally have to do KYC at times. And we all know KYC can be really cumbersome to complete. And even when you submit your documents in the KYC process, you sometimes have to wait days and weeks for it to complete.

[00:35:04] And then you get access to the compute. Then there's a centralized company where you basically have to wait, you have to give them your requirements, and then they'll say, oh, you have to consume it in this way. So it takes weeks to consume the inventory on a lot of these Web2 players. If you use our platform to hire even 1,000 or 10,000 GPUs, it's just a few clicks away.
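As a sketch of what "a few clicks away" might translate to programmatically, here is a minimal order object. Every field name is an assumption made for illustration; the conversation does not describe io.net's actual API, only that you choose hardware, a count, a duration, and one of several payment methods, with no KYC step in between.

```python
from dataclasses import dataclass, asdict

@dataclass
class ClusterOrder:
    gpu_type: str          # e.g. "H100" (illustrative)
    gpu_count: int         # from one GPU to tens of thousands
    duration_hours: int
    payment_method: str    # "crypto", "stablecoin", or "bank", per the discussion

def validate_order(order: ClusterOrder) -> dict:
    """Check the order locally, then return the payload a client would
    submit to the platform in a single step; no KYC in this flow."""
    if order.gpu_count < 1:
        raise ValueError("need at least one GPU")
    if order.payment_method not in {"crypto", "stablecoin", "bank"}:
        raise ValueError("unsupported payment method")
    return asdict(order)

payload = validate_order(ClusterOrder("H100", 1000, 72, "stablecoin"))
```

The point is the shape of the flow: one self-service request instead of weeks of paperwork and back-and-forth with a sales intermediary.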

[00:35:27] No KYC, no waiting for days and weeks doing KYC and KYB. No need to talk to somebody, a middleman, to give your requirements to. Everything is there on the platform. You can pay with crypto, you can pay with stablecoins, you can pay through the bank. There are multiple ways in which you can pay. So we literally built a platform which is seamless

[00:35:50] to use. And you don't even realize whether it's a Web2 or a Web3 platform. It's agnostic to the builders on either side. And what challenges do you see in the adoption of decentralized computing? Anything that you'd like to raise today? And also, how are you trying to make those challenges a little bit easier to overcome at io.net?

[00:36:13] I think it's just the propagation of knowledge, I'd say, right? Because when you talk about decentralized compute, it's basically a spectrum, right? What do I mean by that? There will be customers who come from the Web2 side who will say, I need compute, but I need my

[00:36:36] 1,000 GPUs or 5,000 GPUs in a single data center, because for my model to perform and train better, I need all the data sitting locally in one place. That's what my requirement is, right? And then you have the other extreme, where there's a pure decentralized app, and they'll say, if I need 500 GPUs, I need those GPUs from different suppliers and different data centers, so that if one particular

[00:37:04] supplier goes down or does something X, Y, and Z, or one data center goes down or two data centers go down, my application is not impacted. So there's this spectrum where one user needs supply from different sources at the same time, and another user needs everything from a single place. So we have built this platform where you can go to either of the extremes. We have these big suppliers, so if you want all your compute to be consumed from a single data center, it's

[00:37:32] possible through the product. If you need the pure decentralized use case, you can do that also, right? Now, this knowledge is not there in the industry, because many times what I see is that Web2 people, when they come in, just imagine that with DePIN, if I take 500 GPUs, those 500 GPUs will be sitting in one country here, one country there, another country over there, and when I train my model, it won't be performant, right? That's one thing which they feel, which is not true. And when we educate them,

[00:37:59] okay, this is a platform where you can put your scale anywhere on the spectrum and consume in this way. So that's one thing the industry is not aware of. On the second side also, I think we as a company have built the tech, and the people who consume it give good feedback. But where we can be better as a company is that we have to open-source and publish more papers on the work we have done. For example, there is a very popular framework known as Ray, which is now used by OpenAI

[00:38:27] and a lot of big companies around the world when they consume thousands and thousands of GPUs, right? It's one of the new techs in the industry, right? So we took that framework, we forked it, and we made it work in a decentralized manner. So you can consume these thousands of GPUs, and let's say 50 of them go bad, or one data center goes bad, or three data centers go bad; we automatically

[00:38:53] figure it out and then replace this inventory from the other places in a stable way. So you as a customer, a consumer, will not even realize that, okay, 30 of the boxes have gone down; we'll do it seamlessly for you in the background. So we built this tech, but we haven't done a good job till now of giving people visibility into what we have done, of publishing papers around it a bit more. I think that's where we as a company can also do a better job. So I think this education piece

[00:39:21] is lacking from our side as an industry. When I speak to a lot of my friends on the Web2 side, they are literally surprised that we can provide compute and this software at such a cheap price while they're paying a huge amount, right? So we haven't gone out of our way, from Web3 to Web2, to propagate the knowledge that there is this whole Web3 ecosystem, right? And it's much

[00:39:47] bigger than just tokens. There's real tech being built, a real ecosystem being built, which you can consume and take benefit of, right? And this education piece, I think we should be getting better at as a whole Web3 industry, because many a time we just keep talking about tokens, the price going up and down. We talk about all this tech, but in a much smaller group where only these Web3 players are

[00:40:10] sitting. But we have to go out of our comfort zone, go to the Web2 side, and show what we have built in the past few years and how they can benefit. I think if we can do this piece well, then adoption in our industry will be tremendous. And as AI and machine learning demand continues to grow, and indeed continues to mature, and we're starting to see that now too, what's next for io.net? Where do you see

[00:40:37] the industry heading over the next few years? Anything you can share around the future and where you're heading? So we are investing in platforms. We have realized that our main value set is building platforms. A lot of people speculate that they'll build this particular agent, but at this juncture, we have realized that we have engineers from the Googles, the Amazons, the VMwares of the world, right? And what all these guys have done over the past 10,

[00:41:06] 15 years is build platforms. That's what we take pride in. We have experience in it, and we'll do a great job compared to many other players in the market. So we are focusing on that right now. When we were thinking of this whole vision of the project, when we were starting, we initially started with the DePIN side. But before we even started the company, actually, we realized the whole ecosystem needs a player which is a decentralized

[00:41:32] cloud, meaning a player which can provide compute, storage, networking, everything, right? For decentralized apps. That's what we started with. And now the DePIN side is a solved problem. So every quarter, we take a particular problem in-house and start to tackle it. And we do it in a way where the customers are asking for it, so that we can make a real business out of it

[00:41:58] in the very next quarter; we actually start to get revenue from it, right? There's a whole sustainable economy and ecosystem within the company also, where a particular team which is building a product is funded just from the revenue coming from the product they have built, right? So that we can scale horizontally as a team internally. That's the way we build. So in the short term, we are focusing on IO Intelligence, and what that platform is doing is picking up

[00:42:26] the best of the open-source models of the world. We are deploying them on our GPUs and giving them away for free for people to consume until a certain time, and after that, we will charge. Now, on top of it, we are creating decentralized storage, right? We are creating a layer where people can create models and deploy them with one click. You can bring your own model and deploy it, and people can start inferencing it.
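The auto-scaling that this serving layer is described as offering is not specified in the conversation, so here is a generic target-throughput policy of the kind such a layer might use, purely as a sketch: scale the replica count to the request rate divided by each replica's capacity, clamped to sane bounds. This is an assumption, not io.net's actual scheduler.

```python
import math

def replicas_needed(requests_per_sec: float,
                    per_replica_rps: float,
                    min_replicas: int = 1,
                    max_replicas: int = 100) -> int:
    """Generic autoscaling rule: replicas = ceil(load / per-replica
    capacity), kept within [min_replicas, max_replicas]."""
    if per_replica_rps <= 0:
        raise ValueError("per-replica capacity must be positive")
    raw = math.ceil(requests_per_sec / per_replica_rps)
    return max(min_replicas, min(max_replicas, raw))

# 250 req/s against replicas that each handle 40 req/s -> 7 replicas.
current = replicas_needed(250, 40)
```

Real systems usually add hysteresis and cooldowns on top of a rule like this so replica counts don't flap, but the core target calculation is this simple.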

[00:42:54] They can auto-scale it. So this whole builder ecosystem is the one we're building in the next three, four months, and it's already getting a lot of traction inside, right? Then, along with this, we are also realizing that AI agents have become very popular recently, right? So we're creating a product wherein we

[00:43:19] are ourselves creating some basic agents, and then on top of this, we are creating a tool where even if you are a non-tech person, you can just drag and drop these agents and create your own complicated agent. Or you can bring some agent from outside and, along with these combinations, just drag and drop and build your own product. So this is another product which we are building, because there are a lot of people who don't know how to code, but they have the product sense.
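The drag-and-drop builder described here is, at its core, function composition: each block is an agent that transforms some input, and wiring blocks together on a canvas chains them. A toy sketch of that idea, where the agents are simple stand-ins invented for the example, not io.net's actual agent catalog:

```python
from typing import Callable, List

# An agent here is just a text-to-text transformation.
Agent = Callable[[str], str]

def compose(agents: List[Agent]) -> Agent:
    """Chain agents so each one's output feeds the next, the way a
    visual builder would wire blocks together."""
    def pipeline(text: str) -> str:
        for agent in agents:
            text = agent(text)
        return text
    return pipeline

# Two toy agents: one "summarizes" (here: truncates), one uppercases.
summarize: Agent = lambda t: t[:20]
shout: Agent = lambda t: t.upper()

workflow = compose([summarize, shout])
result = workflow("decentralized compute for everyone")
```

A visual builder would generate roughly this structure from the canvas, so the user with product sense never has to write the chaining code themselves.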

[00:43:45] They know where the industry is headed, because they read so much, they consume so much, but they just haven't been exposed to coding per se, right? And they're not able to build cool stuff. So we're creating a product for those guys also. I think that's another product we're building. And with respect to your question of where the industry is headed, I think the industry is headed towards more and more AI. Every problem you can think of, AI will propagate into that problem

[00:44:12] statement in the next six months to a year. The way I normally think about it in my head is that the whole AI ecosystem will always follow how humans have evolved. So for instance, today, when I open a startup, there are only two or three people who start it, and they're generalists, right? They do tech, they do product, and they do business also. That's exactly what has happened on the AI side of things. You have these ChatGPTs of the world,

[00:44:40] which are generic models. They are decent at doing many things, but they are not great at doing one specific thing. They're generic, right? That's what the models are; they're generic right now, just like how humans start a company when they open a startup. When the startup is successful, then what do you do? You basically hire people who are good at a particular problem statement. You hire a CTO, you hire a chief marketing officer, you hire a chief product officer, who are great at

[00:45:07] their vertical problem statement. That's where AI agents are headed. They were generic models; now you'll see more and more models which are good at one particular problem statement. There will be an AI agent which is great at coding. There will be one AI agent great at creating images, one great at reading legal problems and suggesting a solution, and so on and so forth. Once you do this, how does humanity evolve? We basically work together

[00:45:35] and then we produce, and we create synergies. When one company succeeds, what do they do? They partner with other companies, they create synergies, and then they build a product at a higher level. That's exactly what AI agents will do. You will have these specific agents which are masters of their own problem statement. Now all these agents will work together, right? And then they create their own economy. So I think if you take a step back and think about how humans evolved, that's exactly

[00:46:02] how AI agents will evolve. And whatever you do today as an individual, you will see an AI agent doing that job for you. And then you'll have all these agents working for you, along with other people's AI agents. So I think that's where the industry is headed. Exciting times ahead. And I think there is a real pressure on us all to be in a state of continuous learning right now. And as someone that is leading the way with scalable and cost-effective

[00:46:33] computing power for machine learning and artificial intelligence applications, and so much more, I've got to ask: how do you keep up to speed with everything? Where or how do you self-educate? So I've been lucky, to be honest, and blessed by God, that I'm working in a field which I was passionate about. So when I work for 12 to 15 hours in a day, for me personally, I'm working

[00:46:56] in a field I'm really passionate about, so I don't feel it as work. I really enjoy it when somebody launches a new paper and I get to go through it. And when they say, okay, we are launching this particular paper two weeks from now, I look forward to it. So I'm a very avid reader. I read a lot of papers. I go on Medium quite a lot; I read and consume from there. There are a lot of

[00:47:20] publications which I follow. So I've been lucky to be working in a field which, right from childhood, I was always passionate about. And luckily we are in a place where we have people with similar thinking and values in the company also. I think that's what has kept us innovating at a very high pace. Because we have these readers who read the research where the innovations are happening, and then we have these same people who have been at these big companies and know how

[00:47:49] to build businesses. I think that's what the ecosystem was lacking. In our ecosystem, there were a lot of people who knew tech, there were a lot of people who knew crypto itself, and there were a lot of people who knew how to build businesses. But having a team where you know how to build platforms at scale, know how to use crypto, tokenomics, and blockchain to build products, and then build it into an actual business? There are not many projects which actually have all

[00:48:18] three of these components. And we were lucky that we have people from the Binances of the world and other crypto projects, the VMwares, Amazons, and Googles, where people have both business and platform experience. And these guys, I'll tell you, you're asking how I learn, right? I read a lot, but I have a couple of people who are double PhDs in the company. There's a person who is ex-Amazon and a PhD, right? And how much can you compete with

[00:48:46] them? You can read whatever you want; they're reading along with you, right? So we have this culture internally in the company of challenging each other in terms of ideas also. And when the challenge happens, they bring in a lot of new ideas from their experience and what they have read. I do the same, the others do the same. And collectively, we keep discussing and learning where the industry is headed. And then we have some smart people who have great business acumen, and they wrap those ideas in a way that can actually work, make them work in terms of business also,

[00:49:15] and we can actually make money out of it. Ah, I absolutely love that. I think that's a perfect moment to end on today. But before I do let you go, for anyone listening who wants to dig a little bit deeper on all things io.net, and indeed anything that we talked about today in greater detail, and they've also got a passionate community out there, where would you like to point everyone listening? So we have an io.net X channel. We share our updates there regularly. We have a LinkedIn page also,

[00:49:44] and we have a great community on Discord as well. I think you can follow us on all three, and we try to propagate everything which we are building, what we are doing, all the partnerships we are doing, and things which will be coming very soon. On all these three platforms, we give regular updates. And our website itself has great documentation, and we're trying to build it in a way that even a non-tech person can consume. So we have put up a lot of videos for the products,

[00:50:11] showing how people can consume them. So even if you're a non-tech person, you can go there, start to consume, and learn how you can benefit from the platform. And that really came across in our conversation today. I think that demystifying this complex technology, putting it in a language that everybody can understand, whether they're a C-suite member that's not that techie or someone that works in that area but still has a lot to learn around it. For me, you gave so many actionable insights today

[00:50:38] around the convergence of GPUs, blockchain and AI, and so much more, and really brought the topic to life. So just thank you for sharing your story and incredible insights today. Really appreciate your time. Thank you, sir. It was a pleasure talking to you. As AI models grow in complexity, the demand for compute power isn't slowing down. But the way we access it, that is what's changing fast.

[00:51:04] And io.net is proving that decentralized GPU networks, they're not just a concept; they're already becoming a game changer for AI development, making powerful compute accessible at a fraction of the cost. So what does all this mean for the future? Will decentralized infrastructure outpace traditional

[00:51:26] cloud giants? Or will we see more of a hybrid model emerge? And how will this shift empower the next generation of AI startups and the next generation of billion-dollar companies? Let's keep this conversation going. If this episode has sparked any new ideas or anything that you'd like to share, please email me at techblogwriter@outlook.com, or find me on LinkedIn, X, and Instagram, just @NeilCHughes.

[00:51:55] So much food for thought in today's episode; it certainly got me thinking about a lot of things. Let me know what you're thinking, and I'll return again tomorrow with another guest. But thank you for listening, as always, and hopefully I will speak to you all again tomorrow. Bye for now.