How Scale Computing Is Powering The Next Wave Of Edge Infrastructure
Tech Talks Daily · March 07, 2026
21:38 · 19.8 MB

How should businesses rethink infrastructure when applications, data, and users are increasingly spread across thousands of locations?

In this episode of Tech Talks Daily, I sit down with Mark Cree, President and Chief Operating Officer at Scale Computing, to talk about why the future of enterprise infrastructure is moving closer to where data is actually created.

This conversation was recorded following the 66th edition of The IT Press Tour, where some of the most interesting conversations in enterprise infrastructure centered on what happens when businesses move away from oversized, monolithic stacks and toward practical, distributed solutions. From retail stores and airports to remote industrial sites, the edge is becoming a critical part of modern IT strategy.

Mark shares how Scale Computing has spent years building an edge-first platform designed to run critical workloads reliably across everything from a single location to tens of thousands of distributed sites.

Mark also reflects on his own journey through the technology industry, which includes founding companies acquired by Cisco and NetApp, working as a venture capitalist, and leading major storage initiatives at AWS. That experience gives him a unique perspective on how enterprise infrastructure has evolved, particularly as organizations reconsider the balance between centralized cloud environments and local processing closer to users and devices.

During our conversation, we explored why edge computing is becoming increasingly important for AI workloads, especially when large volumes of data are generated outside traditional data centers. Mark explains how processing information locally can reduce costs, improve performance, and enable entirely new use cases, from monitoring customer behavior in retail environments to running intelligent systems in remote locations.

We also discuss the ongoing reassessment underway across enterprise IT teams following major industry shifts, including changes in the virtualization market and growing concerns about vendor lock-in. Mark explains how Scale Computing is positioning itself as a flexible alternative by combining virtualization, containerization, networking, and security into a platform designed specifically for distributed environments.

Looking ahead, Mark shares his perspective on where enterprise infrastructure is heading over the next five years. As smaller AI models become more capable and organizations seek greater control over their data and systems, the role of edge platforms may become even more important.

Instead of relying solely on massive centralized environments, companies may find new value in distributing intelligence closer to where real-world activity occurs.

So, as organizations rethink how they deploy applications, manage data, and control infrastructure, is the next big shift in enterprise IT happening right at the edge? And how prepared is your organization for that change?

[00:00:04] Welcome back to the Tech Talks Daily Podcast. And today we're going to talk about the kind of infrastructure that you or your enterprise rely on every day without even realising it. I'm talking about the systems running at the edge in places like airports, convenience stores, fast food restaurants and retail sites. All places where downtime is simply not an option.

[00:00:30] And my guest today is Mark Cree. He's the President and COO of Scale Computing, and he's almost tech royalty. He's built and led businesses across storage, networking, venture capital, even AWS. And he has a sharp point of view on why so many workloads are now shifting back from the cloud, and how edge AI changes the math entirely.

[00:00:54] We'll also get a better understanding of what companies are looking for as they rethink virtualisation after the VMware disruption. So if you're responsible for distributed IT, or you're just tired of complexity, surprise hidden costs, and vendor lock-in, you should get a lot of value from this conversation. So enough from me. Let me officially introduce you to Mark right now.

[00:01:21] So thank you for joining me on the podcast today. Can you tell everyone listening a little about who you are and what you do? Hi, my name is Mark Cree. I'm the President and COO of Scale Computing, and what Scale Computing offers is the most comprehensive set of edge computing solutions out there in the world. We support most of the market. You can't go anywhere without seeing us.

[00:01:51] We're in airports. We're at gas stations. We're in restaurants. My background: I started out as an engineer, went into the entrepreneurial space, and sold a company to Cisco that built one of the very first products that put storage over the Internet. Cisco acquired the company, so I was the first VP and GM of Cisco's storage networking group. I went on from there to be a venture capitalist for a couple of venture capital firms.

[00:02:19] Then I started another company doing in-network storage caching that NetApp acquired. Then I spent a couple of years consulting and started another company called InfiniteIO, which was doing metadata acceleration for AI workloads and moving data to the cloud. That company was on a great trajectory, and then COVID hit. So I ended up transitioning to AWS, which was one of our partners.

[00:02:49] And I was most recently the GM of the Storage Gateway at AWS before I came to Scale. Wow. Incredible journey you've been on there. We should have met in person at the IT Press Tour, but unfortunately I couldn't make it. And one of the things that I learned from Philippe is that you've positioned Scale Computing as an edge-first platform company at a time when many enterprises are actually rethinking centralised infrastructure.

[00:03:15] So what did you see changing in the market that made this a strategic focus, rather than just another deployment model? It seems like you were ahead of the curve here. Well, I've only been at the company for four months, so they've been at this for a decade, basically. I think it's no secret we have all the Taco Bell locations in the US and all the Taco Bell locations in the UK, and several other brands like that.

[00:03:42] But I took a brief nine-month period off between AWS and coming here, became a full-time student, took a graduate class in AI, and built models. And I realised that part of the challenge we're going to have going forward is that a lot of data resides at the edge that needs to be pre-processed at some level before it gets sent back to some sort of large corporate model. Otherwise, you're sending these massive files back and forth,

[00:04:10] and the data has to go through multiple iterations of processing. So that became one of the reasons I really like this space. And we have customers actually doing that. We have one that provides, I can't give you the name, I think, but they provide agricultural supplies. They're all over the US; I think they have some in the UK. And they're running AI to monitor their stores. And if a customer lingers around a high-priced item,

[00:04:39] they dispatch a salesperson. We've got some installations where we're basically recording voice, looking for keywords, and processing that with partners. And I think just in general, the world's gotten more and more distributed, right? And we've seen this with hyperscalers and cloud computing. I mean, I was an AWS executive. A lot of those workloads are going back on-premises just because of the cost

[00:05:08] and the fact that processing it locally tends to be a lot less expensive than sending it to some big hyperscaler or large data center to be processed. I completely get that, especially if we look back to Broadcom's acquisition of VMware, which triggered a wave of reassessment across IT teams, especially for the smaller to medium-sized enterprises that were suddenly feeling squeezed out. So from your perspective and the conversations you're having,

[00:05:35] what are organizations really looking for as they evaluate alternatives? And where does Scale Computing fit into that big migration story that we're seeing? No, that's a great question, Neil. So if we look at the landscape out there for virtualization, you really have three large players: VMware, Nutanix, and Scale Computing.

[00:06:00] The only vendor that's really right-sized for the edge is Scale Computing. So what we're finding is that people moving away from VMware want something that's easier to set up, in a lot of cases at the edge, versus being in some expensive data center where the licensing is expensive and it has to run on really beefy hardware. So what we've done is basically tailor our solution to be very simple to install.

[00:06:29] We run virtualization, we can run containers, we can do managed services. So it's a one-stop shop, so to speak, which the other vendors don't have. They're strictly doing virtualization. For us, that's just one part of our product stack. And one of the things that stood out to me is that Scale now supports everything from a single-site deployment to tens of thousands of distributed locations. So I've got to ask, what are the architectural principles that allow your platform

[00:06:58] to scale from one edge location to 50,000 without becoming operationally complex? Because you make it look very easy, but it's one of the biggest problems that people have been searching for a solution to for a long time. So what do you do here? No, and on this one, I'm relatively new, so this was all new to me. They've been doing this for a couple of decades now, they being the original Acumera company that merged with Scale. And what they provide

[00:07:26] is a very simple router solution that's custom-built, that just goes on-prem, has full redundancy and switches, and can have multiple interconnections. It can have a failover to 5G. So it's a real plug-and-play, ultra-redundant solution. But what we also have, and this kind of blew my mind, is an app store.

[00:07:52] So if you install this little box, you can say, I want to bring up a cash register, and it has an app to do that. We're doing this with several restaurants that want smart fryers; well, we have an app for that too. You want point of sale? We have an app for that. You want a camera? Same thing. So there are a couple of dozen applications. So although, like you said, on the surface it looks complex, we've gotten enough

[00:08:20] drive time in all these retail variants that we've been able to standardize a solution that works for all of them. And the interesting part about the HyperCore part of the business, the virtualization part of it, is that we can virtualize all of our prior solutions. So you can actually have it virtualized. This other product is called Active Visual. Basically, what that solution does is draw a map of all the locations, and you can see the status of each site.

[00:08:48] So if you have something that's out of service or out of spec, the customer can see that at the central location. Incredibly cool. And I think another area that many enterprises and business leaders are increasingly wary of is the dreaded vendor lock-in, especially after being burned in the past. It's become a growing concern for enterprise buyers now. So how do you at Scale balance integration and simplicity while still giving customers

[00:09:16] that flexibility and choice that they might be increasingly demanding right now? Well, I think that's one of the areas where we are unique. We offer a native virtualization solution with the HyperCore product. We have a container solution we call the Reliant product. And then on top of that, customers can put any sort of compute applications that they have.

[00:09:42] So in most of these retail applications, they typically have some software of their own that we accommodate through a partner; one of the equipment providers provides a platform, and we run their software in our system. So how we have avoided vendor lock-in is that none of this is proprietary. The backend network management part of it, where you see the tiles, that's our technology, but there's nothing proprietary about it other than that we're showing a network map, so to speak.

[00:10:12] So our philosophy is to have a solution toolkit, so to speak, whether it be virtualization, containers, general compute, or firewall. We offer a firewall service too, and the customer can pick and choose what they want. And when we talk about edge environments, they often run mission-critical workloads far and wide, from retail to healthcare, manufacturing, and logistics. And that list goes on and on.

[00:10:40] So just to maybe bring to life what we're talking about here and the value-add, are you able to share a real-world example? Perhaps Taco Bell's multi-thousand-location rollout I was reading about, which I think illustrates what reliability and performance can look like in production. It's a pretty big deal, but I'm not sure how much you can share with me. So it's that or another story. I didn't get a chance to talk to marketing about which customers we can talk about, but I can give you some general examples.

[00:11:09] We have a handful of customers doing things like oil field management, so really remote stuff, and all of our solutions are completely fault-tolerant. On the virtualization level, you essentially have three nodes, so you can lose a node and keep on running. You get an alert, and you can hot-swap another node in there.

[00:11:33] The networking part behind all that is, as I was saying, basically redundant switches, redundant interconnections, and almost always some sort of cellular fallback. So if you go into an airport anywhere in the world, I guarantee that in one of the convenience stores you'd walk into, we're in there running that store. Ah, okay. I love stuff like this.

[00:11:59] And we'll have more to talk about as we launch the combined solution, but there's probably nowhere you'd go where we're not there. More so in the US, because that's where our customer base started. We're in a lot of gas stations, we're in convenience stores, we're in the airports. The list goes on and on, and we're building up our business in the UK. We just won a big construction firm that's got thousands of sites.

[00:12:27] The Taco Bell implementation in the UK is the same as what we're doing here in the US. So there are other retail sites we can go after, but the way the product is built, the product families that provide the compute have redundant clusters, and the part of the platform that provides connectivity, like I was saying, has multiple interconnections.

[00:12:53] There's a really cool app store, and the fallback is typically some sort of cellular if everything completely dies. So you'd have to have multiple switch failures, multiple router failures, and multiple compute node failures for some sort of catastrophic event to happen. And we would get an alert when those were happening, so we'd know about it, and the customer would see it on their dashboard. Yeah.

[00:13:23] It's incredibly cool. I love hearing about this stuff when it's technology that's right in front of our noses that we didn't know about until a conversation like this. And I'm curious, with AI workloads increasingly moving closer to where the data is actually created, how are you preparing your edge infrastructure to support some of the AI-driven operations that you must be seeing now, without forcing customers into oversized or over-engineered solutions?

[00:13:50] Because it seems, right at the heart of everything that you do, you like to keep things simple. Yeah. So the approach we've taken there is we can provide the interconnect. And then, like I was saying, on the compute side they can pick containers or virtualization or some combo of both. And they typically have their own apps they want to run.

[00:14:12] So what we do there is partner with all the major hardware vendors, Lenovo, Supermicro, those types of vendors, and the customer really right-sizes that compute themselves. We just integrate it. So how that applies to AI, because I didn't answer your question, is that if they need a GPU, they just check the box with the supplier to make sure the GPU is in there, and we accommodate it.

[00:14:40] And I'd love to ask you to take a look in a virtual crystal ball of sorts. If we look ahead, and modular infrastructure inevitably replaces monolithic stacks, what do the next five years look like for distributed enterprise IT? And I understand just asking that question is insane with what has happened in the last five years. Five years is like 20 years in old money. And what role do you see Scale Computing playing in shaping this

[00:15:08] new future that is evolving in front of our eyes at breakneck speed? Well, I think Scale will be there because we essentially own the edge. Yeah. But I'll give you an example. One of the things I learned in this deep-dive AI class I took for nine months, where we were creating models for all kinds of different things, object recognition, medical stuff,

[00:15:34] is that you don't need a really monolithic AI model to get predictable results. I mean, the level of accuracy you get from 10x more horsepower isn't 10x the accuracy; it's a small increment. So, like Llama, which runs on people's PCs, it can do it on a laptop. Those models are becoming sophisticated enough that you're going to get the same sort of

[00:16:00] quality response you would get from some large model running in a massive data center. So I think that's going to really help move more of the compute intelligence to the edge, because it's going to be readily accessible. And most of this stuff is open source, by the way. So that will be part of it. And then, like I said, I'm an ex-hyperscaler executive.

[00:16:24] There really is a move away from monolithic concentrations of compute for customers, with maybe the exception of the AI data centers being built out right now. What you find out is that if you don't own the architecture, it gets prohibitively expensive if you're in one of the hyperscalers at some level of commitment. And then the next level is: do you want to do it yourself and have it all centralized?

[00:16:51] And a lot of these applications, I mean, we live in a world economy, right? Yeah. So by its very nature, your users are going to be distributed. So getting the compute closer to the user saves you telecommunication cost and, theoretically, improves the response time. But it also lets you put lower-cost compute close by, because you're not having to deal with all the

[00:17:19] heavy uplift of going to a data center. And I'm curious, from all the conversations that you've been having: there's a hot topic right now around data sovereignty and data residency, and especially here in Europe, it's a big topic. Are organizations wanting to own and control their infrastructure more than they did in the past? Are you seeing any movement here? Some, yes.

[00:17:47] It hasn't been a major focus yet with our customer base, because they tend to be geographically distributed. But certainly, if you're looking at using a public cloud, you have some real issues there with data sovereignty, right? Whereas if you built out an edge computing platform with the product that Scale provides, which Scale owns or you own, you can really keep yourself locked down in a geo, right? Either logically or physically.

[00:18:16] So, yes, we're seeing some of that, and I think we'll see a lot more of it as time goes on. But our mission is really to provide a top-to-bottom solution for the edge, whether it be, like I was saying earlier, compute, firewall, virtualization, or containerization. So you come to us with, here's the problem. The customer may have some apps they've been running in the past that are kind of cobbled together.

[00:18:44] We'll make that one integrated solution that provides you with fault tolerance and geo-dispersed connectivity if you need it, even between continents, things like that. And for anybody listening who would love to dig a little deeper into anything we talked about today, maybe they've got a problem they're thinking about now, they want to connect with you or your team, or just find out more information about Scale Computing: where's the best starting point? Where should they go? Well, that one's simple.

[00:19:13] So it's scalecomputing.com, our website. We also have a decent presence on LinkedIn, which a lot of people are obviously connected on. So start with the website, and if you'd like to dive deeper and need to talk to somebody, the website will point you to the right person as well. Awesome. Well, I'll have links to both the website and LinkedIn there. And for anyone looking for a top-to-bottom solution for the edge, I would definitely urge you to

[00:19:42] go check that out. I'll have links to everything. And it sounds like your own journey and the journey at Scale Computing are going to continue to evolve this year. Things are happening quite fast, so it'd be great to maybe get you on towards the end of the year and see how things are evolving further. But more than anything, thank you for sharing your story today. Well, no, thank you. And I'd be happy to come back and give you an update. I think what I enjoyed about this chat today is just how grounded it felt.

[00:20:07] Edge computing can often sound abstract until you hear the real-world examples that bring it to life: the fault tolerance, the operational visibility, and the reality that so much data is created far away from any data center, which is possibly one of the reasons compute is coming back to the edge, especially with AI. But Mark also makes a strong case that the next wave of AI is not going to live in one place.

[00:20:37] We might be looking at smaller models, local processing, and better economics, all things that will push intelligence closer to where people actually work. So I will share links to Scale Computing in the show notes. And I'd love to hear from you: where do you think edge infrastructure becomes a business priority in your world this year? I'm sure you've got more than a few stories.

[00:21:02] So, hop over to Tech Talks Network and either leave me an audio message or DM over there and we can continue this conversation. But just a big thank you to Mark for taking the time out of his busy day to sit down with us today, share his story and what he's seen. So, that's it from me today. I will be back again tomorrow waiting in your podcast feed as always. And I will speak with you all then. Bye for now.