10 March 2026

Alex Freedland, Co-Founder and CEO, Mirantis

"AI is probably a hundred-year revolution, and we're like year three."

Could you tell us why AI is not a bubble? We would like to hear why you think AI is a real, lasting trend, not just hype.

AI, in my opinion, and I think many people agree, is just a different capability that we can achieve through compute. AI is defined as intelligence. What we have discovered, and the ChatGPT moment three years ago made it mainstream, is this: the underlying technology was developed long before, it existed for decades, but once you finally apply enough compute, through the mathematical operations you run on that compute with the data, you get intelligence. It behaves like something smart, and it gets smarter if you put more compute into it.

Just like in the days of the internet, the technology was available long before, but finally, the ability to move data and pictures across modems and stuff became so widespread that people noticed, and that kind of gave birth to the internet as we know it. Well, similarly, AI technology was developed throughout the 20th century, but finally, the power of compute became noticeable at the human level, and you get a chatbot that can just talk to you and look smart.

And then we saw how much smarter it has gotten over the last three years, and we now call it the age of AI. So there are multiple reasons I don't think we're in a bubble. One is that we are very, very early in how this intelligence can be used. When you look today, there are really just a couple of simple use cases in play. One is chatbots, and you've seen them in ChatGPT and Gemini and all these models that people use every day, and they've become useful. The other is a fairly sizable use case: people have been writing code with foundation models. So those have been the two basic use cases, and maybe a billion people have been using them. There's a little bit of monetization behind it, but it's very, very early.

But it's obvious now that, given how quickly the models are improving and how much better they're becoming, it's actually very possible to make them much more useful. I'll give you a couple of examples. In every vertical application out there, people are trying to use models to do things faster. Just yesterday, we were at a government meeting with NVIDIA, and they were talking about education: how to educate people so they can be useful in the ever-changing world of artificial intelligence. NVIDIA showed how they build courses using artificial intelligence and the models. What used to take three to four weeks to build a course, they can now do in a day. So there is tremendous efficiency being recognized that people are now willing to pay for, and we've barely scratched the surface.

Another very simple example is graphics and video processing. You remember when YouTube first showed up, people didn't understand why somebody would pay a billion dollars for that little fledgling video website. And today, the idea that somebody bought YouTube for a billion dollars looks ridiculously inexpensive. It became the largest content producer, is now one of the largest channels, and is a worldwide phenomenon. Very similarly, we're just starting to play with pictures and content. If you go to ChatGPT or Sora and all that, and you want to draw a picture or make a movie from an idea, it just takes a long time. It's like the early days of the internet, before you were born, when even showing a picture on a website was slow; that's how it started. Twenty years later, it's instantaneous, and we don't even think about it.

So that's how early we are in content creation, and that's yet another use case. And I can just go on and on: the better the models get, and the cheaper the unit economics of intelligence become, the more use cases there will be. This is probably a hundred-year revolution, and we're like year three. So we're certainly not in a bubble, I mean, that's a guarantee. There are more nuanced questions about whether certain assets are overvalued, and those will get corrected and all that, but we are very, very early in a generational or multi-generational shift.

Why do you believe NVIDIA will continue to lead and not be commoditized or threatened by hyperscalers, other chip makers, and the shifting dynamics of the AI markets?

Nobody knows, right? I mean, we had companies like Cisco that were very, very successful 25 years ago, and then valuations went extremely high and then corrected. To this day, Cisco is still a dominant networking player, but its valuation hasn't returned to where it was at the 2000 peak. So the question is, is NVIDIA a Cisco? Is NVIDIA something else? Or is AI something like the oil boom of the 1870s and 1880s, with NVIDIA more like Standard Oil, which became all the oil companies we know, like Chevron and Exxon and all of those?

NVIDIA is a very interesting company. It's been in the world of compute for 30 years now, and they pioneered a different type of compute: the GPU, which is basically parallel compute. It does very well for the type of applications where you can split a problem into a myriad of sub-problems, all of which can be computed simultaneously. When they started, it was an esoteric class of workload: computer graphics, and what's called HPC, high-performance computing, were the things that used that kind of highly parallel workload.
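The idea of splitting a problem into many independent sub-problems can be shown in miniature with a toy data-parallel operation. This is an illustrative sketch in plain Python, not NVIDIA code; the pixel-brightening example is invented for illustration.

```python
# Miniature illustration of a "highly parallel" workload: an element-wise
# operation decomposes into independent sub-problems with no data
# dependencies between them, so all of them could run at the same time.
# A GPU runs thousands of such lanes in parallel; here we just show the
# decomposition itself, sequentially.

def brighten(pixel: int, amount: int = 40) -> int:
    """One independent sub-problem: adjust a single pixel, capped at 255."""
    return min(255, pixel + amount)

def brighten_image(pixels: list[int]) -> list[int]:
    # Each element is computed independently of every other element;
    # this map is exactly what a GPU distributes across its cores.
    return [brighten(p) for p in pixels]

print(brighten_image([0, 100, 250]))  # -> [40, 140, 255]
```

Graphics, HPC simulations, and neural-network math all share this shape: the same operation applied to huge arrays of independent elements.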

Now, when AI came in, it turned out that AI is predominantly a highly parallel workload, and the demand for parallel, or accelerated, computing went through the roof. So over the last 30 years, NVIDIA has done a couple of things. One, it built a set of libraries called CUDA, which know how to work with the hardware primitives and set the standard for all the software tooling people use to build optimized software for that type of hardware. The whole ecosystem that consumes this type of hardware has standardized on CUDA, and they continue to extend the CUDA libraries for each new use case. That software moat they have in the CUDA libraries is a very, very strong thing. The analogy would be what Microsoft did in the 80s and 90s to set the standard: the world changed, Google came, others came, and Microsoft continues to dominate because of those software standards they built. NVIDIA has done that very well.

The other thing it continues to do very well, better than anybody, is stay on a relentless innovation path. They figured out how to build very advanced next-generation hardware, but also to integrate it into a whole system across compute, networking, and system integration, so that each new generation becomes roughly three times as powerful per token as the previous one. And so long as our algorithms are such that more compute yields better outcomes for the use case we're going after, whether it's a foundation model, a spatial model, or anything else, the world cannot afford not to put more efficient compute into circulation, because otherwise they'll get priced out.

So let's say you are a company like Microsoft or OpenAI, you name it, and you train your most advanced reasoning model. Ultimately, you will consume as much compute as possible, and that compute will give you a certain result you need to reach. If you do this on previous-generation NVIDIA hardware, it will cost you $5 billion. If you do it on the next generation, even though NVIDIA will be more expensive per unit, the cost of training a similar model will be $300 million. So the unit economics get better: even though you're buying more expensive equipment, the results you can achieve are better, and as a result, you get better monetization and better everything else.
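The unit-economics argument can be put in a toy formula: the effective cost of a fixed training workload is the price of the hardware divided by its performance. This sketch uses the rough "three times as powerful per generation" figure from above and a hypothetical 50% price increase; none of these are real prices.

```python
# Toy model of generational unit economics. Even if the new chip costs
# 50% more per unit, 3x performance per token makes the same fixed
# training workload twice as cheap overall. All numbers are hypothetical.

def cost_per_unit_work(price_per_chip: float, perf_per_chip: float) -> float:
    """Effective cost of one unit of training work on this hardware."""
    return price_per_chip / perf_per_chip

prev_gen = cost_per_unit_work(price_per_chip=1.0, perf_per_chip=1.0)
next_gen = cost_per_unit_work(price_per_chip=1.5, perf_per_chip=3.0)

print(next_gen / prev_gen)  # -> 0.5: the same model trains for half the cost
```

This is why "more expensive per unit" and "cheaper overall" are not a contradiction: what matters is price per unit of useful work, not price per box.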

So what happens is that when NVIDIA trailblazes, they produce something nobody else can. And not only do they produce it for the use case that people understand, but they make it general purpose, which makes it possible for people to build a new use case that wasn’t possible before, and do it on the NVIDIA chips and on the CUDA libraries that come together. And so long as NVIDIA can continue this relentless pace of innovation and have their CUDA libraries support these new use cases, the most compute-intensive companies that need more of this to move the needle will buy as much NVIDIA as possible because that’s the cheapest way for them to innovate.

Now, in the meantime, once customers understand what it takes to run their well-understood, less innovative use cases, and they become large enough, they can go and create an ASIC, a custom chip that specifically does the very limited type of compute required for that well-understood use case. They can build it more cheaply and integrate it into their supply chain. So, in the wake of this innovation, there are all these other companies taking what's already understood and building cheaper versions.

But NVIDIA, and this is very interesting, actually only sells its latest equipment for maybe two, maximum two and a half to three years. If you look today, the GB- and B-series Blackwell systems are what's being sold, with a little bit of the H-series Hopper left, and that's it. The H100 you can no longer buy from NVIDIA. So they only sell two to three generations of their stuff, which means they're not dependent on new sales of things that can be commoditized. And they continue to innovate tremendously, and not only that, they're also investing in the ecosystem to unlock new use cases.

So people are saying that the next new use case that’s going to be as prominent as ChatGPT will be robotics—physical AI. And NVIDIA has been innovating with people, designing those physical things, and so on. For each of those physical designs, there’s a training AI factory in the back, and it’s a virtuous circle. So long as NVIDIA continues to out-innovate everybody by a generation or two, they won’t get commoditized.

Now, look at Intel, which came before them. Intel was this incredible engine that remained dominant, and in the end, they actually got commoditized. So what happened there? Intel built its business model on the fact that it had the money and technology to develop two generations of semiconductors ahead of everyone else. They built their moat on Moore's law, which said that roughly every 18 months you can move to the next, smaller semiconductor process, which makes the whole thing much cheaper. You produce a wafer, and the cost of a wafer stays roughly the same, but you can cut it into many more, smaller pieces, so the cost of each processor becomes much smaller.
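The wafer economics behind that moat are simple arithmetic: roughly constant wafer cost, divided by however many dies a process shrink fits onto it. The numbers below are hypothetical, chosen only to show the mechanism.

```python
# Toy wafer economics behind Moore's-law cost scaling: the wafer costs
# about the same each generation, but a process shrink fits more dies
# per wafer, so the cost per chip falls. All numbers are hypothetical.

def cost_per_die(wafer_cost: float, dies_per_wafer: int) -> float:
    """Cost of one chip, assuming the wafer is cut into equal dies."""
    return wafer_cost / dies_per_wafer

old_node = cost_per_die(wafer_cost=10_000.0, dies_per_wafer=100)
new_node = cost_per_die(wafer_cost=10_000.0, dies_per_wafer=200)

print(old_node, new_node)  # -> 100.0 50.0: a shrink halves the chip cost
```

Funding two such shrinks at once is what made Intel's position so hard to attack until the physics ran out.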

So Intel had the manufacturing, the money, and the scale to do two generations at the same time. And when somebody caught up with them for one generation, like AMD did, that competitor wouldn't have the money to invest in two generations in a row, so by Intel's next generation they would fall behind again. That lasted until Moore's law stopped working, because feature sizes got down to the scale of atoms and molecules and you couldn't shrink them anymore. Then Intel lost that moat, and everybody else caught up.

So NVIDIA's moat is different, because they're not a manufacturing company; everybody uses TSMC. Their moat is software, the ability to innovate at a pace no one else can, and the supply chain they've built to scale to the industry's requirements. So long as they continue at that relentless pace of innovation, they will not get commoditized. Now, will they stop at some point? Will they reach the physical limits of what's possible, and will others catch up? That's possible. But right now, if you look at their announcements, they're not going to get commoditized anytime soon.

How and why should enterprises be investing in the NVIDIA ecosystem?

It's a slightly more complicated question, because ultimately, enterprises are not in the business of being AI factories. Let's have a definition: what's an AI factory? An AI factory is a factory that produces tokens, that produces intelligence. Enterprises ultimately produce something else and use the tokens; they use intelligence. They're consumers of AI factories, not producers.

So enterprises should invest in the NVIDIA ecosystem if they decide not only to consume but also to produce. They would produce to control their own destiny, or because, at a certain scale, they have requirements around cost, sovereignty, data, and all that. Those things will prompt enterprises to consider becoming producers of intelligence rather than just consumers. But ultimately, I think their first and foremost motion will be to consume intelligence.

So they will consume intelligence from the likes of AWS and Google and others, or from the SaaS offerings of companies like OpenAI and Anthropic, you name it. And depending on who these enterprises are, they may actually build their own factories. For example, if you're GE, you will need to build your own factory, and that factory has to be driven by intelligence. You cannot just consume; you have to produce that intelligence. When you go into manufacturing, when you build cars, energy, things of that nature, you have to have your own intelligence production. There you would invest in the NVIDIA ecosystem for the same reason anybody else would: because their most innovative products will give you the best return on your investment, and because, thanks to CUDA, all the tools are built on top of NVIDIA.

Now, I think a different question is: will compute commoditize and be available only from public clouds, only from hyperscalers? Look at what happened in the non-accelerated, traditional computing world. Cloud happened, people were building private clouds and hybrid clouds and all that, and then 20 years later, private and hybrid faded and it all went to the hyperscalers. The hyperscalers ended up winning that game and commoditizing everybody else. VMware couldn't scale further and got acquired by Broadcom; Red Hat got acquired by IBM, and they're probably the only other success story in the cloud over the last 20 years, but not at Amazon's scale. Amazon is worth two trillion, and Red Hat got acquired for 30 billion, so we're talking almost two orders of magnitude.

So what's going to happen with the production of intelligence, with AI factories? Will they be commoditized? I think the jury is still out. This is a different question than the one you asked, but the jury is still out. There is, however, a significant difference between the worlds of AI and traditional compute: ultimately, AI is an energy play. Because of the nature of this particular type of compute, the gating factor is not the availability of compute; it is the availability of energy.

And if you have enough energy and enough access to capital, enough to secure the energy and build a data center around it, and you can buy compute, that's enough to start monetizing. You're seeing this huge growth of what they call neo-clouds, new types of clouds that are raising money, buying a lot of compute, and procuring a lot of energy. You've seen CoreWeave and Nebius and IREN, and many others coming in, in Saudi Arabia and elsewhere.

So at the very least, they know today that because energy is so important, if you have access to energy, you can buy compute, put your NVIDIA or other hardware into a data center, and lease it on a long-term contract to a hyperscaler. Roughly 100 megawatts will give you roughly a billion dollars in hardware hosting lease revenue. But the market is also very interesting when you look at how you sell your energy. The cheapest way to sell energy is just as energy. The second-cheapest way is as bare metal as a service on top of NVIDIA; you probably get a 30 to 40 percent premium on that. You can also sell energy by selling tokens on a model: take a good model, put it on your stack, and sell that, and the price is two and a half to three times higher.

So if you have energy and you have money, you could certainly sell it as bare metal compute to Microsoft, but if you invest in the software stack and can actually run models on top of it, you can sell it for twice as much. And you have every incentive in the world to do that, because why would you sell energy that would otherwise go unused for less than you could get by investing a little more? It's very similar to how the oil industry operates, where selling refined, derivative products is a lot more valuable than selling crude oil. You can think of raw energy as the crude product, bare metal on NVIDIA as the first refinement, and tokens as the next: two to three times as valuable, sometimes five. So suddenly, access to energy becomes a differentiator.
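The "refinement tiers" described above can be summarized in a few lines. The multipliers below are the rough figures from the interview (using midpoints where a range was given), applied to a normalized baseline, so this is illustrative only, not market data.

```python
# Sketch of the "energy refinement" ladder: the same power monetizes at
# higher multiples the further up the stack it is sold. Multipliers are
# the speaker's rough figures (midpoints of quoted ranges); the baseline
# revenue per MW is normalized to 1.0, so values are relative, not dollars.

TIER_MULTIPLIER = {
    "raw energy": 1.0,           # sell electricity as-is
    "bare metal on GPUs": 1.35,  # ~30-40% premium over raw energy
    "model tokens": 2.75,        # ~2.5-3x raw energy
}

def revenue(megawatts: float, tier: str, base_per_mw: float = 1.0) -> float:
    """Normalized revenue from selling `megawatts` of power at a given tier."""
    return megawatts * base_per_mw * TIER_MULTIPLIER[tier]

for tier in TIER_MULTIPLIER:
    print(f"{tier}: {revenue(100, tier):.0f}")
```

The same 100 MW yields roughly 1x, 1.35x, or 2.75x the baseline depending on how far up the stack it is sold, which is the incentive to invest in the software layer.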

And then there is a second differentiator, and that’s called sovereignty, because every country, every state, every industry now thinks about having their own AI factory that will keep the data within the boundaries and the rules within the boundaries. And most importantly, will also try to get the benefits of the factory to the constituency of that sovereign space. So there are two angles that exist today in the world of AI that were not really that prevalent in the world of traditional computing: energy and sovereignty.

And there's enough capital coming into those two anchors that you can actually invest and build things that create more value on top. You can have defensible AI factories at the model level that you can then sell to the world. I think that's the difference between the AI world and the traditional computing world: you will have a lot more factories everywhere in the world, very specific ones, because of energy and sovereignty.

And that also means that if you are an enterprise, you have to consume that intelligence differently. Before, you could just buy it all from Amazon, or maybe run a dual strategy and consume from Amazon and Google. But Amazon and Google won't have all the intelligence necessary. They will not have the capacity, and they have no incentive to lower prices, because they will control a large share of the market. They will vertically integrate and drive down their own costs by building an efficient stack, but because demand will always be higher than supply, they will simply charge a premium.

So if you're an enterprise, you can buy some of that from Google, Microsoft, and AWS, but there will be a plethora of other supply available, and you'd better find a way to consume from it. Hybrid consumption models will become the standard. The smart people at Amazon and Google already understand that, and what they're doing is building hybrid software to consume intelligence from other people: not just long-term data center contracts, but consumption at a higher level. So they're already preparing for this. And I think smart infrastructure players are starting to think about it the other way around, about how to get a control point into the enterprise directly rather than being consumed through AWS. So there are actually two big waves coming, both of them incredibly well-capitalized and well-anchored. This is going to be a very different dynamic than we saw in cloud computing over the last 20 years.

Why do you believe that it’s better for enterprises to act on AI now and not wait for the nascent technology to mature?

I don’t think there is a hard-and-fast rule for how to do this. Every enterprise decides what to do with technology, and enterprises can be early adopters or late followers. And it depends on the moat they have that makes them who they are.

For early adopters, it's important to understand how to gain efficiencies against other enterprises that are early adopters. I don't know if you follow the various reports on AI adoption. There's a lot of noise in the market about how 97% of all experiments are failing and so on; that's fine, but it doesn't really mean much. There are more interesting reports. One I remember was published recently by a fairly well-respected VC firm called Menlo Ventures, on how startups are doing in AI compared to previous waves: internet startups, mobile startups, and so on. They compare how quickly those startups get started and how their revenue growth compares to previous waves. And it's obvious that, given the speed with which technology is being built and adopted using AI, and with people paying for it, revenue growth for AI startups, both horizontal and vertical, is two to three times higher than anything they've seen before.

So the speed with which value is being created by AI is unprecedented. If you're an enterprise, think about it like this: say I'm a transportation company, a taxi business in the olden days, and then Uber came in and disrupted it. Or I was a traditional bank, and then PayPal showed up. You can do the same thing through different means, and every enterprise needs to understand how it can be disrupted by something else.

So what that means is they have to move fast and build technology that makes them more competitive. They also need to improve efficiency so that whatever they do can be done much faster and cheaper, like that example I gave you with curriculum development. Every enterprise needs to do that, and they need to decide: do they do it as an early adopter or a late follower? That really depends on their strategy and their moat. If their moat is something that cannot be disrupted by a young startup with the use cases available now, they can wait, understand it, and buy the technology once it's proven. If their moat leaves them open to disruption by a young player, they'd better move fast. So their business model and their definition of their moat should decide whether they're early adopters or late followers.

Is there anything else you want to add?

I think the most important thing, which people are starting to understand but which isn't yet mainstream, is this idea of being anchored in energy and sovereignty. The idea here is that AI is ultimately an energy play, which wasn't the case with technology before, and the AI software we're talking about is an energy-efficiency play.

When you look at it from that perspective, two things need to happen. One is the amount of capital that will have to go into building the infrastructure that transforms all industries through AI, which is probably commensurate with the GDP of the world. Where do you get that money? It's not something software investors can put in. It's not something governments can put in, because all the governments are seriously indebted and can't raise taxes. So there will need to be other means of funding that revolution in digital infrastructure, and that's a very, very interesting angle. Those investments will come from energy and sovereign infrastructure pools.

And then, of course, to actually build the moat and protect those investors, I think an industry utility concept needs to emerge. I don't know if what I'm describing here is too abstract, but it's not dissimilar to what happened when banks understood that they needed to transact with each other through plastic cards. Bank of America started BankAmericard, which then became Visa, an interbank utility. Today, Visa is one of the largest fintech companies in the world, but it took the vision of one bank and then the resources of all banks to make Visa what it is.

Something very similar will happen in the infrastructure world, with investment pools and technology pools built around infrastructure. I think we need a utility business model. I don't know if many people are talking about that, but I think that's what will drive this revolution. And that's why commoditization isn't going to happen.