
Shares of CoreWeave, an AI cloud provider, started trading Friday following a much-anticipated IPO that is being seen as a litmus test for other AI companies hoping to go public.
Originally founded as a crypto mining company, CoreWeave pivoted to renting out its Nvidia graphics processing units to companies desperate to train AI. The New Jersey-based company is the first tech listing this year, but its debut is not without controversy. While revenue is up more than 700% year over year, only two customers account for 77% of that figure, and the company has warned of “material weaknesses,” including in its internal controls over financial reporting.
The company’s shares opened at $39, and reached $41.79 earlier today after being priced at $40 in the IPO. The company—founded by Michael Intrator, Brannin McBee, and Brian Venturo—now has a market cap of around $19.44 billion. The stock closed at just under $40.
I sat down with Intrator, the company’s CEO, to hear more about what differentiates the business and why the company decided to go public.
This interview has been edited and condensed for clarity.
Fortune: To be an AI company based in New Jersey is like being East Coast rap.
Michael Intrator: Funny, because that’s sort of how we feel.
So how are you feeling about the IPO?
I am unbelievably excited about what we’ve accomplished and it’s just so incredible for the company. It’s incredible for our ability to continue to execute and scale our business. I’m really, really excited about where we are.
What differentiates you in the market?
There are three things that we do as a company. The first piece is that we built a beautiful technical solution to how to run parallelized computing in the cloud. It’s a software solution specialized to make the compute performant, available, scalable, flexible, all the things that you need to build and train and serve artificial intelligence use cases. When the hyperscalers built a function for CPU computing, they built a minivan—a configuration of compute that was really good at everything, but not great at anything, and that was exactly what you needed to build a cloud for CPU-based, sequential compute. What we did is we stepped back and said, “How do you architect a beautiful technical solution to this new problem associated with how you run cloud computing for parallelized workloads?” We have a better software solution to optimize the infrastructure.
The second one is that you need to understand the power markets, the data, to ultimately make the compute available and useful for your clients. And we’re able to do that at massive scale.