CoreWeave, Inc. Class A Common Stock (NASDAQ:CRWV) Q2 2025 Earnings Call Transcript August 12, 2025
CoreWeave misses on earnings expectations. Reported EPS was -$0.27; expectations were -$0.23196.
Operator: Thank you for standing by. My name is Tina, and I will be your conference operator today. At this time, I would like to welcome everyone to the CoreWeave Second Quarter 2025 Earnings Call. All lines have been placed on mute to prevent any background noise. After the speakers’ remarks, there will be a question-and-answer session. It is now my pleasure to turn the call over to Deborah Crawford, Vice President of Investor Relations. You may begin.
Deborah Crawford: Thank you. Good afternoon, and welcome to CoreWeave’s second quarter 2025 earnings conference call. Joining me today to discuss our results are Michael Intrator, CEO, and Nitin Agrawal, CFO. Before we get started, I would like to take this opportunity to remind you that our remarks today will include forward-looking statements. Actual results may differ materially from those anticipated by these forward-looking statements. Factors that could cause these results to differ materially are set forth in today’s earnings press release and in our quarterly report on Form 10-Q filed with the SEC. Any forward-looking statements that we make on this call are based on assumptions as of today, and we undertake no obligation to update these statements as a result of new information or future events.
During this call, we will present both GAAP and certain non-GAAP financial measures. A reconciliation of GAAP to non-GAAP measures is included in today’s earnings press release. The earnings press release and an accompanying investor presentation are available on our website at investors.coreweave.com. A replay of this call will also be available on our Investor Relations website. And now I’d like to turn the call over to Michael Intrator.
Michael Intrator: Thanks, Deborah, and good afternoon, everyone. CoreWeave had a standout second quarter as we continue our hyper-growth journey against the backdrop of unprecedented demand for our AI cloud services. Adoption is expanding rapidly, with the enterprise increasingly viewing AI as a strategic imperative and CoreWeave as the force multiplier that enables adoption, innovation, and growth for training as well as inference workloads. As a result, revenue grew a better than expected 207% year over year to $1.2 billion for the second quarter, with adjusted operating income of $200 million. This marks the first quarter in which we reached both $1 billion in revenue and $200 million of adjusted operating income.
Scaling our capacity and services remains a key ingredient of our success in this structurally undersupplied market. To that end, we ended the quarter with nearly 470 megawatts of active power, and we increased total contracted power by approximately 600 megawatts to 2.2 gigawatts. We are aggressively expanding our footprint on the back of intensifying demand signals from our customers, ensuring that we maintain a durable multi-year runway for growth. We are now on track to deliver over 900 megawatts of active power before the end of the year. We ended the second quarter with $30.1 billion in contracted backlog, up $4 billion from Q1 and doubling year to date. This includes not only the $4 billion expansion with OpenAI we previously discussed, but also new customer wins ranging from large enterprises to AI startups.
Importantly, we’ve also signed expansion contracts with both of our hyperscale customers in the past eight weeks. Our pipeline remains robust and is growing increasingly diverse, driven by a full range of customers from media and entertainment to healthcare to finance to industrials and everything in between. The proliferation of AI capabilities into new use cases and industries is driving increased demand for our specialized cloud infrastructure and services. For instance, while it’s early stages, in 2025 we saw more than a 4x increase in our VFX cloud service product, Conductor, and entered a multiyear contract for NVIDIA’s GB200 NVL72 system with Moon Valley, an AI video generation startup that lets filmmakers craft professional-grade clips with granular cinematic control.
We are seeing increased adoption in the financial services sector as we expand our relationships in proprietary trading, with firms like Jane Street, and add mega-cap bank clients like Morgan Stanley and Goldman Sachs. We are also seeing significant growth from the healthcare and life sciences verticals, and are proud of our partnerships with customers like Hippocratic AI, who build safe and secure AI agents to enable better healthcare outcomes. In short, AI applications are beginning to permeate all areas of the economy, through both startups and enterprises, and demand for our AI cloud services is growing aggressively. Our cloud portfolio is critical to CoreWeave’s ability to meet this growing demand. Our focus on delivering the industry’s most performant, purpose-built AI cloud infrastructure makes us the platform of choice for both training and inference across incumbent AI labs and new entrants alike.
We’re helping these customers redefine how data is consumed and utilized globally as their critical innovation partner, and we are being rewarded for our efforts as they shift additional spend to our platform. We continued to execute and invest aggressively in our platform, up and down the stack, to deliver the bleeding-edge AI cloud services, performance, and reliability that our customers require to power their AI innovations. For example, during the second quarter, we delivered NVIDIA’s GB200 NVL72 and HGX B200 at-scale deployments, fully integrated into CoreWeave’s Mission Control for reliability and performance management. Mission Control continues to be the cornerstone of CoreWeave’s ability to scale at breakneck speed, providing a fully automated and rigorous process for cluster life cycle management with unmatched visibility for our customers.
In addition, we began our private preview of an innovative archive-tier object storage product with automatic tiering, industry-leading economics, and a simplified cost structure that makes optimizing storage costs for startups and enterprises seamless. As a result, customers are shifting petabytes of their core storage to CoreWeave in the form of multiyear contracts. We are providing support for additional third-party storage systems tightly integrated into CoreWeave’s technology stack, with large-scale production deployments of VAST, Weka, IBM Spectrum Scale, DDN, and Pure Storage. With Weights and Biases, we deliver an integrated full-stack observability feature giving researchers immediate feedback to diagnose the factors impacting the performance and reliability of their AI workloads, from the data center through network fabrics, storage, and GPUs, and up to their machine learning code.
We launched the CoreWeave and Weights and Biases inference service, utilizing our incredibly reliable compute platform to power a research-friendly API for state-of-the-art AI models, including OpenAI’s new open-source model, Meta’s Llama 4, DeepSeek, Kimi K2, and Qwen3. This new product allows customers to easily bring AI inference into production in their applications, with tight integration into our Weave product ensuring visibility into service quality and safety. We continued our investment in SUNK, Slurm on Kubernetes, which is used by many of the largest AI labs and enterprises in the world, providing improved identity federation, research segmentation, and scale. We began introducing flexible capacity products to help our customers better manage their end-customer demand.
In addition to our on-demand and reserved inference offerings, our spot product is in customer preview, and we will introduce additional capacity products over the second half of the year. We also saw significant growth in our backbone and networking services as one of our largest AI lab customers leveraged our networking backbone to connect its multi-cloud inference infrastructure. Our product development roadmap is robust, and we are excited to announce new cloud services and capabilities over the remainder of the year that will further accelerate growth within the AI ecosystem and empower customers to meet their evolving business needs. We have entered new parts of the capital markets and accessed new pools of capital, driving our cost of capital lower.
We priced both our inaugural and second high-yield bond offerings in the past three months. These transactions were upsized due to strong demand and were priced at lower interest rates. More recently, we closed on a landmark secured GPU financing with many of the world’s leading banks, a novel financing structure that CoreWeave has pioneered. As evidenced by these transactions, our access to the capital markets not only remains robust but is deepening. We are grateful for this support of our mission and expect to continue to access less expensive capital sources as we continue to execute. We will continue to verticalize our platform and enhance our control, efficiency, and differentiation, fueled by our investment both up the stack, as you saw with our acquisition of Weights and Biases last quarter, and down the stack, as highlighted by our proposed acquisition of Core Scientific last month.

Our ability to scale state-of-the-art infrastructure will be further bolstered by the more than $6 billion data center investment we’ve announced in Lancaster, Pennsylvania, as well as a large data center project in Kenilworth, New Jersey, which we are co-developing via a joint venture with Blue Owl. These new sites are perfect examples of our broader data center strategy, which allows us to provide a mix of both large-scale training and low-latency inference compute across the country. Now I’d like to come back to our proposed acquisition of Core Scientific. We believe the combination will accelerate value creation for shareholders of both companies. Both the CoreWeave and Core Scientific management teams and boards have evaluated this transaction extensively and concluded it is the best path for both companies and their shareholders. The rationale behind the deal is quite simple and powerful. Verticalization creates tremendous operational and financial efficiencies that will strengthen our ability to serve our customers at scale. Owning the infrastructure will allow CoreWeave to scale faster and more efficiently. The integration of Core Scientific meaningfully advances our capacity to operate one of the largest and most sophisticated AI cloud platforms in the world.

Upon closing, CoreWeave would own approximately 1.3 gigawatts of gross power capacity across Core Scientific’s national data center footprint, with an incremental one gigawatt or more available for future expansion. This scale enhances our flexibility to take on new projects and meet accelerated customer demand. In addition, the acquisition would drive the immediate elimination of more than $10 billion in future lease liability overhead, as well as a more streamlined and efficient operating model. As a result, we anticipate $500 million in fully ramped annual run-rate cost savings by 2027, benefiting both Core Scientific and CoreWeave shareholders directly. Vertical integration will allow us to finance infrastructure more efficiently, furthering one of our key objectives of lowering our cost of capital and enabling us to grow in a more capital-efficient manner.
We and Core Scientific look forward to discussing the transaction with you in the months ahead. Our respective teams are already engaged in pre-integration planning to ensure we’re ready to hit the ground running. To that end, we are executing with pace and purpose amidst a market in which the supply-demand imbalance is only deepening, as new enterprise adopters increasingly compete with large AI labs for limited capacity and services. We’re building on our leadership across all key success criteria, from power access to AI cloud service performance to revenue and backlog growth. And we will keep getting stronger as we verticalize our data center infrastructure and cloud services. I am excited about the momentum we are building, and I want to thank our customers, teams, and business and financial partners for making it possible.
Now here’s Nitin.
Nitin Agrawal: Thanks, Michael, and good afternoon, everyone. Our strong second quarter results highlight the unprecedented demand environment we are seeing and our continued execution to rapidly scale our AI cloud platform to meet that customer demand. Our growth continues to be capacity constrained, with demand outstripping supply. Since our Q1 call, we have signed expansion contracts with both our hyperscaler customers. We also closed our acquisition of Weights and Biases and announced a proposed Core Scientific acquisition. In addition, we successfully raised $6.4 billion in the capital markets through two high-yield offerings and a delayed draw term loan, all of which have opened access to new capital pools at an increasingly lower cost of capital.
Turning now to Q2 results. Q2 revenue was $1.2 billion, growing 207% year over year, driven by strong customer demand. Revenue backlog was $30.1 billion, up 86% year over year and doubling year to date. While our revenue backlog is expected to scale rapidly over time, growth rates will fluctuate from quarter to quarter given the nature of our large committed-contract business model, the timing and size of new contract signings, and revenue recognition. Operating expenses in the second quarter were $1.2 billion, including stock-based compensation expense of $145 million. We continue to ramp our investments in data center and server infrastructure to meet growing customer demand, which contributed to the increase in our cost of revenue as well as technology and infrastructure spend in Q2.

In addition, the increase in sales and marketing was largely driven by marketing spend to accelerate new customer acquisition and raise awareness of our differentiated capabilities, and the increase in G&A was largely driven by professional services. Adjusted operating income for Q2 was $200 million, compared to $85 million in Q2 2024. Our Q2 2025 adjusted operating income margin was 16%. Net loss for the quarter was $291 million, compared to a $323 million net loss in Q2 2024. Interest expense for Q2 was $267 million, compared to $67 million in Q2 2024, due to increased debt to support our infrastructure scaling, partly offset by a lower cost of capital. Adjusted net loss for Q2 was $131 million, compared to a $5 million adjusted net loss in Q2 2024. The adjusted net loss was impacted by increases in interest expense due to the scaling of our infrastructure, partly offset by growth in adjusted operating income.
Adjusted EBITDA for Q2 was $753 million, compared to $250 million in Q2 2024, scaling more than 3x year over year, and our adjusted EBITDA margin was 62%, roughly in line with Q2 of last year. Turning to capital expenditures. CapEx in Q2 totaled $2.9 billion, up over $1 billion quarter over quarter as we scale rapidly to meet our accelerating customer demand. We are executing at a massive scale, and demand continues to outpace supply. As a reminder, CapEx consists primarily of investments in property and technology equipment and is calculated as the change in gross PP&E minus the change in construction in progress. Construction in progress represents infrastructure not yet in service, so it is not yet revenue generating. In addition, the timing of data center capacity coming online and of GPU generations being placed into service could drive significant variation quarter to quarter, an example of which you will see in our Q4 CapEx ramp.

Now let’s turn to our balance sheet and strong liquidity position. We have designed our capital structure to enable rapid scaling. As of June 30, we had $2.1 billion in cash, cash equivalents, and restricted cash. Other than payments on OEM vendor financing and self-amortizing debt serviced through committed contract payments, we have no debt maturities until 2028. As Michael mentioned, we continue to see strong success in the capital markets. Growing at a rapid pace and executing at scale requires a unique and sophisticated approach to securing the funding required. CoreWeave continues to be not only a leading AI technology partner, but also the leading innovator in financing the infrastructure required by the world’s most advanced AI labs and enterprises.
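The CapEx definition given in the prepared remarks (the change in gross PP&E minus the change in construction in progress) can be sketched as below; the balance figures are illustrative placeholders, not CoreWeave's reported amounts:

```python
def capex(gross_ppe_now: float, gross_ppe_prior: float,
          cip_now: float, cip_prior: float) -> float:
    """CapEx = change in gross PP&E minus change in construction in progress (CIP).

    CIP represents infrastructure not yet in service, so netting out its change
    isolates investment in equipment actually placed into service.
    """
    return (gross_ppe_now - gross_ppe_prior) - (cip_now - cip_prior)

# Hypothetical quarter-over-quarter balances in $B, for illustration only.
q_capex = capex(gross_ppe_now=15.0, gross_ppe_prior=12.5,
                cip_now=1.2, cip_prior=1.6)
print(round(q_capex, 1))  # 2.9
```

Note that a quarter in which CIP falls (projects coming into service) adds to CapEx under this definition, which is why go-live timing can swing the reported figure quarter to quarter.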
Since the beginning of 2024, we have secured over $25 billion of debt and equity to fund the build-out and scaling of the leading AI cloud platform. In May, we launched and closed our first unsecured high-yield offering of $2 billion, which was upsized by $500 million due to strong demand. In July, we reentered the market and raised an additional $1.75 billion, also oversubscribed, at a lower interest rate. More recently, we closed our third delayed draw term loan facility. This $2.6 billion facility completes the financing for the $11.9 billion OpenAI contract we announced in March. Notably, the transaction was completed at a cost of capital of SOFR plus 400 basis points, a 900 basis point decrease from the non-investment-grade portion of our prior facility, DDTL2, and was the first to be fully underwritten by top-tier banks.
Together, these financings highlight our ability to drive a sustained reduction in our cost of capital and the increasing depth of access we have to the capital markets, both of which were stated goals during our IPO. Turning to tax: again in Q2, we recorded an income tax provision despite a net loss, due to impacts from non-deductible items and the valuation allowance on net deferred tax assets. Our tax rate might fluctuate significantly in the future due to similar factors. Now turning to guidance for Q3 and for full year 2025. As Michael mentioned, we are seeing an acceleration of customer demand, and our pipeline remains robust and increasingly diversified. We are still operating in a structurally supply-constrained environment where demand far outstrips supply for our products and services.
Our operations and engineering teams are working relentlessly to deploy more capacity faster for our customers. With a strong demand backdrop, we expect Q3 revenue in the range of $1.26 billion to $1.3 billion. In addition, we anticipate Q3 adjusted operating income between $160 million and $190 million as we quickly ramp our capacity to meet customer demand. As we have discussed earlier, when we deploy scaled capacity and bring large chunks of capacity online, we incur some costs prior to revenue generation. The scale of our deployments relative to our base means these costs ahead of revenue have a short-term impact on our margins. We expect our Q3 interest expense to be in the range of $350 million to $390 million, impacted by increased debt to support our demand-led CapEx growth, partly offset by an increasingly lower cost of capital.
We expect our CapEx for the third quarter to be between $2.9 billion and $3.4 billion. In addition, like last quarter, we expect stock-based compensation to remain slightly elevated throughout the year due to grants issued in connection with the IPO and incremental hiring to support our growth. Moving to full-year guidance. For the second quarter in a row, we are raising our full-year revenue guidance. For 2025, we now expect revenue in the range of $5.15 billion to $5.35 billion, a $250 million increase from our prior guidance of $4.9 billion to $5.1 billion, driven by continued strong customer demand. We expect adjusted operating income in the range of $800 million to $830 million, unchanged from our prior guidance, as we remain cost disciplined while rapidly scaling our deployments at an unprecedented rate to end the year with over 900 megawatts of active power.
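As a quick arithmetic check of the full-year revenue guidance raise described above (the guidance ranges are from the call; the midpoint comparison is our own illustration):

```python
# Full-year 2025 revenue guidance ranges, in $B, as stated on the call.
prior_low, prior_high = 4.90, 5.10
new_low, new_high = 5.15, 5.35

prior_mid = (prior_low + prior_high) / 2  # midpoint of prior guidance
new_mid = (new_low + new_high) / 2        # midpoint of raised guidance
raise_m = (new_mid - prior_mid) * 1000    # convert $B to $M

print(f"Midpoint raise: ${raise_m:.0f}M")  # Midpoint raise: $250M
```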
We expect CapEx in the range of $20 billion to $23 billion, unchanged from our prior guidance, against the backdrop of continued strong customer demand. A significant portion of our full-year CapEx will fall in Q4 due to the timing of go-live dates for our infrastructure. We had an outstanding first half of the year, and our outlook remains strong. We are entering the second half of the year in an excellent position, with strong execution in delivering at scale for our customers, strong execution in the capital markets, and a robust backlog coupled with a very healthy demand pipeline. As we move into the second half, we’ll continue investing to meet the needs of our growing customer base while reinforcing our leadership in this transformational market.
Thank you to our investors and analysts for your support and engagement. We look forward to updating you on our progress in the quarters to come. With that, we will move to Q&A.
Q&A Session
Operator: Our first question comes from the line of Keith Weiss with Morgan Stanley. Please go ahead. Hi, Sean.
Nitin Agrawal: Keith, we can’t hear you.
Deborah Crawford: Operator? Perhaps we can go to the next question and then we can come back.
Operator: Alright. Thank you, Chris. Yes. Our next question comes from the line of Kash Rangan with Goldman Sachs. Please go ahead.
Kash Rangan: Thank you very much. Congrats on a really spectacular finish to the second quarter. I’m wondering if you could talk about the renewal of the hyperscaler contracts. I think one of the two is a particularly larger one, and I’m curious to see if this means that you have greater confidence that they will renew not just expand, the interim motion, but how is it more likely that they renew the big contract that they first signed with you in 2024? And one thing for Nitin, how do you look at the tweaks that you can put into the business to achieve even better return on assets as the company continues to lower its cost of capital? If you look at the core deployment model, maybe that has something to do with how quickly you can actually take the bookings into revenue.
Clearly, you saw upside in the quarter on that front. But what are the things that the company has uncovered to continue to give you conviction that it can earn a higher and higher rate of return on your capital going forward as it translates into the revenue line item? Thank you so much.
Michael Intrator: Thank you, Kash. And I appreciate the comments on the second quarter. We’re really excited about how we’ve closed out the first half of the year. When we think about contracts with our hyperscaler clients, or, for that matter, really with any of our clients, we generally don’t focus on the concept of renewals. We focus on the concept of expansion. And the reason that we focus on the concept of expansion is because, generally speaking, the clients are purchasing hardware that is appropriately state of the art for their use case. And as new hardware comes out, as new hardware architectures are released, they tend to come back in and purchase the same top-tier infrastructure in their next renewal. And so we’re excited about renewals when we get them with our hyperscale clients, just as we’re excited about the renewals that we get with any of our clients across the board.
Nitin Agrawal: Thanks, Michael. And, Kash, to the second part of your question, there are a few things that we are already executing on. You know, you’ve seen us acquire Weights and Biases, which is our attempt to go up the stack and deliver more value-added services for our customers. You’ve seen us, with our proposed acquisition of Core Scientific, going down the stack, verticalizing, and continuing to get cost savings. We’ve talked about the anticipated fully ramped $500 million in savings, you know, by 2027. In addition to that, we also talked about how we continue to scale rapidly for our customers and continue to reduce the time from when we start deployment to when customers go online. In addition, we remain cost-conscious and disciplined across every vector in our business as we continue to scale this business at an unprecedented rate. All of those factors are working great for us as we continue to deliver great results for the company.
Kash Rangan: Thank you very much.
Operator: Our next question comes from the line of Keith Weiss with Morgan Stanley.
Keith Weiss: Alright. Can you guys hear me now? Yes. Excellent. Alright. Sorry about that technical difficulty. Congratulations on a fantastic Q2. You guys are really putting a lot of emphasis on the growth side of the equation there, so great to see. I wanted to ask one question on the demand side of the equation and one on the supply side of the equation. On the demand side, I think a lot of investors have the impression that CoreWeave is handling a lot of training revenues. But a lot of the stuff that, Michael, you were talking about in terms of what you guys are doing with the software, as well as the customers coming in the door, speaks to doing more inference, more applications being built on the platform.
So can you talk to us a little bit about sort of the mix of business that you’re seeing? And also the fungibility of the platform, the ability to handle both the pretraining and the inference workloads over time. And then on the supply side of the equation, you talked about being supply constrained. Can you give us some sense of where the most acute supply challenges are? Is it at the chip level? Is it at the power level? Like, where do you guys expect to see those constraints in the near term? And how much of that can you guys work against? Like, is there any fungibility in where you could move more quickly, and where would it take longer to solve those supply constraints?
Michael Intrator: Sure. Thank you for the question. Let me start off with some comments around our infrastructure and the way in which we see our clients consuming the compute that we’re able to provide. When we build our infrastructure, we really build it to be fungible, able to be moved back and forth seamlessly between training and inference. Right? Like, our intention is to build AI infrastructure, not training infrastructure, not inference infrastructure. It’s really infrastructure that allows our clients to support the workloads that they need to drive to be successful. We have seen a massive increase in our workloads that are being used for inference. And we’re able to monitor that by the profile of how power is being consumed within the data center.
So, when you have big training runs that come on and off, that’s a step function of power consumption, either up or down, as opposed to when you are using your compute for inference, which is much more incremental in its nature. In addition to that, the infrastructure that we’re building has increasingly been used for chain of reasoning, which is driving a substantial amount of consumption on the inference level. And that’s very exciting for us. As I always say, inference is the monetization of artificial intelligence. We are extremely excited to see that use case expanding within our infrastructure. On the second question, in terms of the supply side, you know, at the end of the day, right now, it’s the powered shells that are the choke point causing the struggle to get enough infrastructure online for the demand signals that we are seeing, not just within our company.
It’s the massive demand signals that you’re seeing across the industry. And at the end of the day, what we are looking at, and I think what you’re hearing across the board, is that this is a structurally supply-constrained market. It is a market that is really working hard to try and balance itself. There are fundamental constraints at the powered shell, at the power in terms of the electrons moving through the grid, in the supply chains for GPUs, and in the supply chains for mid-voltage transformers. There’s a lot of different pieces that are constrained. But, ultimately, the piece that is the most significant challenge right now is accessing powered shells that are capable of delivering the scale of infrastructure that our clients are requiring.
Keith Weiss: Super helpful. Thank you so much.
Operator: Our next question comes from the line of Mark Murphy with JPMorgan. Please go ahead.
Mark Murphy: Congratulations on robust RPO and backlog figures and a declining cost of capital. It’s a great combination. Michael, we’ve heard commentary that many governments around the world actually want to build their own version of Stargate, and they began to reach out. Can you comment on any developments with respect to some of the sovereign governments that wanna build modern AI data centers, and, you know, what do you think might determine whether they’re comfortable using a US-based provider such as CoreWeave? Then I have a quick follow-up.
Michael Intrator: Yeah. So it’s a very broad question, and you’re gonna have different jurisdictions, different sovereigns, that are going to react differently to that question. What we have seen is that many of the sovereigns are really looking for best-in-class technical solutions to allow them to build the infrastructure that will let their aspirations within artificial intelligence be as successful as possible. We have a tremendous number of sovereigns that are beginning to engage, discussing and talking through how to go about doing this, what technology to use, what software stack to use, where it should be placed, you know, right up and down the line. And we are very confident that we will continue to expand our footprint within the sovereign cloud universe.
There are other jurisdictions that are going to be less welcoming to tech coming out of the US, and that’s just the nature of the way the world is gonna unfold for this. We’ve had some success in Canada. We’re really excited about our partnership with Cohere up there. We think they’re doing a wonderful job, and that infrastructure is really well positioned to be successful. We’ve done a really good job expanding infrastructure across Europe. We feel like we’ll be able to reach clients across the European theater, and we look for our clients to lead us into new jurisdictions, where they will become the anchor tenants that allow us to expand the builds that we do and the software delivery systems that we create, in order to let them become as successful as they would like to be within the AI infrastructure component of the market.
Mark Murphy: Okay. Okay. Understood. And as a follow-up, I believe you said CoreWeave signed the expansion contracts with both hyperscaler customers in the past eight weeks. And just since it’s August 12, could you clarify, did you mean that those expansions are already reflected in the Q2 backlog figures? In other words, are you saying that you did incremental business in the month of June, or did you mean that those expansions were signed in July and August?
Nitin Agrawal: Thanks, Mark, for your question. One of those contracts was signed in Q2 and is reflected in the Q2 revenue backlog number. The other one was signed in Q3 and will be reflected in our revenue backlog number.
Mark Murphy: And is there any sense of scale on those, Nitin, or whether it’s core GPU services versus an extension into Weights and Biases? Or are you unable to give that kind of detail yet?

Nitin Agrawal: It does include services elements of our portfolio. We’ll give a wholesome update on the revenue backlog at the end of Q3, in our Q3 earnings.
Michael Intrator: These contracts are for GPU compute.
Mark Murphy: Excellent. Great to hear. Thank you so much.
Operator: Your next question comes from the line of Raimo Lenschow with Barclays. Please go ahead.
Raimo Lenschow: Obviously, we have this big debate out there about the imbalance of demand and supply, and you talked about it a little bit. From listening to you, it sounds more structural, i.e., something that lasts longer. Can you talk a little bit about that? Because we obviously have Microsoft saying, yeah, maybe we’re in balance soon, but then they pushed it out by another six months. Listening to you, it sounds a little bit longer. What are the data points for you on that one? Then the other follow-up I had was, as you do more inference, how important is latency, and hence the location of data centers? That’s a topic that’s coming up a lot with us. Thank you. Congrats.
Michael Intrator: Sure. So we have been unwavering in our assessment of the structural supply constraint that exists in this market. I think there are other entities that have repositioned, restated, and rethought how they are going to deliver infrastructure and when they are going to deliver infrastructure. But we have never wavered from our belief that the market is structurally supply constrained, and that is based on our discussions and relationships with the largest, most important consumers of this infrastructure in the world. And so I can’t speak to how other organizations are thinking about it. I can only speak to our position, based on our relationships with the buyers that come in looking for the specific solution that we provide.
And that is that this market has significant structural supply constraints. As far as latency goes, I would encourage you to think about latency through the lens of use case. Right? If you are in a chain-of-reasoning query, latency is not particularly important. The compute is going to be more impactful than the latency, or the relative distance to the query. If you’re in a different type of workload, latency becomes more important. Our approach has been, since the early days, to try to place our infrastructure as close to population centers as we can, in order to have the optionality associated with a low-latency solution. Having said that, as we move through this cycle of developing artificial intelligence, as we see new models coming out and chain of reasoning gaining more traction, there’s definitely going to be significant demand for latency-insensitive workloads that will be able to live in more remote regions.
Raimo Lenschow: Very clear. Thank you. Congrats.
Michael Intrator: Thank you.
Operator: Our next question comes from the line of Brad Zelnick with Deutsche Bank. Please go ahead.
Brad Zelnick: Great. Thanks so much, and I’ll echo my congrats as well. My question follows Keith’s and Raimo’s about inference. How should we think about the economics of inferencing versus training? And then I have a follow-up as well. Thanks.
Michael Intrator: Yeah. So look. For our business model, the economics of inference consumption and training consumption are identical. The overwhelming majority of our infrastructure, and I spoke about this in our last earnings call, has been sold in long-term structured contracts in order to deliver compute to clients that need to consume it for training and for inference over time. And so we don’t see a real fluctuation in the economics associated with inference or training. Having said that, I think it stands to reason that when a new model is released, and there is a rush to explore the new model, to use it, to drive new queries into it, you will see a spike in demand within a given AI lab that may cause a spike in the short-term pricing associated with inference.
And we see those, but as we’ve said before, the on-demand component of compute is a very small percentage of our overall workloads. And we are observing inference cases on older generations of hardware, the A100s and the H100s: they’re still being recontracted out. They’re being bought on term in order to serve the inference loads that people continue to have and continue to see, and that need compute to be served.
Brad Zelnick: Thanks. And that actually relates to my follow-up question, because in your prepared remarks, you talked about flexible capacity products coming online in the back half, and a spot product in customer preview. Can you just expand on what that looks like, maybe how it differs from what you’re already offering? What GPU generations will be available? And how might we think about pricing versus the reserved or take-or-pay style contracts that you more typically do?
Michael Intrator: Yeah. So look. We’re going to continue to build up our on-demand and spot pricing offer. Right? It’s going to take time. The biggest challenge that we have is that every time we’re able to build capacity, it is immediately consumed by an existing or a new client that wants to expand their exposure to additional compute to serve their models. And so that has been a continual challenge for us. I guess it’s a good problem to have, but it’s a problem for us. But for that product, we’re working diligently to expand capacity so that we’re able to provide more of a spot offering. A big part of that, just so you understand, is so that we can identify new users of compute, identify new companies that are coming into existence.
Identify new use cases that need compute, so that we can build services that are appropriate for them, that allow them to build their businesses and sell their product into the market. And so we really want to be able to do that. We want to have that offering, but it is challenging in a market that is so supply constrained.
Brad Zelnick: Makes a lot of sense. Thank you.
Operator: Your next question comes from the line of Michael Turrin with Wells Fargo. Please go ahead.
Michael Turrin: Hey, great. Thanks very much. Appreciate you taking the question. You mentioned we’ll see some variability on the backlog number. $30 billion, nearly 2x where you were a year ago, but also fairly consistent with where you were last quarter when you add in the OpenAI expansion. So I think it would just be useful, as we’re all getting to know CoreWeave, Inc. Class A Common Stock, if you could help us calibrate a bit more on what to expect from that metric going forward. How often is it the case that you can find a customer at a scale that moves the needle sequentially there, and where does that $30 billion sit relative to the opportunities you still see in front of you? Thanks very much.
Michael Intrator: Thank you. So, first of all, it’s important to understand that the demand for compute that we’re seeing from our largest, most important clients is expanding in scale and in magnitude. This is a planetary rebuild of the infrastructure that they require in order to deliver their products to the market. And so when we’re looking at our pipeline, and at the contracts in that pipeline that we are working on, they are extremely significant. They will move the needle. Having said that, these contracts are heavily negotiated, and they do take a significant amount of time to move through the cycle, to make sure that everything is done correctly so that we can successfully deliver the product and quality that our clients require.
And so we think we are going to continue to see step functions in compute as these large clients take large blocks of compute over long periods of time from CoreWeave, Inc. Class A Common Stock. They like our product. They like the way we deliver compute. They like the performance of the compute, and they will continue to buy from us as long as we can continue to build and deliver this infrastructure.
Michael Turrin: Thank you.
Operator: Our next question comes from the line of Gregg Moskowitz with Mizuho. Please go ahead.
Gregg Moskowitz: Great. And I’ll add my congratulations. In Q2, can you give us a sense of how successfully you were able to repurpose older GPU clusters that had come off contract? Any changes today vis-a-vis how this was trending around the start of the year?
Michael Intrator: Yeah. It’s a great question. So what we are seeing is the infrastructure that is rolling off these contracts being recontracted out for additional term, in order to continue delivering that compute, largely for inference. And so we’re talking about the H100s. We’re talking about the A100s. We’re talking about delivery of this compute into contracts that are anywhere between one and three years in extension after the initial contract is over. And so we’re pretty excited about that. We’ve also seen things, and this came up on the last call, where the OpenAI contract was contracted out for five years with two additional one-year extensions, which also provides a significant amount of transparency into how people view the run-out of compute as it becomes an older generation.
Gregg Moskowitz: Very helpful. Thank you.
Operator: Our next question comes from the line of Tyler Radke with Citi. Please go ahead.
Tyler Radke: Yes. Thank you for taking the question. Two for me. So one question just on timing. You talked about the big CapEx ramp in Q4. Obviously, the revenue guide also implies a pretty big step up in Q4. Can you just help us understand the timing aspect there, particularly with CapEx a little bit lighter than we expected in Q2? Is this simply delays related to Blackwell, or is it specific to contracts that you’re expecting to ramp in Q4? And then the second question, just on the cost side, Nitin, you did highlight some increased costs. Obviously, you took up the revenue guide but left operating income unchanged for the full year. So could you just elaborate: what are those specific costs that you have to incur ahead of these contracts? What is coming in a bit higher than you expected? That would help us out in thinking through the mechanics. Thank you.
Michael Intrator: Yeah. So I’ll take the first one, and then Nitin can follow up on the cost side. So when we’re looking at our build-out and ramp, it goes through a series of necessary steps. Right? And so the way this is going to work is, we are going to build, from where we are right now, an additional 400-plus megawatts of power into our online and delivered compute and power. That is followed by the CapEx spend when the power is available, which is then followed by the revenue. And so we are very comfortable with the ramp that we see in front of us in order to deliver the 900-plus megawatts of power. It is going to be backloaded, as Nitin said, as we go through Q4. We knew that it was going to be backloaded as we came in. And we’re watching the build-out and scaling of that infrastructure very systematically as we continue to move through the year.
Nitin Agrawal: Thanks, Michael. And the other piece I would add is that we’ve been operationally preparing for this ramp-up in executing and delivering this power by the end of the year. We are ready to go through that exercise at this moment. When we think about the costs in particular, we do incur costs, especially those associated with data center lease expenses coming online, as we deploy this infrastructure and get it ready for our customers before we start generating revenue on that infrastructure. That does create a timing mismatch, especially when you’re adding capacity at the unprecedented scale we are adding, which is what you see reflected in our margin profile for the short duration as these customer contracts ramp up and the infrastructure associated with them is delivered to these customers.
Tyler Radke: Thank you.
Operator: Your next question comes from the line of Brad Sills with Bank of America. Please go ahead.
Brad Sills: Oh, wonderful. Thank you so much, and I’ll echo the congratulations on a real solid Q2 here. I wanted to ask about the different segments of the market. You can think about it as the big three: the big enterprise AI companies, or hyperscalers, if you will. And then you have the AI labs, and then the enterprises themselves. As you’ve been embarking on this more product-led, software-led sale with Weights and Biases and the investment in Kubernetes, are you seeing more of the pipeline, more of the business, coming in weighted towards that next wave of AI labs and then eventually enterprises themselves? Any commentary on just the end segments? Thank you.
Michael Intrator: Yeah. We’re seeing incredibly broad-based demand for the compute, and it is coming from the massive labs. Right? Like, that’s clear. But what is probably less recognized in the market is that you’re getting real green shoots in the sectoral growth of different parts of this market. We tried to make a little bit of a reference to this in the initial statement. Right? Within VFX, we saw companies like Moon Valley. That’s incredible. Right? It’s a new area of real growth from a new lab that’s building products for a different part of the market, and that’s really exciting. The financial players are really great to see. That’s an enterprise type of client, but it also represents a different use for the actual computing infrastructure.
It’s an uncorrelated revenue source for us that we’re really excited about. The big banks are starting to really show up, and they are massive consumers of compute. And then the productization within the enterprise, companies like Hippocratic AI, these are really representative of different parts of the economy starting to adopt AI and using it to deliver services to other parts of the economy. And we think that’s tremendously exciting. Keep in mind that you’ve got a scale problem. Right? When you have a company, or an entity, like OpenAI consuming compute, they’re just doing it at an order of magnitude that these other companies have not achieved yet. And so we’re excited to see the green shoots.
We think it’s fantastic. We love that it is broadening the consumption of compute. But we are also well aware that, for the time being, these really large consumers of compute will dominate the client component of our pipeline. One of the things that we’re incredibly excited about is how Weights and Biases has impacted our pipeline. Right? Weights and Biases brings in 1,600 new clients. They bring in clients like British Telecom. It’s fantastic to see us forming relationships with these enterprise clients that are experimenting with, learning, or integrating AI into what they do, and it gives us an opportunity to position ourselves to become a supplier of the software and the hardware that they’re going to require to be successful.
Brad Sills: Super exciting. Thank you, Michael. And then one more, if I may, please. I know cost of debt is a big focus for you. Congratulations on the last two capital raises bringing that down. As equity investors, we’re not in those conversations with creditors. Would love to get your sense from those conversations: what are the puts and takes that are driving that cost of debt down? Thank you so much.
Michael Intrator: It’s hard to overstate how excited I am about the progress that we have been able to make within the capital markets, within the debt markets. In a very fundamental way, what we have been doing is bringing to bear the largest part of the capital markets, the debt markets, on the problem of building and scaling infrastructure. And it is an absolute necessity that that part of the market, those pools of capital, is able to come to bear, because of the size, scale, and cost of what needs to be done to build compute at a planetary scale. And so it’s just incredible progress, and I’m incredibly proud of the team here that has been able to deliver the quality of infrastructure that lenders can understand, get their arms around, and underwrite.
And it’s been three deals that have really driven this step function. There were two deals within the high-yield space, and then there was the new DDTL, the delayed draw term loan. And that came in at SOFR plus 400. That entirely changes the economics that are embedded in the contracts that we are delivering to the market. And it is a step function of massive importance. When Nitin is able to say, hey, we were able to drop our non-investment-grade borrowing costs by 900 basis points, that’s a seismic-level shift in the cost of capital.
Deborah Crawford: Sorry, we have time for one last question. Thank you, Michael. Operator?
Operator: And your final question comes from the line of Mike Cikos with Needham and Company. Please go ahead.
Mike Cikos: Great. Thanks for taking the questions and squeezing me in here, guys. A bit of a two-parter. I just wanted to come back to Weights and Biases. Great to hear about the 1,600 clients and how that’s impacting the pipeline. I imagine this goes hand in hand with some of the increased OpEx investment we’re seeing, but can you talk about where we are on the sales front, as far as getting in front of those customers? You obviously had the Fully Connected developer event, but how is that tracking? And then the second piece I wanted to ask about was on-demand and spot availability. Again, I think, Michael, you had said it’s a high-quality problem, for sure, that as soon as capacity comes off contract, it’s going back on. Would you need on-demand and spot availability to increase as another avenue for new logo acquisition, to tie to inference with these newer customers, or not necessarily? Thank you.
Michael Intrator: Yeah. So look. The integration between the two companies, CoreWeave and Weights and Biases, has been fantastic. It’s been one of the things that is so exciting to the legacy CoreWeave, Inc. Class A Common Stock employees and the legacy Weights and Biases employees. I think that putting two organizations into a room that are so incredibly focused on the clients leads to incredible outcomes. And there have been three real products that have been developed by the combination of the Weights and Biases and the CoreWeave, Inc. Class A Common Stock teams. Right? And so we’ve integrated Weights and Biases into Mission Control, which gives historical Weights and Biases clients the ability to now access the incredible observability in the Mission Control product to improve the performance of their use of the AI infrastructure.
We’ve built a Weights and Biases inference product, which once again allows them an incredible amount of control over how they’re using compute and what their use of compute is impacting, really from the data center all the way up. And that level of transparency and clarity is just an incredible differentiator for the services that we provide. It differentiates our compute from other providers. And that’s been a huge step function. And then the final product is the Weave product that has been pushed out by Weights and Biases and CoreWeave, Inc. Class A Common Stock, which will allow folks to make massive strides in optimizing all the way from their GPU through the code in their model, to be able to drive performance.
And so that’s super exciting for us. And we do think that it will lead to increased traction. We have always been an organization that has leaned on the concept of land and expand. We get a client to come in and try our infrastructure. The incredible performance of our infrastructure leads to a deeper relationship, leads to larger contracts, leads to a broadening of how they use us to drive their business. And that’s a tried-and-true approach. I’ll go back to Moon Valley and the VFX side of the house. It’s the same concept as when we acquired Conductor. It’s how you go about introducing yourself to a new area of the market, build upon it, expand upon it, and bring in new clients. And so that acquisition is really moving along exactly how we had hoped it would.
With regards to your question around on-demand and spot, I tried to say this earlier, but I’ll go about it a different way: different use cases approach compute in different ways. They need different things. The profile of the compute, the tooling that they’re going to require, all of those things are incredibly important. And by having on-demand, what you’re doing is creating a portion of our infrastructure that new players can use to build new product and open up new markets. And we really want to continue to expand our footprint there, because we think it’s important. We were very successful with it in the early days of our growth, and we think that it’s an important part of what we will need to do to continue to provide such incredible productization of our compute over time.
Mike Cikos: Thank you very much.
Michael Intrator: Alright. So thank you for joining the earnings call with us. We appreciate your questions coming through on the Q2 and your interest. Our standout second quarter results reflected continued execution across every dimension of the business. We’re scaling rapidly to meet unprecedented demand for our purpose-built AI cloud platform, which continues to lead the industry in both performance and scale. We remain confident that 2025 will be a landmark year for CoreWeave, Inc. Class A Common Stock. Our momentum is real, our strategy is working, and we are just getting started. Thank you again for joining us today. I look forward to speaking to you at the next quarterly earnings call. Thank you.
Operator: Thank you again for joining today. This concludes the CoreWeave, Inc. Class A Common Stock second quarter 2025 earnings call. You may now disconnect.