IREN Limited (NASDAQ:IREN) Q1 2026 Earnings Call Transcript November 7, 2025

Operator: Thank you for standing by, and welcome to the IREN Q1 FY ’26 Results Briefing. [Operator Instructions] I would now like to hand the conference over to Mike Power, VP of Investor Relations. Please go ahead.

Mike Power: Thank you, operator. Good afternoon, and welcome to IREN’s Q1 FY ’26 Results Presentation. I’m Mike Power, VP of Investor Relations. And with me on the call today are Daniel Roberts, Co-Founder and Co-CEO; Anthony Lewis, CFO; and Kent Draper, Chief Commercial Officer. Before we begin, please note this call is being webcast live with a presentation. For those who have dialed in by phone, you can ask a question via the moderator after our prepared remarks. I’d also like to remind you that certain statements that we make during the conference call may constitute forward-looking statements, and IREN cautions listeners that forward-looking information and statements are based on certain assumptions and risk factors that could cause actual results to differ materially from the expectations of the company.

Listeners should not place undue reliance on forward-looking information or statements, and I’d encourage you to refer to the disclaimer on Slide 2 of the accompanying presentation for more information. With that, I’ll now turn over the call to Dan Roberts.

Daniel Roberts: Thanks, Mike, and thank you all for joining us for IREN’s Q1 2026 Earnings Call. Today, we’ll provide an overview of our financial results for the first fiscal quarter ended September 30, 2025, highlight key operational milestones and, importantly, discuss how our AI cloud strategy is driving strong growth. We’ll then open the call for questions at the end. So, Q1 FY ’26 results. Fiscal year 2026 is off to a really good start. We delivered a fifth consecutive quarterly increase in revenues and a strong bottom line. Revenue reached $240 million and adjusted EBITDA was $92 million, noting, of course, that net income and EBITDA importantly reflected an unrealized gain on financial instruments. This performance reflects the team’s disciplined execution along with the benefits of having a resilient, vertically integrated platform.

Microsoft and the cloud contract. So earlier this week, we announced a $9.7 billion AI cloud contract with Microsoft, a defining milestone for our business that underscores the strength and scalability of our vertically integrated AI cloud platform. The agreement not only validates our position as a trusted provider of AI cloud services, but also opens up access to a new customer segment among the global hyperscalers. Under this 5-year contract, IREN will deploy NVIDIA GB300 GPUs across 200 megawatts of data centers at our Childress campus. The agreement includes a 20% upfront prepayment, which helps support capital expenditures as they become due through 2026. The contract is expected to generate approximately $1.94 billion in annual recurring revenue.

Beyond the obvious positive financial impact, the contract carries significant strategic value for us. It not only positions IREN as a contributor towards Microsoft’s AI road map, but also demonstrates to the market our ability to serve an expanded customer base, which includes a range of model developers, AI enterprises and now one of the largest technology companies on the planet. As enterprises and other hyperscalers accelerate their AI build-out, we expect that our combination of power, AI cloud experience and execution capability will continue to position us as a partner of choice. Looking ahead, we’re executing now on a plan that will see our GPU fleet scale from 23,000 GPUs today up to 140,000 GPUs by the end of 2026. When fully deployed, this expansion is expected to support in the order of $3.4 billion in annualized run rate revenue.

Importantly, this expansion leverages just 16% of our 3 gigawatts of secured power, leaving ample capacity for future expansion. With that overview in mind, let’s turn to the next section, a closer look at our AI cloud platform and how we’re positioned to scale in the years ahead. So as I alluded to earlier, a key driver of IREN’s competitive advantage in AI cloud services is our vertical integration. We develop our own greenfield sites, engineer our own high-voltage infrastructure, build and operate our own data centers and deploy our own GPUs. Simply put, we control the entire stack from the substation all the way down to the GPU. We believe strongly that this end-to-end integration and control is a key differentiator that positions us for significant growth.

This model of vertical integration eliminates dependence on third-party colocation providers and, most importantly, removes the associated counterparty risk. This allows us to commission GPU deployments faster with full control over execution and uptime. For our customers, this translates into scalability, cost efficiency and superior customer service, with tighter control over performance, reliability and delivery milestones, driving tangible value and certainty. For those reasons, our customers, including Microsoft, view IREN as a strategic partner in delivering cutting-edge AI compute, recognizing our deep expertise in designing, building and operating a fully integrated AI cloud platform. On that note, we’re excited to announce a further expansion of our AI cloud service, targeting a total of 140,000 GPUs by the end of 2026.

This next phase includes the deployment of an additional 40,000 GPUs across our Mackenzie and Canal Flats campuses, which are expected to generate in the order of $1 billion in additional ARR. When combined with the $1.9 billion expected from the Microsoft contract and $500 million from our existing 23,000 GPU deployment, this expansion provides a clear pathway to approximately $3.4 billion in total annualized run rate revenue once fully ramped. Importantly, this incremental 40,000 GPU build-out will be executed in a highly capital-efficient manner through leveraging existing data centers. While we have not yet purchased GPUs for the deployment, we continue to see strong demand for air-cooled variants of NVIDIA’s Blackwell GPUs, including both the B200 and the B300.

And given their efficient deployment profile, we expect these to form the basis of this expansion. That said, we will continue to monitor customer demand closely and pursue growth in a disciplined, measured way. This full expansion to 140,000 GPUs will only require about 460 megawatts of power, representing roughly 16% of our total secured power portfolio. This leaves substantial optionality for future growth and importantly, continued scalability across our portfolio. The key takeaway here is that we have substantial near-term growth being actively executed upon, but also have significant and additional organic growth ahead of us. Turning now to Slide 8, which highlights the British Columbia data centers supporting our expansion to 140,000 GPUs. At Prince George, our ASICs to GPU swap-out program is progressing well.
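The run-rate and power figures above can be sanity-checked with some quick arithmetic (all inputs are as quoted on the call; this is an illustrative check, not company guidance):

```python
# Figures as quoted on the call (USD billions / megawatts), not
# independently verified.
microsoft_arr = 1.9   # expected ARR from the Microsoft contract
expansion_arr = 1.0   # additional ARR from the 40,000-GPU expansion
existing_arr = 0.5    # ARR from the existing 23,000-GPU deployment

total_arr = microsoft_arr + expansion_arr + existing_arr
print(f"Total annualized run-rate revenue: ~${total_arr:.1f}B")  # ~$3.4B

# 140,000 GPUs are said to need ~460 MW, roughly 16% of secured power,
# implying a secured portfolio of about 460 / 0.16 ≈ 2,900 MW (~3 GW).
implied_portfolio_mw = 460 / 0.16
print(f"Implied secured power portfolio: ~{implied_portfolio_mw:,.0f} MW")
```

The implied portfolio figure lines up with the "3 gigawatts of secured power" cited earlier in the remarks.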


The same process will soon extend to our Mackenzie and Canal Flats campuses, where we expect to migrate ASICs to GPUs with similar efficiency and speed. Together, these sites allow us to fast-track our growth in supporting high-performance AI workloads, scaling into what is becoming one of the largest GPU fleets in North America. Turning to Childress, where we are now accelerating the construction of Horizons 1 to 4 to accommodate the phased delivery of NVIDIA GB300 NVL72 systems for Microsoft. We’ve significantly enhanced our original design specifications to meet hyperscale requirements and also further ensure durable long-term returns from our data center assets. The facilities have been engineered to Tier 3 equivalent standards for concurrent maintainability, ensuring continuous operations even during maintenance windows.

A key feature of this next phase is the establishment of a network core architecture capable of supporting single 100-megawatt super clusters, a unique configuration that enables high-performance AI training for both current and next-generation GPUs. We’re also incorporating flexible rack densities ranging from 130 to 200 kilowatts per rack, which allows us to accommodate future chip generations and evolving power and density requirements without major structural upgrades. While these design enhancements have resulted in incremental cost increases, they provide long-term value protection, enabling our data centers to support multiple GPU generations and reducing the recontracting risk typically associated with lower-spec builds. In short, we’re building Childress not just for today’s GPUs and the Microsoft contract in front of us, but also for the next generations of AI compute.

Beyond the accelerated development of Horizons 1 through 4, the remaining 450 megawatts of secured power at Childress, as you can see in the image on screen, provides substantial expansion potential for future horizons numbered 5 through 10. Design work is underway to enable liquid-cooled GPU deployments across the entire site, positioning us to scale seamlessly alongside customer demand. Finally, turning to Sweetwater, our flagship data center hub in West Texas, which has been somewhat overshadowed in recent months by the activity in Childress and Canada. At full build-out, Sweetwater will support up to 2 gigawatts (2,000 megawatts) of gross capacity, all of which has been secured from the grid. As shown in the chart, this single hub rivals and, in most cases, exceeds the entire scale of total data center markets today.

While recent headlines have naturally been dominated by our AI cloud expansion at other sites, Sweetwater is a pretty exciting platform asset, giving us the capability to continue servicing the wave of AI compute demand. Sweetwater 1 energization remains on schedule, with more than 100 people mobilized on site to support construction of what is becoming one of the largest high-voltage data center substations in the United States. All exciting stuff. With that, I’ll now hand over to Anthony, who will walk through our Q1 FY ’26 results in more detail.

Anthony Lewis: Thanks, Dan, and thanks, everyone, for your attendance today. Continued operational execution was reflected in another quarter of strong financial performance. Q1 FY ’26 marked our fifth consecutive quarter of record revenues, with total revenue reaching $240 million, up 28% quarter-over-quarter and 355% year-over-year. Operating expenses increased primarily on account of higher depreciation, reflecting ongoing growth in our platform, and higher SG&A, the latter primarily driven by a materially higher share price, resulting in acceleration of share-based payment expense and a higher payroll tax expense associated with employees. Net income and EBITDA were both significantly up, largely on account of unrealized gains on prepaid forward and capped call transactions entered into in connection with our convertible note financings.

Adjusted EBITDA was $92 million, reflecting continued margin strength, partially offset by the higher payroll tax of $33 million accrued in the quarter on account of strong share price performance. Turning now to our recently announced AI cloud partnership with Microsoft. As Dan mentioned, this is a very significant milestone for IREN. It not only delivers strong financial returns, but also creates a significant long-term strategic partnership for the business. Focusing on the financials: the $9.7 billion contract is expected to deliver approximately $1.9 billion in annual revenue once the 4 phases come online, with an estimated 85% project EBITDA margin. This strong margin, which reflects our vertically integrated model, incorporates all direct operating expenses across both our cloud and data center operations supporting the transaction, including power, salaries and wages, maintenance, insurance and other direct costs.

These cash flows deliver an attractive return on the cloud investment (the $5.8 billion of CapEx for the GPUs and ancillaries) after deducting an appropriate internal colocation charge, ensuring that the project delivers robust cloud returns as well as an attractive return on our long-term investment in the Horizon data centers, which will deliver returns for many years into the future. The transaction also has a number of features that allow us to undertake it in a capital-efficient way. Firstly, the payments for the CapEx are aligned with the phased delivery of the GPUs across calendar year ’26 as we deliver those 4 phases. Secondly, the $1.9 billion in customer prepayments (20% of total contract revenue, paid in advance of each tranche) provides circa 1/3 of the funding requirement at the outset.
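The prepayment figures reconcile as follows (a rough check using the numbers quoted on the call; treating the prepayment as a straight 20% of total contract value is an assumption for illustration):

```python
# Numbers as quoted on the call; the straight-percentage treatment of
# the prepayment is an assumption for illustration.
contract_value_b = 9.7    # total contract value, USD billions
term_years = 5
prepayment_rate = 0.20
gpu_capex_b = 5.8         # GPUs and ancillaries, USD billions

annual_revenue_b = contract_value_b / term_years      # ~$1.94B per year
prepayment_b = prepayment_rate * contract_value_b     # ~$1.94B upfront
share_of_capex = prepayment_b / gpu_capex_b           # ~0.33, i.e. circa 1/3

print(f"Annual revenue: ~${annual_revenue_b:.2f}B")
print(f"Prepayment: ~${prepayment_b:.2f}B (~{share_of_capex:.0%} of CapEx)")
```

This matches both the $1.94 billion annual recurring revenue figure in the prepared remarks and the "circa 1/3 of the funding requirement" point above.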

Thirdly, the combination of the latest generation of GPUs and the very strong credit profile of Microsoft should allow us to raise significant additional funding, secured against the GPUs and the contracted cash flows, on attractive terms. While the final outcome will be subject to a range of considerations and factors, we are targeting circa $2.5 billion through such an initiative. And depending on final terms and pricing, there is meaningful upside to that, noting again the very high quality of our counterparty. We also have a range of options available to fund the remaining $1.4 billion, including existing cash balances, operating cash flows and a mix of equity, convertible notes and corporate instruments. On that note, turning more generally to CapEx and funding.

We continue to focus on deepening our access to capital markets and diversifying our sources of funding. We issued $1 billion in zero-coupon convertible notes during October, an offering which was extremely well supported. And we also secured an additional $200 million in GPU financing to support our AI cloud expansion in Prince George, bringing total GPU-related financings to $400 million to date at attractive rates. Taking into account recent fundraising initiatives, our cash at the end of October stood at $1.8 billion. Our upcoming CapEx program, which includes the construction of the Horizon data centers for the Microsoft transaction, will be met from a combination of the strong starting cash position, operating cash flows, the Microsoft prepayments, as just noted, and other financing streams that are underway.

These include the GPU financing facilities that we discussed, as well as a range of other options under consideration, from other forms of secured lending against our fleet of GPUs and data centers through to corporate-level issuance, whilst preserving an appropriate balance between debt and equity to maintain a strong balance sheet. With that, we’ll now turn the call over to Q&A.

Q&A Session

Operator: [Operator Instructions] The first question today comes from Nick Giles from B. Riley Securities.

Nick Giles: I want to congratulate you on this significant milestone with Microsoft. This was really great to see. I have a 2-part question. Dan, you mentioned strategic value, and I was first hoping you could expand on what this deal does from a commercial perspective. And then secondly, I was hoping you could speak to the overall return profile of this deal and how you think about hurdle rates for future deals.

Daniel Roberts: Sure. Thanks, Nick. I appreciate the ongoing support. So in terms of the strategic value, I think undoubtedly, proving that we can service one of the largest technology companies on the planet has a little bit of strategic value. But below that, the fact that this is our own proprietary data center design, and we’ve designed everything from the substation down to the nature of the GPU deployment and that has been deemed acceptable by a $1 trillion company, I think that’s got a bit of strategic value, both in terms of demonstrating to capital markets and investors that we are on the right track, but also importantly, in terms of the broader customer ecosystem and that validation. And look, we’ve seen that play out over the days since the announcement.

In terms of hurdle rates and returns, I think it’s worth Anthony jumping in on this. I think it’s fair to say that IRRs, hurdle rates and financial models have dominated our lives for the last 6 weeks. So there’s probably a little bit we can outline in this regard.

Anthony Lewis: Sure. Thanks, Dan, and thanks for the question. Just in terms of the returns on the transaction: as I noted in the introductory comments, when we look at the cloud returns, we take away what we think to be an arm’s-length colocation rate, effectively charging the deal for the cost of the data center capacity. After we take that into account, on an unlevered basis and assuming zero cash flows or residual value (RV) associated with the GPUs after the term of the contract, we expect an unlevered IRR in the low double digits. Obviously, we’ll be looking to add some leverage to the capital structure for the transaction, as we also discussed. And once we take that target $2.5 billion of additional leverage into account, you’re achieving a levered IRR in the order of circa 25% to 30%.

Obviously, that assumes the $2.5 billion package, and it also assumes that the remaining funding comes from equity as opposed to other sources of capital, which we might also have access to. I’d also note, as we said, there might well be upside on that $2.5 billion. At a $3 billion leverage package against the GPUs on a secured financing basis, you could see that levered return increase by circa 10%. In terms of the RV, in those numbers we’re reflecting zero economic value in the GPUs at the end of the term. If, for example, you were to assume a 20% RV, that has a material impact: unlevered IRRs would increase to the high teens, and your levered IRRs would be somewhere between 35% and 50%, depending on your leverage assumptions.

Daniel Roberts: Yes. Maybe just to jump in as well. Thanks, Anthony. That’s all absolutely correct. And there are a lot of numbers in there, which is demonstrative of the amount of time we’ve spent thinking about IRRs. So just to reiterate a couple of points. One is that we’ve clearly divided our business segments into stand-alone operations for the purposes of assessing risk and return on a prospective transaction. So to be really clear, all of those AI cloud IRRs assume a colocation charge; they assume a revenue line for our data centers. Internally, we’ve assumed our data centers earn $130 per kilowatt per month, escalating, which is absolutely a market rate of return, particularly considering the first 5 years is underwritten by a hyperscale credit.

So that’s probably the first point I’d make. But it’s also really important to mention that we’ve optimized elsewhere. On the 76,000 GPUs that we’ve procured for this contract at a $5.8 billion price, Dell have really looked after us, to the point where there’s an in-built financing mechanism in that contract whereby we don’t have to pay for any GPUs until 30 days after they’re shipped. So there are further enhancements there. And the final point I’d reiterate is this 20% prepayment, which I don’t believe we’ve seen elsewhere, accounts for 1/3 of the entire CapEx of the GPU fleet. We’ve been asked previously why we would prefer to do AI cloud versus colocation. As one small data point, we are getting paid 1/3 of the CapEx upfront here, as compared to having to give away big chunks of equity in our company to get access to a colocation deal.

So we’re really pleased that this leads us towards that $3.4 billion in ARR by the end of 2026, on returns that are pretty attractive. Yes, it’s a good result.

Nick Giles: Anthony, Dan, I really appreciate all the detail there. One more, if I could. I was just wondering if you could give us a sense for the number of GPUs that will ultimately be deployed as part of the Microsoft deal. And then as we look out to year 6 and beyond, I mean, can you just speak to any of the kind of future-proofing you’ve done of the Horizon platform and what can ultimately be accommodated in the long term for future generations of chips?

Kent Draper: I’m happy to jump in and take that one, Dan. So in terms of the number of GPUs to service this contract, I’d draw your attention to some of our previous releases, where we’ve said that each phase of Horizon would accommodate 19,000 GB300s. And obviously, we’re talking about 4 phases here. In terms of future-proofing of the data centers, there are a number of elements to it, but the primary one is that we have designed for rack densities capable of handling well in excess of the GB300 rack architecture. To give you specific numbers, the GB300s draw around 135 kilowatts per rack for the GPU racks, and our design at the Horizon facilities can accommodate up to 200 kilowatts per rack.

So that is the primary area where we have future-proofed the design. But as Dan also mentioned in the remarks on the presentation, we have enhanced the design in a number of ways, including effectively what is full Tier 3 equivalent concurrent maintainability. So yes, there are a number of elements that have been accommodated into the data centers to ensure that they can continue to support multiple generations of GPUs.
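Kent’s per-phase figure and rack-density numbers imply the following (quick arithmetic on the figures quoted in the call, for illustration only):

```python
# Per-phase GPU count and rack densities as quoted on the call.
phases = 4
gpus_per_phase = 19_000            # GB300s per Horizon phase
total_gpus = phases * gpus_per_phase
print(total_gpus)                  # 76,000, matching Dan's earlier figure

# Rack-density headroom: GB300 racks draw ~135 kW; the Horizon design
# accommodates up to 200 kW per rack.
headroom = (200 - 135) / 135
print(f"~{headroom:.0%} per-rack power headroom")  # ~48%
```

The roughly 48% of per-rack power headroom is what underpins the claim that the Horizon facilities can absorb future, denser chip generations without major structural upgrades.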

Nick Giles: Very helpful, Kent. Guys, congratulations again and keep up the good work.

Operator: The next question comes from Paul Golding from Macquarie.

Paul Golding: Congrats on the deal and all the progress with HPC. I wanted to ask, I guess, just a quick follow-on to the IRR question. Just on our back of the envelope math, it looks like pricing per GPU hour may be on the rise or at the higher end of that $2 to $3 range, assuming full utilization, so presumably potentially even higher. How should we think about the pricing dynamics in the marketplace right now on cloud given the success of this deal? And what seems to be fairly robust pricing? And then I have a follow-up.

Daniel Roberts: Sure.

Kent Draper: You go ahead, Dan.

Daniel Roberts: Look, I’ll let Kent talk a bit more about the market dynamic, but it is absolutely fair to say that we’re seeing a lot of demand, and that demand appears to increase month-on-month. In terms of the specific dollars per GPU hour, we haven’t specified that exactly. However, we have tried to give a level of detail in our disclosures which allows people to work through that. Importantly for us, rather than focusing on dollars per GPU hour (and I think your statement is correct), the focus is on the fundamental risk-return proposition of any investment. And when we’ve got the ability to invest in an AI cloud delivering what is likely to be in excess of 35% levered IRRs against the Microsoft credit, I mean, you kind of do that every day of the week.

Kent Draper: Yes. Thanks, Dan. And Paul, with regard to your specific question around demand, we continue to see very good levels of demand across all the different offerings we have. The air-cooled servers that we are installing in our facilities in Canada lend themselves very well to customers who are looking for 500 to 4,000 GPU clusters and want the ability to scale rapidly. As we’ve discussed before, transitioning those existing data centers over from their current use case to AI workloads is a relatively quick process, and that allows us to service the growth requirements of customers in that class very well. And case in point, we’ve been able to pre-contract a number of the GPUs that we purchased for the Canadian facilities well in advance of them arriving at the sites.

And this is something that customers have historically been pretty reticent to do, but that level of demand exists in the market, along with ongoing trust in the credibility of our platform among both existing and new customers, which is allowing us to pre-contract a lot of that capacity. And then obviously, with respect to the Horizon 1 build-out for Microsoft, this is the top-tier liquid-cooled capacity from NVIDIA. We continue to see extremely strong demand for that type of capacity. And the fact that we are able to offer it means that we can genuinely serve all customer classes: hyperscalers, the largest foundational AI labs and largest enterprises with the liquid-cooled offering, down to top-tier AI start-ups and smaller-scale inference and enterprise users at the BC facilities.

Paul Golding: As a follow-up, as we look out to Sweetwater 1 energization coming up fairly soon here in April, are you able to speak to any inbound interest you’re getting on cloud at that site? I know it’s early days just from a construction perspective, maybe for the facilities themselves. But any color there and maybe whether you would consider hosting at that site given the return profile and potential cash flow profile that you would get from engaging in the cloud business over a period of time?

Kent Draper: Yes. In terms of the level of interest and discussions that we’re having, we’re seeing a strong degree of interest across all of the sites, including Sweetwater as well. Obviously, very significant capacity is available at Sweetwater, as Dan mentioned, with initial energization there in April 2026, which is extremely attractive in terms of the scale and time to power. So I think it’s very fair to say that we’re seeing strong levels of interest across all the potential service offerings. As it relates to GPU as a Service and colocation, as previously stated, we will continue to do what we think is best in terms of risk-adjusted returns. Anthony outlined the risk-adjusted returns that we’re seeing in GPU as a Service specifically at the moment.

And as we’ve outlined over the past number of months, that does look more attractive to us today. But as we continue to see increasing supply-demand imbalance within the industry, that may well feed through into colocation returns where it makes sense to do that in the future. But as it stands today, certainly, the return profile that we’re seeing in GPU as a Service, we think is incredibly attractive.

Operator: The next question comes from Brett Knoblauch from Cantor Fitzgerald.

Brett Knoblauch: On the $5.8 billion, call it, order from Dell, can you maybe parse out how much of that is allocated to GPUs and the ancillary equipment? And on the ancillary equipment, say, you wanted to retrofit the Horizon data centers with new GPUs in the future, do you also need to retrofit the ancillary equipment?

Kent Draper: So out of that total order amount, I mean, it’s fair to say the GPUs constitute the vast majority of it, but there are substantial amounts in there for the back-end networking for the GPU clusters, which is the top-tier InfiniBand offering that’s currently available. In terms of future-proofing, we’ll have to see how much of that equipment may or may not be reusable for future generations of GPUs. As I was referring to earlier, the vast majority of our data center equipment, and the way that we have structured the rack densities within the data center, mean that the data center itself is future-proofed. But in terms of the specific equipment for this cluster, it remains to be seen whether it will be able to be reused.

Brett Knoblauch: Perfect. And then on the new 40,000-GPU order that sounds like it will be plugged in, in Canada. You talked about a very efficient CapEx build for those data centers. Can you maybe elaborate a bit more on that? I know when the AI craze first got started 18 months ago, you flagged that you were running GPUs and, I’m pretty sure, that you built for less than $1 million a megawatt. Are we closer to that number for this? Or are we just well below what Horizons 1 to 4 cost on a per-megawatt basis?

Kent Draper: So in terms of the basic transition of those data centers over to AI workloads, it is relatively minimal in terms of the CapEx that is required. The vast majority of the work is removing ASICs, removing the racks that the ASICs sit on, and replacing those with standard data center racks and PDUs (power distribution units) that can accommodate the AI servers. So that is relatively minimal. As we’ve discussed before, it’s a matter of weeks to do that conversion, and from a CapEx perspective it is not material. The one element that may be more material in terms of that conversion is adding redundancy, if required, to the data centers, which would typically cost around $2 million a megawatt if we need to do it. But obviously, in the context of a full build-out of liquid-cooled capacity like we’re seeing at Horizon, it’s extremely capital efficient.

Operator: The next question comes from Darren Aftahi from ROTH Capital Partners.

Darren Aftahi: Congrats on the Microsoft deal as well. To start, with Microsoft, was colocation ever on the table with them? Did they come to you asking for AI cloud? Or how did those negotiations sort of fall out?

Daniel Roberts: Let me just think about the best way to answer this. So we’ve been talking to Microsoft for a long period of time, and the nature of those conversations absolutely did evolve over time. Was there a preference for the cloud deal? Possibly. But at the end of the day, we wanted to focus on cloud, and that was the transaction we were comfortable with. So conversations really focused around that over the last 6 weeks or so. If I may, I’d talk more generically around these hyperscale customers, because obviously we weren’t just talking to Microsoft. I think there probably is a stronger preference from those customers to be looking at colocation and infrastructure deals rather than cloud deals. But it’s also the case that there’s an appetite for a combination.

So it may be that we do some colocation in the future. Yes, I think different hyperscalers have different preferences. We’ll entertain them all. But given the nature of the deal we did with a 20% prepayment funding 1/3 of CapEx and a 35% plus equity IRR, we’re feeling pretty good about pursuing AI cloud.

Darren Aftahi: Got it. And just as a follow-up with the rest of Childress, is there any significance to the size of the Microsoft deal starting at 200 megawatts? Do they have interest in the rest of the campus? Have you talked to them about that yet?

Daniel Roberts: So again, I’m going to divert the question a little bit because we’ve got some pretty strong confidentiality provisions. So let me talk generically. There is appetite from a number of parties in discussing cloud and other structures well above the 200 megawatts that’s been signed with Microsoft.

Operator: The next question comes from John Todaro from Needham.

John Todaro: Congrats on the contract. I guess just one on that as we dig a little bit more in, any kind of penalties or anything related to the time line of delivering capacity? Just wondering if there’s guardrails around that. And then I do have a follow-up on CapEx.

Daniel Roberts: There’s always a penalty, whatever you do in life, if you don’t do what you promise you’re going to do. So we’re very comfortable with the contractual tolerances that have been negotiated, the expected dates versus contractual penalties and other consequences. I can’t comment more specifically beyond that on this call. But the other thing I would reiterate is we have never ever missed a construction or commissioning date in our life as a listed company. So I think you can take a lot of comfort that if we’ve put something forward to Microsoft and agreed it there and if we put something forward to the market, our reputations are on the line, our track record is on the line, we’re going to be very confident we can deliver it and potentially even exceed it.

John Todaro: Got it. Understood. And then just following up on the CapEx. That $14 million to $16 million on, I think it was, the data center side. Just wondering if there’s anything additional in there that would get it north of the colo figures other folks are talking about; maybe there’s some networking or cabling included in that, or any contribution from tariffs being considered there?

Kent Draper: To give some additional color there. So yes, in terms of networking, et cetera, as Dan mentioned in his presentation earlier, the Horizon campus is designed to be able to operate 100-megawatt super clusters. That does require a significant level of additional infrastructure over being able to deliver smaller clusters. And so certainly, some of the costs in the number that you mentioned are related to the ability to do that. And that will not necessarily be a requirement of every customer moving forward. So that probably is an element that is somewhat unique.

Operator: The next question comes from Stephen Glagola from JonesTrading.

Stephen Glagola: On your British Columbia GPUs, can you maybe just provide an update on where you guys stand with contracting out the remaining 12,000, I believe, GPUs of the initial 23,000 batch? And are you seeing any demand for your bare metal offering in BC outside of AI native enterprises?

Kent Draper: Yes. Happy to give an update there. We previously put out guidance a couple of weeks ago that we had contracted 11,000 out of the 23,000 that were on order. Subsequent to that, we have contracted a bit over another 1,000 GPUs. The ones that are not yet contracted are primarily those arriving latest in terms of delivery time lines. As I mentioned earlier, we are seeing an increased appetite from customers to precontract, but these are GPUs that are a little further out in terms of delivery schedules relative to the ones that have already been contracted. Having said that, we continue to see very strong levels of demand, and we’re in late-stage discussions around a significant portion of the capacity that has not yet been contracted. We continue to see very good demand leading into the start of next year as well and are receiving an increasingly large number of inbounds from a range of different customer classes.

So you mentioned AI natives. Yes, that has been a portion of the customer base that we’ve serviced previously. But we are also servicing a number of enterprise customers on an inference basis. So it is a pretty wide-ranging customer class that we’re servicing out of those British Columbia sites.

Operator: The next question comes from Joe Vafi from Canaccord Genuity.

Joseph Vafi: Congrats from me too on Microsoft. Just maybe, Dan, if you could kind of walk us through what you were thinking in your head. Clearly, some awesome IRRs here on the Microsoft deal. But how are you thinking about risk on a cloud deal here versus a straight colo deal, which probably wouldn’t have had the return, but maybe the risk profile may be lower there? And then I have just a quick follow-up.

Daniel Roberts: Thanks, Joe. Look, it’s funny, I actually see risk very differently. So we’ve spoken about colocation deals with these hyperscalers. And if you model out a 7% to 8% starting yield on cost and run that through your financial model, what you’ll generally see is that you’ll struggle to get your equity back during the contracted term, and then you’re relying on recontracting beyond the end of that 15-year period to get any sort of equity return. So in terms of risk, I would argue that there’s a far better risk proposition implicit in the deal that we’ve signed and in going down the cloud route. And then for the shorter-term contracts on the colo side, where you may not have a hyperscale credit, you’re running significant GPU refresh risk against companies that don’t necessarily have the balance sheet today to support confidence in that GPU refresh.

So again, we think about it in business segments: our data center business has got a great contract, internally linked to Microsoft as a tenant. And that data center itself is future-proofed, accommodating up to 200-kilowatt rack densities. It’s also the case that in 5 years, the optionality provides further downside protection. So upon expiry of the Microsoft contract, maybe we can run these GPUs for additional years, which we’ve seen with prior generations of GPUs like the A100s. But assuming that isn’t the case, we’ve got a lot of optionality within that business. We could sign a colocation deal at that point. We could relaunch a new cloud offering using latest-generation GPUs. So my concern with these colocation deals is that you’re transferring an interest or an exposure to an asset that is inherently linked to this exponential world of technology and demand, and the upside that may entail, and you’re swapping that for a bond position in varying degrees of credit with the counterparties.

So if you’re swapping an asset for a bond exposure to a $1 trillion hyperscaler, and you’re kind of hoping you might get your equity back after the contracted period, I mean, that’s one way to look at it. If you’re swapping your equity exposure for a bond exposure in a smaller neocloud without a balance sheet, then is that a good decision for shareholders? We just haven’t been comfortable with that.
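Dan’s yield-on-cost point can be made concrete with rough numbers. The sketch below is purely illustrative: it assumes a hypothetical $100 of invested capital at the 7.5% midpoint of the 7% to 8% yield on cost he cites, over a 15-year term, with no value assumed for recontracting. None of these figures come from IREN’s or Microsoft’s actual terms.

```python
# Purely illustrative numbers for the colocation economics described on the
# call; none of these figures come from IREN's or Microsoft's actual terms.

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return via bisection (cashflows indexed by year)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
    # NPV is decreasing in the rate for an initial outflow followed by inflows.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Colo-style deal: $100 of capital earning a 7.5% yield on cost for 15 years,
# with no terminal or recontracting value assumed after the term.
colo = [-100.0] + [7.5] * 15

# Cumulative cash of 15 * 7.5 = 112.5 barely exceeds the $100 invested:
# the "struggle to get your equity back during the contracted term" point.
print(f"cumulative cash: {sum(colo[1:]):.1f}")
print(f"equity IRR with no terminal value: {irr(colo):.1%}")  # about 1.5%
```

Under these assumptions the equity IRR lands around 1.5%, which is why the argument hinges almost entirely on recontracting value beyond the term.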

Joseph Vafi: I get it, Dan. I mean, we’ve run some DCFs here on some colo deals in the last couple of months. And there’s a lot to be learned when you do it, there’s no doubt. And then just on this prepayment from Microsoft. I know you’ve got some strong NDAs here, but it’s kind of a feather in your cap getting that much in a prepayment. Anything else to say on how, maybe your qualifications, or how Microsoft and you came to the agreement to prefund the GPU purchases out of the box?

Daniel Roberts: Look, yes, getting 1/3 of the CapEx funded through a prepayment from the customer is fantastic from our perspective, and we’re super appreciative of Microsoft coming to the table on that. What that allows us to do is drive a really good IRR and return to equity for our shareholders. And again, linking back to what Anthony said earlier, we expect 35% equity IRRs from this transaction after accounting for an internal data center charge. So we’re trying to create that apples-and-apples comparison with a neocloud that has an infrastructure charge. Even after that, we’re looking at 35% plus. And what’s also really important to clarify is that the equity portion of that IRR is assumed to be funded with 100% ordinary equity, which, given our track record in raising convertibles and given the lack of any debt at a corporate level, is probably conservative again.

So from a risk-adjusted perspective, linked to a $1 trillion credit and the ability to fund it efficiently, I mean, we’re really happy with the transaction. And yes, hopefully, there’s more to come.
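As a back-of-the-envelope on the prepayment mechanics Dan describes: a prepayment covering 1/3 of CapEx shrinks the equity check, so the same cash flow mechanically yields a higher return on equity. The figures below are hypothetical round numbers, not IREN’s confidential terms (the annual cash flow in particular is invented for illustration), and the sketch ignores how the prepayment amortizes against future invoices.

```python
# Hypothetical round numbers for illustration; the actual contract terms
# are confidential per the call.
capex = 300.0                 # total project CapEx for the deployment
prepayment = capex / 3        # "1/3 of CapEx funded through a prepayment"
equity = capex - prepayment   # balance assumed funded with ordinary equity

# A 20% prepayment that equals 1/3 of CapEx implies the contract's total
# value: prepayment / 0.20, i.e. 500 here.
contract_value = prepayment / 0.20

annual_cash = 60.0            # assumed annual net cash flow (illustrative)

roi_capex = annual_cash / capex    # yield on total capital deployed: 20%
roi_equity = annual_cash / equity  # yield on the smaller equity check: 30%

print(f"implied contract value: {contract_value:.0f}")
print(f"yield on total CapEx:   {roi_capex:.0%}")
print(f"yield on equity:        {roi_equity:.0%}")
```

With 1/3 of CapEx prefunded, any cash yield on total capital is scaled by 1.5x when measured against the equity check, which is the lever behind the 35%-plus equity IRR figure cited on the call.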

Operator: The next question comes from Michael Donovan from Compass Point.

Michael Donovan: Congrats on the progress. I was hoping you could talk more to your cloud software stack and the stickiness of your customers.

Kent Draper: Yes, I’m happy to take that one. To date, the vast majority of our customers have required a bare metal offering, and that is their preference. These are all highly advanced AI or software companies like a Microsoft. They have significant experience in the space, and they want the raw compute and the performance benefits that brings: having access to a bare metal offering and then being able to layer their own orchestration platform over the top of that. So it has been by design that we have been offering a bare metal service; it lends itself exactly to what our customers are looking for. Having said all of that, we obviously are continuing to monitor the space, continuing to look at what customers want.

And we are certainly able to go up the stack and layer in additional software if it is required by customers over time. But today, as I said, we haven’t really seen any material levels of demand for anything other than the bare metal service that we’re currently offering.

Daniel Roberts: And maybe just to add to that, Kent: if you step back and think about it, you’re contracting with some of the largest, most sophisticated technology companies on the planet that want access to our GPUs to run their software. It’s kind of an upside-down world to then turn around and say, “Oh, we’ll do all the software and operating layer.” Clearly, they’re in the position they are because they have a competitive advantage in that space. They’re just looking for the bare metal. I think as the market continues to develop over coming years, it may be the case that if you want to service smaller customers that don’t have that internal capability or budget, then yes, maybe you will open up smaller segments of the market.

But for a business like ours that is pursuing scale and monetizing a platform that we spent the last 7 years building, it’s very hard to see how you get scale by focusing on software, which, I think, everyone generally accepts is going to be commoditized anyway in coming years, as compared to just selling through the bare metal and letting these guys do their thing on it.

Michael Donovan: That makes sense. I appreciate that. You mentioned design works are complete for a direct fiber loop between Sweetwater 1 and 2. How should we think about how those 2 sites communicate with each other once they’re live?

Kent Draper: Yes. I think really the best way to think about it is it just adds an additional layer of optionality as to the customers that would be interested in that and how we contract those projects. There are a number of customers out there who are looking particularly for scale in terms of their deployments. And obviously, being able to offer 2 gigawatts that can operate as an individual campus even though the physical sites are separated is something that we think has value, and that’s why we have pursued that direct fiber connection.

Operator: At this time, we’re showing no further questions. I’ll hand the conference back to Dan Roberts for any closing remarks.

Daniel Roberts: Great. Thanks, operator. Thanks, everyone, for dialing in. Obviously, it’s been an exciting couple of months, and particularly last week. Our focus now turns to execution to deliver 140,000 GPUs through the end of 2026, while also continuing the ongoing dialogue with a number of different customers around monetizing the substantial power and land capacity we’ve got available and our ability to execute and deliver compute from that. So I appreciate everyone’s support, and I look forward to the next quarter.
