Broadcom Inc. (NASDAQ:AVGO) Q2 2025 Earnings Call Transcript June 5, 2025
Broadcom Inc. beat earnings expectations: reported EPS was $1.58 against expectations of $1.57.
Operator: Welcome to Broadcom Inc.’s Second Quarter Fiscal Year 2025 Financial Results Conference Call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc.
Ji Yoo: Thank you, operator, and good afternoon, everyone. Joining me on today’s call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the second quarter of fiscal year 2025. If you did not receive a copy, you may obtain the information from the Investors section of Broadcom’s website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for 1 year through the Investors section of Broadcom’s website. During the prepared comments, Hock and Kirsten will be providing details of our second quarter fiscal year 2025 results, guidance for our third quarter of fiscal year 2025, as well as commentary regarding the business environment.
We’ll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today’s press release. Comments made during today’s call will primarily refer to our non-GAAP financial results. I will now turn the call over to Hock.
Hock E. Tan: Thank you, Ji, and thank you, everyone, for joining us today. In our fiscal Q2 2025, total revenue was a record $15 billion, up 20% year-on-year. This 20% year-on-year growth was all organic as Q2 last year was the first full quarter with VMware. Now revenue was driven by continued strength in AI semiconductors and the momentum we have achieved in VMware. Now reflecting excellent operating leverage, Q2 consolidated adjusted EBITDA was $10 billion, up 35% year-on-year. Now let me provide more color. Q2 semiconductor revenue was $8.4 billion, with growth accelerating to 17% year-on-year, up from 11% in Q1. And of course, driving this growth was AI semiconductor revenue of over $4.4 billion, which is up 46% year-on-year and continues the trajectory of 9 consecutive quarters of strong growth.
Within this, custom AI accelerators grew double digits year-on-year, while AI networking grew over 170% year-on-year. AI networking, which is based on Ethernet, was robust and represented 40% of our AI revenue. As a standards-based open protocol, Ethernet enables one single fabric for both scale-out and scale-up and remains the preferred choice of our hyperscale customers. Our networking portfolio of Tomahawk switches, Jericho routers and NICs is what’s driving our success within AI clusters in hyperscalers. And the momentum continues with our breakthrough Tomahawk 6 switch, just announced this week. This represents next-generation switch capacity of 102.4 terabits per second. Tomahawk 6 enables clusters of more than 100,000 AI accelerators to be deployed in just 2 tiers instead of 3.
This flattening of the AI cluster is huge because it enables much better performance in training next-generation frontier models through lower latency, higher bandwidth and lower power. Turning to XPUs, or custom accelerators. We continue to make excellent progress on the multiyear journey of enabling our 3 customers and 4 prospects to deploy custom AI accelerators. As we articulated over 6 months ago, we eventually expect at least 3 customers to each deploy clusters of 1 million AI accelerators in 2027, largely for training their frontier models. And we continue to forecast that a significant percentage of these deployments will be custom XPUs. These partners are still unwavering in their plan to invest despite the uncertain economic environment.
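A minimal sketch of the two-tier arithmetic behind the Tomahawk 6 claim (the 200 Gbps per-port speed and the non-blocking leaf/spine Clos assumption are ours, not from the call, though the resulting radix of 512 matches the figure Hock cites later in the Q&A):

```python
# Back-of-envelope check of the "2 tiers instead of 3" claim, assuming
# 200 Gbps ports on a 102.4 Tbps switch and a non-blocking two-tier
# (leaf/spine) Clos fabric. These assumptions are illustrative only.

SWITCH_CAPACITY_GBPS = 102_400   # Tomahawk 6: 102.4 Tbps
PORT_SPEED_GBPS = 200            # assumed per-port speed

radix = SWITCH_CAPACITY_GBPS // PORT_SPEED_GBPS  # 512 ports per switch
# In a two-tier Clos, each leaf splits its ports evenly between hosts
# and spines, so the fabric scales to radix^2 / 2 endpoints.
max_endpoints = radix ** 2 // 2                  # 131,072

print(f"radix = {radix}, two-tier max endpoints = {max_endpoints:,}")
# radix = 512, two-tier max endpoints = 131,072 -> above 100,000
```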
In fact, what we’ve seen recently is that they are doubling down on inference in order to monetize their platforms. And reflecting this, we may actually see an acceleration of XPU demand into the back half of 2026 to meet urgent demand for inference on top of the demand we have indicated from training. And accordingly, we do now anticipate that our fiscal 2025 growth rate of AI semiconductor revenue will sustain into fiscal 2026. Turning to our Q3 outlook. As we continue our current trajectory of growth, we forecast AI semiconductor revenue of $5.1 billion, up 60% year-on-year, which would be the 10th consecutive quarter of growth. Now turning to non-AI semiconductors in Q2. Revenue of $4 billion was down 5% year-on-year. Non-AI semiconductor revenue is close to the bottom and has been relatively slow to recover, but there are bright spots.
In Q2, broadband, enterprise networking and server storage revenues were up sequentially. However, industrial was down and as expected, wireless was also down due to seasonality. In Q3, we expect enterprise networking and broadband to continue to grow sequentially, but server storage, wireless and industrial are expected to be largely flat. And overall, we forecast non-AI semiconductor revenue to stay around $4 billion. Now let me talk about our infrastructure software segment. Q2 infrastructure software revenue of $6.6 billion was up 25% year-on-year, above our outlook of $6.5 billion. As we have said before, this growth reflects our success in converting our enterprise customers from perpetual vSphere to the full VCF software stack subscription.
Customers are increasingly turning to VCF to create a modernized private cloud on-prem, which will enable them to repatriate workloads from public clouds while being able to run modern container-based applications and AI applications. Of our 10,000 largest customers, over 87% have now adopted VCF. The momentum from strong VCF sales over the past 18 months since the acquisition of VMware has driven double-digit growth in annual recurring revenue, or ARR, in our core infrastructure software. In Q3, we expect infrastructure software revenue to be approximately $6.7 billion, up 16% year-on-year. So in total, we are guiding Q3 consolidated revenue to be approximately $15.8 billion, up 21% year-on-year. We expect Q3 adjusted EBITDA margin to be at least 66% of revenue.
With that, let me turn the call over to Kirsten.
Kirsten M. Spears: Thank you, Hock. Let me now provide additional detail on our Q2 financial performance. Consolidated revenue was a record $15 billion for the quarter, up 20% from a year ago. Gross margin was 79.4% of revenue in the quarter, better than we originally guided on product mix. Consolidated operating expenses were $2.1 billion, of which $1.5 billion was related to R&D. Q2 operating income of $9.8 billion was up 37% from a year ago, with operating margin at 65% of revenue. Adjusted EBITDA was $10 billion or 67% of revenue, above our guidance of 66%. This figure excludes $142 million of depreciation. Now a review of the P&L for our 2 segments. Starting with semiconductors. Revenue for our semiconductor solutions segment was $8.4 billion, with growth accelerating to 17% year-on-year, driven by AI.
Semiconductor revenue represented 56% of total revenue in the quarter. Gross margin for our semiconductor solutions segment was approximately 69%, up 140 basis points year-on-year driven by product mix. Operating expenses increased 12% year-on-year to $971 million on increased investment in R&D for leading edge AI semiconductors. Semiconductor operating margin of 57% was up 200 basis points year-on-year. Now moving on to infrastructure software. Revenue for infrastructure software of $6.6 billion was up 25% year-on-year and represented 44% of total revenue. Gross margin for infrastructure software was 93% in the quarter compared to 88% a year ago. Operating expenses were $1.1 billion in the quarter, resulting in infrastructure software operating margin of approximately 76%.
This compares to operating margin of 60% a year ago. This year-on-year improvement reflects our disciplined integration of VMware. Moving on to cash flow. Free cash flow in the quarter was $6.4 billion and represented 43% of revenue. Free cash flow as a percentage of revenue continues to be impacted by increased interest expense from debt related to the VMware acquisition and by increased cash taxes. We spent $144 million on capital expenditures. Days sales outstanding were 34 days in the second quarter compared to 40 days a year ago. We ended the second quarter with inventory of $2 billion, up 6% sequentially in anticipation of revenue growth in future quarters. Our days of inventory on hand were 69 days in Q2 as we continue to remain disciplined on how we manage inventory across the ecosystem.
We ended the second quarter with $9.5 billion of cash and $69.4 billion of gross principal debt. Subsequent to quarter end, we repaid $1.6 billion of debt, resulting in gross principal debt of $67.8 billion. The weighted average coupon rate and years to maturity of our $59.8 billion in fixed rate debt is 3.8% and 7 years, respectively. The weighted average interest rate and years to maturity of our $8 billion in floating rate debt is 5.3% and 2.6 years, respectively. Turning to capital allocation. In Q2, we paid stockholders $2.8 billion of cash dividends based on a quarterly common stock cash dividend of $0.59 per share. In Q2, we repurchased $4.2 billion or approximately 25 million shares of common stock. In Q3, we expect the non-GAAP diluted share count to be approximately 4.97 billion shares, excluding the potential impact of any share repurchases.
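For context, a minimal sketch of the annualized interest run-rate implied by the stated debt mix (illustrative arithmetic only; it ignores the post-quarter $1.6 billion repayment and any hedges or fees):

```python
# Rough annualized interest implied by the disclosed debt mix.
# Illustrative arithmetic from the call's figures, not a company disclosure.

fixed_debt_b, fixed_rate = 59.8, 0.038      # $B at 3.8% weighted avg coupon
floating_debt_b, float_rate = 8.0, 0.053    # $B at 5.3% weighted avg rate

annual_interest_b = fixed_debt_b * fixed_rate + floating_debt_b * float_rate
print(f"~${annual_interest_b:.1f}B annual interest")  # ~$2.7B
```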
Now moving on to guidance. Our guidance for Q3 is for consolidated revenue of $15.8 billion, up 21% year-on-year. We forecast semiconductor revenue of approximately $9.1 billion, up 25% year-on-year. Within this, we expect Q3 AI semiconductor revenue of $5.1 billion, up 60% year-on-year. We expect infrastructure software revenue of approximately $6.7 billion, up 16% year-on-year. For modeling purposes, we expect Q3 consolidated gross margin to be down approximately 130 basis points sequentially, primarily reflecting a higher mix of XPUs within AI revenue. As a reminder, consolidated gross margins through the year will be impacted by the revenue mix of infrastructure software and semiconductors. We expect Q3 adjusted EBITDA margin to be at least 66% of revenue.
We expect the non-GAAP tax rate for Q3 and fiscal year 2025 to remain at 14%. And with this, that concludes my prepared remarks. Operator, please open up the call for questions.
Q&A Session
Operator: [Operator Instructions] And our first question will come from the line of Ross Seymore with Deutsche Bank.
Ross Clark Seymore: Hock, I wanted to jump on to the AI side and specifically some of the commentary you had about next year. Can you just give a little bit more color on the inference commentary you gave? And is it more the XPU side, the connectivity side or both that’s given you the confidence to talk about the growth rate that you have this year being matched next fiscal year?
Hock E. Tan: Thank you, Ross. Good question. I think we’re indicating that what we are seeing, and where we increasingly have quite a bit of visibility, is increased deployment of XPUs next year, much more than we originally thought, and hand-in-hand with it, of course, more and more networking. So it’s a combination of both.
Ross Clark Seymore: And the inference side of things?
Hock E. Tan: Yes, we’re seeing much more inference now.
Operator: One moment for our next question. And that will come from the line of Harlan Sur with JPMorgan.
Harlan L. Sur: Great job on the quarterly execution. Hock, good to see the positive inflection in the quarter-over-quarter and year-over-year growth rates in your AI business. As the team has mentioned, the quarters can be a bit lumpy. So if I smooth out the first 3 quarters of this fiscal year, your AI business is up 60% year-over-year, which is right in line with your 3-year SAM growth CAGR. Given your prepared remarks, and knowing that your lead times remain at 35 weeks or better, do you see the Broadcom team sustaining the 60% year-over-year growth rate exiting this year? And I assume that potentially implies you see your AI business sustaining the 60% year-over-year growth rate into fiscal ’26, again based on your prepared commentary, which is in line with your SAM growth figure. Is that a fair way to think about the trajectory this year and next year?
Hock E. Tan: Harlan, that’s a very insightful piece of analysis, and that’s exactly what we’re trying to do here, because over 6 months ago, we gave you guys a point, a year: 2027. As we come into the second half of 2025, with improved visibility and the updates we are seeing in the way our hyperscale partners are deploying data centers and AI clusters, we are providing you some level of visibility into what we are seeing and how the trajectory of ’26 might look. I’m not giving you any update on ’27. We’re simply standing by the outlook we gave for ’27 6 months ago. But what we’re doing now is giving you more visibility into where we’re seeing ’26 headed.
Harlan L. Sur: But is the framework that you laid out for us like second half of last year, which implies 60% kind of growth CAGR in your SAM opportunity, is that kind of the right way to think about it as it relates to the profile of growth in your business this year and next year?
Hock E. Tan: Yes.
Operator: One moment for our next question. And that will come from the line of Ben Reitzes with Melius Research.
Benjamin Alexander Reitzes: Hock, networking — AI networking was really strong in the quarter, and it seemed like it must have beat expectations. I was wondering if you could just talk about the networking in particular, what caused that? And how much of that is your acceleration into next year? And when do you think you see Tomahawk kicking in as part of that acceleration?
Hock E. Tan: Well, I think AI networking, as you probably know, goes pretty much hand-in-hand with the deployment of AI accelerator clusters. It doesn’t deploy on a timetable that’s very different from the way the accelerators get deployed, whether they are XPUs or GPUs. A lot of it is deployed in scale-out, where Ethernet, of course, is the protocol of choice, but it’s also increasingly moving into what we all call scale-up within those data centers, where you have much higher consumption, or density, of switches than in the scale-out scenario, more than we originally thought. In fact, the switch density in scale-up is 5 to 10x that of scale-out. And that’s the part that pleasantly surprised us, and which is why this past quarter, Q2, the AI networking portion held at about 40%, in line with what we reported a quarter ago for Q1. At that time, I said I expected it to drop. It hasn’t.
Benjamin Alexander Reitzes: And your thoughts on Tomahawk driving acceleration for next year and when it kicks in?
Hock E. Tan: Tomahawk 6, yes, there is extremely strong interest. Now, we’re not shipping big orders, or any orders other than basic proofs of concept out to customers, but there is tremendous demand for this new 102.4 terabits per second Tomahawk 6 switch.
Operator: One moment for our next question. And that will come from the line of Blayne Curtis with Jefferies.
Blayne Peter Curtis: Great results. I just wanted to follow up on the scale-up opportunity. So today, I guess your main customer is not really using an NVLink-switch-style scale-up. I’m just curious about your visibility, or the timing, in terms of when you might be shipping a switched Ethernet scale-up network to your customers?
Hock E. Tan: You’re talking scale up?
Blayne Peter Curtis: Scale up.
Hock E. Tan: Yes. Well, scale up is very rapidly converting to Ethernet now, very much so. For our fairly narrow band of hyperscale customers, scale up is very much Ethernet.
Operator: One moment for our next question. And that will come from the line of Stacy Rasgon with Bernstein.
Stacy Aaron Rasgon: Hock, I still wanted to follow up on that AI 2026 question. I want to put some numbers on it, just to make sure I’ve got it right. So you did 60% in the first 3 quarters of this year; if you grow 60% year-over-year in Q4, that puts you at something like $5.8 billion, so something like $19 billion or $20 billion for the year. And then are you saying you’re going to grow 60% in 2026, which would put you at $30 billion plus in AI revenues for 2026? I’m just wondering, is that the math that you’re trying to communicate to us directly?
Hock E. Tan: I think you’re doing the math. I’m giving you the trend. But I did answer that question when Harlan asked it earlier. The rate we are seeing so far in fiscal ’25 will presumably continue; we don’t see any reason why it wouldn’t, given our lead-time visibility in ’25. And what we are seeing today, based on the visibility we have on ’26, is the ability to ramp this AI revenue along the same trajectory. Yes.
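A minimal sketch of the arithmetic in this exchange (the $19 billion to $20 billion FY25 range is the analyst's framing and the carried-forward 60% rate is Hock's stated trend, not company guidance):

```python
# Illustrative arithmetic from the exchange above, not company guidance:
# apply the ~60% growth rate Hock says should sustain to the $19-20B
# FY25 AI revenue range Stacy cites.

fy25_ai_low, fy25_ai_high = 19.0, 20.0   # $B, analyst's FY25 estimate
growth = 0.60                            # y/y rate indicated to sustain

fy26_low = fy25_ai_low * (1 + growth)    # ~$30.4B
fy26_high = fy25_ai_high * (1 + growth)  # ~$32.0B
print(f"FY26 AI revenue ~ ${fy26_low:.0f}B to ${fy26_high:.0f}B")
```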
Stacy Aaron Rasgon: So is the SAM going up as well because now you have inference on top of training. So is the SAM still 60 to 90? Or is the SAM higher now as you see it?
Hock E. Tan: I’m not playing a SAM game here. I’m just giving a trajectory toward where we drew the line on ’27 before. So I have no response on whether the SAM is going up or not. Let’s stop talking about SAM for now.
Operator: One moment for our next question. And that will come from the line of Vivek Arya with Bank of America.
Vivek Arya: I had a near and then a longer-term question on the XPU business. So Hock, for near term, if your networking upsided in Q2 and overall AI was in line, it means XPU was perhaps not as strong. So I realize it’s lumpy, but anything more to read into that, any product transition or anything else? So just a clarification there. And then longer term, you have outlined a number of additional customers that you’re working with. What milestones should we look forward to? And what milestones are you watching to give you the confidence that you can now start adding their addressable opportunity into your ’27 or ’28 or other numbers? Like how do we get the confidence that these projects are going to turn into revenue in some reasonable time frame from now?
Hock E. Tan: Okay. On the first part you’re asking, it’s like you’re trying to count how many angels are on the head of a pin, whether it’s XPU or networking. Networking is hot, but that doesn’t mean XPU is any softer. It’s very much along the trajectory we expected it to be. And there’s no lumpiness. There’s no softening. It’s pretty much the trajectory we expect so far and into next quarter as well, and probably beyond. So we have, in our view, fairly clear visibility on the short-term trajectory. In terms of going on to ’27, no, we are not updating any numbers here. 6 months ago, we gave a sense of the size of the SAM based on clusters of 1 million XPUs for each of 3 customers, and that still stands.
And we have not provided any further updates, nor are we intending to at this point. When we get better visibility and a clearer sense of where we are, and that probably won’t happen until ’26, we’ll be happy to give an update to the audience. But right now, in today’s prepared remarks and in answering a couple of questions, we are intending to give you more visibility into what we are seeing in the growth trajectory for ’26.
Operator: One moment for our next question. And that will come from the line of C.J. Muse with Cantor Fitzgerald.
Christopher James Muse: I was hoping to follow up on Ross’s question regarding the inference opportunity. Can you discuss the workloads you’re seeing as optimal for custom silicon? And over time, what percentage of your XPU business could be inference versus training?
Hock E. Tan: I think there’s no differentiation between training and inference in using merchant accelerators versus custom accelerators. The whole premise behind going toward custom accelerators continues, which is that it’s not a matter of cost alone. It is that as custom accelerators get used and developed on a road map with any particular hyperscaler, there’s a learning curve: a learning curve on how they can optimize the way the algorithms in their large language models get written and tied to the silicon. And that ability is a huge value-add in creating algorithms that drive their LLMs to higher and higher performance, much more than a segregated approach between hardware and software.
It’s that you literally combine hardware and software end-to-end as they take that journey. And it’s a journey. They don’t learn that in 1 year. They do it over a few cycles and get better and better at it. And therein lies the fundamental value in creating your own hardware versus using third-party merchant silicon: you are able to optimize your software to the hardware and eventually achieve far higher performance than you otherwise could. And we see that happening.
Operator: One moment for our next question. And that will come from the line of Karl Ackerman with BNP Paribas.
Karl Ackerman: Hock, you spoke about the much higher content opportunity in scale-up networking. I was hoping you could discuss how important adoption of co-packaged optics is in achieving this 5 to 10x higher content for scale-up networks. Or should we anticipate that much of the scale-up opportunity will be driven by Tomahawk switches and Thor NICs?
Hock E. Tan: I’m trying to decipher this question of yours. So let me try to answer it in the way I think you want me to clarify. First and foremost, most of the scaling up that’s going on, as I call it, which means a lot of XPU-to-XPU or GPU-to-GPU interconnects, is done on copper interconnects. That’s because the size of these scale-up clusters is still not that huge yet, so you can get away with using copper interconnects. And that’s mostly what they’re doing today. At some point soon, I believe, when you try to go beyond maybe 72 GPU-to-GPU interconnects, you may have to push toward a different medium, from copper to optical.
And when you do that, yes, perhaps exotic things like co-packaging silicon with optics might become relevant. But truly, what we’re talking about is that at some stage, as the clusters get larger, which means scale-up becomes much bigger and you need to interconnect many more GPUs or XPUs to each other in scale-up than just 72, or 100, maybe even 128, you start going more and more toward optical interconnects, simply because of distance. And that’s when optical will start replacing copper. And when that happens, the question is what’s the best way to deliver on optical. One way is co-packaged optics, but it’s not the only way. You can simply continue to use pluggable, low-cost optics.
In which case, you can use the full radix of a switch, and our switch now has 512 connections. So you can connect all these XPUs or GPUs, 512 of them, in scale-up. And that would be huge, but that’s when you go to optical. That’s going to happen, in my view, within a year or 2, and we’ll be right at the forefront of it. It may be co-packaged optics, which we very much have in development, or it could, as a first step, just be pluggable optics. Whatever it is, the bigger question is when connecting GPU to GPU goes from copper to optical. And the step-up in that move will be huge. And it’s not necessarily co-packaged optics, though that’s definitely one path we are pursuing.
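A minimal sketch of the jump Hock describes (only the 72 and 512 figures come from the call; treating each as the size of a single scale-up domain is our simplification):

```python
# Rough comparison of single-hop scale-up domain sizes using the two
# figures cited on the call. Illustrative simplification, not a spec.

copper_domain = 72    # GPU-to-GPU interconnects cited as the copper limit
optical_radix = 512   # connections on the current switch, per Hock

print(f"~{optical_radix / copper_domain:.1f}x more accelerators per "
      f"scale-up domain ({copper_domain} -> {optical_radix})")  # ~7.1x
```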
Operator: And one moment for our next question. And that will come from the line of Joshua Buchalter with TD Cowen.
Joshua Louis Buchalter: I realize it’s a bit nitpicky, but I wanted to ask about gross margins in the guide. Your revenue guide implies roughly an $800 million sequential increase, with gross profit up, I think, $400 million to $450 million, which is well below the corporate-average fall-through. I appreciate that semis is dilutive and custom is probably dilutive within semis, but is anything else going on with margins that we should be aware of? And how should we think about the margin profile of custom longer term as that business continues to scale and diversify?
Kirsten M. Spears: Yes. We’ve historically said that the XPU margins are slightly lower than the rest of the business other than wireless. And so there’s really nothing else going on other than that. It’s exactly what I said: the majority of the quarter-over-quarter 130 basis point decline is being driven by more XPUs.
Hock E. Tan: There are more moving parts here than your simple analysis suggests. And I think your simple analysis is totally wrong in that regard.
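A minimal sketch checking the fall-through arithmetic in the question against the reported and guided figures (the derivation is ours; the inputs are the reported Q2 gross margin and the guided ~130 bps sequential decline):

```python
# Check the question's fall-through math against reported/guided figures.
# Illustrative arithmetic only, not a company disclosure.

q2_rev, q3_rev = 15.0, 15.8      # $B, reported Q2 and guided Q3 revenue
q2_gm = 0.794                    # reported Q2 gross margin
q3_gm = q2_gm - 0.013            # guided ~130 bps sequential decline

delta_rev = q3_rev - q2_rev                 # ~$0.8B
delta_gp = q3_rev * q3_gm - q2_rev * q2_gm  # ~$0.43B, in the $400-450M range
print(f"incremental gross margin ~ {delta_gp / delta_rev:.0%} "
      f"vs {q2_gm:.1%} Q2 corporate average")  # ~54% vs 79.4%
```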
Operator: And one moment for our next question. And that will come from the line of Timothy Arcuri with UBS.
Timothy Michael Arcuri: I also wanted to ask about scale-up, Hock. So there’s a lot of competing ecosystems. There’s UALink, which, of course, you left. And now there’s the big GPU company opening up NVLink, and they’re both trying to build ecosystems, and there’s an argument that you’re an ecosystem of one. What would you say to that debate? Does opening up NVLink change the landscape? And how do you view your AI networking growth next year? Do you think it’s going to be primarily driven by scale-up? Or will it still be pretty scale-out heavy?
Hock E. Tan: People do like to create platforms and new protocols and systems. The fact of the matter is scale-up can be done easily with what’s currently available: open-standards, open-source Ethernet. You don’t need to create new systems for the sake of doing something you could easily be doing with Ethernet networking. And so, yes, I hear about a lot of these interesting new protocols and standards that are trying to be created. And most of them, by the way, are proprietary, much as they like to call it otherwise. What is really open source and open standards is Ethernet. And we believe Ethernet will prevail, as it has for the last 20 years in traditional networking. There is no reason to create a new standard for something that could easily be done in transferring bits and bytes of data.
Operator: And one moment for our next question. And that will come from the line of Christopher Rolland with Susquehanna.
Christopher Adam Jackson Rolland: Yes. My question is for you, Hock. It’s a bigger-picture one. This acceleration that we’re seeing in AI demand, do you think it’s because of a marked improvement in ASICs, or XPUs closing the gap on the software side at your customers? Do you think it’s the required tokenomics around inference, with test-time compute driving that, for example? What do you think is actually driving the upside here? And do you think it leads to a market share shift toward XPU from GPU faster than we were expecting?
Hock E. Tan: Yes. Interesting question, but it’s none of the foregoing that you outlined. It’s very simple. Why has inference come out very, very hot lately? Remember, we’re only selling to a few customers, hyperscalers with platforms and LLMs. That’s it. There are not that many. And we have told you how many we have, and we haven’t increased any. But what is happening is these hyperscalers and those with LLMs need to justify all the spending they’re doing. Doing training makes your frontier models smarter, no question. It’s almost like research and science. You make your frontier models smarter by creating very clever algorithms that consume a lot of compute for training. Training makes them smarter. To monetize, you want inference.
And that’s what’s driving it. Monetizing, as I indicated in my prepared remarks, is about justifying a return on the investment, and a lot of that investment is training. That return on investment comes from creating use cases, a lot of AI use cases and AI consumption out there, through the availability of a lot of inference. And that’s what we are now starting to see among our small group of customers.
Operator: And one moment for our next question. And that will come from the line of Vijay Rakesh with Mizuho.
Vijay Raghavan Rakesh: Hock, just going back to the AI server revenue side. I know you said fiscal ’25 is tracking to that up-60%-ish growth. If you look at fiscal ’26, you have new customers ramping [ Meta ] and probably 4 of the 6 hyperscalers that you have talked about in the past. Would you expect that growth to accelerate into fiscal ’26 above the 60% you talked about?
Hock E. Tan: My prepared remarks, in which I clarified that the rate of growth we are seeing in ’25 will sustain into ’26, based on improved visibility and the fact that we’re seeing inference coming in on top of the demand for training as the clusters get built bigger and bigger, still stand. I don’t think we are getting very far by trying to parse through my words or data here. We see that carrying from ’25 into ’26, and that is the best forecast we have at this point.
Vijay Raghavan Rakesh: Got it. And on NVLink Fusion versus scale-up, do you expect that market to go the route of top-of-rack, where you’ve seen some move to the Ethernet side in scale-out? Do you expect scale-up to go the same route?
Hock E. Tan: Broadcom does not participate in NVLink. So I’m really not qualified to answer that question, I think.
Operator: One moment for our next question. And that will come from the line of Aaron Rakers with Wells Fargo.
Aaron Christopher Rakers: I think all my questions on scale-up have been asked. But I guess, Hock, given the execution that you guys have been able to do with the VMware integration, looking at the balance sheet, looking at the debt structure, I’m curious if you could give us your thoughts on how the company thinks about capital return versus the thoughts on M&A and the strategy going forward.
Hock E. Tan: Okay. That’s an interesting question. And I agree, it’s not untimely, I would say, because, yes, we have done a lot of the integration of VMware now, and you can see that in the level of free cash flow we’re generating from operations. And as we said, our use of capital has always been very measured and upfront, with a return through dividends, which is half of the free cash flow of the preceding year. And frankly, as Kirsten mentioned 3 months ago and 6 months ago during the last 2 earnings calls, the first choice for the other part of the free cash flow is typically to bring down our debt to a level we feel comfortable with, closer to a ratio of no more than 2x debt to EBITDA. That doesn’t mean we won’t opportunistically go out there and buy back our shares, as we did last quarter, when, as Kirsten indicated, we did $4.2 billion of stock buybacks.
Now, part of it is that when employee RSUs vest, we buy back a portion of the shares to cover the taxes on the vested RSUs. But the other part of it, I do admit, we used opportunistically last quarter: when we see a situation where we think it’s a good time to buy some shares back, we do. But having said all that, our use of cash outside of dividends would be, at this stage, directed toward reducing our debt. And I know you’re going to ask, what about M&A? Well, the kind of M&A we would do would, in our view, be significant, substantial enough that we would need debt in any case. So it’s a good use of our free cash flow to bring down debt and thereby expand, if not preserve, our borrowing capacity if we have to do another M&A deal.
Operator: One moment for our next question. And that will come from the line of Srini Pajjuri with Raymond James.
Srinivas Reddy Pajjuri: Hock, a couple of clarifications. First, on your 2026 expectation, are you assuming any meaningful contribution from the 4 prospects that you talked about?
Hock E. Tan: No comment. We don’t talk about prospects. We only talk about customers.
Srinivas Reddy Pajjuri: Okay. Fair enough. And then my other clarification is that I think you talked about networking being about 40% of the mix within AI. Is that the right kind of mix that you expect going forward? Or is that going to materially change as we, I guess, see XPUs ramping going forward?
Hock E. Tan: No. I’ve always said, and I expect this to be the case going forward in ’26 as we grow, that networking as a ratio of our AI revenue should be closer to the range of less than 30%, not the 40%.
Operator: One moment for our next question. And that will come from the line of Joe Moore with Morgan Stanley.
Joseph Lawrence Moore: You said you’re not going to be impacted by export controls on AI. I know there have been a number of changes in the industry since the last time you made that comment. Is that still the case? And can you give people comfort that there’s no impact from that down the road?
Hock E. Tan: Nobody can give anybody comfort in this environment, Joe. The rules are changing quite dramatically as bilateral trade agreements continue to be negotiated in a very, very dynamic environment. So I’ll be honest, I know as little as, probably, you do; you probably know more than I do, in which case I know very little about this whole question of whether there will be any export controls, or how any export control would take place. We’re guessing. So I’d rather not answer that, because, no, I don’t know whether there will be an impact.
Operator: And we do have time for one final question. And that will come from the line of William Stein with Truist Securities.
William Stein: I wanted to ask about VMware. Can you comment on how far along you are in the process of converting customers to the subscription model? Is that close to complete? Or are there still a number of quarters over which we should expect that conversion to continue?
Hock E. Tan: That’s a good question. Let me start off by saying a good way to measure it is that most of our VMware contracts typically run about 3 years. That was what VMware did before we acquired them, and that’s pretty much what we continue to do; 3 years is very traditional. So based on that, we are more than halfway through the renewals, approaching 2/3 of the way. So we probably have at least another year-plus, maybe 1.5 years, to go.
Operator: And with that, I’d like to turn the call over to Ji Yoo for closing remarks.
Ji Yoo: Thank you, operator. Broadcom currently plans to report its earnings for the third quarter of fiscal year 2025 after close of market on Thursday, September 4, 2025. A public webcast of Broadcom’s earnings conference call will follow at 2:00 p.m. Pacific. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.