Broadcom Inc. (NASDAQ:AVGO) Q3 2025 Earnings Call Transcript September 4, 2025
Broadcom Inc. beats earnings expectations. Reported EPS is $1.69, expectations were $1.66.
Operator: Welcome to Broadcom Inc.'s third quarter fiscal year 2025 financial results conference call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc. Please go ahead.
Ji Yoo: Thank you, Sherry, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed describing our financial performance for the third quarter of fiscal year 2025. If you did not receive a copy, you may obtain the information from the investors section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the Investors section of Broadcom's website.
During the prepared comments, Hock and Kirsten will be providing details of our third quarter fiscal year 2025 results, guidance for the fourth quarter of fiscal year 2025, as well as commentary regarding the business environment. We will take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results.
I will now turn the call over to Hock.
Hock Tan: Thank you, Ji. And thank you, everyone, for joining us today. In our fiscal Q3 2025, total revenue was a record $16 billion, up 22% year on year. Revenue growth was driven by better-than-expected strength in AI semiconductors and our continued growth in VMware. Q3 consolidated adjusted EBITDA was a record $10.7 billion, up 30% year on year. Now, looking beyond what we are reporting this quarter, with robust demand from AI, bookings were extremely strong, and our current consolidated backlog for the company hit a record $110 billion. Q3 semiconductor revenue was $9.2 billion, with growth accelerating to 26% year on year. This accelerated growth was driven by AI semiconductor revenue of $5.2 billion, which was up 63% year on year, extending the trajectory of robust growth to ten consecutive quarters.
Now let me give you more color on our XPU business, which grew to 65% of our AI revenue this quarter. Demand for custom AI accelerators from our three customers continued to grow as each of them journeys at their own pace towards compute self-sufficiency, and we continue to progressively gain share with these customers. Further to these three customers, as we have previously mentioned, we have been working with other prospects on their own AI accelerators. Last quarter, one of these prospects released production orders to Broadcom, and we have accordingly characterized them as a qualified customer for XPUs. This customer has, in fact, secured over $10 billion of orders of AI racks based on our XPUs. Reflecting this, we now expect the outlook for fiscal 2026 AI revenue to improve significantly from what we had indicated last quarter.
Turning to AI networking, demand continued to be strong, because networking is becoming critical as LLMs continue to evolve in intelligence and compute clusters have to grow bigger. The network is the computer. And our customers are facing challenges as they scale to clusters beyond 100,000 compute nodes. For instance, scale-up, which we all know about, is a difficult challenge when you are trying to create substantial bandwidth to share memory directly across multiple GPUs or XPUs. Today's AI rack scales up a mere 72 GPUs at 28.8 terabits per second of bandwidth using proprietary NVLink. On the other hand, earlier this year we launched Tomahawk Ultra, open Ethernet, which can scale up 512 compute nodes for customers using XPUs. Moving on to scaling out across racks.
Today, the current architecture using 51.2 terabit per second switches requires three tiers of networking switches. In June, we launched Tomahawk 6, our Ethernet-based 102.4 terabit per second switch, which flattens the network to two tiers, resulting in lower latency and much less power. And when you scale to clusters beyond a single data center footprint, you now need to scale computing across data centers. Over the past two years, we have deployed our Jericho 3 Ethernet router with hyperscale customers to do just this. And today, we have launched our next-generation Jericho 4 Ethernet fabric router, with 51.2 terabits per second, deep buffering, and intelligent congestion control, to handle clusters beyond 200,000 compute nodes crossing multiple data centers.
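To make the tier arithmetic in these remarks concrete, here is a minimal back-of-the-envelope sketch. The 200 Gb/s port speed, the non-blocking leaf-spine assumption, and the max_nodes helper are our own illustrative assumptions, not figures from the call.

```python
# Rough sketch: how switch bandwidth bounds cluster size per number of
# Clos tiers. Assumes every port runs at 200 Gb/s and a non-blocking
# leaf-spine fabric where half of each switch's ports face upward.

def max_nodes(switch_tbps: float, port_gbps: int = 200, tiers: int = 2) -> int:
    """Upper bound on compute nodes for a non-blocking Clos fabric."""
    radix = int(switch_tbps * 1000 // port_gbps)  # ports per switch
    # Each extra tier multiplies fan-out by radix / 2.
    return (radix // 2) ** (tiers - 1) * radix

for tbps in (51.2, 102.4):  # Tomahawk 5 class vs. Tomahawk 6 class
    print(f"{tbps} Tb/s switch, two tiers: ~{max_nodes(tbps):,} nodes")
# 51.2 Tb/s  -> ~32,768 nodes (beyond this, a third tier is needed)
# 102.4 Tb/s -> ~131,072 nodes (a 100,000-node cluster fits in two tiers)
```

Under these assumptions, doubling switch bandwidth is what lets a 100,000-plus node cluster collapse from three switching tiers to two, which is the latency and power saving described above.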
We know the biggest challenge to deploying larger clusters of compute for generative AI will be in networking. And the technology Broadcom has developed for Ethernet over the past twenty years is entirely applicable to the challenges of scale-up, scale-out, and scale-across in generative AI. Turning to our forecast: as I mentioned earlier, we continue to make steady progress in growing our AI revenue. For Q4 fiscal 2025, we forecast AI semiconductor revenue to be approximately $6.2 billion, up 66% year on year. Now, turning to non-AI semiconductors, demand continues to be slow to recover. Q3 revenue of $4 billion was flat sequentially. While broadband showed strong sequential growth, enterprise networking and server storage were down sequentially.
Wireless and industrial were flat quarter on quarter, as we expected. In Q4, driven by seasonality, we forecast non-AI semiconductor revenue to grow low double digits sequentially to approximately $4.6 billion. Broadband, server storage, and wireless are expected to improve, while enterprise networking remains down quarter on quarter. Now let me talk about our infrastructure software segment. Q3 infrastructure software revenue of $6.8 billion was up 17% year on year, above our outlook of $6.7 billion, as bookings continued to be strong during the quarter. In fact, we booked total contract value of over $8.4 billion during Q3. But here is what I am most excited about. After two years of engineering development by over 5,000 developers, we delivered on a promise we made when we acquired VMware. We released VMware Cloud Foundation version 9.0, a fully integrated cloud platform that enterprise customers can deploy on-prem or carry to the cloud. It enables enterprises to run any application workload, including AI workloads, on virtual machines and on modern containers. This provides a real alternative to the public cloud. In Q4, we expect infrastructure software revenue to be approximately $6.7 billion, up 15% year on year. In summary, continued strength in AI and VMware drives our guidance for Q4 consolidated revenue of approximately $17.4 billion, up 24% year on year. And we expect Q4 adjusted EBITDA to be 67% of revenue. And with that, let me turn the call over to Kirsten.
Kirsten Spears: Thank you, Hock. Let me now provide additional detail on our Q3 financial performance. Consolidated revenue was a record $16 billion for the quarter, up 22% from a year ago. Gross margin was 78.4% of revenue in the quarter, better than we originally guided on higher software revenues and product mix within semiconductors. Consolidated operating expenses were $2 billion of which $1.5 billion was research and development. Q3 operating income was a record $10.5 billion, up 32% from a year ago. On a sequential basis, even as gross margin was down 100 basis points on revenue mix, operating margin increased 20 basis points sequentially to 65.5% on operating leverage. Adjusted EBITDA of $10.7 billion or 67% of revenue was above our guidance of 66%.
This figure excludes $142 million of depreciation. Now, a review of the P&L for our two segments, starting with semiconductors. Revenue for our Semiconductor Solutions segment was $9.2 billion, with growth accelerating to 26% year on year, driven by AI. Semiconductor revenue represented 57% of total revenue in the quarter. Gross margin for our Semiconductor Solutions segment was approximately 67%, down 30 basis points year on year on product mix. Operating expenses increased 9% year on year to $951 million on increased investment in R&D for leading-edge AI semiconductors. Semiconductor operating margin of 57% was up 130 basis points year on year and flat sequentially. Now moving on to infrastructure software. Revenue for infrastructure software of $6.8 billion was up 17% year on year and represented 43% of revenue.
Gross margin for infrastructure software was 93% in the quarter, compared to 90% a year ago. Operating expenses were $1.1 billion in the quarter, resulting in infrastructure software operating margin of approximately 77%. This compares to operating margin of 67% a year ago, reflecting the completion of the integration of VMware. Moving on to cash flow. Free cash flow in the quarter was $7 billion and represented 44% of revenue. We spent $142 million on capital expenditures. Days sales outstanding were 37 days in the third quarter, compared to 32 days a year ago. We ended the third quarter with inventory of $2.2 billion, up 8% sequentially, in anticipation of revenue growth next quarter. Our days of inventory on hand were 66 days in Q3, compared to 69 days in Q2, as we continue to remain disciplined in how we manage inventory across the ecosystem.
We ended the third quarter with $10.7 billion of cash and $66.3 billion of gross principal debt. The weighted average coupon rate and years to maturity of our $65.8 billion in fixed rate debt are 3.9% and 6.9 years, respectively. The weighted average interest rate and years to maturity of our $500 million of floating rate debt are 4.7% and 0.2 years, respectively. Turning to capital allocation. In Q3, we paid stockholders $2.8 billion of cash dividends, based on a quarterly common stock cash dividend of $0.59 per share. We expect the non-GAAP diluted share count in Q4 to be approximately 4.97 billion shares, excluding the potential impact of any share repurchases. Now moving to guidance. Our guidance for Q4 is for consolidated revenue of $17.4 billion, up 24% year on year.
We forecast semiconductor revenue of approximately $10.7 billion, up 30% year on year. Within this, we expect Q4 AI semiconductor revenue of $6.2 billion, up 66% year on year. We expect infrastructure software revenue of approximately $6.7 billion, up 15% year on year. For your modeling purposes, we expect Q4 consolidated gross margin to be down approximately 70 basis points sequentially, primarily reflecting a higher mix of XPUs and also wireless revenue. As a reminder, consolidated gross margins through the year will be impacted by the revenue mix of infrastructure software and product mix within semiconductors. We expect Q4 adjusted EBITDA to be 67% of revenue. We expect the non-GAAP tax rate for Q4 and fiscal year 2025 to remain at 14%. I will now pass the call back to Hock for some more exciting news.
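As a quick illustration of the mix effect described here: consolidated gross margin is simply the revenue-weighted average of segment margins, so a richer mix of lower-margin XPU and wireless revenue drags the blend down. The sketch below uses the approximate Q3 segment figures from this call; the helper function is hypothetical and the segment margins are approximations, not guidance.

```python
# Minimal sketch of the gross margin mix math, using approximate Q3
# figures from this call (semis ~$9.2B at ~67%, software ~$6.8B at ~93%).

def blended_gross_margin(segments: dict[str, tuple[float, float]]) -> float:
    """segments maps name -> (revenue in $B, gross margin as a fraction)."""
    total_rev = sum(rev for rev, _ in segments.values())
    total_gp = sum(rev * gm for rev, gm in segments.values())
    return total_gp / total_rev

q3 = {
    "semiconductors": (9.2, 0.67),
    "infrastructure software": (6.8, 0.93),
}
print(f"blended gross margin: {blended_gross_margin(q3):.1%}")
# ~78.1%, in the neighborhood of the reported 78.4%; shifting revenue
# toward the lower-margin segment pulls the blend down, as guided for Q4.
```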
Hock Tan: I don't know about exciting, Kirsten, but I do have an update. I thought, before we move to questions, I should share it. The board and I have agreed that I will continue as the CEO of Broadcom through 2030, at least. These are exciting times for Broadcom, and I'm very enthusiastic to continue to drive value for our shareholders. Operator, please open up the call for questions.
Operator: Thank you. As a reminder, to ask a question, please press star one one. Due to time constraints, we ask that you please limit yourself to one question. And our first question will come from the line of Ross Seymore with Deutsche Bank. Your line is open.
Q&A Session
Ross Seymore: Hi, guys. Thanks for taking my question. Hock, thank you for sticking around for a few more years. I just wanted to talk about the AI business and, specifically, the XPUs. When you said you're gonna grow significantly faster than what you had thought a quarter ago, what's changed? Is it just the prospect moving to a customer definition, that $10 billion of orders that you mentioned? Or is it stronger demand across the existing three customers? Any detail on that would be helpful.
Hock Tan: I think it's both, Ross. But to a large extent, it's the fourth customer that we now add to our roster, to whom we will ship pretty strongly in 2026, I should say. So it's a combination of increasing volumes from the existing three customers, which moves up very progressively and steadily, and the addition of a fourth customer with immediate and fairly substantial demand. That really changes our thinking of what '26 is starting to look like. Thank you.
Operator: One moment for our next question. That will come from the line of Harlan Sur with JPMorgan. Your line is open.
Harlan Sur: Hi. Good afternoon. Congratulations on a well-executed quarter and strong free cash flow. I know everybody's gonna ask a lot of questions on AI, Hock, so I'm gonna ask about the non-AI semiconductors. If I look at your guidance for Q4, it looks like the non-AI semiconductor business is gonna be down about 7% to 8% year over year in fiscal '25 if you hit the midpoint of the Q4 guidance. The good news is that the negative year-over-year trends have been improving through the year; in fact, I think you guys are gonna be positive year over year in the fourth quarter. You've characterized it as relatively close to the cyclical bottom, relatively slow to recover. However, we have seen some green shoots of positivity, right? Broadband, server storage, enterprise networking.
You're still driving the DOCSIS 4 upgrade in broadband cable, you've got next-gen PON upgrades in China and the US in front of you, and enterprise spending on network upgrades is accelerating. So near term, from the cyclical bottom, how should we think about the magnitude of the cyclical upturn? And given your thirty- to forty-week lead times, are you seeing continued order improvements in the non-AI segment, which would point to continued cyclical recovery into next fiscal year?
Hock Tan: Well, Harlan, if you take a look at that non-AI segment, you're right. On a year-on-year basis, our Q4 guidance is actually up, as you say, slightly, a couple of percent from a year ago. And it's not much to shout about at this point. The big issue is the puts and takes. And the bottom line to all this is that, other than the seasonality we perceive if you look at it short term, not year on year but sequentially, in things like wireless, and we even start to see some seasonality in server storage these days, it all kind of washes out so far. The only consistent trend we've seen over the last three quarters that is moving up strongly is broadband.
Nothing else, if you look at it from a cyclical point of view, seems able to sustain an uptrend so far. As a whole, they are not getting worse, as you pointed out, Harlan. But they are not showing the V-shaped recovery, as a whole, that we would like to see and expect to see in cyclical semiconductor cycles. The only thing that gives us some hope at this point is broadband, and it is recovering very strongly. But then, it was the business that was most impacted in the sharp downturn of '24 and early '25. So again, one takes that with a grain of salt. The best answer for you is that non-AI semiconductors are kind of slow to recover, as I said, and Q4 year on year is up maybe low single digits.
That is the best way to describe it at this point. So I'm expecting to see more of a U-shaped recovery in non-AI. And perhaps by mid '26 or late '26, we'll start to see some meaningful recovery. But as of right now, it's not clear.
Harlan Sur: Mhmm. Are you starting to see that in your order trends, in your order book, just because your lead times are, like, forty weeks, right?
Hock Tan: We are. But we’ve been tricked before. But we are. The bookings are up, and they are up year on year in excess of 20%. Nothing like what AI bookings look like. But 23% is still pretty good. Right?
Operator: Thank you. One moment for our next question. That will come from the line of Vivek Arya with Bank of America. Your line is open.
Vivek Arya: Thanks for taking my question, and best wishes for the next part of your tenure. My question is, you know, if you could help us quantify what the new fiscal '26 AI guidance is. Because I think on the last call you mentioned '26 could grow at the 60% growth rate. So what is the updated number? Is it, you know, 60% plus the $10 billion that you mentioned? And related to that, do you expect the custom versus networking mix to stay broadly what it has been this past year, or evolve more towards custom? So any quantification on this networking versus custom mix would be very helpful.
Hock Tan: Fiscal '26. Okay. Let's answer the first part first. If I could be so bold as to suggest to you: last quarter, when I said the trend of growth in '26 will mirror that of '25,
which is 50% to 60% year on year, that's really all I said. I didn't quote a band. Of course, it comes out to 50% to 60% because that's what '25 is. To put it another way, which is perhaps more accurate, we're seeing the growth rate accelerate, as opposed to just remaining steady at that 50% to 60%. We are expecting and seeing 2026 accelerate beyond the growth rate we see in '25. And I know you'd love me to throw a number at you, but you know what? We're not supposed to be giving you a forecast for '26. The best way to describe it is that it will be a fairly material improvement. And the networking versus custom mix? Ah, good point. Thanks for reminding me. As we see it, a big part of this driver of growth will be XPUs.
The reason, repeating what I said in my remarks, comes from the fact that we continue to gain share at our three original customers. They're on their journey, and with each passing generation they go more to XPUs. So we are gaining share with these three. We now have the benefit of an additional, and I would say very significant, fourth customer. And that combination will mean more XPUs. And as we create more and more XPUs among these four, we get the networking with these four, but the mix of networking from outside these four will now be diluted, a smaller share. So I actually expect networking as a percentage of the pool to be a declining percentage going into '26.
Operator: Thank you. And one moment for our next question. And that will come from the line of Stacy Rasgon with Bernstein Research. Your line is open.
Stacy Rasgon: Hi, guys. Thanks for taking my question. I was wondering if you could help me parse out this $110 billion backlog. Did I hear that number right? Could you give us some color on the makeup of it? Like, how far out does that go, and how much of that $110 billion is AI versus non-AI versus software?
Hock Tan: Well, Stacy, we generally don't break out backlog. I've just given a total number to give you a sense of how strong the business is as a whole for the company. In terms of growth, it's largely driven by AI. Software continues to add on a steady basis, and non-AI, as I indicated, has grown double digits, nothing compared to AI, which has grown very strongly. To give you a sense, perhaps fully 50% of it, at least, is semiconductors.
Stacy Rasgon: Okay. And it’s fair to say that of that semiconductor piece, it’s gonna be much more AI than non AI?
Hock Tan: Right.
Stacy Rasgon: Yeah. Got it. That’s helpful. Thank you.
Operator: One moment for our next question. And that will come from the line of Ben Reitzes with Melius. Your line is open.
Ben Reitzes: Hey, guys. Thanks a lot. I appreciate it. Hock, congrats on being able to guide AI revenue growth well above 60% for next year. So I wanted to be a little greedy and ask you about maybe '27 and the other three customers or so. How is the dialogue going beyond these four customers? In the past, you've talked about having seven. Now we've added a fourth to production, and then there were three. Are you hearing from others? And how's the trend going with the other three, maybe beyond '26 into '27 and beyond? How do you think that momentum is going to shape up? Thanks so much.
Hock Tan: Ben, you are definitely greedy and definitely overthinking this for me. Thank you. But yeah, you know, you're asking for a subjective qualification, and frankly, I don't wanna give that. I'm not comfortable giving that, because sometimes we stumble into production in time frames that are fairly unexpected, surprisingly. Equally, it could get delayed. So I'd rather not give you any more color on prospects than to just tell you that these prospects are real prospects, and we continue to be very closely engaged in developing each of their own XPUs, with every intent of going into substantial production, like the four we have today who are customers.
Ben Reitzes: Yeah. You still think that that million-unit goal, you know, for these seven, though, is still intact?
Hock Tan: Oh, for the three, I said. Now they are four. For the prospects, no comment; I'm not positioned to judge on that. But for our, now, four customers, yes.
Ben Reitzes: Alright. Thanks a lot. Congrats.
Operator: One moment for our next question. And that will come from the line of Jim Schneider with Goldman Sachs. Your line is open.
Jim Schneider: Good afternoon. Thanks for taking my question. Hock, I was wondering if you could give us a little bit more color, not necessarily on the prospects, which you still have in the pipeline, but on how you view the universe of additional prospects beyond the seven, you know, customers and prospects you've already identified. Do you still see there being additional prospects that would be worthy of a custom chip? And I know you've been relatively, you know, circumspect in terms of the number of customers that are out there and the volume that they can provide, and selective in terms of the opportunities you're interested in. So maybe frame for us the additional prospects as you see them beyond the seven. Thank you.
Hock Tan: That's a very good question. And let me answer it on a fairly broad basis. Well, as I said before, and perhaps I'll repeat a bit more, we look at this market in two broad segments. One is simply the parties, the customers, who develop their own LLMs. And the rest of the market I collectively lump together as enterprise; that is, markets that will run AI workloads for enterprises, whether on-prem or as GPU or XPU as a service. We don't address that enterprise market, to be honest. That's because it's a hard market for us to address, and we're not set up to address it. We instead address this LLM market, and as I said many times before, it's a very narrow market.
A few players driving frontier models on a very accelerated trend towards superintelligence, to plagiarize someone else's term, but you know what I mean. And these are the guys who need to invest a lot initially, in my view, on training: training ever larger clusters of ever more capable accelerators. But also, because they have to be accountable to shareholders, accountable for creating cash flows that can sustain their path, they start to also invest in inference in a massive way to monetize their models. These are the players we work with. These are players who individually spend a lot of money on a lot of compute capacity; it's just that there are only so few of them.
And as I have indicated, we have identified seven, four of which are now our customers. Three continue to be prospects we engage with. And we're very picky, I should say selective, and careful about who qualifies under that. As I indicated, they are building a platform or have a platform, and they are investing very heavily in leading LLM models. We have seven, and I think that's about it. We may perhaps see one more as a prospect, but again, we are very thoughtful and careful about even making that qualification. But right now, for sure, we have seven. And for now, that's pretty much what we have.
Operator: Thank you. One moment for our next question. And that will come from the line of Tom O’Malley with Barclays. Your line is open.
Tom O'Malley: Congrats on the really good results. I wanted to ask about the Jericho 4 commentary. NVIDIA talked about the XGS switch and is now talking about scale-across. You're talking about Jericho 4. It sounds like this market is really starting to develop. Maybe you could talk about when you see material uplift in revenue there, and why it's important to start thinking about those types of switches as we move more towards inferencing. Thank you, Hock.
Hock Tan: Great. Well, thank you for picking that up. Yes, scale-across is a new term now, right? There's scale-up, which is computing within the rack, and scale-out, which is across racks but within the data center. But now, when you get to clusters that are, and I'm not 100% sure where the cutoff is, but say above 100,000 GPUs or XPUs, you're talking in many cases, because of limitations on power, about not doing it in one single data center footprint or site. To situate over 100,000 of those XPUs in one site, power may not be easily available, land may not be; it's cumbersome. So the outcome is that most of our customers, we now see, create multiple data center sites close at hand, not far away, within a range of 100 kilometers.
That's kind of the level. They are then able to put homogeneous XPUs or GPUs in these multiple locations, three or four of them, and network across them so that they behave, in fact, like a single cluster. That's the coolest part. And that technology, which because of distance requires deep buffering and very intelligent congestion control, is technology that has existed for many, many years in the likes of the telcos, AT&T and Verizon, doing network routing. Except this is for somewhat trickier workloads, but it's the same. And we've been shipping that to a couple of hyperscalers over the last two years as Jericho 3. As the scale of these clusters and the bandwidth required for AI training expand, we have now launched Jericho 4 at 51.2 terabits per second to handle more bandwidth, but it's the same technology we have tested and proven for the last ten, twenty years.
Nothing new; we don't need to create something new for that. It runs on Ethernet. And it's very proven, very stable, and as I said, for the last two years, under Jericho 3, which runs 256 connections among compute nodes, we've been selling it to a couple of our hyperscale customers.
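For a sense of why distance forces the deep buffering mentioned here, a rough bandwidth-delay sketch follows. The ~5 µs/km fiber propagation figure and the buffer_needed_gb helper are our own illustrative assumptions, not numbers from the call.

```python
# Rough sketch: in-flight data on a long link is bandwidth x round-trip
# time, which is roughly what a router must buffer to absorb congestion
# without dropping packets. Light in fiber travels ~5 microseconds per km.

FIBER_DELAY_US_PER_KM = 5.0

def buffer_needed_gb(link_tbps: float, distance_km: float) -> float:
    """Round-trip bandwidth-delay product, in gigabytes."""
    rtt_s = 2 * distance_km * FIBER_DELAY_US_PER_KM * 1e-6
    return link_tbps * 1e12 * rtt_s / 8 / 1e9

print(f"~{buffer_needed_gb(51.2, 100):.1f} GB")  # ~6.4 GB at 51.2 Tb/s over 100 km
```

At 51.2 Tb/s over 100 km, a round trip's worth of traffic is on the order of gigabytes, far beyond typical on-chip packet buffers, which is why scale-across routers in this class pair high bandwidth with deep off-chip buffering and intelligent congestion control.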
Operator: One moment for our next question. And that will come from the line of Carl Ackerman with BNP Paribas. Your line is open.
Carl Ackerman: Hock, have you completely converted your top 10,000 accounts from vSphere to the entire VMware Cloud Foundation virtualization stack? I ask because I think last quarter 87% of accounts had adopted that, and that's certainly a marked increase versus the less than 10% of those customers who bought the entire suite before the deal. And I guess, as you address that, what interest level are you seeing with the longer tail of enterprise customers adopting VCF? And are you seeing tangible cross-selling benefits in your merchant semiconductor storage and networking business as those customers adopt VMware? Thank you.
Hock Tan: Okay. To answer the first part of your question: yeah, pretty much; virtually, well over 90% have bought VCF. Now, I am careful about my choice of words, because we have sold them on it, and they have bought licenses to deploy; that doesn't mean they are fully deployed. Here comes the other part of our work, which is to take these 10,000 customers, or a big chunk of them, who have bought the vision of a private cloud on-prem, and work with them to enable them to deploy and operate it successfully on their infrastructure on-prem. That's the hard work over the next two years that we see happening. And as we do it, we see expansion across their IT footprint on VCF private cloud, running within their data centers.
That's the key part of it, and we see that continuing. That's the second phase of my VMware story. The first phase was to convince these people to convert from perpetual licenses to subscription, and in so doing purchase VCF. The second phase now is to make that purchase they made of VCF create the value they look for in a private cloud on their premises, in their IT data centers. That's what's happening. And that will sustain for quite a while, because on top of that, we will start selling advanced services: security, disaster recovery, even running AI workloads on it. All that is very exciting. Your second question is whether that enables me to sell more hardware. No. It's quite independent. In fact, as they virtualize their data centers, we consciously accept the fact that we are commoditizing the underlying hardware in the data center.
Commoditizing servers, commoditizing storage, commoditizing even networking. And that's fine. By so commoditizing, we're actually reducing the cost of investments in hardware in data centers for enterprises. Now, beyond the largest 10,000, are we seeing a lot of success? We're seeing some. But again, there are two reasons why we do not expect it to be necessarily as successful. One is, you know, the value, the TCO as they call it, that comes from it will be much less. But the more important thing is the skill sets needed: not just to deploy it, where you can get services and ourselves to help them, but to keep operating it might not be something they can take on. We shall see. This is an area where we're still learning, and it will be interesting to see.
Well, VMware has 300,000 customers. We see the top 10,000 as being people for whom it makes a lot of sense, who derive a lot of value, in deploying private cloud using VCF. We are now looking at whether the next 20,000 to 30,000 midsized companies see it the same way. Stay tuned. I'll let you know.
Carl Ackerman: Very clear. Thank you.
Operator: One moment for our next question. And that will come from the line of CJ Muse with Cantor Fitzgerald. Your line is open.
CJ Muse: Yes, good afternoon. Thanks for taking the question. I was hoping to focus on gross margins. I understand the guide is down 70 bps, particularly with software lower sequentially and greater contributions from wireless and XPUs. But to hit that 77.7%, I either have to model semiconductor margins flat, which I would think would be lower, or software gross margins at 95%, you know, up 200 bps. So can you help me better understand the moving parts there, to allow only a 70 bps drop?
Kirsten Spears: Yeah. I mean, the TPUs will be going up along with wireless, as I said on the call. And our software revenue will be coming up just a bit as well.
CJ Muse: You mean XPUs?
Kirsten Spears: XPUs, yes.
Kirsten Spears: And Q4 is typically our heaviest quarter of the year for wireless, right? So you have wireless and XPUs, with generally lower margins, right? And then our software revenue coming up.
Operator: And one moment for our next question. And that will come from the line of Joe Moore with Morgan Stanley. Your line is open.
Joe Moore: Great. Thank you. In terms of the fourth customer, I think you've talked in the past about potential customers four and five being more hyperscale, and six and seven being more like, you know, the LLM makers themselves. Can you give us a sense, if you could, to help us categorize that? If not, that's fine. And then the $10 billion of orders, can you give us a time frame on that? Thank you.
Hock Tan: Okay. Yeah. At the end of the day, all seven do LLMs. Not all of them currently have the huge platform we're talking about, but one could imagine that eventually all of them will have or create a platform. So it's hard to differentiate the two. But coming to the second part, on the delivery of the $10 billion: that will probably be around, I would say, the second half of our fiscal year 2026. To be even more precise, likely Q3 of our fiscal '26.
Joe Moore: Okay. Q3, it starts in Q3? Or, like, what time frame does it take to deploy $10 billion?
Hock Tan: It starts and ends in Q3.
Joe Moore: Alright. Thank you.
Operator: One moment for our next question. And that will come from the line of Joshua Buchhalter with TD Cowen. Your line is open.
Joshua Buchhalter: Hey, guys. Thank you for taking my question, and congrats on the results. I was hoping you could provide some comments on momentum for your scale-up Ethernet, and how it compares with, you know, UALink and PCIe solutions out there. How meaningful is it to have the Tomahawk Ultra product out there with lower latency? And how meaningful do you think the scale-up Ethernet opportunity could be over the next year, as we think about your AI networking business? Thank you.
Hock Tan: Well, that's a good question. And we ourselves are thinking about that too, because, to begin with, our Ethernet solutions are very disaggregated from the AI accelerators, anybody's. It's separate. We treat them as separate, even though, you're right, the network is the computer. We have always believed that Ethernet is open source; anybody should be able to have choices, so we keep it separate from my XPU. But the truth of the matter is that for our customers who use the XPU, we develop and optimize our networking switches and other components that network signals across these clusters hand in hand with it. In fact, all these XPUs have been developed with interfaces that handle Ethernet.
Very much so. So, in a way, with XPUs at our customers, we are openly enabling Ethernet as the networking protocol of choice. Very, very openly. And it need not be our Ethernet switches; it could be somebody else's Ethernet switches that do it. It just happens that we're the leader in this business, so we get that. But beyond that, especially when it comes to a closed system of GPUs, we see less of it, except at the hyperscalers, where the hyperscalers are able to architect the GPU clusters very separately from the networking side, especially in scale-out. In which case, to those hyperscalers, we sell a lot of these Ethernet switches that do the scaling out. And we suspect that when it goes to scaling across, there will be even more Ethernet that is disaggregated from the GPUs that are in place.
As far as the XPUs are concerned, for sure, it’s all Ethernet.
Joshua Buchhalter: Thank you.
Operator: One moment for our next question. That will come from the line of Christopher Rolland with Susquehanna. Your line is open.
Christopher Rolland: Thank you for the question, and congrats on the contract extension, Hock. So, yeah, my questions are about competition, both on the networking side and the ASIC side. You kinda answered some of that, I think, in the last question. But do you view any competition on the ASIC side, particularly from US or Asian vendors, or do you think the competition is decreasing? And on the networking side, do you think UALink or PCIe even has a chance of displacing SUE in 2027, when it's expected to ramp? Thanks.
Hock Tan: Thank you for embracing SUE. Thank you. I didn't expect that to come out, and I appreciate that. Well, you know I'm biased, to be honest. But it's so obvious I can't help being biased, because Ethernet is well proven. Ethernet is so well known to the engineers, the architects who sit in all these hyperscalers developing and designing AI data centers, AI infrastructure. It's the logical thing for them to use. And they are using it, and they are focusing on it. And the development of separate, individualized protocols, frankly, you know, it's beyond my imagination why they bother. Ethernet is there. It's been well used. It's proven. It can keep going up. The only thing people talk about is perhaps latency, especially in scaling up.
Hence the emergence of NVLink. And even then, as I indicated, it's not hard for us, and we are not the only ones who can do that; quite a few others in Ethernet can do it in the switches. You can just tweak the switches to make the latency super good. Better than NVLink, better than InfiniBand. Less than 250 nanoseconds, you know. Easily. And that's what we did. So it's not that hard. And perhaps I say that because we have been doing it for as long as Ethernet has been around, the last twenty-five years. So there's no need to go and create some cooked-up protocols that you now have to bring people around to. Ethernet is the way to go, and there's plenty of competition too, because it's an open source system.
So I think Ethernet is the way to go, and for sure, in developing XPUs for our customers, all these XPUs, with the agreement of the customers, are made with interfaces compatible with Ethernet, and not some fancy other interface that one has to keep chasing as bandwidth increases. And I assure you, we have competition, which is one of the reasons why the hyperscalers like Ethernet. It's not just us. They can find somebody else if, for whatever reason, they don't like us, and we're open to that. It's always good to have that. It's an open source system with no lock-in, and there are players in that market, not some closed system. Switching to XPU competition: yeah, we hear about competition and all that. It's just an area where we always see competition, and our only way to secure our position is to try to out-invest and out-innovate anybody else in this game.
We have been fortunate to be the first one creating this XPU model of ASICs on silicon. And we have also been fortunate to be probably one of the largest IP developers in semiconductors out there: things like serializers/deserializers, SerDes; being able to develop the best packaging; being able to design things that are very low power. So we just have to keep investing in it, which we do, to outrun the competition in this space. And I believe we're doing a fairly decent job of it at this point.
Christopher Rolland: Very clear. Thanks, Hock.
Hock Tan: Sure.
Operator: Thank you. And we do have time for one last question. And that will come from the line of Harsh Kumar with Piper Sandler. Your line is open.
Harsh Kumar: Hey, guys. Thanks for squeezing me in. Hock, congratulations on all the exciting AI metrics, and thanks for everything you do for Broadcom, and for sticking around. Hock, my question is: you've got three to four existing customers that are ramping. As the data centers for AI clusters get bigger and bigger, it makes sense to have differentiation, efficiency, etcetera, and therefore the case for XPUs. Why should I not think that your XPU share at these three or four existing customers will be bigger than the GPU share in the longer term?
Hock Tan: It will be. It's a logical conclusion. Yeah, you're correct. And we are seeing that step by step. As I say, it's a journey. It's a multiyear journey, because it's multigenerational.
Because these XPUs don't stay still either. We're doing multiple versions, at least two versions, two generations, for each of these customers we have. And with each newer generation, they increase the consumption, the usage, of the XPU. As they gain confidence, as the model improves, they deploy it even more. So that's the logical trend: XPUs will keep gaining share at these few customers of ours. As they are successfully deployed and their software stack stabilizes, as the libraries that sit on these chips stabilize and prove themselves out, they'll have the confidence to keep putting a higher and higher percentage of their compute footprint on their own XPUs. For sure. And we see that. And that's why I say we progressively gain share.
Harsh Kumar: Thank you, Hock.
Operator: Thank you. I would now like to turn the call back over to Ji Yoo, Head of Investor Relations for any closing remarks.
Ji Yoo: Thank you, Sherry. This quarter, Broadcom will be presenting at the Goldman Sachs Communacopia and Technology Conference on Tuesday, September 9 in San Francisco, and at the JPMorgan US All Stars Conference on Tuesday, September 16 in London. Broadcom currently plans to report its earnings for the fourth quarter and fiscal year 2025 after the close of market on Thursday, December 11, 2025. A public webcast of Broadcom's earnings conference call will follow at 2 PM Pacific. That concludes our earnings call today. Thank you all for joining. Sherry, you may end the call.
Operator: This concludes today’s program. Thank you all for participating. You may now disconnect.