Broadcom Inc. (NASDAQ:AVGO) Q3 2023 Earnings Call Transcript

Karl Ackerman: Thank you. Just on gross margins, you had a tough year-over-year compare for your semiconductor gross margins, which, of course, remain some of the best in semis. But is there a way to think about or quantify the headwind to gross margins this year from still-elevated logistics costs and substrate costs, as we think about the supply chain perhaps freeing up next year, which could become a tailwind? Thank you.

Hock Tan: You know, Karl, it is Hock. Let me take a stab at this question, because it really requires a more holistic answer, and here's what I mean. The impact to us on gross margin, more than anything else, is not related to transactional supply chain issues. I'm sure those have an effect at any particular point in time, but they are not material and not sustained in terms of impacting trends. What drives gross margin for us as a company, largely, is, frankly, product mix. As I mentioned earlier, we have a broad range of products, even as we try to make order out of it for purposes of communication and classify them into multiple end markets. Within each end market there are many products, and they all have different gross margins depending on where they are used, their criticality, and various other aspects.

So they're different. So we have a real mixed bag. And what drives the trend in gross margin more than anything else is the pace of adoption of next-generation products in each product category; think of it that way, measured across multiple products. Each time a new generation of a particular product gets adopted, we get the opportunity to lift gross margin. And therefore the rate of adoption matters, because some products change generations every few years versus others on a more extended cycle, and so you have different gross margin growth profiles. That is the most important variable. Now, coming down specifically to your question: during '21 and '22 in particular, we were in an up cycle in the semiconductor industry.

We had a lot of lockdowns, changes in behaviour, and a high level of demand for semiconductors; or, put it this way, a shortage of supply relative to demand. There was accelerated adoption of a lot of products. So we benefited, among other things, not just in revenue, as I indicated; we benefited from gross margin expansion across the board, as a higher percentage of our products out there got adopted into the next generation faster. We are past that now. There is probably some slowdown in the adoption rate, and so gross margin might not expand as fast. But it will work itself out over time. And as I've always told you guys, the model this company has seen is empirical, but it is based on this underlying basic economics: we have a broad range of products, and each of them has a different product life cycle of upgrading to the next generation.

We have seen over the years, on a long-term basis, an expansion of gross margin on a consolidated basis for semiconductors that ranges from 50 to maybe 150 basis points annually. And that's a long-term average. In between, of course, you've seen numbers that go over 200 basis points; that happened in 2022. And then you have to offset that with years where gross margin expansion might be much less, like 50 basis points. That is the process you will see us go through on an ongoing basis.

Karl Ackerman: Thank you.

Operator: Thank you. One moment for our next question. That will come from the line of Harsh Kumar with Piper Sandler. Your line is open.

Harsh Kumar: Yes, Hock. So congratulations on your textbook soft landing; I mean, it's perfectly executed. I had a question, I guess, more so on the timing of the takeoff. You've got a lead time of about one year for most of your product lines, so I suppose you have visibility a year out. The question really is, are you starting to see growth in backlog about a year out? In other words, can we assume that we'll spend about a year at the bottom and then start to come back? Or is it happening before that time frame, or maybe not even a year out? Just any color would be helpful. And then, as a clarification, Hock, is China approval needed for VMware or not?

Hock Tan: Let's start with lead times and asking me to predict when the up cycle will happen. It's still too early for me to want to predict that, to be honest with you, because even though we have 50-week lead times, overlaid on that today is a lot of bookings related to generative AI, and a decent amount of bookings related to wireless, too. So that kind of biases what I'm looking at. So the answer to your question (a very unsatisfactory answer, I know) is that it's too early for me to tell, but we do have a decent amount of orders. All right.

Harsh Kumar: And then on VMware, Hock?

Hock Tan: Let me say this. I made those specific remarks on regulatory approval. I ask that you think it through, read it through, and let's stop right there.

Harsh Kumar: Okay. Fair enough. Thank you, Hock.

Hock Tan: Thank you.

Operator: Thank you. And one moment for our next question. And that will come from the line of Aaron Rakers with Wells Fargo. Your line is open.

Aaron Rakers: Yes. Thanks for taking the question, and congrats also on the execution. I'm just curious, as I think about the Ethernet opportunity in AI fabric build-outs. Hock, any kind of updated thoughts, now with the Ethernet Consortium that you're part of, on Ethernet relative to InfiniBand, particularly at the east-west layer of these AI fabric build-outs? With Tomahawk 5 and Jericho 3 sounding like they're going to start shipping in volume maybe in the next six months or so, is that an inflection where you actually see Ethernet really start to take hold in the east-west traffic layer of these AI networks? Thank you.

Hock Tan: That's a very interesting question. And frankly, my personal view is that InfiniBand has been the choice for years and years, for generations of what we have called high-performance computing, right? And high-performance computing was the old term for AI, by the way. It was the choice because those were very dedicated application workloads, not scaled out the way large language models drive today. With large language models driving things, and most of these large language models now being driven a lot by the hyperscalers, frankly, you see Ethernet getting a huge amount of traction. And Ethernet is shipping; it's not just getting traction for the future, it is shipping in many hyperscalers. And it coexists with InfiniBand; that's the best way to describe it.

And it all depends on the workloads. It depends on the particular application that’s driving it. And at the end of the day, it also depends on, frankly, how large you want to scale your AI clusters. The larger you scale it, the more tendency you have to basically open it up to Ethernet.

Aaron Rakers: Yeah, thank you.

Operator: Thank you. One moment for our next question. And that will come from the line of Matt Ramsay with TD Cowen. Your line is open.

Matthew Ramsay: Yes. Thank you very much. Good afternoon. Hock, I wanted to ask a question, I guess maybe a two-part question, on your custom silicon business. Obviously, the large customer is ramping really, really nicely, as you described. But there are many other sort of large hyperscale customers that are considering custom silicon, maybe catalyzed by Gen AI, maybe some not. So I wonder if the recent surge in Gen AI spending and enthusiasm has maybe widened the aperture of your appetite to take on big projects for other large customers in that arena? And secondly, any appetite at all to consider custom switching and routing products for customers, or is it really a keen focus on merchant in those areas? Thank you.

Hock Tan: Well, thank you. That's a very insightful question. We only have one large customer in AI engines. We're not a GPU company, and we don't do much compute, as you know, other than offload computing; and having said that, it's very customized. What I'm trying to say is that I don't want to mislead you guys. The fact that I may have an engagement (and I'm not saying I do) on a custom program should not at all be translated in your minds as, oh yes, this is a pipeline that will translate to revenue. Creating hardware infrastructure to run these large language models of hyperscalers is an extremely difficult and complex task for anyone to do. And even if there is an engagement, it does not translate easily to revenue.