Rambus Inc. (NASDAQ:RMBS) Q4 2023 Earnings Call Transcript

Mehdi Hosseini: Great, thank you. And my follow-up —

Luc Seraphin: Yes, thanks, Des. And if I may add, what’s happening with our inventory is that the mix is changing. We see a slow burn of our DDR4 inventory, because we see a slow burn of DDR4 in general, while our inventory is increasingly strategic inventory for the three generations of DDR5 that have to go to market. It’s really important that when our customers are ramping these three generations of products and ask for products from us, we’re ready to ship immediately. So, we do see a decline in our DDR4 inventory, but a strategic increase in DDR5 inventory across the three generations, to make sure we capture the share we need to capture as the market ramps DDR5.

Mehdi Hosseini: Great. And thanks for the additional color. And then, Luc, my second question is, maybe you can help me here: as I look into the CPU roadmap, which is more relevant to your buffer chip than to the DDR5 bit itself, I see a standardization around 12 memory channels per CPU. So, the market is no longer going to be bifurcated between eight and 12. It seems to me that most of the CPUs coming out late this year to early next year are going to have 12 channels per CPU and two DIMMs per channel. So, effectively, you would have 24 DIMMs per CPU. And again, this would remove the bifurcation of the past few years. Is that the right way of thinking about how your business model is going to scale, especially with your core RCD buffer chip?

Luc Seraphin: Yes, you are correct. I think the first thing I would say is that our customers and our customers’ customers are asking for more bandwidth and more capacity. And there are different ways of doing this: increasing the DRAM density itself, increasing the DIMM capacity, increasing the number of DIMMs per channel, or increasing the number of channels. All of these are ways of increasing bandwidth and capacity. You’re correct to say that our customers are converging on 12 channels per processor, with the capability of having two DIMMs per channel, and that’s how we model our potential growth in the long run. I think there are physical limitations to going beyond 12 channels on each of these processors, and there are also physical constraints on adding more than two DIMMs per channel. So, on the current architecture, I think the industry is going to converge on these 12 channels and two DIMMs per channel.
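[Editor's note: the arithmetic behind the roadmap discussed above can be sketched as follows. This is an illustrative calculation, not from the call; the one-RCD-per-RDIMM assumption reflects standard DDR5 RDIMM design.]

```python
# DIMM count implied by the server CPU roadmap discussed above:
# 12 memory channels per CPU, two DIMMs per channel.
channels_per_cpu = 12    # converged channel count per the discussion
dimms_per_channel = 2    # physical-constraint ceiling per the discussion

dimms_per_cpu = channels_per_cpu * dimms_per_channel

# Each DDR5 RDIMM carries one RCD buffer chip, so the RCD opportunity
# scales directly with the DIMM count.
rcds_per_cpu = dimms_per_cpu

print(dimms_per_cpu)  # 24
print(rcds_per_cpu)   # 24
```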

Mehdi Hosseini: Thank you.

Desmond Lynch: Thanks Mehdi.

Operator: Thank you. The next question is from the line of Kevin Cassidy with Rosenblatt Securities. You may proceed.

Kevin Cassidy: Yes, thanks for taking my question. Maybe just to expand on what you just talked about: the DDR5 devices, the DRAM themselves, are increasing in density, so the modules will have higher density. Do you see that as a headwind at all, or will customers still populate as much as they possibly can?

Luc Seraphin: You know, higher density on DRAM is a good thing for the industry in general. Of course, with higher density DRAM and higher density DIMMs, for a fixed amount of memory you would use fewer DIMMs. But as I said, the demand for capacity is trumping all of this, so the industry is using all of these vectors to meet that request for more capacity. So, although at first look it could look like a headwind, we actually see this as a good thing. Everyone is trying to add capacity to their systems, because this is what’s limiting system capabilities today: the lack of capacity.

Kevin Cassidy: Great, thanks. And just as a follow-up, do you see new markets opening up for your RCDs, say high-end gaming, or even what’s been a popular topic of discussion, the AI-enabled PC?

Luc Seraphin: So, when it comes to AI servers, as we indicated on earlier calls, all AI servers also contain traditional servers for basic functions like storage, caching, and data grooming. All of these are going to drive demand for standard servers within an AI box, and typically those standard servers are high-capacity, high-bandwidth servers. So, it’s typically those servers that will use the latest memory, the highest density memory, and the highest number of DIMMs per bus. That’s going to be a driver for RCD chips going forward.

Kevin Cassidy: Okay. I guess what I was asking is, on the PC side, both CPU manufacturers have talked about AI-enabled PCs. Will those need RDIMMs?

Luc Seraphin: So, what we see on the client side is that when the speed on the bus exceeds about 6,400 megatransfers per second, we will need functions similar to the RCD chip on the client side. That could be the case for high-end PCs, gaming PCs, or PCs used for inference. We do see, from a technology standpoint, that when you exceed 6,400 megatransfers per second, you need those clock regeneration chips, which are very similar to the RCD. So, that’s something we are investing in, because after this wave of AI training applications that we see, there will be a wave of AI inference as well, and we’re going to see requirements for higher performance on the client side as well. So, that’s an area we’re investing in.

Kevin Cassidy: Great, thank you.

Operator: Thank you. The next question is from the line of Nam Kim with Arete Research. You may proceed.

Nam Kim: Thank you for taking my question. Sorry, I missed part of the Q&A, so I’m not sure if this was addressed. Can you share a qualification update on your companion chips? I was expecting your companion chip sales to start gaining some momentum in DDR5 Gen2. So, what’s your expectation for companion chip sales this year? Any color would be great. Thank you.