Netlist, Inc. (PNK:NLST) Q2 2023 Earnings Call Transcript

Page 2 of 2

Suji Desilva: Understood. The clock ticks on that. I understand. Okay. Then on the product side, maybe Gail. The SK hynix product resale, is it more SSD now, or is it DRAM that you're reselling today at these low levels? And how does the gross margin recover as each of those areas improves? How does the gross margin track with that revenue recovery as you increase your SK hynix resales?

Gail Sasaki: It’s mainly DRAM, Suji, but some SSDs are mixed in as well. The gross margin will recover as we see demand picking up and inventories at all our customers starting to come down, which we are starting to see today. It’s still not going to be overnight, but we do see improvement.

Suji Desilva: Great. Thanks, Gail. And then one last question for Chuck. The gen AI trend, obviously we’ll be watching this with you guys very closely. You have a lot of memory technology that could be very important to that. Just trying to understand, Chuck, we’ve talked about hybrid DIMM for many years. Can you just talk about the NAND controller that, in a sense, arbitrages NAND for DRAM? Does that technology, or even the hybrid DIMM product itself, have a play here going forward, or have you repositioned that technology into some of the new products you’re planning to target gen AI with?

Chuck Hong: Yes, Suji. I think that’s an important part of what we’re doing here on the R&D side. We’ve been working on the hybrid DIMM use of — using a lot of NAND to replicate DRAM performance over the CXL bus. The way you can distinguish an AI server from a standard enterprise server is that the AI server operates first off of a GPU, which requires hundreds of gigabytes of HBM; a standard enterprise server, which has been around for 34 years, does not require HBM. So that’s one. And part of the win that we got against Samsung was on our HBM patents. Then there are two other elements beyond HBM. One is that main memory in an AI server is four times to five times the capacity of a standard enterprise server.

That requires a move to high capacity, particularly MRDIMM. And there we are in a very strong position again, with dozens of patents covering MRDIMM. And then lastly is CXL. As speeds go up, AI servers will have fewer memory sockets, meaning they will have to rely on CXL memory to pull data from. That is where we’ve been working for the last five years with a very large engineering staff. We don’t mention it on every call, but we’ve made tremendous progress there in creating an ASIC — a system-on-a-chip, or SoC — with the software and firmware that can bring multiple terabytes of NAND that looks to the system, and operates, like DRAM. So we believe that we’ve got a very strong position in AI servers on all three fronts: IP coverage on HBM and MRDIMM, and then both IP and a physical product for CXL.

And with that technology, we believe there are not very many companies out in the world that are working on anything similar. So that — yes, that’s the AI server front.

Suji Desilva: Appreciate the thorough walk of the landscape. Thank you, Chuck. Thanks, Gail.

Chuck Hong: Thanks, Suji.

Gail Sasaki: Thanks, Suji.

Operator: This concludes our question-and-answer session. The conference has now ended. You may now hang up. Thank you for attending today’s presentation.
