NVIDIA Corporation (NASDAQ:NVDA) Q3 2024 Earnings Call Transcript

And so that’s basically it, bringing AI to Ethernet for the world’s enterprises.

Operator: Thank you. Your next question comes from the line of Joe Moore of Morgan Stanley. Your line is open.

Joseph Moore: Great. Thank you. I’m wondering if you could talk a little bit more about Grace Hopper and how you see the ability to leverage the microprocessor, and how you see that as a TAM expander. And what applications do you see using Grace Hopper versus more traditional H100 applications?

Jensen Huang: Yeah. Thanks for the question. Grace Hopper is in production — in high-volume production now. With all of the design wins that we have in high performance computing and AI infrastructure, we are expecting a very, very fast ramp next year with our first data center CPU to a multi-billion dollar product line. This is going to be a very large product line for us. The capability of Grace Hopper is really quite spectacular. It has the ability to create computing nodes that simultaneously have very fast memory as well as very large memory. In areas like vector databases or semantic search, what is called RAG (retrieval-augmented generation), you could have a generative AI model refer to proprietary data or factual data before it generates a response, and that data is quite large.
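
As a rough illustration of the RAG pattern described here, the sketch below retrieves the most relevant document from a small store before generation. The bag-of-words embedder, sample documents, and generate() stub are hypothetical stand-ins used only to show the pattern, not NVIDIA software.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Everything here is a toy stand-in used only to show the pattern:
# retrieve proprietary/factual data first, then ground the model on it.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding': word -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Proprietary or factual documents the model should consult.
documents = [
    "Grace Hopper pairs an Arm CPU with a Hopper GPU over NVLink.",
    "HGX H100 boards integrate eight Hopper GPUs.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str) -> str:
    """Return the stored document most similar to the query."""
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]

def generate(prompt: str) -> str:
    """Stand-in for a real generative model call."""
    return f"[model response grounded in]\n{prompt}"

query = "How many GPUs are on an HGX board?"
context = retrieve(query)  # fetch factual data before generating
print(generate(f"Context: {context}\nQuestion: {query}"))
```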

And you can also have applications or generative models where the context length is very high. You basically store an entire book in the system’s memory before you ask your questions. And so the context length can be quite large this way. The generative model has the ability, on the one hand, to still interact with you naturally; on the other hand, to refer to factual data, proprietary data, or domain-specific data, your data, and be contextually relevant and reduce hallucination. And so that particular use case, for example, is really quite fantastic for Grace Hopper. It also serves the customers that really care to have a different CPU than x86. Maybe it’s European supercomputing centers or European companies who would like to build up their own ARM ecosystem and a full stack, or CSPs that have decided that they would like to pivot to ARM, because their own custom CPUs are based on ARM.
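
To give a rough sense of why long context demands the large, fast memory described above, here is an illustrative back-of-the-envelope sizing of a transformer’s KV cache. The layer count, hidden dimension, and token count are hypothetical examples, not Grace Hopper figures or any specific model’s parameters.

```python
# Back-of-the-envelope KV-cache sizing for long-context inference.
# Standard formula: per token, a transformer caches one key and one
# value vector per layer, each of size hidden_dim, at the chosen
# precision. (Grouped-query attention would shrink this; ignored here.)
def kv_cache_bytes(tokens: int, layers: int, hidden_dim: int,
                   bytes_per_elem: int = 2) -> int:  # fp16 = 2 bytes
    return 2 * tokens * layers * hidden_dim * bytes_per_elem  # 2 = K and V

# Hypothetical 70B-class model: 80 layers, 8192 hidden dimension.
# An entire book is on the order of ~100,000 tokens.
size = kv_cache_bytes(tokens=100_000, layers=80, hidden_dim=8192)
print(f"{size / 2**30:.1f} GiB of KV cache")  # prints: 244.1 GiB
```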

There are a variety of reasons that drive the success of Grace Hopper, but we’re off to just an extraordinary start. This is a home run product.

Operator: Your next question comes from the line of Tim Arcuri of UBS. Your line is open.

Tim Arcuri: Hi. Thanks. I wanted to ask a little bit about the visibility that you have on revenue. I know there’s a few moving parts. I guess, on one hand, the purchase commitments went up a lot again. But on the other hand, the China bans would arguably pull in the point at which you can fill the demand beyond China. So I know we’re not even into 2024 yet, and it doesn’t sound like, Jensen, you think that next year would be a peak in your Data Center revenue, but I just wanted to sort of explicitly ask you that. Do you think that Data Center can grow even in 2025? Thanks.

Jensen Huang: Absolutely believe that Data Center can grow through 2025. And there are, of course, several reasons for that. We are expanding our supply quite significantly. We already have one of the broadest, largest, and most capable supply chains in the world. Now, remember, people think that the GPU is a chip. But the HGX H100, the Hopper HGX, has 35,000 parts and weighs 70 pounds. Eight of those parts are Hopper chips; the rest are not. Even its passive components are incredible: high-voltage parts, high-frequency parts, high-current parts. It is a supercomputer, and therefore, the only way to test a supercomputer is with another supercomputer. Even the manufacturing of it is complicated, the testing of it is complicated, the shipping of it is complicated, and the installation is complicated.

And so, every aspect of our HGX supply chain is complicated. And the remarkable team that we have here has really scaled out the supply chain incredibly. Not to mention, all of our HGXs are connected with NVIDIA networking. And the networking, the transceivers, the NICs, the cables, the switches, the amount of complexity there is just incredible. And so, first of all, I’m just super proud of the team for scaling up this incredible supply chain. We are absolutely world class. But meanwhile, we’re adding new customers and new products. So we have new supply. We have new customers, as I was mentioning earlier. Different regions are standing up GPU specialist clouds, and sovereign AI clouds are coming up all over the world, as people realize that they can’t afford to export their country’s knowledge, their country’s culture, for somebody else to then resell AI back to them. They have to — they should, they have the skills, and surely, in combination with us, we can help them build up their national AI.