NVIDIA Corporation (NASDAQ:NVDA) Q4 2023 Earnings Call Transcript

And so I think the universality of our computing — accelerated computing platform, the fact that we’re in every cloud, the fact that we’re from cloud to edge, makes our architecture really quite accessible and very differentiated in this way. And most importantly, to all the service providers, because the utilization is so high, because you can use it to accelerate the end-to-end workload and get such good throughput, our architecture is the lowest operating cost. It’s not — the comparison is not even close. So — anyhow, those are the 2 answers.

Operator: Your next question comes from the line of C.J. Muse with Evercore.

Christopher Muse: I guess, Jensen, you talked about ChatGPT as an inflection point, kind of like the iPhone. And so curious, part A, how have your conversations evolved post ChatGPT with hyperscale and large-scale enterprises? And then secondly, as you think about Hopper with the Transformer Engine and Grace with high-bandwidth memory, how has your outlook for growth for those 2 product cycles evolved in the last few months?

Jensen Huang: ChatGPT is a wonderful piece of work, and the team did a great job; OpenAI did a great job with it. They stuck with it. And the accumulation of all of the breakthroughs led to a service with a model inside that surprised everybody with its versatility and its capability. What people were surprised by — and this is close to us and well understood within the industry — is the surprising capability of a single AI model that can perform tasks and skills that it was never trained to do. And this language model can not just speak English, or translate, of course — it can be prompted in human language but output Python, output COBOL, a language that very few people even remember, output Python for Blender, a 3D program.

So it’s a program that writes a program for another program. We now realize — the world now realizes — that maybe human language is a perfectly good computer programming language, and that we’ve democratized computer programming for everyone, almost anyone who can explain in human language a particular task to be performed. This new computer — when I say new era of computing, this new computing platform — this new computer can take whatever your prompt is, whatever your human-explained request is, and translate it to a sequence of instructions that it processes directly, or it waits for you to decide whether you want to process it or not. And so this type of computer is utterly revolutionary in its application, because it has democratized programming to so many people, and that has really excited enterprises all over the world.

Every single CSP, every single Internet service provider, and, frankly, every single software company — because of what I just explained, that this is an AI model that can write a program for any program — because of that reason, everybody who develops software is either alerted, or shocked into alert, or actively working on something like ChatGPT to be integrated into their application or integrated into their service. And so this is, as you can imagine, utterly worldwide. The activity around the AI infrastructure that we built, Hopper, and the activity around inferencing using Hopper and Ampere to inference large language models have just gone through the roof in the last 60 days. And so there’s no question that whatever our views of this year were as we entered the year have been fairly dramatically changed as a result of the last 60, 90 days.

Operator: Your next question comes from the line of Matt Ramsay with Cowen & Company.