NVIDIA Corporation (NASDAQ:NVDA) Q2 2024 Earnings Call Transcript

NVIDIA Corporation (NASDAQ:NVDA) Q2 2024 Earnings Call Transcript August 23, 2023

Operator: Good afternoon. My name is David, and I’ll be your conference operator today. At this time, I’d like to welcome everyone to NVIDIA’s Second Quarter Earnings Call. Today’s conference is being recorded. All lines have been placed on mute to prevent any background noise. After the speakers’ remarks, there will be a question-and-answer session. [Operator Instructions] Thank you. Simona Jankowski, you may begin your conference.

Simona Jankowski: Thank you. Good afternoon, everyone and welcome to NVIDIA’s conference call for the second quarter of fiscal 2024. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I’d like to remind you that our call is being webcast live on NVIDIA’s Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2024. The content of today’s call is NVIDIA’s property. It can’t be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations.

These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today’s earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 23, 2023, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

And with that, let me turn the call over to Colette.

Colette Kress: Thanks, Simona. We had an exceptional quarter. Record Q2 revenue of $13.51 billion was up 88% sequentially and up 101% year-on-year, and above our outlook of $11 billion. Let me first start with Data Center. Record revenue of $10.32 billion was up 141% sequentially and up 171% year-on-year. Data Center compute revenue nearly tripled year-on-year, driven primarily by accelerating demand from cloud service providers and large consumer Internet companies for our HGX platform, the engine of generative AI and large language models. Major companies, including AWS, Google Cloud, Meta, Microsoft Azure and Oracle Cloud, as well as a growing number of GPU cloud providers, are deploying, in volume, HGX systems based on our Hopper and Ampere architecture Tensor Core GPUs. Networking revenue almost doubled year-on-year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI.

There is tremendous demand for NVIDIA accelerated computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs. Our data center supply chain, including HGX with 35,000 parts and highly complex networking has been built up over the past decade. We have also developed and qualified additional capacity and suppliers for key steps in the manufacturing process such as [indiscernible] packaging. We expect supply to increase each quarter through next year. By geography, data center growth was strongest in the U.S. as customers direct their capital investments to AI and accelerated computing. China demand was within the historical range of 20% to 25% of our Data Center revenue, including compute and networking solutions.

At this time, let me take a moment to address recent reports on the potential for increased regulations on our exports to China. We believe the current regulation is achieving the intended results. Given the strength of demand for our products worldwide, we do not anticipate that additional export restrictions on our Data Center GPUs, if adopted, would have an immediate material impact on our financial results. However, over the long term, restrictions prohibiting the sale of our Data Center GPUs to China, if implemented, will result in a permanent loss of an opportunity for the U.S. industry to compete and lead in one of the world’s largest markets. Cloud service providers drove exceptionally strong demand for HGX systems in the quarter, as they undertake a generational transition to upgrade their data center infrastructure for the new era of accelerated computing and AI.

The NVIDIA HGX platform is the culmination of nearly two decades of full stack innovation across silicon, systems, interconnects, networking, software and algorithms. Instances powered by the NVIDIA H100 Tensor Core GPU are now generally available at AWS, Microsoft Azure and several GPU cloud providers, with others on the way shortly. Consumer Internet companies also drove very strong demand. Their investments in data center infrastructure purpose-built for AI are already generating significant returns. For example, Meta recently highlighted that since launching Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram. Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA powered instances in the cloud as well as demand for on-premise infrastructure.

Whether we serve customers in the cloud or on-prem, through partners or direct, their applications can run seamlessly on NVIDIA AI Enterprise software with access to our acceleration libraries, pre-trained models and APIs. We announced a partnership with Snowflake to provide enterprises with an accelerated path to create customized generative AI applications using their own proprietary data, all securely within the Snowflake Data Cloud. With the NVIDIA NeMo platform for developing large language models, enterprises will be able to make custom LLMs for advanced AI services, including chatbots, search and summarization, right from the Snowflake Data Cloud. Virtually every industry can benefit from generative AI. For example, AI copilots such as those just announced by Microsoft can boost the productivity of over 1 billion office workers and tens of millions of software engineers.

Billions of professionals in legal services, sales, customer support and education will be able to leverage AI systems trained in their field. AI copilots and assistants are set to create new multi-hundred billion dollar market opportunities for our customers. We are seeing some of the earliest applications of generative AI in marketing, media and entertainment. WPP, the world’s largest marketing and communication services organization, is developing a content engine using NVIDIA Omniverse to enable artists and designers to integrate generative AI into 3D content creation. WPP designers can create images from text prompts while leveraging responsibly trained generative AI tools and content from NVIDIA partners such as Adobe and Getty Images, using NVIDIA Picasso, a foundry for custom generative AI models for visual design.

Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI. We’ve partnered with ServiceNow and Accenture to launch the AI Lighthouse program, fast-tracking the development of enterprise AI capabilities. AI Lighthouse unites the ServiceNow enterprise automation platform and engine with NVIDIA accelerated computing and with Accenture consulting and deployment services. We are also collaborating with Hugging Face to simplify the creation of new and custom AI models for enterprises. Hugging Face will offer a new service for enterprises to train and tune advanced AI models powered by NVIDIA DGX Cloud. And just yesterday, VMware and NVIDIA announced a major new enterprise offering called VMware Private AI Foundation with NVIDIA, a fully integrated platform featuring AI software and accelerated computing from NVIDIA with multi-cloud software for enterprises running VMware.

VMware’s hundreds of thousands of enterprise customers will have access to the infrastructure, AI and cloud management software needed to customize models and run generative AI applications such as intelligent chatbots, assistants, search and summarization. We also announced new NVIDIA AI enterprise-ready servers featuring the new NVIDIA L40S GPU built for the industry standard data center server ecosystem and the BlueField-3 DPU data center infrastructure processor. L40S is not limited by [indiscernible] supply and is shipping to the world’s leading server system makers. L40S is a universal data center processor designed for high-volume data centers, scaling out to accelerate the most compute-intensive applications, including AI training and inferencing, 3D design and visualization, video processing and NVIDIA Omniverse industrial digitalization.

NVIDIA AI enterprise-ready servers are fully optimized for VMware Cloud Foundation and VMware Private AI Foundation. Nearly 100 configurations of NVIDIA AI enterprise-ready servers will soon be available from the world’s leading enterprise IT computing companies, including Dell, HP and Lenovo. The GH200 Grace Hopper Superchip, which combines our ARM-based Grace CPU with the Hopper GPU, entered full production and will be available this quarter in OEM servers. It is also shipping to multiple supercomputing customers, including Los Alamos National Lab and the Swiss National Supercomputing Center. And NVIDIA and SoftBank are collaborating on a platform based on GH200 for generative AI and 5G/6G applications. The second generation version of our Grace Hopper Superchip with the latest HBM3e memory will be available in Q2 of calendar 2024.

We announced the DGX GH200, a new class of large-memory AI supercomputer for giant AI language models, recommender systems and data analytics. This is the first use of the new NVIDIA NVLink Switch System, enabling all of its 256 Grace Hopper Superchips to work together as one, a huge jump compared to our prior generation connecting just eight GPUs over NVLink. DGX GH200 systems are expected to be available by the end of the year, with Google Cloud, Meta and Microsoft among the first to gain access. Strong networking growth was driven primarily by InfiniBand infrastructure to connect HGX GPU systems. Thanks to its end-to-end optimization and in-network computing capabilities, InfiniBand delivers more than double the performance of traditional Ethernet for AI.

For billion-dollar AI infrastructures, the value from the increased throughput of InfiniBand is worth hundreds of [indiscernible] and pays for the network. In addition, only InfiniBand can scale to hundreds of thousands of GPUs. It is the network of choice for leading AI practitioners. For Ethernet-based cloud data centers that seek to optimize their AI performance, we announced NVIDIA Spectrum-X, an accelerated networking platform designed to optimize Ethernet for AI workloads. Spectrum-X couples the Spectrum-4 Ethernet switch with the BlueField-3 DPU, achieving 1.5x better overall AI performance and power efficiency versus traditional Ethernet. BlueField-3 DPU is a major success. It is in qualification with major OEMs and ramping across multiple CSPs and consumer Internet companies.

Now moving to gaming. Gaming revenue of $2.49 billion was up 11% sequentially and 22% year-on-year. Growth was fueled by GeForce RTX 40 Series GPUs for laptops and desktops. End customer demand was solid and consistent with seasonality. We believe global end demand has returned to growth after last year’s slowdown. We have a large upgrade opportunity ahead of us. Just 47% of our installed base have upgraded to RTX, and about 20% have a GPU with an RTX 3060 or higher performance. Laptop GPUs posted strong growth in the key back-to-school season, led by RTX 4060 GPUs. NVIDIA’s GPU-powered laptops have gained in popularity, and their shipments are now outpacing desktop GPUs in several regions around the world. This is likely to shift the seasonality of our overall gaming revenue a bit, with Q2 and Q3 as the stronger quarters of the year, reflecting the back-to-school and holiday build schedules for laptops.

In desktop, we launched the GeForce RTX 4060 and the GeForce RTX 4060 Ti GPUs, bringing the Ada Lovelace architecture down to price points as low as $299. The ecosystem of RTX and DLSS games continues to expand, with 35 new games adding DLSS support, including blockbusters such as Diablo IV and Baldur’s Gate 3. There are now over 330 RTX-accelerated games and apps. We are bringing generative AI to gaming. At COMPUTEX, we announced NVIDIA Avatar Cloud Engine, or ACE, for games, a custom AI model foundry service. Developers can use this service to bring intelligence to non-player characters. It harnesses a number of NVIDIA Omniverse and AI technologies, including NeMo, Riva and Audio2Face. Now moving to Professional Visualization. Revenue of $375 million was up 28% sequentially and down 24% year-on-year.

The Ada architecture ramp drove strong growth in Q2, rolling out initially in laptop workstations with a refresh of desktop workstations coming in Q3. These will include powerful new RTX systems with up to 4 NVIDIA RTX 6000 GPUs, providing more than 5,800 teraflops of AI performance and 192 gigabytes of GPU memory. They can be configured with NVIDIA AI enterprise or NVIDIA Omniverse inside. We also announced three new desktop workstation GPUs based on the Ada generation. The NVIDIA RTX 5000, 4500 and 4000, offering up to 2x the RT core throughput and up to 2x faster AI training performance compared to the previous generation. In addition to traditional workloads such as 3D design and content creation, new workloads in generative AI, large language model development and data science are expanding the opportunity in pro visualization for our RTX technology.

One of the key themes in Jensen’s keynote at SIGGRAPH earlier this month was the convergence of graphics and AI. This is where NVIDIA Omniverse is positioned. Omniverse is OpenUSD’s native platform. OpenUSD is a universal interchange that is quickly becoming the standard for the 3D world, much like HTML is the universal language for the 2D web. Together, Adobe, Apple, Autodesk, Pixar and NVIDIA form the Alliance for OpenUSD. Our mission is to accelerate OpenUSD’s development and adoption. We announced new and upcoming Omniverse cloud APIs, including RunUSD and ChatUSD, to bring generative AI to OpenUSD workloads. Moving to automotive. Revenue was $253 million, down 15% sequentially and up 15% year-on-year. Solid year-on-year growth was driven by the ramp of self-driving platforms based on the NVIDIA DRIVE Orin SoC with a number of new energy vehicle makers.

The sequential decline reflects lower overall automotive demand, particularly in China. We announced a partnership with MediaTek to bring drivers and passengers new experiences inside the car. MediaTek will develop automotive SoCs and integrate a new product line of NVIDIA’s GPU chiplets. The partnership covers a wide range of vehicle segments from luxury to entry level. Moving to the rest of the P&L. GAAP gross margin expanded to 70.1% and non-GAAP gross margin to 71.2%, driven by higher Data Center sales. Our Data Center products include a significant amount of software and complexity, which is also helping drive our gross margin. Sequential GAAP operating expenses were up 6% and non-GAAP operating expenses were up 5%, primarily reflecting increased compensation and benefits.

We returned approximately $3.4 billion to shareholders in the form of share repurchases and cash dividends. Our Board of Directors has just approved an additional $25 billion in stock repurchases to add to our remaining $4 billion of authorization as of the end of Q2. Let me turn to the outlook for the third quarter of fiscal 2024. Demand for our Data Center platform for AI is tremendous and broad-based across industries and customers. Our demand visibility extends into next year. Our supply over the next several quarters will continue to ramp as we lower cycle times and work with our supply partners to add capacity. Additionally, the new L40S GPU will help address the growing demand for many types of workloads from cloud to enterprise. For Q3, total revenue is expected to be $16 billion, plus or minus 2%.

We expect sequential growth to be driven largely by Data Center with gaming and ProViz also contributing. GAAP and non-GAAP gross margins are expected to be 71.5% and 72.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $2.95 billion and $2 billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $100 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 14.5%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight some upcoming events for the financial community.

We will attend the Jefferies Tech Summit on August 30 in Chicago, the Goldman Sachs Conference on September 5 in San Francisco, the Evercore Semiconductor Conference on September 6 as well as the Citi Tech Conference on September 7, both in New York, and the BofA Virtual AI Conference on September 11. Our earnings call to discuss the results of our third quarter of fiscal 2024 is scheduled for Tuesday, November 21. Operator, we will now open the call for questions. Could you please poll for questions for us? Thank you.

Q&A Session

Operator: Thank you. [Operator Instructions] We’ll take our first question from Matt Ramsay with TD Cowen. Your line is now open.

Matt Ramsay: Yes. Thank you very much. Good afternoon. Obviously, remarkable results. Jensen, I wanted to ask a question of you regarding the really quickly emerging application of large model inference. So I think it’s pretty well understood by the majority of investors that you guys have very much a lockdown share of the training market. A lot of the smaller market — smaller model inference workloads have been done on ASICs or CPUs in the past. And with many of these GPT and other really large models, there’s this new workload that’s accelerating super-duper quickly on large model inference. And I think your Grace Hopper Superchip products and others are pretty well aligned for that. But could you maybe talk to us about how you’re seeing the inference market segment between small model inference and large model inference and how your product portfolio is positioned for that? Thanks.

Jensen Huang: Yeah. Thanks a lot. So let’s take a quick step back. These large language models are fairly — are pretty phenomenal. It does several things, of course. It has the ability to understand unstructured language. But at its core, what it has learned is the structure of human language. And it has encoded or within it — compressed within it a large amount of human knowledge that it has learned by the corpuses that it studied. What happens is, you create these large language models and you create as large as you can, and then you derive from it smaller versions of the model, essentially teacher-student models. It’s a process called distillation. And so when you see these smaller models, it’s very likely the case that they were derived from or distilled from or learned from larger models, just as you have professors and teachers and students and so on and so forth.

And you’re going to see this going forward. And so you start from a very large model and it has a large amount of generality and generalization and what’s called zero-shot capability. And so for a lot of applications and questions or skills that you haven’t trained it specifically on, these large language models miraculously have the capability to perform them. That’s what makes it so magical. On the other hand, you would like to have these capabilities in all kinds of computing devices, and so what you do is you distill them down. These smaller models might have excellent capabilities on a particular skill, but they don’t generalize as well. They don’t have what is called as good zero-shot capabilities. And so they all have their own unique capabilities, but you start from very large models.

Operator: Okay. Next, we’ll go to Vivek Arya with BofA Securities. Your line is now open.

Vivek Arya: Thank you. Just had a quick clarification and a question. Colette, if you could please clarify how much incremental supply do you expect to come online in the next year? You think it’s up 20%, 30%, 40%, 50%? So just any sense of how much supply because you said it’s growing every quarter. And then Jensen, the question for you is, when we look at the overall hyperscaler spending, that pie is not really growing that much. So what is giving you the confidence that they can continue to carve out more of that pie for generative AI? Just give us your sense of how sustainable is this demand as we look over the next one to two years? So if I take your implied Q3 outlook of Data Center, $12 billion, $13 billion, what does that say about how many servers are already AI accelerated? Where is that going? So just give some confidence that the growth that you are seeing is sustainable into the next one to two years.
