NVIDIA Corporation (NASDAQ:NVDA) Q3 2023 Earnings Call Transcript

November 16, 2022

NVIDIA Corporation missed earnings expectations. Reported EPS was $0.58, against expectations of $0.69.

Operator: Good afternoon. My name is Emma, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA’s Third Quarter Earnings Call. All lines have been placed on mute to prevent any background noise. After the speakers’ remarks, there will be a question-and-answer session. Simona Jankowski, you may begin your conference.

Simona Jankowski: Thank you. Good afternoon, everyone, and welcome to NVIDIA’s conference call for the third quarter of fiscal 2023. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I’d like to remind you that our call is being webcast live on NVIDIA’s Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the fourth quarter and fiscal 2023. The content of today’s call is NVIDIA’s property. It can’t be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations.

These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today’s earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, November 16, 2022, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

With that, let me turn the call over to Colette.

Colette Kress: Thanks, Simona. Q3 revenue was $5.93 billion, down 12% sequentially and down 17% year-on-year. We delivered record data center and automotive revenue, while our gaming and pro visualization platforms declined as we work through channel inventory corrections and challenging external conditions. Starting with data center. Revenue of $3.83 billion was up 1% sequentially and 31% year-on-year. This reflects very solid performance in the face of macroeconomic challenges, new export controls and lingering supply chain disruptions. Year-on-year growth was driven primarily by leading U.S. cloud providers and a broadening set of consumer internet companies for workloads such as large language models, recommendation systems and generative AI.

As the number and scale of public cloud computing and internet service companies deploying NVIDIA AI grows, our traditional hyperscale definition will need to be expanded to convey the different end-market use cases, and we will align our data center customer commentary accordingly going forward. Other vertical industries, such as automotive and energy, also contributed to growth, with key workloads relating to autonomous driving, high-performance computing, simulation and analytics. During the quarter, the U.S. government announced new restrictions impacting exports of our A100 and H100 based products to China, and any product destined for certain systems or entities in China. These restrictions impacted third quarter revenue, but were largely offset by sales of alternative products into China.

That said, demand in China more broadly remains soft, and we expect that to continue in the current quarter. We started shipping our flagship H100 data center GPU, based on the new Hopper architecture, in Q3. H100-based systems are available starting this month from leading server makers including Dell, Hewlett Packard Enterprise, Lenovo and Supermicro. Early next year, the first H100-based cloud instances will be available on Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure. H100 delivered the highest performance and workload versatility for both AI training and inference in the latest MLPerf industry benchmarks. H100 also delivers incredible value compared to the previous generation: for equivalent AI performance, it offers 3x lower total cost of ownership while using 5x fewer server nodes and 3.5x less energy.

Earlier today, we announced a multiyear collaboration with Microsoft to build an advanced cloud-based AI supercomputer to help enterprises train, deploy and scale AI, including large state-of-the-art models. Microsoft Azure will incorporate our complete AI stack, adding tens of thousands of A100 and H100 GPUs, Quantum-2 400 gigabit per second InfiniBand networking and the NVIDIA AI Enterprise software suite to its platform. Oracle and NVIDIA are also working together to offer AI training and inference at scale to thousands of enterprises. This includes bringing to Oracle Cloud Infrastructure the full NVIDIA accelerated computing stack and adding tens of thousands of NVIDIA GPUs, including the A100 and H100. Cloud-based high-performance computing provider Rescale is adopting NVIDIA AI Enterprise and other software to address the industrial and scientific communities' rising demand for AI in the cloud.

NVIDIA AI will bring new capabilities to Rescale's high-performance computing-as-a-service offerings, which include simulation and engineering software used across industries. Networking posted strong growth, driven by hyperscale customers and easing supply constraints. Our new Quantum-2 400 gigabit per second InfiniBand and Spectrum Ethernet networking platforms are building momentum. We achieved an important milestone this quarter with VMware, whose leading server virtualization platform, vSphere, has been rearchitected over the last two years to run on DPUs and now supports our BlueField DPUs. Our joint enterprise AI platform is available first on Dell PowerEdge servers. The BlueField DPU design win pipeline is growing, and the number of infrastructure software partners is expanding, including Arista, Check Point, Juniper, Palo Alto Networks and Red Hat.

The latest TOP500 list of supercomputers, released this week at Supercomputing '22, has the highest-ever number of NVIDIA-powered systems, including 72% of the total and 90% of new systems on the list. Moreover, NVIDIA powers 23 of the top 30 systems on the Green500 list, demonstrating the energy efficiency of accelerated computing. The number one most energy-efficient system is the Flatiron Institute's Henri, which is the first TOP500 system featuring our H100 GPUs. At GTC, we announced the NVIDIA Omniverse Computing System, or OVX, reference design featuring the new L40 GPU based on the Ada Lovelace architecture. These systems are designed to build and operate 3D virtual worlds using NVIDIA Omniverse Enterprise. NVIDIA OVX systems will be available from Inspur, Lenovo and Supermicro by early 2023.

Lockheed Martin and Jaguar Land Rover will be among the first customers to receive OVX systems. We are further expanding our AI software and services offerings with the NeMo and BioNeMo large language model services, which are both entering early access this month. These enable developers to easily adopt large language models and deploy customized AI applications for content generation, text summarization, chatbots, code development, and protein structure and biomolecular property predictions. Moving to gaming. Revenue of $1.57 billion was down 23% sequentially and down 51% from a year ago, reflecting lower sell-in to partners to help align channel inventory levels with current demand expectations. We believe channel inventories are on track to approach normal levels as we exit Q4.

Sell-through for our gaming products was relatively solid in the Americas and EMEA, but softer in Asia-Pacific as macroeconomic conditions and COVID lockdowns in China continued to weigh on consumer demand. Our new Ada Lovelace GPU architecture had an exceptional launch. The first Ada GPU, the GeForce RTX 4090, became available in mid-October to a tremendous amount of positive feedback from the gaming community. We sold out quickly in many locations and are working hard to keep up with demand. The next member of the Ada family, the RTX 4080, is available today. The RTX 40 series GPUs feature DLSS 3, the neural rendering technology that uses AI to generate entire frames for faster gameplay. Our third-generation RTX technology has raised the bar for computer graphics and helped supercharge gaming.

For example, the 15-year-old classic game Portal, now reimagined with full ray tracing and DLSS 3, has made it onto Steam's top 100 most wish-listed games. The total number of RTX games and applications now exceeds 350. There is tremendous energy in the gaming community that we believe will continue to fuel strong fundamentals over the long term. The number of simultaneous users on Steam just hit a record of 30 million, surpassing the prior peak of 28 million in January. Activision's Call of Duty: Modern Warfare 2 set a record for the franchise with more than $800 million in opening weekend sales, topping the combined box office openings of the movie blockbusters Top Gun: Maverick and Doctor Strange in the Multiverse of Madness. And this month's League of Legends World Championship in San Francisco sold out in minutes, with 18,000 esports fans packing the arena where the Golden State Warriors play.

We continue to expand the GeForce NOW cloud gaming service. In Q3, we added over 85 games to the library, bringing the total to over 1,400. We also launched GeForce NOW on new gaming devices, including the Logitech G Cloud handheld, cloud gaming Chromebooks and the Razer Edge 5G. Moving to ProViz. Revenue of $200 million was down 60% sequentially and down 65% from a year ago, reflecting lower sell-in to partners to help align channel inventory levels with current demand expectations. These dynamics are expected to continue in Q4. Despite near-term challenges, we believe our long-term opportunity remains intact, fueled by AI, simulation, computationally intensive design and engineering workloads. At GTC, we announced NVIDIA Omniverse Cloud Services, our first software- and infrastructure-as-a-service offering, enabling artists, developers and enterprise teams to design, publish and operate metaverse applications from anywhere on any device.

Omniverse Cloud Services runs on the Omniverse Cloud Computer, a computing system comprising NVIDIA OVX for graphics and physics simulation, NVIDIA HGX for AI workloads, and the NVIDIA Graphics Delivery Network, a global-scale distributed data center network for delivering low-latency metaverse graphics at the edge. Leaders in some of the world's largest industries continue to adopt Omniverse. Home improvement retailer Lowe's is using it to help design, build and operate digital twins for its stores. Charter Communications and advanced analytics company HEAVY.AI are creating Omniverse-powered digital twins to optimize Charter's wireless network. And Deutsche Bahn, operator of the German national railway, is using Omniverse to create digital twins of its rail network and train AI models to monitor the network, increasing safety and reliability.


Moving to automotive. Revenue of $251 million increased 14% sequentially and 86% from a year ago. Growth was driven by an increase in AI automotive solutions as our customers' DRIVE Orin-based production ramps continue to scale. Automotive has great momentum and is on its way to being our next multibillion-dollar platform. Volvo Cars unveiled the all-new flagship Volvo EX90 SUV powered by the NVIDIA DRIVE platform. This is the first model to use Volvo's software-defined architecture, with a centralized core computer containing both DRIVE Orin and DRIVE Xavier, along with 30 sensors. Other recently announced design wins and new model introductions include Hozon Auto, NIO, Polestar and XPeng. At GTC, we also announced the NVIDIA DRIVE Thor superchip, the successor to Orin in our automotive SoC roadmap.

DRIVE Thor delivers up to 2,000 teraFLOPS of performance and leverages technologies introduced in our Grace, Hopper and Ada architectures. It is capable of running both the automated driving and in-vehicle infotainment systems simultaneously, offering a leap in performance while reducing cost and energy consumption. DRIVE Thor will be available for automakers' 2025 models, with Geely-owned automaker ZEEKR as the first announced customer. Moving to the rest of the P&L. GAAP gross margin was 53.6%, and non-GAAP gross margin was 56.1%. Gross margins reflect $702 million in inventory charges, largely related to lower data center demand in China, partially offset by a warranty benefit of approximately $70 million. Year-on-year, GAAP operating expenses were up 31% and non-GAAP operating expenses were up 30%, primarily due to higher compensation expenses related to headcount growth and salary increases, and higher data center infrastructure expenses.

Sequentially, both GAAP and non-GAAP operating expense growth was in the single digits, and we plan to keep operating expenses relatively flat at these levels over the coming quarters. We returned $3.75 billion to shareholders in the form of share repurchases and cash dividends. At the end of Q3, we had approximately $8.3 billion remaining under our share repurchase authorization through December '23. Let me turn to the outlook for the fourth quarter of fiscal 2023. We expect our data center revenue to reflect early production shipments of the H100, offset by continued softness in China. In gaming, we expect to resume sequential growth, with our revenue still below end demand as we continue to work through the channel inventory correction. And in automotive, we expect the continued ramp of our Orin design wins.

All in, we expect modest sequential growth driven by automotive, gaming and data center. Revenue is expected to be $6 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 63.2% and 66%, respectively, plus or minus 50 basis points. GAAP operating expenses are expected to be approximately $2.56 billion. Non-GAAP operating expenses are expected to be approximately $1.78 billion. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $40 million, excluding gains and losses on nonaffiliated investments. GAAP and non-GAAP tax rates are expected to be 9%, plus or minus 1%, excluding any discrete items. Capital expenditures are expected to be approximately $500 million to $550 million.

Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight upcoming events for the financial community. We'll be attending the Credit Suisse Conference in Phoenix on November 30th, the Arete Virtual Tech Conference on December 5th, and the JPMorgan Forum on January 5th in Las Vegas. Our earnings call to discuss the results of our fourth quarter and fiscal 2023 is scheduled for Wednesday, February 22. We will now open the call for questions. Operator, could you please poll for questions?

Q&A Session


Operator: Thank you. Your first question comes from the line of Vivek Arya with Bank of America Securities.

Vivek Arya: Colette, I just wanted to clarify first. I think last quarter, you gave us a sell-through rate for your gaming business at about $2.5 billion a quarter. I think you said China is somewhat weaker. So, I was hoping you could update us on what that sell-through rate is right now for gaming. And then, Jensen, the question for you. A lot of concerns about large hyperscalers cutting their spending and pointing to a slowdown. So if, let’s say, U.S. cloud CapEx is flat or slightly down next year, do you think your business can still grow in the data center and why?

Colette Kress: Yes. Thanks for the question. Let me first start with the sell-through on our gaming business. We had indicated that, if you put two quarters together, we would see approximately $5 billion in normalized sell-through for our business. During the quarter, sell-through in Q3 was relatively solid. We've indicated that although China lockdowns continue to challenge our overall China business, it was still relatively solid. Notebook sell-through was also quite solid, and desktop a bit softer, particularly in China and Asia. We expect stronger end demand, though, as we enter Q4, driven by the upcoming holidays as well as the continuation of Ada adoption.

Jensen Huang: Vivek, our data center business is indexed to two fundamental dynamics. The first has to do with general-purpose computing no longer scaling. Acceleration is necessary to achieve the necessary levels of cost efficiency and energy efficiency at scale, so that we can continue to increase workloads while saving money and saving power. Accelerated computing is generally recognized as the path forward as general-purpose computing slows. The second dynamic is AI. We're seeing surging demand in some very important sectors of AI and important breakthroughs in AI. One is deep recommender systems, which are now quite essential for choosing the best content, item or product to recommend to somebody using a device like a cell phone or interacting with a computer by voice.

You need to really understand the nature and context of the person making the request and make the appropriate recommendation to them. The second has to do with large language models. This started several years ago with the invention of the Transformer, which led to BERT, which led to GPT-3, which led to a whole bunch of other models now associated with that. We now have the ability to learn representations of languages of all kinds. It could be human language. It could be the language of biology. It could be the language of chemistry. And recently, I just saw a breakthrough called GenSLMs, which is one of the first examples of learning the language of genomes. The third has to do with generative AI. For the first 10 years, we dedicated ourselves to perception AI.

Now, the goal of perception, of course, is to understand context. But the ultimate goal of AI is to make a contribution, to create something, to generate product, and this is now the beginning of the era of generative AI. You probably see it all over the place, whether it's generating images, generating videos or generating text of all kinds: the ability to augment and enhance our performance, improve productivity and reduce cost in whatever we do with whatever we have to work with. Productivity is really more important than ever. And so you can see that our company is indexed to two things, both of which are more important than ever: power and cost efficiency, and productivity.

These things are more important than ever, and my expectation is that this is why we're seeing such strong, surging demand for AI.

Operator: Your next question comes from the line of C.J. Muse with Evercore.

C.J. Muse: You've started to bundle NVIDIA AI Enterprise with the H100 now. I'm curious if you can talk about how we should think about the timing of software monetization, and how we should see this flow through the model, particularly with the focus on the AI Enterprise and Omniverse side of things?

Jensen Huang: Yes. Thanks, C.J. We're making excellent progress with NVIDIA AI Enterprise. In fact, you probably saw that we made several announcements this quarter associated with clouds. You know that NVIDIA has a rich ecosystem, and over the years our rich ecosystem and our software stack have been integrated by developers and start-ups of all kinds. But more than ever, we're at the tipping point of clouds. And that's fantastic, because if we can get NVIDIA's architecture and our full stack into every single cloud, we can reach more customers more quickly. And this quarter, we announced several initiatives, several partnerships and collaborations, one of which we announced today, which has to do with Microsoft and our partnership there.

It has everything to do with scaling up AI, because we have so many start-ups clamoring for large installations of our GPUs so that they can do large language model training and build their start-ups, and with scaling out AI to enterprises and all of the world's internet service providers. Every company we're talking to would like to have the agility, scale and flexibility of clouds. And so, over the last year or so, we've been working on moving all of our software stacks to the cloud, all of our platform and software stacks to the cloud. And so today, we announced that Microsoft and ourselves are going to standardize on the NVIDIA stack for a very large part of the work that we're doing together, so that we can take a full stack out to the world's enterprises.

That's all software included. A month ago, we announced a similar type of partnership with Oracle. You also saw that Rescale, a leader in high-performance computing cloud, has integrated NVIDIA AI into their stack. Monite has been integrated into GCP. And we recently announced the NeMo large language model and BioNeMo large language model services to put NVIDIA software in the cloud. And we also announced that Omniverse is now available in the cloud. The goal of all of this is to move the NVIDIA platform's full software stack into the cloud, so that we can engage customers much, much more quickly and customers can engage our software. If they would like to use it in the cloud, it's per GPU instance hour; if they would like to utilize our software on-prem, they can do it through a software license and subscription.

And so, in both cases, we now have software available practically everywhere you would like to engage it. The partners that we work with are super excited about it, because NVIDIA's rich ecosystem is global, and this can bring new consumption into their cloud for both them and ourselves, but also connect all of these new opportunities to the other APIs and other services that they offer. And so, our software stack is making really great progress.

Operator: Your next question comes from the line of (ph) with Credit Suisse.
