Advanced Micro Devices, Inc. (NASDAQ:AMD) Q1 2023 Earnings Call Transcript May 2, 2023

Operator: Hello, and welcome to the AMD First Quarter 2023 Earnings Conference Call. It's now my pleasure to turn the call over to Ruth Cotter. Please go ahead, Ruth.

Ruth Cotter: Thank you, and welcome to AMD's First Quarter 2023 Financial Results Conference Call. By now, you should have had the opportunity to review a copy of our earnings press release and accompanying slideware. If you've not reviewed these documents, they can be found on the Investor Relations page of amd.com. We will refer primarily to non-GAAP financial measures during this call. The full non-GAAP to GAAP reconciliations are available in today's press release and slides posted on our website. Participants on today's conference call are Dr. Lisa Su, our Chair and Chief Executive Officer; and Jean Hu, our Executive Vice President, Chief Financial Officer and Treasurer. This is a live call and will be replayed via webcast on our website.

Before we begin today's call, we would like to note that Jean Hu will attend the JPMorgan Annual Technology, Media and Communications Conference on Tuesday, May 23; Dan McNamara, Senior Vice President and General Manager, Server Business Unit, will attend the Bank of America Global Technology Conference on Tuesday, June 6. And our second quarter 2023 quiet time is expected to begin at the close of business on Friday, June 16. Today's discussion contains forward-looking statements based on current beliefs, assumptions and expectations that speak only as of today and, as such, involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for more information on factors that could cause actual results to differ materially.

Now with that, I’ll hand the call over to Lisa. Lisa?

Lisa Su: Thank you, Ruth, and good afternoon to all those listening in today. We executed very well in the first quarter as we delivered better-than-expected revenue and earnings in a mixed demand environment, launched multiple leadership products across our businesses and made significant progress accelerating our AI road map and customer engagements across our portfolio. First quarter revenue was $5.4 billion, a decrease of 9% year-over-year. Sales of our data center and embedded products contributed more than 50% of overall revenue in the quarter as cloud and embedded revenue grew significantly year-over-year. Looking at the first quarter business results. Data Center segment revenue of $1.3 billion was flat year-over-year with higher cloud sales, offset by lower enterprise sales.

In cloud, the quarter played out largely as we expected. EPYC CPU sales grew by a strong double-digit percentage year-over-year but declined sequentially as elevated inventory levels with some MDC customers resulted in a lower sell-in TAM for the quarter. Against this backdrop, we were pleased the largest cloud providers further expanded their AMD deployments in the quarter to power a larger portion of their internal workloads and public instances. 28 new AMD instances launched in the first quarter, including multiple confidential computing offerings from Microsoft Azure, Google Cloud and Oracle Cloud that take advantage of the unique security features of our EPYC processors. In total, we now have more than 640 AMD-powered public instances available.

Enterprise sales declined year-over-year and sequentially as end customer demand softened due to near-term macroeconomic uncertainty. We continued growing our enterprise pipeline and closed multiple wins with Fortune 500 automotive, technology and financial companies in the quarter. We made strong progress in the quarter, ramping our Zen 4 EPYC CPU portfolio. All of our large cloud customers now have Genoa running in their data centers and are on track to begin broad deployments to power their internal workloads and public instances in the second quarter. For the enterprise, Dell, HPE, Lenovo, Super Micro and other leading providers entered production on new Genoa server platforms that complement their existing third-gen EPYC platforms. We are on track to launch Bergamo, our first cloud-native server CPU, and Genoa-X, our fourth-gen EPYC processor with 3D chiplets for leadership in technical computing workloads, later this quarter.

Although we expect server demand to remain mixed in the second quarter, we are well positioned to grow our cloud and enterprise footprint in the second half of the year based on the strong customer response to the performance and TCO advantages of Genoa, Bergamo and Genoa-X. Now looking at our broader data center business. In networking, Microsoft Azure launched their first instance powered by our Pensando DPU and software stack that can significantly increase application performance for networking-intensive workloads by enabling 10x more connections per second compared to non-accelerated instances. We expanded our data center product portfolio with the launch of our first ASIC-based Alveo data center media accelerator that supports 4x the number of simultaneous video streams compared to our prior generation.

In supercomputing, the Max Planck Society announced plans to build the first supercomputer in the EU powered by fourth-gen EPYC CPUs and Instinct MI300 accelerators that is expected to deliver a 3x increase in application performance and significant TCO improvements compared to their current system. Our AI activities increased significantly in the first quarter, driven by expanded engagements with a broad set of data center and embedded customers. We expanded the software ecosystem support for our Instinct GPUs in the first quarter, highlighted by the launch of the widely used PyTorch 2.0 framework, which now offers native support for our ROCm software. In early April, researchers announced they used the LUMI supercomputer powered by third-gen EPYC CPUs and Instinct MI250 accelerators to train the largest Finnish language model to date.

Customer interest has increased significantly for our next-generation Instinct MI300 GPUs for both AI training and inference of large language models. We made excellent progress achieving key MI300 silicon and software readiness milestones in the quarter, and we're on track to launch MI300 later this year to support the El Capitan exascale supercomputer win at Lawrence Livermore National Laboratory and large cloud AI customers. To execute our broad AI strategy and significantly accelerate this key part of our business, we brought together multiple AI teams from across the company into a single organization under Victor Peng. The new AI group has responsibility for owning our end-to-end AI hardware strategy and driving development of a complete software ecosystem, including optimized libraries, models and frameworks spanning our full product portfolio.

Now turning to our client segment. Revenue declined 65% year-over-year to $739 million as we shipped significantly below consumption to reduce downstream inventory. As we stated on our last earnings call, we believe the first quarter was the bottom for our client processor business. We expanded our leadership desktop and notebook processor portfolio significantly in the quarter. In desktops, we launched the industry's fastest gaming processors with our Ryzen 7000 X3D series CPUs that combine our Zen 4 core with industry-leading 3D chiplet packaging technology. In mobile, the first notebooks powered by our Dragon Range CPUs launched to strong demand, with multiple third-party reviews highlighting how our 16-core Ryzen 9 7945HS CPU is now the fastest mobile processor available.

We also ramped production of our Zen 4-based Phoenix Ryzen 7040 series CPUs in the first quarter for ultrathin and gaming notebooks to support the more than 250 ultrathin, gaming and commercial notebook design wins on track to launch this year from Acer, Asus, Dell, HP and Lenovo. Looking at the year, we continue to expect the PC TAM to be down approximately 10% for 2023 to approximately 260 million units. Based on the strength of our product portfolio, we expect our client CPU sales to grow in the second quarter and in the seasonally stronger second half of the year. Now turning to our Gaming segment. Revenue declined 6% year-over-year to $1.8 billion as higher semi-custom revenue was offset by lower gaming graphics sales. Semi-custom SoC revenue grew year-over-year as demand for premium consoles remained strong following the holiday cycle.

In gaming graphics, channel sell-through of our Radeon 6000 and Radeon 7000 series GPUs increased sequentially. We saw strong sales of our high-end Radeon 7900 XTX GPUs in the first quarter, and we're on track to expand our RDNA 3 GPU portfolio with the launch of new mainstream Radeon 7000 series GPUs this quarter. Looking at our Embedded segment. Revenue increased significantly year-over-year to a record $1.6 billion. We saw strength across the majority of our embedded markets, led by increased demand from industrial, vision and health care, test and emulation, communications, aerospace and defense and automotive customers. Demand for our adaptive computing solutions continues to grow as industrial, vision and health care customers actively work to add more advanced compute capabilities across their product lines.

We also released new Vitis AI software libraries to enable advanced visualization and AI capabilities for our medical customers and launched our next-generation platform that provides a turnkey solution to deploy our leadership adaptive computing capabilities for smart camera, industrial and machine vision applications. In Communications, we saw strength with wired customers as new infrastructure design wins ramped into production. We also launched Zynq RFSoC products to accelerate 4G and 5G radio deployments in cost-sensitive markets and formed our first telco solutions lab to validate end-to-end solutions based on AMD CPUs, adaptive SoCs, FPGAs, DPUs and software. In automotive, deployments of our adaptive silicon solutions for high-end ADAS and AI features grew in the quarter, highlighted by Subaru rolling out its AMD-based EyeSight 4 platform across their full range of vehicles.

In addition, we expanded our embedded processor portfolio with the launches of Ryzen 5000 and EPYC 9000 embedded series processors with leadership performance and efficiency as we focus on growing share in the security, storage, edge server and networking markets. Looking more broadly across our embedded business, we are making great progress in bringing together our expanded portfolio and scale to drive deeper engagements with our largest embedded customers. In summary, I'm pleased with our operational and financial performance in the first quarter. In the near term, we continue to see a mixed demand environment based on the uncertainties in the macro environment. Based on customer demand signals, we expect second quarter revenue will be flattish sequentially with growth in our client and data center segments, offset by modest declines in our Gaming and Embedded segments.

We remain confident in our ability to grow in the second half of the year, driven by adoption of our Zen 4 product portfolio, improving demand trends in our client business and the early ramp of our Instinct MI300 accelerators for HPC and AI. Looking longer term, we have significant growth opportunities ahead based on successfully delivering our road maps and executing our strategic data center and embedded priorities, led by accelerating adoption of our AI products. We are in the very early stages of the AI computing era, and the rate of adoption and growth is faster than any other technology in recent history. And as the recent interest in generative AI highlights, bringing the benefits of large language models and other AI capabilities to cloud, edge and endpoints requires significant increases in compute performance.

AMD is very well positioned to capitalize on this increased demand for compute based on our broad portfolio of high-performance and adaptive compute engines, the deep relationships we have established with customers across a diverse set of large markets, and our expanding software capabilities. We are very excited about our opportunity in AI. This is our #1 strategic priority, and we are engaging deeply across our customer set to bring joint solutions to the market, led by our upcoming Instinct MI300 GPUs, Ryzen 7040 Series CPUs with Ryzen AI, Zynq UltraScale+ MPSoCs, Alveo V70 data center inference accelerators and Versal AI adaptive data center and edge SoCs. I look forward to sharing more about our AI progress over the coming quarters as we broaden our portfolio and grow the strategic part of our business.

Now I’d like to turn the call over to Jean to provide some additional color on our first quarter results. Jean?

Jean Hu: Thank you, Lisa, and good afternoon, everyone. I'll start with a review of our financial results for the first quarter and then provide our current outlook for the second quarter of fiscal 2023. As a reminder, for comparative purposes, first quarter 2022 results included only a partial quarter of financial results from the acquisition of Xilinx, which closed in February 2022. Revenue in the first quarter was $5.4 billion, a decrease of 9% year-over-year as Embedded segment strength was offset by lower Client segment revenue. Gross margin was 50%, down 2.7 percentage points from a year ago, primarily impacted by Client segment performance. Operating expenses were $1.6 billion, increasing 18% year-over-year, primarily due to the inclusion of a full quarter of expenses from the Xilinx and Pensando acquisitions.

Operating income was $1.1 billion, down $739 million year-over-year, and the operating margin was 21%. Interest expense, taxes and other was $128 million. For the first quarter, diluted earnings per share was $0.60 due to better-than-expected revenue and operating expenses. Now turning to our reportable segments for the first quarter. Starting with the Data Center segment, revenue was $1.3 billion, flat year-over-year, driven primarily by higher sales of EPYC processors to cloud customers, offset by lower enterprise server processor sales. Data Center segment operating income was $148 million or 11% of revenue compared to $427 million or 33% a year ago. Lower operating income was primarily due to product mix and increased R&D investments to address the large opportunities ahead of us.

Client segment revenue was $739 million, down 65% year-over-year as we shipped significantly below consumption to reduce downstream inventory. We expect an improvement in second quarter Client segment revenue and a seasonally stronger second half. Client segment operating loss was $172 million compared to operating income of $692 million a year ago, primarily due to lower revenue. Gaming segment revenue was $1.8 billion, down 6% year-over-year. Semi-custom revenue grew by a double-digit percentage year-over-year, which was more than offset by lower gaming graphics revenue. Gaming segment operating income was $314 million or 18% of revenue compared to $358 million or 19% a year ago. The decrease was primarily due to lower gaming graphics revenue. Embedded segment revenue was $1.6 billion, up $967 million year-over-year, primarily due to a full quarter of Xilinx revenue and strong performance across multiple end markets.

Embedded segment operating income was $798 million or 51% of revenue compared to $277 million or 46% a year ago, primarily driven by the inclusion of a full quarter of Xilinx. Turning to the balance sheet and the cash flow. During the quarter, we generated $486 million in cash from operations, reflecting our strong financial model despite the mixed demand environment. Free cash flow was $328 million. In the first quarter, we increased inventory by $464 million, primarily in anticipation of the ramp of new Data Center and Client products on advanced process nodes. At the end of the quarter, cash, cash equivalents and short-term investments were $5.9 billion, and we returned $241 million to shareholders through share repurchases. We have $6.3 billion in remaining authorization for share repurchases.

In summary, in an uncertain macroeconomic environment, the AMD team executed very well, delivering better-than-expected top line revenue and earnings. Now turning to our second quarter 2023 outlook. We expect revenue to be approximately $5.3 billion, plus or minus $300 million, a decrease of approximately 19% year-over-year and approximately flat sequentially. Year-over-year, we expect the Client, Gaming and Data Center segments to decline, partially offset by Embedded segment growth. Sequentially, we expect Client and Data Center segment growth to be offset by modest Gaming and Embedded segment declines. In addition, we expect non-GAAP gross margin to be approximately 50%, non-GAAP operating expenses to be approximately $1.6 billion and the effective tax rate to be 13%.

And the diluted share count is expected to be approximately 1.62 billion shares. In closing, I'm pleased with our strong top line and bottom line execution. We have a very strong financial model, and we'll continue to invest in our long-term strategic priorities, including accelerating our AI offerings to drive sustainable value creation over the long term. With that, I'll turn it back to Ruth for the Q&A session.

Ruth Cotter: Thank you, Jean. And Kevin, we're happy to poll the audience for questions.

Q&A Session

Operator: Our first question today is coming from Vivek Arya from Bank of America.

Vivek Arya: For my first one, Lisa, when I look at your full year Data Center outlook for some growth, that implicitly suggests Data Center could be up 30% in the second half versus the first half, right? And I'm curious, what is your confidence and visibility and some of the assumptions that go into that view? Do you think there is a much bigger ramp in the new products? Is it enterprise recovery? Is it pricing? So just give us a sense for how we should think about the confidence and visibility of the strong ramp that is implied in your second half Data Center outlook?

Lisa Su: Right. So Vivek, thanks for the question. Maybe let me give you some context on what's going on in the Data Center right now. First of all, we have said that it's a mixed environment in the Data Center. So in the first half of the year, there are some of the larger cloud customers that are working through some inventory and optimization as well as a weaker enterprise. As we go into the second half of the year, we see a couple of things. First, our road map is very strong. The feedback that we're getting as we work with our customers is that it's ramping well. It is very differentiated in terms of TCO and overall performance. So we think it's very well positioned. Much of the work that we've done in the first half of the year, in the first quarter and here in the second quarter, is to ensure that we complete all of that work such that we can ramp across a broader set of workloads as we go into the second half of the year.

And then I would say, from an overall market standpoint, I think enterprise will still be mixed with the notion that we expect some improvement. It depends a little bit on the macro situation. And then as we go into the second half of the year, in addition to Genoa, we're also ramping Bergamo. So that's on track to launch here in the second quarter and will ramp in the second half of the year. And then as we get towards the end of the year, we also have our GPU ramp of MI300. So with that, we start the ramp in the fourth quarter of our supercomputing wins as well as our early cloud AI wins. So those are all the factors. Of course, we'll have to see how the year plays out, but that's how we're positioned from an overall product and road map standpoint for Data Center.

Vivek Arya: All right. And for my follow-up, Lisa, how do you see the market share evolve in the Data Center in the second half? Do you think that the competitive gap between your and your competitors’ products, has that narrowed? Or you still think that in the second half, you have a chance to gain market share in the Data Center?

Lisa Su: Yes, absolutely, Vivek. Well, I mean, we’ve gained share nicely over the last 4 years. When you look at our Data Center progression, it’s actually been pretty steady. As we go into the second half of this year, I think we continue to believe that we have a very strong competitive position. So we do think that positions us well to gain share. In the conversations that we’re having with customers, I think they’re enthusiastic about Zen 4 and what it can bring into cloud workloads as well as enterprise workloads. I think actually Genoa is extremely well positioned for enterprise where we have been underrepresented. So we feel good about the road map. I mean, obviously, it’s competitive, but we feel very good about our ability to continue to gain share.

Operator: Next question is coming from Toshiya Hari from Goldman Sachs.

Toshiya Hari: Lisa, I wanted to ask about the Embedded business. It's been a really strong business for you since the acquisition of Xilinx, but you're guiding the business down sequentially in Q2. Is this sort of the macro/cycle kicking in? Or is it something supply related? If you can kind of expand on the Q2 outlook there and your expectations for the second half, that would be helpful.

Lisa Su: Yes, absolutely, Toshiya. Thanks for the question. I mean I think the Embedded business has performed extremely well over the last 4 or 5 quarters. Q1 was another record for the Embedded business. When we look underneath it, there is a broad set of market segments that we have exposure to, and the majority of them are actually doing very well. Our thought process for the sort of modest decline into Q2 is that we did have a bunch of backlog that we're in the process of clearing, and that backlog will clear in Q2, and then we expect that the growth will moderate a bit. We still very much like the positioning of sort of our aerospace and defense, our industrial, our test and emulation business and our automotive business. We expect wireless trends to be a little bit weaker as well as consumer trends. So those are kind of the puts and takes in the market. But I would say the business has performed well above our expectations.

Toshiya Hari: That’s helpful. And then as my follow-up, maybe one for Jean on the gross margin side of things. In your slide deck, I think you’re guiding gross margins up half over half in the second half. Can you maybe speak to the puts and takes and the drivers as you think about gross margins over the next 6 to 9 months?

Jean Hu: Yes. Thank you for the question. Our gross margin is primarily driven by mix. If you look at the first quarter performance and the second quarter guide, we are very pleased with the strong gross margin performance in both the Data Center and the Embedded segment. We have been seeing headwinds from the Client segment impacting our gross margin. Going into the second half, we do expect gross margin improvement because Data Center is going up and Embedded continues to be relatively strong. The pace of improvement in the second half actually will be largely dependent on the Client segment. We think the Client segment gross margin is also going to improve. But overall, it's going to be below corporate average.

So the pace of improvement of gross margin could be dependent on the pace of the Client business recovery in the second half. But in the longer term, right, when we look at our business opportunities, the largest incremental revenue opportunities are going to come from the Data Center and Embedded segments. So we feel very good about longer-term gross margin going up continuously.

Operator: Next question is coming from Aaron Rakers from Wells Fargo.

Aaron Rakers: I’ve got two as well. I guess the first question going back on just like the cadence of the server CPU cycle with Genoa and Bergamo. And I think it’s great to hear that you guys are on track to launch Bergamo here. But there’s been some discussion here throughout this last quarter around some DDR5 challenges. I think there’s PMEC issues. I’m just curious of how you — if those issues have presented themselves or how you would characterize the cadence of the ramp cycle of Genoa at this point?

Lisa Su: Yes. Sure, Aaron. So yes, look, I think Genoa, we always said, as we launched it, that it would be a little bit more of a longer transition compared to Milan because it is a new platform. So it is the new DDR5, it's PCIe Gen 5. And for many of our top customers, they're also doing other things other than upgrading the CPUs. So from that standpoint, I would say the ramp is going about as we expected. We've seen a lot of interest and a lot of customer engineering work that we're doing together in the data centers with our customers. We feel great about the set of workloads, and we see expansion in the workloads going forward. So overall, our expectation is, particularly as we go into the second half, we'll see Genoa ramp more broadly. But Genoa and Milan are going to coexist throughout the year, just given sort of the breadth of platforms that we have.

Aaron Rakers: And anything specific on the DDR5 questions that come up? And then my second question, just real quick, is on the MI300: if we look out beyond just the deployments through the course of this year, how do you guys think about success in that data center GPU market as we think beyond this year?

Lisa Su: Yes. Sure, Aaron. So back to the DDR5 question, we haven't seen anything specific on DDR5. It's just normal platform bring-up that we're seeing. Now as it relates to your question about MI300, look, we're really excited about the AI opportunity. I think success for us is having a significant part of the overall AI opportunity. AI for us is broader than cloud. I mean it also includes what we're doing in Client and Embedded. But specifically, as it relates to MI300, MI300 is actually very well positioned for both HPC or supercomputing workloads as well as for AI workloads. And with the recent interest in generative AI, I would say the pipeline for MI300 has expanded considerably here over the last few months, and we're excited about that.

We’re putting a lot more resources. I mentioned on the prepared remarks, the work that we’re doing, sort of taking our Xilinx and sort of the the overall AMD AI efforts and collapsing them into one organization that’s primarily to accelerate our AI software work as well as platform work. So success for MI3100 is for sure, a significant part of sort of the growth in AI in the cloud. And I think we feel good about how we’re positioned there.

Operator: Next question is coming from Matt Ramsey from TD Cowen.

Matthew Ramsay: Yes. Lisa, my first question, I think just the way the business has trended, right, with enterprise and with China in the data center market being a bit softer recently, and it seems like that's kind of continuing into the second quarter. It occurs to me that a big percentage of your data center business, particularly in server, in the second half of the year is going to be driven by U.S. hyperscale. And I guess my question is the level of visibility you have to unit volumes, to pricing, to timing of ramps. If you could walk us through that a little bit, given it's customer concentrated. I imagine you have some level of visibility. And you mentioned growth for the year in data center. If you could be a little bit more precise there, that would help; I understand there are market dynamics, but it's a bit of a big comment, and it pushes me to ask about quantifying the growth for the year.

Lisa Su: Sure. Matt, thanks for the question. So look, I think as we work with our largest cloud customers in the Data Center segment, particularly with our EPYC CPUs, we have very good conversations in terms of what their ramp plans are, what their qualification plans are, which workloads, which instances. So I feel that we have good visibility. Obviously, some of this is still dependent on overall macro situation and overall demand. But our view is that there is a lot of good progress in the Data Center. Now in terms of quantification, as I said, there’s a lot of puts and takes. My view is that enterprise will improve as we go into the second half, and we’re even seeing, I would say, some very early signs of some improvement in China as well.

So our view is, I think, double-digit Data Center growth is what we currently see. And certainly, we would like Genoa and Bergamo to be a large piece of that given the strength of those products; we'd like to see them grow share here over the next couple of quarters.

Matthew Ramsay: Lisa, that’s helpful. For my follow-up question, I think in the prepared scripts, we’re obviously undershipping sell-through to clear the channel in the Client business in the first half of the year. And I think the language that was used was seasonal improvements in the second half. So are you guys expecting to come back to shipping in line with sell-through so to stop under-shipping demand? And then for on top of that seasonal improvements in the market? Or — and if you could just kind of help me think about the magnitude and the moving pieces in Client for the second half?

Lisa Su: Yes. So we’ve been undershipping sort of consumption in the Client business for about 3 quarters now. And certainly, our goal has been to normalize the inventory in the supply chain so that shipments would be closer to consumption. We expect that, that will happen in the second half of the year, and that’s what the comment meant that we believe that there will be improvements in the overall inventory positioning. And then we also believe that the Client market is stabilizing. So Q1 was the bottom for our business as well as for the overall market. From what we see although it will be a gradual set of improvements, we do see that the overall market should be better in the second half of the year. We like our product portfolio a lot.

I’m excited about having AI-enabled on our Ryzen 7000 series. And we have leadership notebook platforms with Dragon Range. Our desktop road map is also quite strong with our new launch of the Verizon 7000 X3D products. And so I think here in the second quarter, we’ll still undership consumption a bit. And by the second half of the year, we should be more normalized between shipments and consumption, and we expect some seasonal improvement into the second half.

Operator: Your next question is coming from Joe Moore from Morgan Stanley.

Joseph Moore: Yes, I guess same question in terms of the cloud business. You mentioned some combination of kind of digestion of spending and inventory reduction. Can you give us a sense of how much inventory was there in hyperscale? How much has it come down? And how much are you sort of maybe undershipping demand in that segment?

Lisa Su: Yes. I think, Joe, this is a bit harder because every customer is different. What we’re seeing is different customers are at a different place in their sort of overall cycle. But let me say it this way though. I think we have good visibility with all of our large customers in terms of what they’re trying to do for the quarter, for the year. Obviously, some of that will depend on how the macro plays out. But from our viewpoint, I think we’re also going through a product transition between Milan and Genoa and some of these workloads. So if you put all those things into the conversation, that’s why our comment was that we will — we do believe that the second quarter will grow modestly and then there’ll be more growth in the second half of the year as it relates to the Data Center business.

So there are lots of puts and takes; every customer is in a bit of a different cycle. But overall, the number of workloads that they're going to be using AMD on, we believe, will expand as we go through the next few quarters.

Joseph Moore: Great. And then for my follow-up, I mean, you mentioned interest in MI300 around generative AI. Can you talk to — is that right now kind of a revenue pipeline with major hyperscalers? Or is that sort of more indication of interest level? Just trying to figure out where you are in terms of establishing yourself in that market?

Lisa Su: Yes. I would say, Joe, we’ve been at this for quite some time. So AI has been very much a strategic priority for AMD for quite some time. With MI250, we’ve actually made strong progress. We mentioned in the prepared remarks some of the work that was done on the LUMI supercomputer with generative AI models. We’ve continued to do quite a bit of library optimization with MI250 and software optimization to really ensure that we could increase the overall performance and capabilities. MI300 looks really good. I think from everything that we see, the workloads have also changed a bit in terms of — whereas a year ago, much of the conversation was primarily focused on training. Today, that has migrated to sort of large language model inferencing, which is particularly good for GPUs. So I think from an MI300 standpoint, we do believe that we will start ramping revenue in the fourth quarter with cloud AI customers and then it will be more meaningful in 2024.

Operator: Next question is coming from Harlan Sur from JPMorgan.

Harlan Sur: Good to see the strong dynamics in Embedded, very diverse end markets. And given their strong market share position here, the Xilinx team is in a really good position to catalyze EPYC attach or Ryzen attach to their FPGA and adaptive compute solutions, right? I think embedded x86 is like a $6 billion to $8 billion per year market opportunity. So, Lisa, given you're a year in with Xilinx in the portfolio, can you just give us an update on the synergy unlock and driving higher AMD compute attach to Xilinx sockets?

Lisa Su: Yes. Thanks, Harlan. It's a great question. The Xilinx portfolio has done extremely well with us. Very strong, I would say; we continue to get more content attached to the FPGAs and the adaptive SoCs. We have seen the beginnings of good traction with the cross-selling, and that is the opportunity to take both Ryzen and EPYC CPUs into the broader embedded market. I think the customers are very open to that. I think we have a sales force and a go-to-market capability across this customer set that is very helpful for that. So I do believe that this is a long-term opportunity for us to continue to grow our embedded business. And we've already seen some design wins as a result of the combination of the Xilinx portfolio and the AMD portfolio, and I think we'll see a lot more of that going forward.

Harlan Sur: Great. And in terms of other opportunities, there appears to be this trend towards more of your cloud and hyperscale customers opting to do their own silicon solutions around accelerated compute or AI offload engines, right? And if I look at it right, there are less than a handful of the world's semiconductor companies that have the compute, graphics and connectivity IP portfolio that you guys have as well as the capabilities to design these very complex offload SoCs, right? Does the team have a strategy to try and go after some of the semi-custom or full-blown ASIC-based hyperscale programs?

Lisa Su: We do, Harlan, and I would put it more broadly. The broader point is, I think we have a very complete IP portfolio across CPUs, GPUs, FPGAs, adaptive SoCs, DPUs and a very capable semi-custom team. And so beyond hyperscalers, when we look at sort of higher volume opportunities beyond game consoles, we think there are custom opportunities available. So I think that combination of IP is very helpful. I think it's a long-term opportunity for us, and it's one of the areas where we think we can add value to our largest customers.

Operator: Next question is coming from Ross Seymore from Deutsche Bank.

Ross Seymore: Lisa, I just want to talk about the pricing environment in a general sense. You guys have done a great job of increasing the benefits to your customers and being able to raise prices, pass along cost increases, those sorts of things. But the competitive intensity and the weakness in the market, at least currently, seem like they could work against that. So in the near term, and then perhaps exiting this year into the next couple of years, can you just talk about where you think pricing is going to go across both your Data Center market, most importantly, but then also in your Client market?

Lisa Su: Yes. I think, Ross, what I would say is a couple of things. I think in the Data Center market, the pricing is relatively stable. And what that comes from is our goal to add more capability, right? So it's a TCO equation where, as we're going from Milan to Genoa, we are adding more cores, more performance. And the performance per dollar that we offer to our customers is one where it's advantageous for them to adopt our technologies and our solutions. So I expect that to continue. I think in the Client business, given some of the inventory conditions there, it's a more competitive environment. From my standpoint, we're focused on normalizing the inventory levels. And with that normalization, the most important thing is to ensure that we get the shipments more in line with consumption because I think that's a healthier business environment overall.

And then, again, it’s back to product values, right? So we have to ensure that our products continue to offer superior performance per dollar, performance per watt capabilities in the market.

Ross Seymore: And pivoting for my follow-up on the AI side and MI300. I just wanted to know what you would describe as your competitive advantages? Everybody knows that’s a market that’s exploding right now. There’s tons of demand. You guys have all the IP to be able to attack it. But there’s a very large incumbent in that space as well. So when you think about what AMD can bring to the market, whether it’s hardware, software, heterogeneity of the products you can bring, et cetera, what do you think is the core competitive advantage that can allow you to penetrate that market successfully?

Lisa Su: Yes. There’s a couple of aspects, Ross. And — yes, since we haven’t yet announced MI300, all of the specifications will — some of those will come over the coming quarters. MI300 is the first solution that has both the CPU and GPU together, and that has been very positive for the supercomputing market. . I think as it relates to generative AI, and we think we have a very strong value proposition from both a hardware and again, it’s a performance per dollar conversation, I think there’s a lot of demand in the market. And there’s also — I think given our deep customer relationships on the EPYC side, there’s actually a lot of synergy between the customer set between the EPYC CPUs and the sort of 300 GPU customers. So I think when we look at all these together, our view is that demand is strong for AI.

And I think our position is also very strong given there are very, very few products that can really satisfy these large language model sort of needs. And I think we feel confident that we can do that.

Operator: Next question is coming from Tim Arcuri from UBS.

Timothy Arcuri: Lisa, there was a lot more talk on this call about AI. And obviously, PyTorch 2.0 now supporting ROCm is a great step forward. But how much would you say software is going to dictate how successful you can be for these workloads? You had mentioned that you’re forming this new group, this new AI group. Do you have the internal software capabilities to be successful in AI?

Lisa Su: Tim, I think the answer is yes. I think we have made significant progress even over the last year in terms of our software capabilities. And the way you should think about our AI portfolio is it’s really a broad AI portfolio across client sort of edge as well as cloud. And with that, I think the Xilinx team brings a lot of background and capability, especially in inference. We’ve added significant talent in our AI software as well. And the beauty of particularly the cloud opportunity is it’s not that many customers, and it’s not that many workloads. So when you have sort of very clear customer targets, we’re working very, very closely with our customers on optimizing for a handful of workloads that generate significant volume.

That gives us a very clear target for what winning is in the market. So we feel good about our opportunities in AI. And I'd like to say that it's a multiyear journey. So this is the beginning of what we think is a very significant market opportunity for us over the next 3 to 5 years.

Timothy Arcuri: And I guess as my follow-up. So can I — can you just give us a sense of sort of the overall profile that you see for revenue into the back half? I know you said that Data Center and Embedded will be up this year. It sounds like Data Center probably up double digits. But I also wanted to confirm that you think that total revenues also will be up this year year-over-year.

Lisa Su: Right, Tim. So I think as we said, we're not guiding the full year just given all the puts and takes. So we see Q2 as flattish and the second half returning to growth. We'll have to see exactly how the macro plays out across PCs and enterprise. But yes, we feel good about growth in Embedded, growth in Data Center — on the data center side, double-digit growth sort of half year-over-year. And then we'll see how the rest of the segments play out.

Operator: Your next question is coming from Ambrish Srivastava from BMO Capital Markets.

Ambrish Srivastava: Actually, I wanted to come back to the first quarter for Data Center. That's a pretty big gap on a Q-over-Q basis between your business and Intel's, I think almost 2x. This is the first time you would have lost share on a Q-over-Q basis in a long time. So could you please address that? And I acknowledge that quarters can be pretty volatile, but it seems to be a pretty large gap. And then for my follow-up, just remind us again, please, for the full year growth for Data Center, kind of what's embedded in the assumptions for cloud as well as enterprise?

Lisa Su: Yes. Let me make sure I get your question, Ambrish. So you’re asking about Q1 Data Center and whether we think we’ve lost share on a sequential basis?

Ambrish Srivastava: Right. If I look at just your report versus what Intel reported on a Q-over-Q basis, clearly on a year-over-year basis you have gained share. But I'm just comparing down $14 million versus what you reported, and so that would imply that you had a share loss versus them unless the Data Center GPU and the Xilinx business was down significantly also on a Q-over-Q basis.

Jean Hu: Yes. Ambrish, maybe I'll give you a little bit of color. Definitely in Q1, the other Data Center businesses, including networking and GPU, were down; that definitely is the case. But from a share perspective, when we look at overall Q1 reported revenue from both sides and analyze the data, we don't believe we lost share.

Lisa Su: Yes, that’s right, Ambrish. So I think you just have to go through each of the pieces. But I think from an EPYC or a server standpoint, we don’t believe we lost share. If anything, we might have gained a little bit. But I think overall, I wouldn’t look at it so closely on a quarter-by-quarter basis because there are puts and takes. From what we see overall, we believe that we have a good overall share progression as we go through the year.

Ambrish Srivastava: And then the underlying assumptions for full year for Data Center?

Lisa Su: Underlying assumptions for the full year. I think the key pieces that I talked about are Q2, let's call it, modest growth. We still expect some cloud optimization to be happening. As we go into the second half of the year, we'll see a stronger ramp of Genoa and the beginnings of the ramp of Bergamo. We think enterprise is still more dependent on macro, but we do believe that, that improves as we go into the second half of the year. And then we'll have the beginnings of our MI300 ramp in the fourth quarter for both supercomputing and some early AI workloads.

Operator: Next question is coming from Stacy Rasgon from Bernstein Research.

Stacy Rasgon: For my first one, Lisa, can you just like clarify this explicitly for me. So you said double-digit Data Center. Was that a full year statement? Or was that a second half year-over-year statement? Or was that a half-over-half statement for Data Center?

Lisa Su: Yes. Let me be clear. That was a year-over-year statement. So double-digit Data Center growth for the full year of 2023 versus 2022.

Stacy Rasgon: Got it. Which just given what you did in Q1 and sort of are implying for Q2 needs something like 50% year-over-year growth in the second half to get there. So you’re endorsing those — you’re endorsing that now?

Lisa Su: I am…

Jean Hu: Yes, your math is right.

Stacy Rasgon: Okay. For my second question, Jean, you made a comment on gross margins where you said the increase of gross margins in the second half was dependent on gross margin in Client getting better. I just want to make sure, did I hear that right? And why should I expect Client margins would get better, especially given what Intel has been doing in that space to protect everything? Like why is that something that’s going to happen?

Jean Hu: Yes, Stacy, that’s a good question. The way to think about it is if you look at our Q1 gross margins and the Q2 guide around 50%. And as you know, both our Data Center and Embedded have a very strong gross margin performance. And so what the headwinds that impact our gross margin is really PC client on the side, which, as we talked about, is we are shipping significantly under the consumption and also to digest inventory in the downstream supply chain. As you know, typically, that’s the time you get a significant pressure on the ASP side and on the funding side, that’s why our gross margin in the Client segment has been challenged. In second half, we know it’s going to be normalized. That’s very important fact is when you normalize the demand and the supply, and we continue to plan a very competitive environment, so don’t get us wrong on that front.

But it will be better because you are not digesting the inventory; the channel funding and those kinds of price reductions will be much less. So we do think in the second half, on the Client side, the gross margin will be better than the first half.

Stacy Rasgon: Got it. And I apologize, I misspoke as well. 50% half-over-half in Data Center, not year-over-year. So we're all doing it.

Operator: Our next question is coming from Blayne Curtis from Barclays.

Blayne Curtis: I had two. Maybe just to start with following on Stacy’s prior question. Could you just comment on what client ASPs did in the March quarter? I mean I assume they’re down a decent amount. Your competitor was down, but any color you could provide on what the environment was in March?

Lisa Su: Yes, sure, Blayne. So the ASPs were down quite a bit on a year-over-year basis, if you're talking about the overall Client business. And what that is, is that Client ASPs were higher in the first half of '22, if you just think about what the supply environment or the demand environment was then. And given that we're undershipping in the first quarter, the ASPs are lower.

Blayne Curtis: Got it. And then I just wanted to ask you on the Data Center business, the operating profit is down a ton sequentially. And you talked about enterprise being down, I think that’s part of it. But it’s a big drop, and it looks like gross margin probably is down a bunch too. Can you just comment on why that drop in profitability in Data Center?

Jean Hu: Yes, Blayne, that’s a good question. I think when you look at the year-over-year, you’re absolutely right. Revenue is largely flattish, but operating margin dropped significantly. There are 2 major drivers. The first is we have increased the investment significantly, especially in networking and AI. As you may recall, we closed the Pensando last May or June. So this is the full quarter for Pensando expenses versus last year. Plus we also increased GPU investment under AI investment. That’s all under the Data Center bucket. Secondly, I mentioned about product mix. Lisa said year-over-year cloud sales grew double digits significantly and enterprise actually declined. So in Q1, our revenue in data center is heavily indexed to the cloud market. versus last year in Q1. Typically, cloud gross margin is lower than enterprise. We do expect, even in Q2, it will be balanced — more balanced and going forward, we do think the enterprise side will come back.

Blayne Curtis: But I guess the big decline was sequential. So I’m assuming cloud was down sequentially?

Jean Hu: Yes. Sequentially, it's revenue. If you look at the revenue, it was down very significantly, right? And the mix also is a little bit more indexed to cloud sequentially, too.

Lisa Su: Yes. It’s the same factors.

Jean Hu: Yes.

Lisa Su: So it's both the mix to cloud as well as the increased R&D expense, just given the large opportunities that we have across the data center and especially AI.

Operator: Your final question today is coming from Harsh Kumar from Piper Sandler.

Harsh Kumar: Lisa, I had a question. I wanted to ask you about your views on the inferencing market for generative AI. Specifically, I wanted to ask because I think there are some crosscurrents going on. We're hearing that CPUs are the best way to do inferencing, but then we're hearing the timeliness of CPUs is not there to be able to enable these kinds of instances. So I was curious what you guys think. And then I had a follow-up.

Lisa Su: Well, I think, Harsh, as you were saying, I mean, I think today CPUs are used a lot for inference. Now where the demand is highest right now is for generative AI and large language model inferencing; you need GPUs to have the horsepower to train sort of the most sophisticated models. So I think those are the 2 crosscurrents, as you say. I think inference becomes a much more important workload just given the adoption rate of AI across the board. And I think we'll see that for smaller tasks on CPUs, but for the larger tasks on GPUs.

Harsh Kumar: Okay. So it still defers to GPUs for those. And then a similar question on the MI300 series. I know that you talked a lot about success on the HPC side. But specifically, I was curious if you could talk about any wins or any kind of successes or success stories you might have on the generative AI side with the MI300 or MI250 series?

Lisa Su: Yes. So as we said earlier, we've done some really good work on MI250 with AI and large language models. The example that is public is what we've done with LUMI and the training of some of the Finnish language models. We're doing quite a bit of work with large customers on MI300, and what we're seeing is very positive results. So we think MI300 is very competitive for generative AI. We'll be talking more about sort of that customer and revenue evolution as we go over the next couple of quarters.

Ruth Cotter: Great. Operator, that concludes today’s call. Thank you to everyone for joining us.

Operator: Thank you. You may now disconnect and have a wonderful day.
