Intel Corporation (NASDAQ:INTC) Q3 2025 Earnings Call Transcript October 23, 2025

Intel Corporation beats earnings expectations. Reported EPS is $0.23, expectations were $0.01781.

Operator: Thank you for standing by, and welcome to Intel Corporation’s Third Quarter 2025 Earnings Conference Call. [Operator Instructions] As a reminder, today’s program is being recorded. And now I’d like to introduce your host for today’s program, John Pitzer, Vice President, Investor Relations. Please go ahead, sir.

John Pitzer: Thank you, Jonathan, and good afternoon to everyone joining us today. By now, you should have received a copy of the Q3 earnings release and earnings presentation, both of which are available on our Investor Relations website, intc.com. For those joining us online today, the earnings presentation is also available in our webcast window. I am joined today by our CEO, Lip-Bu Tan; and our CFO, David Zinsner. Lip-Bu will open up with comments on our third quarter results as well as provide an update on our progress implementing strategic priorities. Dave will then discuss our overall financial results, including fourth quarter guidance before we transition to answer your questions. Before we begin, please note that today’s discussion contains forward-looking statements based on the environment as we currently see it and as such, are subject to various risks and uncertainties.

It also contains references to non-GAAP financial measures that we believe provide useful information to our investors. Our earnings release, most recent annual report on Form 10-K and other filings with the SEC provide more information on specific risk factors that could cause actual results to differ materially from our expectations. They also provide additional information on our non-GAAP financial measures including reconciliations where appropriate to our corresponding GAAP financial measures. With that, let me turn things over to Lip-Bu.

Lip-Bu Tan: Thank you, John, and let me add my welcome this afternoon. We delivered a solid Q3 with revenue, gross margin and earnings per share all above guidance. This marks the fourth consecutive quarter of improved execution, driven by the underlying growth in our core markets and the steady progress we are making to rebuild the company. While we still have a long way to go, we are taking the right steps to create sustainable shareholder value. We significantly improved our cash position and liquidity in Q3, a key focus for me since becoming CEO in March. This includes accelerated funding from the United States government, important investments from NVIDIA and SoftBank Group and monetizing portions of Altera and Mobileye. The actions we took to strengthen the balance sheet give us greater operational flexibility and position us well to continue to execute our strategy with confidence.

In particular, I’m honored by the trust and confidence President Trump and Secretary Lutnick have placed in me. Their support highlights Intel’s strategic role as the only U.S.-based semiconductor company with leading-edge logic R&D and manufacturing. We are fully committed to advancing the Trump administration’s vision of restoring semiconductor production, and we proudly welcome the U.S. government as an essential partner in our efforts. We also made tangible progress to improve our execution this quarter. We remain on track not only to rightsize the company by year-end but also to evolve the talent mix, reestablish the engineering-first mindset and optimize the executive and management levels across the organization. We are seeing a significant increase in day-to-day energy and collaboration as our employees return to the office after a sustained period of remote and hybrid work.

Let me dive deeper into our underlying business trends. Over the course of my career, I have had the privilege of contributing to multiple waves of disruptive innovation. But I can’t recall a time when I have been more excited about the future of computing and the opportunities in front of us. We are still in the early stages of the AI revolution, and I believe Intel can and will play a much more significant role as we transform the company. Let’s start with our core x86 franchise, which continues to play a critical role in the age of AI. AI is clearly accelerating demand for new compute architectures, hardware, models and algorithms. At the same time, it is fueling renewed growth of traditional compute, as the underlying data and the resulting insights continue to rely heavily on our existing products from cloud to edge.

AI is driving near-term upside to our business, and it is a strong foundation for sustainable long-term growth as we execute. In addition, with unmatched compatibility, security and flexibility by virtue of being the largest installed base of general purpose compute, x86 is well positioned to power the hybrid compute environments that AI workloads demand, particularly for inference, edge workloads and agentic systems. It is a great starting point from which to rebuild our market position by revitalizing and rejuvenating the x86 ISA and positioning it for the new era of computing with great products and partnerships. Our collaboration with NVIDIA is a prime example. We are joining forces to create a new class of products and experiences spanning multiple generations that accelerate the adoption of AI for the hyperscale, enterprise and consumer markets.

By connecting our architectures to NVIDIA NVLink, we combine Intel’s CPU and x86 leadership with NVIDIA’s unmatched AI and accelerated computing strength, unlocking innovative solutions that will deliver better customer experiences and position Intel strongly in the leading AI platforms of tomorrow. We need to continue to build on this momentum and capitalize on our position by improving our engineering and design execution. This includes hiring and promoting top architecture talent as well as reimagining our core road map to ensure it has best-in-class features. To accelerate this effort, we recently created the Central Engineering Group, which will unify our horizontal engineering functions to drive leverage across foundational IP development, test chip design, EDA tools and design platforms.

This new structure will eliminate duplication, speed up decision-making and enhance coherence across all product development. In addition, and just as important, the group will spearhead the build-out of our new ASIC and design services business to deliver purpose-built silicon for a broad range of external customers. This will not only extend the reach of our core x86 IP but also leverage our design strengths to deliver an array of solutions from general purpose to fixed function computing. In client, we are on track to launch our first Panther Lake SKU by year-end, followed by additional SKUs in the first half of next year. This will help us solidify our strong position in the notebook segment across both consumer and enterprise with cost-optimized products across our full PC stack, from our entry-level offerings to our mainstream Core family, up through our highest-performing Core Ultra family.

In high-end desktop, competition remains intense, but we are making steady progress. Arrow Lake shipments have increased throughout the year, and our next-generation Nova Lake product will bring new architecture and software upgrades to further strengthen our offerings, particularly in the PC gaming halo space. With this lineup, we believe we will have the strongest PC portfolio in years. In traditional servers, AI workloads are driving both a refresh of the installed base and capacity expansion, fueled by rapid growth in tokenization, the increased demands around data storage and processing, and a need to alleviate power and space constraints. We remain the AI head node of choice, with strong demand for Granite Rapids, including instances across every major hyperscaler.

We are listening to what customers need, and strong performance per watt and TCO are top of mind. As I shared with you last quarter, this is a key part of our road map, including improving our multithreading capabilities as we close existing gaps and work to regain share. Finally, on our AI accelerator strategy, I continue to believe that we can play a meaningful role in developing compute platforms for emerging inference workloads driven by agentic AI and physical AI. This will be a far larger market than that for AI training workloads. We will work to position Intel as a compute platform of choice for AI inference, and we look to partner with an array of incumbents as well as emerging companies that are defining this new compute paradigm. This is a multiyear initiative, and we will strike partnerships when we can deliver true differentiation and market-leading products.

In the near term, we will continue delivering AI capabilities across Xeon, AI PCs, Arc GPUs and our open software stack. Looking ahead, we plan to launch successive generations of inference-optimized GPUs on an annual cadence, featuring enhanced memory and bandwidth to meet enterprise needs. Turning to Intel Foundry. Our momentum continues. We are making steady progress on Intel 18A. We are on track to bring Panther Lake to market this year. Intel 18A yields are progressing at a predictable rate, and Fab 52 in Arizona, which is dedicated to high-volume manufacturing, is now fully operational. In addition, we are advancing our work on Intel 18AP, and we continue to hit our PDK milestones. Our Intel 18A family is the foundation for at least the next 3 generations of client and server products.

It will also support our work with the U.S. government within the Secure Enclave and with other committed customers. It is a critical node that will drive wafer volumes well into the next decade and generate a healthy return on our investment. On Intel 14A, the team continues to focus on technology definition, transistor architecture, process flow, design enablement and foundation IPs. We remain actively engaged with potential external customers and are encouraged by the early feedback, which helps us drive and inform our decisions. Lastly, our advanced packaging activities continue to progress well, especially in areas like EMIB and EMIB-T, where we have true differentiation. As with our Intel products, my conviction in the market potential for Intel Foundry continues to grow.

The rapid expansion of critical AI infrastructure is fueling unprecedented demand for wafer capacity and advanced packaging services, presenting a substantial opportunity that demands multiple suppliers. Intel Foundry is uniquely positioned to capitalize on this unprecedented demand as we execute. As I mentioned last quarter, our investment in foundry will be disciplined. We will focus on capability and scalability, giving us the flexibility to ramp quickly, and we will only add capacity when we have committed external demand. Building a world-class foundry is a long-term effort founded on trust. As a foundry, we need to ensure that our process can be easily used by a variety of customers, each with a unique way of building their own products.

We must learn to delight our customers as they call on us to build wafers that meet all their needs for power, performance, yield, cost and schedule. Only by doing this can they rely on us as a true long-term partner to ensure their success. This requires a change of mindset that I’m driving across Intel Foundry as we position this business for long-term success. As we look ahead, my focus remains firmly on the long-term opportunity across every market we serve today and those we will enter tomorrow. Our strategy is crystallized around our unique strengths and value proposition, supported by the accelerating and unprecedented demand for compute in the AI-driven economy. Our leadership continues to strengthen. Our culture is becoming more accountable, collaborative and execution oriented.

And my confidence in the future grows stronger every day. I look forward to keeping you updated as we advance on our journey. I will now turn it over to Dave for details on our current business trends and financials.

David Zinsner: Thank you, Lip-Bu. In Q3, we delivered the fourth consecutive quarter of revenue above our guidance, driven by continued strength in our core markets. Although we remain vigilant regarding macroeconomic volatility, customer purchasing behavior and inventory levels are healthy and industry supply has tightened materially. Furthermore, we are increasingly confident that the rapid adoption of AI is driving growth in traditional compute and reinforcing momentum across our businesses. In client, we are 5 years post the COVID pull forward and are benefiting from the refresh of a larger installed base. Enterprises continue to migrate to Windows 11, and AI PC adoption is growing. In data center, the accelerating build-out of AI infrastructure is positive for server CPU demand from head nodes, inference, orchestration layers and storage.

We are cautiously optimistic that the CPU TAM will continue to grow in 2026 even as we have work to do to improve our competitive position. Third quarter revenue was $13.7 billion, coming in above the high end of our guidance range and up 6% sequentially. Capacity constraints, especially on Intel 10 and Intel 7, limited our ability to fully meet demand in Q3 for both data center and client products. Non-GAAP gross margin was 40%, 4 percentage points better than our guidance on higher revenue, a more favorable mix and lower inventory reserves, partially offset by a higher volume of Lunar Lake and the early ramp of Intel 18A. We delivered third quarter earnings per share of $0.23 versus our guidance of breakeven EPS, driven by higher revenue, stronger gross margins and continued cost discipline.

Q3 operating cash flow was $2.5 billion with gross CapEx of $3 billion in the quarter and positive adjusted free cash flow of $900 million. One of our top priorities for 2025 was shoring up our balance sheet. To that end, we executed on deals to secure roughly $20 billion of cash, including 3 important strategic partnerships. We exited Q3 with $30.9 billion of cash and short-term investments. In Q3, we received $5.7 billion from the U.S. government, $2 billion from SoftBank Group, $4.3 billion from the Altera closure and $900 million from the Mobileye stake sale. We expect NVIDIA’s $5 billion investment to close by the end of Q4. Finally, we repaid $4.3 billion of debt in the quarter and we will continue prioritizing deleveraging by paying maturities as they come due in 2026.

Moving to segment results for Q3. Intel products revenue was $12.7 billion, up 7% sequentially and above our expectations across client and server. The team executed well to support upside in the quarter given the current tight capacity environment, which we expect to persist into 2026. We are working closely with customers to maximize our available output, including adjusting pricing and mix to shift demand towards products where we have supply and they have demand. CCG revenue was $8.5 billion, up 8% quarter-over-quarter and above our expectation due to a seasonally stronger TAM, Windows 11-driven refresh and a stronger pricing mix with the ramp of Lunar Lake and Arrow Lake. Within the quarter, CCG further advanced its relationship with Microsoft through a collaboration with Windows ML and the deep integration of Intel vPro manageability with Microsoft Intune enabling secure cloud connected fleet management for businesses of all sizes.

The team also met all key milestones in support of launching Core Ultra 3, code-named Panther Lake. We expect the client consumption TAM to approach 290 million units in 2025, marking 2 straight years of growth off the post-COVID bottom in 2023. This represents the fastest TAM growth since 2021, and we’re prudently preparing for another year of strong demand in 2026 as Core Ultra 3 ramps into a healthy PC ecosystem. DCAI revenue was $4.1 billion, up 5% sequentially, above expectations, driven by improved product mix and higher enterprise demand. The strength in host CPUs for AI servers and storage compute continued in the quarter even as supply constraints limited additional upside. Our latest Xeon 6 processors, code-named Granite Rapids, offer significant benefits, including up to 68% TCO savings and up to 80% less power as compared to the average server installed today.

It is increasingly clear that CPUs play a critical role today and will going forward within the AI data center as AI usage expands, especially as inference workloads outpace those of training. Some data center customers are beginning to ask about longer-term strategic supply agreements to support their business goals due to the rapid expansion of AI infrastructure. This dynamic, combined with the underinvestment in traditional infrastructure over the last couple of years, should enable the revenue TAM for server CPUs to comfortably grow going forward. Operating profit for Intel products was $3.7 billion, 29% of revenue and up $972 million quarter-over-quarter on stronger product margins, lower operating expenses and a favorable compare due to period costs in Q2.

Before discussing Intel Foundry, I want to acknowledge the tireless effort of the NVIDIA and Intel teams. There’s a lot of work in front of us, but the collaboration we announced this quarter was the culmination of almost a year of hard work with a company that cuts no corners and prioritizes engineering excellence above all. The x86 architecture has been the foundation of the digital revolution that powers the modern world. AI is the next phase of that revolution, and we’re on a path to ensure x86 remains at the heart of it. Engagements like this one with NVIDIA are critical to this effort. Moving to Intel Foundry. Intel Foundry delivered revenue of $4.2 billion, down 4% sequentially. In Q3, Intel Foundry delivered Intel 10 and 7 volume above expectations, met key 18A milestones and released hardened 18A PDKs to the ecosystem.

Foundry also advanced the development of Intel 14A and continues to make progress expanding its advanced packaging deal pipeline. Intel Foundry’s operating loss in Q3 was $2.3 billion, better by $847 million sequentially, primarily on a favorable comparison due to the approximately $800 million impairment charge in Q2. As Lip-Bu discussed, our confidence in the long-term foundry TAM continues to grow, bolstered by the accelerating deployment and adoption of AI and the growing need for wafers and advanced packaging services. Projections are calling for a greater than 10x increase in gigawatts of AI capacity by 2030, creating significant opportunities for Intel Foundry with external customers, both for wafers and our differentiated advanced packaging capabilities like EMIB-T.

We continue the work to earn the trust of our customers, and our improved balance sheet flexibility will allow us to quickly and responsibly respond to demand as it comes. Turning to all other. Revenue came in at $1 billion, of which Altera contributed $386 million, and was down 6% sequentially due to the intra-quarter closure of the Altera transaction. The 3 primary components of all other in Q3 were Mobileye, Altera and IMS. Collectively, the category delivered $100 million of operating profit. Now turning to guidance. For Q4, we’re forecasting a revenue range of $12.8 billion to $13.8 billion. At the midpoint, and adjusting for the Altera deconsolidation, Q4 revenue is roughly flat quarter-over-quarter. We expect Intel products to be up modestly sequentially but below customer demand as we continue to navigate the supply environment.

Within Intel products, we expect CCG to be down modestly and DCAI to be up strongly sequentially as we prioritize wafer capacity for server shipments over entry-level client parts. We expect Intel Foundry revenue to be up quarter-over-quarter on increased Intel 18A revenue, with external foundry revenue also up due to the deconsolidation of Altera. For all other, which now excludes Altera, we expect revenue to decline consistent with Mobileye’s guidance, partially offset by sequential growth in IMS. At the midpoint of $13.3 billion, we forecast a gross margin of approximately 36.5%, down sequentially due to product mix, the impact of the first shipments of Core Ultra 3, which carries the typically higher costs you see in the early stages of a new product ramp, and the deconsolidation of Altera.

We forecast a tax rate of 12% and EPS of $0.08, all on a non-GAAP basis. We expect noncontrolling interest to be approximately $350 million to $400 million in Q4 on a GAAP basis, and we forecast an average fully diluted share count of roughly 5 billion shares for Q4. Moving to CapEx. We continue to anticipate 2025 gross capital investment will be approximately $18 billion, and we expect to deploy more than $27 billion of CapEx in 2025 versus $17 billion deployed in 2024. I’ll wrap up by saying we exit Q3 with a significantly stronger balance sheet, solid demand in the near term and growing confidence in our core x86 franchise as well as the longer-term opportunities in foundry, ASICs and accelerators. We also recognize the work we need to do to reach our full potential.

We continue to add external talent and unlock our workforce to improve our execution across product and process development as well as manufacturing. We will closely manage what’s in our control, react quickly as the environment evolves and focus on delivering long-term shareholder value. At this time, I’ll turn it back to John to start the Q&A.

John Pitzer: Thank you, Dave. We will now transition to the Q&A portion of our call. [Operator Instructions] With that, Jonathan, can we take the first question, please?

Q&A Session

Operator: And our first question comes from the line of Ross Seymore from Deutsche Bank.

Ross Seymore: Congratulations on the strong results. Lip-Bu, the first one for you is going to be on the foundry side. You guys announced a ton of collaborations in the quarter. You very much strengthened your balance sheet. And the tone you took in your preamble sounds much more confident on the progress you’re making in foundry. Do any of these collaborative announcements or equity investments factor into that increased confidence? Or are there some sort of technical merits that you’re seeing that are raising your optimism in that part of your business?

Lip-Bu Tan: Yes, Ross, thank you so much for the questions. So I think a couple of the announcements we made are clearly more on the product side. And one is SoftBank, because they are building up AI infrastructure, and that definitely will need more capacity on the foundry side. So I think that would be the answer. But meanwhile, I’ve been saying that, clearly, from what I have seen on 18A and 14A, we made tremendously good progress, steady progress on 18A. And Panther Lake will depend on it. And then clearly, we see the yield improving in a more predictable way. And I visited Fab 52, which is fully in operation for 18A. And then on 14A, clearly, we’re engaging with multiple customers on a milestone basis.

And we’re also really driving yield, performance and reliability, which are seeing improvement. And also, more exciting, on advanced packaging, we also see important demand from some of the key customers of the foundry, from the cloud and also the enterprise side. So I think overall, we are quite excited to build this long-term trust with some of the customers and scale it. And we also focus on hiring some of the top talent and driving some of the process technology improvements.

John Pitzer: Ross, do you have a follow-up question?

Ross Seymore: Yes, I do. One for Dave on the gross margin side of things. You talked through the upside in the third quarter and the sequential downside in the fourth. But could you just walk us through some of the pluses and minuses as we think about 2026, just kind of directionally? And I guess where I’m going is it seems like the biggest improvement has to come on the foundry gross margin side of things. Is that the biggest driver? What drives it? And as those gross margins go up, does that have any impact on the Intel products gross margin?

David Zinsner: Yes, sure. So obviously, we’re not going to guide ’26, but I think I can give a little bit of color. I think, first of all, just be mindful that Altera is out of the numbers in ’26. They were in the numbers for a large part of ’25, so that’s probably a point of margin headwind for us because they were accretive to our gross margins. So that’s going to be a little bit of a challenge to overcome. I’d say we still believe in this 40% to 60% fall-through for margins. Of course, it’s a range you could drive a truck through, quite honestly. But a lot of it is because of mix. I mean, obviously, Lunar Lake will be a big component, at least in the first half of the year, and that is a dilutive product to us.

And then Panther Lake, while, obviously, it’s going to be a great cost structure for us over time given the wafers are fabbed internally — initially, obviously, when you have a new product on a new process, they’re pretty expensive products to start with. And so it’s dilutive in the beginning of the year, and then it gets better over the course of the year. I do agree. We should see gross margins improve on the foundry side for sure. Partly, that is the scale dynamic that we will see benefit from, but also as we move toward more leading-edge mix, 18A for sure but also even Intel 4 and 3, those products have better pricing and a better cost structure. And so those margins should be accretive. And really, the dynamic about how much it improves will largely be a function of how the mix plays out through the year.

Operator: Our next question comes from the line of Joseph Moore from Morgan Stanley.

Joseph Moore: I was really interested in a lot of the prepared remarks around the sort of differences to your approach to foundry, and you talked about this last quarter and this quarter that you’re sort of looking for customer commitments before you make the investment. Can you just talk about how those conversations are going? And I certainly — I can see the trade-off from a customer standpoint. They’re making a commitment to you. Do they expect that capacity to be built ahead of time? Just is there a bit of a chicken and egg aspect to these investments? And just how are you approaching those conversations?

Lip-Bu Tan: Yes, Joseph, thank you so much for the question. I think on the foundry side, clearly, we are engaging with multiple customers. And in building the trust of the customer, you need to really show the yield improvement and reliability, and you also need to have all the specific IP that they require. It’s a service industry. You need to have all the right IP. That’s why I formed the Central Engineering Group, to get all the right IP to match the customer requirements. And then I think the best way is to really show the performance and the yield, and then we can [ deliver ] test chips so that they can really work on them. And then they can start to deploy their most important revenue wafers with us so that we can drive the success for them.

So I think those are very important. In terms of potential investment and collaboration, I think with different customers come different requirements, and we are working with them. But more important is to get their commitment to the foundry and their support. I think building that trust is what’s most important.

David Zinsner: Yes. Maybe just to add on, I would say that I think customers understand that it takes time from the time you deploy capital to the time when you have output. And so our expectation is we will get those commitments firmed up in time to deploy the capital, in time to meet the demand. I’d also say we’re in a reasonably decent position given the CapEx investments we’ve already made. So we have a lot of the assets on the books and in what we call assets under construction. We’ve made a lot of investments around the shell space. So we do see line of sight to driving a reasonable amount of supply for our external foundry customers with our existing footprint and, quite honestly, with the use of the assets under construction and reuse of equipment that we have on the books today. So I think we have flexibility. Obviously, if things go better, we may be looking to invest more into that more quickly, but we feel reasonably confident we can react to the situation.

John Pitzer: Joe, do you have a follow-up question?

Joseph Moore: I do, yes. Separately, the supply constraints in server CPUs and other CPUs, we see those in the market, I guess, but your growth was 5% sequentially, single-digit growth year-on-year. I guess where is the shortage coming from? Is there just better demand ahead that you’re not able to meet? Is it some of the transitions that you guys have managed? Just — and I certainly see that tightness in the marketplace, so I’m not arguing with that. I’m just curious where you see that shortage coming from and how it will get resolved.

David Zinsner: Yes. I mean, shortage is pretty much across our business, I would say. We are definitely tight on Intel 10 and 7. Obviously, we’re not looking to build more capacity there. And so as we get more demand, we’re constrained. In some ways, we’re living off of inventory. We’re also trying to kind of demand shape to get customers to other products. There’s also shortages even beyond our specific challenges on the foundry side. I think there’s widely reported substrate shortages for example. So obviously, I think the demand — there’s a lot of caution coming into the year, I think, across the board. And it looks like things are going to be stronger this year, and probably that continues well into next year. And I think everybody is trying to manage through it.

Operator: Our next question comes from the line of C.J. Muse from Cantor Fitzgerald.

Christopher Muse: I guess a follow-up on the current outlook for demand outpacing supply into 2026. Curious if that’s a comment largely focused on server or also including client. And I guess, depending on your thoughts there, how should we be thinking about Q1 trends versus normal seasonality, which typically, I guess, would be down high single digits, low double digits?

David Zinsner: Yes. It’s both. Although, as we said, we are yielding a bit of the small core market in client to fulfill customer requirements more broadly in the client space and more specifically in the server space. So that’s how we’re going to kind of manage it. As you look into Q1, obviously, again, this is something we’ll probably give you a lot more color around in January. I would just say we may actually be at our peak in terms of shortages in the first quarter because we’ve lived through Q3 and Q4 with a little bit of inventory to help us and just cranking the output as much as we could with the factory. We probably won’t have as much of that luxury in Q1. So I’m not sure we’ll buck the trend on seasonality given the fact that we’re going to be really, really tight in the first quarter. After that, I think we’ll start to see some improvements, and we can get ourselves caught up as we get through the rest of the year.

John Pitzer: C.J., do you have a follow-up question?

Christopher Muse: I do, John. I guess given the investments from the U.S. government and NVIDIA, SoftBank, et cetera, I’m curious, with that improved cash position and liquidity, how has your thinking evolved in terms of investments in either CapEx or other investments in your product businesses?

David Zinsner: Yes. I mean, obviously, we’re in a great position. I’d say, as we think about this cash, our first focus is to delever. I mean, that’s one of the things we really wanted to — when Lip-Bu came in, he really was upset about the balance sheet. So we’ve done a lot to work on that and improve that for him. We took $4.3 billion of debt off the books this quarter, and all the maturities next quarter and next year should come off, and we’ll repay those. I think as you look at CapEx, it puts us in a position of flexibility on CapEx, but we want to be very disciplined around CapEx. So we will absolutely be looking at demand. Lip-Bu’s been very direct with us on this. He wants to see the whites of the eyes of the customer so that we can believe in that demand.

And if that demand exists, of course, we will amp up the CapEx as necessary. As you think about investment, we still think that $16 billion of OpEx investment for next year is the right amount. Although Lip-Bu and I are constantly now looking at how we mix that $16 billion to drive the best possible growth and return for investors, and we will be making those changes. Beyond that, we’ll see how things go. We want to be pretty disciplined about our OpEx as a percent of revenue and drive leverage. But we do see opportunities to make investments that can, I think, deliver great returns for shareholders, and we’re not afraid to do that either.

Operator: Our next question comes from the line of Blayne Curtis, Jefferies.

Blayne Curtis: I had 2. Just on the CapEx, I think you reiterated $18 billion, but I think you spent, I guess, less than I was modeling in Q3. So is that really still the number? And I’m just kind of curious, as you start to ramp this 18A in Arizona, is there a way to think about the timing of when you add capacity there?

David Zinsner: On the $18 billion, yes, I think that’s still the number. Obviously, CapEx can be lumpy. It depends on when things get — when all the requirements associated with paying the invoice are completed, that’s when we make the payments. And so we would expect to be somewhere in that range. Obviously, there’s an error bar around that. It might be a little bit less or a little bit more than that. 18A, we still have to ramp this. I wouldn’t expect significant capacity increases in the near term. But I think as we said, we are not at peak supply for 18A. In fact, we don’t get there until the end of the decade. And we do think that this node will be a fairly [ long-lived ] node for us. And so we will continue to make investments on 18A over time. There will be CapEx investments next year, but I wouldn’t expect the supply to — at least capacity to significantly change vis-a-vis our expectations right now.

John Pitzer: Blayne, do you have a question — follow-up?

Blayne Curtis: Yes. Just I wanted to follow up on the gross margin trajectory as 18A layers in. I know comparing it to the prior couple of nodes is probably not a great compare, but maybe to a successful one. When you say yields are in a good spot and improving, is there a way to think about where those 18A yields are versus the successful products that you’ve seen in your history and kind of think about how that layers in, in the first half?

David Zinsner: Yes. I would say, in general, I don’t — I’m not sure yields on older nodes have been a big focus of ours, quite honestly. So we’re blazing a new trail on this. What I would say is the yields are adequate to address the supply, but they are not where we need them to be in order to drive the appropriate level of margins. And by the end of next year, we’ll probably be in that space. And certainly, the year after that, I think they’ll be at what would be kind of an industry-acceptable level on the yields. I would tell you, on 14A, we’re off to a great start. And if you look at 14A in terms of its maturity relative to 18A at that same point of maturity, we’re better in terms of performance and yield. So we’re off to an even better start on 14A. We just have to kind of continue that progress.

Operator: Our next question comes from the line of Stacy Rasgon from Bernstein Research.

Stacy Rasgon: I wanted to go back and ask about the supply constraints again. So you talked a lot about how AI was driving a lot of demand across servers and across PCs, but at the same time, it doesn’t look like customers want your AI products. In fact, they can’t get enough of the older stuff. So I guess, you — I mean, you must have plenty of supply for Granite and for Meteor and even for Lunar Lake. So how are you going to get the customers off of the older products where they haven’t shown any desire to get off of them so far even given the constraints that they’ve been under? And I guess, how do we think about the transition of those customers? Because you’re clearly — I mean, you even said it yourself. You’re not adding any more of the older capacity. In fact, you took some of it offline, right?

David Zinsner: Yes. Yes, good question, Stacy. I think it’s a mischaracterization to say AI hasn’t done well. I mean, it was sequentially up double digits quarter-over-quarter, and we talked about a number — that we would ship about 100 million units by the end of this year on AI PC — and we’re going to be, to first order, in that range. So I think it’s going pretty well. Clearly, though, the older nodes have also done well, and that was probably the part that was more unexpected. I think we’ve just got to participate in making sure that the ecosystem drives enough applications for AI in the PC space. And we work with the ISVs regularly to drive that. They’re getting there. Like any market, it starts relatively immature and kind of builds out over time.

But even in our company, we’re starting to find uses for AI PC. In fact, IR is coming up with one here that we’ll be using. So I think it’s just kind of time. Now that said, what clearly is happening is the Windows refresh is happening more significantly than I think we expected. And that’s not necessarily an AI PC story. And so Raptor Lake is also a product that addresses that. And so we’re just seeing upside in that part of the market as well.

Stacy Rasgon: Dave, I want to follow up on 2 things that I think I heard you say on 18A. I thought I heard you say, number one, that the yields would not be in a great place at least until the end of next year. And then I thought I also heard you say that you were not going to be adding a lot of 18A capacity next year. Did I hear those wrong? I mean, the latter one, how can that be true if you’re ramping Panther? Or is that like a…

David Zinsner: We’re obviously in our infancy. What I’m saying is, relative to the CapEx plan, it’s not like we’re going to incrementally add supply for 18A next year. But yes, of course, we’re going to be ramping the volume over the course of the next year. I wouldn’t say 18A yields are in a bad place. I mean, they’re where we want them to be at this point. We had a goal for the end of the year, and they’re going to hit that goal. But to be fully accretive in terms of the cost structure of 18A, we need the yields to be better. I mean, that’s like every process. That’s what happens. And it’s going to take all of next year, I think, to really get to a place where that’s the case.

Operator: Our next question comes from the line of Joshua Buchalter from TD Cowen.

Joshua Buchalter: I wanted to ask about some comments Lip-Bu made in the prepared remarks about fixed function computing and potentially supporting more ASICs. Was this — maybe could you provide more context on the scope of this? Is this for potential foundry customers? Or are these products? And if it’s products, what types of applications do you expect to be supporting with custom silicon?

Lip-Bu Tan: Good question. So I think, first of all, I just mentioned the Central Engineering Group. We are driving the ASIC design business, and that is actually a good opportunity for us to enhance and extend the reach of our core x86 IP and also drive purpose-built silicon for some of our system and cloud players and customers. And then definitely, with foundry and packaging, they are also helping us in terms of their requirements. So all in all, I think AI will be driving a lot of growth, especially as we double down on Moore’s Law, and that will help us a lot in our x86 uplift. And that’s an opportunity for us to build the whole ASIC design business to serve some of the customer requirements.

John Pitzer: Josh, do you have a quick follow-up?

Joshua Buchalter: Yes. So on the last quarter, obviously, the disclosure that you may decide to abandon 14A got a lot of attention. I just wanted to ask, given your balance sheet is in a lot different spot than it was 3 months ago, has anything changed from that regard? I don’t think the Q is out, so I haven’t seen if any of the language changed there. But was curious if anything had moved around since last quarter given all the changes in your balance sheet.

Lip-Bu Tan: Yes. Since last quarter, I think clearly, our engagement with customers for 14A has increased, and we are very heavily engaged with customers in terms of defining the technology, the process, the yield and the IP requirements to serve them. And they clearly see the tremendous demand and that they need Intel to be strong on 14A. So we are delighted and more confident. And meanwhile, we’re also attracting some of the key talent for process technology that can really drive success, and that’s why it gives me a lot more confidence to drive that.

Operator: Our next question comes from the line of Ben Reitzes from Melius.

Benjamin Reitzes: Lip-Bu, can we get an update on the NVIDIA relationship and the timing of products? Have you gotten any feedback from customers? And are you able to articulate the materiality of the relationship in terms of timing, or any other color you want to give us on that?

Lip-Bu Tan: Sure. Thank you. This is a very important collaboration with NVIDIA. It’s a great company, as you guys know, and I have known Jensen as a friend for more than 30 years. We are very excited about this effort combining Intel’s CPU and x86 leadership with their unmatched AI and accelerated computing, connected through their NVLink, and that will really create a new class of products across multiple generations. This is a very heavy engineering-to-engineering engagement. And it will drive new custom data center and PC products that are really optimized for the AI era. So all in all, I think this is going to be multiple years of engagement, addressing a market that we are excited about and also driving some of the requirements for AI infrastructure.

David Zinsner: And maybe just one more addition. Just what makes this really special for us is it’s not attacking our existing TAM. It’s an incremental opportunity for us to expand the TAM. And so these are great opportunities for us.

John Pitzer: Ben, do you have a follow-up?

Benjamin Reitzes: Yes. Lip-Bu, you mentioned a little bit about your AI strategy now to attack the inference market and that there’s — you see room for Intel solutions. And it sounds like you’re going to partner a lot there. Is this strategy more about partnering? Is it more about — is there a specific Intel IP for inferencing that you’re excited about? Or is it more of a Switzerland approach where you could partner with a lot of the existing players out there to attack more of the TAM?

Lip-Bu Tan: Yes, good question. So I think, first of all, with AI driving a lot of growth, we definitely want to play in that. I think this is very early innings, so it’s an opportunity for us. And I think one area that we are focused on is revitalizing our x86 and really tailoring purpose-built CPU and GPU to the requirements of the new AI workloads, and then really addressing power-efficient agentic AI and managing all the different agents. This is a new compute platform of choice, and we will also be applying it to systems and software. That is to say, I think we’re going to partner with some of the incumbents and also the emerging companies that are driving some of these changes.

Operator: Our next question comes from the line of Timothy Arcuri from UBS.

Timothy Arcuri: Dave, it’s not that often we see high fixed cost businesses that are constrained that have gross margins less than 40%. And I certainly get that most of this is because of the wafer cost for Intel 10 and 7 and you still have low yields on 18A. But I’m wondering — and this is probably a hard question to ask — to answer, but I wonder if you could maybe just kind of fast forward a bit and say like what would gross margin be if you were off of 10 and 7 and you were on 18A. Is there a way you could like normalize it for us?

David Zinsner: Yes. I mean obviously you may be listening in on some of my conversations with the team here because that’s definitely something I’ve been making the point of. I would say there’s 2 dynamics, 1 of which you’re hitting on, and that is the high cost of older processes versus the better cost structure for the newer processes. And that’s obviously meaningful. I mean we’re in negative gross margin territory for foundry. That makes a meaningful improvement if you move it even into the positive territory. But the other aspect of our gross margin is a function of just the product quality. We’re in reasonably decent shape on client in terms of product performance and competitiveness with a few exceptions, but we’re not where we need to be on a cost basis.

And so we’ve got to make improvements there. And we have that on the road map. The team recognizes it, but that’s a multiyear process to get there. But it’s more pronounced on the data center side. Not only do we not have the right cost structure, but we also don’t have the right competitiveness to really get the right margins from our customers. And so we’ve got work to do there. And so that’s what Lip-Bu and the team he has pulled in are hyper-focused on: getting great products at the right cost structure to drive better gross margins. That to me, I think, is the linchpin in all this. The improvements on the foundry side are just going to come, I think. We’re going to mix higher and higher to Intel 3, 4 and then 18A and ultimately, 14A. The cost structures of all of those are actually pretty similar.

And it will just be a function of the fact that the value that’s provided by those leading edge nodes is going to be significantly more, and that’s going to just materially drive the gross margins up. The other thing that I would say is we are seeing a lot of start-up costs by virtue of the fact that we’ve jammed a whole bunch of new processes in kind of in rapid fashion. As we get into 14A, our cadence will be more normalized. And so you won’t see so much start-up costs stacked on top of each other, which is affecting gross margins. And that’s billions of dollars. So I think as you get beyond a few years, that rolls off and will also help.

Timothy Arcuri: I do. Yes. Lip-Bu, you didn’t give an update last call on the Diamond Rapids launch date. I know the whole road map is under review, but the company sounds fairly optimistic about Coral Rapids. Can you just give us sort of an update on the data center road map a bit here?

Lip-Bu Tan: Yes. Thank you. Good questions. So I think, clearly, Diamond Rapids is getting stronger hyperscaler feedback. And then we are also focused on the new product, Coral Rapids, and that will include SMT, simultaneous multithreading, which can drive higher performance. We’re in the definition stage. And then we will work out the road map, and we’re going to execute on that going forward.

Operator: Our final question for today comes from the line of Aaron Rakers from Wells Fargo.

Aaron Rakers: Just a couple of real quick ones. I guess going back to the NVIDIA relationship. I can appreciate that the announcement was really tied to the NVLink Fusion strategy and integrating that with the x86 ecosystem. But I think there’s been some — also some recent reports about maybe using Gaudi for some dedicated inference workloads within a stack of NVIDIA. How do you — is this relationship a starting point? And should we expect to see more potential integration beyond NVLink going forward?

Lip-Bu Tan: Yes, let me answer that. I think NVLink is kind of more the hub connecting the x86 and the GPU. In terms of the AI strategy, clearly, we are defining the Crescent Island product that we talked about. And we also have new products in the lineup, addressing agentic and physical AI, more on the inference side. So stay tuned. We will update that.

John Pitzer: Aaron, do you have a quick follow-up?

Aaron Rakers: I do, and I’ll make it really short. Can you just update us on how we think about the NCI, the noncontrolling interest expense, as we look through this year and think about that? I think you’ve given some comments in the past of how we should think about that into ’26.

David Zinsner: Yes. I think ’26, we’re looking at somewhere in the $1.2 billion to $1.4 billion range is probably a good estimate. Obviously, we’re focused on that, and we’ll work to minimize that as much as possible.

Lip-Bu Tan: With that, I want to thank everyone for joining us today. We are on a journey to rebuild Intel, and we have a lot of work ahead of us, but we made solid progress in Q3. I look forward to seeing many of you throughout the quarter and providing you another update in January.

Operator: Thank you, ladies and gentlemen, for your participation in today’s conference. This does conclude the program. You may now disconnect. Good day.
