Astera Labs, Inc. Common Stock (NASDAQ:ALAB) Q2 2025 Earnings Call Transcript August 6, 2025

Operator: Good afternoon. My name is Rebecca, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs Second Quarter Earnings Conference Call. [Operator Instructions] I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.

Leslie Green: Thank you, Rebecca. Good afternoon, everyone, and welcome to the Astera Labs Second Quarter 2025 Earnings Conference Call. Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President, Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management’s current beliefs, expectations and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today’s earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in our most recent annual report on Form 10-K and our upcoming filing on Form 10-Q.

It is not possible for the company’s management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements. In light of these risks, uncertainties and assumptions, the results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call, except as required by law.

Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company’s performance. These non-GAAP financial measures are provided in addition to and not as a substitute for financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website. And with that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?

Jitendra Mohan: Thank you, Leslie. Good afternoon, everyone, and thanks for joining our second quarter conference call for fiscal year 2025. Today, I’ll provide an overview of our Q2 results, followed by a discussion around our rack-scale connectivity vision. I will then turn the call over to Sanjay to walk through Astera Labs’ near- and long-term growth profile. Finally, Mike will give an overview of our Q2 2025 financial results and provide details regarding our financial guidance for Q3. Astera Labs delivered strong results in Q2, with all financial metrics coming in favorable to our guidance. Quarterly revenue of $191.9 million was up 20% from the prior quarter and up 150% versus Q2 of last year. Growth during the quarter was driven by both our signal conditioning and switch fabric product lines, establishing a meaningful new revenue baseline for the company to build upon.

This quarter, we achieved a key milestone with our market-leading Scorpio P-Series switches, supporting PCIe 6 scale-out applications ramping into volume production to support the deployment and general availability of customized rack-scale AI system designs based on merchant GPUs. Strong demand for our PCIe solutions helped to drive material top line upside during the quarter. Scorpio exceeded 10% of total revenue, making it the fastest ramping product line in the history of Astera Labs. Furthermore, we continue to see strong activity and engagement across both our Scorpio P-Series and X-Series PCIe Fabric Switches, and we are pleased to report that we won new designs across multiple new customers during the quarter. We remain on track for Scorpio to exceed 10% of total revenue in 2025, while becoming the largest product line for Astera Labs over the next several years.

Our Aries product family grew during the quarter and continues to diversify across both GPU and custom ASIC-based systems for a variety of applications, including scale-up and scale-out connectivity. Additionally, our first-to-market Aries 6 solution supporting PCIe 6 began volume ramp during the quarter within rack-scale merchant GPU-based systems. Our Taurus product family demonstrated strong growth driven by AEC demand, supporting the latest merchant GPUs, custom AI accelerators as well as general-purpose compute platforms. Leo continues to ship in preproduction quantities as customers expand their development rack clusters to qualify new systems, leveraging the recently introduced CXL-capable data center CPU platforms. In addition to strong financial and operational performance during Q2, we continue to expand our strategic relationships across both customers and ecosystem partners as the industry pushes forward with innovative new technologies.

First, we broadened our collaboration with NVIDIA to support NVLink Fusion, providing additional optionality for customers to deploy NVIDIA AI accelerators while leveraging high-performance scale-up networks based on NVLink technology. Next, we announced a partnership with Alchip Technologies to advance the silicon ecosystem for AI rack-scale infrastructure by combining our comprehensive connectivity portfolio with their custom ASIC development capabilities. Within the CXL ecosystem, industry progress continues with SAP recently highlighting their collaboration with Microsoft featuring Intel’s Xeon 6 processors to optimize SAP HANA database performance by utilizing CXL memory expansion. Lastly, we joined AMD on stage during their Advancing AI 2025 keynote presentation as a trusted partner to showcase UALink, which is the only truly open memory semantic-based scale-up fabric purpose-built for AI workloads.

To continue the relentless pursuit of AI model performance, data center infrastructure providers are beginning a transformation to what we call AI Infrastructure 2.0. We define this AI Infrastructure 2.0 transition as the proliferation of open standards-based AI rack-scale platforms that leverage broad innovation, interoperability and a diverse multi-vendor supply chain. This transition is in its early stages, and we are strategically crafting our road maps to help lead these secular connectivity trends over the coming years. The transition to AI Infrastructure 2.0 is especially significant at the rack level, as modern AI workloads demand ultra-low latency communication between hundreds of tightly integrated accelerators over a scale-up network.

Astera Labs is well positioned to support this infrastructure transformation as an anchor solution partner with expertise across the entire connectivity stack. First, we support a variety of interconnect protocols, including UALink and PCIe for scale up, Ethernet for scale out and CXL for memory. We are very excited about the momentum behind the UALink scale-up connectivity standard, which exemplifies the open ecosystem approach by combining the low latency of PCIe and the fast data rates of Ethernet to deliver best-in-class end-to-end latency and bandwidth. Next, we provide a broad suite of Intelligent Connectivity products to address the entire rack across both purpose-built silicon and hardware solutions, all featuring our COSMOS software for best-in-class fleet monitoring and management.

Lastly, our deep partnerships across the entire ecosystem continue to expand as we work closely with ASIC and GPU vendors to align features, interoperability and road maps to solve the rack-scale connectivity challenges of tomorrow. In summary, Astera Labs has demonstrated strong momentum in our business and the prospects for continued diversification and scale are driving our road maps and R&D investment. We are in the early stages of the AI Infrastructure 2.0 transformation which Astera Labs is uniquely positioned to help proliferate over the coming years. Scale-up connectivity for rack-scale AI infrastructure alone will add close to $5 billion of market opportunity for us by 2030. And we remain committed to supporting our customers as they choose the architectures and technologies that best suit their AI performance goals and business objectives.

With that, let me turn the call over to our President and COO, Sanjay Gajendra to outline our vision for growth over the next several years.

Sanjay Gajendra: Thanks, Jitendra, and good afternoon, everyone. Today, I want to provide an update on our recent execution, followed by an overview of the meaningful market opportunities and growth catalysts that Astera Labs will address within the forthcoming transition to AI Infrastructure 2.0. Our goal is to deliver a purpose-built connectivity platform that includes silicon, hardware and software solutions for rack-scale AI deployments. To achieve this goal, our approach has been to increase our addressable dollar content in AI servers by rapidly expanding our product lines to provide a comprehensive connectivity platform and capture higher-value sockets that include smart cable modules, gearboxes and fabric solutions.

We also see increasing attach rates driven by higher speed interconnects in platforms deployed by customers who are collectively investing hundreds of billions of dollars on AI infrastructure annually. Starting in Q2 of 2025, Astera Labs executed the next step in its high-growth evolution by ramping our PCIe Scorpio Fabric Switches and Aries 6 retimers into volume production. This latest wave of growth has further diversified our overall business as we now have 3 product lines contributing about 10% of total sales. During this transition, our silicon dollar content opportunity has expanded into the range of multiple hundreds of dollars per AI accelerator which has effectively established a new revenue baseline for the company. Looking ahead, we are excited about the opportunities enabled by scale-up interconnect topologies.

Given the extreme importance of scale-up connectivity to overall AI infrastructure performance and productivity, we see Scorpio X-Series solutions as the anchor socket within next-generation AI racks. We are engaged with over 10 unique AI platform and cloud infrastructure providers who are looking to utilize our fabric solutions for their scale-up networking requirements. We look for Scorpio X-Series to begin shipping for customized scale-up architectures in late 2025, with a shift to high-volume production over the course of 2026. With the ramp of Scorpio X-Series for scale-up connectivity topologies next year, we expect our overall silicon dollar content opportunity per AI accelerator to significantly increase. Overall, we expect this to be another step-up from a baseline revenue standpoint.

Also, given the size of the scale-up connectivity opportunity, we expect our Scorpio X-Series revenue to quickly outgrow Scorpio P-Series revenue. In 2026 and beyond, cloud platform providers and hyperscalers will begin to deploy next-generation platforms as the industry transitions to AI Infrastructure 2.0. We believe the fastest path to this transformation lies in purpose-built solutions developed within open ecosystems with a multi-vendor supply chain. For Astera Labs, this transformation will be the catalyst for the next wave of overall market opportunity and revenue growth. Our expertise and support for major interconnect protocols, including PCIe, Ethernet, CXL and UALink puts us in an excellent position to participate in these next-generation design conversations.

UALink represents the cleanest and most optimized scale-up strategy for AI accelerator providers given its robust performance potential, open ecosystem, diverse supply chain and purpose-built approach. Early industry momentum has been very encouraging with multiple hyperscalers and several compute platform providers looking to incorporate UALink into their accelerator road map and engaging with RFPs as an indication of strong interest. As a leading promoter of UALink, Astera Labs is committed to developing and commercializing a broad portfolio of UALink connectivity solutions ranging from AI fabrics to signal conditioning solutions and other I/O components. Proliferation of UALink in 2027 and beyond will represent a long-term growth vector for Astera Labs.

In conclusion, we are proud of our execution over the past several years, demonstrating strong and profitable revenue growth, diversification of customers and applications, and exposure to a broadening range of AI infrastructure applications and use cases. We believe this momentum is in its early stages as we fully embrace an industry transition to AI Infrastructure 2.0, which will expand our opportunity across even more customers and platforms. Over the next several years, we look to build upon this newly established baseline of business as we partner tightly with our customers and the broader ecosystem to deliver and deploy best-in-class rack-scale solutions to fuel the next wave of AI evolution. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q2 financial results and our Q3 outlook.

Michael T. Tate: Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q2 financial results and Q3 guidance will be on a non-GAAP basis. The primary difference in the Astera Labs’ non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today’s press release available on the Investor Relations section of our website for more details on both our GAAP and non-GAAP Q3 financial outlook as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q2 of 2025, Astera Labs delivered quarterly revenue of $191.9 million, which was up 20% versus the previous quarter and 150% higher than the revenue in Q2 of 2024. During the quarter, we enjoyed revenue growth from both our Aries and Taurus product lines supporting both scale-up and scale-out PCIe and Ethernet connectivity for AI rack-level configurations.

Scorpio Smart Fabric Switches transitioned to volume production in Q2 with our P-Series product line for PCIe 6 scale-out applications deployed within leading GPU customized rack-scale systems. Leo CXL Controllers shipped in preproduction volumes as customers continue to work towards qualifying platforms ahead of volume deployment. Q2 non-GAAP gross margin was 76% and was up 110 basis points from March quarter levels, with product mix remaining largely constant across higher volumes. Non-GAAP operating expenses for Q2 of $70.7 million were up roughly $5 million from the previous quarter as we continue to scale our R&D organization to expand and broaden our long-term market opportunity. Within Q2 non-GAAP operating expenses, R&D expenses were $48.9 million.

Sales and marketing expenses were $9.4 million, and general and administrative expenses were $12.4 million. Non-GAAP operating margin for Q2 was 39.2%, up 550 basis points from the previous quarter. Interest income in Q2 was $10.9 million. Our non-GAAP tax rate for Q2 was 9.4%. Non-GAAP fully diluted share count for Q2 was 178.1 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.44. Cash flow from operating activities for Q2 was $135.4 million, and we ended the quarter with cash, cash equivalents and marketable securities of $1.07 billion. Now turning to our guidance for Q3 of fiscal 2025. We expect Q3 revenues to increase to a range of $203 million to $210 million, up roughly 6% to 9% from second quarter levels.

For Q3, we expect Aries, Taurus and Scorpio to drive growth in the quarter. For Aries, we are seeing growth from a number of end customer platforms where we support scale-up and scale-out connectivity. Taurus growth is driven by new designs going into volume production for scale-out connectivity. Scorpio will primarily be driven by the continued deployment of our P-Series solutions for scale-out applications on third-party GPU platforms. We expect non-GAAP gross margins to be approximately 75% with the mix between our silicon and hardware module businesses remaining largely consistent with Q2. We expect third quarter non-GAAP operating expenses to be in the range of approximately $76 million to $80 million. Operating expense growth in Q3 is driven by the continued investment in our research and development function as we look to expand our product portfolio and grow our addressable market opportunity.

Interest income is expected to be $10 million. Our non-GAAP tax rate should be approximately 20%. The increase in our non-GAAP Q3 tax rate reflects the impact of the recent change in the tax law passed in July, with an expectation that our full year non-GAAP tax rate for 2025 will now be approximately 15% following this tax law change. Our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share in a range of $0.38 to $0.39. This concludes our prepared remarks. And once again, we appreciate everyone joining the call, and now we will open the line for questions. Operator?

Operator: [Operator Instructions] Your first question comes from the line of Harlan Sur with JPMorgan.

Q&A Session

Harlan L. Sur: Congratulations on the very strong results. Within your Scorpio family of switching products, good to see the strong ramp of Scorpio P this past quarter. Within the same portfolio, it looks like the team is qualified and set to ramp the Scorpio X-Series for XPU to XPU ASIC connectivity. You talked about 10 platform wins. What’s been the biggest differentiator? Is it performance, i.e. latency, throughput? Is it fully optimized with your signal conditioning products? Is that a consideration? And how much does the familiarity with COSMOS software play a role? And you guys have always called this an anchor product, which pulls in more of your solutions alongside your COSMOS software suite. Is this how it’s playing out with your ASIC XPU customers? You lead with Scorpio X and you’ve been successful at driving higher attach with your other products?

Jitendra Mohan: Harlan, thank you so much for the question. And you’re absolutely right. The success that we have enjoyed so far is rooted in, primarily, I would say, 3 things. First is just our closeness to our customers. So over this time period, we have earned the kind of a trusted partner status with our customers. So we get a ringside view of what their plans are, what it is that they’re planning to deploy and when. The second part of that is really our execution track record. We have shown time and again that our team executes with purpose, and we deliver to our promises. So with both of these, we get the first sort of call for developing new products for going into new product platforms at our customers. And that’s where the COSMOS software suite comes in.

COSMOS, for the audience here, is our software suite that unites all of our products together. And this is how we allow our products to be customized, optimized for unique applications as well as collect a lot of very rich diagnostics information that allows our customers to really see how their connectivity infrastructure is operating. So with the use of COSMOS, we can customize our products to deliver higher performance which translates to sometimes lower latency, sometimes higher throughput, sometimes different diagnostic features for our customers. And as a result of that, we’ve been able to use Scorpio as an anchor socket in these applications because this is something that gets designed in upfront. And then we figure out signal conditioning opportunities with our Aries and Taurus products in these platforms.

And the Scorpio X in particular, because the customers use kind of derivatives of PCI Express, we have been able to customize Scorpio X to deliver this lower latency and higher throughput.

Harlan L. Sur: Very insightful. And for my second question, just over the past 90 days, we’ve put a lot of focus on announcements on scale-up networking connectivity. On UALink, as you mentioned, right, the team did the Wall Street [ teaching ] back in May, obviously, the team is a key member of the UALink consortium. AMD recently fully endorsed UALink as its scale-up networking architecture of choice for all future generations of its rack-scale solutions, and we know of at least 1 other ASIC XPU vendor that’s going to be moving to UALink as well. Beyond this, like what’s been the reception and interest level on UALink and can or will the Astera team speed up its time to market on UALink-based products? Or is the timing still to sample products next year with volume deployment in calendar ’27?

Sanjay Gajendra: Yes. Harlan, this is Sanjay here. Thank you for the question. To your point, absolutely, we see tremendous amount of interest with UALink. There are, obviously, the technical advantages that you get with low latency and familiarity with how the transport layer works based on its roots, which is PCIe. Also, the fact that it supports memory semantics natively is also a strong reason why customers are liking that interface. The big upside, of course, is the physical layer, which now has been upgraded to support up to 200 gig on the Ethernet side. So there are several technical reasons that are going in favor of UALink. So customers that were using PCIe or PCIe-like fabrics, see this as a natural progression in order to support the AI infrastructure needs going forward.

Now what we’ll also note is that it’s not just about technical stuff, it’s about ecosystem and the broad availability of components that are required for scale up, and that’s, again, where UALink shines in the sense that it’s truly an open standard. It’s truly a multi-vendor supply chain. And those are additional reasons why customers tend to gravitate towards UALink. And we do have, like you noted, several customers, we are counting 10-plus right now that are looking at leveraging some of these open standards, whether it’s PCIe in the short term, combination of PCIe and UALink in the midterm and transitioning perhaps to a broader UALink deployment in 2027 and later. So overall, I think the momentum is shifting positively, and we are excited to be in the middle of it and driving the adoption of open and scalable supply chain in the market.

Operator: Your next question comes from the line of Ross Seymore with Deutsche Bank.

Ross Clark Seymore: A couple of questions and congrats on the strong results and guidance. Maybe to no surprise, I wanted to stay on the Scorpio family. The diversity of engagements is also interesting to me. And as far as you’re talking about it as an anchor tenant, I just wondered if you could go into a little bit of the profile of the types of customers, how it’s changed from your initial customer. And then perhaps how much incremental business and interest those customers are showing in other products as they realize as well, it’s an anchor tenant? Sort of how are you leveraging that Scorpio relationship to bring in more business? Any sort of illustrations of that would be helpful.

Sanjay Gajendra: Yes, absolutely. Again, thank you for that question. So just to kind of remind, we have 2 product series within Scorpio. One is the Scorpio P-Series that just started ramping to production to support some of the third-party GPUs that are ramping. And the P-Series is designed for scale-out connectivity, very broad use case from interconnecting GPUs to custom ASICs, to storage and things like that. So Scorpio P-Series, we have a broad base of customers that are leveraging the solution, designing in, going to production, deep in technical evaluations and so on. So that would be a broad play for us with PCIe-based scale-out interconnect and storage type of interconnect. Then there is the Scorpio X-Series, which is designed for scale-up networking to interconnect the GPUs and accelerators.

This, we see, like you noted, as an anchor socket because that is truly the socket that holds all the GPUs together. And today, like you noted, we have 10-plus customers that we are engaging when it comes to scale-up networking using Scorpio X-Series. And this is also pulling in the rest of our products, both because of the advantages that COSMOS brings to the table by unifying all of our products, plus at the same time, the fact that someone is using a fabric solution and they would need a gearbox or a retimer or other controller type of products. Those all play into having that first call with the customer, or having early access at an architectural stage, which translates into an opportunity for us where we can not only offer the fabric device, but also the surrounding components that come along with it as the connectivity platform.

Ross Clark Seymore: And I guess as my second question, 1 for Mike, and I think the first 1 is going to be pretty quick, so I might have a clarification in there as well. The gross margin has beaten and you’re staying solidly above your 70% long-term target. So I guess the question is, is there anything that slows down your trajectory to the 70%? And the clarification would be the tax rate at 20%. Is that this year but not next year? Which is the number we should think of going forward, the 15%, the 20% or the 10% you used to be?

Michael T. Tate: Okay. Thanks, Ross. I’ll start with the taxes. The 20% is specifically to Q3 because that was the quarter that the tax law changed. So we had to catch up for the previous 2 quarters. For Q4, you should expect it to normalize around 15%. And then longer term, with this new tax law in place, it’s probably around the 13% range. For the gross margins, when we have an inflection up in revenues like we did, you do have the benefit of higher revenues over fixed operating costs. So that was the incremental benefit for us. We do expect to see some pretty good growth from our hardware modules going into the back half of this year, into 2026. So as we make it through 2026, we still encourage people to think of our long-term target model of 70% as something that we’ll be delivering.

Operator: Your next question comes from the line of Blayne Curtis with Jefferies.

Blayne Peter Curtis: I’ll echo the congrats on the results. I guess I want to ask on the Scorpio products. I mean, I think 10% in the June quarter was ahead of what many people were looking at. So maybe you can just help us with the shape of that product. I mean you still said 10% for the year, I’m assuming it’s greater than 10%, but I’m sure it’s much greater than that. I mean, can you help us a little bit with, as you look to September, you have $15 million of growth, how to think about Aries versus Scorpio, and any kind of thoughts on how to guide us to model the Scorpio product line this year?

Michael T. Tate: Yes. This is Mike. Yes. For Q2, the Scorpio P launched into volume production, a little ahead of what we anticipated. So it provided the upside in the quarter. From this base level, it continues to grow in Q3 and Q4. But we have more P-Series designs kind of coming into play to layer on top of, but that’s more in 2026. For the X-Series, we do have preproduction volumes here, but really, that starts to go into high volume production during the course of 2026 and layering even more growth. Ultimately, what we called out is the X-Series is going to grow to be bigger than P-Series. So it’s a very exciting opportunity, just given the dollar value of the design opportunities are much higher than the P-Series, just given the use cases of the scale-up connectivity, so both will grow.

We did reiterate that it will exceed 10% of our revenues for the year, which is quite an accomplishment for the first year out of a product line. It is poised to be our largest product line of the company as we make it through the following 2 years.

Blayne Peter Curtis: And I just want to ask, I think, in terms of the scale-up opportunity, clearly, you were clear that X will be more material next year, kind of preproduction this year. Just wanted to ask this because there were a lot of rumors out there in terms of are there any opportunities for scale up with Scorpio P? Or maybe in short, are you going to be shipping anything material this year for scale up versus the scale out you already talked about?

Michael T. Tate: The scale up this year is predominantly preproduction volumes. And these systems are pretty complex that they’re shipping into. So we like to try to be conservative on how we telegraph those going forward. But the volume opportunities of scale-up connectivity for switching is a much bigger dollar opportunity for us as we look forward. But those designs really will start to enter into full volume production during the course of 2026. So not a driver in the next couple of quarters.

Operator: Your next question comes from the line of Joe Moore with Morgan Stanley.

Joseph Lawrence Moore: I wonder if you could talk about UALink versus other architectures, and I guess your involvement with NVLink Fusion. Are you agnostic to those various solutions? Are you more favorable towards open source or proprietary? Just kind of walk us through the potential outcomes for you as these battles are being fought.

Jitendra Mohan: Joseph, this is Jitendra. Happy to do that. So let’s start with NVLink just because NVLink is perhaps the most widely deployed scale-up architecture that’s available today. And we are very happy to be part of the NVLink Fusion ecosystem. So if you look at the history of NVLink, it really is a fabric that is built ground up for AI. It uses memory semantics to make sure that all of the GPUs can be addressed as 1 large GPU. It has low latencies, it does add Ethernet-based SerDes to get the higher speeds. And of course, NVIDIA has popularized that with their NVL72 deployment. If you go from there to, let’s say, UALink, you find many similarities. UALink also has this genesis in PCI Express. It is a memory semantics-based protocol.

It uses lossless networking, several other technical advancements that are suitable for AI workloads. And then the whole protocol is really custom-built for optimizing the throughput for AI type of traffic. So I think it does offer several advantages over other more proprietary protocols, some of which happen to be Ethernet-based and some are completely proprietary as well. The other advantage of UALink is it’s an open ecosystem. We know that many hyperscalers are among the promoter Board members, as well as many vendors, frankly, who are working to deploy solutions for UALink. And as a result, we expect to see a very vibrant ecosystem of vendors and customers with UALink. And I think that will be a defining characteristic and why we believe UALink will be adopted widely over time.

And as promoter members of the UALink Consortium ourselves, we are very happy to participate in this standard, and not only participate, but also come up with a full portfolio of solutions that includes switches, retimers, cables and what have you, to enable our customers to build a full UALink solution. To answer the other question that you asked: with UALink, we have a lot of dollar content opportunity. But at the same time, we will continue to service our customers who are today using PCI Express, and we have a huge opportunity there, as well as Ethernet for scale-out applications, for cabling applications and over time also with NVLink Fusion.

Joseph Lawrence Moore: That’s very helpful. And then I get the question a lot. If you guys can size your exposure to merchant GPU platforms versus ASIC, I know there’s probably a little bit higher content opportunity for you on the ASIC side. But any sense for what that split looks like and where that may be going over time?

Jitendra Mohan: Yes, Joseph, we do address both of these opportunities. Our opportunity on the merchant GPU platforms comes when our customers customize their rack designs. This is the opportunity for both our Aries and Scorpio P-Series that Sanjay and Mike touched upon earlier; we saw a lot of ramp happening there this last quarter. In addition to that, we are also shipping Taurus Ethernet cables for scale-out applications. But when you go to scale-up, that becomes a very big opportunity for us just because of the density of interconnect when you’re trying to connect all of these GPUs together. And when that network happens to be based on PCI Express, we have an even larger attach rate, which drives our dollar content on these XPU platforms into several hundreds of dollars per XPU. So over time, we do see the Scorpio X family as our largest revenue contributor, largely deployed on XPUs.

Operator: Your next question comes from the line of Tom O’Malley with Barclays.

Thomas James O’Malley: You mentioned that you were engaged with 10-plus customers on the X switch side. Could you just give us a picture of how many of those are engaged on PCIe today and how many are engaged on the UALink side? And if you’re engaged with one on PCIe, are you often engaged with one on UALink as well? Can you maybe talk about that split right now?

Sanjay Gajendra: Yes. So this is Sanjay here. What we can note is that the 10-plus opportunities that we highlighted are both hyperscalers as well as AI platform providers. And these are all today based on PCIe, so these are nearer-term opportunities that we’re tracking. Having noted that, like Jitendra highlighted, UALink is an open standard that contemplates the requirements of scale-up networking in terms of speed and other capabilities going forward. So many of these customers that we’re engaging with today on PCIe are also looking at UALink. Some of them might continue to stay with PCIe, and some of them will transition to UALink in the midterm. But longer term, as the UALink ecosystem develops and matures, we do expect that UALink will be a solution that both the merchant GPU and custom accelerator providers would standardize on.

Thomas James O’Malley: Helpful. And then as my follow-up, I’m curious, there have obviously been a lot of news articles intra-quarter about switching attach rates with XPUs and general-purpose silicon. So if you look at the large guy in the market, in a 72 array, there are 9 switch trays and a couple of switches per tray. So that’s like a 25% switching attach rate to a single XPU or general-purpose piece of silicon. In that instance, when you’re ramping an XPU with a custom silicon customer, can you maybe walk us through, specifically with the X switch, whether that attach rate is higher or lower, and what the reason for that is? That would be super helpful.

Sanjay Gajendra: Yes. So obviously, we don’t comment on individual platforms and customer deployment scenarios. But in general, the Scorpio X-Series switches interconnect GPUs, and depending on the platform, there are different configurations for the number of GPUs and ports. The product portfolio that we are developing within Astera is designed in a way that it addresses a variety of different use cases, and the attach rate varies. So that is probably a broad answer to your question. But in general, we have the engagements, and we have the design wins. Now it’s a matter of all of these platforms getting qualified and ramping to production. In due course, as they get into production, we’ll be able to add more color on how that’s shaping our revenue and our growth.
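As a rough illustration, the attach-rate arithmetic cited in the question above can be sanity-checked as follows. Note that the tray and switch counts are the questioner’s own figures for an NVL72-style rack, not numbers confirmed by management:

```python
# Illustrative attach-rate arithmetic for an NVL72-style rack.
# Figures below (9 switch trays, 2 switches per tray, 72 GPUs)
# come from the analyst's question, not from Astera.
gpus = 72
switch_trays = 9
switches_per_tray = 2

total_switches = switch_trays * switches_per_tray  # 18 switches
attach_rate = total_switches / gpus                # 0.25, i.e. the "25% attach rate"

print(f"{total_switches} switches / {gpus} GPUs = {attach_rate:.0%} attach rate")
```

The same ratio is what varies by platform in the answer above: more ports per GPU or more switch planes per rack pushes the attach rate higher.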

Operator: Your next question comes from the line of Tore Svanberg with Stifel Financial Corp.

Tore Egil Svanberg: And let me add my congratulations as well. I guess my first question is on this new revenue base you talked about. I mean, you now have three product lines in production, which obviously doubled your revenue base, and now you’re talking about Infrastructure 2.0 and the Scorpio series, or X-Series, really creating a new revenue level. So should we infer from that that you will double the run rate again as the X-Series starts to ramp? Is that the way we should look at it?

Sanjay Gajendra: Yes, great question. But I always like to make this correction: it’s not retimer, it’s retimer, just to keep our engineering folks happy. But you make a great point, and that, we believe, is exactly the beauty of our business model, where we have approached the business as a series of growth steps. We started the journey being on all the NVIDIA-based platforms with our PCIe retimers, which got the company off the ground from a revenue growth standpoint. The second step was to expand our PCIe retimer and Ethernet retimer business to go after custom ASICs; that transition happened in Q3 of last year. Now we are at the third step in that growth journey, where we have ramped up our Scorpio P-Series PCIe-based switch products, along with our Aries 6 retimers.

So that’s going on all the third-party NVIDIA-based GPU platforms that are ramping. The fourth step that we are highlighting as part of the call today is the Scorpio X-Series, which is designed for scale-up networking; that transition is currently underway in the sense that we are still in preproduction. And like we highlighted, throughout 2026, we expect that wave to transition to high-volume production, providing us a new baseline for revenue. These are, of course, higher-value sockets, meaning the dollar content with the Scorpio X-Series switches is significantly higher than what we have done so far. So you could expect that to play into the overall revenue projections that we would have as we get toward 2026. And the fifth step that we called out as part of the communication is UALink, which is going to be a growth story in 2027; that is a greenfield application for us, with a much broader deployment of scale-up networking along with a variety of other products that we intend to develop for UALink.

And that is going to be the fifth step that we are executing towards.

Tore Egil Svanberg: Yes. Thank you for walking through all that, Sanjay. I really appreciate it. And as my follow-up, and related to UALink, it does feel like the standard is sort of regaining a lot of traction. I’m just curious why that is. Is it because of AI moving more into inferencing? Is it because of the 128-gig version? It just feels like there’s been a little bit of a change over the last few months. So any color you can add on that would be great.

Sanjay Gajendra: If you don’t mind, could you repeat your question? We didn’t quite get the question that you asked.

Tore Egil Svanberg: Yes, I was asking about UALink sort of regaining a lot of traction, at least that’s the way it feels to us and I’m just wondering why that is. Is it because of AI moving more towards inferencing? Is it because of the 128-gig version? Or is there anything else that’s going on there?

Jitendra Mohan: Thank you for clarifying that, Tore. Yes, UALink is actually gaining a lot of traction. Just as a reminder, the UALink specification was only introduced toward the end of Q1 of this year. Since then, it has gained a tremendous amount of traction. AMD talked about it very recently in Taipei as part of the OCP Summit. And several of the hyperscalers are very closely engaged in figuring out what their road map intercepts would look like for UALink, for all the reasons that we talked about earlier in the call. I will also say that the majority of these engagements are at a 200-gigabit-per-second-per-lane rate, and not at 128.

Operator: Your next question comes from the line of Sebastien Naji with William Blair.

Sebastien Cyrus Naji: A lot of the focus is rightfully on the AI tailwinds. But could you maybe comment on what you’re seeing in non-AI adoption, in particular what you might be seeing on PCIe Gen 5 adoption in general-purpose servers? And could that be a meaningful contributor to Aries growth going forward?

Sanjay Gajendra: Yes, absolutely. Thanks for highlighting that. General compute is easy to overlook nowadays. But to your point, that’s a transition that we’re tracking. AMD released their Venice CPU, which supports PCIe Gen 6 as well. So we do see that playing out in terms of design opportunities and a new set of production ramps happening for our Aries product line, both on the retimer-class devices as well as other sockets that we develop, whether it is Taurus modules or gearbox devices. In general, those are additional opportunities for us to grow our business, and we are tracking them as part of our overall outlook. And let’s not forget the Leo products, which are our CXL controllers; these are designed for memory expansion for CPUs in particular.

So finally, we have CPUs that support CXL technology and are ready for deployment. So we are excited about the opportunities that we’re tracking across all three product lines: Aries, Taurus and Leo, going into the general compute use cases.

Sebastien Cyrus Naji: That’s really helpful. And if I could, a second question: I want to ask about the use of Ethernet in scale-up going forward. You have Broadcom positioning itself to address both the scale-out and scale-up parts of the network with its latest generation of Ethernet chips. And I’m wondering, how do you see scale-up Ethernet potentially eating into the PCIe part of the market where Astera has such a strong position?

Jitendra Mohan: This is Jitendra. Maybe I’ll take this question. So if you look at our customers today, they are deploying the scale-up network with the technologies that are available to them: NVLink for NVIDIA designs, of course; PCI Express for several of the customers that we touched upon earlier in the call; and some customers are also using Ethernet. Largely, this has to do with the availability of the switching infrastructure. The two protocols, PCI Express as well as NVLink, are basically custom-built for memory access, for memory semantics. So you can use them to make the multiple GPUs in a cluster look like one large GPU. Ethernet is a fantastic protocol, but it was never designed for scale-up.

It was designed for large-scale Internet traffic, and it is very, very good at that. However, because of the availability of the switches, some customers have tried to run RDMA and other proprietary protocols over Ethernet to do scale-up. In that scenario, it does suffer from higher latencies and lower throughput. Now, I think what you’re referring to is scale-up Ethernet, where Broadcom is trying to borrow several of the same features that are present in PCI Express and UALink, such as memory semantics, lossless networking, et cetera, and put them on top of Ethernet. At that point, it looks quite different from Ethernet. And so the switching infrastructure as well as the SUE infrastructure has to evolve for somebody to use that.

But I believe that the real differentiation between the two has to do with the openness of the ecosystem. SUE is still dominated by Broadcom, whereas if you look at UALink, it’s a very open, very vibrant ecosystem, with multiple vendors working on products and multiple hyperscalers looking to really take their destiny into their own hands by relying on UALink over time.

Operator: Your next question comes from the line of Quinn Bolton with Needham & Company.

Nathaniel Quinn Bolton: Jitendra, I just wanted to follow up on that question about SUE. Broadcom introduced their Tomahawk Ultra switch recently with a 250-nanosecond latency, which seems like it significantly reduces the latency problems that Ethernet has traditionally had. Can you give us some sense of how that 250-nanosecond latency for SUE compares to what you’re able to achieve on PCI Express and UALink? And then I’ve got a follow-up.

Jitendra Mohan: Yes. So we are able to achieve even lower latencies with some of the products that we have and other products that we have in development. But again, it comes back to designing something that is purpose-built for AI. It is not just about point-to-point latency. If you look at the end-to-end latency in the system, we believe that UALink, and indeed PCI Express today, is going to be lower latency. The second point is utilization of bandwidth. The current offering from Broadcom uses 100 gigabits per second per lane, but over time, every standard will migrate toward 200 gigabits per second per lane; that applies to both UALink and Ethernet, and NVLink is already there today. However, how efficiently you use that raw data rate varies from protocol to protocol.

UALink has been designed to be extremely efficient with that and to achieve very high utilization of the available data pipe. So on a technical basis, I do think that UALink will be superior to other protocols. But again, not to mention this yet again, the big advantage of UALink is its openness: it’s an open standard, so our customers, the hyperscalers, can build their infrastructure once and then ideally plug in whichever GPU or XPU they want that supports an open, interoperable ecosystem like UALink.
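The bandwidth-utilization point above can be sketched with simple arithmetic. The efficiency figures below are hypothetical placeholders chosen for illustration, not published numbers for any of the protocols discussed:

```python
# Sketch of why protocol efficiency matters as much as raw lane rate.
# The efficiency values are hypothetical, for illustration only.
def effective_bandwidth_gbps(lane_rate_gbps: float, lanes: int, efficiency: float) -> float:
    """Usable throughput after protocol overhead (encoding, headers, flow control)."""
    return lane_rate_gbps * lanes * efficiency

# Two hypothetical 4-lane links, both at 200G per lane,
# differing only in how efficiently the protocol uses the pipe:
raw = 200 * 4                                      # 800 Gb/s raw
high_eff = effective_bandwidth_gbps(200, 4, 0.95)  # efficient protocol
low_eff = effective_bandwidth_gbps(200, 4, 0.80)   # heavier overhead

print(raw, high_eff, low_eff)
```

The point being made on the call is that two links with identical lane rates can deliver meaningfully different usable throughput once protocol overhead is accounted for.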

Nathaniel Quinn Bolton: Got it. My follow-up question: I think in the script, you talked about an expansion in the opportunities with Taurus, and I wondered if you could expand on that. Are you seeing adoption of higher per-lane speeds on the Taurus product and adoption of 800-gig cables? Are you seeing adoption beyond your lead customer in Taurus? Just any additional color you could provide on Taurus would be helpful.

Jitendra Mohan: Yes. Like you correctly said, and as we have shared in the past, we expect broader adoption of AECs when Ethernet rates transition to 800-gig, and that’s starting to happen. We expect most of the deployments to be ramping up in volume in 2026. To that point, we are tracking and engaged with the customers that are deploying it. One point to keep in mind is that our business model for AECs is designed for scale. In other words, we develop cable modules that fit into the cable assemblies of existing cable vendors, and there are a variety of them that serve the data center market. So our business model is to go after the ramp and not necessarily the initial low volumes that might be deployed.

So to that point, we’re tracking and engaged with the right customers. And as the volume starts ramping, we do expect significant diversification and growth in our Taurus module business, but most of this we are modeling for 2026 versus this year.

Nathaniel Quinn Bolton: Got it. So it sounds like the volume this year continues to be more 50-gig per lane, and then you see that diversification in 2026 as 100-gig per lane sees wider adoption?

Jitendra Mohan: Exactly. And our business model, like you noted, is designed for that multi-vendor cable supply chain. We do believe that’s the right strategy, and that’s what hyperscalers look for. For the initial POCs and limited-volume deployments, they might go with one vendor, but very quickly, each one of these hyperscalers wants the diversity as well as the supply chain capacity to drive volume, and that has essentially been our focus when it comes to our business model on the AEC side.

Operator: Your next question comes from the line of Papa Sylla with Citi.

Papa Talla Sylla: Congrats on the great results. I guess my first question follows your recent announcement of a partnership with a high-performance ASIC leader. Can you touch a little more on the extent of that collaboration? Is it more at a chip level, in terms of an I/O chip type of partnership? Or is it more at a device level, with your Aries and Scorpio portfolio?

Jitendra Mohan: Yes. I’ll answer that question by sharing the vision and goal that we are executing toward. Our vision is to provide a purpose-built connectivity platform for AI infrastructure that includes silicon products, hardware products and software products. Of course, the focus for us has been on the connectivity side of the AI rack. When you think of an AI rack, there are other components that go in, primarily the compute nodes, whether based on third-party merchant GPUs and CPUs or on the custom ASICs that Alchip and others develop for hyperscalers. What we are strong believers in is that the AI rack, the way it’s defined today, is not scalable, in the sense that it’s more proprietary. As the industry transitions to what we are calling AI Infrastructure 2.0, the entire AI rack has to be based on an open, scalable, multi-vendor type of approach.

To that point, what we’re doing is not only developing the connectivity products addressing the various aspects of an AI rack, whether it’s scale-up, scale-out or other connectivity. At the same time, we are partnering with third-party GPU vendors (we talked about the announcement we did with AMD), and we are also engaging with custom ASIC providers, including Alchip, so that at the end of the day, the hyperscalers, who are our common customers, get a rack that is well tested and interoperable, where the software is all consistent and so on, to ensure that it delivers the highest level of performance. That is the scope of the collaboration we are having with Alchip and other providers. And over time, you will see us announce more partnerships as we seek to establish the open rack that we believe is critical for deploying AI at scale.

Papa Talla Sylla: Got it. No, that’s very helpful. And if I can squeeze in just one more, and this might be more for Mike. On gross margin, it seems like over the last two quarters, particularly since the Scorpio announcement, gross margin keeps going up. But in the September quarter, you are guiding to 75%, which, at least at the midpoint, seems to be down a little bit. I’m just curious for any additional color on that, because by all indications, Scorpio will continue to go up, and the mix trend we are seeing currently seems to be moving in the same direction in September as well. So we were just curious about that guide down in gross margin for the September quarter.

Michael T. Tate: Yes. We do see growth from Scorpio, but we also see good, solid growth in Taurus during the quarter. Taurus is a module, so it’s hardware, and it carries a slightly lower gross margin than standalone silicon; you’ll see that dynamic play out to a small extent in the quarter. And as we move into 2026, we still want people thinking of us as moving toward our longer-term model of 70%.

Operator: Your next question comes from the line of Suji Desilva with ROTH Capital.

Suji Desilva: Hi, Jitendra, Sanjay, Mike, congrats on the strong quarter here. Maybe you could give us a framework for the retimer content for a link in scale-out versus scale-up. Maybe it’s similar, but maybe there are some differences; I’d be curious to understand what the unit opportunities might be and how they might differ.

Jitendra Mohan: Yes. When you look at retimers, the contrast with switches is the following: switches get designed in right at the inception, at the architecture stage. Customers will think about how they are going to connect either their GPU to other GPUs in scale-up, or the GPU to NICs or storage as part of the scale-out system. Once the switch is designed in and the rack starts to get put together, then we look at the question of reach. Sometimes you find that you need retimers in a link; other times, you don’t. Sometimes the retimers go on the board in a chip-down format; at other times, they are better suited to be put in cables in an AEC format.

The good news with Astera is that we provide a full portfolio of devices for our customers to choose from: switches, gearboxes, chip-down retimers and retimers in active electrical cables. So they can look to one company, one Astera, for all their solutions at the rack level.

Suji Desilva: Okay. And just to be clear, neither one will necessarily be higher than the other.

Jitendra Mohan: Can you repeat that?

Suji Desilva: Neither one will necessarily be higher than the other, scale-up versus scale-out.

Jitendra Mohan: Yes, it really depends on the system architecture. In scale-up, there are many, many more links than in scale-out. However, it is prohibitive from a power standpoint to put retimers in all the links. So typically, the shorter links, where you’re able to go from the switch to the GPU over a shorter distance, will not use retimers, but the longer links potentially will. Sometimes we have scale-up domains that exceed one rack, so you might have two racks side by side that are part of a scale-up domain, in which case you end up with a cable solution, and you need retimers in those scale-up links.
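The reach-based placement logic described above can be sketched as a toy decision rule. All decibel figures below are hypothetical placeholders for illustration; real channel loss budgets depend on the signaling standard, generation and package:

```python
# Toy sketch of the "retimer only where reach demands it" logic.
# The dB numbers are hypothetical, not from any specification.
def needs_retimer(channel_loss_db: float, budget_db: float = 36.0) -> bool:
    """A link whose end-to-end channel loss exceeds the budget needs a retimer."""
    return channel_loss_db > budget_db

short_link_db = 20.0  # e.g., switch to GPU on the same board
long_link_db = 45.0   # e.g., cabled link spanning two racks

print(needs_retimer(short_link_db))  # short reach: no retimer needed
print(needs_retimer(long_link_db))   # long reach: retimer required
```

This mirrors the trade-off in the answer: adding a retimer burns power, so it is reserved for the links whose length (and hence loss) actually requires one.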

Suji Desilva: Helpful. And then my follow-up, on Scorpio X: you talked about 10 customer engagements. I’m wondering if that implies multiple programs per customer, if they’re going to think about using you as standard in their platforms? Any color on how those are shaping up, programs versus customers, would be helpful.

Sanjay Gajendra: Yes. The 10-plus we noted are unique customers. Within each customer, there are multiple opportunities that we’re tracking. Some of them are design wins, some are ramping to production, some are design-ins going through qualification, and some are early engagements. So in general, we are very pleased with the amount of traction that we’re seeing for our Scorpio family.

Operator: There are no further questions at this time. I will turn the call back over to Leslie Green for closing remarks.

Leslie Green: Thank you, everyone, for your participation today and questions, and please refer to our Investor Relations website for information regarding upcoming financial conferences and events. Thanks so much.

Operator: This concludes today’s conference call. You may now disconnect.
