Astera Labs, Inc. Common Stock (NASDAQ:ALAB) Q1 2025 Earnings Call Transcript May 6, 2025
Astera Labs, Inc. (NASDAQ:ALAB) beat earnings expectations. Reported EPS was $0.33; expectations were $0.28.
Operator: Good afternoon. My name is Amy, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs First Quarter 2025 Earnings Conference Call. [Operator Instructions] It is now my pleasure to turn the call over to Leslie Green, Investor Relations with Astera Labs. Leslie, you may begin.
Leslie Green: Thank you, Amy. Good afternoon, everyone, and welcome to the Astera Labs First Quarter 2025 Earnings Conference Call. Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President and Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, the expected future financial results, strategies and plans, future operations and the markets in which we operate. These forward-looking statements reflect management’s current beliefs, expectations and assumptions about future events which are inherently subject to risks and uncertainties that are discussed in detail in today’s earnings release and the periodic reports and filings we file from time to time with the SEC, including our risks set forth in our most recent annual report on Form 10-K and our upcoming filing on Form 10-Q.
It is not possible for the company’s management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements. In light of these risks, uncertainties and assumptions, the results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call, except as required by law.
Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be an important measure of the company’s performance. These non-GAAP financial measures are provided in addition to and not as a substitute for financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?
Jitendra Mohan: Thank you, Leslie. Good afternoon, everyone, and thanks for joining our first quarter conference call for fiscal year 2025. Today, I’ll provide an overview of our Q1 results, followed by a high-level discussion around our long-term growth strategy. I will then turn the call over to Sanjay to walk through the key AI and cloud infrastructure applications that are driving our market opportunity. Finally, Mike will give an overview of our Q1 2025 financial results and provide details regarding our financial guidance for Q2. Astera Labs had a strong start to 2025, with Q1 results coming in above guidance. Quarterly revenue of $159.4 million was up 13% from the prior quarter and up 144% versus Q1 of last year. Our Aries product family continues to see strong demand and is diversifying across both GPU and custom ASIC-based systems for a variety of applications, including scale-up and scale-out connectivity.
Our Taurus product family also demonstrated strong growth driven by continued deployment on AI and general purpose systems at our leading hyperscaler customer. We will continue to ship preproduction volumes as customers progress through the qualification of their next-generation systems, leveraging new CXL-capable data center server CPUs. Finally, we expect our Scorpio PCIe switches and Aries 6 retimers to shift from preproduction builds to volume production in the late Q2 time frame to support the ramp of customized GPU-based rack-scale AI system designs. On the organizational front, we are very excited to announce the appointment of Dr. Craig Barratt as an addition to our Board of Directors. Craig brings a wealth of experience across execution and innovation that will help Astera Labs expand our connectivity leadership position in cloud and AI infrastructure.
Our vision is to provide a broad portfolio of connectivity solutions for the entire AI rack through purpose-built silicon, hardware and software to support computing platforms based on both custom ASICs and merchant GPUs. Significant progress towards this vision began in the second half of 2024 as the company transitioned from primarily supplying PCIe retimers for NVIDIA AI servers to becoming an integral supplier for AI rack-level connectivity topologies. We are in an ideal position to enable these new rack-level connections with our broadening portfolio of Scorpio fabric switches, Ethernet and PCIe active cable modules, retimers, CXL memory expansion, optical interconnects and other products under development. Our revenue is diversifying across multiple AI platforms based on both custom ASICs and merchant GPUs across different product families, thereby enhancing our revenue profile and delivering increasing value across the AI rack.
We are well-positioned to continue to deliver above-market growth over the long term, and we will continue to increase our investments in R&D to support our vision to own the connectivity infrastructure within the AI rack. On the product front, we recently expanded our market-leading PCIe 6 connectivity portfolio to now also include gearboxes and optical connectivity technology to complement our existing fabric switches, retimers and smart cable modules. The new Aries 6 PCIe Smart Gearbox is purpose built to bridge the speed gap between the latest generation PCIe 6 devices and the existing PCIe 5 ecosystem. Multiple hyperscalers are designing our gearbox into both AI and general purpose compute platforms. We also demonstrated our PCIe 6 over optics technology to enable AI accelerator scale-up clustering across racks.
This holistic portfolio approach is essential given the increasing complexity of PCIe 6 topologies. Point solutions are no longer sufficient. Our broad first-to-market PCIe 6 connectivity portfolio once again puts us in a leadership position to deliver the most reliable and widely interoperable solutions into the ecosystem. Increasing our market opportunity also remains a crucial focus for Astera Labs. We believe we are in a great position to address a large emerging opportunity associated with scale-up connectivity over the open industry standard Ultra Accelerator Link, or UALink. Last month, the UALink 1.0 specification was released, enabling 200-gig-per-lane connections and supporting up to 1,024 accelerators. UALink combines the best of two worlds, natively offering the memory semantics of PCIe and the fast speed of Ethernet, but without the software complexity and performance limitations of Ethernet.
The adoption of UALink will enable the industry to move beyond proprietary solutions towards a scalable and interoperable AI ecosystem. With broad industry support and adoption, the proliferation of UALink can represent a multibillion-dollar additional market opportunity for Astera Labs by 2029. Beyond UALink, we look to next-generation standards, including PCIe Gen 7, 800-gig Ethernet and CXL 3, to drive additional market opportunity for Astera Labs through increased unit shipments and higher dollar content per platform. Scaling our platform approach is another important strategic priority for the company. Our rack-scale connectivity focus encompasses our complete product portfolio, spanning stand-alone silicon, hardware solutions and the COSMOS software suite.
The Astera Labs Intelligent Connectivity Platform provides technology breadth to our hyperscaler customers while also enhancing performance and productivity driven by our better together product portfolio design approach. As an example, our customers can integrate Scorpio and Aries solutions in combination to obtain even more advanced diagnostics and telemetry capabilities. While Aries can track the reliability and robustness of PCIe 6 links, Scorpio provides packet-level visibility for increased observability of data center traffic. The synergy between our products, underpinned by COSMOS, ensures comprehensive support and seamless connectivity to drive system efficiency and performance across various applications. In summary, we continue to take advantage of robust secular trends and strong business momentum by accelerating the pace of our R&D investments to access new and emerging market growth opportunities to service the entire AI rack.
With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss our growth strategy in more detail.
Sanjay Gajendra: Thanks, Jitendra, and good afternoon, everyone. Today, I want to provide an update on our progress and opportunities within 3 key AI and cloud infrastructure application categories as we establish Astera as a critical connectivity supplier for the entire AI rack. First off, scale-up connectivity for AI and cloud infrastructure represents a significant and rapidly growing marketplace. Increasing accelerator cluster sizes, faster interconnect requirements and overall system complexity challenges are creating substantial dollar content opportunities. These opportunities are driving strong demand for our reach extension solutions in the near term, and we expect these trends to drive additional revenue growth across multiple product lines over the longer term.
We’re also pleased by the growing interest in Scorpio X-Series solutions for scale-up connectivity. Designed to maximize AI accelerator utilization with consistent performance and reliability, Scorpio X solutions will be central to next-generation AI racks. This trend will increase our silicon content opportunity to hundreds of dollars per accelerator and serve as an anchor socket for integrating additional Astera Labs connectivity solutions, along with our COSMOS software suite, at rack scale. I’m excited to share that we will begin shipping preproduction volumes for Scorpio X-Series starting late this quarter. Longer term, hyperscalers will also look to the UALink protocol to deliver faster data transfer rates with a more scalable architecture.
Our expanding road map is providing customers with a long-term strategy towards scaling their AI accelerator clusters and infrastructure. We expect to deliver UALink solutions in 2026 to solve scale-up connectivity challenges for next-generation AI infrastructure. Next, the scale-out connectivity application. Front-end scale-out connectivity topologies are becoming increasingly intricate as next-generation AI accelerators necessitate faster speeds while also supporting comprehensive interoperability with other peripherals. Over the past few years, we have established a robust business within scale-out topologies through our reach extension portfolio of Aries and Taurus products across PCIe and Ethernet protocols. As the market transitions to PCIe 6-capable GPUs, we now see an expanded market opportunity that includes our Scorpio P-Series product family and Aries 6 Retimer and Gearbox solutions.
Our COSMOS software framework enables seamless expansion into these additional sockets in our customers’ platforms. Utilizing PCIe 6 data rates to support 800-gig scale-out connectivity will be a primary focus for AI infrastructure providers in the coming years. The Scorpio P-Series, combined with Aries 6, represents the first-to-market solution specifically designed to achieve this objective. Looking ahead to Q2, we anticipate accelerated shipments of Scorpio P-Series switches and Aries 6 retimers on customized rack-scale AI platforms based on market-leading GPUs. Additionally, we continue to identify further opportunities for Scorpio P-Series outside of rack-scale systems with multiple engagements on modular topologies that support enhanced customization.
Our reference design and collaboration with Wistron on NVIDIA Blackwell-based MGX systems exemplifies this expanding opportunity set as we aim to bolster our presence within OEM and enterprise channels. We remain the sole connectivity provider that has demonstrated complete end-to-end PCIe 6 interoperability with NVIDIA’s Blackwell GPUs, and we are actively working across the ecosystem to enable future-proof infrastructure capable of leveraging the increased throughput and performance of the PCIe 6 standard. Finally, we continue to see large and growing opportunities within the general purpose compute infrastructure market. We expect revenue growth from general compute-based platform opportunities featuring next-generation CPUs, network cards and SSDs with our Aries PCIe 6 retimers, Aries PCIe Gearboxes, Taurus Ethernet SCMs and Leo CXL product families.
For the next couple of years, the transition of data center server CPUs to support PCIe 6 will be a catalyst for additional unit growth and higher ASPs for our Aries product family. For Ethernet, Taurus continues to see growth on general-purpose platforms leveraging 400-gig switching port speeds. CXL will also expand our general-purpose compute exposure with expected volume ramps on hyperscaler customer programs in the second half of 2025. Overall, general purpose compute remains an important application for our intelligent connectivity platform and is expected to drive diversification of our revenue profile over the long term. In conclusion, the strong customer traction with Scorpio, along with increasing opportunities in AI connectivity and general purpose computing, allows us to drive future growth.
By innovating and expanding our product offerings, we aim to meet the evolving needs of our customers and capitalize on our vision to deliver high-performance connectivity solutions for AI racks to support PCIe and UALink-based scale-up, Ethernet-based scale-out, PCIe-based peripheral and CXL-based memory connectivity, with all these components seamlessly integrated with our COSMOS software suite for advanced observability, fleet management and rapid market deployment. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q1 financial results and our Q2 outlook.
Mike Tate: Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q1 financial results and Q2 guidance will be on a non-GAAP basis. The primary difference in Astera Labs’s non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today’s press release available on the Investor Relations section of our website for more details on both our GAAP and non-GAAP Q2 financial outlook, as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q1 of 2025, Astera Labs delivered quarterly revenue of $159.4 million, which was up 13% versus the previous quarter and 144% higher than the revenue in Q1 of 2024. During the quarter, we enjoyed strong revenue growth from both our Aries and Taurus product lines supporting both scale up and scale out PCIe and Ethernet connectivity for AI rack-level configurations.
Leo CXL controllers and Scorpio Smart Fabric Switches both shipped preproduction volumes as our customers work to qualify their platforms for volume deployment in mid to late 2025. Q1 non-GAAP gross margin was 74.9%, up slightly from the December quarter levels as product mix remained largely constant. Non-GAAP operating expenses for Q1 of $65.6 million were up from the previous quarter as we continue to scale our R&D organization to expand and broaden our long-term market opportunity. Within Q1 non-GAAP operating expenses, R&D expenses were $45.4 million. Sales and marketing expenses were $9.4 million, and general and administrative expenses were $10.9 million. Non-GAAP operating margin for Q1 was 33.7%. Interest income in Q1 was $10.4 million.
Our non-GAAP tax rate for Q1 was 7.1%. Non-GAAP fully diluted share count for Q1 was 178 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.33. Cash flow from operating activities for Q1 was $10.5 million, and we ended the quarter with cash, cash equivalents and marketable securities of $925 million. Now turning to our guidance for Q2 of fiscal 2025. We are aware of and focused on navigating the rapidly changing and dynamic macro environment. Policy initiatives, including tariffs and changing export restrictions, are a few of the variables that are likely to have at least some impact on demand across the global economy, including the AI and cloud infrastructure markets. Despite these factors, our business continues to have strong momentum as we execute on our long-term growth strategy.
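As a quick sanity check (an editor's back-of-envelope, not company-provided math), the Q1 non-GAAP figures quoted above tie together arithmetically:

```python
# Back-of-envelope reconstruction of the Q1 2025 non-GAAP P&L from the
# figures quoted on the call (dollar amounts in millions; small rounding
# differences versus the reported numbers are expected).
revenue = 159.4
gross_margin = 0.749                  # 74.9% non-GAAP gross margin
opex = 65.6                           # R&D 45.4 + S&M 9.4 + G&A 10.9, with rounding

gross_profit = revenue * gross_margin
operating_income = gross_profit - opex
operating_margin = operating_income / revenue   # ~33.7%, matching the stated figure

interest_income = 10.4
pretax_income = operating_income + interest_income
net_income = pretax_income * (1 - 0.071)        # 7.1% non-GAAP tax rate

shares = 178.0                        # non-GAAP fully diluted share count, millions
eps = net_income / shares             # ~$0.33, matching the reported non-GAAP EPS
```

Footing the reported line items this way is a useful check against transcription errors in earnings transcripts.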
With that said, we expect Q2 revenue to increase to a range of $170 million to $175 million, up roughly 7% to 10% from the prior quarter. For Q2, we expect Aries and Taurus revenues to grow on a sequential basis. Our Leo CXL controller family will continue shipping in preproduction quantities to support ongoing qualifications ahead of an expected production ramp in the second half of 2025. Finally, we expect our Scorpio product revenues to grow sequentially in Q2 as the initial designs of customized GPU-based rack-level systems begin to ramp in volume late in the quarter. We continue to expect Scorpio revenue to comprise at least 10% of our total revenue for 2025. We expect non-GAAP gross margins to be approximately 74% as the mix between our silicon and hardware modules remains largely consistent with Q1.
We expect second quarter non-GAAP operating expenses to be in a range of approximately $73 million to $75 million. Operating expense growth in Q2 is being driven by continued investment in our research and development function as we look to expand our product portfolio and grow our addressable market opportunity. Interest income is expected to be approximately $10 million. Our non-GAAP tax rate should be approximately 10%, and our non-GAAP fully diluted share count is expected to be approximately 178 million shares. Adding this all up, we are expecting our non-GAAP fully diluted earnings per share in a range of approximately $0.32 to $0.33. This concludes our prepared remarks. And once again, we appreciate everyone joining the call.
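Taken at their midpoints, the guidance items above roughly foot to the guided EPS range (an editor's sketch; the midpoint assumption is illustrative, not the company's):

```python
# Midpoint build of the Q2 2025 non-GAAP guidance given on the call
# (dollar amounts in millions; midpoints are an illustrative assumption).
revenue_mid = (170 + 175) / 2           # guided $170M-$175M revenue
gross_profit = revenue_mid * 0.74       # ~74% non-GAAP gross margin
opex_mid = (73 + 75) / 2                # guided $73M-$75M operating expenses
operating_income = gross_profit - opex_mid

pretax_income = operating_income + 10   # ~$10M expected interest income
net_income = pretax_income * (1 - 0.10) # ~10% non-GAAP tax rate
eps = net_income / 178.0                # ~178M fully diluted shares
# eps lands near the low end of the guided $0.32-$0.33 range
```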
And now we’d like to open the line for questions. Operator?
Q&A Session
Operator: [Operator Instructions] Our first question comes from the line of Harlan Sur with JPMorgan.
Harlan Sur: High level question. On the overall AI and data center spending environment, there have been some concerns on the CapEx spending momentum potentially peaking this year, maybe some AI compute digestion. We also kind of recently saw some of the AI bans to China. And then obviously, tariff and trade concerns, as you guys articulated in your prepared remarks. On the flip side, you’ve got the strong ramp of your merchant GPU customers on their next-generation AI platforms, strong new AI ASIC XPU ramps. So you’ve got new entrants into the AI ASIC XPU market. So since last earnings, I mean, has anything changed meaningfully positive or negative on the customer programs or the demand outlook for this year? And more importantly, your confidence level on continued strong growth for next year?
Mike Tate: Thanks, Harlan. Yes. First off, on the tariffs, we have not seen any material impact on our business, but it is fluid, and the rules are still subject to change. So it’s something we’re watching closely. But so far, we have not seen an impact. We do note that the hyperscalers stuck to their CapEx in the recent calls, and one actually increased. So that was encouraging. So we’ll continue to monitor that. In regard to restrictions on China, we did see an impact this year. We do have designs where we were the retimer on those programs. So to the extent our customers cannot procure the GPUs, that does create a headwind for us that we’ve contemplated in our guidance.
Sanjay Gajendra: Yes. On the customer and business side, just to touch on the question that you asked. The great thing about our overall revenue profile is that there are multiple ways in which we are approaching the market. The diversity across both custom ASIC-based platforms versus merchant GPU-based platforms, scale up versus scale out, and the multiple product lines that we have enables us to approach the market in many different ways. And to that standpoint, for us, for the first half, what we are expecting is that our revenue would be driven largely by the PCIe scale-up and the Ethernet scale-out opportunities, along with the initial shipments of Scorpio P-Series and Aries 6 going into the customized racks.
And the second half, of course, layers nicely on top with some of the production ramps that we’re expecting with the customized racks, which again, for us, is the Scorpio switches along with the PCIe 6 retimers. These are now qualified, so we are starting to see those shipments become significant. That’s part of the second half, and in the second half, of course, we have the CXL initial shipments that we’re expecting for production volumes and the Scorpio X switches for the scale up going into the custom ASICs. Those are also expected to start hitting initial production volumes in the second half of this year, which essentially gives us multiple ways, if you will, and sets us up nicely for future revenue growth even beyond ’25.
Operator: Your next question comes from the line of Blayne Curtis with Jefferies.
Blayne Curtis: I wanted to talk about scale up. You mentioned it several times. I think you even said a couple of hundred dollars per accelerator. Today, I think you’re selling some retimers and then some PCIe cabling. Can you walk us through the progression of scale up in your participation and kind of can you maybe set some timing? Because I know UAL is probably later next year. So what’s the scale-up opportunity for you in between now and then?
Jitendra Mohan: Blayne, this is Jitendra. Scale up presents a very good opportunity for us. As you know, so far, our revenues have been driven primarily by scale-out opportunities. But for the first half, as Sanjay laid out, we have a significant contribution from scale-up. And the reason that’s so important for us is scale up is really a very rich opportunity of high-speed interconnects that need to deliver low latency and high throughput. And that’s where we play today with our Aries retimer products and starting shipments of Scorpio X family. And we do expect this opportunity to continue to grow as cluster sizes grow and the data rates increase. So we have significant opportunities that we are working on for PCI Express based scale-up networks based on our current Scorpio X family.
But then it also dovetails very nicely into UALink, and we expect this to be a multibillion-dollar opportunity as we provide a full holistic portfolio of devices to address UALink infrastructure. And as far as UALink itself is concerned, the spec is now final. It’s been released as the 1.0 spec. And so you can imagine that products will start to be worked on now, with first samples in 2026 and revenue contribution the following year. So that is a very big opportunity that we are very well positioned to take advantage of.
Blayne Curtis: And then I want to ask you on Taurus. You called out growth in March and then continued growth in June. Can you talk about — I don’t know if you want to frame how big that business has grown, whether there’s some rough metrics you can kind of give us? And then just kind of curious, the diversity of the customer base beyond the lead customer as well?
Mike Tate: Currently, we’re shipping the 50-gig solution, and that continues to grow. We have multiple designs at our lead customer, the largest one being the internal AI accelerator-based platform, which is still in a ramp phase. We do look to broaden it out beyond our lead customer; that’s probably going to come with the next technology jump to 100-gig speeds.
Operator: Your next question comes from the line of Ross Seymore with Deutsche Bank.
Ross Seymore: On last quarter’s call, you really made a big point about the diversification of your customer base. Tonight, it seems like it’s a lot more about the diversification of your products that you’re offering. So I guess if we kind of blend those two together, what are you seeing as the changes in the environment on the ASIC side versus the merchant GPU and how you’re broadening technology is being applied differently between them?
Sanjay Gajendra: Yes. So for us, again, we play in both spaces with different strategies and different products. Now the key thing that we are excited about is the growing interest in the Scorpio X family. These are fabric switches that are used to interconnect multiple accelerators together. To that standpoint, it’s not only a significant dollar opportunity, because the ASP of this product tends to be high, but these are also products that are turning out to be anchor sockets for us. If you think of an AI rack being built, you have the accelerators and then you have the fabric that interconnects the accelerators. So what we are transitioning to, and what we’re excited about, is that the Scorpio X device is now turning out to be an anchor socket.
Think of it as like a mothership around which we are able to now add a lot more products that go along with it, whether it’s silicon-level products, modules or other form factors that we’re considering. So overall, I want to say that from an opportunity space standpoint, for Astera, the custom ASIC-based implementations tend to offer a lot more opportunities. And with Scorpio X gaining more and more traction, we do believe that we are in a good position looking forward, not just to service near-term business, but also longer term, with potentially UALink being the industry-wide standard for scale-up topologies.
Ross Seymore: I guess as my follow-up, just sticking on the custom silicon side of things. Does the competitive environment change at all there? I think this was a question that was asked on the last call as well. But considering the XPU providers are oftentimes your primary connectivity competitors, I know the best technology always wins, but does the bundling capability that could occur in that XPU market actually lead to more competition on your side? Or is that something you’re not seeing?
Sanjay Gajendra: The competition will always be there, and competition will always try to sell more. I think that’s a given. What you need to keep in mind is that we are working with large hyperscalers that need to also consider the supply chain, ecosystem and other considerations from a risk management standpoint. So that’s what we see. And in general, it also comes down to technology differentiation. For us, the fabric devices, and in fact all of our connectivity devices, are developed ground up for AI type workloads. So there is a clear advantage and benefit that we offer, which is recognized and valued by our customers. Our COSMOS software provides unprecedented visibility into what’s happening in the network, being able to predict performance, being able to predict upcoming failures and so on, which are all critical requirements when you think about how complex an AI rack is and how much more complex it will continue to become.
So all in all, for the reasons I — like I noted, both from a risk management, commercial and technology differentiation standpoint, we are seeing that customers continue to work with us, and we see an increasing interest for our Scorpio line of products.
Operator: Your next question comes from the line of Thomas O’Malley with Barclays.
Thomas O’Malley: First one is for you, Mike. You mentioned that there was a China impact on your sales. It’s never been a significant portion of your model. But could you give us a feeling just how large that impact was and what that impact will be over the next couple of quarters?
Mike Tate: Yes. So we ship into China with our retimers predominantly right now, and they were attached to third-party merchant GPU systems; both were restricted hard stop during the quarter. So there was a modest impact that we have to overcome. China revenues, when you look at end customer demand, are less than 10% of our revenues. So it’s been manageable enough, given the strength of our business and other product lines, to continue to grow through this challenge.
Thomas O’Malley: Helpful. And then Jitendra, maybe a broader question. So you guys have been very consistent kind of describing the year as first half PCIe scale up, Ethernet scale out and a lot of the custom silicon. With the second half, you see more Scorpio, more retimers. There’s been a ton of noise in the market, and that’s just the price you pay for being very visible and attached to NVIDIA. But we’ve heard about a lot of differences in terms of what the ramp cadence was coming into this year versus where we are today. Some large hyperscalers, maybe some weakness in their programs. And then potentially with your large customers, some systems that are maybe delayed and moving to more DGX-style solutions. I understand you have to be sensitive about talking about customers, but to the best extent you can, coming into this year versus where you are today, could you maybe describe if there’s any differences to what you saw in the ramp of your revenue?
And maybe comment on anything that maybe has changed that can help us understand what’s going on?
Jitendra Mohan: Tom, that’s a good thing — a good point that you bring up. These systems are incredibly complex. And a lot of things have to go right in order for the full system to not only get deployed, but then get deployed at scale. And we, of course, try to do our best to make sure that we are never the bottleneck in terms of the deployment of these systems. And as Sanjay mentioned earlier, we have done a pretty good job so far with our preproduction shipments and getting the products qualified and so on. But we always have to take some buffer or some kind of conservativeness in terms of what it would take for our end customers to kind of complete that qualification and deployment. And so far, I would say that our expectations have come through largely unchanged from when we started the year, which is what our judgment was based on what we were hearing from our customers. Sanjay, do you want to add any more color?
Sanjay Gajendra: Yes. So we will always continue to be conservative, just to underline what Jitendra said. But having said that, the revenue models and the guidance that we are providing, or the outlook that we’re sharing, comprehend all this because we are so close to these customers that we see a lot of stuff, and we’re able to consider and contemplate that when we provide guidance. So to that standpoint, we do feel comfortable and confident about where we are. We just need to continue to make sure that we execute and deliver our part of the subsystem. And for the rest of it, like we noted, we will account for that in a conservative model, knowing how complex these systems are to get deployed.
Operator: Your next question comes from the line of Joe Moore with Morgan Stanley.
Joseph Moore: Great. I guess we’ve heard a lot of the large language model developers talk about sort of tightness in inference markets and kind of a lack of hardware to deal with inference. Are you guys seeing that? Does that translate into strength in any part of the business for you? Or just anything you can do to corroborate or mitigate those concerns.
Jitendra Mohan: Thanks, Joe. For us, at the first level, inference and training are about the same. The same products get used in both of these systems. So we do benefit from both training and inference. And as you look at some of these larger models, mixture of experts and so on, what we are finding is that the amount of compute that’s required to draw inference from these models is even higher, 10x higher than previously. And as a result, the basic unit of compute is starting to become a rack, kind of a rack-level system of GPUs, which also happens to be the same basic unit for training. And so with the increased complexity of these rack-level systems, we actually see more opportunities overall for both inference as well as for training.
As Sanjay mentioned earlier, this quarter, we also released a Scorpio-based inference system for smaller-scale inference. So that should allow us to kind of benefit from some of the smaller-scale inference systems as well. So having said all of that, we do see today that our customers are also using the same set of systems to do both inference and training, and we benefit from both.
Joseph Moore: Great. That’s helpful. And for my follow-up, you mentioned racks. A lot of the rack scale systems seem to be sort of tricky to get up and running. I guess those issues seem to be worse a few months ago. But does that affect you guys? I know you have good content across both rack and non-rack merchant solutions and ASICs. But just — is that kind of change in ramp schedule, is that having any impact on you guys?
Sanjay Gajendra: There is always complexity, Joe, I would say. But like we noted, we are modeling that and providing guidance that takes a conservative view on how some of these systems are being built and deployed. That’s probably the right thing to do from a business outlook standpoint. But having said that, going forward, what we’re doing is also something I do want to share, Joe, which is to take more of an AI rack-level view in how we approach the market. The vision that we outlined is to be the connectivity supplier for the entire AI rack. And like I noted in my prepared remarks, we are focused on 4 main protocols: PCIe and UALink for scale-up, Ethernet for scale-out, PCIe for peripheral connectivity and CXL for memory expansion. We approach it holistically, providing a variety of different products, whether it’s retimers, gearboxes, fabric devices or switches, across both copper and optical, so that going forward, for next-generation AI systems, at least from a connectivity infrastructure standpoint, it’s a holistic solution that not only considers silicon products, but also hardware and software.
That’s how we see the evolution, if you will. And Astera, we believe, is well suited to deliver the entire connectivity at the rack level, which we are executing on both in terms of what we are servicing today and then going forward, of course, with UALink.
Operator: Your next question comes from the line of Tore Svanberg with Stifel.
Tore Svanberg: Congrats on the quarter. Jitendra, I was hoping you could elaborate a little bit more on the Aries 6 upgrade cycle here, especially in reference to gearbox products. Could you add any color on how diversified the use cases are going to be for Aries 6 gearbox products?
Sanjay Gajendra: Yes. So let me just take a second to explain what a gearbox device does. It’s primarily used to match two generations of the same standard. Meaning in this particular case, one side of the device talks PCIe Gen 5, the other side talks Gen 6. The reason these products are essential is that if you look at the CPUs today, they are still at Gen 5, while GPUs have already transitioned to Gen 6. The same thing could happen with the networking and storage interfaces. So what a gearbox device does is two things. One, it takes care of the signal quality, similar to a retimer. And on top of that, it takes care of matching the protocol generations on the two ends of the link. So in terms of opportunities, we have multiple engagements today that we’re servicing for the gearbox device.
In fact, we have already started shipping preproduction volume to support some of these opportunities. And this would, again, not create additional TAM beyond our Aries business, because it’s adding to the retimer TAM, but it essentially brings in a higher level of ASP, simply because you’re able to not only do retiming, but also do some of the speed matching that I noted. And we will continue to offer products like that, which is critical, because if you think of a typical engineer that’s working on an AI system, they need multiple tools. And Astera is providing that to them, both in terms of retimers, gearboxes, fabric devices, and the software. So overall, we are providing a holistic portfolio that is also opening up added interest and momentum for customers to use our products.
Tore Svanberg: Thank you for that color, Sanjay. As my follow-up for Mike, not to nitpick here, but I mean your DSOs came in at 40 days. I think throughout most of last year, they were around mid-20s. Anything going on there to note?
Mike Tate: It’s really just that linearity was more balanced in the quarter than in previous quarters. I think going forward, this will be a more typical level of DSOs as we grow as a company and have multiple product lines shipping.
Operator: The next question comes from the line of Quinn Bolton with Needham & Company.
Nathaniel Bolton: Maybe just a quick clarification, guys. Mike, I think in your prepared comments, you said that Aries and Taurus would grow in the June quarter. Just wanted to make sure: for Aries, is that only the on-board retimer products, or does that include the Aries SCM for scale-up applications? I know you talked about strength in scale-up, but I just wanted to see if you could make a specific comment on the Aries SCM product, because it certainly sounds like the lead ASIC platform is still in the ramp phase.
Sanjay Gajendra: Yes, it does. The Aries SCM is doing very well for us, and other internal AI accelerator cloud platform providers are also ramping. So we’re seeing growth from both the chip-on-board retimers for scale-out and the Aries SCMs for scale-up, with both contributing growth. These are on internal AI accelerator systems.
Nathaniel Bolton: Got it. Okay. And then maybe Jitendra or Sanjay, just longer term, I know you’re ramping Scorpio P-Series switches first, but it sounds like the X-Series is a larger TAM. As that starts to enter production late this year with higher dollar content per accelerator, would you expect X-Series to potentially cross over P revenue say, by the end of ’26 or would that be more of a ’27 event? Just trying to get some sense of what you think the pace of the ramp might be once X-Series enters production.
Jitendra Mohan: Yes. So if you consider the P-Series and the X-Series TAMs, we kind of outlined them to be about $2.5 billion each. But if you look at the X-Series, the available TAM today is nearly zero [indiscernible] outside of NVIDIA. So it’s a very rapidly growing market for us, and we do estimate that it will be the single largest product line for us. As Sanjay mentioned earlier, the X family shipments have just started in this quarter and will start to ramp later this year, with the full volume really coming in 2026. So we do expect, in the ’26 and ’27 time frame, for the X family to become a larger contributor to our revenues.
Sanjay Gajendra: If I can add to that, it’s not just revenue. Like we noted, X-Series is our anchor socket, the socket around which we are building the entire product line. COSMOS operates at a much higher level when it comes to supporting the X-Series. So overall, we do believe that anchor socket, and the fact that this is a greenfield TAM that’s rapidly growing, puts us in a great position to offer multiple product lines in order to service the entire AI rack.
Operator: Your next question comes from the line of Atif Malik with Citi.
Unidentified Analyst: This is [indiscernible] in for Atif. I guess my first question might be more of a broader question for Jitendra or Sanjay. I believe one year ago, Astera announced some progress around PCIe over optics, and this year at OFC, we saw various demos of the technology. At a higher level, does Astera still see PCIe over optics as a path forward to extend PCIe beyond copper and beyond scale-up? Or perhaps the focus is still on copper?
Jitendra Mohan: So a good question. The focus right now is definitely on copper, as we have discussed through the entirety of this call. But as Sanjay mentioned earlier, our job really is to provide all of the tools that our customers require in order for them to deploy their AI infrastructure. Our customers usually prefer to deploy over copper just because it’s more reliable, lower power, better TCO and so on. And so we will continue to support that for as long as we can. But at some point in time, as data rates go up and the reach requirements increase, we will have to go to optical solutions, and that’s where we have a very innovative PCI Express over optics demonstration to provide that additional capability to our customers.
More broadly speaking, as we go into — from PCI Express into UAL, where the line rates are even higher, we believe that copper will still be the dominant media at 200 gigabits per second. Now if you start looking at speed even beyond that, that’s where optical may start to have a play, and we are very actively working with our customers to figure out what that intercept is and the different type of innovations that are required to deploy a scale of optic-based solutions at those data rates.
Unidentified Analyst: Got it. No, that’s helpful. And my follow-up question might be more for Mike. And I totally get you don’t guide beyond the next quarter. But if our math is right, Scorpio sales in September and December could reach mid-teens to 20% of sales on a quarterly basis, if you take into account the 10%-plus of sales you gave out for the year. With that in mind, can we think of second half gross margin actually being up versus the first half, qualitatively?
Mike Tate: Yes. Overall, for gross margins, as we diversify our product line, we’re going to see a wider range of margins per product offering. Just given how quickly the market is moving, you have less time to optimize the product portfolio for cost; you’re always chasing the next generation for revenue growth. So with that wider range of margins, we still expect our longer-term gross margin target of 70% to be the direction we’re heading, not this year, but over time. So I would still encourage people to think about the margins, as we grow the company, as trending towards 70%.
Operator: Thank you. Your next question comes from the line of Srini Pajjuri with Raymond James.
Srinivas Pajjuri: A question on the custom racks, and also what sort of mix you’re assuming for custom racks as we go into the second half. Maybe first, can you talk about what’s driving that inflection? I know it’s always been middle of the year for custom racks, but is there any, I guess, software, or is it related to the Blackwell Ultra, or some other hardware component that’s enabling customers to switch over to custom racks? And as I said before, if you could talk about what sort of mix assumptions you’re making as we go into the second half of the year?
Jitendra Mohan: Great question. It’s unfortunately difficult to give an exact mix because it keeps changing, keeps evolving. But to go back to the point that you made, Blackwell, first of all, is a fantastic architecture, a fantastic platform for customers to take advantage of. But they also have the challenges or restrictions of what their data center looks like and so on. So there is a lot of incentive for our customers to customize this rack to take advantage of their existing data centers, their existing hardware, the software infrastructure that they have, security, even their supply chain. So that’s why we do believe that many of the hyperscalers will customize the racks and deploy them in their data centers.
However, the timing of that will vary. So hyperscaler customers that are more focused on time to market or low engineering effort will likely side with the reference designs to get started and then customize the racks later on. We can definitely talk about the information that’s now publicly released: the first application for this customization was to use in-house smart NICs in order to attach them to the Blackwell platform. And that’s a fantastic opportunity for us, and we expect that to start to ramp in the second half of this year.
Srinivas Pajjuri: Got it. And then more of a longer-term question, Jitendra. On UALink, obviously, it seems like a very large market potential out there. But at the same time, some of your competitors are focusing on Ethernet. These are obviously pretty large competitors. And I’m guessing some of your customers are probably going to stick to Ethernet as well. Just wondering how you think about how the market might shake out longer term, Ethernet versus UALink, in terms of what sort of applications might adopt Ethernet for scale-up versus UALink?
Jitendra Mohan: Yes. So UALink, as you correctly pointed out, is a brand-new standard, right? It’s been purpose-built for AI, for specific workloads around training as well as inference, whereas Ethernet has been very widely deployed. I mean, it’s probably the most widely deployed protocol for scale-out. And now what’s happening is, with the UEC, the Ultra Ethernet Consortium, they’re trying to retrofit Ethernet to maybe work for scale-up. So if you look at a longer-term horizon, it’s probably fair to say that you would be able to build scale-up networks using both UALink as well as UEC. However, there are pretty significant differences between the two. And I can maybe categorize them into 2 buckets. One is technical performance, and the other one is ecosystem diversity.
So if you just look at this from a technical performance standpoint, UALink brings the best of two worlds, right? It gets the memory semantics, the ease of software and the lower latencies of a PCI Express protocol, and then combines it with the fastest speeds that are available for Ethernet. So at the end of the day, you get a very efficient system with lossless packet transmission, very low latency and very high throughput. Whereas in the case of Ethernet, you are trying to retrofit some of these features while keeping Ethernet backwards compatible. So overall, our belief is that, purely from a performance standpoint, UALink will be more performant compared to UEC. The other part of it is ecosystem diversity, and being on the Board of the UALink Consortium, we see a lot of traction both from customers as well as from vendors to build products into this ecosystem.
We believe that over time, we will have a rich ecosystem where multiple vendors are building different components that will go into a UALink-based rack or UALink infrastructure. And certainly, Astera Labs is also very well positioned to play into that ecosystem. Overall, we believe that hyperscaler customers will prefer this diverse ecosystem over one that is more proprietary or locked into one large component provider.
Operator: And our final question comes from the line of Suji Desilva with ROTH Capital Partners.
Suji Desilva: Quick question on Scorpio products. Should we think of there being any appreciable price difference between the X product and the P product? Are they fairly similarly priced?
Sanjay Gajendra: Yes. So in general, the functionality is much more valuable on the X-Series, because it’s used to interconnect GPUs, and GPU utilization, of course, is a prime factor if you think of the overall performance of a training cluster or an inference cluster. So from that standpoint, the X-Series does bring in a lot more value, and therefore, you can assume that the ASPs tend to be significantly higher. And again, the X-Series is not one device, to be very clear; there are multiple part numbers. So there could be situations where maybe one part number is not at the same level as the P-Series. But in general, you can just look at it from a per-lane or per-port standpoint and look at the value delivered. And on that basis, the X-Series will always be a much more valuable, much higher ASP product than the P-Series.
Suji Desilva: Okay. Great. And then the other question I have is an update on your thinking versus coming out of the IPO, when it was mostly chips, but then Taurus being a paddle board. Can you update your thoughts on chips versus board-level solutions, or do module-level solutions even make sense, given how much content you have to integrate?
Sanjay Gajendra: Yes. So I think, like Jitendra laid out, our vision really is to be a connectivity supplier at an AI rack scale, right? We want to solve our customers’ needs and challenges. And we are sort of uniquely set up as a company in the sense that we have our silicon engineering team, we have our hardware engineering team, and we have a software engineering team that includes what we service on the COSMOS side. So what I want to say is, in that context, obviously, we are trying to provide a rack-scale solution to our customers, with the one variable being the compute, coming from either third-party or internally developed ASIC platforms. But the rest of the connectivity, whether it’s based on copper or optical, we seek to address and service.
And the exact form factors that we will take really depends on the customer needs. But we have the capability to go from silicon to hardware or software. So we’ll always look at trying to maximize the opportunity that’s available to Astera so we can continue to grow and thrive as a company.
Operator: Thank you. There are no further questions at this time. So I’d like to turn the call back over to Leslie Green for closing remarks.
Leslie Green: Thank you, Amy, and thank you, everyone, for your participation and questions. We look forward to updating you on our progress. As we announced today, we will be conducting a webinar hosted by JPMorgan on expanding opportunities in AI infrastructure with UALink. You can find full details in our press release, and you can also check the Investor Relations portion of our website for this and other upcoming financial conferences and events. Thank you.
Operator: Thank you. This concludes today’s conference call. You may now disconnect.