Aehr Test Systems (NASDAQ:AEHR) Q1 2026 Earnings Call Transcript October 6, 2025
Aehr Test Systems reported earnings in line with expectations. Reported EPS was $0.01; expectations were $0.01.
Operator: Greetings. Welcome to the Aehr Test Systems fiscal 2026 First Quarter Financial Results Conference Call. A question and answer session will follow the formal presentation. Should anyone require operator assistance during the conference, please signal for an operator. Please note, this conference is being recorded. I will now turn the conference over to your host, Jim Byers of PondelWilkinson Investor Relations. You may begin.
Jim Byers: Thank you, operator. Good afternoon. Welcome to Aehr Test Systems’ first quarter fiscal 2026 financial results conference call. With me on today’s call are Aehr Test Systems’ President and Chief Executive Officer, Gayn Erickson, and Chief Financial Officer, Chris Siu. Before I turn the call over to Gayn and Chris, I’d like to cover a few quick items. This afternoon, right after market close, Aehr Test Systems issued a press release announcing its first quarter fiscal 2026 results. That release is available on the company’s website at aehr.com. This call is being broadcast live over the Internet for all interested parties, and the webcast will be archived on the Investor Relations page of Aehr Test Systems’ website.
I’d like to remind everyone that on today’s call, management will be making forward-looking statements that are based on current information and estimates and are subject to a number of risks and uncertainties that could cause actual results to differ materially from those in the forward-looking statements. These factors are discussed in the company’s most recent periodic and current reports filed with the SEC. The forward-looking statements are only valid as of this date, and Aehr Test Systems undertakes no obligation to update them. Now with that said, I’d like to turn the conference call over to Gayn Erickson, President and CEO.
Gayn Erickson: Thanks, Jim. Good afternoon, everyone, and welcome to our first quarter fiscal 2026 earnings conference call. I’ll begin with an update on the exciting markets Aehr Test Systems is targeting for semiconductor test and burn-in, with an emphasis on how these markets seem to share a common thread of growth related to the massive expansion of data center infrastructure and AI. After that, Chris will provide a detailed review of our financial performance, and finally, we’ll open up the floor for your questions. Although we started with the typical low first quarter revenue, consistent with the last few years, we were actually higher on both the top and bottom lines than Wall Street analyst consensus, and we’re pleased with our start to this fiscal year.
We had revenue from several market segments and strong momentum in sales and customer engagement in both wafer level and packaged part test and burn-in of artificial intelligence or AI processors. Again, although we did not provide guidance for the quarter, our first quarter results surpassed analyst consensus estimates for both the top and bottom lines. We saw continued momentum in the qualification and production burn-in of packaged parts for AI processors, which is fueling sales growth in our new Sonoma ultra-high power package part burn-in systems and consumables. During the quarter, our lead production customer, a leading hyperscaler, placed multiple follow-on volume production orders for Sonoma Systems, requesting shorter lead times to support higher than expected volumes as they accelerate the development of their own advanced AI processors.
This customer is one of the premier large-scale data center providers and has already outlined plans to expand capacity for this device and introduce new AI processors over the coming year to be tested and burned in on our Sonoma platform at one of the world’s leading test houses. We’re also collaborating with them on future generations to ensure we can meet their long-term production needs for both packaged part and even wafer level burn-in. Hyperscalers like Microsoft, Amazon, Google, and Meta are increasingly designing and deploying their own application-specific integrated circuits, or ASICs, for AI processing to meet the unique demands of their massive scale workloads and gain a competitive advantage. Aehr Test Systems allows customers to perform production burn-in, screening, qualification, and reliability testing for GPUs, AI processors, CPUs, and network processors directly in packaged form.
Our Sonoma systems provide what we believe to be the industry’s most cost-effective solution, enabling customers to smoothly move from early reliability testing to full production burn-in and early life failure screening, which helps reduce costs, improve quality, and speed up time to market. In the last year, Aehr Test Systems has implemented several enhancements to the Sonoma system to meet qualification and broad production test and burn-in requirements across a wide range of AI processor suppliers, test labs, and outsourced assembly and test houses, or OSATs. Major upgrades include increasing power per device to 2,000 watts, boosting parallelism, and adding full automation with a new fully integrated packaged device handler. Over the last quarter, including a very successful customer open house we held last week at our Fremont, California headquarters, 10 different companies visited Aehr Test Systems to see our next-generation Sonoma system and new features, including the fully automated device handler for completely hands-free operation, which we’ve installed here at our Fremont facility.
Customer feedback regarding these enhancements has been very positive, and we expect these new features to open up new applications and generate additional orders this fiscal year. As I’ve mentioned before, one of the biggest benefits of our acquisition of InCal Technology one year ago is that it gives us a front-row seat to the future needs of many top AI processor customers, providing us with close insight into their burn-in requirements. As the only company worldwide that offers both proven wafer level and packaged part burn-in systems for qualification and production burn-in of AI processors, Aehr Test Systems is ideally positioned to assist them regardless of their burn-in method. Consequently, we are experiencing increased interest in our Sonoma high-volume production solution for package level burn-in, and some of these same customers, as well as other AI processor companies, are approaching us to learn about our production wafer level burn-in capabilities.
This past year, we delivered the world’s first production wafer level burn-in systems for AI processors. Importantly, these systems are installed at one of the largest OSATs worldwide, providing a highly visible showcase to other potential AI customers of our proven solution for high-volume testing and burn-in of AI processors in wafer form, thereby strengthening our market position. We anticipate follow-on orders from this innovative AI customer as volumes increase, and other AI processor suppliers have already approached us about the feasibility of wafer level burn-in of their devices. We’re also developing a strategic partnership with this world-leading OSAT to provide advanced wafer level test and burn-in solutions for high-performance computing and AI processors.
This joint solution, already in operation at their facility, marks a significant milestone for the industry. By combining Aehr Test Systems’ technological leadership with this OSAT’s global reach, we can provide unique capabilities to the market. This model offers a complete turnkey solution from design to high-volume production, and several customers have already begun discussions to learn more about our high-volume wafer level test and burn-in solutions for AI processors. This OSAT and Aehr Test Systems have a long history of innovation together, including the first FOX NP wafer level burn-in system installed in an OSAT for high power silicon photonics wafers, and now the world’s first wafer level test and burn-in of HPC and AI products using Aehr Test Systems’ FOX XP systems.
They also have one of the largest installed bases of Aehr Test Systems’ Sonoma systems for high power AI and high-performance computing processors. Additionally, this last quarter, we launched an evaluation program with a top-tier AI processor supplier for production wafer level test and burn-in of one of their high-volume processors. This paid evaluation, which includes a custom high power WaferPak and the development of a production wafer level burn-in test program, will feature a comprehensive characterization and correlation plan to validate the Aehr Test Systems FOX XP production system’s wafer level burn-in and functional testing of one of this supplier’s high-performance, high power processors on 300 millimeter wafers. We believe this represents a significant step toward wafer level burn-in as an alternative to later stage burn-in, for this device and for future generations of their products.
Our FOX XP multi-wafer test and burn-in system is the only production-proven solution for full wafer level test and burn-in of high-powered devices such as AI processors, silicon carbide and gallium nitride power semiconductors, and silicon photonics integrated circuits. Beyond AI processors, we’re seeing signs of increasing demand in other segments we serve, including silicon photonics, hard disk drives, gallium nitride, and silicon carbide semiconductors. We’re experiencing ongoing growth in the silicon photonics market driven by the adoption of optical chip-to-chip communication and optical network switching. This quarter, we upgraded another one of our major silicon photonics customers to 3.5 kilowatts of power per wafer in a nine-wafer configuration.
This latest system shipment includes our fully integrated and automated WaferPak aligner, configured for single touchdown test and burn-in of all devices on their 300 millimeter wafers. We anticipate additional orders and shipments this fiscal year to support their production capacity needs for their optical IO silicon photonics integrated circuits. In hard disk drives, AI-driven applications are generating unprecedented amounts of data, creating ever-increasing demand for data storage and driving new read-write technology for higher density drives, particularly for data center applications. We’re ramping and have shipped multiple FOX CP wafer level test and burn-in systems, integrated with a high power wafer prober and unique WaferPak high power contactors, to a world-leading supplier of hard disk drives to meet the test, burn-in, and stabilization needs of a new device used in their next-generation read-write heads.
This customer is one of the top suppliers of hard disk drives worldwide and has indicated they’re planning additional purchases in the near term as this product line grows. Gallium nitride devices are increasingly used for data center power efficiency, solar energy, automotive systems, and electrical infrastructure. Gallium nitride offers a much broader application range than silicon carbide and is set for significant growth in the next decade. Our lead production customer is a leading automotive semiconductor supplier and a key player in the GaN power semiconductor market, and we have multiple new engagements with other potential GaN customers in progress. We’re currently in design and development of a large number of WaferPaks for new device designs targeted for high volume manufacturing on our FOX XP systems.
Although silicon carbide growth is expected to be weighted toward the second half of the year, we continue to see opportunities for upgrades, WaferPaks, and capacity expansion as that market recovers. Demand for silicon carbide remains heavily driven by battery electric vehicles, but silicon carbide devices are also gaining traction in other markets, including power infrastructure, solar, and various industrial applications. Late in the last fiscal year, we shipped our first 18-wafer high voltage FOX XP system, extending beyond our previous nine-wafer capability to test and burn in 100% of the EV inverter devices on six or eight-inch wafers in a single pass with up to plus or minus 2,000 volt test and stress conditions at high temperature. We believe we’re well-positioned in this market with a large customer base and industry-leading solutions for wafer level burn-in.
I also want to give a quick update on the flash memory wafer level burn-in benchmark we’ve discussed earlier. This benchmark is ongoing, and we’ve now begun testing with our new fine pitch WaferPak that can meet the finer pitches and higher pin counts more cost-effectively for flash memory, but can also be applicable to DRAM and even AI processors if they require fine pitch wafer probing. This is the first WaferPak full wafer contactor demonstrating this capability. The benchmark has gone slower than expected, with some challenges in the test system bring-up, but appears to show positive results for the new WaferPak, our ability to do an 18-wafer test cell, and the use of our fully automated wafer handler and WaferPak aligner for the 300 millimeter NAND flash wafers.
Interestingly, the market for NAND flash is in a state of flux, shifting from the earlier announced transition to hybrid bonding technologies for higher density NAND flash on 300 millimeter wafers, which drove new requirements for higher parallelism and higher power, to now a push for high bandwidth flash, or HBF, which drives very different requirements in terms of test system capabilities. This is exciting news for Aehr Test Systems, as both are driving power requirements up substantially, which is right in our wheelhouse. High bandwidth flash, or HBF, is an emerging technology developed by two of the flash market leaders that aims to provide a massive capacity memory tier for AI workloads by combining HBM-like packaging, as used for DRAM high bandwidth memory, with 3D NAND flash.
This innovation is said to offer eight to 16 times the capacity of HBM DRAM at a similar cost, delivering comparable bandwidth to dramatically accelerate AI inference and process larger models more efficiently while using less power than traditional DRAM. We’re working with one of these lead customers on the newer tester requirements to provide them with a proposal to meet these higher performance and higher power requirements within our FOX XP 18-wafer test and burn-in system infrastructure. We expect to have another update at next quarter’s earnings call. The rapid advancement of generative artificial intelligence and the accelerating electrification of transportation and global infrastructure represent two of the most significant macro trends impacting the semiconductor industry today.
These transformative forces are driving enormous growth in semiconductor demand while fundamentally increasing the performance, reliability, safety, and security requirements of these devices across computing and data infrastructure, telecommunications networks, hard disk drive and solid-state storage solutions, electric vehicles, charging systems, and renewable energy generation. As these applications operate at ever higher power levels, and in increasingly mission-critical environments, the need for comprehensive test and burn-in has become more essential than ever. Semiconductor manufacturers are turning to advanced wafer level and package level burn-in systems to screen for early life failures, validate long-term reliability, and ensure consistent performance under extreme electrical and thermal stress.
This growing emphasis on reliability testing reflects a fundamental shift in the industry, from simply achieving functionality to guaranteeing the dependable operation throughout a product’s lifetime, a requirement that continues to expand alongside the scale and complexity of next-generation semiconductor devices. To conclude, we’re excited about the year ahead and believe nearly all of our served markets will see order growth in the fiscal year, with silicon carbide growth expected to strengthen further into fiscal 2027. Although we remain cautious due to ongoing tariff-related uncertainty and are not yet reinstating formal guidance, we’re confident in the broad-based growth opportunities ahead across AI and our other markets. With that, let me turn it over to Chris, and then we’ll open up the lines for questions.
Chris Siu: Thank you, Gayn, and good afternoon, everyone. Looking at our Q1 performance, results exceeded analyst expectations for both revenue and profit. First quarter revenue was $11 million, a 16% decrease from $13.1 million in the same period last year. It is important to note that last year’s Q1 benefited from a very strong consumables revenue quarter, which makes direct comparisons challenging. This quarter’s revenue was primarily driven by demand for our FOX CP and XP products. In Q1, we shipped multiple FOX CP single wafer test and burn-in systems, featuring an integrated high power wafer prober, for a new high volume application involving burn-in and stabilization of new devices for our lead customer in the hard disk drive industry.
Contactor revenues, including WaferPaks for our wafer level burn-in business and burn-in boards, or BIBs, for our packaged part burn-in business, totaled $2.6 million and made up 24% of our total revenue in the first quarter, significantly lower than $12.1 million, or 92%, of the previous year’s first quarter revenue. As we have discussed in the past, this consumables business is ongoing even when customers are not purchasing capital equipment for expansion. We feel that this revenue will continue to grow, both in absolute value and as a percentage of our overall revenue over time. Non-GAAP gross margin for the first quarter was 37.5%, down from 54.7% year over year. The decline in non-GAAP gross margin was mainly due to lower sales volume and a less favorable product mix compared to the previous year, which included a higher volume of higher margin WaferPaks.
Also, our products shipped this quarter included lower margin probers and an automated aligner, both manufactured by third parties and sold as part of our overall product offerings. Non-GAAP operating expenses in the first quarter were $5.9 million, an 8% increase from $5.5 million last year. Operating expenses increased due to higher research and development expenses for our ongoing projects, as we lean into resources to support our AI initiatives and the memory project. As we previously announced, we successfully closed the InCal facility on May 30, 2025, and completed the consolidation of personnel and manufacturing into the Fremont facility at the end of fiscal 2025. In connection with the facility consolidation, we eliminated a small number of positions due to redundancy in our global supply chain and incurred a one-time restructuring charge of $219,000 in our fiscal first quarter.
In the first quarter of fiscal 2026, we received $1.3 million of employee retention credit from the IRS for eligible businesses affected by the COVID-19 pandemic. We report this cash credit, net of the professional fees to process the refund, in other income on our income statement. In Q1, we recorded an income tax benefit of $800,000, and the effective tax rate was 26.5% for the first quarter, which includes the impact of stock-based compensation. Non-GAAP net income for the first quarter was $0.01 per diluted share, compared to $2.2 million, or 7¢ per diluted share, in the first quarter of fiscal 2025. Our backlog at the end of Q1 was $15.5 million, with $2 million in bookings received in the first five weeks of the second quarter of fiscal 2026; our effective backlog now totals $17.5 million. Turning to cash flows and the balance sheet, during the first quarter, we used $300,000 in operating cash flows.
We ended the quarter with $24.7 million in cash, cash equivalents, and restricted cash, compared to $26.5 million at the end of Q4, mainly due to the final $1.4 million payment for facility renovation. In total, we have spent $6.3 million on remodeling our manufacturing facility. With the renovation now complete, we have significantly upgraded our manufacturing floor, customer and application test labs, and clean room space for WaferPak full wafer contactors. The improvements also increase our power and water cooling capacity, enabling us to manufacture all of our FOX wafer level burn-in products and packaged part burn-in products, including Sonoma, Tahoe, and our other packaged part products, on a single floor. We are very excited about this renovation, which was specifically designed to enable us to manufacture more high power systems for AI applications.
We believe the investment in this facility renovation has increased our overall manufacturing capacity by at least five times, depending on the product configuration, and we are more ready than ever to support the growth of our customers. We celebrated these upgrades with a customer open house that was well attended and very positively received. Over the past quarter, we hosted many packaged part and wafer level burn-in customers who had the opportunity to see our expanded capabilities firsthand. Importantly, we do not anticipate additional capital expenditures for facility expansion in the near future. We have no debt and continue to invest our excess cash in money market funds. As Gayn mentioned, we started the year by withholding formal guidance due to ongoing tariff-related uncertainty.
As we remain cautious, we will continue with that approach for now. However, looking ahead, we are confident in the broad-based growth opportunities across AI and our other markets. Lastly, looking at the investor relations calendar, we will meet with investors at the seventeenth Annual CEO Summit in Phoenix tomorrow, Tuesday, October 7. The following month, we will participate in the Craig-Hallum sixteenth Annual Alpha Select Conference in New York on Tuesday, November 18. And on Tuesday, December 16, we will return to New York City to attend the NYC CEO Summit. We hope to see you soon at these conferences. This concludes our prepared remarks. We’re now ready to take your questions. Operator, please go ahead. Thank you.
Operator: At this time, we will be conducting a question and answer session. If you would like to ask a question, please press star one on your telephone keypad. A confirmation tone will indicate your line is in the question queue. You may press star two if you would like to remove your question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. One moment, please, while we poll for questions. Once again, please press star one if you have a question or a comment. Please continue to hold while we address some technical issues with the sound. One moment. Thank you for standing by. This is the operator once again, and we’ll take our first question. Christian Schwab, your line is live. Please go ahead.
Christian Schwab: Great. That sounds like a much better connection. So, Gayn, as we get into the second half of the year, and these more open-ended growth opportunities in AI that you’ve talked about in particular, when do you think we’ll see a material improvement in bookings to drive revenue down the road?

Gayn Erickson: Well, that sounds an awful lot like guidance again here. But what we believe, and what we’ve tried to communicate in our previous calls as well, is that our lead, our first AI wafer level burn-in production customer, we anticipate will need additional capacity, and that would be both bookings and revenue for this year. That could be more than last year, and we won’t put a top on that.
So the question is the timing of that. We’re not sitting on an order; we didn’t get it yet and just put it in our pocket. But as that order comes in, we typically will announce it within a couple of business days or so. What we are seeing is additional wafer level customer engagements. It’s pretty interesting that they kind of span from processors to ASICs. Okay. I’m sorry? Okay, hold on. Can you hear me okay? Alright, I can hear you again. So I’ll assume that Christian is on mute or something and that he can hear me as well. So we’re seeing it across several different groups, from hyperscalers to AI processor companies, kind of across the board. And it’s interesting. We had people come in directly saying that’s what they’re interested in.
We have people that are talking to us about Sonoma because their current customer is already doing qualifications, and they’re looking to do burn-in for the first time, looking to do their packaged part, and also now exploring the wafer level side of things. These generally do take some time, so I would probably guess these tend to be more second half, this being fiscal ’26 for us. But at this point, we’re just scrambling as fast as we can to address all the requests and requirements and keeping our heads down to focus on them. On the packaged parts side, it’s the same thing: both additional quals and additional processors being put on our systems and their enhancements to the Sonoma, as well as customer interest from additional production customers, with and without the fully automated integration of the pick and place handler that bolts right onto the front of Sonoma.
So I think it’s ongoing and very interesting, and we’re just really happy to have this number of engaged and active customers. Operator, can you hear us?
Operator: Yes, I can hear you. Are you ready for the next question?
Gayn Erickson: Yeah. Christian, do you have any other questions, or was that it? That seemed a little abrupt this time.

Operator: Yeah, we have a question coming from Christian Schwab. Christian, your line is live. Go ahead, please.
Christian Schwab: Yeah, that’s better. Sorry about that, Gayn. I was telling you I could hear you, but it wasn’t working. So we have a few customers here currently, and you talked about a bunch more customers coming in. As we look to the end of your fiscal year, do you have a target number of customers that you think you’ll be in the process of shipping to by then, or shipping to shortly afterwards?

Gayn Erickson: That’s a good question in terms of targets. Actually, we do have some discrete quantity targets. In fact, some of the KBOs, the bonus structures for our officers, are based not only on numbers but on specific targeted AI customers. I’m not really giving away a lot of insight, but I would say it’s plural,
for additional packaged part and also for wafer level. So at this point, we’re not really limiting ourselves, but we’re just trying to be cautious about oversetting expectations in terms of the timeline. One of the things that was interesting and has really come to fruition, and I apologize if I said this before on the last call, is that I’m starting to understand a number of things going on. One of them that was kind of new is that many of the ASIC suppliers in particular, and there’s some evidence of it within the GPU or processor suppliers themselves, don’t do a production burn-in like you’d think about it, like using one of our tools. They’re doing it at the system level, as in the rack.
So these processors are getting all the way to the end, and then they’re simply running them in rack form, sometimes at elevated temperatures and sometimes not, to try and get the first seven days of failures out of them. That is so inefficient and uses a ton of extra power, and there are only so many processors per rack, if you will. So I was somewhat surprised at some of this. Some of the test vectors that we’re getting from customers are not from a production tool today; it’s just an HTOL vector, which is a qualification vector instead of a production vector, and that’s because they weren’t doing production burn-in yet. So you’re really at the leading edge of this. But one thing is really clear from the data we’ve seen so far: the devices are failing.
We do see the failures in burn-in, so they’re absolutely able to screen them using our tools at wafer level and in production. And that creates the leading edge of this market and why we’re so excited about it. Obviously, on every single call you get on, CEOs are talking about how they’re using AI one way or the other. But this is really happening to us. It was 40% of our business last year, from zero, and we think it’s going to grow in both packaged and wafer level this year. And we’re still seeing the other businesses grow as well. We’re really glad to have gotten the facility upgrade behind us; there was a lot of work to get that done. Now we have the capacity to be able to ship so many more systems, particularly the high power ones. And if you come onto our floor right now, you’ll see AI wafer level burn-in systems being built today right next to Sonoma systems. So I think we believe we have the opportunity to capture multiple customers in both packaged and wafer level.

Christian Schwab: And then my last question, Gayn, is that last call you were quite enthusiastic about the TAM for AI-driven products for you being three to five times bigger than silicon carbide. Is there a time frame we should be thinking about for when that becomes evident? I kind of asked it on the backlog question, but I’ll ask it again more directly. Are we going to see material orders from one or two customers this fiscal year? Or is that something that’s just too early to know, but you feel confident it’s going to come? How should we be thinking about that?
Gayn Erickson: So I feel the latter is the easy out, to say that I’m confident they’ll come. I think timing it would be a lot more guidance than we’re providing right now. But there are also some of these evaluations where, as we prove it out, the customers can actually start contemplating how many they would want and when they would want to install them. The new evaluation, and I think we already alluded to it, is for a processor that is expected to go into volume production at the end of next year, and fiscal ’26 for us runs through May ’26. If you talk about calendar ’26, there are a lot of opportunities in play that need to play out that would be production for both wafer level as well as packaged part.
So it’s not that far away. Even for something that seems like it’s one year away, in our space there’s a lot of work that needs to be done to actually ramp a customer that’s one year out. So we’ll keep focused on this, and as we get a little closer, we hope to give you answers. To be candid, this will probably feel like you’ll hear enthusiasm, and we think we’re winning, and the customer has gotten good results; those will be the early indicators. And then we’re going to surprise everyone with a large production order, not unlike what happened with the first wafer level system, except some of these customers are just significantly bigger.
Christian Schwab: Great. Thank you. No other questions. Thank you.
Gayn Erickson: Thanks, Christian.
Operator: Thank you. Your next question is coming from Jed Dorsheimer. Jed, your line is live. Please go ahead.
Mark Schuter: Hey, Gayn. You have Mark Schuter on for Jed Dorsheimer. Congrats on the success this quarter and the announcements for the AI customers. I mean, that’s great. Can you give us a little color on how we should think about the engagement and the qualification cycle for these customers? Do you need, like, a new product cycle to occur? Do you need to slide in between Blackwell and Rubin? And can you give us a little bit of what it’s like in the room with the customers, the tenor of these guys, their risk aversion? Or does the overwhelming demand spur some willingness to try new equipment like Aehr Test Systems’?
Gayn Erickson: Oh, there’s actually a lot in there. Those are good ones. Alright, so let me talk about the qualification process. So far in the engagements that we’ve had, we don’t need a new product, okay? We are doing some things depending on the pitch of their probe cards, which we call our WaferPaks; we may need to do some things specifically for that. We have some design for testability features that we have been touting to our customer base that allow very short lead time, high volume, low cost WaferPaks. We can also supply them at higher cost and a little bit longer lead time if they don’t hit those DFT targets. We’ve got some of both. For example, in one of the engagements, we had a conversation with them about the pitch of their devices.
And we’re like, wow, you happened to choose a pitch on these, with so many pins, that’s driving the cost of your WaferPak up. They’re like, well, why didn’t you tell me before? And we kind of joked, because they hadn’t talked to us before. And they’re like, well, this will be no problem to cut in for our next generation; we’re just going to have to live with it on the current one. So they’re engaged with us in kind of a roll-up-the-sleeves way of working. The qualification, in some cases, is just validating that we can do the same type of thermal and power delivery on their devices as we’ve done with the other processors. And customers, I get it, are kind of like, it’s hard to imagine we can really pull this off, if they haven’t seen it with their own eyes.
And so we’re just showing it and demonstrating it to them, somewhat like what we ended up doing with the first silicon carbide customers. And then at some point, people get it. Now one thing that also seems to be going on is that these are pretty visible. I already said that these systems are sitting at an OSAT, and there aren’t that many of them, especially not that many of the biggest. There are a lot of people out there that are aware of the success of this. And even though the analysts are still trying to figure out everything, there are a lot of people that have pretty intimate knowledge and seem to know what’s happening. And so they’re like, can I do it that way too? So they’re leaning in, so it’s a little less of complete disbelief,
can you really do it, and more of, can you prove it for me? Now from a timing perspective, it’s just typical of the industry. Normally, when people are buying test equipment, semiconductor equipment like ours, you do it at some disconnect. Either you’re putting a new fab in, if you’re an IDM, or it’s with some new product, or simply the volume is growing so fast that you want to buy a tool that has more output per dollar. So in this case, outside of one supplier, everybody’s using TSMC today, and eventually Tesla will be using Samsung. But it’s not like there’s a new fab, although there are new fabs coming online. People are just getting access to those TSMC wafers and then want to be able to test them.
And they either do it in packaged form for burn-in on something like Sonoma, or at system level test, or all the way back at the rack. So customers are engaging because they need to buy capacity for these new products and for new things coming out. So it is a fair way of looking at it, to look at the intercept between product A and product B. That’s at least what’s been communicated to us with this latest one we just announced. And similarly, our first customer intercepted us with their transition to a newer device; we announced that a year ago. So that’s pretty typical, and sometimes that’s the gating item of their timing. Sometimes that’s fast or slow, but you sort of need to time it with that. Just on the tenor or the tone: people that have followed us understand that our value proposition, our pitch, if you will, is that semiconductors are growing extremely fast.
So it took forty years to get to $500 billion, and it’s going to take less than 10 to double that. Much of that is driven either directly by AI or by all of the pieces surrounding the explosive data center growth. What’s happening is these devices are not getting more reliable, for multiple reasons. The smaller and smaller geometries, and the fact that they’re putting multiple devices into one package because they can’t make these devices any bigger, are driving the requirements for reliability and burn-in test. And if you look at the road maps from all of the players, every single one of them, from all of the NVIDIA products to everyone else to the ASIC suppliers, all of their products going forward are pulling multiple compute processors, to make it generic, into a single package along with many, many stacks of HBM and ultimately optical IO chipsets.
You put these on complex advanced packaging substrates, and they’re extremely expensive. And I always remind people, the reason you burn them in is because they fail. And when they fail, you take out all the other devices. So the value proposition, if someone could ever do wafer level burn-in, is overwhelming, because the cost of the wafer level burn-in is cheaper than the yield loss. I actually alluded to it in my prepared remarks that our lead customer for packaged part burn-in is going to do a couple or a few generations in packaged part and then wants to switch to wafer level. So what are they going to do with all those Sonomas? It doesn’t matter. The yield advantage of moving to wafer level pays for it all. So that’s a macro trend heading our way.
And it’s not just AI. It happened to us on the silicon carbide side of things. We see it in stacked memories in both DRAM and flash. We see it in other complex devices in GaN that are going into automotive and that are mixing different devices together, which is why it’s driving toward wafer level. These large trends are good for reliability overall, a tide that’s rising for all, and really good for us, but especially for our unique products, particularly the Sonoma and the high power wafer level burn-in systems we have with our FOX products.
Mark Schuter: Gayn, all that color is very helpful. Thank you. To dig in a bit on that last part about Sonoma versus the FOX products, what’s the gating factor for why customers are going first with Sonoma and not straight to wafer level burn-in? What needs to be proven out for wafer level burn-in for those customers? And I’m assuming there’s a sales cycle there, where you’d like to start with Sonoma and then move people to wafer level burn-in. So how does that transition go?
Gayn Erickson: Yeah. The way we look at it is we say we’re just neutral. If you want to do packaged part or you want to do wafer level, we love you both. It’s not easy to just go talk someone out of whatever they’re used to, so in this case, we don’t have to. We just say, listen, we think we make the best machine for qualification and reliability of your complex packages with Sonoma. It can test all the processors, HBM, and all the chipsets inside of it in a single pass during your quals. If you want, we’ll do it in production as well, and we’re now adding automation to it. But if you’d like to go to the next step, you could take the high-failing devices out of there and do a wafer level burn-in of them before you put them in those packages.
And our data would suggest you don’t need to burn them in again. But if you still need a little bit of burn-in, that may be fine. The point is you don’t want to have the massive yield loss. Some of these processors have four and eight compute chips in them, plus another six or eight HBM stacks on them. Just the CoWoS substrate is extremely expensive and rare. So it makes sense to go to wafer level. But to be candid, one year ago, twelve months ago, we didn’t even have the first order. There was not one machine in the world that could do a wafer level burn-in of an AI processor. None. We’re the only ones, and we have now just shipped our first systems, and we’re at the front end of this thing.
I understand people are sort of in a doubting mode. Let us prove it to them. And for those that are on the call, if you have a processor, you can sit down with us under nondisclosure. We can tell you exactly which specific files we need, and we can do a paper benchmark and give you an answer within a couple of days as to the feasibility of your devices. So far, we have not found one that we haven’t been able to test among those we’ve been given that detailed data for. I’m sure there are some out there, but for now, we’re on a roll.
Mark Schuter: Much appreciated.
Operator: Thank you. Your next question is coming from Bradford Ferguson. Bradford, your line is live. Please go ahead.
Bradford Ferguson: Hello, Gayn. I’m curious about the cost of waiting until you get to the motherboard or the packaged part, or the final part. When we were talking about silicon carbide, you could have twenty-four or forty-eight SiC devices in one inverter, and then the whole inverter’s bad, and maybe that’s a thousand or $2,000. But the retail price on these NVIDIA parts is what, $40,000?
Gayn Erickson: Well, rumor is they have really high margins. And I’d love it if the customers would give me credit for their sales price; they really only give me credit for their cost. But fair enough, their cost is significantly higher than any silicon carbide module ever would be. And by the way, to me, the craziest thing is how many people are doing it at the rack level. You’re talking about burning it in all the way up at the computer level. And obviously, a failure there is a lot more expensive than it would be all the way back at wafer level. So in our industry, we prefer to shift left.
You want to go as far left in the process as possible because it’s way more cost effective. In this case, we have the first two steps on the left side: wafer level, and then the module level, before that module is actually put into the system level, where you’d start to see all of the power supplies and everything else on it, like a GB200 module itself, and then certainly before it goes over to Supermicro or Dell or somebody for some mainframe rack. So, one thing to put in perspective, and I don’t think this is the value proposition yet, but it is interesting: we know that people are doing this burn-in at the rack level, or the computer level. At the computer level, what burn-in basically does is apply stress conditions of power, via voltages or current, and temperature.
What that does is accelerate the life of the part without killing it. So I can take a device and, in twenty-four hours, make it look like it’s one year old, and if it hasn’t died by then, it’s going to last twenty years. There are all kinds of books on it; you can Google it and find out about the basic process of burn-in and why you do it. The key here is you want to do it in twenty-four hours, or four hours, or two hours, or something along those lines, to get the infant mortality out so it doesn’t ship to the customer or take down your large language model compilation. Now when you’re at the system level, you can’t run that rack at 125 degrees C; everything will burn up. In fact, those racks are running cold water through them,
probably running at 30 degrees C maximum. I know of a company that was trying to do some things to get isolation of the GPUs or the processors up to 60 degrees C, and their burn-in time was measured in days at the system level. That’s what they were doing. Now by moving it to wafer level, we can actually run the devices at a junction temperature of 125 degrees C, which is an accelerant of more than 10x. We can also run the voltages extremely close to their edge. We can get the burn-in times to come down. And when we do that, we’re applying power only to the processor, not the HBM, not all the inefficiencies everywhere else, not the rack, etcetera. Just the processor, and we can do it for a significantly shorter amount of time.
The long and short of it is I can burn it in to the same level of quality at a fraction of the power. Now I don’t think anyone’s going to buy our system because of that per se, although there’s some argument for it. But you know what’s hard? Getting a permit for a megawatt burn-in floor for your racks. So people may buy our systems because they can get the power infrastructure to burn in hundreds of wafers at a time in parallel on a regular 480 volt, multi-thousand amp circuit like we have in our building. If you had to burn in a bunch of racks in our building, you wouldn’t be able to do it. But I could have 10 systems running with nine wafers apiece and test a hundred wafers at a time with the power that I have in my facility,
which is not that atypical of a facility in the Bay Area, in Silicon Valley. So there is a value proposition there. In addition to the real cost savings, it might just be the feasibility of power.
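[Editor's note: to put rough numbers on the thermal acceleration described above, here is a minimal sketch using the standard Arrhenius acceleration model. The activation energy value is an illustrative assumption, not a figure from the call.]

$$AF = \exp\!\left[\frac{E_a}{k}\left(\frac{1}{T_{\text{use}}} - \frac{1}{T_{\text{stress}}}\right)\right]$$

With an assumed $E_a \approx 0.3\ \text{eV}$, $k = 8.617\times10^{-5}\ \text{eV/K}$, $T_{\text{use}} = 303\ \text{K}$ (30 degrees C, the rack-level case) and $T_{\text{stress}} = 398\ \text{K}$ (125 degrees C junction temperature), $AF \approx \exp(3482 \times 7.9\times10^{-4}) \approx 15$, consistent with the "more than 10x" accelerant mentioned; higher activation energies or added voltage stress give substantially larger factors and correspondingly shorter burn-in times.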
Bradford Ferguson: And so you mentioned the high bandwidth flash. I’m hearing from some systems makers that they’re focused more on burn-in just because of how expensive it is to scrap the whole motherboard or whatever. Do you have any kind of an in with high bandwidth memory, or is it mainly the high bandwidth flash?
Gayn Erickson: Yeah. As we talked about, our belief at first was that the engagements and the interest were first on the flash side of things. There are also discussions on the DRAM side of things. People are really scrambling to try and solve that through all kinds of mechanisms, and I won’t get into all the technological things that we understand. There are very different implications when you talk about Micron, Samsung, and Hynix and what they do, how they stack their memories, and how they test them and burn them in. They have key differentiating features among themselves that make test interesting. We have pretty good insight into that; I’m not going to talk about it publicly.
But that makes it interesting. The bottom line is that high bandwidth memory, and eventually high bandwidth flash, needs to be burned in, and needs a cycle and stress to remove those failures somehow, or they’re going to show up, as they have been, in the processors, in the AI stacks. That’s widely known and understood. NVIDIA came out, what, six months ago, yelled at everybody, and said, you need to figure out how to burn these things in before you ship them to me; we’re sick and tired of it. So I’m not creating rumors; those are widely understood reports. And so right now, what we’re seeing in the test community, and people overuse "the Wild West," is people scrambling for good ideas on how to address this and running as fast as they can.
And it makes it exciting every day when you show up to work and you’ve got people asking, how can you help us? So I love our hand. I love the cards we’re dealt right now. I love our position. I love the visibility that we have. I think we can now say we have communicated with every single one of the AI players, and we have a line into them, some thread, either packaged part or wafer level related. That gives us some great insights, and I think we may be completely unique in that realm. So I think the HBF looks pretty interesting. Again, that stuff takes time. But more and more things are breaking the infrastructure of test because of power at wafer level, and that’s a good thing for us.
We’re really good at that. I just throw out 3.5 kilowatts per wafer, and most people would not know what that means. That’s crazy. The world has wafer probers, thousands of them installed, that have 300 watts of power capability. If you try to go get a prober that has 1,500 to 2,000 watts, it’s a specialized half-a-million-dollar prober; it’s what we ship with the CP to the hard disk drive guys. That’s one wafer’s capacity. Our systems can do three and a half thousand watts on each of nine wafers in one machine. Nobody else can do three and a half thousand watts on even one wafer in one machine. And so people are coming to us because of the thermal capabilities that are unique; many, if not most, of them are patented around the whole WaferPak concept.
And what we do in the blade, where we deliver thermal power without a wafer prober to create uniformity across a 3,000-plus watt wafer, is really awesome, and it’s fun to talk about with the technical people. I’d say people are quite impressed with what they hear. And it’s great to rotate people through here because, by the way, they see it; we can show it to them in operation when they come. This is not a story. So the more of these things there are, the rising tide, the better shape we’re in. And we’re not abandoning our silicon carbide customers that are listening. I know they have ramps. They have opportunities. There are new fabs, new capacity coming on. They have new technologies.
We’re not abandoning the OEMs, the electric vehicle suppliers, that we have met with personally and helped to develop the burn-in structures and burn-in plans that they drive their vendors toward. We’re fully committed to those guys, and we’ll be there as they ramp. And we have more capacity than we ever had to be able to address their needs at a lower price point. So I think we’ve got that covered. We’re not pivoting the company; we’re just adding to it with this AI business.
Bradford Ferguson: On silicon carbide, this will be my last one. Thank you for your generosity. ON Semiconductor, I think one reason for their success is how aggressively they adopted Aehr Test Systems’ FOX XP systems. And we had a pretty large bankruptcy happen with one of their competitors. Is there some kind of risk for the other chipmakers, if they don’t take burn-in more seriously, that it could spell issues for them?
Gayn Erickson: So let me answer it this way. I have been invited to be a keynote speaker and have spoken at multiple technical conferences around the world, silicon carbide and gallium nitride conferences. I’ve sat on several panels, and I have been almost emotional in some of those discussions, because we have seen the test and burn-in data of almost all of the wafers in the world. That’s pretty bold, and certainly more than anyone else by far. Everybody would like to think that they are special and their devices are just so much better than everybody else’s. The reality is that these devices fail during burn-in that represents the actual duty cycle, or what’s called the mission profile, of electric vehicles. What that means is that if you do not burn them in, it is our belief, based on the data that we have, that they will fail during the life of the car.
Period. We’ve talked about that; I think I’ve quoted it several times. Whatever you do, in my opinion, never buy an electric vehicle whose devices didn’t have burn-in, for something in the range of six to eighteen hours depending on the size of the engine and things like that. There are OEMs that have the data, that have failed suppliers who tried to qualify without doing an extensive burn-in and kicked them out. And there have been very large suppliers that have lost in the industry because of quality and reliability. So my call to arms for everybody is that there’s no reason not to do wafer level burn-in, or packaged part if you don’t want to go with us. Whatever you do, don’t skip it. And we now have our 18-wafer system, even at high voltage.
So we’ve extended the capability even further. The cost of test at high voltage on our system, with capital depreciation over five years, etcetera, is about 0.5¢ per die per hour on an eight-inch silicon carbide inverter wafer. You can do twenty-four hours of burn-in for 12¢ a die. And we have been very clear about that with all the OEMs, and they understand it. So they drive for a level of quality that they can measure directly on our tools from their suppliers. And I think there is a difference, in market share, between the people that have adopted a high level of quality and reliability and those that haven’t. All I’ll say is I think ON Semiconductor has done an incredible job. In 2019, or I think the year before, they had done $10 million in silicon carbide, and they’re now kind of neck and neck for market leadership.
And they have won well more than their fair share of the industry, and I’m just repeating what they have said, across Europe, the US, Japan, and even China. They have done really, really well, and I commend them for that.
Operator: Thank you. Your next question is coming from Larry Chlebina. Larry, your line is live. Please go ahead.
Larry Chlebina: Hi, Gayn. The news today on the AMD hookup with OpenAI, does that accelerate the evaluation process that you have with that second processor, or will that put more pressure on getting that done?
Gayn Erickson: We have not talked to that level of detail to determine who it is. We’ve given up hints that it’s amongst the top suppliers of AI, and it’s not one of the ASIC guys. So I’m going to try and avoid being more specific. I will restate that we are in conversation with every one of the suppliers, and I will then say, including those guys. My interpretation of that is that it honestly just sort of warms my heart to see the different people’s commitment to the different types of processors. Without going into whether they are, or could, or might already be a customer or not, one thing about AMD, and again we’ve used them not as an endorsement but as one of the examples, is that their MI325 has eight processor chips in addition to, I think, at least that many HBM stacks.
Plus a chipset, in one substrate. If there’s anyone that should be doing wafer level burn-in, they would be among them. But right now, we provide opportunities for our customers, including the likes of those guys, to buy our tools for their burn-in requirements for qualifications, either themselves or at one of the many test houses that have our systems; to use our systems for packaged part burn-in as the lowest cost alternative to things like the system level test systems that are being used out there; and, as the most advanced approach, to do wafer level burn-in over time. So I won’t comment on anything more than that. Sorry, Larry. But, you know, I think in general, good news for the processor market is generally good for us right now.
Larry Chlebina: The optical IO opportunity, is that going to involve actually new machines instead of upgrading existing machines? Is that transition going to happen here shortly, or do they have more machines that they’re going to upgrade?
Gayn Erickson: The forecast includes both. More upgrades and more new machines.
Larry Chlebina: They’re going to be running out of machines to upgrade sooner or later, though, aren’t they?
Gayn Erickson: Yeah. But there’s also a scenario where they have a bunch of products on the current machines that haven’t gone away. So while you’re upgrading these systems, they’re backwards compatible; you can still use the old WaferPaks and everything on them. But nevertheless, it’s both. And then the other thing, and it’s subtle for those who don’t know: we introduced a couple of years ago a front end to the FOX systems that allows fully hands-free operation with a WaferPak aligner. So you can come up to it with FOUPs, in this case with 300 millimeter wafers, either overhead or with AGVs, automated guided vehicles, through an E84-compliant port, which allows you to not even come and touch the machine.
And the wafers can run around the fab, run a burn-in cycle, and then move on and go to the next step of test.

Larry Chlebina: So you can upgrade them with the automation as well then?

Gayn Erickson: Exactly. So we actually took the tools that they had bought in the past and added the automation. For 300 millimeter fabs running things like memory, big AI processors, and even this silicon photonics, that’s how you kind of want to do it; that’s the best way of doing it, full automation. But if they want to run offline, they can do that too with us.
Larry Chlebina: On this HBF opportunity, is this a different company than the one you’ve been working with for a year and a half?

Gayn Erickson: Nope. Same company, just different requirements.
Larry Chlebina: Okay. Do you expect anything to break loose on the original enterprise flash application, or is this going to continue on?
Gayn Erickson: Yeah. It kind of feels like this is, I was going to say trumping it, but that word means something different these days. It feels like this is such an enormous opportunity for the flash guys that it’s sort of the shiny bright light. That may actually be better for us. I’m not sure it’s better in the near term, whether the opportunity moves as fast; we’ll see. But they could configure a system. The new system configuration is a superset of the old requirements that we had already worked on previously, and we’re working on an updated proposal to show them how they could build blades in our system that could do both their old devices and the new ones. So maybe that’ll help it be better. I think it is, but it’s always interesting when things change. But one thing: none of their old tools will work with this HBF.
Larry Chlebina: No. I think so.
Gayn Erickson: Yep. So that’s you know, maybe that’s a good thing for us. Right?
Larry Chlebina: Alright. That’s all I had. I’ll see you, tomorrow, I guess.
Gayn Erickson: Thanks, Larry. And Larry’s just alluding to the fact that we’re going to be over at SEMICON West in Arizona, and there’s the CEO Summit that Chris alluded to. Although, Chris, I don’t know if you knew this: you were breaking up, and it sounds like we had problems with the operator connection. The new one has been a lot better, so sorry about that to the folks on the line. Operator, any other questions?
Operator: Yeah, I’m showing there are no further questions in queue at this time, and I’d now like to hand the floor back to management for closing remarks. Thank you.
Gayn Erickson: You know, I meant to try and work this in, so I’m going to do one other little thing. The other topic we haven’t talked about, and maybe next call we’ll spend a little bit more time on it: we did a deep dive last time on the AI side of things, and this time was more of an update. There are other products that we have, and one of the things I want to highlight is the activity we have within packaged part burn-in outside of AI. It turns out that with the InCal acquisition, we have a low power and a medium power system, called Echo and Tahoe, that we’ve been shipping a lot of kind of quietly in the background. And recently, we’ve had some customers, I think egged on by some competitors, who were saying, oh, they aren’t even doing that stuff anymore, and that’s just not true.
These products are beloved by their customers for their software and their flexibility, and they did a really good job. In fact, those products were the ones that honestly took Aehr out of the packaged part burn-in market, because they were just better than ours. And we still love them. If you come onto our floor, you’ll see them being built right alongside our Sonoma systems and our FOX systems as well. So just a message out to our customers: we still love you. We’re still committed to supporting those products, and we have way more manufacturing capacity than InCal ever did. So don’t be timid. We’re happy to continue to ship as we have. And we’ll give the investors a little bit more insight
into some of the systems we’re building right now and some of the applications they’re going into, which are another part of this overall shift of all semiconductors needing more and more reliability test, from qualification to burn-in. So with that, I thank everybody, and we appreciate your time and your putting up with a little bit of the issues on the call. We’ll work on that and make sure we do better next time. We appreciate you. Thank you, and goodbye.
Operator: Thank you. This does conclude today’s conference call. You may disconnect your phone lines at this time, and have a wonderful day. Thank you once again for your participation.