Aehr Test Systems (NASDAQ:AEHR) Q2 2026 Earnings Call Transcript

January 8, 2026

Aehr Test Systems beat earnings expectations: reported EPS was -$0.04 versus expectations of -$0.08.

Operator: Greetings. Welcome to the Aehr Test Systems Fiscal 2026 Second Quarter Financial Results Conference Call. [Operator Instructions] Please note, this conference is being recorded. I will now turn the conference over to your host, Jim Byers of PondelWilkinson, Investor Relations. You may begin.

Jim Byers: Thank you, operator. Good afternoon, and welcome to Aehr Test Systems Second Quarter Fiscal 2026 Financial Results Conference Call. With me on today’s call are Aehr Test Systems’ President and Chief Executive Officer, Gayn Erickson; and Chief Financial Officer, Chris Siu. Before I turn the call over to Gayn and Chris, I’d like to cover a few quick items. This afternoon, right after market closed, Aehr Test issued a press release announcing its second quarter fiscal 2026 results. The release is available on the company’s website at aehr.com. This call is being broadcast live over the Internet for all interested parties, and the webcast will be archived on the Investor Relations page of the company’s website. I’d like to remind everyone that on today’s call, management will be making forward-looking statements that are based on current information and estimates and are subject to a number of risks and uncertainties that could cause actual results to differ materially from those in the forward-looking statements.

These factors are discussed in the company’s most recent periodic and current reports filed with the SEC. These forward-looking statements, including guidance provided during today’s call, are only valid as of this date, and Aehr Test Systems undertakes no obligation to update the forward-looking statements. Now with that, I’d like to turn the conference call over to Gayn Erickson, President and CEO.

Gayn Erickson: Thanks, Jim. Good afternoon, everyone, and welcome to our second quarter fiscal ’26 earnings conference call. I’ll begin with an update on the key markets we’re targeting for semiconductor test and burn-in, with a particular focus on the common growth drivers we’re seeing across these markets, namely the massive explosion of AI and data center infrastructure. After that, Chris will walk through our financial performance for the quarter, and then we’ll open up the call for questions. While second quarter revenue was softer than anticipated, we made significant progress in both our wafer-level and packaged-part burn-in segments and are very excited about our prospects moving forward. Based on customer forecasts recently provided to Aehr, we believe our bookings in the second half of this fiscal year will be between $60 million and $80 million, which would set the stage for a very strong fiscal ’27 that begins on May 30.

During the quarter, we made substantial progress with wafer-level burn-in engagements and production installations across AI processors, flash memory, silicon photonics, gallium nitride and hard disk drives. We’re encouraged to see that one of our key growth strategies focused on reliability solutions for the exploding demand for AI and data center infrastructure is beginning to bear fruit. In packaged-part burn-in, we secured key new device wins for our Sonoma system supporting high-temperature operating life qualifications for AI devices. These wins are expected to drive additional capacity at test houses, including at least one customer that has elected to move into production in late calendar ’26, which we believe could result in meaningful volumes of Sonoma production systems.

In addition, in the last month, we received a very large forecast from our lead Sonoma production customer for AI ASIC production capacity. This forecast is expected to drive very strong and potentially record bookings for the company this fiscal year and position us well for significant revenue growth next fiscal year with their requested shipments starting in the first fiscal quarter of our next fiscal year. Taken together, our increased visibility across multiple end markets gives us great confidence in our outlook. As a result, we’re reinstating financial guidance in fiscal ’26, which we’ll touch on later in today’s call. Now let’s talk about our key segments. Starting with our wafer-level burn-in during the quarter, we expanded engagements and completed additional production installations across several end markets.

Our lead AI wafer-level burn-in customer continues development of its next-generation processor and is currently discussing additional capacity with us. They’re forecasting additional system and WaferPak capacity orders this fiscal year and plan to transition to our fully integrated automated WaferPak aligner for 300-millimeter wafers. We expect this customer to continue scaling and are excited to support their growth. We also announced a strategic expansion of our partnership with ISE Labs during the quarter to deliver advanced wafer-level test and burn-in services for next-generation high-performance computing and AI applications. This partnership accelerates time to market, improves performance and gives customers the option of either packaged-part or wafer-level test and burn-in for their production volumes.

ISE, together with its parent company, ASE, represents the world’s leading outsourced semiconductor assembly and test or OSAT platform, serving a global roster of top-tier semiconductor customers. As part of our benchmark evaluation program with a top-tier AI processor supplier we announced last quarter, we completed development of our new fine-pitch WaferPaks for wafer-level burn-in of high-current AI processors. These are currently in test with this potential customer’s processors and are designed to validate our FOX-XP production systems for wafer-level burn-in and functional test of their high-performance, high-power AI processors. We’re currently completing start-up procedures such as power-up sequencing, thermal profiling, test vectors, timing and high-speed differential clocks and expect to complete data collection this quarter.

While we’re demonstrating our new fine-pitch high-current WaferPaks for this benchmark, many customers can utilize lower-cost WaferPak designs if certain design-for-test rules are incorporated upfront. These approaches reduce cost and lead time and are especially attractive to customers focused on faster time to market for wafer-level high-temp operating life qualification. Since last quarter’s earnings call, 2 additional AI processor companies are planning wafer-level benchmark evaluations. These benchmarks typically take about 6 months, and we expect to make meaningful progress beginning this quarter. Both customers are evaluating wafer-level test and burn-in as an alternative to packaged-part or system-level test for large advanced AI modules that combine multiple AI accelerators and stacked high-bandwidth memory.

Moving burn-in upstream to the wafer-level significantly reduces cost and yield risk by avoiding scrapping expensive substrates and memory stacks when early failures occur later in the process. We have seen estimates that show the cost of the substrate is more than a single processor and the cost of the high-bandwidth memory is even higher. Turning to flash memory. We completed our wafer-level benchmark with a global leader in NAND flash just prior to the holidays. The customer has now taken the wafers back for further processing to validate correlation with their internal process. This benchmark demonstrated our ability to test flash memory wafers with significantly higher parallelism and power than is possible using traditional probers and group probers from companies such as TEL or ACCRETECH.

We’ve also proposed a next-generation solution enabling test of a new emerging flash memory device called High Bandwidth Flash, or HBF, designed for AI workloads. This proposed solution leverages our FOX-XP platform, WaferPaks and auto-aligner technology and would support single-touchdown high-power test on 300-millimeter wafers. While development of this system would take over a year following customer commitment, we believe this represents a compelling entry point into a large and evolving memory market. We look forward to sharing more details as this progresses. Turning to silicon photonics. We believe that silicon photonics used in data centers and in chip-to-chip I/O is going to be a significant market driving production burn-in capacity for our FOX wafer-level burn-in systems and WaferPaks.

Our lead customer has now firmed up its production ramp, which we expect to begin early next fiscal year. While this timing is later than previously expected, it aligns with recently announced AI processor platforms and positions us well for calendar 2026 orders and deliveries in fiscal ’27. We’ve also finalized a forecast with another major silicon photonics customer initially targeting data center applications with a road map toward optical I/O. We expect to book their initial turnkey FOX system soon with delivery planned for May of this year. In gallium nitride power semiconductors, we continue to support our lead production customer, though we experienced delays related to unanticipated high-voltage fault conditions that required WaferPaks and protection circuit redesigns.

This delayed approximately $2 million in WaferPak shipments from last quarter into this quarter, along with some system enhancements. Shipments have now resumed, and the lessons learned have significantly strengthened our GaN power supply burn-in capability. If anyone tells you that testing and burning in full wafers of GaN power semiconductors at up to 600 volts or more is easy, don’t listen to them. We also continue to engage with multiple new potential GaN customers and are developing WaferPaks for several new device designs that are expected to go to high-volume production for applications like data center infrastructure and power delivery, automotive electrical power distribution in both ICE and hybrid electric vehicles, and even power semiconductors used for electrical breakers.

Aehr has a unique solution that can deliver full turnkey, fully automated wafer handling and probing for test and burn-in of GaN wafers in sizes from 6 to 8 inches and even 12-inch, or 300-millimeter, wafers. Turning to silicon carbide. As we previously discussed, silicon carbide demand has been weighted toward the end of this fiscal year. Customers continue to be optimistic about this market and their capacity needs, but we’ve tried to take a very conservative stance of, essentially, show us the orders before we believe them. Our lead customer recently transitioned from 150-millimeter to 200-millimeter wafers, nearly doubling output without adding new FOX-XP systems, supported by Aehr’s proprietary WaferPaks that we developed to accommodate both 150 and 200-millimeter wafers, contacting 100% of the die on each in a single touchdown.

They’re now seeing additional needs for WaferPaks this year, but additional capacity for systems appears to be a year out. We pushed expected system orders out of our near-term forecast until next fiscal year, but have capacity in systems and WaferPaks to continue to support their surge capacity needs as well as those of our other silicon carbide customers. While electric vehicle-related demand has slowed industry-wide, we remain well positioned with the most competitive wafer-level burn-in solution available, and we expect to benefit when growth resumes. In semiconductors used in data center hard disk drives, we’re installing additional FOX-CP systems for a major supplier of hard disk drives for wafer-level burn-in of special components in their drives.

They’ve indicated plans for additional purchases later this calendar year. While their device unit volumes are very large, the overall revenue opportunity remains modest due to short stress times and the massive parallelism achieved on our FOX-CP system and proprietary high-power WaferPak wafer contactors. Now let me talk about packaged-part burn-in. We’re seeing continued momentum in packaged-part qualification and production burn-in for AI processors, driving growth in our new Sonoma ultra-high-power packaged-part burn-in systems and consumables. As we announced today in a separate press release, during our fiscal third quarter to date, we have received orders from multiple customers totaling more than $5.5 million for our Sonoma ultra-high-power packaged-part burn-in systems, including initial orders from a premier Silicon Valley test lab for our newly introduced higher-power configured Sonoma system that can also support full automation.

These orders already exceed the total Sonoma orders for the entire second quarter, highlighting the accelerating demand we’re seeing for package-level burn-in of high-power AI and compute devices. This quarter, we also secured key new device wins on the Sonoma platform for high-temp operating life qualification. These wins are expected to drive additional capacity at test houses, with at least one customer planning to transition to production later this calendar year, generating significant system demand. Our lead packaged-part burn-in production customer for AI processors continues to ramp and is forecasting substantial growth in 2026 and beyond. Although we have not yet received the purchase order, we have received a substantial forecast from this customer for AI ASIC production capacity, with requested Sonoma production packaged-part burn-in system and BIM shipments beginning in the fiscal first quarter of ’27.

That quarter starts May 30, and we expect this to contribute to very strong bookings in fiscal ’26 and generate significant revenue growth in fiscal ’27. This customer also plans to introduce much higher power ASICs later this year, for which we are already developing the high-temp operating life qualification burn-in modules and sockets to be used on the Sonoma systems at one of the premier Silicon Valley test services companies that has many of our systems installed. This AI accelerator ASIC processor is also forecasted to go to production burn-in and drive even higher volume needs for production burn-in systems downstream at the OSATs in Asia. We feel we’re very well positioned with our Sonoma system for this production capacity need and believe this could drive very substantial volumes of Sonoma systems in our next fiscal year.

During the quarter, we completed development of a next-generation fully automated higher-power Sonoma system, supporting up to 2,000 watts per device. This system enables continuous flow operation, improved throughput and seamless transition from qualification to high-volume production using the same fixtures and sockets. These capabilities enable customers who are focused on high-temp operating life reliability testing to have a system that is fully software and hardware compatible with the Sonoma systems they have installed, which simplifies and accelerates time to market that is critical for HTOL testing of new AI processors. This Sonoma burn-in system can also simply bolt on a fully automated handler developed and sold by Aehr Test as a turnkey solution to allow hands-free operation with less than a couple of minutes of overhead per burn-in cycle, which is amazing for production burn-in needs.

We’re also seeing increased demand for our lower-power Echo and Tahoe packaged-part burn-in systems, driven by our installed base of more than 100 systems across over 20 semiconductor companies worldwide. But I’ll wait for another call to discuss these systems and the markets they serve in more detail. As stated last quarter, the rapid advancement of generative AI and the accelerating electrification of transportation and global infrastructure represent 2 of the most significant macro trends impacting the semiconductor industry today. These transformative forces are driving enormous growth in semiconductor demand while fundamentally increasing the performance, reliability, safety and security requirements of the devices used across computing and data infrastructure, telecommunications networks, hard disk drive and solid-state storage solutions, electric vehicles, charging systems and renewable energy generation.

As these applications operate at ever higher power levels and in increasingly mission-critical environments, the need for comprehensive test and burn-in has become more essential than ever. Semiconductor manufacturers are turning to advanced wafer-level and package-level burn-in systems to screen for early life failures, validate long-term reliability and ensure consistent performance under extreme electrical and thermal stress conditions. This growing emphasis on reliability testing reflects a fundamental shift in the industry from simply achieving functionality to guaranteeing dependable operation throughout a product’s lifetime, a requirement that continues to expand alongside the scale and complexity of next-generation semiconductor devices.

This year, we’re making significant progress expanding into additional key markets for our semiconductor test and burn-in solutions, including AI processors, gallium nitride power semiconductors, data storage devices, silicon photonics integrated circuits and flash memory. This diversification of our markets and customers is significant given our revenue concentration in silicon carbide for electric vehicles over the last 2 years. This progress on key initiatives expands our total addressable market, diversifies our customer base and provides us with new products, capabilities and capacity, all aimed at driving revenue growth and increasing profitability. The progress we made this quarter with a significant number of customer engagements and production installations provides improved visibility into future demand.

As a result, we’re reinstating guidance for the second half of fiscal ’26. For the second half of fiscal ’26, which began November 29, 2025, and ends May 29, 2026, Aehr expects revenue between $25 million and $30 million. As stated earlier, although we’re not providing formal bookings guidance, based on customer forecasts recently provided to Aehr, we believe our bookings in the second half of this fiscal year will be much higher than revenue, at between $60 million and $80 million, which would set the stage for a very strong fiscal ’27 that begins on May 30, 2026. With that, let me turn it over to Chris, and then we’ll open up the lines for questions.

Chris Siu: Thank you, Gayn, and good afternoon, everyone. I’ll begin with bookings and backlog, then walk through our second quarter financial performance, cash position, outlook and investor activity. The company recognized bookings of $6.2 million in the second quarter of fiscal 2026 compared to $11.4 million in the first quarter. At the end of the quarter, our backlog was $11.8 million. Importantly, during the first 6 weeks of the third quarter, we received an additional $6.5 million in bookings. This increase was driven primarily by an order from a premier Silicon Valley test lab for our newly introduced high-power configured Sonoma system, which we announced this afternoon. Including these recent bookings, our effective backlog has now grown to $18.3 million, providing increased visibility as we move through the remainder of fiscal 2026.

Turning to our second quarter results. Revenue was $9.9 million, down 27% from $13.5 million in the prior year period. The decline was primarily driven by lower shipments of WaferPaks, partially offset by stronger demand for our Sonoma systems from our hyperscaler customer. Contactor revenues, which include WaferPaks for our wafer-level burn-in business and BIMs and BIBs for our packaged-part burn-in business, totaled $3.4 million, representing 35% of total revenue. This compares to $8.6 million, or 64% of revenue, in the second quarter last year. Non-GAAP gross margin for the second quarter was 29.8% compared with 45.3% a year ago. The year-over-year decline reflects lower overall sales volume and a less favorable product mix, as last year’s quarter included a higher proportion of higher-margin WaferPak revenue.

Non-GAAP operating expenses in the second quarter were $5.7 million, down 4% from $5.9 million in Q2 last year. The decrease was primarily due to lower personnel-related expenses, which were partially offset by higher research and development costs, including higher project spending as we continue to invest resources in AI benchmark initiatives and memory-related programs. As previously announced, we successfully closed the Incal facility on May 30, 2025, and completed the consolidation of personnel and manufacturing into Aehr’s Fremont facility at the end of fiscal 2025. During the quarter, we negotiated an early lease termination with the landlord, reducing our obligation by 5 months of rent. As a result, we recorded a reversal of $213,000 related to a previously accrued onetime restructuring charge.

During the quarter, we recorded an income tax benefit of $1.2 million, resulting in an effective tax rate of 27.3%. Non-GAAP net loss for the quarter, which excludes the impact of stock-based compensation, acquisition-related adjustments and restructuring charges was $1.3 million or negative $0.04 per diluted share compared to net income of $0.7 million or $0.02 per diluted share in the second quarter of fiscal 2025. Turning to cash flow. We used $1.2 million in operating cash during the second quarter. We ended the quarter with $31 million in cash, cash equivalents and restricted cash, up from $24.7 million at the end of Q1. The increase was primarily due to proceeds from our at-the-market equity program. As a reminder, in the second quarter of fiscal 2025, we filed a new $100 million S-3 shelf-registration that was approved by the SEC for 3 years, followed by an ATM offering of up to $40 million.

During the second quarter of fiscal 2026, we raised $10 million in gross proceeds through the sale of about 384,000 shares. At quarter end, $30 million remained available under the ATM. We intend to utilize the ATM selectively with a disciplined approach focused on market conditions and shareholder value. Looking ahead to the second half of fiscal 2026, which began on November 29, 2025, and ends on May 29, 2026, we expect total revenue between $25 million and $30 million and non-GAAP net loss per diluted share between negative $0.09 and negative $0.05 for the 6-month period. On the Investor Relations front, last month on December 17, 2025, Lake Street Capital initiated analyst research coverage on Aehr Test, along with equity research firm Freedom Broker, which initiated coverage last June.

There are now a total of 4 research firms covering the company. Lastly, looking at the Investor Relations calendar. We will meet with investors at the 28th Annual Needham Growth Conference in New York on Tuesday, January 13, and then return to New York in February for the 15th Annual Susquehanna Technology Conference on Thursday, February 26. We will also be participating virtually in the Oppenheimer Emerging Growth Conference on Tuesday, February 3. We hope to see you at these conferences. That concludes our prepared remarks. We’re now happy to take your questions. Operator, please go ahead.

Q&A Session

Operator: [Operator Instructions] Our first question comes from Christian Schwab with Craig-Hallum.

Christian Schwab: What wasn’t clear to me exactly is on the booking strength — potential booking strength of $60 million to $80 million in the second half of this fiscal year. Is that almost entirely on the AI accelerator processor line?

Gayn Erickson: There’s some silicon carbide, not much, like not very much at all. There is some silicon photonics for sure. But the bulk of it is across wafer-level and packaged-part burn-in for AI processors, yes.

Christian Schwab: Okay. Perfect. And then given such material bookings from the AI processor market, can you give us any indication or idea? I know we’ve talked about the opportunity in that marketplace being bigger than silicon carbide. But let’s narrow it down to kind of a multiyear time frame, kind of including ’27 and ’28. Do you see that business after initial orders expanding meaningfully from there?

Gayn Erickson: We do. We do. And we’ve been taking a pretty conservative stance on how large, particularly on the AI and wafer-level side of it. And conservative may not be fair. Candidly, we’re still trying to get our arms around how big it is. What we get is visibility of a specific GPU or CPU or network processor or an ASIC. And then we hear these things from the customer, and then we look externally at what they are telling the Street and try to correlate to those outlooks. And I’d say pretty consistently, we hear bigger numbers from the customer than the Street. I’m not sure what that all means, okay? And then as they give us test time estimates of what the burn-in conditions are, we can start to put some numbers around it.

But a single processor for some of these big guys at wafer-level burn-in is 20, 30 systems or so. And these are $4 million, $5 million machines. So you get a feel for the size of what that looks like. And today, if you were to look at AI spend across test and burn-in, is it $8 billion, $10 billion, to maybe $15 billion or so? I mean, it’s a really large number. So we don’t want to get ahead of ourselves here. But when customers ask you things like how many can you make, right? So can the AI business be measured in hundreds of millions of dollars for Aehr Test a few years out? Yes, for sure. Now what’s interesting is that I think it’s an awesome position to be in, because the Sonoma system is a highly preferred system for HTOL, the high-temp operating life reliability testing for these AI processors.

It has the largest installed base in all the test houses around the world. We’re getting people that approach us because we are — I don’t want to say we’re the de facto standard, that’s probably bold — but we have more capacity than everybody else. And therefore, they are saying, you’re kind of the go-to guy. I like those words. And we can build lots of them. So customers are using that, and we get a front row seat to actually bring them up. Then we say, “Oh, by the way, if you want, you can take this machine, add production handling to it and do production on it.” In the meantime, if you come to our facility and do a tour, you can see that production test cell for the Sonoma automation, and we, of course, will walk you by a FOX wafer-level burn-in test cell and mention, “Oh, by the way, that happens to be doing a benchmark on a 300-millimeter wafer; we can’t tell you who it is.” And so they’re like, well, what is that?

So we are in a position to be able to talk about both of them. And the ASPs are actually higher on the wafer-level side of things. But the value proposition way outweighs that because of the yield advantage of doing it at wafer level. The yield savings [ dwarfs any of the ] costs of testing at wafer-level burn-in. So as we get our arms around the market, the market data that would be out there would be packaged part, because no one is doing wafer level except for us. And so we’re creating our own models related to, okay, for that unit capacity, if you went to wafer-level burn-in, what would that look like? Kind of similar to what we had to go through on the original silicon carbide side of things: if everybody in the whole market, including NVIDIA and Google and Microsoft and Tesla and these guys, all went with us, how big is that market?

We haven’t really tried to put our arms around that yet, but it’s substantial.

Christian Schwab: Great. And then I guess one last question, if I may, and follow up on your comment about capacity. How many systems do you think you’re capable of manufacturing in a year for wafer level?

Gayn Erickson: We have talked to customers about capacities exceeding 20 systems a month at either package or wafer level. If we had to, we could ship 20 systems a month of each during this calendar year. Now that’s bigger than our forecast by a lot. But you know what, when people are saying, could you do something like this and intercept something, it’s like if they gave you an order for 50 or 100 Sonomas, like how long is it going to take you to build them? Makes sense?

Christian Schwab: Makes perfect sense. No other questions.

Operator: The next question comes from Jed Dorsheimer with William Blair.

Jonathan Dorsheimer: Yes, I guess maybe just to start, on the wafer level, I think on your prior comments around the timing of the benchmark, it seems like that’s taken a little bit longer. And I’m just wondering, is that a function of it being new, and what you’re seeing from the customer is that they’re changing parameters, which is extending that out? Because I think you had maybe talked about a February time frame, and we were almost…

Gayn Erickson: Do you want me to throw my customer under the bus? Is that what you’re trying to tell me, but…

Jonathan Dorsheimer: No, no, no…

Gayn Erickson: Let me answer that. No, I got it. I got it. No, that’s totally fair, okay? What I do in all of these things is try to describe exactly what we feel, what we know, what we knew at the time. One of the things that’s very interesting and fun about this particular customer, who is a very notable customer, okay? When they gave us the vectors, the test vectors, et cetera, and I don’t think I’m [ overstating, ] they were giving them off of a package-level platform, okay? Package and wafer are different. We had a huge arm wrestle with them related to what they could actually do at wafer level, and ultimately we were able to demonstrate to them significant DFT, lower pin count modes, et cetera, to be able to do it at wafer level, which was a big deal because they never understood that, because, of course, nobody has ever done this before with us, okay?

I’ll just leave it at this. They actually gave us some things that were implied based upon package that weren’t totally applicable to wafer level, and we struggled with some of that. And it turns out it actually did delay things a little bit. I think it’s mutually understood. It’s like, “Oh, sorry, we were thinking in package; we forgot about wafer and sort.” And that’s a growing thing. We’ve seen this with other customers. The very first time you’re doing wafer-level burn-in, you just don’t think about the challenges or the differences in what happens when you’re talking about a device that shares common substrates or about a probing environment. So is it longer? Maybe a little bit, measured in weeks or a couple of months or something.

But some of the things, like mechanically making physical contact to the device on the wafer using our auto aligner with these new fine-pitch WaferPaks, the test plan itself, the vectors, those things were all going along pretty well. So I wish it was a little bit sooner, but I think we’re still very much on track to get them some data over the next couple of months here, or maybe even this month. So now the question, of course, parlays into what do they do with it? What’s the timing? Do we understand what device they want to cut in? We do. We’re not going to share that with you guys. Are we going to make it? We believe we’re still on track. There’s lots of reasons to actually want to cut in wafer-level burn-in, and the sooner, the better. So we’re really excited about this particular one.

And then now we’ve got another couple of guys that are saying, “Pick me, pick me too” and are generating the information to give us so that we can actually do design reviews and walk through a WaferPak design for them as well.

Jonathan Dorsheimer: Got it. That’s helpful. And I just want to address the potential of cannibalization between package and wafer level. And if I read through your comments, it seems like the AI processor is what’s moving along with this customer on the wafer level. You had mentioned briefly actually on the ASIC side. Do you — are you anticipating that the ASICs basically run with package level and that AI processors are wafer level? Or are you anticipating both at wafer level?

Gayn Erickson: Yes. Okay. Okay. So vocabulary for everybody that’s listening out there, right? When you talk about processors in AI, arguably there are at least 2 or 3 different broad flavors of them, okay? You’re going to have the actual GPU, if it’s NVIDIA, or the ASIC when you talk about everybody else’s. In reality, the GPU is kind of an ASIC at NVIDIA too — Jensen said that at one point. These are AI accelerator platforms, okay? And they can be used for large language models or for inference-type things. There are also processors like CPUs — Intel or Grace or Vera-type CPUs, and others that are making them — that are also going through a burn-in process. And then you could argue there are even network processors and things like that.

But generally, when we talk about AI processors, we’re generally in the CPU and GPU type or ASIC type that are combined together in these AI processor clusters. Things like the GB200 you hear about — that’s a Grace CPU and 2 Blackwell AI accelerators in 1 package, if you will, or in 1 cluster. What’s happening with the road map is that devices are going from a single AI accelerator or CPU in a package to a package that includes embedded memory, like high-bandwidth memory and high-bandwidth flash over time, and then to having more than 1 compute chip in it — 2 processors, or 4 or 8, like you see on the Intel or the AMD road map. Everyone has a road map to 2 or 4 or more AI processors on a single substrate. What’s happening is that the qualification of those is all done today in a full package.

The whole device in a big substrate is done, and it can take months to even get the packaging and qual that. So there are people that would like to be able to qual the processor inside when it’s still in wafer form, right? From a production perspective, the value proposition is you’re burning in these devices, and when one fails, you take out the other compute chip and all the memory plus the CoWoS substrate, which costs more than the silicon of the compute chip itself. So the road map is getting more intense. So there are people that are like, oh, I want to evaluate this for this device, this would make sense. But boy, the next one makes twice as much sense, and the one after that makes 4x as much sense, because of this evolution. So for a lot of the trends we discussed, okay, is there a window?

Like, what happens if you just missed this one device? It doesn’t feel like that, because it’s a treadmill you can always step on. And the customers are like, okay, how do I cut you in? I’ve said publicly that our large packaged part production customer — we’ve talked about them as an ASIC hyperscaler — they’re actually on Sonoma in production. We’re qualifying their next device that’s going to go to production; we believe and hope it will go on Sonoma as well, okay? The third one is giving us design files so we can make sure that Sonoma is ready for that, but they’ve also said, you know what, by then maybe we want to consider FOX wafer-level burn-in. And an interesting thing is, it’s like, well, what will you do with all the package systems from us? Who cares?

It’s like, what? Because if I could move it to wafer level, I don’t need to do it at package anymore. Now, will it cut over just like that? We’ll see. I think the world is going to be both for a long time, and we’re in a great position to do both. But is there cannibalization? For sure. We had a customer come in who wanted to talk about what we thought was packaged part burn-in. Alberto, our VP over the packaged part business, and I met with them, and 15 minutes into the meeting, he goes, I’d like to talk about wafer level. Alberto looked over at me, and I’m like, okay, new slides. So at least we got both. And we’re in a great position. And actually, I would say it’s all 3: we do the high-temp operating life today only at package, over time at wafer level, and we do production burn-in at either package or wafer level.

So a great front row seat.

Operator: Our next question comes from Max Michaelis with Lake Street Capital.

Maxwell Michaelis: First one for me, just around the bookings guide. I know you previously shared that the majority of it is around AI. But just given the distinction between the low end and the high end — if we just take the midpoint of around $70 million — to get to that $80 million, is that all basically around AI? Or does that suggest any improvement around silicon carbide or GaN?

Gayn Erickson: The least in that number is silicon carbide, okay? And then GaN is pretty close. Hard disk drive is a little bigger. Then silicon photonics is a chunk. I mean, we’ve got production systems in there for our lead customer. We have a new customer that wants a system. They want a chip by May. We’re suggesting to them that they really should get their order in before we ship it — joke, joke, I’m kidding. It’s a challenge right now because they’re like, please, please build it. We actually have a system on our floor. And if they get their PO in — if you’re listening, you get to get it; if not, we’ll give it to the next guy — but anyhow. And then there would be wafer-level burn-in. And then I think package is the biggest. I’m sorry — wafer-level burn-in AI, and then packaged part AI is the biggest.

Maxwell Michaelis: Okay. So — yes, on the $60 million to $80 million, the $80 million just suggests greater volume orders from wafer-level burn-in… Okay. And then lastly, I haven’t had time to run through the entire press release, but that $5.5 million order you noted in your prepared remarks — can you share some more detail on that? Is there anything new that we should be looking for? Or is it just kind of standard?

Gayn Erickson: You know what, it has a mix of some customers that already had Sonomas that were buying more, that were AI related. It had some burn-in modules. That was important because it was for a new design that’s really expected to be a high runner that’s going to production. It has a big order from what we call a premier Silicon Valley test services company — we’ll leave it at that. They actually bought a number of the new Sonoma configurations, which are the very high power ones that allow them to go to 2,000 watts. We have some devices that we’re going to be testing this spring that are almost 2,000 watts per device, right? And everybody out there is talking about what it takes to get to 1,000 watts — we’re jumping right past that.

And this is in a high-volume Sonoma system. So they’ll be able to test a large number of devices in that system. I should know this number — I think it’s 44 devices. But I mean, it’s a large number of devices to be able to test those. By the way, it’s either 22 or 44. I should know that. Sorry, folks. It goes through the math on that particular application because of the number of resources and power supplies and things. But it’s the biggest part we’ve seen that’s in development, and that’s going to be going to production. So that’s a big deal. So it’s a combination of several different orders. Every one of them is kind of sort of strategic to us.

Operator: The next question comes from Larry Chlebina with Chlebina Capital.

Larry Chlebina: We try to line up your ramp, or at least your demand for the systems that you’re working on developing for these customers on the AI processors, with what’s publicly disclosed in terms of the product launch. Is there a case where they may start up on packaged part, wherever they have the capacity to do that, and then when they feel comfortable — maybe after the product’s launched — would they cut over to wafer-level burn-in because it’s so much more efficient and saves them money? Would they do that? Or would they just do it initially on a brand-new product launch at the beginning? Do you have a sense of that?

Gayn Erickson: Okay. So I wouldn’t — there are 2 things in there. What I definitely see happening is we know for a fact a customer was doing system-level or rack test, okay? The only time they identified infant mortality or early life failures was when it’s installed in the data center — pretty nasty, okay? That’s test or not, or burn-in. So they said, we’ll run it for 2 weeks, and if it hasn’t died, we’ll accept it, kind of thing, and then they’ll actually plug it into the network — a pretty expensive way of doing it. Then there are companies like AEM and Advantest and Teradyne that have talked about system-level test machines, which is a type of ATE machine that is designed to be doing a high-speed insertion and booting up, like, the operating system.

It’s a great way to get a very high degree of test coverage for a specific application. People were saying, oh, we’re going to do burn-in with that. Well, that doesn’t really work — those systems are designed for high speed. They’re designed to be at the user mode. They’re designed to run cold. They’re not really designed for burn-in, and they’re quite expensive and large. But the market was pulling on that because it’s sure better than doing it in a rack. And there wasn’t another system available in what a lot of people refer to as ovens, which is a large-scale system where you put in lots of burn-in modules or trays with lots of devices and test all at once. Those were like from KYC, or maybe 600 watts and below or something. And there really wasn’t a tool out there for that.

This is where Sonoma was pulled up, because Incal was using it for the high-temp operating life, but it’s like, well, wait a minute, can I use that in production? Can you add automation? Can you support these things? And can you quadruple or 50x your capacity? So that’s where Sonoma is coming in. When Sonoma enters that market, doing system-level test or rack test makes no sense whatsoever. So it’s highly competitive there. Now having said that, wafer-level burn-in is even better. But a lot of people may say, well, I need to think through that. Where do I put that insertion? I might need to implement some design-for-test modes to be able to implement it, at least to take advantage of the very low cost full-wafer contactors from Aehr Test and things like that.

So I think it’s an evolution. But I think the conversation we have with customers is, they’re like, I need packaged part burn-in, let’s talk about that. But boy, wafer-level burn-in would be better — how do we engage on that? And then specifically on a per customer basis — I don’t want to get too carried away with our strategy — but if you have an installed base of, say, packaged part burn-in systems, I could go in and displace you with maybe Sonoma, but it’s probably better for me to go displace you with wafer-level burn-in, because it’s not even a price thing in that sense. It’s yield. Or capacity. So it depends on the customer, and we have some customers with some devices they want to think about at wafer level, some they want to think about at package, and some they want to think about at package and then eventually wafer level over time.

I hope that was — as I look back, that was pretty confusing. But it’s an evolution. And guess what we do — the customer is always right. You tell me what you want, and we’re in.

Larry Chlebina: Well, for all these evaluations they have going on with wafer-level burn-in — if it takes longer and the product ends up getting launched, would they still cut over some portion of the production to wafer-level burn-in once it’s proven out for the particular product? Would they do that midstream?

Gayn Erickson: I think it depends — it’s not a slam dunk. I mean, I think traditionally, people will start a product and do the release of that one product on one test platform or something, and then you cut in on the next one. I think that’d be fair to say. But there are certain devices where we know the intended application — there are 2 or 3 different applications for it. So for a large language model, maybe they think about it one way, but if it’s going to be automotive, then that’s a different thing, right? So even within a product, there might be an evolution, or they get by until they can implement wafer-level burn-in. That particularly comes into play when you think about a multichip module, right? As soon as you could do wafer-level burn-in — if I could save you 1% yield per die on a 4-die AI processor that has a $15,000 BOM, of course you would do that, right?

Larry Chlebina: I’m not sure if they would.

Gayn Erickson: Yes. So we’re trying to be as open as we can. We know as much as we know, but there are definitely advantages to doing wafer level. I mean, ultimately, that’s kind of the best place you could ever do it. And if you implement some DFT and some of the things we do, I can build you a WaferPak in 8 weeks and have you on wafer.

Larry Chlebina: I’ll shift gears to the flash benchmark that you completed a little bit ago, right before the holidays. When do you expect the customer to get back to you, and more importantly, when do you expect them to come with an order?

Gayn Erickson: I was waiting for somebody — yes, that’s where my head’s at, too. My guess is, Larry, in the next couple of months or so for them to really get back, depending on how they — the wafer is going back to test, which is tested at wafer. I don’t think they’re going to package it up and go through some stress qualification, but that might be something. But we’ve already had some design reviews with them on our new tester and planted the seeds. They were very impressed, is how I would describe it. The big trend — the big shift here was when we even started thinking to do the benchmark with them, which was, what, like a year ago, if I get that right?

Larry Chlebina: Yes, over 1.5 years ago.

Gayn Erickson: Yes. Yes. Fair enough, right? When we were starting to even build up, to get the design files and figure out what wafer we were going to be testing with them, it was not aimed at high-bandwidth flash, because that didn’t even exist, right? They were looking at it for, like, commodity data center SSDs. Now with HBF, it broke their infrastructure — the power supplies, IO pins, et cetera, and parallelism — and now they have a power problem, which we love. Well, we’re good at power. People that have power problems — that’s music to our ears, so yes.

Larry Chlebina: I recall you originally said the driver, their motivation, was as the 3D NANDs got higher levels of — they’re even talking about getting to the 400 level.

Gayn Erickson: Layers, layers, yes.

Larry Chlebina: Layers — that required more power and exceeded the power in their existing systems, so they need your high power. So here we are 1.5 years later. How are they getting by to this point? And don’t they need your high-power capability?

Gayn Erickson: They’re having to — they can’t test a whole wafer in one touchdown, as an example. But what I described there — if you follow along with that — was actually referred to as hybrid bonded flash, same letters, by the way, right? Hybrid bonded flash was a novel idea where the base substrate layer is logic done on a logic process, and then you build up just the stacked memory — you do that in a memory process — and then you bond them together. The result is that the memory stack is a taller building with a smaller footprint, so you get more die per wafer. That’s good, right? But the power was much higher. HBF, as in high-bandwidth flash, is in some ways architecturally similar, except it’s more power — because of its speed it has additional power supplies, and it’s taller — so it actually is even more of a problem for them. Which I guess, if you’re a tester guy, the bigger the problem, the more you have to solve.

But we had to go back and redesign the tester because we were originally aiming it at the other device.

Larry Chlebina: I would think they would need more capacity for the enterprise flash part of it before they ever start needing something for HBF. So the enterprise flash, I’m wondering when is something going to happen there? It seems like it’s overdue.

Gayn Erickson: Yes. I mean, our goal in this case — we had originally hoped to finish the benchmark at the end of last year, okay? So we’re like 6 months later. And I think, as I shared with you — if you read through all of the notes — around March, it felt like you were pushing a rope; something was going on. If you knew who the company was, it’d be very obvious what was going on, okay? But what really happened is they kind of shifted from enterprise focus to HBF. And so that slowed some things down in terms of even reviewing our tester. And then they came back to us in the summer and were like, okay, here’s the new tester we’d like. So okay, maybe that’s good. For people that are tapping their fingers, it’s taking a long time, but that’s part of what happened there.

But at this point — again, we walked up there, and they thought we were just going to take their wafer and stick it into one of, like, our NPs with a manual setup, and we showed them a fully integrated machine. So they walked up, and we put their wafer in a FOUP, put the FOUP onto the Sierra automated WaferPak Aligner, and ran the wafer. It opened up the blade, picked the wafer, put the wafer in the WaferPak, put the WaferPak in the blade, closed the blade, ran the tests, gave them the results. It’s pretty impressive.

Larry Chlebina: So you’re ready to go for production. So it seems like they need it. They’re going to need more capacity based on everything that’s going on in the memory market.

Gayn Erickson: Exactly. And right now, they’re all flush with margins — how is that, right? So I agree. You know what, Larry — as people that follow us know, you are our greatest cheerleader, along with me, on the memory strategy for us. We are spending money, okay? As Chris alludes to, we could be doing better, well, at these revenue levels. We’re not happy with these revenue levels, right? We’re not making money at these levels. But we would be making more money if we weren’t spending money. We’ve got our foot on the gas. And in fact, it’s our expectation that we’ll increase the R&D spend, particularly in AI wafer-level burn-in; a little bit in package — we spent a lot of money there just this last year getting this new product out; and then the memory system, which will basically be a blade in our FOX system.

Larry Chlebina: It should be — it should pay off. Let’s — so hopefully soon, sooner rather later.

Gayn Erickson: I vote yes, too. As a shareholder, I think it’s good money to be spent.

Larry Chlebina: That’s all I have. Thanks, Gayn.

Gayn Erickson: Thank you, Larry.

Operator: [Operator Instructions]. Okay. I’m showing no further questions in the queue. I would like to turn the call back to management for closing remarks.

Gayn Erickson: Thank you, operator, and thank you, everybody. We really appreciate you guys taking the time to spend an hour with us. And we’ll keep you guys updated. Stay tuned. We’re really excited about this and hope that the orders will come in soon enough to make this less dramatic as we go forward and set us up for a really strong year heading into next year. So I appreciate it. If you are in town — we are in Fremont, California, Silicon Valley — give us a call, set something up, come by and take a look at the facility. If you haven’t seen our tools, they’re very impressive, and you can get a feel for the capacity, because we have a lot of systems on the manufacturing line right now. So take care, and Happy New Year to everyone.

Operator: This concludes today’s conference, and you may disconnect your lines at this time. Thank you for your participation.