Cadence Design Systems, Inc. (NASDAQ:CDNS) Q1 2023 Earnings Call Transcript

John Wall: We should get back to more normal lead times by the middle of the year, but we thought it was really important in the first half to prioritize deliveries to customers that have been waiting the longest for the hardware. I mean, as you know, we have multiple uses for the hardware. We want to set up demo models for customers for future sales and things like that. But the first quarter was heavily weighted towards deliveries to customers that have been waiting a long time for those orders. We’re still working through those lead times, but we expect to be back to more normal lead times by the middle of the year.

Ruben Roy: That’s great. Thanks, John.

Operator: Your next question comes from the line of Joe Vruwink with Baird. Your line is now open.

Joe Vruwink: Great. Thanks for squeezing me in. I wanted to take another crack at the topic of AI and adoption. So, when you think about — maybe the best example is Cerebrus, within implementation efforts, if you think about the total block engineers as an account, what share of those engineers are typically using the product at this point? And in your mind, is that something as we enter the next round of renewals, it could get more widely deployed across the entire team?

Anirudh Devgan: Yes, Joe, what I would say is that, I mean, like we talked about, I think, out of the top 20 customers, I think 10 of them are using Cerebrus for production. And then, we are engaged with all the top customers, and five hyperscalers are using it. And I think — but still it’s not — I think there’s still a lot of opportunity for growth there. Because the way I look at it is, especially Cerebrus or JedAI or all these platforms, I think over time they will become the cockpit. So, in the old days, in the case of digital implementation, Innovus was the cockpit. So, the customers would run in Innovus or try different experiments with Innovus. But now Cerebrus can do that automatically with AI. And then, you can still combine that with — you can still do manual experiments on top of that.

So, I think overall, I would expect in three to five years, almost all designers would be using Cerebrus the way they were using Innovus in the past, okay? And same thing with Optimality, same thing with Allegro X AI. So, we are still a ways from that. So, there is still this progression that has to happen. So, I think we are engaged with all the customers. They’re using it. But I think over time, the dominant way of running projects will be using Cerebrus rather than the old way. It’s like going from manual cars to automatic cars. Some people may still want to drive manual, but more and more people will drive automatically using Cerebrus. So, I think in that, we are still in the early innings. So, it’s still like years to go in that.

And that’s good. We are — in our business, like we mentioned earlier, we’re looking at annual contract value and letting the natural adoption happen over the next few years.

Joe Vruwink: Okay. That’s great. And then, on the system design segment, can you — I don’t think I heard it, just an update on where you expect growth to be in 2023? And then, in reflecting on the development here and kind of the upside you’re seeing in bookings, is it possible to pinpoint at something like — you talked about the repeat orders on the organic solvers. You’ve obviously built a bigger CFD business. There’s some new channel initiatives. Are any of these things more important than others in terms of driving the upside you’ve seen?

Anirudh Devgan: I think 2023, I still expect to be a very good year for system design and analysis. In terms of initiatives, I think there are a whole bunch of initiatives we are driving. I think what we always say is we are obsessed with best-in-class products first. So that’s the most important thing. If the products are differentiated, the customers always use them. And all these channel initiatives help, and awareness of our products through marketing helps. But in the end, we are always focused on developing and supporting best-in-class products. So on that, we’ve made a lot of progress and seen benefits. I mean, recently, I talked about it in my prepared remarks. But one thing I think in system design and analysis, like I mentioned, I’m very optimistic about the use of GPUs. And GPUs have done wonders in AI, right, by accelerating AI computation.

And traditionally, GPUs haven’t worked that well in EDA. They do help EDA, but they can dramatically help SD&A, because SD&A is more kind of physics-based simulation. So, it’s more kind of matrix multiply, which is similar to AI. So, like recently with our collaboration with NVIDIA, Jensen talked about Cadence CFD on GPU giving, for the same cost, a 9x improvement in speed and a 17x improvement in power efficiency. And GPUs are slightly more expensive than CPUs. I mean, typically, I would guess, at least 3x to 5x. So, you’re getting a 30x to 50x speedup on GPUs, which, normalized for cost, is still a 9x or 10x improvement in speed. So that’s a huge improvement based on our special algorithms, because we have a long history of massive parallelism on CPUs, and now we are applying it to GPUs, especially in SD&A, for both electromagnetic and CFD.
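The cost-normalized speedup arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming the ballpark figures from the call (30x–50x raw speedup, 3x–5x hardware cost); the function name and exact inputs are hypothetical, not Cadence's numbers or methodology.

```python
def cost_normalized_speedup(raw_speedup: float, cost_ratio: float) -> float:
    """Speedup per unit of hardware cost: raw GPU speedup divided by
    how much more the GPU costs relative to the CPU baseline."""
    return raw_speedup / cost_ratio

# A ~30x raw speedup at ~3x the hardware cost nets ~10x for the same spend;
# ~45x at ~5x the cost nets ~9x, matching the "9x or 10x" range cited.
print(cost_normalized_speedup(30.0, 3.0))  # 10.0
print(cost_normalized_speedup(45.0, 5.0))  # 9.0
```

The point of the normalization is that the headline speedup overstates the economic benefit: dividing by the relative hardware cost gives an apples-to-apples "same cost" comparison.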