Ginkgo Bioworks Holdings, Inc. (NYSE:DNA) Q1 2024 Earnings Call Transcript May 9, 2024

Megan LeDuc: Good evening, I’m Megan LeDuc, Manager of Investor Relations at Ginkgo Bioworks. I’m joined by Jason Kelly, our Co-Founder and CEO; and Mark Dmytruk, our CFO. Thanks as always for joining us, we’re looking forward to updating you on our progress. As a reminder, during the presentation today, we will be making forward-looking statements, which involve risks and uncertainties. Today, in addition to updating you on the quarter, we’re going to provide more detail on our drive towards adjusted EBITDA breakeven and the necessary steps we’re taking to get there. As usual, we’ll end with a Q&A session, and I’ll take questions from analysts, investors and the public. You can submit those questions to us in advance via X at #GinkgoResults or email investors@ginkgobioworks.com. All right. Over to you, Jason.

Jason Kelly: Thanks, everyone, for joining us. We always start with our mission of making biology easier to engineer, and that’s especially critical today. Ginkgo’s a founder-led company; myself and the other founders have been pouring our lives into this company for the past 15 years, and many of our senior leaders for more than a decade. The advantage of this is we’re very motivated to get the most out of Ginkgo. We’ve invested a ton of our lives in it. And as a consequence, we want to get the most out of the investment of your capital in Ginkgo as well. So today, we’re going to be announcing major changes to how we do our work at Ginkgo. These are going to be difficult for many on the team, and I want to say that upfront: it’s going to involve substantial headcount reductions, alongside important changes to improve our operations.


The mission of what we’re doing at Ginkgo matters to everyone at the company. And you will see us collectively take difficult but decisive action when needed to ensure we deliver on it, and today is one of those days. So Ginkgo is an increasingly important part of the technology ecosystem in biotech, and that’s why I think it’s important we get this right. I’m really proud of this customer list. It’s unbelievably broad, and it showcases our core thesis that a common platform can provide biotechnology R&D services for very demanding customers across ag, food, industrial, biopharma and consumer biotech. I’m also happy with how we’ve been expanding this list. In particular, many of the big names in that biopharma column were added in just the last 18 months: Merck, Novo Nordisk, Boehringer, Pfizer.

However, the next step for Ginkgo is to take what we’ve been learning across now hundreds of customer programs and make changes in the business that deliver those programs more efficiently. In particular, I’m going to talk later about how we can achieve greater scalability via simplification of the business. We want to simplify our technology back-end, ultimately consolidating to a single automation platform, and simplify on the front-end. We’ve gotten a lot of feedback from the logos on this page about what they like and don’t like about our deal terms. So we’re going to be simplifying those, too, which hopefully will increase sales velocity and simplify our deal making. More on that in a minute, but first, Mark is going to walk you through our Q1 performance.

And there are a couple of things that are indications that we do need to change course. In particular, you’ll see an increase in programs without a matching increase in revenue. This is a problem that I’ll be working to fix via the changes you’re going to hear about today. We’re fortunate to be in a position of financial strength as we execute these changes. We have $840 million in cash. We have no bank debt. And so we have a large margin of safety, which is really the position you want to be in when you make large changes like this. In other words, we’re not doing this with our back against the wall, and that’s a very deliberate choice on our part. We’re also setting a target of achieving adjusted EBITDA breakeven by the end of 2026. The attitude internally at Ginkgo, and I know many at the company are listening right now, will be to collectively set our plan for reaching that, which is going to involve input from all the folks on the team and then commitment from all of us to not spend outside of that tight plan.


Over the past few years, we’ve learned a lot by trying different avenues to drive growth. We have all that data now, and we have a team that can set the right plan and determine who are the best folks to deliver on it. And we’re going to be doing that in the coming weeks internally. This also aligns well with what we’ve heard from many investors, especially those of you who’ve been waiting on the sidelines to invest in Ginkgo. The most common thing I hear is: I love the vision, I see a path where Ginkgo ends up being the horizontal services platform serving all of biotech, massively scaled up. You get better with scale. But Jason, can you get there with the capital you have on hand? And I think our plans today will give you confidence that we can.

Okay. I’m now going to ask Mark to share more details on our Q1 financials, and I’ll follow with an explanation of how we’re going to execute our targeted plan. Over to you, Mark.

Mark Dmytruk: Thanks, Jason. I’ll start with the cell engineering business. We added 17 new cell programs and supported a total of 140 active programs across 82 customers on the cell engineering platform in the first quarter of 2024. This represents a 44% increase in active programs year-over-year with solid growth across most verticals. Cell engineering revenue was $28 million in the quarter, down 18% compared to the first quarter of 2023. Cell engineering services revenue, which excludes downstream value share, was down 15% compared to the prior year, driven primarily by a decrease in revenue from early-stage customers, partially offset by growth in revenue from larger customers. We believe the mix shift is an overall positive, indicative of market conditions, our refocused sales efforts on cash customers and the increased penetration of larger biopharma and government customers that we have discussed over the past few quarters.

That said, the revenue in the quarter was below our expectation, and the pipeline indicates a weaker-than-expected revenue ramp for the rest of the year. Jason will discuss later in the presentation both how we’re thinking about demand and our offering in this environment and the efforts we’re taking to further focus the customer base. Now, turning to biosecurity. Our biosecurity business generated $10 million of revenue in the first quarter of 2024 at a gross margin of 8%. We do expect the gross margin to improve in upcoming quarters based on the revenue mix in our contracted backlog. We’re continuing to build out both domestic and international infrastructure for biosecurity, especially with our recently announced biosecurity products, Ginkgo Canopy and Ginkgo Horizon.

And now, I’ll provide more commentary on the rest of the P&L. Where noted, these figures exclude stock-based compensation expense, which is shown separately. And we are also breaking out M&A-related expenses to provide you with additional comparability. Starting with OpEx: R&D expense, excluding stock-based compensation and M&A-related expenses, decreased from $109 million in the first quarter of 2023 to $94 million in the first quarter of 2024. G&A expense, excluding stock-based compensation and M&A-related expenses, decreased from $71 million in the first quarter of 2023 to $51 million in the first quarter of 2024. The significant decrease in both R&D and G&A expenses was due to the cost reduction actions we completed in 2023, including cost synergies related to the Zymergen integration and subsequent deconsolidation.

On stock-based compensation, you’ll again notice a significant drop this quarter, similar to what we saw in each quarter of 2023, as we complete the roll-off of the original catch-up accounting adjustment related to the modification of restricted stock units when we first went public. Additional details are provided in the appendix to this presentation. On net loss, it is important to note that our net loss includes a number of noncash income and/or expense items, as detailed more fully in our financial statements. Because of these noncash and other nonrecurring items, we believe adjusted EBITDA is a more indicative measure of our profitability. We’ve also included a reconciliation of adjusted EBITDA to net loss in the appendix. Adjusted EBITDA in the quarter was negative $100 million, which was flat year-over-year as the decline in revenue was offset by a decline in operating expenses.

And finally, CapEx in the first quarter of 2024 was $7 million as we continue to build out the Biofab1 facility. Now normally, I would speak to our guidance next, but given our plans to accelerate our path to adjusted EBITDA breakeven through both customer demand-related changes and significant cost-related restructuring, Jason is going to first walk through those plans and then discuss guidance at the end. Before I hand it over to Jason, I’d like to provide some color on the cost restructuring we are planning. At a high level, we are committed to taking out $200 million of operating expenses on an annualized run rate basis by the time we have completed our site consolidation, which we expect by mid-2025. We expect at least half of that savings target to be achieved on a run rate basis by the fourth quarter of this year.

The majority of our cost structure is in our people and facilities costs, and so workforce reductions across both G&A and R&D, along with site rationalization, are the primary focus, though we see significant opportunities in other areas of cost as well. For clarity, our cost takeout estimate includes an assumption about our ability to manage lease expenses for space we will no longer require. As I said, Jason will speak to the overall plan in more detail, including, importantly, the customer demand side of this. And so now, Jason, back over to you.

Jason Kelly: Thanks, Mark. The big theme for today is how we’re going to grow revenue while decreasing costs in order to reach adjusted EBITDA breakeven by the end of 2026. I’ll start by talking about why we’re not seeing revenue growth alongside program growth, which I mentioned earlier, and what we’ll be doing to simplify our back-end automation technology to improve scalability there. Second, I want to talk about the front end: what customers like and don’t like about our service terms, and how we’ll be simplifying our offerings to reflect what we’re hearing from our customers. And then finally, we’re taking decisive action to reduce our costs. Specifically, we plan to reduce our annualized run rate OpEx by $200 million by mid-2025 in order to achieve adjusted EBITDA breakeven by the end of 2026.

And we’ll dive into the high-level plan of how we’ll execute on this. Okay. Let’s jump in. So the charts here are a big part of what’s driving our decisions today. You can see from the chart on the left that the number of active programs on our platform grew significantly over the year. This is a good thing. Really excited about this. But alongside that, we saw a decline in our revenue from service fees. And this is, again, ignoring downstream value share. Just look at that fee number; that’s gone down. This is particularly frustrating to me because we actually have a large amount of fee bookings across these many deals. But we’re not converting those into revenues in the near term. And the core challenge is the rate at which we’re bringing these programs to full scale on our automation at Ginkgo.

And I’m going to explain that, but I want to give you a little more detail so you understand that challenge, because that’s what we’re trying to fix. So on the left-hand side here is the basic process by which an R&D leader at one of our customers develops a biotech product. Okay? The R&D leader engages with the senior scientists on their team, specifying a particular scientific deliverable, right? To give some examples from Ginkgo programs of what a leader might ask for: maybe it’s an mRNA design that performs a certain way in humans like we have for Pfizer, or microbes that capture nitrogen for Bayer, or an improved manufacturing process for Novo Nordisk. These are all scientific deliverables, all right? And in the case on the left, the customer’s internal scientists will then design experiments they think will help deliver that outcome to their boss, and more junior researchers will perform those experiments at the lab bench by hand.

And then the data will come back to be analyzed, and you go around that loop. Now this is a manual process and generates small amounts of data, but it does work, right? I want to highlight this is how all these biotech drugs are developed every year. And the strength of it is the flexibility, right? The scientists can run any new experiment tomorrow very quickly, as long as their two hands can pull it off, all right? And again, when that data comes back to the senior scientists, they repeat this all over again. Now, of course, the R&D leaders at Pfizer, Bayer and Novo in these cases are all choosing to instead pay to have a Ginkgo scientist give them the same deliverables instead of using their internal infrastructure.

So why are they doing that? The major reason is that Ginkgo scientists do that same loop, but they do it in a different way. They design experiments, but instead of small amounts of manually generated lab data from a team, they get large amounts of data generated either via automation or via pooled approaches that leverage high-throughput DNA sequencing and barcoding. And Ginkgo is a world expert in both of these large data generation approaches. That’s really the big difference: small data generation versus large data generation. And that’s really our expertise. So the short answer to why the customer chooses to use us instead of all that in-house infrastructure they have is that they’re coming to us asking for a scientific deliverable that they think will need a lot of data to get to the answer, all right?

And that’s not every project, but it is an increasing number, as you see. But our approach, I want to be clear, is not strictly better than doing it manually mainly because it takes more time to get a new protocol running at large scale. And this is the heart of why we’re not seeing revenue come up with our programs in the near term. It’s not a perfect correlation, but generally, the faster that a Ginkgo scientist can start to order large amounts of lab data, the faster we then see revenue coming out of all those customer projects. Now fortunately, the acquisition of Zymergen and the follow-on tech development we’ve been doing over the past couple of years put us in a great place to resolve this issue. And so I want to talk about how we’re planning to do that.

Okay. So to give you a little bit of background on how lab automation works today, there are basically three levels. The first level is what you often see at our customers: scientists working by hand. The overwhelming majority of lab data generated in biotech product development today is at this first, manual level. At the second level, a scientist walks up to a robot, puts samples on it and programs it to do a specific task, okay? This is task-targeted. At the third level, and you’ll see a lot of these around Ginkgo, are work cells. You work with an automation vendor, and you have a robotic arm sitting in the middle of a set of equipment that moves samples through multiple steps, but it basically does the same steps over and over.

Okay? And this is, again, what the majority of our foundry looks like today. And you can see on that spectrum at the top, you go from very flexible, low amounts of data per dollar to very inflexible, large amounts of data per dollar, right? And that has been the historical trade-off in lab automation. Okay. We believe that the automation paradigm invented at Zymergen, and then expanded on in the last two years at Ginkgo since the acquisition, ultimately offers flexibility and low-cost data at the same time. The simple idea is that each piece of equipment is its own removable cart. It has a robotic arm connecting it to a magnetic track to deliver samples; you can see it in the video. And when you need a new piece of equipment, you can just add that equipment to the track.

It’s like adding a little Lego block to the track, incorporating it without needing to build a whole new work cell like you would in Level 3 automation. And when you want to run a different protocol on the same equipment, that can be done with software quickly. And we’ve been seeing that. We’ve seen instances where we’ve taken smaller batch protocols and moved them onto the rack system relatively fast compared to what would have been a multi-month project on a work cell. And then over the last year, we’ve also been seeing, and this is early data, an 80% to 90% labor time reduction and a 60% cycle time reduction. We have not done this for the majority of our protocols yet, but the signals are good that we could. And I’ll be talking about that as part of our plans for efficiency gains in the third section of the talk today.

Since acquiring Zymergen, we’ve also focused on simplifying the cart designs. You can see our second-generation rack carts that were recently delivered to our facility in Boston, and if you were tracking Ginkgo in April, you saw these in person. Importantly, these are easy to assemble. These in-house designs are proprietary to Ginkgo, which makes it faster for us to have more manufactured as needed. We also standardized the sizes, so we have these three sizes here that allow us to incorporate a wide variety of different equipment while, again, keeping manufacturing costs down, all right? So these rack systems are made to be very scalable. It’s a very different paradigm than what you see with work cell-based automation. Today, we are at the closed-loop rack system scale you see there; we have ordered 15 systems.

But we’ve been planning much larger integrated systems as part of achieving long-term efficiency goals in flexible lab data generation. Towards that end, Biofab1, the purpose-built facility that we’ve been talking about to house these large rack installations, will be opening in mid-2025. And the best way to think of this facility is as a lab data center. We have these big data centers in compute, and what you offer there is common scaled hardware that does lots of different types of compute. It’s a very similar idea here: common scaled hardware, in the form of the racks, that can generate a diverse array of lab data output quickly for customers. And hopefully, this means that as we sign more programs, they can very quickly scale to generate large amounts of data.

This leads to more revenue, but more importantly, to happier customers who greatly desire both speed and scale of lab data generation. In other words, our customers would be more than happy if we were more rapidly converting our bookings into revenue, because it means their programs are happening more quickly on our infrastructure. So that is a win-win for Ginkgo and for our customers, and that’s what we’re trying to do with this change to how we operate. Okay. So that’s a bit on the technology. I believe it will simplify the back-end, allowing one automation platform, ultimately the racks, to replace many different workflows and work cells at Ginkgo. Now I want to talk about the front-end and how we engage with customers when we sell those cell programs.

So this is a slide I showed you earlier, and as I mentioned, customers are choosing to use Ginkgo rather than their internal infrastructure when they think they need large amounts of data. However, as part of the business model, we’ve also asked for a few things customers don’t love. You can see these up here. First, we have Ginkgo scientists run the projects; we have scientific control over the experimental design. Second, we have IP rights: Ginkgo can reuse the data generated and keep it in our codebase. And by the way, that is valuable to Ginkgo, don’t get me wrong; being able to reuse that data helps us with future deals. But customers really don’t like it, and I’ll talk about that. And then finally, downstream value share: we get milestones or royalties on your future product sales.

And now, look, I designed a lot of these service terms, right? I was responsible for our business model at Ginkgo, and I’ve battle-tested it. I’m out there talking to customers every week, and it varies a little by market. So, for example, in biopharma, there’s a lot more tolerance for milestones and royalties; if you’re doing a strategic deal with a customer, many deals are done like that. In industrial biotech, like the chemical industry where margins are much lower, oh, they hate it. So a one-size-fits-all approach there is not a great idea. And then second, in biopharma there’s much more sensitivity to IP, given how much it makes up the competitive moat around a drug. So they have a lot of sensitivity to a clause saying we’re going to reuse some of this data they’re paying for and potentially bring it to another customer.

That creates resistance in deals. So when you see us adding all these programs, know that we’re fighting through that resistance with customers to get them done. And we felt that was important. But in the context of where we are today, the rate of revenue I’m seeing and what I’m hearing from customers, I think we should change it. And so we’re going to stop fighting customers on these things, update our terms to give customers IP reuse rights, and in many cases not include downstream value share. There’ll be some exceptions where we are bringing a lot of product-relevant background IP; we do have that. But by and large, we’ll remove that downstream value share. And our hope is this will speed dealmaking, since we spend huge amounts of time negotiating these IP terms.

It also allows us to scale the number of deals we do without needing to scale legal and financial resources, due to reduced deal complexity. Okay. So beyond the issue of IP rights and downstream value share, customers sometimes also have a problem giving us scientific control of their experiments. And so you’ll see us working on simplifying that, too. In other words, they might say: yeah, I love all the data you can generate, Jason, but I really trust my own scientists and their expertise around this particular problem; I just wish they could use your infrastructure. And so in April we announced Lab Data as a Service, which is exactly this. The key idea is that a customer scientist can design the experiments and analyze the data, but they have Ginkgo’s infrastructure, again, remember all those racks, available to them to quickly generate large data sets that they wouldn’t be able to produce with their in-house research team.

They might still use that team for other problems that favor small-batch, manual, rapid work. Again, I think that is actually a valuable piece of the puzzle internal to our customers. But if it’s a large data generation need, they can just order it. And this will be the best of both worlds for many customers. Now, there’s a subtlety here that took me a while to understand, even though I’m at the coalface with customers all the time. When we sell our usual process, where a Ginkgo scientist is in control, that’s really sold to the customer as a strategic deal, often over a couple of years. And it’s coming out of a special budget, kind of sold up through corp dev, that funds that kind of work, these kinds of research partnerships.

And we actually do a ton of those deals, right? And I think we’ve scaled that kind of deal-making more than almost anyone in biotechnology today. But there’s this whole other big budget, often billions of dollars at pharma companies: the everyday R&D budget that’s in the hands of internal scientists at various levels. And with Lab Data as a Service, we can sell smaller deals directly to those scientists. This is both a big new market for us and a great mission fit for Ginkgo. Our goal has always been to make biology easier to engineer, but thus far, that’s been limited to Ginkgo scientists. By allowing our customers’ scientists to access the foundry directly, we’re making it easier for them to engineer biology, too.

And that’s really important to me. It’s really important to the team. If you watch my talk at Ginkgo for a minute, I spend a fair bit of time on that. Finally, I want to mention we think we can really be the picks and shovels to all the folks inventing amazing new AI models in biotechnology. There are many new startups getting funded with large funding rounds, and most of the large biopharmas now have a person in charge of AI strategy. What we hear from these people again and again is that data is the missing piece for building new and better models in biology. Okay? And again, we have huge English-language data sets and things like that to train AI models for language or videos or images.

In biology, the missing piece is actually the data. And so our Lab Data as a Service is exactly the right offering for these customers: we can generate large multimodal data sets. And we expect to do business here with customers wanting to access both our automation scale and our expertise, like I said earlier, in conducting large pooled assays. That type of assay generation is particularly important. And both of those are available right now on a fee-for-service basis. You own the IP; there are no royalties or milestones. For any AI company that’s tuning in: we’d love to do that work for you, and you can get that data much faster than anyone else. Okay. So those are the big changes we’re making, both on the back end and the front end of our platform, to drive scalability through simplification.

We expect these simplifications and others to allow for substantial cost takeouts in the coming months. So I want to talk about those cost savings and how all of these pieces tie into our path to adjusted EBITDA breakeven. If you look at our Q1 numbers, our annualized OpEx comes in at approximately $500 million. This is simply too high relative to near-term revenues, right? We plan to cut this back by $100 million by Q4 2024 by significantly consolidating our footprint and reducing labor expenses across both G&A and R&D, which is enabled by the simplifications I just spoke about in the previous two sections. We’re also targeting reducing our annualized run rate cash OpEx by another $100 million, for a total of $200 million, by mid-2025. The big takeaway is we plan to eliminate discretionary spending that isn’t very specifically focused on how we get to adjusted EBITDA breakeven by end of year 2026.
