DigitalOcean Holdings, Inc. (NYSE:DOCN) Q4 2025 Earnings Call Transcript February 24, 2026
DigitalOcean Holdings, Inc. misses on earnings expectations. Reported EPS was $0.24; expectations were $0.38.
Operator: Good morning, and thank you for standing by. My name is John, and I will be your conference operator today. At this time, I would like to welcome everyone to the DigitalOcean Fourth Quarter Earnings Conference Call. [Operator Instructions]. I would now like to turn the conference over to Melanie Strate, Head of Investor Relations. Please go ahead.
Melanie Strate: Thank you, and good morning. Thank you all for joining us today to review DigitalOcean’s Fourth Quarter and Full Year 2025 financial results and an investor update. Joining me on the call today are Paddy Srinivasan, our Chief Executive Officer; and Matt Steinfort, our Chief Financial Officer. Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management’s best judgment based on currently available information. Our actual results may differ materially from those projected in these forward-looking statements, including our financial outlook. I direct your attention to the risk factors contained in our filings with the SEC as well as those referenced in today’s press release that is posted on our website.
DigitalOcean expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today. Additionally, non-GAAP financial measures will be discussed on this conference call and reconciliations to the most directly comparable GAAP financial measures can be found in today’s earnings press release as well as in our investor presentation that outlines the discussion on today’s call. A webcast of today’s call is also available in the IR section of our website. And with that, I will turn the call over to Paddy.
Padmanabhan Srinivasan: Thank you, Melanie. Good morning, everyone, and thank you for joining us. We had a fantastic quarter and a very strong finish to the year, and I’m excited to share the details with all of you. We ended the year with 18% revenue growth in Q4, reaching $901 million for the full year. We delivered $51 million in incremental organic ARR, the highest in the company’s history. Our $1 million customers reached $133 million in ARR, growing at 123% year-over-year. We maintained financial discipline and strong profitability with 42% adjusted EBITDA margins and 19% adjusted free cash flow margins for the year. There is a lot to be excited about. And given this momentum that we are seeing and the progress we are making against our long-term strategy, we wanted to provide a more comprehensive update today rather than wait for a separate Investor Day.
Our prepared remarks will be slightly longer than usual. We’ll advance slides from our earnings presentation on the webcast as we go, and we’ll leave plenty of time for questions. AI is reshaping entire industries, and we are built for this shift. Software is being disrupted, not by incremental AI features, but by a structural shift to agentic systems operating at scale. Cloud and AI native disruptors are moving beyond AI [indiscernible] at breakneck speed. They are deploying agents that reason, act, retain memory and run continuously. In this structural shift, we see a secular, hyperscale-sized opportunity in serving the AI and cloud native companies driving this disruption. When markets are disrupted like this, there is typically a short window to take advantage of the opportunity, and let me tell you how we are seizing it.
First, our top customers are now our growth engine. We have turned what was once viewed as a weakness into a competitive strength. Our top digital native customers, or D&E, which include cloud and AI native companies, are now our fastest-growing cohort and, in fact, are growing significantly faster than the market on DO. In a nutshell, scaling our top customers was once a constraint. Today, it’s our growth engine. Second, we are on the right side of software disruption driven by AI. Modern cloud and AI native companies are going after large markets with disruptive AI-centric software innovation. They are increasingly choosing DigitalOcean as their natural platform to build and scale their agentic software. And when these companies disrupt and scale at unprecedented rates on our platform, we win.
Third, we put the cloud in Neocloud. These AI natives need more than just GPU rentals or inference APIs. They need access to optimized AI models, both closed and open source, production-grade inferencing and a full stack cloud for their software, all working together at global scale. We deliver all of it in one integrated agentic inference cloud. And finally, we are building a durable and profitable growth engine. We are investing responsibly while driving balanced growth. Without chasing the GPU training arms race, we expect to deliver 21% revenue growth in 2026, reaching 25% plus growth by Q4 2026 and 30% growth in 2027. We are on a path to being a weighted rule of 50 company next year on the back of our existing committed data center capacity alone.
Put simply, we are accelerating growth the DigitalOcean way. In December, we crossed a major milestone, surpassing a $1 billion revenue run rate. This is a remarkable achievement for a company that was founded through Techstars in 2012. This success is a testament to our passionate team and the vision of our original founders. I also extend my deepest gratitude to all our incredible customers who have supported us throughout this journey. But what matters more than this milestone is where we are going. We exited 2025 at 18% year-over-year growth and are on a path to deliver 21% growth in 2026 with an exit growth rate of 25% plus in Q4 of 2026. We are picking up momentum, and we have outgrown the old narrative. Let me elaborate. Our top customers are now our growth engine.
For our first decade, we built an iconic developer cloud. That foundation still matters, and we have over 4 million active developers on our platform that absolutely love us. Over the last several quarters, we have deliberately shifted focus towards serving our top D&E customers and eliminating any reason for them to leave DigitalOcean as they scale, and that focus is working. In Q4, we delivered record organic incremental ARR of $51 million, and $150 million on a trailing 12-month basis, both surpassing even our peak COVID era quarters. This record trailing 12-month incremental ARR was balanced across AI and cloud customers. ARR from D&E reached $604 million in Q4, which is now 62% of total ARR, growing 30% year-over-year. And our D&E NDR reached 102%, continuing to outperform developer NDR.
And like I’ve been reporting for a while now, our largest customers in the D&E cohort are accelerating the fastest. Our $100,000 customers are growing at 58%, our $500,000 customers are growing at 97%, and our $1 million customers, who reached $133 million in ARR, are growing at 123% year-over-year, all well ahead of market growth rates. And NDR also increases meaningfully as these customers scale. In Q4, it was 102% for our $100,000 customers, 106% for our $500,000 customers and 115% for our $1 million customers. Churn for our $1 million customers was zero in Q4 and has averaged 0% over the last 12 months, which clearly shows that our top customers are now scaling with us and becoming our growth engine. This should also effectively debunk any misconception that our most successful customers will outgrow our platform.
Recapping this section, we are accelerating past the $1 billion revenue run rate milestone, and our top customers are driving this acceleration. We are no longer defined just by entry-level developers experimenting on our platform. We are defined by high-growth cloud and AI native companies running production workloads, scaling revenue and building their businesses on DigitalOcean. Said simply, scaling our top customers was once a constraint. Today, it’s our growth engine. On to the next point. We are on the right side of software disruption. There is a structural shift happening in software, and DigitalOcean is emerging as a preferred platform for the cloud and AI native companies that are driving this disruption. The last generation of Software as a Service, or SaaS, monetized per user, per seat; value scaled with headcount.
This next generation of AI-centric software monetizes per token, per inference request; value scales with intelligence delivered. As AI model capabilities accelerate, entire categories of horizontal and vertical software are being reinvented. Incumbents are reacting to transformational change by layering AI into their workflows, seeking to enhance their existing software. But AI native companies are starting from first principles. For them, AI isn’t a feature. It is the very engine that defines their product. Every time they deliver value, [indiscernible], tokens are consumed and intelligence is produced. DigitalOcean is uniquely positioned to serve these disruptors, and that is evident in the traction we are getting from leading AI native companies.
We have signed and expanded production workloads with scaled cloud and AI native companies like character.ai, Workato and Hippocratic AI, companies with product market fit, real revenue and rapidly scaling demand. Our work with character.ai demonstrates this clearly. We delivered a 100% throughput increase and roughly 50% lower cost per token for character.ai on our production inference cloud, powered by AMD Instinct GPUs, at production scale. This is not a lab benchmark. This is on live traffic across tens of millions of customers. This demonstrates our ability to support production scale inferencing for leading AI companies with our differentiated performance, cost efficiency and integrated AI and cloud platform built for inference-first production workloads.
Another AI native with proven product market fit is Hippocratic AI, which builds health care-focused conversational AI designed to support clinical workflows and patient engagement. Hippocratic AI selected DO’s Agentic Inference Cloud to power HIPAA-compliant clinical AI workloads. This validates not just our performance but our enterprise-grade security and compliance. For Hippocratic AI, we optimized their multimodal deployment on NVIDIA hardware, reinforcing the importance of vertical innovation from GPUs to networking, [ cortile ] optimization, cloud integration and inference software. These AI natives also scale very differently. While traditional cloud customers may take years to reach $1 million in ARR, AI natives can cross that threshold in months or even weeks.
When inference is your product, demand compounds quickly. DigitalOcean is purpose-built for these disruptors. As software becomes more intelligent and AI-centric, we are building the vertically integrated inferencing cloud designed to power the next generation of AI natives, putting us squarely on the right side of this AI-driven disruption, and our Agentic Inference Cloud is capturing these disruptors. Next, let me explain how we are enabling this. We do this by putting the cloud in Neocloud. Over the last couple of years, a new category of Neoclouds has emerged that is largely optimized for one thing: large-scale AI model training, with dense GPU farms, high-performance networking and frontier AI model training workloads. This is an important layer of the AI stack, but serving inferencing is different.
As AI diffuses into every software company, workloads shift from training a handful of frontier models to running millions of real-world applications. And real-world AI-centric software needs more than GPU farms. It needs compute, storage, databases, networking, observability and security, all working seamlessly together with predictable and transparent unit economics. Over the past 4 quarters, we have evolved our Agentic Inference Cloud to meet that reality. We have combined specialized inference infrastructure with our full stack cloud platform, purpose-built for production AI, while staying true to what defines DigitalOcean: simplicity, open standards, enterprise-grade performance and SLAs, and predictable and transparent unit economics. A good recent example of this in action is OpenClaw, which recently took the world by storm by demonstrating the power of agentic software, giving us a glimpse into what the AI-centric software future will look like.
OpenClaw is an open source AI agent framework that allows developers to run real-world, task-driven agents. When customers deploy OpenClaw on DigitalOcean, they need more than just GPUs, because AI agents are stateful. They reason, they take action, they retain memory, they interact with third-party APIs. All this requires more than just a GPU farm. It takes a full cloud and AI stack working together side by side. Customers increasingly understand this, as inference is the heartbeat of modern AI natives. It is their primary operating cost, their performance lever and their competitive moat. Their production traction scales directly with model quality, inference performance and unit economics. As they grow, they don’t build their products around a single closed source model, but rather orchestrate multiple models in real time, often leveraging open source and mixture-of-experts approaches to optimize both accuracy and unit economics.
Our platform delivers flexibility at every layer, from serverless inference APIs to dedicated clusters and GPU droplets, allowing customers to precisely match performance and cost to their workload requirements. We pair that with performance optimized open source models, delivering high accuracy, strong throughput, low latency and compelling unit economics. And this isn’t a stand-alone inference platform. It is deeply integrated with the full stack cloud that we have hardened over the last dozen years so that customers can build, deploy and scale their entire AI application in one integrated environment with enterprise SLAs. Our agent development platform takes them from experimentation to production with real-world AI agents. Underpinning all of this is a deep lineup of GPUs from NVIDIA and AMD, supported by a rapidly expanding global data center footprint, built and operated with years of operational expertise supporting mission-critical workloads.
This integrated platform and flexibility of choice is precisely what makes DigitalOcean a natural platform for agentic software. Let me explain this again using OpenClaw as an example. Customers can build and deploy OpenClaw agents on DigitalOcean in 2 distinct ways, depending on their need for control, scale and operational complexity. The first path optimizes for simplicity and speed. Customers can launch a preconfigured one-click GPU droplet and have an OpenClaw agent running in minutes. This model gives full control over the environment, ideal for experimentation, customization, performance tuning and for teams that want direct access to the infrastructure layer. The second path optimizes for global scale. Customers can deploy OpenClaw on DO’s managed serverless platform, where DigitalOcean handles provisioning, scaling, security, container orchestration and operational management.
This approach is ideal for teams that are scaling a global application. Both approaches run on the same integrated cloud, with access to managed databases for agentic memory, object storage for artifacts, virtual private cloud networking, observability and GPU-backed inference. That’s what vertical integration looks like in the inference economy: not just providing bare metal GPUs or even just generating inference tokens, but providing a secure, scalable and manageable foundation for intelligent, stateful systems. Within days of launching OpenClaw, nearly 30,000 native DigitalOcean one-click OpenClaw droplets were created, and that was just the starting point. Thousands of other OpenClaw deployments were activated by customers, signaling the emergence of a new ecosystem almost overnight.
The success of OpenClaw is an early view of how the AI market will continue to evolve and can serve as a blueprint for AI native businesses on how a new generation of software will be built, around autonomous agents that orchestrate complex multistep workflows across systems, continuously reason with data and context, and execute tasks end-to-end with minimal human involvement. As these AI native companies move from proof of concept to production agents, the richness of the underlying platform, the security posture, manageability, scalability and predictable unit economics become mission-critical. And that is exactly where DigitalOcean is fast emerging as the natural platform for building and scaling agentic AI software. The competitive landscape is crowded with companies speaking to their ability to address the inference market, but our differentiation from these competitors is very clear.
Neoclouds rent out GPUs. Inference wrapper providers stop at inference APIs and model libraries. We continue to effectively compete with hyperscalers, who bring scale but also come with complexity and cost structures that are aimed at traditional large enterprise companies. While each of these competitors addresses a component of the inference value chain, real-world agentic software requires a tightly integrated environment where inference, orchestration, persistence, networking and security are designed to work together with simplicity, global scale, enterprise SLAs and predictable unit economics. That is where DigitalOcean wins. This differentiation is clear to our customers, but it’s also very clear in our financial profile. As a full stack cloud provider that has operated mission-critical workloads for cloud and AI natives for over a decade, we look very different from a financial perspective than other players chasing the AI training market or components of the inference market.

Where Neoclouds have very high revenue concentration, with just a few very large customers making up the vast majority of their revenue, DigitalOcean’s top 25 customers represent only 10% of our revenue. While GPU rental providers earn bare metal revenue and margins on their infrastructure, DigitalOcean drives higher revenue and margin from our full stack inference and cloud solutions. And while a growing number of Neoclouds are investing massive amounts of capital and burning near-term profits and cash for future returns, DigitalOcean is already profitable and generating cash. Our traction with cloud and AI natives is no accident. It is the result of relentless, focused investment and disciplined execution. We recently strengthened our executive team by adding Vinay Kumar as our Chief Product and Technology Officer.
As a founding member of Oracle Cloud Infrastructure, or OCI, Vinay brings deep hyperscale expertise and leads our product, platform, infrastructure and security teams. Having built a hyperscaler from the ground up at OCI, he looks forward to scaling up another one at DigitalOcean, one that is purpose-built to meet the complex needs of cloud and AI native workloads globally. In the meantime, our R&D team has been very busy continuing to ship products and features that are helping our customers scale on our platform. On our core cloud, we launched remote MCP support, embedding AI directly into the control plane and enabling secure, zero-setup infrastructure management. On our AI platform, we introduced the agent development kit and enhanced agent evaluation tools to help customers move from experimentation to production with measurable performance and reliability.
With GPU observability, managed NFS and multi-node GPU support, we significantly expanded our ability to run large-scale, mission-critical inference in production. This is what vertical integration looks like: infrastructure, inference, observability and agent tooling, all built to seamlessly work and scale together. And we’re just getting started. We’ll share the next wave of innovation on our Agentic Inference Cloud at our next Deploy conference in San Francisco on April 28, as we continue building the platform purpose-built for the inference economy. Our differentiation is durable and will continue to grow as the market shifts from training to inference. To give investors clearer visibility into this momentum, we are introducing a new metric: AI customer revenue.
AI customer revenue includes all revenue from customers leveraging our AI products, including both inference and core cloud services, because AI natives don’t just buy GPUs; they build, operate and scale applications, which need a full stack inference cloud. In fact, 70% of our AI customer ARR in Q4 2025 was already coming from inference services or general-purpose cloud products rather than from bare metal GPU rentals. And these customers are growing rapidly, with Q4 AI customer ARR reaching $120 million, growing 150% year-over-year and now making up 12% of total ARR. In summary, we don’t just rent GPUs. We run production AI. We are not a GPU landlord. We are an AI cloud platform. We deliver hyperscaler-grade infrastructure and reliability, with purpose-built inference services co-located and integrated with a full stack general-purpose cloud, designed for the next generation of AI natives.
Or put simply, DigitalOcean puts the cloud in Neocloud. Now on to my final takeaway. We are building a durable and profitable growth engine. At our Investor Day last April, we laid out a plan to return the business to 18% to 20% growth by 2027. On our last earnings call, we pulled that growth projection forward by a full year, guiding that we would reach that 18% to 20% growth range in 2026. And just 9 months after setting that original plan, we’ve already reached the bottom end of the target range at 18% growth in Q4 of 2025, achieving it 2 full years ahead of our original target. And the momentum we are seeing gives us even greater confidence. We now expect to deliver 21% revenue growth for the full year 2026, with an exit growth rate of 25% plus by Q4, and reaching 30% growth in 2027.
As we ramp into our committed 31 megawatts of incremental capacity this year, there will be measured near-term pressure on gross margin and adjusted EBITDA, but we remain confident in our 18% to 20% unlevered adjusted free cash flow margin guide for the year. The near-term pressure is just a physics problem, given the start-up cost timing and revenue ramp characteristics of quickly adding new capacity. It is the natural result of pursuing high-return growth opportunities, but we remain disciplined operators. Demand continues to far outstrip supply, and we will take advantage of opportunities to further accelerate growth when they present themselves. We will do so responsibly, and we will continue to pursue investments with attractive returns, match investments with revenue timing, maintain a strong balance sheet and allocate capital with rigor even as we accelerate.
Growth and discipline are not trade-offs for us. They are both operating principles. With that, I will turn it over to Matt to walk through the quarter and the year in more detail and to provide additional color on our updated outlook. Matt, over to you.
Matt Steinfort: Thanks, Paddy. Good morning, everyone, and thanks for joining us today. As Paddy just shared, we’re a very different company today than we were just a few years ago. It’s an exciting time at DigitalOcean. We are a rapidly growing and profitable company that is incredibly well positioned to take advantage of the hyperscale-sized inference market opportunity. This excitement is clearly evident both in our recent financial performance and in our higher near-term and long-term outlook. Revenue growth has reaccelerated. We’ve reversed declines from our top customers, turning them into a key driver of our growth. We have scaled our AI customer ARR to $120 million, growing 150% year-over-year. And we’ve done this profitably, growing adjusted EBITDA and adjusted free cash flow on both an absolute and a margin basis.
While we are pleased with our progress over the past several years, it is our recent momentum that gives us the confidence to further increase our near-term and long-term outlook. Fourth quarter revenue was $242 million, up 18% year-over-year and we closed ’25 with full year revenue of $901 million. We delivered sustained acceleration through the back half of 2025, driving a 500 basis point increase in Q4 growth from the same period just a year ago. We delivered the accelerated revenue growth with strong margins and growing profits even as we increased our investments. Fourth quarter gross profit was $142 million, up 13% year-over-year, with a gross margin of 59%. For the full year, gross profit was $540 million, up 16% year-over-year, with a gross margin of 60%.
Adjusted EBITDA in the fourth quarter was $99 million, an adjusted EBITDA margin of 41%. Full year adjusted EBITDA was $375 million, a 42% adjusted EBITDA margin. Trailing 12-month adjusted free cash flow was $168 million in Q4, or 19% of revenue. We maintained our attractive free cash flow margins in ’25, in part by expanding our financial toolkit to include equipment financing. This better aligns infrastructure investment timing with the revenue that it supports. We will continue to utilize a combination of upfront asset purchases and equipment leasing as we invest to fuel our growth. We continue to be disciplined financial stewards for our investors. We prudently use stock-based compensation to attract and retain our critical talent while repurchasing shares to mitigate dilution.
Stock-based compensation declined to 9% of revenue in 2025, down from 12% in the prior year. To put that number in context, we have a 33% margin if you subtract SBC from adjusted EBITDA. At a 33% margin, we are just above the 80th percentile of a broad software comp set on an adjusted EBITDA less SBC basis. And we are well above the 13% median of that group. Non-GAAP weighted average shares outstanding increased slightly from 103 million to 105 million over the same period. To reduce dilution, we repurchased 2.4 million shares in 2025 for $82 million at an average price of approximately $35. Note that we ended 2025 with our full $100 million buyback authorization in place, and that authorization continues through July 31, 2027. While we continue to view share repurchases as an important long-term tool, our near-term capital allocation priorities are squarely focused on organic growth and balance sheet flexibility.
GAAP diluted net income per share in the quarter was $0.24 and $2.52 for the full year, a 183% year-over-year increase. Non-GAAP diluted net income per share in the quarter was $0.44. For the full year, non-GAAP diluted net income per share was $2.12, a 10% year-over-year increase. As a quick reminder, recall that our 2025 net income per share metrics were impacted by the actions we took in ’25 to strengthen our balance sheet. In 2025, we proactively addressed the upcoming maturity of our 2026 convertible notes. We did this through a series of successful financing transactions that have given us significant balance sheet flexibility. These transactions included the establishment of an $800 million bank facility, the issuance of $625 million of 2030 convertible notes and the repurchase of the majority of our then outstanding 2026 convertible notes.
Excluding the effects of these financing transactions, non-GAAP diluted net income per share would have been $2.29 for the year and $0.53 for the quarter. With our 2026 notes largely addressed, we ended the year with a strong balance sheet. We have sufficient liquidity and projected cash generation to address the remaining $312 million balance of our outstanding 2026 convertible notes. Having drawn down the remaining $120 million on our Term Loan A in February, we will repurchase or redeem the remaining 2026 notes for cash at or before their maturity in December of ’26. Beyond this, we have no other material maturity until 2030, and we entered 2026 with approximately 3.2x net leverage. Before I get into guidance, I want to highlight an action we are taking to further concentrate our investments on our key growth levers.
We are sunsetting a small legacy dedicated Bare Metal CPU offering. We expect approximately $13 million of ARR to roll off by the end of Q1 2026. As this revenue is noncore, we have excluded this legacy product revenue from our customer-specific year-over-year growth metrics. Shifting back to guidance. We entered 2026 with tremendous momentum and confidence. Paddy spoke of the material demand we’re seeing for our Agentic Inference Cloud. We also continue to improve visibility on our near-term revenue growth, as we increased RPO in Q4 to $134 million, up 121% sequentially and up close to 500% year-over-year. With this growing demand and visibility, we are again increasing our near-term growth outlook. For the first quarter of 2026, we expect revenue in the range of $249 million to $250 million, which is approximately 18% to 19% year-over-year growth.
We expect first quarter adjusted EBITDA margins in the range of 36% to 37%. We expect non-GAAP diluted net income per share of $0.22 to $0.27 based on approximately 111 million to 112 million weighted average fully diluted shares outstanding. For the full year 2026, we expect revenue growth between 19% and 23%. This is 21% at the midpoint, beyond the 18% to 20% growth outlook that we shared just last quarter. And it is important to highlight that this would be 21% to 24% projected growth if we exclude the impact of our discontinued legacy Bare Metal CPU offering. We will deliver this accelerated growth while maintaining attractive margins. We project full year 36% to 38% adjusted EBITDA margins and 18% to 20% unlevered adjusted free cash flow margins, which is $207 million at the midpoint.
We expect non-GAAP diluted net income per share of $0.75 to $1 on 111 million to 112 million weighted average fully diluted shares outstanding. This growth outlook is based on the incremental data center and GPU capacity investments that we have already committed that will come online over the course of 2026. As we look at the quarterly progression within 2026, it is important to understand the timing of this incremental capacity and how that timing impacts our financials. We are bringing 31 megawatts of new data center capacity online in 3 new facilities in 2026. The smallest of our 3 new facilities will start ramping revenue in the second quarter. The remaining facilities start ramping revenue in the second half of 2026. Aligned with this capacity ramp, we expect second quarter revenue growth to remain around 18% to 19%, with revenue growth then ramping in Q3 before exiting the year at 25% plus in Q4.
While there are always supply chain and implementation timing risks to manage, we believe our implementation time line is realistic. Increased data center lease expense and equipment depreciation expense will both hit our financials several months before we generate our first revenue in these facilities. Given this lag between expenses and revenue, cost of goods sold from higher GPU-related depreciation and operating expenses from new data center operating leases will increase in the early part of the year as we ramp into the new capacity. These increased costs will cause the expected upfront drops in gross margin and net income that we have seen when we turned up our previous data centers. The initial impact will just be larger, as we are turning up more capacity at one time than we’ve done in the past.
Near-term adjusted EBITDA margins will also be impacted somewhat by these dynamics, although the impact is less, as adjusted EBITDA is only impacted by the higher data center operating expense. Net leverage is projected to be above 4x in the short term as we add finance lease obligations to fund our GPU and CPU investments; this increases net debt several months ahead of the revenue and adjusted EBITDA ramp. We anticipate returning to below 4x net leverage over the medium to long term as we increase utilization in these data centers and ramp revenue and adjusted EBITDA. We will achieve these growth targets by focusing on our 2 primary growth levers: scaling our top D&E customers and expanding our base of AI native customers. We will focus our investments on meeting the needs of our top D&E customers so that they can continue to scale on DigitalOcean as they grow their own businesses.
We will continue to invest both in our differentiated Agentic Inference Cloud and in the data center and GPU capacity required to support AI natives. While we are excited by our growth potential in 2026, we are just getting started. As we reach full utilization on our existing committed capacity, we expect to reach 30% revenue growth in 2027. We will drive this growth while delivering projected 20% plus unlevered adjusted free cash flow margins, which would make us a rule of 50-plus company in 2027. We will achieve this while making smart investments, earning attractive margins and maintaining a healthy balance sheet. We have both the tools and the discipline in place to continue to take advantage of opportunities as they arise. We will continue to share details on our leading indicators and our progress as we execute.
We are increasingly confident in our ability to build a durable and profitable growth engine. With that, I’d like to turn it back over to Paddy to close this out before we get to Q&A.
Padmanabhan Srinivasan: Thank you, Matt. Before we move to Q&A, let me leave you with a few thoughts. We crossed a $1 billion revenue run rate in December, but that milestone is not the headline. The headline is where we are heading. We are no longer a niche developer cloud; we are a platform that high-growth cloud and AI natives are increasingly choosing to run production AI workloads at scale. We are projecting to exit 2026 at 25% plus revenue growth, with a clear path to 30% growth in 2027 with the existing committed data center capacity alone. Our top customers are accelerating and are growing significantly faster than the market on DO. We have outgrown the old DigitalOcean narrative. Scaling our top customers was once a constraint.
Today, it’s our growth engine. Our $1 million customers are at $133 million ARR, growing at 123% year-over-year. The world of software is shifting from seats to tokens, from experimentation to production, from model training to inferencing at scale. And in that shift, the winners in inference will be more than just GPU landlords. They will be vertically integrated AI cloud platforms that deliver performance, great unit economics and simplicity, and that embrace open source, exactly what we have and what we continue to build. Our AI customer ARR reached $120 million in Q4, growing 150% year-over-year, with 70% of that coming from inference and core cloud products, not from Bare Metal. And we’re doing it without chasing the GPU training arms race, without sacrificing discipline, without compromising profitability. We are building something durable.
AI is reshaping entire industries, and we are built for this shift. I’m incredibly excited to be part of DigitalOcean at this critical inflection point where a new era of software is being ushered in. I take incredible pride in building a platform that AI pioneers are increasingly leveraging to disrupt software. I thank all of you for your partnership and support, and I hope you will join us in San Francisco on April 28 to learn about our platform, our innovation and our customers. With that, let’s open it up for your questions.
Q&A Session
Operator: [Operator Instructions] Our first question comes from the line of Raimo Lenschow with Barclays.
Raimo Lenschow: Congrats from me. It’s amazing how the company is transforming right in front of my eyes. Paddy, can you talk a little bit about the customers that you’re seeing? The talk in the market is that a lot of inference is just OpenAI, Anthropic, maybe Google, that they are basically doing everything and nobody else really comes up. When you talk to your customers and look at the pipeline of customers out there, how do you see that inference market evolving in terms of how broad it will be? Is it just a handful of providers doing everything? Or what are you seeing out there in the field? And then I had one follow-up to that.
Padmanabhan Srinivasan: Yes, Raimo, thank you for the question. It’s a very thoughtful way to get started. Of course, OpenAI, Gemini and Anthropic get all the headlines in the mainstream news coverage. But as we talk to AI native companies, including the examples that I was using in my script, and you will hear a lot more about this at our Deploy conference with very specific benchmarks and data, what we are hearing from these AI native companies is that while the closed source models are really, really good, the open source alternatives are extraordinarily important for managing unit economics as these companies scale, because the cost per token for an open source model is about 90% cheaper, with very comparable accuracy as these open source models mature.
So we have many AI native customers that are using, as I mentioned, a variety of open source models in real time when they are doing inferencing. They want us to manage a multitude of open source models and even route their requests intelligently to those models, and of course use closed source, more expensive models on a case-by-case basis. It could be that certain prompts are better served by the closed source models, with everything else routed to the open source models, so that they can have balanced unit economics. And if you look at data from OpenRouter, 30% of the traffic today is already served by open source. That is without a lot of optimization, and without companies like DigitalOcean really stepping up and taking full ownership and guardianship of these open source models.
So we are doing a lot of work in this regard over the next couple of months, and you will see it at our Deploy conference. But this 30% is only going to grow as these real-world AI native workloads expand, and we are going to see a lot of open source adoption. Even in the deployments that we are seeing, there is a very healthy adoption of open source models serving these agent farms. So it is really interesting to see how this is evolving. And I want to say there is definitely a world beyond these closed source models. The open source ecosystem is thriving, and it is only going to grow in strength from here on.
Raimo Lenschow: Yes. Okay. Perfect. And Matt, one question that comes up a lot at the moment is on the weighted rule of 50 numbers, if you look at your [ weighting ]. And then there are a lot of questions about the free cash flow margins that you are assuming for 2027. Can you maybe go a little bit deeper there, because that comes up a lot here at the moment?
Matt Steinfort: Yes. Thanks, Raimo. The weighted rule of 50 is pretty simple for us. We multiply revenue growth by 1.5x and add 0.5x the free cash flow margin. That is effectively saying that you are counting a point of revenue growth as 3x as valuable as a point of free cash flow margin. But the important thing to note is that while we talk about a weighted rule of 50, if you look at the growth projections we provided, we are actually a regular rule of 50 as well, with projected 30% revenue growth in ’27 and 20% unlevered free cash flow margins. So that is, I think, a very big testament to the growth opportunity that we have in front of us, but also to the financial discipline that we have been employing. The ability to accelerate revenue growth while still maintaining very attractive EBITDA margins and very attractive free cash flow margins is part of the model, and it is the benefit of us not chasing the GPU training arms race.
We believe that we’ll differentiate based on software and a differentiated platform, and we see a tremendous opportunity to drive really attractive margins as we expand and invest appropriately.
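Matt’s weighted rule-of-50 arithmetic can be sketched in a few lines of Python. This is an illustrative reconstruction: the 1.5x/0.5x weights come from his description on the call, and the example figures are the projected 2027 numbers he cites (30% revenue growth, 20% unlevered free cash flow margin).

```python
def weighted_rule_of_50(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    """Weighted score as described on the call: growth is weighted 1.5x and
    free cash flow margin 0.5x, so growth counts 3x as much as margin."""
    return 1.5 * revenue_growth_pct + 0.5 * fcf_margin_pct

def plain_rule_of_50(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    """Unweighted 'regular' rule of 50: growth plus margin."""
    return revenue_growth_pct + fcf_margin_pct

# Projected 2027 figures from the call: 30% revenue growth,
# 20% unlevered adjusted free cash flow margin.
growth, margin = 30.0, 20.0
print(weighted_rule_of_50(growth, margin))  # 55.0 -> clears the weighted bar
print(plain_rule_of_50(growth, margin))     # 50.0 -> also a regular rule of 50
```

Under these projections the company clears both the weighted and the unweighted threshold, which is the point Matt is making.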
Operator: Our next question comes from the line of Kingsley Crane with Canaccord Genuity.
William Kingsley Crane: Congrats to the whole team on the results. I think you’ve done an excellent job with the investor update. I actually want to circle back to the inference cloud dynamic with open source models. We’ve been looking at OpenRouter data as well. Some of these models come and go pretty quickly, have many [indiscernible] communicate to. How are you thinking about quickly providing support for those classes of models? Is there any operational tax to quickly providing support? And then, how should we think about them driving growth, both from a revenue and profit standpoint? Could there be more of a [ Jevons ] paradox dynamic there with the lower-cost models?
Padmanabhan Srinivasan: Yes. Thank you, Kingsley. That’s a good question. You asked 2 different questions. One was about the operational overhead of providing day 0 support for these models. Obviously, we have been extending day 0 support for a majority of these open source models as they come out. And there are a couple of things there. One is, obviously, there is a little bit of manual overhead in supporting these models, but a large portion of this test and readiness harness is automated, and it is only going to grow in automation; you will see a lot more details around this at our Deploy conference. And the second part of your question was really around the [ Jevons ] paradox: as these open source models proliferate, how should we think about the growth profile of not just our platform but also these companies? I think it is only going to aid in the deployment of AI native software in pretty much every segment of the market.
And I think we should also not think about AI native workloads as open source or closed source. What we are seeing is a mixture of both for the same use case, even for the same inference call; for some parts of the application stack, based on the prompts, we do intelligent routing. Right now, it’s fairly manual, but we are working on different types of algorithms to route requests in a much more intelligent and smart fashion. So you will see a universe going into the future where prompts are going to get routed to different models all working together at the same time to deliver high throughput, low latency and acceptable accuracy with great unit economics of token throughput. This is coming. We are already seeing it in many of our AI native workloads.
And that is how I see the market evolving as open source models continue to catch up with these closed source systems. The closed source systems are really important for being on the bleeding edge of innovation, but a vast majority of these long-running agentic software workloads like [ Open Cloud ] can very materially run on these open source systems.
William Kingsley Crane: Thanks, Paddy. That’s really helpful. And then for Matt. Obviously, $22 million of ARR per megawatt is a clear differentiator. I’m curious, now that Atlanta is close to full utilization, any insights you have on what a fully utilized megawatt can look like from a revenue efficiency standpoint for AI?
Matt Steinfort: Yes, that’s a great question, Kingsley. If you look at the public data that’s available for, say, a Neocloud, which is more of a bare metal model, they show something like $9 million to $12 million, I think, in ARR per megawatt. Clearly, we believe we can deliver more than that. And if you look at the guidance that we’ve given, what you’ll see is that while it’s [ 22 ] now, that’s with a small share, less than 10% or right around 10%, of our ARR in AI. So as we grow AI, it will come down. We’ll add incremental ARR per megawatt greater than what you’re seeing from the Neoclouds, but with a bigger mix of AI by the end of ’27, once we’re fully ramped with the incremental [ 31 ], it will only drop by a couple of million.
That will be around $20 million. So don’t think of us as having separate AI investments and core cloud investments; we have an overall AI cloud platform that has GPUs, CPUs, core compute, bandwidth and all the capabilities that you need, and we still expect to deliver materially higher ARR per megawatt than what you’re seeing in the Neocloud space. So we feel really good about the returns that we’re getting and the margin that we’re able to drive. And this is only going to increase. You saw the chart in the deck about how much of the AI customer revenue is coming from non-Bare Metal: that’s 70%, and it’s only going to increase. And that core cloud lever is only going to grow as customers become entrenched on our platform and they start adding databases, storage and some of the other higher-margin capabilities that are sticky.
We’re very excited about our ability to serve the kind of full addressable wallet of the AI native.
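As a rough illustration of the blended ARR-per-megawatt dynamic Matt describes: the $9M-$12M Neocloud range, the current roughly $22M figure and the roughly $20M projection are from the call, but the core/AI split below is an invented assumption used only to show how the mix arithmetic could work.

```python
def blended_arr_per_mw(core_arr_per_mw: float, ai_arr_per_mw: float,
                       ai_mw_share: float) -> float:
    """Blend ARR per megawatt across core cloud and AI capacity,
    weighted by the share of megawatts devoted to AI."""
    return (1 - ai_mw_share) * core_arr_per_mw + ai_mw_share * ai_arr_per_mw

# Hypothetical split: core cloud at ~$24M ARR/MW and AI capacity at
# ~$14M ARR/MW (still above the $9M-$12M Neocloud range cited on the call).
# With a small AI share the blend sits near the reported ~$22M level...
print(blended_arr_per_mw(24.0, 14.0, 0.2))
# ...and a larger AI mix pulls it toward the ~$20M level Matt projects.
print(blended_arr_per_mw(24.0, 14.0, 0.4))
```

The point of the sketch is that even as a growing AI mix dilutes the blended figure by a couple of million, the blend stays well above the pure bare metal benchmark.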
Operator: Our next question comes from the line of Josh Baer with Morgan Stanley.
Josh Baer: Congrats on the strong results and impressive targets. Just wanted to clarify: the incremental 31 megawatts all comes online by the end of ’26, driving that 25% revenue growth exiting the year. But then, as utilization increases, that capacity is enough to reach the full 30% growth in 2027 revenue?
Matt Steinfort: That’s absolutely right, Josh. You nailed it. As we said on the call, the smallest of the 3 facilities, which is 6 megawatts, is going to start ramping revenue in the second quarter, but the other 2 start ramping in the second half. And with what we believe are appropriate assumptions around the timing and the ramp, we’ll hit 25% plus in Q4 as an exit growth rate. And then, if all we did was continue to fill those facilities up, we’d hit 30% for the full year in 2027. And we feel very good about, again, the returns that we would generate there and the growth trajectory that we would be on at that point.
Josh Baer: Okay. That’s helpful. And I was just hoping you could review some of Vinay Kumar’s top priorities at this point. There have been so many positive changes from a product and innovation perspective over the last couple of years. What are his priorities? What changes should we expect going forward?
Padmanabhan Srinivasan: Yes. Thanks, Josh. As I was mentioning in my prepared remarks, given his background at RF Cloud, he has really hit the ground running. His top 1 or 2 priorities are to continue to build out the inference cloud, and you will see a lot of very detailed announcements on April 28 at our Deploy conference on what the next generation of these inference cloud capabilities is going to look like. The team is super [ head down ] and busy working on it now. We also will continue to raise the bar on our core cloud capabilities, because our cloud native digital native enterprise companies are also scaling tremendously on our platform, and they require continuous innovation from our side on advanced things like different types of databases, scalability aspects of our database as a service, and various parts of our core cloud infrastructure like high-performance storage and network file systems.
So one of the things that Vinay is working on is delivering innovation in our core infrastructure that is applicable to both AI natives and cloud natives. There is a huge intersection: when you look at the AI natives that are rapidly scaling up on our platform, they require very similar things, high-performance storage being one example. I don’t want to preannounce the things we are working on, which we will come out with on April 28, but a lot of them are very similar to what our cloud native companies can also benefit from. So there is quite a robust lineup of capabilities that we are working on, both for the inference cloud and for some of the underlying infrastructure enhancements that will be applicable to digital native enterprise companies.
So that’s what he’s focused on delivering. And as I mentioned, given his background, he’s almost hit the ground running in terms of ramping up the innovation on the core inference cloud.
Operator: Our next question comes from the line of Wamsi Mohan with Bank of America.
Wamsi Mohan: Great to see the growth acceleration here. Firstly, maybe, Paddy, just on visibility around the 30% growth. How should we think about that? Historically, obviously, DigitalOcean was a very different company than it is today; you really have not had long-term contracts or long-term visibility, and you’re talking about a very meaningful acceleration as you go to 30% plus. Maybe you could dissect some of the underlying drivers of what you’re looking at, which gives you the confidence? And maybe split [indiscernible] that between Infrastructure as a Service and Platform as a Service; that would be maybe a different way to slice it and give people a view there. And I have a quick follow-up.
Padmanabhan Srinivasan: Thank you, Wamsi. I think Matt broke down some of the physics of the acceleration, right? We have new capacity that is ramping up throughout this year and going into next year as well, and that gives us a lot of visibility. But first, maybe I should take a step back and talk about the fact that the demand we are seeing now is very, very robust, and it far exceeds the supply that we currently have from an infrastructure point of view. So we are being super responsible in ramping up our capacity and super aggressive on the time lines. We are working very closely together with the data center providers and [indiscernible] to get this capacity online at the fastest possible speed.
So given the schedule that we are currently working on, we feel very confident that as we bring this capacity online, we have enough demand in the pipeline to fill it up with very responsible unit economics. That’s what is giving us the confidence to provide the outlook of 25% plus exiting this year and 30% for next year. Also, our RPO has been going up steadily, and that is one leading indicator. And I should add that inferencing is very different, right? These are real-world workloads. As opposed to training, where a company can just raise venture capital money and commit to a 2-year or 3-year contract to burn dollars to build a frontier model, inferencing workloads are typically paid for by end customers.
So for us, that is super exciting, because we are typically working with post product-market-fit companies that have real revenue, working with real consumers or business-to-business, like Hippocratic AI, which is deploying in some of the world’s largest health care providers. So we know that as their demand picks up, they’re going to need more and more inference capability. Our confidence really stems from the visibility we are getting into our customers and the real-world inference demand. So whether you look at it from a customer perspective or from a capacity point of view, those are the data points that we used to triangulate our guidance for exiting this year and for next year.
Wamsi Mohan: Okay. And then maybe one quick one for Matt. Can you just talk a little bit about the margin progression? You mentioned some near-term margin compression given your capacity ramp. Should we expect that to persist through all of 2026, given the timing of the ramp? And then, as you ramp into ’27, should we be back to 2025 levels?
Matt Steinfort: Thanks, Wamsi. Yes, there’s certainly going to be some near-term pressure, as we said, on gross margin, for example, but the metrics that we think are the best indicators of profitability for us continue to be adjusted EBITDA margin and free cash flow margin, both on an unlevered basis and on a levered basis. And if you look at the margin guidance that we provided for the full year ’26 and the ranges for ’27, you see exactly what you just described: we’ll have a little bit more pressure this year as we ramp, but then as we grow into that capacity and utilization increases, it catches back up, and you should see an upward trajectory on the margins. The mix of AI services versus core cloud is certainly a longer-duration impact, because as we add more AI capabilities and more AI revenue, those margins are lower than the core cloud margin.
So you’ll have a little bit of a mix impact in addition to the timing impact, but all of that nets out in the very, very strong adjusted EBITDA margins that we’re projecting and the very strong adjusted free cash flow and unlevered adjusted free cash flow margins.
Operator: Our next question comes from the line of Gabriela Borges with Goldman Sachs.
Gabriela Borges: Congratulations to the DigitalOcean team. Paddy, I have a little bit of a longer-term question for you. If I think about DigitalOcean’s core value proposition of democratizing access to cloud, that has been true for many years now. My question for you is: what do you think is structurally different about the AI compute cycle that will allow DigitalOcean to capture and hold on to a higher share of wallet in AI inference compute relative to the cloud cycle? And the reason I’m asking is because there are 32 companies that show up in the SemiAnalysis [ MAX ] benchmarking report. We know that [indiscernible] is early. We know the inference cycle is early. How do you think about DigitalOcean’s ability to durably capture higher share relative to the other 31 competitors in the long term?
Padmanabhan Srinivasan: Thank you, Gabriela. I’m sure if SemiAnalysis had been around in 2011 or 2012 when cloud was taking off, there would have been 32 VPS providers as well, and we went from that to a $1 billion run rate in 12 or 13 years. If I take a step back and think about how durable our mission is in the world of AI, I would hit on a few different things. I fundamentally believe that inference workloads are real-world applications, and as the application scales, you need a variety of different things all working together. AI natives [indiscernible] don’t want to use one provider for token generation, go to another provider for database, go to a third provider for their application experience and go to a fourth provider for core storage and other artifacts.
They want an integrated cloud that is colocated, with all of these primitives working hand-in-hand, so that they can focus on building their business and not mess around with infrastructure. The other part I feel very confident about is something we are going to be talking about a lot at our Deploy conference on April 28, which is the emergence of a mixture of AI models required to run efficient unit economics in inferencing mode. On the difference in unit economics between closed source and open source models: open source models are about 90% more cost effective compared to closed source models, and open source already has 30% market share with just a handful of open source models on the market.
So I feel this is only going to go from strength to strength, and that has been a big differentiator for DigitalOcean throughout the years as well. We talk about 32 companies showing up in some of these market landscapes, but when [ Open Cloud ] went viral a couple of weeks ago, or a month ago, we were one of the natural places where developers started deploying it. As I mentioned, we have more than 30,000 of these agents running, and we barely did anything from a marketing point of view. In fact, we did no marketing. All we did was scramble our jets to make sure that developers had a first-class experience deploying these agents on our platform, and we were such a natural choice for running this long-running agentic software because it needs a lot more than just access to GPUs or inference tokens.
So I feel very good that our product strategy is working and we are able to serve the needs of inference workloads running in production. We’re already starting to see the proof points as different parts of our inference cloud get lit up. And per the slide that I walked through on our AI customer revenue, 70% of that revenue already is from non-Bare Metal. That should give us a lot of confidence that our platform services, the higher-margin services, are resonating with our customers. They’re increasingly coming to us as they recognize that bare metal is not going to be sufficient for them.
Gabriela Borges: Yes. Really good color. I’ll stay on this one, the 70% non-Bare Metal data point, and I’ll ask the question to Matt. Payback period on GPUs: the last time we talked about this, I think you told us it was around 3 years, but that was before your [indiscernible] head focused on maximizing or improving the ARR per megawatt of capacity. So my question is, how are payback periods on GPUs trending?
Matt Steinfort: Well, that’s a great question, Gabriela. And one of the things that I want to make sure everybody understands is why we lease gear, why we are doing equipment leasing: it’s to address exactly this challenge. If you said, okay, you’re going to spend hundreds of millions of dollars on GPUs and you’re going to have to wait 3 or 4 years to pay them back, that’s a model, but that’s not the model that we’re pursuing. Our model is that we lease the gear, which means we’re earning more ARR per megawatt, and per associated GPU investment, than what a Neocloud would earn. But we’re also earning cash on that within months of actually deploying it, right? As soon as we deploy the gear and we start earning revenue and it ramps, we’re paying on a monthly basis for that gear over 4 or 5 years, and we’re earning more than 2x that in revenue.
So from a payback-period standpoint, we still have the same kind of payback hurdles that we’ve had before. You’d like to see 3-year paybacks on most of your investments; you might be willing to extend that to win some early customers. But if you actually think about the mechanics, that’s a little bit of an intellectual exercise, because we’re already paying our gear back within a month or 2, since we’re earning more cash than we’ve spent on that gear. And that’s the reason you align your investment with revenue.
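The lease-payback mechanics Matt walks through can be sketched as follows. The 4-to-5-year lease term and the "more than 2x" revenue-to-lease ratio come from his remarks; the dollar amounts below are hypothetical, chosen only to illustrate why leased gear turns cash positive almost immediately.

```python
def monthly_cash_position(gear_cost: float, lease_years: int,
                          monthly_revenue: float, months: int) -> float:
    """Cumulative cash generated by leased gear: revenue earned minus
    lease payments made, assuming straight-line monthly payments."""
    monthly_lease_payment = gear_cost / (lease_years * 12)
    return months * (monthly_revenue - monthly_lease_payment)

# Hypothetical figures: $48M of GPUs leased over 4 years ($1M/month),
# earning a bit more than 2x the monthly lease payment in revenue.
print(monthly_cash_position(48e6, 4, 2.2e6, 1))   # positive in month one
print(monthly_cash_position(48e6, 4, 2.2e6, 12))  # cash builds through year one
```

Contrast this with buying the gear outright: the same unit would start roughly $48M cash-negative and take years to recover, which is the "intellectual exercise" distinction Matt draws.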
Operator: Our next question comes from the line of Param Singh with Oppenheimer.
Paramveer Singh: First of all, Paddy, I wanted to get a sense of your [indiscernible] AI platform; obviously, that’s driving a lot of growth. But where do you think some of the missing pieces are in terms of your technology? Given that the Neoclouds are starting to get a little bit more aggressive, do you think you have a sustainable competitive advantage? And how do you plan to sustain it?
Padmanabhan Srinivasan: Yes. Param, not only do I think we have an advantage now, our lead is increasing compared to other Neoclouds, because they are coming from a training world, which is totally different. The needs, all the way from the way GPUs are networked to the cluster sizes, everything is so different. Inferencing is very different, as I explained, and if you look at the slide that shows the richness of our inference cloud stack, each layer has taken us years and years to perfect. And as we work very closely with AI native companies, we are understanding and getting an appreciation for their real challenges, like the example I was talking about where customers need orchestration across different AI models in real time, when they are trying to parse a prompt and route that query or make real-time decisions.
So we are getting so much intelligence just working hand-in-hand with our customers. I feel like our lead is only going to increase from here on. It’s not to say that we won’t have competition, but I feel very confident in our ability to out-invent these other companies in terms of our inference cloud. And the durability is there for you to see: we have 0% churn in our $1 million plus customers. So something is working, and that is our Agentic Inference Cloud.
Paramveer Singh: And as my follow-up, do you feel you’re constrained by the availability of power and physical locations at this point? Or, put conversely, given the opportunity to invest even heavier and grow faster given the demand from the AI natives, what would you prefer at this time? Or would you rather have a slower pace of investment? Any insight you could give would be really appreciated.
Matt Steinfort: Yes. As Paddy said, we have more demand than we have supply, but we’re also making, I think, very prudent and appropriate investment decisions. We don’t want to go all in with a single customer. We don’t want to go all in on a single generation of GPU technology. We believe that building a diverse set of customers that are very heavy in inferencing workloads, and not chasing training, will build a durable model for us. So we’ll continue to evaluate opportunities to accelerate our growth, and we’ll make appropriate financial decisions in a very balanced way across a diverse set of customers. But we’re very highly concentrated on what we’re good at, where we’re differentiated and where we can earn a good return, and that’s what’s driving our investment decisions.
Operator: Our next question comes from the line of Radi Sultan with UBS.
Radi Sultan: First one for Paddy, on a similar line of questioning, just on that longer-term capacity-add framework. As you think about how much capacity you want to procure, and maybe stretch that out over the next several years, what are you looking at specifically to inform that decision? And what gives you confidence in being able to fill that capacity over the next several years?
Padmanabhan Srinivasan: Yes. Thank you. We look at many, many factors, but the dominant one is customer demand: we look at what our customers are dealing with and how they are projecting their needs, so that is a big, big input for us. The second one is footprint. For inferencing, obviously, we need to have a really good geographic spread, and for all of our new data centers we have both core cloud and AI capacity, all running on the same server stack, so having all of these things colocated is an important aspect for us. The third thing we always look at is how we are going to keep up with the generational leapfrogs of OEMs, including AMD and NVIDIA and perhaps others in the future.
So these are all important factors that we take into account as we consider what our footprint is going to look like over the next several years. We are always making this evaluation and looking at various options as we build out our long-term plan. And as I said, the primary driver is always our customers’ needs and demand and the kinds of workloads they are ramping up; the demand for their applications is a big driver for us. So those are some of the inputs that we use to plan our capacity.
Radi Sultan: Got it. Just a quick follow-up for Matt. Does the [ ’27 ] EBITDA margin and free cash flow guidance contemplate any additional capacity investments next year? Or does that just reflect the 31 megawatts you’re bringing online this year?
Matt Steinfort: It’s just reflective of the 31 megawatts that we’re bringing on this year.
Operator: Our next question comes from the line of James Fish with Piper Sandler.
James Fish: Maybe just following up on that. If AI is growing as fast as it is and you are needing to bring on capacity now to meet all this demand, aren’t you going to need more capacity then? And Matt, additionally, it looks like you’re excluding finance leases from your free cash flow metric. Why treat it like this? As if it weren’t financed, you’d still have CapEx, and it does seem to imply, and I’m getting a lot of this question premarket here, about 10% reported free cash flow in ’27. So can you walk us through that? And I know this is a loaded question, but a lot of those that are providing leased servers are implementing memory cost increases. So how are you thinking about what commitments you actually have from them and the potential pass-through of memory costs?
Matt Steinfort: Yes. I’ll take that in reverse. So yes, we’ve seen increased component costs, the same as others in the industry, and that’s all reflected in our guidance. And again, it hasn’t changed our return expectations or the economics that we see. It just means that there’s more cost associated with some of the servers that we’re bringing on. But I’m glad you brought this up, Fish, because you’ve got to think about our free cash flow in tiers, right? You have unlevered free cash flow, which, again, people should be using from a valuation standpoint, and that we’re talking about being in the 18% to 20% range. When you layer in the interest expense, you get the levered free cash flow, which is what we’ve historically reported — that’s our adjusted free cash flow margin.
And you’re only giving up a couple of percentage points there. That interest right now is half the Term Loan A and half equipment leasing. And then, as you point out, you have the principal payments, which are more of a financing transaction. That’s why they’re not captured in either the adjusted free cash flow or the levered adjusted free cash flow. But if you’re going to lump everything in — what about the mandatory prepayment of $25 million a year on your term loan? Okay, we’ll throw that in there. Take all of the cash payments, including the principal payments, including the prepayment of the Term Loan A — that’s all financing stuff. So again, you’re mixing metaphors here.
So if you throw that all in, we’re still generating cash. So you’re saying, hey, it’s [ 10% ]. I’m like, hey, it’s [ 10% ] — we’re generating cash while we’re accelerating the growth of this business into the 30s, and on an unlevered free cash flow basis, it’s 18% to 20%. So it’s a testament to our ability to dramatically accelerate growth. We’ve taken growth from 11%, 12%, 13% to guiding to 30%, and we’re generating incredibly strong unlevered free cash flow. We’re generating very strong levered free cash flow. And if you throw the kitchen sink in there — all the payments that we have to make — we’re still generating cash. That’s an incredibly strong position to be in, and we have a very flexible balance sheet. So we feel very good about the cash generation that we’re setting up while we’re delivering this growth.
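Matt’s free cash flow tiers can be sketched with hypothetical round numbers. Every figure below except the $25 million mandatory Term Loan A prepayment and the 18%–20% unlevered margin (both stated on the call) is a made-up illustration, not a company figure:

```python
# Illustrative walk from unlevered FCF down to "kitchen sink" cash generation.
# Revenue, interest, and lease principal are hypothetical round numbers.
revenue = 1000.0                         # $M, hypothetical annual revenue

unlevered_fcf = 0.19 * revenue           # mid-point of the 18%-20% range from the call
interest = 25.0                          # hypothetical: half term loan, half equipment leases
levered_fcf = unlevered_fcf - interest   # the "adjusted free cash flow" tier

lease_principal = 60.0                   # hypothetical finance-lease principal payments
tla_prepayment = 25.0                    # mandatory $25M/yr Term Loan A prepayment (from the call)
all_in_cash = levered_fcf - lease_principal - tla_prepayment

for name, value in [("unlevered FCF", unlevered_fcf),
                    ("levered (adjusted) FCF", levered_fcf),
                    ("after all financing payments", all_in_cash)]:
    print(f"{name}: ${value:.0f}M ({value / revenue:.1%} of revenue)")
```

Under these assumed inputs, the all-in figure stays positive — the point Matt is making — even though each tier gives up margin relative to the one above it.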
James Fish: Yes. I mean the growth acceleration looks good. And Paddy, for you on Slide 20 — it got asked a couple of questions ago to a degree — you point out the difference between you guys and the neoclouds and inference wrappers. And maybe being humble about it, you point out that you’re about 75% of the way there in the first 3 categories. So is this something that we should be expecting to hear about at the April event? Or what do you guys need to do to get to that full 100% difference?
Padmanabhan Srinivasan: Yes. Fish, I don’t know if I will ever call myself 100% in those things because that market is changing so fast. Like if we ask 5 of our customers today, what they want versus what they thought they wanted 3 months ago is meaningfully different, right? Because as they are going into their customer base and deploying their solutions, new things come up all the time. The capability of AI models evolve all the time. So this is going to be a moving target for the next couple of years. But the first part of your question, absolutely, that is where our R&D team is super heads down, inventing new technologies, inventing new parts of the stack. So you will hear a lot more about this on April 28. But I would say this is where I feel very confident that we already have a lead and that lead is only going to grow over the next few quarters.
Operator: Next question comes from the line of Thomas Blakey with Cantor Fitzgerald.
Thomas Blakey: Congratulations on a great quarter and a great outlook here. Maybe some follow-ups to my peers’ questions. Paddy, you mentioned, I think to a previous question, demand outstripping supply and giving you the great visibility that you’ve alluded to on this call. I’m not expecting you to give calendar ’28 commentary, but if you wanted to — since you’ve looked out 2 years — ahead of the April 28 event, that would be great. But in addition to that, I’m interested in what you’re seeing in pricing dynamics. If demand is outstripping supply and you’re lining up these new AI natives, maybe some commentary on pricing would be helpful.
Padmanabhan Srinivasan: Yes. Thanks, Tom. I think we have already talked about what we are going to talk about for 2027. But yes, the demand is clearly there, and we are moving as fast as we can to first deliver on these 3 data centers that Matt talked about. From a pricing point of view, we have competition from all kinds of different players, and the pricing is holding. In some cases, it has gone up. We are very, very attuned to what is going on in the market, and there is a lot of scarcity of supply across the board. So we are also in a position where we work very closely with our customers to ensure that we are calibrating our prices, both on-demand as well as contractual, to keep pace with the market dynamics at this point.
But I would say nothing has materially changed. And the pricing is also a function of the generation of the GPUs that we are talking about, right? At the lowest level, if a customer wants access to GPUs, it is priced in GPU dollars per hour. And at that layer, it really depends on the generation of the GPU — whether it is [ Blackwell ] or the [ Hopper ] series for NVIDIA, or the MI350 and MI355 from AMD, or the MI300 or MI325. So it really depends on the generation. There are also other dependencies like the cluster sizes, the cluster configuration, what kind of networking they want and so forth. And as you move up the stack — if you look at my Slide 19 — the one thing that I did not mention on Slide 19 is that customers can enter our stack at pretty much any layer, right?
So the higher up you go in the stack, you’re not pricing per GPU-hour, you’re pricing per token. And there, we have a lot more degrees of freedom in how we price versus the competition. Because there, you’re charging dollars per token, but you also have the flexibility of running the workload on different types of hardware. You can also change the AI model that is servicing the token request. So we have more degrees of freedom, and some customers need that flexibility — they are willing to live at the higher layers of the stack rather than dictating which generation of hardware they want to run on.
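The two pricing entry points Paddy describes can be contrasted with a small sketch. All rates and volumes below are hypothetical illustrations, not DigitalOcean list prices:

```python
# Hypothetical comparison of the two entry points into the stack:
# raw GPU capacity billed per GPU-hour vs. managed inference billed per token.

def gpu_hour_cost(gpus: int, hours: float, rate_per_gpu_hour: float) -> float:
    """Bottom of the stack: the customer picks the GPU generation and cluster size."""
    return gpus * hours * rate_per_gpu_hour

def per_token_cost(tokens: int, rate_per_million_tokens: float) -> float:
    """Top of the stack: the provider chooses the hardware and model behind the API."""
    return tokens / 1_000_000 * rate_per_million_tokens

# An 8-GPU cluster for one day at a hypothetical $2.50 per GPU-hour:
print(gpu_hour_cost(gpus=8, hours=24, rate_per_gpu_hour=2.50))            # 480.0

# Serving 50M tokens at a hypothetical $1.20 per million tokens:
print(per_token_cost(tokens=50_000_000, rate_per_million_tokens=1.20))    # 60.0
```

The design point is that per-token billing decouples the price from any specific GPU generation, which is the "degrees of freedom" Paddy refers to: the provider can swap hardware or models underneath without changing the customer-facing rate.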
Thomas Blakey: Right. That’s super helpful, Paddy. And maybe as an extension of that flexibility, it was impressive to hear about the 0% churn in the large $1 million-plus cohort with 115% NRR. I’d love to know what the overlap there is with the AI-native exposure — if you could maybe talk about those customers and how much of that is from AI. And for Matt, relatedly, are we finally including AI and ML revenue there? And if not, when can we expect that?
Matt Steinfort: Yes. Thanks, Tom. On a customer count basis, about half of the $1 million customers are AI customers and half are core cloud, or general-purpose cloud, only. On a revenue or ARR basis, it’s a little bit more AI, but not a lot — not too far off 50-50. And as you saw in the materials, 48% of the trailing 12-month incremental ARR is coming from those AI customers. So that’s how the split looks. In terms of the metric — no, it’s not in there yet. That’s the reason we disclosed the AI customer revenue, and we will continue to disclose that as a metric along with the growth rate, and also the RPO, which is — again, a decent chunk of that, not all, but a decent chunk is also AI.
We’re trying to give you better leading indicators of the performance of the AI customer base. On the [ NDR ] — if you look at some of the charts that we showed with some of the bigger inferencing providers, they just got started on the platform in the June, July time frame. And there’s a big difference in, I’d say, the size and caliber of the customers that we’ve been winning on the AI side in the last — what is now 7, 8 months, I guess. Those, we think, will have more of your traditional NDR-like characteristics, where they grow and expand on the platform using inferencing, which is more of a production workload. A lot of our earlier customers were smaller customers doing experimentation, doing projects, and in aggregate it looked like revenue was growing like crazy because we were adding a ton of those customers.
But if you look at any of the individual customers, it was hard to see a pattern. And NDR as a SaaS metric looks for exactly that — patterns where you bring on a customer and you can expect them to do X, Y, Z over the next 12 months. We just didn’t see that. There’s noise in our AI customer revenue — a lumpiness early on — that we don’t see changing soon. So we’ll continue to evaluate that every quarter, and at the appropriate time, we’ll contemplate rolling it in. But it’s probably still 12 months away.
Operator: Our next question comes from the line of Patrick Walravens from Citizens.
Patrick Walravens: Congratulations on the quarter. And I have to say congratulations on the slide deck — it’s fantastic, and I’m sure all of your investors are going to appreciate it. So Paddy, I was looking back at my note from 2 years ago when you joined, and at the time, one of the things you said was that the durable competitive differentiator for us long term is going to be in the software layer. And you said you were focused on bringing simple, easy-to-use AI/ML capabilities, in both hardware and software, to developers. And you were growing 11% when you joined and decelerating, right? So as you look back, which of the growth drivers that have caused you to accelerate — now we’re talking about 30% — did you anticipate, and which were fortuitous? That’s probably the wrong word — fortune favors the prepared — but which were sort of unexpected?
Padmanabhan Srinivasan: Yes. Thank you, Patrick. Maybe I’ll take some creative liberty in answering your question. What took a few quarters for us to get right was, as I mentioned several times during this call, a constraint in keeping up with customers that were scaling rapidly and scaling big on our platform when I joined. So it took us a few quarters to really get to the bottom of their needs, and there was a lot of work that had to be done for us to get to the 0% churn that I was so proud to share with all of you this morning. That took a lot of engineering effort, and I’m super proud of my team — it’s a lot of very complex technology work, all the way from advanced networking to fortifying our storage to inventing new things in our database offering and so forth.
So that took a tremendous amount of heavy lifting, and that job is not done yet. We started with $100,000 customers, then we focused on $500,000 customers. Now we are focused on million-dollar customers. And who knows — in the next couple of years, we’ll be talking about $5 million and $10 million customers. So that bar-raising is an ongoing endeavor for us. On the more fun side of things is literally participating from the starting point with the AI-native ecosystem. We are learning as they are learning, and we are inventing alongside them. That is a great luxury to have, because we feel like we can ride their growth curve as their needs increase and as they learn the right way to do this from a workload perspective.
We are just trying to keep pace, and they’re super appreciative of us inventing on their behalf to make their lives easier so that they can focus on their domain and invent new things for their customers. We’ll share a lot more of this on April 28, but that’s how I would answer your question, Patrick.
Operator: Next question comes from Mike Cikos with Needham & Company.
Michael Cikos: Congrats on the strong growth [indiscernible] you’re providing us. Matt, if I could just come back — I know the free cash flow topic has come up a couple of times here, but you can see as well as anybody just how sensitive investors in this market are to the AI CapEx investments that are required and the different financing vehicles out there. Just to be clear, when we look at the calendar ’26 versus the calendar ’27 guide, the delta between the unlevered free cash flow guide and the adjusted free cash flow guide is expected to widen from about 3 points to about 10 points in calendar ’27. And to take that one step further — I know that your guidance, or those guardrails, for ’27 currently don’t contemplate additional capacity coming online.
But it seems fair that we should be assuming more capacity. And if that’s the case, would that delta between the unlevered and the levered free cash flow margins widen further from there? Is that fair?
Matt Steinfort: The way you’ve got to think about it is, again, if you’re looking at the levered free cash flow, it’s got other stuff in it besides the equipment leases — it’s got [ TLA ] interest, it’s got other things. And if you look at the other cash items, as Fish was saying, there are mandatory prepayments of the Term Loan A. So you’ve got to be real careful about what you’re using for what purpose, right? If you ask what the steady-state cash flow generation capability of this business is: because we lease equipment, we don’t have an upfront capital requirement that makes it super lumpy — we can make it smoother and we can grow. However, when you’re growing a business, even with that model, and you’re adding data center capacity, you have a couple of months where you’re taking data center lease expense and you haven’t generated any revenue yet. That’s unlike buying gear and putting it in your warehouse, where you don’t expense it until you actually deploy it.
When you lease gear, you start that lease expense as soon as it’s shipped. So you have front-loaded costs that revenue doesn’t catch up to right away. But because you didn’t have a big giant slug of capital, as soon as revenue starts generating, you’re immediately generating cash and you’re improving your margins with utilization. That’s why we’ve been very crisp about what’s included in the numbers — to give you a sense of what the margins look like on a steady-state basis. We can’t just continually assume we’re going to add incremental capacity; I can’t tell you how much incremental capacity we’re going to have because we haven’t contracted it, and we haven’t committed to anything incremental.
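The timing effect Matt describes — lease expense starting at shipment, revenue only ramping once gear is deployed — can be sketched with a toy monthly model. The lease cost, deployment lag, and revenue ramp below are all hypothetical:

```python
# Toy model of leased-gear economics: expense begins at shipment (month 0),
# revenue ramps after a hypothetical 2-month deployment lag.
monthly_lease = 10.0                    # $M/month lease expense, hypothetical
ramp = [0, 0, 4, 8, 12, 15, 15, 15]     # $M/month revenue, hypothetical ramp

cumulative = 0.0
for month, rev in enumerate(ramp):
    cumulative += rev - monthly_lease
    print(f"month {month}: revenue {rev:>4.1f}, cumulative cash {cumulative:>6.1f}")
```

Cumulative cash bottoms out during the deployment lag and then recovers as utilization climbs — front-loaded cost, but no large upfront capital outlay, which is the trade-off Matt contrasts with buying gear outright.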
So what we’re showing is, when we add 31 megawatts as an example and you roll that forward a year, you have incredibly strong cash flow characteristics. There’s going to be a short-term impact on gross margins and net income because of the timing I described, but that works itself through relatively quickly. And you would expect that as we see other opportunities to accelerate our business with similar economics, we would make similarly good decisions, and that engine will keep going. So I view it in a very different way than what you’re describing. If we commit to more capacity, it’s because we have more growth opportunities and the returns are incredibly compelling, and we’re doing it in a way where we match the revenue and the costs — we’re not getting out over our skis and making massive commitments chasing the data center and GPU arms race.
We’re doing it methodically. We’re doing it where we have an advantage, where we earn a good return, and we’re able to do it while, again, taking 11%, 12%, 13% revenue growth to 30% and still maintaining really good margins. So we’re really excited about the potential we have and the economics that we’re delivering.
Michael Cikos: Maybe a quick follow-up here. Understood on the accelerating growth you’re looking at throughout calendar ’26 based on the megawatts coming online. One thing I wanted to ask — I’m sure you have your own models for the AI customers ramping to drive that 25%-ish growth exiting calendar ’26 — can you provide any additional color on what you’re assuming in terms of ARR directly from those AI customers, if I’m thinking about the $120 million that we see today exiting ’25?
Matt Steinfort: The only thing I would say is what we said: AI customer ARR in Q4 was $120 million, growing at 150%. We have more demand than we have supply. We’re bringing on supply. You should expect that it doesn’t slow down.
Operator: Our next question comes from the line of Mark Zhang with Citi.
Mark Zhang: Just given the strong demand environment, should we see more capacity commitments coming, like the ones you announced today? And if that’s not the case, is there enough incremental capacity or [indiscernible] capacity in your current footprint to support continued growth? Any insights there would be appreciated.
Matt Steinfort: Sure. So Mark, as we said, there’s enough growth potential in the committed capacity to get us to 30% growth in 2027. Clearly, we’re very cognizant of the data center market and very active in evaluating it. We haven’t made any commitments at this juncture to share with the market, and if we get to a point where we make a commitment, we’ll certainly share that. But at this point, again, we thought it was incredibly important for people to understand how to digest capacity as we bring it on. That’s why we’ve guided based solely on the 31 megawatts we’ve already committed — it gives you a good sense of how it ramps and what the economics are. And should we bring on incremental capacity, you’ll have a good model to add on to the growth ramps we’ve already articulated.
Mark Zhang: Okay. Great. And maybe related to that, is there a sense of utilization of your current estate? We know the current capacity — is there any sense of the contracted capacity that you have on the books?
Matt Steinfort: Yes. From a contracted capacity standpoint, if you’re talking about data centers, we’ve got 31 megawatts that we’re adding to our roughly — call it 43 or 44 — which will put us at just about 75 megawatts when we’re done. So we’re sitting at, call it, 43, and we’re adding 6 megawatts that will come online, or start generating revenue, in the second quarter, and the balance of the incremental 31 — about 25 megawatts — will come on and start ramping revenue in the second half. Whether we’re at full utilization is a function of whether we decide to fill them all with GPUs right away or do it over time, because we like to stagger the generations of GPUs — we don’t like to go all in on one generation. But we’ll be at a very healthy utilization at some point in 2027, which is what enables us to get to that 30% growth.
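The capacity arithmetic from Matt’s answer checks out as follows (all figures are from the call; the split of the 31 megawatts into 6 and roughly 25 is as he states it):

```python
# Capacity figures stated on the call, in megawatts.
existing_mw = 43        # roughly "43 or 44" in service today
q2_addition_mw = 6      # comes online / starts generating revenue in Q2
h2_addition_mw = 25     # balance of the committed capacity, ramping in H2

committed_mw = q2_addition_mw + h2_addition_mw
total_mw = existing_mw + committed_mw
print(committed_mw)     # 31 committed megawatts
print(total_mw)         # 74, i.e. "just about 75 megawatts" when done
```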
Operator: At this time, we have no further questions. That concludes our Q&A session and today’s conference call. We would like to thank you for your participation. You may now disconnect your lines. Have a pleasant day.
Follow Digitalocean Holdings Inc. (NYSE:DOCN)