Akamai Technologies, Inc. (NASDAQ:AKAM) Q1 2026 Earnings Call Transcript May 7, 2026
Akamai Technologies, Inc. reports earnings in line with expectations. Reported EPS was $1.61, matching expectations of $1.61.
Operator: Good morning, and welcome to the Q1 2026 Akamai Technologies, Inc. Earnings Conference Call. All participants will be in a listen-only mode. After today’s presentation, there will be an opportunity to ask questions. To ask a question, you may press star and then one on your telephone keypad. To withdraw your question, you may press star and then two. Please note that this event is being recorded. I would now like to turn the conference over to Mark Stoutenberg. Thank you, and over to you.
Mark Stoutenberg: Good afternoon, everyone, and thank you for joining Akamai Technologies, Inc.’s First Quarter 2026 Earnings Call. Speaking today will be F. Thomson Leighton, Akamai Technologies, Inc.’s chief executive officer, and Edward J. McGowan, Akamai Technologies, Inc.’s chief financial officer. Please note that today’s comments include forward-looking statements that include revenue and earnings guidance. These forward-looking statements are based on current expectations and assumptions that are subject to certain risks and uncertainties and involve a number of factors that could cause actual results to differ materially from those expressed or implied. The factors include, but are not limited to, any impact from macroeconomic trends, the integration of any acquisition, geopolitical developments, and other risk factors identified in our filings with the SEC.
The statements included on today’s call represent the company’s views as of May 7, 2026, and we assume no obligation to update any forward-looking statements. As a reminder, we will be referring to certain non-GAAP financial metrics during today’s call. A detailed GAAP to non-GAAP reconciliation is available in the Investor Relations section of akamai.com under financials. With that, I will now hand the call over to our CEO, F. Thomson Leighton.
F. Thomson Leighton: Thanks, Mark. I am pleased to report that Akamai Technologies, Inc. is off to a strong start to the year. In just a few months, we have achieved major milestones for our cloud computing strategy, marking a definitive turning point in the growth and evolution of our business. Akamai Technologies, Inc. has long been known for operating the world’s largest distributed platform for delivery and security solutions at global scale, with a reputation for reliability, quality, and trust. Now we are leveraging our global footprint and years of experience supporting the world’s largest enterprises to become an industry infrastructure provider for the AI-driven economy. At GTC in March, we unveiled the industry’s first global-scale implementation of NVIDIA’s AI grid, and we announced the rollout of thousands of NVIDIA RTX Pro 6000 GPUs. By integrating NVIDIA AI infrastructure into Akamai Technologies, Inc.’s massive distributed platform and by leveraging intelligent workload orchestration across our network, we intend to move the market for AI beyond isolated AI factories toward a unified, distributed grid for AI inference.
By pushing AI inference to the edge, and combining it with our massive deployment of CPUs for delivery, security, and functions as a service, we are enabling customers to run complex models within milliseconds of their end users, with the responsiveness of local compute and the scale of the global web, optimizing performance while reducing latency and cost. Those who attended GTC heard NVIDIA position Akamai Technologies, Inc. as a vital player in the industry’s ecosystem for AI infrastructure, and we have seen very positive market reaction to our rapidly expanding capabilities from a wide spectrum of enterprises. Today, we are very excited to announce another major milestone for our cloud computing strategy and the evolution of Akamai Technologies, Inc.: the signing of a landmark seven-year $1.8 billion commitment for our cloud infrastructure services by a leading frontier model company.
This is the largest customer deal in Akamai Technologies, Inc. history, and it comes on the heels of the $200 million CIS deal we announced in February with a major U.S. tech company also at the forefront of the AI revolution. These leaders in AI have chosen Akamai Technologies, Inc. because their AI workloads need the scale, performance, and reliability that our cloud platform provides. Many other enterprises have chosen Akamai Technologies, Inc. for similar reasons. For example, since the start of the year, a leading cloud and digital infrastructure provider in Asia chose our GPUs to support their low-latency live streaming media service. An AI company in the U.S. chose our GPU platform to power their voice-first solution to optimize business operations.
An AI-powered video intelligence platform in India chose our GPU platform to scale video analytics and computer vision workloads for retailers. A consumer AI platform in the U.S. chose Akamai Technologies, Inc. Cloud to run and scale live personalized agents. An AI commerce company in India chose our distributed inference platform to power their ad personalization engine. And two premier global retail brands chose our distributed data capabilities to improve the performance and resilience of their online retail applications. But all this is just the beginning. We have a large and rapidly expanding pipeline of prospects that are looking to Akamai Technologies, Inc. for cloud solutions, including some with very large needs. To satisfy this strong and growing demand for our cloud infrastructure services, we expect to continue to build out both our physical infrastructure and our cloud sales and support teams.
And as Ed will talk about in a few minutes, we now anticipate significant acceleration of our overall revenue growth heading into 2027 and beyond. Turning to security, I am pleased to report that Q1 was also strong for our security portfolio, where revenue grew 11% year over year as reported and 9% in constant currency. Our security growth was led by strong demand for our market-leading web application firewall, API security, and Guardicore segmentation solutions. Our WAF, in particular, is seeing growing interest from customers eager to deploy the latest defenses for vulnerabilities that could be exposed by the ever-strengthening frontier models and AI-powered attacks. Frontier models are changing vulnerability management, and we are proud to be one of the industry’s must-have security providers partnering with the frontier model companies to help ensure the safe, rapid deployment of AI-enhanced defenses.
With our early access to their vulnerability detection programs, we are applying our expertise to help keep major enterprises and critical infrastructure safe. Of course, and this is important to understand, attackers will also be using more advanced AI technology to develop even more potent ways to cause harm. This means that major enterprises will need Akamai Technologies, Inc. security solutions even more than before. For example, there are many legacy systems and billions of deployed devices that cannot be patched. They will become a lot more vulnerable with the advances in AI and they will need our security solutions to keep them safe. For the devices and systems that can be patched, the patching process still takes time, often days or weeks, and they will need our protection until that is done.
We have seen this happen before when zero-day attacks emerged, and with the advances in AI, we can expect zero-day attacks to occur much more frequently. There is also an increasing challenge with scale. Because AI is enabling attackers to take over more devices and create enormous bot armies, we are now seeing attacks with unprecedented volumes. Just in the last few weeks, we neutralized a series of app-layer attacks with millions of malicious requests per second from millions of widely distributed IPs. Akamai Technologies, Inc. can defend against such attacks because of our widely distributed platform. Our WAF runs in 4,300 locations across 700 cities to intercept the attack traffic right where it enters the internet and well before it can coalesce onto the target.
Having a great WAF with the needed defenses for the latest attacks is obviously important, but that alone is not enough in the coming age of AI. The WAFs need to be deployed across a vast distributed platform, and this need provides a unique advantage for Akamai Technologies, Inc. when compared to the competition. In summary, we believe that Akamai Technologies, Inc.’s security portfolio will be needed more than ever before as attackers take advantage of the advances in AI. That is because of our massive platform scale to absorb attacks, our unparalleled access to real-time attack data, our tight integration with the early warning ecosystem to provide up-to-the-minute defenses for the latest zero-day attacks, our large and very experienced human security operations team that is equipped with the latest AI tools to enhance visibility and minimize response times, and our innovative, rapidly evolving and AI-enabled product suite to help prevent penetrations and to limit the damage when penetrations do occur.
Customers who selected Akamai Technologies, Inc. in Q1 for that kind of protection for their APIs included one of the largest telecom groups in Africa, a major investment management company in South America, one of the premier investment banks in the Middle East, and one of the world’s leading fintech companies in the U.S. Customers who added or expanded their use of our Guardicore segmentation solution in Q1 included the leading telecom carrier and media company in South Korea, one of the largest banking groups in Europe, and a leading healthcare company in the U.S. Many of the large renewals we signed in Q1 also included expansions of our security services. For example, after we protected one of America’s leading retailers from unwanted bots during the holiday shopping season, they increased the use of our services in a contract worth $24 million.
We signed an expansion contract worth $80 million over two years with one of the world’s largest video game companies. We signed an expansion contract worth more than $20 million with a global consumer electronics company in Korea. And one of the largest global professional services companies in the world expanded their use of our ZTNA solution to secure large-scale remote access as they move critical applications to a zero trust model. Our security solutions continue to receive top recognitions from the major analyst firms for their effectiveness. For example, last quarter, Akamai Technologies, Inc. achieved a 99% recommendation rating as Customers’ Choice in Gartner’s Peer Insights report on microsegmentation. And last month, Akamai Technologies, Inc. was the only provider to be named Customers’ Choice in Gartner’s Peer Insights report on API protection. In closing, we are thrilled by the way our growth strategy has taken hold and is generating transformative opportunities for our business. We believe that Akamai Technologies, Inc. is uniquely positioned to enable and benefit from the development of the AI-driven economy. By bringing powerful compute directly to the data and the users at the edge, Akamai Technologies, Inc. is enabling and securing the next generation of agentic AI. With each quarter, the massive opportunity we see ahead becomes more evident, and we are making bold investments to capitalize on that opportunity and enable Akamai Technologies, Inc. to do for cloud and AI what we have done for security and CDN to generate significant future growth for our business.
Now I will turn the call over to Ed for more on our results and our outlook for Q2 and the year. Ed?
Edward J. McGowan: Thank you, Tom. Before I get started, and to build on Tom’s remarks, I want to personally underscore my excitement regarding the $1.8 billion new customer win announced today. This is a powerful validation of the Akamai Technologies, Inc. value proposition in the age of AI and a clear indicator of the scale at which we can operate. To fully capitalize on this momentum and support the accelerated growth we anticipate, we will be investing slightly ahead of revenue. You will see this reflected in the updated capital expenditure and operating margin outlook I will discuss during the guidance portion of my remarks. We view these investments in our CIS portfolio as critical to ensure we have the foundation to meet the significant demand we see on the horizon.
Also, driven by today’s announced $1.8 billion win, the $200 million four-year CIS deal we announced last quarter, and our rapidly accelerating pipeline, we now expect total company annual top-line revenue growth to reach double digits in 2027. We look forward to sharing more details in the coming quarters. Clearly, this is an incredibly exciting time for Akamai Technologies, Inc. With that, let us dive into the Q1 results. We delivered strong first quarter results with total revenue of $1.074 billion, which was up 6% year over year as reported and 4% in constant currency. Cloud infrastructure services, or CIS, revenue got off to a robust start to the year with revenue of $95 million, up 40% year over year as reported and 39% in constant currency.
As Tom noted, we are seeing CIS wins across a wide spectrum of industries, geographies, and use cases. Even more encouraging, the pipeline for AI-specific use cases is building rapidly. We also maintained very strong momentum in security with revenue of $590 million, up 11% year over year as reported and 9% in constant currency. The strength in the first quarter continued to be driven by our fast-growing API security and Guardicore segmentation solutions along with strong growth from our largest product, web application firewall. Moving to delivery and other cloud applications. Revenue was $389 million, down 7% year over year as reported and down 8% in constant currency. These results were in line with expectations, driven by the wrap-around impact of the Edgeio transaction in 2025.
We expect this effect and the rate of decline to moderate throughout the remainder of the year. International revenue was $530 million, up 9% year over year or up 5% in constant currency, representing 49% of total revenue in Q1. Foreign exchange fluctuations had a positive impact on revenue of $2 million on a sequential basis and a positive $19 million on a year-over-year basis. Moving to profitability. In Q1, we generated non-GAAP net income of $239 million or $1.61 of earnings per diluted share, down 5% year over year as reported and in constant currency. These results include our expanded colocation investments, higher depreciation, and increased headcount costs, all tied to our strategic investment in cloud infrastructure services during the first quarter.
Our non-GAAP operating margin for Q1 was 26%, in line with our expectations. We expect operating margin to remain in this range for the remainder of this year as we ramp up our investment to capture the exciting growth opportunities ahead of us. Our Q1 CapEx was $206 million, or 19% of revenue. First quarter CapEx was slightly below our guidance, primarily driven by timing and favorable pricing. Specifically, some expenditures shifted from Q1 into Q2, and we benefited from some lower-than-expected component costs. Moving to cash and our capital allocation strategy. During the first quarter, we spent approximately $206 million to buy back approximately 2 million shares. We ended the first quarter with approximately $975 million remaining on our current repurchase authorization.
Our intention with capital allocation remains the same: to continue buying back shares to offset dilution from employee equity programs over time and to be opportunistic in both M&A and share repurchases. As of March 31, we had approximately $1.7 billion of cash, cash equivalents, and marketable securities. Now, before I provide Q2 and full-year 2026 guidance, I want to touch on a few housekeeping items. First, for Q2, CapEx is expected to jump significantly as we start to take delivery of the NVIDIA GPUs we discussed on our last quarterly earnings call, and we catch up on some of the CapEx that pushed from Q1 into Q2. Second, we expect to see an increase in operating expenses in the second quarter due primarily to continued investments in go-to-market and the impact of our annual employee merit cycle that went into effect in April.
We anticipate revenue from the $1.8 billion customer win to start to ramp in Q4, and we expect to generate approximately $20 million to $25 million of revenue in the fourth quarter. Finally, regarding CapEx for this win, we expect to spend a total of approximately $800 million to $825 million over the next twelve months to support this customer. We expect to deploy roughly $700 million of that total in 2026, with the remaining balance falling into 2027. Moving now to guidance. For the second quarter, we are projecting revenue in the range of $1.075 billion to $1.1 billion, up 3% to 5% as reported and in constant currency over Q2 2025. If current spot rates hold, foreign exchange fluctuations are expected to have no material impact on Q2 revenue compared to Q1 levels, and a positive $2 million impact year over year.
At these revenue levels, we expect cash gross margins of approximately 70% to 71%. Gross margin is impacted by the significant increase in colocation as we accelerate the growth in our CIS business. Q2 non-GAAP operating expenses are projected to be $346 million to $357 million. We anticipate Q2 EBITDA margin of approximately 38% to 39%. We expect non-GAAP depreciation expense of $140 million to $144 million. We expect non-GAAP operating margin of approximately 25% to 26%. And with the overall revenue and spend configuration I just outlined, we expect Q2 non-GAAP EPS in the range of $1.45 to $1.65. This EPS guidance assumes taxes of $47 million to $54 million based on an estimated quarterly non-GAAP tax rate of approximately 18.5%. It also reflects a fully diluted share count of approximately 146 million shares.
Moving to CapEx. For the reasons I highlighted earlier, we expect to spend approximately $433 million to $453 million in the second quarter. This represents approximately 40% to 41% of total revenue. Looking ahead to the full year 2026, we expect revenue of $4.445 billion to $4.55 billion, which is up 6% to 8% as reported and up 5% to 8% in constant currency. For cloud infrastructure services, we are raising our outlook to at least 50% year-over-year growth in constant currency. We expect momentum in CIS to continue to build throughout 2026, driven mainly by the scaling of our AI opportunities and the impact of the two very large transactions we announced in Q4 and today. Also, we continue to expect security revenue growth in the high single digits on a constant currency basis in 2026.
And for delivery and other cloud apps, we continue to expect a decline in the mid-single digits year over year on a constant currency basis. At current spot rates, our guidance assumes foreign exchange will have a positive $20 million impact on revenue in 2026 on a year-over-year basis. Moving to operating margin. For 2026, we are estimating a non-GAAP operating margin of approximately 26% as measured at today’s FX rates. Turning to CapEx. At this time, we anticipate our full year capital will be approximately 40% to 42% of total revenue, including the $700 million impact from the $1.8 billion contract we mentioned earlier. Before I move on, I want to provide some additional color on our CapEx outlook. As Tom noted, the demand we are seeing for CIS, including our GPU deployments, is exceptional.
Our current pipeline for GPUs significantly exceeds our existing and projected inventory, meaning we may place additional GPU orders in the second half of the year to meet this demand. This is not factored into our current annual CapEx guide. We will update CapEx guidance on a subsequent earnings call if we place another GPU order before year end. Moving to EPS. For full year 2026, we expect non-GAAP earnings per diluted share in the range of $6.40 to $7.15. This EPS guidance includes the impact from the very large win. This non-GAAP earnings guidance is based on a non-GAAP effective tax rate of approximately 18.5% and a fully diluted share count of approximately 147 million shares. With that, I will wrap things up, and Tom and I are happy to take your questions.
Operator?
Q&A Session
Operator: We will now begin the question and answer session. To ask a question, you may press star and one on your touch-tone telephone. If you are using a speakerphone, please pick up your handset before pressing the keys. If at any time your question has been addressed and you would like to withdraw your question, please press star and two. At this time, we will pause momentarily to assemble our roster. We have the first question from the line of Roger Boyd from UBS. Please go ahead.
Roger Boyd: Congrats on the landmark deal there. Maybe if you can, Tom, just broad strokes about the competitive set to win that deal. Are you going toe to toe with hyperscalers or neo-clouds? And anything you can provide on the use cases—inference, is it agentic workloads? And as you think about your compute-enabled PoPs, how is this customer leveraging the Akamai Technologies, Inc. network as a whole?
F. Thomson Leighton: I cannot give any more details about this specific deal. But in general, yes, we do compete with the hyperscalers and the neo-clouds with our cloud infrastructure services. That is the primary competition. Customers select Akamai Technologies, Inc. because of our proven ability to manage and scale complex distributed systems, our ability to get the necessary data center space in locations around the globe, and our ability to interconnect that with the world’s largest and best-performing delivery network and leading security solutions. We offer the best in terms of latency and scalability. We probably deal with more data center companies than anybody, being in 4,300 locations across 700 cities and 130 countries. So, yes, we have significant competition. Every deal is competitive, but we also have unique capabilities, which is, I think, why our pipeline is so strong and why we are winning some very large deals.
Roger Boyd: And on security, could you unpack what you are seeing from a demand perspective there? With the nice result in the first quarter, what are you seeing around conversion rates, sales cycles? Are you seeing more urgency from organizations that are thinking about ways to limit the blast radius and defend against an AI-fueled attack landscape?
F. Thomson Leighton: I do not think I have ever seen the CSOs more agitated and feeling more of a sense of urgency than they are now. Over the last several weeks and months, I have had the chance to meet with the CSOs of many of the world’s biggest companies, and in many cases the CEOs and senior executives, and they are very concerned about what happens when the attackers get access to advanced AI with the latest frontier models, which it seems that they will. This is going to uncover a lot more vulnerabilities. We are going to see the equivalent of a lot more zero days, and they are literally scrambling now, in many cases, to make sure all their applications, their agents, their APIs are protected by Akamai Technologies, Inc.
You can imagine most of the world’s major banks rely on us for security, and they are looking at a pretty big wave of new attacks coming their way. I do not know of a comparable time where there is this much concern about what is going to happen with security, and also this much appreciation for what Akamai Technologies, Inc. provides with our security platform.
Operator: Thank you. We have the next question from the line of Patrick Edwin Colville from Scotiabank. Please go ahead.
Patrick Edwin Colville: Thank you so much for taking my question. This one is for Dr. Tom. When I think about Akamai Technologies, Inc., the value prop for the last 30-plus years has been the distributed architecture—700 cities, 130 countries. When I think about this mega deal, is that a highly distributed use case, or should we think about it as being served from a few, like, sub-10 type data centers? And then a follow-up for Ed on CapEx: you gave a CapEx guide and then made a subtle point that you might have to increase CapEx further. Help us understand why there might be an increase midyear and what that might mean.
F. Thomson Leighton: I am not at liberty to talk about the recent deal. However, I think when you are thinking about Akamai Technologies, Inc.’s value proposition, you hit a very key point with our really unparalleled distributed architecture. I did reference a bunch of use cases in the prepared remarks, and, yes, they very much rely on our distributed platform, where you want to get the agents and the applications, the business logic, close to users and close to the data so you get low latency and scalability. Particularly anything to do with video processing or video generation needs a lot of scale, and Akamai Technologies, Inc. is unique there. So I think what we are able to offer is very compelling.
Edward J. McGowan: Thanks for the question, Patrick. What I had mentioned was we have a very, very strong pipeline for our GPU platform, and we are just starting to get the bulk of those chips up and running now, and we have a very large pipeline that exceeds what we have in inventory. Obviously, we want to prosecute that pipeline, start winning those deals, converting that into contracts, etc. The reason I hedged a little bit is, one, we want to fulfill that pipeline, and two, there is some time that it takes to get the chips. So even if we were to place an order, it may slip into next year. I want to give it another quarter, and if, in fact, we are in a position to place an order and receive that by year end, we will certainly do that and let you know.
I see that as a very bullish comment, and I just did not want to surprise you with another, whatever it is, couple hundred million dollars or whatever the order may be, without at least giving you some color behind that.
Operator: Thank you. We have the next question on the line of John DiFucci from Guggenheim Securities. Please go ahead.
John DiFucci: Thanks for taking my question. My first question is for Ed, and I have a quick follow-up for Tom. Ed, thanks for all the detail on CapEx. But when I think about the CapEx for this mega deal—this is over a long time, seven years—are you accountable for, for example, higher memory costs if those rise in the future? When you locked in this deal, do you also have the supply locked in, or are you exposed if that were to happen two years from now?
Edward J. McGowan: Great question. I was fortunate enough to work very closely with the team on both sides of this transaction. We have been able to get the supply chain ready. We anticipate receiving all the goods that we need to deliver this service over the seven years within the next twelve months, with the majority of it this year, as I broke out. There is always the potential for some slippage and delays, but we have mechanisms in our contracts to deal with changes in prices if, for example, six months from now prices were to go up. We have taken that into consideration. From a revenue perspective, the way to think about this deal is it is a set amount of capacity that we are deploying.
There is no usage component; it is a straight committed deal over seven years. As soon as we ramp all the capacity up, we will start taking the revenue for a full year. I expect a little bit this year and then next year we will get a partial year as we receive the remainder of what is to be deployed, and then from there it will go on for the remaining six-plus years.
John DiFucci: So even though it is consumption-related technology, it will look like a subscription. Is that accurate?
Edward J. McGowan: Exactly. That is exactly the way to think about it.
John DiFucci: And Dr. Tom, a component of your delivery business is video streaming. In March, OpenAI confirmed they shut down their AI video generation system, Sora. Do you expect that to have any effect on your delivery or compute business forecast?
F. Thomson Leighton: No. We partner with OpenAI on security vulnerabilities, helping to identify them and protecting our customers from the associated attacks, but OpenAI is not and has not been a customer of Akamai Technologies, Inc. So there is no impact on us at all.
Operator: Thank you. We have the next question on the line of Jackson Edmund Ader from KeyBanc Capital Markets. Please go ahead.
Aidan Daniels: Hi, this is Aidan Daniels on for Jackson Edmund Ader. Thanks for taking our question. With this big deal, as you allocate capacity going forward, how should we think about the impact on any amount of on-demand GPU capacity you are able to offer? How are you balancing what you have committed from this deal with maintaining flexibility for newer incremental demand going forward? And as a quick follow-up, while you cannot talk about the deal specifically, how should we think about the proportion of CPU versus GPU inference cloud going forward?
F. Thomson Leighton: We support both on-demand, per-token or per-VM-hour access to our platform, and we also support large tranche deals. It is not really a matter at this point of trading off, and as we need more GPUs, as Ed said, we would purchase more. On the mix, in general with inference and AI, you need both CPU and GPU. Part of the value we provide is that we can help provide the computational resource that is most appropriate for the workload, which might be CPU or might be GPU, because you want to be as efficient as possible and have it be as close as possible to the user so you get the best performance. It is a mix, and every application is different in the mix of CPU versus GPU that it needs.
Operator: Thank you. We have the next question from the line of Fatima Boolani from Citi. Please go ahead.
Fatima Boolani: Good afternoon. Thank you for taking my questions. A higher-level strategic question: you have opted to take more of a dedicated capacity approach in terms of satisfying demand and supply constraints. Why steer the network and platform more towards larger customers, longer commits, and more dedicated capacity, given spot rates for rental or GPU-as-a-service can be more attractive? And as a follow-up for Ed, on sources of funds for potentially larger CapEx: can you fund intrinsically from the business, or should we expect you to tap other sources of capital?
F. Thomson Leighton: We do both. The larger deals with long-term commits are more attractive in many ways because you have the commit. In big deals, pricing would be lower, but we also support on-demand where you can buy by the token or the hour at higher pricing, though there can be more expense associated with that. Both are attractive, and we support both. It is not a matter of us doing one or the other.
Edward J. McGowan: The customers are really driving that. A lot of customers want to have dedicated capacity because there is scarcity in the marketplace. Rather than going on a pure consumption basis, they can get slightly better pricing and lock in capacity for themselves. On funding, so far we have had no issues financing these buildouts from our own capital. We are very profitable and produce a lot of cash. In years when we are investing big, cash flow will be a bit lower, but these deployments generate phenomenal free cash flow after the initial buildout. We have $1.7 billion in cash and equivalents on the books today, a $1 billion line of credit we can tap if needed, and excellent credit should we decide to access capital markets. So far, we have used our own funds.
Operator: Thank you. We have the next question on the line of Arti Vula from JPMorgan, for Mark Murphy. Please go ahead.
Arti Vula: Great to see the momentum you are having with large CIS deals with companies on the AI technology frontier—one last quarter, another this quarter that dwarfed the one before it. At a high level, has this been brewing in the pipeline for a while, or did these deals come together faster than usual? What changed that brought this business to your doorstep seemingly so quickly? And as you dedicate financial and operational resources to CIS and these large deals, does it change how you think about your other business segments?
F. Thomson Leighton: This has been the strategy all along, and we are pleased to be executing against it. The goal has been to deploy a distributed inference and compute platform that would be desired by enterprises across the spectrum, including many large customers. Akamai Technologies, Inc.’s customer base features many of the world’s largest enterprises; they spend 10x or more on compute than they do on our traditional delivery and security services. This is exactly what we said we were going to do, and now we are delivering those results. The platform is at a point where we can do that, and I think you will see more of this going forward.
Operator: We have the next question from the line of Sanjit Singh from Morgan Stanley. Please go ahead.
Sanjit Singh: Congrats on the largest deal in company history. On the $1.8 billion contract, is that more of a public cloud opportunity, or was it specifically for Akamai Technologies, Inc. Inference Cloud? And second, on the delivery business, with potential billions of agents, has the team revisited the thesis around secular growth prospects in delivery, or is it still a business you are mostly looking to harvest for profitability to fund security and compute?
F. Thomson Leighton: We really cannot talk more about this particular deal. More broadly, we have signed contracts across the spectrum for our inference cloud and our cloud capabilities for both GPUs and CPUs. Our value is bringing the right hardware for the application and placing it where you get the best benefit. On delivery, the biggest driver for growth in an agentic future will be the compute platform and then security. AI and agents are a whole new vulnerability surface, and there is a real tailwind for our security technology group. There will be some delivery traffic that used to be human-generated now agent-generated, but that does not make a huge swing in bits delivered unless agents are dealing with video or generating video, which can drive a lot of traffic.
We are in early days there. The biggest impact for us is in cloud and next in security. Delivery remains important, synergistic with our platform, and generates a lot of cash that we are plowing into growth of the cloud business.
Operator: Thank you. We have the next question from the line of Michael Joseph Cikos from Needham. Please go ahead.
Michael Joseph Cikos: Thanks, and congratulations on the strong quarter and customer win. On mechanics: you signed a seven-year $1.8 billion commitment. Can we expect the full $1.8 billion to show up in RPO, is that all take-or-pay, and anything else to understand the mechanics?
Edward J. McGowan: This is more of the dedicated capacity model. As soon as we get the capacity set up, we will take the revenue ratably over the contract. We will get some revenue this year and a partial year next year as we are still building and getting capacity live. In RPO, you will see most of that next quarter, and by the time we get everything delivered, it will all be in RPO. There are some mechanics in the first twelve months relating to how we are receiving the goods and our pricing mechanism for potential price moves, so there is a bit of nuance. But once fully up and running, you will see it in RPO—some next quarter, and then it will build from there.
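The ratable-recognition mechanics Ed describes can be sketched with simple arithmetic. This is an illustrative model only: the $1.8 billion total and seven-year term come from the call, but the even-recognition assumption and the six-months-live timing are hypothetical, not disclosed figures.

```python
# Hypothetical sketch of ratable revenue recognition for a
# dedicated-capacity contract. Only the $1.8B total and 7-year
# term are from the call; all timing assumptions are illustrative.
TOTAL_CONTRACT = 1_800_000_000  # committed contract value, USD
TERM_YEARS = 7

# Once capacity is fully live, revenue is recognized evenly
# over the remaining contract term.
annual_run_rate = TOTAL_CONTRACT / TERM_YEARS
quarterly = annual_run_rate / 4

print(f"Annual run rate once fully live: ${annual_run_rate:,.0f}")
print(f"Quarterly revenue: ${quarterly:,.0f}")

# Partial-year example: if capacity went live with six months
# left in the fiscal year (an assumed date), only half a year
# of run-rate revenue would land in that first year.
months_live_in_year1 = 6  # assumed, not disclosed
year1_revenue = annual_run_rate * months_live_in_year1 / 12
print(f"Hypothetical year-one revenue: ${year1_revenue:,.0f}")
```

Under these assumptions the fully-live run rate works out to roughly $257 million per year, which is why Ed frames the first year or two as partial before the contract contributes at full rate.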
Michael Joseph Cikos: For Dr. Leighton, it is great to hear that your largest security product, WAF, is seeing stronger growth. What is driving that? Is it really this heightened environment?
F. Thomson Leighton: There are real advances in AI, and it is getting much better at finding vulnerabilities and helping attackers penetrate enterprises and take over devices. You need our defenses now more than ever. There are billions of devices that you cannot patch, and the adversary can find ways into those devices and take them over. We are seeing much bigger attacks than before—application layer attacks from millions of distributed IPs with millions of attacks per second. You cannot defend against that with just a WAF in a data center. You need the vast platform that we have to intercept and separate the bad from the good at scale. Our platform is needed more than ever for security services, and customers know that. AI helps on defense but does not solve the problem; net-net, this is a very challenging time for CSOs, and that is why they are turning to us.
Operator: Thank you. We have the next question from the line of Frank Garrett Louthan from Raymond James. Please go ahead.
Frank Garrett Louthan: A follow-up on the $1.8 billion—does all of that come in as revenue, or will any be counted as paid-for upfront CapEx? And how many locations do you have Inference Cloud built out to currently, and what is the plan?
Edward J. McGowan: It is all revenue. There is no offset to CapEx.
F. Thomson Leighton: The Inference Cloud covers all of our 4,300 locations, with functions as a service running serverlessly in every one of them. Our managed container service is active in well over 100 cities today and can run in all 700 cities. We have full IaaS capabilities in several dozen cities, and a couple dozen of those are equipped with the new 6000-series GPUs. The goal is to have all of this orchestrated so that when an application or an agent needs to run, it runs on the most computationally efficient resource as close as possible to the user. Our orchestration layer is designed to make that possible, aligned with NVIDIA's AI grid vision—think of AI like an electrical grid, and that is what Akamai Technologies, Inc. is building.
Operator: Thank you. We have the next question from the line of William Power from Baird. Please go ahead.
William Power: Congrats on the massive deal. Two questions. First, a clarification: when you talk about needing additional GPUs, do you need more to satisfy the new deal, or is that more related to the building pipeline? And any framework for overall cost and timing? Second, how should we think about gross margin and operating margin impacts as we look into 2028 relative to today?
Edward J. McGowan: All the CapEx we need to satisfy the $1.8 billion deal is in the guidance—separate from my comment around potential additional GPU purchases tied to the broader pipeline. Demand is very strong with opportunities ranging from a couple hundred GPUs to a thousand or more per customer. The last incremental GPU order we discussed was around $250 million in CapEx; I do not have a size for a potential next order yet. On margins: for these larger dedicated-capacity deals, the biggest cost driver is depreciation over the period. Cash gross margin costs (colo, bandwidth, networking, some personnel) scale well, so over time you would expect cash gross margin and EBITDA margin to expand. Operating margin will depend on mix.
We may do very large deals at margins below the 30% operating margin, while GPU-as-a-service/rental tends to be much higher than company average. We will focus over the next year or two on capitalizing on growth, not margin expansion, but over time free cash flow margins should improve naturally with scale.
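Ed's point about depreciation driving GAAP costs while cash costs scale can be made concrete with an indexed example. All numbers below are hypothetical illustrations of the dynamic, not company figures; the five-year depreciation schedule and cost splits are assumptions.

```python
# Illustrative sketch of the margin dynamic for a dedicated-capacity
# deal: depreciation dominates GAAP cost of revenue, while cash costs
# (colo, bandwidth, networking, some personnel) are a smaller slice.
# Every number here is a hypothetical index, not a disclosed figure.
annual_revenue = 100.0           # revenue indexed to 100
capex = 250.0                    # assumed upfront hardware spend (indexed)
depreciation_years = 5           # assumed straight-line schedule
annual_depreciation = capex / depreciation_years  # 50 per year
annual_cash_costs = 20.0         # assumed colo, power, bandwidth, personnel

gaap_gross_margin = (annual_revenue - annual_depreciation
                     - annual_cash_costs) / annual_revenue
cash_gross_margin = (annual_revenue - annual_cash_costs) / annual_revenue

print(f"GAAP gross margin: {gaap_gross_margin:.0%}")
print(f"Cash gross margin: {cash_gross_margin:.0%}")
```

Under these assumed inputs, GAAP gross margin is 30% while cash gross margin is 80%, which mirrors the call's framing: operating margin looks compressed during the depreciation-heavy years, but cash and EBITDA margins expand as the deployed capacity scales.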
Operator: Thank you. We have the next question from the line of James Fish from Piper Sandler. Please go ahead.
James Fish: Given prior discussions around power—large sites having five to 10 megawatts and smaller sites a fraction of that—that math puts you above 300 megawatts, but revenue does not seem to align with that. How much of that power is for noncompute services? Is that why you need to bring on another roughly 40 megawatts for this deal, or can you walk us through megawatts and the plan?
Edward J. McGowan: Your math is not right in terms of what would be required to deliver this particular deal; it is significantly lower. For capacity, CDN and security use a small fraction of power—think kilowatts and, in some big CDN deployments, maybe a megawatt or two. Compute needs are a lot greater, especially for customers wanting a few thousand GPUs in particular locations or across 20–30 locations with a lot of CPU. Our typical large core compute locations are five to 10 megawatts, expandable to 20–30. We can get bigger, and there is plenty of opportunity to get additional colo. We expect to light up a lot more going forward. We are not concerned about access to power or colo. We have great relationships with data center providers, excellent credit, and we are not a DIY hyperscaler. GPUs draw more power than CPUs; equipment type matters for efficiency. We factor power into any deal and ensure profitability.
James Fish: On security, you normally give API and zero trust versus core—how did that trend? And how did compute trend in the quarter?
Edward J. McGowan: We did not break out API and Guardicore, but they remain the majority of what is driving growth, with growth rates similar to last quarter when you back out the impact of license revenue. On compute, think about enterprise compute as CIS, which we break out separately—CIS grew 40% year over year, and we expect that to accelerate.
Operator: Thank you. We have the next question from the line of Jonathan Frank Ho from William Blair and Company. Please go ahead.
Jonathan Frank Ho: Given the types of mega customers that you are bringing onto your platform, is there more opportunity to upsell once they are on your platform? Are there potential additional services, or could they come back if they continue to expand?
F. Thomson Leighton: Demand for AI is rapidly increasing, and we are really early. I would expect plenty of room to grow the existing base and, of course, add other customers of that scale.
Operator: Thank you. We have time for one last question from the line of Jeffrey Van Rhee from Craig-Hallum. Please go ahead.
Jeffrey Van Rhee: Two quick ones. First, Tom, there is a lot of blowback nationally against AI data centers and their power consumption. As you step into deals of this magnitude, how do you think about staying out of the crosshairs of that community pushback? Second, on security, given that AI is becoming a tailwind, do you think this year likely marks a floor in terms of growth rate, with potential reacceleration into 2027 and beyond?
F. Thomson Leighton: I do not think we have a profile in the popular press anything like the giant hyperscalers, so I do not think that is really an issue for us. We are not worried about that yet—maybe that is a good problem to have once we are much larger than we are today.
Edward J. McGowan: On security growth, we gave guidance for the year and are pleased with what we saw in the first quarter. We like what we see, especially around API security—still early days with low penetration—and Guardicore is growing very consistently. We will update you as we go.
Operator: This concludes our question and answer session. The conference has now concluded. Thank you for attending today’s presentation. You may now disconnect.