Arista Networks, Inc. (NYSE:ANET) Q3 2025 Earnings Call Transcript November 5, 2025
Operator: Welcome to the Third Quarter 2025 Arista Networks Financial Results Earnings Conference Call. [Operator Instructions] As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section on the Arista website following this call. I will now turn the call over to Mr. Rudolph Araujo, Arista’s Head of Investor Advocacy. You may begin.
Rudolph Araujo: Thank you, Christa. Good afternoon, everyone, and thank you for joining us. With me on today’s call are Jayshree Ullal, Arista Networks’ Chairperson and Chief Executive Officer; and Chantelle Breithaupt, Arista’s Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for the fiscal third quarter ended September 30, 2025. If you want a copy of the release, you can access it online on our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the fourth quarter of the 2025 fiscal year, our longer-term business model and financial outlook for 2026 and beyond, our total addressable market and strategy for addressing these market opportunities, including AI, customer demand trends, tariffs and trade restrictions, supply chain constraints, component costs, manufacturing output, inventory management and inflationary pressures on our business, lead times, product innovation, working capital optimization and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements.
These forward-looking statements apply as of today and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. This analysis of our Q3 results and our guidance for Q4 2025 is based on non-GAAP results and excludes all noncash stock-based compensation impacts, certain acquisition-related charges and other nonrecurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.
Jayshree Ullal: Thank you, everyone, for joining us this afternoon on our third quarter 2025 earnings call. Arista continues to drive its 19th consecutive record quarter of growth in this AI era. We achieved almost $2.31 billion in revenue this quarter with software and services contributing approximately 18.7% of revenue. Our non-GAAP gross margin of 65.2% was influenced by favorable mix and inventory benefits. Americas was strong at almost 80% and international at approximately 20%. On September 11 at our Analyst Day, we showcased both networking for AI and AI for networking with our continued momentum across our data-driven network platforms. Unlike many others, our Etherlink portfolio highlights our accelerated networking approach, bringing a single point of network control for zero-touch automation, trusted security, traffic engineering and telemetry to dramatically improve compute and GPU utilization.
Superior AI networks from Arista improve the performance of AI accelerators. Of course, we interoperate with NVIDIA, the worldwide market leader in GPUs, but we also recognize our responsibility to create a broad and open ecosystem, including AMD, Anthropic, [ Arm ], Broadcom, OpenAI, Pure Storage and [ VAST Data ] to name a few, and build that modern AI stack of the 21st century. This stack includes the trio of compute, memory storage and a solid network foundation to run training and inference models. Our stated goal of $1.5 billion AI aggregate for 2025, comprising both back end and front end, is well underway. We are now committed to $2.75 billion out of our new target of $10.65 billion in revenue, representing 20% revenue growth in 2026.
We are experiencing momentum across cloud and AI titans, neo cloud providers and the campus enterprise. The demand and scale of AI build-outs is clearly unprecedented as we look to move data faster across multiplanar networks. People and leadership are key to our success. And to that end, we announced Todd Nightingale as our President and Chief Operating Officer last quarter. This time, we want to celebrate the promotion of Ken Duda, our President and Chief Technology Officer, now leading not only engineering, but our top AI and cloud segment of customers as well. Ken, as many of you know, has been a champion of architecture, innovation and culture since founding Arista over 20 years ago. Ken, would you like to say a few words?
Kenneth Duda: Thanks, Jayshree. I would like to — one of the best things about working at Arista is getting to build some of the most ambitious networks ever built, ultra-low latency trading networks, global scale cloud networks and most recently multi-petabit AI networks. Our success in AI has many sources, the sheer power and performance of our hardware platforms, our innovations in fabric architecture, our AI-focused telemetry and provisioning automation, our reputation for the highest quality software and our leadership in the Ultra Ethernet Consortium, the UEC, and our work in Ethernet Scale Up Networking or ESUN. And most importantly, the way we partner with the world’s largest AI companies. Partnership has been key to our success over and over at Arista and the AI revolution is no exception.
In addition to being a lot of fun, these partnerships benefit our company, both through the sheer revenue opportunity and in providing us with the opportunity to learn and innovate at the edge of what’s possible. We can then apply what we’ve learned to bring solutions to the broader networking market, helping a much larger and more diverse customer base build the most advanced and reliable infrastructure in the industry. For example, our Etherlink distributed switch fabric powers some of the largest AI fabrics in the world. It’s also an excellent underlay for data centers of all sorts, providing a full line-rate fabric with no hotspots at petabit scale for all workloads, including AI. Etherlink speeds are going from 800 gigabits today to 1.6 terabits in the near future while leveraging our EOS operating system and our NetDI diagnostics infrastructure for top hardware and software reliability.
Arista AVA, or Autonomous Virtual Assist, uses AI to help our customers design, build and operate their networks. AVA draws on both our internal knowledge base and the customers’ data stored in NetDL, Arista’s network data lake. Plus, AVA has agentic capabilities to help troubleshoot proactively. Our other recent innovations include SWAG, Switch Aggregation technology, which provides the features of campus stacking along with fault containment and in-service software upgrades for maximum uptime. By running a common EOS and common NetDI platform across so many use cases, we are able to maintain alignment between our different market segments, leveraging central engineering investments efficiently as we pursue cloud, enterprise and AI markets simultaneously.
I am so grateful for the opportunity to lead the Arista engineering and cloud teams in an era with so many exciting opportunities.
Jayshree Ullal: Thank you, Ken, and congratulations on a fantastic 21-year career and a very well-deserved promotion at Arista. You have always built the always-on, resilient leaf-spine architecture, now both for networking for AI workloads and with AVA to bring AI to networking. At Oracle AI World, Ken was invited to formally announce our collaboration with Oracle on Acceleron. This builds upon a decade of partnership with Oracle, starting with our Exadata migration from InfiniBand to Ethernet with RoCE, RDMA over Converged Ethernet, for AI networks, and now multiplanar networking across cloud AI for on-time job completion in gigawatt-scale AI data centers. As part of our Leadership 2.0, we have built and focused a cloud and AI mission and organization, now led by industry veteran, Tyson Lamoreaux, reporting to Ken and Hugh.
I am so delighted to formally welcome Tyson to Arista. Tyson, as many of you may know, built the first cloud network for Amazon AWS in the 2000s era and pioneered the first AI network for a stealth sovereign AI company over the last couple of years. Tyson, you’ve had a busy few weeks here. Tell us more.
Tyson Lamoreaux: Thank you, Jayshree, and thanks for the question. It’s really incredible to have joined the team at a time when Arista is building so much momentum. Spending time with customers has been a top priority for me since coming on board. And I’ve been so impressed with how strong these partnerships are, both with our long-standing titans and with our emerging customers. We’re deeply engaged with them on next-gen architectures for their cloud networks, front end, back end, scale up, scale out and scale across. I mean, really everywhere. It’s translating to a ton of wins, and I got to say it’s a lot more than I anticipated before I got here. I really love our continuing commitment to open standards and innovation like ESUN and UEC and of course, the practical here, now and always problems that we’re addressing by building the hardware systems, software, everything that delivers exceptional power efficiency, reliability, density, visibility and manageability for our customers.

I think my background as a builder and operator is really well suited to helping the team anticipate customer needs and deliver the right products for them. I guess the last thing I’d highlight is the culture. I mean it’s just tremendous. The customer focus, commitment to quality, innovation and operational excellence are top notch here and have made me feel right at home. Thanks, Jayshree. Back to you.
Jayshree Ullal: Thank you, Tyson, and welcome home. With Tyson’s credentials and track record, Arista is really poised to address multiple facets of cloud and AI innovation at a system-wide level, converging silicon, hardware, software, cables, optics and racks as an overall platform. At the OCP, the Open Compute Project Summit, Arista unveiled its first Ethernet for Scale-Up Networks, or ESUN, specification, along with 12 important industry experts. While we began with 4 co-founders, support is now increasing to many more participants so that we can build the right interoperable scale-up standard. While there’s always white noise, Arista also continues to clarify our role in white box and how we will continue to coexist as we have for the past decade or more.
The concept is clear. It’s all about good, better and best, where in some simple use cases, a commodity white box is good enough. Yet in other cases, customers seek the value of better Arista blue boxes with state-of-the-art hardware with built-in NetDI for signal integrity, physical, passive and active component and troubleshooting management. The best is, of course, the Arista branded EOS platform for the ultimate superiority. We find ourselves amid an undeniable and explosive AI megatrend. As AI models and tokens grow in size and complexity, Arista is driving the network scale of AI XPUs, handling the power and performance. Basically, the tokens must translate to terawatts, teraflops and terabits. We are experiencing a golden era of networking with an increasing TAM now of over $100 billion in forthcoming years.
Our centers of data strategy, ranging from client to branch to campus to data centers and now cloud and AI centers, is a very consistent mission for the company. We will continue to invest in our customers, our leaders, our partners and certainly most of all, our innovative technology. And with that, Chantelle, I’d like to hand it to you as our CFO, for financial specifics.
Chantelle Breithaupt: Thank you, Jayshree. It is great to see the broadening of the AI ecosystem, and I am excited for Arista to be an innovative [ unit ]. Turning now to Q3 performance. Total revenues were $2.3 billion, up 27.5% year-over-year, above our guidance of $2.25 billion. This was supported by strong growth across all of our product sectors. International revenues for the quarter came in at $468.3 million or 20.2% of total revenue, down from 21.8% in the prior quarter. The overall gross margin in Q3 was 65.2%, above our guidance of 64%, down from 65.6% last quarter and up from 64.6% in the prior year quarter. The year-over-year gross margin improvement was primarily driven by strength in the enterprise segment. Operating expenses for the quarter were $383.3 million or 16.6% of revenue, up from last quarter at $370.6 million.
R&D spending came in at $251.4 million or 10.9% of revenue, up from $243.3 million in the last quarter. Sales and marketing expense was $109.5 million or 4.7% of revenue compared to $105.3 million last quarter. Both quarter-over-quarter dollar increases were driven by additional headcount, inclusive of the VeloCloud acquisition. Our G&A costs came in at $22.4 million or 1% of revenue, up from last quarter at $22 million. Our operating income for the quarter was $1.12 billion, landing at 48.6% of revenue. Other income and expense for the quarter was a favorable $98.9 million, and our effective tax rate was 21.2%. This resulted in net income for the quarter of $962.3 million or 41.7% of revenue. Our diluted share number was 1.277 billion shares, resulting in a diluted earnings per share number for the quarter of $0.75, up 25% from the prior year.
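As a quick sanity check, the quoted figures tie out arithmetically. Note the ~$2,309 million revenue input below is an assumption implied by the stated opex-to-revenue ratio (and the "almost $2.31 billion" remark), not a number reported directly on the call; this is an illustrative sketch, not company math:

```python
# Back-of-envelope check of the Q3 non-GAAP figures quoted above.
revenue = 2309.0          # $M (assumed; implied by opex at 16.6% of revenue)
gross_margin = 0.652      # 65.2% non-GAAP gross margin
opex = 383.3              # $M operating expenses
net_income = 962.3        # $M non-GAAP net income
diluted_shares = 1277.0   # millions of diluted shares

operating_income = revenue * gross_margin - opex   # gross profit less opex
operating_margin = operating_income / revenue
eps = net_income / diluted_shares

print(round(operating_income))      # roughly 1,122 ($M), i.e. ~$1.12B
print(round(operating_margin, 3))   # roughly 0.486, i.e. ~48.6% of revenue
print(round(eps, 2))                # roughly 0.75
```

The computed operating income, operating margin and diluted EPS line up with the $1.12 billion, 48.6% and $0.75 figures stated in the remarks.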
Now on to the balance sheet. Cash, cash equivalents and investments ended the quarter at $10.1 billion. Of the $1.5 billion repurchase program approved in May 2025, $1.4 billion remains available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price and other factors. Now let’s move next to operating cash performance for the third quarter. We generated approximately $1.3 billion of cash from operations in the period, reflecting a strong business model performance. DSOs came in at 59 days, down from 67 days in Q2, driven by billing linearity. Inventory turns were 1.4x, flat to last quarter. Inventory increased to $2.2 billion in the quarter, up from $2.1 billion in the prior period.
Most of this increase is due to higher evaluation inventory, indicating uptake of our new products and new use cases. Our purchase commitments and inventory at the end of the quarter totaled $7 billion, up from $5.7 billion at the end of Q2. We will continue to have some variability in future quarters as a reflection of the combination of demand for our new products and the lead times from our key suppliers. Our total deferred revenue balance was $4.7 billion, up from $4.1 billion in Q2. As of Q3, the majority of the deferred revenue balance is product related. Our product deferred revenue increased approximately $625 million versus last quarter. We remain in a period of ramping our new products, winning new customers and expanding new use cases, including AI.
These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis, independent of underlying business drivers. Accounts payable days was 55 days, down from 65 days in Q2, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $30.1 million. In October 2024, we began our initial construction work to build expanded facilities in Santa Clara, and we expect to incur approximately $100 million in CapEx during fiscal year 2025 for this project. Q3 delivered a strong performance, underscoring our strategic progress. This continues to give us confidence for the remainder of FY ’25 and through FY ’26.
But let’s first start with our outlook for Q4: revenue of $2.3 billion to $2.4 billion with continued growth expected across our cloud, AI, enterprise and providers markets; gross margin in the range of 62% to 63%, inclusive of possible known tariff scenarios; and operating margin of approximately 47% to 48%. Our effective tax rate is expected to be approximately 21.5% with approximately 1.281 billion diluted shares. Incorporating this Q4 outlook, our guidance for FY ’25 is as follows: full year revenue growth of approximately 26% to 27% or $8.87 billion at the midpoint. We are on track to deliver between $750 million and $800 million for our campus segment and our AI center target of at least $1.5 billion. For gross margin, the outlook is approximately 64%, inclusive of possible known tariff scenarios.
We anticipate operating margin of roughly 48%, demonstrating Arista’s strong operational execution and scalable business model. Our outlook for FY ’26 presented at our September Analyst Day remains relatively unchanged. Full year revenue growth of approximately 20%, now at a higher dollar amount of $10.65 billion, inclusive of both a campus target of $1.25 billion and an AI center target of $2.75 billion. For gross margin, a range is expected of approximately 62% to 64%, driven by customer mix. And for operating margin, an outlook of approximately 43% to 45%, allowing for investments in relation to achieving the strategic goals of Arista. In closing, the momentum continues. The breadth and depth of our customer interactions have never been stronger nor more exciting.
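The FY '25 and FY '26 figures quoted here are internally consistent; a quick check shows the math (the roughly $7.0 billion FY '24 base is a derived number, not one stated on this call):

```python
# Consistency check of the full-year guidance figures above.
fy25_mid = 8.87       # $B, FY'25 revenue at the guided midpoint
fy26_target = 10.65   # $B, FY'26 revenue target

implied_fy24 = fy25_mid / 1.265          # midpoint of the 26%-27% growth guide
fy26_growth = fy26_target / fy25_mid - 1.0

print(round(implied_fy24, 2))   # roughly 7.01 ($B implied FY'24 base)
print(round(fy26_growth, 3))    # roughly 0.201, i.e. ~20% growth for FY'26
```

The $10.65 billion target over the $8.87 billion midpoint does work out to approximately the 20% growth rate guided for 2026.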
In true Arista style, we remain pragmatic, yet are aware of the potential over the next few years. I wish to extend a warm welcome to Tyson. We are thrilled that you have joined our team, and congratulations to Ken on the well-deserved promotion. I will now turn the call back to Rudy for Q&A.
Rudolph Araujo: Thank you, Chantelle. We will now move into the Q&A portion of the Arista earnings call. To allow for greater participation, I’d like to request that everyone please limit themselves to a single question. Thank you for your understanding. Christa, please take it away.
Q&A Session
Operator: [Operator Instructions] Your first question comes from the line of Tal Liani with Bank of America.
Tal Liani: I want to ask about the sequential growth or the guidance, and it’s a more fundamental question, but I’ll back it up with numbers. If you look at last year, the growth was very consistent over the last 3 quarters of the year; the sequential growth rate was between 6.5% and 7.5%. When you look at this year, you started with 10% growth in 2Q, and it goes to 5% and now only 1.6%. So there is deceleration. And the question is, what are the underlying drivers for the deceleration? What do we need to take from it for next year? What does it mean? And should we be concerned about the growth going forward?
Jayshree Ullal: Thanks, Tal. First of all, to answer your last line, there is no concern on our demand. I think the shipments and the revenue follow based on our supply. So if we’re able to make the shipments, then the revenue, as you saw in Q2, blew past any of our guidance, right? However, there are times we can’t ship everything despite the demand. And so you’re accordingly seeing that. I wouldn’t read too much into the quarterly variances. But I would say we have never felt more strongly about the demand aspect of this, reflected in the continued commitment to 20% growth, even though the number keeps increasing from $8.75 billion to now $8.87 billion. So no change in demand, some variation in shipments.
Operator: Your next question comes from the line of Aaron Rakers with Wells Fargo.
Aaron Rakers: I’ll stick to kind of the model as well. I’m curious, when I look at the gross margin guidance for this quarter, I think it was 62% to 63%. I guess if we were to assume that your services’ gross margin stays consistent at 81%, 82%, it would seem to imply that product gross margin falls below 60%. So I guess the question is, can you unpack the gross margin drivers in this quarter in terms of the guidance? How much is tariff related? Or are there other dynamics to consider? And does that change kind of the expectation as we look forward?
Jayshree Ullal: Okay. Sure, Aaron. First of all, I think you’re overestimating our services and software margins, but be that as it may. We do have a mix of product margin where it’s significantly below 60% with our cloud and AI titans driving the volume and higher, obviously, for the enterprise customers. The average of which, together with services, is yielding that number. So when the mix tilts heavily towards the cloud and AI, you can expect some pressure on our gross margins. But overall, I think we managed it very well; the manufacturing team, now led by Todd, does a fantastic job here. So again, the discipline and mix plays well together, but I don’t think it’s any change from prior years where when we have a heavy mix of AI and cloud, we feel it in our gross margins.
Chantelle Breithaupt: Yes. The only thing I would add to that is I wouldn’t read the last part of your question as insinuating or offering a new model going into next year. This is a normal part of our mix conversation, and it is well within the guide; you’ve seen us perform at these levels before.
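The arithmetic behind Aaron's back-of-envelope question can be sketched as follows. The ~18.7% services share and ~81.5% services margin are the analyst's assumptions echoed from the call, so the output is illustrative, not a company-reported figure:

```python
def implied_product_gm(blended_gm: float, svc_share: float, svc_gm: float) -> float:
    """Solve blended = svc_share * svc_gm + (1 - svc_share) * product_gm
    for the implied product gross margin."""
    return (blended_gm - svc_share * svc_gm) / (1.0 - svc_share)

# Midpoint of the 62%-63% Q4 guide, ~18.7% services share, ~81.5% services margin
print(round(implied_product_gm(0.625, 0.187, 0.815), 3))  # roughly 0.581, below 60%
```

Under those assumptions the blended guide does imply a product gross margin in the high-50s, which is the deceleration-vs-mix point the question was probing.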
Operator: Your next question comes from the line of Michael Ng with Goldman Sachs.
Michael Ng: Thank you for the question. I was wondering if you could just talk about Arista’s positioning as we move into more full rack solutions. Is this going to be more of a partnership model? How do you think about addressing this, I say, growing convergence between compute and networking?
Jayshree Ullal: Michael, that’s a very good question. First of all, as you heard at Analyst Day, Andy Bechtolsheim is personally driving, along with the hardware team, a significant number of these racks. I think at any given time, we have 5 to 7 projects with different accelerator options. Obviously, NVIDIA is the gold standard today, but we can see 4 or 5 accelerators emerging in the next couple of years. Arista is being sought to bring all aspects, the cabling, the co-packaging, the power, the cooling as well as the connection to different XPU cartridges, if you may, as the network platform of choice in many of these cases. So we are involved in a lot of early designs. I think a lot of these designs will materialize as the standards for Ethernet are getting stronger and stronger.
We now have a UEC spec. You heard me talk about the Scale-Up Ethernet spec for ESUN, where we can bring different work streams onto the same Ethernet headers, transport headers, data link layer, et cetera. So I think a lot of this will be underway in 2026 and really emerge in 2027 as Scale-Up Ethernet becomes a more important part of that. In terms of how we will gain more recognition of revenue, some of this will not be the classic OEM model. There may be more of a blue box JDM model, where we work with them on IP and have reference designs and offer them capabilities well beyond the network. But many of them will also entail selling the network as is in these racks.
Operator: Your next question comes from the line of Atif Malik with Citi.
Atif Malik: Jayshree, in your prepared remarks, you mentioned large language model providers like OpenAI, Anthropic, and they have announced partnerships with your cloud titans. Can you share with us who is driving the decision-making on networking hardware in these announcements? And just your commentary on your share being stable within the circle of your cloud titans?
Jayshree Ullal: Yes. So to answer your last question first, I think our share is strong. We always, as you know, coexist with 2 other types of competitors. One is the bundling strategy with NVIDIA and the other is the white box. So we have not seen any significant changes in share up or down at the moment; it’s stable. Having said that, it’s also a massive market. And we think a rising tide lifts all boats, and this boat is feeling pretty good. Now specific to who makes the decision, it’s really a combination. We work intimately with the software and LLM players because they certainly guide the design, but we also work with the cloud titans, and it’s a shared responsibility between both of them. The responsibility for procuring the large data centers, the power, the location and the cooling is clearly held by our cloud titans, but the specification of exactly what’s required in the scale-up and scale-out network is done by partners like OpenAI and Anthropic.
So it’s really a joint decision.
Operator: Your next question comes from the line of Samik Chatterjee with JPMorgan.
Samik Chatterjee: Jayshree, maybe just going back to your earlier response to another question, you mentioned some variability in shipments at a customer level, maybe driving some of the lumpiness quarter-to-quarter. Just curious, you had talked about previously the Tier 1 customers you’re engaged with progressing to their cluster sizes, 100,000 and more. Like how — has there been any change relative to those plans that are driving the deceleration here in terms of the fourth quarter guide? Or what is sort of behind the variability that you’re seeing in terms of shipment? Is it supply driven at all? Any color on that front?
Jayshree Ullal: Yes. Yes. Samik, I would say it’s largely supply driven. As you know, all 4 are doing well on the 100,000 mark. 3 have already crossed it. The fourth one, I don’t know if they’ll cross it by end of the year or next year, but they’re getting there. So we’re feeling pretty good on our large GPU deployments. At the same time, the variability I was stating is demand is greater than our ability to ship. Lead times on many of our components, including standard memory, chips and merchant silicon, are nothing like 2022, but they are still very long, ranging from 38 to 52 weeks. So we are coping with that. And you can see Chantelle is leaning in and making greater and greater purchase commitments; we wouldn’t do that without demand.
Operator: Your next question comes from the line of Amit Daryanani with Evercore ISI.
Amit Daryanani: I guess my question, I think folks are kind of trying to ask around this a bit, is the growth rate you’ve had in the last 3, 4 quarters of high 20%, call it, you’re sort of implying that will decelerate, not just in December, but also in ’26, I think, at this point, right, high 20% growth goes to low 20%. Maybe just talk about what is driving that kind of deceleration? Because certainly, if you look at things like your purchase commitments and your deferred product growth, it would almost imply things can accelerate, not decelerate in the out quarters. So just what’s driving that deceleration in the out quarters would be helpful to understand.
Jayshree Ullal: Okay, Amit, but I don’t like the word deceleration. We’re talking about big, big numbers here, guys. And I’m committing to 20% growth and above; don’t call it deceleration, call it variability across quarters, and demand is great. I just don’t know whether it will land in ’26 or ’27.
Chantelle Breithaupt: Yes. The only other thing I’d add generally on this topic is that when you think about the large AI use cases and acceptance clauses, it really comes down to those coming together and the timing of that. That doesn’t follow a seasonality model. That’s also for…
Jayshree Ullal: Good point. It lands when it lands. That is a very good point that Chantelle is making that in the cloud, we started having predictability of how they landed and how they got constructed. In AI, it’s taking longer.
Operator: Your next question comes from the line of David Vogt with UBS.
David Vogt: So I’m going to ask this question at the risk of Jayshree yelling at me. So when we think about your ’26 outlook that you just raised, which we didn’t expect you to raise this early in the cycle, if I just take what you’re doing with regards to the AI-centric opportunity, campus and [ Velo ], it doesn’t leave a lot of room for growth in the core business outside of AI and campus. Maybe can you speak to what you’re seeing in that particular market and how we should think about that progressing through 2026?
Chantelle Breithaupt: Okay. Well, I’ll say something while Jayshree figures out which tone she wants to answer in. But I think the part that I would take a look at, again, I go back to kind of how we started it in early ’25, maybe even ’24. Part of our style is to not assume 100% of everything hits to get to a number, and we’d like to leave ourselves with some optionality. And so we’re putting some goals for ourselves with AI. We’re putting goals for ourselves with campus. It doesn’t mean we’re not focused on the rest. But I don’t think it’s the right approach to assume everything is going to be 100% and leave ourselves exposed, and we’ll continue to update as we see it. Right, Jayshree?
Jayshree Ullal: Yes, absolutely. And so yelling isn’t the tone I’d like to attribute to it; excitement, maybe, or enthusiasm is the one I’d like you to think about. Clearly, AI and campus are going to grow and do great guns for us, as they should, because they are 2 very large TAMs, whether it is Ken and Tyson driving the AI and cloud TAM or Todd Nightingale driving the campus, and these 2 are going to grow substantially in double digits, right? So to your point, it doesn’t leave the core business with a lot of opportunity. But that’s not to say it may be flattish or may grow. It’s to say that our customers are putting more attention there and that the existing business, which is already at very large numbers, will have lesser growth. We don’t yet know if it’s flattish or single digit or whether more will go to AI. We frankly can’t predict the mix this early in the game on 2026, but we think we’re in for a great ride in 2026.
Operator: Your next question comes from the line of Ben Reitzes with Melius Research.
Benjamin Reitzes: I appreciate you clarifying some of those earlier questions, more on the long-term side, Jayshree. I think there was an earlier question around OpenAI and Anthropic and just some of these larger builds with the private companies that obviously are becoming hyperscalers. Maybe without naming names or whatnot, I just wanted to hear about your confidence on being able to participate in some of these builds that are affiliated with some of your cloud titans. And do you think you’ll get a lot of this growth? Is there anything that’s changing or evolving that gives you more or less confidence as we end the year here in ’25?
Jayshree Ullal: Yes, that’s a really good question, Ben, and thank you for that thoughtful question. Until now, the majority of how we’ve measured our AI success through our cloud and AI titans has been the number of GPUs, how much they are installing and whether we can verify that the Ethernet network works. The majority of it to date has been scale out. First, I want to reflect that there are 3 big use cases sitting in front of us: scale up, scale out and scale across. Arista’s participation to date has largely been in scale-out. So we’ve got 2 major use cases in addition to augmenting this, and that’s what makes the Etherlink portfolio that Ken described so eloquently so beautiful. Now how are these being built? Clearly, they’re being driven by large language models, tokens, transformers, inference use cases, you name it.
So the influence is clearly coming from these players you named. But the way they are driving the infrastructure, I can’t keep track of the gigawatts myself; it’s 10 gigawatts here, 10 there, 30 there. It’s adding up to a lot. But I can just tell you, no matter what it is, Arista has been looked at as a very important and relevant participant, especially right now in scale-out and scale across. We will participate in scale up. It will take a little longer. Today, it is largely a set of proprietary technologies like NVLink or PCIe, and I think that will happen more in ’27. That is to say, as we now get confident about exceeding our $10 billion goal next year, we’re looking at our next goal of $15 billion in the next few years.
And I think AI will be a very large part of it, and so will be the companies you mentioned.
Operator: Your next question comes from the line of Tim Long with Barclays.
Timothy Long: Appreciate the question here. Jayshree, maybe if we could just dig a little bit. You mentioned blue box a few times here kind of in that middle portion of the good, better, best. Two-parter. One, could you talk a little bit about kind of the economic model margins or anything like that, that we should expect as blue box maybe becomes a bigger part of the mix over time? And second, can you talk a little bit about where we would expect to see these type of deployments? I’m assuming something like scale up might be, as you described, a little bit more simple and not need the full EOS. But from either a customer or a use case standpoint, where would you expect Arista to be most successful with blue box deployment?
Jayshree Ullal: Yes. Thank you, Tim. That’s a good — those are a good set of questions. I think I mentioned at the Analyst Day, we’re already quickly seeing success. I’ll give you one example, where they were just not getting their white box to work. These are AI mission-critical workloads. And we’re seeing a neo-cloud come right in with, in this case, non-NVIDIA GPUs, in fact, where they’re looking to deploy Arista with its excellent hardware. And at first, they wanted to do an open NOS, but now they are adopting a hybrid strategy where it’s not only an open NOS, but Ken’s EOS is coming to shine in its full glory in this use case. So in this case, I think it’s a Blue Box to start with, but it’s quickly going into a hybrid state of blue and branded EOS box.
The economics on that are not too different from our cloud and AI titans, generally speaking, although there will be scenarios, like you rightly mentioned, that haven’t yet come into play. As we go to significant scale-up volume, the volume of these things will be larger, and the pressure on margins will be greater. But we will carefully have a mix of scale-up, scale-out and scale-across to not affect the overall margins, while definitely taking our fair share in that. So hopefully, I answered your question on both.
Operator: Our next question comes from the line of Meta Marshall with Morgan Stanley.
Meta Marshall: Maybe a question for you, Jayshree. Just on — I know you guys aren’t breaking out kind of front end and back end anymore. But just as more inference kind of use cases are getting built out, just what are you seeing in terms of just like how the front-end network upgrades are happening maybe versus where your expectations were a year ago?
Jayshree Ullal: Yes. Thank you. I think a year or maybe even 2 years ago, Meta, I may have told you this, we were literally outside looking in at all these back-end networks that were largely being constructed with InfiniBand. We’ve seen a sea change, particularly this year, where obviously, more and more times, we’re being invited to construct their 800-gig; last year was more 400-gig. And I think next year will be a combination of 800-gig and 1.6T on the back end. The back end is putting pressure on the front end, which is why it’s getting more and more difficult for us to say, okay, what’s the back-end number that natively connects to GPUs and what is the front end. But we know of concrete cases in our cloud titans where not only is it putting pressure on the AI number, but they’re having to go and upgrade their cloud infrastructure to deal with it.
That part is happening in a small sort of way, but what’s happening in a big sort of way is the back and front are coalescing and converging more. And it’s really becoming hard to tell, and it’s probably six of one and half a dozen of the other.
Kenneth Duda: I’d just like to point out that Arista, I think, is the only successful vendor outside of China selling both front end and back end. And this is where our engineering alignment is so important, because we can offer the customer a consistent solution across their entire infrastructure. I think this is a unique differentiator that will really help us succeed as these networks become more and more mainstream.
Operator: Your next question comes from the line of Karl Ackerman with BNP Paribas.
Karl Ackerman: How should we think about your market opportunity between disaggregated scheduled fabrics versus nonscheduled fabrics, which appear to be used in the largest AI accelerator clusters at one of your largest customers. I mean you, in fact, happen to be the only networking switch vendor who offers both networking topologies. And I’m curious if other data center operators seek to adopt your DSF architecture given the congestion-free advantages it offers.
Jayshree Ullal: Well, I think you hit on it, and Ken hit on it, too, so I’d like him to answer part of the question. But look, we’re not religious. We jointly developed the DSF architecture with one of our leading cloud titans, Meta. And we’ve been selling the nonscheduled fabric for a very long time. So we’ve never been religious about this. And both are doing very, very well at our cloud titans and specifically the one we co-developed with.
Kenneth Duda: That’s exactly right. We’ve had both architectures in massive production scale for, I think, 15 years now. And we’ll continue to offer this range of choice to our customers, offering them their choice between the highest value fabric with deep buffers, no hotspots, congestion-free, loss-free or an unscheduled fabric, which is maybe lower cost, but also can be more difficult to operate. And they both run the same software. So it gives the customer a range of options and a consistent operating model.
Operator: Your next question comes from the line of Simon Leopold with Raymond James.
Simon Leopold: I wanted to come back to the topic around the blue box, which you’ve talked about quite a bit at the analyst meeting. So I appreciate it’s not new. But what I don’t quite think I understand is how it may be evolving or changing in that it sounds like there’s a broader base of customers that may be employing it and that this is a factor that’s in your 2026 margin guidance. Could you elaborate on what you’re assuming blue box trends are in 2026?
Jayshree Ullal: I think the blue box trends in 2026 will continue to remain with a handful of customers who know how to — who have the operational skill to deal with it. So think of that as largely our specialty cloud providers or titan. It’s not going to be mainstream. So single-digit customers probably, maybe 10, maybe 20, but it’s not going to be hundreds, number one, because they really have to have the operational excellence to take our NetDI and our hardware and build upon it and put their open NOS or whatever, right? So — however, in that scenario, you are right to point out that because we may not have the EOS layer, we will take a lower margin on that. And that’s factored into our 2026 guide and mix. And we think the combination of the blue box and the EOS branded box, if I can call it that, will continue to help us thrive with a profitable and high-growing business.
Operator: Your next question comes from the line of James Fish with Piper Sandler.
James Fish: Just on that topic of blue box, shockingly, maybe not just 2026, but what do you see in terms of the mix regarding the adoption curve, as to what percentage of the business it could actually represent over not just next year, but 3, 5 years from now? And you guys mentioned the convergence of front end and back end. Does that take away from kind of your advantage of where you sit today, though, if that line starts to blur a little bit more and allows competition to enter?
Jayshree Ullal: Yes, please go ahead.
Kenneth Duda: In terms of the front end and back-end converging, this is purely advantageous to us because the front end requires a massive number of features. It’s incredibly mission-critical and supports a whole variety of applications, not just the straightforward if demanding communication patterns of the AI back end. So we see that the — our ability to tackle both of them effectively is a significant source of strength and a real differentiator and something that’s not easy for competitors to replicate. If you look at NVIDIA, for example, the sales volume is small in the front end and Cisco is small in the back end. And so I think we’ll see that kind of convergence being beneficial to us.
Jayshree Ullal: Yes. Thank you, Ken. And on the blue box, I’m not sure we model 3 to 5 years. But if I had to venture what I think the evolution of the blue box will be, I think it will be more significant in the scale-up use cases, where there’s a higher dependency on the strength of our hardware and our NetDI capability and a lower requirement for software. So I don’t know yet what that will be. I think it will be high in units, low in dollars kind of thing. So the mix may still be small, but it will actually be incremental, since that’s not a use case we do today.
Operator: Your next question comes from the line of Antoine Chkaiban with New Street Research.
Antoine Chkaiban: I’d like to ask about the UEC. So can you maybe tell us about the progress that the consortium is making, whether the different voices are aligned and what milestones investors should be looking out for going forward?
Rudolph Araujo: Antoine, can you repeat your question? You weren’t coming in clear.
Antoine Chkaiban: I’m asking about the Ultra Ethernet Consortium. Maybe can you tell us about the progress of the consortium.
Jayshree Ullal: Yes. Yes. Yes, Antoine, yes. So after 2 years of lots of hard work led by Hugh Holbrook and now Tom Emmons, UEC did publish its first specification, I believe it was 1.0, in June of 2025. Arista’s Ethernet portfolio is entirely UEC capable and compatible, and we will continue to add more and more compliance: packet trimming, packet spraying, dynamic load balancing. These are all important features that our switches support. And we will augment that with the ESUN specification. As I described, we’ve been an early pioneer; 4 vendors started this together, including Broadcom, Arista and a couple of our titan customers. I’m pretty sure it will be 20, 25, 30 over time. And having a standards-based OCP ESUN agreement will allow us to expand UEC into the scale-up configuration as well, leveraging UEC and IEEE specs. So this modular framework for Ethernet for scale-up and scale-out is a thing of beauty, and Arista is in the middle of it.
Operator: Your next question comes from the line of George Notter with Wolfe Research.
George Notter: I think in the monologue, you mentioned neo clouds as an area where you’re getting more momentum. I think you guys actually said that at the Analyst Day as well. I’m just curious what you’re seeing with that customer set. I guess, from my perspective, I’ve historically kind of thought of that customer as being more focused on the bundle, which isn’t necessarily your game, but it sounds like you’re maybe talking a bit more positively. I’m just wondering what you’re seeing in that space.
Jayshree Ullal: Yes. No, George, I think you’re right. I think in the beginning, we were looking at them bundling. I can think of 2 examples where we weren’t invited to the party because “if you want my GPU, you’ve got to get the network from me,” so we weren’t there. But leaving those 2 aside, and I think even those 2 might get open-minded over time, there are many more neo clouds worldwide coming up that are really looking for Arista’s help, not only on the product, but on the network design and the software capability. They just don’t have the staff and expertise to do everything themselves, and they would rather let us satisfy their network needs. So we are taking down many neo clouds and smaller enterprises, admittedly with smaller numbers of GPU clusters as well.
But if they start with 1,000 to a few thousand, then we’re hopeful they’ll grow, because the one advantage they seem to all have is colo space and power, which is, as you know, a very precious asset going forward.
Operator: Your next question comes from the line of Sebastien Naji with William Blair.
Sebastien Cyrus Naji: I’d like to understand a little bit more about the investments you’re making in the enterprise go-to-market. It looks like sales and marketing expenses stepped up in the quarter. Where do you think you make the most progress as we go into 2026? Is it geographic expansion? Is it investing more into the channel? Is it just trying to cross-sell more into the existing enterprise customer base? I’d love to get your thoughts there.
Jayshree Ullal: Yes. You’re hitting on a really important spot, because we really have 2 sides to our coin. On one side, the AI and cloud makes us dizzy. But we’re just as excited and dizzy about the huge $30 billion TAM, and that’s why we’re so happy to have Todd here. We were just at an international Innovate event in London that Ken, Todd and I all got a chance to see. And the excitement and enthusiasm for a relevant, high-quality network vendor has never been higher. So indeed, we want to invest there. Todd, do you want to say a few words?
Todd Nightingale: Yes. We’re excited about the growth here across the board in the enterprise space, but there are 3 real dimensions we’re staying really focused on. One is expansion into the campus. The VeloCloud acquisition completes our portfolio there; we’re getting great traction and pushing extremely hard around the world. There is a ton of white-space accounts for us that we haven’t gotten. Second, you mentioned geographic expansion. That’s great. We saw good numbers in Asia this quarter. We’ve got a lot of opportunity, I think, to accelerate there, and we like the progress. But the last is just reaching new logos, and we’re investing in our channel to really deliver that and bring us more opportunities, more at bats, to find folks and introduce them to Arista for the first time.
Jayshree Ullal: That is a cricket analogy.
Todd Nightingale: Yes, it was cricket.
Jayshree Ullal: Yes, there you go. So Sebastien, we’re feeling really good, and it clearly is the other half of our numbers.
Operator: Your next question comes from the line of Ryan Koontz with Needham.
John Jeffrey Hopson: This is Jeff Hopson on for Ryan Koontz. We’ve seen a lot of the deals with the hyperscalers or the AI model companies with new data center build-outs, probably at a level we haven’t seen since the cloud build-out. So I’m just curious, is there a way to think about Arista’s opportunity with new network builds versus refreshing or upgrading existing networks?
Jayshree Ullal: Yes, that’s exactly the way to think about it, because in the past, with the cloud, we rarely got to talk about gigawatts and beyond. Much of that was multi-megawatt. So these are newly constructed AI build-outs as opposed to the traditional CPU- or storage-driven cloud build-outs. Of course, they will have refreshes too. But frankly, they’re not getting the attention. All the attention is going to the new build-outs for AI. So that’s the right way to look at it.
Rudolph Araujo: So we have time for one last question.
Operator: That comes from the line of Ben Bollin with Cleveland Research.
Benjamin Bollin: Jayshree, you talked a little bit about some of the tightening lead time conditions out there. Curious what you’re seeing from these cloud customers around engineering and delivery lead times, how that has evolved? And in particular, just changes you’re seeing on your confidence in delivering their needs, whatever, in the next 12, 18 months. That’s it.
Jayshree Ullal: Ben, as you know, forecasting is a very delicate science. I hardly get it right. So I do rely, as do Tyson and Ken and the whole team, on early previews and early forecasts from our large customers, without which we couldn’t do proper planning. Even before they put in their purchase orders, we’ve got to have a good idea of what they want. And you’re seeing that reflected in Chantelle’s purchase commitments. So when it comes to our large and intimate customer engagements, they understand. They’ve gotten burned by the 2022 supply crisis and are absolutely planning with us. Some of that is true in Todd’s areas, too, with the large enterprises, because in a large data center, you have to plan ahead. It’s not like they miraculously show up.
They need power, they need space, and those are 1- or 2-year lead times. Where we have to be more vigilant, and this is something Todd, my campus team and the entire manufacturing team are working on, is the campus business, where we had one of our best quarters. Congratulations, Todd and Kumar, on the campus this quarter. That planning cycle is a lot shorter. That tends to be days and weeks, not months or half a year or longer, right? So we’re working again on this dichotomy in our business, planning as much as we can for the AI, but also planning ahead as much as we can for the enterprise and campus. Would you like to add something, Todd?
Todd Nightingale: Yes. I’ll just add, we are getting aggressive, as Jayshree said, on improving our campus lead times to really accelerate that business and help drive that enterprise growth that we feel pretty passionately about. And the only other thing is that the investment here, the amount of dollars being put into purchasing and making those purchase commitments, is key, and we’re looking for improvement in that.
Rudolph Araujo: Thanks, Jayshree and Todd. That concludes Arista Networks Third Quarter 2025 Earnings Call. We have posted a presentation that provides additional information on our results, which you can access on the Investors section of our website. Thank you for joining us today and for your interest in Arista.
Operator: Thank you for joining. Ladies and gentlemen, this concludes today’s call. You may now disconnect.