Arista Networks, Inc. (NYSE:ANET) Q1 2024 Earnings Call Transcript

May 7, 2024

Arista Networks, Inc. beat earnings expectations: reported EPS was $1.99 versus expectations of $1.74.

Operator: Ladies and gentlemen, welcome to the First Quarter 2024 Arista Networks Financial Results Earnings Conference Call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question-and-answer session. Instructions will be provided at that time. [Operator Instructions] As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section at the Arista website, following this call. Ms. Liz Stine, Arista’s Director of Investor Relations, you may begin.

Liz Stine: Thank you, operator. Good afternoon, everyone, and thank you for joining us. With me on today’s call are Jayshree Ullal, Arista Networks’ Chairperson and Chief Executive Officer and Chantelle Breithaupt, Arista’s Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal first quarter ending March 31st, 2024. If you’d like a copy of this release, you can access it online from our website. During the course of this conference call, Arista Networks’ management will make forward-looking statements, including those relating to our financial outlook for the second quarter of the 2024 fiscal year, longer-term financial outlooks for 2024 and beyond, our total addressable market and strategy for addressing these market opportunities, including AI, customer demand trends, supply chain constraints, component costs, manufacturing output, inventory management and inflationary pressures on our business, lead times, product innovation, working capital optimization and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K and which could cause actual results to differ materially from those anticipated by these statements.

These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Also, please note that certain financial measures we use on this call are expressed on a non-GAAP basis and have been adjusted to exclude certain charges. We have provided reconciliations of these non-GAAP financial measures to GAAP financial measures in our earnings press release. With that, I will turn the call over to Jayshree.

Jayshree Ullal: Thank you, Liz. Thank you, everyone, for joining us this afternoon for our first quarter 2024 earnings call. Amidst all the network consolidation, Arista is looking to establish ourselves as the pure-play networking innovator for the next era, addressing at least a $60 billion TAM in data-driven, client-to-cloud AI networking. In terms of Q1 specifics, we delivered revenue of $1.57 billion for the quarter with non-GAAP earnings per share of $1.99. Services and software support renewals contributed strongly at approximately 16.9% of revenue. Our non-GAAP gross margin of 64.2% was influenced by improved supply chain and inventory management, as well as a favorable enterprise mix. International contributions for the quarter registered at 20%, with the Americas strong at 80%.

As we kick off 2024, I’m so proud of the Arista teamwork and our consistent execution. We have been fortunate to build a seasoned management team over the past 10 to 15 years. Our co-founders have been deeply engaged in the company for the past 20 years. Ken is still actively programming and writing code, while Andy is our full-time chief architect for next-generation AI, silicon, and optics initiatives. Hugh Holbrook, our recently promoted Chief Development Officer, is driving our major platform initiatives in tandem with John McCool and Alex on the hardware side. This engineering team is one of the best in tech and networking that I have ever had the pleasure of working with. On behalf of Arista, though, I would like to express our sincere gratitude for Anshul Sadana’s 16-plus wonderful years of instrumental service to the company in a diverse set of roles.

I know he will always remain a well-wisher and supporter of the company. But Anshul, I’d like to invite you to say a few words.

Anshul Sadana: Thank you, Jayshree. The Arista journey has been a very special one. We’ve come a long way from our startup days to over an $80 billion company today. Every milestone, every event, the ups and downs are all etched in my mind. I’ve had a multitude of roles and learned and grown more than I could have ever imagined. I have decided to take a break and spend more time with family, especially while the kids are young. I’m also looking at exploring different areas in the future. I want to thank all of you on the call today: our customers, our investors, our partners, and all the well-wishers over these years. Arista isn’t just a workplace, it’s family to me. It’s the people around you that make life fun. Special thanks to Arista leadership.

Chris, Ashwin, John McCool, Mark Foss, Ita and Chantelle, Marc Taxay, Hugh Holbrook, Ken Duda, and many more. Above all, there are two very special people I want to thank. Andy Bechtolsheim, for years of vision, passion, guidance, and listening to me. And of course, Jayshree. She hasn’t been just my manager, but also my mentor and coach for over 15 years. Thank you for believing in me. I will always continue to be an Arista well-wisher. Back to you, Jayshree.

Jayshree Ullal: Anshul, thank you for that very genuine and heartfelt expression of your huge contributions to Arista. It gives me goosebumps hearing your nostalgic memories. We will miss you and hope someday you will return home. At this time, Arista will not be replacing the COO role and is instead flattening the organization. We will be leveraging the deep bench strength of our executives, who have stepped up to drive our new Arista 2.0 initiatives. In particular, John McCool, our Chief Platform Officer, and Ken Kiser, our Group Vice President, have taken expanded responsibility for our cloud and AI titan initiatives, operations, and sales. On the non-cloud side, two seasoned executives are being promoted. Ashwin Kohli, Chief Customer Officer, and Chris Schmidt, Chief Sales Officer, will together address the global enterprise and provider opportunity.

Our leaders have grown up in Arista, with long tenures of a decade or more. We are quite pleased with the momentum across all three of our sectors: cloud and AI titans, enterprise, and providers. Customer activity is high as Arista continues to impress our customers and prospects with our undeniable focus on quality and innovation. As we build our programmable network underlays based on our universal leaf-spine topology, we are also constructing a network-as-a-service suite of overlays, such as zero-touch automation, security, telemetry, and observability. I would like to invite Ken Duda, our Founder, CTO, and recently elected Arista board member, to describe our enterprise NAS strategy as we drive to our enterprise campus goal of $750 million in 2025.

Over to you, Ken.

Ken Duda: Thank you, Jayshree, and thanks everyone for being here. I’m Ken Duda, CTO of Arista Networks, excited to talk to you today about NetDL, the Arista Network Data Lake, and how it supports our network-as-a-service strategy. From its inception decades ago, networking has involved rapidly changing data: data about how the network is operating, which paths through the network are best, and how the network is being used. But historically, most of this data was simply discarded as the network changed state, and what was collected could be difficult to interpret because it lacked context. Network addresses and port numbers by themselves provide little insight into what users are doing or experiencing.

Recent developments in AI have proved the value of data, but to take advantage of these breakthroughs, you need to gather and store large data sets labeled suitably for machine learning. Arista is solving this problem with NetDL. We continually monitor every device, not simply taking snapshots, but rather streaming every network event, every counter, every piece of data in real time, archiving a full history in NetDL. Alongside this device data, we also collect flow data and in-band network telemetry data gathered by our switches. Then, we enrich this performance data further with user service and application layer data from external sources outside the network, enabling us to understand not just how each part of the network is performing, but also which users are using the network for what purposes, and how the network behavior is influencing their experience.
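To make the kind of pipeline Ken describes concrete, here is a minimal, hypothetical sketch of streaming device state and enriching it with flow and application context. All class and field names are invented for illustration and are not Arista's NetDL or EOS APIs.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, List

@dataclass
class TelemetryEvent:
    """One streamed state update from a device: every counter change, not a periodic snapshot."""
    device: str
    path: str          # e.g., an interface counter path
    value: int
    timestamp: datetime

@dataclass
class EnrichedRecord:
    """Device telemetry joined with flow and application context so it is meaningful for ML."""
    event: TelemetryEvent
    flow: str
    application: str
    user: str

class MiniLake:
    """Toy append-only store standing in for a network data lake."""

    def __init__(self) -> None:
        self.records: List[EnrichedRecord] = []

    def ingest(self, event: TelemetryEvent, context: Dict[str, str]) -> None:
        # Enrich raw device data with who/what context from sources outside the network.
        self.records.append(EnrichedRecord(
            event=event,
            flow=context.get("flow", "unknown"),
            application=context.get("app", "unknown"),
            user=context.get("user", "unknown"),
        ))

lake = MiniLake()
lake.ingest(
    TelemetryEvent("leaf1", "Ethernet1/in-discards", 42, datetime.now(timezone.utc)),
    {"flow": "10.0.0.1:49512->10.0.0.9:443/tcp", "app": "training-job-7", "user": "ml-team"},
)
print(len(lake.records), "enriched record(s) archived")
```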

NetDL is a foundational part of the EOS stack, enabling advanced functionality across all of our use cases. For example, in AI fabrics, NetDL enables fabric-wide visibility, integrating network data and NIC data so that operators can identify misconfigurations or misbehaving hosts and pinpoint performance bottlenecks. But for this call, I want to focus on how NetDL enables network as a service. Network-as-a-Service, or NAS, is Arista’s strategy for up-leveling our relationship with our customers, taking us beyond simply providing network hardware and software by also providing customers or service provider partners with tools for building and operating services. The customer selects a service model, configures service instances, and Arista’s CV-NAS handles the rest: equipment selection, deployment, provisioning, billing, monitoring, and troubleshooting.
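As a purely illustrative sketch of the declarative workflow described above, the following shows what a tenant-defined service instance and the automation behind it might look like. The names and structure are assumptions, not the actual CloudVision or CV-NAS interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ServiceInstance:
    """Hypothetical NaaS service instance: the tenant declares intent, the platform derives the rest."""
    name: str
    service_model: str        # e.g., "campus-wired-wireless"
    sites: List[str]
    endpoints: int            # ports/APs the tenant expects to attach
    priority_class: str = "business-default"

def plan_deployment(instance: ServiceInstance) -> List[str]:
    # Stand-in for the automation that turns declared intent into concrete build steps.
    return [
        f"select hardware for {instance.endpoints} endpoints across {len(instance.sites)} sites",
        f"render and provision '{instance.service_model}' configurations",
        f"enable monitoring, billing, and self-service for '{instance.name}'",
    ]

campus = ServiceInstance("acme-campus", "campus-wired-wireless", ["hq", "branch-1"], endpoints=480)
for step in plan_deployment(campus):
    print(step)
```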

In addition, CV-NAS provides end-user self-service, enabling customers to manage their service instances, provision new endpoints, provision new virtual topologies, set traffic prioritization policies, set access rules, and get visibility into their use of the service and its performance. One can think of NAS as applying cloud computing principles to the physical network: reusable design patterns, autonomous operations at scale, and multi-tenancy from top to bottom, with cost-effective, automated end-user self-service. And we couldn’t get to the starting line without NetDL, as NetDL provides the database foundation for NAS service deployment and monitoring. Now, NAS is not a separate SKU, but really refers to a collection of functions in CloudVision. For example, Arista Validated Designs, or AVD, is a provisioning system.

It’s an early version of our NAS service instance configuration tool. Our AGNI services provide the global, location-independent identity management needed to identify customers within NAS. Our UNO product, or Universal Network Observability, will ultimately become the service monitoring element of NAS. And finally, our NAS solution has security integrated through our ZTN, or Zero Trust Networking, product that we showcased at RSA this week. Thus, our NAS vision simultaneously represents a strategic business opportunity for us while also serving as a guiding principle for our immediate CloudVision development efforts. While we are really excited about the future here, our core promise to our investors and customers is unchanging and uncompromised.


We will always put quality first. We are incredibly proud of the success customers have had deploying our products, because they really work. And as we push hard, building sophisticated new functions in the NetDL and NAS areas, we will never put our customers’ networks at risk by cutting corners on quality. Thank you.

Jayshree Ullal: Thank you, Ken, for your tireless execution in the typical Arista way. In an era characterized by stringent cybersecurity, observability is an essential perimeter and imperative. We cannot secure what we cannot see. We launched CloudVision UNO in February 2024 based on the EOS Network Data Lake Foundation that Ken just described for Universal Network Observability. CloudVision UNO delivers fault detection, correction, and recovery. It also brings deep analysis to provide a composite picture of the entire network with improved discovery of applications, hosts, workloads, and IT systems of record. Okay, switching to AI, of course, no call is complete without that. As generative AI training tasks evolve, they are made up of many thousands of individual iterations.

Any slowdown due to the network can critically impact application performance, creating inefficient wait states and idling away processor performance by 30% or more. The time taken to reach coherence, known as job completion time, is an important benchmark, achieved by building proper scale-out AI networking to improve the utilization of these precious and expensive GPUs. Arista continues to have customer success across our innovative AI for networking platforms. In a recent blog from one of our large Cloud and AI Titan customers, Arista was highlighted for building a 24,000-node GPU cluster based on our flagship 7800 AI Spine. This cluster tackles complex AI training tasks that involve a mix of model and data parallelization across thousands of processors, and Ethernet is proving to offer at least a 10% improvement in job completion performance across all packet sizes versus InfiniBand.
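To make the arithmetic behind these claims concrete, here is a small illustrative calculation with assumed numbers (not figures from the call) showing how network wait states idle GPUs and how a roughly 10% faster job completion time adds up over a long training run.

```python
# Illustrative numbers only (assumed, not from the call): how network wait states
# idle GPUs, and how a ~10% faster job completion time (JCT) adds up.
compute_time_s = 70.0   # time per training iteration spent computing (assumed)
network_wait_s = 30.0   # time per iteration stalled on collective communication (assumed)

iteration_s = compute_time_s + network_wait_s
gpu_utilization = compute_time_s / iteration_s
print(f"GPU utilization: {gpu_utilization:.0%}")   # 70%, i.e., ~30% idled away waiting

iterations = 10_000
baseline_jct_h = iterations * iteration_s / 3600
improved_jct_h = baseline_jct_h * 0.90             # ~10% better JCT
print(f"JCT: {baseline_jct_h:.0f} h -> {improved_jct_h:.0f} h "
      f"({baseline_jct_h - improved_jct_h:.0f} cluster-hours of GPU time freed)")
```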

We are witnessing an inflection in AI networking and expect this to continue throughout the year and the decade. Ethernet is emerging as critical infrastructure across both front-end and back-end AI data centers. AI applications simply cannot work in isolation and demand seamless communication among the compute nodes, consisting of back-end GPUs and AI accelerators, as well as front-end nodes like CPUs, alongside storage and IP/WAN systems. If you recall, in February I shared with you that we are progressing well in four major AI Ethernet clusters that we recently won versus InfiniBand. In all four cases, we are now migrating from trials to pilots, connecting thousands of GPUs this year, and we expect production in the range of 10K to 100K GPUs in 2025.

Ethernet at scale is becoming the de facto network and premier choice for scale-out AI training workloads. A good AI network needs a good data strategy, delivered by our highly differentiated EOS and Network Data Lake architecture. We are therefore becoming increasingly constructive about achieving our AI target of $750 million in 2025. In summary, as we continue to set the direction of Arista 2.0 networking, our visibility into new AI and cloud projects is improving, and our enterprise and provider activity continues to progress well. We are now projecting above our analyst-day range of 10% to 12% annual growth in 2024. And with that, I’d like to turn it over to Chantelle, for the very first time as Arista CFO, to review financial specifics and tell us more.

Warm welcome to you, Chantelle.

Chantelle Breithaupt: Thank you, Jayshree, and good afternoon. The analysis of our Q1 results and our guidance for Q2 2024 is based on non-GAAP and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges, and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. Total revenues in Q1 were $1.571 billion, up 16.3% year-over-year, and above the upper end of our guidance of $1.52 billion to $1.56 billion. This year-over-year growth was led by strength in the enterprise vertical, with cloud doing well as expected. Services and subscription software contributed approximately 16.9% of revenue in the first quarter, down slightly from 17% in Q4.

International revenues for the quarter came in at $316 million, or 20.1% of total revenue, down from 22.3% in the last quarter. This quarter-over-quarter reduction reflects quarterly volatility and includes the impact of an unusually high contribution from our EMEA in-region customers in the prior quarter. In addition, we continue to see strong revenue growth in the U.S., with solid contributions from our Cloud Titan and Enterprise customers. Gross margin in Q1 was 64.2%, above our guidance of approximately 62%. This is down from 65.4% last quarter and up from 60.3% in Q1 FY23. The year-over-year margin accretion was driven by three key factors: supply chain productivity gains led by the efforts of John McCool, Mike Kappus, and the operations team; a stronger mix of enterprise business; and a favorable revenue mix between product, services, and software.

Operating expenses for the quarter were $265 million, or 16.9% of revenue, up from $262.7 million last quarter. R&D spending came in at $164.6 million, or 10.5% of revenue, down slightly from $165 million last quarter. This reflected increased headcount, offset by lower new product introduction costs in the period due to the timing of prototypes and other costs associated with our next-generation products. Sales and marketing expense was $83.7 million, or 5.3% of revenue, compared to $83.4 million last quarter, with increased headcount costs offset by discretionary spending delayed until later this year. Our G&A costs came in at $16.7 million, or 1.1% of revenue, up from 0.9% of revenue in the prior quarter. Income from operations for the quarter was $744 million, or 47.4% of revenue.

Other income for the quarter was $62.6 million, and our effective tax rate was 20.9%. This resulted in net income for the quarter of $637.7 million, or 40.6% of revenue. Our diluted share count was 319.9 million shares, resulting in diluted earnings per share for the quarter of $1.99, up 39% from the prior year. Now turning to the balance sheet. Cash, cash equivalents, and investments ended the quarter at approximately $5.45 billion. During the quarter, we repurchased $62.7 million of our common stock, and in April we repurchased an additional $82 million, for a total of $144.7 million at an average price of $269.80 per share. We have now completed share repurchases under our existing $1 billion board authorization, under which we repurchased 8.5 million shares at an average price of $117.20 per share.

In May 2024, our board of directors authorized a new $1.2 billion stock repurchase program, which commences in May 2024 and expires in May 2027. The actual timing and amount of future repurchases will depend upon market and business conditions, stock price, and other factors. Now turning to operating cash performance for the first quarter. We generated approximately $513.8 million of cash from operations in the period, reflecting strong earnings performance, partially offset by ongoing investments in working capital. DSOs came in at 62 days, up from 61 days in Q4, driven by significant end-of-quarter service renewals. Inventory turns were 1.0, flat to last quarter. Inventory increased slightly to $2 billion in the quarter, up from $1.9 billion in the prior period, reflecting the receipt of components from our purchase commitments and an increase in switch-related finished goods.

Our purchase commitments at the end of the quarter were $1.5 billion, down from $1.6 billion at the end of Q4. We expect this number to level off as lead times continue to improve, but it will remain somewhat volatile as we ramp up new product introductions. Our total deferred revenue balance was $1.663 billion, up from $1.506 billion in Q4 fiscal year 2023. The majority of the deferred revenue balance is services-related and directly linked to the timing and term of service contracts, which can vary on a quarter-by-quarter basis. Our product deferred revenue balance decreased by approximately $25 million versus last quarter. We expect 2024 to be a year of significant new product introductions, new customers, and expanded use cases. These trends may result in increased customer-specific acceptance clauses and increase the volatility of our product deferred revenue balances.

As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis, independent of underlying business drivers. Accounts payable days were 36 days, down from an unusually high 75 days in Q4, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $9.4 million. Now, turning to our outlook for the second quarter and beyond. I have now had a quarter of working with Jayshree, the leadership team, and the broader Arista ecosystem, and I am excited about both our current and long-term opportunities in the markets that we serve. The passion for innovation, our agile business operating model, and employee commitment to our customer success are foundational. We are pleased with the momentum being demonstrated across the segments of enterprise, cloud, and providers.

With this, we are raising our revenue guidance to an outlook of 12% to 14% growth for fiscal year 2024. On the gross margin front, given the expected end-customer mix combined with continued operational improvements, we maintain our fiscal year 2024 outlook of 62% to 64%. Now, turning to spending and investments, we continue to monitor both the overall macro environment and overall market opportunities, which will inform our investment prioritization as we move through the year. This will include a focus on targeted hires in leadership roles, R&D, and the go-to-market team as we see opportunities to acquire strong talent. On the cash front, while we will continue to focus on supply chain and working capital optimization, we expect some continued growth in inventory on a quarter-by-quarter basis as we receive components from our purchase commitments.

With these sets of conditions and expectations, our guidance for the second quarter, which is based on non-GAAP results and excludes any non-cash stock-based compensation impacts and other non-recurring items, is as follows. Revenues of approximately $1.62 billion to $1.65 billion, gross margin of approximately 64%, and operating margin at approximately 44%. Our effective tax rate is expected to be approximately 21.5%, with diluted shares of approximately 320.5 million shares. I will now turn the call back to Liz for Q&A. Liz?

Liz Stine: Thank you, Chantelle. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I’d like to request that everyone please limit themselves to a single question. Thank you for your understanding. Operator, take it away.


Q&A Session


Operator: Thank you. [Operator Instructions] And your first question comes from the line of Atif Malik with Citi. Your line is open.

Unidentified Analyst: Hi. It’s Adrienne [ph] for Atif. Thanks for taking the question. I was hoping you could comment on your raised expectations for the full year with regards to customer mix. It sounds like from your gross margin guidance, you’re seeing a higher contribution from enterprise, but I was hoping you could comment on the dynamics you’re seeing with your Cloud Titans. Thank you.

Jayshree Ullal: Yes. So, as Chantelle and I described, when we gave our guidance in November, we didn’t have much visibility beyond three to six months. And so, we had to go with that. The activity in Q1 alone, and I believe it will continue in the first half, has been much beyond what we expected. And this is true across all three sectors, cloud and AI titans, providers and enterprise. So, we’re feeling good about all three and therefore have raised our guidance earlier than we probably would have done in May. I think we would have ideally liked to look at two quarters. Chantelle, what do you think? But I think we felt good enough.

Chantelle Breithaupt: Yes. I think the diversified momentum, and the mix of that momentum, gave us confidence.

Jayshree Ullal: Great. Thanks, Adrienne.

Operator: And your next question comes from the line of Samik Chatterjee with JPMorgan. Your line is open.

Samik Chatterjee: Hi. Thanks for taking my question. I guess, Jayshree and Chantelle, I appreciate the raise in the guidance for the full year here. But when I look at it on a half-over-half basis in terms of what you’re implying, if I am doing the math correctly, you’re implying about 5% to 6% half-over-half growth, and when I go back and look at previous years, there’s probably only one year out of the last five or six where you’ve been in that sort of range or below it. Every other year, it’s been better than that. I’m just wondering, you mentioned the Q1 activity that you’ve seen across the board. Why are we not seeing a bit more of a half-over-half uptick in the momentum in the back half? Thank you.

Jayshree Ullal: Thanks, Samik. It’s like anything else. Our numbers are getting larger and larger, so activity has to translate to larger numbers. Of course, if we see it improve even more, we’ll guide appropriately for the quarter. But at the moment, we’re feeling very good just increasing our guide from 10% to 12% up to 12% to 14%. As you know, Arista doesn’t traditionally do that so early in the year. So please read that as confidence. Cautiously confident or optimistically confident, but nevertheless confident.

Samik Chatterjee: Thank you.

Operator: And your next question comes from the line of Ben Reitzes with Melius Research. Your line is open.

Liz Stine: Ben, if you’re talking, we can’t hear you. Operator, can we start after Ben?

Operator: So we will move on to the next question. Mr. Reitzes, if you can hear us, please re-hit star 1. And we will move to our next question from George Notter with Jefferies. Your line is open.

George Notter: Hi, guys. Thanks a lot. I want to key in on something I think you guys said earlier in the monologue. You mentioned that Ethernet was 10% better than InfiniBand. And my notes are incomplete here. Could you just remind me exactly what you were talking about there? What is the comparison you’re making to InfiniBand? And just anything. I’d love to learn more about that.

Jayshree Ullal: Absolutely, George. Historically, as you know, when you look at InfiniBand and Ethernet in isolation, there are a lot of advantages to each technology. Traditionally, InfiniBand has been considered lossless, and Ethernet is considered to have some loss properties. However, when you actually put a full GPU cluster together along with the optics and everything, and you look at the coherence of the job completion time across all packet sizes, the data has shown, and this is data that we have gotten from third parties, including Broadcom, that for just about every packet size in a real-world environment comparing those technologies, the job completion time of Ethernet was approximately 10% faster.

So, you can look at these things in silos, or you can look at them in a practical cluster. And in a practical cluster, we’re already seeing improvements on Ethernet. Now, don’t forget, this is just Ethernet as we know it today. Once we have the Ultra Ethernet Consortium and some of the improvements you’re going to see on packet spraying, dynamic load balancing, and congestion control, I believe those numbers will get even better.
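As a toy illustration of why packet spraying helps, the following sketch contrasts flow-hash ECMP, which pins each flow's packets to one path, with per-packet spraying, which spreads them across all equal-cost paths. The packet counts are assumed, and this is not the UEC specification.

```python
# Toy model (assumed packet counts, not the UEC spec): contrast flow-hash ECMP,
# which ties link load to a few large flows, with per-packet spraying, which
# spreads each flow's packets across all equal-cost paths.
from collections import Counter

NUM_PATHS = 4
flows = {"gpu0->gpu8": 1200, "gpu1->gpu9": 1100, "gpu2->gpu10": 900}  # packets per flow (assumed)

def flow_hash(flow_id: str) -> int:
    # Deterministic stand-in for a 5-tuple ECMP hash.
    return sum(flow_id.encode()) % NUM_PATHS

# Flow-hash ECMP: every packet of a flow is pinned to one hashed path.
ecmp_load = Counter()
for flow, pkts in flows.items():
    ecmp_load[flow_hash(flow)] += pkts

# Per-packet spraying: each flow's packets are distributed round-robin over all paths.
spray_load = Counter()
for offset, pkts in enumerate(flows.values()):
    for p in range(pkts):
        spray_load[(offset + p) % NUM_PATHS] += 1

print("busiest link with flow-hash ECMP :", max(ecmp_load.values()), "packets")
print("busiest link with packet spraying:", max(spray_load.values()), "packets")
```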

George Notter: Got it. I assume you’re talking about RoCE here as opposed to just straight-up Ethernet. Is that correct?

Jayshree Ullal: In all cases right now, pre-UEC, we’re talking about RDMA over Ethernet, exactly: RoCE v2, which is the most widely deployed NIC protocol in most scenarios. But even with standard RoCE, we’re seeing a 10% improvement. Imagine when we go to UEC.

George Notter: I know you guys are also working on your own version of Ethernet. Presumably, it blends into the UEC standard over time. But what do you think the differential might be there relative to InfiniBand? Do you have a sense on what that might look like?

Jayshree Ullal: I don’t think we have metrics yet. But it’s not like we’re working on our own version of Ethernet. We’re working on the UEC-compatible and compliant version of Ethernet. And there are two aspects of it: what we do on the switch and what others do on the NIC, right? On the switch side, we’ve already built an architecture. We call it the Etherlink architecture, and it takes into consideration the buffering, the congestion control, and the load balancing. Largely, we’ll have to make some software improvements. The NICs, especially at 400 and 800, are where we are looking to see more improvements, because that will give us additional performance from the server onto the switch. So we need both halves to work together. Thanks, George.

George Notter: Great. Thank you.

Operator: And your next question comes from the line of Ben Reitzes with Melius Research. Your line is open.

Ben Reitzes: Gosh, I hope it works this time.

Jayshree Ullal: Yes, we can hear you now.

Ben Reitzes: Oh, great. Thanks a lot. I was wondering if you can characterize how you’re seeing NVIDIA in the market right now. Are you seeing yourselves go more head to head? How do you see that evolving? And if you don’t mind, also, as NVIDIA moves to a more systems-based approach, potentially with Blackwell, how do you see that impacting your competitiveness with NVIDIA? Thanks so much.

Jayshree Ullal: Yes. Thanks, Ben, for a loaded question. First of all, I want to thank NVIDIA and Jensen. I think it’s important to understand that we wouldn’t have a massive AI networking opportunity if NVIDIA didn’t build some fantastic GPUs. So, yes, we see them in the market all the time, mostly connecting our networks to their GPUs. NVIDIA is the market leader there, and I think they’ve created an incremental market opportunity for us that we are very, very pleased about. Now, do we see them in the market? Of course we do. We see them on GPUs. We also see them on the RoCE or RDMA Ethernet NIC side. And then sometimes we see them, obviously, when they are pushing InfiniBand, which has been, for the most part, the de facto network of choice.

You might’ve heard me say last year, or the year before, that I was on the outside looking in on AI networking. But today we feel very pleased that we are able to be the scale-out network for NVIDIA’s GPUs and NICs based on Ethernet. We don’t see NVIDIA as a direct competitor yet on the Ethernet side. I think it’s 1% of their business and 100% of our business, so we don’t worry about that overlap at all. And we think we’ve got 20 years of experience, from founding to now, to make our Ethernet switches better and better on both the front end and the back end. So we’re very confident that Arista can build a scale-out network and work with NVIDIA’s scale-up GPUs. Thank you, Ben.

Ben Reitzes: Thanks a lot.

Operator: And your next question comes from the line of Amit Daryanani with Evercore ISI. Your line is open.

Amit Daryanani: Good afternoon, thanks for taking my question. I guess, Jayshree, given some of the executive transitions you’ve seen at Arista, can you perhaps talk about, to the extent you can, the discussions you’ve had with the board around your desire and commitment to remain CEO? Anything you can touch on there would be really helpful. And then, if I just go back to this job completion data that you talked about, given what you just said and the expected improvements, what are the reasons a customer would still use InfiniBand versus switching more aggressively toward Ethernet? Thank you.

Jayshree Ullal: First of all, you heard Anshul. I’m sorry to see Anshul decide to do other things, and I hope he comes back. We’ve had a lot of executives make a U-turn over time, and we call them boomerangs. So I certainly hope that’s true of Anshul. But we have a very strong bench, and we’ve been blessed to have a very consistent bench for the last 15 years, which is very rare in our industry and in Silicon Valley. So while we’re sorry to see Anshul make a personal decision to take a break, we know he’ll remain a well-wisher, and we know the bench strength below Anshul will now step up to do greater things. As for my commitment to the board, I have committed for multiple years. I think it’s in the wrong order, I wish Anshul had stayed and I had retired, but I’m committed to staying here for a long time. Thank you.
