Nebius Group N.V. (NASDAQ:NBIS) Q2 2025 Earnings Call Transcript August 7, 2025
Nebius Group N.V. beats earnings expectations. Reported EPS is $-0.38332, expectations were $-0.42.
Neil Doshi: Thank you, and welcome to Nebius Group’s Second Quarter 2025 Earnings Conference Call. Joining me today are Arkady Volozh, Founder and CEO; and our broader management team. Our remarks today will include forward-looking statements, which are based on assumptions as of today. Actual results may differ materially as a result of various factors, including those set forth in today’s earnings release and in our annual report on Form 20-F filed with the SEC. We undertake no obligation to update any forward-looking statements. During this call, we will present both GAAP and certain non-GAAP financial measures. A reconciliation of GAAP to non-GAAP measures is included in today’s earnings press release. The earnings press release, shareholder letter and accompanying investor presentation are available on our website at group.nebius.com/investor-hub. And now I’d like to turn the call over to Arkady.
Arkady Volozh: Thanks, Neil, and thank you to everyone for joining the call today. I am pleased to say that we had an excellent quarter. We more than doubled our revenue for the whole group from Q1, and this quarter, we also became EBITDA positive in our core AI infrastructure business, ahead of our previous projections. We could have grown faster, but we were oversold on all of our supply of previous-generation Hoppers, and we decided to wait for the new generation of GPUs to come. And finally, the new Blackwells are coming to the market in volume, and in parallel, we are dramatically increasing our data center capacity. That’s why we expect to significantly increase our sales by the end of this year, and that’s why we are increasing our ARR guidance for the year-end from the previous $750 million to $1 billion to a new guidance, which is now $900 million to $1.1 billion.
Some more color on the capacity front, and I see this as one of the most important updates of this call. We are aggressively ramping up. By the end of this year, we expect to have secured 220 megawatts of connected power that is either active or ready for GPU deployment, and this expansion includes our data centers in New Jersey and Finland. In addition, we have nearly closed on 2 substantial new greenfield sites in the United States. And overall, we are in the process of securing more than 1 gigawatt of power by the end of 2026 to capture industry growth next year. In addition, we made big enhancements to our software cloud platform, obviously, to support our expanding capacity and to meet the demands of those large-scale clusters. Also, we continue to significantly expand our customer base.
We started to gain real traction on the enterprise side, adding large global technology customers such as Cloudflare, Prosus and Shopify. And we still remain a leading new cloud provider for so-called AI-native tech startups. We have added customers like HeyGen, Lightning.AI, Photoroom and many, many others. On the financing front, as you already know, we are fortunate to have multiple levers to finance our ambitious growth. We have raised over $4 billion in capital so far. We have a strong balance sheet, as you can see, and we have access to potentially billions of dollars more thanks to our noncore businesses and other equity stakes such as Avride, ClickHouse and Toloka. In short, this is an exciting time for Nebius. We are in the midst of a once-in-a-generation opportunity.
That’s what we believe in. The demand for AI compute is strong and will just get stronger. We are rapidly increasing our capacity to pave the way for accelerated growth in 2026 and beyond. Well — and with that, let me introduce our new Chief Financial Officer, Dado Alonso. Dado, welcome again, and the floor is yours.
Dado Alonso: Thank you, Arkady. I’m really excited to be joining Nebius. I’ve long believed that AI will fundamentally transform our world, and Nebius is well positioned to make that happen. Of course, I’m also looking forward to getting to know our investors and analysts over the coming months. While the details of our Q2 financial performance can be found in our shareholder letter, I’d like to highlight a few key items and then conclude with guidance. We reported $105.1 million in revenue, up 625% year-over-year and up 106% quarter-over-quarter, driven by strength in our core business and solid execution from our TripleTen team. Our AI cloud infrastructure revenue increased more than 9x year-over-year, driven by strong customer demand for our Hopper GPUs and near-peak utilization of our platform.
Even as we achieve hypergrowth, we continue to operate with discipline. This focus allowed us to achieve positive adjusted EBITDA in our core business ahead of our expectations. Below the operating income/loss line, we recorded a gain from the revaluation of an investment in equity securities. We also reported a gain from discontinued operations. These 2 nonbusiness-related items made us GAAP net income profitable for the quarter. It is important to note that we view these gains as onetime in nature. Turning to guidance. We see very strong momentum in our business, and demand for AI compute remains exceptionally high. Given our plans to further scale our platform this year, we are updating our full year outlook. For annualized run rate revenue, as Arkady already mentioned, we are raising guidance from $750 million to $1 billion to $900 million to $1.1 billion.
This is based on closed contracts for existing and future capacity as well as sales we anticipate for the rest of the year. For our core business revenue, we are maintaining our guidance of $400 million to $600 million. Let me share a few points. We continue to experience strong demand and are building capacity to take advantage of the large opportunity in front of us. Of the 220 megawatts of connected power we expect to have at the end of the year, we will have 100 megawatts of active power. And as we are building out our data center capacity, most of our GPU installations will take place in Q4. So we expect our annualized run rate revenue and revenue to be back-end weighted. For group revenue, we are keeping the projections that we already provided, that is group revenue of $450 million to $630 million.
This excludes the 2025 revenue guidance of $50 million to $70 million we previously gave for Toloka. As we announced, effective from Q2, we have deconsolidated Toloka from the group. Turning to adjusted EBITDA. As we previously announced, we expect to be slightly positive at the group level by the end of the year, but we will still be negative for the full year. Finally, we are maintaining our CapEx guidance of around $2 billion in 2025. So in closing, we are experiencing hypergrowth, with demand to support continued strong results. We are investing in capacity to capture the large and growing opportunity in front of us and are positioning the company to become a leader in AI cloud infrastructure. Look, I truly believe the future of Nebius is incredibly bright.
We’re not just well positioned. We have the resources, the expertise and, most importantly, the team to lead and win. Now let me turn the call over to Neil for Q&A.
Neil Doshi: Great. Thanks, Dado. We’ve started to collect questions from the online platform, and we’ll give it a minute just to consolidate. Great. All right. So our first question comes from our analyst from Goldman Sachs, Alex Duval. And maybe I’ll give this to Marc. Marc, can you maybe talk about the overall demand environment? And what does demand look like as we move into the second half of this year?
Marc D. Boroditsky: Yes. And thank you, Alex, for the question. The demand environment in the second quarter, as you can tell from our results, was very strong. As we brought on more capacity, we sold through it. And by the end of the quarter, we were at peak utilization. There’s a nice trend that we’re actually starting to witness. As we bring on larger clusters, we are able to bring on new large customers who want to purchase greater and greater capacity. This allows us to expand and diversify our customer base and has been a clear signal there is growing opportunity in the market. This also suggests strong demand to support ramping up our capacity. If we had more capacity in the second quarter, we probably would have sold more as well. At the same time, we were able to improve the maturity of our platform, which has contributed nicely to increasing our competitive win rate, all of which is continuing on into this quarter.
Neil Doshi: Great. Thanks, Marc. And that was Marc, our new Chief Revenue Officer. A question on EBITDA. Dado, maybe you can take this one. It’s good to see positive adjusted EBITDA for the AI cloud business to come in ahead of expectations. How should we think about adjusted EBITDA for the core business and for the whole group going forward for the remainder of this year?
Dado Alonso: Well, look, we are very pleased to report that our core business reached adjusted EBITDA profitability this quarter ahead of our initial guidance. And looking ahead, we expect the core business to remain positive throughout the rest of the year. At the group level, we anticipate turning adjusted EBITDA positive by the end of the year. However, for the full year, it will remain negative. That said, we expect group adjusted EBITDA to be positive starting next year.
Neil Doshi: Great. Thank you, Dado. And Dado, maybe we’ll stick with you. Analyst Nehal Chokshi from Northland is asking about ARR. So really, as we think about ARR for the year, what are the dynamics around ARR? And can you give any update for ARR this quarter?
Dado Alonso: Sure. Thanks, Nehal. The reality is that we showed strong momentum in Q2, with annualized run rate revenue growing from $249 million in March to $430 million in June. While we are not providing monthly ARR updates, I can say that this positive trajectory has continued into July. Looking ahead to our increased annualized run rate revenue guidance, a significant portion of it is already under contract, which gives us strong visibility. We also see continued strong demand in the market, and as we scale up capacity, we are able to sell it quickly. With additional capacity coming online later this year, we are confident we are on track to deliver on the revised ARR guidance.
Neil Doshi: Great. Thanks, Dado. Dado, maybe staying with you online. It looks like our prior guidance for ARR was $750 million to $1 billion and $400 million to $600 million of core business revenue. We’re now increasing the ARR to $900 million to $1.1 billion, but there’s no change to the revenue guidance. Can you explain why this is?
Dado Alonso: Yes, of course. The increase in our ARR guidance reflects the strong demand we are seeing and the expected delivery of additional GPU capacity later this year, particularly the Blackwell Ultras. Because much of this capacity will come online by the end of the year, the impact will show up more in ARR than in in-year revenue. That timing dynamic is why we are holding our 2025 revenue guidance steady. That said, this late-year ramp will create a strong foundation heading into 2026 and will support meaningful revenue acceleration next year.
Neil Doshi: Great. Thanks, Dado. We have a question from Alex Platt, an analyst from D.A. Davidson, and he’s really asking about the 1 gigawatt. So if we’re getting to 1 gigawatt of contracted power by the end of ’26, how should we think about revenue for next year? And how should we also think about the guidance we gave last quarter for the midterm of getting to mid-single-digit billions of revenue over the next few years? Maybe Marc, do you want to take this?
Marc D. Boroditsky: Certainly. And thank you, Alex, for the question. It’s too early for us to provide ’26 guidance, and we’ll be returning to that question later this year. But for now, we do want to reaffirm our midterm outlook as we are making very good progress towards our goals. As we said in our Q1 earnings call, our base case calls for several billion dollars of revenue in the midterm, which means in the next few years. Our base case also assumes that we grow our capacity to support this type of revenue goal from our ’25 levels. We also said this guidance does not factor in a large deal from like a frontier AI lab or a hyperscaler. Those transactions would be considered incremental to this guidance. I hope everybody is gathering that our ambition is to grow much larger and much faster, and we are laying that foundation with the 1 gigawatt capacity that we’re deploying.
Neil Doshi: Great. Thanks, Marc. The next question is around tariffs. The U.S. is now imposing tariffs on most nations. How does this impact your business and margins? Tom, do you want to take this?
Tom Blackwell: Yes, sure, Neil, happy to. So yes, listen, I mean, I think the question of tariffs, this is obviously something that we’re following closely. I would say that for now, it’s a bit early to say anything definitive, including — based on the latest comments we saw overnight, we’re still looking into this. But I think the key thing is whatever is determined — obviously, this is something that affects all players in our market. And while it’s possible we could potentially see some short-term fluctuations, we’re confident that the market will be able to balance things out going forward. But as we see more, we’ll obviously update.
Neil Doshi: Thank you, Tom. All right. We get this question quite a bit. What is your return on CapEx? And Dado, maybe you can help shed some light here.
Dado Alonso: Certainly. Look, when we price our GPUs, we aim for healthy margins on a per-hour compute basis. For the Hopper generation, we expect to break even in roughly 2 to 3 years at the gross profit level. That includes both the cost of hardware and the associated operational expenses. This estimate doesn’t factor in our higher-margin software and services revenue. As those scale, we see potential to improve the return on invested capital. As for Blackwells, we expect them to be priced at a premium, so it’s still early to comment on specifics at this stage.
Neil Doshi: Great, Dado. All right. Another question from Alex Duval from Goldman, and this is around capacity and time line. So maybe I’ll give this to Andrey. Andrey, can you maybe walk us through the time line for the infrastructure build-out for this year? And how do we get to the 220 megawatts this year? And maybe some incremental color for next year.
Andrey Korolenko: Yes. Sure, Neil. And thanks, Alex, for the question. So we are ramping up our capacity to accelerate our growth for next year and beyond. First of all, we are growing the number of regions where we are present. In the second half of 2025, we are adding the U.K., Israel, a new site in New Jersey and additional capacity in Finland. Finland and New Jersey are our main drivers of capacity this year. Currently, in New Jersey, we have 200 megawatts in the construction phase. A good part of that will be available this year and the rest in the first half of 2026. In Finland, we expect to have an additional 50 megawatts in operation this year, as we discussed earlier.
Neil Doshi: And Andrey, kind of another part of Alex’s question is also just any more details to — for ’26 and some of the greenfield opportunity we talked about. And maybe just lumping that in with an online question, why greenfield versus build-to-suit?
Andrey Korolenko: Sure, Neil. Well, we are in advanced discussions for a couple of new greenfield sites, each one able to deliver hundreds of megawatts of power in 2026, and we hope to announce those soon. Regarding why greenfields versus build-to-suit or colocation options, we have spoken about this a lot. We typically favor greenfields because we can control every aspect of the data center, from design to construction to hardware installation, deployment and phasing. We can actually tailor the phasing according to our demand. For us, it’s cheaper to build than to do build-to-suit, and we are not locked into long-term leases. Also, by controlling the design of the building, starting from how power is brought into the building through the design and installation of our own racks and servers, we can achieve a lower total cost of ownership, probably around 20% less than the market average.
Neil Doshi: Thanks, Andrey. Maybe we can give this question to our native U.K. person. Tom, can you maybe shed some light on our U.K. and Israeli facilities? What do you see there — what do we see there from an opportunity perspective in those markets? And to what extent will we have local infrastructure presence to unlock that opportunity?
Tom Blackwell: Yes. No, absolutely. So I suppose given my accent, I’ll start with the U.K. I think the U.K. looks great. We think it’s a really exciting opportunity there. I mean, obviously, I think everyone knows it’s a massive AI market, definitely the third largest, the biggest outside of the U.S. and China. We’ve been paying quite close attention to what the government has been doing, and they’ve been taking some quite impressive steps to stimulate growth generally in AI, including confirming, I think, GBP 14 billion of private sector investment into AI in the region. So I think probably many of you noticed that about a month ago, we announced our intention to launch our first big facility, a GPU cluster, in the U.K. It’s just outside of London, and we expect that to be coming on stream in roughly early Q4.
So actually, we think we’re going to be the first to deliver B300s to the U.K. market, which we think will be a really interesting opportunity. And just generally, in terms of how we’re looking at the commercial opportunity there: there’s a vibrant market of AI-native start-ups and scale-ups in and around London, and there’s a significant enterprise customer presence as well. What we’ve seen lately is that even a number of the big global tech companies have been setting up regional hubs and regional R&D facilities, which we think will help to also drive the growth of the ecosystem. The other thing I would say is that we’re looking at some specific industry opportunities and creating verticals around them, and one of the most promising that we see right now, among others, is the health care and life science space.
And actually, we have a dedicated health care and life science team that’s led out of the U.K. And in fact, in this particular area — this is an area where we’re working in partnership with Nvidia, and we’ll soon be announcing some initiatives that will be helping sort of life science startups in the sector. So U.K. looks great, and we’re looking forward to being part of that. Likewise, Israel. We think there’s also a big opportunity there to sort of service what we think is really growing demand in the local AI sector. As in the U.K., the government has been doing a reasonable amount to really develop the ecosystem and stimulate demand. And just generally, we see that Israel seems to be emerging as quite a dynamic AI hub globally. So we’ll — again, we’re there.
We’ve mentioned this previously, but we’ll be launching our GPU cluster there with Nvidia, with that coming on stream also in early Q4. And just generally, we’re looking forward to being part of it and tapping into the growth of the AI ecosystem. We think there’s a big opportunity for us there. So we’ll keep you posted.
Neil Doshi: Thank you, Tom. Maybe we’ll go to Dado on this question. How do you plan to finance the capacity expansion for this year and next year? It seems like you’ll have to raise a significant amount of capital to achieve your expansion plans.
Dado Alonso: Sure, Neil. What we have seen is that our business model is working well, and as we bring new capacity online, we are able to sell it efficiently, which reinforces our confidence to continue investing. Given the strength of the market, we see a clear opportunity to scale and expand our infrastructure footprint. We have significant cash on hand, and we’ll approach any additional capital raising opportunistically, depending, of course, on timing and market conditions. At the moment, our focus is on securing land and power and moving quickly to reach our 1 gigawatt target.
Neil Doshi: Great. Thank you, Dado. Maybe, Andrey, you can take this question. You’ve announced some important updates to the software stack. What’s most important for your customers?
Andrey Korolenko: Sure, Neil. Well, our customers who train or run AI models and have AI-related tasks are generally looking for 3 things: speed, reliability and flexibility/convenience. And this quarter, we continued to execute on those things, with improvements also geared towards Blackwell deployment readiness. On speed, we’ve doubled the speed of our network, and that had a direct impact on our MLPerf benchmark results. We also made a big step in improving reliability by increasing mean time between failures. This was due to improvements in our core platform and the deployment of our auto-healing and health-check software, which addresses potential points of failure before nodes actually fail.
We also improved flexibility. We made it easy for anyone using S3 storage to migrate their data and run their AI workloads on our clusters. And this makes it easier for customers to come to Nebius.
Neil Doshi: Great. Thank you, Andrey. Andrey, maybe sticking with you. Nehal from Northland is asking around some of the benchmarks that we’ve talked about this quarter. Can you maybe elaborate a little more on the MLPerf?
Andrey Korolenko: Yes, with pleasure. Thanks, Nehal. This quarter, we submitted MLPerf Training v5.0 results, revealing some quite impressive performance for large-scale training of Llama 3.1 405B, the big 405 billion parameter model. Basically, in the cloud, as we double the size of our cluster, the training speed scales linearly. The most impressive part is that our results are comparable to bare-metal benchmarks, but we accomplished this in the cloud. And for customers, this is important because it’s easier, faster and more cost effective in the end. Yes.
Neil Doshi: Great. Thanks, Andrey. We have a question about our inference-as-a-service platform. Maybe I’ll ask Roman to elaborate. Roman, can you maybe talk about our inference-as-a-service platform? And also, it looks like you’ve transitioned to a new role, so maybe you can also elaborate on your new role and what you’re working on.
Roman Chernin: Yes. Thank you, Neil. First of all, I’m always happy to talk about inference. About my transition: we now have Marc, who is focusing on scaling our go-to-market and sales, and I’m happy to spend time on new initiatives. And of course, we see more and more demand shifting to inference, as does the whole market. The strength of Nebius is that we built a full stack, so now we are developing the next layer of our offering very naturally. We do it to enable AI-centric product builders and enterprises that apply AI in their critical workflows, and we do it with our fully vertically integrated inference-as-a-service product. We are building an enterprise-grade platform to deploy and scale open-weight AI models like Llama, Qwen, Flux, the just-released new open models from OpenAI, and others.
And we focus on high performance and reliability on dedicated infrastructure. Our platform runs on top of Nebius’s proven, scaled infrastructure, and we aim to solve the biggest pain points in production AI: unpredictable latency, GPU bottlenecks and platforms that are not flexible enough to build and scale on.
Neil Doshi: Great. Thanks, Roman. Next question is around some of the new large customer wins like Shopify. Maybe, Marc, you can take this. Were these deals competitive? What are they using Nebius for? And any more color you can provide would be super helpful.
Marc D. Boroditsky: Yes. Thank you, Neil. Probably one of the important highlights that we’re observing is that, as we make our way through the market, we’re seeing real adoption from big customers like Shopify. And I want to add another one to the discussion here: Cloudflare. I’m very excited about these customers. They are leaders in their categories. They are pushing the frontier of using AI to build and deliver great solutions, and I’ve had the privilege of partnering with both of them in the past. Shopify is utilizing Nebius’ AI infrastructure along with Toloka’s training data in order to optimize every step of the merchant’s journey, a very exciting opportunity for us. Likewise, Cloudflare is using Nebius to power inference at the edge, a very important part of their overall offering, as part of their popular Workers AI.
Both relationships are growing, and both are scaling opportunities for us. We’re also seeing similar interest from other major technology companies and leaders in their categories, reinforcing the opportunity overall in the market.
Neil Doshi: Great. Thanks, Marc. Marc, we also have a question about you, it looks like. You’ve been at Nebius for a couple of months now. What have been some of your observations? And what is your strategy for bringing in more long-term contracts and moving the company towards the enterprise market?
Marc D. Boroditsky: I couldn’t be more excited, I have to say, even more so than when I received the opportunity to join the company. This is a very exciting organization. We’ve got great technology, and that’s because we have a world-class team. As you’re hopefully hearing on today’s call, the market is massive and it’s growing quickly. The opportunity for Nebius is to get more structured and methodical with our go-to-market and to continue to build out our coverage to be able to proactively pursue the market opportunity. To that end, we are building out our go-to-market leadership team, including adding a world-class VP of Sales Strategy and Operations, who is actually starting this week. We’re also adding general managers to lead our businesses in the Americas, the Middle East, Asia Pacific and Japan, as well as adding leadership to take on the opportunity around strategic customers and major enterprises.
In tandem, we will continue to expand our overall customer-facing capacity and distribution capabilities. In the short term, we are focused on pursuing the regional markets of AI builders and targeted software vendors and select enterprise segments in order to be able to develop a strong understanding of the use cases that are winning and then a deep understanding of the overall customer journey. Midterm and longer term, we intend to cover the entire global IT market with distribution and sales capacity.
Neil Doshi: Great. Thanks, Marc. We have a question from Alex, our Goldman analyst, around Blackwell demand. And Marc, as we’re bringing on the Blackwells, what does the demand look like for them?
Marc D. Boroditsky: Thank you again, Alex. A very thoughtful set of questions today. Well, first of all, let me just clarify: we continued to see really strong demand for the Hoppers in Q2. As a matter of fact, whenever Hopper capacity becomes available, we’re selling it very quickly. We did bring on the B200s, and we are actively selling through them as well. Pricing trends remain relatively stable for the Hoppers even in the context of Blackwell alternatives, which are actually coming through with a healthy premium, relatively speaking. We’re also seeing interest in the Grace Blackwells that are being deployed later this year.
Neil Doshi: Great. Thanks, Marc. It looks like we’re getting a question on partnerships. It looks like you’ve added a number of partners in Q2 and continue to strengthen your relationship with Nvidia. What partnerships do you think are most meaningful? And how should we measure the success of these partnerships? Maybe I’ll give this to Daniel.
Unidentified Company Representative: Thanks, Neil. This quarter, we made strong progress expanding our reach across the AI ecosystem through several high-impact partnerships. We launched integrations with Mistral, Baseten and SkyPilot, all of which extend the ease of use of our AI cloud and our dedication to developers and model builders by supporting them across their workflows. We also partnered with Lightning AI and Anyscale, extending our presence across both open source tool sets and enterprise users. These collaborations simplify how teams scale and deploy AI workloads using Nebius. And then on the infrastructure side, we expanded our AI cloud portfolio with Nvidia AI Enterprise and became a launch NCP partner for Nvidia DGX Cloud Lepton, further strengthening our position as a high-performance AI platform.
Ultimately, we measure our success through the adoption of our partner platforms, revenue contribution and strategic access to new user segments, all of which we’ve seen trending positively.
Neil Doshi: Great. Thank you, Daniel. A question on utilization. Can you discuss utilization trends in the quarter or even by GPU family? Marc, can you take this one?
Marc D. Boroditsky: Absolutely. As we’ve discussed already, we are investing in and building out our infrastructure. And as we bring on more capacity, we’re selling through it, and we are able to bring on bigger customers who want to get greater capacity. We’re adding more capacity this and next quarter and shifting to selling against future requirements. So ideally, what we’re building is a model where we can close and drive expansion of future capacity and future versions of GPUs.
Neil Doshi: Great. Thank you, Marc. Here’s a question from Andrew Beale, our analyst from Arete, on getting large contracts. So some of your competitors are signing large multiyear deals with hyperscalers. What do you need to do to get one of these deals? And when can we see one of these deals? Arkady?
Arkady Volozh: Yes. As we previously said, of course, we see a lot of this demand coming from the top frontier AI labs, and we actually believe that this will increase in the future. Millions of new GPUs are coming to the market next year and beyond. In order to capture this demand, and actually answering the question of what we need to do: we are doing the main thing. We are increasing our capacity, significantly increasing this capacity. And as I said at the beginning of the call, we are addressing this right now. Going forward, we very much hope to see those big players among our customers, because finally we will have capacity at their scale. And just to remind everyone, all the projections we are making for this year and the midterm do not include these big accounts and those big deals. So if or when they come, it will all be incremental and will be a nice surprise.
Neil Doshi: Thank you, Arkady. A lot of questions on Avride, including from Alex Platt and from Andrew Beale. Maybe Arkady, you can take this. Can you provide an update on Avride? And any update regarding their strategic partnership? And then really around the potential robotaxi launch in Dallas, how is that trending?
Arkady Volozh: Well, needless to say, we’re very excited about Avride as a company and its future, taking into consideration what’s going on in this industry this year. On the future of Avride’s corporate structure, as we have said many times before, we see a structure similar to what we’ve done with Toloka. That’s a good example of the type of partnership we are looking for, where a strong partner comes in to codevelop the business and we actually give up control. In the meantime, the business is performing extremely well. They continue to scale. As you know, they have 2 business lines, delivery and autonomous vehicles. On the first line, the delivery robot side, Avride is expanding its coverage with existing partners.
They are adding new cities, new service areas and new restaurants with Uber. They’re launching on new university campuses in their project with Grubhub. And they’re also entering new verticals. Just recently, they signed a grocery delivery deal with the retailer H-E-B in Texas and an indoor robot operation in Japan that came through a partnership with Mitsui Fudosan. On the autonomous vehicle side, Avride is growing its fleet. As you know, they are partners with Hyundai, they’re expanding their road tests in Dallas, and they’re looking forward to launching their robotaxi service with Uber later this year, under the partnership they signed earlier. So we believe it’s a great business, and we believe that it is a source of significant value for the group.
Neil Doshi: Great. Thank you, Arkady. A few questions around our sources of funding. Last quarter, we talked about potentially tapping into our noncore businesses and equity investments to fund growth of the core business. Any updates that we can share? Tom, would you like to take this?
Tom Blackwell: Yes, I’m happy to catch up on this. First of all, I would maybe just touch briefly on 2 significant equity stakes. So Toloka, which you saw this quarter: we were very pleased that they were able to raise growth capital in a transaction led by Bezos Expeditions and others. They’re doing great things in the complex AI data task world, and their customers include a number of the major AI labs and others. This industry generally is a hot one. Scale AI, which is a comp for them, recently sold about half of the company at a $30 billion valuation. So we think that there is very significant upside to Toloka’s business prospects and valuation.
And what was important for us in that deal was that we retained a significant majority economic interest. So we feel like we have a lot of exposure to the upside as and when we feel it’s the right time to try and tap into that. With regards to ClickHouse, you will have seen it in the news in the last quarter as well. We retain a minority economic interest in this business. The previous valuation was $2 billion, in a transaction in 2022, but in the latest capital raise, there was a reported valuation of around $6 billion. The way that we’re thinking about that stake is that, right now, we still think there’s a lot of value to be created in the business. But if there were to be a liquidity event in the coming years at a significantly higher valuation, then that’s something that we would consider, and we think potentially that could be a source of several billion dollars.
But we’ll obviously see how the business goes. And otherwise, as you know, we have our wholly owned autonomous vehicle business, Avride. I think Arkady has already touched on that, but again, they’re doing really well. In the last quarter, they’ve entered into partnerships with the likes of Uber, Grubhub, Rakuten and others. Waymo is obviously a comp in that sector, and it has been valued at around $40 billion to $50 billion. So this is, we hope, the direction that we can be going in with this business. And actually, D.A. Davidson, our covering analyst there, recently put out a note on Avride setting out some of the value potential, which I refer people to. So look, these are great businesses. We don’t have any immediate need to do anything there.
We think there’s still a lot of value to be created in all of them, and we’ll watch that closely. But we do very much keep this in mind, there’s potential sources of capital that can help us accelerate investment into the core AI infrastructure business.
Neil Doshi: Great. Let’s see. A question on Lepton, NVIDIA Lepton. How is NVIDIA Lepton impacting our business? Maybe Roman, do you want to take this?
Roman Chernin: Yes. Thank you. Actually, from the launch of the Lepton marketplace, we have been one of NVIDIA’s largest partners there. And we see that it generates quite a significant pipeline of customers who start using us via Lepton and then continue directly with us. So in general, we think that this partnership is a very good extension of all the rest of the work we do together with NVIDIA. And this is one of the efforts now to develop the ecosystem partnerships, the channel partnerships and the value-added partners that we mentioned already on this call.
Neil Doshi: Great. Thank you, Roman. And maybe our last question. Europe is ramping up its AI investments. Do you expect to benefit from this maybe through public or private partnerships? Arkady?
Arkady Volozh: In short, the answer is, yes, of course. A bit longer answer is that we’re very well connected in Europe. We came from Europe. We have, and will have even more, data centers in Europe. And I’m sure that we will remain one of the major AI infrastructure builders in Europe. It’s one of our key markets.
Neil Doshi: Thank you, Arkady. All right. I think that’s a wrap for today. Thank you, everyone, for joining, and we appreciate everyone attending our call. We’ll be talking to you all soon. Thanks.