Innodata Inc. (NASDAQ:INOD) Q1 2025 Earnings Call Transcript May 8, 2025
Innodata Inc. beats earnings expectations. Reported EPS is $0.22, expectations were $0.17.
Operator: Good afternoon, ladies and gentlemen, and welcome to the Innodata First Quarter 2025 Results Conference Call. At this time, all lines are in listen-only mode. Following the presentation, we will conduct a question-and-answer session. [Operator Instructions] This call is being recorded on Thursday, May 08, 2025. I would now like to turn the conference over to Amy Agress, General Counsel at Innodata Inc. Please go ahead.
Amy Agress: Thank you, Lovely. Good afternoon, everyone. Thank you for joining us today. Our speakers today are Jack Abuhoff, CEO of Innodata, and Marissa Espineli, Interim CFO. Also on the call today is Aneesh Pendharkar, Senior Vice President, Finance and Corporate Development. We’ll hear from Jack first, who will provide perspective about the business, and then Marissa will follow with a review of our results for the first quarter. We’ll then take questions from analysts. Before we get started, I’d like to remind everyone that during this call, we will be making forward-looking statements, which are predictions, projections or other statements about future events. These statements are based on current expectations, assumptions and estimates and are subject to risks and uncertainties.
Actual results could differ materially from those contemplated by these forward-looking statements. Factors that could cause these results to differ materially are set forth in today’s earnings press release in the Risk Factors section of our Form 10-K, Form 10-Q and other reports and filings with the Securities and Exchange Commission. We undertake no obligation to update forward-looking information. In addition, during this call, we may discuss certain non-GAAP financial measures. In our earnings release filed with the SEC today, as well as in our other SEC filings, which are posted on our website, you will find additional disclosures regarding these non-GAAP financial measures, including reconciliations of these measures with comparable GAAP measures.
Thank you. I will now turn the call over to Jack.
Jack Abuhoff: Thank you, Amy, and good afternoon, everyone. Our Q1 2025 revenue was $58.3 million, a year-over-year increase of 120%. Our adjusted EBITDA for the quarter was $12.7 million, or 22% of revenue, a 236% year-over-year increase. We finished the quarter with $56.6 million of cash, which is a $9.7 million increase from last quarter. Our $30 million credit facility remains undrawn. We’re pleased with our financial results this quarter, which, by the way, came in ahead of analyst revenue estimates. But what’s even more exciting is the meaningful progress we’ve made on our strategic growth initiatives, much of it in just the past few weeks. I’d like to take this opportunity to walk you through the progress we’re making across four of our most dynamic solution areas, highlighting how we’re aligning with evolving customer needs and how these efforts are driving both new customer wins and meaningful account expansions.
Let’s first look at the work we do collecting and creating generative AI training data. We are very focused on building progressively more robust capabilities to feed the progressively more complex data requirements of large language models as they advance toward artificial general intelligence, or AGI, and eventually, artificial superintelligence, or ASI. We have made and we continue to make investments toward expanding the diversity of expert domains like math and chemistry for which we create LLM training data and perform reinforcement learning, while also investing in expanding languages like Arabic and French within these domains and creating the kind of data required to train even more complex reasoning models that can solve difficult multi-step problems within these domains.
We’re also developing progressively more robust capabilities to collect pretraining data at scale. The advancements that we have made and continue to make and the investments we have made and continue to make have enabled us to gain traction with both existing customers and potential new customers. I’ll take potential new customers first. We’re in the process of being onboarded by a number of potentially significant customers. I’m going to share four of them with you now. The first is a global powerhouse building mission-critical systems that power everything from multinational finance and telecommunications to government operations and cloud infrastructure. It is integrating large language models and AI across its cloud infrastructure and enterprise applications to enhance automation, productivity and decision making, and also embeds generative AI directly into horizontal and vertical applications.
The second is a cloud software company that has revolutionized the way businesses manage customer relationships. It is leveraging large language models and AI to enhance customer relationship management and enterprise operations and is taking a leadership position in launching agentic AI capabilities to autonomously handle complex enterprise tasks. The third is a Chinese technology conglomerate that operates one of the world’s largest digital commerce ecosystems. It has built its own family of LLM models incorporating hybrid reasoning capabilities and supporting multiple modalities, including text, image, audio and video. Its models are widely used for a variety of horizontal applications as well as industry-specific applications.
And the fourth is a global healthcare company that is a leader in advanced medical imaging, diagnostics, and digital health solutions. It is actively integrating LLMs and AI to enhance diagnostics, streamline clinical workflows, and improve patient outcomes, developing foundation models capable of processing multimodal data including medical images, records, and reports. Now when it comes to existing customers, we’re seeing major expansion opportunities, some we’ve already won and others we expect to win in the near term. I’ll share a few examples to illustrate the kind of traction we are now seeing. I’ll start with three of our big tech customers, which until recently were relatively small accounts for us, but which are now showing signs of meaningful expansion.
I’ll also touch on the continued strong momentum we’re seeing within our largest customer. The first example is a customer we started working with in the second quarter of last year. Now in 2024, we recognized only about $400,000 of revenue from them. But today, by contrast, we have late-stage pipeline that we value as having the potential to result in more than $25 million of bookings this year and continued growth over the next several years. This customer is one of the most valuable software companies in the world. The problem we are helping them solve is that their generative AI, both text and image, has not been doing a good job handling very specific, detailed and complex problems. They’ve shared with us that improving on these fronts was critical in order to improve product experience and provide a foundation for multimodal reasoning and agentic models of the future.
So here’s a great example of an investment we are making that has specifically resulted in traction with this customer. We developed an innovative data generation pipeline that enables domain experts to create detailed hierarchical content labels across modalities, while continuing to evolve the underlying taxonomies. Our approach supports multiple types of gen AI workflows, including detailed descriptions, reverse prompting and highly specific evaluations. The second example of an existing customer with which we’re seeing major expansion is a big tech company with which we had just $200,000 of revenue last year. But again, by contrast, today we are actively collaborating with them, resulting in two new wins in Q2 to-date, one signed and one we believe is about to be signed, that we value at approximately $1.3 million of potential revenue.
We also have another opportunity with them that we value at about $6 million of potential revenue. It’s in the pipeline, and I’ll talk about that more in a few minutes. The third example is a big tech hyperscaler with extensive generative AI capabilities across both its consumer and enterprise businesses, where it offers foundational models together with custom silicon optimized for AI workloads. We believe we will soon be engaged by it to support pretraining data collection for very specific specialized models. We’ll also talk in a few minutes about additional expansion that we’re driving at this account in terms of model safety and evaluation. The fourth example is one of the most highly regarded generative AI labs. We just signed a new data collection deal with them that we value at approximately $900,000 of potential revenue, and we’re discussing an expansion that could potentially double that.
Pretraining data collection in the form of curated text corpora, as well as multimodal datasets remains a cornerstone for big tech companies racing to build next generation LLMs. As models grow more sophisticated, their performance hinges not just on raw computational power, but also on the breadth, depth and quality of the data they are trained on. Continuous data acquisition enables the models to better understand nuance, context, and intent across languages and domains. We believe that each of the companies I just mentioned is likely budgeting several hundred million dollars per year on generative AI data and model evaluation. So, the traction we are now seeing is super exciting and is very much the result that we have been working toward under our business plan.
Lastly, we also see expansion opportunities with our largest customer. Literally just this morning, we signed a second master SOW with our largest customer that we anticipate will enable us to deliver gen AI services funded from a distinct budget category within the customer’s organization, separate from the budget that supports our existing engagements. We believe this new budget to be materially larger. Now to prepare ourselves to deliver services under this new SOW, we are making investments in customizing our proprietary LLM data annotation platforms specifically for the work that will be required under this new SOW, and we are building some additional service support capabilities. Another major area of strategic focus for us is building agentic AI solutions for our big tech customers, as well as our enterprise customers.
With one of our smaller big tech relationships, one that I discussed a few minutes ago, we have begun a collaboration around both AI agent data set creation and AI agent building. The work we are hoping to kick off with them this quarter will involve creating approximately 200 conversational and autonomous agents across multiple domains. The work involves defining use cases, developing synthetic knowledge corpora, generating demonstration datasets, building and debugging agents, and then managing agent orchestration. We believe this opportunity has the potential to be worth approximately $6 million to start. We believe agent-based AI is going to serve as the cornerstone technology that unlocks the full value of large language models and generative AI for enterprises, transforming them from powerful but isolated tools into autonomous goal driven systems that can reason, take action, and drive measurable business outcomes at scale.
Agentic AI refers to artificial intelligence systems that can autonomously initiate and carry out complex tasks in pursuit of specified goals with minimal ongoing human input. These systems go beyond reactive execution. They exhibit goal oriented behavior. They make decisions. They adapt to changing contexts, and they even take initiative to achieve outcomes. In contrast to traditional AI, which typically responds to prompts or instructions, Agentic AI is designed to operate with a degree of independence, managing multistep processes, reasoning through uncertainty, and dynamically adjusting actions based on feedback. It represents a shift from AI as a tool to AI as a collaborator, one that can understand objectives, plan strategically, and act accordingly.
Now on the subject of unlocking value for enterprises, in the last several months, we have won engagements that we value at approximately $1.6 million helping one of the world’s largest social media companies integrate gen AI into their engineering operations. We are in active discussions about expanding this successful effort to other business units within the customer as well. We are providing integration services, prompt engineering, program management and on-site consulting for implementing generative AI. So far, we have automated five workflows, which we estimate will help our customer generate approximately $6 million in cost savings. The plan is to automate about 60% of 90 identified workflows by the end of 2025, and for this to result in at least $10 million of additional savings for this customer this year, while providing additional benefits in terms of reduced friction and increased development velocity, as the engineering team can more rapidly prototype, test and refine solutions.
We are also in advanced discussions with several other companies about helping them use generative AI to enhance both products and operations. Now, we’ve discussed how our investments and expanded capabilities in LLM training data creation and agentic AI are fueling a surge in customer engagement. We’re seeing that same momentum carry over into our work in generative AI trust and safety, marking a significant expansion of our presence in a fast-growing, mission-critical segment of the market. We are pleased to announce that we have won expanded engagements to provide trust and safety evaluations for one of our existing big tech customers, again, not our largest customer, but one of the smaller relationships that’s now successfully expanding. The engagements together have a potential value of approximately $4.5 million of what we believe will be annual recurring revenue.
We just started ramping the engagements up a couple of weeks ago. We anticipate working across several of their divisions, spanning English, Spanish, German and Japanese languages. We anticipate providing ongoing testing of both their public models as well as their beta models that they have not yet launched. Under these engagements, we anticipate testing both generic models and domain-specific models as well. For example, we might help ensure that a model trained to assist chemists and nuclear scientists refuses to provide advice on how to build a bomb or create crystal meth. Again, our willingness and insight to make investments proved critical in enabling us to capture this opportunity. We bolstered our proprietary trust and evaluation platform with some innovative features that our customer found compelling.
Just last week, the customer completed security reviews of our platform, enabling us to start work this week. We believe there is a near term potential to expand further our trust and safety work with this customer. We intend to be running paid pilots for other trust and safety workflows over the next few months. And to support this opportunity, we’ve invested in methodologies for predicting emerging areas of user interaction with advanced language models, enabling us potentially to proactively surface and address high risk topics for trust and safety assessment. We recently demonstrated this capability to our customer, who responded with strong enthusiasm. Notably, part of these engagements involves evaluating LLMs embedded in physical devices and robotics, with which our teams will be working directly in our customers’ labs to test performance at the hardware level.
With another enterprise customer, one that I mentioned earlier, we have been shortlisted as lead vendor for a multiyear program aimed at evaluating the customer’s generative AI foundation models for potential harms, bias and robustness. We anticipate the annual recurring revenue of this engagement, if won, to be approximately $3.3 million. We are currently conducting proofs of concept that encompass adversarial testing, model probing, and early-stage fine-tuning pipelines. The proposed production scope includes comprehensive red teaming, implementation of guardrails, and rigorous evaluation of model behavior across text, image, video, and audio outputs. In the first quarter, we introduced our generative AI test and evaluation platform at NVIDIA’s GTC 2025.
This enterprise grade solution is designed to assess the integrity, reliability, and performance of large language models across the full development lifecycle, from pre-deployment refinement to post-deployment monitoring, enabling both internal operational use cases and external customer facing applications. MasterClass served as our inaugural charter customer, and we are now in active discussions with several additional high-profile enterprises with diverse generative AI deployments. In addition, we are in active discussions with one of the world’s leading global consulting firms, regarding a potential go-to-market partnership that would position them as a strategic distribution and implementation channel for our platform. From a competitive differentiation standpoint, the platform encapsulates a range of advanced techniques developed through our ongoing services engagements with leading big tech customers.
These capabilities are now productized into an autonomous system that allows enterprises to benchmark, evaluate, and continuously monitor their agents and foundation models. The platform supports evaluation against high-quality standardized benchmarks across key safety dimensions, including hallucination, bias, factual accuracy, and brand alignment, while also enabling customization through client-specific safety vectors and proprietary evaluation criteria. A key feature of the platform is its continuous attack agent, which autonomously generates thousands of adversarial prompts and conversational probes to uncover vulnerabilities in real time. Detected issues are flagged for review, allowing customers to take swift remedial action. Recommended mitigation strategies may include tailored system message design and the generation of supplemental fine-tuning datasets.
The platform is currently available through an early access program for enterprise customers with general availability targeted for late Q2. Trust and safety evaluation is critical at both the development and production stages of large language models. During development, rigorous testing including adversarial red teaming is essential to uncover vulnerabilities, biases, and harmful behaviors before models are deployed. This proactive approach enables developers to build safety guards into the model architecture and fine tuning processes. In production, continuous evaluation ensures that the models remain aligned with safety standards as they interact with real users and evolving contexts. Together, these measures are vital for ensuring that LLMs operate responsibly, mitigate risk, and maintain user trust at scale.
We believe the rapid adoption of agentic and multi-agent systems will push us to a new phase of complexity when it comes to trust and safety. In their most recent quarterly earnings reports, the Magnificent Seven technology companies, Apple, Microsoft, Amazon, Alphabet, Meta, NVIDIA, and Tesla, have each underscored their commitments to generative AI investment, viewing it as a pivotal component of their future growth strategies. Microsoft has announced plans to invest approximately $80 billion in AI infrastructure during fiscal 2025, aiming to build data centers designed to handle artificial intelligence workloads. Meta has raised its capital expenditure guidance to $64 billion to $72 billion for 2025, reflecting increased investment in AI infrastructure, including the development of new AI tools such as Llama 4 and a standalone AI assistant app.
Amazon is expanding its AI capabilities, particularly within its cloud computing division, AWS. In his annual letter to shareholders, the Amazon CEO emphasized the company’s aggressive investment in AI, writing, quote, we continue to believe AI is a once-in-a-lifetime reinvention of everything that we’ve done. Alphabet, meanwhile, reported a 20% increase in operating income and a 46% rise in net income in Q1 2025, attributing this growth to its unique full-stack approach to AI, which encompasses infrastructure, models and applications. Given this sentiment and the significance of the Magnificent Seven and other large global technology companies to our revenue stream, we do not believe that short-term business cycles or trade policies have much of an impact on our business prospects.
It is worth noting how bullish sophisticated venture capital investors are on our sector. Our largest direct competitor is reported to be close to finalizing a secondary stock sale valuing the company at $25 billion, a multiple of 29x last year’s reported revenue of $870 million, which came with a reported EBITDA loss of $150 million. Today, we are reaffirming our full-year revenue growth guidance of 40% or greater. As the breadth of activity across our business illustrates, we believe the current momentum positions us well for continued strong performance. I want to say something about how we intend to manage the business over the next couple of years. Our intention is to embrace growth from both the broadening customer footprint and our largest customer.
I’ve shared with you today how we are achieving significant success with a diversity of large customers that we believe could become material contributors over the coming fiscal periods. At the same time, we also see significant growth potential with our largest customer. We believe this customer will continue to expand its overall relationship with us, and we are deeply aligned with its long-term roadmap. Given that we intend to drive growth from this broadening customer footprint and our largest customer at the same time, we intend to embrace customer concentration as a natural part of our evolution. Many leading technology companies have seen similar patterns: an early period of customer concentration followed by broad-based growth as the value proposition matures and adoption scales.
We believe we are following that same path and remain confident in our ability to continue executing with discipline, while building a durable, diversified revenue engine. Inevitably, customer concentration can result in quarter-to-quarter volatility. For example, with our largest customer, we exited 2024 at an annualized revenue run rate of approximately $135 million. In Q1, we were running higher than this by about 5%, and in Q2, we anticipate that we could be lower by about 5%, but the customer’s demand signals are updated continually and are highly dynamic. Going forward, we do not intend to provide granular updates at a customer level. Our 2025 financial plan reflects our conviction in the scale of the opportunity ahead. We believe we are well-positioned to drive business with an increasingly diverse group of leading big tech companies and enterprises and become a market leader in one of the most transformative technology cycles in decades.
Accordingly, we intend to reinvest a meaningful portion of our operating cash flow into product innovation, go-to-market expansion and talent acquisition, while still delivering adjusted EBITDA above our 2024 results. This too is an intentional strategy aimed at capturing long-term value in a rapidly growing and strategically important market. I’ll now turn the call over to Marissa to go over the financial results, after which Marissa, Aneesh and I will be available to take questions from analysts.
Marissa Espineli: Thank you, Jack, and good afternoon, everyone. Revenue for Q1 2025 reached $58.3 million, representing a year-over-year increase of 120% and demonstrating strong momentum to start the year. Adjusted gross margin was 43% for the quarter, up from 41% in Q1 of last year. As we’ve discussed previously, we target an adjusted gross margin of around 40%, so we’re pleased to have exceeded that benchmark to begin the year. Our adjusted EBITDA for Q1 2025 was $12.7 million, or 22% of revenue, compared to $3.8 million in the same quarter last year. Net income was $7.8 million in the first quarter, up from $1 million in the same period last year. We were able to utilize the benefits of accumulated net operating losses, or NOLs, in Q1 to partially offset our tax provision.
Looking ahead, barring any changes in the tax environment, we expect our tax rate in the coming quarters to be approximately 29%. Our cash position at the end of Q1 2025 was $56.6 million, up from $46.9 million at the end of Q4 2024 and $19 million at the end of Q1 2024, reflecting strong profitability and disciplined cash management. We still have not drawn on our $30 million Wells Fargo credit facility. The amount drawable under this facility at any point in time is determined based on a borrowing base formula. We’ve been actively engaged in investor relations activity over the past year and expect to build on that momentum in the months ahead. We’ll be participating in several upcoming investor conferences and non-deal roadshows to continue to increase awareness and deepen relationships with institutional investors.
Looking ahead, as Jack mentioned, we’re planning targeted investments to expand our capabilities. This includes continued investment in technology to support both current and prospective customers in their AI journey, as well as increasing strategic hiring in sales and solutioning to drive long-term growth. In Q2, we plan to invest approximately $2 million to support a new statement of work and related programs with our largest customer, as Jack noted earlier. We expect that this investment will occur ahead of the associated revenue and is expected to temporarily impact margins in that quarter. We view this as a strategic investment that helps position us to meet customers’ evolving needs and to build on the land-and-expand success we’ve already achieved with them.
As always, we’ll remain disciplined in managing our cash and expenses, while continuing to invest where we see strong return potential and meaningful long-term value for shareholders. That’s all from my end and thanks everyone. Lovely, we’re ready to take questions.
Q&A Session
Operator: Thank you. Ladies and gentlemen, we will now begin the question-and-answer session. [Operator Instructions] Your first question comes from the line of George Sutton from Craig Hallum. Your line is now open. Please go ahead.
George Sutton: Thank you and thanks for all the detail on the pipeline. So, Jack, I wonder if you could walk through this statement of work with your largest customer. If I understood correctly, you suggested it could be larger and then you sort of cut off from there. So what would larger be relative to the statement of work opportunity?
Jack Abuhoff: Sure. Well, thank you for the question. The statement of work will enable our customer to start using us in basically what you can think of as another division or another area of their gen AI spend. Up until now, we haven’t been supporting that area. With this new SOW in place, we expect to be. What’s notable about that is that the budget associated with that new area, we believe, is significantly higher than the budget that has been supporting all of our programs to date. So, we’re very excited about that.
George Sutton: And obviously what you’re suggesting in terms of the sequential revenues from this customer in Q2, which could be down 5%, that is completely separate from the statement of work that would be meaningfully in addition to, correct?
Jack Abuhoff: I think the requirements of this customer are very dynamic. We learn new things multiple times in the course of even a week. So what we’re going to do is bake all of that into the guidance that we’re providing. I just want to make sure that as people see that we were up in Q1, you don’t necessarily take that and assume that that’s like a new threshold. We think in any quarter, there are projects that end and there are new projects that start. So, in terms of the new SOW, there’s clearly a greater amount of work that we can do. There’s additional share of wallet that we can tap into. And I think that bodes very well for continuing to grow that account over the long term.
George Sutton: So, you walked through a number of different customer opportunities and gave a good sense of the size of the opportunities. Can you give us a sense of what you’re finding your win rate to be, when you go into these opportunities?
Jack Abuhoff: Yes. Win rate is a hard thing to track. For us, the most important thing is to get into a customer, start doing small requirements, build trust, execute very, very well and then expand from there. I think the things that — how do I put this. The things that we end up piloting that proceed, that move forward with customers, and some things don’t, some things are experimental, but the things that move forward, we win a very large percentage of those, I believe. So, we’re, what’s so exciting about what’s going on now in the business is, for a while, we’ve been talking about these multiple big tech customers and multiple magnificent seven customers that we’ve got, but some of them have been small. There are a couple of them, only $200,000 of revenue last year, $400,000 of revenue last year.
And our plan was to, you know, get in there, prove ourselves, build trust and then tap into the hundreds of millions of dollars that are being spent on data engineering. What we’re seeing now, and the reason we’re just so excited by all of this, is that our plan is coming together. We’re doing exactly what we planned to do and we’re super excited about that.
Operator: Your next question comes from the line of Allen Klee from Maxim Group. Your line is now open. Please go ahead.
Allen Klee: George actually asked my questions, but I’ll try to come up with some others. Can you remind us what, you said some stuff last quarter related to the size of the largest customer. I’m just trying to think about the impact of them being down 5%. I think last quarter you said that with a win of $24 million of annual, they’re at around $135 million in revenue. I guess that means annually. But that annual run rate grew 5% this quarter, but then will be down 5% next quarter. Is that the way to think of it or am I mixing things up?
Jack Abuhoff: Yes. So, I think the way we’re encouraging everyone to think about it is to understand how we intend to manage the Company. We believe we have an incredible opportunity to drive growth through many of these broadening customer engagements that I described. We think that will be one source of growth, a very important source, obviously. But at the same time, we don’t intend to take our foot off the accelerator with respect to driving considerable growth, hopefully, at our largest customer. That’s a choice that we’re making. We don’t have to do that. We could choose to lower revenue concentration as an alternative and cut back on that. But we’re very plugged in with that customer. We’ve got a great relationship. We’re very tied into the things that we’re doing, and that’s a choice we’re making.
We intend to grow that as well. We believe there’s lots of precedent in the market for embracing customer concentration as a natural part of that evolution. We’ve seen that at many tech companies over the years and recently at Snowflake and Datadog, even Palantir. We think over time as we execute this plan, it will result in broad based growth. In the interim, there will be quarter to quarter volatility. Specifically, what you’re referring to, I think, is last year in Q4, we announced that we had $135 million annualized revenue run rate with that customer. But the fact is, in Q1, we exceeded that. In Q2, we might be lower than that, but we don’t really know because of how dynamic the demand signals are and how their needs are changing all of the time.
We think that that quarter-to-quarter volatility, some quarters up, some quarters down, is going to be a natural part of what to expect by virtue of the strategic choice we’re making to drive growth across the board with our customer base and to respond to the tremendous growth opportunities that we see.
Allen Klee: That’s great. I guess, just in general for training, there’s no change. It sounds like you have a ton of opportunities and it’s expanding. And safety sounded really interesting because that gives you a recurring revenue. Is there just — is there a way that people are spending money more differently with you now than maybe they had a year ago?
Jack Abuhoff: I think what we’re seeing across our roster of big tech companies is a willingness and a desire to do more with us. And I think we earned our way and built trust over time. It takes a while to do it, but we’re now seeing the fruition of those efforts, which again is super exciting. Trust and safety is, obviously, it’s a huge opportunity. It’s an opportunity that becomes you know, even richer and more interesting as we think about the complexity and richness of technology ecosystems that are populated by agents potentially. Well, each one of those agents will need to be measured and monitored and assessed. So, it’s a huge opportunity for us. And the competitive advantage we think we have is we’ve been doing trust and safety work now for several of our large customers, and that’s expanding.
And we’ve developed techniques that we can take to test models and evaluate models, and we can build those techniques into a platform. And we can build that platform in a way that runs on an automated, continuous basis, providing real-time feedback to people. So, yes, we’re super excited about that. I think you’re right. You point out a very important point. We believe that these opportunities, on the services side and hopefully on the platform side as well, will be recurring. We think that there’ll be an ongoing need for that as the technology becomes more pervasive within our customers’ ecosystems, for sure.
Operator: Your next question comes from the line of Hamed Khorsand from BWS Financial. Your line is now open. Please go ahead.
Hamed Khorsand: Hi. I was just wondering, last quarter, you gave this guidance of 40% growth. How much of these projects that you’re talking about today were you expecting within that guidance?
Jack Abuhoff: Yes. When we give guidance, we try to take a long view. We look at what’s booked. We take a conservative view of what we think expansions can be. On our largest accounts, we estimate based on demand signals from the customer. Those are highly dynamic, as we’ve said. Of course, our predictions will only be perfect in retrospect. We know that. But we try to do our best, and we try to be reasonably conservative, as we make those predictions because we don’t want to be wrong. We’d much rather surprise on the upside. So, some of the accounts that I’m describing, especially the new relationships, they’re not baked in at all. There’s so much that we don’t know about from how long will it take, what will it look like. And so, again, we had an abundance of caution and a desire not to be wrong. We don’t as a practice, at least we try not to get ahead of our skis on any of that.
Hamed Khorsand: Yes. So, that’s where I was getting to is that, you have these, all these projects on all as far as revenue opportunity. Why keep the guidance at 40% growth? What makes you nervous that you can’t grow faster?
Jack Abuhoff: It’s a little bit like this. If we take some of the new customers that we’re now onboarding, we don’t necessarily know, even though these are exciting customers and we think there are huge opportunities. We don’t necessarily know how fast we are going to be able to convert that into opportunity. Are they going to want to work with us for a while before really opening the spigot, or is the spigot going to open more quickly? We’ve seen customers where they move fast. We’ve seen customers that require getting to know us for a while. With our largest customers, as I said, the demand is very dynamic. It changes very quickly. So, again, what we’re looking to do is be conservative. We’re looking to not be wrong in our growth estimates. And if we can meaningfully surprise on the upside like we did last year, then we prefer to err in that direction.
Operator: There are no further questions at this time. Please continue, Mr. Jack Abuhoff.
Jack Abuhoff: Thank you. So, Q1 was a great quarter with 120% year-over-year growth. And while revenue was flat sequentially, we want to be crystal clear: we believe our business right now is on fire. The growth we’re seeing year-over-year is just the beginning. What’s happening now inside the Company is really unlike anything we’ve seen before. We’re winning major new customers. We’re expanding existing relationships into entirely new budget categories. We’re building a pipeline that’s deeper and more advanced than at any point in our history. It feels like the engine is fully lit and we’re accelerating down the runway. Our teams are energized. Our customers are leaning in and our conviction in what comes next has never been stronger.
So, thanks everyone for joining us today, for being part of our journey. My executive team and I are all in on building Innodata into one of the defining AI solutions companies of the era. And we’re excited to keep sharing our progress with you as the year unfolds. We’re focused on delivering long term value for shareholders. And with the momentum that we’re seeing, we’ve never been more confident about what lies ahead. Thank you.
Operator: Ladies and gentlemen, this concludes today’s conference call. Thank you for your participation. You may now disconnect.