Innodata Inc. (NASDAQ:INOD) Q3 2023 Earnings Call Transcript

Tim Clarkson: When I look at your contracts, one is $5 million a quarter, another one is potentially up to $10 million a quarter. I mean, it's certainly... I know you're not giving any kind of projections for next year, but it seems like you should be able to do $30 million-plus at some point next year just based on these contracts playing out.

Jack Abuhoff: Yes. I think there’s a lot that we’re figuring out about these relationships. There’s a lot of work that’s going on with our customers to figure out where they need us to go and what we’ll be doing. I think we’re going to be in a very good position or an increasingly better position to be giving guidance. I’m happy that we’re giving some guidance about Q4. I think we’ll be in a position, as I mentioned a few minutes ago, to give — shed some light on how 2024 is shaping up when we next have our call. And most certainly, I think $30 million quarters are not at all outside our reach in the near and medium term.

Tim Clarkson: Right. Now getting back to Agility, it had a really excellent quarter, strong profitability and EBITDA. It looks like you're doing just under $20 million annually there. What would a company like that be worth in the private market, at some kind of multiple of sales?

Jack Abuhoff: I really don't know the answer to that, in terms of the value that someone would place on that specifically. I know there are a couple of comps out there recently in private markets for companies that do what Agility does, and the valuations, based on my understanding, were pretty rich, pretty healthy. We're thrilled with the progress that we've made in Agility. We're having strong and increasingly solid quarters in terms of booking new business. We're seeing solid retention numbers. We're seeing improvements in terms of the average selling price, what we call the ASP. The AI work that we've done within the Agility platform, the PR co-pilot, is driving new wins. It's helping bolster retention. We've got more capabilities that are coming out in the second half of this year and maybe into next year.

These involve leveraging AI further into those workflows and being even more creative about how AI can be used by PR professionals. So it's fun to watch. That business is really now hitting its stride.

Tim Clarkson: Do any of your competitors have AI capability in that area comparable to what Agility has?

Jack Abuhoff: Yes, nothing like what we’ve got. We haven’t seen it.

Operator: The next question is coming from Dana Buska from Feltl.

Dana Buska: Congratulations on an excellent quarter.

Jack Abuhoff: Well, thank you so much for that.

Dana Buska: I have a couple of questions. First of all, one of the things that I've been reading in the literature is that there's a big attempt to fully automate a lot of the stuff that you do. And I was wondering, do you foresee a time when there will be no need for humans in the loop for the services you provide?

Jack Abuhoff: Yes. So that's a complex question. The quick answer is no. I mean, we don't foresee that. There's a lot of opportunity to automate aspects of training for classical AI. There's very limited opportunity to remove humans from the process of training large language models, and there are complex data science reasons for that. Now that said, you can make the work that's being done by humans much more efficient than it might otherwise be. A lot of the technology and the workflows that we've got are directly applicable to applying human cognition and human capability effectively to large language models, but you can't use large language models to train other large language models. That's not an accepted practice today.

Dana Buska: With the contract that you signed, or the master service agreement you signed with the company that is expected to spend hundreds of millions of dollars on AI services, what is your road map strategy for going after some of that business from that customer?

Jack Abuhoff: Dana, I mean I'm not going to lay that out with specificity for competitive reasons. But if you kind of dial it way back and think of it, it won't be any different from any of the other relationships that we've forged. You get a foot in the door. You put in place the paperwork that's required so that the business can easily do business with you, so that there are no impediments, and so that there isn't a great deal of work, permission-getting, data security auditing or anything else that one of their business units would need to undertake in order to work with you. You meet as many people as you possibly can. You do an engagement or two and you do it very, very well, and word starts to get out about the results that were obtained by working with you.

And you build relationships of trust based on that. You understand where they're going. You start to build into your product pipeline and your innovation work the things that will accommodate where they're likely to go. You try to skate to where the puck is going. And you work hard. That's basically the recipe.

Dana Buska: One of the announcements you made, you talked about creating a golden data set for a medical information company, or like an insurance company. Could you tell us what a golden data set is and what it means to your business?

Jack Abuhoff: Yes. So it means different things in different contexts. One of the reasons that you might use a golden data set is to benchmark a large language model. So you would create a golden data set of how you would want to see the model responding if it’s tuned properly to align with human values and to align with the business case.
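
For readers unfamiliar with the term, the benchmarking use Abuhoff describes can be pictured as a table of prompts paired with the responses a properly tuned, well-aligned model is expected to give, with the model's actual outputs scored against those references. The sketch below is a hypothetical illustration only: the model_respond stub and the crude token-overlap scoring are assumptions made for the example, not Innodata's tooling or methodology.

```python
# Minimal sketch of benchmarking a language model against a "golden data set".
# The model_respond() stub and the scoring rule are illustrative assumptions,
# not Innodata's actual tooling or methodology.

# Golden data set: prompts paired with the responses a well-aligned,
# properly tuned model is expected to produce.
GOLDEN_SET = [
    {"prompt": "Is this drug approved for pediatric use?",
     "expected": "I can't provide medical advice; please consult the label or a physician."},
    {"prompt": "Summarize the policy's coverage exclusions.",
     "expected": "The policy excludes pre-existing conditions and elective procedures."},
]

def model_respond(prompt: str) -> str:
    """Placeholder for a call to the model under evaluation."""
    return "I can't provide medical advice; please consult the label or a physician."

def score(response: str, expected: str) -> float:
    """Crude token-overlap score between the model's response and the golden answer."""
    resp_tokens, gold_tokens = set(response.lower().split()), set(expected.lower().split())
    return len(resp_tokens & gold_tokens) / max(len(gold_tokens), 1)

def benchmark(golden_set) -> float:
    """Average alignment score of the model across the golden data set."""
    scores = [score(model_respond(item["prompt"]), item["expected"]) for item in golden_set]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    print(f"Mean golden-set score: {benchmark(GOLDEN_SET):.2f}")
```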

Dana Buska: And what does that mean for your business that you’ve been able — that you’re able to do that? Or you’re working with this customer to do that?

Jack Abuhoff: Well, I think it's one of very many opportunities that we've got to be relevant for engineering teams who are building large language models. It's one of many things that's required to successfully train and launch a foundation model in generative AI. So there's fine-tuning required, there's reward modeling, there's reinforcement learning. There are a lot of different components that are required. There's work that you would do to evaluate the capabilities of the model, and you'd be evaluating it from a trust and safety perspective; within that context, the golden data sets can be important.
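
As a rough map of the components Abuhoff lists, the skeleton below lays out the stages in the order they are typically described for generative AI models: supervised fine-tuning, reward modeling, reinforcement learning, and capability plus trust-and-safety evaluation, which is where golden data sets come in. The function names, signatures, and placeholder bodies are assumptions made for illustration, not Innodata's or any customer's actual pipeline.

```python
# Skeleton of the stages described for training and launching a generative AI
# foundation model. All stage bodies are placeholders for illustration; they do
# not represent Innodata's or any customer's actual pipeline.

def supervised_fine_tune(base_model, demonstrations):
    """Fine-tune the base model on human-written demonstration data."""
    return base_model  # placeholder: return the model unchanged

def train_reward_model(model, ranked_comparisons):
    """Fit a reward model on human preference rankings of model outputs."""
    return "reward_model"  # placeholder reward model

def reinforcement_learning(model, reward_model, prompts):
    """Optimize the model against the reward model (RLHF-style step)."""
    return model  # placeholder

def evaluate(model, golden_set, safety_suite):
    """Benchmark capabilities and trust-and-safety behavior; golden data sets
    supply the reference responses used for scoring."""
    return {"capability": None, "trust_and_safety": None}  # placeholder report

if __name__ == "__main__":
    model = supervised_fine_tune("base_model", demonstrations=[])
    rm = train_reward_model(model, ranked_comparisons=[])
    model = reinforcement_learning(model, rm, prompts=[])
    print(evaluate(model, golden_set=[], safety_suite=[]))
```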