Agora, Inc. (NASDAQ:API) Q3 2023 Earnings Call Transcript November 22, 2023

Operator: Good day, and thank you for standing by. Welcome to the Agora, Inc. Third Quarter 2023 Financial Results Conference Call. [Operator Instructions] Please be advised that today’s conference is being recorded. The company’s earnings results press release, earnings presentations, SEC filings and a replay of today’s call can be found on its IR website at investor.agora.io. Joining me today are Tony Zhao, Founder, Chairman and CEO; and Jingbo Wang, the company’s CFO. Reconciliations between the company’s GAAP and non-GAAP results can be found in its earnings press release. During this call, the company will make forward-looking statements about its future financial performance and other future events and trends. These statements are only predictions based on the company’s beliefs today, and actual results may differ materially.

These forward-looking statements are subject to risks, uncertainties, assumptions and other factors that could affect the company’s financial results and the performance of its business, which the company has discussed in detail in its filings with the SEC, including today’s earnings press release and the risk factors and other information contained in the final prospectus relating to its initial public offering. Agora, Inc. assumes no obligation to update any forward-looking statements the company may make on today’s call. With that, let me turn it over to Tony. Hi, Tony.

A customer using the Video Calling service to connect with a loved one.

Tony Zhao: Thanks, operator, and welcome, everyone, to our earnings call. In the third quarter, revenue was $15.3 million for Agora, flat compared to last quarter, and RMB 141 million for Shengwang, an increase of 7.4% quarter-over-quarter. As of the end of this quarter, we had more than 1,600 active customers for Agora and more than 4,000 for Shengwang, an increase of 26% and 6%, respectively, compared to a year ago. Now moving on to our business, product and technology updates for the quarter. Let’s start with Agora. We recently announced the general availability of our video-based solution to power live shopping experiences. In recent years, live shopping has disrupted and transformed the entire e-commerce market in China.

We believe that the U.S. and other developed markets will soon catch up and embrace live shopping as the next big trend. According to McKinsey, live shopping could account for 20% of all e-commerce sales by 2026, and the U.S. live shopping market is estimated to be worth $35 billion by 2024. We’re also proud to mention that Agora was highlighted as a leading vendor in Gartner’s recent market guide for live commerce in retail. With Agora, brands, marketplaces and platforms can now seize the live shopping opportunity with ease. For example, we recently helped CommentSold, the leading fashion live shopping platform for retailers, introduce new functionality that allows sellers to invite external participants into their shows simply by sending them a link or QR code.

Sellers can easily add professional hosts, celebrities, influencers or VIP customers to join their live shopping sessions to provide more engaging content and boost conversion. This quarter, we also partnered with The Sandbox, a leading decentralized gaming virtual world, to promote real-time engagement and social interactions within the metaverse through voice, video and chat. The Sandbox chose Agora because of our comprehensive product stack, which can seamlessly integrate 3D spatial audio, persistent text chat and interactive live streaming functionalities at scale. As a result, players’ ability to connect, collaborate and form meaningful communities in the metaverse is significantly enhanced.


Moving on to Shengwang. We recently launched our AIGC RTE SDK, a real-time engagement solution connecting human users with large language models. So far, people have largely interacted with AI models in text format. Only recently have companies such as OpenAI beta-launched direct voice conversations between human users and AI models. However, there is still a significant delay of 6 to 7 seconds or more to receive a simple voice response. Our solution has been designed to tackle this latency issue so that users can hear the response within 2 seconds. This near real-time response is close to the natural pause expected in human-to-human conversations and, therefore, provides a much more engaging and natural experience for users. Our AIGC RTE SDK also comes with great flexibility.

Developers can therefore easily create their applications with the freedom to choose from a wide range of large language models, speech-to-text engines and synthetic voices to fit their unique use cases. A video demo of our AIGC RTE solution can be found in our earnings presentation, with more details on its features and capabilities. Please also feel free to download the demo application and try it yourself. Next, let’s move to real-world factories to see how we assist BMW in their digital transformation journey. Historically, when local engineers needed technical support from specialists at BMW headquarters in Germany, they had to wait at least 24 hours before a specialist could arrive on site, which could impact the manufacturing timeline and even cause delays in delivery.

Our solution enables remote specialists at BMW headquarters to remotely inspect even the smallest details with HD video and low latency. In addition, when a specialist makes inquiries or provides instructions, local engineers can see exactly where the specialist is marking with the help of AR devices. This solution demonstrates our commitment to becoming a trusted partner of large enterprises for their digital transformation initiatives. Last month, we held our RTE conference for the ninth consecutive year, where we discussed the role artificial intelligence plays in real-time engagement. We have been leveraging AI to enhance the quality of experience in real-time engagement for a long time. We developed AI-powered real-time on-device noise suppression and echo cancellation algorithms that significantly improve audio quality.

On the video front, our AI-based real-time on-device algorithms, such as super resolution, perceptual video coding, object segmentation, adaptive coding and video quality assessment, enable us to deliver optimal video quality and viewing experiences up to 4K resolution. Many of our customers worldwide have integrated our latest SDKs with these AI technologies, and more are in the process of upgrading. With our constant pursuit of more efficient AI-powered SDKs to overcome the limits of devices and network infrastructure, we have substantially reduced the barrier to high-quality RTE. This means that people around the world, even those without the latest smartphones or access to high-speed Internet, can now enjoy a wide range of RTE use cases, just like how they have access to basic utilities such as clean water and electricity.

With billions of smartphones and devices powered by our SDKs with AI-driven capabilities, we are making solid progress toward realizing our mission of making real-time engagement ubiquitous and allowing everyone to interact with anyone, anytime and anywhere. We have also closely followed the development of large language models and generative AI around the world. As mentioned earlier, our AIGC RTE solution enables developers to put AI-powered characters into voice conversations with users. We have been working closely with some customers and expect to see them launch their innovative use cases soon, such as AI companions or personal assistants, social deduction games with AI players, and AI tutors that help users learn foreign languages. As large language models continue to advance in their ability to process and generate multimodal information, substantial amounts of data in video and audio formats will need to be transmitted between human users and AI models.

The volume of such data will one day surpass today’s human-to-human traffic, and we are uniquely positioned to become the critical infrastructure that enables human users and AI models to interact with each other through video and audio. Before concluding my prepared remarks, I would also like to announce a change in our Board of Directors. Mr. Tuck Lye Koh has tendered his voluntary resignation due to personal reasons. Tuck has been working with us since our inception a decade ago, first as an investor, then as a director since 2018. I would like to sincerely thank Tuck for his dedicated service and invaluable advice to our Board. Mr. Shawn Zhong, currently our Chief Technology Officer and Chief Scientist, has been appointed as a director.

I would like to warmly welcome Shawn to our Board, and I’m confident that Shawn’s insights and expertise will help the Board and the company stay at the forefront of real-time engagement technology and create long-term value for our shareholders. With that, let me turn things over to Jingbo, who will review our financial results.
