Interview with Saul Leal, CEO of OneMeta, at CES 2024

Translation is one of the many industries being disrupted by generative AI. That’s not new. Even prior to Gen AI becoming mainstream, services like Google Translate changed the business dynamics of translation, helping millions of people bridge the language gap without spending a penny.

But there’s a difference between looking up the meaning of a word and rendering a whole sentence in a different language. And an even bigger gap between translating a written document and interpreting what someone is saying in real time for others to hear in a different language. Even Google Translate can’t keep up.

Now, real-time translation is set to be transformed by generative AI. Leading the charge is OneMeta, which claims 95% accuracy in real time, compared to 84% for Google Translate and 87% for professional translators. Not only does OneMeta’s technology interpret speech in real time, it also predicts what will be said next. It then delivers the output as audio in more than 150 languages.

These disruptions are bound to change how people communicate. And, according to OneMeta’s CEO, Saul Leal, they should help create a more understanding world too. Read our interview with Saul, edited for clarity, in full below.

Related reading: Lenovo floods CES 2024 with business tech including an AI assistant that can attend meetings for you


Could you please introduce yourself and OneMeta to our audience?

Sure. My name is Saul Leal and I’m the CEO and Founder of OneMeta.

At OneMeta, our vision is to create a more understanding world and we do it through artificial intelligence. We take an individual’s sentence as they say it out loud and forecast the next three to four words they’ll say, doing so with a high level of accuracy and before they finish their sentence.

As the words are being pronounced, we can then translate them into 152 languages. So by the time the person finishes their sentence, it’s already translated.

As you can see here in our booth, we’re seeing four languages being translated in real-time on the screen as they’re being spoken, in this case, English, Spanish, Korean and Mandarin.

Our solution can also be deployed on Microsoft Teams, locally on devices and at events. You can actually scan a QR code for this very conversation and listen to it translated in real-time on your AirPods if you’d like.

And you mentioned that it’s leveraging AI in some form.

Absolutely. Artificial intelligence and translation services have been working together for almost 30 years now, but mostly through basic machine learning and databases. However, translation accuracy has always been low. Google Translate offers around 84% accuracy and human translators typically achieve 87%, whereas OneMeta is recording up to 95%. And the reason for that higher accuracy is generative AI.

We work with different AI language models to predict what’s about to be said with accuracy. When we do so, we are not just focusing on the translation itself; we act more like humans do, using context and interpreting the meaning of words to make the whole translation process a lot more accurate.

I think that the holy grail of what we do is the fact that it’s in real-time. It’s something I don’t think you’ll see anywhere else. And it’s why we’re here at CES 2024.
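For readers curious how this kind of speculative approach can work in principle, here is a minimal sketch. It is purely illustrative: predict_next_words() and translate() are hypothetical stand-ins, not OneMeta’s actual models or APIs. The idea is that the partial sentence is extended with a few predicted words, so draft translations can be ready before the speaker finishes.

```python
# Illustrative sketch only: speculative, streaming translation in principle.
# predict_next_words() and translate() are hypothetical placeholders.

def predict_next_words(partial_sentence: str, n: int = 3) -> list[str]:
    """Hypothetical language-model call that guesses the next n words
    from the sentence spoken so far (context-aware prediction)."""
    return ["..."] * n  # placeholder guess

def translate(sentence: str, target_language: str) -> str:
    """Hypothetical translation call for a single target language."""
    return f"[{target_language}] {sentence}"  # placeholder output

def stream_translate(words_so_far: list[str], target_languages: list[str]) -> dict[str, str]:
    """Extend the partial sentence with predicted words, then translate
    the speculative sentence so drafts exist before the speaker finishes."""
    predicted = predict_next_words(" ".join(words_so_far))
    speculative_sentence = " ".join(words_so_far + predicted)
    return {lang: translate(speculative_sentence, lang) for lang in target_languages}

# Example: translate an in-progress English sentence into three target languages.
drafts = stream_translate(["our", "vision", "is", "to"], ["Spanish", "Korean", "Mandarin"])
for lang, text in drafts.items():
    print(lang, "->", text)
```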

What can you tell us about VerbumMeetings? Is that one of the products OneMeta is showcasing at CES?

VerbumMeetings is a videoconferencing platform. To illustrate how it works, recently I was talking to a gentleman from China; I don’t speak any Chinese and the gentleman was struggling to converse in English. So I told him to speak Chinese, and that I’d speak Spanish as a test. We ended up talking for almost 90 minutes! We talked about business, family and friends… we talked about technology trends and we did so in our native language.

We were able to connect without any problem. That is the beauty of our product.

Using that example, the voice that each of you was hearing was created by generative AI?

That is correct. We analyse your voice, identify the pitch and tone, and replicate it in a matter of seconds in any language. So the tone, the timing and the pace are all similar to the voice of the original speaker.
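As a rough illustration of the “identify the pitch” step, the sketch below estimates a speaker’s fundamental frequency from one frame of audio using simple autocorrelation. It is a generic signal-processing example, not OneMeta’s voice-replication pipeline.

```python
# Illustrative sketch only: estimating pitch (fundamental frequency) from a
# short audio frame via autocorrelation. Generic technique, not OneMeta's code.
import numpy as np

def estimate_pitch(frame: np.ndarray, sample_rate: int = 16_000,
                   fmin: float = 60.0, fmax: float = 400.0) -> float:
    """Return an estimated pitch in Hz for one frame of audio samples."""
    frame = frame - frame.mean()                    # remove DC offset
    corr = np.correlate(frame, frame, mode="full")  # autocorrelation
    corr = corr[len(corr) // 2:]                    # keep non-negative lags
    lo = int(sample_rate / fmax)                    # smallest plausible lag
    hi = int(sample_rate / fmin)                    # largest plausible lag
    lag = lo + int(np.argmax(corr[lo:hi]))          # lag of strongest repetition
    return sample_rate / lag                        # period -> frequency

# Example: a synthetic 150 Hz tone should come back as roughly 150 Hz.
t = np.arange(0, 0.05, 1 / 16_000)
print(round(estimate_pitch(np.sin(2 * np.pi * 150 * t)), 1))
```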

And who are your typical clients? And how long have you been in business?

We started the company about two and a half years ago and began seriously pursuing sales roughly 90 to 120 days ago. Our clients around the world include the Vatican, and we’ve worked with the United Nations. Other clients so far are in government and the public sector, notably immigration and law enforcement, as well as industries such as legal, healthcare and fintech. We are seeing great adoption in reception and customer service roles as well.

To wrap things up, how’s business so far?

It’s going great. There’s more demand than we expected, and we are identifying challenges we can address in different spaces. For example, across the United States there are a lot of challenges in schools related to immigration, with parents and teachers unable to communicate about children’s needs. That’s an area we can help with.

Ricardo Oliveira

Ricardo Oliveira is a Senior Director at TechFinitive, where he frequently collaborates with TechFinitive's editorial team to write and produce content. He's based in Sydney, Australia.
