Meta launches SeamlessM4T AI model for real-time language translation

Meta Platforms, the parent company of Facebook, announced the release of an advanced AI model designed to revolutionize communication across language barriers. The AI model, Meta SeamlessM4T, can seamlessly translate and transcribe speech in numerous languages, potentially paving the way for real-time interactions between individuals from diverse linguistic backgrounds.

In a blog post on Tuesday, the company introduced the Meta SeamlessM4T model, highlighting its ability to facilitate translations between text and speech for nearly 100 languages. Additionally, it offers complete speech-to-speech translation for 35 languages, consolidating functionalities that were once only available through separate AI models.

“SeamlessM4T builds on advancements we and others have made over the years in the quest to create a universal translator. Last year, we released No Language Left Behind (NLLB). This text-to-text machine translation model supports 200 languages and has since been integrated into Wikipedia as one of the translation providers. We also shared a demo of our Universal Speech Translator, the first direct speech-to-speech translation system for Hokkien, a language without a widely used writing system. And earlier this year, we revealed Massively Multilingual Speech, which provides speech recognition, language identification, and speech synthesis technology across more than 1,100 languages,” Meta stated in the blog post.

“Meta SeamlessM4T draws on findings from all of these projects to enable a multilingual and multimodal translation experience stemming from a single model, built across a wide range of spoken data sources with state-of-the-art results,” the company added.

Mark Zuckerberg, CEO of Meta Platforms, views such tools as essential for fostering interactions within the metaverse, the network of interconnected virtual worlds he envisions as the company’s future. The AI breakthrough aligns with Zuckerberg’s goal of enabling users around the globe to communicate effortlessly within the metaverse.

The company has made the Meta SeamlessM4T model accessible to the public for non-commercial purposes, reflecting Meta’s commitment to fostering an open AI ecosystem. This approach has already produced multiple AI model releases this year, including the notable Llama language model. Meta’s open-access strategy contrasts with that of competitors such as Microsoft-backed OpenAI and Google, which often charge for access to their proprietary AI models.

Meta SeamlessM4T supports:

  • Speech recognition for nearly 100 languages
  • Speech-to-text translation for nearly 100 input and output languages
  • Speech-to-speech translation, supporting nearly 100 input languages and 36 (including English) output languages
  • Text-to-text translation for nearly 100 languages
  • Text-to-speech translation, supporting nearly 100 input languages and 35 (including English) output languages

Legal challenges

However, Meta Platforms, like others in the industry, faces legal challenges concerning the training data used to create these AI models. In recent months, lawsuits have been filed against both Meta and OpenAI alleging copyright infringement over the use of training data derived from books without permission.

Researchers detailed that the audio training data for the Meta SeamlessM4T model was drawn from an extensive collection of “raw audio originating from a publicly available repository of crawled web data.” The specifics of this repository were not disclosed. Text data, on the other hand, was sourced from datasets created in the previous year, derived from content across Wikipedia and associated websites.

Despite its commitment to advancing open AI solutions, Meta Platforms is navigating a complex landscape of legal and ethical considerations as it propels the AI-driven future of communication and virtual interaction.

WRITTEN BY

Team Eela
