Meta to Build Better AI-Driven Audio for Virtual Reality


Researchers at Meta Platforms Inc. have open-sourced three artificial intelligence (AI) models that take sound in the metaverse to a new level. Meta (formerly Facebook) designed the models to make audio more realistic in mixed- and virtual-reality experiences.

The three AI models, Visual-Acoustic Matching, Visually-Informed Dereverberation, and VisualVoice, focus on human speech and sounds in video and are intended to move “us toward a more immersive reality at a faster rate,” according to a statement from the company. The self-supervised Visual-Acoustic Matching model, known as AViTAR, transforms audio so that it sounds as if it were recorded in the space shown in a target image.
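At its core, re-rendering audio for a new space is classical convolution with a room impulse response; AViTAR’s novelty lies in inferring the target acoustics from an image alone. As a point of reference only, and not Meta’s method, here is a minimal NumPy/SciPy sketch of that signal-processing analogue, assuming the impulse response is already known rather than predicted from a picture:

```python
import numpy as np
from scipy.signal import fftconvolve

def match_acoustics(dry_audio: np.ndarray, room_ir: np.ndarray) -> np.ndarray:
    """Re-render audio as if it were recorded in the room described by room_ir."""
    wet = fftconvolve(dry_audio, room_ir, mode="full")[: len(dry_audio)]
    # Normalize to avoid clipping introduced by the convolution.
    return wet / (np.max(np.abs(wet)) + 1e-8)

# Toy example: one second of noise played into a synthetic echoey room.
sr = 16_000
dry = np.random.randn(sr).astype(np.float32)
ir = np.zeros(sr // 2, dtype=np.float32)
ir[0] = 1.0       # direct path
ir[2_000] = 0.5   # early reflection at ~125 ms
ir[7_000] = 0.25  # later reflection at ~440 ms
wet = match_acoustics(dry, ir)
```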

According to Meta, the self-supervised training objective was able to learn acoustic matching from in-the-wild web videos, even though such videos provide neither acoustically mismatched audio pairs nor labels.
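Meta’s statement does not spell out the training recipe, but a common way to self-supervise this kind of model is to treat a web video’s own audio, which already matches its scene, as the target, and to manufacture the mismatched input by imposing different, randomly generated acoustics. The sketch below illustrates that pair construction; the helper names are hypothetical, not Meta’s:

```python
import numpy as np
from scipy.signal import fftconvolve

def random_room_ir(sr: int = 16_000, rt60: float = 0.4) -> np.ndarray:
    # Crude stand-in for a room impulse response: exponentially decaying noise
    # (exp(-6.9) ~= -60 dB at t = rt60, the standard reverberation-time mark).
    t = np.arange(int(sr * rt60)) / sr
    return (np.random.randn(t.size) * np.exp(-6.9 * t / rt60)).astype(np.float32)

def make_training_pair(video_audio: np.ndarray, sr: int = 16_000):
    """Build a self-supervised pair from a single web video.

    Target: the video's own audio, already consistent with its scene.
    Input:  the same audio re-rendered with random, wrong acoustics.
    """
    mismatched = fftconvolve(video_audio, random_room_ir(sr), mode="full")
    return mismatched[: len(video_audio)], video_audio
```

Trained this way, a model learns to undo the imposed corruption conditioned on a frame from the video, so no human annotation is needed.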

Read More: https://siliconangle.com/2022/06/24/meta-building-better-ai-driven-audio-virtual-reality/