Meta introduces AudioCraft, a generative AI music and sound creation model

The Fundamental Artificial Intelligence Research (FAIR) team at Meta Platforms Inc. announced today that it is open-sourcing a new generative AI framework designed to create realistic audio and music from text-based inputs.

According to Meta’s FAIR team, AudioCraft will allow indie video game developers to fill virtual worlds with more lifelike sound effects, while also enabling musicians to experiment with new compositions without ever picking up an instrument. The AudioCraft framework is made up of three components: MusicGen, AudioGen, and EnCodec.

The FAIR team explained in a blog post that MusicGen generates new music from text-based inputs and was trained on Meta-owned and specifically licensed music, while AudioGen produces realistic sound effects from text inputs and was trained on publicly available sound effects.
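To illustrate how text-based inputs drive generation in the open-sourced library, below is a minimal Python sketch using MusicGen. The checkpoint name ('facebook/musicgen-small'), the duration setting, and the example text prompt are assumptions for illustration; exact model names and defaults may differ between releases.

```python
# Minimal sketch: text-to-music generation with audiocraft's MusicGen.
# Checkpoint name and generation parameters below are illustrative assumptions.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained MusicGen checkpoint (assumed: the small variant).
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # generate roughly 8 seconds of audio

# Text-based inputs describing the desired music.
descriptions = ['lo-fi hip hop beat with soft piano']
wav = model.generate(descriptions)  # returns a batch of waveforms, one per prompt

# Write each waveform to disk with loudness normalization.
for idx, one_wav in enumerate(wav):
    audio_write(f'sample_{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```

AudioGen follows a similar text-prompt workflow for sound effects, with EnCodec serving as the underlying audio codec in the framework.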
