AudioCraft – Meta's new AI music creation tool
Meta has just released a new generative AI tool called AudioCraft, designed to let users create music and sounds from text.
The backstory: Last year, OpenAI's ChatGPT took the artificial intelligence (AI) world by storm, becoming a sensation for its generative AI capabilities. But computers have been used to make music since the 1950s, and even some of David Bowie's lyrics in the 90s were generated with simple lyric-shuffling software. Fast forward to today, and generative AI can create music and sounds from nothing more than text inputs and a vast library of sound data.
One AI-generated track called "Heart on My Sleeve," featuring voices that emulated Drake and The Weeknd, went viral on social media, garnering millions of plays before it was removed as “infringing content.” Some artists, like Grimes, actively support AI-made songs, but copyright concerns worry record labels and musicians alike. AI-generated voices are also finding their way into Google Play and Apple Books, which offer auto-narrated audiobooks for publishers.
More recently: In May, Google released MusicLM, an AI tool that creates music from natural language instructions. Users can hum a tune or type out specific music requests, opening up possibilities for genre mash-ups, multi-instrumental compositions and even unique human-like voices.
The development: Meta's new tool, AudioCraft, features three AI models designed for different sound generation tasks. MusicGen generates music from text inputs and has been trained on an extensive library of music. AudioGen creates audio from written prompts using publicly available sound effects data. And an improved EnCodec decoder delivers smoother audio with fewer artifacts when generating sounds.
Meta believes that AudioCraft could revolutionize music creation, much like synthesizers transformed the music industry in the past. By sharing the code openly, the company aims to address possible bias and misuse, hoping that training the models on more diverse data will reduce potential bias in generative models.
Key comments:
“In the last year, we’ve seen some really incredible breakthroughs — qualitative breakthroughs — on generative AI and that gives us the opportunity to now go take that technology, push it forward and build it into every single one of our products,” said Meta CEO Mark Zuckerberg in a June statement shared with CNBC. “We’re going to play an important and unique role in the industry in bringing these capabilities to billions of people in new ways that other people aren’t going to do.”
"I'll split 50% royalties on any successful AI generated song that uses my voice. Same deal as I would with any artist i collab with. Feel free to use my voice without penalty. I have no label and no legal bindings," said musician Grimes on Twitter, now known as X.
“The training of generative AI using our artists’ music (which represents both a breach of our agreements and a violation of copyright law) as well as the availability of infringing content created with generative AI on DSPs [digital service providers], begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation,” said a spokesperson for Universal Music Group (UMG) when "Heart on My Sleeve" went viral in April.