With its AI DJ, Spotify trained an AI on the voice of a real person: its head of Cultural Partnerships and podcast host, Xavier “X” Jernigan. Now, it seems, the streamer may turn that same technology to advertising. According to statements made by The Ringer founder Bill Simmons, the streaming service is developing AI technology that can use a podcast host’s voice to generate host-read ads without the host actually having to read and record the ad copy.
Simmons made the statements on a recent episode of “The Bill Simmons Podcast,” saying, “There is going to be a way to use my voice for the ads. You have to obviously give the approval for the voice, but it opens up, from an advertising standpoint, all these different great possibilities for you.”
He said these ads could open up new opportunities for podcasters because they could geo-target ads — like tickets for a local event in the listener’s city — or even create ads in different languages, with the host’s permission.
His comments were first reported by Semafor.
The Ringer was acquired by Spotify in 2020, but it wasn’t clear if Simmons was authorized to speak about the streamer’s plans in this area, as he began by saying, “I don’t think Spotify is going to get mad at me for this…” before sharing the information.
Reached for comment, Spotify wouldn’t directly confirm or deny the feature’s development.
“We’re always working to enhance the Spotify experience and test new offerings that benefit creators, advertisers and users,” a Spotify spokesperson told TechCrunch. “The AI landscape is evolving quickly and Spotify, which has a long history of innovation, is exploring a wide array of applications, including our hugely popular AI DJ feature. There has been a 500 percent increase in the number of daily podcast episodes discussing AI over the past month including the conversation between Derek Thompson and Bill Simmons. Advertising represents an interesting canvas for future exploration, but we don’t have anything to announce at this time.”
The subtext of that comment suggests Simmons’ statements may have been somewhat premature.
That said, Spotify has already hinted that the AI DJ in the app today would not be the only AI voice users would encounter in the future. When Jernigan was recently asked about Spotify’s plans to work with other voice models going forward, he teased, “stay tuned.”
The streamer has also been quietly investing in AI development and research, with a team of a few hundred people now working on areas like personalization and machine learning. That team has been building on OpenAI’s technology and researching possibilities across large language models, generative voice, and more.
Spotify’s ability to create AI voices leverages IP from its 2022 acquisition of Sonantic, combined with OpenAI technology. The company recently told us it may opt to use its own in-house AI tech in the future.
To create the AI DJ, Spotify had Jernigan go into a studio to produce high-quality recordings, including ones where he read lines with different cadences and emotions. He kept his natural pauses and breaths in the recordings and made sure to use language he actually says, like “tunes” or “bangers” instead of just “songs.” All of this was then fed into the AI model that creates the AI voice.
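As a rough illustration of that voice-cloning step, here is a minimal sketch using the open-source Coqui TTS library and its XTTS v2 model, which can clone a voice from short reference recordings. This is not Spotify’s actual pipeline, which is unannounced; the file paths and ad copy below are hypothetical.

```python
# Illustrative sketch only -- Spotify's actual pipeline is unannounced.
# Uses the open-source Coqui TTS library (pip install TTS) and its
# XTTS v2 model, which clones a voice from short reference recordings.
from TTS.api import TTS

# Load a multilingual voice-cloning model (a stand-in for whatever
# Sonantic- and OpenAI-based stack Spotify actually uses).
tts = TTS(model_name="tts_models/multilingual/multi-dataset/xtts_v2")

# A hypothetical reference clip: the equivalent of Jernigan's studio
# recordings, with natural pauses and breaths left in.
reference_clip = "host_reads/conversational.wav"  # made-up path

ad_copy = "This episode is brought to you by Saturday's show downtown."

# Render the ad copy in the cloned voice. The same copy could be
# re-rendered per city or per language, as Simmons described.
tts.tts_to_file(
    text=ad_copy,
    speaker_wav=reference_clip,
    language="en",
    file_path="host_read_ad.wav",
)
```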
The company hasn’t explained the process in more detail or said how long it took to turn Jernigan’s recordings into an AI DJ. But given its possible interest in turning its podcast hosts into AI voice models, it must be developing a fairly efficient process here, and one that could possibly leverage a podcaster’s existing recordings.
While AI voices aren’t new, the ability to make them sound like real people is a more recent development. A few years ago, Google wowed the world with Duplex, a human-sounding AI that could call restaurants to make reservations for you, though the tech was initially slammed for its lack of disclosure. This month, Apple introduced an accessibility feature, Personal Voice, that can mimic the user’s own voice after they train the model by reading randomly chosen prompts for 15 minutes, with processing done locally on the device.