Artificial intelligence is making its way into every industry and pushing boundaries as it goes. In a recent case, a lawsuit was filed against a major AI firm that may have crossed a line: voice actors are accusing the firm of using their voices without permission. The two plaintiffs claim they were tricked, saying the firm solicited voice samples from them and then used those samples to create and sell commercial content. The case raises questions about how AI companies acquire and use voice data. Let’s dive in and explore the trajectory of this class action lawsuit.
Paul Skye Lehrman is a well-known voice actor. Back in 2022, while at a friend’s place, he came across a YouTube video from a channel called Military News about Russia’s advance into Ukraine. He immediately recognized that it was his voice being used in the video, but he let it go and never contacted the channel.
“It was my voice dictating the conflict and talking about weapons,” Lehrman says. “These are words I never said.”
The following year, on his way to a doctor’s appointment, he heard his voice again, this time in a podcast about Hollywood’s dual strikes that used an AI text-to-speech tool to answer questions. That is when he and his wife, Linnea Sage, discovered that her voice had been taken in a similar manner, and when they decided to reach out to an attorney.
On Thursday, they filed a class action lawsuit against Berkeley-based AI startup LOVO in New York federal court, accusing the company of misappropriating their voices. The suit also alleges that LOVO used the voices of A-list talent such as Scarlett Johansson, Ariana Grande, and Conan O’Brien.
“We want to make sure this doesn’t happen to other people,” lawyer Steve Cohen of Pollock Cohen, representing the plaintiffs, told Reuters. “We don’t know, of the thousands of voices Lovo says they use, how many people know that their voices were used and may still be used?”
The class action lawsuit states that both Lehrman and Sage were solicited by LOVO for voiceover work on Fiverr. Lehrman received payment from an anonymous client for the use of his voice in a “research project,” while Sage was told that her voice might be used for “test scripts for radio ads.”
Additionally, SAG-AFTRA general counsel Jeffrey Bennett says the misconduct alleged in the lawsuit is “the type of thing we’re going to see more of as people fail to understand that there are rights that exist in voices.” The union maintains that training AI systems on members’ likenesses without consent is a violation of their rights.
David Case, an attorney representing LOVO, told the Times that his client trained its AI-based voiceover product using audio samples from a database of English-language recordings, but declined to comment further when asked whether Lehrman’s and Sage’s voices had been used in LOVO’s AI training.
Lehrman has since said that he hopes to regain control over his own voice, and that without legal action, companies like LOVO may well escape accountability.
“We hope to claw back control over our voices, over who we are, over our careers,” Lehrman told the Times. “We want to represent others this has happened to, and those that this will happen to if nothing changes.”
The Impact of the Lawsuit
The outcome of this class action lawsuit could have a ripple effect across the AI industry and the legal world. If the actors prevail, AI firms may be forced to be far more transparent about how they acquire voice samples and other training data. The case could also spur new rules and regulations governing how AI firms use voices, requiring them to disclose exactly how their voice data is collected.
In a Nutshell
AI-powered voice tools are rapidly going mainstream, from audiobook narration to virtual assistants. This class action lawsuit is a wake-up call to be more transparent and ethical about the voice and other data used to train AI tools. The case is a clear illustration of why AI firms need explicit ethical guidelines; a proper legal framework would ensure that no artist feels exploited.