The emergence of tools designed to produce new imagery, video and sound based on generative artificial intelligence/machine learning is raising questions about the efficacy of current laws governing the use and protection of human-made content. We recently had the opportunity to survey this legal landscape in a matter involving the use of AI to recreate the voice of a well-known public figure. Our client shared with us a video endorsing a cryptocurrency that sounded as if it had been narrated by the client. We suspected that the narration had been created using an AI speech generator based on publicly available samples of the client’s voice. We were asked what could be done to stop this unauthorized use of the client’s voice for commercial exploitation.
Current State of Intellectual Property Safeguards for One’s Voice
Can someone’s voice be regarded as their intellectual property, and if so, what existing laws might be relied on to protect it?
Right of Publicity
Right of publicity protects against the misappropriation of a person’s name, image and/or likeness (NIL) for commercial benefit. Presently, the right of publicity is governed only by state law; there is no federal protection for one’s right of publicity. That said, over half of the states have enacted a statute recognizing the right of publicity for their residents. Thirty-eight states have also recognized some form of the right to privacy under common law. The consensus among legal scholars is that the right of publicity exists in every state unless it has been explicitly rejected by a state, which has never happened.
With no federal law protecting publicity rights, the extent of protection varies by state. For example, Tennessee’s Ensuring Likeness, Voice, and Image Security (ELVIS) Act, signed into law on March 21, 2024, explicitly treats an individual’s voice as protected property in any medium. “Voice,” according to the act, means “sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice.” The ELVIS Act guarantees post-mortem protection for 10 years if the voice is commercially exploited by the estate; otherwise, protection terminates two years after death. The ELVIS Act provides for civil liability and equitable relief but makes exemptions for “fair use” — the right to use a copyrighted work under certain conditions without permission of the copyright owner — and for fleeting or incidental use.
Minnesota, on the other hand, does not have a statutory right of publicity. This became apparent when the famed professional wrestler Jesse Ventura sued Titan Sports, Inc., seeking royalties for the use of his voice in wrestling videos produced by Titan, and for the use of his name, voice or likeness in other Titan merchandise. At trial, the jury initially awarded Ventura damages for unjust enrichment. On appeal, the Eighth Circuit found that Ventura’s unjust enrichment claim could only survive if the state recognized a right of publicity, but no Minnesota statute or state court had answered that question definitively.
The Eighth Circuit Court of Appeals therefore had to predict whether the Minnesota Supreme Court would recognize such a right of publicity. The Court determined that a right of publicity would exist under Minnesota law, basing its decision on the policy underlying Minnesota’s statutory protection of trade names under the Uniform Deceptive Trade Practices Act (Minn. Stat. Ch. 325F). The Court specifically avoided grounding its decision in the common law tort of invasion of privacy, which Minnesota did not recognize at the time.
No Copyright Protection of Voice
There is no copyright protection for the way one’s voice sounds. In Midler v. Ford Motor Co., 849 F.2d 460 (9th Cir. 1988), the Ninth Circuit Court of Appeals found that, because a voice is a sound, and sounds are not fixed in a tangible medium, a voice is not copyrightable. Instead, a proper claim for voice imitation rests on the right of publicity. Under California law, a right of publicity claim exists where there is a deliberate misappropriation, for commercial purposes, of a voice that is (1) distinctive and (2) widely known. The jury in Midler found that Bette Midler’s voice met both criteria. A jury reached the same conclusion about Tom Waits’s voice in Waits v. Frito-Lay, Inc., 978 F.2d 1093 (9th Cir. 1992).
Copyright Protection of Voice Sound Recordings
The de minimis doctrine in copyright law holds that the law will disregard copying where the copied material is too trivial to qualify as an infringement. The interpretation of this rule has produced a split among the circuit courts over whether the de minimis rule applies to sound recordings. The Sixth Circuit Court of Appeals in Bridgeport Music, Inc. v. Dimension Films, 410 F.3d 792 (6th Cir. 2005), held that there is no de minimis exception to the exclusive right to copy sound recordings, including sampled sounds. But other circuit courts and legal scholars have disagreed. The Ninth Circuit Court of Appeals has recognized the de minimis defense, and the U.S. District Court for the Southern District of Florida refused to follow Bridgeport, reasoning that it is inconsistent with an Eleventh Circuit decision holding that the “substantial similarity” test applies to all infringement claims. Nimmer on Copyright, an authoritative treatise on copyright law, agrees, arguing that Bridgeport was wrongly decided and that courts should not rely on the decision.
The issues surrounding copyright protection of voice sound recordings have fully entered the AI age in the recent so-called “RIAA lawsuits” filed by UMG Recordings, Inc., a Recording Industry Association of America member, against Uncharted Labs, Inc. and Suno, Inc. Both defendants provide generative AI music services; the plaintiff alleges that training, developing and operating their generative AI models required copying and ingesting “decades’ worth” of popular human-made sound recordings to generate outputs imitating the qualities of those recordings. AI-generated music, the plaintiff maintains, directly competes with, cheapens, and will ultimately drown out genuine sound recordings. The RIAA lawsuits therefore assert that the defendants have willfully infringed the recording artists’ exclusive right to reproduce sound recordings. The outcomes in these cases will have a significant bearing on how effectively current copyright law can protect against unauthorized uses of copyrighted material by generative AI applications.
Other Relevant Federal and State Laws
Other laws germane to our client’s legal issue included the Federal Trade Commission Act’s general prohibition of unfair trade practices and the Lanham Act’s provisions regarding false endorsements. While not necessarily applicable to our client’s matter, it is worth noting that the FCC has issued proposed rules requiring disclosure of AI-generated content in political ads on radio and television. Similarly, 18 states have passed laws regulating deepfakes in election advertisements. Minnesota adopted a new deepfake election law in 2023 that criminalizes the use of deepfakes (including digitally altering a person’s voice) to influence an election. Clearly, these issues are increasingly becoming concerns for federal and state lawmakers and regulatory agencies.
Legal Remedies Available for Voice Misappropriation
As the U.S. Copyright Office declared in the first part of its series of reports on AI, state laws are inconsistent and insufficient to police all aspects of human identity, including voice. So, what legal options were available to our client?
Depending on which state’s law applied, he could have brought claims under a state’s right of publicity statute and/or common law tort theories for misappropriation of his voice. It is important, however, to consider the jurisdictions of both the plaintiff and the alleged tortfeasor when evaluating the claim. For example, the state where our client resides recognizes a statutory right of publicity.
In this case, our theory was that the client’s voice was sampled from previously existing sound recordings in which he owns the copyright. He therefore had a claim for infringement of the exclusive right to reproduce those sound recordings under the federal Copyright Act. Depending on which state’s law applied, he could also have asserted claims under an emerging deepfake law, such as the ELVIS Act, and he might have had a Lanham Act claim for false and misleading advertising. Using a combination of these arguments, we sent a takedown notice to YouTube, which promptly removed the video containing the AI-generated audio.
From this experience, it is clear that the legal frameworks protecting an individual’s name, image, likeness and voice are inconsistent from state to state, and that the advent of AI technology and its proliferation across the internet are going to test the effectiveness of those laws. We anticipate that there will be more legislation — and calls for legislation — in this evolving area. In the meantime, however, an individual should look to the right of publicity (if recognized in their state) and federal copyright law to protect their voice from being misappropriated in the age of AI.