
(Source: Alexandra “Lexie” Kapilian)
The rise of generative artificial intelligence (AI) poses an increasing threat to actors, singers, comedians, and other performers who make their living from their voice. Generative AI trains on large amounts of data and can quickly generate content, including voice imitations, based on the information it is fed. AI can produce “voice clones” regardless of whether a performer authorizes the imitation. As a result, voice clones can harm performers by allowing others to profit off their voices without their permission. For instance, Scarlett Johansson threatened legal action against OpenAI when it released an artificial voice that sounded identical to hers after she had already turned the project down. In another instance, voice actors filed a class action against Lovo Inc., an AI voice generating company, after it trained on voiceovers available on Fiverr.com and created unauthorized replications of the actors’ voices. The lack of federal protection for performers’ voices, together with gaps in state laws that protect performers’ real voices but do not explicitly protect replications of those voices, opens performers up to potential exploitation through AI voice cloning programs.
To begin filling in this gap and protect performers from unauthorized digital replications of their voices, New York passed Senate Bill 7676B, a law that establishes requirements for contracts that involve the creation and use of digital replicas. This article evaluates the ways in which this new digital replica law extends performers’ right of publicity to their voice along with its limitations.
Right of Publicity
To understand New York’s digital replicas law, one must first understand the right of publicity, specifically the property rights that actors, singers, comedians, and other performers maintain in their voices.
The right of publicity protects a person’s commercial interest in the identifiable aspects of their identity, generally their name, image, likeness, and sometimes their voice, against commercial exploitation. Currently, there is no federal right of publicity. While the U.S. Copyright Office has urged Congress to enact federal legislation protecting the right of publicity, and lawmakers recently introduced multiple bills to that end, including the No AI FRAUD Act and the NO FAKES Act, the right of publicity has never enjoyed federal protection.
Instead, the right of publicity is recognized by most states in statutory or common law. However, not all states recognize this right, and some protect similar interests under a right of privacy instead. Further, each state applies its own legal standards, such that the boundaries of a right of publicity claim vary greatly from state to state. The types of identifiable aspects of identity covered, the duration of the right, and the way the right is protected all vary across this patchwork of state laws. And as the U.S. Copyright Office noted in its first report on copyright and AI, which focused on digital replicas, some state laws are too narrow to protect artists from the harms of digital replicas. For instance, Arizona’s publicity law protects only soldiers, while Texas’s protects only the deceased.
Voice Rights
Most state right of publicity laws cover name, image, and likeness. However, several states, including New York, expanded this right to cover “voice” in light of the Ninth Circuit case Midler v. Ford. There, renowned singer Bette Midler sued the Ford Motor Company for hiring a “sound-alike” to imitate her singing voice in a commercial after she turned it down, such that the public assumed Midler was singing. The Ninth Circuit set a precedent, which the Supreme Court later declined to review, that “a voice is as distinctive and personal as a face” and thus, “when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort in California.” In so holding, Midler v. Ford recognized a common law claim for voice misappropriation in California and established that a performer’s voice could be a protectable aspect of their identity.
The Midler case encouraged New York lawmakers to extend the right of publicity, recognized under the state’s right of privacy statutes, to include “voice.” In 1995, the New York legislature amended New York Civil Rights Law § 51 to cover voice, giving individuals a remedy when their voice is commercially exploited, and New York case law has since protected the right a performer has to their voice. The legislature also amended § 50 to explicitly protect individuals against the unauthorized commercial use of their voice.
Further, after years of negotiation primarily between the Motion Picture Association (“MPA”) and the Screen Actors Guild – American Federation of Television and Radio Artists (“SAG-AFTRA”), New York extended § 50-f, which details performers’ right of publicity, in 2022 to include a civil right of action for the unauthorized creation of digital replicas of deceased performers. Specifically, if a deceased individual made their livelihood by acting or singing and their voice was used in a digital replica to generate an unauthorized and fictitious “new” performance, their estate, or whomever held their property right post-mortem, could recover damages. While this expansion had significant exceptions, protecting only deceased rather than living celebrities, carving out First Amendment and disclaimer defenses, and applying only to specific types of works, it was New York’s first step toward catching up with the needs of performers in the wake of massive technological advances.
New York’s “Contracts for the Creation and Use of Digital Replicas” Law
New York’s new digital replicas law was introduced to the New York State Senate on October 2, 2023 by Senator Ramos. It unanimously passed the Senate on June 6, 2024, and then passed the State Assembly the following day, June 7, 2024. Governor Hochul signed the bill into law on December 13, 2024, effective January 1, 2025. While this amendment to New York’s general obligations law does not explicitly extend New York’s right of publicity, it effectively extends the right that performers have to their voice by establishing contractual requirements for the creation and use of digital replicas.
This act makes any contractual provision that provides for the creation of a “new” performance by digital replica “void and unenforceable” unless the provision meets three conditions. First, the provision must not permit the creation and use of a digital replica to replace work that the performer otherwise would have actually performed. Second, the provision must include a “reasonably specific description” of the digital replica’s intended use. Third, the performer must be represented either by legal counsel who negotiated clear terms for licensing the performer’s digital replica rights or by a labor organization whose collective bargaining agreement explicitly addresses the use of digital replicas. The law defines a digital replica as a digital simulation of an individual’s voice or likeness that so closely resembles the individual’s real voice or likeness that the average person could not distinguish the two. Overall, this law ensures that performing artists enter into agreements with informed consent about how their digital voice rights will be used, so that they can continue making a living through their art rather than be replaced by AI.
New York’s digital replicas law is similar to laws recently passed in California and Tennessee. It is especially similar to California’s Assembly Bill 2602, which SAG-AFTRA also pushed for, which Governor Newsom signed on September 17, 2024, and which likewise took effect at the beginning of 2025. The California law requires the same three conditions as the New York law for any contract provision providing for the creation of a digital replica; otherwise, the provision is void.
New York’s digital replicas law has several advantages. Notably, it is the first New York law to address unauthorized AI voice cloning, and it establishes industry guardrails around contracting for a performer’s digital voice rights. Like the California law, it prevents businesses from using AI to replicate performers’ voices without their informed consent, so that companies cannot simply profit off the commercial value of performers’ voices. While some AI companies have licensed performers’ voices independently of such laws, that practice is far from the industry norm. By setting a contractual standard that voices cannot be mimicked for profit but must instead be used in ways explicitly agreed upon by a performer’s representation, the law should normalize the licensing of New York-based performers’ voices, which are a valuable commercial asset. This way, performers who seek to profit from digital replicas are fully informed about how their voices will be used, agree to that use, are not targeted by companies seeking these rights to avoid hiring actual people, and are paid appropriately.
Further, the law’s emphasis on clear and conspicuous terms for digital replicas will help ensure that performers and their representatives understand exactly what they are agreeing to when they enter into contracts. Companies can neither bury digital replica language in their contracts nor rely on vague terms that lead performers to unknowingly sign away the rights to their voices. As a result, this law should normalize separate negotiations and agreements for the creation of digital replicas, rather than the inclusion of digital replica language in standard form contracts that artists feel compelled to sign.
Limitations
While New York’s digital replicas law is a useful step to help performers combat the rise of voice cloning AI deepfakes, this law has substantial limitations which may not adequately cover performers’ needs.
First, this law does not expand the right of publicity generally; it explicitly protects only performers who contract to relinquish their voice rights. Thus, compared to Tennessee’s Ensuring Likeness, Voice, and Image Security Act (“ELVIS Act”), which creates novel forms of secondary liability against both companies that distribute unauthorized digital replicas and AI companies that produce the tools or software used to create them, New York’s law is substantially more limited in scope. It will still be challenging to sue companies that, as in Scarlett Johansson’s and the voice actors’ cases, do not contract with artists but replicate and profit from their voices anyway. And there is still no cause of action in New York to hold AI companies accountable for creating technology that harms artists’ careers.
Further, this law applies only when a digital simulation of an individual’s voice so closely resembles that individual’s real voice that the average person would find the two indistinguishable. Any claim regarding contractual provisions under this law will therefore create a factual inquiry: can the average person tell the difference between the digital simulation and the individual’s actual voice? This standard may protect performers at the top of the industry whose voices the average person would quickly recognize, like Scarlett Johansson, but it could be less helpful for artists whose voices are not as recognizable. And since most performing artists in New York do not have instantly recognizable voices, they will likely have a much harder time discovering when the digital rights to their voices have been violated, as most violations will not receive the kind of public attention that OpenAI’s replication of Scarlett Johansson’s voice received.
Additionally, as a broader concern, because New York and several other states have different digital replica laws, such that even the definition of a digital replica depends on the jurisdiction, the outcomes of similar cases may vary greatly depending on the state in which suit is brought.
While New York’s new law is a step in the right direction, New York should create broader causes of action, like those created by Tennessee’s ELVIS Act, to ensure that the commercial value of all performers’ voices is protected against generative AI.
Suggested Citation: Alexandra “Lexie” Kapilian, The Digital Replica Contracts Act: An Evaluation of New York’s New Protections for Performing Artists, Cornell J.L. & Pub. Pol’y, The Issue Spotter (Feb. 11, 2025), http://jlpp.org/the-digital-replica-contracts-act-an-evaluation-of-new-yorks-new-protections-for-performing-artists/.
