Combatting AI with Copyright


16 Nov 2025


Nearly every day, it seems, a different company or celebrity is swept up in an “Artificial Intelligence” (AI) scandal. To name just a few, this past June, Duolingo faced backlash following a failed effort to rebrand itself as an “AI-first” organization. Additionally, recent internet speculation suggests Taylor Swift’s “Life of a Showgirl” promotional materials featured AI-generated videos. Finally, and perhaps most unsettling, an AI-generated “actress” has raised a ruckus as the computer-generated performer’s creator searches for a talent agency.

AI’s rapid emergence has been nothing short of controversial amongst the general public. Some enthusiastically embrace the technology, integrating AI into their daily lives. Recently, a Center for Democracy and Technology study showed 86% of students, 85% of teachers, and 75% of parents surveyed reported using AI during the previous academic year. Additionally, according to the International Energy Agency, “[w]ithin two years of ChatGPT’s launch in 2022 . . . around 40% of households in the U.S. and United Kingdom reported using AI chatbots.”

On the other end of this spectrum, however, AI’s adversaries adamantly oppose any use of the technology. For instance, the New York City subway system recently became a site for expressing such opposition, with graffiti defacing advertisements that promote “a wearable A.I. pendant that, for $129, will listen to your conversations and become your friend.” The messages painted across the adverts “rang[ed] from hostile (‘A.I. is burning the world around you’) to pleading (‘make a real friend’).”

With such stark differences, the fears motivating AI opponents are not insignificant. As the spray-painted message “A.I. is burning the world around you” suggests, a prominent concern for AI opponents is the technology’s environmental harms. Namely, the data centers responsible for powering AI systems can use “as much electricity as 100,000 households” and “suck up billions of gallons of water for systems to keep all that computer hardware cool.” Given growing concern over global water shortages and the fact that the past ten years have been documented as the warmest on record, AI’s carbon dioxide emissions and water wastage raise serious questions about the sheer rate of AI usage.

As AI continues expanding and its use increases, oversight efforts are emerging. Currently, the regulation trend appears to focus on delineating “permissible” AI use and prohibiting any use that is deemed “impermissible.” A leading topic in this conversation is the unauthorized reproduction of another’s image or copyrightable material. In fact, those critiquing Taylor Swift’s alleged recent AI usage are calling attention to the singer’s September 2024 statement, where Swift voiced her “fears around AI, and the dangers of spreading misinformation.” Although facially antithetical to the singer’s recent use of the technology, Swift’s sentiment reflects a distinction amongst anti-AI advocates. That is, Swift appears to subscribe to the belief that AI is not inherently problematic; rather, unauthorized or irresponsible use is AI’s greatest threat.

Recent legislation in the field aligns with viewing AI as a question of authorization, with the May 2025 Take It Down Act introducing harsher penalties for “distributi[ng] . . . non-consensual imagery . . . [including] deepfakes created by [AI].” As Senator Amy Klobuchar noted, the Act is a “landmark move” in addressing a growing need for “establishing common sense rules of the road around social media and AI.”

Similarly, a current lawsuit against OpenAI shines a light on growing objections to AI’s unauthorized use of copyrighted material. The New York Times, alongside other major news publications, filed suit against OpenAI in January 2025, alleging OpenAI is liable for copyright infringement as ChatGPT accesses millions of copyrighted articles without consent or payment. In response, OpenAI contends ChatGPT’s alterations to copyrighted materials entitle it to fair use protection.

Ultimately, even if this case is decided in favor of the publishers, ChatGPT would not cease to exist. Rather, OpenAI would most likely be required to purchase licenses and negotiate authorized use of copyrighted material. Much like streaming services increasingly instituting ad-supported subscription plans, AI services such as ChatGPT would remain accessible to those willing to pay. In fact, ChatGPT already offers subscription plans, with a “Plus” plan at $20.00 a month and a “Pro” plan at $200.00 a month. As a result, little is accomplished in the way of environmental protection, as those able to pay are not subject to any maximum limit.

Moreover, a license-based regime poses particular threats to creative industries. Even where companies pay hefty licensing fees to use AI systems, opting for a machine over a union-represented scriptwriter may still be attractive. With AI eliminating management costs, training time, and human error, all while rapidly producing content, hiring humans suddenly seems irrational to those with an eye on the bottom line and shareholders to answer to. Further, in a world already filled with mass-produced media, eliminating the human touch in favor of AI threatens art’s overall value to society.

The cost of AI’s rapid content production endangers employment opportunities, especially in creative industries where the work product is already vulnerable to undervaluation. As Morgan Stanley Research estimates, AI can lower TV and film production companies’ costs by as much as 30%. In fact, concerns about AI overtaking human contributions were a central sticking point during various entertainment industry union strikes throughout 2023—namely, the Writers Guild of America’s (WGA) nearly five-month strike and the Screen Actors Guild-American Federation of Television and Radio Artists’ (SAG-AFTRA) nearly four-month strike.

Beyond profitability concerns, if AI becomes the source of most mainstream media, the benefits art provides in the first place are quashed. Walter Benjamin’s theory of an artwork’s “aura” captures the importance of manmade art—answering questions such as “Why travel to the Louvre to see the Mona Lisa when Google is free?” or “Why attend a concert instead of opening Spotify?” In his 1935 essay “The Work of Art in the Age of Mechanical Reproduction,” Benjamin describes how an original work of art or live performance exudes a certain “aura,” which cultivates a connection between the art and its observer. This connection is found in the Mona Lisa’s discoloration, or the Statue of Liberty’s oxidation. According to Benjamin, this aura “withers in the age of mechanical reproduction” as the art is subject neither to the tests of time nor the limits of space.

Indeed, AI in some sense transcends reproduction, deriving “new” outputs from various inputs. However, its inability to move about the world exempts AI-generated art from the passage of time and corresponding weathering. Accordingly, akin to the reproductive technologies Benjamin condemned, AI lacks the authenticity required to establish a meaningful connection between art and its observer. In turn, art’s value is reduced to its price tag rather than to an individual’s impulse to hunch over a canvas for endless hours or to weave their most harrowing life experiences into a movie script for public consumption. Disconnected from degradation and reality at large, even the most mind-blowing AI-generated image of a breathtaking landscape is incapable of evoking the emotion of flimsy cardstock covered with a child’s frantic crayon scribbles posted to a fridge.

Thus far, one effective means for addressing these concerns beyond establishing a license-based regime has been through private ordering efforts. For example, Universal Music Group (UMG) made its stance on AI in the music industry clear when it banned the use of its artists’ music on TikTok for three months in early 2024. The two companies ultimately agreed to “protect human artistry” amid developments in AI. In particular, the agreement stipulates efforts to “remove unauthorized AI-generated music from the platform” and to construct “tools to improve artist and songwriter attribution.”

On a similar note, a central focus of the WGA’s 2023 strike revolved around using AI for scriptwriting. The union ended its nearly five-month strike once the relevant parties agreed to regulations which essentially enshrined a place for people in the writing room—maintaining that “[n]either traditional AI (technologies including those used in CGI and VFX) nor generative AI (GAI, meaning artificial intelligence that produces content including written material) is a writer.”

Unfortunately, the benefits of private ordering efforts are subject to various limitations. First and foremost, groups such as UMG or the WGA answer to those they represent. Thus, the interests of those who are not signed to the record label or do not pay WGA union dues remain unprotected. Moreover, even those whose interests are protected remain subject to renegotiation down the line. As a result, even the strongest private protections against AI are ultimately unreliable until they are codified in law.

Fortunately, copyright law can serve as a strong foundation for retaining human involvement. Extending the pre-existing legal framework to the AI context, limiting copyright protection to works with substantial human contribution provides a major economic incentive for avoiding AI integration in creative projects.

Most valuable is the “human author” requirement, which bars copyright protection for works created without a human. The U.S. Copyright Office expressly denies copyright protection for any works lacking human involvement, asserting “the Office will not register works produced by machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” Some courts have likewise imposed a human-creation requirement. In Naruto v. Slater, for instance, the Ninth Circuit denied copyright protection for a selfie taken by a monkey, as the Copyright Act only employs terms “imply[ing] humanity.”

Carrying this framework over to AI is rather simple and would provide a significant safeguard for maintaining predominantly human creations. As copyright protection extends only to eligible works, any works AI independently generates or substantially contributes to would be ineligible for copyright protection and any entitlements attached. Thus, the scope of copyright is limited to works with material human contributions, effectively preserving human involvement in the entertainment industries.

Requiring material human contribution forces companies to adopt a human-first approach to producing creative works if they wish to enjoy the monetary value copyright protection provides. Copyright protection entitles its owner to a series of rights, including the right to distribute a work and to create derivative works, such as sequels and companion novels. As such, the inability to secure protection would leave creators with significantly less control over who can profit from their work.

Extending the pre-existing copyright law requirement of “human authorship” encourages companies to err on the side of caution when deciding whether to negotiate with a union-represented actor or cast Tilly Norwood in a one-woman show. While AI would not be entirely outlawed, copyright law can incentivize limiting the amount of AI ultimately used. As such, one can hope that with each employed scriptwriter or extra, one fewer query runs through a data center—allowing the world to continue spinning and Letterboxd reviews to remain free from analyzing AI acting.


Suggested Citation: Tay Rossi, Combatting AI with Copyright, Cornell J.L. & Pub. Pol’y, The Issue Spotter (Nov. 16, 2025), https://publications.lawschool.cornell.edu/jlpp/2025/11/16/combatting-ai-with-copyright/.

About the Author

Tay Rossi is a second-year student at Cornell Law School. In May 2024, she graduated from Hobart and William Smith Colleges with a B.A. in Philosophy and a minor in Theater. During her 1L summer, Tay interned with the Supreme Court of Rhode Island’s Law Clerk Department. Currently, she is a member of the Cornell Campus Mediation Practicum.