Copyright Issues Raised by the Technology of Deepfakes

Li Guan | January 30, 2025

Trending Uses of Deepfakes

Deepfake technology, leveraging sophisticated artificial intelligence, is rapidly reshaping the entertainment industry by enabling the creation of hyper-realistic video and audio content. This technology can convincingly depict well-known personalities saying or doing things they never actually did, creating entirely new content that never occurred. The revived Star Wars franchise used deepfake-style digital recreation in “Rogue One: A Star Wars Story” to reintroduce characters like Grand Moff Tarkin and Princess Leia, skillfully bringing back these roles even though Peter Cushing, who originally played Tarkin, had passed away. Similarly, deepfakes have been employed creatively in the music industry, as illustrated by Paul Shales’ project for The Strokes’ music video “Bad Decisions.” Shales used deepfake technology to make the band members appear as their younger selves without them physically appearing in the video.

While deepfakes offer promising avenues for innovation, such as rejuvenating actors or reviving deceased ones, they simultaneously pose unprecedented challenges to traditional copyright and privacy norms.

Protections for Deepfakes

While deepfakes generate significant concerns, particularly about protecting individuals against deepfake creations, there is also controversy over whether the creators of deepfake works can secure copyright protection for their creations.

Copyrightability of Deepfake Creations

Current copyright laws fall short in addressing the unique challenges posed by deepfakes. These laws are primarily designed to protect original works of authorship that are fixed in a tangible medium of expression. However, they do not readily apply to the intangible, yet creative and recognizable, expressions that deepfake technology replicates. This gap exposes a crucial need for legal reforms that can address the nuances of AI-generated content and protect the rights of original creators and the public figures depicted.

Under U.S. copyright law, human authorship is an essential requirement for a valid copyright claim. In the 2023 case Thaler v. Perlmutter, plaintiff Stephen Thaler attempted to register a copyright for a visual artwork produced by his “Creativity Machine,” listing the computer system as the author. The Copyright Office rejected this claim due to the absence of human authorship, a decision later affirmed by the court. Under the Copyright Act of 1976, a work must have a human “author” to be copyrightable. The court further held that extending copyright protection to works produced exclusively by AI systems, without any human involvement, would contradict the primary objective of copyright law, which is to promote human creativity, a cornerstone of U.S. copyright law since its inception. Non-human actors need no incentivization with the promise of exclusive rights, and copyright was therefore not designed to reach them.

However, the court acknowledged ongoing uncertainties surrounding AI authorship and copyright. Judge Howell highlighted that future developments in AI would prompt intricate questions. These include determining the degree of human involvement necessary for someone using an AI system to be recognized as the ‘author’ of the produced work, the level of protection afforded the resultant image, ways to assess the originality of AI-generated works based on non-disclosed pre-existing content, the best application of copyright to foster AI-involved creativity, and other associated concerns.

Protections Against Deepfakes

The exploration of copyright issues in the realm of deepfakes is partially driven by the inadequacies of other legal doctrines to fully address the unique challenges posed by this technology. For example, defamation law focuses on false factual allegations and fails to cover deepfakes lacking clear false assertions, like a manipulated video without specific claims. Trademark infringement, with its commercial use requirement, does not protect against non-commercial deepfakes, such as political propaganda. The right of publicity laws mainly protect commercial images rather than personal dignity, leaving non-celebrities and non-human entities like animated characters without recourse. False light requires proving substantial emotional distress from misleading representations, a high legal bar. Moreover, common law fraud demands proof of intentional misrepresentation and tangible harm, which may not always align with the harms caused by deepfakes. 

Given these shortcomings, it is essential to turn to other legal areas, such as copyright, to enhance protection against the misuse of deepfake technology. In particular, the following sections will explore unauthorized uses of likeness and voice and the impacts of deepfakes on original works. These discussions are critical because they address gaps left by other legal doctrines, which may not fully capture the challenges posed by deepfakes, thereby providing a broader scope of protection.

Unauthorized Use of Likeness and Voice

Deepfakes’ capacity to precisely replicate an individual’s likeness and voice may raise intricate legal issues. AI-generated deepfakes, while sometimes satirical or artistic, can also be harmful. For example, Taylor Swift has repeatedly become a target of deepfakes, including instances where Donald Trump’s supporters circulated AI-generated videos that falsely depict her endorsing Trump and participating in election denialism. This represents just one of several occasions where her likeness has been manipulated, underscoring the broader issue of unauthorized deepfake usage.

The Tennessee ELVIS Act updates personal rights protection laws to cover the unauthorized use of an individual’s image or voice, adding liabilities for those who distribute technology used for such infringements. In addition, on January 10, 2024, Reps. María Elvira Salazar and Madeleine Dean introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act (H.R. 6943). This bill is designed to create a federal framework to protect individual rights to one’s likeness and voice against AI-generated counterfeits and fabrications. Under this bill, digitally created content using an individual’s likeness or voice would only be permissible if the person is over 18 and has provided written consent through a legal agreement or a valid collective bargaining agreement. The bill specifies that sufficient grounds for seeking relief from unauthorized use include financial or physical harm, severe emotional distress to the content’s subject, or potential public deception or confusion. Violations of these rights could lead individuals to pursue legal action against providers of “personalized cloning services” — including algorithms and software primarily used to produce digital voice replicas or depictions. Plaintiffs could seek $50,000 per violation or actual damages, along with any profits made from the unauthorized use.

Impact on Original Work

The creation of deepfakes can impact the copyright of original works. It is unclear whether deepfakes should be considered derivative works or entirely new creations.

In the U.S., a significant issue is the broad application of the fair use doctrine. Under § 107 of the Copyright Act of 1976, fair use is determined by a four-factor test assessing (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the portion used, and (4) the effect of the use on the work’s potential market. This doctrine can protect deepfakes deemed a “transformative use,” a concept from the Campbell v. Acuff-Rose decision, in which the new work significantly alters the original with new expression, meaning, or message. In such cases, even if a deepfake copies substantially from the original, it may still qualify for fair use protection if it is transformative and does not impair the original’s market value.

However, this broad application of the fair use doctrine and liberal interpretation of transformative use do not work in favor of original creators. They may protect deepfake content even when it is made with malicious intent, making it difficult for original creators to bring claims under § 512 of the DMCA and § 230 of the Communications Decency Act.

Federal and State Deepfake Legislation

“Copyright is designed to adapt with the times.” At present, although the United States lacks comprehensive federal legislation that specifically bans or regulates deepfakes, several acts nonetheless target deepfakes.

In Congress, a few proposed bills aim to regulate AI-generated content by requiring specific disclosures. The AI Disclosure Act of 2023 (H.R. 3831) requires any content created by AI to include a notice stating, “Disclaimer: this output has been generated by artificial intelligence.” The AI Labeling Act of 2023 (S. 2691) demands a similar notice, with additional requirements that the disclaimer be clear and difficult to alter. The REAL Political Advertisements Act (H.R. 3044 and S. 1596) requires disclaimers for any political ads that are wholly or partly produced by AI. Furthermore, the DEEPFAKES Accountability Act (H.R. 5586) requires that any deepfake video, whether of a political figure or not, carry a disclaimer. It is designed to defend national security from the risks associated with deepfakes and to offer legal remedies to individuals harmed by such content. The DEFIANCE Act of 2024 aims to enhance the rights to legal recourse for individuals impacted by non-consensual intimate digital forgeries, among other objectives.

On the state level, several states have passed legislation to regulate deepfakes, addressing various aspects of this technology through specific legal measures. For example, Texas SB 751 criminalizes the creation of deceptive videos with the intent to damage political candidates or influence elections. In Florida, SB 1798 targets the protection of minors by prohibiting the digital alteration of images to depict minors in sexual acts. Washington HB 1999 provides both civil and criminal remedies for victims of fabricated sexually explicit images. 

This year, California enacted AB 2839, targeting the distribution of “materially deceptive” AI-generated deepfakes on social media that mimic political candidates and are known by the poster to be false, on the ground that such deepfakes could mislead voters. However, a California judge recently ruled that the state cannot yet compel individuals to remove such election-related deepfakes, finding that AB 2839 likely facially violates the First Amendment.

These developments highlight the diverse strategies that states are employing to address the challenges presented by deepfake technology. Despite these efforts, the laws remain incomplete and continue to face challenges, such as concerns over First Amendment rights.

Conclusion

As deepfake technology evolves, it challenges copyright law, prompting a need for robust legal responses. Federal and state legislation is crucial to protecting individual rights and the integrity of original works against unauthorized use and manipulation. Continuous refinement of these laws will be essential to balance innovation with ethical and legal boundaries, ensuring protection against the potential harms of deepfakes.