In the United States, courts have commonly rejected arguments to protect an artist’s “style” under copyright law. This treatment stems from the idea-expression distinction, which provides copyright protection for fixed expressions, but not abstract ideas. The copyright law regime treats style as an unprotectable idea. This means that multiple people may paint a tree with leaves, but the size, color, and shape of the tree in a particular painting is a distinctive expression that is protected. The emergence of generative artificial intelligence (AI) raises new questions about the limits of copyright law as it stands and new avenues to test them. The ability of AI models to produce new images “in the style of” specific artists increases the potential scope and magnitude of harm to those artists through the exploitation of their creativity, skill, and time.
The use of AI in artistic and creative contexts is impactful because it calls into question the foundational reasoning underlying the idea-expression distinction. Mark Lemley, a leading intellectual property scholar, theorizes that “increasingly, the things humans contribute in a collaboration with generative AI will be ideas and high-level concepts” and “AI will contribute the expression.” In other words, the ability of AI to take creative ideas prompted by individuals and express them in something tangible flips the idea-expression distinction on its head, disrupting the entire copyright law regime. This theory emphasizes the transformative potential of AI to challenge current law and demonstrates how drastic challenges to copyright law can open the door for reconsiderations of the protectability of an artist’s style.
While it has always been possible for individuals to copy the style of another artist’s work, the capacity for harm to artists increases exponentially due to the capabilities of AI. It is now as simple as typing a short prompt into an AI model to create an image that can be commercially marketed. Even without naming the original artist, individuals in the market may recognize their style. This scenario creates potential economic harm if the demand for original works by the artist whose style is being appropriated declines because similar works are widely available on the market. There are additional reputational implications if the subjects being depicted do not align with the artist’s individual values. Even if the artist disapproves of the image, the market will nonetheless associate the image with the artist. The artist’s lack of control over AI-created works reduces the incentive for innovation and new creation, hindering the constitutional goal of copyright to promote original works.
This blog post focuses on style in the context of visual art and takes two approaches to frame how style can be incorporated into protectable expression. First, it analogizes artistic style to the protection of architectural works in 17 U.S.C. § 102(a)(8). Second, it outlines a theory-based route through the right of publicity that can serve as the foundation for protecting style.
Congress’s adoption of architectural works as a category of copyrightable subject matter offers an instructive analogy for understanding how elements of an artist’s style are and are not protected under current copyright law. The United States Code defines an “architectural work” as a tangible design that “includes the overall form as well as the arrangement and composition of spaces and elements in the design, but does not include individual standard features.” In other words, designers may retain a copyright in the original combination of elements that give a building a distinctive character, but may not receive protection for broader features of a building. Significantly, Congress withheld protection from architectural style categories, such as art deco, midcentury modern, or neoclassical. This means that while design features such as arches and columns are not protectable, the unique combination of such features in an original manner can produce a protectable architectural work. The statute makes clear that Congress elected to protect specifically the overall form and arrangement of an architectural design, even where broader and standard features are used.
The statute’s treatment of architectural works can be expanded and applied to other artistic works if an artist’s style is viewed as a form of expressive arrangement of techniques and aesthetic decisions. For example, while Vincent Van Gogh could not have claimed protection over post-impressionism, swirled brushstrokes, or vibrant color palettes, the combination and arrangement of these elements can be analogized to an architect’s expression of standard features. Under this view, style may be seen less as an abstract idea and more as a fixed expression made up of defined features.
With this more defined approach to viewing unique artists and their style, it is clear why generative AI poses a risk. As the AI learns how to replicate an artist’s style, it identifies the unique combination and arrangement of elements and creates the equivalent of an artistic profile. When generating a new image “in the style of” an artist, AI has the explicit goal of making the image in the artist’s recognizable style, utilizing the combination of elements in the works it was trained on. This means that AI is creating new works based on the expressive arrangement and combination of elements that make an artist’s style distinct, which would be protectable if the image were an architectural work. This analogous relationship and the strong theoretical argument for protecting artistic style in the context of generative AI provide one logical route toward advocating for the protection of style.
The right of publicity is a state-level right intended to prevent “unauthorized uses of a person’s identity.” The theory underlying this right offers a foundation for recognizing artistic style as part of an individual’s identity. The ability of an AI model to associate a specific style with a specific artist to create new images indicates a merging of the two concepts, with style becoming a proxy for artistic identity. It is often an individual’s unique artistic identity that the market values. If style is viewed as part of an artist’s identity, the right of publicity provides a strong mechanism for protecting it. The rationale underlying the right of publicity is “preserving the commercial value of their identity” and “protecting the autonomy of their personality.” These underlying rationales align closely with the specific harms that generative AI can have on artists through unauthorized appropriation of their styles, namely economic and reputational harms.
The California legislature and courts lead in granting publicity protections and provide the strongest outline for a federally regulated and protected right of publicity. Not only does California Civil Code § 3344 provide statutory protections for the “name, voice, signature, photograph, or likeness” of an individual, but state common law also provides broader protections. The scope of the right of publicity doctrine is further articulated through case law, which protects elements of identity and can serve as the blueprint for protecting style as an element of identity. In Midler v. Ford Motor Co., the Ninth Circuit held that the imitation of singer and actress Bette Midler’s voice constituted misappropriation even though neither her name nor her image was associated with the advertisement. The court stated that “[t]o impersonate her voice is to pirate her identity.” The common law right was expanded further in White v. Samsung Electronics, where the Ninth Circuit held that a robot dressed to resemble Vanna White and positioned to turn letters as is done on the Wheel of Fortune game show infringed on her right of publicity. The court noted that the right of publicity protects more than “a laundry list of specific means of appropriating identity” and that “name and likeness” need not be used to infringe the right. Symbolic references emulating the attire and behavior of an individual are enough to violate one’s identity. These cases illustrate how a broad and nonliteral conception of identity is recognized under the right of publicity.
This expansive approach by the Ninth Circuit supports treating artistic style, the combination and arrangement of elements that the public associates with an artist, as part of that artist’s identity. The Ninth Circuit has demonstrated a willingness to grant protection under a broad conception of likeness; an artist’s stylistic signature, like Midler’s vocal tone and White’s television persona, could constitute an element of identity protectable under the right of publicity. This argument is particularly powerful in the context of generative AI, which extracts patterns from an artist’s work and produces a new image intended to be closely associated with that artist, even if the artist’s name is not used. The assumption underlying the right of publicity is that the individuals whose identities are being appropriated are commercially valuable. Similarly, artists whose styles are worth appropriating are likely to have commercial value. Thus, the exploitation of an artist’s recognizable and imitable style resembles the type of appropriation that the Ninth Circuit sought to prevent. Given the relevant precedent of the California legislature and Ninth Circuit, the right of publicity provides a foundation for protecting an artist’s style.
The traditional arguments against protecting an artist’s style are being challenged by the monumental impact that generative AI has on artists. If the idea-expression distinction, a core foundation of copyright law, no longer functions as intended because AI increasingly supplies the expression while humans supply only the ideas, the copyright regime should reevaluate whether style is protectable. The two foundations outlined in this post, architectural works and the right of publicity, provide starting points for policymakers and legislators to consider in evaluating the protectability of style.
Patent law was created to incentivize innovation by granting inventors limited monopolies in exchange for public disclosure of their discoveries. In theory, this should promote competition and technological progress. In practice, however, patent law has paradoxically become a barrier to creativity in the video game industry.
In recent years, game publishers, including major names like Nintendo, have increasingly sought patents on gameplay mechanics and systems. Nintendo, for example, has filed 31 patents for The Legend of Zelda: Tears of the Kingdom alone and has drawn criticism from fans and experts alike regarding the validity of those patents. These developments highlight a longstanding tension within patent law. While some commentators argue for robust protection of software innovations, others contend that patents on video game mechanics are often overly broad. This article argues that overly broad video game mechanic patents undermine patent law’s core purposes by chilling creative development and disproportionately burdening smaller studios. Patent law should not prevent other developers from building upon design ideas; instead, it should bolster competition by protecting only concrete, novel implementations that meaningfully advance video game technologies.
To understand how this problem emerged, it is crucial to examine how a system designed to promote innovation through protection has, over time, come to restrict creative expression in video games.
At first, the video game industry was “indifferent to potential patent rights.” Inventor and game developer Ralph Baer filed the first patent for a tennis-like video game in 1968. His creation led to the Magnavox Odyssey, the first home video game console and the blueprint for the industry that followed. Baer’s ingenuity inspired other companies, including Atari, whose arcade game Pong (and later home consoles) achieved the commercial success that Magnavox struggled to obtain. As other companies rushed to emulate Atari’s success, Magnavox filed suits claiming these game companies were infringing on Baer’s patents. Through the 1970s, patents remained less central than copyrights as the distinction between hardware and software developed. Still, this marked the beginning of a legal shift that foreshadowed the modern trend of using patents to protect not only tangible inventions, but also abstract gameplay ideas.
Section 101 of the Patent Act, 35 U.S.C. § 101, governs patentability, allowing patents to be granted to “whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter.” While theoretically straightforward, its interpretation has been far from simple. Courts have struggled to draw a distinct line between an unpatentable abstract idea and a patentable implementation in software, and video game technology amplifies this ambiguity because of its complexity.
Following the Supreme Court’s decision in Diamond v. Diehr—which upheld a patent for a rubber-curing process that used a computer algorithm to calculate optimal cure times—courts began interpreting 35 U.S.C. § 101 more liberally, allowing patents for software and abstract algorithms. Scholars have pointed out that lower courts have misapplied Diehr by granting protections to abstract processes that patent law was never meant to cover. In Bilski v. Kappos, the Supreme Court confirmed that abstract ideas cannot become patent-eligible simply by being implemented on a computer.
Alice Corp. v. CLS Bank attempted to clarify how to interpret § 101 by establishing a two-step test for patent eligibility: (1) is the claim directed to an abstract idea, and if so, (2) is there an “inventive concept” that transforms the idea into a patentable application? The second step of Alice has been inconsistently applied, leading to unpredictable outcomes such as the issuance of overly broad patents.
For example, in DDR Holdings, the Federal Circuit upheld an e-commerce software patent because “the claimed solution is necessarily rooted in computer technology to overcome a problem specifically arising in the realm of computer networks.” Within the gaming sector, this poses a very tangible threat. The varied application of the abstract-idea doctrine has led to the proliferation of overly broad video game mechanic patents. This creates legal uncertainty for developers and inventors and a chilling effect on innovation, as the courts’ stance remains unclear.
The “rules of a game” are generally not patentable, falling under the abstract idea exception to 35 U.S.C. § 101. This exception can serve as a hurdle to patenting certain kinds of games. To overcome this obstacle, applicants draft game mechanic patents using highly specific, technical language that outlines the software implementation rather than the underlying rules. Although this filing method often satisfies the formal requirements of the Patent Act, it enables patent holders to claim broad ownership over fundamental gameplay interactions that could be implemented in innumerable ways. This drafting strategy is especially visible in recent filings by major publishers, such as Nintendo’s patent filings for The Legend of Zelda: Tears of the Kingdom.
The Legend of Zelda’s patents describe the meticulous calculations for object interactions, user interface elements such as the loading screen, and particular character abilities, yet expand to cover the broader concept of combining objects in virtual environments to solve puzzles. This is the type of mechanic other developers might independently create using entirely different lines of code and visual designs. By cloaking abstract gaming ideas in technical specifications, these patents effectively form a conceptual wall around common game design methods and patterns.
As a result, indie developers are placed in a precarious position. Small studios often lack the resources to conduct comprehensive searches of prior art before they develop games; even if they identify potentially conflicting patents, they may not have the means to design around them. The threat of litigation from major publishers may force indie developers to abandon promising projects or avoid entire genres, regardless of whether the implementation would differ, as video game “development is a capital-intensive endeavor” that “requir[es] substantial financial resources.”
The America Invents Act provides a mechanism to challenge questionable patents. Inter Partes Review (IPR) allows third parties to petition the Patent Trial and Appeal Board to reexamine a patent’s validity based on prior art at a fraction of the potential litigation costs. IPR also has a lower burden of proof because there is no presumption of patent validity. IPR provides a comparatively accessible forum for challengers and serves as a crucial tool for invalidating broad patents that were potentially granted erroneously.
However, IPR has limitations. The process requires significant legal expertise and can cost, on average, $300,000—more affordable than a multi-million-dollar trial, but beyond reach for many indie studios. Additionally, with IPR, patents can only be challenged on specific grounds, not on the abstract idea doctrine that often applies to game mechanics. Further, “IPR practitioners will no longer be able to use general knowledge to bridge evidentiary gaps and instead rely on the four corners of prior art or printed publications,” which can provide an evidentiary advantage for patent owners. Despite these constraints, Inter Partes Review, even if unsuccessful, can serve as a procedural leveling mechanism by introducing administrative scrutiny and litigation risk even where substantive invalidation is unlikely.
Patent law’s reach has expanded far beyond hardware and physical devices as games have become increasingly software-based, encouraging developers to protect algorithms, interfaces, and in-game systems that shape player experiences. However, scholars have noted that software patents often fail to provide clear notice to potential innovators, as their scope can be difficult to identify and interpret before litigation.
A controversial example of this trend is Warner Bros. Interactive Entertainment’s patent on the “Nemesis System” from the game Middle-earth: Shadow of Mordor, which grants it exclusive rights over dynamic enemies that are “promoted to a bigger threat” after defeating the player. The system was designed to personalize the experience by crafting miniature bosses “tailored to…specific playthrough[s]” that remember the player’s character and can develop a vendetta against them. Many of these patents are considered overly broad because they fail to meaningfully limit claims to specific technological processes and instead seek protection over high-level gameplay concepts and functional outcomes. By monopolizing such mechanics, these companies have sparked widespread concern among fans and experts.
The threat of potential litigation, rather than actual enforcement, can discourage developers from engaging in iterative, experimental design, effectively serving as a barrier to entry. A small studio developing a memory-based enemy system, for instance, might scrap the feature entirely rather than risk resembling Warner Bros.’ patent. As a result, patent law risks promoting the very stagnation it was meant to prevent.
From law journals to consumer outcry, the backlash against overly broad video game patents has become impossible to ignore. As courts continue to grapple with the boundaries of patentable subject matter under § 101, the video game industry illustrates what happens when patent protection extends too far. When abstract gameplay concepts are treated as proprietary rights, innovation suffers. If patent law is meant to promote progress, it must allow developers to build upon and refine existing concepts without sacrificing creative freedom. Meaningful reform requires limiting patents to specific technical solutions while keeping core gameplay concepts in the public domain—available for all developers to iterate upon and improve.
Over the past decade, schools across the United States have increasingly turned to digital surveillance technologies to monitor students’ online behavior in the name of safety. Among these technologies, Gaggle has emerged as one of the most widely adopted platforms, serving more than 5.8 million students in over 1,500 school districts across the United States. Gaggle scans millions of students’ emails, documents, chats, and images for signs of self-harm, violence, or bullying. Supporters claim that Gaggle helps schools prevent tragedies and identify students in crisis, while critics warn that its constant monitoring compromises student privacy and autonomy and threatens educational equity. This blog post examines Gaggle’s role in K–12 education by analyzing how Gaggle operates, assessing its effectiveness, weighing its benefits against its harms, and considering the adequacy of existing legal protections. Although the platform is promoted as a tool for students’ well-being, its pervasive monitoring raises serious ethical and constitutional concerns that outweigh its purported benefits.
Gaggle integrates with Google Workspace and Microsoft 365 to scan student emails, documents, and images associated with school accounts for language indicating self-harm, usage or possession of drugs, or risks of violence. Because schools can grant Gaggle access to student accounts, surveillance extends beyond school hours and onto students’ personal devices whenever and wherever they log in. Even social media notifications tied to school emails can be monitored. To identify potentially harmful material, Gaggle employs an in-house, AI-powered filtering system that compares scanned content against a proprietary “blocked-word list” containing profanity and references to self-harm, violence, bullying, or drugs. Content flagged by Gaggle’s AI system is reviewed by human moderators, who may escalate “incidents” to administrators or, in severe cases, to law enforcement. Gaggle divides flagged content into three tiers: “violations,” “questionable content,” and “possible student situations.” The last category involves imminent threats such as suicide or possible violence and triggers immediate contact with school officials. While the company claims to have helped save thousands of lives, its data are self-reported and unverifiable. Critics highlight the lack of independent evaluation and the questionable reliability of low-paid contract reviewers expected to process hundreds of incidents per hour. Despite these concerns, many educators view Gaggle as a useful tool for early intervention. However, existing evidence does not conclusively show that Gaggle reduces suicide, self-harm, or violence, suggesting that its promise may rest more on perception than on measurable results.
Proponents argue that Gaggle responds to a growing mental health crisis among young people. Rising rates of depression and anxiety—especially among LGBTQ+ and transgender students—make it difficult for schools to identify struggling students. Supporters claim that Gaggle helps detect warning signs, allowing earlier counseling or intervention. They also cite the platform’s ability to address cyberbullying and fulfill legal mandates under state anti-bullying laws. Additionally, given the growing fear of school shootings, administrators see Gaggle as a supplement to limited counseling resources, capable of flagging threats before violence occurs. From this perspective, Gaggle provides schools with a sense of control and readiness, offering reassurance that no cry for help will go unnoticed. Yet this reassurance often obscures the dark side of Gaggle’s constant surveillance: the erosion of students’ fundamental right to privacy and of their ability to learn freely and express themselves without constant scrutiny.
Continuous surveillance discourages students from expressing themselves freely. Developmental psychologists emphasize that adolescence is a key period for cultivating creativity, independence, and critical thought. When students know they are constantly monitored, they self-censor and conform. This undermines what privacy scholars call “intellectual privacy”—the ability to think and communicate without fear of observation. Research shows that over half of monitored students refrain from sharing their true thoughts online, confirming that Gaggle’s presence suppresses open exploration. In effect, the system teaches young people that safety and obedience take precedence over curiosity and trust. Such lessons, internalized at a formative stage, may have lasting consequences for democratic participation and creative confidence.
Gaggle’s algorithmic bias and access patterns also disproportionately harm disadvantaged students. AI systems often reflect racial and linguistic bias, flagging language used by students of color or LGBTQ+ youth as “offensive.” Low-income students, who rely on school-issued devices, are more heavily surveilled because, for financial reasons, they cannot separate school and personal accounts. Gaggle accesses information through students’ school accounts—regardless of whether they log in from personal or school-provided devices—and can also capture notifications from social media when those accounts are used for registration. Without access to private electronic devices, these students have little choice but to rely on school accounts for all purposes, making it difficult to maintain privacy or segregate personal activity from school monitoring. Moreover, Gaggle has blocked LGBTQ+ websites and flagged terms like “gay” or “queer,” deterring students from seeking support. In some cases, its monitoring has even exposed students’ sexual orientation without consent, placing them at risk of harm at home. These harms compound preexisting inequities in education, as marginalized students already face higher rates of disciplinary action. When surveillance is piled on top of these disparities, it amplifies rather than alleviates injustice. The result is a system that treats vulnerability as suspicious and equates being different with being dangerous.
By extending monitoring to all hours and authorizing contact with law enforcement, Gaggle risks feeding into the school-to-prison pipeline. The “school-to-prison pipeline” describes the phenomenon in which heightened discipline, surveillance, and law enforcement involvement in schools push students, especially the most at-risk ones, out of educational environments and into the juvenile or criminal justice systems. Although the company claims its system is not disciplinary, its reports can reach police if administrators are unavailable. This increases criminalization, especially for minority students already subject to harsher discipline. In the post-Dobbs era, where abortion and gender-affirming care are criminalized in some states, Gaggle’s stored data could be used against students seeking medical information, raising severe privacy concerns. Students and parents rarely receive meaningful notice or the ability to opt out of the surveillance. Unlike some competitors, Gaggle operates in the background without visible indicators. While the company recommends schools notify families, many do not, leaving parents unaware that surveillance occurs. Additionally, opting out effectively bars students from using essential educational technology, mandating surveillance by default. This lack of transparency undermines informed consent and contradicts principles of digital autonomy. As a result, millions of students are subjected to 24/7 surveillance without ever being asked for permission, creating a generation of learners for whom privacy is not a right but a privilege.
Existing federal laws inadequately address the scope of student surveillance. The Children’s Internet Protection Act (CIPA) requires schools receiving federal funds to monitor minors’ online activity and block obscene content. However, it was intended to restrict access to harmful material—not to justify constant behavioral surveillance. Gaggle’s around-the-clock monitoring exceeds what CIPA envisions, and its overbroad filters have restricted legitimate LGBTQ+ educational resources. The Family Educational Rights and Privacy Act (FERPA) restricts disclosure of student records but includes a “school-official exception” allowing data sharing with third-party contractors. This loophole permits schools to grant Gaggle broad access without parental consent, undermining FERPA’s original purpose of safeguarding student records. The Children’s Online Privacy Protection Act (COPPA) governs the collection of children’s data under age thirteen. Because schools can consent on parents’ behalf, Gaggle is not required to obtain direct parental consent. Moreover, COPPA does not apply to students over thirteen, leaving middle and high schoolers unprotected. Fourth Amendment implications remain uncertain. Under New Jersey v. T.L.O., school searches must be justified at their inception and reasonable in scope, but courts have split on how this standard applies to digital monitoring. In State v. Gaul, scanning emails on school servers was upheld since students were notified, but R.S. v. Minnewaska recognized privacy rights in personal social media messages. Gaggle’s continuous off-campus surveillance may therefore raise constitutional concerns, particularly where students lack meaningful notice or the opportunity to avoid monitoring.
Gaggle’s promise of safety comes at the expense of student privacy, equity, and trust. To ensure a fair balance, policymakers should commission independent research to evaluate effectiveness; require transparency and parental notice; mandate audits for algorithmic bias; limit surveillance to school hours and on-campus use; and update CIPA, FERPA, and COPPA to reflect modern digital realities. Schools must also consider alternatives that emphasize human connection rather than algorithmic control, such as increasing access to counseling, peer support programs, and teacher training in mental health awareness. Ultimately, while technology can support student well-being, it must not erode the freedom to think, explore, and learn without fear of being watched. Gaggle’s 24/7 surveillance, though possibly well-intentioned, risks transforming schools into digital panopticons where privacy and creativity give way to control. The task before educators and lawmakers is not to abandon safety, but to redefine it in a way that protects both students’ lives and their liberty to live as autonomous thinkers in a democratic society.
Introduction
The emergence of Artificial Intelligence (AI) contract drafting software marks a pivotal moment in legal technology, where theoretical possibilities are transforming into market realities. As vendors compete to deliver increasingly sophisticated solutions, understanding the current state of this market becomes crucial for legal practitioners making strategic technology decisions. The landscape is particularly dynamic, with established legal tech companies and ambitious startups offering solutions that range from basic template automation to sophisticated language processing systems.
Yet beneath the marketing promises lies a more nuanced reality about what these systems can and cannot do. While some tools demonstrate remarkable capabilities in routine contract analysis and generation, others reveal the persistent challenges of encoding legal judgment into algorithmic systems. This tension between technological capability and practical limitation defines the current market moment, making it essential to examine not just who the key players are, but what their software delivers in practice.
This paper provides an analysis of the current market for AI contract drafting software, examining the capabilities and limitations of leading solutions. By focusing on specific vendors and their technologies, we aim to move beyond general discussions of AI’s potential to understand precisely where these tools succeed, where they fall short, and what this means for law firms and legal departments making technology investment decisions.
Historical Context and Technical Foundation
The rise of AI in legal practice reflects a fascinating evolution from theoretical possibility to practical reality. While early experiments with legal expert systems emerged in the 1960s at the University of Pittsburgh, marking the field's experimental beginnings, the real transformation began with the maturation of machine learning and natural language processing (NLP) in the 21st century. These technologies fundamentally changed how computers could interpret and engage with human language, creating new possibilities for automated contract analysis and drafting that early pioneers could only imagine.
The shift from rule-based expert systems to sophisticated language models represents more than just technological progress: it marks a fundamental change in how we conceptualize the relationship between computation and legal reasoning. Early systems relied on rigid, pre-programmed rules that could only superficially engage with legal texts. Modern AI tools, by contrast, can analyze patterns and context in ways that more closely mirror human understanding of legal language, though still with significant limitations.
This technological evolution has particular significance for contract drafting, where the ability to understand and generate nuanced legal language is essential. While early systems could only handle the most basic document assembly, today's AI tools can engage with contractual language at a more sophisticated level, analyzing patterns and suggesting context-appropriate clauses.
Contract drafting represents a complex interplay of legal reasoning and strategic foresight. At its core, the process demands not just accurate translation of parties’ intentions into binding terms, but also the anticipation of potential disputes and the careful calibration of risk allocation. Traditional drafting requires mastery of multiple elements: precise definition of terms, careful structuring of obligations and conditions, strategic design of termination provisions, and thorough implementation of boilerplate clauses that can prove crucial in dispute resolution.
AI systems use sophisticated pattern recognition to analyze existing contracts and learn standard legal language patterns, which helps ensure accuracy and precision in expressing each party's intentions. These systems can help confirm that contract terms are legally enforceable by cross-referencing legal databases, statutes, and regulations for compliance with relevant law. Furthermore, they excel at identifying common contractual conditions attached to obligations and suggesting appropriate risk mitigation clauses, such as force majeure clauses.
The technology’s analytical capabilities extend to identifying potential areas of dispute based on historical contract analysis, enabling preventive drafting approaches. By leveraging large databases of legal documents, AI systems streamline the drafting process through automated insertion of standard provisions while maintaining consistency across documents. This automation of routine tasks allows lawyers to focus on strategic aspects of contract preparation and negotiation.
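The clause-level checks described in the preceding paragraphs can be illustrated with a deliberately simplified sketch. Real drafting tools rely on trained language models rather than keyword rules, and every name below is hypothetical, but the sketch conveys the basic idea of scanning a draft for expected standard provisions:

```python
import re

# Hypothetical, deliberately simplified sketch: real contract-AI tools use
# trained language models, not keyword rules. This merely illustrates the
# idea of scanning a draft for expected standard provisions.
STANDARD_PROVISIONS = {
    "force majeure": r"\bforce\s+majeure\b",
    "governing law": r"\bgoverning\s+law\b",
    "termination": r"\bterminat(e|ion)\b",
    "indemnification": r"\bindemnif(y|ication)\b",
}

def missing_provisions(contract_text: str) -> list[str]:
    """Return the standard provisions not found in the draft."""
    text = contract_text.lower()
    return [name for name, pattern in STANDARD_PROVISIONS.items()
            if not re.search(pattern, text)]

draft = """This Agreement shall be governed by the Governing Law of Delaware.
Either party may terminate this Agreement upon thirty days' notice."""
print(missing_provisions(draft))  # ['force majeure', 'indemnification']
```

A production system would need far more than regular expressions (clause boundaries, synonyms, negation), but the flag-what-is-missing workflow is the same shape.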
Principal Players in AI Contract Drafting
Gavel is a standout tool for document automation, designed to simplify the creation of legal documents through customizable templates and conditional logic. Its drag-and-drop interface is intuitive, making it accessible to non-technical users, and it excels at generating complex, customized documents quickly. Gavel’s ability to integrate with other systems and automate repetitive tasks, such as populating templates with data, makes it a powerful tool for legal teams looking to streamline their workflows.
However, Gavel’s focus on automation means it lacks advanced AI capabilities for contract analysis or review. It is primarily a tool for generating documents based on predefined templates, rather than analyzing or extracting insights from contracts. Additionally, the quality of its output depends heavily on the templates and data inputs, which may require significant upfront effort to configure.
Ironclad is a leader in contract lifecycle management (CLM), offering a comprehensive platform that combines AI-powered drafting with workflow automation. Its integration with Microsoft Word and other productivity tools allows users to draft, negotiate, and approve contracts within familiar environments. Ironclad’s AI is particularly effective at generating standard contracts (e.g., NDAs, service agreements) and suggesting clauses based on predefined templates. The platform’s analytics dashboard also provides valuable insights into contract performance, helping organizations optimize their workflows.
While Ironclad excels at automating routine tasks, its AI may struggle with highly complex or bespoke agreements, requiring significant customization. Additionally, its pricing structure, often tailored for enterprise-level clients, may be prohibitive for smaller firms or solo practitioners.
Zuva, spun out of Kira Systems, focuses on AI-powered document understanding and contract analysis. Its technology is designed to be embedded into other software applications via APIs, making it a versatile solution for enterprises and developers. Zuva’s AI excels at extracting key terms and clauses from contracts, enabling users to quickly identify risks and obligations. The platform also offers a robust clause library, which can be used to streamline drafting and ensure consistency across documents.
Zuva’s strength as an embeddable solution also presents a limitation: it lacks a standalone, user-friendly interface for non-technical users. Additionally, while Zuva’s AI is powerful, it may require customization to handle highly specialized legal domains or jurisdiction-specific nuances.
LawGeex specializes in AI-powered contract review, using natural language processing (NLP) to compare contracts against predefined policies and flag deviations. This makes it an invaluable tool for legal teams tasked with ensuring compliance and reducing risk. LawGeex’s AI is particularly effective at handling high-volume, routine contracts, such as NDAs and procurement agreements, where speed and accuracy are critical.
While LawGeex excels at contract review, its capabilities in contract drafting are more limited. The platform is primarily designed to identify risks and deviations rather than generate new contracts from scratch. Additionally, its effectiveness depends on the quality of the predefined policies and templates, which may require significant upfront effort to configure.
Kira Systems, now part of Litera, is a pioneer in AI-powered contract analysis, particularly in the context of due diligence and large-scale contract review. Its machine learning models are highly effective at identifying and extracting key clauses and data points from contracts, such as termination clauses, indemnities, and payment terms. Kira’s ability to handle vast volumes of documents quickly and accurately has made it a favorite among law firms and corporate legal teams, especially in industries like M&A, real estate, and financial services.
Luminance is a powerful AI platform designed for contract review and due diligence, with a particular focus on identifying anomalies and risks in large datasets. Its proprietary machine learning technology, based on pattern recognition, enables it to quickly analyze and categorize contracts without the need for extensive training. Luminance’s intuitive interface and real-time collaboration features make it a popular choice for legal teams working on complex transactions.
While Luminance excels at contract review and anomaly detection, its capabilities in contract drafting are more limited. The platform’s effectiveness may also depend on customization to handle jurisdiction-specific or industry-specific requirements.
AI in Practice: Use Cases Across Industries
Mergers and acquisitions (M&A) are among the most complex and high-stakes transactions in the legal world, requiring meticulous due diligence and the ability to process vast volumes of contracts under tight deadlines. In this context, Kira Systems has emerged as a leading solution. Kira’s machine learning models excel at extracting key clauses—such as termination provisions, indemnities, and payment terms—from large datasets, enabling legal teams to identify risks and inconsistencies quickly. For example, Clifford Chance, a global law firm, has leveraged Kira Systems to streamline clause extraction and comparison across multiple contracts, significantly reducing the time required for due diligence. Kira’s ability to handle the nuanced language of M&A agreements makes it an indispensable tool for law firms and corporate legal departments navigating these complex transactions.
The real estate sector is characterized by a high volume of contracts, including leases, purchase agreements, and mortgages. These documents often require careful review to ensure compliance with regulatory standards and to identify potential risks. Luminance has proven particularly effective in this domain. Its proprietary machine learning technology is designed to detect anomalies and categorize contracts quickly, making it ideal for real estate transactions. Luminance’s ability to analyze large datasets and flag non-standard clauses has been instrumental in helping real estate firms review leases and purchase agreements more efficiently. By automating the review process, Luminance allows legal teams to focus on strategic aspects of real estate deals, such as negotiation and risk mitigation.
The finance industry deals with a wide range of contracts, from loan agreements to derivatives, all of which must comply with strict regulatory standards. In this highly regulated environment, LawGeex has established itself as a trusted tool for contract review and compliance. LawGeex uses natural language processing (NLP) to compare contracts against predefined policies, flagging deviations and ensuring compliance with regulatory requirements. Its high accuracy rate—94% in spotting risks in non-disclosure agreements (NDAs), compared to 85% for human lawyers—makes it a valuable asset for financial institutions. By automating the review of high-volume contracts, LawGeex allows legal teams to focus on strategic risk management and regulatory compliance.
Conclusion: Algorithmic Precision Meets Strategic Expertise
The analysis of leading AI contract tools reveals a clear pattern: while each platform excels in specific domains—Kira in M&A due diligence, Luminance in anomaly detection, LawGeex in compliance—none yet offers a comprehensive solution for all contract-related tasks. This specialization reflects both the complexity of legal work and the current limitations of AI technology. The industry-specific applications demonstrate that AI tools are most effective when deployed strategically, focusing on tasks that benefit from pattern recognition and large-scale data processing, while leaving nuanced legal interpretation and strategic decision-making to human experts.
This bifurcation of responsibilities suggests an emerging model of legal practice where AI serves not as a replacement for lawyers but as a force multiplier for legal expertise. The success of platforms like Kira in M&A and LawGeex in financial compliance indicates that the future of legal technology lies not in attempting to replicate human judgment, but in augmenting it by handling routine analysis and flagging potential issues for expert review. As these technologies continue to evolve, the key challenge for legal practitioners will be developing workflows that effectively leverage AI’s analytical capabilities while preserving the critical role of human expertise in strategic legal thinking and complex decision-making.
A. Introduction
The bipartite structure of the American patent system, comprising the U.S. Patent and Trademark Office and the federal Article III court system, leads to interesting interactions between rulings made in each of the distinct subsystems. This is especially relevant to the patent system in the context of claim construction. Moreover, because the Federal Circuit has sole jurisdiction over patent appeals, its unique frameworks for analyzing procedural and substantive legal issues lead to facially surprising outcomes. Two recent cases applying Federal Circuit precedent illustrate this in relation to the judicial application of collateral estoppel, or issue preclusion.
B. The Broad Strokes of Issue Preclusion
Issue preclusion stands for the idea that, "[o]nce a matter [has been] properly litigated, that should be the end of the matter for the parties to that action." It is similar, in that sense, to res judicata, but issue preclusion has a distinct scope. Issue preclusion applies where the issue was previously and properly litigated and the decision on the matter was material to the case in which it was decided. The more important distinction between issue preclusion and res judicata, however, is that issue preclusion does not require "mutuality" between the parties in the case where issue preclusion is being asserted. In other words, the second litigation does not need to be between the same two parties, as is the case with res judicata.
The Federal Circuit, the relevant circuit for this discussion, recognizes exceptions to issue preclusion. The exception that interacts most with rulings from the PTAB (the USPTO's Patent Trial and Appeal Board) is that issue preclusion does not apply where the subsequent proceeding applies "a different legal standard." This was the deciding exception in the two cases that help us understand the implications of this Federal Circuit exception to issue preclusion.
C. Standard Disparities Between PTAB and Article III Courts
The USPTO applies its own particular standards during PTAB proceedings. These are statutorily defined, further interpreted in federal regulations, and expounded upon in the Manual of Patent Examining Procedure. For the purpose of understanding how PTAB rulings interact with decisions of Article III courts, we will focus primarily on rules relevant to claim construction and patent validity.
In evaluating the validity of patent claims, the USPTO applies the "broadest reasonable interpretation" standard when construing, or determining the meaning of, the claims of the patent under examination. This is relevant during initial examination of the patent by the USPTO's examiners and on appeal of a final rejection to the PTAB. In other words, the USPTO and PTAB look for the broadest interpretation of the language of the claims that remains reasonable. The courts, on the other hand, apply the Phillips Standard, which construes the claim as one of ordinary skill in the field would understand it in light of the specification and the prosecution history, that is, the record produced during the original examination of the patent.
This was a potential factor in the rejection of the application of collateral estoppel in our first case, DDR Holdings, LLC v. Priceline.com LLC. Interestingly, it was not a deciding factor in the disposition of the question of issue preclusion. Because issue preclusion is an affirmative defense, it must be raised in the answer to a complaint. DDR Holdings, however, failed to raise issue preclusion until its brief. The court explicitly notes that this is fatal in and of itself, but nevertheless evaluates the merits of the request for collateral estoppel. In this case, DDR Holdings sought to estop Priceline from arguing that "merchants providing a service" were not merchants covered under the definition of a merchant in DDR Holdings' '399 patent. Priceline's argument rested on the prosecution history of the '399 patent. During prosecution and initial examination, DDR Holdings had deleted any reference to "providing services" from the definition of merchant within the specification. DDR Holdings had, however, retained this earlier, service-inclusive definition by incorporating the provisional application that contained it by reference.
In light of this, the PTAB, during Inter Partes Review initiated by Priceline over the asserted ‘399 patent, found that the portion of the specification nominally defining “merchant” was not, in fact, definitional. Instead, the PTAB applied the broadest reasonable interpretation standard during the Inter Partes Review and found “merchants” to include “producers, distributors, or resellers of the goods or services to be sold.” In other words, the specification did not limit “merchants” because the PTAB did not find sufficient evidence to show that DDR Holdings intended to define “merchant” restrictively via the specification. As such, the broadest reasonable interpretation of “merchant” within the claims would necessarily include purveyors of services.
With this particular ruling, DDR Holdings asserted that the matter had been properly litigated and, therefore, Priceline should be collaterally estopped from asserting their differing construction of merchants in the case before the court. The court noted, however, that it was not bound by the decision of the PTAB because that decision applied the “Broadest Reasonable Interpretation” instead of the court’s Phillips Standard. Because the Federal Circuit, and by extension the district court, must apply the Phillips Standard, issue preclusion could not apply. In other words, Priceline could assert its construction that would exclude “service” providing from the ‘399 patent’s definition of merchants.
Under this standard, the Federal Circuit found the discussion of "merchant" in the '399 patent's specification to be definitional. Furthermore, the Federal Circuit found the deletion of "services" between the provisional and non-provisional applications to be material and, under the Phillips Standard, found it to explicitly exclude services from the coverage of the '399 claims.
The second case, Kroy IP Holdings, LLC v. Groupon, Inc., is more narrowly focused on challenges to validity, in the form of Inter Partes Review, before the PTAB. During an IPR proceeding, the PTAB applies a preponderance of the evidence standard when determining whether a patent is valid or invalid. This is at odds with the standard applied in Article III courts, which instead apply the clear and convincing evidence standard.
In Kroy, Groupon had previously initiated IPR of patents asserted by Kroy IP Holdings. In the IPRs, Groupon prevailed, and the asserted patents were found invalid. The district court then held that Kroy IP Holdings was collaterally estopped from re-litigating the patent validity issues presented. After Groupon's motion to dismiss was granted, Kroy IP Holdings appealed, arguing, among other things, that collateral estoppel should not have applied given the differing standards.
The Federal Circuit ultimately decided in Kroy IP Holdings' favor. First, it noted that, on its face, collateral estoppel, or issue preclusion, could not apply here. This finding resulted from the lower standard of proof required before the PTAB versus that required before the court.
The court then addressed a further exception to this exception. Groupon had argued, in favor of precluding Kroy IP Holdings from re-litigating patent validity, that the Federal Circuit's previous decisions stated that PTAB invalidity findings were themselves preclusive. The Federal Circuit disagreed and clarified: PTAB findings on validity become preclusive only once the Federal Circuit has affirmed them. A natural result of this, as described by the court, is that claims found invalid by the PTAB remain in existence until the decision is appealed to the Federal Circuit and affirmed, or until a district court, applying its heightened standard, independently finds the claims invalid. In other words, only once patent validity has been evaluated under the clear and convincing standard, either on appeal to the Federal Circuit or as a matter of first impression in front of a district court, does the disposition gain preclusive effect.
Because the District Court based its dismissal on the preclusive effect of the PTAB findings and said PTAB findings had not been appealed to the Federal Circuit, the court reversed and remanded.
D. Conclusion
The cases discussed above illustrate a distinct challenge facing the bipartite American patent system. Because of the differing standards applied by the two bodies that hold sway over patent litigation, parties can get a functional second bite at the apple when moving between the USPTO and Article III courts. This system is ultimately imperfect and creates duplicative litigation, as demonstrated in both of these cases. This is counterbalanced by the increased efficiency the USPTO and PTAB ostensibly provide to the American intellectual property system. In summary, because of the structure of the American patent system, issue preclusion, or collateral estoppel, remains difficult to invoke in patent litigation.
Introduction
Algorithmic bias is AI’s Achilles heel, revealing how machines are only as unbiased as the humans behind them.
The most prevalent real-world stage for human-versus-machine bias is the job search process. What started as newspaper ads and flyers at local coffee shops is now a completely digital process with click-through ads, interactive chatbots, resume data translation, and computer-screened candidate interviews.
Artificial intelligence encompasses a wide variety of tools, but in the HR context specifically, common AI tools include machine learning algorithms that conduct complex, layered statistical analysis modeled on human cognition (neural networks), computer vision that classifies and labels content in images or video, and large language models.
AI-enabled employment tools are powerful gatekeepers that determine the futures of natural persons. Over 70% of companies use this technology, investing in its promise of efficiency and neutrality, but those promises have recently come into question as these technologies have shown the potential to discriminate against protected classes.
Anecdote
On February 20, 2024, Plaintiff Derek Mobley initiated a class action lawsuit against WorkDay, Inc., an AI-enabled HR organization, for engaging in a "pattern and practice" of discrimination based on race, age, and disability in violation of the Civil Rights Act of 1964, the Civil Rights Act of 1866, the Age Discrimination in Employment Act of 1967, and the ADA Amendments Act of 2008. WorkDay, Inc., according to the complaint, disproportionately disqualifies African-Americans, individuals over the age of 40, and individuals with disabilities from securing gainful employment.
WorkDay provides subscription-based AI HR solutions to medium- and large-sized firms in a variety of industries. The system screens candidates based on human inputs and algorithms, and according to the complaint, WorkDay employs an automated system, in lieu of human judgment, to determine how high volumes of applicants should be processed on behalf of its business clients.
The plaintiff and members of the class have applied for numerous jobs that use WorkDay’s platforms and received several rejections. This process has deterred the plaintiff and members of the class from applying to companies that use WorkDay’s platform.
Legal History of AI Employment Discrimination
Mobley v. WorkDay is the first class action lawsuit against an AI solution company for employment discrimination, but this is not the first time an AI organization has been sued for employment discrimination.
In August 2023, the EEOC settled a first-of-its-kind employment discrimination lawsuit against a virtual tutoring company that programmed its recruitment software to automatically reject older candidates. The company was required to pay $325,000 and, if it resumes hiring efforts in the US, to invite all applicants from the April-May 2020 period who were rejected based on age to re-apply.
Prior to this settlement, the EEOC issued guidance to employers about their use of artificial intelligence tools that extends existing employee selection guidelines to AI-assisted selections. Under this guidance, employers, not third-party vendors, ultimately bear the risk of unintended adverse discrimination from such tools.
How Do HR AI Solutions Introduce Bias?
There are several steps in the job search process, and AI is integrated throughout. The steps include the initial search, narrowing candidates, and screening.
Initial search
The job search process starts with targeted ads reaching the right people. Algorithms in hiring can steer job ads toward specific candidates and help assess their competencies using new and novel data. HR professionals have found these tools helpful in drafting precise language and designing ads around position elements, content, and requirements. But these platforms can inadvertently reinforce gender and racial stereotypes by delivering ads to candidates who fit certain job stereotypes.
For instance, ads delivered on Facebook for stereotypically male jobs are overwhelmingly targeted at male users even though the advertising was intended to reach a gender-neutral audience. Essentially, at this step of the job search process, algorithms can prevent capable candidates from ever seeing the job posting in the first place, which further creates a barrier to employment.
Narrowing Candidates
After candidates have viewed and applied for the job through an ad or another source, the next step AI streamlines is the candidate narrowing process. At this step, the system narrows candidates by reviewing the resumes that best match the historical hiring data from the company or its training data. Applicants found the resume-to-application-form data transfers helpful and accurate in this step of the process, but they were concerned that the model could miss necessary information.
From the company's perspective, hiring practices from the client company are still incorporated into the hiring criteria in the licensed model. While the algorithm is helpful in parsing vast numbers of resumes and streamlining this laborious process for professionals, it can replicate and amplify existing biases in the company's data.
For example, a manager's past decisions may lead to anchoring bias. If biases involving gender, education, race, or age existed in the past and are present in the employer's current high-performing employees that the company uses as a benchmark, those biases can be incorporated into the outcomes at this stage of the employment search process.
Screening
Some organizations subscribe to AI tools that have a computer vision-powered virtual interview process that analyzes the candidates’ expressions to determine whether they fit the “ideal candidate” profile, while other tools like behavior/skills games are used to screen candidates prior to an in-person interview.
Computer vision models that analyze candidate expressions to assess candidacy have been found to perpetuate preexisting biases against people of color. For instance, a study evaluating such tools found that their taxonomies of social and behavioral components create and sustain the same kind of biased observations that one human would make about another, because a model built on those labels and taxonomies is trained within existing power hierarchies. In this sense, computer vision AI hiring tools are not neutral because they reflect the humans that train and rely on them.
Similarly, skill games are another popular tool used to screen candidates. However, there are some relationships AI cannot perceive in its analysis. For instance, candidates who are not adept with online games perform poorly on those games not because they lack the relevant skills, but because they lack an understanding of the game's features. Algorithms, while trained on vast data to assess candidate ability, still fall short when it comes to assessing general human relationships like the one between online game experience and performance on employment skills tests.
Throughout each step of the employment search process, AI tools fall short in accurately capturing candidates' potential and capabilities.
Discrimination Theories and AI
Given that the potential for bias is embedded throughout the employment search process, legal scholars speculate courts are more likely to scrutinize discriminatory outcomes under the disparate impact theory of discrimination.
As a recap, under Title VII there are two theories of discrimination: disparate treatment and disparate impact. Disparate treatment means the person is treated differently "because of" their status as a member of a protected class (i.e., race, sex). For example, if a manager were to intentionally use a biased algorithm to screen out candidates of a certain race, that behavior would constitute disparate treatment. Note, this scenario is for illustrative purposes only.
Disparate impact applies to facially neutral processes that have a discriminatory effect. The discriminatory-effect aspect of this theory can be complex because the plaintiff must identify the employer practice that has a disparate impact on a protected group. The employer can then defend the practice by showing it is "job related" and consistent with "business necessity." However, the plaintiff can still show that an alternative selection process existed and the business failed to adopt it. Under this theory, it is possible that when AI selection tools disproportionately screen women and/or racial minorities out of the applicant pool, disparate impact could apply.
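Disparate impact is often screened quantitatively before it is litigated. One common heuristic, the EEOC's "four-fifths rule" (an addition here for illustration, not something this post relies on), flags a selection procedure when a protected group's selection rate falls below 80% of the most-selected group's rate. A minimal sketch with hypothetical numbers:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applicants)."""
    return {g: sel / apps for g, (sel, apps) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> list[str]:
    """Groups whose selection rate is below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < 0.8 * best]

# Hypothetical numbers for illustration only.
data = {"group_a": (50, 100), "group_b": (15, 100)}
print(four_fifths_flags(data))  # ['group_b']: 0.15 is below 0.8 * 0.50
```

A flag under this heuristic is a starting point for scrutiny, not proof of liability; the legal analysis still turns on the job-relatedness and business-necessity defenses described above.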
Existing Methods to Mitigate Bias
Algorithmic bias in AI tools has serious implications for members of protected classes.
However, developers currently employ various techniques to de-bias algorithms and improve their accuracy. One method is de-biased word embedding, in which neutral associations of a word are added to expand the model's understanding of the word. For instance, a common stereotype is that men are doctors and women are nurses, or in algorithmic terms, "doctor – man + woman = nurse." With de-biased word embeddings, the model is instead trained to understand "doctor – man + woman = doctor."
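The "doctor – man + woman" arithmetic can be reproduced with toy vectors. The sketch below uses invented 2-D embeddings (a real model learns hundreds of dimensions from text, and real debiasing pipelines are more involved) to show both the biased analogy and the core of one debiasing approach: projecting the gender direction out of profession words that should be gender-neutral.

```python
import math

# Toy 2-D "embeddings": dimension 0 loosely encodes gender, dimension 1 profession.
# These vectors are invented for illustration; real embeddings are learned from text.
emb = {
    "man":     [ 1.0, 0.0],
    "woman":   [-1.0, 0.0],
    "doctor":  [ 0.9, 1.0],   # biased: leans toward "man"
    "nurse":   [-0.9, 1.0],   # biased: leans toward "woman"
    "teacher": [ 0.0, 1.0],
}

def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def nearest(vec, exclude=()):
    """Closest word by cosine similarity, skipping excluded words."""
    def cos(a, b):
        return dot(a, b) / (math.hypot(*a) * math.hypot(*b))
    return max((w for w in emb if w not in exclude), key=lambda w: cos(vec, emb[w]))

# Biased analogy: doctor - man + woman lands nearest "nurse".
query = add(sub(emb["doctor"], emb["man"]), emb["woman"])
print(nearest(query, exclude=("doctor", "man", "woman")))  # nurse

# Debiasing: remove the gender component from gender-neutral profession words.
g = sub(emb["man"], emb["woman"])        # gender direction
g = [x / math.hypot(*g) for x in g]      # normalized
for word in ("doctor", "nurse"):
    c = dot(emb[word], g)
    emb[word] = [x - c * gx for x, gx in zip(emb[word], g)]

query = add(sub(emb["doctor"], emb["man"]), emb["woman"])
print(nearest(query, exclude=("man", "woman")))  # doctor
```

After the projection, "doctor" and "nurse" no longer carry a gender component, so the analogy arithmetic no longer maps "doctor" onto "nurse".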
Another practice, currently employed by OpenAI, is external red teaming, in which external stakeholders interact with the product, assess its weaknesses, potential for bias, and other adverse consequences, and provide feedback to OpenAI to improve the product and mitigate adverse events.
But there are limitations to these enhancements. To start, bias mitigation is not a one-size-fits-all issue. Bias is specific to its geographic and cultural bounds; for instance, a model in India may need to consider caste-based discrimination. Additionally, precision is required to capture the frames where bias is possible, and relying solely on bias foreseeable from the developers' perspective is limiting. Rather, some form of collaborative design is needed, in which relevant stakeholders contribute to identifying what is, and is not, biased.
Lastly, a debiased model is not a panacea. In a recent study in which users interacted with a debiased model that used machine learning and deep learning to recommend college majors, users relied on their own biases to choose their majors regardless of the debiased model's output, often motivated by gender stereotypes associated with those majors.
Essentially, solutions from the developer side are not enough to resolve algorithmic bias issues.
Efforts to Regulate AI Employment Discrimination
Federal law does not specifically govern artificial intelligence. However, existing laws, including Title VII, extend to applications that involve AI. At this point, regulation efforts are largely at the state and local government level.
New York City is the first local government to pass a law regulating AI-empowered employment decision tools. The statute requires organizations to notify candidates, before using a screening device, that AI will be used in the hiring process. If candidates do not consent to the AI-based process, the organization is required to use an alternative method.
Like New York’s statute, Connecticut’s applies specifically to state agencies’ use of AI and machine-learning hiring tools. Connecticut requires an annual review of each tool’s performance and a status update on whether the tool has undergone some form of bias-mitigation training in an effort to prevent unlawful discrimination.
New Jersey, California, and Washington D.C. currently have bills that are intended to prevent discrimination with AI hiring systems.
Employer Considerations
With the possibility of bias embedded throughout each step of the recruiting process, employers must do their part to gather information about the performance of the AI system they ultimately invest in.
To start, recruiters and managers alike stressed the need for AI systems to provide some explanation of why an applicant is rejected or selected so that the model’s performance can be accurately assessed. This need speaks specifically to AI models’ tendency to find proxies or shortcuts in the data that reach the intended outcome only superficially. For instance, a model might select candidates by focusing only on those who graduated from universities in the Midwest because most of upper management attended such schools. Accordingly, employers should look for accuracy reports and ask vendors how they identify and correct such issues in the hiring pool.
Similarly, models can focus on candidate traits that are unrelated to the job and are simply unexplained correlations. For example, one model in the UK linked people who liked “curly fries” on Facebook with higher levels of intelligence. In such cases, employers need to develop processes to analyze whether the output from the model was “job related” or related to carrying out the functions of the business.
Lastly, employers must continue to invest in robust diversity training. Algorithmic bias reflects the bias of the humans behind the computer. While AI tools enhance productivity and alleviate the laborious parts of work, they also increase the pressure on humans to do more cognitively intensive work. In this sense, managers need robust diversity training to scrutinize outputs from AI models, to investigate whether the model measured what it was supposed to, and to assess whether the skills required in the posting accurately reflect the expectations and culture of the organization.
Along with robust managerial training, because these AI solutions often incorporate “culture fit” as a criterion, leaders need to be intentional about precisely defining culture and promoting that defined culture in their hiring practices.
Conclusion
A machine does not know its output is biased. Humans interact with context—culture dictates norms and expectations, shared social/cultural history informs bias. Humans, whether we like to admit it or not, know when our output is biased.
To effectively mitigate unintentional bias in AI-driven hiring, stakeholders, ranging from HR professionals to developers and candidates, must understand the technology’s limitations, ensure its job-related decision-making accuracy, and promote transparent, informed use, while also maintaining robust DEI initiatives and awareness of candidates’ rights.
Trending Uses of Deepfakes
Deepfake technology, leveraging sophisticated artificial intelligence, is rapidly reshaping the entertainment industry by enabling the creation of hyper-realistic video and audio content. This technology can convincingly depict well-known personalities saying or doing things they never actually did, creating entirely new content that did not really occur. The revived Star Wars franchise used deepfake technology in “Rogue One: A Star Wars Story” to reintroduce characters like Moff Tarkin and Princess Leia, skillfully bringing back these roles despite the original actors, including Peter Cushing, having passed away. Similarly, in the music industry, deepfake has also been employed creatively, as illustrated by Paul Shales’ project for The Strokes’ music video “Bad Decisions.” Shales used deepfake to make the band members appear as their younger selves without them physically appearing in the video.
While deepfakes offer promising avenues for innovation, such as rejuvenating actors or reviving deceased ones, they simultaneously pose unprecedented challenges to traditional copyright and privacy norms.
Protections for Deepfakes
Whereas deepfakes generate significant concerns, particularly about protecting individuals against deepfake creations, there is also controversy over whether the creators of deepfake works can secure copyright protection for their creations.
Copyrightability of Deepfake Creations
Current copyright laws fall short in addressing the unique challenges posed by deepfakes. These laws are primarily designed to protect original works of authorship that are fixed in a tangible medium of expression. However, they do not readily apply to the intangible, yet creative and recognizable, expressions that deepfake technology replicates. This gap exposes a crucial need for legal reforms that can address the nuances of AI-generated content and protect the rights of original creators and the public figures depicted.
Under U.S. copyright law, human authorship is an essential requirement for a valid copyright claim. In the 2023 case Thaler v. Perlmutter, plaintiff Stephen Thaler attempted to register a copyright for a visual artwork produced by his “Creativity Machine,” listing the computer system as the author. However, the Copyright Office rejected this claim due to the absence of human authorship, a decision later affirmed by the court. According to the Copyright Act of 1976, a work must have a human “author” to be copyrightable. The court further held that providing copyright protection to works produced exclusively by AI systems, without any human involvement, would contradict the primary objectives of copyright law, which is to promote human creativity—a cornerstone of U.S. copyright law since its beginning. Non-human actors need no incentivization with the promise of exclusive rights, and copyright was therefore not designed to reach them.
However, the court acknowledged ongoing uncertainties surrounding AI authorship and copyright. Judge Howell highlighted that future developments in AI would prompt intricate questions. These include determining the degree of human involvement necessary for someone using an AI system to be recognized as the ‘author’ of the produced work, the level of protection afforded the resultant image, ways to assess the originality of AI-generated works based on non-disclosed pre-existing content, the best application of copyright to foster AI-involved creativity, and other associated concerns.
Protections Against Deepfakes
The exploration of copyright issues in the realm of deepfakes is partially driven by the inadequacies of other legal doctrines to fully address the unique challenges posed by this technology. For example, defamation law focuses on false factual allegations and fails to cover deepfakes lacking clear false assertions, like a manipulated video without specific claims. Trademark infringement, with its commercial use requirement, does not protect against non-commercial deepfakes, such as political propaganda. The right of publicity laws mainly protect commercial images rather than personal dignity, leaving non-celebrities and non-human entities like animated characters without recourse. False light requires proving substantial emotional distress from misleading representations, a high legal bar. Moreover, common law fraud demands proof of intentional misrepresentation and tangible harm, which may not always align with the harms caused by deepfakes.
Given these shortcomings, it is essential to discuss issues in other legal areas, such as copyright issues, to enhance protection against the misuse of deepfake technology. In particular, the following sections will explore unauthorized uses of likeness and voice and the impacts of deepfakes on original works. These discussions are critical because they aim to address gaps left by other legal doctrines, which may not fully capture the challenges posed by deepfakes, thereby providing a broader scope for protection.
Unauthorized Use of Likeness and Voice
Deepfakes’ capacity to precisely replicate an individual’s likeness and voice may raise intricate legal issues. AI-generated deepfakes, while sometimes satirical or artistic, can also be harmful. For example, Taylor Swift has repeatedly become a target of deepfakes, including instances where Donald Trump’s supporters circulated AI-generated videos that falsely depict her endorsing Trump and participating in election denialism. This represents just one of several occasions where her likeness has been manipulated, underscoring the broader issue of unauthorized deepfake usage.
The Tennessee ELVIS Act updates personal rights protection laws to cover the unauthorized use of an individual’s image or voice, adding liabilities for those who distribute technology used for such infringements. In addition, on January 10, 2024, Reps. María Elvira Salazar and Madeleine Dean introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act (H.R. 6943). This bill is designed to create a federal framework to protect individual rights to one’s likeness and voice against AI-generated counterfeits and fabrications. Under this bill, digitally created content using an individual’s likeness or voice would only be permissible if the person is over 18 and has provided written consent through a legal agreement or a valid collective bargaining agreement. The bill specifies that sufficient grounds for seeking relief from unauthorized use include financial or physical harm, severe emotional distress to the content’s subject, or potential public deception or confusion. Violations of these rights could lead individuals to pursue legal action against providers of “personalized cloning services” — including algorithms and software primarily used to produce digital voice replicas or depictions. Plaintiffs could seek $50,000 per violation or actual damages, along with any profits made from the unauthorized use.
Impact on Original Work
The creation of deepfakes can impact the copyright of original works. It is unclear whether deepfakes should be considered derivative works or entirely new creations.
In the U.S., a significant issue is the broad application of the fair use doctrine. Under 17 U.S.C. § 107 of the Copyright Act, fair use is determined by a four-factor test assessing (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the portion used, and (4) the impact on the work’s market potential. This doctrine includes protection for deepfakes deemed “transformative use,” a concept from the Campbell v. Acuff-Rose decision, where the new work significantly alters the original with a new expression, meaning, or message. In such cases, even if a deepfake copies significantly from the original, it may still qualify for fair use protection if it is transformative and does not impact the original’s market value.
However, this broad application of the fair use doctrine and liberal interpretation of transformative use do not work in favor of the original creators. They may protect deepfake content even when it is created with malicious intent, which makes it difficult for original creators to bring claims under § 512 of the DMCA and § 230 of the Communications Decency Act.
Federal and State Deepfake Legislation
“Copyright is designed to adapt with the times.” At present, although the United States lacks comprehensive federal legislation that specifically bans or regulates deepfakes, several acts target deepfakes.
In Congress, a few proposed bills aim to regulate AI-generated content by requiring specific disclosures. The AI Disclosure Act of 2023 (H.R. 3831) requires any content created by AI to include a notice stating, “Disclaimer: this output has been generated by artificial intelligence.” The AI Labeling Act of 2023 (S. 2691) also demands a similar notice, with additional requirements for the disclaimer to be clear and difficult to alter. The REAL Political Advertisements Act (H.R. 3044 and S. 1596) demands disclaimers for any political ads that are wholly or partly produced by AI. Furthermore, the DEEPFAKES Accountability Act (H.R. 5586) requires that any deepfake video, whether of a political figure or not, must carry a disclaimer. It is designed to defend national security from the risks associated with deepfakes and to offer legal remedies to individuals harmed by such content. The DEFIANCE Act of 2024 aims to enhance the rights to legal recourse for individuals impacted by non-consensual intimate digital forgeries, among other objectives.
On the state level, several states have passed legislation to regulate deepfakes, addressing various aspects of this technology through specific legal measures. For example, Texas SB 751 criminalizes the creation of deceptive videos with the intent to damage political candidates or influence elections. In Florida, SB 1798 targets the protection of minors by prohibiting the digital alteration of images to depict minors in sexual acts. Washington HB 1999 provides both civil and criminal remedies for victims of fabricated sexually explicit images.
This year, California enacted AB 2839, targeting the distribution of “materially deceptive” AI-generated deepfakes on social media that mimic political candidates and are known by the poster to be false, as the deepfakes could mislead voters. However, a California judge recently decided that the state cannot yet compel individuals to remove such election-related deepfakes, since AB 2839 facially violates the First Amendment.
These developments highlight the diverse strategies that states are employing to address the challenges presented by deepfake technology. Despite these efforts, the laws remain incomplete and continue to face challenges, such as concerns over First Amendment rights.
Conclusion
As deepfake technology evolves, it challenges copyright laws, prompting a need for robust legal responses. Federal and state legislation is crucial in protecting individual rights and the integrity of original works against unauthorized use and manipulation. As deepfake technology advances, continuous refinement of these laws will be crucial to balance innovation with ethical and legal boundaries, ensuring protection against the potential harms of deepfakes.
I. INTRODUCTION
In May of 2024, the Federal Circuit overruled 40 years of precedent for assessing the obviousness of design patents in LKQ Corp. v. GM Global Technology Operations LLC. Already, commentators and practitioners have a wide array of opinions about the impacts of LKQ. If recent history is any guide, however, declarative statements about the impacts of LKQ are premature, and they create risks to businesses, practitioners, and courts alike. Rather, patent law observers should adopt a wait-and-see approach for the impacts of LKQ on design patent obviousness.
II. THE LKQ DECISION
In LKQ, the Federal Circuit addressed the standard for assessing design patent obviousness under 35 U.S.C. § 103. Before this decision, to find a claimed design unpatentable as obvious, the two-part Rosen–Durling test required a primary reference that was “basically the same” as the claimed design and secondary references “so related” to the primary reference that their features suggested combination with the primary reference.
In this case, the Federal Circuit held that framework to be too rigid under the Patent Act. Instead, the court ruled that the obviousness of a claimed design is to be determined through the application of the familiar Graham four-part test used to assess the obviousness of utility patents.
A. EARLY OPINIONS ABOUT LKQ
In the months since LKQ, opinions about the impacts of the decision have poured in from academics, practitioners, and commentators alike. Some predict a seismic shift, stating that the “far-” and “wide-reaching consequences” of LKQ will likely make design patents harder to obtain and easier to invalidate. Others predict little change at all, stating that the obviousness test “is largely the same as before” and that the expected changes from LKQ are primarily procedural. Still others seem to have landed on a middle ground, expecting “noticeable differences” in the law, with “examiners [having] more freedom to establish that the prior art is properly usable in an obviousness rejection.”
B. PARALLELS WITH KSR
LKQ is not the only recent decision dealing with obviousness that evoked immediate and wide-ranging reactions. In 2007, the Supreme Court issued KSR International Co. v. Teleflex Inc., a decision addressing the obviousness standard for patents. Notably, the Court rejected the Federal Circuit’s rigid application of its “teaching, suggestion, or motivation” test for obviousness to a utility patent in that case.
In the immediate aftermath of that case, commentators and practitioners were “divided on whether the decision of the Supreme Court in KSR [was] (a) a radical departure from the Federal Circuit’s approach, or (b) unlikely to change much.” Even after the Federal Circuit began to issue decisions under KSR, some argued that the case had only a “modest impact” on the Federal Circuit, and others even questioned “whether the Supreme Court achieved anything in KSR other than giving the Federal Circuit a slap on the wrist.”
Experts were also divided on the likely business impacts of KSR in its immediate aftermath. In the summer after the decision came down, two distinguished patent law experts speaking on a panel were asked if KSR would drive up the cost of preparing and prosecuting a patent. One said yes, and the other said no.
C. CAUTIONARY TALES FROM KSR
As time went on, however, the impacts of KSR became clear. Empirical studies from years after the decision routinely proved that the impacts of KSR were anything but modest, contradicting “a commonly held belief that KSR did not change the law of obviousness significantly.” Various empirical studies revealed “strong evidence that KSR has indeed altered the outcomes of the Federal Circuit’s obviousness determinations,” “a remarkable shift in the Federal Circuit’s willingness to uphold findings of obviousness below,” and that “the benefit of retrospection shows KSR did change the rate of obviousness findings.”
Thus, KSR should serve as a cautionary tale against jumping to conclusions about the impacts of obviousness decisions. In the months following KSR, any declarative statements about its impacts were mere speculation. Even after the Federal Circuit began issuing decisions under KSR, the sample size remained too small to draw conclusions. Only years after the decision could researchers illuminate the impacts of KSR through empirical studies and show which of those early opinions were right and wrong.
III. THE WISDOM OF A WAIT-AND-SEE APPROACH FOR LKQ
Since the Federal Circuit only issued LKQ in May of 2024, we remain in the window where any declarative statements about its impacts are premature. Indeed, the Federal Circuit acknowledged that “there may be some degree of uncertainty for at least a brief period” in its LKQ opinion. While the urge to jump to conclusions is understandable, a wait-and-see approach offers many advantages.
First, as KSR demonstrated, early predictions may be inaccurate and may influence practitioners to adopt misguided design patent prosecution strategies. Overstating the impacts of LKQ may lead to overly cautious design patent applications, leaving intellectual property unprotected. A wait-and-see approach will allow prosecution strategies to develop based on reliable trends, reducing the risk of costly errors.
Second, the Federal Circuit almost certainly has more to say about design patent obviousness than it included in its LKQ opinion. Faulty strategy changes based on an incomplete picture may later need to be undone at great expense. Waiting allows the courts to solidify the impacts of LKQ so that practitioners and businesses can adjust their approaches – if that is necessary – with greater certainty and lower risk.
Third, overreacting to speculative predictions could cause companies to shift their design-around strategies, leading to unnecessary and wasteful changes in product lines. A wait-and-see approach allows companies to maintain their creative momentum and keep their design strategies consistent until the impacts of LKQ are better understood.
Fourth, design patents have experienced a boom in recent years. Premature predictions about LKQ risk skewing the perceptions of business leaders and the public about the continued value in pursuing design patent protections. By waiting to confirm the impacts of LKQ, commentators avoid this risk.
Fifth, predictions about LKQ could become self-fulfilling prophecies. Widespread speculation could unintentionally influence how courts evaluate obviousness in future cases. A wait-and-see approach allows courts to evaluate obviousness free from the noise of speculative predictions, focusing exclusively on the application of the law to the facts of each case.
Lastly, practitioners face potential backlash from clients if they offer advice that turns out to be too aggressive or pessimistic. By advocating patience to their clients, practitioners can maintain client trust and offer more measured and thoughtful advice once the implications of LKQ become clear.
IV. WHEN WILL WE KNOW?
This all raises the question: when will we understand LKQ well enough that declarative statements about its impacts are appropriate? Again, we can turn to KSR for guidance.
More than a year after KSR was handed down, some were still questioning if the decision had any impact at all. The first empirical studies of its impacts seemed to emerge about two to three years after the decision, uniformly finding that it altered the law of obviousness. Therefore, it seems safe to assume that empirical studies will illuminate the impacts of LKQ in 2026. Until then, patent law observers should wait and see.
V. CONCLUSION
With the recent history of KSR as our guide, patent law observers should adopt a wait-and-see approach for the impacts of the Federal Circuit’s recent decision in LKQ. At this early stage, improper speculation and declarative statements about the impacts of the case create risks for businesses, practitioners, and courts. Instead, a wait-and-see approach allows reliable trends to guide prosecution strategies and allows design patent momentum to continue. In due time, empirical studies will emerge and make the impacts of LKQ clear to all.
Proponents of virtual reality (VR) as a medium for evidence in the courtroom have argued that it can bring many benefits to jurors, including enhanced empathy and better factual understanding. However, it is also speculated that VR could amplify a juror’s biases or create a false sense of accuracy. As VR technology advances, the legal field faces the challenge of balancing innovation with impartiality, paving the way for standards that will determine the future role of VR in trials. By examining VR’s speculative and actual impacts on evidence presentation, we gain insight into how this technology could further affect the legal landscape.
I. What Is VR and How Does It Relate To Evidence?
In its broadest sense, VR is “a simulated three-dimensional (3D) environment that lets users explore and interact with a virtual surrounding in a way that approximates reality, as it’s perceived through the users’ senses.” VR technology primarily utilizes headwear that covers the eyes completely, presenting a three-dimensional immersive world in a 360-degree spherical field of view. Although VR technology is gaining popularity, many people do not use or encounter it in daily life. While VR has been trendy for recreational use, such as VR video games, it has also been implemented in many professional settings for training, education, healthcare, retail, real estate, and more. The visual, auditory, and even tactile aspects of virtual reality, ranging from vibrations to full-body haptic suits, allow the immersion to feel more ‘real’ and thus enable these practical applications.
These practical applications have led to speculation and interest in using VR technology in the legal field. One of the primary ideas is that jurors can “experience” scenes of the case rather than physically going there. Jurors have shown a desire to visit crime scenes in homicide cases when the scene itself is relevant to the conviction. VR technology can help overcome the hurdles of photographs or videos, as juries can ‘virtually witness’ the scene and simulated events. The power of VR technology to transport jurors to the scene of the crime can also help make complex cases more understandable.
II. Evidentiary Concerns with VR Evidence
Before addressing VR evidence’s potential benefits and harms, it is necessary to consider its admissibility. Federal Rules of Evidence, such as hearsay and authentication, present unique challenges for the admissibility of VR evidence.
Under the Federal Rules of Evidence, hearsay is an out-of-court statement offered to prove the truth of the matter asserted. For example, to prove person A loves VR, person B testifies that person A told them, “I love VR.” In this example, person B testifying to person A’s statement is hearsay. Hearsay is not admissible unless it meets an exemption or exception in the Federal Rules of Evidence. While the exact use of VR evidence containing out-of-court statements would vary case by case, a VR presentation of evidence could be admissible for a secondary purpose: not for the truth of the matter asserted but to clarify other admissible evidence. For example, if there is an admissible recording of a witness describing a crime scene, VR evidence could help contextualize the testimony and immerse the jurors in the scene. In this case, the purpose would not be to prove the scene looked exactly as described or as it appeared in VR, but to clarify the witness’s admissible testimony. While this may keep VR demonstrations out of jury deliberations, they can still be shown in the courtroom.
Another unique issue that comes with introducing VR presentations of evidence is authentication. According to the Federal Rules of Evidence, to introduce evidence, a proponent must produce evidence “sufficient to support a finding that the item is what the proponent claims it is.” This presents a unique problem for VR demonstrations because a proponent must show that the VR evidence is authentic. For example, with a photograph, a witness can authenticate it by testifying to taking the photo or confirming it accurately represents its contents. However, because VR is created as a simulation rather than a direct capture, it cannot be authenticated the same way as a photograph. A proponent would instead rely on Federal Rule of Evidence 901(b)(9) for authentication. Because this rule alone would not be sufficient, one guideline for admitting VR evidence is that the proponent should “demonstrate that a qualified expert created the VR demonstration using an accurate program and equipment.” The proponent should also show that all data used to create the demonstration was accurate and that no unfounded assumptions were made. Lastly, the proponent must present witness testimony to “verify the accuracy of the final product.”
III. Speculated and Actual Benefits of VR Evidence
As VR technology has become cheaper, more mainstream, and more widely used, it has appeared in actual cases, making its wider use more achievable. One of the primary speculated benefits was the immersive nature of VR, allowing jurors to engage more deeply with evidence by experiencing crime scenes and potentially re-created events firsthand. Another speculated benefit is VR’s potential to “appeal to a jury’s emotional and subconscious” responses through its immersive nature.
Real-life implementations of VR evidence have already illustrated some of these benefits. One example comes from Marc Lamber and James Goodnow, personal injury attorneys who have implemented VR technology in cases to “transport a jury to an accident scene.” Lamber and Goodnow work with engineers, experts, and production companies to recreate the scene where an injury or death occurred. This has allowed jurors not only to visualize the circumstances, events, and injury but also to empathize more deeply with the injured person’s suffering and the aftermath of the incident. This ability to ‘transport’ jurors to the scene can be incredibly impactful, as it may be hard for jurors to visualize the scene in an isolated courtroom. One study in Australia focused on how VR can affect a jury’s ability to reach the ‘correct’ verdict. Researchers, legal professionals, police officers, and forensic scientists simulated a hit-and-run scene in both VR and photographs, then split jurors into groups to test the differences. The study found that VR required significantly less effort than photographs to construct a coherent narrative, leading jurors to reach the correct verdict 9.5 times more frequently than those who relied on photographs alone. The immersive technology also gave jurors a better memory of the critical details of the case; the photograph group had difficulty visualizing the events of the case from the photographs alone. The researchers called the study “unequivocal evidence that interactive technology leads to fairer and more consistent verdicts.”
IV. Speculated and Actual Harms of VR Evidence
While the immersive nature of VR technology has brought speculations about potential benefits for the legal field, concerns have emerged about possible harm or shortcomings of VR technology as evidence. The primary concerns are about potential biases and costs.
VR technology might cause jurors to impermissibly judge parties, especially defendants in criminal trials, differently according to underlying biases that they hold. One study found that mock jurors who used VR technology to understand a criminal trial were more likely to judge a Black defendant more harshly than a white one. These studies used VR to simulate scenes from trials and, through computer generation, swapped the races of the defendants to test for differences in guilty verdicts and sentencing. Salmanowitz’s study found that using an avatar instead of an accurate visual representation of the defendant can reduce implicit bias based on race. The avatars were represented by only the handheld controllers visible in the virtual space, and with them, the VR technology made no substantial difference in the jury’s decisions. However, a study by Samantha Bielen et al. found that jurors using VR may be biased against non-white defendants: on the same evidence, non-white defendants were more likely than white defendants to be found guilty when jurors used VR.
The cost of VR also presents a barrier to implementing VR technology in courts. In the Australian study, a researcher noted that using VR as an evidentiary medium is “expensive, especially in remote locations, and in some cases, the site itself has changed, making accurate viewings impossible.” VR technology is expensive, with even the cheapest consumer-grade headsets costing around $500. Further, digital recreation of the scene starts at $15,000 but can “go up to six figures depending on complexity.”
V. Conclusion
While the balance between the benefits and harms of introducing VR as a medium for evidence may vary greatly case by case, overall, the demonstrated advantages in improving jurors’ factual understanding tend to outweigh the drawbacks. Although speculation is a natural reaction to new technologies, as VR finds real-world application in courtrooms, its tangible benefits and harms have become clearer. This allows us to revisit the initial speculation and more effectively address this balance and the admissibility concerns that accompany the use of VR demonstrations as evidence. Increased use of and advancements in VR technology could amplify these benefits by increasing empathy and accuracy and tempering the effects of emotional bias. With this evolution in VR technology, the potential for an immersive yet balanced use of VR in the courtroom grows, offering jurors an even greater ability to engage with evidence to enhance understanding, minimize bias, and support fairer, more informed verdicts.

In April 2023, drama unfolded on Twitter, and it revolved around olive oil. Andrew Benin, the co-founder of Graza, a start-up single-origin olive oil brand that comes in two adorable green squeeze bottles, publicly called out rival Brightland for allegedly copying his squeezable olive oil idea. Mr. Benin wrote, “While friendly competition was always welcome, I do view this as a blatant disrespect and am choosing to voice my discontent.” In response, the internet angrily clapped back, as the internet does. One of these dissidents was Alison Cayne, the founder of Haven’s Kitchen, a cooking company. She wrote, “with all due respect, you did not create the squeeze bottle. Chefs and home cooks have been using it for decades.” Another commenter, Gabrielle Mustapich, a co-founder at Hardpops & Pilothouse Brands, added, “my mom was buying squeezy bottle olive oil in 2007 (and it wasn’t Graza).”
Ms. Mustapich is right – squeeze bottles have been ubiquitous in chefs’ kitchens for years, though they seem to be growing in popularity in home kitchens. Whether or not Graza can take credit for that societal shift, the brand arguably deserves some recognition for doing things differently in the olive oil industry. That raises the question – if Graza were to sue Brightland, would it win?
Though Graza doesn’t have a patent for its bottle or a trademark for anything except its name, official registration is not required to receive trade dress protection. Trade dress protection is provided by the Lanham Act, which allows the producer of a product a cause of action for the use of “any word, term, name, symbol, or device, or any combination thereof . . . which . . . is likely to cause confusion . . . as to the origin, sponsorship, or approval of his or her goods . . . . ” While trademarks are generally thought to cover things like brand names and their distinct designs, trade dress encompasses the design or packaging of a product or part of a product. For example, while the name “Nike” has a trademark, as does their “swoosh” symbol, the visual design of, say, a sweater or the box it comes in may unofficially deserve trade dress protection (209).
While courts vary slightly on the elements of a protectable trade dress, they mainly agree that three factors must be met when analyzing the design of a product. First, the trade dress must be primarily non-functional. This requirement might seem counterintuitive since a company should be rewarded for making its product useful. However, the non-functionality requirement does not concern the invention of this aspect of the product, which is left to the world of patents. The non-functionality requirement promotes competition because other companies can make similar products with the same useful qualities without legal repercussions.
The landmark 2001 Supreme Court case Traffix Devices v. Marketing Displays redefined the test for functionality. While circuits vary on their precise balancing tests, many follow that of the Ninth Circuit in Talking Rain Bev. Co. v. South Beach Bev. Co. from 2003: Whether advertising “touts the utilitarian advantages of the design,” whether “the particular design results from a comparatively simple or inexpensive method of manufacture,” whether the design “yields a utilitarian advantage,” and “whether alternative designs are available,” (603) – though, per Traffix, the “mere existence of alternatives does not render a product non-functional” (33-34).
Next, the trade dress must be inherently distinctive or have acquired a secondary meaning to customers (which can be difficult to prove). Finally, the alleged infringement must create a likelihood of confusion among customers as to a product’s source. This inquiry is a step-by-step process, so if the product design is primarily functional, the inquiry ends, and the design’s trade dress cannot be protected.
If Graza’s squeeze bottle is considered part of its product design rather than its packaging, it is likely functional and therefore not protectable trade dress. First, its own Instagram includes in its bio, “Fresh Squeezable Single Origin Olive Oil / As squeezed in @bonappetitmag.” The home page of its website features a video of a chef gingerly unscrewing the squeeze top and gracefully applying the oil to a hot pan. Second, the squeeze bottle itself is a simple design. Restaurants can bulk purchase 500 of these basic bottles, and the patent for the generic squeeze bottle, though not identical to Graza’s, is not terribly complex. Third and fourth, while alternative designs are available – most typical olive oils come in bulky containers with regular pour spouts – the squeeze spout likely yields a utilitarian advantage, attractive to many consumers for its uniqueness and convenience.
But what if the bottle is not considered to be the product’s design – that is, the appearance of the product itself – but rather the packaging – the container holding the product? According to the Wal-Mart Court, product design and packaging trade dress should be analyzed differently (215). The Court reasoned that, to a consumer, product packaging may identify the source of a product much more clearly than product design would (212-13). Therefore, the court says, a producer need not prove that a package has acquired secondary meaning to receive trade dress protection.
The squeeze bottle, the pretty green color, and the charming label are all selling points for Graza, with one reviewer saying, “These olive oils are one of the few things I find attractive enough to leave out on my countertop.” Arguably, however, the main reason people are purchasing it is to get to the product inside – a quick Google search of “Graza review” brings up articles primarily reviewing the oil’s quality, not the bottle’s attractiveness or the utility of the squeeze function. Consider the difference between a consumer purchasing an olive oil known to be delicious but in a poorly designed, ugly bottle – there, the bottle is the packaging for the product – as opposed to an olive oil known to be terrible but in a limited-edition, beautiful bottle – there, the bottle is the product, and there is no packaging.
In Graza’s case, however, it can be difficult to know what’s really in the average consumer’s head, and in many ways, the “average consumer” is a myth. When courts determine a product’s likelihood of confusion with another product, or the likelihood that a consumer will see a product or its packaging and know what brand it comes from, they are necessarily guessing. The Wal-Mart Court tried to correct for that by advising courts in ambiguous product design/product packaging cases to lean toward finding that the feature is design, presumably to force greater proof of distinctiveness from producers.
When it comes to Graza, though, it seems entirely possible that a typical, reasonable consumer would see the Brightland olive oil in a squeeze bottle and think: That’s just like the Graza bottle! Even if the trusty squeeze bottle has been ubiquitous in kitchens for decades, Graza may have brought it to the attention of home chefs everywhere. Similarly, though many olive oil bottles are dark green, their tall, skinny, matte green appearance with the pointed tip is unique enough – and their advertising aggressive and targeted enough – that many consumers (mostly those of the Gen Z and Millennial generations) would easily recognize the design as Graza if it flashed before their eyes.
But, if we are to follow the Wal-Mart Court’s suggestion and reason that the bottle is Graza’s product design instead of packaging in this moment of ambiguity, Graza’s argument that it deserves trade dress protection – or, more simply, credit for coming up with the squeeze bottle olive oil idea – is weak. Andrew Benin may have realized as much, as he, shockingly, issued a public Twitter apology retracting his earlier accusations.

INTRODUCTION
Every day, people, brands, and companies across the world believe that their names have been defamed or their trademarks infringed. With over 440,000 U.S. trademark applications filed in 2022 alone, disputes over trademarks arise frequently. These quarrels do not always involve a platform as large as one of the highest-rated television series ever created. But when they do, they often spur lively discussions in the media about intellectual property law in the United States.
On September 25, 2023, the United States District Court for the Southern District of New York granted the creators of the hit television series “Better Call Saul” their motion to dismiss claims of defamation and trademark infringement brought by Liberty Tax Service over the show’s depiction of a tax preparation service in an episode from April 2022. The depicted tax service bears many similarities to Liberty Tax Service and its more than 2,500 offices across the country, including the same red, white, and blue colors, as well as the Statue of Liberty logo. This case, JTH Tax LLC v. AMC Networks Inc., implicates many facets of intellectual property law, including trademark infringement and dilution under the Lanham Act, trade dress, and New York statutory defamation law. Ultimately, the court ruled that the “Rogers Test” applied to the defendants’ alleged use of Liberty Tax Service’s trademarks. However, the court’s decision on the motion to dismiss could certainly be up for debate, especially if Liberty Tax Service had raised certain arguments regarding the second prong of the Rogers Test.
A. APPLICABILITY OF THE “ROGERS TEST”
The “Rogers Test” was developed by the United States Court of Appeals for the Second Circuit in Rogers v. Grimaldi. It is a two-pronged balancing test that applies when interests under the First Amendment and the Lanham Act conflict. The test states,
“[w]here the title of an expressive work is at issue, the “balance will normally not support application of the [Lanham] Act unless the title has no artistic relevance to the underlying work whatsoever, or, if it has some artistic relevance, unless the title explicitly misleads as to the source or the content of the work.”
In the matter at hand, the court quickly and rightfully found the Rogers Test applicable because, to the extent that the defendants used the plaintiff’s marks, they did so in furtherance of the plot of Better Call Saul, heightening the audience’s understanding of key characters.
1. ARTISTIC RELEVANCE
Under this prong, the defendants argued that the alleged use of the plaintiff’s marks met the purposefully low threshold for artistic relevance under the Rogers Test because the reference to “Sweet Liberty” in the episode in question is undeniably ironic and clearly related to the characters’ role in the series, as it is the business they created with illegal intent. On top of that, the court found that the characters’ use of the plaintiff’s trade dress (i.e., the design and configuration of the product itself) was simply an appropriation of patriotism that highlights their deceptiveness as crime-ridden characters, all of which is relevant to the episode. Thus, the court concluded that the artistic relevance of the episode’s use of the plaintiff’s marks was clearly “above zero.”
2. WHETHER THE USE OF SUCH MARKS WAS “EXPLICITLY MISLEADING”
Since the Court concluded that defendants’ alleged use of plaintiff’s marks had at least an ounce of artistic relevance to the show, the Court focused on the second prong of the Rogers Test, whether the defendants’ use of the plaintiff’s marks was “explicitly misleading,” which would allow the Lanham Act to apply to the show. The essential question to ask for the second prong is “whether the defendant[s’] use of the mark ‘is misleading in the sense that it induces members of the public to believe [the work] was prepared or otherwise authorized’ by the plaintiff.” To do so, the court focused on the eight non-exhaustive factors from Polaroid Corp. v. Polarad Elecs. Corp., 287 F.2d 492 (2d Cir. 1961), in order to assess the likelihood of confusion to satisfy this prong. The eight factors include: “(1) strength of the trademark; (2) similarity of the marks; (3) proximity of the products and their competitiveness with one another; (4) evidence that the senior user may bridge the gap by developing a product for sale in the market of the alleged infringer’s product; (5) evidence of actual consumer confusion; (6) evidence that the imitative mark was adopted in bad faith; (7) respective quality of the products; and (8) sophistication of consumers in the relevant market.”
a. THE COURT’S ASSESSMENT OF THE POLAROID FACTORS DISSECTED
The court’s assessment of the eight Polaroid factors in this matter could genuinely be up for debate, especially if the plaintiff raised stronger, or any, arguments regarding multiple factors. The Polaroid factors are weighed against each other depending on which way a court decides each factor favors the respective parties. Here, the court found that only the first factor weighed in favor of the plaintiff, as the defendants did not contest the strength of the plaintiff’s mark. The other seven factors either weighed in favor of the defendant or were deemed neutral. Most of the factors were weighed in the defendant’s favor, but mainly because the plaintiff failed to argue them in their Amended Complaint.
For example, and probably most convincing, factor five could have been evaluated in favor of the plaintiff. This factor considers evidence of actual consumer confusion, but it is black letter law that actual confusion need not be proven to prevail under the Lanham Act, since it is often too challenging and expensive to prove. The Act requires only a showing of likelihood of confusion as to source – that consumers will mistakenly believe the goods or services come from the same source – as stated in Lois Sportswear, U.S.A., Inc. v. Levi Strauss & Co.
An opportunity the plaintiff failed to seize stems from an example the Supreme Court of the United States offered in the Jack Daniel’s case, which involved a somewhat analogous situation: Jack Daniel’s sued a company on similar grounds over a dog toy that closely resembled a bottle of Jack Daniel’s whiskey. There, the Court posed a scenario in which a luggage manufacturer uses another brand’s marginally altered logo on its luggage to foster growth in the luggage market. Comparing this example with another, the Court noted that the luggage illustration presented the greater likelihood of confusion because it conveyed possible misinformation about who is responsible for the merchandise. Had the plaintiff drawn this analogy in its Amended Complaint – showing that the show’s image of “Sweet Liberty Tax Services” was blatantly a slightly modified image of Liberty Tax Service and clearly conveyed misinformation – this factor would most likely have weighed in the plaintiff’s favor, given the likelihood of confusion as to the source of the show’s depiction of the characters’ business.
Secondly, factor seven considers the respective quality of the products. In Flushing Bank v. Green Dot Corp., the United States District Court for the Southern District of New York interpreted this factor to mean that if the quality of the junior user’s (the show’s) product or service is low compared to the senior user’s (Liberty Tax Service’s), the chance of actual injury from confusion increases. If confusion were established, as discussed above, the plaintiff’s argument in its Amended Complaint that the defendants’ use of its marks linked Liberty Tax’s trademarks to the show’s depiction of an inferior-quality service would tilt this factor in the plaintiff’s direction as well.
Lastly, the eighth factor, based on the sophistication of purchasers, could strongly be construed in favor of the plaintiff. The theory behind this factor is that the more sophisticated the senior user’s purchasers are, the less likely they will be confused by the presence of similar marks. To determine consumer sophistication, courts consider the product’s nature and price. The plaintiff failed to raise any argument as to this factor. However, an argument could have been made to weigh this factor in its favor, which would have evened out the weight of the factors between the plaintiff and the defendants. If that were the case, the evidence could be assumed to be viewed in the light most favorable to the plaintiff – the standard at the motion to dismiss stage.
If the plaintiff had shed light on Liberty Tax Service’s prices compared to the average professional tax preparer’s, the evidence would likely push this factor in its favor. Liberty Tax Service charges a basic tax preparation rate of only $69.00, while the average professional tax preparer charges around $129.00 for similar services. This distinction suggests that Liberty Tax Service serves less sophisticated consumers than the average professional tax preparer, making its customers more likely to be confused by the presence of similar marks. This would result in four of the eight factors being found in favor of the plaintiff.
CONCLUSION
If the plaintiff had raised the arguments above in its Amended Complaint – which go to the second prong of the Rogers Test and, further, the Polaroid factors – it would have demonstrated that its claim of confusion is indeed plausible. Because such compelling arguments would tilt the scale in favor of the plaintiff, Liberty Tax Service, the Lanham Act would most likely apply. Therefore, this case may have reached a completely different outcome at this stage of litigation, favoring the plaintiff rather than the creators of the television series.

The advent of generative Artificial Intelligence (AI) and deepfake technology marks a new era in intellectual property law, presenting unprecedented challenges and opportunities. As these technologies evolve, their creations blur the lines between reality and fiction, escalating the risk of consumer deception and diluting brand values. Deepfakes, a fusion of the words ‘deep learning’ and ‘fake,’ stand at the forefront of this revolution. These ultra-realistic synthetic media replace a person’s image or voice with someone else’s likeness, typically using advanced artificial intelligence. Relying on deep learning algorithms, a subset of AI, deepfakes employ techniques like neural networks to analyze and replicate human facial and vocal characteristics with stunning accuracy. The result is a fabricated version of a video or audio clip that is virtually indistinguishable from the original.
The spectrum of deepfake applications is vast, encompassing both remarkable prospects and significant risks. On the positive side, this technology promises to revolutionize entertainment and education. It can breathe life into historical figures for educational purposes or enhance cinematic experiences with unparalleled special effects. However, this technology’s downside reveals consequences, particularly for businesses and brands. Generative AI and deepfakes possess the potential to create highly convincing synthetic media, obfuscating the line between authentic and fabricated content. This capability poses substantial risks for consumer deception and brand damage.
When consumers encounter deepfakes featuring well-known trademarks, it not only challenges the authenticity of media but also erodes the trust and loyalty brands have cultivated over the years. This impact on consumer perception and decision-making is central to understanding the full implications of AI on trademark integrity, as it leads to potential deception and undermines the critical connection between trademarks and consumer trust. This dual nature of deepfakes as both a tool for creative expression and a source of potential deception underscores the complexity of their impact on intellectual property and consumer relations.
As generative AI introduces opportunities on the one hand, risks abound regarding consumer deception. This direct threat to perception highlights trademarks’ growing vulnerability. At the heart of branding and commerce lie trademarks, distinguishing one business from another. They are not mere symbols; they represent the core identity of brands, extending from distinctive names and memorable phrases to captivating logos and designs. When consumers encounter these marks, they do not just see a name or a logo; they connect with a story, a set of values, and an assurance of quality. This profound connection underscores the pivotal role of trademarks in driving consumer decisions and fostering brand loyalty. The legal protection of trademarks is governed by the Lanham Act, which offers nationwide protection against infringement and dilution. Infringement occurs when a mark similar to an existing trademark is used, potentially confusing consumers about the origin or sponsorship of a product. Dilution, on the other hand, refers to the weakening of a famous mark’s distinctiveness, either by blurring its meaning or tarnishing it through offensive use.
However, the ascent of generative AI and deepfake technology casts new, complex shadows over this legal terrain. The challenges introduced by these technologies are manifold and unprecedented. While it was once straightforward to distinguish between intentional and accidental use of trademarks, the line is now increasingly blurred. For instance, when AI tools deliberately replicate trademarks to deceive or dilute a brand, it is a clear case of infringement. However, the waters are muddy when AI, through its intricate algorithms, unintentionally incorporates a trademark into its creation. Imagine an AI program designed for marketing inadvertently including a famous logo in its output. This scenario presents a dilemma where the line between infringement and innocent use becomes indistinct. The company employing the AI might not have intended to infringe on the trademark, but the end result could inadvertently imply otherwise to the consumer.
This new landscape necessitates a reevaluation of traditional legal frameworks and poses significant questions about the future of trademark law in an age where AI-generated content can replicate real brands with unnerving precision. The challenges of adapting legal strategies to this rapidly evolving digital environment are not just technical but also philosophical, calling for a deeper understanding of the interplay between AI, trademark law, and consumer perception.
The Supreme Court’s decision in Jack Daniel’s Properties, Inc. v. VIP Products LLC significantly sets a key precedent regarding fair use defenses in the age of AI. This case, focusing on a dog toy mimicking Jack Daniel’s whiskey branding, highlights the tension between trademark protection and First Amendment rights.
Although the case did not directly address AI, its principles are crucial in this field. For both trademark infringement and dilution claims, the Court narrowed the boundaries of fair use, particularly in cases where an “accused infringer has used a trademark to designate the source of its own goods.” The Court also limited the scope of the noncommercial use exclusion for dilution claims, stating that it “cannot include… every parody or humorous commentary.” This narrower scope of fair use makes it tricky for AI content users to navigate fair use defenses when parodying trademarks, where the lines between illegal use, parody, and unintentionally confusing consumers about endorsement or sponsorship may blur.
This ruling has direct repercussions for AI models generating noncommercial comedic or entertainment content featuring trademarks. Even if AI-created content is noncommercial or intended as parody, it does not automatically qualify as fair use. If such content depicts or references trademarks in a way that could falsely suggest sponsorship or source affiliation, then claiming fair use becomes extremely difficult. Essentially, the noncommercial nature of AI-created content offers little protection if it uses trademarks to imply an incorrect association or endorsement from the trademark owner.
As such, AI developers and companies must be cautious when depicting trademarks in AI-generated content, even in noncommercial or parodic contexts. The fair use protections may be significantly limited if the content falsely suggests a connection between a brand and the AI-generated work.
In this light, AI-generated content for marketing and branding requires meticulous consideration. AI developers must ensure their AI models do not generate content that incorrectly implies a trademark’s source or endorsement. This necessitates thorough review processes and possibly adapting algorithms to prevent any false implications of source identification. At the same time, users of AI technology for branding, marketing, or content creation need to employ stringent review processes for how trademarks are depicted or referenced, to ensure their creations do not inadvertently infringe upon trademarks or mislead consumers about the origins or endorsements of products and services. With AI’s capacity to precisely replicate trademarks, the potential for unintentional infringement and consumer deception is unprecedented.
This evolving landscape calls for a critical reevaluation of existing legal frameworks. Though robust in addressing the trademark issues of its time, the Lanham Act was conceived in an era before the emergence of digital complexities and AI advancements we currently face. The Court’s ruling in Jack Daniel’s case could influence future legislation and litigation involving AI-generated content and trademark issues. Today, we stand at a critical point where the replication of trademarks by AI algorithms challenges our traditional understanding of infringement and dilution. Addressing this requires more than mere amendments to existing laws; it calls for a holistic overhaul of legal frameworks. This evolution might involve new legislation, innovative interpretations, and an adaptive approach to defining infringement and dilution in the digital age. The challenge is not just to adapt to technological advancements but to anticipate and shape the legal landscape in a way that balances innovation with the need to protect the essence of trademarks in a rapidly changing world.
Deepfakes and similar AI fabrications pose risks to not just trademarks but also individual rights, as the right of publicity shielding personal likenesses confronts the same consent and authenticity challenges in an era of scalable deepfake identity theft. The concept of the right of publicity has gained renewed focus in the age of deepfakes, as exemplified by the unauthorized use of Tom Hanks’ likeness in a deepfake advertisement. This case serves as a potent reminder of the emerging challenges posed by deepfake technology in the realm of intellectual property rights. California Civil Code Section 3344, among others, protects individuals from the unauthorized use of their name, voice, signature, photograph, or likeness. However, deepfakes, with their capability to replicate a person’s likeness with striking accuracy, raise complex questions about consent and misuse in advertising and beyond.
Deepfakes present a formidable threat to both brand reputation and personal rights. These AI-engineered fabrications are capable of generating viral misinformation, perpetuating fraud, and inflicting damage on corporate and personal reputations alike. By blurring the lines between truth and deception, deepfakes undermine trust, dilute brand identity, and erode the foundational values upon which reputations are built. The impact of deepfakes on brand reputation is not a distant concern but a present and growing one, necessitating vigilance and proactive measures from individuals and organizations. The intricate dynamics of consumer perception, influenced by such deceptive technology, underscore the urgency for a legal discourse that encompasses both the protection of trademarks and the right of publicity in the digital age.
While the complex questions surrounding AI, deepfakes, and trademark law form the core of this analysis, the disruptive influence of these technologies extends across sectors. The recent widespread dissemination of explicit AI-generated images of Taylor Swift serves as a stark example of the urgent need for regulatory oversight in this evolving landscape, particularly for the entertainment industry. Hollywood is another sphere significantly impacted by AI advancements. The ongoing discussions, notably during the SAG-AFTRA strike, highlight the critical issues of informed consent and fair compensation for actors whose digital likenesses are used. The use of generative AI technologies, including deepfakes, in creating digital replicas of actors raises crucial questions about intellectual property rights and the ethical use of such technologies in the industry.
The legal and political landscape is also adapting to the challenges posed by AI and deepfakes. With the 2024 elections on the horizon, the Federal Election Commission is in the preliminary phases of regulating AI-generated deepfakes in political advertisements, aiming to protect voters from election disinformation. Additionally, legislative efforts such as the introduction of the No Fakes Act by a bipartisan group of senators mark significant steps toward establishing the first federal right to control one’s image and voice against the production of deepfakes, essentially the right of publicity.
Moreover, the legislative activity on AI regulation has been notable, with several states proposing and enacting laws targeting deepfakes as of June 2023. President Biden’s executive order in October 2023 further exemplifies the government’s recognition of the need for robust AI standards and security measures. However, the disparity between the rapid progression of AI technology and the comparatively sluggish governmental and legislative response is evident, as shown by the limited scope of such interventions. The order, which includes the implementation of watermarks to identify AI-generated content, marks a step toward a more rigorous environment for AI development and deployment, although tools to circumvent these measures by removing watermarks are readily available. These developments at the industry, legal, and political levels reflect the multifaceted approach required to address the complex issues arising from the intersection of AI, deepfakes, intellectual property, and personal rights.
The ascent of generative AI and synthetic media ushers in a complex new era, posing unprecedented challenges to intellectual property protections, consumer rights, and societal trust. As deepfakes and similar fabrications become indistinguishable from reality, risks of mass deception and brand dilution intensify. Trademarks grapple with blurred lines between infringement and fair use while publicity rights wrestle with consent in an age of identity theft.
Given the potency of deepfakes in shaping narratives, the detection of such content is essential. A number of technologies have shown promise in recognizing deepfake content with high accuracy, particularly machine learning algorithms, proving effective at spotting these AI-generated videos by analyzing facial and vocal anomalies. Their application in conflict zones is crucial for mitigating the spread of misinformation and malicious propaganda.
Recent governmental initiatives signal the beginnings of a framework evolution to suit new technological realities. However, addressing systemic vulnerabilities exposed by AI’s scalable distortion of truth demands a multifaceted response. More than mere legal remedies, restoring balance requires ethical guardrails in technological development and usage norms, plus the need for public awareness.
In confronting this landscape, maintaining foundational pillars of perception and integrity remains imperative, even as inventions test traditional boundaries. Our preparedness hinges on enacting safeguards fused with values that meet this watershed moment where human judgment confronts machine creativity. With technology rapidly outpacing regulatory oversight, preventing harm from generative models remains an elusive goal. But don’t worry. I am sure AI will come up with a solution soon.