In the United States, courts have commonly rejected arguments to protect an artist’s “style” under copyright law. This treatment stems from the idea-expression distinction, which provides copyright protection for fixed expressions, but not abstract ideas. The copyright law regime treats style as an unprotectable idea. This means that multiple people may paint a tree with leaves, but the size, color, and shape of the tree in a particular painting constitute a distinctive expression that is protected. The emergence of generative artificial intelligence (AI) raises new questions and avenues to explore the limits of copyright law as it stands. The ability of AI models to produce new images “in the style of” specific artists increases the potential scope and magnitude of harm to said artists through the exploitation of their creativity, skill, and time.
The use of AI in artistic and creative contexts is impactful because it calls into question the foundational reasoning underlying the idea-expression distinction. Mark Lemley, a leading intellectual property scholar, theorizes that “increasingly, the things humans contribute in a collaboration with generative AI will be ideas and high-level concepts” and “AI will contribute the expression.” In other words, the ability of AI to take creative ideas prompted by individuals and express them in something tangible flips the idea-expression distinction on its head, disrupting the entire copyright law regime. This theory emphasizes the transformative potential of AI to challenge current law and demonstrates how drastic challenges to copyright law can open the door for reconsiderations of the protectability of an artist’s style.
While it has always been possible for individuals to copy the style of another artist’s work, the capacity for harm to artists increases exponentially due to the capabilities of AI. It is now as simple as typing a short prompt into an AI model to create an image that can be commercially marketed. Even without naming the original artist, individuals in the market may recognize their style. This scenario creates potential economic harm if the demand for original works by the artist whose style is being appropriated declines because similar works are widely available on the market. There are additional reputational implications if the subjects being depicted do not align with the artist’s individual values. Even if the artist disapproves of the image, the market will nonetheless associate the image with the artist. The artist’s lack of control over AI-created works reduces the incentive for innovation and new creation, hindering the constitutional goal of copyright to promote original works.
This blog post focuses on style in the context of visual art and takes two approaches to frame how style can be incorporated into protectable expression. First, it analogizes artistic style to the protection of architectural works under 17 U.S.C. § 102(a)(8). Second, it outlines a theory-based route through the right of publicity that can serve as the foundation for protecting style.
The adoption of architectural works as a category of copyrightable subject matter by Congress offers an interesting analogy to evaluate and understand how elements of an artist’s style are and are not protected under current copyright law. The United States Code defines an “architectural work” as a tangible design that “includes the overall form as well as the arrangement and composition of spaces and elements in the design, but does not include individual standard features.” In other words, designers may retain a copyright in the original combination of elements that give a building a distinctive character, but may not receive protection for broader features of a building. Significantly, Congress excluded categories of architectural style, such as art deco, midcentury modern, and neoclassical, from protection. This means that while design features such as arches and columns are not protectable, the unique combination of such features in an original manner can produce a protectable architectural work. The statute makes clear that Congress elected to specifically protect the overall form and arrangement of an architectural design, even if broader and standard features are used.
The statute’s treatment of architectural works can be expanded and applied to other artistic works if an artist’s style is viewed as a form of expressive arrangement of techniques and aesthetic decisions. For example, while Vincent van Gogh could not have claimed protection over post-impressionism, swirled brushstrokes, or vibrant color palettes, the combination and arrangement of these elements can be analogized to an architect’s expression of standard features. Under this view, style may be seen less as an abstract idea and more as a fixed expression made up of defined features.
With this more defined approach to viewing unique artists and their style, it is clear why generative AI poses a risk. As an AI model learns to replicate an artist’s style, it identifies the unique combination and arrangement of elements and creates the equivalent of an artistic profile. When generating a new image “in the style of” an artist, AI has the explicit goal of making the image in the artist’s recognizable style, utilizing the combination of elements in the works it was trained on. This means that AI is creating new works based on the expressive arrangement and combination of elements that makes an artist’s style distinct, which would be protectable if the image were an architectural work. This analogous relationship and the strong theoretical argument for protecting artistic style in the context of generative AI provide one logical route toward advocating for the protection of style.
The right of publicity is a state-level right intended to prevent “unauthorized uses of a person’s identity.” The theory underlying this right offers a foundation for recognizing artistic style as part of an individual’s identity. The ability of an AI model to associate a specific style with a specific artist to create new images indicates a merging of the two concepts, with style becoming a proxy for artistic identity. It is often an individual’s unique artistic identity that the market values. If style is viewed as part of an artist’s identity, the right of publicity provides a strong mechanism for protecting it. The rationales underlying the right of publicity are “preserving the commercial value of their identity” and “protecting the autonomy of their personality.” These rationales align closely with the specific harms that generative AI can inflict on artists through unauthorized appropriation of their styles, namely economic and reputational harms.
The California legislature and courts lead in granting publicity protections and provide the strongest outline for a federally regulated and protected right of publicity. Not only does California Civil Code § 3344 provide legislative protections for the “name, voice, signature, photograph, or likeness” of an individual, but state common law also provides broader protections. The scope of the right of publicity doctrine is further articulated through case law, which protects elements of identity and can serve as the blueprint for protecting style as an element of identity. In Midler v. Ford Motor Co., the Ninth Circuit held that the imitation of actress and comedian Bette Midler’s voice constituted misappropriation even though neither her name nor her image was associated with the advertisement. The court stated that “[t]o impersonate her voice is to pirate her identity.” The common-law right was extended further in White v. Samsung Electronics, where the Ninth Circuit held that a robot dressed to resemble Vanna White and positioned to turn letters as is done on the Wheel of Fortune game show infringed on her right of publicity. The court noted that the right of publicity protects more than “a laundry list of specific means of appropriating identity” and that “name and likeness” need not be used to infringe the right. Symbolic references emulating an individual’s attire and behavior can be enough to appropriate that individual’s identity. These cases illustrate how a broad and nonliteral conception of identity is recognized under the right of publicity.
This expansive approach by the Ninth Circuit supports treating artistic style, which is simply the combination and arrangement of elements that the public associates with an artist, as part of that artist’s identity. The Ninth Circuit has demonstrated a willingness to grant protection under a broad conception of likeness; an artist’s stylistic signature, like Midler’s vocal tone and White’s television persona, could constitute an element of identity protectable under the right of publicity. This argument is particularly powerful in the context of generative AI, which extracts patterns from an artist’s work and produces a new image intended to be closely associated with that artist, even if the artist’s name is not used. The assumption underlying the right of publicity is that the individuals whose identities are being appropriated are commercially valuable. Similarly, artists whose styles are appropriated are likely to have commercial value. Thus, the exploitation of an artist’s recognizable and imitable style resembles the type of appropriation that the Ninth Circuit sought to prevent. Given the relevant precedent of the California legislature and Ninth Circuit, the right of publicity provides a foundation for protecting an artist’s style.
The traditional arguments for not protecting an artist’s style are being challenged due to the monumental impact that generative AI has on artists. If the idea-expression distinction, a core foundation of copyright law, no longer functions as intended because AI increasingly supplies the expression while humans contribute only the ideas, the copyright regime should reevaluate whether style is protectable. The two foundations outlined in this post, architectural works and the right of publicity, provide starting points for policymakers and legislators to consider in evaluating the protectability of style.
Patent law was created to incentivize innovation by granting inventors limited monopolies in exchange for public disclosure of their discoveries. In theory, this should promote competition and technological progress. In practice, however, patent law has paradoxically become a barrier to creativity in the video game industry.
In recent years, there has been a trend of game publishers, including major names like Nintendo, increasingly seeking patents on gameplay mechanics and systems. For example, Nintendo has filed 31 patents for The Legend of Zelda: Tears of the Kingdom alone and has received criticism from fans and experts alike regarding the validity of its patents. These developments highlight a longstanding tension within patent law. While some critics argue for protection for software innovations, others say that patents on video game mechanics are often overly broad. This article argues that overly broad video game mechanic patents undermine patent law’s core purposes by chilling creative development and disproportionately burdening smaller studios. Patent law should not prevent other developers from building upon design ideas; instead, it should bolster competition by protecting only concrete, novel implementations that meaningfully advance video game technologies.
To understand how this problem emerged, it is crucial to examine how a system designed to promote innovation through protection has, over time, come to restrict creative expression in video games.
At first, the video game industry was “indifferent to potential patent rights.” Inventor and game developer Ralph Baer filed the first patent for a tennis-like video game in 1968. His creation led to the Magnavox Odyssey, the first home video game console and a blueprint for the industry. Baer’s ingenuity inspired other companies, including Atari, whose arcade game Pong (and later home consoles) achieved the commercial success that Magnavox struggled to obtain. As other companies rushed to emulate Atari’s success, Magnavox filed suits claiming these game companies were infringing on Baer’s patents. Through the 1970s, patents remained less central than copyrights as the distinction between hardware and software developed. Still, this marked the beginning of a legal shift that foreshadowed the modern trend of using patents to protect not only tangible inventions, but also abstract gameplay ideas.
35 U.S.C. § 101 governs patentability, allowing patents to be granted to “whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter.” While theoretically straightforward, its interpretation has been far from simple. Courts have struggled to draw a distinct line between an unpatentable, abstract idea and a patentable implementation in software; video game technology amplifies this ambiguity because of its complexities.
Following the Supreme Court’s decision in Diamond v. Diehr—which upheld a patent for a rubber-curing process that used a computer algorithm to calculate optimal cure times—courts began interpreting 35 U.S.C. § 101 more liberally, allowing patents for software and abstract algorithms. Scholars have pointed out that lower courts have misapplied Diehr by granting protections to abstract processes that patent law was never meant to cover. In Bilski v. Kappos, the Supreme Court confirmed that abstract ideas cannot become patent-eligible simply by being implemented on a computer.
Alice Corp. v. CLS Bank attempted to clarify how to interpret § 101 by requiring a two-step analysis of patent eligibility: (1) is the claim directed to an abstract idea, and if so, (2) is there an “inventive concept” that transforms the idea into a patentable application? The second step of Alice has been inconsistently applied, which has led to unpredictable outcomes such as the issuing of overly broad patents.
For example, in DDR Holdings, the court upheld an e-commerce software patent because “the claimed solution is necessarily rooted in computer technology to overcome a problem specifically arising in the realm of computer networks.” Within the gaming sector, this poses a very tangible threat. The varied application of the abstract-idea doctrine has led to the proliferation of overly broad video game mechanic patents. This causes legal uncertainty for developers and inventors, creating a chilling effect on innovation, as the court’s stance remains unclear.
The “rules of a game” are generally not patentable, falling under the abstract idea exception to 35 U.S.C. § 101. This exception can serve as a hurdle to patenting certain kinds of games. To overcome this obstacle, applicants draft game mechanic patents using highly specific, technical language that outlines the software implementation rather than the underlying rules. Although this filing method often satisfies the formal requirements of the Patent Act, it enables patent holders to claim broad ownership over fundamental gameplay interactions that could be implemented in innumerable ways. This drafting strategy is especially visible in recent filings by major publishers, such as Nintendo’s patent filings for The Legend of Zelda: Tears of the Kingdom.
The Legend of Zelda’s patents describe the meticulous calculations for object interactions, user interface elements such as the loading screen, and particular character abilities, yet expand to cover the broader concept of combining objects in virtual environments to solve puzzles. This is the type of mechanic other developers might independently create using entirely different lines of code and visual designs. By cloaking abstract gaming ideas in technical specifications, these patents effectively form a conceptual wall around common game design methods and patterns.
As a result, indie developers are placed in a precarious position. Small studios often lack the resources to conduct comprehensive searches of prior art before they develop games; even if they identify potentially conflicting patents, they may not have the means to design around them. The threat of litigation from major publishers may force indie developers to abandon promising projects or avoid entire genres, regardless of whether the implementation would differ, as video game “development is a capital-intensive endeavor” that “requir[es] substantial financial resources.”
The America Invents Act provides a mechanism to challenge questionable patents. Inter Partes Review (IPR) allows third parties to petition the Patent Trial and Appeal Board to reexamine a patent’s validity based on prior art at a fraction of the potential litigation costs. IPR also has a lower burden of proof because there is no presumption of patent validity. IPR provides a comparatively accessible forum for challengers and serves as a crucial tool for invalidating broad patents that were potentially granted erroneously.
However, IPR has limitations. The process requires significant legal expertise and can cost, on average, $300,000—more affordable than a multi-million-dollar trial, but beyond reach for many indie studios. Additionally, IPR challenges are limited to prior-art grounds under §§ 102 and 103; they cannot invoke the abstract idea doctrine that often applies to game mechanics. Further, “IPR practitioners will no longer be able to use general knowledge to bridge evidentiary gaps and instead rely on the four corners of prior art or printed publications,” which can provide an evidentiary advantage for patent owners. Despite these constraints, IPR, even if unsuccessful, can serve as a procedural leveling mechanism by introducing administrative scrutiny and litigation risk even where substantive invalidation is unlikely.
Patent law’s reach has expanded far beyond hardware and physical devices as games have become increasingly software-based, encouraging developers to protect algorithms, interfaces, and in-game systems that shape player experiences. However, scholars have noted that software patents often fail to provide clear notice to potential innovators, as their scope can be difficult to identify and interpret before litigation.
A controversial example of this trend is Warner Bros. Interactive Entertainment’s patent on the ‘Nemesis System’ from the game Middle-earth: Shadow of Mordor, which grants the company exclusive rights over dynamic enemies that are “promoted to a bigger threat” after defeating the player. The system personalizes gameplay by crafting miniature bosses “tailored to…specific playthrough[s]” that remember the player’s character and can develop a vendetta against them. Many of these patents are considered overly broad because they fail to meaningfully limit claims to specific technological processes and instead seek protection over high-level gameplay concepts and functional outcomes. By monopolizing such mechanics, these companies have sparked widespread concern among fans and experts.
The threat of potential litigation, rather than actual enforcement, can discourage developers from engaging in iterative, experimental design, effectively serving as a barrier to entry. A small studio developing a memory-based enemy system, for instance, might scrap the feature entirely rather than risk resembling Warner Bros.’ patent. As a result, patent law risks promoting the very stagnation it was meant to prevent.
From law journals to consumer outcry, the backlash against overly broad video game patents has become impossible to ignore. As courts continue to grapple with the boundaries of patentable subject matter under § 101, the video game industry illustrates what happens when patent protection extends too far. When abstract gameplay concepts are treated as proprietary rights, innovation suffers. If patent law is meant to promote progress, it must allow developers to build upon and refine existing concepts without sacrificing creative freedom. Meaningful reform requires limiting patents to specific technical solutions while keeping core gameplay concepts in the public domain—available for all developers to iterate upon and improve.
Over the past decade, schools across the United States have increasingly turned to digital surveillance technologies to monitor students’ online behavior in the name of safety. Among these technologies, Gaggle has emerged as one of the most widely adopted platforms, serving more than 5.8 million students in over 1,500 school districts across the United States. Gaggle scans millions of students’ emails, documents, chats, and images for signs of self-harm, violence, or bullying. Supporters claim that Gaggle helps schools prevent tragedies and identify students in crisis, while critics warn that Gaggle’s constant monitoring compromises student privacy and autonomy and threatens educational equity. This blog post examines Gaggle’s role in K–12 education by analyzing how Gaggle operates, assessing its effectiveness, weighing its benefits against its harms, and considering the adequacy of existing legal protections. Although the platform is promoted as a tool for students’ well-being, its pervasive monitoring raises serious ethical and constitutional concerns that outweigh its purported benefits.
Gaggle integrates with Google Workspace and Microsoft 365 to scan student emails, documents, and images associated with school accounts for language indicating self-harm, usage or possession of drugs, or risks of violence. Because schools can grant Gaggle access to student accounts, surveillance extends beyond school hours and onto students’ personal devices whenever and wherever they log in. Even social media notifications tied to school emails can be monitored. To identify potentially harmful material, Gaggle employs an in-house, AI-powered filtering system that compares scanned content against a proprietary “blocked-word list” containing profanity and references to self-harm, violence, bullying, or drugs. Content flagged by Gaggle’s AI system is reviewed by human moderators, who may escalate “incidents” to administrators or, in severe cases, to law enforcement. Gaggle divides flagged content into three tiers: “violations,” “questionable content,” and “possible student situations.” The last category involves imminent threats such as suicide or possible violence and triggers immediate contact with school officials. While the company claims to have helped save thousands of lives, its data are self-reported and unverifiable. Critics highlight the lack of independent evaluation and the questionable reliability of low-paid contract reviewers expected to process hundreds of incidents per hour. Despite these concerns, many educators view Gaggle as a useful tool for early intervention. However, existing evidence does not conclusively show that Gaggle reduces suicide, self-harm, or violence, suggesting that its promise may rest more on perception than on measurable results.
Proponents argue that Gaggle responds to a growing mental health crisis among youth. Rising rates of depression and anxiety—especially among LGBTQ+ and transgender students—make it difficult for schools to identify struggling students. Supporters claim that Gaggle helps detect warning signs, allowing earlier counseling or intervention. They also cite the platform’s ability to address cyberbullying and fulfill legal mandates under state anti-bullying laws. Additionally, given the growing fear of school shootings, administrators see Gaggle as a supplement to limited counseling resources, capable of flagging threats before violence occurs. From this perspective, Gaggle provides schools with a sense of control and readiness, offering reassurance that no cry for help will go unnoticed. Yet this reassurance often obscures the dark side of Gaggle’s constant surveillance: the erosion of students’ fundamental right to privacy and their ability to learn freely and to express themselves without constant scrutiny.
Continuous surveillance discourages students from expressing themselves freely. Developmental psychologists emphasize that adolescence is a key period for cultivating creativity, independence, and critical thought. When students know they are constantly monitored, they self-censor and conform. This undermines what privacy scholars call “intellectual privacy”—the ability to think and communicate without fear of observation. Research shows that over half of monitored students refrain from sharing their true thoughts online, suggesting that Gaggle’s presence suppresses open exploration. In effect, the system teaches young people that safety and obedience take precedence over curiosity and trust. Such lessons, internalized at a formative stage, may have lasting consequences for democratic participation and creative confidence.
Gaggle’s algorithmic bias and access patterns also disproportionately harm disadvantaged students. AI systems often reflect racial and linguistic bias, flagging language used by students of color or LGBTQ+ youth as “offensive.” Low-income students, who rely on school-issued devices, are more heavily surveilled because, for financial reasons, they cannot separate school and personal accounts. Gaggle accesses information through students’ school accounts—regardless of whether they log in from personal or school-provided devices—and can also capture notifications from social media when those accounts are used for registration. Without access to private electronic devices, these students have little choice but to rely on school accounts for all purposes, making it difficult to maintain privacy or segregate personal activity from school monitoring. Moreover, Gaggle has blocked LGBTQ+ websites and flagged terms like “gay” or “queer,” deterring students from seeking support. In some cases, its monitoring has even exposed students’ sexual orientation without consent, placing them at risk of harm at home. These harms compound preexisting inequities in education, as marginalized students already face higher rates of disciplinary action. When surveillance is piled on top of these disparities, it amplifies rather than alleviates injustice. The result is a system that treats vulnerability as suspicious and equates being different with being dangerous.
By extending monitoring to all hours and authorizing contact with law enforcement, Gaggle risks feeding into the school-to-prison pipeline. The “school-to-prison pipeline” describes the phenomenon in which heightened discipline, surveillance, and law enforcement involvement in schools push students, especially the most at-risk ones, out of educational environments and into the juvenile or criminal justice systems. Although the company claims its system is not disciplinary, its reports can reach police if administrators are unavailable. This increases criminalization, especially for minority students already subject to harsher discipline. In the post-Dobbs era, where abortion and gender-affirming care are criminalized in some states, Gaggle’s stored data could be used against students seeking medical information, raising severe privacy concerns. Students and parents rarely receive meaningful notice or the ability to opt out of the surveillance. Unlike some competitors, Gaggle operates in the background without visible indicators. While the company recommends schools notify families, many do not, leaving parents unaware that surveillance occurs. Additionally, opting out effectively bars students from using essential educational technology, mandating surveillance by default. This lack of transparency undermines informed consent and contradicts principles of digital autonomy. As a result, millions of students are subjected to 24/7 surveillance without ever being asked for permission, creating a generation of learners for whom privacy is not a right but a privilege.
Existing federal laws inadequately address the scope of student surveillance. The Children’s Internet Protection Act (CIPA) requires schools receiving federal funds to monitor minors’ online activity and block obscene content. However, it was intended to restrict access to harmful material—not to justify constant behavioral surveillance. Gaggle’s around-the-clock monitoring exceeds what CIPA envisions, and its overbroad filters have restricted legitimate LGBTQ+ educational resources. The Family Educational Rights and Privacy Act (FERPA) restricts disclosure of student records but includes a “school-official exception” allowing data sharing with third-party contractors. This loophole permits schools to grant Gaggle broad access without parental consent, undermining FERPA’s original purpose of safeguarding student records. The Children’s Online Privacy Protection Act (COPPA) governs the collection of children’s data under age thirteen. Because schools can consent on parents’ behalf, Gaggle is not required to obtain direct parental consent. Moreover, COPPA does not apply to students over thirteen, leaving middle and high schoolers unprotected. Fourth Amendment implications remain uncertain. Under New Jersey v. T.L.O., searches must be justified and reasonable in scope, but courts have split on how this applies to digital monitoring. In State v. Gaul, scanning emails on school servers was upheld since students were notified, but R.S. v. Minnewaska recognized privacy rights in personal social media messages. Gaggle’s continuous off-campus surveillance may therefore raise constitutional concerns, particularly where students lack meaningful notice or the opportunity to avoid monitoring.
Gaggle’s promise of safety comes at the expense of student privacy, equity, and trust. To ensure a fair balance, policymakers should commission independent research to evaluate effectiveness; require transparency and parental notice; mandate audits for algorithmic bias; limit surveillance to school hours and on-campus use; and update CIPA, FERPA, and COPPA to reflect modern digital realities. Schools must also consider alternatives that emphasize human connection rather than algorithmic control, such as increasing access to counseling, peer support programs, and teacher training in mental health awareness. Ultimately, while technology can support student well-being, it must not erode the freedom to think, explore, and learn without fear of being watched. Gaggle’s 24/7 surveillance, though possibly well-intentioned, risks transforming schools into digital panopticons where privacy and creativity give way to control. The task before educators and lawmakers is not to abandon safety, but to redefine it in a way that protects both students’ lives and their liberty to live as autonomous thinkers in a democratic society.