Introduction
Algorithmic bias is AI’s Achilles heel, revealing how machines are only as unbiased as the humans behind them.
The most prevalent real-world stage for human-versus-machine bias is the job search process. What started out as newspaper ads and flyers at local coffee shops is now a completely digital process with click-through ads, interactive chatbots, resume data translation, and computer-screened candidate interviews.
Artificial intelligence encompasses a wide variety of tools, but in the HR context specifically, common AI tools include machine learning algorithms that conduct complex, layered statistical analysis modeled on human cognition (neural networks), computer vision that classifies and labels content in images or video, and large language models.
AI-enabled employment tools are powerful gatekeepers that determine the future of natural persons. Over 70% of companies use this technology, investing in its promise of efficiency and neutrality, but those promises have recently come into question because these tools have the potential to discriminate against protected classes.
Anecdote
On February 20, 2024, Plaintiff Derek Mobley initiated a class action lawsuit against an AI-enabled HR organization, WorkDay, Inc., for engaging in a “pattern and practice” of discrimination based on race, age, and disability in violation of the Civil Rights Act of 1964, the Civil Rights Act of 1866, the Age Discrimination in Employment Act of 1967, and the ADA Amendments Act of 2008. WorkDay, according to the complaint, disproportionately disqualifies African-Americans, individuals over the age of 40, and individuals with disabilities from securing gainful employment.
WorkDay provides subscription-based AI HR solutions to medium- and large-sized firms in a variety of industries. The system screens candidates based on human inputs and algorithms, and according to the complaint, WorkDay employs an automated system, in lieu of human judgment, to determine how high volumes of applicants should be processed on behalf of its business clients.
The plaintiff and members of the class have applied for numerous jobs with companies that use WorkDay’s platform and received several rejections. This process has deterred the plaintiff and members of the class from applying to companies that use WorkDay’s platform.
Legal History of AI Employment Discrimination
Mobley v. WorkDay is the first class action lawsuit against an AI solution company for employment discrimination, but this is not the first time an AI organization has been sued for employment discrimination.
In August 2023, the EEOC settled a first-of-its-kind employment discrimination lawsuit against a virtual tutoring company that programmed its recruitment software to automatically reject older candidates. The company was required to pay $325,000 and, if it resumes hiring efforts in the US, to invite all applicants from the April–May 2020 period who were rejected based on age to re-apply.
Prior to this settlement, the EEOC issued guidance to employers about their use of artificial intelligence tools that extends existing employee selection guidelines to AI-assisted selections. Under this guidance, employers, not third-party vendors, ultimately bear the risk of unintended adverse discrimination from such tools.
How Do HR AI Solutions Introduce Bias?
AI is integrated throughout the several steps of the job search process: the initial search, narrowing candidates, and screening.
Initial search
The job search process starts with targeted ads reaching the right people. Algorithms in hiring can steer job ads toward specific candidates and help assess their competencies using novel data. HR professionals found these tools helpful in drafting precise language and designing ads around position elements, content, and requirements. But these platforms can inadvertently reinforce gender and racial stereotypes by delivering ads to candidates who fit certain job stereotypes.
For instance, ads delivered on Facebook for stereotypically male jobs are overwhelmingly targeted at male users even when the advertising was intended to reach a gender-neutral audience. Essentially, at this step of the job search process, algorithms can prevent capable candidates from ever seeing the job posting in the first place, creating a further barrier to employment.
Narrowing Candidates
After candidates have viewed and applied for the job through an ad or another source, the next step AI streamlines is narrowing candidates. At this step, the system narrows candidates by reviewing the resumes that best match the company’s historical hiring data or the model’s training data. Applicants found the resume-to-application-form data transfers helpful and accurate at this step of the process, but they were concerned that the model could miss necessary information.
From the company’s perspective, the client company’s hiring practices are still incorporated into the hiring criteria in the licensed model. While the algorithm is helpful in parsing vast numbers of resumes and streamlining this laborious process for professionals, it can replicate and amplify existing biases in the company’s data.
For example, a manager’s past decisions may lead to anchoring bias. If biases related to gender, education, race, or age existed in the past and are present in the current high-performing employees the company uses as a benchmark, those biases can be incorporated into the outcomes at this stage of the employment search process.
Screening
Some organizations subscribe to AI tools that have a computer vision-powered virtual interview process that analyzes the candidates’ expressions to determine whether they fit the “ideal candidate” profile, while other tools like behavior/skills games are used to screen candidates prior to an in-person interview.
Computer vision models that analyze candidate expressions to assess candidacy have been found to perpetuate preexisting biases against people of color. For instance, a study evaluating such tools found that their taxonomies of social and behavioral components create and sustain the same biased observations one human would make about another, because the model is trained on labels and taxonomies that embed power hierarchies. In this sense, computer vision AI hiring tools are not neutral because they reflect the humans that train and rely on them.
Similarly, skills games are another popular tool used to screen candidates. However, there are some relationships AI cannot perceive in its analysis. For instance, candidates who are not adept with online games perform poorly on those games not because they lack the skills, but because they lack an understanding of the game’s features. Algorithms, while trained on vast data to assess candidate ability, still fall short when it comes to assessing general human relationships like the one between online game experience and performance on employment skills tests.
Throughout each step of the employment search process, AI tools fall short in accurately capturing candidates’ potential capabilities.
Discrimination Theories and AI
Given that the potential for bias is embedded throughout the employment search process, legal scholars speculate courts are more likely to scrutinize discriminatory outcomes under the disparate impact theory of discrimination.
As a recap, under Title VII there are two theories of discrimination: disparate treatment and disparate impact. Disparate treatment means a person is treated differently “because of” their status as a member of a protected class (i.e., race, sex). For example, if a manager intentionally used a biased algorithm to screen out candidates of a certain race, that behavior would constitute disparate treatment. Note, this scenario is for illustrative purposes only.
Disparate impact applies to facially neutral processes that have a discriminatory effect. The discriminatory effect aspect of this theory can be complex because the plaintiff must identify the specific employer practice that has a disparate impact on a protected group. The employer can then defend the practice by showing it is “job related” and consistent with “business necessity.” However, the plaintiff can still show that an alternative selection process existed and the business failed to adopt it. Under this theory, when AI selection tools disproportionately screen women and/or racial minorities out of the applicant pool, disparate impact theory could apply.
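To see how a disparate impact showing is typically quantified, regulators often start from the EEOC’s “four-fifths rule” of thumb in its Uniform Guidelines on Employee Selection Procedures: a selection rate for any group below 80% of the highest group’s rate is generally treated as preliminary evidence of adverse impact. Below is a minimal Python sketch of that screen; the applicant and hire counts are invented for illustration, and a real analysis would also involve tests of statistical significance.

```python
# A minimal sketch of the EEOC "four-fifths rule" screen for adverse impact.
# The applicant and hire counts below are hypothetical.

def selection_rate(hired: int, applied: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return hired / applied

# Hypothetical outcomes from an AI screening tool, by group.
rates = {
    "group_a": selection_rate(hired=60, applied=100),  # 60% selected
    "group_b": selection_rate(hired=30, applied=100),  # 30% selected
}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "possible adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, impact ratio={impact_ratio:.2f} -> {flag}")
```

Here group_b’s impact ratio of 0.50 falls well below the 0.8 threshold, which is the kind of disparity a plaintiff would point to when identifying the practice causing a disparate impact.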
Existing Methods to Mitigate Bias
Algorithmic bias in AI tools has serious implications for members of protected classes.
However, developers currently employ various tools to de-bias algorithms and improve their accuracy. One method is de-biased word embedding, in which neutral associations of a word are supplemented to expand the model’s understanding of the word. For instance, a common stereotype is that men are doctors and women are nurses, or in algorithmic terms, “doctor – man + woman = nurse.” With the de-biased word embedding process, the model is instead trained to understand “doctor – man + woman = doctor.”
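To make that arithmetic concrete, here is a toy Python sketch of the embedding analogy. The three-dimensional vectors are hand-made so that the first axis acts as a stand-in “gender direction,” and the neutralization step is a simplified illustration of one published debiasing idea, not any vendor’s actual pipeline; real embeddings have hundreds of dimensions.

```python
import numpy as np

# Hand-made toy embeddings; axis 0 plays the role of a "gender direction".
embeddings = {
    "man":      np.array([ 1.0, 0.2, 0.1]),
    "woman":    np.array([-1.0, 0.2, 0.1]),
    "doctor":   np.array([ 0.9, 0.9, 0.2]),  # skewed toward "man"
    "nurse":    np.array([-0.9, 0.3, 0.9]),  # skewed toward "woman"
    "engineer": np.array([ 0.8, 0.4, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(vocab, a, b, c):
    """Solve 'a - b + c', returning the nearest word other than b and c."""
    query = vocab[a] - vocab[b] + vocab[c]
    candidates = {w: v for w, v in vocab.items() if w not in (b, c)}
    return max(candidates, key=lambda w: cosine(query, candidates[w]))

print(analogy(embeddings, "doctor", "man", "woman"))  # biased: "nurse"

# Neutralization: zero each occupation word's component along the gender
# direction so the occupation no longer encodes gender.
debiased = dict(embeddings)
for word in ("doctor", "nurse", "engineer"):
    v = embeddings[word].copy()
    v[0] = 0.0
    debiased[word] = v

print(analogy(debiased, "doctor", "man", "woman"))  # debiased: "doctor"
```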
Another practice, currently employed by OpenAI, is external red teaming, in which external stakeholders interact with the product, assess its weaknesses, potential for bias, or other adverse consequences, and provide feedback to OpenAI to improve the product and mitigate the onset of adverse events.
But there are limitations to these enhancements. To start, bias mitigation is not a one-size-fits-all issue. Bias is specific to its geographic and cultural bounds; for instance, a model in India may need to consider caste-based discrimination. Additionally, precision is required to capture every frame where bias is possible, and relying solely on the bias foreseeable from the developers’ perspective is limiting. Rather, some form of collaborative design is needed, in which relevant stakeholders contribute to identifying what is and is not biased.
Lastly, a debiased model is not a panacea. In a recent study, users interacted with a debiased model that used machine learning and deep learning to recommend college majors; regardless of the debiased model’s output, users relied on their own biases to choose their majors, often motivated by gender stereotypes associated with those majors.
Essentially, solutions from the developer side are not enough to resolve algorithmic bias issues.
Efforts to Regulate AI Employment Discrimination
Federal law does not specifically govern artificial intelligence. However, existing laws including Title VII extend to applications that include AI. At this point, regulation efforts are largely at the state and local government level.
New York City is the first local government to pass a law regulating AI-empowered employment decision tools. The statute requires organizations to inform candidates that AI is used in their hiring process and to notify potential candidates before using the screening device. If candidates do not consent to the AI-based process, the organization is required to use an alternative method.
Like New York’s statute, Connecticut passed a statute specific to state agencies’ use of AI and machine learning hiring tools. Connecticut requires an annual review of each tool’s performance and a status update on whether the tool underwent some form of bias-mitigation training to prevent unlawful discrimination.
New Jersey, California, and Washington D.C. currently have bills that are intended to prevent discrimination with AI hiring systems.
Employer Considerations
With the possibility of bias embedded throughout each step of the recruiting process, employers must do their part to gather information about the performance of the AI system they ultimately invest in.
To start, recruiters and managers alike stressed the need for AI systems to provide some explanation of why an applicant is rejected or selected so that the model’s performance can be accurately assessed. This need speaks specifically to AI models’ tendency to find proxies or shortcuts in the data that reach the intended outcome only on a superficial level. For instance, a model might focus only on candidates who graduated from universities in the Midwest if most of upper management attended such schools. Employers should therefore ask vendors for accuracy reports and for ways to identify and correct this kind of issue in their hiring pools, as sketched below.
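As a rough illustration of the kind of audit an employer might request, the Python sketch below measures how strongly a model’s scores track a feature that should be irrelevant to the job (a made-up “Midwest university” flag). All of the data and the scoring function are synthetic assumptions; a real audit would use the vendor’s actual scores and applicant records.

```python
# A hypothetical proxy-feature audit: does a screening model's score track
# a feature that should be irrelevant to the job? All data is synthetic.
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Synthetic applicants: a genuine qualification signal plus an irrelevant
# proxy flag (e.g., whether the applicant attended a Midwest university).
skill = rng.normal(size=n)
midwest = rng.integers(0, 2, size=n).astype(float)

# Hypothetical vendor model whose scores secretly lean on the proxy.
scores = 0.6 * skill + 0.8 * midwest + rng.normal(scale=0.3, size=n)

# A correlation far from zero between the proxy and the scores suggests
# the model learned a shortcut rather than job-related signal.
r = np.corrcoef(midwest, scores)[0, 1]
print(f"proxy/score correlation: {r:.2f}")
```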
Similarly, models can focus on candidate traits that are unrelated to the job and are simply unexplained correlations. For example, one model in the UK linked people who liked “curly fries” on Facebook with higher levels of intelligence. Employers therefore need to develop processes to analyze whether the output from the model is “job related” or related to carrying out the functions of the business.
Lastly, employers must continue to invest in robust diversity training. Algorithmic bias reflects the biases of the humans behind the computer. While AI tools enhance productivity and alleviate the laborious parts of work, they also increase the pressure on humans to do more cognitive-intensive work. In this sense, managers need robust diversity training to scrutinize outputs from AI models, to investigate whether the model measured what it was supposed to, and to confirm that the skills required in the posting accurately reflect the expectations and culture of the organization.
Along with robust managerial training, employers should note that these AI solutions often incorporate “culture fit” as a criterion. Leaders need to be intentional about precisely defining culture and promoting that defined culture in their hiring practices.
Conclusion
A machine does not know its output is biased. Humans interact with context—culture dictates norms and expectations, shared social/cultural history informs bias. Humans, whether we like to admit it or not, know when our output is biased.
To effectively mitigate unintentional bias in AI-driven hiring, stakeholders, ranging from HR professionals to developers and candidates, must understand the technology’s limitations, ensure its job-related decision-making accuracy, and promote transparent, informed use, while also maintaining robust DEI initiatives and awareness of candidates’ rights.
Trending Uses of Deepfakes
Deepfake technology, leveraging sophisticated artificial intelligence, is rapidly reshaping the entertainment industry by enabling the creation of hyper-realistic video and audio content. This technology can convincingly depict well-known personalities saying or doing things they never actually did, creating entirely new content that did not really occur. The revived Star Wars franchise used deepfake technology in “Rogue One: A Star Wars Story” to reintroduce characters like Moff Tarkin and Princess Leia, skillfully bringing back these roles despite the original actors, including Peter Cushing, having passed away. Similarly, in the music industry, deepfake technology has also been employed creatively, as illustrated by Paul Shales’ project for The Strokes’ music video “Bad Decisions.” Shales used deepfake technology to make the band members appear as their younger selves without them physically appearing in the video.
While deepfakes offer promising avenues for innovation, such as rejuvenating actors or reviving deceased ones, they simultaneously pose unprecedented challenges to traditional copyright and privacy norms.
Protections for Deepfakes
Whereas deepfakes generate significant concerns, particularly about protecting individuals against deepfake creations, there is also controversy over whether the creators of deepfake works can secure copyright protection for their creations.
Copyrightability of Deepfake Creations
Current copyright laws fall short in addressing the unique challenges posed by deepfakes. These laws are primarily designed to protect original works of authorship that are fixed in a tangible medium of expression. However, they do not readily apply to the intangible, yet creative and recognizable, expressions that deepfake technology replicates. This gap exposes a crucial need for legal reforms that can address the nuances of AI-generated content and protect the rights of original creators and the public figures depicted.
Under U.S. copyright law, human authorship is an essential requirement for a valid copyright claim. In the 2023 case Thaler v. Perlmutter, plaintiff Stephen Thaler attempted to register a copyright for a visual artwork produced by his “Creativity Machine,” listing the computer system as the author. However, the Copyright Office rejected this claim due to the absence of human authorship, a decision later affirmed by the court. According to the Copyright Act of 1976, a work must have a human “author” to be copyrightable. The court further held that providing copyright protection to works produced exclusively by AI systems, without any human involvement, would contradict the primary objective of copyright law, which is to promote human creativity, a cornerstone of U.S. copyright law since its beginning. Non-human actors need no incentivization with the promise of exclusive rights, and copyright was therefore not designed to reach them.
However, the court acknowledged ongoing uncertainties surrounding AI authorship and copyright. Judge Howell highlighted that future developments in AI would prompt intricate questions. These include determining the degree of human involvement necessary for someone using an AI system to be recognized as the ‘author’ of the produced work, the level of protection afforded the resultant image, ways to assess the originality of AI-generated works based on non-disclosed pre-existing content, the best application of copyright to foster AI-involved creativity, and other associated concerns.
Protections Against Deepfakes
The exploration of copyright issues in the realm of deepfakes is partially driven by the inadequacies of other legal doctrines to fully address the unique challenges posed by this technology. For example, defamation law focuses on false factual allegations and fails to cover deepfakes lacking clear false assertions, like a manipulated video without specific claims. Trademark infringement, with its commercial use requirement, does not protect against non-commercial deepfakes, such as political propaganda. Right of publicity laws mainly protect commercial images rather than personal dignity, leaving non-celebrities and non-human entities like animated characters without recourse. False light requires proving substantial emotional distress from misleading representations, a high legal bar. Moreover, common law fraud demands proof of intentional misrepresentation and tangible harm, which may not always align with the harms caused by deepfakes.
Given these shortcomings, it is essential to discuss issues in other legal areas, such as copyright issues, to enhance protection against the misuse of deepfake technology. In particular, the following sections will explore unauthorized uses of likeness and voice and the impacts of deepfakes on original works. These discussions are critical because they aim to address gaps left by other legal doctrines, which may not fully capture the challenges posed by deepfakes, thereby providing a broader scope for protection.
Unauthorized Use of Likeness and Voice
Deepfakes’ capacity to precisely replicate an individual’s likeness and voice may raise intricate legal issues. AI-generated deepfakes, while sometimes satirical or artistic, can also be harmful. For example, Taylor Swift has repeatedly become a target of deepfakes, including instances where Donald Trump’s supporters circulated AI-generated videos that falsely depict her endorsing Trump and participating in election denialism. This represents just one of several occasions where her likeness has been manipulated, underscoring the broader issue of unauthorized deepfake usage.
The Tennessee ELVIS Act updates personal rights protection laws to cover the unauthorized use of an individual’s image or voice, adding liabilities for those who distribute technology used for such infringements. In addition, on January 10, 2024, Reps. María Elvira Salazar and Madeleine Dean introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act (H.R. 6943). This bill is designed to create a federal framework to protect individual rights to one’s likeness and voice against AI-generated counterfeits and fabrications. Under this bill, digitally created content using an individual’s likeness or voice would only be permissible if the person is over 18 and has provided written consent through a legal agreement or a valid collective bargaining agreement. The bill specifies that sufficient grounds for seeking relief from unauthorized use include financial or physical harm, severe emotional distress to the content’s subject, or potential public deception or confusion. Violations of these rights could lead individuals to pursue legal action against providers of “personalized cloning services” — including algorithms and software primarily used to produce digital voice replicas or depictions. Plaintiffs could seek $50,000 per violation or actual damages, along with any profits made from the unauthorized use.
Impact on Original Work
The creation of deepfakes can impact the copyright of original works. It is unclear whether deepfakes should be considered derivative works or entirely new creations.
In the U.S., a significant issue is the broad application of the fair use doctrine. Under § 107 of the Copyright Act, fair use is determined by a four-factor test assessing (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the portion used, and (4) the impact on the work’s market potential. This doctrine includes protection for deepfakes deemed “transformative use,” a concept from the Campbell v. Acuff-Rose decision, where the new work significantly alters the original with a new expression, meaning, or message. In such cases, even if a deepfake copies significantly from the original, it may still qualify for fair use protection if it is transformative and does not impact the original’s market value.
However, this broad application of the fair use doctrine and liberal interpretation of transformative use do not work in favor of original creators. They may protect deepfake content even when it is created with malicious intent, making it difficult for original creators to bring claims under § 512 of the DMCA and § 230 of the Communications Decency Act.
Federal and State Deepfake Legislation
“Copyright is designed to adapt with the times.” At present, although the United States lacks comprehensive federal legislation that specifically bans or regulates deepfakes, several acts target deepfakes.
In Congress, a few proposed bills aim to regulate AI-generated content by requiring specific disclosures. The AI Disclosure Act of 2023 (H.R. 3831) requires any content created by AI to include a notice stating, “Disclaimer: this output has been generated by artificial intelligence.” The AI Labeling Act of 2023 (S. 2691) also demands a similar notice, with additional requirements for the disclaimer to be clear and difficult to alter. The REAL Political Advertisements Act (H.R. 3044 and S. 1596) demands disclaimers for any political ads that are wholly or partly produced by AI. Furthermore, the DEEPFAKES Accountability Act (H.R. 5586) requires that any deepfake video, whether of a political figure or not, carry a disclaimer. It is designed to defend national security from the risks associated with deepfakes and to offer legal remedies to individuals harmed by such content. The DEFIANCE Act of 2024 aims to enhance the rights to legal recourse for individuals impacted by non-consensual intimate digital forgeries, among other objectives.
On the state level, several states have passed legislation to regulate deepfakes, addressing various aspects of this technology through specific legal measures. For example, Texas SB 751 criminalizes the creation of deceptive videos with the intent to damage political candidates or influence elections. In Florida, SB 1798 targets the protection of minors by prohibiting the digital alteration of images to depict minors in sexual acts. Washington HB 1999 provides both civil and criminal remedies for victims of fabricated sexually explicit images.
This year, California enacted AB 2839, targeting the distribution of “materially deceptive” AI-generated deepfakes on social media that mimic political candidates and are known by the poster to be false, as the deepfakes could mislead voters. However, a California judge recently decided that the state cannot yet compel individuals to remove such election-related deepfakes, since AB 2839 facially violates the First Amendment.
These developments highlight the diverse strategies that states are employing to address the challenges presented by deepfake technology. Despite these efforts, the laws remain incomplete and continue to face challenges, such as concerns over First Amendment rights.
Conclusion
As deepfake technology evolves, it challenges copyright laws, prompting a need for robust legal responses. Federal and state legislation is crucial in protecting individual rights and the integrity of original works against unauthorized use and manipulation. As the technology advances, continuous refinement of these laws will be essential to balance innovation with ethical and legal boundaries, ensuring protection against the potential harms of deepfakes.
I. INTRODUCTION
In May of 2024, the Federal Circuit overruled 40 years of precedent for assessing the obviousness of design patents in LKQ Corp. v. GM Global Technology Operations LLC. Already, commentators and practitioners have a wide array of opinions about the impacts of LKQ. If recent history is any guide, however, declarative statements about the impacts of LKQ are premature, and they create risks to businesses, practitioners, and courts alike. Rather, patent law observers should adopt a wait-and-see approach for the impacts of LKQ on design patent obviousness.
II. THE LKQ DECISION
In LKQ, the Federal Circuit addressed the standard for assessing design patent obviousness under 35 U.S.C. § 103. Before this decision, to find a claimed design unpatentable as obvious, the two-part Rosen–Durling test required a primary reference that was “basically the same” as the claimed design and secondary references “so related” to the primary reference that their features suggested combination with the primary reference.
In this case, the Federal Circuit held that framework to be too rigid under the Patent Act. Instead, the court ruled that the obviousness of a claimed design is to be determined through the application of the familiar Graham four-part test used to assess the obviousness of utility patents.
A. EARLY OPINIONS ABOUT LKQ
In the months since LKQ, opinions about the impacts of the decision have poured in from academics, practitioners, and commentators alike. Some predict a seismic shift, stating that the “far-” and “wide-reaching consequences” of LKQ will likely make design patents harder to obtain and easier to invalidate. Others predict little change at all, stating that the obviousness test “is largely the same as before” and that the expected changes from LKQ are primarily procedural. Still others seem to have landed on a middle ground, expecting “noticeable differences” in the law, with “examiners [having] more freedom to establish that the prior art is properly usable in an obviousness rejection.”
B. PARALLELS WITH KSR
LKQ is not the only recent decision dealing with obviousness that evoked immediate and wide-ranging reactions. In 2007, the Supreme Court issued KSR International Co. v. Teleflex Inc., a decision addressing the obviousness standard for patents. Notably, the Court rejected the Federal Circuit’s rigid application of its “teaching, suggestion, or motivation” test for obviousness to a utility patent in that case.
In the immediate aftermath of that case, commentators and practitioners were “divided on whether the decision of the Supreme Court in KSR [was] (a) a radical departure from the Federal Circuit’s approach, or (b) unlikely to change much.” Even after the Federal Circuit began to issue decisions under KSR, some argued that the case had only a “modest impact” on the Federal Circuit, and others even questioned “whether the Supreme Court achieved anything in KSR other than giving the Federal Circuit a slap on the wrist.”
Experts were also divided on the likely business impacts of KSR in its immediate aftermath. In the summer after the decision came down, two distinguished patent law experts speaking on a panel were asked if KSR would drive up the cost of preparing and prosecuting a patent. One said yes, and the other said no.
C. CAUTIONARY TALES FROM KSR
As time went on, however, the impacts of KSR became clear. Empirical studies from years after the decision routinely proved that the impacts of KSR were anything but modest, contradicting “a commonly held belief that KSR did not change the law of obviousness significantly.” Various empirical studies revealed “strong evidence that KSR has indeed altered the outcomes of the Federal Circuit’s obviousness determinations,” “a remarkable shift in the Federal Circuit’s willingness to uphold findings of obviousness below,” and that “the benefit of retrospection shows KSR did change the rate of obviousness findings.”
Thus, KSR should serve as a cautionary tale against jumping to conclusions about the impacts of obviousness decisions. In the months following KSR, any declarative statements about its impacts were mere speculation. Even after the Federal Circuit began issuing decisions under KSR, the sample size remained too small to draw conclusions. Only years after the decision could researchers illuminate the impacts of KSR through empirical studies and show which of those early opinions were right and wrong.
III. THE WISDOM OF A WAIT-AND-SEE APPROACH FOR LKQ
Since the Federal Circuit only issued LKQ in May of 2024, we remain in the window where any declarative statements about its impacts are premature. Indeed, the Federal Circuit acknowledged that “there may be some degree of uncertainty for at least a brief period” in its LKQ opinion. While the urge to jump to conclusions is understandable, a wait-and-see approach offers many advantages.
First, as KSR demonstrated, early predictions may be inaccurate and may influence practitioners to adopt misguided design patent prosecution strategies. Overstating the impacts of LKQ may lead to overly cautious design patent applications, leaving intellectual property unprotected. A wait-and-see approach will allow prosecution strategies to develop based on reliable trends, reducing the risk of costly errors.
Second, the Federal Circuit almost certainly has more to say about design patent obviousness than it included in its LKQ opinion. Faulty strategy changes based on an incomplete picture may later need to be undone at great expense. Waiting allows the courts to solidify the impacts of LKQ so that practitioners and businesses can adjust their approaches – if that is necessary – with greater certainty and lower risk.
Third, overreacting to speculative predictions could cause companies to shift their design-around strategies, leading to unnecessary and wasteful changes in product lines. A wait-and-see approach allows companies to maintain their creative momentum and keep their design strategies consistent until the impacts of LKQ are better understood.
Fourth, design patents have experienced a boom in recent years. Premature predictions about LKQ risk skewing the perceptions of business leaders and the public about the continued value in pursuing design patent protections. By waiting to confirm the impacts of LKQ, commentators avoid this risk.
Fifth, predictions about LKQ could become self-fulfilling prophecies. Widespread speculation could unintentionally influence how courts evaluate obviousness in future cases. A wait-and-see approach allows courts to evaluate obviousness free from the noise of speculative predictions, focusing exclusively on the application of the law to the facts of each case.
Lastly, practitioners face potential backlash from clients if they offer advice that turns out to be too aggressive or pessimistic. By advocating patience to their clients, practitioners can maintain client trust and offer more measured and thoughtful advice once the implications of LKQ become clear.
IV. WHEN WILL WE KNOW?
This all raises the question: when will we understand LKQ well enough that declarative statements about its impacts are appropriate? Again, we can turn to KSR for guidance.
More than a year after KSR was handed down, some were still questioning if the decision had any impact at all. The first empirical studies of its impacts seemed to emerge about two to three years after the decision, uniformly finding that it altered the law of obviousness. Therefore, it seems safe to assume that empirical studies will illuminate the impacts of LKQ by 2026 or 2027. Until then, patent law observers should wait and see.
V. CONCLUSION
With the recent history of KSR as our guide, patent law observers should adopt a wait-and-see approach for the impacts of the Federal Circuit’s recent decision in LKQ. At this early stage, premature speculation and declarative statements about the impacts of the case create risks for businesses, practitioners, and courts. Instead, a wait-and-see approach allows reliable trends to guide prosecution strategies and allows design patent momentum to continue. In due time, empirical studies will emerge and make the impacts of LKQ clear to all.
Proponents of virtual reality (VR) as a medium for evidence in the courtroom have argued that it can bring many benefits to jurors, including enhanced empathy and better factual understanding. However, it is also speculated that VR could increase a juror’s biases or a false sense of accuracy. As VR technology advances, the legal field faces the challenge of balancing innovation with impartiality, paving the way for standards that will determine the future role of VR in trials. By examining VR’s speculative and actual impacts in evidence presentation, we gain insight into how this technology could affect the legal landscape further.
I. What Is VR and How Does It Relate To Evidence?
In its broadest sense, VR is “a simulated three-dimensional (3D) environment that lets users explore and interact with a virtual surrounding in a way that approximates reality, as it’s perceived through the users’ senses.” VR technology primarily utilizes headwear that covers your eyes completely, letting you see a three-dimensional immersive world in a 360-degree spherical field of view. While VR technology is gaining popularity, many people don’t use or come across it in daily life. While VR has been trendy for recreational use, such as VR video games, it has also been implemented in many professional settings for training, education, healthcare, retail, real estate, and more. The visual, auditory, and even tactile aspects of virtual reality, ranging from vibrations to full-body haptic suits, allow the immersion to feel more ‘real’ and thus enable these practical applications.
These practical applications have led to speculation and interest in using VR technology in the legal field. One of the primary ideas is that jurors can “experience” scenes of the case rather than physically going there. Jurors have shown a desire to visit crime scenes in homicide cases when the scene itself is relevant to the conviction. VR technology can help overcome the hurdles of photographs or videos, as juries can ‘virtually witness’ the scene and simulated events. The power of VR technology to transport jurors to the scene of the crime can also help make complex cases more understandable.
II. Evidentiary Concerns with VR Evidence
Before addressing VR evidence’s potential benefits and harms, it is necessary to consider its admissibility. Federal Rules of Evidence, such as hearsay and authentication, present unique challenges for the admissibility of VR evidence.
Under the Federal Rules of Evidence, hearsay is an out-of-court statement offered to prove the truth of the matter asserted. For example, to prove person A loves VR, person B testifies that person A told them, “I love VR.” Person B’s testimony about person A’s statement is hearsay. Hearsay is not admissible unless it meets an exemption or exception in the Federal Rules of Evidence. While the exact use of VR evidence containing out-of-court statements would vary case by case, a VR presentation offered for a secondary purpose, not for the truth of the matter but to clarify other admissible evidence, would be admissible. For example, if there is an admissible recording of a witness describing a crime scene, VR evidence could help contextualize their testimony and immerse the jurors in the scene. In that case, the purpose wouldn’t be to prove the scene looked exactly as described or as it appeared in VR, but to clarify the witness’s admissible testimony. While this may keep VR demonstrations out of jury deliberations, they can still be shown in the courtroom.
Another unique issue that comes with introducing VR presentations of evidence is authentication. According to the Federal Rules of Evidence, to introduce evidence, a proponent must produce evidence “sufficient to support a finding that the item is what the proponent claims it is.” This presents a unique problem for VR demonstrations because a proponent must show that the VR evidence is authentic. For example, with a photograph, a witness can authenticate it by testifying to taking the photo or confirming it accurately represents its contents. However, because VR is created as a simulation rather than a direct capture, it cannot be authenticated the same way as a photograph. A proponent would rely on Federal Rule of Evidence 901(b)(9) for authentication. Because this rule alone would not be sufficient, a guideline for admitting VR evidence is that the proponent should “demonstrate that a qualified expert created the VR demonstration using an accurate program and equipment.” The proponent should also show that all data used to create the demonstration was accurate and that no unfounded assumptions were made. Lastly, the proponent must present witness testimony to “verify the accuracy of the final product.”
III. Speculated and Actual Benefits of VR Evidence
As VR technology has become cheaper and more mainstream, it has begun appearing in actual cases, making the potential for its wider use more achievable. One of the primary speculated benefits was the immersive nature of VR, allowing jurors to engage more deeply with evidence by experiencing crime scenes and potentially re-creating events firsthand. Another speculated benefit is VR’s potential to “appeal to a jury’s emotional and subconscious” responses through its immersive nature.
Real-life implementations of VR evidence have already illustrated some of these benefits. One example comes from Marc Lamber and James Goodnow, personal injury attorneys who have implemented VR technology in cases to “transport a jury to an accident scene.” Lamber and Goodnow work with engineers, experts, and production companies to recreate the scene where an injury or death occurred. This has allowed jurors not only to visualize the circumstances, events, and injury but also to empathize more deeply with the injured person’s suffering and the aftermath of the incident. This ability to ‘transport’ jurors to the scene can be incredibly impactful, as it may be hard for jurors to visualize the scene in an isolated courtroom. One study in Australia focused on how VR can affect a jury’s ability to reach the ‘correct’ verdict. Researchers, legal professionals, police officers, and forensic scientists simulated a hit-and-run scene in VR and in photographs, then split mock jurors into groups to test the differences. The study found that VR required significantly less effort than photographs to construct a coherent narrative, leading jurors to reach the correct verdict 9.5 times more frequently than those who relied on photographs alone. The immersive technology also gave jurors better memory of the critical details of the case; the photograph group had difficulty visualizing the events of the case from the photographs alone. Researchers called the study “unequivocal evidence that interactive technology leads to fairer and more consistent verdicts.”
IV. Speculated and Actual Harms of VR Evidence
While the immersive nature of VR technology has brought speculations about potential benefits for the legal field, concerns have emerged about possible harm or shortcomings of VR technology as evidence. The primary concerns are about potential biases and costs.
VR technology might cause jurors to impermissibly judge parties, especially defendants in criminal trials, differently according to underlying biases they hold. One study found that mock jurors who used VR technology to understand a criminal trial were more likely to judge a black defendant more harshly than a white one. These studies used VR to simulate scenes from trials and, through computer generation, swapped the races of the defendants to test for differences in guilty verdicts and sentencing. Salmanowitz’s study found that using avatars instead of accurate visual representations of the defendants can reduce implicit bias based on race: the avatars were represented by only the handheld controllers visible in the virtual space, and with them the VR technology made no substantial difference in the jury’s decisions. However, a study by Samantha Bielen et al. found that jurors using VR may be biased against non-white defendants, who were more likely to be found guilty on the same evidence as a white defendant.
The cost of VR also presents a barrier to implementing VR technology in courts. In the Australian study, a researcher noted that using VR as an evidentiary medium is “expensive, especially in remote locations, and in some cases, the site itself has changed, making accurate viewings impossible.” VR technology is expensive, with even the cheapest consumer-grade headsets costing around $500. Further, digital recreation of the scene starts at $15,000 but can “go up to six figures depending on complexity.”
V. Conclusion
While the balance between the benefits and harms of introducing VR as a medium for evidence may vary greatly case by case, overall, the demonstrated advantages in improving jurors’ factual understanding tend to outweigh the drawbacks. Although speculation is a natural reaction to new technologies, as VR finds real-world application in courtrooms, its tangible benefits and harms have become clearer. This allows revisiting the initial speculation and more effectively addressing this balance and the admissibility concerns that accompany the use of VR demonstrations as evidence. Increased use of and advancements in VR technology could amplify these benefits by increasing empathy and accuracy while tempering the effects of emotional bias. With this evolution in VR technology, the potential for an immersive yet balanced use of VR in the courtroom grows, offering an even greater ability for jurors to engage with evidence to enhance understanding, minimize bias, and support fairer, more informed verdicts.
In April 2023, drama unfolded on Twitter, and it revolved around olive oil. Andrew Benin, the co-founder of Graza, a start-up single-origin olive oil brand that comes in two adorable green squeeze bottles, publicly called out rival Brightland for allegedly copying his squeezable olive oil idea. Mr. Benin wrote, “While friendly competition was always welcome, I do view this as a blatant disrespect and am choosing to voice my discontent.” In response, the internet angrily clapped back, as the internet does. One of these dissidents was Alison Cayne, the founder of Haven’s Kitchen, a cooking company. She wrote, “with all due respect, you did not create the squeeze bottle. Chefs and home cooks have been using it for decades.” Another commenter, Gabrielle Mustapich, a co-founder at Hardpops & Pilothouse Brands, added, “my mom was buying squeezy bottle olive oil in 2007 (and it wasn’t Graza).”
Ms. Mustapich is right – squeeze bottles have been ubiquitous in chefs’ kitchens for years. However, they seem to be growing in popularity in home kitchens. While Graza may or may not be able to get credit for that societal shift, that doesn’t mean they shouldn’t get any recognition for doing things differently in the olive oil industry. That begs the question – if they were to sue Brightland, would they win?
Though Graza doesn’t have a patent for its bottle or a trademark for anything except its name, official registration is not required to receive trade dress protection. Trade dress protection is provided by the Lanham Act, which allows the producer of a product a cause of action for the use of “any word, term, name, symbol, or device, or any combination thereof . . . which . . . is likely to cause confusion . . . as to the origin, sponsorship, or approval of his or her goods . . . . ” While trademarks are generally thought to cover things like brand names and their distinct designs, trade dress encompasses the design or packaging of a product or part of a product. For example, while the name “Nike” has a trademark, as does their “swoosh” symbol, the visual design of, say, a sweater or the box it comes in may unofficially deserve trade dress protection (209).
While courts vary slightly on the elements of a protectable trade dress, they mainly agree that three factors must be met when analyzing the design of a product. First, the trade dress must be primarily non-functional. This requirement might seem counterintuitive since a company should be rewarded for making its product useful. However, the non-functionality requirement does not concern the invention of this aspect of the product, which is left to the world of patents. The non-functionality requirement promotes competition because other companies can make similar products with the same useful qualities without legal repercussions.
The landmark 2001 Supreme Court case Traffix Devices v. Marketing Displays redefined the test for functionality. While circuits vary on their precise balancing tests, many follow that of the Ninth Circuit in Talking Rain Bev. Co. v. South Beach Bev. Co. from 2003: Whether advertising “touts the utilitarian advantages of the design,” whether “the particular design results from a comparatively simple or inexpensive method of manufacture,” whether the design “yields a utilitarian advantage,” and “whether alternative designs are available,” (603) – though, per Traffix, the “mere existence of alternatives does not render a product non-functional” (33-34).
Next, the trade dress must be inherently distinctive or have acquired a secondary meaning to customers (which can be difficult to prove). Finally, the alleged infringement must create a likelihood of confusion among customers as to a product’s source. This inquiry is a step-by-step process, so if the product design is primarily functional, the inquiry ends, and the design’s trade dress cannot be protected.
If Graza’s squeeze bottle is considered part of its product design rather than its packaging, it is functional and therefore not protectable trade dress. First, its own Instagram includes in its bio, “Fresh Squeezable Single Origin Olive Oil / As squeezed in @bonappetitmag.” The home page of its website features a video of a chef gingerly unscrewing the squeeze top and gracefully applying the oil to a hot pan. Second, the squeeze bottle itself is a simple design. Restaurants can bulk purchase 500 of these basic bottles, and the patent for the generic squeeze bottle, though not identical to Graza’s, is not terribly complex. Third and fourth, while alternative designs are available – most typical olive oils come in bulky containers with regular pour spouts – the squeeze spout is likely attractive to many consumers for its uniqueness and convenience.
But what if the bottle is not considered to be the product’s design – that is, the appearance of the product itself – but rather the packaging – the container holding the product? According to the Wal-Mart Court, product design and packaging trade dress should be analyzed differently (215). The Court reasoned that, to a consumer, product packaging may identify the source of a product much more clearly than product design would (212-13). Therefore, the court says, a producer need not prove that a package has acquired secondary meaning to receive trade dress protection.
The squeeze bottle, the pretty green color, and the charming label are all selling points for Graza, with one reviewer saying, “These olive oils are one of the few things I find attractive enough to leave out on my countertop.” Arguably, however, the main reason people are purchasing it is to get to the product inside – a quick Google search of “Graza review” brings up articles primarily reviewing the oil’s quality, not the attractiveness of the bottle or the utility of the squeeze function. Consider the difference between a consumer purchasing an olive oil known to be delicious but in a poorly designed, ugly bottle – there, the bottle is the packaging for the product – as opposed to an olive oil known to be terrible but in a limited-edition, beautiful bottle – there, the bottle is the product, and there is no packaging.
In Graza’s case, however, it can be difficult to know what’s really in the average consumer’s head, and in many ways, the “average consumer” is a myth. When courts determine a product’s likelihood of confusion with another product or the likelihood that a consumer will see a product or its packaging and know what brand it comes from, they are necessarily guessing. The Wal-Mart Court tried to correct for that by advising courts in ambiguous product design/product packaging cases to lean toward finding that the feature is product design, presumably to force greater proof of distinctiveness, etc., from producers.
When it comes to Graza, though, it seems entirely possible that a typical, reasonable consumer would see the Brightland olive oil in a squeeze bottle and think: That’s just like the Graza bottle! Even if the trusty squeeze bottle has been ubiquitous in kitchens for decades, Graza may have brought it to the attention of home chefs everywhere. Similarly, though many olive oil bottles are dark green, their tall, skinny, matte green appearance with the pointed tip is unique enough – and their advertising aggressive and targeted enough – that many consumers (mostly those of the Gen Z and Millennial generations) would easily recognize the design as Graza if it flashed before their eyes.
But, if we are to follow the Wal-Mart Court’s suggestion and reason that the bottle is Graza’s product design instead of packaging in this moment of ambiguity, Graza’s argument that they deserve trade dress protection – or, more simply, credit for coming up with the squeeze bottle olive oil idea – is weak. Andrew Benin may have realized such, as he, shockingly, issued a public Twitter apology retracting his earlier accusations.
INTRODUCTION
Every day across the world, many people, brands, and companies believe that their names have been defamed or their trademarks infringed. With over 440,000 US trademark applications filed in 2022 alone, disputes over trademarks arise frequently. These quarrels do not always involve such a large-scale platform as one of the highest-rated television series ever created, but when they do, they often spur lively discussions in the media about intellectual property laws in the United States.
On September 25, 2023, the United States District Court for the Southern District of New York granted the creators of the hit television series “Better Call Saul” their motion to dismiss claims of defamation and trademark infringement brought by Liberty Tax Service over the show’s depiction of a tax preparation service in an episode from April 2022. The tax service depicted in the show bears many similarities to Liberty Tax Service and its more than 2,500 offices across the country, including the same red, white, and blue colors, as well as the Statue of Liberty logo. This case, JTH Tax LLC v. AMC Networks Inc., implicates many facets of intellectual property law, including trademark infringement and dilution under the Lanham Act, trade dress, and New York statutory defamation law. Ultimately, the court ruled that the “Rogers Test” applied to the defendants’ alleged use of Liberty Tax Service’s trademarks. However, the court’s decision on the motion to dismiss could certainly be up for debate, especially if Liberty Tax Service had raised certain arguments regarding the second prong of the Rogers Test.
A. APPLICABILITY OF THE “ROGERS TEST”
The “Rogers Test” was developed by the United States Court of Appeals, Second Circuit in Rogers v. Grimaldi. It is a two-pronged balancing test that can be implicated when there are opposing interests under the First Amendment and the Lanham Act. The test states,
“[w]here the title of an expressive work is at issue, the “balance will normally not support application of the [Lanham] Act unless the title has no artistic relevance to the underlying work whatsoever, or, if it has some artistic relevance, unless the title explicitly misleads as to the source or the content of the work.”
In the matter at hand, the court quickly and rightfully found that the Rogers Test applied because, to the extent the defendants used the plaintiff’s marks, the uses were in furtherance of the plot of Better Call Saul, heightening the audience’s understanding of key characters.
1. ARTISTIC RELEVANCE
Under this prong, defendants argued that the alleged use of plaintiff’s marks met the purposely low threshold for artistic relevance under the Rogers Test because the reference to “Sweet Liberty” in the episode in question is undeniably ironic and clearly related to the characters’ role in the series, as it is the business they created with illegal intent. On top of that, the court found that the characters’ use of the plaintiff’s trade dress (i.e., the design and configuration of the product itself) was simply an appropriation of patriotism that highlights their deceptiveness as crime-ridden characters, all of which has relevance to the episode. Thus, the court concluded that the artistic relevance of the episode’s use of the plaintiff’s marks was clearly “above zero.”
2. WHETHER THE USE OF SUCH MARKS WAS “EXPLICITLY MISLEADING”
Since the Court concluded that defendants’ alleged use of plaintiff’s marks had at least an ounce of artistic relevance to the show, the Court focused on the second prong of the Rogers Test, whether the defendants’ use of the plaintiff’s marks was “explicitly misleading,” which would allow the Lanham Act to apply to the show. The essential question to ask for the second prong is “whether the defendant[s’] use of the mark ‘is misleading in the sense that it induces members of the public to believe [the work] was prepared or otherwise authorized’ by the plaintiff.” To do so, the court focused on the eight non-exhaustive factors from Polaroid Corp. v. Polarad Elecs. Corp., 287 F.2d 492 (2d Cir. 1961), in order to assess the likelihood of confusion to satisfy this prong. The eight factors include: “(1) strength of the trademark; (2) similarity of the marks; (3) proximity of the products and their competitiveness with one another; (4) evidence that the senior user may bridge the gap by developing a product for sale in the market of the alleged infringer’s product; (5) evidence of actual consumer confusion; (6) evidence that the imitative mark was adopted in bad faith; (7) respective quality of the products; and (8) sophistication of consumers in the relevant market.”
a. THE COURT’S ASSESSMENT OF THE POLAROID FACTORS DISSECTED
The court’s assessment of the eight Polaroid factors in this matter could genuinely be up for debate, especially if the plaintiff raised stronger, or any, arguments regarding multiple factors. The Polaroid factors are weighed against each other depending on which way a court decides each factor favors the respective parties. Here, the court found that only the first factor weighed in favor of the plaintiff, as the defendants did not contest the strength of the plaintiff’s mark. The other seven factors either weighed in favor of the defendant or were deemed neutral. Most of the factors were weighed in the defendant’s favor, but mainly because the plaintiff failed to argue them in their Amended Complaint.
For example, and probably most convincing, factor five could have been evaluated in favor of the plaintiff. This factor looks to evidence of actual consumer confusion, but it is black letter law that actual confusion need not be proven to prevail under the Lanham Act or on this factor, since it is often too challenging and expensive to prove. The Act requires only that a likelihood of confusion be shown as to source, meaning that consumers will mistakenly believe the goods or services come from the same source, as explained in Lois Sportswear, U.S.A., Inc. v. Levi Strauss & Co.
An opportunity that the plaintiff failed to seize, and that could have helped satisfy this factor, stems from an example the Supreme Court of the United States offered in the Jack Daniel’s case, which involved a somewhat analogous situation: Jack Daniel’s sued a company on similar grounds over a dog toy that closely resembled a bottle of Jack Daniel’s whiskey. The illustration provided in that case posited a luggage manufacturer using another brand’s marginally altered logo on its luggage to foster growth in the luggage market. The Supreme Court compared this example with another and noted that the greater likelihood of confusion would occur in the luggage illustration because it conveyed possible misinformation about who is responsible for the merchandise. Had the plaintiff drawn this analogy to the facts at bar and argued in its Amended Complaint that the show plainly conveyed misinformation through its image of “Sweet Liberty Tax Services,” a barely modified version of Liberty Tax Service’s image, this factor would most likely have been weighed in favor of the plaintiff, given the likelihood of confusion as to the source of the show’s representation of the character’s business.
Secondly, factor seven requires taking into account the respective quality of the products. In Flushing Bank v. Green Dot Corp., the United States District Court for the Southern District of New York interpreted this factor to mean that if the quality of a junior user’s (the show’s) product or service is low compared to the senior user’s (Liberty Tax Service’s), the chance of actual injury in the event of confusion increases. If, as argued above, confusion were established, the plaintiff’s contention in its Amended Complaint that the defendants’ use of its marks linked Liberty Tax’s trademarks to the show’s depiction of inferior-quality service would likewise tilt this factor in the plaintiff’s favor.
Lastly, the eighth factor, the sophistication of purchasers, could plausibly be construed in favor of the plaintiff. The theory behind this factor is that the more sophisticated the senior user’s purchasers are, the less likely they are to be confused by the presence of similar marks. To determine consumer sophistication, courts consider the product’s nature and price. The plaintiff failed to raise any argument on this factor. However, an argument could have been made to weigh it in the plaintiff’s favor, which would have balanced the factors evenly between the parties. In that event, the allegations would be viewed in the light most favorable to the plaintiff, as the standard at the motion to dismiss stage requires.
If the plaintiff had shed light on Liberty Tax Service’s prices compared to those of the average professional tax preparer, the evidence would have weighed this factor in the plaintiff’s favor. Liberty Tax Service has a basic tax preparation rate of only $69.00, while the average professional tax preparer charges roughly $129.00 for similar services. This distinction suggests that Liberty Tax Service serves less sophisticated consumers than the average professional tax preparer, leading to the conclusion that its consumers would be more likely to be confused by the presence of similar marks. This would make a fourth of the eight factors weigh in favor of the plaintiff.
CONCLUSION
Had the plaintiff raised the arguments above in its Amended Complaint, addressing the second element of the Rogers test and, further, the Polaroid factors, it would have demonstrated that the alleged likelihood of confusion is indeed plausible. Because such arguments would tilt the scale in favor of the plaintiff, Liberty Tax Service, the Lanham Act would most likely be implicated. This case, therefore, might have reached a different outcome at this stage of litigation, favoring the plaintiff rather than the creators of the television series.
The advent of generative Artificial Intelligence (AI) and deepfake technology marks a new era in intellectual property law, presenting unprecedented challenges and opportunities. As these technologies evolve, their creations blur the lines between reality and fiction, escalating the risk of consumer deception and diluting brand values. Deepfakes, a fusion of the words ‘deep learning’ and ‘fake,’ stand at the forefront of this revolution. These ultra-realistic synthetic media replace a person’s image or voice with someone else’s likeness, typically using advanced artificial intelligence. Relying on deep learning algorithms, a subset of AI, deepfakes employ techniques like neural networks to analyze and replicate human facial and vocal characteristics with stunning accuracy. The result is a fabricated version of a video or audio clip that is virtually indistinguishable from the original.
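For readers curious about the mechanics, the sketch below illustrates, at toy scale, the shared-encoder/dual-decoder autoencoder design commonly described in the deepfake literature: one encoder learns a compact representation of any face, while separate decoders learn to reconstruct person A or person B, so swapping decoders at inference time produces the face swap. This is a minimal, assumption-laden illustration in PyTorch, not any production system; the image size, layer sizes, and training details are all invented for the example.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstruct a 64x64 face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, two identity-specific decoders.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (sketched): each identity is reconstructed through its own decoder.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in batch of person A's face crops
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)  # plus the mirror loss for B

# The "swap": encode person A's face, decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

Real systems add adversarial losses, face alignment, and blending stages, but the decoder swap above is the conceptual core that makes the output so convincing.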
The spectrum of deepfake applications is vast, encompassing both remarkable prospects and significant risks. On the positive side, the technology promises to revolutionize entertainment and education: it can breathe life into historical figures for educational purposes or enhance cinematic experiences with unparalleled special effects. Its downside, however, carries serious consequences, particularly for businesses and brands. Generative AI and deepfakes can create highly convincing synthetic media, obfuscating the line between authentic and fabricated content. This capability poses substantial risks of consumer deception and brand damage.
When consumers encounter deepfakes featuring well-known trademarks, the experience not only challenges the authenticity of media but also erodes the trust and loyalty brands have cultivated over years. This effect on consumer perception and decision-making is central to understanding the full implications of AI for trademark integrity, as it leads to potential deception and undermines the critical connection between trademarks and consumer trust. The dual nature of deepfakes, as both a tool for creative expression and a source of potential deception, underscores the complexity of their impact on intellectual property and consumer relations.
As generative AI introduces opportunities on the one hand, risks of consumer deception abound on the other. This direct threat to perception highlights trademarks’ growing vulnerability. At the heart of branding and commerce lie trademarks, distinguishing one business from another. They are not mere symbols; they represent the core identity of brands, extending from distinctive names and memorable phrases to captivating logos and designs. When consumers encounter these marks, they do not just see a name or a logo; they connect with a story, a set of values, and an assurance of quality. This profound connection underscores the pivotal role of trademarks in driving consumer decisions and fostering brand loyalty. The legal protection of these trademarks is governed by the Lanham Act, which offers nationwide protection against infringement and dilution. Infringement occurs when a mark similar to an existing trademark is used, potentially confusing consumers about the origin or sponsorship of a product. Dilution, by contrast, refers to the weakening of a famous mark’s distinctiveness, either by blurring its meaning or by tarnishing it through offensive use.
However, the ascent of generative AI and deepfake technology casts new, complex shadows over this legal terrain. The challenges introduced by these technologies are manifold and unprecedented. While it was once straightforward to distinguish between intentional and accidental use of trademarks, the line is now increasingly blurred. For instance, when AI tools deliberately replicate trademarks to deceive or dilute a brand, it is a clear case of infringement. However, the waters are muddy when AI, through its intricate algorithms, unintentionally incorporates a trademark into its creation. Imagine an AI program designed for marketing inadvertently including a famous logo in its output. This scenario presents a dilemma where the line between infringement and innocent use becomes indistinct. The company employing the AI might not have intended to infringe on the trademark, but the end result could inadvertently imply otherwise to the consumer.
This new landscape necessitates a reevaluation of traditional legal frameworks and poses significant questions about the future of trademark law in an age where AI-generated content can replicate real brands with unnerving precision. The challenges of adapting legal strategies to this rapidly evolving digital environment are not just technical but also philosophical, calling for a deeper understanding of the interplay between AI, trademark law, and consumer perception.
The Supreme Court’s decision in Jack Daniel’s Properties, Inc. v. VIP Products LLC sets a key precedent for fair use defenses in the age of AI. The case, centered on a dog toy mimicking Jack Daniel’s whiskey branding, highlights the tension between trademark protection and First Amendment rights.
Although the case did not directly address AI, its principles are crucial in this field. For both trademark infringement and dilution claims, the Court narrowed the boundaries of fair use, particularly where an “accused infringer has used a trademark to designate the source of its own goods.” The Court also limited the scope of the noncommercial use exclusion for dilution claims, stating that it “cannot include… every parody or humorous commentary.” This narrower scope of fair use makes it tricky for users of AI content to navigate fair use defenses when parodying trademarks, where the lines between unlawful use, parody, and unintentionally confusing consumers about endorsement or sponsorship may blur.
This ruling has direct repercussions for AI models generating noncommercial comedic or entertainment content featuring trademarks. Even if AI-created content is noncommercial or intended as parody, it does not automatically qualify as fair use. If such content depicts or references trademarks in a way that could falsely suggest sponsorship or source affiliation, then claiming fair use becomes extremely difficult. Essentially, the noncommercial nature of AI-created content offers little protection if it uses trademarks to imply an incorrect association or endorsement from the trademark owner.
As such, AI developers and companies must be cautious when depicting trademarks in AI-generated content, even in noncommercial or parodic contexts. The fair use protections may be significantly limited if the content falsely suggests a connection between a brand and the AI-generated work.
In this light, AI-generated content for marketing and branding requires meticulous consideration. AI developers must ensure their models do not generate content that incorrectly implies a trademark’s source or endorsement. This necessitates thorough review processes and possibly adapting algorithms to prevent false implications of source identification. At the same time, users of AI technology for branding, marketing, or content creation need stringent review of how trademarks are depicted or referenced, to ensure their creations do not inadvertently infringe upon trademarks or mislead consumers about the origins or endorsements of products and services. With AI’s capacity to replicate trademarks precisely, the potential for unintentional infringement and consumer deception is unprecedented. One way to picture such a review step appears in the sketch below.
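As a purely illustrative example of such a review step, a pre-publication screen might compare each generated image against a registry of reference logos using perceptual hashing. Everything here, from the folder layout to the distance threshold, is an assumption for the sketch, built on the open-source `imagehash` and Pillow libraries rather than any established compliance tool.

```python
from pathlib import Path
from PIL import Image
import imagehash  # pip install imagehash pillow

# Hypothetical registry: perceptual hashes of logos the generator must not reproduce.
LOGO_DIR = Path("protected_logos")  # assumed folder of reference logo images
registry = {p.name: imagehash.phash(Image.open(p)) for p in LOGO_DIR.glob("*.png")}

HAMMING_THRESHOLD = 10  # assumed cutoff; a smaller distance means more similar

def flag_possible_trademark(generated: Path) -> list:
    """Return the reference logos whose perceptual hash is close to the generated image."""
    h = imagehash.phash(Image.open(generated))
    return [name for name, ref in registry.items() if h - ref <= HAMMING_THRESHOLD]

# Usage: hold back any AI-generated asset that matches, and route it to human
# legal review before publication.
# matches = flag_possible_trademark(Path("campaign_banner.png"))
```

A screen like this only catches near-identical reproductions of a whole logo, not a mark embedded in a larger scene, which is precisely why human review remains part of any serious process.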
This evolving landscape calls for a critical reevaluation of existing legal frameworks. Though robust in addressing the trademark issues of its time, the Lanham Act was conceived in an era before the digital complexities and AI advancements we currently face. The Court’s ruling in the Jack Daniel’s case could influence future legislation and litigation involving AI-generated content and trademark issues. Today, we stand at a critical point where the replication of trademarks by AI algorithms challenges our traditional understanding of infringement and dilution. Addressing this requires more than mere amendments to existing laws; it calls for a holistic overhaul of legal frameworks. This evolution might involve new legislation, innovative interpretations, and an adaptive approach to defining infringement and dilution in the digital age. The challenge is not just to adapt to technological advancements but to anticipate and shape the legal landscape in a way that balances innovation with the need to protect the essence of trademarks in a rapidly changing world.
Deepfakes and similar AI fabrications pose risks not just to trademarks but also to individual rights, as the right of publicity shielding personal likenesses confronts the same consent and authenticity challenges in an era of scalable deepfake identity theft. The concept of the right of publicity has gained renewed focus in the age of deepfakes, as exemplified by the unauthorized use of Tom Hanks’ likeness in a deepfake advertisement. This case serves as a potent reminder of the emerging challenges deepfake technology poses in the realm of intellectual property rights. California Civil Code Section 3344, among others, protects individuals from the unauthorized use of their name, voice, signature, photograph, or likeness. Deepfakes, however, with their capability to replicate a person’s likeness with striking accuracy, raise complex questions about consent and misuse in advertising and beyond.
Deepfakes present a formidable threat to both brand reputation and personal rights. These AI-engineered fabrications are capable of generating viral misinformation, perpetuating fraud, and inflicting damage on corporate and personal reputations alike. By blurring the lines between truth and deception, deepfakes undermine trust, dilute brand identity, and erode the foundational values upon which reputations are built. The impact of deepfakes on brand reputation is not a distant concern but a present and growing one, necessitating vigilance and proactive measures from individuals and organizations. The intricate dynamics of consumer perception, influenced by such deceptive technology, underscore the urgency for a legal discourse that encompasses both the protection of trademarks and the right of publicity in the digital age.
While the complex questions surrounding AI, deepfakes, and trademark law form the core of this analysis, the disruptive influence of these technologies extends across sectors. The recent widespread dissemination of explicit AI-generated images of Taylor Swift is a stark example of the urgent need for regulatory oversight in this evolving landscape, particularly for the entertainment industry. Hollywood is another sphere significantly affected by AI advancements: the ongoing discussions, notably during the SAG-AFTRA strike, highlight the critical issues of informed consent and fair compensation for actors whose digital likenesses are used. The use of generative AI technologies, including deepfakes, to create digital replicas of actors raises crucial questions about intellectual property rights and the ethical use of such technologies in the industry.
The legal and political landscape is also adapting to the challenges posed by AI and deepfakes. With the 2024 elections on the horizon, the Federal Election Commission is in the preliminary phases of regulating AI-generated deepfakes in political advertisements, aiming to protect voters from election disinformation. Additionally, legislative efforts such as the introduction of the No Fakes Act by a bipartisan group of senators mark significant steps toward establishing the first federal right to control one’s image and voice against the production of deepfakes, essentially the right of publicity.
Moreover, legislative activity on AI regulation has been notable, with several states proposing and enacting laws targeting deepfakes as of June 2023. President Biden’s executive order in October 2023 further exemplifies the government’s recognition of the need for robust AI standards and security measures. However, the disparity between the rapid progression of AI technology and the comparatively sluggish governmental and legislative response is evident in the limited scope of interventions like that order. The order’s call for watermarks to identify AI-generated content signals a move toward a more regulated AI environment, yet tools to circumvent such measures by removing watermarks are readily available, a reminder that each new safeguard invites tools designed to undermine it. These developments at the industry, legal, and political levels reflect the multifaceted approach required to address the complex issues arising from the intersection of AI, deepfakes, intellectual property, and personal rights.
The ascent of generative AI and synthetic media ushers in a complex new era, posing unprecedented challenges to intellectual property protections, consumer rights, and societal trust. As deepfakes and similar fabrications become indistinguishable from reality, risks of mass deception and brand dilution intensify. Trademarks grapple with blurred lines between infringement and fair use while publicity rights wrestle with consent in an age of identity theft.
Given the potency of deepfakes in shaping narratives, detection of such content is essential. A number of technologies have shown promise in recognizing deepfake content with high accuracy; machine learning algorithms in particular have proven effective at spotting AI-generated videos by analyzing facial and vocal anomalies. Their application in conflict zones is crucial for mitigating the spread of misinformation and malicious propaganda. A toy version of such a classifier appears below.
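The sketch fine-tunes a small off-the-shelf image model to label face crops as real or fake, the basic shape of the classifier-based detectors described above. The dataset layout, model choice, and single-pass training loop are all assumptions for the illustration, not a deployable detector.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: data/train/real/*.jpg and data/train/fake/*.jpg (face crops).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Small pretrained backbone with a new two-way head: real vs. fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown; real training needs far more
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Production detectors also inspect temporal and audio cues, and they degrade as generators improve, which is why detection is a complement to, not a substitute for, the legal and policy responses discussed here.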
Recent governmental initiatives signal the beginnings of a framework evolving to suit new technological realities. However, addressing the systemic vulnerabilities exposed by AI’s scalable distortion of truth demands a multifaceted response. More than mere legal remedies, restoring balance requires ethical guardrails in technological development and usage norms, along with broader public awareness.
In confronting this landscape, maintaining foundational pillars of perception and integrity remains imperative, even as inventions test traditional boundaries. Our preparedness hinges on enacting safeguards fused with values that meet this watershed moment where human judgment confronts machine creativity. With technology rapidly outpacing regulatory oversight, preventing harm from generative models remains an elusive goal. But don’t worry. I am sure AI will come up with a solution soon.
Background: Sephora and ModiFace
In a market filled with a mixture of new direct-to-consumer influencer brands gaining traction, brick-and-mortar drugstores providing cheaper options known as “dupes,” and high-end retailers investing in both their online and in-store experiences, one major player dominates: Sephora. Founded in 1970, Sephora is a French multinational retailer of beauty and personal care products. Today, Sephora is owned by LVMH Moët Hennessy Louis Vuitton (“LVMH”) and operates 2,300 stores in 33 countries worldwide, with over 430 stores in America alone.
LVMH attributes much of Sephora’s success to its “self-service” concept. Unlike its department store competitors, which stock beauty products behind a counter, Sephora allows consumers to touch and test its products without the mediation of a salesperson. This transformation of a store into an interactive experience underscores Sephora’s main value proposition: providing customers with a unique, interactive, and personalized shopping experience.1 In keeping with its customer experience-centric business model, Sephora has used technology to continue providing its customers with a personalized beauty experience.
The tension created by two separate, growing marketplaces puts significant pressure on Sephora to replicate the online shopping experience in-store and vice versa. For make-up, finding a perfect complexion match for face products and a flattering color of lipstick or blush requires an understanding of the undertones and overtones of the make-up shades. Typically, this color-match inquiry is what makes or breaks the sale: if a shopper is not confident the make-up is a match, they are less likely to purchase. To address this friction in the customer purchase journey, Sephora rolled out “Find My Shade,” an online tool designed to help shoppers find a foundation product after inputting their preferences and prior product use. The tool provides the in-store feel of viewing multiple products at once while providing some assurance of a color match. For Sephora, the online sale provides ample customer data: which products were searched, considered, and ultimately purchased, all against a backdrop of a user’s name, geography, preferences, and purchase history. The resulting customer data is the backbone of Sephora’s digital strategy: facilitating customer purchases online by reducing friction, while mining data to inform predictions about customer preferences.
In line with its innovative in-store strategy, in 2014 Sephora announced a partnership with ModiFace to launch a Virtual Artist Kiosk providing augmented reality mirrors in its brick-and-mortar stores. First introduced in Milan, the kiosk sought to make testing make-up easier for customers by simulating products on a user’s face in real time, without requiring a photo upload.2 To begin a session at the kiosk, users provide an e-mail address and contact information, either tied to a pre-existing Sephora customer account or supplied as new customer information. Using facial recognition technology, the ModiFace 3-D augmented reality mirror takes a live capture of a user’s face and then shows the user how make-up products look overlaid onto that live capture. This allows users to test thousands of products tailored to their unique features. Without opening a real product, users can see whether it suits their skin tone, bringing the personalization and tailored options typically available only online into the store while providing Sephora with valuable consumer data. At the end of a ModiFace mirror session, the user receives follow-up information about the products tested via the e-mail address provided or via their Sephora account.
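A stripped-down sketch of the technique at work, detecting facial landmarks on a live capture and blending a product color over them, might look like the following. It is a toy built on the open-source MediaPipe and OpenCV libraries, not ModiFace’s proprietary system, and the landmark subset and blend settings are illustrative guesses.

```python
import cv2
import mediapipe as mp
import numpy as np

# A subset of MediaPipe FaceMesh's outer-lip landmark indices (illustrative).
LIP_IDX = [61, 146, 91, 181, 84, 17, 314, 405, 321, 375,
           291, 409, 270, 269, 267, 0, 37, 39, 40, 185]

def tint_lips(frame_bgr: np.ndarray, color=(60, 20, 180), alpha=0.4) -> np.ndarray:
    """Overlay a translucent lipstick shade on detected lips (toy example)."""
    h, w = frame_bgr.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
        result = mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return frame_bgr  # no face found: return the frame untouched
    lm = result.multi_face_landmarks[0].landmark
    pts = np.array([(int(lm[i].x * w), int(lm[i].y * h)) for i in LIP_IDX], dtype=np.int32)
    overlay = frame_bgr.copy()
    cv2.fillPoly(overlay, [pts], color)
    return cv2.addWeighted(overlay, alpha, frame_bgr, 1 - alpha, 0)

# Usage, mirroring the kiosk's live-capture loop:
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok:
#     cv2.imshow("virtual try-on", tint_lips(frame))
```

Notice that even this toy must compute a geometric map of the user’s face to place the overlay; that facial-geometry data is exactly what biometric privacy statutes like BIPA regulate.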
At the time of the Virtual Artist Kiosk’s introduction to stores in the United States in 2014, Sephora did not need to consider federal privacy laws—there were none to consider. Consumer data privacy laws were in their infancy, with the Federal Trade Commission (FTC) at the helm of most cases aimed at protecting consumers from data breaches and identity theft and holding corporations accountable to their respective privacy policies.3 Significantly, however, Sephora overlooked the state-specific laws at play, notably the Illinois Biometric Information Privacy Act (BIPA), which applied to all Sephora locations in the state of Illinois (IL).
Issue
In December 2018, Auste Salkauskaite (Plaintiff) brought a class action suit against Sephora and ModiFace Inc. (ModiFace), claiming that both violated the Illinois Biometric Information Privacy Act (BIPA) by using ModiFace’s technology to collect biometric information about customer facial geometry at a Virtual Artist Kiosk in a Sephora store in Illinois. Plaintiff further alleged that her biometric information, cell phone number, and other personal information were collected and disseminated by both Sephora and ModiFace in an effort by Sephora to sell her products.4 Plaintiff also alleged that Sephora did not inform her in writing that her biometrics were being collected, stored, used, or disseminated; did not obtain her written or verbal consent; and did not provide any notice of whether her biometric information would be retained and/or sold. In pursuing a class action lawsuit, Plaintiff sought to include individuals who had their biometrics “captured, collected, stored, used, transmitted or disseminated by ModiFace’s technology in Illinois,” with an additional subclass of those who experienced the same treatment from Sephora in Illinois.5
BIPA “governs how private entities handle biometric identifiers and biometric information (“biometrics”) in the state of IL.”6 By including a private right of action, BIPA enables residents of IL to file suit against private entities who allegedly violate the law. In this case, Plaintiff claimed that Sephora and ModiFace violated three provisions of BIPA: 1) the requirement that a private entity in possession of biometrics release a publicly accessible written policy describing its use of the collected biometrics, 2) the prohibition on “collecting, capturing, purchasing, receiving, or otherwise obtaining biometrics without informing the subject that biometrics are being collected and stored,” and 3) the prohibition on “disclosing or disseminating biometrics of a subject without consent.”7
Response
In the immediate aftermath of the suit, Sephora did not release any statements on the pending litigation. In its answer to the Plaintiff’s complaint, filed in January 2019, Sephora denied all claims by 1) pointing to its publicly available privacy statement, and 2) denying taking, collecting, using, storing, or disseminating Plaintiff’s biometrics. Specifically, Sephora claimed that by using its mobile application and/or website, users agree to Sephora’s terms of service, which release Sephora from liability. This included the Virtual Artist Kiosk, which required users to sign and accept Sephora’s terms of service before prompting them to provide any contact information.
Sephora and the Plaintiffs (once class action status was granted) reached a settlement agreement in December 2020, which allowed anyone who had interacted with the Virtual Artist Kiosk in a Sephora store in IL since July 2018 to file a claim for a share of the settlement, with claimants eligible to receive up to $500 each. As of April 2021, 10,500 notices had been sent to potential claimants. Hundreds of claims were filed by potential class members, which could result in just under $500,000 in total claims. Sephora has never officially commented on the suit, despite some media coverage in IL.8
ModiFace, on the other hand, successfully moved to dismiss the claim for lack of personal jurisdiction in June 2020. The Court reasoned that ModiFace did not purposefully avail itself of the privilege of conducting business in IL. The Court cited a declaration of ModiFace’s CEO stating that ModiFace never had property, employees, or operations in Illinois and is not registered to do business there. He further stated that ModiFace’s business is focused on selling augmented-reality products to beauty brand companies and does not involve marketing, sales, or commercial activity in Illinois. ModiFace claimed that its business relationship with Sephora did not occur in Illinois and that Sephora never discussed the use of ModiFace technology in Illinois: there was no agreement in place regarding Illinois and no transmission of biometric information between the companies. Overall, the Court found that Sephora’s use of ModiFace technology in Illinois did not establish minimum contacts.
ModiFace, notably, had been acquired by L’Oréal in March 2018, months before the lawsuit was filed. L’Oréal is the world’s biggest cosmetics company and, unlike Sephora, designs, manufactures, and sells its own products. L’Oréal and ModiFace had worked together for about seven years before the acquisition. Like Sephora, L’Oréal ramped up investment in virtual try-on technology in an effort to lower customer barriers to purchase. Since the acquisition, L’Oréal’s Global Chief Digital Officer has said that conversion rates from search to purchase have tripled. At the time of the ModiFace acquisition, L’Oréal spent about 38% of its media budget on digital campaigns like virtual try-on, and this investment is estimated to have grown significantly as L’Oréal strategizes around minimizing friction in the customer experience. More recently, Estee Lauder was also sued for alleged violation of BIPA for collecting biometric data via virtual “try-on” of make-up through technology similar to ModiFace’s.9
Lessons Learned for the Future: CCPA and Beyond
Sephora’s data privacy legal woes have ramped up significantly since the BIPA lawsuit in 2018. On August 24, 2022, California Attorney General Rob Bonta announced a settlement resolving allegations that the company violated the California Consumer Privacy Act (CCPA). The Attorney General alleged that Sephora failed to disclose to consumers that it was selling their personal information, failed to process user requests to opt out of sale, and did not remedy these violations within 30 days of being alerted by the Attorney General, as the CCPA allows. The terms of the settlement required Sephora to pay $1.2 million in penalties and comply with injunctive terms tied to its violations, including updating its privacy policy to affirmatively represent that it sells data, providing mechanisms for consumers to opt out of the sale of their personal information, conforming its service provider agreements to the CCPA’s requirements, and providing status reports to the Attorney General documenting its progress.
Sephora’s settlement is the first official non-breach-related settlement under the CCPA. Many legal analysts argue that the California Office of the Attorney General (OAG) intends to enforce the CCPA more aggressively, a signal sent emphatically via the Sephora settlement. Specifically, the OAG is expected to focus on businesses that share or sell information to third parties for the purpose of targeted advertising.10 Importantly, under the California Privacy Rights Act (CPRA), which goes into effect on January 1, 2023, companies will no longer benefit from the 30-day notice period to remedy alleged violations.
As with the BIPA lawsuit, Sephora has made no official statement on the CCPA settlement. Sephora’s privacy policy is regularly updated, however, signaling at least minimal attention to the regulations set forth by the CCPA.11 In 2022, Sephora saw revenues grow 30%, with a significant rebound in in-store activity, indicating that Sephora customers nationwide have not been deterred by its privacy litigation woes. As Sephora continues to innovate its in-store experience, it must keep a watchful eye on state-specific regulation as Colorado and Virginia launch their own data privacy laws in the near future.
Due to a shortage of mental health support, AI-enabled chatbots offer a promising way to help children and teenagers address mental health issues. Users often view chatbots as mirroring human therapists. Yet unlike their human counterparts, mental health chatbots are not obligated to report suspected child abuse. Legal obligations could shift, requiring technology companies that offer these chatbots to adhere to mandatory reporting laws.
Many teenagers and children experience mental health difficulties, including trouble coping with stress at school, depression, and anxiety. Although there is a consensus on the harms of mental illness, there is a shortage of care. An estimated 4 million children and teenagers do not receive necessary mental health treatment and psychiatric care. Despite the high demand for mental health support, there are only about 8,300 child psychiatrists in the United States. Mental health professionals are overburdened and unable to provide enough support. These gaps in necessary mental health care create an opportunity for technology to intervene and support the mental health of minors.
AI-enabled chatbots offer mental health services to minors and may help alleviate deficiencies in health care. Technology companies design AI-enabled mental health chatbots to facilitate realistic text-based dialogue with users and to offer support and guidance. Over forty kinds of mental health chatbots exist, and users can download them from app stores on mobile devices. Proponents contend the chatbots are effective, easy to use, accessible, and inexpensive. Research suggests some individuals prefer working with chatbots instead of a human therapist, as they feel less stigma asking a robot for help. Although many studies report positive results, research on the usefulness of mental health chatbots remains in its nascent stages.
Mental health chatbots are simple for children and teenagers to use and are easily accessible. Unlike mental health professionals, who may have limited availability for patients, mental health chatbots can interact with users at all times. They are especially beneficial for younger individuals who are familiar with texting and mobile applications. Due to the underlying technology, chatbots are also scalable and able to reach millions of individuals. Finally, most mental health chatbots are inexpensive or free; affordability matters because cost is one of the most significant barriers to accessing mental health support. Although there are many benefits of mental health chatbots, most supporters agree that AI tools should be part of a holistic approach to addressing mental health.
Critics of mental health chatbots point to the limited research on their effectiveness and to instances of harmful responses. Research indicates that users sometimes rely on chatbots in times of mental health crisis, and in rare cases some users have received harmful responses to their requests for support. Critics are also concerned about the impact on teenagers who have never tried traditional therapy: the worry is that teenagers may disregard mental health treatment entirely if they do not find chatbots helpful. It is likely too early to know whether the benefits of mental health chatbots outweigh the risks critics raise. Nevertheless, these chatbots offer a promising way to ease the shortage of mental health professionals.
AI is a critical enabler for growth across the world and is developing at a fast pace. As a result of its growth, the United States is prioritizing safeguards against potentially harmful impacts of AI, such as privacy concerns, job displacement, and a lack of transparency. Currently, the law does not require technology companies offering mental health chatbots to report suspected child abuse. However, the public policy reasons underlying mandatory reporting laws, combined with the humanization of mental health chatbots, indicate the law may come to require companies to report child abuse.
Mandatory reporting laws help protect children from further harm, save other children from abuse, and increase safety in communities. In the United States, states impose mandatory reporting requirements on members of specific occupations or organizations. The types of occupations that states require to report child abuse are those whose members often interact with minors, including child therapists. Technology companies that produce AI-enabled mental health chatbots are not characterized as mandated reporters. However, like teachers and mental health counselors, these companies are also in a unique position to detect child abuse.
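To see why, consider how simple a first-pass screen for concerning disclosures could be. The sketch below is purely illustrative: the phrase list, the escalation path, and every name in it are invented, and a real system would rely on trained classifiers and human clinicians rather than keyword matching.

```python
import re
from dataclasses import dataclass

# Invented patterns for the sketch; real systems would use trained models
# and clinical review, not a keyword list.
RISK_PATTERNS = [
    r"\bhurt(s|ing)? me\b",
    r"\bafraid to go home\b",
    r"\bhit(s|ting)? me\b",
]

@dataclass
class Screening:
    flagged: bool
    matched: list

def screen_message(text: str) -> Screening:
    """Flag chat messages that may describe abuse for escalation to a human."""
    hits = [p for p in RISK_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return Screening(flagged=bool(hits), matched=hits)

result = screen_message("I'm afraid to go home after school.")
if result.flagged:
    # Under a mandated-reporting regime, this is where a human reviewer or
    # reporting workflow would be triggered, not an automated report.
    print("Escalate for human review:", result.matched)
```

The point is not that detection is easy to do well, but that the raw signal already flows through these companies’ systems, which is what puts them in a position analogous to teachers and counselors.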
Mandatory reporting is a responsibility generally associated with humans, and, as noted, the law does not currently extend it to companies that design AI-enabled mental health chatbots. However, research suggests that users may develop connections to chatbots much as they do with humans. Companies creating mental health chatbot applications often take steps to make the chatbots look and feel similar to a human therapist: the more a user regards an AI mental health chatbot as a person, the more effective the chatbot is likely to be, so technology companies invest in designing chatbots to be as human-like as possible. As users come to view mental health chatbots as possessing human characteristics, mandatory reporting laws may adapt and require technology companies to report suspected child abuse.
The pandemic compounded the tendency of users to connect with mental health chatbots in a human-like way. Research out of the University of California shows that college students began bonding with chatbots to cope with social isolation during the pandemic. The research found that many mental health chatbots communicated like humans, which elicited a social response in users. Since many people were unable to socialize outside their homes or access in-person therapy, some users formed bonds with chatbots. The research included statements by users about trusting the chatbots and not feeling judged, reactions commonly reserved for humans.
As chatbot technology becomes more effective and human-like, society may expect that legal duties will extend to mental health chatbots. The growth in popularity of these chatbots is motivating researchers to examine the impacts of chatbots on humans. When the research on personal relationships between chatbots and users becomes more robust, the law will likely adapt. In doing so, the law may recognize the human-like connection between users and chatbots and require technology companies to abide by the legal duties typically required of human therapists.
While the law may evolve to require technology companies to comply with the duty to report suspected child abuse, several obstacles and open questions remain. Requiring companies that offer AI-enabled mental health chatbots to report suspected child abuse may overburden the child welfare system. Mandatory reporting laws impose a penalty for failure to report, which has led to unfounded reporting. In 2021, mandatory reporting laws prompted individuals to file over 3 million reports in the United States, yet Child Protective Services substantiated less than one-third of them, indicating that the majority were unfounded. False reports can be disturbing for the families and children involved.
Many states have expanded the list of mandated reporters, with unintended consequences. Some states extended mandatory reporting laws to include drug counselors, technicians, and clergy. Requiring companies offering mental health chatbots to follow mandatory reporting laws would be another such expansion and may carry unintended consequences of its own. For example, companies may over-report suspected child abuse out of fear of potential liability. Large volumes of unsubstantiated reports could become unmanageable and distract Child Protective Services from investigating more pressing child abuse cases.
Another open question with which the legal system will have to contend is how to address confidentiality concerns. Users of mental health chatbots may worry that a mandatory disclosure requirement would jeopardize their data. Additionally, users may feel the chatbots are untrustworthy and that their statements could be misinterpreted. Such fears may deter those who need help from using mental health chatbots at all. A decline in trust could have the inadvertent effect of discouraging individuals from adopting any AI technology altogether. The legal system will need to consider how to manage both confidentiality and unfounded-report concerns. Nevertheless, as chatbots become more human-like, the law may require technology companies to comply with duties previously reserved for humans.
AI-enabled mental health chatbots provide a promising new avenue for minors to access support for mental health challenges. Although these chatbots offer numerous potential benefits, the legal framework has not kept pace with their rapid development. While the future remains uncertain, the law may come to require technology companies to adhere to mandatory reporting obligations related to child abuse. Consequently, technology companies should contemplate measures to ensure compliance.
The monopolization of intellectual property (IP) in the video game industry has increasingly attracted attention as many antitrust-like issues have arisen. These issues are, however, not the prototypical issues that antitrust law is classically concerned with. Rather, supported by the United States’ IP system, video game publishers (VGPs) have appropriated video game IP to construct legal monopolies that exert control over individuals involved in the esports (and greater video game) industry. Particularly, this control disproportionately affects the employees of VGPs who provide live commentary, analysis, and play-by-play coverage of esports competitions (hereinafter “esports casters”) by impeding their ability to freely utilize and monetize their skills and knowledge. This restriction further hampers their capacity to adapt to evolving market conditions and secure stable employment in the field. Moreover, it creates barriers to entry for aspiring casters, thereby diminishing the industry’s diversity of voices and perspectives.
The Pieces That Make Up the Esports Industry
First, it is important to understand the structure and landscape of esports. The esports industry can generally be equated to any “traditional” sport. As in any “traditional” sport, there are players, coaches, team/player support staffs, casters & analysts, and production crews. Excluded from this list are game developers, who are an integral part of the esports industry but have no comparative equivalent in “traditional” sports. From a functional perspective, the esports industry also operates remarkably similarly to “traditional” sports: there are regional leagues, seasons, playoffs, championships, team franchising & partnerships, rules (and referees), salary parameters, player trading & player contracts, practice squads, scrimmages, and sports betting.
So why are esports casters disproportionately affected if esports and “traditional” sports are structured effectively in the same manner? To answer this, it is important to understand the role legal IP monopolies play in the esports industry.
The Monopoly Power of IP in the Context of Video Games
Unlike “traditional” sports, esports exists in a unique landscape where the IP of the entire sport is legally controlled by a single entity, the VGP. As an exploratory example, let’s analogize American football to esports. No entity entirely owns the game of football, but individual entities entirely own video games (like Riot Games for League of Legends and Valorant, Valve for Counter-Strike, and Activision Blizzard for Call of Duty). Moreover, the National Football League (NFL) lacks both the legal and physical capacity to stop or otherwise place conditions on every group that would like to play football, but Riot Games, Valve, and Activision Blizzard are legally within their rights to prevent players or entire leagues from playing their video games. This inherent quality of the esports industry functions as a legal monopoly and makes the industry unlike any other broadcasted competition.
The Legal IP Monopoly is NOT a Competitive Monopoly
When people think of monopolies, they typically think of competitive monopolies. Simply put, competitive monopolies are created when a single seller or producer assumes a dominant position in a particular industry. However, the monopolies VGPs have created are limited legal monopolies and thus are not the monopolies that antitrust law is concerned with.
The precise distinction between the two monopolies (and their associated governing legal frameworks) is as follows. Antitrust law promotes market structures that encourage initial innovation through a competitive market, while IP law encourages initial innovation with the asterisk of limited exclusivity. More specifically, antitrust law enables subsequent innovation by protecting competitive opportunities beyond the scope of an exclusive IP right. When competitive opportunities are absent (i.e., a competitive monopoly has been created), antitrust law steps in to reestablish competition. By contrast, IP law enables subsequent innovation by requiring disclosure of the initial innovation. The nature of these disclosures limits the scope of control possessed by any particular holder of an IP right. In short, IP law provides a narrower, but legal, monopoly power over a particular innovation.
While the above discussion is patent-focused, it is equally applicable to businesses that rely more heavily on copyright and trademark. Nevertheless, VGPs rely on all forms of IP to construct a comprehensive IP wall around their respective video game titles.
Are VGPs’ Legal Monopolies Harmful?
Because this legal monopoly is not one that antitrust law is concerned with, no law or governing body investigates or oversees the monopoly-like practices and problems created by VGPs. As a result, these issues have been treated as intrinsic characteristics of the esports industry, limiting the ways in which individuals can seek remedies. While there have been isolated wins related to equal pay and discrimination, there has yet to be any comprehensive attention given to the power imbalance of this industry. The result is issues of job mobility and skill transferability for esports casters.
Why Only Esports Casters?
Game developers and production crews are not as affected by the job mobility and skill transferability issues simply because the skills required for their roles transfer easily to other industries. A game developer who works on character design can move to the film industry and work on CGI. Similarly, a camera operator for North American Valorant competitions can go be a camera operator for Family Feud. As for players, their employment is controlled by the partnered/franchised teams, not the VGPs; as in “traditional” sports, players are free to move within and between leagues as their contracts permit.
Esports casters are different, though. In professional football, casters, while well versed in the game, have substantial opportunities for job mobility if a particular situation is unfavorable. The reason relates back to the NFL’s inability to monopolize the sport of football: other football leagues can exist outside the NFL. Not only can football casters transfer their skills within the NFL (such as Al Michaels’ move from NBC to Amazon), they can also move to other leagues (like NCAA Football, Indoor Football, and the XFL). This is simply not an option for esports casters. Because VGPs can erect a legal IP monopoly around a particular game (making them the single source and sole employer for that game’s casters), they completely control the leagues and broadcasts associated with their games. The economics allow VGPs to underpay their esports casters because (1) no other league covers the same game they cast and (2) transitioning from casting one video game to another, while possible, is not easy (think of a football commentator becoming a basketball commentator). As a result, VGPs can create an exploitative employment structure in which esports casters are not compensated in accordance with the value they provide. This leads to barriers to entry for new casters, a lack of diversity in the industry, and challenges for existing casters adapting to changing market conditions.
Possible Solutions
Esports casters’ ability to monetize their skills and knowledge is often limited by the exclusive use of IP rights by VGPs. To resolve this, a careful balance must be struck between IP rights and the livelihoods of individuals in the esports industry, including casters. One possible solution could be to consider a union-like structure that advocates for casters. This solution would give esports casters the opportunity to consolidate pay information, standardize contractual obligations, and voice their concerns about the structure of the esports industry. While the implementation of such an organization would be challenging considering the novel, underdeveloped, or constantly fluctuating nature of the industry, there are already many advocates in esports that are pushing for better compensation, inclusivity, and benefits for casters and analysts.
Even though progress is slow, the industry is improving. Hopefully, with these efforts, the esports industry can become fairer and more inclusive than “traditional” sports. Nonetheless, the only certainty moving forward is that the legal IP monopolies surrounding video games are not going anywhere.
Companies Face Massive Biometric Information Privacy Act (BIPA) Allegations with Virtual Try-On Technology
Virtual try-on technology (“VTOT”) allows consumers to use augmented reality (“AR”) to see what a retailer’s product may look like on their body. By granting a retailer’s website access to their device’s camera, consumers can shop and try on products from the comfort of home without ever stepping into a brick-and-mortar storefront. Retailers offer virtual try-on through their websites, apps, or social media filters on platforms like Instagram, Snapchat, and TikTok. While virtual try-on emerged in the early 2010s, the COVID-19 pandemic spurred its growth and adoption among consumers. Retailers, however, have seen a recent uptick in lawsuits over the biometric privacy implications of virtual try-on technology, especially in Illinois. In 2008, Illinois passed the Biometric Information Privacy Act (“BIPA”), one of the strongest and most comprehensive biometric privacy acts in the country.
This blog post will explore current lawsuits in Illinois and the Seventh Circuit that could affect how retailers and consumers use virtual try-on technology, as well as the privacy and risk implications of the technology for both groups.
Background on Virtual Try-On
From eyeglasses to shoes, virtual try-on technology gives consumers an immersive and fun opportunity to shop and visualize themselves with a product without ever leaving their homes. Fashion brands often use virtual try-on to enhance consumer experiences and find that it may positively affect purchase decisions and sales. Customers who shop from home with the technology may also be less likely to return an item, since the enhanced try-on experience makes them more likely to pick the correct product the first time. With revenue in the AR and VR market expected to exceed $31 billion in 2023 and one-third of AR users having used the technology to shop, brands are responding to the growing demand for AR and virtual try-on.
Although the pandemic drove brands to expand their virtual try-on and AR offerings, brands had used virtual try-on for years prior. Sephora launched an augmented reality mirror back in 2014, and Maybelline allowed consumers to virtually try on nail polish. In the summer of 2019, L’Oréal Group partnered with Amazon to integrate ModiFace, its AR and artificial intelligence (“AI”) subsidiary, into Amazon’s marketplace. The pandemic only accelerated those offerings.
By mid-2021, social media and tech brands expanded their AR offerings to cash in on the increasing role that virtual try-on plays in consumers’ purchase decisions. Perfect Corp., a beauty technology company, integrated with Facebook in 2021 to expand the platform’s AR offerings geared specifically toward beauty. The integration allows Facebook to make it easier and cheaper for brands and advertisers to “integrate their catalogs with AR,” and it expanded who could use Meta’s platforms for AR-enhanced shopping, since any merchant who uses Perfect Corp. could take advantage of Facebook’s Catalog, AR-enabled advertisements, and Instagram Shops. Perfect Corp.’s CEO Alice Chang wrote in the announcement:
“There’s no denying the impact that social media platforms like Instagram continue to play in the consumer discovery and shopping journey. This integration underlines the importance of a streamlined beauty shopping experience, with interactive AR beauty tech proven to drive conversion and enhance the overall consumer experience.”
That same week, Facebook, now Meta, announced its integration with Perfect Corp. and later revealed plans to integrate ModiFace into its new advertising formats. In addition, Meta’s Instagram partnered with L’Oréal’s ModiFace. With the swipe of a button, consumers on Instagram can try on new lipsticks and then purchase the product immediately within the app. The expansion of AR features on Meta’s platforms makes it seamless for consumers to shop not only without leaving their homes, but without even leaving the app.
Outside of Meta, Snapchat offers consumers the chance to use various lenses and filters, including the AR shopping experiences. In 2022, Nike sponsored its own Snapchat lens to allow consumers to try on and customize their own virtual pair of Air Force 1 sneakers. Consumers could swipe through several colors and textures. Once satisfied, consumers could then select “shop now” to purchase their custom Nike Air Force 1s instantaneously.
Rising Biometric Concerns and Lawsuits
While demand for AR and virtual try-on is growing, the innovative technology does not come without major concerns. Brands like Charlotte Tilbury, Louis Vuitton, Estee Lauder, and Christian Dior have been slapped with class action lawsuits in Illinois and the Seventh Circuit for violating BIPA.
According to the American Civil Liberties Union (“ACLU”) of Illinois, BIPA requires that private companies: (1) inform the consumer in writing of the data they are collecting and storing, (2) inform the consumer in writing of the specific purpose and length of time for which the data will be collected, stored, and used, and (3) obtain written consent from the consumer. Additionally, BIPA prohibits companies from selling or otherwise profiting from consumer biometric information. The Illinois law is considered one of the most stringent biometric privacy laws in the country and stands as one of the only laws of its kind “to offer consumers protection by allowing them to take a company who violates the law to court.” BIPA allows consumers to recover liquidated damages of $1,000 (or actual damages, whichever is greater) for each negligent violation, and $5,000 for each intentional or reckless violation, in addition to attorneys’ fees and expert witness fees.
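To make those notice-and-consent requirements concrete, here is a deliberately simplified sketch of the record-keeping a retailer might put in front of a try-on session. Every field and function name is hypothetical, and nothing here is legal advice or a complete BIPA program; it only shows the shape of gating capture on documented, signed consent.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class BiometricConsent:
    """Hypothetical record mirroring BIPA's written notice and consent elements."""
    user_id: str
    purpose: str               # the specific purpose disclosed in writing
    retention_until: datetime  # the disclosed retention period
    signed_at: Optional[datetime] = None

    def sign(self) -> None:
        self.signed_at = datetime.utcnow()

def start_try_on_session(consent: BiometricConsent) -> bool:
    """Enable the camera-based session only on a signed, unexpired consent record."""
    if consent.signed_at is None:
        return False  # no written consent: do not capture
    if datetime.utcnow() > consent.retention_until:
        return False  # retention window elapsed: purge instead of capture
    return True

consent = BiometricConsent(
    user_id="u-123",
    purpose="render a virtual makeup overlay during this session",
    retention_until=datetime.utcnow() + timedelta(days=30),
)
consent.sign()
assert start_try_on_session(consent)
```

The lawsuits discussed below turn on precisely the steps this sketch forces: written notice of purpose and retention, and consent captured before any facial geometry is processed.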
In November 2022, an Illinois federal judge allowed a BIPA lawsuit to move forward against Estee Lauder Companies, Inc. in regard to its Too Faced makeup brand. The plaintiff alleges that the company collected her facial-geometry data in violation of BIPA when she used the makeup try-on tool available on Too Faced’s website. Under Illinois law, a “violation of a statutory standard of care is prima facie evidence of negligence.” Kukovec v. Estee Lauder Companies, Inc., No. 22 CV 1988, 2022 WL 16744196, at *8 (N.D. Ill. Nov. 7, 2022) (citing Cuyler v. United States, 362 F.3d 949, 952 (7th Cir. 2004)). While the judge ruled that the plaintiff did not sufficiently allege recklessness or intent, he allowed the case to move forward because the plaintiff “present[ed] a story that holds together” and did more than “simply parrot the elements of a BIPA claim.” The judge found it reasonable to infer that the company had to collect biometric data for the virtual try-on to work.
In February 2023, Christian Dior successfully invoked a BIPA exemption, winning dismissal of a class action lawsuit filed against it. Christian Dior offered virtual try-on for sunglasses on its website. According to the lead plaintiff, the luxury brand failed to obtain consent before capturing her biometric information through the virtual try-on offering, in violation of BIPA. The judge, however, held that BIPA’s general health care exemption applied to virtual try-on technology (“VTOT”) for eyewear, including nonprescription sunglasses offered by consumer brands. BIPA exempts information captured from a “patient” in a “health care setting.” Since BIPA does not define these terms, the judge referred to Merriam-Webster. “Patient” was defined as “an individual awaiting or under medical care and treatment” or “the recipient of any of various personal services.” The judge found that sunglasses, even nonprescription ones, are used to “protect one’s eyes from the sun and are Class I medical devices under the Food & Drug Administration’s regulations.” Thus, an individual using VTOT is classified as a patient “awaiting . . . medical care,” since sunglasses are medical devices that protect vision and VTOT is the “online equivalent” of a brick-and-mortar store where one would purchase sunglasses.
Further, “health care” was defined as “efforts made to maintain or restore physical, mental, or emotional well-being especially by trained and licensed professionals.” The judge stated that she had “no trouble finding that VTOT counts as a ‘setting.’” Thus, under BIPA’s general health care exemption, consumers who use VTOT to purchase eyewear, including nonprescription sunglasses, are considered “patients” in a “health care setting.”
Both cases show that while virtual try-on may operate similarly from one company’s website to another, the type of product a brand offers consumers the opportunity to “try on” may determine whether the brand qualifies for an exemption. The health care exemption in the Christian Dior case was not the first time a company benefitted from a BIPA exemption. BIPA lawsuits can be costly for companies: TikTok settled a $92 million BIPA lawsuit in 2021 over allegations that the social media app harvested biometric face and voice prints from user-uploaded videos. Although that example does not involve virtual try-on, it illustrates how diligence and expertise with BIPA’s requirements can save brands from huge settlements. Companies looking to expand into the virtual try-on space should carefully consider how they will obtain explicit written consent from consumers, and how they will satisfy BIPA’s other requirements, such as data destruction policies and procedures, to minimize class action and litigation exposure.
INTRODUCTION
We live in an insatiable society. Across the globe, and particularly in the United States, everyone with an Instagram account knows that the “phone eats first.” Young professionals rush to happy hour to post the obligatory cocktail cheers video before they take their first sip. On Friday nights, couples sprint to their favorite spot or the up-and-coming Mediterranean restaurant to quickly snap a picture of the “trio of spreads.” Everyone from kids to grandparents is flocking to the nearest Crumbl every Monday to share a picture of the pink box and the half-pound cookie inside. Social media has created a food frenzy. We are more obsessed with posting the picture of a meal than with eating the meal itself. While a psychologist might take a negative view of the connection between social media and food, the baker or chef behind the photogenic creation is thrilled by the way platforms such as Instagram and YouTube bring new patrons into their storefronts.
Due to the rise of social media over the past twenty years, food has become an obsession in our society. Many of us are self-proclaimed “foodies.” Historically, food has not fit neatly into the intellectual property legal scheme in the United States. Trademark, trade dress, and trade secrets are often associated with food, but we rarely see recipes or creative platings receive patent or copyright protection. Intellectual property law is not as enthralled with food as many of us are, but pairing the law with social media may create another way to protect food.
A RECIPE FOR IP PROTECTION
There are four main types of intellectual property: patents, copyrights, trade secrets, and trademarks. The utilitarian and economic perspectives are the two main theories behind intellectual property law. The utilitarian purpose of food is to be consumed. Economically, the food business in America is a trillion-dollar industry. Intellectual property law aims to promote innovation, creativity, and economic growth. All three of these goals can be found within the food industry; however, the recipe for intellectual property protection has yet to be perfected.
Patent law is designed to incentivize and promote useful creations and scientific discoveries. It gives an inventor the right to exclude others from using the invention during the patent’s term of protection. To qualify for a patent, an invention must be useful, novel, nonobvious, properly disclosed, and made up of patentable subject matter. Patentable subject matter includes processes, machines, manufactures, compositions of matter, and improvements thereof. Novelty essentially requires that the invention be new. It is a technical and precise requirement that often creates the biggest issue for inventors. Novelty in the context of food “means that the recipe or food product must be new in the sense that it represents a previously unknown combination of ingredients or variation on a known recipe.” According to the U.S. Court of Customs and Patent Appeals, to claim protection in food products, “an applicant must establish a coaction or cooperative relationship between the selected ingredients which produces a new, unexpected, and useful function.” A person cannot obtain a patent by simply adding or eliminating common ingredients, or by treating them in ways that differ from former practice. There are very few patents for food, but common examples include Cold Stone Creamery’s signature Strawberry Passion ice cream cake and Breyer’s Viennetta ice cream cake.
Copyright law affords protection to creative works of authorship that are original and fixed in a tangible medium. Fixation is met “when its embodiment … is sufficiently permanent or stable to permit it to be perceived, reproduced, or otherwise communicated for a period of more than transitory duration.” The Supreme Court has stated that originality requires independent creation plus a modicum of creativity. Copyrights do not extend to “any idea, procedure, process, system, method of operation, concept, principle, or discovery.” Food designs, specifically, are typically not eligible “for copyright protection because they do not satisfy the Copyright Act’s requirement that the work be fixed in a tangible medium.” A chef does not acquire rights for being the first to develop a new style of food because the creation is seen as mere ideas, facts, or formulas. Furthermore, shortly after a food’s creation, it is normally eaten, losing its tangible form. Recipes alone are rarely given copyright protection because they are considered statements of facts, but “recipes containing other original expression, such as commentary or artistic elements, could qualify for protection.”
Trade secrets are more favorable to the food industry, and trade secret law has traditionally encompassed recipes. To be a trade secret, the information must be sufficiently secret that the owner derives actual or potential economic value from its not being generally known or readily ascertainable, and the owner must make reasonable efforts to maintain its secrecy. Food design, the shape and appearance of food, is unlikely to receive trade secret protection because “food design presents a formidable challenge to trade secret protection: once the food is displayed and distributed to consumers, its design is no longer secret.” However, certain recipes, formulas, and manufacturing and preparation processes may be protected by trade secret law. Regarding food and intellectual property, trade secrets are probably the most well-known form of IP. Examples of still-valid trade secret recipes and formulas include Coca-Cola’s soda formula, the original recipe for Kentucky Fried Chicken, the recipe for Twinkies, and the recipe for Krispy Kreme donuts.
Trademark is the most favorable type of IP protection given to the food industry. Trademarks identify and distinguish the source of goods or services, and typically take the form of a word, phrase, symbol, or design. Trade dress is a type of trademark that refers to a product’s appearance, design, or packaging. A trade dress analysis considers “the total image of a product and may include features such as size, shape, color or color combinations, texture, graphics, or even particular sales techniques.”
Different types of trademarks and trade dress receive different levels of protection. For trademarks, the level depends on the kind of mark: courts determine whether the mark is inherently distinctive, descriptive, or generic. Similarly, trade dress receives different levels of protection depending on whether it consists of product packaging or product design. In the context of food, “the non-functionality of a particular design or packaging is required” for a product to receive protection as trade dress. Some examples of commonly known trademarks include Cheerios, McDonald’s stylized emblematic “M” logo, and the tagline “Life tastes better with KFC.” Food designs with federally registered trade dress include Pepperidge Farm’s Milano Cookies, Carvel’s Fudgie the Whale ice cream cake, Hershey’s Kisses, General Mills’ Bugles, Tootsie Rolls and Tootsie Pops, and Magnolia Bakery’s cupcakes bearing its signature swirl icing.
SOCIAL MEDIA – THE LAST DEFENSE
When it comes to food, there is no recipe to follow to receive intellectual property protection, but social media can be a way for bakers, chefs, and restaurateurs to be rewarded for their creations and to sustain creativity in the food industry. Social media influences the way businesses conduct and plan their marketing strategies. Many businesses use social media to communicate with their audience and expand their consumer base. Social media allows a chef to post the week’s “Specials Menu” to the restaurant’s Instagram, and in a few seconds, anyone who follows that account can repost the menu and share it with hundreds if not thousands of people. As noted previously, this single menu would not receive IP protection because it is primarily fact-based, not secret, obvious, and likely not a signifier of the restaurant to the general public. But the power of social media will bring hundreds of excited and hungry foodies to the business.
Social media alone cannot ensure that another chef or baker won’t reverse engineer the dish featured on the specials menu, but social media has done what IP cannot. The various social media platforms embody what the framers of the Constitution were trying to accomplish through intellectual property when they drafted Article I centuries ago: the promotion of innovation, creativity, and economic growth. The Constitution states that “Congress shall have power to … promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries….” While there is no indication that the framers ever intended food to be part of what they knew as intellectual property, two hundred and fifty years later it is clear that food is a mainstay in the IP world, even if it does not fit neatly into patents, copyrights, trademarks, or trade secrets. The law has fallen short in addressing IP protection for the food industry, but social media has continued to fulfill the framers’ goal for intellectual property when it comes to food.
Social media allows others to connect with a satisfying creation and gives chefs the opportunity to be compensated for their work. After seeing the correlation between an Instagram post and the influx of guests, a chef will be incentivized to create more: she will cook up another innovative menu for next week, hoping to receive the same positive reward. The food industry is often left out, unable to fit within the scope of IP law, but through social media, chefs and bakers can promote innovation, creativity, and economic growth at their fingertips.
Alessandra Fable is a second-year law student at Northwestern Pritzker School of Law.