Author: Katherine Hsu

In April 2023, drama unfolded on Twitter, and it revolved around olive oil. Andrew Benin, the co-founder of Graza, a start-up single-origin olive oil brand that comes in two adorable green squeeze bottles, publicly called out rival Brightland for allegedly copying his squeezable olive oil idea. Mr. Benin wrote, “While friendly competition was always welcome, I do view this as a blatant disrespect and am choosing to voice my discontent.” In response, the internet angrily clapped back, as the internet does. One of these critics was Alison Cayne, the founder of Haven’s Kitchen, a cooking company. She wrote, “with all due respect, you did not create the squeeze bottle. Chefs and home cooks have been using it for decades.” Another commenter, Gabrielle Mustapich, a co-founder at Hardpops & Pilothouse Brands, added, “my mom was buying squeezy bottle olive oil in 2007 (and it wasn’t Graza).” 

Ms. Mustapich is right – squeeze bottles have been ubiquitous in chefs’ kitchens for years. However, they seem to be growing in popularity in home kitchens. Whether or not Graza can claim credit for that societal shift, it still deserves some recognition for doing things differently in the olive oil industry. That raises the question: if Graza were to sue Brightland, would it win? 

Though Graza doesn’t have a patent for its bottle or a trademark for anything except its name, official registration is not required to receive trade dress protection. Trade dress protection comes from the Lanham Act, which gives the producer of a product a cause of action against the use of “any word, term, name, symbol, or device, or any combination thereof . . . which . . . is likely to cause confusion . . . as to the origin, sponsorship, or approval of his or her goods . . . . ” While trademarks are generally thought to cover things like brand names and their distinct designs, trade dress encompasses the design or packaging of a product or part of a product. For example, while the name “Nike” is a registered trademark, as is the “swoosh” symbol, the visual design of, say, a sweater or the box it comes in may qualify for trade dress protection even without registration (209). 

While courts vary slightly on the elements of protectable trade dress, they largely agree that three requirements must be met when analyzing a product’s design. First, the trade dress must be primarily non-functional. This requirement might seem counterintuitive, since a company should be rewarded for making its product useful. But the non-functionality requirement is not about rewarding invention – that is left to the world of patents. It promotes competition by letting other companies make similar products with the same useful qualities without legal repercussions. 

The landmark 2001 Supreme Court case TrafFix Devices v. Marketing Displays redefined the test for functionality. While circuits vary in their precise balancing tests, many follow that of the Ninth Circuit in Talking Rain Bev. Co. v. South Beach Bev. Co. from 2003: whether advertising “touts the utilitarian advantages of the design,” whether “the particular design results from a comparatively simple or inexpensive method of manufacture,” whether the design “yields a utilitarian advantage,” and “whether alternative designs are available” (603) – though, per TrafFix, the “mere existence of alternatives does not render a product non-functional” (33-34). 

Next, the trade dress must be inherently distinctive or have acquired a secondary meaning for customers (which can be difficult to prove). Finally, the alleged infringement must create a likelihood of confusion among customers as to the product’s source. The inquiry is sequential: if the product design is primarily functional, the analysis ends, and the design cannot be protected as trade dress. 

If Graza’s squeeze bottle is considered part of its product design rather than its packaging, it is functional and therefore not protectable trade dress. First, Graza’s own Instagram bio reads, “Fresh Squeezable Single Origin Olive Oil / As squeezed in @bonappetitmag,” and the home page of its website features a video of a chef gingerly unscrewing the squeeze top and gracefully applying the oil to a hot pan – advertising that touts the design’s utility. Second, the squeeze bottle itself is a simple design: restaurants can bulk purchase 500 of these basic bottles, and the patent for the generic squeeze bottle, though not identical to Graza’s, is not terribly complex. Third and fourth, while alternative designs are available – most olive oils come in bulky containers with regular pour spouts – the squeeze spout yields a utilitarian advantage, likely attracting many consumers with its uniqueness and convenience. 

But what if the bottle is considered not the product’s design – that is, the appearance of the product itself – but rather its packaging – the container holding the product? According to the Wal-Mart Court, product design and product packaging trade dress should be analyzed differently (215). The Court reasoned that, to a consumer, product packaging may identify the source of a product much more clearly than product design would (212-13). Therefore, the Court said, a producer of inherently distinctive packaging need not prove that the packaging has acquired secondary meaning to receive trade dress protection. 

The squeeze bottle, the pretty green color, and the charming label are all selling points for Graza, with one reviewer saying, “These olive oils are one of the few things I find attractive enough to leave out on my countertop.” Arguably, however, the main reason people are purchasing it is to get to the product inside – a quick Google search of “Graza review” brings up articles primarily reviewing the oil’s quality, not the bottle’s attractiveness or the utility of the squeeze function. Consider the difference between a consumer purchasing an olive oil known to be delicious but in a poorly designed, ugly bottle – there, the bottle is the packaging for the product – as opposed to an olive oil known to be terrible but in a limited-edition, beautiful bottle – there, the bottle is the product, and there is no packaging. 

In Graza’s case, however, it can be difficult to know what’s really in the average consumer’s head, and in many ways, the “average consumer” is a myth. When courts determine a product’s likelihood of confusion with another product, or the likelihood that a consumer will see a product or its packaging and know what brand it comes from, they are necessarily guessing. The Wal-Mart Court tried to correct for that by advising courts in ambiguous product design/product packaging cases to lean toward classifying the trade dress as product design, presumably to force greater proof of distinctiveness from producers. 

When it comes to Graza, though, it seems entirely possible that a typical, reasonable consumer would see the Brightland olive oil in a squeeze bottle and think: That’s just like the Graza bottle! Even if the trusty squeeze bottle has been ubiquitous in kitchens for decades, Graza may have brought it to the attention of home chefs everywhere. Similarly, though many olive oil bottles are dark green, their tall, skinny, matte green appearance with the pointed tip is unique enough – and their advertising aggressive and targeted enough – that many consumers (mostly those of the Gen Z and Millennial generations) would easily recognize the design as Graza if it flashed before their eyes. 

But, if we are to follow the Wal-Mart Court’s suggestion and reason that the bottle is Graza’s product design instead of packaging in this moment of ambiguity, Graza’s argument that it deserves trade dress protection – or, more simply, credit for coming up with the squeeze bottle olive oil idea – is weak. Andrew Benin may have realized as much, as he, shockingly, issued a public Twitter apology retracting his earlier accusations. 

INTRODUCTION 

Every day, people, brands, and companies around the world believe that their names have been defamed or their trademarks infringed. With over 440,000 US trademark applications filed in 2022 alone, disputes over these marks arise frequently. These quarrels do not always involve a platform as large as one of the highest-rated television series ever created. But when they do, they often spur lively media discussion of intellectual property law in the United States.  

On September 25, 2023, the United States District Court for the Southern District of New York granted the creators of the hit television series “Better Call Saul” their motion to dismiss claims of defamation and trademark infringement brought by Liberty Tax Service over the show’s depiction of a tax preparation service in an April 2022 episode. The depicted tax service bears many similarities to Liberty Tax Service and its more than 2,500 offices across the country, including the same red, white, and blue colors and the Statue of Liberty logo. The case, JTH Tax LLC v. AMC Networks Inc., implicates many facets of intellectual property law, including trademark infringement and dilution under the Lanham Act, trade dress, and New York statutory defamation law. Ultimately, the court ruled that the “Rogers Test” applied to the defendants’ alleged use of Liberty Tax Service’s trademarks. However, the court’s decision on the motion to dismiss could certainly be up for debate, especially had Liberty Tax Service raised certain arguments regarding the second prong of the Rogers Test.   

A. APPLICABILITY OF THE “ROGERS TEST” 

The “Rogers Test” was developed by the United States Court of Appeals for the Second Circuit in Rogers v. Grimaldi. It is a two-pronged balancing test applied when interests under the First Amendment and the Lanham Act conflict. The test states,  

“[w]here the title of an expressive work is at issue, the “balance will normally not support application of the [Lanham] Act unless the title has no artistic relevance to the underlying work whatsoever, or, if it has some artistic relevance, unless the title explicitly misleads as to the source or the content of the work.”  

In the matter at hand, the court quickly and rightfully found the Rogers test applicable because, to the extent that the defendants used the plaintiff’s marks, they did so in furtherance of the plot of Better Call Saul, heightening the audience’s understanding of key characters.   

1. ARTISTIC RELEVANCE  

Under this prong, the defendants argued that the alleged use of the plaintiff’s marks met the purposely low threshold for artistic relevance under the Rogers Test because the reference to “Sweet Liberty” in the episode in question is undeniably ironic and clearly related to the characters’ role in the series: it is the business they created with illegal intent. On top of that, the court found that the characters’ use of the plaintiff’s trade dress (i.e., the design and configuration of the product itself) was simply an appropriation of patriotism that highlights their deceptiveness as crime-ridden characters, all of which is relevant to the episode. Thus, the court concluded that the artistic relevance of the episode’s use of the plaintiff’s marks was clearly “above zero.” 

2. WHETHER THE USE OF SUCH MARKS WAS “EXPLICITLY MISLEADING” 

Since the Court concluded that defendants’ alleged use of plaintiff’s marks had at least an ounce of artistic relevance to the show, the Court focused on the second prong of the Rogers Test, whether the defendants’ use of the plaintiff’s marks was “explicitly misleading,” which would allow the Lanham Act to apply to the show. The essential question to ask for the second prong is “whether the defendant[s’] use of the mark ‘is misleading in the sense that it induces members of the public to believe [the work] was prepared or otherwise authorized’ by the plaintiff.” To do so, the court focused on the eight non-exhaustive factors from Polaroid Corp. v. Polarad Elecs. Corp., 287 F.2d 492 (2d Cir. 1961), in order to assess the likelihood of confusion to satisfy this prong. The eight factors include: “(1) strength of the trademark; (2) similarity of the marks; (3) proximity of the products and their competitiveness with one another; (4) evidence that the senior user may bridge the gap by developing a product for sale in the market of the alleged infringer’s product; (5) evidence of actual consumer confusion; (6) evidence that the imitative mark was adopted in bad faith; (7) respective quality of the products; and (8) sophistication of consumers in the relevant market.”  

a. THE COURT’S ASSESSMENT OF THE POLAROID FACTORS DISSECTED 

The court’s assessment of the eight Polaroid factors in this matter could genuinely be up for debate, especially had the plaintiff raised stronger, or any, arguments on several of them. The Polaroid factors are balanced against one another according to which party each factor favors. Here, the court found that only the first factor weighed in favor of the plaintiff, as the defendants did not contest the strength of the plaintiff’s mark. The other seven factors either weighed in favor of the defendants or were deemed neutral, mainly because the plaintiff failed to argue them in its Amended Complaint.   

For example, and probably most convincing, factor five could have been evaluated in favor of the plaintiff. This factor asks for evidence of actual consumer confusion, but it is black letter law that actual confusion need not be proven to prevail under the Lanham Act or this factor, since it is often too challenging and expensive to prove. Under Lois Sportswear, U.S.A., Inc. v. Levi Strauss & Co., the Act requires only that a likelihood of confusion as to source be shown, meaning that consumers would mistakenly believe the goods or services come from the same source.  

An opportunity the plaintiff failed to seize comes from an example the Supreme Court of the United States offered in the Jack Daniel’s case, a somewhat analogous dispute in which Jack Daniel’s sued a company on similar grounds over a dog toy that closely resembled a bottle of Jack Daniel’s whiskey. The Court’s illustration involved a luggage manufacturer using another brand’s marginally altered logo on its luggage to foster growth in the luggage market. Comparing this example with another, the Court noted that the greater likelihood of confusion would occur in the luggage illustration because it conveyed possible misinformation about who is responsible for the merchandise. Had the plaintiff drawn this analogy to the facts at bar and argued in its Amended Complaint that the show’s image of “Sweet Liberty Tax Services” was blatantly a slightly modified image of Liberty Tax Service that conveyed misinformation, this factor would most likely have been weighed in favor of the plaintiff, given the likelihood of confusion as to the source of the show’s depiction of the characters’ business.  

Secondly, factor seven takes into account the respective quality of the products. In Flushing Bank v. Green Dot Corp., the United States District Court for the Southern District of New York interpreted this factor to mean that if the quality of the junior user’s (the show’s) product or service is low compared to the senior user’s (Liberty Tax Service’s), the chance of actual injury from any confusion increases. If confusion, as argued above, was created, then an argument in the Amended Complaint that the defendants’ use of the marks linked Liberty Tax Service’s trademarks to the show’s depiction of an inferior-quality service would push this factor in the plaintiff’s direction as well.  

Lastly, the eighth factor, the sophistication of purchasers, could strongly be construed in favor of the plaintiff. The theory behind this factor is that the more sophisticated the senior user’s purchasers are, the less likely they are to be confused by the presence of similar marks. To determine consumer sophistication, courts consider the product’s nature and price. The plaintiff raised no argument on this factor, yet one could have been made that would have weighed it in the plaintiff’s favor, evening out the balance of the factors between the parties. In that posture, the ambiguity could be viewed in the light most favorable to the plaintiff, the governing standard at the motion-to-dismiss stage.  

Had the plaintiff shed light on Liberty Tax Service’s prices compared to those of the average professional tax preparer, the evidence could well have tipped this factor in its favor. Liberty Tax Service has a basic tax preparation rate of only $69.00, while the average professional tax preparer charges around $129.00 for similar services. This distinction suggests that Liberty Tax Service serves less sophisticated consumers than the average professional tax preparer, leading to the conclusion that its customers would be more likely to be confused by the presence of similar marks. The result would be a fourth of the eight factors found in favor of the plaintiff.  

CONCLUSION 

Had the plaintiff raised the arguments above in its Amended Complaint regarding the second element of the Rogers Test and, in turn, the Polaroid factors, it would have shown that its claim of confusion is indeed plausible. Because such arguments would tilt the scale in favor of the plaintiff, Liberty Tax Service, the Lanham Act would most likely apply. The case might then have come out the other way at this stage of litigation, favoring the plaintiff rather than the creators of the television series.  

1. Introduction to Generative AI and Deepfakes 

The advent of generative Artificial Intelligence (AI) and deepfake technology marks a new era in intellectual property law, presenting unprecedented challenges and opportunities. As these technologies evolve, their creations blur the lines between reality and fiction, escalating the risk of consumer deception and diluting brand values. Deepfakes, a fusion of the words ‘deep learning’ and ‘fake,’ stand at the forefront of this revolution. These ultra-realistic synthetic media replace a person’s image or voice with someone else’s likeness, typically using advanced artificial intelligence. Relying on deep learning algorithms, a subset of AI, deepfakes employ techniques like neural networks to analyze and replicate human facial and vocal characteristics with stunning accuracy. The result is a fabricated version of a video or audio clip that is virtually indistinguishable from the original. 
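
To make the mechanism concrete, here is a minimal sketch of one widely described face-swap architecture: a shared encoder paired with one decoder per identity. This is an illustrative assumption about how such systems are commonly structured, not the pipeline of any particular deepfake tool, and the layer sizes are arbitrary.

```python
# Minimal sketch of the classic face-swap autoencoder idea (a commonly
# described deepfake architecture; not any specific product's pipeline).
# A shared encoder learns identity-agnostic facial features; one decoder
# per person renders that person's face. A swap is produced by encoding
# person A's face and decoding it with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

face_a = torch.rand(1, 3, 64, 64)    # stand-in for a cropped face image
swapped = decoder_b(encoder(face_a)) # A's expression rendered as B's face
print(swapped.shape)                 # torch.Size([1, 3, 64, 64])
```

After training on thousands of face crops of each person, the same trick is applied frame by frame across a video, which is why the outputs can be so convincing.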

2. Broad Applications and Challenges of Deepfakes 

The spectrum of deepfake applications is vast, encompassing both remarkable prospects and significant risks. On the positive side, this technology promises to revolutionize entertainment and education. It can breathe life into historical figures for educational purposes or enhance cinematic experiences with unparalleled special effects. However, the technology’s downside carries serious consequences, particularly for businesses and brands. Generative AI and deepfakes can create highly convincing synthetic media, obfuscating the line between authentic and fabricated content. This capability poses substantial risks of consumer deception and brand damage.  

When consumers encounter deepfakes featuring well-known trademarks, it not only challenges the authenticity of media but also erodes the trust and loyalty brands have cultivated over the years. This impact on consumer perception and decision-making is central to understanding the full implications of AI on trademark integrity, as it leads to potential deception and undermines the critical connection between trademarks and consumer trust. This dual nature of deepfakes as both a tool for creative expression and a source of potential deception underscores the complexity of their impact on intellectual property and consumer relations. 

3. Trademark and Brand Identity in the Context of AI 

As generative AI introduces opportunities on the one hand, risks of consumer deception abound on the other. This direct threat to perception highlights trademarks’ growing vulnerability. At the heart of branding and commerce lie trademarks, distinguishing one business from another. They are not mere symbols; they represent the core identity of brands, extending from distinctive names and memorable phrases to captivating logos and designs. When consumers encounter these marks, they do not just see a name or a logo; they connect with a story, a set of values, and an assurance of quality. This profound connection underscores the pivotal role of trademarks in driving consumer decisions and fostering brand loyalty. The legal protection of trademarks is governed by the Lanham Act, which offers nationwide protection against infringement and dilution. Infringement occurs when a mark similar to an existing trademark is used in a way that may confuse consumers about the origin or sponsorship of a product. Dilution, on the other hand, refers to the weakening of a famous mark’s distinctiveness, either by blurring its meaning or tarnishing it through offensive use. 

However, the ascent of generative AI and deepfake technology casts new, complex shadows over this legal terrain. The challenges introduced by these technologies are manifold and unprecedented. While it was once straightforward to distinguish between intentional and accidental use of trademarks, the line is now increasingly blurred. For instance, when AI tools deliberately replicate trademarks to deceive or dilute a brand, it is a clear case of infringement. However, the waters are muddy when AI, through its intricate algorithms, unintentionally incorporates a trademark into its creation. Imagine an AI program designed for marketing inadvertently including a famous logo in its output. This scenario presents a dilemma where the line between infringement and innocent use becomes indistinct. The company employing the AI might not have intended to infringe on the trademark, but the end result could inadvertently imply otherwise to the consumer. 

This new landscape necessitates a reevaluation of traditional legal frameworks and poses significant questions about the future of trademark law in an age where AI-generated content can replicate real brands with unnerving precision. The challenges of adapting legal strategies to this rapidly evolving digital environment are not just technical but also philosophical, calling for a deeper understanding of the interplay between AI, trademark law, and consumer perception. 

4. Legal Implications and Judicial Responses 

The Supreme Court’s decision in Jack Daniel’s Properties, Inc. v. VIP Products LLC sets a key precedent regarding fair use defenses in the age of AI. The case, focusing on a dog toy mimicking Jack Daniel’s whiskey branding, highlights the tension between trademark protection and First Amendment rights. 

Although the case did not directly address AI, its principles are crucial in this field. For both trademark infringement and dilution claims, the Court narrowed the boundaries of fair use, particularly in cases where an “accused infringer has used a trademark to designate the source of its own goods.” The Court also limited the scope of the noncommercial use exclusion for dilution claims, stating that it “cannot include… every parody or humorous commentary.” This narrower scope of fair use makes it tricky for users of AI content to navigate fair use defenses when parodying trademarks, where the lines between illegal use, parody, and unintentionally confusing consumers about endorsement or sponsorship may blur.  

This ruling has direct repercussions for AI models generating noncommercial comedic or entertainment content featuring trademarks. Even if AI-created content is noncommercial or intended as parody, it does not automatically qualify as fair use. If such content depicts or references trademarks in a way that could falsely suggest sponsorship or source affiliation, then claiming fair use becomes extremely difficult. Essentially, the noncommercial nature of AI-created content offers little protection if it uses trademarks to imply an incorrect association or endorsement from the trademark owner. 

As such, AI developers and companies must be cautious when depicting trademarks in AI-generated content, even in noncommercial or parodic contexts. The fair use protections may be significantly limited if the content falsely suggests a connection between a brand and the AI-generated work. 

In this light, AI-generated content for marketing and branding requires meticulous consideration. AI developers must ensure their models do not generate content that incorrectly implies a trademark’s source or endorsement. This necessitates thorough review processes and possibly adapting algorithms to prevent any false implications of source identification. At the same time, users of AI technology for branding, marketing, or content creation need stringent review processes for how trademarks are depicted or referenced, to ensure their creations do not inadvertently infringe upon trademarks or mislead consumers about the origins or endorsements of products and services. With AI’s capacity to precisely replicate trademarks, the potential for unintentional infringement and consumer deception is unprecedented. 

This evolving landscape calls for a critical reevaluation of existing legal frameworks. Though robust in addressing the trademark issues of its time, the Lanham Act was conceived in an era before the emergence of digital complexities and AI advancements we currently face. The Court’s ruling in Jack Daniel’s case could influence future legislation and litigation involving AI-generated content and trademark issues. Today, we stand at a critical point where the replication of trademarks by AI algorithms challenges our traditional understanding of infringement and dilution. Addressing this requires more than mere amendments to existing laws; it calls for a holistic overhaul of legal frameworks. This evolution might involve new legislation, innovative interpretations, and an adaptive approach to defining infringement and dilution in the digital age. The challenge is not just to adapt to technological advancements but to anticipate and shape the legal landscape in a way that balances innovation with the need to protect the essence of trademarks in a rapidly changing world. 

5. Right of Publicity and Deepfakes 

Deepfakes and similar AI fabrications pose risks to not just trademarks but also individual rights, as the right of publicity shielding personal likenesses confronts the same consent and authenticity challenges in an era of scalable deepfake identity theft. The concept of the right of publicity has gained renewed focus in the age of deepfakes, as exemplified by the unauthorized use of Tom Hanks’ likeness in a deepfake advertisement. This case serves as a potent reminder of the emerging challenges posed by deepfake technology in the realm of intellectual property rights. California Civil Code Section 3344, among others, protects individuals from the unauthorized use of their name, voice, signature, photograph, or likeness. However, deepfakes, with their capability to replicate a person’s likeness with striking accuracy, raise complex questions about consent and misuse in advertising and beyond. 

Deepfakes present a formidable threat to both brand reputation and personal rights. These AI-engineered fabrications are capable of generating viral misinformation, perpetuating fraud, and inflicting damage on corporate and personal reputations alike. By blurring the lines between truth and deception, deepfakes undermine trust, dilute brand identity, and erode the foundational values upon which reputations are built. The impact of deepfakes on brand reputation is not a distant concern but a present and growing one, necessitating vigilance and proactive measures from individuals and organizations. The intricate dynamics of consumer perception, influenced by such deceptive technology, underscore the urgency for a legal discourse that encompasses both the protection of trademarks and the right of publicity in the digital age. 

6. Additional Legal and Industry Perspectives 

While the complex questions surrounding AI, deepfakes, and trademark law form the core of this analysis, the disruptive influence of these technologies extends across sectors. The recent widespread dissemination of explicit AI-generated images of Taylor Swift is a stark example of the urgent need for regulatory oversight in this evolving landscape. The entertainment industry, particularly Hollywood, is another sphere significantly impacted by AI advancements. Ongoing discussions, notably during the SAG-AFTRA strike, highlight the critical issues of informed consent and fair compensation for actors whose digital likenesses are used. The use of generative AI technologies, including deepfakes, to create digital replicas of actors raises crucial questions about intellectual property rights and the ethical use of such technologies in the industry. 

The legal and political landscape is also adapting to the challenges posed by AI and deepfakes. With the 2024 elections on the horizon, the Federal Election Commission is in the preliminary phases of regulating AI-generated deepfakes in political advertisements, aiming to protect voters from election disinformation. Additionally, legislative efforts such as the introduction of the No Fakes Act by a bipartisan group of senators mark significant steps toward establishing the first federal right to control one’s image and voice against the production of deepfakes, essentially the right of publicity. 

Moreover, legislative activity on AI regulation has been notable, with several states proposing and enacting laws targeting deepfakes as of June 2023. President Biden’s executive order in October 2023 further exemplifies the government’s recognition of the need for robust AI standards and security measures. However, the disparity between the rapid progression of AI technology and the comparatively sluggish governmental and legislative response is evident in the limited scope of such interventions. The order’s call for watermarks to identify AI-generated content signals a move toward a more regulated AI environment, yet tools that circumvent these measures by removing watermarks are readily available. These developments at the industry, legal, and political levels reflect the multifaceted approach required to address the complex issues arising from the intersection of AI, deepfakes, intellectual property, and personal rights. 
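
A toy example makes the watermark-fragility point concrete. The snippet below sketches a hypothetical least-significant-bit image watermark, invented here purely for illustration (no actual AI provider is known to use this exact scheme), and shows how a trivial amount of noise erases it.

```python
# Hypothetical least-significant-bit (LSB) watermark, for illustration only.
# The mark lives in the lowest bit of each pixel, so any lossy operation that
# perturbs low-order bits (noise, recompression, resizing) destroys it.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)     # 1-bit watermark

watermarked = (image & 0xFE) | mark           # hide the mark in the low bit
assert np.array_equal(watermarked & 1, mark)  # detector recovers it perfectly

# "Removal tool": imperceptible random noise scrambles low-order bits.
noise = rng.integers(-2, 3, size=watermarked.shape)
noisy = np.clip(watermarked.astype(int) + noise, 0, 255)
recovered = (noisy & 1).astype(np.uint8)
print((recovered == mark).mean())  # well below 1.0: mark no longer reliable
```

Production watermarking schemes are far more robust than this toy, but the underlying asymmetry is the same: the mark must survive every transformation an image undergoes, while removal needs only one.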

7. Ongoing Challenges and Future Direction 

The ascent of generative AI and synthetic media ushers in a complex new era, posing unprecedented challenges to intellectual property protections, consumer rights, and societal trust. As deepfakes and similar fabrications become indistinguishable from reality, risks of mass deception and brand dilution intensify. Trademarks grapple with blurred lines between infringement and fair use while publicity rights wrestle with consent in an age of identity theft. 

Given the potency of deepfakes in shaping narratives, detecting such content is essential. Several technologies have shown promise in recognizing deepfake content with high accuracy; machine learning algorithms in particular have proven effective at spotting AI-generated videos by analyzing facial and vocal anomalies. Their application in conflict zones is crucial for mitigating the spread of misinformation and malicious propaganda. 
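
As a rough illustration of the frame-level approach, the sketch below scores a video by averaging a binary classifier’s per-frame probability that the content is synthetic. The setup is assumed: “deepfake_head.pt” is a hypothetical fine-tuned checkpoint invented for this example, and real detectors also exploit temporal and audio cues that a per-frame model misses.

```python
# Sketch of frame-level deepfake scoring: run a binary CNN over sampled
# frames and average the "fake" probabilities. Assumes a trained checkpoint;
# "deepfake_head.pt" is a hypothetical file name used for illustration.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # [real, fake] logits
# model.load_state_dict(torch.load("deepfake_head.pt"))  # hypothetical weights
model.eval()

preprocess = T.Compose([T.ToTensor(), T.Resize((224, 224), antialias=True)])

def fake_probability(video_path: str, max_frames: int = 32) -> float:
    """Average the per-frame probability that a video is synthetic."""
    cap, scores = cv2.VideoCapture(video_path), []
    while cap.isOpened() and len(scores) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR
        with torch.no_grad():
            logits = model(preprocess(rgb).unsqueeze(0))
        scores.append(torch.softmax(logits, dim=1)[0, 1].item())
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```

With untrained weights the scores are meaningless; the point is only the shape of the pipeline: sample frames, classify each, aggregate.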

Recent governmental initiatives signal the beginnings of a framework evolving to suit new technological realities. However, addressing the systemic vulnerabilities exposed by AI’s scalable distortion of truth demands a multifaceted response. More than mere legal remedies, restoring balance requires ethical guardrails in technological development and usage norms, along with greater public awareness. 

In confronting this landscape, maintaining foundational pillars of perception and integrity remains imperative, even as inventions test traditional boundaries. Our preparedness hinges on enacting safeguards fused with values that meet this watershed moment where human judgment confronts machine creativity. With technology rapidly outpacing regulatory oversight, preventing harm from generative models remains an elusive goal. But don’t worry. I am sure AI will come up with a solution soon. 

Background: Sephora and ModiFace 

In a market filled with a mixture of new direct-to-consumer influencer brands gaining traction, brick-and-mortar drugstores offering cheaper alternatives known as “dupes”, and high-end retailers investing in both their online and in-store experiences, one major player dominates: Sephora. Founded in 1970, Sephora is a French multinational retailer of beauty and personal care products. Today, Sephora is owned by LVMH Moët Hennessy Louis Vuitton (“LVMH”) and operates 2,300 stores in 33 countries worldwide, with over 430 stores in America alone.  

LVMH attributes much of Sephora’s success to its “self-service” concept. Unlike its department store competitors, which stock beauty products behind a counter, Sephora allows consumers to touch and test its products without the mediation of a salesperson. This transformation of the store into an interactive experience underscores Sephora’s main value proposition: providing customers with a unique, interactive, and personalized shopping experience.1 Keeping with its customer experience-centric business model, Sephora has utilized technology to continue providing its customers with a personalized beauty experience.  

The tension created by two separate growing marketplaces puts significant pressure on Sephora to replicate the online shopping experience in-store and vice versa. For make-up, finding a perfect complexion match for face products and a flattering color of lipstick and blush requires an understanding of the undertones and overtones of the make-up shades. Typically, this color match inquiry is what makes or breaks the sale—if a shopper is not confident the make-up is a match, they are less likely to purchase. To address this friction in the customer purchase journey, Sephora rolled out “Find My Shade,” an online tool designed to help shoppers find a foundation product after inputting their preferences and prior product use. This tool provides the in-store feel of viewing multiple products at once, while providing some assurance on a color match. For Sephora, the online sale provides ample customer data: which products were searched, considered, and ultimately purchased, all against a backdrop of a user’s name, geography, preferences, and purchase history. The resulting customer data is the backbone of Sephora’s digital strategy: facilitating customer purchases online by reducing friction, while mining data to inform predictions on customer preferences.  
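
To give a flavor of what such a tool computes, the sketch below ranks a small catalog of foundation shades by distance from a shade the user already knows works. Every detail here, the color coordinates, the undertone axis, and the shade names, is invented for illustration; Sephora’s actual “Find My Shade” logic is not public.

```python
# Toy shade-matching sketch: represent each foundation shade as CIELAB-like
# color coordinates plus a warm/cool undertone value, then recommend the
# catalog shades nearest to a shade the user already knows matches.
# All values and names below are hypothetical.
import numpy as np

catalog = {
    "Shade 110": np.array([78.0, 8.0, 18.0, -0.4]),  # cool ivory
    "Shade 210": np.array([70.0, 11.0, 22.0, 0.1]),  # neutral beige
    "Shade 230": np.array([66.0, 13.0, 25.0, 0.5]),  # warm sand
}

def recommend(reference: np.ndarray, k: int = 2) -> list:
    """Return the k catalog shades closest to the user's known-good shade."""
    dist = {name: float(np.linalg.norm(vec - reference))
            for name, vec in catalog.items()}
    return sorted(dist, key=dist.get)[:k]

user_known_match = np.array([69.0, 11.5, 23.0, 0.2])  # from a prior purchase
print(recommend(user_known_match))  # e.g. ['Shade 210', 'Shade 230']
```

Whatever the real implementation, each quiz answer and recommendation click feeds the customer data set described above.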

In line with its innovative in-store strategy, Sephora announced a partnership with ModiFace in 2014 to launch a Virtual Artist Kiosk providing augmented reality mirrors in its brick-and-mortar stores. First introduced in Milan, the kiosk sought to make testing make-up easier for customers by simulating makeup products on a user’s face in real time, without requiring a photo upload.2 To begin a session at the kiosk, users provide their e-mail address and contact information, either tied to a pre-existing Sephora customer account or supplying Sephora with new customer information. Using facial recognition technology, the ModiFace 3-D augmented reality mirror takes a live capture of a user’s face and overlays make-up products onto it, showing the user how each product would look when applied. This allows users to test thousands of products tailored to their unique features. Without opening a real product, users can see whether it suits their skin tone, bringing the personalization and tailored options typically available only online into the store while providing Sephora with valuable consumer data. At the end of a session, the user receives follow-up information about the products tested via the e-mail address provided or via their Sephora account.  
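
For readers curious how such an overlay can work at all, the sketch below tints the lip region of a single frame using open-source tools. It is an illustrative approximation only; ModiFace’s proprietary pipeline is not public, and a production mirror tracks the face across live video with far more refinement.

```python
# Illustrative virtual lipstick try-on for one frame (not ModiFace's method):
# MediaPipe Face Mesh locates lip landmarks, then the lip region is blended
# toward the product color.
import cv2
import mediapipe as mp
import numpy as np

# Unique landmark indices touched by the lip connections in the face mesh.
LIP_IDXS = {i for pair in mp.solutions.face_mesh.FACEMESH_LIPS for i in pair}
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                            max_num_faces=1)

def apply_lipstick(frame_bgr, color_bgr=(60, 20, 180), alpha=0.4):
    """Blend a lipstick color over the detected lip region of one face."""
    h, w = frame_bgr.shape[:2]
    result = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return frame_bgr  # no face found; return the frame untouched
    landmarks = result.multi_face_landmarks[0].landmark
    pts = np.array([(int(landmarks[i].x * w), int(landmarks[i].y * h))
                    for i in sorted(LIP_IDXS)], dtype=np.int32)
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [cv2.convexHull(pts)], 255)
    solid = np.full(frame_bgr.shape, color_bgr, dtype=np.uint8)
    tinted = cv2.addWeighted(frame_bgr, 1 - alpha, solid, alpha, 0)
    return np.where(mask[..., None] == 255, tinted, frame_bgr)

# In a kiosk this would run on every frame of the live camera feed, e.g.:
# out = apply_lipstick(cv2.imread("face.jpg"))
```

Note what even this toy version needs: a frame-by-frame capture of the user’s facial geometry, which is exactly the kind of data that later drew BIPA scrutiny.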

At the time of the Virtual Artist Kiosk’s introduction to stores in the United States in 2014, Sephora did not need to consider federal privacy laws—there were none to consider. Consumer data privacy laws were in their infancy, with the Federal Trade Commission (FTC) at the helm of most cases, which aimed to protect consumers from data breaches and identity theft and to hold corporations accountable to their respective privacy policies.3 Significantly, however, Sephora did not consider the state-specific laws at play: in particular, the Illinois Biometric Information Privacy Act (BIPA), which applied to all Sephora locations in the state of Illinois (IL).  

Issue 

In December 2018, Auste Salkauskaite (Plaintiff) brought a class action suit against Sephora and ModiFace Inc. (ModiFace), claiming that both violated BIPA by using ModiFace’s technology to collect biometric information about customer facial geometry at a Virtual Artist Kiosk in a Sephora store in Illinois. Plaintiff further alleged that her biometric information, cell phone number, and other personal information were collected and disseminated by both Sephora and ModiFace in an effort to sell Sephora products to her.4 Plaintiff also alleged that Sephora did not inform her in writing that her biometrics were being collected, stored, used, or disseminated, and that Sephora failed to obtain her written or verbal consent or to provide any notice of whether her biometric information would be retained and/or sold. In pursuing a class action lawsuit, Plaintiff sought to include individuals who had their biometrics “captured, collected, stored, used, transmitted or disseminated by ModiFace’s technology in Illinois,” with an additional subclass of those who experienced the same treatment from Sephora in Illinois.5 

BIPA “governs how private entities handle biometric identifiers and biometric information (“biometrics”) in the state of IL.”6 By including a private right of action, BIPA enables residents of IL to file suit against private entities who allegedly violate the law. In this case, Plaintiff claimed that Sephora and ModiFace violated three provisions of BIPA: 1) requiring a private entity in possession of biometrics to release a publicly accessible written policy describing its use of the collected biometrics, 2) forbidding a private entity from “collecting, capturing, purchasing, receiving, or otherwise obtaining biometrics without informing the subject that biometrics are being collected and stored,” and 3) forbidding “disclosing or disseminating biometrics of a subject without consent.”7 

Response 

In the immediate aftermath of the suit, Sephora did not release any statements on the pending litigation. In its January 2019 answer to the Plaintiff’s complaint, Sephora denied all claims by 1) pointing to its publicly available privacy statement, and 2) denying taking, collecting, using, storing, or disseminating Plaintiff’s biometrics. Specifically, Sephora claimed that by using its mobile application and/or website, users agree to Sephora’s terms of service, which release Sephora from liability. This included the Virtual Artist Kiosk, which required users to accept Sephora’s terms of service before prompting them to provide any contact information.  

Sephora and the Plaintiffs (once class action status was granted) reached a settlement agreement in December 2020, which allowed anyone who had interacted with a Virtual Artist Kiosk in a Sephora store in IL since July 2018 to file a claim for a share of the settlement, with claimants eligible for up to $500 each. As of April 2020, 10,500 notices had been sent to potential claimants. Hundreds of claims were filed by potential class members, which could result in just under $500,000 in total claims. Sephora has never officially commented on the suit, despite some media coverage in IL.8 

ModiFace, on the other hand, successfully moved to dismiss the claim for lack of personal jurisdiction in June 2020. The Court reasoned that ModiFace did not purposefully avail itself of the privilege of conducting business in IL. The Court cited a declaration of ModiFace’s CEO stating that ModiFace never had property, employees, or operations in Illinois and is not registered to do business there. He further stated that ModiFace’s business focuses on selling augmented-reality products to beauty brand companies and that ModiFace does not participate in marketing, sales, or commercial activity in Illinois. ModiFace claimed that its business relationship with Sephora did not occur in Illinois and that Sephora never discussed the use of ModiFace technology in Illinois: there was no agreement in place regarding Illinois and no transmission of biometric information between the companies. Overall, the Court found that Sephora’s use of ModiFace technology in Illinois did not establish minimum contacts.  

Notably, ModiFace had already been acquired by L’Oreal in March 2018, months before the lawsuit was filed. L’Oreal is the world’s biggest cosmetics company and, unlike Sephora, designs, manufactures, and sells its own products. L’Oreal and ModiFace worked together for about seven years before the acquisition. Like Sephora, L’Oreal ramped up investment in virtual try-on technology in an effort to decrease customer barriers to purchase. Since the acquisition, L’Oreal’s Global Chief Digital Officer has said that conversion rates from search to purchase have tripled. At the time of the ModiFace acquisition, L’Oreal spent about 38% of its media budget on digital campaigns like virtual try-on. It is estimated that this investment has grown significantly as L’Oreal strategizes around minimizing friction in the customer experience. More recently, Estee Lauder was also sued for alleged violations of BIPA for collecting biometric data via a virtual make-up “try-on” tool using technology similar to ModiFace’s.9  

Lessons Learned for Future: CCPA and Beyond 

Sephora’s data privacy legal woes have ramped up significantly since the 2018 BIPA lawsuit. On August 24, 2022, California Attorney General Rob Bonta announced a settlement resolving allegations that the company violated the California Consumer Privacy Act (CCPA). The Attorney General alleged that Sephora failed to disclose to consumers that it was selling their personal information, failed to process user requests to opt out of sale, and did not remedy these violations within the 30-day cure period the CCPA allowed. The terms of the settlement required Sephora to pay $1.2 million in penalties and comply with injunctive terms tied to its violations, including updating its privacy policy to affirmatively state that it sells data, providing mechanisms for consumers to opt out of the sale of their personal information, conforming its service provider agreements to the CCPA’s requirements, and providing status reports to the Attorney General documenting its progress.  

Sephora’s settlement is the first official non-breach-related settlement under the CCPA. Many legal analysts argue that the California Office of the Attorney General (OAG) intends to enforce the CCPA more aggressively, a posture signaled in a significant manner by the Sephora settlement. Specifically, the OAG is expected to focus on businesses that share or sell information to third parties for targeted advertising.10 Importantly, under the California Privacy Rights Act (CPRA), which goes into effect on January 1, 2023, companies will no longer benefit from the 30-day notice period to remedy alleged violations.  

As with the BIPA lawsuit, Sephora did not make any official statements on the CCPA settlement. Sephora’s privacy policy is regularly updated, however, signaling at least minimal attention to the regulations set forth by the CCPA.11 In 2022, Sephora saw revenues grow 30%, with a significant rebound in in-store activity, indicating that Sephora customers nationwide have not been deterred by its privacy litigation woes. As Sephora continues to innovate its in-store experience, it must keep a watchful eye on state-specific regulation as Colorado and Virginia launch their own data privacy laws in the near future.  

1. Introduction 

Due to a shortage of mental health support, AI-enabled chatbots offer a promising way to help children and teenagers address mental health issues. Users often view chatbots as mirroring human therapists. However, unlike their human therapist counterparts, mental health chatbots are not obligated to report suspected child abuse. Legal obligations could shift, requiring the technology companies that offer these chatbots to adhere to mandated reporting laws. 

2. Minors face unprecedented mental health challenges 

Many teenagers and children experience mental health difficulties, including trouble coping with stress at school, depression, and anxiety. Although there is consensus on the harms of mental illness, there is a shortage of care. An estimated 4 million children and teenagers do not receive necessary mental health treatment and psychiatric care. Despite the high demand for mental health support, there are only approximately 8,300 child psychiatrists in the United States. Mental health professionals are overburdened and unable to provide enough support. These gaps in necessary mental health care create an opportunity for technology to intervene and support the mental health of minors.   

3. AI-enabled chatbots offer a mental health solution 

AI-enabled chatbots offer mental health services to minors and may help alleviate deficiencies in health care. Technology companies design AI-enabled mental health chatbots to facilitate realistic text-based dialogue with users and to offer support and guidance. Over forty different kinds of mental health chatbots exist, and users can download them from app stores on mobile devices. Proponents of mental health chatbots contend the chatbots are effective, easy to use, accessible, and inexpensive. Research suggests some individuals prefer working with chatbots instead of a human therapist because they feel less stigma asking for help from a robot. Numerous other studies assess the effectiveness of chatbots for mental health; although many report positive results, research concerning the usefulness of mental health chatbots is in its nascent stages. 

Mental health chatbots are simple to use for children and teenagers and are easily accessible. Unlike mental health professionals who may have limited availability for patients, mental health chatbots are available to interact with users at all times. Mental health chatbots are especially beneficial for younger individuals who are familiar with texting and interacting with mobile applications. Due to the underlying technology, chatbots are also scalable and able to reach millions of individuals. Finally, most mental health chatbots are inexpensive or free. The affordability of chatbots is important as one of the most significant barriers to accessing mental health support is its cost. Although there are many benefits of mental health chatbots, most supporters agree that AI tools should be part of a holistic approach to addressing mental health. 

Critics of mental health chatbots point to the limited research on their effectiveness and instances of harmful responses by chatbots. Research indicates that users sometimes rely on chatbots in times of mental health crises. In rare cases, some users received harmful responses to their requests for support. Critics of mental health chatbots are also concerned about the impact on teenagers who have never tried traditional therapy. The worry is that teenagers may disregard mental health treatment in its entirety if they do not find chatbots to be helpful. It is likely too early to know if the risks raised by critics are worth the benefits of mental health chatbots. However, mental health chatbots offer a promising way to ease the shortage of mental health professionals.  

4. The law may evolve to mandate reporting for AI-enabled therapy chatbots  

AI is a critical enabler of growth across the world and is developing at a fast pace. As a result, the United States is prioritizing safeguards against AI’s potentially harmful impacts, such as privacy concerns, job displacement, and a lack of transparency. Currently, the law does not require technology companies offering mental health chatbots to report suspected child abuse. However, the public policy reasons underlying mandatory reporting laws, in combination with the humanization of mental health chatbots, suggest the law may come to require companies to report child abuse.  

Mandatory reporting laws help protect children from further harm, save other children from abuse, and increase safety in communities. In the United States, states impose mandatory reporting requirements on members of specific occupations or organizations. The types of occupations that states require to report child abuse are those whose members often interact with minors, including child therapists. Technology companies that produce AI-enabled mental health chatbots are not characterized as mandated reporters. However, like teachers and mental health counselors, these companies are also in a unique position to detect child abuse.  
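
Technically, such detection could sit as a screening layer in front of the chatbot. The sketch below is purely hypothetical: the patterns, routing, and responses are invented for illustration, no vendor is known to implement exactly this, and a production system would use a clinician-reviewed trained classifier rather than keyword rules.

```python
# Hypothetical sketch of screening chatbot messages for possible abuse
# disclosures and routing them to a human reviewer. Patterns and responses
# are invented for illustration; this is not any vendor's implementation.
import re
from dataclasses import dataclass

DISCLOSURE_PATTERNS = [
    r"\bhurt(s|ing)? me\b",
    r"\bafraid (of|to go) home\b",
    r"\bhit(s|ting)? me\b",
]

@dataclass
class ScreenResult:
    flagged: bool
    matches: list

def screen_message(text: str) -> ScreenResult:
    """Flag messages that may warrant review by a trained human."""
    hits = [p for p in DISCLOSURE_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return ScreenResult(flagged=bool(hits), matches=hits)

def handle_user_message(text: str) -> str:
    if screen_message(text).flagged:
        # Route to a human clinician queue; a mandated-reporting regime would
        # govern what happens after review, not the chatbot itself.
        return ("It sounds like something serious is going on. "
                "I'm connecting you with a person who can help.")
    return "Thanks for sharing. Tell me more about how that felt."

print(handle_user_message("My stepdad hits me when he's angry"))
```

Even this trivial filter shows why the policy question is live: the disclosure arrives at the company in plain text, which is precisely what puts chatbot providers in a position comparable to teachers and counselors.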

Mandatory reporting is a responsibility generally associated with humans. The law does not require companies that design AI-enabled mental health chatbots to report suspected child abuse. However, research suggests that users may develop connections to chatbots in a similar way that they do with humans. Often, companies creating mental health chatbot applications take steps to make the chatbots look and feel similar to a human therapist. The more that users consider AI mental health chatbots to be a person, the more effective the chatbot is likely to be. Consequently, technology companies invest in designing chatbots to be as human-like as possible. As users begin to view mental health chatbots as possessing human characteristics, mandatory reporting laws may adapt and require technology companies to report suspected child abuse.  

The pandemic compounded the tendency for users to connect with mental health chatbots in a human-like way. Research from the University of California shows that college students began bonding with chatbots to cope with social isolation during the pandemic. The researchers found that many mental health chatbots communicated like humans, which triggered a social reaction in users. Since many people were unable to socialize outside their homes or access in-person therapy, some users formed bonds with chatbots. The research included statements by users about trusting the chatbots and not feeling judged, responses commonly associated with human relationships.  

As chatbot technology becomes more effective and human-like, society may expect that legal duties will extend to mental health chatbots. The growth in popularity of these chatbots is motivating researchers to examine the impacts of chatbots on humans. When the research on personal relationships between chatbots and users becomes more robust, the law will likely adapt. In doing so, the law may recognize the human-like connection between users and chatbots and require technology companies to abide by the legal duties typically required of human therapists.  

While the law may evolve to require technology companies to comply with the duty to report suspected child abuse, several obstacles and open questions exist. Requiring companies that offer AI-enabled mental health chatbots to report suspected child abuse may overburden the child welfare system. Mandatory reporting laws include a penalty for failure to report, which has led to unfounded reporting. In 2021, mandatory reporting laws prompted individuals to file over 3 million reports in the United States. However, child protective services substantiated less than one-third of those reports, indicating that the majority were unfounded. False reports can be disturbing for the families and children involved. 

Many states have expanded the list of individuals who are mandated reporters, leading to unintended consequences. Some states extended mandatory reporting laws to include drug counselors, technicians, and clergy. Requiring companies offering mental health chatbots to follow mandatory reporting laws would be another expansion and may carry unintended consequences. For example, companies may over-report suspected child abuse out of fear of potential liability. Large volumes of unsubstantiated reports could become unmanageable and distract Child Protection Services from investigating more pressing child abuse cases.  

Another open question with which the legal system will have to contend is how to address confidentiality concerns. Users of mental health chatbots may worry that a mandatory disclosure requirement would jeopardize their data. Additionally, users may feel the chatbots are not trustworthy and that their statements could be misinterpreted. Such fears may deter those who need help from using mental health chatbots at all. A decline in trust could have the inadvertent effect of disincentivizing individuals from adopting any AI technology altogether. The legal system will need to consider how to manage confidentiality and unfounded-report concerns. Nevertheless, as chatbots become more human-like, the law may require technology companies to comply with duties previously associated with humans.  

5. Conclusion  

AI-enabled mental health chatbots provide a promising new avenue for minors to access support for mental health challenges. Although these chatbots offer numerous potential benefits, the legal framework has not kept pace with their rapid development. While the future remains uncertain, the law may come to require technology companies to adhere to mandatory reporting laws related to child abuse. Consequently, technology companies should contemplate measures to ensure compliance. 

The monopolization of intellectual property (IP) in the video game industry has increasingly attracted attention as many antitrust-like issues have arisen. These issues are, however, not the prototypical issues that antitrust law is classically concerned with. Rather, supported by the United States’ IP system, video game publishers (VGPs) have appropriated video game IP to construct legal monopolies that exert control over individuals involved in the esports (and greater video game) industry. Particularly, this control disproportionately affects the employees of VGPs who provide live commentary, analysis, and play-by-play coverage of esports competitions (hereinafter “esports casters”) by impeding their ability to freely utilize and monetize their skills and knowledge. This restriction further hampers their capacity to adapt to evolving market conditions and secure stable employment in the field. Moreover, it creates barriers to entry for aspiring casters, thereby diminishing the industry’s diversity of voices and perspectives. 

The Pieces That Make Up the Esports Industry 

First, it is important to understand the structure and landscape of esports. The esports industry can generally be equated to any other “traditional” sport. As in any “traditional” sport, there are players, coaches, team and player support staffs, casters & analysts, and production crews. Excluded from this list are game developers, who are an integral part of the esports industry but have no equivalent in “traditional” sports. From a functional perspective, the esports industry also operates much like “traditional” sports: there are regional leagues, seasons, playoffs, championships, team franchising & partnerships, rules (and referees), salary parameters, player trading & player contracts, practice squads, scrimmages, and sports betting.  

So why are esports casters disproportionately affected if esports and “traditional” sports are effectively structured in the same manner? To answer this, it is important to understand the role legal IP monopolies play in the esports industry. 

The Monopoly Power of IP in the Context of Video Games 

Unlike “traditional” sports, esports exists in a unique landscape where the IP of the entire sport is legally controlled by a single entity, the VGP. As an exploratory example, let’s analogize American football to esports. No entity entirely owns the game of football, but individual entities entirely own video games (like Riot Games for League of Legends and Valorant, Valve for Counter-Strike, and Activision Blizzard for Call of Duty). Moreover, the National Football League (NFL) lacks both the legal and physical capacity to stop or otherwise place conditions on every group that would like to play football, but Riot Games, Valve, and Activision Blizzard are legally within their rights to prevent players or entire leagues from playing their video games. This inherent quality of the esports industry functions as a legal monopoly and makes the industry unlike any other broadcasted competition. 

The Legal IP Monopoly is NOT a Competitive Monopoly 

When people think of monopolies, they typically think of competitive monopolies. Simply put, competitive monopolies are created when a single seller or producer assumes a dominant position in a particular industry. However, the monopolies VGPs have created are limited legal monopolies and thus are not the monopolies that antitrust law is concerned with. 

The precise distinction between the two monopolies (and their associated legal frameworks) is as follows. Antitrust law promotes market structures that encourage initial innovation through a competitive market, while IP law encourages initial innovation with the asterisk of limited exclusivity. More specifically, antitrust law enables subsequent innovation by protecting competitive opportunities beyond the scope of an exclusive IP right. When competitive opportunities are absent (i.e., a competitive monopoly has been created), antitrust law steps in to reestablish competition. Conversely, IP law enables subsequent innovation by requiring disclosure of the initial innovation. The nature of these disclosures limits the scope of the control possessed by any particular holder of an IP right. In other words, IP law provides a narrower, but legal, monopoly power over a particular innovation. 

While the above discussion is patent-focused, it applies equally to businesses that rely more heavily on copyright and trademark. Indeed, VGPs rely on all forms of IP to construct a comprehensive IP wall around their respective video game titles. 

Are VGPs’ Legal Monopolies Harmful? 

Because this legal monopoly is not one that antitrust law is concerned with, no law or governing body investigates or oversees the monopoly-like practices and problems created by VGPs. As a result, these issues have been deemed intrinsic characteristics of the esports industry, limiting the ways in which individuals can seek remedies. While there have been isolated wins related to equal pay and discrimination, no comprehensive attention has yet been given to the power imbalance of this industry. This leads to issues of job mobility and skill transferability for esports casters. 

Why Only Esports Casters? 

Game developers and production crews are not as affected by the job mobility and skill transferability issues simply because the skills required for their respective roles transfer easily to other industries. A game developer who works on character design can move to the film industry and work on CGI. Similarly, a camera operator for North American Valorant competitions can go be a camera operator for Family Feud. As for players, their employment is controlled by the partnered/franchised teams, not the VGPs. As in “traditional” sports, players are free to move within and between leagues as their contracts permit. 

Esports casters are different, though. In professional football, casters, while well versed in the game of football, have substantial opportunities for job mobility if a particular situation is unfavorable. The reason relates back to the NFL’s inability to monopolize the sport of football: other football leagues can exist outside of the NFL. Not only can football casters transfer their skills within the NFL (such as Al Michaels’ move from NBC to Amazon), but they can also move to other leagues (like NCAA Football, Indoor Football, and the XFL). This is simply not an option for esports casters. Because VGPs can create a legal IP monopoly around a particular game (making them the single source and sole employer for esports casters of that game), they can completely control the leagues and broadcasts associated with their games. The economics of this allows VGPs to underpay their esports casters because (1) no other league covers the same game they cast and (2) transitioning from casting one video game to another, while possible, is not easy (think of a football commentator becoming a basketball commentator). As a result, VGPs can create an exploitative employment structure in which esports casters are not compensated in accordance with the value they provide. This leads to barriers to entry for new casters, a lack of diversity in the industry, and challenges for existing casters trying to adapt to changing market conditions. 

Possible Solutions 

Esports casters’ ability to monetize their skills and knowledge is often limited by VGPs’ exclusive exercise of IP rights. To resolve this, a careful balance must be struck between IP rights and the livelihoods of individuals in the esports industry, including casters. One possible solution is a union-like structure that advocates for casters. Such an organization would give esports casters the opportunity to consolidate pay information, standardize contractual obligations, and voice their concerns about the structure of the esports industry. While implementing such an organization would be challenging given the novel, underdeveloped, and constantly fluctuating nature of the industry, many advocates in esports are already pushing for better compensation, inclusivity, and benefits for casters and analysts. 

Even though progress is slow, the industry is improving. Hopefully, with these efforts, the esports industry can become fairer and more inclusive than “traditional” sports. Nonetheless, the only certainty moving forward is that the legal IP monopolies surrounding video games are not going anywhere. 

Companies Face Massive Biometric Information Privacy Act (BIPA) Allegations with Virtual Try-On Technology 

Virtual try-on technology (“VTOT”) allows consumers to use augmented reality (“AR”) to see what a retailer’s product may look like on their body. By granting the retailer’s website access to their device’s camera, consumers can shop and try on products from the comfort of their home without ever stepping into a brick-and-mortar storefront. Retailers offer customers virtual try-on for consumer goods through their websites and apps, or through social media filters on platforms like Instagram, Snapchat, and TikTok. While virtual try-on emerged in the early 2010s, the COVID-19 pandemic spurred its growth and adoption among consumers. Retailers, however, have seen a recent uptick in lawsuits over the biometric privacy concerns raised by virtual try-on technology, especially in Illinois. In 2008, Illinois passed the Biometric Information Privacy Act (“BIPA”), one of the strongest and most comprehensive biometric privacy acts in the country.  

This blog post will explore current lawsuits in Illinois and the Seventh Circuit that could impact how retailers and consumers use virtual try-on technology, as well as the privacy and risk implications of the technology for both groups.  

Background on Virtual Try-On 

From eyeglasses to shoes, virtual try-on technology gives consumers an immersive and fun way to shop and visualize themselves with a product without ever leaving their homes. Fashion brands often use virtual try-on technology to enhance consumer experiences and find that it may positively affect purchase decisions and sales. Because the enhanced virtual try-on experience makes customers more likely to pick the correct product the first time, customers may also be less likely to return the item. With revenue in the AR and VR market expected to exceed $31 billion in 2023 and one-third of AR users having used the technology to shop, brands are responding to the growing demand for AR and virtual try-on.  

Although the pandemic drove brands to grow their virtual try-on and AR offerings, brands had used virtual try-on for many years prior. Sephora launched an augmented reality mirror back in 2014. Maybelline allowed consumers to virtually try on nail polish. In the summer of 2019, L’Oréal Group partnered with Amazon to integrate ModiFace, its AR and artificial intelligence (“AI”) company, into Amazon’s marketplace. The pandemic only pushed brands to expand those offerings.  

By mid-2021, social media and tech brands had expanded their AR offerings to cash in on the increasing role that virtual try-on plays in consumers’ purchase decisions. Perfect Corp., a beauty technology company, integrated with Facebook in 2021 to expand the platform’s AR offerings specifically geared toward beauty. The integration allows Facebook to make it easier and cheaper for brands and advertisers to “integrate their catalogs with AR.” It also expanded who could use Meta’s platforms for AR-enhanced shopping, since any merchant who uses Perfect Corp. could take advantage of Facebook’s Catalog, AR-enabled advertisements, and Instagram Shops. Perfect Corp.’s CEO Alice Chang wrote in the announcement:  

“There’s no denying the impact that social media platforms like Instagram continue to play in the consumer discovery and shopping journey. This integration underlines the importance of a streamlined beauty shopping experience, with interactive AR beauty tech proven to drive conversion and enhance the overall consumer experience.”  

That same week, Facebook, now Meta, announced its integration with Perfect Corp. and later announced plans to incorporate ModiFace into its new advertising formats. In addition, Meta’s Instagram partnered with L’Oréal’s ModiFace. With the swipe of a button, consumers on Instagram can try on new lipsticks and then purchase the product immediately within the app. The expansion of AR features on Meta’s platforms makes it seamless for consumers to shop without leaving their homes or even the app.  

Outside of Meta, Snapchat offers consumers various lenses and filters, including AR shopping experiences. In 2022, Nike sponsored its own Snapchat lens that allowed consumers to try on and customize their own virtual pair of Air Force 1 sneakers. Consumers could swipe through several colors and textures and, once satisfied, select “shop now” to purchase their custom Nike Air Force 1s instantaneously.  

Rising Biometric Concerns and Lawsuits  

While demand for AR and virtual try-on is growing, the innovative technology does not come without major concerns. Brands like Charlotte Tilbury, Louis Vuitton, Estee Lauder, and Christian Dior have been slapped with class action lawsuits in Illinois and the Seventh Circuit for violating BIPA.  

According to the American Civil Liberties Union (“ACLU”) of Illinois, BIPA requires that private companies: (1) inform the consumer in writing of the data they are collecting and storing, (2) inform the consumer in writing of the specific purpose and length of time for which the data will be collected, stored, and used, and (3) obtain written consent from the consumer. Additionally, BIPA prohibits companies from selling or otherwise profiting from consumer biometric information. The Illinois law is considered one of the most stringent biometric privacy laws in the country and stands as one of the only laws of its kind “to offer consumers protection by allowing them to take a company who violates the law to court.” BIPA allows consumers to recover up to $1,000 in liquidated damages or actual damages, whichever is greater, per violation, in addition to attorneys’ fees and expert witness fees. 

In November 2022, an Illinois federal judge allowed a BIPA lawsuit against Estee Lauder Companies, Inc. to move forward with regard to its Too Faced makeup brand. The plaintiff alleges that the company collected her facial-geometry data in violation of BIPA when she used the makeup try-on tool available on Too Faced’s website. Under Illinois law, a “violation of a statutory standard of care is prima facie evidence of negligence.” Kukovec v. Estee Lauder Companies, Inc., No. 22 CV 1988, 2022 WL 16744196, at *8 (N.D. Ill. Nov. 7, 2022) (citing Cuyler v. United States, 362 F.3d 949, 952 (7th Cir. 2004)). While the judge ruled that the plaintiff did not sufficiently allege recklessness or intent, he allowed the case to move forward because the plaintiff “present[ed] a story that holds together” and did more than “simply parrot the elements of a BIPA claim.” The judge found it reasonable to infer that the company had to collect biometric data for the virtual try-on to work.  

In February 2023, Christian Dior successfully invoked a BIPA exemption, leading to the dismissal of a class action lawsuit filed against it. Christian Dior offered virtual try-on for sunglasses on its website. According to the lead plaintiff, the luxury brand failed to obtain consent prior to capturing her biometric information for the virtual try-on offering, in violation of BIPA. The judge, however, held that BIPA’s general health care exemption applied to VTOT for eyewear, including nonprescription sunglasses offered by consumer brands. BIPA exempts information captured from a “patient” in a “health care setting.” Since BIPA does not define these terms, the judge turned to Merriam-Webster. “Patient” was defined as “an individual awaiting or under medical care and treatment” or “the recipient of any of various personal services.” The judge found that sunglasses, even nonprescription ones, are used to “protect one’s eyes from the sun and are Class I medical devices under the Food & Drug Administration’s regulations.” Thus, an individual using VTOT is classified as a patient “awaiting . . . medical care,” since sunglasses are a medical device that protects vision and VTOT is the “online equivalent” of a brick-and-mortar store where one would purchase sunglasses.  

Further, “health care” was defined as “efforts made to maintain or restore physical, mental, or emotional well-being especially by trained and licensed professionals.” The judge stated that she had “no trouble finding that VTOT counts as a ‘setting.’” Thus, under BIPA’s general health care exemption, consumers who purchase eyewear, including nonprescription sunglasses, using VTOT are considered “patients” in a “health care setting.” 

Both cases show that while virtual try-on may operate similarly across companies’ websites, the type of product a brand offers consumers the opportunity to “try on” may determine whether an exemption applies. The “health care” exemption in the Christian Dior case was not the first time a company benefitted from an exemption. BIPA lawsuits can be costly for companies: TikTok settled a BIPA lawsuit for $92 million in 2021 over allegations that the social media app harvested biometric face and voice prints from user-uploaded videos. Although that example does not involve virtual try-on, it illustrates how diligence and expertise with BIPA requirements can save brands from enormous settlements. Companies looking to expand into the virtual try-on space should carefully consider how they will obtain explicit written consent from consumers (and satisfy other BIPA requirements, like data destruction policies and procedures) to minimize class action and litigation exposure. A minimal sketch of what such a consent gate might look like follows.
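
To make the compliance point concrete, here is a hypothetical sketch of a BIPA-style consent gate for a browser-based try-on feature. The `ConsentRecord` shape and `recordConsent` helper are illustrative assumptions, not part of any real retailer SDK or of BIPA itself; only `navigator.mediaDevices.getUserMedia` is a standard browser API. Actual compliance would require counsel-reviewed disclosures and retention schedules.

```typescript
// Hypothetical sketch: gating camera access for a virtual try-on feature
// behind BIPA-style written notice and consent. Illustrative names only.

interface ConsentRecord {
  userId: string;
  disclosure: string;    // what is collected and why (written disclosure)
  retentionDays: number; // disclosed retention/destruction schedule
  consentedAt: string;   // ISO timestamp of the affirmative release
}

// Assumed persistence hook; in practice this would write to a durable,
// auditable store so the release could be produced in litigation.
async function recordConsent(record: ConsentRecord): Promise<void> {
  console.log("consent recorded:", record);
}

async function startVirtualTryOn(userId: string): Promise<MediaStream> {
  const disclosure =
    "We collect facial-geometry data solely to render your try-on preview. " +
    "It is stored for no more than 30 days and is never sold.";

  // 1. Disclose collection, purpose, and retention, and obtain an
  //    affirmative release BEFORE any biometric capture occurs.
  if (!window.confirm(disclosure + " Do you consent?")) {
    throw new Error("User declined consent; the camera stays off.");
  }

  // 2. Keep a record of the consent.
  await recordConsent({
    userId,
    disclosure,
    retentionDays: 30,
    consentedAt: new Date().toISOString(),
  });

  // 3. Only now request camera access (standard browser API).
  return navigator.mediaDevices.getUserMedia({ video: true });
}
```

The key design choice in this sketch is ordering: the disclosure and written release happen strictly before any camera frame is captured, mirroring BIPA’s requirement that notice and consent precede collection.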