Background: Sephora and ModiFace
In a market filled with new direct-to-consumer influencer brands gaining traction, brick-and-mortar drugstores providing cheaper options known as "dupes," and high-end retailers investing in both their online and in-store experiences, one major player dominates: Sephora. Founded in 1970, Sephora is a French multinational retailer of beauty and personal care products. Today, Sephora is owned by LVMH Moët Hennessy Louis Vuitton ("LVMH") and operates 2,300 stores in 33 countries worldwide, with over 430 stores in the United States alone.
LVMH attributes much of Sephora's success to its "self-service" concept. Unlike its department store competitors, which stock beauty products behind a counter, Sephora allows consumers to touch and test its products without the mediation of a salesperson. This transformation of the store into an interactive experience underscores Sephora's main value proposition: providing customers with a unique, interactive, and personalized shopping experience.1 Keeping with its customer experience-centric business model, Sephora has used technology to continue providing its customers with a personalized beauty experience.
The tension created by two separate, growing marketplaces puts significant pressure on Sephora to replicate the online shopping experience in-store and vice versa. For make-up, finding a perfect complexion match for face products and a flattering color of lipstick or blush requires an understanding of the undertones and overtones of each shade. Typically, this color-match inquiry is what makes or breaks the sale: if a shopper is not confident the make-up is a match, they are less likely to purchase. To address this friction in the customer purchase journey, Sephora rolled out "Find My Shade," an online tool designed to help shoppers find a foundation product after inputting their preferences and prior product use. This tool provides the in-store feel of viewing multiple products at once while providing some assurance of a color match. For Sephora, the online sale provides ample customer data: which products were searched, considered, and ultimately purchased, all against a backdrop of a user's name, geography, preferences, and purchase history. The resulting customer data is the backbone of Sephora's digital strategy: facilitating customer purchases online by reducing friction, while mining data to inform predictions about customer preferences.
In line with its innovative in-store strategy, in 2014 Sephora announced a partnership with ModiFace to launch the Virtual Artist Kiosk, bringing augmented reality mirrors to its brick-and-mortar stores. First introduced in Milan, the kiosk was designed to make testing make-up easier by simulating make-up products on a user's face in real time without requiring a photo upload.2 To begin a session at the kiosk, users provide an e-mail address and contact information, either tied to a pre-existing Sephora customer account or used to create a new customer record. Using facial recognition technology, the ModiFace 3-D augmented reality mirror takes a live capture of a user's face and shows how make-up products look when overlaid onto that live capture. This allows users to test thousands of products tailored to their unique features. Without opening a real product, users are able to see whether it suits their skin tone, bringing the personalization and tailored options typically available only online into the store while providing Sephora with valuable consumer data. At the end of a session with the ModiFace mirror, the user receives follow-up information about the products tested via the e-mail address provided or via their Sephora account.
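For readers curious how this kind of overlay works mechanically, the sketch below is a rough approximation assembled from open-source components (MediaPipe Face Mesh and OpenCV), not ModiFace's proprietary system: it detects facial landmarks in a live camera frame, isolates the lip region, and alpha-blends a product color over it. The landmark set, product shade, and blending weight are all illustrative assumptions.

```python
# Rough sketch of a live "virtual lipstick" overlay using open-source tools
# (MediaPipe Face Mesh + OpenCV). Illustrative only: this is not ModiFace's
# implementation; the landmark set, shade, and blend weight are assumptions.
import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh
# Indices of lip landmarks, derived from MediaPipe's published lip topology.
LIP_IDX = sorted({i for edge in mp_face_mesh.FACEMESH_LIPS for i in edge})
LIP_COLOR_BGR = (80, 40, 200)  # hypothetical product shade
ALPHA = 0.4                    # blend strength (assumed)

cap = cv2.VideoCapture(0)  # live capture, analogous to the in-store mirror
with mp_face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            h, w = frame.shape[:2]
            lms = result.multi_face_landmarks[0].landmark
            pts = np.array(
                [(int(lms[i].x * w), int(lms[i].y * h)) for i in LIP_IDX],
                dtype=np.int32,
            )
            # Paint the product color over the lip region, then blend it
            # with the original frame so skin texture still shows through.
            overlay = frame.copy()
            cv2.fillPoly(overlay, [cv2.convexHull(pts)], LIP_COLOR_BGR)
            frame = cv2.addWeighted(overlay, ALPHA, frame, 1 - ALPHA, 0)
        cv2.imshow("virtual try-on sketch", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```

Even in this toy version, the per-frame landmark coordinates are measurements of facial geometry, which is precisely the kind of data that drives the privacy questions discussed below.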
At the time of the Virtual Artist Kiosk's introduction to stores in the United States in 2014, Sephora did not need to consider federal privacy laws: there were none to consider. Consumer data privacy laws were in their infancy, with the Federal Trade Commission (FTC) at the helm of most cases, which aimed to protect consumers from data breaches and identity theft and to hold corporations accountable to their respective privacy policies.3 Significantly, however, Sephora did not consider the state-specific laws at play, in particular the Illinois Biometric Information Privacy Act (BIPA), which applied to all Sephora locations in the state of Illinois (IL).
Issue
In December 2018, Auste Salkauskaite (Plaintiff) brought a class action suit against Sephora and ModiFace Inc. (ModiFace), claiming that both violated the Illinois Biometric Information Privacy Act (BIPA) by using ModiFace's technology to collect biometric information about customer facial geometry at a Virtual Artist Kiosk in a Sephora store in Illinois. Plaintiff further alleged that her biometric information, cell phone number, and other personal information were collected and disseminated by both Sephora and ModiFace in an effort by Sephora to sell her products.4 Plaintiff further alleged that Sephora did not inform her in writing that her biometrics were being collected, stored, used, or disseminated. Sephora allegedly did not obtain Plaintiff's written or verbal consent, provide any notice that her biometric information was being collected, or disclose whether it would be retained and/or sold. In pursuing a class action lawsuit, Plaintiff sought to include individuals who had their biometrics "captured, collected, stored, used, transmitted or disseminated by ModiFace's technology in Illinois," with an additional subclass of those who experienced the same treatment from Sephora in Illinois.5
BIPA "governs how private entities handle biometric identifiers and biometric information ("biometrics") in the state of IL."6 By including a private right of action, BIPA enables residents of IL to file suit against private entities who allegedly violate the law. In this case, Plaintiff claimed that Sephora and ModiFace violated three provisions of BIPA: 1) the requirement that a private entity in possession of biometrics release a publicly accessible written policy describing its use of the collected biometrics, 2) the prohibition on "collecting, capturing, purchasing, receiving, or otherwise obtaining biometrics without informing the subject that biometrics are being collected and stored," and 3) the prohibition on "disclosing or disseminating biometrics of a subject without consent."7
Response
In the immediate aftermath of the suit, Sephora did not release any statements on the pending litigation. In its answers to Plaintiff's complaint filed in January 2019, Sephora denied all claims by 1) pointing to its publicly available privacy statement, and 2) denying taking, collecting, using, storing, or disseminating Plaintiff's biometrics. Specifically, Sephora claimed that by using its mobile application and/or website, users agree to accept Sephora's terms of service, which release Sephora from liability. This included the Virtual Artist Kiosk, which required users to sign and accept Sephora's terms of service before prompting them to provide any contact information.
Sephora and the Plaintiffs (once class action status was granted) reached a settlement agreement in December 2020, which allowed anyone who had interacted with the Virtual Artist Kiosk in a Sephora store in IL since July 2018 to file a claim for a share of the settlement, with claimants eligible to receive up to $500 each. As of April 2020, 10,500 notices were sent to potential claimants. Hundreds of claims were filed by potential class members, which could result in just under $500,000 in total claims. Sephora has never officially commented on the suit, despite some media coverage in IL.8
ModiFace, on the other hand, successfully moved to dismiss the claim against it for lack of personal jurisdiction in June 2020. The Court reasoned that ModiFace did not purposefully avail itself of the privilege of conducting business in IL. The Court cited a declaration of ModiFace's CEO, which stated that ModiFace never had property, employees, or operations in Illinois and is not registered to do business there. He further stated that ModiFace's business is focused on selling augmented-reality products to beauty brand companies and does not involve marketing, sales, or commercial activity in Illinois. ModiFace claimed that its business relationship with Sephora did not arise in Illinois, and that Sephora never discussed the use of ModiFace technology in Illinois: there was no agreement in place regarding Illinois and no transmission of biometric information between the companies. Overall, the Court found that Sephora's use of ModiFace technology in Illinois did not establish minimum contacts.
ModiFace, meanwhile, had been acquired by L'Oréal in March 2018, months before the lawsuit was filed. L'Oréal is the world's biggest cosmetics company and, unlike Sephora, designs, manufactures, and sells its own products. L'Oréal and ModiFace worked together for about seven years before the acquisition. Like Sephora, L'Oréal ramped up investment in virtual try-on technology in an effort to decrease customer barriers to purchase. Since the acquisition, L'Oréal's Global Chief Digital Officer has said that conversion rates from search to purchase have tripled. At the time of the ModiFace acquisition, L'Oréal spent about 38% of its media budget on digital campaigns like virtual try-on. It is estimated that this investment has grown significantly as L'Oréal strategizes around minimizing friction in the customer experience. More recently, Estee Lauder was also sued for alleged violation of BIPA for collecting biometric data via virtual "try-on" of make-up through a technology similar to ModiFace's.9
Lessons Learned for the Future: CCPA and Beyond
Sephora's data privacy legal woes have ramped up significantly since the BIPA lawsuit in 2018. On August 24, 2022, California Attorney General Rob Bonta announced a settlement resolving allegations that the company violated the California Consumer Privacy Act (CCPA). The Attorney General alleged that Sephora failed to disclose to consumers that it was selling their personal information, failed to process user requests to opt out of that sale, and did not remedy these violations within 30 days after being alerted by the Attorney General, as permitted under the CCPA. The terms of the settlement required Sephora to pay $1.2 million in penalties and comply with injunctive terms addressing its violations, including updating its privacy policy to affirmatively represent that it sells data, providing mechanisms for consumers to opt out of the sale of their personal information, conforming its service provider agreements to the CCPA's requirements, and providing status reports to the Attorney General documenting its progress.
Sephora's settlement is the first official non-breach-related settlement under the CCPA. Many legal analysts argue that the California Office of the Attorney General (OAG) intends to be more aggressive in enforcing the CCPA, a posture it signaled emphatically through the Sephora settlement. Specifically, the OAG is expected to focus on businesses that share or sell information to third parties for targeted advertising.10 Importantly, under the California Privacy Rights Act (CPRA), which goes into effect on January 1, 2023, companies will no longer benefit from a 30-day notice period to remedy alleged violations.
As with the BIPA lawsuit, Sephora did not make any official statement on the CCPA settlement. Sephora's privacy policy is regularly updated, however, signaling at least minimal attention to the requirements set forth by the CCPA.11 In 2022, Sephora saw revenues grow 30%, with a significant rebound in in-store activity, indicating that Sephora customers nationwide have not been deterred by its privacy litigation woes. As Sephora continues to innovate its in-store experience, it must keep a watchful eye on state-specific regulation as Colorado and Virginia launch their own data privacy laws in the near future.
Due to a shortage of mental health support, AI-enabled chatbots offer a promising way to help children and teenagers address mental health issues. Users often view chatbots as mirroring human therapists. However, unlike their human therapist counterparts, mental health chatbots are not obligated to report suspected child abuse. Legal obligations could potentially shift, requiring technology companies that offer these chatbots to adhere to mandated reporting laws.
Many teenagers and children experience mental health difficulties. These difficulties include trouble coping with stress at school, depression, and anxiety. Although there is a consensus on the harms of mental illness, there is a shortage of care. An estimated 4 million children and teenagers do not receive necessary mental health treatment and psychiatric care. Despite the high demand for mental health support, there are only about 8,300 child psychiatrists in the United States. Mental health professionals are overburdened and unable to provide enough support. These gaps in necessary mental health care provide an opportunity for technology to intervene and support the mental health of minors.
AI-enabled chatbots offer mental health services to minors and may help alleviate deficiencies in health care. Technology companies design AI-enabled mental health chatbots to facilitate realistic text-based dialogue with users and offer support and guidance. Over forty different kinds of mental health chatbots exist and users can download them from app stores on mobile devices. Proponents of mental health chatbots contend the chatbots are effective, easy to use, accessible, and inexpensive. Research suggests some individuals prefer working with chatbots instead of a human therapist as they feel less stigma asking for help from a robot. There are numerous other studies assessing the effectiveness of chatbots for mental health. Although the results are positive for many studies, the research concerning the usefulness of mental health chatbots is in its nascent stages.
Mental health chatbots are simple to use for children and teenagers and are easily accessible. Unlike mental health professionals who may have limited availability for patients, mental health chatbots are available to interact with users at all times. Mental health chatbots are especially beneficial for younger individuals who are familiar with texting and interacting with mobile applications. Due to the underlying technology, chatbots are also scalable and able to reach millions of individuals. Finally, most mental health chatbots are inexpensive or free. The affordability of chatbots is important as one of the most significant barriers to accessing mental health support is its cost. Although there are many benefits of mental health chatbots, most supporters agree that AI tools should be part of a holistic approach to addressing mental health.
Critics of mental health chatbots point to the limited research on their effectiveness and to instances of harmful responses by chatbots. Research indicates that users sometimes rely on chatbots in times of mental health crises. In rare cases, some users received harmful responses to their requests for support. Critics are also concerned about the impact on teenagers who have never tried traditional therapy. The worry is that teenagers may disregard mental health treatment entirely if they do not find chatbots to be helpful. It is likely too early to know whether the benefits of mental health chatbots outweigh the risks raised by critics. However, mental health chatbots offer a promising way to ease the shortage of mental health professionals.
AI is a critical enabler of growth across the world and is developing at a fast pace. As a result of this growth, the United States is prioritizing safeguards against potentially harmful impacts of AI, such as privacy concerns, job displacement, and a lack of transparency. Currently, the law does not require technology companies offering mental health chatbots to report suspected child abuse. However, the public policy reasons underlying mandatory reporting laws, in combination with the humanization of mental health chatbots, indicate the law may come to require companies to report child abuse.
Mandatory reporting laws help protect children from further harm, save other children from abuse, and increase safety in communities. In the United States, states impose mandatory reporting requirements on members of specific occupations or organizations. The types of occupations that states require to report child abuse are those whose members often interact with minors, including child therapists. Technology companies that produce AI-enabled mental health chatbots are not characterized as mandated reporters. However, like teachers and mental health counselors, these companies are also in a unique position to detect child abuse.
Mandatory reporting is a responsibility generally associated with humans. The law does not require companies that design AI-enabled mental health chatbots to report suspected child abuse. However, research suggests that users may develop connections to chatbots in a similar way that they do with humans. Often, companies creating mental health chatbot applications take steps to make the chatbots look and feel similar to a human therapist. The more that users consider AI mental health chatbots to be a person, the more effective the chatbot is likely to be. Consequently, technology companies invest in designing chatbots to be as human-like as possible. As users begin to view mental health chatbots as possessing human characteristics, mandatory reporting laws may adapt and require technology companies to report suspected child abuse.
The pandemic compounded the tendency for users to connect with mental health chatbots in a human-like way. Research by the University of California illustrates that college students started to bond with chatbots to cope with social isolation during the pandemic. The research found that many mental health chatbots communicated like humans which caused a social reaction in the users. Since many people were unable to socialize outside of their homes or access in-person therapy, some users created bonds with chatbots. The research included statements by users about trusting the chatbots and not feeling judged, characteristics commonly associated with humans.
As chatbot technology becomes more effective and human-like, society may expect that legal duties will extend to mental health chatbots. The growth in popularity of these chatbots is motivating researchers to examine the impacts of chatbots on humans. When the research on personal relationships between chatbots and users becomes more robust, the law will likely adapt. In doing so, the law may recognize the human-like connection between users and chatbots and require technology companies to abide by the legal duties typically required of human therapists.
While the law may evolve and require technology companies to comply with the duty to report suspected child abuse, several obstacles and open questions exist. Requiring technology companies that offer AI-enabled mental health chatbots to report suspected child abuse may overburden the child welfare system. Mandatory reporting laws include a penalty for failure to report, which has led to unfounded reporting. In 2021, mandatory reporting laws prompted individuals to file over 3 million reports in the United States. However, Child Protective Services substantiated fewer than one-third of the filed reports, indicating that the majority were unfounded. False reports can be disturbing for the families and children involved.
Many states have expanded the list of individuals who are mandated reporters, leading to unintended consequences. Some states extended mandatory reporting laws to include drug counselors, technicians, and clergy. Requiring companies offering mental health chatbots to follow mandatory reporting laws would be another expansion and may carry unintended consequences. For example, companies may over-report suspected child abuse out of fear of potential liability. Large volumes of unsubstantiated reports could become unmanageable and distract Child Protective Services from investigating more pressing child abuse cases.
Another open question with which the legal system will have to contend is how to address confidentiality concerns. Users of mental health chatbots may worry that a mandatory disclosure requirement would jeopardize their data. Additionally, users may feel the chatbots are not trustworthy and that their statements could be misinterpreted. Such fears may deter those who need help from using mental health chatbots. A decline in trust could have the inadvertent effect of disincentivizing individuals from adopting any AI technology altogether. The legal system will need to consider how to manage confidentiality and unfounded-report concerns. Nevertheless, as chatbots become more human-like, the law may require technology companies to comply with duties previously associated with humans.
AI-enabled mental health chatbots provide a promising new avenue for minors to access support for their mental health challenges. Although these chatbots offer numerous potential benefits, the legal framework has not kept pace with their rapid development. While the future remains uncertain, it is possible that legal requirements may necessitate technology companies to adhere to mandatory reporting laws related to child abuse. Consequently, technology companies should contemplate measures to ensure compliance.
The monopolization of intellectual property (IP) in the video game industry has increasingly attracted attention as many antitrust-like issues have arisen. These issues are, however, not the prototypical issues that antitrust law is classically concerned with. Rather, supported by the United States’ IP system, video game publishers (VGPs) have appropriated video game IP to construct legal monopolies that exert control over individuals involved in the esports (and greater video game) industry. Particularly, this control disproportionately affects the employees of VGPs who provide live commentary, analysis, and play-by-play coverage of esports competitions (hereinafter “esports casters”) by impeding their ability to freely utilize and monetize their skills and knowledge. This restriction further hampers their capacity to adapt to evolving market conditions and secure stable employment in the field. Moreover, it creates barriers to entry for aspiring casters, thereby diminishing the industry’s diversity of voices and perspectives.
The Pieces That Make Up the Esports Industry
First, it is important to understand the structure and landscape of esports. The esports industry can be, generally, equated to any other “traditional” sport. Like in any “traditional” sport, there are players, coaches, team/player support staffs, casters & analysts, and production crews. Excluded from this list are game developers. Game developers are an integral part of the esports industry but have no comparative equivalent in “traditional” sports. From a functional perspective, the esports industry also operates incredibly similarly to “traditional” sports. There are regional leagues, seasons, playoffs, championships, team franchising & partnerships, rules (and referees), salary parameters, player trading & player contracts, practice squads, scrimmages, and sports betting.
So why are esports casters disproportionately affected if esports and “traditional” sports are structured effectively in the same manner? To answer this, it is important to understand the role legal IP monopolies play in the esports industry.
The Monopoly Power of IP in the Context of Video Games
Unlike “traditional” sports, esports exists in a unique landscape where the IP of the entire sport is legally controlled by a single entity, the VGP. As an exploratory example, let’s analogize American football to esports. No entity entirely owns the game of football, but individual entities entirely own video games (like Riot Games for League of Legends and Valorant, Valve for Counter-Strike, and Activision Blizzard for Call of Duty). Moreover, the National Football League (NFL) lacks both the legal and physical capacity to stop or otherwise place conditions on every group that would like to play football, but Riot Games, Valve, and Activision Blizzard are legally within their rights to prevent players or entire leagues from playing their video games. This inherent quality of the esports industry functions as a legal monopoly and makes the industry unlike any other broadcasted competition.
The Legal IP Monopoly is NOT a Competitive Monopoly
When people think of monopolies, they typically think of competitive monopolies. Simply put, competitive monopolies are created when a single seller or producer assumes a dominant position in a particular industry. However, the monopolies VGPs have created are limited legal monopolies and thus are not the monopolies that antitrust law is concerned with.
The precise distinction between the two monopolies (and their associated governing legal frameworks) is as follows. Antitrust law promotes market structures that encourage initial innovation through a competitive market, while IP law encourages initial innovation with the asterisk of limited exclusivity. More specifically, antitrust law enables subsequent innovation by protecting competitive opportunities beyond the scope of an exclusive IP right. When competitive opportunities are not present (i.e., a competitive monopoly has been created), antitrust law steps in to reestablish competition. Conversely, IP law enables subsequent innovation by requiring disclosure of the initial innovation. The nature of these disclosures limits the scope of the control possessed by any particular holder of an IP right. What this means is that IP law provides narrower, but legal, monopoly power over a particular innovation.
While the above discussion is patent-focused, it is equally applicable to businesses that rely more heavily on copyright and trademark. Nevertheless, VGPs rely on all forms of IP to construct a comprehensive IP wall around their respective video game titles.
Are VGPs’ Legal Monopolies Harmful?
Because this legal monopoly is not one that antitrust law is concerned with, there is no law or governing body investigating or overseeing the monopoly-like practices and problems created by VGPs. As a result, these issues have been deemed intrinsic characteristics of the esports industry, limiting the ways in which individuals can seek remedies. While there have been isolated wins related to equal pay and discrimination, there has yet to be any comprehensive attention given to the power imbalance in this industry. This leads to issues of job mobility and skill transferability for esports casters.
Why Only Esports Casters?
Game developers and production crews aren't as affected by the job mobility and skill transferability issues simply because the skills required for their respective roles are easily transferable to other industries. A game developer who works on character design can move to the film industry and work on CGI. Similarly, a camera operator for North American Valorant competitions can go be a camera operator for Family Feud. For players, their employment is controlled by the partnered/franchised teams and not the VGPs. As in "traditional" sports, players are free to move within and between the leagues as their contracts permit.
Esports casters are different, though. In professional football, casters, while well versed in the game of football, have substantial opportunities for job mobility if a particular situation is unfavorable. The reason relates back to the NFL's inability to monopolize the sport of football, meaning other football leagues can exist outside of the NFL. Not only are football casters capable of transferring their skills within the NFL (such as Al Michaels' move from NBC to Amazon), but they can also move to other leagues (like NCAA Football, Indoor Football, and the XFL). Comparatively, this is simply not an option for esports casters. Because VGPs are able to create a legal IP monopoly around a particular game (making them the single source and sole employer for esports casters of that game), they can completely control the leagues and broadcasts associated with their games. The economics of this allow VGPs to underpay their esports casters because (1) there is no other league that covers the same game they cast, and (2) while possible, transitioning from casting one video game to another is not easy (think of a football commentator becoming a basketball commentator). As a result, VGPs can create an exploitative employment structure in which esports casters are not treated in accordance with the value they provide. This leads to barriers to entry for new casters, a lack of diversity in the industry, and challenges for existing casters to adapt to changing market conditions.
Possible Solutions
Esports casters’ ability to monetize their skills and knowledge is often limited by the exclusive use of IP rights by VGPs. To resolve this, a careful balance must be struck between IP rights and the livelihoods of individuals in the esports industry, including casters. One possible solution could be to consider a union-like structure that advocates for casters. This solution would give esports casters the opportunity to consolidate pay information, standardize contractual obligations, and voice their concerns about the structure of the esports industry. While the implementation of such an organization would be challenging considering the novel, underdeveloped, or constantly fluctuating nature of the industry, there are already many advocates in esports that are pushing for better compensation, inclusivity, and benefits for casters and analysts.
Even though progress is slow, the industry is improving. Hopefully, with these efforts, the esports industry can become fairer and more inclusive than "traditional" sports. Nonetheless, the only certainty moving forward is that the legal IP monopolies surrounding video games are not going anywhere.
Companies Face Massive Biometric Information Privacy Act (BIPA) Allegations with Virtual Try-On Technology
Virtual try-on technology ("VTOT") allows consumers to use augmented reality ("AR") to see what a retailer's product may look like on their body. By granting the retailer's website access to their device's camera, consumers can shop and try on products from the comfort of their home without ever stepping into a brick-and-mortar storefront. Retailers provide customers the option to virtually try on consumer goods through their website, app, or social media filters on Instagram, Snapchat, and TikTok. While virtual try-on emerged in the early 2010s, the COVID-19 pandemic spurred its growth and adoption amongst consumers. Retailers, however, have seen a recent uptick in lawsuits over the biometric privacy concerns raised by virtual try-on technology, especially in Illinois. In 2008, Illinois passed the Biometric Information Privacy Act ("BIPA"), one of the strongest and most comprehensive biometric privacy laws in the country.
This blog post will explore current lawsuits in Illinois and the Seventh Circuit that could impact how retailers and consumers use virtual try-on technology, as well as the privacy and risk implications of the technology for both groups.
Background on Virtual Try-On
From trying on eyeglasses to shoes, virtual try-on technology offers consumers an immersive and fun way to shop and visualize themselves with a product without ever leaving their homes. Fashion brands often use virtual try-on technology to enhance consumer experiences and find that it may positively affect purchase decisions and sales. By letting customers shop from home, brands may also see fewer returns, since customers are more likely to pick the correct product the first time through the enhanced virtual try-on experience. With revenue in the AR and VR market expected to exceed $31 billion in 2023 and one-third of AR users having used the technology to shop, brands are responding to the growing demand for AR and virtual try-on.
Although the pandemic drove brands to grow their virtual try-on and AR offerings, brands had used virtual try-on for many years prior. Sephora launched an augmented reality mirror back in 2014. Maybelline allowed consumers to virtually try on nail polish. In the summer of 2019, L'Oréal Group partnered with Amazon to integrate ModiFace, the company's AR and artificial intelligence ("AI") business, into Amazon's marketplace. The pandemic only pushed brands to grow those offerings.
By mid-2021, social media and tech brands expanded their AR offerings to cash in on the increasing role that virtual try-on plays in consumers' purchase decisions. Perfect Corp., a beauty technology company, integrated with Facebook in 2021 to expand the platform's AR offerings specifically geared towards beauty. The integration allows Facebook to make it easier and cheaper for brands and advertisers to "integrate their catalogs with AR." The integration also expanded who could use Meta's platforms for AR-enhanced shopping, since any merchant who uses Perfect Corp. could take advantage of Facebook's Catalog, AR-enabled advertisements, and Instagram Shops. Perfect Corp.'s CEO Alice Chang wrote in the announcement:
“There’s no denying the impact that social media platforms like Instagram continue to play in the consumer discovery and shopping journey. This integration underlines the importance of a streamlined beauty shopping experience, with interactive AR beauty tech proven to drive conversion and enhance the overall consumer experience.”
That same week, Facebook, now Meta, announced its integration with Perfect Corp. and later announced plans to integrate ModiFace into its new advertising formats. In addition, Meta's Instagram also partnered with L'Oréal's ModiFace. With the swipe of a button, consumers on Instagram can try on new lipsticks and then purchase the product immediately within the app. The expansion of AR features on Meta's platforms makes it seamless for consumers to shop without leaving their home or even leaving the app.
Outside of Meta, Snapchat offers consumers the chance to use various lenses and filters, including the AR shopping experiences. In 2022, Nike sponsored its own Snapchat lens to allow consumers to try on and customize their own virtual pair of Air Force 1 sneakers. Consumers could swipe through several colors and textures. Once satisfied, consumers could then select “shop now” to purchase their custom Nike Air Force 1s instantaneously.
Rising Biometric Concerns and Lawsuits
While demand for AR and virtual try-on is growing, the innovative technology does not come without major concerns. Brands like Charlotte Tilbury, Louis Vuitton, Estee Lauder, and Christian Dior have been hit with class action lawsuits in Illinois and the Seventh Circuit for allegedly violating BIPA.
According to the American Civil Liberties Union ("ACLU") of Illinois, BIPA requires that private companies: (1) inform the consumer in writing of the data they are collecting and storing, (2) inform the consumer in writing of the specific purpose and length of time that the data will be collected, stored, and used, and (3) obtain written consent from the consumer. Additionally, BIPA prohibits companies from selling or profiting from consumer biometric information. The Illinois law is considered to be one of the most stringent biometric privacy laws in the country and stands as one of the only laws of its kind "to offer consumers protection by allowing them to take a company who violates the law to court." BIPA allows consumers to recover liquidated damages of $1,000 or actual damages per violation, whichever is greater, in addition to attorneys' fees and expert witness fees.
In November 2022, an Illinois federal judge allowed a BIPA lawsuit to move forward against Estee Lauder Companies, Inc. regarding its Too Faced makeup brand. The plaintiff alleges that the company collected her facial-geometry data in violation of BIPA when she used the makeup try-on tool available on Too Faced's website. Under Illinois law, a "violation of a statutory standard of care is prima facie evidence of negligence." Kukovec v. Estee Lauder Companies, Inc., No. 22 CV 1988, 2022 WL 16744196, at *8 (N.D. Ill. Nov. 7, 2022) (citing Cuyler v. United States, 362 F.3d 949, 952 (7th Cir. 2004)). While the judge ruled that the plaintiff did not sufficiently allege recklessness or intent, he allowed the case to move forward because the plaintiff "present[ed] a story that holds together" and did more than "simply parrot the elements of a BIPA claim." The judge found it reasonable to infer that the company had to collect biometric data for the virtual try-on to work.
In February 2023, Christian Dior successfully invoked a BIPA exemption, leading to the dismissal of a class action lawsuit filed against it. Christian Dior offered virtual try-on for sunglasses on its website. According to the lead plaintiff, the luxury brand failed to obtain consent prior to capturing her biometric information for the virtual try-on offering, in violation of BIPA. The judge, however, held that BIPA's general health care exemption applied to VTOT for eyewear, including nonprescription sunglasses offered by consumer brands. BIPA exempts information captured from a "patient" in a "health care setting." Since BIPA does not define these terms, the judge referred to Merriam-Webster to define them. "Patient" was defined as "an individual awaiting or under medical care and treatment" or "the recipient of any of various personal services." The judge found that sunglasses, even nonprescription ones, are used to "protect one's eyes from the sun" and are Class I medical devices under the Food & Drug Administration's regulations. Thus, an individual using VTOT is classified as a patient "awaiting . . . medical care," since sunglasses are a medical device that protects vision and VTOT is the "online equivalent" of a brick-and-mortar store where one would purchase sunglasses.
Further, health care was defined as “efforts made to maintain or restore physical, mental, or emotional well-being especially by trained and licensed professionals.” The judge stated that she had “no trouble finding that VTOT counts as a ‘setting.’” Thus, under BIPA’s general health care exemption, consumers who purchase eyewear, including nonprescription sunglasses, using VTOT are considered to be “patients” in a “health care setting.”
Both cases show that while virtual try-on may operate similarly across companies' websites, the type of product that a brand offers consumers the opportunity to "try on" may allow the brand to benefit from an exemption. The "health care" exemption in the Christian Dior case was not the first time a company benefitted from that exemption. BIPA lawsuits can be costly for companies: TikTok settled a $92 million BIPA lawsuit in 2021 over allegations that the social media app harvested biometric face and voice prints from user-uploaded videos. Although that example does not involve virtual try-on, it illustrates how diligence and expertise with BIPA requirements can save brands from huge settlements. Companies looking to expand into the virtual try-on space should carefully consider how they will obtain explicit written consent from consumers (and satisfy other BIPA requirements, like data destruction policies and procedures) to minimize class action and litigation exposure.
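To make that last recommendation concrete, below is a minimal, hypothetical sketch of how a try-on service might gate biometric processing on BIPA's prerequisites (written notice, a disclosed purpose and retention period, and written consent) and track a data-destruction deadline. Every class, field, and function name here is an invented assumption for illustration, not any retailer's actual implementation.

```python
# Hypothetical illustration of a consent-gating pattern for the BIPA
# prerequisites described above (written notice, disclosed purpose and
# retention period, written consent). Names and fields are invented for
# this sketch and do not reflect any real retailer's system.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class BiometricConsent:
    user_id: str
    notice_text: str               # written disclosure shown to the consumer
    purpose: str                   # specific purpose of collection
    retention_days: int            # disclosed storage period
    signed_at: Optional[datetime]  # None until the consumer affirmatively agrees


def may_process_face_geometry(consent: BiometricConsent) -> bool:
    """Allow facial-geometry processing only if every disclosure was made
    and the consumer signed a written release."""
    return (
        bool(consent.notice_text)
        and bool(consent.purpose)
        and consent.retention_days > 0
        and consent.signed_at is not None
    )


def destruction_deadline(consent: BiometricConsent) -> datetime:
    """Date by which captured facial-geometry data should be destroyed."""
    assert consent.signed_at is not None, "no signed consent on record"
    return consent.signed_at + timedelta(days=consent.retention_days)


if __name__ == "__main__":
    consent = BiometricConsent(
        user_id="shopper-123",
        notice_text="We scan facial geometry to render virtual makeup.",
        purpose="Virtual try-on rendering only; not sold or shared.",
        retention_days=30,
        signed_at=None,
    )
    # Try-on is refused until the consumer affirmatively agrees in writing.
    assert not may_process_face_geometry(consent)

    consent.signed_at = datetime.utcnow()
    assert may_process_face_geometry(consent)
    print("Destroy biometric data by:", destruction_deadline(consent).date())
```

The point of the sketch is simply that notice, consent, and retention become explicit preconditions in the product's logic rather than an afterthought in a privacy policy.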