Author: Jenny Kim

INTRODUCTION

We live in an insatiable society. Across the globe, and particularly in the United States, everyone with an Instagram account knows that the “phone eats first.” Young professionals rush to happy hour to post the obligatory cocktail-cheers video before they take their first sip. On Friday nights, couples sprint to their favorite spot or the up-and-coming Mediterranean restaurant to quickly snap a picture of the “trio of spreads.” Kids and grandparents alike flock to the nearest Crumbl every Monday to share a picture of the pink box and the half-pound cookie inside. Social media has created a food frenzy. We are more obsessed with posting the picture of a meal than with eating the meal itself. While a psychologist might take a negative view of the connection between social media and food, the baker or chef behind the photogenic creation is thrilled by the way platforms such as Instagram and YouTube bring new patrons into their storefronts.

Due to the rise of social media over the past twenty years, food has become an obsession in our society. Many of us are self-proclaimed “foodies.” Historically, food has not fit neatly into the intellectual property legal scheme in the United States. Trademark, trade dress, and trade secrets are often associated with food, but we rarely see recipes or creative platings receive patent or copyright protection. Intellectual property law is not as enthralled with food as many of us are, but pairing the law with social media may create another way to protect food.

A RECIPE FOR IP PROTECTION

There are four main types of intellectual property: patents, copyrights, trade secrets, and trademarks. The utilitarian and economic perspectives are the two main theories behind intellectual property law. The utilitarian purpose of food is to be consumed. Economically, the food business in America is a trillion-dollar industry. Intellectual property law aims to promote innovation, creativity, and economic growth. All three of these goals can be found within the food industry; however, the recipe for intellectual property protection has yet to be perfected.

Patent law is designed to incentivize and promote useful creations and scientific discoveries. Patent law gives an inventor the right to exclude others from using the invention during the patent’s term of protection. To qualify for a patent, an invention must be useful, novel, nonobvious, properly disclosed, and directed to patentable subject matter. Patentable subject matter includes processes, machines, manufactures, compositions of matter, and improvements thereof. Novelty essentially requires that the invention be new. It is a technical and precise requirement that often creates the biggest issue for inventors. Novelty in the context of food “means that the recipe or food product must be new in the sense that it represents a previously unknown combination of ingredients or variation on a known recipe.” According to the U.S. Court of Customs and Patent Appeals, to claim protection in food products, “an applicant must establish a coaction or cooperative relationship between the selected ingredients which produces a new, unexpected, and useful function.” A person cannot simply add or eliminate common ingredients, or treat them in ways that differ from former practice. There are very few patents for food, but common examples include Cold Stone Creamery’s signature Strawberry Passion ice cream cake and Breyers’ Viennetta ice cream cake.

Copyright law affords protection to creative works of authorship that are original and fixed in a tangible medium. Fixation is met “when its embodiment … is sufficiently permanent or stable to permit it to be perceived, reproduced, or otherwise communicated for a period of more than transitory duration.” The Supreme Court has stated that originality requires “independent creation plus a modicum of creativity.” Copyrights do not extend to “any idea, procedure, process, system, method of operation, concept, principle, or discovery.” Food designs are typically not eligible “for copyright protection because they do not satisfy the Copyright Act’s requirement that the work be fixed in a tangible medium.” A chef does not acquire rights by being the first to develop a new style of food because the creation is seen as merely an idea, fact, or formula. Furthermore, shortly after a food’s creation, it is normally eaten, losing its tangible form. Recipes alone are rarely given copyright protection because recipes are considered statements of facts, but “recipes containing other original expression, such as commentary or artistic elements, could qualify for protection.”

Trade secrets are more favorable to the food industry. Traditionally, trade secret law has encompassed recipes. To be a trade secret, the information must be sufficiently secret that the owner derives actual or potential economic value from its not being generally known or readily ascertainable, and the owner must make reasonable efforts to maintain its secrecy. It is unlikely that food design, the shape and appearance of food, will be given trade secret protection, as “food design presents a formidable challenge to trade secret protection: once the food is displayed and distributed to consumers, its design is no longer secret.” However, certain recipes, formulas, and manufacturing and preparation processes may be protected by trade secret law. Where food and intellectual property intersect, trade secrets are probably the best-known form of IP. Examples of still-valid trade secrets include Coca-Cola’s soda formula, the original recipe for Kentucky Fried Chicken, the recipe for Twinkies, and the recipe for Krispy Kreme doughnuts.

Trademark is the type of IP protection most favorable to the food industry. Trademarks identify and distinguish the source of goods or services, and typically take the form of a word, phrase, symbol, or design. Trade dress is a type of trademark that refers to a product’s appearance, design, or packaging. A trade dress analysis considers “the total image of a product and may include features such as size, shape, color or color combinations, texture, graphics, or even particular sales techniques.”

Different types of trademarks and trade dress receive different levels of protection. For trademarks, protection depends on the kind of mark: courts determine whether the mark is inherently distinctive, descriptive, or generic. Similarly, trade dress receives different levels of protection depending on whether it consists of product packaging or product design. In the context of food, “the non-functionality of a particular design or packaging is required” for a product to receive protection as trade dress. Some examples of commonly known trademarks include Cheerios, McDonald’s stylized “M” logo, and the tagline “Life tastes better with KFC.” Food designs that have federally registered trade dress include Pepperidge Farm’s Milano Cookies, Carvel’s Fudgie the Whale ice cream cake, Hershey’s Kisses, General Mills’ Bugles, Tootsie Rolls and Tootsie Pops, and Magnolia Bakery’s cupcakes bearing its signature swirl icing.

SOCIAL MEDIA – THE LAST DEFENSE

When it comes to food, there is no recipe to follow to receive intellectual property protection, but social media can be a way for bakers, chefs, and restaurateurs to be rewarded for their creations and to sustain creativity in the food industry. Social media influences the way businesses conduct and plan their marketing strategies. Many businesses use social media to communicate with their audience and expand their consumer base. Social media allows a chef to post the week’s “Specials Menu” to their restaurant’s Instagram, and in a few seconds, anyone who follows that account can repost that menu and share it with hundreds if not thousands of people. As noted previously, this single menu would not receive IP protection because it is primarily fact-based, not secret, obvious, and likely not a signifier of the restaurant to the general public. However, the power of social media will bring hundreds of excited and hungry foodies to the business.

Social media alone cannot ensure that another chef or baker won’t reverse engineer the dish featured on the specials menu, but social media has done what IP cannot. The various social media platforms embody what the framers of the Constitution were trying to accomplish through intellectual property when they drafted Article I centuries ago: the promotion of innovation, creativity, and economic growth. The Constitution states that “Congress shall have power to … promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries….” While there is no indication that the framers ever intended food to be part of what they knew to be intellectual property, two hundred and fifty years later, it is clear that food is a mainstay in the IP world, even if it does not fit systematically into patents, copyrights, trademarks, or trade secrets. Unfortunately, the law has fallen short in addressing IP protection for the food industry, but social media has continued to fulfill the framers’ goal for intellectual property within it.

Social media allows others to connect with the satisfying creation and gives chefs the opportunity to be compensated for their work. After seeing the correlation between an Instagram post and the influx of guests, the chef will be incentivized to create more. The chef will cook up another innovative menu for next week, hoping to receive the same positive reward again. The food industry is often left out, unable to fit into the scope of IP law, but through social media, chefs and bakers can promote innovation, creativity, and economic growth at their fingertips.

Alessandra Fable is a second-year law student at Northwestern Pritzker School of Law.

In January 2022, after nearly one hundred years of copyright protection, Winnie-the-Pooh entered the public domain. This blog post will discuss copyright law’s grounding in the Constitution, the story of Winnie-the-Pooh’s copyright, and how the changing landscape of U.S. copyright law has affected this beloved story and the characters contained within it.

Congress’ Power to Enact Federal Copyright Law

Congress’ power to regulate federal copyright law derives from the Constitution. Specifically, Article I, Section 8, Clause 8 (the “Intellectual Property” Clause) grants Congress the power “[t]o promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” While this clause grants Congress general powers to govern certain aspects of intellectual property law, it does not actually supply any laws on its own. Instead, acting pursuant to this constitutional authority, Congress can write and enact federal copyright laws.

Congress enacted its first set of federal copyright laws in the late 1700s. The laws were relatively limited in scope, protecting “books, maps, and charts for only fourteen years with a renewal period of another fourteen years.” Modern copyright laws have since become more expansive, protecting a wider variety of works for longer periods of time.

The copyright laws relevant to Winnie-the-Pooh are: (1) the 1909 Copyright Act, (2) the 1976 Copyright Act, and (3) the 1998 Copyright Term Extension Act. Under the 1909 Act, an original work of authorship gained copyright protection the moment it was published; the 1976 Act later shifted this trigger to the moment a work is fixed in a tangible medium. In either case, copyright protection immediately grants the author exclusive rights to reproduce, distribute, perform, and display the work. Further, all three Acts permit authors to transfer these rights to third parties, which authors often do in exchange for royalty income. While these three copyright Acts are very similar in substance, their primary differences relate to the amount of time that authors enjoy these protections.

  1. The 1909 Act

Under this Act, works could receive protection for up to 56 years. Upon publication, a work was initially protected for 28 years, and if the copyright was renewed in its 28th year, an additional protection term of 28 years was granted.

  2. The 1976 Act

The 1976 Copyright Act made one significant change to the renewal term of works created before 1978: it gave all subsisting copyrights an additional 19 years of protection, for a total of 75 years. Additionally, this Act gave authors an opportunity to terminate any licensing agreements previously made under the 1909 Act.

  3. The 1998 Copyright Term Extension Act

The 1998 Copyright Term Extension Act added yet another 20 years to the renewal period of previously copyrighted works, which automatically applied to works “subsisting in their second term between December 31, 1976, and December 31, 1977,” and extended the maximum length of copyright protection to 95 years.

Winnie-the-Pooh’s Copyright

In 1926, Alan Alexander Milne wrote Winnie-the-Pooh, the first of several collections of short stories about a boy named Christopher Robin, his stuffed bear, Winnie-the-Pooh, and their friends in the Hundred Acre Wood. Since the book was published during the 1909 Copyright Act regime, it automatically gained copyright protection upon publication through 1954, and received an additional 28 years of protection because the copyright was renewed. The 1976 Copyright Act further extended Winnie-the-Pooh’s protection through 2001, and the 1998 Copyright Term Extension Act tacked an additional 20 years of subsequent protection. Since the 1998 Act was the final extension of copyright protection, the book’s copyright expired at the end of 2021, causing Winnie-the-Pooh and the characters contained within it to enter the public domain in January 2022.
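The arithmetic is straightforward: publication in 1926 started a 28-year initial term running through 1954; renewal added 28 years, through 1982; the 1976 Act added 19 more, through 2001; and the 1998 Act added a final 20, through 2021, for 95 years in all.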

While these copyright extensions are important, the more interesting aspect of Winnie-the-Pooh’s journey into the public domain pertains to copyright transferability. In 1930, Milne first took advantage of his ability to transfer his copyright protections by signing an agreement with Stephen Slesinger, a “television-film producer, creator of comic-book characters, and pioneer in the licensing of characters for children.” Milne granted Slesinger “exclusive merchandising and other rights on the Pooh works in the United States and Canada.” This license lasted for the entirety of Winnie-the-Pooh’s copyright, which at the time extended through 1982. Slesinger subsequently granted these rights to Walt Disney Productions in 1961, and Milne’s estate also entered into a separate agreement with Disney around the same time. The estate’s agreement with Disney gave Disney nearly all of Pooh’s remaining copyright protections, leaving Disney with nearly unrestricted access to use and develop the Winnie-the-Pooh characters into their popular cartoon versions.

The 1976 Copyright Act gave Milne’s estate an opportunity to reevaluate its agreements with Slesinger and Disney, as the Act allowed an author (or his heirs) to terminate a licensing agreement made under the 1909 Act. However, instead of terminating the licensing agreements, Milne’s estate opted to renegotiate its agreement with Disney in 1983 to receive a larger portion of royalties. Other than the royalty payment provisions, the new agreement had nearly identical terms to the old agreement, so Disney retained its nearly exclusive and unrestricted access to Winnie-the-Pooh. The licensing agreement was set to expire when the work entered the public domain, which at the time was less than 20 years away.

However, Milne’s estate in 1983 was unaware that the 1998 Copyright Term Extension Act would later grant Winnie-the-Pooh an additional 20 years of protection, further extending the duration of the licensing agreement as well as Disney’s exclusive rights to the work. Milne’s estate attempted to terminate the licensing agreement after the 1998 Act took effect, but the language of the 1998 Act only allowed for the termination of licensing agreements made before 1978. Because Disney and Milne’s estate had executed a new contract in 1983 after renegotiating its terms, the estate was unable to terminate the agreement, and Disney retained nearly exclusive rights to Winnie-the-Pooh until the book entered the public domain in 2022.

What Does This Mean for Pooh?

Now that Milne’s 1926 book has entered the public domain, “the plot, dialogue, and settings in that book are open for future creators,” along with the “appearance and traits” of any characters appearing in that book. This includes Piglet, Eeyore, Rabbit, Kanga, Roo, Owl, and Christopher Robin. Tigger, on the other hand, did not appear until 1928 in The House at Pooh Corner, so he does not enter the public domain until 2024.

While entering the public domain allows “anyone [to] adapt the 1926 book into a play, musical, film, or write a prequel or sequel,” the public does not have free rein to use many of Pooh’s modern characteristics. Any adaptations that Disney made to the character under the licensing agreements, such as giving him his signature red shirt, are still protected as derivative works. As such, Disney can still prevent the public from using its modified, well-known versions of Pooh.

The horror film Blood and Honey serves as an example of how creators can take advantage of Pooh’s entry into the public domain. The film, set to be released in 2023, “follows Pooh and Piglet as they go on a rampage after Christopher Robin abandons them for college.” While the film uses characters like Pooh and Piglet and refers to Milne’s original settings, it refrains from using Disney’s red-shirted, cartoon-like version of Pooh.

Other artists have used their depictions of Pooh to explain and poke fun at copyright’s boundaries. For example, artist Lukey McGarry recently created a comic strip where Pooh refers to Disney’s copyright and jokingly explains to Christopher Robin that, “as long as I don’t put a little red shirt on, I can do as I like.”

How Will Disney Respond?

Though both the horror film and comic strip appear to be staying within the permissible boundaries of public domain, only time will tell if these works and others like them can escape intellectual property challenges brought by Disney. On one hand, Disney might actually benefit from the widespread renewed interest in Milne’s characters and, as a result, may refrain from challenging public use of Pooh. However, given Disney’s longstanding monopoly on Milne’s works, Disney may have trouble relinquishing its control over the characters. As a result, I presume that Disney’s last attempt to retain control over Pooh and his friends is yet to come.

Elisabeth Bruckner is a second-year law student at Northwestern Pritzker School of Law.

Introduction

The Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization granted state legislatures the authority to regulate abortion. The decision quickly allowed trigger bans on the procedure to take effect in states such as Texas and Arkansas. Even before the Court’s ruling, data brokers had begun selling location data, gathered through ordinary apps, for individuals visiting abortion facilities. This data often provided details about where the individual traveled from and how long they stayed at the facility.

In the wake of Dobbs, concerns have come to light regarding the potential misuse of sensitive personal health data originating from period tracking apps. Questions have arisen concerning whether “femtech” app data can be used to identify and prosecute individuals violating abortion laws. Due to lax federal laws and regulations in the United States, the onus falls on femtech companies to immediately and proactively find ways to protect users’ sensitive health data.

What is “Femtech”?

The term “femtech” was coined in 2016 by Ida Tin, the CEO and co-founder of the period-tracking app Clue. Femtech refers to health technology directed at supporting reproductive and menstrual healthcare. The femtech industry is currently estimated to have a market size between $500 million and $1 billion. Femtech apps are widely used, with the popular period-tracking app Flo Health touting more than 200 million downloads and 48 million monthly users.

Apps like Clue, Flo Health, and Stardust allow individuals to record and track their menstrual cycles and receive personalized predictions about their next period or ovulation window. Although femtech apps collect highly sensitive health data, they are largely unregulated in the United States, and there is a growing push for a comprehensive framework to prevent the sensitive health data these apps collect from being sold or provided to third parties and law enforcement.

Current Regulatory Framework

Three federal agencies have regulatory authority over femtech apps – the Federal Trade Commission (“FTC”), the United States Food and Drug Administration (“FDA”), and the Department of Health and Human Services (“HHS”) – but their authority over femtech data privacy is limited in scope. While the FDA can clear the apps for contraceptive use, greater focus has been placed on the FTC and HHS in regulating femtech. The Health Insurance Portability and Accountability Act (HIPAA), administered by HHS, fails to protect sensitive health data from being collected and sold, and femtech apps are not covered under the Act. The FTC is currently exploring rules on harmful commercial surveillance and lax data security practices following President Joe Biden’s July 2022 executive order encouraging the FTC to “consider actions . . . to protect consumers’ privacy when seeking information about and provision of reproductive health care services.” The executive order’s definition of “reproductive healthcare services” does not, however, appear to include femtech apps. Thus, a massive gap remains in protecting the sensitive health data consumers willingly provide to femtech apps, which may sell or provide such data to law enforcement or third parties. Femtech apps generally have free and paid versions for users, which makes the issue all the more immediate.

Unease about the potential misuse of health data collected by femtech apps heightened following the FTC’s complaint against Flo. The agency alleged the app violated Section 5 of the Federal Trade Commission Act (“FTCA”) by misleading consumers about how it handled sensitive health data. While the app promised to keep such data private, the FTC found the app was instead sharing it with marketing and analytics firms, including Facebook and Google. Flo ultimately settled with the FTC, but the app refused to admit any wrongdoing.

FTC Commissioners Rohit Chopra and Rebecca Kelly Slaughter issued a joint statement following the settlement expressing their belief that, in addition to misleading consumers, the app also violated the FTC’s Health Breach Notification Rule (“Rule”), which requires “vendors of unsecured health information . . . to notify users and the FTC if there has been an unauthorized disclosure.” The FTC declined to apply the Rule against Flo, as such enforcement would have been “novel.” Such disclosures would help users navigate the post-Dobbs digital landscape, especially in light of news reports that law enforcement in certain states has begun to issue search warrants and subpoenas in abortion cases.

There is additional concern that location data linked to femtech apps could fall into the hands of data brokers. The FTC recently charged Kochava, a data brokerage firm, with unfair trade practices under Section 5 of the FTCA for selling consumers’ precise geolocation data at abortion clinics. While Kochava’s data is not linked to femtech, in light of the FTC’s settlement with Flo, the sale of sensitive reproductive health data from femtech apps is not out of the realm of possibility. Despite the FTC’s announcement that it is exploring new rules on commercial surveillance and lax data security, experts have expressed concern over whether such rulemaking is best done through the FTC or Congress, because the FTC’s rules are “typically more changeable than a law passed by Congress.”

As noted, most femtech apps are not covered under HIPAA, nor are they required to comply with it. HIPAA encompasses three main rules under Title II: the Security Rule, the Privacy Rule, and the Breach Notification Rule. HIPAA is not a privacy bill, but it has grown to “provide expansive privacy protections for [protected health information] (“PHI”).” Due to the narrow definition of covered entity, HIPAA’s current structure provides little protection for femtech app users even though these apps collect health data that is “individually identifiable.”

Even momentum to amend HIPAA so that femtech falls within the scope of covered entities may fall short, since HIPAA’s Privacy Rule permits covered entities to disclose PHI for law enforcement purposes through a subpoena or court-ordered warrant. While the Rule does not require covered entities to disclose PHI, this permission could prove troublesome in states hostile to abortion. Even if HIPAA’s definition of covered entities were expanded, it would still be up to each company to decide whether to disclose PHI to law enforcement. Some femtech companies, though, may be more willing to protect user data and have already begun to do so.

Future Outlook and What Apps Are Doing Post-Dobbs

In September 2022, Flo announced in an email to users that it was moving its data controller from the United States to the United Kingdom. The company wrote that this change meant their “data is handled subject to the UK Data Protection Act and the [General Data Protection Regulation].” Their privacy policy makes it clear that, despite this change, personal data collected is transferred and processed in the United States where it is governed by United States law. While Flo does not sell identifiable user health data to third parties, the company’s privacy policy states it may still share user’s personal data “in response to subpoenas, court orders or legal processes . . . .” While the GDPR is one of the strongest international data privacy laws, it still does not provide United States users with much protection.

In the same update, Flo introduced “anonymous mode,” which lets users access the app without providing their name, email, or any technical identifiers. Flo said this decision was made “in an effort to further protect sensitive reproductive health information in a post-Roe America.” The FTC, however, has stated that claims of anonymized data are often deceptive and that such data is easily traceable. Users may still be at risk of having their sensitive health data handed over to law enforcement. Further, research shows femtech apps often have significant shortcomings in making their privacy policies easy to read, and users are often unaware of what their consent means.

While femtech has the potential to provide much-needed attention to a group often under-researched and underrepresented in medicine, enhancing current data privacy standards should be at the forefront for developers, legislators, and regulators. Although femtech companies may be incentivized to sell sensitive health data, their resources may be better spent lobbying for the passage of legislation like the American Data Privacy and Protection Act (“ADPPA”) and the My Body, My Data Act; otherwise, the lack of data privacy measures may turn users away from femtech altogether. While no current reports show that menstruating individuals are turning away from femtech apps, it may be too soon to tell the effects post-Dobbs.

The ADPPA is a bipartisan bill that would be the “first comprehensive information privacy legislation” and would charge the FTC with administering the Act. The ADPPA would regulate “sensitive covered data,” including “any information that describes or reveals the past, present, or future physical health, mental health, disability, diagnosis, or healthcare condition or treatment of an individual” as well as “precise geolocation information.” The ADPPA’s scope would extend beyond covered providers as defined by HIPAA and would encompass femtech apps. The ADPPA would reduce the amount of data from commercial sources available to law enforcement and give consumers more rights to control their data. The Act, however, is not perfect, and some legislators have argued that it would make it more difficult for individuals to bring claims for privacy violations. While it is unlikely that Congress will pass or consider the ADPPA before it convenes in January 2023, the bill marks a start to long-awaited federal privacy law discussions.

On the state level, California moved quickly to enact two bills strengthening privacy protections for individuals seeking abortions, including by prohibiting cooperation with out-of-state law enforcement regardless of whether the individual is a California resident. Although California is working to become an abortion safe haven, abortion access is costly, and the individuals most impacted by the Supreme Court’s decision will likely be unable to fund trips to the state to take advantage of its strong privacy laws.

As menstruating individuals continue to navigate the post-Dobbs landscape, femtech companies should be transparent with consumers about how their reproductive health data is collected and how it may be shared, especially when it comes to a growing healthcare service that individuals are exploring online: abortion pills.

Angela Petkovic is a second-year law student at Northwestern Pritzker School of Law.

In August 2020, Marlene Stollings, the head coach of the Texas Tech Women’s Basketball Team, allegedly forced her players to wear heart rate monitors during practices and games. Stollings would subsequently view the player data and reprimand each player who did not achieve her target heart rate. It could be argued that Stollings was simply pushing her players to perform better; however, former player Erin DeGrate described Stollings’ use of the data as a “torture mechanism.” This is just one reported example of how athletic programs use athlete data collected from wearable technology to the student-athlete’s detriment.

As of 2021, the market for wearable devices in athletics has a $79.94 billion valuation and is expected to grow to $212.67 billion by 2029. With major market competitors such as Nike, Adidas, Under Armour, Apple, and Alphabet, Inc., the expected growth comes as no surprise. Some wearable technology is worn by everyday consumers to simply track how many calories they have burned in a day or whether they met their desired exercise goals. Professional and college athletes, on the other hand, use wearable technology to track health and activity data to better understand their bodies and gain a competitive edge. While professional athletes can negotiate which types of technology they wear and how the technology is used through their league’s respective collective bargaining agreement, collegiate athletes do not benefit from these negotiation powers. Universities ultimately possess a sort of “constructive authority” to determine what kind of technology students wear, what data is collected, and how that data is used, without considering the student-athlete’s level of comfort. If a student-athlete chooses to opt out of wearable technology usage, they may lose playing time or be kicked off the team.

Studies show that collecting athlete biometric data has a positive effect on a player’s success and helps reduce the risk of injury. For instance, professional leagues utilize wearables to create heat maps that analyze an athlete’s decision-making abilities. The Florida State Seminoles basketball program also routinely uses wearables to track and monitor early signs of soft tissue damage, which helped reduce the team’s overall injury rate by 88%. However, there are significant trade-offs, including the invasion of athletes’ privacy and possible misuse of the data.

Section I of this article will examine the different types of information collected from athletes and how that information is collected. Section II will discuss a college athlete’s right to privacy under state biometric laws. Section III will discuss how data privacy laws are changing with respect to collecting athlete biometric data. Finally, Section IV will discuss possible solutions for protecting player biometric data.

I. What Data is Collected & How?

Many people around the country use smartwatch technology such as Fitbits, Apple Watches, or Samsung Galaxy Watches to track their everyday lifestyles. Intending to maintain a healthy lifestyle, people usually allow these devices to monitor the number of steps they take throughout the day, how many calories they burn, the variance of their heart rate, or even their sleep schedule. On the surface, there is nothing inherently problematic about this data collection; however, biometric data collected on college athletes is much more intrusive. Athletic programs are beginning to enter into contractual relationships with big tech companies to provide wearable technology for their athletes. For example, Rutgers University’s football program partnered with Oura to provide wearable rings for its athletes. The types of data these devices collect include blood oxygenation levels, glucose, gait, blood pressure, body temperature, body fatigue, muscle strain, and even brain activity. While many college athletes voluntarily rely on wearable technology to develop a competitive edge, some collegiate programs now mandate that students wear the technology so the athletic program can collect the data. Collegiate athletes do not have the benefit of negotiations or the privileges of a collective bargaining agreement, but they do sign a national letter of intent that requires a waiver of certain rights in order to play for the university. Although college athletes have little to no bargaining power, they should be given the chance to negotiate the national letter of intent to address biometric data privacy, because it is ultimately their bodies producing the data.

II. Biometric Privacy Laws

Currently, no federal privacy laws on point protect the collection of student-athlete biometric data. Nonetheless, some states have enacted biometric privacy statutes to deal with the issue. Illinois, for example, which houses thirteen NCAA Division I athletic programs, enacted the Biometric Information Privacy Act (BIPA) in 2008. BIPA creates standards for how companies in Illinois must handle biometric data. Specifically, BIPA prohibits private companies from collecting biometric data unless the company (1) informs the individual in writing that their biometric data is being collected or stored, (2) informs the individual in writing why the data is being collected and for how long, and (3) receives a written release from the individual. This is a step in the right direction in protecting athletes’ privacy, since the statute’s language implies athletes would have to provide informed consent before their biometric data is collected. However, BIPA does not apply to universities and their student-athletes, since universities fall under the Act’s Section 25(c) exemption for institutions. Five Illinois courts, including most recently in Powell v. DePaul University, have explained that the 25(c) exemption extends to “institutions of higher education that are significantly engaged in financial activities such as making or administering student loans.”

So, although Illinois has been praised as one of the first states to address the emerging use of biometric data by private companies, its statute does not protect collegiate athletes who are “voluntarily” opting into the wearable technology procedures set by their teams.

III. Data Collection Laws are Changing

While BIPA does not protect collegiate athletes, other states have enacted privacy laws that may. In 2017, Washington followed in Illinois’ footsteps by enacting its own biometric privacy law, substantively similar to BIPA but with an expanded definition of what constitutes “biometric data.” Specifically, the law defines biometric identifiers as “data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises or other unique biological patterns or characteristics that are used to identify a specific individual.” By including the phrases “data generated by automatic measurements of an individual’s biological characteristics” and “other unique biological patterns or characteristics that are used to identify a specific individual,” the Washington law may encompass the complex health data collected from student-athletes. The statute’s language is broad and thus likely covers an athlete’s biometric data, because such data is unique to a certain individual and could be used as a characteristic to identify that individual.

IV. Possible Solutions to Protect Player Biometric Data

Overall, it is hard to believe that the collection of biometric data on student-athletes will see increased restrictions any time soon. There is too much on the line for college athletic programs to stop collecting biometric data, since programs want to do whatever it takes to gain a competitive edge. Nonetheless, it would be possible to restrict who has access to athletes’ biometric data. In 2016, Nike and the University of Michigan signed an agreement worth $170 million under which Nike would provide Michigan athletes with apparel and, in return, Michigan would allow Nike to obtain personal data from its athletes through the use of wearable technology. The contract hardly protected the University’s student-athletes and was executed in secrecy; its details were revealed only through a Freedom of Information Act request. Since the University negotiated the use of the student-athletes’ biometric data on their behalf, it can likely be assumed that the University owns the data. Therefore, athletes should push for negotiable scholarship terms allowing them to restrict access to their biometric data so that only the athletic program’s medical professionals may obtain it.

One would think that HIPAA protects this information from the outset. Yet there is a “general consensus” that HIPAA does not apply to information collected by wearables since (a) “wearable technology companies are not considered ‘covered entities’, (b) athletes consent to these companies having access to their information, or (c) an employment exemption applies.” Allowing student-athletes to restrict access before their college careers start would likely blunt the pressure from coaches to consent to data collection. Further, it would show that athletes do not consent to companies having access to their information, which could trigger HIPAA. It would also place the information in the hands of a medical professional, keeping it privileged, while still allowing the athlete to analyze the data with that professional to gain the competitive edge biometric data provides.

Anthony Vitucci is a third-year law student at Northwestern Pritzker School of Law.

Introduction

News headlines about facial recognition technology primarily focus on the government’s use and misuse of the technology. Likewise, technology companies and legislators frequently advocate against the government’s use of facial recognition tools to conduct mass surveillance or generate leads in investigations. For example, following widespread claims of the technology’s racial bias, Amazon, IBM, and Microsoft announced that they would stop selling facial recognition tools to law enforcement agencies. And following the arrest of an innocent black man who was falsely identified by facial recognition, major cities like San Francisco and Boston banned law enforcement from using the technology.

However, as industry commentators focus on the government’s use of facial recognition tools, private businesses in the U.S. regularly deploy facial recognition technology to secretly surveil their customers. Companies rely on the technology to gather information about customers’ identities and demographics to tailor their marketing strategies, monitor customers within stores, or sell the information to third parties. Since there are no federal regulations governing the technology, commercial uses of facial recognition technology remain relatively unchecked, even as companies invade their customers’ privacy rights without any warning.

How Does Facial Recognition Technology Work?

Based on photos or still images, facial recognition technology scans, maps, and analyzes the geometry of a person’s face to verify their identity or collect information about their behavior. When mapping a face, the technology creates a mathematical formula ­— called a facial signature — based on the person’s distinct facial features, such as the distance between their eyes. Facial recognition systems can create and store facial signatures for each scanned image containing a face. When a user uploads a new photo, the system cross-references the generated facial signature with existing ones in the database and can verify the person’s identity with a matched signature.
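For readers curious what that cross-referencing step looks like in practice, the short sketch below shows one simplified way a system might compare facial signatures, treating each signature as a numeric vector and scoring similarity between vectors. It is a minimal illustration only: the vectors, customer names, and 0.95 threshold are invented for the example, and commercial systems generate their signatures with proprietary face-embedding models.

    import math

    def cosine_similarity(a, b):
        # Measure how closely two facial signatures point in the same direction.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Hypothetical database of stored signatures; real signatures come from
    # a face-embedding model, not hand-picked numbers like these.
    database = {
        "customer_001": [0.12, 0.85, 0.33, 0.47],
        "customer_002": [0.91, 0.02, 0.55, 0.28],
    }

    def match_signature(new_signature, threshold=0.95):
        # Cross-reference a new signature against every stored one and
        # report the best match only if it clears the similarity threshold.
        best_id, best_score = None, 0.0
        for identity, stored in database.items():
            score = cosine_similarity(new_signature, stored)
            if score > best_score:
                best_id, best_score = identity, score
        return best_id if best_score >= threshold else None

    print(match_signature([0.11, 0.86, 0.34, 0.46]))  # prints "customer_001"

The difference in a real deployment is mostly one of scale: a Clearview-style service runs this same comparison against billions of stored signatures rather than two.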

Businesses have created databases of facial signatures to identify customers of interest in future video footage. In addition, businesses can use facial recognition software from companies like Clearview AI, which cross-references an uploaded photo against billions of public images to verify a person’s identity. Clearview AI is known to offer free trials of its software, luring businesses and rogue employees into using the technology. With such easy access to facial recognition software, private use of the technology has proliferated, hardly slowed by regulatory barriers.

Commercial Uses of Facial Recognition Technology

No matter the industry, facial recognition can help businesses glean more information about their customers, make informed business decisions, and increase their revenues. Shopping malls and mega-stores like Macy’s, Rite-Aid, Apple, and Walmart have used facial recognition to identify shoplifters, target loyal customers, and track customers’ reactions within the store. Amazon has sold facial recognition technology that assesses customers’ faces to discover whether they are attentive or indifferent to certain displays. While customers are surely aware these mega-stores have security cameras, they are likely unaware these stores may know their name, home address, how many times they’ve frequented the location, and whether they are happy with their in-store experience. Outside of retail stores, in cities like Miami, thousands of Uber and Lyft drivers have digital tablets in their backseats that use facial recognition technology to assess a rider’s age, gender, and demographics, in order to display ads tailored to the rider’s perceived characteristics.

In states without biometric privacy laws, any citizen who shops at a mall or grocery store, or attends a concert or sporting event, will likely be subjected to facial recognition without suspecting it. Additionally, facial recognition tools can identify even an individual who rarely shows their face in public. Clearview AI created a facial recognition database by scraping ten billion images from public websites. Clearview analyzed the images and developed facial signatures for nearly half the U.S. population.

As of 2020, more than 200 companies had accounts with Clearview, including professional sports leagues, casinos, fitness centers, and banks. These companies can upload a photo of an individual’s face — pulled from security footage or driver’s licenses — and cross-reference it against Clearview’s database to find a match. With limited regulation and easy access to facial recognition tools, consumers will face the technology’s adverse consequences, such as misidentifications and loss of privacy rights.

Misidentifications and Privacy Risks

The accuracy with which facial recognition technology identifies a person depends on their age, gender, and race. Research from the National Institute of Standards and Technology revealed that facial recognition systems are less accurate when identifying people of color: the algorithms are more likely to misidentify African Americans, Native Americans, and Asians than Caucasians. Researchers have also found these algorithms to be less accurate when identifying women, transgender individuals, and children.

Misidentification can carry damaging consequences to an individual’s liberty and dignity. Robert Williams, the black man who was wrongfully arrested based on a facial recognition match, was a victim of misidentification. These same misidentifications are likely occurring at private establishments, where security guards use the technology to scan for known criminals and remove purported “matches” from their stores.

In addition to misidentifications, facial recognition technology intrudes on an individual’s right to privacy. The technology allows companies to identify customers without their consent, collecting information about customers’ demographics and preferences. Furthermore, companies that store facial templates are subject to data breaches, where thousands of their customers’ faceprints could become compromised. Unlike online passwords, a stolen faceprint is indefinitely compromised — a customer cannot change their faceprint. Last year, thousands of scammers in the U.S. tried using stolen faceprints to fraudulently obtain government-assistance benefits. As facial recognition technology grows, bad actors will attempt to use stolen faceprints for financial gain.

Federal, State, and Local Regulations

There are no federal regulations curbing the private use of facial recognition technology, but Congress’s interest in regulating the technology is increasing. Legislators introduced three separate bills to regulate facial recognition technology in the past few years, yet none passed the introduction stage.

One of the bills introduced in the Senate, the Commercial Facial Recognition Privacy Act, would have required all private entities to obtain explicit consent from customers before collecting faceprint data. The bill’s consent requirement is based on the Illinois Biometric Information Privacy Act (BIPA), one of only three state-enacted biometric privacy laws.

BIPA requires businesses that use facial recognition technology to obtain consent from consumers before collecting their faceprint data. It also requires these businesses to provide information about how they protect and store the biometric data. BIPA permits individuals to sue companies that violate any requirement of the statute and offers significant statutory damages for violations. In February 2021, Facebook paid out $650 million to settle a BIPA class-action lawsuit. To date, more than 800 BIPA class action lawsuits have been filed against Illinois businesses.

Despite BIPA’s teeth, businesses can freely use facial recognition in almost every other state. Texas and Washington are the only other states with biometric privacy laws that regulate commercial use of the technology. Yet, neither state permits citizens to sue companies for violating the statute, meaning there is much less pressure to comply. Enforcement lies with each state’s attorney general, who can impose civil penalties on violators.

Fortunately, bans on private use are growing at the city level. In September 2020, Portland, Oregon, became the first municipality to ban private entities from using facial recognition in public places, such as shopping malls. Since then, two other cities have followed suit. New York City now requires commercial establishments to post notices when using facial recognition technology, and Baltimore banned all private sector use of the technology, even subjecting violators to criminal penalties. The recent wave of restrictions at the city level indicates that regulations may first arise where the commercial sector flourishes — in major cities.

Calls for Regulation and Future Outlook

Despite the pervasive commercial use of facial recognition technology, sixty percent of Americans are unaware that retail stores use the technology. This lack of awareness stems in part from the lack of regulation. Aside from a few states and a handful of cities, most businesses are unregulated: free to implement facial recognition tools without warning their customers. So far, calls for regulation have primarily come from companies that have developed facial recognition technology themselves: Microsoft, IBM, and Amazon. While these calls may be aimed at influencing friendly regulations, Microsoft’s President Brad Smith has called for legislation requiring stores to provide notice and obtain consent, similar to BIPA’s consent requirement. As BIPA has revealed, requiring businesses to obtain consent from consumers would at least hold businesses accountable for their facial recognition uses.

Nevertheless, some businesses may not wait for enacted legislation before shelving their facial recognition products. In November 2021, Meta announced that Facebook will no longer use facial recognition software and plans to delete the faceprint data of one billion Facebook users. Meta’s decision was motivated by concerns about the technology’s “place in our society.” This drastic move may prompt other industry leaders to start influencing the future treatment of facial recognition technology, with the hopes of clearing up the current regulatory uncertainty that threatens innovation and investment. While some may question Meta’s sincerity or true motives, its decision could foreshadow an era of much-needed regulatory action.  

Michael Willian is a third-year law student at Northwestern Pritzker School of Law.

I. Introduction

The COVID-19 pandemic has brought the issues of personal privacy and biometric data to the forefront of the American legal landscape. In an increasingly digital world, privacy laws are more important than ever. This reality is especially true in the context of remote workplaces, where employers have facilitated a digital migration through a variety of means. The platforms employers use have the propensity to violate personal privacy through the capture and storage of sensitive biometric information. In response, states across the nation are exploring solutions to the potential privacy issues inherent in the collection of biometric data. One of the first states to do so was Illinois, enacting a standalone biometric privacy statute in 2008: the Illinois Biometric Information Privacy Act (“BIPA”). Today, BIPA is more relevant than ever and should act as a statutory blueprint for states looking to protect personal privacy and biometric data amid a global pandemic. Ultimately, though, BIPA must be supplemented by federal legislation drafted in its likeness to effectively protect individuals’ privacy on a national level.

II. Background of the Biometric Information Privacy Act

To fully understand BIPA and all its implications, one must appreciate the context in which it was enacted. The Illinois legislature passed BIPA in October 2008. The Act was passed in the immediate wake of the bankruptcy of Pay By Touch, a company which operated the largest fingerprint scan system in Illinois. Pay By Touch’s pilot program was used in grocery stores and gas stations, and its bankruptcy left users unsure of what would become of their biometric data – i.e., their fingerprints. “Biometric data – a person’s unique biological traits embodied in not only fingerprints but also voice prints, retinal scans, and facial geometry – is the most sensitive data belonging to an individual.”

Understandably, private citizens in Illinois and across the country want to safeguard their sensitive biometric data. With potential issues such as identity theft and data manipulation more prevalent than ever, people have plenty of incentives to ensure their unique identifiers stay private. In response to those concerns, legislatures have passed statutes to address biometric data and personal privacy. BIPA represents one of the most stringent of such acts in the country, setting strict requirements for the management of biometric identifiers in Illinois.

BIPA defines “biometric identifier” as (1) a retina or iris scan, (2) fingerprint, (3) voiceprint, or (4) a scan of hand or face geometry. Further, “biometric information” refers to any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual. The requirements outlined in Section 15 of the Act – which addresses the retention, collection, disclosure, and destruction of biometric data – implicate a slew of potential legal issues. The section stipulates that a private entity can collect a person’s biometric data only if it first informs the subject that a biometric identifier is being collected, informs them of the specific purpose and length of term it is being collected for, and receives a written release from the subject.

Further, the Act outlines the following concerning retention of such data:

(a) A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever comes first.
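Reduced to date logic, Section 15(a)’s retention limit is simply the earlier of two deadlines. The sketch below illustrates that reading with a hypothetical function and invented dates; it approximates three years as 1,095 days and is a simplification for illustration, not compliance guidance.

    from datetime import date, timedelta

    def destruction_deadline(purpose_satisfied: date, last_interaction: date) -> date:
        # Section 15(a): destroy the data when the initial purpose is satisfied
        # or three years after the last interaction, whichever comes first.
        three_years_later = last_interaction + timedelta(days=3 * 365)
        return min(purpose_satisfied, three_years_later)

    # Example: purpose satisfied June 2023; last interaction January 2021.
    print(destruction_deadline(date(2023, 6, 1), date(2021, 1, 15)))  # 2023-06-01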

Thus, BIPA represents a statute narrowly aimed at maintaining the security of biometric data. While BIPA was relatively unknown in Illinois between 2008 and 2015, a wave of litigation has since swept through the state as employees began suing their employers. Such litigation was seemingly inevitable, as BIPA provides sweeping protection for individuals against biometric data abuse. The complexities of such issues have become clearer and potential legislative solutions to them even more important in the midst of a global pandemic.

III. Personal Privacy & Biometric Data in the COVID-19 Pandemic

The issues surrounding data privacy have become increasingly relevant in the ongoing COVID-19 pandemic, which effectively digitized the workplace as we know it. As the pandemic raged in the early months of 2020, workplaces around the globe were suddenly forced to digitally migrate to an online work environment. An inevitable result of newfound online worksites has been an increase in the utilization of biometric data. In an effort to facilitate remote work, companies have had to make work-related information accessible online. Employment attorney Eliana Theodorou outlines the ensuing issues for companies undertaking such efforts in an article entitled “COVID-19 and the Illinois Biometric Information Privacy Act.” For example, Theodorou writes, “Some of these platforms involve video recording or access by fingerprint, face scan, or retina or iris scan, which may result in the capture and storage of sensitive biometric information.” Thus, the collection and retention of biometric data has necessarily increased during the pandemic as companies made information accessible remotely when they shifted online.

Potential privacy issues accompanying the storage of biometric data will become even more difficult to navigate as companies return to physical workplaces with the pandemic still raging. Per Theodorou, “As workplaces reopen, there will likely be an uptick in the collection of biometric data as employers turn to symptom screening technologies that collect biometric data.” This could include, for instance, contactless thermometers and facial recognition scanning technologies used for contactless security access. The issue will thus continue to be the collection and storage of sensitive biometric data as employers return to work with the newfound priorities of social distancing and limited contact. The reality is that biometric data is still a relatively new concept, with its own specific set of issues and potential solutions. Personal privacy becomes ever harder to maintain in a digital world, with the use of biometric information often a necessity both for remote access and in-person return to work. Ultimately, the risks associated with the collection of biometric data remain largely undefined or misunderstood by employers. That lack of understanding has been exacerbated by a global pandemic necessitating a digital work migration.

IV. Possible Solutions to the Privacy Issues Raised by COVID-19 and Remote Workplaces

Illinois has provided a stellar blueprint for biometric data privacy in BIPA. However, other states have been slow to follow. As of November 2021, only a handful of other states have enacted legislation aimed at the protection of biometric data. Texas and Washington, like Illinois, have passed broad biometric privacy laws. Arizona and New York have adopted more tailored approaches, while still others have enacted laws specifically aimed at facial recognition technology. Proposed bills await legislative approval in many more states. Ultimately, implementing widespread legislation on a state-by-state basis will be a slow and drawn-out process, leaving countless Americans’ biometric data vulnerable in the meantime. Rather than continue this state-based campaign to solidify biometric data privacy, citizens must turn to the federal government for a more comprehensive and consistent solution.

The primary roadblock to legitimate privacy in the biometric information space is the lack of a centralized federal initiative to address it. “Despite its value and sensitivity, the federal government currently has no comprehensive laws in place to protect the biometric data of U.S. citizens.” The privacy issues inherent in the popularization of biometric data in pandemic-era remote workplaces demand federal attention. A wide-ranging statute applicable in all states is the first step in properly addressing these issues. Congress should look to BIPA as a blueprint, for it remains the only state biometric privacy law that includes a private right of action. It is unique in that regard, especially considering it was passed in 2008, and consequently provides the most aggressive statutory response thus far to potential privacy concerns. Whether a federal act is feasible remains unclear. In August 2020, Senators Jeff Merkley and Bernie Sanders introduced the National Biometric Information Privacy Act of 2020, which would impose nationwide requirements similar to those outlined in BIPA. The viability of such an act is doubtful, as previous privacy legislation has been difficult to pass. However, it is a sign of movement in the right direction – toward increased protection of personal privacy in a pandemic that has made biometric data more relevant and more vulnerable to mismanagement and abuse.

Luke Shadley is a third-year law student at Northwestern Pritzker School of Law.

If nothing else, Facebook’s recent announcement that it plans to change its name to “Meta” is a sign that the metaverse is coming and that our legal system must be prepared for it. As the metaverse, the concept of a virtual version of the physical world, gains increased popularity, individuals will engage in more transactions involving non-fungible tokens, or NFTs, to purchase the virtual items that will inhabit metaverse worlds. Accordingly, the United States will need more robust regulatory frameworks to deal with NFT transactions, especially in the gaming industry, where NFT use will likely rise significantly.

In most other areas of digital media and entertainment, NFTs are often associated with niche items, such as high-priced autographs and limited-edition collectibles. However, in the video gaming sector, existing consumer spending habits on rewards such as loot boxes, cosmetic items, and gameplay advantages provide fertile ground for explosive growth in NFT use. This article will explore the outlook for NFTs in gaming, why gaming NFT creators should consider the potential impact of financial regulations on their tokens, and how current U.S. financial regulations could apply to this ownership model.

A. Current State of Virtual Currencies and Items in Gaming

Gaming has long been the gateway through which consumers explore immersive digital experiences, which explains why virtual currencies and collectible items have such strong roots in this sector. Given the popularity of virtual currencies and collectibles in gaming, it is no surprise that cryptocurrencies and NFTs have similarly found success in this space.

NFTs, or non-fungible tokens, are unique digital assets that consumers may purchase with fiat currency or cryptocurrency. NFTs can be “minted” for and linked to almost any digital asset (e.g., video game items, music, social media posts), and even many physical assets. While NFTs are blockchain-based just like cryptocurrencies, the key difference between the two is that an NFT is not mutually interchangeable with any other NFT (i.e., they are non-fungible). So why are they so special? As digital experiences continue to move to the metaverse, NFTs will serve as a primary means for consumers to connect with companies, celebrities, and, eventually, each other.
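For readers who want a concrete picture of what “non-fungible” means at the ledger level, the following minimal Python sketch contrasts the two bookkeeping models. It is purely illustrative and assumes nothing about any particular blockchain; real standards, such as Ethereum’s ERC-20 (fungible) and ERC-721 (non-fungible) token standards, involve far more machinery (transfer events, approvals, access control).

```python
# Illustrative sketch only: contrasts fungible vs. non-fungible bookkeeping.
# Real token standards (e.g., ERC-20 / ERC-721) add events, approvals, etc.

class FungibleLedger:
    """Tracks only amounts; any unit is interchangeable with any other."""
    def __init__(self):
        self.balances = {}  # owner -> amount

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount


class NonFungibleLedger:
    """Tracks which specific token each owner holds; token #7 is not
    interchangeable with token #8, even if both depict similar assets."""
    def __init__(self):
        self.owner_of = {}   # token_id -> owner
        self.metadata = {}   # token_id -> link to the underlying asset

    def mint(self, token_id, owner, asset_uri):
        if token_id in self.owner_of:
            raise ValueError("token already exists")
        self.owner_of[token_id] = owner
        self.metadata[token_id] = asset_uri  # e.g., a game item or artwork

    def transfer(self, sender, recipient, token_id):
        if self.owner_of.get(token_id) != sender:
            raise ValueError("sender does not own this token")
        self.owner_of[token_id] = recipient
```

The contrast is the whole point: a fungible ledger needs only balances, while a non-fungible ledger must track the identity of each individual token, which is what allows an NFT to represent ownership of one unique item.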

In the simplest terms, the metaverse is the concept of a digital twin of the physical world, featuring fully interconnected spaces, digital ownership, virtual possessions, and extensive virtual economies. Mainstream media has already given significant coverage to metaverse activities that have appeared in popular games, such as concerts in Fortnite and weddings in Animal Crossing. However, more futuristic examples of how NFTs and the metaverse could transform our daily lives already exist, from the popularity of Axie Infinity in the Philippines to Decentraland, a blockchain-based virtual world.

In Axie Infinity, players breed, raise, battle, and trade digital animals called Axies. The game launched in 2018, but its popularity took off during the COVID-19 pandemic as many families, particularly in the Philippines, used it to supplement their income or even earn several times their usual salary. To date, the game has generated $2.05 billion in sales. Meanwhile, plots of virtual land in Decentraland, a 3D virtual world where consumers may use the Ethereum blockchain to purchase plots of virtual land as NFTs, are already selling for prices similar to those in the physical world. For example, in June 2021, one such plot sold for $900,000.

The growth in popularity of Axie Infinity has already caught the eye of the Philippine Bureau of Internal Revenue, which has announced that Axie Infinity players must register to pay taxes. As financial regulation of NFTs looms, it will be imperative for U.S. gaming companies to consider how federal courts and regulators will treat NFTs.

B. Financial Regulation and NFTs

As NFT transaction volume grows, there will undoubtedly be greater scrutiny of these transactions by financial regulators. While the current legal and regulatory environment does not easily accommodate virtual assets, there are two primary ways NFTs may be regulated.

1. Securities Regulation

One of the most hotly discussed legal issues concerning NFTs involves whether these tokens should be recognized as securities. Under SEC v. W.J. Howey Co., a transaction is deemed an investment contract under the Securities Act where all of the following four factors are satisfied: (1) an investment of money; (2) in a common enterprise; (3) with a reasonable expectation of profits; (4) to be derived from the entrepreneurial or managerial efforts of others.
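Because all four factors must be satisfied, the test is conjunctive: failing any single prong defeats classification as an investment contract. As a purely illustrative way to see that structure (not a substitute for the fact-intensive analysis courts actually perform), the prongs can be sketched as follows, with informal factor names of my own choosing:

```python
# Purely illustrative sketch of the Howey test's conjunctive structure.
# Factor names are informal labels for the four prongs quoted above;
# a real securities analysis is fact-intensive, not reducible to booleans.

from dataclasses import dataclass

@dataclass
class HoweyFactors:
    investment_of_money: bool
    common_enterprise: bool
    expectation_of_profits: bool
    from_efforts_of_others: bool

def is_investment_contract(f: HoweyFactors) -> bool:
    # All four prongs must hold; any single failure is dispositive.
    return (f.investment_of_money
            and f.common_enterprise
            and f.expectation_of_profits
            and f.from_efforts_of_others)

# Example: a unique collectible NFT may fail the "common enterprise"
# and "efforts of others" prongs, as the discussion below explains.
collectible = HoweyFactors(True, False, True, False)
assert not is_investment_contract(collectible)
```

Framed this way, the debates that follow are about how the “common enterprise” and “efforts of others” prongs apply to unique tokens.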

Intuitively, NFTs, in the form of virtual collectible items, do not seem like traditional tradable securities, as each token is unique and non-fungible. Indeed, they do not appear to demonstrate the type of “horizontal commonality” that federal courts have held necessary to satisfy the “common enterprise” prong of the Howey test. “Horizontal commonality” is generally understood to involve the pooling of money or assets from multiple investors, with the investors sharing in the profits and risks.

However, the Securities and Exchange Commission has stated that it “does not require vertical or horizontal commonality per se, nor does it view a ‘common enterprise’ as a distinct element of the term ‘investment contract.’” Therefore, a token’s non-fungibility alone may not shield it from securities regulation.

A more interesting inquiry might involve assessing whether the reasonable expectation of profits associated with an NFT is based on the “efforts of [others],” as outlined in Howey. In evaluating this element of the Howey test, the SEC considers whether a purchaser reasonably expects to rely on the efforts of active participants and whether those efforts are “undeniably significant” and “affect the failure or success of the enterprise.” Under this lens, how an NFT is offered and sold is critical to consider.

For example, if one mints (i.e., creates an NFT for) a piece of graphic art that sits and passively accumulates value, the success or failure of purchasing such an NFT would likely not rely heavily on the activities of others. As the SEC has noted, price appreciation resulting solely from external market forces (such as general inflationary trends or the economy) impacting the supply and demand for an underlying asset generally is not considered ‘profit’ under the Howey test. Similarly, if a consumer purchases a digital pet, like those in Axie Infinity, that actively accumulates value by winning a series of battles, the pet’s success or failure would likewise not rely heavily on the activities of others, because any appreciation flows from the purchaser’s own efforts. However, this analysis becomes more complex in light of the recent increased interest in “fractional NFTs,” or “f-NFTs,” where an investor shares a partial interest in an NFT with others. Because these fractional interests are more accessible to a larger number of smaller investors, they may be more likely to drive market trading and, as such, to be recognized as securities.

2. Federal Anti-Money Laundering Statutes

Under the Bank Secrecy Act, the Financial Crimes Enforcement Network, or “FinCEN,” is the U.S. Department of the Treasury bureau with authority to regulate financial systems to combat money laundering. Although it has yet to comment directly on NFTs, FinCEN has released guidance suggesting that the movement of monetary value through virtual currencies could trigger money transmission regulations.

A critical factor in determining whether the transfer of an NFT is a money transmission service will be whether FinCEN recognizes the NFT as “value that substitutes for currency.” If the NFT’s value may be substituted for currency, then the transfer of such an NFT would likely trigger money transmission regulations. If players can purchase NFTs using a virtual currency that can be cashed out for fiat currency, then this transfer may be subject to FinCEN regulation. Alternatively, based on FinCEN’s recent guidance, money transmission regulation may be triggered even if NFTs are purchased with a virtual currency that users cannot cash out for fiat currency. Indeed, depending on how the gaming platform facilitates the transfer of in-game currency, regulatory risks may exist when users purchase third-party goods or make virtual marketplace transactions.

Earlier this year, Congress took a significant step toward making money transmission regulations more inclusive of NFT use cases when it passed the Anti-Money Laundering Act of 2020. Under the Act, art and antiquities dealers are now subject to the same anti-money laundering regulations that previously applied to financial institutions under the Bank Secrecy Act. This development could have a significant impact on the potential liability gaming platforms face as “dealers” of NFTs.

Conclusion

The United States is still a long way from having laws that adequately regulate the creation, sale, and purchase of NFTs. However, NFT usage continues to grow rapidly. Nearly half of all U.S. adults are interested in participating in the NFT market, and gamers are 2.6 times more likely than average to do so. As regulators move quickly to keep up with the pace of this market, firms will need to stay alert to ensure they remain in regulatory compliance.

Rohun Reddy is a third-year JD-MBA student at Northwestern Pritzker School of Law and Kellogg School of Management.