
Introduction

The Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization granted state legislatures the authority to regulate abortion. The Court’s decision quickly led states, such as Texas and Arkansas, to enact trigger bans for the procedure. Prior to the Court’s ruling, data brokers had already begun selling location data, harvested through ordinary apps, for individuals visiting abortion facilities. This data often provided details about where the individual traveled from and how long they stayed at the facility.

In the wake of Dobbs, concerns have come to light regarding the potential misuse of sensitive personal health data originating from period tracking apps. Questions have arisen concerning whether “femtech” app data can be used to identify and prosecute individuals violating abortion laws. Due to lax federal laws and regulations in the United States, the onus falls on femtech companies to immediately and proactively find ways to protect users’ sensitive health data.

What is “Femtech”?

The term “femtech” was coined in 2016 by Ida Tin, the CEO and co-founder of the period-tracking app Clue. Femtech refers to health technology directed at supporting reproductive and menstrual healthcare. The femtech industry is currently estimated to have a market size between $500 million and $1 billion. Femtech apps are widely used, with the popular period-tracking app Flo Health touting more than 200 million downloads and 48 million monthly users.

Apps like Clue, Flo Health, and Stardust allow individuals to record and track their menstrual cycles and receive personalized predictions about their next period or ovulation. Although femtech apps collect highly sensitive health data, they are largely unregulated in the United States, and there is a growing push for a comprehensive framework to protect the sensitive health data these apps collect from being sold or provided to third parties and law enforcement.

Current Regulatory Framework

Three federal agencies have regulatory authority over femtech apps – the Federal Trade Commission (“FTC”), the United States Food and Drug Administration (“FDA”), and the Department of Health and Human Services (“HHS”). Their authority over femtech data privacy is limited in scope. Furthermore, while the FDA can clear the apps for contraceptive use, greater focus has been put on the FTC and HHS in regulating femtech. The Health Insurance Portability and Accountability Act (HIPAA), administered by HHS, fails to protect sensitive health data from being collected and sold, and femtech apps are not covered under the Act. The FTC is currently exploring rules on harmful commercial surveillance and lax data security practices following President Joe Biden’s July 2022 executive order encouraging the FTC to “consider actions . . . to protect consumers’ privacy when seeking information about and provision of reproductive health care services.” The executive order’s definition of “reproductive healthcare services” does not, however, seem to include femtech apps. Thus, a massive gap remains in protecting the sensitive health data consumers willingly provide to femtech apps, which may sell or provide such data to law enforcement or third parties. Femtech apps generally offer both free and paid versions, widening their reach and making the issue all the more immediate.

Unease over the potential misuse of health data collected by femtech apps heightened following the FTC’s complaint against Flo. The agency alleged the app violated Section 5 of the Federal Trade Commission Act (“FTCA”) by misleading consumers about how it handled sensitive health data. While the app promised to keep sensitive health data private, the FTC found the app was instead sharing this data with marketing and analytics firms, including Facebook and Google. Flo ultimately settled with the FTC but refused to admit any wrongdoing.

FTC Commissioners Rohit Chopra and Rebecca Kelly Slaughter issued a joint statement following the settlement asserting that, in addition to misleading consumers, the app also violated the FTC’s Health Breach Notification Rule (“Rule”), which requires “vendors of unsecured health information . . . to notify users and the FTC if there has been an unauthorized disclosure.” The FTC declined to apply the Rule against Flo, as such enforcement would have been “novel.” Such disclosures would help users navigate the post-Dobbs digital landscape, especially in light of news reports that law enforcement in certain states has begun to issue search warrants and subpoenas in abortion cases.

There is additional concern that femtech apps’ location tracking could fall into the hands of data brokers. The FTC recently charged Kochava, a data brokerage firm, with unfair trade practices under Section 5 of the FTCA for selling consumers’ precise geolocation data at abortion clinics. While Kochava’s data is not linked to femtech, in light of the FTC’s settlement with Flo, the sale of sensitive reproductive health data from femtech apps is not out of the realm of possibility. Despite the FTC’s announcement that it is exploring new rules on commercial surveillance and lax data security, experts have questioned whether such rulemaking is best done through the FTC or Congress, because the FTC’s rules are “typically more changeable than a law passed by Congress.”

As noted, most femtech apps are not covered under HIPAA, nor are they required to comply with it. HIPAA encompasses three main rules under Title II: the Security Rule, the Privacy Rule, and the Breach Notification Rule. HIPAA is not a privacy bill, but it has grown to “provide expansive privacy protections for [protected health information] (“PHI”).” Due to the narrow definition of covered entity, there is little protection that can be provided to femtech app users under the current structure of HIPAA, even though these apps collect health data that is “individually identifiable.”

Momentum for amending HIPAA so that femtech falls within the scope of covered entities may still fall short, since HIPAA’s Privacy Rule permits covered entities to disclose PHI for law enforcement purposes through a subpoena or court-ordered warrant. While the Rule does not require covered entities to disclose PHI, this permission could be troublesome in states hostile to abortion. Even if HIPAA’s definition of covered entities is expanded, it would still be up to the company to decide whether to disclose PHI to law enforcement. Some femtech companies, though, may be more willing to protect user data and have already begun to do so.

Future Outlook and What Apps Are Doing Post-Dobbs

In September 2022, Flo announced in an email to users that it was moving its data controller from the United States to the United Kingdom. The company wrote that this change meant their “data is handled subject to the UK Data Protection Act and the [General Data Protection Regulation].” Their privacy policy makes clear that, despite this change, personal data collected is transferred to and processed in the United States, where it is governed by United States law. While Flo does not sell identifiable user health data to third parties, the company’s privacy policy states it may still share users’ personal data “in response to subpoenas, court orders or legal processes . . . .” While the GDPR is one of the strongest international data privacy laws, it still does not provide United States users with much protection.

In the same update, Flo introduced “anonymous mode,” letting users access the app without providing their name, email, or any technical identifiers. Flo said this decision was made “in an effort to further protect sensitive reproductive health information in a post-Roe America.” The FTC, however, has stated that claims of anonymized data are often deceptive and that the data is easily traceable. Users may still be at risk of having their sensitive health data handed over to law enforcement. Further, research shows femtech apps often have significant shortcomings in making their privacy policies easy to read, and users are often unaware of what their consent means.

While femtech has the potential to provide much-needed attention to a group often under-researched and underrepresented in medicine, the need to enhance current data privacy standards should be at the forefront for developers, legislators, and regulators. Although femtech companies may be incentivized to sell sensitive health data, their resources may be better spent lobbying for the passage of legislation like the American Data Privacy and Protection Act (“ADPPA”) and the My Body, My Data Act; otherwise, the lack of data privacy measures may turn users away from femtech altogether. While no current reports show that menstruating individuals are turning away from femtech apps, it may be too soon to tell the effects post-Dobbs.

The ADPPA is a bipartisan bill that would be the “first comprehensive information privacy legislation” and would charge the FTC with administering the Act. The ADPPA would regulate “sensitive covered data,” including “any information that describes or reveals the past, present, or future physical health, mental health, disability, diagnosis, or healthcare condition or treatment of an individual” as well as “precise geolocation information.” The ADPPA’s scope would extend beyond covered providers as defined by HIPAA and would encompass femtech apps. The ADPPA would reduce the amount of data available to law enforcement through commercial sources and give consumers more rights to control their data. The Act, however, is not perfect, and some legislators have argued that it would make it more difficult for individuals to bring claims against privacy violations. While it is unlikely that Congress will pass or consider the ADPPA before the new Congress convenes in January 2023, it marks a start to long-awaited federal privacy law discussions.

On the state level, California moved quickly to enact two bills that strengthen privacy protections for individuals seeking abortions, including by prohibiting cooperation with out-of-state law enforcement regardless of whether the individual is a California resident. Although California is working to become an abortion safe haven, abortion access is costly, and the individuals most impacted by the Supreme Court’s decision will likely not be able to fund trips to the state to take advantage of its strong privacy laws.

As menstruating individuals continue to navigate the post-Dobbs landscape, femtech companies should provide consumers transparency about how their reproductive health data is collected and how it may be shared, especially when it comes to a growing healthcare service that individuals are exploring online: abortion pills.

Angela Petkovic is a second-year law student at Northwestern Pritzker School of Law.

In August 2020, Marlene Stollings, the head coach of the Texas Tech Women’s Basketball Team, allegedly forced her players to wear heart rate monitors during practice and games. Stollings would subsequently view the player data and reprimand each player who did not achieve their target heart rate. It could be argued that Stollings was simply pushing her players to perform better; however, former player Erin DeGrate described Stollings’ use of the data as a “torture mechanism.” This is just one reported example of how athletic programs use athlete data collected from wearable technology to the student athlete’s detriment.

As of 2021, the market for wearable devices in athletics has a $79.94 billion valuation and is expected to grow to $212.67 billion by 2029. The major market competitors in the industry include Nike, Adidas, Under Armour, Apple, and Alphabet, Inc., so the expected growth comes as no surprise. Some wearable technology is worn by everyday consumers to simply track how many calories they have burned in a day or whether they met their desired exercise goals. On the other hand, professional and college athletes use wearable technology to track health and activity data to better understand their bodies and gain a competitive edge. While professional athletes can negotiate which types of technology they wear and how the technology is used through their league’s respective collective bargaining agreement, collegiate athletes do not benefit from these negotiation powers. Universities ultimately possess a sort of “constructive authority” to determine what kind of technology students wear, what data is collected, and how that data is used, without considering the student athlete’s level of comfort. This is because a student-athlete who chooses to opt out of wearable technology usage may see their playing time hindered or be kicked off the team.

Studies show that collecting athlete biometric data has a positive effect on a player’s success and helps reduce possible injury. For instance, professional leagues utilize wearables for creating heat maps to analyze an athlete’s decision-making abilities. The Florida State Seminole basketball program also routinely uses wearables to track and monitor early signs of soft tissue damage which helped reduce the team’s overall injury rate by 88%. However, there are significant trade-offs including the invasion of an athlete’s privacy and possible misuse of the data.

Section I of this article will examine the different types of information collected from athletes and how that information is being collected. Section II will discuss a college athlete’s right to privacy under state biometric laws. Section III will discuss how data privacy laws are changing with respect to collecting athlete biometric data. Last, Section IV will discuss possible solutions to collecting biometric data.

I. What Data is Collected & How?

Many people around the country use smartwatch technology such as Fitbits, Apple Watches, or Samsung Galaxy Watches to track their everyday lifestyle. Intending to maintain a healthy lifestyle, people usually allow these devices to monitor the number of steps taken throughout the day, how many calories they burned, the variance of their heart rate, or even their sleep schedule. On the surface, there is nothing inherently problematic about this data collection; however, biometric data collected on college athletes is much more intrusive. Athletic programs are beginning to enter into contractual relationships with big tech companies to provide wearable technology for their athletes. For example, Rutgers University’s football program partnered with Oura to provide wearable rings for its athletes. Moreover, the types of data these devices collect include blood oxygenation levels, glucose, gait, blood pressure, body temperature, body fatigue, muscle strain, and even brain activity. While many college athletes voluntarily rely on wearable technology to develop a competitive edge, some collegiate programs now mandate that students wear the technology so the athletic program can collect the data. Collegiate athletes do not have the benefit of negotiations or the privileges of a collective bargaining agreement, but they do sign a national letter of intent, which requires a waiver of certain rights in order to play for the university. Although college athletes have little to no bargaining power, they should be given the chance to negotiate this national letter of intent to incorporate biometric data privacy issues because it is ultimately their bodies producing the data.

II. Biometric Privacy Laws

Currently, there are no federal privacy laws on point that protect the collection of student athlete biometric data. Nonetheless, some states have enacted biometric privacy statutes to deal with the issue. Illinois, for example, which houses thirteen NCAA Division I athletic programs, enacted the Biometric Information Privacy Act (BIPA) in 2008. BIPA creates standards for how companies in Illinois must handle biometric data. Specifically, BIPA prohibits private companies from collecting biometric data unless the company (1) informs the individual in writing that their biometric data is being collected or stored, (2) informs the individual in writing why the data is being collected and for how long collection will continue, and (3) receives a written release from the individual. This is a step in the right direction in protecting athletes’ privacy, since the statute’s language implies athletes would have to provide informed consent before their biometric data is collected. However, BIPA does not apply to universities and their student-athletes since universities fall under the 25(c) exemption for institutions. Five Illinois courts, most recently in Powell v. DePaul University, have explained that the 25(c) exemption extends to “institutions of higher education that are significantly engaged in financial activities such as making or administering student loans.”

So, although Illinois has been praised for being one of the first states to address the emerging use of biometric data by private companies, it does not protect collegiate athletes who are “voluntarily” opting into the wearable technology procedures set by their teams.

III. Data Collection Laws are Changing

While BIPA does not protect collegiate athletes, other states have enacted privacy laws that may protect student-athletes. In 2017, the state of Washington followed in Illinois’ footsteps by enacting its own biometric privacy law that is substantively similar to the provisions in BIPA, but with an expanded definition of what constitutes “biometric data.” Specifically, the law defines biometric identifiers as “data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises or other unique biological patterns or characteristics that are used to identify a specific individual.” By adding the two phrases “data generated by automatic measurements of an individual’s biological characteristics” and “other unique biological patterns or characteristics that are used to identify a specific individual,” the Washington law may encompass the complex health data collected from student-athletes. The language in the statute is broad and thus likely covers an athlete’s biometric data, because such data is unique to a certain individual and could be used as a characteristic to identify that individual.

IV. Possible Solutions to Protect Player Biometric Data

Overall, it’s hard to believe that the collection of biometric data on student-athletes will see increased restrictions any time soon. There is too much on the line for college athletic programs to stop collecting biometric data, since programs want to do whatever it takes to gain a competitive edge. Nonetheless, it would be possible to restrict who has access to athletes’ biometric data. In 2016, Nike and the University of Michigan signed an agreement worth $170 million under which Nike would provide Michigan athletes with apparel and, in return, Michigan would allow Nike to obtain personal data from Michigan athletes through the use of wearable technology. The contract hardly protected the University’s student-athletes and was executed in secrecy; its details were revealed only after a Freedom of Information Act request. Since the University was negotiating the use of the student-athletes’ biometric data on the athletes’ behalf, it can likely be assumed that the University owns the data. Therefore, athletes should push for negotiable scholarship terms that allow them to restrict access to their biometric data and permit only the athletic program’s medical professionals to obtain it.

One would think that HIPAA protects this information from the outset. Yet there is a “general consensus” that HIPAA does not apply to information collected by wearables since (a) “wearable technology companies are not considered ‘covered entities’, (b) athletes consent to these companies having access to their information, or (c) an employment exemption applies.” Allowing student-athletes to restrict access before their college careers start would likely blunt the pressure from coaches to consent to data collection. Further, restricting access would show the athletes do not consent to companies having access to their information and could trigger HIPAA’s protections. Restricting access to medical professionals would also render the information privileged, and the athlete could still analyze the data with the medical professional on his or her own to gain the competitive edge biometric data provides.

Anthony Vitucci is a third-year law student at Northwestern Pritzker School of Law.

Introduction

News headlines about facial recognition technology primarily focus on the government’s use and misuse of the technology. Likewise, technology companies and legislators frequently advocate against the government’s use of facial recognition tools to conduct mass surveillance or generate leads in investigations. For example, following widespread claims of the technology’s racial bias, Amazon, IBM, and Microsoft announced that they would stop selling facial recognition tools to law enforcement agencies. And following the arrest of an innocent black man who was falsely identified by facial recognition, major cities like San Francisco and Boston banned law enforcement from using the technology.

However, as industry commentators focus on the government’s use of facial recognition tools, private businesses in the U.S. regularly deploy facial recognition technology to secretly surveil their customers. Companies rely on the technology to gather information about customers’ identities and demographics to tailor their marketing strategies, monitor customers within stores, or sell the information to third parties. Since there are no federal regulations governing the technology, commercial uses of facial recognition technology remain relatively unchecked, even as companies invade their customers’ privacy rights without any warning.

How Does Facial Recognition Technology Work?

Based on photos or still images, facial recognition technology scans, maps, and analyzes the geometry of a person’s face to verify their identity or collect information about their behavior. When mapping a face, the technology creates a mathematical formula — called a facial signature — based on the person’s distinct facial features, such as the distance between their eyes. Facial recognition systems can create and store facial signatures for each scanned image containing a face. When a user uploads a new photo, the system cross-references the generated facial signature with existing ones in the database and can verify the person’s identity with a matched signature.
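To make the mapping-and-matching steps concrete, the Python sketch below uses the open-source face_recognition library to generate facial signatures (128-number encodings) and cross-reference them by distance. It is an illustration of the general technique, not any vendor’s actual system; the image file names and the 0.6 cutoff are assumptions.

```python
# Minimal sketch of signature-based face matching with the open-source
# face_recognition library; file names and threshold are hypothetical.
import face_recognition

# Build a one-entry "database" of known facial signatures
# (each signature is a 128-number encoding of facial geometry).
known_image = face_recognition.load_image_file("customer_on_file.jpg")
known_signature = face_recognition.face_encodings(known_image)[0]

# Generate a signature for a new face, e.g., a security-camera still.
new_image = face_recognition.load_image_file("security_still.jpg")
new_signature = face_recognition.face_encodings(new_image)[0]

# Cross-reference: a smaller distance means more similar facial geometry.
distance = face_recognition.face_distance([known_signature], new_signature)[0]
if distance < 0.6:  # the library's conventional default cutoff
    print(f"Match (distance {distance:.2f}): identity verified")
else:
    print(f"No match (distance {distance:.2f})")
```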

Businesses have created databases of facial signatures to identify customers of interest in future video footage. In addition, businesses can use facial recognition software from companies like Clearview AI, which cross-references an uploaded photo against billions of public images to verify a person’s identity. Clearview AI is known to offer free trials of its software, luring businesses and rogue employees into using the technology. With such easy access to facial recognition software, private use of the technology has proliferated, hardly slowed by regulatory barriers.

Commercial Uses of Facial Recognition Technology

No matter the industry, facial recognition can help businesses glean more information about their customers, make informed business decisions, and increase their revenues. Shopping malls and mega-stores like Macy’s, Rite-Aid, Apple, and Walmart have used facial recognition to identify shoplifters, target loyal customers, and track customers’ reactions within the store. Amazon has sold facial recognition technology that assesses customers’ faces to discover whether they are attentive or indifferent to certain displays. While customers are surely aware these mega-stores have security cameras, they are likely unaware these stores may know their name, home address, how many times they’ve frequented the location, and whether they are happy with their in-store experience. Outside of retail stores, in cities like Miami, thousands of Uber and Lyft drivers have digital tablets in their backseats that use facial recognition technology to assess a rider’s age, gender, and demographics, in order to display ads tailored to the rider’s perceived characteristics.

In states without biometric privacy laws, any citizen who shops at a mall or grocery store, or attends a concert or sports game, will likely be subject to facial recognition without suspecting it. Additionally, facial recognition tools can even identify an individual who rarely shows their face in public. Clearview AI created a facial recognition database by scraping ten billion images from public websites. Clearview analyzed the images and developed facial signatures for nearly half the U.S. population.

As of 2020, more than 200 companies had accounts with Clearview, including professional sports leagues, casinos, fitness centers, and banks. These companies can upload a photo of an individual’s face — pulled from security footage or driver’s licenses — and cross-reference it against Clearview’s database to find a match. With limited regulation and easy access to facial recognition tools, consumers will face the technology’s adverse consequences, such as misidentifications and loss of privacy rights.

Misidentifications and Privacy Risks

The accuracy with which facial recognition technology identifies a person varies with their age, gender, and race. Research from the National Institute of Standards and Technology revealed that facial recognition systems are less accurate when identifying people of color. The algorithms are more likely to misidentify African Americans, Native Americans, and Asians compared to Caucasians. Researchers also have found these algorithms to be less accurate when identifying women, transgender individuals, and children.

Misidentification can carry damaging consequences to an individual’s liberty and dignity. Robert Williams, the black man who was wrongfully arrested based on a facial recognition match, was a victim of misidentification. These same misidentifications are likely occurring at private establishments, where security guards use the technology to scan for known criminals and remove purported “matches” from their stores.

In addition to misidentifications, facial recognition technology intrudes on an individual’s right to privacy. The technology allows companies to identify customers without their consent, collecting information about customers’ demographics and preferences. Furthermore, companies that store facial templates are subject to data breaches, where thousands of their customers’ faceprints could become compromised. Unlike online passwords, a stolen faceprint is indefinitely compromised — a customer cannot change their faceprint. Last year, thousands of scammers in the U.S. tried using stolen faceprints to fraudulently obtain government-assistance benefits. As facial recognition technology grows, bad actors will attempt to use stolen faceprints for financial gain.

Federal, State, and Local Regulations

There are no federal regulations curbing the private use of facial recognition technology, but Congress’s interest in regulating the technology is increasing. Legislators have introduced three separate bills to regulate facial recognition technology in the past few years, yet none advanced past the introduction stage.

One of the bills introduced in the Senate, the Commercial Facial Recognition Privacy Act, would have required all private entities to obtain explicit consent from customers before collecting faceprint data. The bill’s consent requirement is based on the Illinois Biometric Information Privacy Act (BIPA), one of only three state-enacted biometric privacy laws.

BIPA requires businesses that use facial recognition technology to obtain consent from consumers before collecting their faceprint data. It also requires these businesses to provide information about how they protect and store the biometric data. BIPA permits individuals to sue companies that violate any requirement in the statute and offers significant statutory damages for violations. In February 2021, Facebook paid out $650 million to settle a BIPA class-action lawsuit. To date, more than 800 BIPA class action lawsuits have been filed against Illinois businesses.

Despite BIPA’s teeth, businesses can freely use facial recognition in almost every other state. Texas and Washington are the only other states with biometric privacy laws that regulate commercial use of the technology. Yet, neither state permits citizens to sue companies for violating the statute, meaning there is much less pressure to comply. Enforcement lies with each state’s attorney general, who can impose civil penalties on violators.

Fortunately, bans on private use are growing at the city level. In September 2020, Portland, Oregon, became the first municipality to ban private entities from using facial recognition in public places, such as shopping malls. Since then, two other cities have followed suit. New York City now requires commercial establishments to post notices when using facial recognition technology, and Baltimore banned all private sector use of the technology, even subjecting violators to criminal penalties. The recent wave of restrictions at the city level indicates that regulations may first arise where the commercial sector flourishes — in major cities.

Calls for Regulation and Future Outlook

Despite the pervasive commercial use of facial recognition technology, sixty percent of Americans are unaware that retail stores use the technology. This lack of awareness stems in part from the lack of regulation. Aside from a few states and a handful of cities, most businesses are unregulated: free to implement facial recognition tools without warning their customers. So far, calls for regulation have primarily come from companies that have developed facial recognition technology themselves: Microsoft, IBM, and Amazon. While these calls may be aimed at influencing friendly regulations, Microsoft’s President Brad Smith has called for legislation requiring stores to provide notice and obtain consent, similar to BIPA’s consent requirement. As BIPA has revealed, requiring businesses to obtain consent from consumers would at least hold businesses accountable for their facial recognition uses.

Nevertheless, some businesses may not wait for enacted legislation before shelving their facial recognition products. In November 2021, Meta announced that Facebook will no longer use facial recognition software and plans to delete the faceprint data of one billion Facebook users. Meta’s decision was motivated by concerns about the technology’s “place in our society.” This drastic move may prompt other industry leaders to start influencing the future treatment of facial recognition technology, with the hopes of clearing up the current regulatory uncertainty that threatens innovation and investment. While some may question Meta’s sincerity or true motives, its decision could foreshadow an era of much-needed regulatory action.  

Michael Willian is a third-year law student at Northwestern Pritzker School of Law.

I. Introduction

The COVID-19 pandemic has brought the issues of personal privacy and biometric data to the forefront of the American legal landscape. In an increasingly digital world, privacy laws are more important than ever. This reality is especially true in the context of remote workplaces, where employers have facilitated a digital migration through a variety of means. The platforms employers use have the propensity to violate personal privacy through the capture and storage of sensitive biometric information. In response, states across the nation are exploring solutions to the potential privacy issues inherent in the collection of biometric data. One of the first states to do so was Illinois, enacting a standalone biometric privacy statute in 2008: the Illinois Biometric Information Privacy Act (“BIPA”). Today, BIPA is more relevant than ever and should act as a statutory blueprint for states looking to protect personal privacy and biometric data amid a global pandemic. Ultimately, though, BIPA must be supplemented by federal legislation drafted in its likeness to effectively protect individuals’ privacy on a national level.

II. Background of the Biometric Information Privacy Act

To fully understand BIPA and all its implications, one must appreciate the context in which it was enacted. The Illinois legislature passed BIPA in October 2008. The Act was passed in the immediate wake of the bankruptcy of Pay By Touch, a company which operated the largest fingerprint scan system in Illinois. Pay By Touch’s pilot program was used in grocery stores and gas stations, and its bankruptcy left users unsure of what would become of their biometric data – i.e., their fingerprints. “Biometric data – a person’s unique biological traits embodied in not only fingerprints but also voice prints, retinal scans, and facial geometry – is the most sensitive data belonging to an individual.”

Understandably, private citizens in Illinois and across the country want to safeguard their sensitive biometric data. With potential issues such as identity theft and data manipulation more prevalent than ever, people have plenty of incentives to ensure their unique identifiers stay private. In response to those concerns, legislatures have passed statutes to address biometric data and personal privacy. BIPA represents one of the most stringent of such acts in the country, setting strict requirements for the management of biometric identifiers in Illinois.

BIPA defines “biometric identifier” as (1) a retina or iris scan, (2) fingerprint, (3) voiceprint, or (4) a scan of hand or face geometry. Further, “biometric information” refers to any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual. The requirements outlined in Section 15 of the Act – which addresses the retention, collection, disclosure, and destruction of biometric data – implicate a slew of potential legal issues. The section stipulates that a private entity can collect a person’s biometric data only if it first informs the subject that a biometric identifier is being collected, informs them of the specific purpose and length of term it is being collected for, and receives a written release from the subject.

Further, the Act outlines the following concerning retention of such data:

(a) A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever comes first.

Thus, BIPA represents a statute narrowly aimed at maintaining the security of biometric data. While BIPA was relatively unknown in Illinois between 2008 and 2015, a wave of litigation has since swept through the state as employees began suing their employers. Such litigation was seemingly inevitable, as BIPA provides sweeping protection for individuals against biometric data abuse. The complexities of such issues have become clearer, and potential legislative solutions to them even more important, in the midst of a global pandemic.

III. Personal Privacy & Biometric Data in the COVID-19 Pandemic

The issues surrounding data privacy have become increasingly relevant in the ongoing COVID-19 pandemic, which effectively digitized the workplace as we know it. As the pandemic raged in the early months of 2020, workplaces around the globe were suddenly forced to digitally migrate to an online work environment. An inevitable result of newfound online worksites has been an increase in the utilization of biometric data. In an effort to facilitate remote work, companies have had to make work-related information accessible online. Employment attorney Eliana Theodorou outlines the ensuing issues for companies undertaking such efforts in an article entitled “COVID-19 and the Illinois Biometric Information Privacy Act.” For example, Theodorou writes, “Some of these platforms involve video recording or access by fingerprint, face scan, or retina or iris scan, which may result in the capture and storage of sensitive biometric information.” Thus, the collection and retention of biometric data has necessarily increased during the pandemic as companies made information accessible remotely when they shifted online.

Potential privacy issues accompanying the storage of biometric data will become even more difficult to navigate as companies return to physical workplaces with the pandemic still raging. Per Theodorou, “As workplaces reopen, there will likely be an uptick in the collection of biometric data as employers turn to symptom screening technologies that collect biometric data.” This could include, for instance, contactless thermometers and facial recognition scanning technologies used for contactless security access. The issue will thus continue to be the collection and storage of sensitive biometric data as employers return to work with the newfound priorities of social distancing and limited contact. The reality is that biometric data is still a relatively new concept, with its own specific set of issues and potential solutions. Personal privacy becomes ever harder to maintain in a digital world, with the use of biometric information often a necessity both for remote access and in-person return to work. Ultimately, the risks associated with the collection of biometric data remain largely undefined or misunderstood by employers. That lack of understanding has been exacerbated by a global pandemic necessitating a digital work migration.

IV. Possible Solutions to the Privacy Issues Raised by COVID-19 and Remote Workplaces

Illinois has provided a stellar blueprint for biometric data privacy in BIPA. However, other states have been slow to follow. As of November 2021, only a handful of other states have enacted legislation aimed at the protection of biometric data. Texas and Washington, like Illinois, have passed broad biometric privacy laws. Other states like Arizona and New York have adopted more tailored biometric privacy approaches, while others have enacted laws specifically aimed at facial recognition technology. There are also proposed bills awaiting legislative approval in many more states. Ultimately, implementing widespread legislation on a state-by-state basis will be a slow and drawn-out process, rendering countless Americans’ biometric data vulnerable. Rather than continue this state-based campaign to solidify biometric data privacy, citizens must turn to the federal government for a more comprehensive and consistent solution.

The primary roadblock to legitimate privacy in the biometric information space is the lack of a centralized federal initiative to address it. “Despite its value and sensitivity, the federal government currently has no comprehensive laws in place to protect the biometric data of U.S. citizens.” The privacy issues inherent in the popularization of biometric data in pandemic-era remote workplaces demand federal attention. A wide-ranging statute applicable in all states is the first step in properly addressing these issues. Congress should look to BIPA as a blueprint, for it remains the only state law passed to address biometric data privacy that includes a private right of action. It is unique in that regard, especially considering it was passed in 2008, and consequently provides the most aggressive statutory response thus far to potential privacy concerns. Whether a federal act is feasible remains unclear. In August 2020, Senators Jeff Merkley and Bernie Sanders introduced the National Biometric Information Privacy Act of 2020, which would impose nationwide requirements similar to those outlined in BIPA. The viability of such an Act is doubtful, as previous privacy legislation has been difficult to pass. However, it is a sign of movement in the right direction – toward increased protection of personal privacy in a pandemic which has made biometric data more relevant and potentially at risk for improper management and manipulation.

Luke Shadley is a third-year law student at Northwestern Pritzker School of Law.

What’s The Issue?

It seems logical that the creator of a work would own the rights to that work. This general idea imports easily into some industries but creates problems in the music industry. The reality is that the main rights holders of a creative musical work are often not the musicians but collective management organizations (CMOs). After pouring countless hours, days, months, and years into perfecting a single musical work or album, the musician often ends up not having total control over his or her work. The music industry is driven by smoke and mirrors: distributors and record labels often do not disclose who owns the rights to which musical work. George Howard, co-founder of a digital music distributor called TuneCore and professor at Berklee College of Music, describes the music industry as one that lacks transparency. He explains that the music industry is built on asymmetry, where “under-educated, underrepresented, or under-experienced” musicians are deprived of their rights because they are often kept in the dark about their rights as creators.

Because the industry has only a few power players, profits for musicians are meager. In the past, musicians and their labels earned a somewhat steady source of income through physical album sales. However, with the prominence of online streaming, their main source of income has changed. The root of this issue seems to be how creators’ rights are tracked and managed.

A piece of music has two copyrights, one for the composition and one for the sound recording, and it is often difficult to keep track of both because ownership of these rights is split among several songwriters and performers. The music industry does not have a reliable way to keep track of these copyrights, which is an issue especially when several individuals are involved in creating a single musical work. With the development of distributed ledger technology and its influence in various industries, it could be time for this development to make its way into the music industry and provide a solution to compensate musicians for their lost profits.

Blockchains: The Solution?

Lately, blockchain technology has been at the forefront of conversations; the variation in Bitcoin’s pricing, for example, has been a hot topic. Blockchain technology may sound like a mouthful, but a blockchain is simply a “database maintained by a distributed network of computers.” Blockchains allow information to be recorded, distributed across decentralized ledgers, and stored in a network that is secure against outside tampering.
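To see why such a ledger is tamper-resistant, consider the toy Python sketch below, which assumes nothing beyond the standard library: each entry stores a hash of the previous entry, so quietly editing an earlier rights record breaks every link after it. A real blockchain adds networking, consensus, and digital signatures, none of which are modeled here.

```python
# Toy append-only ledger illustrating hash chaining (not a real blockchain:
# no distributed network, consensus protocol, or signatures).
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

ledger = []

def append_record(record: dict) -> None:
    """Append a rights record, linking it to the previous block's hash."""
    prev = block_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"prev_hash": prev, "record": record})

# Hypothetical rights records for a musical work.
append_record({"work": "Example Song", "composition": "Writer A",
               "sound_recording": "Performer B"})
append_record({"work": "Example Song", "license": "streaming",
               "royalty_split": {"Writer A": 0.5, "Performer B": 0.5}})

def verify(chain: list) -> bool:
    """Recompute every link; an edited block invalidates all later links."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(ledger))                                # True
ledger[0]["record"]["composition"] = "Someone Else"  # tamper with ownership
print(verify(ledger))                                # False
```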

With the advancement of online music streaming, and entertainment going digital, blockchain seems like the perfect tool to be used in this industry. Since the issue of weakened profits seems to stem from disorganized tracking and monitoring of creators, blockchain technology could be utilized to improve the systems used for licensing and royalty payments. A blockchain ledger would allow a third party to track the process of a creative work and be an accessible way of managing intellectual property rights of these creative works. By tracking and monitoring their works, musicians could potentially gain back their profits, or at least recuperate some of their losses.

In 1998, several companies came together to create a centralized database that would organize copyrights for copyright owners so that royalty payments would be made in an orderly fashion. This effort was called the Secure Digital Music Initiative (SDMI), and its purpose was to “create an open framework for sharing encrypted music by not only respecting copyrights, but also allowing the use of them in unprotected formats.” Unfortunately, the initiative failed to provide a universal standard for encrypting music.

The latest venture was the Global Repertoire Database (GRD), which aimed to “create a singular, compiled, and authoritative ledger of ownership and control of musical works around the world.” This was a very ambitious move and required two rounds of financing, consisting of the initial startup funds and the funds to cover the budget for the year. Although there were significant contributions to this mission, some collection societies, such as the American Society of Composers, Authors and Publishers (ASCAP), began pulling out of the fund due to GRD’s failures and the debt it accumulated.

Even though this venture failed to provide a centralized database that could resolve royalty and licensing issues, there is now a growing consensus in the music industry for a global, digital database that properly and efficiently manages copyright ownership information. The next venture could utilize blockchain technology because of the advantages in storage, tracking, and security that it offers. Beyond providing a centralized database in which music content information is accurately organized, blockchain could close the gap between creators and consumers and dispense with intermediaries. This would allow for a more seamless, transparent experience for the consumer and give creators more control over their works. Further, such a ledger would allow creators to upload all of their musical work’s elements, such as the composition, lyrics, cover art, video performances, and licensing information, to a single, uniform database. This information would be available globally in an easily verified peer-to-peer system.

On the other hand, because blockchains are tamper-resistant, the data could not be “changed or deleted without affecting the entire system,” even by a central authority. This means that if someone deletes a file from the system, the deletion will disrupt the whole chain. There could also be issues with implementing such a large network of systems given the sheer amount of music that is globally available. Additionally, to identify each registered work, rights holders would have to upload digital copies of their works, which would require an extensive amount of storage and computational power to save entire songs.

Nevertheless, blockchain could provide the base for implementing a centralized database, using a network of computers, to organize royalty payments for these musicians. Proponents contend that, with the help of Congress, this could be made possible. Congress recently introduced H.R. 3350, the Transparency in Music Licensing and Ownership Act. This act, if passed, would require musicians to register their songs in a federal database or else forfeit the ability to enforce their copyrights, which would prevent them from collecting royalties for those works. Although this might seem like an ultimatum, the proposed Act would provide the best way of changing how the music industry stores its information so that royalty and licensing payments can be distributed to artists efficiently.

Conclusion

People are split in their opinions about blockchain technology in the music industry. Some see it as a more accurate way of managing “consumer content ownership in the digital domain.” Others do not see it as a viable plan due to its lack of scalability to accommodate the vast amount of musical works. Even as the music industry moves further into the digital realm, the goal remains to protect artists’ works. A Plan [B]lockchain ledger may not completely solve the royalty problem in the music industry, but it can provide a starting point for creating a more robust metadata database and, in combination with legislative change, for keeping musical works in the hands of their rightful owners.

Jenny Kim is a second-year law student at Northwestern Pritzker School of Law.

Introduction

Throughout the past two years, AI-powered stem-splitting services have emerged online, allowing users to upload any audio file and access extracted, downloadable audio stems. A “stem” is an audio file that contains a mixture of a song’s similarly situated musical components. For example, if one records a mix of twenty harmonized vocal tracks, that recording constitutes a vocal stem. Stems’ primary purpose is to ease integrating or transferring their contents into either a larger project or a different work. Traditionally, only producers or engineers created and accessed stems. Even when stem sharing became commonplace, it was only for other industry insiders or those with licenses. But AI stem-splitting technology has transformed stem access. For the first time, anyone with internet access can obtain a stem through stem extraction software, which will likely push music production’s creative envelope into new realms. One inevitable consequence, however, is the question of copyright protection over stems extracted from copyrighted works.
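To give a sense of how low the barrier has become, the snippet below uses Spleeter, one of the open-source Python libraries behind this wave of stem-splitting tools, to split a mixed recording into stems. The file paths are placeholders, and commercial services may run different models under the hood.

```python
# Stem extraction in a few lines with the open-source Spleeter library
# (pip install spleeter); "song.mp3" and "output_dir" are placeholders.
from spleeter.separator import Separator

# Load a pretrained model that separates audio into four stems:
# vocals, drums, bass, and "other" (the remaining instruments).
separator = Separator("spleeter:4stems")

# Writes vocals.wav, drums.wav, bass.wav, and other.wav into
# output_dir/song/ -- each one an extracted, downloadable stem.
separator.separate_to_file("song.mp3", "output_dir")
```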

Section 102 of Title 17 extends copyright protection not only over the original copyrighted audio source but also over that source’s components, such as its stems. Any modification of that work, such as extracting a stem and using it elsewhere, likely qualifies as a “derivative work” under Section 103. Importantly, Section 106 allows only copyright owners to authorize the making of derivative works. In light of this regime, what flexibility, if any, do artists have in using AI-extracted, copyrighted stems? Three considerations shed light on an answer: fair use, de minimis use, and the use of content recognition software coupled with licensing.

Fair Use

Codified under Section 107, the fair use defense provides a possible safeguard for would-be infringers. To establish this affirmative defense, a court would need to find the statute’s four factors sufficiently weigh toward “fair use.” Unfortunately, courts reach incongruous interpretations of what permissible fair use includes, rendering the defense a muddled construct for many artists. Squaring the four factors with stem usage, however, may offer guidance.

  1. The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes

In the seminal music fair use case, Campbell v. Acuff-Rose Music, Inc., the Supreme Court emphasized that this first factor will likely weigh toward fair use when the work is “transformative.” The Court went so far as to note “the more transformative the new work, the less will be the significance of other factors . . . .” Thus, an artist fearing infringement should strive toward transforming the copyrighted material into something distinct, used for noncommercial purposes. In the absence of a bright-line rule from the Court, however, artists will still need to use reasonable judgment about what types of stem usage is “distinct.” For example, suppose Artist A extracts a strings stem from a copyrighted work and only uses two seconds of it within another work that comprises numerous other instruments and melodies. Meanwhile, Artist B extracts the same strings stem; however, Artist B uses the entire strings melody within their work and only adds percussion and minor counter-melodies. Artist A would likely be in a more favorable legal position than Artist B given A’s efforts to materially transform the copyrighted audio.

The commercial nature of copyrighted stem usage is also unclear. An artist may choose to work with stems for solely experimental purposes. For example, an artist who shares their work via Soundcloud or YouTube does not expect another person to use those platforms to directly purchase the work. With stems’ increasing public accessibility, many will simply want to experiment with a music tool that, until recently, has largely remained a foreign concept. If this issue reaches a court, the court would need to conduct an analysis set against the landscape of such heightened accessibility. An increase in this noncommercial, creative use may offer hope to artists in the future, but it is too soon to tell.

  2. The nature of the copyrighted work

The second factor favors artists borrowing from copyrighted works with lower creative value. Unfortunately, music is typically found to be one of the most creative forms of copyrighted work. For example, a district court in UMG Recordings, Inc. v. MP3.Com, Inc. analyzing this second factor noted that the disputed material—copyrighted musical works—was “close[] to the core of intended copyright protection” and “far removed from the more factual or descriptive work more amenable to ‘fair use.’”

Though courts’ future inquiries into stem usage may differ from previous analyses of sample usage, the inquiry will likely change very little for this factor. Although a stem could potentially represent only a minute portion of the song, this factor’s inquiry focuses on the source of the stem, rather than the stem itself. Consequently, rarely will this factor work to a potentially infringing artist’s benefit, even if their stem usage is quite minor.

  3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole

The third factor, however, may offer hope for such minor stem usage. Courts will undoubtedly reach differing interpretations about how minimal the copyrighted portion’s “amount” and “substantiality” must be for this factor to weigh toward fair use. A court will need to weigh numerous variables and how they intersect. For example, is an artist using a thirty-second loop of a vocal stem or a five-second loop? Does that vocal stem include the chorus of a song? What about any distinct lyrics? Just minor humming? These considerations are not entirely novel. Artists purporting to use copyrighted samples have long been able to argue—with little success—that their samples’ amount and substance pass muster under this factor. Yet, stems are not samples; in fact, they typically represent a considerably smaller portion of a work. Given just how recent and novel their public accessibility is, it remains unclear whether a court would treat stem use any differently under this factor than it has treated instances of sample use. Carefully using a minor portion of a vestigial stem to avoid a work’s core substance, therefore, could potentially facilitate a favorable outcome.  

  4. The effect of the use upon the potential market for or value of the copyrighted work

The fourth and final factor of fair use is “[u]ndoubtedly the single most important element.” This factor examines both the infringement’s effect on the potential market and “whether unrestricted and widespread conduct of the sort engaged in by the defendant . . .  would result in a substantially adverse impact on the potential market for the original.”

In the sampling realm, this factor has tipped the scales before. For example, in Estate of Smith v. Cash Money Records, Inc., a district court found fair use when the defendants inserted a thirty-five-second “spoken-word criticism of non-jazz music” into a hip-hop track. In its analysis of this fourth factor, the court emphasized that “there [was] no evidence” that pointed to overlapping markets between the spoken jazz track and the hip-hop track. The court, noting this factor’s high probative value, then weighed this factor in the defendants’ favor.

In the stem realm, the novel nature of widespread public use means courts will need to determine both whether this factor should remain highly probative and how much deference to give stem users in analyzing market overlap. After all, an artist who incorporates a stem into a work intended for a twenty-five-person YouTube following likely affects the original work’s market differently than an artist who disseminates that work to millions of followers. This factor’s outcome will also rely on the stem’s source. Similar to sampling, if an artist uses a stem in a drastically different arena than the one for which the stem was created, this factor will weigh more toward fair use.

For example, suppose Artist A locates an insurance advertisement jingle. Artist A then extracts a stem from that advertisement audio and uses the stem in a new hip-hop track. The advertisement’s potential market is likely different from the hip-hop track’s potential market. Artist A’s work would likely have little impact, if any, on the advertisement’s market. Artist B, meanwhile, creates a hip-hop track but uses a stem from another hip-hop song produced twenty-five years ago. Though Artist B may believe the stem from the older hip-hop track no longer caters to the same hip-hop market to which Artist B is targeting, a court may be more inclined to find a material impact on the older track’s market: it would provide another way in which music listeners, particularly hip-hop listeners, could hear that older track. Nonetheless, it would remain up for a court to decide.

De Minimis Use

Artists might also be able to use extracted, copyrighted stems if such use is de minimis. The Ninth Circuit in Newton v. Diamond held de minimis use—“when the average audience would not recognize the appropriation”—is permissible. Yet, following the Newton decision, the Sixth Circuit in Bridgeport Music, Inc. v. Dimension Films foreclosed the possibility of de minimis copying. Similar to fair use analysis, it is difficult for an artist to determine whether the use of a stem in their work is de minimis under this standard.

Thus, an artist who loops only a small, relatively generic-sounding portion of a stem may find additional legal protection. But they might not. If they are in a jurisdiction that does not recognize de minimis use, or they use a stem in a way that extends beyond what a court considers de minimis in a de minimis jurisdiction, this avenue will be unavailable.

Content Recognition Software

Beyond legal defenses, a newer scheme of licensing deals coupled with content recognition software may offer protection for stem usage. For example, if a user on the content platform TikTok uploads content with copyrighted audio, TikTok’s content recognition software recognizes the audio, then pays the appropriate royalties to the audio’s copyright holder through preexisting licensing deals. Yet, because schemes like TikTok’s and stem usage are both relatively new, it remains unclear whether artists could find the same protection through individual stem use. Indeed, if an artist uses only a small part of a single stem, it may very well be impossible to detect the stem’s source; however, emerging technology may change this soon. Further, these licensing deals restrict such artists to sharing work only on particular platforms—notably, neither Soundcloud nor YouTube. Ultimately, this protection carries promising potential for expanded, authorized stem use. But perhaps not quite yet.
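As a rough sketch of the recognition-plus-licensing flow described above, the Python below matches an upload against an index of licensed works and logs a royalty event for the rights holder. Every name and the per-use royalty figure are hypothetical, and the exact hash is a stand-in for the perceptual fingerprints real systems use, which survive re-encoding and partial use.

```python
# Toy content-recognition flow: match uploaded audio against a fingerprint
# index, then log a royalty event under a preexisting licensing deal.
# Real systems use perceptual fingerprints robust to re-encoding; an exact
# hash is used here purely as a simplification.
import hashlib

def fingerprint(audio_bytes: bytes) -> str:
    return hashlib.sha256(audio_bytes).hexdigest()

# Hypothetical index mapping fingerprints of licensed works to rights holders.
rights_index = {
    fingerprint(b"<bytes of licensed track>"): "Example Rights Holder LLC",
}

royalty_log = []

def process_upload(audio_bytes: bytes) -> None:
    """If the upload matches a licensed work, record a royalty event."""
    holder = rights_index.get(fingerprint(audio_bytes))
    if holder is not None:
        royalty_log.append({"rights_holder": holder,
                            "event": "user_upload",
                            "royalty_usd": 0.005})  # hypothetical rate

process_upload(b"<bytes of licensed track>")
print(royalty_log)  # one logged royalty event for the matched work
```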

Matthew Danaher is a second-year law student at Northwestern Pritzker School of Law.