
Introduction

News headlines about facial recognition technology primarily focus on the government’s use and misuse of the technology. Likewise, technology companies and legislators frequently advocate against the government’s use of facial recognition tools to conduct mass surveillance or generate leads in investigations. For example, following widespread claims of the technology’s racial bias, Amazon, IBM, and Microsoft announced that they would stop selling facial recognition tools to law enforcement agencies. And following the arrest of an innocent black man who was falsely identified by facial recognition, major cities like San Francisco and Boston banned law enforcement from using the technology.

However, while industry commentators focus on the government’s use of facial recognition tools, private businesses in the U.S. regularly deploy the technology to secretly surveil their customers. Companies rely on facial recognition to gather information about customers’ identities and demographics, which they use to tailor marketing strategies, monitor customers within stores, or sell to third parties. Since no federal regulations govern the technology, its commercial uses remain relatively unchecked, even as companies intrude on their customers’ privacy without any warning.

How Does Facial Recognition Technology Work?

Based on photos or still images, facial recognition technology scans, maps, and analyzes the geometry of a person’s face to verify their identity or collect information about their behavior. When mapping a face, the technology creates a mathematical representation, called a facial signature, based on the person’s distinct facial features, such as the distance between their eyes. Facial recognition systems can create and store facial signatures for each scanned image containing a face. When a user uploads a new photo, the system cross-references the generated facial signature with existing ones in the database and can verify the person’s identity with a matched signature.
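To make the matching step concrete, the sketch below treats each facial signature as a numeric vector and compares a newly generated signature against stored ones using cosine similarity. This is a simplified illustration only; commercial vendors do not publish their algorithms, and every name, value, and threshold here is a hypothetical assumption.

```python
# Illustrative sketch of facial-signature matching. Assumes each
# "signature" is a fixed-length numeric vector produced by some
# face-embedding model; the database and threshold are hypothetical.
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two facial signatures (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(query_signature, database, threshold=0.9):
    """Cross-reference a new signature against stored ones; return the
    best-scoring identity if it clears the threshold, else None."""
    best_id, best_score = None, threshold
    for person_id, stored_signature in database.items():
        score = cosine_similarity(query_signature, stored_signature)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Example: a tiny database of previously scanned faces.
db = {"customer_17": np.array([0.2, 0.9, 0.4]),
      "customer_42": np.array([0.8, 0.1, 0.6])}
print(find_match(np.array([0.21, 0.88, 0.41]), db))  # -> customer_17
```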

Businesses have created databases of facial signatures to identify customers of interest in future video footage. In addition, businesses can use facial recognition software from companies like Clearview AI, which cross-references an uploaded photo against billions of public images to verify a person’s identity. Clearview AI is known to offer free trials of its software, luring businesses and rogue employees into using the technology. With such easy access to facial recognition software, private use of the technology has proliferated, hardly slowed by regulatory barriers.

Commercial Uses of Facial Recognition Technology

No matter the industry, facial recognition can help businesses glean more information about their customers, make informed business decisions, and increase their revenues. Shopping malls and mega-stores like Macy’s, Rite-Aid, Apple, and Walmart have used facial recognition to identify shoplifters, target loyal customers, and track customers’ reactions within the store. Amazon has sold facial recognition technology that assesses customers’ faces to discover whether they are attentive or indifferent to certain displays. While customers are surely aware these mega-stores have security cameras, they are likely unaware these stores may know their name, home address, how many times they’ve frequented the location, and whether they are happy with their in-store experience. Outside of retail stores, in cities like Miami, thousands of Uber and Lyft drivers have digital tablets in their backseats that use facial recognition technology to assess a rider’s age, gender, and other demographic traits, then display ads tailored to the rider’s perceived characteristics.

In states without biometric privacy laws, any citizen who shops at a mall or grocery store, or attends a concert or sports game, will likely be subjected to facial recognition without suspecting it. Facial recognition tools can even identify an individual who rarely shows their face in public. Clearview AI created a facial recognition database by scraping ten billion images from public websites. From those images, Clearview developed facial signatures for nearly half the U.S. population.

As of 2020, more than 200 companies had accounts with Clearview, including professional sports leagues, casinos, fitness centers, and banks. These companies can upload a photo of an individual’s face — pulled from security footage or driver’s licenses — and cross-reference it against Clearview’s database to find a match. With limited regulation and easy access to facial recognition tools, consumers will face the technology’s adverse consequences, such as misidentifications and loss of privacy rights.

Misidentifications and Privacy Risks

How accurately facial recognition technology identifies a person depends on their age, gender, and race. Research from the National Institute of Standards and Technology revealed that facial recognition systems are less accurate when identifying people of color. The algorithms are more likely to misidentify African Americans, Native Americans, and Asians compared to Caucasians. Researchers also have found these algorithms to be less accurate when identifying women, transgender individuals, and children.

Misidentification can have damaging consequences for an individual’s liberty and dignity. Robert Williams, the black man who was wrongfully arrested based on a facial recognition match, was a victim of misidentification. The same misidentifications are likely occurring at private establishments, where security guards use the technology to scan for known criminals and remove purported “matches” from their stores.

In addition to misidentifications, facial recognition technology intrudes on an individual’s right to privacy. The technology allows companies to identify customers without their consent, collecting information about customers’ demographics and preferences. Furthermore, companies that store facial templates are subject to data breaches, where thousands of their customers’ faceprints could become compromised. Unlike online passwords, a stolen faceprint is indefinitely compromised — a customer cannot change their faceprint. Last year, thousands of scammers in the U.S. tried using stolen faceprints to fraudulently obtain government-assistance benefits. As facial recognition technology grows, bad actors will attempt to use stolen faceprints for financial gain.

Federal, State, and Local Regulations

There are no federal regulations curbing the private use of facial recognition technology, but Congress’s interest in regulating the technology is increasing. Legislators have introduced three separate bills to regulate facial recognition technology in the past few years, yet none advanced beyond introduction.

One of the bills introduced in the Senate, the Commercial Facial Recognition Privacy Act, would have required all private entities to obtain explicit consent from customers before collecting faceprint data. The bill’s consent requirement is based on the Illinois Biometric Information Privacy Act (BIPA), one of only three state-enacted biometric privacy laws.

BIPA requires businesses that use facial recognition technology to obtain consent from consumers before collecting their faceprint data. It also requires these businesses to provide information about how they protect and store the biometric data. BIPA permits individuals to sue companies that violate any requirement in the statute and provides significant statutory damages for violations. In February 2021, Facebook paid out $650 million to settle a BIPA class-action lawsuit. To date, more than 800 BIPA class-action lawsuits have been filed against Illinois businesses.

Despite BIPA’s teeth, businesses can freely use facial recognition in almost every other state. Texas and Washington are the only other states with biometric privacy laws that regulate commercial use of the technology. Yet, neither state permits citizens to sue companies for violating the statute, meaning there is much less pressure to comply. Enforcement lies with each state’s attorney general, who can impose civil penalties on violators.

Fortunately, bans on private use are growing at the city level. In September 2020, Portland, Oregon, became the first municipality to ban private entities from using facial recognition in public places, such as shopping malls. Since then, two other cities have followed suit. New York City now requires commercial establishments to post notices when using facial recognition technology, and Baltimore banned all private sector use of the technology, even subjecting violators to criminal penalties. The recent wave of restrictions at the city level indicates that regulations may first arise where the commercial sector flourishes — in major cities.

Calls for Regulation and Future Outlook

Despite the pervasive commercial use of facial recognition technology, sixty percent of Americans are unaware that retail stores use it. This lack of awareness stems in part from the lack of regulation. Aside from a few states and a handful of cities, most businesses are unregulated: free to implement facial recognition tools without warning their customers. So far, calls for regulation have primarily come from companies that have developed facial recognition technology themselves: Microsoft, IBM, and Amazon. While these calls may be aimed at influencing friendly regulations, Microsoft’s President Brad Smith has called for legislation requiring stores to provide notice and obtain consent, similar to BIPA’s consent requirement. As BIPA has revealed, requiring businesses to obtain consent from consumers would at least hold businesses accountable for their uses of facial recognition.

Nevertheless, some businesses may not wait for enacted legislation before shelving their facial recognition products. In November 2021, Meta announced that Facebook will no longer use facial recognition software and plans to delete the faceprint data of one billion Facebook users. Meta’s decision was motivated by concerns about the technology’s “place in our society.” This drastic move may prompt other industry leaders to start influencing the future treatment of facial recognition technology, with the hopes of clearing up the current regulatory uncertainty that threatens innovation and investment. While some may question Meta’s sincerity or true motives, its decision could foreshadow an era of much-needed regulatory action.  

Michael Willian is a third-year law student at Northwestern Pritzker School of Law.

I. Introduction

The COVID-19 pandemic has brought the issues of personal privacy and biometric data to the forefront of the American legal landscape. In an increasingly digital world, privacy laws are more important than ever. This reality is especially true in the context of remote workplaces, where employers have facilitated a digital migration through a variety of means. The platforms employers use can violate personal privacy by capturing and storing sensitive biometric information. In response, states across the nation are exploring solutions to the potential privacy issues inherent in the collection of biometric data. One of the first states to do so was Illinois, which enacted a standalone biometric privacy statute in 2008: the Illinois Biometric Information Privacy Act (“BIPA”). Today, BIPA is more relevant than ever and should act as a statutory blueprint for states looking to protect personal privacy and biometric data amid a global pandemic. Ultimately, though, BIPA must be supplemented by federal legislation drafted in its likeness to effectively protect individuals’ privacy on a national level.

II. Background of the Biometric Information Privacy Act

To fully understand BIPA and all its implications, one must appreciate the context in which it was enacted. The Illinois legislature passed BIPA in October 2008. The Act was passed in the immediate wake of the bankruptcy of Pay By Touch, a company which operated the largest fingerprint scan system in Illinois. Pay By Touch’s pilot program was used in grocery stores and gas stations, and its bankruptcy left users unsure of what would become of their biometric data – i.e., their fingerprints. “Biometric data – a person’s unique biological traits embodied in not only fingerprints but also voice prints, retinal scans, and facial geometry – is the most sensitive data belonging to an individual.”

Understandably, private citizens in Illinois and across the country want to safeguard their sensitive biometric data. With potential issues such as identity theft and data manipulation more prevalent than ever, people have plenty of incentives to ensure their unique identifiers stay private. In response to those concerns, legislatures have passed statutes to address biometric data and personal privacy. BIPA represents one of the most stringent of such acts in the country, setting strict requirements for the management of biometric identifiers in Illinois.

BIPA defines “biometric identifier” as (1) a retina or iris scan, (2) fingerprint, (3) voiceprint, or (4) a scan of hand or face geometry. Further, “biometric information” refers to any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual. The requirements outlined in Section 15 of the Act – which addresses the retention, collection, disclosure, and destruction of biometric data – implicate a slew of potential legal issues. The section stipulates that a private entity can collect a person’s biometric data only if it first informs the subject that a biometric identifier is being collected, informs them of the specific purpose and length of term it is being collected for, and receives a written release from the subject.

Further, the Act outlines the following concerning retention of such data:

(a) A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever comes first.

Thus, BIPA represents a statute narrowly aimed at maintaining the security of biometric data. While BIPA was relatively unknown in Illinois between 2008 and 2015, a wave of litigation has since swept through the state as employees began suing their employers. Such litigation was seemingly inevitable, as BIPA provides sweeping protection for individuals against biometric data abuse. The complexities of these issues, and the importance of potential legislative solutions, have only become clearer in the midst of a global pandemic.

III. Personal Privacy & Biometric Data in the COVID-19 Pandemic

The issues surrounding data privacy have become increasingly relevant in the ongoing COVID-19 pandemic, which effectively digitized the workplace as we know it. As the pandemic raged in the early months of 2020, workplaces around the globe were suddenly forced to migrate to an online work environment. An inevitable result of these newfound online worksites has been increased use of biometric data, as companies have had to make work-related information accessible online to facilitate remote work. Employment attorney Eliana Theodorou outlines the ensuing issues for companies undertaking such efforts in an article entitled “COVID-19 and the Illinois Biometric Information Privacy Act.” For example, Theodorou writes, “Some of these platforms involve video recording or access by fingerprint, face scan, or retina or iris scan, which may result in the capture and storage of sensitive biometric information.” Thus, the collection and retention of biometric data has necessarily increased as companies shifted online during the pandemic.

Potential privacy issues accompanying the storage of biometric data will become even more difficult to navigate as companies return to physical workplaces with the pandemic still raging. Per Theodorou, “As workplaces reopen, there will likely be an uptick in the collection of biometric data as employers turn to symptom screening technologies that collect biometric data.” This could include, for instance, contactless thermometers and facial recognition scanning technologies used for contactless security access. The issue will thus continue to be the collection and storage of sensitive biometric data as employers return to work with the newfound priorities of social distancing and limited contact. The reality is that biometric data collection is still relatively new, with its own specific set of issues and potential solutions. Personal privacy becomes ever harder to maintain in a digital world, where the use of biometric information is often a necessity both for remote access and for an in-person return to work. Ultimately, the risks associated with the collection of biometric data remain largely undefined or misunderstood by employers. That lack of understanding has been exacerbated by a global pandemic necessitating a digital work migration.

IV. Possible Solutions to the Privacy Issues Raised by COVID-19 and Remote Workplaces

Illinois has provided a stellar blueprint for biometric data privacy in BIPA. However, other states have been slow to follow. As of November 2021, only a handful of other states have enacted legislation aimed at the protection of biometric data. Texas and Washington, like Illinois, have passed broad biometric privacy laws. Other states like Arizona and New York have adopted more tailored biometric privacy approaches, while others have enacted laws specifically aimed at facial recognition technology. There are also proposed bills awaiting legislative approval in many more states. Ultimately, implementing widespread legislation on a state-by-state basis will be a slow and drawn-out process, rendering countless Americans’ biometric data vulnerable. Rather than continue this state-based campaign to solidify biometric data privacy, citizens must turn to the federal government for a more comprehensive and consistent solution.

The primary roadblock to legitimate privacy in the biometric information space is the lack of a centralized federal initiative to address it. “Despite its value and sensitivity, the federal government currently has no comprehensive laws in place to protect the biometric data of U.S. citizens.” The privacy issues inherent in the popularization of biometric data in pandemic-era remote workplaces demand federal attention. A wide-ranging statute applicable in all states is the first step in properly addressing these issues. Congress should look to BIPA as a blueprint, for it remains the only state biometric privacy law that includes a private right of action. It is unique in that regard, especially considering it was passed in 2008, and consequently provides the most aggressive statutory response thus far to potential privacy concerns. Whether a federal act is feasible remains unclear. In August 2020, Senators Jeff Merkley and Bernie Sanders introduced the National Biometric Information Privacy Act of 2020, which would impose nationwide requirements similar to those outlined in BIPA. The viability of such an act is doubtful, as previous privacy legislation has been difficult to pass. However, it is a sign of movement in the right direction: toward increased protection of personal privacy in a pandemic that has made biometric data more relevant and more at risk of improper management and manipulation.

Luke Shadley is a third-year law student at Northwestern Pritzker School of Law.

When Meta’s services went down this past October, users were unable to access all of Meta’s applications, including Instagram, Messenger, and WhatsApp. This digital outage had physical consequences, as some Meta employees got locked out of their offices. The effects rippled outside of Meta’s own ecosystem, as some consumers soon discovered they were unable to log in to shop on select e-commerce websites, while others quickly found out that they could no longer access the accounts used to control their smart TVs or smart thermostats. Drawn by the ease of using Facebook accounts to log into websites, users had come to allow their Facebook account to act as a kind of digital identity. The outage, along with revelations from a fortuitously timed whistleblower, reminded users just how much individuals and governments depend on the “critical infrastructure” Facebook provides. Lawmakers in the U.S. have struggled with the question of how Meta should be regulated, or how its power should be reined in. One step towards mitigating Meta’s power would be to develop alternative digital Identity Management (“IdM”) systems.

The Legal Role of Identification

Technology has been used to verify identity for hundreds of years. Back in the third century B.C.E., fingerprints, recorded in wax, were used to authenticate written documents. For centuries, identification technology has allowed strangers to bridge a “trust gap” by authenticating and authorizing.[1]

In the present day, IdM systems have become a critical piece of technology for governments, allowing for the orderly provision of a range of services, like healthcare, voting, and education. IdM systems are also critical for the individual, because they allow a person to “prove[] one’s status as a person who can exercise rights and demand protection under the law.” The UN went so far as to describe an individual’s ability to prove a legal identity as a “fundamental and universal human right.”

Currently, there are over one billion people who live in the “identity gap” and cannot prove their legal identity. Put another way, one billion people lack a fundamental, universal human right. What makes this issue more pernicious is that the majority of individuals in the identity gap are women, children, stateless individuals and refugees. The lack or loss of legal identity credentials is correlated with increased risk for displacement, underage marriage, and child trafficking. Individuals living in the “identity gap” face significant barriers to receiving “basic social opportunities.”

Identity in the Digital Age

The legal and social issues created by the “identity gap” are now evolving. In addition to the individuals who can’t prove their legal identity at all, there are over 3.4 billion people who have a legally recognized identification, but cannot use that identification in the digital world.

A 2017 European Commission Report found that an individual’s ability to have a digital identity “verg[es] on a human right.” The report then argued that one of the deep flaws of the internet is that there is no reliable, secure method to identify people online. The New York Times called this “one of [the] biggest failures of the… internet.” Still, proving digital identity isn’t just a human rights issue; it’s also critical for economic development. A McKinsey report posited that a comprehensive digital IdM system would “unlock economic value equivalent to 3 to 13 percent of GDP in 2030.”

Digital IdM systems, however, are not without risk. They are often developed in conjunction with biometric databases, creating systems that are “ripe for exploitation and abuse.”

IdM Systems

Centralized

The most common IdM scheme is a “centralized” system; in a centralized IdM scheme, a single entity is responsible for issuing and maintaining the identification and corresponding information. In centralized IdM schemes, identity is often linked to a certain benefit or right. One popular example in the United States is the Social Security Number (“SSN”); SSNs are issued by the Social Security Administration, which then uses each number to maintain information about what social security benefits an individual is eligible to receive. Having an SSN is linked to the right to participate in the social security system.

Centralized IdM schemes typically verify identity in one of two ways: via a physical document with anti-forgery mechanisms, or via a registry. These approaches have proved remarkably resilient because such credentials are easily stored for long periods of time and can be presented for many different purposes. Still, both approaches have shortcomings, including function creep[2] and lack of security.

Identity systems that rely on anti-forgery mechanisms, like signatures, watermarks, or special designs, have security flaws of their own. First, these documents require the checking party to validate every anti-forgery mechanism, which can demand considerable skill, time, and expertise. Additionally, once a physical identification is issued, the issuing party is generally unable to revoke or control the information. Finally, anti-forgery measures constantly need to be updated because forgers have strong incentives to create fake documents.

Another security shortcoming of centralized IdM systems is that they rely on registries to contain all their data. Registries are problematic because they have a single point of failure. If one registry is compromised, an entire verification system can be undone. For instance, if SSNs became public, the SSN would become worthless; the value is in the secrecy.

Equally significant is the possibility of function creep, which can happen when a user loses control of their identification. SSNs, for example, were designed for a single purpose: the provisioning of social security benefits. Now, SSNs serve as a ubiquitous government identifier that is “now used far beyond its original purpose.” This is problematic because SSNs contain “no authenticating information” and can easily be forged. It’s not just governments, however, that allow function creep in centralized IdM systems. This happens for privately managed identity systems as well, as the Facebook hack showed.

The Alternatives: Individualistic and Federated IdM Models

Another type of IdM system is an individualistic or “user-centric” system. The goal of these systems is to allow the user to have “full control of all the transactions involving [their] identity” by requiring a user’s explicit approval of how their identity data is released and shared. Unlike those in “centralized” schemes, these types of identification do not grant any inherent rights. Instead, they give individuals the ability to define, manage, and prove their own identity.

To date, technical hurdles have prevented the widespread adoption of these “user-centric” systems. Governments and private companies alike have proposed using blockchain to create IdM systems that allow individuals to access their own data “without the need of constant recourse to a third-party intermediary to validate such data or identity.” There is hope that blockchain can provide the technical support to create an “individualist” IdM system that is both secure and privacy-friendly. Still, these efforts are in their infancy.
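To make the “user-centric” idea concrete, the sketch below shows a minimal challenge-response exchange: a verifier issues a random challenge, and the individual proves control of their identity by signing it with a private key only they hold. This is a simplified illustration of the pattern underlying self-sovereign identity proposals, not any particular blockchain system, and it assumes the verifier already trusts the user’s public key.

```python
# Minimal challenge-response sketch of "user-centric" identity proof.
# Assumes the verifier already trusts the user's public key; uses the
# third-party `cryptography` package (pip install cryptography). No
# blockchain or registry appears in this simplified illustration.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# User side: the private key never leaves the individual's control.
user_key = Ed25519PrivateKey.generate()
user_public_key = user_key.public_key()  # shared with verifiers in advance

# Verifier side: issue a fresh random challenge to prevent replay attacks.
challenge = os.urandom(32)

# User side: sign the challenge to prove control of the identity.
signature = user_key.sign(challenge)

# Verifier side: check the signature against the known public key.
try:
    user_public_key.verify(signature, challenge)
    print("Proof accepted: signer controls the private key.")
except InvalidSignature:
    print("Proof rejected: signature does not match.")
```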

The last major type of IdM system is a federated model. Federated IdM systems require a high degree of cooperation between identity providers and service providers; the benefit is single sign-on (SSO) capability, whereby a user can use credentials from one site to access other sites. This is similar to the Facebook model of “identity.” The linchpin of any such system, however, is the “trusted external party” who acts as the verifier. The risk is that these systems lack transparency, meaning users might not know how their data is used.
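As a rough illustration of that trust relationship, the sketch below shows a toy version of federated verification: an identity provider signs an assertion about a user, and a cooperating service provider admits the user by checking that signature rather than holding its own credentials. Real SSO protocols such as SAML and OpenID Connect are far more elaborate; the shared key and function names here are purely hypothetical.

```python
# Toy sketch of federated identity: a service provider trusts assertions
# signed by an identity provider instead of managing its own passwords.
# The shared HMAC key stands in for the real trust machinery of SSO
# protocols like SAML or OpenID Connect; all names are hypothetical.
import hashlib
import hmac
import json

IDP_KEY = b"secret-shared-between-idp-and-service"  # hypothetical trust anchor

def issue_assertion(user_id):
    """Identity provider: vouch for a user by signing their identifier."""
    payload = json.dumps({"sub": user_id})
    tag = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_assertion(assertion):
    """Service provider: accept the user only if the signature checks out."""
    expected = hmac.new(IDP_KEY, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, assertion["signature"]):
        return json.loads(assertion["payload"])["sub"]
    return None

# A user signs in once with the identity provider, then reuses the assertion.
token = issue_assertion("alice@example.com")
print(verify_assertion(token))  # -> alice@example.com
```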

Conclusion

Using Facebook to verify identity online is quick and easy. Yet this system is inadequate. An individual’s ability to state, verify, and prove their digital identity will be “the key to survival,” particularly given how difficult it is to create trust in the digital space. Proving identity is a technical problem, but it is closely linked with an individual’s ability to act as a citizen, in person or online. Governments and corporations alike have recognized the importance of improved digital identity systems and have begun advocating for more standardized approaches. Detractors of digital identification systems argue that an individual’s identity should not depend on the conferral of documents by a third party, and that relying on such documents is contrary to the idea that humans have inherent rights. They point to examples of authoritarian governments that use identity tracking for repressive purposes. These criticisms ignore the reality that proving identification is already an essential part of life and that many rights are conferred only on those who hold the proper identification. Further, these criticisms fail to recognize that superior identification systems would provide benefits that accrue to society as a whole: they could be used to record vaccination status, fight identity fraud, or even create taxation systems based on consumption.

Identification and identity are closely linked. As we transition towards even more digital services, taking steps to ensure that we have control over our digital identity will be more than a technology or privacy problem. Our ability to have and control our identity will continue to be a key driver of social and economic mobility. 


[1] In this context, authentication is the ability to prove that a user is who they say they are, and an authorization function shows that the user has the rights to do what they’re asking to do.

[2] Function creep occurs when a piece of information or technology is used for purposes beyond those for which it was originally intended.

Henry Rittenberg is a 2nd year student in Northwestern’s JD-MBA program.