
Introduction

News headlines about facial recognition technology focus primarily on the government’s use and misuse of the technology. Likewise, technology companies and legislators frequently advocate against the government’s use of facial recognition tools to conduct mass surveillance or generate leads in investigations. For example, following widespread claims of the technology’s racial bias, Amazon, IBM, and Microsoft announced that they would stop selling facial recognition tools to law enforcement agencies. And amid incidents such as the wrongful arrest of an innocent black man who was falsely identified by facial recognition, major cities like San Francisco and Boston have banned law enforcement from using the technology.

However, as industry commentators focus on the government’s use of facial recognition tools, private businesses in the U.S. regularly deploy the technology to secretly surveil their customers. Companies rely on facial recognition to gather information about customers’ identities and demographics so they can tailor their marketing strategies, monitor customers within stores, or sell the information to third parties. Because there are no federal regulations governing the technology, commercial uses of facial recognition remain relatively unchecked, even as companies intrude on their customers’ privacy without any warning.

How Does Facial Recognition Technology Work?

Based on photos or still images, facial recognition technology scans, maps, and analyzes the geometry of a person’s face to verify their identity or collect information about their behavior. When mapping a face, the technology creates a mathematical formula, called a facial signature, based on the person’s distinct facial features, such as the distance between their eyes. Facial recognition systems can create and store facial signatures for each scanned image containing a face. When a user uploads a new photo, the system cross-references the generated facial signature with existing ones in the database and can verify the person’s identity with a matched signature.
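
To make the matching step concrete, the sketch below shows, in simplified Python, how a newly generated facial signature might be cross-referenced against a database of stored signatures. It is only an illustration of the general technique: the 128-number signatures, the 0.8 similarity threshold, and the small in-memory database are assumptions made for the example, and real systems derive signatures from trained neural networks rather than from random numbers.

    # Illustrative sketch only: compares facial signatures (numeric vectors),
    # not raw photos. Real systems produce these vectors with a trained
    # face-embedding model; here, random vectors stand in for real signatures.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Higher values mean two signatures are more alike (1.0 = same direction)."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(signature: np.ndarray, database: dict, threshold: float = 0.8):
        """Cross-reference a new signature against stored ones.

        database maps an identity label to the signature created when that
        person was first enrolled. Returns the best match if it clears the
        threshold, otherwise None (no confident identification).
        """
        best_id, best_score = None, -1.0
        for identity, stored in database.items():
            score = cosine_similarity(signature, stored)
            if score > best_score:
                best_id, best_score = identity, score
        return best_id if best_score >= threshold else None

    # Toy usage: a noisy re-scan of an enrolled face still matches its record.
    rng = np.random.default_rng(0)
    db = {"customer_123": rng.normal(size=128), "customer_456": rng.normal(size=128)}
    probe = db["customer_123"] + rng.normal(scale=0.05, size=128)
    print(identify(probe, db))  # prints customer_123

At the scale described later in this article (billions of scraped images), a one-by-one scan like this would be far too slow; deployed systems typically index the stored signatures so that near-matches can be retrieved quickly, but the underlying idea of comparing one signature against many stored ones is the same.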

Businesses have created databases of facial signatures to identify customers of interest in future video footage. In addition, businesses can use facial recognition software from companies like Clearview AI, which cross-references an uploaded photo against billions of public images to verify a person’s identity. Clearview AI is known to offer free trials of its software, luring businesses and rogue employees into using the technology. With such easy access to facial recognition software, private use of the technology has proliferated, hardly slowed by regulatory barriers.

Commercial Uses of Facial Recognition Technology

No matter the industry, facial recognition can help businesses glean more information about their customers, make informed business decisions, and increase their revenues. Shopping malls and mega-stores like Macy’s, Rite-Aid, Apple, and Walmart have used facial recognition to identify shoplifters, target loyal customers, and track customers’ reactions within the store. Amazon has sold facial recognition technology that assesses customers’ faces to discover whether they are attentive or indifferent to certain displays. While customers are surely aware these mega-stores have security cameras, they are likely unaware the stores may know their name and home address, how many times they have frequented the location, and whether they are happy with their in-store experience. Outside of retail, in cities like Miami, thousands of Uber and Lyft drivers have digital tablets in their backseats that use facial recognition to assess a rider’s age, gender, and other demographic traits and display ads tailored to the rider’s perceived characteristics.

In states without biometric privacy laws, any citizen who shops at a mall or grocery store, or attends a concert or sports game, will likely be subjected to facial recognition without ever suspecting it. Facial recognition tools can even identify an individual who rarely shows their face in public: Clearview AI created a facial recognition database by scraping ten billion images from public websites, then analyzed the images and developed facial signatures for nearly half the U.S. population.

As of 2020, more than 200 companies had accounts with Clearview, including professional sports leagues, casinos, fitness centers, and banks. These companies can upload a photo of an individual’s face — pulled from security footage or driver’s licenses — and cross-reference it against Clearview’s database to find a match. With limited regulation and easy access to facial recognition tools, consumers will face the technology’s adverse consequences, such as misidentifications and loss of privacy rights.

Misidentifications and Privacy Risks

How accurately facial recognition technology identifies a person depends on their age, gender, and race. Research from the National Institute of Standards and Technology revealed that facial recognition systems are less accurate when identifying people of color: the algorithms are more likely to misidentify African Americans, Native Americans, and Asians than Caucasians. Researchers have also found these algorithms to be less accurate when identifying women, transgender individuals, and children.

Misidentification can carry damaging consequences for an individual’s liberty and dignity. Robert Williams, the black man who was wrongfully arrested based on a facial recognition match, was a victim of misidentification. The same misidentifications are likely occurring at private establishments, where security guards use the technology to scan for known criminals and remove purported “matches” from their stores.

In addition to misidentifications, facial recognition technology intrudes on an individual’s right to privacy. The technology allows companies to identify customers without their consent, collecting information about customers’ demographics and preferences. Furthermore, companies that store facial templates are subject to data breaches, where thousands of their customers’ faceprints could become compromised. Unlike online passwords, a stolen faceprint is indefinitely compromised — a customer cannot change their faceprint. Last year, thousands of scammers in the U.S. tried using stolen faceprints to fraudulently obtain government-assistance benefits. As facial recognition technology grows, bad actors will attempt to use stolen faceprints for financial gain.

Federal, State, and Local Regulations

There are no federal regulations curbing the private use of facial recognition technology, but Congress’s interest in regulating the technology is growing. In the past few years, legislators introduced three separate bills to regulate facial recognition, yet none advanced beyond introduction.

One of the bills introduced in the Senate, the Commercial Facial Recognition Privacy Act, would have required all private entities to obtain explicit consent from customers before collecting faceprint data. The bill’s consent requirement is based on the Illinois Biometric Information Privacy Act (BIPA), one of only three state-enacted biometric privacy laws.

BIPA requires businesses that use facial recognition technology to obtain consent from consumers before collecting their faceprint data. It also requires these businesses to disclose how they protect and store the biometric data. BIPA permits individuals to sue companies that violate any requirement of the statute and provides significant statutory damages for violations. In February 2021, Facebook paid $650 million to settle a BIPA class-action lawsuit. To date, more than 800 BIPA class-action lawsuits have been filed against Illinois businesses.

Despite BIPA’s teeth, businesses can freely use facial recognition in almost every other state. Texas and Washington are the only other states with biometric privacy laws that regulate commercial use of the technology. Yet, neither state permits citizens to sue companies for violating the statute, meaning there is much less pressure to comply. Enforcement lies with each state’s attorney general, who can impose civil penalties on violators.

Fortunately, restrictions on private use are growing at the city level. In September 2020, Portland, Oregon, became the first municipality to ban private entities from using facial recognition in public places, such as shopping malls. Other cities have since taken action of their own: New York City now requires commercial establishments to post notices when using facial recognition technology, and Baltimore banned all private-sector use of the technology, even subjecting violators to criminal penalties. The recent wave of restrictions at the city level suggests that regulation may first take hold where the commercial sector flourishes: in major cities.

Calls for Regulation and Future Outlook

Despite the pervasive commercial use of facial recognition technology, sixty percent of Americans are unaware that retail stores use it. This lack of awareness stems in part from the lack of regulation. Aside from a few states and a handful of cities, most businesses are unregulated: free to implement facial recognition tools without warning their customers. So far, calls for regulation have come primarily from the companies that developed facial recognition technology themselves: Microsoft, IBM, and Amazon. While these calls may be aimed at shaping friendly regulations, Microsoft’s president, Brad Smith, has called for legislation requiring stores to provide notice and obtain consent, similar to BIPA’s consent requirement. As BIPA has shown, requiring businesses to obtain consent from consumers would at least hold them accountable for how they use the technology.

Nevertheless, some businesses may not wait for legislation before shelving their facial recognition products. In November 2021, Meta announced that Facebook will no longer use facial recognition software and plans to delete the faceprint data of one billion Facebook users. Meta’s decision was motivated by concerns about the technology’s “place in our society.” This drastic move may prompt other industry leaders to start shaping the future treatment of facial recognition technology, in the hope of clearing up the regulatory uncertainty that threatens innovation and investment. While some may question Meta’s sincerity or true motives, its decision could foreshadow an era of much-needed regulatory action.

Michael Willian is a third-year law student at Northwestern Pritzker School of Law.