Mythbusting Facial Recognition: Separating Fact From Fiction

Although facial recognition is frequently in the news, and often not in a good way, a recent Pew Research poll found that nearly 75% of Americans have heard little or nothing about the technology. This lack of clarity about what facial recognition can actually do, combined with the near-constant highlighting of its alleged pitfalls, has given rise to a number of frightening misconceptions about the technology. With all the noise surrounding this powerful, emerging field, we'd like to dispel five of the most common and surprisingly persistent myths about facial recognition, and show how its responsible use can help solve crime, make spaces safer, and protect individual rights.

Myth 1: Facial recognition software can identify everyone

You go to your local drug store, and as you stand in line to pay, shampoo bottle in hand, you notice a sign above the cash register that you'd never seen before: facial recognition is in use. Your mind starts racing. Does this store know my name? Do they know how often I come here? The answer is no. Unless you've opted into a facial recognition database, or have been added to a security watchlist as a "person of interest," you cannot be identified through facial recognition.

That's because facial recognition algorithms work by matching faces against other faces in a database: if there's no match, your face cannot be identified and is simply ignored. Now you may be wondering: who qualifies as a person of interest? In accordance with local biometric security regulations, anyone known to be a threat can be added to a watchlist. This list is vital to security teams and their mission to protect people and spaces, because the teams are instantly alerted when these threats enter a store or other commercial space.

Known threats can take many forms, such as:

  1. An ex-employee who makes violent threats against their former co-workers
  2. An abusive parent who shows up at school during the day
  3. A convicted criminal (e.g., shoplifter)  
  4. A hooligan who has instigated fights and physical violence in the stands
  5. Gamblers identified as “self-excluders” who are banned from casinos because of their gambling addiction

So you can leave your local drugstore with peace of mind. Because you're not a threat, and because you've never opted into their database, the facial recognition engine doesn't know who you are. Better still, you can feel more secure knowing that the store's security team is automatically alerted whenever a known threat, someone who could potentially harm you or steal from the retailer or commercial enterprise, enters the premises.

Myth 2: Facial recognition poses a major privacy risk

Contrary to the mainstream narrative, facial identification data is more secure than other unique biometric identifiers, such as fingerprints, as well as non-biometric identifiers such as Social Security numbers or alphanumeric passwords.

This is because facial recognition engines, which are built on neural networks, work by translating digital images of faces into long strings of 250 or so seemingly random numbers and letters. So while a facial recognition system appears to be comparing a face seen on a camera to an image saved in a database, it's actually comparing a new string of numbers and letters to a database of known strings. No descriptive data about the face itself is stored in the database, and all of this information is encrypted, further protecting it from prying eyes.

If there's a data breach, the hacker will be left with these sequences of numbers and letters, along with names, not a set of facial images. This data is practically impossible to reverse-engineer back into an image, and because the neural networks that create these sequences are proprietary, the stolen data would hold virtually zero commercial value.
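To make the "strings of numbers, not images" idea concrete, here is a minimal sketch of how a matching step might work. The 256-dimensional vectors, the cosine-similarity metric, and the 0.6 threshold are illustrative assumptions, not details of any particular vendor's engine:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, watchlist: dict, threshold: float = 0.6):
    """Compare a probe embedding to enrolled embeddings; return the best
    watchlist name above the threshold, or None if the face is unknown."""
    best_name, best_score = None, threshold
    for name, enrolled in watchlist.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy data: enrolled "faces" are just vectors; no image is ever stored.
rng = np.random.default_rng(0)
alice = rng.standard_normal(256)
bob = rng.standard_normal(256)
watchlist = {"alice": alice, "bob": bob}

print(best_match(alice + 0.05 * rng.standard_normal(256), watchlist))  # a noisy probe of alice still matches
print(best_match(rng.standard_normal(256), watchlist))                 # an unenrolled face returns None
```

Note that even a perfect copy of this database gives an attacker only vectors: without the proprietary network that produced them, there is no way back to a face.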

Some facial recognition software, including ours, also includes built-in privacy features such as GDPR mode (which blurs the faces of non-watchlist detections) and privacy mode (which discards all face detections of non-enrolled individuals). Together, these two advanced privacy features ensure that:

  1. Only the data of known threats is stored, and the personal data of all other detections is deleted
  2. System operators can only view detections of known individuals on a watchlist. This means that if you're not on a watchlist, your image and its associated data will never be visible to another human

On a more abstract level, facial recognition protects privacy by largely eliminating the need for manual investigations that invade privacy.

Let’s compare the workflows of two retail store security teams—one that doesn’t use facial recognition technology and another that does.

In both cases, the teams are on the lookout for a white man in his late 40s who recently shoplifted $2,000 worth of goods. The team has an image of the individual's face.

Security Team 1 Workflow (CCTV cameras in use with no facial recognition)

  1. A white male who appears to be in his late 40s enters the store
  2. The security guard is put on high alert. Although the guard has seen an image of the suspect's face, there's a list of 100 other faces he needs to be on the lookout for, and it's easy to forget what a specific face looks like. It's safer for him to keep this man on his radar than to ignore a possible suspect, so the guard secretly watches the man and tracks his movements throughout the store

This individual's privacy has been violated, as his behavior has been watched without his consent.

Security Team 2 Workflow (with AnyVision facial recognition technology)

  1. The white male in his late 40s enters the store.
  2. The security guard does not get an automatic alert informing him that a person of interest has entered the store, meaning this man does not pose a threat. The security guard is not on high alert and does not track this individual’s movements throughout the premises.

This individual’s privacy has not been violated.

Myth 3: Facial recognition software is inherently biased against people of color

A number of recent news articles, including this one from Wired, make the case that facial recognition is inherently racist because, in some instances, its algorithms are less accurate at identifying people with darker skin tones. They cite an outdated statistic that Black women are five times more likely than white men to be misidentified by facial recognition algorithms.

While we recognize that some algorithms show unacceptable levels of racial bias, a deeper look reveals that market-leading algorithms, including ours, are far less prone to this bias, and in fact can prevent bias in security settings. The following four points dispel the myth that facial recognition software is inherently racist.

  1. A recent study performed by NIST, the National Institute of Standards and Technology, looked at the top facial recognition algorithms in the world and found “no good evidence for a difference in the face detection or failure-to-enroll rate between the African-American and Caucasian cohorts.”
  2. Facial recognition accuracy, as a whole, is significantly improving with time. As of April 2020, the top facial identification algorithms had an error rate of just 0.08%, compared to 4.1% in 2014. This improvement was uniform across all races.
  3. AnyVision held the Fair Face Recognition Workshop and Challenge in late 2020, which evaluated the accuracy and bias of facial recognition algorithms with regard to gender and race on 1:1 face verification. The challenge found that the top 10 facial recognition teams exceeded 99.9% accuracy and were “able to minimize bias to the point where it was almost negligible.”
  4. Facial recognition, by its very nature, is intended to minimize racial bias by taking much of the guesswork and “gut feeling” out of security operations. Without the aid of facial recognition, security teams may spend their time stalking and investigating individuals who appear to be “suspicious.” Unfortunately, there is often bias by the security guards themselves when it comes to determining which individuals are deemed suspicious.

With facial recognition technology, there simply isn't room for this sort of bias. The algorithm has a watchlist of faces, and when there's a match from a live camera feed, an alert is created. No judgment calls about who might look suspicious come into play.
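The flow described above can be sketched in a few lines. This is a simplified illustration under our own assumptions (the detection list, match function, and alert record are hypothetical), not any vendor's actual pipeline, but it shows why there is no room for a judgment call: every face goes through exactly the same check.

```python
from typing import Callable, List, Optional

def process_detections(detections: List[str],
                       match: Callable[[str], Optional[str]]) -> List[dict]:
    """Every detected face goes through the exact same check: a watchlist
    match produces an alert, anything else is discarded. No human decides
    who looks 'suspicious'."""
    alerts = []
    for face in detections:
        name = match(face)
        if name is not None:
            alerts.append({"watchlist_name": name})
        # non-matches are dropped here, mirroring a privacy-mode setting
    return alerts

# Toy match function: the watchlist contains only "known_threat".
toy_match = lambda face: face if face == "known_threat" else None
print(process_detections(["shopper", "known_threat", "shopper"], toy_match))
# prints [{'watchlist_name': 'known_threat'}]
```

The two shoppers never generate any record at all; only the enrolled threat produces an alert for the security team.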

Myth 4: Facial recognition is only used for security purposes

While the watchlist alerting use case for facial recognition is the easiest to understand, the technology can enhance the wellbeing of society in a variety of other ways, including:

  1. Recognizing VIPs at sporting events, casinos, and other venues. This allows customer service teams to immediately approach VIPs with their favorite drink or snack in hand, leading to greater customer loyalty.
  2. Reuniting lost or kidnapped individuals with their families. In 2020, a middle-aged man who was abducted as a toddler was reunited with his family after 32 years with the help of facial recognition technology.
  3. Diagnosing diseases that cause changes in facial appearance. Researchers with the National Human Genome Research Institute were able to use facial recognition to accurately detect 96.6% of cases of DiGeorge syndrome, a rare genetic disease that often goes undetected for years.
  4. Tracking time and attendance at school and work. Using facial recognition for these tasks mitigates the risk of time theft and buddy punching, and ensures students and workers are where they say they are when they're supposed to be there.
  5. Validating identity at airports and ATMs. Facial recognition can instantaneously confirm that people are who they say they are, allowing them to board flights and withdraw cash more efficiently and securely.

Myth 5: Americans want to severely limit the use of facial recognition

Despite the barrage of negative media stories about facial recognition technology, Americans largely support its use, especially for security applications. The Center for Data Innovation recently surveyed 3,151 US adults, and the findings cut sharply against common perception.

  • Only 26% of Americans think the government should strictly curb the use of facial recognition
  • Just 18% of Americans think the government should strictly limit facial recognition if doing so comes at the expense of public safety
  • Only 22% of Americans agree that the government should forbid the use of facial recognition in retail stores if it can reduce shoplifting
  • Only 24% of Americans disagreed with the statement that police departments should be allowed to use facial recognition technology to help find suspects if the software is accurate 90% of the time

A separate 2020 survey of 1,000 Americans conducted by Schoen Cooperman Research also found that Americans view facial recognition more positively than the media would lead you to believe. Here are some of the key results:

  • Nearly 60% of Americans have a favorable view toward facial recognition technology
  • 70% of Americans believe facial recognition technology is accurate in identifying people of all races and ethnicities
  • 70% of Americans support the use of facial recognition in office buildings
  • 66% of Americans believe facial recognition searches by investigators are non-invasive and appropriate

Conclusion

Facial recognition is an extremely promising tool that the American public largely misunderstands. While there certainly has been some misuse of the technology, we believe those instances are outweighed by positive outcomes: identifying violent criminals before they cause harm, securing vulnerable spaces, and greatly improving the overall effectiveness of security and access control teams. Further, the responsible use of facial recognition:

  • Captures the data of known threats, not ordinary citizens
  • Provides organizations with one of the most secure sources of identifying data possible
  • Prevents bias rather than amplifying it

Of course, it's imperative that commercial entities and government bodies alike use this technology ethically, and we've provided guidance on how to do so here. We think the tide is turning with regard to public perception of facial recognition, as recent polls show growing support. Even so, we must continue working to provide a balanced view of this technology and to understand how and when to harness its power to protect employees, customers, and profits.


About the Author

Powered by Vision AI, Oosto provides actionable intelligence to keep your customers, employees, and visitors safe.
