Debunking Popular Myths about Facial Recognition in Retail

Introduction

In our previous blog post, we made the case that employing a facial recognition system in your store is an effective way to combat rising retail crime without compromising the shopping experience.

Unfortunately, the public’s perception of facial recognition, largely driven by the media, has prevented the broad adoption of this technology. It’s also important to note that most of the negative publicity surrounding facial recognition concerns the use of the technology by law enforcement or government agencies in public surveillance settings. Some of this negative perception has bled into the retail space, which is an entirely different use case. This has led to confusion and to some loud and often misguided criticisms of the technology.

There are numerous petitions calling for the technology to be banned, but the rationale cited is faulty: some claim that facial recognition algorithms invade personal privacy and are biased, while others cite individual instances of irresponsible use of the technology. Let’s take a closer look at each of these claims.

Facial recognition monitors your every move

Responsible facial recognition does not monitor a person’s every move. Instead, it’s used to alert security teams when known threats enter store premises and to retroactively search for known offenders after an incident has taken place.

In fact, some facial recognition software, including Oosto’s AI algorithms, includes built-in privacy features such as GDPR mode (which blurs the faces of non-watchlist detections) and privacy mode (which discards all face detections of non-enrolled individuals). Together, these two advanced privacy features ensure that:

  • Only data on known threats is stored, and personal data from all other detections is deleted.
  • System operators can only view detections of known individuals on a watchlist.

Therefore, the 99.9% of individuals who are not on a watchlist do not have their every move monitored, and their images and associated data will never be visible to another human.
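
To make these two modes concrete, here is a minimal Python sketch of how such controls might work in principle. It is illustrative only, not Oosto’s implementation; the function name, the detection format, and the use of OpenCV for blurring are all assumptions.

    import cv2  # OpenCV, used here only for the illustrative blur step

    def apply_privacy_controls(frame, detections, watchlist_ids, mode="gdpr"):
        """Apply GDPR-mode or privacy-mode handling to face detections.

        detections: list of (x, y, w, h, person_id) boxes, where person_id
        is None for faces that did not match anyone on the watchlist.
        Returns the (possibly redacted) frame and the detections to retain.
        """
        retained = []
        for (x, y, w, h, person_id) in detections:
            if person_id in watchlist_ids:
                # Enrolled individual: keep the detection and raise an alert
                retained.append((x, y, w, h, person_id))
            elif mode == "gdpr":
                # GDPR mode: blur the face of a non-watchlist detection
                face = frame[y:y + h, x:x + w]
                frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
            # Privacy mode: discard the detection entirely; store nothing
        return frame, retained

Either way, nothing about a non-enrolled shopper is kept for an operator to review.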

All facial recognition algorithms are biased

Furthermore, it’s been demonstrated that today’s leading facial recognition algorithms are not all biased. In late 2020, Oosto held the Fair Face Recognition Workshop and Challenge, which evaluated the accuracy and bias of facial recognition algorithms with regard to gender and race in 1:1 face verification. The challenge found that the top-10 facial recognition teams exceeded 99.9% accuracy and were “able to minimize bias to the point where it was almost negligible.”

Facial Recognition Helps Reduce Some of the Inherent Bias of Human-Based Surveillance

Facial recognition, by its very nature, is intended to minimize racial bias by taking much of the guesswork and “gut feeling” out of security operations. Without the aid of facial recognition, security teams may spend their time trailing and investigating individuals who appear “suspicious.” Unfortunately, security guards often bring their own inherent biases to determining which individuals are deemed suspicious, or to which faces they end up remembering from the store’s list of known shoplifters.

With facial recognition technology, there simply isn’t room for this sort of bias. An algorithm has a watchlist of faces, and when there’s a match from a live camera feed, an alert is created. No judgment calls about who may appear “suspicious” come into play.
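
For readers curious about the mechanics, the matching step described above is typically implemented by comparing face embeddings against the enrolled watchlist. The sketch below illustrates the idea; the cosine-similarity approach, the 0.6 threshold, and all function names are assumptions for illustration, not Oosto’s actual pipeline.

    import numpy as np

    def match_against_watchlist(face_embedding, watchlist, threshold=0.6):
        """Compare one face embedding against enrolled watchlist embeddings.

        watchlist: dict mapping person_id to a unit-normalized embedding.
        Returns the best-matching person_id, or None when no similarity
        clears the threshold (i.e., the face is simply ignored).
        """
        query = face_embedding / np.linalg.norm(face_embedding)
        best_id, best_score = None, threshold
        for person_id, enrolled in watchlist.items():
            score = float(np.dot(query, enrolled))  # cosine similarity
            if score > best_score:
                best_id, best_score = person_id, score
        return best_id

    def on_face_detected(face_embedding, watchlist, alert):
        person_id = match_against_watchlist(face_embedding, watchlist)
        if person_id is not None:
            alert(person_id)  # hand off to the security team for review

The face either matches an enrolled identity or it doesn’t; a shopper’s appearance, behavior, or demeanor plays no role in the decision.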

Reducing False Positives

When calling for a ban on facial recognition technology, advocates often point to one-off incidents when the wrong individual was identified and subsequently arrested by law enforcement. Admittedly, just one of these instances is too many—and we at Oosto recognize that with great power comes great responsibility.

While facial recognition is a powerful and effective tool for identifying threats, it cannot be the only tool. That’s why we suggest that when a known threat is identified and triggers an alert, security or service teams approach the individual to confirm the match and then follow their store’s own security procedures for dealing with known bad actors. Security teams and law enforcement officials need to use facial recognition in partnership with the other investigative methods in their toolbox, not as a tool for making definitive decisions.

In summary, the wrongful arrests that detractors cite typically occurred in public surveillance settings (not retail environments) and were often the result of poor investigative processes, not shortcomings of the facial recognition software itself.

Steps to ensure responsible use of facial recognition

Here are four concrete steps to ensure that your store employs facial recognition technology in a responsible manner.

1. Informing customers

Communication is key. Clearly visible signs should be posted at store entrances informing customers that facial recognition technology is in use. Some cities, such as New York City, have laws that mandate these signs, but we believe it’s a best practice to post them at all locations where the technology is in use.

We wrote a guide on how to craft this type of signage here. Ultimately, this signage should emphasize that facial recognition is being used to identify known security threats, not spy on ordinary shoppers.

2. Start with an empty database

We don’t provide watchlists to our customers. Instead, we recommend building watchlists from the ground up based on known felons, previous offenders, and dangerous ex-employees. In essence, lists of suspects should be limited and justified for each retail location. This way, unwarranted invasions of citizens’ privacy can be prevented, false arrests can be reduced, and public confidence in the technology can be strengthened.

3. Data security

You need to ensure that the biometric data you’re collecting is safe from prying eyes and has the proper safeguards in place to defend against breaches. We recommend the following with regard to data security:

  • Regularly purging data that is no longer needed (see the sketch after this list).
  • Encrypting the data.
  • Utilizing GDPR Mode and Privacy Mode. GDPR Mode effectively blurs all faces of people not explicitly listed on an organization’s watchlist: when this feature is activated, only known threats are visible, and all other people in the camera’s field of view are blurred. Privacy Mode goes even further, discarding all detections of non-enrolled individuals.
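
As one illustration of the first recommendation, the sketch below shows a scheduled retention purge. The 30-day window and the record layout are assumptions; actual retention periods should follow local law and your own written policy.

    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=30)  # assumed window; follow local regulation

    def purge_expired_detections(detections):
        """Drop stored detections older than the retention window.

        detections: list of dicts, each with a timezone-aware 'captured_at'
        datetime. Returns only the records still within the window.
        """
        cutoff = datetime.now(timezone.utc) - RETENTION
        return [d for d in detections if d["captured_at"] >= cutoff]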

4. Operational diligence

When a match is made and an alert is created, security teams and law enforcement officials must act responsibly and determine whether any other potential matches should be investigated further before rushing to apprehend a specific individual. Once again, the technology should not be used to determine someone’s guilt or innocence.
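
One way to operationalize this diligence is for the alerting system to surface several candidate matches with their confidence scores rather than a single binary answer, leaving the final judgment to a human. Here is a minimal sketch, under the same illustrative embedding assumptions as the earlier example:

    import heapq
    import numpy as np

    def top_candidate_matches(face_embedding, watchlist, threshold=0.6, k=3):
        """Return up to k watchlist candidates above the threshold, best first.

        Surfacing several plausible candidates, rather than one
        'definitive' answer, nudges reviewers to weigh alternatives
        before acting on an alert.
        """
        query = face_embedding / np.linalg.norm(face_embedding)
        scored = [(float(np.dot(query, emb)), pid)
                  for pid, emb in watchlist.items()]
        candidates = [(s, pid) for s, pid in scored if s >= threshold]
        return heapq.nlargest(k, candidates)  # [(score, person_id), ...]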

Oosto is happy to share best practices and established protocols from our experience with leading retailers. We also wrote an eBook entitled “The Rise of Ethical Facial Recognition” that you can read here.

Conclusion

As we argued in the first part of this blog series, facial recognition technology is becoming a critical tool for retailers looking to curb losses while also preserving the customer experience.

However, there are a number of popular misconceptions about the technology that have stunted its adoption. With a better understanding of how the technology actually works (i.e., it’s not tracking the movements of innocent shoppers), consumers can have greater trust and confidence that these emerging technologies are designed to enhance the shopping experience and protect the store and its staff from commercial and physical threats.

Oosto’s market-leading facial recognition algorithms are designed to strike this delicate balance and to do so in a fair, unbiased, and ethical way. This powerful technology can help protect customers, employees, and profits.

If you would like to learn more, please contact us to see how this emerging technology can help protect your retail environment and brand.


About the Author

Isaac Shapot

Isaac Shapot serves as a New Business Lead for Oosto, where he is responsible for business development activities related to Oosto’s AI-based solutions for access control, video monitoring, and analytics in the North American region. Previously, Isaac worked at Walmart eCommerce and Forge Health. He is a graduate of Vanderbilt University, having obtained a bachelor’s degree in Human and Organizational Development.
