What Is Facial Biometrics?
Facial biometrics refers to the measurement of facial characteristics for the purpose of identifying particular individuals. In turn, facial recognition uses biometric data to verify a person’s identity, enabling security personnel to adjudicate the appropriate response to a potential threat. Although facial recognition technologies have been around for decades, recent advances in artificial intelligence (AI) have made it feasible to employ facial recognition security systems on a broader scale.
As described by Wired magazine, facial biometric security systems “include video analytics and analyze video footage…[in order to] detect abnormal activities [and persons] that could pose a threat to an organization’s security.” AI security software “learns” what and who is normal in a particular context. The software identifies persons and behaviors that human security personnel might miss. As a learning application, facial recognition software continues to grow in intelligence over time. Your security system becomes more sophisticated and accurate in identifying threats as it receives more data.
How we train AI facial recognition software to identify persons of interest makes all the difference when it comes to its accuracy, as well as the ethical use of this technology. Remember: it is up to security officers to adjudicate the biometric data that our AI security systems provide. The technology itself is indifferent to our human biases.
Later in this article, we will address concerns related to bias in the identification of persons and the potential threats to civil liberties that this technology poses. However, we note at the outset that the Chinese telecommunications giant Huawei has been testing AI software that would scan facial images to identify Uighurs, China’s Muslim minority population, who have been subjected to brutal crackdowns by the government.
When it comes to matters of the accuracy and ethical use of facial recognition video security, it is largely up to human operators to make sure our systems remain unbiased.
The Types of Facial Biometric Security
Facial recognition technology falls into three broad categories: investigative face matching, real-time biometric analysis for situational awareness, and facial authentication for access control.
Face Matching for Investigative Purposes and Business Intelligence
Face-match technology has been around for decades, employed in retail security and business intelligence. Takeo Kanade, one of the pioneers of face recognition, demonstrated a computer program in 1970 that could calculate distance ratios between anatomical features of the human face, such as the eyes, nose, and chin.
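The appeal of distance ratios is that they are scale-invariant: the same face yields the same ratios whether the photo is large or small. A minimal sketch of the idea, using hypothetical landmark names and coordinates for illustration:

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def distance_ratio(landmarks, a, b, c, d):
    """Ratio of the distance a-b to the distance c-d between facial
    landmarks, e.g. eye spacing relative to nose-to-chin length.
    Ratios, unlike raw distances, do not change with image scale."""
    return dist(landmarks[a], landmarks[b]) / dist(landmarks[c], landmarks[d])

# Illustrative landmark coordinates (not real data):
face = {
    "left_eye": (0.0, 0.0),
    "right_eye": (4.0, 0.0),
    "nose": (2.0, 2.0),
    "chin": (2.0, 10.0),
}
ratio = distance_ratio(face, "left_eye", "right_eye", "nose", "chin")
```

Comparing a handful of such ratios across photos was an early, crude form of face matching; modern systems replace hand-picked measurements with learned features.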
Face matching is typically a cloud application that searches a gallery of video images for a specific face. In this context, facial recognition requires considerable human intervention: face-matching programs reduce the number of photos for security personnel to review, but they do not produce an exact match. Rather, face-matching systems generate probability estimates: they select a set of candidate images from a gallery of potentially thousands of photos.
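This candidate-set behavior can be sketched in a few lines. The example below assumes each gallery image has already been reduced to a numeric face embedding (as modern systems do with a neural network); the names and data are illustrative, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class GalleryEntry:
    image_id: str
    embedding: list  # face embedding vector, e.g. from a neural network

def cosine_similarity(a, b):
    """Similarity score between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def match_candidates(probe, gallery, top_k=5):
    """Return the top-k most similar gallery images with their scores.

    The output is a ranked candidate set for human review --
    probability estimates, not a definitive identification."""
    scored = [(entry.image_id, cosine_similarity(probe, entry.embedding))
              for entry in gallery]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

Note that the function returns scores alongside image IDs: the whole point is that a human reviewer, not the algorithm, makes the final call.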
Traditionally, face-match technology has assisted security teams to track persons of interest in such locations as a building, a stadium, a theme park, or even in a city. For retail businesses, face-matching biometrics helps to identify an individual’s buying trends. Face-matching biometrics also answer questions such as why a customer went to a particular vendor in a stadium or where an intoxicated person purchased their last beverage.
In short, face matching primarily enables businesses to improve security and enhance productivity by focusing on consumer behaviors.
Real-Time Facial Recognition
Many of today’s facial recognition and visual identification systems are capable of producing real-time intelligence for assessing threats and incidents as they arise. One of the major differences between traditional face-match technologies and real-time facial recognition is that the latter requires significantly more computing power and a much larger reference database to produce intelligence in real time.
Real-time facial recognition works largely without human intervention. Nonetheless, it requires security officers to make key decisions, as well as to use a healthy dose of common sense. Most current AI visual recognition systems have an accuracy well above 99% in controlled settings. Even in these instances, however, it is important that security officers exercise caution. At a music festival, for instance, or any other large public gathering with thousands or hundreds of thousands of people, even the most sophisticated AI systems can generate false positives.
Face-matching technology is useful in preventing injury and saving lives. When false positives occur, one of the key roles we play as security practitioners is to flag the identification as a false positive so the system can correct its learning over time. Without this user feedback, the system’s accuracy can degrade.
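One simple way to picture this feedback loop is a match threshold that tightens as operators reject alerts. This is a simplified sketch under our own assumptions; real systems would recalibrate or retrain the underlying model rather than just move a cutoff:

```python
class AlertThreshold:
    """Track operator feedback on alerts and raise the match threshold
    slightly each time an alert is rejected as a false positive."""

    def __init__(self, threshold=0.80, step=0.01, max_threshold=0.99):
        self.threshold = threshold
        self.step = step
        self.max_threshold = max_threshold
        self.false_positives = 0
        self.confirmed = 0

    def record_feedback(self, was_true_match):
        """Called by the operator after adjudicating an alert."""
        if was_true_match:
            self.confirmed += 1
        else:
            self.false_positives += 1
            # Raise the bar after each operator-rejected alert.
            self.threshold = min(self.threshold + self.step,
                                 self.max_threshold)

    def should_alert(self, score):
        """Only surface matches at or above the current threshold."""
        return score >= self.threshold
```

The point of the sketch is the division of labor: the system scores, the human adjudicates, and the adjudication feeds back into future scoring.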
At the same time, however, it is important to understand that false positives do occur. Security personnel should never rely entirely on AI visual identification systems; they are not meant to replace human judgment. Rather, AI facial recognition should ideally serve to enhance security and protect lives. That means that one must always be diligent in adjudicating intelligence produced by the system.
Facial Authentication for Access Control
Facial authentication is a technology used in access control management. As a method of access control, facial authentication differs from both face matching and real-time facial recognition insofar as individuals enroll and cooperate with this particular biometric tool. Facial authentication has grown significantly in popularity over the past two years as many enterprises have sought a hands-free alternative to keycards in the post-Covid era.
As a fully automated access control solution, accuracy is less of an issue with facial authentication technology, since the system collects a matrix of facial data points to improve accuracy. The biggest benefits of this type of deployment include contactless operation, speed of throughput, greater resistance to spoofing or impersonation since the system validates authenticity, and potentially reduced costs depending on the application.
Concerns Regarding Facial Biometrics
As we noted at the outset, concerns over the ethical use of facial biometrics range from accuracy to bias and racial profiling to privacy.
There have been significant advances in the accuracy of AI biometric facial identification. Modern visual identification software often achieves 99% accuracy in controlled environments. Still, end users must be cautious about how they set the sensitivity levels of their systems, as well as the quality of the images their systems use to make identifications.
To improve facial biometric identification, security professionals should test the accuracy of their systems prior to deployment. Facial biometric testing should use many data points rather than a single image. Different orientations and angles help the AI learn what makes a person who they are. At the same time, sampling should be inclusive of race, color, gender, and occlusions (e.g., shadows, beards, glasses, hats, and face masks).
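Such pre-deployment testing usually boils down to measuring error rates on labeled pairs at different sensitivity settings. A minimal sketch, with illustrative function names and made-up scores:

```python
def error_rates(scores_same, scores_diff, threshold):
    """Error rates at one sensitivity (threshold) setting.

    scores_same: similarity scores for pairs known to be the same person
    scores_diff: similarity scores for pairs known to be different people
    Returns (false_negative_rate, false_positive_rate)."""
    fn = sum(1 for s in scores_same if s < threshold) / len(scores_same)
    fp = sum(1 for s in scores_diff if s >= threshold) / len(scores_diff)
    return fn, fp

def sweep(scores_same, scores_diff, thresholds):
    """Report error rates across candidate thresholds to guide tuning.
    Raising the threshold trades false positives for false negatives."""
    return {t: error_rates(scores_same, scores_diff, t)
            for t in thresholds}
```

Running such a sweep separately for each demographic group in the test sample is one concrete way to check whether error rates are balanced before a system goes live.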
We need to take racial and ethnic bias in facial recognition security seriously. Early studies suggested that this technology was less precise when it came to the identification of individuals from particular racial and ethnic groups. Later research conducted by the National Institute of Standards and Technology (NIST), however, has found that many of the initial claims about demographic bias were overblown.
In particular, a recently released report from NIST determined the following:
- The most accurate (best) facial identification algorithms have “undetectable” differences between demographic groups.
- The most accurate (best) facial identification algorithms have low false positives and false negatives across demographic groups.
- Algorithms can have different error rates for various demographic groups, but nonetheless remain highly accurate.
While all of this was under controlled conditions, this is still very good news when it comes to eliminating racial and ethnic bias in facial biometric security. The example of China’s pursuit of Uighurs, however, leads us back to the issue of the importance of using these technologies in an ethical manner.
In a 2019 document, the American National Standards Institute (ANSI) provided comments on the NIST’s request for information on artificial intelligence standards. The document states that ANSI supports NIST’s efforts to identify priority areas for federal involvement in AI standards-related activities.
ANSI-accredited standards developing organizations are already working on or considering working on several AI standards. The Institute for Electrical and Electronics Engineers (IEEE) has established an AI and Autonomous Systems Policy Committee. The IEEE has standards work underway related to cybersecurity and the ethics of AI.
In the digital age, privacy has been, and will continue to be, an evolving concept. When it comes to images of individuals captured in public places, courts in the US have routinely rejected an individual’s right to privacy. But legal caveats do exist for large aggregations of data that enable the tracking of persons.
While surveillance is prohibited by local voyeurism laws in places such as bathrooms, locker rooms, etc., there are few strict legal limits on the ability of private owners to collect images of persons on their property.
Despite the general lack of privacy laws governing the application of facial biometrics, security professionals would be well advised to take public concerns into account when collecting individuals’ images. The well-known “Smile You’re On Camera” sign from retail establishments of old serves as both a theft deterrent and an acknowledgment that patrons are being recorded.
Entities should always consult with legal counsel prior to deployment of AI-based biometric analytics.
As facial biometric technology becomes more advanced and ubiquitous in the security industry, we will need to answer questions not only about its efficacy, but also about its ethical use. At MCA, we have a considerable stake not only in setting up systems that protect your employees, clients, and critical assets, but also in helping you navigate the potentially treacherous legal and ethical terrain that comes with embracing this technology.