Over the last five years, facial recognition technology has been on the rise. In fact, according to the Secure Identity Biometrics Association (SIBA), by the year 2020 the facial recognition industry will be worth over $20 billion (Chayka). "Facial recognition" is the ability to uniquely identify an individual by his or her facial features. The technology is being touted as a groundbreaking addition to surveillance technologies, a valid privacy mechanism for phones and other personal devices, a useful method for advertising targeting, and a way to improve online experiences through features like automatic photo-tagging. As such, it is important for us to understand not only the science behind facial recognition and its applications, but also the ethical and policy concerns behind the technology.
From a technical standpoint, facial recognition can be approached via a number of visual processing mechanisms and algorithms. Most commonly, using a database of pixel-level representations of general face shapes, systems can separate face images from images of other objects. Once the actual face has been found, the system can identify the values for the key points and features that are unique between faces, by once again evaluating pixel similarities and differences. As detailed in Olzak's summary of facial recognition, according to a study by Bonsor and Johnson, there are roughly 80 such values: examples include distance between eyes, width of nose, depth of eye sockets, shape of cheekbones, and length of jawline (Olzak). These key facets are stored as the representation of a particular face. An individual can be identified, then, by comparing the values for each of the features. Other facial recognition mechanisms include normalization (reducing the range of pixel intensity) of the face image to a core, easily comparable image, utilizing 3-D sensors to capture more detailed information about face shape, and/or using skin-texture analysis (Olzak).
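The comparison step described above can be sketched in a few lines: each face is reduced to a vector of measured values, and two captures are declared a match when the vectors lie close enough together. The feature names, numeric values, and threshold below are hypothetical, chosen only to illustrate the idea; real systems use far more measurements and carefully tuned thresholds.

```python
import math

def face_distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(stored, candidate, threshold=0.6):
    """Declare a match when the distance falls below a chosen threshold."""
    return face_distance(stored, candidate) < threshold

# Toy vectors standing in for the ~80 measured values per face
# (e.g. distance between eyes, width of nose, depth of eye sockets, ...).
enrolled  = [0.42, 1.13, 0.87, 0.29]   # stored representation of a face
probe_ok  = [0.41, 1.15, 0.86, 0.30]   # a new capture of the same person
probe_bad = [0.80, 0.70, 1.40, 0.60]   # a different face

print(is_match(enrolled, probe_ok))    # True  (distance is small)
print(is_match(enrolled, probe_bad))   # False (distance exceeds threshold)
```

The threshold embodies the accuracy trade-off discussed later in this essay: loosening it produces more false matches, while tightening it causes the system to miss genuine ones.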
Facial recognition's applications are far-reaching and fall into many different arenas.
Public Security: After the tragedy of September 11th, security agencies have been working to develop facial recognition technologies in order to identify criminals and other potential threats to security
Private Device Protection: Facial recognition can also be applied as a replacement for password protection, because an individual’s facial print is a uniquely identifying feature
Recognizing Friends: Beginning with Facebook’s automatic photo-tagging feature, image analysis is being used to uniquely identify others in social contexts
Retail and Advertising: Advertising chains and retail platforms have expressed interest in the possibilities of personalizing shopping experiences by uniquely identifying their shoppers with facial recognition technologies
Categorizing Emotions and Genetic Screening: Facial recognition systems are being further extended in order to recognize specific emotions and genetic disorders (Introna)
From a deontological perspective, we note that it is the duty of the government to protect each individual's privacy and security.
By providing a straightforward way for the government to recognize threats to security, facial recognition upholds this duty: it allows an individual to be identified, from a distance, with reasonably high accuracy.
However, at the same time, facial recognition violates an individual’s right to privacy. Facial recognition technologies do not require proximity or a user’s consent in order to identify an individual. An individual could easily be unknowingly monitored by a facial recognition system, therefore violating the individual's privacy. In the extreme case, when coupled with surveillance technologies, as depicted in the novel The Circle, such tracking could be used to expose all aspects of an individual's private life (Eggers). Even in the case that the user consents to such invasive surveillance, such exposure goes against a government’s duty to protect each citizen’s privacy.
Further, research on emotion categorization has shown that utilizing facial recognition techniques to classify particular emotions falls under “Abusive Reductivism” – the dehumanization of a person’s independence and personal identity (Grandjean). Because of its intrusive nature, facial recognition discovers more about an individual than a human observer could personally identify. Thus, even specific applications of facial recognition, such as emotion categorization, violate the government’s duty towards personal freedom and one's implicit "Right to Privacy".
Overall, the ethical choice must be made between the government's deontological duties to uphold security or privacy. In a system where facial recognition is implemented, while the user's security is protected, the privacy of the individual is put at risk. Due to the magnitude of the privacy violations caused by facial recognition, from a deontological perspective, we argue that facial recognition is not ethical.
Utilitarianism
From a utilitarian perspective, the most ethical action is the one that produces the largest positive impact for the largest population. According to utilitarianism, an ethical action may cause a particular individual to feel that his or her privacy has been invaded, but, overall, it is more important to take the action that results in the largest net positive impact. In the case of facial recognition, this ethical action is the identification and prosecution of criminals and those who cause large-scale harm.
Furthermore, facial recognition can be key to preventing theft and preserving the integrity of individual devices. Facial recognition is stronger than password protection as an authentication measure, because facial features are unique to a particular individual and cannot be guessed or shared the way a password can.
Thus, from a utilitarian perspective, facial recognition provides individual security as well as national security, and is therefore beneficial for the greater good.
Legally, it has been established that “what a person knowingly exposes to the public…is not a subject of Fourth Amendment protection” (Katz v. United States). Under this ruling, we note that an individual’s facial features are publicly exposed, and therefore do not legally need to be protected. However, by storing an individual’s private data alongside his or her facial identity, that individual's privacy can easily be violated, especially when the individual's data is shared from one source to another. An individual’s “right to privacy” must be maintained.
When considering a policy related to facial recognition, it is imperative to note that, because the technique is used as a security measure, some individuals may employ drastic methods to avoid identification. Plastic surgery and facial changes, for example, can enable an individual to mask his or her identity. Research indicates that building a database of common plastic surgery changes, or using state-of-the-art techniques to perform facial recognition after plastic surgery, are possible solutions (Singh). Until then, however, since facial recognition can be fooled, it is important that facial recognition not be treated as a sole security measure.
Thus, with the ethical issues and legal concerns in mind, we propose the following policy:
I. We first note that, under U.S. Supreme Court precedent, unless fingerprints or DNA tests are collected voluntarily, they cannot be used in a court of law (“IBIA Privacy Best Practice Recommendations”). Because facial recognition, too, is a personal identifier that does not have to be given voluntarily, it should be treated similarly. Thus, unless facial recognition enrollment has occurred voluntarily, any data procured based on this recognition will not be legally valid.
II. Furthermore, we note that while facial recognition will serve as a preliminary checkpoint, any matches must be confirmed with another security mechanism. This will serve to protect victims against any inconsistencies in facial recognition, as well as ensure that errors based on actions like plastic surgery do not lead to mistaken results.
III. In order to protect private data, a user’s facial features must be fully encrypted, both in storage and in transmission. This encryption will protect the data and leave little room for abuse.
IV. To further protect privacy, any private usage of facial recognition must be accompanied by a detailed “Terms of Service” contract and an enrollment procedure that ensures individuals are aware of what data is collected and stored, and of the purposes for which it can be used. The data must then be used and shared only within those specified conditions.