Biometric recognition is the use of physiological characteristics to verify a person's identity. Current biometric modalities include facial recognition, iris recognition, vein recognition, and fingerprint recognition, among others. Of these, fingerprint recognition is the most widely used and has the lowest technical barrier. Human fingerprints form at around four months in the womb and become stable and unchanging by about the age of 14. Each person's fingerprints are unique, with virtually no chance of two being identical, which is why fingerprint recognition was already being applied to collecting criminal evidence in the 19th century.

Fingerprint recognition technology combines four functions: reading fingerprint images, extracting features, storing data, and matching. After the reading device captures an image of the fingerprint, the raw image undergoes preliminary processing to make it clearer. Recognition software then builds feature data from the fingerprint. This conversion is one-way: a fingerprint can be transformed into feature data, but the feature data cannot be used to reconstruct the original fingerprint image, and two different fingerprints will not produce the same feature data.

The software locates data points on the fingerprint called minutiae: the coordinates of points where ridges end, branch, or form small circular features. Each minutia has as many as seven distinguishing characteristics, and an average finger has around 70 minutiae, so a single finger yields roughly 70 × 7 = 490 data points. After processing, these data points form what is usually called a template, which is stored in memory while the actual fingerprint sample is discarded. When the user presents their fingerprint again, the system computes a new template and compares it with the stored one to decide whether the two fingerprints match.
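To make the minutiae-and-template idea concrete, here is a minimal Python sketch of a template as a list of minutiae and a naive matching score. The `Minutia` class, the `match_score` function, and the distance and angle tolerances are illustrative assumptions rather than any vendor's actual algorithm; real systems also align the two templates for rotation and translation before comparing, which this sketch omits.

```python
from dataclasses import dataclass
from math import hypot
from typing import List

@dataclass(frozen=True)
class Minutia:
    """One minutia point: a ridge ending or bifurcation with position and direction."""
    x: float      # x coordinate in pixels
    y: float      # y coordinate in pixels
    angle: float  # local ridge direction in degrees
    kind: str     # "ending" or "bifurcation"

# A template is simply the set of minutiae extracted from one fingerprint image.
Template = List[Minutia]

def match_score(stored: Template, probe: Template,
                dist_tol: float = 10.0, angle_tol: float = 15.0) -> float:
    """Return the fraction of stored minutiae that find a counterpart in the
    probe template: same kind, nearby position, and similar ridge direction."""
    if not stored:
        return 0.0
    matched = 0
    for m in stored:
        for p in probe:
            close = hypot(m.x - p.x, m.y - p.y) <= dist_tol
            d = abs(m.angle - p.angle) % 360
            aligned = min(d, 360 - d) <= angle_tol  # handle wrap-around at 0/360
            if m.kind == p.kind and close and aligned:
                matched += 1
                break
    return matched / len(stored)

# Example: compare a stored template with a freshly captured one.
stored_template = [Minutia(120, 80, 45, "ending"), Minutia(200, 150, 90, "bifurcation")]
probe_template  = [Minutia(122, 79, 44, "ending"), Minutia(198, 152, 92, "bifurcation")]
print(match_score(stored_template, probe_template))  # 1.0 here; accept if above a chosen threshold
```

In practice the comparison yields a similarity score rather than an exact equality check, and the system accepts the fingerprint only when that score exceeds a decision threshold chosen to balance false accepts against false rejects.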

Fingerprint patterns are commonly classified into eight types: plain arch, tented arch, ulnar loop, radial loop, plain whorl, central pocket loop, double loop, and accidental.
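For completeness, the eight pattern types listed above can be captured as a simple enumeration; the Python names below are illustrative only.

```python
from enum import Enum

class FingerprintPattern(Enum):
    """The eight commonly cited fingerprint pattern types."""
    PLAIN_ARCH = "plain arch"
    TENTED_ARCH = "tented arch"
    ULNAR_LOOP = "ulnar loop"
    RADIAL_LOOP = "radial loop"
    PLAIN_WHORL = "plain whorl"
    CENTRAL_POCKET_LOOP = "central pocket loop"
    DOUBLE_LOOP = "double loop"
    ACCIDENTAL = "accidental"
```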