This API requires Android API level 21 or above. Make sure that your app's build file uses a minSdkVersion value of 21 or higher.
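For example, in your module-level build.gradle, the minimum SDK version is set in the defaultConfig block (a minimal sketch; recent Android Gradle Plugin versions also accept minSdk in place of minSdkVersion):

android {
    defaultConfig {
        // Face detection requires Android API level 21 or higher
        minSdkVersion 21
    }
}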
In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections.
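For example (a sketch; projects on newer Gradle versions may declare repositories in settings.gradle under dependencyResolutionManagement instead):

buildscript {
    repositories {
        google()  // Google's Maven repository
        mavenCentral()
    }
}

allprojects {
    repositories {
        google()  // Google's Maven repository
        mavenCentral()
    }
}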
Add the dependencies for the ML Kit Android libraries to your module's app-level gradle file, which is usually app/build.gradle. Choose one of the following dependencies based on your needs:
For bundling the model with your app:
dependencies {
    // ...
    // Use this dependency to bundle the model with your app
    implementation 'com.google.mlkit:face-detection:16.1.7'
}
For using the model in Google Play Services:
dependencies {
    // ...
    // Use this dependency to use the dynamically downloaded model in Google Play Services
    implementation 'com.google.android.gms:play-services-mlkit-face-detection:17.1.0'
}
If you choose to use the model in Google Play Services, you can configure your app to automatically download the model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:
<application ...>
    ...
    <meta-data
        android:name="com.google.mlkit.vision.DEPENDENCIES"
        android:value="face" />
    <!-- To use multiple models: android:value="face,model2,model3" -->
</application>
You can also explicitly check model availability and request a download through the Google Play services ModuleInstallClient API.
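For example, a minimal sketch using ModuleInstallClient (the classes live in com.google.android.gms.common.moduleinstall; context and the listener bodies are placeholders you would supply):

ModuleInstallClient moduleInstallClient = ModuleInstall.getClient(context);
FaceDetector optionalModuleApi = FaceDetection.getClient();

// Check whether the face detection module is already on the device
moduleInstallClient
        .areModulesAvailable(optionalModuleApi)
        .addOnSuccessListener(response -> {
            if (response.areModulesAvailable()) {
                // The module is present; the detector can run immediately
            } else {
                // Request an explicit download of the module
                ModuleInstallRequest request =
                        ModuleInstallRequest.newBuilder()
                                .addApi(optionalModuleApi)
                                .build();
                moduleInstallClient.installModules(request);
            }
        })
        .addOnFailureListener(e -> {
            // Handle the failure, e.g. Google Play services is unavailable
        });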
If you don't enable install-time model downloads or request explicit download, the model is downloaded the first time you run the detector. Requests you make before the download has completed produce no results.
For face detection, you should use an image with dimensions of at least 480x360 pixels. For ML Kit to accurately detect faces, input images must contain faces that are represented by sufficient pixel data. In general, each face you want to detect in an image should be at least 100x100 pixels. If you want to detect the contours of faces, ML Kit requires higher resolution input: each face should be at least 200x200 pixels.
If you detect faces in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions, but keep in mind the above accuracy requirements and ensure that the subject's face occupies as much of the image as possible. Also see tips to improve real-time performance.
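As a rough illustration of this trade-off, the following sketch downscales a captured Bitmap toward the 480x360 minimum before detection (the helper name and scaling policy are assumptions for this example, not part of the ML Kit API):

// Downscale a frame toward the minimum recommended size to reduce latency,
// while never scaling below 480x360 (hypothetical helper for illustration)
Bitmap prepareForDetection(Bitmap frame) {
    int minWidth = 480;
    int minHeight = 360;
    // Smallest scale factor that keeps both dimensions at or above the minimum
    float scale = Math.max(
            (float) minWidth / frame.getWidth(),
            (float) minHeight / frame.getHeight());
    if (scale >= 1f) {
        return frame; // Already at or below the recommended size; don't upscale
    }
    int newWidth = Math.round(frame.getWidth() * scale);
    int newHeight = Math.round(frame.getHeight() * scale);
    return Bitmap.createScaledBitmap(frame, newWidth, newHeight, /* filter= */ true);
}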
Poor image focus can also impact accuracy. If you don't get acceptable results, ask the user to recapture the image.
The orientation of a face relative to the camera can also affect what facial features ML Kit detects. See Face Detection Concepts.
Before you apply face detection to an image, if you want to change any of the face detector's default settings, specify those settings with a FaceDetectorOptions object.
// High-accuracy landmark detection and face classification
FaceDetectorOptions highAccuracyOpts =
        new FaceDetectorOptions.Builder()
                .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
                .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
                .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
                .build();

// Real-time contour detection
FaceDetectorOptions realTimeOpts =
        new FaceDetectorOptions.Builder()
                .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
                .build();
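The results code later in this section also reads a tracking ID, which is only populated when face tracking is enabled via enableTracking() on FaceDetectorOptions.Builder. A minimal sketch:

// Options with face tracking enabled, so each detected face gets an ID
// that is consistent across frames
FaceDetectorOptions trackingOpts =
        new FaceDetectorOptions.Builder()
                .enableTracking()
                .build();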
To detect faces in an image, create an InputImage object from a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the InputImage object to the FaceDetector's process method.
For face detection, you should use an image with dimensions of at least 480x360 pixels. If you are detecting faces in real time, capturing frames at this minimum resolution can help reduce latency.
You can create an InputImage object from several sources. For example, to create one from a Bitmap:
// rotationDegree: the image's rotation, in degrees, relative to upright
InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);
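InputImage provides analogous factory methods for the other sources listed above; each line below is an alternative, and mediaImage, uri, byteBuffer, byteArray, width, height, and rotationDegrees are placeholders you would supply:

// From a media.Image, for example captured from the device's camera
InputImage image = InputImage.fromMediaImage(mediaImage, rotationDegrees);

// From a file URI; this can throw IOException
InputImage image = InputImage.fromFilePath(context, uri);

// From a ByteBuffer or byte array holding NV21- or YV12-encoded image data
InputImage image = InputImage.fromByteBuffer(
        byteBuffer, width, height, rotationDegrees, InputImage.IMAGE_FORMAT_NV21);
InputImage image = InputImage.fromByteArray(
        byteArray, width, height, rotationDegrees, InputImage.IMAGE_FORMAT_NV21);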
Then get an instance of FaceDetector, passing in your options:

FaceDetector detector = FaceDetection.getClient(options);
// Or use the default options:
// FaceDetector detector = FaceDetection.getClient();
Pass the image to the process method:
Task<List<Face>> result =
        detector.process(image)
                .addOnSuccessListener(
                        new OnSuccessListener<List<Face>>() {
                            @Override
                            public void onSuccess(List<Face> faces) {
                                // Task completed successfully
                                // ...
                            }
                        })
                .addOnFailureListener(
                        new OnFailureListener() {
                            @Override
                            public void onFailure(@NonNull Exception e) {
                                // Task failed with an exception
                                // ...
                            }
                        });
If the face detection operation succeeds, a list of Face objects is passed to the success listener. Each Face object represents a face that was detected in the image. For each face, you can get its bounding coordinates in the input image, as well as any other information you configured the face detector to find. For example:
for (Face face : faces) {
    Rect bounds = face.getBoundingBox();
    float rotY = face.getHeadEulerAngleY(); // Head is rotated to the right rotY degrees
    float rotZ = face.getHeadEulerAngleZ(); // Head is tilted sideways rotZ degrees

    // If landmark detection was enabled (mouth, ears, eyes, cheeks, and
    // nose available):
    FaceLandmark leftEar = face.getLandmark(FaceLandmark.LEFT_EAR);
    if (leftEar != null) {
        PointF leftEarPos = leftEar.getPosition();
    }

    // If contour detection was enabled:
    List<PointF> leftEyeContour =
            face.getContour(FaceContour.LEFT_EYE).getPoints();
    List<PointF> upperLipBottomContour =
            face.getContour(FaceContour.UPPER_LIP_BOTTOM).getPoints();

    // If classification was enabled:
    if (face.getSmilingProbability() != null) {
        float smileProb = face.getSmilingProbability();
    }
    if (face.getRightEyeOpenProbability() != null) {
        float rightEyeOpenProb = face.getRightEyeOpenProbability();
    }

    // If face tracking was enabled:
    if (face.getTrackingId() != null) {
        int id = face.getTrackingId();
    }
}
When you have face contour detection enabled, you get a list of points for each facial feature that was detected. These points represent the shape of the feature. See Face Detection Concepts for details about how contours are represented.
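For instance, a minimal sketch of rendering contour points onto a Canvas overlay (the canvas, paint, and coordinate mapping are assumptions for this example; the detector reports points in input-image coordinates, which you may need to transform to your view's coordinates):

// Draw each point of the face oval contour as a small dot
for (PointF point : face.getContour(FaceContour.FACE).getPoints()) {
    canvas.drawCircle(point.x, point.y, /* radius= */ 4f, paint);
}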
The following image illustrates how these points map to a face: