For Android applications that let users manage photos or images, a face recognition feature can be very useful. To achieve that, you basically have two options: the first is to find a face detection algorithm and implement it yourself. That can be a good exercise, but when you build Android applications you want to be productive, not reinvent the wheel. So you go with the second option: use the Mobile Vision API provided by Google in Google Play Services.

The Mobile Vision API aims to help developers find objects in photos and video by providing a complete framework. This framework includes detectors that let you locate and describe visual objects in images or video frames. You can also use an event-driven API that tracks the position of those objects in video. The Mobile Vision API is part of Google Play Services and is composed of a core package for common base functionality, plus subpackages for specific detector implementations:

  • com.google.android.gms.vision for common functionality
  • com.google.android.gms.vision.face for the face detector
  • com.google.android.gms.vision.barcode for the barcode detector


1. Install Mobile Vision API in your application

Since the Mobile Vision API is part of Google Play Services, you can use it by adding the following dependency to your Gradle build file:

compile 'com.google.android.gms:play-services-vision:8.4.0'

This way, you pull in only the Mobile Vision API dependency. If you prefer, you can depend on the complete Google Play Services library instead:

compile 'com.google.android.gms:play-services:8.4.0'
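
Either way, the dependency line goes inside the dependencies block of your module-level Gradle build file. A minimal sketch (the surrounding file contents are illustrative, not prescribed by the API):

```groovy
// app/build.gradle (module-level) -- illustrative sketch
dependencies {
    compile 'com.google.android.gms:play-services-vision:8.4.0'
}
```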

 

2. Ensure that Face detector is operational before querying it

The first time an app uses the Face API, a native library is downloaded to the device in order to do face detection. Usually, this installation is done before the app is run for the first time. But to be sure the installation is complete and detection will succeed, you must check that the face detector is operational before querying it. To make that check, use the following statement:

if (detector.isOperational()) {
   // your face detection code goes here ...
} else {
   // the native library is not yet available; handle the failure here
}
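
If the library is not yet available, one option is to wait a little and retry the check. The following framework-free sketch shows a hypothetical polling helper; it is not part of the Mobile Vision API, and the BooleanSupplier simply stands in for a call to detector.isOperational():

```java
import java.util.function.BooleanSupplier;

public class DetectorReadiness {

    // Polls a readiness check (e.g. detector::isOperational) until it
    // returns true or the timeout elapses. Returns the final check result.
    static boolean waitUntilOperational(BooleanSupplier isOperational,
                                        long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (isOperational.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return isOperational.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a native library that becomes available on the third check
        final int[] calls = {0};
        BooleanSupplier fakeDetector = () -> ++calls[0] >= 3;
        System.out.println(waitUntilOperational(fakeDetector, 1000, 10));
    }
}
```

In a real app you would of course do this off the main thread, or just inform the user and try again later.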

 

3. Create the Face detector

Once the Mobile Vision API is installed in your application project, let's consider that we have an image referenced by a Bitmap instance, and apply face detection to it. First, we need to create the FaceDetector object and initialize it with the desired options:

FaceDetector detector = new FaceDetector.Builder(context)
    .setTrackingEnabled(false)
    .setLandmarkType(FaceDetector.ALL_LANDMARKS)
    .build();

 

Here, we choose to detect all landmarks. The following options are available on FaceDetector.Builder:

  • setClassificationType: NO_CLASSIFICATIONS or ALL_CLASSIFICATIONS, to classify faces as "smiling" or "eyes open"
  • setLandmarkType: NO_LANDMARKS or ALL_LANDMARKS
  • setMinFaceSize: the smallest face size to detect, as a proportion of the image width
  • setMode: FAST_MODE or ACCURATE_MODE
  • setProminentFaceOnly: whether to detect only the most prominent face in the image
  • setTrackingEnabled: whether to enable face tracking

Note that setting "tracking enabled" to false is recommended when detecting unrelated individual images, since this gives more accurate results.

 

4. Detect faces and facial landmarks

Given our bitmap, we can detect faces and facial landmarks. We create a Frame instance from the Bitmap instance to supply to the detector:


Frame frame = new Frame.Builder().setBitmap(bitmap).build();

Then, we can call the detection synchronously, passing the frame instance as a parameter:


SparseArray<Face> faces = detector.detect(frame);

The detect() method returns a collection of Face instances. Now we can iterate over that collection, then iterate over the landmarks of each face, and finally draw the result based on the position of each landmark. For that, we first need to draw the image on a Canvas instance. The iteration code looks like this:


// canvas, paint and scale are assumed to be defined elsewhere: the bitmap
// has been drawn on canvas, and scale maps bitmap coordinates to canvas
// coordinates
for (int i = 0; i < faces.size(); ++i) {
   Face face = faces.valueAt(i);

   for (Landmark landmark : face.getLandmarks()) {
      int cx = (int) (landmark.getPosition().x * scale);
      int cy = (int) (landmark.getPosition().y * scale);
      canvas.drawCircle(cx, cy, 10, paint);
   }
}
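
The scale factor used above is not provided by the API: landmark positions are in the bitmap's coordinate space, and you have to map them to the space of the canvas yourself. A minimal framework-free sketch of one common choice, fitting the bitmap's width to the target width (LandmarkScale and scaleFor are illustrative names, not part of Mobile Vision):

```java
public class LandmarkScale {

    // Scale factor that fits a bitmap of the given width into a target of
    // the given width, preserving aspect ratio. A landmark at bitmap
    // position (x, y) is then drawn at (x * scale, y * scale).
    static double scaleFor(int bitmapWidth, int targetWidth) {
        return (double) targetWidth / bitmapWidth;
    }

    public static void main(String[] args) {
        double scale = scaleFor(640, 320);   // a 640px-wide bitmap in a 320px-wide view
        int cx = (int) (100.0 * scale);      // landmark x mapped to view space
        int cy = (int) (50.0 * scale);       // landmark y mapped to view space
        System.out.println(scale + " " + cx + " " + cy);
    }
}
```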

 

5. Release the Face detector

As said previously, the face detector needs a native library to work, so you can easily imagine that face detection consumes native resources. For this reason, it's necessary to release the detector instance once detection is no longer needed. This is done by calling its release() method:


detector.release();

 

6. Conclusion

The following image shows the result of our program:

[Image: face detection result with landmarks drawn on the sample photo]

 

The program correctly detects one face with several landmarks: eyes, nose, mouth and cheeks.

As you can see, the Mobile Vision API is a powerful API offered by Google to help developers enhance their applications. The API is clear and very simple to work with. So, give it a try!