Due to the wide range of tasks that digital face recognition technologies solve, interest in these systems is very high. Based on this technology, various scenarios can be supported, including authenticating users for access and counting people in a location for crowd management. You can read more about the numerous areas of application for face recognition here.
For this review, we compared the capabilities of Face ID — a face recognition service by Evergreen — and the three most widely known face recognition solutions: Microsoft Azure Face API, Google Cloud Vision API, and Amazon Rekognition Image.
During the analysis, we tested the following functional blocks:
We focused on the following criteria (with reference to the functional block tests):
For an objective evaluation, we used the same photos and analysed them individually in each of the services. As a quick reference, we briefly compared Face ID with the features of all three face recognition services.
Azure Face API is part of Microsoft’s Azure Cognitive Services for advanced facial recognition. It can be used to detect, identify, and analyse faces in images and videos, and uses state-of-the-art cloud-based face algorithms.
Azure Face service detects human faces and returns the rectangle location coordinates within an image. Face detection can optionally extract a series of related attributes, such as head pose, gender, age, emotion, facial hair, and glasses, along with a detection confidence for a given face. In the sample below, the detected face is a smiling 36 y.o. male with an evaluated emotion of happiness, according to the system.
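For readers who want to reproduce this, below is a minimal sketch of calling the Face API detect operation with Python and requests. The endpoint, key, and image URL are placeholders you would replace with your own Azure resource values; the query parameters follow the public Face API documentation.

```python
import requests

# Placeholders: substitute your own Azure Face resource endpoint and subscription key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def detect_faces(image_url):
    """Call the Face API detect operation and return the parsed JSON response."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={
            "returnFaceId": "true",
            "returnFaceLandmarks": "true",
            # Request the optional attributes discussed above.
            "returnFaceAttributes": "age,gender,smile,headPose,glasses,emotion",
        },
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()

faces = detect_faces("https://example.com/sample.jpg")
for face in faces:
    attrs = face["faceAttributes"]
    print(face["faceRectangle"], attrs["age"], attrs["gender"], attrs["emotion"])
```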
Face ID can detect and locate human faces within an image and return precise face bounding boxes. It can also locate facial landmarks (chin, eyes, eyebrows, mouth corners, nose contour) and ascertain specific attributes from these features such as gender, age, smile intensity, head pose, blurriness, eye status (eyes open or closed, eyewear). The output is JSON.
For Face ID, the average detection time across a larger set of photos is 1.4 s.
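For comparison with the Azure snippet above, here is an illustrative sketch of posting an image to Face ID for detection. The endpoint URL and response field names are assumptions made for the example, not the documented Evergreen API.

```python
import requests

# Illustrative only: this URL and the response fields are assumed for the example,
# not taken from the Face ID documentation.
FACE_ID_DETECT_URL = "https://api.example-faceid.com/v1/detect"

with open("sample.jpg", "rb") as f:
    response = requests.post(FACE_ID_DETECT_URL, files={"image": f})
response.raise_for_status()

result = response.json()
for face in result.get("faces", []):
    # Per the article, the JSON includes bounding boxes, landmarks, and attributes
    # such as gender, age, smile intensity, head pose, and eye status.
    print(face.get("bounding_box"), face.get("age"), face.get("gender"))
```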
In Azure Face, the Face Comparison feature is called “Face Verification”. In practice, the system evaluates whether two faces belong to the same person and frames the faces with bounding boxes. As you can see below, Azure Face has concluded that our sample photos belong to different people. Response time: 394 ms.
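Verification is a two-step call: detect each image to obtain face IDs, then post both IDs to the verify operation. A minimal sketch, reusing the ENDPOINT, KEY, and detect_faces helper from the detection example above (the sample response values are illustrative):

```python
def verify_faces(face_id_1, face_id_2):
    """Ask the Face API whether two previously detected faces belong to the same person."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/verify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"faceId1": face_id_1, "faceId2": face_id_2},
    )
    response.raise_for_status()
    return response.json()  # e.g. {"isIdentical": false, "confidence": 0.28}

face_a = detect_faces("https://example.com/person_a.jpg")[0]["faceId"]
face_b = detect_faces("https://example.com/person_b.jpg")[0]["faceId"]
print(verify_faces(face_a, face_b))
```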
Face ID likewise frames and evaluates faces, and in addition it can estimate face similarity. In our sample photos, two different people were identified, with a calculated likeness of 59.2%. Face ID Face Comparison works not only on an uploaded image but also on an image URL. Processing time: 454 ms.
Azure Face API can detect the six basic universal human emotions (happiness, sadness, fear, contempt, surprise, and anger) plus the neutral emotion from facial expressions. This feature is called Perceived Emotion Recognition.
Azure can extract facial attributes at the detection stage. In this mode, it returns the face rectangle position and corresponding confidence scores for each of the emotions mentioned above.
At the moment, emotion recognition is not included in our standard Face ID service package. However, we can implement a custom solution that will recognise the six basic universal human emotions plus the neutral emotion. Follow our release updates, or request an individual solution.
Azure Face service can compare a target face against a set of candidate faces to find similar-looking ones. The service has two working modes: matchPerson returns similar faces after filtering for the same person, while matchFace ignores the same-person filter and returns a list of similar candidates that may or may not belong to the same person.
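The sketch below shows how this could be called, again reusing the ENDPOINT and KEY placeholders from the detection example; the request fields follow the public Find Similar documentation, and the face IDs are assumed to come from earlier detect calls.

```python
def find_similar(target_face_id, candidate_face_ids, mode="matchPerson"):
    """Search a list of candidate face IDs for faces similar to the target face."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/findsimilars",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={
            "faceId": target_face_id,
            "faceIds": candidate_face_ids,
            # "matchPerson" filters for the same person; "matchFace" skips that filter.
            "mode": mode,
            "maxNumOfCandidatesReturned": 10,
        },
    )
    response.raise_for_status()
    return response.json()  # list of {"faceId": ..., "confidence": ...}
```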
Source: https://docs.microsoft.com/en-us/azure/cognitive-services/face/overview
Up till now, we haven’t included Similarity Search in Face ID. However, we can develop and implement this feature for your project. Given a target face and a collection of faces, the search API will return a collection of similar faces, along with confidence scores to evaluate their similarity. Here’s a search example:
1. Comparison with a group of candidates:
2. Search results and response in JSON format:
Here we can see that the system recognises Candidate Face 1 as identical to the reference photo in the search query, and the JSON report rates the probability of a match as “very high”.
Cloud Vision API is a comprehensive recognition product by Google that uses pre-trained models. It can detect objects and faces in photos, recognise text, automatically assign metadata, and more. For this review, we focused on features that are related to face recognition and comparison.
Face detection in Google Cloud Vision API can detect a face, or multiple faces, in an image. Additionally, general image properties (underexposure, blurriness) can also be detected. Specific individual facial recognition is not supported.
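A short sketch of this feature using the google-cloud-vision client library is shown below; it assumes application credentials are already configured in the environment, and the file name is a placeholder.

```python
from google.cloud import vision

# Assumes GOOGLE_APPLICATION_CREDENTIALS is set for authentication.
client = vision.ImageAnnotatorClient()

with open("sample.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:
    vertices = [(v.x, v.y) for v in face.bounding_poly.vertices]
    print(
        vertices,
        face.detection_confidence,
        face.blurred_likelihood,        # general image properties
        face.under_exposed_likelihood,
        face.joy_likelihood,            # likelihood enums, e.g. VERY_LIKELY
        face.sorrow_likelihood,
    )
```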
Along with multiple objects recognised in a single image, Google Cloud Vision API can also detect a person. Cloud Vision returns the following elements: a textual description, a confidence score, and normalised vertices [0,1] for the bounding polygon around the object (in JSON).
Face ID returns precise face bounding boxes and locates facial landmarks (chin, eyes, eyebrows, mouth corners, nose contour). In addition, the algorithm can return attributes such as gender, age, smile intensity, head pose, eye status (eyes open or closed, eyewear), and image properties, such as blurriness, in the JSON report.
Cloud Vision API doesn’t support face comparison and verification. It only confirms that a particular photo has a human face or multiple faces in it.
Using Face ID, you can compare two images and verify whether the detected faces belong to the same individual. If the similarity score is below 80%, the system will indicate that there are two different people in the photos.
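An illustrative sketch of that decision rule is shown below. As with the earlier Face ID snippet, the endpoint and response fields are assumptions for the example rather than the documented API; only the 80% threshold comes from the article.

```python
import requests

# Illustrative only: endpoint and response fields are assumed, not documented.
FACE_ID_COMPARE_URL = "https://api.example-faceid.com/v1/compare"

with open("photo_a.jpg", "rb") as a, open("photo_b.jpg", "rb") as b:
    response = requests.post(FACE_ID_COMPARE_URL, files={"image1": a, "image2": b})
response.raise_for_status()

similarity = response.json()["similarity"]  # percentage, as in the article's examples
# Decision rule described above: 80% or more is treated as the same person.
print("same person" if similarity >= 80 else "different people", similarity)
```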
Two photos of the same person: 93.1% similarity (faces match). Response time: 1.163 s.
Photos of two different people:
Although both actors look alike, the system evaluated their similarity to be only 57.6% (non-matching faces). Response time: 0.459 s.
Cloud Vision API locates faces with bounding polygons and identifies specific facial landmarks (eyes, ears, nose, mouth, etc.) along with their corresponding confidence values. It returns likelihood ratings for 4 basic emotions (joy, sorrow, anger, surprise) and general image properties (underexposure, blurriness, headwear present, etc.).
At the moment, emotion recognition is not included in the Face ID service package, as mentioned previously. But we can implement an individual solution for your project at any time.
Gesture recognition works well with our service. We can configure the recognition of up to 16 different gestures and set it up as an additional verification rule in your authentication systems. Below is an example of a similar service:
Amazon Rekognition uses advanced technology for face detection in images and video. It is Amazon's answer to Google's Cloud Vision API: a comprehensive product for the segmentation and classification of visual content. For this article, we focus on its face recognition and analysis components.
We tested face detection as part of the Amazon Rekognition Object and Scene Detection demo. It can automatically label objects, concepts, and scenes within images and provide a confidence score. To detect faces in an image programmatically, you follow the developer guide and run the DetectFaces operation.
As can be seen, the system is 99.9% confident that the face in the sample photo represents a human, and 96.3% confident that it is a performer. You can access the full response in JSON format.
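Calling DetectFaces outside the demo is straightforward with boto3; the sketch below assumes AWS credentials are configured and uses a placeholder image file and region.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("sample.jpg", "rb") as f:
    response = rekognition.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # return the full attribute set, not just the defaults
    )

for face in response["FaceDetails"]:
    print(
        face["BoundingBox"],
        face["AgeRange"],          # e.g. {"Low": 30, "High": 42}
        face["Gender"]["Value"],
        face["Confidence"],
    )
```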
Face ID has recognised the face in the sample photo as belonging to a 37 y.o. male, locating it with a bounding box and returning a JSON response containing facial landmarks and their coordinates (chin, eyes, eyebrows, mouth corners, nose contour), along with other attributes.
Amazon Rekognition Image can compare a face in a source image with each face in a target image. If the source image contains multiple faces, the service detects the largest face and uses it for comparison with each face detected in the target image. In response, you receive an array of face matches, information about the source face, and the orientation of the source and target images. For each matching face, the system returns a similarity score (how similar the face is to the source face) and face metadata (bounding box and facial landmarks).
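A short sketch of the CompareFaces operation, reusing the rekognition client from the previous snippet; file paths and the threshold value are placeholders.

```python
def compare_faces(source_path, target_path, threshold=80):
    """Compare the largest face in the source image with every face in the target image."""
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            # Faces scoring below the threshold are returned in "UnmatchedFaces" instead.
            SimilarityThreshold=threshold,
        )
    for match in response["FaceMatches"]:
        print(match["Similarity"], match["Face"]["BoundingBox"])
    return response
```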
Face ID can compare two photos and decide whether they belong to the same person. The system will adjudicate that the detected faces are of the same individual if the similarity score is >80%. In our example, there is a 93.1% probability that it is the same person in both photos.
In addition, Face ID can automatically detect faces in documents, for example, in a passport photo. This feature can be useful for various authentication services (e.g., anti-fraud). The average processing time, in this case, is approximately 700 ms.
For Amazon Rekognition, we tested the Face Analysis demo. The algorithm can return the following information for each detected face:
Apart from emotion detection, Face ID returns the same detected attributes in JSON: as mentioned earlier, facial landmarks, gender, age, and other characteristics are evaluated during face detection.
Amazon Rekognition can store information about detected faces in server-side containers known as “collections”. You can use the facial information to search for known faces in images and videos, and use indexing to persist information about the detected facial features into a collection. After you create a face collection and store the information for all faces, you can search the collection for face matches. These collections can be used in a variety of scenarios.
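The typical flow is: create a collection, index known faces into it, then search it with a new photo. The sketch below again reuses the rekognition client; the collection name, S3 bucket, object key, and file name are placeholders for this example.

```python
COLLECTION_ID = "known-faces"  # placeholder name for this example

# Create a server-side collection (one-time setup).
rekognition.create_collection(CollectionId=COLLECTION_ID)

# Index a face from an image stored in S3 into the collection.
rekognition.index_faces(
    CollectionId=COLLECTION_ID,
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "employees/jane.jpg"}},
    ExternalImageId="jane",  # label returned with future matches
)

# Later: search the collection for faces matching a new photo.
with open("visitor.jpg", "rb") as f:
    matches = rekognition.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,
    )
for match in matches["FaceMatches"]:
    print(match["Similarity"], match["Face"]["ExternalImageId"])
```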
The current version of Face ID can detect, compare, and verify facial information in photos and digital documents. Our team can develop and implement Similarity Search for your project.
If you want to learn more about the technology, discuss additional recognition services, order a boxed solution, or simply try our demo, please contact us. We are also ready to adapt the face recognition service to your specific requirements and processes.