Facebook uses a tool called DeepFace, which applies deep learning to face verification and powers the photo tag suggestions you see when you upload a photo to Facebook. DeepFace identifies faces in digital images using neural network models. At a high level, DeepFace works in the following steps:
- It first scans the uploaded image, builds a 3-D model of the face, and then rotates that model to view the face from different angles.
- Next, it starts matching. It uses a neural network model to measure high-level similarities between this face and other photos of the person, comparing features such as the distance between the eyes, the shape of the nose, and eye color.
- It then repeatedly checks against 68 facial landmarks, as each human face can be described by 68 specific landmark points.
- After mapping the landmarks, it encodes the face and searches for information about that person (a small code sketch of a comparable pipeline follows this list).
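As a concrete illustration of the scan → landmark → encode → compare flow, here is a minimal sketch using the open-source `face_recognition` library (built on dlib). This is not Facebook's proprietary DeepFace system, and the image file names are placeholders; it simply shows the same kind of landmark detection, face encoding, and matching described above.

```python
# Sketch of the detect -> landmark -> encode -> compare pipeline using the
# open-source face_recognition library (dlib under the hood), not Facebook's
# proprietary DeepFace. File names below are placeholders.
import face_recognition

# Load a known photo of the person and the newly uploaded photo.
known_image = face_recognition.load_image_file("known_person.jpg")
uploaded_image = face_recognition.load_image_file("uploaded_photo.jpg")

# Detect the 68 facial landmark points (eyes, nose, jawline, lips, etc.).
landmarks = face_recognition.face_landmarks(uploaded_image)
if landmarks:
    print(f"Found {len(landmarks)} face(s); landmark groups: {list(landmarks[0].keys())}")

# Encode each detected face as a 128-dimensional embedding vector.
known_encoding = face_recognition.face_encodings(known_image)[0]
uploaded_encodings = face_recognition.face_encodings(uploaded_image)

# Compare embeddings: small distances indicate the same person.
for encoding in uploaded_encodings:
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print(f"distance={distance:.3f}, same person: {match}")
```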
DeepFace is the facial recognition system used by Facebook for tagging images. It was proposed by researchers at Facebook AI Research (FAIR) at the 2014 IEEE Computer Vision and Pattern Recognition Conference (CVPR).
In modern face recognition, there are four steps:
- Detect
- Align
- Represent
- Classify
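As a rough orientation before the detailed sections, the following hypothetical skeleton (stub function bodies and assumed names, not the paper's actual code) shows how the four stages compose into a single recognition pipeline.

```python
# Skeleton of the four-stage pipeline. Function bodies are intentionally
# left as stubs; the alignment and representation stages are what the
# DeepFace paper focuses on and are discussed below.
from typing import List
import numpy as np

def detect(image: np.ndarray) -> List[np.ndarray]:
    """Stage 1 - Detect: find and crop every face region in the image."""
    raise NotImplementedError

def align(face: np.ndarray) -> np.ndarray:
    """Stage 2 - Align: warp the crop to a canonical frontal pose
    (DeepFace does this with 3D frontalization)."""
    raise NotImplementedError

def represent(aligned_face: np.ndarray) -> np.ndarray:
    """Stage 3 - Represent: run a deep network to get a fixed-length descriptor."""
    raise NotImplementedError

def classify(descriptor: np.ndarray) -> str:
    """Stage 4 - Classify: compare the descriptor against known identities."""
    raise NotImplementedError

def recognize(image: np.ndarray) -> List[str]:
    """End-to-end pipeline: each detected face flows through all four stages."""
    return [classify(represent(align(face))) for face in detect(image)]
```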
This approach focuses on the alignment and representation of facial images. We will discuss these two parts in detail.
Alignment:
The goal of the alignment step is to generate a frontal face from an input image that may contain faces in different poses and at different angles. The method proposed in the paper uses 3D frontalization of faces based on fiducial points (facial feature points) to extract the frontal face. The whole alignment process