Implementing OpenFace on a Linux VPS Using Python and Docker

OpenFace provides face detection, facial landmark detection, face recognition, and facial expression tracking. It is an open-source library built with deep learning techniques, capable of processing images and video to analyze faces in real time or from recorded media.

Photo by Possessed Photography / Unsplash

Prerequisites

  1. A Linux VPS running Ubuntu Server 20.04.
  2. Docker installed.

Next, pull a preconfigured Docker image that has everything already installed, run it as a container, and open a shell inside it:

docker pull bamos/openface
docker run -d --restart=always --name openface bamos/openface
root@tuturulianda:~# docker exec -it openface /bin/bash

Step 1 - Update the System and Patch the Python Library

First things first, upgrade the system packages and patch label.py in the Python 2.7 scikit-learn installation with the following commands.

root@7ba6f9ec1f7b:/# apt update && apt upgrade -y
root@7ba6f9ec1f7b:/# apt install nano

root@7ba6f9ec1f7b:/# nano /root/.local/lib/python2.7/site-packages/sklearn/preprocessing/label.py

On line 166, change if diff: to if diff.size > 0:
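For context, the patch is needed because label.py evaluates a NumPy array directly in an if statement; newer NumPy versions raise a ValueError when a multi-element array is used in a boolean context, while diff.size > 0 checks emptiness unambiguously. A minimal demonstration (assuming NumPy is installed):

```python
# Demonstrates why the patch is needed: evaluating a multi-element
# NumPy array in a boolean context raises a ValueError, while
# `.size > 0` is an unambiguous emptiness check.
import numpy as np

diff = np.array([3, 4])  # e.g. labels seen at predict time but not at fit time

try:
    if diff:  # ambiguous truth value -> ValueError on modern NumPy
        pass
except ValueError as exc:
    print("if diff: raises ValueError:", exc)

if diff.size > 0:  # the patched check
    print("diff is non-empty, size =", diff.size)
```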

Then make folders called ./training-images/ and ./unknown-images/ inside the OpenFace folder.

root@7ba6f9ec1f7b:/# cd /root/openface
root@7ba6f9ec1f7b:~/openface# mkdir -p training-images
root@7ba6f9ec1f7b:~/openface# mkdir -p unknown-images

Step 2 - Create Image Folders

Next, create an image folder for each person you want to recognize. For instance:

root@7ba6f9ec1f7b:~/openface# mkdir -p ./training-images/tutu-rulianda/
root@7ba6f9ec1f7b:~/openface# mkdir -p ./training-images/chad-smith/
root@7ba6f9ec1f7b:~/openface# mkdir -p ./training-images/jimmy-fallon/
root@7ba6f9ec1f7b:~/openface# mkdir -p ./training-images/will-ferrell/

Step 3 - Populate Image Folders

Copy images of each person into their respective destination folders on the VPS, then copy both training-images and unknown-images into the container. Make sure only one face appears in each training image. There is no need to crop the images around the face; OpenFace will do that automatically.

root@7ba6f9ec1f7b:~/openface# exit
root@tuturulianda:~# docker cp training-images openface:/root/openface
root@tuturulianda:~# docker cp unknown-images openface:/root/openface
root@tuturulianda:~# docker exec -it openface /bin/bash
root@7ba6f9ec1f7b:~# cd /root/openface
root@7ba6f9ec1f7b:~/openface#
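Since each training image must contain exactly one face, it can help to flag offenders before training. A sketch with a hypothetical find_bad_images helper; count_faces is any callable you supply, for example one backed by dlib's get_frontal_face_detector, which the bamos/openface image ships with:

```python
# Flags training images whose face count is not exactly one.
# `count_faces` is any callable mapping an image path to a face count;
# inside the container it could be backed by dlib, e.g.:
#   detector = dlib.get_frontal_face_detector()
#   count_faces = lambda p: len(detector(cv2.imread(p), 1))
import os

def find_bad_images(root, count_faces):
    bad = []
    for person in sorted(os.listdir(root)):
        person_dir = os.path.join(root, person)
        if not os.path.isdir(person_dir):
            continue
        for name in sorted(os.listdir(person_dir)):
            if not name.lower().endswith((".jpg", ".jpeg", ".png")):
                continue
            path = os.path.join(person_dir, name)
            n = count_faces(path)
            if n != 1:
                bad.append((path, n))
    return bad
```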

Step 4 - Detect Face and Align Face

Run the align-dlib.py script from inside the OpenFace root folder to perform pose detection and alignment:

root@7ba6f9ec1f7b:~/openface# ./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96

This will create a new ./aligned-images/ folder with a cropped and aligned version of each of your training images. Next, generate the representations from the aligned images using the main.lua script in the batch-represent folder:

root@7ba6f9ec1f7b:~/openface# ./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/
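To sanity-check the output before training, you can load the generated CSVs from Python. A sketch, assuming the stock batch-represent writes labels.csv (class index plus image path) and reps.csv (one 128-dimensional embedding per row) into the output directory:

```python
# Reads the embeddings that batch-represent writes to its output dir.
# Assumed filenames: labels.csv and reps.csv.
import csv
import os

def load_embeddings(out_dir):
    with open(os.path.join(out_dir, "labels.csv")) as f:
        labels = list(csv.reader(f))  # rows of [class_index, image_path]
    with open(os.path.join(out_dir, "reps.csv")) as f:
        reps = [[float(x) for x in row] for row in csv.reader(f)]
    return labels, reps

if __name__ == "__main__":
    out_dir = "./generated-embeddings"
    if os.path.isdir(out_dir):
        labels, reps = load_embeddings(out_dir)
        print(len(reps), "embeddings of dimension", len(reps[0]))
```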

After you run this, the ./generated-embeddings/ folder will contain CSV files with the embeddings for each image. Next, train your face recognition model by executing the classifier.py script in the demos folder:

root@7ba6f9ec1f7b:~/openface# ./demos/classifier.py train ./generated-embeddings

This will generate a new file called ./generated-embeddings/classifier.pkl. This file has the SVM model you'll use to recognize new faces. At this point, you should have a working face recognizer.
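As a note on what is inside that file: demos/classifier.py pickles a (label encoder, classifier) pair into classifier.pkl, so you can inspect the trained model from Python. A sketch; the load_classifier helper is my own, and since the file is written by Python 2, load it with the container's Python 2 interpreter:

```python
# Loads the (label_encoder, classifier) pair that demos/classifier.py
# pickles into classifier.pkl, and lists the people the model knows.
# Note: the pickle is written by Python 2, so run this with the
# container's Python 2 interpreter.
import os
import pickle

def load_classifier(path):
    with open(path, "rb") as f:
        le, clf = pickle.load(f)  # (sklearn LabelEncoder, sklearn SVM)
    return le, clf

if __name__ == "__main__":
    path = "./generated-embeddings/classifier.pkl"
    if os.path.exists(path):
        le, clf = load_classifier(path)
        print("known people:", list(le.classes_))
```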

Step 5 - Recognize Face

Get a new picture with an unknown face from the unknown-images folder. Pass it to the classifier script like this:

root@7ba6f9ec1f7b:~/openface# ./demos/classifier.py infer ./generated-embeddings/classifier.pkl your_test_image.jpg

You should get a prediction that looks similar to this:

root@7ba6f9ec1f7b:~/openface# ./demos/classifier.py --verbose infer ./generated-embeddings/classifier.pkl unknown-images/tutu-rulianda.jpg
Argument parsing and import libraries took 0.550723075867 seconds.
Loading the dlib and OpenFace models took 1.12552189827 seconds.

=== unknown-images/tutu-rulianda.jpg ===
  + Original size: (400, 400, 3)
Loading the image took 0.00343298912048 seconds.
Face detection took 0.12503695488 seconds.
Alignment took 0.00670099258423 seconds.
This bbox is centered at 251, 108
Neural network forward pass took 0.0909860134125 seconds.
Prediction took 0.000488996505737 seconds.
Predict tutu-rulianda with 0.82 confidence.
root@7ba6f9ec1f7b:~/openface#

If you get bad results, try adding a few more pictures of each person in Step 3 (especially pictures in different poses).

Conclusion

That's it. OpenFace is now up and running.


Citations

Adam Geitgey
https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78

Brandon Amos, Bartosz Ludwiczuk, Mahadev Satyanarayanan
https://github.com/cmusatyalab/openface.git

@techreport{amos2016openface,
  title={OpenFace: A general-purpose face recognition library with mobile applications},
  author={Amos, Brandon and Ludwiczuk, Bartosz and Satyanarayanan, Mahadev},
  year={2016},
  institution={CMU-CS-16-118, CMU School of Computer Science},
}

B. Amos, B. Ludwiczuk, M. Satyanarayanan,
"Openface: A general-purpose face recognition library with mobile applications,"
CMU-CS-16-118, CMU School of Computer Science, Tech. Rep., 2016.