Introduction: Facial Recognition With Tracking.js

In this Instructable we will go over facial recognition. Facial recognition is integral to augmented reality, and I have wanted to do something with it for a while. Recently Facebook implemented facial recognition and Snapchat-style filters, so I thought it would be fun to give it a go.

Here I will go over tracking eyes and mouths with Tracking.js, which implements OpenCV-style (Viola-Jones) object detection in JavaScript. We will attempt to make a Snapchat-style filter and also go over making some buttons and changing the overlay image on button presses. If you have never heard of this library before, it is definitely worth checking out, because they have some other really cool samples, like tracking colors or even moving a 3D scene by turning your head.

To follow along you will need some kind of web hosting and a computer with a webcam.

You will also need the image folder from here:

https://www.matthewhallberg.com/

The folder contains some images we will use in this Instructable and you can also try out the tracking at the link above before we continue.

Step 1: Understanding Tracking.js

First of all, this will really only work in the web browser. Tracking.js relies on a function called getUserMedia(), which gives you access to the user's webcam. It is only supported by Firefox and Google Chrome as far as I know. It should work on Google Chrome for Android, but I could not get it to work on iOS at all. getUserMedia() also requires a secure connection, so you have to use the HTTPS protocol in your web address.
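As a rough sketch (this helper is not part of Tracking.js, just an illustration of the browser API it depends on), here is how a page asks for the webcam through getUserMedia(). In an unsupported browser, or outside a secure HTTPS context, the request simply fails:

```javascript
// Hypothetical helper showing the camera request Tracking.js makes internally.
// Only works in a browser over HTTPS; in other environments it reports an error.
function requestWebcam(onStream, onError) {
  // Modern browsers expose the promise-based API on navigator.mediaDevices.
  if (typeof navigator === 'undefined' || !navigator.mediaDevices) {
    onError(new Error('getUserMedia is not supported in this environment'));
    return;
  }
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(onStream)   // onStream receives the webcam MediaStream
    .catch(onError);  // e.g. the user denied camera access
}
```

In the browser you would pass the resulting stream to a video element; Tracking.js does all of this for you when you call tracking.track() with { camera: true }.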

All of the main functionality can be tested on their website at www.TrackingJS.com.

In order to follow along you will need to set up a basic web development environment with a text editor. I use Sublime Text 2.

You will also need some way to send files to your server. You could use Filezilla or even the terminal with SSH but I use the SFTP plugin for Sublime Text.

Step 2: HTTPS.

Since Tracking.js relies on getUserMedia(), we must use a secure connection when trying to access the user's webcam. This means that when linking to a page where we want to do some tracking, the URL must start with https, as in my website for example: https://www.MatthewHallberg.com.

If you don't want to link to a page but would rather redirect all traffic to a secured version, you can add a file in your main directory called ".htaccess".

Inside this file, put the following (make sure to change the URLs to the domain you want to redirect to):

.htaccess FILE: 
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\.matthewhallberg\.com$ [NC]
RewriteRule ^(.*)$ https://www.matthewhallberg.com/$1 [L,R=301]

Step 3: Download Tracking.JS

Go to www.TrackingJS.com and download the library.

If you have not already, go to www.MatthewHallberg.com and download the image folder I provided.

Unzip both folders and put them into the working directory of your site. Upload both of them to your server.

Go back to trackingjs.com and navigate to the Face (Camera) section in the left side menu. At the bottom right of the page, click "view source" and copy that code into a new page on your site. Fix the references at the top to point to the correct files on your server, and add references for the eye and mouth classifier files like this:
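Assembled from the full example later in this Instructable, the two added references look like this (adjust the paths to match where you uploaded the library):

```html
<script src="tracking.js-master/build/data/eye.js"></script>
<script src="tracking.js-master/build/data/mouth.js"></script>
```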

Step 4: Let's Do Some Tracking.

After the first script tag, create an image element for the X pictured above:

    var img = document.createElement("img");
    img.src = 'img/eye.png';

Pass 'eye' and 'mouth' into the ObjectTracker() constructor and set the 3 attributes like this:

    var tracker = new tracking.ObjectTracker(['eye', 'mouth']);
    tracker.setInitialScale(1);
    tracker.setStepSize(2.7);
    tracker.setEdgesDensity(0.2);

Finally, in the event.data forEach loop, delete all of the code that draws the rect and replace it with this:

context.drawImage(img, rect.x - 10, rect.y - 10, rect.width * 1.5, rect.height * 2);

Step 5: It's Working!

Upload that to your server, and when you visit the page you will be prompted to allow camera access. Once you allow it, you will see the X image being drawn over your mouth and eyes!

Now you will notice that more than three X's can get drawn at a time. As far as I know, it is not possible to distinguish what is currently being tracked inside the event.data forEach loop, so if you only want three X's showing per frame you would have to create your own logic: check the total number of X's drawn, or maybe skip an X if it is too far away from the others. Word of warning: I did try this, and it did not work very well.
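One simple version of that logic is to keep only the N largest detections each frame, on the assumption that the biggest rects are the real eyes and mouth. This is a hypothetical helper, not part of the Tracking.js API, and as noted above this kind of filtering is hit-or-miss:

```javascript
// Hypothetical helper: keep only the n largest detection rects per frame,
// since event.data can contain extra false positives.
function keepLargest(rects, n) {
  return rects
    .slice() // copy so we don't mutate the original event.data array
    .sort(function (a, b) {
      return b.width * b.height - a.width * a.height; // biggest area first
    })
    .slice(0, n);
}
```

Inside the track handler you would then write keepLargest(event.data, 3).forEach(...) instead of event.data.forEach(...).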

Step 6: Horns.

Now let's get the Snapchat-style filter from the intro working:

First, change the image source to horns.png like this:

    var img = document.createElement("img");
    img.src = 'img/horns.png';

Change the object tracker and its attributes to this:

    var tracker = new tracking.ObjectTracker('face');
    tracker.setInitialScale(4.7);
    tracker.setStepSize(2);
    tracker.setEdgesDensity(0.1);

Finally draw the image and set its scale and position like this:

context.drawImage(img, rect.x, rect.y - 85, rect.width * 1.1, rect.height * 2);

Upload all that and you should be looking very scary.

Step 7: Become the Legend.

Now it is possible to create buttons and change the image on your face by clicking different buttons.

I have compiled some code here for four buttons, and each one will put a different version of Nick Cage over top of your face. You can, however, change this to display whatever images you would like. Here is everything you should need:

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>tracking.js - face with camera</title>
  <link rel="stylesheet" href="tracking.js-master/examples/assets/demo.css">

  <script src="tracking.js-master/build/tracking.js"></script>
  <script src="tracking.js-master/build/data/face.js"></script>
  <script src="tracking.js-master/build/data/eye.js"></script>
  <script src="tracking.js-master/build/data/mouth.js"></script>
   <script src="../node_modules/dat.gui/build/dat.gui.min.js"></script>
  <script src="tracking.js-master/examples/assets/stats.min.js"></script>

  <style>
  video, canvas {
    margin-left: 230px;
    margin-top: 120px;
    position: absolute;
  }
  </style>
</head>
<body>
  <div class="demo-title">
    <p><a href="http://trackingjs.com" target="_parent">tracking.js</a> - get user's webcam and detect faces</p>
  </div>

  <div class="demo-frame">
    <div class="demo-container">
      <video id="video" width="320" height="240" preload autoplay loop muted></video>
      <canvas id="canvas" width="320" height="240"></canvas>


<div style = "text-align:center">
  <button class="button" onclick="none()">Off</button>
   <button class="button" onclick="nick1()">Nick 1</button>
   <button class="button" onclick="nick2()">Nick 2</button>
   <button class="button" onclick="nick3()">Nick 3</button>

    <style type="text/css">
      .button {
          background-color:black;
          cursor:pointer;
          text-align: center;
          position: relative;
          top: 50px;
          width: 100px;
          height: 40px;
          color: #ffffff;
          border: none;
          font-size: 100%;
      }
      .button:focus {
        border: 2px solid blue;
      }
    </style>
</div>

    </div>
  </div>


  <script>

    var img = document.createElement("img");
    img.src = '';

    function none() {
      img.src = '';
    }
    function nick1() {
      img.src = 'img/nick1.png';
    }
    function nick2() {
      img.src = 'img/nick2.png';
    }
    function nick3() {
      img.src = 'img/nick3.png';
    }

    window.onload = function() {
      var video = document.getElementById('video');
      var canvas = document.getElementById('canvas');
      var context = canvas.getContext('2d');

      // Alternative: track eyes and mouth instead of the whole face:
      // var trackerEyesMouth = new tracking.ObjectTracker(['eye', 'mouth']);
      // Suggested values -- eyes/mouth: stepSize 1.5, edgesDensity 1;
      // face: stepSize 2, initialScale 4, edgesDensity 0.1
      var tracker = new tracking.ObjectTracker('face');
      tracker.setInitialScale(4);
      tracker.setStepSize(2);
      tracker.setEdgesDensity(0.1);

      // start the webcam and feed frames to the tracker
      tracking.track('#video', tracker, { camera: true });

      tracker.on('track', function(event) {
        // clear the previous frame's overlay
        context.clearRect(0, 0, canvas.width, canvas.height);

        event.data.forEach(function(rect) {
          // draw the currently selected image over each detected face
          context.drawImage(img, rect.x, rect.y / 4, rect.width + 11, rect.height * 1.9);
        });
      });

      var gui = new dat.GUI();
      gui.add(tracker, 'edgesDensity', 0.1, 0.5).step(0.01);
      gui.add(tracker, 'initialScale', 1.0, 10.0).step(0.1);
      gui.add(tracker, 'stepSize', 1, 5).step(0.1);
    };
  </script>

</body>
</html>

Hopefully you enjoyed this stuff! Let me know in the comments if you have any questions, though I probably can't answer them because I suck at web development.
