Introduction: Pan / Tilt Face Tracking With the Raspberry Pi


With some effort I have found that controlling two servo motors to pan and tilt a webcam while tracking a face in real time on the Raspberry Pi is not as impossible as it may at first seem. With some careful tweaking and code optimization, I was able to get the Pi to drive both servos while running OpenCV face detection at 320x240 -- looking for a right profile, a left profile, and a frontal face -- and adjusting the servos more than once per second.

Step 1: Acquire the Hardware.


Things needed:

A Raspberry Pi -- a Model A will work fine; I have the original Model B, which has the same specs as the newer Model A (minus networking).
A pan/tilt bracket
Two servos
A GPIO ribbon cable
A Pi-supported webcam -- I used a Logitech C210

Assuming you already have a Raspberry Pi and a webcam, the additional hardware will run you about $25.

Step 2: Get Your Raspberry Pi Ready.


Make sure you are using the official Raspbian OS (the hard-float version) and that it is up to date.
You may want to overclock your Raspberry Pi; I overclocked mine to 800 MHz. The higher you go, the faster the face detection will be, but the less stable your Pi may be.

Install OpenCV for Python: sudo apt-get install python-opencv
Get the wonderful ServoBlaster servo driver for the Raspberry Pi by Richard Hirst; it lives in his PiBits repository on GitHub.

You can download all the files as a zip archive and extract them to a folder somewhere on the Pi.
To install the ServoBlaster driver, open a terminal and cd into the directory where you extracted the ServoBlaster files,
then run the command: sudo make install_autostart
EDIT: The command for recent versions of ServoBlaster is just: sudo make install


You may want to make ServoBlaster time out and stop sending signals to a servo after a second if it's not being moved.
To do this, add the following line to /etc/modules: servoblaster idle_timeout=1000

Start ServoBlaster with the following command: sudo modprobe servoblaster
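
Once the module is loaded you should have a /dev/servoblaster device that accepts simple position commands. Later, once a servo is wired up (Step 4), you can sanity-check the driver with a minimal sketch like this one (assuming ServoBlaster's default configuration, where positions are written as "servo=position" in 10 microsecond steps, so 150 means a 1.5 ms pulse, roughly centre for most servos; run it with sudo if the device needs root):

# Centre servo 0 (the one on GPIO 4) by writing a position command to ServoBlaster.
with open('/dev/servoblaster', 'w') as servo:
    servo.write('0=150\n')   # 150 x 10us = 1.5 ms pulse, approximately mid-travel
    servo.flush()

If the servo twitches to its centre position, the driver is installed and working.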

The next task is to get the camera functioning as expected:

First of all, thanks a lot to Gmoto for finding this and pointing it out; it was the last piece of the "pi" to get everything running smoothly. You have to adjust some parameters in the uvcvideo module to get the webcam behaving well.
Namely, run these commands:

sudo rmmod uvcvideo
sudo modprobe uvcvideo nodrop=1 timeout=5000 quirks=0x80

You will need to run those commands every time you reboot if you plan to run the face-tracking program, or alternatively add the parameters to /etc/modules as you did with the ServoBlaster idle-timeout tweak.
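
Before moving on, it's worth confirming that OpenCV can actually pull frames from the webcam. The snippet below is just a rough check (it assumes the webcam is video device 0 and uses the same 320x240 resolution the tracking script runs at):

import cv2

capture = cv2.VideoCapture(0)                       # first USB camera
capture.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 320)    # match the resolution used for tracking
capture.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 240)
ok, frame = capture.read()                          # grab a single frame
print 'frame captured:', ok
capture.release()

If it prints True, the uvcvideo tweaks are doing their job.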

Step 3: Put Together Your Rig


Build the pan/tilt bracket as per its instructions and attach the servos.
Attach your camera to the top of the bracket (I just used tape) and plug it into a USB port on your Raspberry Pi.
I was able to power the camera without a USB hub, but you may want to get a powered USB hub and go through that instead.

Step 4: Connecting the Servos


ServoBlaster considers servo 0 to be whatever is connected to GPIO 4, and servo 1 to be whatever is connected to GPIO 17.
Servos have three wires: one is red, which is Vin/positive; one is brown or black, which is ground/negative; and the other is control.
Using the ribbon cable (and, in my case, some connector wire jammed into the holes), connect the control wire for each servo to the correct pin. The code assumes that servo 0 will control the left-right movement and servo 1 will control the up-down movement of the camera, so connect them this way.

Now it would seem to be common sense that Vin for the servos would come from the 5V pins on the GPIO header and the ground for the servos would come from the GPIO ground pins, but this did not work in my case because I used a larger servo for the base. The large servo pulled more power than the Pi was willing to supply. I was, however, able to power my smaller tilt servo with no issue. Also, Richard Hirst, who made ServoBlaster, seems to imply that he can drive multiple small servos from the GPIO 5V pin. I have also learned that my version of the Pi has some fuses on those power pins that were removed in later revisions. My instinct tells me that you could power two smaller servos from those pins on a newer Pi. If you cannot, this is what you will have to do:

You will need some kind of external power source that can handle a heavy 5V-6V load. I used the one built into an Arduino, but any 5-ish-volt supply should do; the servos are rated for up to 6V. The 5V rail of a computer power supply, a 5V-6V wall charger, some batteries in series -- whatever floats your boat. Once you have your external source, connect the positive and negative lines from the servos to the positive and negative sides of your power source, then connect the ground (negative) of your external power source to a ground pin on the Raspberry Pi GPIO header.
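
Once the servos have power and their control wires are connected, you can double-check which channel drives which axis. Here is a rough test sketch (again assuming ServoBlaster's /dev/servoblaster device and its 10 microsecond position units); the first sweep should pan the camera left and right, the second should tilt it up and down:

import time

# Nudge each ServoBlaster channel in turn so you can see which axis it drives.
# Channel 0 (GPIO 4) should be pan, channel 1 (GPIO 17) should be tilt.
with open('/dev/servoblaster', 'w') as servo:
    for channel in (0, 1):
        for position in (130, 170, 150):      # one way, the other way, back to centre
            servo.write('%d=%d\n' % (channel, position))
            servo.flush()
            time.sleep(0.5)                   # give the servo time to get there

If the axes come out swapped, just swap the two control wires.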


Step 5: Run the Program

I have attached the Python script to this article; it's called PiFace.py. To run it, just cd to its location in a terminal and type: python PiFace.py

Here are some videos of mine in action.





Step 6: Reflect and Learn

Look through the source code; it's well commented to explain how everything works. Basically, it looks for a frontal face, then a right profile face, then a left profile face, and it loops until it finds one. If it finds, for instance, a left profile face, it stops searching for the right profile and frontal faces and keeps looping and searching only for the left one (to speed up detection). If it ever loses that left face, it goes back to searching for all three again. When it finds a face, it gets the center coordinates of that face and uses them to decide whether to move the servo motors, which way to move them, how far to move them, and how fast to move them.
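
In outline, the search logic looks something like the sketch below. This is a simplified approximation rather than a copy of PiFace.py (the cascade file names are placeholders, and the frame grab and servo maths are stripped down), but it shows the frontal/right-profile/left-profile ordering and the "remember what you found last time" trick:

import cv2

# Cascade file names here are placeholders -- use whichever XML files ship with the project.
frontalface = cv2.CascadeClassifier('FrontalFace.xml')
profileface = cv2.CascadeClassifier('ProfileFace.xml')
capture = cv2.VideoCapture(0)

lastface = 0   # 0 = nothing yet, 1 = frontal, 2 = right profile, 3 = left profile (flipped frame)
while True:
    ok, aframe = capture.read()
    if not ok:
        continue
    faces = ()

    if lastface in (0, 1):                                   # only try frontal if it hit last time (or nothing did)
        faces = frontalface.detectMultiScale(aframe, 1.3, 4)
        lastface = 1 if len(faces) else 0
    if not len(faces) and lastface in (0, 2):
        faces = profileface.detectMultiScale(aframe, 1.3, 4)
        lastface = 2 if len(faces) else 0
    if not len(faces) and lastface in (0, 3):
        faces = profileface.detectMultiScale(cv2.flip(aframe, 1), 1.3, 4)  # mirror the frame to catch the other profile
        lastface = 3 if len(faces) else 0

    if len(faces):
        x, y, w, h = faces[0]
        print 'face centre:', (x + w/2, y + h/2)             # PiFace.py feeds this point into the servo logic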

And yes, how fast. The script creates two subprocesses, one for each servo. When a servo is told to move, a speed is provided; the subprocess loops and increments the servo position by one with each pass until it reaches the desired position, and how fast it loops is based on the provided speed parameter. This lets you move the servo motors at various speeds even though the servo's own speed is not adjustable. I originally implemented this with threads, but Python proved to have very poor handling of threads at high CPU loads.
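
The idea is easy to sketch on its own. The snippet below is an approximation of that behaviour rather than code lifted from PiFace.py (the function name and the step timing are my own), but it shows how stepping one position unit at a time with a variable delay gives you an adjustable speed:

import time

def glide(channel, start, target, delay):
    # Step a ServoBlaster channel one unit per pass; a longer delay means a slower move,
    # even though the servo itself only ever sees absolute target positions.
    step = 1 if target > start else -1
    with open('/dev/servoblaster', 'w') as servo:
        for position in range(start, target + step, step):
            servo.write('%d=%d\n' % (channel, position))
            servo.flush()
            time.sleep(delay)      # e.g. 0.01 for a fast sweep, 0.05 for a slow one

PiFace.py runs a loop like this in a separate subprocess for each servo, so pan and tilt can glide independently and at different speeds.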

Just like pretty much any open-source face detection application, we are using OpenCV's Haar classifier cascades to search for patterns matching those found in the included FrontalFace.xml. But there seem to be some poorly understood and poorly documented aspects of the parameters of the cvHaarDetectObjects function which have a major impact on the performance of the program.

The first parameter is, of course, the image: you pass the function the image you want to search for faces within. There seems to be some confusion even at this step. People seem to think that by first converting the image to grey-scale, the processing will be faster. A simple benchmark will show that this is untrue; in fact it makes the process slower, because you are performing an extra step. People also seem to think that scaling the image down first will make things faster. This makes intuitive sense, because now there is a smaller image to search for a face within, but it is not the most efficient method, which brings me to the next parameter:

scaleFactor -- the forum dwellers seem to give suggestions about what this should be set to without giving much explanation of what it is. To fully understand it, you need to know how OpenCV detects faces:


Watch that video and pay special attention toward the end. Notice that a square moves from top left to bottom right. Each time it moves, it looks for a pattern within it -- in this case a face, but with OpenCV that pattern could be anything. See how it makes one pass and then gets bigger before going at it again? The amount by which it increases in size with each pass is the scaleFactor. If you set it to 1.1, the window gets 1.1 times bigger (10%) with each pass; 1.3 makes it 1.3 times bigger with each pass, or 30%. Obviously the quicker the window grows, the sooner the search completes, but at the expense of possibly missing a face that was there.

The next parameter has no impact on performance so far as I can tell. minNeighbors tells the program how picky to be about what it considers a match. The function is looking for patterns and checks whether those patterns match its pattern database, an XML file. I think the default is 3, which means that if there are 3 patterns inside the square where it is looking that match patterns found in the XML file, it considers that a match. I set mine to 4. The higher you set it, the more sure you can be that when it reports a match, it's right. However, set it too low and it thinks everything it sees is a face; set it too high and it will have trouble catching actual faces.

The next parameter is flags; these are options you can enable to tweak things:
One is CV_HAAR_DO_CANNY_PRUNING. This flag was designed just for faces; it tells the function to skip searching over areas full of sharp edges, because faces generally do not have any sharp edges. (See the attached image -- sometimes they might...) This speeds things up, depending on the backdrop.
Another is CV_HAAR_FIND_BIGGEST_OBJECT, which tells the function to return only the largest object it found.
Another is CV_HAAR_DO_ROUGH_SEARCH, which tells the function to stop looking once it finds something. It's meant to be used with CV_HAAR_FIND_BIGGEST_OBJECT and greatly improves performance when you are only trying to find one face.

The last two parameters are important, or at least one of them is: minSize and maxSize. A common method for speeding up the search seems to be to scale the image down -- if you want to double the speed of detecting a face in an 800x600 image, scale it to 400x300. The problem with that logic is that you are also shrinking the potential faces, and Haar detection can't reliably find faces smaller than about 20x20 pixels. Not only that, you are now spending computer resources on shrinking the image. You can get the same speed boost by specifying a minSize for the face: 20x20 is the default, but 40x40 will go crazy fast in comparison. The higher you go, the faster the search will be, but you may start missing smaller faces.
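
Putting all of that together, the call in PiFace.py looks roughly like this (the variable names frontalface and aframe come from the script, and the constants are the old cv2.cv spellings that python-opencv 2.x uses; on OpenCV 3 and later the same flags are exposed as cv2.CASCADE_DO_CANNY_PRUNING, cv2.CASCADE_FIND_BIGGEST_OBJECT and cv2.CASCADE_DO_ROUGH_SEARCH):

fface = frontalface.detectMultiScale(
    aframe,                                      # the full-size frame straight from the camera
    scaleFactor=1.3,                             # grow the search window 30% per pass: fast, but may miss small faces
    minNeighbors=4,                              # require 4 overlapping hits before calling it a face
    flags=(cv2.cv.CV_HAAR_DO_CANNY_PRUNING +
           cv2.cv.CV_HAAR_FIND_BIGGEST_OBJECT +
           cv2.cv.CV_HAAR_DO_ROUGH_SEARCH),      # skip edgy regions, return only the biggest face, stop early
    minSize=(60, 60))                            # ignore anything smaller than 60x60 pixels -- the big speed win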

Just wanted to clear that up...


I hope this helps, and I hope everyone enjoys working with the Raspberry Pi and OpenCV as much as I did.
This is my first instructable. I would love to see your comments.
Thanks,
Chris

Comments

진수장 (author)2017-06-26

I have a question. I don't know why this error happens.

First, I found this error at this line of code:

(original code)

pfface = profileface.detectMultiScale(aframe,1.3,4,(cv2.cv.CV_HAAR_DO_CANNY_PRUNING + cv2.cv.CV_HAAR_FIND_BIGGEST_OBJECT + cv2.cv.CV_HAAR_DO_ROUGH_SEARCH),(80,80))

(showing error)

File "2.py", line 172, in <module>
pfface = frontalface.detectMultiScale(aframe,1.3,4,(cv2.cv.HAAR_DO_CANNY_PRUNING + cv2.cv.CV_HAAR_FIND_BIGGEST_OBJECT + cv2.cv.CV_HAAR_DO_ROUGH_SEARCH),(60,60))
AttributeError: 'module' object has no attribute 'cv'

So I changed it like this:

(changed code)

pfface = frontalface.detectMultiScale(aframe,1.3,4,(cv2.HAAR_DO_CANNY_PRUNING + cv2.HAAR_FIND_BIGGEST_OBJECT + cv2.HAAR_DO_ROUGH_SEARCH),(60,60))

But the error happens again, like this:

(showing error)

Traceback (most recent call last):
File "PiFace_update.py", line 173, in <module>
pfface = frontalface.detectMultiScale(aframe,1.3,4,(cv2.HAAR_DO_CANNY_PRUNING + cv2.HAAR_FIND_BIGGEST_OBJECT + cv2.HAAR_DO_ROUGH_SEARCH),(60,60))
AttributeError: 'module' object has no attribute 'HAAR_DO_CANNY_PRUNING'

How do I solve this problem?

KhalidR15 (author)진수장2017-08-20

I am also facing this problem... if you solved it, please share the solution with me.

Crimsonyde made it! (author)2017-05-11

Great, still works on the Raspberry Pi 3!

Can't really remember where I got the pan/tilt bracket.

I'm using a Microsoft LifeCam HD 3000, running 5V off an old Dell computer power supply, and the signal is coming through an Adafruit Pi Cobbler.

It is a little inconsistent; sometimes the camera moves in the opposite direction or has a hard time tracking my face, so I have to position my face carefully so that it can find it.

JochenMaria (author)Crimsonyde2017-05-15

Hey,

Following the tutorial as it is, nothing works or is in place.
How did you get it running?

Thanks for your help!

Crimsonyde (author)JochenMaria2017-05-16

For the most part, I did what is mentioned in this tutorial.

I had to install and run ServoBlaster differently than what is mentioned in this tutorial, however. What you want to use is the userspace version of ServoBlaster, so here's what I did:

Go to the PiBits/ServoBlaster/user/ folder

cd PiBits-master/ServoBlaster/user

sudo make install

sudo servod

After that, run these commands:

rmmod uvcvideo
modprobe uvcvideo nodrop=1 timeout=5000 quirks=0x80

Then you can go ahead and try and run the program using:

sudo python PiFace.py

Make sure you have a camera plugged in otherwise you will get an error.

Also, what kind of problems are you experiencing?

mohdattaullah (author)2017-03-23

After running the program,

I am getting this error:

Traceback (most recent call last):

File "/home/pi/Downloads/PiFace[1].py", line 172, in <module>

fface = frontalface.detectMultiScale(aframe,1.3,4,(cv2.cv.CV_HAAR_DO_CANNY_PRUNING + cv2.cv.CV_HAAR_FIND_BIGGEST_OBJECT + cv2.cv.CV_HAAR_DO_ROUGH_SEARCH),(60,60))

error: /build/opencv-ISmtkH/opencv-2.4.9.1+dfsg/modules/core/src/array.cpp:2482: error: (-206) Unrecognized or unsupported array type in function cvGetMat

What should I do now?

highblackbone (author)2017-03-22

Hi Grintor, thank you for the nice project. I have a problem: the program runs and detects faces, but the servos don't move. I connected them to GPIO 4 and GPIO 17 on a Raspberry Pi Model A. What can I do?

mkim1 (author)2017-02-14

How would one get this to work with brushless gimbal motors?

touseefk1 (author)2017-01-23

Can we use the Raspberry Pi 3 Model B camera module instead of a webcam?

JoelF50 (author)2017-01-19

Hi Grintor - AMAZING WORK- Did you do any further work with this project?

JennaSys made it! (author)2014-09-08

Nice project! I used your core PiFace.py codebase as a starting point but modified it to use RPIO.PWM instead of ServoBlaster and also to use the RaspiCam instead of a USB camera. I did make one fix where the pan servo was moving in the wrong direction when the image was flipped for the reverse profile search:

if lastface == 3:
    Cface[0] = (w/2) + (320 - x - h)

In general for opencv, using an LBP cascade was a little faster for frontal face searches, but HAAR was more reliable for profile searches.

In addition to some other minor tweaks, I also added a video feed window with a face rectangle so I could see what it was looking at.

Did you ever find a more elegant way to handle the frame lag issue than just doing a bunch of successive reads?

ShawnL1 (author)JennaSys2014-10-15

Can you give me some guidance on how you added the video feed window? I am trying to do that, as the camera continuously overshoots the face and constantly corrects. I would like to see if I can determine why it is doing that. I have been trying to use imshow and having no luck. If you could share your code for adding the video display, that would be wonderful. Thanks.

JennaSys (author)ShawnL12014-10-15

Here are the code snippets I used for displaying the video feed:

#setup video at startup:

capture = cv2.VideoCapture(0)
capture.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, width)
capture.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, height)

cv2.cv.NamedWindow("video", cv2.cv.CV_WINDOW_AUTOSIZE)

#update screen with rectangle around found face after every face search in the while loop

cv2.cv.Rectangle(cv2.cv.fromarray(aframe), (x,y), (x+w, y+h), cv2.cv.RGB(255, 0, 0), 3, 8, 0)
cv2.imshow("video", aframe)
cv2.waitKey(1)

hedonist777 (author)JennaSys2014-12-28

@JennaSys any chance you could also share the code you used to get the capture from the Raspberry Pi camera module? I have been struggling for hours trying to figure it out ;)

JennaSys (author)hedonist7772014-12-28

I've had several people ask for my code updates for this so I went ahead and put it up on github:

https://github.com/JennaSys/facetrack

The python code is in facetrack2.py

I admittedly didn't do a good job of documenting my code changes since I was just messing around with this, so if anyone has questions, let me know and I'll go back and add explanations where necessary.

Also in case anyone is interested here is the pan/tilt hardware I used for the PiCam:

http://www.adafruit.com/products/1434 (Pimoroni camera mount)

http://www.adafruit.com/products/1968 (pan/tilt rig w/o servos)

or

http://www.adafruit.com/products/1967 (pan/tilt rig w/servos)

I also 3D printed an adapter for the camera mount so I wouldn't have to attach it to the pan/tilt rig with rubberbands:

http://www.thingiverse.com/thing:480559

When I get a chance, I'll try and make a video of it in action. Thanks again to Grintor for putting up this instructable!

JasonB113 (author)JennaSys2016-11-10

Hi Jenna. This is great. I am very new to the Pi and Python, I have the Pan/Tilt Mount from Adafruit, a Pi3 and OpenCV installed. I am a bit confused about how to connect the Pan/Tilt to the Pi and haven't been able to find any good info. Can you give me some tips? I realize they need to connect to the GPIO ports, but which wires go to which pins? Do I need a breadboard?

JennaSys (author)JasonB1132016-11-10

A breadboard definitely makes it a bit easier to make the connections - at least on a temporary basis.

In the code you'll see the lines:

pPan = 23
pTilt = 24

which define the GPIO pins that the pan and tilt servos will connect to for the control signal. (You also have the red/black connections that are used to power the servos as well.)

That said, I don't know if the RPIO library used in this code will work with the Pi3. I know some work has been done on it, but I don't know if they fixed the compatibility with the newer RasPi boards yet or not.

Hi, first of all, thank you -- that's more than I need to achieve my purpose.

But I'm having trouble: when I run your code, the video display appears and the RPi immediately freezes. It seems like it is too much for my Raspberry Pi; I'm using an RPi Model B+. Do you know why this happens?

I have had that happen before as well; there are 2 possible causes:

1. Make sure the power to the Raspberry Pi is sufficient and that the servos aren't causing a brownout.

2. Try running the program outside of any editors (like Idle) using:

sudo python facetrack2.py

EEEngineer (author)JennaSys2016-09-05

Hey Jenna, what adjustments should I make when using the PiCamera?

Grintor (author)JennaSys2014-11-11

Thank you for finding that bug. I actually haven't done anything else with this project since I posted this. But yours looks nicer than mine. If you make a video of it in action, I'll post it in this instructable and credit you with a link.

EEEngineer (author)2016-09-05

How would I record the video and send it to the web or to the Pi's SD card storage?

specsy22 (author)2016-06-11

Great instructable. I made it with cam.

KindN (author)2016-05-06

Someone should do a step by step video of this

MarcelM5 (author)2016-04-10

Hello guys,

I am trying to make this project, but I have an issue: can someone tell me how to connect the servos so that they work?
I tried connecting the servos to a GPIO ribbon cable attached to the Raspberry Pi GPIO header; the camera sees my face, but it doesn't move a bit.

steven_c94 (author)2016-02-13

How do I show the video from the webcam?

antonior1 (author)2016-01-06

hi!

Something went wrong -- what shall I do? Please help me... :(

Thanks in advance

//

OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or unsupported array type) in cvGetMat, file /build/opencv-ISmtkH/opencv-2.4.9.1+dfsg/modules/core/src/array.cpp, line 2482

Traceback (most recent call last):

File "PiFace.py", line 168, in <module>

fface = frontalface.detectMultiScale(aframe,scaleFactor=1.3,minNeighbors=4,flags=(cv2.cv.CV_HAAR_DO_CANNY_PRUNING + cv2.cv.CV_HAAR_FIND_BIGGEST_OBJECT + cv2.cv.CV_HAAR_DO_ROUGH_SEARCH),minSize=(30,30))

cv2.error: /build/opencv-ISmtkH/opencv-2.4.9.1+dfsg/modules/core/src/array.cpp:2482: error: (-206) Unrecognized or unsupported array type in function cvGetMat

AndreaS76 (author)2015-11-04

Nice job! Is there any way to make it work with blob detection?

istilianakos (author)2015-09-27

It's so cool... I made it!

VeeraR (author)2015-09-23

Does the cam come back to its initial position after the face goes out of its sight?

Truong vanT (author)2015-08-12

Traceback (most recent call last):

File "PiFace.py", line 14, in <module>

ServoBlaster = open('/home/pi/Desktop/ServoBlaster', 'w')# ServoBlaster is what we use to control the servo motors

IOError: [Errno 21] Is a directory: '/home/pi/Desktop/ServoBlaster'

Can you help me fix it?

Ariev5 (author)Truong vanT2015-08-29

It seems like /home/pi/Desktop/ServoBlaster should be a file, and not a directory.

Python's builtin function "open" opens files for reading from and writing to, so when you specify a directory, the program throws an error.

PiFace.py uses a file at /dev/servoblaster ... I'm not familiar with controlling servos, but it seems like /dev/servoblaster is supposed to be a buffer (file) that's responsible for the communication between the programmed instructions and the actual machine.

I hope that helps a little and sets you on the right track - but like I said, I know python but nothing about servos, so take what I say with a grain of salt. :)

Truong vanT (author)2015-08-12

amartin53 (author)2015-05-30

I was getting all 0's returned to the console for a while. Turns out my glasses were interfering with the face detection! Also helps if you find the optimal distance from the webcam.

RahilA1 (author)2015-05-02

I am using the code from facetrack2.py and I'm having trouble with the servo: it is not running along with the detection. I connected it to GPIO 23 but it still does not run. The servo I am using is an 'mg 90s pro'. Please tell me if there is something in the code I have to change, or if I am connecting to the wrong pin, because there are two things regarding pins: one is the GPIO number and the other is the physical pin number. I connected it to pin 16, i.e. GPIO 23.

ChrisJ12 (author)2015-03-20

Hi Chris,

How easy/hard would it be to adapt this setup to use a joystick to control the movement of these servos? I was looking at something along these lines: http://letsmakerobots.com/node/33643 where the camera does not return to the centre position.

Thanks,

Chris

jim.avery.9 (author)2014-12-09

I am building a similar project, only the camera will be mounted inside a 12" steel ball, so I am thinking of sending the coordinate info to an Arduino and stepper controllers (step/dir) to drive bigger stepper motors for pan/tilt. Awesome project, great job!

tmatar (author)2014-09-02

Hello
Did you use a bridge to control those motors?

Grintor (author)tmatar2014-11-11

No, they are controlled directly from the GPIO pins on the Pi

selvam27193 (author)2014-06-12

Hey, how do I interface the web camera with the Pi? Please send me the code. Also, which algorithm is best for face detection and tracking? Please send the code for image tracking in Python to my id: selvam27193@gmail.com

Grintor (author)selvam271932014-11-11

The code is in the zip file in the instructable.

DanielS13 (author)2014-11-10

Hey Grintor nice tutorial. I'm from the UK and I'm in an engineering scheme placement over here and would like some help with my team's project. We have to construct a rig to stream and control a number of cameras in a test lab at GKN, to a nearby office. We were wondering, is it better to control the servos directly from the GPIO pins, or to use an extension board such as a PiFace Digital? Thanks in advance, Dan.

Grintor (author)DanielS132014-11-11

'Better' is subjective. Since Raspbian is not a real-time operating system (unless you recompile the kernel), the timing will be more precise when you use an external controller like that, but I haven't seen this translate into any real-world performance impact. Also, the Pi's signaling is 3.3V, and some servos may require 5V. My advice is to give it a try; if you have problems, then use a board.

jelimoore (author)2014-08-10

Hello, how would you go about recording this video?

dhaval123 (author)2014-02-18

I want assembly for fitting servo motor

jangop (author)2014-01-07

Hello,

I want to try this one but I can't find the download for the "PiFace.py". Can somebody help me?

Grintor (author)jangop2014-01-07

It's in the zip file on step 6

JLRTech (author)2013-11-27

Skip earlier question.... Resolved. Power fluctuations.
New problem also resolved: Servo 0 was moving opposite to correct tracking direction. Resolution: changed str(_Servo0CP) to str(300 - _Servo0CP) in the two ServoBlaster.write statements. Woohoo! Works like a charm!