This is a tutorial on how to build a simple face-tracking camera using an Intel Edison with the Arduino breakout board, any Android smartphone, a servo motor, and a few other things you may find lying around on your desk. The code is written in Python and is very simple.
What it does is repeatedly download frames from your smartphone over the network and try to find a face in each frame. If the face is centered, it does nothing. If the face is to the right or left of the picture, it moves the servo to center the face, then repeats the process.
It runs a bit slowly because it needs to download a new frame wirelessly on every loop.
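The loop described above can be sketched as follows. This is a minimal sketch, not the project's exact code: the dead-zone width and the function name `steer` are my own illustration, and the actual frame-grabbing and servo calls are left out.

```python
# Decide which way to turn based on where the face is. The real
# script would call this every loop with the detected face's
# horizontal position and the frame width (640 for this setup).

def steer(face_x, frame_width, dead_zone=80):
    """Return -1 (turn left), +1 (turn right) or 0 (do nothing)
    for a face whose horizontal center is at face_x pixels."""
    center = frame_width / 2.0
    if face_x < center - dead_zone / 2.0:
        return -1
    if face_x > center + dead_zone / 2.0:
        return 1
    return 0
```

The dead zone keeps the servo from jittering when the face is already roughly centered.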
Step 1: Materials
- Intel Edison
- Servo motor
- Any Android smartphone with a camera
- Cell phone holder (the kind used in cars)
- 3 jumper wires
- Paperweight of some sort
- Rubber bands
Step 2: Set Up Your Edison
The code that will run on your Edison requires the OpenCV library for face detection and the UPM library for controlling the servo (you could stick with mraa, but UPM makes it easier). Please note that installing OpenCV requires a memory card, because there is not enough storage on the Edison alone.
I'll assume you know how to do this, but in case you don't, don't worry: it's a one-time thing.
Yocto users, follow:
Building and installing openCV library:https://software.intel.com/en-us/articles/opencv-3...
Ubilinux users, follow:
Dealing with low memory: http://www.emutexlabs.com/forum/general-questions/...
Building and installing openCV: http://docs.opencv.org/doc/tutorials/introduction/...
Step 3: Download IP Camera App
Download this app on your phone: https://play.google.com/store/apps/details?id=com....
It makes your phone act as an IP webcam, allowing you to access its images over your Wi-Fi network. If your computer can access the camera, so can your Edison.
Enter the video settings and change the resolution to 640x480.
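Once the app is running, you can pull single frames from the phone in Python. This sketch assumes the app's default still-image endpoint (`/shot.jpg`) and default port 8080; adjust both if you changed them in the app's settings. It is written for the Python 2 environment typically found on the Edison.

```python
def shot_url(ip, port=8080):
    """Build the URL of the camera's single-frame endpoint
    (the IP Webcam app's default is /shot.jpg on port 8080)."""
    return "http://%s:%d/shot.jpg" % (ip, port)

def grab_frame(ip):
    """Download one JPEG frame and decode it into an OpenCV image.
    Imports are local so shot_url() works even without OpenCV."""
    import urllib2          # Python 2, as on the Edison images
    import numpy as np
    import cv2
    data = urllib2.urlopen(shot_url(ip)).read()
    arr = np.frombuffer(data, dtype=np.uint8)
    return cv2.imdecode(arr, 1)   # flag 1 = decode as color image
```

Calling `grab_frame("192.168.1.105")` would return a 480x640 BGR array ready for the face detector.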
I learned about this app from this Instructable; check it out for more detailed information: https://www.instructables.com/id/Computer-Vision-on-Intel-Edison-using-your-smartph/
Step 4: Transfer Files to Edison
Open WinSCP or another SFTP client and transfer the script and haarcascade_frontalface_default.xml to your Edison. It's a good idea to run the script now to check that OpenCV and UPM are properly installed.
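A quick way to check the installation before running the tracker is a small import probe like the one below. The module name `pyupm_servo` is the usual name of UPM's Python servo binding, but it may differ depending on your UPM version.

```python
# Sanity check: confirm the libraries the script needs can be
# imported on the Edison before trying to run the tracker.
import importlib

def module_available(name):
    """Return True if `name` can be imported, False otherwise."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

if __name__ == "__main__":
    for mod in ("cv2", "pyupm_servo"):
        status = "OK" if module_available(mod) else "MISSING"
        print("%-16s %s" % (mod, status))
```

If either line prints MISSING, go back to Step 2 before continuing.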
Step 5: Tie Servo Motor to Weight
Use a paperweight or anything at least as heavy as your smartphone to hold the servo in place while it turns. Tie them together using wire, string, or rubber bands. If you can find a way to screw the servo in place, that would work even better.
Step 6: Prepare Phone Holder
Your servo probably came with a few screws and plastic horns. Drill holes just slightly smaller than the screws you are going to use and screw the plastic horn to the holder. This way you'll be able to mount the holder on top of the servo.
Step 7: Connect Servo to Edison
A servo motor has 3 wires: two for power (Vcc and GND) and one for controlling the angle: the signal wire.
Servos are sold with more than one wire color scheme, but all of them are shown in the diagram in this step.
Connect Vcc to any 5V pin, GND to any GND pin, and signal to pin 5 (the ~ next to the 5 indicates that this pin supports PWM output, which is what the servo's signal line needs).
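With the wiring done, you can test the servo from Python through UPM. This is a sketch under a couple of assumptions: `ES08A` is one of UPM's generic servo classes and may need to be swapped for the class matching your servo, and the `sweep_test` helper is my own name, not part of the project's script.

```python
def clamp_angle(angle, lo=0, hi=180):
    """Keep requested angles inside the servo's mechanical range
    so a bad face position can't command an impossible angle."""
    return max(lo, min(hi, angle))

def sweep_test(pin=5):
    """Sweep the servo through three positions as a wiring check.
    Requires the UPM Python bindings, so it only runs on the Edison."""
    import pyupm_servo as servo   # UPM's servo binding (assumed name)
    s = servo.ES08A(pin)          # signal wire on pin 5
    for a in (0, 90, 180):
        s.setAngle(clamp_angle(a))
```

If the horn sweeps from one end to the other and back to center, the three wires are connected correctly.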
Step 8: Run!
Start the IP camera by tapping Start server. Its IP address will show up on the screen, and you just have to enter it in the terminal when running the script. If you want to see what the camera sees, go to your browser and enter its address using port 8080. For example, 192.168.1.105:8080
Navigate to the folder where you transferred the script and run, as root or using sudo, python PhoneFaceDetect.py, then enter your camera's IP when prompted. For example, mine shows 192.168.1.105
If no face is in the frame, the script prints "No face detected". If there are one or more faces, it prints the horizontal position (in pixels) of the first face detected.
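For reference, OpenCV's `CascadeClassifier.detectMultiScale` returns each face as an `(x, y, w, h)` rectangle. The source doesn't say exactly which value the script prints (the rectangle's corner or its center), so the sketch below computes the center as an illustration:

```python
def first_face_center_x(faces):
    """Given detectMultiScale output [(x, y, w, h), ...], return
    the horizontal center of the first face, or None if empty."""
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return x + w // 2

# The detection itself looks like this (needs cv2, the cascade
# file from Step 4, and a grayscale frame):
#   cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
#   faces = cascade.detectMultiScale(gray, 1.3, 5)
```

On a 640-pixel-wide frame, a value near 320 means the face is centered and the servo stays put.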