Introduction: Facial Recognition Security System for a Refrigerator With Raspberry Pi

Browsing the internet, I discovered that prices for security systems range from $150 to $600 and above, yet not all of the solutions (even the very expensive ones) can be integrated with other smart tools in your home! For example, you cannot set up a security camera at your front door so that it automatically opens the door for you or your friends!

I decided to make a simple, cheap and powerful solution that you can use anywhere! There are many manuals on how to create cheap homemade security systems, but I want to demonstrate a really nontrivial application of them: a security system for a refrigerator with facial recognition!

How does it work? An IP camera is placed on top of the refrigerator, and sensors (two buttons) detect when a person opens the refrigerator door. The Raspberry Pi then takes a picture of that person with the IP camera and sends it to the Microsoft Face API, which analyzes the image and returns the name of the person. With this information the Raspberry Pi checks the "access list": if the person has no permission to access the refrigerator, the Raspberry Pi notifies the owner via email, text message and Twitter! (See pictures above.)

Why? The system lets you keep an eye on your family members, especially when they are on a diet or struggling not to eat after midnight! Or use it just for fun!

Moreover, you can actually set up the camera at your front door and configure the system to open the door when you, your family members or friends approach. And this is not the end! The possibilities of this application are endless!

Let’s begin!

Step 1: Preparation

You will need:

  • Raspberry Pi 3 (you can use older versions, but the third generation has built-in Wi-Fi, which is very convenient)
  • Buttons
  • Wires
  • Old Smartphone or Raspberry Pi camera

The first thing you have to do is configure your Raspberry Pi. Detailed instructions can be found here and here, but we will cover the most important steps in this manual.

  1. Download Win32 DiskImager from here (if you use Windows)
  2. Download SD Formatter from here
  3. Insert SD card into your computer and format it with SD Formatter
  4. Download Raspbian Image from here (Choose "Raspbian Jessie with pixel")
  5. Run Win32 DiskImager, choose your SD card, specify the path to Raspbian image, click "Write"
  6. Insert SD card into your Raspberry Pi and turn the power on!

Additionally, you will need to configure your Raspberry Pi so that you can access the system via SSH. There are lots of instructions on the internet (you can use this one, for example), or you can attach a monitor and keyboard.

Now your Pi is configured and you are ready to proceed!

Step 2: Making a Sensor

Step Description: In this step we will make a sensor that detects when a person opens the refrigerator door and activates the Raspberry Pi.

To set it up you will need the two buttons that you prepared earlier. The first button will detect when the door is opened; the second will detect when the door is opened to the point at which we take a photo of the person.

  1. Solder wires to buttons.
  2. Attach the first button to the refrigerator door so that it is pressed when the door is closed (see picture above).
  3. Attach the second button to the refrigerator door as shown in the photo above. This button has to stay released at all times, except when the door reaches the point at which the system takes a picture. To set it up, attach something to your refrigerator so that this button is pressed when the door is opened to the desired extent (see photos above).
  4. Attach the wires from the buttons to the Raspberry Pi: the first button to GPIO 23 and ground, the second button to GPIO 24 and ground (see the Fritzing diagram).

Note: I use the BCM pinout (not Board); read more on the difference here.

Once connected to your Raspberry Pi via SSH, run the Python shell by typing in the terminal:

python3

If you have a monitor and keyboard attached to the Raspberry Pi, just run "Python 3 IDLE" from the menu.

The next step is to make the Raspberry Pi work with the buttons. We will attach listeners to GPIO pins 23 and 24 that wait for "rising edge" and "falling edge" events on those pins. When an event occurs, the listener calls a function that we have defined. Since the buttons pull the pins to ground, a "rising edge" means a pressed button was just released (first button: the door is being opened), and a "falling edge" means a released button was just pressed (second button: the door has reached the photo point). More on how the buttons work here.

First, import the library that gives us access to the pins:

import RPi.GPIO as GPIO

Now define the functions that will be called when an event is triggered:

def sensor1(channel):
    print("sensor 1 triggered")

def sensor2(channel):
    print("sensor 2 triggered")

Set the pin numbering mode:

GPIO.setmode(GPIO.BCM)

Configure the pins as inputs with internal pull-up resistors:

GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(24, GPIO.IN, pull_up_down=GPIO.PUD_UP)

Attach listeners:

GPIO.add_event_detect(23, GPIO.RISING, callback=sensor1, bouncetime=300)
GPIO.add_event_detect(24, GPIO.FALLING, callback=sensor2, bouncetime=300)

Now you can test it! If you press button 1 you will see the message "sensor 1 triggered" in the terminal; button 2 gives you "sensor 2 triggered".

Note: when you're done experimenting, do not forget to call GPIO.cleanup().

Let's set up one more function that is called when the door reaches the point where we take a photo! You can write it yourself or use my implementation attached here (sensor.py).
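If you would rather sketch it yourself first, the listeners above can be combined into a small standalone script. This is only a rough sketch of what such a script might look like (the pin numbers match this guide, but process() is just a placeholder), not the attached sensor.py:

```python
# Sketch of a door-sensor script; assumes the wiring from this step.
try:
    import RPi.GPIO as GPIO   # only available on the Raspberry Pi
except ImportError:
    GPIO = None               # lets you test the logic on another machine

DOOR_PIN = 23   # pressed while the door is closed
PHOTO_PIN = 24  # pressed when the door reaches the photo point

state = {"door_open": False}

def on_door_opened(channel):
    # rising edge on GPIO 23: the door has just been opened
    state["door_open"] = True
    print("sensor 1 triggered")

def on_photo_point(channel):
    # falling edge on GPIO 24: the door reached the photo position
    if state["door_open"]:
        print("sensor 2 triggered")
        process()

def process():
    # placeholder: later steps replace this with camera + Face API calls
    state["door_open"] = False
    print("taking a photo...")

if GPIO is not None:
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(DOOR_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.setup(PHOTO_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.add_event_detect(DOOR_PIN, GPIO.RISING, callback=on_door_opened, bouncetime=300)
    GPIO.add_event_detect(PHOTO_PIN, GPIO.FALLING, callback=on_photo_point, bouncetime=300)
    try:
        input("Press Enter to stop...\n")
    finally:
        GPIO.cleanup()
```

The guarded import keeps the callbacks testable on a regular computer, where RPi.GPIO is not installed.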

Note: sensor.py is used only for testing purposes; the files with the full functionality are attached to the last step.

Step 3: Configure IP Camera

Step Description: Now we are going to configure an old smartphone as an IP camera.

Using a smartphone as an IP camera is done via an app. There are apps for Android, iOS and Windows Phone that you can use. I chose one called "IP Webcam" for Android; it is free and easy to configure.

Run the app and go to "Video preferences" to set the resolution of the photos the app will provide. Then tap "Start server" (first image above). At the bottom of the screen you should see the IP address of the camera (second image above). In a browser, type http://cam_ip_address/photo.jpg and you will get an image from the IP camera! Type http://cam_ip_address/photoaf.jpg to get a focused image. Write down this IP address; we will use it to get an image of the person who opens the refrigerator.
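In Python, grabbing a frame from the camera boils down to a single GET request. A minimal sketch (the address in the usage comment is made up; use the one shown in the app, which already includes the port):

```python
import urllib.request

def photo_url(cam_ip, focused=True):
    # IP Webcam serves a single JPEG at /photo.jpg,
    # or /photoaf.jpg for an autofocused shot
    endpoint = "photoaf.jpg" if focused else "photo.jpg"
    return "http://{}/{}".format(cam_ip, endpoint)

def save_photo(cam_ip, path="image.jpg"):
    # download one frame and write it to disk
    with urllib.request.urlopen(photo_url(cam_ip), timeout=10) as resp:
        with open(path, "wb") as f:
            f.write(resp.read())

# Example (hypothetical address): save_photo("192.168.1.42:8080")
```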

Finally, attach the camera to the refrigerator (Last image above).

Step 4: Face API

Step Description: In this step we will talk about Microsoft's Face API, which does facial recognition and identifies people.

Microsoft's Face API is a face recognition service through which we can analyze photos and identify the people in them.

First, you need a Microsoft Azure account. If you don't have one, you can create it for free here.

Second, go to https://portal.azure.com, click "New" on the left side, type "Cognitive Services APIs" into the form, select it and click "Create". Or you can open this link. Now you need to enter a name for your service and select the subscription type, the API type (in our case, Face API), the location, the pricing tier and the resource group, and agree to the Legal Terms (see the screenshot added to this step).

Third, click "All resources", select your Face API service and see the usage statistics, credentials, etc.

Face API details can be found here, with examples in different programming languages. For this project we are using Python. You can read the documentation and build your own set of functions, or you can use the ones provided here (this is not the full set of functionality provided by Microsoft, only what is needed for this project). My Python files are attached to this step.

Let's move on to how we work with the Face API. To use the "Identification" functionality, we have to create a library of people against which the Face API service will match the photos taken by the app. To set it up, follow these steps:

  1. Create a group
  2. Add persons to this group
  3. Add faces to these persons
  4. Train the group
  5. Submit a photo of the person you want to identify (you provide the photo and the ID of the group in which the service will look for candidates)
  6. Result: in response you'll get a list of candidates who may be in the photo you submitted.
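For orientation, the six steps above map onto the Face API REST endpoints roughly as follows. This is a hedged sketch using only the standard library; the region in BASE and the group/person names are assumptions, and the attached PersonGroup.py, Person.py and Face.py wrap the same kind of calls:

```python
import json
import urllib.request

KEY = "your-face-api-key"   # portal.azure.com > your Face API service > Keys tab
BASE = "https://westus.api.cognitive.microsoft.com/face/v1.0"  # your region may differ

def build_request(method, path, body, content_type="application/json"):
    # build one Face API REST call; JSON bodies are encoded, binary passed through
    if content_type == "application/json":
        body = json.dumps(body).encode()
    return urllib.request.Request(
        BASE + path, data=body, method=method,
        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": content_type})

def send(req):
    # perform the call and decode the JSON response (if any)
    with urllib.request.urlopen(req) as resp:
        raw = resp.read()
        return json.loads(raw) if raw else None

# The six steps, sketched (IDs and file names are hypothetical):
# 1. send(build_request("PUT",  "/persongroups/family1", {"name": "family"}))
# 2. send(build_request("POST", "/persongroups/family1/persons", {"name": "Alice"}))
# 3. send(build_request("POST", "/persongroups/family1/persons/<personId>/persistedFaces",
#         open("alice.jpg", "rb").read(), "application/octet-stream"))
# 4. send(build_request("POST", "/persongroups/family1/train", {}))
# 5. send(build_request("POST", "/detect", ...)) then "/identify" with the returned faceIds
```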

I have created three files with specific functionality for working with groups, single persons and single photos:

  • PersonGroup.py - functions to create a group, get information about a group, list all your groups, train a group and get the training status
  • Person.py - functions to create a person, get information about a person, list all persons in a specified group and add faces to a specified person
  • Face.py - functions to detect a face in an image, identify a person and get the name of an identified person

The file "recognition.py" provides functions that check whether an image contains a face and add faces to a specified person (automatically adding faces from all images in a specified folder).

Download the file attached to this step, unpack it, and change the 'KEY' global variable in these three files: PersonGroup.py, Person.py and Face.py, to your own key, which you can find at portal.azure.com > All resources > your Face API service (whatever you named it) > Keys tab. You can use either of the two keys.

Note: here we are going to train the Face API service to recognize people, so the following actions can be done from any computer (the Raspberry Pi is not needed for this) - the changes are saved on Microsoft's servers.

After changing KEY, run recognition.py and enter the following commands in the Python shell:

PersonGroup.create('family', 'fff-fff')  # you can use your own name and ID for the group
printResJson(PersonGroup.getPersonGroup('fff-fff'))

You should see data about the group you just created. Now enter:

printResJson(Person.createPerson('fff-fff', 'name of person'))

Now you have the person ID. Create a folder with images of this person, so that every image contains this person's face. You can use the function detectFaceOnImages in recognition.py, which shows you in which photos a face was detected. Then run the command:

addFacesToPerson('folder with images', 'person ID which you got after previous command', 'fff-fff')

Then we have to train our service by entering the following:

PersonGroup.trainPersonGroup('fff-fff')
printResJson(PersonGroup.getPersonGroupTrainingStatus('fff-fff'))

Now our group is trained and is ready to identify a person.

To check the person in an image, you can run:

Face.checkPerson(image, 'fff-fff')

In response you'll get a list of candidates, with the probability of who is in the photo.
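Picking a winner out of that candidate list takes only a few lines of Python. A sketch, assuming the JSON shape returned by the identify call (a list with one entry per submitted face, each holding candidates with confidence scores); the 0.5 threshold is an arbitrary choice:

```python
def best_candidate(identify_response, threshold=0.5):
    # identify returns, per submitted face, a list of candidates with confidences;
    # pick the most confident one above the threshold, or None for a stranger
    candidates = identify_response[0].get("candidates", [])
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c["confidence"])
    return best["personId"] if best["confidence"] >= threshold else None
```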

Note: every time you add faces to a person or a person to a group, you have to train the group again!

Step 5: Node-Red Configuration

Step Description: In this step we will create a Node-Red flow that notifies you about access violations to your refrigerator =)

If your Raspberry Pi runs Raspbian Jessie (November 2015) or a later version, you don't need to install Node-Red, because it is already preinstalled; you just need to update it. Please use the manual here.

Now we have to install the Twilio node in Node-Red so we can trigger a text message. Open a terminal and type:

cd ~/.node-red
npm install node-red-node-twilio

More about the Twilio node here.
After that, run Node-Red by typing into the terminal:

node-red

Then go to:
http://127.0.0.1:1880/ - if you open browser on your Raspberry Pi
http://{raspberry_pi_ip}:1880/ - if you want to open Node-Red editor from other computer

To find the IP address of your Raspberry Pi, use this instruction.

Now find the Twilio node in the list of nodes in your Node-Red editor (it usually appears after the 'social' group).

It is time to create the flow!

Note: you can use my flow attached to this step, but do not forget to configure the email, twitter and twilio nodes. Read about that below.

Our flow starts with the "notify" node, which accepts a POST request from our main program with data about the access violation (an example of the data can be found in the comment node "about receiving objects"). This node immediately responds with an "Ok" message so the main program knows that the data was received (flow: /notify > response with Ok > response). The green msg.payload node at the bottom is there for debugging: if something is not working, you can use it.
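From the Python side, hitting /notify is one small POST request. A sketch using the standard library; the payload field names here are assumptions except "trustedPerson", so check the "about receiving objects" comment node in the flow for the exact shape:

```python
import json
import urllib.request

NODE_RED_URL = "http://127.0.0.1:1880/notify"   # the /notify node in the flow

def build_payload(name, trusted):
    # field names are assumptions; the exact shape is documented in the
    # flow's "about receiving objects" comment node
    return {"name": name, "trustedPerson": trusted}

def notify(name, trusted):
    # POST the violation data to Node-Red and return its "Ok" response
    data = json.dumps(build_payload(name, trusted)).encode()
    req = urllib.request.Request(NODE_RED_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```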

From the first node (/notify), the data propagates to "Data Topic" and "Image Topic", where the topics "data" and "image" are added respectively.

In the "compile" node we receive the data (from the first step) with the "data" topic and an image with the "image" topic (the image is read from /home/pi/image.jpg). These two messages should be combined into one object, but they arrive at different times! To handle this we use the "context" feature, which allows us to store data between function invocations.

The next step is to check whether the person is on our access list or is a stranger (the checkConditions node). There is a "trustedPerson" field in the data we receive: "true" means that we know this person but he or she violated the access permission; "false" means that the person is a stranger.

When the result is "true" we send notifications to Twitter, Twilio and email; when the result is "false", only to email and Twilio. We create an object for email with a message, the attached image and an email subject, and an object for Twilio with a message. For Twitter we add data to an object only if "trustedPerson" is true. Then we send these three objects to three different nodes.

Note: if a following node shouldn't receive a message, we just send "null" to it.

It's time to configure nodes for notification!

Twitter
Add the "twitter" node to the flow and open it by double-clicking. Click the pencil next to "Twitter ID", then click "Click here to authenticate with Twitter". Log in to your Twitter account and give Node-Red the needed permissions.

Email
Add the "email" node to the flow. If you don't use Gmail, you will need to change the "Server" and "Port" fields (you can find which server and port to use on the help pages of your email provider); otherwise, do not change these fields.

  • To > the email address to which messages will be sent
  • Userid > the login for your email account (may be the same as the "To" field)
  • Password > the password for your email account
  • Name > a name for this node

Twilio
Go to https://www.twilio.com/try-twilio and register an account, then verify it. Go to https://www.twilio.com/console. Click "Phone Numbers" (the big # icon) and create a free number. If you're outside the USA, you have to add GEO permissions: go to https://www.twilio.com/console/sms/settings/geo-pe... and add your country.

Now go to the Node-Red editor, add the Twilio node, double-click on it to configure it and fill out all the fields:

Click Deploy!

Now your flow is ready! You can test it by sending a POST request with the specified object!

Step 6: Compiling the Whole Project

Step Description: In this step we will put all the parts together and make them work as a standalone system.

By this step you should have:

  1. Configured an old smartphone as an IP camera
  2. Working sensors
  3. A trained Face API group
  4. A configured Node-Red flow

Now we have to improve the code that we wrote in Step 2, specifically the function process() that is called when a person opens the door. In this function we will do the following:

  1. Get an image from the IP camera and save it to “/home/pi/” with the name “image.jpg” (function “fromIpCam” in file “getImage”)
  2. Get the name of the person in that image (function “checkPerson” in file “recognition”)
  3. Check the access permission for that person (function “check” in file “access”)
  4. Based on the result of the “check” function, compose a message
  5. Send the composed message to Node-Red (function “toNodeRed” in file “sendData”)

Note: to see the full code of the mentioned functions, please download the zip file attached to this step.
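As a rough outline of the control flow (not the attached code), process() can be written against four injected callables, one per file named in the list above; the return shapes here are assumptions for illustration, and this style also makes the logic easy to test without hardware:

```python
def process(fetch_image, identify, check_access, notify,
            image_path="/home/pi/image.jpg", group_id="fff-fff"):
    # Orchestrates steps 1-5. Each callable stands in for one of the files
    # named above (getImage.fromIpCam, recognition.checkPerson, access.check,
    # sendData.toNodeRed); the real implementations are in the attached zip.
    fetch_image(image_path)                  # 1. grab a frame from the IP camera
    name = identify(image_path, group_id)    # 2. ask the Face API who is in it
    allowed, trusted = check_access(name)    # 3. consult the access list
    if not allowed:                          # 4-5. compose and send the alert
        notify({"name": name, "trustedPerson": trusted})
```

Here check_access is assumed to return a pair (allowed, trusted); adapt the shapes to the attached files.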

About the function “fromIpCam”: this function makes a GET request to your IP camera, gets a focused image in response and saves it to the path you specify. You have to provide the camera's IP address to this function.

About the function “checkPerson”: the function takes as parameters the path to an image and the group in which to look for the person in the photo. First, it detects a face in the provided image (file Face.py, function “detect”) and gets back the ID of the detected face. Then it calls the “identify” function (Face.py), which finds similar persons in the specified group and returns a person ID if someone is found. Finally, it calls the function “person” (file Person.py) with the person ID as a parameter; this returns the person with that ID, and we take the person's name and return it.

About the function “check”: this function lives in the file “access”, which also holds the “access list” as a global variable (you can modify it as you like). Given the name of the person from the previous function, “check” compares this person against the access list and returns the result.
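A minimal sketch of what such an access list and check could look like (the names and the (allowed, trusted) return shape are assumptions for illustration; the attached file may differ):

```python
# Hypothetical access list: name -> does this person currently have permission?
ACCESS_LIST = {"Alice": True, "Bob": False}

def check(name):
    # returns (allowed, trusted); "trusted" means the person is known at all
    if name is None or name not in ACCESS_LIST:
        return False, False            # a stranger
    return ACCESS_LIST[name], True     # known person, possibly without permission
```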

Note:full project is attached to the next step.

Step 7: Conclusion

In this step I have attached the full project, which you should unzip and place on your Raspberry Pi.

To make this project work, run the “main.py” file.

If you control the Raspberry Pi via SSH, you have to run two programs from one shell: the Python program and Node-Red. Type the following in the terminal:

node-red

Press “Ctrl + Z” and type:

jobs

You should see the Node-Red process. Look at the ID of the process and type:

bg <id of node red process>

Node-Red should now be running in the background. Then go to your project directory and run the main program:

python3 main.py

Note: do not forget to change KEY in the Python files (Step 4) and the credentials in the Node-Red flow (Step 5).

Done! Your refrigerator is safe!

I hope you enjoyed this Instructable! Feel free to leave your thoughts in the comments.

I would appreciate it if you vote for my project =)

Thank you!

Participated in the IoT Builders Contest
Participated in the Epilog Contest 8
Participated in the Arduino Contest 2016