Introduction: ChikonEye: A Single Python Script for Face Recognition and Computer Safety!


To recognize faces you would normally need three different scripts:

1. Dataset creator (takes photos and saves them in a folder)

2. Trainer (trains the model with all those saved photos)

3. Recognizer (the final script that recognizes faces based on the training data)

To make it simple and user friendly, I have written a single script with a GUI that does all three jobs, with buttons, text boxes, etc.

This code can also be used to protect your privacy from peepers. Let's begin...

Watch the video:

Step 1: Things You'll Need to Install

This is a Python script, so I assume you have Python installed on your computer. The current latest version is 3.7.2.
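The single script below relies on f-strings, so any Python from 3.6 up will do; a quick sanity check of your interpreter (a minimal sketch, nothing here is specific to ChikonEye):

```python
# Check the running interpreter version; the script below uses
# f-strings, which need Python 3.6 or newer (3.7.2 was current
# when this was written).
import sys

print(sys.version_info[:3])
assert sys.version_info >= (3, 6), "Python 3.6+ required"
```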

You also need to install the OpenCV face recognition library first; download it from here. Then you'll need to pip install pyautogui, pillow and numpy.

Install them via the Python package installer (pip) as shown in the image, e.g. numpy and pillow. The time module comes built into Python.
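If you are not sure everything installed correctly, you can check for the packages from Python itself before running the script (a small sketch; `importlib.util.find_spec` only looks a module up without importing it, so it is safe to run anywhere):

```python
# Report which of the required packages are still missing.
import importlib.util

required = ["cv2", "numpy", "PIL", "pyautogui"]
missing = [name for name in required if importlib.util.find_spec(name) is None]
print("missing packages:", missing if missing else "none")
```

Anything reported missing can then be installed with pip as described above.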

Step 2: Face Recognition Mandatory Steps:

To start recognizing faces you first need to take images to create a dataset.

A dataset is the set of images with which you want to train your model (the program). So, you got it, training your model is the second step. To detect faces, OpenCV ships with several classifiers; we'll use the haarcascade_frontalface_alt2.xml file. After downloading OpenCV, go to the folder, then go to sources > data > haarcascades, and you'll find several .xml files. You may also use one of these to recognize license plates, eyes, etc. The code is (almost) the same.

The trainer is another piece of code, which trains the model on the data saved in the dataset folder. After training, it creates a .yml file in the folder. The more data you feed it, the more accurate your model will be, so feel free to take a lot of samples. Not just a lot of photos of one person, but photos of different people; that way it will be able to distinguish people.
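The link between the dataset and the trainer is just the file name: each sample is saved as User.&lt;id&gt;.&lt;number&gt;.jpg, and the trainer reads the numeric id back out of the name. A small sketch with a hypothetical file name:

```python
# Recover a person's numeric ID from a dataset file name,
# the same way the trainer in the full script does it.
import os

image_path = os.path.join('dataSet', 'User.1.13.jpg')  # hypothetical sample
file_name = os.path.split(image_path)[-1]              # 'User.1.13.jpg'
person_id = int(file_name.split('.')[1])
print(person_id)  # → 1
```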

The next code, the one that actually distinguishes or recognizes a person, is the Recognizer.

The Recognizer acts on the trained data, so this is the code that says who is who.

Step 3: But You Said Single Script Ashraf.....

Yes I did, and that's why I have written a single Python script that uses pyautogui's dialog boxes (GUI stands for Graphical User Interface), so you can use it in a user friendly way with some buttons and text boxes.

Here's an interesting thing I did with the code; I described it in the previous Instructable.


Sometimes people peep at your computer screen to see what you're doing. To stop people from seeing what I am doing, I have added this snippet to the code:

import pyautogui        # import pyautogui library
from time import sleep  # import sleep - for the delay

pyautogui.hotkey('win', 'r')  # windows_key + R opens the Run dialog
pyautogui.typewrite("cmd\n")  # type cmd and Enter to open command prompt
sleep(0.500)                  # 500 millisec delay
# write the lock command, then press Enter ('\n') so the pc auto-locks
pyautogui.typewrite("rundll32.exe user32.dll, LockWorkStation\n")

This code locks the computer screen. It's explained in this Instructable.
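The keystroke approach works, but Windows also exposes the same lock call directly, so a shorter alternative is possible using only the standard library (a sketch, Windows only; LockWorkStation lives in user32.dll, which is exactly what the rundll32 command above invokes):

```python
# Lock the Windows session through the same user32.dll call that
# rundll32 invokes, without simulating any keystrokes.
import ctypes

def lock_screen():
    ctypes.windll.user32.LockWorkStation()  # Windows-only API call
```

Calling the API directly avoids the Run-dialog and command-prompt windows flashing on screen before the lock happens.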

Now the full code. Make sure to follow the video to get to know it better.

Download the code from GitHub, or copy it from below.

The code looks huge, but trust me, you don't need to worry a bit. You just need to change two paths, as described in the video (the folder where the code is saved, and the classifier path); that's it.

ChikonEye literally watches your back, preventing people who peep at your computer from seeing your valuable secret work. Now your work is safe and secure.

ChikonEye uses your laptop camera (primary = 0, or secondary = 1, 2 and so on) to see how many people are looking at the computer screen. If someone unauthorized tries to look, it automatically locks the screen.

"""
developed by Ashraf Minhaj
mail me at- 
"""

"""
Version: 2.0 (all codes in one .py file).
Contains:
1. Dataset Creator (take photo and create a dataset)
2. Trainer (training your model)
3. Recognizer (the code that recognizes you)
4. Amazingly with GUI support (makes this code more user friendly)
"""

"""
I'll make an executable .exe file so that this can be run on any computer.
Right now it can be used by people who have python, numpy, opencv and
pyautogui installed on their pc. Don't worry, the exe is coming soon.
"""

import cv2
import numpy as np
import pyautogui
from time import sleep
from PIL import Image
import os

# location of the opencv haarcascade
face_cascade = cv2.CascadeClassifier('F:\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_alt2.xml')
cap = cv2.VideoCapture(0)  # 0 = main camera, 1 = extra connected webcam and so on
rec = cv2.face.LBPHFaceRecognizer_create()

# the path where the code is saved
pathz = "C:\\Users\\HP\\cv_practice\\chikon"  # Change this

# recognizer module
def recog():
    """Recognizes people from the pretrained .yml file"""
    #print("Starting")"{pathz}\\chikoneye.yml")  # yml file location
    id = 0  # set id variable to zero

    font = cv2.FONT_HERSHEY_COMPLEX
    col = (255, 0, 0)
    strk = 2

    while True:  # this is a forever loop
        ret, frame =  # capture frame by frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # change color from BGR to gray
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)

        #print(faces)
        for (x, y, w, h) in faces:
            #print(x, y, w, h)
            roi_gray = gray[y: y+h, x: x+w]  # region of interest is the face

            # *** Drawing Rectangle ***
            color = (255, 0, 0)
            stroke = 2
            end_cord_x = x + w
            end_cord_y = y + h
            cv2.rectangle(frame, (x, y), (end_cord_x, end_cord_y), color, stroke)

            # *** detect ***
            id, conf = rec.predict(roi_gray)
            #cv2.putText(np.array(roi_gray), str(id), font, 1, col, strk)
            print(id)  # prints the id's
            if id != 1:  # if it sees an unauthorized person
                # execute lock command
                pyautogui.hotkey('win', 'r')  # win + run key combo
                pyautogui.typewrite("cmd\n")  # type cmd and 'Enter' = '\n'
                sleep(0.500)                  # a bit of delay
                # type the windows lock command into command prompt and hit 'Enter'
                pyautogui.typewrite("rundll32.exe user32.dll, LockWorkStation\n")

            elif id == 1:  # if authorized person (me & my brother Siam)
                print("Authorized Person\n")  # do nothing

        cv2.imshow('ChikonEye', frame)

        # check if the user wants to quit the program (pressing 'q')
        if cv2.waitKey(10) == ord('q'):
            op = pyautogui.confirm("Close the Program 'ChikonEye'?")
            if op == 'OK':
                print("Out")
                break

    cap.release()
    cv2.destroyAllWindows()  # remove all windows we have created

# create dataset and train the model
def data_Train():
    sampleNum = 0
    #print("Starting training")
    id = pyautogui.prompt(text="""Enter User ID.\n\nnote: numeric data only.""",
                          title='ChikonEye', default='none')
    # check for user input
    """
    if id > :
        print(id)
        pyautogui.alert(text='WRONG INPUT', title='ChikonEye', button='Back')
        recog()
    """

    # if user input is 1, 2, 3 ... max 5 here
    if id != '1' and id != '2' and id != '3' and id != '4' and id != '5':
        pyautogui.alert(text='WRONG INPUT', title='ChikonEye', button='Back')
        recog()

    else:  # the input is okay
        while True:
            ret, img =
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.3, 5)

            for (x, y, w, h) in faces:  # find faces
                sampleNum = sampleNum + 1  # increment sample num till 21
                cv2.imwrite(f'{pathz}\\dataSet\\User.{id}.{sampleNum}.jpg',
                            gray[y: y+h, x: x+w])
                cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 4)
                cv2.waitKey(100)

            cv2.imshow('faces', img)  # show image while capturing
            cv2.waitKey(1)
            if sampleNum > 20:  # 21 samples collected
                break
        trainer()  # train the model on the new images

        recog()  # start recognizing

# Trainer
def trainer():
    faces = []  # empty list for faces
    Ids = []    # empty list for IDs

    path = f'{pathz}\\dataSet'

    # gets each image and its id from the path
    def getImageWithID(path):
        imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
        #print(f"{imagePaths}\n")
        for imagePath in imagePaths:
            faceImg ='L')
            #cv2.imshow('faceImg', faceImg)
            faceNp = np.array(faceImg, 'uint8')
            ID = int(os.path.split(imagePath)[-1].split('.')[1])
            #print(ID)
            faces.append(faceNp)
            Ids.append(ID)

        return Ids, faces

    ids, faces = getImageWithID(path)

    print(ids, faces)
    rec.train(faces, np.array(ids))
    # the yml file will be created in the folder automatically'{pathz}\\chikoneye.yml')
    pyautogui.alert("Done Saving.\nPress OK to continue")
    cv2.destroyAllWindows()

# Options checking
opt = pyautogui.confirm(text='Chose an option', title='ChikonEye',
                        buttons=['START', 'Train', 'Exit'])
if opt == 'START':
    #print("Starting the app")
    recog()
if opt == 'Train':
    opt = pyautogui.confirm(text="""Please look at the Webcam.\nTurn your head a little while capturing.\nPlease add just one face at a time.\nClick 'Ready' when you're ready.""",
                            title='ChikonEye', buttons=['Ready', 'Cancel'])
    if opt == 'Ready':
        #print("Starting image capture + Training")
        data_Train()
    if opt == 'Cancel':
        print("Cancelled")
        recog()
if opt == 'Exit':
    print("Quit the app")

Create a dataSet folder (the name used in the code) in the folder where you have saved the code. The .yml file will be saved automatically by the program.
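If you prefer, the script can create that folder itself; a small sketch (the pathz name mirrors the variable in the full code, here assumed to be the current working directory):

```python
# Create the dataSet folder next to the code if it doesn't exist yet.
import os

pathz = os.getcwd()  # assume the folder where the code is saved
dataset_dir = os.path.join(pathz, 'dataSet')
os.makedirs(dataset_dir, exist_ok=True)  # no error if it already exists
print(os.path.isdir(dataset_dir))  # → True
```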

Run this in PowerShell or command prompt.

If you start it for the first time, it will not recognize anyone, so please make sure to train the model first by pressing Train; after that everything is pretty easy.

|| Obviously I don't have rocket-science secret data on my computer, but at least this can be used for fun with friends, and it'll save your privacy sometimes too. ||

Thank you.

Participated in the Epilog X Contest