Introduction: Drowsiness Alert System

Every year, many people around the world lose their lives in road accidents, and drowsy driving is one of the primary causes. Fatigue and microsleep at the wheel are often the root cause of serious crashes. However, the initial signs of fatigue can be detected before a critical situation arises, so detecting driver fatigue and signalling it is an ongoing research topic. Most traditional drowsiness-detection methods are based on behavioural cues; some are intrusive and may distract the driver, while others require expensive sensors. In this paper, therefore, a lightweight, real-time driver drowsiness detection system is developed and implemented as an Android application. The system records video and detects the driver's face in every frame using image-processing techniques. It locates facial landmarks and computes the Eye Aspect Ratio (EAR) and Eye Closure Ratio (ECR) to detect drowsiness based on adaptive thresholding. Machine learning algorithms have been employed to test the efficacy of the proposed approach. Empirical results demonstrate that the proposed model achieves an accuracy of 84% using a random forest classifier.
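The EAR mentioned above is computed from the six landmark points dlib places around each eye: two vertical distances over one horizontal distance. A minimal sketch with made-up coordinates (the real scripts below use `scipy.spatial.distance.euclidean`; `math.dist` is equivalent here):

```python
from math import dist

def eye_aspect_ratio(eye):
    # eye: six (x, y) points ordered as in dlib's 68-point landmark model
    A = dist(eye[1], eye[5])  # first vertical distance
    B = dist(eye[2], eye[4])  # second vertical distance
    C = dist(eye[0], eye[3])  # horizontal distance (eye width)
    return (A + B) / (2.0 * C)

# Illustrative coordinates only, not real landmark output:
open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
closed_eye = [(0, 2), (2, 2.5), (4, 2.5), (6, 2), (4, 1.5), (2, 1.5)]
print(eye_aspect_ratio(open_eye))    # ~0.667, well above the 0.25 threshold
print(eye_aspect_ratio(closed_eye))  # ~0.167, below the threshold
```

When the eyelids close, the vertical distances collapse while the width stays roughly constant, so the EAR drops sharply; that is what the 0.25 threshold in the scripts below exploits.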

Step 1: Things You Need

1. Raspberry Pi

2. Webcam (a C270 HD webcam gives better results)

The PC version may need some changes to the code.

Step 2: Python Code With Eye Shape Predictor Dataset (PC Version)

To detect eyes effectively in real-time video, we can use the .dat file below.

https://drive.google.com/open?id=1UiSHe72L4TeN14VK...

Download the .dat file from the link above and run the Python code below.
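Both scripts fail immediately if the model file is missing, so it helps to confirm the download landed where the code expects before starting the camera. A small sketch (the filename is the one the scripts use; the helper name is just for illustration):

```python
import os

def predictor_available(path="shape_predictor_68_face_landmarks.dat"):
    # True only if the landmark model file exists and is not an empty download
    return os.path.isfile(path) and os.path.getsize(path) > 0

if not predictor_available():
    print("Download shape_predictor_68_face_landmarks.dat into this folder first")
```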

Python code

from scipy.spatial import distance
from imutils import face_utils
import imutils
import dlib
import cv2

def eye_aspect_ratio(eye):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    A = distance.euclidean(eye[1], eye[5])
    B = distance.euclidean(eye[2], eye[4])
    C = distance.euclidean(eye[0], eye[3])
    return (A + B) / (2.0 * C)

thresh = 0.25       # EAR below this counts as "eyes closed"
frame_check = 20    # consecutive closed frames before the alert fires

detect = dlib.get_frontal_face_detector()
# The .dat file is the crux of the code
predict = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["right_eye"]

cap = cv2.VideoCapture(0)
flag = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    subjects = detect(gray, 0)
    for subject in subjects:
        shape = predict(gray, subject)
        shape = face_utils.shape_to_np(shape)  # convert to a NumPy array
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        ear = (leftEAR + rightEAR) / 2.0
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
        if ear < thresh:
            flag += 1
            print(flag)
            if flag >= frame_check:
                cv2.putText(frame, "****************ALERT!****************", (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                cv2.putText(frame, "****************ALERT!****************", (10, 325),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
        else:
            flag = 0
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
cv2.destroyAllWindows()
cap.release()  # VideoCapture has no stop(); release() frees the camera
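The alert in the loop above fires only after the EAR has stayed below `thresh` for `frame_check` consecutive frames, which filters out ordinary blinks. That counter logic can be isolated and checked on its own (a sketch; the helper name is ours, the defaults mirror the script):

```python
def update_flag(flag, ear, thresh=0.25, frame_check=20):
    # Count consecutive "closed" frames; any open frame resets the count.
    flag = flag + 1 if ear < thresh else 0
    return flag, flag >= frame_check

flag, alert = 0, False
for ear in [0.1] * 25:          # 25 consecutive closed-eye frames
    flag, alert = update_flag(flag, ear)
print(alert)  # True: the alert fires once 20 closed frames accumulate
```

Raising `frame_check` makes the system slower to alarm but more robust to blinks; lowering `thresh` demands more fully closed eyes before a frame counts as drowsy.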

Step 3: Raspberry Pi Version

When the driver's eyes stay closed, the Raspberry Pi will sound the alert.

Connect your buzzer to GPIO pin 23 (see the picture).

from scipy.spatial import distance
from imutils import face_utils
import imutils
import dlib
import cv2
import RPi.GPIO as GPIO

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
buzzer = 23
GPIO.setup(buzzer, GPIO.OUT)

def eye_aspect_ratio(eye):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
    A = distance.euclidean(eye[1], eye[5])
    B = distance.euclidean(eye[2], eye[4])
    C = distance.euclidean(eye[0], eye[3])
    return (A + B) / (2.0 * C)

thresh = 0.25       # EAR below this counts as "eyes closed"
frame_check = 20    # consecutive closed frames before the alert fires

detect = dlib.get_frontal_face_detector()
# The .dat file is the crux of the code
predict = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["right_eye"]

cap = cv2.VideoCapture(0)
flag = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    subjects = detect(gray, 0)
    for subject in subjects:
        shape = predict(gray, subject)
        shape = face_utils.shape_to_np(shape)  # convert to a NumPy array
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        ear = (leftEAR + rightEAR) / 2.0
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
        if ear < thresh:
            flag += 1
            print(flag)
            if flag >= frame_check:
                cv2.putText(frame, "****************ALERT!****************", (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                cv2.putText(frame, "****************ALERT!****************", (10, 325),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                GPIO.output(buzzer, GPIO.HIGH)  # sound the buzzer
        else:
            flag = 0
            GPIO.output(buzzer, GPIO.LOW)  # silence the buzzer
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
cv2.destroyAllWindows()
cap.release()   # VideoCapture has no stop(); release() frees the camera
GPIO.cleanup()  # reset the pin so the buzzer is not left on
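On the Pi, the buzzer simply mirrors the alert state: HIGH once the closed-frame count reaches `frame_check`, LOW as soon as the eyes open. Separating that decision from the GPIO call lets you test the logic off-device (the helper name is ours; pin 23 is from the wiring step above):

```python
def buzzer_should_sound(flag, frame_check=20):
    # Drive the buzzer only once drowsiness has persisted long enough
    return flag >= frame_check

# On the Pi, the decision would feed the pin like this:
# GPIO.output(buzzer, GPIO.HIGH if buzzer_should_sound(flag) else GPIO.LOW)
print(buzzer_should_sound(5))   # False: a short blink never triggers it
print(buzzer_should_sound(20))  # True: sustained eye closure does
```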