Introduction: NAIN 1.0 - The Basic Humanoid Robot Using Arduino

Nain 1.0 will have five detachable modules:

1) Arm – controlled via servo motors.

2) Wheels – driven by DC motors.

3) Leg – Nain will be able to switch between wheels and legs for movement.

4) Head – can be moved to perform various nods.

5) Camera module – can be interfaced for face-recognition access.

Along with this, NAIN will be able to speak and interact with users, and it can tell you the time using its inbuilt clock. It will have wireless control using Wi-Fi/Bluetooth.
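For instance, the clock feature amounts to formatting the current time into a sentence and handing it to the speech utility. A minimal sketch (the function name and phrasing are my own, not taken from the attached code):

```python
import datetime

def time_announcement(now):
    """Build the sentence NAIN could speak for the given time."""
    hour = now.hour % 12 or 12          # 0 and 12 both read as "12"
    period = "AM" if now.hour < 12 else "PM"
    if now.minute == 0:
        return "The time is %d o'clock %s" % (hour, period)
    return "The time is %d %d %s" % (hour, now.minute, period)

print(time_announcement(datetime.datetime(2018, 5, 15, 15, 45)))  # The time is 3 45 PM
```

The returned string can then be passed straight to the speech utility on the Pi.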

Step 1: Components Needed

  1. Servo Motors - 4
  2. Arduino Mega - 1
  3. Raspberry Pi - 1
  4. USB Camera - 1
  5. Speaker - 1
  6. DC Motors - 2
  7. L293D Motor Driver - 1
  8. Battery Pack - 1
  9. Wheels - 2
  10. Castor Wheels - 2

Along with these, you will need aluminium square strips to make the body, and screws and nuts to fit everything properly.

Step 2: Body Structure

The body structure is made of lightweight aluminium square rods, which make it easy to assemble.

Assemble them as shown in the figure, and cut out proper slots so the servo motors can be attached in the arms.

Attach a hexagonal wooden base at the bottom.

Below the wooden base, attach the DC motors and wheels as in any line-follower robot.

Then add two castor wheels, one at the front and one at the back of the robot.

Step 3: Wiring and Coding

To wire up the different modules, refer to the code attached in this part.

First we tested each module using standalone code, then we combined them all into one and controlled the movement of the wheels and arms using a Bluetooth module.
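The Bluetooth control itself is simple: the phone app sends one character per button press, and the robot maps that character to a motion. A sketch of that dispatch logic (the characters and action names here are illustrative, not the exact ones in the attached code):

```python
# Hypothetical command table: one character per Bluetooth button press.
COMMANDS = {
    'F': 'wheels forward',
    'B': 'wheels backward',
    'L': 'turn left',
    'R': 'turn right',
    'S': 'stop',
    'U': 'raise arm',
    'D': 'lower arm',
}

def handle_command(ch):
    """Return the action for a received character, or None if unknown."""
    return COMMANDS.get(ch)

print(handle_command('F'))  # wheels forward
```

On the Arduino side the same idea is a switch over the character read from the Bluetooth module's serial port.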

Step 4: Raspberry Pi and Image Recognition

Image Recognition is performed using a USB Camera and Raspberry Pi.

For that, you will need to install the OpenCV library on your Pi.

You can do that from here - https://github.com/jabelone/OpenCV-for-Pi

Then you will need to perform face detection and recognition using a Haar cascade classifier.

You can do that from here - https://thecodacus.com/category/opencv/#.WvsNC4iFPDc

After studying and following the above tutorial, I made some changes to the final code, which I am pasting below.

DATASET GENERATOR:

import cv2

cam = cv2.VideoCapture(0)
detector = cv2.CascadeClassifier('Classifiers/face.xml')
i = 0
offset = 50
name = raw_input('enter your id')  # use input() on Python 3

while True:
    ret, im = cam.read()
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5,
                                      minSize=(100, 100), flags=cv2.CASCADE_SCALE_IMAGE)
    for (x, y, w, h) in faces:
        i = i + 1
        # save the detected face (with a small margin) into the dataset folder
        cv2.imwrite("dataSet/face." + name + '.' + str(i) + ".jpg",
                    gray[y - offset:y + h + offset, x - offset:x + w + offset])
        cv2.rectangle(im, (x - 50, y - 50), (x + w + 50, y + h + 50), (225, 0, 0), 2)
        cv2.imshow('im', im[y - offset:y + h + offset, x - offset:x + w + offset])
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
    # stop once more than 20 samples have been collected
    elif i > 20:
        break

cam.release()
cv2.destroyAllWindows()

It will create a dataset of your photos that will be used for authentication.
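The trainer below recovers a numeric label from each file name, so the id you enter should be a number. The files follow the pattern dataSet/face.&lt;id&gt;.&lt;sample&gt;.jpg, and the label comes out of the second dot-separated field; a standalone illustration of that parsing:

```python
import os

def label_from_path(image_path):
    # file names look like dataSet/face.<id>.<sample>.jpg,
    # so the second dot-separated field is the numeric id
    return int(os.path.split(image_path)[-1].split(".")[1])

print(label_from_path("dataSet/face.2.17.jpg"))  # 2
```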

TRAINER:

import cv2, os
import numpy as np
from PIL import Image

# createLBPHFaceRecognizer() is the OpenCV 3.x contrib API;
# newer versions call it cv2.face.LBPHFaceRecognizer_create()
recognizer = cv2.face.createLBPHFaceRecognizer()
cascadePath = "Classifiers/face.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
path = 'dataSet'

def get_images_and_labels(path):
    image_paths = [os.path.join(path, f) for f in os.listdir(path)]
    # images will contain the face images
    images = []
    # labels will contain the label assigned to each image
    labels = []
    for image_path in image_paths:
        # read the image and convert it to grayscale
        image_pil = Image.open(image_path).convert('L')
        # convert the image into a numpy array
        image = np.array(image_pil, 'uint8')
        # get the numeric label from the file name (dataSet/face.<id>.<sample>.jpg),
        # so the id entered in the dataset generator must be a number
        nbr = int(os.path.split(image_path)[-1].split(".")[1])
        print(nbr)
        # detect the face in the image
        faces = faceCascade.detectMultiScale(image)
        # if a face is detected, append the face to images and the label to labels
        for (x, y, w, h) in faces:
            images.append(image[y: y + h, x: x + w])
            labels.append(nbr)
            cv2.imshow("Adding faces to training set...", image[y: y + h, x: x + w])
            cv2.waitKey(10)
    # return the images list and labels list
    return images, labels

images, labels = get_images_and_labels(path)
cv2.imshow('test', images[0])
cv2.waitKey(1)
recognizer.train(images, np.array(labels))
recognizer.save('trainer/trainer.yml')  # recognizer.write() in newer OpenCV
cv2.destroyAllWindows()

DETECTOR:

import cv2
import numpy as np
import os

c = 0
recognizer = cv2.face.createLBPHFaceRecognizer()  # LBPHFaceRecognizer_create() in newer OpenCV
recognizer.load('trainer/trainer.yml')  # recognizer.read() in newer OpenCV
cascadePath = "Classifiers/face.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
cam = cv2.VideoCapture(0)
fontface = cv2.FONT_HERSHEY_SIMPLEX
fontscale = 1
fontcolor = (255, 255, 255)

while True:
    ret, im = cam.read()
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, 1.2, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(im, (x - 50, y - 50), (x + w + 50, y + h + 50), (225, 0, 0), 2)
        # predict returns the matched label and a confidence value (lower is better)
        Id, conf = recognizer.predict(gray[y:y + h, x:x + w])
        if conf < 70:
            if Id == 1:
                Id = "Shashank"
            elif Id == 2:
                if c == 0:
                    Id = "Shivam"
                    c = c + 1
                    os.system("espeak 'Welcome Shivam Access Granted'")
                else:
                    Id = "Shivam"
        else:
            Id = "Unknown"
        cv2.putText(im, str(Id), (x, y + h), fontface, fontscale, fontcolor)
    cv2.imshow('im', im)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()

Step 5: LCD and Speaker

I have also used an I2C LCD display and a speaker.

The display is controlled via the Arduino Mega, and its code is given in the final code.

The speaker is connected to the Raspberry Pi and uses the eSpeak utility.

You can find its reference here - https://www.dexterindustries.com/howto/make-your-raspberry-pi-speak/
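As the linked guide shows, speaking is just a shell call to the espeak command. Passing an argument list rather than a quoted shell string avoids quoting problems; a sketch (assuming espeak is installed on the Pi; the actual call is left commented out here):

```python
import subprocess

def say(text):
    """Speak `text` through eSpeak by invoking the espeak command."""
    cmd = ["espeak", text]          # an argument list avoids shell-quoting issues
    # subprocess.call(cmd)          # uncomment on the Pi once espeak is installed
    return cmd

print(say("Welcome Shivam Access Granted"))
```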

Step 6: Final Steps

Assemble everything and get ready for the bang.

Check out the videos here -

NAIN 1.0 - The Basic Humanoid Robot using Arduino

NAIN 1.0 - HANDSHAKE

NAIN 1.0 - The I2C LED Display in progress.

The final project report is also attached with this instructable.

Refer to it for any further guidance on this project.

Participated in the Microcontroller Contest