Introduction: Friendly Robot (ICARUS Robot)

About: Hello everyone, my name is Abdelrahman, but you can call me ICARUS. I love making new and crazy stuff; my passion is tinkering and making. I joined the maker community in 2019. Feel free to text me. Love you all.

The structure of the ICARUS Documentation 


  1. Material selection
     • the material I chose
     • why I chose it
  2. Introduction
     • how we thought about this project
     • how we selected the project
     • how we made the storyboard
     • how we dealt with issues
  3. Software selection & machines used
  4. CAD design
     • designing the body in SolidWorks or Fusion 360 (preferred)
     • preparing the body (STL) file in Cura
     • printing the body
  5. Circuit design
     • wiring diagram using Fritzing
     • PCB design in Eagle (a hat for the Raspberry Pi 3 Model B)
  6. Computer vision coding
     • definition of computer vision
     • the OpenCV library and its uses
     • installing the important libraries
     • face recognition explanation
     • hand recognition explanation
     • the whole code
  7. Google Assistant code
  8. Mobile app

Supplies


Component selection is the most important process of all: if you don’t select your components carefully and accurately, the whole project will fall apart.

  • The main and most important part is the microprocessor (Raspberry Pi 3 Model B)
  • Raspberry Pi camera for face recognition and camera tracking (“computer vision”)
  • 2 metal gear motors with a 1:1000 gear ratio, because they have enough power to carry the whole body and still move fast
  • OLED screen to display FABY’s emotions and messages so it interacts well with humans
  • LiPo battery, 1300 mAh: a large enough capacity to supply the motors, servo, and OLED screen
  • Servo motors for the head mechanism of FABY, so it can track a person and look up and down for more interaction
  • Ultrasonic sensor to detect obstacles in front of it and avoid them
  • Infrared sensor to detect whether there is ground underneath, so it doesn’t fall off an edge
  • USB microphone for speech recognition
  • Speakers to make the robot speak
  • Caster wheel to support the body


Software selection & machines used

  • Fusion 360 or SolidWorks: either of them will do for designing the body
  • Eagle: for making the PCB of the FABY robot
  • Ultimaker Cura: to slice the parts and set the infill for 3D printing
  • PyCharm: for programming in Python
  • 3D printer
  • PCB milling machine
  • Fritzing

Step 1: Introduction

As usual, before I go into the details of this project, I will walk you through the process that came before building it, the process I like to call the selection process.

The selection process covers:

  • how we thought about this project
  • how we selected the project
  • how we made the storyboard
  • how we dealt with issues
  • technical selection

I studied the project using what is known as project management: an organized set of procedures that lets you study a project carefully and accurately. I will try to give you an idea of what happened behind the scenes.

How we thought about this project


When we decided to make a project (at first, we hadn't decided yet which project we would choose), we intended to make something like the Roomba (a cleaning robot).

We wanted to make an automatic smart robot.

After buying a Roomba and reverse engineering it (examining every part of the Roomba and determining the function of each part), we watched an ad for a robot called Vector.

What made us admire this project is that it's an interactive robot, small and very cute.

So we were torn about which one we would choose to make, and then we got a wonderful idea.


Selection of project


As I said earlier, we had a smart idea: the team and I decided to make a combination of both of them.

We decided to make our ICARUS ROBOT

The ICARUS ROBOT is not exactly like either the Roomba or Vector; it has some features of the Roomba and some of Vector.

We were excited to start building the board and studying the project, and guess what, that's exactly what we did.

Storyboard  


Simply, what we want this robot to do is:

  • be interactive
  • connect with a mobile application
  • connect over Wi-Fi
  • connect with Google Assistant

So, we started to warm up and make some prototypes to see how it worked, and of course we faced issues and problems.

And as I do in all of my documentation, I will tell you about our failures before the successes, so you can avoid everything we did wrong.


Step 2: CAD Design

In this part, we will learn how to design the robot on CAD software

How did we think about the body?

Before you think about anything else, you should know this: you must take the measurements of your components to know what the body dimensions will be, Figures (1), (2), (3).

The smaller it is, the better.

Once we finished the measurements and prepared our dimensions, we could imagine the body. We wanted the body to be small and artistic, and we found one that looked exactly as we'd described:

Figure (4)

So, we will make our body look a lot like that.

We had the measurements and a vision of the body, so let's build our FABY robot.

Let's begin with the head. The head must include the camera and the OLED screen, so we will design the head to contain all of these components, Figure (5).

Now we need to think of a mechanism that allows the head to move up and down; luckily for you, we will give you this mechanism in Figure (6).

So let's see our mechanism on the head, figure (7).

Let's prepare our components and put them in the body, figure (8).

Let's see our body now, figure (9).

Design the wheel and put it into the final shape, figure (10).

Don't worry, I will leave all the files below so you can download them.

After finishing the design of the body, it's time to fabricate it.

To fabricate the body, you need the main thing you will print with: the filament. I prefer PLA filament, figure (11).

So, after you save the parts as STL files, open Cura and import them. I will give you two examples.

The first one is the head mechanism, figure (12).

Then press Save to File, copy the file to a flash drive, plug it into the 3D printer, and start the print, Figure (13).

Let's move on to the second part that we will print, which is the body of the wheel without the tire. Again, repeat the same steps and import the file into Cura, figure (14).

Edit the filament, supports, and draft settings as you see fit and press Slice; you will see the estimated time the part will take to print, Figure (15).

Then press Save to File, put the file on the flash drive, upload it to the 3D printer, and leave it to print, figures (16), (17).

Then add the rubber part, which we made ourselves, to the wheel, figures (18), (19), (20), (21).

Step 3: Circuit Design

First, before we go to the PCB design, we should make a wiring diagram to know how we will connect the components to each other.

Open Fritzing, insert all of the components, and start wiring (figure 1).

Complete the wiring (figure 2).

The final schematic wiring diagram is shown in (figure 3).

After we made the wiring diagram, it's time to move to Eagle CAD; we will have some fun.

Now we will design a hat for Raspberry Pi 3 Model B

  1. Select your components.
  2. Arrange them in the most suitable way so that the pins are as close to each other as possible, to avoid overlapping wires and routes.
  3. Export your schematic to the board and arrange your components.

*Since we're using the Raspberry Pi board as the controller, we need to make sure that our board will not overlap the Pi and fits well over it.

That's why we used the component "RASPBERRYPI-40-PIN-GPIO_PTH_REFERENCE".


Components:

  • 7.4 V LiPo battery
  • Step-down DC-DC converter
  • 2 IR sensors
  • L293D motor driver IC
  • 2 metal gear DC motors
  • Raspberry Pi 3 Model B
  • 1 ultrasonic sensor

Shield board components:

  • 5 x 2-wire 5 mm terminals
  • 3 x 3-wire 5 mm terminals
  • 6-pin header for the screen

After that, we will send it to the lab to be fabricated. Note that the minimum track width is 0.5 mm and the minimum hole diameter is 0.8 mm.

After we sent it to the FAB LAB, it was fabricated and the result was fantastic. Again, don't worry, I will leave the board and schematic files below; just click on them to download.

After we finish the PCB fabrication, it's time to solder the components onto the PCB.
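Once the components are soldered, a quick way to sanity-check the wiring is a short Python script on the Pi. This is only a minimal sketch using the gpiozero library, and the BCM pin numbers are hypothetical; change them to match the routing of your own hat.

from time import sleep
from gpiozero import DistanceSensor, Motor

# Hypothetical BCM pin numbers -- change them to match the traces on your hat
ultrasonic = DistanceSensor(echo=24, trigger=23)
left_motor = Motor(forward=17, backward=27)    # driven through the L293D
right_motor = Motor(forward=5, backward=6)

# Drive forward until an obstacle is closer than 20 cm, then stop
left_motor.forward(0.5)
right_motor.forward(0.5)
while ultrasonic.distance > 0.20:              # distance is reported in metres
    sleep(0.05)
left_motor.stop()
right_motor.stop()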

Note: Thanks to Alaa Saber for providing us with all the resources; we did the PCB design with his help, so thank you again.

Step 4: Computer Vision

This part is not easy; it needs a lot of concentration and code to understand how face recognition and hand recognition work,

so pay attention to everything I say. I will try to give you the conclusions, not every last detail.

Note: special thanks to our lovely Amani, who helped us with the computer vision all by herself. She explained all the procedures in marvelous words and described all the errors she faced. I recommend reading her article on computer vision; you will find interesting videos with simple explanations, just click on it.

Computer Vision: Computer Vision is the broad parent name for any computations involving visual content – that means images, videos, icons, and anything else with pixels involved. But within this parent idea, there are a few specific tasks that are core building blocks:

In object classification, you train a model on a dataset of specific objects, and the model classifies new objects as belonging to one or more of your training categories.

For object identification, your model will recognize a specific instance of an object – for example, parsing two faces in an image and tagging one as Tom Cruise and one as Katie Holmes

To do face & hand recognition, we need to install an important library called OpenCV.

What is the OpenCV library?

It is an open-source computer vision, digital image processing, and machine learning software library.

If you don't know what face detection is and haven't dealt with it before, I highly recommend visiting the websites I will link now:

Face detection with OpenCV and deep learning

OpenCV Face Recognition

These two websites explain every small detail about using deep learning with the OpenCV library for face detection. I will not go into that level of detail in this documentation, because it would get very long and hard to follow; instead, I will give you a short summary.

How do you install the OpenCV library?

I worked in the command prompt on Windows and just typed this one line:

pip install opencv-python               

I assume that you have OpenCV installed on your system.
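If you want a quick sanity check that OpenCV is working, a minimal sketch is to run its built-in Haar cascade face detector on any photo (the file name test.jpg below is just a hypothetical example; the articles linked above use a more accurate deep-learning detector):

import cv2

print(cv2.__version__)                          # confirms OpenCV is installed

# OpenCV's simple built-in face detector (a Haar cascade)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("test.jpg")                    # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("found %d face(s)" % len(faces))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow("faces", img)
cv2.waitKey(0)
cv2.destroyAllWindows()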


Dlib and the face_recognition packages.

Note: For the following installs, ensure you are in a Python virtual environment if you're using one. I highly recommend virtual environments for isolating your projects; it is a Python best practice. If you've followed my OpenCV install guides (and installed virtualenv + virtualenvwrapper), then you can use the workon command prior to installing dlib and face_recognition.

Installing dlib without GPU support

If you do not have a GPU you can install dlib using pip by following this guide:

Face recognition with OpenCV, Python, and deep learning

$ workon # optional
$ pip install dlib


Or you can compile it from source:

Face recognition with OpenCV, Python, and deep learning


$ workon <your env name here> # optional
$ git clone https://github.com/davisking/dlib.git
$ cd dlib
$ mkdir build
$ cd build
$ cmake .. -DUSE_AVX_INSTRUCTIONS=1
$ cmake --build .
$ cd ..
$ python setup.py install --yes USE_AVX_INSTRUCTIONS


Installing dlib with GPU support (optional)

If you do have a CUDA-compatible GPU you can install dlib with GPU support, making facial recognition faster and more efficient.

For this, I recommend installing dlib from source as you’ll have more control over the build:

Face recognition with OpenCV, Python, and deep learning

$ workon <your env name here> # optional
$ git clone https://github.com/davisking/dlib.git
$ cd dlib
$ mkdir build
$ cd build
$ cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1
$ cmake --build .
$ cd ..
$ python setup.py install --yes USE_AVX_INSTRUCTIONS --yes DLIB_USE_CUDA


Install the face_recognition package

The face_recognition module is installable via a simple pip command:

Face recognition with OpenCV, Python, and deep learning

$ workon <your env name here> # optional
$ pip install face_recognition

How does face recognition work?

In order to build our OpenCV face recognition pipeline, we'll be applying deep learning. Reviewing the entire FaceNet implementation is outside the scope of this tutorial, but the gist of the pipeline can be seen in Figure 1.

Face alignment, as the name suggests, is the process of (1) identifying the geometric structure of the faces and (2) attempting to obtain a canonical alignment of the face based on translation, rotation, and scale.

While optional, face alignment has been demonstrated to increase face recognition accuracy in some pipelines.


After we’ve (optionally) applied face alignment and cropping, we pass the input face through our deep neural network:

  The FaceNet deep learning model computes a 128-d embedding that quantifies the face itself.

That’s the way face recognition works.

Here is a sample of the face recognition code
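As a rough idea of what such a script looks like, here is a minimal sketch using the face_recognition package (the file name person.jpg is just a hypothetical example of a known face):

import cv2
import face_recognition

# Load one known face and compute its 128-d embedding (hypothetical file name)
known_image = face_recognition.load_image_file("person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

cap = cv2.VideoCapture(0)                        # USB / Raspberry Pi camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)              # face_recognition expects RGB
    locations = face_recognition.face_locations(rgb)
    encodings = face_recognition.face_encodings(rgb, locations)   # one 128-d vector per face
    for (top, right, bottom, left), encoding in zip(locations, encodings):
        match = face_recognition.compare_faces([known_encoding], encoding)[0]
        name = "Known" if match else "Unknown"
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.putText(frame, name, (left, top - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("FABY face recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()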

And here is a video of Amani trying the code, and it works, yeah!

Amani video


Hand tracking and counting

In order to do hand tracking we also use Python's math module (it comes built in with Python, so there is nothing extra to install); I will leave links and code below.


How does hand tracking work?

In order to detect fingertips, we are going to use the Convex Hull technique. In mathematics, the convex hull is the smallest convex set that contains a set of points, and a convex set is a set of points such that, if we trace a straight line between any pair of points in the set, that line is also inside the region. The result is a nice, smooth region that is much easier to analyze than our raw contour, which contains many imperfections.

To detect the fingers and count them:

  1. Find the ROI (region of interest)
  2. Hand segmentation: convert the video frame from BGR to HSV (or grayscale)
  3. Perform a Gaussian blur
  4. Perform a threshold
  5. Find the biggest contour (this will be our hand)
  6. Perform a convex hull and mark the ROI (region of interest)
  7. Count the number of contours
  8. Display it
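Here is a minimal sketch of those steps with OpenCV; the ROI coordinates and the HSV skin-colour range are hypothetical values that you will need to tune for your own camera and lighting:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:400, 100:400]                      # hypothetical region of interest
    blur = cv2.GaussianBlur(roi, (5, 5), 0)
    hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
    # hypothetical skin-colour range -- tune it for your lighting
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    gaps = 0
    if contours:
        hand = max(contours, key=cv2.contourArea)      # biggest contour = the hand
        hull = cv2.convexHull(hand, returnPoints=False)
        defects = cv2.convexityDefects(hand, hull)
        if defects is not None:
            for i in range(defects.shape[0]):
                s, e, f, _ = defects[i, 0]
                start, end, far = hand[s][0], hand[e][0], hand[f][0]
                a = np.linalg.norm(end - start)
                b = np.linalg.norm(far - start)
                c = np.linalg.norm(end - far)
                if b == 0 or c == 0:
                    continue
                # a small angle at the defect point means a gap between two fingers
                cos_angle = np.clip((b**2 + c**2 - a**2) / (2 * b * c), -1, 1)
                if np.degrees(np.arccos(cos_angle)) < 90:
                    gaps += 1
    fingers = gaps + 1 if gaps > 0 else 0
    cv2.putText(frame, "fingers: %d" % fingers, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()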

Samples

These samples work very well. You can find the original code here. Then I found another source that illustrates how the finger-counting code works; you can find it here.

Now for the final step: mix the two codes together (face recognition and hand tracking)

Just take the face recognition code and merge it with the hand tracking code into one big script; don't forget to rename any variables that clash.
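To give an idea of the structure of the merged script, here is a minimal sketch; recognize_faces and count_fingers are hypothetical helper functions that wrap the two samples above:

import cv2

def recognize_faces(frame):
    # ... the face recognition sample from earlier goes here ...
    return frame

def count_fingers(frame):
    # ... the convex hull finger-counting sample from earlier goes here ...
    return frame

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = recognize_faces(frame)      # draws names and boxes
    frame = count_fingers(frame)        # draws the finger count
    cv2.imshow("FABY vision", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()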

Sample of code

And here is a video of Amani after she finished the whole code: the mixed code running.

Step 5: Google Assistant Software

In this part, we will learn about Google Assistant and how to add Google Assistant to our FABY robot.

Before anything else, it's preferred to work on the Raspberry Pi's own software (the Raspbian Linux distribution).

Make sure you plug the microphone into the Raspberry Pi

Around the end of 2019 and the beginning of 2020, Google posted a tutorial about how to put Google Assistant on your Raspberry Pi: https://developers.google.com/assistant/sdk/guides/library/python. I followed all of the tutorial's steps and managed to make it work. In this documentation I will go over those steps, so you can follow along here or go to the Google tutorial.

Configure and Test the Audio

Test the microphone and speaker to make sure they are working well. To do so, first open the terminal and write “arecord -l” to find the card number and device number allocated to the microphone, and “aplay -l” for the speaker. Don't forget to write down these two numbers for the microphone and the speaker, as we will need them later.

Second, create the .asoundrc file by writing the command “sudo nano /home/pi/.asoundrc” in the terminal.

Third, replace all the text inside it with the following:

pcm.!default {
  type asym
  capture.pcm "mic"
  playback.pcm "speaker"
}

pcm.mic {
  type plug
  slave {
    pcm "hw:,"
  }
}

pcm.speaker {
  type plug
  slave {
    pcm "hw:,"
  }
}

Fourth, fill in each "hw:," entry with the card number and device number that you wrote down earlier (card number before the comma, device number after it).

Fifth, save and exit the file by clicking (Ctrl + x) then (y + Enter)

Sixth, type “alsamixer” on the terminal and raise the sound level of the speakers

Seventh, test the speakers by typing “speaker-test -t wav” in the terminal; after pressing Enter you should hear (left, Front) from the speakers, then press (Ctrl + C) to stop it.

Eighth, test the microphone by recording some sound: type “arecord --format=S16_LE --duration=5 --rate=16000 --file-type=raw out.raw” in the terminal.

Ninth, play back the sound that you recorded to make sure the microphone works well by typing “aplay --format=S16_LE --rate=16000 out.raw” in the terminal.



Configure a Developer Project and Account Settings and Register the Device Model

First, open your internet browser, go to “console.actions.google.com”, then select (New Project) and enter the name of your project.

Third, after clicking on (REGISTER MODEL), click on (Download OAuth 2.0 credentials) and save the JSON file, as we will need it later, then skip the specific traits options.

Fourth, go to “console.developers.google.com/apis”, then click on (Enable APIs and Services), search for the Google Assistant API, and enable it.


Fifth, go to the OAuth consent screen as shown in the figure below, then select (External), then (CREATE), then confirm your email and save the settings.

Sixth, go to “myaccount.google.com/activitycontrols” and make sure all of the following are turned on:

  • Web & app activity
  • Location History
  • Device information
  • Voice and audio activity

Install the SDK and Sample Code

First, open the terminal, then type “sudo apt-get update” and then “sudo apt-get upgrade”. Then install the Python 3 virtual environment tools by typing “sudo apt-get install python3-dev python3-venv”, and write “python3 -m venv env” to create the virtual environment.

Second, update pip by typing “env/bin/python -m pip install --upgrade pip setuptools wheel”, then activate the Python virtual environment using the source command by typing “source env/bin/activate”.

Third, type the following in the terminal: “sudo apt-get install portaudio19-dev libffi-dev libssl-dev”, then install the Google Assistant SDK by typing “python -m pip install --upgrade google-assistant-sdk[samples]”.

Fourth, copy the JSON file that you downloaded and put it in the “/home/pi” directory, then copy its path.

Fifth, go back to the terminal, make sure the virtual environment is activated, then type “python -m pip install --upgrade google-auth-oauthlib[tool]”.

Sixth, type in the terminal “google-oauthlib-tool --scope https://www.googleapis.com/auth/assistant-sdk-prototype --save --headless --client-secrets /home/pi/<credential-file-name>.json”. Don't forget to replace <credential-file-name> with the name of the JSON file you downloaded.

Seventh, you should now see a URL in the terminal; open it, copy the code, then paste it back into the terminal.

Eighth, to activate Google Assistant, type “googlesamples-assistant-pushtotalk --project-id <project-id> --device-model-id <model-id>” in the terminal. Don't forget to replace both <project-id> and <model-id> with their values from the Actions dashboard (go to the project settings page).

Step 6: Mobile Application

For the mobile app design, I used MIT App Inventor because it's easy to learn and easy to use. MIT App Inventor is a great starter program for app building.

The app had to cover several challenges:

1.   Make an attractive design.

2.   Convert voice into words for commands.

3.   Send a command to take a photo or record a video.

4.   Control the robot's motion.

5.   Connect to the Google Assistant API.

So, I watched several videos on YouTube, such as:

The design did not work well at first, so there were several versions, as follows:

  • Version 1:
  • The first screen:
  • The voice button [Button3] was added, with the speech recognizer (to talk) as a non-visible component, beside a text box for writing the recognized words.
  • ON [Button4] and OFF [Button5] buttons were added to control the Raspberry Pi GPIO pins, by adding Android thingsboard1 and Android thingsGPIO1 as non-visible components.
  • Forward [Button2] was added to go to screen 2.
  • The second screen:
  • The voice button [Speak Button] was added, with the speech recognizer (to talk) as a non-visible component, beside a text box for writing the recognized words.
  • Backward [Button1] was added to go back to screen 1, and forward [Button2] to go to screen 3.
  • The third screen:
  • The voice button [Button1] was added, with the speech recognizer (to talk) as a non-visible component, beside a text box for writing the recognized words.
  • Bluetooth
  • A Bluetooth picture [ListPicker1 button] was added to open the phone's Bluetooth and connect to the robot, which is controlled by adding BluetoothClient1 and Clock as non-visible components.
  • A Backward button [Button2] was added to go back to screen 1.
  • The cons of the design:
  • It converts speech into words, but it is boring and the design is poor.
  • The video testing


  • Version 2:
  • Here I downloaded the Vector app to follow as the ideal design, but the home icons were not fully organized and sending the commands still did not work.

  • Version 3:
  • It covers all the previous challenges.
  • The first screen {Welcome screen}:
  • Involves one button to go to the home screen [Screen 2].
  • The second screen {Home screen}:
  • Includes six buttons to open Camera, Stats, Entertainment, Question and Answer mode, Interact, and About.
  • The third screen {Camera screen}:
  • Connected to the Firebase database (added as a non-visible component): it sends code {1} when the camera picture [Snap button] is pressed, to open the robot camera and take a capture, and code {2} when the video picture [Video button] is pressed, to open the robot camera and record a video (see the sketch at the end of this step).
  • Firebase was also tested by adding a Name and Age in the text boxes and pressing Save to store them in the database.
  • A camera test button was added to show the Firebase response, which was done by adding Ev3touchsensor1 and camera1 as non-visible components.
  • The fourth screen {Entertainment screen}:
  • Includes four arrows {left, right, up, and down} to control the robot's motion.
  • The fifth screen {Question and Answer mode screen}:
  • This will be connected with the Google Assistant API.
  • The sixth screen {Interact screen}:
  • Contains the speaking button [Hi_FABY], with the speech recognizer (so it can talk) as a non-visible component, beside a text box for writing the recognized words.
  • The seventh screen {About}:
  • Includes information about the robot, the working team, and the place where it was fabricated.
  • The video testing
  • The video testing the commands being sent to Firebase.
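On the robot side, the camera codes can be read back from Firebase. The following is only a minimal sketch, assuming a hypothetical Firebase Realtime Database URL and the picamera library on the Raspberry Pi; it polls the value the app writes ({1} for a photo, {2} for a video):

import time
import requests
from picamera import PiCamera

FIREBASE_URL = "https://your-project.firebaseio.com/command.json"   # hypothetical URL
camera = PiCamera()

while True:
    command = requests.get(FIREBASE_URL).json()
    if command == 1:                               # the app pressed the Snap button
        camera.capture("/home/pi/photo.jpg")
        requests.put(FIREBASE_URL, json=0)         # clear the command
    elif command == 2:                             # the app pressed the Video button
        camera.start_recording("/home/pi/video.h264")
        time.sleep(10)                             # record for 10 seconds
        camera.stop_recording()
        requests.put(FIREBASE_URL, json=0)
    time.sleep(1)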