Introduction: EWON Raspberry Pi Powered Home Robot

About: Engineering and Design

I recently found myself binge-watching a lot of Netflix series due to the current situation (I hope you all are safe), and I saw that season 5 of Black Mirror had been released. It's an anthology series that revolves around people's personal lives and how technology manipulates their behavior.

One of the episodes that caught my attention was "Rachel, Jack and Ashley Too". One of its main characters is a home robot named Ashley Too, and that robot has a lot of character to it. I thought to myself, I should build one: it's a good project to get started with programming, and if nothing else, at least I can program it to laugh at my jokes!

What / Who is Ewon? What can it do?

So before I started working on this project, I set some ground rules to follow. This project had to be:

  • Easy for everyone to try out
  • Not just cute but also useful, so it doesn't end up on a shelf
  • Modular, so you can keep adding new features

After setting these rules, I decided to use the Google Assistant SDK. The SDK provides a lot of the features I was looking for, and if you ever get bored of Ewon, you can always use it as a Google Home device and do everything a Google Home does.

What Ewon does is add character to the Google Assistant: it shows emotions and reacts to what you say. Now it's not just a voice you hear; you also get to see how it reacts.

Collaborate on Discord: https://discord.gg/Fp73HmT2w2

NOTE: This instructable is under development. I will soon upload all the relevant files. Thank you.

Step 1: Parts Required for Ewon

ELECTRONICS

  • Raspberry Pi
  • Servo SG90 (x4)
  • Servo MG995 – standard (x2)
  • PCA9685 16-Channel Servo Driver
  • USB sound card
  • Microphone
  • Speakers (any small speaker will do)
  • Male and female pin header connectors
  • Breadboard
  • Nextion Display

FASTENERS AND BEARINGS

  • M3*10mm (x10)
  • M3*8mm (x10)
  • M3 Nuts (x20)
  • Bearing
    • OD: 15mm ID: 6mm Width: 5mm (x2)
    • OD: 22mm ID: 8mm Width: 7mm (x2)

OTHER MATERIALS

  • Standoff
    • 40mm (x4)
    • 30mm (x4)

TOOLS

  • 3D printer

Step 2: Understanding Ewon and Programming

Before I start with the programming, let me briefly explain the block diagram of Ewon's circuitry.

The RPi (Raspberry Pi) is the brain of the system. The servo driver, controlled by the RPi, drives the servos. The display, controlled by the RPi over serial communication, shows the emotions, and lastly, the mic and speakers are used to talk with Ewon. Now that we know which hardware does what, let's start programming Ewon.

Installing the Google Assistant SDK

Let me explain the two reasons why I planned to use Google Assistant:

  • I wanted Ewon to not just be a fun robot but also a useful one. Google Assistant SDK already has a ton of resources that you can use to increase the functionality of Ewon.
  • You can also use Actions on Google and Dialogflow to give Ewon the ability to chat with pre-defined responses. For now, we will concentrate only on the basic SDK.

Let's get started by installing the Google Assistant SDK. This shouldn't be difficult, as there are a ton of resources to help you set up the SDK on the RPi. You can follow this tutorial:

Tutorial: https://developers.google.com/assistant/sdk/guide...

At the end of the above process, you should be able to press Enter on the keyboard and talk to the assistant. That's all there is to installing the Google Assistant SDK.

What should I name it? Ewon?

"Hey Google!" is what's used to start speaking to the Google Assistant, and unfortunately Google doesn't allow any other custom wake word to be used. So let's see how we can change this so that the assistant is triggered when someone calls Ewon.

Snowboy: a highly customizable hotword detection engine that runs embedded and in real time, compatible with Raspberry Pi, (Ubuntu) Linux, and Mac OS X.

A hotword (also known as a wake word or trigger word) is a keyword or phrase that the computer constantly listens for as a signal to trigger other actions.

Let's start by installing Snowboy on the RPi. Remember to activate the virtual environment before installing Snowboy, just as you did for the Assistant SDK; everything we install from here on has to go into that virtual environment. Installing Snowboy can be a little tricky, but this link should help you install it without any problems.
Link: http://docs.kitt.ai/snowboy/#introduction

Here's a summarized installation process in case the above link is confusing or the install fails.

$ [sudo] apt-get install libatlas-base-dev swig
$ [sudo] pip install pyaudio
$ git clone https://github.com/Kitt-AI/snowboy
$ cd snowboy/swig/Python3
$ make
$ cd ../..
$ python3 setup.py build
$ [sudo] python3 setup.py install

Once installed, run the demo file [found in the folder snowboy/examples/Python3/] to see if everything works.

Note: you can easily change the name of your robot to something else. All you have to do is go to https://snowboy.kitt.ai/, train a custom hotword, and place that hotword model in the same folder as ewon.pmdl.
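
For reference, a minimal hotword listener, modeled on the demo that ships in snowboy/examples/Python3, looks something like the sketch below. The callback body is a placeholder, not Ewon's actual code.

# Minimal hotword listener, modeled on Snowboy's bundled Python3 demo.
# Assumes snowboydecoder (from snowboy/examples/Python3) is importable
# and Ewon.pmdl is the custom hotword model trained on snowboy.kitt.ai.
import snowboydecoder

def on_hotword():
    # Placeholder: this is where Ewon would start talking to the Assistant.
    print("Hotword detected: Hey Ewon!")

detector = snowboydecoder.HotwordDetector("Ewon.pmdl", sensitivity=0.5)
detector.start(detected_callback=on_hotword, sleep_time=0.03)  # blocks and listens
detector.terminate()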

Can Ewon understand emotions?

Now that Ewon has a name, I'll be using Ewon instead of calling it a robot. Okay, so emotions. Short answer: no, Ewon cannot understand emotions. What we are going to do is make Ewon detect emotion in our speech using keywords and then play the facial expression associated with it.

To achieve this, I wrote a simple sentiment analysis script. There are 6 different emotion classes:

Happy, Sad, Anger, Fear, Disgust, and Surprise. These are the main emotion classes, and each has a list of keywords associated with it (for example, "good", "nice", and "excited" all come under the Happy emotion).

So whenever we say any of the keywords in an emotion class, the corresponding emotion is triggered. Say "Hey Ewon!", wait for Ewon to respond, and continue with "Today is a nice day!": it picks up the keyword "nice" and triggers the emotion Happy, which in turn triggers the facial expression for Happy.
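
To make the idea concrete, here is a minimal sketch of such a keyword lookup. The keyword lists are illustrative stand-ins, not Ewon's actual ones.

# Toy keyword-based emotion detection; Ewon's real keyword lists are longer.
EMOTION_KEYWORDS = {
    "happy":    ["good", "nice", "excited", "great"],
    "sad":      ["sad", "unhappy", "miserable"],
    "anger":    ["angry", "furious", "annoyed"],
    "fear":     ["scared", "afraid", "terrified"],
    "disgust":  ["gross", "disgusting", "awful"],
    "surprise": ["wow", "unexpected", "amazing"],
}

def detect_emotion(text):
    words = text.lower().replace("!", "").split()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(word in keywords for word in words):
            return emotion
    return None  # no keyword matched; keep a neutral face

print(detect_emotion("Today is a nice day!"))  # -> happy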

Are those ears on Ewon?

The next step is using the triggered emotion to run the respective facial expression. With Ewon, a facial expression is nothing but moving its ears and neck with the servos and changing the display to animate the eyes.

First, the servos. Driving these is fairly easy: you can follow this tutorial to set up the Adafruit servo library.
Link: https://learn.adafruit.com/adafruit-16-channel-se...

Then we assign the maximum and minimum values for each servo. These are found by manually moving each servo and checking its limits; you can do this once you have assembled Ewon.
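
Once the library is installed, driving a servo on the PCA9685 looks roughly like the sketch below. The channel number and limits here are placeholder assumptions, not Ewon's calibrated values; use the ones you measure on your own build.

# A minimal sketch using Adafruit's ServoKit library for the PCA9685.
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)  # the PCA9685 board has 16 servo channels

LEFT_EAR = 0  # PCA9685 channel the left-ear servo is wired to (assumption)
kit.servo[LEFT_EAR].actuation_range = 120             # measured mechanical limit
kit.servo[LEFT_EAR].set_pulse_width_range(500, 2400)  # SG90-style pulse range

kit.servo[LEFT_EAR].angle = 60  # move the ear to its mid position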

Eyes for Ewon

For the eyes, I am using a Nextion display, which holds a set of images like the ones below.

It's a sequence of images I designed in Photoshop which, when played in sequence, makes an animation. A similar sequence was created for every emotion. To display an emotion, all you have to do is call the specific image sequence that makes up its animation. The files are inside the 'Display files' folder; download link below.
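
Talking to the Nextion from the RPi is plain serial: each command is ASCII text terminated by three 0xFF bytes. Here is a small sketch, assuming the animation pages were named after the emotions in the Nextion editor.

# Switch the Nextion display to an emotion page over the RPi's UART.
# /dev/serial0 is the hardware UART; the page name "happy" is an
# assumption about how the pages were named in the Nextion editor.
import serial

END = b"\xff\xff\xff"  # every Nextion command ends with three 0xFF bytes

nextion = serial.Serial("/dev/serial0", baudrate=9600, timeout=1)

def show_page(name):
    nextion.write(b"page " + name.encode("ascii") + END)

show_page("happy")  # plays the happy eye animation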

Finally!

Putting it all together: when the Happy emotion is triggered by the script, the happy function is called, the servos move to the preset angles, and the display plays the happy eye animation. This is how we achieve an "understanding" of human emotions. The method isn't perfect, and there are times when a keyword doesn't fall in the emotion you'd expect, but for now it works well enough, and you can always add more keywords to improve the accuracy of detection. Further, this could be replaced by a properly trained emotion analysis model, like the ParallelDots emotion analysis model, for better results; but when I tried it, there was a lot of delay, which made Ewon react slower. Maybe Ewon version 2.0 will have something like this.
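
In code, the glue is just a lookup from the detected emotion to an expression function, along the lines of this sketch. It reuses detect_emotion, show_page, kit, and LEFT_EAR from the earlier sketches, and the angles are made-up examples, not Ewon's calibrated values.

# Sketch of the emotion-to-expression glue; detect_emotion(), show_page(),
# kit and LEFT_EAR are the names defined in the earlier sketches.
def play_happy():
    kit.servo[LEFT_EAR].angle = 100  # perk the ears up
    show_page("happy")               # happy eye animation on the display

def play_sad():
    kit.servo[LEFT_EAR].angle = 20   # let the ears droop
    show_page("sad")

EXPRESSIONS = {"happy": play_happy, "sad": play_sad}

emotion = detect_emotion("Today is a nice day!")
if emotion in EXPRESSIONS:
    EXPRESSIONS[emotion]()  # -> play_happy()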

This is the LINK to all the files needed to run EWON. Download the file and follow the steps below:

  • Unzip the file and place the folder (Ewon) at /home/pi/
  • Add the Device ID and Model ID in the main.py file. These IDs are obtained while installing the Google Assistant SDK.
  • Open the terminal and run the following commands:
source env/bin/activate
python main.py models/Ewon.pmdl

Step 3: Printing the Body

You can find the 3D files here: https://www.thingiverse.com/thing:4511545

Now that we are all set up with the brain of Ewon, it's time to print its body. There are 18 unique parts to be printed, most of them pretty small, with a total print time of around 15-20 hours (excluding the cases).

I used white PLA with 50% infill and a layer height of 0.2 mm. You can change these values if needed and it should still work fine, but make sure the small parts have 100% infill for strength.

After the parts have been printed, use sandpaper or a hand file to clean them up, especially the links where parts slide past each other. Smoothing the joints will make the mechanism run smoothly and put less resistance on the servos. This process can take as long as you like, as one can get lost trying to make the printed parts look perfect.

Extra notes: You can re-drill the holes in the 3D-printed parts with a 3 mm bit; all the holes have the same dimensions. This will make it easier to drive in the screws later in the assembly.

Step 4: Putting Ewon Together

Before we start with the assembly, a few modifications to the printed parts are needed. The parts named 'servo link' have to be fitted with the servo horns that come with the servos; this lets the 3D-printed links connect firmly to the servos.

Assembly of Ewon should be straightforward. I have attached images for you to follow along.

Extra notes: Make sure you do not overtighten any of the bolts or screws, as this may crack or wear out the printed parts.

Step 5: Wiring Up Ewon

We are at the final step to bring Ewon to life. Here's the wiring diagram for the components, along with images showing the connections.

  • The servo driver is connected to the I2C pins (SDA and SCL) of the RPi.
  • The display is connected to the RX and TX pins of the RPi.
  • The microphone and speakers are connected to the USB sound card, which plugs into the RPi's USB port.

Warning: Be careful not to short your RPi. Check all your connections twice and make sure you haven't made any mistakes. All the accessories, that is, the speakers, servo driver, and display, are powered by a separate 5 V supply and do not use the Raspberry Pi's 5 V line. The Raspberry Pi only sends data to the accessories; it does not power them.
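
Before powering everything up, a quick way to confirm the servo driver wiring is to scan the I2C bus from Python; the PCA9685 answers at address 0x40 by default.

# Quick I2C sanity check: the PCA9685 should appear at 0x40 by default.
import board
import busio

i2c = busio.I2C(board.SCL, board.SDA)
while not i2c.try_lock():  # wait until the bus is free
    pass
print([hex(address) for address in i2c.scan()])  # expect ['0x40']
i2c.unlock()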

Step 6: Hey Ewon! Can You Hear Me?

So we have attached all of our accessories and installed all the necessary libraries. You can start Ewon by running the shell script:
./run_Ewon.sh
But what is this .sh script? Ewon uses many different libraries with different scripts (Google Assistant SDK, Snowboy, Adafruit, etc.), and all the scripts live in their respective folders. (We could move all the files to the same path and keep the scripts organized, but currently some of the libraries don't allow moving their source files, so for now we'll keep them in their respective locations.)
The .sh file is a shell script that runs all these scripts one by one from their locations, so you don't have to go to each location and run them manually. This makes it easier to handle all the commands.

Once you run the shell script, just say "Hey Ewon!" and you should see Ewon start listening to you. Now you can use Ewon like a Google Assistant: speak to it and watch Ewon change expression based on what you say. Try something like "Hey Ewon! I am sad today" and you'll see Ewon be sad with you. Ask Ewon for a joke and see it laugh along.

Step 7: What's Next?

Ewon doesn't stop here. Ewon now has a way to detect and show emotions, but we can do so much more. This is just the beginning.

In the coming updates, we will work on:

  • Making Ewon detect and track faces, so it turns to follow you.
  • Adding sound effects to give extra depth to the character.
  • Adding mobility so that Ewon can move around with you.

Note: Due to the current situation, it has become very difficult to source parts for this project. This made me change the design and functionality based on the things I had in my inventory. As soon as I get my hands on all the parts, I'll update the project above.

Collaborate on Discord: https://discord.gg/Fp73HmT2w2

Updates:

  • Made some changes in the code and removed the shell script.
  • Added a rectangular body for EWON.

Participated in the 3D Printed Contest