EWON Raspberry Pi Powered Home Robot




Introduction: EWON Raspberry Pi Powered Home Robot

About: Engineering and Design

I recently found myself binge-watching a lot of Netflix series due to the current situation (I hope you are all safe), and I saw that season 5 of Black Mirror had been released. It's an anthology series that revolves around people's personal lives and how technology manipulates their behavior.

One of the episodes that caught my attention was "Rachel, Jack and Ashley Too". One of its main characters is a home robot named Ashley Too, and that robot has a lot of character to it. I thought to myself that I should build one: it's a good project for getting started with programming, and if nothing else I can at least program it to laugh at my jokes!

What / Who is Ewon? What can it do?

So before I started work on this project, I set some ground rules to be followed. The project had to be:

  • Easy for everyone to try out
  • Not just cute but also useful, so it doesn't end up on a shelf
  • Modular, so that you can keep adding new features.

After setting these rules, I decided to use the Google Assistant SDK. It provides a lot of the features I was looking for, and if you ever get bored of Ewon you can always use it as a Google Home device and do what a Google Home does.

What Ewon adds is character to the Google Assistant: it shows emotions and reacts to what the user says. Now it's not just a voice you hear; you also get to see how it reacts.

NOTE: This instructable is under development. I will soon upload all the relevant files. Thank you.

Step 1: Parts Required for Ewon


Electronics:

  • Raspberry Pi
  • Servo SG90 (x4)
  • Servo MG995 – standard (x2)
  • PCA9685 16-Channel Servo Driver
  • USB sound card
  • Microphone
  • Speakers (any small speaker will do)
  • Male and female pin header connectors
  • Breadboard
  • Nextion Display


Screws and bearings:

  • M3*10mm (x10)
  • M3*8mm (x10)
  • M3 Nuts (x20)
  • Bearing
    • OD: 15mm ID: 6mm Width: 5mm (x2)
    • OD: 22mm ID: 8mm Width: 7mm (x2)


  • Standoff
    • 40mm (x4)
    • 30mm (x4)


Tools:

  • 3D printer

Step 2: Understanding Ewon and Programming

Before I start with the programming, let me briefly explain the block diagram of Ewon's circuitry.

The RPI (Raspberry Pi) is the brain of the system. A servo driver controlled by the RPI drives the servos, a display controlled by the RPI over serial communication shows the emotions, and finally a microphone and speakers are used to communicate with Ewon. Now that we know which hardware does what, let's start programming Ewon.

Installing the Google Assistant SDK

Let me explain the two reasons why I planned to use Google Assistant:

  • I wanted Ewon to not just be a fun robot but also a useful one. Google Assistant SDK already has a ton of resources that you can use to increase the functionality of Ewon.
  • You can also use Actions on Google and Dialogflow to give Ewon the ability to chat with pre-defined responses. For now, we will concentrate on just the basic SDK.

Let's get started by installing the Google Assistant SDK. This shouldn't be difficult, as there are a ton of resources to help you set up the Google Assistant SDK on the RPI. You can follow this tutorial:

Tutorial: https://developers.google.com/assistant/sdk/guide...

At the end of the above process, you should be able to press Enter on the keyboard and talk to the assistant. That's all there is to installing the Google Assistant SDK.

What should I name it? Ewon?

"Hey Google!" is what's used to start speaking to the Google Assistant, and unfortunately Google doesn't allow any custom wake word to be used. So let's see how we can change this so that the Assistant is triggered when someone calls Ewon.

Snowboy is a highly customizable hotword detection engine that runs in real time and is compatible with Raspberry Pi, (Ubuntu) Linux, and macOS.

A hotword (also known as a wake word or trigger word) is a keyword or phrase that the computer constantly listens for as a signal to trigger other actions.

Let's start by installing Snowboy on the RPI. Remember to activate the virtual environment before installing Snowboy, just as you did for the Assistant SDK. Everything we install from here on has to be installed in the virtual environment. Installing Snowboy can be a little tricky, but this link should help you install it without any problems.
Link: http://docs.kitt.ai/snowboy/#introduction

Here's a condensed installation process in case the above link gets confusing or the install fails.

$ [sudo] apt-get install libatlas-base-dev swig
$ [sudo] pip install pyaudio
$ git clone https://github.com/Kitt-AI/snowboy
$ cd snowboy/swig/Python3
$ make
$ cd ../..
$ python3 setup.py build
$ [sudo] python3 setup.py install

Once installed, run the demo file [found in the folder snowboy/examples/Python3/] to check that everything works.

Note: you can easily change the name of your robot to something else. All you have to do is go to https://snowboy.kitt.ai/, train a custom hotword, and then place that hotword in the same folder as ewon.pmdl.

Can Ewon understand emotions?

Now that Ewon has a name, I'll be using Ewon instead of calling it a robot. Okay, so emotions. Short answer: no, Ewon cannot understand emotions. What we are going to do instead is make Ewon detect the emotion in our speech using keywords, and then play the corresponding facial expression.

To achieve this, I wrote a simple sentiment analysis script with six different emotion classes.

Happy, Sad, Anger, Fear, Disgust, and Surprise are the main emotion classes, and each of them has a list of keywords associated with the emotion (for example, "good", "nice", and "excited" all fall under Happy).

Whenever we say any of the keywords in an emotion class, the corresponding emotion is triggered. So if you say "Hey Ewon!", wait for Ewon to respond, and continue with "Today is a nice day!", it picks up the keyword "nice", triggers the emotion Happy, and plays the facial expression for Happy.
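As a minimal sketch, the keyword-based trigger can look like the following. The keyword lists and function name here are illustrative placeholders, not Ewon's actual code; you would extend the lists to improve detection.

```python
# Hypothetical sketch of keyword-based emotion detection, as described above.
# The keyword lists are placeholders; extend them to improve accuracy.

EMOTION_KEYWORDS = {
    "happy":    ["good", "nice", "excited", "great", "awesome"],
    "sad":      ["sad", "unhappy", "down", "miserable"],
    "anger":    ["angry", "furious", "annoyed"],
    "fear":     ["scared", "afraid", "terrified"],
    "disgust":  ["gross", "disgusting", "nasty"],
    "surprise": ["wow", "unexpected", "amazing"],
}

def detect_emotion(sentence, default="neutral"):
    """Return the first emotion class whose keyword appears in the sentence."""
    words = [w.strip(".,!?") for w in sentence.lower().split()]
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(w in keywords for w in words):
            return emotion
    return default

print(detect_emotion("Today is a nice day!"))  # "nice" triggers "happy"
```

The detected class is then used to pick which servo pose and eye animation to play.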

Are those ears on Ewon?

The next step is using the triggered emotion to play the corresponding facial expression. With Ewon, a facial expression is nothing but moving its ears and neck using the servos and changing the display to move the eyes.

First, the servos. Running them is fairly easy; you can follow this tutorial to set up the Adafruit servo library.
Link: https://learn.adafruit.com/adafruit-16-channel-se...

Then we assign the maximum and minimum values for each servo. This is done by manually moving each servo and checking its limits. You can do this once you have assembled Ewon.
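Once you have found each servo's limits, the angle has to be clamped to that range and converted into the 12-bit pulse count the PCA9685 driver expects. Here is a small sketch under assumed calibration values (150 and 600 are examples, not Ewon's measured numbers; calibrate your own):

```python
# Sketch only: clamp an angle to the servo's safe range, then map it linearly
# to a PCA9685 pulse count. SERVO_MIN/SERVO_MAX are example calibration values.

SERVO_MIN = 150   # pulse count at 0 degrees (measure this per servo)
SERVO_MAX = 600   # pulse count at 180 degrees

def angle_to_pulse(angle, min_angle=0, max_angle=180):
    """Clamp the angle, then linearly map it into [SERVO_MIN, SERVO_MAX]."""
    angle = max(min_angle, min(max_angle, angle))
    span = SERVO_MAX - SERVO_MIN
    return SERVO_MIN + int(span * (angle - min_angle) / (max_angle - min_angle))

# On the real hardware, the Adafruit library call would then be roughly:
#   pwm = Adafruit_PCA9685.PCA9685()
#   pwm.set_pwm(channel, 0, angle_to_pulse(90))
print(angle_to_pulse(0), angle_to_pulse(90), angle_to_pulse(180))
```

Clamping first means a buggy emotion script can never push a servo past the limits you found by hand.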

Eyes for Ewon

For the eyes, I am using a Nextion display, which holds a set of pictures like the ones below.

It's a sequence of images I designed in Photoshop which, when played in order, makes an animation. A similar sequence was created for every emotion. To display an emotion, all you have to do is call the specific image sequence that makes up its animation. The files are inside the 'Display files' folder; the download link is below.
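For reference, Nextion instructions are plain ASCII strings terminated by three 0xFF bytes, so triggering an animation over serial can be framed like this (the page name "happy" is a placeholder for whatever you named the screen in the Nextion editor):

```python
# Sketch of framing a Nextion instruction. Every Nextion command is ASCII
# text followed by the three-byte 0xFF 0xFF 0xFF terminator.

TERMINATOR = b"\xff\xff\xff"

def nextion_command(cmd):
    """Return the bytes to write to the display's serial port."""
    return cmd.encode("ascii") + TERMINATOR

# On the Pi you would then write this to the display's UART, e.g.:
#   import serial
#   ser = serial.Serial("/dev/serial0", 9600)
#   ser.write(nextion_command("page happy"))
print(nextion_command("page happy"))
```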


Putting it all together: when the Happy emotion is triggered by the script, the happy function is called, the servos move to the preset angles, and the display plays the happy eye animation. This is how we achieve "understanding" of human emotions.

This method isn't the best, and there are times when a keyword doesn't fall under the same emotion as predefined, but for now it works well enough, and you can always add more keywords to improve the detection accuracy. It could also be replaced by a properly trained emotion analysis model, such as the ParallelDots emotion analysis model, for better results. But when I tried that, there was a lot of latency, which made Ewon react slowly. Maybe Ewon version 2.0 will have something like this.
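Conceptually, the glue between the detector and the hardware is just a lookup from emotion to a servo pose plus an eye animation. A hypothetical sketch (the angles and page names are placeholders, not Ewon's actual values):

```python
# Hypothetical emotion -> action dispatch table; all values are placeholders.
EMOTION_ACTIONS = {
    "happy": {"ear_angles": (120, 60), "animation": "page happy"},
    "sad":   {"ear_angles": (40, 140), "animation": "page sad"},
}

def react(emotion):
    """Look up the servo pose and animation for an emotion; None if unknown."""
    action = EMOTION_ACTIONS.get(emotion)
    if action is None:
        return None
    # On the real robot you would now drive the servos to action["ear_angles"]
    # and send action["animation"] to the Nextion display.
    return action

print(react("happy")["animation"])
```

Keeping the mapping in one table makes it easy to add a new emotion later: one new entry, one new image sequence.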

This is the LINK to all the files needed to run EWON. Download the file and follow the steps below:

  • Unzip the file and place the folder (Ewon) at /home/pi/
  • Add the Device ID and Model ID in the main.py file. These IDs are obtained while installing the Google Assistant SDK.
  • Open the command prompt and run the following commands:
source env/bin/activate
python main.py models/Ewon.pmdl

Step 3: Printing the Body

You can find the 3d files here: https://www.thingiverse.com/thing:4511545

Now that we are all set up with the brain of Ewon, it's time to print its body. There are 18 unique parts to be printed, most of them pretty small, with a total print time of around 15-20 hours (excluding the cases).

I used white PLA with 50% infill and a layer height of 0.2 mm. You can change these values if needed and it should still work fine, but make sure that the small parts have 100% infill for strength.

After the parts have been printed, use sandpaper or a hand file to clean them up, especially the links where parts slide past each other. Smoothing the joints will make the mechanism move freely and offer less resistance to the servos. This process can take as long as you want, as one can get lost trying to make the printed parts look perfect.

Extra note: you can re-drill the holes in the 3D printed parts using a 3 mm bit. All the holes have the same dimensions. This will make it easier to fit the screws and nuts later in the assembly.

Step 4: Putting Ewon Together

Before we start with the assembly, a few modifications are needed on the printed parts. The parts named "servo link" have to be fitted with the servo horns that come with the servos; this lets the 3D printed links connect securely to the servos.

Assembly of Ewon should be straightforward. I have attached images for you to follow along.

Extra note: make sure you do not overtighten any of the bolts or screws, as this may crack or wear the printed parts.

Step 5: Wiring Up Ewon

We are at the final step to bring Ewon to life. Here's the wiring diagram for the components, along with images showing the connections.

  • The servo driver is connected to the I2C pins (SDA and SCL) of the RPI.
  • The display is connected to the RX and TX pins of the RPI.
  • The microphone and speakers are connected to the USB sound card, which is connected to the RPI through a USB port.

Warning: be careful not to short your RPI. Double-check all your connections and make sure you haven't made any mistakes. All the accessories (the speakers, servo driver, and display) are powered by a separate 5 V supply and do not use the Raspberry Pi's 5 V line. The Raspberry Pi only sends data to the accessories; it does not power them.

Step 6: Hey Ewon! Can You Hear Me?

So we have attached all the accessories and installed all the necessary libraries. You can start Ewon by running the shell script:
./run_Ewon.sh
But what is this .sh script? Ewon uses many different libraries with different scripts (Google Assistant SDK, Snowboy, Adafruit, etc.), and each script sits in its own folder. (We could move all the files into one path and keep the scripts organized, but currently some of the libraries don't allow moving the source files, so for now we keep them in their respective locations.)
The .sh file is a shell script that runs all of these scripts one by one from their locations, so you don't have to go to each location and run them manually. This makes it easier to handle all the commands.

Once you run the shell script, just say "Hey Ewon!" and you should see Ewon start listening. Now you can use Ewon as a Google Assistant: speak to it and watch Ewon change expression based on what you say. Try something like "Hey Ewon! I am sad today" and you'll see Ewon be sad with you. Ask Ewon for a joke and see it laugh along.

Step 7: What's Next?

Ewon doesn't stop here. Ewon can now detect and show emotions, but it can do much more. This is just the beginning.

In the coming updates, we will:

  • Make Ewon detect and track your face and move along with it.
  • Add sound effects to give extra depth to the character.
  • Add mobility so that Ewon can move around with you.

Note: due to the current situation it has become very difficult to source parts for this project. This made me change the design and functionality based on the things I had in my inventory. As soon as I get my hands on all the parts, I'll update the project above.


Updates:

  • Made some changes in the code; removed the shell script.
  • Added a rectangular body for EWON.
Participated in the 3D Printed Contest




    Question 14 days ago

    Hello, thanks for this project.
    2 parts are missing :
    - the other side of the Rod_connect_.stl
    - the part connecting Rod_connect to the servo command.
    You can see it in the picture. Could you please send it ?
    Also, I noticed some differences between your STL files and the pictures. Could you please update the files ?
    Thank you


    Answer 14 days ago

    Can you check Thingiverse now? I have added a few more files.


    15 days ago

    Hello Sharath Naik, After 1.5 months I finally received all the parts and I start to build Ewon!
    What about you and the new version?
    If I remember well, you were planning to finish now...
    I hope everything goes well :)


    3 months ago

    Hi there. Great project! Those Nextion displays are hard to come by here in Australia (and I'm guessing everywhere right now). Would the code need to be changed to display on another type of display (not HDMI), for example using the DSI connector on the Pi?

    Cheers. Jonno.


    Reply 2 months ago

    I have been playing with the other sized displays from same company.
    The 2.8 https://itead.cc/product/nx3224k028-nextion-2-8-en...
    and the 3.5 https://itead.cc/product/nx4832k035-nextion-3-5-en...

    You will have to resize the pictures for a cleaner effect (haven't done that yet); otherwise, once you regenerate the .tft file from the provided .hmi, both displays are functional. Do remember to change the "device" in the Nextion editor or you will have to regenerate the .tft again and again until you remember.

    Of course I now have to look into modifying the display hold for the different displays.

    By the way thanks sharathnaik for the project it has been a lot of fun putting it together.
    Now to figure out how to make it work with MyCroft


    Reply 2 months ago

    Thank you for that update @djr868. I am currently working on the second version of ewon. So haven't updated anything here.


    Reply 8 weeks ago

    What did you use to make the mount for the display screen? As I mentioned, I have a smaller and a larger display. Presently I have the STL for the screen mount in FreeCAD and am trying to modify it to fit the other screens. If you used SCAD or something and are willing to share the source file, it would make life easier.


    Reply 7 weeks ago

    Can you share the screen you are using...


    Reply 7 weeks ago

    I have been playing with the nx3224k028 and 035, either side of the screen size you provided in the parts list. Both work with the code you provide; at some point I have to resize the images so they fit properly on the screen. It would make testing nicer to have the screen on the holder instead of on the tabletop.


    Reply 8 weeks ago

    That's a reaaaally nice project !! I'm going to build it and combine it with openbot (for movements in space).
    Very nice to read that you work on a new version.
    Will you use a different hardware ?
    Do you have a very approximate date of release ?
    Thank you :) :)


    Reply 7 weeks ago

    It'll be a much smaller version and will also use the raspberry pi. The plan is to support it long-term so I will take some time to set up the foundation. If things work out as planned, i should have something by the end of April.


    Reply 7 weeks ago

    Amazing !
    You probably already have an idea of the hardware you will use !
    Would you share it with me ? In pm if you want. I live in a remote island in indian ocean and i need more than a month to get shipment from Aliexpress.
    I would love to assemble it as soon as you publish :)


    4 months ago

    Hi, great job!
    I had a problem trying to build it. Snowboy has become obsolete; what other library could I use?


    Reply 4 months ago

    It did? Hmmm I am not really sure about other ways.. I am currently working on a second version and once I come up with something I'll update it here.. Sorry about that..


    Reply 4 months ago

    Hi, it's a very good project, I'm currently trying to replicate it. I have been a programmer for 10 years, I could collaborate with you in the new version if you are interested, this is my email softwareescarlata@gmail.com. I really like robotics and I have always wanted to do something like that.


    9 months ago

    Love it! Including how you used the ear fins for expression. I've been looking for a desktop robot companion, and this could be it.
    Any further development?


    Reply 8 months ago

    Haven't had the time to make upgrades, but hopefully I'll try to make something soon.


    Reply 8 months ago

    He's great the way he is! He's already very expressive. It was very clever to analyze emotions from keywords. Keep up the good work.


    Question 12 months ago on Step 3

    Is there anyone willing to print the 3D parts to send to me? This is the only project i've loved that seems to necessitate one.