Introduction: Nao Robot Mimicking Movements Using Kinect

In this Instructable I will explain how we let a Nao robot mimic our movements using a Kinect sensor. The goal of the project is educational: a teacher can record certain sets of moves (e.g. a dance) and use these recordings to let the children in the classroom mimic the robot. By going through this entire Instructable step by step, you should be able to fully recreate this project.

This is a school-related project (NMCT @ Howest, Kortrijk).

Step 1: Basic Knowledge

To recreate this project you have to possess some basic knowledge:

- Basic python knowledge

- Basic C# knowledge (WPF)

- Basic trigonometry knowledge

- Knowledge of how to set up MQTT on a Raspberry Pi

Step 2: Acquiring the Necessary Materials

Required materials for this project:

- Raspberry Pi

- Kinect Sensor v1.8 (Xbox 360)

- Nao robot or virtual robot (Choregraphe)

Step 3: How It Works

A Kinect sensor is connected to a computer running the WPF application. The WPF application sends data to the Python application (robot) using MQTT. Local files are saved if the user chooses to do so.

Detailed explanation:

Before we start recording, the user has to enter the IP address of the MQTT broker. Besides that, we also need the topic on which we want to publish the data. After pressing start, the application checks whether a connection can be established with the broker and gives us feedback. Checking whether a topic exists is not possible, so you're fully responsible for this one. When both inputs are OK, the application starts sending data (x, y & z coordinates from each joint) from the skeleton that is being tracked to the topic on the MQTT broker.

Because the robot is connected to the same MQTT broker and subscribed to the same topic (this has to be entered in the Python application too), the Python application now receives the data from the WPF application. Using trigonometry and self-written algorithms, we convert the coordinates to joint angles (in radians), which we use to rotate the motors inside the robot in real time.
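As an illustration of that conversion, here is a minimal sketch (not the project's actual algorithm) that computes the angle at one joint from three Kinect joint positions using the dot product; the joint coordinates below are made-up example values.

```python
# Minimal sketch: compute the angle at a joint (in radians) from three
# (x, y, z) joint positions. The coordinates below are made-up examples.
import math

def angle_at(a, b, c):
    """Angle at point b, formed by the segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    v2 = (c[0] - b[0], c[1] - b[1], c[2] - b[2])
    dot = v1[0] * v2[0] + v1[1] * v2[1] + v1[2] * v2[2]
    norm = math.sqrt(v1[0] ** 2 + v1[1] ** 2 + v1[2] ** 2) * \
           math.sqrt(v2[0] ** 2 + v2[1] ** 2 + v2[2] ** 2)
    return math.acos(dot / norm)

# e.g. the elbow angle from the shoulder, elbow and wrist positions
elbow_angle = angle_at((0.20, 0.50, 2.0), (0.30, 0.30, 2.0), (0.50, 0.30, 1.8))
print(elbow_angle)
```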

When the user is done recording, he presses the stop button. The user then gets a pop-up asking whether he wants to save the recording. When the user hits cancel, everything is reset (the data is lost) and a new recording can be started. If the user wishes to save the recording, he should enter a title and hit 'save'. When hitting 'save', all acquired data is written to a local file using the title input as the filename. The file is also added to the listview on the right side of the screen. This way, after double-clicking the new entry in the listview, the file is read and sent to the MQTT broker. Consequently, the robot will play back the recording.
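The replay itself happens in the WPF (C#) application, but the idea is simple enough to sketch in Python: read the saved file and publish its contents again on the same topic, one sample every 0.8 seconds. The filename, the payload-per-line format and the broker IP below are assumptions, and the paho-mqtt package is assumed to be installed.

```python
# Hypothetical replay sketch: publish a saved recording line by line.
import time
import paho.mqtt.client as mqtt

BROKER_IP = "192.168.0.10"        # assumption: your broker's IP address
TOPIC = "/Sandro"

client = mqtt.Client()
client.connect(BROKER_IP, 1883)
client.loop_start()

with open("my_dance.txt") as recording:   # hypothetical recording file
    for line in recording:
        client.publish(TOPIC, line.strip())
        time.sleep(0.8)                   # same interval as during recording

client.loop_stop()
client.disconnect()
```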

Step 4: Setting Up the MQTT Broker

For the communication between the Kinect (WPF project) and the robot (Python project) we used MQTT. MQTT consists of a broker (a Linux computer on which the MQTT software, e.g. Mosquitto, is running) and topics to which clients can subscribe (they receive the messages posted on the topic) and publish (they post a message on the topic).

To set up the MQTT broker, just download this entire Jessie image. This is a clean install for your Raspberry Pi with an MQTT broker on it. The topic is "/Sandro".
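To verify that the broker works before wiring up the robot, a quick test with the paho-mqtt Python client could look like this (the broker IP is an assumption; replace it with the address of your Raspberry Pi):

```python
# Quick broker test: subscribe to the "/Sandro" topic and print incoming messages.
# Assumes the paho-mqtt package is installed (pip install paho-mqtt).
import paho.mqtt.client as mqtt

BROKER_IP = "192.168.0.10"   # assumption: the IP of your Raspberry Pi broker

def on_connect(client, userdata, flags, rc):
    print("Connected to broker with result code " + str(rc))
    client.subscribe("/Sandro")

def on_message(client, userdata, msg):
    print(msg.topic + ": " + str(msg.payload))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_IP, 1883, 60)
client.loop_forever()
```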

Step 5: Installing the Kinect SDK V1.8

For the Kinect to work on your computer, you have to install the Microsoft Kinect SDK.

You can download it here:

https://www.microsoft.com/en-us/download/details.a...

Step 6: Installing Python V2.7

The robot works with the NaoQi framework. This framework is only available for Python 2.7 (NOT 3.x), so check which version of Python you have installed.

You can download python 2.7 here:

https://www.python.org/downloads/release/python-27...
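Once Python 2.7 and the NaoQi SDK are installed, a minimal sanity check is connecting to the robot and moving a single joint. The sketch below assumes the robot's IP address and uses the standard ALMotion proxy; with a virtual robot in Choregraphe, use the IP and port shown in Choregraphe's connection settings.

```python
# Minimal NaoQi sketch (Python 2.7): connect to the robot and move one joint.
from naoqi import ALProxy

ROBOT_IP = "192.168.0.20"   # assumption: replace with your (virtual) robot's IP
PORT = 9559                 # default NaoQi port

motion = ALProxy("ALMotion", ROBOT_IP, PORT)
motion.setStiffnesses("RArm", 1.0)        # stiffen the arm so the joints can move
motion.setAngles("RElbowRoll", 0.5, 0.2)  # target angle in radians, speed fraction
```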

Step 7: Coding

Github: https://github.com/PinsonJonas/Project-2-sandro

Notes:

- Coding with the Kinect: first you look for the connected Kinect. After saving it inside a property, we enabled the color and skeleton streams on the Kinect. The color stream is the live video, while the skeleton stream means a skeleton of the person in front of the camera is drawn. The color stream isn't really necessary to get this project working; we just enabled it because bitmapping the skeleton stream onto the color stream looks slick!

- In reality it's the skeleton stream that does the job. Enabling the skeleton stream means that the skeleton of the person is being tracked. From this skeleton you receive all kinds of information, e.g. bone orientations, joint information, ... The key to our project was the joint information. Using the x, y & z coordinates of each of the joints of the tracked skeleton, we knew we could make the robot move. So, every 0.8 seconds (using a timer) we publish the x, y & z coordinates of each of the joints to the MQTT broker.

- Since the Python project has a subscription on the MQTT broker, we can now access the data inside this project. Inside each joint of the robot are two motors. These motors can't be steered using the x, y & z coordinates directly. So, using trigonometry and some common sense, we converted the x, y & z coordinates of the joints to angles understandable to the robot.

So basically, every 0.8 seconds the WPF project publishes the x, y & z coordinates of each of the joints. Inside the Python project these coordinates are then converted to angles, which are sent to the corresponding motors of the robot.
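Putting the pieces together, a stripped-down version of the robot-side loop could look like the sketch below. The payload format (JSON keyed by joint name), the selected joints and the angle conversion are simplified assumptions; the real project handles all joints of the skeleton.

```python
# Simplified robot-side sketch (Python 2.7): receive joint coordinates over
# MQTT, convert one elbow to an angle and send it to the Nao.
# Broker/robot IPs and the JSON payload layout are assumptions.
import json
import math
import paho.mqtt.client as mqtt
from naoqi import ALProxy

BROKER_IP = "192.168.0.10"   # assumption: broker IP
ROBOT_IP = "192.168.0.20"    # assumption: robot IP

motion = ALProxy("ALMotion", ROBOT_IP, 9559)
motion.setStiffnesses("RArm", 1.0)

def angle_at(a, b, c):
    """Angle at joint b formed by joints a and c (all (x, y, z) sequences)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    norm = math.sqrt(sum(v * v for v in v1)) * math.sqrt(sum(v * v for v in v2))
    return math.acos(dot / norm)

def on_message(client, userdata, msg):
    joints = json.loads(msg.payload)            # assumed: {"ShoulderRight": [x, y, z], ...}
    angle = angle_at(joints["ShoulderRight"], joints["ElbowRight"], joints["WristRight"])
    motion.setAngles("RElbowRoll", angle, 0.3)  # drive the elbow towards the new angle

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER_IP, 1883)
client.subscribe("/Sandro")
client.loop_forever()
```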