Introduction: Accelerometer Based Gesture Recognition for Controlling an LED

These materials and the information contained in this instructable are provided by students enrolled in the Software of Places class (www.softwareofplaces.com) at PUC-Rio University. The content presented here is the student's final project, published by the student for class evaluation purposes, and is solely the responsibility of the page author. Statements made and opinions expressed are strictly those of the author and not of PUC-Rio University.

The aim of this project is to develop an application that allows a user to control an LED using predefined gestures.

The motivation for this project is to make the application accessible to a wider range of people. The elderly, convalescents, the deaf, and people with limited mobility are examples of those who may benefit from this technology.

This project also shows how to build a low-cost gesture recognition device, based only on a 3-axis accelerometer and Bluetooth, both embedded in a single microcontroller board.

Step 1: Devices Used

To prototype this project I used one LightBlue Bean (http://legacy.punchthrough.com/bean/), which I put into a band and wore as a bracelet on the wrist. The LightBlue Bean already has Bluetooth and a 3-axis accelerometer embedded in it. With these two components the "band" can recognize gestures and send the information to another device.

To receive the recognized gestures and control the LED, I used an Intel Galileo board with a shield for Grove connectors.

Bill of materials:

LightBlue Bean - $30

Intel Galileo Gen 2 - $75

LED - $1

Computer - ...

Step 2: Machine Learning Algorithm to Recognize Gestures

The first thing to decide is the Machine Learning (ML) algorithm. The most commonly used algorithms for problems with time series data, such as audio and gesture recognition, are the Hidden Markov Model (HMM) and Dynamic Time Warping (DTW). In this prototype, the processor that handles the accelerometer data and runs the ML model to classify a gesture is the Bean's ATmega328P. This processor has only 32 KB of flash memory for code and 2 KB of RAM, so it would be very difficult to implement a heavier ML algorithm, such as HMM or DTW, on it. Thus, I used a J48 decision tree model built with Weka (http://www.cs.waikato.ac.nz/ml/weka/) to recognize gestures. A decision tree model is basically a set of ifs and elses that you can easily put in any application code, as the sketch below illustrates.
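For illustration, a hand-translated tree could look like the following Arduino (C++) code. The feature names and thresholds here are invented, not the ones Weka will actually learn; a real exported tree has one branch per split it learned (the features themselves are explained in Step 4).

// Hypothetical translation of a tiny J48 tree into plain C++ code.
enum Gesture { NON_GESTURE, TOGGLE, SPEED_UP, SLOW_DOWN };

Gesture classify(float meanX_w0, float stdY_w2, float meanZ_w4) {
  if (meanX_w0 <= -0.42) {
    // strong initial movement on the x axis
    if (stdY_w2 <= 0.15) return SLOW_DOWN;
    else                 return TOGGLE;
  } else {
    if (meanZ_w4 <= 0.88) return SPEED_UP;
    else                  return NON_GESTURE;  // wrist ended flat and still
  }
}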

Step 3: Creating a Dataset

Before recording the data I chose three gestures to interact with the LED. I used the same gesture to turn the LED on and off (toggle); the gesture chosen for it is a movement like the Brazilian sign language (LIBRAS) sign for "turning on" a device (http://www.acessobrasil.org.br/libras/). The other two gestures were used to speed up and slow down the LED blinking. In a study by Kühnel et al. (2011), most participants moved either their arm or an iPhone downwards to reduce lighting brightness, so I chose an upward slap and a downward slap to speed up and slow down the blinking. A fourth "gesture" that needs to be trained is the non-gesture: since the application classifies time series data all the time, the ML model must know when the measurements from the accelerometer do not indicate a valid gesture.

To record the dataset I developed two applications. One was written in Arduino and runs on the LightBlue Bean, capturing the accelerometer data while the gesture is being executed; a sketch of it is shown below. The other runs in Processing, receiving the gesture data via the serial port and writing it to a text file.
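Here is a minimal sketch of the recording side, assuming the Bean Arduino library's Bean.getAcceleration() and Bean.setLed() calls; the sample count and sampling rate are illustrative choices, not my exact values.

// Runs on the LightBlue Bean (Arduino).
#define SAMPLES 50               // readings captured per gesture (assumption)

void setup() {
  Serial.begin(57600);           // virtual serial over Bluetooth
}

void loop() {
  Bean.setLed(255, 0, 0);        // red LED on: perform the gesture now
  for (int i = 0; i < SAMPLES; i++) {
    AccelerationReading a = Bean.getAcceleration();
    Serial.print(a.xAxis); Serial.print(',');
    Serial.print(a.yAxis); Serial.print(',');
    Serial.println(a.zAxis);
    Bean.sleep(20);              // roughly 50 Hz sampling (assumption)
  }
  Serial.println("END");         // marks the end of one recording
  Bean.setLed(0, 0, 0);          // LED off: rest before the next gesture
  Bean.sleep(2000);
}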

Step 4: Preprocessing Data for Decision Tree Use

Unlike the HMM and DTW algorithms, a decision tree is not designed to classify time series data directly. A time series is a sequence of measurements; each gesture recording produces its own time series, and two recordings can contain different numbers of measurements even when they capture the same gesture. To use Weka's decision tree in this case, we need to extract a fixed set of features from each time series and build a file with the .arff extension.

Waleed Kadous investigated this problem in his PhD thesis and proposed two approaches for extracting features from a time series. I used the one that, according to him, worked surprisingly well despite being a simple algorithm: divide each time series, regardless of its length, into a fixed number of windows. With five windows, for example, an example that is 45 samples long has windows of nine samples each: the first window holds the first through ninth measurements, the second window the tenth through eighteenth, and so on. We then compute statistics over each window.

An R application was developed to preprocess the time series from the text file generated by the Processing application and to generate the data section of the Weka .arff training file. It computes the mean and the standard deviation of each window on each axis (x, y, and z) as the features, as sketched below.
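My tool for this step was R, but the same computation is also what the Bean needs on-board in Step 6, so here is a C++ sketch of it; NUM_WINDOWS and the choice to ignore leftover samples are my assumptions.

#include <math.h>

#define NUM_WINDOWS 5   // fixed number of windows per series (assumption)

// Fills means[] and stds[] with the mean and standard deviation of one
// axis, split into NUM_WINDOWS equal windows (leftover samples ignored).
void windowFeatures(const float *series, int n, float *means, float *stds) {
  int win = n / NUM_WINDOWS;                 // samples per window
  for (int w = 0; w < NUM_WINDOWS; w++) {
    float sum = 0, sumSq = 0;
    for (int i = w * win; i < (w + 1) * win; i++) {
      sum   += series[i];
      sumSq += series[i] * series[i];
    }
    float mean = sum / win;
    means[w] = mean;
    stds[w]  = sqrt(sumSq / win - mean * mean);  // population std. deviation
  }
}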

Step 5: Generating the Machine Learning Model

In this part, Weka does almost everything. I only needed to put all the preprocessed data into an .arff file and load it into Weka. Then you select the ML algorithm (J48) and run the training with the default settings by just pressing the Start button. The results appear on the right side of the application. In the result list on the left, right-click the generated model and select "Visualize tree"; translate that tree image into if/else code (as in the sketch in Step 2), and you have a model to recognize gestures in your application.
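For reference, the training file looks roughly like this. The attribute names and values below are hypothetical; a real file for the features of Step 4 would have 30 numeric attributes (mean and standard deviation for 5 windows on 3 axes).

@relation gestures

@attribute mean_x_w0 numeric
@attribute std_y_w2 numeric
@attribute mean_z_w4 numeric
@attribute class {toggle,speed_up,slow_down,non_gesture}

@data
-0.51,0.22,0.90,toggle
0.08,0.02,0.97,non_gesture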

Step 6: Developing the Gesture Recognizer

Now we develop the Arduino application that recognizes gestures using the created model and sends the gesture information.

This application also needs to preprocess the accelerometer data and extract the same features used by the ML model, and then use the model to classify the time series as a gesture or a non-gesture.

The application does not implement continuous gesture recognition. The gesture to be recognized must be performed while the red LED on the LightBlue Bean is on, the same way the gesture dataset was recorded.

The recognized gesture information is sent via Bluetooth to a computer running a Processing application, which reads the gesture from the serial port. A sketch of the recognition loop is shown below.
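This sketch assumes the hypothetical classify() from Step 2 and windowFeatures() from Step 4 are pasted into the same Arduino file; buffer size and timing are again illustrative.

// Runs on the LightBlue Bean (Arduino).
#define SAMPLES 50

float ax[SAMPLES], ay[SAMPLES], az[SAMPLES];
float meanX[NUM_WINDOWS], stdX[NUM_WINDOWS];
float meanY[NUM_WINDOWS], stdY[NUM_WINDOWS];
float meanZ[NUM_WINDOWS], stdZ[NUM_WINDOWS];

void setup() {
  Serial.begin(57600);
}

void loop() {
  Bean.setLed(255, 0, 0);                  // capture window open
  for (int i = 0; i < SAMPLES; i++) {
    AccelerationReading a = Bean.getAcceleration();
    ax[i] = a.xAxis; ay[i] = a.yAxis; az[i] = a.zAxis;
    Bean.sleep(20);
  }
  Bean.setLed(0, 0, 0);                    // capture window closed

  windowFeatures(ax, SAMPLES, meanX, stdX);
  windowFeatures(ay, SAMPLES, meanY, stdY);
  windowFeatures(az, SAMPLES, meanZ, stdZ);

  Gesture g = classify(meanX[0], stdY[2], meanZ[4]);
  if (g != NON_GESTURE) {
    Serial.println((int)g);                // send over the Bluetooth serial port
  }
  Bean.sleep(1000);                        // pause before the next capture
}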

Step 7: Processing the Recognized Gesture

Another application was developed in Processing. Its goal is to read the recognized gesture from the serial port and make a specific HTTP GET request to the Intel Galileo's local IP address. In this prototype, both the computer running the application and the Intel Galileo must be connected to the same local network.

Step 8: Controlling the LED

Finally, I developed the Node.js application that runs on the Intel Galileo and controls the LED.

With a few lines of Node.js and the help of the Express framework, you can create an API that receives HTTP GET requests.

Based on the request parameter received (the recognized gesture), the prototype turns the LED on and off and switches its blinking speed, with the help of the Cylon framework.