Introduction: Hear Me
This project aims to help people with hearing and speech disabilities communicate more easily with others.
Using an Intel Edison, a Leap Motion controller, an audio device, and the cloud computing services offered by Microsoft Azure, we build a wearable device that translates sign language into spoken language in real time.
Step 1: Parts and Materials
- Intel Edison with Arduino Breakout Board
- Leap Motion
- Bluetooth Speaker
- 32GB Micro-SD Card
- 12V-2A Adapter
Step 2: Software and Configuration
OS/Image: ubilinux for Edison
- Because the libraries are large, we have to "expand" the root partition. To do this, we moved /var and /usr onto the micro-SD card and created symbolic links back to their original paths.
- The Leap Motion library requires newer versions of its dependencies, so we have to upgrade ubilinux to Debian jessie: replace every "wheezy" in /etc/apt/sources.list with "jessie", then run "apt-get update; apt-get upgrade; apt-get dist-upgrade".
- For Bluetooth speaker support, install pulseaudio and pulseaudio-module-bluetooth via apt-get, and edit /etc/bluetooth/audio.conf to add the following line: Enable=Source,Sink,Media,Socket
- The Leap Motion daemon has a bug on Debian: it cannot be started with the service command, so we put the line "leapd &" in /etc/rc.local.
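The configuration above can be sketched as a shell session (run as root; the SD-card mount point /media/sdcard is an assumption, so adjust it to where your card is actually mounted):

```shell
# Move the large directories onto the micro-SD card and symlink them back.
# (Safest to do this from a console session, since /usr is in use while running.)
mkdir -p /media/sdcard/var /media/sdcard/usr
cp -a /var/. /media/sdcard/var/ && cp -a /usr/. /media/sdcard/usr/
mv /var /var.old && ln -s /media/sdcard/var /var
mv /usr /usr.old && ln -s /media/sdcard/usr /usr

# Upgrade ubilinux (wheezy) to Debian jessie
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
apt-get update && apt-get upgrade && apt-get dist-upgrade

# Bluetooth audio support
apt-get install pulseaudio pulseaudio-module-bluetooth
echo 'Enable=Source,Sink,Media,Socket' >> /etc/bluetooth/audio.conf

# Work around the leapd service bug: start the daemon at boot instead
sed -i '/^exit 0/i leapd &' /etc/rc.local
```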
Step 3: Collecting Gestures
Using the Leap Motion SDK and its Python API, we record the position and orientation of each palm and the positions of the fingers relative to it, select suitable frames, and save the data into a multi-dimensional array. After labels are attached manually, this becomes the training data.
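The per-frame feature extraction might look like the sketch below. The function names are illustrative, not the project's actual code; the Leap Motion API does expose each hand's palm position, palm normal, and fingertip positions as 3-vectors, which is what this assumes as input:

```python
def frame_features(palm_position, palm_normal, fingertips):
    """Flatten one hand's pose in a single frame into a feature row.

    palm_position: (x, y, z) palm centre in millimetres.
    palm_normal:   (x, y, z) unit vector out of the palm (orientation).
    fingertips:    list of five (x, y, z) fingertip positions.

    Fingertips are stored relative to the palm, so the features are
    invariant to where the hand sits above the sensor.
    """
    row = list(palm_position) + list(palm_normal)
    for tip in fingertips:
        row.extend(t - p for t, p in zip(tip, palm_position))
    return row

# Example: a palm 200 mm above the sensor, facing down,
# with all five fingertips 50 mm in front of it.
features = frame_features(
    palm_position=(0.0, 200.0, 0.0),
    palm_normal=(0.0, -1.0, 0.0),
    fingertips=[(0.0, 200.0, -50.0)] * 5,
)
print(len(features))  # 3 + 3 + 5*3 = 21 values per frame
```

Stacking one such row per selected frame, with a gesture label appended to each, gives the multi-dimensional training array described above.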
Step 4: Machine Learning Model and Web Server
We upload the training data to Microsoft Azure, use the machine learning tools provided by Azure to train several candidate models, and publish the best one as a web service. In this project we chose a Multiclass Neural Network.
Step 5: Real-time Gesture Matching
The Leap Motion streams raw frames to the Edison; after some processing on the Edison, the data is submitted to Azure over Wi-Fi, which detects which gesture it is.
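On the response side, a multiclass model typically returns a probability per gesture class; picking the winner can be as simple as the sketch below (the score dictionary here is illustrative, since the exact scored-column names depend on the published experiment):

```python
def best_gesture(scores):
    """scores: dict mapping gesture label -> predicted probability.

    Returns the most likely gesture and its probability, so the caller
    can also apply a confidence threshold before speaking.
    """
    label = max(scores, key=scores.get)
    return label, scores[label]

label, prob = best_gesture({"hello": 0.91, "thanks": 0.06, "yes": 0.03})
print(label)  # hello
```

Thresholding on the returned probability (for example, staying silent below 0.7) avoids speaking on ambiguous frames.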
Step 6: Sound Output
The Edison connects to the Bluetooth speaker and plays the sound file corresponding to the recognized gesture.
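The playback step can be sketched as a lookup from gesture label to a pre-recorded clip, played through PulseAudio with the standard `paplay` tool (which routes to the Bluetooth sink configured in Step 2). The sound directory and file names here are illustrative:

```python
import os
import subprocess

# Hypothetical directory of pre-recorded clips, one per gesture label.
SOUND_DIR = "/home/edison/sounds"

def sound_path(gesture):
    """Map a gesture label to its pre-recorded .wav file."""
    return os.path.join(SOUND_DIR, gesture + ".wav")

def speak(gesture):
    # paplay plays through PulseAudio, which forwards the audio
    # to the paired Bluetooth speaker.
    subprocess.call(["paplay", sound_path(gesture)])

print(sound_path("hello"))  # /home/edison/sounds/hello.wav
```

Calling `speak(label)` with the label returned from Step 5 closes the loop from signed gesture to spoken word.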