Introduction: Brain-Controlled Music Generator - Submitted by BayLab for the Instructables Sponsorship Program

Wouldn't it be cool if you could create music based on your brain activity? Follow this instructable and I'll show you how!

Step 1: The Platform

For this project, you need a way to read brainwaves. One of the cheapest ways is to hack the Star Wars Force Trainer. You can pick up a used one on eBay (that's what I did) or buy one new. It's a really neat toy and actually quite fun to play with as it is. It reads alpha and beta brainwaves and sends them back to the base station, which then controls the speed of a fan to blow a ping-pong ball up and down. We'll intercept that data and use it for our own purposes.

Here's a video of the stock version in action:

Step 2: Getting the Data

The headpiece takes an EEG using three contacts against your head, then sends the data wirelessly over a regular RF connection somewhere in the 2.4 GHz range. Most importantly, Zibri discovered that there are header pins left in from testing, and you can get the sensor data out of them over RS232, so it's really easy to interface with.

I wasn’t completely sure how I would make everything work since I didn’t know what the serial port would send me. I took it apart, found the header pins, and plugged it into an FT232RL IC so I could see the data over USB. It turns out that it sends rows of three numbers. The first is the “Attention” number, which comes from the EEG. The second is the “Meditation” number, which also comes from the EEG. Each is derived from a particular type of brainwave activity (roughly beta and alpha, respectively). The third number is the connection quality: if the sensor isn’t against your skin or isn’t getting a good reading, that number goes to 200. The first two numbers can range from 0 to 100, and the last one from 0 to 200 (as far as I can tell). Not every value actually shows up in each column; each seems to prefer certain numbers, which I think may have to do with the FFT (fast Fourier transform) being done on the data.

It outputs this data roughly 1-2 times per second (that is, 1-2 rows of the three numbers). This isn’t a rock-solid rate, though: if you keep the same brainwave level for a while, the sensor stops transmitting, so it won’t send a new row for several seconds if you’re able to hold the ball steady in the tube.
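
Just to illustrate, here's a bare-bones Processing sketch that reads the stream and prints the three numbers. The port index, the 9600 baud rate, and the whitespace-separated row format are assumptions about the setup, so check them against what your serial terminal actually shows:

import processing.serial.*;

Serial port;

void setup() {
  println(Serial.list());                          // see which port the FT232RL shows up on
  port = new Serial(this, Serial.list()[0], 9600); // port index and baud rate may differ on your machine
  port.bufferUntil('\n');                          // call serialEvent() once per row
}

void draw() {
  // nothing to draw; the sketch just needs to keep running
}

void serialEvent(Serial p) {
  String row = p.readStringUntil('\n');
  if (row == null) return;
  String[] parts = splitTokens(trim(row));         // e.g. "47 63 0"
  if (parts.length != 3) return;                   // ignore malformed rows
  int attention  = int(parts[0]);                  // 0-100
  int meditation = int(parts[1]);                  // 0-100
  int quality    = int(parts[2]);                  // 200 means bad contact
  if (quality == 200) return;                      // skip rows with no good reading
  println("Attention: " + attention + "  Meditation: " + meditation);
}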

Step 3: Hacking the Hardware

These pictures show where you need to solder in order to get the RS232 output. The plastic shell pops apart into two pieces after you've removed all the screws.

To connect this to your computer, you can either run it through a serial cable or use a serial-to-USB adapter if your computer doesn't have a serial port.

Step 4: Software

To actually make the sounds, I used Processing. If you’ve ever done any work with either Processing or computer music, you know that this was a TERRIBLE idea. I agree. I chose Processing because I wanted to eventually have a cool visualization to go along with the sounds, and it can easily interface with an Arduino (although I ended up not using this feature). I also didn’t know about the existence of SuperCollider or any other languages specifically meant for this purpose.

Anyway, I basically read in the serial data, parse it into three separate numbers, and then take the average of the last 8 “Attention” numbers and the last 8 “Meditation” numbers. Based on those averages, certain samples are played from a sound library: the “Attention” average picks a melodic sample and the “Meditation” average picks a background, ambient sample. The samples can play on top of each other, so you end up with several of these ambient-sounding samples going at the same time. They are all pentatonic, so they work in any order.

I later wrote a second version that plays guitar instead of ambient sounds. I recorded myself playing four bars of several chords and rearranged the code a bit to try to make it sound a little better. You can play lots of ambient sounds simultaneously and it won’t sound bad, but that isn’t always the case for the guitar or more melodic stuff. It never sounds dissonant, but it can get interesting sometimes. Most of the time I find it downright pleasant.
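
This isn’t the exact code from the repo, but here’s a stripped-down sketch of the idea using the Minim library for playback. The sample file names, the five-sample split, and the 8-reading window are just placeholders to show the structure, and it assumes you merge it with the serial-reading sketch from Step 2 so that serialEvent() calls newReading() with each fresh pair of numbers:

import ddf.minim.*;

Minim minim;
AudioPlayer[] melodySamples;    // chosen by the "Attention" average
AudioPlayer[] ambientSamples;   // chosen by the "Meditation" average

int[] attentionHistory  = new int[8];   // last 8 "Attention" readings
int[] meditationHistory = new int[8];   // last 8 "Meditation" readings
int readings = 0;

void setup() {
  minim = new Minim(this);
  // Placeholder file names -- drop your own pentatonic samples into the sketch's data folder.
  melodySamples  = new AudioPlayer[5];
  ambientSamples = new AudioPlayer[5];
  for (int i = 0; i < 5; i++) {
    melodySamples[i]  = minim.loadFile("melody"  + i + ".wav");
    ambientSamples[i] = minim.loadFile("ambient" + i + ".wav");
  }
}

void draw() {
  // nothing to draw; the serial and audio callbacks do the work
}

// Call this from serialEvent() with each new pair of numbers.
void newReading(int attention, int meditation) {
  attentionHistory[readings % 8]  = attention;
  meditationHistory[readings % 8] = meditation;
  readings++;
  if (readings < 8) return;                        // wait until the window is full

  // Map each 0-100 average onto one of the five samples and layer them.
  int melodyIndex  = constrain(average(attentionHistory)  / 20, 0, 4);
  int ambientIndex = constrain(average(meditationHistory) / 20, 0, 4);
  melodySamples[melodyIndex].rewind();
  melodySamples[melodyIndex].play();
  ambientSamples[ambientIndex].rewind();
  ambientSamples[ambientIndex].play();
}

int average(int[] values) {
  int sum = 0;
  for (int v : values) sum += v;
  return sum / values.length;
}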

Right now, the code is extremely simple, but it got the job done considering it’s the fruit of a sleepless night of work spent hacking together a demo. You can download my code here:

https://github.com/blueintegral/Mental-Note

ClockworkRobot made another revision of this code that includes a visualization that you can download on his website. He’s also got more pictures and information about the hack.

Step 5: Bonus Points

I added some LEDs underneath the fan just to make it look more awesome. These are optional. 

Step 6: Similar Work/Inspiration

I talked to some graduate students from the Georgia Tech GVU Center. They are doing some amazing things there, and one of those things happens to be composing music using data from an EEG! They are taking a slightly different approach to figuring out what the person wants to hear. Here’s their paper [pdf]. They had better sensors, and they were also using SuperCollider.

Since I did this whole thing in a single night, there’s plenty of room for improvement. I’d like to change the algorithm to depend on previous sounds played. I’d also like to experiment with more sounds, and perhaps use a MIDI library to play individual notes (basically controlling an entire piano rather than just samples); there’s a rough sketch of that idea below. I’ve got a friend who’s a student at the Eastman School of Music who I’m hoping can help me write a better algorithm to create better-sounding and more original music. I’m also going to get an FT232RL and put it inside the shell so it looks nicer; it can be powered by USB, and I’ll just install a panel-style USB port in the case. I’d also like to add support for using multiple Force Trainers at once, so one person can compose the melody while another composes the harmony or countermelody.
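
For the MIDI idea, something like the MidiBus library for Processing can send individual notes. This is only a rough sketch of what that might look like; the device choice and the note mapping are made up for illustration:

import themidibus.*;

MidiBus midi;

void setup() {
  MidiBus.list();                    // print the available MIDI devices
  midi = new MidiBus(this, -1, 0);   // no input, first available output (adjust for your system)
}

void draw() {
  // nothing to draw
}

// Map an averaged "Attention" value (0-100) onto the C major pentatonic scale.
void playAttentionNote(int attentionAvg) {
  int[] pentatonic = { 60, 62, 64, 67, 69 };             // C, D, E, G, A
  int pitch = pentatonic[constrain(attentionAvg / 20, 0, 4)];
  midi.sendNoteOn(0, pitch, 100);                        // channel 0, velocity 100
  delay(300);                                            // crude fixed note length
  midi.sendNoteOff(0, pitch, 100);
}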

Step 7: Demonstration