
About

An experimental video capture system that connects a participant's brain to camera functions via a Neurosky Mindwave EEG Reader Headset. The Biofeedback Cinema system operates in lieu of a traditional cinematographer, instead giving agency of the composition over to the participant herself via a custom BRAIN to CAMERA interface. The project was developed in collaboration with workshop attendees Gregory Hough, Salud Lopez, and Pedro Peira. You can read up on the results of the workshop at: http://comunidad.medialab-prado.es/en/groups/biofeedback-cinema

Prototype Configuration

The Biofeedback Cinema system lends itself to many potential applications. For this Instructable we have prepared a demo of the system that looks at the participant's level of focus/attention (a single integer) and translates that to camera position (via pan and tilt) and camera focus (internally via OpenCV). This is all made possible by a Bluetooth connection between a Neurosky EEG Reader Headset and a Raspberry Pi.

The Raspberry Pi is a small computer outfitted with a webcam and scripts (available below) that bridge the participant's brain activity to camera settings and camera position. Dynamic camera position is made possible via an Arduino microcontroller receiving signals from the Raspberry Pi. We look forward to further development as we aim to include additional brainwave parameters (frequencies associated with eye blinks, etc) and camera functions (i.e. hue, saturation, brightness, etc).
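To make the mapping concrete before diving into the build, here is a rough Python sketch of the idea: the headset's attention value (a single integer, 0-100) is scaled to a servo angle and to an OpenCV blur kernel size. The function names and scaling choices are hypothetical illustrations, not the project's actual script.

```python
def attention_to_servo_angle(attention, min_angle=0, max_angle=180):
    """Map a Neurosky attention value (0-100) to a servo angle.

    The linear scaling here is an assumption for illustration.
    """
    attention = max(0, min(100, attention))
    return min_angle + (max_angle - min_angle) * attention / 100.0

def attention_to_blur(attention, max_kernel=31):
    """Map attention to a Gaussian-blur kernel size: higher focus, less blur.

    OpenCV kernel sizes must be odd, so even results are bumped up by one.
    """
    attention = max(0, min(100, attention))
    k = int(round((100 - attention) / 100.0 * max_kernel))
    return k if k % 2 == 1 else k + 1
```

Full attention (100) maps to a 1x1 kernel (no blur), while zero attention maps to the maximum kernel, so the image sharpens as the participant concentrates.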

Below are the instructions to build your own Biofeedback Cinema system.

Happy experimenting!


Step 1: Supplies

Everything you need to build your own Biofeedback Cinema prototype is listed below.

  1. Neurosky Mindwave Mobile EEG Headset
  2. Raspberry Pi B+ (the B+ is preferable for its extra USB ports, but a Model B works too if you have a USB hub).
    1. Raspberry Pi Power Adapter or Battery Pack
    2. Wifi Dongle -or- Ethernet Connection (only necessary during setup)
    3. Bluetooth Dongle --> see wiki for compatible dongles
    4. SD Card (at least 8GBs) with NOOBS.
  3. Arduino (any board is fine; we use an Uno in this Instructable). Also note, you could just use the I/O on the Pi.
    1. Arduino Power Adapter or Battery Pack
    2. A-B USB Cable
  4. USB Webcam
  5. Mini Pan-Tilt Kit
  6. Monitor w/ HDMI Input --> or use VNC to remote control your Pi from your computer [tutorial here]
    1. HDMI Cable
  7. USB Keyboard & Mouse --> we recommend a Bluetooth keyboard and mouse to minimize the USB ports used.

Step 2: Setup Raspberry Pi

    1. Setup Hardware

    • Connect keyboard, mouse, bluetooth dongle, wifi dongle (or ethernet), webcam, monitor via HDMI cable, and power to your Raspberry Pi.

    2. Setup Operating System

    • Toggle the power on and your Pi should start up. Install the Raspbian OS, instructions here: http://www.raspberrypi.org/help/noobs-setup/
    • If it booted and Raspbian installed properly you should see the home desktop [Image above].

    HINTS:

    • If the desktop aspect ratio is off try rebooting your Raspberry Pi. If it is still off, look here to update the aspect ratio manually.
    • If you open up a text editor and your keyboard special characters are mismapped, look here to update your keyboard configuration.
    • Test your internet connection (you will need this to install libraries during setup). Look here for help getting wifi setup.

    Step 3: Connect Neurosky Headset

      1. Bluetooth Configuration

      Before the Pi can connect to the Neurosky we need to setup bluetooth:

      • On the desktop open up "LXTerminal" (referred to as Terminal from here on out). Run this command to update the package lists:

      $ sudo apt-get update

      • Install bluetooth with this command:

      $ sudo apt-get install bluetooth

      • Install handy desktop-toolbar bluetooth utility:

      $ sudo apt-get install -y bluetooth bluez-utils blueman

      • Reboot Pi from Terminal:

      $ sudo reboot

      2. Test Bluetooth Connection

      • Turn on Neurosky headset
      • From the Terminal scan for devices:

      $ hcitool scan

      • The Mindwave headset should be listed; take note of the MAC address of the headset [image above].
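If you would rather pull the MAC address out of the scan results programmatically than copy it by hand, the output of `hcitool scan` can be parsed with a few lines of Python. This is a hypothetical helper (it assumes the headset advertises a name containing "MindWave", which is what ours showed):

```python
import re

def find_mindwave_mac(scan_output):
    """Return the first MAC address on a `hcitool scan` line naming a MindWave.

    Returns None if no matching line is found.
    """
    for line in scan_output.splitlines():
        match = re.search(r'([0-9A-F]{2}(?::[0-9A-F]{2}){5})\s+MindWave',
                          line, re.IGNORECASE)
        if match:
            return match.group(1)
    return None

# On the Pi you could feed it live results, e.g.:
#   import subprocess
#   print(find_mindwave_mac(subprocess.check_output(['hcitool', 'scan']).decode()))
```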

      3. Install Neurosky Libraries

      Now we are ready to install the Neurosky Python libraries and start picking up its data stream with the library test script:

      • From the Terminal install the github utility:

      $ sudo apt-get install git-core

      • Clone github repository with Neurosky Python library:

      $ sudo git clone https://github.com/cttoronto/python-mindwave-mobile

      • We need to update the MindwaveMobileRawReader.py file with the MAC address of your headset. FYI: filenames are case sensitive.

      $ sudo nano /home/pi/python-mindwave-mobile/MindwaveMobileRawReader.py

      • Update the MAC address listed in the file. Ctrl-X to finish, Y to save, Enter to exit.
      • Pair the Neurosky and the Pi and allow the auto-connect feature; if prompted for a PIN use "0000":

      $ sudo bluez-simple-agent hci0 XX:XX:XX:XX:XX:XX

      $ sudo bluez-test-device trusted XX:XX:XX:XX:XX:XX yes

      • Install Python Bluetooth library:

      $ sudo apt-get install python-bluez

      • Run the library test script to ensure the Pi is able to preview the datastream. You should see the data streaming [image above]:

      $ sudo python /home/pi/python-mindwave-mobile/read_mindwave_mobile.py

      Step 4: Connect USB Webcam w/ OpenCV

        1. Install OpenCV

        • From the Terminal:

        $ sudo apt-get install libopencv-dev python-opencv

        • When finished, continue:

        $ sudo apt-get -f install

        • For good measure:

        $ sudo apt-get install libopencv-dev python-opencv

        • Test installation by attempting to import the library:

        $ python

        >>> import cv2

        2. Test OpenCV in Python with USB Webcam

        • On the desktop open "IDLE" (do not open IDLE3!)
        • From the File menu select New Window. Copy our Cv-Blur-Test script into the new window and save. Script available here: https://github.com/PrivateHQ/biofeedback-cinema/bl...
        • From the Run menu Select Run Module (or Press F5). It may take a few seconds to get going, but you should see a small frame appear with your live webcam feed, and the video should be blurry. Congratulations, OpenCV was installed and is working successfully with your webcam [Image above].

        Step 5: Connect Arduino

          1. Download Arduino IDE

          • From the Terminal:

          $ sudo apt-get install arduino

          2. Connect Arduino & Load Sketch

          • Plug the Arduino into the Pi with the A-B USB cable.
          • From the desktop start menu go to Electronics and open the Arduino IDE. Copy our arduino-serial-pi sketch into the IDE [link below]. This is a very basic sketch that moves servo motors based on input coming over the serial connection. In the last step, when we put everything together, a Python script will send data over the serial connection based on brainwave output.

          Arduino-serial-pi sketch online here: https://github.com/PrivateHQ/biofeedback-cinema

          • In the Arduino IDE, go to the Tools menu, select Serial Port and select the Arduino port listed, probably something like /dev/ttyACM0. Make a note of the port.
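On the Python side, each attention value gets written to that port. Here is a hypothetical sketch of the framing: the value is sent as newline-terminated ASCII, which an Arduino sketch can read back with `Serial.parseInt()`. The function names are ours for illustration; the project's real script is linked in Step 6, and pyserial is only needed on the Pi itself.

```python
def frame_attention(attention):
    """Encode an attention value (clamped to 0-100) as newline-terminated ASCII bytes."""
    attention = max(0, min(100, int(attention)))
    return ('%d\n' % attention).encode('ascii')

def send_attention(port_name, attention):
    """Write one framed value to the Arduino (requires pyserial on the Pi)."""
    import serial  # imported lazily so frame_attention stays dependency-free
    with serial.Serial(port_name, 9600, timeout=1) as port:
        port.write(frame_attention(attention))
```

A newline-delimited text protocol is easy to debug from the Arduino IDE's Serial Monitor, which is why we favor it over a raw binary byte here.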

          3. Disable Serial Console

          • Download and run a script to disable the serial console so the USB serial connection runs smoothly:

          $ wget https://github.com/wyolum/alamode/blob/master/bundles/alamode-setup.tar.gz?raw=true -O alamode-setup.tar.gz

          $ tar -xvzf alamode-setup.tar.gz

          $ cd alamode-setup

          $ sudo ./setup

          $ sudo reboot

          FYI:

          If you are using the B+, there might be enough I/O to support the servos (look here to set up and use the GPIO). However, I am interested in adding additional components for future brain-to-electronics experimentation, so setting up the initial prototype with an Arduino ensures plenty of breakout electronics possibilities.

          Step 6: Putting It All Together

          1. Final Python script

          • Before we can add the final python script to the "python-mindwave-mobile" folder, we need to change the folder permissions. From the Terminal:

          $ chmod a=rwx /home/pi/python-mindwave-mobile

          • Open IDLE and run our final Python script, available online here: https://github.com/PrivateHQ/biofeedback-cinema/. Make sure it is located in the python-mindwave-mobile folder. FYI: you will need to update our Python script with your actual Arduino port address.
          • When you run this script three things should happen: 1) your attention level is listed in the Python Shell, 2) a small frame appears showing the live webcam feed, with the blur changing based on the attention level, and 3) the motor(s) move as the attention level is passed to the Arduino via serial [Video above].
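Conceptually, the final script is a single loop tying those three outputs together. This stripped-down skeleton is our own illustration, not the real script: the hardware-facing calls are passed in as functions, so the control flow is easy to follow (and to test without a headset attached).

```python
def run_loop(read_attention, show_frame, send_serial, steps):
    """One iteration per attention reading: update the preview blur and
    forward the value to the Arduino. Returns the values processed."""
    history = []
    for _ in range(steps):
        attention = read_attention()   # next value from the Mindwave datastream
        show_frame(attention)          # OpenCV preview, blur keyed to attention
        send_serial(attention)         # pan/tilt servos via the Arduino
        history.append(attention)
    return history
```

On the Pi, `read_attention` would wrap the python-mindwave-mobile reader, `show_frame` the OpenCV blur-and-display step, and `send_serial` the serial write to the Arduino.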

          Step 7: Improvements & Development

          The Raspberry Pi has limited processing power, and struggles to run OpenCV functions smoothly. This is something I will continue to develop and improve upon. Additionally, I plan to include additional brainwave parameters (frequencies associated with eye blinks, etc) and camera functions (i.e. hue, saturation, brightness, etc) in future iterations.

