Introduction: Obstacle Detection System Via Haptic Feedback Vest

This project was completed in CS241/EE285: Embedded System Workshop at Stanford University under the supervision of Professor Phil Levis and TA Shane William Leonard.


The motivation for this project was to provide an aid that helps the visually impaired avoid obstacles. The white cane serves its purpose well: it guides a user along a wall or curb so he or she can walk straight, detects changes in elevation such as stairs or raised sections of sidewalk, and detects obstacles near the ground that might be trip hazards. While the white cane accomplishes these tasks very well, it cannot detect obstacles above waist level, so low-hanging tree branches, low ceilings, and the like remain potential threats. This inspired our project: we wanted to offer a solution that would alert a user to an obstacle while remaining inconspicuous. And thus, our idea for an obstacle detection system via haptic feedback vest was born.

Specifically, the design uses a pair of cameras and software that parse the user's surroundings into a depth map. The depth map is then used to vibrate the vest at a particular location to signify an obstacle in the corresponding part of the scene. For example, a vibration in the upper-left region of the vest indicates a hazard up high and to the left of the user.

This Instructable provides all the information needed to build this system. We hope that this design will inspire others to design for the visually impaired; perhaps future designs will even improve on ours. We are passionate about creating robust solutions that help people with disabilities live less hindered lives, and we hope you find this mission as rewarding as we did. Thanks for checking this out!

Step 1: Technical Overview

This section gives a technical overview of the system. The system's individual components are enumerated below; each is explained further in the sections that follow.

Raspberry Pi: This handheld computer will serve as the brains of the operation. It is the central arbiter, fielding inputs and actuating outputs. This subsystem also runs all the software needed in this application. Particularly, it converts the user's surroundings into a depth map and communicates over I2C to toggle the necessary vibrators.

Camera System: This subsystem uses a dual-camera setup, which is needed to recover depth from 2D images. Much of the time, issues in creating a disparity map stem from these cameras not being aligned! This is discussed further in a later section. The cameras hand the images they capture off to the Raspberry Pi over USB.

Disparity Map Algorithm: The images provided by the camera are used to create a disparity map, from which the software can detect what obstacles are where. This code utilizes many open-source computer vision libraries, and depending on the resolution of the images, is the bottleneck in the system.

Actuator System: Once the image has been discretized into blocks where obstacles may reside, the Pi actuates the vibrators. This is handled via an I2C driver, where one bus controls sixteen motors simultaneously. The software passes the driver a sixteen-element buffer, one vibration strength per motor.

Motor Driver with PMU: A custom PCB contains the I2C to vibration motor electrical interface, as well as the 5V power interface between the Raspberry Pi and a USB battery.

Mounting: All of the hardware (and the software on the Pi) is meant to be ambulatory. Thus, the Pi and circuit board are housed in a fanny pack on the user's hip, while a vest with vibrators attached to it alerts the user to obstacles.

Step 2: Raspberry Pi

The Raspberry Pi is the central arbiter of activity in our haptic vest system. It handles taking pictures with the cameras, running the image processing code to generate a depth map, running the algorithm to convert a depth map into a small array of haptic feedback, and sending this haptic information to the piezoelectric vibrators.

Subsequent sections will explain each of these components in detail, but to use our system, one needs only to download the code at [GIT REPO HERE], install OpenCV, and install Adafruit_PCA9685. You can then run the entire pipeline via main.py. If you want main.py to run at startup, simply add the line `python [path_to_repo]/main.py` to /etc/rc.local (before the final `exit 0` line).

(Note: This paragraph reflects the current state of the system, but we hope to have something more elegant/robust by the end of the quarter. Saving everything as intermediate files has made it easier to debug.) The software pipeline consists of four files: main.py, camera_controller.py, depth_map.py, and motor_controller.py. Main.py spawns threads for the other three, which communicate via files in specified directories. Camera_controller.py writes to a directory read by depth_map.py, and depth_map.py writes to a directory read by motor_controller.py. Each step in the pipeline only reads the most recent files in the directories.
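To make that structure concrete, here is a minimal sketch of what a main.py like ours could look like. The loop function names and directory paths are placeholders for illustration, not the exact names used in our repository.

```python
# Minimal sketch of the main.py structure: three worker threads that
# communicate through files in shared directories. Function names and
# paths below are placeholders, not the exact ones in our repository.
import threading

from camera_controller import capture_loop    # writes stereo pairs to IMAGE_DIR
from depth_map import depth_loop              # reads IMAGE_DIR, writes DEPTH_DIR
from motor_controller import motor_loop       # reads DEPTH_DIR, drives the vibrators

IMAGE_DIR = "/home/pi/vest/images"
DEPTH_DIR = "/home/pi/vest/depth"

threads = [
    threading.Thread(target=capture_loop, args=(IMAGE_DIR,)),
    threading.Thread(target=depth_loop, args=(IMAGE_DIR, DEPTH_DIR)),
    threading.Thread(target=motor_loop, args=(DEPTH_DIR,)),
]

for t in threads:
    t.start()
for t in threads:
    t.join()
```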

Step 3: Camera System

To get depth information about the surroundings, we will need 2 cameras that give us a stereo image pair. We used the 2 Logitech USB cameras described in the parts list. In order for the disparity map algorithm in the next step to work, both cameras have to be separated horizontally but aligned vertically. We can accomplish this by mounting them next to each other on a 3D printed holder such as the one shown in the first image.

NOTE: The disparity map algorithm is VERY sensitive to vertical misalignment! To avoid this problem, we removed the camera casings, which have lots of round edges and are difficult to stabilize (we had to disconnect and re-solder some connections on the printed camera boards to accomplish this). The resulting setup is shown in the second image. Both cameras should be plugged into the USB ports of the Raspberry Pi.

The code for image capture was written in Python using SimpleCV functions and can be found in the Git repository linked earlier. You can learn more in the SimpleCV documentation.
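For reference, a minimal SimpleCV capture sketch is shown below. It is not our exact camera_controller.py; the camera indices and output filenames are assumptions you may need to adjust for your setup.

```python
# Minimal SimpleCV capture sketch: grab one frame from each USB camera
# and save the stereo pair to disk.
from SimpleCV import Camera

# Camera indices 0 and 1 are assumptions; check which /dev/video* device
# is the left camera on your Pi and swap them if needed.
left_cam = Camera(0, prop_set={"width": 640, "height": 480})
right_cam = Camera(1, prop_set={"width": 640, "height": 480})

left_img = left_cam.getImage()
right_img = right_cam.getImage()

left_img.save("left.jpg")
right_img.save("right.jpg")
```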

Step 4: Disparity Map Algorithm

To create a disparity map, we used an effective SimpleCV Python routine called findDisparityMap, which we found on GitHub and edited slightly for our project. The edited code is included in our Git repository linked earlier. Running this code on high-resolution images can take a long time (we saw runtimes of ~6 seconds), which really limits the real-time usefulness of the project. A better approach is to limit the image resolution (we used 640x480), which reduces the processing time to roughly 100-400 ms.

The remaining variability in processing time is due to the “max number of disparities” parameter, which is an indicator of which depths will be represented on the resulting map. Using a higher number of disparities in the algorithm allows the depth map to identify closer objects in the depth map, but also increases the runtime of the algorithm. Using a lower number of disparities allows the depth map to represent objects that are farther away and decreases runtime. We used numdisparities = 100 in our algorithm as a middle-of-the-road approach, but this parameter can be tuned to the needs of any given implementation. If your disparity map looks like garbage, try 1) changing the numdisparities parameter or 2) switching the order of the images that are input to the algorithm as the "left" and the "right" images.
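If you would rather avoid the SimpleCV wrapper, the same block-matching idea can be expressed directly with OpenCV. The sketch below is an illustrative alternative, not our exact code, and assumes OpenCV 3.x; note that cv2.StereoBM requires numDisparities to be a multiple of 16, so 96 stands in here for the 100 we used with the SimpleCV version.

```python
# Rough OpenCV equivalent of the disparity step (our actual code uses the
# edited SimpleCV findDisparityMap from the repository).
import cv2

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16 for StereoBM.
stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = stereo.compute(left, right)

# Scale to 8-bit for viewing/saving; brighter pixels = closer objects.
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disp_vis)
```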

An example pair of stereo images taken with the webcams on the Raspberry Pi, and the corresponding depth map generated by our implementation, are shown in this section. In the depth map, the white segment on the left indicates that the person in the photos is relatively close to the cameras. The gray segments on the right and slightly to the left are picking up on the instruments in the background. The chair and hands are much closer to the cameras and don't show up in the map because they can only be identified with a higher numdisparities parameter. Note that this algorithm has a hard time picking up on untextured surfaces, which is why there is a black hole in the middle of the white segment in the map (that part of the shirt has relatively little visible texture).

Step 5: Actuator System

In this step, we want to convert our disparity map results from the cameras into haptic outputs that alert the wearer of nearby obstacles. This involves setting up the I2C driver, converting the disparity map into multiplexing inputs, and sending the control commands that turn the outputs associated with the converted disparity map on and off.

First, we want to set up the I2C drivers on the Raspberry Pi. I2C is a communication protocol that allows data exchange between microcontrollers and peripherals with very little extra wiring (for a more in-depth description, check out the following link: https://learn.sparkfun.com/tutorials/i2c). We begin by installing command-line utilities that help get the I2C interface working: type 'sudo apt-get install -y i2c-tools'. Then, to set up the I2C interface itself, turn it on by typing 'sudo raspi-config' and scrolling to '9 Advanced Options' -> 'A7 I2C' -> 'yes' to enable I2C -> 'yes' to automatically load the kernel module -> 'Ok' -> 'yes' to reboot.

After this, we can check that the interface is working by typing 'ls /dev/*i2c*' to which the Pi should respond with '/dev/i2c-1'. We can also check whether any devices are present with the command 'i2cdetect -y 1' which should display the addresses of peripherals detected. For troubleshooting, check out these links: (https://learn.adafruit.com/adafruits-raspberry-pi-lesson-4-gpio-setup/configuring-i2c, http://www.raspberrypi-spy.co.uk/2014/11/enabling-the-i2c-interface-on-the-raspberry-pi/).
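As an optional extra check from Python (assuming the python-smbus package is installed), the short sketch below, which is not part of our pipeline, verifies that the PWM chip answers on the bus. It assumes the chip sits at the PCA9685's default address of 0x40; if your board sets the address pins differently, adjust accordingly.

```python
# Quick sanity check mirroring what i2cdetect shows: try to read a byte
# from the PCA9685 at its default address. An IOError here means the chip
# was not found; re-check the wiring and the raspi-config I2C setting.
import smbus

bus = smbus.SMBus(1)    # I2C bus 1 on the Raspberry Pi
bus.read_byte(0x40)     # raises IOError if the chip is not present
print("PCA9685 detected at 0x40")
```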

Next, we want to convert the disparity/depth map into multiplexing inputs. Given a disparity map and 16 GPIOs, we decided to create a 4x4 grid that corresponds to a very coarse-grained representation of the scene. We analyze the disparity/depth map in blocks of size (width/numGPIOcols, height/numGPIOrows); in this case the number of GPIO rows and columns are both 4. Our algorithm takes the 80th-percentile pixel value of each block (the pixel value that is brighter than 80% of the other pixels in the block) and, based on how bright that pixel is, sets the haptic feedback to a strength from 0 (lowest) to 4 (highest). After calculating the haptic feedback strength for each block, we send it off to the motor controller code in 'motor_controller.py'. A minimal sketch of this conversion appears below.
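This sketch assumes an 8-bit depth map and a simple linear mapping from brightness to strength; the exact binning in our depth_map.py may differ.

```python
# Sketch of the depth-map-to-grid conversion described above.
import numpy as np

NUM_ROWS, NUM_COLS = 4, 4

def depth_to_strengths(depth_map):
    """Map a 2D 8-bit depth/disparity image to a 4x4 grid of strengths 0-4."""
    h, w = depth_map.shape
    bh, bw = h // NUM_ROWS, w // NUM_COLS
    strengths = np.zeros((NUM_ROWS, NUM_COLS), dtype=int)
    for r in range(NUM_ROWS):
        for c in range(NUM_COLS):
            block = depth_map[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            p80 = np.percentile(block, 80)  # brighter than 80% of the block
            # Brighter (closer) blocks get stronger vibration, 0..4.
            strengths[r, c] = min(4, int(p80 / 256.0 * 5))
    return strengths
```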

Within motor_controller.py, we use the Adafruit library code to loop through the motors that should be turned on and set each PWM value to the strength called for by the disparity map information. All of the motors that correspond to obstacles in the disparity map are then activated for ~1 second to alert the wearer of incoming obstacles.
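A simplified version of that logic, using the Adafruit_PCA9685 library, might look like the following; the strength-to-duty-cycle scaling is an illustrative assumption rather than our exact code.

```python
# Simplified motor control: drive 16 PWM channels from a 16-element buffer
# of strengths (0..4), hold for about a second, then turn everything off.
import time
import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685()   # default I2C address 0x40
pwm.set_pwm_freq(60)

def buzz(strengths, duration=1.0):
    """strengths: 16 values in 0..4, one per motor/PWM channel."""
    for channel, strength in enumerate(strengths):
        duty = int(strength / 4.0 * 4095)   # scale to the 12-bit PWM range
        pwm.set_pwm(channel, 0, duty)
    time.sleep(duration)                    # vibrate for ~1 second
    for channel in range(16):
        pwm.set_pwm(channel, 0, 0)          # turn all motors back off
```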

Step 6: Motor Driver Circuit Board With Power Management

In this step, we created a PCB to drive the piezoelectric vibrators and serve as the interface between the system and the battery. Since the vibrators demand significant current (~60mA each), we decided to use a PWM driver chip to control a bank of MOSFETs, which not only powers the vibrators sufficiently but also provides more I/O than is natively available on the Raspberry Pi. The PCB contains a PWM driver chip with an I2C interface and 16 copies of a MOSFET-based piezoelectric vibrator driver circuit. A pair of USB Type B connectors receives a 5V supply from a large USB battery bank; there are two USB-B connectors in case the system exceeds the current limitations of one USB-B power line.

We used CadSoft Eagle to create our PCB, and we have made the schematic and board files available. Simply populate the board with the headers and associated components, using the Raspberry Pi GPIO ribbon cable to connect to the Raspberry Pi and the 34-pin twisted-pair ribbon cable on the outputs. Populate C3 with a 220uF capacitor, since we found that smaller capacitor values affect image-capture speed. Jumper the "3.3V" side of the 3-pin jumper near the PWM IC, and the "PWR" jumper near the GPIO ribbon cable.

If you would like to try your hand at creating the circuit another way, download Eagle to view the schematic.

The schematic connects the I2C-controlled PWM chip (PCA9685) to pins 3 and 5 of the Raspberry Pi. The 5V pin on the GPIO header can be used to power the entire Raspberry Pi board, so it is attached to the 5V supplied by the two USB-B connectors. The Raspberry Pi provides a 3.3V supply once it is powered, which is used to power the PWM chip. Each output of the PWM chip is connected to a small MOSFET driver circuit containing a 10 kOhm pulldown resistor, a 2N7000 MOSFET, and a diode to protect the circuit against inductive kickback from the vibration motor.

Step 7: Mounting on the Vest

In this section, we detail the final electrical/mechanical assembly of the vest.

In our system, we used a simple, bright neon traffic vest.

Sixteen of the pairs of wires in the ribbon cable (all except the last, purple, pair) are numbered to indicate the PWM channel associated with that twisted pair on the motor driver board. Use an X-Acto knife to split off each twisted pair, then solder the 16 vibration motors to the 16 twisted pairs associated with the PWM channels. Reverse the vest and affix the vibration motors to the front of the vest in a 4x4 pattern in the abdominal area, starting with the motor paired to PWM channel 0 in the top left and proceeding horizontally along each row until the motor attached to PWM channel 15 is in the bottom right. Place the two boards (the motor driver board and the Raspberry Pi, attached with the Raspberry Pi GPIO ribbon cable) in a fanny pack or other convenient carrying device, and use a USB-B cable to connect to the USB battery bank. Finally, cinch the vest tightly closed (we used duct tape).

Affix the camera system to the front abdominal area of the vest as well, level with the ground.

We recommend first testing the device before putting on a blindfold or giving it to a visually impaired person.
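One easy way to test is a channel sweep: pulse each PWM channel in turn and confirm that the motor you feel matches the expected position in the 4x4 grid. The script below is a hypothetical helper along those lines, not part of our repository, and again uses the Adafruit_PCA9685 library.

```python
# Pulse each PWM channel in sequence so you can confirm the 4x4 motor
# layout on the vest matches the channel numbering on the driver board.
import time
import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685()
pwm.set_pwm_freq(60)

for channel in range(16):
    print("Pulsing channel %d" % channel)
    pwm.set_pwm(channel, 0, 3000)   # moderately strong pulse
    time.sleep(0.5)
    pwm.set_pwm(channel, 0, 0)      # off before moving to the next motor
```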

Step 8: Final Thoughts

Participated in the Epilog Contest 8 and the First Time Authors Contest 2016.