Introduction: Haptic Feedback Vest for Obstacle Detection

This project was completed in CS241/EE285: Embedded System Workshop at Stanford University under the supervision of Professor Phil Levis and TA Shane William Leonard.


The motivation for this project was to provide an aid for the visually impaired in avoiding obstacles. Generally, a white cane serves its purpose well: it guides a user along a wall or curb so he or she can walk straight, detects changes in elevation such as stairs or raised sections of sidewalk, and detects obstacles near the ground that might be trip hazards. While a white cane accomplishes these tasks very well, it is unable to detect obstacles at a distance or above waist level. As such, low-hanging tree branches, vehicles, etc. are potential threats. This inspired our project - we wanted to offer a solution that would alert a user to an obstacle while remaining inconspicuous. And thus, our idea for an obstacle detection system via haptic feedback vest was born.

Specifically, the design uses cameras and software to parse the user's surroundings into a depth map. This depth map is then used to vibrate the vest at a particular location to signify an obstacle in that location. For example, a vibration in the upper left region of the vest would indicate a hazard in the upper-body/leftward side of the surroundings. This Instructable provides all the information needed to build the system. We hope that this design will inspire others to design for the visually impaired - perhaps future designs will even improve on ours. We are passionate about creating robust solutions that help people with disabilities live less hindered lives. We hope you find this mission as rewarding as we did. Thanks for checking this out!

Step 1: Technical Overview

The system's individual components are enumerated below. Each will be further explained in the following sections. All parts that we purchased are listed under the “Parts List” spreadsheet in the attached “Hardware” ZIP file.

Setting Up the Raspberry Pi and Related Libraries/Drivers: This small single-board computer serves as the brains of the operation. It is the central arbiter, fielding inputs and actuating outputs. This subsystem also runs all the software needed in this application. In particular, it converts the user's surroundings into a depth map and communicates over I2C to toggle the necessary vibration motors.

Camera System: This subsystem uses a dual-camera setup, which is needed in order to recover depth from 2D images. Much of the time, issues in creating a depth map stem from these cameras not being aligned! This will be discussed more in a forthcoming section. The cameras hand off the images they take to the Raspberry Pi via USB cables.

Motor Driver with PMU: A custom PCB contains the vibration motor electrical interface, as well as the 5V power interface between the Raspberry Pi and a USB battery.

Putting Together the Vest: All of the hardware (and the software on the Pi) is meant to be worn and carried by the user. Thus, the Pi and circuit board are housed in a fanny pack on the user's hip, while a vest with vibration motors attached to it alerts the user to obstacles.

Software pipeline: We’ve set up a bash script (and optional daemon) that will run the entire pipeline of image capturing all the way to haptic feedback. You can download the necessary files and dependencies from our git repository.

Parameter Tuning for Disparity Map Algorithm: The images provided by the camera are used to create a disparity map (also referred to as a depth map), from which the software can detect what obstacles are where. This code utilizes many open-source computer vision libraries, and is the performance bottleneck in the system.

Parameter Tuning for Actuator System: The depth map is discretized into sections that correspond to a 4x4 grid of vibrators, which are controlled by the Pi and the custom PCB. This connection is handled via an I2C driver, where one bus can then control sixteen motors simultaneously.

Step 2: Setting Up the Raspberry Pi and Related Libraries/Drivers

The Raspberry Pi is the central arbiter of activity in our haptic vest system. It handles the following:

  • taking pictures with the USB webcams
  • running the image processing code to generate a depth map
  • running the algorithm to convert a depth map into a small array of haptic feedback
  • sending this haptic information to the vibration motors

We’ll cover all these things in detail later, but for now you need to set up the Raspberry Pi. You can find a tutorial on how to set up the Raspberry Pi 3 here and a tutorial on how to ssh into the Pi here. Loading the Raspberry Pi operating system will require the microSD card mentioned in the list of parts. We have tested our system on a Raspberry Pi running Raspbian. In theory any operating system should work, but you may not be able to follow our directions exactly if you want to run on another Linux distribution.


Download software libraries

Now that you have a Raspberry Pi and can connect to it, you need to download some software libraries onto the Pi. First make sure that you have python 2 and pip installed (you can download both of these using apt-get). Using pip, you should now install pygame, numpy, OpenCV (we used version 2.4.13), SimpleCV (we used version 1.3), and Adafruit_PCA9685.

For example, to install numpy, type 'sudo pip install numpy'.

Set up I2C driver

Now we would like to set up the I2C drivers for the Raspberry Pi. I2C is a communication protocol that allows data exchange between microcontrollers and peripherals with very little extraneous wiring (for a more in-depth description, check out this link). We begin by downloading command-line utility programs that will help get the I2C interface working by typing:

'sudo apt-get install -y i2c-tools'.

Then to set up the I2C interface itself, we first turn it on by typing:

'sudo raspi-config'

and then scrolling to:

'9 Advanced Options' -> 'A7 I2C' -> 'yes' to enable I2C -> 'yes' to automatically load the kernel module -> 'yes' to reboot.

If you suspect there are problems with the I2C driver, try the following: check that the interface is working by typing 'ls /dev/*i2c*' to which the Pi should respond with '/dev/i2c-1'. Then check whether any devices are present with the command 'i2cdetect -y 1' which should display the addresses of peripherals detected. If these do not return what you would expect, you can troubleshoot further by checking out this Adafruit link and this Raspberry Pi link.
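
Once the motor driver board from Step 4 is attached, you can also sanity-check the I2C setup from Python using the Adafruit_PCA9685 library installed earlier. The snippet below is just a sketch, not part of our repository; it assumes the PCA9685 sits at its default address 0x40 on I2C bus 1.

```python
# Minimal I2C sanity check (a sketch; assumes the Step 4 board is connected
# and the PCA9685 is at its default address 0x40 on I2C bus 1).
import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685(address=0x40, busnum=1)  # raises IOError if the chip is unreachable
pwm.set_pwm_freq(60)  # 60 Hz is a reasonable PWM frequency for vibration motors
print("PCA9685 found at 0x40 on I2C bus 1")
```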

Step 3: Camera System Setup

To sense our surroundings, we will need 2 cameras that give us a stereo image pair. From the stereo image pair, we will be able to extract depth information and detect obstacles. For this step, we used 2 USB cameras (Logitech C270 webcams) mentioned in the list of parts.

We then need to ensure that the cameras are kept horizontally separated and vertically aligned in a consistent way. In order to do this, we removed the black camera casings, which have lots of round edges and are thus difficult to stabilize (you may have to disconnect and re-solder some connections on the printed camera boards). We were then able to mount the bare-bones cameras onto a 3D-printed holder, shown in the second image. The SolidWorks and .STL files for the camera holder we designed for our webcams are in the attached ZIP file “Hardware.zip”. Other camera choices would also work (for example, if you want to increase alignment stability), as long as each camera has its own USB output.

Now that you have mounted the cameras, you can plug them into the Raspberry Pi USB ports. Make sure to plug in the camera on the left side of your mounted system first (left if you are facing the front of the cameras). The Raspberry Pi treats the camera that is plugged in first as Camera 1 and the other as Camera 2. Switching this order could cause a corrupted depth map to be generated (discussed more in Step 6). If you ever notice that one of the cameras isn't on, you can fix it by stopping the program, unplugging and replugging both cameras, and restarting the program.
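
To confirm that the cameras are enumerated in the order you expect, you can grab a frame from each one. The snippet below is a rough sketch (not from our repository); it assumes the left camera shows up as device index 0 and the right as index 1, which is what we would expect when the left camera is plugged in first. Your indices may differ.

```python
# Rough check of camera ordering (assumes left camera = index 0, right = index 1).
import cv2

left_cam = cv2.VideoCapture(0)   # camera plugged in first (left)
right_cam = cv2.VideoCapture(1)  # camera plugged in second (right)

ok_l, left_img = left_cam.read()
ok_r, right_img = right_cam.read()
if not (ok_l and ok_r):
    raise RuntimeError("Could not read from both cameras -- try replugging them")

cv2.imwrite("left.jpg", left_img)    # hypothetical filenames for inspection
cv2.imwrite("right.jpg", right_img)
left_cam.release()
right_cam.release()
```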

Important Note!

The depth map algorithm is VERY sensitive to vertical misalignment!

Try taking a stereo image pair for calibration and make sure that you see only horizontal differences and NO vertical differences between them. See the backpack images above for an example of an ideal stereo image pair (images taken from this dataset). If you do see any misalignment, either in rotation or displacement, adjust the cameras until pairs of images appear to have only a larger horizontal shift for close objects and little to no shift for far objects. We did this by putting small pieces of tape and paper behind one of our mounted cameras until the images we captured were aligned, and then hot gluing the cameras into place.
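
One quick way to eyeball vertical alignment is to place the two captured images side by side with horizontal guide lines drawn across both; the same feature should sit on the same line in each half. This is just a helper sketch we suggest, assuming you saved a pair as left.jpg and right.jpg (hypothetical filenames from the previous step); it is not part of our repository.

```python
# Draw horizontal guide lines across a side-by-side stereo pair to spot
# vertical misalignment by eye. Filenames are hypothetical.
import cv2
import numpy as np

left = cv2.imread("left.jpg")
right = cv2.imread("right.jpg")
pair = np.hstack((left, right))        # place the two images side by side

for y in range(0, pair.shape[0], 40):  # a guide line every 40 pixels
    cv2.line(pair, (0, y), (pair.shape[1] - 1, y), (0, 255, 0), 1)

cv2.imwrite("alignment_check.jpg", pair)
# If a feature sits on different green lines in the two halves,
# shim/adjust the cameras and repeat until only horizontal shifts remain.
```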

Step 4: Motor Driver Circuit Board With Power Management

Next, you need to create a circuit to 1) drive the vibration motors and 2) serve as the interface between the system and the battery pack. We designed a handy PCB that lays out this circuit. If you would like to design it yourself, see the subsection below titled “Detailed Explanation”.

How to assemble the PCB

  1. Manufacture the PCB. We used CadSoft Eagle to create our schematic, board, and Gerber files, which can be found in the attached ZIP file named “Hardware”.
  2. Populate the board with the headers and associated components according to the schematic (populate C3 with a ~220uF capacitor if you find that the image capture code is timing out).
  3. Use the black Raspberry Pi GPIO ribbon cable to connect the Raspberry Pi GPIO headers on the right to the 40-pin PCB header.
  4. Attach the rainbow 34-pin twisted pair ribbon cable (from the list of parts) to the 34-pin output header on the PCB.
  5. Jumper the "3.3V" side of the 3-pin jumper in the middle of the PCB (JP4), and the "PWR" jumper near the GPIO ribbon cable (JP5).


Detailed Explanation

Since we needed to control sixteen vibration motors with limited GPIO pins, we decided to use a PWM driver chip (the PCA9685) as a multiplexer. This allows us to use the I2C pins on the Raspberry Pi to drive 16 motors simultaneously (see the figure in the first image). However, since these motors demand significant current (~60 mA), which is more than the PWM pins are able to provide, we need to create a motor driver circuit.

The PCB contains this PWM driver chip with I2C interface and 16 copies of a MOSFET-based vibration motor driver circuit (driver circuit seen in the second image). A USB Type B connector is included to receive a 5V supply from the USB battery pack (from the list of parts). There are two available USB-B connectors on the PCB in case the system exceeds the current limitations of one USB-B power line.

If you would like to try your hand at creating the circuit another way, such as breadboarding it, download Eagle to view the schematic. The schematic involves connecting the I2C-controlled PWM chip (PCA9685) to pins 3 and 5 of the Raspberry Pi. The 5V pin on the GPIO header can be used to power the entire Raspberry Pi board, so it is attached to the 5V supplied by the two USB-B connectors. The Raspberry Pi will provide a 3.3V power source once it is powered, which can be used to power the PWM chip. Each of the outputs of the PWM chip is connected to a small MOSFET driver circuit containing a 10 kΩ pulldown resistor, a 2N7000 MOSFET, and a diode to protect the circuit against inductive kickback from the vibration motors.
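
Once the board is populated and connected, you may want to buzz a single channel before wiring up all sixteen motors. The snippet below is a sketch under the assumption that a motor is attached to PWM channel 0 and the board uses the default PCA9685 address 0x40; it is not part of our repository.

```python
# Buzz the motor on PWM channel 0 for one second (sketch; assumes default
# PCA9685 address 0x40 and a motor wired to channel 0).
import time
import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685(address=0x40, busnum=1)
pwm.set_pwm_freq(60)          # PWM frequency in Hz

pwm.set_pwm(0, 0, 3000)       # channel 0: turn on at count 0, off at count 3000 (of 4096)
time.sleep(1.0)
pwm.set_pwm(0, 0, 0)          # fully off
```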

Step 5: Putting Together the Vest

You can now assemble all the pieces of your vest.

1. Use an X-Acto knife to split off each twisted pair (there are 17 pairs) on the ends of the rainbow 34-pin twisted pair ribbon cable. 16 pairs of wires (all except the last purple one) have a number indicating the identity of the PWM channel associated with that twisted pair on the motor driver board. Solder 16 vibration motors to the 16 twisted pairs associated with PWM channels. The red wire on each vibration motor should be soldered and heat-shrunk to the pale brown wire (connected to 5V), and the black wire on the motor should be soldered and heat-shrunk to the colored wire of each twisted pair.

2. In our system, we used a simple, bright neon traffic vest. Flip the vest inside-out and affix the vibration motors to the inside of the vest in a 4x4 pattern (this will be placed over the abdominal area), starting with the motor paired to PWM channel 0 in the top left, then proceeding horizontally along each row until the motor attached to PWM channel 15 is in the bottom right. See the first image for reference.

3. Put the vest on backwards (see the second image). The 34-pin rainbow twisted pair ribbon cable should go from the motors in front of the user, over their shoulder (inside the vest), and connect to the motor driver PCB in the back.

4. Place the two boards (motor driver PCB and Raspberry Pi, attached with the black Raspberry Pi GPIO ribbon cable) in a fanny pack or other convenient carrying device, and use a USB-B cable to connect to the USB battery bank (see third image).

5. Cinch the vest closed so that the vibration of the motors can be easily felt by the user. We accomplished this by wrapping elastic bandages around the outside of the vest.

6. Affix the camera system on the user’s chest so it can capture images of what is in front of the user. Make sure both cameras are as horizontally parallel (level to ground) as possible. Alternatively, you can place the cameras over the user’s eyes (see fourth image).

Step 6: Running Haptic Vest Software on the Pi

Now that you have all the hardware set up and plugged in, you’re ready to run the haptic vest code! First, clone the code from our git repository onto your Pi. You can run the system manually by executing the run_haptic.sh script. If you want the haptic system to run automatically when the Raspberry Pi turns on, copy the haptic.service file into /etc/systemd/system (to do so, run

`sudo cp haptic.service /etc/systemd/system`

from the repository directory) and then run

`sudo systemctl enable haptic`.

The next time you start up your Pi, the entire system should execute automatically. If you ever want to start or stop the system manually, simply run

`sudo systemctl start haptic` or `sudo systemctl stop haptic`, respectively.

You can configure the system using the config.json file. The most useful configuration is the debug flag. When set to true, the system will output all of its image files (raw images and depth maps) to separate directories, with filenames based on the time that the images were taken. This flag is set to false by default to conserve disk space, but you may find it useful to turn it on if you run into issues. Examining the depth maps can help confirm that the cameras are vertically aligned and in-plane (see Step 7 for an example depth map). Subsequent sections will explain the other parameters in the config file, but a typical user will not need to change them.

Congratulations! At this point, you should have a functional haptic feedback vest system! If it is not working as expected, continue to Steps 7-8 to tune parameters. We also highly recommend testing the device yourself first, before putting on a blindfold or giving it to a visually impaired person.


Detailed Explanation

For those interested, here is a breakdown of what each file in the repository does. Understanding this isn’t necessary to use the system, but it may make debugging easier. You can see how these processes are connected in the image above.

  • main.py: This is the main controller of the whole system. It spawns the three other processes (listed below and shown in the image) that perform the actual work.
  • camera_controller.py: This process is in charge of continuously taking pictures with the two webcams. It sends these pictures through the raw_image_queue that the next process reads from. Note that the Raspberry Pi is unable to take images from two USB cameras at the exact same time, so there will be a slight delay between when the images are taken (about 20ms). This shouldn’t affect the performance of the system as long as you’re not moving too quickly.
  • depth_map.py: This process reads images from the raw_image_queue, generates a depth map, and outputs the depth map to the depth_map_queue, which is read by the next process. See step 7 for details about how this algorithm works.
  • motor_controller.py: This process reads depth map images from the depth_map_queue, decides on the vibration strengths to send to the vest, and vibrates the motors accordingly. See step 8 for details about how this algorithm works and how to configure different strengths, if you so desire.
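
If you're curious how these pieces might fit together, here is a stripped-down sketch of the producer/consumer structure described above. It is illustrative only - the real files in the repository do much more work - and the stub functions are placeholders we invented to make the sketch self-contained; only the process and queue names mirror the descriptions above.

```python
# Stripped-down sketch of the three-stage pipeline (illustrative only;
# the stub functions stand in for the real capture/depth/motor code).
import time
from multiprocessing import Process, Queue

def capture_stereo_pair():
    time.sleep(0.5)                              # placeholder for grabbing two webcam frames
    return ("left-image", "right-image")

def compute_depth_map(left, right):
    return "depth-map"                           # placeholder for the disparity algorithm

def drive_motors(dmap):
    pass                                         # placeholder for setting PWM levels over I2C

def camera_controller(raw_image_queue):
    while True:
        raw_image_queue.put(capture_stereo_pair())

def depth_map(raw_image_queue, depth_map_queue):
    while True:
        left, right = raw_image_queue.get()
        depth_map_queue.put(compute_depth_map(left, right))

def motor_controller(depth_map_queue):
    while True:
        drive_motors(depth_map_queue.get())

if __name__ == "__main__":
    raw_image_queue, depth_map_queue = Queue(), Queue()
    workers = [
        Process(target=camera_controller, args=(raw_image_queue,)),
        Process(target=depth_map, args=(raw_image_queue, depth_map_queue)),
        Process(target=motor_controller, args=(depth_map_queue,)),
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```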

Step 7: Parameter Tuning for Disparity Map Algorithm

If your haptic vest system is not performing as well as expected, the next two steps will explain how to tune parameters and debug the system.

The code in depth_map.py generates a disparity map (also referred to as a depth map) from our stereo images, which tells us how close objects are to the user wearing the cameras. To create a depth map, we used an effective SimpleCV Python algorithm called findDisparityMap, which can be found on GitHub. The disparity map algorithm looks at the horizontal displacement of objects, identifying pixels that have been significantly displaced as being close, and pixels that have been displaced very little as being very far.

Does your depth map look like trash? Fear not!! Try the following:

  1. switching the order of the images that are input to the algorithm as the "left" and the "right" images
  2. checking to see whether the cameras are vertically aligned
  3. changing the parameters discussed below

This algorithm involves a variety of parameters, and these can be tuned for best results according to the environment in which you capture your stereo images, the horizontal separation of your cameras, and the desired depth range. You should start by adjusting the resolution of the captured images, the numdisparities parameter, and the blockSize parameter.

Resolution - Reducing this will reduce the run-time of the algorithm. We limited the image resolution to around 640x480 in order to get runtimes of about 100-400ms.

Numdisparities - This represents the maximum number of disparities that the algorithm uses to determine which depths will be represented in the resulting depth map. Using a higher maximum number of disparities allows the depth map to identify closer objects, but also increases the runtime of the algorithm. Using a lower number of disparities allows the depth map to represent objects that are farther away and decreases runtime. We used numdisparities = 80 in our algorithm as a middle-of-the-road approach.

BlockSize - This parameter tunes how many pixels the algorithm examines at a time. Increasing this value allows the algorithm to identify larger objects in the camera view. If the pixel width of an object is larger than the value of blockSize, and the object is homogeneous in color, the algorithm will fail to identify it. For our project, we used blockSize = 41.
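
For reference, here is roughly what the disparity step looks like with these parameters, written against the OpenCV 3+ API (cv2.StereoBM_create). Our project actually used SimpleCV's findDisparityMap on OpenCV 2.4, and the input filenames below are hypothetical, so treat this as an illustrative sketch rather than the code in our repository.

```python
# Illustrative disparity-map sketch using the modern OpenCV API
# (our repository uses SimpleCV's findDisparityMap with OpenCV 2.4).
import cv2

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=80, blockSize=41)
disparity = stereo.compute(left, right)   # fixed-point disparities (scaled by 16)

# Normalize to 0-255 so closer objects appear brighter in the saved image.
depth_map = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth_map.jpg", depth_map)
```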

Not sure what your depth map is supposed to look like? An example stereo image taken with webcams on the Raspberry Pi and the corresponding depth map generated by our implementation is shown in images 1-4. In the depth maps (images 2 and 4), you can see that objects that are closer to the camera are brighter than objects that are farther away. You can also see that the person is closer to the camera in the second depth map than in the first depth map. The gray segments on the right and slightly to the left are picking up on the lab benches on each side. Note that this algorithm has a hard time picking up on untextured surfaces, which is why there is a black hole in the middle of the person’s shirt in both depth maps (that part of the shirt has relatively little visible texture).

Step 8: Parameter Tuning for Actuator System

In this step, we will discuss parameter tuning for the actuator system. The primary parameters associated with the actuator system are the percentile pixel level (discussed in the depth map discretization section below) and the threshold-pixel-values corresponding to each vibration level (discussed in the vibration strength section below). As is, the defaults for these configurations should be sufficient for a general use case. However, if you want to better understand or tune these parameters, we’ve provided some design insight into how these parameters affect the output.

Discretizing depth map

In the previous section, we discussed how the two images the cameras output can be converted to a depth map. This depth map is essentially a rough estimate of how far objects are from the user. The system now needs to turn this depth map into something that alerts the user to objects in close proximity.

The strategy for this step is to divide the depth map into a 4x4 grid of blocks (corresponding to the 4 rows of 4 vibration motors) and decide how close the objects in each block are. (The config file provides the option to change the number of rows and columns to something besides 4x4.) Currently, the setting of each of the sixteen blocks is based on the 80th-percentile pixel value of that block - essentially, the brightness level that 80% of the block's pixels fall below. If the 80th-percentile pixel value is very low, the depth map most likely isn't detecting a close object, and thus we set the vibration level to low or off. If the pixel value is high, there's probably a very close object, and thus we set the vibration level to high. The benefit of using this method is that it is very resistant to effects like shot noise (perhaps one super bright dot in a mostly dark block) yet still provides a solid estimate of whether an object has been detected within a given image block. We will refer to the output of this substep as the proximity value of a block. (Note that depth maps are returned in grayscale, and thus pixel values range between 0 and 255.)

If you feel that the 80th-percentile benchmark is too tolerant of noise, or that it ignores smaller objects within a block, try increasing the percentile to 85 or 90 in the config file. Increasing the percentile makes the algorithm less resistant to shot noise but more sensitive to smaller objects that may be close to the wearer. While tuning our configuration, we noticed that moving to the 90th percentile made the system more sensitive to tabletop objects. Note that this parameter can be tuned with some stock stereo images (or even your own images!). In a Python command line, convert your test images to a depth map and visually inspect whether your decision boundaries are reasonable.
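
As a concrete illustration of this discretization step, the sketch below computes one proximity value per block using numpy's percentile function on a grayscale depth map. The variable names and input filename are ours, not necessarily those used in motor_controller.py.

```python
# Sketch of the block-percentile discretization (names and filename are illustrative).
import cv2
import numpy as np

depth_map = cv2.imread("depth_map.jpg", cv2.IMREAD_GRAYSCALE)
rows, cols, percentile = 4, 4, 80

block_h = depth_map.shape[0] // rows
block_w = depth_map.shape[1] // cols

proximity = np.zeros((rows, cols))
for r in range(rows):
    for c in range(cols):
        block = depth_map[r * block_h:(r + 1) * block_h,
                          c * block_w:(c + 1) * block_w]
        # 80th-percentile brightness: robust to a few noisy bright pixels,
        # but still high when a sizeable close object fills part of the block.
        proximity[r, c] = np.percentile(block, percentile)

print(proximity)   # values in 0-255; higher means a closer obstacle in that block
```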


Mapping vibration levels to vibration strengths

This subsection deals with two tunable parameters: threshold_values and power_pwm. The former arbitrates which proximity values (from the previous subsection) map to which vibration level, while the latter defines how strong a particular vibration level should be.

The threshold_values control the vibration level of each block. Currently, there are four threshold_values, which define the boundaries of five buckets - OFF, 1, 2, 3, 4. For example, if we want to be alerted to an object far away, which has a lower proximity value, we would set the threshold_value associated with vibration level 1 to a correspondingly lower number. Then, we would feel a (slight) vibration even if the object is farther away. The same logic applies on the other end of the spectrum. If we want the highest level of vibration for objects that are only a medium distance away, we would set the threshold_value associated with vibration level 4 to a correspondingly lower number. Then, we would feel a significant vibration for objects that are a medium distance away.

However, these “digital” levels for each block aren’t able to drive the motors as-is. Thus, power_pwm converts each vibration level (which ranges from 0 to 4) to a duty cycle (saved as counts out of 4096). This duty cycle serves as the PWM signal that drives the motors (see the PCB section). Intuitively, a lower vibration level maps to a lower power_pwm. Tune power_pwm and threshold_values if you feel that certain proximity values or nearby obstacles don’t map to the physical vibrations a user would expect. In our case, the user felt that 4000 was too strong a power_pwm, so we mapped a vibration level of 4 to 3000 in power_pwm (3000/4096 ≈ 75% duty cycle). The same holds for the other end of the spectrum. If you want a vibration level of 1 to be more subtle, consider lowering its current power_pwm setting of 650 to something lower.
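
Putting the two parameters together, the sketch below shows how a block's proximity value might be bucketed by threshold_values and then mapped through power_pwm to a PCA9685 duty cycle. The 650 and 3000 counts come from the discussion above; the other numbers are illustrative assumptions, not a copy of our config file.

```python
# Sketch of mapping a proximity value -> vibration level -> PWM duty cycle.
# The threshold and middle power values below are illustrative; tune yours via config.json.
import Adafruit_PCA9685

threshold_values = [60, 110, 160, 210]   # pixel-value boundaries for levels 1-4 (assumed)
power_pwm = [0, 650, 1500, 2300, 3000]   # duty-cycle counts (out of 4096) for levels 0-4

def vibration_level(proximity_value):
    # Count how many thresholds the proximity value exceeds: 0 = off, 4 = strongest.
    return sum(proximity_value >= t for t in threshold_values)

pwm = Adafruit_PCA9685.PCA9685(address=0x40, busnum=1)
pwm.set_pwm_freq(60)

# Example: drive motor channel 0 for a block whose proximity value is 180.
level = vibration_level(180)             # -> level 3 with these thresholds
pwm.set_pwm(0, 0, power_pwm[level])      # 2300/4096, roughly a 56% duty cycle
```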

You can see an example of the 2 previous depth maps annotated to show the vibration levels of each 4x4 block in the 2 images in this section.


Thanks for reading, and we hope you enjoy trying out our design!