Introduction: Walking Guide to Enhance the Mobility of Visually Impaired People

The goal of this instructable is to develop a walking guide for disabled people, especially the visually impaired. It investigates how such a walking guide can be used effectively, so that the design requirements for its development can be formulated. To fulfill this goal, the instructable has the following specific objectives.

  • To design and implement a spectacle prototype to guide visually impaired people
  • To develop a walking guide that reduces collisions with obstructions for visually impaired people
  • To develop a method for detecting potholes on the road surface

Three distance measurement sensors (ultrasonic sensors) are used in the walking guide to detect obstacles in three directions: front, left, and right. In addition, the system detects potholes on the road surface using a fourth, ground-facing sensor and a convolutional neural network (CNN). The overall cost of the developed prototype is approximately $140, and it weighs about 360 g including all electronic components. The prototype is built from 3D printed parts, a Raspberry Pi, a Raspberry Pi camera, ultrasonic sensors, and other small components.

Step 1: Materials Needed

  • 3D Printed Parts
    1. 1 x 3D printed left temple
    2. 1 x 3D printed right temple
    3. 1 x 3D printed main frame
  • Electronics and Mechanical Parts
    1. 4 x Ultrasonic sensor (HC-SR04)
    2. 1 x Raspberry Pi B+
    3. 1 x Raspberry Pi camera
    4. 1 x Battery
    5. Wires
    6. Headphone
  • Tools
    1. Hot glue
    2. Rubber belt

Step 2: 3D Printed Parts

The spectacle prototype is modeled in SolidWorks, taking into account the dimensions of each electronic component. In the model, the front ultrasonic sensor is positioned to detect obstacles directly ahead; the left and right ultrasonic sensors are angled 45 degrees from the spectacle's center line to detect obstacles within the span of the user's shoulders and arms; and a fourth ultrasonic sensor faces the ground to detect potholes. The Raspberry Pi camera is positioned at the center of the spectacle. In addition, the right and left temples of the spectacle are designed to hold the Raspberry Pi and the battery, respectively. The SolidWorks model and the 3D printed parts are shown from different views.

We used a 3D printer to produce the spectacle model. The printer can build a prototype up to a maximum size of 34.2 x 50.5 x 68.8 cm (L x W x H). The spectacle is printed in polylactic acid (PLA) filament, which is easy to obtain and low in cost. All parts are produced in house, and assembly is straightforward. Printing the spectacle model requires approximately 254 g of PLA, including support material.

Step 3: Assembling the Components

All the components are assembled as follows.

  1. Insert the Raspberry Pi into the 3D printed right temple
  2. Insert the battery into the 3D printed left temple
  3. Insert the camera into the hole created for it at the front of the main frame
  4. Insert each ultrasonic sensor into its specified hole

Step 4: Hardware Connections

Each component is wired to the Raspberry Pi as follows: the trigger and echo pins of the front sensor are connected to GPIO8 and GPIO7, and the trigger and echo pins of the pothole detection sensor are connected to GPIO14 and GPIO15. The battery and headphone are connected to the Raspberry Pi's micro USB power input and audio jack, respectively.
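As a rough illustration of how readings from the wired HC-SR04 sensors become distances, the sketch below converts an echo pulse width to centimetres (the sensor's echo pin stays high for the sound's round-trip time, and the speed of sound is about 343 m/s at room temperature). The warning threshold is an assumption for illustration, and the actual pin triggering on the Raspberry Pi would use a GPIO library such as RPi.GPIO, which is only referenced in comments here.

```python
# Hedged sketch: converting an HC-SR04 echo pulse width to a distance.
# On the real device, the trigger pin (e.g. GPIO8 for the front sensor,
# per the wiring above) is pulsed high for 10 microseconds, and the time
# the echo pin (GPIO7) stays high is measured with a GPIO library.

SPEED_OF_SOUND_CM_S = 34300  # ~343 m/s at room temperature

def pulse_to_cm(pulse_width_s: float) -> float:
    """The echo pulse covers the round trip, so halve the travel distance."""
    return pulse_width_s * SPEED_OF_SOUND_CM_S / 2

def is_obstacle(distance_cm: float, threshold_cm: float = 100.0) -> bool:
    """Flag anything closer than the (assumed) warning threshold."""
    return distance_cm < threshold_cm

if __name__ == "__main__":
    # A 5.8 ms echo pulse corresponds to roughly 1 m:
    d = pulse_to_cm(0.0058)
    print(f"{d:.1f} cm, obstacle: {is_obstacle(d)}")  # 99.5 cm, obstacle: True
```

The same conversion applies to the ground-facing pothole sensor: a distance noticeably larger than the normal spectacle-to-ground baseline suggests a depression in the road surface.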

Step 5: User Prototype

A blind child wore the prototype and was happy to walk through the environment without colliding with obstacles. Overall, the system performed well when tested with visually impaired users.

Step 6: Conclusion and Future Plan

The main goal of this instructable is to develop a walking guide that helps visually impaired people navigate independently. The obstacle detection system indicates the presence of obstacles to the front, left, and right of the user, while the pothole detection system detects potholes on the road surface. The ultrasonic sensors and the Raspberry Pi camera capture the real-world environment around the walking guide. The distance between an obstacle and the user is calculated by analyzing the data from the ultrasonic sensors. The pothole detector is trained on pothole images using a convolutional neural network, and potholes are then detected by capturing a single image at a time. The resulting prototype weighs about 360 g including all electronic components. Users are notified of obstacles and potholes through audio signals delivered over the headphone.
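The notification step described above can be sketched as a small decision function: given the three measured obstacle distances, it decides which directions to announce. The 100 cm threshold and the message wording are assumptions for illustration, not details taken from the build; on the device, the returned string would be spoken through the headphone by a text-to-speech tool.

```python
# Hedged sketch of the obstacle notification logic: given the three
# measured distances (in cm), report which directions have an obstacle
# within a warning threshold. Threshold and phrasing are assumptions.

WARN_CM = 100.0  # assumed warning distance

def obstacle_message(front_cm: float, left_cm: float, right_cm: float) -> str:
    near = [name for name, d in
            (("front", front_cm), ("left", left_cm), ("right", right_cm))
            if d < WARN_CM]
    if not near:
        return "path clear"
    return "obstacle " + " and ".join(near)

if __name__ == "__main__":
    print(obstacle_message(80.0, 250.0, 40.0))  # obstacle front and right
```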

Based on the theoretical and experimental work carried out for this instructable, further research is recommended to improve the efficiency of the walking guide by addressing the following points.

  • The developed walking guide is slightly bulky because of the number of electronic components. For example, a Raspberry Pi is used even though not all of its functionality is needed. Developing an application-specific integrated circuit (ASIC) with only the required functionality could reduce the size, weight, and cost of the prototype
  • In real-world environments, visually impaired people face other critical hindrances such as humps on the road surface, staircases, rough road surfaces, and water on the road. The developed walking guide, however, only detects potholes. Extending the walking guide to handle these other hindrances is a promising direction for further research
  • The system can detect the presence of obstacles but cannot categorize them, which is essential for navigation by visually impaired people. Semantic pixel-wise segmentation of the surroundings could help categorize the obstacles in the environment.