Introduction: Autonomous Robot Performing Different Tasks


In this project, a dedicated algorithm is developed so that the robot can autonomously navigate the track and perform tasks such as line following, detecting an obstacle, and grabbing and delivering an object. In addition, robustness is considered: if the robot navigates the pathway multiple times, its performance should not degrade.

Components needed

  1. Raspberry Pi 3
  2. GP2Y0A21 sharp distance sensor
  3. Edge sensor
  4. RPi camera
  5. JGB37-545 motor with encoder and 1:10 gearing (max 600 RPM out)
  6. IMU
  7. Teensy 3.5
  8. Some wires
  9. Audio amplifier
  10. Robot chassis

Step 1: Mechanical Parts

All the parts can be downloaded from the link below:

3D printed parts

IR sensor mount


Basket

After assembling

Step 2: Calibration of Sensors

Before calibration

As all structural parts of the robot are 3D printed, the first thing is to assemble the robot. The next step is to connect to the RPi from the PC via WiFi. For that, the RPi is first connected via a LAN cable to get its IP address. After getting the IP address, a static IP is set for WLAN. This step is necessary so that the RPi can be connected via WiFi without checking the IP address again and again. For this, some changes are made in the dhcpcd.conf file of the RPi.

Go to the /etc directory (cd /etc) and open the file dhcpcd.conf using any editor. At the end, add these lines:

interface eth0
static ip_address=192.168.43.101  # IP you want to use
interface wlan0
static ip_address=192.168.43.100
static routers=192.168.43.1  # your default gateway
static domain_name_servers=192.168.43.1
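
After a reboot, the RPi should then be reachable on the fixed WLAN address, for example (assuming the default pi user; your username may differ):

sudo reboot
ssh pi@192.168.43.100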

For calibration

Once the RPi is connected via WiFi, the next step is to connect to the GUI; here, the Regbot GUI is used. To connect, enter in the robot connection the IP address of the RPi that you selected while setting the static IP. If it connects successfully, the indicator turns green.

  • IR sensor

The sensor used is the GP2Y0A21 Sharp distance sensor. Its detection range is approximately 10 cm to 80 cm, and an analog voltage signal indicates the distance. It has three pins: power, ground and output signal.

Here, two distance sensors are used, one for the front and one for the left. To calibrate them, objects are placed in front of both sensors and the readings are taken; once they give approximately correct values, the sensors are calibrated. The readings can be seen under the IR Distance option in the GUI; the values of D1 (right) and D2 (front) should be approximately the same.

Each sensor has an IR LED equipped with a lens, which emits a narrow light beam. After reflecting from the object, the beam is directed through a second lens onto a position-sensitive photo detector (PSD). The conductivity of the PSD depends on the position where the beam falls. This conductivity is converted to a voltage level, and once the voltage is digitized by an analog-to-digital converter, the distance can be calculated. The output of the distance sensors is inversely proportional to the distance: as the distance grows, the output decreases. Note that distances are given in meters rather than centimeters when the GUI is used. The calibrated readings of the IR sensors are given below:
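
In code, this inverse voltage-to-distance relation can be captured with a simple two-parameter model; a minimal sketch, assuming the constants a and b come from a two-point calibration against the GUI readings (both values here are illustrative, not the actual calibration result):

double irDistanceM(double volts)
{
    // d = a / (V - b): the output voltage falls as the distance grows.
    // a and b are assumed calibration constants, not measured values.
    const double a = 0.28;   // gain [V*m] (assumed)
    const double b = 0.10;   // sensor offset [V] (assumed)
    return a / (volts - b);  // distance in meters
}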

  • Line or Edge sensor

The edge sensor consists of eight BPW46 photodiodes with an opaque blind between them. The line sensor used is shown below:

For calibration, the robot is placed on the white line and on the dark surface. After placing the robot, IR light emitted from the LED bounces back from the surface underneath and is captured by the photodiodes. The current through a photodiode is proportional to the photons it receives. Black absorbs IR radiation, so a photodiode over the black surface receives fewer photons and produces less current than one over the white line: if the surface is white, the photodiode's current goes up, and vice versa. This current signal is then converted into a voltage signal, which allows the line to be detected. To check the values, the Edge option in the GUI is used. Once the values are calibrated, they are saved to flash.
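
As a minimal illustration of how the eight calibrated readings can be reduced to a single line position, an intensity-weighted centroid can be used (this is a sketch, not necessarily what the Regbot firmware does internally):

double linePosition(const int v[8])
{
    // Intensity-weighted centroid over the eight photodiode readings.
    long sum = 0, weighted = 0;
    for (int i = 0; i < 8; ++i) {
        sum += v[i];
        weighted += (long)v[i] * i;
    }
    // 0.0 = leftmost sensor, 7.0 = rightmost, 3.5 = line centered.
    return sum > 0 ? (double)weighted / sum : 3.5;
}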

Step 3: Line Following

The complete track is given below:

Once the line sensor is calibrated, the robot gets a threshold value that is most effective; afterwards, a command is sent to make the robot follow a line. Boolean variables are set to 1 or 0 depending on what the line sensor detects. To ensure that the robot successfully follows the white line without going off track, a low speed is selected. Here, the robot follows the left edge of the line.
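
In the mission-line style used later in Step 4, such a command might look as follows (the edgel token for left-edge following is an assumption, mirroring the edger token that appears there):

snprintf(lines[line++], MAX_LEN, "vel=0.25, acc=1, white=1, edgel=0: dist=2"); // follow the left edge of the white line slowly for 2 m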

To check robustness, the performance is tested over ten trials. The results are validated after every trial and all runs prove successful. The line following task depends heavily on the initial calibration. The most challenging part is that the smaller the radius of a curve, the less reliable the edge-following function is. The smoothness of the joints between the pieces of white tape also has a significant effect; better results are achieved after the circle is remade from smaller pieces of tape.

Step 4: Detect an Obstacle While Doing Line Following

The second task is to detect an obstacle while line following, react by avoiding it, and resume the path the robot would have taken on the other side of the obstacle. If no obstacle is placed in front of it, the robot follows the line without stopping anywhere.

An IR sensor at the front of the robot (IR 2) is used to detect an object ahead. A second IR sensor (IR 1), mounted on the left side of the robot, is used to sense whether the obstacle is still next to the robot or has been passed, so the robot knows when it can start to turn.

How object avoidance is implemented

1. The robot senses the obstacle with the second IR sensor (IR 2, front)

2. It turns 90° to the right

3. It goes backwards until the first IR sensor (IR 1, side) can see the obstacle

4. After the obstacle is in range, it goes forwards until the edge of the obstacle is reached and it is out of range again (the robot has just passed the obstacle)

5. The robot travels a further 0.2 m, which is the distance between the sensor mount and the robot's back, plus an additional tolerance

6. It turns 90° to the left and drives until the object appears within the 0.25 m range of the first IR sensor

7. It continues until the object leaves the same IR sensor's range, with the same tolerance as in step 5

8. It turns 90° to the left and goes straight until a valid line is found

9. It turns 90° to the right one last time and resumes line following (a sketch of steps 2-5 as mission lines is given after this list)
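
A sketch of how steps 2-5 could look as mission lines; the tr (turn radius), turn and ir1 tokens are assumptions extrapolated from the snippet in the next subsection, not verified syntax:

snprintf(lines[line++], MAX_LEN, "vel=0.3, tr=0.0: turn=-90"); // step 2: turn 90 deg right on the spot (assumed syntax)
snprintf(lines[line++], MAX_LEN, "vel=-0.2: ir1<0.25");        // step 3: reverse until the side sensor sees the obstacle
snprintf(lines[line++], MAX_LEN, "vel=0.2: ir1>0.25");         // step 4: forward until the obstacle is out of range
snprintf(lines[line++], MAX_LEN, "vel=0.2: dist=0.2");         // step 5: clear the robot's own length plus tolerance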

Line following

The next step is to enable the robot to simply follow the line if there is no obstacle, but start the described avoidance procedure if any object is found on the way. This is done with the goto command, labels and events:

snprintf(lines[line++], MAX_LEN, "vel=0.3, acc=2, white=1, edger=0: dist=1, ir2<0.1");
snprintf(lines[line++], MAX_LEN, "goto=1:last=8");
snprintf(lines[line++], MAX_LEN, "vel=0,event=2,goto=2:time=0.1");
snprintf(lines[line++], MAX_LEN, "label=1,event=1:time=0.1");
snprintf(lines[line++], MAX_LEN, "label=2");

The robot starts the line following with two stopping conditions: it has to stop if a distance of 1 m is reached, or if any obstacle is detected within a range of 0.1 m. The second line is executed once one of these two events is encountered. If the event is the IR range criterion, the last condition disables goto=1 (number 8 represents any event related to IR 2) and execution enters line 3, which stops the robot, sets the event 2 flag true and goes to the last line without entering line 4. If the stopping criterion is the distance condition, the last condition is ignored, so execution jumps to label=1 in line 4, sets the event 1 flag and finishes.

  • the event 1 flag is set if the distance is reached
  • the event 2 flag is set if an object is found by the IR sensor

Step 5: Recognize an Object and Deliver It

For object detection, a pink golf ball is used. The implementation includes coordinate transformations. The photos are captured by the RPi camera, and each frame is received in the format of cv::Mat.

The first step is pre-processing the image, for which the medianBlur OpenCV function is used. It implements a median filter, which smooths the image using a predefined (ksize × ksize) aperture. Afterwards, the original RGB image is converted to HSV using the cvtColor function, so that the pink colour can be detected easily. During operation the lighting may vary, which changes the detected colour of the ball. Instead of specifying an RGB range, it is better to use the HSV description for this purpose, as only the hue range has to be set to define pink, between red and blue. The saturation value defines the disparity from white, and the value channel defines how close the colour is to black.

After defining the range, a mask is applied to highlight only the object in the image, using the inRange function in OpenCV. The mask produces a black and white image, which is a good starting point for estimating the diameter and the origin of the detected circle. To remove unwanted noise (which appears as white dots on the black background), an opening morphological transformation is applied, which is an erosion followed by a dilation. Next, Gaussian blurring is used for smoothing.

Finally, the HoughCircles function is applied, which is designed for feature detection. It finds circles in a grayscale image using the Hough transform and returns their origins in pixel coordinates (the top left corner is the origin of the pixel coordinate system) and the radii of the detected circles in pixels. Several attempts are made with fitEllipse, but the Hough transform turns out to be more robust. The morphology step removes unwanted areas, preventing the Hough transform from detecting two circles, which would result in an erroneous distance and angle.
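
A minimal sketch of this pipeline in OpenCV C++; the HSV bounds for pink and the Hough parameters are assumptions that have to be tuned to the actual ball and lighting:

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec3f> detectBall(const cv::Mat& frame)
{
    cv::Mat blurred, hsv, mask;
    cv::medianBlur(frame, blurred, 5);              // median filter, 5x5 aperture
    cv::cvtColor(blurred, hsv, cv::COLOR_BGR2HSV);  // convert to HSV for a robust colour range
    // Pink range between red and blue (assumed bounds, tune for lighting)
    cv::inRange(hsv, cv::Scalar(140, 60, 60), cv::Scalar(175, 255, 255), mask);
    // Opening (erosion followed by dilation) removes white speckle noise
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);
    cv::GaussianBlur(mask, mask, cv::Size(9, 9), 2.0);  // smooth edges before the Hough transform
    std::vector<cv::Vec3f> circles;                     // each circle: (x, y, r) in pixels
    cv::HoughCircles(mask, circles, cv::HOUGH_GRADIENT, 1,
                     mask.rows / 4,    // minimum distance between circle centres
                     100, 20, 5, 200); // Canny threshold, accumulator threshold, radius bounds
    return circles;
}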

Original image

Median blurring

Mask

After opening morphology

Gaussian blurring

Circle Hough transformation

Coordinate transformation

The output of HoughCircles is a structure containing the origin and the radius of the detected circle in pixel coordinates. These values have to be transformed into the robot coordinate system to be able to perform the maneuvering. Knowledge of the ball is applied to determine the distance between the ball and the camera; the notation used in the upcoming equations is defined below. Using the known diameter of the ball, we can write

d = f·d_o / (2·r_i·(w_s/w_i))

Where,

d is the distance between the camera and the object

f is the focal length of the camera in millimeters

d_o is the diameter of the ball in millimeters

r_i is the radius of the detected object in pixels

w_s is the width of the optical sensor in millimeters

w_i is the width of the image in pixels
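
For illustration, assuming f = 3.04 mm and w_s = 3.68 mm (plausible RPi camera v2 values), a 640 px wide image and a standard 42.7 mm golf ball, a detected radius of r_i = 20 px gives d = 3.04·42.7 / (2·20·(3.68/640)) ≈ 564 mm.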

It is essential to know that the angle of view of the bare sensor is 62.2 × 48.8 degrees. Once the distance in millimeters between the camera and the object is calculated in the camera frame, the relative angle can be found. A limitation of 60 degrees on the horizontal field of view is taken into consideration. There are two possible cases: when the object is to the left of the camera axis and when it is to the right of it. For the two cases we can write

α = arctan(x_d/f) [rad] ∀ x_i > w_i/2

where

x_d = (x_i − 0.5·w_i)·w_s/w_i is the distance between the camera plane and the object plane in millimeters

α is the angle in radians

x_i is the x coordinate of the object in pixels

Second case:

x_d = (0.5·w_i − x_i)·w_s/w_i ∀ x_i ≤ w_i/2

The above equations transform the object from pixel coordinates to the camera frame. Unfortunately, the camera is mounted approximately dev = 200 millimeters away from the origin of the robot coordinate system in the z-direction. This causes a deviation which has to be compensated for by using the following equations:

d_true = √(dev² + d² − 2·dev·d·cos(π−α))

α_true = arcsin(sin(π−α)·d/d_true)
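
A minimal sketch of these transformations in code; the focal length and sensor width are assumed RPi camera v2 values, and a signed x_d collapses the left/right cases into one expression:

#include <cmath>

struct BallPosition { double dist_mm; double angle_rad; };

BallPosition pixelToRobot(double x_i, double r_i)
{
    const double PI  = std::acos(-1.0);
    const double f   = 3.04;   // focal length [mm] (assumed, RPi camera v2)
    const double w_s = 3.68;   // sensor width [mm] (assumed)
    const double w_i = 640.0;  // image width [px] (assumed capture mode)
    const double d_o = 42.7;   // golf ball diameter [mm]
    const double dev = 200.0;  // camera offset from the robot origin [mm]

    // Distance from the known ball diameter
    double d = (f * d_o) / (2.0 * r_i * (w_s / w_i));
    // Signed offset in the sensor plane; the sign distinguishes left from right
    double x_d = (x_i - 0.5 * w_i) * w_s / w_i;
    double alpha = std::atan(x_d / f);
    // Compensate for the camera being mounted 'dev' mm from the origin
    double d_true = std::sqrt(dev * dev + d * d
                              - 2.0 * dev * d * std::cos(PI - alpha));
    double a_true = std::asin(std::sin(PI - alpha) * d / d_true);
    return { d_true, a_true };
}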

After achieving an accurate estimate of the object position, a mission is built to scan a given area for the object and deliver it to a target position. The robot takes an image at each observation point (red), and as soon as the object is detected, it grasps the ball with the help of an arm mounted on a servo. Once the ball is in the basket, the robot finds its way back to the track and delivers it to the target. If none of the three detection attempts succeeds, the robot still drives to the target spot and indicates with audio that no ball has been detected.

Step 6: Follow Another Robot

Here, the idea is to create a circular track where the robot has to make a full round while handling static and dynamic objects during line following. More specifically, it must be able to stop for static obstacles, and to adapt its speed to dynamic objects such as another robot driving around the same circular line with varying speed.

Simple stop/go control is implemented based on the IR sensor's readings. Whenever the other robot is within a range of 0.2 m, the main robot stops, waits for 2 seconds and then continues following the line at a speed of 0.4 m/s. The aim is to make one full round; however, since the exit from the roundabout is on the opposite side from the entrance, the robot does 1.5 rounds in total. To track the number of rounds, the robot counts how many times it crosses the only cross line, which is placed perpendicular to the roundabout at the inside edge of the main circular line, as a continuation of the exit line towards the center of the circle. For the 1.5 rounds, it has to leave right after the second crossing is encountered. A sketch of the stop/go behaviour in mission-line form is given below.
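
The sketch reuses the mission-line style from Step 4 (the edgel token is an assumption; the vel, ir2, time and goto tokens appear in the earlier snippet):

snprintf(lines[line++], MAX_LEN, "label=3");
snprintf(lines[line++], MAX_LEN, "vel=0.4, acc=2, white=1, edgel=0: ir2<0.2"); // follow the line until the other robot is within 0.2 m
snprintf(lines[line++], MAX_LEN, "vel=0: time=2");                             // stop and wait 2 seconds
snprintf(lines[line++], MAX_LEN, "goto=3: time=0.1");                          // resume line following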

Before going inside the circle

  • Turn 180° using the head command, which turns the robot precisely enough
  • Follow the line until the main line is crossed
  • Turn 90° to the left
  • Wait until the other robot has passed, which is captured by the IR sensor
  • When the IR sensor is triggered by the other robot, wait an additional 4 seconds to make sure that the track is clear
  • Follow the line until it crosses the roundabout, and turn 90° to the left after it is reached

After the robot arrives at the circle, it follows the inner edge of the tape for a short time, then continues the line following until the cross line is reached. When that happens, it sets an event flag, after which execution switches to another state where the IR sensor is continuously checked to see if the other vehicle is in range, and the event flag is monitored to see if it is triggered, meaning the exit line has been crossed ("Increment cross counter" box). From this state, execution goes back to the previous state if either described condition is triggered. It only enters a new state when the "Increment cross counter" box is reached for the second time; in that case, it jumps to the "Leave track" box. Once the 1.5 rounds are completed, it turns 90° to the left and follows the line until it ends.
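
The cross-counting logic can be summarised as a small state machine; a minimal sketch with illustrative names, not the actual mission code:

enum class State { FollowLine, StopWait, LeaveTrack };

// One update step: 'otherRobotNear' is true while the IR sensor reads
// less than 0.2 m, and 'crossLineEvent' fires when the exit line is crossed.
void roundaboutStep(State& s, int& crossCount,
                    bool otherRobotNear, bool crossLineEvent)
{
    switch (s) {
    case State::FollowLine:
        if (crossLineEvent && ++crossCount == 2)
            s = State::LeaveTrack;   // second crossing: 1.5 rounds completed
        else if (otherRobotNear)
            s = State::StopWait;     // other robot inside the 0.2 m range
        break;
    case State::StopWait:
        if (!otherRobotNear)         // the 2 s wait is handled by the mission timer
            s = State::FollowLine;
        break;
    case State::LeaveTrack:          // turn 90 deg left and follow the exit line
        break;
    }
}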

Step 7: Result

Video


Participated in the
Robots Contest