Introduction: Raspberry Pi - Autonomous Mars Rover With OpenCV Object Tracking

Powered by a Raspberry Pi 3, OpenCV object recognition, ultrasonic sensors, and geared DC motors, this rover can track any object it is trained for and move over any terrain.

Step 1: Introduction

In this Instructable, we are going to build an autonomous Mars rover that can recognize objects and track them using OpenCV running on a Raspberry Pi 3, with the option of using either a USB webcam or the original Raspberry Pi camera. It is also equipped with an ultrasonic sensor mounted on a servo so it can find its way in dark environments where the camera wouldn't work. Signals from the Pi are sent to the L293D motor driver IC, which drives 4 x 150 RPM geared DC motors mounted on a body built from PVC pipes.

Step 2: Materials & Software Required

Materials Required

  1. Raspberry Pi (any model except the Pi Zero)
  2. Raspberry Pi camera or a USB webcam
  3. L293D motor driver IC
  4. Robot wheels (7 x 4 cm) X 4
  5. Geared DC motors (150 RPM) X 4
  6. PVC pipes for the chassis

Software required

  1. PuTTY for SSH-ing into the Pi
  2. OpenCV for object recognition

Step 3: Building the Rover Chassis

To build this PVC chassis, you will need

  • 2 X 8" PVC pipes
  • 2 X 4" PVC pipes
  • 4 X T-joints

Arrange the PVC pipes in a ladder-like structure and insert them into the T-joints. You may use PVC sealant to make the joints even stronger.

The geared DC motors are attached to the PVC chassis with clamps, and the wheels are then fixed to the motor shafts with screws.

Step 4: Building Ultrasonic Rangefinder Assembly

The ultrasonic rangefinder assembly is built from an HC-SR04 ultrasonic sensor mounted on a micro servo motor. The cables are connected to the sensor before it is fitted into the plastic case, which is attached to the servo horn with screws.
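The ultrasonic navigation code isn't included yet, but the distance math the HC-SR04 relies on is simple: the sensor reports the round-trip time of an ultrasonic ping, and sound travels at roughly 343 m/s at room temperature. A minimal sketch of the conversion (the function name is my own, not part of the project):

```python
# Convert an HC-SR04 echo pulse width (in seconds) to a distance in cm.
# Sound travels ~34300 cm/s; the pulse covers the round trip to the
# obstacle and back, so we halve the result.
def pulse_to_cm(pulse_seconds):
    return pulse_seconds * 34300.0 / 2.0

print(pulse_to_cm(0.001))  # a 1 ms echo means the obstacle is ~17.15 cm away
```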

Step 5: Schematics and Electrical Connections

Please make the electrical connections as per the circuit diagram attached.
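If you want to sanity-check the wiring before running the full tracker, the drive logic boils down to a small truth table per motor. The pin numbers below are the BOARD-numbered pins used by the code in Step 7; the helper itself is illustrative, not part of the project:

```python
# Logic levels for (pin 21 = left fwd, pin 22 = left bwd,
#                   pin 13 = right fwd, pin 15 = right bwd),
# matching the fwd/bac/ryt/lft/stp functions in tracker.py.
DRIVE_TABLE = {
    "fwd": (1, 0, 1, 0),   # both motors forward
    "bac": (0, 1, 0, 1),   # both motors backward
    "ryt": (0, 1, 1, 0),   # left back, right forward (the code's ryt)
    "lft": (1, 0, 0, 1),   # left forward, right back (the code's lft)
    "stp": (0, 0, 0, 0),   # all lines low, motors stopped
}

def pin_levels(command):
    """Map a drive command to the logic levels for pins 21, 22, 13, 15."""
    return DRIVE_TABLE[command]
```

With a multimeter or LEDs on the L293D inputs, you can check that each command produces these levels before connecting the motors.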

Step 6: SSH and OpenCV Installation

Now, we need to SSH into the Raspberry Pi to install the required software. Make sure the Pi is connected to the same router as your PC and that you know the IP address the router has assigned to it. Then open a terminal (or PuTTY, if you are on Windows) and run the following command.

ssh pi@192.168.1.6

Your Pi's IP address will likely differ; mine is 192.168.1.6.

When prompted, enter the default password: "raspberry"

Now that you have SSH'd into your Pi, start by updating it with this command.

sudo apt-get update && sudo apt-get upgrade

Next, install the required developer tools.

sudo apt-get install build-essential cmake pkg-config

Next, we need to install some image I/O packages that let the Pi load various image formats from disk.

sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev

Now, some packages for video capture, live streaming, the GUI module, and optimizing OpenCV performance.

sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk2.0-dev libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran

We also need to install the Python 2.7 and Python 3 header files so we can compile OpenCV with Python bindings.

sudo apt-get install python2.7-dev python3-dev

Downloading OpenCV source code

cd ~
wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.3.0.zip
unzip opencv.zip

Downloading the opencv_contrib repository (its version must match the OpenCV version above)

wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.3.0.zip
unzip opencv_contrib.zip

It is also recommended to use a virtual environment for installing OpenCV.

sudo pip install virtualenv virtualenvwrapper
sudo rm -rf ~/.cache/pip

Now that virtualenv and virtualenvwrapper have been installed, we need to update our ~/.profile to include the following lines at the bottom.

export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh

Create your python virtual environment

mkvirtualenv cv -p python2

Reload your profile and switch to the new environment.

source ~/.profile
workon cv

Installing NumPy

pip install numpy
Compile & Install OpenCV

First, configure the build.

cd ~/opencv-3.3.0/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
    -D BUILD_EXAMPLES=ON ..

Finally, compile OpenCV.

make -j4

The compile can take an hour or more. If it fails partway through (the Pi 3 can run out of memory with four threads), re-run make without the -j4 flag. Once it finishes, install OpenCV.

sudo make install
sudo ldconfig

Step 7: Running the Python Code for Rover

Create a Python file called tracker.py and add the following code to it.

nano tracker.py

Code:

#ASAR Program
#This program tracks a red ball and instructs a raspberry pi to follow it.
import sys
sys.path.append('/usr/local/lib/python2.7/site-packages')
import cv2
import numpy as np
import os
import RPi.GPIO as IO

IO.setmode(IO.BOARD)        # use physical (BOARD) pin numbering
IO.setup(7,IO.OUT)          # indicator output, high while the ball is detected
IO.setup(15,IO.OUT)         # right motor, backward line
IO.setup(13,IO.OUT)         # right motor, forward line
IO.setup(21,IO.OUT)         # left motor, forward line
IO.setup(22,IO.OUT)         # left motor, backward line

def fwd():
    IO.output(21,1)#Left Motor Forward
    IO.output(22,0)
    IO.output(13,1)#Right Motor Forward
    IO.output(15,0)
def bac():
    IO.output(21,0)#Left Motor backward
    IO.output(22,1)
    IO.output(13,0)#Right Motor backward
    IO.output(15,1)
def ryt():
    IO.output(21,0)#Left Motor backward
    IO.output(22,1)
    IO.output(13,1)#Right Motor forward
    IO.output(15,0)
def lft():
    IO.output(21,1)#Left Motor forward
    IO.output(22,0)
    IO.output(13,0)#Right Motor backward
    IO.output(15,1)
def stp():
    IO.output(21,0)#Left Motor stop
    IO.output(22,0)
    IO.output(13,0)#Right Motor stop
    IO.output(15,0)
###################################################################################################
def main():

    capWebcam = cv2.VideoCapture(0)                     # declare a VideoCapture object and associate to webcam, 0 => use 1st webcam

                                                        # show original resolution
    print "default resolution = " + str(capWebcam.get(cv2.CAP_PROP_FRAME_WIDTH)) + "x" + str(capWebcam.get(cv2.CAP_PROP_FRAME_HEIGHT))

    capWebcam.set(cv2.CAP_PROP_FRAME_WIDTH, 320.0)              # change resolution to 320x240 for faster processing
    capWebcam.set(cv2.CAP_PROP_FRAME_HEIGHT, 240.0)

                                                        # show updated resolution
    print "updated resolution = " + str(capWebcam.get(cv2.CAP_PROP_FRAME_WIDTH)) + "x" + str(capWebcam.get(cv2.CAP_PROP_FRAME_HEIGHT))

    if capWebcam.isOpened() == False:                           # check if VideoCapture object was associated to webcam successfully
        print "error: capWebcam not accessed successfully\n\n"          # if not, print error message to std out
        raw_input("press enter to exit")                                # wait for a keypress so the user can see the error ("pause" is Windows-only)
        return                                                          # and exit function (which exits program)
    # end if

    while cv2.waitKey(1) != 27 and capWebcam.isOpened():                # until the Esc key is pressed or webcam connection is lost
        blnFrameReadSuccessfully, imgOriginal = capWebcam.read()            # read next frame

        if not blnFrameReadSuccessfully or imgOriginal is None:             # if frame was not read successfully
            print "error: frame not read from webcam\n"                     # print error message to std out
            raw_input("press enter to exit")                                # wait for a keypress so the user can see the error
            break                                                           # exit while loop (which exits program)
        # end if

        imgHSV = cv2.cvtColor(imgOriginal, cv2.COLOR_BGR2HSV)

        imgThreshLow = cv2.inRange(imgHSV, np.array([0, 135, 135]), np.array([18, 255, 255]))
        imgThreshHigh = cv2.inRange(imgHSV, np.array([165, 135, 135]), np.array([179, 255, 255]))

        imgThresh = cv2.add(imgThreshLow, imgThreshHigh)

        imgThresh = cv2.GaussianBlur(imgThresh, (3, 3), 2)

        imgThresh = cv2.dilate(imgThresh, np.ones((5,5),np.uint8))
        imgThresh = cv2.erode(imgThresh, np.ones((5,5),np.uint8))

        intRows, intColumns = imgThresh.shape

        circles = cv2.HoughCircles(imgThresh, cv2.HOUGH_GRADIENT, 5, intRows / 4)      # fill variable circles with all circles in the processed image

        if circles is not None:                     # this check keeps the program from crashing below if no circles were found
            IO.output(7,1)                          # indicator on: ball in view
            for circle in circles[0]:                           # for each circle
                x, y, radius = circle                                                                       # break out x, y, and radius
                print "ball position x = " + str(x) + ", y = " + str(y) + ", radius = " + str(radius)       # print ball position and radius
                obRadius = int(radius)
                xAxis = int(x)
                if 0 < obRadius < 50:               # chained comparison; the bitwise & used here before was a bug
                    print("Object detected")
                    if 100 < xAxis < 180:
                        print("Object Centered")
                        fwd()
                    elif xAxis > 180:
                        print("Moving Right")
                        ryt()
                    elif xAxis < 100:
                        print("Moving Left")
                        lft()
                    else:
                        stp()
                else:
                    stp()
                cv2.circle(imgOriginal, (int(x), int(y)), 3, (0, 255, 0), -1)            # draw small green circle at center of detected object
                cv2.circle(imgOriginal, (int(x), int(y)), int(radius), (0, 0, 255), 3)   # draw red circle around the detected object
            # end for
        else:
            IO.output(7,0)                          # indicator off: no ball in view
        # end if

        cv2.namedWindow("imgOriginal", cv2.WINDOW_AUTOSIZE)            # create windows, use WINDOW_AUTOSIZE for a fixed window size
        cv2.namedWindow("imgThresh", cv2.WINDOW_AUTOSIZE)           # or use WINDOW_NORMAL to allow window resizing

        cv2.imshow("imgOriginal", imgOriginal)                 # show windows
        cv2.imshow("imgThresh", imgThresh)
    # end while

    cv2.destroyAllWindows()                     # remove windows from memory
    IO.cleanup()                                # release the GPIO pins

    return

###################################################################################################
if __name__ == "__main__":
    main()
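A note on the two inRange calls above: in OpenCV's HSV space, hue runs from 0 to 179, and red sits at both ends of that axis, so a single range can't capture it. The code therefore thresholds a low band ([0, 18]) and a high band ([165, 179]) and adds the two masks. The same idea in plain NumPy (illustrative only; the sample hue values are mine):

```python
import numpy as np

# Sample hue values: two reds near each end of the hue axis, one green.
hues = np.array([2, 60, 170, 178])

low_band = (hues >= 0) & (hues <= 18)      # first cv2.inRange band
high_band = (hues >= 165) & (hues <= 179)  # second cv2.inRange band
red_mask = low_band | high_band            # cv2.add combines the two masks

print(red_mask.tolist())  # [True, False, True, True]
```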

Now, all that is left to do is run the program.

python tracker.py

Congrats! Your self-driving rover is ready. The ultrasonic-sensor-based navigation part will be completed soon, and I will update this Instructable then.

Thanks for reading!
