Introduction: Automatic Vision Object Tracking


In my last tutorial, we explored how to control a Pan/Tilt servo device in order to position a PiCam. Now we will use our device to help the camera automatically track colored objects, as you can see below:

This is my first experience with OpenCV and I must confess, I am in love with this fantastic "Open Source Computer Vision Library".

OpenCV is free for both academic and commercial use. It has C++, C, Python, and Java interfaces and supports Windows, Linux, Mac OS, iOS, and Android. In my series of OpenCV tutorials, we will focus on the Raspberry Pi (so, Raspbian as OS) and Python. OpenCV was designed for computational efficiency and with a strong focus on real-time applications. So, it's perfect for physical computing projects!

Step 1: BOM - Bill of Material

    Main parts:

    1. Raspberry Pi V3 - US$ 32.00
    2. 5 Megapixels 1080p Sensor OV5647 Mini Camera Video Module - US$ 13.00
    3. TowerPro SG90 9G 180-degree Micro Servo (2x) - US$ 4.00
    4. Mini Pan/Tilt Camera Platform Anti-Vibration Camera Mount w/ 2 Servos (*) - US$ 8.00
    5. Red LED
    6. 220-ohm resistor
    7. 1K-ohm resistor (2x) - Optional
    8. Miscellaneous: metal parts, bands, etc. (in case you build your own Pan/Tilt mechanism)

    (*) you can buy a complete Pan/Tilt platform with the servos or build your own.

    Step 2: Installing OpenCV 3 Package

    I am using a Raspberry Pi V3 updated to the latest version of Raspbian (Stretch), so the best way to install OpenCV is to follow the excellent tutorial developed by Adrian Rosebrock: Raspbian Stretch: Install OpenCV 3 + Python on your Raspberry Pi.

    I tried several different guides to install OpenCV on my Pi, and Adrian's tutorial is the best. I advise you to do the same, following his guide step by step.

    Once you have finished Adrian's tutorial, you should have an OpenCV virtual environment ready to run our experiments on your Pi.

    Let's go to our virtual environment and confirm that OpenCV 3 is correctly installed.

    Adrian recommends running the command "source" each time you open up a new terminal, to ensure your system variables have been set up correctly.

    source ~/.profile

    Next, let's enter our virtual environment:

    workon cv

    If you see the text (cv) preceding your prompt, then you are in the cv virtual environment:

    (cv) pi@raspberry:~$

    Adrian points out that the cv Python virtual environment is entirely independent of, and sequestered from, the default Python version included with Raspbian Stretch. So, any Python packages in the global site-packages directory will not be available to the cv virtual environment. Similarly, any Python packages installed in the site-packages of cv will not be available to the global install of Python.

    Now, enter your Python interpreter:

    python

    and confirm that you are running version 3.5 (or above).

    Inside the interpreter (the ">>>" will appear), import the OpenCV library:

    import cv2

    If no error message appears, OpenCV is correctly installed on your Python virtual environment.

    You can also check the OpenCV version installed:

    cv2.__version__

    Version 3.3.0 should appear (or a later version released in the future). The Terminal screenshot above shows the previous steps.
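
    If you prefer, you can also run the same check in one shot from the shell (still inside the cv environment). This one-liner is my addition, not part of the original steps:

    python -c "import cv2; print(cv2.__version__)"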

    Step 3: Testing Your Camera

    Once you have OpenCV installed on your RPi, let's test that your camera is working properly.

    I am assuming that you have a PiCam already installed on your Raspberry Pi.

    Enter the Python code below in your IDE:

    import numpy as np
    import cv2
    
    cap = cv2.VideoCapture(0)
     
    while(True):
        ret, frame = cap.read()
        frame = cv2.flip(frame, -1) # Flip camera vertically
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        
        cv2.imshow('frame', frame)
        cv2.imshow('gray', gray)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    
    cap.release()
    cv2.destroyAllWindows()
    

    The above code will capture the video stream generated by your PiCam, displaying it both in BGR color and in grayscale.

    Note that I rotated my camera vertically due to the way it is assembled. If that is not your case, comment out or delete the "flip" command line.
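
    If the preview runs slowly on your Pi, you can optionally request a smaller frame right after creating "cap" and before the while loop. This is an addition of mine (not in the original script), using OpenCV's standard capture properties; depending on the camera driver, the requested size may be adjusted:

    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)   # request a 640 x 480 stream
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)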

    You can alternatively download the code from my GitHub: simpleCamTest.py

    To execute, enter the command:

    python simpleCamTest.py

    To finish the program, press [q] or [Ctrl] + [C] on your keyboard.

    The picture shows the result.

    To learn more about OpenCV, you can follow the tutorial: loading-video-python-opencv-tutorial

    Step 4: Color Detection in Python With OpenCV

    One thing we will try to accomplish is the detection and tracking of an object of a certain color. For that, we must understand a little more about how OpenCV interprets colors.

    Henri Dang wrote a great tutorial about Color Detection in Python with OpenCV.

    Usually, our camera works in RGB color mode, which can be thought of as all the possible colors that can be made by mixing three lights: red, green, and blue. Here, however, we will work with BGR (Blue, Green, Red) instead, since that is the channel order OpenCV uses.

    As described above, with BGR, a pixel is represented by 3 parameters: blue, green, and red. Each parameter usually has a value from 0 to 255 (or 0 to FF in hexadecimal). For example, a pure blue pixel on your computer screen would have a B value of 255, a G value of 0, and an R value of 0.
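
    As a quick illustration (my example, not from the original text), here is that pure blue pixel the way OpenCV sees it: a tiny NumPy array in BGR order:

    import numpy as np

    pixel = np.uint8([[[255, 0, 0]]])  # one pixel: B=255, G=0, R=0 (pure blue)
    print(pixel.shape)  # (1, 1, 3): a 1x1 image with 3 color channels
    print(pixel[0, 0])  # [255   0   0]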

    OpenCV works with the HSV (Hue, Saturation, Value) color model, which is an alternative representation of the RGB color model, designed in the 1970s by computer graphics researchers to align more closely with the way human vision perceives color-making attributes:

    Great. So, if you want to track a certain color using OpenCV, you must define it using the HSV Model.

    Let's say that I must track a yellow object, like the plastic box shown in the above picture. The easy part is to find its BGR components. You can use any design program to find them (I used PowerPoint). In my case, I found:

    • Blue: 71
    • Green: 234
    • Red: 213

    Next, we must convert the BGR value (71, 234, 213) to an HSV range, defined with lower and upper boundaries. For that, let's run the code below:

    import sys
    import numpy as np
    import cv2

    # read the BGR components from the command line
    blue = int(sys.argv[1])
    green = int(sys.argv[2])
    red = int(sys.argv[3])

    # build a 1x1-pixel BGR image and convert it to HSV
    color = np.uint8([[[blue, green, red]]])
    hsv_color = cv2.cvtColor(color, cv2.COLOR_BGR2HSV)

    # plain Python int, to avoid uint8 wrap-around in the arithmetic below
    hue = int(hsv_color[0][0][0])

    print("Lower bound is :")
    print("[" + str(hue - 10) + ", 100, 100]\n")

    print("Upper bound is :")
    print("[" + str(hue + 10) + ", 255, 255]")

    You can alternatively download the code from my GitHub: bgr_hsv_converter.py

    To execute, enter the command below, passing as parameters the BGR values found before:

    python bgr_hsv_converter.py 71 234 213

    The program will print the upper and lower boundaries of our object color.

    In this case:

    lower bound: [24, 100, 100]

    and

    upper bound: [44, 255, 255]

    The Terminal screenshot shows the result.
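
    One caveat worth adding (my note, not part of the original converter): OpenCV stores hue in the range 0 to 179, so for reddish colors, whose hue sits near 0 or 179, the ±10 offset can fall outside that range. A minimal safeguard sketch, clamping the bounds:

    # clamp the hue bounds to OpenCV's 0-179 hue range
    lower_hue = max(hue - 10, 0)
    upper_hue = min(hue + 10, 179)

    For our yellow object (hue 34), no clamping is needed.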

    Last, but not least, let's see how OpenCV can "mask" our object once we have determined its color:

    import cv2
    import numpy as np

    # Read the picture - the 1 means we want the image in BGR
    img = cv2.imread('yellow_object.JPG', 1)

    # resize the image to 20% in each axis
    img = cv2.resize(img, (0, 0), fx=0.2, fy=0.2)
    # convert the BGR image to an HSV image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # use NumPy to create arrays holding the lower and upper HSV ranges
    # "dtype=np.uint8" means the data type is an 8-bit integer
    lower_range = np.array([24, 100, 100], dtype=np.uint8)
    upper_range = np.array([44, 255, 255], dtype=np.uint8)

    # create a mask for the image
    mask = cv2.inRange(hsv, lower_range, upper_range)

    # display both the mask and the image side-by-side
    cv2.imshow('mask', mask)
    cv2.imshow('image', img)

    # wait for the user to press [ESC]
    while(1):
        k = cv2.waitKey(0)
        if(k == 27):
            break

    cv2.destroyAllWindows()
    

    You can alternatively download the code from my GitHub: colorDetection.py

    To execute, enter the command below, having in your directory a photo of your target object (in my case: yellow_object.JPG):

    python colorDetection.py

    The above picture will show the original image ("image") and how the object will appear ("mask") after the mask is applied.
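
    If you also want to see the object itself in color, rather than the white-on-black mask, a common extra step (my addition, not in the original script) is to apply the mask to the image with a bitwise AND:

    # keep only the pixels where the mask is non-zero
    res = cv2.bitwise_and(img, img, mask=mask)
    cv2.imshow('res', res)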

    Step 5: Object Movement Tracking

    Now that we know how to "select" our object using a mask, let's track its movement in real time using the camera. For that, I based my code on Adrian Rosebrock's Ball Tracking with OpenCV tutorial.

    I strongly suggest that you read Adrian's tutorial in detail.

    First, confirm that you have the imutils library installed. It is Adrian's collection of OpenCV convenience functions that make a few basic tasks (like resizing or flipping the screen) much easier. If not, enter the command below to install the library in your virtual Python environment:

    pip install imutils

    Next, download the code ball_tracking.py from my GitHub, and execute it using the command:

    python ball_tracking.py

    As a result, you will see something similar to the gif below:

    Basically, it is the same code as Adrian's, except for the vertical video flip, which I got with the line:

    frame = imutils.rotate(frame, angle=180)
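
    Rotating the frame 180 degrees is equivalent to flipping it around both axes, so if you prefer to stick with plain OpenCV for this step, the call below should produce the same result:

    frame = cv2.flip(frame, -1)  # flip both vertically and horizontally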

    Also, note that the mask boundaries used were the ones that we got in the previous step.

    Step 6: Testing the GPIOs

    Now that we have played with the basics of OpenCV, let's connect an LED to our RPi and start to interact with our GPIOs.

    Follow the above electrical diagram: the LED's cathode is connected to GND, and its anode to GPIO 21 via a 220-ohm resistor.

    Let's test our LED inside our Virtual Python Environment.

    Remember that it's possible that RPi.GPIO is not installed in your Python virtual environment! To fix this, once you are there (remember to confirm that (cv) appears in your terminal prompt), use pip to install it into your virtual environment:

    pip install RPi.GPIO

    Let's use a Python script to execute a simple test:

    import sys
    import time
    import RPi.GPIO as GPIO
    
    # initialize GPIO and variables
    redLed = int(sys.argv[1])
    freq = int(sys.argv[2])
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(redLed, GPIO.OUT)
    GPIO.setwarnings(False)
    
    print("\n [INFO] Blinking LED (5 times) connected at GPIO {0} at every {1} second(s)".format(redLed, freq))
    for i in range(5):
        GPIO.output(redLed, GPIO.LOW)
        time.sleep(freq)
        GPIO.output(redLed, GPIO.HIGH)
        time.sleep(freq)
    
    # do a bit of cleanup
    print("\n [INFO] Exiting Program and cleanup stuff \n")
    GPIO.cleanup()

    This code receives as arguments a GPIO number and the interval in seconds at which our LED should blink. The LED will blink 5 times and the program will then terminate. Note that before terminating, we release the GPIOs.

    So, to execute the script, you must enter as parameters the LED GPIO and the interval.

    For example:

    python GPIO_LED_test.py 21 1

    The above command will blink the red LED connected to GPIO 21 five times, once every 1 second.

    The file GPIO_LED_test.py can be downloaded from my GitHub.

    The Terminal screenshot above shows the result (and of course, you should confirm that the LED is blinking).

    Now, let's work with OpenCV and some basic GPIO stuff.

    Step 7: Recognizing Colors and GPIO Interaction

    Let's start integrating our OpenCV code with GPIO interaction. We will start with the last OpenCV code and integrate the RPi.GPIO library into it, so that we turn on the red LED any time our colored object is found by the camera. The code used in this step was based on Adrian's great tutorial OpenCV, RPi.GPIO, and GPIO Zero on the Raspberry Pi:

    The first thing to do is to "create" our LED, connecting it to the specific GPIO:

    import RPi.GPIO as GPIO
    redLed = 21
    GPIO.setmode(GPIO.BCM)
    GPIO.setwarnings(False)
    GPIO.setup(redLed, GPIO.OUT)
    

    Second, we must initialize our LED (turned off):

    GPIO.output(redLed, GPIO.LOW)
    ledOn = False
    

    Now, inside the loop, where the circle is drawn when the object is found, we will turn on the LED:

    GPIO.output(redLed, GPIO.HIGH)
    ledOn = True
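
    The complete code also turns the LED off again when the object is lost. A minimal sketch of the complementary branch, assuming the same ledOn flag (see object_detection_LED.py for its exact placement, attached to the object-found test inside the loop):

    # no object found in this frame: turn the LED off
    else:
        if ledOn:
            GPIO.output(redLed, GPIO.LOW)
            ledOn = False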
    

    Let's download the complete code from my GitHub: object_detection_LED.py

    Run the code using the command:

    python object_detection_LED.py

    Here is the result. Note that the LED (lower left corner) goes on every time the object is detected:

    Try it with different objects (colors and shapes). You will see that once the color falls inside the mask boundaries, the LED is turned on.

    The video below shows some experiments. Note that only yellow objects that fall inside the color range will be detected, turning the LED on. Objects with different colors are ignored.

    We are only using the LED here, as explained in the last step. I had my Pan/Tilt already assembled when I made the video, so ignore it; we will handle the Pan/Tilt mechanism in the next step.

    Step 8: The Pan Tilt Mechanism

    Now that we have played with the basics of OpenCV and GPIO, let's install our Pan/Tilt mechanism.

    For details, please visit my tutorial: Pan-Tilt-Multi-Servo-Control

    The servos should be connected to an external 5V supply, with their data pins (in my case, their yellow wires) connected to the Raspberry Pi GPIOs as below:

    • GPIO 17 ==> Tilt Servo
    • GPIO 27 ==> Pan Servo

    Do not forget to connect the GNDs together ==> Raspberry Pi - Servos - External Power Supply.

    Optionally, you can add a 1K-ohm resistor in series between each Raspberry Pi GPIO and the servo data input pin. This would protect your RPi in case of a servo problem.

    Let's also use the opportunity and test our servos inside our Virtual Python Environment.

    Let's use a Python script to execute some tests with our servos:

    from time import sleep
    import RPi.GPIO as GPIO
    GPIO.setmode(GPIO.BCM)
    GPIO.setwarnings(False)

    def setServoAngle(servo, angle):
    	pwm = GPIO.PWM(servo, 50)       # 50 Hz PWM (20 ms period), standard for hobby servos
    	pwm.start(8)                    # start near the neutral position
    	dutyCycle = angle / 18. + 3.    # map 0-180 degrees to a 3-13% duty cycle
    	pwm.ChangeDutyCycle(dutyCycle)
    	sleep(0.3)                      # give the servo time to reach the position
    	pwm.stop()

    if __name__ == '__main__':
    	import sys
    	servo = int(sys.argv[1])
    	GPIO.setup(servo, GPIO.OUT)
    	setServoAngle(servo, int(sys.argv[2]))
    	GPIO.cleanup()
    

    The core of the above code is the function setServoAngle(servo, angle). This function receives as arguments a servo GPIO number and an angle value to which the servo must be positioned. Since the input of this function is an angle, we must convert it to an equivalent duty cycle.
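
    To see what that formula does: a 0-degree angle maps to a duty cycle of 0/18 + 3 = 3%, 90 degrees to 90/18 + 3 = 8%, and 180 degrees to 180/18 + 3 = 13%. At 50 Hz the PWM period is 20 ms, so those duty cycles correspond to pulses of roughly 0.6 ms, 1.6 ms, and 2.6 ms, respectively.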

    To execute the script, you must enter as parameters the servo GPIO and the angle.

    For example:

    python angleServoCtrl.py 17 45

    The above command will position the servo connected to GPIO 17 ("tilt") at 45 degrees of "elevation". A similar command could be used for Pan servo control (positioning it at 45 degrees of "azimuth"):

    python angleServoCtrl.py 27 45

    The file angleServoCtrl.py can be downloaded from my GitHub.

    Step 9: Finding the Object's Real-Time Position

    The idea here is to keep the object positioned in the middle of the screen using the Pan/Tilt mechanism. The bad news is that, to start, we must know where the object is located in real time. But the good news is that this is very easy, since we already have the object's center coordinates.

    First, let's take the "object_detection_LED" code used before and modify it to print the x,y coordinates of the detected object.

    Download from my GitHub the code: objectDetectCoord.py

    The "core" of the code is the portion where we find the object and draw a circle on it with a red dot in its center.

    # only proceed if the radius meets a minimum size
    if radius > 10:
    	# draw the circle and centroid on the frame,
    	# then update the list of tracked points
    	cv2.circle(frame, (int(x), int(y)), int(radius),
    		(0, 255, 255), 2)
    	cv2.circle(frame, center, 5, (0, 0, 255), -1)
    			
    	# print center of circle coordinates
    	mapObjectPosition(int(x), int(y))
    			
    	# if the led is not already on, turn the LED on
    	if not ledOn:
    		GPIO.output(redLed, GPIO.HIGH)
    		ledOn = True
    

    Let's "export" the center coordinates to mapObjectPosition(int(x), int(y)) function in order to print its coordinates. Below the function:

    def mapObjectPosition(x, y):
        print("[INFO] Object Center coordinates at X0 = {0} and Y0 = {1}".format(x, y))

    Running the program, we will see the (x, y) position coordinates in our terminal, as shown above. Move the object and observe the coordinates. We will realize that x goes from 0 to 500 (left to right) and y goes from 0 to 350 (top to bottom). See the above pictures.

    Great! Now we must use those coordinates as a starting point for our Pan/Tilt tracking system.

    Step 10: Object Position Tracking System

    We want our object to always stay centered on the screen. So let's define, for example, that we will consider our object "centered" if:

    220 < x < 280

    and

    160 < y < 210

    Outside of those boundaries, we must move our Pan/Tilt mechanism to compensate for the deviation. Based on that, we can build the function mapServoPosition(x, y) as below. Note that the "x" and "y" used as parameters in this function are the same ones we used before for printing the center position:

    # position servos to present object at center of the frame
    def mapServoPosition (x, y):
        global panAngle
        global tiltAngle
        if (x < 220):
            panAngle += 10
            if panAngle > 140:
                panAngle = 140
            positionServo (panServo, panAngle)
     
        if (x > 280):
            panAngle -= 10
            if panAngle < 40:
                panAngle = 40
            positionServo (panServo, panAngle)
    
        if (y < 160):
            tiltAngle += 10
            if tiltAngle > 140:
                tiltAngle = 140
            positionServo (tiltServo, tiltAngle)
     
        if (y > 210):
            tiltAngle -= 10
            if tiltAngle < 40:
                tiltAngle = 40
            positionServo (tiltServo, tiltAngle)
    

    Based on the (x, y) coordinates, servo position commands are generated using the function positionServo(servo, angle). For example, suppose the y position is 50, meaning our object is almost at the top of the screen. That means our "camera sight" is aiming "low" (say, a tilt angle of 120 degrees), so we must decrease the tilt angle (say, to 100 degrees) so that the camera sight points "up" and the object moves "down" on the screen (y increases to, say, 190). The above diagram shows the example in terms of geometry.

    Think about how the Pan camera will operate. Note that the screen is not mirrored, which means that if you move the object to "your left", it will move on the screen to "your right", since you are facing the camera.

    The function positionServo(servo, angle) can be written as:

    def positionServo (servo, angle):
        os.system("python angleServoCtrl.py " + str(servo) + " " + str(angle))
        print("[INFO] Positioning servo at GPIO {0} to {1} degrees\n".format(servo, angle))
    

    We will be calling the script shown before for servo positioning.
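
    A design note: os.system() spawns a new Python process and re-initializes the PWM for every adjustment, which adds latency to the tracking loop. A faster alternative (a sketch under my own assumptions, not the author's code) is to keep one PWM object per servo alive inside the tracking script itself:

    import RPi.GPIO as GPIO
    from time import sleep

    GPIO.setmode(GPIO.BCM)
    GPIO.setwarnings(False)
    pwms = {}  # one PWM object per servo GPIO, created on first use

    def positionServo(servo, angle):
        if servo not in pwms:
            GPIO.setup(servo, GPIO.OUT)
            pwms[servo] = GPIO.PWM(servo, 50)  # 50 Hz, as in angleServoCtrl.py
            pwms[servo].start(0)
        pwms[servo].ChangeDutyCycle(angle / 18. + 3.)  # same angle-to-duty mapping
        sleep(0.3)  # give the servo time to move
        print("[INFO] Positioning servo at GPIO {0} to {1} degrees\n".format(servo, angle))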

    Note that angleServoCtrl.py must be in the same directory as objectDetectTrack.py.

    The complete code can be downloaded from my GitHub: objectDetectTrack.py

    The gif below shows an example of our project working:

    Step 11: Conclusion

    As always, I hope this project can help others find their way into the exciting world of electronics!

    For details and the final code, please visit my GitHub repository: OpenCV-Object-Face-Tracking

    For more projects, please visit my blog: MJRoBot.org

    Below is the link to my next tutorial, where we will explore "Facial Recognition":

    Real-time-Face-Recognition-an-End-to-end-Project

    Greetings from the south of the world!

    See you in my next instructable!

    Thank you,

    Marcelo
