The objective is to write a script that makes a robot car detect a blue marker, move towards the marker, read the sign on the marker and follow its direction until a stop sign is found.
The video shows an overview of the approach and performance.
Software used:
- Python
- OpenCV and NumPy
- The mini-driver, camera and websockets classes of Dawn Robotics
(They have closed their business, but the supporting blog posts can still be found at the Dawn Robotics blog. I'm not sure whether that site will be updated to SHA-2, but you'll find the libraries and everything else at the Dawn Robotics Bitbucket repositories.)
Here you'll find the complete script.
The code is rather straightforward and well commented; it should be self-explanatory.
There are several ways to track an object in a live video stream. The simplest and fastest methods are size detection and color tracking. With size detection the objects are preferably square. Since the objective is to read the signs on the markers, color detection is used here. The signs are placed on a blue A4 background, which makes them easy to detect and simplifies filtering out the sign.

Color detection, however, is rather dependent on the light conditions (darkness, lamp light, shadows). When using color tracking at night, the RGB values used for masking have to be adjusted to the overall situation. This can easily be done with a calibrating script, which can also be found in the Handy stuff folder of the same repository.
The picture displays the output of all steps: from calibrating, through detection, binarizing and edge detection, to perspective transformation and comparison with a reference image.
Marker detection, and keeping focus while moving, is done by plain color tracking; it's about 50x faster than the full interpretation routine. OpenCV's bounding box is used to get a more accurate centroid (contours alone are sometimes tricky when light conditions are a bit murky). A bounding box also yields the width of the marker, which is used to keep focus. The difference is shown in the second picture: a red line for the contour found and a green line for the bounding box.
The range routine uses a constant derived from the marker width. The constant is determined at a known range, and since all markers have the same size, the width in pixels can be used for ranging through the camera image. Ranging isn't needed to adjust position and direction, but can be used to keep distance (so: more for fun).
The heads-up display routine also shows the centre coordinates, also just for fun (but who knows).
The script uses its own time-out routine, because Python's 'time.sleep()' is barely reliable at very small intervals, and accurate time-outs are needed to enable exact turns. Images are grabbed into a global variable (this saves a lot of typing and a small bit of memory).
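A minimal sketch of such a time-out routine, using only the standard library: instead of trusting one long 'time.sleep()' call, the loop polls a monotonic clock until the deadline passes, so overshoot stays small. The function name is an illustrative assumption.

```python
# Sketch of an accurate time-out: poll a monotonic clock instead of
# relying on a single sleep call, which can overshoot.
import time

def wait_timeout(timeout_s):
    """Block until timeout_s seconds have elapsed."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        time.sleep(0.001)  # tiny sleep keeps CPU load reasonable
```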
Readings while moving are throttled by time-outs: otherwise the routine produces more than a hundred readings in a couple of seconds, overloading the webserver and the Pi's memory.
After comparing a sign, the script forces a wait for the latest image, so the next marker gets into memory. Finally, the full detection routine is used to compare the sign with the reference images. This routine detects the white inner rectangle, shown in the third picture as a blue rectangle.
The steps are straightforward: find the smallest rectangular contour, detect its edges, adjust the perspective by warping, binarize it and compare it with the reference images on disk.
The script can easily be extended with all kinds of routines, such as a line-following routine. The evasion routine (an emergency brake for when an obstacle is encountered while moving towards a marker) still has to be filled in.
A good example of using object tracking by color in code can be found at:
Good instructions on object detection and color tracking in Python and OpenCV can be found at the site of Adrian Rosebrock at: