Introduction: The RR.O.P. - RaspRobot OpenCV Project

FIRST: I used a translator to help me, because I'm not fluent in English, so I apologize for the bad English. My intention really is to collaborate.

SECOND: Thanks to you, I got an award in the "MICROCONTROLLER CONTEST SPONSORED BY RADIOSHACK"!!! I want to thank you very much!!!! Whatever I can help with in this project, I'm here to explain! Thank you and let's learn! :D

This project describes the creation of a mobile robot that uses a computer vision system for guidance. The main objective of the project is to demonstrate that a robot can use artificial vision to interact with the external environment, using a feature such as shape, color or texture. This feature is used as the metric that determines the movement of the entire robot.

Robotics is a branch of research that is becoming crucial to supporting human activities, with the development of robots that guarantee reliability, range, speed and safety in the applications where they are used. In most of these applications the robot interprets the outside environment through perception, that is, by recognizing information with artificial receptors. This gives the system a sensing element that can recognize a characteristic such as color, shape or texture through a computer vision system.

If you want to learn a little about how this robot was built, from the choice of technologies to the drive system, stay with me and let's work together!

Enjoy!

Step 1: What Is My Idea?

The idea for this project came from a question: "Is it possible for a computer to see and perform a task on its own?" From this doubt I turned to an area I have been researching for some time, robotics, and drew up a still more complete question: "Is it possible for a mobile robot to move around independently using artificial vision?" And another question: "How will I know what is happening and whether the robot is performing its functions as planned?" SIMPLE! I thought of creating an interface, accessing the Raspberry Pi remotely and viewing everything that is happening quickly and without much computational cost. That was a good catch! These were the questions I worked out before the whole context, and they started all the research involved in this project.

I learned that in computer vision the task of color segmentation has a very low computational cost, so I chose it. This kind of feature can be implemented in many programming languages, and some of those languages, depending on the platform, also support robotics resources. Then it was only a matter of choosing the key technologies on which the entire system would be implemented, and then: get to work!

Step 2: Main Technologies of the Project

Before starting any project, it was necessary to know and learn about the key technologies that were chosen:

  • Raspberry Pi: I looked for ways to apply computer vision on a computer small enough to be mounted on a robotic chassis. The microcomputer chosen was the Raspberry Pi Model B. The Raspberry Pi has an acceptable cost, small size and reasonable specs (clock, CPU, RAM, Ethernet and others). It supports peripherals such as USB ports and can also integrate resources such as actuators and sensors through a set of external connections called the GPIO, which includes digital input/output pins, UART, I2C, SPI, audio, 3.3V, 5V and GND.
  • OpenCV: OpenCV is the computer vision library chosen. The library is important for recognizing the object by a characteristic such as color, texture or shape. This is where the operations, functions and features that capture the image and process the information of interest to the project are performed, directly assisting the other parts of the system in decision-making.
  • Python: the programming language used to write the source code was Python. Python 2 was chosen for the system because of its compatibility with the Raspberry Pi and the OpenCV library. Python has some useful characteristics, such as being a high-level language, having good usability, providing high-level built-in types (integer, boolean), delimiting control blocks by indentation and having native libraries that support the development of the project, such as NumPy, Pygame, Matplotlib and SciPy. On the Raspberry Pi, Python 3.2.3 and its IDLE 3 development environment are also available. In the scripts, the computer vision library is used to perceive the environment and extract the information that will be used by the decision-making system; they contain the settings for handling, the configuration, the functions and the libraries.

And from this information about the technologies it was possible to start and develop the project...

Step 3: What Is a Computer Vision System?

The goal of computer vision is to enable artificial means, such as computers, to sense the external environment, understand it, take appropriate measures or decisions, and learn from this experience so that they can improve their future performance.
An artificial vision system is a reflection of the natural vision system: in nature, for example, vision and learning make it possible to track certain targets, such as predators, food and even objects that may be in an individual's path. Thus, a major purpose of an image is to inform the viewer about its content, allowing decisions to be made. An important subtask in a computer vision system is image segmentation, which is the process of dividing the image into a set of individual regions or segments. It consists of partitioning an image into meaningful regions, grouping pixels by a common characteristic such as color, edge or shape.

Color segmentation uses color to classify areas in the image, separating objects that do not share the same color. It is common for a computer vision system to aim at reconstructing the external environment, specifically the objects in it, where the first goal is to locate the object with certainty and reliability. The color of the object is a feature used to separate different areas and subsequently enable the use of a tracking module. So this was the feature chosen to be studied and implemented.

Let's gooooooooo learnnnnnnnnn!

Step 4: Materials Used in the Project

I chose some parts that I already had and others that I got in order to finish the project:

- Raspberry Pi Model B + 16GB SD card class 10
- 2 DC motors - 6V
- Robotic chassis (old)
- H-bridge (L298N model) for controlling up to 2 motors
- EDUP wireless adapter (Ralink chipset)
- Power bank 1A / 15,000mAh
- Logitech C270 HD camera
- 9g servo motor
- Steel pan-tilt support
- Jumpers (M-M, M-F and F-F)
- 4 AA 1.5V batteries, 6,000mAh
- Plastic cables
- Heat sinks and thermal paste
- A lot of persistence and patience xD

Step 5: Starting the Raspberry Pi

There are several operating systems geared to the Raspberry Pi, such as Pidora, Raspbian, NOOBS and Arch Linux. The Raspberry Pi contains no ROM for storing and installing an OS, so it is necessary to use a bootable SD card with the operating system on it. The transfer speed of the card directly influences how the operating system performs, so cards with a write rate higher than 2 MB/s are preferable. The card used with the OS is a class 10 SDHC card that can reach speeds of 45 MB/s.

The operating system chosen for this project is RASPBIAN (Debian Wheezy). It comes with over 35,000 packages: precompiled software bundled in a nice format for easy installation on your Raspberry Pi, and it can be downloaded from the official site.

Click here if you want more information about the setup and preparation of the card.

If you want information about the first configuration of the Raspberry Pi, click here.

  • THE CAMERA

The webcam is auto-detected by the system after it is plugged into a USB port on the Raspberry Pi. Use the lsusb command and the detected peripheral will be shown. To view the webcam, streaming software is required, which is a way to transmit multimedia data in packets temporarily stored in the Raspberry Pi's cache. The streaming software installed was mjpeg-streamer, following the commands on this site.
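Before setting up the streaming, a quick sanity check from Python (just a sketch, not part of the original setup) confirms that OpenCV can read frames from the webcam; it assumes the camera is the first video device:

import cv2

cap = cv2.VideoCapture(0)            # first V4L device detected by the system
ok, frame = cap.read()
if ok:
    print("Camera OK, frame size: %dx%d" % (frame.shape[1], frame.shape[0]))
else:
    print("Camera not detected by OpenCV")
cap.release()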

After going through some Raspberry Pi settings, shall we now understand more about the communication used?

Step 6: System Communication

The system can be connected in two ways: wired (10/100 Mbit Ethernet LAN connection) and wireless (connection through a USB wireless adapter). Wireless communication was chosen in order to give the system mobility. Before installing the wireless adapter it is necessary to know some of its information, such as the Service Set Identifier (SSID), which is the set of characters that identifies a wireless network, the type of encryption used on the network and the wireless network type. For the adapter to be addressed by the system and work properly, its firmware must be installed: a software package that matches the model of the adapter's internal chip.
The wireless adapter was chosen based on a study of adapters compatible with the Raspberry Pi, done through the official website. The adapter chosen was an EDUP model with the Ralink 5370 chip and a 2 dBi antenna; it was chosen because it has a reasonable range from the access point and is easy to install.

To communicate remotely in order to access information, upload/download files and perform the necessary tests, all the computers had to belong to the same network, that is, be connected to the same access point. To accomplish these tasks, some communication protocols were used:

  • GUI - GRAPHICAL USER INTERFACE

To remotely access a GUI on the Raspberry Pi, an RDP or VNC protocol and a good-quality encrypted connection were required.

  • Access via the VNC protocol: it is necessary to install UVCViewer on your computer; on the Raspberry Pi, TightVNC, a free suite of remote control software, was installed.

Server installation: sudo apt-get install tightvncserver
Start the server: tightvncserver
Create a default session: vncserver :1 -geometry 1024×728 -depth 24

  • Access via the RDP protocol: it is necessary to install RDPDesk on your computer; on the Raspberry Pi, the XRDP server, which starts automatically at boot, was installed.

Server installation: sudo apt-get install xrdp

  • COMMAND LINE

Access via the command line was necessary to perform maintenance, upgrades and to run scripts, which is quite practical. On the remote machine, the PuTTY software was installed, which creates an SSH connection using the Raspberry Pi's IP address; the service only needs to be installed once:

Server installation: sudo apt-get install ssh

  • FILE TRANSFER

All the previous communication protocols are limited when it comes to direct file transfer, so the option found was the FTP protocol, the existing TCP/IP standard oriented to file transfer, which is independent of operating system or hardware. This is important for analyzing scripts and exchanging data with the Raspberry Pi; for this, the WinSCP software was used on the access computer, together with the destination IP, to exchange files.

After configuring the communication, the process of creating the computer vision scripts and integrating the robotics resources began.

Step 7: Creating and Configuring the Computer Vision System (CVS)

The OpenCV library is where all the video processing operations are done, and the information extracted from this processing drives the decision-making of the robotics platform and the other components that make the system dynamic. The version used is OpenCV-2.2.8, which comes with examples, and all its functions are available in versions for Windows, Mac and Linux.

It was necessary to do the installation with the commands:

Update the system: sudo apt-get update
Install updates: sudo apt-get upgrade
Install the OpenCV library: sudo apt-get install python-opencv
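A quick way to confirm that the Python bindings were installed (just a sanity check, not one of the project scripts):

import cv2

# If the import works, the python-opencv package is installed correctly
print("OpenCV bindings imported")
print(getattr(cv2, "__version__", "version attribute not present in this build"))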

The CVS (Computer Vision System):

The figure in this step shows the set of system functions. These functions are executed sequentially and repeatedly, producing real values for the dynamic characteristics of the object (coordinates and size) up to six times (or six variations) per second. That is, every second up to six values are generated, processed and compared, driving the platform.

Use the figure in this step to follow the description of each stage:

1. First it was necessary to capture (or receive) the image, or more specifically the frame containing it. The size is 160x120 pixels. A larger frame (e.g., 640 pixels wide and 480 pixels high) caused slowdowns in the recognition process when the image was transmitted remotely. The webcam's default color system is RGB, which represents each pixel of the frame through the basic colors red, green and blue. These colors are stored in a per-pixel vector; for example, the color red is represented by the values (255, 0, 0), one for each channel. That is, each pixel has its RGB value represented by three bytes (red, green and blue).
2. After the image was captured, it was converted from the RGB color system to HSV (hue, saturation and value), since this model describes colors in a way similar to how the human eye recognizes them. The RGB system defines colors as combinations of the primary colors (red, green and blue), while the HSV system defines colors by their hue, saturation and value, which makes it easier to extract the information. Step 2 of the diagram shows the conversion from RGB to HSV using "cvtColor", a native OpenCV function that converts the input image from one color system to another.
3. With the image in the HSV model, it was necessary to find the correct minimum and maximum HSV values of the color of the object to be followed. To store these values, two vectors were created with the minimum and maximum HSV values of the object's color: minimum hue (42), minimum saturation (62), minimum value (63), maximum hue (92), maximum saturation (255), maximum value (235). In the next step a binary image is generated, so that the relevant information is limited to the range of these values, which bound the color pattern of the object. A function that compares each pixel against the inserted threshold vectors was used; the result was a binary image with a single value per pixel.
4. With the segmentation done, resulting in the binary image, noise is still present in the frame. This noise consists of elements that hinder the segmentation (including obtaining the actual size) of the object. To fix (or try to fix) this problem, it was necessary to apply a morphological transformation to the frame, so that pixels that did not match the desired pattern were removed. For this, the morphological EROSION operator was used, which performs a "clean-up" of the frame, reducing the noise it contains.
5. Then the "moments" function was used, which calculates the moments of the positive (white) contour by integrating over all the pixels in the contour. This is only reliable on a frame that has already been binarized and denoised, so that the size of the object's contour is not altered by stray pixels in the frame, which would add noise and redundancy to the information.
   moments = cv2.moments(imgErode, True)

   In the proposed example, it was necessary to find the area of the contour and its coordinates in the frame in order to calculate the repositioning of the chassis. The area of the object is the sum of the positive (white) pixels, which produces the m00 moment and is recorded in the variable "area":
   area = moments['m00']
   The contour here refers to an object, not a polygon, and this value is an approximation of the area of the positive (white) pixels that make up the object. If the area is zero, the presence of an object of the tracked color (in this case, green) in the frame is disregarded. This feature helps the robot move towards or away from the target object, addressing the depth problem, that is, the object getting too close to or too far from the chassis.

   From the segmented area it was then possible to define the coordinates of the object in the frame. The coordinates were obtained from the parameters of the moments function; they are based on the centroid of the object, which is only computed if the area of the object is greater than zero. This feature was important for the horizontal and vertical adjustment movements of the robot, increasing the degrees of freedom and reducing restrictions on the movement of the object to be identified. Combining the area (m00) with the first-order moments of the function, it was possible to find the (x, y) coordinates.

   Thus the (x, y) values refer to the position of the segmented object relative to the frame. To make it easier to interpret the information extracted from the coordinates, a function that draws a circle at the centroid of the object was applied. A sketch putting these stages together is shown below.
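Here it is in Python (illustrative variable names and a resize-based frame reduction only; the attached SVC.py is the real script):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # 1. open the webcam
ok, frame = cap.read()                         #    grab one frame
frame = cv2.resize(frame, (160, 120))          #    keep the frame small (160x120)

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)   # 2. convert to the HSV color system

lower = np.array([42, 62, 63], np.uint8)       # 3. minimum H, S, V of the green object
upper = np.array([92, 255, 235], np.uint8)     #    maximum H, S, V
mask = cv2.inRange(hsv, lower, upper)          #    binary image of the matching pixels

kernel = np.ones((5, 5), np.uint8)
imgErode = cv2.erode(mask, kernel)             # 4. erosion removes stray noise pixels

moments = cv2.moments(imgErode, True)          # 5. moments of the white region
area = moments['m00']
if area > 0:
    x = int(moments['m10'] / area)             # centroid of the object
    y = int(moments['m01'] / area)
    cv2.circle(frame, (x, y), 5, (0, 0, 255), 2)   # draw a circle at the centroid

cap.release()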

The result:

Step 6 will be demonstrated over the course of the next steps; it is necessary to explain the configuration of the motors and other parts before proceeding with that part.

The attached script "SVC.py" helps you begin to understand how the system will work. If it gives you problems on your Raspberry Pi, I recommend reviewing the installed libraries or contacting me.
How to execute a Python script on the Raspberry Pi?? Click here!

Shall we proceed with the robot assembly? Get to work!

Attachments

Step 8: Horizontal & in Depth Motion Using the CVS Pt1.

After the segmentation and the extraction of information about the object in the image, some techniques were used to make the chassis move in response to the segmented object. There are three kinds of movement, designed to interact with the external environment using the software and the hardware in a more complete way. The movements respond to the location of the object and make use of its depth and/or position:

  • Forward
  • Backward
  • Right
  • Left
  • Upward camera movement
  • Downward camera movement

The DC motors and the robotic chassis perform the forward, backward, left and right movements. The horizontal and depth adjustment movements are done with binary digital pulses, where each pin assumes only two values (0 and 1), driven by the control module, the H-bridge, through its input pins. These pins are connected by jumpers to the GPIO of the RPi; in the script it is necessary to import the GPIO library and declare the GPIO mode used:

sudo apt-get install python-rpi.gpio

The images in this step describe all the connections needed for the robotic chassis to work with the H-bridge and the motors; power was supplied by 4 AA batteries of 1.5V each, resulting in 6V and 6,000mAh. In the source code it was necessary to import this library and also choose the GPIO mode.
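Just to illustrate the script side (the pin numbers below are only illustrative, not necessarily my exact wiring), the GPIO setup for the H-bridge inputs looks like this:

import RPi.GPIO as GPIO
import time

# Illustrative BCM pin numbers for the L298N inputs (IN1..IN4); match them to your wiring
IN1, IN2, IN3, IN4 = 17, 22, 23, 24

GPIO.setmode(GPIO.BCM)               # GPIO.BOARD numbering is the other option
for pin in (IN1, IN2, IN3, IN4):
    GPIO.setup(pin, GPIO.OUT)

# Example: both motors forward (one HIGH/LOW pair per motor)
GPIO.output(IN1, GPIO.HIGH)
GPIO.output(IN2, GPIO.LOW)
GPIO.output(IN3, GPIO.HIGH)
GPIO.output(IN4, GPIO.LOW)
time.sleep(2)                        # run forward for two seconds

GPIO.cleanup()                       # stop the motors and release the pins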

Step 9: Horizontal & in Depth Motion Using the CVS Pt2.

This step describes the depth and horizontal movements.


But what is the depth movement for??? Yes, this movement is useful!

It was developed to allow the chassis to move forward and backward so that it can approach or move away from the object (the green color). This movement is based on the area of the desired color (green) in the image/frame. The logic was: do not let the object get closer than the desired distance, where it could collide with the chassis, and do not let it get so far away that the chassis loses sight of it and can no longer catch it! In this case a minimum object size of 50 pixels was set, which is equivalent to approximately 50 cm from the chassis (this may vary in your project). Within these area limits the system stays static (not moving); as the object approaches or moves away, the minimum and maximum distances are enforced so as not to lose the object and to avoid a collision with the chassis. The "MOVIMENTO_PROFUNDIDADE.py" code exemplifies this movement, and a sketch of the idea follows below.
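The sketch uses illustrative thresholds and hypothetical motor helper functions; the real values and motor commands are in MOVIMENTO_PROFUNDIDADE.py:

AREA_MIN = 50.0        # minimum usable area: object roughly 50 cm away
AREA_MAX = 3000.0      # illustrative upper bound: object too close

def adjust_depth(area, forward, backward, stop):
    """Move the chassis so the segmented area stays inside the acceptable band.
    forward, backward and stop are callables that drive the H-bridge pins."""
    if area <= 0:
        stop()                 # no green object in the frame
    elif area < AREA_MIN:
        forward()              # object too small / too far: approach it
    elif area > AREA_MAX:
        backward()             # object too large / too near: back away
    else:
        stop()                 # inside the acceptable band: stay still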

Below is a video exemplifying it! Enjoy! :-)

Along with the depth movement, the horizontal movement will be described. But why, Saymon?? Because both use the same motors! :P
All the control functions are analyzed and set the direction so that the robot can move on the same axis (in this case, right and left). Since the control is horizontal, the motors were configured to move based on the X-axis (horizontal) coordinate; it is this X coordinate that is used for decision-making and the necessary adjustments.

The adjustment is performed as follows:
It starts from the frame segmentation and processing; the sequence of steps is dynamic, i.e. it may change according to the adjustment needed. The minimum area (considered usable) was set to 50 pixels, which is equivalent to approximately 50 cm from the chassis. The feature of displaying a circle at the centroid of the object was used to improve the interpretation of the recognition in all the movements, as well as in the integration presented in the next steps. To perform the horizontal adjustment functions properly, a logic was developed covering all the stages of segmentation, extraction of image information and the horizontal adjustment technique. This technique uses the center of the frame as the reference: from it, a limit is set on the right side and on the left side, accepting movement of the object within up to 60% of the total width of the X-axis, i.e. 30% on each side of the center of the frame. If the coordinates satisfy these limits, the chassis does not perform any movement and the motors are stopped (off), since the object is considered within the horizontal limit.
For this case the code "MOVIMENTO_HORIZONTAL.py" is the example (a sketch of the logic follows below)!! Try it and we will continue our project!!
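The sketch of the 30%-per-side tolerance logic, with hypothetical helper names (the real code is in MOVIMENTO_HORIZONTAL.py):

FRAME_W = 160                  # frame width in pixels
CENTER_X = FRAME_W / 2
BAND_X = 0.30 * FRAME_W        # 30% tolerance on each side of the center

def adjust_horizontal(x, turn_left, turn_right, stop):
    """Turn the chassis so the object's centroid x stays near the frame center."""
    if x < CENTER_X - BAND_X:
        turn_left()            # object drifted to one side
    elif x > CENTER_X + BAND_X:
        turn_right()           # object drifted to the other side
    else:
        stop()                 # inside the acceptable 60% band: motors off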

After this step, we move on to the vertical movement, which uses yet another motor! :-)

Step 10: Vertical Motion Using the CVS

For this movement another motor was used: as specified, a 9g servo motor coupled to a pan-tilt support, next to the camera (see the pictures of this step).


Finally, a mechanism was developed to get the information from the Y-axis and use it so that, in addition to the horizontal and depth movements, there is also a vertical camera movement. For this, a pan-tilt support was used with the camera, enabling the vertical movement of the artificial receptor in order to track the movement of the object (green color) on the Y-axis. It was mounted at the front of the system, so that the camera can "see" the object before it collides with the chassis.

This pan-tilt support has a micro-servo, which is the motor responsible for the vertical movement of the bracket along with the camera. It is driven by a Pulse-Width Modulation (PWM) controller; to use it, it was necessary to install a kernel that provides this resource.

The kernel chosen was Adafruit's Occidentalis v0.2 (click here to download), which includes the standards for declaring the servo. These commands are written to a file during script execution by a function. This kernel enables a PWM controller on GPIO 18 (pin 12), allowing the servomotor to rotate to any angle from 0 to 180 degrees.


To perform the vertical adjustment properly using the pan-tilt support, a technique similar to the horizontal one was developed: the logic uses the center of the frame on the Y-axis, with an upper and a lower limit, to adjust the artificial receptor vertically. The Y value of the object may remain within a 60% tolerance band, similar to the horizontal adjustment, in which case the support does not move. If it exceeds 30% at the top, an adjustment is made towards the servo motor's upper limit (180 degrees) as necessary. If the Y value of the object exceeds 30% at the bottom relative to the center of the Y-axis, an adjustment is made towards the servo motor's lower limit (0 degrees) as necessary.
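As a rough sketch of the servo side, here is an alternative using RPi.GPIO's software PWM instead of the Occidentalis kernel interface that my scripts use (the duty-cycle mapping is approximate):

import RPi.GPIO as GPIO
import time

SERVO_PIN = 18                       # GPIO 18 (physical pin 12), as in the text

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)        # 50 Hz frame rate for a hobby servo
pwm.start(7.5)                       # roughly the middle of the travel

def set_angle(angle):
    """Move the servo to an angle between 0 and 180 degrees (approximate mapping)."""
    duty = 2.5 + (angle / 180.0) * 10.0   # ~2.5% duty at 0 deg, ~12.5% at 180 deg
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.3)                  # give the servo time to reach the position

set_angle(180)                       # tilt to the upper limit
set_angle(0)                         # tilt to the lower limit
pwm.stop()
GPIO.cleanup()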


Look at this video (hey, the red object was used only for a test!):

          The "MOVIMENTO_VERTICAL.py" refers algorithm will this movement. Now yes !!! The robot is moving at all angles and movements and ready to implement features of computer vision to learn the moves!

Our killing machine is almost done!! :-)

Step 11: Diagram of the System RR.O.P.

The figure in this step displays the entire context of the project. The parts of the system were categorized into modules (Central Module, Receptive Module and Reactive Module), where each part has its own activities, and only by working together do they ensure the full functionality of the system.


Central Module: this part of the system is the Raspberry Pi. It connects to the router in order to create a network between itself and the Receptive Module, and it holds all the configuration of libraries, functions and the webcam that provides the frame to be processed. The computer vision system created for this design was first used to describe the individual movements, categorized as horizontal movement, vertical movement and depth movement, and then the three movements were integrated in the robotic system using the resources installed on the Raspberry Pi. To integrate the three movements, an adjustment logic based on comparisons was used: first the object must be detected, then the need for a depth adjustment is analyzed, so that the distance between the object and the system is corrected immediately and the other movements can be made afterwards. Once the depth adjustment is done, a comparison is made between the horizontal and vertical limits to determine which of the two movements should be prioritized: the one whose pre-set limits are less respected, i.e. whose values are farther from the acceptable range, is adjusted first. Right after that, the remaining adjustment (horizontal or vertical) is made, according to the previous step. With these three adjustments the object is not lost from the frame at acceptable speeds.
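A minimal sketch of that priority logic, assuming the constants and helper functions from the earlier sketches are already defined (this is not the exact main-project.py code):

FRAME_H = 120                  # frame height in pixels
CENTER_Y = FRAME_H / 2
BAND_Y = 0.30 * FRAME_H        # same 30% tolerance on the Y axis

def control_step(area, x, y):
    """One pass of the integrated logic: depth first, then whichever axis
    (horizontal or vertical) is furthest outside its acceptable band.
    adjust_depth, adjust_horizontal, adjust_vertical, AREA_MIN/AREA_MAX and the
    motor helpers (forward, backward, turn_left, turn_right, stop) are the
    hypothetical pieces sketched in the earlier steps."""
    if area <= 0:
        stop()                                       # no green object detected
        return
    if area < AREA_MIN or area > AREA_MAX:
        adjust_depth(area, forward, backward, stop)  # fix the distance first
        return
    x_error = max(0, abs(x - CENTER_X) - BAND_X)     # distance outside the X band
    y_error = max(0, abs(y - CENTER_Y) - BAND_Y)     # distance outside the Y band
    if x_error >= y_error and x_error > 0:
        adjust_horizontal(x, turn_left, turn_right, stop)
    elif y_error > 0:
        adjust_vertical(y)                           # nudge the camera servo
    else:
        stop()                                       # every limit respected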

Reactive Module: in this part of the system the actuators were categorized. These actuators play the role of integrating the robot with the external environment through locomotion: they receive the adjustment parameters and allow the system (chassis or camera) to move as the object with the color of interest moves. Their functions are passed by the Central Module through a communication over the GPIO hardware interface. As the Central Module sends commands, the actuators work together to ensure movement while the object is present in the capture area of the frame.

Receptive Module: this part of the system is used to show the system to the user, through the processed video or the execution of the script with the entire algorithm developed for this work, using the communication protocols already described. So that the user can follow the decision-making and the execution of the functions, OpenCV drawing resources, freely available in the library's documentation, were used to show the robot's state through colors and markers that vary according to the horizontal, vertical or depth adjustment being made.

Step 12: Last Implementation RR.O.P.

The final implementation of the system was done!
That is, as shown above, all the movements were united in the same code! It was awesome!!!

But to make it easier to understand the system, OpenCV features (the receptive module) were used to symbolize the states!! This step presents two codes:

          • "main-project.py" which features integration with the features of OpenCV

Of course, the OpenCV resources greatly facilitate our life!! But the computational cost is higher, of course! :PPPPPP



This video presents the integrated movements without the OpenCV display features: all motors working together! I hope you enjoy it!!!


Step 13: [FUN] FAIL TESTS

In the course of learning there were some tests that failed... just for fun I'll leave them here... xD

Not everything is flowers... and some things will always go wrong. Fair enough, we learn from mistakes, agreed? These tests did not go very well, but they formed the basis for the construction and completion of the robot. Enjoy seeing how not calibrating things right may result in segmentation failures.. hahaa

Step 14: Testing and Running the Project

Now we have come to the end of the project!

Just to remind you: the project uses the power bank to power the Raspberry Pi and batteries for the motors!!

The entire robot was built step by step as shown above. Some details may have gone unnoticed along the way, but I'm here to help! I hope everyone enjoyed the project and used it to learn a little more about computer vision! I want to thank everyone who commented with tips or suggestions (they are very good things) and I'm here to help everyone! Really, thanks for everything, and I learned a lot!! :') Thanks to my brother Sayoan Oliveira!

Just one thing: this entire project was produced by me, Saymon Oliveira, who just wants more and more people to learn and collaborate... I await contact from everyone so that we can work together and improve this project!

I used a translator to help me, because I'm not fluent in English; I apologize for the bad English. My intention really is to collaborate.

Below are the video and photos of the final project. REPLY ON MY YOUTUBE CHANNEL! THANKS :)

Sincerely, Mr. Saymon Oliveira

First Prize in the Microcontroller Contest