As robots begin to populate the planet, they will need a way to "see" the world similarly to the way we humans do, and to use that vision data to make decisions. One of the most visible computer vision systems today is the self-driving car. As humans, we take for granted how fast our brains process all the visual stimuli of the road. We just intuitively know how to stay in our lane and react to deviations by steering, without thinking too much. For a robot, however, this is a much more complex problem. Cameras mounted to self-driving cars watch for lane markers on either side of the vehicle to make sure the car is staying inside its own lane. Calculations running thousands of times per second then crunch this visual data and determine the best adjustments to steer the car. Kind of amazing, right?

## High Level Overview

In this Instructable we are going to use OpenCV to teach the Intel Edison how to spot our yellow robot logo in an image. We do this by sending the Edison a control image (the robot picture) and a test image (the T-shirt picture), and the Edison will compare the two. If there is a match, the Edison will let us know that it found the robot!

## Step 1: What Is OpenCV?

History

OpenCV stands for Open Source Computer Vision and is a library of functions used to help people code their own computer vision algorithms. OpenCV was originally developed by Gary Bradski at Intel in 1999, so it's fitting we are now running it on an Intel Edison! In addition to the Edison, OpenCV can be run on Linux, Windows, Mac, Android and iOS. Not to mention it's also been ported to work with some of the most popular programming languages, such as C++, Java, MATLAB and Python. The icing on the cake here is the fact that OpenCV is completely free for commercial and research purposes! How cool is that?

OpenCV became so popular because it provides a high-level interface to computer vision, meaning you don't need to understand all of the complex mathematics behind modern pattern-recognition algorithms or deep machine learning to start engaging with computer vision.

## Step 2: Setting Up OpenCV on Intel Edison

So now that we roughly know what OpenCV is and what it can do, let's set it up on our Edison. Luckily for us there are already amazing tutorials here and here about the nitty-gritty of getting OpenCV up and running on your Edison. Depending on the language you use to interact with the OpenCV libraries, some of the instructions in these tutorials may change. However, below is a list of the most important setup steps:

1) Make sure your Edison is up to date with the latest firmware

`configure_edison --setup`

2) Update your IOT developer kit libraries

`opkg update`

`opkg upgrade`

3) Expand the storage with a MicroSD card with over 4GB of space

This last step is important because OpenCV's many features take up a decent amount of memory. Now, if you are going to be using Python, be sure to get numpy and OpenCV using

`opkg install python-numpy opencv python-opencv nano`
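Once the packages are installed, a quick sanity check from Python confirms the bindings are importable (a sketch; the module names match the opkg packages above):

```python
# Sanity check: verify the Python bindings installed above can be located.
# find_spec looks a module up on the import path without fully importing it.
import importlib.util

for mod in ("numpy", "cv2"):
    spec = importlib.util.find_spec(mod)
    print(mod, "found" if spec else "MISSING")
```

If either module prints `MISSING`, rerun the `opkg install` line before moving on.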

## Step 3: Setting Up Webcam

Getting images onto the Intel Edison can be done a couple of different ways. You can transfer files using a USB stick, an SD card, or SSH transfer software like FileZilla. However, one of the best options is to take your pictures on the Intel Edison itself using a webcam. To set up a USB webcam you're going to have to enable UVC support. Luckily there is an in-depth tutorial here; the essentials are:

1) Setting up custom linux kernel to enable UVC

```
~/edison-src> bitbake virtual/kernel -c menuconfig
```

2) Make sure you enable USB Devices

```
cp edison-src/build/tmp/work/edison-poky-linux/linux-yocto/3.10.17+gitAUTO*/linux-edison-standard-build/.config edison-src/device-software/meta-edison/recipes-kernel/linux/files/defconfig
```

3) Build and Flash

```
~/edison-src> bitbake virtual/kernel -c configure -f -v
~/edison-src> bitbake edison-image
```
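After reflashing, you can quickly check whether the kernel registered the camera (a sketch; the exact device index varies depending on what else is attached):

```shell
# A UVC webcam should appear as a video4linux device node after boot;
# the exact index (video0, video1, ...) depends on what else is attached.
ls /dev/video* 2>/dev/null || echo "no V4L2 device found"
```

If no device shows up, double-check that UVC support was enabled in the kernel config before the build.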

## Step 4: Light Up Wooden Case Part 1

To bring OpenCV out from the purely digital world, I've designed a digital camera that lights up green when the Intel Edison finds a match and red when it doesn't. To begin the build I laser cut out the camera pieces in both 1/8" plywood and 1/8" acrylic. Then, using wood glue, I carefully secured the thinner camera pieces together. Next I dropped in the Intel Edison and stacked up the remaining laser-cut pieces around it to form the camera case.

## Step 5: Light Up Wooden Case Part 2

Now that the camera body is together, let's solder together the lights. Because we only needed to light the camera up with green and red LEDs, I soldered four of each to the underside of the top of the camera box. This allowed for a really clean, sleek look from the outside through the acrylic reveals.

## Step 6: Light Up Wooden Case Part 3

Finally, to power the LEDs we are going to use the 5V signal from the Intel Edison gated through a 2N2222 transistor. This is because the output pins on the Intel Edison cannot pass enough current to properly light up the LEDs to full brightness. The transistor allows us to control the flow of a larger current from a tiny signal coming out of the Edison digital ports.
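As a rough sanity check on the transistor stage, the base resistor can be sized like this (a sketch only: the GPIO logic level, per-LED current, and forced gain are assumed values, not measurements from this build):

```latex
% Four LEDs per color at roughly 20 mA each gives a collector current target:
I_C \approx 4 \times 20\,\mathrm{mA} = 80\,\mathrm{mA}
% Driving the transistor well into saturation, use a forced gain of about 10:
I_B \approx I_C / 10 = 8\,\mathrm{mA}
% Assuming a 3.3 V GPIO level and V_{BE} \approx 0.7\,\mathrm{V}:
R_B \approx \frac{3.3\,\mathrm{V} - 0.7\,\mathrm{V}}{8\,\mathrm{mA}} \approx 330\,\Omega
```

A standard 330 Ω resistor on the base is a common starting point for a small switching stage like this.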

## Step 7: Example Project: Pattern Matching the Robot Part 1

Let's finally put our fingers to the keys and generate some code. For this tutorial I use Python with OpenCV to teach the Edison how to find which T-shirt has the right yellow Instructables robot on the front. Download the images attached to this step if you'd like to run the demo with the same photos.

Import Libraries

```
import numpy as np
import cv2
from matplotlib import pyplot as plt
```

The first step of any Python script is to import the libraries it uses. In this case we are using the numpy library, which you will have downloaded through the setup tutorial already, the cv2 library, which is the Python binding of OpenCV, and the matplotlib library so we can graph the outputs of our code.

Import Images

```
img1 = cv2.imread('RedShirt2.jpg', 0)
img2 = cv2.imread('instructablesRobot.jpg', 0)
```

First we import the images and convert them to greyscale. OpenCV imports a color image as three matrices of data: the red, green and blue channels, essentially three different images. However, SIFT works best on greyscale images, so we collapse those channels together into a single greyscale image (the `0` flag to `imread` does this for us). Next we call the detectAndCompute function. What we are doing here is having our algorithm find "interesting" points in our image. Interesting points, as defined by the SIFT algorithm, are mathematically distinctive points. That means if you find another point in another image with the same mathematical signature, there is a good chance those points match. The goal of this script is to find enough unique points to account for bad matches and outliers.
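The idea of collapsing the three color channels can be sketched in plain numpy (a simplified sketch: `cv2.imread` with flag `0` actually applies a weighted luma conversion rather than a straight average):

```python
import numpy as np

# A tiny stand-in for a 2x2 color image: shape (height, width, 3 channels).
color = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# Straight average of the three channels -> one greyscale intensity plane.
grey = color.mean(axis=2).astype(np.uint8)
print(grey.shape)   # the third dimension is gone: one value per pixel
```

The same principle scales up to the full-resolution shirt photos: three planes of color data become one plane of intensities that SIFT can work on.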

## Step 8: Example Project: Pattern Matching the Robot Part 2

Finding KeyPoints and Descriptors

```
sift = cv2.SIFT_create()   # on older OpenCV builds this may be cv2.SIFT()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
```

In the images above you'll notice the red circles around certain pixels in the image. Those are the unique features, also known as keypoints. Each keypoint has a descriptor. There are many more keypoints for these images, but I plotted only ten per image so as not to overcrowd them. The more keypoints you can find the better, because they give the algorithm more information to work with, allowing it to make better decisions about how the two images match.

## Step 9: Example Project: Pattern Matching the Robot Part 3

Matching Images

```
index_params = dict(algorithm=1, trees=5)   # algorithm 1 = FLANN kd-tree index
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)
```

Now that we have our lists of keypoints and descriptors, let's dive into FLANN matching. FLANN stands for Fast Library for Approximate Nearest Neighbors. It solves the optimization problem of finding similar points quickly. In this case we feed the descriptors found in the last step into the algorithm, and it matches similar points found in the two images. Sometimes the computer will jump to conclusions and match unlike features if you don't have your constraints dialed in just right.
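What `knnMatch` with `k=2` returns is, for each descriptor in the first image, its two closest descriptors in the second image; that pair of distances is exactly what the ratio test below consumes. A brute-force sketch with made-up distances (real FLANN builds an approximate index to avoid computing this full distance matrix):

```python
import numpy as np

# Made-up distances from 4 descriptors in image 1 to 5 descriptors in image 2.
rng = np.random.default_rng(0)
dists = rng.random((4, 5))

# k=2 nearest neighbours boils down to: for each row, take the two smallest.
order = np.argsort(dists, axis=1)
best, second = order[:, 0], order[:, 1]
for i in range(dists.shape[0]):
    print(f"query {i}: best={best[i]} (d={dists[i, best[i]]:.2f}), "
          f"second={second[i]} (d={dists[i, second[i]]:.2f})")
```

A match is trustworthy when the best distance is much smaller than the second-best, which is the intuition behind the ratio test in the next section.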

Lowe's Ratio

One of those constraints is Lowe's ratio test. Setting this ratio below 0.8 will usually get rid of around 90% of your false-positive matches; in our code we narrow it a bit further to 0.6, as seen here:

```
good = []
for m, n in matches:
    if m.distance < 0.6 * n.distance:
        good.append(m)
```
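The same filter can be run end to end with stand-in match objects (hedged: `Match` here mimics only the `.distance` attribute of OpenCV's real `DMatch`):

```python
from collections import namedtuple

# Stand-in for cv2.DMatch: only the distance attribute matters for the test.
Match = namedtuple("Match", "distance")

# Each pair is (best, second-best) as returned by knnMatch with k=2.
matches = [(Match(10.0), Match(40.0)),   # 10 < 0.6 * 40 -> distinctive, keep
           (Match(30.0), Match(35.0)),   # 30 > 0.6 * 35 -> ambiguous, drop
           (Match(5.0),  Match(50.0))]   # clearly distinctive -> keep

good = []
for m, n in matches:
    if m.distance < 0.6 * n.distance:
        good.append(m)

print(len(good))   # → 2
```

Only the matches whose best distance is well below their runner-up survive, which is what keeps outliers out of the final plot.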

Plotting

The final step is to generate a plot to show how our images match. In the case of the red shirt (which is greyscale in the image above), notice the green lines connecting the two images. This means we have a match! Great news: we found the right robot. Below is the code used to generate that plot. Be aware that if you are running the Edison headless, you will want to save this to a picture, because there is no screen to print to.

```
draw_params = dict(matchColor=(0, 255, 0),   # draw matches in green
                   singlePointColor=None,
                   flags=2)

img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, **draw_params)
plt.imshow(img3)
plt.show()   # running headless? use plt.savefig('matches.png') instead
```

## Step 10: Success!

Congratulations! You just dove into one of the most complex pieces of computer vision science. There are a myriad of resources at your disposal to learn more about OpenCV and the algorithms used in this tutorial. Here are just a couple I found extremely helpful:

OpenCV Docs
http://opencv.org/documentation.html

SIFT

https://en.wikipedia.org/wiki/Scale-invariant_feat...

FLANN Feature Matching

http://docs.opencv.org/master/dc/dc3/tutorial_py_matcher.html#gsc.tab=0

Thank you for checking out this Intro to OpenCV tutorial, and have fun teaching the robots of the future to "See"!

Finally an Instructable on image processing on an Edison! Which IDE should I use to program the Edison for this purpose?

You can use any text editor, then SSH in and transfer the file over to the Edison. I recommend getting the code to work on your computer first so you can debug, and then getting it working on the Edison.

I have been facing a couple of problems trying to get OpenCV installed on my computer.

Did you try `sudo apt-get install python-opencv`?

I am currently working on a project that uses OpenCV on a Raspberry Pi 2. Do you have any ideas on how to calculate the real-time position of an object using a camera?

Really nice.

Impressive!

Maybe you could use this to recognize faces? And you could unlock your door just by showing your face!

Wouldn't recommend it... I used to use face log-in on my computer, until I tried holding my Facebook profile picture on my phone screen up to the webcam... it worked fine, so it's far too easy to get past. Saving that up to prank the next person I find using face log-in!

Totally possible!

How does the program handle distortions? If the shirt gets wrinkles, does it still find a match? Or if your image is not perpendicular to the camera perspective, does it understand the ratios of the keypoint distances?

Interesting.

If you took snapshots of streaming video to do this static processing, what do you think the snapshot rate would be? I'm thinking about robot target acquisition and homing.

Nice one.

Nice job :) What is the model of laser cutter that you are using?

So can I use any webcam, or does it have to mention UVC on the packaging somewhere?

Thanks :)

Very helpful tutorial! Any idea of doing the same with a Raspi?

Sadly not at this time, but check out this Instructable: https://www.instructables.com/id/RasPi-OpenCV-Face-Tracking/

Can I do this in real time over a WiFi connection? What distance can the Edison's WiFi reach? Is it the same as the Galileo Gen 2? Thank you, Zacharyianhoward.

Yeah, you might be able to transmit the video through a webstream for live video. I'm unsure of the specs for the Galileo, but I imagine there is a spec sheet somewhere online.