This tutorial describes the steps needed to build a balloon-targeting and destroying turret using the Zybo Zynq-7000 development board made by Digilent. The Zynq-7000 is a nice platform to develop on, as it incorporates a relatively powerful ARM Cortex-A9 processor alongside a Xilinx Artix-class FPGA on the same chip. This lets you run platforms such as Linux while still adding custom hardware easily. Other systems could be substituted, provided they can run RTLinux and OpenCV.

The idea behind this project was automated targeting of a moving object, and for simplicity's sake, balloons were chosen. The balloon is detected using a webcam that feeds the ARM processor running OpenCV. The center of the balloon is located in the resulting image, and coordinates are sent to the FPGA, which moves the servos and enables the laser once the correct position has been set. We have omitted the tracking algorithm for safety considerations, but describe the overall process in a later section. You must always be careful when working with lasers, and you must especially consider the risks when combining a laser with potentially faulty computer-vision code. However, if care is taken, completing this project culminates in a great learning experience.

The following items will be necessary to complete this project:

  • Digilent Zybo Zynq-7000 Dev board
  • Xilinx Vivado IDE + SDK, v2016.2
  • Device Tree Compiler (dtc)
  • UART Serial program (screen, minicom)
  • SSH Client (openssh)
  • gcc
  • build-essential
  • libgtk2.0-dev
  • pkg-config
  • libavcodec-dev
  • libavformat-dev
  • libswscale-dev
  • python-dev
  • python-numpy
  • MicroSD card
  • PC ATX power supply
  • Webcam
  • 2x Servos
  • Prototyping board
  • Laser Diode
  • Heatsink assembly for laser diode
  • Appropriate lens for laser wavelength
  • LT1637 Op-amp
  • Power NFET transistor (rated for 10A or more; it will need a heatsink, as it will be dissipating up to 3W)
  • Low resistance, high power resistor (we used a 10W 1 Ohm resistor)
  • 2x 10K resistors
  • 100 Ohm and 560 Ohm (or close) resistors for setting voltage at laser diode
  • LT1083 Series (or any other) adjustable linear regulator
  • DAC (we used a Microchip MCP4911, any DAC could be used though)

Step 1: Partition the MicroSD Card & Load a Linux Root Filesystem

First, the card must be partitioned, with one FAT32 boot partition and one ext4 root filesystem partition (the kernel boot arguments used later assume ext4). GParted or fdisk on a Linux system can be used to do this. For the sake of simplicity, our boot partition is labeled "ZYBO_BOOT" and our root filesystem is labeled "ROOT_FS".

For our root filesystem, we chose to use Arch Linux. Theoretically, any root filesystem should work, such as Ubuntu or Xillinux as long as it is compiled for ARM. We have not tried these systems, but setting up a development environment and installing packages should be easier on those systems than on Arch Linux.

Step 2: Preparing to Compile U-Boot

The next step toward booting Linux on the Zybo board is U-Boot, which we compile from source.

First, we cloned U-Boot from the master-next branch of Digilent’s U-Boot repository. Then, we modified /opt/Xilinx/Vivado/2016.2/.settings64-Vivado.sh so that nothing tampers with LD_LIBRARY_PATH. Then, we sourced the Xilinx environment at /opt/Xilinx/Vivado/2016.2/settings64.sh. If you're using a 32-bit system, then modify .settings32-Vivado.sh and use settings32.sh. And of course, replace /opt/Xilinx with your install location.

After all of that, execute these lines in your terminal to use the Xilinx tools for compilation.

export CROSS_COMPILE=arm-xilinx-linux-gnueabi- 
export ARCH=arm
export PATH=$HOME/tutorial/u-boot-Digilent-Dev/tools:$PATH

Step 3: Compiling U-Boot

Open u-boot-Digilent-Dev/include/configs/zynq_zybo.h and find:

"sdboot=if mmcinfo; then " \
	"run uenvboot; " \ 
	"echo Copying Linux from SD to RAM fs... && " \ 
	"fatload mmc 0 0x3000000 ${kernel_image} && " \ 
	"fatload mmc 0 0x2A00000 ${devicetree_image} && " \ 
	"fatload mmc 0 0x2000000 ${ramdisk_image} && " \ 
	"bootm 0x3000000 0x2000000 0x2A00000; " \ 
"fi\0" \

And replace with:

"sdboot=if mmcinfo; then " \
	"run uenvboot; " \ 
	"echo Copying Linux from SD to RAM fs... && " \ 
	"fatload mmc 0 0x3000000 ${kernel_image} && " \ 
	"fatload mmc 0 0x2A00000 ${devicetree_image} && " \ 
	"bootm 0x3000000 - 0x2A00000; " \ 
"fi\0" \

Also, find:

"fdt_high=0x20000000\0"    \
"initrd_high=0x20000000\0"    \

and replace it with:

"fdt_high=0x1C000000\0"    \
"initrd_high=0x1C000000\0"    \

Then, to start compilation, execute:

make mrproper
make zynq_zybo_config
make

Step 4: Compiling Linux

To compile the Linux kernel, we cloned the next branch of Digilent's Linux repository. We ran 'make menuconfig' and were greeted by a convenient ncurses menu.

We enabled the following configuration options

Device Drivers > Multimedia support > Media USB Adapters

<*>   USB Video Class (UVC)  
<*>   CPiA2 Video For Linux 
<M>   GSPCA based webcams  --->
<M>   USB Philips Cameras  
<*>   USB ZR364XX Camera support  
<*>   USB Syntek DC1125 Camera support
Device Drivers > DMA engine support
[*]   Async_tx: Offload support for the async_tx api
Floating point emulation
[*]     Support for NEON in kernel mode

Save the configuration and exit. Then, make sure this line is present in the generated .config:

# CONFIG_USB_OTG is not set

Then we compiled the kernel, copied it to the boot partition, and installed the modules by executing the following:

make UIMAGE_LOADADDR=0x8000 uImage modules
cp arch/arm/boot/uImage /media/ZYBO_BOOT/
make INSTALL_MOD_PATH=/tmp/ modules_install
sudo cp -r /tmp/lib/modules/3.18.0-xilinx-46110-gd627f5d/ /media/ROOT_FS/lib/modules/

Step 5: Generating the Vivado Project and Adding AXI GPIOs

After compiling Linux, you must design the hardware and generate a .bin file. We need a complete description of the FPGA system before we can develop a first stage boot loader and a device tree that describes the available peripherals.

To begin, clone the project template from Zembia's repo.

Then, within the Templates/02_template__project__zybo_vivado_2015_1/sources/bd/design_1.tcl, at line 13, change

set scripts_vivado_version 2015.1

so that it matches your installed Vivado version. We're running 2016.2, so we changed it to

set scripts_vivado_version 2016.2

Then, open Vivado and run the TCL script 'create_vivado_project.tcl'. This will generate a block design. Once the block design has been created, open it, right click, and click "Add IP". A window should appear; in the search box, type AXI GPIO. Add two of these modules. Once the blocks have been added, double click them to configure them: the Servo block needs to be set to 16 bits wide, while the SPI block needs to be set to 17 bits wide. Now right click on the GPIO outputs and click "Make External". Once this has been done, a box will appear at the top of the window with a button saying "Run Connection Automation". Click this and it will connect the appropriate signals to the GPIO blocks. Finally, right click FCLK0, which feeds these blocks, and make it external as well.

Step 6: Create FPGA Modules for Servo and Laser Control

Next, modules must be created on the FPGA for servo and laser-power control. Software will write to the registers of these modules, and the modules will interpret their register contents to determine how to drive the servos and laser power. The servos require a 50Hz square wave with a duty cycle varying from 5% to 10% (i.e. an on time of 1ms to 2ms out of the 20ms period). This is easily produced in the FPGA using a clock divider and a counter. The provided modules take in an 8 bit position value and turn it into the correct duty cycle. 8 bits was chosen because the servos we used (MG996R) have a maximum rotation of 120 degrees and a dead band of 5us, which over the 1ms of pulse-width range gives a maximum of 200 distinguishable steps; this fits nicely within 8 bits (256 possible steps). The outputs of these modules will have to be routed to a Pmod output on the Zybo board once they are incorporated into the project.
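The position-to-pulse math can be sketched in software. This Python sketch assumes a hypothetical 1 MHz counter tick for illustration; the actual divider in the Verilog depends on which FCLK frequency you route to the module.

```python
# Sketch (not the provided Verilog): map an 8-bit servo position onto a
# 1-2 ms pulse width, assuming a hypothetical 1 MHz counter tick.
TICK_HZ = 1_000_000            # counter tick rate (assumption)
PERIOD_TICKS = TICK_HZ // 50   # 50 Hz frame -> 20,000 ticks (20 ms)
MIN_PULSE_TICKS = TICK_HZ // 1000   # 1 ms pulse  ->  5% duty
MAX_PULSE_TICKS = TICK_HZ // 500    # 2 ms pulse  -> 10% duty

def pulse_ticks(position):
    """Counter compare value for an 8-bit position (0-255)."""
    if not 0 <= position <= 255:
        raise ValueError("position must fit in 8 bits")
    span = MAX_PULSE_TICKS - MIN_PULSE_TICKS
    return MIN_PULSE_TICKS + (position * span) // 255

print(pulse_ticks(0))    # 1 ms endpoint
print(pulse_ticks(255))  # 2 ms endpoint
```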

The laser power is controlled via a DAC which communicates over SPI. The DAC we used (the 10 bit Microchip MCP4911) takes a 16 bit command. The included module takes in a 16 bit value from the ARM processor and formats it the way the DAC expects. Additionally, it divides the master clock down by 16 to produce a clock slow enough for the DAC (the MCP4911 accepts a maximum SPI clock of 20MHz). It is easiest to simply add the Servo and SPI modules to the overall wrapper created in the previous step. In the newest releases of Vivado you can add the modules directly to the block diagram, but this isn't possible in the 2015 and older releases, so for compatibility we will add the modules to the wrapper.
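As a sketch of the formatting step: the 16-bit MCP4911 command packs four configuration bits ahead of the 10 data bits, with two trailing don't-care bits. The bit layout below is my reading of the MCP49x1 datasheet family; double-check it against your DAC before relying on it.

```python
def mcp4911_word(value, buffered=False, gain_1x=True, active=True):
    """Build a 16-bit MCP4911 write command (layout per the MCP49x1
    datasheet: write bit, BUF, GA, SHDN, 10 data bits, 2 unused)."""
    if not 0 <= value < 1024:
        raise ValueError("value must fit in 10 bits")
    cfg = ((0 << 15)               # 0 = write to DAC register
           | (int(buffered) << 14) # BUF: buffered Vref
           | (int(gain_1x) << 13)  # GA: 1 = 1x output gain
           | (int(active) << 12))  # SHDN: 1 = output active
    return cfg | (value << 2)      # 10 data bits, 2 don't-care bits

print(hex(mcp4911_word(512)))  # mid-scale, 1x gain, output active
```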

To do this, first we need to add the modules to the design. Download the provided modules, and in the right panel labeled "Sources", right click the Design Sources folder and select "Add Sources". Find where you downloaded the Servo and SPI modules and add them to the project.

Next, we need to connect these newly added modules to our design. To do this, simply add the desired outputs to the initial declaration of the wrapper:

  output ss;
  output sck;
  output data_out;
  output servo_xy_out;
  output servo_z_out;
  output laser_enable;
  output [3:0] register_value;
  output clk_out;
  output laser_disable;

Then add the module instantiations for the SPI master and the two servo blocks right before the endmodule statement in the wrapper:

   // Clock divider feeding the SPI master (the full port lists are in the
   // downloaded module sources; "..." marks connections elided here)
   reg [1:0] clk_cnt = 0;
   reg slo_clk = 0;
   always @ (posedge FCLK_CLK0) begin
     slo_clk <= ~slo_clk;
   end

   spi_master SPI_MAS (
     ...
   );

   Servo_control ServoXY (
     ...
   );

   Servo_control ServoZ (
     ...
   );

Once the modules have been added to the project, generate the FPGA bitstream. When synthesis finishes and the bitstream has been written, go to the menu option at "File > Export > Export Hardware" to hand the resulting bitfile and design off to the Xilinx SDK.

Step 7: Create Zynq FSBL

Now, we need to create a first stage bootloader to boot U-Boot.

Open the SDK. When it launches, go to File > New > Application project. Name it "FSBL" and hit next. Select Zynq FSBL as the project template.

In the FSBL project, navigate to src/fsbl_hooks.c.

Insert the following lines at the top

#include "xparameters.h"
#include "xiicps.h"
#include "xemacps.h"

Then, locate the u32 FsblHookBeforeHandoff(void) function and replace it with:

u32 FsblHookBeforeHandoff(void) {
    u32 Status = XST_SUCCESS;
    XIicPs Iic;
    XIicPs_Config *Iic_Config;
    XEmacPs Emac;
    XEmacPs_Config *Mac_Config;
    unsigned char mac_addr[6];
    int i = 0;

    /*
     * User logic to be added here.
     * Errors to be stored in the status variable and returned.
     */
    fsbl_printf(DEBUG_INFO, "In FsblHookBeforeHandoff function \r\n");

    /* Read out the MAC address */
    fsbl_printf(DEBUG_GENERAL, "Look Up I2C Configuration\n\r");
    Iic_Config = XIicPs_LookupConfig(XPAR_PS7_I2C_0_DEVICE_ID);
    if (Iic_Config == NULL) {
        return XST_FAILURE;
    }

    fsbl_printf(DEBUG_GENERAL, "I2C Initialization\n\r");
    Status = XIicPs_CfgInitialize(&Iic, Iic_Config, Iic_Config->BaseAddress);
    if (Status != XST_SUCCESS) {
        return XST_FAILURE;
    }

    fsbl_printf(DEBUG_GENERAL, "Set I2C Clock\n\r");
    XIicPs_SetSClk(&Iic, 200000);

    /* Set the EEPROM read address, then read the six MAC bytes */
    mac_addr[0] = 0xFA;
    fsbl_printf(DEBUG_GENERAL, "Set Memory Read Address\n\r");
    XIicPs_MasterSendPolled(&Iic, mac_addr, 1, 0x50);
    fsbl_printf(DEBUG_GENERAL, "Get Mac Address\n\r");
    XIicPs_MasterRecvPolled(&Iic, mac_addr, 6, 0x50);

    fsbl_printf(DEBUG_GENERAL, "MAC Addr: ");
    for (i = 0; i < 6; i++) {
        fsbl_printf(DEBUG_GENERAL, "%02x ", mac_addr[i]);
    }
    fsbl_printf(DEBUG_GENERAL, "\n\r");

    fsbl_printf(DEBUG_GENERAL, "Look Up Emac Configuration\n\r");
    Mac_Config = XEmacPs_LookupConfig(XPAR_PS7_ETHERNET_0_DEVICE_ID);
    if (Mac_Config == NULL) {
        return XST_FAILURE;
    }

    fsbl_printf(DEBUG_GENERAL, "Emac Initialization\n\r");
    Status = XEmacPs_CfgInitialize(&Emac, Mac_Config, Mac_Config->BaseAddress);
    if (Status != XST_SUCCESS) {
        return XST_FAILURE;
    }

    fsbl_printf(DEBUG_GENERAL, "Set Emac MAC Address\n\r");
    Status = XEmacPs_SetMacAddress(&Emac, mac_addr, 1);
    if (Status != XST_SUCCESS) {
        return XST_FAILURE;
    }

    fsbl_printf(DEBUG_GENERAL, "Verify Emac MAC Address\n\r");
    XEmacPs_GetMacAddress(&Emac, mac_addr, 1);

    xil_printf("MAC Addr: ");
    for (i = 0; i < 6; i++) {
        xil_printf("%02x ", mac_addr[i]);
    }
    xil_printf("\n\r");

    return (Status);
}

Step 8: Creating BOOT.bin

After the FSBL has completed building, navigate to Xilinx Tools > Create Zynq Boot Image.

Select someplace to save the .bif. This .bif just saves your settings for creating the boot image.

Then, in this order, add FSBL.elf, design_1_wrapper.bit, u-boot.elf.

These should be located in your SDK workspace (the FSBL project and the exported hardware platform) and in your U-Boot build directory; note that U-Boot's output binary 'u-boot' must be renamed to u-boot.elf before adding it.

Select 'Create Image' and move the resulting BOOT.bin to ZYBO_BOOT on the MicroSD card.

Step 9: Creating the Device Tree Blob

Now, we need to describe all of the components on the Zybo board so that Linux knows how to communicate with them. Go to the menu item at Xilinx Tools > Repositories and add this global repository.

Then, go to File > New > Board Support Package. Select device_tree as the Board Support Package OS.

Navigate to system.dts and change

bootargs = "console=ttyPS0,115200";

to

bootargs = "console=ttyPS0,115200 root=/dev/mmcblk0p2 rw earlyprintk rootfstype=ext4 rootwait devtmpfs.mount=1";

To enable ethernet, change

&gem0 { 
	local-mac-address = [00 0a 35 00 00 00]; 
	phy-mode = "rgmii-id"; 
	status = "okay"; 
	xlnx,ptp-enet-clock = <0x6750918>; 
	ps7_ethernet_0_mdio: mdio { 
		#address-cells = <1>; 
		#size-cells = <0>; 
	}; 
}; 

to

&gem0 { 
	//local-mac-address = [00 0a 35 00 00 00]; 
	phy-handle = <&phy0>; 
	phy-mode = "rgmii-id"; 
	status = "okay"; 
	xlnx,ptp-enet-clock = <0x6750918>; 
	ps7_ethernet_0_mdio: mdio { 
		#address-cells = <1>; 
		#size-cells = <0>; 
		phy0: phy@1 { 
			compatible = "realtek,RTL8211E"; 
			device_type = "ethernet-phy"; 
			reg = <1>; 
		}; 
	}; 
}; 

and go to Zynq-7000.dtsi and change

gem0: ethernet@e000b000 { 
	compatible = "xlnx,ps7-ethernet-1.00.a"; 
	reg = <0xe000b000 0x1000>; 
	status = "disabled"; 
	interrupts = <0 22 4>; 
	clocks = <&clkc 13>, <&clkc 30>; 
	clock-names = "ref_clk", "aper_clk"; 
	local-mac-address = [00 0a 35 00 00 00]; 
	xlnx,has-mdio = <0x1>; 
	#address-cells = <1>; 
	#size-cells = <0>; 
}; 

to

gem0: ethernet@e000b000 { 
	compatible = "xlnx,ps7-ethernet-1.00.a"; 
	reg = <0xe000b000 0x1000>; 
	status = "disabled"; 
	interrupts = <0 22 4>; 
	clocks = <&clkc 13>, <&clkc 30>; 
	clock-names = "ref_clk", "aper_clk"; 
	//local-mac-address = [00 0a 35 00 00 00]; 
	xlnx,has-mdio = <0x1>; 
	#address-cells = <1>; 
	#size-cells = <0>; 
}; 

To enable USB, go back to system.dts and change the &usb0 entry to:

&usb0 { 
	compatible = "xlnx,ps7-usb-2.20a", "xlnx,zynq-usb-2.20a", "chipidea,usb2", "usb-nop-xceiv", "xlnx,ps7-usb-1.00.a", "xlnx,zynq-usb-1.00.a";
	dr_mode = "host"; 
	phy_type = "ulpi"; 
	status = "okay";
	reg = <0xe0002000 0x1000>; 
	xlnx,usb-reset = <&gpio0 46 0>; 
}; 

Save the files and open your terminal. Navigate to 02_template__project__zybo_vivado_2015_1/vivado_project/vivado_project.sdk/device_tree and compile your device tree with this command

dtc -I dts -O dtb system.dts -o devicetree.dtb

Now, you can move devicetree.dtb to ZYBO_BOOT.

Step 10: Booting Linux and System Configuration

Safely eject the MicroSD card and plug it into the board. Then move jumper JP5 to the SD position so that the board boots from the SD card (JP5 selects the boot mode on the Zybo).

Connect a microUSB cable to the UART port of the Zybo and power on the board. Connect to the board over UART at 115200 baud (the device may appear as /dev/ttyUSB0 or /dev/ttyUSB1 on your machine). With the screen program, this is:

sudo screen /dev/ttyUSB1 115200

You should be able to log into the Linux system with the username and password "root".

To enable screen forwarding, a higher-bandwidth connection than UART needs to be made. We can set up a small subnet between the Zybo board and your computer over Ethernet.

On the Zybo, bring the interface up with a static address (the 192.168.1.x addresses below are only examples; any unused private subnet works):

ifconfig eth0
ifconfig eth0 down
ifconfig eth0 192.168.1.10 up

And on your own computer, do the same thing, but assign a different IP on the same subnet. If your host computer is running Linux, this would be:

ifconfig eth0
ifconfig eth0 down
ifconfig eth0 192.168.1.11 up

Give yourself a username and password on the Zybo board that you can SSH into. Execute this on the Zybo board:

useradd -m zybo 
passwd zybo

Step 11: Compiling OpenCV With Optimizations

Now, as the final step for the system setup, we can move on to getting OpenCV on the system. This step can take up to 2 hours.

On the Zybo board, clone the OpenCV repository here, then from a build directory inside the source tree execute:

cmake -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D ENABLE_SSE2=OFF \
      -D ENABLE_SSE3=OFF \
      -D WITH_CUDA=OFF \
      -D ENABLE_VFPV3=ON \
      ..

After cmake completes, type 'make' and then 'sudo make install'.

Step 12: Construct Laser Current Sink

Required Materials:

  • Prototyping board
  • Laser Diode
  • 1x LT1637 Op-amp
  • 1x Power NFET transistor (rated for 10A or more; it will need a heatsink)
  • 1x Low resistance, high power resistor (we used a 10W 1 Ohm resistor)
  • 2x 10K resistors
  • 100 Ohm and 560 Ohm (or close) resistors for setting voltage at laser diode
  • LT1083 Series (or any other) adjustable linear regulator
  • DAC (we used a Microchip MCP4911, any DAC could be used though)

Now that the software development environment has been set up, the physical hardware topology and the mechanical system must be constructed.

The laser diode is just like an LED in that its current must be limited so the device is not damaged. To do this we need a circuit that sets the current through the laser diode. Since we want to control the laser current and vary it, we will use a feedback system that tracks a variable reference voltage we control, rather than simply limiting the current to a fixed maximum value.

The attached circuit will do just this. The op-amp sets the gate voltage of the NFET transistor which determines the current through the transistor. A feedback voltage is taken at the drain of the transistor and fed back to the negative terminal of the op-amp. This feedback is compared against a reference voltage and the output of the op-amp adjusts so that the feedback voltage will match the reference voltage.

Resistor R3 sets how sensitive this system is. With a 1 Ohm resistor here, the reference voltage is directly proportional to the current through the laser diode: a reference voltage of 1V means 1A through the laser diode. If the resistance is increased to 2 Ohms, the ratio halves (i.e. 1V means 500mA). Make sure this resistor is rated for the power it will dissipate at these current levels (e.g. if your system has a maximum current of 2A and you're using a 1 Ohm resistor, you need a resistor rated for more than 4 watts, since P = I²R).
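The sizing arithmetic above can be checked numerically; the values here mirror the 1 Ohm example in the text.

```python
# Worked example for sizing the sense resistor (R3 in the schematic).
R_SENSE = 1.0   # ohms
V_REF = 1.0     # volts from the DAC

i_laser = V_REF / R_SENSE            # diode current equals sense current
p_resistor = i_laser ** 2 * R_SENSE  # power dissipated in the resistor

print(i_laser)     # amps through the laser diode
print(p_resistor)  # watts in the sense resistor at 1 V reference

# At a 2 A maximum, the resistor must be rated above:
p_max = (2.0 ** 2) * R_SENSE
print(p_max)
```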

The reference voltage is controlled using a DAC. We used the Microchip MCP4911 10 bit DAC because it was cheap, easy to control using SPI, and available as a through hole part.

The LT1637 op-amp is powered from a single 5V supply; there is no need for a dual supply. The shutdown pin of the op-amp can be used as a laser disable: when this pin is tied high, the op-amp is turned off. Connecting this pin to a switch tied to the supply voltage therefore lets you enable or disable the laser independent of the reference voltage.

Depending on the laser diode used, the 5V supply may not be enough. This will be the case if the turn-on voltage of the diode is at or around 5V or more. Our laser diode's turn-on voltage was ~4.5V, which leaves too little headroom for the transistor and sense resistor, so additional voltage was needed. If this is the case, a different power source will be required. We used a PC power supply, so a 12V rail was readily available. The diode can be powered from this, but the voltage may need to be lowered: at 12V there would be a very large voltage drop across the current-limiting transistor, and it would dissipate too much power. A linear regulator is therefore used to bring this voltage down to a more reasonable level (in this case 7V). The linear regulator isn't very efficient either, so it will need adequate cooling. Make sure this regulator as well as the current-limiting transistor have heatsinks mounted on them; if they still get hot, an external fan may be necessary (we reused an old laptop fan).

Step 13: Assembling the Laser

Required Materials:

  • Laser Diode
  • Appropriate diode housing
  • Appropriate Lens for laser wavelength

While the laser diode will work without any cooling, it certainly won't last long. The laser diode is only about 30% efficient at its best (i.e. 30% of the power consumed is converted into light; the rest becomes heat). As such, the diode needs some serious cooling. For example, if you are using a 2 watt laser diode, then while 2 watts are output as light, around 4.7 watts are output as heat.
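A quick back-of-the-envelope check of that heat figure:

```python
# Heat load for a laser diode given its optical output and efficiency.
P_OPTICAL = 2.0    # watts of light out
EFFICIENCY = 0.30  # optical power / electrical power (~30% at best)

p_electrical = P_OPTICAL / EFFICIENCY
p_heat = p_electrical - P_OPTICAL

print(round(p_heat, 1))  # watts the heatsink must remove
```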

Most laser diodes come in two sizes, 5.6mm and 9mm. Make sure you get a laser diode mount in the correct size. Screw down mounts are the best as they more securely fit the diode in them to provide better cooling. Anything similar to the picture shown should work.

Solder wires to the + and - terminals of the diode. Put the diode in the housing and screw it down securely. Screw the lens in to the front of the housing and the laser should be ready to go.

Step 14: Powering the System

Required Materials:

  • PC ATX power supply

This system will require a fair amount of power when fully active. An easy and cheap solution is to use any PC ATX power supply. These supplies are very cheap, and provide a variety of voltages (3.3V, 5V, and + and - 12V) making them ideal. We used the 5V rails to power the Zybo board, op-amp and servos, and the 12V rail to power the laser.

In order to use the PC power supply, find the green wire coming out of it (there will only be one) and short it to a black one (ground). A switch can be added between the two wires for easy power on and off of the system. The yellow wires are all 12V rails, the black wires are ground, and the red wires are the 5V rails. Connect one of each of these wires to the appropriate nodes on the laser driver circuit.

In order to power the Zybo board there are a couple of different options, but the easiest is to find an old micro USB cable and cut off the end of it. Connect the V+ wire (red) to a 5V wire from the power supply and the ground wire (black) to a ground wire on the power supply. The Data + and - wires can be ignored or cut off entirely.

The servos also need 5V power in order to move at their full speed. They can instead be connected to the 3.3V outputs that the Zybo board provides, but they will move more slowly. Connect the V+ and ground wires of the servos to the appropriate nodes on the laser driver circuit.

Step 15: Construct Physical Mounting

Required Materials:

  • 2 Servos (we used 2x MG996R because of their high torque and low cost)
  • Some sort of material that can easily be cut and glued (we used foam board as it is strong and lightweight)
  • Strong glue (we used JB Weld, any sort of epoxy should work though)
  • Laser Assembly
  • Webcam

The laser and camera are moved using two servos. One servo controls XY position, while the other controls the Z direction. A mounting system as shown in the picture works relatively well. We used foam board to construct all the physical mountings. Foam board is pretty strong, and is easily cut with a serrated knife or small saw. Everything was connected using JB-Weld as it is extremely strong. Any epoxy could be used, the idea here is just to get a strong hold between the servos and the foam board.

The system was constructed using the arm as shown in the picture to make sure the laser and camera rotate on their axes. The webcam is then mounted on top of the laser. There will be a small offset between the webcam and the actual laser, which will have to be taken into account in the aiming algorithm.

Step 16: Capture Pictures of Balloon and Determine Features

Once you have compiled openCV for the Zybo, you're ready to begin developing your image-processing system. However, before writing the actual algorithm, you must first gather enough data to create a successful balloon-tracking system. We decided to track red balloons for our project. Therefore, we captured images of a red balloon using a 640x480 Playstation Eye to use as our "training data". We captured these images with the balloon floating in front of a classroom whiteboard. While fixing the environmental parameters doesn't help create a robust system, the conditions were sufficient for our class project. I will describe our balloon-tracking solution, but I will also offer some tips and alternative approaches to help improve system accuracy.

Once you capture the images of the red balloon, you can separate the images into various color spaces. For example, we took an RGB image of the balloon in front of the whiteboard, converted it to HSV using an openCV function, and looked at each channel separately (i.e. looking at the S channel of the HSV image produces a grayscale image where large S values produce bright pixels and small S values produce dark pixels, as seen above). After looking at each color channel, you can visually determine which channels differentiate the balloon from its background. We determined that the Hue, Saturation, and Blue channels produced the best results. You can use these color channels to threshold the image, turning pixels of interest (ones that potentially belong to a red balloon) white while turning all other pixels black. For the rest of this tutorial, we will assume that Hue, Saturation, and Blue are the channels you'll use, though you can tailor your solution to your own circumstances.
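The thresholding idea can be sketched with numpy alone; the real pipeline would use cv2.cvtColor and cv2.inRange, and the channel values below are made up for illustration.

```python
import numpy as np

# Toy "Saturation" channel: balloon pixels are strongly saturated,
# whiteboard pixels are not (values invented for illustration).
s_channel = np.array([[10, 200, 220],
                     [30, 240,  15]], dtype=np.uint8)

# Threshold: pixels of interest become white (255), the rest black (0).
mask = np.where(s_channel > 100, 255, 0).astype(np.uint8)
print(mask)
```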

Once you determined Hue, Saturation, and Blue to be the best channels, you must determine the average values of each channel for pixels that belong to a red balloon. This brings forward an interesting dilemma: the hue color space formats the color values in a circle, as seen in the image above. It just so happens that the red hue values, which are the ones we are most interested in, get split across values 0 and 255; intuitively, this corresponds to the point where the circle reaches 360 degrees. Therefore, to accurately determine the average Hue value of the red balloon, you must rotate the Hue circle so that the red value doesn't get split. To do this, add 25 to every pixel in the Hue image, wrapping pixels from 255 to 0. Once you do this, you can crop the images to contain only the balloon and, thus, find the average values of each channel across all images.
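The hue rotation is just modular 8-bit addition, which numpy's uint8 array arithmetic gives you for free:

```python
import numpy as np

# Two "red" hue values split across the 255/0 wrap, plus one other hue.
hue = np.array([250, 5, 128], dtype=np.uint8)

# Adding 25 in uint8 arithmetic wraps 255 -> 0 automatically,
# so the red values end up adjacent instead of split.
rotated = hue + np.uint8(25)
print(rotated)
```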

Now that you've obtained the average Hue, Saturation, and Blue values of red balloons, you must identify significant features of a red balloon, such as roundness, area, texture, color, and temperature (an Xbox Kinect provides infrared data). For our solution, we decided to classify a thresholded object based on its Hue, Saturation, Blue, and Area features, though we recommend including other features to improve robustness.

Step 17: Aiming Algorithm

Now that you've identified features to classify objects, you're ready to create an aiming algorithm. Every computer-vision operation described below is implemented in openCV and should be used when possible; the openCV library functions have been optimized for the ARM processor and will therefore help satisfy real-time requirements. Our aiming algorithm is implemented as follows:

For each video frame that you read in...

  1. Threshold the image using the color information you obtained. Pick threshold values that allow the balloon pixels to become white while background pixels become black.
  2. Perform morphological erosion followed by morphological dilation to remove noise pixels.
  3. Fill in all black holes in thresholded objects.
  4. Obtain the center coordinates of each white object and evaluate all features of interest (Average Hue, Average Saturation, Average Blue, Area).
  5. For each object, compute how close the object's features are to the ideal balloon's values (the ideal feature values are the ones we derive from the training images).
  6. Pick the object whose features most closely resemble an ideal red balloon. If that object has any feature that deviates significantly from the ideal balloon's corresponding feature, then there isn't a balloon in the image. Otherwise, that object is the balloon, so you can return its center coordinates.
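Steps 4-6 can be sketched as a simple nearest-to-ideal classifier. The ideal values and tolerance below are placeholders, not our measured training data, and a real implementation would compute each object's features with openCV (e.g. findContours and per-object channel means).

```python
# Placeholder ideal feature values (derive yours from training images).
IDEAL = {"hue": 30.0, "sat": 200.0, "blue": 60.0, "area": 400.0}
TOLERANCE = 0.5  # max relative deviation allowed per feature (assumption)

def score(obj):
    """Sum of relative deviations of an object's features from the ideal."""
    return sum(abs(obj[k] - IDEAL[k]) / IDEAL[k] for k in IDEAL)

def pick_balloon(objects):
    """Return the most balloon-like object's center, or None when no
    object is close enough on every feature (no balloon in frame)."""
    if not objects:
        return None
    best = min(objects, key=score)
    if any(abs(best[k] - IDEAL[k]) / IDEAL[k] > TOLERANCE for k in IDEAL):
        return None
    return (best["cx"], best["cy"])

objs = [
    {"hue": 32, "sat": 190, "blue": 65, "area": 380, "cx": 101, "cy": 120},
    {"hue": 90, "sat": 40, "blue": 200, "area": 15, "cx": 10, "cy": 10},
]
print(pick_balloon(objs))
```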

Step 18: Tracking Calibration and Servo Movement

If you know the X and Y offset of the balloon's center from the center of the image, you ultimately know how much the camera should rotate vertically and horizontally in order to align the center of the balloon with the center pixel of the image. This works because the angle a camera rotates is approximately linearly related to the number of pixels traversed in the axis of rotation. For example: if a balloon is x pixels away from the center of the image, then the angle the camera should rotate is THETA degrees, with THETA = bx. You can experimentally determine the value of 'b' by calibrating your tracking algorithm. In order to calibrate your system, the camera must be correctly mounted on the servos. We calibrated our system with the following steps:

  1. Use memory-mapped IO to point the camera at a whiteboard. Take note of the value written to the PWM module that rotates the vertical servo.
  2. Draw a dot on the whiteboard that corresponds to the center of the video feed. You can determine the center coordinate more easily if you overlay cross-hairs on the video feed (like a sniper scope), where the cross-hairs intersect at the center of the screen. This can be done through openCV.
  3. Write a value to the vertical PWM register using memory-mapped IO, allowing the camera to rotate upward. The camera should rotate until the dot on the board has reached the bottom of the screen. Take note of the value written to the PWM module that rotates the vertical servo.
  4. Find the difference between the two PWM values written via memory-mapped IO. This difference maps to a length of 240 pixels in the vertical direction, since the dot traveled from the center of the image to its bottom edge and the 640x480 camera image is 480 pixels tall.
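The calibration above reduces to a single constant. This sketch uses made-up register values to show the arithmetic; your PWM values will differ.

```python
# Example calibration numbers (hypothetical register values).
pwm_start, pwm_end = 120, 180  # vertical-servo PWM values from steps 1 and 3
pixels_traversed = 240         # center of frame to bottom edge

pwm_per_pixel = (pwm_end - pwm_start) / pixels_traversed

def pwm_delta(pixel_offset):
    """PWM steps needed to move the target `pixel_offset` pixels to center."""
    return round(pixel_offset * pwm_per_pixel)

print(pwm_delta(120))  # half the calibration swing
```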

Given the X and Y offset of the balloon's center from the center of the image, you now know the values to write to the PWM modules. You can dynamically adjust this linear relationship during runtime so the laser/camera can home in on the center of the balloon. Once you have centered the camera within a bounding box around the balloon's center, you may drive the laser for a few seconds. You can drive the laser by writing to the SPI module using memory-mapped IO. It's recommended to create software drivers for hardware communication, as the SPI protocol involves several repetitive steps (start bit, data, etc.).
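A register write through memory-mapped IO is just a 32-bit store at an offset. This sketch exercises that logic against a plain bytearray standing in for the mmap'd AXI GPIO window; on the board you would mmap /dev/mem at the base address reported by Vivado's address editor (that address is not given here).

```python
import struct

def write_reg(region, offset, value):
    """Write a 32-bit little-endian word at byte `offset` of `region`."""
    struct.pack_into("<I", region, offset, value)

def read_reg(region, offset):
    """Read back a 32-bit little-endian word."""
    return struct.unpack_from("<I", region, offset)[0]

fake_gpio = bytearray(0x10)          # stand-in for the mmap'd register window
write_reg(fake_gpio, 0x0, 0x3800)    # e.g. a DAC command for the SPI block
print(hex(read_reg(fake_gpio, 0x0)))
```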

And there you have it! This marks the end of the tutorial!

Q: Can it be used to target flies or mosquitoes?

A: That was the initial idea of this project. However, you couldn't simply use vision techniques to target mosquitoes; you would have to use sound. I think other projects exist that already do this.

About This Instructable

By KeithB132: Laser Balloon Destroyer with Digilent Zybo Board using RTLinux