CPE 439 RTOS Final Project - Resistor Identifier

For our Real-Time Operating Systems class (CPE 439) at Cal Poly, we set out to design a Resistor Identifier using the Zybo's Zynq SoC. This chip packs dual-core ARM Cortex-A9 processors as well as an FPGA. We created custom image pre-processing filters in hardware while running a version of embedded Linux on the A9 cores for image/video processing.

Project Authors: Carson Robles, Shiv Khanna, Jake Wahl, David Larson

Step 1: HDMI Input and VGA Output (Vivado 2018.2)

  1. Download the vivado-library-master folder from PolyLearn or GitHub.
  2. Save these files within the project folder you are using.
  3. Add the files to the IP catalog via “Add Repository”.
  4. Using the IP catalog, add 3 new IPs (clk_wiz, dvi2rgb, and rgb2vga).

  5. Search for each of the IPs using the search bar and double-click them to add them to your design sources.

  6. Recustomize each of the IPs' settings as follows:

  7. For clk_wiz: under the “Clocking Options” tab in the "Clocking Features" section, check "Frequency Synthesis" and "Phase Alignment" and uncheck the rest of the options. Set Jitter Optimization to Balanced and the primitive to PLL. Do not enable clock monitoring. Finally, scroll down to "Input Clock Information" and set the primary input clock frequency to 125 MHz and the secondary input clock frequency to 100 MHz.

  8. Under the “Output Clocks” tab, check the first output clock and leave the second output clock unchecked. Set the requested output frequency of output clock 1 to 200 MHz and the requested phase to 0 degrees.

  9. Double-click the dvi2rgb source and edit its custom IP settings. "Enable DDC ROM", "Add BUFG to PixelClk", and "Debug module" should all be checked and the rest unchecked. Set the TMDS clock range to ">=120MHz (1080p)" and the preferred resolution to "1920x1080".

  10. Lastly, double-click the rgb2vga source and set the blue component color depth to 5, the green component color depth to 6, the red component color depth to 5, and the Vid In data width to 24.

  11. Create a top-level wrapper module (top_wrapper.sv) and use the generated Verilog files from the IPs to instantiate all three blocks.

  12. Right click on “Design Sources”

  13. Click on “Add Sources”

  14. Create a new Design Source file

  15. Make sure the file type is “SystemVerilog” and click “Finish.” A window will pop up asking you to define the module; just click “OK.”

  16. Double-click the new wrapper you created and write your SystemVerilog wrapper code. The top_wrapper we used can be found at: top_wrapper.v

  17. Right-click the “Constraints” folder and add the ZYBO_master.xdc constraints file, which can be found on PolyLearn.

  18. Within the constraints file you just added, find the “HDMI Signals” section and uncomment the appropriate TMDS signals, the HPD signal, the HDMI output enable, and the 2 I2C signals (ddc_scl and ddc_sda).

  19. In the same constraints file, go to the “VGA Connector” section and fill in each of the VGA signals.

  20. Generate the bitstream. After it's finished, turn on the ZYBO and open Adept 2.

  21. In Adept 2, click “Browse” for the FPGA and locate the .bit file generated from your bitstream run. The file is most likely in the “runs” folder within the project folder.

  22. In order to see the HDMI passthrough that you have created, you will need an HDMI source and a VGA output. Your HDMI source can be a camera like a GoPro or simply your laptop. When you make the appropriate connections, you should see that the signal is passed through to your VGA output on the ZYBO board.

Step 2: Image Buffering

One of the trickier parts of the design was figuring out the most efficient way to store the image data on the FPGA while running it through the filter. The filter is fast enough to process the data as it arrives; however, the filtering algorithm requires 3x3 matrices of pixels, meaning at least 3 rows of the image need to be stored at any given time.

The initial inclination is to store the entire frame on chip, run the data through the filter, and then buffer a new frame. The issue with this approach (aside from it being less efficient) is that the FPGA does not have enough BRAMs to store an entire frame. Our solution was to use 3 separate line buffers and rotate which one is written to as each new line is received. Because all three line buffers can be read concurrently, one column of the 3x3 matrix is read each clock cycle, yielding a complete matrix after 3 consecutive reads; each matrix is then sent out to the convolution engine.
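The rotating line-buffer scheme can be modeled in software. The C++ sketch below is only an illustration of the idea (the real design is SystemVerilog RTL, and the struct and method names here are hypothetical): three buffers hold the last three lines, each new line overwrites the oldest buffer, and a 3x3 window slides across the stored lines one column at a time.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

// Software model of the FPGA line-buffer scheme: three line buffers hold
// the last three video lines; each incoming line overwrites the oldest
// buffer, and a 3x3 pixel window is read out one column per "clock".
struct LineBuffer3 {
    int width;
    std::array<std::vector<uint8_t>, 3> lines; // rotating set of 3 line buffers
    int wr = 0;                                // index of buffer written next

    explicit LineBuffer3(int w) : width(w) {
        for (auto &l : lines) l.assign(w, 0);
    }

    // Receive one full video line, overwriting the oldest buffer.
    void push_line(const std::vector<uint8_t> &line) {
        lines[wr] = line;
        wr = (wr + 1) % 3; // rotate which buffer is written next
    }

    // Read the 3x3 window centered at column c; rows come out oldest-first.
    std::array<std::array<uint8_t, 3>, 3> window(int c) const {
        std::array<std::array<uint8_t, 3>, 3> w{};
        for (int r = 0; r < 3; r++) {
            const auto &row = lines[(wr + r) % 3]; // oldest buffer first
            for (int k = -1; k <= 1; k++) {
                int col = std::min(std::max(c + k, 0), width - 1); // clamp edges
                w[r][k + 1] = row[col];
            }
        }
        return w;
    }
};
```

In hardware the three reads happen in parallel (one per line buffer) rather than in a loop, which is what lets the window keep pace with the incoming pixel stream.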

The source code for the image buffer can be found in the RTL Github folder (https://github.com/carsonrobles/Resistor-Value-Det...).

Step 3: Video Filtering Using Convolution Kernels

In an effort to increase the speed of the algorithm, various image filters were implemented in the FPGA portion of the Zynq chip and applied to the incoming HDMI stream before being sent to the processor. The speed of the filters is limited by the speed at which HDMI video data is sent to the device, allowing the video data to be filtered at almost no extra cost in terms of speed (except for the initial overhead to fill the data pipeline). This is beneficial to the performance of the system because the CPU no longer has to waste cycles filtering the image prior to running the actual resistor detector algorithm.

An image convolution engine was used to create these filters. This is a popular way to obtain the results we desired, and it also leads to high code reuse between different filters, as each filter only differs by a 3x3 matrix of filter coefficients called the kernel. The convolution engine works by taking a 3x3 matrix of pixels (the middle pixel being the pixel of interest, surrounded by all of its immediate neighbors), multiplying each pixel by the corresponding filter coefficient in the kernel matrix, and summing all 9 products to produce the new pixel value that replaces the middle pixel of the input matrix. This series of operations is carried out for every pixel in the frame. A website that we found useful when experimenting with different kernel matrices can be found here (http://setosa.io/ev/image-kernels/); it allows you to upload an image and use predefined kernels or enter your own values and observe the output image.
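The multiply-and-sum operation described above can be sketched in a few lines of C++. This is a software reference model, not the FPGA implementation; the border handling (skipping the outermost pixels) and the clamp to the 0-255 range are our assumptions for the sketch.

```cpp
#include <algorithm>
#include <vector>

// Reference model of a 3x3 kernel convolution: each output pixel is the
// sum of the 9 neighborhood pixels weighted by the kernel coefficients,
// clamped to the valid 0-255 pixel range. The 1-pixel border is skipped.
std::vector<int> convolve3x3(const std::vector<int> &img, int w, int h,
                             const int (&k)[3][3]) {
    std::vector<int> out(img.size(), 0);
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int acc = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    acc += img[(y + dy) * w + (x + dx)] * k[dy + 1][dx + 1];
            out[y * w + x] = std::min(255, std::max(0, acc)); // saturate
        }
    }
    return out;
}
```

Swapping in a different kernel (identity, blur, edge detect, sharpen) changes the filter without touching the engine, which is the code-reuse property mentioned above.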

These filters can be chained together in series to run data through multiple filtering effects; however, between each filter the data must be buffered, similarly to how it is initially stored from the HDMI stream, to get a true cascaded filtering system.

The SystemVerilog source code for the FPGA system can be found on Github (https://github.com/carsonrobles/Resistor-Value-Det...).

Step 4: Embedded Linux With Petalinux

Xilinx's Petalinux tool was used to build a custom embedded Linux kernel and root filesystem for the Zybo (Zynq-7000) board.

This tutorial is for Ubuntu 16.04 / Linux Mint 19

Petalinux takes 60-80 GB of space on your development machine, and Xilinx recommends at least 4 GB of RAM.

1. Download the Petalinux installer from Xilinx; the file should be named petalinux-v2017.4-final-installer.run


2. Install Dependencies

sudo apt-get install tofrodos gawk xvfb git libncurses5-dev tftpd zlib1g-dev zlib1g-dev:i386 \
    libssl-dev flex bison chrpath socat autoconf libtool texinfo gcc-multilib \
    libsdl1.2-dev libglib2.0-dev screen pax

3. Make Directory

sudo mkdir -p /opt/pkg/petalinux
sudo chown <user> /opt/pkg/
sudo chgrp  <user> /opt/pkg/
sudo chgrp  <user> /opt/pkg/petalinux/
sudo chown  <user> /opt/pkg/petalinux/

4. Run Petalinux Installer

cd ~/Downloads
./petalinux-v2017.4-final-installer.run /opt/pkg/petalinux

5. Source Petalinux tools

source /opt/pkg/petalinux/settings.sh

6. Download the Petalinux project for Digilent Zybo

git clone --recursive https://github.com/Digilent/Petalinux-Zybo.git

7. Configure Petalinux Project

cd <path>/Petalinux-Zybo/Zybo
petalinux-config

In the configuration menu that opens, set:

Image Packaging Configuration -->
	Root filesystem type --> SD card
		Device node of SD device --> /dev/mmcblk0p2
		Name for bootable kernel image --> image.ub

8. Next, open project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi in a text editor and locate the "bootargs" line. It should read as follows:

bootargs = "console=ttyPS0,115200 earlyprintk uio_pdrv_genirq.of_id=generic-uio";

9. Replace that line with the following before saving and closing system-user.dtsi:

bootargs = "console=ttyPS0,115200 earlyprintk uio_pdrv_genirq.of_id=generic-uio root=/dev/mmcblk0p2 rw rootwait";

10. Add opencv

petalinux-config -c rootfs

Filesystem Packages --> libs --> opencv
Add any other packages you need to the root filesystem here as well.

11. Build the system

petalinux-build

12. Base FPGA design


13. Building BOOT.BIN with your bitstream

petalinux-package --boot --force --fsbl images/linux/zynq_fsbl.elf --fpga <your system_wrapper.bit> --u-boot

14. Configure SD card

Format an SD card with two partitions: The first should be at least 500 MB and be FAT formatted. The second needs to be at least 1.5 GB (3 GB is preferred) and formatted as ext4. The second partition will be overwritten, so don't put anything on it that you don't want to lose. If you are uncertain how to do this in Ubuntu, gparted is a well documented tool that can make the process easy.

Copy images/linux/BOOT.BIN and images/linux/image.ub to the first partition of your SD card.

Warning! If you use the wrong /dev/ node in the following command, you will overwrite your computer's file system. BE CAREFUL

sudo umount /dev/sdX2
sudo dd if=images/linux/rootfs.ext4 of=/dev/sdX2 

15. The following command will also stretch the file system so that you can use the additional space on your SD card. Be sure to replace the block device node as you did above:

sudo resize2fs /dev/sdX2

16. Eject the SD card from your computer, then do the following:

1. Insert the microSD into the Zybo
2. Attach a power source and select it with JP5 
3. If not already done to provide power, attach a microUSB cable between the computer and the Zybo
4. Open a terminal program: sudo screen /dev/ttyUSB1 115200
5. Press the PS-SRST button to restart the Zybo.

You should see the boot process at the terminal and eventually a root prompt.

Step 5: Image Processing Using OpenCV

All the magic of detecting the resistor occurs in OpenCV.

1. Input our frame of a resistor.

2. Process our frame

  • Run a Gaussian blur and a median blur with a kernel size of 3
  • Convert the RGB image to the LAB color space
    • This will be the color space we use in most of our algorithms
  • Locate where the resistor is
    • Run a standard deviation on the LAB image
      • Columns:
        • Split the image into the L, A, and B channels
        • Go through every column and get the standard deviation for each channel
        • Return a weighted standard deviation
      • The white background won't have a large standard deviation, but the resistor will. We find the resistor by finding the continuous section of the image with a large standard deviation
      • Repeat for the rows
    • We have now found the resistor (hopefully)
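The column scan above boils down to two small routines: a per-column standard deviation and a search for the longest run of high-variance columns. The C++ sketch below shows the idea; the threshold, the per-channel weighting, and the function names are placeholders, not the tuned values from our frame.cpp.

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Standard deviation of one column of channel values.
double column_stddev(const std::vector<double> &col) {
    double mean = 0;
    for (double v : col) mean += v;
    mean /= col.size();
    double var = 0;
    for (double v : col) var += (v - mean) * (v - mean);
    return std::sqrt(var / col.size());
}

// Return [first, last] columns of the longest contiguous run whose stddev
// exceeds thresh; returns {-1, -2} if no column is above the threshold.
std::pair<int, int> find_high_variance_run(const std::vector<double> &sd,
                                           double thresh) {
    int best_start = -1, best_len = 0, start = -1;
    for (int i = 0; i <= (int)sd.size(); i++) {
        bool hi = i < (int)sd.size() && sd[i] > thresh;
        if (hi && start < 0) start = i;       // run begins
        if (!hi && start >= 0) {              // run ends: keep it if longest
            if (i - start > best_len) { best_len = i - start; best_start = start; }
            start = -1;
        }
    }
    return {best_start, best_start + best_len - 1};
}
```

Running the same search over the row standard deviations gives the vertical extent, and together the two runs bound the resistor.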

3. Classify the resistor

  • Get the middle 1/3 of our resistor image
    • To be more accurate we get rid of the edges of our resistor
  • Reduce our image to a single column by averaging the pixel values
  • Go through every pixel on our column
    • Get the average of the last N/30 pixels where N is the number of pixels
    • Run CIE94 on our pixel and the last N/30 average
      • The CIE94 is an algorithm for comparing two colors in lab space
      • https://en.wikipedia.org/wiki/Color_difference
      • The CIE94 gives us a number; the larger the number, the more different the pixels
      • Ignore pixels right next to each other; color shifts over multiple pixels are normal
      • If the CIE94 value is big enough, add the pixel to the list of possible color-shift locations
      • Once we go through the whole resistor, draw all the edges where the resistor shifts color
  • Find the bands
    • To find the bands we implemented a penalty system
    • We have a resistor split up by a bunch of edges; some bands made up by these edges are the actual color bands, and some are the background of the resistor. How do we tell which is which?
    • We go through every combination of possible bands:
      • EXAMPLE:
      • We find 6 edges in our resistor (so 7 discrete bands)
      • We go through every combination from 0000000 to 1111111, where 0's are the resistor bands and 1's are the background
      • This makes 2^7 or 128 possible combinations
      • For each one we take into account
        • Do we have 3 resistor bands? (We don't find tolerance bands)
        • Are all resistor bands around the same size?
        • Are the resistor bands and background bands alternating?
        • Is the resistor bands' average color different from that of the whole resistor?
          • Resistor bands should have a different color
        • Is the background bands' average color similar to that of the whole resistor?
          • Background bands should have a similar color to the whole resistor
      • These conditions are all weighted differently based on how important they are
    • We take the lowest-penalty combination; this is the most likely combination, and it is the one we use
  • Find our color
    • Get the mean of each resistor band
    • Compare the color to our samples
      • We have 50 samples in a LUT, 5 for each color
      • These colors are from different lighting situations
    • Save what color each band is
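For reference, the CIE94 color difference used in the pixel comparison above has a closed form. The sketch below is a plain-C++ implementation of the published formula with the graphic-arts constants (kL = 1, K1 = 0.045, K2 = 0.015); it is not copied from our CIE.cpp and may differ from it in constant choices.

```cpp
#include <cmath>

// CIE94 color difference between two LAB colors (graphic-arts constants).
// Larger return values mean more perceptually different colors.
double cie94(double L1, double a1, double b1,
             double L2, double a2, double b2) {
    const double K1 = 0.045, K2 = 0.015;      // graphic-arts weighting constants
    double dL = L1 - L2;
    double C1 = std::sqrt(a1 * a1 + b1 * b1); // chroma of each color
    double C2 = std::sqrt(a2 * a2 + b2 * b2);
    double dC = C1 - C2;
    double da = a1 - a2, db = b1 - b2;
    double dH2 = da * da + db * db - dC * dC; // delta-H squared
    if (dH2 < 0) dH2 = 0;                     // guard against rounding error
    double SC = 1 + K1 * C1;
    double SH = 1 + K2 * C1;
    return std::sqrt(dL * dL + (dC / SC) * (dC / SC) + dH2 / (SH * SH));
}
```

Identical colors give 0, and two grays differing only in lightness give exactly the lightness difference, which makes the function easy to sanity-check.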

4. Make overlay

  • Calculate the resistance from our bands
  • Draw resistance value above the resistor
  • Output to screen
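Calculating the resistance from three identified bands follows the standard resistor color code: the first two bands are digits and the third is a power-of-ten multiplier. A minimal sketch (function and color names are illustrative, not taken from resistor.cpp):

```cpp
#include <cmath>
#include <map>
#include <string>
#include <vector>

// Decode a 3-band resistor: first two bands are digits, third is the
// power-of-ten multiplier (standard color code, black=0 .. white=9).
double bands_to_ohms(const std::vector<std::string> &bands) {
    static const std::map<std::string, int> code = {
        {"black", 0}, {"brown", 1}, {"red", 2},    {"orange", 3}, {"yellow", 4},
        {"green", 5}, {"blue", 6},  {"violet", 7}, {"grey", 8},   {"white", 9}};
    int d1 = code.at(bands[0]);
    int d2 = code.at(bands[1]);
    int mult = code.at(bands[2]);
    return (d1 * 10 + d2) * std::pow(10.0, mult);
}
```

For example, brown-black-red decodes to 10 x 10^2 = 1 kΩ, which is the value drawn in the overlay.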

Our C++ code:

Source files can be found in the /src folder of this GitHub repo: https://github.com/ShivKhanna/opencv_resistor

main.cpp: Reads in an image/video and outputs the resistor frame with an overlay

frame.cpp: Processes the frame and finds where the resistor is located

resistor.cpp: Algorithm for finding the value of the resistor given an image of a resistor

CIE.cpp: Implementation of 2 color difference algorithms, CIE94 and CIEDE2000


