Introduction: Virtual Touch Screen Game Using Zybo

This hands-on Virtual Touch-Screen Game tutorial for the Zybo provides step-by-step instructions for customizing your hardware to emulate a touch screen on a simple TFT monitor using a camera and finger detection.

Required Hardware:
- Zybo Board

- Vivado 2014.1 Webpack

- USB web camera

- VGA TFT monitor

- two Digilent JoyStick modules

- micro SD card (4GB or more)

- keyboard

- mouse


Required software:

- Xillinux by Xillybus

- OpenCV

Step 1: Installing an Operating System and Configuring Hardware

The first step is to install an operating system on your Zybo board. The system we chose is Xillinux by Xillybus, a modified Ubuntu 12.04 Linux distribution optimized for the Zybo board.

First you need to download two files:

- boot partition kit for your board -

- SD card image -

The instructions for what to do with these files are at the following link:

After booting up the system, run the GUI by typing the command startx (without quotes) and pressing Enter (Picture 1). The GUI will load and you can continue with the project (Picture 2).

Step 2: Installing OpenCV

OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications.

To install and configure OpenCV 2.4.2, complete the following steps.
The commands shown in each step can be copied and pasted directly into a Linux command line. To open a terminal, press Ctrl+Alt+T.

1. Remove any installed versions of ffmpeg and x264.

sudo apt-get remove ffmpeg x264 libx264-dev

2. Get all the dependencies for x264 and ffmpeg.

sudo apt-get update

sudo apt-get install build-essential checkinstall git cmake libfaac-dev libjack-jackd2-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libsdl1.2-dev libtheora-dev libva-dev libvdpau-dev libvorbis-dev libx11-dev libxfixes-dev libxvidcore-dev texi2html yasm zlib1g-dev

3. Download and install gstreamer.

sudo apt-get install libgstreamer0.10-0 libgstreamer0.10-dev gstreamer0.10-tools gstreamer0.10-plugins-base libgstreamer-plugins-base0.10-dev gstreamer0.10-plugins-good gstreamer0.10-plugins-ugly gstreamer0.10-plugins-bad gstreamer0.10-ffmpeg

4. Download and install gtk.

sudo apt-get install libgtk2.0-0 libgtk2.0-dev

5. Download and install libjpeg.

sudo apt-get install libjpeg8 libjpeg8-dev

6. Create a directory to hold source code.

cd ~

mkdir src

7. Download and unpack x264.

cd ~/src


tar xvf x264-snapshot-20120528-2245-stable.tar.bz2

cd x264-snapshot-20120528-2245-stable

8. Configure and build the x264 libraries.

./configure --enable-shared --enable-pic


make

sudo make install

9. Download and unpack ffmpeg version 0.11.1.

cd ~/src


tar xvf ffmpeg-0.11.1.tar.bz2

cd ffmpeg-0.11.1

10. Configure and build ffmpeg.

./configure --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab --enable-shared --enable-pic


make

sudo make install

11. Download and install a recent version of v4l (Video for Linux). For this guide, version 0.8.8 was used.

cd ~/src


tar xvf v4l-utils-0.8.8.tar.bz2

cd v4l-utils-0.8.8


make

sudo make install

12. Download and unpack OpenCV 2.4.2.

cd ~/src


tar xvf OpenCV-2.4.2.tar.bz2

13. Create a new build directory and run cmake. A typical invocation (this flag set is a suggestion; adjust it to your needs) is:

cd OpenCV-2.4.2/

mkdir build

cd build

cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_V4L=ON -D WITH_FFMPEG=ON -D WITH_GSTREAMER=ON -D WITH_GTK=ON ..
14. Verify that the output of cmake includes the following text:

found gstreamer-base-0.10
GTK+ 2.x: YES
FFMPEG: YES
GStreamer: YES
V4L/V4L2: Using libv4l

Then build and install OpenCV.


make

sudo make install

15. Configure Linux. Tell Linux where the shared libraries for OpenCV are located by entering the following shell command:

export LD_LIBRARY_PATH=/usr/local/lib

16. Add the command to your .bashrc file so that you don't have to enter it every time you start a new terminal.

Alternatively, you can configure the system-wide library search path. Using your favorite editor, add a single line containing the text /usr/local/lib to the end of a file named /etc/ld.so.conf.d/opencv.conf. In the standard Ubuntu install, the opencv.conf file does not exist; you need to create it. Using vi, for example, enter the following commands:

sudo vi /etc/ld.so.conf.d/opencv.conf

In vi, press i to enter insert mode, type /usr/local/lib, press Esc, then type :wq and press Enter to save the file and quit.

17. After editing the opencv.conf file, enter the following command:

sudo ldconfig

18. Using your favorite editor, add the following two lines to the end of /etc/bash.bashrc:

PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH

After completing the previous steps, your system should be ready to compile code that uses the OpenCV libraries. The following example shows one way to compile code for OpenCV:

g++ `pkg-config opencv --cflags` my_code.cpp -o my_code `pkg-config opencv --libs`

Step 3: The Game

The game is a basic 8-bit ping-pong game. This is not the final version; there are still some bugs in the design, and it has to be merged with the finger-detection application. The purple fields on the screen are reserved for fingers. At this time, the game is played with the keyboard (left player: Q and A; right player: O and L), and you can change the speed manually (S).

The game and the finger-detection applications are designed separately. The idea is that the finger-detection app detects a finger on the purple line, calculates its y coordinate, and sends that coordinate to the game app, which then updates the position of the player's paddle.

The attached file contains the source code, which can be compiled with the following command line:

g++ `pkg-config opencv --cflags` test2.cpp -o game `pkg-config opencv --libs`

and you can run the game with:

./game

Step 4: Finger Detection

With the web camera we track multiple objects and calculate their coordinates on the screen. This part is not finished yet. What we have done at this point is tracking of simple blue objects. We can also detect and follow fingers, but you need to set the bool variable calibrationMode to true (line 136). Then, when the program is built and executed, a slider window will appear; first change the minimum and maximum HUE values to select the color you want to detect. The detected object appears white on the screen. After that, you can reduce the noise with the minimum and maximum SATURATION and VALUE sliders (Picture 1).

If the calibrationMode is set to false, it will detect blue objects (Picture 2).

The attached file contains the source code.

This is still a work in progress, so there are still bugs!

Project video: