This Virtual Touch-Screen Game hands-on tutorial for the Zybo provides step-by-step instructions for customizing your hardware to emulate a touch screen on a simple TFT monitor using a camera and finger detection.
- Zybo Board
- Vivado 2014.1 Webpack
- USB web camera
- VGA TFT monitor
- two Digilent JoyStick modules
- micro SD card (4GB or more)
- mouse and keyboard
- Xillinux by Xillybus
Step 1: Installing an Operating System and Configuring Hardware
The first step is to install an operating system on your Zybo board. We chose Xillinux by Xillybus, which is essentially a modified Ubuntu 12.04 Linux distribution optimised for the Zybo board.
First you need to download two files:
- boot partition kit for your board - http://xillybus.com/downloads/xillinux-eval-zybo-...
- SD card image - http://xillybus.com/downloads/xillinux-1.3.img.gz
The instructions for what to do with these files are at the following link:
After booting the system, run the GUI by typing the command "startx" (without the quotes) and pressing Enter (Picture 1). The GUI will load and you can continue with the project (Picture 2).
Step 2: Installing OpenCV
OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications.
To install and configure OpenCV 2.4.2, complete the following steps.
The commands shown in each step can be copied and pasted directly into a Linux command line. To open the terminal, press Ctrl+Alt+T.
1. Remove any installed versions of ffmpeg and x264.
sudo apt-get remove ffmpeg x264 libx264-dev
2. Get all the dependencies for x264 and ffmpeg.
sudo apt-get update
sudo apt-get install build-essential checkinstall git cmake libfaac-dev libjack-jackd2-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libsdl1.2-dev libtheora-dev libva-dev libvdpau-dev libvorbis-dev libx11-dev libxfixes-dev libxvidcore-dev texi2html yasm zlib1g-dev
3. Download and install gstreamer.
sudo apt-get install libgstreamer0.10-0 libgstreamer0.10-dev gstreamer0.10-tools gstreamer0.10-plugins-base libgstreamer-plugins-base0.10-dev gstreamer0.10-plugins-good gstreamer0.10-plugins-ugly gstreamer0.10-plugins-bad gstreamer0.10-ffmpeg
4. Download and install gtk.
sudo apt-get install libgtk2.0-0 libgtk2.0-dev
5. Download and install libjpeg.
sudo apt-get install libjpeg8 libjpeg8-dev
6. Create a directory to hold source code.
7. Download and install x264.
tar xvf x264-snapshot-20120528-2245-stable.tar.bz2
8. Configure and build the x264 libraries.
./configure --enable-shared --enable-pic
make
sudo make install
9. Download ffmpeg version 0.11.1 from http://ffmpeg.org/download.html.
tar xvf ffmpeg-0.11.1.tar.bz2
10. Configure and build ffmpeg.
./configure --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-nonfree --enable-postproc --enable-version3 --enable-x11grab --enable-shared --enable-pic
make
sudo make install
11. Download and install a recent version of v4l (Video4Linux) from http://www.linuxtv.org/downloads/v4l-utils/. For this guide we used version 0.8.8.
tar xvf v4l-utils-0.8.8.tar.bz2
make
sudo make install
12. Download and install OpenCV 2.4.2. Download OpenCV version 2.4.2 from http://sourceforge.net/projects/opencvlibrary/fil...
tar xvf OpenCV-2.4.2.tar.bz2
13. Create a new build directory and run cmake:
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE ..
14. Verify that the output of cmake includes the following text:
found gstreamer-base-0.10
GTK+ 2.x: YES
FFMPEG: YES
GStreamer: YES
V4L/V4L2: Using libv4l
Then build and install OpenCV.
make
sudo make install
15. Configure Linux. Tell Linux where the shared libraries for OpenCV are located by entering the following shell command:
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
16. Add the command to your .bashrc file so that you don't have to enter it every time you start a new terminal.
Alternatively, you can configure the system wide library search path. Using your favorite editor, add a single line containing the text /usr/local/lib to the end of a file named /etc/ld.so.conf.d/opencv.conf. In the standard Ubuntu install, the opencv.conf file does not exist; you need to create it. Using vi, for example, enter the following commands:
sudo vi /etc/ld.so.conf.d/opencv.conf
17. After editing the opencv.conf file, enter the following command:
sudo ldconfig /etc/ld.so.conf
18. Using your favorite editor, add the following two lines to the end of /etc/bash.bashrc:
PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH
After completing the previous steps, your system should be ready to compile code that uses the OpenCV libraries. The following example shows one way to compile code for OpenCV:
g++ `pkg-config opencv --cflags` my_code.cpp -o my_code `pkg-config opencv --libs`
Step 3: The Game
The game is a basic 8-bit ping-pong game. This is not the final version; there are still some bugs in the design, and it has to be merged with the finger detection application. The purple fields on the screen are reserved for fingers. At this time, the game is played with the keyboard (left player: Q and A; right player: O and L), and you can change the speed manually with S.
The game and the finger detection applications are designed separately. The idea is that the finger detection app detects a finger on the purple line, calculates its y coordinate, and sends that coordinate to the game app, which then updates the position of the player's tile.
The game.zip file contains the source code, which can be compiled with the following command line:
g++ `pkg-config opencv --cflags` test2.cpp -o game `pkg-config opencv --libs`
and you can run the game with:
./game
Step 4: Finger Detection
With the web camera we track multiple objects and calculate their coordinates on the screen. This part is not finished yet. What we have done at this point is tracking of simple blue objects. We can also detect and follow fingers, but you need to set the bool variable calibrationMode to true (line 136). Then, when the program is built and executed, a slider window will appear; first change the minimum and maximum HUE values to select the colors you want to detect. The detected object appears white on the screen. After that, you can reduce the noise with the minimum and maximum SATURATION and VALUE values (Picture 1).
If calibrationMode is set to false, it will detect blue objects (Picture 2).
The finger.zip file contains the source code.
This is still a work in progress, so there are still bugs!