Introduction: SpyBot: RPi Robot With Live Camera Feed!! OpenCV-Tkinter-RPi

About: A Computer Science and Engineering student and a hobbyist. I love to make robots, electronic devices, and DIY things, and to program them myself. My YouTube channel. Follow me on gi…

I have always wanted to make a robot with a live camera feed, because they are very cool. For this the Raspberry Pi is the best choice: it's lightweight, and with Python it is just about unbeatable as a board so far.
Moving on, other people have made robots like this, but the UI (User Interface) was not good looking, and I wanted a game-controller-like GUI (Graphical User Interface) where the buttons float over the camera feed / video.

So, here we are: presenting the SpyBot, made with a Raspberry Pi using Tkinter and OpenCV, with a live camera feed!!

Interesting? Let's Hop In guys!!

Watch the video to see it in action!!

Step 1: Parts / Necessary Things (Hardware)

  • Raspberry Pi ( I used Pi3 model B) - 1x
    [booted using an SD card (8 GB or more, I suggest 32 GB)]
  • Raspberry Pi UPS - 1x
    (It can power the board while it charges; you should get one.)
  • Standard USB cable for powering RPi from UPS - 1x
  • L298n motor driver board - 1x
  • Gear motor with compatible wheel - 4x (for four wheel drive)
  • USB webcam - 1x
    I just unscrewed it and removed the casing.
  • PVC sheet / Foam Board / Cardboard (to make the body)
  • Some male to female Jumper wires

Buy electronic components at

UPS setup Video:


  • Cutter knife - Be very careful with it !!
  • Screw driver
  • Glue Gun - Careful it can be very hot!

Let's make it!

Step 2: Principle (What We Are Doing and How)

So, we are making a robot that can see, send us what it sees, and let us control where it goes.

To make it see, we need a camera. To make a live feed, we need software that updates images from the camera continuously. So we'll use OpenCV (Open Source Computer Vision library) with Tkinter as the GUI library, in Python 3.
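The heart of the live feed is converting each OpenCV frame (which arrives with channels in BGR order) into the RGB order that PIL/Tkinter expect, before displaying it. A minimal sketch of that idea, using plain Python lists so it runs anywhere (the name bgr_to_rgb is illustrative, not from the final script):

```python
def bgr_to_rgb(frame):
    """Convert a BGR frame (rows of [B, G, R] pixels, as OpenCV returns them)
    to RGB by reversing each pixel's channel order. With real numpy frames the
    one-liner is frame[:, :, ::-1], or cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)."""
    return [[px[::-1] for px in row] for row in frame]

# a tiny 1x2 "frame": pure blue, then pure red, in BGR order
frame = [[[255, 0, 0], [0, 0, 255]]]
print(bgr_to_rgb(frame))   # [[[0, 0, 255], [255, 0, 0]]]
```

The full script does exactly this conversion with cv2.cvtColor on every frame before handing it to Tkinter.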

To control the robot, we need something to move the body; here four motors and wheels do that. To control the motors we have a motor driver board, and to control the motor driver we'll use the RPi's GPIO pins. We'll bind buttons on our GUI to send commands to the motor driver board.
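The motion logic boils down to a small truth table over the four driver inputs (the pin names m11, m12, m21, m22 match the wiring and code used later in this build). A hedged sketch with the GPIO call abstracted out — MOVES and apply_move are illustrative names, not from the original script:

```python
# Truth table for the L298N inputs: m11/m12 drive one side of the robot,
# m21/m22 the other. 1 = HIGH, 0 = LOW. Energizing opposite inputs on the
# two sides makes the robot pivot left or right.
MOVES = {
    'forward':  {'m11': 1, 'm12': 0, 'm21': 1, 'm22': 0},
    'backward': {'m11': 0, 'm12': 1, 'm21': 0, 'm22': 1},
    'left':     {'m11': 0, 'm12': 1, 'm21': 1, 'm22': 0},
    'right':    {'m11': 1, 'm12': 0, 'm21': 0, 'm22': 1},
    'stop':     {'m11': 0, 'm12': 0, 'm21': 0, 'm22': 0},
}

def apply_move(name, write=print):
    """Set each motor-driver input for the named move. `write` is a stand-in
    for RPi.GPIO's pin.output(); on the Pi you would pass that function in."""
    for pin_name, level in MOVES[name].items():
        write(pin_name, level)

apply_move('forward')   # sets m11 and m21 HIGH, everything else LOW
```

Each GUI button simply fires one of these moves, and the frame-update loop resets the pins to 'stop' afterwards, so the robot only moves while a button is held.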

Step 3: Make the Robot

Glue the four motors and the motor driver to the back of the PVC board. On top of the board place the RPi, then attach the UPS board using spacers. Then add a holder to screw the camera to, and finally add the camera.

The whole thing is demonstrated in the video below:

Connect the motor driver inputs to the following RPi pins (physical/BOARD numbering, matching the code):

m11 = 8    m12 = 10

m21 = 12   m22 = 11

and connect the ground of the motor battery to GND (any GND pin on the RPi).

Step 4: Raspberry Pi Setup

Before you start, I assume you have already set up the RPi; if not, click here to learn how to boot your Pi. I suggest installing Etcher, as it is the easiest and simplest way to burn the SD card.

After that, enable the VNC server. VNC stands for Virtual Network Computing. To set it up, open the RPi terminal and type

sudo raspi-config

Then select - Interfacing Options >> VNC >> select YES to enable VNC.

That's it for the initial setup. You can now select the VNC icon from the top panel to get the address used to access the RPi from another computer (Windows, Mac, Linux etc.) anywhere. To view it you need VNC Viewer, which is available both for computers and for Android. I controlled the robot using an Android tablet, but it is a good idea to install it on your PC and your mobile device too.
Download VNC Viewer from RealVNC.

Download for Android devices.

Step 5: The CODE

The code is written in python3. Before you download or run the code you have to install several libraries -

1. Tkinter (rich GUI library). It ships with Python 3 on Raspbian; if it is missing, in the terminal type

sudo apt-get install python3-tk

(Note: `pip3 install tkinter` does not work, since Tkinter is not distributed on PyPI.)

2. OpenCV (Open Source Computer Vision library) - responsible for accessing the camera and image processing.

The simplest way is to pip-install everything, but before installing OpenCV you need to install some other libraries too. From the terminal type

sudo apt-get install libhdf5-dev libhdf5-serial-dev libhdf5-100
sudo apt-get install libqtgui4 libqtwebkit4 libqt4-test python3-pyqt5
sudo apt-get install libatlas-base-dev
sudo apt-get install libjasper-dev

After that,

sudo pip3 install opencv-contrib-python

We'll also need PIL / pillow, for image access in Tkinter

sudo apt-get install python3-pil.imagetk

To access the GPIO pins from Python -

pip3 install RPi.GPIO
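Once everything is installed, a quick sanity check (a hypothetical helper, not part of the robot script) can confirm that all four libraries are importable from Python 3 before you run the main code:

```python
import importlib.util

# module names as imported by the robot script
for mod in ('cv2', 'tkinter', 'PIL', 'RPi.GPIO'):
    top = mod.split('.')[0]    # find_spec wants the top-level package name
    found = importlib.util.find_spec(top) is not None
    print(mod, 'OK' if found else 'MISSING')
```

If anything prints MISSING, revisit the corresponding install step above.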

Now, here is the code. Download from here.

""" SpyPi raspberry pi robot control panel with live video feed.
Just like a video game control panel! *only this time the car/robot is real."""

""" version: 4.f 1. seeing live feed, buttons over lapped 2. for controlling and interacting within the same window. 3. show notification on image 4. Control DC motors (making a full mobile robot) """

""" author : Ashraf Minhaj mail : blog : """

import cv2 #open source computer vision library from tkinter import * #import only what necessary from PIL import Image, ImageTk import RPi.GPIO as pin #import gpio control library

pin.setwarnings(False) pin.setmode(pin.BOARD)

#motor control pins m11 = 8 m12 = 10 m21 = 12 m22 = 11

#setup pins with initial state (LOW) pin.setup(m11, pin.OUT, initial = pin.LOW) pin.setup(m12, pin.OUT, initial = pin.LOW) pin.setup(m21, pin.OUT, initial = pin.LOW) pin.setup(m22, pin.OUT, initial = pin.LOW)

msg ='' #message variable

#width, height = 800, 500 #setting widht and height

cap = cv2.VideoCapture(0)

cap.set(cv2.CAP_PROP_FRAME_WIDTH, 600) #cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 350)

root = Tk() root.title("RPI Robot Control Panel by Ashraf Minhaj")

#main label for showing the feed imagel = Label(root) imagel.pack()

# font font = cv2.FONT_HERSHEY_SIMPLEX # org org = (30, 20) # fontScale fontScale = 0.7 # Blue color in BGR color = (255, 255, 255) # Line thickness of 2 px thickness = 2

notif = ''

#load background and button images backg_img = 'car.gif' for_img = 'f.gif' left_img = 'l.gif' right_img = 'r.gif' back_img = 'b.gif' quit_img = 'close.gif'

msg = ''

def pins_default(): """default state of certain pins, important for motors.""" pin.output(m11, pin.LOW) pin.output(m12, pin.LOW) pin.output(m21, pin.LOW) pin.output(m22, pin.LOW)

def forward(): """forward motion""" print("Going Forward Baby.") global msg msg = 'Going Forward' pin.output(m11, pin.HIGH) pin.output(m21, pin.HIGH) notlabel.config(text=" man keya liya hain") notif = 'shamne jai vai' return

def backward(): """backward motion""" print("Going back! Watch OUT!!") global msg msg = 'Going BACKWARD' pin.output(m12, pin.HIGH) pin.output(m22, pin.HIGH) notlabel.config(text=" pichhe dekh madar toast")


def left(): """go left""" print("Going left Baby.") global msg msg = 'Going LEFT' pin.output(m12, pin.HIGH) pin.output(m21, pin.HIGH) notlabel.config(text=" dili to bam haat voira") notif = 'baaye jai vai' return

def right(): """go right""" global msg msg = 'Going RIGHT'

print("Going right man.") pin.output(m11, pin.HIGH) pin.output(m22, pin.HIGH) notlabel.config(text=" daye dekh") notif = 'dane jai vai' return

def msg_default(): global msg msg = '' #return msg def notification(): global notif #notification variable (global)

return notif

def check_faces(f): #check face upon given frame faces = face_cascade.detectMultiScale(f, scaleFactor = 1.5, minNeighbors = 5) #print(faces) for(x, y, w, h) in faces:

print('Face found\n') #print(x, y, w, h) roi_f = f[y: y+h, x: x+w] #region of interest is face #*** Drawing Rectangle *** color = (255, 0, 0) stroke = 2 end_cord_x = x+w end_cord_y = y+h cv2.rectangle(f, (x,y), (end_cord_x, end_cord_y), color, stroke) return f

def get_frame(): """get a frame from the cam and return it.""" print("chhobi lagbo vai.") ret, frame = return frame

def update(): """update frames.""" global msg print("dak porse vai")

frame = get_frame() #if notification() != None: cv2.putText(frame, msg, org, font, fontScale, color, thickness, cv2.LINE_AA) cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

#manipulate image here (if needed) #x = notification() #cv2.putText(frame, 'yahoo', org, font, fontScale, color, thickness, cv2.LINE_AA)

try: cv2.image = check_faces(frame) except: pass

img = Image.fromarray(cv2image) imgtk = ImageTk.PhotoImage(image=img)

imagel.imgtk = imgtk imagel.configure(image=imgtk) msg_default() #motor pins back to default pos pins_default() imagel.after(15, update)

#read the image for tk im_f = PhotoImage(file= for_img) im_l = PhotoImage(file= left_img) im_r = PhotoImage(file= right_img) im_b = PhotoImage(file= back_img)

im_quit = PhotoImage(file=quit_img)

#buttons for_but = Button(root,text="<< Left",repeatdelay=15,repeatinterval=10, command=forward) for_but.config(image=im_f, fg='gray', border=0, borderwidth=0, bg='black'), y=250)

left_but = Button(root,repeatdelay=15,repeatinterval=10, command=left) left_but.config(image=im_l,border=0,borderwidth=0, bg='black' ), y=300)

right_but = Button(root,repeatdelay=15,repeatinterval=10, command=right) right_but.config(image=im_r,border=0,borderwidth=0, bg='black'), y=300)

back_but = Button(root, repeatdelay=15, repeatinterval=10, command=backward) back_but.config(image=im_b, border=0,borderwidth=0, bg='black'), y = 350)

quit_but = Button(root, text='Quit', command=root.destroy) quit_but.config(image = im_quit, bg='red'), y=0)

msg = '' msg_default()


root.resizable(0, 0)

root.mainloop() pin.cleanup()

Note: the RPi comes with both Python 2 and Python 3 installed by default (on Raspbian, which I used). Do not try to update or change the Python versions unless you're a pro, because things can get messed up; besides, other apps depend on both versions.

Step 6: Run the Bot!!!

Now open VNC Viewer on your device, connect using the address shown by the RPi's VNC server, and run the code.
Have fun and happy making!!

Next plan:

As this robot has a computer (the RPi) and a camera, we can go on to make a fully functional autonomous car; that is my next challenge / goal.

Thank you.

Participated in the
Raspberry Pi Contest 2020