Introduction: Chefbot: a DIY Autonomous Mobile Robot for Serving Food in Hotels

As the title says, we are going to see how to build an autonomous mobile robot called Chefbot that can serve food and beverages in hotels and restaurants.

This was a hobby project; I built the robot after seeing a robot called TurtleBot 2. The main aim of the project was to build an open-source autonomous mobile robot research platform and to develop applications on top of it. Serving food was one of those applications, and I named the robot Chefbot.

Note: Unlike TurtleBot 2, this robot does not use a Roomba-style base platform; I built the entire drive mechanism from scratch.

I have documented the entire build procedure of this robot in a book called Learning Robotics using Python, published by Packt Publishing. The book has already been featured on the ROS blog, Robohub, the OpenCV website, Python book lists, and elsewhere.

The header images show the book cover and the Chefbot prototype.

This tutorial is a quick-start guide to developing the robot; for the detailed design and development, you should refer to the book itself.

You should have basic knowledge of Python and ROS before starting this tutorial.

This tutorial series gives an abstract of each chapter of the book. These are the main steps we are going to discuss:

  1. Mechanical Design of Chefbot
  2. Working with Robot Simulation using ROS and Gazebo
  3. Designing Chefbot Hardware
  4. Interfacing Robotic Actuators and Wheel Encoders
  5. Working with Chefbot Sensors
  6. Programming Vision Sensors using ROS and Python
  7. Working with Speech Recognition and Synthesis
  8. Applying Artificial Intelligence to Chefbot using Python
  9. Integrating the Chefbot Hardware and Interfacing It to ROS using Python
  10. Designing a GUI for Chefbot using Qt and Python
  11. Calibration and Testing of Chefbot

Step 1: Mechanical Design of Chefbot

This step gives an abstract of the Chefbot design process described in the book.

The robot design starts from a set of requirements. Here are the conditions the design has to meet:

  1. The robot should have a provision to carry food and drinks.
  2. The robot should be able to carry a maximum payload of 5 kg.
  3. The robot should travel at a speed between 0.25 m/s and 1 m/s.
  4. The ground clearance of the robot should be greater than 3 cm.
  5. The robot must be able to work for 2 hours continuously.
  6. The robot should be able to move and supply food to any table by avoiding obstacles.
  7. The robot height should be between 40 cm and 1 meter.
  8. The robot should be low cost.

After analyzing these requirements, we arrive at the following design parameters.

Motor Specification

  • Robot drive: differential wheeled drive
  • Required motor speed: 80 RPM
  • Wheel diameter: 9 cm
  • Motor torque: 20 kg-cm
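
As a quick sanity check (my calculation, not from the book), the top speed implied by these numbers follows from the wheel circumference, and it falls inside the required 0.25 m/s to 1 m/s band:

import math

wheel_diameter = 0.09  # meters (9 cm wheel)
motor_rpm = 80.0

# Linear speed = wheel circumference x revolutions per second
speed = math.pi * wheel_diameter * motor_rpm / 60.0
print("Top speed: %.2f m/s" % speed)  # about 0.38 m/s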

So we need to design the robot's drive system and buy motors that match these specs.

The next step is to design the robot chassis.

Robot Chassis Design

This robot uses a three-layer platform architecture similar to TurtleBot 2.

I have used the following free software tools for sketching and viewing the 2D and 3D designs of the robot:

  • LibreCAD: LibreCAD is a fully comprehensive 2D CAD application that you can download and install for free.
  • Blender: Blender is a free and open-source 3D modeling tool.
  • Meshlab: MeshLab is an open source, portable, and extensible system for the processing and editing of unstructured 3D triangular meshes.

On Ubuntu, you can install these tools using the following commands.

Installing LibreCAD

$ sudo apt-get install librecad

Installing Blender

$ sudo apt-get install blender

Installing Meshlab

$ sudo apt-get install meshlab

You can see the 2D designs of the robot's base plate, middle plate, and top plate, modeled using LibreCAD.

The dimensions of the plates and of each hole are given below.

Dimensions

Here are the dimensions of each plate:

Base plate:

  1. M1 and M2 (motors): 5 x 4 cm
  2. C1 and C2 (caster wheels) radius: 1.5 cm
  3. S (screw) radius: 0.15 cm
  4. P1-1, P1-2, P1-3, P1-4: outer radius = 0.7 cm, height = 3.5 cm
  5. Left and right wheel sections: 2.5 x 10 cm
  6. Base plate radius: 15 cm

The middle and top plates have the same dimensions as the base plate, with the same screw sizes and other measurements. You can view these plates in the image gallery.

The plates are connected using hollow tubes with screws. You can see the tube dimensions in the images.

The 3D modeling is done using a Python script inside Blender. You can see a screenshot of Blender with the robot model in the images.

The Python script and the Blender 3D model file are attached to this step.

We can export the robot model to STL, which can then be viewed in MeshLab, a 3D mesh viewing tool; a screenshot is included in the images.

The Python script that generates the robot model in Blender is given below.

import bpy

#This function will draw base plate
def Draw_Base_Plate():


    #Added two cubes for cutting sides of base plate
    bpy.ops.mesh.primitive_cube_add(radius=0.05, location=(0.175,0,0.09))
    bpy.ops.mesh.primitive_cube_add(radius=0.05, location=(-0.175,0,0.09))
    ################################################


    #Adding base plate
    bpy.ops.mesh.primitive_cylinder_add(radius=0.15,depth=0.005, location=(0,0,0.09))
    #Adding boolean difference modifier using the first cube
    bpy.ops.object.modifier_add(type='BOOLEAN')
    bpy.context.object.modifiers["Boolean"].operation = 'DIFFERENCE'
    bpy.context.object.modifiers["Boolean"].object = bpy.data.objects["Cube"]
    bpy.ops.object.modifier_apply(modifier="Boolean")
    ################################################


    #Adding boolean difference modifier using the second cube
    bpy.ops.object.modifier_add(type='BOOLEAN')
    bpy.context.object.modifiers["Boolean"].operation = 'DIFFERENCE'
    bpy.context.object.modifiers["Boolean"].object = bpy.data.objects["Cube.001"]
    bpy.ops.object.modifier_apply(modifier="Boolean")
    ################################################


    #Deselect cylinder and delete cubes
    bpy.ops.object.select_pattern(pattern="Cube")
    bpy.ops.object.select_pattern(pattern="Cube.001")
    bpy.data.objects['Cylinder'].select = False
    bpy.ops.object.delete(use_global=False)


#This function will draw motors and wheels
def Draw_Motors_Wheels():
    #Create first Wheel
    bpy.ops.mesh.primitive_cylinder_add(radius=0.045,depth=0.01, location=(0,0,0.07))
    #Rotate
    bpy.context.object.rotation_euler[1] = 1.5708
    #Translation
    bpy.context.object.location[0] = 0.135
    #Create second wheel
    bpy.ops.mesh.primitive_cylinder_add(radius=0.045,depth=0.01, location=(0,0,0.07))
    #Rotate
    bpy.context.object.rotation_euler[1] = 1.5708
    #Translation
    bpy.context.object.location[0] = -0.135
    #Adding motors
    bpy.ops.mesh.primitive_cylinder_add(radius=0.018,depth=0.06, location=(0.075,0,0.075))
    bpy.context.object.rotation_euler[1] = 1.5708
    bpy.ops.mesh.primitive_cylinder_add(radius=0.018,depth=0.06, location=(-0.075,0,0.075))
    bpy.context.object.rotation_euler[1] = 1.5708
    #Adding motor shaft
    bpy.ops.mesh.primitive_cylinder_add(radius=0.006,depth=0.04, location=(0.12,0,0.075))
    bpy.context.object.rotation_euler[1] = 1.5708
    bpy.ops.mesh.primitive_cylinder_add(radius=0.006,depth=0.04, location=(-0.12,0,0.075))
    bpy.context.object.rotation_euler[1] = 1.5708
    ################################################


    #Adding caster wheels
    bpy.ops.mesh.primitive_cylinder_add(radius=0.015,depth=0.05, location=(0,0.125,0.065))
    bpy.ops.mesh.primitive_cylinder_add(radius=0.015,depth=0.05, location=(0,-0.125,0.065))
    #Adding Kinect
    bpy.ops.mesh.primitive_cube_add(radius=0.04, location=(0,0,0.26))


#Draw middle plate
def Draw_Middle_Plate():
    bpy.ops.mesh.primitive_cylinder_add(radius=0.15,depth=0.005, location=(0,0,0.22))


#Draw top plate
def Draw_Top_Plate():
    bpy.ops.mesh.primitive_cylinder_add(radius=0.15,depth=0.005, location=(0,0,0.37))


#Adding support tubes
def Draw_Support_Tubes():
####################################################


    #Cylinders
    bpy.ops.mesh.primitive_cylinder_add(radius=0.007,depth=0.30, location=(0.09,0.09,0.23))
    bpy.ops.mesh.primitive_cylinder_add(radius=0.007,depth=0.30, location=(-0.09,0.09,0.23))
    bpy.ops.mesh.primitive_cylinder_add(radius=0.007,depth=0.30, location=(-0.09,-0.09,0.23))
    bpy.ops.mesh.primitive_cylinder_add(radius=0.007,depth=0.30, location=(0.09,-0.09,0.23))


#Exporting into STL    
def Save_to_STL():
    bpy.ops.object.select_all(action='SELECT')
#    bpy.ops.mesh.select_all(action='TOGGLE')
    bpy.ops.export_mesh.stl(check_existing=True, filepath="/home/lentin/Desktop/exported.stl", filter_glob="*.stl", ascii=False, use_mesh_modifiers=True, axis_forward='Y', axis_up='Z', global_scale=1.0)


#Main code
if __name__ == "__main__":
    Draw_Base_Plate()
    Draw_Motors_Wheels()
    Draw_Middle_Plate()
    Draw_Top_Plate()
    Draw_Support_Tubes()
    Save_to_STL()
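
The script targets the Blender 2.7x Python API (for example, primitive_cube_add() still takes a radius argument there, which later versions renamed). You can paste it into Blender's text editor and press Run Script, or run it headless from a terminal; the file name below is just an example:

$ blender --background --python chefbot_model.py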

Step 2: Working With Robot Simulation Using ROS and Gazebo

After designing the robot's 3D model, the next step is to simulate the robot. I have described the complete simulation of this robot from scratch in the book. The simulation is done using ROS and Gazebo.

Here is a quick start for running the robot simulation.

Prerequisites for simulation:

The first figure shows the Chefbot simulation in Gazebo.

We need the following ROS packages to run this simulation. The following command will install the necessary dependencies:

$ sudo apt-get install ros-indigo-turtlebot ros-indigo-turtlebot-apps ros-indigo-turtlebot-interactions ros-indigo-turtlebot-simulator ros-indigo-kobuki-ftdi ros-indigo-rocon-remocon

Setting up the ROS Catkin Workspace

$ git clone https://github.com/qboticslabs/Chefbot_ROS_pkg.git

From the cloned files, copy the chefbot folder into your catkin workspace's src folder and build the workspace using the catkin_make command, as shown below.
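
For example, assuming your catkin workspace is at ~/catkin_ws:

$ cp -r Chefbot_ROS_pkg/chefbot ~/catkin_ws/src/
$ cd ~/catkin_ws
$ catkin_make
$ source devel/setup.bash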

Running Chefbot Simulation

Launch the simulation using the following command

$ roslaunch chefbot_gazebo chefbot_hotel_world.launch

This will open the Gazebo simulator with a hotel-like environment, shown in the second image.

Now we can implement autonomous navigation in simulation. First we perform SLAM to build a map of the environment; after building the map, we run AMCL nodes to localize the robot on that map.

After localization, we can command the robot to go to a particular table position to deliver food, and then return to its home position.

Performing SLAM using Chefbot

Let's see how to perform SLAM and AMCL in the simulated environment.

Start the SLAM algorithm using the following command

$ roslaunch chefbot_gazebo gmapping_demo.launch

Start visualizing the map in Rviz using the following command

$ roslaunch chefbot_bringup view_navigation.launch

We can map the entire hotel by driving the robot around the environment. The robot can be driven manually using teleoperation; the following command starts keyboard teleoperation for Chefbot:

$ roslaunch chefbot_bringup keyboard_teleop.launch 

We can generate the map of the environment as shown below. When the mapping is complete, save the map to a file using the following command:

$ rosrun map_server map_saver -f ~/hotel_world

This saved map is used in the next step for AMCL.

Performing AMCL on Chefbot

After saving the map, close all terminals and restart Gazebo and its nodes using the following command:

$ roslaunch chefbot_gazebo chefbot_hotel_world.launch

Launch the AMCL node using the following command

$ roslaunch chefbot_gazebo amcl_demo.launch map_file:=$HOME/hotel_world.yaml

Start Rviz with the necessary settings for visualization:

$ roslaunch chefbot_bringup view_navigation.launch

Now we can see the robot localized on the map, at the same position as in Gazebo.

Using the 2D Nav Goal button in Rviz, we can give the robot a goal pose anywhere on the map; the robot will plan a path to that pose and drive there autonomously, avoiding obstacles. A goal can also be sent programmatically, as sketched below.
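
Here is a minimal sketch (mine, not from the book's code) of sending such a goal from Python with actionlib; the goal coordinates are placeholders for illustration:

#!/usr/bin/env python
# Minimal sketch: send one navigation goal to move_base.
# The x/y values below are placeholders, not real table positions.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0  # placeholder goal position
goal.target_pose.pose.position.y = 2.0
goal.target_pose.pose.orientation.w = 1.0  # face along the map x axis

client.send_goal(goal)
client.wait_for_result()
print(client.get_state())  # GoalStatus code; 3 means SUCCEEDED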

We have simulated the complete robot; now it is time to design the hardware prototype of the simulated robot.

Step 3: Designing Chefbot Hardware

The robot prototype meets the following requirements:

  • The design should be simple and cost effective
  • It should have the components needed to perform autonomous navigation
  • Good battery life

Component List

  1. Pololu DC motors with encoders
  2. Pololu motor H-bridge
  3. Level shifter (3.3 V to 5 V)
  4. DC motor brackets
  5. Embedded controller board: Tiva C Launchpad
  6. DC buck converter: LM2596
  7. MPU 6050 IMU
  8. Ultrasonic distance sensor
  9. Battery: Turnigy 14.8 V, 5000 mAh, 20C
  10. Asus Xtion Pro
  11. Intel NUC

The detailed block diagram and connection diagram of the robot are shown in the images. We can use an Asus Xtion Pro instead of a Kinect.

Step 4: Interfacing Robotic Actuators and Wheel Encoders

In this section, we can see how to interface the motors and encoders to the Tiva C Launchpad. The encoders are used to compute the robot's odometry.

Interfacing DC Motor to Tiva C Launchpad

You can see the circuit for interfacing a DC motor to the Tiva C Launchpad using a motor driver.

I have used the Energia IDE to program the Tiva C Launchpad. This IDE can program various Texas Instruments boards. It is a modified version of the Arduino IDE, so we can program the board using the same language used for Arduino.

You can download Energia from the link below. The IDE is available for Windows, Linux, and Mac; we are using the Linux version.

Download Energia

  • Plug the Tiva C Launchpad into the Linux system, then select the board and device name as shown in the image gallery.
  • The test code for the motors is attached below.

Interfacing the Quadrature Encoder to the Tiva C Launchpad

The next step is interfacing the wheel encoders. You can see the diagram for interfacing the wheel encoder that is built into the DC gear motor. This type of encoder is called a quadrature encoder; more details about interfacing and the working principles of quadrature encoders are given in the book. The encoder ticks feed the standard differential-drive odometry update, sketched below.
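
Here is a minimal sketch of that odometry update (mine, not the book's code); the ticks-per-revolution value is an assumption, so use your encoder's actual count. The wheel separation of 0.27 m comes from the Blender model, where the wheels sit at x = +/-0.135 m.

import math

WHEEL_RADIUS = 0.045      # meters (9 cm diameter wheel)
WHEEL_SEPARATION = 0.27   # meters, from the 3D model
TICKS_PER_REV = 1856      # assumed encoder counts per wheel revolution

def odometry_delta(d_ticks_left, d_ticks_right):
    """Return (distance, dtheta) traveled since the last encoder reading."""
    d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    distance = (d_left + d_right) / 2.0             # forward travel
    dtheta = (d_right - d_left) / WHEEL_SEPARATION  # heading change (rad)
    return distance, dtheta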

The test code for the encoder interface is attached below.

Step 5: Working With Chefbot Sensors

The robot has ultrasonic sensors for detecting obstacles and an MPU 6050 IMU that helps compute the robot's odometry.

You will find the circuit diagram for interfacing the ultrasonic sensor to the Tiva C Launchpad, along with its code. The expected output of this code is also shown in the images.

Interfacing MPU 6050 with Tiva C Launchpad

The interfacing diagram of MPU 6050 IMU is given below.

The pin connections are given below.

Launchpad pin | MPU 6050 pin
+3.3 V | VCC/VDD
GND | GND
PD0 | SCL
PD1 | SDA

The interfacing code and the MPU 6050 library are attached to this step. You have to copy the MPU 6050 library to the sketchbook/libraries location to compile the code.

If everything works, we will see the output in the serial monitor. We can also print the sensor values using a Python script, which is also attached; a minimal sketch of such a script follows.
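
This sketch assumes the Energia firmware prints one reading per line over USB serial; the port name and line format are assumptions and must match the attached firmware:

import serial

# /dev/ttyACM0 is the usual Launchpad port on Linux; adjust if needed
port = serial.Serial('/dev/ttyACM0', 115200, timeout=1)
while True:
    line = port.readline().decode('utf-8', 'ignore').strip()
    if line:
        print(line)  # e.g. accelerometer and gyro values from the firmware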

Step 6: Programming 3D Vision Sensors Using ROS and Python

In this step, we are going to see how to interface a 3D vision sensor, such as the Kinect or Asus Xtion Pro, with ROS.

We can install the 3D sensor driver, the OpenNI middleware, and its ROS driver/launch files using the following command:

$ sudo apt-get install ros-indigo-openni-launch

After installing the driver, you can start the 3D sensor using the following command:

$ roslaunch openni_launch openni.launch

We can view the point cloud from the 3D sensor in Rviz using the following command; add a PointCloud2 display subscribed to the sensor's points topic. You can see a point cloud visualization screenshot in the image gallery.

$ rosrun rviz rviz

We are using the 3D sensor to mimic the functionality of a laser scanner, which is needed for performing SLAM. Laser scanners are expensive, so mimicking one with a 3D vision sensor reduces the overall budget of the robot.

The point cloud data is not generated by the 3D sensor itself; it is computed from the depth image inside the ROS OpenNI driver.

The point cloud data can be converted to laser scan data using the following ROS packages.

Converting Point Cloud to Laser Scan Data

The depthimage_to_laserscan and pointcloud_to_laserscan packages help convert the 3D sensor's depth image to laser scan data; an example invocation is shown below.
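
For example, a minimal depthimage_to_laserscan run looks like this; the depth image topic assumes the openni_launch defaults:

$ rosrun depthimage_to_laserscan depthimage_to_laserscan image:=/camera/depth/image_raw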

You can see screenshots of the point cloud converted to a laser scan in the image gallery.

Step 7: Working With Speech Recognition and Synthesis

The robot can take speech commands from the master/user. I have used speech recognition and synthesis libraries to enable this feature. You can see how the recognition and synthesis work from the image gallery.

In the book, I have demonstrated speech recognition toolkits and their Python interfaces.

The speech synthesis libraries and their programming are also covered in the book.

The book also covers speech recognition and synthesis packages in ROS.

Step 8: Applying Artificial Intelligence to Chefbot Using Python

In this section, you can see how to make the robot interactive by adding some intelligence to it. We are using AIML (Artificial Intelligence Mark-up Language) and its Python interpreter, PyAIML, to make the robot interactive.

Here we convert speech to text and feed it into the AIML engine; after getting the text reply from AIML, we convert it to speech using TTS. A block diagram of this process is shown in the image gallery, and a minimal PyAIML example follows.
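
The AIML file name here is hypothetical; the actual files ship with the book's code:

import aiml

kernel = aiml.Kernel()
kernel.learn('greeting.aiml')   # hypothetical AIML file with a HELLO category
print(kernel.respond('HELLO'))  # prints the reply defined in that file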

I have also built a ROS package for handling AIML files; the block diagram of this rosaiml package is also given.

You will find the complete code in the repository we cloned earlier. For the exercise files, check the book's code.

Step 9: Integrating the Chefbot Hardware and Interfacing It to ROS Using Python

This section discusses the complete integration of sensors and actuators to build the robot. You will see the manufactured parts of the robot and their complete interconnection.

The cloned files contain the complete ROS interfacing packages and the firmware code for the Tiva C Launchpad.

Step 10: Designing a GUI for Chefbot Using Qt and Python

In this step we build a commander GUI for the robot. Using the GUI, we can command the robot to go to a particular table position in a hotel-like environment.

After building the map, we can start AMCL for localization and command the robot to go to a particular pose on the map. We can retrieve the pose of each table on the map.

The GUI of the robot is the robot_gui.py script in the chefbot/chefbot_bringup/scripts folder.

Commanding the robot in Gazebo

Start the Gazebo simulation of the hotel environment:

$ roslaunch chefbot_gazebo chefbot_hotel_world.launch

Launch AMCL with the generated map

$ roslaunch chefbot_gazebo amcl_demo.launch map_file:=/home/lentin/catkin_ws/src/chefbot/chefbot_bringup/map/hotel1.yaml

View the robot in Rviz and correct the robot's initial pose on the map using the 2D Pose Estimate button:

$ roslaunch chefbot_bringup view_navigation.launch 

Using the 2D Nav Goal button, we can command the robot around the map. We can get the current pose of the robot with the following command:

$ rosrun tf tf_echo /map /base_link

We collect each table's pose and feed it into robot_gui.py; the poses are hard-coded inside this script (see the sketch below). After collecting each table position, we can start the robot commander GUI:

$ rosrun chefbot_bringup robot_gui.py

Select the table number and press the Go button. Here we assume a 3x3 grid of tables in the hotel.
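
For illustration (these are not the script's actual values), the hard-coded poses might be stored like this; collect the real coordinates with tf_echo as above:

# Hypothetical table poses in the map frame: (x, y, orientation z, orientation w)
table_poses = {
    1: (1.20, 0.50, 0.0, 1.0),
    2: (1.20, 1.50, 0.0, 1.0),
    3: (1.20, 2.50, 0.0, 1.0),
    # ... one entry per table, 9 in total for the 3x3 layout
}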

Step 11: Calibration and Testing of Chefbot

After the complete integration of the robot and building the GUI, the final step is to calibrate and test the robot.

First we need to perform the following calibrations:

  1. Kinect/Asus Xtion Pro
  2. Wheel odometry
  3. MPU 6050

ROS - 3D sensor calibration

In ROS, there is a package for calibrating the RGB and depth cameras. The command to install this calibration package is given below:

$ sudo apt-get install ros-indigo-openni-launch ros-indigo-camera-calibration

The following tutorial will guide you through calibrating the camera: Tutorial link. A typical invocation is shown below.
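
For reference, a typical monocular calibration run from that tutorial looks like the following; the checkerboard size and square edge length are the tutorial's example values, so substitute your own checkerboard's:

$ rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/camera/rgb/image_raw camera:=/camera/rgb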

The image gallery shows the RGB and depth image calibration of the 3D sensor.

For the details of odometry and IMU calibration, refer to the book. You can also see how to test the GUI and insert new poses into the code.

I hope you enjoyed this quick-start tutorial for building Chefbot. I can't cover the entire book here, but this tutorial should give you an idea of how to build an autonomous mobile robot.

If you want more details of the robot's design, check out the book.

Regards

Lentin Joseph

Comments

medalistt (author) 2016-09-17

Hello. First of all, thank you for your work and this tutorial. I have a question: can I use a Leap Motion instead of the Asus (Kinect)?

Gursehaj Singh (author), in reply to medalistt, 2016-10-11

You can, but it has a limited range and field of view.

Ibrahem Garrah (author) 2016-06-28

AMAZING robot and book!

Thank you for your reply to our friend's emails... it really helps us.
DIY Hacks and How Tos (author) 2015-11-15

Awesome robot.

Lentin Joseph (author), in reply to DIY Hacks and How Tos

Thanks :)

ATiRon 3 (author) 2015-11-15

Awesome tutorial! Well done!!

Lentin Joseph (author), in reply to ATiRon 3, 2015-11-15

Thanks :)
