Introduction: Pointer Dog Robot With RPi and Arduino
Let's build your own Pointer robot with Raspberry Pi and Arduino.
A Pointer is a hunting dog that follows its hunter's commands and points at the target. The robot described here acts like a Pointer: it follows your voice commands, and it traces and points at your target autonomously. You can see the robot working in the video below.
In the video above, the English voice commands are spoken with a Japanese accent. You do not need to change any locale or language settings during setup or use. There are just 7 commands in total, and an easy web-based read-aloud service can speak them for you.
All sample codes can be downloaded securely from Instructables. The parts are listed in the next-to-last step. If you have none of the materials, you can buy all of them for about 130 dollars. I hope the videos and pictures in each step make up for my poor English.
Step 1: Raspberry Pi and Arduino Can Support Each Other
Arduino is a very good microcontroller and works well in many great projects. On the other hand, it is unsuited for tasks that require large resources, such as real-time image or speech analysis. Hence, in earlier projects, special boards or PCs have been used to support Arduino with faster or more advanced analysis.
However, I wanted my Pointer robot to be self-contained, without using special devices, a PC, or cloud resources, and I also wanted to build it at low cost. Several months ago, I thought I might be able to achieve this with a Raspberry Pi, so I started learning it.
Though I began with little knowledge of Linux and Python, I was able to work through basic tasks by referring to the many kind articles and commentaries on the web. Through these tasks, I found that the Raspberry Pi is not always superior to the Arduino. Frankly, it may be better to think of the Raspberry Pi as a small PC rather than a microcontroller. I feel this especially in projects that use PWM together with the sound device.
In fact, special add-on boards are sold to support the Raspberry Pi in such projects. However, an Arduino can work for the Pi instead of these special boards: the Pi performs the advanced analysis and outputs the result, and the Arduino controls LEDs, motors, and servos based on the received result. UART (serial communication), I2C, or SPI can be used between the Pi and the Arduino.
Step 2: Set Up Raspberry Pi
I began with Raspberry Pi by installing "Raspbian Stretch" on a Pi 3 using NOOBS (ver. 2.4.4) in October 2017. I recommend that beginners use a Pi 3 with a 16GB micro SD card. I tried a Pi Zero, a Pi Zero W, and an 8GB micro SD card for this project, but found that they do not have enough resources to make the robot act adequately.
For this instruction, I start again with clean parts. The latest NOOBS file (ver. 2.8.1 as of May 2018) is downloaded and extracted on my PC, then copied onto a new 16GB micro SD card formatted in FAT32.
Here I set up the Raspberry Pi as simply as possible, following the "recommended" options and keeping the default settings. I plug a LAN cable into my Raspberry Pi to connect to the internet and install the Raspbian OS(*1). After the installation finishes and the desktop is displayed on a monitor connected with an HDMI cable, open a terminal window and execute just the next 2 lines to update. (Do not include the "$ " when you copy and paste.)
- $ sudo apt-get update
- $ sudo apt-get upgrade
After these 2 commands finish, click the raspberry icon at the upper left corner and select "Preferences", "Raspberry Pi Configuration", and "Interfaces". Check that all interfaces are set to "Disable"(*2). Then reboot your Pi.
- $ sudo reboot
(*1) The defaults "Language: English (UK)" and "Keyboard: gb" are selected.
(*2) Though VNC will be changed to "Enable" later, it is desirable to keep it "Disable" until at least Step 6. Once most cables are finally unplugged from the Pi, you can change VNC to "Enable". See Step 10.
Step 3: Prepare Camera and Microphone
In this project, a camera and a microphone must be connected to your Raspberry Pi.
I had a popular USB camera, the Logitech C270, and I use it for this project. Though it is a little too big for the small SG90 servo and its cable is too long, it is a good choice because it has sufficient resolution and a built-in microphone. This microphone works well. On the other hand, two cheap USB microphones I bought new did not perform well, because their maximum gain is too small.
Of course, you can use the official Camera Module together with a good USB microphone instead. The official Pi Camera is smaller and its ribbon cable is shorter. These are advantages(*), but the sample codes in this article would need to be modified if you use the Pi Camera.
Hereafter, we assume that a Logitech C270, or a comparable USB camera with a microphone, is connected to the Pi.
(*) However, the CSI ribbon cable might not be flexible enough for this project.
Step 4: Install OpenCV and Try Color Tracking
You can install OpenCV on your Pi much more easily than Julius, which is described later. Open a terminal window on your Pi and execute just the 1 line below to finish it.
- $ sudo apt-get install libopencv-dev python-opencv
Then download two sample codes onto your Pi. Open this page in the "Chromium" browser on your Pi and download the two sample files attached below. After that, move the downloaded files from the "Downloads" directory "/home/pi/Downloads" to your home directory "/home/pi".
Let's run the first sample code on your Pi with the USB camera connected. Execute the next command, and a preview monitor opens. You can see how to run the command and how it works in Video 2.
- $ python /home/pi/adx_PR1_preview.py
To quit the preview monitor, press the 'q' key while the monitor is active (*1).
Now let's run the second sample code on your Pi (*2). Execute the next command and hold an orange target close to the camera; your Pi will start tracking the target in the preview monitor.
- $ python /home/pi/adx_PR2_colorTrack.py
[Video_2] Color Tracking
(*1) When the monitor window accepts key input, its upper frame is blue. If not, click the preview window.
(*2) After quitting the first code, a new preview monitor may not open. If so, run the command above up to about 5 times.
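The attached sample adx_PR2_colorTrack.py is not reproduced here, but the core idea of color tracking — threshold the frame for orange-ish pixels, then steer toward the centroid of the matching region — can be sketched in pure Python. The helper names and RGB thresholds below are illustrative only; the real sample presumably works on full camera frames with OpenCV functions such as cv2.inRange and cv2.moments.

```python
# Simplified sketch of the color-tracking idea (illustrative names/thresholds).

def orange_mask(frame, r_min=180, g_lo=60, g_hi=170, b_max=80):
    """Return a boolean mask of 'orange-ish' pixels.
    frame: list of rows, each row a list of (r, g, b) tuples."""
    return [[(r >= r_min and g_lo <= g <= g_hi and b <= b_max)
             for (r, g, b) in row] for row in frame]

def centroid(mask):
    """Return (x, y) centre of the True pixels, or None if none match."""
    xs, ys = [], []
    for y, row in enumerate(mask):
        for x, hit in enumerate(row):
            if hit:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

The tracking loop then only has to compare the centroid with the image center and nudge the camera (later, the servos) toward it.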
Step 5: Get a 2-Axis Camera Platform With the C270 and Servos
In the former step, our Pi became able to track an orange target in the monitor. Here we enable it to track over a wider range with two servos.
One servo is attached to the base plate of the robot; it pans the camera left and right. The other servo is attached to the first one, and the horn of this second servo supports the camera; it tilts the camera up and down.
Then a troublesome matter confronts a maker with poor tools like me: it is not easy to attach the USB camera C270 to the second servo in its normal position. The side face of the camera body is too narrow for the servo horn to support it firmly. A good, special adapter would be needed(*) to attach the camera to the servo horn in its normal position.
Hence we attach the camera vertically (in portrait mode) to the second servo. As a result, the preview monitor is rotated 90 degrees. This is unpleasant for us, but our robot doesn't care. Panning is done with reference to the Y-axis of the preview monitor, and tilting with reference to the X-axis.
However, the rule "panning with the X-axis and tilting with the Y-axis" is kept in every sample code used in the next step and later, because we hope to get a good adapter and attach the camera in its normal position in the future. Hence note that here "panX" in the sample code actually means the tilting angle of the camera, and "tiltY" means the panning angle.
(*) Later I found that no adapter is needed. Only a small wedge, a bit of double-sided tape, and a few additional rubber bands are required. This is described in Step 14.
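To keep the swap straight while reading the sample codes, the relation between the code's names and the physical motions could be written as below. This is a hypothetical helper, not code from the samples.

```python
# Sketch of the portrait-mode axis swap described in Step 5.
# The sample code keeps the names panX/tiltY as if the camera were mounted
# in landscape; with the camera mounted vertically, the physical meaning swaps.

CAMERA_PORTRAIT = True  # assumption: camera attached vertically (Step 5)

def physical_axes(panX, tiltY, portrait=CAMERA_PORTRAIT):
    """Return (physical_pan, physical_tilt) for the code's (panX, tiltY)."""
    if portrait:
        # "panX" actually tilts the camera, "tiltY" actually pans it
        return (tiltY, panX)
    return (panX, tiltY)
```

If you later remount the camera in landscape, only this one mapping would need to change.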
Step 6: Control the Camera Platform With Arduino
In the former step we built a 2-axis rotatable camera platform with 2 servos. The question here is: how do we control this platform?
To come to the point, it is difficult to control it with the Raspberry Pi alone. The Pi has 2 hardware PWM channels that could drive the servos, but the Pi's audio hardware also uses them. In this project our Pointer robot must follow voice commands, so we cannot use the hardware PWMs for servo control.
Here we use an Arduino NANO, or a compatible board, to control the 2 servos. UART (serial communication), I2C, or SPI can be used to communicate between the Pi and the Arduino; we use UART.
Check again that "Serial" is set to Disable in the "Interfaces" tab of "Raspberry Pi Configuration". Then open the "/boot/cmdline.txt" file, delete "console=serial0,115200" from its text, and save it.
- $ sudo leafpad /boot/cmdline.txt
Next, open the "/boot/config.txt" file.
- $ sudo leafpad /boot/config.txt
Check carefully whether any lines containing "dtoverlay=..." or "enable_uart=..." appear in the file without a leading comment character "#". If you find such lines, comment them out. After commenting them out (or if there are no such lines), add the 2 lines below at the end of the file.
After correcting the file, save it and reboot the Pi. Then check both the Pi configuration and the "/boot/config.txt" file again. The added line in "/boot/config.txt" may have been changed automatically to something like "enable_uart=0". If so, correct it back to "enable_uart=1", save the file, reboot the Pi, and check the points above once more. The corrected line should persist this time.
The Pi is now set up for UART communication. Next, click and see the picture above, then connect the wires very carefully between the Pi, Arduino, logic level converter, servos, and batteries. After all wires are connected correctly, temporarily unplug the 2 wires from the RX and TX pins of the Arduino.
Then download the Arduino sample sketch onto your PC. Two sample files are attached below. On your PC, click and download "adx_PR_arduino.zip" and unzip it. After that, upload it to your Arduino using the IDE on your PC. When the upload has finished, plug the 2 wires back into the RX and TX pins of the Arduino.
Finally, download the other sample code "adx_PR3_uart180.py" onto your Pi. Open this page on your Pi, download the file, and move it to your home directory "/home/pi".
Open a terminal window on your Pi and run the next command twice(*). (* See the note below.)
- $ python /home/pi/adx_PR3_uart180.py -16290
- $ python /home/pi/adx_PR3_uart180.py -16290
If the platform is set up correctly and the camera was facing another direction, you will see the camera turn to face the front. Next run the command below; the camera turns its face with a little panning.
- $ python /home/pi/adx_PR3_uart180.py -16275
Each servo can rotate from 0 to 180 degrees. Here we control each of them with a resolution of 180 steps (0, 1, ..., 179). The value 16290 means that each servo stays at 90 degrees: 16290 = (90*180) + 90. However, reading errors happen occasionally, so a negative number, -16290, and some filters are used in the sample sketch. For example, if a number read now is larger or smaller than the number read one step earlier by more than a certain amount, no rotation is done. But when the same number is sent twice, this filter is skipped at the second reading.
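The packing of two servo angles into one integer, pan*180 + tilt, can be sketched as follows. The function names are illustrative, and the serial-port lines in the comment are an untested assumption (device path and baud rate depend on your setup).

```python
# Sketch of the command encoding used between Pi and Arduino:
# two servo angles (0..179 each) packed into one integer.

def encode(pan, tilt):
    """Pack two servo angles (0..179) into one integer."""
    assert 0 <= pan < 180 and 0 <= tilt < 180
    return pan * 180 + tilt

def decode(code):
    """Unpack the integer back into (pan, tilt)."""
    return divmod(code, 180)

# The Pi would send the value as text over UART, roughly like this
# (not run here; '/dev/serial0' and the baud rate are assumptions):
#   import serial
#   port = serial.Serial('/dev/serial0', 9600)
#   port.write(str(-encode(90, 90)).encode())  # negative sign per the filter above
```

For instance, decode(20790) gives (115, 90), matching the "face forward" constant mentioned in Step 13.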
Step 7: Install the Speech Analyzer Julius
Before this project, I installed both Google Assistant and Amazon Alexa on my Pi 3 and tried them. Both have good listening comprehension.
However, I want my Pointer robot to work self-contained, without using web or cloud resources. Hence I installed a dictation module, "Julius". It has been developed as research software for Japanese speech recognition and is distributed under an open license together with its source code.
No special setting is needed to install and run Julius on your Pi. It works with the default settings right after the standard Raspbian install; no locale or language change to Japanese is required. In this project only 7 words are used, so you need not worry about language either. If you watch Video 3 in Step 9 and imitate the words as in the video, you can command your robot just as well.
You can install Julius by following the steps below. These steps are described on a commentary site for a guide book written by Takashi Kanamaru; I appreciate his work. Though the commentary is written in Japanese, you can read it in your language using "Google Translate".
- Open this page in the "Chromium" browser on your Pi
- Click here and download "julius-4.4.2.tar.gz" into your Pi ("Downloads" directory is selected automatically)
- Click here and download "dictation-kit-v4.4.zip" into your Pi
- Move these two files from "Downloads" directory to your home directory "/home/pi"
- Open terminal window and execute next 9 lines in sequence
- $ sudo apt-get update
- $ sudo apt-get install libasound2-dev
- $ tar zxf julius-4.4.2.tar.gz
- $ cd julius-4.4.2
- $ ./configure --enable-words-int --with-mictype=alsa
- $ make
- $ sudo make install
- $ cd
- $ unzip dictation-kit-v4.4.zip
It is reported that Julius can also recognize other languages such as English, Slovenian, French, and so on. You can also try other speech analysis modules, though I cannot give good advice about them. And if you don't mind giving up being self-contained, you can use engines on the web or in the cloud; in that case, I recommend Google Assistant because it is easy to use in this project. See Step 3 in my earlier project.
Step 8: Set Up Audio Input
In the former step, you installed the dictation engine "Julius" and its dictation kit on your Pi. Now open "/home/pi/.bashrc" in leafpad.
- $ sudo leafpad /home/pi/.bashrc
Add the next line at the end of the ".bashrc" file and save the file.
- export ALSADEV=plughw:1,0
Next, download the audio config file "asoundrc.conf", which is attached below. Open this page on your Pi, download it, and move it from the "Downloads" directory to your home directory "/home/pi". After that, rename it ".asoundrc" (add the leading '.' and delete ".conf"). You may then no longer see the renamed file; if you cannot find it, right-click in your home directory and check "Show Hidden".
Then shut down the Pi and plug the USB device with the microphone into the Pi. Boot the Pi again, open a terminal window, and execute the next command.
- $ alsamixer
Alsamixer is now displayed in the window. Press the F6 key and select the sound card, then press the F4 key. If the selected card has a sound input (microphone), one or more bar graphs are displayed. If no bar graph is displayed, press F6 and select another card.
You can select the sound input device or change its gain with the arrow keys. However, the graph(s) may still be shown even when the card has stopped working. To check for this, press the up or down arrow key: if the card has stopped, the graph disappears and "This sound device was unplugged. Press F6 ...." is displayed. In that case, quit alsamixer at once by pressing the "Esc" key and open it again.
Now you can try Julius. Run the command below in a new terminal window.
- $ julius -C dictation-kit-v4.4/main.jconf -C dictation-kit-v4.4/am-gmm.jconf -demo
Wait until "<<< please speak >>>" is shown as in the picture above. This is the first stand-by status. Then speak "Ah, Uh, Oh" into the microphone. If Julius works well, some Japanese characters are printed. Conversely, if the command ends without the stand-by status appearing, or the same new lines are printed repeatedly while you are not speaking, the sound card of the microphone is not working well. In that case, quit Julius with "Ctrl+c" and check alsamixer as described above. If Julius still doesn't work after checking alsamixer, see the troubleshooting below.
When we start with standard, clean parts, Julius should work in the way above. However, it might not work with other hardware. In that case, you should check the card and device numbers of the microphone. Open a terminal window and execute the next commands.
- $ cd
- $ arecord -l
If the microphone works, a card number X and a device number Y are shown as below. If you started with clean parts, X and Y ought to be 1 and 0 respectively.
- "card X: U0xNNd0xNNN [USB Device 0xNNd:0xNNN], device Y: USB Audio [USB Audio]"
If card number X and device number Y are not 1 and 0, reboot the Pi and check the numbers again. If they are still not 1 and 0, correct the line added to ".bashrc" above according to the actual numbers and try again.
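If you prefer to script this check, the card and device numbers can be pulled out of the `arecord -l` output with a small helper. This is an illustrative sketch, not part of the sample codes; the ALSADEV value would then be "plughw:X,Y".

```python
# Sketch: extract the card and device numbers from one `arecord -l` line.
import re

def card_and_device(line):
    """Return (card, device) as ints, or None if the line doesn't match."""
    m = re.search(r'card (\d+):.*device (\d+):', line)
    return (int(m.group(1)), int(m.group(2))) if m else None
```

For example, feeding it a typical capture-device line returns the (X, Y) pair discussed above.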
Step 9: Download and Test Dictionary
It seems that Julius cannot hear us as well as Google Assistant or Amazon Alexa can. Therefore, a special dictionary is introduced here, containing only the very limited set of words used to command our Pointer robot.
Download the special dictionary file "adx_PR4_command.dic" attached below. Open this page on your Pi, download the file, and move it from the "Downloads" directory to your home directory "/home/pi".
Next, correct a file in the dictation kit so that Julius refers to the special dictionary.
- $ sudo leafpad dictation-kit-v4.4/main.jconf
Search carefully for the line "-v model/lang_m/bccwj.60k.htkdic" in the text of the file. When you find it, comment it out with '#' and add the new line shown below. Save the file and close it.
- #-v model/lang_m/bccwj.60k.htkdic
- -v /home/pi/adx_PR4_command.dic
Then open the special dictionary file and look at its text. We do not change or save this file here.
- $ sudo leafpad /home/pi/adx_PR4_command.dic
The top 2 lines in it can be ignored here. The remaining 7 lines are the command words.
- Go forward: "g o: g o:" (as "Go Go")
- Go backward: "b a q k u b a q k u" (as "Back Back")
- Turn right: "r a i t o r a i t o" (as "Right Right")
- Turn left: "r e f u t o r e f u t o" (as "Left Left")
- Stop and wait: "m a t e m a t e"
- Start tracking: "t o r a q k u s u t a: t o" (as "Track Start")
- Stay down: "s u t e i d a u N" (as "Stay Down")
After you close this dictionary file without saving it, open a new terminal window and run Julius again.
- $ julius -C dictation-kit-v4.4/main.jconf -C dictation-kit-v4.4/am-gmm.jconf -demo
If you pronounce each word correctly, the corresponding Japanese characters are printed in the terminal window. The Japanese characters are listed in the picture above.
The pronunciation of each command can be heard in Video 3 below. Watch and listen to this video, and talk into the microphone with Julius running. You can also learn the pronunciation of each command using the web read-aloud service "Sound of Text". Furthermore, if you use it as your proxy, you need not learn the pronunciation at all. I show how to use it in the last step.
[Video_3] Lesson in Voice Command
If you want to stop speech recognition, press "Ctrl+c" in the active window where Julius is running.
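Once Julius prints a recognized command, a controller script only has to map it to an action with a simple lookup. The sketch below is illustrative: Julius actually prints Japanese characters, the real mapping lives in the sample codes of Step 11, and the romanized keys and action names here are invented.

```python
# Illustrative dispatch table from a recognized command word to a robot
# action (romanized stand-ins; the real script matches Japanese output).

ACTIONS = {
    "go":    "FORWARD",
    "back":  "BACKWARD",
    "right": "TURN_RIGHT",
    "left":  "TURN_LEFT",
    "mate":  "STOP",
    "track": "TRACK_START",
    "stay":  "STAY_DOWN",
}

def dispatch(word):
    """Return the action for a recognized word, or None for noise."""
    return ACTIONS.get(word.lower())
```

Unrecognized words map to None, so background noise simply does nothing.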
Step 10: Connect Additional Wires
In the former step, we ran Julius on our Pi with the special dictionary and checked that it works well when we speak the command words into the microphone. Now we want our Pointer robot to act on those voice commands.
To follow the voice commands, our robot requires 2 additional motors(*1) and wheels. The Arduino controls these motors in addition to the servos, and an L298N motor driver and a double gearbox act as mediators between these new parts.
Click and see the pictures above, then connect the additional wires carefully between the Arduino, motors, motor driver, and 5V battery_2. After all wires are connected correctly, connect a wire between GPIO 5 and GPIO 25 on your Pi.
All right. We have installed all the basic modules on our Pi and checked that they work. Now we can unplug the LAN cable from our Pi and use Wi-Fi instead. Since VNC(*2) can now be used without hesitation, we can also unplug the HDMI cable(*3), mouse, and keyboard from the Pi. Now only the camera and microphone are plugged in.
(*1) 0.1uF capacitors can prevent the motors' electrical noise from disturbing the UART communication. See the lower right picture above.
(*2) I recommend enabling Wi-Fi and VNC on your Pi from here on, and installing VNC Viewer on your PC, because LAN and HDMI cables plugged into the Pi encumber the robot's movement. The power supplied to the motors is kept moderate to limit their noise, so with these cables still attached, the robot with its limited power might not be able to move well.
(*3) You can select the screen resolution. Click the raspberry icon at the upper left corner of the VNC client monitor on your PC, then choose "Preferences", "Raspberry Pi Configuration", "System", and "Set Resolution".
Step 11: Activate Your Robot
Download the last two files required to command your robot. One works with Julius running in module mode and acts as a mediator between dictation and your robot's decision making; the other executes the decision making.
Open this page on your Raspberry Pi and download the two sample files attached below onto your Pi. After that, move them from the "Downloads" directory to your home directory "/home/pi".
Now all the setup is done. Open 4 new terminal windows as in the picture above. Three of them are used in this step. In the first one, check and set alsamixer adequately; see Step 8 for what to do. Here, set the gain of the microphone low, around the minimum level, because the noise of the motors and gearbox interferes with good dictation.
Next, in the second terminal, execute the command below to run Julius in module mode. In module mode, an alternative stand-by status is shown in the window; see the picture above. If the status is not displayed, check Steps 8 and 9 again.
- $ julius -C dictation-kit-v4.4/main.jconf -C dictation-kit-v4.4/am-gmm.jconf -demo -module
If the stand-by status in module mode is displayed normally, run the next command in the third window. The stand-by status in the second window then advances to its final stage; see the picture in the next step to know what the final stand-by status looks like. If the final status is not displayed, check Steps 8 and 9 again.
- $ python /home/pi/adx_PR5_voiceCtl.py
If the final stand-by status is displayed normally in the second window, say "Ma-te Ma-te" into the microphone(*1, *2). 3 Japanese characters are printed in the third window. You can see this series of operations in a picture attached to the top of the next step. Before using the fourth window, let's command the robot by voice in the next step.
(*1) You can hear how to pronounce "Ma-te Ma-te" at 2:20 in Video 3 in Step 9, or learn it with the support tool described in the last step.
(*2) When Julius mishears your command, the robot may start moving against your intent. Push the reset button on the Arduino to cancel it.
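For reference, a module-mode client similar in spirit to the mediator script could look like the sketch below. Two assumptions to note: Julius's module mode serves its XML-like messages on TCP port 10500 by default, and recognized words appear in WHYPO "WORD" attributes. Check the Julius documentation if your version differs.

```python
# Sketch of reading Julius module-mode output (assumptions noted above).
import re

def words_from_message(msg):
    """Extract recognized words from a Julius module-mode message."""
    return re.findall(r'WHYPO WORD="([^"]*)"', msg)

# Connecting would look roughly like this (not run here):
#   import socket
#   s = socket.create_connection(("localhost", 10500))
#   buf = s.recv(4096).decode("utf-8", errors="replace")
#   print(words_from_message(buf))
```

The mediator then feeds each extracted word to the decision-making code.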
Step 12: Command Your Robot With Your Voice
In the former step, your Pointer robot became ready to follow your voice commands. You can command it to go forward/backward, turn right/left, or stop moving. Your robot moves or stops following the voice commands below(*).
- Go forward: "Go Go"
- Go backward: "Back Back"
- Turn right: "Right Right"
- Turn left: "Left Left"
- Stop and wait: "Ma-Te Ma-Te"
In Step 9, you already learned how to speak these commands into the microphone with a Japanese accent and how to check that they are heard correctly. Here, the recognized words are printed in the third window opened in the former step, because Julius runs in module mode in the second window. See the next video to learn (1) how to use the third window opened in the former step, (2) how to command your robot, and (3) how your robot follows your commands.
[Video_4] Command Your Robot with Your Voice
(*) You should speak these commands with a strong Japanese accent. A web service can speak them in a Japanese accent for you; see the last step.
If your robot doesn't start moving when you command it to move, ...
- Check in the second window whether the final stand-by status is displayed.
- Check in the third window whether the correct Japanese characters are displayed after your voice command.
- Check the wire connections in Step 10 very carefully again.
- Check that your gearbox has the adequate reduction ratio described in Step 15, and check that your 5V battery_2 is fresh enough.
If your robot doesn't stop moving when you command it to stop, ...
- Check in the third window whether the correct Japanese characters are displayed after your voice command.
- Keep the microphone at a greater distance from the motors and gearbox.
- Make the gain of the microphone smaller or larger. See the former step.
- Try the web read-aloud service described in the last step with "matei matei".
The sample codes provided in this article are meant to be used with the parts listed in Step 15. If you use them with a different microphone, DC motors, gearbox, or wheels, they might not work well.
The parameter values in these sample codes are set so that the robot follows us well despite the noise of the motors and gearbox. For example, the power supplied to each DC motor is kept subdued to keep the noise as small as possible. Hence a worn-out DC motor, or a gearbox with a lower gear ratio, might not be able to move the wheels. If you want to supply more power to each DC motor, enable the next line in the sample sketch "adx_PR_arduino.ino" (remove the leading "//") and upload the corrected sketch to your Arduino again, with the 2 wires temporarily unplugged from the RX and TX pins of the Arduino as before.
- //dcMotor = ( code % 180 ) * 20 / 12; // 255/180 = 17/12
Also, instead of simply enabling this line, you can change the coefficient "20/12" to make your robot move faster or slower. (The original value is "17/12"; "20/12" is an adjusted value that makes it move faster.)
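The scaling discussed above can be sketched in Python for clarity: the low field of the command (0..179) is scaled to an 8-bit PWM duty, and 17/12 is approximately 255/180 (full power). The clamp to 255 is my own addition; the Arduino sketch works on the same integer arithmetic.

```python
# Sketch of the motor-power mapping from the Arduino sample sketch.

def motor_duty(code, num=17, den=12):
    """Scale a 0..179 command field to an 8-bit PWM duty value."""
    duty = (code % 180) * num // den
    return min(duty, 255)  # clamp: 20/12 would exceed 255 near the top end
```

With num=20 the same command field yields a larger duty, i.e. a faster robot, which is why the 20/12 line gives more power.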
Step 13: Make Your Robot a Pointer
In the former step, you commanded your robot to move or stop with your voice. Though it follows your voice commands well, it does not yet track a target. Now you can make it a Pointer robot that tracks and points at your target.
To do so, the fourth window opened in Step 11 is used. Run the next command in the fourth window. After several seconds, a preview window opens and the video captured by the camera is displayed.
- $ python /home/pi/adx_PR6_PointerRobot.py
If an orange target is in the robot's sight, it starts tracking the target autonomously. During this autonomous tracking, your Pointer robot does not follow your voice commands as long as it watches the target. If you want to command it by voice in the middle of autonomous tracking, you must speak the next special command.
- Say "Stay Down" in a Japanese accent to your robot while it is tracking the target autonomously
This means "Stay, watch the target, and listen to me!". If your robot recognizes it, 2 Japanese characters and 'o' are printed in the third window, and your robot stops moving. After this special command is heard, only the 2 servos keep working, to keep the camera facing the target.
Then, if you want your robot to resume autonomous tracking, ....
- Say "Track Start" in a strong Japanese accent to your robot while it listens to you and watches the target
Your Pointer robot turns its face to the front after hearing the command(*). If an orange target is in sight there, your robot starts tracking and pointing at it autonomously. On the other hand, if no target is in sight, it keeps following your voice commands until it finds a new target.
See Video 5 below to learn (1) how to stop autonomous tracking with "Stay Down" and (2) how to command your Pointer to track a target again.
[Video_5] Command Your Robot to Move, Stop and Point Target
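The switching between listening and autonomous tracking described in Steps 12 and 13 amounts to a small state machine. The sketch below uses invented state and command names; the real logic is spread across the sample codes.

```python
# Illustrative state machine for the Pointer's modes (names invented).

class PointerMode:
    LISTEN, TRACK, STAY = "LISTEN", "TRACK", "STAY"

    def __init__(self):
        self.state = self.LISTEN

    def on_command(self, word):
        if self.state == self.TRACK and word == "stay down":
            self.state = self.STAY   # stop wheels, keep camera on target
        elif word == "track start":
            self.state = self.TRACK  # resume autonomous tracking
        elif self.state in (self.LISTEN, self.STAY):
            return word              # movement commands pass through
        return None                  # other voice ignored while tracking

    def on_target_visible(self, visible):
        if self.state == self.TRACK and not visible:
            self.state = self.LISTEN  # no target in sight: obey voice again
```

This mirrors the behavior above: while tracking, only "stay down" is obeyed, and "track start" re-arms tracking from either listening state.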
Finally, if you want to shut your robot down, ....
- Make the second window active and quit the code running there with "Ctrl+c"
- Make the third window active and quit the code running there with "Ctrl+c"
- Make the camera preview window active and quit its code by pressing the "q" key
- Check alsamixer again in the first window and restore the gain of the microphone with the up arrow key
If the graph showing the microphone gain has disappeared during the last check above, press the "Esc" key and check alsamixer again.
(*) If your Pointer turns its face in the wrong direction on the "Track Start" command, correct the three constants (-20790, 115 and 90) in the sample code "adx_PR6_PointerRobot.py". These constants are written under the line "# Set the camera to face forward when starting autonomous ..." in the sample code.
Step 14: Further Applications
We built a Pointer robot in the former steps. Some ideas for improvement and application are listed here; you can see them in Video 6 below.
- Attach the USB camera horizontally (in landscape mode) to the 2-axis platform(*)
- Command the robot to track another target
- Make a pet robot instead of a Pointer robot
--- A pet robot will reply to our calls, search for our faces, and run to us.
- Make a naughty cat robot
--- A naughty cat robot continues to play with a ball autonomously.
- Make an expressive robot with a face
--- Our robot can carry some communication device(s).
- Make a mini Strandbeest with good eyes and ears
--- Transplant the parts on the robot's plate onto a mini Strandbeest to train it further.
[Video_6] For Further Applications
(*) I have tried three ways to attach the USB camera "C270" to the base plate with 2 servos:
- vertically with no kit,
- horizontally with no kit,
- horizontally with Pan/Tilt kit.
The last two ways can be seen in the upper left panel of the picture above. Way 2 has the largest rotation range. Way 1 is limited in panning, because the lower part of the camera interferes with the base plate. Way 3 is limited in tilting, because the USB camera C270 faces down, not forward, when the tilting servo is set at the neutral 90 degrees. Though the camera can face upward if the hinge of the C270's clip is opened, the head of the camera is then supported less rigidly. On the other hand, way 3 is the most robust against the noise of the motors and gearbox, because it keeps the greatest distance from them.
Step 15: Stuff
Stuff List (except Step 14)
- Raspberry Pi 3: Pi Zero cannot work well
- Micro SD card 16GB: 8GB card is too small
- Arduino: NANO or its compatible board
- Level converter: Between Arduino 5V and Pi 3.3V
- USB camera with microphone: Logitech C270 used here
- Servo motors: SG90 x 2
- Motor driver: L298N
- DC motors: 2 small DC motors for hobby (Preferably 0.1uF capacitors attached)
- Gearbox: Transmits power from the 2 motors to the 2 wheels (reduction ratio 344.2:1 selected here)
- Wheels: 2 wheels attached to the shafts of Gearbox
- Caster wheel: The third wheel of robot
- Plate: Body of robot
- Battery(1): Supply DC 5V with enough current for Pi 3 and Arduino
- Battery(2): Supply DC 5V for 2 DC motors and 2 servos.
- Jumper cables
- Solderless Breadboard
- USB cables: For power supply
- HDMI cable: For Raspberry Pi setup(*)
- Orange-colored Ball: Target
(*) A LAN cable, a USB keyboard, a USB mouse, and a TV (or PC) monitor with an HDMI port are needed to set up the Raspberry Pi in Step 2. They are common household items, so they are not included here.
Step 16: How to Use Read Aloud Service
In Video 4 and Video 5 in the previous steps, a web read-aloud service is used: "Sound of Text". I appreciate its webmaster.
You can use it even if you don't know Japanese at all, either for pronunciation practice or as your proxy to command the robot.
- Open the site of "Sound of Text" in your browser.
- Input all seven command words below, one per line, in the Text box on the site
- Select "Japanese (Japan)" in the Voice list(*)
- Click Submit button
- Wait several seconds and see each command arranged in the Sounds box
- Click PLAY button for each command
- Listen to the pronunciation of each command
Each command except "matei matei" is spoken with a proper Japanese accent. Only "Ma-Te Ma-Te" from Step 12 is not spoken well by "Sound of Text"; hence, input "matei matei" to get a close pronunciation for that command.
- go go
- back back
- right right
- left left
- matei matei
- track start
- stay down
If you use "Sound of Text" as your proxy to command your robot, you need not speak the commands yourself; just click the PLAY button for each command. On the other hand, if you want to speak to your robot yourself, watch Video 3 in Step 9 and listen to the pronunciation of "Ma-Te Ma-Te" at 2:20 in the video.
(*) I have also tried selecting "English (United States)" in the Voice list. Julius hears the command "stay down" very well and "matei matei" well, though it cannot hear "left left" adequately at all.