Introduction: PiCrawler

PiCrawler is a walking robot designed to move in any direction on the ground while carrying all of its own components. Here are the major pieces of tech used in this project:

  • Snips: an offline speech-recognition package
  • Raspberry Pi / Python: runs the Snips intent handler and communicates with the Arduino
  • Arduino: drives the servos and receives commands from the Pi
  • Laser cutting (acrylic) / 3D printing (PLA): for the robot's mechanical parts
  • Inkscape: for the laser-cutting designs
  • Tinkercad: for designing and modifying the 3D-printed parts
  • Autodesk Fusion 360: for testing the moving mechanism
  • Electronic hand tools: for drilling, polishing, and modifying the laser-cut and 3D-printed parts

References:

  • The body design is based on this project
  • The moving mechanism is the Klann linkage, a patented design that the reference project also uses

Step 1: Power Solution Design

To move, the PiCrawler needs to power two servos, an Arduino Uno, and a Raspberry Pi.

I used a ZMI Plugornot dual-USB power bank (15 W output) and a 4x AA battery pack as the power solution. These are significantly heavier than the LiPo batteries common in RC models, but they are safer, more accessible, and much easier to maintain (no voltage monitoring and easy recharging!). Since the Klann linkage is good at carrying loads, weight is not much of a concern here.

The ZMI power bank powers the Pi and the Arduino (board only), and the 4x AA pack powers the servos.
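As a rough sanity check (assuming a Raspberry Pi 3 and standard hobby servos, neither of which is pinned down in this build): the 15 W bank can supply roughly 15 W / 5 V = 3 A across its two USB ports, which covers the ~2.5 A recommended for a Pi 3 plus the few tens of milliamps the bare Uno draws. The 4x AA pack gives a nominal 4 x 1.5 V = 6 V, within the usual 4.8-6 V range for hobby servos, and keeping the servos on a separate supply stops their current spikes from browning out the Pi.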

References:

ZMI Power Bank: https://www.amazon.com/ZMI-Plugornot-Portable-Char...

4x AA Battery Pack: https://www.amazon.com/Waykino-Battery-Holder-Stor...

Step 2: Parts Design

Most of the mechanical parts are the same as in the reference project. I've extended the legs by 20 mm.

The frame design was reinforced to handle the load, and it could be further improved for a more stable platform.

The main body, or platform, was redesigned to hold all the parts of this project. I added two holders of different sizes for the two power supplies, a rack each for the Pi and the Arduino (one on each side), and a hot-shoe mount for holding the RODE microphone. All of them are oriented vertically so components can be removed and reinstalled easily.

Considering the mechanical weakness of 3D-printed parts, I switched to laser-cut acrylic for most of the mechanical parts. Only the taller spacers were 3D printed, since the acrylic sheet comes in a fixed 3 mm thickness.

Step 3: Assembly

This part is generally the same as in the reference project, except that I reduced the number of layers stacked inside the panels.

Step 4: Arduino Programming

#include <Servo.h>

#define LeftServoPin 5  //Sets the left servo to pin 5
#define RightServoPin 6  //Sets the right servo to pin 6
Servo LeftServo;
Servo RightServo;

void setup()
{ 
  pinMode(LeftServoPin, OUTPUT);
  pinMode(RightServoPin, OUTPUT);
  LeftServo.attach(LeftServoPin);
  RightServo.attach(RightServoPin);

  Serial.begin(9600);
  pinMode(LED_BUILTIN, OUTPUT);
  Serial.println("Hello Pi, this is R3..");
  BlinkLED();
}

void loop() 
{
  SerialListener();
}

void SerialListener()
{
  if (Serial.available() > 0) {
    String msg = Serial.readString();
    msg.trim();
    BlinkLED();
//    Serial.println("R3 >> received : " + msg + " len=" + msg.length());

    if (msg.equals("forward") || msg.equals("w")) {
      Serial.println("R3 >> Forward");
      Forward();
    }
    else if (msg.equals("backward") || msg.equals("s") ) {
      Serial.println("R3 >> Backward");
      Reverse();
    }
    else if (msg.equals("left") || msg.equals("a") ) {
      Serial.println("R3 >> Turn Left");
      TurnLeft();
    }
    else if (msg.equals("right") || msg.equals("d") ) {
      Serial.println("R3 >> Turn Right");
      TurnRight();
    }
    else if (msg.equals("stop") || msg.equals("x") ) {
      Serial.println("R3 >> STOP!");
      Stop();
    }
    else if (msg.equals("circle") || msg.equals("c") ) {
      Serial.println("R3 >> Circle");
      RunCircle();
    }
    else {
      Serial.println("R3 >> Invalid Input.");
      Stop();
    }
    
    Serial.flush();
    }
}
/* MOVEMENT FUNCTIONS BELOW
  0 = full speed counter-clockwise rotation
  90 = Stops the servo from moving
  180 = full speed clockwise
*/

void BlinkLED()
{
  digitalWrite(LED_BUILTIN, HIGH);   // turn the LED on
  delay(100);                        // keep it on for 100 ms
  digitalWrite(LED_BUILTIN, LOW);    // turn the LED off
  delay(50);                         // pause 50 ms
  digitalWrite(LED_BUILTIN, HIGH);   // turn the LED on again
  delay(100);                        // keep it on for 100 ms
  digitalWrite(LED_BUILTIN, LOW);    // turn the LED off
}

void RunCircle()
{
  Forward();      //Moves forward
  delay(7000);    //Wait 7 seconds
  TurnRight();    //Turns right
  delay(10000);   //Wait 10 seconds
  Forward();      //Moves forward
  delay(7000);    //Wait 7 seconds
}

void Forward()
{
  LeftServo.write(0);
  RightServo.write(180);
}

void Reverse()
{
  LeftServo.write(180);
  RightServo.write(0);
}

void Stop()
{
  LeftServo.write(90);
  RightServo.write(90);
}

void TurnLeft()
{
  LeftServo.write(0);
  RightServo.write(0);
}

void TurnRight()
{
  LeftServo.write(180);
  RightServo.write(180);
}
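Before adding the voice layer, it helps to exercise this serial command protocol on its own. Below is a minimal interactive test script of my own (not part of the original build); it assumes the Uno shows up as /dev/ttyACM0 on the Pi and uses pyserial to send the same text commands the sketch expects and print the reply.

# serial_test.py: interactive test of the Arduino command protocol (assumed port name)
import serial
import time

with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as link:
    time.sleep(2)                                  # the Uno resets when the port opens
    print(link.readline().decode().strip())        # greeting: "Hello Pi, this is R3.."
    while True:
        cmd = input("command (forward/backward/left/right/stop/circle or q): ").strip()
        if cmd == "q":
            link.write(b"stop")
            break
        link.write(cmd.encode())                   # same strings the sketch parses
        print(link.readline().decode().strip())    # e.g. "R3 >> Forward"

You can get the same effect from the Arduino IDE's Serial Monitor at 9600 baud by typing the commands directly.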

Step 5: Snips Voice Assistant Development

This is a very simple application of the Snips platform.

All you need to do is open a (free) account on Snips's website, create a new 'app', then add a new 'intent' to it.

Everything can be configured inside the web-based console.

As shown in the picture, I've added five values to the 'direction' intent: forward, backward, left, right, and stop.

Then you need to write some example commands/sentences for your app. As for how many examples are needed: the more the better. For decent performance, a rule of thumb is about 3-4 examples for each value of each intent.

After saving your intent, the system will train the model automatically. You just need to download the assistant package and transfer the zipped file to your Pi. I prefer using scp/ssh for this task.

Before installing the Snips app, you need to configure your Pi for the Snips platform by following this tutorial: https://docs.snips.ai/getting-started/quick-start-...

Then just deploy the app from the zipped file.

You can use 'snips-watch -vvv' to test the voice recognition.
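If you also want to confirm that recognized intents actually arrive over MQTT before wiring in the Arduino, a small listener like the sketch below can help. This is my own test snippet rather than part of the original project; it uses the same hermes_python API as the full script in the next step and assumes the default MQTT broker on localhost:1883.

# intent_probe.py: print every intent Snips publishes, with its slots
from hermes_python.hermes import Hermes

def on_intent(hermes, intent_message):
    name = intent_message.intent.intent_name                        # e.g. "yishuo:Direction"
    slots = {k: v.first().value for k, v in intent_message.slots.items()}
    print("intent: {0}  slots: {1}".format(name, slots))

with Hermes("localhost:1883") as h:
    h.subscribe_intents(on_intent).start()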

Step 6: Python Script Connecting Snips & Arduino

import serial
from hermes_python.hermes import Hermes
from sense_hat import SenseHat
import time

port = "/dev/ttyACM0"
port_alt = "/dev/ttyUSB0"
MQTT_ADDR = "localhost:1883"
SENSEHAT = SenseHat()
SERIALCOMM = True

try:
    slave = serial.Serial(port, 9600)
    slave.reset_input_buffer()
except serial.SerialException:
    try:
        slave = serial.Serial(port_alt, 9600)
        slave.reset_input_buffer()
    except serial.SerialException:
        SERIALCOMM = False
        # the Sense HAT helpers are defined further down, so only print here
        print("Pi >> No board found on ttyACM0 or ttyUSB0")

# Serial comm
def comm_sender_serial(command):
    if slave.inWaiting() >= 0:
        try:
            # command = str(input("Pi << direction: "))
            slave.reset_input_buffer()
            slave.write(command.encode())
            feedback = slave.readline()
            return "{0}".format(feedback.decode().strip())
        except:
            slave.write("stop".encode())
            return "R3 >> Invalid Command: {0}".format(command)
    else:
        return "port busy"

# Snips msg handler
def direction_command_handler(hermes, intent_message):
    sensehat_flush(timer=0.2)
    intent = intent_message.intent.intent_name
    if intent == "yishuo:Direction":
        parsed = parse_hermes_message(intent_message, intent_slot_num=1)
        if parsed is None:
            return  # incomplete command; error already shown on the Sense HAT
        parse, confidence = parsed
        command = parse['dir']
        try:
            sensehat_display( "Pi >> {0}".format(command) )
            execution = comm_sender_serial(command)
            sensehat_display(execution)
        except Exception as e:
            sensehat_flush(colour=(127,0,0))
            print("Pi >> {}".format(e))
            sensehat_display(e)
    else:
        print("Pi >> Intent Unknown")
        sensehat_display("Pi >> Intent Unknown")

# Utility functions
def parse_hermes_message(intent_message, intent_slot_num):
    if intent_message is None:
        return None
    intent_name = intent_message.intent.intent_name
    confidence = "{0:.2f}".format(intent_message.intent.confidence_score)
    slots = intent_message.slots
    # error case : incomplete command phrase
    if len(slots) != intent_slot_num :
        sensehat_display("ERROR: incomplete command")
        return None
    out_dict = {}
    for slot_name, slot_value in slots.items():
        out_dict[slot_name] = slot_value.first().value
    return out_dict, confidence

def sensehat_display(msg):
    white = (255, 255, 255)
    SENSEHAT.clear()
    SENSEHAT.show_message(msg, text_colour=white, scroll_speed=0.05)
    SENSEHAT.clear()

def sensehat_flush(colour=(255,255,255), timer=0.5):
    SENSEHAT.clear()
    SENSEHAT.clear(colour)
    time.sleep(timer)
    SENSEHAT.clear()


with Hermes(MQTT_ADDR) as h:
    sensehat_flush(colour=(255,165,0), timer=1.0)
    if SERIALCOMM:
        feedback = (slave.readline().decode()).strip()
        print("R3 >> ", feedback)
        sensehat_display(feedback)
    h.subscribe_intents(direction_command_handler).start()
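Note that the intent name checked in the handler ("yishuo:Direction") follows Snips's username:intentName convention, and the slot key 'dir' has to match the slot name defined in the Snips console; if your account name or slot name differs, adjust those two strings accordingly.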

Step 7: Final Testing

Before testing:

  1. Load the Arduino sketch onto the Arduino.
  2. Copy the Python script to the Pi.
  3. Run the Python script on the Pi with the 'python your_python_script.py' command.

Then the robot should move according to your voice commands.