Wallace Autonomous Robot - Part 4 - Add IR Distance and "Amp" Sensors

Introduction: Wallace Autonomous Robot - Part 4 - Add IR Distance and "Amp" Sensors


Hello, today we start the next phase of improving Wallace's capabilities. Specifically, we're trying to improve its ability to detect and avoid obstacles using infrared distance sensors, and also take advantage of the Roboclaw motor-controller's ability to monitor current and turn that into a virtual (software) "sensor". Finally, we'll take a look at how to navigate without SLAM (simultaneous localization and mapping) for now, since the robot doesn't yet have an IMU (inertial measurement unit) or ToF (time of flight) sensors.

By navigation, initially I mean just two main goals:

  1. avoid obstacles
  2. recognize when it's stuck somewhere and not making any progress ("progress" meaning: did it move forward any meaningful distance?)
  3. a possible 3rd goal could be for it to try to align itself squarely to a wall.

This project began with a robot kit and getting basic movements to work using a keyboard and ssh connection.

The second phase was to add sufficient supporting circuitry to prepare for addition of many sensors.

In the previous Instructable, we did add several HCSR04 acoustic sensors and the robot can now avoid obstacles as it moves around the apartment.

While it does do well in the kitchen and hallway with good, solid flat surfaces, it is totally blind when approaching the dining room. It cannot "see" the table and chair legs.

One improvement can be to keep track of typical motor-currents, and if the values jump, then the robot must have hit something. It's a good "plan B" or even C. But that doesn't really help it navigate around the dining area.

(Update: actually, for now, current-monitoring is plan A when reversing, as I have temporarily removed the sensors from the rear.)

The video for this section constitutes the final phase of obstacle-avoidance sensors.

What you see in the video is six front HCSR04 acoustic sensors and two Sharp IR sensors. The IR sensors didn't come into play much in the video. Their forte is mostly when the robot finds itself in the dining area facing table and chair legs.

In addition to the sensors, the current-monitor came into play especially during reversing, in case it bumps into something.

Finally, it utilizes the history of the last 100 moves, and some basic analysis to answer one question:

"Has there recently been real forward progress (or is it stuck in some repeating dance)?"

So in the video, when you see a repeated forward-reverse followed by a turn, it means the robot recognized the forward-reverse pattern and tried something else.

The only programmed goal of this version of the software was to try to make continuous forward progress, and try to avoid obstacles.

Step 1: Add Supporting Circuitry (MCP3008)

Before we can add the IR sensors, we'll need the interface circuitry between them and the Raspberry Pi.

We will add an MCP3008 analog-to-digital converter. There are many online resources on how to connect this chip to the Raspberry Pi, so I won't go much into that here.

Essentially, we have a choice. If the version of IR sensors operates at 3V, so can the MCP3008, and we can then connect directly to the Raspberry Pi.

[3V IR sensor] ---> [MCP3008] ----> [Raspberry Pi]

In my case, however, I am running mostly 5V, so that means a bi-directional level shifter.

[5V IR sensor] ----> [MCP3008] ----> [5V-to-3V bi-directional bus] ----> [Raspberry Pi]

Note: There is only one signal output from the IR sensor. It goes directly to one of the analog input lines of the MCP3008. From the MCP3008, there are four data lines we need to connect (via the bi-directional bus) to the Raspberry Pi.
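For reference, those four lines are the standard SPI signals. Assuming the Pi's hardware SPI0 pins (and, in my 5V setup, each line passing through the level shifter), they map like this:

[MCP3008 CLK]  <--- [Pi SCLK (GPIO11)]
[MCP3008 DIN]  <--- [Pi MOSI (GPIO10)]
[MCP3008 CS]   <--- [Pi CE0  (GPIO8)]
[MCP3008 DOUT] ---> [Pi MISO (GPIO9)]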

At the moment, our robot is going to run using just two IR sensors, but we could easily add more. The MCP3008 has eight analog input channels.

Step 2: Mount IR Sensors

Sharp makes several different IR sensors, with different ranges and coverage areas. I happened to have ordered the GP2Y0A60SZLF model. The model you choose will affect the placement and orientation of the sensor. Unfortunately for me, I did not really research exactly which sensors to get. It was more of a "which ones can I get at a reasonable time & price from a reputable source, out of the ones they offer" decision.

(Update: However, that may not matter, as these sensors seem to get confused by interior ambient lighting. I am still exploring that issue.)

There are at least three ways to mount these sensors on the robot.

  1. Place them in a fixed position, at the front, facing slightly away from each other.
  2. Place them onto a servo, at the front, facing slightly away from each other.
  3. Place them in a fixed position, at the front, but at the leftmost and rightmost furthest corners, angled toward each other.

In comparing choice #1 to choice #3, I think that #3 will cover more of the collision area. If you take a look at the images, with choice #3 the sensor fields not only overlap, they also cover the center and beyond the outside width of the robot.

With choice #1, the farther apart the sensors are angled from each other, the bigger the blind spot in the center.

We could do #2 (I added some images with a servo as a possibility) and have them do a sweep, and obviously this can cover the most area. However, I want to delay the use of a servo as long as possible, for at least a few reasons:

  • We'll use up one of the PWM communication channels on the Raspberry Pi. (It's possible to enhance this but still...)
  • The current draw with the servo can be significant
  • It adds more to hardware and software

I would like to leave the servo option for later when adding more important sensors, such as Time-of-Flight (ToF), or perhaps a camera.

There is one other possible advantage with choice #3 that is not available with the other two choices. These IR sensors can become confused, depending on the lighting. It could be that the robot gets a reading of an object that is imminently close when in fact there is no close-by object. With choice #3, since the sensors' fields can overlap, both can be registering the same object (from different angles), which gives us a way to cross-check a suspect reading.

So we're going with placement choice #3.

Step 3: Time to Test

After we've made all the connections between the Raspberry Pi, the MCP3008 ADC, and the Sharp IR sensors, it's time to test. Just a simple test to make sure the system is working with the new sensors.

As in previous Instructables, I use the wiringPi C library as much as possible. It makes things easier. Something that isn't very obvious from reviewing the wiringPi website is that there's direct support for the MCP3004/3008.

Even without that, you could just use the SPI extension. But there's no need to. If you take a close look at Gordon's git repository for wiringPi, you'll come across a listing of supported chips, one of which is the MCP3004/3008.

I decided to attach the code as a file because I couldn't get it to display correctly on this page.
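If it helps, here is a minimal sketch along the same lines (not the attached file). It assumes the two Sharp sensors are on MCP3008 channels 0 and 1, and that the chip is wired to SPI channel 0 (CE0).

#include <cstdio>
#include <unistd.h>
#include <wiringPi.h>
#include <mcp3004.h>           // wiringPi extension; also handles the MCP3008

#define PIN_BASE 100           // arbitrary virtual pin base for the ADC channels
#define SPI_CHAN 0             // MCP3008 on SPI channel 0 (CE0)

int main()
{
    wiringPiSetup();
    mcp3004Setup(PIN_BASE, SPI_CHAN);         // registers 8 virtual analog pins

    for (;;) {
        int left  = analogRead(PIN_BASE + 0); // raw 0..1023 reading, channel 0
        int right = analogRead(PIN_BASE + 1); // channel 1
        printf("IR left: %4d    IR right: %4d\n", left, right);
        usleep(100 * 1000);                   // ten readings per second
    }
    return 0;
}

Build it with the wiringPi library linked in (for example: g++ ir_test.cpp -o ir_test -lwiringPi) and watch the raw values change as you move your hand in front of each sensor.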

Step 4: A Virtual Sensor - AmpSensor

The more different ways you can have the robot receive information about the outside world, the better.

The robot currently has eight HCSR04 acoustic sonar sensors (they are not the focus of this Instructable), and it now has two Sharp IR distance sensors. As stated earlier, we can take advantage of something else: the Roboclaw's motor-currents sensing feature.

We can wrap that query call to the motor-controller into a C++ class and call it an AmpSensor.

By adding some "smarts" to the software, we can monitor the typical current-draw during straight movement (forward, backward) and rotational movement (left, right). Once we know those ranges of amps, we can select a critical value, so that if the AmpSensor gets a current reading from the motor-controller that exceeds this value, we know the motors have probably stalled, which usually indicates the robot has bumped into something.

If we add some flexibility to the software (command-line args, and/or keyboard input during operation), then we can increase / decrease the "critical-amps" threshold as we experiment, by just letting the robot move and bump into objects, either straight in or while rotating.

Since the navigation portion of the software knows the direction of movement, we can use all that information to, perhaps, stop the movement and try to reverse it for some short period before trying something else.
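To make the idea concrete, here is a rough sketch of what such an AmpSensor class could look like. This is not the actual class from my code: readRoboclawCurrents() is just a stub standing in for whatever serial query your Roboclaw wrapper provides, and the threshold is a placeholder to be tuned by experiment.

class AmpSensor {
public:
    explicit AmpSensor(float criticalAmps) : critical(criticalAmps) {}

    // Allow tuning at runtime (command-line arg, or keyboard input while driving).
    void adjustCritical(float delta) { critical += delta; }

    // True if either motor is drawing more than the critical value --
    // the motors have probably stalled against something.
    bool overCurrent()
    {
        float m1 = 0.0f, m2 = 0.0f;
        if (!readRoboclawCurrents(m1, m2))
            return false;                 // ignore a failed read for now
        return (m1 > critical) || (m2 > critical);
    }

private:
    // Stub for the real serial query to the Roboclaw motor-controller.
    bool readRoboclawCurrents(float &m1Amps, float &m2Amps)
    {
        m1Amps = 0.0f;    // replace with the actual "read motor currents" call
        m2Amps = 0.0f;
        return true;
    }

    float critical;       // amps above which we assume a stall
};

In the navigation loop, the check then becomes something like: if we are moving and overCurrent() returns true, stop, reverse briefly, and pick a different move.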

Step 5: Navigation

The robot currently is limited in real-world feedback. It has a few close-distance sensors for obstacle-avoidance, and it has a fall-back technique of monitoring current-draw should the distance sensors miss an obstacle.

It does not have motors with encoders, and it does not have an IMU (inertial-measurement-unit), so that makes it more difficult to know if it's really moving or rotating, and by how much.

While one can get some sort of indication of distance with the sensors currently on the robot, their field of view is wide, and there's unpredictability. The acoustic sonar may not reflect back correctly; the infrared can be confused by other lighting, or even multiple reflective surfaces. I'm not sure it's worth the trouble to actually try to track the change in distance as a technique to know if the robot is moving and by how much and in which direction.

I purposely chose NOT to use a micro-controller such as an Arduino because a) I don't like its pseudo-C++ environment, b) too much development might wear out its read-write (flash) memory, and c) I would need a host computer to develop on. Or maybe I just happen to like the Raspberry Pi.

The Pi running Raspbian, however, isn't a real-time OS, so between these sensors' instabilities and the OS not reading at exact intervals, I felt that these sensors were better suited for obstacle-avoidance than for actual distance-measurement.

That approach seemed complicated, with not much benefit, when we can use better ToF (time-of-flight) sensors (later) for that purpose (SLAM).

One approach we can use is to keep track of what movement-commands have been issued within the last X seconds (or the last X commands).

As an example, say that the robot is stuck facing a corner diagonally. One set of sensors tells it that it's too close to one wall, so it pivots, but then the other set of sensors tells it that it's too close to the other wall. It ends up just repeating a side-to-side pattern.

The above example is just one very simple case. Adding some smarts may just raise the repeated pattern to a new level, but the robot remains stuck in the corner.

For example, instead of rotating back and forth in place, it rotates one way, does a momentary reverse (which clears the critical distance indications), and even if it then rotates in the other direction, it still moves forward at some angle back into the corner, repeating a more complicated pattern of essentially the same thing.

That means we really could use a history of commands, and take a look at how to exploit and use that information.

I can think of two very basic (rudimentary) ways of using the movement-history.

  • for the last X number of moves, do they match pattern Y? A simple example could be (and this happened) "FORWARD, REVERSE, FORWARD, REVERSE, .....". So there's a matching function that returns either TRUE (pattern found) or FALSE (not found). If TRUE, in the navigation portion of the program, attempt other movement-sequences.
  • for the last X number of moves, is there a general or net forward movement? How might one determine what is real forward movement? Well... one easy comparison is that for the last X moves, "FORWARD" occurs more than "REVERSE". But that doesn't have to be the only one. How about this: "RIGHT, RIGHT, LEFT, RIGHT"? In that case, the robot is having to make right turns to get out of a corner, or because it approached the wall at an angle; that could be considered real forward progress. On the other hand, "LEFT, RIGHT, LEFT, RIGHT..." might not be considered real forward progress. Thus, if "RIGHT" occurs more than "LEFT", or "LEFT" occurs more than "RIGHT", then that could be real progress. (A rough sketch of both ideas follows this list.)
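Here is a rudimentary sketch of that move-history idea. The names are mine, not from the attached code, and both checks are deliberately simplistic:

#include <deque>
#include <vector>
#include <algorithm>
#include <cstddef>

enum class Move { Forward, Reverse, Left, Right };

class MoveHistory {
public:
    explicit MoveHistory(std::size_t maxMoves = 100) : limit(maxMoves) {}

    // Call this every time the navigation code issues a movement command.
    void record(Move m)
    {
        history.push_back(m);
        if (history.size() > limit)
            history.pop_front();
    }

    // Does the most recent part of the history match the given pattern,
    // e.g. {Forward, Reverse, Forward, Reverse}?
    bool endsWithPattern(const std::vector<Move> &pattern) const
    {
        if (history.size() < pattern.size())
            return false;
        return std::equal(pattern.rbegin(), pattern.rend(), history.rbegin());
    }

    // Very crude notion of "real progress": more FORWARDs than REVERSEs,
    // or consistently turning one way more than the other.
    bool madeForwardProgress() const
    {
        int fwd = 0, rev = 0, left = 0, right = 0;
        for (Move m : history) {
            switch (m) {
                case Move::Forward: ++fwd;   break;
                case Move::Reverse: ++rev;   break;
                case Move::Left:    ++left;  break;
                case Move::Right:   ++right; break;
            }
        }
        return (fwd > rev) || (left != right);
    }

private:
    std::deque<Move> history;   // the last "limit" moves, oldest first
    std::size_t limit;
};

The navigation loop would call record() after each command, and periodically ask endsWithPattern() or madeForwardProgress() to decide whether to break out of a repeating dance.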

At the start of this Instructable, I mentioned that a possible 3rd goal could be squaring up or aligning to a wall. For that, however, we need more than "are we close to some object". For example, if we can get the two forward-facing acoustic sensors (not the focus of this article) to give reasonably good, stable distance responses, then obviously if one reports a much different value than the other, the robot has approached the wall at an angle, and it could attempt some maneuvering to see if those values approach each other (facing the wall squarely).
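As a rough sketch of that idea (hypothetical names, and assuming the two front sonars return distances in centimeters):

#include <cmath>

// Are the two front distance readings close enough that we can consider
// ourselves square to the wall? If not, rotate toward the larger reading
// and check again.
bool roughlySquareToWall(float leftCm, float rightCm, float toleranceCm = 2.0f)
{
    return std::fabs(leftCm - rightCm) <= toleranceCm;
}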

Step 6: Final Thoughts, Next Phase...

Hope this Instructable gave some ideas.

Adding more sensors introduces some advantages, and challenges.

In the above case, all of the acoustic sensors worked well together, and the software was rather straightforward.

Once the IR sensors were introduced into the mix, it became a bit more challenging. The reason is that some of their fields of view overlapped with those of the acoustic sensors. The IR sensors seemed a bit sensitive and unpredictable with changing ambient-light conditions, whereas of course the acoustic sensors are not affected by lighting.

And so the challenge was what to do if an acoustic sensor is telling us that there's no obstacle, but the IR sensor is telling us there is one.

For now, after trial-and-error, things ended up in this priority:

  1. amp-sensing
  2. IR-sensing
  3. acoustic-sensing

And what I did was just to lower the sensitivity of the IR sensors, so they would only detect very close objects (such as imminent chair legs).
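In code, that priority ended up as a simple ordered check, something like the following sketch (the helper names are placeholders, not the actual functions from my program):

// Placeholder queries and actions -- stand-ins for the real sensor code.
static bool overCurrent()    { return false; }  // AmpSensor: probable stall?
static bool irTooClose()     { return false; }  // Sharp IR: imminent object?
static bool sonarTooClose()  { return false; }  // HCSR04: obstacle inside limit?
static void stopAndBackOff() {}
static void avoidObstacle()  {}

// Obstacle-check priority: amp-sensing first, then IR, then acoustic.
void checkObstacles()
{
    if (overCurrent()) {
        stopAndBackOff();    // highest priority: we almost certainly hit something
    } else if (irTooClose()) {
        stopAndBackOff();    // IR is tuned to only fire on very close objects
    } else if (sonarTooClose()) {
        avoidObstacle();     // normal HCSR04 obstacle-avoidance
    }
}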

So far, there hasn't been a need to do any multi-threading or interrupt-driven software, although I do occasionally encounter loss of control between the Raspberry Pi and the Roboclaw motor-controller (loss of serial communications).

This is where the E-Stop circuit (see previous Instructables) would normally come into use. However, since I don't (yet) want to deal with having to reset the Roboclaw during development, and the robot isn't going that fast, and I am present to monitor it and shut it down, I haven't connected the E-Stop.

Eventually, multi-threading will most likely be necessary.

Next Steps...

Thank you for making it this far.

I obtained some VL53L1X IR laser ToF (time-of-flight) sensors, so that's most likely the topic of the next Instructable, together with a servo.
