Wallace Autonomous Robot - Part 4 - Add IR Distance and "Amp" Sensors

About: Have enjoyed a mostly fun ride in an electronics and software career. Also enjoy latin dancing.

Hello, today we start the next phase of improving Wallace's capabilities. Specifically, we're trying to improve its ability to detect and avoid obstacles using infrared distance sensors, and also to take advantage of the Roboclaw motor-controller's ability to monitor current, turning that into a virtual (software) "sensor". Finally, we'll take a look at how to navigate without SLAM (simultaneous localization and mapping), for now, since the robot doesn't yet have an IMU (inertial measurement unit) or ToF (time-of-flight) sensors.

By navigation, initially there will be just two main goals:

  1. avoid obstacles
  2. recognize when it's stuck somewhere and not making any progress ("progress" meaning: did it move forward any meaningful distance?)

A possible third goal could be for it to try to align itself squarely to a wall.

This project began with a robot kit and getting basic movements to work using a keyboard and ssh connection.

The second phase was to add sufficient supporting circuitry to prepare for addition of many sensors.

In the previous Instructable, we added several HCSR04 acoustic sensors, and the robot can now avoid obstacles as it moves around the apartment.

While it does do well in the kitchen and hallway, with good, solid, flat surfaces, it is totally blind when approaching the dining room. It cannot "see" the table and chair legs.

One improvement is to keep track of typical motor currents: if the values jump, the robot must have hit something. It's a good "plan B", or even C. But that doesn't really help it navigate around the dining area.

(Update: actually, for now, current-monitoring is plan A when reversing, as I have temporarily removed the sensors from the rear).

The video for this section constitutes the final phase of obstacle-avoidance sensors.

What you see in the video is six front HCSR04 acoustic sensors and two Sharp IR sensors. The IR sensors didn't come into play much in the video; their forte is mostly when the robot finds itself in the dining area facing table and chair legs.

In addition to the sensors, the current-monitor came into play especially during reversing, in case it bumps into something.

Finally, it utilizes the history of the last 100 moves, and some basic analysis to answer one question:

"Has there recently been real forward progress (or is it stuck in some repeating dance)?"

So in the video when you see a forward-reverse repeated, then it turns, it means it did recognize the forward-reverse pattern, thus tries something else.

The only programmed goal of this version of the software was to try to make continuous forward progress, and try to avoid obstacles.

Step 1: Add Supporting Circuitry (MCP3008)

Before we can add the IR sensors, we'll need the interface circuitry between them and the Raspberry Pi.

We will add an MCP3008 analog-to-digital converter. There are many online resources on how to connect this chip to the Raspberry Pi, so I won't go much into that here.

Essentially, we have a choice. If the IR sensors operate at 3V, the MCP3008 can too, and we can then connect directly to the Raspberry Pi.

[3V IR sensor] ---> [MCP3008] ----> [Raspberry Pi]

In my case, however, I am running mostly 5V, so that means a bi-directional level shifter.

[5V IR sensor] ----> [MCP3008] ----> [5V-to-3V bi-directional bus] ----> [Raspberry Pi]

Note: There is only one signal output from the IR sensor. It goes directly to one of the analog input lines of the MCP3008. From the MCP3008, there are four data lines (the SPI signals) we need to connect (via the bi-directional bus) to the Raspberry Pi.

At the moment, our robot is going to run using just two IR sensors, but we could easily add more; the MCP3008 has eight analog input channels.

Step 2: Mount IR Sensors

Sharp makes several different IR sensors, with different ranges and coverage areas. I happened to have ordered the GP2Y0A60SZLF model. The model you choose will affect the placement and orientation of the sensor. Unfortunately for me, I did not really research exactly which sensors to get; it was more of a "which ones can I get at a reasonable time and price, from a reputable source, out of the ones they offer" decision.

(Update: However, that may not matter, as these sensors seem to get confused by interior ambient lighting. I am still exploring that issue)

There are at least three ways to mount these sensors on the robot.

  1. Place them in a fixed position, at the front, facing slightly away from each other.
  2. Place them onto a servo, at the front, facing slightly away from each other.
  3. Place them in a fixed position, at the front, but at the leftmost and rightmost furthest corners, angled toward each other.

In comparing choice #1 to choice #3, I think that #3 will cover more of the collision area. If you take a look at the images, choice #3 can be done not only so that the sensor fields overlap, but also they can cover the center and beyond outside width of the robot.

With choice #1, the more apart the sensors are angled from each other, the more of a blind spot in the center.

We could do #2 (I added some images with a servo as a possibility) and have them do a sweep, and obviously this can cover the most area. However, I want to delay the use of a servo as long as possible, for a few reasons:

  • We'd use up one of the PWM channels on the Raspberry Pi. (It's possible to work around this, but still...)
  • The current draw of the servo can be significant.
  • It adds more hardware and software.

I would like to leave the servo option for later when adding more important sensors, such as Time-of-Flight (ToF), or perhaps a camera.

There is one other possible advantage with choice #3 that is not available with the other two choices. These IR sensors can become confused, depending on the lighting; the robot may get a reading of an object that is imminently close when in fact there is no close-by object. With choice #3, since the fields can overlap, both sensors can register the same object (from different angles), which helps confirm the reading.

So we're going with placement choice #3.

Step 3: Time to Test

After we've made all the connections between the Raspberry Pi, the MCP3008 ADC, and the Sharp IR sensors, it's time to test. Just a simple test to make sure the system is working with the new sensors.

As in previous Instructables, I use the wiringPi C library as much as possible; it makes things easier. Something that isn't very obvious from reviewing the wiringPi website is that there's direct support for the MCP3004/3008.

Even without that, you could just use the SPI extension, but there's no need. If you take a close look at Gordon's git repository for wiringPi, you'll come across a listing of supported chips, one of which is the MCP3004/3008.

I decided to attach the code as a file because I couldn't get it to display correctly on this page.
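For reference, a minimal test along those lines, using wiringPi's mcp3004 extension (despite the name, it handles the MCP3008 as well), might look roughly like the sketch below. The pin base, the SPI channel, and which ADC channels the IR sensors sit on are my assumptions, not necessarily the robot's actual wiring:

```cpp
// Minimal MCP3008 read test using wiringPi's mcp3004 extension.
// Assumptions: SPI channel 0 (the Pi's CE0), IR sensors on ADC channels
// 0 and 1, and an arbitrary virtual pin base of 100.
#include <cstdio>
#include <unistd.h>
#include <wiringPi.h>
#include <mcp3004.h>   // covers both the MCP3004 and the MCP3008

static const int PIN_BASE = 100;   // any unused value above 64
static const int SPI_CHAN = 0;     // CE0

int main()
{
    wiringPiSetup();
    mcp3004Setup(PIN_BASE, SPI_CHAN);   // registers 8 virtual analog pins

    for (;;)
    {
        int left  = analogRead(PIN_BASE + 0);   // IR sensor on channel 0
        int right = analogRead(PIN_BASE + 1);   // IR sensor on channel 1
        // 0..1023 raw counts; converting counts to centimeters would
        // require the sensor's (nonlinear) datasheet response curve.
        printf("left: %4d   right: %4d\n", left, right);
        usleep(100 * 1000);   // ten readings per second is plenty for a test
    }
    return 0;
}
```

Watching the raw counts change as you move a hand toward and away from each sensor is enough to confirm the wiring.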

Step 4: A Virtual Sensor - AmpSensor

The more different ways you can have the robot receive information about the outside world, the better.

The robot currently has eight HCSR04 acoustic sonar sensors (they are not the focus of this Instructable), and it now has two Sharp IR distance sensors. As stated earlier, we can take advantage of something else: the Roboclaw's motor-current sensing feature.

We can wrap that query call to the motor-controller into a C++ class and call it an AmpSensor.

By adding in some "smarts" to the software, we can monitor and adjust typical current-draw during straight movement (forwards, backwards), and also rotational movements (left, right). Once we know those ranges of amps, we can select a critical value, so that if the AmpSensor gets a current reading from the motor-controller that exceeds this value, we know the motors have probably stalled, and that usually indicates the robot has bumped into something.

If we add some flexibility to the software (command-line args, and/or keyboard input during operation), then we can increase / decrease the "critical-amps" threshold as we experiment by just letting the robot move and bump into objects, both straight in, or while rotating.

Since the navigation portion of the software knows the direction of movement, we can use all that information to, perhaps, stop the movement and try to reverse it for some short period before trying something else.

Step 5: Navigation

The robot currently is limited in real-world feedback. It has a few close-distance sensors for obstacle-avoidance, and it has a fall-back technique of monitoring current-draw should the distance sensors miss an obstacle.

It does not have motors with encoders, and it does not have an IMU (inertial-measurement-unit), so that makes it more difficult to know if it's really moving or rotating, and by how much.

While one can get some sort of indication of distance with the sensors currently on the robot, their field of view is wide, and there's unpredictability. The acoustic sonar may not reflect back correctly; the infrared can be confused by other lighting, or even multiple reflective surfaces. I'm not sure it's worth the trouble to actually try to track the change in distance as a technique to know if the robot is moving and by how much and in which direction.

I purposely chose NOT to use a micro-controller such as an Arduino because a) I don't like its pseudo-C++ environment, b) too much development will wear out the read-write memory (?), and c) I would need a host computer to develop (?). Or maybe I just happen to like the Raspberry Pi.

The Pi running Raspbian, however, isn't a real-time OS, so between these sensors' instabilities and the OS not sampling at exact intervals, I felt that these sensors were better suited for obstacle-avoidance than for actual distance measurement.

That approach seemed complicated, with not much benefit, when we can use better ToF (time-of-flight) sensors later for that purpose (SLAM).

One approach we can use is to keep some sort of track of which movement commands have been issued within the last X seconds, or the last X commands.

As an example, say that the robot is stuck facing a corner diagonally. One set of sensors tells it that it's too close to one wall, so it pivots, but then the other set of sensors tells it that it's too close to the other wall. It ends up just repeating a side-to-side pattern.

The above example is just one very simple case. Adding some smarts may just raise the repeated pattern to a new level, but the robot remains stuck in the corner.

Example: instead of rotating back and forth in place, it rotates one way, does a momentary reverse (which clears the critical-distance indications), and even if it then rotates in the other direction, it still goes forward at some angle back into the corner, repeating a more complicated pattern of essentially the same thing.

That means we really could use a history of commands, so let's take a look at how to exploit that information.

I can think of two very basic (rudimentary) ways of using the movement-history.

  • For the last X moves, do they match pattern Y? A simple example could be (and this happened) "FORWARD, REVERSE, FORWARD, REVERSE, ...". So there's a matching function that returns either TRUE (pattern found) or FALSE (not found). If TRUE, the navigation portion of the program attempts other movement sequences.
  • For the last X moves, is there a general or net forward movement? How might one determine what is real forward movement? Well, one easy comparison is that over the last X moves, "FORWARD" occurs more than "REVERSE". But that doesn't have to be the only test. How about "RIGHT, RIGHT, LEFT, RIGHT"? In that case, the robot is having to make right turns to get out of a corner, or because it approached the wall at an angle; that could be considered real forward progress. On the other hand, "LEFT, RIGHT, LEFT, RIGHT, ..." might not be. Thus, if "RIGHT" occurs more than "LEFT", or "LEFT" occurs more than "RIGHT", then that could be real progress.

At the start of this Instructable, I mentioned that a possible third goal could be squaring up or aligning to a wall. For that, however, we need more than "are we close to some object". For example, if we can get two forward-facing acoustic sensors (not the focus of this article) to give reasonably good, stable distance readings, then obviously if one reports a much different value than the other, the robot has approached the wall at an angle, and it could attempt some maneuvering to see if those values converge (facing the wall squarely).

Step 6: Final Thoughts, Next Phase...

Hope this Instructable gave some ideas.

Adding more sensors introduces both advantages and challenges.

In the above case, all of the acoustic sensors worked well together, and it was rather straightforward with the software.

Once the IR sensors were introduced into the mix, it became a bit more challenging. The reason is that some of their fields of view overlapped with those of the acoustic sensors. The IR sensors seemed a bit sensitive and unpredictable under changing ambient-light conditions, whereas the acoustic sensors, of course, are not affected by lighting.

And so the challenge was in what to do when an acoustic sensor says there's no obstacle but the IR sensor says there is.

For now, after trial-and-error, things ended up in this priority:

  1. amp-sensing
  2. IR-sensing
  3. acoustic-sensing

And what I did was simply lower the sensitivity of the IR sensors, so they would only detect very close objects (such as imminent chair legs).

So far, there hasn't been a need to do any multi-threading or interrupt-driven software, although I do occasionally encounter loss of control between the Raspberry Pi and the Roboclaw motor-controller (loss of serial communications).

This is where the E-Stop circuit (see previous Instructables) would normally come into use. However, since I don't yet want to deal with resetting the Roboclaw during development, the robot isn't going that fast, and I am present to monitor it and shut it down, I haven't connected the E-Stop.

Eventually, multi-threading will most likely be necessary.

Next Steps...

Thank you for making it this far.

I obtained some VL53L1X IR laser ToF (time-of-flight) sensors, so that's most likely the topic of the next Instructable, together with a servo.


    4 Discussions

    Omnivent

    3 months ago

    Hi Eli,

    "(Update: However, that may not matter, as these sensors seem to get confused by interior ambient lighting. I am still exploring that issue)"

    Not sure if you tamed your Sharp sensors, but I doubt that what you see is ambient-light issues - Sharp knows how to make an Optical Distance Sensor (ODS), and they have done so for decades.

    The general issue with all Sharp ODS'es, in amateur robotics, is the amateurs' relentless way of discarding most facts as stated in the datasheet ;)

    (Sorry, the truth isn't always nice, but it is the #1 issue with Sharp ODS'es (and lots of other stuff, btw.) I have seen and helped people with online for decades).

    The overall issue with the ODS'es is a lack of understanding of their need for a "stiff" supply (think Adamantium hard :) ), as they operate similar to a cell phone, in bursts. So when the datasheet states:

    Average supply current, Icc1, Vcc = 3V, V_CTRL = 3V: 33mA (typ) and 50mA (max)

    the operative words to heed are "Average" and "50mA".

    But "Average" is a vague term here, so it needs to be coupled with another, far more important statement from the datasheet:

    "Please use an electric source with an output current of 400mA or more because LED pulse current is more than 300mA."

    Which can all really be boiled down to: add a sensible amount of capacity or you're shit out of luck!

    How much capacity, then?

    The datasheet mentions 16.5ms +/- 3.7ms for one measurement (in the Timing Chart section). We know that it draws pulses, so it isn't on all the time, but for the time being let's assume that the supply has to hold up for 20.2ms @ 400mA for a single measurement, assuming the LED is on the entire period - which it isn't, but it works as a worst-case example to start with:

    The formula for capacity is A x s / V (amperes times seconds, divided by the allowable voltage drop).

    Assuming an allowable voltage drop of 0.25V,

    that means a capacitor of: 0.4A x 0.0202s / 0.25V = 33'000µF

    This is worst case and with the LED on the entire 20.2ms, but luckily, this isn't needed :)

    Parameters that shrink the magnitude of capacity needed are: the duty cycle of the LED, and the allowable voltage drop over the period.

    If you find (using a 'scope) that the Duty Cycle is say 2%, the number falls to:

    33'000µF x 2 / 100 = 660µF

    and further, if your experiments show that it will work reliably with a voltage drop of say 0.75V, it is further reduced to: 660µF x 0.25V / 0.75V = 220µF

    Please note that the above are just imaginary values.
    You will have to find the duty cycle and the allowable voltage drop for your exact device - and then multiply the capacity you arrive at by 1.5 to be on the safe side :)

    A more precise way of finding the exact needed capacity would be to calculate on a pulse-by-pulse basis, including the pause between pulses where the capacitor can recharge, and the recharge time. But ball-park figures are a very helpful start, although the answers you get may be surprisingly high.

    From personal experience, I'd say start with 10..100µF mounted smack dab at the ODS' power terminals, and be sure that you can feed it with 50mA average (i.e. all the time).

    I'm pretty sure that this will cure most or all of the issues you have. In the unlikely event that you're still not happy, add a Kalman filter to kill off any noise. And before anything else, make sure your motor(s) are neither conducting commutator noise over the power line nor radiating noise (i.e. proper dampening and shielding, coupled with a star ground at the battery negative terminal, as that is the lowest-impedance point). Screening, electric-noise dampening, a good ground, and separating power cables from signal/sensor cables will give you much less grief with any sensors you may add.

    In case you don't have a 'scope, PM me and I'll dig some different ODS'es up and measure them for you. I don't have the model you have bought, but I have 3 or 4 different types, some a bit more advanced, but I think the duty cycle will be pretty similar - because why change something that works.


    Have a nice day :)

    3 replies

    elicorrales (in reply to Omnivent)

    Reply 3 months ago

    Hi, thanks , quite a bit of info. :)

    Part of what you wrote is already present: the shielding, the star ground; I had added some caps across the motors, and the high-current wires and devices are separated from the logic wires and voltages.

    So adding a cap across the ODS would be the new thing. And possibly the Kalman filter.

    I do have a scope... I had checked the rails way back, but I have not checked them recently, nor at the ODS themselves.

    The power is from a 12V SLA AGM 10Ah battery.

    Hey, thanks for all the input.

    elicorrales (in reply to elicorrales)

    Reply 3 months ago

    I was remembering a bit more... I had done a test program to watch the outputs from the IR sensors, and really, just doing a running average of the last N readings was sufficient for my purposes. I'd call that a simplified form of filtering.

    What I noticed while the robot was moving around is that if I set the desired threshold (in software) beyond a certain point (too low a number), it over-reacted, but only in a certain position, at a certain place in the apartment. If I set the threshold to be more lenient (a higher number), or the robot approached that position from a different angle, then it didn't over-react.

    By "over-react" I mean that it was misinterpreting something as being too close when it was not.

    Based on everything I have observed, I don't think either noise or lack of current was a real issue. However, out of curiosity, I would definitely try adding some capacitance.

    It would seem to me that if noise and/or current were an issue, it would have been evident at several times and locations (more erratic behavior), but that's not what I observed. The behavior has been predictable, constant, consistent.

    Omnivent (in reply to elicorrales)

    Reply 2 months ago

    Hi again,

    I dug up a sensor and measured a bit on it. It was a Sharp GP2D12 (bought around 1999-2001, but I bet the "motor" in them is pretty similar, as it's a single-chip solution containing both the modulation and the sensor line).

    Sharp GP2D12:
    All measurements taken directly at the power terminals of the GP2D12 with a fast 'scope.

    Modulation: ~40ms bursts, each made up of 32 LED pulses with a repetition rate of 1ms, and around 9ms of inter-burst pause.

    Each pulse (LED on time) is ~130µs.

    With no capacitor, each edge of the pulse is very noise-ridden, with the potential to screw up digital circuitry in no time flat:
    The leading edge pulses to under 3V, then up to ~5.5V, and dies in a dampened ringing ending at ~1.1V..1.2V under Vdd (while the LED is on).
    The trailing edge (LED extinguishing) starts with a fast pulse to more than 15V, then down to around 2V, fading into Vdd in a (less than pretty) dampened ringing.

    A 10µF ceramic (i.e. low-ESR) capacitor directly at the power terminals of the GP2D12 removed the spiking (up/down), but wasn't able to keep the power terminals at Vdd during LED pulses (of course).
    Alternatives to a 10µF ceramic could be a 10µF _solid_ tantalum, or ~22µF regular tantalum (the latter with a shorter life cycle, though).

    Pics (1..5 with no cap):
    1 Full burst
    2 Single pulse
    3 Leading edge of LED pulse
    4 Trailing edge of LED pulse
    5 A handful of pulses
    6 A handful of pulses with a 10µF ceramic cap directly over the power terminals

    Apart from the cap, twist the wires to your power source (around 1 twist every 10mm or so) and keep them as short as practically possible (keeping the inductance of the supply wires low that way will help), and don't run them in parallel with motor supply wires. Power taken directly from the SLA is good, as that means low impedance :)

    Hope everything plays out well for you :)
