Introduction: Odometry: Running Squares

Doesn't sound like a 'Big Deal'?

Well, to be honest, for me it turned out to be pretty hard work. ;-0)

As in previous blogs I'm not aiming for a scientific essay, but I'll try to present the facts I had to gather to make this work in a digestible way, hoping it will help others who are working on the same topics.

Up till now I had my robots running towards a target, keeping it in sight and adjusting the direction of the bot according to the change in the camera image. Simple and effective when one can use target objects to aim for. The scripts I used resulted in nice curved moves towards markers I placed in advance.

But thinking about controlled moves without using a camera, running trajectories in a straight line at a certain speed for a certain distance, I had to dig into the theory of odometry: the use of data from motion sensors to estimate change in position over time.

The idea is simple and as old as ancient navigation by sailors. The method they used is called 'dead reckoning', where 'de(a)d' stands for 'deduced'. The sailors estimated their position relative to a starting location using velocity and heading. Once new data on the real position was retrieved (e.g. by sextant measurement), a new, corrected estimation was made. To be effective, dead reckoning requires rapid and accurate data collection, equipment calibration, and processing.

So, to make a robot car move in a straight line for a certain distance at a certain speed, one should estimate where it will be in a short moment of time, then read the sensors, determine corrections based on the errors between estimation and readings, and calculate a new estimation. Sounds like a typical job for a computer!

But there's a lot more involved than bare coding. The hardware characteristics are important: the size of the car and the wheels, the strength of the motors, the maximum velocity and acceleration of the bot, and the resolution of the sensors.

And then there’s still a lot more that influences the effectiveness, leading to (sometimes) frustrating results. But I’ll come to all that later on.

I'll deal with the estimation part, the control loops (explaining the use of PIDs), the characteristics of the sensors and some handy parts of the theory of dead reckoning with robots. The Python script can be found at:

Running squares

You won't find very sophisticated coding there. It is written in a straightforward way with a lot of comments, to make it as readable as possible for those who are just starting (like me). The same applies to the movie. It's not a commercial video; it was shot with my Blackberry smartphone and is just meant to illustrate what the script does.

Step 1: The Steering Part

To build the script, 3 main parts have to be considered: the 'Steering' part, which can be derived from the hardware characteristics; the 'Controlled' part, the algorithms dealing with the differences (errors) between the estimations and reality; and the 'Actual' part, consisting of the data from the sensors used, providing information about the most probable (!) actual situation.

The effectiveness of the script heavily depends on the accuracy of the sensors used. For example: if the ancient sailors had had a GPS system, they would have had more accurate data on their actual position.

Using more and different sensors, like encoders, a digital compass, a GPS receiver (for outdoor use) or even a gyroscope, would produce more accurate data on the actual situation. Unfortunately it also makes the controlled part more complex, for it has to deal with the weighted influence of the separate sensors to determine the most accurate estimate of the actual position ('sensor fusion').

Every sensor used will bring some particular 'noise' into the equations. If a GPS receiver were to log data at a fixed position for, say, 24 hours, a plot of that data would be scattered around the actual position. Therefore the accuracy of a GPS system is expressed as 95% of the readings falling within a certain range (e.g. < 5 meters). To filter sensor noise and balance the weight of the separate sensors, one needs a more sophisticated control loop. The Kalman filter is often used for that.

So, to keep it simple in my first explorations, I stuck to a single type of sensor and only used encoders. They generate 'ticks' per rotation of the wheel axes, thus providing data on the actual turning of the wheels (which in turn gives information on velocity and/or distance).

There are many different encoders. Optical encoders often use a segmented disc and an infrared sensor to detect the changes (from solid to open and back). The resolution (i.e. the number of ticks per revolution of the wheel axis) of optical encoders is rather low and thus limits the accuracy that can be achieved. Other encoders use magnetic discs and have a much better resolution, but optical encoders are much cheaper and easier to tune and use. Finally, encoders can be single or quadrature; the latter also provide data on the direction of the wheel rotation.

RB2 has single optical encoders. They do impose limitations, especially on the accuracy (centimeters at best) and the measuring frequency (we need at least 1 tick to enter the controlled part).

Now let’s dive into the ‘Steering’ part. It is the most factual part and all variables needed can be calculated upfront and used as constants in the script. If the script is intended to be used on several different robots, the ‘Steering’ part has to be dealt with in the initialization routine of the script.

Some facts about RB2:

  • The chassis is a DF Robot Baron (4 motors, connected as 2 differential motors) + additional mounting floor
  • Optical encoders on the 2 front wheels, generating 20 ticks per revolution.
  • The distance between the center of the wheels = 0.147 m.
  • The wheel diameter = 0.065 m.

These facts can be used to calculate some important data (a Python version follows after this list):

  • The wheel perimeter = 0.20 m (i.e. Pi * wheel_diameter)
  • The number of ticks per meter = 97.9 t/m (i.e. ticks_per_revolution / wheel_perimeter)
  • The full_turn_perimeter = 0.46 m (i.e. wheel_center_distance * Pi). This holds when both sides turn in counter rotation! I'll come back to that later when diving into turning by a certain number of degrees.
  • The number of ticks_per_full_turn = 45.2 t (i.e. full_turn_perimeter * ticks_per_meter)
  • The number of ticks_per_degree = 0.13 t (i.e. ticks_per_full_turn / 360)
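In Python these constants boil down to a few lines. The variable names are mine, not necessarily those of the actual script:

```python
import math

# Hardware facts for RB2, taken from the list above.
TICKS_PER_REVOLUTION = 20      # optical encoder resolution
WHEEL_DIAMETER = 0.065         # m
WHEEL_CENTER_DISTANCE = 0.147  # m, distance between the wheel centers

# Derived steering constants.
wheel_perimeter = math.pi * WHEEL_DIAMETER                    # ~0.20 m
ticks_per_meter = TICKS_PER_REVOLUTION / wheel_perimeter      # ~97.9 t/m
full_turn_perimeter = math.pi * WHEEL_CENTER_DISTANCE         # ~0.46 m (counter-rotating turn)
ticks_per_full_turn = full_turn_perimeter * ticks_per_meter   # ~45.2 t
ticks_per_degree = ticks_per_full_turn / 360.0                # ~0.13 t

print(f"ticks per meter:  {ticks_per_meter:.1f}")
print(f"ticks per degree: {ticks_per_degree:.2f}")
```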

So we already have a lot of useful data, but we also need some figures depending on the velocity. We need to know the maximum speed the robot can reach. Sometimes the supplier provides data on the kit and/or motors, but that turned out to be far off. I made some calculations on the maximum rotation speed of the motors, taking the reduction ratio of the gearing into account, but that also proved to be far off.

For a lot of obvious (and less obvious) reasons: slight manufacturing differences (even within the same series), gearing resistance (lubrication), tiny wheel differences (center position, diameter), the grip of the tires, the weight of the car and how it's distributed, and a lot of environmental influences like the surface (resistance, unevenness, flatness) and even air resistance. All factors that cannot be dealt with in the control model, which is why some probable data has to be acquired by testing upfront (while keeping as many factors as constant as possible).

The maximum speed of the robot is best found by running it in the same environment at maximum PWM for a certain time and measuring the distance covered in the middle of this run. I think this is best measured by counting the encoder ticks and recalculating these into meters (doing so will at least take the encoder noise into account).

RB2 came out at 0.31 m/s (the supplier specified 0.68 m/s). With this figure we can calculate another variable we need: max_ticks_per_second = 30.4 t/s (i.e. ticks_per_meter * max_speed).
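As an illustration of that recalculation; the tick count and duration below are made-up example values, not actual RB2 measurements:

```python
ticks_per_meter = 97.9        # from the steering part

ticks_counted = 61            # encoder ticks counted during the timed stretch (example)
measuring_time = 2.0          # seconds for that stretch (example)

max_speed = ticks_counted / ticks_per_meter / measuring_time   # ~0.31 m/s
max_ticks_per_second = ticks_per_meter * max_speed             # ~30.5 t/s
print(f"max speed: {max_speed:.2f} m/s ({max_ticks_per_second:.1f} t/s)")
```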

Finally we need the stall speed. This can also be found by testing: start at 0.0 PWM and increase the PWM until the robot really starts to move. It commonly lies at 5 – 30% of the total PWM range. For RB2 I came out at 0.124 m/s.
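A stall-speed test could be sketched like this; the motor and encoder functions are placeholders for your own driver interface, not a specific library's API:

```python
import time

def find_stall_pwm(set_motor_pwm, read_ticks, step=0.01, settle=0.5):
    """Ramp the PWM up from zero until the encoders report movement
    and return the PWM fraction at which the robot starts to move."""
    pwm = 0.0
    while pwm <= 1.0:
        set_motor_pwm(pwm)
        ticks_before = read_ticks()
        time.sleep(settle)               # give the robot time to react
        if read_ticks() > ticks_before:  # first movement detected
            set_motor_pwm(0.0)
            return pwm
        pwm += step
    set_motor_pwm(0.0)
    return None   # never moved: check wiring, driver or battery
```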

Now we have all the data needed to estimate the velocity and position of the robot while it covers a certain distance at a certain speed. The figure below shows the typical diagram for such a run.

Step 2: The Control Loop

The steering part provides the facts for our calculations. The next step is estimating where the robot should be, which is the starting point of the controlled part of the script.

First of all we have to pick an interval for the control loop, including the sensor measurements. The smaller the interval, the better: it creates fast feedback on our estimations from the encoder data. Preferably the interval should be around 0.01 second.

The Dagu Arduino-like Mini Driver on RB2 allows a sensor feedback interval of 0.01 sec, but there were some things to consider. First, there are several sources of delay (e.g. the execution time of the control loop itself). Second, the encoder resolution has to be taken into account: in other words, the minimum time required to generate at least 1 tick at starting speed (i.e. the stall speed). For RB2, 0.1 sec is used as the interval. A 10-times lower feedback frequency has its implications for the control loop: it limits how finely the errors and adjustments can be tuned. A trajectory of 2 meters takes a good 6 seconds to finish (at 0.31 m/s). In this period there are only around 60 moments to read the encoder data and implement the calculated adjustments.

For good tuning (I will come to that when describing the control loop) one should have at least a couple of hundred evaluation loops. You can watch the effect in the video at the start of the blog: the corrections on the direction are sometimes a bit abrupt. A higher feedback frequency would allow smoother corrections. (Another cause is the encoder resolution: the minimum error that can be read is 1 tick. So what you see in the video is the best I could squeeze out of the situation.)

It is essential that the evaluations are done at a constant frequency; variations in the intervals are disastrous for the control loop! That's where I was confronted with another hurdle. I started with the Python time.time() function, but found out it produced some weird peaks, even when running a script containing only the interval loop itself. So I switched to the time.clock() function, which is bound to the processor time, and ran the script on a Windows, a Unix and a Linux system. Alas, with no better results. I even tried threading.Timer() to generate a timing loop as a separate thread. This worked out fine, but made the script far too complicated (for me). As you can read in the script, I ended up capping the timer at a maximum and skipping the control loop when the timer exceeds the limit. A bit rough, and it has its consequences for accuracy, but it works better than having too much variance in the control loops. One could probably produce better timing/event handling in C/C++, but that would take me too much time to figure out, in light of what the script should do.
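A minimal sketch of that capped-interval approach; control_step stands in for the read-evaluate-correct routine and is not the actual script's structure:

```python
import time

def control_loop(control_step, interval=0.1, max_dt=0.15, duration=6.0):
    """Call control_step roughly every `interval` seconds, but skip any
    step whose measured interval exceeds `max_dt` (the cap described
    above), rather than feeding a wildly varying dt into the PIDs."""
    start = last = time.time()
    while time.time() - start < duration:
        now = time.time()
        dt = now - last
        if dt < interval:
            time.sleep(0.001)   # yield briefly instead of busy-waiting
            continue
        last = now
        if dt > max_dt:
            continue            # interval too irregular: skip this evaluation
        control_step(dt)        # read encoders, run the PIDs, set the motors
```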

So every 0.1 sec the encoders are read, the differences (errors) relative to our estimation are calculated, and a new estimation is made. In the script the desired velocity of the bot is calculated and set as the target. Targets are more often referred to as 'setpoints'. In the script a setpoint is the desired speed multiplied by the interval time, giving the number of ticks to make in the interval. Strictly speaking this isn't correct: the desired speed is the speed at the end of the interval. To work more precisely, one should use the equation Vt = Vo + a.t and take the average speed over the interval, (Vo + Vt) / 2, to calculate the exact number of ticks that should be made in the interval. (V stands for velocity in m/s, a for the acceleration and t for the interval time.)
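As a small illustration of that refinement, here is a sketch of the setpoint calculation using the average speed over the interval. The acceleration value is an assumption that would have to be measured for a real robot:

```python
ticks_per_meter = 97.9   # from the steering part

def setpoint_ticks(v_current, v_desired, accel, interval=0.1):
    """Ticks to expect in the coming interval, using the average speed
    over the interval (the refinement the script itself skips)."""
    v_end = min(v_desired, v_current + accel * interval)  # Vt = Vo + a.t, capped
    v_avg = (v_current + v_end) / 2.0                     # average speed in the interval
    return v_avg * interval * ticks_per_meter

# Example: accelerating from stall speed (0.124 m/s) towards 0.31 m/s
# at an assumed 0.5 m/s^2: about 1.5 ticks in the next 0.1 s.
print(setpoint_ticks(0.124, 0.31, 0.5))
```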

Taking into account all the limitations I described, I kept it simple. As the video shows, one can reach reasonable results anyhow. Having a setpoint established, the bot will run at a certain speed, and at the start of a new interval the setpoint can be evaluated against the encoder data. That's where the controlled part starts.

Step 3: Using a PID

Let's imagine the first setpoint = 2 ticks, so we expect the bot to run at 2 ticks per 0.1 sec. (As mentioned, this is a rough calculation, leaving out acceleration during the interval.) Then the bot starts to run at stall speed. After 0.1 second the sensors are read and the encoders come up with 1 tick. We're below target, with an error of 1 tick.

In formula:

error = setpoint – actual ticks

The actual speed has to be increased, for we are aiming at a higher speed for the bot. The simplest way to correct this is adding the error to the next speed command. In practice this turns out to be too much or too little to get the bot near the desired speed. The correction can be tuned by multiplying the error by a constant. This way of correcting is called 'proportional', and the tuning constant is called a 'gain'.

In formula:

P = Kp * error

Tuning the proportional correction thoroughly will reduce the error over time significantly, but will never bring it to zero (for a proportional correction always needs a remaining error to act on). In many cases this way of correcting also results in rapid changes, giving the bot a jerky, oscillating behavior. To smoothen the reactions, an 'integral' part is added to the correction by summing the errors over time. The integral part can also be tuned by using a gain.

In formula:

I = Ki * sum of errors * interval

Summing up will finally reduce the error to zero, and it works as long as a portion of the error is left. But the integral correction takes more time and … works longer. So when the error is finally reduced to zero, the integral part of the correction will keep on working (and become negative) until the sum of all errors is reduced to zero. This is called overshoot.

Often (e.g. when applied to a balancing robot) a third part of the correction is needed to dampen the overshoot. This is the 'derivative' part. It deals with the volatility of the change by taking the difference between two successive errors into account. The derivative part, too, can be tuned by using a gain.

In formula:

D = Kd * (error – previous error) / interval

Adding the parts will result in a controlled correction of the speed: P + I + D.
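Put together, the three parts fit in a few lines of Python. This is a minimal sketch that mirrors the formulas above; the gain values are placeholders that have to be tuned per robot:

```python
class PID:
    """Minimal PID controller following the P, I and D formulas above."""

    def __init__(self, kp, ki, kd, interval=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.interval = interval
        self.error_sum = 0.0        # running sum for the integral part
        self.previous_error = 0.0   # last error for the derivative part

    def update(self, setpoint, actual_ticks):
        error = setpoint - actual_ticks
        self.error_sum += error
        p = self.kp * error
        i = self.ki * self.error_sum * self.interval
        d = self.kd * (error - self.previous_error) / self.interval
        self.previous_error = error
        return p + i + d            # correction, still expressed in ticks

# Example with placeholder gains:
pid = PID(kp=0.8, ki=0.3, kd=0.05)
correction = pid.update(setpoint=2, actual_ticks=1)
```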

The setpoint, the error and the PID correction are all expressed in ticks (per second). For the command that changes the motor speed, this has to be recalculated into a PWM percentage. Every time the controlled part is executed, the speed of the motors is adjusted until the bot runs at the desired speed.
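A sketch of that recalculation, assuming (as a simplification) that PWM scales roughly linearly with speed up to max_ticks_per_second; in reality the mapping is only approximately linear:

```python
max_ticks_per_second = 30.4   # from the steering part

def ticks_to_pwm(ticks_per_interval, interval=0.1):
    """Translate a speed in encoder ticks per interval into a PWM
    fraction between 0.0 and 1.0."""
    ticks_per_second = ticks_per_interval / interval
    pwm = ticks_per_second / max_ticks_per_second
    return max(0.0, min(1.0, pwm))   # clamp to the valid PWM range
```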

As mentioned before, this is a simplified control that works without using the acceleration in the equations. Looking at the velocity diagram: this script doesn't realize a decelerating motion. When entering contests you will need to, and you will probably have the bot crawling for the last millimeters towards the exact target distance. (With RB2, centimeter accuracy is the best one can get.)

The full Python coding is provided in the script, so I'll leave it out of this blog. Keeping track of all values and saving them to a CSV-formatted file enables us to analyze the error data and helps with tuning the correction by changing the gains (so quite some exercise and a healthy portion of patience). The picture below shows a dashboard used for analysis. Obviously I wasn't there yet. ;-0)

If the tuning is done properly the error graphs will show nice curves, slowly approaching zero.

RB2 is a differentially driven robot: it has 2 sets of independent motors. For each side a separate controlled part is used. In theory both sets of motors should be running at the target speed at the same moment in time, and the bot should run in a straight line. To top it off, a third, balancing PID can be used to correct the remaining differences between left and right.

RB2 produced another challenge: I couldn't tune out the difference between the left and the right speed. I still don't know the reason (I switched motors, switched tires, reversed the wiring, tried another motor driver, changed the frame and replaced the battery pack, but couldn't close the gap). So in my script the balancing PID has the greatest weight in the total correction, and the maximum of both encoder readings is used to calculate the distance covered (adding another inaccuracy). I do think this is the way to implement it: first tune the PIDs for the motors; when the desired result and behavior is reached, implement the balancing PID.
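Roughly, the three controllers could be combined per control step as sketched below, reusing the PID class from Step 3. The gains and the sign convention are illustrative, not the script's actual values:

```python
pid_left = PID(kp=0.8, ki=0.3, kd=0.0)
pid_right = PID(kp=0.8, ki=0.3, kd=0.0)
pid_balance = PID(kp=1.5, ki=0.5, kd=0.0)   # gets the greatest weight

def control_step(setpoint, ticks_left, ticks_right):
    """Return the left and right speed corrections in ticks."""
    corr_left = pid_left.update(setpoint, ticks_left)
    corr_right = pid_right.update(setpoint, ticks_right)
    # The balancing PID drives the left/right tick difference to zero:
    # if the left side runs faster, it slows left down and speeds right up.
    balance = pid_balance.update(0.0, ticks_left - ticks_right)
    return corr_left + balance, corr_right - balance
```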

All that's left for running squares now is making turns of exactly 90 degrees. And of course: 'Why would we ever want the bot to run squares?'

Step 4: Control the Turns

Driving (almost) straight, at a targeted speed and for a distance measured in centimeters, is important for having the bot run square trajectories, but we also need the bot to make 90-degree turns. A perfect 90-degree turn is again a target, not always the reality; while turning there are also a lot of possible disturbances (e.g. slipping wheels).

Using the encoders to control the turns enables us to get satisfactory results, as close as possible to the target. The way to do this is pretty straightforward: calculate the distance for the wheels to turn in ticks and have the bot turn until that specific number of ticks has been reached.

The steering part already provided the number of ticks per degree (see the start of this blog). Multiplied by the number of degrees to turn, this gives the number of ticks to produce.

The number of ticks per degree depends on the full turn perimeter. The value mentioned at the beginning of this blog is based on turning the bot with the wheels on both sides counter-rotating. In this case the center of the full turn circle is the center of the bot, and the width of the bot corresponds to the diameter of the full turn circle.

It is also possible to turn the bot by running the motors on just one side. In that case the full width of the bot gives the radius of the full turn circle.

The picture below shows the differences between both ways to turn. Depending on the chosen method the steering variable has to be calculated differently.

It's my personal opinion that the differential turn (both sides counter-rotating) produces fewer disturbances and therefore better results (a shorter turn, less friction on the wheels).

Coding this as a function in Python allows us to use all kinds of angles, as sketched below. The only thing to keep in mind is that at least 1 tick has to be produced, and that sets the minimum angle that can be used.
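A minimal sketch of such a function. The motor and encoder calls are placeholders for whatever your own driver provides, not a specific library's API:

```python
ticks_per_degree = 0.13   # from the steering part

def turn(degrees, read_ticks, set_motors_counter_rotating, stop_motors):
    """Turn on the spot by roughly the given angle by counting encoder
    ticks. With 0.13 ticks per degree, any angle that yields less than
    1 tick (about 8 degrees here) cannot be distinguished from no turn."""
    target_ticks = max(1, round(abs(degrees) * ticks_per_degree))
    start = read_ticks()
    set_motors_counter_rotating(1 if degrees > 0 else -1)   # turn direction
    while read_ticks() - start < target_ticks:
        pass                    # a real script would sleep briefly here
    stop_motors()
```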

Step 5: Why Running Squares?

So much for dead reckoning: continuously estimate, actuate and correct errors. Even when executed very well, it is still just a model of reality. I have already described a lot of disturbances that are no part of the model. So, by definition, the bot will never run exactly (100%) straight or turn by exactly 90 degrees.

The impact of a deviation in the driving direction is substantial (when driving off in a new direction, the bot will never end up where it's supposed to). That's a good reason for further calibration.

Fortunately there's a simple way to do that. Borenstein et al. developed a method that's extensively described in:


UMBMark - A Method for Measuring, Comparing, and Correcting Dead-reckoning Errors in Mobile Robots

( http://www-personal.umich.edu/~johannb/umbmark.ht... )

In short the method comes down to this: run a square clockwise and then run the same square counter-clockwise. The picture below shows an exaggerated projection of the real trajectory. CW and CCW are the errors that can be measured after running each square. The angle error can be calculated as:

(CW + CCW) / length of a square leg

Optionally this error can be multiplied by a gain. Finally it can be added as a constant error to the controlled part for balancing the wheels.
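With made-up measurements the arithmetic boils down to the following; the numbers are purely illustrative, not actual RB2 results:

```python
# Measured offsets after driving a 1 m-per-leg square in both directions
# (illustrative values only).
cw_error = 0.06    # m, offset after the clockwise square
ccw_error = 0.04   # m, offset after the counter-clockwise square
leg_length = 1.0   # m
gain = 1.0         # optional tuning gain

# Small-angle approximation: the result is an angle (in radians) that
# can be fed into the balancing part as a constant error.
angle_error = gain * (cw_error + ccw_error) / leg_length
print(f"angle error: {angle_error:.3f} rad")
```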

That’s it for now!