Robot position with quadrature encoder

I'm building a robot for environment mapping. It's a simple "start here"-class robot: two wheels, two DC motors, and a range sensor mounted on a servo. I plan to hook it up to my PC, first by USB and later on wirelessly somehow.

I want it to drive around and send the range sensor readings to the PC, which in turn will be building a map of the environment. The hard part will be knowing the robot's position relative to its earlier position(s). I know this would be easy with a GPS or an accelerometer, but I wish to find a more low-tech/DIY solution, so here is my idea:

What if I attached a quadrature encoder (aka rotary encoder) to each motor shaft? Knowing the size of the wheels, shouldn't it then be easy to calculate how far it has moved at any given time?
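
To illustrate the arithmetic I have in mind: wheel circumference divided by encoder counts per revolution gives distance per count. A rough sketch (the counts-per-revolution and wheel diameter below are made-up placeholder values, not real hardware specs):

```cpp
// Distance-per-count arithmetic; the two constants are placeholders.
const float COUNTS_PER_REV    = 360.0;  // encoder counts per wheel revolution (assumed)
const float WHEEL_DIAMETER_MM = 65.0;   // wheel diameter in mm (assumed)
const float MM_PER_COUNT = (3.14159265 * WHEEL_DIAMETER_MM) / COUNTS_PER_REV;

// Distance travelled by one wheel since its counter was last reset.
float distanceMm(long encoderCounts) {
  return encoderCounts * MM_PER_COUNT;
}
```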

Has anyone tried something similar? Has anyone ever hooked up a quadrature encoder to an MCU? Do you guys think it'll work? Do you foresee any problems?

EDIT: I forgot to mention that I already found instructions on how to hook it up to an Arduino. That's not the issue; it's practical advice, ideas and experiences with using one that I'm looking for…

It’s easy to measure how far
It's easy to measure how far it has moved based on an encoder counting motor rotations, but you're forgetting that you are recreating a model of the real world from a set of variables, and measuring all of those variables is impossible. For example, there are many factors that cause inefficiency, leaving you with a model of the environment that isn't true to the real world, and the error increases over time.

Take a look at this page - and especially the video at the bottom. Pay attention to how much more it misses the target location the longer it takes to get there.

You still have to worry
You still have to worry about gear slop, and both wheels will have a little slip. You will need landmarks so the robot can "true up" and check whether where it thinks it is is correct. Otherwise you will slowly build up accumulated error until it thinks it is in the middle of the room but is in fact humping the wall.

Thanks guys

It seems I'm on the right track. I had considered that the method is inherently imprecise for various reasons, and that this imprecision will increase over time. I also had an idea about constantly double-checking the position using the map itself. However, I hadn't considered using "predefined" landmarks like doorways. That may be a good idea.

For now I’ll keep considering the options and all suggestions are still more than welcome :slight_smile:

I would recommend

I would recommend integrating a tilt-compensated compass to help compensate for drive errors, and if you intend to use it outdoors, integrating a GPS as well. The math can/will become tricky.
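
One common way to fold the compass in without getting too deep into the math is a complementary filter: trust the encoders over short time scales and let the compass slowly pull the heading back. A minimal sketch, assuming headings in radians; ALPHA is a made-up tuning value:

```cpp
// Complementary-filter heading fusion (sketch). ALPHA close to 1 means the
// encoder-derived heading dominates short-term while the compass corrects
// long-term drift.
const float ALPHA = 0.98f;

// Wrap an angle into (-pi, pi].
float wrapAngle(float a) {
  while (a >   3.14159265f) a -= 6.2831853f;
  while (a <= -3.14159265f) a += 6.2831853f;
  return a;
}

// odoHeading: heading integrated from the wheel encoders (radians)
// compassHeading: tilt-compensated compass reading (radians)
float fuseHeading(float odoHeading, float compassHeading) {
  float error = wrapAngle(compassHeading - odoHeading);
  return wrapAngle(odoHeading + (1.0f - ALPHA) * error);
}
```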

I started working on a similar system more than a year ago; it has not gotten the time it deserves in recent months though. Take a look at http://www.auv.co.za/blog/renosterfirstrun.

Hi!

No outdoors, and for the moment no GPS, compass, accelerometer or other "high-tech" units. I'll probably end up there some day, and like I said, I'm aware that these would make it a lot easier, BUT I don't have the money to buy every component I want. Besides, I want to at least TRY the cheaper low-tech options first and see how far they'll take me. And while ELECTRONICS scare the crap out of me, I'm not afraid of math or programming :slight_smile:

I'd be interested in knowing more about your project: is it currently creating a map? Do you have a way of visualizing the map? Is it (in the video you posted) using just the quadrature encoders, or also the GPS and compass?

In that video it is only

In that video it is only using the quadrature encoders, one on each wheel, with a simple waypoint system. I had one rogue waypoint at the end, which made for an interesting little dance circle.

The positioning is based on a relatively simple odometry example by David Anderson: http://geology.heroy.smu.edu/~dpa-www/robo/Encoder/imu_odo/.
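
The heart of that kind of differential-drive odometry is a short dead-reckoning update, roughly like this (my paraphrase, not David's actual code; the wheelbase value is a placeholder):

```cpp
#include <math.h>

// Dead-reckoning pose update for a two-wheel differential drive.
const float WHEEL_BASE_MM = 150.0f;  // distance between the wheels (assumed)

float x = 0, y = 0, theta = 0;  // pose: position in mm, heading in radians

// dLeft/dRight: distance each wheel travelled since the last update, in mm
// (encoder counts times mm-per-count).
void updatePose(float dLeft, float dRight) {
  float dCenter = (dLeft + dRight) / 2.0f;          // forward travel of the center
  float dTheta  = (dRight - dLeft) / WHEEL_BASE_MM; // change in heading
  theta += dTheta;
  x += dCenter * cos(theta);
  y += dCenter * sin(theta);
}
```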

I found that by using an X configuration (front left paired with rear right, and front right with rear left) to get two pose estimates and then averaging them, I was able to get within about 1% of the target over approx 10 m, i.e. within about 10 cm. I am sure that if I had used the UMBmark calibration process I would have been able to get it even more accurate.
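
The averaging step looks something like the sketch below; note that headings need a wrap-aware average rather than a plain mean (the two poses are assumed to come from two independent integrations like the one above):

```cpp
#include <math.h>

// Average two pose estimates, e.g. one from each diagonal wheel pair.
struct Pose { float x, y, theta; };

Pose averagePoses(const Pose& a, const Pose& b) {
  Pose out;
  out.x = (a.x + b.x) / 2.0f;
  out.y = (a.y + b.y) / 2.0f;
  // Average the headings via unit vectors so the +/-pi wrap is handled.
  out.theta = atan2((sin(a.theta) + sin(b.theta)) / 2.0f,
                    (cos(a.theta) + cos(b.theta)) / 2.0f);
  return out;
}
```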

After integrating an Ocean Server OS5000-S compass (I dreamt of one of these for many months before I could afford it) to get the heading, and adding some compensation for wheel slip and for rounding in the float math, I was able to get that down to less than half: about 0.48%, measured over a straight line.

In the video it was also using an I2C-based quadrature counter, http://www.auv.co.za/blog/attiny45quadraturedecoder, so the counts are slightly out of date by the time of the position calculation. This is negligible initially, but it is amplified at higher speeds and does cause drift over time.

I have since designed a new robot control shield for the Arduino Mega to make use of the many external interrupts on the ATmega1280, allowing the encoders to be interfaced directly to the primary processor. This, together with the compass and GPS, should allow for pretty accurate positioning outdoors.
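
For reference, direct interfacing usually comes down to one external interrupt per encoder: fire when channel A changes, read channel B for the direction. A minimal sketch for one wheel (pin numbers are placeholders; any of the Mega's external-interrupt pins would do):

```cpp
// Minimal interrupt-driven quadrature decode for one wheel on an Arduino Mega.
const byte PIN_A = 2;   // encoder channel A (external-interrupt pin, assumed)
const byte PIN_B = 3;   // encoder channel B (assumed)

volatile long counts = 0;

void onChannelA() {
  // Direction is given by whether B leads or lags A.
  if (digitalRead(PIN_A) == digitalRead(PIN_B)) counts++;
  else counts--;
}

void setup() {
  pinMode(PIN_A, INPUT_PULLUP);
  pinMode(PIN_B, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(PIN_A), onChannelA, CHANGE);
}

void loop() {
  // counts is updated in the background; snapshot it with interrupts off
  // so the two bytes of the long can't change mid-read.
  noInterrupts();
  long snapshot = counts;
  interrupts();
  // ... feed snapshot into the odometry update ...
}
```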

No map visualization yet, though I've got loads of sensor data.

All I need now is the time to work on it again.

The downfall of the PICAXE

The downfall of the PICAXE is that you can't use arrays. Typically you would make a grid of the room the robot will be in. Let's say the room is 20 feet by 20 feet. You make a 20x20 array indexed [0..19, 0..19] (as 0,0 is the first coordinate). In every grid location where there is something in the way you put a 0. The robot's current location is marked with an X and the goal location is the number 1. Every time the robot moves you update the array so that the other cells hold the number of squares they are away from the goal (look up "wavefront" to get more details on what I am talking about). Since accumulated error throws you off of the true location, you use landmarks to true up your location.

This is likely overkill for what you want to do, and the PICAXE can't handle it, but look it up and maybe you can adapt it to work on the PICAXE. It just so happens this was taught in lecture this week; here are the PowerPoint slides on localization: http://roboti.cs.siue.edu/classes/integratedsystems/lectures/2009/LectureOct12&14.ppt
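
If you want to see the fill itself, it's just a breadth-first flood outward from the goal. A toy version (I've used -1 for obstacles rather than 0, so that 0 can mean "not yet filled"; otherwise it follows the numbering described above):

```cpp
#include <queue>
#include <utility>

// Toy wavefront fill over a 20x20 grid.
// -1 = obstacle, 0 = not yet filled; after the fill, each reachable cell
// holds its step-distance from the goal, starting at 1 on the goal itself.
const int W = 20, H = 20;

void wavefront(int grid[H][W], int goalX, int goalY) {
  std::queue<std::pair<int,int>> frontier;
  grid[goalY][goalX] = 1;
  frontier.push({goalX, goalY});
  const int dx[4] = {1, -1, 0, 0};
  const int dy[4] = {0, 0, 1, -1};
  while (!frontier.empty()) {
    auto [cx, cy] = frontier.front();
    frontier.pop();
    for (int i = 0; i < 4; i++) {
      int nx = cx + dx[i], ny = cy + dy[i];
      if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
      if (grid[ny][nx] != 0) continue;  // obstacle or already filled
      grid[ny][nx] = grid[cy][cx] + 1;
      frontier.push({nx, ny});
    }
  }
}
// The robot then just steps to whichever neighbour holds the lowest number.
```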

Well…

Sounds like you've got the hardware, so why not give it a go :wink:

Besides, if you're interested, you may be able to use the software I'll be making. I'll be programming a DirectX application for receiving and visualizing the data. I recently made a serial oscilloscope based on a DirectX API called Dark GDK. It performs great, it's very easy to use, and the serial connection is already functioning, so I'll be able to throw something together pretty fast…

Picaxe

"The downfall of the PICAXE is that you can't use arrays." …and no floating point either :frowning:

Man am I glad I bought an ARDUINO :slight_smile:

Mapping method

So far I've been playing around with points in a Cartesian plane. Simple: whenever my Sharp sensor (mounted on a servo) detects an object, I calculate a point (x, y) relative to the sensor based on the distance and the angle of the servo, and store the point in a large array. Of course this has been easy because the sensor itself is static. Once it starts moving around it'll be much harder.
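
The conversion itself is just polar to Cartesian, and once the robot moves, the same local point additionally has to be rotated and shifted by the robot's pose. A sketch (the servo-angle convention of 0 = straight ahead is an assumption):

```cpp
#include <math.h>

struct Point { float x, y; };

// One range reading -> a point relative to the sensor.
// servoDeg: servo angle in degrees (0 = straight ahead, assumed convention)
// rangeMm: distance reported by the Sharp sensor, in mm
Point toLocalPoint(float servoDeg, float rangeMm) {
  float a = servoDeg * 3.14159265f / 180.0f;  // degrees -> radians
  return { rangeMm * cos(a), rangeMm * sin(a) };
}

// Once the robot moves: rotate by its heading and offset by its position
// before storing the point in the map.
Point toWorld(Point local, float robotX, float robotY, float robotTheta) {
  return { robotX + local.x * cos(robotTheta) - local.y * sin(robotTheta),
           robotY + local.x * sin(robotTheta) + local.y * cos(robotTheta) };
}
```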

I also considered using a simple matrix (grid) as jklug80 described. But I think I'll stick with the method described above: storing and displaying all the points. My PC will be the actual brains; the Arduino will only be sending sensor readings to, and receiving commands from, my PC, so I'm not limited by the scarce memory of my ATmega328. Even so, eventually I'll have to start thinking about saving the data to files. For this I think I'll use a grid, keeping only the nearby grid sections in memory at any given time. As the robot moves forward, the sections behind it will be saved to files and the sections in front of it will be loaded into memory.
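
The section swapping could be keyed on integer section coordinates, something like the sketch below (the section size and file-name scheme are invented purely for illustration):

```cpp
#include <cmath>
#include <cstdio>

// Each map section covers a fixed square of the world (size is an assumption).
const float SECTION_SIZE_MM = 2000.0f;  // 2 m x 2 m per section

// World coordinates -> integer section coordinates.
void sectionOf(float x, float y, int& sx, int& sy) {
  sx = (int)std::floor(x / SECTION_SIZE_MM);
  sy = (int)std::floor(y / SECTION_SIZE_MM);
}

// Hypothetical per-section file name, e.g. "map_3_-1.dat".
void sectionFileName(int sx, int sy, char* buf, int n) {
  std::snprintf(buf, n, "map_%d_%d.dat", sx, sy);
}
```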

EDIT (thought I’d write a bit more about my thoughts):

Furthermore, I would draw lines between ALL points that are closer together than the diameter of the robot (represented as a circle), and then apply a simple navigation rule: the robot cannot cross lines!
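
Enforcing that rule comes down to a standard 2D segment-intersection test between the planned move and each stored line, for example:

```cpp
// "Robot cannot cross lines": test a planned move (p -> q) against a stored
// wall segment (a -> b). This is the standard orientation-based test; it
// ignores the degenerate collinear-touching cases, which is fine for a sketch.
struct Pt { float x, y; };

// z-component of the cross product (u - o) x (v - o).
float cross(Pt o, Pt u, Pt v) {
  return (u.x - o.x) * (v.y - o.y) - (u.y - o.y) * (v.x - o.x);
}

bool segmentsCross(Pt p, Pt q, Pt a, Pt b) {
  float d1 = cross(p, q, a), d2 = cross(p, q, b);
  float d3 = cross(a, b, p), d4 = cross(a, b, q);
  return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}
```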

I also thought about attaching a probability/uncertainty variable to each point: how certain is it that there is something there? This would be based on, for instance, the distance from which it was detected (the sensor is more precise at close range), the time since it was detected (the object may have moved in the meantime), and the number of times it has been detected (how likely it is to be a static vs. a moving object).
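
As a starting point, those three factors could be folded into a single score per point; every weight and range below is invented purely for illustration:

```cpp
// Confidence that a stored point is a real, static obstacle.
// detectDistMm: range at which it was detected (closer = more precise)
// ageSec:       time since the last detection (older = less trustworthy)
// timesSeen:    how often it has been detected (more = more likely static)
float pointConfidence(float detectDistMm, float ageSec, int timesSeen) {
  float dist  = 1.0f - detectDistMm / 800.0f;    // 800 mm: made-up sensor range
  if (dist < 0) dist = 0;
  float fresh = 1.0f / (1.0f + ageSec / 60.0f);  // decays over a made-up minute
  float seen  = timesSeen / (timesSeen + 2.0f);  // saturates toward 1
  return dist * fresh * seen;                    // 0 = ignore, 1 = very sure
}
```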

That should give a better idea of my initial thoughts on the matter…

Thanks a lot to both of you for providing a lot of interesting input :slight_smile:

Nice PPT
Took a closer look at your PPT, and the more I think about it, the more I like it (the grid system). It simplifies things a lot, and it reminds me of the first computer games I made as a kid :slight_smile: A similar system is described here. Also very interesting reading…