Dead reckoning algorithms / solutions

Hello All

Me again, flogging a (hopefully not) dead horse.

I wonder if anyone can provide links/guidance/code/insight into dead reckoning programming using the various (relatively) low cost components available to us.

I am thinking of things such as gyroscopes, accelerometers, magnetometers, etc. At this point I am excluding wheel encoders (since my vehicle is tracked and suffers from slipping tracks) and any form of external input such as beacons or GPS.

I imagine one should be able to calculate position if one knows the direction and duration of any acceleration. The maths is a little difficult for me, as my studies were a good 15+ years ago; I am sure some differentiation/integration is required to determine the change in distance while acceleration is occurring.
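The double-integration idea can be sketched like this. This is a one-axis toy, not a real implementation: it assumes gravity has already been removed and the sensor axis stays aligned with the direction of travel, and the function and variable names are mine, not from any particular library.

```python
def dead_reckon(samples, dt):
    """Double-integrate accelerometer samples (m/s^2) taken every dt seconds.

    Returns (velocity, position) along a single axis. Assumes the axis
    stays aligned with the direction of travel and gravity has already
    been subtracted -- both big assumptions in practice.
    """
    velocity = 0.0
    position = 0.0
    for a in samples:
        velocity += a * dt          # first integration: acceleration -> velocity
        position += velocity * dt   # second integration: velocity -> position
    return velocity, position

# Example: constant 1 m/s^2 acceleration for 2 s (200 samples at 100 Hz).
# The true answer is v = a*t = 2 m/s and p = 0.5*a*t^2 = 2 m; simple
# Euler integration lands close to that.
v, p = dead_reckon([1.0] * 200, 0.01)
```

The catch, as discussed further down the thread, is that any sensor error goes through the same two integrations and grows without bound.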

I am aware there will be drift due to the limited resolution and accuracy of the components.

I don't think this question needs information on any specific bot since it should be self contained.

Am I looking at a complicated solution or is this something that is achievable?

Thanks guys & gals

PS. For those interested in dead reckoning using encoders, the Wikipedia page http://en.wikipedia.org/wiki/Dead_reckoning has some equations you may find useful under the heading 'Differential drive dead reckoning'.
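Those differential-drive equations boil down to a short update step, roughly like the sketch below. The `track_width` value and the conversion from encoder ticks to metres are things you would have to measure on your own robot; the midpoint approximation used here is one common simplification, not the only one.

```python
import math

def update_pose(x, y, theta, d_left, d_right, track_width):
    """One differential-drive dead-reckoning step.

    d_left/d_right: distance travelled by each wheel/track since the
    last update (metres, from encoders). track_width: distance between
    the wheels/tracks (metres). Uses the midpoint approximation for
    heading during the step.
    """
    d_center = (d_left + d_right) / 2.0          # distance of robot centre
    d_theta = (d_right - d_left) / track_width   # change in heading (rad)
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# Example: both tracks advance 1 m -> robot moves 1 m straight ahead.
x, y, th = update_pose(0.0, 0.0, 0.0, 1.0, 1.0, 0.3)
```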

 

Have you seen …

https://www.robotshop.com/letsmakerobots/node/28666? There is even some code posted in the comments, and the builder has said he will post the code when he is happy with the look of it.


Hi People.

A mathematics expert would be useful at this point.

I have found a document that a grad student created studying DR solutions for an AUV at http://scholar.lib.vt.edu/theses/available/etd-08082005-173535/unrestricted/AaronKapaldoThesis.pdf

Now unfortunately, as previously stated, I did study mathematics, but that was 16+ years ago and I have only used it to count my change since, so my maths skills are minimal at this point.

Am I correct in understanding that formula 4.13 in section 4.2 is what I am looking for? That the resultant vector on the left side of the equation is the x, y, z positional co-ordinates, and Φ, θ and ψ are the angular rotations giving us the orientation of the body (bot)?

I then have some trouble determining the value of h in 4.13. The author states that A, B and H are calculated from equations 4.3 and 4.4, but I am getting lost in all of that. Is the h in 4.13 the same as the H in 4.5? Where do we use A and B in 4.13?

A final problem (for me) with this is that the velocity V used in the algorithm is the measured forward velocity of their AUV. I guess I could substitute the measured forward velocity of my bot, or determine the velocity using the accelerometer…

Anyway, if anyone can clear this up, especially the calculation of A, B and H (perhaps reducing them to easier-to-read calculations), I would much appreciate it. My integration and differentiation are basically non-existent at this point.

FWIW, it seems 4.13 is based entirely on constants measured or determined beforehand plus inputs from a 3-axis gyroscope. If his AUV is caught in a current, his DR position will suffer greatly. For my requirements I guess I could pretty much work with this, as my bot is unlikely to slide sideways and only really moves in the forward direction.

I would appreciate any input/discussion into this.

PS. I have found a further document but have not digested it yet, so I don't know if it delivers valuable info: http://www.cs.bris.ac.uk/Publications/Papers/2000009.pdf, where the authors use various devices such as gyros, magnetometers, accelerometers and pedometers, to mention but a few.

 

Cheers

 

You can’t really do that

… not with an accelerometer

Even with more sensors (such as a gyro and a compass), the errors in the sensor measurements will be amplified, and your calculated position will drift further and further from reality.

These errors accumulate very, very fast. Watch this: http://www.youtube.com/watch?v=C7JQ7Rpwn2k
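That runaway drift is easy to reproduce on paper: a constant accelerometer bias, double-integrated, gives a position error that grows with the square of time. A toy simulation, using an assumed bias of 0.01 m/s² (optimistic for a hobby MEMS part):

```python
def position_error_from_bias(bias, dt, seconds):
    """Double-integrate a constant accelerometer bias (m/s^2) and
    return the resulting position error in metres after `seconds`."""
    v = 0.0
    p = 0.0
    for _ in range(int(seconds / dt)):
        v += bias * dt    # bias integrates into a growing velocity error
        p += v * dt       # which integrates into a quadratic position error
    return p

# 0.01 m/s^2 bias after only 60 s: roughly 0.5 * bias * t^2 = 18 m off.
err = position_error_from_bias(0.01, 0.01, 60)
```

Real bias also wanders over time, so the picture in practice is even worse than this constant-bias model suggests.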



Thanks for that Antonio

That was a very interesting video to watch.

I guess it is the double integration of the sensor errors that causes the extreme drift.

Perhaps I should limit my DR to determining the orientation of the platform and use a constant, pre-measured speed along the platform's x-axis to determine my relative position.

Given that I am running a tracked ground vehicle which is not likely to roll or slide unless something is wrong, I will probably get as good a position, perhaps better, than DR using all 6 axes (accel & gyro). Still, I need to figure out the algorithm for that… and find the platform-orientation code, which someone must have somewhere.
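That scheme (gyro for heading, pre-measured constant speed for distance) only integrates the gyro once, so it drifts far more slowly than the accelerometer approach. A minimal sketch, where the sample rate, the speed and the list of gyro readings are all assumed values for illustration:

```python
import math

def track_position(yaw_rates, speed, dt):
    """Heading-plus-speed dead reckoning.

    yaw_rates: one z-axis gyro reading (rad/s) per time step of dt
    seconds. speed: the pre-measured forward speed (m/s), assumed
    constant. Returns the final (x, y, heading).
    """
    x = y = heading = 0.0
    for omega in yaw_rates:
        heading += omega * dt               # single integration only
        x += speed * math.cos(heading) * dt # advance along current heading
        y += speed * math.sin(heading) * dt
    return x, y, heading

# Example: drive straight for 5 s at 0.2 m/s (gyro reads zero) -> 1 m in x.
x, y, h = track_position([0.0] * 500, 0.2, 0.01)
```

Gyro bias still makes the heading wander, so the position error grows with time here too, just linearly in heading rather than quadratically in position.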

Cheers


The best way to do DR when the drive wheels/tracks slip is to add a separate odometer (one or two lightweight, low-slip wheels with encoders on them, or even an optical mouse). Even then, DR won't be very precise on rough terrain, but it could be workable in a tabletop or indoor environment. With large enough wheels you may even get good results on concrete and asphalt.

I know that using accelerometers would be really cool, but unfortunately it just isn't practical. For example, your tracks will cause shocks that will probably be outside the range measured by your sensor, and they will make the robot wobble, tilting the sensors. Even a slight tilt will make the accelerometer read a component of gravity as lateral acceleration.
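To put a number on that gravity leakage: a tilt of θ puts g·sin θ onto the horizontal axis. A quick back-of-envelope calculation (the 2° tilt and 10 s window are just illustrative values):

```python
import math

g = 9.81                       # gravitational acceleration, m/s^2
tilt = math.radians(2.0)       # a barely visible 2-degree tilt

# Component of gravity the accelerometer reads as forward/lateral motion:
false_accel = g * math.sin(tilt)        # about a third of a m/s^2

# Left uncorrected for 10 s, the position error is ~0.5 * a * t^2,
# i.e. on the order of 17 metres from a tilt you can barely see.
error_10s = 0.5 * false_accel * 10.0 ** 2
```

This is why practical systems fuse the accelerometer with a gyro (to estimate and remove the tilt) rather than integrating it raw.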

All of these problems can probably be overcome by using high-quality, high-speed sensors and doing the DR math in hardware or "soft hardware", but that will probably be much more expensive than a mechanical solution. This is why computer mice use encoders and image sensors, not accelerometers.