Mapping

In a reply to this post: https://www.robotshop.com/letsmakerobots/node/31641 I described one of the software problems I have encountered on one of my robots. I am re-posting it separately with some added info:

Here is a real case. My robot fits in a 20x20 cm square. All the measurements the IR and US sensors return are in cm. I can generate a list of coordinates for the obstacles detected in a scan. Because of EEPROM space constraints, I can store a map with a cell size similar to my robot's size. Each cell is represented by a byte in a 2D array. We can use some tricks to pack 2 cells into a byte, perhaps even 4 cells if necessary.

To have an accurate map, we can encode different cell information in the value of that byte. For instance, if a cell has a wall on the North side but the cell itself is empty, we can store the value 200 in the byte, 201 for an East wall, 202 for a South wall, etc. A cell that is completely blocked can be 255. Say we have a couch that covers 6 cells: for the cells on the edge that the robot can measure, we can encode the value 250, but for the cells closer to the wall we can write 255. Makes sense? Similarly, we can write 100 for a cell that contains the leg of a table or a chair. The robot might go between the legs, but say the chair occupies 2 cells. Instead of blocking access, the robot can decode the 100 value and approach with caution, measuring the distance between the legs, centering itself and passing with care, then updating its new position on the map. In the case of chairs, it is also possible that a human moved them around and they are actually occupying other cells. The robot can then mark their new position on the map and again proceed with care.

Why go between the chair or table legs? Say we want to play ball fetch; the ball will roll there for sure, so the robot should be able to follow. So, storing the map is easy: a 2D array. Set rules on what to do based on the encoded value. Not so easy, but doable.
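To make the encoding concrete, here is a minimal sketch of how I imagine storing it (the cell codes and map dimensions are just the example values from above, not a fixed convention):

```cpp
#include <stdint.h>

// Map of 20x20 cm cells; dimensions are example values.
const uint8_t MAP_W   = 16;   // cells per row
const uint8_t MAP_H   = 16;   // rows
const uint8_t CELL_CM = 20;   // cell size in cm

// Example cell codes from the description above.
enum CellCode : uint8_t {
  CELL_FREE    = 0,
  CELL_LEG     = 100,  // table/chair leg: pass with caution
  CELL_WALL_N  = 200,  // empty cell with a wall on the North side
  CELL_WALL_E  = 201,
  CELL_WALL_S  = 202,
  CELL_WALL_W  = 203,
  CELL_EDGE    = 250,  // measurable edge of a large object (couch)
  CELL_BLOCKED = 255   // completely blocked
};

uint8_t grid[MAP_H][MAP_W];  // one byte per cell; lives in EEPROM on the robot

// Decide how to treat a cell when planning a move.
bool isTraversable(uint8_t code) {
  return code != CELL_EDGE && code != CELL_BLOCKED;
}

bool needsCaution(uint8_t code) {
  return code == CELL_LEG;  // slow down, measure the gap, center, pass
}
```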

Now here is my problem. As I said in the beginning, the sensors return distances in cm, but my map cells are 20x20 cm. How do I compare the sensor readings with the map? How do I make decisions on how to update the map or correct the robot's position on the map? I use IR and US sensors on a panning head, plus a compass and encoders to measure the traveled distance. The robot travels from the center of one cell to the center of the next cell and makes 90 degree turns, except for the "proceed with care" situations. Say we eliminate that special case for now to keep things simple. How do I use the sensor readings to verify the position on the map? This is where I need help.
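To be clear, projecting a single reading onto the grid is just geometry. Here is a rough sketch, assuming the robot's pose is already known in cm - which is exactly the part I can't trust:

```cpp
#include <stdint.h>
#include <math.h>

const uint8_t CELL_CM = 20;

// Robot pose in world coordinates: position in cm, heading in radians.
struct Pose { float x_cm, y_cm, theta; };

// Project a range reading (cm) taken at panAngle (radians, relative to the
// heading) into map cell indices. Returns false if the hit is off the map.
bool readingToCell(const Pose& p, float panAngle, float range_cm,
                   int mapW, int mapH, int& col, int& row) {
  float worldAngle = p.theta + panAngle;
  float hx = p.x_cm + range_cm * cosf(worldAngle);  // obstacle x in cm
  float hy = p.y_cm + range_cm * sinf(worldAngle);  // obstacle y in cm
  col = (int)floorf(hx / CELL_CM);
  row = (int)floorf(hy / CELL_CM);
  return col >= 0 && col < mapW && row >= 0 && row < mapH;
}
```

The real question is what to do when the cell this lands in disagrees with the map.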

Added stuff:

I do not want to use a hardware fix for this problem. I could do that easily, as I am a hardware guy: beacons, RFID or barcodes all make localization easier. But then the robot is constrained to that environment. For now I want the robot to work with a given map, but the next step will be to build the map as it goes. I know it's easy to just put in a BeagleBoard or something similar and do PC programming, but I still think it may be possible to have a microcontroller do this, even if mapping is the only thing it does. I have no problem using a few microcontrollers in my robot; actually, that's already the case. So assume I am using one Arduino just for the mapping process (if a 328 is too small memory-wise, I'll use a Mega). The actual map is stored in the EEPROM; if that is not big enough I can use a uSD shield.

Why do I think the robot should have a map of the environment? Because fetching a ball is not the end of the story. That is just one task the robot can do. Other tasks will require the robot to pick up stuff from the floor and place it in the correct bins. Think of it as a mini butler robot, microcontroller based. Many uses can be developed once we have a working system. Also, the map will be accompanied by a list of places where the robot will need to go, for instance the charging station, the bins for toys and socks, or the kid's chair to bring back his toys after he throws them on the floor.

Again, what I need is an algorithm that allows me to compare external sensor readings with the stored map. Whether the map is pre-built or built by the robot is not important at this moment.

Perhaps I'm looking at this the wrong way; if there are better approaches, please share your ideas.

How about “reversing” the problem?

How about “reversing” the problem? Let your encoders calculate the position on the map, but use obstacle avoidance.

With an X/Y map, the encoders on your wheels should be able to tell you where you are quite accurately, so to get to point (4,5) from (0,0) you have to go 4 cells forward and then 5 cells to the right.
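As a rough sketch of that bookkeeping (assuming straight cell-to-cell moves, 90-degree turns, row 0 on the North edge, and a made-up ticks-per-cell calibration):

```cpp
#include <stdint.h>

// Made-up calibration: encoder ticks per 20 cm cell.
const long TICKS_PER_CELL = 480;

// Heading as one of four compass directions (the robot only turns 90 deg).
enum Heading { NORTH, EAST, SOUTH, WEST };

struct CellPose {
  int col, row;
  Heading heading;
};

// Advance the map position after the encoders report one cell of travel.
void stepForward(CellPose& p) {
  switch (p.heading) {
    case NORTH: p.row--; break;
    case EAST:  p.col++; break;
    case SOUTH: p.row++; break;
    case WEST:  p.col--; break;
  }
}

// Usage idea: accumulate the averaged wheel ticks while driving, and step
// one cell on the map every time they cover TICKS_PER_CELL:
//   long ticks = (leftTicks + rightTicks) / 2;
//   if (ticks >= TICKS_PER_CELL) { stepForward(pose); ticks -= TICKS_PER_CELL; }
```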

David Anderson explains it way better than I do, so I’ll leave it to him. It’s 2:28:02 long, but it’s totally worth it.

http://www.youtube.com/watch?feature=player_embedded&v=8CXReb7f0Eo

 

Here’s how I would do it

First of all, forget about encoding the “walls” around a cell. It doesn’t help. Better use those 8 bits to increase your map resolution.
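For example, at 2 bits per cell you pack 4 cells into each byte and quadruple the resolution in the same space. A sketch, with four made-up states:

```cpp
#include <stdint.h>

// 2 bits per cell: 4 cells packed into each byte.
// Example states: 0 = free, 1 = permeable (chair legs), 2 = edge, 3 = blocked.
const int MAP_W = 32;            // cells per row (multiple of 4)
const int MAP_H = 32;            // rows
uint8_t packed[MAP_H * MAP_W / 4];  // a quarter of the byte-per-cell size

uint8_t getCell(int col, int row) {
  int idx   = row * (MAP_W / 4) + col / 4;
  int shift = (col % 4) * 2;
  return (packed[idx] >> shift) & 0x3;
}

void setCell(int col, int row, uint8_t value) {
  int idx   = row * (MAP_W / 4) + col / 4;
  int shift = (col % 4) * 2;
  packed[idx] = (packed[idx] & ~(0x3 << shift)) | ((value & 0x3) << shift);
}
```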

I would actually try to use a quadtree representation of the map. Quadtrees can be used to quickly detect collisions and they may be able to store a more detailed map in less space (it really depends on the map, though).
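Here is a minimal sketch of the idea (pointer-based, so more of an illustration than something you’d run on a 328):

```cpp
#include <stdint.h>
#include <stddef.h>

// Minimal region quadtree: a uniform area collapses into a single leaf,
// so large empty (or large blocked) regions cost almost nothing to store.
struct QuadNode {
  bool leaf;
  uint8_t value;        // cell code, valid when leaf is true
  QuadNode* child[4];   // NW, NE, SW, SE; valid when leaf is false
  QuadNode(uint8_t v) : leaf(true), value(v) {
    for (int i = 0; i < 4; i++) child[i] = NULL;
  }
};

// Look up the cell value at (x, y); `size` is the side length of the
// square this node covers, a power of two.
uint8_t query(const QuadNode* n, int x, int y, int size) {
  while (!n->leaf) {
    size /= 2;
    int q = (y >= size ? 2 : 0) + (x >= size ? 1 : 0);
    if (x >= size) x -= size;
    if (y >= size) y -= size;
    n = n->child[q];
  }
  return n->value;
}

// Write a cell value, splitting uniform leaves on the way down.
void set(QuadNode* n, int x, int y, int size, uint8_t v) {
  if (size == 1) { n->value = v; return; }
  if (n->leaf) {                          // split this uniform region
    n->leaf = false;
    for (int i = 0; i < 4; i++) n->child[i] = new QuadNode(n->value);
  }
  size /= 2;
  int q = (y >= size ? 2 : 0) + (x >= size ? 1 : 0);
  set(n->child[q], x % size, y % size, size, v);
  // (A full version would merge the four children back into one leaf
  //  whenever they end up holding the same value.)
}
```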

For chairs and other tricky but traversable obstacles I would use a different encoding than for solid objects, so you would have a range of “permeability”. You can then use this to assign various costs when planning a route; solid objects have infinite cost. This will make the robot go around the chair if it can, but if the ball is under a chair it will still “reluctantly” go get it.
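A sketch of what that cost lookup could be, with made-up codes and costs:

```cpp
#include <stdint.h>
#include <limits.h>

// Higher cost = the planner avoids it when it can. Values are examples.
const unsigned int COST_INFINITE = UINT_MAX;

unsigned int cellCost(uint8_t code) {
  switch (code) {
    case 0:   return 1;              // free cell: normal cost
    case 100: return 20;             // chair/table legs: passable but costly
    case 250:                        // edge of a large object
    case 255: return COST_INFINITE;  // solid: never plan through it
    default:  return 1;              // wall-coded cells are still empty inside
  }
}
```

Plugged into Dijkstra or A*, this makes the robot go around the chair when a clear route exists, but still plan under it when that’s the only way to reach the ball.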

 

In order to determine your location based on distance readings you should use a “filter”. A particle filter is probably the best for your problem.

In a particle filter you basically simulate what your robot would see in a certain position (x, y, angle). Each simulation is a “particle”. Each particle has a probability that it represents the correct position of the robot.

When you start, you create a fixed number of particles with random positions and equal probabilities. At all times, the sum of all particle probabilities should equal 100%. If you have 1000 particles, each particle represents the real position with 0.1% probability - this means the robot has no clue where it is.
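A minimal sketch of that starting state (the particle count and map size are assumptions):

```cpp
#include <stdlib.h>

// One particle = one hypothesis of the robot's pose, plus its probability.
struct Particle {
  float x, y;     // position in cm
  float theta;    // heading in radians
  float weight;   // probability that this is the real pose
};

const int N = 1000;
Particle particles[N];

float frand(float lo, float hi) {  // uniform random in [lo, hi)
  return lo + (hi - lo) * (float)rand() / ((float)RAND_MAX + 1.0f);
}

// Scatter particles uniformly over the map with equal weights.
void initParticles(float mapW_cm, float mapH_cm) {
  for (int i = 0; i < N; i++) {
    particles[i].x      = frand(0, mapW_cm);
    particles[i].y      = frand(0, mapH_cm);
    particles[i].theta  = frand(0, 6.2832f);
    particles[i].weight = 1.0f / N;   // 0.1% each: "no clue where I am"
  }
}
```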

To improve the robot’s guess about where it is, you take a measurement from the sensors and go through the list of particles.

For each particle you simulate what the measurements would be, given the particle’s position and the map, then you compute the difference between the real measurements and the simulated measurements. If the difference is large, it is less likely that this particle is the correct position of the robot, so you decrease the particle’s probability. If the error is small, you increase the probability.
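A sketch of that weighting step, assuming a grid map like the one described above and a crude ray-cast for “what would the sensor see from here” (the noise value sigma is an assumption to tune):

```cpp
#include <stdint.h>
#include <math.h>

const float CELL_CM = 20.0f;
const int MAP_W = 32, MAP_H = 32;
uint8_t grid[MAP_H][MAP_W];           // 255 = blocked, as above

struct Particle { float x, y, theta, weight; };

// Crude ray-cast: march from (x, y) along `angle` in small steps until a
// blocked cell (or the map edge) is hit; return the distance in cm.
float simulateRange(float x, float y, float angle, float maxRange_cm) {
  for (float d = 0; d < maxRange_cm; d += CELL_CM / 4) {
    int col = (int)((x + d * cosf(angle)) / CELL_CM);
    int row = (int)((y + d * sinf(angle)) / CELL_CM);
    if (col < 0 || col >= MAP_W || row < 0 || row >= MAP_H) return d;
    if (grid[row][col] == 255) return d;
  }
  return maxRange_cm;
}

// Re-weight one particle: a small |real - simulated| error keeps the
// weight high; a large error pushes it towards zero.
void weightParticle(Particle& p, float panAngle, float realRange_cm) {
  float sim   = simulateRange(p.x, p.y, p.theta + panAngle, 150.0f);
  float err   = realRange_cm - sim;
  float sigma = 10.0f;                // assumed sensor noise, in cm
  p.weight *= expf(-(err * err) / (2 * sigma * sigma));
}
// (After weighting all particles, divide every weight by the total so
//  they sum to 1 again.)
```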

Now for the important step. You create a new set of particles based on the probability of the old ones, so that if an old particle has a high probability, it is more likely that new particles will be created near it. It is important that the new particles are not exact copies of the old ones; there must be some randomness added when picking the position of a new particle based on an old particle. If there is too much randomness, the filter will never converge on the real position. If there is too little, it will take a long time until the position is found.
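A sketch of that resampling step, using simple roulette-wheel selection plus some jitter (the jitter amounts are assumptions to tune):

```cpp
#include <stdlib.h>

struct Particle { float x, y, theta, weight; };
const int N = 1000;

float frand(float lo, float hi) {  // uniform random in [lo, hi)
  return lo + (hi - lo) * (float)rand() / ((float)RAND_MAX + 1.0f);
}

// Pick an old particle with probability proportional to its weight
// (weights are assumed normalized to sum to 1).
const Particle& pickWeighted(const Particle* old) {
  float r = frand(0, 1), acc = 0;
  for (int i = 0; i < N; i++) {
    acc += old[i].weight;
    if (r <= acc) return old[i];
  }
  return old[N - 1];
}

// Build the next generation: copies of likely particles, plus jitter so
// the cloud can still shrink onto (or escape to) the true pose.
void resample(const Particle* old, Particle* fresh) {
  for (int i = 0; i < N; i++) {
    const Particle& src = pickWeighted(old);
    fresh[i].x      = src.x + frand(-2.0f, 2.0f);   // cm of jitter: tune me
    fresh[i].y      = src.y + frand(-2.0f, 2.0f);
    fresh[i].theta  = src.theta + frand(-0.05f, 0.05f);
    fresh[i].weight = 1.0f / N;
  }
}
```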

If your robot is moving, your estimate of the movement should also be applied to the new particles.
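A sketch of that motion update, with assumed noise figures for the encoders:

```cpp
#include <stdlib.h>
#include <math.h>

struct Particle { float x, y, theta, weight; };
const int N = 1000;

float frand(float lo, float hi) {  // uniform random in [lo, hi)
  return lo + (hi - lo) * (float)rand() / ((float)RAND_MAX + 1.0f);
}

// Apply the odometry estimate (distance in cm, turn in radians) to every
// particle, with noise reflecting how much the encoders can be trusted.
void moveParticles(Particle* p, float dist_cm, float turn_rad) {
  for (int i = 0; i < N; i++) {
    float t = turn_rad + frand(-0.03f, 0.03f);   // assumed turn noise
    float d = dist_cm * frand(0.95f, 1.05f);     // assumed 5% wheel slip
    p[i].theta += t;
    p[i].x += d * cosf(p[i].theta);
    p[i].y += d * sinf(p[i].theta);
  }
}
```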

An important thing to point out is that particle filters won’t really work unless the sensors or the robot are moving.

Let’s say you wake up with a map in your hand, facing the corner of an empty, artificially illuminated room (painted white). You may know you are in a corner, but you won’t be able to tell which one; you could be in any of 4 locations unless you look around.

The particle filter will actually show this very well, because the particles will eventually converge around 4 points instead of one.

You can now take another measurement and repeat the process. After a few iterations the particles with the highest probability will be a very good approximation of your robot’s position (if they all converge into one point - in a square box they will never converge unless there is some other discriminating measurement like a beacon, a colored wall or a compass).

Note that in order for this to work you need a fairly large number of particles, on the order of hundreds. I don’t think it will work very well on an Arduino Uno with its 2K of RAM, but I would love to be proven wrong.

If you are wondering…

why am I not implementing this myself if I’m so “smart”? - well I’ve been asking myself the same thing :slight_smile:

Maybe it’s because I’m really lazy and don’t have any distance sensors in my parts bin. I should probably get a Wiidar from CtC.

Wow, a lot of reading to do.

Wow, a lot of reading to do. Thanks for all your ideas and links to more info. I’ll try to understand it all; if I get stuck I’ll ask for more help. Not sure yet which way to go, but hopefully after the reading I’ll have a plan.

Thanks again guys!

Maybe you should explore this

http://www.udacity.com/overview/Course/cs373