Wavefront Algorithm Mapping

Hi, LMRians. I've been reading up on Wavefront Algorithm Navigation and I understand bits and pieces of it, but not everything. I'm trying to learn it so I can use it in my next robot: Project 4L-FRED (Alfred), a butler robot. The robot is supposed to navigate around the house from the dining area to the living room and serve drinks to guests. It will have pre-recorded messages like greetings and questions about what drinks the guest would prefer. The guest would then press a button stating his/her choice, and Alfred would travel back to the kitchen, tell whoever's in the kitchen what drinks were requested, and return carrying the tray of drinks.

Anyway, the procedure for how Alfred will do all that is a bit fuzzy for now, since I need to get navigation nailed down first. What I understand of Wavefront Navigation is that the robot uses a pre-programmed map that marks the impassable areas, like walls, sofas, etc. The robot uses encoders to track its position on the map based on the movement of its motors, and a compass sensor helps it stay aligned so that it does not gradually rotate off course. If I were to implement adaptive mapping, I would need 1 or 2 IR rangefinders on the bot so that it can track changes to the pre-programmed map, like toys or cereal boxes lying around on the floor. Basically, the robot adapts to its changing environment so that it can still reach the target location. But for Project 4L-FRED, I don't think I'll implement adaptive mapping just yet; maybe at a later time. For now, I just want it to scan for obstacles and gradually slow to a halt as it approaches one. Alfred will then say something like: "Excuse me, you are blocking my path. Please allow me to pass," or something along those lines. If the object still does not wish to move after three warnings, Alfred will sound an alarm and call for assistance to move the object out of the way.

Finally, we can get to the questions.

1. Is there anything missing from my understanding of the Wavefront Algorithm Navigation?

2. Could there be better ways of doing this, preferably without beacons or black tape around the house? 

3. Can anyone help me figure out how to write the Arduino code for Wavefront Algorithm Mapping? I have no clue how to start. I can probably learn how to use the compass readings to adjust the robot's angle, but other than that, I'm stumped.

Thanks for any and all help given by you guys. :D

Reference: http://www.societyofrobots.com/programming_wavefront.shtml (the Wavefront Algorithm tutorial I've been reading)

I was going to post a longer explanation but…

here’s another link http://pirate.shu.edu/~wachsmut/Teaching/CSAS-Robotics/Challenges/06_Wavefront/index.html

The idea is that you have a map, a starting point and a destination point.

From the starting point you expand a “wave”. The wave’s “height” increases with the distance from the origin. You expand this wave until you no longer have any empty space or you reach the destination. Once the wave has reached the destination you can generate a path (a series of moves) by going “downhill” on the wave, from the destination back to the starting point.

How to do this efficiently on an Arduino is left as an exercise to the reader :)

Actually I was planning to write something like this for my own robot so you may look at my code when I write it (if ever, somehow I’m always too busy (or lazy)).

Ah, now I'm getting it!

At first, I didn't understand how the "wave" spreads out. So I pulled out a piece of paper, drew a 7x7 matrix, and marked a few of the cells as impassable (walls). Then I marked one of the corners as the robot's start position and another as the target location. Initially I thought the wave spreads around the entire cell, meaning to the diagonal cells as well. It didn't look right, so I referred to the diagrams and found out why I was doing it wrong: the wave only spreads to the left, right, top and bottom of a cell, not to the diagonally adjacent cells. I did a few of these matrices to test it out, having a bit of fun. It was like one of those brain puzzles, lol. :)

Then a thought occurred to me: what if the ATmega328-PU can't handle the size of the actual matrix? I read on the Society of Robots forums that the ATmega328 may not be able to process very large matrices with its 2 KB of RAM. Any ideas on this, or should I switch to another chip? I have an assortment of Atmel chips at home, like the 644P and many others, all of them through-hole chips. Thanks for the link and the help! :D

Thanks for reminding me about this one

We are moving to a new house quite soon, and my new shop will be many, many times bigger than what I have now. I have been getting back to working on Walter again (it will be nice to give him some space).

…And here it is, you pop up with this post. Good timing.

I am mirroring what you are doing here: I am using the Society of Robots tutorial as a rough outline and rewriting the code to work with my brain (my actual brain, not the Propeller). However, I am doing mine in Processing instead of on the robot itself. It might be worth a shot for you to look at; you would certainly have as much memory as you would ever need, not to mention that the ability to actually see the map as it is being drawn would be handy.

 

If you have never worked with Processing before, don't be scared. It is almost identical to Arduino; you would pick it up in a weekend.

I will post stuff as I get stuff done.

This makes me feel really
This makes me feel really retarded, and think I don't know anything!

Guess what? I feel the same!

Guess what? I feel the same! I can theorize all I want but have a hard time writing the code. I guess I didn't put all I have into this yet.