How to implement the Wavefront algorithm

I am thinking of building a robot that can navigate using a map. It is controlled from a PC, with an 8-bit microcontroller doing the low-level tasks while the PC does the image processing. I plan to implement it in a single room, where the robot and its environment are tracked by a camera mounted at a height, or on the ceiling of the room. First the room needs to be mapped, like this: http://www.societyofrobots.com/programming_wavefront.shtml
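As I understand that page, the wavefront algorithm is a breadth-first flood fill outward from the goal: every free cell gets labeled with its step distance to the goal, and the robot then repeatedly moves to the neighbouring cell with the smallest label. A minimal sketch in Python (the grid representation and the `WALL` value are my own assumptions):

```python
from collections import deque

WALL = -1  # assumed convention: -1 marks an obstacle cell

def wavefront(grid, goal):
    """Breadth-first flood fill: label every free cell with its step
    distance to `goal`. grid[r][c] is 0 (free) or WALL; goal is (row, col)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != WALL and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def next_step(dist, pos):
    """From `pos` (which must be a reachable free cell), move to the
    neighbouring cell with the smallest distance label."""
    r, c = pos
    best = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(dist) and 0 <= nc < len(dist[0])
                and dist[nr][nc] is not None
                and dist[nr][nc] < dist[best[0]][best[1]]):
            best = (nr, nc)
    return best
```

Running `wavefront` once per replanning and then taking `next_step` from the robot's current cell until it reaches the goal would give the path.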

To do:

  • Track the robot from a height using a camera, following the wavefront algorithm to locate the robot and the obstacles.


Procedure (just my idea):

The camera will give an image of the robot surrounded by obstacles placed at random. Using some OpenCV technique, draw a grid over the image (see the sketch after this list).

  • Locate the grid cell that contains the robot (by putting some colored marker on top of the robot) and locate the cells containing the obstacles.

  • The cells with obstacles are treated as walls, and the remaining cells are free space for the robot to navigate.

  • The goal the robot should reach is given from the PC (maybe by pointing at the place in the image with a mouse click).

(image: path_planning_1.jpg)
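A rough sketch of the grid overlay and cell classification with OpenCV (the cell size, the fill threshold, and how the obstacle mask is produced are placeholders I would still have to calibrate):

```python
import cv2
import numpy as np

CELL = 40  # assumed cell size in pixels; depends on camera height

def draw_grid(img):
    """Draw grid lines over the camera image."""
    h, w = img.shape[:2]
    for x in range(0, w, CELL):
        cv2.line(img, (x, 0), (x, h), (0, 255, 0), 1)
    for y in range(0, h, CELL):
        cv2.line(img, (0, y), (w, y), (0, 255, 0), 1)
    return img

def occupancy_grid(mask):
    """Mark a cell as a wall if enough of its pixels are set in `mask`
    (a binary image where obstacle pixels are non-zero)."""
    h, w = mask.shape
    grid = np.zeros((h // CELL, w // CELL), dtype=int)
    for r in range(h // CELL):
        for c in range(w // CELL):
            cell = mask[r * CELL:(r + 1) * CELL, c * CELL:(c + 1) * CELL]
            if cv2.countNonZero(cell) > 0.3 * CELL * CELL:  # assumed threshold
                grid[r, c] = -1  # matches the WALL convention above
    return grid
```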

Unknowns:

  • Mapping the room and locating the robot

How do I do that? The robot should know where it is in the map (the image). I don't believe the camera alone is enough to locate the robot, so I thought of adding triangulation: placing two IR beacons in the room and a receiver on the robot.

The doubt I have is how an IR receiver can know which direction the IR signal is coming from (left or right). I think it only knows that it is receiving IR, not the direction. How is the triangulation going to happen if I don't know the angle and direction?
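If I could measure the bearings somehow, the math itself is simple: the robot sits at the intersection of the two bearing lines from the beacons. A sketch of just that step (the bearings here are absolute angles, which is exactly the part I don't know how to measure):

```python
import math

def triangulate(b1, b2, theta1, theta2):
    """Intersect two bearing rays. b1 and b2 are the known (x, y)
    beacon positions; theta1 and theta2 are the bearings from each
    beacon to the robot, in radians from the +x axis. Returns the
    robot position, or None if the rays are parallel."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2D cross product d1 x d2
    if abs(denom) < 1e-9:
        return None  # parallel bearings: no unique intersection
    # solve b1 + t*d1 == b2 + s*d2 for t
    t = ((b2[0] - b1[0]) * d2[1] - (b2[1] - b1[1]) * d2[0]) / denom
    return (b1[0] + t * d1[0], b1[1] + t * d1[1])
```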

  • Coming to the image processing: how can I implement the wavefront algorithm, i.e. capture the live video and draw a grid over it to find the robot and the obstacles? (A capture-loop sketch follows below.)
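What I imagine the capture side looking like, as a minimal sketch; the camera index and the HSV range for the robot's marker are made up and would need tuning:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # assumed: the ceiling camera is device 0

# placeholder HSV range for the colored marker on the robot
MARKER_LO = np.array([100, 150, 50])
MARKER_HI = np.array([130, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, MARKER_LO, MARKER_HI)
    m = cv2.moments(mask)
    if m["m00"] > 0:  # marker found: centroid in pixel coordinates
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 8, (0, 0, 255), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The centroid would map to a grid cell as `(cy // CELL, cx // CELL)`, which is what the wavefront planner above needs.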

I have an HC-05 Bluetooth module, an Arduino, a Bluetooth dongle, a chassis with DC motors and a motor driver, and a DC supply.
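Since the HC-05 shows up as a serial port once paired, I assume the PC side would send motor commands with something like pyserial (the port name and the single-letter command protocol are my assumptions; the Arduino sketch would have to match them):

```python
import serial

# assumed port name; on Windows it would be something like "COM5"
bt = serial.Serial("/dev/rfcomm0", 9600, timeout=1)

def send(cmd):
    """Send a one-character command, e.g. 'F' forward, 'L' left,
    'R' right, 'S' stop (an assumed protocol, not anything standard)."""
    bt.write(cmd.encode())

send("F")  # drive forward
```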

I’ve seen the wavefront done

I’ve seen the wavefront done on an Arduino, somewhere on LMR. I don’t remember the page, but you may be able to search for it. For the orientation, I think an angle-compensated compass would be enough and simpler to use; then either do triangulation to find your position on the map, or do the corrections by video analysis (harder, at least for me). Encoders are also a must-have, or your robot will not know how far it has traveled, unless again you use video analysis. I never completed this part of my robot’s programming, so I will be interested to see your solution. Personally, I would like to have it all embedded in the robot.

Take it in pieces…

I’d start with a good compass. The compass might lag half a second when you rotate the bot, but that shouldn’t be a big deal if you slow your rotation as you get close to your goal heading and continuously adjust heading while driving forward. Basically, the closer you are to your goal heading, the smaller the steering adjustment (see the sketch below). A gyro drifts, so you’d probably need a compass or something to correct for the drift. The compass worked so well for me that I turned the gyro off.
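In other words, proportional control on the heading error. A minimal sketch, assuming headings in degrees and a gain you’d tune on the real bot:

```python
KP = 2.0  # assumed proportional gain; tune on the actual robot

def steering(heading, goal_heading):
    """Return a steering adjustment proportional to the heading error,
    with the error wrapped into [-180, 180) degrees."""
    error = (goal_heading - heading + 180) % 360 - 180
    return KP * error  # sign picks the turn direction
```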

You could use OpenCV with the camera up high to identify the bot and the obstacles if they each had a unique color on top, one for the bot and one for the obstacles.  Or you could look for the floor color and then identify the holes in that image as the obstacles (see the sketch below).  There are probably many other ways to do it.  I’m currently going down the path of having the camera on the bot, since it moves through many rooms, and using visual landmarks in the given room…not easy and not very reliable for me yet.  Having your camera up high should avoid a lot of complexity, and OpenCV will spit out the pixel coordinates of each blob for each color you ask it for.  OpenCV can also tell you how far you’ve traveled, so you could do without the encoders.
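The floor-color approach might look roughly like this; the HSV range and the minimum blob area are placeholders you’d calibrate for your room:

```python
import cv2
import numpy as np

# placeholder HSV range for the floor color; calibrate on your floor
FLOOR_LO = np.array([0, 0, 120])
FLOOR_HI = np.array([180, 60, 255])

def obstacle_blobs(frame):
    """Treat everything that is not floor-colored as an obstacle and
    return the bounding boxes of those blobs."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    not_floor = cv2.bitwise_not(cv2.inRange(hsv, FLOOR_LO, FLOOR_HI))
    not_floor = cv2.morphologyEx(not_floor, cv2.MORPH_OPEN,
                                 np.ones((5, 5), np.uint8))  # drop speckle
    contours, _ = cv2.findContours(not_floor, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
```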

Good luck and happy coding!