I am thinking of building a robot that can navigate using a map. It is controlled from a PC, with an 8-bit controller doing the low-level tasks while the PC does the image processing. I plan to implement it in a single room, where the robot and its environment are tracked by a camera mounted at a height, e.g. on the ceiling. First the room needs to be mapped, like this: http://www.societyofrobots.com/programming_wavefront.shtml
To do:
- Track the robot from a height using the camera, following the wavefront algorithm to locate the robot and obstacles.
Procedure (just my idea):
The camera will give an image of the robot surrounded by obstacles placed at random. Using some OpenCV technique, draw a grid over the image.
Locate the grid cell that contains the robot (by putting some colored marker on top of the robot) and the cells that contain obstacles (a rough sketch of this step follows below).
The cells with obstacles are then treated as walls, and the remaining cells are free space for the robot to navigate.
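For the grid/detection step, here is a minimal OpenCV sketch of what I have in mind. The HSV bounds for the robot marker, the dark-obstacle threshold, and the 20-pixel cell size are all placeholder assumptions that would have to be tuned for the actual room:

```python
import cv2

CELL = 20  # grid cell size in pixels (assumption, tune to the camera height)

def build_grid(frame):
    """Return an occupancy grid (0 = free, 1 = wall) and the robot's cell."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red-ish marker on top of the robot; these HSV bounds are assumptions.
    robot_mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    # Obstacles assumed dark against a lighter floor; also an assumption.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, obstacle_mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)

    rows, cols = frame.shape[0] // CELL, frame.shape[1] // CELL
    grid = [[0] * cols for _ in range(rows)]
    robot_cell = None
    for r in range(rows):
        for c in range(cols):
            rob = robot_mask[r * CELL:(r + 1) * CELL, c * CELL:(c + 1) * CELL]
            obs = obstacle_mask[r * CELL:(r + 1) * CELL, c * CELL:(c + 1) * CELL]
            if cv2.countNonZero(rob) > CELL * CELL // 4:
                robot_cell = (r, c)   # marker fills over a quarter of the cell
            elif cv2.countNonZero(obs) > CELL * CELL // 4:
                grid[r][c] = 1        # mostly dark cell, treat as wall
    return grid, robot_cell
```

Drawing the grid lines themselves would just be `cv2.line` calls every `CELL` pixels, purely for visualization; the planner works on the `grid` list.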
The goal the robot should reach is given from the PC (maybe by clicking the target location in the image with the mouse, as sketched below).
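The mouse-click goal could be done with OpenCV's mouse callback; a sketch, reusing the same cell size as above:

```python
import cv2

CELL = 20        # must match the grid cell size used for detection
goal_cell = None

def on_click(event, x, y, flags, param):
    """Convert a left click in the camera window into a (row, col) grid cell."""
    global goal_cell
    if event == cv2.EVENT_LBUTTONDOWN:
        goal_cell = (y // CELL, x // CELL)

cv2.namedWindow("room")
cv2.setMouseCallback("room", on_click)
# The main loop then keeps showing frames with cv2.imshow("room", frame)
# and cv2.waitKey(1) so clicks get processed.
```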
Unknowns:
- Mapping the room and locating the robot
How do I do that? The robot should know where it is on the map, i.e. in the image. I don't believe the camera alone is enough to locate the robot, so I thought of adding triangulation, e.g. placing two IR emitters in the room and a receiver on the robot.
My doubt here is how an IR receiver can know from which direction it is receiving the IR signal (from the left or the right). I think it only knows that it is receiving IR, not the direction. How is the triangulation going to happen if I don't know the angle and direction?
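For reference, the math only works once bearings are available: if the receiver could somehow measure the world-frame bearing of each beacon's ray (e.g. by sweeping a narrow-aperture receiver on a servo and noting the angle of the strongest signal), the position would follow by intersecting the two rays. A sketch of that calculation, with hypothetical beacon positions and angles:

```python
import numpy as np

def triangulate(b1, theta1, b2, theta2):
    """Intersect two bearing rays cast from known beacon positions.

    b1, b2: (x, y) beacon positions; theta1, theta2: world-frame bearings
    (radians) of the ray from each beacon toward the robot.
    Fails (singular matrix) if the rays are parallel.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve b1 + t1*d1 == b2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, np.asarray(b2, float) - np.asarray(b1, float))
    return np.asarray(b1, float) + t[0] * d1

print(triangulate((0, 0), np.pi / 4, (4, 0), 3 * np.pi / 4))  # -> [2. 2.]
```

A bare IR receiver gives only presence/strength, which is exactly the problem raised above; this sketch presupposes the angles are measurable somehow.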
- Coming to the image processing: how can I implement the wavefront algorithm (that is, capture the live video and draw a grid over it to find the robot and the obstacles)?
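Once the occupancy grid, robot cell, and goal cell exist, the wavefront itself is a breadth-first flood fill from the goal, as in the Society of Robots article (0 = free, 1 = wall, 2 = goal, higher numbers radiate outward). A minimal sketch:

```python
from collections import deque

def wavefront(grid, goal):
    """Propagate wave values out from the goal over a 2D occupancy grid.

    grid: list of lists with 0 = free and 1 = wall; goal: (row, col).
    Returns a copy where the goal holds 2 and every reachable free cell
    holds its distance from the goal plus 2; walls stay 1.
    """
    rows, cols = len(grid), len(grid[0])
    wave = [row[:] for row in grid]
    wave[goal[0]][goal[1]] = 2
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and wave[nr][nc] == 0:
                wave[nr][nc] = wave[r][c] + 1
                queue.append((nr, nc))
    return wave
```

The path is then read off by stepping from the robot's cell to any 4-connected neighbor with a smaller value until the goal (value 2) is reached, and each step becomes a movement command for the Arduino.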
I have an HC-05 Bluetooth module, an Arduino, a Bluetooth dongle, a chassis with DC motors and a motor driver, and a DC supply.
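On the PC side, the HC-05 appears as an ordinary serial port once it is paired through the dongle, so the planner's steps could be pushed out with pyserial. A sketch; the port name and the single-character command protocol are my own assumptions, and the Arduino sketch would have to parse them and drive the motors:

```python
import serial  # pyserial

# Port name is an assumption: often /dev/rfcomm0 on Linux, COMx on Windows.
# 9600 baud is the HC-05 factory default.
bt = serial.Serial("/dev/rfcomm0", 9600, timeout=1)

def send_step(direction):
    """Send one move command: 'F' (forward), 'B', 'L', or 'R'."""
    bt.write(direction.encode("ascii"))

send_step("F")  # e.g. advance one grid cell
```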