Autonomous Robot Navigation Using Vanishing Points

Shown below is a video of a robot that uses the perceived vanishing point in its camera image to navigate through a corridor. The robot, based on an iRobot Create, uses a standard webcam and video processing to locate the vanishing point of what it sees, and navigates towards that point. Such navigation works very well in office-like environments with straight walls, windows, and ceilings. The robot also uses visual cues, like orange traffic cones, to recognize specific locations.
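For readers curious how the front end of such a system might look, here is a minimal sketch (not the project's actual code) of grabbing a webcam frame and extracting the straight line segments that later vote for a vanishing point. It assumes OpenCV's Python bindings (cv2) and NumPy, and the edge and Hough thresholds are illustrative guesses.

```python
# Sketch: per-frame line-segment extraction for vanishing-point navigation.
# Assumes OpenCV (cv2) and NumPy; thresholds are illustrative, not tuned values.
import cv2
import numpy as np

def extract_line_segments(frame):
    """Return an (N, 4) array of line segments (x1, y1, x2, y2) found in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)           # suppress sensor noise
    edges = cv2.Canny(gray, 50, 150)                   # edge map of wall/ceiling lines
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=60, minLineLength=40, maxLineGap=10)
    return np.empty((0, 4)) if segments is None else segments.reshape(-1, 4)

cap = cv2.VideoCapture(0)                              # standard USB webcam
ok, frame = cap.read()
if ok:
    lines = extract_line_segments(frame)
    print(f"found {len(lines)} candidate segments")
cap.release()
```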
http://www.youtube.com/watch?v=nb0VpSYtJ_Y
I was responsible for the robot's vision-based navigation within the hallways. I used the vanishing points formed by the parallel lines present indoors to compute the robot's heading, which was then fed into a controller that steered the robot. The computation was made robust to changes in lighting conditions, false detections, and occlusions by a layered filtering approach that included RANSAC and least-squares filtering, among others.
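The post does not include the code itself, but a hedged sketch of that layered filtering idea might look like the following: RANSAC picks the vanishing-point hypothesis that most line segments agree on, a least-squares step refines it over the inliers, and the horizontal offset of the refined point drives a simple proportional steering command. The distance tolerance and controller gain are illustrative assumptions, not values from the project report.

```python
# Sketch: RANSAC vanishing-point estimate + least-squares refinement + P-controller.
# Assumes NumPy and the (x1, y1, x2, y2) segments from the extraction step above.
import numpy as np

def to_homogeneous_lines(segments):
    """Convert (x1, y1, x2, y2) segments to normalized homogeneous line coefficients."""
    p1 = np.column_stack([segments[:, 0], segments[:, 1], np.ones(len(segments))])
    p2 = np.column_stack([segments[:, 2], segments[:, 3], np.ones(len(segments))])
    lines = np.cross(p1, p2)                           # line through the two endpoints
    return lines / np.linalg.norm(lines[:, :2], axis=1, keepdims=True)

def ransac_vanishing_point(segments, iters=200, tol=5.0, rng=np.random.default_rng(0)):
    """Estimate the vanishing point as the image location most line segments pass near."""
    lines = to_homogeneous_lines(segments)
    best_inliers = None
    for _ in range(iters):
        i, j = rng.choice(len(lines), size=2, replace=False)
        vp = np.cross(lines[i], lines[j])              # intersection of the two lines
        if abs(vp[2]) < 1e-9:
            continue                                   # (near-)parallel pair, skip
        vp = vp / vp[2]
        dists = np.abs(lines @ vp)                     # perpendicular distance of each line to vp
        inliers = dists < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers is None:
        return None                                    # no usable hypothesis this frame
    # Least-squares refinement over the inlier lines: minimise |A [x y]^T + b|.
    A = lines[best_inliers][:, :2]
    b = lines[best_inliers][:, 2]
    x, y = np.linalg.lstsq(A, -b, rcond=None)[0]
    return x, y

def steering_command(vp_x, image_width, gain=0.005):
    """Proportional controller: steer toward the vanishing point's image column."""
    error = vp_x - image_width / 2.0                   # pixels off the image centre
    return -gain * error                               # angular-velocity command
```

In practice a frame with too few inliers (or a failed estimate) would be discarded and the previous heading held, which is one simple way the filtering layers can reject false detections.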
This approach has some very interesting implications for simple navigation through common environments (houses, offices, shopping malls, etc.). Has anyone tried this using RoboRealm? You can read the project report (PDF) here. [Via Hackzine]