Obstacle Avoidance using the Kinect

Posted on 12/07/2011 by rasoul
Modified on: 13/09/2018
Project
Introduction


This is my master's thesis work, which I thought might be interesting for people here. The thesis was divided into three sections: theory development, simulation and implementation. The first two sections are quite theoretical, so I prefer to talk about the last part. The implementation was performed on the robot Dora, one of the CogX demonstrator systems. She is a mobile robot built on a Pioneer P3-DX platform, equipped with a fixed laser range scanner, a pair of stereo cameras and a Kinect sensor installed on top of a pan/tilt unit.

The software architecture on the robot Dora used the laser range data alone to build a map and to handle obstacle avoidance and path planning. The problem was that the laser scanner cannot see any obstacle lower or higher than the height at which the scanner is mounted. The idea was to extend the existing obstacle avoidance algorithm so that it could fuse in the additional data captured by the Kinect sensor and avoid colliding with obstacles of any type in an unstructured environment.
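One simple way to fuse the two sensors is to project Kinect points (expressed in the robot frame) onto the laser's scan plane and, per bearing bin, keep the smaller of the laser range and the projected Kinect range. The sketch below is only an illustration of that idea, not Dora's actual code; the frame convention (x forward, y left, z up, metres), the height band and all names are assumptions.

```cpp
#include <cmath>
#include <vector>

// A 3D point from the Kinect in the robot frame (illustrative convention:
// x forward, y left, z up, all in metres).
struct Point3 { double x, y, z; };

// Fuse Kinect points into a copy of the laser scan: every point whose height
// lies within the band the robot body sweeps is projected to (range, bearing)
// on the scan plane, and the matching bearing bin is tightened if the Kinect
// sees something closer than the laser does.
std::vector<double> fuseScan(const std::vector<double>& laserRanges,
                             double angleMin, double angleIncrement,
                             const std::vector<Point3>& kinectPoints,
                             double minHeight, double maxHeight)
{
    std::vector<double> fused = laserRanges;
    for (const Point3& p : kinectPoints) {
        if (p.z < minHeight || p.z > maxHeight)
            continue;  // above or below the robot body; cannot collide
        const double range   = std::hypot(p.x, p.y);
        const double bearing = std::atan2(p.y, p.x);
        const int bin = static_cast<int>(
            std::round((bearing - angleMin) / angleIncrement));
        if (bin < 0 || bin >= static_cast<int>(fused.size()))
            continue;  // outside the laser's field of view
        if (range < fused[bin])
            fused[bin] = range;  // Kinect sees an obstacle the laser misses
    }
    return fused;
}
```

With a table top above the laser's scan plane, the table's points fall inside the height band and shorten the corresponding ranges, so the existing laser-based avoidance logic treats the table as an obstacle without any other change.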

The first video shows the robot in action: it was sent 2.5 meters straight forward, towards a table that the laser range scanner couldn't see.

The second video shows the same scenario, in which the environment modeling data from the Kinect sensor was fused with the laser range scanner data to build a more realistic model of the environment.

This work revealed that although the Kinect is a low-cost 3D vision device, its capability to obtain a 3D perception of the environment is invaluable, especially after performing an accurate calibration. Nonetheless, it suffers from the fact that no depth data is returned for any object closer than about 50 cm to the device, and the accuracy of the sensor decreases considerably for points further than 3.5 meters.
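These limits suggest gating the depth stream before using it, keeping only readings inside the sensor's usable window. The helper below is a minimal sketch of such a filter; the 0.5 m and 3.5 m bounds come from the observations above, and the function name and the use of zero for a missing return are assumptions.

```cpp
#include <vector>

// Keep only depth readings inside the Kinect's usable window. Readings
// below the near limit (no depth is returned closer than ~0.5 m, and a
// missing return is assumed here to appear as 0) and beyond the far limit
// (accuracy degrades past ~3.5 m) are dropped.
std::vector<float> filterDepths(const std::vector<float>& depthsMetres,
                                float nearLimit = 0.5f, float farLimit = 3.5f)
{
    std::vector<float> valid;
    valid.reserve(depthsMetres.size());
    for (float d : depthsMetres)
        if (d >= nearLimit && d <= farLimit)
            valid.push_back(d);
    return valid;
}
```

Note that dropping the near readings is exactly the dangerous case for obstacle avoidance: an object inside 50 cm simply vanishes from the data, so the avoidance layer has to rely on the laser or on remembered obstacles at close range.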

The experience in this work also revealed that the projected IR pattern is absorbed by very bright or very dark surfaces and thus cannot be seen by the IR camera. These limitations restrict the applications of the Kinect sensor to indoor environments where the level of IR absorption is not very high and where objects closer than 50 cm do not matter. The latter shortcoming has a great impact on obstacle avoidance.

Explores the environment to find objects and classifies rooms.

  • Actuators / output devices: Pioneer P3-DX, Pan-tilt unit
  • CPU: Intel(R) Pentium(R) 4 CPU 3.20GHz
  • Operating system: Linux (Ubuntu)
  • Programming language: C++
  • Sensors / input devices: The Kinect sensor, Laser range scanner
  • Target environment: indoors