This is my master's thesis work, which I thought might be interesting for people here. The thesis was divided into three parts: theory development, simulation, and implementation. The first two parts are quite theoretical, so I would rather talk about the last one. The implementation was performed on the robot Dora, one of the CogX demonstrator systems. She is a mobile robot built on a Pioneer P3-DX platform, equipped with a fixed laser range scanner, a pair of stereo cameras, and a Kinect sensor mounted on top of a pan/tilt unit.
The software architecture on Dora used the laser range data alone to build a map and to handle obstacle avoidance and path planning. The problem was that the laser scanner cannot see any obstacle lying below or above the height at which it is mounted. The idea was to extend the existing obstacle avoidance algorithm so that it could fuse in the additional data captured by the Kinect sensor, allowing the robot to avoid collisions with obstacles of any shape in an unstructured environment.
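For the curious, the fusion boils down to projecting the Kinect's 3D points onto the laser's scan plane and keeping, per beam, the closest obstacle either sensor reports. Here is a minimal C++ sketch of that idea; the function and parameter names are illustrative, not Dora's actual interfaces:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point3D { float x, y, z; };  // metres, robot frame: x forward, y left, z up

// Fuse a Kinect point cloud into a 2D laser scan: project every 3D point onto
// the ground plane and keep, per beam, the closest obstacle seen by either
// sensor. All thresholds are illustrative defaults.
std::vector<float> fuseScan(const std::vector<float>& laserRanges,
                            float angleMin, float angleIncrement,
                            const std::vector<Point3D>& cloud,
                            float minZ = 0.05f, float maxZ = 1.2f)
{
    std::vector<float> fused(laserRanges);
    for (const Point3D& p : cloud) {
        if (p.z < minZ || p.z > maxZ) continue;      // skip floor and overhead points
        float range = std::hypot(p.x, p.y);
        if (range < 0.5f || range > 3.5f) continue;  // Kinect's usable depth window
        float bearing = std::atan2(p.y, p.x);
        int beam = static_cast<int>(std::round((bearing - angleMin) / angleIncrement));
        if (beam < 0 || beam >= static_cast<int>(fused.size())) continue;
        fused[beam] = std::min(fused[beam], range);  // closest obstacle wins
    }
    return fused;
}
```

With this kind of scheme, a table top that passes over the laser's scan plane still produces Kinect points that collapse into short ranges on the relevant beams, so the existing 2D planner sees it as an obstacle without any other changes.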
The first video shows the robot in action: it was commanded to drive 2.5 meters straight ahead. In front of the robot there was a table that the laser range scanner could not see.
The second video shows the same scenario, but with the environment data from the Kinect sensor fused with the laser range data to build a more realistic model of the environment.
This work revealed that although the Kinect is a low-cost 3D vision device, its ability to obtain a 3D perception of the environment is invaluable, especially after an accurate calibration. Nonetheless, it suffers from the fact that no depth data is returned for any object closer than about 50 cm to the device, and the accuracy of the sensor decreases considerably for points farther than about 3.5 meters.
The experience in this work also revealed that the projection pattern is absorbed by very bright or very dark surfaces and thus cannot be seen by the IR camera. These limitations restrict the Kinect to indoor environments where the level of IR absorption is not too high and where objects closer than 50 cm are not of concern. This latter shortcoming has a great impact on obstacle avoidance.
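In practice these limits mean every raw depth sample has to be gated before it is trusted. A small helper of my own (not thesis code; the thresholds simply mirror the limits above, and OpenNI reports 0 for pixels where the pattern was not seen):

```cpp
#include <cstdint>

// Convert a raw Kinect depth sample (millimetres; 0 means "no reading", which
// the driver returns for occluded, absorbing, or too-close surfaces) into a
// validated range in metres. Returns false if the sample should be discarded.
bool validDepth(uint16_t rawMm, float& metres)
{
    if (rawMm == 0) return false;             // projector pattern not seen
    metres = rawMm / 1000.0f;
    return metres >= 0.5f && metres <= 3.5f;  // usable window for obstacle avoidance
}
```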
Explores the environment to find objects and classifies rooms.
Actuators / output devices: Pioneer P3-DX, Pan-tilt unit
CPU: Intel(R) Pentium(R) 4 CPU 3.20GHz
Operating system: Linux (Ubuntu)
Programming language: C++
Sensors / input devices: The Kinect sensor, Laser range scanner
Cool stuff! The Kinect has been in the news a lot lately, with the controversy over people who have been hacking it for a while and Microsoft releasing an SDK. Seems like a good move on Microsoft’s part to me. They do tend to catch on eventually.
Hopefully we’ll see more and more high-quality sensors available to the hobby robotics crowd.
How did you find working with the Kinect? Did you use the released SDK or was your work before that came out?
"This work revealed that although Kinect is a low-cost 3D vision device but its capability to obtain a 3D perception of the environment is invaluable specially with performing an accurate calibration. Nonetheless, it suffers from the fact that for any object closer than about 50 cm to the device no depth data is returned. The accuracy of the sensor for the points further than 3.5 meters considerably decreases. The experience in this work also revealed that the projection pattern on a very bright or very dark area will be absorbed and thus cannot be seen by the IR camera. These limitations impose the range of applications of the Kinect sensor to be restricted to indoor environments where the level of IR absorption is not very high and the objects closer than 50 cm are not matter of subject. This later shortcoming has a great impact on obstacle avoidance."
My work was actually done before Microsoft released its SDK. The software was developed under Linux, Ubuntu 10.04. For the Kinect driver, the open-source OpenNI framework was used.
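For reference, grabbing a depth frame through the OpenNI 1.x C++ wrapper looks roughly like this (a minimal sketch of the API as I remember it, not the thesis code; error handling omitted):

```cpp
#include <XnCppWrapper.h>  // OpenNI 1.x C++ wrapper
#include <cstdio>

int main()
{
    xn::Context context;
    context.Init();                   // initialise OpenNI

    xn::DepthGenerator depth;
    depth.Create(context);            // create a depth node for the Kinect

    context.StartGeneratingAll();
    context.WaitOneUpdateAll(depth);  // block until a fresh depth frame arrives

    xn::DepthMetaData md;
    depth.GetMetaData(md);
    // md(x, y) is the depth at pixel (x, y) in millimetres; 0 means no reading.
    std::printf("%ux%u, centre depth = %u mm\n",
                md.XRes(), md.YRes(), md(md.XRes() / 2, md.YRes() / 2));

    context.Shutdown();               // release OpenNI resources
    return 0;
}
```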
Very interesting. All sensors have their limitations, of course. I think a successful approach is layering of sensors for different ranges, environments and tasks.
Even for hobby robots, there are times when a combination of sensors for line following, obstacle avoidance, bump detection, etc. is needed. It seems like the Kinect would be a great vision sensor, but it might need to be supplemented in some cases. IR or ultrasonics could fill in the close range, while LIDAR could provide longer-range detection.
In any case, you have done some very nice work. I’m glad to see you sharing on LMR.
Nice work. Can you please provide more information about the pan/tilt unit that you used in your system? I made a two-servo pan/tilt unit, but it doesn’t look strong enough to hold the Kinect.
Also, I would love to read your thesis; let me know if it’s publicly available for reference.
The pan/tilt unit we used in the project had the model number PTU-46-70. The Kinect sensor is a bit too heavy for micro servos to stabilize. Use two standard servos (4.4 kg·cm) to build a pan/tilt unit.
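As a rough sanity check (assuming, since I don’t have exact figures at hand, that the Kinect sensor bar weighs about 0.5 kg with its centre of gravity roughly 5 cm from the tilt axis), the holding torque needed is about 0.5 kg × 5 cm = 2.5 kg·cm. That sits comfortably within a 4.4 kg·cm standard servo’s rating, but would be marginal for a typical 1.5–2 kg·cm micro servo.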
The thesis I worked on actually belongs to the CSC/CAS department of KTH (the Royal Institute of Technology). They might make it available online soon. Here is the link to check: