I have tried to build such a robot myself, but software is not my strongest skill. I also wanted it to be microcontroller based (a few micros linked together) and self-sustained, with no computer connection, so I could take it anywhere and it would still work. I have tried mapping and scanning for objects, but got stuck one way or another and didn't go further, because I don't have enough time to study and experiment. For me, it either works simply or not at all.
That being said, I have not found an easy method that does not involve at least a mobile phone or a Raspberry Pi and a webcam. And even if I used such a setup, I would still need to learn how to program it; I have not found a system I could just copy. There are ROS based robots around, but that setup needed a Kinect and a laptop (or at least a motherboard), and it was too expensive for me.
If you want to go the fewest-sensors route, I think a ceiling-pointing camera with an IR pattern on the ceiling, or a webcam mounted on the ceiling and connected to a computer like in the Overlord project, is the simplest way. In both systems the robot does not need a compass or encoders, just a distance sensor, although I would recommend using both an ultrasonic (US) and an IR sensor, because each fails in different situations, so they complement each other. Also, if you want to scan left-right to find objects, the IR sensor has a narrow beam compared to the US sensor's wide cone, so the IR is better suited for that purpose.
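Just to illustrate the "complement each other" idea, here is a minimal sketch of how the two readings could be combined; the read functions are placeholders for whatever sensor drivers you actually use, and the range limits are made-up examples.

# Rough sketch: fuse an ultrasonic and an IR distance reading so one sensor
# can cover for the other when it fails. read_ultrasonic_cm() and read_ir_cm()
# are hypothetical placeholders for your own sensor drivers.

def read_ultrasonic_cm():
    """Placeholder: return distance in cm, or None when no echo comes back."""
    raise NotImplementedError

def read_ir_cm():
    """Placeholder: return distance in cm, or None when out of the sensor's range."""
    raise NotImplementedError

def fused_distance_cm(us=None, ir=None):
    """Combine the two readings, trusting whichever one looks valid.

    Ultrasonic sensors tend to miss soft or angled surfaces; IR sensors
    struggle with dark objects and sunlight and only work over a short
    range. If both readings look valid, take the smaller one, which is
    the safer choice for obstacle avoidance.
    """
    valid = [d for d in (us, ir) if d is not None and 2 < d < 400]
    return min(valid) if valid else None

# Example with fake readings: the US misses (soft surface), the IR sees 35 cm.
print(fused_distance_cm(us=None, ir=35.0))   # -> 35.0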
Years ago I played with a robot system called the ER1 (I still have it in my basement). The robot had stepper motors (so it did not need encoders) and a webcam connected to a laptop you would place on the robot. You could either point the camera at the floor at an angle to avoid collisions, or use it for object recognition by taking pictures of objects and storing them in a database for later comparison. The (proprietary) software could also estimate the relative distance to an object, knowing what size the object had in a picture taken at a fixed distance (one foot) from the robot.

So, in theory, you could place the robot in a room and have it take four pictures, one facing each wall, so it could remember what the room looked like. You would also need to assign relative direction headings to the walls (like N, S, E, W). When the robot powered up, it would look around to determine whether it was in a known room; if not, it would try to face the walls and take pictures (but without a compass, this would require the robot to always face N at power up). The user could also take pictures of different objects and trigger actions when they were recognized; for example, the robot could recite a text when a page from a book was recognized. It could also recognize people. Pretty cool stuff for the year 2001. It could probably be done with OpenCV, and I would love to see it run on a BeagleBone or Raspberry Pi (although I doubt they have enough processing power).
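That distance trick presumably relies on the apparent size of an object being inversely proportional to its distance: if you know how many pixels wide the object was at one foot, you can estimate how far away it is now from how many pixels wide it appears. A quick sketch of that idea (the numbers are made up, not from the ER1 software):

def estimate_distance(reference_width_px, reference_distance, observed_width_px):
    """Estimate distance from apparent size.

    Apparent width scales as 1/distance, so:
        distance ~ reference_distance * reference_width_px / observed_width_px

    reference_width_px  -- object width in pixels at the known distance
    reference_distance  -- the known distance (e.g. one foot, as the ER1 used)
    observed_width_px   -- object width in pixels in the current frame
    """
    return reference_distance * reference_width_px / observed_width_px

# A book photographed at 1 foot is 200 px wide; if it now appears 50 px wide,
# it is roughly 4 feet away.
print(estimate_distance(200, 1.0, 50))   # -> 4.0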
So yeah, that’s what I wanted to add to the wealth of info the others provided.
Cheers!