Precise location sensor for indoor use

The idea I have is to make a smart robot that stores information about the house. It moves around and keeps a digital blueprint of the rooms: using an ultrasonic sensor it detects where objects are, and since it knows its own position, the robot can store the locations of the obstacles. In the future he will then know a path to move around them, and I will be able to send him to certain locations.

I was wondering what the options are for precise location detection in a house. 

How can the robot know where he is? If I pick him up, move him 10 cm, and power him on, he should be able to detect the correct location so he knows how to move. 

I was first thinking about GPS but I don't think that will give me the precision I am looking for. 

I also thought about IR, but that does not pass through walls, so I would basically have to install IR sensors in every room in order to cover the house.

Are there other options, or is this just a little bit too tricky to achieve?

 

I wonder if one couldn’t get away with 2 IR beacons per room and a Wiicam on a pan/tilt head.

Thanks for the response. Good to get a reality check. 

How do these automatic vacuum cleaning robots work? Are they using IR to get back to the charging station? If so, they always need to be in viewing range of the station and cannot go to another room, right? 

I gave it another thought and maybe I can achieve this with some fancy AI programming. 
If I store the info from the ultrasonic sensor and compare the current reading with past readings, I might be able to figure out the location. 
I'm not sure if the Arduino will be fast enough to process this. If not, I can use an XBee and a computer to offload the computing.  
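
Something like this is roughly what I have in mind (just a sketch; the sweep size, the stored sweeps, and the readSweep() routine are placeholders I made up):

#include <limits.h>

const int STEPS = 36;          // readings per 360-degree sweep (every 10 degrees)
const int ROOMS = 4;           // number of stored reference sweeps
int knownSweeps[ROOMS][STEPS]; // distances (cm) recorded on earlier runs
int currentSweep[STEPS];       // the sweep just taken

// Placeholder: fill currentSweep[] by stepping a servo and pinging the ultrasonic sensor.
void readSweep(int out[]) { /* ... */ }

// Return the index of the stored sweep that best matches the current one,
// scored with a simple sum of absolute differences.
int bestMatch() {
  long bestScore = LONG_MAX;
  int bestRoom = -1;
  for (int r = 0; r < ROOMS; r++) {
    long score = 0;
    for (int i = 0; i < STEPS; i++) {
      score += abs((long)currentSweep[i] - knownSweeps[r][i]);
    }
    if (score < bestScore) {
      bestScore = score;
      bestRoom = r;
    }
  }
  return bestRoom;
}

A real version would also have to deal with the robot facing a different direction than when the reference sweep was taken (for example by trying rotated versions of the sweep).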

The high-end LG vacuum cleaners use an upward-facing camera with a slight fish-eye lens to track the ceiling. They determine which room they are in by the shape of the ceiling, comparing it to an internal map. Of course this is in addition to dead reckoning from wheel encoders and an optical-mouse-style sensor. A little more elegant than the Roomba's random-number-generator movements.
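
For what it's worth, the dead-reckoning part is basically this kind of update from the two encoder counts (a rough sketch; the ticks-per-metre and wheel-base constants are made-up values you would measure on the real robot):

#include <math.h>

const float TICKS_PER_M = 2000.0;  // encoder ticks per metre of wheel travel (assumed)
const float WHEEL_BASE  = 0.20;    // distance between the wheels in metres (assumed)

float x = 0, y = 0, theta = 0;     // running pose estimate

// Call periodically with the ticks counted on each wheel since the last call.
void updateOdometry(long leftTicks, long rightTicks) {
  float dLeft   = leftTicks  / TICKS_PER_M;
  float dRight  = rightTicks / TICKS_PER_M;
  float dCenter = (dLeft + dRight) / 2.0;          // distance the robot centre moved
  float dTheta  = (dRight - dLeft) / WHEEL_BASE;   // change in heading
  x += dCenter * cos(theta + dTheta / 2.0);
  y += dCenter * sin(theta + dTheta / 2.0);
  theta += dTheta;
}

The estimate drifts over time, which is exactly why the LG robots add the ceiling camera on top of it.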

I still would go with IR beacons in each room. Two should be enough, one near the door so it comes into view as soon as the robot looks around the corner. Then use a grid of IR receivers or a Wii cam for a rough triangulation.
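
If the robot can get a bearing to each of the two beacons (say, the Wii cam's pan angle plus a compass heading) and knows where the beacons are, the triangulation itself is only a few lines. A rough sketch, with the beacon positions and bearings assumed known:

#include <math.h>

struct Point { float x, y; };

// b1/b2: known beacon positions; a1/a2: bearings (radians) from the robot to
// each beacon, both in the same world frame. Returns the estimated robot position.
Point triangulate(Point b1, float a1, Point b2, float a2) {
  float det = sin(a1 - a2);                // from solving the two ray equations
  float d1 = (-(b1.x - b2.x) * sin(a2) + (b1.y - b2.y) * cos(a2)) / det;
  Point robot;
  robot.x = b1.x - d1 * cos(a1);           // step back along the ray to beacon 1
  robot.y = b1.y - d1 * sin(a1);
  return robot;
}

The result gets unreliable when the two bearings are nearly parallel, which is one more argument for placing the two beacons well apart in the room.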

Thanks. 

Can IR sensors be identified, or do I need RFID for this? For example, if I have 2 IR sensors in each room, can the robot know which sensor is which, or is that not possible? If it's possible, I would be able to use 2 or 3 (or more) to get a more precise triangulation. 

Passero,

I was just working on this last night! Great minds think alike; or, more likely in my case, all robots have to solve the same problems…

IR receivers work by either sensing a 38 kHz signal or not. Remote controls work by sending timed pulses, so every time you press a button on your TV remote, it will send maybe a 20 ms, 10 ms, 15 ms pulse sequence for one button, and a completely different series of pulses for a different button. The TV picks it up and then performs the required command. Normally, the remote will send the same command twice in a row, one right after the other, just in case the receiver missed the first part. Adafruit.com has a nice tutorial on using IR with remotes that you might find interesting.

http://learn.adafruit.com/ir-sensor

You can do the same thing. You have to do it at 38 kHz for the receiver to pick it up, which is about 26.3 microseconds per cycle. So the algorithm is:

turn on the LED for 13 microseconds

turn it off for 13 microseconds

Stolen shamelessly from the following (a nice article on using IR sensors for obstacle avoidance, but also a good tutorial on using IR on a robot):

https://www.robotshop.com/letsmakerobots/node/29634

// Pin driving the IR LED (example value; set it as an output with
// pinMode(IRledPin, OUTPUT) in setup()).
const int IRledPin = 9;

// Send a burst of ~38 kHz carrier: each pass through the loop is one ~26 us
// cycle (13 us on, 13 us off), and 385 cycles comes out to roughly 10 ms.
void IR38Write() {
  for (int i = 0; i <= 384; i++) {
    digitalWrite(IRledPin, HIGH);
    delayMicroseconds(13);
    digitalWrite(IRledPin, LOW);
    delayMicroseconds(13);
  }
}

will give you, on an Arduino, a burst very close to 10 ms long. Although my LEDs are rated at 100 mA continuous, I found I had to put time between the pulses or the IR LED wasn’t emitting anything my receiver could sense. I used 200 ms between pulses, which seemed to work well, although I will undoubtedly have to play with it depending on how my robot actually picks up the signals.
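
One way I might turn that burst into a beacon ID is to send a different number of bursts per frame and let the receiver count them. Just a guess at this point; the counts and gaps would have to be tuned against the actual receiver:

const int BEACON_ID = 3;       // this beacon sends 3 bursts per frame

void sendBeaconFrame() {
  for (int b = 0; b < BEACON_ID; b++) {
    IR38Write();               // one ~10 ms burst of 38 kHz (function above)
    delay(20);                 // quiet gap so the receiver sees separate bursts
  }
  delay(200);                  // long pause marks the end of the frame
}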

Oddbot has a nice tutorial on chaining LEDs (IR emitters are a type of LED) and how to calculate your resistor value.

https://www.robotshop.com/letsmakerobots/node/4948

Since I will have several IR LEDs, I haven’t decided whether I want to do a round-robin approach or drive them all at once in one circuit, which means using a MOSFET to switch the power supply. Since I have the controller, and programming is by far a stronger skill of mine than building circuits, it will probably be the round robin.
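
The round-robin version would look something like this (pin numbers are just examples, each pin would need pinMode(..., OUTPUT) in setup(), and burst38kHz() is just IR38Write() with the pin passed in):

const int beaconPins[] = {5, 6, 7};  // example pins, one per IR LED
const int NUM_BEACONS = 3;

void burst38kHz(int pin) {
  for (int i = 0; i <= 384; i++) {
    digitalWrite(pin, HIGH);
    delayMicroseconds(13);
    digitalWrite(pin, LOW);
    delayMicroseconds(13);
  }
}

void pulseAllBeacons() {
  for (int i = 0; i < NUM_BEACONS; i++) {
    burst38kHz(beaconPins[i]);  // one burst on this LED only
    delay(200);                 // spacing between bursts, as noted above
  }
}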

Good luck.

Regards,

 

Bill

Just to clarify, the beacons are not sensors but IR transmitters: simple IR LEDs running at 38 kHz with an ID modulated onto that 38 kHz carrier. So by reading those ID bits, the robot knows which beacon he is receiving. 
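
On the robot side, reading such an ID with a TSOP-style 38 kHz receiver module (output LOW while a burst is present) could look roughly like this. The 4-bit format and the timings are only placeholders; they have to match whatever the beacons actually transmit:

const int irRxPin = 2;         // receiver output pin (example)

void setup() {
  pinMode(irRxPin, INPUT);
  Serial.begin(9600);
}

// Returns the decoded beacon ID, or -1 if no beacon was seen in time.
int readBeaconId() {
  int id = 0;
  for (int bit = 0; bit < 4; bit++) {
    // pulseIn(LOW) measures how long the output stays LOW, i.e. the burst length.
    unsigned long burst = pulseIn(irRxPin, LOW, 100000UL);   // 100 ms timeout
    if (burst == 0) return -1;                // timed out, nothing in view
    id = (id << 1) | (burst > 1500 ? 1 : 0);  // long burst = 1, short burst = 0
  }
  return id;
}

void loop() {
  int id = readBeaconId();
  if (id >= 0) Serial.println(id);
}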

Yeah, more beacons will give you more precision, but also more of a headache with the programming if you want to let your robot run freely through your whole house with 5+ rooms. I don’t know what you want to achieve or how precise it has to be, but how precise the triangulation will be depends on the “resolution” of the IR receiver array on your robot.

Wonderful information all. Thanks a lot! 

I will have a look at all the options and see what I will try. 

My ultimate goal is to make my robot smart, so he learns about the environment and I can tell him to go to room A when he is in room B. If I pick him up and put him in room B, he will be able to find out where he is and how to get to room A. 

For the sake of discussion, I was thinking about building a learning algorithm which takes information from the ultrasonic sensor and compares it with previous readings. Based upon this I have a history of readings, and I can compare it with paths the robot has taken in the past. By comparing these readings I might be able to estimate the location. 
To save computing power, I would probably use an XBee to send the sensor data and let my computer with an i7 processor do the difficult calculations and send info back to the robot about his position.
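
As a sketch of the offloading part: with the XBee in transparent mode it just looks like a serial port to the Arduino, so I could stream each sweep as a line of comma-separated distances and let the PC do the matching (the line format here is something I made up; the PC side would parse the same thing):

const int STEPS = 36;          // readings per sweep
int sweep[STEPS];              // filled in by the ultrasonic scan routine

void sendSweep() {
  Serial.print("SWEEP,");      // simple header so the PC knows what follows
  for (int i = 0; i < STEPS; i++) {
    Serial.print(sweep[i]);
    Serial.print(i < STEPS - 1 ? "," : "\n");
  }
}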

By doing this, the robot will get smarter over time.

Just an FYI, I am really interested in AI and this is how I came up with the use case of letting the robot learn his environment as he moves.  

Well, you will have an interesting way to go, as I assume you are more of a software guy than a hardware dude.

Writing an AI comes up here from time to time. Check out MarkusB; he has already posted some quite cool AI algorithms. Others on LMR have made similar attempts, but it's your job to find them now :) I will try to help whenever I can.

I have tried to build such a robot myself, but software is not my strongest skill. I also wanted it to be microcontroller based (a few micros linked together) and self-sustained, with no computer connection, so I could take it anywhere with me and it would still work. I have tried mapping and scanning for objects, but got stuck one way or another and didn’t go further, because I don’t have enough time to study and experiment. For me, it either works simply or not at all.

That being said, I have not found an easy method that does not involve at least a mobile phone or a Raspberry Pi and a webcam. Still, even if I used such a setup, I would need to learn how to program it. I have not found a system that I could just copy. There are ROS-based robots around, but that setup needed a Kinect and a laptop (or at least a motherboard), and it was too expensive for me.

If you want to go the fewest-sensors route, I think a ceiling-pointing camera with an IR pattern on the ceiling, or a webcam mounted on the ceiling and connected to a computer like in the Overlord project, is the simplest way. In both systems the robot does not need a compass or encoders, just a distance sensor, although I would recommend both an ultrasonic (US) and an IR sensor, because each can fail in different situations, so they complement each other. Also, if you want to scan left-right to find objects, the IR sensor has a narrow beam compared to the US sensor's cone, so the IR is better suited for that purpose.
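
As a toy example of how the two sensors could back each other up, you could take both readings and trust whichever one reports something closer. The HC-SR04 timing and the rough Sharp GP2Y0A21-style conversion below are just the usual approximations, not calibrated values:

const int trigPin = 8, echoPin = 9, irPin = A0;  // example pins; set pinMode in setup()

float readUltrasonicCm() {
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);   // 10 us trigger pulse
  digitalWrite(trigPin, LOW);
  unsigned long us = pulseIn(echoPin, HIGH, 30000UL);   // 30 ms timeout
  return us == 0 ? 999.0 : us / 58.0;                   // ~58 us per cm, round trip
}

float readSharpIrCm() {
  int raw = analogRead(irPin);
  if (raw < 20) return 999.0;                           // nothing in range
  return 4800.0 / (raw - 20);                           // rough GP2Y0A21 linearization
}

float obstacleDistanceCm() {
  float us = readUltrasonicCm();
  float ir = readSharpIrCm();
  return us < ir ? us : ir;    // believe whichever sensor sees something closer
}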

Years ago I played with a robot system called the ER1 (I still have it in my basement). The robot had stepper motors (so it did not need encoders) and a webcam connected to a laptop you would place on the robot. You could either point the camera at the floor at an angle for avoiding collisions, or use it to do object recognition by taking pictures of objects and storing them in a database for later comparison. The (proprietary) software was also able to determine the relative distance to an object, knowing what size the object had in a picture taken at a fixed distance (one foot) from the robot.

So in theory, you could place the robot in a room and have it take 4 pictures, one facing each wall, so the robot could remember what the room looks like. You would also need to assign relative direction headings to the walls (like N, S, E, W walls). When the robot was powered up, it would look around to determine if it was in a known room. If not, it would try to face the walls and take pictures (but without a compass, this would require the robot to always face N at power up). The user could also take pictures of different objects and, when one was recognized, trigger different actions; for example, the robot could recite a text when a page from a book was recognized. It could also recognize people. Pretty cool stuff for the year 2001. It could probably be done using OpenCV, and I would love it if it could be done on a BeagleBone or Raspberry Pi (although I doubt they have enough processing power).

So yeah, that’s what I wanted to add to the wealth of info the others provided.

Cheers!