Uff-da... So I have pulled the training wheels off of Walter and am letting him rove around sans fences. I have actually had great luck with pre-solving a lot of problems (table legs and driving under things) but still I am running into a few problems.
I ask this as an open question.
What is a hallway to a robot? --We all know the right-left-right-left-right-left problem with corners, and now I am finding the same with hallways, only this time it is my outward-shooting side sensors (the keep-away-from-the-wall ones) causing it. Not to mention, if it is a dead end, it keeps following the standard code: it makes a 1/4 turn away from the obstacle at the end, finds it has just turned into a wall, and backs up only to find its ass hitting the other wall. --Here are some scenarios --tell me what you think.
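To make the dead-end case concrete, here is a rough sketch of the kind of check I am picturing; the function and the 30 cm threshold are made up for illustration, not anything Walter actually runs:

```python
# A rough sketch of the dead-end check; the 30 cm threshold and the three
# readings (front/left/right, in cm, from the ping sensors) are for illustration.

TOO_CLOSE_CM = 30

def react(front, left, right):
    """Pick a reaction instead of blindly quarter-turning away."""
    if front < TOO_CLOSE_CM and left < TOO_CLOSE_CM and right < TOO_CLOSE_CM:
        return "turn_180"            # dead end: about-face beats the 1/4-turn shuffle
    if front < TOO_CLOSE_CM:
        return "turn_left_90" if left > right else "turn_right_90"  # turn toward open side
    return "keep_going"

print(react(front=20, left=25, right=22))   # -> turn_180
```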
Trying to think of a way to give Guilbot a sense of position using only a ping sensor. The way I was planning to do it was to have him sit in the start position and scan the room in a full 360 arc, getting a distance to each wall to give him a position relative to the outside of the room. That's about as far as I got; I managed to map the results out into a basic model in Processing. My eventual aim was to have him remember each room as a sort of fingerprint, in the hope he could recognise a room according to its dimensions. The problem with this method was that things like furniture and odd shapes in the wall could throw things off.
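Roughly, the sweep I had in mind looks something like this (just a sketch; the fake reader stands in for the real servo-plus-ping reads):

```python
# A sketch of the 360-degree "fingerprint" sweep. read_distance stands in for
# "point the ping sensor at this angle and return the distance in cm".

def scan_room(read_distance, step_deg=10):
    """One full sweep: a list of (angle, distance) pairs - the room's fingerprint."""
    return [(angle, read_distance(angle)) for angle in range(0, 360, step_deg)]

# Fake reader just so this runs; the real one would drive the servo and ping.
fake_reader = lambda angle: 200.0
print(scan_room(fake_reader)[:3])   # [(0, 200.0), (10, 200.0), (20, 200.0)]
```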
Could be solved by matching maybe 90%, having set points in a node to measure from. It could possibly work by measuring the length and width of a room when it first enters it: if it moves so that it is in the middle of the doorway, then moves 1 meter into the room, that would give it a standard point to measure from in each room. It could then measure the distance in front of it and add the meter it has already traveled to give a length, then measure the distance left and right to get a width for the first part of the room; this would give it a start point. It could then grid this out with grid lines 10 cm apart. It then moves to another corner to take the measurements again; if they are within 10% of the first results then it would be safe to assume that the measurements are of the full room without any obstacles. If not, then it would have to measure from all of the corners of the room until it had managed to work out the right size, which could get complicated.
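Something like this is what I mean by a standard measuring point and the 10% check (the numbers are invented, just to show the shape of it):

```python
# A sketch of the standard measuring point and the 10% check. The numbers are
# invented; a real run would use ping readings taken 1 m inside the door.

DOOR_OFFSET_CM = 100          # the 1 m already traveled in from the doorway

def room_size(ahead_cm, left_cm, right_cm):
    """Length and width of the room from one measuring point."""
    return ahead_cm + DOOR_OFFSET_CM, left_cm + right_cm

def roughly_equal(a, b, tolerance=0.10):
    """True when two measurements agree within 10%, as suggested above."""
    return abs(a - b) <= tolerance * max(a, b)

first = room_size(ahead_cm=350, left_cm=120, right_cm=180)     # (450, 300)
second = room_size(ahead_cm=340, left_cm=150, right_cm=160)    # (440, 310)
print(all(roughly_equal(a, b) for a, b in zip(first, second)))  # True -> trust it
```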
If there were a lot of obstacles and a lot of mismatching measurements, then it would have to have some way of taking the measurements of the obstacles and deleting them from the map as it goes. This, I think, would require a very accurate movement and orientation system, which I can't think of a decent answer for without having beacons or some static point to reference in each room, which would kind of defeat the point of it.
That looks pretty cool. If that could be used to build up a picture, then things such as coving and natural deviations in the room's structure could help build up a very accurate fingerprint. However, it would still rely on an accurate movement and positioning system.
I started with the go get a beer, come back as step one of the whole navigation thing. My first thought is the beacon theory -I have played with an IR beacon system that works very, very well. Simply an IR LED constantly spitting out an IR signal. On top of Walter is an IR sensor at the back of a long tube, thus only seeing the IR beam from straight ahead. By scanning this sensor with a servo, I can easily find a bearing to the beacon and drive in that direction. Using this system, and some simple “first beacon, turn right…second beacon, turn left” logic, I am confident I can get from A to B. This is, of course, assuming I am willing to pre-set beacons before the run.
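For what it's worth, the bearing-finding part boils down to something like this (a sketch only; the fake detector stands in for the real tube sensor on the servo):

```python
# A sketch of turning the tube-sensor sweep into a bearing. seen_at stands in
# for "point the servo here and tell me if the IR detector sees the beacon";
# the fake version below pretends the beacon is visible between 60 and 70 degrees.

def find_bearing(seen_at, step_deg=2):
    """Sweep 0-180 degrees and return the middle of the detection arc (or None)."""
    hits = [a for a in range(0, 181, step_deg) if seen_at(a)]
    return sum(hits) / len(hits) if hits else None

fake_detector = lambda angle: 60 <= angle <= 70
print(find_bearing(fake_detector))   # 65.0 - drive toward that heading
```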
I do like the ideas mentioned above, measuring rooms, using doors as start-points, etc. Speaking in terms of what is in my head right now, let's look at this problem backward… With the beacon system I was stuck on the idea of how to get back from B to A. The theory I came up with is a basic grid system. I think this theory goes hand-in-hand with any kind of “start-point” or junction idea. As a way of mapping, if the robot stays on an X/Y grid system making only 90 degree turns, EEPROM recording of turns and distance traveled should be a piece of cake. Of course, a return path would be as easy as playing back from the EEPROM in reverse order. I even took a look at I2C compasses for the job. If a good, simple start-point/junction system could be established, household navigation should be a breeze and you could do away with any beacon crap.
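Here is roughly how I picture the record-and-reverse part, with a plain list standing in for the EEPROM (a sketch, not Walter's actual code):

```python
# A sketch of record-and-reverse on a 90-degree grid; the list stands in for
# the EEPROM. To go home, turn 180 at point B and then play the log back in
# reverse order with left/right swapped.

path_log = []

def record(action, amount):
    path_log.append((action, amount))

def return_path(log):
    """Reverse the order of moves and invert every turn."""
    inverse = {"forward": "forward", "turn_left": "turn_right", "turn_right": "turn_left"}
    return [(inverse[a], amt) for a, amt in reversed(log)]

record("forward", 200)      # centimeters
record("turn_right", 90)    # degrees
record("forward", 150)
print(return_path(path_log))
# -> [('forward', 150), ('turn_left', 90), ('forward', 200)]
```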
You know, it’s funny… my original post was really about basic ideas on the subject of sensor reading and movement reactions, but now that I think about it, I think this was really the conversation I wanted to have… and I do think a blind walk-around of my house is a good idea -both for the learning and the comedic value.
–Junctions… I wonder if there is any reason I couldn’t just put a big color-coded dot in the center of each room… Find the center, align yourself with north, all is good from there.
And I admit, I would not easily program a robot to map the house by itself. So there has to be a solution in between those two extremes.
How about guessing every once in a while. Maybe a recognising method that would indicate direction only. Like “North” or “Kitchen”. And let Walter figure everything else out by itself. All it needs to “understand” is whether it is getting closer to a target or not.
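Something like this little loop is all I have in mind (everything in it is a stand-in, just to show the idea):

```python
# The whole "strategy" in one small loop: take a step, and if the reading did
# not improve, try a new heading. Everything here is a stand-in; a real run
# would read an IR or sound level and actually drive the wheels.

import random

readings = iter([10, 12, 11, 15, 18, 17, 20, 24, 25, 30])  # fake "closeness" signal

def step():                       # stand-in: drive a bit, return the new reading
    return next(readings)

def turn():                       # stand-in: rotate to a new heading
    print("new heading:", random.choice([45, 90, -45]))

last = step()
for _ in range(9):
    now = step()
    if now <= last:               # did not get closer, so change direction
        turn()
    last = now
```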
**The only** problem I can see with a compass is interference from the structure of the building and possibly power lines. I'm not sure how much of an impact this would have, but I was always taught to read a compass away from structures.
I don’t think you are going to get that close to “free range” with a set of micro-processors.
For example, this room here has a big table and several upright chairs next to it. I know they are a table and chairs because I’ve seen many other examples and these fit my stored pattern. So, this might be the dining room. Experience tells me that if there’s a cupboard here it will contain plates, glassware, cutlery and possibly alcoholic drinks. It probably won’t contain tools for fixing the car or a toilet.
Start with baby steps. Painting “room names” on the ceiling isn’t really cheating. I use the table and chairs to recognise the dining room, the bot uses a glyph. We both have to decide if it’s where we want to be, and if not how do we get out?
Chris, you must have plenty of little, agile bots running around your house by now. They could do the scouting. Maybe have one in every room. Guide Walter through the rooms and halls by signals (sound, light, RF, IR).
Idea 1: I once tried to teach my robot tricks that I reinforced - something like teaching a dog that only knows 6 movements. Once in a while a dog will try another trick, or incorporate another trick, to get a treat, and the most effective trick that results in a treat is remembered more. A combination of movements would be the trick that I want to reinforce. I tried once to program Lexi to have only six actions and repeat the 3 most reinforced actions. Reinforcement comes from a remote control that I press whenever I want to reinforce an action. After exhausting coding and testing I ran out of time to fix the code, not to mention my non-expertise at programming.
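The core of it was something like this (a much simplified sketch, not the actual Lexi code; the names and the 20% explore chance are made up):

```python
# A small sketch of the six-trick reinforcement idea: keep a score per action,
# usually repeat one of the three best-scoring actions, occasionally try
# something new, and bump an action's score when the "treat" button is pressed.

import random

actions = ["forward", "back", "left", "right", "spin", "wiggle"]
scores = {a: 0 for a in actions}

def pick_action(explore_chance=0.2):
    if random.random() < explore_chance:
        return random.choice(actions)            # once in a while, try any trick
    top_three = sorted(actions, key=scores.get, reverse=True)[:3]
    return random.choice(top_three)              # otherwise repeat a favourite

def reinforce(action):
    """Call this when the remote's 'treat' button rewards the last action."""
    scores[action] += 1
```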
Point: I have the same idea as rik about positive reinforcement and learning to map out a room.
Idea 2: I have always wanted a robot that has beacons around it like a fence and a safety net, which mark the safe zone inside and the unknown zone outside. Getting a beer from the fridge is too complicated. I think just roughly mapping an obstacle-free area in the middle of the room is a success to me.
Goal: Put down the robot in any place in the room and map its safe zone. Safe zone means an area without any obstacles.
Plan: I think putting beacons down beforehand is not giving the robot independence. I have had this idea since my first RC car: after driving out of range of the remote, I thought, what if the car could drop a receiver at the farthest point the remote can reach and retransmit the signal to get a longer range? ON THE MAPPING BUSINESS: what if a robot could carry something like 6 IR beacons and drop them around itself (maybe with one more to mark the point where the robot was placed to start with)? The robot would navigate from one beacon to another, constantly pushing them away from the center. The area inside the IR beacon fence is the safe zone and outside is the unknown zone. Or maybe navigate from the center to an IR beacon, push the IR beacon away from the center, go back to the center, navigate to another beacon AND REPEAT. If it works like I envision it (like in a perfect world), you could drop a robot in any place and it could map an increasingly larger safe zone that (maybe) another robot with another mission could use.
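Just to picture the push-the-fence-out loop, here is a toy, geometry-only simulation (all the numbers are invented):

```python
# A rough, geometry-only simulation of the "push the fence outward" loop:
# six beacon positions around a center, and each pass pushes every beacon a
# little farther out, so the safe zone inside the fence keeps growing.

import math

center = (0.0, 0.0)
beacons = [(math.cos(math.radians(a)), math.sin(math.radians(a)))
           for a in range(0, 360, 60)]          # six beacons, 1 m out

def push_out(beacon, step=0.5):
    """Move a beacon 'step' meters farther from the center."""
    bx, by = beacon
    dist = math.hypot(bx - center[0], by - center[1])
    scale = (dist + step) / dist
    return (center[0] + (bx - center[0]) * scale,
            center[1] + (by - center[1]) * scale)

for _ in range(3):                              # three trips around the fence
    beacons = [push_out(b) for b in beacons]

print([(round(x, 2), round(y, 2)) for x, y in beacons])
```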
Point: Maybe we could teach a robot skills to "live" like a simple organism (simple goals), and better yet, teach a robot skills to learn (simple things).
Ok I’ve not read all of that, but the idea of getting your robot to make its way about the house I would solve with the A* algorithm, especially if what the little guy needs to learn is purely static. Let the robot create the weighted network based on physical measurements of purely non-collidable area. Landmarks can be placed on the graph too, such as recharge stations. Now the robot can run A* to get exactly where he wants by the shortest route while avoiding static scenery completely. Table legs and such items could be checked with a UV sensor and the node movement mapped around them. There are lots of good algorithms for working out graphs of rooms if you want to get more complex. You have a pathfinding problem, and computer games have had this problem solved, and efficiently too.
If memory and processing can handle this, it is great for organising the static area. Dynamic objects can be detected at runtime, predicting collisions based on the object velocity it senses. A* can be made more efficient depending on the situation: if you expect lots of dynamic collisions you could use D*, and many other variations can speed up processing. Take advantage of this highly efficient algorithm research and a robot's spatial awareness can look very realistic and clever.
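For anyone who has not met A* before, here is a compact sketch over a small occupancy grid (the grid, start, and goal are invented; a real map would come from the robot's own measurements):

```python
# A compact A* sketch over a coarse occupancy grid (1 = blocked cell such as a
# wall or table leg, 0 = free). Moves are limited to 90-degree grid steps.

import heapq

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]                  # (f-score, cell)
    came_from, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:          # walk the parents back to the start
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                new_g = g[cur] + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt] = new_g
                    came_from[nxt] = cur
                    h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])  # Manhattan heuristic
                    heapq.heappush(open_set, (new_g + h, nxt))
    return None                              # no route exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(a_star(grid, (0, 0), (3, 3)))
```

D* and the other variants keep the same graph but repair the path when a cell changes, which is why they suit the dynamic-obstacle case.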