Navigation with (2) sonars Update 2 (with data)

*****Update 2.16.10*****

Got some data to start thinking about, guys. My thoughts of getting nice, even packs of 4 or 5 are a little out the window. To be honest, I tried to forget that I could see where the robot was facing and tried to make a decision on where to go based on just the readings. I couldn't. Please note that I was seeing the readings on an LCD, so left was above right, not in a straight line. --Now that I think of it, I have no idea why I didn't line them up, as it is a 16x2 display. Huh.

All readings were taken L to R

Now, here's what I did: I added some scraps of drywall I had lying around to the kitchen table's base. I didn't want any sensor holes, just smooth walls, to keep this as basic as I could. So here we go; the data is under each photo and the 3 readings were taken at 1000, 750 and 500.

Navi_Pictures_001.jpg

@1000

L = 00000000

R = 11100000

@750

L = 00000111

R = 11110000

@500

L = 00011111

R = 11111111

Navi_Pictures_002.jpg

@1000

L = 00000000

R = 00000000

@750

L = 00000000

R = 00000111

@500

L = 11111000

R = 00001111

Navi_Pictures_003.jpg

@1000

L = 00000000

R = 01111001

@750

L = 00000000

R = 11111101

@500

L = 10000001

R = 11111101

Navi_Pictures_004.jpg

@1000

L = 00000000

R = 11000000

@750

L = 00000000

R = 11000011

@500

L = 00001111

R = 11111110

****Original Post*****

Need some brainstorming here, folks...

Walter is just friggin big and because of his size, he needs some "side checks", i.e. one sonar is just not enough to keep an eye on his total width. In the past, I have used one center-mounted sonar with one Sharp on each side. The sonar was in charge while the Sharps would catch the walls. Basically, the Sharps would catch a wall if Walter was coming up on it at a shallow angle or was traveling along a wall (not actually following it). This system worked well (except for the God-awful electrical noise from the Sharps), with the sonar being a code priority and the Sharps "knowing their place" in terms of when they could tell the code to change course.

Now:

I have 2 sonars, one each on each of the front two corners. Each sonar can pivot from almost 90 degrees to the side to slightly past center (i.e. the left sensor can see straight out to the left side and, when fwd, can see slightly past the centerline --a little to the right)

Question:

I need some brainstorming here. I have tried about 9 different versions of navigation code with different levels of success and a lot of failures. I have tried:

If one side is closer than the other, the closer is priority -listen to that sensor.

If one sensor is looking to the side, let it skootch off the wall and the fwd sensor is the priority for obstacle avoid.

If one side sees something, let a bit=1 (to remember something WAS there) and then figure that in as to what the other side is seeing.

Average all numbers on one side and compare to the average of all numbers on the other side.

Find a wall, keep that side's sensor fixed on the wall and sweep the other side's sensor for obstacles.

Simple avoid whatever either sensor sees no matter what unless both are triggered. Then stop, scan R to L, find the furthest distance, turn to that heading, rescan to see if there is a wide enough path for the bot to fit, turn to the middle of this "path" and go.

etc. etc. etc. etc. etc.

 

I'm looking for BIG ideas here, folks. No code, just a 30,000' view of the problem. Brainstorm, brainstorm, brainstorm...

 

****Update*****

I think I got it, guys... Check my work, would ya?

We have this:

0211001638.jpg

Breaks down like this: 3 distance rings with 8 segments each, left and right. Each point on the arc gets a bit --6 bytes, 48 bits total. The danger limit is proportionate to the servo position, so as the sonar swings more to the side the danger limit is decreased, and as it swings to the fwd position, it is increased. At each point, if the range is > the danger limit, that bit turns into a one.
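The scheme above can be sketched like this (Python for clarity, not Walter's PICAXE BASIC; `danger_base` and the linear servo scaling are assumptions for illustration, not Walter's actual values):

```python
def danger_limit(segment, danger_base):
    """Danger limit scaled by servo position: smallest when the sonar
    points to the side (segment 0), largest when it points fwd
    (segment 7). Linear scaling is an assumption for illustration."""
    return danger_base * (segment + 1) // 8

def sweep_ring(readings, danger_base):
    """readings[i] = range measured at segment i (0 = side ... 7 = fwd).
    Returns one byte: bit i turns into a one when range > danger limit."""
    byte = 0
    for i, rng in enumerate(readings):
        if rng > danger_limit(i, danger_base):
            byte |= 1 << i
    return byte
```

Three calls with three different `danger_base` values would give the three ring bytes per side.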

Reading the data:

Find the start and end of each "pack" of ones. Each pack is then checked for width. Each pack of ones that makes the cut must be wide enough for the bot to get through, and may be allowed to be narrower if it is farther out. From there, I have all the available paths I could use. Choosing a path could be anything --for instance, widest pack that is farthest out wins -or- let's say I am in my docking mode... I have found the beacon; is there a path wide enough in that direction to travel that way? Or maybe I get one of those pyro-thingies that detect body heat... Same thing: there is a human and I should follow it; is the direction of the sensor within one of the "paths" available? Etc. Etc.
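A rough sketch of that pack-finding pass (Python rather than PICAXE BASIC; `min_width` is a made-up parameter standing in for "wide enough for the bot"):

```python
def find_packs(byte, min_width):
    """Scan an 8-bit reading from bit 7 (left) down to bit 0 (right)
    and return (start_bit, width) for every "pack" of ones that is at
    least min_width bits wide."""
    packs = []
    start, width = None, 0
    for pos in range(7, -1, -1):
        if byte & (1 << pos):              # this segment is open
            if start is None:
                start = pos
            width += 1
        else:                              # pack (if any) just ended
            if start is not None and width >= min_width:
                packs.append((start, width))
            start, width = None, 0
    if start is not None and width >= min_width:
        packs.append((start, width))       # pack ran to the right edge
    return packs
```

For example, `find_packs(0b01111001, 4)` reports one usable pack of width 4 starting at bit 6, and ignores the lone bit at position 0.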

Can you describe a little

Can you describe a little how you would like Walter to move about? Just random movement while avoiding obstacles and walls? Should he search for some object, human, light/dark etc? Is he mapping the area so you can tell him to go to a specific place?

A really easy algorithm could be:

if (left < 80cm AND right < 80cm) { turn left; }
else if(left < 40cm) { turn right; }
else if(right < 40cm) { turn left; }
else { go forward; }

OK, here are some thoughts.

OK, here are some thoughts. Establish the principal behaviors you want. Such as:

  • In the absence of any obstacles or other commands, move towards the most open space
  • Set a variable like sensor_priority, and one sensor will always have priority
  • If sonar1_range < sonar2_range, sensor_priority = 2
  • Anything closer than danger_distance immediately assigns priority to that sensor
  • In the event of a conflict between sonar1 and sonar2, stop, scan both sensors and decide
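As a sketch only (Python, with an invented `danger_distance` threshold), those priority rules might boil down to something like:

```python
def pick_priority(sonar1_range, sonar2_range, danger_distance):
    """Returns 1, 2 or "conflict". danger_distance is an assumed
    threshold; the final comparison gives priority to the sensor on
    the more open side, per the bullets above."""
    if sonar1_range < danger_distance and sonar2_range < danger_distance:
        return "conflict"   # stop, scan both sensors and decide
    if sonar1_range < danger_distance:
        return 1            # immediate danger trumps everything
    if sonar2_range < danger_distance:
        return 2
    return 2 if sonar1_range < sonar2_range else 1
```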

You might have other modes, like the one you described using one side-turned sensor to keep a minimum distance for a wall, while the other sensor scans for obstacles. So depending on the situation, you might have general navigation code, wall following code, etc.

Another random thought is that with two sensors you have the possibility of binocular vision, which allows for excellent depth perception in humans and other animals with forward-facing eyes. I don’t have a solid idea how to take advantage of that, but it is the germ of an idea. Maybe complex object detection or recognition may be possible.

Consider posting some video showing Walter in action. Maybe it will inspire some good ideas in others.

How good is Walter’s memory?

Your sensors are providing instant information: range measurements that are only valid in that moment. It’s momentary data. How much of that do you wish to store for future retrieval? Or do you have some purist wish for your bot to “live in the moment”? (Now where did I read that before?)

In other words, does Walter “have a plan”? A floor plan of the premises for starters.

For a bot living “in the moment”, you might consider how long one moment is. Maybe it is as long as one full sweep of the sensor. How about storing all the data from one sweep in a single number? An average could work. But perhaps also a number indicating how much floor space in front of the sensor is free of obstacles. I’m sure I read that before, when discussing surface area calculation. Can’t remember where exactly (my brain at the moment is filling up with snot).

Scan 100 degrees in steps of 10 degrees. Each scan showing more than <some distance> scores one point. When a new scan drops below <some distance> reset the score to zero, remember the old score somewhere. Each new score is compared to the old one. Only the best score is reported back to the main program.

That kind of code will hopefully tell you how wide a space is. Problem is, that algorithm is designed for one forward-facing sensor. You might have to combine the readings from both sensors into one sweep.
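For what it's worth, a minimal sketch of that scoring loop (Python; `clear_distance` stands in for &lt;some distance&gt;):

```python
def widest_open_run(readings, clear_distance):
    """One sweep's readings in scan order. Score +1 for each reading
    beyond clear_distance, reset the score to zero on a blocked
    reading, and report only the best run back to the main program."""
    best = score = 0
    for r in readings:
        if r > clear_distance:
            score += 1
            best = max(best, score)
        else:
            score = 0
    return best
```

With 10-degree steps, the returned score times 10 is roughly how many degrees of open space the widest gap spans.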

OOOH, that may be the best sniffing idea so far. >snorrrrth< Full stop.

Thanks, guys…

To be honest, I have forgotten a very basic principle and have broken a very important rule here. Actually, more like a cardinal sin… I went straight to the full code, it didn’t work, then I came here and said "I wrote a bunch of ■■■■ and it didn’t work, tell me why."

I am correcting the problem now. I discovered/remembered/realized that I am on step 487 when I really need to be on step one. I.e. right now, I have the robot in RC mode with the sonars constantly firing and displaying the values on an LCD. I can also manually position the sonar servos via RC, and that number also appears on the LCD. For the last couple days, I have been driving around to different positions, checking data in different positions, recording it as well as the servo position and a quick picture of the bot. I am dumping all of this info into some rik-style charts to start getting an idea of what different situations “look like” in data form. I can say this first off: my initial “guesses” on limits and “danger levels” were way off.

rik:

I’ll say my friend, you got something there… I think I might re-examine my original “may I go” code. Based on the fastest speed of the servos and the slowest speed of my motors, I could really only get 1 or 2 sonar sweeps in a reasonable amount of forward travel. Not a lot of data there or time to figure out what it all means. With my “may I go” code, it is a system of narrowing down the situation:

Regular sonar sweep

If wide open --it’s wide open, just go.

as things get closer or more cluttered or narrow, speed is decreased or a full stop and assess is triggered

way close on one or more sensors triggers a full stop, edge find, a calculation to see if edge to edge is wide enough to get through, and then an encoder turn to the middle of this path, etc. etc.

 

Really, when I start thinking about this rationally, a “just keep going and avoid” robot is not what I want. In actuality, I want Walter to stop a LOT! I was so into this navigation code that I forgot that Walter has a kick-ass head, can talk and sing, etc. Why would I not want to use this? Instead of a nifty-smooth path around a corner, why wouldn’t I want him to stop and look around? (Pretending to look for a new route (with some spoken comment) while the navigation’s “may I go” has a chance to do the math on the new path.) Besides, if the eventual goal is to leave Walter on all the time (with a self-dock and charge), I simply can’t have him driving around all the time like a “Start Here”.

 

Thanks again, guys --I got my head going the right way now, just some more basic tests, I’ll
get there.

 

The sinner is pardoned

Your sin didn’t leak into LMR, so you’re off the hook. You asked for broad-view brainstorming and you got it. And it kinda helped.

I like your approach with the data gathering. Figures…

I also like the idea for coding different navigation modes based on the immediacy of any perceived dangers. But most of all, I like that you are rethinking Walter’s behaviour. That is really Big Idea Creativity going on!

Walter is not gerbil sized, so he should not run around like one. He’s more like a big friendly dog, so he should walk around gently and with great consideration. He’s got a very polite English voice, so he should act like, errrhm, never mind…

With every high level decision you will have Walter make, please sprinkle in some random data. Nobody likes a totally predictable personality.

And ask your “voice talent” for some more quotes: “may I go?”, “I think I will”, “what’s over here?”,“looks like a doorway”, “I’d better share my data with master Chris, so he can make a neat rik-chart”.

Hey, CtC. If anyone deserves

Hey, CtC. If anyone deserves some leeway on asking for help, it’s you. Walter may be the most complex and well considered long term project on LMR. TOBI comes close, but you’ve made so much progress. Plus, your videos are great. "Ting!"

Are you using EEPROM memory at all? It occurs to me that you have a very basic decision to make. Is it more interesting/useful to make Walter capable of navigating a general, unknown environment, or to know lots of detailed data about a known environment (your basement/house)?

There has been some very advanced mapping/navigating work done by some at this site. Should Walter have some detailed map data for things in your house that are VERY unlikely to change, like the walls and floor plan? It’s a lot of data, but you have a pretty advanced platform, and you can set aside memory to store it.

Think of how a blind person can navigate around their own home. Walter isn’t blind, but his processing of the data he can get from sonar is seriously limited compared to what a sighted person can do. Think about sonar as a walking stick for a blind man, rather than vision. He can see out a limited distance, and determine when something is in the way. It’s hard to interpret what is in the way, just that something is there. This analogy may help you think through some of the programming challenges.

Walter is a great piece of work. I can’t wait to see how you tackle his latest stage of development. You are an inspiration to many.

Update

Jackfruit

Yeah baby! Bits ‘n’ Bytes - Groovy!

Good to see the bits are rubbing off on you.

What you just designed there is a binary map, a bitmap. In radial coordinates. Zeroes code for danger, ones code for safety.

The danger limit for each measurement is “arbitrary”, as far as the interpreting algorithm is concerned. More on that later. (No wait, you have three limits. Must discuss why later.)

Building a bitmap for later interpretation is brilliant, because it allows you to use one and only one piec’o’code (poc, or algorithm) to build up a generic map that can be used in different pocs for decision making. That’s exactly why bots make maps, I suppose. No bot would let its sensors decide where to go. Not even a start-here-bot (would be fun though).

Like you write: “Choosing a path could be anything”. Therein lies the brilliance. Separating eyes from brains. You now have the logical architecture for giving your sensors a separate picaxe. Just send the bitmaps down the wire to the main brain.

Ever heard of the unit “Light Year”? Sure you have. Germans take this principle a bit further: they speak of “Geh Minuten” and “Wagen Stunden”. I now introduce “Walter Seconds”: the distance Walter will cover in a single second. I further propose that a smart danger limit would be expressed in this new unit, the Ws. It is a fluid unit too: its real world value (expressed in meters or feet) would vary with Walter’s current velocity (in meters per second). Let’s assume an arbitrary danger limit of 3 Ws. Three seconds in which Walter can take a high level decision and execute it. “Emergency brake now” would be the highest priority decision to take.

As Walter travels at varying speeds through its (his, pardon me) environment, he would automatically adapt his danger limits to his speed. Or at least, the limits for his forward looking sector. The sectors to the sides are hardly influenced by the forward pointing velocity vector. Or are they…
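A tiny sketch of the Ws idea (Python; the 3-second reaction budget is just the example figure from above):

```python
def danger_limit_m(speed_m_per_s, walter_seconds=3.0):
    """Danger limit in meters for the forward sector: the ground
    Walter covers in walter_seconds at his current speed. The limit
    automatically grows and shrinks with his velocity."""
    return speed_m_per_s * walter_seconds
```

At a leisurely 0.5 m/s the forward danger limit works out to 1.5 m; double the speed and the limit doubles with it.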

ideas Ideas IDEAS!

Here we go:

Walter seconds: I think I had that idea without knowing it with the 3 rings. What I wanted to avoid, or include (depending how you see it), is a “just past the limit” situation. Let’s say that a distance is just past the first (closest) ring. It could be just an inch or two; at the same time it could be infinity as well. For deciding on a heading, this makes a huge difference. With just one ring, you miss out on finding doorways and open areas. Now in terms of Walter seconds, the 3 rings sorta cover this: if the pathway found is available farther away, faster speed is allowed; if a path only exists in the first ring, you need to switch to the “I’m in a narrow hallway” code. I was figuring on the 3 rings being set at approx.

–1st Comfortable distance from a wall (i.e. centered in the width of a standard hallway)

–2nd Far enough away to assume a reasonable path. This would be “I’m in a corner” (walls on 2 sides, one side open) --This could also be “I’m in a dead end” (first ring OK on 3 sides BUT no good on 2nd ring, 3 sides)

–3rd Way out there. Used for finding a door, open space or “I’m in a big room, I can go as fast as I want”

 

Now, I kinda pee’d my pants when you said EEPROM. This is just fantastic. One big lookup table! Check my math here… (3) EEPROMs, one for each ring. Each possible combination of bits is converted to its byte value (in regular number form --is this decimal?). This byte is used as an address on the EEPROM. At that address is stored a number that corresponds to a movement command.

Example: (and I know that each ring is 16 bits --R and L-- never mind that, just stay with me here)

First ring is seeing a dead end on 3 sides. 00000000

converted to dec. =0

lookup address 0 on the eeprom and you get number 17

go back to code and select case

case b? = 1 turn right or something

case b? = 2 turn left or something

case b? = 17 you are in a dead end, 180 turn.
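The lookup-and-dispatch idea, sketched in Python rather than PICAXE BASIC (the EEPROM is modeled as a 256-entry list; the command codes other than 17 = dead end from the example above are invented for illustration):

```python
# One "EEPROM" per ring; real code would use the PICAXE read/write
# commands against a hardware chip instead of a list.
ring1_eeprom = [0] * 256
ring1_eeprom[0b00000000] = 17   # blocked on 3 sides: dead end
ring1_eeprom[0b11110000] = 2    # open to the left  (invented code)
ring1_eeprom[0b00001111] = 1    # open to the right (invented code)

def decide(ring_byte):
    """Use the ring byte itself as the EEPROM address, then
    select-case on the stored movement command."""
    command = ring1_eeprom[ring_byte]
    if command == 1:
        return "turn right"
    if command == 2:
        return "turn left"
    if command == 17:
        return "you are in a dead end, 180 turn"
    return "go forward"
```

`decide(0)` walks exactly the example path above: address 0, stored value 17, dead end, 180 turn.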

 

I am realizing as I type this that I would really need 6 EEPROMs to keep this completely tidy, but with three, I could just double the number on the other side or add 255 to the second side, etc…

We are talking about a metric ■■■■■■■■ of pre-programming, but I would think I could simply code one side (256 combinations), then write some code to read this info, modify it as needed and spit it back out to the other EEPROMs to cover the other 5 sections.

I am liking this system a lot!

Wait a minute… 256 combinations??? Nope…

Wait a sec here, rik.

We don’t have anything even CLOSE to 256 combinations! Walter is not going to fit through a “single-bit pathway”! All paths are going to need to be at least 4 or 5 bits wide. We don’t care about all the little far-away spaces (between a table and a chair leg) that show up as a single bit among a bunch of no-gos; we only want to know where we CAN go. Before we look anything up, we can eliminate anything that is less than 4 or 5 bits wide --and less as we go farther out. Gotta figure out how to do this, but man, we are talking about a lot less data to dig through.

Crap.
If a path is 4 bits wide… Gotta figure in if there are 2 bits on one side and 2 bits on the other, if facing fwd! Hmmmm.

Thinking about bits before looking them up…

That kinda defeats the purpose of the lookup table.

My worry is that we’ll end up with way more than 256 combinations. It’s 256 on each side. That’s not 512, but 65536 different combinations (2^16).

Should still fit nicely in a 64 kiloByte memory though.

lookup ideas

If (and only if) you are making decisions based on what one half of Walter sees, then (and only then) can we get away with considering just 256 combinations. The decisions for the left quarter circle would be identical for the right quarter circle. Your code would simply use the same lookup table for either decision. Just remember to flip the bits in the byte before doing that:
00110000 becomes 00001100 and so on…
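The bit flip can be done with a simple shift loop; a sketch in Python:

```python
def mirror(byte):
    """Reverse the bit order of one byte so the left-quarter lookup
    table can serve the right quarter too (00110000 -> 00001100)."""
    out = 0
    for _ in range(8):
        out = (out << 1) | (byte & 1)   # take bits off the low end...
        byte >>= 1                      # ...and push them onto the other
    return out
```

Mirroring twice gives back the original byte, so the same routine works in both directions.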

But you might be more interested in considering the whole picture at once. All 16 bits in one go. That’s a 2^16-entry lookup table. Especially the bits up front are particularly - erhm - related to each other. That doorway right in front of Walter will show up as two half doorways, one half in either byte. Not sure if I am making myself clear here.

Crap, we are into words…

3 words instead of 6 bytes… My mind is starting to overflow (yup, that’s a bit/byte/word joke).

If you tell me, I will hear; if you show me, I will understand; if I teach someone else, I will understand. I need to teach myself. I am having some issues getting my head around this, and the theory is getting a bit too much. Before we get too much farther into this, I need to get some of the primary tests done… I don’t think I can theorise about this much more until I get steps 1-4 done, working and understood. I am going to start with some simple tests with a nice LCD data output and go from there. Tell you what, let me get some basics working – I’ll move all this over to a blog when I have something to show.

Also…

I just figured something out…

You think digitally. You are a programmer.

I convert all problems to a picture in my head then do the math. (A/D conversion)

Hmmm.

I’m all about

I’m all about minimalist/procedural programming, so I’m somewhat opinionated on the matter, but lookup tables are awful =P
With the exception of trigonometric or other unusual functions, I really can’t stand allocating such huge resources to some task that can be performed using sequential analysis. So, on that note, I had a bit of a think regarding your sensor mapping issue…

Firstly there’s that problem of having the map split in two, making it hard to decide what’s going on straight ahead. One quite simple solution would be to take the right half of the left map, and the left half of the right map, and stick them together, i.e.
Left Map [l7, l6, l5, l4, l3, l2, l1, l0] where bit l7 is on the far left
Right Map [r7, r6, r5, r4, r3, r2, r1, r0] where bit r0 is on the far right
Middle Map [l3, l2, l1, l0, r7, r6, r5, r4]
If you’re looking for 4-bit wide spaces, the ‘Middle Map’ byte only yields 3 important patterns: [x1111xxx], [xx1111xx] and [xxx1111x]. All other 4-bit or wider openings that can appear in the middle map will also appear on the side maps, so you don’t need to consider them.
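Building that 'Middle Map' is two masks and a shift; a sketch in Python:

```python
def middle_map(left_map, right_map):
    """[l3, l2, l1, l0, r7, r6, r5, r4]: the low nibble of the left
    map becomes the high nibble, and the high nibble of the right
    map becomes the low nibble."""
    return ((left_map & 0x0F) << 4) | ((right_map & 0xF0) >> 4)
```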

Instead of looking for an unbroken string of 4 ‘1’s, I thought of a little trick that can reduce the complexity of the vector maps. Let’s say you’ve detected a pattern of [01011111] from the left sensor. It’s easy to see that there’s a 5-bit (or maybe more) space to the right of the sensor, but how does Walter determine that? Also, where is the center of this space?
• First, we duplicate the bit pattern into another memory location, so we now have two identical copies. Then we shift the bits (or rotate them, as it is often called) and recombine them. Here’s an example using the bit pattern I mentioned above:
Original Bits [01011111]
Duplicated Bits [01011111]
• Now comes the shift, we move the original bits left once, and the duplicated bits right once, padding with ‘0’s if needed. I think PICBASIC uses a command along the lines of >> for this.
Left’d Bits [10111110]<-0
Right’d Bits 0->[00101111]
• Recombine the two modified bit patterns with the unmodified version using AND.
Original Bits [01011111]
Left’d Bits [10111110]
AND’d Left’d Bits [00011110]

Original Bits [01011111]
Right’d Bits [00101111]
AND’d Right’d Bits [00001111]
• Finally, AND the results of the last step together again.
AND’d Left’d Bits [00011110]
AND’d Right’d Bits [00001111]
AND’d AND’d Bits [00001110]
Original Bits [01011111] (for comparison)

As you can see, only spaces with 3 or more bits of consecutive clear space ‘survive’ this process, and furthermore the center of the detected space remains the same. 3 consecutive bits yield only a single bit at the center in the final map, 4 consecutive bits yield two bits, 5 yields 3, etc. All these bit operations are extremely fast in terms of processing time, so even on a PICAXE I can’t imagine a routine like this would take very long to execute.
You can sorta think of this routine as a digital filter, since it removes the ‘noise’ of isolated single/double space bits, and consolidates the relevant data. If you do decide to use the lookup table approach, you can still lead in with this method, which will reduce your memory requirements by at least half (the largest byte goes from [11111111] = D’255’ to [01111110] = D’126’).
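The whole shift-and-AND filter collapses into three lines; a sketch in Python (on a PIC the shifts would be done with the rotate instructions plus ANDs):

```python
def erode(bits):
    """Shift-and-AND 'digital filter': only runs of 3 or more
    consecutive ones survive, trimmed by one bit at each end, with
    the center of each run preserved."""
    shifted_left  = (bits << 1) & 0xFF   # pad with 0 on the right
    shifted_right = bits >> 1            # pad with 0 on the left
    return bits & shifted_left & shifted_right
```

It reproduces the worked example above: `erode(0b01011111)` yields `0b00001110`, and the stray ‘1’ at bit 6 is filtered out.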

Sound introspection

Fair enough. I’m picking up what you’re putting down. Stuph like that.

I’ll be here.

My arch nemesis strikes

My arch nemesis strikes again! Damn you, Rev Ed!
RLF and RRF (rotate file register left/right) are part of the RISC instruction set for 8-bit PICs; why wouldn’t they allow access to that in all their designs?

In any case, I won’t argue that lookup tables aren’t ever useful, but I still think they should work with the decision making code rather than replace it altogether =)