Occupancy Grid Programming Help

With much help and inspiration from Gareth and Grog (and others) I have a working lidar (wiicamera and laser) system. Currently, I am using a very low-power laser (while checking the mail every day for the good laser to show up) and getting about 16" of range. For now, during testing, this is just fine. Everything is just at a smaller scale, but proportionally, everything is equal.

I am taking one distance measurement every degree in a 180-degree sweep. This array of 180 distances is sent off to Processing via BT. Thank you, rik, for help with the Processing code --an oldie but a goodie, dusted off and still working great. I start with a small circle in the center of the screen; this shows the location of the camera. From this point, we have 180 imaginary "rays" going out 1 degree apart. On each of these rays we place a dot. The dot's distance from the center matches the corresponding distance read by the lidar. We draw all the dots, connect each one with a line, and we get this:
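In rough Java (not the actual Processing sketch), the dot-plotting boils down to a polar-to-screen conversion; the center point and the pixels-per-inch scale here are made-up examples:

```java
// Sketch of the dot-plotting described above: each of the 180 readings is an
// angle (degrees) plus a distance (inches), converted to a screen x,y around
// a center point. centerX/centerY and pxPerInch are assumptions, not values
// from the real code.
public class PolarToScreen {
    static int[] toScreen(int centerX, int centerY, double pxPerInch,
                          int angleDeg, double distInches) {
        double rad = Math.toRadians(angleDeg);
        int x = centerX + (int) Math.round(Math.cos(rad) * distInches * pxPerInch);
        int y = centerY - (int) Math.round(Math.sin(rad) * distInches * pxPerInch); // screen y grows downward
        return new int[]{x, y};
    }

    public static void main(String[] args) {
        double[] distances = new double[180];   // stand-in for the BT array
        java.util.Arrays.fill(distances, 16.0); // pretend 16" in every direction
        for (int a = 0; a < 180; a++) {
            int[] p = toScreen(250, 250, 10.0, a, distances[a]);
            // in Processing you would point(p[0], p[1]) here,
            // then line() from the previous dot to this one
        }
    }
}
```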

Lidar_Processing.jpg

Anywhere the line changes from blue to red (or vice versa) there has been a change in distance (between 2 adjacent points) and that change is above a given threshold. We can consider this an "edge". From one "edge" to another can be considered a "path", in that we would never drive into the corner or "edge" of anything. Zip through all the arrays of distances and angles and paths and "paths that are wide enough to fit through" and you end up with Walter picking the path that is furthest away and one that he can physically fit through. We could also use this same data to find the longest "blind corner", which would be the one he would be most "curious" about. Or we could go to the closest one to "investigate", etc. Basically, we end up with a bunch of arrays of data; we can go through them in different ways with different criteria and choose a given path among the many available ones.
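The edge-spotting step can be sketched roughly like this in plain Java (the threshold and the pretend scan data are made up, not Walter's actual numbers):

```java
// Sketch of the "edge" idea above: walk the 180 readings and flag an edge
// wherever two adjacent distances differ by more than a threshold. The spans
// between edges are then the candidate "paths".
public class EdgeFinder {
    static java.util.List<Integer> findEdges(double[] d, double threshold) {
        java.util.List<Integer> edges = new java.util.ArrayList<>();
        for (int i = 0; i + 1 < d.length; i++) {
            if (Math.abs(d[i] - d[i + 1]) > threshold) {
                edges.add(i); // edge sits between reading i and i+1
            }
        }
        return edges;
    }

    public static void main(String[] args) {
        // pretend scan: far wall, then a close box, then a gap
        double[] d = {10, 10, 10, 3, 3, 3, 12, 12};
        System.out.println(findEdges(d, 5.0)); // the two big jumps are edges
    }
}
```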

Whew... Man Chris, you gunna ask a question any time soon?

I want to start exploring the idea of occupancy grids. I am gaining much inspiration from this.

Now, I have got the very basic idea here. If you look at the picture above, you see basically a half circle. This circle juts in and out, but it does start at the left, continues to the right, and is a continuous line. How do I code the simple idea that everything inside the circle is clear, and everything outside the circle we don't know about yet? I can draw a grid, I can associate an array with it --no problem. But how do I say, "If there is a piece of a line or a dot or pixel at these coordinates, change the corresponding array location to 1"? Or, "draw an imaginary line between every point and the center, and for any part of the grid that touches that line, change the array value to 1"? More so, I would also need to move the center dot as Walter moves to a new location on the grid. Actually, now that I think about it, that's a simple thing to do in Processing. I got that one. Focus.

I think this would be similar to a video game and how characters would interact with walls and objects. Maybe I should draw a line from the center to each point --draw each ray-- Man, I got nuthin' here... I don't even know how to begin thinking about the problem...

Go easy on me.

https://www.youtube.com/watch?v=b6XUOhjkgwQ

Oh well, I have succumbed to

Oh well, I succumbed to this problem last year… Never got an idea of how to actually do it; it involves some math I never knew (or did I?). Talked to some guys about it, programmer guys that do that for a living, but to no avail. If you do manage to do it, please share it; my robot really needs to know where to go… Did I mention I wanted to do this in a microcontroller? I mean just the mapping part: scanning, displaying and deciding where the objects are… I know there is not enough room for a large map inside a microcontroller, but I was thinking of working with smaller blocks of a bigger map. Oh guys, please help here!

SLAM, now a proper word.

"Simultaneous Localization And Mapping" is what it's called. Not just by me, but also by people who have done it before you. Find those clever bastards and tell them to get their ■■■■ inside a prop board for you.

Or show us your printouts and tell us where you got stuck. I know you have a desk full of them!

I’m curious, for the

I’m curious, for the purposes of maintaining an occupancy grid, can you abstract a group of data points into a blob the size of your robot? If the idea is to help a robot (Walter, in this case) navigate, then it seems like you could speed processing and reduce data by building a grid of roughly Walter-sized squares. If any of the data points within a square are showing an object, you can’t go there.

Forgive me, these concepts are pretty new to me. I’ll have to give your post on Crum a good read through.

a few things that come to

a few things that come to mind…probably obvious to some, and not helpful, but sometimes worth pointing out the obvious

when I think of an occupancy grid, as you point out, I think of a 2 dimensional array, which does fit the model of an area in front of you.    

as each vector or ray goes further away from the origin, there will be spaces in that grid you simply will not see.  not because vision is blocked, but because a fixed increment of 1 degree will land behind or in front of an object, and only as you get closer will they come into view.   however, you would think 1 degree would give pretty good resolution until you get too far out.

when imagining how hard drives work, you picture a similar map, and the further away from the center you go, the more loss you have, aka “sector overhead"

I think your main question is: But how do I say, “If there is a piece of a line or a dot or pixel at these coordinates, change the corresponding array location to 1”?   Aren’t you answering the question in the question?   Are you wondering how to compute the x,y coordinates for a blip found?  Such as: object found at a distance of 30” at 57 degrees, therefore using vector math, that is grid position x,y, therefore place a 1 in the array.

or am I missing the whole point of your question.
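in rough Java, that vector math would look something like this (the 6" cell size and the grid origin here are just made-up examples, not anything from the thread):

```java
// Sketch of the hit-to-grid-cell step: a blip at distance d (inches) and
// angle a (degrees) becomes a [row][col] cell via basic trig, and that cell
// gets a 1. CELL size and the origin cell are assumptions.
public class HitToGrid {
    static final double CELL = 6.0; // inches per grid square (assumed)

    static int[] cellFor(double distInches, double angleDeg, int originCol, int originRow) {
        double rad = Math.toRadians(angleDeg);
        int col = originCol + (int) Math.floor(Math.cos(rad) * distInches / CELL);
        int row = originRow - (int) Math.floor(Math.sin(rad) * distInches / CELL); // row 0 is the far edge
        return new int[]{row, col};
    }

    public static void main(String[] args) {
        int[][] grid = new int[50][100];          // [row][col], 0 = open
        int[] cell = cellFor(30.0, 57.0, 50, 49); // the 30" at 57 degrees example
        grid[cell[0]][cell[1]] = 1;               // 1 = object
        System.out.println(cell[0] + "," + cell[1]);
    }
}
```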

**Awesome.**

First and foremost, Hello to Big Face!! So good to see you again, my friend. And yes, I would LOVE to get a look at your code!

I can agree with almost everything said already. But my original question still stands… However, I gave it some thought and I think I can ask it in a better way…

If I were making a button, I would draw a rectangle and put some words in it. From there, the commands would be, if mousex>recx and mousex<recx+recwidth and mousey>recy and mousey<recy+recheight and the mouse button is clicked -->then you hit the button.

Would this sorta thing need to be done with my sonar line? I just can’t seem to get my head around the fact that I basically need to check every pixel on the screen (or every array location) over and over and over and just keep comparing them to the data I got. Isn’t there a simpler/faster way? Or is this just what computers do?

Oh yeah, I totally know about the system of numbering the squares --and using the numbers to know how far away from “target” we are. For now, gotta keep it simple --If I can make a system with just a bunch of 1’s and 0’s for now. Just red/blue or black/white squares. Something there or something not there. Keep it binary, keep it simple --we can get complicated later.

I understand and agree with

I understand and agree with the mouse logic analogy, but not sure how it relates to your problem.

your picture appears to assume solid objects behind each hit, which may or may not be true.  instead of an entire line behind each hit, how about just placing a red dot at each hit.   then as you progress around the 180 degrees, it would draw the outline of the room or the objects in front of you, from your point of view.  

obviously, you won’t have knowledge of what’s behind each blip until you move elsewhere.  then when you take 2 steps forward, scan again, and a little more of the picture gets filled in, because now you see a little more of what’s behind there.

I’ll shut up after this, :slight_smile:  because it looks like BigFace has done what you want to do.

You’re not simply looking for these conversions again, are you?


(source svg)

Simple concept --didn’t sink in

@rik --Awesome picture by the way…

I have explained this to a lot of newbies but alas, I didn’t seem to get it this time…

Take a look at your picture, rik --My first thought was, “of course the point at the end of the ray (@ 3’) would trigger the square below it. Now, what about all the squares that are touched by the ray itself?”  --Now that I think about it, you don’t have to do anything with them! They are already open!!! This is like asking, “how do I tell my robot to NOT do something…” Well, it won’t do anything by default, unless there is code telling it to do otherwise. In this case, there is the simple fact that we are NOT looking for open space (we assume everything is open to start with) and we are looking to ELIMINATE open space.

This is not an issue of in front of the line or behind the line. --Known and unknown-- Instead, we are only worried about the actual line itself. The line (the obstacle or outside wall or whatever) is the only known we have, and that known only exists ON THAT LINE! The obstacle could be paper thin, and when we get around this barrier, we could find a ton of open space behind it. Who cares? The only thing we can know --or need to know-- is just that line. As we move around and take more measurements, the “line” will gain “depth” because it will be supplemented by many other lines.

@Big Face

I like your thoughts on each grid space constantly getting flipped on and off as we remeasure it. I use a similar system (to solve this problem) for my navigation. I start with a direction we WANT to go and then run it through a bunch of tests. During these tests, ways we CAN go get eliminated or “make the cut”. Basically, the same chunk of data gets a couple of chances to make the cut, and in the end we get the most “trustworthy” data we can get. I would assume the grid system would work the same. Each square gets run through some yea/nay tests, with those tests being affected by the “score” the square already has. If the square has gotten an OK the last 4 times it was checked, and thus has a high score, it will take a lot more nays before we can change it. In the end, the goal is to have the “most trustworthy” or “most likely to be correct” data. On the other hand, as with everything, I will give it a shot with a simple 1 or 0, see how sloppy it is, and add more time/code/precision as we need it.
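That yea/nay scoring could be sketched like this in plain Java (the score bounds and the "occupied" threshold are numbers I made up for illustration):

```java
// Sketch of the scoring idea: each grid square keeps a small score, a hit
// nudges it up, a miss nudges it down, and the square only counts as an
// object once the score crosses a threshold. MIN/MAX/OCCUPIED_AT are
// made-up values, not anything from Walter's code.
public class CellScore {
    static final int MIN = 0, MAX = 8, OCCUPIED_AT = 3;

    static int update(int score, boolean hit) {
        int s = score + (hit ? 1 : -1);
        return Math.max(MIN, Math.min(MAX, s)); // clamp so old evidence can still be overturned
    }

    public static void main(String[] args) {
        int s = 0;
        for (int i = 0; i < 4; i++) s = update(s, true); // four OKs in a row -> trusted
        s = update(s, false);                            // a single nay doesn't flip it
        System.out.println(s >= OCCUPIED_AT ? "still an object" : "open");
    }
}
```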


Single points vs. size of grid.

Paul, you got it. Single dots.

Simple when you think about it… If the dot is in the square, that square gets the nod. As I said above, we are only worried about the data right on that line, or in this case, the very end of each of the “rays”. Forget about connecting the dots --the grid will tell the tale in the end.

KISS, Chris…

BTW rik…

That picture of yours up there ^^ is going on a tee-shirt.

“rik is my boink filter” on the back.

Late night update…

So far.

  • hooked up keyboard to propeller
  • propeller hooked up to Bluetooth
  • able to send “arrow keys” to processing

   In Processing

  • made big double-decker array [50][100]  // [Y][X]
  • started image buffer at 1000x500
  • drew a bunch of boxes 10x10
  • seeded currentX and currentY and currentDirection for a start position
  • figured out array code: 
  • 0 = open
  • 1 = object
  • 2 = been-there path
  • 10 = Walter’s current position, pointing north
  • 11 = Walter’s current position, pointing east
  • 12 = Walter’s current position, pointing south
  • 13 = Walter’s current position, pointing west
  • Able to “drive” Walter’s position around on this grid, mark where we have been, and know which way we are pointing
  • Path is stored in the array
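A rough Java sketch of that bookkeeping (not the actual Processing code; the names and the one-step demo are mine):

```java
// Sketch of the grid bookkeeping in the list above: a [50][100] array,
// Walter's cell holds 10..13 for the heading, and the cell he leaves
// becomes 2 ("been there"). Key handling and BT input are left out; this
// just shows one forward step from an assumed start position.
public class WalterGrid {
    static int[][] grid = new int[50][100]; // [Y][X]; 0 = open
    static int y = 25, x = 50, dir = 10;    // 10=N, 11=E, 12=S, 13=W (assumed start)

    static void forward() {
        grid[y][x] = 2;     // mark the square we leave as path
        if (dir == 10) y--; // north is up the array
        else if (dir == 11) x++;
        else if (dir == 12) y++;
        else x--;
        grid[y][x] = dir;   // current square carries the heading
    }

    public static void main(String[] args) {
        forward(); // one step north
        System.out.println(y + "," + x + " old=" + grid[25][50] + " new=" + grid[24][50]);
    }
}
```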

First_Grid.jpg

Enough for tonight. Video tomorrow maybe.

Man, I'm tired... Arrays fill your brain up. Double decker arrays fill your brain twice as fast...

New rule…

Everything MUST be done at a 90 degree angle. If we/I don’t stick to this 90 degree theory, my head will explode.

Exploding CtC’s head

What if instead of a grid system you used a pattern of circles and rays to make a sector map like a disc drive? Then your mapping points would align more gracefully to your measurements. ; j

BOOOM!

(wipes bits of grey matter from sleeve)

You gotta be kidding me…

I am fighting tooth and nail to get through 90 degree angles… You want concentric circles? Yup, my head officially exploded.

Great Progress CTC

45 next right?

45 is doable…

Yeah, 45s are the next logical step… That said, I am on step 13 --that is step 87. I hope to drive around via RC tonight and get the side sensors going. Will be a good test of basic data transfer and getting that data into the main array.

update

Why does hardware always have to come around and stick its big, non-working nose into everything? Yup, I lost a motor driver last night. Funny --everything was super-duper A-OK and then all of a sudden, nothing. A little diagnostics later and I find it was the onboard Arduino brain itself that died. A little heat-gun work (and a new 328) and it seems that I am back in the ball game. However, it looks like I have a day of re-tweaking my current and volt sensors and getting this guy back to 100%.

Lost one day --pretty minor.

Hopefully driving forward and taking readings tomorrow.

Surface Mounted Balls of Steel

You replaced the uC on your Wild Thumper Controller? Wow, you’ve got guts!

Guts/ Desperation

rik, you could not be more correct…

I didn’t wanna; I had never done it before, and all I had was the kind of heat gun you would use to strip paint. I was scared ■■■■■■■■■ To be honest, it went a LOT better than I thought it would. Lifting the old chip off was pretty straightforward: keep the heat moving and keep tapping the chip with some tweezers. It eventually moved a little, I grabbed it and pulled it off. I did a little tidy-tidy on the pads, tried to “prep” them with some fresh solder, and went on to flooding/wicking the new chip. --That was really the hard part.

As we all know, solder wick works worst with small amounts of solder. I mean, you gotta have more stuff to capillary so the capillary action keeps capillaring. I found that it helps to “pre-soak” the braid a little bit and to be sure the tip of the iron stays wet with solder (for heat transfer). Took some patience, but I got it done.

My heart was pounding the whole time. --Literally.