Rudimentary LDR eye

Old idea rekindled

This idea about very basic robot vision has haunted me ever since I first saw AmandaLDR. Even before it was named after a canine. It's somewhat inspired by the evolution of the first eyes in nature: just a bunch of light-sensitive cells on your rump.

Chris' ramblings about Walter the other day inspired me to make this vague idea more defined. There are still many open questions. And I do not have the time at the moment to pursue this in my lab. So I invite you all to give this some thought and perhaps a little experimentation. Please share!

Situation

This is set in The Carpenter's house, but it could easily be anyone's residence. Or office. Or maze. It is the imaginary "Green Room" which I think Chris has. He does not. Probably. Probably not. But the colour works for the nifty diagram.

eye_ldr_floorplan.png

 

Consider a large window in the North wall and a smaller one in the West wall. In the East is a door to another room. In the South is the door through which Walter just entered. He does not smell a Wumpus; that sensor has not been invented yet.

Somewhere on top of his frame, Walter has eight light-sensitive cells. Engineered, rather than evolved. These are humble (cheap) LDRs, each somewhat encased in a light-obscuring material. Each cell "looks" in a different direction. Let's number these cells 0 through 7, clockwise. (Start counting at zero if you need to prove you're a nerd. Rotate stuff clockwise to prove you're not a fundamentalist nerd.)

Each cell receives a different amount of ambient light, and thus each LDR presents a different voltage to the 8-bit ADC channel it is hooked up to. The "brain" receives eight byte values.
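
For the record, the acquisition could be this simple. A minimal sketch, assuming a part with eight ADC channels (a 28X2, say, or a multiplexer in front of a single ADC pin) and my own made-up channel numbering; each reading is parked in scratchpad locations 0-7:

symbol index = b8
symbol value = b9

' read all eight cells, one ADC channel per LDR
' (channel number = cell number; my assumption)
for index = 0 to 7
readadc index, value ' 8 bit reading, 0..255
put index, value ' park it in scratchpad location 0..7
next index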

Beware: Assumptions make for ugly offspring!

Assumption #1

Each room in the house has a unique lighting situation. Recognising this situation equals identifying the room.

This octagonal LDR setup will produce a data pattern based on the distribution of light around the robot. Here it is in another diagram. The cells are now rearranged in a straight line.

[diagram: the eight cell readings laid out in a row, at full and at reduced resolution]

The top diagram shows the values at 8-bit resolution. Values range from 0 to 255 (decimal). Cell #0 reads 171 out of 255. That translates to 10101011 (binary). All eight values together form a data pattern. We are now going to program some very basic pattern recognition.

Assumption #2

The eight values in such a data pattern can be combined into a memory address, simply by stringing all the bits together into one 8×8 = 64 bit long word. At each such address the brain holds information about location. Or it will store some, while it is learning.

For example, the combination of all eight values above forms a veeeeeery long address. This memory address holds data that has meaning to Walter, along the lines of "you're in the green room facing North".

The veeeeery long, 64 bit, address is a bit of a problem. We are programming a very stupid computer. We simply cannot juggle words that long. And we do not have enough memory to store even one byte in 2^64 different places: 2^64 bytes adds up to 16 exabytes of memory. Besides, it would store the same old info in many, many places anyway. That is wasteful. Of memory space and of learning time.

So we need to dumb it down a few orders of magnitude. We must make the cells "less precise" about the light levels they see. I propose to scale the values 0-255 down to 0-3. That is a factor of 64, saving us 6 bits per cell and leaving 2 bits per cell, for a total of 8×2 = 16 bits in the resulting pattern. That is a normal word value in a Picaxe, and it calls for 2^16 = 65536 addressable bytes, i.e. 64 kilobytes of memory space. That would easily fit in an external EEPROM, for example.
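
To make that concrete, here is a minimal sketch of the dumbing-down, assuming the eight raw readings already sit in scratchpad locations 0-7 (as in the sketch above) and an X1/X2 part for the shift operators; variable names are mine:

symbol index = b8
symbol value = b9
symbol pattern = w5 ' uses b10:b11

' string eight dumbed-down readings into one 16 bit address
pattern = 0
for index = 0 to 7
get index, value ' fetch one raw 8 bit reading
value = value >> 6 ' 0..255 becomes 0..3, i.e. 2 bits
pattern = pattern << 2 | value ' append the 2 bits (picaxe math is left to right)
next index
' pattern now holds the 16 bit memory address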

The second diagram demonstrates this low resolution data pattern being derived from the first one.

Assumption #3

An oldie: teach your robot, rather than programming it.

Let Walter roam the house, avoiding collisions as if he were a Start Here bot. Each time he enters a room he does not yet recognise (or no longer recognises), he will ask "BLEEP?". You will have to tell him somehow (suggestions welcome) where he is. Walter will store this new info into the appropriate, empty, memory nook.
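
In code, the learning step could be as small as this. A sketch, assuming a 24LC512 (64 KB) i2c EEPROM, the 16 bit pattern already in w5, and room number 0 reserved to mean "never seen"; the serial back-and-forth is just one way to "tell it somehow":

symbol pattern = w5
symbol room = b12

i2cslave %10100000, i2cfast, i2cword ' 64 KB eeprom, e.g. a 24LC512

readi2c pattern, (room) ' look up this pattern
if room = 0 then
sertxd ("BLEEP?", cr, lf) ' never seen this pattern before
serrxd room ' wait for a room number over serial
writei2c pattern, (room) ' remember it
pause 10 ' allow the eeprom write to finish
endif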

Next time he enters the same room, he will hopefully receive the very same pattern through his eye. This is one reason to dumb down the patterns: a slight variation in lighting conditions (a thin cloud drifting over the house, for example) will not upset the patterns too much.

Or better: the dumber patterns are not as sensitive to variations.

At first the bot would need to learn a lot. Many empty memory nooks and crannies to be filled. "Need more input, Stephanie." Just leave Walter alone in a room for 24 hours and let him soak up all the patterns he can get his eye on. Daytime and night time patterns. With and without people in the room.

He would not need to "BLEEP?" for a location, because he is under orders not to move. All patterns will register the same room as the location to recognise next time around. Walter needs to soak up each and every room this way. Well actually, Walter need not be attached to this part of his brain during this educational tour around the premises. He could just be stumbling through the yard, old school style.
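
The overnight soak then needs hardly any brain at all. A sketch, assuming the same eeprom setup as above, a hypothetical gosub take_reading that fills pattern from the eye, and the room number preset in room before you walk away:

soak:
gosub take_reading ' fills pattern from the eye (hypothetical)
writei2c pattern, (room) ' this pattern now means "this room"
pause 10
sleep 26 ' nap for roughly a minute (26 x 2.3 s)
goto soak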

Assumption #4

If only I could send a part of my brain some place else to soak up interesting stuff while I was getting drunk in the yard...

 


 

Update 10 April

Oddbot and mintvelt are both suggesting the same thing. I did not mention it in the original post, because it was already too long.

They speak of smarter ways to produce a more distinctive data pattern. Ideally, each room would produce only one pattern. (That is what bar codes are all about. The pattern is then called a code.) But natural variations in light will probably cause many patterns per room.

Think of fingerprints. I could scan my right thumb a million times and the flatbed scanner would produce a million different bitmap files. Here is an imaginary example.

 

fingerprint_scan.jpg

I could dumb it down the way I propose above. And I would end up with a much coarser bitmap. Here is the same image in a 64x64 pixel version.

This 64x64 monochrome bitmap can hold 2^4096 different combinations of pixels. That is a lot "dumber" than the 480x480 version above, but it is still not very efficient. My one thumb could still produce a zillion different patterns in an algorithm like that, each of them different from the next by only one pixel. One of those patterns would look like this.

It's exactly the same print, but rotated 90 degrees. To a computer it holds very different data, but to us it is still the same old information: my thumb print.

Now, if I were to follow each line in the pattern and note all the crossings and junctions and endpoints, I would end up with this "roadmap".

This roadmap would hardly ever change. This map is the kind of pattern the police store in their fingerprint database. It makes it much easier to search for matching patterns. Rotating it would not change the data.

This roadmap version holds a lot less data, but the information is still very distinctive. Compare:

480x480 = 230400 pixels or bits
64x64 = 4096 pixels or bits
roadmap = 20 vectors, some 64 points

Taking room prints

The same principle can be applied to the lighting situation in a room. The Green Room could be described as:
"big window, corner, doorway, dark corner, doorway, corner, small window, light corner".
I could draw up a translation table that says "big window" = 3, "dark corner" = 1, etcetera.
The pattern would then read as:
"2, 3, 1, 1, 2, 2, 2, 1".
(I bolded the first two values for recognition.)

And that is exactly what my aneurysm proposes to do. But it is still a bitmap: rotating it produces different data for the exact same room. The above pattern after 90 degrees of rotation would be:
"1, 1, 2, 2, 2, 1, 2, 3".

This is totally different data to a computer, but it holds the same information to us. If we turned the pattern "right side up" before feeding it into the computer, we would help it greatly in searching for matching patterns in its "room print database".

So which side is the right one? Without any external source of orientation (like a compass), we can only go by the available data. I propose to use the brightest light source out of the eight detected levels. Either do this in hardware (put the eye on a servo) or in software (shift the values until the highest value sits up front in the series). Both example patterns would turn into
"3, 1, 1, 2, 2, 2, 1, 2".

When you consistently use the brightest/highest value as your point of orientation, you will get consistent pattern comparisons. The hardware approach will probably give slightly better results, because the servo can turn to any angle in high angular resolution, zooming in on the brightest light source. The software can merely rotate the values in 45 degree steps (one step per cell).
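
The software version is a minimal find-and-rotate. A sketch, again assuming the eight raw readings in scratchpad 0-7; the rotated series ends up in locations 8-15 (names mine):

symbol index = b8
symbol value = b9
symbol maxval = b10
symbol brightest = b11
symbol offset = b12

' find the brightest of the eight cells
maxval = 0
brightest = 0
for index = 0 to 7
get index, value
if value > maxval then
maxval = value
brightest = index
endif
next index

' copy the series, starting at the brightest cell, wrapping around
for index = 0 to 7
offset = index + brightest // 8 ' picaxe math is left to right
get offset, value
offset = index + 8
put offset, value
next index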

How about that weather, huh?

All that will hopefully take care of the rotation/orientation issue. How about varying light levels? Both Oddbot and Mintvelt suggest looking at differences between values, rather than at absolute values. Differences themselves can be calculated absolutely or relatively.

The reasoning goes like this: no matter how bright the weather outside, the big window will always be the brightest looking wall in the room (in daytime, anyway). Two suggestions:

A) Bring the values down so that the lowest value is always zero: subtract the lowest value from all values (absolute differences remain). The dark/bright relations remain intact. Fewer different patterns, same information. Good!

B) Bring the values up until the brightest value is maxed out: multiply all values by 256 and divide by the brightest value (relative differences remain).
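
Both suggestions fit in a few lines. A sketch, assuming the readings in scratchpad 0-7 and the lowest/highest values already found in minval/maxval (pick one option, not both). I multiply by 255 rather than 256, so the result still fits in a byte:

symbol index = b8
symbol value = b9
symbol minval = b10
symbol maxval = b11

' option A: floor the pattern, darkest cell becomes zero
for index = 0 to 7
get index, value
value = value - minval
put index, value
next index

' option B: stretch the pattern, brightest cell becomes 255
for index = 0 to 7
get index, value
value = value * 255 / maxval ' picaxe math is 16 bit internally, no overflow
put index, value
next index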

I hope that helped...

I like it
Surely it would mean this robot would only be accurate during daylight/dusk hours though?

No, not "surely"

"for you to experiment"

and

"for me to read here the next day"

:wink:

memories

Oh boy. Creative Computing and "hunt the wumpus". I remember those days.

Anyway. To overcome the time-of-day and the artificial-lighting-on-or-off problem, it may be better not to register absolute values, but rather differences between the 8 readings, and even changes in readings as the bot moves in a certain direction.

That way, changes in lighting caused by the bot moving to a place where some of the light is obscured by furniture can be taken into account.

Just a thought

Ass outta you and me…

When you say we are talking about a stupid computer, you forgot to add that it is also limited by the code available… For instance, when you are calculating bits, you have to remember that a picaxe will not let you take a byte apart -other than the first 2 bytes (b0 and b1), which map onto single bit variables, not strings of them. Another limitation is variables: you only have 28 or so. As these values are stored and read, they have to be put into variables to be worked with, i.e. compared etc.

This is sorta a funny situation we have here, in the fact that we have an artie and a techie thinking about this… My thoughts have recently turned to bar-codes. I have no idea what they would look like right now, but probably something round with a series of “rays” (of differing widths and spacing) coming from the center. A small rotating disc with line-follow sensor(s) mounted on the bottom of Walter could read it, know what room he's in and get a given direction to align himself to. I see these “discs” maybe in a corner -it should be easy enough to find a wall, and a quick “follow around the outside” would eventually find the disc in the corner.

I’m stuck on the floor for now --anyone have any ideas on what kind of sensor could see something on the ceiling? --Short of a camera, that is… But then again, I am buying a new MSI Wind in the next week or so -maybe it is about time I start to learn big-boy coding.

I just did some math…

I just re-read your 0-3 calculations… Just wanted to check my math. @64kbyte and (8) 256k EEPROMs, my math says we are talking (4) sensor readings per chip = 32 readings total? If we assume 4 readings per day (morning, noon, evening, night -only artificial light), we are back down to 8 rooms. This is assuming EEPROM space is not used for any other data.

No?

NO, I did not understand that

I assumed that 64 kilobytes would be sufficient to store all the 2-bit data from all eight cells (sensors). Or rather, we could address 65536 different bytes (locations in the EEPROM). Each location in the EEPROM corresponds to one possible data pattern.

I guess I also assumed that in an EEPROM it is possible to address each byte individually.

No?

to explain further

The entire EEPROM (or scratch pad perhaps?) would be reserved for this 64 KB lookup table alone. Other functions are not permitted. Maybe above the 64 KB boundary.

Yes?

oops
With each suggestion in today’s update, the coding became that much harder.

Alright boys…

You guys have me convinced… I’m gunna give it a go…

Proposal (for you to tell me it is all wrong):

Forget Walter for now, this will be done on a breadboard.

One sensor, probably just a lightly “shielded” LDR on a 360 degree servo, going into an ADC input (the LDR shooting upward at a slight angle)

This servo rig will be placed in the center of the room, assuming Walter could later do the same using sonar.

Spin one sweep, stopping and taking a reading at the (8) locations, noting the brightest

Spin back to brightest location -now calling this “start” or "0"

2nd spin, (8) stops starting at “0”, (8) readings into scratchpad memory

Read back values, do the “average out” math (×256 / highest)

Store to eeprom

Repeat for different rooms (a rough sketch of the two sweeps follows below)
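
A rough sketch of the two sweeps, with my own pin choices (LDR on ADC 0, servo on output 1) and made-up servo positions; a real 360 degree servo would need its own calibration, and a standard servo would only cover half a circle:

symbol index = b8
symbol value = b9
symbol maxval = b10
symbol brightest = b11
symbol pos = b12

servo 1, 75 ' initialise the servo

' sweep 1: find the brightest of eight headings
maxval = 0
brightest = 0
for index = 0 to 7
pos = index * 21 + 75 ' eight made-up positions, 75..222
servopos 1, pos
pause 500 ' let the servo settle
readadc 0, value
if value > maxval then
maxval = value
brightest = index
endif
next index

' sweep 2: eight readings into scratchpad, starting at the brightest
for index = 0 to 7
pos = index + brightest // 8 * 21 + 75 ' picaxe math is left to right
servopos 1, pos
pause 500
readadc 0, value
put index, value ' scratchpad 0..7, brightest first
next index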

 

At this point, I think I could try to figure out some code to start interpreting these numbers. However, I figure we are having so much fun with this conversation, and I know how much rik likes to make charts, so I will just come back with a bunch of data for you guys. --How does all this sound to you folks? Pretty good start for a proof of concept?

All wrong

Like the idea! A lot. Even most of the details.

I would like to see this 360 degree servo. Home brew?

I like the simplicity (code wise) of sweeping twice. Once to search for the brightest spot and again to take the final readings in the correct sequence.

The eight resulting readings stored temporarily in scratch pad. Hmmmkay. Average-out math. A challenge for a picaxe. It does not do fractions/decimals. Only integers. Bytes need to be processed as words (two bytes each). Then reduced back to bytes. Or better, to nybbles or even half nybbles.

(A nybble is of course half a byte. Half a nybble would likely be called a crymble.)

Before storing to eeprom comes an important step: deciding where in the eeprom to store anything. Just storing at the next available address is not very smart. Well, not as smart as I am anyway. Perhaps it is smarter. In which case, how would I know?

My plan hinges on the idea of storing “You are in the green room” (coded into a single byte) at the location dictated by the eye. This is where we need to string those crymbles together into a 16 bit address.

I’ll see if I can come up with some picaxe basic that can do that.

Pretty code and some luv

I love it that you’re picking up this wild goose chase! Here’s my contribution in prettified sample code. It works in my simulator. Mind the bitwise shifters; they require an X1 or X2 Picaxe.

#picaxe 28x1

' Rudimentary LDR eye
' https://www.robotshop.com/letsmakerobots/node/6461

symbol pattern = w4

' some fake readings from eight different directions all around
' starting with brightest measurement
symbol cell0 = 220 ' brightest reading
symbol cell1 = 86
symbol cell2 = 117
symbol cell3 = 140
symbol cell4 = 147
symbol cell5 = 180
symbol cell6 = 82
symbol cell7 = 171

'symbol scale = 255 / cell0 ' this would be 1.159 if picaxe would do decimals

' better ramp up the scale to word size for now
symbol scale = 65535 / cell0 ' 297

' scale up the values, normalizing according to brightest
' undo ramping up
b0 = cell0 * scale / 256 ' = 255
b1 = cell1 * scale / 256 ' = 99
b2 = cell2 * scale / 256 ' = 135
b3 = cell3 * scale / 256 ' = 162
b4 = cell4 * scale / 256 ' = 170
b5 = cell5 * scale / 256 ' = 208
b6 = cell6 * scale / 256 ' = 95
b7 = cell7 * scale / 256 ' = 198


pause 1000

' shift the reading 6 bit positions to the right
b0 = b0 >> 6 ' 11111111 -> 00000011
b1 = b1 >> 6 ' 01100011 -> 00000001
b2 = b2 >> 6 ' 10000111 -> 00000010
b3 = b3 >> 6 ' 10100010 -> 00000010
b4 = b4 >> 6 ' 10101010 -> 00000010
b5 = b5 >> 6 ' 11010000 -> 00000011
b6 = b6 >> 6 ' 01011111 -> 00000001
b7 = b7 >> 6 ' 11000110 -> 00000011


pause 1000

' repeat for each LDR:
' put the 2 remaining bits from each byte in the pattern word,
' then shift the word left by 2 positions, making room for the next two bits
pattern = 0 ' 0
pattern = pattern | b0 ' 11
pattern = pattern << 2 ' 1100
pattern = pattern | b1 ' 1101
pattern = pattern << 2 ' 110100
pattern = pattern | b2 ' 110110
pattern = pattern << 2 ' 11011000
pattern = pattern | b3 ' 11011010
pattern = pattern << 2 ' 1101101000
pattern = pattern | b4 ' 1101101010
pattern = pattern << 2 ' 110110101000
pattern = pattern | b5 ' 110110101011
pattern = pattern << 2 ' 11011010101100
pattern = pattern | b6 ' 11011010101101
pattern = pattern << 2 ' 1101101010110100
pattern = pattern | b7 ' 1101101010110111 aka 55991 aka $DAB7


pause 1000

Don’t forget to store your room identification code (room number) in eeprom at that address: 55991.

Screw bit-shifting, use points…

First off, a picaxe can do all kinds of math involving decimals -it’s just the answer that gets spit out as a whole number. Here’s what we got:

Set your test rig up in the first room and have it take a reading every hour for a day, maybe 10 readings. Average them, or not -whatever- and stick them in your eeprom. Just like this:

writei2c sequential address (room number,b0,b1,b2,b3,b4,b5,b6,b7)

You can write to sequential addresses, it does not matter. The room number is a fixed number I pre-assign to each room, and b0-b7 are the LDR readings. Let’s now assume you have 50 readings in packs of 10… 5 rooms.

Now Walter heads into a new room, does the “look for the brightest, start there” thing and ends up with another set of readings in b8-b15.

Now you simply read back each eeprom address and compare b0 to b8, b1 to b9, b2 to b10 and so on. As we compare we just use:

How much different is b0 from b8, + or -? The room number (which could be an eeprom address or scratch pad number, to save variables) gets those “points” -the points being the difference between the current reading and the prerecorded one. No negative numbers here, just the difference as a whole number. Continue this process with the remaining variables in each pre-recorded eeprom data address.

Now this could also be done in two categories: one set of points for each pack of 10, to then be averaged together for a “room total”, or each of the 10 entries in a pack could win within that room number and then compete with the winners from the other “room packs”.

Another way would be to assign scratchpad or “reserved” eeprom numbers. Let’s say we have pre-recorded the readings starting at address #11. Now, after Walter has taken his “which room am I in” sweep, we read #11-#21 and rewrite them into #'s 1-10. As we go through our “assign points” system, we can again rewrite the winners: 10 turns into 1 winner, written at address #1. In the end we have #'s 1-5 representing the winners from each room, taken from each room’s original pack of ten. Again and again we are trimming down the choices, but doing it based on scratchpad or eeprom address #'s instead of using 482 different variables that we don’t have.

I am rambling here, but the point is: we simply compare samples of each room and assign points for how close they are to what we are reading now. The room with the fewest points (the smallest total difference) wins.
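
That points system fits in one nested loop. A sketch, assuming the fresh (brightest-first) reading in scratchpad 0-7 and five pre-recorded samples in the i2c eeprom, each stored as nine bytes: the room number followed by its eight readings (record layout and names are mine):

symbol index = b14
symbol value = b15
symbol stored = b16
symbol room = b17
symbol bestroom = b26
symbol addr = w9 ' b18:b19
symbol bytepos = w10 ' b20:b21
symbol score = w11 ' b22:b23
symbol best = w12 ' b24:b25

i2cslave %10100000, i2cfast, i2cword

best = 65535
for addr = 0 to 36 step 9 ' five records of nine bytes each
readi2c addr, (room) ' first byte of the record: room number
score = 0
for index = 0 to 7
bytepos = addr + 1 + index
readi2c bytepos, (stored) ' pre-recorded reading
get index, value ' fresh reading
if value > stored then
score = score + value - stored
else
score = score + stored - value
endif
next index
if score < best then
best = score
bestroom = room
endif
next addr
' bestroom now holds the closest match (fewest points)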

Now don’t get me wrong, we are talking about a metric ■■■■■■■■■ of “if x>y then… if y>x then…” but who cares? We don’t have any pauses, PWMs or servos to deal with here, so let’s just overclock the picaxe up to 16MHz (or one of those new fancy X2’s up to 64MHz) and fly through these calculations. Badda boom, badda bing. Simple.

 

Christ, I hope this makes sense tomorrow… this is 5 pints of Guinness logic here, yo.

Holy crap I have a clock…

I’m a friggin’ god! I have a clock, yo. When we were in the thick of my LCD problems I, in a fed-up rage, bought a “picaxe brand” LCD and paid the extra 5 bucks for the clock upgrade! Walter knows what time it is… This should trim down the number of samples we would have to go through. When we are pre-recording samples of each room to stick on the eeprom, we just time stamp them. When we go back to compare, we can compare to the one that is closest to the current time. -Maybe the closest three or so. At any rate, this would almost eliminate the sun-going-down problem and certainly determine the time at which artificial lights have been turned on. Now, if this system works and is used for any great amount of time (over months), we would have to re-record the samples to deal with the shorter winter days etc., but really, that’s only, what? -Twice a year?

Word to ya motha.

Would it be possible to put
Would it be possible to put a small color sample on the wall or floor near the doorway of each room for him to see as he enters? Red = kitchen, blue = bathroom, etc? Maybe a barcode or RFID?

I luv it rik :slight_smile:

I’m probably dense but if

I’m probably dense, but if the general idea is to do room identification, then I have an uneasy feeling that the absolute position of the robot is assumed. I know at my house, depending on the position of a hypothetical LDR sweeper within a room, you could dramatically change the “room identification combination”.

I’m tingling in anticipation of Chris’ real data. But I suspect that a single room can have all 256 combinations :stuck_out_tongue: depending on many factors (absolute position, where the division of light is when scanning, reflection related to position, etc. etc.)

Is the goal (aside from having a lot of fun) to identify / map a house? If so, will accuracy improve if another positioning system is used to augment the data? What other systems? Sonar?

I’m tingling…

**I assume the goal**

is not only room identification, but to teach the bot to recognize a room based on its own repeated analysis of inputs? The way us organic meatbags do it? So wouldn’t barcodes or color stripes, etc., sort of defeat that? Or am I reading too much into it?

I like this thread, btw.

that is not the goal

that is the appeal of this setup

its simplicity and analogy to us is just too friggin’ beautiful

[and reading “us organic meatbags” made me think of biodegradable packaging material for American food - nice confusion in the early morning]

My…
head hurts :smiley: