Andar

re: OCR

Hi LordGG,

You asked about OCR.  I use Tesseract OCR.  If you Google it, it will come up.  I use the Java version and run it on the Android phone.  I think I only had to write a few pages of code to use it effectively for my purposes.  It was pretty easy to learn and use, although it returned a lot of false positives…characters where there were none…since I was using images from the robot’s camera…not a typical use of OCR.

The way you can determine position is this.  If you have a compass, and you know the servo position of the head, you can determine which direction the robot is looking.  The OCR library returns the coordinates (X,Y) and width/height of the text it finds…I seem to remember it will give you the position of each word if you want it.  You can determine how many degrees the word is offset vertically and, most importantly, horizontally within the image by using the pixel difference from the center of the image.  (I can send code if this doesn’t make sense.)  Add in the heading already calculated from the head position (compass heading + servo position), and you can get a heading for the text.
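
The offset-to-heading step is roughly this (a sketch rather than my exact code; the field-of-view value is a placeholder for whatever camera is used):

```cpp
// Sketch: estimate the compass bearing to a word found by OCR.
// Inputs: the word's X position from OCR, the image width, the camera's
// horizontal field of view, the robot's compass heading, and the head
// servo's offset from straight ahead. All angles in degrees; values are placeholders.
double bearingToWord(double wordCenterX,     // X of the word's center from OCR (pixels)
                     double imageWidth,      // image width (pixels)
                     double horizontalFov,   // camera horizontal field of view (degrees)
                     double compassHeading,  // robot body heading (degrees)
                     double servoAngle)      // head servo offset from straight ahead (degrees)
{
    // Pixel offset from the center of the image, converted to degrees.
    double pixelOffset   = wordCenterX - imageWidth / 2.0;
    double degreesPerPix = horizontalFov / imageWidth;
    double offsetDegrees = pixelOffset * degreesPerPix;

    // Heading of the camera plus the offset of the word within the image.
    double bearing = compassHeading + servoAngle + offsetDegrees;

    // Normalize to 0-360.
    while (bearing < 0.0)    bearing += 360.0;
    while (bearing >= 360.0) bearing -= 360.0;
    return bearing;
}
```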

The important thing is that the robot must already have a simple 3D map in its memory of where all the text is located in the room…obviously not ideal.  A little trigonometry can then determine position by using angles to one or more of these words.  I try to use at least two words on two different walls.
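
To give an idea of the trig involved, here is a rough sketch (again, not my exact code) that intersects the bearings to two known words; it assumes the word positions are already stored in the map and that bearings are measured clockwise from north:

```cpp
#include <cmath>

// Sketch: estimate robot position from compass bearings to two landmarks
// (e.g. two words on two different walls) whose map positions are known.
// Bearings are in degrees, measured clockwise from north (+y).
struct Point { double x, y; };

bool locateFromTwoBearings(Point l1, double bearing1,
                           Point l2, double bearing2,
                           Point &robot)
{
    const double PI  = 3.14159265358979;
    const double DEG = PI / 180.0;

    // Unit direction from the robot toward each landmark.
    double d1x = std::sin(bearing1 * DEG), d1y = std::cos(bearing1 * DEG);
    double d2x = std::sin(bearing2 * DEG), d2y = std::cos(bearing2 * DEG);

    // Solve robot + t1*d1 = l1 and robot + t2*d2 = l2 for t1, t2.
    // Subtracting gives t1*d1 - t2*d2 = l1 - l2, a 2x2 linear system.
    double det = d1x * (-d2y) - (-d2x) * d1y;
    if (std::fabs(det) < 1e-9) return false;        // bearings (nearly) parallel

    double bx = l1.x - l2.x, by = l1.y - l2.y;
    double t1 = (bx * (-d2y) - (-d2x) * by) / det;

    robot.x = l1.x - t1 * d1x;
    robot.y = l1.y - t1 * d1y;
    return true;
}
```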

Thank you too for participating and contributing.

Ears that see…for obstacle avoidance, localization, and depth

Another idea came to mind today that I believe I am going to try on my bot.

I designed and printed movable ears this weekend.  Each ear is about 2 inches wide and about 2.5 inches tall and looks exactly like a cat’s.  The idea is for each ear to rotate 180 degrees ranging from straight ahead to straight backwards.  The question then is, what kind of sensor to put in them for maximum benefit?  I think I have it.

https://www.robotshop.com/en/pixy-cmucam5-image-sensor.html

The beauty of these is that they do the heavy-duty vision processing right on the board at 50 frames per second and track many objects at the same time!  This is much better than I was able to achieve on Android with OpenCV.  A simple microcontroller like an Arduino should be able to handle 2 of them.  This is really, really exciting to me, as I was trying to figure out a way to do localization, a way to get some 3D vision so I can get the arms to grab things, and a way to get much enhanced situational awareness, without having to cram a couple of RPi’s into the bot, which I don’t really have sufficient room for.

By putting Pixy cameras in the ears, the ears can move around without having to move the head.  When both ears (cameras) are directly forward, I should be able to correlate the same object in both cameras’ output, and use the difference in position to know how far away it is.
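
If I can match the same object in both images, the distance should fall out of the standard stereo formula.  A rough sketch of that step (the field of view, image width, and ear spacing below are placeholders, not measured values):

```cpp
#include <cmath>

// Sketch: estimate distance to an object seen by both ear cameras.
// Assumes both cameras face straight ahead and the same object has been
// matched in each image (e.g. same color signature and similar size).
// Placeholder values: 320-pixel-wide image, ~75 degree horizontal FOV,
// ears spaced 0.10 m apart.
double distanceFromDisparity(double leftX, double rightX)
{
    const double PI          = 3.14159265358979;
    const double IMAGE_WIDTH = 320.0;           // pixels
    const double FOV_DEG     = 75.0;            // horizontal field of view
    const double BASELINE_M  = 0.10;            // distance between the two cameras

    // Focal length in pixels, derived from the field of view.
    double focalPx = (IMAGE_WIDTH / 2.0) / std::tan((FOV_DEG / 2.0) * PI / 180.0);

    // Disparity: how far the object shifts between the two images.
    double disparity = std::fabs(leftX - rightX);
    if (disparity < 1.0) return -1.0;           // too far (or mismatched) to estimate

    // Classic pinhole stereo: depth = focal * baseline / disparity.
    return focalPx * BASELINE_M / disparity;
}
```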

For localization, I think a simple system of remembering where objects were seen, their size, and in what direction could be used.  Later, an algorithm could correlate what it sees now with its memory and guess where it is.  Every time an object match is found, the candidate location gets a +1 vote.  Every time one object is found in a similar relative position to another object, this position might get a +3 vote.  I’ll write more on this later.  The point is, I think it is feasible on a small robot now.
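
The voting itself might be as simple as something like this (just a sketch of the idea; the structures, scores, and thresholds are made up, and the +3 check is simplified to compare against the remembered direction and size rather than pairs of objects):

```cpp
#include <cmath>
#include <map>
#include <string>
#include <vector>

// Sketch of the voting idea: each candidate location accumulates votes as
// current observations are matched against remembered ones.
struct Observation { std::string object; double direction; double size; };

std::string guessLocation(
    const std::map<std::string, std::vector<Observation>> &memoryByLocation,
    const std::vector<Observation> &currentView)
{
    std::map<std::string, int> votes;

    for (const auto &entry : memoryByLocation) {
        const std::string &location = entry.first;
        for (const Observation &remembered : entry.second) {
            for (const Observation &seen : currentView) {
                if (seen.object != remembered.object) continue;
                votes[location] += 1;                          // same object seen again: +1
                // Simplified relative check: roughly the same direction and size as remembered.
                if (std::fabs(seen.direction - remembered.direction) < 15.0 &&
                    std::fabs(seen.size - remembered.size) < 0.25 * remembered.size)
                    votes[location] += 3;                      // stronger match: +3
            }
        }
    }

    std::string best;
    int bestScore = 0;
    for (const auto &v : votes)
        if (v.second > bestScore) { best = v.first; bestScore = v.second; }
    return best;   // empty string means nothing matched
}
```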

A simpler method for a smaller bot (like Andar) might be to put a Pixy facing directly up.  I want the detail of looking around, determining distance, having some depth perception, and one day being able to grab things.

Color coding

This could work well for localization, just triangulate between known markers.

I see this working very well with marking stuff, but not at all well with things that aren’t all the same color.

A solution for some…

On another note, I stumbled across this on detection angles and thought you might find it interesting:

http://www.cs.cmu.edu/~illah/PAPERS/lopresti.pdf


 

Echo Location

A robot needs to be able to find precise locations and sizes of things in the environment. This can be done without sight.

Bats do it:

https://en.wikipedia.org/wiki/Animal_echolocation

And so do people when they look at fish finders, or when they hear an approaching noise.

I can see a complex mathematical way and a mechanical way:

Given that you have an ultrasonic signal and a directional ear, the ear can rotate to pinpoint the sound. Cats are particularly good at locating the sources of ultrasonic noises due to their ear size and shape; they have some 120 muscles to move their ears.

There is an assortment of USB microphones. There are also small, cheap electrostatics (analog) that have an extended high end. I bought a couple to use as leak detectors (ultrasonic hiss).

I have a couple of cheap USB unidirectionals coming for my catbot, but my idea with those is to locate the source of whatever is speaking and pull it out from the background clutter, not echolocation.

You asked for ideas…

Hello Martin,Thanks for

Hello Martin,

Thanks for elaborating on your method, I understand it better now. I didn’t have in mind that the OCR library could return so many things (like coordinates, size…). Once again, you had a smart approach :slight_smile:

Hi Martin,This sensor looks

Hi Martin,

This sensor looks promising. Its processing capabilities are amazing for a very affordable price! I am looking forward to what you will get out of it (or them :slight_smile: ).

Hi cyberjeff,What your

Hi cyberjeff,

What you’re talking about is another challenge I think (yet very interesting); I have the feeling it is about helping the robot to localize things around it, knowing its environment.

The localization of the robot itself (so the robot knows where it is, within a “system” like a room) is another challenge that Martin tries to address with his idea. But you are right, the 2 areas may intersect :slight_smile:

WoW…

That sensor looks amazing! Are you saying that I could use it to map the roof of my apartment?

    What you are doing with the ears is great, it’s like eyes, but better since you can focus them independently, well, it would be like a chameleon :slight_smile:

I’ll be following the growth of Ava!

Yeah, it looks so great that

Yeah, it looks so great that I have ordered one too to do some tests :wink:

I don’t think the sensor can map a place, but it can learn to recognize 7 objects/patterns/beacons. 7 is not a lot but can be enough to navigate in one or 2 rooms :slight_smile:

**Maybe facing the roof…**

If we face the sensor to the roof, and we have a compass, we can know where we are with only one color mark on the roof, right?

I don’t know what the precision would be, and we need a clear view of the roof, but knowing the robot’s orientation, and the angle where the mark is, we can calculate where the robot is.
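
Maybe something like this (just a sketch of the geometry; the ceiling height, image size, and field of view are made-up values):

```cpp
#include <cmath>

// Sketch: estimate robot (x, y) from a single ceiling mark at a known position,
// using the compass heading and where the mark appears in an upward-facing camera.
// Assumes the camera points straight up and its image Y axis points forward.
struct Pos { double x, y; };

Pos locateFromCeilingMark(Pos markPos,          // known map position of the mark (m)
                          double markPxX,       // mark X in the image (pixels)
                          double markPxY,       // mark Y in the image (pixels)
                          double compassDeg)    // robot heading (degrees from north)
{
    const double PI = 3.14159265358979;
    const double IMAGE_W = 320.0, IMAGE_H = 200.0;   // image size (placeholder)
    const double FOV_X = 75.0, FOV_Y = 47.0;         // field of view (placeholder)
    const double CEILING_HEIGHT = 2.0;               // camera-to-ceiling distance (m)

    // Angle of the mark away from straight up, in the camera's own axes.
    double angX = (markPxX - IMAGE_W / 2.0) * (FOV_X / IMAGE_W) * PI / 180.0;
    double angY = (markPxY - IMAGE_H / 2.0) * (FOV_Y / IMAGE_H) * PI / 180.0;

    // Horizontal offset of the mark from the point directly above the robot,
    // in the robot's frame (x = right, y = forward).
    double offRight   = CEILING_HEIGHT * std::tan(angX);
    double offForward = CEILING_HEIGHT * std::tan(angY);

    // Rotate into world coordinates using the compass, then subtract the offset
    // from the mark's known position to get the robot's position.
    double h = compassDeg * PI / 180.0;
    double worldDx =  offRight * std::cos(h) + offForward * std::sin(h);   // east
    double worldDy = -offRight * std::sin(h) + offForward * std::cos(h);   // north

    Pos robot;
    robot.x = markPos.x - worldDx;
    robot.y = markPos.y - worldDy;
    return robot;
}
```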

If we can recognize 7 items, we could locate the robot in 7 areas. And if we combine them…

If we use red as the spot to detect, and a series of other colors as the ID of the room… even more places to locate in!

Two separate problems

Yes, there are two different problems. Localizing a robot could be done with the color-shape tracker, but it tells you nothing about what isn’t marked.

Both problems need to be solved.

Theoretically it sounds doable

Theoretically it sounds doable :slight_smile: If the robot has a compass, aligns the mark in the center of the “viewfinder”, and knows the angle between the floor and the line which goes from the sensor to the mark, I guess it should work. Sounds good!

Direction

Take a look at the videos.

If you use two colors side by side, you can get the angle of the offset. On the ceiling, that gives you one more angle to tell where you are. It will tell you at what angle the walls are, among other bits. Helpful if you want to drive down the center…

Work with a compass and you have one more waypoint. Might be a case for quaternion geometry.
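
For example, if the tracker reports the centers of the two colored patches, the marker’s angle in the image is just an atan2 (a trivial sketch):

```cpp
#include <cmath>

// Sketch: angle of a two-color marker in the image, from the pixel centers
// of its two colored patches (whatever the camera reports for each blob).
// 0 degrees means the marker lies along the image's x axis.
double markerAngleDegrees(double x1, double y1, double x2, double y2)
{
    const double PI = 3.14159265358979;
    return std::atan2(y2 - y1, x2 - x1) * 180.0 / PI;
}
```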

Vision, Sonar, etc.

I think you guys made a lot of very good points.

Like you guys said, if you are using landmarks on the ceiling, you should be able to locate off of a single landmark using compass, servo angles, and trig.  Having more than one landmark should improve the accuracy.  Lighting could be tough.

I see the ability of Pixy to do 7 colors as quite a lot to work with.  With two-digit codes, 7^2 = 49 landmarks.  Three-digit codes: 7^3 = 343 landmarks.  Hagisonic does it with just a dot pattern; some of their later markers have some blue dots thrown in.  I thought about using beer cans with horizontal colored tape “rings” around them to make unique markers that I would set on shelves or other places around a room.  The rings would always have the same X and width and be close in Y, so recognizing them as part of a landmark should be doable.  The Pixy “color code” scheme is basically supposed to do what I just described, so it may already be there.  There are really so many ways to do this, but it all starts with being able to recognize something that appears unique or at least significant.  The Pixy software had a slider to control the sensitivity of each color so you could increase/decrease false positives/negatives.
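
If the built-in color codes don’t pan out, the beer-can rings could be decoded by hand with something roughly like this (a sketch only; the block structure and tolerances are made up, not the actual Pixy API):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Sketch: turn a stack of colored "rings" into a landmark ID.
// Blocks are whatever the vision system reports: a color signature (1-7)
// plus a bounding box. Rings on one can share roughly the same X and width
// and sit close together in Y.
struct ColorBlock { int signature; int x, y, width, height; };

// Returns a base-7 style ID built from the ring colors read top to bottom,
// or -1 if the blocks don't look like one stacked marker.
int decodeRingLandmark(std::vector<ColorBlock> blocks)
{
    if (blocks.size() < 2) return -1;

    // Sort rings top to bottom (smaller y = higher in the image).
    for (size_t i = 0; i + 1 < blocks.size(); ++i)
        for (size_t j = i + 1; j < blocks.size(); ++j)
            if (blocks[j].y < blocks[i].y) std::swap(blocks[i], blocks[j]);

    int id = 0;
    for (size_t i = 0; i < blocks.size(); ++i) {
        if (i > 0) {
            // Rings of one can: similar X and width, and vertically adjacent.
            if (std::abs(blocks[i].x - blocks[0].x) > blocks[0].width / 2) return -1;
            if (std::abs(blocks[i].width - blocks[0].width) > blocks[0].width / 2) return -1;
            if (blocks[i].y - (blocks[i - 1].y + blocks[i - 1].height) > blocks[0].height) return -1;
        }
        id = id * 7 + (blocks[i].signature - 1);   // signatures 1-7 become digits 0-6
    }
    return id;
}
```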

I was reading articles that said the Pixy firmware is open source; it seems about anything could be done by learning to modify it.

Localizing through vision like this will take a bit of time to do periodically.  The robot would likely need to track its odometry in between these scans if they are infrequent…which probably means encoders.

My previous post was a little off topic; it had a localization element to it, but was a lot of other things as well.  I am intrigued by the possibilities with vision for situational awareness with 2 Pixy cameras, as in this example:  Robot sees a yellow ball…determines it is far away, and moves towards it.  I then kick the ball towards the robot but miss…robot recognizes the ball is moving and getting closer, alters course, and stops in front of it…maybe kicks it back.  Football anyone?

Another example…robot is talking to me, but someone else walks by to the side or rear of the robot.  I would like the robot to recognize that through sonar and/or vision.  It seems like an awareness of moving objects as significant is fundamental to getting smarter.

One approach I would like to try is this…I am hoping to increase the sensitivity on the Pixies to max so I get a lot of false positives, pick colors that are general and common in my indoor environment, and have the robot pick up the positions and sizes of normal everyday things like plants, doors, wall outlets, and black rectangles like TVs, and remember them.  Using rare colors, or a combination of general and rare, might be good too.  Later, using a lot of software that I would have to write myself, recognize the similarity of past observations with current observations and conclude “I think I am on your desk in your office”.

I really wish we could do what bats and dolphins do with echolocation.  I think the way we design our sonars for robots is flawed.  I think you need one transmitter with several receivers on the same sensor in order to get the detailed data you need to determine the position of everything precisely.  I have heard that dolphins pick up the return signal along their jawline with sensors (their teeth) spaced evenly, about one wavelength of their clicks apart, on each side.  Much better sensors are needed, with much better signal processing.
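
To make the several-receivers point concrete: even two receivers a known distance apart give you a bearing from the difference in arrival times.  A rough sketch, assuming you could timestamp the echo at each receiver:

```cpp
#include <cmath>

// Sketch: direction of an echo from the time difference of arrival (TDOA)
// at two receivers a known distance apart. Positive angles mean the echo
// reached the right receiver first. Assumes the source is far away compared
// with the receiver spacing (far-field approximation).
double echoDirectionDegrees(double leftArrival_s,   // echo arrival time, left receiver
                            double rightArrival_s,  // echo arrival time, right receiver
                            double spacing_m)       // distance between receivers
{
    const double PI = 3.14159265358979;
    const double SPEED_OF_SOUND = 343.0;            // m/s in air at ~20 C

    double dt = leftArrival_s - rightArrival_s;     // seconds
    double s  = (SPEED_OF_SOUND * dt) / spacing_m;
    if (s > 1.0)  s = 1.0;                          // clamp rounding/noise
    if (s < -1.0) s = -1.0;
    return std::asin(s) * 180.0 / PI;               // 0 = straight ahead
}
```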

Gotta run.  Sadly I don’t get paid to post.

Impulse

Sonar is not a trivial project, I threw it out there because it is possible, not because it was probable.

In a distant life of mine, I built and ran live sound, and experimented. If you run an impulse, you receive all the reflections. Back then I was looking at reverberation and what reflections needed to be broken up.

Two takeaways:

1) The acoustic signature will vary from room to room and within the room. The singing in the shower effect.

2) I believe it is possible with one source and three mikes to localize early reflections. You can also get some size information from the returned impulse shape; wavelengths larger than the object will wrap around rather than reflect.

I see this as doable, but I think a Kinect or one of the smaller, lighter-weight alternatives is an easier path. You can think of a Kinect as a 3D IR ranger, as it looks at a pattern of many IR dots rather than one.

Ultrasonic sensor array

Hey nahueltaibo,

What ultrasonic sensors are you using with Andar? We already had a short discussion about the use of sensors in “one-pin mode”; you said it wasn’t an issue.

I did some testing yesterday and I was a bit disappointed. While the SRF05 sensor works well in one-pin mode (you just need to connect the mode pin to ground), it is sadly different with the low-cost HY-SRF05. Genuine SRF05s are 10 times more expensive than the HY-SRF05. Of course, I bought 12 HY-SRF05, our hobby is already expensive enough :smiley:

Yesterday, while checking the one-pin mode, I had a surprise: the HY-SRF05 doesn’t have a MODE pin, it has an OUTPUT pin instead. And the use is not the same; it seems this pin can be used as an alert pin: its state changes when an object is detected. So… bye bye mode pin and… bye bye single-pin mode :confused: If a Nano had 24 digital pins, I could deal with it, but it has only 14 (counting RX1 and TX1), and 22 if I use all analog pins as digital. I still need 2 pins for serial communication with the main backbone. So it handles 10 sensors at best (in 2-pin mode).

Since I don’t plan to buy 200€ of SRF05 sensors, I did more tests. I tried sharing the same pin to trigger all my sonars. For testing purposes, I used 3 sensors on the same line. It works, but it doesn’t look as stable as 2-pin mode. Actually, all the sensors get triggered at the same time, so sometimes a sensor receives the echo from the sensor next to it before its own echo :wink: On a chassis, the sonars will not face the same direction, so it might not be such a problem. I just cannot predict how it will behave in front of wall corners, etc.

Maybe I can share 2 Arduino pins to trigger the sonars, so when I read the value of one sensor, its 2 neighbours (triggered by the other Arduino pin) won’t send a pulse.
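
Something like this is what I have in mind (an untested sketch; pin numbers are placeholders):

```cpp
// Untested sketch of the shared-trigger idea: the sonars are split into two
// groups, each group sharing one Arduino trigger pin, and a sensor is always
// read while its immediate neighbours (the other group) stay quiet.
// Pin numbers are placeholders.
const int TRIG_GROUP_A = 2;                    // triggers sonars 0, 2, 4
const int TRIG_GROUP_B = 3;                    // triggers sonars 1, 3, 5
const int ECHO_PINS[6] = {4, 5, 6, 7, 8, 9};   // one echo pin per sonar

void setup() {
  pinMode(TRIG_GROUP_A, OUTPUT);
  pinMode(TRIG_GROUP_B, OUTPUT);
  for (int i = 0; i < 6; i++) pinMode(ECHO_PINS[i], INPUT);
  Serial.begin(9600);
}

long pingOne(int trigPin, int echoPin) {
  // Fires every sonar on this trigger pin, but only times one echo.
  digitalWrite(trigPin, LOW);  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH); delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long us = pulseIn(echoPin, HIGH, 30000);     // 30 ms timeout (~5 m)
  return us / 58;                              // rough microseconds-to-centimetres
}

void loop() {
  for (int i = 0; i < 6; i++) {
    int trig = (i % 2 == 0) ? TRIG_GROUP_A : TRIG_GROUP_B;
    Serial.print(pingOne(trig, ECHO_PINS[i]));
    Serial.print(i < 5 ? ' ' : '\n');
    delay(35);                                 // let echoes die out before the next ping
  }
}
```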

I don’t know yet which option I will choose (9-10 sensors instead of 12, or shared trig pin(s)). Another option (that I haven’t tested yet) would be to connect one Arduino pin to both of a sonar’s pins (trig and echo). Maybe with a diode before the echo pin (to prevent it from receiving the trig signal) it could work? Hmm… I don’t have a good feeling :wink:

Anyway, sorry for this long post. To summarize: do you use genuine SRF05s? If not, how did you deal with the one-pin question?

Thanks !

haha

Even though I was sure it would not work, I tried it: I connected one pin of the Arduino to both (trigger and echo) pins of the HY-SRF05, directly (no diode or anything involved), and guess what! It works!

I spent time looking on the web for information about “one-pin mode” with the HY-SRF05 and found nothing conclusive, so I am surprised it worked. But it does! :wink:

 

edit: of course I switch the Arduino pinMode between OUTPUT and INPUT on the fly.
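
For the curious, the single-pin ping is roughly this (a sketch of the idea rather than my exact code; the pin number is a placeholder):

```cpp
// Sketch of the single-pin ping on a HY-SRF05 with trigger and echo tied together.
// The same Arduino pin is switched to OUTPUT to send the 10 us trigger pulse,
// then to INPUT to time the echo. SONAR_PIN is a placeholder.
const int SONAR_PIN = 7;

long pingCm() {
  pinMode(SONAR_PIN, OUTPUT);                   // drive the trigger
  digitalWrite(SONAR_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(SONAR_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(SONAR_PIN, LOW);

  pinMode(SONAR_PIN, INPUT);                    // now listen for the echo pulse
  long us = pulseIn(SONAR_PIN, HIGH, 30000);    // 30 ms timeout (~5 m max)
  return us / 58;                               // microseconds to centimetres
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(pingCm());
  delay(50);                                    // give echoes time to fade
}
```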

Exactly

I was at lunch and planning to answer as soon as I came back; you didn’t give me time! hahaha.

    Here is the documentation for the NewPing library. There you have a one-pin example, and the only module that does not work is the SRF06, as far as the doc says: https://bitbucket.org/teckel12/arduino-new-ping/wiki/Home#!examples. They also have an example with 15 range finders, so my work this weekend will be to merge those two examples and, if I make it, add some communication between the Nano and the backbone (I2C probably).

    I think the two signal pins are one for sending and the other for receiving (not for selecting a mode).

As soon as I get some progress I’ll let you know, so we can keep each other posted since we are working on almost the same thing :slight_smile:

   This is the current status of my sonar: https://m.facebook.com/nahueltaibo/albums/10153692633464931/


 

Haha sorry, I tried as soon

Haha sorry, I tried as soon as I was back from work :wink:

From your pictures, I can see you use the HC-SR04 sensor. That is why you have 4 pins while I have 5 on the SRF05 (5V, Gnd, Trigger, Echo, and a fifth named “Mode” on the genuine SRF05 and “Output” on the HY-SRF05).

Yesterday I used the basic Arduino functions; I’ll test the NewPing lib for better performance. Thanks for the link, I can see they connected the 2 pins together, great :slight_smile:

edit: just tested with the NewPing lib, it works too (results are similar to the ones I get with the classic method).
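
In case it helps, the NewPing version looks roughly like this (a sketch based on the library’s one-pin example; pins and max distance are placeholders):

```cpp
#include <NewPing.h>

// Sketch based on the NewPing examples: three sensors in one-pin mode
// (trigger and echo on the same pin). Pin numbers and MAX_DISTANCE are placeholders.
#define SONAR_NUM    3
#define MAX_DISTANCE 200   // cm

NewPing sonars[SONAR_NUM] = {
  NewPing(4, 4, MAX_DISTANCE),   // same pin for trigger and echo = one-pin mode
  NewPing(5, 5, MAX_DISTANCE),
  NewPing(6, 6, MAX_DISTANCE)
};

void setup() {
  Serial.begin(9600);
}

void loop() {
  for (int i = 0; i < SONAR_NUM; i++) {
    delay(35);                              // ~29 ms minimum between pings to avoid cross-talk
    Serial.print(sonars[i].ping_cm());      // 0 means no echo within MAX_DISTANCE
    Serial.print(i < SONAR_NUM - 1 ? ' ' : '\n');
  }
}
```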

Sonar Interference?

Hey Martin,

I was getting ready to start working on a bot with a sonar ring when I found this thread. I have a VERY basic one I started for the wife and had some issues, so I changed gears to see if I could work it out on a different one. One of the things I was looking at was a sonar ring like you guys use, but the first one uses sonar too (though not a ring), which brings me to my question. Have you tried Ava and Anna in the same room with each other? Do they generate interference?

Thanks for your time

Earth Guardian