Vision, Sonar, etc.
I think you guys made a lot of very good points.
Like you guys said, if you are using landmarks on the ceiling, you should be able to locate off of a single landmark using compass, servo angles, and trig. Having more than one landmark should improve the accuracy. Lighting could be tough.
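For what it's worth, here is a rough sketch of that trig in Python. All names and conventions are my own assumptions, not anything from the Pixy API: the camera is centered on a ceiling landmark of known position and known height above the camera, the tilt angle gives the ground distance, and compass-plus-pan gives the bearing.

```python
import math

def locate_from_ceiling_landmark(landmark_xy, height_above_camera,
                                 compass_deg, pan_deg, tilt_deg):
    """Estimate robot (x, y) from a single ceiling landmark.

    Assumes: compass_deg is heading in degrees clockwise from north
    (north = +y, east = +x), pan_deg is the pan servo angle clockwise
    from robot-forward, and tilt_deg is the tilt servo angle up from
    horizontal, with the camera centered on the landmark.
    """
    # Ground distance to the point directly under the landmark.
    d = height_above_camera / math.tan(math.radians(tilt_deg))
    # World bearing from robot to landmark.
    bearing = math.radians(compass_deg + pan_deg)
    # The robot sits d meters back from the landmark along that bearing.
    x = landmark_xy[0] - d * math.sin(bearing)
    y = landmark_xy[1] - d * math.cos(bearing)
    return x, y
```

With two or more landmarks you could average the fixes, or even drop the compass and solve from the angles between landmarks.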
I see the ability of Pixy to do 7 colors as quite a lot to work with. With two-digit codes, 7^2 = 49 landmarks; with three-digit codes, 7^3 = 343 landmarks. Hagisonic does it with just a dot pattern; some of their later markers have some blue dots thrown in. I thought about using beer cans with horizontal colored tape “rings” around them to make unique markers that I would set on shelves or other places around a room. The rings would always have the same X position and width and be close together in Y, so recognizing them as part of one landmark should be doable. The Pixy “color code” scheme is basically supposed to do what I just described, so it may already be there. There are really so many ways to do this, but it all starts with being able to recognize something that appears unique, or at least significant. The Pixy software has a slider to control the sensitivity of each color, so you can trade off false positives against false negatives.
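As a sketch of the counting, if each ring is one of Pixy's 7 signatures read top to bottom, the ring sequence is just a base-7 number. The helper below is hypothetical, not the actual Pixy color-code API:

```python
def landmark_id(ring_signatures):
    """Ordered ring colors (Pixy signatures 1-7) -> unique landmark ID.

    Two rings give 7^2 = 49 IDs, three rings give 7^3 = 343.
    """
    lid = 0
    for sig in ring_signatures:
        assert 1 <= sig <= 7, "Pixy supports signatures 1-7"
        lid = lid * 7 + (sig - 1)
    return lid

# A can with red(1) over green(2) over blue(3) tape rings:
# landmark_id([1, 2, 3]) -> 9
```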
I was reading articles saying that the Pixy firmware is open source; it seems like almost anything could be done by learning to modify it.
Localizing through vision like this takes a bit of time, so it will probably only happen periodically. The robot would likely need to track its odometry in between these fixes if they are infrequent…which probably means encoders.
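Dead reckoning between vision fixes is pretty standard. A minimal sketch for a differential-drive base, with the wheel numbers made up:

```python
import math

TICKS_PER_REV = 360      # encoder ticks per wheel revolution (assumed)
WHEEL_DIAMETER = 0.065   # meters (assumed)
WHEEL_BASE = 0.15        # distance between wheels, meters (assumed)
METERS_PER_TICK = math.pi * WHEEL_DIAMETER / TICKS_PER_REV

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance (x, y, theta) by one encoder reading between vision fixes."""
    dl = left_ticks * METERS_PER_TICK
    dr = right_ticks * METERS_PER_TICK
    d = (dl + dr) / 2.0              # distance traveled by the center
    dtheta = (dr - dl) / WHEEL_BASE  # change in heading, radians
    # Midpoint approximation: move along the average heading of the step.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Each vision fix would then reset (x, y, theta) and zero out the accumulated drift.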
My previous post was a little off topic; it had a localization element to it, but was a lot of other things as well. I am intrigued by the possibilities for situational awareness with 2 Pixy cameras, as in this example: robot sees a yellow ball…determines it is far away, and moves toward it. I then kick the ball toward the robot but miss…robot recognizes the ball is moving and getting closer, alters course, and stops in front of it…maybe kicks it back. Football anyone?
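One crude way to get the “moving and getting closer” part from even a single Pixy: the ball's block gets wider as it approaches, so watch the width trend over a short window. The function, window size, and threshold below are all my own guesses:

```python
from collections import deque

# Widths of the ball's Pixy block over the last 10 frames.
recent_widths = deque(maxlen=10)

def ball_is_approaching(block_width, min_growth=1.2):
    """True if the ball's apparent width grew ~20% over the window."""
    recent_widths.append(block_width)
    if len(recent_widths) < recent_widths.maxlen:
        return False   # not enough history yet
    return recent_widths[-1] >= min_growth * recent_widths[0]
```

Two cameras would let you do real stereo range instead of inferring distance from apparent size.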
Another example…the robot is talking to me, but someone else walks by to the side or rear of the robot. I would like the robot to recognize that through sonar and/or vision. It seems like treating moving objects as significant is fundamental to getting smarter.
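A hedged sketch of the sonar half of that: if you sweep the sonar over a fixed set of bearings, a reading that suddenly shortens means something moved into the beam. The names and the 30 cm threshold are my assumptions:

```python
def detect_intruder(previous_sweep, current_sweep, threshold=0.3):
    """Each sweep is a list of ranges (meters), one per fixed bearing.

    Returns the index of the first bearing where the range shortened
    by more than the threshold, or None if nothing moved in.
    """
    for bearing, (old, new) in enumerate(zip(previous_sweep, current_sweep)):
        if old - new > threshold:
            return bearing
    return None
```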
One approach I would like to try is this…I am hoping to crank the sensitivity on the Pixies up to max so I get a lot of false positives, pick colors that are general and common in my indoor environment, and have the robot pick up the positions and sizes of normal everyday things like plants, doors, wall outlets, and black rectangles like TVs, and remember them. Using rare colors, or a combination of general and rare, might be good too. Later, using a lot of software that I would have to write myself, the robot could recognize the similarity of past observations with current observations and conclude “I think I am on your desk in your office”.
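Just to make the matching idea concrete, here is a toy version where each remembered scene is a set of (color, coarse size) features and the best overlap wins. A real version would need positions, tolerances, and the “lot of software” mentioned above:

```python
def scene_score(current, remembered):
    """Fraction of a remembered scene's features also seen now (0.0-1.0)."""
    if not remembered:
        return 0.0
    return len(current & remembered) / len(remembered)

def best_guess(current, scenes):
    """scenes: dict of place name -> feature set. Returns (name, score)."""
    return max(((name, scene_score(current, feats))
                for name, feats in scenes.items()),
               key=lambda pair: pair[1])

# best_guess({("green", "tall"), ("black", "rect")},
#            {"office desk": {("green", "tall"), ("black", "rect")},
#             "kitchen": {("white", "rect")}})
# -> ("office desk", 1.0), i.e. "I think I am on your desk in your office"
```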
I really wish we could do what bats and dolphins do with echolocation. I think the way we design our sonars for robots is flawed. You need one transmitter with several receivers on the same sensor in order to get the detailed data required to determine the position of everything precisely. I have heard that dolphins pick up the return signal along their jawline with sensors (their teeth) spaced evenly on each side, about one wavelength of their click apart. Much better sensors are needed, with much better signal processing.
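The physics behind the multiple-receiver idea: with one transmitter and two receivers a known distance apart, the difference in echo arrival time gives the reflector's bearing (far-field approximation; the names below are my own):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def bearing_from_tdoa(dt_seconds, receiver_spacing):
    """Angle of arrival in degrees; 0 = broadside to the receiver pair."""
    s = SPEED_OF_SOUND * dt_seconds / receiver_spacing
    s = max(-1.0, min(1.0, s))      # clamp numerical noise
    return math.degrees(math.asin(s))

# Receivers 0.04 m apart, echo hits one 58 microseconds earlier:
# bearing_from_tdoa(58e-6, 0.04) -> ~29.8 degrees off broadside
```

More receiver pairs at different spacings and orientations would give bearing in two axes plus redundancy, which seems to be roughly what the dolphin jawline arrangement is doing.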
Gotta run. Sadly I don’t get paid to post.