Andar

multiple servos

I don’t see why that board wouldn’t work. Easier than my approach. I believe the Mega only does 12, but I’m not sure.

I’m using this:

http://www.hobbytronics.co.uk/pwm-servo

I wanted to clean up the wiring off the Arduino. I looked for a while for a shield and then decided to just run I2C over to a board that already had the pins for the servos.

Either way, I think offloading the PWM is a good idea. You get an extra 2 bits of resolution (12 instead of 10), and changing the refresh frequency is trivial.
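If your board is PCA9685-based (like the Adafruit one mentioned further down; I haven’t checked whether the Hobbytronics one is), driving it over I2C is only a few lines with Adafruit’s PWM Servo Driver library. A minimal sketch; the channel and pulse counts are just examples:

```cpp
// Minimal sketch for a PCA9685-based 16-channel PWM board over I2C,
// using the Adafruit PWM Servo Driver library.
#include <Wire.h>
#include <Adafruit_PWMServoDriver.h>

Adafruit_PWMServoDriver pwm = Adafruit_PWMServoDriver(0x40); // default I2C address

void setup() {
  pwm.begin();
  pwm.setPWMFreq(50);          // 50 Hz servo refresh; trivially changed here
}

void loop() {
  // 12-bit resolution: 0-4095 counts per 20 ms frame.
  // A ~1.5 ms center pulse is roughly 4096 * 1.5 / 20 ≈ 307 counts.
  pwm.setPWM(0, 0, 307);       // channel 0 to center position
  delay(1000);
  pwm.setPWM(0, 0, 205);       // ~1.0 ms pulse (one end of travel)
  delay(1000);
}
```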

Hey, thank you to the three

Hey,

Thank you to the three of you for those comments. I am glad to see I wasn’t missing something and that you ran into the same questions :slight_smile: I will give the single-pin mode a try with my HY-SRF05 and see how it turns out.
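For reference, single-pin mode with NewPing just means passing the same pin as both trigger and echo. A minimal sketch of what I plan to try (the pin number is an assumption about my wiring):

```cpp
// NewPing in single-pin mode: trigger and echo share one Arduino pin.
#include <NewPing.h>

#define SONAR_PIN     7    // trigger and echo tied together on pin 7 (assumed wiring)
#define MAX_DISTANCE  200  // cm

NewPing sonar(SONAR_PIN, SONAR_PIN, MAX_DISTANCE);

void setup() {
  Serial.begin(115200);
}

void loop() {
  delay(50);                          // ~20 readings/s keeps echoes from overlapping
  unsigned int cm = sonar.ping_cm();  // 0 means no echo within MAX_DISTANCE
  Serial.println(cm);
}
```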

Martin, it is nice to have feedback from your practical experience :slight_smile: I like the approach of “rings of defense”, which makes a lot of sense.

I am not sure I will add arms to the first iteration of my bot, but I already have a servo controller (this one from Adafruit, not tested yet). I think it is always better to delegate what is not vital to specialized hardware when possible. I have never driven that many servos with an Arduino, but I remember I had issues with a PICAXE and a few servos (solved by using an SD20 servo controller, which was definitely more efficient).

Martin, since you are around and we are talking about robot movement, I have a question. How do Anna and Ava keep track of their location and environment? Do they just wander around randomly, or are they able to know where they are and where they need to go (going in front of the TV to change the channel, for instance), aside from compass-based missions? I know triangulation with tags (with OpenCV) can be an option. Do you use encoders on your robots’ wheels to calculate motion in space? I haven’t planned to add wheel encoders on my robot, but I have the feeling it is the only way for a robot to map its environment while exploring. Would you have any advice or hints I could follow?

Thanks :slight_smile:

 

**This is getting so good!!**

I’ve been wondering the same things about Anna’s/Ava’s mapping capabilities!

I read a paper shared by unix_guru at this link: https://www.robotshop.com/letsmakerobots/node/40489 It was very instructive; maybe it helps you too.

Re: LordGG, concerning localization

Thanks for the questions LordGG.  I’ll do my best to answer in a balanced way.  I’m so thrilled that others are building sonar arrays!

The bottom line is, localization is an area that needs much improvement for Anna.  I hope you guys come up with something and help me out!

As it stands, she has GPS and a compass.  This works decently well outdoors for navigating a route of waypoints to get her within around 10-15 feet of each waypoint, while the force field keeps her from bumping into stationary or moving objects along the way.  The error in accuracy is too great to be useful indoors.  Indoors, about all she does is wander if she is allowed to run free, using the force field again to not bump into stuff…until she bumps into something made of cloth anyway.
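For anyone wanting to try the same thing, the waypoint-following math is just the standard initial-bearing formula plus a compass comparison. A rough sketch (not Anna’s exact code, just the formulas):

```cpp
// Sketch of GPS waypoint math: bearing and distance from the current fix
// to a target waypoint, plus the steering error against the compass.
#include <math.h>

const double R_EARTH_M = 6371000.0;

double toRad(double deg) { return deg * M_PI / 180.0; }

// Initial bearing in degrees (0 = north) from (lat1,lon1) to (lat2,lon2).
double bearingDeg(double lat1, double lon1, double lat2, double lon2) {
  double dLon = toRad(lon2 - lon1);
  double y = sin(dLon) * cos(toRad(lat2));
  double x = cos(toRad(lat1)) * sin(toRad(lat2)) -
             sin(toRad(lat1)) * cos(toRad(lat2)) * cos(dLon);
  return fmod(atan2(y, x) * 180.0 / M_PI + 360.0, 360.0);
}

// Approximate distance in meters (equirectangular; fine at robot scales).
double distanceM(double lat1, double lon1, double lat2, double lon2) {
  double x = toRad(lon2 - lon1) * cos(toRad((lat1 + lat2) / 2.0));
  double y = toRad(lat2 - lat1);
  return sqrt(x * x + y * y) * R_EARTH_M;
}

// Steering error against the compass heading: turn toward the waypoint
// until distanceM() drops below the 10-15 ft accuracy floor noted above.
double headingError(double bearing, double compassHeading) {
  double e = bearing - compassHeading;
  while (e > 180.0)  e -= 360.0;
  while (e < -180.0) e += 360.0;
  return e;  // negative = turn left, positive = turn right
}
```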

For indoors, I prototyped a mechanism of using OpenCV and OCR to recognize words written on walls.  I posted a blog and video about this.  It worked for getting a location within a room with an accuracy down to about 6 inches or less, but was really just a novelty, as it required having large readable words on 2 or more walls within each room, and the robot having a precise internal map of where each word was.  Basically, it was a cool prototype, but not practical.

For Ava, the head is going to be able to tilt straight up, so she will be able to see the ceiling of whatever room she is in. My intent is to have her recognize landmarks on the ceiling using OpenCV, similar to the Hagisonic StarGazer system that goes for about $1,000 and up. Spotting one or more landmarks and using heading, elevation, and some trig should lead to similar results as the OCR method. It is really the same concept; I suppose the challenge is accurately recognizing the landmarks.
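The trig is simple if you assume a flat ceiling of known height above the camera. A rough, untested sketch of the position step (all the names are mine, and a sloped ceiling would break it):

```cpp
// Rough position-from-ceiling-landmark trig (unproven sketch, assuming a
// flat ceiling of known height above the camera).
#include <math.h>

struct XY { double x, y; };

// landmarkX/Y: known map position of the ceiling landmark (meters)
// headingDeg:  compass heading toward the landmark (0 = north = +y)
// elevDeg:     elevation angle of the landmark above horizontal
// ceilingM:    height of the ceiling above the camera
XY robotPosition(double landmarkX, double landmarkY,
                 double headingDeg, double elevDeg, double ceilingM) {
  double ground = ceilingM / tan(elevDeg * M_PI / 180.0); // horizontal range
  double h = headingDeg * M_PI / 180.0;
  // The robot sits 'ground' meters from the landmark, opposite the heading.
  return { landmarkX - ground * sin(h), landmarkY - ground * cos(h) };
}
```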

I have had a lot of other ideas, some of which involve using OpenCV to remember sizes, shapes, and colors to use as landmarks.  I can’t recommend any of them at this point as they are just unproven concepts in my head.  I did a lot of work at one point with OpenCV recognizing shapes and colors before realizing that using OCR was a lot easier and more accurate.

There are some LMR members who, it seems, have been successful at indoor localization and moving room to room (I seem to remember an InMoov on wheels that did it), possibly using SLAM. Perhaps we can draw some of those folks into a discussion (by starting a forum topic), as their knowledge is much greater than mine.

I used to focus on movement a lot more, but once I started building verbal interaction and central brains, I got lost in that for a couple years.  It’s probably time to revisit everything with an eye to improve things.

re: time

I sort of answered this in another post, but I wanted to make sure it didn’t get lost in my long post.

In response to your question…what is the Arduino able to do while it awaits responses? The people who wrote the NewPing library were clever. I believe they used interrupts. The result is that the CPU never waits for the sonars; it goes about its business of doing everything else and periodically checks the active sonar or fires a new one. There is a setting to control how often it checks, and this setting ends up determining how accurate your sonar readings can be.

This means you can do a great deal with the Mega while all this is going on…or another Arduino if you put the sonar stuff there.  It’s probably my favorite thing about the NewPing library.
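The pattern looks roughly like this, adapted from the NewPing examples (pin numbers and timing are just examples):

```cpp
// Event-driven NewPing pattern: ping_timer() fires the sonar and returns
// immediately; a timer interrupt calls echoCheck() every ECHO_TIMER_FREQ
// (24 us by default) until the echo arrives, so loop() never blocks.
#include <NewPing.h>

#define PING_INTERVAL 33  // ms between pings (~30 readings/s)

NewPing sonar(12, 11, 200);          // trigger, echo, max cm (example pins)
unsigned long nextPing = 0;
volatile unsigned int lastCm = 0;

void echoCheck() {                   // called from the timer interrupt
  if (sonar.check_timer())
    lastCm = sonar.ping_result / US_ROUNDTRIP_CM;
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  if (millis() >= nextPing) {
    nextPing = millis() + PING_INTERVAL;
    Serial.println(lastCm);          // report the previous reading
    sonar.ping_timer(echoCheck);     // non-blocking: returns right away
  }
  // ...the rest of the robot's work runs here, never waiting on the sonar
}
```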

DUE and 3.3V notes

Thanks, that caught my attention.

I may be the only one here using the DUE; it uses different interrupts and timers than the AVR-based boards. So I looked up the NewPing library and noticed that NewPing 1.6 added the DUE and NewPing 1.7 removed it! This was due to the rangers being 5 V.

Hardware workaround for 5V is here:

http://forums.parallax.com/discussion/152308/ping-sensor-3-3v-compatible
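For reference, the usual workaround is just a resistor divider on the echo line: Vout = 5 V × R2 / (R1 + R2), so for example R1 = 1 kΩ and R2 = 2 kΩ gives about 3.3 V at the DUE pin. The trigger pin can usually be driven directly, since most of these rangers accept 3.3 V as a logic high, but check your sensor’s datasheet.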

I suppose the older 1.6 version would be needed.

I haven’t used a Mega yet; I was going to buy one when I found that the DUE was faster, with more memory, and cheaper. It does have some issues, though…

Hi Martin, thanks for your

Hi Martin,

Thanks for your reply :slight_smile:

Using marks on the ceiling is pretty clever, especially if we find a way to make marks invisible to humans but visible to a robot. Sadly for me, the main room of my home has a very high sloping ceiling, which won’t make things easier. But I still like the idea :wink:

I hadn’t thought about OCR either. Here, you’re talking about real OCR (getting characters from an image), right? If you have any framework/library (for Windows / .NET) to recommend, I am interested! While I can imagine how OCR can help a robot know which room it is in, I am not sure I understand how it can help determine its coordinates within a room.

Here is what I had in mind: the robot must already know a map of the room it is in. It starts by scanning the room to find at least 3 beacons. From the “observed” sizes of these beacons (and knowing their real sizes), it computes the distance to each of them and should be able to triangulate its position. The robot can then move and update its position in real time (using encoders on the wheels and a compass). From time to time, it can resynchronize its calculated position with its real position by scanning the room again.
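In code, I imagine the two steps would look something like this (completely untested; it assumes a calibrated focal length in pixels and known beacon positions):

```cpp
#include <math.h>

// Pinhole-camera estimate of range from apparent size (unproven sketch).
// focalPx is the focal length in pixels, found by calibrating beforehand
// (e.g. with OpenCV's calibrateCamera).
double beaconDistance(double realWidthM, double pixelWidthPx, double focalPx) {
  return realWidthM * focalPx / pixelWidthPx;   // meters
}

// 2D trilateration from three beacons at known positions (x[i], y[i]) with
// measured ranges r[i]: subtracting the circle equations pairwise leaves
// two linear equations in the robot position (px, py).
bool trilaterate(const double x[3], const double y[3], const double r[3],
                 double &px, double &py) {
  double a1 = 2*(x[1]-x[0]), b1 = 2*(y[1]-y[0]);
  double c1 = r[0]*r[0]-r[1]*r[1] - x[0]*x[0]+x[1]*x[1] - y[0]*y[0]+y[1]*y[1];
  double a2 = 2*(x[2]-x[1]), b2 = 2*(y[2]-y[1]);
  double c2 = r[1]*r[1]-r[2]*r[2] - x[1]*x[1]+x[2]*x[2] - y[1]*y[1]+y[2]*y[2];
  double det = a1*b2 - a2*b1;
  if (fabs(det) < 1e-9) return false;           // beacons are collinear
  px = (c1*b2 - c2*b1) / det;
  py = (a1*c2 - a2*c1) / det;
  return true;
}
```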

I haven’t tested any of that, but nothing sounds easy: estimating distance from a marker might not be very accurate, and updating the position in real time from the wheels sounds like even more of a challenge :wink:

You’re right, we’re having many interesting discussions on Andar’s page (thank you nahueltaibo for hosting); maybe we should move to the forum. I’ll try to create a new thread if more questions come up :slight_smile:

Again, thank you :slight_smile:

 

Hi nahueltaibo, that is a lot

Hi nahueltaibo,

That is a lot of reading, thank you! I’ll have a look at it and hope I will understand it all :wink:

Beacons

RF beacon location is out there. I’ve previously posted a link to a company that will do pretty much what you want; I don’t have it bookmarked and no time to find it now.

Accuracy was a couple of centimeters; beacons cost about $49 each and you need 3 or so, plus a router for, I believe, $100.

I don’t know. Maybe I’m too

I don’t know. Maybe I’m too naive, but is it that difficult to implement SLAM with the sonar sensors? Well, and wheel encoders too. I will be working on that soon. I hope it is not terribly complex :slight_smile:

Thanks, I wasn’t aware of

Thanks, I wasn’t aware of that technique. If it can be accurate enough for indoor use, I’m sold! :slight_smile: If you can find that bookmark again, I am interested.

From what I have read, it is all about converting the strength of a signal into a distance. Some RF modules allow access to an RSSI pin, and the voltage on that pin corresponds to a certain signal strength. Then it is a matter of triangulation and conversions.
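The usual conversion seems to be the log-distance path-loss model. A small sketch (the constants must be calibrated per transmitter, and indoor multipath makes the result noisy, which matches my concern below):

```cpp
// Log-distance path-loss model, the usual way RSSI is turned into distance.
// txPowerDbm is the RSSI measured at 1 m (calibrated per transmitter) and
// n is the path-loss exponent (~2 in free space, roughly 2.7-4 indoors).
#include <math.h>

double rssiToMeters(double rssiDbm, double txPowerDbm, double n) {
  return pow(10.0, (txPowerDbm - rssiDbm) / (10.0 * n));
}
```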

I found a few DIY approaches:

–> http://grauonline.de/wordpress/?page_id=467

–> http://www.instructables.com/id/433-MHz-UHF-lost-model-radio-beacon/

I will try to find an emitter and a receiver to test that. But I am afraid that in a small area (like a room), variations in signal strength would be too small to get accurate distances.

I’m afraid it is a bit

I’m afraid it is a bit complex :wink: Mapping a place with a sonar/wheel-encoder combo is totally doable (though not that easily).

But the robot only knows where it is relative to its starting point, its origin. When it is turned on, it doesn’t know where it is in the room until it has scanned the whole room again. Even if the map is backed up in its memory, at power-on the robot doesn’t know where it is, unless someone tells it or it assumes it is always turned on at the same location (which can be acceptable :wink: )
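For completeness, the encoder-based update I have in mind is plain differential-drive dead reckoning. A sketch (wheel track and meters-per-tick assumed known):

```cpp
// Sketch of differential-drive dead reckoning from wheel encoders: the pose
// is only ever known relative to wherever (x, y, theta) started.
#include <math.h>

struct Pose { double x, y, theta; };

// dLeftM / dRightM: distance each wheel traveled since the last update
// (encoder ticks * meters-per-tick); trackM: distance between the wheels.
void odometryUpdate(Pose &p, double dLeftM, double dRightM, double trackM) {
  double d  = (dLeftM + dRightM) / 2.0;      // forward travel of the center
  double dt = (dRightM - dLeftM) / trackM;   // change in heading (radians)
  p.x     += d * cos(p.theta + dt / 2.0);    // midpoint heading reduces drift
  p.y     += d * sin(p.theta + dt / 2.0);
  p.theta += dt;
}
```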

Marvelmind Navigation

Not signal strength:

http://www.marvelmind.com/

You can buy it at Robot Shop and patronize our “sponsor”. Why aren’t they demoing it?

Something just crossed my

Something just crossed my mind. Hopefully these can be good ideas for Andar.

- Suppose Andar is running on a table, on the second floor, or in any other high place. What about adding an infrared/sonar sensor angled down at about 45 degrees to detect when an edge of the floor is coming up in front of Andar, so he avoids falling off the table or the second floor (see the sketch at the end of this post)? OpenCV Canny edge detection could possibly be useful too.

 

- Motion planning using computer vision before it starts to move, so it can decide where it should go.

Below is an example of my MSER capture:

 

*(MSER capture images: 1_0.jpg, 4_0.jpg, 2_0.jpg)*

 

Decision making can then be accomplished using fuzzy logic or description logic.
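For the edge-detection idea in the first bullet, something like this might work (pins, mounting angle, and thresholds are all assumptions):

```cpp
// Minimal cliff-detection idea: a downward-angled IR/sonar ranger normally
// sees the floor at a short, steady distance; a sudden jump in the reading
// means the floor has dropped away (table edge, stairwell).
#include <NewPing.h>

#define FLOOR_CM      20   // expected floor reading at ~45 degrees (assumed mount)
#define CLIFF_MARGIN  10   // anything beyond FLOOR_CM + margin counts as an edge

NewPing floorSonar(8, 9, 100);   // trigger, echo, max cm (assumed pins)

bool cliffAhead() {
  unsigned int cm = floorSonar.ping_cm();
  return (cm == 0 || cm > FLOOR_CM + CLIFF_MARGIN);  // 0 = no echo at all
}

void setup() {}

void loop() {
  if (cliffAhead()) {
    // stop the motors, back up, turn, etc.
  }
}
```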

 

Dear Martin, can you check

Dear Martin

Can you check this information on Hackaday: http://hackaday.com/2014/01/30/2d-room-mapping-with-a-laser-and-a-webcam/

https://shaneormonde.wordpress.com/2014/01/30/2d-mapping-using-a-webcam-and-a-laser/

What do you think about it? Is it really that good?

Btw, is it possible to use feature matching and color tracking to help map or identify a room?

Thanks for the link. Another

Thanks for the link. Another strategy! And it’s smart :slight_smile: It is a bit expensive for some ultrasonic sensors that communicate with each other over RF. If I got it right: the server tells both the receiver and the sender, through RF, when an ultrasonic signal must be sent. This way, the receiver knows how long the signal took to reach it.
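If I got the principle right, the receiver-side arithmetic would be something like this (the two timestamps are assumed to come from an RF interrupt and the ultrasonic detector):

```cpp
// The RF "go" message arrives effectively instantly, while the ultrasonic
// chirp travels at ~343 m/s, so the gap between the two arrival times gives
// the range: roughly 2.9 ms per meter.
double rangeMeters(unsigned long rfArrivalUs, unsigned long usArrivalUs) {
  return (usArrivalUs - rfArrivalUs) * 1e-6 * 343.0;
}
```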

It is great inspiration. Maybe a DIY version would be more affordable :slight_smile:

Man that looks so Tron!! lol

Man that looks so Tron!! lol awesome!!!

I am planning to add IR rangers for detecting at least the floor, as you mention. It will be a mess to handle all these sensors!!

I’m not planning to start using OpenCV yet, too much to learn at the same time :). First I will do my best to get a working implementation of SLAM with the sonar ring… That’s my main focus; I’ve always had it in mind, and now is the moment to fight with it!

Today I started working on the real version of the sonar ring; I hope to have something soon!

re: Swordm4n, Lasers, Features

Thanks for posting the links. The technique makes sense. As the article mentions, you can use a line laser to cover about 45 degrees at a time by using OpenCV to look at the shape of the line, and you can even get vertical resolution. More complex, but a lot faster. You can get a laser that makes a line for about $10. I tested it out a bit (arranging the line to be horizontal) and looked at the shapes of the line as you approach walls at different angles. For example, a line sloping upward to the right indicates the wall is closer on the left and farther away on the right. OpenCV can easily pick up the laser; it’s just a matter of coding.
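For reference, the distance math from that write-up boils down to one line (the two calibration constants come from measuring the dot at known distances beforehand):

```cpp
// Laser triangulation from the linked write-up: with the laser mounted a
// fixed offset h from the camera, a target at distance D puts the laser dot
// pfc pixels from image center, and D = h / tan(pfc * rpc + ro).
#include <math.h>

double laserDistance(double pixelsFromCenter,
                     double hM,      // camera-to-laser offset, meters
                     double rpc,     // radians per pixel (calibrated)
                     double ro) {    // radian offset (calibrated)
  return hM / tan(pixelsFromCenter * rpc + ro);
}
```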

This project does it…

https://www.robotshop.com/letsmakerobots/stalker

I’d be concerned about eye protection if I were doing this. These lasers are not eye safe. Anna used to beam me in the eyes a lot.

I think it is possible to use feature matching / color tracking to map or identify a room. I thought about having a robot look north and then scan every 10 or 15 degrees, recording the most dominant (or least dominant) color in each zone, and combining all the zones into some kind of “bar code” for the given location. It could then search that against its memory of past locations close to the current GPS coordinates, narrowing it down to the probable room and location within the room. A lot of similar vantage points should have a similar bar code. It might be crude, but it might work.
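In OpenCV terms, the per-zone signature might look something like this (crude and unproven, just the concept in code; hue wraparound is ignored):

```cpp
#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <vector>

// Dominant hue (0-179) of one camera frame taken in one scan zone.
int dominantHue(const cv::Mat &bgrFrame) {
  cv::Mat hsv;
  cv::cvtColor(bgrFrame, hsv, cv::COLOR_BGR2HSV);
  int histSize = 180, channels[] = {0};
  float hueRange[] = {0, 180};
  const float *ranges[] = {hueRange};
  cv::Mat hist;
  cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);
  cv::Point maxLoc;
  cv::minMaxLoc(hist, nullptr, nullptr, nullptr, &maxLoc);
  return maxLoc.y;  // histogram bin with the most pixels
}

// "Bar code" distance: sum of per-zone hue differences between the current
// scan and a remembered vantage point (smaller = more alike).
int signatureDistance(const std::vector<int> &a, const std::vector<int> &b) {
  int d = 0;
  for (size_t i = 0; i < a.size() && i < b.size(); ++i)
    d += std::abs(a[i] - b[i]);
  return d;
}
```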

There are a lot of sophisticated feature matching techniques spelled out in books on OpenCV; I just haven’t had time to dig further. Some technique has to be possible if someone puts in the time.

re: Beacons

I was familiar with this system but decided to study it further after you posted.

At first I thought, this is really great. Then I had an “Oh crap” moment. Will this system interfere with the sonars on my bot? I would think so. I’ve only seen one sonar that operates on a different frequency than the standard ones we all use. Then I realized, oh crap again, I may have a hard time using Anna and Ava at the same time due to sonar interference. Bummer.

I guess the bots can pause, turn off their sonars, and localize.  It’s still a great solution, but it would call into question the use of sonar in the long run if you have a home full of robots trying to use sonar at the same time, would it not?  I wonder how the driverless cars handle this.  A lot of cars have sonars now in the bumpers.

Dear Martin, thank you very

Dear Martin, thank you very much for the information and your ideas.