Andar

Correct me if I’m wrong, Mr.

Correct me if I’m wrong, Mr. Triplett, but I think using 12-24 sonars is not suitable for his robot when the body is pretty small. It’s probably not worthwhile unless he’s going to use something like a vector field histogram, or perhaps sonar pie wedges like your robot’s.

By the way, I think that in the future, if his robot is going to operate in crowded places like a library, restaurant, mall, or airport, where there are many moving objects, he will really need OpenCV, since he’ll have to track motion history and calculate time to collision. It will also need OpenCV when it has to move toward a specific object as its goal.

Actually, the accuracy of OpenCV depends on his Android device’s specs, and I don’t know which device he uses. To operate accurately it will need a capable enough GPU.


@nahueltaibo: By the way, I’m very sorry if this comes across as someone trying to lecture you. I’m really no better than you; we’re all in the same learning phase.

Force Field Algorithm

Thanks for this bit of insight.

I was unaware of this algorithm. Is there any code that I could see? I could only find this:

http://www.cs.mcgill.ca/~hsafad/robotics/

which leaves a lot to be worked out.

I have some experience with OpenCV from before; I just now have it on the Pi 2. I don’t see it working well for obstacle avoidance. I think, for indoors, something like a Kinect, which has depth measurements, is easier. Not sure yet how much it can be lightened… Lidar has possibilities; I hope you got one of the lidars for review… 

I just ordered 10 of the ultrasonics (they come in 5’s) and 1 GP2D02 IR ranger. My robot won’t be very fast, but I am worried about it walking off countertops; I hope the IR will stop that.
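If it helps anyone later, I expect the cliff check to look something like the minimal sketch below. Note the GP2D02 actually uses a clocked two-wire readout rather than a plain analog output, so this assumes an analog Sharp-style ranger instead, with made-up pin and threshold values:

```cpp
// Cliff-detection sketch for a downward-facing analog Sharp IR ranger
// (GP2Y0A21 style; a GP2D02 needs a clocked two-wire readout instead).
// The floor normally reads "close"; a sudden "far" reading means the
// countertop edge. Pin and thresholds are placeholder assumptions.
const int IR_PIN        = A0;
const int FLOOR_READING = 400;  // typical ADC value when seeing the floor
const int CLIFF_MARGIN  = 150;  // this much below it => no floor ahead

bool cliffAhead() {
  int v = analogRead(IR_PIN);   // Sharp analog out: closer => higher value
  return v < FLOOR_READING - CLIFF_MARGIN;
}

void setup() { Serial.begin(9600); }

void loop() {
  if (cliffAhead()) {
    Serial.println("Cliff! Stopping.");  // motor stop code not shown
  }
  delay(50);
}
```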



Thanks for your comments sw0rdm4n!

I saw the IR sensor you mentioned, it’s great! I thought IR was for shorter distances than ultrasonic; at least the IR sensors I bought only reach up to 80 cm, I think. My idea is to use them to detect cliffs, and maybe cover blind spots at different heights of the robot.

About adding more sensors, in fact, that’s the main reason I’m using my phone as the brain (a Galaxy S5). With it I have available onboard:

  • GPS
  • Bluetooth
  • WiFi 
  • 4G LTE
  • NFC
  • Gesture Sensor
  • Fingerprint Sensor (clearly won’t use it)
  • HR Sensor (heart rate; clearly won’t use it)
  • Hall Sensor
  • Accelerometer
  • Geomagnetic Sensor
  • Gyro Sensor
  • Light Sensor
  • Barometer
  • Infrared Sensor
  • Proximity Sensor

And other useful things that are not sensors:

  • Two cameras (I’ll use the front one for now)
  • An awesome screen to draw a pair of eyes full of emotions :slight_smile:

So from the list you mention, only the temperature and capacitive sensors are missing. I already have the temperature sensor with me, and the only one I don’t plan to add is the capacitive touch, at least for now. How do you think I could add that to my robot? I mean, to fulfill what purpose?

About the range sensors, thanks for the data about the IR sensor; I didn’t know there was one with that range. I might add it in the future!

Thanks for your comments again! 

This is my lecturer’s :

This is my lecturer’s paper: http://cdn.intechopen.com/pdfs-wm/14099.pdf

He uses the algorithm for his Srikandi robot. Anyway, I don’t think it’s really possible for OpenCV to measure distance unless it has the help of a laser or other sensors.

Anyway, I didn’t mean earlier that OpenCV should be used as the main navigation system. It can be used for decision making before the robot starts to move, such as analyzing time to collision for a moving obstacle using optical flow or motion history, etc.
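To give an idea, a minimal OpenCV (C++) sketch of the optical flow part might look like the one below. The camera index, the threshold, and the simple TTC-from-flow-divergence approximation are all just illustrative assumptions:

```cpp
// Rough time-to-collision (TTC) estimate from dense optical flow.
// An approaching obstacle makes the flow field expand; for a looming
// surface, TTC is roughly 2 / divergence of the flow (in frames).
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);                 // placeholder camera index
    cv::Mat prev, curr, flow;
    cap >> prev;
    cv::cvtColor(prev, prev, cv::COLOR_BGR2GRAY);

    while (cap.read(curr)) {
        cv::cvtColor(curr, curr, cv::COLOR_BGR2GRAY);
        cv::calcOpticalFlowFarneback(prev, curr, flow,
                                     0.5, 3, 15, 3, 5, 1.2, 0);
        // Mean divergence: average of du/dx + dv/dy over the image.
        std::vector<cv::Mat> uv(2);
        cv::split(flow, uv);
        cv::Mat dudx, dvdy;
        cv::Sobel(uv[0], dudx, CV_32F, 1, 0);
        cv::Sobel(uv[1], dvdy, CV_32F, 0, 1);
        double divergence = cv::mean(dudx + dvdy)[0];
        if (divergence > 1e-3)               // expanding => approaching
            std::cout << "TTC ~ " << 2.0 / divergence << " frames\n";
        prev = curr.clone();
    }
}
```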

I’ve tried running OpenCV on a Raspberry Pi 2, but I found it quite slow, so it would be impractical to use in the real world. That’s why I sacrificed my laptop for one of my robots that uses OpenCV:

https://www.robotshop.com/letsmakerobots/a-robot-explains-about-some-security-exploitation-technics

Please forget my recommendations about using OpenCV for anything related to navigation; just use sonar. I think it’s easier to implement and much more efficient.

**Wow, what a comment!**

Thanks for so much detail!

OK, I’ll keep trying to fit the 12 vertically now. I hadn’t noticed that that’s what you did in Ava until you mentioned it here. Not needing head movement would be great.

I’m planning to use the force field algorithm. I also like the occupancy grid; I’m not really sure if I have to choose one or can use both, since I’m just getting into that subject. And until I have something working with the sonar, I won’t be able to go further with that.

Between the Backbone (Arduino) and the Brain (Android) I have no delay, and the data between the Brain and the Remote also flows almost without delay. The 1-second delay is only in the video connection between the Brain and Remote apps. So it is a significant delay, but it only affects the video streaming. Anyway, I hope I can improve it somehow.

Did you finally open your source code? It would be great to go through it to get ideas and see how you solved some issues I’m sure I’ll face with Andar :slight_smile: [UPDATE: I just saw in your reply on this post that you shared parts of the code in comments. I’ll read that. Thanks!]

Thanks for your help Martin!

re: sw0rdm4n

I agree with you, sw0rdm4n; 12 or more is not suitable for his robot as it stands now. Even if what I wrote about were feasible on his bot, it is difficult to make a robot with all that gear look good (like he said). I never liked how big the sonar housings are on my bot; I wish I had just spent the money on smaller sonars. His bot isn’t quite big enough to support all that gear without MaxBotix sonars. Lidar would be cheaper and better than that, but both setups are very pricey. For the record, I would not recommend anyone do more than 12; 12 has enough challenges IMO. He would need a bigger platform or a smaller setup. Everything is a tradeoff.

I agree with you: OpenCV is a good thing to have on his bot, since he will have the Android phone in the face. Getting OpenCV going on the phone is fairly straightforward. I never learned how to use it for obstacle avoidance, but I have seen that a few members have. One LMR member used a line laser and OpenCV to interpret where obstacles were, based on the shape of the line as perceived by a camera mounted above the laser. That approach could work on this bot. I would love to try it out, and I would love to see more posts from you on different OpenCV techniques. It sounds like you have done some pretty cool stuff.
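If I understand that approach correctly, a rough sketch of it in OpenCV (C++) might look like the following. The red-channel trick, the brightness threshold, and the triangulation constants are all made-up placeholders for illustration, not that member’s actual code:

```cpp
// Line-laser ranging sketch: a horizontal red line laser with the camera
// mounted above it. For each image column, find the laser line; the lower
// it appears in the image, the closer the obstacle (simple triangulation).
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<int> laserRows(const cv::Mat& bgr) {
    cv::Mat ch[3];
    cv::split(bgr, ch);
    cv::Mat red = ch[2] - ch[1];            // emphasize red over green
    std::vector<int> rows(red.cols, -1);
    for (int c = 0; c < red.cols; ++c) {
        double maxVal; cv::Point maxLoc;
        cv::minMaxLoc(red.col(c), nullptr, &maxVal, nullptr, &maxLoc);
        if (maxVal > 80)                    // hypothetical brightness gate
            rows[c] = maxLoc.y;             // laser row in this column
    }
    return rows;
}

// Hypothetical calibration: camera height above the laser plane and a
// placeholder focal length turn a pixel row into a distance.
double rowToDistanceCm(int row, int imageHeight) {
    const double CAMERA_HEIGHT_CM = 5.0;
    const double FOCAL_PX = 600.0;
    double below = row - imageHeight / 2.0; // pixels below image center
    if (below <= 0) return -1;              // at/above the horizon: no hit
    return CAMERA_HEIGHT_CM * FOCAL_PX / below;
}
```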

I sincerely love all the discussion. If I come off as teachy to anyone, I don’t mean to. I’m in that same learning phase. I think obstacle avoidance is not talked about enough on this site. There are so many different options other than to just slap on an SRF04 to look like eyes and think you are done. No offense to anyone that does that; it makes a lot of small bots look cute.

Regards,

Martin

re: Force Field Code

I’d be happy to share my Arduino code for Force Field, Obstacle Avoidance, and Throttle control from Anna, since they work in concert. It uses 9 sonars, which has some of the problems I described when rotating near a lot of obstacles; 12 would solve that. I’ll be doing significant upgrades for Ava and will be happy to share those too when I get that far.

I shared most of the significant portions already in comments on Anna and elsewhere. I didn’t really try to optimize it or anything, but it works; I haven’t changed it in a couple of years. There are a lot of settings to tweak for your robot though, as I used customized force values for each vector. I found that helped when navigating through hallways and doors… which is a challenge.
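To give the general idea while the email version gets pulled together, here is a stripped-down sketch of a force-field steering step. This is not the actual Anna code; the 9-sonar layout, angles, weights, and scale factor are placeholder values you would tune per robot:

```cpp
// Minimal force-field steering sketch (illustrative, not Anna's code).
// Each sonar contributes a repulsive force opposite its facing angle,
// growing as obstacles get closer; a fixed forward force attracts the
// robot ahead. The sum of the vectors gives the heading to steer to.
#include <math.h>

const int NUM_SONARS = 9;                    // placeholder layout
// Facing angle of each sonar in radians, 0 = straight ahead.
const float SONAR_ANGLE[NUM_SONARS] = {
  -1.57, -1.18, -0.79, -0.39, 0.0, 0.39, 0.79, 1.18, 1.57 };
// Per-vector repulsive weights (made up); customizing these per vector
// is the sort of tuning that helps in hallways and doorways.
const float WEIGHT[NUM_SONARS] = {
  0.6, 0.8, 1.0, 1.2, 1.5, 1.2, 1.0, 0.8, 0.6 };

float steeringAngle(const float distCm[]) {
  float x = 1.0;                             // attractive forward force
  float y = 0.0;
  for (int i = 0; i < NUM_SONARS; i++) {
    if (distCm[i] <= 0 || distCm[i] > 150) continue;  // no echo / too far
    float repulse = 2000.0 * WEIGHT[i] / (distCm[i] * distCm[i]);
    x -= repulse * cos(SONAR_ANGLE[i]);      // push away from obstacle
    y -= repulse * sin(SONAR_ANGLE[i]);
  }
  return atan2(y, x);                        // radians; 0 = keep straight
}
```

The returned angle would feed the throttle/turn control; the weights and scale are the part you have to tune for each robot.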

If any of you drop me a message or email, I can send it. It just might take me a few days depending on what I have going on, but I will send it.

As for driving off countertops, I use an IR sensor for exactly that purpose on Anna.  It works most of the time.  I have to be ready to catch her until I can perfect it. 

Regards,

Martin

Contact

Thanks for the kind offer to share. I could not find a way to message in this forum, so here is my email: [email protected]

I’m in the fairly early stages of getting everything to work together, as I find there is always something I need that I don’t have yet. I was just getting the moves worked out when I burned out a servo due to the unregulated NiMH battery, so I’m off to get a LiPo and a buck regulator; wiring them is one of today’s projects.

I find that as I work my way through this, there is one thing after another that must be learned and then implemented.

My “catbot” is currently lying in pieces, as I wanted to paint him (one of the advantages of using wood). I should finish that today, and then reassemble and recalibrate. I am an admirer of your work to provide interaction. I am just working my way through PocketSphinx and have found that MySQL runs well enough on the Pi 2, which will simplify the simple interactive bits I have planned.

I would love to be cc’d too!

If it is possible, I would love to receive that email too! :slight_smile:

I have to deal with those subjects in the short term.

[email protected]


Thanks!

re: Force Field Code

I’ll send the code out to both of you as soon as I can pull it together.  I hope it helps.

Regards,

Martin

Nice if you will send them

It’s nice that you will send them your code. If you don’t mind, could you send it to me too?


My email is: [email protected]


Thank you in advance,

Anton

quote:"So from the list you

quote:"So from the list you mention only the temperature and capacitive sensors are missing. I already have the temperature sensor with me, and the only I don’t plan to add is the capacitive touch, at least for now. How do you think I could add that to my robot? I mean, to fullfill what porpouse?"

You could use capacitive touch sensors if you attach a robot arm to your robot:

- When the arm’s movement is blocked: the sensor can detect that an object is blocking the arm.

- Touch or no touch: it can be used to determine whether the arm has successfully touched an object or not.
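As a rough illustration of the touch case, a copper-tape pad on the gripper read with Paul Badger’s CapacitiveSensor Arduino library could look like this sketch; the pins and the threshold are just guesses to be tuned:

```cpp
// Touch-confirmation sketch using the CapacitiveSensor library: a
// copper-tape pad on the gripper, with a ~1 megohm resistor between
// send pin 4 and sense pin 2. Pins and threshold are placeholders.
#include <CapacitiveSensor.h>

CapacitiveSensor pad = CapacitiveSensor(4, 2);  // send pin, sense pin

bool gripperTouching() {
  long reading = pad.capacitiveSensor(30);      // average of 30 samples
  return reading > 500;                         // tune per pad and wiring
}

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(gripperTouching() ? "touching" : "clear");
  delay(100);
}
```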


Movement and misc comments

OpenCV is one of those things that holds more promise than reality.

As for motion: baseball outfielders time their arrival to where the ball lands; they mostly don’t run ahead. This lets them “vector” in to the ball’s actual location. The player’s own movement is key here, just as you mention with OpenCV, although there I think you are looking at the speed of the edges (closer is faster).

There are two things that, IMHO, OpenCV needs to be extended with:

First, it lacks depth. This could be done with a fast lens by scanning the depth of field, so that you have depth slices and can tell where an object is by which slice gives it the greatest internal contrast; that is largely how autofocus cameras work. There are also various ways of doing this using stereo vision.
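For the stereo route, OpenCV already ships a block matcher; a minimal disparity sketch, assuming two already-rectified grayscale images and made-up tuning values, would be:

```cpp
// Minimal stereo-depth sketch with OpenCV's block matcher. Assumes
// left.png / right.png are already rectified grayscale images;
// numDisparities (64) and blockSize (15) are placeholder tuning values.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
    cv::Mat disparity;
    bm->compute(left, right, disparity);  // 16x fixed-point disparities

    // Depth falls out of disparity: Z = focal_px * baseline / disparity
    // (focal length and baseline come from calibration, not shown).
    cv::Mat disp8;
    disparity.convertTo(disp8, CV_8U, 255.0 / (64 * 16.0));
    cv::imwrite("disparity.png", disp8);  // brighter = closer
}
```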

Second, I think, is color. If you look for something, its color is a key thing you look for.
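OpenCV can already do a simple version of that with an HSV range mask; a toy sketch (the orange-ish range below is made up) would be:

```cpp
// Toy color search: mask everything except a target HSV range and
// report the centroid of what remains. The orange-ish bounds are made up.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat frame = cv::imread("scene.png");
    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(5, 100, 100),    // lower H, S, V bounds
                     cv::Scalar(20, 255, 255),   // upper bounds
                mask);
    cv::Moments m = cv::moments(mask, true);     // binary image moments
    if (m.m00 > 0)
        std::cout << "target near (" << m.m10 / m.m00 << ", "
                  << m.m01 / m.m00 << ")\n";
}
```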

Just some things for those far smarter than myself to mull over… which they probably have…

As for OpenCV, I am far more interested in using it to recognize individuals than to track them. It seems to me that technology is already largely there, although I have not studied or used it.

For small players such as myself, the cost and availability of the necessary technology is right on the brink of enabling so much… My thanks to those of you who have expanded the possibilities.

Force Field and Obstacle Avoidance Code

I sent the code out by email to all who asked for it. I am by no means saying I think it is the right approach for your bot; you’ll have to figure that out and see what fits. I look forward to seeing whatever you do on this cool project.

Regards,

Martin

By the way, I really like the design of Andar’s head. It kind of reminds me of the Martian from the old Bugs Bunny cartoon show.

Thank you very much Mr.

Thank you very much, Mr. Triplett, you’re the man!

Good luck with Ava.

Thank you very much! I’ll

Thank you very much! I’ll start to work on the autonomous mode soon, so I’ll be going through your code in detail to shortcut the development time!

About the head, you are right! It is kind of similar; I hadn’t noticed that before. I hope I can keep it similar in the final 3D-printed version.

Thanks again!

Thanks

Thanks. I’ve been digesting this; it is quite a bit of code.

There is an old saying that you can’t see the forest for the trees. This appears to look first for where the forest is and then find the individual trees.

The math is not complex; keeping track of what to do next is! Of course, that depends on just what the robot needs to do at any given point in time.

At any rate, it has given me pause to think about just what I want to do.


Ultrasonic sensors wiring

Hello (again) nahueltaibo,

I have been following your discussion with Martin about the advantages of using many ultrasonic sensors (12 for Ava, and here). I am actually starting a new project that will have many similarities with your little guy. I was planning to use 5-7 ultrasonic sensors, but Martin’s experience made me think about it :wink:

While I can see the advantages of so many sensors, I am also wondering about their implementation. Are you planning to use the sensors in single-pin mode or two-pin mode? (I have never tried the single-pin method, and it doesn’t seem to be so straightforward.) With the two-pin mode, 24 pins are required; that’s a lot, even if the Mega offers plenty. Another potential issue I see (I haven’t tried anything yet): polling 12 ultrasonic sensors in a row might take a bit of time, mightn’t it? Time during which the backbone is not available for doing something else.

Right now, an elegant solution could be to use a Nano connected to the 12 sensors. This dedicated Nano would push the values of the 12 ultrasonic sensors to the backbone when necessary (in one shot, keeping the backbone from freezing). Of course, if single-pin mode is not satisfactory, we would run out of pins (or have to use two Nanos ;). A rough sketch of what I have in mind is below.
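Something like this, assuming the NewPing library works in single-pin mode as advertised (trigger and echo on the same pin) and with arbitrary pin choices:

```cpp
// Dedicated sonar-controller sketch: a Nano reads 12 sonars in
// single-pin mode and sends all 12 readings to the backbone as one
// serial packet per sweep. All pin choices are placeholders.
#include <NewPing.h>

const int NUM_SONARS  = 12;
const int MAX_DIST_CM = 200;

NewPing sonars[NUM_SONARS] = {
  NewPing(2, 2, MAX_DIST_CM),   NewPing(3, 3, MAX_DIST_CM),
  NewPing(4, 4, MAX_DIST_CM),   NewPing(5, 5, MAX_DIST_CM),
  NewPing(6, 6, MAX_DIST_CM),   NewPing(7, 7, MAX_DIST_CM),
  NewPing(8, 8, MAX_DIST_CM),   NewPing(9, 9, MAX_DIST_CM),
  NewPing(10, 10, MAX_DIST_CM), NewPing(11, 11, MAX_DIST_CM),
  NewPing(12, 12, MAX_DIST_CM), NewPing(A0, A0, MAX_DIST_CM)
};

void setup() { Serial.begin(115200); }

void loop() {
  // One CSV line per sweep, e.g. "S,97,200,35,..." (0 = no echo).
  Serial.print('S');
  for (int i = 0; i < NUM_SONARS; i++) {
    Serial.print(',');
    Serial.print(sonars[i].ping_cm());  // blocks ~12 ms max per sensor
    delay(30);                          // let echoes die before next ping
  }
  Serial.println();
}
```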

What did you have in mind, and what do you think of those thoughts? Of course, Martin (or anyone else) is more than welcome to answer :slight_smile:

Ultrasonic stuff

Hey LordGG,

You are thinking the same way I do on all of the points!

I can only answer from my plans, since I haven’t written code for the sonar yet, so maybe Martin’s opinion will also be needed. Nevertheless, my idea is also to use a separate Arduino to handle the sonar, as you mention, probably a Nano. I already used the range finder with one pin only in another project without problems, so that’s what I’ll try to do this time as well.

As soon as I get more details, I’ll update this comment so it is more helpful.

Time

I’ve looked a bit into Martin’s code… thanks again.

Sound takes nearly a millisecond to travel one foot, and the echo makes a round trip, so each ping takes a substantial amount of time; multiply that by however many sensors you have, depending on the max distance measured. For example, at a 2 m max range the round trip is about 4 m, roughly 12 ms per sensor, or around 140 ms for a full 12-sensor sweep. You poll the ones you need most the most often.

I believe Martin’s code uses the two-pin mode (that could be changed). Both the Mega and the Due have 54 digital I/Os, so two pins per sensor still leaves a lot open, although it’s a lot of wiring!

A question I have is: what is the Arduino able to do while it awaits responses? It may be better to offload all of this to its own MCU, and with the prices of these so low, that may be the best plan. At least it would be for me, as I have an enormous amount of math to run 14 servos, which need continuous updating.
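On that question: a plain pulseIn()-style read blocks for the whole echo, but the NewPing library has a timer mode where the echo is handled from an interrupt, so the loop stays free for the servo math. A minimal one-sensor sketch of that pattern (pins and timing are placeholders):

```cpp
// Non-blocking sonar read with NewPing's timer mode: the echo is timed
// by an interrupt, leaving loop() free for other work such as servo
// math. Pins and the ping interval are placeholder assumptions.
#include <NewPing.h>

NewPing sonar(12, 11, 200);          // trigger pin, echo pin, max cm
unsigned long nextPingAt = 0;
volatile unsigned int lastCm = 0;

void echoCheck() {                   // called from NewPing's timer ISR
  if (sonar.check_timer())
    lastCm = sonar.ping_result / US_ROUNDTRIP_CM;
}

void setup() { Serial.begin(115200); }

void loop() {
  if (millis() >= nextPingAt) {
    nextPingAt = millis() + 33;      // ~30 pings per second
    sonar.ping_timer(echoCheck);     // returns immediately
  }
  // ...servo updates and other math run here, unblocked...
  Serial.println(lastCm);
  delay(10);
}
```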

What I suggest is not using Martin’s code whole, but reading through it and rewriting what you need. It is part of an overall control system that also handles pitch and roll, which isn’t included here but also isn’t needed by us.