You PROBABLY could do it with one Arduino and some careful programming. But the extra I/O could come in handy, and the extra Arduino is cheap, so why not?
Good investment on the encoders. I also spent a lot of time messing with a circuit and trying to get placement of the IR sensors just right. Never did get it to 100%; very frustrating. I foresee a webcam attached to your RasPi and OpenCV in your future… Thanks for sharing the pics of what you have done.
Thanks Proto. And so far under $50 for this board…
All in, connectors, proto board, 2 Arduino Minis, Compass, barometer… I’m not yet over $50.
When I go to fab, I’ll be using the Atmel chips, not prebuilt Arduinos, as well as the compass, accelerometer, and a few other I2C chips… I believe I can build this board for around $35 all in, in low quantities.
Why did you give up on the DIY encoders? Did they not work correctly? The commercial encoders are really simple, but I don’t understand why they are so expensive. With that money it is possible to buy a Raspberry Pi (and more), which I would think is far more complex.
Do you have some images that show better how you connect the encoder to the wheel?
I’ve tried several DIY encoder circuits. I’ve spent probably in excess of 200 hours just on this problem… Not a waste, really; I’ve learned a lot, and I will probably go back to them at some point. But it was distracting me from the objective of this build, and frustrating me as well.
These Solarbotics have an ATtiny chip onboard that cleans/processes and decodes the pulses. ok… ok… I got lazy… LOL
They just work. Position them 1.6mm from the reflective disk and you get 128 pulses per revolution, with direction. My DIY version would drop out from time to time, or saturate if the wheel speed was too great, or REALLY saturate if you were driving into the morning sun across the kitchen floor…
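For reference, turning those counts into travel distance is simple once the pulses are clean. A quick sketch (the wheel diameter here is just a placeholder, not my actual wheel):

```python
import math

COUNTS_PER_REV = 128        # pulses per wheel revolution from the Solarbotics encoder
WHEEL_DIAMETER_MM = 65.0    # placeholder -- measure your own wheel

def counts_to_mm(counts):
    """Convert raw encoder counts to linear travel in millimetres."""
    revolutions = counts / COUNTS_PER_REV
    return revolutions * math.pi * WHEEL_DIAMETER_MM

# 640 counts = 5 revolutions, roughly 1021 mm with a 65 mm wheel
print(counts_to_mm(640))
```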
Do you have some images that show better how you connect the encoder to the wheel?
I’m not sure I understand this question. Mechanically connect to the wheel/chassis? Or electrically, as in wiring?
I had to enlarge the shaft hole in the encoder board a bit to allow my hub to spin freely. After I affixed the reflective encoder disk, I slid the encoder board over the shaft and held it in place roughly 1.6mm from the disk with kids’ modeling putty. I slid the shaft of the wheel onto the motor shaft, tightened the nut, and hot-glued the encoder board to my chassis with an “L” bracket. Once it set, I removed the putty.
This is what the encoder board itself looks like...
LIDAR Lite can measure down to zero… Just sayin’. :)
I really like this bot because I have a thing for sensors. Are you planning some comparative testing of the sensors or just seeing how many you can fit on one bot? Didn’t see any trusty SR04s on there… Not good enough for ya?
The LIDAR is good for mapping and rangefinding, but not great at detecting small obstacles locally (even the really expensive ones), while the sonar, with its much wider cone, can tell you something is “somewhere in this area at this distance”, which is good for collision avoidance.
I’d like to ultimately have a data feed that combines the two data types: in-field wide angle, and out-field narrow angle.
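Conceptually, the fusion I have in mind is something like this (just a toy sketch; the cone width and field names are placeholders): keep the narrow-beam range per bearing, and let the wide sonar cone pull the estimate in whenever it sees something closer.

```python
def fuse(bearing_deg, narrow_ranges, sonar_range_cm, sonar_center_deg, sonar_half_angle=22.5):
    """
    narrow_ranges: dict {bearing_deg: distance_cm} from the panning narrow-beam sensor.
    sonar_range_cm: single reading from the wide-cone sonar aimed at sonar_center_deg.
    Returns the more conservative (closer) estimate for this bearing.
    """
    narrow = narrow_ranges.get(bearing_deg, float("inf"))
    inside_cone = abs(bearing_deg - sonar_center_deg) <= sonar_half_angle
    if inside_cone and sonar_range_cm < narrow:
        return sonar_range_cm   # "something is somewhere in this area" and it is closer
    return narrow
```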
I had the SR04s initially on my first bot https://www.robotshop.com/letsmakerobots/node/39052 but soon traded them out for the MaxSonar. I was hoping the narrower cone of the MaxSonar would help me with rangefinding, but it became apparent that I could reduce my pan increment to 5 degrees and still see no noticeable difference in the output… so…
After I do some more work on the Mapping and Localization piece, then I’m going to look at comparing the various sensors systematically.
And yes… LOL… I will be picking up a LIDAR Lite to add to the arsenal. I’ve also got a Kinect sitting on the shelf waiting. The problem with it is, it is bigger than my bot!
Awesome! It would be very interesting to mount the different sensors on the same pan bracket, sweep the area in front of the bot, and overlay the results on a single display, or download and plot them on a PC. Would be a great start to a wiki article…
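Something along these lines (Python/matplotlib, with an assumed CSV layout of sensor name, pan angle, distance) would be enough to get the overlay plot going on a PC:

```python
import csv
import math
import matplotlib.pyplot as plt

def load_sweeps(path):
    """Assumed CSV format: sensor_name, pan_angle_deg, distance_cm (one row per reading)."""
    sweeps = {}
    with open(path, newline="") as f:
        for name, angle, dist in csv.reader(f):
            sweeps.setdefault(name, []).append((float(angle), float(dist)))
    return sweeps

ax = plt.subplot(projection="polar")
for name, points in load_sweeps("sweep.csv").items():
    points.sort()
    ax.plot([math.radians(a) for a, _ in points], [d for _, d in points], label=name)
ax.legend()
plt.show()
```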
There might be another possibility for a good obstacle detection/mapping sensor that would be quick and not need to pan to get detailed data. I haven’t tried it but I think it would be doable.
1) Put a $10 laser (that shoots a line pattern or a cross pattern) low on the bot shooting out horizontally forward and level to the horizon. Maybe an inch or two off the ground. This line will spread out to give you 30-45 degrees of coverage at the same time.
2) The laser line will take various shapes depending on what it hits and at what angle. For example, if it approaches a wall head on, a level line will be produced on the wall. If it approaches at an angle, a line sloping up on one side will result. If a narrow obstacle is in the path, a short line will be seen on the obstacle, with the rest of the line disjointed on whatever is behind the obstacle.
3) Use a camera mounted higher up on the bot and OpenCV to look at the line and filter for the intense red/pink of the laser. I have tried this and it works.
4) Evaluate the shape / slope / number of segments / position of the line (haven’t tried, but it seems like basic geometry and a little stats; see the sketch below) to estimate:
1. Distance
2. Obstacles
3. Angle of Attack (when near walls)
5) The strengths would be being able to evaluate an entire wide field of view in a single frame, with great granular detail, without panning. The weakness is that this would not cover the vertical dimension. Perhaps two or three lasers could be used to cover various heights, but that would start beaming people in the eyes.
Looks like you might have all the pieces on your bot, and the skills, to take a whack at it. Hope there is not some flaw in my thinking; I’ve spent quite a bit of time firing a laser and looking at the patterns. It seems doable.
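If it helps, steps 3) and 4) could start out roughly like this (an untested sketch; the threshold and function names are guesses): isolate the strongly red pixels, then keep one laser point per image column to feed the geometry.

```python
import cv2
import numpy as np

def laser_line_points(frame_bgr):
    """Return (column, row) pixel coordinates of the laser line, one per image column."""
    # Isolate strong red/pink: red channel well above the brighter of blue/green.
    b, g, r = cv2.split(frame_bgr)
    mask = (r.astype(np.int16) - np.maximum(b, g).astype(np.int16)) > 60
    points = []
    for col in range(frame_bgr.shape[1]):
        rows = np.nonzero(mask[:, col])[0]
        if rows.size:
            points.append((col, int(rows.mean())))   # centre of the laser stripe in this column
    return points

# With the laser mounted below the camera, the row of each point maps to a distance
# by triangulation; the slope and breaks in the point set give wall angle and obstacles.
```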
If you look at the picture below, you will notice the Raspberry Pi cam (5 megapixel?), and about 6cm below it you will see three wires feeding the laser (power / ground / pulse). A cm below that, you will see the slot where the laser exits. This is a dismantled Black and Decker laser line level.
It works quite well and definitely requires more attention; however, unless you dedicate a processor akin to the Raspberry Pi to this function, you end up with a "run-stop-look-run-stop-look" method of travel.
My goal is to take measurements while traveling. Even with my panning sensors, I send a time stamp and a frame stamp with the distance samples that can then be aligned with wheel encoder position to get point-in-time measurements while traveling. The Arduino can handle this data mapping readily.
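The alignment itself is nothing fancy; conceptually it is just nearest-in-time matching of a ranged sample against the encoder pose log. A Python sketch with made-up field names (my actual version lives on the Arduino):

```python
import bisect

def nearest_pose(sample_time_ms, pose_log):
    """
    pose_log: list of (time_ms, left_ticks, right_ticks) tuples, sorted by time,
    appended as encoder readings arrive.  Returns the pose closest in time to a
    ranged sample, so the measurement can be placed on the map while still moving.
    """
    if not pose_log:
        return None
    times = [t for t, _, _ in pose_log]
    i = bisect.bisect_left(times, sample_time_ms)
    candidates = pose_log[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda p: abs(p[0] - sample_time_ms))
```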
I frequently get told I've got too many processors onboard... nope... I'm good with it.
I feel like I had really good luck with sonars and the “Force Field Algorithm”. I’ll donate some Arduino code if it’s anything you’re interested in. Once I smoothed out the sonar data a bit, it gave pretty good obstacle avoidance while moving around and looking elsewhere. At a few bucks per sonar, it’s hard to beat.
I think around 12 are needed to be effective indoors, cover 360 degrees, and handle wall bouncing. My bot only covered 270-ish, so it would turn back towards a wall (if its goal was behind the wall) after it had turned directly away from it, since the force field was blind in the back. This can be solved in software with a short-term memory of what is behind, or some other techniques… still wish I had 12 though. Despite the blind spot, it would work its way around a wall while attempting to reach its goal.
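Until I dig out my code, the gist of it is roughly this (a minimal Python sketch rather than my Arduino version; the gain is a placeholder you would tune):

```python
import math

def force_field_heading(goal_bearing_deg, sonar_readings, repulse_gain=2000.0):
    """
    sonar_readings: list of (bearing_deg, distance_cm), one entry per sonar.
    Attraction toward the goal plus a repulsive push away from every obstacle,
    weighted by 1/distance^2; returns the resulting steering heading in degrees.
    """
    fx = math.cos(math.radians(goal_bearing_deg))
    fy = math.sin(math.radians(goal_bearing_deg))
    for bearing, dist in sonar_readings:
        if dist <= 0:
            continue
        push = repulse_gain / (dist * dist)     # closer obstacles push harder
        fx -= push * math.cos(math.radians(bearing))
        fy -= push * math.sin(math.radians(bearing))
    return math.degrees(math.atan2(fy, fx))
```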
Did you do anything with the laser line level yet?
Nice work. I really hope you stay interested in this project; I think we can all learn a lot from it. I think I’ll hold off on LIDAR until I see what happens with yours.
I read about the Force Field Algorithm in a University of Waterloo paper from a couple of years ago.
Would be VERY much interested in seeing how you implemented it. It will be a bit before I get back to the laser line level. I’m re-doing my code around command and sensor processing in Python, moving a lot of my “git-er-dun” style inline coding into appropriate classes, and threading where I can. I’ve removed the MySQL command queueing nonsense that I had between the webserver and the bot and replaced it with a websocket client/server. Much more responsive (but you all knew that!)
I’m still tee-ing the commands to a MySQL table for logging and potential replay, but even that may go away in the future…
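The command path now looks roughly like this (a sketch using the Python `websockets` package, not my exact code; the logging and serial calls are stand-ins):

```python
import asyncio
import json
import websockets   # pip install websockets

def log_command(cmd):
    print("LOG:", cmd)            # stand-in for the MySQL tee

async def dispatch_to_bot(cmd):
    pass                          # stand-in for the serial write to the Arduino

async def handle_commands(ws):
    async for message in ws:
        cmd = json.loads(message)               # e.g. {"cmd": "drive", "left": 80, "right": 80}
        log_command(cmd)
        await dispatch_to_bot(cmd)
        await ws.send(json.dumps({"ack": cmd.get("cmd")}))

async def main():
    async with websockets.serve(handle_commands, "0.0.0.0", 8765):
        await asyncio.Future()                  # run forever

asyncio.run(main())
```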
I’m also trying to understand how to set up a publish and subscribe system to support multiple “bots” as they come online. Or more appropriately… to support multiple sensors as they get added to a bot.
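The core idea I’m after is something like this (a tiny in-process sketch; in practice it would sit behind the websocket or a proper message broker so sensors and bots on other machines can join):

```python
from collections import defaultdict

class Broker:
    """Minimal publish/subscribe hub: sensors publish on topics, and anything
    interested (mapper, web UI, logger) registers a callback for that topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = Broker()
broker.subscribe("pibot/sonar/front", lambda t, p: print("map update:", t, p))
broker.publish("pibot/sonar/front", {"cm": 84, "t_ms": 123456})
```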
By the looks of your videos, I’ve got a couple years of catchup to do. Hopefully I can lean on you from time to time to guide me in the right direction.
I’m very interested in where you are going with this, as it’s an area I intend to spend more time on when life/kids and work allow me to!
I spend a lot of time watching insects, which are essentially pretty dumb but seem to get around OK, even with very little brain power, and they don’t get stuck in corners either.
I’ve been thinking about having two levels of navigation: one being a low-level, basic obstacle avoidance, similar to an insect. Your sonars and IR sensors seem ideal for that. Then a higher-level intelligence that can perform the localization and mapping/search path navigation, which the laser rangefinder seems ideal for.
So you can use the laser to make a plan and head for your goal, but the sonars can override to get around obstacles; then, when the danger is passed, the higher-level cortex makes a new plan.
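That split maps onto a fairly simple arbitration loop, something like this (a sketch of the idea only; `planner.next_heading()` is a hypothetical stand-in for the laser-based cortex):

```python
def navigate_step(planner, sonar_readings, danger_cm=30):
    """
    planner: high-level layer working from the laser map; returns a heading in degrees.
    sonar_readings: list of (bearing_deg, distance_cm) from the low-level sensors.
    The reflexive layer overrides the plan whenever anything is inside danger_cm.
    """
    threats = [(b, d) for b, d in sonar_readings if d < danger_cm]
    if threats:
        nearest_bearing, _ = min(threats, key=lambda t: t[1])
        away = nearest_bearing + 180            # face directly away from the nearest threat
        return ((away + 180) % 360) - 180       # normalize to [-180, 180)
    return planner.next_heading()               # danger passed: back to the plan
```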
This robot is great! You seem to have the same kind of goals as I do (I suppose most people here would want the autonomous, “goto location”, and manual drive modes, really). I’ve got my Pi and various other parts. Just trying to work out what to use as a frame.