We are starting a new autonomous BRAT tutorial with the sole purpose of finding those pesky water bottles and kicking the crap out of them autonomously. It uses a BRAT with a GP2D12 on a pan servo. Will post code and progress here until the tutorial is finished. 8)
The BRAT is an amazing platform. I look forward to seeing this tutorial!
How do you plan to tell the difference between, say, a wall and a water bottle with the GP2D12?
When you scan a wall it shows up as a gradual increase or decrease in distance, but scanning a bottle produces a narrow dip in the scan. So it can tell the difference between a wall and a bottle by the sensor's response. We are experimenting at first without walls close by. The idea is: scan, detect, correct, take a step, repeat. When the distance is close enough: kick, scan to confirm the target is down, then revert to the first routine.
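That scan signature can be sketched in code. This is my own illustration in Python (the actual bot runs Basic on the Atom Pro), and the threshold and dip-width values are made-up placeholders, not numbers from the project:

```python
def classify_scan(readings, dip_threshold=10.0, max_dip_width=5):
    """Classify one pan sweep as 'bottle', 'wall', or 'clear'.

    readings: distances (cm) sampled at successive pan angles.
    A bottle shows up as a narrow dip below the surrounding baseline;
    a wall shows up as a wide, gradual slope across the sweep.
    """
    baseline = max(readings)
    # Indices whose distance drops well below the baseline
    dip = [i for i, d in enumerate(readings) if baseline - d > dip_threshold]
    if not dip:
        return "clear"
    width = dip[-1] - dip[0] + 1
    # Narrow cluster of close readings -> bottle; wide span -> wall
    return "bottle" if width <= max_dip_width else "wall"
```

A flat sweep with a sharp three-sample dip classifies as a bottle, while a steadily sloping sweep (distances changing across the whole pan) classifies as a wall.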
Well, by using the GP2D12 you also know the range to the target, so you can roughly calculate its cross-section (i.e. diameter) and compare it to the size of a soda bottle or can. It might keep you from kicking the cat or other less-than-happy-to-be-abused targets.
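That cross-section check is simple trigonometry: the target's width is roughly twice the range times the tangent of half the pan angle it subtends between its two detected edges. A minimal sketch (Python; the function name and the ~7 cm bottle figure below are my own illustration, not from the thread):

```python
import math

def target_width(distance_cm, edge_angle_deg):
    """Estimate a target's cross-section from its range and the pan
    angle subtended between its first and last detected edges.

    distance_cm: averaged GP2D12 range to the target.
    edge_angle_deg: pan sweep angle between the two edges.
    """
    half = math.radians(edge_angle_deg) / 2.0
    return 2.0 * distance_cm * math.tan(half)
```

For example, a bottle that subtends about 10 degrees of pan at 40 cm works out to roughly 7 cm wide, which is plausible for a water bottle but not for a cat.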
lol, that's exactly what I was thinking, EddieB. I still don't see how it's possible to pick out a water bottle from its surroundings. If you get it working, Robot Dude, I'd love to see the logic behind it.
We are approaching the problem incrementally. We are not operating the robot near a wall, nor are there any other obstacles nearby. We will deal with these issues later in the process. We can find and home in on the bottles, but are dealing with the dead band: when the sensor is too close to an object, the readings drop dramatically. There's no physical way to install the sensor farther back on the chassis, so we are dealing with it in software. James is making great progress. Video soon.
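One way the dead band could be handled in software, assuming readings are taken step by step: if the last reading was already near kick range and the new one suddenly jumps far away, assume the bot has walked into the dead band rather than lost the bottle. This is a guess at the approach, not the project's actual code, and the 12 cm kick range is a placeholder:

```python
def corrected_distance(prev_cm, new_cm, kick_range_cm=12.0):
    """Work around the GP2D12 dead band in software.

    Inside roughly 10 cm the sensor's output folds back, so a very
    close target reads as if it were far away.  If the previous good
    reading was already near kick range and the new one suddenly
    doubles, keep the last plausible close reading instead.
    """
    if prev_cm is not None and prev_cm < kick_range_cm and new_cm > 2 * prev_cm:
        return prev_cm  # fold-back detected: trust the last close reading
    return new_cm
```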
Would a Ping not perform better than the IR?
It would probably have better range, but the GP2D12 has a very narrow field of view. And it's just too easy to use with the Atom's built-in A/D, so we are using it.
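For reference, the GP2D12's analog output is nonlinear, so the raw A/D value has to be linearized before it means anything in centimetres. A common approximation for a 10-bit, 5 V converter is the widely circulated 6787/(raw − 3) − 4 fit; this is a generic formula, not necessarily the one the tutorial uses:

```python
def gp2d12_cm(raw):
    """Convert a 10-bit A/D reading (0-1023, 5 V reference) from a
    GP2D12 into an approximate distance in centimetres.

    The sensor is only rated for roughly 10-80 cm; inside ~10 cm the
    output folds back (the dead band mentioned earlier in the thread).
    """
    if raw <= 3:
        return None  # below usable signal
    return 6787.0 / (raw - 3) - 4.0
```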
Are you scanning in one axis or two?
We’re just panning. We have it acting much better now. It can locate and approach with great accuracy, but the kicking is still iffy. The biggest improvement was to read the GP2D12 10 times in fast succession and divide by 10 to average out any false detections. After we added this filter the bot's performance increased tremendously.
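The filter described above is just a boxcar average of N quick reads. A minimal sketch (Python; `read_adc` is a hypothetical stand-in for however you sample the sensor, e.g. the Atom's analog-input instruction):

```python
def read_distance_averaged(read_adc, samples=10):
    """Average several quick GP2D12 reads to suppress spurious spikes.

    read_adc: callable returning one raw A/D sample.
    Returns the integer average, as you'd compute it on the Atom.
    """
    total = 0
    for _ in range(samples):
        total += read_adc()
    return total // samples
```

A single glitched sample (say one 400 among nine 100s) only nudges the average to 130 instead of triggering a false detection outright.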
The reason we are purposely keeping it simple is it’s intended to help beginners. But we will add complexity as time permits. 8)
Interesting. I guess a narrower field of view means that you have to do less panning to find the edges. Is that your reasoning?
Yes, that’s it. The GP2D12 detects at a very narrow point. We even use it with RIOS and an arm to do 3D scanning. It’s not precise enough to create an accurate model, but it can make a recognizable 3D image. The ultrasonic sensors may be able to do the bottle-detection job too, though. I haven’t used them as often as I have the GP2D12.
http://img371.imageshack.us/img371/3259/ps2highresrawoe8.gif
Here are the videos I promised. There are three videos we shot in succession, so we had three successful seek-and-destroy tests in a row. There is still a lot of room for improvement. The third video shows the bot had to take a few swings to connect.
youtube.com/watch?v=fmpcNlfv1oI
youtube.com/watch?v=pt_7JQS3d9M
youtube.com/watch?v=hlKGp7ozSDg
Enjoy!
Hmm, are you moving to a position and then sampling? Have you considered setting one end position, doing a timed move to the other (using the SSC-32 to handle the timing, of course), and sampling as it sweeps?
Is this image somehow generated using the GP2D12?
I’m starting to use the GP2D12. I have two forward-mounted sensors, but no scan. This image appears to be from a low-res camera; is it?
Good job on the bottle conquest!
Alan KM6VV
@ zoomkat & KM6VV: That image was actually made using a GP2D12, Arm, and RIOS. There’s more info about it here:
lynxmotion.net/phpbb/viewtopic.php?t=1139
There’s no SSC-32 in here; the Atom Pro is doing an SSC-32 emulation. We sweep between steps and sample if something is detected in front. We are experimenting with changing the sweep direction depending on which side the robot last saw the object, to avoid losing sight of it and re-sweeping. It’s difficult to explain, but the code is pretty tight. We will post it soon.
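The sweep-direction trick, as described, just means starting the next sweep from the side of the last sighting. A toy sketch (Python; the angle limits, step size, and negative-angles-are-left convention are placeholders of mine):

```python
def next_sweep(last_seen_side, pan_min=-45, pan_max=45, step=5):
    """Return the pan angles for the next sweep, starting toward the
    side where the target was last seen so we don't lose it and have
    to re-sweep the whole arc.

    last_seen_side: 'left', 'right', or None (no previous sighting).
    Negative angles are assumed to be the robot's left.
    """
    angles = list(range(pan_min, pan_max + 1, step))
    if last_seen_side == "right":
        angles.reverse()  # begin the sweep at the right-hand limit
    return angles
```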
I looked at the thread, and it is interesting, but some things that are stated don’t seem to really match the images displayed. I just have an inquiring mind about these things.