Hi. We are working on a redesign/rebuild of our full-body spinner combat bot. As a spinner, it can be difficult to know which direction is forward at any given moment (due to torque shifting the chassis, being hit, etc.). I am looking into LIDAR as a “targeting system” front end, using your LIDAR-Lite with an Arduino, with the end result being: stick forward = toward opponent. I’m looking for any input you may have to offer. Thanks for your time.
You could get rough (relative) headings to the target and feed them to the heading control in the bot. But aren’t your 'bots basically R/C controlled for heading and speed? If you had feedback to the driver, you could get a relative heading and then elect to manually correct toward the target. So, “Captain, target is 30 degrees to port,” and then “Helm, steer 30 degrees to port.” Or you could add the heading correction to your commanded heading, and have the 'bot turn to face the target.
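The “add in the correction” option is really just modular arithmetic on headings. A rough C++ sketch (the function name and the sign convention, port = negative degrees, are my own assumptions, not anything from a particular radio or flight stack):

```cpp
#include <cmath>

// Fold the sensor's relative bearing to the target into the driver's
// commanded heading, normalized to [0, 360) degrees.
// Convention assumed here: port (left) bearings are negative.
double correctedHeading(double commandedDeg, double bearingToTargetDeg) {
    double h = std::fmod(commandedDeg + bearingToTargetDeg, 360.0);
    if (h < 0.0) h += 360.0;   // std::fmod can return a negative result
    return h;
}
```

So “target is 30 degrees to port” while commanding 0 degrees would give a corrected heading of 330.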
Either LIDAR or SONAR could work in this fashion. I’m thinking of Robomagellan…
Thanks for the info. I like the “autocorrect” suggestion more, with a kind of manual override. You are correct regarding the bot’s current control. Like angry overgrown R/C vehicles… lol.
As KM6VV stated, you’d need to add a microcontroller and quite a bit of programming, which seems odd given the R/C nature of robot combat. Which LIDAR in particular? Most 360-degree scanners we offer are not meant for combat situations, so that would be a $400 sensor destroyed in seconds. If you meant a 1D sensor like the LIDAR-Lite, then how would the robot “know” its direction? It can get a distance to whatever is in front of it, but nothing more.
So far, I’m looking into the LIDAR-Lite v3 with SLAM-based software and an Arduino setup. Not exactly plug and play, but with the availability of the API, it should simplify the programming side. Being a full-body spinner, I can simply mount the sensor inside the armored shell with a “viewport” of sorts to get a 360° view. The rub is in accurately identifying the opposing combatant and using that information to correct the forward control.
Now I better see the issue: as a full-body spinner (with a dome?), it’s difficult to know your robot’s orientation, so perhaps the robot on its own could “know” its orientation, and even better, its orientation with respect to its opponent. The question is, how would it tell its opponent from, say, a wall?
That’s the trick. I believe that in software we could do something like ignoring returns from anything over 5 ft tall, or something similar. I will be looking into the software options more thoroughly this weekend, but this could really give our driver an edge in reaction time and forward reference!
With SLAM you should get Z depth; with that, plus perhaps an idea of the opposing 'bot’s size, you should be able to distinguish it from a wall. You can do something similar with a set of sonars. I built a six-sonar head for a robot. I like the SRF08s.
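To give an idea of the size-based discrimination, here’s a sketch. The one-reading-per-degree scan format, thresholds, and blob logic are all illustrative assumptions, not from any particular sensor: a near return that spans only a bot-sized arc is a candidate target, while a wall produces a much wider return.

```cpp
#include <vector>

struct Detection {
    int startIdx;       // first scan index (degrees) of the blob
    int endIdx;         // last scan index of the blob
    double widthMeters; // estimated physical width of the blob
};

// scan: one range reading (meters) per degree, 360 entries.
// Group consecutive readings closer than nearThresh into blobs and
// estimate each blob's width from its angular extent and mean range.
// Blobs narrower than maxBotWidth are candidate opponents.
std::vector<Detection> findBotSizedBlobs(const std::vector<double>& scan,
                                         double nearThresh,
                                         double maxBotWidth) {
    const double PI = 3.14159265358979323846;
    std::vector<Detection> out;
    int n = static_cast<int>(scan.size());
    int i = 0;
    while (i < n) {
        if (scan[i] < nearThresh) {
            int start = i;
            double sum = 0.0;
            int count = 0;
            while (i < n && scan[i] < nearThresh) { sum += scan[i]; ++count; ++i; }
            double meanRange = sum / count;
            double width = meanRange * count * PI / 180.0;  // arc length
            if (width < maxBotWidth)
                out.push_back({start, i - 1, width});
        } else {
            ++i;
        }
    }
    return out;
}
```

For example, an 11-degree blob at 2 m works out to roughly 0.38 m wide, so it passes a 1 m “bot-sized” cutoff, while a near wall filling a quarter of the scan would not.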
As Benson has noted, a $1000 LIDAR (Hokuyo) would be difficult to protect. Sonar transceivers wouldn’t be much easier.
Thanks for the info. With sonar in a 30-foot-square box, are things like early reflections an issue?
For the battle bot arenas I’ve seen (on TV), the arena is fairly open. I think you’d be able to spot another bot if the sensor(s) were set at the proper level. They could be recessed into the 'bot. I’d mount them on a small cart or box of appropriate height and test it out for yourself.
You read the 360 degrees from the six sensors and plot it out on a graph to help you visualize what the sensors have seen. This requires a lot less computational power than SLAM. Then you can work out an algorithm to steer toward the target.
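A minimal steering rule for a six-sonar head might look like this. The 60° sensor spacing and the “nearest return is the opponent” heuristic are assumptions for illustration; in a real arena you’d want the wall rejection discussed above first:

```cpp
#include <array>

// Six sonars at boresights 0, 60, 120, 180, 240, 300 degrees.
// Steer toward the sensor reporting the nearest return, on the
// assumption that in an open arena the closest object is the opponent.
// Returns a steering command folded into [-180, 180] degrees.
int steerTowardNearest(const std::array<double, 6>& rangesMeters) {
    int best = 0;
    for (int i = 1; i < 6; ++i)
        if (rangesMeters[i] < rangesMeters[best]) best = i;
    int heading = best * 60;                          // sensor boresight
    return heading <= 180 ? heading : heading - 360;  // fold to [-180, 180]
}
```

From there, plotting the six ranges on a polar chart (as suggested above) makes it easy to eyeball whether the rule is picking the right blob.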
Nice. So far the idea is to place the sensor rig inside the Ti shell, with a viewport. Since the sensor will be spinning in excess of 90 mph, couldn’t I just use one sensor pulsing a few times per rotation?
I don’t think you’re going to get either of the sensors to work spinning around that fast. I envisioned a “head” up on top of the rotor.
What would be the speed limitations using a lidar sensor?
You wouldn’t spin the LIDAR, from what I’ve seen; it does the scanning itself.
I was thinking that, if it were spinning, and using some predetermined pulse (as opposed to a continual scan), that may give the benefit of 360° scans using less processing power. I just don’t really know.
Not that I can see. I believe your 'bot spins much too fast for the acquisition. Check the scan rate of your sensor; I’m thinking maybe 300 RPM, as opposed to your 'bot’s spin rate?
Hmmm. Thanks for the information. I have some creative thinking to do.