Mapping Rover -- The classic Rover 5 with improved 3D printed axis adaptors

Hi everyone, 

this is my first robot project, not just my first on LMR. In November I decided to build the apparently classic Dagu Rover 5 with the awesome offroad wheels. I wanted to start with a "simple" remote-controlled vehicle before making it autonomous. This should be a robot, after all. 

The Hardware

I'm using the Dagu Rover 5 with 4 encoders, the Dagu Rover Motor Controller, an Arduino Mega, a RedBearLab BLE Shield to send the sensor data, 3 SR04 sonar sensors and a Pololu MinIMU-9 (gyro, accelerometer and magnetometer/compass). The IMU (and the sonars) are mounted on 3D printed parts. As many people have noticed before me, the axis adaptors for the Pololu offroad wheels are very long. The Dagu Rover chassis isn't very stable to begin with, and with the wheels mounted several centimeters out, I was afraid the axles would break at any moment:

The axis adaptors are too long

With my new Printrbot Simple Metal, I designed new adaptors that have a 12 mm hex shaft at one end and a 4 mm half circle on the other end:

The process took a bit of trial and error but I like the result. I also like the holder for the IMU, which puts some distance between the magnetometer and the magnetic fields created by the motors:

Both models and all code are available on Github at https://github.com/stheophil/MappingRover

The Software

I'm a Computer Scientist by training. Computers are simple machines that do as they are told. Robots, however, have to interact with the pesky real world, and nothing really works the way it should. 

Robots don't drive straight when you tell them to: I'm using the Arduino PID library to control all four motors so that the wheels turn at the desired speed. I've run extensive tests to tune the PID parameters (see my Excel sheet). The four motors on my Rover behave very differently. To drive at the same speed, the weakest motor needs a 20% higher PWM signal than the strongest motor. The difference is even larger when turning on the spot, when some wheels must turn backwards. 
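
As a rough illustration of this setup (a minimal sketch, not the code in the repository; pin numbers, gains and the encoder helper are placeholders), one PID instance per motor with Brett Beauregard's Arduino PID library looks roughly like this:

```cpp
#include <PID_v1.h>

const int NMOTORS = 4;
const int pwmPin[NMOTORS] = {4, 5, 6, 7};   // placeholder pin assignment

double input[NMOTORS];      // measured wheel speed (encoder ticks per interval)
double output[NMOTORS];     // PWM value written to the motor controller
double setpoint[NMOTORS];   // desired wheel speed

// One controller per motor; the gains are placeholders and would be tuned per
// motor, since the four motors behave very differently.
PID pid[NMOTORS] = {
  PID(&input[0], &output[0], &setpoint[0], 2.0, 5.0, 0.1, DIRECT),
  PID(&input[1], &output[1], &setpoint[1], 2.0, 5.0, 0.1, DIRECT),
  PID(&input[2], &output[2], &setpoint[2], 2.0, 5.0, 0.1, DIRECT),
  PID(&input[3], &output[3], &setpoint[3], 2.0, 5.0, 0.1, DIRECT)
};

// Placeholder: the real version would return the ticks accumulated by the
// encoder interrupt since the last call.
double readEncoderSpeed(int motor) { return 0; }

void setup() {
  for (int i = 0; i < NMOTORS; ++i) {
    pid[i].SetOutputLimits(0, 255);   // full PWM range
    pid[i].SetMode(AUTOMATIC);
  }
}

void loop() {
  for (int i = 0; i < NMOTORS; ++i) {
    input[i] = readEncoderSpeed(i);
    pid[i].Compute();                 // updates output[i] from input vs. setpoint
    analogWrite(pwmPin[i], (int)output[i]);
  }
}
```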

A compass is a fickle instrument: OK, now the rover is driving straight. But in which direction? I mounted the magnetometer at a little distance from the motors and calibrated the compass using the simple min/max method while the motors were running, because I wanted to calibrate it under real-world conditions. That has worked OK so far, and I've avoided calibrating my compass by ellipsoid fitting.
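
The min/max calibration is essentially what Pololu's own calibration example does. A minimal sketch (assuming the LSM303-based MinIMU-9 and Pololu's LSM303 Arduino library; run it while the motors are on and note the extremes):

```cpp
#include <Wire.h>
#include <LSM303.h>   // Pololu's library for the MinIMU-9's magnetometer

LSM303 compass;

// Track the extremes seen on each axis while driving around with the motors running.
LSM303::vector<int16_t> mag_min = { 32767,  32767,  32767};
LSM303::vector<int16_t> mag_max = {-32768, -32768, -32768};

void setup() {
  Serial.begin(9600);
  Wire.begin();
  compass.init();
  compass.enableDefault();
}

void loop() {
  compass.read();
  mag_min.x = min(mag_min.x, compass.m.x);
  mag_min.y = min(mag_min.y, compass.m.y);
  mag_min.z = min(mag_min.z, compass.m.z);
  mag_max.x = max(mag_max.x, compass.m.x);
  mag_max.y = max(mag_max.y, compass.m.y);
  mag_max.z = max(mag_max.z, compass.m.z);

  // The midpoint (max + min) / 2 per axis is the hard-iron offset subtracted
  // from every later reading; (max - min) gives the per-axis scale.
  Serial.print("x: "); Serial.print(mag_min.x); Serial.print(" .. "); Serial.println(mag_max.x);
  delay(100);
}
```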

I'm using Pololu's MinIMU AHRS (ported to straight C -- I'm using the awesome inotool.org build system for Arduino) to calculate the Rover's heading. 

The robot sends its heading, the distance recorded by the wheel encoders and the sonar measurement via Bluetooth (BLE to be precise which is just barely fast enough!) to my Mac. My central command application is not only the remote control but is also supposed to become the robot's brain. It currently draws the robot's path and builds a map of the robot's surroundings based on the sonar measurements. 

Cheap sonars are garbage-in, garbage-out sensors: But what can you expect for a few bucks? 

With the ambiguous results you get from sonars with a 15° opening angle, you can only build pretty rough probabilistic maps. I'm building an occupancy grid map representation, i.e., a map of the probabilities that a point is occupied. Here you see a partial map of my living room. The darker the color, the higher the assumed probability:

The sonar measurements become more precise as the rover approaches the obstacle. If the front sonar sensor were movable, the rover could sweep the space in front of him and remove some of the measurement noise in the map. 
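
For readers new to occupancy grids, here is a minimal sketch of the idea (my own illustration with assumed constants, not the project's exact representation): each 5 cm cell accumulates evidence in log-odds form, which is what lets many noisy sonar readings add up to a usable map.

```cpp
#include <vector>
#include <cmath>

// Minimal occupancy grid: each cell stores a log-odds value; positive means
// "probably occupied", negative means "probably free", 0 means unknown.
struct OccupancyGrid {
    static constexpr double kCellSize = 0.05;   // 5 cm per cell, as in the post
    int width, height;
    std::vector<double> logodds;                // row-major

    OccupancyGrid(int w, int h) : width(w), height(h), logodds(w * h, 0.0) {}

    // Accumulate evidence for one cell. delta > 0: sensed as occupied,
    // delta < 0: sensed as free.
    void update(int x, int y, double delta) {
        logodds[y * width + x] += delta;
    }

    // Convert back to a probability for display (the greyscale image above).
    double probability(int x, int y) const {
        double l = logodds[y * width + x];
        return std::exp(l) / (1.0 + std::exp(l));
    }
};
```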

Possible Improvements:

At the very least, the Mac command center should control the robot to create some simple maps. Then, it's a robot!

Of course, I have grand ideas for the Rover v2. I've been eyeing the new Raspberry Pi 2 to make the robot truly independent. The Raspberry Pi could use the camera to correct the magnetometer disturbances. The new Lidar Lite laser scanner is looking sweet too.

Update (7 Oct 2015): Making Maps Autonomously

Like I've said above, I already had my eyes on better hardware from the beginning! But first, this robot had to become an actual robot and not a remote-controlled toy. I've just pushed the first (kind of) working version of my autonomous map-making robot to my Github repository. This is how it works:

As described above, the robot uses the very imprecise sonar sensors to create a probabilistic map of its environment. That means the map is a greyscale image where the darkness of a pixel corresponds to the robot's confidence that an obstacle is at that position:

Occupancy grid

When one of the sonar sensors signals that it detects an obstacle at a distance of 2 m, I update all pixels inside this arc. All pixels at a distance of less than 2 m become a little more likely to be free; all pixels on the arc at 2 m distance (plus or minus some tolerance) become a little more likely to be occupied. Over time, these probabilities accumulate and give a surprisingly good map, if you know what my living room looks like. The red line in the picture is the path the robot has taken; the little red square at the center is the robot itself. 
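
A sketch of that arc update (geometry only; the cone angle, tolerance and evidence weights are assumptions, and it reuses the OccupancyGrid sketched earlier with the grid origin at (0, 0)):

```cpp
#include <cmath>

// Apply one sonar reading: distance d (metres) along heading theta (radians)
// from robot position (rx, ry). Cells inside the 15 degree cone but closer
// than d get "free" evidence; cells on the arc at distance d get "occupied"
// evidence.
void applySonarReading(OccupancyGrid& grid,
                       double rx, double ry, double theta, double d)
{
    const double kHalfCone  = 7.5 * M_PI / 180.0;  // half of the 15 deg opening angle
    const double kTolerance = 0.05;                // thickness of the "hit" arc
    const double kFree      = -0.2;                // assumed evidence weights
    const double kOccupied  =  0.4;

    for (int y = 0; y < grid.height; ++y) {
        for (int x = 0; x < grid.width; ++x) {
            double cx = (x + 0.5) * OccupancyGrid::kCellSize - rx;
            double cy = (y + 0.5) * OccupancyGrid::kCellSize - ry;
            double r = std::hypot(cx, cy);
            double angle = std::atan2(cy, cx) - theta;
            angle = std::atan2(std::sin(angle), std::cos(angle)); // wrap to [-pi, pi]
            if (std::fabs(angle) > kHalfCone || r > d + kTolerance)
                continue;                           // outside the cone or beyond the echo
            if (r < d - kTolerance)
                grid.update(x, y, kFree);           // in front of the echo: likely free
            else
                grid.update(x, y, kOccupied);       // on the arc: likely occupied
        }
    }
}
```

(Scanning the whole grid per reading is wasteful but keeps the sketch short; limiting the loops to the sensor's bounding box is the obvious optimization.)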

This occupancy map is not very good for navigating. Its resolution is 5 cm per pixel, and my robot is about 30 cm on each side. That means the robot occupies a lot of pixels. The robot has to find paths through the map such that it doesn't collide with a black (or dark grey) pixel. This would be much easier if I had a map where the pixel at position (x, y) is white if (and only if) the robot could be centered at position (x, y) without colliding with a black pixel in the surroundings of (x, y). I can create such a map using an image transformation called an erosion. Eroding the image means that I enlarge all black pixels by the size of my robot. This is the result:

Eroded map
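
In OpenCV (which the conclusion below mentions using), this erosion is essentially a one-liner; the exact call and kernel size in this sketch are assumptions for illustration:

```cpp
#include <opencv2/imgproc.hpp>

// Grow every dark (occupied) pixel by roughly the robot's radius. erode()
// takes the minimum over the kernel neighbourhood, so black pixels expand and
// the white free space shrinks.
cv::Mat erodeForRobot(const cv::Mat& occupancyImage)
{
    const int kRobotSizePx = 7;   // ~30 cm robot at 5 cm per pixel
    cv::Mat kernel = cv::getStructuringElement(
        cv::MORPH_ELLIPSE, cv::Size(kRobotSizePx, kRobotSizePx));

    cv::Mat eroded;
    cv::erode(occupancyImage, eroded, kernel);
    return eroded;
}
```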

Now, all that is missing is a simple strategy. A random walk would already work; that's what Roombas did until the very latest model. But maybe I can do a little better. An important observation I made while remote-controlling the robot is that the sonar sensors are very imprecise. They have an opening angle of 15°, which means that from afar they don't measure very accurately how wide an obstacle actually is. With a sonar sensor it's therefore better to drive past obstacles at close distance. That is all the strategy I've implemented: 

  1. Make a 360 degree turn to read the environment
  2. Find the optimum angle, which leads past the most obstacles at relatively close range (a rough sketch of this scoring follows the list). Closer obstacles are preferred because the further away the target, the less confident the robot can be that he'll reach it on the planned path. The robot's IMU is not very precise either, so he'll frequently think he's making a little turn just because there's a steel beam beneath my floor. 
  3. Drive at the optimum angle until the robot reaches an (unforeseen) obstacle or until he has driven past the target. 
  4. Repeat
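
A rough sketch of the scoring in step 2 (a reconstruction under assumptions, not the exact code in the repository: 8-bit maps with black = obstacle, 5 cm per pixel; the exclusion of already-visited obstacles described below is left out for brevity):

```cpp
#include <opencv2/core.hpp>
#include <cmath>

// Score one candidate heading: walk outwards on the eroded map until the
// robot would collide, and count obstacle pixels it would pass at close
// range, weighting nearer obstacles more strongly (1 / r).
double scoreHeading(const cv::Mat& eroded,     // 8-bit, 0 = robot would collide
                    const cv::Mat& obstacles,  // 8-bit, 0 = obstacle pixel
                    cv::Point robot, double angle, int maxRangePx)
{
    const int kNearbyPx = 10;                  // "close range": ~50 cm at 5 cm/px
    double score = 0.0;
    for (int r = 1; r < maxRangePx; ++r) {
        cv::Point p(robot.x + int(r * std::cos(angle)),
                    robot.y + int(r * std::sin(angle)));
        if (p.x < kNearbyPx || p.y < kNearbyPx ||
            p.x >= eroded.cols - kNearbyPx || p.y >= eroded.rows - kNearbyPx)
            break;                             // leaving the known map
        if (eroded.at<uchar>(p) == 0)
            break;                             // this heading would collide here
        for (int dy = -kNearbyPx; dy <= kNearbyPx; ++dy)
            for (int dx = -kNearbyPx; dx <= kNearbyPx; ++dx)
                if (obstacles.at<uchar>(cv::Point(p.x + dx, p.y + dy)) == 0)
                    score += 1.0 / r;          // prefer obstacles reached sooner
    }
    return score;
}

// Step 2: evaluate all candidate headings after the 360 degree scan and pick
// the best one.
double bestHeading(const cv::Mat& eroded, const cv::Mat& obstacles,
                   cv::Point robot, int maxRangePx)
{
    double bestAngle = 0.0, bestScore = -1.0;
    for (int deg = 0; deg < 360; ++deg) {
        double a = deg * CV_PI / 180.0;
        double s = scoreHeading(eroded, obstacles, robot, a, maxRangePx);
        if (s > bestScore) { bestScore = s; bestAngle = a; }
    }
    return bestAngle;
}
```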

In order to find the optimal angle in step 2, the robot has to know where he has been. Again, I store this information in a map: I draw a very thick line along the path the robot has already taken, and all obstacles along this path are not considered again. This is how that looks:

The thicker red line is the path the robot is currently planning to take. 
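
A minimal sketch of that "already visited" mask (assumed OpenCV calls and a made-up line width): draw the recorded path as a thick line, then skip any obstacle pixel under it when scoring candidate headings.

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Build a mask of everything the robot has already driven past: a thick line
// along the recorded path. Obstacle pixels under this mask are ignored when
// scoring candidate headings.
cv::Mat visitedMask(cv::Size mapSize, const std::vector<cv::Point>& path)
{
    cv::Mat visited = cv::Mat::zeros(mapSize, CV_8UC1);
    const int kPathWidthPx = 15;    // somewhat wider than the robot; assumed value
    for (size_t i = 1; i < path.size(); ++i)
        cv::line(visited, path[i - 1], path[i], cv::Scalar(255), kPathWidthPx);
    return visited;
}
```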

Conclusion

The robot has only an Arduino Mega onboard, which is not enough to implement any kind of interesting robotics algorithm. I decided early on to develop the algorithms on my Mac instead of putting a Raspberry Pi or similar on the robot itself.

  • Programming and especially debugging was much easier this way; having a simple UI to visualize the robot's maps and decision making was very helpful. 
  • I underestimated the impact the delayed communication over Bluetooth would have. The robot was unable to turn to a specific angle until I reduced the robot's speed significantly. It still overshoots somewhat. 
  • I started writing some of the algorithms in Swift (like the UI) because I wanted to try it out. This was quite a waste of time for several reasons: 
  1. I'm a C++ programmer by day, so I'm much more productive with that in the limited time I have. Plus, if the upgraded rover is to have an onboard Raspberry Pi, I would have had to port everything to C++ anyway. 
  2. I noticed only in the last few weeks that I needed a lot of image processing algorithms in order to make the robot autonomous: calculating the eroded map, calculating distances from obstacles (= black pixels), drawing thick lines over the visited path, etc. While Mac OS X comes with quite a few good image processing libraries, again these would not work on a Raspberry Pi. Instead, I've used the OpenCV libraries, which most people use for face recognition and the like. I had no idea that OpenCV is much more than just a "Computer Vision" library. It implements a large set of image processing algorithms too. 

Update (9 Oct 2015): Now with a video! 

There's one to-do item remaining: the map making uses a simple local optimization strategy. At the robot's current position, it chooses the angle that leads the robot past the biggest number of unvisited obstacle pixels. It does not know how to go to unexplored parts of the map if no such pixels exist in the robot's immediate surroundings.

 

Builds a probabilistic occupancy map from sonar sensors

  • Control method: autonomous
  • Sensors / input devices: Ultrasound sensor
  • Target environment: indoor

This is a companion discussion topic for the original entry at https://community.robotshop.com/robots/show/mapping-rover-the-classic-rover-5-with-improved-3d-printed-axis-adaptors

Thanks!
The sentence was confusing. I create some distance between the motors, which create the magnetic field, and the compass. The compass, gyro and accelerometer are all part of the same component, hence the confusion.

One downside of my adaptors is that the wheels are not entirely straight relative to the chassis. But that seems to matter only aesthetically. My adaptors might still be slightly too large. Even with the original wheels my chassis looked bent.

This is really nice work!

I love the mapping, looks really good for just ultrasonics

Welcome to LMR!

To map even better, the robot could turn on the spot or wait longer to average the readings!

Turning on the spot

I’m afraid turning on the spot creates more errors rather than less :-) 

I get more sensor readings but the exact pose of the robot becomes less certain. I thought a rotatable sensor on the front might work better. 

I was surprised too :slight_smile:

But the credit goes to the developers of the original algorithm 

Good work here. I wonder if

Good work here. I wonder if two sonar sensors in each direction would improve resolution? Scanning them, as you said, is another option. I’m reminded of the vacuum cleaners that use scanning lidar to map a room’s perimeter and then store that map for navigation purposes. Once a space is mapped, it moves to an adjoining space.

Nice Rover

Very cool robot.

It’s a bit similar to what I had in mind for my own Rover, except I’m still at the remote-controlled toy stage. (I haven’t posted it on LMR yet.)

I decided to go straight for the Raspberry Pi though. Being a programmer in my day job, I think the coolest features will be available with more computing power onboard.

I had the same latency problem when I wanted to control a pan & tilt camera in real time. I moved the servo position logic to the Raspberry Pi and movement was suddenly a lot more fluid.

Very cool project, good job! Can’t wait to see improvements when you go the truly independent Raspberry Pi 2 way. 

Also, a quick question: how did you manage to make the robot understand its own movements? Does it know that at “X” PWM it should go at “Y” speed (kind of hard-coded), or does it deduce its speed by looking at the distance of the object in front of it? (If the distance is smaller after X time has passed, it means the Rover actually went forward and isn’t stuck.)

This is awesome! I am very

This is awesome!  I am very impressed.  Incredible that you can make this work as well as you do with just a few ultrasonic sensors. 

You might want to look at ROS (ros.org, the Robot Operating System) if you are moving to a RasPi, or even if you keep your Mac as the master controller (ROS runs on Linux; not sure if they have a Mac version or not, though either way not a problem).  If you set up some XML and follow some tutorials, ROS will give you the SLAM algorithm you are trying to create.  If you get a LIDAR that has a driver, it will work automatically for you if that is the route you want to go.  There is support for Kinect, so for $25 from Gamestop.com you can get one that generates a 3D point cloud with about 15’ range and decent granularity.  They also have a navigation package, so once you have a map you can tell it where you want it to go and it will try to find the best way to get there.  Willow Garage, which was a robotic think tank for Google, did much of the work on ROS.

Check out this video:

https://www.youtube.com/watch?v=17W8dkzkvWA

I haven’t gotten this all working yet on my robot (in progress!), and there is a learning curve.  There are a lot of gotchas that aren’t well documented, but if you can make your own SLAM algorithm with some $2 ultrasonic sensors, I am sure you will have this working in no time compared to me.  If you want more info on ROS, let me know and perhaps we can connect offline.

This is a demo using an old version of ROS and a Kinect.  The idea is there, but there is a problem with the driver so it won’t connect to the Kinect (ironic: can’t connect to the Kinect?).  That needs to be fixed in the newest version of OpenNI, but it tells you all the steps and how to view the data coming from the Kinect.

https://www.youtube.com/watch?v=sYrncrewttQ

You have done some cool work.  I love that adaptor you built going from a 12 mm hex to a 4 mm D-shaped shaft.  I have wheels I scavenged from broken kids’ toys that have that hex, so I would love it if you would post an STL file for that.  Brilliant work.

Regards,

Bill

 

 

Thanks for the hints

Hi Bill,

I’ve hardly looked at ROS but I know it exists :slight_smile: So far, I didn’t want to use its SLAM implementation because I’m trying to understand the algorithms (and challenges) better. Obviously, the big part the current algorithm is missing is estimating the robot’s position from the map. This is still on the to-do list! 

I’ve uploaded the axis adaptor; you can get it at https://github.com/stheophil/MappingRover/blob/master/3D/axis_adaptor.scad The model itself is very simple, but you need to tweak the inner and outer diameter a bit. It depends on the tolerance of your printer as well.

Regards

Sebastian

**Servo logic**

I think I’ll follow your suggestion and move some of the turning logic over to the Arduino. If I send a signal “Turn to angle PI” or something like that, the results will hopefully be better. 

As to your question, I’ve written about that in another comment on the site. I hope I understood your question correctly. 

Sonar resolution

I’m not sure two sonars improve the resolution. I think you could get better results by comparing sonar measurements over time. When the robot senses an obstacle to its right at a distance of 2 m, then moves 10 cm forward and the measured distance falls, the obstacle must be in the part of the sonar sensor arc not covered before. 

Turning on the spot really helped

Thanks for the suggestion :smiley:

Sometimes the journey is

Sometimes the journey is more important than the destination.  Nothing wrong with that.  This is a hard problem to crack but have fun. 

Thanks for the upload of that file.  That will be very useful.  I will get my son to print out a few of those.

Regards,

Bill

 

Great posts…

Really great posts, especially the maps.

I’ve done a lot of work using multiple sonar setups.  There are positives and negatives to using them.  Good for detecting obstacles, not good at knowing their size, shape, etc. within the cone.

I would think that with your software you could build much more accurate maps by using a Sharp IR distance sensor on a panning servo.  The Sharp sensor would give you a distance at one particular angle (instead of the 15 degree cone).  The robot could take hundreds of measurements from a particular location before moving on to another location.  You could put multiple sensors on the same pan at different headings if you wanted to speed this up.  If you did pan and tilt, you could add some height to the map.  Obviously all this would take a lot of time, but I think the map could get pretty good if the bot knew pretty closely where it was when it made the readings.  Obviously some of the exotic sensors can make quick work of it, but it can be very rewarding to see what can be done with less.

I’ll be following this bot to learn as much as I can.  I’ve never done the occupancy grid mapping, so I hope you post frequently!

Adding height to the maps

Adding height to the maps sounds like a whole different ball game :slight_smile:

The next version of the robot will use a panning servo, but I plan to mount the Lidar Lite on top of it. I’ve just finished a little video and will update the post once it’s online. 

I ended up following your

I ended up following your suggestion and moved more control logic onto the robot. This improved the precision while turning dramatically. 

Is this project still being

Is this project still being developed? I think using image processing for the map is overkill. Instead of saving your occupancy grid probabilities as an image, store the values in a 2D array. You can then use a MAP rule to convert the probabilities to the deterministic values of 0 for empty and 1 for occupied and save the result in a new array. Using this new deterministic array you can use any number of search algorithms (A*, Dijkstra’s algorithm, D*, etc.) to find a path for the robot to travel.

The project is being

The project is being developed in the sense that I’ve upgraded to the LidarLite sensor and a real onboard computer. I will post something about it soon. I’m working through Thrun’s book “Probabilistic Robotics” and an online lecture using it. Highly recommended. 

As for your remarks: an image is nothing more than a 2D array. Using OpenCV and its image/2D array representations gives me access to its algorithms. There are non-trivial algorithms like feature extraction, or the erosion algorithm I use here to increase obstacles in size. The mapping from probabilities to a binary image is something I do too. Even for something so simple, the OpenCV implementation is better than a handwritten one, because they try very hard to exploit the parallel nature of the problem. 

I wrote a naive C++ implementation for the erosion filter and even with all compiler optimizations, my simple implementation was 10x slower. I see that you’re also implementing occupancy grids (https://www.robotshop.com/letsmakerobots/node/49549) so I definitely recommend using OpenCV. I find that very often in programming, the key is to understand what kind of problem you have on your hands. Then you can use the appropriate tool for the problem. Making maps and navigating in them is to a large degree an image processing problem. 

- Converting a 2D array of probabilities to a binary 2D array of free/occupied pixels? That’s thresholding a greyscale image into a black-and-white one

- Increasing the size of obstacles by half your robot diameter? That’s an image erosion

- Calculate the likelihood field, i.e., how likely an obstacle measured by your sensor coincides with an obstacle in your map? Calculate the distance transform of your image, i.e., the image where each pixel’s value (from 0 to infinity) equals its distance to the nearest obstacle in the source image. 

- Find a path through your map that stays as far away from obstacles as possible? Again, distance-transform your map first. 

 

(OpenCV does not have path-finding built in, but I’ve implemented A* over the image map.)
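
To make the mapping above concrete, the corresponding OpenCV calls look roughly like this (an illustration with assumed thresholds and kernel size, not the project’s code):

```cpp
#include <opencv2/imgproc.hpp>

// occupancy: CV_8UC1 greyscale map, dark = likely occupied (as in the post).
void mapOperations(const cv::Mat& occupancy)
{
    // 1) Probabilities -> binary free/occupied: a simple threshold.
    //    Free space stays white (255), obstacles become black (0).
    cv::Mat binary;
    cv::threshold(occupancy, binary, 127, 255, cv::THRESH_BINARY);

    // 2) Grow obstacles by the robot's radius: erosion with a robot-sized
    //    kernel (erode takes the neighbourhood minimum, so black expands).
    cv::Mat eroded;
    cv::erode(binary, eroded,
              cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7)));

    // 3) Distance of every free pixel to the nearest obstacle: the distance
    //    transform measures the distance to the nearest zero (= obstacle) pixel.
    cv::Mat distances;
    cv::distanceTransform(binary, distances, cv::DIST_L2, 3);
}
```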

Kool. You lost me somewhere

Kool. You lost me somewhere along the way, but I think I see your point. I have been trying my hand at OpenCV lately and it’s a mouthful. My occupancy grid implementation was written purely in Python (well, all of my AI was implemented in Python), and since I am still new to the image processing business I used Python lists to store and manipulate the data. It’s not the most optimized but it works fine for now. I would consider using OpenCV later on when I get a handle on things. Great work btw.