Processing mapping project

Project overview:


A mapping system using serial comms and Processing.



I can do the coding and database side of things. Hardware will be dependent on everyone's robots, but I use Arduino and plan to go on to a BeagleBoard in the long run, and Cris uses PIC, so we should be able to make this pretty portable for everyone's projects.


At the moment I'm working on a nice GUI (graphical user interface) to make it easier to see what's happening and hopefully cut down on development time.


Ideas anyone? :D


This sounds like a great project! If you decide on a standard serial communication format for what the program is expecting, then pretty much any hardware could communicate with it as long as it stuck to those specs.
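To make that concrete, here's a minimal sketch of what such a format could look like. The message name ("OBJ") and its fields are made up for illustration, not a settled spec: one reading per line over serial, comma-separated, parsed on the PC side.

```java
// Hypothetical line-based serial format: one object report per line,
// e.g. "OBJ,5,10,5,45" meaning x, y, length, angle.
public class SerialFormat {
    public static double[] parseObjectLine(String line) {
        String[] parts = line.trim().split(",");
        if (parts.length != 5 || !parts[0].equals("OBJ")) {
            throw new IllegalArgumentException("unexpected message: " + line);
        }
        double[] fields = new double[4];
        for (int i = 0; i < 4; i++) {
            fields[i] = Double.parseDouble(parts[i + 1]);
        }
        return fields; // {x, y, length, angle}
    }
}
```

Any micro that can print a comma-separated line (Arduino, PIC, whatever) could then talk to the mapper without caring what's on the other end.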



Most of this week has been spent trying to sort out my wireless cam and Processing so I can crack on with this.

I'm currently working on a laser ranging device that I plan to use for the basis of the mapping, but I plan to be able to use other devices, such as a Ping, or other ways to measure distance.

My idea so far is to have the bot gauge where in the room it is by using measurements to objects, in combination with measured travel, to work out where it is in relation to other objects. The way the map will be built up and formatted for storage and reference is still a little sketchy; this is where I will need help, I think.

The maps need to be either small or easily split into sections so that small microcontrollers can utilise them, but they also need to be accurate and readable as well as easy to form. I think this will develop with time. I'd also like to have a set of classes for various chips and languages; that way we could develop a drop-in mapping system.

Map criteria



Easy to use from both a computer and a microcontroller.

Maybe a grid format with various scales built in would work. For example, it could have a few different sizes of grid: from cm squares, moving up in tens to 1 metre, then moving up in 5 m squares. The map would be orientated around the robot, and objects mapped with coordinates relative to the bot.
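As a sketch of the multi-scale grid idea, the same robot-relative offset (in cm) can be bucketed into a cell index at whichever scale is in use. The scale values here are just the ones suggested above, not fixed:

```java
public class GridScales {
    // Hypothetical scales, in cm per cell: 1 cm, 10 cm, 1 m, 5 m.
    public static final int[] SCALES_CM = {1, 10, 100, 500};

    // Map a robot-relative offset in cm to a cell index at the given scale.
    // floorDiv is used so negative offsets (south/west of the bot) also
    // fall into well-defined cells.
    public static int cellIndex(int offsetCm, int scaleCm) {
        return Math.floorDiv(offsetCm, scaleCm);
    }
}
```

So a wall 10.5 m north of the bot sits in row 10 of the 1 m grid but row 2 of the 5 m grid, and the bot itself is always in cell 0 at every scale.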

robot is at position 0,0

wall position: 10 meters north 5 meters east (centre position)

wall length: 5 meters

wall angle(from bot): 45 degrees

This could be passed to a function to deal with it after that:

storeObject(5, 10, 5, 45) // x, y, length, angle

The method above takes the position of an object from its centre point (if known). The problem is that it depends on whether you know the depth of an object to start with; however, this can be solved by assuming ( :D ) that the object is flat and has no depth unless otherwise known. The map system would then work out which objects are linked.
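A rough sketch of what that storeObject() could do under the flat, zero-depth assumption (the class name and storage layout are mine, and the angle is taken as degrees from the bot's x axis): turn centre, length, and angle into the two endpoints of a segment.

```java
import java.util.ArrayList;
import java.util.List;

public class FlatObjectMap {
    // Each stored object: {x1, y1, x2, y2}, endpoints of a zero-depth segment.
    public final List<double[]> objects = new ArrayList<>();

    // x, y: centre of the object relative to the bot (bot is 0,0);
    // length in the same units; angleDeg measured from the bot's x axis.
    // Assumes the object is flat with no depth unless otherwise known.
    public void storeObject(double x, double y, double length, double angleDeg) {
        double rad = Math.toRadians(angleDeg);
        double hx = (length / 2.0) * Math.cos(rad);
        double hy = (length / 2.0) * Math.sin(rad);
        objects.add(new double[] {x - hx, y - hy, x + hx, y + hy});
    }
}
```

Linking objects then becomes a matter of checking whether two segments share (or nearly share) an endpoint.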

The other way would be to define the position of an object with coordinates for all points on it:

A wall would have a start point and a stop point

A square would have 4 points

A box 6 points and so on.

This might be more accurate, but I can see more problems with overly complex layouts. A standard way for everything would be good, but I can't see defining a line the same way as defining a box or a curve. I can see using the centre point of an object to line it up on the grid and then sizing it from there working, but I think that might have a few problems. The ability to work in either 2D or 3D would be good too, and probably not that hard to implement.
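One hedged way to get a standard representation out of the point-based approach: treat every object as just a list of points (a wall is 2, a square 4, a box 6), and derive the centre from the points rather than storing it. The class here is illustrative only:

```java
public class PointObject {
    // Each entry is {x, y}; a third element could be added for 3D.
    public final double[][] points;

    public PointObject(double[][] points) {
        this.points = points;
    }

    // Centre point: the average of all points, which could be used to
    // line the object up on the grid before sizing it from the points.
    public double[] centre() {
        double cx = 0, cy = 0;
        for (double[] p : points) {
            cx += p[0];
            cy += p[1];
        }
        return new double[] {cx / points.length, cy / points.length};
    }
}
```

Curves would still need special handling (e.g. sampling them into enough points), which is exactly the mismatch the paragraph above worries about.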

The way I have outlined above would eliminate the need for GPS or beacons. However, I can't see this method working easily on an Arduino or PIC (if I'm wrong about that, please tell me), so I am planning on having either a powerful board on my bot or some form of server that deals with this, with the bot just sending data back to it. It would be nice to have a way to simplify this so that it could run on a less powerful chip.

Wheel encoders scare me, so I'm planning on using waypoints for travel measurement, by picking an object on the map and locking a distance sensor to it while the bot moves. This might need some correcting within the mapping program itself, but I'm confident I can make this work.
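As a sketch of how the locked-sensor idea could give a travel estimate: take a range reading to the object before and after a straight-line move, plus the angle between the travel direction and the initial bearing to the object, and solve the law of cosines for the distance. This is my own geometry, not something from the project, and it assumes the angle stays under 90 degrees (the bot is closing on the object):

```java
public class WaypointOdometry {
    // r0: range to the locked object before the move (cm).
    // r1: range after the move (cm).
    // thetaDeg: angle between the travel direction and the initial bearing.
    // Law of cosines: r1^2 = r0^2 + d^2 - 2*r0*d*cos(theta), solved for d.
    public static double distanceTravelled(double r0, double r1, double thetaDeg) {
        double t = Math.toRadians(thetaDeg);
        double disc = r1 * r1 - r0 * r0 * Math.sin(t) * Math.sin(t);
        if (disc < 0) {
            throw new IllegalArgumentException("inconsistent range readings");
        }
        // Smaller root: the bot moving toward the object (theta < 90 deg).
        return r0 * Math.cos(t) - Math.sqrt(disc);
    }
}
```

Driving straight at the object (theta = 0) collapses to the obvious case: distance travelled is just the drop in range.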

Most of this is just ideas for now; I need to get my laser ranger working first, and then I will have a good test platform to work with.

I'd appreciate some input if anyone has any, or is interested in working with me on this.


I'm going to start with the one fear that you have, and that's the encoders. I think these would be invaluable to this project, as they would allow you to track movement to a very fine level. Objects can be blocked (by roaming cats, dogs, and kids), and relying on one could cause you to lose direction.

One way could be to drop little IR beacons/waypoints (do I ever get tired of these? :smiley: ), breadcrumbs as it were, that can be dropped from a small holder. I have an idea for these… will have to work on it when I'm done with some other things on the table.

Anyway, another useful sensor would be a compass: you could get your bearing for the most part, though that could be affected by various magnetic sources.


Mapping is tricky; I'm not sure how to tackle that one, but I do have a few thoughts on it. I also found this interesting article while doing a quick Google for grid mapping and robotics.

If you have Bluetooth on your robot and your computer, then it shouldn't be that hard to make them communicate.


This project is similar to what I am working on for my robot.

The way I thought of it: have a pre-built map as a grid of 10x10 cm squares, stored in EEPROM or on an SD card, depending on the size. Probably I will end up using the SD card, as there are ready-made shields for Arduino. The Mapper module loads from the SD card a section of the map that is 2x2 metres, centred around the robot, as the sensors can scan this reliably in any direction.

Have a compass on the robot, used to keep the robot's front oriented towards the top of the map (north). The robot will start at the base (charger), with a known position on the map. Then it will use various ways to measure distance (Ping, Sharp rangers) to known objects on the map (walls, unmovable objects like the couch, TV stand, etc.) and encoders to travel precise distances (as far as possible) and turn precise degrees. Turning is always double-checked with the compass.

As the robot moves a certain distance, a new section of the map is pulled from the SD card. If new objects are found, or movable objects have disappeared from the current section, the map is updated with the new data. All new objects are marked as movable until they are found in the same place several times as the robot passes by.

To calculate a travel course, a wavefront algorithm can be used on a mini-map with a grid size of 50x50 cm, or a size that is suitable for the specific environment. The result is stored as a series of commands in an array, and the robot moves in increments of 50 cm and turns 90 degrees (in more advanced versions it could also make 45-degree turns). At every 50 cm travelled, the robot updates the map. If objects block the way while the robot is travelling, a stop command is issued, an evasive manoeuvre is calculated, and the map is updated (not the mini-map; that one should contain only the non-movable objects).
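The wavefront step above can be sketched in a few lines; this is a generic breadth-first wavefront on a small grid, not the poster's actual code, with 0 for free cells and -1 for obstacles:

```java
import java.util.ArrayDeque;

public class Wavefront {
    // Propagate costs out from the goal on a 4-connected grid.
    // Returns a cost grid: the goal cell holds 1, each reachable free cell
    // holds 1 + its step distance from the goal, and unreachable cells
    // stay 0. The robot then plans by always stepping to a lower cost.
    public static int[][] propagate(int[][] grid, int goalRow, int goalCol) {
        int rows = grid.length, cols = grid[0].length;
        int[][] cost = new int[rows][cols];
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        cost[goalRow][goalCol] = 1;
        queue.add(new int[] {goalRow, goalCol});
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] cell = queue.poll();
            for (int[] m : moves) {
                int r = cell[0] + m[0], c = cell[1] + m[1];
                if (r >= 0 && r < rows && c >= 0 && c < cols
                        && grid[r][c] == 0 && cost[r][c] == 0) {
                    cost[r][c] = cost[cell[0]][cell[1]] + 1;
                    queue.add(new int[] {r, c});
                }
            }
        }
        return cost;
    }
}
```

On a 50x50 cm mini-map of a room this stays tiny (a few hundred cells), which is why it is a reasonable fit even for a small micro.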

At the moment, I am working on scanning and displaying the objects on the current section of the map. I am trying to do sensor fusion to get more accurate data displayed. I received the new LCD in the mail; I will install it on the robot to test the code, probably over this weekend. Then I need to see what I can do with the compass reading, as it doesn't work at the moment; it probably needs calibration. Oh well, lots of work to be done…