Follow and read signs

I was asked to start a blog. Don't expect anything beyond beginner level! A couple of months ago I bought a Raspberry Pi out of curiosity, and soon after that I bought my first robot kit: the Pi Camera Robot from Dawn Robotics (http://www.dawnrobotics.co.uk). Great stuff to start with! Especially for someone completely new to robotics, it offers a platform for exploration while also being a kit that can be played with right away. I added some sensors and started learning and playing: Python, a bit of Arduino and C/C++, OpenCV and NumPy. All new stuff to me. After the first small working scripts, I baptized the bot RB-1, and it looked like this. After a couple of months I changed the chassis; I'll explain in a later post.

Personally, I appreciate this kit very much. The Dawn Robotics software image provides a well-working infrastructure platform, keeping Arduino and lower-level stuff like PWM out of the way until I was up to it. The WebSocket implementation also enables remote processing, so I could code and run my scripts on a large-screen Windows PC. That's far more comfortable than working on the Raspberry Pi itself. I appreciated it especially when I started to play with OpenCV: I could display as many windows as I needed while experimenting and debugging the scripts, which is almost impossible on the Raspberry Pi itself. Dawn Robotics publishes a blog explaining their infrastructure and software. I made their scheme a bit more fancy ;-)

 

In this blog I will share the subjects I tackled, just to give other beginners an easy entry point. Most topics are covered by many good posts all over the web; I just try to structure that information. In the coming posts I'll explain my code for finding and reading signs, and deal with topics such as object tracking by color, isolating objects in video frames, comparing images, and controlling differential wheels. And of course I'm looking forward to suggestions for improvements!

There are many valuable blogs and sites on the web, but I would like to point out a few I appreciate very much:

http://www.dawnrobotics.co.uk Supplier of the bot and its operating environment. Well documented, with responsive help from Alan Broun.

http://roboticssamy.blogspot.nl An inspirational source. The SR-4 of Samuel Matos surely triggered my ambition.

http://www.pyimagesearch.com Instructive site by Adrian Rosebrock on Python and OpenCV. Great examples and explanations for getting started with video images.

Next post: find signs by color tracking

The bot looks for a blue-colored sign, reads it, and acts upon it.


This is a companion discussion topic for the original entry at https://community.robotshop.com/robots/show/follow-and-read-signs

RB2 reads signs

RB1 (see the blog post 'Follow and read signs') has evolved into RB2. First of all, it switched to a 4WD chassis for more control over its movements. The Dagu Magician frame is OK, but on a tiled surface the rear castor caused a lot of unexpected swerving. Adding weight solved that, but then the hard tires started to slip when torque was added. RB2 has an aluminium frame (DFRobot), 4 motors (installed as 2x2) and softer wheels. It doesn't have the funny looks of the Dagu, but the ugly bastard runs like clockwork. The rest stayed unchanged: a Raspberry Pi B+, a Dagu Mini Driver, and the camera and websocket classes of Dawn Robotics.


 

The script evolved as well, and RB2 now operates reliably at an acceptable speed.

Video:

 https://youtu.be/7bVeIi_Izqg 

The major differences in this script are:
* Color tracking alone is used for detection and moving; it's about 50x faster than the full routine. I used an OpenCV bounding box to get a more accurate centroid (the contours sometimes cover only part of the sign). The bounding box also produces the width of the sign, which is used to keep focus. The difference is shown in the picture by red lines (contours) and a green rectangle.
* A range routine was added that multiplies the width by a constant. Range isn't needed anymore to adjust position and direction, but it can be used to keep distance (so, more fun than functional).
* A heads-up display of the center coordinates and range was added. Also just for fun (but who knows).
* A time-out routine was added, because Python's time.sleep is only reliable at very small intervals and I needed an accurate time-out to enable exact turns.
* The grabbed image is used as a global variable (saves a lot of typing and a little memory).
* Readings while moving are tuned with time-outs. The routine produces more than a hundred readings in a couple of seconds, which would overload the webserver and the Pi's memory.
* After reading the sign, the script forces a wait for the latest image using the max-time variable.


* Finally, the full detection routine is used to compare the sign with the reference images.


 

 

This routine detects the white inner rectangle, shown in the picture as a blue rectangle. If you're interested, more details are well commented in the script itself, which can be found at:

https://bitbucket.org/RoboBasics/raspberry-robo-cars/src/1434877c12f39efc2c9b2ff99172ad605236914f/Scripts/reading_signs.py?at=master

The script can easily be extended with all kinds of routines. Next, I will be working on logging through a digital compass and the wheel encoders.

Have Fun!