Robot reading signs

The objective: a robot car that detects a blue marker, moves towards it, reads the sign on the marker, and follows its direction until a stop sign is found.

The video shows an overview of the approach and performance.

Software used: Python, OpenCV and NumPy, Mini-driver, Arduino, Tornado and WebSockets

Here you'll find the complete script.

The code is rather straightforward and well commented; it should be self-explanatory.

There are several ways to track an object in a live video stream. The simplest and fastest methods are size detection and color tracking. With size detection, the objects should preferably be square. Since the objective is to read the signs on the markers, color detection is used in this case. The signs are placed on a blue A4 background, which makes them easy to detect and simplifies filtering out the sign.

Color detection, however, is rather dependent on the lighting conditions (darkness, lamp light, shadows). When using color tracking at night, the RGB values used for masking have to be adjusted to the overall situation. This can easily be done with a calibrating script, which can also be found in the same repository, in the Handy stuff folder.
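For illustration, here is a minimal sketch of such a color mask in OpenCV (not the project's actual script). Converting to HSV is a common alternative to thresholding raw RGB values, since it is somewhat less sensitive to lighting; the bounds below are placeholders that would still need calibrating:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")  # stand-in for a frame grabbed from the camera
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Rough HSV bounds for the blue A4 background -- illustrative values only,
# to be tuned with a calibration script for the actual lighting conditions.
lower_blue = np.array([100, 120, 70])
upper_blue = np.array([130, 255, 255])

mask = cv2.inRange(hsv, lower_blue, upper_blue)      # binary mask of blue pixels
blue_only = cv2.bitwise_and(frame, frame, mask=mask)  # keep only the blue areas

cv2.imshow("mask", mask)
cv2.waitKey(0)
```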


This is a companion discussion topic for the original entry at https://community.robotshop.com/robots/show/robot-reading-signs

Very cool implementation using OpenCV and Python.

BTW, it's all coded explicitly. What about using a convnet? https://github.com/jessicayung/self-driving-car-nd/tree/master/p2-traffic-signs

It uses Keras to build a CNN that recognizes traffic signs.

BTW, I stole some materials from the Udacity self-driving car course for my robot's implementation. They provided very good materials, including behavior cloning (using Keras).
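For reference, a minimal sketch of the kind of Keras CNN the linked repo builds. The layer sizes here are illustrative, not the repo's exact model; the Udacity project uses the GTSRB dataset with 43 sign classes:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Small CNN for traffic sign classification -- a sketch, not the repo's model.
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),              # signs resized to 32x32 RGB
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(43, activation="softmax"),       # one output per sign class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=10, validation_split=0.1)
```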

 

Suggestions

You can make a binary image out of the main picture and crop the selected area in the blue path. After that you can use a center-of-mass function to find the weighted spot and calculate the deflection (right, left, or a small amount of deflection, which can count as forward), and even use these vectors directly as motor inputs.
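A minimal sketch of that idea in OpenCV/Python (the threshold, dead-band and file name are placeholders): threshold the frame to a binary image, take the center of mass with cv2.moments, and turn the horizontal offset into a steering command:

```python
import cv2

frame = cv2.imread("frame.jpg")                   # stand-in for a camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

M = cv2.moments(binary, binaryImage=True)
if M["m00"] > 0:                                  # any foreground pixels at all?
    cx = M["m10"] / M["m00"]                      # x coordinate of the center of mass
    deflection = cx - binary.shape[1] / 2         # offset from the image center

    if abs(deflection) < 20:                      # small offset: keep going forward
        command = "forward"
    elif deflection < 0:
        command = "left"
    else:
        command = "right"
    print(command, deflection)                    # or feed it to the motor driver
```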

Thanx! I’ll try to implement it in my current project.

You’re welcome.

I hope to see its results here

I wish you success

Thnx! I’m new to a lot of this stuff and there surely is a limit to my learning capacity. Coding is my way of finding out if I understood the subject so far. I’ll dive into your suggestion though!

For reference, what I stumbled upon when searching for the terms you used:

 

https://www.quora.com/What-exactly-are-moments-in-OpenCV

https://en.wikipedia.org/wiki/Image_moment

 

It would be nice if you could elaborate a bit with some code, pictures, or a link, so I can tell whether this is what you meant.

Looks to be the solution

I recommend you experiment with your project in MATLAB first. You can use the function regionprops(im, 'Centroid') to get the center of mass of the binary image.

After that, try to find the equivalent function to regionprops() in OpenCV.
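For what it's worth, OpenCV's closest counterpart is probably cv2.connectedComponentsWithStats, which returns per-region statistics and centroids much like regionprops. A minimal sketch (the file name is a placeholder):

```python
import cv2
import numpy as np

binary = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

# Label 0 is the background; pick the largest foreground region.
if num > 1:
    areas = stats[1:, cv2.CC_STAT_AREA]
    biggest = 1 + int(np.argmax(areas))
    cx, cy = centroids[biggest]                   # center of mass of that region
    print("centroid:", cx, cy)
```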

Thanks.

What do you mean by deflections, though? An example picture of what you mean would be easier than trying to “reverse engineer” it by going through lots of APIs/concepts.

Learning how to do it is another issue; this is just about getting what you actually mean at a high level.