The objective: a robot car that detects a blue marker, moves towards it, reads the sign on the marker, and follows its direction until a stop sign is found.
The video shows an overview of the approach and performance.
Software used: Python, OpenCV and NumPy; Mini-driver, Arduino, Tornado and WebSockets.
The code is fairly straightforward and well commented, so it should be self-explanatory.
There are several ways to track an object in a live video stream. The simplest and fastest methods are size detection and color tracking. With size detection, the objects should preferably be square. Since the objective is to read the signs on the markers, color detection is used here. The signs are placed on a blue A4 background, which makes them easy to detect and simplifies filtering out the sign.

Color detection, however, is rather dependent on the lighting conditions (darkness, lamp light, shadows). When using color tracking at night, the RGB values used for masking have to be adjusted to the overall situation. This can easily be done with a calibration script, which can also be found in the same repository, in the Handy stuff folder.
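To make the masking step concrete, here is a minimal NumPy-only sketch of the thresholding idea. The bounds below are hypothetical placeholders, not the project's calibrated values; in the actual pipeline OpenCV's `cv2.inRange` would produce the same kind of mask directly.

```python
import numpy as np

# Hypothetical BGR bounds for the blue A4 background; the real values
# come from the calibration script and depend on the lighting.
LOWER_BLUE = np.array([100, 0, 0])    # B, G, R lower bound
UPPER_BLUE = np.array([255, 80, 80])  # B, G, R upper bound

def blue_mask(frame):
    """Return a binary mask (255/0) where pixels fall inside the blue range.

    Equivalent in spirit to cv2.inRange(frame, LOWER_BLUE, UPPER_BLUE),
    written in plain NumPy so the per-channel thresholding is explicit.
    """
    in_range = np.logical_and(frame >= LOWER_BLUE, frame <= UPPER_BLUE)
    return np.where(in_range.all(axis=2), 255, 0).astype(np.uint8)

# Tiny 2x2 test frame: two strongly blue pixels, two non-blue pixels.
frame = np.array([[[200, 30, 30], [10, 200, 10]],
                  [[0, 0, 255], [120, 50, 40]]], dtype=np.uint8)
mask = blue_mask(frame)  # 255 for the blue-ish pixels, 0 elsewhere
```

Re-running the calibration script then amounts to picking new `LOWER_BLUE`/`UPPER_BLUE` values for the current lighting.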
It uses Keras to build a CNN to recognize traffic signs.
Btw, I borrowed some materials from the Udacity self-driving car course for my robot's implementation. They provide very good materials, including behavioral cloning (using Keras).
You can make a binary image out of the main picture and crop the selected area along the blue path. After that you can use a center-of-mass function to find the weighted spot and calculate the deflection (right, left, or a small deflection, which can count as forward), and even use these vectors directly as motor inputs.
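The suggestion above can be sketched in a few lines of NumPy. The function names and the dead-zone threshold here are my own illustrative choices, not from the thread; in a real OpenCV pipeline `cv2.moments(mask)` would give the same centroid via `m10/m00` and `m01/m00`.

```python
import numpy as np

def centroid(mask):
    """Center of mass of a binary mask.

    Mirrors MATLAB's regionprops(im, 'Centroid'); OpenCV's cv2.moments
    yields the same values: cx = m10/m00, cy = m01/m00.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # nothing detected in this frame
    return xs.mean(), ys.mean()

def deflection(mask, dead_zone=0.1):
    """Map the centroid's horizontal offset to a steering decision.

    Hypothetical rule: offsets within +/- dead_zone (as a fraction of
    half the image width) count as 'forward'.
    """
    c = centroid(mask)
    if c is None:
        return "stop"
    cx, _ = c
    center = (mask.shape[1] - 1) / 2.0
    offset = (cx - center) / center  # -1 (far left) .. +1 (far right)
    if abs(offset) < dead_zone:
        return "forward"
    return "right" if offset > 0 else "left"

# Path hugging the right edge of a 4x8 mask -> steer right.
mask = np.zeros((4, 8), dtype=np.uint8)
mask[:, 6:] = 255
command = deflection(mask)  # "right"
```

The signed `offset` itself could be fed to the motors as a proportional steering input instead of being quantized into left/forward/right.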
Thnx! I’m new to a lot of this stuff and there surely is a limit to my learning capacity. Coding is my way of finding out whether I’ve understood the subject so far. I’ll dive into your suggestion though!
I recommend you experiment with your project in MATLAB first; you can use the function regionprops(im, 'Centroid') to get the center of mass of the binary image.
After that, try to find the equivalent of regionprops() in OpenCV.
What do you mean by deflections, though? It would be easier to have an example picture of what you mean than to try to “reverse engineer” it by going through many APIs/concepts.
Learning how to do it is another issue; this is just about getting what you actually mean at a high level.