import cv2
import numpy as np

def nothing(x):
    pass    # dummy callback for the trackbars

# 'bot' and 'latest_camera_image' come from the Dawn Robotics websockets class
# and are set up earlier in the script (see the full code on GitHub).
try:
    #------------------------------------------------------------ Create trackbars to manipulate the low BGR values
    img_low = np.zeros((15, 512, 3), np.uint8)
    cv2.namedWindow('BGR_low')
    cv2.createTrackbar('R', 'BGR_low', 85, 255, nothing)
    cv2.createTrackbar('G', 'BGR_low', 50, 255, nothing)
    cv2.createTrackbar('B', 'BGR_low', 110, 255, nothing)
    cv2.imshow('BGR_low', img_low)
    #------------------------------------------------------------ Create trackbars to manipulate the high BGR values
    # Default is the blue range, low: 50,50,110  high: 255,255,130
    img_high = np.zeros((15, 512, 3), np.uint8)
    cv2.namedWindow('BGR_high')
    cv2.createTrackbar('R', 'BGR_high', 255, 255, nothing)
    cv2.createTrackbar('G', 'BGR_high', 119, 255, nothing)
    cv2.createTrackbar('B', 'BGR_high', 131, 255, nothing)
    cv2.imshow('BGR_high', img_high)
    #------------------------------------------------------------
    while True:
        bot.update()
        switch_object_found = False
        if latest_camera_image is not None:
            #---------------------------------------------------- Get the current positions of the six trackbars
            r_low = cv2.getTrackbarPos('R', 'BGR_low')
            g_low = cv2.getTrackbarPos('G', 'BGR_low')
            b_low = cv2.getTrackbarPos('B', 'BGR_low')
            r_high = cv2.getTrackbarPos('R', 'BGR_high')
            g_high = cv2.getTrackbarPos('G', 'BGR_high')
            b_high = cv2.getTrackbarPos('B', 'BGR_high')
            img_low[:] = [b_low, g_low, r_low]
            img_high[:] = [b_high, g_high, r_high]
            cv2.imshow('BGR_low', img_low)
            cv2.imshow('BGR_high', img_high)
            #---------------------------------------------------- 0 - Set the colour range to search for
            # Note: the range is applied to the HSV-converted image, so the sliders
            # effectively act as bounds in HSV space despite their B/G/R labels.
            hsv = cv2.cvtColor(latest_camera_image, cv2.COLOR_BGR2HSV)
            lower_blue = np.array([b_low, g_low, r_low])
            upper_blue = np.array([b_high, g_high, r_high])
            #---------------------------------------------------- 1 - Mask only with the trackbar values
            mask_image = cv2.inRange(hsv, lower_blue, upper_blue)
            cv2.imshow('step-1 - mask', mask_image)
            #---------------------------------------------------- 2 - Convert the masked colour to white, the rest to black
            result_image = cv2.bitwise_and(latest_camera_image, latest_camera_image, mask=mask_image)
            cv2.imshow('step-2 - convert', result_image)
        cv2.waitKey(1)
except KeyboardInterrupt:
    pass    # Catch Ctrl+C
#------------------------------------------------------------ Clean up the windows if needed
#cv2.destroyAllWindows()
The routine can be imported at the start of a control script for tracking objects. Just remove the try/except wrapper and insert:

if cv2.waitKey(1) & 0xFF == ord('c'):
    break
The script will then continue once you are done calibrating (and don't forget to save the RGB values you found!).
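If it helps, a minimal way to "save" the values is simply to read the trackbars one last time after the break and print them, so you can copy them into the tracking script (a sketch on my part, not code from the original routine):

# Read the final trackbar positions after leaving the calibration loop
# and print them so they can be copied into the tracking script.
b_low  = cv2.getTrackbarPos('B', 'BGR_low')
g_low  = cv2.getTrackbarPos('G', 'BGR_low')
r_low  = cv2.getTrackbarPos('R', 'BGR_low')
b_high = cv2.getTrackbarPos('B', 'BGR_high')
g_high = cv2.getTrackbarPos('G', 'BGR_high')
r_high = cv2.getTrackbarPos('R', 'BGR_high')
print('low  (B,G,R):', b_low, g_low, r_low)
print('high (B,G,R):', b_high, g_high, r_high)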
Note: I'm working with the WebSockets class from Dawn Robotics so I can run the script remotely and display the windows on my PC screen. All related commands begin with 'bot.'
The purpose of the script is to have the robot search for a marker, drive towards it, interpret it and then start searching for the next marker. So the structure of the script is: search for sign, move to sign, read sign, and continue until it finds the sign to stop. These are the signs I use:
I'll post only the relevant parts of the code here to keep it readable. If you want to have a look at the whole script (or even take a copy), you can find it in my GitHub repository: https://github.com/Bas-Huizer/Rpi-camera-robot-car.git
Note: it is still a work in progress, because I changed the chassis and motors of the bot (I'll show that later on in my project) and I'm working on new sign_routines (e.g. line following). So don't be surprised if there are differences. And please feel free to comment if you can give me some hints.
I think the code is rather well commented and will explain a lot by itself, but I’ll highlight the steps.
First of all, it is coded as a function that searches for a blue rectangle and returns:
switch found + centroid coordinates + contour found
The bot.update() call is a websockets routine that keeps the camera stream alive. First the RGB values found during calibration are used to create a mask (here the values are hard-coded, because I use the calibration script as a separate routine). Then the image is reduced to the matching pixels with the bitwise command. OpenCV uses NumPy to represent an image as a multidimensional array, in a lot of different formats. The blurring step removes a lot of noise (small spots whose colours happen to fall within the RGB ranges). The grayscale conversion reduces the array to a single channel so it can be turned into a binary image. Note: I reuse the same array (image name) as much as possible to save memory; at the end of the day the robot has to run the script autonomously, so it has to fit into the Raspberry Pi's memory.
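As a rough sketch of that pipeline (the range values and the exact blur/edge parameters below are assumptions on my part; the real ones are hard-coded in the GitHub script):

# Sketch of the mask -> bitwise_and -> blur -> grayscale -> binary/edge steps.
lower_blue = np.array([110, 50, 85])       # placeholder for the calibrated lower bound
upper_blue = np.array([131, 119, 255])     # placeholder for the calibrated upper bound

mask_image   = cv2.inRange(latest_camera_image, lower_blue, upper_blue)   # keep only pixels inside the range
result_image = cv2.bitwise_and(latest_camera_image, latest_camera_image, mask=mask_image)
result_image = cv2.GaussianBlur(result_image, (5, 5), 0)                  # remove small noisy spots
result_image = cv2.cvtColor(result_image, cv2.COLOR_BGR2GRAY)             # single channel
result_image = cv2.Canny(result_image, 50, 150)                           # edged (binary) representation

Note how the same array name is reused at every step, in line with the memory remark above.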
We've now got an edged representation of the video image. Let's see whether it is a proper sign or just some incidental object we've stumbled upon:
First we check the number of contours in the image. If there is a lot of noise (light conditions!), numerous contours will be detected, so we drop everything except the four biggest. Do not try fewer than four because of the structure of the contour array: if only the sign is found you still get four contours (the inner and outer edges of both the big and the small rectangle). With OpenCV's moments feature we find the area, the centroid coordinates and the length of a contour, which tells us whether it is a rectangle-like shape.
The rest is basic coding: select the contours with four corner points, discard those that are really too small (maybe we accidentally found a tiny rectangle-like noisy spot in the image) and keep the smallest of the ones left.
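Put together, the contour filtering looks roughly like this (a sketch with assumed threshold values and variable names; the exact code is in the repository):

# Find contours in the binary image, keep the four biggest, and pick the
# smallest rectangle-like one among them.
contours, hierarchy = cv2.findContours(result_image, cv2.RETR_TREE,
                                        cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 3 returns three values here
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:4]

sign_contour = None
sign_centroid = None
best_area = None
for contour in contours:
    perimeter = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)     # reduce the contour to its corner points
    area = cv2.contourArea(contour)
    if len(approx) == 4 and area > 200:                            # rectangle-like and not a tiny noise spot
        if best_area is None or area < best_area:                  # keep the smallest qualifying rectangle
            best_area = area
            sign_contour = approx
            M = cv2.moments(contour)
            sign_centroid = (int(M['m10'] / M['m00']),             # centroid x
                             int(M['m01'] / M['m00']))             # centroid y

switch_object_found = sign_contour is not None
# return switch_object_found, sign_centroid, sign_contour          # the three values described above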
That's it. The routine takes around 0.4 seconds to return. If it doesn't return a sign, the camera shifts a couple of degrees and the routine is called again. If the camera neck is at its maximum position, the bot turns about 45 degrees and starts again, over and over until a sign is found.
From here the bot can start moving towards the sign: keeping focus on the centroid (same routine), adjusting the neck angle according to the shift of the centroid, and adjusting the motor speeds according to the angle of the camera neck. I won't post all of that; it is basic scripting (although a curved run requires some math). If needed you can find it on my GitHub.
So let’s skip to interpreting the sign once the bot arrived in front of it.