Can I Dock With Your Shoes? No Easy OpenCV For My Robot

My GoPiGo3 / Raspberry Pi 3 based robot, Carl, knows how to get on and off his recharging dock, but if he were to wander away he would not know how to find it again. I have been working my way through “Practical Python and OpenCV” and got the itch to teach Carl to find his dock before I have completed the course/book.

(Carl’s dock is a gutted iRobot recharging dock, with directional green LEDs and a nice big “CARL” sign above it.)

I was feeling pretty good that Carl was able to find the green LEDs when he faced the dock, and was getting ready to code up the “turn to face the dock” step.

I took out the “wait for a key press” debugging steps and launched into “one more time before I quit.” That was when I found out Carl likes my wife’s green Crocs just as much as the green LEDs on his dock.

No easy OpenCV solution for a robot with green Crocs around. No, I’m going to have to read the whole book.


Nice story :smiley:

I guess a quick fix would be removing all the green stuff around the house :smiley:


I told my wife that Carl might try to “hump” her shoes. She replied that it would be “cute” and that she would like it if Carl came to visit her.

I got jealous and put a low-pass radius filter on the potential LED recognition sets. I can’t have my robot liking her better. This has the unplanned benefit that Carl now ignores all green giants (or at least while I’m watching him).
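Roughly, the filter works like this (a minimal sketch, assuming OpenCV 4 and 640x480 captures; the thresholds and function name are illustrative, not Carl’s tuned values):

```python
# Sketch of a radius "low-pass" gate on green blobs (illustrative values).
import cv2

MAX_LED_RADIUS = 8   # pixels at 640x480; Croc-sized green blobs are far bigger

def green_led_candidates(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Loose "green" band in HSV; tune for the actual LEDs and lighting
    mask = cv2.inRange(hsv, (40, 100, 100), (80, 255, 255))
    # OpenCV 4.x findContours signature (3.x returns three values)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        (x, y), radius = cv2.minEnclosingCircle(c)
        if radius <= MAX_LED_RADIUS:   # reject shoes, giants, and other big greens
            candidates.append((int(x), int(y), radius))
    return candidates
```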


Nice solution. Maybe you could make something more complex than just a green LED on the charger side. For example, a pattern of several different LEDs that only your robot could recognize and that wouldn’t appear very often in the real world.

True; iRobot drives the LEDs with numeric codes, but I ripped the guts out of the dock, and just have these “always on” LEDs.

I want my robot (and dock) to be as simple as possible, while exploiting the sensor capabilities to the fullest.

The strategy gets more complex as the precision and accuracy requirements tighten close to the dock.

  1. Find the dock from center of the “home room” using the green LED(s)
  2. Line up approximately normal to dock
  3. Use “Custom Object Recognition” on the “CARL->” sign and a “translate a little left or right” maneuver to fine-tune position.
  4. Perform Docking (Working well)

2019-07-09 06:29|[juicer.py.dock]---- Docking 244 completed at 8.1 v after 6.6 h playtime
2019-07-09 09:37|[juicer.py.undock]---- Dismount 245 at 10.9 v after 3.1 h recharge

I have to finish the course/book to be able to recognize the “CARL ->” sign, and I may have to add some illumination LEDs to Carl so he can see the “Custom Object” at night when the room lights are off.

This is the detailed plan for steps 1 and 2:

# findDock.py

Documentation:
Uses OpenCV on successive images captured by the PyCam to find the recharging dock.

Algorithm:
1) Capture an image
2) Mask for green LED(s) of the dock
3) Find number and position in the image of green LED(s)
4) If no LEDs found and number of captures < "360 degrees of captures",
     turn one capture width and continue from step 1;
   else declare "dock not visible (at this location)"
5) Calculate dock angle relative to heading angle using horizontal LED position in image
6) Estimate dock distance based on vertical LED position in image
7) Point distance sensor toward dock, take distance reading
8) Fuse estimate and reading for distance to dock
9) Point distance sensor fwd and 10" away (for U-turn clearance plus 1")
10) If distance to dock >= 30", turn to face dock; otherwise turn away from dock
11) While distance sensor reading > 9" (U-turn clearance), drive to point 30" from dock
12) If drove away from dock, turn to face dock
13) Perform wall_scan(), which returns distance to wall and angle to wall normal
14) Calculate turn angle to intersect wall normal from dock at 90 degrees
15) Calculate distance from current position to dock-ctr-wall-normal
16) Turn to intersect wall-normal-from-dock at 90 degrees
17) While distance sensor reading > 9", drive to dock-wall-normal
18) Turn to face dock

Followed by approach_dock(), and then dock()
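For anyone curious, here is a rough sketch of what steps 1 through 6 could look like. It assumes the PiCamera v2 (62.2° horizontal FOV), 640x480 captures, the green_led_candidates() filter sketched earlier in the thread, and the EasyGoPiGo3 turn_degrees() call; the constants are illustrative guesses, not Carl’s tuned values.

```python
# findDock sketch: steps 1-6 (capture, mask/find LEDs, rotate search, bearing).
# Assumes green_led_candidates() from the radius-filter sketch earlier.
import math
from easygopigo3 import EasyGoPiGo3
from picamera import PiCamera
from picamera.array import PiRGBArray

H_FOV_DEG = 62.2            # PiCamera v2 horizontal field of view
IMG_W, IMG_H = 640, 480
SEARCH_STEP = 55            # degrees per turn, a bit under FOV for overlap

egpg = EasyGoPiGo3()
camera = PiCamera(resolution=(IMG_W, IMG_H))

def capture_bgr():
    """Step 1: grab one frame as a BGR array for OpenCV."""
    raw = PiRGBArray(camera)
    camera.capture(raw, format="bgr")
    return raw.array

def bearing_to_dock(x_pixel):
    """Step 5: horizontal pixel offset -> degrees off current heading."""
    return (x_pixel - IMG_W / 2) * (H_FOV_DEG / IMG_W)

def find_dock():
    """Steps 1-4: rotate up to a full circle looking for the dock LEDs."""
    for _ in range(math.ceil(360 / SEARCH_STEP)):
        leds = green_led_candidates(capture_bgr())   # steps 2-3
        if leds:
            x, y, _radius = leds[0]
            # bearing for step 5; vertical position y feeds the step 6
            # distance estimate (needs a calibration table I don't have yet)
            return bearing_to_dock(x), y
        egpg.turn_degrees(SEARCH_STEP)               # turn one capture width
    return None                                      # dock not visible here
```

Step 8 could then be as simple as a weighted average of the vision-based distance estimate and the distance sensor reading, weighted by how much I trust each.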


Update: I am getting pretty good false-LED rejection using the following a priori info (a code sketch follows the list):

  1. the LEDs never appear in the upper 62.5% of an image
  2. the green-ness, of course
  3. the LEDs are basically round with a very small radius
  4. when there are two LEDs, they are close together (<20 pixels horizontal at 640x480 res)
  5. when there are two LEDs, they are horizontal (<4 pixels vertical at 640x480 res)
  6. there are a max of two LEDs visible on the dock
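
In code, those six gates compose roughly like this (a sketch: the 62.5%, 20-pixel, and 4-pixel thresholds are from the list above; the radius limit and all names are illustrative):

```python
# Sketch of the six a priori gates on green blob candidates at 640x480.
IMG_W, IMG_H = 640, 480
TOP_CUTOFF = 0.625 * IMG_H   # 1) LEDs never in the upper 62.5% of the image
MAX_RADIUS = 8               # 3) basically round with a very small radius (guess)
MAX_DX = 20                  # 4) pair horizontal separation < 20 pixels
MAX_DY = 4                   # 5) pair vertical separation < 4 pixels

def plausible_dock_leds(candidates):
    """candidates: (x, y, radius) green blobs; 2) green-ness already applied."""
    # Gates 1 and 3: low in the image (y grows downward) and small
    leds = [c for c in candidates if c[1] > TOP_CUTOFF and c[2] <= MAX_RADIUS]
    if len(leds) > 2:
        return []   # 6) at most two LEDs on the dock, so this is a false set
    if len(leds) == 2:
        (x1, y1, _), (x2, y2, _) = leds
        # Gates 4 and 5: a real pair is close together and level
        if abs(x1 - x2) >= MAX_DX or abs(y1 - y2) >= MAX_DY:
            return []
    return leds
```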

Now I need lots of testing to find out detection rate and false detection rate. It seems like this approach is working pretty well, from the center of the room, for the “find the dock” step. (I’ll have to program findRoomCenter() before letting Carl wander on his own.)

Silly me, I thought the green-ness alone was going to be sufficient.
