Well.... I set myself the task of creating a robot that would learn an object then go and find it using vision....
Mission accomplished.... well, it's probably not very robust, but that's a problem for another day...
Today's task was the Mega Block challenge - an area bordered by Mega Blocks (they're what the kids play with, and they have quite detailed images on them that I can use as objects to find):
So… first the DRC takes an image and remembers it using SIFT descriptors. In this case a little doggy block…
Then it spins around and forgets where the doggy is. And off it goes…
First navigating by working out what is flooring:
And then looking for a match to the reference image:
Eventually by randomly moving and taking images (it’s all a bit slow on the Pi)… there are enough matches to say that the dog has been found:
It tries to do mapping, but it’s not good at the moment.
So… there you have it…
I should point out that the screenshots are taken from a laptop, but the images are from the Pi. All images are recorded by the Pi and I offload them to the laptop for debugging. The Python is common to both, with a little bit of this:
import os
import cv2

if os.name == 'nt':  # only true on the Windows laptop, not the Pi
    cv2.imshow(…)
so that the Pi doesn’t get involved in showing images etc…
Next… try to speed it up… get the mapping to work better… pre-program objects of interest?.. tidy up the DRC (wires hanging out is not very pretty, is it?)… World domination is surely only a matter of time…