Aimie is a bipedal humanoid robot that I'm currently working on. It's my first submission to LMR and my first robot ever. It uses many servos for movement, with a Raspberry Pi and an SSC-32 to control them. Aimie will have an AI written in Java, running on top of Arch Linux. The AI will be able to move the chassis depending on the inputs it receives; for example, a gyroscope will tell the AI when it's about to fall over so it can correct its position by putting a foot out.
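To give an idea of that gyro reflex, here's a minimal sketch of the correction loop. The Gyro and Legs classes are just stand-ins for whatever drivers Aimie ends up with, and the 12-degree threshold is a made-up number:

public class BalanceReflex {
    // Stand-ins for the real sensor and servo drivers.
    static class Gyro {
        double readPitchDegrees() { return 0.0; } // replace with a real sensor read
    }
    static class Legs {
        void stepForward()  { System.out.println("step forward");  }
        void stepBackward() { System.out.println("step backward"); }
    }

    public static void main(String[] args) throws InterruptedException {
        Gyro gyro = new Gyro();
        Legs legs = new Legs();
        final double tiltLimit = 12.0; // degrees; an assumed threshold
        while (true) {
            double pitch = gyro.readPitchDegrees();
            if (pitch > tiltLimit)  legs.stepForward();  // falling forward: foot out in front
            if (pitch < -tiltLimit) legs.stepBackward(); // falling backward: foot out behind
            Thread.sleep(20); // roughly a 50 Hz control loop
        }
    }
}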
Software:
So far, the vision part of the AI has been written. It can recognise and remember faces even after being shut down, using a local cache of images on the Raspberry Pi's SD card. The facial recognition was built with OpenCV.
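For anyone curious, the remember-across-restarts part looks roughly like this with the Bytedeco JavaCV bindings. The cache paths and the choice of the LBPH recogniser are illustrative, not necessarily exactly what Aimie runs:

import java.io.File;
import java.nio.IntBuffer;
import org.bytedeco.javacpp.DoublePointer;
import org.bytedeco.javacpp.IntPointer;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.MatVector;
import org.bytedeco.opencv.opencv_face.LBPHFaceRecognizer;
import static org.bytedeco.opencv.global.opencv_core.CV_32SC1;
import static org.bytedeco.opencv.global.opencv_imgcodecs.*;

public class FaceMemory {
    public static void main(String[] args) {
        LBPHFaceRecognizer recognizer = LBPHFaceRecognizer.create();
        File model = new File("/home/pi/aimie/faces.xml"); // assumed cache location

        if (model.exists()) {
            // Reload everything learned before the last shutdown.
            recognizer.read(model.getAbsolutePath());
        } else {
            // Train from cached grayscale face crops (all the same size), one label per person.
            MatVector images = new MatVector(2);
            images.put(0, imread("/home/pi/aimie/cache/person0.png", IMREAD_GRAYSCALE));
            images.put(1, imread("/home/pi/aimie/cache/person1.png", IMREAD_GRAYSCALE));
            Mat labels = new Mat(2, 1, CV_32SC1);
            IntBuffer labelsBuf = labels.createBuffer();
            labelsBuf.put(0, 0);
            labelsBuf.put(1, 1);
            recognizer.train(images, labels);
            recognizer.write(model.getAbsolutePath()); // persist so it survives a reboot
        }

        // Identify a new face crop against what it remembers.
        Mat probe = imread("/home/pi/aimie/cache/unknown.png", IMREAD_GRAYSCALE);
        IntPointer label = new IntPointer(1);
        DoublePointer distance = new DoublePointer(1);
        recognizer.predict(probe, label, distance);
        System.out.println("label " + label.get(0) + ", distance " + distance.get(0));
    }
}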
The speech synthesis is a work in progress. It currently uses OpenMARY because, in my opinion, it's the best-sounding free speech synthesis software for Java. Aimie can already take voice commands and act on them, but the commands have to be hard-coded; I'm working on a system that will let it learn new commands itself.
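Getting OpenMARY to say something from Java is pleasantly short. A minimal sketch, assuming the MaryTTS 5.x API with its embedded LocalMaryInterface:

import javax.sound.sampled.AudioInputStream;
import marytts.LocalMaryInterface;
import marytts.util.data.audio.AudioPlayer;

public class Speak {
    public static void main(String[] args) throws Exception {
        LocalMaryInterface mary = new LocalMaryInterface(); // embedded MaryTTS engine
        AudioInputStream audio = mary.generateAudio("Hello, I am Aimie.");
        AudioPlayer player = new AudioPlayer(audio);
        player.start();
        player.join(); // block until playback finishes
    }
}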
Hardware:
The chassis will be moved by approximately five servos per leg and per arm. Another two will be placed mid-body to allow it to spin, and two micro servos in the head will let it look around.
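The SSC-32 makes the software side of all those servos easy: it accepts plain-text commands of the form "#<channel>P<pulse-us>T<ms>" over serial. A sketch of centring one servo from Java, assuming the jSerialComm library and a USB-serial adapter on /dev/ttyUSB0:

import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import com.fazecast.jSerialComm.SerialPort;

public class Ssc32Demo {
    public static void main(String[] args) throws Exception {
        SerialPort port = SerialPort.getCommPort("/dev/ttyUSB0"); // assumed device name
        port.setBaudRate(115200); // must match the SSC-32's jumper setting
        if (!port.openPort()) throw new IllegalStateException("could not open serial port");
        OutputStream out = port.getOutputStream();
        // Move channel 0 to centre (1500 microseconds) over one second.
        out.write("#0P1500T1000\r".getBytes(StandardCharsets.US_ASCII));
        out.flush();
        port.closePort();
    }
}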
Edit:
The image below is an outdated but still relevant picture of the legs. I have since built a backpack for Aimie which contains the Raspberry Pi, the SSC-32 servo controller, lots of wiring, an LED, and a voltage regulator to keep everything protected. Aimie runs from a 7.5 V 2500 mAh battery that will be mounted within its chest.
Yeah, I plan to. I'm coding it all on my Windows PC. I know it'll need a few changes to work on the Pi, but I'm mainly concerned about the resource usage.
Wow, very ambitious for a first robot. It certainly blows away anything I have done so far.
I am very interested in the OpenCV programming you are doing. I have read several books on OpenCV (one on SimpleCV, i.e. a subset of OpenCV in Python), but I find the transition from the examples in a book (which work, but use pictures taken in perfect lighting, specifically chosen to prove a point, etc.) to the real world of a webcam connected to my robot to be problematic. The special sauce, how to filter real-world pictures in shaky and uncertain light, and how to identify patterns so I know which algorithms to use, is missing from the books I have. The books show me in great detail how to apply a particular algorithm to images, just never a reason why one would use one algorithm or transform over another.
What books, if any, would you recommend? Would you mind posting your code, since you have something that works already? What other resources have helped you come up to speed on OpenCV?
To be fair, I haven't used any books; my learning style is more of a Google style. I'm using JavaCV. I haven't gotten too far with it, but so far I'm recognising faces from the webcam. Let me know if you need anything.
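Here's roughly what my capture-and-detect loop looks like. This sketch uses the current Bytedeco JavaCV class names, which may differ from the version you have, and the cascade path and frame count are illustrative:

import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.OpenCVFrameConverter;
import org.bytedeco.javacv.OpenCVFrameGrabber;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.RectVector;
import org.bytedeco.opencv.opencv_objdetect.CascadeClassifier;
import static org.bytedeco.opencv.global.opencv_imgproc.*;

public class WebcamFaces {
    public static void main(String[] args) throws Exception {
        // One of the Haar cascades that ships with OpenCV.
        CascadeClassifier faces = new CascadeClassifier("haarcascade_frontalface_default.xml");
        OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0); // default webcam
        OpenCVFrameConverter.ToMat toMat = new OpenCVFrameConverter.ToMat();
        grabber.start();
        for (int i = 0; i < 100; i++) { // a short run; loop forever on the robot
            Frame frame = grabber.grab();
            Mat gray = new Mat();
            cvtColor(toMat.convert(frame), gray, COLOR_BGR2GRAY); // detector wants grayscale
            RectVector found = new RectVector();
            faces.detectMultiScale(gray, found);
            System.out.println("faces in frame: " + found.size());
        }
        grabber.stop();
    }
}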
I have been trying to get it to find the edge between wall and ceiling so my robot could approximate its distance and heading; coupled with a dead-reckoning algorithm, I would then know where it was. Depending on lighting conditions, it sometimes works and sometimes doesn't, and shadows on the wall mess with the accuracy too. Frustrating, but fun, if you know what I mean?
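For reference, what I've been fighting with is essentially a Canny edge map followed by a probabilistic Hough transform to pull out the long straight segments. The thresholds below are illustrative, and they're exactly the knobs that changing light breaks:

import org.bytedeco.javacpp.indexer.IntIndexer;
import org.bytedeco.opencv.opencv_core.Mat;
import static org.bytedeco.opencv.global.opencv_imgcodecs.*;
import static org.bytedeco.opencv.global.opencv_imgproc.*;

public class WallEdge {
    public static void main(String[] args) {
        Mat gray = imread("room.png", IMREAD_GRAYSCALE); // any saved webcam frame
        Mat edges = new Mat();
        Canny(gray, edges, 50, 150); // edge map; both thresholds need per-scene tuning
        Mat lines = new Mat();
        // rho = 1 px, theta = 1 degree, 80 votes, segments >= 100 px, gaps <= 10 px
        HoughLinesP(edges, lines, 1, Math.PI / 180, 80, 100, 10);
        IntIndexer idx = lines.createIndexer();
        for (int i = 0; i < lines.rows(); i++) {
            System.out.printf("segment (%d,%d)-(%d,%d)%n",
                idx.get(i, 0, 0), idx.get(i, 0, 1),
                idx.get(i, 0, 2), idx.get(i, 0, 3));
        }
    }
}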
Good luck with your project. Those legs are very cool.
Very cool robot!! Yeahh!!
I’d love to see it in motion. When could we see it walking??
I've done a similar one, and I'm curious to know how you plan to make it walk. That is, which locomotion algorithm do you plan to use (ZMP or something similar), which sensors you are using to get a stable periodic gait, etc.
I'm not planning on getting it walking until I have the whole chassis completed. I was thinking of using inverse kinematics with sensors on the feet and an accelerometer within the backpack. I've also recently redesigned the legs so they can now bend all the way.
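To give an idea of the maths, the two-joint planar case works out like this (the link lengths and foot target are made-up numbers, and the real legs have more joints):

public class LegIK {
    public static void main(String[] args) {
        double l1 = 80, l2 = 80; // thigh and shin lengths in mm
        double x = 60, y = -120; // foot target relative to the hip, in mm

        // Law of cosines gives the knee angle...
        double c = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2);
        double knee = Math.acos(Math.max(-1, Math.min(1, c)));
        // ...and the hip angle is the target direction minus the offset
        // introduced by the bent knee.
        double hip = Math.atan2(y, x)
                   - Math.atan2(l2 * Math.sin(knee), l1 + l2 * Math.cos(knee));

        System.out.printf("hip %.1f deg, knee %.1f deg%n",
            Math.toDegrees(hip), Math.toDegrees(knee));
    }
}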