re: sw0rdm4n
Thanks. My plan for the ears is to put two sensors in each ear…
Video Camera: https://www.robotshop.com/en/pixy-cmucam5-image-sensor.html
Sound Sensor: https://www.robotshop.com/en/sound-sensor.html
I will have to scale the ears up a bit and reprint, as these sensors won’t fit now. The camera sensor will pre-process video at 50 frames per second and pass on the width, height, and x and y coordinates of the color blobs it detects. It can scan for seven different colors simultaneously and track many objects at once.
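In case it helps to picture the data, here’s a rough Python sketch of how I plan to consume those blob reports. The frame width and field-of-view numbers are my assumptions (check the Pixy docs for your unit), and the bearing math is just a first approximation:

```python
from dataclasses import dataclass

# Rough geometry assumptions -- verify against the Pixy docs:
FRAME_WIDTH_PX = 320   # assumed frame width reported by the Pixy
HFOV_DEG = 75          # assumed horizontal field of view

@dataclass
class Blob:
    """One color blob report: trained signature plus bounding box."""
    signature: int  # which of the 7 trained colors (1-7)
    x: int          # blob center x, in pixels
    y: int          # blob center y, in pixels
    width: int
    height: int

def blob_bearing_deg(blob: Blob) -> float:
    """Convert a blob's x pixel into a bearing relative to where the
    ear is currently pointing (0 = dead ahead, negative = left)."""
    offset = blob.x - FRAME_WIDTH_PX / 2
    return offset * (HFOV_DEG / FRAME_WIDTH_PX)

if __name__ == "__main__":
    b = Blob(signature=1, x=250, y=100, width=40, height=60)
    print(f"bearing: {blob_bearing_deg(b):+.1f} deg")
```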
Each ear will be mounted to a servo to let it sweep from directly forward to directly aft. For now I’m cancelling my plans to put an elevation servo in each ear; however, the entire head can elevate. Between them, the two ears will cover 360 degrees. I have a lot of plans for how to use this information to improve situational awareness, localization, and obstacle avoidance, to name a few. Ultimately, I want to use the two cameras to give the robot depth perception when looking forward at specific colored objects, as in the sketch below. I hope to eventually use this information to allow the arms to grab things.
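The depth-perception part boils down to stereo disparity. A minimal sketch, assuming a made-up baseline and focal length until I can measure and calibrate the real ones:

```python
# Minimal stereo-depth sketch. The baseline and focal length below are
# placeholders; the real values depend on how far apart the ears end
# up and on calibrating the Pixy's focal length in pixels.
EAR_BASELINE_M = 0.15     # assumed distance between the two cameras
FOCAL_LENGTH_PX = 270     # assumed focal length, in pixels

def depth_from_disparity(x_left: float, x_right: float) -> float:
    """Estimate distance to an object both ears can see, from the
    difference in its x position between the two camera frames."""
    disparity = x_left - x_right  # pixels; larger = closer
    if disparity <= 0:
        raise ValueError("object should appear further left in the left frame")
    return EAR_BASELINE_M * FOCAL_LENGTH_PX / disparity
```

That’s just the standard pinhole stereo relation; accuracy falls off with distance because the disparity shrinks as 1/distance.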
I plan to allow the ears to scan around while the robot’s face stays focused on the person it is talking to. If the sonar detects something closer along a particular vector, the ear can rotate to that vector to get a more detailed picture.
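Here’s roughly the sonar-to-ear handoff I have in mind. The servo call is a placeholder for whatever command my controller actually ends up using:

```python
def closest_sonar_bearing(readings):
    """readings: list of (bearing_deg, range_cm) from a sonar sweep.
    Returns the bearing of the nearest return so an ear can be slewed
    over for a closer look."""
    bearing, _ = min(readings, key=lambda r: r[1])
    return bearing

# set_ear_servo() is hypothetical -- it stands in for the real servo
# command on the controller.
def investigate(readings, set_ear_servo):
    set_ear_servo(closest_sonar_bearing(readings))
```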
While all this is going on, I believe I can use a memory of where objects were seen in the past to localize to within a few inches in a room, probably on the Android. I did this with Anna and OCR, so it should be even more feasible with the Pixy camera.
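The localization idea, in sketch form. The landmark table and the helper are hypothetical; the real version would live on the Android and combine many sightings, not just one:

```python
import math

# Hypothetical landmark memory: color signature -> known room position (m).
LANDMARKS = {1: (0.0, 3.0),   # e.g., red object on the shelf
             2: (4.0, 0.5)}   # e.g., green object by the door

def locate_from_landmark(sig, range_m, bearing_deg, heading_deg):
    """Back out the robot's room position from one remembered landmark,
    a range estimate, and the ear bearing plus the robot's heading."""
    lx, ly = LANDMARKS[sig]
    world_angle = math.radians(heading_deg + bearing_deg)
    return (lx - range_m * math.cos(world_angle),
            ly - range_m * math.sin(world_angle))
```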
Because the bottom half of the Pixy sensor is flat, I believe I may be able to cram the sound sensor into each ear as well, directly in front of the Pixy without blocking its camera. I have never worked with sound sensors, so this area is totally new for me.
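That said, my rough mental model is that each sensor just reports an analog loudness level, so comparing the two ears could give a crude direction cue. Untested, since I haven’t touched these sensors yet:

```python
def louder_side(level_left: int, level_right: int, deadband: int = 10):
    """Crude direction cue from the two ear sound sensors. Returns
    which ear heard the sound louder, or None if the difference is
    inside the deadband (probably straight ahead or behind)."""
    diff = level_left - level_right
    if abs(diff) <= deadband:
        return None
    return "left" if diff > 0 else "right"
```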
I think these ears will turn out to be the most useful sensors on the bot. I’ll still have the Android camera and thermal cameras facing forward with OpenCV (processing at a slower frame rate). OpenCV gives me a lot more flexibility to program whatever I want.
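For comparison, the OpenCV side of the same blob-finding job looks roughly like this (OpenCV 4’s findContours signature; the HSV range is just an example to tune for the object you care about):

```python
import cv2
import numpy as np

def find_color_blobs(frame_bgr, hsv_lo=(100, 150, 50), hsv_hi=(130, 255, 255)):
    """Rough OpenCV equivalent of what the Pixy does in hardware:
    threshold a color range (blue-ish by default here) and return
    bounding boxes of the blobs found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]  # (x, y, w, h) each
```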
I should have my Pixy sensors by next week. I hope to have the ears redesigned around them in the next few weeks.
Lucy runs on my home server as an API with a supporting website. You can access both and use them from your robot as well. Send me an email at [email protected] and I’ll send you instructions.
For TV control, I was going to start with IR. I have done a lot of work with IR before, so it’s nothing particularly new. I don’t really have my smart home yet, just some Philips Hue lighting and Wink stuff, which I have managed to control through an API. I don’t know anything about using the frequencies or devices you mentioned yet, but I would love to learn if you have time to share anything. I have some XBee Pro stuff lying around which I might cram into the rear of the body or head, but I have never used it.
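One easy way to do the IR sending from a Linux board is LIRC’s irsend tool, just shelled out from Python. The remote and key names below are placeholders for whatever remote config actually gets recorded:

```python
import subprocess

# Sketch of IR TV control via LIRC. "living_room_tv" and "KEY_POWER"
# are placeholder names for a recorded remote and one of its buttons.
def send_ir(remote="living_room_tv", key="KEY_POWER"):
    subprocess.run(["irsend", "SEND_ONCE", remote, key], check=True)
```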