Modified Robot Becomes Advanced AI Automation - Demo Video

Hi

This is a modified Kondo KHR2 with an RCB, a C7 board, a camera, and custom software to make it more advanced.

The demo video is about 16 minutes.
It covers voice, speech, reading, color, and weather recognition; voice control; automation; AI center of gravity; distance; AI talking; and much more.

So it is possible for anyone to make their RoboNova, Kondo, or Lynxmotion robot advanced with automation.

One day, if I have the time, I will use the Lynxmotion SSC-32 with one of the Lynxmotion robots to do the same as above.

I just ordered a gripper from Lynxmotion… I will put that in my next robot project.

I will add new features to the software to do more things in the next update.
For example:

  1. Human motion sync with robot.
    Through the camera the robot will move its left arm as you move your left arm.
    Basically, the robot copies what you do.

  2. Simple taste buds… (liquids only).
    Using certain sensors with the software, the robot
    can taste the difference between water, orange juice, coke, coffee, etc…
    This should be fun to program.
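The taste-bud idea could start as a nearest-neighbor match against a calibration table of sensor readings. Everything below is invented for illustration (the sensor types, the numbers, and the liquid labels are all assumptions, not the author's actual design):

```python
# Hypothetical calibration table: (conductivity, acidity) reading per liquid.
KNOWN = {
    "water":        (0.05, 7.0),
    "orange juice": (0.30, 3.8),
    "coke":         (0.45, 2.5),
    "coffee":       (0.25, 5.0),
}

def identify(conductivity, acidity):
    """Return the known liquid whose calibration point is closest to the readings."""
    def dist(sample):
        c, a = sample
        return (c - conductivity) ** 2 + (a - acidity) ** 2
    return min(KNOWN, key=lambda name: dist(KNOWN[name]))

print(identify(0.06, 6.8))  # readings near the water calibration point
```

With real sensors you would calibrate the table by sampling each liquid a few times and storing the averages.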

If you have any more ideas to add to the robot, please post them here or PM me.
Thanks

Sorry, I had to split the video into two parts since YouTube only allows a maximum of 10 minutes.

Part A
youtube.com/watch?v=nITt-DL1ycw

Part B
youtube.com/watch?v=3Ds7CXa6oGY

I would like to see positional perception based on the position of the audio source. I have dabbled a little with making my projects respond to sound, but I have not gotten to the point of making them calculate the angle the sound is coming from. My design is simple; it responds to whichever side the sound is loudest on.
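That louder-side approach can be sketched in a few lines. This is a minimal illustration, not the actual project code; it assumes you can already read an RMS level from each microphone, and the deadband value is made up:

```python
def louder_side(left_rms, right_rms, deadband=0.05):
    """Pick a turn direction from two microphone RMS levels.

    Returns 'left', 'right', or 'center' when the two levels are
    within the deadband of each other (avoids jitter on near-ties).
    """
    diff = left_rms - right_rms
    if abs(diff) <= deadband:
        return "center"
    return "left" if diff > 0 else "right"

# Hypothetical readings; real values would come from the microphone ADCs.
print(louder_side(0.42, 0.18))  # clearly louder on the left
print(louder_side(0.20, 0.21))  # roughly equal, so no turn
```

Actually estimating the angle would need the time difference of arrival between two mics rather than just a loudness comparison.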

I would also like to see your photos of the head showing the camera.

I enjoyed your videos!

Here is a picture of the Aluminum robot head I designed:

img47.imageshack.us/img47/2789/robothead22fq.th.jpg

Here is a picture of the sound board I made with speech:

img414.imageshack.us/img414/3636/rcsuhighresfp9.th.jpg

SN96
“I would like to see positional perception based off the position of the audio source.”

I am not sure what you mean by the above…
But in my case, the camera has a built-in microphone.
Sound from the microphone goes to the C7 board; the software processes your speech, the AI picks out the keywords, finds the best possible answer, and talks back to you.
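The keyword-to-answer step described there could look something like this. The actual software isn't shown anywhere in the thread, so the table, phrases, and scoring rule below are all made up for illustration:

```python
# Hypothetical keyword -> answer table; the real software's vocabulary is unknown.
ANSWERS = {
    ("hello", "hi"): "Hello! How can I help you?",
    ("weather",): "Let me check the weather for you.",
    ("name",): "I am a modified KHR2 robot.",
}

def best_answer(utterance):
    """Score each canned answer by how many of its keywords appear in the utterance."""
    words = set(utterance.lower().split())
    scored = [(len(words & set(keys)), reply) for keys, reply in ANSWERS.items()]
    score, reply = max(scored)
    return reply if score > 0 else "Sorry, I did not understand."

print(best_answer("hello robot"))  # matches the greeting keywords
```

A real system would sit behind a speech-to-text stage and likely weight keywords instead of counting them equally.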

"
I have dabbled a little with making my projects respond to sound but I have not got to the point of making it calculate the angle the sound is coming from. "

I just put the microphone on top of the robot head (center).
I don't really need to calculate the angle the sound is coming from.

"I would also like to see your photos of the head showing the camera. "

There is nothing fancy about it. But if you want, I can take a close-up picture of the head. It's a Logitech camera with a built-in microphone.
I will probably buy another camera because this one doesn't focus very well at a distance.

I think my Lynxmotion gripper shipment may arrive today.
I will find out when I get home.
I can use one of the grippers as a “mouth” to lip-sync while the robot is talking back to you.
I can set up another demo for you if you want:
the robot's mouth moving in lip-sync while it talks.
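A simple lip-sync like that usually just maps the audio envelope onto a jaw angle. A rough sketch, with made-up servo limits and envelope values (not the author's actual setup):

```python
def mouth_angle(amplitude, closed_deg=0.0, open_deg=45.0):
    """Map a 0..1 audio envelope value to a gripper 'mouth' angle in degrees.

    The amplitude is clamped so noisy peaks can't over-drive the servo.
    """
    amplitude = max(0.0, min(1.0, amplitude))
    return closed_deg + amplitude * (open_deg - closed_deg)

# Hypothetical envelope samples taken while the robot is speaking.
for a in (0.0, 0.5, 1.2):
    print(mouth_angle(a))
```

In practice you would sample the envelope every few tens of milliseconds and send the resulting angle to the gripper servo.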

SN96.
I would like to know how you do the lip-sync for your robot.
Is it possible to set up a demo video, or is that too much work?
From what I see in your picture, you use hardware to control your speech, so I assume the hardware is expensive…
The robot head looks cool; how much does it weigh?

The head is fairly light, though I have not weighed it. The speech synthesizer is around $25 for the chip and around $60 for parts and shipping; the board was another $20, so overall it was around $105+ to build. Much of that cost was for other components to drive the speaker, microphones, tilt sensor, and ping sensor.

I will see if I can find a video for you. I have a link to one posted somewhere on this forum.