Hello, I am very new to the field of robotics. I have been building and designing fairly simple robots, such as audio droids and simple androids, for about two years now. I would like to take a small model, such as the Femisapien or the FT (Female Type) robot, and upscale everything to actual life size or a little larger. How feasible would this be? I also imagine that by upsizing it, and designing the enlarged version myself, there would be a bit more room for extra components, sensors, etc. I might also need to change the design a little to make it a new type of my own. And I would probably need to find a plastics/molding company that could replicate the pieces, right?
Thank you very much! Yes, I would rather build a life-sized walking one (that is my goal), and one that walks pretty decently as well, so I would love to use the structure of the FT robot. I will probably redesign its appearance a bit, especially the head. Although I don't know of anyone who has been able to purchase an FT yet; I don't think they are for sale yet, and I heard they will start at $3,000! (If only I could get the blueprints!) Maybe I should just focus on the Femisapien's walking structure. You see, I am trying to find a good walking robot structure to upscale to life size: one that is fairly affordable at its original scale and looks very nice and human-like in design, like the FT (with knees and all). I noticed the Femisapien walks kind of funny, but it might work for the price. Do you have any more suggestions, like a humanoid kit or design that I could upscale? I will look around for designs I like.
(Sam Swan of Elvensynth)
Thank you for answering that for me! OK, I am a little more familiar with how it works: I purchase an audio synthesis PCB and connect it to a voice recognition PCB with a microcontroller. Does the Magnevation board only have 8 possible responses via the 8 buttons, and does one have to push the buttons manually to get a response? I need it to respond back automatically when I speak a command phrase or word. Are you familiar with Project Aiko? Trung Le uses speech communication with his android by simply asking it a question, and it automatically responds; she can also learn. Say she does not know your name: you simply tell her your name, then ask her what your name is, and she will tell you your name back. Is it possible to do this with only an audio synthesis PCB, a microcontroller, and a speech recognition PCB (plus speaker and mic)? I am still a little fuzzy on how speech-to-speech works.
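The learn-your-name behavior described above can be sketched as a tiny stateful command handler. This is a hypothetical Python illustration, not Project Aiko's actual code; in a hardware build the same logic would live on the microcontroller, taking the recognized phrase as input and handing the reply string to the synthesis stage:

```python
# Minimal sketch of a "learn and recall" dialogue. Assumes the speech
# recognition stage has already converted audio to a text command and the
# synthesis stage will speak whatever string respond() returns.
class NameMemory:
    def __init__(self):
        self.name = None  # nothing learned yet

    def respond(self, heard):
        heard = heard.lower().strip()
        if heard.startswith("my name is "):
            # Learn the name from the tail of the phrase.
            self.name = heard[len("my name is "):].title()
            return f"Nice to meet you, {self.name}."
        if heard == "what is my name":
            if self.name is None:
                return "I do not know your name yet."
            return f"Your name is {self.name}."
        return "Yes, Master."  # default trained response

bot = NameMemory()
print(bot.respond("My name is Sam"))   # Nice to meet you, Sam.
print(bot.respond("What is my name"))  # Your name is Sam.
```

The point is that "learning" here is just writing to a variable between turns; the recognition and synthesis boards never need to change.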
Question on your products regarding Audio synthesis, etc.
I am very interested in how the audio synthesis PCBs work and what they are capable of, as well as the capabilities of the speech recognition PCBs. On my future project I want to be able to speak to the robot audibly and have it audibly reply (via a trained response) in a female voice. Could it be as simple as connecting an audio synthesis PCB (as voice output) to a speech recognition PCB as input? Or does the speech recognition board already understand your voice and provide "voices" for responses, with a microphone connected as input and a speaker as output? I would have it simply respond to any command with "Yes, Master." I am wondering if these PCBs are similar to chatbots, which you can train to reply with any response you would like to almost any question. Do they interface to a computer so you can train them to recognize your voice, and can you create or choose a female voice for the audible responses, via a microcontroller? I have noticed that the speech recognition PCBs have a limited number of commands (a second or two each) that you can program/train; I might need to connect a few of them. What are the 8 event input buttons for on the Magnevation SpeakJet Activity Center PCB? Pardon me, I know I have a lot of questions! I am quite new to robotics!
Thank you!
Sam S of Elvensynth
That was one idea I had: to use a computer motherboard instead of separate PCBs. One idea for my 3rd or 4th prototype of "Brexxa-1" (rather, Brexxa-3 or 4!), a.k.a. "24Seven", is to use a plastic mannequin (plastic because of the possibility of melting through parts and re-installing them). I plan, and have planned, on installing the motherboard in the chest cavity area, with an access panel on its back. I already have a life-sized animatronic head (lightweight, too) that I plan on installing on the bust of the static mannequin.
1: Keep the android static (possibly articulated joints with locks on the backs of the knees so it could stand and sit)
2: Wireless or not (doesn't matter if it's not going to walk around yet)
3: Focus on using its intelligence via sound, using a freeware program I found called e-Speaking (e-speaking.com)
4: No need for rangefinder sensors or the like (maybe IR or ultrasonic, just for human detection)
5: Possibly used as a security device.
I hacked the WowWee Alive Elvis (cut off the speakers), peeled off the flashing, and made my own "female type" head using liquid latex applied by hand. (I also molded the face using a special Play-Doh-like material, my secret recipe, over the underskull, and taped up the mechanisms to create a proper flashing of my own.) I let it dry and harden, then coated the entire head with liquid latex, finished with acrylic flesh-tone paint, and added a nice wig.
So basically the head is done, except for synchronizing the mouth when I install her internals and memory/motherboard.
I just need to hack the jaw movement chip to sync with the speaker output.
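One common way to sync a jaw with audio (an assumption on my part, not necessarily how the chip does it) is to drive the jaw angle from the loudness envelope of the signal: take short frames of audio samples, compute their RMS level, and map that level to a servo angle. A minimal sketch:

```python
import math

def rms(frame):
    """Root-mean-square loudness of one frame of audio samples (-1.0..1.0)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def jaw_angle(frame, max_angle=30.0):
    """Map frame loudness to a jaw-servo angle: silence -> 0, full scale -> max."""
    level = min(rms(frame), 1.0)
    return level * max_angle

# Example frames: a silent frame keeps the jaw closed; a loud tone opens it.
silence = [0.0] * 256
loud = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(256)]
print(jaw_angle(silence))  # 0.0
print(jaw_angle(loud))     # roughly two thirds of max_angle for a full-scale tone
```

Run per-frame (say every 20 ms) against the same signal feeding the speaker, this gives a jaw that opens with loudness, which is usually convincing enough for an animatronic mouth.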
There's a lot to it, and I will discuss more in future threads.
SS of Elvensynth
In my first prototypes I had difficulty finding proper heads to use; the animatronics at that time were very expensive. Then WowWee came out with the Alive Elvis! It was very affordable and contained all the essential humanoid parts! In my first prototype I used cosmetology heads, the ones they use for practicing cutting hair. I would cut out the eyes and shave off all the hair, slice the back, peel off the flashing, cut into the styrofoam and install a webcam or other components, re-apply the flashing, and add a wig! I attached the neck base to a hi-tech servo with PVC glued to the servo horn, used wood for the frame of the chest cavity, etc. For one of my first prototypes, as in my picture, I built a life-sized articulated endoskeleton by hacking a Robosapien, extending all the limbs with PVC and wood, hinges for knees, etc. I then took apart my boombox and installed the speakers into the stomach area, and attached the tape player to the backpack I made. I would make pre-recorded songs and voices with Netscape reader on my computer, move her around the house, plug her in, and use her as security to trip people out. I also had an auxiliary input on her so I could plug her into the computer for whatever songs or voices I wanted to play. I used a webcam for her eye to monitor what she was seeing. That was my first prototype, Brexxa-1. Currently I am using the WowWee Alive humanoid animatronic head, and I customize the flashings.
I will look into Sega's EMA female robot. Is it an actual physical toy? How big? Or is it, as I suspect, more of a virtual thing, a game? Connecting a virtual behavior/appearance with a physical android via a serial port interface, so that when you plug your bot into the computer or TV it already has programmed features, voice, etc., is a great idea. This explains how androids, when plugged into the right source (such as the net or an Xbox), can obtain tons of info or different behaviors. Please explain your ideas for my robot. Basically, I gather that by installing the right hardware internally in my robot, when she is plugged into the right source she could gather all the right info or communicate via the computer using Bluetooth and a microcontroller? I am working on speech-to-speech recognition (human speaks and robot answers) as follows:
Still a little perplexed, I have recently come to conclude this:
If:
(speech > text) > algorithm > (text > speech)
then we have "speech to speech," theoretically (human speaks, robot answers).
Therefore:
Speech (input) goes to the voice recognition PCB, which converts it to text. A text response (depending on the question, phrase, command, etc.) is chosen via an NLP algorithm and passed to the (text > speech) stage, where the text-to-speech step is always the same.
So basically:
(voice recognition > text) > algorithm > (text > speech synthesis)
and:
(input > response) > algorithm > (response > output)
So basically:
voice recognition > speech synthesis, where the input mic is connected to voice recognition/speech-to-text, then to the algorithm, then to text-to-speech/speech synthesis connected to the speaker output.
This is how I have concluded one way to implement "speech to speech."
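The pipeline above can be sketched end to end. This is a hypothetical illustration with the two hardware stages stubbed out as plain functions; a real build would swap the recognition PCB in for `speech_to_text` and the synthesis PCB in for `text_to_speech`:

```python
# Sketch of (speech > text) > algorithm > (text > speech),
# with both hardware stages stubbed as functions.

def speech_to_text(audio):
    """Stub for the voice recognition PCB: pretend the audio is already text."""
    return audio.lower().strip()

def choose_response(text):
    """The 'algorithm' stage: map recognized text to a reply."""
    canned = {
        "hello": "Hello! How can I help?",
        "what is your name": "My name is Brexxa.",
    }
    return canned.get(text, "Yes, Master.")  # default trained response

def text_to_speech(reply):
    """Stub for the speech synthesis PCB: here we just return the phrase."""
    return reply

def speech_to_speech(audio):
    return text_to_speech(choose_response(speech_to_text(audio)))

print(speech_to_speech("Hello"))          # Hello! How can I help?
print(speech_to_speech("open the door"))  # Yes, Master.
```

Everything interesting lives in the middle `choose_response` stage; the outer two stages just convert between sound and text, which is exactly why they can be bought as off-the-shelf boards.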
Yes, I have seen the Femisapien, a.k.a. the E.M.A. robot (in Japan). It turns out that WowWee are the original creators (to my knowledge), not Sega Robotics. I saw videos of her "walking" and thought it looked kind of funny. I wanted to incorporate the FT's (Robo Garage/Kumotek/Roboporium) movements instead, but I'm not sure yet whether it will be available to the public, and it is quite expensive, like $3,000, though its movements are very beautiful. I might use the Manoi PF as a source; it runs around $1,500, but I don't think its movements are feminine. I will probably just use the Femisapien; it's very affordable for a walking robot. Even though it works, I don't much care for the way it looks when it walks, especially the way the arms move, but I DO really like a lot of the design of the body/exoskeleton. Of course I will change the arms a bit, and the general design, to create my own special robot, and just use the Femisapien as a reference for similar parts, if not the same ones… but at a much larger scale.
(Sam S. of Elvensynth)
I have studied the Femisapien a bit more now and am actually quite impressed by the design and walking, as well as the functionality. WowWee states that it will hit the shelves September 26th for $99 US! Wow, very affordable! I also like the new "pose positioning" feature, especially being able to guide her by the hand!
(Sam S. of Elvensynth)
OK, I have done some more research on the Femisapiens, and WowWee Robotics states that theirs are the originals, and that the E.M.A.s (Eternal Maiden Actualization) by Sega Robotics are just copies. The two seem to be the same in functions and performance. The Japanese version seems to have purple painted designs, whereas the US version has red designs, although I think there are (or will be) a few different colors to choose from. These beautiful little android girls have a new function called "pose positioning," where you move any part and the robot repeats the motion in timing and order. All you have to do is show her your hand and she will repeat the gesture over and over; if you wave your hand twice, she will repeat it twice! You can also guide her by the hand, and she will follow. She dances and blows kisses. The robot can actually WALK, and quite well at that! She can turn corners and navigate right, left, forwards, and backwards! She has sensors to detect an edge or an object in the way, so she can keep going! She can also hand out business cards. The cute little android girl was designed typically for the 20-and-up crowd. She will hit the shelves September 26th, maybe a bit earlier, starting at $99 US. You can purchase one at your local Radio Shack, Circuit City, Target (wherever they sell WowWee's toys); Sharper Image is going completely out of business! I imagine they would make a great gift for a friend for this upcoming Christmas, birthday, etc. The ones released in Japan, also starting in September, will start at $175 US, quite a bit more than here!
(Check out my blogs for videos of the Femisapien at: Elvensynth@myspace/music.com, or go to YouTube.)
(Sam S. of Elvensynth)
OK, I read through the manuals a bit for a few of your products in the areas of speech recognition PCBs and audio synthesis PCBs. Does, or could, the Devantech speech synthesis module (female voice) connect to the Magnevation SpeakJet Activity Center, allowing all or some of the output voice to be female? I read that the SpeakJet chip alone speaks once you connect ground and a speaker, but what does the voice sound like? I would rather bypass all the circuit building and purchase the Magnevation SpeakJet Activity Center pre-assembled. I read about the capabilities of the SpeakJet in general, and they seem almost unlimited! I would also like to look into using the Images Scientific SR-07 speech recognition circuit (assembled) for my trained input (speaker-dependent).
Therefore:
input mic (speech recognition PCB) > microprocessor (NLP algorithm) > Magnevation SpeakJet Activity Center PCB > output speaker.
So basically, I wonder what the SpeakJet sounds like when it speaks, and whether I could incorporate the Devantech (female) module if it doesn't sound female.
To recap my earlier conclusion: (voice recognition > text) > algorithm > (text > speech synthesis), i.e. the input mic feeds voice recognition/speech-to-text, the algorithm chooses a response, and text-to-speech/speech synthesis drives the speaker output. That is one way to implement "speech to speech."
OR:
I might be able to bypass the text stage by using a speech recognition PCB such as the SR-07 circuit and bridging a microcontroller such as the BasicX or BASIC Stamp to an audio synthesis PCB such as the Magnevation SpeakJet Activity Center, so that a spoken command would directly trigger an audible response, phrase, or sound.
This sounds a bit complicated, I know. I am still very new to robotics, so bear with me, all you pros.
I imagine that with proper training/programming and the proper hardware/PCBs this is very possible, though the available knowledge is limited compared to using the net.
OR:
Hook up a wireless motherboard and use the computer as its internal memory. Somehow access a chatbot program such as Pandorabots (pandorabots.com) and use it the same way: human speaks, and robot answers. This would still follow the (speech > text) > algorithm > (text > speech) process. It would also be VERY useful to use the robot's name as a command, so it would know when it's being spoken to and when to respond.
Any thoughts and ideas on this would help greatly.
Thanks,
Sam S.
I remember a site that offers small free samples, called Soundrangers.com. They have hundreds upon hundreds of sound samples! They also have pre-recorded female voices, even robotic ones, some like they're from a science fiction movie! I wonder if I could incorporate one of those into the Magnevation SpeakJet PCB? I am sure I could find a way. Anyway, thank you for your responses; I will look further into this field.
I have been reading "Making Things Talk" from Makezine. I have another idea for speech-to-speech communication. I have thought of this before, but had no clue how it worked.
1: Use the net as the source of communication (unlimited knowledge).
I would build a "nexus" (PCBs, chips, mic, and speaker) where you could talk into the omnidirectional mic and say a command; this would access the net and connect directly to, say, Netscape reader (freeware), used as a default.
The "nexus" would be wireless/Bluetooth.
So I would say a command such as "open"; this would open the net, and the voice would respond to every command to verify it, such as: "searching… searching… opened." Then I would say another command such as "search cosmist," and it would pull up all sources under "cosmist," including Wikipedia.
I command, "open Wikipedia"; the computer responds, "opened."
I command, "read."
It reads the page, and so on.
So basically, for every command I give, the computer would respond to verify, always using the same source (such as Netscape reader): automatically verifying every command, as well as automatically copying and pasting the page and triggering the speak button.
So far I would use:
voice recognition PCB
mic
speaker
Bluetooth module (BlueSMiRF?)
Arduino
I know there’s quite a lot more to it…
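The command-and-verify loop above can be sketched as a small dispatcher. This is a hypothetical illustration; the net access, page reading, and spoken verification are all stubbed out:

```python
# Sketch of the voice-command loop: each spoken command gets a spoken
# verification, then is dispatched to an action. Net and TTS are stubs.

def speak(text):
    """Stub for the text-to-speech stage; a real build would voice this."""
    return text

def handle_command(command, state):
    command = command.lower().strip()
    if command == "open":
        state["online"] = True  # pretend we connected to the net
        return speak("searching... searching... opened.")
    if command.startswith("search ") and state.get("online"):
        state["topic"] = command[len("search "):]
        return speak(f"found sources for {state['topic']}.")
    if command == "read" and state.get("topic"):
        return speak(f"reading page about {state['topic']}.")
    return speak("command not recognized.")

state = {}
print(handle_command("open", state))            # searching... searching... opened.
print(handle_command("search cosmist", state))  # found sources for cosmist.
print(handle_command("read", state))            # reading page about cosmist.
```

The `state` dictionary is what makes "read" meaningful: it remembers that the net is open and what the last search topic was, which is the same job the nexus hardware would do between commands.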
Actually, reproducing human motion does not seem too difficult for me to replicate; it depends on one's knowledge, applications, and testing. It has been done, and mostly kept quite secretive, for thousands of years; it's a modern form of puppetry, of magic. Same with AI. I live, eat, and breathe robots right now, and am becoming much more advanced every week. It does take A LOT of time, patience, precision, etc. I was recently looking at the hand and head projects on Android World (androidworld.com). Hands are fairly basic, depending on whether your application is electric or pneumatic; either way, I was noticing the pricing: up to $18,000 US for a hand!
I am surprised to find that not too many people seem interested in this field; it is so beneficial! Anyhow, I noticed that their construction seemed a bit off in design. For me, it is all about experimenting right now: testing, prototyping. I really don't find it necessary to replicate the smoothness of motion and extreme DOF that a human can produce in a droid. I believe that would take away the fun; I enjoy it when they glitch, fall down, bump into walls, etc.! I have found that molding/casting is a very big part of it. If your silicone/latex castings are nice, you've got a cool-looking droid!
So this is what I have concluded regarding a speech-to-speech engine.
When the android is booted up, Dragon NaturallySpeaking runs automatically, along with A.L.I.C.E./Pandorabots, opening a text-to-text page. Dragon listens for a "wake" command, which is programmed/trained as the android's name; otherwise it is unattentive. When the name/wake command is given, it is open for conversation, questions, etc. (the question or conversation is given to the robot).
After a 4-second delay, the android automatically presses "Enter" (which posts the text response), then copies and pastes the response into the text-to-speech stage, using Cepstral voices, etc.
Sleep command: the robot/android's name followed by "sleep" or "go to sleep."
AutoHotkey (autohotkey.com, freeware) to script the automatic steps
Dragon NaturallySpeaking
A.L.I.C.E. (Pandorabots)
Cepstral, Netscape Reader, etc.
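The wake/sleep gating described above can be sketched as a tiny state machine. This is a hypothetical Python illustration; in the real setup AutoHotkey would glue Dragon, the chatbot window, and the TTS together, but the gating logic is the same:

```python
# Sketch of wake-word gating: the robot ignores everything until it hears
# its name, then forwards input to the chatbot until told to sleep.

class WakeGate:
    def __init__(self, name):
        self.name = name.lower()
        self.awake = False

    def hear(self, utterance):
        words = utterance.lower().split()
        if not self.awake:
            if self.name in words:
                self.awake = True
                return "listening"
            return None  # unattentive: ignore everything else
        if self.name in words and "sleep" in words:
            self.awake = False
            return "going to sleep"
        return f"forward to chatbot: {utterance}"

gate = WakeGate("Brexxa")
print(gate.hear("hello there"))         # None: asleep, ignored
print(gate.hear("Brexxa"))              # listening
print(gate.hear("what is your name"))   # forwarded to the chatbot
print(gate.hear("Brexxa go to sleep"))  # going to sleep
```

Keeping a single `awake` flag like this is what makes using the robot's name as a command work: nothing downstream (chatbot, TTS) ever sees speech that wasn't addressed to her.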
(Sam Swan/Elvensynth Robotics/11-18-2008)
I got Brexxa's speech engine running.
I am using e-Speaking…
I plan on installing Dragon NaturallySpeaking as well, to run dictation for A.L.I.C.E./Pandorabots, Brexxa's official memory,
and on using AutoHotkey for the automation scripting…
Shelling,…then Meks
Another idea I have been told about and considered is to use a lie detector along with the voice recognition, combining sound with vision, IR, sonar, and a few photodiodes for light proximity and analysis.
So not only would the automaton detect a human through sonar and heat; it would combine the voice and facial/skin color tones and match them against a stored memory-shot photo.
Storing external memory on USB flash drives…etc.
Brexxa's speech is at least working; she seems to be getting to know her name much better, but she keeps calling herself a "mendel"? I don't know what that is. Yes, I use A.L.I.C.E. at Pandorabots. I plan on using Dragon for dictation into her memory, on top of my current e-Speaking and AutoHotkey setup that scripts the automation. I think I am going to use about a 40 GB internal drive. I am currently in the process of paring my computer down and downloading Brexxa's AIML to my laptop.
I just made a mold for an android/pirate flashing; it should be set by Friday.
Happy Thanksgiving All!
(Sam-Swan of ELVENSYNTH)