I really don’t get your question… Fix what?
This robot… the accessories…
How I almost competed in Fire Fighting
Last week and over the weekend I worked hard on MiniEric to prepare him for Fire Fighting. I had to make him shorter to fit the height limit, gave him a paint job, changed the electronics and sensors, and managed to render the Nokia color LCD worthless… Oh well, lots of work and frustration. So many things to do and so little time left before the competition.

I tried to adapt Mike Ferguson’s code for his Crater robot but couldn’t figure out some of the logic, and it wasn’t working on my bot. Trying to make the robot run in a straight line and turn precisely 90 degrees, I made a list of commands that would take him over the entire course node by node. Of course, nothing was precise enough, and after going through half of the course the robot would hit the walls. I tried adding sensor measurements, but one Sharp sensor failed to work entirely, the other one was giving me odd measurements, and the Ping sensor was mounted too high (read: too close to the top of the walls) to be reliable. It works in a real house, just not in the Fire Fighting prop. If I tilt it, I can read the distance to the front wall, but I can’t use it for side walls at an angle; the measurements are completely weird. So I ended up with a sensorless robot. Still, he was able to detect the candle flame using the thermopile array sensor, and the spraying mechanism worked fine.

So I went to the competition just to talk with the people there and show my robot for fun. After the competition started, I saw a robot that was just running from wall to wall, not knowing where it was; eventually, by luck, it would find the candle and try to put out the flame using a fan. It just hit me: it doesn’t have to be a perfect bot to compete. I can do it with simple commands and, once in a room, look for the candle and put it out! I can do that! So I rushed to my laptop, changed the code, did a small test right there on the floor, and headed to the officials to enter him in the competition.
But it was too late; the competition was already halfway through… I did some more tests between rounds directly on the course and managed to get close to the candle, but the sprayer couldn’t put it out for some reason. Of course, at home it works! Spooky candle they use; it’s hard to blow out even with a fan.
All in all, it was a good day: I talked with the experts, met Mike Ferguson and saw his Izzy bot doing pretty well, and talked with Jon Highlands and saw his super powerful robots (first and second place in the Mini Sumo experts division). I came home, did more tests, and this time got the robot doing it almost perfectly. I took a video to show you how it was supposed to work at the competition (well, minus the small nudge).
Very nice
Looks even better now, and the fire fighting works really nicely. Bravo!
As for the Fire Fighting competition, better luck next time. I see you as a perfectionist, and I’m sure those candles will have no chance next time.
Thanks! I’m sure a year from now I’ll be a much better programmer and this competition will be a piece of cake. I’ll also try line following using the AVRcam. With a year to prepare, I’ll make sure I don’t get caught short this time and have it working flawlessly. (Yeah, I’m a perfectionist.)
Voice Command and Speech update.
I have updated the project with a new video. Let me know what you think.
Very nice work with the onboard voice recognition. Not many people have got it done without a PC interface.
Do you have more information or links about the VR and synthesis chips? I couldn’t quite catch what they were called; your accent is very interesting.
Thanks!
The VR chip is this one: http://www.tigal.com/1770. It has an Arduino demo, and I adapted the code from that demo to work on my robot.
The speech “chip” is actually C code developed by Webbot from SoR as a speech synthesizer. I adapted that code (with a bit of help from the Arduino forum) to work on Arduino and added it to my Arduino module that takes care of the VR chip. I had to use a mega328 because the code grew past the 14 KB available on a mega168 with the bootloader; without the bootloader it would fit, since it’s almost 15 KB. Webbot told me that if I generate the phonemes from text and then play a bit with the parameters, I can improve the way the text is spoken. I’ll try that and see how it goes. Since my robot speaks stored text, he also said it’s better to store that text as phonemes and not as actual text: faster response and better speech quality.
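The stored-phoneme idea could be sketched as a simple lookup table: each canned phrase is kept as a pre-generated phoneme string, so the synthesizer skips the text-to-phoneme step at runtime. The phrase names and phoneme spellings below are invented placeholders for illustration, not real output of Webbot’s converter:

```cpp
#include <map>
#include <string>

// Each canned phrase is stored pre-converted to phonemes, so at runtime the
// synthesizer only has to play them back. Spellings here are made up.
std::map<std::string, std::string> phraseBank = {
    {"greet",   "HH EH L OW"},
    {"confirm", "OW K EY"},
};

// Look up a phrase by id; an empty string means "phrase unknown".
std::string phonemesFor(const std::string& phraseId) {
    auto it = phraseBank.find(phraseId);
    return it == phraseBank.end() ? "" : it->second;
}
```

The lookup is cheap and deterministic, which is where the faster response comes from compared to converting text on every utterance.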
The benefit of using a dedicated speech module is that it can keep listening for voice commands at all times while the robot does whatever it needs to do. I can stop its actions at any time and give him different commands. The speech module on my robot is mainly an I2C master device but, depending on the situation, it can become a slave device.
Here is the original C code: http://www.societyofrobots.com/member_tutorials/node/211
Oh, and my accent is Romanian.
WOW!
MiniEric is looking better and more advanced every time I check, now with the AVRcam and the VR chip. And I see you got the speech synthesizer working too, though it doesn’t sound that good yet. It should be possible to improve it, I think.
Anyway keep up the good work
Thanks!
Yeah, the speech code gave me some headaches, but I managed to make it work with a little help. It needs some improvements and I’ll see what I can do, but even if the speech isn’t perfectly intelligible, I know what he wants to say, so I can understand it. It’s like with small kids: the parents know what they’re saying, but outsiders always have trouble understanding them.
It will take a while until I use the AVRcam. I have an Arduino-compatible library for it and made some attempts to use it, but I had poor lighting, and that creates problems. In the apartment where I currently live there is plenty of light, day or night, so I hope I won’t have as many problems.
I’ve been asked on SoR to make a tutorial about the VR and speech, so I’ll do it over the next few weeks and link it here. Perhaps I’ll enter it in the 5th tutorial contest to win an Axon II, who knows…
Hehe
I guess we all feel that our bots are like our kids, and I imagine even more so with an advanced, and might I add cute, one like MiniEric. However, it should be possible to get some intelligible speech out of it. I recall playing with a speech synth as a kid on my Commodore 64, and it didn’t have many more resources available than an ATmega328.
On the other hand, image processing takes up quite a lot of resources. Some years ago I played around with some image processing with my webcam, and it was quite heavy even on a PC, so I’m not sure what you intend to do with it, given the limited resources in your current setup?
The AVRcam does the image processing on board, just like the CMUcam. You set up to 8 colors for it to track, then you just give it commands like “track color 3” and it will return the x,y coordinates (two sets, opposite corners) of the bounding box for each object of that color. Then it’s your turn to decide which box is the object of interest and what to do with it.
If only one box is returned, it’s easy enough to calculate the coordinates of its center of mass and give commands to move the head until the box (object) is centered in the image. Then you can read the servo angles and decide which way to turn the robot. The Ping sensor can measure the distance to the object once it’s centered in the image, so the robot knows how far it needs to travel.
Basically, that’s it: color blob tracking. Good enough for starters, and capable of line following too.
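The centering math described above can be sketched roughly like this. The 176x144 frame size, the gain, and the dead band are illustrative assumptions, not values from MiniEric’s actual code:

```cpp
#include <cstdlib>

const int IMG_W = 176;  // assumed AVRcam frame width in pixels
const int IMG_H = 144;  // assumed frame height

struct Box { int x1, y1, x2, y2; };  // opposite corners from "track color n"

// Center of the bounding box (the blob's approximate center of mass).
int boxCenterX(const Box& b) { return (b.x1 + b.x2) / 2; }
int boxCenterY(const Box& b) { return (b.y1 + b.y2) / 2; }

// Signed pixel error between the blob center and the image center.
int panError(const Box& b)  { return boxCenterX(b) - IMG_W / 2; }
int tiltError(const Box& b) { return boxCenterY(b) - IMG_H / 2; }

// One proportional correction step for a head servo, in degrees.
// GAIN and DEAD_BAND are made-up tuning values for illustration.
int servoStep(int errorPx) {
    const int DEAD_BAND = 4;   // pixels considered "centered"
    const float GAIN = 0.1f;   // degrees per pixel of error
    if (abs(errorPx) <= DEAD_BAND) return 0;
    return (int)(-GAIN * errorPx);  // nudge opposite to the error
}
```

Repeating `servoStep` on each frame until both errors fall inside the dead band centers the object; the servo angles then tell you which way the robot should turn.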
Speech is an awesome addition; I hope the improvements go well. I really like this little guy, he has some personality!
cool voice activation.
The voice activation control was cool, but in the video he doesn’t reply to the command “MiniEric, bow”. What was the problem? And was it already voice activated in the ‘fire test’ video?
Thanks for the comments guys!
Yes, MiniEric was already voice activated when I was doing the Fire Fighting tests. At the competition I could not use the voice commands because of too much background noise; there was a FIRST competition with live amplified sound effects and music in another part of the Great Hall. The voice command works the way you train it: some people trained it with the TV on in the background and it worked for them. I trained it in silence, so I have problems if there is noise around me.
About the VRbot chip: you can read an article in the latest issue of Robot Magazine. The author used it on his Robonova, as the chip was designed for that robot, but as you can see it works on any robot that has a serial port (I’m using software bit-bang on two digital pins, as that was the sample code I got directly from Tigal). The “bridge” mode did not work for me, so I had to use some wires to connect the chip to the Basic FTDI board I use for programming my boards. That way I could train it perfectly.
I’m not sure what you mean about the “MiniEric, bow” command, as I don’t have one. I was just undecided for a moment about which command to say, and by the time I said “move” it had already dropped back to the main “say trigger” mode. There are two indicators that the robot is listening for commands: a beep and a red LED. The LCD also prints “say trigger” or “say command” and then displays the command he understood; if he doesn’t understand, it jumps back to “say trigger”.
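That two-stage flow, trigger word first, then a command, falling back to the trigger stage when nothing is understood, can be sketched as a tiny state machine. The trigger word and the method names here are illustrative, not the VRbot’s actual API:

```cpp
#include <string>

// Two listening stages: waiting for the trigger word, then for a command.
enum class VrState { SAY_TRIGGER, SAY_COMMAND };

struct VrListener {
    VrState state = VrState::SAY_TRIGGER;

    // Feed one recognition result; returns the command to execute,
    // or "" if there is nothing actionable yet.
    std::string hear(const std::string& word, bool recognized) {
        if (state == VrState::SAY_TRIGGER) {
            if (recognized && word == "MiniEric")   // assumed trigger word
                state = VrState::SAY_COMMAND;       // beep + red LED would go here
            return "";
        }
        state = VrState::SAY_TRIGGER;               // one command per trigger
        return recognized ? word : "";              // misheard -> back to trigger
    }
};
```

This also explains the behavior in the video: hesitating after the trigger means the module times out or mishears, drops back to “say trigger”, and the next word is ignored as a command.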
The speech library is almost ready; I’m waiting for some input from the original developer, as I asked him whether any improvements could be made. As soon as it’s completely done, I’ll attach the code for my speech controller module, which includes monitoring the VR chip and speaking.
I think I just misheard the voice coming out of the speaker. Anyway, now you can make MiniEric bow, he he he.
PC Platform
This is wicked cool. As your features get more advanced, you might want to consider having it run on top of a PC platform like the BeagleBoard (http://beagleboard.org/). It’s a little bigger than the Arduino board but is a full 600 MHz computer with audio/video (i.e. text to speech). You could have the Arduino read the sensors and send the data back to the PC for heavier processing or user interaction. If you want to get really slick, the Gumstix Overo (http://gumstix.com/) is the same thing as the BeagleBoard but smaller than a stick of gum. Just an idea. Great work, though!
One of the more awesome robots here!
One of the more awesome robots here! Very impressive. I was bummed to hear you weren’t able to compete in the fire-fighting contest, especially when you have put so much time and work into MiniEric. In any case, your effort is not wasted; many here, I think, have noticed what a great robot this is. Next year you will certainly have MiniEric ready to fight fires!
Thanks for telling us about the VR chip. The ideas are already churning here… Now I just need the time!
Thanks for the idea, but I’m not a computer programmer, and I had a hard time trying to do something too advanced for my capabilities with my butler robot Eric. You can see that robot here: http://seriousrobotics.wordpress.com/eric-the-butler-robot/
I decided to build MiniEric to get better at microcontroller programming, so I can do all the low-level functions for the big robot and then move the navigation and A.I. to the PC. And of course, a smaller robot lets me take part in different competitions too. So, when I’m done with MiniEric, I’ll get back to rebuilding the big robot, since I had to give up some parts when I moved to Canada.
An important construction improvement was made in MiniEric: the ability to bend down to grab objects from the floor and then lift them up (I want Eric to be able to put them on a regular table or countertop). I am also considering making Eric balance like a Segway, but that is just wishful thinking at the moment. A preliminary experiment was done with this robot: http://seriousrobotics.wordpress.com/2008/12/22/balancing-roboduino/ but I could not make it drive around because of some limitations of the system. An accelerometer and a gyro are necessary to make a real balancer. I’ll experiment more later; perhaps I’ll make MiniEric balance.
MiniEric went through a major makeover the week before the contest: a height reduction, wheel replacement, a paint job, all new electronics boards, and I had to make the sprayer work. All the low-level functions had to be adjusted; nothing worked properly anymore. So I spent more time fixing and debugging low-level functions than working on the contest-specific ones, and it still doesn’t move the way I want it to. Now I’m taking a small break, just lurking on the forums and answering questions, and preparing an article about MiniEric for Circuit Cellar magazine (digital version) that is going to be published soon. I’m also waiting for a new color LCD so I can continue the mapping efforts.
I’m going to make a tutorial about the voice recognition and speech for SoR pretty soon, so there will be more in-depth information on the subject.
Thanks for your comments, guys! Keep them coming and give me more ideas for how to improve this robot.