AVA - Advanced Verbal Assistant

This is my Ava. She is my 3rd bot and the product of the last 3 years of work. Ava combines advanced software, hardware, 3D sensors, machine learning models, and 22 degrees of freedom of movement to deliver what I believe is my smartest, most capable, and hopefully best-looking bot yet.

This bot uses visual SLAM to map and localize itself. I plan on using this bot to demonstrate what is possible with state-of-the-art natural language and vision processing, using a combination of a custom neural-net-based architecture and advanced "transformer"-based machine learning models.


This is a companion discussion topic for the original entry at https://community.robotshop.com/robots/show/ava-v2
@mtriplett Well done! Seeing your mechanical design skills advance in leaps and bounds too! You’re at the point where you might consider buying and using this:


You’ll get smooth surfaces etc.

I understand the rationale behind the “ears”, but honestly think the robot would look better without them. Based on much of the physical design, I’d envision the face to be “machine-esque” rather than “playful” (sort of like Johnny 5).

Once again, well done and keep it up!

Usain Bolt pose is great :smiley:

Thanks cbenson. I appreciate and welcome your feedback.

I can totally understand your aesthetic thoughts on the ears and the “machine” vs. “playful” or “animal-like” look. I had similar concerns and impressions early on (even in the CAD stage) and questioned my choices on many occasions. I have also seen how people, from kids as young as 2 up to adults, can either be genuinely endeared to a bot and want to be its “friend”, or be frightened by it and run or even cry. A bot that is 3 ft tall and moving even slightly towards someone can be very threatening. The animated face, ears, and autonomous movements of the ears attempt to reduce that fear impulse and draw people in to a more normal social distance for conversation. Once a person has “backed away in fear”, a bot attempting to have a social conversation with that person would seem desperate and pathetic…uncanny valley.

Once the ears went on and started moving, I found that they kept growing on me more and more. Also of note, women have most often cited the ears as the biggest factor that they found endearing. Over time, I find myself questioning my choice less and less.

Aesthetics aside, it must not be forgotten that this bot is optimized for “spatial awareness”; aesthetics are secondary. The ears perform a vital function…determining where “warm bodies” (using the thermal cams) and “empty space” (using the sonars) are at a high level (3 ft) around the bot…without having to turn and look in that direction. To me this is very important. I didn’t want the bot to struggle to figure out where other creatures (people and pets) are around it. Later on, I may put other sensors in the ears…we shall see.
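
To make the idea concrete, here is a minimal sketch of how readings from ear-mounted thermal cams and sonars might be fused into a coarse “what surrounds me” map. The sector layout, thresholds, and function names are my own assumptions for illustration, not Ava’s actual code.

```python
# Hypothetical fusion of per-sector thermal and sonar readings around the bot.
WARM_THRESHOLD_C = 28.0    # assumed surface-temperature cutoff for people/pets
OPEN_THRESHOLD_CM = 150.0  # assumed distance beyond which a sector counts as open

def classify_sectors(thermal_max_c, sonar_cm):
    """Label each bearing sector around the bot.

    thermal_max_c: max temperature seen per sector (deg C)
    sonar_cm: nearest sonar echo per sector (cm)
    Returns 'warm-body', 'obstacle', or 'empty-space' for each sector.
    """
    labels = []
    for temp, dist in zip(thermal_max_c, sonar_cm):
        if temp >= WARM_THRESHOLD_C and dist < OPEN_THRESHOLD_CM:
            labels.append("warm-body")    # likely a person or pet nearby
        elif dist < OPEN_THRESHOLD_CM:
            labels.append("obstacle")     # something close, but not warm
        else:
            labels.append("empty-space")  # clear space to move or gesture toward
    return labels
```

With a map like this, the bot can decide where people are, and where it is safe to move, without first turning its head toward each direction.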

On the CAD recommendation…thanks. I use the free version of Sketchup and have been happy with it.

Talk to you later.

Regards,
Martin

Thanks! She does a few good Yoga poses too. I wish I had taken more pics with the skin on.

I can see the fourth iteration changing aesthetics to become more “cartoon”/“animal”-esque. If you enclose the arms as well, with rounded curves etc., I can see the robot being very approachable. Any chance of using two large circular screens (or partially covered rectangular ones) instead of one rectangular one for the eyes?

Definitely yes on the circular screens. I also considered using two small rectangular ones for the eyes and having the rest of the face area in plastic with a few sensors. The upside could be lower power consumption and one less 12V regulator.

The downside would be the possible loss of the animated mouth; the general-purpose usefulness of the rectangular screen when booting into Windows, getting started, etc.; and the loss of the amplified speaker in the screen for music and voice. The extra wiring for new eyes, an amp, and a speaker could get busy and complicated. I do have room in the head, though, if I decide to go there. I may do it in the future. Right now I have all I can handle trying to get what I have to be reliable. I am replacing the USB hub and some other things to hopefully resolve that soon.

Mouths are overrated :wink: Some time invested in the eyes (and some simple eye lids for blinking, perhaps changing the pupil size too, as well as their shape to simulate eyebrows) could make it the most loveable and emotive creature around. The rectangular screen can go into the chest, or the back, or even plugged in then removed.

Good points all.

As you bring up, eyelids, blinking, pupils, and movement of the eyes are all important. You can’t tell from static pics, but I animate them all, and the way I render them actually makes them look 3D…like the eyeballs are curved. The eyelids blink, close over time if bored/sleepy, open up more when awake, and open even more if fearful. You can’t see this in a single pic. I don’t have a light sensor controlling pupil dilation yet (like my other bots did), but I will in time.
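
The bored/sleepy-vs-fearful eyelid behavior described above can be sketched as a simple mapping from emotional state to lid openness. The state names, constants, and scaling here are my own illustrative assumptions, not the actual animation code.

```python
def eyelid_openness(arousal, boredom):
    """Map emotional state to eyelid openness in [0, 1].

    arousal: 0.0 (sleepy) .. 1.0 (fearful/startled)
    boredom: accumulates over time; 1.0 = fully bored
    """
    base = 0.3 + 0.7 * arousal       # awake eyes sit mid-range; fear opens them wide
    droop = 0.5 * min(boredom, 1.0)  # lids gradually close as boredom builds
    return max(0.0, min(1.0, base - droop))
```

A renderer would sample this every frame (plus a periodic blink overlay) so the lids drift smoothly rather than snapping between fixed poses.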

On the mouth thing…you kinda need to see it in action before you can fully assess whether having a mouth is overrated. For most bots I would agree that it is. As an example where I think a mouth helps, the mouth on Anna and my old Ava added engagement. The new Ava is a big leap over that because of the viseme/phoneme syncing. I know it looks simple in the pics, and that is on purpose. When you see it in real time, in person, you get a more engaging impression, as I animate the mouth continuously so it is making tiny fluid movements from moment to moment based on emotional and other state. When words are being spoken, I also synchronize the shape of the mouth with “visemes” that are derived from the “phonemes” for the individual syllables in each spoken word. This means the shapes make sense given what is being said. While far from perfect, the continuous subtle movement and the viseme syncing create a whole different level of perception/engagement (in my opinion) than a fixed mouth or no mouth would. I went with graphically simple to avoid distracting with weird things like teeth or a tongue, yet it is computationally complex, taking sound and emotional expression into account.

I have seen other people do graphical mouths, but to me, if they are just using a small set of fixed positions and are not animating the mouth fluidly from moment to moment in subtle ways, even when the bot is not talking, then the end effect is not engaging. If I was doing something simple like that, I would agree with you and say a mouth is overrated.
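
For readers unfamiliar with the phoneme-to-viseme step, here is a minimal sketch of the general technique. The grouping below is a common simplification used in lip-sync work (many phonemes share one mouth shape), not Ava’s actual table, and the phoneme symbols are the ARPAbet-style codes used by most speech engines.

```python
# Hypothetical phoneme-to-viseme table: many phonemes collapse to one mouth shape.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "AH": "open",          # open-jaw vowels
    "B": "closed", "M": "closed", "P": "closed",        # lips pressed together
    "F": "teeth-on-lip", "V": "teeth-on-lip",           # lower lip under teeth
    "OW": "round", "UW": "round", "W": "round",         # rounded lips
    "IY": "wide", "EY": "wide",                         # wide/smiling shape
}

def visemes_for(phonemes):
    """Convert a phoneme sequence into viseme targets for the mouth animation.

    Unmapped phonemes fall back to a neutral shape; a real animator would
    also time and blend these targets against the audio.
    """
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]
```

The fluid motion Martin describes would come from blending between these targets frame to frame, layered on top of the idle emotional movement, rather than snapping to each shape.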

I have often considered going more old-school industrial “machine-like”, as I think you might be suggesting with your earlier comments, with no face, no ears, no moving mouth, and just some LEDs or an OLED screen for eyes. I could see that being a very cool look…a bot like that would fit into “Star Wars”, “Johnny Five”, “Wall-E”, and most other movies. I would not be surprised if I end up there.

Videos! :smiley:

YES PLEASE! We ABSOLUTELY need videos, with sound so we can hear her speak. :slight_smile:

Hi Martin,

This is brilliant work! Having followed your previous work, I already knew that you are a master in AI…but what about these mechanics? Wow, awesome!

I loved the mechanisms you created, especially using linear actuators!

Even the LEGO shock absorbers added a special touch!

Right now, I’m analyzing every detail in every picture.

BTW, I think AVA and MDi #4 would get along well together!

Again, this is brilliant!

Dickel

That’s amazing work! Very impressive, but also sad that we live in a world where such bots have a (huge?) market. :cry:

Hi Dickel!

It is really awesome to see you here. Thank you so much for your feedback. I welcome any thoughts you have, positive or negative. You have definitely helped inspire me to move beyond my boxy designs of the past.

I’ll definitely post more images and vids as things come together. I have a lot more old pics too.

For me, the breakthrough was the core movement. Initially I wanted it for balance, but I later realized it was key to making the arms functional…without the tilt, the hands never get to the ground. With the tilt, they do, with plenty of room to spare to adjust the hand around to complete a grab (I haven’t programmed this yet). The lean is also critical for putting the head cams in view of anything that would be grabbed.
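
The tilt-to-reach idea can be illustrated with a very rough planar model: pitching the torso forward lowers the shoulder pivot, which brings a straight-down arm closer to the floor. The dimensions and the single-pivot simplification below are invented for illustration; Ava’s real geometry (waist pivot location, arm joints) would differ.

```python
import math

def lowest_hand_height_cm(shoulder_height_cm, arm_length_cm, tilt_deg):
    """Rough planar model of torso tilt vs. ground reach.

    Treats the shoulder as riding on a lever that pivots at the floor-level
    base, so tilting by tilt_deg lowers it by cos(tilt). Returns the lowest
    point the hand can reach; a negative value means the hand passes floor
    level, i.e. it can touch the ground with room to spare.
    """
    shoulder = shoulder_height_cm * math.cos(math.radians(tilt_deg))
    return shoulder - arm_length_cm
```

For example, with an assumed 60 cm shoulder height and 45 cm arm, the hand falls 15 cm short of the floor upright, but reaches past it once the torso tilts around 45 degrees, which matches the qualitative point above.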

I still have some plastic covers to put on the inside of the tracks and over the hole on the front of the abdomen. They are longer than my printer can handle, so I am waiting until I am sure. Everything else is basically there. A lot of the pics don’t have all the outer skin pieces on. They snap on/off for easy access.

Yes…this bot and MDi#4 would be quite a team. I loved the whole MDi series. MDi#2-3 have special places in my heart. Aesthetically they are close to a lot of the things I was striving for with my bot. One of the big challenges for me was how to get something worthy of “Dickel” looks while combining “Martin” sensors and processing. I guess that’s what I tried to do. Maybe one day we can create a Martin Dickel Robotics and build “MDi#5”…or Dickel Martin…no matter to me.

Hope to see you here again,

Martin

That would be awesome! BTW, my son is called Martin Dickel.