Robbie the Robot

This is Robbie the Robot

http://escaliente-robotics.blogspot.com.au/

The project is a couple of years in the making. Robbie is a two-wheel differential-drive robot with an InMoov upper body. The servos in the arms have been replaced with gear motors controlled over I2C, and the head has two Dynamixel servos for pan and tilt.

The attached video shows the first driving test using a PS3 joystick. The second shows a test of the arm moving with the gear motors instead of the servos. The next test will be with the arm controlled through ROS MoveIt.

 

Update

Here is a video of Robbie's arm controlled with ROS MoveIt; the arm is being moved to random locations.

Update 25 Dec 2014

All Robbie wanted for Christmas was a coat of paint and some new parts. I just have to finish some wiring, then he is fully operational. If I have some spare time I want to finish the web interface. After hours of navigation simulation he is ready to start autonomous operation.

 

Update 16/01/15

Autonomous robot

This is what we get from Wikipedia:

A fully autonomous robot can:

  • Gain information about the environment (Rule #1)

  • Work for an extended period without human intervention (Rule #2)

  • Move either all or part of itself throughout its operating environment without human assistance (Rule #3)

  • Avoid situations that are harmful to people, property, or itself unless those are part of its design specifications (Rule #4)

An autonomous robot may also learn or gain new knowledge like adjusting for new methods of accomplishing its tasks or adapting to changing surroundings.

I have been asked the question: how autonomous is Robbie, and do you let him move on his own?

In principle he has all the systems, and he has demonstrated that they work on their own, and sometimes all together. The fact is that for the last two years he has been tethered to the battery charger and partially disassembled. Stage 1 is now complete: we have a working robot. What we don't have is trust in him, or reliability. Stage 2 of this build is to address those problems. Trust will come with reliability, but autonomy needs more. Below is a list of some of the tasks the robot should do.

Self-maintenance

  • Charge the battery; this part works, using a behaviour tree (a minimal sketch appears after this list)

  • Monitor the systems, as part of the above

Sensing the environment

  • Is anyone near me? Face recognition works but needs to be improved

  • Where am I? Localisation will give a map reference, but we need a name, i.e. the lounge room

  • Day and night: shut down nodes that won't be used at night

  • Short and long term memory

Task performance

  • Go to a place. Did I achieve my goal?

  • Get something. Did I achieve my goal?

  • Locate something. Did I achieve my goal?

Indoor navigation

  • Localisation

  • Update the known world: what has changed?

We also need to log activity, successes and failures, to measure performance. In the lab he can go through a door without touching it, but in real life? The same goes for localisation.
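For the battery-charging behaviour mentioned above, a behaviour tree boils down to a few composable node types. Below is a minimal sketch of the idea; the node classes, voltage threshold, and stub functions are my illustrations, not Robbie's actual code.

```python
# Minimal behaviour-tree sketch for a charge-battery task.
# All names, thresholds, and helpers are illustrative stand-ins.

class Selector(object):
    """Ticks children in order; succeeds on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children

    def tick(self):
        return any(child.tick() for child in self.children)

class Sequence(object):
    """Ticks children in order; fails on the first child that fails."""
    def __init__(self, *children):
        self.children = children

    def tick(self):
        return all(child.tick() for child in self.children)

class Condition(object):
    """Wraps a plain function returning True/False."""
    def __init__(self, fn):
        self.fn = fn

    def tick(self):
        return self.fn()

# Stubs standing in for the real battery monitor and navigation calls.
def read_voltage():
    return 12.1            # real code would read the battery monitor

def battery_ok():
    return read_voltage() > 11.5

def drive_to_dock():
    return True            # real code would send a move_base goal to the charger

def charge_until_full():
    return True            # real code would wait on the charger status

stay_charged = Selector(
    Condition(battery_ok),                  # healthy battery: nothing to do
    Sequence(Condition(drive_to_dock),      # otherwise dock and charge
             Condition(charge_until_full)),
)

# The robot's main loop ticks the tree at a fixed rate:
# while True: stay_charged.tick()
```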

 

Update 05/07/15

 

It's been a while since the last update. Other than the changes to the drive-base covers, all the work has been to improve reliability. The covers are an effort to keep out dust (and stray objects) and improve cooling; they also help give a more finished look.

On the autonomous robot project, I thought it would be over quickly, but it looks like being a very long project. The basics are solid and simple behaviour works well: I can leave power on to all systems and leave the robot unattended, and the kids can move and interact with Robbie using voice control without fear of him crashing into walls or running out of power.

The next challenge is system health monitoring. At the moment I only monitor battery power; I need to monitor the software as well, looking for stalled or crashed nodes. If move_base stalls in the middle of a drive, Robbie will just keep driving. Most of the software crashes turned out to be the result of the computer starting to fail (it has now failed totally).
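As a sketch of the kind of software monitor I mean, the node below watches move_base's action status topic and commands a stop if it goes quiet mid-drive (which is what happens when the node crashes or hangs). Topic names follow the standard navigation stack and may need adjusting.

```python
#!/usr/bin/env python
# Sketch of a software watchdog: if move_base stops publishing status,
# command zero velocity so the base doesn't keep driving blind.

import rospy
from geometry_msgs.msg import Twist
from actionlib_msgs.msg import GoalStatusArray

class Watchdog(object):
    def __init__(self):
        self.last_seen = rospy.Time.now()
        self.cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        # move_base publishes its action status continuously while alive
        rospy.Subscriber('move_base/status', GoalStatusArray, self.heartbeat)
        rospy.Timer(rospy.Duration(0.5), self.check)

    def heartbeat(self, msg):
        self.last_seen = rospy.Time.now()

    def check(self, event):
        # No status for 2 seconds: assume move_base died and stop the base.
        if rospy.Time.now() - self.last_seen > rospy.Duration(2.0):
            self.cmd_pub.publish(Twist())   # all-zero twist = stop

if __name__ == '__main__':
    rospy.init_node('node_watchdog')
    Watchdog()
    rospy.spin()
```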

Arm navigation with ROS MoveIt is improving. Tuning the various parameters is very important: joint speed makes a big difference to performance, and I suspect inertia is also taken into account. The biggest problem I had was missing commands: joint goals were sent to the controllers but never arrived, which turned out to be an early sign of the computer failure. Robbie won't have his new computer for a couple of weeks, so I can use the time to finish some of the smaller items on the to-do list.

 

What's next?

The Get_Beer project works in simulation; in real life the grasping needs work.

The point-and-look project: pick a point of interest, and Robbie will drive to the object, point his head at it, and move the end effector to touch it. The Kinect in his head will be used for the final positioning and recognition. Navigating to the point is working; the look-and-point part is untested.

 

Update 15/09/15


Robbie's computer is still broken so I was able to catch up on some tasks I never had time for.

The potentiometers were never very accurate, so I have designed magnetic encoders as a replacement. They are more accurate and just plug into the existing structure; they will be fitted on the next rebuild.

 

The overall control was very hard to maintain and expand. Natural language processing is not a great fit for robot control: some verbs are tagged as nouns and thus won't be interpreted as commands. In NLTK you can define which words are verbs or nouns, but maintaining the files is troublesome. I tried pattern_en, but it suffers from the same limitations. I also tried WIT, an online language processor, but the learning curve is too steep and I wanted a local solution. Robbie's chat engine, on the other hand, works well.
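The verb-as-noun problem is easy to reproduce. In this illustration (mine, not Robbie's parser), an off-the-shelf tagger will often label an imperative like "point" as a noun, so a parser keyed on verb tags misses the command:

```python
import nltk
# Requires the tokenizer and tagger data packages, e.g.:
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

for command in ["drive to the kitchen", "point at the red cup"]:
    tokens = nltk.word_tokenize(command)
    print(nltk.pos_tag(tokens))

# Imperatives such as "point" often come back tagged NN (noun) rather
# than VB (verb), so a parser that looks for verb tags never sees them.
```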

 

I never really looked into PyAIML's capabilities, but it can run system programs with command-line args. For testing I reduced the loaded AIML files to two: one for commands, the other for general responses.

Of course that just puts me back to where I was before, but with a lot more potential. PyAIML will throw an error message for an unknown command, so I made it append the command to an AIML file; I only have to add the meaning later. I could automate that, but for now I want control over it. This sort of gives me a learning robot.
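In outline, the command loop looks something like the sketch below. File names are placeholders; the hook is that PyAIML's respond() returns an empty string when no category matches, which is where the unknown command gets appended for later annotation.

```python
import aiml   # PyAIML

kernel = aiml.Kernel()
kernel.learn("commands.aiml")   # command patterns (file names assumed)
kernel.learn("general.aiml")    # general chat responses

def log_unknown(text):
    """Append the unmatched input as a stub category; the template
    (the meaning) gets filled in by hand later."""
    stub = (
        '<category>\n'
        '  <pattern>%s</pattern>\n'
        '  <template>UNDEFINED</template>\n'
        '</category>\n' % text.upper()
    )
    with open("unknown_commands.aiml", "a") as f:
        f.write(stub)

def respond(text):
    reply = kernel.respond(text)
    if not reply:               # no category matched this input
        log_unknown(text)
        reply = "I don't know that one yet."
    return reply
```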

One of the intriguing possibilities is to query KnowRob ontologies.

For now I can add the name of a person from the face recognition node.

The next task is to make a semantic map and name the objects, so that when asked his location Robbie will answer “near the desk in the garage”, not x,y,z.

 

All of Robbie's physical tasks are now controlled through behaviour trees programmed with action servers; any task can be pre-empted and resumed if there is a fault or error condition. The behaviour tree also monitors and controls Robbie's emotions: tasks give pleasure, while doing nothing results in boredom. When boredom reaches a certain level, Robbie will do a random task, anything from uttering a quip generated with a Markov chain, to moving his head or arms, to driving around in circles.
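The pre-emption mechanism is standard actionlib: the execute callback polls is_preempt_requested() every cycle, so the behaviour tree can interrupt a task cleanly. A sketch, with a hypothetical action type and stubbed-out task logic:

```python
import rospy
import actionlib
from robbie_msgs.msg import TaskAction, TaskResult   # hypothetical action type

class TaskServer(object):
    def __init__(self, name):
        self.server = actionlib.SimpleActionServer(
            name, TaskAction, execute_cb=self.execute, auto_start=False)
        self.server.start()

    def execute(self, goal):
        rate = rospy.Rate(10)
        while not self.task_done():
            if self.server.is_preempt_requested():
                self.stop_motion()             # leave the hardware safe
                self.server.set_preempted()    # the tree can resume it later
                return
            self.step()                        # one increment of the task
            rate.sleep()
        self.server.set_succeeded(TaskResult())

    # Stubs standing in for the real task logic.
    def task_done(self):
        return True

    def stop_motion(self):
        pass

    def step(self):
        pass

if __name__ == '__main__':
    rospy.init_node('task_server')
    TaskServer('do_task')
    rospy.spin()
```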

Using simulators and visualisers like RViz and Gazebo has made these tasks much easier.

 

Update 19/12/15

 

Robbie is now fully functional again after his computer problems. The reason the arms missed commands was the controllers resetting; after I supplied power to the USB hubs, everything worked as required.

To increase the accuracy of the arm I started replacing the potentiometers with magnetic encoders. Fitting the new encoders required a few modifications to the gearbox: I incorporated a bearing in the top of the gearbox and a mount for the magnet in the drive gear, plus a few extra tweaks to increase the strength of the assembly. Not all of the modifications will be fitted at the same time; some will wait until the next major rebuild.

 

MoveIt Update

Robbie's MoveIt configuration is working again. Accuracy is 15 cm, which is not very good, but the magnetic encoders and a better calibration will help. Obstacle avoidance suffers because planning only just misses the obstacles. Robbie now has a point-at node: he will point to a published target pose.
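A point-at node can be quite small with moveit_commander. Here is a sketch under assumed names; the move group 'right_arm' and the topic 'point_at/target_pose' are placeholders, not necessarily Robbie's configuration:

```python
#!/usr/bin/env python
# Sketch of a point-at node: move the end effector toward any pose
# published on a target topic. Group and topic names are assumptions.

import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

def on_target(pose):
    arm.set_pose_target(pose)    # accepts a PoseStamped
    arm.go(wait=True)            # plan and execute
    arm.stop()
    arm.clear_pose_targets()

if __name__ == '__main__':
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node('point_at')
    arm = moveit_commander.MoveGroupCommander('right_arm')
    rospy.Subscriber('point_at/target_pose', PoseStamped, on_target)
    rospy.spin()
```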

 

Face recognition

We are now running the COB face recognition package. This works well in daylight, but the garage is too dark and Robbie makes a few errors; I need to add more lights. The AI will say "Hello" when he first recognises a face, then after 5 minutes he will just say "Hi". The name of the recognised face is returned to the chat bot, so he knows who he is talking to.
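The hello-then-hi behaviour is just a timestamp per recognised name. A minimal sketch (the names and the threshold are mine, not the actual node):

```python
import time

last_greeted = {}      # name -> time of the last greeting
REGREET_AFTER = 300.0  # five minutes, in seconds

def greeting_for(name):
    """Full 'Hello' on first sight; a short 'Hi' once five minutes
    have passed; silence in between so the robot doesn't nag."""
    now = time.time()
    last = last_greeted.get(name)
    if last is None:
        last_greeted[name] = now
        return "Hello %s" % name
    if now - last >= REGREET_AFTER:
        last_greeted[name] = now
        return "Hi %s" % name
    return None
```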

 

Object recognition

It will recognise a pre-programmed object but won't learn a new object, because ECTO requires direct access to the Kinect driver: Freenect uses a different driver, and OpenNI will not work under Indigo.

For 2D recognition, SIFT and SURF are not included in the OpenCV package, so it's very flaky.

 

Navigation

Increasing the global costmap's inflation radius will make Robbie plan further away from obstacles.
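With the layered costmaps in recent ROS releases this can even be tweaked at runtime through dynamic_reconfigure. A sketch, assuming the standard move_base costmap layout (the parameter path may differ on other setups):

```python
# One-off tweak of the global costmap's inflation radius.
import rospy
from dynamic_reconfigure.client import Client

rospy.init_node('tune_inflation')

# Path assumes the default layered global costmap under move_base.
client = Client('/move_base/global_costmap/inflation_layer', timeout=5.0)
client.update_configuration({'inflation_radius': 0.8})  # metres; example value
```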

 

Autonomous operation

The shutdown command will not work when Robbie is started using robot_upstart. Also, the depth-registered points from the top Kinect will not always work unless something subscribes to them straight away; the lower Kinect, which feeds the point-cloud-to-laser-scan conversion, gives no trouble. I will start face recognition on start-up and see if it remains stable. We haven't had any jitters or strange events since we started using the powered hubs for the Arduinos. The current high temperatures are causing a few resets; I need a bigger fan and more vents in the CPU bay.

 

Robbie's Emotion system

It has been turned off for the moment, since he spent most of the time bored and kept quoting Markov-chain mash-ups of Sun Tzu. It needs a lot more configuration and thought, but it's fun for a while.
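For reference, a quip generator like this is nothing more exotic than a first-order Markov chain over a source text. A toy version (the file name is illustrative):

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def quip(chain, length=12):
    """Random-walk the chain to produce a (usually odd) sentence."""
    word = random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# chain = build_chain(open("art_of_war.txt").read())  # source text of choice
# print(quip(chain))
```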

 

As the design of Robbie matures I'm starting to add covers to hide the wires and keep dust off the electronics, but this has introduced a few extra problems:

  1. Heat build-up: more fans need to be included in the design.

  2. Stripped threads: printed PLA and MDF won't hold a thread for very long, so I will add M3 threaded inserts and M4 rivnuts to the structure.

 

 

 

 

 

 

Navigation, fetch objects, recognise people

  • Control method: ROS
  • CPU: Arduino, Intel i3
  • Operating system: ROS, Ubuntu 12.04
  • Power source: 12 V 20 Ah gel cell
  • Programming language: Python
  • Sensors / input devices: Kinect
  • Target environment: indoors

This is a companion discussion topic for the original entry at https://community.robotshop.com/robots/show/robbie-the-robot

Wow!

Just fantastic work! What are the future plans for Robbie?

future plans for Robbie

For the next couple of months I want to bring together all his capabilities so they can all run together.

For example, face recognition and face tracking through the AI: he will say hello, or ask for your name if you are unknown, ask some questions, and add you to the database. Navigation and arm control will work together to bring you what you request. At the moment they all work, but not together.

Thank you for showing us more information, Peter.

I am interested in your tracking and your AI. If you have time, please let us know how you did that, step by step.

It would be wonderful for everybody.

Thank you

ambroise


Awesome!

Is this the same Robbie that had a single webcam for an eye years ago?

Robbie started out as a box

Robbie started out as a box with one webcam. Here is a link to an earlier photo:

http://escaliente-robotics.blogspot.com.au/2011_08_01_archive.html

Simply Amazing

What an amazing robot you have going there.  It’s very impressive.  Thanks for showing us your progress. 

Haha, now he looks a bit

Haha, now he looks a bit like C3PO :slight_smile: Merry Christmas Robbie :smiley:

Nice Project

Peter - I'd just like to congratulate you on your build. When you say autonomous, do you mean you let him drive around on his own? Is he safe?

autonomous ops

He has limited freedom. In a large area I just watch from a distance, but going through doorways I'm very close, just in case something goes wrong. The arms can get hooked on corners; I will add a pan waist joint soon so the arms have more protection. Other than that, I only have to build trust, and that comes with testing.

 

Peter

**Sweet build, but… **

It looks like the unholy lovechild of C3PO and R2D2 :wink: ! Or perhaps their last shared ancestor.

Very impressive

I’ve been admiring this project for a long time.  Very nice job.

I was curious as to what ways you are using the Kinect.

I hope I can find the time to build something like this one day.

Kinects

The lower Kinect is used for navigation with the ROS navigation stack; it simulates a laser scanner. If I find some time I want to use rtabmap (http://introlab.github.io/rtabmap/); it looks like it can handle a home environment better (all the clutter and moving furniture).

I had a head-mounted Kinect for face recognition and tracking, but that really slowed the system. When I have the time I'll use a webcam for face recognition and tracking and put the Kinect in the chest for object recognition. I used the microphones in the Kinect with HARK to do voice localization and help with voice recognition; it worked but needs more tuning.

With a robot of this size you really need a team of people; too many small jobs get left undone due to lack of time.

The Amazing Screw-On
The Amazing Screw-On Head!

2 Kinects?

It looks like you have 2 Kinects onboard.  Are you running both at the same time?  Do the dot patterns they project interfere with each other?

Impressive bot as always. I bet you can really frighten small kids with that. Tell them…"There really is a monster in the closet that likes to come out when it's dark."

With the layout of your bot, it would be really easy to add a sonar array around the base just above the wheels, it might give you improved situational awareness about where obstacles are, especially to the sides and rear.  I would be tempted to add some thermal array sensors too if I had a bot that big.

Great work.  Thanks for posting.

the Kids love Robbie

The 2 Kinects don't appear to interfere with each other; the lower Kinect is for navigation and localisation only. I haven't found the need for sonars: the obstacle map from the Kinect keeps the bot away from any hazards, and localisation is accurate enough. Still, the main reason is budget, or he would have a lot more sensors.


The young kids really love Robbie. I can set up face recognition to say hello every time he sees them, so they play a game of peek-a-boo; or I set the voice recognition to continuous, and the older kid can chat with him while the younger (6) just asks the same question. It's fun to watch.

 

 

What is the robot’s computer?

Dear: Peter

What computer are you using? Laptop, desktop, or Intel NUC? I am just curious because I am having trouble finding which computer you used for the robot, since you said you are using ROS and Ubuntu. Thank you.

From: Noah

Robbie’s Computer

Hi Noah

Robbie's computer was an Intel i3 on a small ATX motherboard (a standard desktop) powered by an M4-ATX DC converter board. It takes battery voltage from 10 V to 36 V and connects to the standard ATX sockets on the motherboard. For a battery I use a 20 Ah gel cell; this lasts for over 1 hour with normal driving and arm usage (when new).

The new one will be an i7 micro-ATX motherboard with maximum RAM; I like the extra USB ports on an ATX board.

 

 

Peter