The MAAHR Project

The goal is to design, build, and program a robot nearly as sophisticated as ASIMO ($1,000,000) or Willow Garage's PR2 ($400,000) from off-the-shelf components. The project is completely open source; everyone can participate.

Groups

  • Design: Ro-Bot-X, cr0c0, MarkusB
  • Mechanics: TinHead, JoeBTheKing, Jad-Berro, MarkusB
  • Electronics: MarkusB, Jad-Berro, Krumlink
  • Software: GroG, AgentBurn
  • Web presence: JoeBTheKing
  • You?

Inventory

Part                    Quantity  Purpose           Cost     Shipping  Link
Intel D525MW            1         Main Brain        $83.94   $7.22     http://www.amazon.com/Intel-Dual-Core-Mini-ITX-Motherboard-BOXD525MW/dp/B0041RSC94
M2-ATX PSU              1         Power for Brain   $72.50   $8.72     http://www.mini-box.com/M2-ATX-160w-Intelligent-Automotive-DC-DC-Power-Supply
Memory                  2 GB      Memory for Brain  $21.84   $0.00     http://www.amazon.com/gp/product/B001KB6Z2U
Drive                   1 TB                        $75.48   $0.00     http://www.amazon.com/gp/product/B001KB6Z2U
Kinect                                              $90.00   $0.00     Craigslist
Webcam (Logitech C600)                              $39.48             http://www.amazon.com/s/ref=nb_sb_ss_i_0_20?url=search-alias%3Delectronics&field-keywords=logitech+c600+webcam&x=0&y=0&sprefix=logitech+c600+webcam
USB WiFi
Total                                               $383.24  $15.94    $399.18 (parts + shipping)

Current Milestones

  • Assemble brains in a protective case
  • Install all software
  • Verify that the Battery and Power Supply Unit work
  • Attach Webcam
  • Attach Kinect - OpenCV Kinect update
  • Attach Bare Bones Board Arduino Clone
  • Design Base

Build Log (Most Current at Top)


OpenCV / Kinect Progress:

08-24-2011

I created a couple more filters and added some functionality to the OpenCV Service. I have also created a new Service called FSMTest, for "Finite State Machine Test". Not much of it is implemented yet, as I wanted to get the Sphinx and Google speech recognition Services into a useful state first.
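For anyone unfamiliar with the pattern, the skeleton of such a machine fits in a few lines of Java. This is only an illustration of the loop described in the "What happens" list below - every name here is made up, not the actual FSMTest API:

    // Minimal sketch of the find/resolve/report loop; illustrative names only.
    public class FsmSketch {

        enum State { FINDING_OBJECT, RESOLVING_OBJECT, REPORTING_OBJECT }

        private State state = State.FINDING_OBJECT;

        // One tick of the machine. "segmented" = an object was isolated,
        // "known" = its template matched something in associative memory.
        public State step(boolean segmented, boolean known) {
            switch (state) {
                case FINDING_OBJECT:
                    if (segmented) {
                        state = known ? State.REPORTING_OBJECT : State.RESOLVING_OBJECT;
                    }
                    break;
                case RESOLVING_OBJECT:
                    // query the user (Sphinx / Google speech) for a label,
                    // store the association, then report it
                    state = State.REPORTING_OBJECT;
                    break;
                case REPORTING_OBJECT:
                    // speak the association, then go back to looking
                    state = State.FINDING_OBJECT;
                    break;
            }
            return state;
        }
    }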

Now that the housekeeping is done, I have uploaded a quick video of the progress.

What happens :

  • The FSM starts up - the speech recognition services begin listening for commands (I have disabled them for now, because their long start-up time hinders debugging)
  • I will command it to look for an object
  • It will rapidly go through various filters and techniques for object segmentation ... "Finding the Object"
  • It will then go through its associative memory looking for a match - if it finds a match it will "Report the Object"
  • If it does not find a match, it will query me for associative information. This will be "Resolving the Object". I will use Sphinx speech recognition for this initially, but I am very interested in hooking into the Google speech recognizer Service I created recently.
  • This test shows a guitar case lying against the wall
  • Segmentation is done with the Kinect alone. The Kinect can range distances, and the depth slice from 4' to 4'3" is segmented off and turned into a polygon (see the sketch after this list).
  • The polygon's bounding area is found, then a template is created.
  • The matching-template filter is put into place, then the object is matched and locked.
  • The matching template will be associated with speech-recognized words and put into memory, which can be serialized.
  • When an object leaves the field of vision and later returns to it, the robot will "Resolve the Object" and its associations. It will be able to "Report the Object" by saying such brilliant things as "Why did you put a guitar case in front of me?" .. which in turn may lead to other associations?
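As a rough sketch of the depth-slice-to-template steps above, here is what they might look like with the org.opencv Java bindings. The real pipeline runs through MRL's javacv wrapper, so treat the calls and the depth band values below as assumptions rather than the shipping code:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import java.util.ArrayList;
    import java.util.List;

    public class DepthSegmentSketch {
        static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

        // depth: 8-bit single-channel image where pixel value ~ range;
        // near/far bracket the slice (here, the 4' .. 4'3" band).
        public static Mat cutTemplate(Mat video, Mat depth, double near, double far) {
            // keep only the pixels inside the depth band
            Mat band = new Mat();
            Core.inRange(depth, new Scalar(near), new Scalar(far), band);

            // turn the band into polygons (contours) and keep the largest
            List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
            Imgproc.findContours(band, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            MatOfPoint biggest = null;
            double maxArea = 0;
            for (MatOfPoint c : contours) {
                double a = Imgproc.contourArea(c);
                if (a > maxArea) { maxArea = a; biggest = c; }
            }
            if (biggest == null) return null;

            // the polygon's bounding area becomes the template
            Rect box = Imgproc.boundingRect(biggest);
            return new Mat(video, box);
        }
    }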

 



 

Remote Connectivity:

08-15-2011 
It is nice to have options.  A full X desktop would be nice to have with this project.
It would be necessary to start and use the X desktop from the command line. Here are the steps necessary to accomplish this.

  • By default, Fedora 15 will not have the sshd daemon running on boot - to change this, use the following commands:
    # systemctl start sshd.service  # - will start the sshd daemon
    # systemctl enable sshd.service  # - will autostart the daemon on boot
  • x11vnc is a VNC server - a nice description of the various flavors of VNC server is here:
    https://help.ubuntu.com/VNC/Servers
  • VNC needs a number of ports opened on the firewall - 5800, 5900, and 6000 for display 0. For each display N, a new set of ports 580N, 590N, and 600N would need to be opened (e.g. display 1 uses 5801, 5901, and 6001).
  • The command line to start the vnc server on the MAAHR brain is 
    sudo x11vnc -safer -ncache 10 -once -display :0
  • This starts the VNC server so a login is available - that is why root needs to start it. You must make sure that display :0 is not being used by a current login.
  • TADA! Fedora 15 running remotely on maahrBot
  • I will be communicating with MRL directly, so I needed to open up UDP port 6161 on the firewall too.
  • The tower has grown a bit - it now includes the SLA battery & a power supply. The idea is to keep this system on with 100% uptime, so it can still be online while it is charging. I don't know when Markus's base will be here, so I'm working on a temporary base with some old H-bridges I built a long time ago.

    It's simple, cheap, allows extensibility with more decks, and should be modular enough to "bolt on" to a variety of platforms.
  • It's "nice" to have a desktop to work on but MRL can run headless and the video feeds can be sent to other MRLs running on different computers.
  • I downloaded MRL-00013, unzipped it, and ran the following command line for headless operation, remote connectivity, and OpenCV:
    $ java -classpath ":myrobotlab.jar:./lib/*" -Djava.library.path=./bin org.myrobotlab.service.Invoker -service OpenCV opencv RemoteAdapter remote
    This starts the OpenCV service and the RemoteAdapter service.
    On my desktop I started just a GUI service instance and connected to the maahrBot.
  • After refreshing the local MRL services, the remote services on the maahrBot appeared as green tabs.
  • The control is the same as if the service were running locally on my desktop - it worked well with a high frame rate.
  • A few more little updates and I should be able to switch from webcam to Kinect on the maahrBot.

 


Brain Arrives:

07-13-2011

The simplest protective brain case I could think of is a series of wood decks supported by 1/4" bolts.
I had some scrap 1/4" panelling for the decks. Wood is nice because you can screw in the components anywhere and it's non-conductive.

Deck 1 - Mini-ITX Computer

Deck 2 - Drive and Power Supply Unit

Deck 3 - Tagged Cover.

 

Software Install

  • Connected the brain to a regular ATX power supply - this is important for the install; for example, you can brick the board if power goes out during a BIOS update
  • Jacked in the Ethernet (I don't like wireless during installs)
  • Update BIOS (Link)
  • Installed Fedora 15 from Live USB - Use Basic Video Mode (Problems with GNOME3)
  • Installed RPMFusion and Livna Repos
  • Disabled SELinux
  • Rebooted
  • yum -y groupinstall 'Development Tools' 'Java Development' 'Administration Tools' Eclipse 'Sound and Video' Graphics
  • _go do something - this will take a while_
  • # uname -a returns (have not rebuilt the kernel yet)
    Linux maahr.myrobotlab.org 2.6.38.8-32.fc15.i686 #1 SMP Mon Jun 13 20:01:50 UTC 2011 i686 i686 i386
  • Installed Chrome
  • Started Eclipse - was pleasantly surprised to see it had loaded with SVN support
  • Browsed to SVN URL - http://myrobotlab.googlecode.com/svn/trunk/   right-click "check out" - finish - ready to Rock 'n Roll
  • yum -y groupinstall Robotics
  • yum -y install cmake cmake-gui
  • Downloaded the new version of OpenCV, 2.3 (they are much more productive now)

Todo

Make the OpenCV 2.3 build work - get the latest javacv

 


Hi LMR!

 

This is the MAAHR project. MAAHR stands for Most Advanced Amateur Humanoid Robot. The goal is to design, build, and program a robot of similar sophistication to ASIMO. The project is completely open source; everyone can participate. I'll sponsor MAAHR with 15,000 USD for hardware, software, and logistics.

MAAHR's specifications so far:

  • 2DOF head with 2 cameras and LCD mouth
  • 2DOF main body
  • 2 x 5DOF robot arms
  • 2 x robot hands, each capable of lifting at least 1 kg
  • 4 wheeled or tracked base
  • Moving objects
  • Posture and gesture recognition
  • Speech recognition (and distinguishing sounds)
  • Synthesized voice
  • Recognizes the objects and terrain of its environment
  • Facial recognition
  • Internet connectivity

A first SketchUp draft I did (arms still missing):

If you participate and want to update the MAAHR robot page, please PM me and I'll send you the password.

At the moment there are 5 working groups:

 


 

Update April 24, 2011 by MarkusB

First hand design draft added (see zip hand-1):

 


Update 2011.07.02

Markus has sent me funds so that I could purchase the "Brain", based on details gathered in this thread - https://www.robotshop.com/letsmakerobots/node/27362#comment-67606 . The brains are speeding to my house and should arrive in the next week.

I have purchased a Kinect, whose drivers I got working, and will incorporate it into the Swiss Army Knife robot software - "MyRobotLab" - as another Service.


I wave at you. Yay!

Markus will be working on the base. Once the brain meets the base, a little programming is done, and some sensors are attached, we should be able to complete the first round of milestones.

These would include:

  • Basic local obstacle avoidance
  • Basic mapping (encoders & Kinect) - precursor to SLAM
  • Telepresence - Teleoperation - Control and data acquisition through the Intertoobs

 

 


Update 2011.07.07

I do believe we have some BRAINS doctor....
The first part of the brain has arrived.
MRL has template matching! I just coded the part which allows you to select the template to match. You just click on the corners of the video feed where you want it to match - and BAH-BING... it matches. In this case I selected a wall socket and it matched the (same) wall socket in ~1/10 of a second. The template is shown as the small picture below the video feed. The number in the video feed represents the matching quality - 0 being perfect.
I have to admit ... selecting the template is now a breeze..... very enjoyable :)

 

Lots more to do... OpenCV template matching does not work well with scaling or rotation. Rotation will not be much of an issue, as wall sockets typically don't spin on the wall. Scaling will be, but it can be remedied by collecting multiple templates at known distances from the bot. Scale can then be part of the associated memory of the "concept" of what a wall socket is...
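For reference, the core of the matching step looks roughly like this with the org.opencv Java bindings (MRL actually goes through javacv, and the file names here are placeholders):

    import org.opencv.core.*;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    public class MatchSketch {
        static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

        public static void main(String[] args) {
            Mat frame = Imgcodecs.imread("frame.png");      // video frame
            Mat template = Imgcodecs.imread("socket.png");  // selected template

            Mat scores = new Mat();
            Imgproc.matchTemplate(frame, template, scores, Imgproc.TM_SQDIFF_NORMED);

            // with a squared-difference score the minimum is the best match,
            // and 0 is a perfect match - the quality number described above
            Core.MinMaxLocResult best = Core.minMaxLoc(scores);
            System.out.println("quality=" + best.minVal + " at " + best.minLoc);
        }
    }

Handling scale the way described above would just mean running this once per stored template and keeping the best score.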

I'll have video when I get it developed a bit more....


Update 2011.07.08  

Good news and bad news - the PSU arrived (Yay!) / missing cable! (Yarg!)

I need a 2 x 2 male-to-male processor core voltage cable - the other connector is OK by Intel specs (20-to-24-pin socket), but I have to have that other 2 x 2 jacked in...
I can't seem to find the cable online either - I saw a few 4 x 1 to 2 x 2 cables...
Intel, why do you make this so hard?

 

  • M2-ATX PSU Module Manual - Here
  • Intel D525MW Dual Core Manuals - Here

 


Update 2011.07.08 20.35 PST

Alright, an idea has been incubating in my head for a while, and I can see how it might work. I've been very interested in the power of "community internet teaching" for robotics. Teaching robots can be an arduous and time-consuming job, much like editing video frame by frame. There was a project called "The White Glove Tracking Project" which used a distributed work force on the internet to do time-consuming object isolation for an art project (previous - reference).

I am working on dynamic Haar learning and tracking in addition to Kinect object isolation. Once it segments or isolates an object, it will need language input to associate the newly found object with more data. The interaction of people and the robot would be very beneficial. I was thinking: what if it used the ShoutBox, Twitter, or some other web form to alert others that it had found a new object? Then someone could log into it, examine what it had found, and associate the correct word with it, e.g. "Table", "Light", "Chair" or "Ball". The word would be associated with the image data (Haar data) and saved. This would be similar to the Kinect video, with the exception that instead of 1 person teaching the robot, there could be hundreds or thousands.

The sending of the alert, the logging in remotely, and the saving of data are already in MRL. I just need to bolt all the pieces together.
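A hypothetical shape for that last piece - associating a crowd-supplied label with stored template data and serializing the whole map - might look like this in plain Java (every name here is invented for illustration):

    import java.io.*;
    import java.util.HashMap;

    // Illustrative only: label -> template pixels, persisted to disk.
    public class AssociativeMemory {
        private final HashMap<String, byte[]> labelToTemplate =
                new HashMap<String, byte[]>();

        public void associate(String label, byte[] templatePixels) {
            labelToTemplate.put(label, templatePixels);
        }

        public byte[] recall(String label) {
            return labelToTemplate.get(label);
        }

        public void save(File file) throws IOException {
            ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file));
            try {
                out.writeObject(labelToTemplate);
            } finally {
                out.close();
            }
        }
    }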

 

What should it do?

  • Actuators / output devices: Not decided
  • Control method: Wifi - Internet, Phone, RF Joystick, Other?
  • CPU: Main - Mini-ITX Dual Core 1.8 GHz
  • Operating system: Linux - Fedora 15
  • Power source: M2-ATX 6-24V Intelligent Automotive ATX Power Supply
  • Programming language: Java, MyRobotLab, OpenCV, Sphinx 4, TTS
  • Sensors / input devices: IR, ultrasonic, Kinect, Others?
  • Target environment: indoors

This is a companion discussion topic for the original entry at https://community.robotshop.com/robots/show/the-maahr-project

This is the base you use?

This is the base you use, isn't it? Do you mind posting it when you have a more complete project done?


Looks similar, but it isn’t.

Looks similar, but it isn't. It is much bigger. And this is a special case, not a common robot project :)

You may want to use cable

You may want to use cable ties instead of fishing line for the fingers, have a look at the Robonova hands.

I can offer help in the

I can offer help in the Mechanics and Electronics departments if you want.

Interesting project !

I like the hands.

A few days ago I was playing with different hand designs and I did not come up with anything really, but I think your designs would work great.

Just brainstorming. If you

Just brainstorming. If you have a different design approach for the hand, please show it :)

Thanks for joining,

Thanks for joining, Jad-Berro, I'll add you :)

Are these really cable ties,

Are these really cable ties? I already wondered what material this is.

Yes, they are yellow thin

Yes, they are thin yellow cable ties. The nice part about them is you can do push-pull with a servo; there is no need for a spring to keep the fingers open, so you have more available force for gripping.

Good idea. I am building a

Good idea. I am building a prototype soon.

All my designs were bad for a hand.

But they may work well for a crawling robot one day. =P

About hands

To be able to grab various objects I think that you need an opposable finger. A servo with a wheel gives a very small contraction - a linear actuator would be better (and stronger).

Using an opposable finger you could put in two linear actuators (one for the big finger and one for the other fingers). You could also consider two opposable fingers and three "base" fingers - somewhat like a claw.

Also, pressure sensors in the fingers would be nice.

The pictured hands are nice and make a good impression, but I am afraid they will only grab a small range of lightweight objects.

Later edit: an air muscle for each finger may be better.

Just a thought

Since you have a tracked system I wonder if a Theo Jansen style leg system would be a touch more humanoid. And super cool. Is that just silly?

Cajun crawler style ? More

Cajun Crawler style? More humanoid - I don't think so, but definitely more interesting and fun to watch.

I said, "Cajun wha?"

And a google later, I was going "Oooooooh wee that a spicy robot!"

http://www.wired.com/gadgetlab/2009/03/the-cajun-crawl/

It does have an arthropod-ish feel to it. Maybe it is even less humanoid, but in the completely opposite direction.

Still trying to figure out if I have anything worthwhile to contribute…

I think even more mutant

I think it is even more mutant with Jansen legs :) I prefer tracks or mecanum wheels to steer the robot accurately.

Hand gestures

I think we should go for a five-finger hand. During my 5-finger robot hand prototype build it occurred to me that a human hand can be used for more than grabbing things. It can be used for data exchange, and we can adapt this to the robot as well. Here are only some of many hand gestures…

Using image recognition we can even communicate with the robot by hand gestures.

You do realize that for this

You do realize that for this the hand needs at least 2 DOF for each finger and thumb? Just say’n…

Nope. Most of the gesture

Nope. Most of the gestures can be realized with one servo per finger - finger stretched, finger curved, finger touching the heel of the hand. I hope I can finish the prototype this coming weekend :) But I am open to other hand designs.