Robik - a general-purpose robot for education and used-sock housekeeping

It all started when an iRobot Roomba was adopted by our family to do its drudge work. My son and I were discussing the mechanics of the Roomba and decided to create our own robot to have fun with. Its name is Robik.

The main highlights of Robik are:


  • built on ROS (ros.org) since the very beginning
  • made from cheap components and used parts
  • the ROS navigation stack works but still requires tuning
  • laser-guided docking is in progress; the robot should eventually dock and charge autonomously
  • the 5-DoF arm can be moved manually or driven by an action server to perform pre-programmed tasks
  • an egg-grasping test will be performed shortly
  • the MoveIt! configuration and topics are ready, but full integration will require much more work
  • a Kinect provides RGBD vision; its depth image is converted to a LaserScan used by navigation (see the sketch after this list)
  • a modular web UI allows easy access and robot control from various devices
  • a Jetson TK1 serves as the high-level controller, able to exploit OpenCV's CUDA processing power
  • a 5000 mAh battery and an onboard charger
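
On the Kinect-to-LaserScan bullet: in practice the stock `depthimage_to_laserscan` ROS package does this conversion, but a minimal sketch of the idea looks roughly like the node below. The topic names and camera intrinsics are assumptions, not Robik's actual configuration.

```python
#!/usr/bin/env python
# Minimal sketch: turn the middle row of a Kinect depth image into a
# sensor_msgs/LaserScan, roughly what depthimage_to_laserscan does for real.
# Topic names and intrinsics are assumptions, not Robik's actual config.
import math
import numpy as np
import rospy
from sensor_msgs.msg import Image, LaserScan

FX = 570.0  # assumed Kinect focal length in pixels

def depth_cb(msg):
    # Assume 16UC1 depth in millimetres, as the Kinect driver publishes.
    depth = np.frombuffer(msg.data, dtype=np.uint16).reshape(msg.height, msg.width)
    row = depth[msg.height // 2, :].astype(np.float32) / 1000.0  # metres

    scan = LaserScan()
    scan.header = msg.header
    cx = msg.width / 2.0
    # Angle signs depend on the frame convention; this keeps the pinhole form.
    scan.angle_min = math.atan2(0 - cx, FX)
    scan.angle_max = math.atan2(msg.width - 1 - cx, FX)
    scan.angle_increment = (scan.angle_max - scan.angle_min) / (msg.width - 1)
    scan.range_min, scan.range_max = 0.45, 5.0
    # Convert each column's z-depth to range along the ray through that pixel.
    scan.ranges = [z * math.sqrt(1.0 + ((i - cx) / FX) ** 2) if z > 0 else float('inf')
                   for i, z in enumerate(row)]
    pub.publish(scan)

rospy.init_node('depth_row_to_scan')
pub = rospy.Publisher('/scan', LaserScan, queue_size=1)
rospy.Subscriber('/camera/depth/image_raw', Image, depth_cb, queue_size=1)
rospy.spin()
```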


The automated parking procedure consists of three phases (a rough sketch follows the list):


  1. Navigate to a pose close to the base
  2. Drive backwards to the base, guided by the laser
  3. Finalize parking by making contact with the charging outlet
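
A minimal sketch of how those three phases could be sequenced with `actionlib` and `move_base`. The pre-dock pose, the `lateral_error()` laser-alignment helper and the `/charging` contact topic are hypothetical placeholders, not Robik's actual code.

```python
#!/usr/bin/env python
# Sketch: sequencing the three docking phases. Poses, topics and the
# laser-alignment error below are hypothetical placeholders.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from geometry_msgs.msg import Twist
from std_msgs.msg import Bool

charging = False

def contact_cb(msg):
    global charging
    charging = msg.data  # assumed topic: True once the charger contacts close

def lateral_error():
    return 0.0  # placeholder; would be computed from the laser scan

def dock():
    # Phase 1: navigate close to the base with move_base.
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.2        # hypothetical pre-dock pose
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    client.wait_for_result()

    # Phases 2+3: creep backwards, steering on a laser-derived lateral error,
    # until the charging contacts report power.
    cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown() and not charging:
        twist = Twist()
        twist.linear.x = -0.05                    # slow reverse
        twist.angular.z = 0.5 * lateral_error()   # keep aligned with the base
        cmd_pub.publish(twist)
        rate.sleep()
    cmd_pub.publish(Twist())                      # stop: we are docked

if __name__ == '__main__':
    rospy.init_node('dock_sketch')
    rospy.Subscriber('/charging', Bool, contact_cb)
    dock()
```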


More photos and information can be found at https://plus.google.com/118382910860727881435

Remote-controlled driving and arm manipulation, running ROS, with RGBD via Kinect


This is a companion discussion topic for the original entry at https://community.robotshop.com/robots/show/robik-general-robot-for-education-and-used-socks-housekeeping

I’m going to make a home bot too :)

I also often leave socks on the floor. How do you recognize socks with OpenCV?


http://www.digikey.com/en/articles/techzone/2011/jul/the-five-senses-of-sensors-smell


Very cool! So you run ROS directly on a Tegra? Did you need to port that? Do you charge a single battery while the system is still drawing current from it? What chip/board are you using for that? 


Yes, ROS is used. It saved a lot of development effort and enabled us to get where we are now. ARM hard-float (armhf) binaries are needed; I first compiled from source, but today I am using the repository provided by Namniart.

Regarding charging, there is a switching circuit that uses relays and diodes to power the robot either from the battery or from a wall outlet via a 15 VDC adapter. It can switch without power interruption, so I do not need to shut the robot down for charging. I am working on docking, as you can see in one of the videos. The ultimate goal is what the Roomba does: work until the battery is low, dock automatically, recharge, and get back to work :slight_smile: Let me know if you want more details on the switching circuit.
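
For the software side of that cycle, here is a hedged sketch of the work/dock/recharge loop; the `/battery` topic, the thresholds and the `dock()`/`resume_work()` stubs are assumptions for illustration, not Robik's actual code.

```python
#!/usr/bin/env python
# Sketch of the Roomba-style cycle: work until the battery is low, dock,
# wait until charged, go back to work. The /battery topic, the thresholds
# and the dock()/resume_work() stubs are assumptions.
import rospy
from sensor_msgs.msg import BatteryState

LOW, FULL = 0.20, 0.95   # assumed state-of-charge thresholds (0.0..1.0)

def dock():
    rospy.loginfo('battery low, docking...')  # placeholder for the docking sequence

def resume_work():
    rospy.loginfo('charged, back to work')    # placeholder for the task scheduler

class ChargeManager(object):
    def __init__(self):
        self.soc = 1.0
        rospy.Subscriber('/battery', BatteryState, self.battery_cb)

    def battery_cb(self, msg):
        self.soc = msg.percentage             # BatteryState reports 0.0..1.0

    def spin(self):
        rate = rospy.Rate(1)
        while not rospy.is_shutdown():
            if self.soc < LOW:
                dock()
                while self.soc < FULL and not rospy.is_shutdown():
                    rate.sleep()              # sit on the charger
                resume_work()
            rate.sleep()

if __name__ == '__main__':
    rospy.init_node('charge_manager_sketch')
    ChargeManager().spin()
```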

The e-nose sensor is interesting; I did not know about that. I will simply assume that all socks lying around are used. That’s reality anyway :slight_smile: I am hoping that object recognition will work for socks. I have started to play with MoveIt! recently, and recognition will come as part of it.

I was not successful in finding anyone selling an e-nose; am I wrong?

Cheers


Actually, I have not started with object recognition yet, but I am betting on OpenCV. I’ll follow the configuration patterns from MoveIt! first to see where that gets me.

I am using an M$ Kinect and hence have RGBD data available. The vast majority of feature extractors work on RGB only; I feel RGBD can provide better results.

I’m looking forward to seeing your robot. We have a common goal :slight_smile:

I’m reverse engineering the available demos

So far OpenCV works best in a uniformly colored environment, and many dirty tricks can be used to reach the same goal.

For example, socks can be searched for as a whitish blob of color, but if the walls are white too, it must be specified that the whitish blob is surrounded by floor, which is kind of brown. A trick for driving to the sock is to draw the blob regions in a specific color like 255,255,255 and then steer left if there are more exactly-white pixels on the left.
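
A rough OpenCV sketch of that trick, assuming HSV thresholds for "whitish sock" and "brownish floor" that would need tuning for a real room; the camera index is also a guess.

```python
#!/usr/bin/env python
# Sketch of the blob trick above: mark whitish blobs that sit near brownish
# floor in pure white (255,255,255), then steer toward the heavier side.
# All thresholds are guesses and would need tuning for a real room.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                     # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Whitish: low saturation, high value. Brownish floor: orange-ish hue.
    white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
    floor = cv2.inRange(hsv, (5, 50, 40), (25, 255, 200))
    # Keep only white blobs whose dilated neighbourhood touches floor,
    # so white walls (no floor around them) are rejected.
    near_floor = cv2.dilate(floor, np.ones((25, 25), np.uint8))
    sock_mask = cv2.bitwise_and(white, near_floor)

    # Paint candidate pixels exactly 255,255,255 and count them per side.
    marked = frame.copy()
    marked[sock_mask > 0] = (255, 255, 255)
    half = sock_mask.shape[1] // 2
    left = cv2.countNonZero(sock_mask[:, :half])
    right = cv2.countNonZero(sock_mask[:, half:])
    steer = 'left' if left > right else 'right'   # would feed cmd_vel
    cv2.putText(marked, steer, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1, (0, 0, 255), 2)
    cv2.imshow('socks', marked)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```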

Then there is object recognition with feature points, which is ideal when dealing with rigid objects, but on Android it crashes all the time because it’s heavy; it is still manageable with small greyscale images or still frames.

Edge detection using Canny and Sobel can help avoid objects, but it’s easily fooled by textured objects or irregular lighting.
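
For reference, the usual Canny pipeline in OpenCV; the fixed thresholds are exactly the part that texture and irregular lighting tend to break, as noted above. The input filename is a stand-in.

```python
import cv2

img = cv2.imread('frame.png')                 # hypothetical captured frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)      # smoothing tames texture a little
edges = cv2.Canny(gray, 50, 150)              # thresholds need per-scene tuning
cv2.imwrite('edges.png', edges)
```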

Most examples and apps use canned character recognition or face recognition, which aren’t useful unless you put labels or photos everywhere.



Thanks for the tips. I’m also thinking of using the ground plane to determine whether an object is on the floor and to better estimate the object’s size. It will be a challenge to identify socks of various colors; it will be fine if the robot picks up other stuff of similar size, not just socks. My wife will do further filtering before washing :slight_smile: I will have to do lots of reading to explore what is actually possible.
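
A hedged sketch of that ground-plane idea: back-project the Kinect depth image into points, fit the dominant plane with RANSAC, and keep points a few centimetres above it as sock-sized candidates. The intrinsics and all thresholds are assumptions.

```python
#!/usr/bin/env python
# Sketch: fit the floor plane to Kinect depth points with RANSAC, then flag
# points sitting a few centimetres above it as "objects on the floor".
# Camera intrinsics and all thresholds are assumptions.
import numpy as np

FX = FY = 570.0          # assumed Kinect focal lengths (pixels)
CX, CY = 320.0, 240.0    # assumed principal point for 640x480

def depth_to_points(depth_m):
    """Back-project an HxW depth image (metres) into an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.ravel()
    good = z > 0
    x = (u.ravel() - CX) * z / FX
    y = (v.ravel() - CY) * z / FY
    return np.column_stack((x, y, z))[good]

def ransac_plane(pts, iters=100, tol=0.02):
    """Return (normal, d) of the dominant plane n.p + d = 0."""
    best_inliers, best = 0, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        p = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate sample, try again
        n /= norm
        d = -n.dot(p[0])
        inliers = np.sum(np.abs(pts @ n + d) < tol)
        if inliers > best_inliers:
            best_inliers, best = inliers, (n, d)
    return best

def objects_on_floor(pts, plane, lo=0.02, hi=0.15):
    """Points between 2 and 15 cm above the floor: sock-sized candidates."""
    n, d = plane
    height = np.abs(pts @ n + d)
    return pts[(height > lo) & (height < hi)]
```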

Multiple colors

When you get to deal with more colors, it may actually be easier! Socks with many different stripes or heavy textures respond better to edge detection. Another warning point: always check that the socks aren’t being worn by anyone or lying in unreachable places :)