Chinese Bot

This is the "Chinese Bot".  It's called that because 95% of the parts were sourced from China (via eBay). It uses a Tamiya dual-motor gearbox, a cheap home-made Arduino clone, and a 3-axis accelerometer.

The Arduino runs a neural network, which controls the output speed & direction of the two motors.

The inputs to the network are:

  1. Left motor speed
  2. Right motor speed
  3. Left 45-degree distance measurement
  4. 0-degree distance measurement
  5. Right 45-degree distance measurement

The accelerometer detects collisions with objects (e.g. a wall).  When one occurs, the robot stops and trains the neural network on the inputs recorded just before the collision.  After some time (the Arduino is slow, and training usually takes about 20 seconds to complete), the robot starts back up and continues.

Update: I fired up the bot for the first time in over a year and captured some video of the learning phase.  When I get some more time, I'll let it run for a while and capture additional video of the fully trained network.  As you can tell from the pauses during training, it can be a time-consuming process.

Learns to navigate via Neural Network

  • Actuators / output devices: 1:120 Tamiya Dual Motor Gearbox
  • Control method: autonomous
  • CPU: Atmel ATmega328
  • Power source: 6 AA NiMH
  • Programming language: C
  • Sensors / input devices: 3-axis accelerometer, HC-SR04 ultrasonic sensor
  • Target environment: indoor

This is a companion discussion topic for the original entry at https://community.robotshop.com/robots/show/chinese-bot

That’s a very nice robot. Could you provide more details about the training of the robot, and a video?

Very interesting…95% Chinese. Where do the other 5% come from, and are you sure those 5% don’t also come from China in some way? Haha, just kidding…

Interesting to hear that somebody finally goes with a neural network. I’ve been interested in that since I first played “Creatures” and learned about its virtual neurons. That program was used in an F16 flight simulator and the creatures learned to fly that thing without prior input…impressive.

As OddBot already mentioned, more info please. I am the second one who wants to know more details about your approach.

Lumi, it’s all there: https://www.robotshop.com/letsmakerobots/node/35185

It just learns logic gates instead of flying an F16 :smiley:

Yes, I remember, Markus. However, the Creatures software in the 90’s had a very sophisticated neural network with 1000 nodes. Not big, but combined with the creatures’ 300 genes, a very interesting system.

The original article is gone but thanks to the wayback machine we still can read it: http://web.archive.org/web/19990117012705/http://www.newscientist.com/ns/980509/features.html

Another article here: http://www.stanford.edu/class/sts145/Library/richard.pdf

More details coming…

Thanks for the interest.  I actually made this robot over a year ago as an experiment in using neural networks as a robot’s control system, after watching some videos on YouTube. It’s also an extension of some work on control systems I did for virtual embodied agents. I haven’t played with this bot in over a year; I’ll see if I can locate the code and fire it up again to make a video.  The way the neural network works is as follows (as I recall):

Initially, the network (a 3-layer feed-forward network) is randomly connected (except for the two motor output neurons, which are initialized to full speed forward).  An array holds three distance measurements that are constantly updated as the sonar is swept from left to right.  These measurements, as well as the current motor output speeds, are fed into the neural network, and the output is fed directly to the motors.
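
That initialization could be sketched as below. This is only one interpretation: the weight range [-1, 1] and the use of large positive output-layer weights to saturate the motor outputs toward full speed forward are assumptions, not details from the original code.

```c
#include <stdlib.h>

/* Assumed sizes, matching the described 5 inputs and 2 motor outputs. */
#define N_IN  5
#define N_HID 4   /* hidden-layer size is a guess */
#define N_OUT 2

static float rand_weight(void)   /* uniform in [-1, 1] (assumed range) */
{
    return 2.0f * ((float)rand() / (float)RAND_MAX) - 1.0f;
}

void init_network(float w_ih[N_HID][N_IN], float w_ho[N_OUT][N_HID])
{
    /* Random input-to-hidden connections. */
    for (int h = 0; h < N_HID; h++)
        for (int i = 0; i < N_IN; i++)
            w_ih[h][i] = rand_weight();

    /* Bias both motor output neurons toward full speed forward:
       large positive weights drive a sigmoid output close to 1.0.
       The constant 5.0 is illustrative. */
    for (int o = 0; o < N_OUT; o++)
        for (int h = 0; h < N_HID; h++)
            w_ho[o][h] = 5.0f;
}
```

Starting with the motors saturated forward guarantees the robot drives off and collects collisions to learn from, instead of sitting still with a random policy.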

When the robot hits an obstacle, all movement is stopped and the robot backs up about a foot.  The robot then tries multiple strategies to avoid the obstacle (e.g. bank left, bank right, etc.).  When one succeeds, neural network training begins and, using backprop, the network is updated with the solution.  This is the time-consuming part, as it can sometimes take over 20 seconds for the network to converge… that was one of the reasons I gave up on this robot, as the ATmega seemed just way too underpowered to perform backprop… even for a small neural network.  I always thought about updating the code to use fixed-point math (instead of floating point), to see if it would speed things up.
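
The fixed-point idea is plausible on an ATmega328, which has no hardware floating point. A minimal sketch of what that might look like, assuming a Q8.8 format (16-bit values with 8 fractional bits; the format choice is mine, not from the original code):

```c
#include <stdint.h>

/* Q8.8 fixed point: 16-bit signed value, 8 fractional bits.
   Products go through a 32-bit intermediate to avoid overflow. */
typedef int16_t q8_8;

#define Q_FRAC 8
#define Q_ONE  (1 << Q_FRAC)   /* 1.0 in Q8.8 == 256 */

q8_8  q_from_float(float f) { return (q8_8)(f * Q_ONE); }
float q_to_float(q8_8 q)    { return (float)q / Q_ONE; }

q8_8 q_mul(q8_8 a, q8_8 b)
{
    return (q8_8)(((int32_t)a * (int32_t)b) >> Q_FRAC);
}
```

On an 8-bit AVR, a 16x16-bit integer multiply like this is typically much cheaper than a software-emulated float multiply, at the cost of limited range and precision, which is why it tends to help exactly the kind of small backprop loop described above.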

Again, I’ll try and see if I can find the code and make a video.

Is the sonar working? It doesn’t look like it’s working.

Nice robot, though :slight_smile:

Sonar is working. In the video, the robot is currently in learning mode.  It hasn’t learned how to apply the sonar data to its motors yet.  I’ll try to post another video once it’s fully trained.