This is the "Chinese Bot". It's called that because 95% of the parts were sourced from China (via eBay). It uses a Tamiya dual-motor gearbox and a cheap home-made Arduino clone, and has a 3-axis accelerometer.
The Arduino runs a neural network, which controls the output speed & direction of the two motors.
The inputs to the network are:
Left Motor speed
Right Motor speed
Left 45 degree distance measurement
0 degree distance measurement
Right 45 degree distance measurement
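To make that concrete, the five inputs could be packed into a normalized array before each pass through the network. Here's a rough sketch of what that might look like; the names, the 0–1 scaling, and the range cap are my own illustration, not the actual code:

```cpp
// Hypothetical packing of the five network inputs into a 0..1 array.
// leftSpeed/rightSpeed are the current PWM values (0..255); the three
// distances come from the sweeping sonar, capped at an assumed max range.
const float MAX_RANGE_CM = 150.0;   // assumed sonar cap, tune to the sensor

void packInputs(int leftSpeed, int rightSpeed,
                float distLeft45, float distCenter, float distRight45,
                float in[5]) {
  in[0] = leftSpeed  / 255.0;
  in[1] = rightSpeed / 255.0;
  in[2] = min(distLeft45,  MAX_RANGE_CM) / MAX_RANGE_CM;
  in[3] = min(distCenter,  MAX_RANGE_CM) / MAX_RANGE_CM;
  in[4] = min(distRight45, MAX_RANGE_CM) / MAX_RANGE_CM;
}
```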
The accelerometer detects if there was a collision with an object (e.g. a wall). If so, it stops the robot and trains the neural network on the inputs captured just prior to the collision. After some time (the Arduino is slow, and it usually takes about 20 seconds for the training to complete), the robot starts back up and continues.
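A rough sketch of what that collision check could look like on the Arduino is below. The pin assignments, scaling, and threshold here are placeholders rather than the values actually used on the bot:

```cpp
// Hypothetical sketch: detect a collision as a sudden spike on a 3-axis
// analog accelerometer, then stop and train the network.
const int ACCEL_X_PIN = A0;          // assumed wiring
const int ACCEL_Y_PIN = A1;
const int ACCEL_Z_PIN = A2;
const int COLLISION_THRESHOLD = 120; // ADC counts of change; tune empirically

int lastX, lastY, lastZ;

bool collisionDetected() {
  int x = analogRead(ACCEL_X_PIN);
  int y = analogRead(ACCEL_Y_PIN);
  int z = analogRead(ACCEL_Z_PIN);
  // A large jump on any axis between readings is treated as an impact.
  bool hit = abs(x - lastX) > COLLISION_THRESHOLD ||
             abs(y - lastY) > COLLISION_THRESHOLD ||
             abs(z - lastZ) > COLLISION_THRESHOLD;
  lastX = x; lastY = y; lastZ = z;
  return hit;
}

void setup() {
  lastX = analogRead(ACCEL_X_PIN);
  lastY = analogRead(ACCEL_Y_PIN);
  lastZ = analogRead(ACCEL_Z_PIN);
}

void loop() {
  if (collisionDetected()) {
    // stop the motors, back up, and run the training phase here
  }
  delay(20);
}
```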
Update: I fired up the bot for the first time in over a year and captured some video of the learning phase. When I get some more time, I'll let it run for a long time and capture additional video of the fully trained network. As you can tell from the pauses during training, it can be a long & time-consuming process.
Learns to navigate via Neural Network
Actuators / output devices: 1:120 Tamiya Dual Motor Gearbox
Very interesting… 95% Chinese. Where do the other 5% come from, and are you sure those 5% don't also come from China in some way? Haha, just kidding…
Interesting to hear somebody finally going with a neural network. I've been interested in that since I first played "Creatures" and learned about their virtual neurons. That program was used in an F-16 flight simulator and the creatures learned to fly the thing without prior input… impressive.
As OddBot already mentioned, more info please. I am the second one who wants to know more details about your approach.
Yes I remember Markus. However, the Creatures software in the '90s had a very sophisticated neural network with 1000 nodes. Not big, but combined with the creatures' 300 genes, a very interesting system.
Thanks for the interest. I actually made this robot over a year ago as an experiment into using neural networks as a robot’s control system after watching some videos on YouTube. It’s also an extension of some work on control systems I did for virtual embodied agents. I haven’t played with this bot in over a year, I’ll see if I can locate the code and fire it up again to make a video. The way the neural network works is as follows (as I recall):
Initially, the network (a 3-layer feed-forward network) is randomly connected (except for the two motor output neurons, which are initialized to full speed forward). An array holds three distance measurements that are constantly updated as the sonar is swept from left to right. These measurements, as well as the current motor output speeds, are fed into the neural network, and the output is fed directly to the motors.
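For anyone curious, a forward pass for a network this small might look something like the following on the Arduino. The layer sizes, the sigmoid activation, and all the names are my assumptions for illustration, not the original code:

```cpp
#include <math.h>

// Assumed topology: 5 inputs (2 motor speeds + 3 sonar distances),
// a small hidden layer, and 2 outputs driving the motors.
const int N_IN = 5, N_HID = 6, N_OUT = 2;

float wHid[N_HID][N_IN + 1];   // +1 for a bias weight per neuron
float wOut[N_OUT][N_HID + 1];
float hidden[N_HID];
float outputs[N_OUT];

float sigmoid(float x) { return 1.0 / (1.0 + exp(-x)); }

// Feed the normalized inputs (0..1) through the network.
void feedForward(const float in[N_IN]) {
  for (int h = 0; h < N_HID; h++) {
    float sum = wHid[h][N_IN];                       // bias
    for (int i = 0; i < N_IN; i++) sum += wHid[h][i] * in[i];
    hidden[h] = sigmoid(sum);
  }
  for (int o = 0; o < N_OUT; o++) {
    float sum = wOut[o][N_HID];                      // bias
    for (int h = 0; h < N_HID; h++) sum += wOut[o][h] * hidden[h];
    outputs[o] = sigmoid(sum);                       // 0..1, mapped to PWM
  }
}
```

The two outputs would then be scaled to PWM values for the left and right motors, with the output weights initially biased so both motors start at full speed forward.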
When the robot hits an obstacle, all movement is stopped and the robot backs up about a foot. The robot then tries multiple strategies to avoid the obstacle (e.g. bank left, bank right, etc.). When one succeeds, the neural network training begins and, using backprop, the network is updated with the solution. This is the time-consuming part, as sometimes it can take over 20 seconds for the network to converge… that was one of the reasons I gave up on this robot, as it seemed the ATmega was just way underpowered to perform backprop, even for a small neural network. I always thought about updating the code to use fixed-point math (instead of floating point) to see if it would speed things up.
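The backprop step itself is the standard one-hidden-layer update. Here's a rough sketch, reusing the arrays and feedForward() from the sketch above; the learning rate is made up and the target would be the motor outputs of whichever avoidance strategy succeeded:

```cpp
const float LEARN_RATE = 0.3;   // assumed value

// One backprop step toward a target motor output pair; returns squared error.
float trainStep(const float in[N_IN], const float target[N_OUT]) {
  feedForward(in);

  // Output-layer deltas (sigmoid derivative is out * (1 - out)).
  float dOut[N_OUT];
  float err = 0;
  for (int o = 0; o < N_OUT; o++) {
    float e = target[o] - outputs[o];
    dOut[o] = e * outputs[o] * (1.0 - outputs[o]);
    err += e * e;
  }

  // Hidden-layer deltas, backpropagated through the (pre-update) output weights.
  float dHid[N_HID];
  for (int h = 0; h < N_HID; h++) {
    float sum = 0;
    for (int o = 0; o < N_OUT; o++) sum += dOut[o] * wOut[o][h];
    dHid[h] = sum * hidden[h] * (1.0 - hidden[h]);
  }

  // Weight updates.
  for (int o = 0; o < N_OUT; o++) {
    for (int h = 0; h < N_HID; h++) wOut[o][h] += LEARN_RATE * dOut[o] * hidden[h];
    wOut[o][N_HID] += LEARN_RATE * dOut[o];                 // bias
  }
  for (int h = 0; h < N_HID; h++) {
    for (int i = 0; i < N_IN; i++) wHid[h][i] += LEARN_RATE * dHid[h] * in[i];
    wHid[h][N_IN] += LEARN_RATE * dHid[h];                  // bias
  }
  return err;
}
```

Repeating trainStep() until the error drops below some threshold is the convergence loop that eats up those 20+ seconds on the ATmega, since every iteration is full of floating-point multiplies.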
Again, I’ll try and see if I can find the code and make a video.
Sonar is working. In the video, the robot is currently in learning mode. It hasn't learned how to apply the sonar data to its motors yet. I'll try and post another video once it's fully trained.