I am currently developing a new robot - Neurono. The robot will be controlled by a custom-built neural computer. Besides some sensor/actuator peripherals, the computer consists of two artificial neurons that I recently invented and described in my blog here. Each neuron represents one brain hemisphere and controls one motor. The schematic of one neuron looks as follows and is built entirely from logic gates (for the logic) and D flip-flops (for the memory). For the peripherals I am using Schmitt triggers (for input debouncing) and monoflops (as timers), along with some discrete components.
Neurono does not execute a program. It is entirely taught what to do by supervised machine learning. The significant difference from a programmed robot is that it has no predetermined behavior. It is only taught what is necessary to survive in its environment (or not, if you're a bad teacher). The rest the robot figures out by itself, using linear classification based on the training examples.
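As a rough software sketch of the teaching idea (this is only an analogy, not the actual circuit; the class name, learning rule details and starting values are my own illustrative assumptions), each neuron behaves like a small perceptron trained with the classic perceptron learning rule:

class Perceptron:
    def __init__(self, n_inputs=2):
        # Weights and bias start at zero, i.e. the neuron is untrained
        self.weights = [0] * n_inputs
        self.bias = 0

    def predict(self, inputs):
        # Weighted sum followed by a hard threshold -> binary output 0 or 1
        s = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1 if s > 0 else 0

    def teach(self, inputs, target):
        # Perceptron learning rule: adjust the weights only when the output is wrong
        error = target - self.predict(inputs)
        self.weights = [w + error * x for w, x in zip(self.weights, inputs)]
        self.bias += error

teach() is called once per training example and repeated until the outputs stop changing - which is essentially what the teacher input unit does in hardware.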
Each of the two neurons can be taught individually. The PCB layout of the neural computer is still at an early stage; so far it just shows the teacher input unit with its debouncing circuitry:
You said you are building a robot called Neurono, but where's the robot? This is titled "robot work in progress", but I see no body. If you don't have a body, could you tell me what the robot is? Look, you can't have a robot without a body.
I look forward to seeing this in action some day. I'm wondering what two neurons can learn, though. If this works out, will you try your neurons on an FPGA?
After all, if 2 can run a basic robot, what could 1000 do?
I have just arrived in Kuala Lumpur, so I'll keep it short for the moment. Two of my neurons can learn exactly what two perceptrons with 2 binary inputs each and one binary output each can learn: they separate the input data into two classes along a linear boundary. With 2-dimensional input data that boundary is a line, with 3-dimensional input data it is a plane, and with higher-dimensional input data it becomes a so-called hyperplane. Let's say you train the neuron to classify two given inputs into two given classes (usually 0 and 1, which could for example be interpreted as 'left wheel forward' and 'left wheel backward'). Then the neuron will automatically classify all remaining possible permutations of the input data into those two classes.
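Here is a small standalone software sketch of exactly that (the training pairs and their 'wheel' interpretation are just made-up examples, not actual training data for Neurono):

# Teach a 2-input perceptron only two of the four possible input
# combinations, then let it classify the two it was never taught.
w1, w2, b = 0, 0, 0                      # weights and bias, start untrained

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def teach(x1, x2, target):
    global w1, w2, b
    error = target - predict(x1, x2)     # perceptron learning rule
    w1 += error * x1
    w2 += error * x2
    b += error

# Invented training pairs: (sensor_left, sensor_right) -> 1 = 'left wheel forward',
# 0 = 'left wheel backward'
for _ in range(10):                      # repeat until the weights settle
    teach(0, 0, 0)
    teach(1, 1, 1)

print(predict(0, 1), predict(1, 0))      # the untaught permutations -> 1 1

With these two training pairs the weights settle at w1 = w2 = 1 and b = 0, so the separating line puts (0,0) on one side and the other three input combinations on the other - the two inputs that were never taught both end up in class 1.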