Single-layer Perceptron

Perceptron_0.zip (1572 bytes)

The perceptron is a simple model of a neuron and was invented by Frank Rosenblatt in 1957. It has a number of external inputs, one internal input (called the bias) and one output. The input values can be any number, but the output of the perceptron is always Boolean. When the output is '1', the perceptron is said to be activated. All of the inputs (including the bias) are weighted. Basically, the perceptron takes all of the weighted input values and adds them together. If the sum is above a threshold θ, the perceptron is activated; otherwise it is not, and the output ƒ(x) is '0':

ƒ(x) = 1 if X•W + b > θ, otherwise ƒ(x) = 0

Here X is the n-dimensional input vector, W the corresponding weight vector, b the bias input and X•W the dot product.
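In code, this activation boils down to a dot product and a comparison. Here is a minimal plain C++ sketch of the formula above; the function name and the example weights and threshold are mine for illustration and are not taken from the attached sketch:

```cpp
#include <cstdio>

// Perceptron activation: output 1 if the weighted sum of the inputs
// plus the bias input b exceeds the threshold theta, otherwise 0.
int perceptronOutput(const double x[], const double w[], int n,
                     double b, double theta) {
    double sum = b;                     // start with the bias input
    for (int i = 0; i < n; ++i)
        sum += x[i] * w[i];             // dot product X•W
    return (sum > theta) ? 1 : 0;
}

int main() {
    const double x[2] = { 1.0, 0.0 };   // example input vector
    const double w[2] = { 0.5, 0.5 };   // example weights
    printf("%d\n", perceptronOutput(x, w, 2, 0.0, 0.25)); // prints 1
}
```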

The single-layer perceptron has one drawback: the learning algorithm does not converge if the training set is not linearly separable. It is, for instance, impossible for a single-layer perceptron to learn an XOR function. This discovery at the end of the 1960s led to the so-called AI winter. Nevertheless, it can be used for many pattern recognition applications.

So far I have programmed a single-layer perceptron which has 3 inputs (one input is just an extra dimension with a constant value to replace the bias term). The sketch can be found attached; a minimal sketch of such a training run is shown below. In the video the perceptron learns a NAND function with two inputs. The perceptron is also able to learn a binary AND, OR or NOR function. If you try to teach it an XOR function, though, the algorithm will never stop.
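Since the attached sketch is not shown inline, here is roughly what such a training run can look like, written as a plain C++ sketch of the standard perceptron learning rule w ← w + eta·(t - o)·x. It is an illustration, not the attached Arduino code; the learning rate, the epoch limit and folding the threshold into the bias weight are my own choices:

```cpp
#include <cstdio>

// Perceptron with 3 inputs; input 0 is fixed to 1 and stands in for
// the bias, as described above. The threshold is folded into w[0],
// so the unit fires when the weighted sum is greater than zero.
int predict(const double w[3], const double x[3]) {
    double sum = 0.0;
    for (int i = 0; i < 3; ++i) sum += w[i] * x[i];
    return (sum > 0.0) ? 1 : 0;
}

int main() {
    // Truth table for NAND on two inputs; column 0 is the constant bias input.
    const double x[4][3] = { {1,0,0}, {1,0,1}, {1,1,0}, {1,1,1} };
    const int target[4]  = { 1, 1, 1, 0 };

    double w[3] = { 0.0, 0.0, 0.0 };
    const double eta = 0.1;            // learning rate (my choice)

    for (int epoch = 0; epoch < 100; ++epoch) {
        int errors = 0;
        for (int s = 0; s < 4; ++s) {
            int err = target[s] - predict(w, x[s]);   // -1, 0 or +1
            if (err != 0) {
                ++errors;
                for (int i = 0; i < 3; ++i)
                    w[i] += eta * err * x[s][i];      // w <- w + eta*(t-o)*x
            }
        }
        if (errors == 0) {             // converged: all four samples correct
            printf("converged after %d epochs: w = %.2f %.2f %.2f\n",
                   epoch + 1, w[0], w[1], w[2]);
            break;
        }
    }
}
```

If you swap the NAND targets for the XOR targets {0, 1, 1, 0}, the error count never reaches zero and the loop only stops because of the epoch limit, which is exactly the non-convergence described above.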

I have used the following resources for my perceptron research:

  • http://en.wikipedia.org/wiki/Perceptron
  • http://www.realintelligence.net/tut_perceptron
  • http://www.artificial-neural-networks.info/2008/05/perceptron-neural-network.html

I will now continue to program a more complex neural network using perceptrons and build a hardware perceptron with logic ICs and op-amps.

https://www.youtube.com/watch?v=ejWpfoAe8OU

**neurons!**

oooooh!! perceptron! always makes my brain light up :slight_smile: …i’ve tried simulating them before with just transistors (yup, Boolean :stuck_out_tongue: )…well, in theory it can simulate neural networks, but it won’t simulate psychology or thinking (i.e. classical conditioning, e.g. Pavlov’s experiments). Rodney Brooks’ simulation of neural nets based on his theory of subsumption architecture, which basically builds layer upon layer from what the robot has learned/experienced, does give a hint of learning by classical or operant conditioning. i’m excited about this project, gonna be waiting for updates :slight_smile:

Well, at my last meeting with lumi I discussed the idea of building perceptron(s) in hardware. Not with transistors but with logic ICs and op-amps (you need a kind of hybrid system because the weights etc. are not Boolean). Just like the Mark I Perceptron computer. Video here. Let’s see what I will come up with…

Talking with you was more efficient than staring at these formulas; to me it’s like looking at Chinese characters… :slight_smile:

The idea is not new, as you mentioned, but I guess trying it with these microcontrollers is. What are the odds that we hobbyists get an AI running in robots held together by hot glue and double-sided tape :wink: