Neurons can learn temporal patterns

https://www.sciencedaily.com/releases/2017/05/170529090526.htm

Individual neurons can learn not only single responses to a particular signal, but also a series of reactions at precisely timed intervals.

Learning is commonly thought to be based on strengthening or weakening of the contacts between the brain's neurons. The Lund researchers have previously shown that a cell can also learn a timed association, so that it sends a signal with a certain learned delay. Now, it seems that a neuron can be trained not only to give a single response, but a whole complex series of several responses.

The brain's learning capacity is greater than previously thought

"This means that the brain's capacity for learning is even greater than previously thought!" says Germund Hesslow's colleague Dan-Anders Jirenhed. He thinks that, in the future, artificial neural networks with "trained neurons" could be capable of managing more complex tasks in a more efficient way.


The researchers now show that the cells can learn not only one, but several reactions in a series. "Signal -- brief pause -- signal -- long pause -- signal" gives rise to a series of responses with exactly the same intervals of time: "response -- brief pause -- response -- long pause -- response."
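As a toy way to picture that, here is a minimal sketch (my own illustration, not from the paper) that records the pauses between stimulus events and then reproduces responses with the same intervals:

```python
import time

def record_intervals(event_times):
    """Store a stimulus sequence as the pauses between events (seconds)."""
    return [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]

def replay_with_intervals(intervals, respond):
    """Emit a response, then wait out each learned pause before the next one."""
    respond()
    for pause in intervals:
        time.sleep(pause)
        respond()

# "signal -- brief pause -- signal -- long pause -- signal"
stimulus_times = [0.0, 0.2, 1.0]            # seconds
learned = record_intervals(stimulus_times)  # [0.2, 0.8]

# "response -- brief pause -- response -- long pause -- response"
replay_with_intervals(learned, lambda: print("response"))
```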


I would have to read the paper to fully understand whether this sequence learning is a kind of replay of the signal, so mostly memorization, or whether it is a reaction that maps one timed sequence onto another timed sequence. My understanding is that the cell stores the timed sequence in a way more suited to neurons, i.e., it mostly preserves the pauses but emits responses in between that are not identical to the original signal.

I find it quite interesting that at least basic patterns can be learned at the level of individual cells. Maybe a kind of micromovement could be learned that simply, and the network would take care of combining and modulating those into more complex sequences. That would let us keep track of many, many micromovements and really draw on all the experience we gather through life, and possibly learn faster when we need to change behavior (after an accident, with age, or simply for a new skill), because we would essentially just have to find a new combination of micromovements we already have stored.
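To make that idea concrete, here is a rough sketch with purely hypothetical primitive names and data: timed micromovement primitives are stored once, and a "new skill" is just a new combination of them.

```python
# Each primitive is a list of (delay_seconds, command) pairs,
# i.e., a stored timed sequence like the one a trained cell might hold.
PRIMITIVES = {
    "lift_finger": [(0.0, "flex"), (0.1, "hold"), (0.3, "release")],
    "turn_wrist":  [(0.0, "rotate_cw"), (0.25, "stop")],
}

def compose(sequence):
    """Concatenate stored primitives into one longer timed plan,
    keeping each primitive's internal pauses intact."""
    plan, offset = [], 0.0
    for name in sequence:
        prim = PRIMITIVES[name]
        plan.extend((offset + delay, cmd) for delay, cmd in prim)
        offset += prim[-1][0]  # next primitive starts after the last event
    return plan

# A new behavior as a recombination of already-stored micromovements.
print(compose(["lift_finger", "turn_wrist"]))
```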

A network of neurons would then act as a kind of DSP over those recorded signals (coming from the trained "input" neurons) and could be retrained to give rise to more complex/combined behavior. So you would store more real-world information, instead of just setting weights in an essentially random network until it finds the right mapping function from input to desired output/body motion (which is what traditional neural networks do, as far as I understand).
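For contrast, here is roughly what I mean by "setting weights in an essentially random network until it finds the mapping": a generic sketch of a single unit that starts from a random weight and is nudged toward the desired input/output mapping, storing nothing about the real-world signal itself.

```python
import random

# One linear "neuron" with a random starting weight; training just
# adjusts the weight until input -> desired output.
w = random.uniform(-1.0, 1.0)
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target mapping: y = 2x

lr = 0.01
for _ in range(1000):
    for x, y_target in examples:
        y = w * x
        w -= lr * (y - y_target) * x  # gradient step on squared error

print(f"learned weight: {w:.3f}")  # ends up near 2.0
```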

On the other hand, convolutional neural networks store a kind of mini image in parts of the network, or more aptly, mini filters. I am not entirely clear on how that works exactly.
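My rough mental model of those "mini filters" is a small kernel slid over the image, something like this generic example (not tied to any particular network):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over the image and record how strongly
    each patch matches it (valid convolution, no padding)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A tiny "mini filter" that responds to vertical edges.
edge_filter = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])

image = np.zeros((5, 5))
image[:, :2] = 1.0  # bright left half, dark right half
print(convolve2d(image, edge_filter))
```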

Any opinions on what that would change for neural networks or learning? Any ideas on what that article actually means?

Keep it coming.

Very interesting.

Thanks for posting.

There is a simple way to emulate part of this, at least for the output signals. I often send out a stream of coordinated "actions" in one burst or snapshot, but the actions contain timing info, so it simulates a series of actions over time. An example is complex gesturing (coordinated arm movements with speech, facial expressions, emotional states, etc.).
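A minimal sketch of that output-side trick, with made-up field names (the real message format will differ): each action in the burst carries a time offset, and a small scheduler plays them back in order.

```python
import time

# One burst/snapshot: every action carries its own offset (seconds),
# so a single message describes a whole coordinated sequence over time.
burst = [
    {"offset": 0.0, "action": "raise_arm"},
    {"offset": 0.2, "action": "say", "text": "hello"},
    {"offset": 0.2, "action": "smile"},
    {"offset": 0.8, "action": "lower_arm"},
]

def play_burst(burst, execute):
    """Replay the snapshot as a timed series of actions."""
    start = time.monotonic()
    for item in sorted(burst, key=lambda a: a["offset"]):
        delay = item["offset"] - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        execute(item)

play_burst(burst, lambda item: print(item))
```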

The harder part is recognizing the temporal patterns on the input stream; I don't have a simple way to do that yet.