Deep Learning vs Neuromorphic Computing in Robotic Systems

What is deep learning?

Artificial neural networks were inspired by the behavior of biological neurons. In 1958, Frank Rosenblatt created the Perceptron, a mathematical neuron model that multiplies each input signal (x1, x2, x3...) by a weight (w1, w2, w3...), sums the products, and activates the output neuron if the result exceeds a threshold.

These weights (w1, w2, w3...) can assume positive or negative values, representing stimulation or inhibition of the signal. This mirrors biological neurons, which send excitatory and inhibitory signals to the neurons they connect to, determining whether those neurons fire.
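The weighted-sum-and-threshold idea can be sketched in a few lines of Python (an illustrative sketch, not Rosenblatt's original formulation; the weights and bias below are hand-picked for the example):

```python
def perceptron(inputs, weights, bias):
    # Weighted sum: w1*x1 + w2*x2 + ... + bias
    total = sum(w * x for w, x in zip(weights, inputs))
    # Step activation: the output neuron either fires (1) or stays silent (0)
    return 1 if total + bias > 0 else 0

# Example: an AND-like gate; positive weights stimulate, the negative bias inhibits
print(perceptron([1, 1], [0.5, 0.5], -0.7))  # 1 (fires: 0.5 + 0.5 - 0.7 > 0)
print(perceptron([1, 0], [0.5, 0.5], -0.7))  # 0 (stays silent)
```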

In the last decades, this basic model has undergone little change. Changes have occurred much more in the organization of artificial neurons than in their basic structure. For example, a convolutional neural network (CNN) has the same neurons as a recurrent neural network (RNN). The difference is in the arrangement of these neurons.

When many neurons are organized into multiple stacked layers, a neural network is considered deep. Hence the name Deep Learning.

It has been observed that different layers are responsible for identifying different features of the input data. For example, if a neural network that classifies images has two layers, the first layer might be responsible for identifying edges and contours, while the second might join those edges and contours into patterns of small figures. The output layer would then combine these small figures to understand the complete image.
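The layer-on-layer structure described above can be sketched as stacked transformations, where each layer operates on the previous layer's output (a minimal illustration; the weights, biases, and "edges"/"patterns" labels are hypothetical placeholders, not learned values):

```python
import math

def layer(inputs, weights, biases):
    # One dense layer: a weighted sum per neuron, squashed by a sigmoid activation
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.0, 2.0]                                             # raw input ("pixels")
h = layer(x, [[0.1, 0.4, -0.2], [0.3, -0.5, 0.7]], [0.0, 0.1])   # low-level features ("edges")
y = layer(h, [[1.2, -0.8]], [0.2])                               # higher-level combination
print(y)
```

The point is structural: later layers never see the raw input, only increasingly abstract summaries produced by earlier layers.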

How does a deep neural network learn?

There are many ways to calibrate the weights of a neural network. The most commonly used approach is gradient descent with backpropagation. The idea is to define a cost function that measures how good the network's output is relative to a reference, then compute the gradient of this cost function with respect to the weights. The gradient points in the direction of steepest increase, so moving the weights in the opposite direction reduces the cost.

With this gradient calculated, each weight of the neural network can be updated so that the new values improve performance. This procedure is repeated many times until the parameters converge to an optimum.
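The update loop above can be sketched with a single weight (an illustrative toy, assuming a squared-error cost; real networks apply the same rule to millions of weights via backpropagation):

```python
def train(x, y, w=0.0, lr=0.1, steps=100):
    """Fit w so that w * x approximates y, by gradient descent."""
    for _ in range(steps):
        # Cost: (w*x - y)^2; its derivative with respect to w:
        grad = 2 * (w * x - y) * x
        # Move w against the gradient to reduce the cost
        w -= lr * grad
    return w

w = train(x=2.0, y=6.0)
print(round(w, 3))  # converges toward 3.0, since 3.0 * 2.0 = 6.0
```

Each iteration shrinks the error; the learning rate `lr` controls how large a step is taken against the gradient.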

Why has deep learning been very useful in robotics?

The interaction of a robot with an environment is very complex, since it involves an almost infinite variability of states: both the robot's position relative to the environment and the environment itself can change at any time.

Given this, an ideal scenario would be for robots to be general enough to know what to do even when the environment is slightly different from the one they were trained in. A robot that can walk on stones needs to balance itself regardless of the arrangement of the stones on the ground, which can always be different.

Neural networks have proven capable of delivering this result. Thanks to the many layers of abstraction in deep neural networks, a system can perceive that a given scenario, even if new, closely resembles the training data, so its decisions about which movements to perform are accurate enough.

Limitations of Deep Neural Networks

One of the biggest bottlenecks of this approach is training time and computational cost. Very large models are capable of delivering impressive results, such as GPT-3, which writes text almost as well as humans. But these models often require billions of parameters (GPT-3 has 175 billion), which incurs high training costs and energy expenditure.

How Neuromorphic Computing Can Help

Although Perceptron-based artificial neural networks were inspired by biological neurons, there are many key differences.

Neuromorphic computing is much more concerned with exactly mimicking the functioning of biological neurons.

For example, biological neurons are highly time-dependent. A neuron does not stay on or off indefinitely; in practice, its activation is defined by a firing frequency.

An activated neuron fires an action potential that lasts a few milliseconds and then returns to its resting potential, waiting for new stimuli.

This characteristic means that a constant input to a neuromorphic system produces a periodic output: a train of spikes at some frequency, rather than a constant value.
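This spike-then-rest behavior can be sketched with a leaky integrate-and-fire (LIF) neuron, a common simplified spiking model (an assumption on my part; the article does not name a specific neuron model, and the constants below are arbitrary):

```python
def lif_spike_times(current, threshold=1.0, leak=0.1, dt=1.0, steps=50):
    """Simulate a leaky integrate-and-fire neuron driven by a constant current."""
    v = 0.0          # membrane potential, starting at rest
    spikes = []      # time steps at which the neuron fires
    for t in range(steps):
        v += dt * (current - leak * v)   # integrate the input, leak some charge
        if v >= threshold:               # threshold crossed: fire a spike
            spikes.append(t)
            v = 0.0                      # reset to resting potential
    return spikes

print(lif_spike_times(0.3))  # evenly spaced spike times: a periodic spike train
```

Note that the constant input does not produce a constant output; it produces regularly repeating spikes, and a stronger input shortens the interval between them.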

Despite being more difficult to handle because of this complexity, neuromorphic systems have been shown to be more energy efficient, provided they run on neuromorphic hardware.

So far, neuromorphic systems have not been applied in every area where deep learning has already proven useful, but in some specific areas, such as adaptive robotic arm control, their performance and energy cost have already shown great promise.

The Importance of Specialized Hardware

Just as GPUs have evolved to handle most of the computation involved in deep neural networks, neuromorphic operations require specialized hardware for maximum performance.

New models of neuromorphic hardware can be expected to be released in the coming years, as companies like IBM, Intel and ABR have been working on this for some time.

Synergy or competition?

Ultimately, perhaps the robots of the future will have hybrid systems, using both deep networks and neuromorphic networks and exploiting the strengths of each architecture while compensating for its weaknesses. One must remember that the question is not which architecture is better, but how to maximize performance while minimizing costs.
