Hi everyone,
I’ve been thinking about the lack of creativity in common robots. They also lack any concept of empathy.
Face recognition systems have algorithms to read our mood, but they don’t care…
They can scan pictures full of disaster or happiness without being affected.
A creative robot needs moods.
Where do moods come from in us humans? Many different hormones control us, sometimes far more than knowledge does. Small children (or dogs) learn faster from our moods (in our voices) than by trial and error.
Neural networks learn mostly by weighting their connections, but those weights are adjusted based on ‘right or wrong’ only. They often have a threshold neuron. That’s a first step toward the concept of hormones.
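For anyone who hasn’t met one, here’s a minimal sketch of such a threshold neuron in Python (the function name and the numbers are mine, purely for illustration):

```python
import numpy as np

def threshold_neuron(inputs, weights, threshold):
    """Classic threshold neuron: fires only if the weighted sum
    of its inputs exceeds the threshold (the 'bias')."""
    return 1.0 if np.dot(inputs, weights) > threshold else 0.0

x = np.array([0.4, 0.9])
w = np.array([0.5, 0.5])
print(threshold_neuron(x, w, threshold=0.5))  # 1.0: the neuron fires
print(threshold_neuron(x, w, threshold=0.8))  # 0.0: same input, higher bar
```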
Any artificial neural network (ANN) would also benefit from a hormone that focuses its attention on relevant input data and stimulates long-term memory.
It would learn faster and have less data to process.
Ignoring less interesting input is like how dogs can filter out loud noise and still hear us opening the bag with their treats.
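A toy sketch of what I mean, entirely my own invention (the gating rule and all the names are assumptions, not an established method): one global “hormone” scalar that both filters out boring inputs and decides how deeply an experience is written into the weights.

```python
import numpy as np

def hormonal_update(weights, x, error, hormone, lr=0.01):
    """Hypothetical hormone-modulated learning step.

    `hormone` is a global scalar in [0, 1]:
      - as an attention gate it suppresses weak inputs,
      - as a memory stimulant it scales the learning rate,
        so "exciting" moments leave a deeper trace.
    """
    gate = np.where(np.abs(x) > (1.0 - hormone), x, 0.0)  # ignore boring inputs
    return weights + lr * hormone * error * gate          # stronger imprint when aroused

w = np.zeros(3)
x = np.array([0.9, 0.1, 0.7])
print(hormonal_update(w, x, error=1.0, hormone=0.8))  # big, selective update
print(hormonal_update(w, x, error=1.0, hormone=0.1))  # almost nothing sticks
```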
If we added more and more hormones to AI networks, we’d get moody robots, and that’s the basis for every creative process. (They don’t need testosterone; the military bots especially don’t!)
I guess the fundamental question comes down to whether we want robots whose actions we cannot predict.
Do we want or care about what an “angry” or “sad” robot produces? Does it make the robot less efficient or productive at the things humans want it to be doing?
Do we want any robot to be angry or sad and react accordingly and unpredictably? It’s one thing to use that “emotion” to draw something new, and another to prepare your drink with toxic chemicals.
Humans tend to want to program a robot with all the information it needs, limiting trial and error so it can reach the desired result(s) quickly. Even reward-based learning is biased toward what humans want.
I agree those moods are not useful for most robots.
Still, there is an ongoing discussion about ethics rules for autonomous cars, because AI has no empathy.
Let’s take reward-based learning a step further (I like that term):
An autonomous car has learned all the basics and now takes advanced driving lessons. The teacher has a few buttons to trigger hormones in the ANN.
When a critical situation is ahead, the teacher presses the “watch out” button (adrenaline). The ANN starts recording the situation until the teacher presses a reward button; the options are perfect, good, bad, and worst.
Later, when the car is idle, the ANN replays those recorded situations over and over to work out what went well or badly. The teacher can point out what it was by highlighting the object(s) that caused the critical situation.
That’s what humans do when they sleep: they learn in their dreams.
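Here’s roughly how I picture that recording-and-replay loop. Everything below (the class names, the Episode structure, the reward values) is my own guesswork, just to make the idea concrete:

```python
from dataclasses import dataclass, field

# The teacher's reward buttons mentioned above.
REWARDS = {"perfect": 1.0, "good": 0.5, "bad": -0.5, "worst": -1.0}

@dataclass
class Episode:
    frames: list = field(default_factory=list)       # sensor snapshots while adrenaline is high
    reward: float = 0.0                              # set by the teacher's reward button
    highlighted: list = field(default_factory=list)  # objects the teacher marked as the cause

class MoodyDriver:
    def __init__(self):
        self.recording = None  # current episode, if adrenaline is active
        self.memory = []       # long-term store of critical episodes

    def watch_out(self):
        """Teacher presses the 'watch out' (adrenaline) button: start recording."""
        self.recording = Episode()

    def observe(self, frame):
        if self.recording is not None:
            self.recording.frames.append(frame)

    def reward_button(self, label, highlighted=()):
        """Teacher closes the episode with perfect/good/bad/worst and
        optionally highlights the object(s) that caused the situation."""
        self.recording.reward = REWARDS[label]
        self.recording.highlighted = list(highlighted)
        self.memory.append(self.recording)
        self.recording = None

    def dream(self):
        """While idle, replay stored episodes over and over ('learning in dreams')."""
        for episode in self.memory:
            for frame in episode.frames:
                # A real ANN would re-train here on (frame, episode.reward),
                # paying extra attention to episode.highlighted.
                print(frame, "->", episode.reward, episode.highlighted)

car = MoodyDriver()
car.watch_out()
car.observe({"speed": 42, "object": "cyclist"})
car.reward_button("bad", highlighted=["cyclist"])
car.dream()
```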
In many situations, it’s a grey zone to determine what’s ethical or what the best outcome is (the trolley problem and its many variants). How does the ANN know what is “good” vs. “bad” when the situation would be difficult even for most humans?