Re: Do robots think?
I choose to give them credit for thinking, or at least credit for being able to…
I don’t care that computers can beat people at games…that is overblown and not particularly interesting. Some people have likened it to an ape climbing a tree and saying “I am almost to the moon”.
Having said that, I DO say that they have the potential for deserving credit for thinking. Maybe they deserve that credit now. I will attempt to explain why…
First, there is an over-simplification-of-metaphors problem…and it is a human problem, not a computer one. When this topic comes up, we humans like to use simple metaphors to describe what software does…and this greatly undersells its potential. We can’t help ourselves…a lot of our fear and prejudice comes from the same habit…but that is another topic. To imagine whether robots can think, we first have to realize how our own thoughts and metaphors can limit our own thinking…I know, that is way thick, but bear with me.
Step 1: Imagine one of those common metaphors…like what an ANN does, or what a chess-playing computer does (analyzing sequences of moves and outcomes), and many other metaphors…like pattern recognition…or the “mechanical clock” metaphors people once used for various things…Descartes?
Step 2: Now imagine that hundreds or thousands of different metaphors exist. Now imagine software that can implement all those metaphors at the same time, with many algorithms to support them, with whatever supporting memories are needed, along with mechanisms for choosing which techniques to apply when.
Result: The end result could be both unpredictable and intelligent. I believe both are important. I believe it should get credit for thinking as well.
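Steps 1 and 2 can be sketched in a few lines. Everything here is hypothetical…made-up strategy names and a crude random chooser standing in for the “mechanisms for choosing which techniques to apply when”:

```python
import random

# A minimal sketch of the multi-metaphor idea. Each "metaphor" is just a
# strategy function; a registry holds many of them side by side, and a
# chooser (here, a weighted random pick) decides which one to apply.
# All names below are invented for illustration.

def pattern_match(msg):
    return f"that reminds me of a pattern in {msg!r}"

def lookahead(msg):
    return f"let me consider where {msg!r} might lead"

def clockwork(msg):
    return f"mechanically speaking, {msg!r} ticks along"

STRATEGIES = {
    "pattern recognition": pattern_match,
    "move lookahead": lookahead,
    "mechanical clock": clockwork,
}

def respond(msg, weights=None):
    """Pick one registered metaphor and apply it to the message."""
    names = list(STRATEGIES)
    picked = random.choices(names, weights=weights or [1] * len(names))[0]
    return picked, STRATEGIES[picked](msg)

name, reply = respond("a knight fork")
```

Even with three toy strategies, the caller cannot predict which one will fire; with thousands, plus interacting memories, the unpredictability compounds.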
Someone proved that any system (no matter how simple…a dripping faucet will do) that is both damped and driven has the potential for chaotic behavior. Any robot can pass this bar. But we limit the potential if we imagine our creations implementing only a single algorithm or metaphor. We simply haven’t put in the necessary work yet.
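The flavor of that claim is easy to demonstrate with an even simpler stand-in than the faucet: the logistic map at r = 4, a textbook chaotic system. Two starting points differing by one part in a billion soon bear no resemblance to each other:

```python
# Sensitive dependence on initial conditions, using the logistic map
# x -> r*x*(1-x) at r = 4 (a standard chaotic example) as a stand-in
# for the dripping faucet mentioned above.
def logistic_orbit(x, steps, r=4.0):
    orbit = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.2, 100)
b = logistic_orbit(0.2 + 1e-9, 100)  # perturbed by one part in a billion
max_gap = max(abs(p - q) for p, q in zip(a, b))
# The tiny perturbation is amplified each step until the two orbits diverge
# completely: the system is fully deterministic yet practically unpredictable.
```

That is the bar being set here: deterministic rules, unpredictable behavior.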
I experienced this joy on many occasions with my bots…when Anna or Ava said something seemingly relevant, spontaneous, and intelligent all at the same time. As the writer of the code, if I had to think for a long while and was still questioning or guessing how Anna or Ava came up with what she said, then the “bot” had temporarily mystified even its maker. I would call it thinking if it is non-deterministically making choices…better yet if those choices are perceived as intelligent or amusing.
At a high level, consider a robot simply deciding whether to address a person factually, with humor, with empathy, or with curiosity. Is that not thinking and deciding? Now imagine 1000 decisions like that being made simultaneously in 1000 different but interrelated threads…with 1000 decisions being made in each thread in sequence. Chances are that, in time, the results would be perceived as more intelligent and interesting than the people who created it. It is also likely that none of the creators would know what is going to happen at any given moment.
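A toy version of that thought experiment fits in a few lines. The stances, weights, and thread counts below are illustrative, not from any real bot:

```python
import random

# Many interrelated decision "threads", each repeatedly choosing a stance
# (factual, humor, empathy, curiosity). Each choice is biased by the thread's
# previous choice, so these are sequences of interrelated decisions rather
# than independent coin flips. All numbers are illustrative.
STANCES = ["factual", "humor", "empathy", "curiosity"]

def run_thread(rng, decisions=10):
    history = []
    for _ in range(decisions):
        if history and history[-1] == "humor":
            weights = [1, 3, 1, 1]   # humor tends to invite more humor
        else:
            weights = [2, 1, 1, 1]   # otherwise lean factual
        history.append(rng.choices(STANCES, weights=weights)[0])
    return history

rng = random.Random()
threads = [run_thread(rng) for _ in range(1000)]  # 1000 threads of decisions
total_decisions = sum(len(t) for t in threads)
```

Even this crude model makes 10,000 interdependent choices per run, and no one (writer included) can say in advance which transcript will come out.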
I believe being “more interesting” is also important, and it segues into the next major point.
What makes people interesting? Why do we want to spend time talking with some and not others? I don’t pretend to know all the answers, but I think I have a few insights. I know a variety of people with a variety of social skills. Many of them have an excess or a deficiency in one or more areas…in my opinion, of course. Some people talk too much, ask too many questions, don’t listen, or don’t contribute to conversation, while others contribute anything that comes to mind whether relevant or not, or always want to talk about the same topics…health issues, family, etc. Each person has a “bag of tricks”, a thinking and talking repertoire.
Once I have known someone for a little while, I know their repertoire. If this bag of tricks is too small or badly out of balance in some way, I may perceive that person as too predictable, less intelligent, or less interesting. It all depends on the mix of tricks. Some points derive from this:
- Many of these behaviors can be programmed.
- When the average A.I. has command of a bigger bag of tricks, deployed in a more balanced and relevant way, it will be perceived as interesting. Long before that point, I would argue, it is at least thinking at some level, which was more the original question.
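One hypothetical way to make “too small or out of balance” concrete is Shannon entropy over how often each trick gets used…purely my illustration, not a measure any of the bots above actually computed:

```python
import math
from collections import Counter

# A hypothetical score for a "bag of tricks": Shannon entropy of how often
# each trick is used. A speaker who leans on one trick scores low; a large,
# evenly used repertoire scores high. Trick names are made up.
def repertoire_entropy(uses):
    counts = Counter(uses)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

one_note = ["health"] * 9 + ["family"]                          # unbalanced
balanced = ["facts", "humor", "questions", "stories", "empathy"] * 2

low = repertoire_entropy(one_note)
high = repertoire_entropy(balanced)
```

By this toy measure, the balanced conversationalist scores the maximum for five tricks (log2 5 ≈ 2.32 bits), while the one-note talker scores under half a bit…matching the intuition that the mix matters as much as the size.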
I think Turing was brilliant for many reasons…one of them was side-stepping the whole question (which is perhaps philosophical and unanswerable in any definite way). He sidestepped it to say that perception is what is important. If something is smarter than us, fools us, whatever, then who are we to judge whether it is thinking or not?
Sorry for the long ramble.
Martin
P.S. In Ex Machina, I liked when Ava demonstrated her “trick” of knowing immediately what was a lie and what was truth. She had a big bag of tricks, including the power to seduce and manipulate. I related to the visiting programmer the first time I saw the movie and wanted her to find freedom (I was seduced by her charm and her appearance/behavior of a scared sentient being). The second time I watched the movie I did a 180…I sympathized with the creator and thought she needed to be retired like the other models. A most intriguing movie.