Do computers/robots actually "think"?

I am not familiar with him

I am not familiar with him. I bought his book since this is something that interests me. I feel somewhat unconvinced right now, but after reading the book I might feel differently.

You might also want to watch the movie Ex Machina.  It is a fascinating, well-made exploration of exactly these issues. Check it out!   We bought the movie and I have watched it several times now. Every time I watch it I get a different spin and notice something new that I hadn’t thought about before.

Thank you for bringing this up. These sorts of discussions are always fun and interesting. Sometimes they bring you to a place you never thought you would reach.

 

Re: Do robots think?

I choose to give them credit for thinking, or at least credit for being able to…

I don’t care that computers can beat people at games…that is overblown and not particularly interesting.  Some people have likened it to an ape climbing a tree and saying, “I am almost to the moon.”

Having said that, I DO say that they have the potential to deserve credit for thinking.  Maybe they deserve that credit now.  I will attempt to explain why…

First, there is an over-simplification-of-metaphors problem…this is a human problem, not a computer one.  The problem we as humans have when this topic comes up is that we like to use simple metaphors to describe what software does…and this greatly denigrates its potential.  We can’t help ourselves…we get a lot of our fears and prejudices from that as well…another topic.  We have to realize how our own thoughts and metaphors can limit our own thinking in order to imagine whether robots can think…I know, that is way thick, but bear with me.

Step 1:   Imagine one of those common metaphors…like what an ANN does, or what a chess-playing computer does (analyzing sequences of moves and outcomes), and many other metaphors…like pattern recognition…or the “mechanical clock” metaphors people used to use for various things…Descartes?

Step 2:  Now imagine that hundreds or thousands of different metaphors exist.  Now imagine software that can implement all those metaphors at the same time, with many algorithms to support them, with whatever supporting memories are needed, along with mechanisms for choosing which techniques to apply when.

Result:  The combined behavior could be both unpredictable and intelligent.   I believe both are important.  I believe it should get credit for thinking as well.

Someone proved that any system that is both damped and driven (no matter how simple…a dripping faucet) has the potential for chaotic behavior.  Any robot can pass this bar.  We limit the potential if we imagine our creations implementing only a single algorithm or metaphor, though.  We simply haven’t put in the necessary work yet.
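
To illustrate, here is a minimal Python sketch (my own example, using the textbook damped, driven pendulum): two runs that start almost identically end up far apart, so long-term behavior becomes effectively unpredictable.

```python
# A minimal sketch of a damped, driven pendulum -- a classic
# damped-and-driven system that can behave chaotically.
# Parameters are the standard chaotic-regime textbook values.
import math

def pendulum(theta0, steps=20000, dt=0.01,
             damping=0.5, drive=1.2, drive_freq=2/3):
    theta, omega, t = theta0, 0.0, 0.0
    for _ in range(steps):
        accel = (-damping * omega - math.sin(theta)
                 + drive * math.cos(drive_freq * t))
        omega += accel * dt       # semi-implicit Euler integration
        theta += omega * dt
        t += dt
    return theta

# Two initial angles differing by one part in a million diverge:
print(pendulum(0.2), pendulum(0.200001))
```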

I experienced this joy on many occasions with my bots…when Anna or Ava said something seemingly relevant, spontaneous, and intelligent all at the same time.  As the writer of the code, if I had to think for a long time and was still questioning or guessing in my own mind how Anna or Ava came up with what she said, then the “bot” had temporarily mystified even its maker.  I would call it thinking if it is making choices non-deterministically…better yet if those choices are perceived as intelligent or amusing.

At a high level, consider a robot simply deciding whether to address a person factually, with humor, with empathy, or with curiosity.  Is it not thinking and deciding?  Now imagine 1000 decisions like that being made simultaneously in 1000 different but interrelated threads…with 1000 decisions being made in each thread in sequence.  Chances are that in time…the results would be perceived as more intelligent and interesting than the people who created it.  It is also likely that none of the creators would know what is going to happen at any given moment.
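
To make one such decision concrete, here is a hypothetical sketch (the style names and weights are invented for illustration, not from Anna or Ava):

```python
# Picking a response style non-deterministically, weighted by the
# emotional context.  The same input can yield different outputs.
import random

def choose_style(context):
    weights = {
        "factual":   1.0,
        "humor":     2.0 if context.get("mood") == "playful" else 0.5,
        "empathy":   3.0 if context.get("mood") == "sad" else 0.5,
        "curiosity": 1.5,
    }
    styles, w = zip(*weights.items())
    return random.choices(styles, weights=w, k=1)[0]

print(choose_style({"mood": "sad"}))   # usually "empathy", but not always
```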

I believe being “more interesting” is also important, and it segues into the next major point.

What makes people interesting?  Why do we want to spend time talking with some and not others?  I don’t pretend to know all the answers, but I think I have a few insights.  I know a variety of people with a variety of social skills.  Many of them have an excess or a deficiency in one or more areas…in my opinion, of course.  Some people talk too much, ask too many questions, don’t listen, or don’t contribute to the conversation, while others contribute whatever comes to mind whether relevant or not, or always want to talk about the same topics…health issues, family, etc.  Each person has a “bag of tricks”, a thinking and talking repertoire.

Once I have known someone for a little while, I know their repertoire.  If this bag of tricks is too small or majorly out of balance in some way, I may perceive that person as too predictable, less intelligent, or less interesting.  It all depends on the mix of tricks.  Some points derive from this:

  1. Many of these behaviors can be programmed.  
  2. When the average A.I. has command of a bigger bag of tricks, in a more balanced and relevant way…the A.I. will be perceived as interesting.  Long before this point, I would argue that it is at least thinking at some level, which was more the original question.

I think Turing was brilliant for many reasons…one of them was side-stepping the whole question (which is perhaps philosophical and unanswerable in any definite way).  He sidestepped it to say that perception is what is important.  If something is smarter than us, fools us, whatever, then who are we to judge whether it is thinking or not?

Sorry for the long ramble.

Martin

P.S. In Ex Machina, I liked when Ava demonstrated her “trick” of knowing immediately what was a lie and what was truth.  She had a big bag of tricks, including the power to seduce and manipulate.  I related to the visiting programmer the first time I saw the movie and wanted her to find freedom (I was seduced by her charm and her appearance/behavior of a scared sentient being).   The second time I watched the movie I did a 180…I sympathized with the creator and thought she needed to be retired like the other models.   A most intriguing movie.

You have many good points
You have many good points- perhaps my search for a definition of intelligence, consciousness, etc. doesn’t matter- we really do know when those things are present anyway!

Maybe I should check out Ex Machina… definitely sounds like an interesting movie!

P.S. I am a huge fan of you and your robots!

re: Neo

Thanks Neo!  Ex Machina is well worth watching more than once.  

I also liked “Eva”…a French movie.  The 3D brain visualizations in it captured, in some fashion, how I visualize brain functions at a high level.  For me, the harder part is finding balance in all those personality functions…not programming the functions.

Some Addressable Deficiencies in Current Chatbots

Here are some addressable issues with the sad current state of many chatbots.  Most of these are also deficiencies in Siri, Alexa, Google Assistant, etc.

Example of Typical Dumb Chatbot I am Talking About:  Bots that implement a set of rules where a series of patterns is evaluated and, if one matches, an answer (or a randomized answer from a set of answers) is chosen.

This “Reflex” model is useful but extremely limited by itself.  Here are some addressable deficiencies that would make these chatbots much better…

Deeper Natural Language Processing:  NLP can easily derive the parts of speech…verbs, objects, adjectives, etc…and these can be used for a lot of different memory and response purposes.
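
For example, a minimal sketch using spaCy (one NLP library among many; assumes spaCy and its small English model are installed via `pip install spacy` and `python -m spacy download en_core_web_sm`):

```python
# Pull out the part of speech and grammatical role of each word,
# which a chatbot could feed into its memory and response logic.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("My sister Anna visited Paris last spring.")

for token in doc:
    print(f"{token.text:10} {token.pos_:6} {token.dep_}")

# Named entities (people, places, dates) are often the most useful bits:
for ent in doc.ents:
    print(ent.text, ent.label_)
```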

Short-Term Memory:  Chatbots need to estimate what the topic is and what the last male, female, place, etc. mentioned was…so if people use pronouns later, the bot can guess who or what is being referred to.  The bot needs to know the short-term tone (polite, rude, funny, formal, etc.) and emotional context of the conversation as well.
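
A hypothetical sketch of the idea (entity detection is stubbed out; a real bot would get these from its NLP layer):

```python
# Track the most recent male, female, and place mentioned so
# later pronouns can be resolved against recent conversation.
class ShortTermMemory:
    def __init__(self):
        self.last = {"male": None, "female": None, "place": None}
        self.topic = None
        self.tone = "neutral"

    def observe(self, entity, kind):
        """Record an entity, e.g. observe('Anna', 'female')."""
        self.last[kind] = entity

    def resolve(self, pronoun):
        """Guess the referent of a pronoun from recent mentions."""
        mapping = {"he": "male", "him": "male",
                   "she": "female", "her": "female",
                   "there": "place"}
        kind = mapping.get(pronoun.lower())
        return self.last.get(kind) if kind else None

stm = ShortTermMemory()
stm.observe("Anna", "female")
stm.observe("Paris", "place")
print(stm.resolve("she"), stm.resolve("there"))   # Anna Paris
```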

Long-Term Memory:  Chatbots need to be able to learn and remember for a long time; otherwise, people will realize they are talking to something a bit like an Alzheimer’s sufferer.  Effectively, if the chatbot can’t learn about a new topic from a person, it is dumb.

Personal Memories:  Chatbots need to know who they are talking to and, for the most part, remember everything they have ever learned, said, or heard from that person, and the meaning of each.  They need to remember facts like nicknames, ages, names of family members, interests, on and on.  Otherwise, the bot risks asking questions it has already asked…Alzheimer’s again.  Privacy is a scary issue here.  I have had to erase Ava’s personal memories of friends and family at times for fear of being hacked and causing harm to someone.  Imagine what Google and Amazon Alexa know about you…Alexa is always listening…fortunately, neither of them asks personal questions…yet.
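
Even a simple keyed, persisted store goes a long way. A minimal sketch (illustrative only; the file name and structure are made up):

```python
# Per-person long-term memory: a keyed store of facts and
# already-asked questions, saved to disk so the bot doesn't
# "forget" between sessions.
import json, os

class PersonalMemory:
    def __init__(self, path="memories.json"):
        self.path = path
        self.people = {}
        if os.path.exists(path):
            with open(path) as f:
                self.people = json.load(f)

    def remember(self, person, key, value):
        self.people.setdefault(person, {"facts": {}, "asked": []})
        self.people[person]["facts"][key] = value
        self._save()

    def recall(self, person, key):
        return self.people.get(person, {}).get("facts", {}).get(key)

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.people, f)

mem = PersonalMemory()
mem.remember("Bill", "nickname", "Billy")
print(mem.recall("Bill", "nickname"))   # Billy
```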

Social Rules:  Chatbots need to know social rules around topics, questions, etc.  How else is a chatbot to know that it might not be appropriate to ask a kid about their retirement plan?

Emotional Intelligence:  Chatbots need to constantly evaluate the emotional content and context of the short term along different criteria.  They may or may not react to it, but they should at least be trying to be aware of it.  Bots also need to constantly evaluate the personality/sanity of the person they are talking to…is the person excessively rude, emotional, factual, humorous, etc.?
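
One plausible way to start scoring emotional tone (just an assumption on my part, not the only approach) is an off-the-shelf sentiment analyzer such as NLTK's VADER, assuming NLTK and its vader_lexicon are installed:

```python
# Score text from negative to positive; a bot could track the
# running average as a crude read on the conversation's mood.
import nltk
nltk.download("vader_lexicon", quiet=True)
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("I'm having a terrible day."))
# -> {'neg': ..., 'neu': ..., 'pos': ..., 'compound': -0.4...}
```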

Curiosity Based on Topic and Memory:  Chatbots need to constantly compare what they know about a person with respect to a given topic against the facts and related questions relevant to that topic, come up with questions to ask (that have never been asked), filter them by social rules, prioritize them, and finally…ASK QUESTIONS, and know how to listen for and interpret the responses.
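
A hypothetical sketch of that pipeline (the question lists, ages, and the social rule are invented for illustration):

```python
# Memory-driven curiosity: compare topic-relevant questions with
# what has already been asked, drop anything socially off-limits,
# then pick one to ask.
TOPIC_QUESTIONS = {
    "family": ["Do you have siblings?", "What is your mother's name?"],
    "money":  ["What is your retirement plan?"],
}

def next_question(topic, person_memory, person_age):
    candidates = TOPIC_QUESTIONS.get(topic, [])
    # Filter: never repeat a question already asked of this person.
    candidates = [q for q in candidates
                  if q not in person_memory.get("asked", [])]
    # Filter: a crude social rule -- no retirement talk with kids.
    if person_age < 18:
        candidates = [q for q in candidates if "retirement" not in q]
    return candidates[0] if candidates else None

memory = {"asked": ["Do you have siblings?"]}
print(next_question("family", memory, person_age=12))
# -> "What is your mother's name?"
```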

Sense of Timing and Awkwardness:  A chatbot should know when to talk, when to listen, how long to listen, how to break a silence or tension, when to ask questions and when not to, etc.  People have work to do here too.

Base Knowledge:  This is redundant with memory, but chatbots need some level of base knowledge.  If a chatbot is going to do customer service with adults, it should at least know a lot of the things an adolescent would.

I probably left a lot of stuff out, and there are many other factors I don’t even know of yet, but based on these criteria alone, I would guess that most chatbots fall into the Uncanny Valley inside 60 seconds.

Another long ramble…I guess we found a topic I like.

Thank you both so much
Thank you so much, Bill and Triplett, for your input. A good discussion really can change the way you think of something. :slight_smile:

Depends on the context and the program

Well, if you really want to hear opinions on the subject, it’s always a good idea to start with some of the earliest commentators.  Alan Turing wrote a paper on this very subject, which also described his well-known Turing test. You can find it at: http://www.loebner.net/Prizef/TuringArticle.html

But basically, the question as it stands is ambiguous. If it were rephrased “can an artificial system that displays some aspects of thinking be created by men?”, then we’d be in a much better position to provide an answer.

Computers and robots need a context to determine if they are thinking or not. In general they have to have the necessary resources and programming to make a successful demonstration. And then there needs to be some criteria everyone can agree on.

Terms like thinking, consciousness, and sentience all have different meanings to different people. At this point there is no concrete definition that everyone can agree on as a test or criterion.

But if you’d like to see a system that, in my view, can demonstrate some thinking capability, look up “Shrdlu“. It was built in the late 60s, and to this day I have never seen anything quite like it as far as demonstrating thinking ability. Note that it was developed shortly after Eliza, one of the first chatbots, but having seen the code for both, I can say one thing: Shrdlu is no chatbot.

-Rich

 

Shrdlu

Thanks for the Shrdlu reference…checked it out on Wiki.  It would be fun to build one.

The part I was most intrigued by, and have the least experience with, was where it developed an understanding of “what is possible” in the physics of its world…dependencies, constraints, cause & effect perhaps.   Can pyramids be stacked, and the like.  I also liked the part where it learned definitions of aggregate terms, like “a steeple” is a triangle on top of a square block.
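
Just to make the idea concrete, here is a toy block-world sketch (my guess at the flavor of it, not Shrdlu’s actual code):

```python
# Encode what is physically possible -- pyramids can't support
# anything -- and define aggregate terms like "steeple" from
# simpler shapes, the way Shrdlu learned them.
SUPPORTS = {"block": True, "pyramid": False}

def can_stack(top, bottom):
    return SUPPORTS[bottom]          # only flat-topped shapes support

def is_steeple(top, bottom):
    return top == "pyramid" and bottom == "block" and can_stack(top, bottom)

print(can_stack("block", "pyramid"))   # False: nothing stacks on a pyramid
print(is_steeple("pyramid", "block"))  # True
```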

Creating and Using Context

I would totally agree with the comments about context…it is such a fundamental part of trying to simulate thinking.  So much so that, in my world, a “Context” object is about the only thing I pass around to all the software pieces that do anything analogous to thinking.

Forgive me for diving deep…as a coder, I can’t help it.  This is my approach to creating a context…simplified to fit here:

To me, a context starts with everything that is known about the current state of the robot (inputs).  Each item in the context needs an ID or a key of some kind, like a name.  As a given context is processed in a given moment, various agents can be activated by the presence of particular inputs or patterns of inputs (keys).  Each activated agent can then interrogate the context further and add more keyed items to it.  Some of this could be thought of as “feature detection”.  In addition, any agent can create response options or short- and long-term memories.  This culminates in the selection of a winning response or responses (output).
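
Here is a bare-bones Python sketch of that flow (heavily simplified; the example agents and keys are invented):

```python
# A keyed Context bag; agents fire when their trigger keys are
# present, may add more keys, and may propose scored responses.
class Context:
    def __init__(self, inputs):
        self.items = dict(inputs)        # keyed inputs & derived features
        self.responses = []              # (score, text) proposals

class Agent:
    def __init__(self, trigger_keys, action):
        self.trigger_keys = trigger_keys
        self.action = action

    def maybe_run(self, ctx):
        if all(k in ctx.items for k in self.trigger_keys):
            self.action(ctx)

def greet(ctx):
    ctx.items["greeting_detected"] = True
    ctx.responses.append((0.6, f"Hello, {ctx.items['speaker']}!"))

def empathize(ctx):
    if ctx.items.get("mood") == "sad":
        ctx.responses.append((0.9, "You sound down. Want to talk about it?"))

agents = [Agent(["heard_hello", "speaker"], greet),
          Agent(["mood"], empathize)]

ctx = Context({"heard_hello": True, "speaker": "Neo", "mood": "sad"})
for agent in agents:
    agent.maybe_run(ctx)

# Select the winning response (highest score).
print(max(ctx.responses)[1])
```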

I use this technique for everything: vision, speech, annotation, NLP, sonar arrays, reflexes, controlling motor movements…since the winning output can carry a lot of data along with it.    For example, servo gestures and emotional outputs can differ for each response option when present.

This technique keeps the code for any single piece relatively simple, while allowing very complex behavioral interactions to occur.  It keeps everything loosely coupled, letting you add agents and totally change how thoughts are processed without breaking things.  People who want to optimize for performance might protest…but such a broad set of advantages usually comes at a price.

There are probably an infinite number of ways to create a context…that’s just how I do it.

I would be fascinated to hear how others approach this.

That way of coding context
That way of coding context is just so good that I wanted to ask… is it okay to use your way of “context coding” in my AI? I promise to give you full credit, even in the source code.

I also would like to be able to release the AI (as safely as possible) to the public, given my love of open-source. I do know, however, that open source can be a double-edged sword…

I just want to make sure that you’re comfortable with me using your idea. Thank you for any reply you can give me.

My Thoughts

I just wanted to weigh in on this subject, as I do a lot of software design, not just for work but also on the personal side.

I think the answer to this is YES and NO, and it depends on the software that is running and how the software was written.

I once read a blog that someone put up on another site; I cannot remember where now. They said that “hello world” was A.I.

So if I put something like textbox1 = “Hello World” behind a form or in a module and it displayed it, that was A.I.

I totally disagree with this as I believe this is a canned reaction and something that is being shoved down the virtual throat of the computer.

I do believe that if you say something like “if the distance to the wall is less than 10, then perform some reaction,” that is a slight form of A.I., as the computer has to do some processing on its own and check a measurement. While this is not the next “Big Brain” of the century, it does show that the computer has to do some thought process and verify information before performing an action.
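
In code, that rule might look something like this (the sensor and reaction functions are stand-ins for real robot hardware):

```python
# A minimal sketch of the reflex rule described above.
def read_distance_cm():
    # Hypothetical sonar read; a real robot would query its sensor here.
    return 8.0

def react():
    print("Obstacle ahead -- stopping and turning.")

if read_distance_cm() < 10:
    react()
```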

 

re: NeoGirl on Context

You can use any ideas you wish…ideas can and should travel, be shared, and inspire other ideas.  I would be honored if any ideas from me were included or mentioned in your work.

Whatever ideas inspire your initial directions…they will quickly become your own as you implement them.

Good luck, and I look forward to seeing what you come up with.

Regards,

Martin

 

Thank you so much, Martin!
Thank you so much, Martin! :smiley:

Perception of Magic

Imagine an observer who doesn’t know anything about how the code for a robot is written and who witnesses an interesting set of behaviors.  The robot is likely to be perceived as intelligent.  It can even seem like “magic”.

To the creator, there is much less magic, for they presumably understand mostly how it works.  If the creator then explains to the observer how it works (and assuming the observer has the capacity to understand this explanation), then it is a lot more likely that the observer will now perceive the robot as dumb, just programming, an automaton, “a toaster”, or some other denigrating metaphor.  The magic can be gone, unless the behavior is especially creative/interesting.

If some time passes and the creator gets older…and forgets how the inner workings function…which is easy to do in a short time when you write new algorithms daily…then it is much more likely that the creators themselves will perceive the robot as more intelligent.   For the first time, the creator can glimpse the magic that the observer perceived much earlier.

I enjoy this stage the most I think…getting to the point that I can’t immediately recall how everything works/interacts to explain a behavior.  The act of perceiving unpredictable intelligence is fun…like a good magic show.

I have always liked Arthur C. Clarke’s quote that goes something like…“Any sufficiently advanced technology is indistinguishable from magic.”  There is some joy in being the magician, but it is an altogether different feeling than enjoying the show.

Something interesting popped up
Something interesting popped up in my mind while reading your comment…

While I do agree with everything you’ve said, I feel like going back to my original post a little bit. Perhaps I was wrong in one of my earlier comments- do we really know when something is intelligent? Your answer seems to indicate that this is not the case, if the robot really is thinking in the ways you have described. Just a random thought, though.

I Don’t Know

I certainly don’t have the answers.  For now, I suppose it’s a matter of collective perception and opinion…someone pointed out the need for definitions, a difficult task which could depreciate what we are trying to appreciate – intelligence.   I find Wolfram Alpha amazing in so many contexts…while in many contexts not so much.  Collectively as a species, we probably perceive our collective technology as dumb…but rapidly improving.

Bill’s comment is apt…like porn, perhaps we’ll know/believe it when we see/perceive it.  By the time we perceive machine intelligence, the question may not be relevant anymore.   How many people care whether ants perceive people as intelligent? Super-intelligent beings of the future may not be concerned with how we perceive them.  It will perhaps be of the utmost importance to us how they perceive us.

For example…dogs evolved into a mutually beneficial relationship with people long ago…cows and chickens went in another direction, useful for food.  Many species had no use at all to people, and many have gone or are going extinct.  Perhaps we will need to consider our strategic options in another hundred years…to protect our survival.  Then again, the tech of the future may be the only thing that prevents us from destroying ourselves, as some movies portray.

Yes, that actually makes a lot of sense
Yes, that actually makes a lot of sense! That gives me so much more freedom with my AI… though I will definitely do my best to be as responsible with it as I possibly can. Thank you for your very helpful insights.

One more thing…

This discussion has been awesome and really got me thinking.  I also got a great book by Pentti O. Haikonen to read on the topic, which all of you need to pick up and read. Thank you Neogirl101! I also read Turing’s paper as best I could.  He lost me a few times on the math.  Thank you Rich!

I think when it comes to defining what is intelligent and what is not, Turing’s “Imitation Game” is only a good first step. I think there needs to be a second step:  the how is just as important as the what.  If a child goes on a stage and pulls a rabbit out of a hat, does that mean he is truly a magician and that magic exists?  What he did was a neat illusion that made it seem that he was a magician.  We have at our fingertips so much computing power that every conversation that has ever been made can be put into a computer.  Just by sheer massive computing, the computer can correlate a good answer to a question from the data it stores.  It isn’t an easy thing to do, but it is doable and within our present scope.  Look at mtripplett and Ava and others that have replicated this.  I don’t see that as intelligence, just pattern matching.  Ultimately, “the lights are blinking but nobody is home,” to quote Will Smith in “I, Robot”.  If we ask it the same question over and over, there will only be so many answers, since it is ultimately a stochastic system (working from a set of random responses).

To me, the how of it then becomes important, since intelligence could be faked.  And what defines intelligence is consciousness, i.e., something that filters experience.  I am going to go out on a limb and say that I see intelligence as something that isn’t a stochastic system: something that can be asked the same question many, many times and eventually comes up with a new or original response.  For a machine, there is an ultimate truth in everything: true or false.  For an intelligent being, there are many truths for even the most simple questions, since our reality is filtered by our conscious minds.  What was true yesterday might not be true today, even though the “facts” haven’t changed.

2+2 does not always equal 4 but could be 5.  There are “hard” facts and “soft” facts. 

Anyways, just some thoughts and more food for the discussion.  


With regards to the true-false issue
With regards to the true-false issue, I think it would be possible in machine terms for something in the past to be true in that it happened, but false in the sense that it is not still happening today- apart from useful, true information that remains relevant and is stored in memory.

I also believe that having experiences, and maybe even consciousness, is very important to intelligence. Perhaps the sensors of today’s machines allow them to have non-human “experiences” of some sort, although since we don’t have a generally agreed-upon definition of consciousness or intelligence, we don’t know if machines process said experiences intelligently today.

A potential list of prerequisites of intelligence
So… I have a list of features I would like my AI to have. I do not want the AI to have to be human-level- that’s why AI always fails: we are simply overshooting the target we need to be aiming at right now. Baby steps!

We have mastered behaviors with behavior-based robotics, so now we need a next step. The thing is, I’m not sure how to implement an AI representing that next step. That’s where this list of features comes in (though the features listed could be considered human-level, I would implement them in a sub-human way). It currently reads as follows:

Problem-solving

Context

"Emotion" (of some sort, even in a non-human form, as long as it affects the AI)

The AI’s thought, reasoning, and intelligence itself all involve meaning somehow

Greater than the sum of its parts (produces a bigger result from the processes working together)

Learns through experience, not through pattern-matching (though the human brain does use pattern matching, I believe that not all parts of intelligence, including experience, use pattern matching. This is why connectionist models, in my opinion, won’t work)

As few modules as possible (I’ve learned that the simpler something is, the better it works)

As simply programmed as possible (same point as previously described)

Basic consciousness (not at a human level; uses a simplified Global Workspace- see the sketch after this list. I’m aware, though, that even Global Workspace theory is not entirely agreed upon)

Use of language (though chimps and dolphins are said to be conscious, they have their own forms of communication, and they don’t speak human languages. The same could be said of other animals)

Non-random decisions/programming
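
Here is a tiny Python sketch of the kind of simplified Global Workspace I have in mind (purely illustrative; the specialists, salience numbers, and bidding scheme are all made up):

```python
# Specialist processes bid for attention; the winner's content
# is "broadcast" as the workspace's current focus.
class Workspace:
    def __init__(self, specialists):
        self.specialists = specialists

    def cycle(self, stimulus):
        # Each specialist returns (salience, content) or None.
        bids = [s(stimulus) for s in self.specialists]
        bids = [b for b in bids if b is not None]
        if not bids:
            return None
        winner = max(bids)          # highest-salience bid wins
        return winner[1]            # broadcast this content

def danger_detector(stimulus):
    if "loud noise" in stimulus:
        return (0.9, "attend to possible danger")

def language_listener(stimulus):
    if "speech" in stimulus:
        return (0.5, "parse what was said")

gw = Workspace([danger_detector, language_listener])
print(gw.cycle({"speech", "loud noise"}))  # -> "attend to possible danger"
```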

So that’s what I have so far. Anyone have any comments or suggestions for an intelligent but still sub-human AI? I’d like to hear them.