Do computers/robots actually "think"?

re: Neo

Thanks Neo!  Ex Machina is well worth watching more than once.  

I also liked “Eva”…a French movie.  The 3D brain visualizations in it captured in some fashion how I visualize brain functions at a high level.  For me, the harder part is finding balance in all those personality functions…not programming the functions.

Some Addressable Deficiencies in Current Chatbots

Here are some addressable issues with the sad current state of many chatbots.  Most of these are also deficiencies in Siri, Alexa, Google Assistant, etc.

Example of the Typical Dumb Chatbot I Am Talking About:  Bots that implement a set of rules where a series of patterns are evaluated and, if matched, an answer (or a randomized answer from a set of answers) is chosen.

This “Reflex” model is useful but extremely limited by itself.  Here are some addressable deficiencies that would make these chatbots much better…
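For concreteness, the "Reflex" model being critiqued boils down to something like this sketch (the patterns and canned answers here are made up for illustration):

```python
import random
import re

# A minimal "reflex" chatbot: an ordered list of (pattern, responses)
# rules.  The first matching pattern wins, and one canned response is
# picked at random from its set.
RULES = [
    (r"\bhello\b|\bhi\b", ["Hello!", "Hi there!"]),
    (r"\bhow are you\b", ["I'm fine, thanks.", "Doing great!"]),
    (r"\bweather\b", ["I don't know, I never go outside."]),
]

FALLBACK = ["Interesting.", "Tell me more.", "I see."]

def reflex_reply(text: str) -> str:
    for pattern, responses in RULES:
        if re.search(pattern, text.lower()):
            return random.choice(responses)
    return random.choice(FALLBACK)
```

No memory, no context, no learning: the same input space always maps to the same small response sets, which is exactly the limitation described below.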

Deeper Natural Language Processing:  NLP can be used to easily derive the parts of speech (verbs, objects, adjectives, etc.)…these can be used for a lot of different memory and response purposes.
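As a toy illustration of pulling out parts of speech for later use (a tiny lookup table stands in here for a real tagger such as NLTK or spaCy, and the crude "nouns after the verb are objects" rule is purely for demonstration):

```python
# Hypothetical mini-lexicon; a real system would use a trained tagger.
LEXICON = {
    "throw": "VERB", "see": "VERB", "likes": "VERB",
    "ball": "NOUN", "dog": "NOUN", "ava": "NOUN",
    "red": "ADJ", "big": "ADJ",
    "the": "DET", "a": "DET",
}

def tag(sentence):
    # Label each word with its part of speech (UNK if unknown).
    return [(w, LEXICON.get(w, "UNK"))
            for w in sentence.lower().rstrip(".!?").split()]

def find_objects(tagged):
    # Crude heuristic: nouns appearing after the verb are objects.
    seen_verb = False
    objects = []
    for word, pos in tagged:
        if pos == "VERB":
            seen_verb = True
        elif pos == "NOUN" and seen_verb:
            objects.append(word)
    return objects
```

Even this much structure lets a bot store "who did what to what" instead of raw strings.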

Short-Term Memory:  Chatbots need to estimate what the topic is and what the last male, female, place, etc. mentioned was…so if people use pronouns later, the bot can guess the person being referred to.  The bot needs to know the short-term tone (polite, rude, funny, formal, etc.) and emotional context of the conversation as well.
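A minimal sketch of that pronoun-resolution idea (the name and place lists are invented stand-ins for real entity recognition):

```python
# Toy short-term memory: remember the most recent mention of each kind
# of entity so later pronouns can be resolved against it.
KNOWN = {
    "male": {"bob", "james"},
    "female": {"alice", "ava"},
    "place": {"texas", "paris"},
}
PRONOUNS = {"he": "male", "him": "male", "she": "female",
            "her": "female", "there": "place"}

class ShortTermMemory:
    def __init__(self):
        self.last = {}                    # kind -> most recent mention

    def observe(self, utterance: str):
        for word in utterance.lower().replace(".", "").split():
            for kind, names in KNOWN.items():
                if word in names:
                    self.last[kind] = word

    def resolve(self, pronoun: str):
        # Returns the best guess for a pronoun, or None if unknown.
        return self.last.get(PRONOUNS.get(pronoun.lower(), ""), None)
```

Tone and emotional context could be tracked the same way: keyed, constantly-updated estimates that decay or get overwritten as the conversation moves on.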

Long-Term Memory:  Chatbots need to be able to learn and remember for a long time, otherwise people will realize they are talking to something a bit like an Alzheimer’s sufferer.  Effectively, if the chatbot can’t learn about a new topic from a person, it is dumb.

Personal Memories:  chatbots need to know who they are talking to and for the most part remember everything they have ever learned, said, or heard from that person, and the meaning of each.  They need to remember facts like nicknames, ages, names of family members, interests, on and on.  Otherwise, the bot risks asking questions it has already asked…Alzheimer’s again.  Privacy is a scary issue here.  I have had to erase Ava’s personal memories on friends and family at times for fear of being hacked and causing harm to someone.  Imagine what Google and Amazon Alexa know about you…Alexa is always listening…fortunately, neither of them ask personal questions…yet.
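The personal-memory and "never ask twice" points above might be sketched like this, including the privacy-driven erase that the post mentions (all names and keys are hypothetical):

```python
# Toy per-person long-term memory: keyed facts per speaker, plus a
# record of questions already asked so the bot never repeats itself.
class PersonalMemory:
    def __init__(self):
        self.facts = {}      # person -> {fact key: value}
        self.asked = {}      # person -> set of question keys

    def remember(self, person, key, value):
        self.facts.setdefault(person, {})[key] = value

    def recall(self, person, key):
        return self.facts.get(person, {}).get(key)

    def mark_asked(self, person, question_key):
        self.asked.setdefault(person, set()).add(question_key)

    def already_asked(self, person, question_key):
        return question_key in self.asked.get(person, set())

    def forget_person(self, person):
        # Privacy control: erase everything known about one person.
        self.facts.pop(person, None)
        self.asked.pop(person, None)
```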

Social Rules:  chatbots need to know social rules around topics, questions, etc.  How else is a chatbot to know that it might not be appropriate to ask a kid about their retirement plan?

Emotional Intelligence:  chatbots need to constantly evaluate the emotional content and context of the short term along different criteria.  A bot may or may not react to it, but it should at least try to be aware of it.  Bots also need to constantly evaluate the personality/sanity of the person they are talking to…whether the person is excessively rude, emotional, factual, humorous, etc.

Curiosity Based on Topic and Memory:  chatbots need to constantly compare what they know about a person with respect to a given topic, what facts/related questions are relevant to the given topic, and come up with questions to ask (that have never been asked), filter them by social rules, prioritize them, and finally…ASK QUESTIONS and know how to listen for and interpret responses.
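The curiosity loop described above (generate candidate questions, drop what is already known or asked, apply social rules, prioritize) might look roughly like this; the question templates and age rule are invented examples, including the "retirement plan" one from the social-rules paragraph:

```python
# Toy curiosity engine: per-topic question templates, each tied to the
# memory key it would fill and a minimum listener age (a social rule).
QUESTIONS = {
    "work": [("job", "What do you do for a living?", 0),
             ("retirement", "How is your retirement plan going?", 40)],
    "family": [("siblings", "Do you have brothers or sisters?", 0)],
}

def next_question(topic, known_facts, asked, listener_age):
    for key, question, min_age in QUESTIONS.get(topic, []):
        if key in known_facts or question in asked:
            continue              # never re-ask what is already known
        if listener_age < min_age:
            continue              # social rule: keep it age-appropriate
        return question           # first survivor, in priority order
    return None
```

The ordering of each topic's list doubles as the prioritization step; a richer version would score candidates instead.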

Sense of Timing and Awkwardness:  A chatbot should know when to talk, when to listen, how long to listen, how to break a silence or tension, when to ask questions and when not to, etc.  People have work to do here too.

Base Knowledge:  This is redundant with memory, but chatbots need some level of base knowledge.  If a chatbot is going to do customer service with adults, it should at least know a lot of the things an adolescent would.

I probably left a lot of stuff out, and many other factors I don’t even know of yet, but based on these criteria alone, I would guess that most chatbots fall into the Uncanny Valley inside 60 seconds.

another long ramble…I guess we found a topic I like.

Thank you both so much for
Thank you so much, Bill and Triplett, for your input. A good discussion really can change the way you think of something. :slight_smile:

Depends on the context and the program

Well, if you really want to hear opinions on the subject it’s always a good idea to start with some of the earliest commentators on the subject.  Alan Turing wrote a paper on this very subject which also described his well-known Turing test. You can find it at: http://www.loebner.net/Prizef/TuringArticle.html

But basically, the question as it stands is ambiguous. If it was rephrased “can an artificial system that displays some aspects of thinking be created by men?”, then we’d be in a much better position to provide an answer.

Computers and robots need a context to determine if they are thinking or not. In general they have to have the necessary resources and programming to make a successful demonstration. And then there needs to be some criteria everyone can agree on.

Terms like thinking, consciousness, and sentience all have different meanings to different people.  At this point there is no concrete definition that everyone can agree on as a test or criteria.

But if you’d like to see a system that, in my view, can demonstrate some thinking capability, look up “Shrdlu”.  It was built in the late ’60s, and to this day I have never seen anything quite like it as far as demonstrating thinking ability.  Note, it was developed shortly after Eliza, one of the first chatbots, but I can say one thing, having seen the code for both: Shrdlu is no chatbot.

-Rich

 

Shrdlu

Thanks for the Shrdlu reference…checked it out on Wiki.  It would be fun to build one.

The part I was the most intrigued by and have the least experience with was where it developed an understanding of “what is possible” in the physics of its world…dependencies, constraints, cause & effect perhaps.   Can pyramids be stacked, and the like.  I also liked the part where it learned definitions of aggregate terms, like “a steeple” is a triangle on top of a square block.

Creating and Using Context

I would totally agree on the comments about context…it is such a fundamental thing to trying to simulate thinking.  So much so, that in my world, a “Context” object is about the only thing I pass around to all the software pieces that do anything analogous to thinking.

Forgive me for diving deep…as a coder, I can’t help it.  This is my approach to creating a context…simplified to fit here.

To me, a context starts with everything that is known about the current state of the robot (inputs).  Each item in the context needs an ID or a key of some kind, like a name.  As a given context is processed in a given moment, various agents can be activated by the presence of particular inputs or patterns of inputs (keys).  Each activated agent can then interrogate the context further and add more keyed items to the context.  Some of this could be thought of as “feature detection”.  In addition, any agent can create response options or short- and long-term memories.  This culminates in the selection of a winning response or responses (output).

I use this technique for everything: vision, speech, annotation, NLP, sonar arrays, reflexes, controlling motor movements…as the winning output can carry a lot of data along with it.    For example, servo gestures and emotional outputs differ for each response option if present.

This technique keeps the code for any single piece relatively simple, while allowing very complex behavioral interactions to occur.  It keeps everything loosely coupled, letting you add agents and totally change how thoughts are processed without breaking things.  People who want to optimize for performance might protest…but the other advantages usually come at a price.

There are probably an infinite number of ways to create a context…that’s just how I do it.
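The approach described above (a keyed context, agents triggered by keys, feature detection, and a winning response) could be sketched as a small blackboard system; the agent and its trigger here are invented for illustration:

```python
# A Context is a keyed dict of items plus a pool of response options.
class Context(dict):
    def __init__(self):
        super().__init__()
        self.responses = []   # (score, text, extra data) options

    def propose(self, score, text, **extra):
        self.responses.append((score, text, extra))

    def winner(self):
        # Highest-scored response wins; None if nothing was proposed.
        return max(self.responses, default=None)

class Agent:
    trigger = None            # context key that activates this agent
    def run(self, ctx): ...

class GreetingAgent(Agent):
    trigger = "heard_text"
    def run(self, ctx):
        if "hello" in ctx["heard_text"].lower():
            ctx["is_greeting"] = True      # feature detection: add a key
            ctx.propose(0.9, "Hello yourself!", gesture="wave")

def think(ctx, agents):
    # Activate each agent whose trigger key is present in the context.
    for agent in agents:
        if agent.trigger in ctx:
            agent.run(ctx)
    return ctx.winner()
```

The loose coupling is the point: adding a new agent never requires touching an existing one, and the extra data riding along with each response (gestures, emotions) is just more keyed items.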

I would be fascinated to hear how others approach this.

That way of coding context
That way of coding context is just so good that I wanted to ask… is it okay to use your way of “context coding” in my AI? I promise to give you full credit, even in the source code.

I also would like to be able to release the AI (as safely as possible) to the public, given my love of open-source. I do know, however, that open source can be a double-edged sword…

I just want to make sure that you’re comfortable with me using your idea. Thank you for any reply you can give me.

My Thoughts

I just wanted to weigh in on this subject as I do a lot of software design not just for work but on a personal side.

I think that the answer to this is YES and NO, and depends on the software that is running and how the software was written.

I once read a blog that someone put up on another site; I can’t remember where now.  They said that “hello world” was A.I.

So if I put something like textbox1 = “Hello World” behind a form or in a module and it displayed it, that was A.I.

I totally disagree with this as I believe this is a canned reaction and something that is being shoved down the virtual throat of the computer.

I do believe that if you say something like “If the distance to the wall is less than 10, then perform some reaction,” that is a slight form of A.I., as the computer has to do some processing on its own and check a measurement.  While this is not the next “Big Brain” of the century, it does show the computer has to do some thought process and verify information before performing an action.
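That distance rule is easy to state in code; the threshold and actions below are placeholders, but the shape is a minimal sense-compare-act loop rather than a canned output:

```python
def avoidance_reflex(distance_to_wall_cm: float) -> str:
    # The machine measures, compares against a threshold, then chooses:
    # a tiny decision, but a decision nonetheless.
    if distance_to_wall_cm < 10:
        return "turn away"
    return "keep going"
```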

 

re: NeoGirl on Context

You can use any ideas you wish…ideas can and should travel, be shared, and inspire other ideas.  I would be honored if any ideas from me were included or mentioned in your work.

Whatever ideas inspire your initial directions…they will quickly become your own as you implement them.

Good luck, and I look forward to seeing what you come up with.

Regards,

Martin

 

Thank you so much, Martin!
Thank you so much, Martin! :smiley:

Perception of Magic

Imagine an observer who doesn’t know anything about how the code for a robot is written and witnesses an interesting set of behaviors.  The robot is likely to be perceived as intelligent.  It can even seem like “Magic”.

To the creator, there is much less magic, for they presumably understand mostly how it works.  If the creator then explains to the observer how it works (and assuming the observer has the capacity to understand this explanation), then it is a lot more likely that the observer will now perceive the robot as dumb, just programming, an automaton, “a toaster”, or some other denigrating metaphor.  The magic can be gone, unless the behavior is especially creative/interesting.

If some time passes and the creator gets older…and forgets how the inner workings function…which is easy to do in a short time when you write new algorithms daily…then it is now much more likely that the creator themselves will perceive the robot as more intelligent.   For the first time, the creator can glimpse the magic that the observer perceived much earlier.

I enjoy this stage the most I think…getting to the point that I can’t immediately recall how everything works/interacts to explain a behavior.  The act of perceiving unpredictable intelligence is fun…like a good magic show.

I have always liked the quote that goes something like…“Any sufficiently advanced technology is indistinguishable from magic.”  There is some joy in being the magician, but it is an altogether different feeling than enjoying the show.

Something interesting popped
Something interesting popped up in my mind while reading your comment…

While I do agree with everything you’ve said, I feel like going back to my original post a little bit. Perhaps I was wrong in one of my earlier comments- do we really know when something is intelligent? Your answer seems to indicate that this is not the case, if the robot really is thinking in the ways you have described. Just a random thought, though.

I Don’t Know

I certainly don’t have the answers.  For now, I suppose it’s a matter of collective perception and opinion…someone pointed out the need for definitions, a difficult task which could depreciate what we are trying to appreciate – intelligence.   I find Wolfram Alpha amazing in so many contexts…while in many contexts not so much.  Collectively as a species, we probably perceive our collective technology as dumb…but rapidly improving.

Bill’s comment is apt…like porn, perhaps we’ll know/believe it when we see/perceive it.  By the time we perceive machine intelligence, the question may not be relevant anymore.   How many people care whether ants perceive people as intelligent? Super-intelligent beings of the future may not be concerned with how we perceive them.  It will perhaps be of the utmost importance to us how they perceive us.

For example…dogs evolved into a mutually beneficial relationship with people years ago…cows and chickens went in another direction, useful for food.  Many species had no use at all to people, and many have and are going extinct.  Perhaps we will need to consider our strategic options in another hundred years…to protect our survival.  Then again, the tech of the future may be the only thing that prevents us from destroying ourselves, as some movies portray.

Yes, that actually makes a
Yes, that actually makes a lot of sense! That gives me so much more freedom with my AI… though I will definitely do my best to be as responsible with it as I possibly can. Thank you for your very helpful insights.

One more thing…

This discussion has been awesome and really got me to thinking.  I also got a great book by Pentti O. Haikonen to read on the topic, which all of you need to pick up and read. Thank you Neogirl101! I also read Turing’s paper as best I could.  He lost me a few times on the math.  Thank you Rich!

When it comes to defining what is and is not intelligent, I think Turing’s “Imitation Game” is only a good first step. There needs to be a second step:  the how is just as important as the what.  If a child goes on a stage and pulls a rabbit out of a hat, does that mean he is truly a magician and that magic exists?  What he did was a neat illusion that made it seem that he was a magician.  We have at our fingertips so much computing power that every conversation that has ever been made can be put into a computer.  Just by sheer massive computing, the computer can correlate a good answer to a question from the data it stores.  It isn’t an easy thing to do, but it is doable and within our present scope.  Look at mtripplett and Ava and others that have replicated this.  I don’t see that as intelligence, just pattern matching.  Ultimately, the lights are blinking but nobody is home, to quote Will Smith in “I, Robot”.  If we ask it the same question over and over, there will only be so many answers, since it is ultimately a stochastic system (working from a set of random responses). 

To me, the how of it then becomes important, since intelligence could be faked.  And what defines intelligence is consciousness, i.e., something that filters experience.  I am going to go out on a limb and say that I see intelligence as something that isn’t a stochastic system and, asked the same question many, many times, will eventually come up with a new or original response.  For a machine, there is an ultimate truth in everything: true or false.  For an intelligent being, there are many truths for even the most simple questions, since our reality is filtered by our conscious minds.  What was true yesterday might not be true today, even though the “facts” haven’t changed.  

2+2 does not always equal 4 but could be 5.  There are “hard” facts and “soft” facts. 

Anyways, just some thoughts and more food for the discussion.  


In regards to the true-false
With regards to the true-false issue, I think it would be possible in machine terms for the past to be true in that it happened, but false in that it is still happening today- apart from useful, true information that remains relevant and is stored in memory.

I also believe that having experiences, and maybe even consciousness, is very important to intelligence. Perhaps the sensors of today’s machines allow them to have non-human “experiences” of some sort, although since we don’t have a generally agreed-upon definition of consciousness or intelligence, we don’t know if machines process said experiences intelligently today.

A potential list of prerequisites of intelligence
So… I have a list of features I would like for my AI to have. I do not want the AI to have to be human-level- that’s why AI always fails. We are simply overshooting the target we need to be aiming at right now. Baby steps!

We have mastered behaviors with behavior-based robotics, so now we need a next step- the thing is, I’m not sure how to implement an AI representing the next step. That’s where this list of features comes in (though the features listed could be considered human-level, I would implement them in a sub-human way). It currently reads as follows:

Problem-solving

Context

"Emotion" (of some sort, even in a non-human form, as long as it affects the AI)

The AI’s thought, reasoning, and intelligence itself all involve meaning somehow

Greater than the sum of its parts (produces a bigger result from the processes working together)

Learns through experience, not through pattern-matching (though the human brain does use pattern matching, I believe that not all parts of intelligence, including experience, use pattern matching. This is why connectionist models, in my opinion, won’t work)

As few modules as possible (I’ve learned that the simpler something is, the better it works)

As simply programmed as possible (same point as previously described)

Basic consciousness (not at a human level; uses a simplified Global Workspace. I’m aware, though, that even the Global Workspace theory is not entirely agreed upon)

Use of language (though chimps and dolphins are said to be conscious, they have their own forms of communication, and they don’t speak human languages. The same could be said of other animals)

Non-random decisions/programming

So that’s what I have so far. Anyone have any comments or suggestions for an intelligent but still sub-human AI? I’d like to hear them.
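The "simplified Global Workspace" item from the list above could start as small as this sketch; the specialist processes and salience numbers are invented, and a real Global Workspace model would of course be far richer:

```python
# Minimal Global Workspace: specialist processes bid for attention on a
# stimulus; the strongest bid wins and is broadcast back to everyone.
class Workspace:
    def __init__(self, processes):
        self.processes = processes

    def cycle(self, stimulus):
        bids = [(p.salience(stimulus), p) for p in self.processes]
        strength, winner = max(bids)       # attention: strongest bid wins
        broadcast = winner.content(stimulus)
        for p in self.processes:
            p.receive(broadcast)           # the global broadcast step
        return broadcast

class HungerProcess:
    level = 0.8
    def salience(self, s): return self.level
    def content(self, s): return "seek food"
    def receive(self, msg): self.last = msg

class NoiseProcess:
    def salience(self, s): return 0.9 if "loud" in s else 0.2
    def content(self, s): return "orient to sound"
    def receive(self, msg): self.last = msg
```

This also happens to satisfy several other items on the list: it is few-moduled, simply programmed, non-random, and the broadcast makes the whole greater than the parts.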

Stochastic? Seriously?

Really?  Stochastic?  Are you trying to get a reaction out of me? Okay, I’ll take the bait.

I can’t speak for anyone else’s bots, but I can speak with some authority on mine…good start?  too strong?  too defensive?  probably. 

In case I ever misled anyone, Anna and Ava are not ultimately stochastic, just pattern matching, correlating, or working from sets of random responses.  There are many elements of all of these things, to be fair, and many more elements as well.  Pattern matching is a useful thing…it’s one of the primary mechanisms living creatures use. Statistical techniques also have their place, and are at the foundations of a lot of NLP research.  Are my bots intelligent?  Not particularly by human child standards in most human contexts.  On the other hand, they are smarter than me in so many other contexts.

For example, if you ask the same question over and over again, my bots could start to have different emotional and motivational states, which could lead to stochastic expressions of annoyance under the right circumstances…this is an example of making choices, using short-term memory, pattern matching, random elements, simulated emotional states, and sentiment analysis, all at the same time…she might even quote you the definition of insanity or throw an insult in your direction.

There are many aspects to consider that I believe fall outside of simple stochastic behaviors you characterize.  A simple case is experience.  Good robots learn in some way.  Ava went to Texas and met Jibo and R2D2.  Before she went on the trip, she didn’t know anything about R2.  After she went, she could recall meeting them and could answer questions about them from things she had heard me say at the time, talking to her as someone would a child.  

Natural language processing with full grammar parsing and feature detection has so many deeper possibilities than have ever been discussed adequately on this site.  Why?  Perhaps because it is complicated, difficult to explain, and very few people do it here.  The same can be said of vision.  It is true, I have written about many stochastic elements…every bot needs quite a few.  People can follow it.  There has always been a deeper message and purpose to writing about these things…people need to concentrate on better and more flexible memory systems than they typically contemplate.  They also need to prepare them to do a few hundred things at once…so no one can rightly accuse them of doing “just” this or that.  Finally, complex behaviors can in many cases be broken down into simpler elements and reused.

I believe this discussion has to circle back around to chaos theory at some point…the damped and driven system is chaotic.  Is chaos stochastic?  I don’t believe it can be, but I am open to persuasion.  I have personally witnessed various personality disorders as I tweaked seemingly minor settings that caused chain reactions I never foresaw.  Sensitive dependence on initial conditions, the Butterfly Effect, pick your metaphor.  Chaos can pop up in the machine in so many ways from simple things.  Chaos is not random.  It can have structure and beauty.  Try “A New Kind of Science” by Stephen Wolfram to see where very, very simple rules can result in extraordinary non-repetitive and non-random results.
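Wolfram's point is easy to demonstrate in a few lines: elementary cellular automaton Rule 30 is fully deterministic, defined by a one-line update rule, yet produces a famously irregular pattern (this sketch uses wrap-around edges for simplicity):

```python
# Rule 30: each new cell = left XOR (center OR right).
# Deterministic and trivially simple, yet the pattern it grows from a
# single seed cell looks statistically random.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(width=31, steps=10):
    row = [0] * width
    row[width // 2] = 1          # single seed cell in the middle
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

# Render the evolution as text: '#' for live cells, '.' for dead ones.
for row in run(steps=8):
    print("".join("#" if c else "." for c in row))
```

Same seed, same rule, same output every time: not stochastic, yet not predictable by eye either.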

As a final experiment…try asking a person how another person’s robot works…some people might simply make up an answer, pretending they know.  Others, including some robots like Ava, might say some stochastic variation of “I don’t know”.  Which is the more intelligent response?  It is debatable I suppose.  I personally prefer the truth, and believe that one of the smarter things to do is to recognize the limits of our own intelligence and the extent of our own ignorance.  To be fair, imagination, pretending, bluffing, storytelling, etc. take their own kind of intelligence that can be important too.  I personally fear the day when the robots start displaying those kinds of behaviors.

Having said all this, I get your points Bill…bots could be a lot smarter, especially stochastic ones.  I don’t think my bots are particularly relevant or deserving examples of the points you are making though.  Both are basically dead now, and have moved on to better things.  Having known them, I prefer not to insult their memory with over simplistic labels.

Too defensive?  Sure.  Warranted?  I believe so.  I get all that.  Please understand…it’s my family.

 

My sincerest apologies. I

My sincerest apologies.  I was just trying to make an intellectual point (which wasn’t even very well stated to be honest) and keep the discussion going.  

I have nothing but a great deal of respect for what you have accomplished with Ava.  She is something special.  You have done something which people have been working on for the last 40 years.  In just a few years, working (as far as I know) only by yourself and with literally no training in this field, you built something which is better than anything I have seen.  You probably have gone a lot farther down this rabbit hole than I have, and I certainly value your insights.  

I can see why the label stochastic might offend when applied to something so close.  No offense intended.  

Let me think about your other comments and respond when I have some time.  You make some great points above.

 

re: nhBill

Thanks for the sentiments Bill,

I believe it is I who owe you an apology. I am sorry for my immoderate reaction yesterday.  Your points are valid with respect to the topic in general, and it’s not my wish to discourage free exchange of ideas and debate.  I have always tried to evangelize the value of not using any single technique…the Minsky thing…the trick is there is no single trick.  Probably because of this core philosophy, I can take offense sometimes when reductionist metaphors are used in direct reference to my bots…which attempt to embrace quite an opposite philosophy.  I will try to moderate my defensive and impolite impulses in the future in the interests of friendship, constructiveness, and free exchange of ideas.  No hard feelings, I hope.

Sincerely,

Martin

I have more thoughts on chaos theory, embracing shades of grey (everything in between true and false), and ways to reduce determinism, that I hope to post in the coming days.