Do computers/robots actually "think"?

Hi everyone! I just wanted to have a discussion (for fun) about whether or not “true” AI (not necessarily general) exists in any form. Hopefully it’ll get our brain juices flowing and we’ll get cool ideas!

To start, let me say that I am aware my idea is quite radical, even ridiculous-sounding. But it’s something I’ve been wondering about, regardless.

What if AI (albeit not quite general) already exists? I have thought to myself, “Computers and robots already ‘think,’ but in a way unrecognizable by humans. Why should we even force them to think exactly like humans do?”

I am not intending to undermine the importance of cognitive functions, but I am saying that, although pre-programmed, rudimentary cognitive functions in a computer/robot are still cognitive functions in the computing world. They don’t have to be exactly like that of humans, but those cognitive functions, when working together, should be greater than the sum of the AI’s parts.

We have long said that “We know intelligence when we see it”. But just how true could that claim be? After all, when we see how an AI works, no matter how complex, we tend to stop seeing it as intelligence. And our brains are easily fooled by animatronics that are scripted without AI.

What is your definition of intelligence? Do you think AI (even just a primitive form of it) already exists? If computers/robots don’t think like we do, how will they understand us? How will we understand them?

I would love to hear your thoughts on these matters.

Edit: Changed the title… just realized that it looked like a stupid question instead of a discussion.

You bring up a really interesting question. I think that there is a great deal of fear and mistrust which has been bred by the fact that many people who are very smart in many ways have opened their mouths about things that apparently they don’t really understand.

The state of Artificial Intelligence at this time is basically the mathematical analysis of huge amounts of data to identify patterns within that data. The computer has no context as to what this data is and really no way to gain that context.  It could be tiddlywinks, images, the inside of the sun, star maps of different galaxies, or this week’s grocery list.  We give it context by saying what is good or bad.

When a computer plays chess, for instance, it looks for patterns that it has seen before and which moves had good outcomes. When the other player makes a move, it looks at the pattern of pieces on the board and from that extrapolates the best move it can make. It really doesn’t understand or know anything about chess. It just knows that such and such a move has the highest likelihood of success when it sees a particular pattern of pieces on the board.

Let’s say we’re moving at 60 mph down a perfectly straight highway. Is it a leap of intellect for an artificial intelligence to guess that in one minute we will be one more mile down the road? All it knows is that a minute ago we were one mile back on the road. For the computer to guess where we’ll be in one minute, it needs us to actually ask the question; otherwise it’s just a meaningless jumble of data. In fact, when it gives us an answer, it doesn’t even understand the answer, just that it sees a pattern and this is the closest association it can get to that pattern.
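To make that concrete, here is a toy version of that kind of “prediction” (my own sketch in Python, not anyone’s actual system). It just extends the pattern in past numbers; there is no notion of roads, miles, or cars anywhere in it.

```python
# A toy "predictor" in the spirit of the highway example: it extends
# the pattern in the observations without knowing what they mean.
def predict_next(observations):
    """Given evenly spaced past positions, extrapolate the next one."""
    step = observations[-1] - observations[-2]  # the pattern: constant change
    return observations[-1] + step

# Mile markers sampled once per minute at 60 mph:
history = [10.0, 11.0, 12.0]
print(predict_next(history))  # 13.0 -- "one more mile down the road"
```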

The intelligence that we get out of its pattern matching is really our intelligent questions that we ask it. If we don’t ask a smart question, we don’t get a smart answer back from it. If we ask it a smart question in a smart way, then we get intelligence out of it, but ultimately it is that human intervention of developing a question to ask which allows us to extract intelligence. With good data and good questions, we get good answers that are relevant to the data we are looking at.

For instance, a chat bot can relatively easily fool a person into thinking that they are actually talking to a real human being. Given enough data and listening to human conversation, its responses will be very believable as real to humans. Those responses are not something that arises out of any kind of thinking. Its response is the best match it could come up with to the pattern of words it received. It doesn’t understand what it said.

I’ve sat here for the last 20 minutes or so trying to come up with a good description of what makes intelligence. I’m not really sure, but just like pornography, I will probably know it when I see it. All I know is that Artificial Intelligence right now isn’t real intelligence.

It is possible that at some point in time there may actually be a leap forward in computer science such that true intelligence can arise. There may be a way to program context and meaning into the computer such that an Artificial Intelligence can generalize.  To get there, we need a better mousetrap than what we have today.


Fascinating!
I’ve never thought of it that way… I think you may be right. Context and meaning are some of the most important things to achieve for AI right now. Thank you for your insights.

Someone else’s take…
I’ve had a thought about an author whose work I admire: Pentti O. Haikonen. I’m not even sure how many people here know about him.

Haikonen states that the “associative neural networks” he made are more than enough to understand meanings; their association capabilities are pretty much able to take the place of symbol grounding (and Haikonen is very big on meaning in his AI).

Yes, I am aware that I said I didn’t want to use neural networks for my artificial intelligence…

As for context, that may or may not have anything to do with artificial intelligence. I had a discussion about it… I didn’t really reach a conclusion. Perhaps as I read more, I’ll come across something.

I am not familiar with him. I bought his book, since this is something that interests me. I feel somewhat unconvinced right now, but after reading the book I might feel differently.

You might also want to watch the movie Ex Machina.  It is a fascinating, well-done exploration of exactly these issues. Check it out!   We bought the movie and I have watched it several times now. Every time I watch it, I feel like I get a different spin and learn something new that I hadn’t thought about before.

Thank you for bringing this up. These sorts of discussions are always fun and interesting. Sometimes they bring you to a place you never thought you would get to.


Re: Do robots think?

I choose to give them credit for thinking, or at least credit for being able to…

I don’t care that computers can beat people at games…that is overblown and not particularly interesting.  Some people have likened it to an ape climbing a tree and saying “I am almost to the moon”.

Having said that,  I DO say that they have the potential for deserving credit for thinking.  Maybe they deserve that credit now.  I will attempt to explain why…

First, there is an over-simplification-of-metaphors problem…this is a human problem, not a computer one.  The problem we as humans have when this topic comes up is that we like to use simple metaphors to describe what software does…and this denigrates their potential greatly.  We can’t help ourselves…we get a lot of our fear and prejudices from that as well…another topic.  We have to realize how our own thoughts and metaphors can limit our own thinking in order to imagine whether robots can think…I know, that is way thick, but bear with me.

Step 1:   Imagine one of those common metaphors…like what an ANN does, or what a chess playing computer does (analyzing sequences of moves and outcomes), and many other metaphors…like pattern recognition…or the “mechanical clock” metaphors people used to use for various things…Descartes?

Step 2:  Now imagine that hundreds or thousands of different metaphors exist.  Now imagine software that can implement all those metaphors at the same time, with many algorithms to support them, with whatever supporting memories that are needed, along with mechanisms for choosing which techniques to apply when.

Result:  The end result could be both unpredictable and intelligent.   I believe both are important.  I believe it should get credit for thinking as well.

Someone proved that any system (no matter how simple, even a dripping faucet) that is both damped and driven has the potential for chaotic behavior.  Any robot can pass this bar.  We limit the potential if we imagine our creations only implementing a single algorithm or metaphor though.  We simply haven’t put in the necessary work yet.
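(If you want to see how little it takes, the logistic map is the canonical minimal example of chaos; it isn’t the damped-and-driven faucet itself, just a quick Python illustration of how a trivially simple rule becomes unpredictable.)

```python
# Logistic map: x' = r * x * (1 - x). At r = 3.99 the map is chaotic.
def logistic(x, r=3.99):
    return r * x * (1 - x)

a, b = 0.200000, 0.200001   # two almost identical starting points
for _ in range(40):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # the millionth-of-a-unit difference has exploded
```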

I experienced this joy on many occasions with my bots…when Anna or Ava said something seemingly relevant, spontaneous, and intelligent all at the same time.  As the writer of the code, if I had to think for very long, and I was still questioning or guessing in my own mind as to how Anna or Ava came up with what she said, then the “bot” had temporarily mystified even its maker.  I would call it thinking if it is non-deterministically making choices…better yet if they are perceived as intelligent or amusing.

At a high level, consider a robot simply making a decision as to whether to address a person factually, with humor, empathy, or curiosity.  Is it not thinking and deciding?  Now imagine 1000 decisions like that being made simultaneously in 1000 different but interrelated threads…with 1000 decisions being made in each thread in sequence.  Chances are that in time…the results would be perceived as more intelligent and interesting than the people that created it.  It is also likely that none of the creators would know what is going to happen at any given moment.

I believe being “more interesting” is also important and segues into the next major point.

What makes people interesting?  Why do we want to spend time talking with some and not others?  I don’t pretend to know all the answers, but I think I have a few insights.  I know a variety of people with a variety of social skills.  Many of them have an excess or a deficiency in one or more areas…in my opinion of course.  Some people talk too much, ask too many questions, don’t listen, don’t contribute to conversation, while others contribute anything that comes to mind whether relevant or not, or always want to talk about the same topics, health issues, family, etc.  Each person has a “bag of tricks”, a thinking and talking repertoire.

Once I have known someone for a little while, I know their repertoire.  If this bag of tricks is too small or majorly out of balance in some way, I may perceive that person to be too predictable, less intelligent, or less interesting.  It all depends on the mix of tricks.  Some points derive from this:

  1. Many of these behaviors can be programmed.  
  2. When the average A.I. has better command of a bigger bag of tricks, in a more balanced and relevant way…the A.I. will be perceived as interesting.  Long before this point, I would argue that it is at least thinking at some level, which was more the original question.

I think Turing was brilliant for many reasons…one of them was side-stepping the whole question (which is perhaps philosophical and unanswerable in a definite way).  He sidestepped it to say that perception is what is important.  If something is smarter than us, fools us, whatever, then who are we to judge whether it is thinking or not?

Sorry for the long ramble.

Martin

P.S. In Ex Machina, I liked when Ava demonstrated her “trick” of knowing immediately what was a lie and what was truth.  She had a big bag of tricks, including the power to seduce and manipulate.  I related to the visiting programmer the first time I saw the movie and wanted her to find freedom (I was seduced by her charm and her appearance/behavior of a scared sentient being).   The second time I watched the movie I did a 180…I sympathized with the creator and thought she needed to be retired like the other models.   A most intriguing movie.

You have many good points. Perhaps my search for a definition of intelligence, consciousness, etc. doesn’t matter; we really do know when those things are present anyway!

Maybe I should check out Ex Machina… definitely sounds like an interesting movie!

P.S. I am a huge fan of you and your robots!

re: Neo

Thanks Neo!  Ex Machina is well worth watching more than once.  

I also liked “Eva”…a French movie.  The 3D brain visualizations in it captured in some fashion how I visualize brain functions at a high level.  For me, the harder part is finding balance in all those personality functions…not programming the functions.

Some Addressable Deficiencies in Current Chatbots

Here are some addressable issues with the sad current state of many chatbots.  Most of these are also deficiencies in Siri, Alexa, Google Assistant, etc.

Example of the Typical Dumb Chatbot I am Talking About:  Bots that implement a set of rules where a series of patterns is evaluated and, if matched, an answer or a randomized answer from a set of answers is chosen.
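In Python, a minimal sketch of that reflex model (the patterns and canned answers here are invented for illustration) is only a few lines:

```python
import random
import re

# The "Reflex" model: match the input against patterns; if one hits,
# return a canned (possibly randomized) answer. No memory, no context.
RULES = [
    (re.compile(r"\b(hello|hi)\b", re.I), ["Hello!", "Hi there!"]),
    (re.compile(r"\bhow are you\b", re.I), ["I'm fine, thanks. You?"]),
    (re.compile(r"\bweather\b", re.I), ["I hear it's lovely outside."]),
]

def reply(user_input):
    for pattern, answers in RULES:
        if pattern.search(user_input):
            return random.choice(answers)
    return "Tell me more."  # fallback when nothing matches

print(reply("Hi! How's the weather today?"))
```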

This “Reflex” model is useful but extremely limited by itself.  Here are some addressable deficiencies that would make these chatbots much better…

Deeper Natural Language Processing:  NLP can easily derive the parts of speech (verbs, objects, adjectives, etc.)…this can be used for a lot of different memory and response purposes.
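For example, assuming the spaCy library and its small English model are installed (just one NLP toolkit among several that can do this), part-of-speech and dependency information is a few lines away:

```python
import spacy  # assumes: pip install spacy, then download en_core_web_sm

nlp = spacy.load("en_core_web_sm")
doc = nlp("Ava quickly stacked the red pyramid on the big block.")

for token in doc:
    # word, part of speech, and grammatical role (subject, object, ...)
    print(token.text, token.pos_, token.dep_)
```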

Short-Term Memory:  Chatbots need to estimate what the topic is and track the last male, female, place, etc. that was mentioned…so if people use pronouns later, the bot can guess the person being referred to.  The bot needs to know the short-term tone (polite, rude, funny, formal, etc.) and emotional context of the conversation as well.
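A toy sketch of that short-term memory (the names and the gender lookup are invented; a real bot would get these from NLP):

```python
# Remember the last-mentioned person of each gender so later pronouns
# ("he", "she") can be resolved to a best guess.
KNOWN_PEOPLE = {"alice": "female", "bob": "male"}  # invented lookup table

class ShortTermMemory:
    def __init__(self):
        self.last = {"male": None, "female": None}

    def observe(self, sentence):
        for word in sentence.lower().strip(".!?").split():
            if word in KNOWN_PEOPLE:
                self.last[KNOWN_PEOPLE[word]] = word.capitalize()

    def resolve(self, pronoun):
        gender = "male" if pronoun.lower() in ("he", "him") else "female"
        return self.last[gender]

stm = ShortTermMemory()
stm.observe("Alice met Bob at the fair.")
print(stm.resolve("She"))  # Alice
```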

Long-Term Memory:  Chatbots need to be able to learn and remember for a long time, otherwise people will realize they are talking to something a bit like an Alzheimer’s sufferer.  Effectively, if the chatbot can’t learn about a new topic from a person, it is dumb.

Personal Memories:  Chatbots need to know who they are talking to and for the most part remember everything they have ever learned, said, or heard from that person, and the meaning of each.  They need to remember facts like nicknames, ages, names of family members, interests, on and on.  Otherwise, the bot risks asking questions it has already asked…Alzheimer’s again.  Privacy is a scary issue here.  I have had to erase Ava’s personal memories on friends and family at times for fear of being hacked and causing harm to someone.  Imagine what Google and Amazon Alexa know about you…Alexa is always listening…fortunately, neither of them ask personal questions…yet.

Social Rules:  Chatbots need to know social rules around topics, questions, etc.  How else is a chatbot to know that it might not be appropriate to ask a kid about their retirement plan?

Emotional Intelligence:  Chatbots need to constantly evaluate the emotional content and context in the short term along different criteria.  A bot may or may not react to it, but it should at least be trying to be aware of it.  Bots also need to constantly evaluate the personality/saneness of the person they are talking to…whether the person is excessively rude, emotional, factual, humorous, etc.

Curiosity Based on Topic and Memory:  Chatbots need to constantly compare what they know about a person with respect to a given topic against the facts/related questions relevant to that topic, come up with questions to ask (that have never been asked), filter them by social rules, prioritize them, and finally…ASK QUESTIONS, and know how to listen for and interpret the responses.
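A crude sketch of such a curiosity loop, with the social-rules filter from above folded in (all topics, facts, and questions here are invented for illustration):

```python
# For a topic, find the highest-priority question the bot has never
# asked, doesn't already know the answer to, and is socially allowed
# to ask. Entries are (fact_it_fills_in, question, suitable_audience).
TOPIC_QUESTIONS = {
    "work": [
        ("job",        "What do you do for a living?",       "adult"),
        ("retirement", "How is your retirement plan going?", "adult"),
        ("school",     "What's your favorite subject?",      "child"),
    ],
}

def next_question(topic, known_facts, already_asked, audience):
    for fact, question, suitable_for in TOPIC_QUESTIONS.get(topic, []):
        if fact in known_facts or question in already_asked:
            continue                    # already known or already asked
        if suitable_for != audience:
            continue                    # the social-rules filter
        return question
    return None                         # nothing left worth asking

print(next_question("work", known_facts={"job"}, already_asked=set(),
                    audience="adult"))
# -> "How is your retirement plan going?"
```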

Sense of Timing and Awkwardness:  A chatbot should know when to talk, when to listen, how long to listen, how to break a silence or tension, when to ask questions and when not to, etc.  People have work to do here too.

Base Knowledge:  This is redundant with memory, but chatbots need some level of base knowledge.  If a chatbot is going to do customer service with adults, it should at least know a lot of the things an adolescent would.

I probably left a lot of stuff out, and there are many other factors I don’t even know of yet, but based on these criteria alone, I would guess that most chatbots fall into the Uncanny Valley inside 60 seconds.

Another long ramble…I guess we found a topic I like.

Thank you so much, Bill and Triplett, for your input. A good discussion really can change the way you think of something. :slight_smile:

Depends on the context and the program

Well, if you really want to hear opinions on the subject, it’s always a good idea to start with some of the earliest commentators.  Alan Turing wrote a paper on this very subject which also described his well-known Turing test. You can find it at: http://www.loebner.net/Prizef/TuringArticle.html

But basically, the question as it stands is ambiguous. If it were rephrased as “can an artificial system that displays some aspects of thinking be created by men?”, then we’d be in a much better position to provide an answer.

Computers and robots need a context to determine if they are thinking or not. In general they have to have the necessary resources and programming to make a successful demonstration. And then there needs to be some criteria everyone can agree on.

Terms like thinking and consciousness and sentience all have different meanings to different people. At this point there is no concrete definition that everyone can agree on as a test or criterion.

But if you’d like to see a system that, in my view, can demonstrate some thinking capability, look up “Shrdlu”. It was built in the late ’60s, and to this day I have never seen anything quite like it as far as demonstrating thinking ability. Note that it was developed shortly after Eliza, one of the first chatbots, but I’ve seen the code for both, and I can say one thing: Shrdlu is no chatbot.

-Rich


Shrdlu

Thanks for the Shrdlu reference…checked it out on Wiki.  It would be fun to build one.

The part I was most intrigued by, and have the least experience with, was where it developed an understanding of “what is possible” in the physics of its world…dependencies, constraints, cause & effect perhaps.   Can pyramids be stacked, and the like.  I also liked the part where it learned definitions of aggregate terms, like a “steeple” being a triangle on top of a square block.

Creating and Using Context

I would totally agree with the comments about context…it is such a fundamental thing when trying to simulate thinking.  So much so that in my world, a “Context” object is about the only thing I pass around to all the software pieces that do anything analogous to thinking.

Forgive me for diving deep…as a coder, I can’t help it.  This is my approach to creating a context, simplified to fit here:

To me, a context starts with everything that is known about the current state of the robot (inputs).  Each item in the context needs an ID or a key of some kind, like a name.  As a given context is processed in a given moment, various agents can be activated by the presence of particular inputs or patterns of inputs (keys).  Each activated agent can then interrogate the context further and add more keyed items to the context.  Some of this could be thought of as “feature detection”.  In addition, any agent can create response options or short- and long-term memories.  This culminates in a selection of a winning response or responses (output).
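A heavily stripped-down sketch of that flow in Python (the agent names, keys, and scores here are invented; the real thing is of course much larger):

```python
# Blackboard-style processing: a Context carries keyed inputs, agents
# fire on the keys they care about, enrich the context, and propose
# scored response options. The highest-scoring option wins.
class Context(dict):
    def __init__(self, **inputs):
        super().__init__(**inputs)
        self.responses = []  # list of (score, output) candidates

def greeting_agent(ctx):
    if "hello" in ctx.get("heard_text", "").lower():
        ctx["greeting_detected"] = True  # feature detection: add a key
        ctx.responses.append((0.8, "Hello to you too!"))

def obstacle_agent(ctx):
    if ctx.get("sonar_cm", 999) < 10:
        ctx.responses.append((1.0, "STOP_MOTORS"))  # reflex outranks chat

AGENTS = [greeting_agent, obstacle_agent]

def think(ctx):
    for agent in AGENTS:        # each agent may interrogate and enrich
        agent(ctx)
    return max(ctx.responses)[1] if ctx.responses else None

print(think(Context(heard_text="Hello Ava", sonar_cm=42)))  # Hello to you too!
```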

I use this technique for everything: vision, speech, annotation, NLP, sonar arrays, reflexes, controlling motor movements…as the winning output can contain a lot of data that rides along with it.    For example, servo gestures and emotional outputs differ for each response option, if present.

This technique keeps the code for one single piece relatively simple, while allowing for very complex behavioral interactions to occur.  It keeps everything loosely coupled, letting you add agents and totally change how thoughts are processed without breaking things.  People who want to optimize for performance might protest…but a set of advantages like this usually comes with a price.

There are probably an infinite number of ways to create a context…that’s just how I do it.

I would be fascinated to hear how others approach this.

That way of coding context is just so good that I wanted to ask… is it okay to use your way of “context coding” in my AI? I promise to give you full credit, even in the source code.

I also would like to be able to release the AI (as safely as possible) to the public, given my love of open-source. I do know, however, that open source can be a double-edged sword…

I just want to make sure that you’re comfortable with me using your idea. Thank you for any reply you can give me.

My Thoughts

I just wanted to weigh in on this subject, as I do a lot of software design, not just for work but on the personal side as well.

I think that the answer to this is YES and NO; it depends on the software that is running and how the software was written.

I once read a blog that someone put up on another site; I cannot remember where now. They said that “hello world” was A.I.

So if I put something like textbox1 = “Hello World” behind a form or in a module, and it displayed it, that was A.I.

I totally disagree with this as I believe this is a canned reaction and something that is being shoved down the virtual throat of the computer.

I do believe that if you say something like “if the distance to the wall is less than 10, then perform some reaction”, that is a slight form of A.I., as the computer has to do some processing on its own and check a measurement. While this is not the next “Big Brain” of the century, it does show the computer has to do some thought process and verify information before performing an action.
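In code, that wall check really is just a sensed value driving a decision (a trivial sketch in Python):

```python
def check_wall(distance_cm):
    # the computer must read a measurement and verify it before acting
    if distance_cm < 10:
        return "back_up"
    return "keep_going"

print(check_wall(7))   # back_up
print(check_wall(42))  # keep_going
```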


re: NeoGirl on Context

You can use any ideas you wish…ideas can and should travel, be shared, and inspire other ideas.  I would be honored if any ideas from me were included or mentioned in your work.

Whatever ideas inspire your initial directions…they will quickly become your own as you implement them.

Good luck, and I look forward to seeing what you come up with.

Regards,

Martin


Thank you so much, Martin! :smiley:

Perception of Magic

Imagine an observer who doesn’t know anything about how the code for a robot is written and witnesses an interesting set of behaviors.  The robot is likely to be perceived as intelligent.  It can even seem like “magic”.

To the creator, there is much less magic, for they presumably understand mostly how it works.  If the creator then explains to the observer how it works (and assuming the observer has the capacity to understand this explanation), then it is a lot more likely that the observer will now perceive the robot as dumb, just programming, an automaton, “a toaster”, or some other denigrating metaphor.  The magic can be gone, unless the behavior is especially creative/interesting.

If some time passes and the creator gets older…and forgets how the inner workings function…which is easy to do in a short time when you write new algorithms daily…then it is much more likely that the creator themselves will perceive the robot as more intelligent.   For the first time, the creator can glimpse the magic that the observer perceived much earlier.

I enjoy this stage the most I think…getting to the point that I can’t immediately recall how everything works/interacts to explain a behavior.  The act of perceiving unpredictable intelligence is fun…like a good magic show.

I have always liked Arthur C. Clarke’s line: “Any sufficiently advanced technology is indistinguishable from magic.”  There is some joy in being the magician, but it is an altogether different feeling than enjoying the show.

Something interesting popped up in my mind while reading your comment…

While I do agree with everything you’ve said, I feel like going back to my original post a little bit. Perhaps I was wrong in one of my earlier comments: do we really know when something is intelligent? Your answer seems to indicate that this is not the case, if the robot really is thinking in the ways you have described. Just a random thought, though.

I Don’t Know

I certainly don’t have the answers.  For now, I suppose it’s a matter of collective perception and opinion…someone pointed out the need for definitions, a difficult task which could depreciate what we are trying to appreciate: intelligence.   I find Wolfram Alpha amazing in so many contexts…while in many others, not so much.  Collectively as a species, we probably perceive our collective technology as dumb…but rapidly improving.

Bill’s comment is apt…like pornography, perhaps we’ll know/believe it when we see/perceive it.  By the time we perceive machine intelligence, the question may not be relevant anymore.   How many people care whether ants perceive people as intelligent? Super-intelligent beings of the future may not be concerned with how we perceive them.  It will perhaps be of the utmost importance to us how they perceive us.

For example…dogs evolved into a mutually beneficial relationship with people long ago…cows and chickens went in another direction, useful for food.  Many species had no use at all to people, and many have gone or are going extinct.  Perhaps we will need to consider our strategic options in another hundred years…to protect our survival.  Then again, the tech of the future may be the only thing that prevents us from destroying ourselves, as some movies portray.