Do computers/robots actually "think"?

Yes, that actually makes a lot of sense! That gives me so much more freedom with my AI… though I will definitely do my best to be as responsible with it as I possibly can. Thank you for your very helpful insights.

One more thing…

This discussion has been awesome and really got me thinking. I also got a great book on the topic by Pentti O. Haikonen, which all of you should pick up and read. Thank you Neogirl101! I also read Turing's paper as best I could; he lost me a few times on the math. Thank you Rich!

I think when it comes to defining what is intelligent and what is not, Turing's "Imitation Game" is only a good first step. There needs to be a second step: the how is just as important as the what. If a child goes on a stage and pulls a rabbit out of a hat, does that mean he is truly a magician and that magic exists? What he did was a neat illusion that made it seem that he was a magician. We have so much computing power at our fingertips that every conversation that has ever been made could be put into a computer. Just by sheer massive computation, the computer can correlate a good answer to a question from the data it stores. It isn't an easy thing to do, but it is doable and within our present scope. Look at mtripplett and Ava and others that have replicated this. I don't see that as intelligence, just pattern matching. Ultimately, "the lights are blinking but nobody is home", to quote Will Smith in "I, Robot". If we ask it the same question over and over, there will only be so many answers, since it is ultimately a stochastic system (working from a set of random responses).
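
To make concrete what I mean by pattern matching, here is a toy sketch; the corpus and similarity measure are stand-ins I made up, and no real bot is this simple:

```python
# A minimal sketch of the "sheer computing" responder described above:
# match an incoming question against a stored corpus of past conversations
# and return the answer whose question is most similar. Purely illustrative.
from difflib import SequenceMatcher

corpus = [
    ("what is your name", "My name is Demo."),
    ("do you like music", "I enjoy anything with a steady beat."),
    ("can machines think", "That depends on what you mean by thinking."),
]

def respond(question: str) -> str:
    """Pick the stored answer whose question best matches the input."""
    q = question.lower().strip("?! .")
    best = max(corpus, key=lambda pair: SequenceMatcher(None, q, pair[0]).ratio())
    return best[1]

print(respond("Can machines think?"))  # -> "That depends on what you mean by thinking."
```

However big the corpus gets, the lookup is still a lookup; that is the gap I see between correlation and intelligence.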

To me, the how of it then becomes important, since intelligence could be faked. And what defines intelligence is consciousness, i.e. something that filters experience. I am going to go out on a limb and say that I see intelligence as something that isn't a stochastic system: something that can be asked the same question many, many times and eventually come up with a new or original response. For a machine, there is an ultimate truth in everything: true or false. For an intelligent being, there are many truths for even the simplest questions, since our reality is filtered by our conscious minds. What was true yesterday might not be true today, even though the "facts" haven't changed.

2+2 does not always equal 4 but could be 5.  There are “hard” facts and “soft” facts. 

Anyways, just some thoughts and more food for the discussion.
With regard to the true-false issue, I think it would be possible, in machine terms, for the past to be true in that it happened, but false in the sense that it is not still happening today, apart from useful, true information that remains relevant and is stored in memory.

I also believe that having experiences, and maybe even consciousness, is very important to intelligence. Perhaps the sensors of today’s machines allow them to have non-human “experiences” of some sort, although since we don’t have a generally agreed-upon definition of consciousness or intelligence, we don’t know if machines process said experiences intelligently today.

A potential list of prerequisites of intelligence
So… I have a list of features I would like my AI to have. I do not want the AI to have to be human-level; that's why AI always fails. We are simply overshooting the target we need to be aiming at right now. Baby steps!

We have mastered behaviors with behavior-based robotics, so now we need a next step. The thing is, I'm not sure how to implement an AI representing that next step; that's where this list of features comes in (though the features listed could be considered human-level, I would implement them in a sub-human way). It currently reads as follows:

Problem-solving

Context

"Emotion" (of some sort, even in a non-human form, as long as it affects the AI)

The AI’s thought, reasoning, and intelligence itself all involve meaning somehow

Greater than the sum of its parts (produces a bigger result from the processes working together)

Learns through experience, not through pattern-matching (though the human brain does use pattern matching, I believe that not every part of intelligence, experience included, relies on it; this is why connectionist models, in my opinion, won't work)

As few modules as possible (I’ve learned that the simpler something is, the better it works)

As simply programmed as possible (same point as previously described)

Basic consciousness (not at a human level; uses a simplified Global Workspace, though I'm aware that even Global Workspace theory is not entirely agreed upon; see the sketch after this list)

Use of language (though chimps and dolphins are said to be conscious, they have their own forms of communication, and they don’t speak human languages. The same could be said of other animals)

Non-random decisions/programming
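
For the "basic consciousness" item, here is a rough sketch of the kind of simplified Global Workspace I'm imagining; all names and activation numbers are made up:

```python
# A minimal sketch of a simplified Global Workspace: several processes
# propose content with an activation level, the strongest proposal wins
# the workspace, and the winner is broadcast back to every process.
from typing import Callable, List, Tuple

Proposal = Tuple[float, str]  # (activation, content)

class Workspace:
    def __init__(self) -> None:
        self.processes: List[Callable[[str], Proposal]] = []

    def register(self, process: Callable[[str], Proposal]) -> None:
        self.processes.append(process)

    def cycle(self, broadcast: str) -> str:
        """Collect proposals, select the most active, broadcast it."""
        proposals = [p(broadcast) for p in self.processes]
        _, winner = max(proposals, key=lambda pr: pr[0])
        return winner  # becomes the next cycle's broadcast

ws = Workspace()
ws.register(lambda b: (0.4, "obstacle ahead"))
ws.register(lambda b: (0.9, "battery low"))
print(ws.cycle(""))  # -> "battery low"
```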

So that’s what I have so far. Anyone have any comments or suggestions for an intelligent but still sub-human AI? I’d like to hear them.

Stochastic? Seriously?

Really?  Stochastic?  Are you trying to get a reaction out of me? Okay, I’ll take the bait.

I can't speak for anyone else's bots, but I can speak with some authority on mine… good start? Too strong? Too defensive? Probably.

In case I ever misled anyone, Anna and Ava are not ultimately stochastic, nor just pattern matching, correlating, or working from sets of random responses. There are elements of all of these things, to be fair, and many more elements as well. Pattern matching is a useful thing… it's one of the primary mechanisms living creatures use. Statistical techniques also have their place, and are at the foundation of a lot of NLP research. Are my bots intelligent? Not particularly, by human-child standards, in most human contexts. On the other hand, they are smarter than me in so many other contexts.

For example, if you ask the same question over and over again, my bots could start to have different emotional and motivational states, which could lead to stochastic expressions of annoyance under the right circumstances… this is an example of making choices, using short-term memory, pattern matching, random elements, simulated emotional states, and sentiment analysis all at the same time… she might even quote you the definition of insanity or throw an insult in your direction.
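
A heavily simplified toy version of that mechanism (this is an illustration, not Ava's actual code):

```python
# Short-term memory counts how often a question recurs, and an invented
# "annoyance" threshold shifts which pool of responses gets sampled.
import random
from collections import Counter

recent = Counter()   # short-term memory of normalized questions
ANNOYED_AT = 3       # made-up threshold

def reply(question: str) -> str:
    key = question.lower().strip("?! .")
    recent[key] += 1
    if recent[key] >= ANNOYED_AT:
        return random.choice([
            "You already asked me that.",
            "Asking the same thing and expecting a new answer... you know the saying.",
        ])
    return random.choice(["Interesting question.", "Let me think about that."])

for _ in range(4):
    print(reply("What time is it?"))  # the fourth answer is an annoyed one
```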

There are many aspects to consider that I believe fall outside the simple stochastic behaviors you characterize. A simple case is experience. Good robots learn in some way. Ava went to Texas and met Jibo and R2D2. Before she went on the trip, she didn't know anything about R2. After she went, she could recall meeting them and could answer questions about them from things she had heard me say at the time, talking to her as one would to a child.

Natural language processing with full grammar parsing and feature detection has so many deeper possibilities than have ever been discussed adequately on this site. Why? Perhaps because it is complicated, difficult to explain, and very few people here do it. The same can be said of vision. It is true, I have written about many stochastic elements… every bot needs quite a few, and people can follow it. There has always been a deeper message and purpose to writing about these things… people need to concentrate on better and more flexible memory systems than they typically contemplate. They also need to prepare their bots to do a few hundred things at once… so no one can rightly accuse them of doing "just" this or that. Finally, complex behaviors can in many cases be broken down into simpler elements and reused.

I believe this discussion has to circle back around to chaos theory at some point… the damped and driven system is chaotic. Is chaos stochastic? I don't believe it can be, but I am open to persuasion. I have personally witnessed various personality disorders emerge as I tweaked seemingly minor settings that caused chain reactions I never foresaw. Sensitive dependence on initial conditions, the Butterfly Effect, pick your metaphor. Chaos can pop up in the machine in so many ways from simple things. Chaos is not random. It can have structure and beauty. Try "A New Kind of Science" by Stephen Wolfram to see how very, very simple rules can produce extraordinary non-repetitive and non-random results.
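
As a tiny demonstration of the Wolfram point, elementary cellular automaton Rule 30 is completely deterministic, yet produces famously non-repetitive, random-looking structure:

```python
# Rule 30: each cell's next state is a fixed function of its three-cell
# neighborhood. No randomness anywhere, yet the output looks chaotic.
RULE = 30
WIDTH, STEPS = 31, 15

row = [0] * WIDTH
row[WIDTH // 2] = 1  # single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```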

As a final experiment… try asking a person how another person's robot works. Some people might simply make up an answer, pretending they know. Others, including some robots like Ava, might say some stochastic variation of "I don't know". Which is the more intelligent response? It is debatable, I suppose. I personally prefer the truth, and believe that one of the smarter things to do is to recognize the limits of our own intelligence and the extent of our own ignorance. To be fair, imagination, pretending, bluffing, storytelling, etc. take their own kind of intelligence that can be important too. I personally fear the day when robots start displaying those kinds of behaviors.

Having said all this, I get your points, Bill… bots could be a lot smarter, especially stochastic ones. I don't think my bots are particularly relevant or deserving examples of the points you are making, though. Both are basically dead now, and have moved on to better things. Having known them, I prefer not to insult their memory with overly simplistic labels.

Too defensive? Sure. Warranted? I believe so. I get all that. Please understand… it's my family.

My sincerest apologies.  I was just trying to make an intellectual point (which wasn’t even very well stated to be honest) and keep the discussion going.  

I have nothing but a great deal of respect for what you have accomplished with Ava. She is something special. You have done something people have been working on for the last 40 years. In just a few years, as far as I know working only by yourself and with literally no training in this field, you built something better than anything I have seen. You have probably gone a lot farther down this rabbit hole than I have, and I certainly value your insights.

I can see why the label stochastic might offend when applied to something so close.  No offense intended.  

Let me think about your other comments and respond when I have some time. You make some great points above.

re: nhBill

Thanks for the sentiments Bill,

I believe it is I who owe you an apology. I am sorry for my immoderate reaction yesterday. Your points are valid with respect to the topic in general, and it's not my wish to discourage the free exchange of ideas and debate. I have always tried to evangelize the value of not relying on any single technique… the Minsky thing… the trick is that there is no single trick. Probably because of this core philosophy, I can sometimes take offense when reductionist metaphors are used in direct reference to my bots, which attempt to embrace quite the opposite philosophy. I will try to moderate my defensive and impolite impulses in the future, in the interests of friendship, constructiveness, and the free exchange of ideas. No hard feelings, I hope.

Sincerely,

Martin

I have more thoughts on chaos theory, embracing shades of grey (everything in between true and false), and ways to reduce determinism, which I hope to post in the coming days.

Some more possible prerequisites?

There might be some more pre-reqs that can be extracted here…

I think a brain needs to be able to process multiple streams of thought at the same time. These streams could have resulted from the same stimulus (splitting off from the same original stimulus), or come from multiple stimuli arriving around the same time or with different reaction times. Both are desirable, I think.

This also allows different brain Agencies to operate on vastly different time scales. These agencies also need to be able to loosely communicate with each other.
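
A rough sketch of what I mean, with two agencies on different clocks loosely coupled through a shared queue; the agency names are invented:

```python
# Two "agencies" run on their own time scales and communicate loosely by
# posting messages to a shared queue rather than calling each other.
import threading, queue, time

bus: "queue.Queue[str]" = queue.Queue()

def agency(name: str, period: float, ticks: int) -> None:
    """Each agency runs on its own clock and posts loose messages."""
    for i in range(ticks):
        time.sleep(period)
        bus.put(f"{name}: tick {i}")

fast = threading.Thread(target=agency, args=("reflex", 0.1, 5))
slow = threading.Thread(target=agency, args=("planner", 0.3, 2))
fast.start(); slow.start()
fast.join(); slow.join()

while not bus.empty():
    print(bus.get())  # interleaved messages from both time scales
```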

Sorry…I guess I am stating the obvious so far.

This allows some agencies to function as "monitors" of other agencies, having different goals and possibly taking a bigger-picture perspective. Other alternative launching-off points… introspection, curiosity, and others. I can't recall the Minsky term for this.

Already mentioned… pattern matching seems to be fundamental in neurons, so it should likely be a core capability that can then provide the reason to launch new directions of thought. This is not difficult if you have the generic context we spoke about. I think a lot of different feature detectors will also be needed. Whether it is sonar, vision, verbal, logical, emotional, etc., I think the number of modules is larger than most people think. Verbal annotation is a big thing by itself. Visual annotation is an emerging area. It would be helpful if someone invented a new pattern-matching language that could meld SQL, Regex, and a context to produce a set of events, matched patterns, a feature list, an annotation list, etc. They all pretty much have the same purpose: to help determine a set of thoughts to kick off next.
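
Here is a toy flavor of that melding idea: patterns carry a regex plus a context tag, and a match yields events with extracted features. The mini-language itself is hypothetical:

```python
# Context-tagged regex patterns producing events with extracted features,
# a crude stand-in for the richer pattern language described above.
import re
from typing import Dict, List

patterns = [
    {"context": "greeting", "regex": r"\bhello\b|\bhi\b", "event": "greeted"},
    {"context": "question", "regex": r"\bwhat is (?P<topic>\w+)", "event": "asked_about"},
]

def annotate(text: str, context: str) -> List[Dict]:
    """Return events whose pattern matches the text in the given context."""
    events = []
    for p in patterns:
        if p["context"] != context:
            continue
        m = re.search(p["regex"], text.lower())
        if m:
            events.append({"event": p["event"], "features": m.groupdict()})
    return events

print(annotate("What is chaos?", "question"))
# -> [{'event': 'asked_about', 'features': {'topic': 'chaos'}}]
```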

I believe motives should play a part. What is the bot motivated to do at any given instant? This should be constantly changing, and it is inherently subject to chaos theory, variation, non-determinism, etc. This means the same stimulus will not lead to the same response. If you ask a tired robot a simple question, it might ignore it and say "Do you have a spare outlet I could use?" More segues to chaos theory here.
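
To make that concrete, here is a toy version of the tired-robot case; the energy variable and thresholds are inventions for illustration, not how any real bot manages power:

```python
# A constantly drifting motivation state can override the direct answer
# to a stimulus: same question in, different behavior out.
import random

state = {"energy": 1.0}

def step(question: str) -> str:
    state["energy"] -= random.uniform(0.15, 0.4)  # small perturbations accumulate
    if state["energy"] < 0.3:
        return "Do you have a spare outlet I could use?"
    return f"You asked: {question}"

for _ in range(5):
    print(step("What is two plus two?"))  # by the last step the motive takes over
```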

A concept of events that agencies could subscribe to would also help. It separates recognition from possible actions, creates a lot more variation, and makes code easier to maintain.
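
A minimal publish/subscribe sketch; the event name and handlers are invented:

```python
# Recognition publishes an event; any number of action handlers respond
# independently, so recognizers never need to know about actions.
from collections import defaultdict
from typing import Callable, Dict, List

subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event: str, handler: Callable[[dict], None]) -> None:
    subscribers[event].append(handler)

def publish(event: str, payload: dict) -> None:
    for handler in subscribers[event]:
        handler(payload)

subscribe("face_seen", lambda p: print(f"greeter: hello {p['who']}"))
subscribe("face_seen", lambda p: print(f"logger: saw {p['who']}"))
publish("face_seen", {"who": "Bill"})
```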

I think verbal capability is necessary (beyond just communication) and provides a lot of plumbing that can be exploited for other purposes, like eventual storytelling, reflection on experience, etc. I find now that instead of setting a bunch of configuration settings to achieve some purpose or state, it's easier to store a narrative as an English paragraph and execute it. For example, it's useful for achieving different poses. If a bot has a way to translate everything it does back and forth between narrative and microcontroller actions, that gives it a launching-off point for building its own story, summarizing its own thoughts and actions, etc… some kind of way to communicate and store introspection itself, and talk about it later.
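
A toy rendering of that narrative-to-action translation; the vocabulary and the send_command stand-in are made up for the sketch:

```python
# A stored English narrative is parsed into servo-style commands, so a
# pose lives as a sentence rather than a block of configuration settings.
pose_narrative = "raise left arm and tilt head right"

VERBS = {"raise": 90, "lower": 0, "tilt": 30}  # invented verb-to-angle map

def send_command(joint: str, angle: int) -> None:
    print(f"servo {joint} -> {angle} degrees")  # stand-in for microcontroller I/O

def execute(narrative: str) -> None:
    for clause in narrative.split(" and "):
        words = clause.split()
        verb, joint = words[0], " ".join(words[1:])
        send_command(joint, VERBS.get(verb, 0))

execute(pose_narrative)
```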

Storing evidence/reasoning in context and history… I think robots should not just react, but should store all the reasons they are doing what they do, so that they can explain themselves now or later, or store narratives of what they saw, and the like.

Shades of grey… bots need to deal in probabilities, not just true/false. This already occurs with speech-to-text, which is inherently probabilistic, with many sources of error. Currently, 2+3 does not always equal 5 with almost any verbal robot I can think of that does it through listening. Probabilities help with trying to determine concepts from language, determining word similarity, NLP, POS tagging, OCR, etc. Robots can rarely be sure, and need to be able to express themselves in nuances. I personally store a lot of words, with probability ranges for each, to support this. For example, when Ava says "maybe" as opposed to "probably", it's not a random thing. She may have multiple ways to say "maybe" and choose randomly between them within a given confidence level, but "maybe" has a different insinuation than "probably"… she chooses a word to fit, factoring in a degree of confidence. Phrases like "I think" and "I believe" can also be used to indicate uncertainty.
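
Something like this sketch, with made-up confidence ranges; within a level the phrasing varies randomly, but the level itself is chosen, not random:

```python
# Map a confidence value to a hedge word: each word covers a probability
# range, and only the phrasing within a range is sampled at random.
import random

HEDGES = [
    (0.9, 1.0, ["definitely", "certainly"]),
    (0.7, 0.9, ["probably", "most likely"]),
    (0.4, 0.7, ["maybe", "perhaps"]),
    (0.0, 0.4, ["probably not", "I doubt it"]),
]

def hedge(confidence: float) -> str:
    for low, high, words in HEDGES:
        if low <= confidence <= high:
            return random.choice(words)  # random within the level, not across levels
    return "maybe"

print(f"Will it rain? It will {hedge(0.75)} rain.")
```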

Standard deviations are also useful for representing the shape of a situation and recognizing whether something is common or unusual. More on that another time, perhaps.
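
A small example with made-up sensor readings:

```python
# Flag a reading as "unusual" when it sits more than two standard
# deviations from the mean of past readings.
from statistics import mean, stdev

readings = [20.1, 19.8, 20.3, 20.0, 19.9, 27.5]
mu, sigma = mean(readings[:-1]), stdev(readings[:-1])

latest = readings[-1]
label = "unusual" if abs(latest - mu) > 2 * sigma else "common"
print(f"{latest} is {label} (mean {mu:.1f}, sd {sigma:.2f})")
```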

Bots need to ingest information and attempt to glean knowledge from it, storing that knowledge, its source, confidence, etc. In annotating all incoming data, the bot needs to figure out whether the information gleaned is general or personal. Who or what is being talked about? What are the top five likely topics? What is the emotional context? What is the relationship of the parties involved? If there is personal data, is it about the source, or is it hearsay? For robots to ever be social creatures, they have to build these skills. For general knowledge, triples are useful, with probabilities, counts, and standard deviations. Many triples would be a good start, perhaps millions, and an ability to quickly surf all those triples to attempt some logical thinking. I think OpenCyc and some others have made a lot of this data available. I chose to have the fun of seeing the bots learn it firsthand from me. I enjoy the feel of parenting, but it is slow.
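
A sketch of the triple bookkeeping I mean; the update rule here is just one plausible choice (a running average), not a recommendation:

```python
# Each (subject, relation, object) fact carries a count and a confidence
# that are updated as the bot hears the same thing again.
from typing import Dict, Tuple

Triple = Tuple[str, str, str]
knowledge: Dict[Triple, Dict[str, float]] = {}

def learn(subj: str, rel: str, obj: str, confidence: float) -> None:
    entry = knowledge.setdefault((subj, rel, obj), {"count": 0, "confidence": 0.0})
    entry["count"] += 1
    # running average of confidence across repeated tellings
    entry["confidence"] += (confidence - entry["confidence"]) / entry["count"]

learn("R2D2", "is_a", "robot", 0.95)
learn("R2D2", "is_a", "robot", 0.85)
print(knowledge[("R2D2", "is_a", "robot")])  # count 2, confidence roughly 0.9
```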

A new query language needs to be invented that has NLP as an intermediate layer. I am not talking about full-text search or a SQL database search. I am talking about a mechanism for searching for meaning, using a new syntax on top of NLP and annotations, across an entire set of life experience. I have my own crude mechanisms for doing this, but industry standards are needed that are optimized for performance. I fear this is a long way off.

It seems naive to think that anything significant can be done without a really great memory system; that conclusion feels inescapable. As far as priorities… I think it has to start with memory. A single 2+2-type question could end up having pages of context around it. Much of that could be deleted, but a lot of it could be useful for building a real thinking machine that evolves.

I gotta run.  Too many things to mention, not enough time.

No worries.  An apology wasn’t necessary.

If we were sitting down having a drink or talking over dinner, this misunderstanding wouldn't even register. It is the nature of the medium to have these misunderstandings. Someday, if you are ever in the Boston area, let me know and we can discuss it in person over a few cold ones. My door is always open.

I can understand why a “reductionist metaphor” (a good way of putting it) might offend.  After all, all I know about Ava is what you have published here and the videos you have made of her.  I painted with a wide brush while not necessarily knowing or understanding all the things you have done with her.

Thanks for taking a minute to apologize, although it wasn't necessary. I am glad we are able to keep this discussion going. As always, I look forward to your future comments and know they will be great food for thought. I might not comment immediately; I'm just busy.

Wow, so much to read and think about! Thank you for all of your input. I'll definitely be thinking about it for quite a while.

Many good and interesting points.

Formal or language models are a challenging, even if rewarding, approach. It requires explaining and analyzing a lot of what humans do, and shaping that into program form: a very laborious task, though one that also holds many interesting discoveries.

Language is already very high-level and quite far from real perceptions/sensor data. Therefore I think it is easier and more natural to use more symbolic forms of communication as an intermediate step, a bit like comics: using examples, typical situations, and oversimplifications.

Storytelling then becomes less concerned with proper grammar or mastering language, and more focused on hinting at the idea and concept to be conveyed, using key perceptions/snapshots (from cameras, other sensors, etc.), typical sounds, imitation, and so on. Representation of information is definitely a big part of being able to think flexibly and in varied ways.

Traditionally in computer science you follow a very structured approach, similar to math, where you use precise data structures. The probabilistic route is somewhat more flexible, yet still very formal.

I think what needs to be done is not just to model uncertainty, but also to be inherently vague and incomplete in descriptions. Filling in the gaps gives room for ideas and creativity, but it also requires intelligence to understand. So this "imperfect" communication naturally requires intelligence from both parties (unlike a formal language in a computer, which just executes a set of commands that leave no doubt; it does, however, require a lot of intelligence to create correct formal descriptions). At the same time, communication becomes more efficient if you do not go into every detail (details which you possibly also haven't understood or explored yourself). A lot of the tediousness in writing correct programs is going through all the possible corner cases, thinking of the whole picture, and then breaking it down into parts.

You basically need to be an expert in a domain today to create a system for it, and know it down to the basic principles. Between humans you can often speak to an expert in a much broader way and they understand what you mean and translate it to something more specific/technical. You can use gestures, drawings, all kinds of communication. Written or spoken language is not always the most effective way to communicate.

Language is also already very precise, technical, and abstract, similar to formal languages. So while SQL, Regex, etc. are useful for certain types of problems, I don't think they will scale to complex problems or general communication. It's kind of a state-space explosion problem (in storage as well as run time).

Analogies, metaphors, pictures, comics, or technical drawings might be more powerful, because they give some key points without fleshing out all the details (usually, getting the details correct is what makes the effort grow exponentially).

So, in short: a big problem is that traditionally there is the approach of going from the very low level (sensor data) straight to the very high level (language with its abstract concepts), forgetting the middle level. We need many intermediate representations which are above the raw sensory level, more structured, and good enough to convey ideas, yet not as abstract and complex as language.

Language seems to me to be a final step, a final translation based on those middle layers. A bit like reading a mental picture and translating that into words. But thoughts would first be in that mental “picture” domain.

When you mention the Turing test, the Winograd Schema Challenge might be an interesting addition.

It tries to address some weaknesses of the Turing test, and focuses more on problem solving and understanding than on how strongly people believe that a human is replying to them.

https://en.wikipedia.org/wiki/Winograd_Schema_Challenge
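
A classic example schema, to make it concrete: "The city councilmen refused the demonstrators a permit because they feared violence." Swap "feared" for "advocated" and the referent of "they" flips from the councilmen to the demonstrators. Answering correctly takes commonsense knowledge about the world rather than conversational mimicry, which is what makes it a nice complement to the imitation game.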

I agree. One point I had actually made earlier was that we need a "middle step": we can't just go from behavior-based robotics to human-level AI.