AI: Is it really intelligent?

Hello everyone! I’m back.

So I had an interesting discussion today about the AI Effect. Simply put, the AI Effect is the tendency that once a computer becomes able to do something a human can do, that “something” is no longer considered a sign of intelligence.

My original thought was that if human emotion stems from neuromodulation or some other neural process, and we implement something similar in AIs, it would still be perceived as unintelligent simply because we know how it works. (I know, though, that the *physical* feelings of human emotion remain out of AI’s grasp.)

I am aware that this is really out there, but in my opinion, somewhat intelligent robots already exist. While I don’t believe AI will ever reach the “fully human” level, I also don’t believe that AI, even on an Arduino, is by nature unintelligent.

I *do* believe that there are many kinds of intelligence. Insects displaying foraging behavior and fleeing from danger are behaving rationally and intelligently, though they are not as smart as humans, IMHO. I also believe that while having more functionality in an AI does make it “smarter,” it doesn’t have to have all of our bells and whistles to be intelligent in some way.

I will admit, I am biased towards thinking machines. If there is anything you’d like to say, or if there are any flaws in my reasoning, I would like to hear from you. I look forward to a very interesting and insightful discussion!


I would agree with ALL of

I would agree with ALL of your points, perhaps because they mirror my own experiences.

I will paraphrase some of your points (Ps) in my own words:

P1.  Machines can exhibit intelligent behavior

P2.  Once people hear how a machine does something intelligent, then they tend to no longer perceive the behavior as intelligent.

This leads to a few thoughts (Ts) for me.

T1.  Perception is a key issue.  Instead of objectively measuring intelligence, we are subjectively judging whether we perceive something as intelligent.  Turing got to the heart of this with the Turing Test, the “Imitation Game”, etc.

T2:  Let’s say we have the ability to explain any single algorithm, which we then mentally reduce to being “dumb”.  If an intelligent machine is running hundreds of algos that are competing to exhibit their behavior at any given moment, we would likely lose the ability to deduce which algorithm is winning at any given moment.  We might then “give up” on trying to mentally reduce the robot to being dumb, and perceive the entire system as “intelligent”.  I know this is not scientific, but I have found it to be true in my own years of experimentation.
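
A minimal sketch of what I mean, with made-up behavior names (not any real robot’s code): many simple behaviors each compute an activation, an arbiter lets the strongest one act each tick, and from the outside it quickly gets hard to tell which “dumb” rule is actually in control.

```python
import random

# Hypothetical behaviors: each looks at the sensors and returns
# (activation, action) for the current tick.
def avoid_obstacle(sensors):
    return (1.0 if sensors["range_cm"] < 20 else 0.0, "turn_away")

def seek_light(sensors):
    return (sensors["light"], "drive_toward_light")

def wander(sensors):
    return (0.1 + 0.1 * random.random(), "random_walk")

BEHAVIORS = [avoid_obstacle, seek_light, wander]

def arbitrate(sensors):
    # Every behavior bids; the strongest activation wins this tick.
    bids = [(behavior(sensors), behavior.__name__) for behavior in BEHAVIORS]
    (activation, action), winner = max(bids)
    return action, winner

print(arbitrate({"range_cm": 15, "light": 0.4}))   # avoid_obstacle takes over
print(arbitrate({"range_cm": 120, "light": 0.7}))  # seek_light wins instead
```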

T3:  Humans are biased toward thinking humans are intelligent and machines (and animals/plants) are dumb.  It fits historical experience, and it makes humans feel better about themselves, which is something they want to do.  Some would call it confirmation bias.  I would argue that most people are not particularly intelligent most of the time.  We are good at being self-serving.  I could say the same of a lot of species.  We spent millions of years as hunter/gatherers.  The rate at which we generated new ideas has been amazingly slow.  We also don’t hold ourselves to the same standards as we do with machines.  When a child learns algebra and uses it to solve a problem, we praise them as being smart.  We don’t say “They are just doing math.  They are really dumb.”  If we systematically tried to explain human behavior or emotions and judge them as smart/dumb, we might not like the results.  This bias extends to other things like law, religion, racism, the way we treat animals, etc.  We want to think we are intelligent and special, and give ourselves license to exploit anything else.  This comes back to my point about humans being “self-serving”.

T4:  Putting perception aside, objective measures of intelligence are also key, more so IMO.  This means we need large data sets that are labelled, thus defining “truths” that errors can be measured against.  For neural nets, this allows us to train, backpropagate errors, and arrive at optimal solutions.  At some point, if something can recognize images or speech better than we can, then how it does it, and whether we deem it intelligent or dumb, becomes philosophical and irrelevant at the same time.
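
To make T4 concrete, here is a toy sketch of the “labelled truths plus error measurement” loop, assuming nothing fancier than NumPy: a single linear neuron fit by gradient descent, where the labels define the truth that errors are measured against.

```python
import numpy as np

# Toy labelled data set: inputs X and "truth" labels y (here y = 2*x1 + 1*x2).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, 1.0])

w = np.zeros(2)                      # weights to be learned
for epoch in range(500):
    pred = X @ w                     # forward pass
    err = pred - y                   # error against the labelled truth
    grad = X.T @ err / len(X)        # gradient of the (halved) mean squared error
    w -= 0.1 * grad                  # gradient step (backpropagation in miniature)

print(w)                             # converges toward [2.0, 1.0]
```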

T5:  I think much more work and thought needs to be done on “systems” of many intelligent agents working together.  So many specialists are working on one technique (Neural Nets, for example) at a time under the idea that intelligence will be able to be reduced to a single technique.  While this may turn out to be true in the long run, it is an example of human stupidity in the short run in my opinion.  Having said that, I am a huge fan of many different types of NNs for different types of problems.  It’s a huge, growing field.  I suppose your “ant colony” is another example.  You might think each ant is dumb, but it’s hard to argue that the colony working together is not intelligent.

T6:  A lot of people think that something that is simple is deterministic and therefore dumb.  Chaos theory would disagree.  Any system that is damped and driven at the same time has the potential for chaotic (and seemingly very creative) behavior.  Chaos does not mean random; it can have intricate patterns of structure that may never repeat.  Stephen Wolfram shows a lot of this in his book “A New Kind of Science”, where very simple rules create very complicated results.  It is thought that nature does similar things in many ways.
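
A tiny example of the “simple rules, complicated results” idea is Wolfram’s Rule 30 cellular automaton: the update rule fits in one line, yet the pattern it produces is famously irregular (it has even been used as a pseudo-random generator).

```python
# Rule 30: next cell = left XOR (center OR right)
WIDTH, STEPS = 64, 32
row = [0] * WIDTH
row[WIDTH // 2] = 1          # start from a single "on" cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [row[(i - 1) % WIDTH] ^ (row[i] | row[(i + 1) % WIDTH])
           for i in range(WIDTH)]
```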

Finally, you mentioned machines reaching human intelligence.  I personally believe machines will be dumb for a while longer, and then they will be super-intelligent, blowing past us like a car driving past a mile marker on a highway.  The only question is when.  I don’t know whether it will happen in 2 years, 20 years, or after my death.  Once they become super-intelligent, we will lose the ability to perceive how intelligent they are.  We will either be entirely dependent (like toddlers) or victims of circumstances we don’t understand.  The toddler case is probably the best case scenario.

The wakeup call for me was when a more generic version of the AlphaGo AI taught itself chess by playing itself in 4 hours and beat all prior chess programs.  https://www.sciencealert.com/it-took-4-hours-google-s-ai-world-s-best-chess-player-deepmind-alphazero  I think it cracked the Enigma code in 19 minutes.  I would have to think something like that could do most white collar jobs better than people.

To me, this is a new kind of super-intelligence, in a given domain.  At some point, it will be any domain.  What happens when this intelligence can run in embodied form in a robot?

All of this is just one man’s opinion, and I might be dumb.


One (really) dumb (semi) human’s (semi) thoughts

This discussion of intelligence, as well as most I have seen, reminds me of "The Blind Men and the Elephant."

www.allaboutphilosophy.org/blind-men-and-the-elephant.htm

We poke around at “intelligence” by choosing specific tasks that we as “intelligent” creatures can do and other creatures and machines can’t (yet) do, and define intelligence by those tasks.  When a machine is built (programmed) to do that task, we say, “well, that isn’t really intelligence,” and pick another task (blind man).

The key is to define intelligence.  It’s mostly accepted that we as humans can do things other species can’t.  But what, why, and how?  A few researchers have pondered long and hard on the subject, with not a lot of good results.  But there have been some.  Unfortunately it’s really hard to find out what other species “think.”  Therefore it’s really hard to figure out what makes us special.  And it may very well turn out that we find, as mtriplett alluded to, that we aren’t really all that special.  I personally don’t think that will happen, but I think the difference between us and other species (for instance dolphins, elephants, chimpanzees) will be a lot smaller than most people are comfortable with.

Our brains are, to a first approximation, big pattern matching machines.  The work of James Albus and William(?) Powers in the 70s and 80s  led to some amazing discoveries and results with neural networks.  But is that all there is to it?  Or is there something more that we aren’t aware of (a soul?)

We, and other creatures, respond to and manipulate our environment.  Our pattern matchers are built and programmed for that.  But, it seems, we humans go further; we “contemplate” a different environment.  And, we contemplate ourselves in our current environment as well as the different environment.  Exploration, space travel, the universe outside Earth, social engineering.  And, perhaps most important, we contemplate our place, our intelligence, other species intelligence, and creating new intelligence.  It appears that other species don’t do that.

So, I think those things are “symptoms” of what makes us different.  They aren’t what makes us different, but instead are caused by what makes us different.  And someone who isn’t blind needs to step back and take a look at the elephant as a whole.

Personally, I think that neural nets (or something that does the same thing a different way) will lead to the first truly intelligent computer.  Unfortunately, most NN research and writing is about image recognition or something similar.  But Albus and Powers, mentioned earlier, had much greater goals.  They built control systems that responded to and manipulated their environment.  They did it with a few dozen or maybe a few hundred neurons.  We have around 100 billion or so.  A few researchers have built NNs with a few thousand that emulated small creatures.  What would happen if we built one with 100 billion?

I agree with mtriplett that we will be surpassed.  The genie is out of the bottle now.  Kind of like cloning, if you regulate it people in small, out of the way places in the world are still gonna do it.  And it’s only a matter of time, now.

But what do I know?  I’m just a dumb human!

re: oldguy

Great post.  Great points.

I wasn’t familiar with the fable…good read, and very on point.  I often feel like one of the blind men.  I think a key step to becoming more intelligent is to realize one is blind from the start.  The person that says “I don’t know” has at least started with a foundation on which to build.

Yes, intelligence needs a definition, or multiple, and tests to measure it.  The Turing Test is not really an intelligence test.  It has some value, but in the end is more of a gullibility/perception test for humans.  I have often thought that a good test might be a robot being able to complete K-12 at an above average school alongside human children.  This means class participation, recess, PE, music, group projects, tests, etc.  I studied some training material for teachers of what conceptual skills are to be taught at each grade level…it would be a huge challenge for roboticists, but so is parenting.  At one time I entertained trying to enroll my Anna bot in the first grade and trying to program her throughout the year to keep up…knowing she would likely fail but would improve and move to grade 2 eventually.

As far as the rest of the animal kingdom, I suspect there is a lot more intelligence there than most of us see (we are blind to it).  I watch my cats a lot.  I hear lots of stories about how crazy intelligent wild pigs are.  It’s an open question whether whales and dolphins are as smart or smarter.  Obviously, people have achieved all kinds of things like space travel; I believe it’s due to us living on land (as opposed to whales/dolphins), and us having great hands with opposable thumbs.  Living on land has several advantages…it makes it a lot easier to make things, and to preserve information by writing things down.  Imagine trying to make anything underwater, with salt water in everything.  Imagine trying to scratch information on the side of a cave while underwater and attempting to swim at the same time.  Imagine trying to write or make something without hands.  I imagine whales have great verbal skills and memories, as they can’t preserve info otherwise.  If we didn’t have the ability to make tools and experiment, we would probably never have even been an apex predator.  Without language and knowledge preservation, we might have been less sophisticated than a pack of wolves or a pod of dolphins.  For me, it would be hard to “contemplate” much, as you talk about, without words/language/symbols.  For that, I am very thankful for being human.  Hands are a plus too.

I too hold a lot of hope for NNs.  There are a lot of specialized types of NNs evolving (and dying), like RNNs and LSTMs, which are now being replaced by “Attention” models.  I think the large corporate-owned personal assistants (using many different NNs internally) will just keep getting better and better, understanding more and more context and gaining better verbal abilities.  Humans leave info out all the time, but our minds fill in the blanks from the context.  I think the Google Assistant is in the lead.  I expect it to get 100X better in the next 5 years.  I fear the day when AIs start calling us to manipulate us and it works.
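
For anyone curious what an “Attention” model actually computes at its core, here is a bare-bones scaled dot-product attention in NumPy; it is only a sketch of the primitive, not any assistant’s actual implementation.

```python
import numpy as np

def attention(Q, K, V):
    # How similar is each query to every key, scaled by sqrt(dimension)?
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns the scores into weights that sum to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the values: it "attends" to what matters.
    return weights @ V

Q = np.random.rand(3, 8)                     # 3 queries
K = np.random.rand(5, 8)                     # 5 keys
V = np.random.rand(5, 8)                     # 5 values
print(attention(Q, K, V).shape)              # (3, 8)
```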

I am going to stop rambling now.

You guys have made some

You guys have made some interesting points!

We do have a lot of trouble defining intelligence: the only definitions I can think of that are “universal” (though probably not really) are the ones in dictionaries. Then again, looking at the whole picture might generally result in a “universal” definition. Anything else is probably too subjective!

*Edit: fixed typo

Being Blind

It’s like a twelve-step program:  step #1, admit you are blind!  There are a lot of people who won’t say "I don’t know."

I’ve seen some of your work on this site.  It’s quite impressive.  It would sure be interesting to see how your Anna held up in school.  It would be quite educational for US to see that!

I once knew a guy who “preached” to me every day about how superior humans were and how we were masters of the planet/universe, and on and on and …

I finally got tired of it and decided to do something.  I convinced him, over a few days, that dolphins were actually the masters and MUCH more intelligent than us.  They were so much smarter, that they realized a long, long time ago that all of our “society” was a waste of time and resources and would lead to unhappiness.  They decided that just swimming around doing pretty much as they pleased with no worries about money, fancy cars, social status, or whatever, was much more rewarding.  I had fun with it, but at the same time, could it have some truth to it?

Certainly our bodies and our environment have made things easier for us, but I don’t think that necessarily precludes other species from having different, but equally capable, abilities.  Would we recognize another intelligence if we saw it?

Why?

Personally, I think “why” is one of the key factors of intelligence.

We humans ask why.  And then we attempt to answer the question, which leads us into all sorts of (mis)adventures.

Compare really intelligent people to those considered to not be very intelligent.  I think you will find the biggest difference is that the really intelligent ones ask “why” about most everything they see, whereas the less intelligent ones just accept it.

Do other species ask why?  Hard to say.  But it would sure be interesting to find out.

If we want our robots to be “intelligent” then we need to teach them to be like every four year old:  “Why?”

Why Indeed

Excellent point.  I think you are correct on the “why” line of thought.  That is quite some intuition IMO.

I remember reading something about this (could be 20 yrs ago) where someone attempted to study this scientifically.  Take everything I am saying with a grain of salt as my memory is sketchy on things I read last week, and especially so long ago.

I believe they were studying chimpanzees and human babies and how they seem to learn.  It was evident from their study that human babies were learning in a fundamentally different way.  Somehow they came to the same hypothesis that you did…that the key difference with humans is that we ask why.  This allows humans to gain more fundamental understandings that the chimps could not.  I have no idea of the credibility of the people doing the study or what’s been done since then, but there it is.

Playing the skeptic for a moment:  The conclusion could be a result of confirmation bias.  Humans want to be superior.  I am pretty sure the apes in the “Planet of the Apes” series that were studying ape/human diffs knew in advance what they were going to conclude as well…apes are superior.

I made rudimentary attempts at programming some aspects of “why” into verbal robots.  There are some obvious ways…like getting a robot to ask why when it hears something new that it doesn’t have facts to substantiate.  I had to dampen this way down to only happen a small percentage of the time…otherwise the robot sounded like a curious 5 yr old…asking why constantly.  Another aspect of why is unseen/unheard…getting a robot to maintain a list of all its facts that back up its current train of thought or speech.  This means for everything that is said, there are perhaps a few or a hundred related facts / evidence / etc.  This becomes part of the context of a situation.  One of my goals with this was so that a robot can answer “why” questions from me, explain its own thinking in a narrative, etc.  This is one of the reasons I went down the path of getting Ava to talk in paragraphs…as a stepping stone to answering why questions from me and building her own internal narratives.  It was fun to experiment with.
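
Roughly the shape of that “dampened why” logic, as a sketch (the names, facts, and probability here are made up, not Ava’s actual code): ask why only for unsupported statements, only a small percentage of the time, and keep the supporting facts around so the robot can later answer “why” itself.

```python
import random

ASK_WHY_PROBABILITY = 0.05    # dampened way down, so it doesn't ask why constantly
knowledge = {"the stove is hot": ["mom said so", "it burned me once"]}  # made-up facts
context_trail = []            # facts backing up the current train of thought

def hear(statement):
    support = knowledge.get(statement, [])
    context_trail.append((statement, support))
    if not support and random.random() < ASK_WHY_PROBABILITY:
        return f"Why is it that {statement}?"
    return "Okay."

def answer_why(statement):
    # so the robot can explain its own thinking when asked "why"
    support = knowledge.get(statement, [])
    return " and ".join(support) if support else "I don't know why."

print(hear("robots are dangerous"))     # usually "Okay.", occasionally a why question
print(answer_why("the stove is hot"))   # "mom said so and it burned me once"
```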

I think I may have known you by a diff name once.  If so, hello old friend.

Thanks!

Thanks for the compliment!  But please don’t read too much into my rambling thoughts.  I’m just an amateur old hack that thinks and talks too much about stuff I know little about.  I doubt I came to that conclusion on my own.  I probably read something similar somewhere.  But, like you, I have trouble remembering what I had for lunch… while I’m having lunch!

Certainly the confirmation bias is real.  And very hard to overcome when all you have is humans trying to do the testing.

I would like to know more about your questioning robot.  It sounds really interesting.  I would especially be interested in your approach to programming it.

I don’t think I’ve ever talked with you before, but see above about my memory.  I’ve never been much of an online guy until recently.  It was recommended to me as a method of escape :)

 

Our Successor?

Will this replace us?

https://www.brainchipinc.com/products/akida-neuromorphic-system-on-chip

 

Ramblings of Another Old Guy

I guess there are two major dev tracks for robots…software and hardware.  I don’t know much about the hardware side, but it seems the current trend is to develop new chips (like the Neuromorphic one OldGuy mentioned), that are better aligned with the types of software solutions that are being built.  GPUs were the leader.  Now TPUs (Tensor processing units) seem to be the current winner and have the first mover advantage for modern ML.  Likewise on the software side, Tensorflow seems to be the winner for building anything that learns.

People alive today will be replaced, either by their kids, cyborgs, robots, or another species.  The planet will benefit if the rate of replacement slows or reverses.  Until we humans start taking responsibility for all our actions, we will probably keep killing everything else until it eventually kills us off.  The loss of bio-diversity is scary.  Our only recourse might be to invent our own bio-diversity moving forward, first in simulators.   I digress.

I’ve been studying reinforcement learning lately, and how AlphaZero does it generically and is super-human for games like Go, Chess, and Shogi.  It’s about 500 lines of code on top of Keras/Tensorflow and is open source.  The genie is small, elegant, and definitely out of the bottle.
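
Not AlphaZero itself, but here is the self-play idea in miniature, under heavy simplifications (a toy Nim game instead of chess, a lookup table instead of a neural net, plain Monte-Carlo value updates instead of MCTS): the agent learns purely from playing against itself, with wins and losses as its only teacher.

```python
import random
from collections import defaultdict

# Self-play on a toy game of Nim: players alternate taking 1-3 sticks from a
# pile of 10, and whoever takes the last stick wins.  No human examples: the
# only feedback is winning or losing games against itself.
Q = defaultdict(float)                       # value of (sticks_left, action)
ALPHA, EPSILON, EPISODES = 0.2, 0.2, 20000

def choose(sticks):
    actions = [a for a in (1, 2, 3) if a <= sticks]
    if random.random() < EPSILON:            # explore sometimes
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(sticks, a)])

for _ in range(EPISODES):
    sticks, history = 10, []
    while sticks > 0:
        action = choose(sticks)
        history.append((sticks, action))
        sticks -= action
    reward = 1.0                             # the player who moved last won
    for state, action in reversed(history):  # alternate sign: the other player lost
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

# Greedy policy after training (optimal play leaves a multiple of 4 sticks)
print({s: max([a for a in (1, 2, 3) if a <= s], key=lambda a: Q[(s, a)])
       for s in range(1, 11)})
```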

The next stage in that evolution will likely be adapting the algo for 3D games like first person shooters and the like.  Think about that for a minute.  Imagine super-human intel in something that walks around and shoots and remembers where it went and what it did…albeit in a game.   Wait…games are simulators.  What if superhuman skills developed inside simulators (on roads, on battlefields, in bedrooms, workplaces, sports, and transactional events like trading/sales) were transferable to embodied robots in the real world?  They call it transfer learning.
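
The sim-to-real version of transfer learning is still an open problem, but the everyday form of the idea already looks roughly like this in Keras (a sketch with hypothetical data; it assumes the standard pretrained MobileNetV2 weights): features learned on one task are reused as the starting point for another.

```python
import tensorflow as tf

# Start from features already learned on ImageNet...
base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         input_shape=(160, 160, 3), pooling="avg")
base.trainable = False                       # freeze what was learned on the old task

# ...and train only a small new "head" for the new task (say, 4 classes).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_task_images, new_task_labels, epochs=5)   # hypothetical data
```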

I believe all of this is exactly what is in the process of happening right now.  That will go much further, and may partially solve the reproduction/bio-diversity issues I brought up.  Self-driving cars are on the way, and will mostly replace jobs.  Cylindrical bedroom toys have already replaced many men.  The idea of battlefield bipeds/police officers is scary to me, but maybe they wouldn’t shoot as many unarmed people.  Maybe.

If life is just a game of maximizing some expected future reward function, then why wouldn’t we be replaced?  The entity (person/robot) that dies with the most toys wins.  Maybe our phones will just constantly tell us the best decision/life move.  Anyone who ignores their phone and exerts some free will will be seen as stupid/crazy and be medicated.  Too many dystopian futures to contemplate.  I can see the dolphin argument here that OldGuy wrote about…that dolphins had foresight and avoided all the problems we humans create by making tools.  The same could have been said for the Plains Indians.  I suppose it all depends on what reward function is being maximized in the game of life.  The dolphins/Plains Indians were perhaps maximizing collective good or happiness.  Western civilization plays the game differently, harnessing greed to try to maximize individual wealth…to personally win by dying with the most toys…screw everyone else.  Life is a team sport though.  Money is of no use if you can’t eat, drink, or breathe.  The western civ game lacks love of “others” – people, species, and nature at large.  It lacks love of even one’s own kids, in the form of leaving your kids saddled with debt, a poisoned planet, etc.  I am a child of western civ and benefit from it…but it has issues.  Our ancestors were greedy and lacked foresight.

I will stop, lest I ramble some more.

I think an artificial

I think an artificial intelligence that is supposed to be reminiscent of human intelligence should be capable of transferring knowledge and learned rules/ideas to new domains/problems.

The major issue with current AI, and why it’s often not seen as AI, but as a tailored solution to rather narrow problems, is that those systems cannot explain why they do what they do, nor can they generalize what they learned.

Though speech synthesis, or recognizing certain types of shapes, faces, objects, etc., is certainly a type of low-level AI.

Thanks for posting this, neogirl101! I had been investigating this very subject when you posted this, and you got me thinking even more. You even got me to start a project that I had been planning but not started.
I will probably have some more comments here later. This whole conversation is interesting and enlightening.

A quick reply before this topic closes.

If you are interested in whether it will ever be conscious (and not just intelligent):

Check out the “Chinese room argument”, which explains why a simulation (which is what any kind of software, and therefore AI, does) is never the same as the thing simulated. Therefore consciousness will be missing from AI: https://en.wikipedia.org/wiki/Chinese_room

A related article is here:

An interesting excerpt:

In a similar way, a simulation of water isn’t going to possess the quality of ‘wetness’, which is a product of a very specific molecular formation of hydrogen and oxygen atoms held together by electrochemical bonds. Liquidity emerges as a physical state that is qualitatively different from that expressed by either molecule alone.

From the same wikipedia article:
Newton’s flaming laser sword reply

Mike Alder argues that the entire argument is frivolous, because it is non-positivist: not only is the distinction between simulating a mind and having a mind ill-defined, but it is also irrelevant because no experiments were, or even can be, proposed to distinguish between the two.[109]

The Chinese room argument is precisely a critique of a simplistic positivist view. There is a huge debate around that topic, and the quoted argument is the weakest possible: it simply reiterates a reductionist view that claims no difference exists, even though it’s trivial to see that there is a difference between conscious experience and unconscious processing (which we do as well).

Above there was a concrete example with water, that shows why increasing complexity alone/scaling up is not enough.

Perhaps in ten or twenty years the robots will allow us a harmless chance to update this, so that we will know who was right. But I will point this out: over all of recorded history, “experts” got a lot of publicity by stating things were impossible, while the people who ended up making it happen quietly toiled away at doing it.


I think with the current approach it will not be possible, because it misses something on the “hardware” level, or in other words, the formalism cannot express certain things, such as consciousness. What that will be remains to be seen. But I don’t think it is fundamentally impossible.
Here is also an interview with Stuart Russell (the author of “Artificial Intelligence: A Modern Approach”) that touches a bit on the topic:

@maelh A very interesting article… it seems that Stuart Russell says that the way AI is implemented, as long as it gets the job done, does not matter.

That’s the view I have also- brains are not computers and computers are not brains, and that’s okay. Computers do not “think” in the sense humans do, but they still process information (albeit in a different way).

IMHO, if neural networks, which are nothing like the human brain, can easily outperform humans on image recognition tasks, then that means it doesn’t matter whether the AI is digital or analog; it’s copying what humans do, just in an entirely different way.

There’s more than one way to do almost everything.


Yeah, taking a practical approach to AI is definitely the way to go.
I really only posted here because of some comments in the chat about how AI is going to replace humans and surpass them in every way. I think while it’s absolutely possible to achieve many technical abilities, we should also be aware of the limits of what we know, and therefore of our limited capability to make predictions or compare AIs to humans.

Regarding consciousness he said the following:

What are the biggest obstacles to developing AI capable of sentient reasoning?
The biggest obstacle is we have absolutely no idea how the brain produces consciousness. It’s not even clear that if we did accidentally produce a sentient machine, we would even know it.

I used to say that if you gave me a trillion dollars to build a sentient or conscious machine I would give it back. I could not honestly say I knew how it works. When I read philosophy or neuroscience papers about consciousness, I don’t get the sense we’re any closer to understanding it than we were 50 years ago.

What is the most common misconception of AI?
That what AI people are working towards is a conscious machine. And that until you have a conscious machine, there’s nothing to worry about. It’s really a red herring.

To my knowledge nobody — no one who is publishing papers in the main field of AI — is even working on consciousness. I think there are some neuroscientists who are trying to understand it, but I’m not aware that they’ve made any progress. No one has a clue how to build a conscious machine, at all. We have less clue about how to do that than we have about how to build a faster-than-light spaceship.

Being aware of that is important for ethical reasons (not to grant rights to completely mechanical systems, and therefore trivialize the rights of actually conscious beings), but also to know where we really are, and what is missing.