I would agree with ALL of your points, perhaps because they mirror my own experiences.
I will paraphrase some of your points (Ps) in my own words:
P1. Machines can exhibit intelligent behavior.
P2. Once people hear how a machine does something intelligent, they tend to no longer perceive the behavior as intelligent.
This leads to a few thoughts (Ts) for me.
T1: Perception is a key issue. Instead of objectively measuring intelligence, we are subjectively judging whether we perceive something as intelligent. Turing got to the heart of this with the Turing Test, the “Imitation Game”, etc.
T2: Let’s say we have the ability to explain any single algorithm and, once it is explained, we reduce it in our perception to being “dumb”. If an intelligent machine is running hundreds of algorithms that compete to express their behavior at any given moment, we would likely lose the ability to deduce which algorithm is winning at any given moment. We might then “give up” on trying to mentally reduce the robot to being dumb, and perceive the entire system as “intelligent”. I know this is not scientific, but I have found it to be true in my own years of experimentation.
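To make that concrete, here is a toy sketch of what I mean (my own illustrative code, not from any real robot): a few trivially explainable behaviors bid for control every tick, and the highest bidder acts. Scale this to hundreds of behaviors and the moment-to-moment winner becomes very hard to reverse-engineer from the outside.

```python
import random

class Behavior:
    def __init__(self, name, relevance, action):
        self.name = name
        self.relevance = relevance  # state -> how badly this behavior wants control
        self.action = action        # state -> new state

def step(behaviors, state):
    # Winner-take-all arbitration: the most relevant behavior acts this tick.
    winner = max(behaviors, key=lambda b: b.relevance(state))
    return winner.name, winner.action(state)

behaviors = [
    # Each behavior alone is obviously "dumb" once explained.
    Behavior("seek_light", lambda s: s["light"],
             lambda s: {**s, "x": s["x"] + 1, "light": max(0.0, s["light"] - 0.2)}),
    Behavior("avoid_wall", lambda s: 1.0 - s["wall_dist"],
             lambda s: {**s, "x": s["x"] - 1}),
    Behavior("wander", lambda s: 0.3,
             lambda s: {**s, "x": s["x"] + random.choice([-1, 1])}),
]

state = {"x": 0, "light": 0.6, "wall_dist": 0.9}
for _ in range(5):
    name, state = step(behaviors, state)
    print(name, state)  # the winning behavior switches as the state evolves
```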
T3: Humans are biased toward thinking humans are intelligent and machines (and animals/plants) are dumb. It fits historical experience, and it makes humans feel better about themselves, something they want to do. Some would call it confirmation bias. I would argue that most people are not particularly intelligent most of the time; we are good at being self-serving. I could say the same of a lot of species. We spent millions of years as hunter-gatherers, and the rate at which we generated new ideas was amazingly slow. We also don’t hold ourselves to the same standards as we do machines. When a child learns algebra and uses it to solve a problem, we praise them as being smart. We don’t say “They are just doing math. They are really dumb.” If we systematically tried to explain human behavior or emotions and judge them as smart/dumb, we might not like the results. This bias extends to other things like law, religion, racism, the way we treat animals, etc. We want to think we are intelligent and special, and give ourselves license to exploit anything else. This comes back to my point about humans being “Self Serving”.
T4: Putting perception aside, objective measures of intelligence are also key, even more so in my opinion. This means we need large, labelled data sets that define the “truths” errors can be measured against. For neural nets, this lets us train, backpropagate errors, and converge toward good solutions. At some point, if something can recognize images or speech better than we can, then how it does it, and whether we deem it intelligent or dumb, become philosophical and irrelevant at the same time.
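As a minimal sketch of that loop (a toy XOR example of my own, nothing from your post): the labels define the truth, the error against them is backpropagated, and the weights move toward a better fit.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels ("truths")

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Error against the labels, backpropagated as gradients
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)
    # Gradient-descent step on all weights
    W2 -= 0.5 * h.T @ dp; b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(axis=0)

print(np.round(p, 2))  # approaches the labelled truths [0, 1, 1, 0]
```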
T5: I think much more work and thought needs to go into “systems” of many intelligent agents working together. Many specialists work on one technique at a time (neural nets, for example) under the idea that intelligence can be reduced to a single technique. While this may turn out to be true in the long run, it is an example of human stupidity in the short run, in my opinion. Having said that, I am a huge fan of the many different types of NNs for different types of problems; it’s a huge, growing field. I suppose your “ant colony” is another example. You might think each ant is dumb, but it’s hard to argue that the colony working together is not intelligent.
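For instance, here is a textbook-style ant colony optimization sketch (my own toy code, with made-up distances between four hypothetical cities): each ant follows simple local rules biased by pheromone, yet the colony converges on short tours that no single ant plans.

```python
import random

dist = {("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
        ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 8}
pher = {edge: 1.0 for edge in dist}                   # pheromone per edge

def key(a, b):
    return (a, b) if (a, b) in dist else (b, a)

def build_tour():
    # One ant: pick each next city with probability ~ pheromone / distance.
    path, rest = ["A"], ["B", "C", "D"]
    while rest:
        cur = path[-1]
        weights = [pher[key(cur, c)] / dist[key(cur, c)] for c in rest]
        nxt = random.choices(rest, weights)[0]
        path.append(nxt)
        rest.remove(nxt)
    return path

def tour_length(path):
    return sum(dist[key(a, b)] for a, b in zip(path, path[1:] + path[:1]))

best = None
for _ in range(200):                                  # colony iterations
    ants = [build_tour() for _ in range(10)]
    for edge in pher:
        pher[edge] *= 0.9                             # evaporation forgets bad edges
    best = min(ants + ([best] if best else []), key=tour_length)
    for a, b in zip(best, best[1:] + best[:1]):
        pher[key(a, b)] += 1.0 / tour_length(best)    # reinforce the best tour

print(best, tour_length(best))
```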
T6: A lot of people think that something simple is deterministic and therefore dumb. Chaos theory would disagree: any system that is damped and driven at the same time has the potential for chaotic (and seemingly very creative) behavior. Chaos does not mean random; it can have intricate patterns of structure that may never repeat. Stephen Wolfram shows a lot of this in his book “A New Kind of Science”, where very simple rules create very complicated results. It is thought that nature does similar things in many ways.
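Wolfram’s Rule 30 is the classic example, and it fits in a few lines (my own rendering of his rule): a tiny deterministic update, yet the pattern it grows never settles into anything simple or obviously repeating.

```python
# Rule 30: each new cell = left XOR (center OR right).
WIDTH, STEPS = 63, 30
cells = [0] * WIDTH
cells[WIDTH // 2] = 1                  # start from a single black cell

for _ in range(STEPS):
    print("".join("#" if c else " " for c in cells))
    cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % WIDTH])
             for i in range(WIDTH)]
```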
Finally, you mentioned machines reaching human intelligence. I personally believe machines will be dumb for a while longer, and then they will be super-intelligent, blowing past us like a car driving past a mile marker on a highway. The only question is when. I don’t know whether it will happen in 2 years, 20 years, or after my death. Once they become super-intelligent, we will lose the ability to perceive how intelligent they are. We will either be entirely dependent (like toddlers) or victims of circumstances we don’t understand. The toddler case is probably the best case scenario.
The wake-up call for me was when AlphaZero, a more general version of the AlphaGo AI, taught itself chess in 4 hours by playing against itself and then beat the strongest prior chess program. https://www.sciencealert.com/it-took-4-hours-google-s-ai-world-s-best-chess-player-deepmind-alphazero I have also read that an AI cracked the Enigma code in about 19 minutes. I would have to think something like that could do most white-collar jobs better than people.
To me, this is a new kind of super-intelligence within a given domain. At some point, it will hold in any domain. What happens when this intelligence can run in embodied form in a robot?
All of this is just one man’s opinion, and I might be dumb.