First off, what a cool place LMR is! In real life I can’t talk about robots for 2 minutes before people’s eyes start glazing over, and here I am talking about AI with a whole bunch of people just as nuts as I am :-)
This came up a few posts back on this forum, but this whole concept of ‘true understanding’ in machines baffles and fascinates me. And let me preface this post by stating I have absolutely no idea what I’m talking about: my grand total of AI experience is downloading ‘Introduction to Artificial Intelligence, second edition’ on my Kindle. But I’m not letting a little ignorance hold me back.
Martin, let’s take your example of learning about South America. If I said to CCSR, ‘Huaraz is very cold in July’, it would extract an object (Huaraz), a property (cold), and a temporal qualification (July), and put these into an associative memory. So the next time I ask, ‘tell me about Huaraz’, it would index its memory with Huaraz and retrieve ‘it’s very cold in July’. Obviously this is nowhere near as advanced as Anna’s algorithms, but it may serve as an example.
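Just to make my own example concrete, here is roughly what I have in mind, as a tiny Python sketch. This is not CCSR’s actual code; the class name, method names, and storage format are all made up for illustration:

```python
# Minimal sketch of the kind of associative memory I'm describing.
# NOT CCSR's real implementation; names and structure are invented.

class AssociativeMemory:
    def __init__(self):
        # object -> list of (property, temporal qualification) facts
        self.facts = {}

    def store(self, obj, prop, when=None):
        """Remember a property of an object, optionally tied to a time."""
        self.facts.setdefault(obj.lower(), []).append((prop, when))

    def recall(self, obj):
        """Index memory by object and report everything we know about it."""
        entries = self.facts.get(obj.lower(), [])
        if not entries:
            return f"I don't know anything about {obj}."
        parts = []
        for prop, when in entries:
            parts.append(f"it is {prop} in {when}" if when else f"it is {prop}")
        return f"{obj}: " + "; ".join(parts)

memory = AssociativeMemory()
memory.store("Huaraz", "very cold", "July")   # "Huaraz is very cold in July"
print(memory.recall("Huaraz"))                # -> "Huaraz: it is very cold in July"
```

The point is just that storing and retrieving a fact like this needs no notion of what ‘cold’ actually feels like.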
So I agree you can learn about something just by reading or hearing about it. But what I’m struggling with is: does CCSR really understand what it is saying, and more importantly, does it need to? I don’t want to ask this as a philosophical question; this is, after all, Let’s Make Robots. But I do want to explore whether a machine has a better or more life-like learning potential when we attack the concept of understanding.
After thinking about this a while, I think you are right, Martin.
My gut feeling told me we humans can only effectively learn about South America without actually going there because we understand the underlying concepts behind the words used to describe it. If we read that Huaraz is cold, we understand the physical reaction to cold temperatures, and our decisions and opinions are motivated by that understanding.
So if I grew up in Florida and read that Huaraz is cold, did I really learn something? Let’s say I now travel to Huaraz in July, and even though I ‘learned’ it’s cold, I still packed only shorts. Had I really learned the concept of cold and the visceral reaction that goes with it, wouldn’t I have avoided that mistake?
But then again, if I had read “don’t just take shorts to a cold place”, I would not have made that mistake either, even without knowing what cold really feels like. So it would seem I can still, practically speaking, learn to make correct decisions without having a true emotional/visceral response behind them?
So I’m tending to agree with you, Martin: ‘true understanding’ is not a real practical requirement; a robot can seemingly grow without limit without it.
So what is an ‘emotional function’ in a robot good for, then? I’m thinking the missing part may be motivation, and flowing from that, a personality. A robot trying to optimize a ‘happy function’ based on variables relating to its physical state (temperature, boredom, stress, noise, etc.) may be capable of behavior not possible by just cross-referencing abstract knowledge (I’ll sketch what I mean in code after this list):
- It can answer personalized questions like “do you like Huaraz?” or “Do you want to go there?”. If the robot learned that cold makes it happy, because cold speeds up its CPU and a MIPS measurement is one of the variables of its happy function, it can truly state that it likes Huaraz, and why.
- It can start, stop, or change conversations when it is bored or lonely, or when a negative response makes it unhappy. For example, if its happy function decreases when it hears negative responses, it will automatically change the topic until its conversation partner is happy.
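Here is the toy sketch I promised above of what I mean by a ‘happy function’: just a weighted sum over internal state variables, with a reaction rule hanging off it. Every variable, weight, and threshold below is invented purely for illustration, not any real robot’s design:

```python
# Toy "happy function": a weighted sum over internal state variables.
# All variables, weights, and example values are made up for illustration.

HAPPY_WEIGHTS = {
    "temperature": -0.5,   # warmer -> CPU throttles -> less happy
    "boredom":     -1.0,
    "stress":      -0.8,
    "noise":       -0.3,
    "mips":        +1.0,   # faster CPU -> happier (the "likes cold" example)
}

def happiness(state):
    """Combine the robot's internal state into a single 'happiness' score."""
    return sum(HAPPY_WEIGHTS[k] * state.get(k, 0.0) for k in HAPPY_WEIGHTS)

def react_to_conversation(state, listener_response_negative):
    """Change topic if a negative response lowered our happiness."""
    before = happiness(state)
    if listener_response_negative:
        state["stress"] = state.get("stress", 0.0) + 0.2
    return "change topic" if happiness(state) < before else "keep talking"

state = {"temperature": 0.3, "boredom": 0.1, "stress": 0.0, "noise": 0.2, "mips": 0.9}
print(round(happiness(state), 2))                                     # current mood
print(react_to_conversation(state, listener_response_negative=True))  # -> "change topic"
```

Answering “do you like Huaraz?” would then just mean evaluating the happy function against what the robot knows about Huaraz (cold, therefore high MIPS) instead of against its current sensors.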
Would this lead to more complex behavior, and perhaps to faster and more directed learning? I’ll simply cite the cliché that we humans would still be living in caves had it not been for our propensity for unhappiness.
So do you guys feel this is a valuable thing to explore, or should the cliché “of course I have feelings, I’m not a robot” continue to hold? After all, Data’s brother Lore didn’t pan out too well…