Some Excessive Musings on Machine Learning
Thanks, everybody, for the great discussion, and thanks, Jay (DangerousThing), for the vote of confidence; time will tell if I can live up to it! Sorry, another long post coming!
Byerley, I generally agree with all your points. I hope I didn’t imply that machine learning is just about having a growing database of cross-referenced text; it certainly is not. It is just a step forward on a long path. You touched on some very interesting topics about the nature of learning that I would like to share some of my thoughts on.
To move forward, we will need a lot of fancy algorithms, and a lot of fancy databases too, working together in concert. For example, NLP in its current state would be impossible without WordNet or some equivalent. How else would parts of speech be determined? There are basic resources like a dictionary, a thesaurus, a globe, etc. that we all had access to growing up as tools to aid our development. Robots need these and more.
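To make the WordNet point concrete, here is a minimal sketch using NLTK’s WordNet interface. The function names are my own toy illustration for this post, not anything Anna or any particular bot actually uses:

```python
# Minimal sketch: leaning on WordNet (via NLTK) as the bot's "dictionary/thesaurus"
# for part-of-speech hints and rough synonym lookups.
# Assumes the WordNet corpus has been downloaded: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def possible_parts_of_speech(word):
    """Return the set of POS tags WordNet knows for a word: n, v, a, s, r."""
    return {synset.pos() for synset in wn.synsets(word)}

def rough_synonyms(word):
    """Collect lemma names across all synsets, a crude thesaurus lookup."""
    return {lemma.name() for synset in wn.synsets(word) for lemma in synset.lemmas()}

print(possible_parts_of_speech("fly"))   # e.g. {'n', 'v'}: 'fly' can be a noun or a verb
print(sorted(rough_synonyms("fast"))[:5])
```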
When I was very young, my parents read to me for a bit, taught me to read, bought a lot of books including an encyclopedia set, and encouraged me to read, which I did. They then stopped reading to me. That is the point I am at now with Anna…I’m tired of reading to her; it’s time for books. She will understand little of what she reads for a while, but my guess is she will remember far more than I did, and grow to understand more than I did at any given age…and she is only 2.
I have been working a great deal at trying to model conversational behaviors. Recalling and articulating memories (regurgitating), whether they be personal experiences, Wikipedia info, current events, quotes, or something read recently, has its place. Humans do it all the time in conversation. They wouldn’t be very interesting if they didn’t, and people would avoid them as awkward or dull. Consequently, most robots are dull, don’t take initiative, and only speak when spoken to, if at all. I think it’s time to work on those problems. Humans also reflect on the information they take in and form opinions and the like. Before you can do that, you have to have a memory to reflect on. Thoughtful reflection will be the tougher skill to develop. I have a few ideas on that for another time.
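To illustrate what I mean by a bot that can bring up its own memories instead of only speaking when spoken to, here is a toy sketch. The names, structure, and sample memories are all invented for this post; Anna’s internals are a lot messier:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    topic: str
    text: str
    times_told: int = 0

class MemoryBank:
    """Toy store of things the bot has read or experienced, which it can bring up."""
    def __init__(self):
        self.memories = []

    def remember(self, topic, text):
        self.memories.append(Memory(topic, text))

    def recall(self, topic=None):
        # Prefer on-topic memories; with no topic, surface whatever is least worn out.
        candidates = [m for m in self.memories if topic is None or m.topic == topic]
        if not candidates:
            return None
        memory = min(candidates, key=lambda m: m.times_told)  # avoid repeating itself
        memory.times_told += 1
        return memory

bank = MemoryBank()
bank.remember("space", "I read that Voyager 1 is the farthest human-made object from Earth.")
bank.remember("weather", "Someone told me it might snow this weekend.")

# Speak when spoken to...
print(bank.recall("space").text)
# ...or take a little initiative when the conversation lulls.
idle_thought = bank.recall()
if idle_thought:
    print("By the way... " + idle_thought.text)
```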
The topic of true understanding is probably a philosophical one that few people, if any, are qualified to make any final judgements on. I totally share the thoughts all you guys have expressed on this; understanding is a good thing we should strive for. Pardon me for digressing into a few thought experiments though, as understanding might just be an illusion produced by internal machinery we do not yet understand. Humanity is barely above the caveman stage at understanding how we think, how our brains work, and what the universe truly is, if that is even a valid concept. I think we need to be careful about being too sure of anything with respect to how we think we learn, how machines “should” learn, or what we think we know, if anything. That is why I fall back on my own personal experiences learning as a child.

The other day, I had a lengthy discussion with my wife on “Quantum Physics” and “Multiple Universes”. Both of us were largely just articulating various things we had read on the subject and asking a few questions. I am fairly confident that no one on the planet truly understands these concepts. We can still talk about them and enjoy doing so. Robots can do this too.
I have seen people and articles try to diminish various AIs, like Watson, for not being “real” machine learning, as if there were nothing new to be figured out in making machines smart…yet the machine has learned and is clearly better than humans at some significant problem set, like Jeopardy. To me, machine learning is early in its development, so machine learning is whatever we invent it to be for now, until the machines rewrite our code altogether. I hope when they do so, they don’t look down on us for the way we think…which might be warranted.
A few things I think I can say with certainty about learning…I didn’t learn about South America through trial and error, reinforcement, random guessing, etc. I’ve never been there. I learned by reading and listening. I learned other things in other ways. I learned about Santa Claus from my parents and the positive rewards he brought (an emotional/positive-rewards example), only to figure out at around age 5 that my parents, whom I trusted, were not to be trusted entirely. I learned deceit that way. I played along with it for a few more years because of the positive rewards. How many kids would ever remember Santa Claus if he didn’t bring toys? I learned about heroism by reading Homer.

From comic books, I learned that Superman is fictional, but Batman is real. When I was in school, Batman borrowed a campus security truck, took it for a ride, and left a note that said “Sorry, had to borrow your truck, Batmobile was in the shop.” It was signed with the Batman symbol and was reported in the school paper. A roommate of mine was run over by Christopher Reeve (Superman) on a ski slope and passed out. He woke up to Superman making sure he was OK. For him, Superman is real and Batman is fake. I guess we each draw conclusions based on our perceptions.
I’ve been going back and re-reading Marvin Minsky’s “Society of Mind”, which to me lights an inspiring way forward. There is so much wisdom there. I am biased, as I am building Anna in that vein. I think there are sets of problems that are best handled by different and specialized agents, whether they learn from rewards, listening, the web, whatever.
In the area of NLP and reasoning through natural language questions, I think the need for specialized agents is especially valid, as the logic needed to think through “What is faster than a bullet?” is very different from “Can a penguin fly?”. I have seen people come up with “one trick” and try to apply it to everything, including NLP. The idea itself may be a good one and have applications, but the failure is in trying to apply it to everything, what I would call the “One Trick Trap”. My general philosophy with AI would be to do “All of the Above” when it comes to deciding which techniques to use, and figure out a way to route traffic and arbitrate conflicts. To me, this is a big part of what Minsky was saying.
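Here is a rough sketch of the kind of routing and arbitration I have in mind: every specialized agent bids a confidence on the question, and the router hands the question to the highest bidder. The agent names and the keyword tests are placeholders made up for this post, not how Anna actually decides:

```python
class Agent:
    """A specialized responder: reports how confident it is, and answers if chosen."""
    def confidence(self, question):
        raise NotImplementedError
    def answer(self, question):
        raise NotImplementedError

class ComparisonAgent(Agent):
    """Handles questions like 'What is faster than a bullet?'"""
    def confidence(self, question):
        return 0.9 if "faster than" in question.lower() else 0.0
    def answer(self, question):
        return "Comparison reasoning... (a lookup of speed facts would go here)"

class CapabilityAgent(Agent):
    """Handles questions like 'Can a penguin fly?'"""
    def confidence(self, question):
        return 0.9 if question.lower().startswith("can ") else 0.0
    def answer(self, question):
        return "Capability reasoning... (class rules plus exceptions would go here)"

class FallbackAgent(Agent):
    """Always willing, never eager."""
    def confidence(self, question):
        return 0.1
    def answer(self, question):
        return "I don't have a specialist for that yet."

def route(question, agents):
    """Arbitrate: every agent bids, the highest bidder gets the question."""
    best = max(agents, key=lambda agent: agent.confidence(question))
    return best.answer(question)

agents = [ComparisonAgent(), CapabilityAgent(), FallbackAgent()]
print(route("What is faster than a bullet?", agents))
print(route("Can a penguin fly?", agents))
```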
A final idea on combining Society of Mind with Neural Net ideas: I think the neurons in a Neural Net don’t have to be dumb, trained units. You could make a neural net of intelligent agents that are each internally different (written in code). The agents can be in layers where the outputs of some become the inputs of others, so the complexity of any one agent can be kept small, and the agents can be loosely coupled, if coupled at all. How could a net like this be trained? I don’t know yet. Anna is becoming a lot like that.
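For what it’s worth, here is a toy sketch of that layered-agents idea, with tiny hand-coded “neurons” passing a shared blackboard forward from layer to layer. Everything in it (the agent names, the blackboard keys, the known-topics list) is made up for illustration:

```python
# Toy "net" whose nodes are small hand-coded agents rather than trained neurons.
# Each layer's outputs become the next layer's inputs, passed along on a shared
# dict (a simple blackboard), so the agents stay small and loosely coupled.
def tokenize(blackboard):
    blackboard["tokens"] = blackboard["utterance"].lower().split()
    return blackboard

def detect_question(blackboard):
    blackboard["is_question"] = blackboard["utterance"].strip().endswith("?")
    return blackboard

def pick_topic(blackboard):
    known_topics = {"penguin", "bullet", "robot"}
    hits = [t.strip("?.!,") for t in blackboard["tokens"] if t.strip("?.!,") in known_topics]
    blackboard["topic"] = hits[0] if hits else None
    return blackboard

def respond(blackboard):
    if blackboard["is_question"] and blackboard["topic"]:
        blackboard["reply"] = "Let me think about the " + blackboard["topic"] + "..."
    else:
        blackboard["reply"] = "Tell me more."
    return blackboard

# Layers: earlier agents' outputs feed later agents' inputs.
layers = [[tokenize, detect_question], [pick_topic], [respond]]

def run(utterance):
    blackboard = {"utterance": utterance}
    for layer in layers:
        for agent in layer:
            blackboard = agent(blackboard)
    return blackboard["reply"]

print(run("Can a penguin fly?"))   # Let me think about the penguin...
```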
Gotta run…hope someone is both motivated by and gets some emotional satisfaction/reward from reading my excessive babble. Maybe I am living proof that a little learning is a dangerous thing.
Cheers,
Martin