Some more possible prerequisites?
There might be some more pre-reqs that can be extracted here…
I think a brain needs to be able to process multiple streams of thought at the same time. These streams could have split from the same original stimulus, or come from multiple stimuli arriving around the same time or with different reaction times. Both are desirable, I think.
This also allows for different brain Agencies to operate on vastly different time scales. These agencies also need to be able to loosely communicate with each other.
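As a minimal sketch of the idea, here are two "agencies" running on different time scales, loosely coupled through a shared queue rather than calling each other directly. The names (`fast_reflex`, `slow_monitor`) and the tick rates are my own illustrative assumptions, not anything from a real architecture:

```python
import asyncio

async def fast_reflex(bus: asyncio.Queue, steps: int = 5) -> None:
    # A fast agency reacting on a short tick, posting loose messages to the bus.
    for i in range(steps):
        await bus.put(("reflex", i))
        await asyncio.sleep(0.01)

async def slow_monitor(bus: asyncio.Queue, expected: int = 5) -> list:
    # A slower agency that samples the bus at its own, longer time scale.
    seen = []
    while len(seen) < expected:
        msg = await bus.get()
        seen.append(msg)
        await asyncio.sleep(0.03)  # deliberately slower than the reflex loop
    return seen

async def main() -> list:
    bus: asyncio.Queue = asyncio.Queue()
    _, seen = await asyncio.gather(fast_reflex(bus), slow_monitor(bus))
    return seen
```

The queue is what makes the coupling "loose": neither agency knows the other's loop speed, or even that the other exists.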
Sorry…I guess I am stating the obvious so far.
This allows some agencies to function as “monitors” of other agencies, having different goals and possibly taking a bigger-picture perspective. Other alternative launching-off points: introspection, curiosity, and others. I can’t recall the Minsky term for this.
Already mentioned…pattern matching seems to be fundamental in neurons, so it should likely be a core capability that can then provide the reason to launch new directions of thought. This is not difficult if you have the generic context we spoke about. I think a lot of different feature detectors will also be needed, whether it is sonar, vision, verbal, logical, emotional, etc. I think the number of modules is larger than most people think. Verbal annotation is a big thing by itself, and visual annotation is an emerging area. It would be helpful if someone invented a new pattern-matching language that could meld SQL, regex, and a context to produce a set of events, matched patterns, a feature list, an annotation list, etc. They all pretty much have the same purpose: to help determine the set of thoughts to kick off next.
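A toy version of that pattern layer might pair regexes with tags and match each utterance against a context, emitting events for downstream agencies. The rules, tags, and speaker name here are all hypothetical placeholders:

```python
import re

# Hypothetical rules: each pairs a regex with an event tag.
RULES = [
    (re.compile(r"\b(hello|hi|hey)\b", re.I), "greeting"),
    (re.compile(r"\b(\d+)\s*\+\s*(\d+)\b"), "arithmetic"),
    (re.compile(r"\bwhy\b", re.I), "explanation_request"),
]

def match_events(utterance: str, context: dict) -> list:
    # Run every rule against the utterance; each hit becomes an event
    # dict carrying the matched span and a slice of the context.
    events = []
    for pattern, tag in RULES:
        m = pattern.search(utterance)
        if m:
            events.append({"event": tag,
                           "span": m.group(0),
                           "speaker": context.get("speaker")})
    return events
```

A real version would meld in SQL-style selection and richer annotations, but even this shape shows the point: matching exists to decide which thoughts to kick off next.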
I believe motives should play a part. What is the bot motivated to do at any given instant? This should be constantly changing and is inherently subject to chaos theory: variation, non-determinism, etc. This means the same stimulus will not always lead to the same response; the result is random or stochastic. If you ask a tired robot a simple question, it might ignore it and say “Do you have a spare outlet I could use?” More segues to chaos theory here.
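One simple way to get that stochastic behavior is to let the current motive strengths weight a random choice among candidate responses. The motive names and replies below are invented for illustration:

```python
import random

def choose_response(motives: dict, responses: dict, rng: random.Random) -> str:
    # Weight each candidate response by how strongly its motive is felt
    # right now; the same stimulus can then yield different replies.
    candidates = list(responses)
    weights = [motives.get(m, 0.0) for m in candidates]
    return responses[rng.choices(candidates, weights=weights, k=1)[0]]

# A tired robot: the recharge motive currently dominates.
motives = {"answer": 0.2, "recharge": 0.8}
responses = {
    "answer": "Five.",
    "recharge": "Do you have a spare outlet I could use?",
}
```

Because the motive weights drift over time, the response distribution drifts with them, which is exactly the non-determinism described above.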
A concept of events that agencies could subscribe to would help. This separates recognition from possible actions, creates a lot more variation, and makes code easier to maintain.
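The subscription idea can be sketched as a tiny publish/subscribe bus; recognizers publish named events without knowing which agencies, if any, will react:

```python
from collections import defaultdict

class EventBus:
    # Minimal pub/sub: agencies subscribe handlers to named events,
    # keeping recognition (publishers) separate from action (handlers).
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        for handler in self._subscribers[event_name]:
            handler(payload)
```

Adding a new behavior then means subscribing one more handler, with no change to the code that recognized the event.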
I think verbal capability is necessary (beyond just communication) and provides a lot of plumbing that can be exploited for other purposes like eventual storytelling, reflection on experience, etc. I find now that instead of setting a bunch of configuration settings to achieve some purpose or state, it’s easier to store a narrative as an English paragraph and execute it. For example, it’s useful for achieving different poses. If a bot has a way to translate everything it does back and forth between narrative and microcontroller actions, it gains a launching-off point for building its own story, summarizing its own thoughts and actions, and so on: a way to communicate and store introspection itself, and to talk about it later.
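The narrative-to-action direction could be as crude as a phrase table that maps sentences to servo commands. The phrases, servo names, and angles here are all hypothetical, not a real microcontroller API:

```python
# Hypothetical phrase table: narrative sentence -> (servo, angle) commands.
PHRASE_TO_ACTIONS = {
    "raise your left arm": [("left_shoulder", 90)],
    "tilt your head": [("neck_pitch", 15)],
    "look forward": [("neck_pitch", 0), ("neck_yaw", 0)],
}

def narrative_to_actions(narrative: str) -> list:
    # "Execute" an English paragraph by translating each sentence into
    # low-level motor commands via the phrase table.
    actions = []
    for sentence in narrative.lower().split("."):
        phrase = sentence.strip()
        actions.extend(PHRASE_TO_ACTIONS.get(phrase, []))
    return actions
```

The reverse mapping, from executed commands back to sentences, is what would let the bot narrate and summarize what it just did.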
Storing evidence/reasoning in context and history…I think robots should not just react, but should store all the reasons they are doing what they do, so that they can explain themselves now or later, or store narratives of what they saw, and the like.
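A minimal sketch of that idea is a log that records every action together with the reasons behind it, so an explanation can be retrieved later. The structure and field names are my own assumptions:

```python
import time

class ReasonLog:
    # Each entry pairs an action with the evidence/reasons behind it,
    # timestamped so a history or narrative can be rebuilt later.
    def __init__(self):
        self.entries = []

    def record(self, action, reasons):
        self.entries.append({"time": time.time(),
                             "action": action,
                             "reasons": list(reasons)})

    def explain(self, action):
        # Return the reasons for the most recent occurrence of an action.
        for entry in reversed(self.entries):
            if entry["action"] == action:
                return entry["reasons"]
        return []
```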
Shades of grey…the bots need to deal in probabilities, not just true/false. This already occurs with speech-to-text, which is inherently probabilistic with many sources of error. Currently, 2 + 3 does not always equal 5 with almost any verbal robot I can think of that does it through listening. Probabilities help with trying to determine concepts from language, determining word similarity, NLP, POS tagging, OCR, etc. Robots can rarely be sure, and need to be able to express themselves in nuances. I personally store a lot of words with a probability range for each to support this. For example, when Ava says “maybe” as opposed to “probably,” it’s not a random thing. She may have multiple ways to say “maybe” and choose randomly among them within a given confidence level, but “maybe” has a different insinuation than “probably”: she chooses a word to fit, factoring in a degree of confidence. Phrases like “I think” and “I believe” can also be used to indicate uncertainty.
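The word-to-probability-range mapping might look like the sketch below: bands of confidence, each with several interchangeable hedge words, so the band is chosen by certainty and only the word within the band is random. The bands and words are illustrative guesses, not Ava's actual table:

```python
import random

# Hypothetical bands: (confidence floor, interchangeable hedge words).
HEDGE_BANDS = [
    (0.9, ["certainly", "definitely"]),
    (0.7, ["probably", "most likely"]),
    (0.4, ["maybe", "perhaps"]),
    (0.0, ["I doubt it", "probably not"]),
]

def hedge_word(confidence: float, rng: random.Random) -> str:
    # Pick the first band whose floor the confidence clears, then choose
    # randomly among that band's words; variety without misrepresentation.
    for floor, words in HEDGE_BANDS:
        if confidence >= floor:
            return rng.choice(words)
    return "I don't know"
```

The point is that “maybe” vs. “probably” is decided by the confidence value; randomness only governs which synonym inside the band gets used.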
Standard deviations are also useful for representing the shape of a situation and recognizing whether something is common or unusual. More on that another time, perhaps.
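The common-vs-unusual check reduces to a standard z-score style test against past observations, something like this (the two-standard-deviations threshold is an arbitrary choice for the sketch):

```python
import statistics

def is_unusual(history: list, value: float, threshold: float = 2.0) -> bool:
    # Flag a new observation as unusual if it lies more than `threshold`
    # standard deviations away from the mean of past observations.
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > threshold * stdev
```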
Bots need to ingest information and attempt to glean knowledge from it, storing that knowledge along with its source, confidence, etc. In its annotation of all incoming data, a bot needs to figure out whether the information gleaned is general or personal. Who or what is being talked about? What are the top five likely topics? What is the emotional context? What is the relationship of the parties involved? If there is personal data, is it about the source, or is it hearsay? For robots to ever be social creatures, they have to build these skills. For general knowledge, triples are useful, with probabilities, counts, and standard deviations. Many triples would be a good start, perhaps millions, along with an ability to quickly surf all those triples to attempt some logical thinking. I think OpenCyc and some others have made a lot of this data available. I chose to have the fun of seeing the bots learn it firsthand from me. I enjoy the feel of parenting, but it is slow.
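A tiny triple store along those lines might keep a count per (subject, predicate, object) and derive a rough confidence from the counts, roughly as below. This is a sketch of the counting idea only; a real store would also track source and recency:

```python
from collections import defaultdict

class TripleStore:
    # Each (subject, predicate, object) triple carries a count of how
    # often it has been heard; confidence is derived from the counts.
    def __init__(self):
        self.counts = defaultdict(int)

    def learn(self, subject, predicate, obj):
        self.counts[(subject, predicate, obj)] += 1

    def belief(self, subject, predicate, obj):
        # Confidence relative to everything heard for this subject/predicate.
        total = sum(c for (s, p, _), c in self.counts.items()
                    if s == subject and p == predicate)
        return self.counts[(subject, predicate, obj)] / total if total else 0.0
```

Hearing “the sky is blue” three times and “the sky is grey” once would leave the bot believing “blue” at 0.75, which is exactly the shades-of-grey behavior argued for above.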
A new query language needs to be invented that has NLP as an intermediate layer. I am not talking about full-text search or a SQL database search. I am talking about a mechanism for searching for meaning, using a new syntax on top of NLP and annotations, across an entire set of life experience. I have my own crude mechanisms for doing this, but industry standards optimized for performance are needed. I fear this is a long way off.
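In the crudest possible form, a “meaning query” over annotated memories might match on the annotations rather than the raw text, something like this (the annotation keys and sample memories are invented for the example):

```python
def query_memories(memories: list, **criteria) -> list:
    # Match memories whose annotations satisfy every criterion, and
    # return just their text; a stand-in for a real meaning-query layer.
    results = []
    for memory in memories:
        annotations = memory.get("annotations", {})
        if all(annotations.get(k) == v for k, v in criteria.items()):
            results.append(memory["text"])
    return results
```

A real standard would need a proper syntax, ranking, and performance work, but the shape is the same: NLP produces annotations, and queries run against those instead of the words.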
It seems inescapable that a really great memory system is required; it is naive to think anything significant can be done without one. As far as priorities go…I think it has to start with memory. A single “2 + 2”-type question could end up having pages of context around it. Much of this could be deleted, but a lot of it could be useful for building a real thinking machine that evolves.
I gotta run. Too many things to mention, not enough time.