A few techniques I have used…
I find the SOM (Self-Organizing Map) VERY interesting. I have seen papers expressing emotions as vectors. I think aspects of visual and verbal processing could be represented as vectors as well.
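For anyone who hasn't played with one, here is a rough sketch of a single SOM training step in Python (grid size, learning rate, and the 4-d "emotion vector" are just illustrative, not from any particular paper):

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 4          # 10x10 map of 4-d weight vectors
weights = rng.random((grid_h, grid_w, dim))

def train_step(x, lr=0.1, radius=2.0):
    # Find the best-matching unit (the cell whose weights are closest to x).
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
    # Pull the BMU and its grid neighbors toward x, weighted by grid distance.
    for i in range(grid_h):
        for j in range(grid_w):
            d2 = (i - bi) ** 2 + (j - bj) ** 2
            influence = np.exp(-d2 / (2 * radius ** 2))
            weights[i, j] += lr * influence * (x - weights[i, j])

train_step(np.array([0.9, 0.1, 0.3, 0.7]))   # e.g. one emotion sample
```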
Most of my memory experience has been with data that is easy to imagine in relational datastores. Whether Access (as Jeff discussed) or SQL Server, the concept is the same…model memory structures and then store lots of them, along with their relationships to other memory structures. I think this technique has many uses (it's how the world stores data), but other techniques are needed in addition to move forward.
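As a rough sketch of what I mean (table and column names here are just illustrative, shown in SQLite for portability):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE memory_node (
    id   INTEGER PRIMARY KEY,
    kind TEXT NOT NULL,      -- e.g. 'word', 'phrase', 'concept'
    name TEXT NOT NULL
);
CREATE TABLE memory_link (
    from_id INTEGER REFERENCES memory_node(id),
    to_id   INTEGER REFERENCES memory_node(id),
    rel     TEXT NOT NULL    -- e.g. 'is-a', 'has-part'
);
""")
con.execute("INSERT INTO memory_node VALUES (1, 'concept', 'penguin')")
con.execute("INSERT INTO memory_node VALUES (2, 'concept', 'bird')")
con.execute("INSERT INTO memory_link VALUES (1, 2, 'is-a')")
```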
Currently, I use a CacheHelper service that sits in between the code and the databases and decides what to hold “in memory” and what to go to the databases for. It also takes care of building indexes for access by ID, Name, or other factors, for speed. This works well for frequently accessed data (like words, phrases, and many others) which is constantly being looked up. It also works well when there are logic questions to deal with, like “Can a penguin fly?”, where many memory nodes (possibly hierarchical in nature) have to be evaluated to get an answer.
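A rough Python sketch of the idea (the names are mine for illustration, not the actual service): the helper builds ID and Name indexes and only falls through to the database loader on a miss, and the hierarchy walk is how a question like “Can a penguin fly?” ends up evaluating several nodes:

```python
class CacheHelper:
    def __init__(self, loader):
        self.loader = loader                 # callable that fetches one row from the db
        self.by_id, self.by_name = {}, {}    # indexes built for fast access

    def _add(self, row):
        self.by_id[row["id"]] = row
        self.by_name[row["name"]] = row

    def get_by_id(self, id_):
        row = self.by_id.get(id_)
        if row is None:                      # cache miss: hit the database once
            row = self.loader(id_)
            if row is not None:
                self._add(row)
        return row

def has_property(node, prop, by_id):
    # Walk up 'is-a' links so "Can a penguin fly?" finds penguin's
    # override (False) before falling back to bird's default (True).
    while node is not None:
        if prop in node["props"]:
            return node["props"][prop]
        node = by_id.get(node.get("parent"))
    return None
```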
For NLP parses, I also use a special intermediary “cache helper” (keyed by sentence) and “Parse Dictionary” where I cache features about a given parse of a given sentence. This lets me avoid hitting the NLP parse routines twice for the same sentence in the same session, so you can ask questions about a recent or distant conversation without incurring huge delays as each sentence is found, prioritized, and parsed. Example: “What color was the bus?” - where recent references to buses need to be recalled, parsed, and colors identified.
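In sketch form (parse_sentence here is just a stand-in for whatever expensive NLP routine is in use):

```python
# Per-session parse cache, keyed by sentence.
parse_cache = {}

def parse_sentence(sentence):
    # Stand-in for the real (expensive) NLP parse routine.
    return {"tokens": sentence.split()}

def get_parse(sentence):
    parse = parse_cache.get(sentence)
    if parse is None:                    # only pay the parse cost once per session
        parse = parse_sentence(sentence)
        parse_cache[sentence] = parse
    return parse

get_parse("What color was the bus?")     # parses
get_parse("What color was the bus?")     # cache hit, no re-parse
```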
For persistence, I used to use a “generic” database structure that could store any memory type. I then needed “meta memories” - memories about memories - to define and document the structure of each memory type. This model made it easy to develop a user interface on top of all memories so I could see what was going on.
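Roughly like this (schema is illustrative, in SQLite): every memory is stored as type/attribute/value rows, and the meta-memory table documents what each type looks like so a UI can be generated over it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE memory (
    id    INTEGER,
    type  TEXT,     -- which memory type this row belongs to
    attr  TEXT,     -- attribute name
    value TEXT      -- attribute value, stored generically as text
);
CREATE TABLE meta_memory (
    type TEXT,      -- memory type being described
    attr TEXT,      -- attribute it has
    datatype TEXT   -- how to interpret/display the value
);
""")
con.execute("INSERT INTO meta_memory VALUES ('word', 'text', 'string')")
con.execute("INSERT INTO memory VALUES (1, 'word', 'text', 'penguin')")
```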
Later I decided to support “specific” databases - meaning “non-generic” - where each specific memory type was modeled as a separate table. In order to build a UI, I still needed the meta-memories - which I derived from the system tables automatically. As a side benefit…this UI has so many business applications…imagine apps that build themselves. This model has the advantage that additional indexes can be added for specific tables…a necessity now that I am dealing with some tables that have millions of rows. I would say it becomes a necessity for anything over 75K rows.
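Deriving the meta-memories from the system catalog looks roughly like this (SQLite's PRAGMA here; in SQL Server you would read sys.tables / sys.columns instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE phrase (id INTEGER PRIMARY KEY, text TEXT, freq INTEGER)")

def derive_meta(con, table):
    # Each column becomes a meta-memory row: (table, column, datatype).
    return [(table, col[1], col[2]) for col in
            con.execute(f"PRAGMA table_info({table})")]

print(derive_meta(con, "phrase"))
# [('phrase', 'id', 'INTEGER'), ('phrase', 'text', 'TEXT'), ('phrase', 'freq', 'INTEGER')]
```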
I layered a generic rules engine on top of this, where the rules - pre-conditions (think patterns or stimuli) and post-conditions (think responses) - were modeled in a few tables. Rules could be layered on top of any table in any db…hundreds of them. Reflexes are one use of a rules engine. There are many others.
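A toy sketch of the pattern (the rule contents would live in tables in the real thing; these are just illustrative):

```python
rules = [
    # (table, pre-condition, post-condition)
    ("sensor", lambda row: row["temp"] > 40, lambda row: print("too hot!")),
    ("sensor", lambda row: row["touch"],     lambda row: print("reflex: pull back")),
]

def apply_rules(table, row):
    for t, pre, post in rules:
        if t == table and pre(row):   # stimulus matched...
            post(row)                 # ...fire the response

apply_rules("sensor", {"temp": 45, "touch": False})   # -> too hot!
```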
Other than conversation histories, I haven’t gotten into storing the detailed histories of sensor data over time…too much data. I feel like some intermediary derived data (features, vectors, etc.) might be smaller and easier to deal with.
I speculated once on creating what I called “Data Cubes”…whereby a “Data Cube” object would monitor a given sensor input and store (in memory) the recent history, plus summary stats on less recent history in various time increments (minute, hourly, daily, etc.). This cube would handle storing the summary stats asynchronously to a db when it chose to. The point of this whole idea was to allow a robot to monitor a set of inputs (through multiple cubes) and attempt to derive knowledge or correlations from observation. Example: getting a robot to observe that it tends to get warmer when the sun comes up. Beyond correlation, since the cubes would store data over time…perhaps they could be programmed to recognize cause and effect, by recognizing that a change in one precedes a change in another. I think it is all possible, but I never built the cubes or tried it. We used to have to derive formulas for raw data in school; that basically represents coming up with a hypothesis or idea. Robots could do this too…which would be really cool I think, a robot coming up with ideas from observation.
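If I were to sketch it, a cube might look something like this (sizes and the minute-level rollup are just illustrative; the real design would also flush summaries to a db asynchronously):

```python
from collections import deque

class DataCube:
    def __init__(self, name, recent_size=100):
        self.name = name
        self.recent = deque(maxlen=recent_size)   # raw recent history, in memory
        self.minutes = {}                          # minute -> (count, sum, min, max)

    def record(self, timestamp, value):
        self.recent.append((timestamp, value))
        minute = int(timestamp // 60)              # roll raw values up per minute
        c, s, lo, hi = self.minutes.get(minute, (0, 0.0, value, value))
        self.minutes[minute] = (c + 1, s + value, min(lo, value), max(hi, value))

temp = DataCube("temperature")
light = DataCube("light")
# Two cubes observing in parallel; correlating their summaries over time
# (does light rise before temperature?) is how the robot might notice
# that it tends to get warmer when the sun comes up.
```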
I haven’t mentioned visual here…I think it’s an area where something like the SOM or some other techniques would be useful…I haven’t done anything I would consider useful visually.