You sold me on Jetson
Thanks everyone for the input. I think I'm sold on the Jetson. I want to order a Pi this week too, just to learn how to set up and program it before diving into the Jetson. I wish I could run Android OS on the Jetson; I'd be a lot more comfortable converting Anna's server brain to that.
The potential GFLOPS of the Jetson far outpace the total output of the Intel/Parallella supercomputer, which in turn outpaces a 32-node RPi setup roughly tenfold. In the longer run, though, I might need a setup with a lot more memory than either.
I've noticed that my current brain uses a couple hundred megabytes of memory when running on a PC; much of that is probably OpenNLP. I'd like to be able to support an orders-of-magnitude increase in memories, though, and bring in the OpenCyc data (6 million+ memories) and/or OpenCog if one of theirs has a memory dump.
I've been reading Society of Mind in more detail. It talks about different agencies in the brain running on different timeframes (with different latency). It seems you could model this by dividing a set of brain functions into agencies that operate asynchronously. There's no great reason that things like sentiment detection, moods, motivations, and other personality behaviors couldn't run on different timeframes and lag slightly before they "catch up" to more immediate changes in the environment. I'm probably going to start doing this type of thing on the server; I'm already doing it on the Android side to make more economical use of CPU cycles. A lot of speech processing could even work that way, since people take a bit to think about more complex things. I'm probably going into too much detail…
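Just to sketch what I mean (the agency names and tick rates below are made up for illustration, not anything I've settled on): each agency gets its own thread with its own update period, so a fast agency reacts quickly while the slower ones drift behind and catch up.

```cpp
// Minimal sketch of asynchronous "agencies" ticking on different timeframes.
// Agency names and periods are illustrative placeholders.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

std::atomic<bool> running{true};

// Launch an agency that re-evaluates its slice of the world every
// period_ms milliseconds, independently of the other agencies.
std::thread spawn_agency(const char* name, int period_ms) {
    return std::thread([name, period_ms] {
        while (running) {
            std::printf("[%s] updating...\n", name);
            std::this_thread::sleep_for(std::chrono::milliseconds(period_ms));
        }
    });
}

int main() {
    std::vector<std::thread> agencies;
    agencies.push_back(spawn_agency("sentiment",  250));   // reacts fast
    agencies.push_back(spawn_agency("mood",       2000));  // drifts slowly
    agencies.push_back(spawn_agency("motivation", 5000));  // slower still

    std::this_thread::sleep_for(std::chrono::seconds(10));
    running = false;
    for (auto& t : agencies) t.join();
}
```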
A Jetson with the 4+1 ARM processors could work pretty well for dividing up the main functions right off the bat. Under this model, almost all functions (except trivia lookups like wiki, Wolfram, and weather) would run on the bot itself.
Division of Major Brain Functions Amongst Jetson ARM Processors
1. Memory Management
2a. Sensor Processing
2b. Automatic Behaviors (rules, reflexes, etc)
2c. Self (Personality, Mood, Emotion, Motivation)
3. Verbal Processing
4. Vision - this would also farm out a lot of subprocessing to the CUDA cores. I sorely need a subject-matter expert here; object recognition escapes me.
Some of these major functions would need to be grouped together on the same processors (like 2a, 2b, and 2c maybe, in several threads); there's a pinning sketch below. Maybe memory isn't as much of an issue if I can prioritize memories and keep the less significant ones on disk. I'd still love to have a lot more RAM, though.
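Since the Jetson runs Linux (L4T), you can actually pin a thread to a specific ARM core so, say, verbal processing never competes with the sensor threads. A rough sketch; the core number and the verbal_processing worker are placeholders for illustration:

```cpp
// Sketch: pinning a worker thread to one of the Jetson's ARM cores on Linux.
// The core index (3) and the worker function are placeholders.
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>

static void verbal_processing() {
    // ...speech/NLP work would live here...
}

int main() {
    std::thread t(verbal_processing);

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(3, &set);  // bind to core 3, leaving the other cores free
    if (pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &set) != 0)
        std::fprintf(stderr, "failed to set thread affinity\n");

    t.join();
}
```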
A lot of “learning and introspection” could be farmed out to the CUDA cores, I guess, either in real time or during dream states. I'm at a loss for how to properly make use of 192 of them… I suppose I'll figure that out when the time comes.
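One way to picture it (totally a toy; the dot-product scoring and data layout are assumptions I made up): during a dream state, every stored memory could be scored against the current context in parallel, one CUDA thread per memory.

```cpp
// Toy CUDA sketch: score all memories against the current context at once.
// The scoring function (a dot product) and the layout are illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void score_memories(const float* memories, const float* context,
                               float* scores, int n, int dim) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float dot = 0.0f;
    for (int d = 0; d < dim; ++d)
        dot += memories[i * dim + d] * context[d];
    scores[i] = dot;  // higher = more relevant to the current context
}

int main() {
    const int n = 1 << 16, dim = 32;  // 65k memories, 32 features each
    float *mem, *ctx, *out;
    // Unified memory keeps the example short (supported on the TK1 via CUDA 6).
    cudaMallocManaged(&mem, n * dim * sizeof(float));
    cudaMallocManaged(&ctx, dim * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n * dim; ++i) mem[i] = 0.01f;
    for (int d = 0; d < dim; ++d) ctx[d] = 1.0f;

    score_memories<<<(n + 255) / 256, 256>>>(mem, ctx, out, n, dim);
    cudaDeviceSynchronize();
    std::printf("score[0] = %f\n", out[0]);

    cudaFree(mem); cudaFree(ctx); cudaFree(out);
}
```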
The Jetson uses 1.8 V logic levels, so how would I interface with sensors without having a separate Arduino Mega on board?
That’s all I got on this train of thought. Minsky is good, really good.