The Parallella Board?

I was wondering what people think of this board and the supercomputer that this guy built with them.

The Board:

https://www.parallella.org/board/

This guy built his own supercomputer with 8 of them (each with 18 cores) wrapped around a gigabit switch and a couple of Intel i3 NUCs.

The Supercomputer:

https://www.parallella.org/2014/06/03/my-name-is-brian-and-i-build-supercomputers-in-my-spare-time/

I keep standing on the sidelines, waiting to jump into the RPi, but I wanted something with a little more raw horsepower under the hood.  This looked like a fun build if I could figure out the software side to make use of it.

What do you guys think?  Are there better alternatives?

I'm starting to think about designing future versions of my robot brains into more parallel functions.  This seemed interesting.

I think for the price the NVIDIA TK1 dev kit gives you much more: 4+1 ARM processors for Linux, and 192 CUDA processors for anything you want. It’s got 2GB RAM, a USB 3 connection, a SATA connector, plus GPIOs and such. And it uses very little power; I think about 10 watts when running the GPU all out.

Just do an Amazon or NewEgg search for “jetson tk1”

Unfortunately I bought my Parallella before I heard of the TK1, and this month I spent all my savings on a FLIR dev kit so I can have a bitty IR camera for one of my bots. I figure it will make facial recognition easier.

Another parallel alternative is just PCs with large graphics cards that aren’t connected to a monitor.

I think the next real PC I build may be something like that. I may even be able to do this with one of the machines I have, because I’ve got three machines with graphics cards and only one of them has a good motherboard.

The RPi is kinda slow but usable

It boots in 26 seconds and takes about 2 seconds to open almost any program, but there’s no lag at all when used in terminal mode.
As a server, the Model B+ is excellent, and it can use an external hard drive, even an SSD.
The Raspberry Pi doesn’t have the best FLOPS per dollar, but it’s the most minimal build I could find.
For image processing, another board with a beefier GPU would be better; as a file server, a Pi-like microcomputer is enough (the Banana Pi also has a SATA connector).

You sold me on Jetson

Thanks everyone for the input.  I think I’m sold on the Jetson.  I want to order a Pi this week too, just to learn how to set up and program it before diving into the Jetson.  I wish I could run Android OS on the Jetson; I would be a lot more comfortable converting Anna’s server brain to that.

The potential gigaflops on the Jetson far outpaces the total output of the Intel/Parallella supercomputer, which in turn outpaces a 32-node RPi setup by roughly tenfold.  In the longer run, though, I might need a setup with a lot more memory than either.

I have noticed that my current brain uses up a couple hundred megabytes of memory when running on a PC, much of it probably from OpenNLP.  I’d like to be able to support an increase of several orders of magnitude in memories, though, and bring in OpenCyc data (6 million+ memories) and/or OpenCog data if one of theirs has a memory dump.

I’ve been reading Society of Mind in more detail, which talks about different agencies in the brain running on different timeframes (with different latency).  It would seem that you could model this by dividing a set of brain functions into different agencies which operate asynchronously.  There is no great reason that things like sentiment detection, moods, motivations, and other personality behaviors couldn’t run on different timeframes and have slight latency before they “catch up” to more instant changes in the environment.  I’m probably going to start doing this type of thing on the server; I’m already doing it on the Android to make more economical use of CPU cycles.  A lot of speech processing could even work that way, as people take a bit to think about things that are more complex.  I’m probably going into too much detail…
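
Here’s a minimal sketch of how that could look, assuming a C++ server brain (the agency names, periods, and thread-per-agency layout are made up for illustration, not how Anna actually works): each agency loops on its own schedule, so slow things like mood lag a little behind fast things like reflexes and catch up on their own time.

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>
#include <vector>

// An agency updates on its own period, independent of the others.
struct Agency {
    std::chrono::milliseconds period;  // how often this agency "catches up"
    std::function<void()> step;        // one update cycle (placeholder work)
};

std::atomic<bool> running{true};

void runAgency(const Agency& a) {
    while (running) {
        a.step();
        std::this_thread::sleep_for(a.period);
    }
}

int main() {
    // Hypothetical agencies and timeframes, fast to slow.
    std::vector<Agency> agencies = {
        {std::chrono::milliseconds(50),   [] { /* reflexes: react to the latest sensor frame */ }},
        {std::chrono::milliseconds(500),  [] { /* sentiment: rescore the last utterance      */ }},
        {std::chrono::milliseconds(5000), [] { /* mood: drift toward recent sentiment        */ }},
    };

    std::vector<std::thread> threads;
    for (const auto& a : agencies)
        threads.emplace_back(runAgency, std::cref(a));

    std::this_thread::sleep_for(std::chrono::seconds(30));  // let the brain run a while
    running = false;
    for (auto& t : threads) t.join();
}
```

Swapping the sleeps for a shared event queue would give the same “different latency” effect while letting the slow agencies react immediately when something important happens.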

A Jetson with the 4+1 ARM processors could work pretty well for dividing the main functions right off the bat.  Under this model, almost all functions (except trivia lookups like wiki, Wolfram, and weather) would operate on the bot itself.

Division of Major Brain Functions Amongst Jetson ARM Processors

1.  Memory Management

2a.  Sensor Processing

2b.  Automatic Behaviors (rules, reflexes, etc)

2c.  Self (Personality, Mood, Emotion, Motivation)

3.  Verbal Processing

4.  Vision - this would also farm out a lot of subprocessing to the CUDA cores.  I so need a subject matter expert here.  Object recognition escapes me.

Some of these major functions would need to be grouped together on the same processors (like 2a, 2b, and 2c, maybe in several threads).  Maybe memory isn’t as much of an issue if I can prioritize memories and keep less significant ones on disk.  I’d still love to have a lot more GB though.
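
As a sketch of how the division above might be wired up, here is one way to pin one thread per major function to a specific A15 core under the Jetson’s Linux (Linux for Tegra exposes the four fast cores as CPUs 0–3; the agency loop is a hypothetical placeholder, and 2a/2b/2c are grouped on a single core as suggested above):

```cpp
#include <pthread.h>
#include <sched.h>
#include <chrono>
#include <thread>
#include <vector>

// Stand-in for a real agency loop (memory, senses/self, verbal, vision, ...).
void agencyLoop(const char* name) {
    (void)name;
    while (true) std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

// Pin a thread to one of the A15 cores (CPUs 0-3 on Linux for Tegra).
void pinToCore(std::thread& t, int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &set);
}

int main() {
    std::vector<std::thread> brain;
    brain.emplace_back(agencyLoop, "memory");       // 1. memory management
    brain.emplace_back(agencyLoop, "senses+self");  // 2a, 2b, 2c grouped together
    brain.emplace_back(agencyLoop, "verbal");       // 3. verbal processing
    brain.emplace_back(agencyLoop, "vision");       // 4. vision (heavy work goes to CUDA)

    for (int core = 0; core < (int)brain.size(); ++core)
        pinToCore(brain[core], core);               // one major function per core

    for (auto& t : brain) t.join();                 // runs until the bot shuts down
}
```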

A lot of “learning and introspection” could be farmed out to the CUDA cores, I guess…either in real time or during dream states.  I’m at a loss for how to properly make use of 192 of them…I guess I’ll figure that out when the time comes.

The Jetson uses different voltage levels (1.8 V), so how would I interface with sensors without having a separate Arduino Mega on board?
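
One common workaround, assuming you do keep an Arduino Mega on board as the 5 V front end: let the Mega read the sensors and stream the values to the Jetson over USB serial, so nothing has to level-shift down to the Jetson’s 1.8 V pins. A rough sketch (the pins and message format here are made up):

```cpp
// Arduino Mega side: read 5 V sensors, stream readings to the Jetson over USB serial.
const int SONAR_PIN  = A0;  // hypothetical analog range sensor
const int BUMPER_PIN = 7;   // hypothetical bump switch

void setup() {
    Serial.begin(115200);              // shows up as /dev/ttyACM0 on the Jetson
    pinMode(BUMPER_PIN, INPUT_PULLUP);
}

void loop() {
    int range  = analogRead(SONAR_PIN);
    int bumped = digitalRead(BUMPER_PIN) == LOW;
    // Simple line-oriented protocol the Jetson side can parse with sscanf().
    Serial.print("S ");
    Serial.print(range);
    Serial.print(' ');
    Serial.println(bumped);
    delay(20);                         // ~50 Hz update rate
}
```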

That’s all I got on this train of thought.  Minsky is good, really good.

The Jetson is a development kit to see if you could use this chip in a product. It will probably never be made expandable.

However, you can get most of the advantages (except for the low power consumption) with a PC motherboard and a couple of graphics boards.

I will say that I think a couple of Jetsons in a loose cluster would be wonderful for robotics. One could handle vision, one could handle the audio. Maybe one could do both; I’m not sure.

For a lower-power solution, there is the Intel Edison. Its main problem is level shifting, because its pins are at 1.8 V, I think.

re: 6677

Thanks 6677 for all the info.  Your comments about the Bionic Phone got me looking into newer phones a bit.

I’m no expert on CPUs, but this 9-core hardware looks interesting.  Still only 2 GB of RAM, but simply swapping the Bionic out for it and porting more brain functions to run locally has some appeal, mostly because I could keep the bots small and keep using Android.

http://consumer.huawei.com/en/mobile-phones/tech-specs/ascend-mate7.htm

CPU: HiSilicon Kirin 925 (4 × 1.8 GHz + 4 × 1.3 GHz + 1 × 230 MHz)

re: DThing

You mentioned the 2 loosely coupled Jetsons…

Maybe one Jetson and one supersized NUC with a lot of memory.  Jetson for video and calculation…NUC for brain/memories/DBs.  There would be cross-talk to delegate memories/tasks from the left to the right brain.  A little software architecture would keep this simple and force more things to be async…a good thing anyway.
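
A minimal sketch of what that cross-talk could look like, assuming the NUC runs a small TCP service the Jetson can hand work to (the host, port, and message format are invented; ZeroMQ or a simple REST call would do the same job):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Fire-and-forget delegation: the Jetson (left brain) posts a task line to the
// NUC (right brain) and does not wait for an answer, keeping the link async.
bool delegateTask(const std::string& host, int port, const std::string& task) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return false;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, host.c_str(), &addr.sin_addr);

    bool ok = connect(fd, (sockaddr*)&addr, sizeof(addr)) == 0 &&
              write(fd, task.c_str(), task.size()) == (ssize_t)task.size();
    close(fd);
    return ok;
}

int main() {
    // e.g. ask the NUC to store a new memory while the Jetson keeps chewing on video
    delegateTask("192.168.1.20", 5005, "STORE_MEMORY saw_person kitchen 0.87\n");
}
```

Because the call is fire-and-forget, the Jetson never blocks on the NUC, which keeps the left/right split asynchronous by construction.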

If I’m putting this on the bot, I guess a lot depends on whether I continue to use an Android face; as the newer phones get faster, the Jetson might not be needed as much.  Better phones with a lot more graphics processing ability are coming.  A droid and a NUC might do, and be more compact.  If the RAM grows, the droid might be good enough by itself.  Check out that Ascend Mate7 link I posted.


I think the NUC maxes out at 16 GB of RAM. Of course, you can add an SSD in the PCI-e slot (or whatever the miniaturized slots are called). When I built mine, the most I could get in that slot was 250 GB, with the other slot used for Bluetooth and Wi-Fi. I also put a 1 TB laptop SSD on the SATA port, which was also the largest I could find at the time.

If I can I will do some benchmarking of OpenCog on my NUC next month.