Multiple Machine Redundant AI (MMRAI)

I’ve been thinking about Artificial General Intelligence (AGI) and OpenCog. These things would be much easier to implement if we had a full research supercomputer. OpenCog has that, and it still hasn’t produced a human-level AGI.

Since we don’t, I’m thinking of another way of doing things.

Martin is building an AI server on his home computer. This is great, but with the internet the way it is, I expect lag time will be too great at times. It’s also a very linguistically based approach to getting computers to speak. It works well for that, as anybody who has seen the videos of Super Droid Bot Anna can attest.

On the other hand, I’m more concept-based. I want my robots to actually think. Anna does a wonderful imitation of this. If there were a Maker Faire anywhere near his place with a clean internet connection, Anna would be the hit of the show.

I don’t think this is the best solution, so I’m thinking of something new: an AI based on OpenCog and friends. But instead of a supercomputer I can’t afford, how about loosely distributed computing?

What if a small part ran in the background on a lot of normal computers, much the same as OpenSETI and OpenFolding do? This way there would always be parts of the AI close (in internet terms) to each of us, and we could use it. To do this, the architecture of the underlying programs would have to be rewritten to make the working parts much smaller and more redundant. It should somehow be useful to everybody who runs a part. Perhaps it could serve as an intelligent search engine, because that is part of cognition.
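
A minimal sketch of what one of those background pieces might look like, in Python: a worker loop that pulls a small unit of work from a coordinator, crunches it, and posts the result back. The coordinator URL, the endpoints, and the task format here are all placeholders I made up; partitioning the actual AI into units like this is the hard part.

```python
# A toy volunteer-compute worker: fetch a small unit of AI work from a
# coordinator, process it locally, and send the result back.
import time
import requests

COORDINATOR = "http://example.org/api"  # hypothetical coordinator server

def process(task):
    """Stand-in for the real work unit, e.g. scoring one batch of data."""
    return {"task_id": task["id"], "result": sum(task["numbers"])}

def main():
    while True:
        resp = requests.get(f"{COORDINATOR}/task", timeout=30)
        if resp.status_code == 204:          # no work available right now
            time.sleep(60)
            continue
        task = resp.json()
        requests.post(f"{COORDINATOR}/result", json=process(task), timeout=30)

if __name__ == "__main__":
    main()
```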

If each of us could spread this program to computers near us (honestly, I’m not thinking of sysadmins putting it on people’s machines without their knowledge), we might be able to build our own supercomputer.

Or I could wait ten years until the tech gets good enough that I can build a supercomputer.

Any thoughts?

Common indexed data storage.

Common indexed data storage. And then you can have all the kinds of AI agents you want using it.

A big problem with strong AI is that the human brain hasn’t been completely mapped yet.

I even tried using neural networks, but they either train fast or don’t train at all. And it’s better to train multiple networks and then chain them than to build one big one.

There could be a service that answers API calls with a data set, an example set, and a function, and builds a neural network that solves that problem. This way you can have an AI model on demand. Neural networks act like optimized decision trees, so further queries can be answered much faster by the local machine. Or it could use API calls to gather the data set and examples.
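
A rough sketch of what such a service could look like, assuming Flask and scikit-learn as stand-ins (the endpoint names and the JSON format are my own invention):

```python
# Model-on-demand sketch: POST a data set and labels, get back a model id,
# then query the trained network via /predict.
from flask import Flask, request, jsonify
from sklearn.neural_network import MLPClassifier

app = Flask(__name__)
models = {}  # model_id -> trained network

@app.route("/train", methods=["POST"])
def train():
    payload = request.get_json()             # {"X": [[...], ...], "y": [...]}
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    net.fit(payload["X"], payload["y"])
    model_id = str(len(models))
    models[model_id] = net
    return jsonify({"model_id": model_id})

@app.route("/predict/<model_id>", methods=["POST"])
def predict(model_id):
    X = request.get_json()["X"]
    return jsonify({"y": models[model_id].predict(X).tolist()})

if __name__ == "__main__":
    app.run()
```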

Anyway, if we can make a neural network that knows how to make a target neural network, we are on the way to the singularity :)


In my opinion it would be a

In my opinion it would be a big mistake to centralize intelligence. Imagine if all humans shared the same intelligence base… Artificial intelligence must be embedded, not sitting on a server somewhere in the world.

I do not consider robots that use third-party AI services to be artificially intelligent. They are just shells, and useless for AI research. Anyone who uses artificial intelligence must completely understand the algorithm behind it. And if I understand the algorithm, I can write my own program.

Often I read that we need a supercomputer for artificial intelligence. You will be hard pressed to write an artificially intelligent program that is more than a few MB in size.

And finally, I always miss a definition at the beginning of these discussions. Artificial intelligence is math, and math always starts with a definition. Which AI are we talking about, weak AI or strong AI? While strong AI is still more or less a philosophical topic, weak AI has some well-defined fields.

I’m not sure if it’s AI
I’m not sure if it’s AI research in general or just the higher end of what used to be called strong AI, but the phrases I’m seeing are Specialized AI (for weak AI) and Artificial General Intelligence (for strong AI).

One problem is that all but the largest of robots can’t fit the computing power needed for AGI, which is the goal of OpenCog. They have some pretty good math people there, as well as the other specialists needed for cognitive/perceptual/linguistic problems such as a human-level AGI. If you want the math behind it, go to www.opencog.org.

Now what is the difference between a robot with embedded AI and one whose brain exists on a server in another building? As long as the connection is fast enough, it shouldn’t matter. They have a couple of projects going with avatars: some with small humanoid robots, and some as virtual avatars in a game-like world (at least one group is using OpenCog to produce better game NPCs). And yes, there are those on the project who feel that a human-level AGI doesn’t have to be embedded in a robot, but can exist in a computer without a body.

With specialized AI you can produce a lot of very good systems. But they won’t be able to handle general problems very well.

I want a robot who can handle general problems. I might never see it in my lifetime, but the OpenCog people are betting I’m wrong.

I want a robot that, if I enter a new house, will be able to recognize each of the rooms by function. I want a robot that can recognize people and create specialized insults for each of them. I expect that, at least for the time being, the AGI will reside in my van when I’m on the road (or if the van is my home). I don’t know how I’ll get a good secure connection at a Maker Faire, but I’ll work on that when I need to.

I don’t believe that having a single server will work for a large number of robots unless it is done professionally. I think that Martin’s ideas are good, but I’m afraid they won’t scale, and the lag time will be too much. However, he’s got something that most of the experts in the field don’t have: a working robot that appears to talk appropriately.

However, creating and cleaning the basic knowledge base for an AGI could, I believe, be done off-line in a distributed way. I’m not sure of the details yet, but I’m hoping that some people here come up with something interesting.

For example, the perceptual part of OpenCog, DeSTIN (I think that’s the correct spelling), saves general representations of objects (called centroids) that need to be created from multiple views and at multiple levels. To save space and make searching faster, it may be possible to find centroids that are close enough to each other and replace them with a single centroid in that storage. That is one thing that could be done off-line in a distributed way.
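
A toy version of that off-line clean-up step might look like the following; the vectors and the distance threshold are made up for illustration, and real centroids would be far higher-dimensional:

```python
# Greedily merge stored object representations that sit closer than a
# threshold, keeping one averaged centroid per cluster.
import numpy as np

def merge_centroids(centroids, threshold=0.5):
    merged = []
    for c in centroids:
        for i, m in enumerate(merged):
            if np.linalg.norm(c - m) < threshold:
                merged[i] = (m + c) / 2.0   # fold the close pair together
                break
        else:
            merged.append(c)                # far from everything: keep it
    return merged

centroids = [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([3.0, 3.0])]
print(len(merge_centroids(centroids)))      # 2: the first two were merged
```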

And then people could download or update their general knowledge base from a central server every now and then.

And there are those of us who do not believe that AI is just math. Yes, math can be used to describe the data structures and such. However, it takes a lot of other specialties to bring in the intelligence. I think one of the reasons the early AI efforts failed, aside from the hardware of the 1950s, is that they thought of it purely as math. I don’t think that building an AGI is a one-man project; if I get anywhere, it’s because somebody has already built the tools for me to use.

What specialties? Early AI

What specialties? Early AI efforts failed because the models were not good enough. And they are still not good. Humans can’t see reality. All we see is a model of the world that we developed over the years. In every science, scientists build models. There is a theory, and then maybe there is a better theory. There is no absolute truth for us. Never. Not even in math (Gödel’s incompleteness theorems).

As long as we’re talking about common computers, we’re talking about math. Every command you enter is pure math, just in a different notation. Nothing can be programmed if it can’t be described with math. There is no “and now I add a little bit of voodoo here and there”. OK, maybe on an Apple… :stuck_out_tongue:

I looked at the OpenCog math a while ago. They are using all the common algorithms, like SVM, kNN, NB, and NN.

Fascinating Thread…

Now this is a very fascinating thread. You have definitely raised some key issues that have been on my mind quite a bit.

The first step would seem to be to set up OpenCog on a server on the web with an API we can toy with, hook two robots into it, and prove we can do a lot of useful things with it.  Sadly, all we have so far is the OpenCog brochure, which sounds really good.  What type of hardware setup/budget would you propose to get things started?  What type of setup would you see long term?  I would like to help get this going with either funding or time, whether it’s OpenCog or something else that’s open.

You mentioned distributed approaches (like SETI).  I would agree that this is an approach deserving a lot of consideration.  This is where I hope to head with Anna’s brain…probably in 2016-2017, once everything else matures and stabilizes, partially to solve the scaling issue but also to let a lot more people get “hands on” with the software if they want.  There are various ways to partition things, and the best way is not clear to me yet.  A lot of this has to do with different types of memories…there are benefits to sharing and distributing some, while others are best kept localized and private.  There are other features that simply don’t need to be distributed, as they aren’t used that often.

“Common indexed data storage,” as Silux mentioned…is key.  A lot of standards would be needed.

I have so much more to say on this topic, but I’m late for something and gotta run.

Martin

About indexing data, xml

About indexing data, XML should do fine; it can express even whole languages and models.

About AI, another approach is to have a broad repository of programs, each containing a skill. When the robot needs to do a new task, it searches among its own programs; if none of them apply, it can search the remote repository for a program that can do it. If the repository check fails, it asks a human to teach it the task, by direct programming or by supervised learning, and then shares its experience by uploading it to the repository.

The applications in the repository can have a manifest file, like Android apps, which tells the robot the purpose, the requirements, and the permissions needed to run the application. (Hopefully lasers will require root access.)
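
To make that concrete, here is a hypothetical sketch of the manifest and the local-then-remote lookup described above. The XML schema, the skill names, and the repository format are all invented:

```python
# Skill lookup: check local skills first, then the remote repository,
# and fall back to asking a human teacher.
import xml.etree.ElementTree as ET

MANIFEST = """
<skill name="wave_arm">
  <purpose>Greet a detected person</purpose>
  <requires>servo_arm</requires>
  <permission>motors</permission>
</skill>
"""

local_skills = {"follow_line"}
remote_repo = {"wave_arm": MANIFEST}        # stand-in for a repo server

def find_skill(task):
    if task in local_skills:
        return "local"
    manifest = remote_repo.get(task)
    if manifest is not None:
        skill = ET.fromstring(manifest)
        perms = [p.text for p in skill.findall("permission")]
        print(f"Installing '{task}', needs permissions: {perms}")
        local_skills.add(task)
        return "installed from repository"
    return "unknown: ask a human to teach it, then upload it"

print(find_skill("wave_arm"))
print(find_skill("paint_house"))
```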

Simple skills can be learned from supervised examples and compiled into neural networks; skills like recognizing handwritten letters, understanding voice, and doing basic math can be learned, but new features, like a new piece of hardware, need to be taught by software.

This way there’s no need for a supercomputer. Linux repositories can already be used, and setting up a repo server is low cost in terms of processing power, energy, and bandwidth.


Right now I’m just thinking
Right now I’m just thinking about it. OpenCog has several background processes that could be farmed out. But for most things, OpenCog should run on a local server.

OpenCog uses a lot of data. One of their facilities, in China I think, has a network of small “supercomputers”; I suspect they are rack-mount PCs with many graphics cards used for parallel processing. I also suspect lots of RAM and disk arrays.

Not having children, at some point in the future I might be able to afford one such machine. If I need it. It all depends on what is going on with my life at that point.

One of the things I’d like to look at is the files that control AtomSpace. People who use it claim it’s slow, but nobody is interested in doing a rewrite. It is something I might be able to handle.

Right now the computer I have it installed on is an i5-something NUC, which is not a speed demon. I have to add extra storage via USB. Right now I have a 3 TB USB drive waiting for after Lee’s surgery next month. If I had the money, I would buy a box that lets me add four 6 TB drives, but again it would be USB 3. I’m waiting on that to see what is really needed in terms of disk space.

And I’m reading the papers on the OpenCog papers page because the project is poorly documented. I’m hoping to get a basic understanding of the entire project.

About supercomputers,

About supercomputers, there’s no such thing as too much power. If there’s free CPU, I could use it to crunch data for money, or other things. But I’m actually more interested in System-on-a-Chip arrays than in a powerful PC setup.

I thought about something
I thought about something like a large cluster of TK1s or BBBs or something, but a lot of AI is fairly memory intensive, so I don’t think there is enough RAM to do this.

Though the TK1 (quad-core-plus-one ARM, plus a Tegra GPU with 192 processors, USB 3, etc.) would make for an impressive tower. But unfortunately they are $192 apiece, and that doesn’t include disk and so forth. They are nice because they tend to stay under 10 watts of power.

I think most of the time the

I think most of the time the AI will be on the net harvesting data rather than crunching numbers. Then you’d want a processing unit for vision and speech, a unit with a real-time OS to deal with sensors, and a unit with a general-purpose OS to deal with data.

A lot depends on how you treat data and what you are storing. If you just record the whole video stream to disk all the time, or perform a physics simulation of the environment (more than 100 actors, 1,000 objects, …), it’s going to be really expensive. Not even humans do that. The biggest difference is in the architecture and the data structures.

Harvesting what data? What

Harvesting what data? What is relevant and what is not? Filling up memory with useless data is not human-like. The whole idea of connecting the robot to the internet is not human-like. Brute force is not human-like. The first gap I see is that you didn’t study humans. Start studying humans. Children. Yes, children. They will tell you more than any AI book can. Study what you want to model.

Markus, Most of AI is not

Markus,

Most of AI is not human-like. Yes, we can try to use the human brain as a model, but we don’t have a computer that can emulate it yet. Recording all the data while the robot is interacting with people and the environment could potentially be good, because then the robot could go over everything during down-time and perhaps make connections that it didn’t have time to make in real time. I don’t know if this would be beneficial or not.

Let’s say I bring a bot to an SF convention. It might be easier to explain to the bot, after everything was done, what those blue people were and so forth.

What data to harvest?
Markus has a real point about what data to harvest. This is probably why I want to farm certain things out until our personal computing power becomes great enough to do this in real time with a computer I got in a Cracker Jack box. :slight_smile:

Using math, I have to first postulate that Groucho exists and that he is with me at a science fiction convention. Now these are rather busy places, and I doubt that much of it could be absorbed in real time. I’m not even sure that anything more than basic video information could be processed with hundreds of people in a single area.

Now, let’s say that Groucho’s brain preprocesses the video and IR, doing basic feature analysis. Then, in 5-second chunks, this server sends the stream out to other, smaller distributed machines for further analysis. They would send back their deep analysis, which would probably need less bandwidth than the initial video.

This is collected and sent back to Groucho’s brain, which combines the analyses, perhaps stores some of them, and makes new nodes, for example “Cosplay”.
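
Here is a rough sketch of that farm-out step, with a local process pool standing in for the remote machines and a dummy function standing in for the real deep analysis:

```python
# Cut a preprocessed sensor stream into 5-second chunks, ship each chunk
# to a worker, and merge the compact analyses that come back.
from concurrent.futures import ProcessPoolExecutor

def deep_analysis(chunk):
    """Stand-in for a worker's heavy pass: returns a small summary."""
    return {"start": chunk["start"], "features": len(chunk["frames"])}

def analyse_stream(frames, fps=30, seconds=5):
    step = fps * seconds
    chunks = [{"start": i, "frames": frames[i:i + step]}
              for i in range(0, len(frames), step)]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(deep_analysis, chunks))  # order preserved

if __name__ == "__main__":
    fake_frames = list(range(900))           # 30 seconds of fake video
    print(analyse_stream(fake_frames))       # 6 summaries, one per chunk
```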

Due to the nature of how OpenCog works, things are remembered until they are forgotten. This is one of the things that makes OpenCog able to fit on a Linux PC. The entire program isn’t that huge, but the database in RAM can potentially be humongous.

For example, let’s activate in our own minds the concept of “Health Care.” I’m from the US, so I immediately also activate concepts and groups of related concepts such as “The Affordable Health Care Act,” “When is my next appointment?”, and “When I get divorced can I afford health insurance?” And many more.

This is why the database in RAM and disk is so large.

Now, not all of the sensory data is remembered, mainly just what is learned from it. The forgetting is handled by the DeSTIN and OpenCog combination.
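
To illustrate the remember-until-forgotten idea, here is a cartoon of it in Python: each concept carries an importance score that decays every cycle and is refreshed on use, and anything that drops below a floor is evicted. This is only a toy, not OpenCog’s actual attention-allocation code:

```python
# Toy forgetting mechanism: decay importance, boost on use, evict the rest.
DECAY, FLOOR = 0.9, 0.05

memory = {"Cosplay": 1.0, "blue people": 0.8, "hotel lobby": 0.1}

def tick(used=()):
    for concept in list(memory):
        memory[concept] = 1.0 if concept in used else memory[concept] * DECAY
        if memory[concept] < FLOOR:
            del memory[concept]              # forgotten

for _ in range(30):
    tick(used={"Cosplay"})
print(memory)                                # only "Cosplay" survives
```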

or querying around

From my experience, young children ask lots of questions and build knowledge from the answers. And they play a lot :)
Maybe harvesting isn’t the right word for it: a machine can’t ever download all the data on the internet, but it can make intelligent queries to find relevant results.

instead of the full video

Instead of the full video you could store some vectorized images, or find a way to store just the meaningful changes. Just like I have forgotten all the details about what I ate yesterday, but I remember that the previous weekend I had salmon, which was meaningful because it makes me happy.
The robot may find those blue people weird, and maybe ask later what they were.
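
A simple way to store just the meaningful changes is frame differencing: keep a frame only when it differs enough from the last one kept. A sketch using OpenCV, with an arbitrary threshold and a hypothetical file name:

```python
# Keep only frames that change meaningfully from the last kept frame.
import cv2

def keyframes(path, threshold=30.0):
    cap = cv2.VideoCapture(path)
    kept, last = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if last is None or cv2.absdiff(gray, last).mean() > threshold:
            kept.append(frame)               # meaningful change: keep it
            last = gray
    cap.release()
    return kept

print(len(keyframes("convention.mp4")))      # hypothetical recording
```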

There was an interesting
There was an interesting throw-away line in a grant proposal for CogBot (a robotic preschooler). The writer said that somebody (I want to say Hubble, but I don’t think that’s quite it) has shown that the ease of writing an AGI goes up with increasing resources, such that if you had almost unlimited computing resources it would be trivial.

They also implied that the economy forced on us by limited computing resources may be helpful in creating a human-like intelligence, because humans have finite resources too.

Whatever the case, for the foreseeable future I have limited resources, though I may be able to harvest video boards from some computers I have with dead motherboards, to see if they can be used for OpenCV and such.

swap as virtual ram

You can tell a Linux PC to use a whole hard disk or a partition as extra memory, using swap (set up with mkswap and swapon). Of course it will be really slow RAM…
SSDs have enough speed to give a real-time feeling, and for $70-80 you can get 120 GB of swap storage, while RAM is waaay more expensive. SSDs have limited write cycles, but they should last 5-10 years of everyday use as swap. Hopefully, by the time the drive becomes unwritable there will be cheaper RAM :)

Markus, Having our brains

Markus,

Having our brains filled with useless garbage is the essence of humanity. Do you remember the name of your first “girlfriend” in grade school? (Adrienne Barr, and no, I haven’t seen her in almost 50 years.) Do you remember the theme to Gilligan’s Island (“The mate was a mighty sailing man…”) and how it changed between seasons one and two? Do you remember TV ads that are no longer running (“Where’s the beef!”)? How many songs do you remember that you don’t even like? (“Achy Breaky Heart” comes to mind for me.)

Dave Barry had a phrase for this: Brain Sludge.

And now try to list things

And now try to list things you forgot during your life :slight_smile: