I’ve been thinking about Artificial General Intelligence (AGI) and OpenCog. These things would be much easier to implement if we had a full research supercomputer, although OpenCog has one and still hasn’t produced a human-level AGI.
Since we don’t, I’m thinking of another way of doing things.
Martin is building an AI server on his home computer. This is great, but with the internet the way it is, I expect the lag will sometimes be too great. It’s also a very linguistically based solution to getting computers to speak. It works well for that, as anybody who has seen the videos of Super Droid Bot Anna will agree.
On the other hand, I’m more concept-based: I want my robots to actually think. Anna does a wonderful imitation of this. If there were a Maker Faire anywhere near his place with a clean internet connection, Anna would be the hit of the show.
I don’t think this is the best solution, though, so I’m thinking of something new: an AI based on OpenCog and friends. But instead of a supercomputer I can’t afford, how about loosely distributed computing?
What if a small part ran in the background on a lot of ordinary computers, much the same way SETI@home and Folding@home do? That way there would always be parts of the AI close (in internet terms) to each of us, and we could all use it. To do this, the architecture of the underlying programs would have to be rewritten so the working parts are much smaller and more redundant. It should also somehow be useful to everybody who runs a part; perhaps it could serve as an intelligent search engine, since search is part of cognition.
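To make the "smaller and more redundant" idea concrete, here is a minimal sketch in Python of the volunteer-computing pattern those projects use: each tiny work unit is sent to several nodes, and the coordinator takes the majority answer so a slow or dead node doesn’t stall the whole system. The function names and the toy "work" (squaring a number) are my own invention for illustration, not anything from OpenCog.

```python
import random
from collections import Counter

REDUNDANCY = 3  # each work unit is sent to this many volunteer nodes


def volunteer_compute(task, fail_rate=0.0):
    """Simulate one volunteer node processing a tiny work unit.

    Here the 'work' is just squaring a number; a real node would run a
    small piece of the AI, such as part of a search or inference step.
    """
    if random.random() < fail_rate:
        return None  # node dropped offline or lagged out
    return task * task


def run_with_redundancy(task, fail_rate=0.0):
    """Send the same work unit to several nodes and take the majority
    answer, so no single unreliable machine can corrupt or lose it."""
    results = [volunteer_compute(task, fail_rate) for _ in range(REDUNDANCY)]
    results = [r for r in results if r is not None]
    if not results:
        return None  # every replica failed; the coordinator reissues it later
    return Counter(results).most_common(1)[0][0]


# Distribute a batch of work units across unreliable volunteer nodes.
answers = [run_with_redundancy(n, fail_rate=0.3) for n in range(10)]
```

The point of the sketch is that redundancy, not raw speed, is what makes a swarm of home computers behave like one reliable machine: any single answer can be lost, but the batch as a whole survives.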
If each of us could spread this program to computers near us (honestly, I’m not suggesting sys admins put it on people’s machines without their knowledge), we might be able to build our own supercomputer.
Or I could wait ten years until the tech gets good enough that I can build a supercomputer.
Any thoughts?