I'm always asking myself, "What is the logical way to build a smarter bot with today's tech and the tech soon to come?"
Currently, I have come to the following conclusions. I realize that each point I make is hugely debatable; I’m just putting out some opinions, not trying to prove anything. This is the course I am currently on, so I thought it might stimulate some fun discussion.
1. A bot can't know everything, so at some point a bot will need to "look up something" on the internet. Likely, a bot will need to look up many things at the same time, or do many things that involve internet resources.
2. I believe the main "brain" of smarter bots should be physically "off bot" and "on the web" for small and medium-sized household bots that have internet connectivity. I used to really want everything to be on a bot, but I have come to this conclusion for performance, economic, and reuse reasons.
Performance: A bot can call its "Internet Brain" once, and the "Internet Brain", or IB, can call many other web services/resources as needed, in separate threads, before figuring out what to do (see the sketch after this list).
Economics: Bots that have to carry the power and weight of "big brains" will be bigger and more expensive than most people would like. I’d personally like to have 3 or more bots per household, so they need to be affordable, and smart.
Reuse: Should bot brains be custom builds? I don't think so. I believe brains should be reused. Until we figure out how to better share/leverage software agents and develop some common concepts/interfaces/etc, we will all be building bots that aren't as smart and useful as they could be.
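To make the Performance point concrete, here is a minimal sketch in Python of an IB fanning out several lookups in parallel before deciding what to send back to the bot. The service names and return values are made up; the point is the fan-out.

```python
# Minimal sketch: an Internet Brain fans out several lookups in parallel.
# The "services" here are placeholders, not a real API.
from concurrent.futures import ThreadPoolExecutor

def fetch_weather(city):
    # In a real IB this would call a weather web service.
    return {"city": city, "rain_tuesday": True}

def fetch_calendar(user):
    # In a real IB this would call the owner's calendar service.
    return {"user": user, "events_tuesday": ["dentist 10am"]}

def handle_bot_request(city, user):
    # One call from the bot triggers many web calls in separate threads.
    with ThreadPoolExecutor(max_workers=4) as pool:
        weather = pool.submit(fetch_weather, city)
        calendar = pool.submit(fetch_calendar, user)
        results = {"weather": weather.result(), "calendar": calendar.result()}
    # The IB would now reason over the combined results and decide
    # what behavior to send back to the bot.
    return results

if __name__ == "__main__":
    print(handle_bot_request("Boston", "owner1"))
```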
3. Bots should not wait for or expect to get an answer as to what to do in any given circumstance immediately. Basically, things should be asynchronous. This means bots should make a call to an IB with something like "Is it going to rain on Tuesday?" and then call again a fraction of a second later to see if an answer is waiting. A mechanism for the server to call the bot when the answer is ready would obviously be better.
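Here is a rough sketch of that ask-then-poll exchange. The IB is faked in-process and the ask/poll names are my own invention; in reality these would be web calls to the IB.

```python
# Sketch of the ask-then-poll pattern. The IB is simulated in-process;
# in a real system "ask" and "poll" would be HTTP calls to the Internet Brain.
import threading
import time
import uuid

class FakeInternetBrain:
    def __init__(self):
        self.answers = {}

    def ask(self, question):
        ticket = str(uuid.uuid4())
        def work():
            time.sleep(0.5)                      # pretend web lookups happen here
            self.answers[ticket] = "Yes, rain is likely on Tuesday."
        threading.Thread(target=work, daemon=True).start()
        return ticket                            # answer arrives later, on the IB's time

    def poll(self, ticket):
        return self.answers.get(ticket)          # None until the answer is ready

if __name__ == "__main__":
    ib = FakeInternetBrain()
    ticket = ib.ask("Is it going to rain on Tuesday?")
    answer = None
    while answer is None:
        time.sleep(0.1)                          # the bot keeps doing other things
        answer = ib.poll(ticket)
    print(answer)
```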
4. Bots will have different sensors, actuators, behavior, etc. This means Internet Brains (IBs) will need to support many different configurations. I will refer to this as "Metadata Driven IBs", or MDIBs. It is logical for this metadata to exist on the internet and be maintainable by robot builders through an app of some kind. It would be very helpful (but exceedingly unlikely) if standard concepts and structure could emerge for this metadata. There would be a huge amount of this metadata and many different substructures. (Instead of waiting for these standards which will never happen, I will probably just come up with some. Why not?)
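To give a feel for what I mean by metadata, here is a guess at what one robot's MDIB entry might look like. None of these field names are a standard; they are placeholders for the idea.

```python
# A guess at what per-robot MDIB metadata might look like. None of these
# field names are a standard; they're placeholders for the concept.
import json

robot_metadata = {
    "robot_id": "kitchen-bot-01",
    "sensors": [
        {"type": "sonar", "count": 3},
        {"type": "camera", "resolution": "640x480"},
    ],
    "actuators": [
        {"type": "drive", "wheels": 2, "max_speed_mps": 0.5},
        {"type": "speaker"},
    ],
    "capabilities": ["obstacle_avoidance", "speech", "telepresence"],
    "agents_enabled": ["weather_agent", "door_lock_agent"],
}

# The MDIB app would let the builder edit this, and the IB would read it
# to decide which agents and commands apply to this particular robot.
print(json.dumps(robot_metadata, indent=2))
```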
5. People will want to interface with their bots through various devices while not physically around them. They may want to see an avatar of their bot, onsite images, video, maps, or sensor data through phone, web, tablet, etc. These maps might be consolidated “views” on multiple bots/sensor data, like home automation data / internet of things stuff.
6. Bots that are owned by the same person should be able to share data so as to increase their “situational awareness” of a given location. The internet of things should be tied in as well. This should be a function of the MDIB. Any bot in your house should know whether the front door is locked, what the thermostat is set to, whether there is motion at your back door, or a flood in your basement.
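A tiny sketch of that shared “situational awareness” store, with invented device names and values:

```python
# Sketch of a shared "situational awareness" store kept by the MDIB.
# Device names and values are invented for illustration.
house_state = {
    "front_door_locked": True,
    "thermostat_setpoint_f": 68,
    "back_door_motion": False,
    "basement_flood_sensor": "dry",
}

def any_bot_can_ask(key):
    # Every bot owned by the same person reads from the same store,
    # so they all share one picture of the house.
    return house_state.get(key, "unknown")

print(any_bot_can_ask("front_door_locked"))      # True
print(any_bot_can_ask("basement_flood_sensor"))  # "dry"
```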
7. It should be possible to build complex rules on the MDIB that coordinate the home, its sensors, and one or more bots.
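One way such a rule might be expressed on the MDIB: a condition over the shared state plus an action naming a bot and a command. Everything here is hypothetical.

```python
# One way rules on the MDIB might look: a condition over shared home state
# plus an action naming a bot and a command. Entirely illustrative.
house_state = {"back_door_motion": True, "owner_home": False}

rules = [
    {
        "name": "investigate_motion",
        "condition": lambda s: s["back_door_motion"] and not s["owner_home"],
        "action": {"robot": "patrol-bot-01", "command": "goto", "target": "back_door"},
    },
]

def evaluate_rules(state):
    # The MDIB would run this whenever sensor data changes and push the
    # resulting commands down to the named robots.
    return [r["action"] for r in rules if r["condition"](state)]

print(evaluate_rules(house_state))
```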
8. If an MDIB is a “black box” that is configurable, useful, and interoperable, then robot developers do not really need to know or care what technology was used to build it.
9. While MDIBs should run “on the internet”, they should also be able to be extended and customized back “into the home” by supporting some common interfaces and being able to call internet resources that homeowners put on their own computers. This means developers/homeowners should be able to build intelligent agents, register them with the MDIB (Metadata Driven Internet Brain), configure them through the MDIB app, write and publish their code on their PC or other device, and then have the MDIB start using their custom agents when appropriate to do so.
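A rough sketch of what registering a homeowner's custom agent could look like. The registry class, its fields, and the endpoint URL are all invented for illustration.

```python
# Sketch of registering a homeowner's custom agent with the MDIB.
# The registry, the fields, and the endpoint URL are all made up.
class AgentRegistry:
    def __init__(self):
        self.agents = []

    def register(self, name, endpoint, handles):
        # "endpoint" points back into the home, e.g. a small web service
        # the homeowner runs on their own PC.
        self.agents.append({"name": name, "endpoint": endpoint, "handles": handles})

    def agents_for(self, topic):
        return [a for a in self.agents if topic in a["handles"]]

registry = AgentRegistry()
registry.register(
    name="garage_door_agent",
    endpoint="http://192.168.1.50:8080/agent",   # homeowner's own machine
    handles=["garage_door", "vehicle_arrival"],
)
print(registry.agents_for("garage_door"))
```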
10. What is the role of microcontrollers in this model of robot building? Robots still need an on-board brain. This brain needs to directly handle timing-sensitive devices like sensors (sonars, gyros, etc.), actuators, and motors. This brain will need to handle “reflex actions” like obstacle avoidance, and be able to call an MDIB for higher-level, less time-sensitive brain functions. A unified “Dictionary of Commands” will need to be devised so robots can communicate with MDIBs and implement commands given to them.
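Here is a toy sketch of what a shared “Dictionary of Commands” could look like on the robot side, with an onboard reflex that can veto a command. The command names are invented.

```python
# Sketch of a shared "Dictionary of Commands" on the robot side.
# Command names are invented; the point is that bot and MDIB agree on them.
def drive_forward(args):
    return f"driving forward {args.get('distance_cm', 0)} cm"

def speak(args):
    return f"saying: {args.get('text', '')}"

COMMANDS = {
    "DRIVE_FORWARD": drive_forward,
    "SPEAK": speak,
}

def obstacle_detected():
    return False   # placeholder for a real sonar check

def execute(command):
    # Reflexes (like obstacle avoidance) stay onboard and can veto a
    # command before it runs; higher-level decisions come from the MDIB.
    if obstacle_detected() and command["name"] == "DRIVE_FORWARD":
        return "reflex override: obstacle ahead, ignoring drive command"
    handler = COMMANDS.get(command["name"])
    return handler(command.get("args", {})) if handler else "unknown command"

print(execute({"name": "DRIVE_FORWARD", "args": {"distance_cm": 50}}))
print(execute({"name": "SPEAK", "args": {"text": "Hello"}}))
```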
11. How should data-intensive sensor processing (like video processing) be handled in this model? That is an open question. I suspect a hybrid approach, with most of it being done onboard and some “interesting” frames being sent to an MDIB for additional processing (object recognition, localization, face recognition, etc.).
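A sketch of that hybrid idea: do cheap frame differencing onboard and only upload “interesting” frames. The frames, threshold, and upload function are all stand-ins.

```python
# Sketch of the hybrid idea: cheap motion detection onboard, and only
# "interesting" frames sent to the MDIB. Frames and thresholds are fake.
def frame_difference(frame_a, frame_b):
    # Stand-in for real image differencing; frames here are just lists of pixels.
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def send_to_mdib(frame):
    print("uploading frame for object/face recognition:", frame)

previous = [0, 0, 0, 0]
for frame in ([0, 0, 0, 1], [0, 0, 0, 1], [90, 80, 70, 60]):
    if frame_difference(frame, previous) > 10:   # arbitrary "interesting" threshold
        send_to_mdib(frame)
    previous = frame
```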
The next question is “How should a brain work?”
To me, that is an unsolved problem. I ran into a quote again today that reminded me of my own efforts and deserves repeating:
What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. – Marvin Minsky, The Society of Mind, p. 308
My efforts at a brain thus far can basically be summed up as a collection of services and software agents that use different techniques to accomplish general and specific tasks. A service figures out which agents are applicable to the circumstances and executes them. When all these agents are done, the service arbitrates any conflicts to determine what desired behavior gets sent back to a robot.
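A minimal sketch of that service-plus-agents arbitration idea, with invented agent names and a simple priority rule standing in for real arbitration:

```python
# Sketch of the "collection of agents plus arbitration" idea described above.
# Agent names, priorities, and the arbitration rule are all illustrative.
class Agent:
    def __init__(self, name, applies, decide, priority):
        self.name = name
        self.applies = applies      # function: situation -> bool
        self.decide = decide        # function: situation -> proposed behavior
        self.priority = priority    # used to arbitrate conflicts

agents = [
    Agent("wander", lambda s: True, lambda s: "wander the room", priority=1),
    Agent("low_battery", lambda s: s["battery"] < 0.2, lambda s: "go to charger", priority=10),
]

def brain_service(situation):
    # 1. Pick the agents that apply, 2. run them, 3. arbitrate by priority.
    proposals = [(a.priority, a.decide(situation)) for a in agents if a.applies(situation)]
    return max(proposals)[1] if proposals else "do nothing"

print(brain_service({"battery": 0.15}))   # -> "go to charger"
print(brain_service({"battery": 0.90}))   # -> "wander the room"
```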
Given this concept of a brain (which might not be a good one, but let's run with it for the sake of this point), I think it is quite easy to visualize the “Society of Mind” concept as an MDIB. If a configurable brain is built as a collection of agents running largely independently of one another, with all the configuration stored as metadata and maintainable through an app, many robots would be able to easily share agents/code/metadata/knowledge.
As new agents or new versions of existing agents are written and introduced into the collective MDIB DNA, some robots might use them, others not. I can only guess that robots would get a lot smarter, a lot faster, at a much lower cost, with much less code.
What do you folks think? (other than the obvious, that I just wrote a manifesto)