Subsumption Architecture

This is just nit-picking, but you actually can argue “no, that’s not RED, that’s ORANGE” or whatever. Color is a perception based on the wavelength of the light, and wavelength is on a continuous scale. So where does one color stop and another begin? There’s no absolute answer. Some traditional cultures (Native Americans?) considered green and blue to be just different shades of the same color. Even today, Japanese usage mixes green and blue in some cases (on a traffic light, the word for ‘blue’ is used to refer to the green light).

There are really very few absolutes in anything. A.I. techniques such as “fuzzy logic” are built on the idea that there are no absolutes, only relative differences. So instead of saying ‘red’ and ‘yellow’, you say ‘longer wavelength’ and ‘shorter wavelength’.
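Purely for illustration, here is a rough Python sketch of that idea. The function names and the wavelength ranges are my own guesses, not anything standard; the point is just that membership becomes a matter of degree instead of a hard cutoff:

```python
def triangular(x, left, peak, right):
    """Degree of membership (0.0 to 1.0) in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Wavelength ranges (in nanometres) are rough, illustrative guesses.
def orange_ness(wavelength_nm):
    return triangular(wavelength_nm, 585, 605, 625)

def red_ness(wavelength_nm):
    return triangular(wavelength_nm, 605, 660, 740)

# 615 nm is neither purely "orange" nor purely "red" -- it's partly both.
print(orange_ness(615), red_ness(615))   # ~0.5 and ~0.18
```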

Pete

I agree; I only used the RED analogy to explain what I was talking about.

There are several ways to approach a problem, and I do not claim to be an expert. If I had the skills, I would try to create a machine that could learn from experience and interaction the same way we do, but that will never happen.

Best case, I’ll be able to get my biped to move using the subsumption architecture. :laughing:

Chris,

Nothing dictates that each sensor have its own discrete module. A single module could handle the integration of both sets of sensor data.
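As a very rough sketch of what I mean (the sensor names and thresholds below are just placeholders I made up, not anything from Brooks or a real library), one behavior module can fuse two sensor inputs into a single decision:

```python
def avoid_module(ir_distance_cm, bumper_pressed):
    """One behavior module that fuses two sensor inputs into one decision.

    Returns (active, command). When active is True the command should
    subsume (override) whatever the lower-priority layers want to do.
    """
    if bumper_pressed:            # actual contact trumps everything
        return True, "back_up_and_turn"
    if ir_distance_cm < 20:       # getting close, so steer away early
        return True, "turn_away"
    return False, None            # nothing to react to; let lower layers run

print(avoid_module(50, False))   # (False, None)
print(avoid_module(12, False))   # (True, 'turn_away')
```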

Whether thorough sensor preprocessing and integration violates the spirit of the architecture is debatable, but it’s a liberty that most people take in one form or another.

Before you do that, though, consider what sort of reactions you would want under those circumstances. Are you sure they can’t be accomplished by refactoring the behaviors?

I’m a big fan of Brooks, but I’m not personally entirely sold on subsumption. I think, at the very least, it has some serious scalability issues if it is to remain maintainable. And while I’ve done a lot of reading, I’ve not actually tried to implement a system based on these ideas. So I’m happy to debate the potential merits of the system, but understand that my support for it rests on philosophy rather than personal experience or success with it.

I agree with everything Andy said. How you program the modules is entirely up to the programmer. I am going to stick with a simple approach since I don’t plan to have my bot fold my laundry. :laughing:

I can imagine the challenge involved in making a complex system with interacting modules.

My goal is to have my bot interact with its environment in an intelligent manner. The subsumption architecture I want to use will allow my bot to roam autonomously in a dynamic environment. Since I only want to be entertained by watching it move about, a very simple system is within my capacity to achieve.
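Here’s a minimal sketch of the kind of fixed-priority loop I have in mind, just to make the idea concrete. The sensor reads are stubbed out and all the names are my own placeholders, so treat it as an assumption-laden toy, not a real driver:

```python
import random
import time

def read_ir_cm():                 # stub; replace with a real rangefinder read
    return random.uniform(5.0, 100.0)

def read_bumper():                # stub; replace with a real switch read
    return random.random() < 0.05

def avoid(ir_cm, bumper):
    """Higher-priority layer: react to obstacles."""
    if bumper:
        return "back_up_and_turn"
    if ir_cm < 20:
        return "turn_away"
    return None                   # not triggered

def wander(ir_cm, bumper):
    """Lowest-priority layer: just roam around."""
    return random.choice(["forward", "veer_left", "veer_right"])

# Layers in priority order: the first one that fires subsumes the rest.
LAYERS = [avoid, wander]

def control_step():
    ir_cm, bumper = read_ir_cm(), read_bumper()
    for behavior in LAYERS:
        command = behavior(ir_cm, bumper)
        if command is not None:
            return command

for _ in range(20):               # on a real bot this would loop forever
    print(control_step())         # and the command would drive the motors
    time.sleep(0.1)
```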

I compare my bot to the Parallax BoeBot, but with arms and legs. :laughing: