Subsumption Architecture

I have been reading about Rodney Brooks's Subsumption Architecture and was wondering how it would work with the SSC-32 servo controller. The examples provided use FSM (Finite State Machine) techniques. If an SSC-32 is used, then I really can't use an FSM the same way. I'm wondering whether, using the SSC-32, I will still be able to use Subsumption techniques.

Has anyone tried using the Subsumption Architecture?

Sure you can; it's just simulated instead of actually distributed across multiple processors. You can have as many modules as you want, each with its own message queue. The modules are executed by some master loop. They don't necessarily have to be executed in sequence or even at the same frequency. One or more of those modules would then have the ability to set servo positions. An easy way to do this would be to have a globally accessible array of servo positions; a periodic update would then send them over to the SSC-32.
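A minimal sketch of that master-loop idea in Python. The module names, the two-servo layout, and the helper function are my own illustrations, not from any real library; the SSC-32 does accept serial commands of the form `#<ch> P<pulse-width>`, which the sketch mimics with a string:

```python
# Sketch of simulated subsumption: behavior modules share a global
# servo-position array, and a periodic update pushes it to the SSC-32.
# Module names and the 2-servo layout are illustrative assumptions.

NEUTRAL = 1500                     # servo pulse width in microseconds
servo_pos = [NEUTRAL, NEUTRAL]     # globally accessible positions: [left, right]

def wander():
    """Low-priority module: request driving forward."""
    servo_pos[0], servo_pos[1] = 1600, 1400

def avoid(obstacle_seen):
    """Higher-priority module: overwrite wander's output when triggered."""
    if obstacle_seen:
        servo_pos[0], servo_pos[1] = NEUTRAL, NEUTRAL   # stop

def format_ssc32(positions):
    """Build an SSC-32 group-move command, e.g. '#0 P1600 #1 P1400' + CR."""
    return "".join(f"#{ch} P{pw} " for ch, pw in enumerate(positions)).rstrip() + "\r"

# Master loop: modules run in sequence, highest priority last, then the
# combined result is sent (here just printed) to the controller.
for obstacle in (False, True):
    wander()
    avoid(obstacle)
    print(format_ssc32(servo_pos))
```

On a real bot the `print` would be a serial write to the SSC-32, and the loop would run continuously rather than twice.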

Thanks Andy,

Then I am thinking of creating the modules, and instead of using an FSM to send pulses to the servos, just sending the positions to the SSC-32, which would replace the FSM. I do believe you need to have the modules in order, however, since Subsumption executes the highest priority last. I could be misunderstanding something, but that is how I understand the article that I have.

The priority is only relevant to the message queues. The order and frequency in which the modules execute is not dictated. The SSC-32 isn't going to replace the state machines; each of the modules would be a state machine in the subsumption architecture.
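For instance, a single behavior module written as a small finite state machine might look like this sketch (the state names and the `step()` convention are my own illustrative assumptions, not anything from the papers):

```python
import random

# One behavior module as a finite state machine. Each call to step() runs
# one state transition; a master loop would call step() on every module
# each pass. States and names here are illustrative.

class Wander:
    CHOOSE_DIR, CHOOSE_DUR, RUN = 0, 1, 2

    def __init__(self):
        self.state = self.CHOOSE_DIR
        self.direction = "fd"      # current drive request
        self.duration = 0

    def step(self):
        if self.state == self.CHOOSE_DIR:
            self.direction = random.choice(["fd", "tl", "tr"])
            self.state = self.CHOOSE_DUR
        elif self.state == self.CHOOSE_DUR:
            self.duration = random.randint(5, 20)
            self.state = self.RUN
        else:                      # RUN: count down, then pick again
            self.duration -= 1
            if self.duration <= 0:
                self.state = self.CHOOSE_DIR
        return self.direction

w = Wander()
for _ in range(3):                 # three passes of a hypothetical master loop
    w.step()
```

Each pass through the master loop advances every module's machine by one step, so no module ever blocks the others.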

So really FSMs and Subsumption are both part of the same design, more or less?

start here: A Robust Layered Control System for a Mobile Robot

then: Intelligence Without Representation

I am reading both articles you provided links for and it is truly a fantastic method from a programming standpoint and in terms of practicality. I am reading “A Robust Layered Control System for a Mobile Robot” now and it’s like a good book that I can’t put down.

I do plan to use this in my autonomous program. I love the expandability of this concept, where you can cut-n-paste new behaviors and have all the other behaviors work with them. I will start with color LEDs to see output in the beginning, to get used to this kind of logic, as well as learn how to code this in PBASIC. I am still learning PBASIC and I need a lot more practice. :laughing:

I fully understand the concept and its benefits, but putting it into PBASIC code is the real challenge. I am going to have to start very small and work my way up, which is another huge benefit of this concept: it allows exactly that.

I went against your recommended order of which article to read first. I read "Intelligence Without Representation" first. :laughing:

They go hand in hand so it’s no big deal. :stuck_out_tongue:

Keep in mind while implementing it that none of the “rules” are hard and fast. Most implementations I’ve seen are some adaptation on the original ideas. In practicality, most designers aren’t as concerned about the “robustness” of a multiprocessor design and simply simulate multiple behaviors in a single chip. Additionally, the raw processing power available in a $4 chip today is enormous when compared with what he was working with. Adaptations often include shared memory, finer granularity on priority levels, types of allowed content of the messages and so forth.

The crux of what I'm trying to say is to use the model as a starting point. If some feature of it is particularly cumbersome and an impediment to what you're trying to accomplish, change it.

That's advice I will take.

I was reading and there is mention of multiple processors for these layers, and I was thinking one processor is all I would need. Like you said, follow the simple model, but use what's needed to get the results desired.

This is exciting stuff!

Andy, thanks for the articles.

Mike,
As I gain understanding of the Subsumption Architecture, I find it very similar to Maslow's Hierarchy of Human Needs: en.wikipedia.org/wiki/Maslow’s_h … y_of_needs

There are things that the robotic platform must do at the lowest level in order to carry out a particular goal-oriented task in a complex environment, i.e. move around. Upon that, other “behaviors” can be built that utilize the lower levels of operation.

It stands to reason that if the goals of the lowest levels are in support of the continued operation of the platform, you have a need-based approach to robotic existence. This is analogous to the “Hierarchy of Human Needs” and represents a strong model for autonomous robotic behavior.

That darn Mars rover is still running simply because it has some ‘awareness’ of its current state, uses a high level of the layered control architecture to monitor that state, and bases its ‘behaviors’, or lower-level outputs, on that state awareness.

If you are interested, some of the concepts mentioned in the articles Andy kindly provided are nicely implemented with the ServoPod using IsoMax, which is offered by Lynxmotion. Although it is a single (Motorola) processor, the concept of State Machines is one of the foundations of the IsoMax language and a pleasure to learn (as I am trying to do). I strongly agree with Andy when he proposed the Subsumption Architecture as a starting point.

As I reread what I just wrote above, I can see multiple interpretations based on the Subsumption Architecture’s concept of levels of competence.
The right code, for the right task, at the right time, with the proper result is the ultimate level of control competence, regardless of what level we start at.

I have to thank you, Mike, for starting this thought-provoking thread, and Andy for the article links. I learned quite a bit today. Not least, I learned that at times over the past year or so (since owning the IsopodX) I have been using thought patterns in line with the Subsumption Architecture model without knowing it.

Chris

Thanks Chris,

I am confident that for the low-level robotics I plan to use in my project, the Subsumption Architecture and Finite State Machines are a perfect fit. My favorite feature is the expandability of the behaviors without having to “rebuild” the entire code structure, since there are no shared variables to rely on and each module works independently of the other modules.

I have tried a simple three-layer design using example code from another article to test adding a new behavior to an existing set of behaviors. To my surprise, it worked with no errors. This is a true parallel system, which gives the robustness that Rodney Brooks talks about.

Andy,

I understand now what you meant when you said the modules can be in any order. The modules CAN be in any order (not that I thought you were wrong); what matters is the subsumption order in the main routine. If I decide the subsumption order is not right, all I have to do is rearrange the order in which the modules are executed.

This stuff is great; I love the ease of modifying the program without worrying about it affecting all the other modules.

I ordered the PING sonar sensor from LM, so when it gets here I will experiment using it in my new “Avoid” behavior.

Unfortunately, I will not be able to do anything until I get back from New Hampshire. I am going for training on programming a machine called Takaya. It’s not real programming; rather, it’s setting the machine up to perform a certain test. The Takaya is like a GENRAD machine: it uses probes to check certain parts of a circuit board for shorts, solderability, component values, etc. Watching the probes move all over the board is neat; it looks like a sewing machine stabbing the board all over the place. :laughing:

I have not read any of the articles you guys are talking about, but I gather a fundamental part of it is multi-processing of some form.

Given that, perhaps the “Propeller chip” from Parallax is relevant? It’s a single CPU chip with a bunch of separate processing engines that share certain common resources.

Pete

Yes, it is indeed multi-processing per Rodney Brooks’s original concept. He used several processors that could talk to each other over wires, sending messages to each processor. I will probably not use the Propeller because of its complexity, although it would be perfect for this kind of thing.

The Propeller is amazing. Perhaps when I get more experience under my belt I will try to learn how to use it. The chip can definitely handle real-time processing.

Pete,
With my quick survey of that chip, it does seem like it might be ideal. Though before long a fairly simple system will outgrow 8 behaviors and will end up multitasking anyway.

I’m not positive, but I seem to recall that at least several of Brooks’s early bots had the “parallel” architecture emulated on a single processor shoreside, communicating via a radio link.

I think you haven’t quite gotten it yet. The robustness claim comes from the idea that there isn’t really a main behavior/routine. So if a portion of the system fails, only that behavior is lost; the rest of the system (other processors/behaviors) continues unaware of the loss. Lots of different modules are always doing their own thing, at their own pace. The priority comes in when the modules communicate with each other. The only chronological order imposed between the individual modules comes through message suppression.

What I meant by the order and frequency not necessarily being important is that how frequently a particular behavior is examined (sensor data read, for instance) depends only on that behavior’s needs. For example, your sonar might be fired 10 times a second. When it sees an obstacle, it might suppress the drive-forward behavior for 30 seconds. The drive-forward behavior might be written such that it drives forward for 2 seconds, then checks whether it should keep driving forward. So the frequencies at which those modules run are vastly different, but that difference is resolved by the message queueing.
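That timing mismatch can be sketched like this (the tick counts, thresholds, and function names are illustrative assumptions; a real implementation would use timers or a scheduler rather than a counted loop):

```python
# Sketch of suppression across modules running at different rates: the sonar
# module is examined every pass, the drive module only every other pass.
# Suppression is just a shared deadline the slower module checks on its own.

suppress_until = 0.0   # shared suppression deadline, in loop "ticks"

def sonar(tick, obstacle):
    """Fast module: on an obstacle, suppress driving for 30 ticks."""
    global suppress_until
    if obstacle:
        suppress_until = tick + 30

def drive_forward(tick):
    """Slow module: obeys the suppression deadline at its own pace."""
    if tick < suppress_until:
        return "stopped"           # output suppressed by higher layer
    return "forward"

log = []
for tick in range(0, 60, 2):       # drive module examined every 2 ticks
    sonar(tick, obstacle=(tick == 10))
    log.append(drive_forward(tick))
```

The two modules never call each other; the suppression deadline is the only coupling, which is the point of the message-queue design.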

In addition, the robustness would encompass a hint of overlapping functionality, both in terms of software and hardware (e.g. multiple sonars and ‘fallback’ IR for functionality overlap during some sort of map build).

So, who cares if the CD-ROM of my PC isn’t working? I still have my DVD-ROM, and it has a fallback buddy in the form of a network drive should I need to copy that excellent example of source code using one of the various copy-capable programs. It doesn’t matter which hardware module replaces the failed one, as long as the task gets done when it needs to and the system isn’t crippled by the loss.
Under the *robustness* claim, the loss shouldn’t be a problem.

There will be no central “authority” handling the replacement of a failed module’s lost functionality or the handling of message passing.

However, without some form of global variable (which the Subsumption Architecture argues is not needed), it seems unlikely that the messaging between modules could be of use. Opinions?

Chris

Chris,

I’m not sure I follow your concern. Is it that a message queueing system is unwieldy to build without some shared memory? Or rather that the message passing isn’t sufficient to allow effective communication between the modules?

On building the thing, I could see a bus such as I2C with individually addressable nodes in multimaster mode. The individual nodes manage their own queueing. The inhibit lines affect the queue’s management; these could be implemented as just another message, with a flag indicating that it should be examined and processed immediately rather than queued.
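A rough sketch of that node-local queueing, with an inhibit carried as an immediately processed message (the `Message` fields and `Node` class are illustrative assumptions, not any real I2C API):

```python
from collections import deque
from dataclasses import dataclass

# Each bus node keeps its own message queue. An "inhibit" is just a message
# flagged for immediate handling instead of being queued. Field names and
# payload strings are illustrative.

@dataclass
class Message:
    src: int                  # bus address of the sender
    payload: str
    immediate: bool = False   # inhibit/release: process now, don't queue

class Node:
    def __init__(self, address):
        self.address = address
        self.queue = deque()
        self.inhibited = False

    def receive(self, msg):
        if msg.immediate:
            # e.g. an inhibit line: act on it right away
            self.inhibited = (msg.payload == "inhibit")
        else:
            self.queue.append(msg)

    def process_one(self):
        """Handle the oldest queued message, unless inhibited."""
        if self.inhibited or not self.queue:
            return None
        return self.queue.popleft().payload

node = Node(address=0x42)
node.receive(Message(src=0x10, payload="drive fd"))
node.receive(Message(src=0x11, payload="inhibit", immediate=True))
```

Here the inhibit never enters the queue; it flips the node’s state as soon as it arrives, which mirrors a hardware inhibit line.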

If your concern is whether this form of communication is sufficient, I’d point to its successful implementations :slight_smile:

Andy,

When you say message communications, are you referring to a variable passing some sort of data, or are you talking about a physical wire that sends high/low signals to each processor (or FSM)?

The Subsumption architecture that I am working on does not include communications between the modules, which is why I was saying that the order in which the behaviors are placed in the main loop will affect the subsumption order. The highest-priority behavior should appear last in the main loop, because if a high-priority situation occurs, that module will subsume all the other modules’ outputs. Perhaps you are following the more complex architecture, where mine is based on the general idea but much more simplified.

Here is my PBASIC code:

It’s a work in progress, so there will be some discrepancies.

[code]' {$STAMP BS2}
' {$PBASIC 2.5}

'========================================================================

'Finite State Machine & Subsumption Architecture

'========================================================================

'---- I/O Definitions ]-------------------------------------------------

Ping PIN 15

' ----- Constants ]-------------------------------------------------------

PACT CON 5 'times through the avoid routine

Trigger CON 5 ' trigger pulse = 10 uS
Scale CON $200 ' raw x 2.00 = uS

RawToIn CON 889 ' 1 / 73.746 (with **)
RawToCm CON 2257 ' 1 / 29.034 (with **)

IsHigh CON 1 ' for PULSOUT
IsLow CON 0

' ----- Variables ]-------------------------------------------------------

pDur VAR Byte 'avoid routine countdown
rawDist VAR Word ' raw measurement
inches VAR Word
cm VAR Word

ileft VAR IN9 'IR LED outputs
iright VAR IN0 'i=(see code)
IEN CON 5 'enable FOR 555
ilast VAR Byte 'hit counter

i VAR Byte 'LOOP counter, whatever
tmp VAR Word 'temporary holder
seed VAR Word 'RANDOM number seed
'These are FOR the servo routines
LEFT CON 15 'left wheel port
RIGHT CON 3 'right wheel port
SACT CON 5 'times through act routine
drive VAR Word 'wheel command combo
ldrive VAR drive.BYTE1 'left wheel command
rdrive VAR drive.BYTE0 'right wheel command
aDur VAR Byte 'duration of pulse left
'Servo drive commands
fd CON $6432 'forward
rv CON $3264 'REVERSE
st CON $4b4b 'STOP
tr CON $644b 'turn right
tl CON $4b32 'turn left
rr CON $6464 'rotate right
rl CON $3232 'rotate left
'wander values
wstate VAR Byte 'shared Byte
wDir VAR Word 'wander value
wDur VAR Byte 'wander duration
'set up FOR running
wstate = 0 'initial wander state

main: 'this is the main activity LOOP
GOSUB wander
GOSUB avoid 'Last module ( Highest Priority )
GOSUB act 'Move servos based on modules output
GOTO main

'========================================
'Behaviors follow
'========================================
wander:
'randomly wander around
BRANCH wstate,[wcDir,wcDur]
'state 2 immed. follows
wDur = wDur - 1
IF wDur > 0 THEN wDone1
drive = wDir 'get direction
wstate = 0 'reset state
wDone1: 'completed
RETURN
wcDir: 'choose direction
RANDOM seed 'RANDOM direction
i = seed & %111 'mask off FOR 0-7 only
LOOKUP i,[tr,fd,fd,fd,rr,fd,fd,tl],wDir 'choose direction
wstate = 1 'NEXT state
RETURN
wcDur: 'choose duration
RANDOM seed 'RANDOM direction AND duration
wDur = (seed & %111111) + 20 'mask FOR 64 choices
wstate = 2 'NEXT state
RETURN

avoid: 'PING routine
IF pDur > 0 THEN pDec 'already doing one, got here
pDur = PACT 'times through this one
Read_Sonar:
Ping = IsLow ' make trigger 0-1-0
PULSOUT Ping, Trigger ' activate sensor
PULSIN Ping, IsHigh, rawDist ' measure echo pulse
rawDist = rawDist */ Scale ' convert to uS
rawDist = rawDist / 2

inches = rawDist ** RawToIn ' convert to inches
cm = rawDist ** RawToCm ' convert to centimeters

DEBUG CRSRXY, 15, 3,                        ' update report screen
      DEC rawDist, CLREOL,
      CRSRXY, 15, 4,
      DEC inches, CLREOL,
      CRSRXY, 15, 5,
      DEC cm, CLREOL

pDec: 'decrement stuff
pDur = pDur - 1
pDone:
RETURN

act: 'moves servo motors
IF aDur > 0 THEN aDec 'already doing one, got here
aDur = SACT 'times through this one
PULSOUT LEFT,ldrive * 10

DEBUG HOME, ? ldrive,CR
DEBUG ? rdrive ,CR
DEBUG ? aDur

PULSOUT RIGHT,rdrive * 10
aDec: 'decrement stuff
aDur = aDur - 1
aDone:
RETURN[/code]

Andy,

I think my concern is more a curiosity about this type of control and what I perceive as inherent limitations built into the model. My understanding of the Subsumption Architecture is that there is no central authority. Each higher layer is capable of overriding all inferior levels at any time and taking control of the system for a new purpose. At some later point, the level that took control can give control back to the overridden level without any knowledge, on the part of the overridden level, of where control came from. This loose coupling of interaction between the ‘subsumer’ and the ‘subsumee’ introduces a level of incompetence within the system which only compounds problems encountered in external interaction.

Under this hierarchy there seems to be a level of rigidity that could be disastrous. For example, two different layers, or two hardware components within the same level, could require functionality from a lower layer at approximately the same time. Suppose a portion of my sonar array detects an incoming object from the south (i.e. a large, rolling boulder) while at the same time the infrared proximity detector determines there is a drop-off to the north. Discerning which hazard to respond to is not a simple matter of subsumption of the IR sensor module or the sonar array module. Prioritization has to be performed that requires an exchange of information between modules going beyond a simple inhibition signal. Each of the two hazards must be processed and the repercussions compared. Handling such a crisis is where my curiosity originates.

I do believe your example of the hardware connection is spot on. I struggle with the prioritization of need, which would perhaps be better served with larger packet exchanges between levels.

Chris

In your example of two hazardous situations happening at the same time: if the sonar detects an incoming object, the bot would move out of the way, changing direction. Depending on which direction it moves, this may or may not put the edge hazard in a safe location. Depending on how you code the behaviors, you could have the bot move backwards to avoid the incoming object first and then resume forward, so that when it sees the drop-off, that becomes the next high priority.

I understand your point, and you are right: there are going to be some situations where the Subsumption architecture will not be able to react properly to multiple hazards.

This is not true A.I., although those observing the bot may make that interpretation. To me the Subsumption Architecture is a structured model in which all the thinking is pre-processed by the designer, or the person writing the code in this case. As we write the code we think; then we code to tell the machine what to do or how to react. The thinking has been done by us, and then the system is applied in code.

I think to achieve high levels of A.I. there would have to be a system that takes experience and memory and then bases its decisions on them, which is what we humans do from the day we are born. We interact, experience, remember, and learn. Our opinions are based on experience and influences, which is why people can disagree and debate on many things. People can’t argue that the color RED is RED, because that is fact, but people can argue complex issues such as war.

Subsumption can’t do this level of processing because of the disconnect between modules. Data must be shared.

For me and my little biped, subsumption is all I need. :laughing: