Subsumption

Has anyone here used subsumption to control robot behaviors? I have been reading a lot on the Dallas Personal Robotics Group (DPRG) site and found some interesting material about subsumption and arbitration for robot behaviors. The more I read, the more I thought this would be great to add to WALTER, so I continued on to a paper about subsumption. It sounds more difficult than it really is.

Subsumption doesn’t seem all that difficult to implement, and according to the DPRG information it actually produces smoother movement because the behaviors merge into one sequence with no gaps.

Has anyone else here done any work with subsumption?

8-Dale

I have tried to use it, but it got a little confusing keeping track of all the back and forth jumping around that goes on in the code. I have to admit that its modular design makes it really easy to add new behaviors without having to change the entire program: you simply add the new behavior at the right place in the subsumption priority order and, presto, your bot has a new behavior. This is really good for obstacle avoidance, or for any dynamically changing environment.
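A minimal sketch of that priority ordering in C (the behavior names, sensor flags, and `Command` struct are all hypothetical, not anyone's actual robot code): the arbiter walks a table from highest to lowest priority, and the first behavior that asserts itself wins.

```c
/* Hypothetical sensor flags and motor command for illustration only. */
typedef struct { int forward; int turn; } Command;

typedef int (*Behavior)(Command *out);  /* returns 1 if it wants control */

static int bumped = 0, obstacle_near = 0;

/* Highest priority: back away when the bumper is hit. */
static int escape(Command *out) {
    if (!bumped) return 0;
    out->forward = -1; out->turn = 1;
    return 1;
}

/* Middle priority: steer away from a detected obstacle. */
static int avoid(Command *out) {
    if (!obstacle_near) return 0;
    out->forward = 1; out->turn = -1;
    return 1;
}

/* Lowest priority: default behavior, just cruise forward. */
static int cruise(Command *out) {
    out->forward = 1; out->turn = 0;
    return 1;
}

/* Priority table: adding a behavior means adding one entry here. */
static Behavior behaviors[] = { escape, avoid, cruise };

/* Arbiter: first (highest-priority) behavior that fires wins. */
Command arbitrate(void) {
    Command cmd = {0, 0};
    for (unsigned i = 0; i < sizeof behaviors / sizeof behaviors[0]; i++)
        if (behaviors[i](&cmd))
            break;
    return cmd;
}
```

Adding a new behavior is just writing one function and inserting it at the right spot in the `behaviors` table, which is the modularity being described above.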

FSM, on the other hand, is way too complicated for me to figure out. It requires knowing how fast the MCU is able to execute code and having it work out in such a way that it can send out 1 to 2 ms pulses to a servo, as one example. With an FSM, you can have the MCU seemingly multitask several operations at once, but the proper timing of these events has to be calculated.

I had a module on robot system architectures last year, and we covered subsumption and ego-control as well as a few other variants. It was interesting stuff. I know this is a bit of a late reply, as the original post was November last year, but if it’s still of interest I’ll try to dig out my computerised notes.

Yeah, a little late! I didn’t reply either.

I thought I was going to write Subsumption code, but what I ended up with is really just an FSM that changes between canned gait sequences for walk forward and turn! I’m only doing simple obstacle avoidance on Loki using two IR sensors and sonar. Maybe I’ll add another FSM to add a behavior, like following light.

I don’t know that you have to worry too much about “how fast the MCU is able to execute code”, but you do have to plan on calling the FSM(s) maybe 10 times a second (System Tick).
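As a sketch of that idea, here is a tiny C state machine meant to be stepped once per system tick (roughly 10 Hz as suggested above); the state names and tick counts are made up for illustration, not Loki's actual code:

```c
/* Canned-gait FSM: walk forward until blocked, then turn for a
 * fixed number of ticks. Called once per system tick (~10 Hz). */
typedef enum { WALK_FORWARD, TURNING } State;

static State state = WALK_FORWARD;
static int ticks_in_state = 0;

/* 'blocked' would come from the IR sensors or sonar on a real robot.
 * Returns the state after this tick. */
State fsm_step(int blocked) {
    ticks_in_state++;
    switch (state) {
    case WALK_FORWARD:
        if (blocked) {              /* obstacle seen: switch gaits */
            state = TURNING;
            ticks_in_state = 0;
        }
        break;
    case TURNING:
        if (ticks_in_state >= 5) {  /* turn for ~0.5 s at 10 Hz */
            state = WALK_FORWARD;
            ticks_in_state = 0;
        }
        break;
    }
    return state;
}
```

Because each call does a small amount of work and returns immediately, several FSMs like this can be called from the same tick without interfering with each other.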

Not really. If you mean ISR code to generate R/C servo PWM pulse streams, yes, that needs to be running. But it’s a separate, almost hidden task. In addition to running servos, you also handle any RS-232 comms to control or monitor a 'bot, read the sonar over I2C, take A/D readings of two (or more) IR sensors, even flash a “heartbeat” LED and 3 or 4 status LEDs.

FSM routines (modules) DO seem to make the processor able to handle multiple tasks. Kinda like a “baby RTOS”. I like it.
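A minimal sketch of that “baby RTOS” style in C, assuming each task is a short, non-blocking function called round-robin from the main loop (the task names and counters are placeholders, not real drivers):

```c
/* Cooperative "baby RTOS": each task does a little work and returns
 * quickly; no task ever blocks, so they all appear to run at once. */
static int led_state = 0, sonar_reads = 0, comms_polls = 0;

static void read_sonar(void)    { sonar_reads++; }      /* e.g. I2C read   */
static void poll_comms(void)    { comms_polls++; }      /* RS-232 monitor  */
static void heartbeat_led(void) { led_state = !led_state; }

typedef void (*Task)(void);
static Task tasks[] = { read_sonar, poll_comms, heartbeat_led };

/* Called from the main loop (or on each system tick). */
void scheduler_tick(void) {
    for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
        tasks[i]();
}
```

The servo PWM generation would still live in a timer ISR, since it needs hard timing; everything else can tolerate being polled like this.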

A class? Yes, it would be interesting to see what you covered! Please DO see what you can dig up! I’d be interested!

Alan KM6VV

I didn’t see where there is a lot of jumping back and forth. Subsumption is a bit difficult to implement on an Atom micro because there is no real multitasking available. However, I believe my Subsumption engine comes very close, or as close as it can come, to the real thing on an Atom. I just need to jump to an Atom PRO to get more program memory.

Yes, indeed it is! I have added code that actually allows lower behaviors to be executed after the higher level behaviors are done IF it is reasonable to do so. Some higher level behaviors are mutually exclusive though, so lower level behaviors must be disabled in these cases - not difficult to do at all.
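One hedged sketch in C of how that disabling might look (entirely hypothetical names, not the actual engine): a winning behavior flagged as mutually exclusive suppresses everything below it, while a non-exclusive winner lets lower-priority behaviors run in the same pass.

```c
#include <string.h>

#define NUM_BEHAVIORS 3   /* index 0 is the highest priority */

static int enabled[NUM_BEHAVIORS] = {1, 1, 1};
static int ran[NUM_BEHAVIORS];

/* want[i]: behavior i is triggered this pass.
 * excl[i]: behavior i is mutually exclusive with everything below it. */
void arbitrate_pass(const int want[], const int excl[]) {
    memset(ran, 0, sizeof ran);
    for (int i = 0; i < NUM_BEHAVIORS; i++) {
        if (!enabled[i] || !want[i])
            continue;
        ran[i] = 1;                 /* behavior i executes this pass */
        if (excl[i]) {
            for (int j = i + 1; j < NUM_BEHAVIORS; j++)
                enabled[j] = 0;     /* disable lower-level behaviors */
            return;
        }
        /* not exclusive: fall through so lower behaviors run too */
    }
}
```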

You may be overcomplicating FSM, Mike. Subsumption actually is an FSM engine; more precisely, it may be what is called an AFSM (Augmented Finite State Machine) engine.

8-Dale

Oh yes, please do share what you can! :smiley:

8-Dale