A brain-functioning approach

Hi again! In the next few lines I will try to explain how I'm designing Hurby's brain, from the sensory module to the actuator module. I think it can be interesting and useful for some of you. And since the design is still in progress, all your comments and suggestions are welcome.

In previous blogs I talked about the Subsumption architecture (which arbitrates between different kinds of behaviour, like happiness or anger) and the Fuzzy Inference module that manages each of these behaviours.

In this blog I will talk about the whole execution model of Hurby's brain. As a first approach, I've divided the brain into 3 main modules: Sensory Cortex, Behaviour Architecture and Motor Cortex. And as a picture is worth a thousand words, here it is:

1-_Hurby_s_Brain_Activity.jpg

As you can see, the Sensory Cortex periodically scans for events on the different sensors along Hurby's body. This module adapts those events and builds inputs for the Behaviour Architecture. In this second module, inputs are processed by a reasoning algorithm, which generates different actions according to the behaviours present in Hurby's brain. Those actions are received by the Motor Cortex module, which evaluates them and selects the highest-priority one. Once selected, it is applied to the actuators (eyes, ears, feet and mouth).
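To make this flow more concrete, below is a minimal C++ sketch of the three stages. All names, thresholds and commands here are illustrative placeholders of my own, not Hurby's actual code:

```cpp
// Minimal sketch of the sensors -> reasoning -> actuators flow.
// All names, thresholds and commands here are illustrative only.
#include <algorithm>
#include <cstdio>
#include <vector>

struct SensoryInput {            // built by the Sensory Cortex
    unsigned tslp;               // time since last play (seconds)
    long     ptt;                // playing time trend
};

struct Action {                  // produced by the behaviour layers
    int         priority;        // higher value wins in the Motor Cortex
    const char* actuatorCmd;     // e.g. an eyes, ears, feet or mouth command
};

// Behaviour Architecture: turn one input into candidate actions
std::vector<Action> reason(const SensoryInput& in) {
    std::vector<Action> actions;
    if (in.tslp > 600) actions.push_back({2, "look_sad"});   // sadness layer
    if (in.ptt  > 30)  actions.push_back({1, "move_ears"});  // happiness layer
    return actions;
}

// Motor Cortex: evaluate the actions and apply the highest-priority one
void act(const std::vector<Action>& actions) {
    if (actions.empty()) return;
    auto best = *std::max_element(actions.begin(), actions.end(),
        [](const Action& a, const Action& b) { return a.priority < b.priority; });
    std::printf("actuator command: %s\n", best.actuatorCmd);
}

int main() {
    SensoryInput in{900, 0};     // 15 minutes without play
    act(reason(in));             // prints "actuator command: look_sad"
}
```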

As shown, Hurby's brain looks like a simple mechanism, but let's go a bit deeper and see what's going on inside.

2_-_Execution_flow_from_sensory_to_motor_cortex.jpg

This figure gives us a bit more information about its internals. As you can see, the Sensory Cortex starts another module (called the sensor scanner). This scanner checks for events every 100 ms. When an event is detected, a callback function is executed and the Sensory Cortex is thereby notified about that event. The Sensory Cortex processes the event, builds an Input data structure and sends it to the Behaviour Subsumption Architecture. From here to the end, the flow is the same as shown in the first picture.
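As a rough idea of how the scanner could look, here is a simplified C++ sketch of the 100 ms polling loop with a notification callback. The sensor read is faked for illustration; this is not the real hardware code:

```cpp
// Sketch of the sensor scanner: poll every 100 ms and run a callback
// when an event is detected. The sensor read is faked for illustration.
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <functional>
#include <thread>

class SensorScanner {
public:
    using Callback = std::function<void(int /*sensorId*/)>;

    explicit SensorScanner(Callback cb) : notify_(std::move(cb)) {}

    // Check the sensors every 100 ms; notify the Sensory Cortex on events
    void run(int cycles) {
        for (int i = 0; i < cycles; ++i) {
            int sensorId = readSensors();           // placeholder hardware read
            if (sensorId >= 0) notify_(sensorId);   // event detected -> callback
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }

private:
    // Fake read: occasionally pretend one of four touch sensors fired
    int readSensors() { return (std::rand() % 10 == 0) ? std::rand() % 4 : -1; }
    Callback notify_;
};

int main() {
    SensorScanner scanner([](int id) {
        std::printf("Sensory Cortex notified: sensor %d touched\n", id);
    });
    scanner.run(50);   // scan for about 5 seconds
}
```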

Now that the execution flow (from sensors to actuators) is clear, I would like to go deeper and explain in detail how each module works.

Sensory Cortex

This module is an active object that behaves according to the following state machine description:

3_-_Sensory_Cortex_State_Machine.jpg

After initialization it restores several parameters from PS (permanent store memory), in this case the timestamp at which Hurby was last touched and the number of times my daughters have played with it. It then waits for events detected by the sensor scanner. But every minute the Sensory Cortex is woken up by a timeout event, even if no sensor touch events have been received.

So, every minute this Cortex updates the TSLP variable (time since last play) and sends input data to the Subsumption Architecture. It is easy to see that if no sensor touch events are received, TSLP will increase gradually and Hurby will become less and less happy.
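A minimal sketch of this timeout path could look like the following. The lastTouch handling, sendInput() and the timing are placeholders of my own, just to show the idea:

```cpp
// Sketch of the minute-timeout path in the Waiting events state:
// TSLP is recomputed and an input is sent even without touch events.
// lastTouch, sendInput() and the timing are placeholders of my own.
#include <cstdio>
#include <ctime>

static std::time_t lastTouch;              // restored from PS at start-up

static void sendInput(unsigned tslp) {     // hand the input to the architecture
    std::printf("input: TSLP = %u s (the bigger, the less happy Hurby is)\n", tslp);
}

static void onMinuteTimeout() {            // fired once per minute
    unsigned tslp = static_cast<unsigned>(std::time(nullptr) - lastTouch);
    sendInput(tslp);                       // sent even with no touch events
}

int main() {
    lastTouch = std::time(nullptr) - 120;  // pretend last play was 2 min ago
    onMinuteTimeout();                     // TSLP keeps growing each minute
}
```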

When sensor events are notified by the scanner, the Sensory Cortex switches to the Being Touched state. In this case, a pair of variables come into play:

  • firstTouch: the timestamp at which the first sensor event is received after a period of inactivity
  • lastTouch: the timestamp at which the last sensor event has been received during this playing period.

Each time a sensor is touched, the Sensory Cortex updates its TSLP and PTT (playing time trend) variables and generates a new input data structure for the Behaviour Architecture.

If, during a playing period, there is one minute of inactivity, all variables are stored in PS memory and the Sensory Cortex goes back to its default Waiting events state.
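Putting the Being Touched state together, here is a simplified sketch. The PTT formula shown is just an illustrative guess, not the real one:

```cpp
// Sketch of the Being Touched state: firstTouch/lastTouch bound the
// playing period, PTT is derived from it, and one minute of inactivity
// saves everything to PS. The PTT formula here is an illustrative guess.
#include <cstdio>
#include <ctime>

struct SensoryCortex {
    std::time_t firstTouch = 0;   // first event after a period of inactivity
    std::time_t lastTouch  = 0;   // most recent event in this playing period
    long        ptt        = 0;   // playing time trend

    void onTouch() {
        std::time_t now = std::time(nullptr);
        if (firstTouch == 0) firstTouch = now;   // a new playing period starts
        lastTouch = now;
        ptt = static_cast<long>(lastTouch - firstTouch);  // crude trend
        std::printf("input: PTT = %ld s, TSLP reset\n", ptt);
    }

    void onInactivityTimeout() {  // one minute without sensor events
        saveToPS();               // persist lastTouch and the play counter
        firstTouch = 0;           // back to the Waiting events state
    }

    void saveToPS() { std::printf("variables stored in PS memory\n"); }
};

int main() {
    SensoryCortex cortex;
    cortex.onTouch();               // playing starts
    cortex.onTouch();               // more touches during the same period
    cortex.onInactivityTimeout();   // one quiet minute closes the period
}
```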

Behaviour Subsumption Architecture

This module receives an input data structure from the Sensory Cortex and generates actions for the Motor Cortex module. For this reason, we can say it is a passive module, fired each time a new input is received:

4_-_Subsumption_Architecture_Model.jpg

Input data is decomposed according to the event source and applied to different behaviours, each of which generates its own action response. This layered architecture allows all of these behaviours to execute in parallel; the higher a behaviour sits in the stack, the higher its priority, and all of the generated actions reach the Motor Cortex module.
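Here is a small C++ sketch of this layered evaluation, assuming an illustrative stack of two behaviours. Both behaviours and their thresholds are invented for the example:

```cpp
// Sketch of the layered reasoning: every behaviour sees the input and
// may emit an action; its position in the stack fixes its priority.
// Both behaviours and their thresholds are invented for the example.
#include <cstdio>
#include <functional>
#include <optional>
#include <vector>

struct Input  { unsigned tslp; long ptt; };
struct Action { int priority; const char* cmd; };

using Behaviour = std::function<std::optional<Action>(const Input&)>;

int main() {
    // Lower index = lower layer = lower priority
    std::vector<Behaviour> stack = {
        [](const Input& in) -> std::optional<Action> {      // happiness
            if (in.ptt > 30) return Action{0, "wiggle_ears"};
            return std::nullopt;
        },
        [](const Input& in) -> std::optional<Action> {      // sadness
            if (in.tslp > 600) return Action{0, "droop_eyes"};
            return std::nullopt;
        },
    };

    Input in{700, 0};   // long time without play, no current playing trend
    for (std::size_t layer = 0; layer < stack.size(); ++layer) {
        if (auto a = stack[layer](in)) {
            a->priority = static_cast<int>(layer);   // priority from position
            std::printf("layer %zu -> %s (priority %d)\n",
                        layer, a->cmd, a->priority);
            // every generated action is forwarded to the Motor Cortex
        }
    }
}
```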

Motor Cortex

This module, like the Sensory Cortex, is an active object that continuously waits for actions from the Subsumption Architecture.

5_-_Motor_Cortex_Action_Execution_model.jpg

When a new action is received, it is added to an ActionList. If the Motor Cortex has no action to process and a new one (or several) is added to the list, it changes its execution state to Selecting Actions. In this state, it examines the actions stored in the ActionList and evaluates which one has the highest priority. It also checks whether that action inhibits the rest of the pending actions in the list. Once an action is selected, its internal data is extracted and the Motor Cortex switches to the Executing Action state.

In this state, it sends signals to the actuators (servos or the text-to-speech voice synthesis module). Each action can be applied for a fixed period of time or continuously until a new action arrives. If the selected action runs for a fixed period of time, once it times out the Motor Cortex switches back to the Selecting Actions state. If the ActionList is empty, it returns to the default Waiting for actions state.
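To illustrate the selection step, here is a minimal sketch of choosing and inhibiting actions. The field names and the exact inhibition rule are my own illustration, not the final implementation:

```cpp
// Sketch of the Motor Cortex selection and execution steps: pick the
// highest-priority action in the ActionList and honour its inhibition
// flag. Field names and the inhibition rule are my own illustration.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Action {
    int         priority;
    bool        inhibitsOthers;   // cancel the pending actions in the list?
    int         durationMs;       // 0 = run until the next action arrives
    const char* cmd;
};

int main() {
    std::vector<Action> actionList = {
        {0, false, 2000, "wiggle_ears"},
        {2, true,  0,    "speak"},        // highest priority, inhibits the rest
        {1, false, 1000, "blink_eyes"},
    };

    // Selecting Actions: find and extract the highest-priority entry
    auto it = std::max_element(actionList.begin(), actionList.end(),
        [](const Action& a, const Action& b) { return a.priority < b.priority; });
    Action selected = *it;
    actionList.erase(it);
    if (selected.inhibitsOthers) actionList.clear();   // drop pending actions

    // Executing Action: fixed duration or continuous until a new action
    std::printf("executing %s (%s)\n", selected.cmd,
                selected.durationMs > 0 ? "fixed duration" : "until next action");
}
```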

In conclusion, Hurby's brain is a simple mechanism divided into 3 independent modules connected in a unidirectional flow: sensors-reasoning-actuators. Thanks to the two Cortex modules (Sensory and Motor) being active objects, this brain can run autonomously forever, as should be expected in a robot like this.

And that's all. See you in the next blog!!