Friday, June 4, 2010

Dynamic Neural Fields and cooperative robots


Dynamic Neural Fields formalize how neural populations represent the continuous dimensions that characterize the movements, perceptual features and cognitive decisions of agents. Neural fields evolve dynamically, generating elementary forms of cognition. Many social activities are based on the ability of individuals to predict the consequences of others' behavior: we have to interpret the actions of our partners in collaborative work.
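For reference, the standard Amari-type formulation that these models build on describes the activation u(x,t) of the field over a continuous dimension x (for example, movement direction) as

    τ ∂u(x,t)/∂t = −u(x,t) + h + S(x,t) + ∫ w(x − x′) f(u(x′,t)) dx′

where h is the resting level, S(x,t) the external input, w an interaction kernel with local excitation and broader inhibition, and f a sigmoidal firing-rate function; the exact kernel and firing-rate function vary between models.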
Erlhagen et al. (2007) use Dynamic Neural Fields to model the understanding of motor intentions, with the aim of building artificial agents endowed with this capacity. The authors consider that action understanding relies on the observer using its own motor abilities to internally replicate the observed actions and their effects. They are inspired by the activity patterns of mirror neurons in premotor cortex, which suggest chains linking neurons that code successive motor acts. Depending on the specific chain of mirror neurons activated by contextual and external cues, the observer predicts, in a probabilistic manner, what the observed agent is going to do. Erlhagen and collaborators represent the activity of neural populations encoding different motor acts and goals by means of Dynamic Neural Fields, and the synaptic links between any two populations in the network are established through Hebbian learning dynamics.
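As a rough sketch (the variable names and learning rate below are my own, not taken from the paper), such a Hebbian link between two populations can be strengthened in proportion to their coincident activity:

    # Hedged illustration of a Hebbian update between two populations.
    # f_pre and f_post are the current firing rates of the presynaptic and
    # postsynaptic populations; eta is an assumed learning rate.
    def hebbian_update(w, f_pre, f_post, eta=0.05):
        # synaptic efficacy grows when both populations are active together
        return w + eta * f_pre * f_post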
Think of autonomous robots that interact in the context of a joint construction task, in a simple reaching-grasping-placing scenario. An observing robot R1 has to select a complementary action sequence depending on the inferred action goal of the other robot R2, its partner. For instance, R2 may grasp an object and place it in front of R1 with the intention of handing it over. Neural populations in the action observation layer and the action simulation layer encode motor primitives such as grasping, and neural populations in the goal layer are associated with the respective chains in the action simulation layer.

The dynamics of the different neural populations are modeled with a discrete version of a dynamic neural field. Each dynamic field represents a population of 2N neurons, divided into an excitatory and an inhibitory subpopulation, each of dimension N. The activation of an excitatory and an inhibitory neuron i at time t is governed by a coupled system of differential equations. The firing rate and the shunting term for the excitation are non-linear functions of sigmoid shape, and the interaction strength between any two neurons within a subpopulation is given by fixed synaptic weight functions that decrease with the distance between the neurons. A Hebbian learning rule increases the synaptic efficacy between presynaptic and postsynaptic neurons.
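To make these dynamics concrete, here is a minimal numerical sketch of such a discretized excitatory-inhibitory field; the parameter values, kernel shapes and coupling structure are illustrative assumptions, not those of Erlhagen et al. (2007), and the shunting term is omitted:

    # Minimal sketch of a discretized excitatory/inhibitory neural field.
    # All parameters and kernel shapes are illustrative assumptions.
    import numpy as np

    N = 100                      # neurons per subpopulation (2N neurons in total)
    tau_u, tau_v = 1.0, 0.5      # time constants of excitation and inhibition
    dt = 0.01
    x = np.linspace(-np.pi, np.pi, N)

    def firing_rate(z, beta=4.0, theta=0.0):
        # non-linear function of sigmoid shape
        return 1.0 / (1.0 + np.exp(-beta * (z - theta)))

    def kernel(x, sigma):
        # fixed synaptic weights that decrease with the distance between neurons
        d = x[:, None] - x[None, :]
        return np.exp(-d**2 / (2.0 * sigma**2))

    W_uu = 2.0 * kernel(x, 0.4) / N   # excitatory -> excitatory (narrow)
    W_uv = 1.5 * kernel(x, 1.0) / N   # inhibitory -> excitatory (broad)
    W_vu = 1.5 * kernel(x, 0.4) / N   # excitatory -> inhibitory

    u = np.zeros(N)                   # excitatory activations
    v = np.zeros(N)                   # inhibitory activations
    h = -0.5                          # resting level
    S = 2.0 * np.exp(-(x - 0.5)**2 / (2 * 0.2**2))   # localized external input

    for _ in range(2000):             # Euler integration of the coupled equations
        fu, fv = firing_rate(u), firing_rate(v)
        u += dt * (-u + h + S + W_uu @ fu - W_uv @ fv) / tau_u
        v += dt * (-v + W_vu @ fu) / tau_v

    print("peak excitatory activation:", u.max())

With local excitation and broader inhibition, the field settles into a localized bump of activation centred on the input, which is the basic decision mechanism this kind of model relies on.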
To establish the chains, the authors employ a learning-by-observation paradigm in which a teacher demonstrates the sequences, each composed of motor primitives. Once a neural population becomes active, its activity propagates to all synaptically coupled populations, but only the populations defining a particular chain reach a suprathreshold activation level.
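A toy illustration of this selection mechanism (the chains, weights, cue and threshold below are invented for the example): activity injected into an observed motor act spreads over all couplings, but only the populations along the chain favoured by the learned links and the contextual cue are driven above threshold.

    # Toy illustration of chain selection; weights, cue and threshold are invented.
    # Populations 0-2 form chain A (reach -> grasp -> place),
    # populations 3-5 form chain B (reach -> grasp -> hand over).
    import numpy as np

    labels = ["reachA", "graspA", "placeA", "reachB", "graspB", "handoverB"]
    W = np.zeros((6, 6))
    W[1, 0] = W[2, 1] = 1.0   # strong learned links along chain A
    W[4, 3] = W[5, 4] = 1.0   # strong learned links along chain B
    W[4, 0] = 0.2             # weak spurious coupling across the chains

    u = np.zeros(6)
    u[0] = 1.0                # the observed motor act activates "reachA"
    cue = np.array([0.3, 0.3, 0.3, 0.0, 0.0, 0.0])   # contextual cue biasing chain A

    for _ in range(5):
        u = np.maximum(u, W @ u + cue)   # activity propagates to coupled populations

    threshold = 0.8
    print([lab for lab, a in zip(labels, u) if a > threshold])
    # only the populations of chain A exceed the threshold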
The simulations of Erlhagen et al. (2007) show that neural representations implementing mechanisms like motor simulation and cue integration may emerge as the result of real-time interactions of local neural populations. More importantly, they speculate that learning by imitation takes place in two steps: first, the links between chain elements are established, allowing a fluent execution of particular action sequences; second, since similar action sequences may lead to different outcomes, the focus shifts towards the links between the goal and the contextual cues. These implications will be crucial in the cooperative robotics domain.