Saturday, January 17, 2009

Joint Action and Artificial Intelligence

In this article, we examine how research on joint action (Sebanz, Knoblich, Bekkering...) can be intertwined with ideas about distributed coordination in uncertain multiagent systems (Maheswaran, Szekely, Rogers, Sánchez...). "Joint action can be regarded as any form of social interaction whereby two or more individuals coordinate their actions in space and time to bring about a change in the environment" (Sebanz, Bekkering, & Knoblich, 2006, p. 70). According to the authors, successful joint action depends on the abilities to share representations, to predict actions, and to integrate the predicted effects of one's own and others' actions. One mechanism for sharing representations of objects and events is to direct one's attention to where an interaction partner is attending. A more direct mechanism, however, is provided by action observation: studies of mirror neurons show that during the observation of an action, a corresponding representation in the observer's action system is activated. On the other hand, an efficient means of predicting others' actions that does not rely on action observation is knowing what another person's task is. A series of recent studies has shown that individuals form shared representations of tasks quasi-automatically, even when it would be more effective to ignore one another. However, how individuals adjust their actions to those of another person in time and space cannot be explained merely by the assumption that representations are shared; action coordination is achieved by integrating the "what" and "when" of others' actions into one's own action planning. This affects the perception of object affordances and permits joint anticipatory action control.

Very recently, Maheswaran, Rogers, and Sanchez (2007) introduced the idea of distributed coordination in multiagent systems. I believe that the research by Sebanz and others on humans can be applied successfully to the area of distributed Artificial Intelligence. Centralized systems can generate fully coordinated policies, but they place a very high computational load on a single agent. Given a team of agents, every agent in the team has a set of activities that it can perform, and each activity has probabilistic outcomes. Only the agent itself has current knowledge of its own policy at all times. The team reward is a function of the qualities of all activities, and the agents' objective is to maximize this reward at some terminal time. One way this function can be composed is as a tree in which the activities are leaf nodes. Each non-leaf node is associated with an "ancestral" operator that takes the qualities of its children as input, and the output of the root node is the team reward (a minimal sketch of this tree-structured reward appears at the end of this post). An agent's subjective view of the reward function can also be defined: the authors consider the case where each agent sees all ancestral nodes of the activities it owns, plus any nodes and links that connect to its activities and their ancestral nodes via directional operators.

The authors thus introduce distributed coordination between artificial agents, that is, something akin to the joint action that Sebanz and colleagues have studied in humans. In the near future, a fertile cross-fertilization between these two lines of research seems likely.
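To make the tree-structured team reward concrete, here is a minimal sketch in Python. All names here (Activity, Node, sample_quality) and the choice of sum as the root operator are illustrative assumptions of mine, not the actual formalism of Maheswaran, Rogers, and Sanchez (2007).

```python
import random

class Activity:
    """Leaf node: an activity owned by one agent, with probabilistic outcomes."""
    def __init__(self, owner, outcomes):
        self.owner = owner
        self.outcomes = outcomes  # list of (probability, quality) pairs

    def sample_quality(self):
        # Sample one outcome quality according to its probability.
        r, acc = random.random(), 0.0
        for p, q in self.outcomes:
            acc += p
            if r <= acc:
                return q
        return self.outcomes[-1][1]  # guard against rounding error

class Node:
    """Non-leaf node: an 'ancestral' operator over the qualities of its children."""
    def __init__(self, operator, children):
        self.operator = operator
        self.children = children

    def evaluate(self):
        # Collect child qualities (recursing through sub-nodes), then combine.
        values = [c.evaluate() if isinstance(c, Node) else c.sample_quality()
                  for c in self.children]
        return self.operator(values)

# Two agents, each owning one activity with probabilistic quality.
a1 = Activity("agent_1", [(0.7, 10.0), (0.3, 2.0)])
a2 = Activity("agent_2", [(0.5, 8.0), (0.5, 0.0)])

# The output of the root node is the team reward; sum is just one
# possible ancestral operator (min, max, or others fit the same scheme).
root = Node(sum, [a1, a2])
print("sampled team reward:", root.evaluate())
```

Running the script samples one outcome per activity and propagates the qualities up through the operators, mirroring how the team reward is assembled from the leaf activities.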
