
Friday, March 5, 2010

Goran Trajkovski on imitation in modeling agent societies


In this brief review we analyse some ideas about the simulation of imitation by means of artificial agents. Dr. Trajkovski is Director of the Cognitive Agency and Robotics Lab at Towson University (USA). He specializes in Cognitive Engineering and has published books such as "An imitation-based approach to modeling homogenous agents and societies" and, more recently, "Handbook of Research on Agent-Based Societies: Social and Cultural Interactions" and "Handbook of Research on Computational Arts and Creative Informatics". Trajkovski introduces agents capable of performing four elementary actions (forward, backward, left, and right) and of noticing 10 different percepts. Each agent is equipped with one food sensor and has one hunger drive. When the hunger drive is activated for the first time, the agent performs a random walk during which expectancies are stored in the associative memory. When food is sensed, the expectancy's emotional context is set to a positive value. Every subsequent time the hunger drive is activated, the agent uses the context values of the expectancies to direct its actions, choosing the action that leads to the expectancy with the maximum context value.
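The action-selection rule just described can be made concrete in a short sketch. The following Python fragment is our own illustrative reconstruction, not Trajkovski's code; the class and variable names are assumptions.

import random

ACTIONS = ["forward", "backward", "left", "right"]

class ExpectancyAgent:
    def __init__(self):
        # associative memory: (percept, action) -> emotional context value
        self.memory = {}

    def act(self, percept):
        # context values of the expectancies reachable from this percept
        context = {a: self.memory.get((percept, a), 0.0) for a in ACTIONS}
        if all(v == 0.0 for v in context.values()):
            # first activation of the hunger drive: random walk
            return random.choice(ACTIONS)
        # otherwise choose the action leading to the maximum context value
        return max(context, key=context.get)

    def learn(self, percept, action, food_sensed):
        if food_sensed:
            # set the expectancy's emotional context to a positive value
            self.memory[(percept, action)] = 1.0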
Agents inhabit a two-dimensional world surrounded by walls and populated with obstacles. Sensing another agent takes the agent into its imitating mode. The agents begin to inhabit the environment at different times and are born in different places within it, so the portions of the environment they explore may be very different or perceptually similar. In environments inhabited by heterogeneous agents, the fundamental problem is that of interagent communication. According to Trajkovski, an example of interagent communication is the phenomenon of multilingual agents that can serve as translators. The author proposes an enactivist (Varela, Thompson and Rosch, 1991) representation model, based on treating agents as dynamical systems. During its interaction with the environment, the agent generates its inputs and maps the continuous domain of the inputs onto the discrete domain of the percepts. The sequence of these percepts reflects the structure of the environment. The basic idea is to divide the set of possible states into a finite number of pieces and keep track of which piece the state of the system lies in at every iteration. Each piece is associated with a symbol, and in this way the evolution of the system is described by an infinite sequence of symbols. The agent thus generates symbols, or percepts.
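This symbolic-dynamics idea is easy to illustrate. The sketch below is a minimal assumption of ours rather than the author's model: it partitions a continuous state in [0, 1) into 10 pieces (matching the 10 percepts above) and records the symbol sequence produced by iterating a stand-in dynamical system.

def percept(x, n_pieces=10):
    # map a continuous state x in [0, 1) to one of n_pieces symbols
    i = int(x * n_pieces)
    return "p" + str(min(max(i, 0), n_pieces - 1))

def symbol_sequence(x0, steps, f):
    # keep track of which piece the state lies in at every iteration of f
    x, seq = x0, []
    for _ in range(steps):
        seq.append(percept(x))
        x = f(x)
    return seq

# logistic map as an assumed stand-in for the agent-environment dynamics
print(symbol_sequence(0.3, 8, lambda x: 3.7 * x * (1 - x)))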
Through this research in multi-agent systems, Trajkovski shows that imitation is far from a trivial phenomenon and that humans are wired for imitation. He makes a solid and fertile attempt to establish a mechanism of interagent communication in multi-agent environments using classical algebraic theories and fuzzy algebraic structures. His contributions are highly relevant to social cognition, combining studies of animal intelligence (Thorndike) with studies of imitation in humans.

Friday, August 22, 2008

Implementation Intentions and Artificial Intelligence



In this article I will briefly explain the main conclusions presented by the author of this blog in Berlin ("International Congress of Psychology", 2008). My contribution, in collaboration with Professor Dr. Javier González Marqués (Chair of the Department of Basic Psychology at the Complutense University of Madrid), was entitled "Implementation Intentions and Artificial Agents" and establishes an interesting connection between human social cognition employing a particular type of intention and its simulation and performance by intelligent artificial agents.

An intention is a type of mental state that regulates the transformation of motivational processes into volitional processes. Peter Gollwitzer distinguishes between goal intentions and implementation intentions. Goal intentions act at the strategic level, whereas implementation intentions operate at the level of planning. Goal intentions can be formulated by means of the expression "I intend to achieve X!", where X specifies a desired final state. Implementation intentions, however, are enunciated as "I intend to do X when situation Y is encountered". This means that in an implementation intention, an anticipated situation or situational cue is linked to a certain goal-directed behavior.
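The distinction can be captured in a few lines of code. The following Python sketch is merely illustrative (the types and names are our assumptions, not part of the congress paper): a goal intention names only the desired end state, while an implementation intention binds a situational cue Y to a behavior X in if-then form.

from dataclasses import dataclass
from typing import Callable

@dataclass
class GoalIntention:
    goal: str  # "I intend to achieve X!"

@dataclass
class ImplementationIntention:
    cue: Callable[[dict], bool]  # anticipated situation Y
    behavior: str                # goal-directed behavior X

    def triggered(self, situation: dict) -> bool:
        return self.cue(situation)

reach_r = GoalIntention(goal="reach reward R")
exploit_l = ImplementationIntention(
    cue=lambda s: s.get("cue_L_visible", False),
    behavior="move towards the situational cue L",
)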

We have built a computer simulation that allows us to compare the behavior of two artificial agents. Both simulate the fulfillment of implementation intentions, but agent A0's behavior is weighted more towards the goal intention of obtaining the reward R, whereas agent A1 reflects more planning-oriented behavior, that is, behavior more oriented towards the avoidance of obstacles and the exploitation of situational cues.
The hypothesis to be tested is that, with only a slight difference in the programming of the two agents, agent A1 will not only yield a superior overall performance but will also reach goal R before A0 on a greater number of occasions. This is clearly in consonance with the results of Gollwitzer and collaborators on the superiority, in humans, of planning actions by means of implementation intentions over the mere attempt to execute a goal intention.

Gollwitzer and Sheeran (2004) carried out a meta-analytic study of the effects of forming implementation intentions on agents' goal achievement. We set out to transfer the fundamental parameters obtained with humans to agent A1 and to compare the results with agent A0, which is more oriented towards executing the goal intention of reaching R. According to the authors (2004, p. 26), the overall impact of implementation intentions on goal achievement is d = 0.65, based on k = 94 tests involving 8,461 participants. A strong effect (op. cit., p. 29) was obtained for implementation intentions when goal achievement was blocked by adverse contextual influences (d = 0.93), and the accessibility of the situational cues yielded d = 0.95. Accordingly, A1 was assigned a goal-achievement rate of 65 percent. Allowing for a difference of up to 30 points in the achievement of R, and granting A0 a 16-point advantage in reaching R directly, A0 was assigned an achievement rate of 81 percent. The accessibility of the situational cues L is very high in agent A1 (95 percent); since A1 can gain up to 30 points more than A0 by taking advantage of the situations, A0 was assigned an accessibility of 76 percent. The degree of avoidance of obstacles S by A1 is likewise very high (93 percent); A0 was assigned a value 19 points lower, that is, 74 percent. Landing on any of the squares S, however, counts the same for both agents, so the corresponding penalty affects them equally.
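For reference, the parameter transfer just described can be collected in a small structure; the dictionary layout below is our own assumption, while the percentages are those given above.

# agent parameters (percentages) transferred from the meta-analysis
PARAMS = {
    "A1": {"achieve_R": 65, "access_L": 95, "avoid_S": 93},
    "A0": {"achieve_R": 81, "access_L": 76, "avoid_S": 74},
}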

We report the results after 5,000 trials, with an average of about 48 movements per trial. We have considered the total number of plays, the points (average), the total resumptions (average), the total victories (the number of times the agent reaches R in first place), the number of situational cues L, the number of obstacles S, and the number of movements carried out. The system of assigned points was:

A0: start: +50; L0-L5: +20; S0-S5: -5; R: +150; D0 (dissuasive agent intercepting the agents A0 and A1): -150; penalty per movement: -1.

A1: start: +50; L0-L5: +25; S0-S5: -5; R: +120; D0 (dissuasive agent intercepting the agents A0 and A1): -150; penalty per movement: -1.
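The scoring rules can be expressed directly in code. The sketch below applies them to a single trial; only the point values and the per-movement penalty come from the text, while the event model (how often an agent lands on L or S, and whether it reaches R) is a deliberately crude assumption of ours.

import random

SCORES = {
    "A0": {"start": 50, "L": 20, "S": -5, "R": 150, "D0": -150},
    "A1": {"start": 50, "L": 25, "S": -5, "R": 120, "D0": -150},
}

def run_trial(agent, moves=48):
    table = SCORES[agent]
    score = table["start"]
    for _ in range(moves):
        score -= 1  # penalty per movement
        event = random.choice(["L", "S", None, None, None])  # assumed odds
        if event is not None:
            score += table[event]
    return score + table["R"]  # assume the agent finally reaches R

print({agent: run_trial(agent) for agent in SCORES})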


The diversity of tasks that the agents must execute on the board ends up interacting in a dynamic and significant way. This is seen most forcefully in what is perhaps the most decisive and surprising result of this simulation exercise: the more planning-oriented agent A1 achieves goal R a greater percentage of times than A0, even though A0 was programmed to perceive and access R with greater facility. We believe that our simulation has fulfilled its basic objective of supporting, in the area of Artificial Intelligence, the experimental conclusions with humans of Gollwitzer and other authors on the superiority of implementation intentions in goal achievement over an emphasis on the execution of goal intentions. Obviously, given its limited nature, this task has not covered all the possibilities. The issue of the initiation of the goal purpose has not been addressed, nor the possibility that the agents abandon the purpose of reaching R or seek alternative goals. Neither has the effect of successive frustrations on the learning of the task been explored. It would be interesting to introduce not only agents based on learning rules but also adaptive agents.