Showing posts with label Social Psychology.

Saturday, November 26, 2011

A computational simulation of laws of imitation in Social Psychology

(Spatial prisoner's dilemma for b = 1.6)

I am going to explain the design of a game based on the spatial prisoner's dilemma that introduces the three laws of imitation defined by the French sociologist Jean-Gabriel Tarde. It was presented by Carlos Pelta at the "2011 Meeting of the European Mathematical Psychology Group", held in Paris.

The first law, or law of close contact (LCC), describes how individuals in close, intimate contact with one another imitate each other's behavior. The second law, or imitation of superiors by inferiors (LSI), establishes that people follow the model of those with high status, in the hope that their imitative behavior will earn the rewards associated with belonging to a "superior" class. Tarde's third law is the law of insertion (LOI): new acts and behaviors are superimposed on old ones and subsequently either reinforce or discourage previous customs. The following imitation rules are introduced:

(1) Conf rule (Conformist rule), simulating the law of close contact (LCC): if your behavior differs from that of a neighboring agent, copy its behavior.
(2) Maxi rule (Maximization rule), simulating the law LSI: if a neighboring agent gets higher payoffs, copy its behavior.
(3) Fashion rule: copy the behavior with the highest frequency of appearance in your neighborhood (in case of equal frequencies, copy at random).
(4) Snob rule: copy the behavior with the lowest frequency of appearance in your neighborhood (in case of equal frequencies, copy at random).

Rules (3) and (4) simulate the law of insertion (LOI), alternating in every round of the game between copying according to the Fashion rule and copying according to the Snob rule. For these two rules, the agents remember the 3 previous rounds of the game.
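The four imitation rules above can be sketched as simple update functions. This is a minimal illustration, not the original implementation: behaviors are encoded as 'C'/'D' strings, and the exact tie-breaking and neighborhood handling of the presented game are assumptions.

```python
import random
from collections import Counter

def conf_rule(own, neighbor):
    """Conf rule (LCC): if the neighbor's behavior differs, copy it."""
    return neighbor if neighbor != own else own

def maxi_rule(own, own_payoff, neighbor, neighbor_payoff):
    """Maxi rule (LSI): copy the neighbor's behavior if it earns more."""
    return neighbor if neighbor_payoff > own_payoff else own

def fashion_rule(neighborhood):
    """Fashion rule (LOI): copy the most frequent behavior in the
    neighborhood; in case of a tie, copy at random."""
    counts = Counter(neighborhood)
    top = max(counts.values())
    return random.choice([b for b, c in counts.items() if c == top])

def snob_rule(neighborhood):
    """Snob rule (LOI): copy the least frequent behavior in the
    neighborhood; in case of a tie, copy at random."""
    counts = Counter(neighborhood)
    low = min(counts.values())
    return random.choice([b for b, c in counts.items() if c == low])
```

In the game, rules (3) and (4) would be applied on alternating rounds, with each agent feeding in the behaviors remembered from the previous rounds.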

Once all these rules are taken into account in a spatial prisoner's dilemma, combining all possible values of b between 1 and 1.9, initial distributions of cooperators between 0.1 and 0.9, memories M between 1 and 9 rounds for rules (3) and (4), and varying the number N of agents and the number of rounds of the game, we conclude that in our game Tarde's imitation rules yield a preferential attractor and a low proportion of cooperating individuals. Although we introduced two rules of stochastic nature, (3) and (4), their effect is nullified by the mimetic dynamics itself, which means that they are not even present in the attractor. Thus agents, attracted by the non-stochastic rules and by b values that increasingly encourage defection, are massively defined as defectors who find ways to keep their payoffs as high as possible. This circumstance supports Tarde's law LSI, because the imitation of the agents with higher payoffs (the defectors) is in the majority, even in the case with an initial rate of 0.9 cooperators receiving a payoff of 1 (defectors receive payoffs from 1.1 to 1.9). Our simulation also verifies the law LOI: combining rules (1) and (2), the most imitated behavior, or maximization behavior, is reinforced as the new behavior via rule (1), discouraging the cooperative behavior of the agents with lower payoffs.
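The underlying payoff structure can be sketched as follows. This assumes the Nowak-May convention for the spatial prisoner's dilemma (a cooperator earns 1 per cooperating neighbor, a defector earns b per cooperating neighbor, on a lattice with von Neumann neighborhoods and periodic boundaries); the post does not specify the exact payoff scheme or neighborhood used.

```python
import numpy as np

def spatial_pd_payoffs(grid, b=1.6):
    """Payoffs on a lattice of agents (1 = cooperator, 0 = defector).
    Each agent plays the prisoner's dilemma with its four von Neumann
    neighbors: C earns 1 per cooperating neighbor, D earns b per
    cooperating neighbor (Nowak-May convention, assumed here)."""
    coop = (grid == 1).astype(float)
    # number of cooperating neighbors, with periodic boundaries
    n_coop = (np.roll(coop, 1, 0) + np.roll(coop, -1, 0) +
              np.roll(coop, 1, 1) + np.roll(coop, -1, 1))
    return np.where(grid == 1, n_coop, b * n_coop)
```

A lone defector surrounded by cooperators then earns 4b, more than any cooperator, which is the pressure toward defection that grows as b rises from 1 to 1.9.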

Friday, December 4, 2009

Agent Societies and Artificial Intelligence



Very recent articles have addressed the problem of the interaction between populations of virtual robots. Vijayakumar and Davis ("Metacognition in a 'society of mind'") investigate the concept of mind as a control system using Marvin Minsky's "Society of Mind" idea. They develop metacognition in a model based on the differentiation between metacognitive strategies; metacontrol is one part of metacognition. The authors explore metacognition mechanisms for developing optimal agents in the fungus world testbed, an environment in which the behaviors of a robot can be monitored, measured and compared. A swarm intelligence emerges from how the group of agents work together to achieve a common goal. Vijayakumar and Davis implement an architecture with six layers: reflexive, reactive, deliberative, learning, metacontrol and metacognition. They design an experiment with different types of agents (random, reflexive, reactive...). All agents move in the environment, changing direction when they meet obstacles. To compare the agents, the following data were collected: life expectancy, fungus consumption, resource collection and metabolism. Total performance was defined as the combination of resource collected and life expectancy. Experiments were conducted for each type of agent. In Simulation 1, deliberative agents collected 50% of the resource and were left with 64% of their energy. In Simulation 2, the deliberative agents were compared to metacognitive agents, which collected 82% of the resource and were left with 71% of their energy. The results indicate that metacognitive agents outperform the other cognitive and deliberative agents. Thus metacognition is a very powerful tool for control and self-reflection, and intelligent, optimal agents can be viewed as the collective behavior of a "Society of Mind".
Indeed, developing a concept of metacognition in Artificial Intelligence seems necessary for building self-configurable computational models and true intelligence.