Utilitarian economics says each person seeks to maximize personal gains, either by acting completely selfishly or by appearing altruistic only to benefit personally from the group's spoils. Studies that track this sort of behaviour seem to be at odds: some find that humans indeed seek to maximize their profits, while others find that humans are altruistic, forgoing maximum profit for the benefit of the group as a whole. A new study by Oxford researchers suggests that we would all like to maximize our profits; some of us simply don't understand the rules of the game. In other words, these people act altruistically because they don't know how to be selfish, which in effect doesn't make them altruistic at all.
The researchers write that economics experiments to date divide humans into two paradigms: fair-minded cooperators who act for the good of the group, and selfish "free riders" who exploit the altruism of others. This thinking is modeled on the results of games in which participants who cooperate share the group's prize pool, but at a personal cost. More precisely, these studies typically identify two main types of player: roughly 50% are conditional cooperators, who approximately match the contributions of their groupmates, and about 25% are free riders, who sacrifice nothing. The remaining 25% either contribute more or less the same amount every time (the unconditional contributors; do they care?) or express behaviour too complex to be described broadly.
What we can gather from this is that people are generally pro-social. But there’s another explanation, the Oxford researchers reckon: people are confused and don’t know how to play ‘the game’. The game of life? Maybe.
The team organized a public-goods game in the same way as those previously used to test whether there are distinct social types. Individuals were grouped in fours, and each was given 20 monetary units (MUs) that they could either keep for themselves or partially or fully contribute to a group project. The sum of the contributions was multiplied by 1.6 and shared equally among the four members. Each 1 MU contributed therefore returned only 0.4 MU to the contributor, a net loss of 0.6 MU. Profit is thus maximized by contributing nothing at all: nil, zero MU.
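The payoff arithmetic above can be sketched in a few lines. This is an illustrative sketch, not the researchers' code; the function and variable names are my own, but the numbers (20 MU endowment, 1.6 multiplier, groups of four) come from the article.

```python
ENDOWMENT = 20      # MU each player starts with
MULTIPLIER = 1.6    # pooled contributions are multiplied by this
GROUP_SIZE = 4

def payoff(own_contribution, others_contributions):
    """Return one player's final earnings in MU."""
    pot = own_contribution + sum(others_contributions)
    share = pot * MULTIPLIER / GROUP_SIZE  # equal split of the multiplied pool
    return ENDOWMENT - own_contribution + share

# Contributing is individually costly: each MU put in returns
# only 1.6 / 4 = 0.4 MU to the contributor.
print(payoff(0, [0, 0, 0]))    # keep everything: 20.0 MU
print(payoff(20, [0, 0, 0]))   # lone full cooperator: 8.0 MU
```

Contributing nothing dominates regardless of what the rest of the group does, which is exactly why any contribution in the computer condition is so telling.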
Researchers explained the rules and possible outcomes of the game to the participants in person, on paper, and on a computer screen. Here's the catch, though: participants were told, explicitly and beforehand, that they would be playing with computers only. Each human was grouped with three computers programmed to play randomly, so no human would gain from their contributions. If they chose to cooperate, they would simply lose money; there was no logical reason to contribute anything. But the findings were striking.
“We found that when playing with computers, individuals can be divided into the same behavioral types that have previously been observed when playing with humans (Fig. 1). Specifically, we found that 21% (n = 15) are noncooperators (free riders) who contribute 0 MU, irrespective of the computer contribution, and 50% (n = 36) are conditional cooperators, who contribute more when the computer contributes more. These conditional cooperators are adjusting their behavior in response to the computer’s contribution, even though they have been told that their contributions will not benefit others and despite the fact that the income-maximizing strategy does not depend on how much the computer contributes. The remaining 29% (n = 21) of players exhibited some other pattern.”
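The typing described in the quote can be illustrated with a toy classifier. The threshold rules below are my own simplification for illustration, not the paper's statistical method: a player who never contributes is a free rider, one whose contributions rise with the computers' contributions is a conditional cooperator, and everyone else is "other".

```python
def classify(contributions, computer_contributions):
    """Label a player from their MU contributions, where contributions[i]
    was made when the computers contributed computer_contributions[i]."""
    if all(c == 0 for c in contributions):
        return "free rider"
    # Conditional cooperator: contributions never decrease, and strictly
    # increase overall, as the computers' contributions rise.
    pairs = sorted(zip(computer_contributions, contributions))
    nondecreasing = all(a[1] <= b[1] for a, b in zip(pairs, pairs[1:]))
    if nondecreasing and pairs[-1][1] > pairs[0][1]:
        return "conditional cooperator"
    return "other"

print(classify([0, 0, 0], [0, 10, 20]))   # free rider
print(classify([2, 8, 15], [0, 10, 20]))  # conditional cooperator
```

The point of the experiment is that matching the "partner's" contribution makes no economic sense here, since the partner is a random-playing machine.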
Next, the participants played the same game, this time with human players. The researchers sought to predict the outcomes of these games from the results of the previous experiment with computers. Before proceeding to each of the six series of games, every player had to click "yes" on a popup saying: “I understand I am now playing with real people.”
Switching to human opponents did not alter the initial results. The findings suggest that people don't know how to play the game, and that the public-goods studies suggesting most people are willing to cooperate do not, in fact, reflect underlying social preferences.
“We also found that how individuals conditioned their behavior on their beliefs about the behavior of their groupmates did not differ in response to whether they were playing with computers or humans,” the researchers conclude in the paper published in PNAS. “Overall, our results show that individuals behave in the same way, irrespective of whether they are playing computers or humans, even when controlling for beliefs. Therefore, the previously observed differences in human behavior do not need to be explained by variation in the extent to which individuals care about fairness or the welfare of others.”