Volunteering as Red Queen Mechanism for Cooperation in Public Goods Games


Science  10 May 2002:
Vol. 296, Issue 5570, pp. 1129-1132
DOI: 10.1126/science.1070582


The evolution of cooperation among nonrelated individuals is one of the fundamental problems in biology and social sciences. Reciprocal altruism fails to provide a solution if interactions are not repeated often enough or groups are too large. Punishment and reward can be very effective but require that defectors can be traced and identified. Here we present a simple but effective mechanism operating under full anonymity. Optional participation can foil exploiters and overcome the social dilemma. In voluntary public goods interactions, cooperators and defectors will coexist. We show that this result holds under very diverse assumptions on population structure and adaptation mechanisms, leading usually not to an equilibrium but to an unending cycle of adjustments (a Red Queen type of evolution). Thus, voluntary participation offers an escape hatch out of some social traps. Cooperation can subsist in sizable groups even if interactions are not repeated, defectors remain anonymous, players have no memory, and assortment is purely random.

Public goods are defining elements of all societies. Collective efforts to shelter, protect, and nourish the group have formed the backbone of human evolution from prehistoric time to global civilization. They confront individuals with the temptation to defect, i.e., to take advantage of the public good without contributing to it. This is known as Tragedy of the Commons, Free Rider Problem, Social Dilemma, or Multiperson Prisoner's Dilemma—the diversity of the names underlines the ubiquity of the issue (1–7).

Theoreticians and experimental economists investigate this issue by public goods games (8–11), which are characterized by groups of cooperators doing better than groups of defectors, but defectors always outperforming the cooperators in their group. In typical examples, the individual contributions are multiplied by a factor r and then divided equally among all players (12). With r smaller than the group size, this is an example of a social dilemma (13, 14): Every individual player is better off defecting than cooperating, no matter what the other players do. Groups would therefore consist of defectors only and forego the public good. For two-player groups, this is the prisoner's dilemma game. In this case, cooperation based on direct or indirect reciprocation can get established, provided the probability of another round is sufficiently high (15,16). But retaliation does not work if many players are engaged in the game (17), because players intending to punish a defector can do so only by refraining from cooperation in subsequent rounds, thereby also punishing the cooperators in the group.
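The incentive structure of the compulsory game can be made concrete in a few lines of code (a minimal sketch; the function name and the parameter values are ours, not from the paper):

```python
def compulsory_payoffs(n_c, N, r):
    """Standard public goods game: n_c of N players contribute 1 unit each;
    the pot is multiplied by r and split equally among all N players."""
    share = r * n_c / N
    return share - 1, share   # (cooperator payoff, defector payoff)

# With r = 3 and N = 5: switching from defection to cooperation always
# costs 1 - r/N = 0.4, yet full cooperation (payoff 2) beats full
# defection (payoff 0) -- the social dilemma in a nutshell.
```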

If players are offered, after each round, the option of fining specific coplayers, cooperation gets firmly established. This happens even if punishment is costly to the punisher (18,19) and if players believe that they will never meet again (20). But such fining, or alternatively rewarding (21), requires that players can discriminate individual defectors. Although reward and punishment must be major factors in human cooperation, we draw attention to a simpler mechanism. It consists in allowing the players not to participate, and to fall back on a safe “side income” that does not depend on others. Such risk-averse optional participation can foil exploiters and relax the social dilemma, even if players have no way of discriminating against defectors (22).

We consider three strategic types: cooperators and defectors, both willing to engage in the public goods game and speculate (though with different intentions) on the success of a joint enterprise; and “loners,” who rely on some autarkic way of life. Cooperators will not stably dominate the population in such a voluntary public goods game, but neither will exploiters. Their frequencies oscillate, because the public good becomes unattractive if free riders abound.

To model this scenario with evolutionary game theory, we assume a large population consisting of cooperators, defectors, and loners. From time to time, a random sample of N individuals is offered the option to engage in a public goods game. The loners will refuse; they each get a payoff P_l = σ. The remaining group of S players of the sample consists of n_c cooperators and S − n_c defectors. If S = 1, we assume that this single player has to act like a loner. We normalize the individual investment to 1. The defectors' payoff is then P_d = r n_c/S, and the cooperators' payoff is P_c = P_d − 1 (owing to the cost of cooperation). Hence, in every group, defectors do better than cooperators. We assume r > 1 (if all cooperate, they are better off than if all defect) and 0 < σ < r − 1 (it is better to be a loner than in a group of defectors, but better still to be in a group of cooperators). We stress that players' strategies are decided before the samples are selected, and do not depend on the composition of the group. No anticipation, preferential assortment, or conditional response is involved. Cooperation persists in this minimalistic scenario under a wide variety of assumptions concerning population structure or adaptation mechanisms. The results are extremely robust and do not depend on any particular brand of evolutionary game theory.
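The payoff scheme above can be sketched directly (a minimal illustration; the encoding of strategies as 'C', 'D', 'L' and the function name are our own choices):

```python
def group_payoffs(strategies, r=3.0, sigma=1.0):
    """Payoffs for one sampled group in the optional public goods game.

    strategies: list of 'C' (cooperator), 'D' (defector), or 'L' (loner)
    for the N sampled players. Loners always receive sigma; a single
    remaining participant also has to act like a loner.
    """
    participants = [s for s in strategies if s != 'L']
    S = len(participants)
    if S <= 1:                       # S = 1: the lone player falls back on sigma
        return [sigma] * len(strategies)
    n_c = participants.count('C')
    p_d = r * n_c / S                # defector's share of the public good
    p_c = p_d - 1                    # cooperators additionally pay the unit investment
    payoff = {'C': p_c, 'D': p_d, 'L': sigma}
    return [payoff[s] for s in strategies]
```

Note that within any group p_d > p_c always holds, reproducing the local advantage of defection.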

In a well-mixed population, analytic expressions for the payoff values can be derived (23). The strategies display a rock-scissors-paper cycle. If most players cooperate, it pays to defect. If defectors are prevalent, it is better to stay out of the public goods game and resort to the loners' strategy. But if most players are loners, groups of small size S can form. For such groups, the public goods game is no longer a social dilemma: Although defectors always do better than cooperators, in any given group, the payoff for cooperators, when averaged over all groups, will be higher than that of defectors (and loners), and so cooperation will increase. This is an instance of the well-known Simpson's paradox (24). Thus, group size S divides the game into two parts. For small group size, cooperation is dominant, and for large size, defection; but the mere option to drop out of the game preserves the balance between the two options, in a very natural way.
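The Simpson's-paradox effect can be checked numerically by enumerating all possible co-player samples for a focal cooperator or defector (a hedged sketch; `focal_payoff`, the frequency vectors, and the parameter values are illustrative assumptions, not taken from the paper):

```python
from itertools import product

def focal_payoff(focal, x, N=5, r=3.0, sigma=1.0):
    """Expected payoff of a focal 'C' or 'D' player in a random sample of N,
    with the N-1 co-players drawn from frequencies x = (x_c, x_d, x_l)."""
    prob = dict(zip('CDL', x))
    total = 0.0
    for others in product('CDL', repeat=N - 1):
        w = 1.0
        for s in others:
            w *= prob[s]                       # probability of this sample
        S = 1 + sum(s != 'L' for s in others)  # participants incl. focal
        if S == 1:                             # focal participates alone
            total += w * sigma
            continue
        n_c = others.count('C') + (focal == 'C')
        total += w * (r * n_c / S - (focal == 'C'))
    return total

# Mostly loners: groups are small, so cooperators beat defectors on average,
# even though defectors win inside every single group (Simpson's paradox).
print(focal_payoff('C', (0.1, 0.1, 0.8)) > focal_payoff('D', (0.1, 0.1, 0.8)))  # True
```

With mostly cooperators instead, the comparison flips and defection pays, closing the rock-scissors-paper cycle.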

The game dynamics describing the frequencies of the strategies depends on how players imitate others and learn (Fig. 1) (25, 26). If, for instance, they occasionally update their strategy by picking another player at random, and adopting that model's strategy with a probability proportional to the payoff difference (provided it is positive), then this yields the usual replicator dynamics (27). It can be fully analyzed despite the highly nonlinear payoff terms (28). For r < 2, we observe brief recurrent bursts of cooperation interrupting long periods of prevalence of the loner's strategy. For r > 2, a mixed equilibrium appears, and all orbits are periodic. The time average of the ratio of cooperators to defectors corresponds to the equilibrium values, and the time average of the payoff is the same for all strategies, and hence equal to the loner's payoff σ. Other imitation mechanisms may lead to other oscillatory dynamics. In particular, if players always adopt the strategy of their randomly chosen "model" whenever that model has a higher payoff, then individual-based simulations display stable oscillations for the frequencies of the three strategies (29). This finding is very robust and little affected by additional effects like hyperbolic discounting, random changes of strategies, or occasional errors leading to the adoption of strategies with lower payoffs. The oscillations persist if σ, r, and N are random variables. Another updating mechanism is the best-reply dynamics, based on the assumption that from time to time, individuals switch to whatever is the best strategy, given the current composition of the population. Best-reply dynamics displays damped oscillations converging to a stable polymorphism.
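The imitate-the-better update rule can be sketched as an individual-based simulation (a minimal illustration; population size, round count, and all function names are our own choices, not the paper's implementation):

```python
import random

def payoffs(strategies, r, sigma):
    """One optional public goods interaction; loners and a lone participant
    fall back on the safe payoff sigma."""
    part = [s for s in strategies if s != 'L']
    S = len(part)
    if S <= 1:
        return [sigma] * len(strategies)
    pd = r * part.count('C') / S
    table = {'C': pd - 1, 'D': pd, 'L': sigma}
    return [table[s] for s in strategies]

def simulate(pop_size=600, rounds=20000, N=5, r=3.0, sigma=1.0, seed=1):
    """Imitate-the-better dynamics: after each group interaction, one sampled
    player copies a random co-player's strategy if that model did better."""
    rng = random.Random(seed)
    pop = ['C', 'D', 'L'] * (pop_size // 3)
    for _ in range(rounds):
        idx = rng.sample(range(len(pop)), N)      # random group of N players
        pay = payoffs([pop[i] for i in idx], r, sigma)
        learner, model = rng.sample(range(N), 2)  # two distinct group members
        if pay[model] > pay[learner]:
            pop[idx[learner]] = pop[idx[model]]
    return {s: pop.count(s) for s in 'CDL'}

counts = simulate()
```

Tracking the three counts over time in such runs is what produces the oscillating frequencies reported in the text.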

Figure 1

Optional public goods games in large, well-mixed populations. The three equilibria e_c, e_d, and e_l are saddle points, denoting homogeneous populations of cooperators, defectors, and loners. (A) and (B) describe the replicator dynamics ẋ_i = x_i(P_i − P̄), where P̄ is the average payoff in the population. For r ≤ 2 (A), the interior of the simplex S_3 consists of orbits issued from and returning to e_l. Only brief intermittent bursts of cooperation are observed. (B) For r > 2, an equilibrium point Q appears, surrounded by closed orbits. (C) With perfect information, i.e., best-reply dynamics, Q becomes an attractor. The dashed lines divide S_3 into three regions where cooperation, defection, and loners dominate. (D) Individual-based simulations confirm the stability of the cycles in finite populations, if the strategy of a randomly picked individual is imitated whenever it performs better. Parameters: N = 5; (A) r = 1.8, σ = 0.5; (B) to (D) r = 3, σ = 1; (D) population size, 5000; number of interactions, 10^6.

So far, we have considered well-mixed populations: Groups form randomly, and potential “role models” are chosen randomly. But the option to withdraw from the game boosts cooperation also for other population structures. For instance, we may assume that individuals are bound to a rigid spatial lattice and interact only with their nearest neighbors (Fig. 2) (30). As in the related prisoner's dilemma game (31), cooperators tend to fare better in the spatial than in the well-mixed case. In the optional public goods game, this is even more pronounced: Cooperators persist for all values of r > σ + 1, whereas in the compulsory game (i.e., without the loner's option), cooperation can persist only for considerably larger values of r (Fig. 3) (32). Thus, loners protect cooperation. The dynamics displays traveling waves driven by the rock-scissors-paper succession of cooperators, defectors, and loners (29, 33).
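A lattice version can be sketched as follows (a simplified illustration: here each site draws its payoff from a single game with its 3 by 3 neighborhood and then adopts the best strategy in that neighborhood, whereas in the full model sites participate in all overlapping games; lattice size, parameters, and names are our own assumptions):

```python
import random

SIZE = 50  # lattice side length, periodic boundaries

def play(grid, r=2.2, sigma=1.0):
    """Payoff of each site from one public goods game with its 3x3 neighborhood."""
    pay = [[0.0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            group = [grid[(i + di) % SIZE][(j + dj) % SIZE]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            part = [s for s in group if s != 'L']
            if grid[i][j] == 'L' or len(part) <= 1:
                pay[i][j] = sigma              # loner, or lone participant
            else:
                pd = r * part.count('C') / len(part)
                pay[i][j] = pd - (1 if grid[i][j] == 'C' else 0)
    return pay

def step(grid, pay):
    """Synchronous deterministic update: each site is taken over by the
    strategy of the highest-scoring site in its 3x3 neighborhood."""
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            best = max(((pay[(i + di) % SIZE][(j + dj) % SIZE],
                         grid[(i + di) % SIZE][(j + dj) % SIZE])
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)),
                       key=lambda t: t[0])
            new[i][j] = best[1]
    return new

rng = random.Random(0)
grid = [[rng.choice('CDL') for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(10):
    grid = step(grid, play(grid))
```

Visualizing the grid after each step exhibits the spatial domain patterns of the three strategies discussed in the text.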

Figure 2

Representative snapshots of the optional public goods games on a square lattice with synchronous updates. In (A) and (B), the deterministic rule applies where each site is taken over by the best strategy within its 3 by 3 neighborhood. In (C) and (D), the stochastic rule prescribes that 80% of all sites adopt more successful neighboring strategies, with a probability proportional to the payoff difference. Blue refers to cooperators, red to defectors, and yellow to loners. Intermediate colors indicate players that have just changed their strategy. For low multiplication rates [r = 2.2 in (A) and (C)], persistent traveling waves are observed regardless of the details of the update rules. In (B), for r = 3.8, cooperators thrive on their own and loners go extinct. But in (D), for the same high value of r, cooperators would go extinct in the absence of loners, owing to the randomness. In a typical configuration, clusters of cooperators are surrounded by defectors, and the latter again are surrounded by loners. Cooperators occasionally manage to break through the defectors' clutch and invade domains of loners. Parameters: 50 by 50 lattice, periodic boundaries, σ = 1.

Figure 3

Average frequencies and payoffs in the spatial public goods game for (A) compulsory and (B) voluntary participation with a loner's payoff of σ = 1. Individuals imitate more successful neighboring strategies with a probability proportional to the payoff difference. In (A), cooperators (blue line) persist for sufficiently high interest rates r ≳ 3.90 through cluster formation, i.e., by minimizing interactions with defectors (red line). Interestingly, they always achieve substantially higher payoffs than defectors. In (B), the additional protection against exploitation provided by loners (green line) enables cooperators to persist for all r > σ + 1. For r ≳ 4.17, the loner strategy no longer represents a valuable alternative and goes extinct; cooperators thrive on their own. As in (A), the payoff of cooperators is substantially higher than for defectors but, somewhat surprisingly, for low r, the average population payoff (blue line) drops even below σ, and hence the population would be better off without the opportunity to participate in a public goods game.

In the public goods game, the drop-out option allows groups to form on a voluntary basis and thus to relaunch cooperation again and again. But each additional player brings a diminishing return and an increased threat of exploitation. As in the land of the Red Queen, “it takes all the running you can do, to keep in the same place.” Individuals keep adjusting their strategies but in the long run do no better than if the public goods option had never existed. On the other hand, voluntary participation avoids the deadlock of mutual defection that threatens any public enterprise in larger groups.

* To whom correspondence should be addressed. E-mail: karl.sigmund{at}

