game theory

game theory
a mathematical theory that deals with strategies for maximizing gains and minimizing losses within prescribed constraints, as the rules of a card game: widely applied in the solution of various decision-making problems, as those of military strategy and business policy. Also called theory of games.
[1950-55]

* * *

Branch of applied mathematics devised to analyze certain situations in which there is an interplay between parties that may have similar, opposed, or mixed interests.

Game theory was originally developed by John von Neumann and Oskar Morgenstern in their book The Theory of Games and Economic Behavior (1944). In a typical game, or competition with fixed rules, "players" try to outsmart one another by anticipating the others' decisions, or moves. A solution to a game prescribes the optimal strategy or strategies for each player and predicts the average, or expected, outcome. Until a highly contrived counterexample was devised in 1967, it was thought that every contest had at least one solution. See also decision theory; prisoner's dilemma.

* * *

Introduction
 branch of applied mathematics that provides tools for analyzing situations in which parties, called players, make decisions that are interdependent. This interdependence causes each player to consider the other player's possible decisions, or strategies, in formulating his own strategy. A solution to a game describes the optimal decisions of the players, who may have similar, opposed, or mixed interests, and the outcomes that may result from these decisions.

      Although game theory can be and has been used to analyze parlour games, its applications are much broader. In fact, game theory was originally developed by the Hungarian-born American mathematician John von Neumann and his Princeton University colleague Oskar Morgenstern, a German-born American economist, to solve problems in economics. In their book The Theory of Games and Economic Behavior (1944), von Neumann and Morgenstern asserted that the mathematics developed for the physical sciences, which describes the workings of a disinterested nature, was a poor model for economics. They observed that economics is much like a game, wherein players anticipate each other's moves, and therefore requires a new kind of mathematics, which they called game theory. (The name may be somewhat of a misnomer—game theory generally does not share the fun or frivolity associated with games.)

      Game theory has been applied to a wide variety of situations in which the choices of players interact to affect the outcome. In stressing the strategic aspects of decision making, or aspects controlled by the players rather than by pure chance, the theory both supplements and goes beyond the classical theory of probability. It has been used, for example, to determine what political coalitions or business conglomerates are likely to form, the optimal price at which to sell products or services in the face of competition, the power of a voter or a bloc of voters, whom to select for a jury, the best site for a manufacturing plant, and the behaviour of certain animals and plants in their struggle for survival. It has even been used to challenge the legality of certain voting systems.

      It would be surprising if any one theory could address such an enormous range of “games,” and in fact there is no single game theory. A number of theories have been proposed, each applicable to different situations and each with its own concepts of what constitutes a solution. This article describes some simple games, discusses different theories, and outlines principles underlying game theory. Additional concepts and methods that can be used to analyze and solve decision problems are treated in the article optimization.

Classification of games
      Games can be classified according to certain significant features, the most obvious of which is the number of players. Thus, a game can be designated as being a one-person, two-person, or n-person (with n greater than two) game, with games in each category having their own distinctive features. In addition, a player need not be an individual; it may be a nation, a corporation, or a team comprising many people with shared interests.

      In games of perfect information, such as chess, each player knows everything about the game at all times. Poker, on the other hand, is an example of a game of imperfect information because players do not know all of their opponents' cards.

      The extent to which the goals of the players coincide or conflict is another basis for classifying games. Constant-sum games are games of total conflict, which are also called games of pure competition. Poker, for example, is a constant-sum game because the combined wealth of the players remains constant, though its distribution shifts in the course of play.

      Players in constant-sum games have completely opposed interests, whereas in variable-sum games they may all be winners or losers. In a labour-management dispute, for example, the two parties certainly have some conflicting interests, but both will benefit if a strike is averted.

      Variable-sum games can be further distinguished as being either cooperative or noncooperative. In cooperative games players can communicate and, most important, make binding agreements; in noncooperative games players may communicate, but they cannot make binding agreements, such as an enforceable contract. An automobile salesperson and a potential customer will be engaged in a cooperative game if they agree on a price and sign a contract. However, the dickering that they do to reach this point will be noncooperative. Similarly, when people bid independently at an auction they are playing a noncooperative game, even though the high bidder agrees to complete the purchase.

      Finally, a game is said to be finite when each player has a finite number of options, the number of players is finite, and the game cannot go on indefinitely. Chess, checkers, poker, and most parlour games are finite. Infinite games are more subtle and will only be touched upon in this article.

      A game can be described in one of three ways: in extensive, normal, or characteristic-function form. (Sometimes these forms are combined, as described in the section Theory of moves.) Most parlour games, which progress step by step, one move at a time, can be modeled as games in extensive form. Extensive-form games can be described by a “game tree,” in which each turn is a vertex of the tree, with each branch indicating the players' successive choices.

      The normal (strategic) form is primarily used to describe two-person games. In this form a game is represented by a payoff matrix, wherein each row describes the strategy of one player and each column describes the strategy of the other player. The matrix entry at the intersection of each row and column gives the outcome of each player choosing the corresponding strategy. The payoffs to each player associated with this outcome are the basis for determining whether the strategies are “in equilibrium,” or stable.

      The characteristic-function form is generally used to analyze games with more than two players. It indicates the minimum value that each coalition of players—including single-player coalitions—can guarantee for itself when playing against a coalition made up of all the other players.

One-person games
      One-person games are also known as games against nature. With no opponents, the player only needs to list available options and then choose the optimal outcome. When chance is involved the game might seem to be more complicated, but in principle the decision is still relatively simple. For example, a person deciding whether to carry an umbrella weighs the costs and benefits of carrying or not carrying it. While this person may make the wrong decision, he does not face a conscious opponent. That is, nature is presumed to be completely indifferent to the player's decision, and the person can base his decision on simple probabilities. One-person games hold little interest for game theorists.

Two-person constant-sum games

Games of perfect information
      The simplest game of any real theoretical interest is a two-person constant-sum game of perfect information. Examples of such games include chess, checkers, and the Japanese game of go. In 1912 the German mathematician Ernst Zermelo proved that such games are strictly determined: by making use of all available information, the players can deduce strategies that are optimal, which makes the outcome preordained. In chess, for example, exactly one of three outcomes must occur if the players make optimal choices: (1) White wins (has a strategy that wins against any strategy of Black); (2) Black wins; or (3) White and Black draw. In principle, a sufficiently powerful supercomputer could determine which of the three outcomes will occur. However, considering that there are some 10^43 distinct 40-move games of chess possible, there seems no possibility that such a computer will be developed now or in the foreseeable future. Therefore, while chess is of only minor interest in game theory, it is likely to remain a game of enduring intellectual interest.

Games of imperfect information
      A “saddlepoint” in a two-person constant-sum game is the outcome that rational players would choose. (Its name derives from its being the minimum of a row that is also the maximum of a column in a payoff matrix—to be illustrated shortly—which corresponds to the shape of a saddle.) A saddlepoint always exists in games of perfect information but may or may not exist in games of imperfect information. By choosing a strategy associated with this outcome, each player obtains an amount at least equal to his payoff at that outcome, no matter what the other player does. This payoff is called the value of the game; as in perfect-information games, it is preordained by the players' choices of strategies associated with the saddlepoint, making such games strictly determined.

 The normal-form game in Table 1 is used to illustrate the calculation of a saddlepoint. Two political parties, A and B, must each decide how to handle a controversial issue in a certain election. Each party can either support the issue, oppose it, or evade it by being ambiguous. The decisions by A and B on this issue determine the percentage of the vote that each party receives. The entries in the payoff matrix represent party A's percentage of the vote (the remaining percentage goes to B). When, for example, A supports the issue and B evades it, A gets 80 percent and B 20 percent of the vote.

      Assume that each party wants to maximize its vote. A's decision seems difficult at first because it depends on B's choice of strategy. A does best to support if B evades, oppose if B supports, and evade if B opposes. A must therefore consider B's decision before making its own. Note that no matter what A does, B obtains the largest percentage of the vote (smallest percentage for A) by opposing the issue rather than supporting it or evading it. Once A recognizes this, its strategy obviously should be to evade, settling for 30 percent of the vote. Thus, a 30 to 70 percent division of the vote, to A and B respectively, is the game's saddlepoint.

      A more systematic way of finding a saddlepoint is to determine the so-called maximin and minimax values. A first determines the minimum percentage of votes it can obtain for each of its strategies; it then finds the maximum of these three minimum values, giving the maximin. The minimum percentages A will get if it supports, opposes, or evades are, respectively, 20, 25, and 30. The largest of these, 30, is the maximin value. Similarly, for each strategy B chooses, it determines the maximum percentage of votes A will win (and thus the minimum that it can win). In this case, if B supports, opposes, or evades, the maximum A will get is 80, 30, and 80, respectively. B will obtain its largest percentage by minimizing A's maximum percentage of the vote, giving the minimax. The smallest of A's maximum values is 30, so 30 is B's minimax value. Because both the minimax and the maximin values coincide, 30 is a saddlepoint. The two parties might as well announce their strategies in advance, because the other party cannot gain from this knowledge.
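
      The maximin/minimax calculation just described is easy to mechanize, as the following sketch (in Python) shows. Only the stated values (the 80 percent entry, the row minima of 20, 25, and 30, and the column maxima of 80, 30, and 80) are taken from the discussion above; since Table 1 itself is not reproduced in this text, the remaining matrix entries are hypothetical values chosen to be consistent with them.

# A sketch of the maximin/minimax calculation for Table 1. Only the values
# quoted in the text are authoritative; the other entries are hypothetical,
# chosen to be consistent with them.
# Rows: A supports, opposes, evades. Columns: B supports, opposes, evades.
# Entries: A's percentage of the vote.
payoff = [
    [45, 20, 80],   # A supports (the 80 when B evades is from the text)
    [80, 25, 60],   # A opposes
    [55, 30, 50],   # A evades
]

row_minima = [min(row) for row in payoff]         # A's worst case per strategy
col_maxima = [max(col) for col in zip(*payoff)]   # A's best case per B strategy

maximin = max(row_minima)   # the floor A can guarantee: 30
minimax = min(col_maxima)   # the ceiling B can enforce on A: 30

print("row minima:", row_minima)      # [20, 25, 30]
print("column maxima:", col_maxima)   # [80, 30, 80]
if maximin == minimax:
    print("saddlepoint value:", maximin)   # 30: A evades, B opposes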

Mixed strategies and the minimax theorem
 When saddlepoints exist, the optimal strategies and outcomes can be easily determined, as was just illustrated. However, when there is no saddlepoint the calculation is more elaborate, as illustrated in Table 2.

      A guard is hired to protect two safes in separate locations: S1 contains $10,000 and S2 contains $100,000. The guard can protect only one safe at a time from a safecracker. The safecracker and the guard must decide in advance, without knowing what the other party will do, which safe to try to rob and which safe to protect. When they go to the same safe, the safecracker gets nothing; when they go to different safes, the safecracker gets the contents of the unprotected safe.

      In such a game, game theory does not indicate that any one particular strategy is best. Instead, it prescribes that a strategy be chosen in accordance with a probability distribution, which in this simple example is quite easy to calculate. In larger and more complex games, finding this strategy involves solving a problem in linear programming, which can be considerably more difficult.

      To calculate the appropriate probability distribution in this example, each player adopts a strategy that makes him indifferent to what his opponent does. Assume that the guard protects S1 with probability p and S2 with probability 1 − p. Thus, if the safecracker tries S1, he will be successful whenever the guard protects S2. In other words, he will get $10,000 with probability 1 − p and $0 with probability p for an average gain of $10,000(1 − p). Similarly, if the safecracker tries S2, he will get $100,000 with probability p and $0 with probability 1 − p for an average gain of $100,000p.

      The guard will be indifferent to which safe the safecracker chooses if the average amount stolen is the same in both cases—that is, if $10,000(1 − p) = $100,000p. Solving for p gives p = 1/11. If the guard protects S1 with probability 1/11 and S2 with probability 10/11, he will lose, on average, no more than about $9,091 whatever the safecracker does.

      Using the same kind of argument, it can be shown that the safecracker will get an average of at least $9,091 if he tries to steal from S1 with probability 10/11 and from S2 with probability 1/11. This solution in terms of mixed strategies, which are assumed to be chosen at random with the indicated probabilities, is analogous to the solution of the game with a saddlepoint (in which a pure, or single best, strategy exists for each player).
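
      The indifference calculation above amounts to solving one linear equation. A few lines of Python confirm the probabilities and the value of the game; the figures are exactly those given in the text.

from fractions import Fraction

S1, S2 = 10_000, 100_000   # contents of the two safes

# The guard protects S1 with probability p. Indifference requires that the
# safecracker's expected haul be the same whichever safe he tries:
#   S1 * (1 - p) = S2 * p
p = Fraction(S1, S1 + S2)       # 1/11
value = S2 * p                  # also equals S1 * (1 - p)

print(p, 1 - p)                 # 1/11 10/11
print(float(value))             # 9090.90..., about $9,091

# By the symmetric argument, the safecracker tries S1 with probability
# 10/11 and S2 with probability 1/11, guaranteeing himself the same
# expected amount on average.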

      The safecracker and the guard give away nothing if they announce the probabilities with which they will randomly choose their respective strategies. On the other hand, if they make themselves predictable by exhibiting any kind of pattern in their choices, this information can be exploited by the other player.

      The minimax theorem, which von Neumann proved in 1928, states that every finite, two-person constant-sum game has a solution in pure or mixed strategies. Specifically, it says that for every such game between players A and B, there is a value v and strategies for A and B such that, if A adopts its optimal (maximin) strategy, the outcome will be at least as favourable to A as v; if B adopts its optimal (minimax) strategy, the outcome will be no more favourable to A than v. Thus, A and B have both the incentive and the ability to enforce an outcome that gives an (expected) payoff of v.
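
      As noted earlier, in larger games the optimal mixed strategies are found by linear programming. The sketch below casts the guard-safecracker game in that form and recovers the same value; the use of the scipy library here is a convenience assumed for illustration, not anything prescribed by the theory.

import numpy as np
from scipy.optimize import linprog

# Rows: the safecracker tries S1 or S2. Columns: the guard protects S1 or S2.
# Entries: the amount stolen (the payoff to the safecracker).
M = np.array([[0, 10_000],
              [100_000, 0]])

# Variables x = (q1, q2, v): (q1, q2) is the safecracker's mixed strategy
# and v his guaranteed expected haul. Maximize v subject to
# q . M[:, j] >= v for each guard strategy j, with q1 + q2 = 1.
c = [0, 0, -1]                               # linprog minimizes, so minimize -v
A_ub = np.hstack([-M.T, np.ones((2, 1))])    # v - q . M[:, j] <= 0
b_ub = [0, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=[[1, 1, 0]], b_eq=[1],
              bounds=[(0, 1), (0, 1), (None, None)])

q1, q2, v = res.x
print(round(q1, 3), round(q2, 3))   # 0.909 and 0.091, i.e., 10/11 and 1/11
print(round(v, 2))                  # 9090.91, the value of the game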

Utility theory
      In the previous example it was tacitly assumed that the players were maximizing their average profits, but in practice players may consider other factors. For example, few people would risk a sure gain of $1,000,000 for an even chance of winning either $3,000,000 or $0, even though the expected (average) gain from this bet is $1,500,000. In fact, many decisions that people make, such as buying insurance policies, playing lotteries, and gambling at a casino, indicate that they are not maximizing their average profits. Game theory does not attempt to state what a player's goal should be; instead, it shows how a player can best achieve his goal, whatever that goal is.

      Von Neumann and Morgenstern understood this distinction; to accommodate all players, whatever their goals, they constructed a theory of utility. They began by listing certain axioms that they thought all rational decision makers would follow (for example, if a person likes tea better than coffee, and coffee better than milk, then that person should like tea better than milk). They then proved that it was possible to define a utility function for such decision makers that would reflect their preferences. In essence, a utility function assigns a number to each player's alternatives to convey their relative attractiveness. Maximizing a player's expected utility then automatically identifies his most preferred option. In recent years, however, some doubt has been raised about whether people actually behave in accordance with these axioms, and alternative axioms have been proposed.

Two-person variable-sum games
      Much of the early work in game theory was on two-person constant-sum games because they are the easiest to treat mathematically. The players in such games have diametrically opposed interests, and there is a consensus about what constitutes a solution (as given by the minimax theorem). Most games that arise in practice, however, are variable-sum games; the players have both common and opposed interests. For example, a buyer and a seller are engaged in a variable-sum game (the buyer wants a low price and the seller a high one, but both want to make a deal), as are two hostile nations (they may disagree about numerous issues, but both gain if they avoid going to war).

      Some “obvious” properties of two-person constant-sum games are not valid in variable-sum games. In constant-sum games, for example, both players cannot gain (they may or may not lose, but they cannot both gain) if they are deprived of some of their strategies. In variable-sum games, however, players may gain if some of their strategies are no longer available. This might not seem possible at first. One would think that if a player benefited from not using certain strategies, the player would simply avoid those strategies and choose more advantageous ones, but this is not always the case. For example, in a region with high unemployment a worker may be willing to accept a lower salary to obtain or keep a job, but if a minimum wage law makes that option illegal, the worker may be “forced” to accept a higher salary.

      The effect of communication is particularly revealing of the difference between constant-sum and variable-sum games. In constant-sum games it never helps a player to give an adversary information, and it never hurts a player to learn an opponent's optimal strategy (pure or mixed) in advance. However, these properties do not necessarily hold in variable-sum games. Indeed, a player may want an opponent to be well-informed. In a labour-management dispute, for example, if the labour union is prepared to strike, it behooves the union to inform management and thereby possibly achieve its goal without a strike. In this example, management is not harmed by the advance information (it, too, benefits by avoiding a costly strike). In other variable-sum games, knowing an opponent's strategy can sometimes be disadvantageous. For example, a blackmailer can only benefit if he first informs his victim that he will harm him—generally by disclosing some sensitive and secret details of the victim's life—if his terms are not met. For such a threat to be credible, the victim must fear the disclosure and believe that the blackmailer is capable of executing the threat. (The credibility of threats is a question that game theory studies.) Although a blackmailer may be able to harm a victim without any communication taking place, a blackmailer cannot extort a victim unless he first adequately informs the victim of his intent and its consequences. Thus, the victim's knowledge of the blackmailer's strategy, including his ability and will to carry out the threat, works to the blackmailer's advantage.

Cooperative versus noncooperative games
      Communication is pointless in constant-sum games because there is no possibility of mutual gain from cooperating. In variable-sum games, on the other hand, the ability to communicate, the degree of communication, and even the order in which players communicate can have a profound influence on the outcome.

 In the variable-sum game shown in Table 3, each matrix entry consists of two numbers. (Because the combined wealth of the players is not constant, it is impossible to deduce one player's payoff from the payoff of the other; consequently, both players' payoffs must be given.) The first number in each entry is the payoff to the row player (player A), and the second number is the payoff to the column player (player B).

      In this example it will be to player A's advantage if the game is cooperative and to player B's advantage if the game is noncooperative. Without communication, assume each player applies the “sure-thing” principle: it maximizes its minimum payoff by determining the minimum it will receive whatever its opponent does. Thereby, A determines that it will do best to choose strategy I no matter what B does: if B chooses i, A will get 3 regardless of what A does; if B chooses ii, A will get 4 rather than 3. B similarly determines that it will do best to choose i no matter what A does. Selecting these two strategies, A will get 3 and B will get 4 at (3, 4).

      In a cooperative game, however, A can threaten to play II unless B agrees to play ii. If B agrees, its payoff will be reduced to 3 while A's payoff will rise to 4 at (4, 3); if B does not agree and A carries out its threat, A will neither gain nor lose at (3, 2) compared to (3, 4), but B will get a payoff of only 2. Clearly, A will be unaffected if B does not agree and thus has a credible threat; B will be affected and obviously will do better at (4, 3) than at (3, 2) and should comply with the threat.
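
      The threat analysis can be checked mechanically. In the sketch below, every payoff except the one for the strategy pair (II, ii) can be read off the discussion above; Table 3 is not reproduced in this text, so that single cell is a hypothetical value (any payoff to B of at most 2 there is consistent with B's preference for strategy i).

# A check of the noncooperative outcome and of A's threat. All entries but
# (II, ii) are taken from the text; that one cell is assumed.
payoffs = {
    ("I", "i"): (3, 4),
    ("I", "ii"): (4, 3),
    ("II", "i"): (3, 2),
    ("II", "ii"): (3, 1),   # hypothetical; only B's payoff <= 2 matters
}

# Without communication both sides use their "sure-thing" strategies,
# which leads to (I, i).
print("noncooperative outcome:", payoffs[("I", "i")])   # (3, 4)

# With communication, A threatens to play II unless B plays ii.
comply, punish = payoffs[("I", "ii")], payoffs[("II", "i")]
print("if B complies:", comply, "if B refuses:", punish)
print("threat costs A nothing:", punish[0] >= payoffs[("I", "i")][0])  # True
print("B does better complying:", comply[1] > punish[1])               # True: 3 > 2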

      Sometimes both players can gain from the ability to communicate. Two pilots trying to avoid a midair collision clearly will benefit if they can communicate, and the degree of communication allowed between them may even determine whether or not they will crash. Generally, the more two players' interests coincide, the more important and advantageous communication becomes.

      The solution to a cooperative game in which players have a common goal involves coordinating the players' decisions effectively. This is relatively straightforward, as is finding the solution to constant-sum games with a saddlepoint. For games in which the players have both common and conflicting interests—in other words, in most variable-sum games, whether cooperative or noncooperative—what constitutes a solution is much harder to define and make persuasive.

The Nash solution
      Although solutions to variable-sum games have been defined in a number of different ways, they sometimes seem inequitable or are not enforceable. One well-known cooperative solution to two-person variable-sum games was proposed by the American mathematician John F. Nash, who received the Nobel Prize for Economics in 1994 for this and related work he did in game theory.

      Given a game with a set of possible outcomes and associated utilities for each player, Nash showed that there is a unique outcome that satisfies four conditions: (1) The outcome is independent of the choice of a utility function (that is, if a player prefers x to y, the solution will not change if one function assigns x a utility of 10 and y a utility of 1 or a second function assigns the values of 20 and 2). (2) Both players cannot do better simultaneously (a condition known as Pareto-optimality). (3) The outcome is independent of irrelevant alternatives (in other words, if unattractive options are added to or dropped from the list of alternatives, the solution will not change). (4) The outcome is symmetrical (that is, if the players reverse their roles, the solution will remain the same, except that the payoffs will be reversed).

      In some cases the Nash solution seems inequitable because it is based on a balance of threats—the possibility that no agreement will be reached, so that both players will suffer losses—rather than a “fair” outcome. When, for example, a rich person and a poor person are to receive $10,000 provided they can agree on how to divide the money (if they fail to agree, they receive nothing), most people assume that the fair solution would be for each person to get half, or even that the poor person should get more than half. According to the Nash solution, however, there is a utility for each player associated with all possible outcomes. Moreover, the specific choice of utility functions should not affect the solution (condition 1) as long as they reflect each person's preferences. In this example, assume that the rich person's utility is equal to one-half the money received and that the poor person's utility is equal to the money received. These different functions reflect the fact that additional income is more precious to the poor person. Under the Nash solution, the threat of reaching no agreement induces the poor person to accept one-third of the $10,000, giving the rich person two-thirds. In general, the Nash solution finds an outcome such that each player gains the same amount of utility.
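
      The division described above is easy to verify. On the article's reading, the solution gives both players the same gain in utility; with the stated utility functions, a line of algebra (checked below) yields the two-thirds/one-third split.

from fractions import Fraction

total = 10_000
# Utilities as assumed in the text: the rich person's utility is half the
# money received; the poor person's utility is the money received.
# Equal utility gains require rich_share / 2 == poor_share,
# with rich_share + poor_share == total.
rich_share = Fraction(2, 3) * total
poor_share = total - rich_share

assert rich_share / 2 == poor_share    # both gain 10000/3 in utility
print(rich_share, poor_share)          # 20000/3 and 10000/3: two-thirds and one-third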

The Prisoners' Dilemma
 To illustrate the kinds of difficulties that arise in two-person noncooperative variable-sum games, consider the celebrated Prisoners' Dilemma (PD), originally formulated by the American mathematician Albert W. Tucker. Two prisoners, A and B, suspected of committing a robbery together, are isolated and urged to confess. Each is concerned only with getting the shortest possible prison sentence for himself; each must decide whether to confess without knowing his partner's decision. Both prisoners, however, know the consequences of their decisions: (1) if both confess, both go to jail for five years; (2) if neither confesses, both go to jail for one year (for carrying concealed weapons); and (3) if one confesses while the other does not, the confessor goes free (for turning state's evidence) and the silent one goes to jail for 20 years. The normal form of this game is shown in Table 4.

      Superficially, the analysis of PD is very simple. Although A cannot be sure what B will do, he knows that he does best to confess when B confesses (he gets five years rather than 20) and also when B remains silent (he serves no time rather than a year); analogously, B will reach the same conclusion. So the solution would seem to be that each prisoner does best to confess and go to jail for five years. Paradoxically, however, the two robbers would do better if they both adopted the apparently irrational strategy of remaining silent; each would then serve only one year in jail. The irony of PD is that when each of two (or more) parties acts selfishly and does not cooperate with the other (that is, when he confesses), they do worse than when they act unselfishly and cooperate together (that is, when they remain silent).
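
      The dominance argument can be stated in a few lines of code, using only the sentences given above (measured in years in jail, so smaller numbers are better).

# The Prisoners' Dilemma of Table 4: entries are (A's years, B's years).
years = {
    ("confess", "confess"): (5, 5),
    ("confess", "silent"): (0, 20),
    ("silent", "confess"): (20, 0),
    ("silent", "silent"): (1, 1),
}

# Whatever B does, A serves less time by confessing (and symmetrically for B)...
for b in ("confess", "silent"):
    assert years[("confess", b)][0] < years[("silent", b)][0]

# ...yet mutual silence is better for both than mutual confession.
assert years[("silent", "silent")] < years[("confess", "confess")]
print("dominant strategies give", years[("confess", "confess")],
      "although", years[("silent", "silent")], "is better for both")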

      PD is not just an intriguing hypothetical problem; real-life situations with similar characteristics have often been observed. For example, two shopkeepers engaged in a price war may well be caught up in a PD. Each shopkeeper knows that if he has lower prices than his rival, he will attract his rival's customers and thereby increase his own profits. Each therefore decides to lower his prices, with the result that neither gains any customers and both earn smaller profits. Similarly, nations competing in an arms race and farmers increasing crop production can also be seen as manifestations of PD. When two nations keep buying more weapons in an attempt to achieve military superiority, neither gains an advantage and both are poorer than when they started. A single farmer can increase his profits by increasing production, but when all farmers increase their output a market glut ensues, with lower profits for all.

      It might seem that the paradox inherent in PD could be resolved if the game were played repeatedly. Players would learn that they do best when both act unselfishly and cooperate. Indeed, if one player failed to cooperate in one game, the other player could retaliate by not cooperating in the next game, and both would lose until they began to “see the light” and cooperated again. When the game is repeated a fixed number of times, however, this argument fails. To see this, suppose two shopkeepers set up their booths at a 10-day county fair. Furthermore, suppose that each maintains full prices, knowing that if he does not, his competitor will retaliate the next day. On the last day, however, each shopkeeper realizes that his competitor can no longer retaliate and so has little reason not to lower his prices. But if each shopkeeper knows that his rival will lower his prices on the last day, he has no incentive to maintain full prices on the ninth day. Continuing this reasoning, one concludes that rational shopkeepers will have a price war every day. It is only when the game is played repeatedly, and neither player knows when the sequence will end, that the cooperative strategy can succeed.

      In 1980 the American political scientist Robert Axelrod engaged a number of game theorists in a round-robin tournament. In each match the strategies of two theorists, incorporated in computer programs, competed against one another in a sequence of PDs with no definite end. A “nice” strategy was defined as one in which a player always cooperates with a cooperative opponent. Also, if a player's opponent did not cooperate during one turn, most strategies prescribed noncooperation on the next turn, but a player with a “forgiving” strategy reverted rapidly to cooperation once its opponent started cooperating again. In this experiment it turned out that every nice strategy outperformed every strategy that was not nice. Furthermore, of the nice strategies, the forgiving ones performed best.
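
      A simulation in the spirit of Axelrod's tournament is sketched below. The payoff numbers (3 for mutual cooperation, 1 for mutual defection, 5 and 0 when only one side defects) are the conventional PD values rather than figures from the tournament itself, and the strategies shown are simplified stand-ins: tit-for-tat is both nice and forgiving in the senses defined above.

# An iterated Prisoners' Dilemma with conventional (assumed) payoffs.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own, other):
    """Nice and forgiving: cooperate first, then mirror the opponent's last move."""
    return other[-1] if other else "C"

def always_defect(own, other):
    return "D"

def always_cooperate(own, other):
    return "C"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, always_cooperate))   # (600, 600): steady cooperation
print(play(tit_for_tat, always_defect))      # (199, 204): exploited once, then even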

Theory of moves
      Another approach to inducing cooperation in PD and other variable-sum games is the theory of moves (TOM). Proposed by the American political scientist Steven J. Brams, TOM allows players, starting at any outcome in a payoff matrix, to move and countermove within the matrix, thereby capturing the changing strategic nature of games as they evolve over time. In particular, TOM assumes that players think ahead about the consequences of all of the participants' moves and countermoves when formulating plans. Thereby, TOM embeds extensive-form calculations within the normal form, deriving advantages of both forms: the nonmyopic thinking of the extensive form disciplined by the economy of the normal form.

      To illustrate the nonmyopic perspective of TOM, consider what happens in PD as a function of where play starts:
● When play starts noncooperatively, players are stuck, no matter how far ahead they look, because as soon as one player departs from it, the other player, who then enjoys his best outcome, has no incentive to move in turn. Outcome: The players stay at the noncooperative outcome.
● When play starts cooperatively, neither player will defect, because if he does, the other player will also defect, and they both will end up worse off. Thinking ahead, therefore, neither player will defect. Outcome: The players stay at the cooperative outcome.
● When play starts at one of the win-lose outcomes (best for one player, worst for the other), the player doing best will know that if he is not magnanimous, and consequently does not move to the cooperative outcome, his opponent will move to the noncooperative outcome, inflicting on the best-off player his next-worst outcome. Therefore, it is in the best-off player's interest, as well as his opponent's, that he act magnanimously, anticipating that if he does not, the noncooperative outcome (next-worst for both), rather than the cooperative outcome (next-best for both), will be chosen. Outcome: The best-off player will move to the cooperative outcome, where play will remain.

      Such rational moves are not beyond the pale of most players. Indeed, they are frequently made by those who look beyond the immediate consequences of their own choices. Such far-sighted players can escape the dilemma in PD—as well as poor outcomes in other variable-sum games—provided play does not begin noncooperatively. Hence, TOM does not predict unconditional cooperation in PD but, instead, makes it a function of the starting point of play.

Biological applications
      One fascinating and unexpected application of game theory in general, and PD in particular, occurs in biology. When two males confront each other, whether competing for a mate or for some disputed territory, they can behave either like “hawks”—fighting until one is maimed, killed, or flees—or like “doves”—posturing a bit but leaving before any serious harm is done. (In effect, the doves cooperate while the hawks do not.) Neither type of behaviour, it turns out, is ideal for survival: a species containing only hawks would have a high casualty rate; a species containing only doves would be vulnerable to an invasion by hawks or a mutation that produces hawks, because the population growth rate of the competitive hawks would be much higher initially than that of the doves.

      Thus, a species with males consisting exclusively of either hawks or doves is vulnerable. The English biologist John Maynard Smith showed that a third type of male behaviour, which he called “bourgeois,” would be more stable than that of either pure hawks or pure doves. A bourgeois may act like either a hawk or a dove, depending on some external cues; for example, it may fight tenaciously when it meets a rival in its own territory but yield when it meets the same rival elsewhere. In effect, bourgeois animals submit their conflict to external arbitration to avoid a prolonged and mutually destructive struggle.

 As shown in Table 5, Maynard Smith constructed a payoff matrix in which various possible outcomes (e.g., death, maiming, successful mating), and the costs and benefits associated with them (e.g., cost of lost time), were weighted in terms of the expected number of genes propagated. Maynard Smith showed that a bourgeois invasion would be successful against a completely hawk population by observing that when a hawk confronts a hawk it loses 5, whereas a bourgeois loses only 2.5. (Because the population is assumed to be predominantly hawk, the success of the invasion can be predicted by comparing the average number of offspring a hawk will produce when it confronts another hawk with the average number of offspring a bourgeois will produce when confronting a hawk.) Patently, a bourgeois invasion against a completely dove population would be successful as well, gaining the bourgeois 6 offspring. On the other hand, a completely bourgeois population cannot be invaded by either hawks or doves, because the bourgeois gets 5 against bourgeois, which is more than either hawks or doves get when confronting bourgeois. Note in this application that the question is not what strategy a rational player will choose—animals are not assumed to make conscious choices, though their types may change through mutation—but what combinations of types are stable and hence likely to evolve.
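
      These invasion arguments are verified below. Table 5 is not reproduced in this text: the entries -5, -2.5, 6, and 5 are quoted above, and the remaining cells are the standard hawk-dove values consistent with them (prize 10, display cost 3, injury cost 20), which should be treated as an assumption rather than a quotation.

# E[a][b] is the expected payoff to an animal of type a meeting type b.
# The -5, -2.5, 6, and 5 entries are from the text; the rest are assumed.
E = {
    "hawk":      {"hawk": -5.0, "dove": 10.0, "bourgeois": 2.5},
    "dove":      {"hawk": 0.0,  "dove": 2.0,  "bourgeois": 1.0},
    "bourgeois": {"hawk": -2.5, "dove": 6.0,  "bourgeois": 5.0},
}

# In a population of almost all hawks, residents and invaders alike meet
# hawks, so compare payoffs against hawks; similarly for an all-dove population.
print(E["bourgeois"]["hawk"] > E["hawk"]["hawk"])   # True: -2.5 beats -5
print(E["bourgeois"]["dove"] > E["dove"]["dove"])   # True: 6 beats 2

# An all-bourgeois population resists invasion by either type.
print(all(E["bourgeois"]["bourgeois"] > E[t]["bourgeois"]
          for t in ("hawk", "dove")))               # True: 5 beats 2.5 and 1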

      Maynard Smith gave several examples that showed how the bourgeois strategy is used in practice. For example, male speckled wood butterflies seek sunlit spots on the forest floor where females are often found. There is a shortage of such spots, however, and in a confrontation between a stranger and an inhabitant, the stranger yields after a brief duel in which the combatants circle one another. The dueling skills of the adversaries have little effect on the outcome. When one butterfly is forcibly placed on another's territory so that each considers the other the aggressor, the two butterflies duel with righteous indignation for a much longer time.

N-person games
      Theoretically, n-person games in which the players are not allowed to communicate and make binding agreements are not fundamentally different from two-person noncooperative games. In the two examples that follow, each involving three players, one looks for Nash equilibria—that is, stable outcomes from which no player would normally depart because to do so would be disadvantageous.

Sequential and simultaneous truels
      As an example of an n-person noncooperative game, imagine three players, A, B, and C, situated at the corners of an equilateral triangle. They engage in a truel, or three-person duel, in which each player has a gun with one bullet. Assume that each player is a perfect shot and can kill one other player at any time. There is no fixed order of play, but any shooting that occurs is sequential: no player fires at the same time as any other. Consequently, if a bullet is fired, the results are known to all players before another bullet is fired.

      Suppose that the players order their goals as follows: (1) survive alone, (2) survive with one opponent, (3) survive with both opponents, (4) not survive, with no opponents alive, (5) not survive, with one opponent alive, and (6) not survive, with both opponents alive. Thus, surviving alone is best, dying alone is worst.

      If a player can either fire or not fire at another player, who, if anybody, will shoot whom? It is not difficult to see that outcome (3), in which nobody shoots, is the unique Nash equilibrium—any player that departs from not shooting does worse. Suppose, on the contrary, that A shoots B, hoping for A's outcome (2), whereby he and C survive. Now, however, C can shoot a disarmed A, thereby leaving himself as the sole survivor, or outcome (1). As this is A's penultimate outcome (5), in which A and one opponent (B) are killed while the other opponent (C) lives, A should not fire the first shot; the same reasoning applies to the other two players. Consequently, nobody will shoot, resulting in outcome (3), in which all three players survive.

      Now consider whether any of the players can do better through collusion. Specifically, assume that A and B agree not to shoot each other; if either shoots another player, they agree it would be C. Nevertheless, if A shoots C (for instance), B could now repudiate the agreement with impunity and shoot A, thereby becoming the sole survivor.

      Thus, thinking ahead about the unpleasant consequences of shooting first, or of colluding with another player to do so, nobody will shoot or collude. All players will therefore survive if they must act in sequence, giving outcome (3). Because no player can do better by shooting, or by threatening to shoot another, these strategies constitute a Nash equilibrium.

      Next, suppose that the players act simultaneously; hence, they must decide in ignorance of each others' intended actions. This situation is common in life: people often must act before they find out what others are doing. In a simultaneous truel there are three possibilities, depending on the number of rounds and whether or not this number is known:
● One round. Now everybody will find it rational to shoot an opponent at the start of play. This is because no player can affect his own fate, but each does at least as well, and sometimes better, by shooting another player—whether the shooter lives or dies—because the number of surviving opponents is reduced. Hence, the Nash equilibrium is that everybody will shoot. When each player chooses his target at random, it is easy to see that each has a 25 percent chance of surviving. Consider player A; he will die if B, C, or both shoot him (three cases), compared with his surviving if B and C shoot each other (one case). Altogether, one of A, B, or C will survive with probability 75 percent, and nobody will survive with probability 25 percent (when each player shoots a different opponent). These probabilities are verified in the enumeration sketch that follows this list. Outcome: There will always be shooting, leaving one or no survivors.
● n rounds (n ≥ 2 and known). Assume that nobody has shot an opponent up to the penultimate, or (n − 1)st, round. Then, on the penultimate round, either at least two players will rationally shoot or none will. First, consider the situation in which an opponent shoots A. Clearly, A can never do better than shoot, because A is going to be killed anyway. Moreover, A does better to shoot at whichever opponent (there must be at least one) is not a target of B or C. On the other hand, suppose that nobody shoots A. If B and C shoot each other, then A has no reason to shoot (although A cannot be harmed by doing so). However, if one opponent, say B, holds his fire, and C shoots B, A again cannot do better than hold his fire also, because he can eliminate C on the next round. (Note that C, having already fired his only bullet, no longer threatens A.) Finally, suppose that both B and C hold their fire. If A shoots an opponent, say B, then his other opponent, C, will eliminate A on the last, or nth, round. But if A holds his fire, the game passes on to the nth round and, as discussed in (1) above, A has a 25 percent chance of surviving, assuming random choices. Thus, if nobody else shoots on the (n − 1)st round, A again cannot do better than hold his fire during this round. Whether the players refrain from shooting on the (n − 1)st round or not—each strategy may be a best response to what the other players do—shooting will be rational on the nth round if there is more than one survivor and at least one player has a bullet remaining. Moreover, the anticipation of shooting on the (n − 1)st or nth round may cause players to fire earlier, perhaps even back to the first and second rounds. Outcome: There will always be shooting, leaving one or no survivors.
● n rounds (n unlimited). The new wrinkle here is that it may be rational for no player to shoot on any round, leading to the survival of all three players. How can this happen? The argument in (1) above that “if you are shot at, you might as well shoot somebody” still applies. Moreover, even if you are not shot at, shooting can pay: if you are, say, A, and B shoots C, you cannot do better than shoot B, making yourself the sole survivor—outcome (1). As before, you do best—whether you are shot at or not—if you shoot somebody who is not the target of anybody else, beginning on the first round. Suppose, however, that B and C refrain from shooting in the first round, and consider A's situation. Shooting an opponent is not rational for A on the first round because the surviving opponent will then shoot A on the next round (there will always be a next round if n is unlimited). On the other hand, if all the players hold their fire, and continue to do so in subsequent rounds, then all three players will remain alive. While there is no “best” strategy in all situations, the possibilities of survival will increase if n is unlimited. Outcome: There may be zero, one (any of A, B, or C), or three survivors, but never two.

      To summarize, shooting is never rational in a sequential truel, whereas it is always rational in a simultaneous truel that goes only one round. Thus, “nobody shoots” and “everybody shoots” are the Nash equilibria in these two kinds of truels. In simultaneous truels that go more than one round, by comparison, there are multiple Nash equilibria. If the number of rounds is known, then there is one Nash equilibrium in which a player shoots, and one in which he does not, at the start, but in the end there will be only one or no survivors. When the number of rounds is unlimited, however, a new Nash equilibrium is possible in which nobody shoots on any round. Thus, like PD with an uncertain number of rounds, an unlimited number of rounds in a truel can lead to greater cooperation.
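
      The following enumeration (a minimal sketch) checks the survival figures quoted for the one-round simultaneous truel: each player independently shoots one of the other two, and all eight target profiles are equally likely.

from itertools import product

players = ("A", "B", "C")
# Each player shoots one of the other two: 2 * 2 * 2 = 8 equally likely profiles.
profiles = list(product(*[[q for q in players if q != p] for p in players]))

survives = {p: 0 for p in players}
no_survivors = 0
for targets in profiles:            # targets[i] is whom players[i] shoots
    shot = set(targets)
    alive = [p for p in players if p not in shot]
    for p in alive:
        survives[p] += 1
    if not alive:
        no_survivors += 1

print(survives)                     # {'A': 2, 'B': 2, 'C': 2}: 2/8 = 25% each
print(no_survivors, "of", len(profiles), "profiles leave no survivor")   # 2 of 8 = 25%
# In the remaining 6 of 8 profiles (75 percent) exactly one player survives.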

Power in voting: the paradox of the chair's position
      Many applications of n-person game theory are concerned with voting, in which strategic calculations are often rampant. Surprisingly, these calculations can result in the ostensibly most powerful player in a voting body being hurt. For example, assume the chair of a voting body, while not having more votes than other members, can break ties. This would seem to make the chair more powerful, but it turns out that the possession of a tie-breaking vote may backfire, putting the chair at a disadvantage relative to the other members. In this manner the greater resources that a player has may not always translate into greater power, which here will mean the ability of a player to obtain a preferred outcome.

      In the three-person noncooperative voting game to be analyzed, players are assumed to rank the possible outcomes that can occur. The problem in finding a solution is not a lack of Nash equilibria, but too many. So the question becomes, Which, if any, are likely to be selected by the players? Specifically, is one more appealing than the others? The answer is “yes,” but it requires extending the idea of a sure-thing strategy to its successive application in different stages of play.

      To illustrate the chair's problem, suppose there are three voters (X, Y, and Z) and three voting alternatives (x, y, and z). Assume that voter X prefers x to y and y to z, indicated by xyz; voter Y's preference is yzx, and voter Z's is zxy. These preferences give rise to what is known as a Condorcet voting paradox because the social ordering, according to majority rule, is intransitive: although a majority of voters (X and Z) prefers x to y, and a majority (X and Y) prefers y to z, a majority (Y and Z) also prefers z to x. (The French Enlightenment philosopher Marie-Jean-Antoine-Nicolas Condorcet first examined such voting paradoxes following the French Revolution.) So there is no Condorcet winner—that is, an alternative that would beat every other choice in separate pairwise contests.

      Assume that a simple plurality determines the winning alternative. Furthermore, in the event of a three-way tie (there can never be a two-way tie if there are three votes), assume that the chair, X, can break the tie, giving the chair what would appear to be an edge over the other two voters, Y and Z, who have the same one vote but no tie-breaker.

      Under sincere voting, everyone votes for his first choice, without taking into account what the other voters might do. In this case, voter X will get his first choice (x) by being able to break a three-way tie in favour of x. However, X's apparent advantage will disappear if voting is “sophisticated.”

      To see why, first note that X has a sure-thing, or dominant, strategy of “vote for x”; it is never worse and sometimes better than any other strategy, whatever the other two voters do. If the other two voters agree on an alternative, that alternative wins regardless of what X does, so X loses nothing by voting sincerely for x. If the other two voters disagree, X's vote for x, backed by his tie-breaking vote when all three alternatives are tied, is decisive in x's selection, which is X's best outcome.

 Given the dominant-strategy choice of x on the part of X, then Y and Z face reduced strategy choices, as shown in Table 6 for the first reduction. (It is a reduction because X's strategy of voting for x is taken as a given.) In this reduction, Y has one, and Z has two, dominated strategies (indicated by D), which are never better and sometimes worse than some other strategy, whatever the other two voters do. For example, observe that “vote for x” by Y always leads to his worst outcome, x. This leaves Y with two undominated strategies, “vote for y” and “vote for z,” which are neither dominant nor dominated strategies: “vote for y” is better than “vote for z” if Z chooses y (leading to y rather than x), whereas the reverse is the case if Z chooses z (leading to z rather than x). By contrast, Z has a dominant strategy of “vote for z,” which leads to outcomes at least as good as and sometimes better than his other two strategies.

 When voters have complete information about each other's preferences, they will eliminate the dominated strategies in the first reduction. The elimination of these strategies gives the second reduction matrix, as shown in Table 7. Then Y, choosing between “vote for y” and “vote for z” in this matrix, would eliminate the now dominated “vote for y” because that choice would result in x's winning due to the chair's tie-breaking vote. Instead, Y would choose “vote for z,” ensuring z's election, which is the next-best outcome for Y. In this manner z, which is not the first choice of a majority and could in fact be beaten by y in a pairwise contest, becomes the sophisticated outcome, which is the outcome produced by the successive elimination of dominated strategies by the voters (beginning with X's sincere choice of x).
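
      The successive elimination just described can be automated. The sketch below encodes the three preference orders and, following the discussion above, breaks a three-way tie in the chair's favour; it then repeatedly deletes weakly dominated strategies and ends with X voting for x and both Y and Z voting for z, the sophisticated outcome.

from itertools import product

voters = ("X", "Y", "Z")
alternatives = ("x", "y", "z")
prefs = {"X": "xyz", "Y": "yzx", "Z": "zxy"}            # best to worst
rank = {v: {a: prefs[v].index(a) for a in alternatives} for v in voters}

def outcome(profile):
    """Plurality winner; the chair X breaks a three-way tie."""
    counts = {a: profile.count(a) for a in alternatives}
    top = max(counts.values())
    leaders = [a for a in alternatives if counts[a] == top]
    return min(leaders, key=lambda a: rank["X"][a])

def weakly_dominated(voter, s, strategies):
    """Is s never better and sometimes worse than some other strategy t?"""
    i = voters.index(voter)
    others = [strategies[v] for v in voters if v != voter]
    for t in strategies[voter]:
        if t == s:
            continue
        pairs = []
        for rest in product(*others):
            prof_s = list(rest); prof_s.insert(i, s)
            prof_t = list(rest); prof_t.insert(i, t)
            pairs.append((rank[voter][outcome(prof_t)],
                          rank[voter][outcome(prof_s)]))
        if all(rt <= rs for rt, rs in pairs) and any(rt < rs for rt, rs in pairs):
            return True
    return False

strategies = {v: list(alternatives) for v in voters}
changed = True
while changed:
    changed = False
    for v in voters:
        for s in list(strategies[v]):
            if len(strategies[v]) > 1 and weakly_dominated(v, s, strategies):
                strategies[v].remove(s)
                changed = True

print(strategies)       # {'X': ['x'], 'Y': ['z'], 'Z': ['z']}
print(outcome(tuple(strategies[v][0] for v in voters)))   # z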

      Sophisticated voting results in a Nash equilibrium because none of the players can do better by departing from their sophisticated strategy. This is clearly true for X, because x is his dominant strategy; given X's choice of x, z is dominant for Z; and given these choices by X and Z, z is dominant for Y. These “contingent” dominance relations, in general, make sophisticated strategies a Nash equilibrium.

      Observe, however, that there are four other Nash equilibria in this game. First, the choice of any single alternative (x, y, or z) by all three voters is a Nash equilibrium, because no single voter's departure can change the outcome to a different one, much less a better one, for that player. In addition, the choice of x by X, y by Y, and x by Z—resulting in x—is also a Nash equilibrium, because no voter's departure would lead to his obtaining a better outcome.

      In game-theoretic terms, sophisticated voting produces a different and smaller game in which some formerly undominated strategies in the larger game become dominated in the smaller game. The removal of such strategies—sometimes in several successive stages—can enable each voter to determine what outcomes are likely. In particular, sophisticated voters can foreclose the possibility that their worst outcomes will be chosen by successively removing dominated strategies, given the presumption that other voters will do likewise.

      How does sophisticated voting affect the chair's presumed extra voting power? Observe that the chair's tie-breaking vote is not only not helpful but positively harmful: it guarantees that X's worst outcome (z) will be chosen if voting is sophisticated. When voters' preferences are not so conflictual (note that the three voters have different first, second, and third choices when, as here, there is a Condorcet voting paradox), the paradox of the chair's position does not occur, making this paradox the exception rather than the rule.

The von Neumann–Morgenstern theory
      Von Neumann and Morgenstern were the first to construct a cooperative theory of n-person games. They assumed that various groups of players might join together to form coalitions, each of which has an associated value defined as the minimum amount that the coalition can ensure by its own efforts. (In practice, such groups might be blocs in a legislative body or business partners in a conglomerate.) They described these n-person games in characteristic-function form—that is, by listing the individual players (one-person coalitions), all possible coalitions of two or more players, and the values that each of these coalitions could ensure if a counter-coalition comprising all other players acted to minimize the amount that the coalition can obtain. They also assumed that the characteristic function is superadditive: the value of a coalition of two formerly separate coalitions is at least as great as the sum of the separate values of the two coalitions.

      The sum of payments to the players in each coalition must equal the value of that coalition. Moreover, each player in a coalition must receive no less than what he could obtain playing alone; otherwise, he would not join the coalition. Each set of payments to the players describes one possible outcome of an n-person cooperative game and is called an imputation. Within a coalition S, an imputation X is said to dominate another imputation Y if each player in S gets more with X than with Y and if the players in S receive a total payment that does not exceed the coalition value of S. This means that players in the coalition prefer the payoff X to the payoff Y and have the power to enforce this preference.

      Von Neumann and Morgenstern defined the solution to an n-person game as a set of imputations satisfying two conditions: (1) no imputation in the solution dominates another imputation in the solution and (2) any imputation not in the solution is dominated by another one in the solution. A von Neumann–Morgenstern solution is not a single outcome but, rather, a set of outcomes, any one of which may occur. It is stable because, for the members of the coalition, any imputation outside the solution is dominated by—and is therefore less attractive than—an imputation within the solution. The imputations within the solution are viable because they are not dominated by any other imputations in the solution.

      In any given cooperative game there are generally many—sometimes infinitely many—solutions. A simple three-person game that illustrates this fact is one in which any two players, as well as all three players, receive one unit, which they can divide between or among themselves in any way that they wish; individual players receive nothing. In such a case the value of each two-person coalition, and the three-person coalition as well, is 1.

      One solution to this game consists of three imputations, in each of which one player receives 0 and the other two players receive 1/2 each. There is no self-domination within the solution, because if one imputation is substituted for another, one player gets more, one gets less, and one gets the same (for domination, each of the players forming a coalition must gain). In addition, any imputation outside the solution is dominated by one in the solution, because the two players with the lowest payoffs must each get less than 1/2; clearly, this imputation is dominated by an imputation in the solution in which these two players each get 1/2. According to this solution, at any given time one of its three imputations will occur, but von Neumann and Morgenstern do not predict which one.
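
      These two conditions can be spot-checked numerically. The sketch below tests the three-imputation solution against a grid of candidate imputations; the restriction of domination to two-player coalitions follows the game just described, since neither single players (value 0) nor the grand coalition (whose payments already exhaust its value of 1) can enforce a domination.

from fractions import Fraction
from itertools import combinations

half = Fraction(1, 2)
V = [(half, half, 0), (half, 0, half), (0, half, half)]   # the proposed solution

def dominates(u, w):
    """u dominates w via some two-player coalition (each of value 1)."""
    return any(u[i] > w[i] and u[j] > w[j] and u[i] + u[j] <= 1
               for i, j in combinations(range(3), 2))

# Condition 1: no imputation in the solution dominates another in it.
assert not any(dominates(u, w) for u in V for w in V if u != w)

# Condition 2: every imputation outside the solution (here, sampled on a
# grid) is dominated by some member of the solution.
step = Fraction(1, 20)
grid = [(a, b, 1 - a - b)
        for a in (step * k for k in range(21))
        for b in (step * k for k in range(21))
        if a + b <= 1]
outside = [w for w in grid if w not in V]
assert all(any(dominates(u, w) for u in V) for w in outside)
print("both conditions hold for all", len(outside), "sampled imputations")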

      A second solution to this game consists of all the imputations in which player A receives 1/4 and players B and C share the remaining 3/4. Although this solution gives a different set of outcomes from the first solution, it, too, satisfies von Neumann and Morgenstern's two conditions. For any imputation within the solution, player A always gets 1/4 and therefore cannot gain. In addition, because players B and C share a fixed sum, if one of them gains in a proposed imputation, the other must lose. Thus, no imputation in the solution dominates another imputation in the solution.

      For any imputation not in the solution, player A must get either more or less than 1/4. When A gets more than 1/4, players B and C share less than 3/4 and, therefore, can do better with an imputation within the solution. When player A gets less than 1/4, say 1/8, he always does better with an imputation in the solution. Players B and C now have more to share; but no matter how they split the new total of 7/8, there is an imputation in the solution that one of them will prefer. When they share equally, each gets 7/16; but player B, for example, can get more in the imputation (1/4, 1/2, 1/4), which is in the solution. When players B and C do not divide the 7/8 equally, the player who gets the smaller amount can always do better with an imputation in the solution. Thus, any imputation outside the solution is dominated by one inside the solution. Similarly, it can be shown that all of the imputations in which player B gets 1/4 and players A and C share 3/4, as well as the set of all imputations in which player C gets 1/4 and players A and B share 3/4, also constitute a solution to the game.

      Although there may be many solutions to a game (each representing a different “standard of behaviour”), it was not apparent at first that there would always be at least one in every cooperative game. Von Neumann and Morgenstern found no game without a solution, and they conjectured that every cooperative game has at least one. However, in 1967 the American mathematician William F. Lucas discovered a fairly complicated 10-person game that has no solution. This and later counterexamples indicated that the von Neumann–Morgenstern solution is not universally applicable, but it remains compelling, especially since no definitive theory of n-person cooperative games exists.

The Banzhaf value in voting games
      In the section Power in voting: the paradox of the chair's position, it was shown that power defined as control over outcomes is not synonymous with control over resources, such as a chair's tie-breaking vote. The strategic situation facing voters intervenes and may cause them to reassess their strategies in light of the additional resources that the chair possesses. In doing so, they may be led to “gang up” against the chair. (Note that Y and Z do this without any explicit communication or binding agreement; the coalition they form against the chair X is an implicit one and the game, therefore, remains a noncooperative one.) In effect, the chair's resources become a burden to bear, not power to relish.

      When players' preferences are not known beforehand, though, it is useful to define power in terms of their ability to alter the outcome by changing their votes, as governed by a constitution, bylaws, or other rules of the game. Various measures of voting power have been proposed for simple games, in which every coalition has a value of 1 (if it has enough votes to win) or 0 (if it does not). The sum of the powers of all the players is 1. When a player has 0 power, his vote has no influence on the outcome; when a player has a power of 1, the outcome depends only on his vote. The key to calculating voting power is determining the frequency with which a player casts a critical vote.

      American attorney John F. Banzhaf III proposed that all combinations in which any player is the critical voter—that is, in which a measure passes only with this voter's support—be considered equally likely. The Banzhaf value for a player is then the number of combinations in which that player is critical divided by the total number of critical instances summed over all voters (the player included).
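
      This counting procedure is straightforward to carry out by machine. The short Python sketch below is again only an illustration (the function name banzhaf is coined here): it enumerates every coalition, counts how often each member's defection would turn a winning coalition into a losing one, and normalizes the counts so that they sum to 1.

        from itertools import combinations

        def banzhaf(weights, quota):
            # Normalized Banzhaf index of a weighted voting game.
            n = len(weights)
            swings = [0] * n
            for r in range(1, n + 1):
                for coalition in combinations(range(n), r):
                    total = sum(weights[i] for i in coalition)
                    if total < quota:
                        continue  # a losing coalition has no critical voter
                    for i in coalition:
                        if total - weights[i] < quota:
                            swings[i] += 1  # member i's vote is critical
            total_swings = sum(swings)
            return [s / total_swings for s in swings]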

      This view is not compatible with defining the voting power of a player to be proportional to the number of votes he casts, because votes per se may have little or no bearing on the choice of outcomes. For example, in a three-member voting body in which A has 4 votes, B 2 votes, and C 1 vote, members B and C will be powerless if a simple majority wins. The fact that members B and C together control 3/7 of the votes is irrelevant in the selection of outcomes, so these members are called dummies. Member A, by contrast, is a dictator by virtue of having enough votes alone to determine the outcome. A voting body can have only one dictator, whose existence renders all other members dummies, but there may be dummies and no dictator (an example is given below).
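
      In the illustrative sketch above, this configuration is immediate:

        banzhaf([4, 2, 1], quota=4)
        # -> [1.0, 0.0, 0.0]: A is a dictator; B and C are dummies.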

      A minimal winning coalition (MWC) is one in which the subtraction of any one of its members renders it losing; every member of an MWC is thus critical to it. To illustrate the calculation of Banzhaf values, consider a voting body with two 2-vote members (distinguished as 2a and 2b) and one 3-vote member, in which a simple majority wins. There are three distinct MWCs—(3, 2a), (3, 2b), and (2a, 2b)—and these are the only combinations in which some voter is critical; the grand coalition, comprising all three members, (3, 2a, 2b), is not an MWC because no single member's defection would cause it to lose.

      As each member's defection is critical in two MWCs, each member's proportion of voting power is two-sixths, or one-third. Thus, the Banzhaf index, which gives the Banzhaf values for each member in vector form, is (1/3, 1/3, 1/3). Clearly, the voting power of the 3-vote member is the same as that of each of the two 2-vote members, although the 3-vote member has 50 percent greater weight (more votes) than each of the 2-vote members.
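
      The hypothetical banzhaf function sketched above reproduces this index:

        banzhaf([3, 2, 2], quota=4)
        # -> [0.333..., 0.333..., 0.333...], i.e., (1/3, 1/3, 1/3)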

      The discrepancy between voting weight and voting power is more dramatic in the voting body (50, 49, 1) where, again, a simple majority wins. The 50-vote member is critical in all three winning coalitions—(50, 1), (50, 49), and (50, 49, 1), the last of which is winning but not minimal—giving him a veto because his presence is necessary for a coalition to be winning; the 49-vote member is critical only in (50, 49) and the 1-vote member only in (50, 1). Thus, the Banzhaf index for (50, 49, 1) is (3/5, 1/5, 1/5), making the 49-vote member indistinguishable from the 1-vote member; the 50-vote member, with just one more vote than the 49-vote member, has three times as much voting power.
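
      Again, the sketch above confirms the calculation:

        banzhaf([50, 49, 1], quota=51)
        # -> [0.6, 0.2, 0.2], i.e., (3/5, 1/5, 1/5)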

      In 1958 six West European countries formed the European Economic Community (EEC). The three large countries (West Germany, France, and Italy) each had 4 votes on its Council of Ministers, the two medium-size countries (Belgium and The Netherlands) 2 votes each, and the one small country (Luxembourg) 1 vote. The decision rule of the Council was a qualified majority of 12 out of 17 votes, giving the large countries Banzhaf values of 5/21 each, the medium-size countries 1/7 each, and—amazingly—Luxembourg no voting power at all. From 1958 to 1973—when the EEC admitted three additional members—Luxembourg was a dummy. Luxembourg might as well not have gone to Council meetings except to participate in the debate, because its one vote could never change the outcome. To see this without calculating the Banzhaf values of all the members, note that the votes of the five other countries are all even numbers. Any winning coalition that included Luxembourg therefore had to give the other members an even total of at least 12 votes, a total with which they could win on their own; Luxembourg's defection could thus never render a winning coalition losing. It is worth noting that as the Council kept expanding with the addition of new countries and the formation of the European Union, Luxembourg never reverted to being a dummy, even though its votes became an ever smaller proportion of the total.
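
      Running the same illustrative function on the original Council's weights confirms Luxembourg's status as a dummy:

        # Germany, France, Italy (4 votes each); Belgium, the Netherlands
        # (2 each); Luxembourg (1); qualified majority of 12 out of 17.
        banzhaf([4, 4, 4, 2, 2, 1], quota=12)
        # -> [5/21, 5/21, 5/21, 1/7, 1/7, 0.0]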

      The Banzhaf and other power indices, rooted in cooperative game theory, have been applied to many voting bodies, not necessarily weighted, sometimes with surprising results. For example, the Banzhaf index has been used to calculate the power of the 5 permanent and 10 nonpermanent members of the United Nations Security Council. (The permanent members, all with a veto, have 83 percent of the power.) It has also been used to compare the power of representatives, senators, and the president in the U.S. federal system.

      Banzhaf himself successfully challenged the constitutionality of the weighted-voting system used in Nassau County, New York, showing that three of the County Board's six members were dummies. Likewise, the former Board of Estimate of New York City, in which three citywide officials (the mayor, the chair of the city council, and the comptroller) had two votes each and the five borough presidents had one vote each, was declared unconstitutional by the U.S. Supreme Court. The reason was that Brooklyn had approximately six times the population of Staten Island but the same one vote on the Board, violating the equal-protection clause of the 14th Amendment of the U.S. Constitution, which requires “one person, one vote.” Finally, it has been argued that the U.S. electoral college, which is effectively a weighted voting body because almost all states cast their electoral votes as blocs, violates “one person, one vote” in presidential elections, because voters from large states have approximately three times as much voting power, on a per-capita basis, as voters from small states.

      Game theory is now well established and widely used in a variety of disciplines. The foundations of economics, for example, are increasingly grounded in game theory; among game theory's many applications in economics is the design of Federal Communications Commission auctions of airwaves, which have netted the U.S. government billions of dollars. Game theory is being used increasingly in political science to study strategy in areas as diverse as campaigns and elections, defense policy, and international relations. In biology, business, management science, computer science, and law, game theory has been used to model a variety of strategic situations. Game theory has even penetrated areas of philosophy (e.g., to study the equilibrium properties of ethical rules), religion (e.g., to interpret Bible stories), and pure mathematics (e.g., to analyze how to divide a cake fairly among n people). All in all, game theory holds out great promise not only for advancing the understanding of strategic interaction in very different settings but also for offering prescriptions for the design of better auction, bargaining, voting, and information systems that involve strategic choice.

Morton D. Davis; Steven J. Brams

Additional Reading
The seminal work in game theory is John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior, 3rd ed. (1953, reprinted 1980). Avinash K. Dixit and Barry J. Nalebuff, Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life (1991), uses case studies, without formal mathematical analysis, to introduce the principles of game theory. Two introductions that require only high school algebra are Avinash K. Dixit and Susan Skeath, Games of Strategy (1999); and Philip D. Straffin, Game Theory (1993).

Applications of game theory are presented in Nesmith C. Ankeny, Poker Strategy: Winning with Game Theory (1981, reprinted 1982); Robert Axelrod, The Evolution of Cooperation (1984); Douglas G. Baird, Robert H. Gertner, and Randal C. Picker, Game Theory and the Law (1994); Steven J. Brams, Biblical Games: Game Theory and the Hebrew Bible, 2nd ed. (2002); Steven J. Brams, Theory of Moves (1994); Dan S. Felsenthal and Moshé Machover, The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes (1998); Barry O'Neill, Honor, Symbols, and War (1999); and Karl Sigmund, Games of Life: Explorations in Ecology, Evolution, and Behavior (1993).

Histories of game theory can be found in William Poundstone, Prisoner's Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb (1992); and E. Roy Weintraub (ed.), Toward a History of Game Theory (1992).

Steven J. Brams
