Matrix game (LP for game theory)


Author: David Oswalt (SysEn 6800 Fall 2020)

Steward: Wei-Han Chen, Fengqi You

Game Theory and Linear Programming

John von Neumann (1903–1957) and Oskar Morgenstern (1902–1977)

Game theory can be defined as a formal language for modeling and analyzing the interactive behavior of intelligent, rational decision-makers (or players). It provides the mathematical methods needed to analyze the decisions of two or more players, based on their preferences, and to determine a final outcome. The theory was first conceptualized by the mathematician Ernst Zermelo in the early 20th century. However, John von Neumann pioneered modern game theory through his book Theory of Games and Economic Behavior, written with co-author Oskar Morgenstern, and for this reason von Neumann is often credited by historians as the Father of Game Theory[1][2]. Game theory provides a framework for approaching complex, high-pressure situations and has a broad spectrum of applications. These applications have helped shape modern economics and the social sciences as we know them today, and are discussed in the Other Applications section below.

Analyzing game theoretic situations is a practical application of linear programming. These situations can get quite complex mathematically, but one of the simplest forms of game is called the Finite Two-Person Zero-Sum Game (or Matrix Game for short).  In a Matrix Game, two players are involved in a competitive situation in which one player’s loss is the other’s gain. Some common terms related to the Matrix Game that will be used throughout this chapter have been defined below:

Game – Any social situation involving two or more individuals [2].

Players – The individuals involved in a game. In the case of two-person zero-sum games, these players are assumed to be rational and intelligent [2].

Rationality – A decision maker is considered to be rational if he or she makes decisions consistently in pursuit of his or her own objectives. Assuming a player to be rational implies that said player’s objective is to maximize his or her own payoff [2].

Utility – The scale upon which a decision’s payoff is measured [2].

Analyzing these games relies on John von Neumann's Minimax Theorem, which was originally derived using the Brouwer Fixed-Point Theorem. It was later shown that the Matrix Game can also be solved using Linear Programming together with the Duality Theorem[3]. This linear programming solution to the Matrix Game is developed in the Theory and Algorithmic Discussion section below.

Theory and Algorithmic Discussion

Consider a simple two-player zero-sum matrix game called Evens and Odds. In this game, two players each wager $1 before simultaneously showing either one or two fingers. If the sum of the fingers showing is even, player 1 wins the pot for that round ($2). If the sum of the fingers showing is odd, player 2 wins the pot for that round. As with all matrix games, the assumption that both players are rational and intelligent decision makers with the goal of maximizing their own total payoff in each round applies. The expected utility for each player can be defined using a payoff matrix, P. In this payoff matrix, the rows and columns represent the decisions of player 1 and player 2, respectively, with the first row or column corresponding to showing one finger and the second to showing two. The payoff matrix below gives the payoff to player 1 in this matrix game, expressed as player 1's net winnings for the round (winning the $2 pot is a net gain of $1, since each player contributed $1 of it):

$$ P \;=\; \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} . $$

In this example, if each player shows one or two fingers with equal probability ½, neither player holds a distinct advantage; player 1's expected payoff is zero, as the short calculation below illustrates.
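When both players randomize uniformly, each of the four outcomes occurs with probability ¼, so player 1's expected net winnings are

$$ \tfrac{1}{4}(1) + \tfrac{1}{4}(-1) + \tfrac{1}{4}(-1) + \tfrac{1}{4}(1) \;=\; 0 . $$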

Consider now a less trivial game in which the payoff matrix is no longer evenly balanced. While it may be intuitive in such a game that one player has the edge, making this determination is not as clear for much more complicated games. This is where the mathematics behind game theory comes into play. Consider a more general form of a two-person zero-sum game in which each player picks from a finite set of actions. Let $a_i$, $i = 1, \dots, m$, denote the finite set of actions that player one (or the "row player") can choose from, and let $b_j$, $j = 1, \dots, n$, denote the finite set of actions that player two (or the "column player") can choose from. The general form of the payoff matrix for a matrix game is then

$$ A \;=\; \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} , $$

where $a_{ij}$ is the payment made when the row player chooses action $a_i$ and the column player chooses action $b_j$. Note that positive entries denote payments from the row player to the column player, and negative entries denote payments in the opposite direction.

Next, we assume that each player is making a random selection in accordance with a fixed probability distribution. For the column player, this distribution is described by a stochastic vector, $x$. Each component of the stochastic vector, $x_j$, denotes the probability that the column player selects action $b_j$. This stochastic vector is made up of nonnegative probabilities that sum up to one per the fundamental law of probability:

$$ x \;\ge\; 0 $$

and

$$ e^T x \;=\; \sum_{j=1}^{n} x_j \;=\; 1 , $$

where $e$ is a vector of all ones. Likewise, the stochastic vector for the row player can be defined as $y$, with the probability that this player selects action $a_i$ denoted by $y_i$. To compute the expected payoff to the column player, the payoff of each pair of actions is multiplied by the probability of that outcome, and the results are summed. Thus, the column player's expected payoff is defined as

$$ \sum_{i=1}^{m} \sum_{j=1}^{n} y_i \, a_{ij} \, x_j \;=\; y^T A x . $$
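For concreteness, the following is a minimal sketch of this computation, assuming NumPy is available; the matrix is the Evens and Odds payoff written with the row-to-column payment convention used in this section (the negative of P above), and the two uniform strategies are purely illustrative.

    import numpy as np

    # Evens and Odds under the convention of this section:
    # A[i, j] = payment from the row player (player 1) to the column player (player 2),
    # i.e., the negative of the matrix P shown earlier, which recorded player 1's winnings.
    A = np.array([[-1.0,  1.0],
                  [ 1.0, -1.0]])

    y = np.array([0.5, 0.5])  # row player's stochastic vector (illustrative)
    x = np.array([0.5, 0.5])  # column player's stochastic vector (illustrative)

    expected_payoff = y @ A @ x  # y^T A x, the column player's expected payoff
    print(expected_payoff)       # 0.0 for these uniform strategies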

Since we have assumed that our column player acts rationally, we can expect them to act in accordance with the stochastic vector x; in other words, the column player has adopted strategy x. The row player's best option for defending against strategy x is to adopt the strategy y* that minimizes the column player's expected payout:

$$ \min_{y} \; y^T A x , $$

where the minimum is taken over all stochastic vectors $y$ and is attained at $y = y^*$.

Because the column player is also assumed to act intelligently, they are aware that the row player will defend in this payoff-minimizing way. Hence, the column player can employ the strategy x* that maximizes their expected payoff given the row player's best response:

$$ \max_{x} \; \min_{y} \; y^T A x , $$

where the outer maximum is taken over all stochastic vectors $x$ and is attained at $x = x^*$.

The above problem can be solved by reformulating it as a linear program. Because the inner objective $y^T A x$ is linear in $y$, its minimum over all stochastic vectors $y$ is always attained at a deterministic strategy, that is, at one of the unit vectors $e_i$. Taking the inner optimization over the deterministic strategies, the problem can therefore be rewritten as

$$ \max_{x} \; \min_{i} \; e_i^T A x . $$

To handle the inner minimum, a new variable v is introduced as a lower bound on $e_i^T A x$ for every row $i$; maximizing $v$ then pushes it up to that minimum. This gives us the following linear program:

$$ \begin{array}{lll} \text{maximize} & v & \\ \text{subject to} & \displaystyle\sum_{j=1}^{n} a_{ij} \, x_j \;\ge\; v, & i = 1, \dots, m, \\ & \displaystyle\sum_{j=1}^{n} x_j \;=\; 1, & \\ & x_j \;\ge\; 0, & j = 1, \dots, n, \end{array} $$

or in vector notation,

$$ \begin{array}{ll} \text{maximize} & v \\ \text{subject to} & A x \;\ge\; v e, \\ & e^T x \;=\; 1, \\ & x \;\ge\; 0. \end{array} $$
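To make the formulation concrete, here is a minimal sketch of how this max-min linear program could be set up and solved numerically, assuming NumPy and SciPy (scipy.optimize.linprog) are available; the Evens and Odds matrix is used again, and the variable names are only illustrative.

    import numpy as np
    from scipy.optimize import linprog

    # Payoff matrix: A[i, j] = payment from the row player to the column player
    # (Evens and Odds example; any m-by-n payoff matrix works the same way).
    A = np.array([[-1.0,  1.0],
                  [ 1.0, -1.0]])
    m, n = A.shape

    # Decision variables z = (x_1, ..., x_n, v).  Maximize v  <=>  minimize -v.
    c = np.zeros(n + 1)
    c[-1] = -1.0

    # Inequality constraints  A x >= v e   <=>   -A x + v e <= 0.
    A_ub = np.hstack([-A, np.ones((m, 1))])
    b_ub = np.zeros(m)

    # Equality constraint  e^T x = 1  (the variable v has coefficient 0).
    A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])

    # x >= 0, v free.
    bounds = [(0, None)] * n + [(None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")

    x_star = res.x[:n]   # column player's optimal mixed strategy
    v_star = res.x[-1]   # optimal value of the max-min program
    print(x_star, v_star)

For the Evens and Odds matrix this returns roughly x* = (0.5, 0.5) and v* = 0, matching the earlier observation that uniform play makes the game even.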

This max-min linear program governs the column player's strategy x*. We can determine the row player's strategy y* by taking the dual of this linear program, which corresponds to the min-max problem:

$$ \min_{y} \; \max_{x} \; y^T A x . $$

Similarly to the max-min linear program used for the column player’s strategy, the above equation can be reformulated into a linear program by taking the inner optimization over the deterministic strategies and introducing a new variable u:

$$ \begin{array}{ll} \text{minimize} & u \\ \text{subject to} & A^T y \;\le\; u e, \\ & e^T y \;=\; 1, \\ & y \;\ge\; 0. \end{array} $$
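Continuing the same sketch (same assumptions and the same illustrative matrix), the row player's linear program can be set up and solved in exactly the same way.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[-1.0,  1.0],     # same Evens and Odds payoff matrix as above
                  [ 1.0, -1.0]])
    m, n = A.shape

    # Decision variables w = (y_1, ..., y_m, u).  Minimize u.
    c = np.append(np.zeros(m), 1.0)

    # Inequality constraints  A^T y <= u e   <=>   A^T y - u e <= 0.
    A_ub = np.hstack([A.T, -np.ones((n, 1))])
    b_ub = np.zeros(n)

    # Equality constraint  e^T y = 1  (the variable u has coefficient 0).
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m + [(None, None)], method="highs")

    y_star = res.x[:m]   # row player's optimal mixed strategy
    u_star = res.x[-1]   # optimal value; by duality it equals v_star from the primal
    print(y_star, u_star)

Here the solver returns roughly y* = (0.5, 0.5) and u* = 0, in agreement with the value found for the column player.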

These linear programs can be solved to find the optimal strategies x* and y*. The Minimax Theorem can now be used to verify that both solutions are consistent with one another. The Minimax Theorem states:

MINIMAX THEOREM. There exist stochastic vectors x* and y* for which

$$ \max_{x} \; (y^*)^T A x \;=\; \min_{y} \; y^T A x^* , $$

where the maximum and minimum are taken over all stochastic vectors $x$ and $y$, respectively.

In order to prove the above theorem, let x* and y* denote optimal solutions of the max-min and min-max linear programs above, and first consider the fact that

$$ v^* \;=\; \max_{x} \min_{y} \; y^T A x \;=\; \min_{y} \; y^T A x^* , $$

and

$$ u^* \;=\; \min_{y} \max_{x} \; y^T A x \;=\; \max_{x} \; (y^*)^T A x . $$

Since the max-min linear program for x* and the min-max linear program for y* are duals of one another, the Strong Duality Theorem guarantees that v* = u*. Therefore,

$$ \max_{x} \; (y^*)^T A x \;=\; u^* \;=\; v^* \;=\; \min_{y} \; y^T A x^* , $$

which is exactly the statement of the Minimax Theorem.

Solving either of these linear programs yields the common optimal value v* = u*, which is called the value of the game. The value of a game shows how much utility each player can expect to gain or lose on average when both play optimally. In the event that v* = u* = 0, the game is considered to be fair, meaning neither player has a built-in advantage; the Evens and Odds game introduced earlier is an example, as shown below. To further illustrate the power of the minimax theorem in solving matrix games, a numerical example is provided in the section that follows.
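For Evens and Odds, solving either linear program gives $x^* = y^* = (\tfrac{1}{2}, \tfrac{1}{2})^T$ and

$$ v^* \;=\; u^* \;=\; \min_{i} \, e_i^T A x^* \;=\; 0 , $$

so the game is indeed fair: with best play, neither player expects to win or lose money in the long run.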

Numerical Example

  • Fourth and Goal Dilemma
  • Offensive and defensive decisions and payoffs
  • Results

Other Applications of the Matrix Game

  • Economics
  • War
  • Gambling
  • Intelligence and Foreign Policy

Conclusion

  • TBD

References

  • Add later