# Matrix game (LP for game theory)

Author: David Oswalt (SysEn 6800 Fall 2020)

Steward: Wei-Han Chen, Fengqi You

## Game Theory and Linear Programming

John von Neumann (1903–1957) and Oskar Morgenstern (1902–1977)

Game theory can be defined as a formal language for modeling and analyzing the interactive behaviors of intelligent, rational decision-makers (or players). It provides the mathematical methods necessary to analyze the decisions of two or more players, based on their preferences, and to determine a final outcome. The theory was first conceptualized by mathematician Ernst Zermelo in the early 20th century. However, John von Neumann pioneered modern game theory through his book Theory of Games and Economic Behavior, written with co-author Oskar Morgenstern, and for this reason historians often credit von Neumann as the Father of Game Theory [1][2]. The theory has provided a framework for approaching complex, high-pressure situations and has a broad spectrum of applications. These applications have helped shape modern economics and the social sciences as we know them today and are discussed in the applications section below.

Analyzing game theoretic situations is a practical application of linear programming. These situations can get quite complex mathematically, but one of the simplest forms of game is called the Finite Two-Person Zero-Sum Game (or Matrix Game for short).  In a Matrix Game, two players are involved in a competitive situation in which one player’s loss is the other’s gain. Some common terms related to the Matrix Game that will be used throughout this chapter have been defined below:

**Game** – Any social situation involving two or more individuals [2].

**Players** – The individuals involved in a game. In the case of two-person zero-sum games, these players are assumed to be rational and intelligent [2].

**Rationality** – A decision-maker is considered rational if he or she makes decisions consistently in pursuit of his or her own objectives. Assuming a player to be rational implies that said player's objective is to maximize his or her own payoff [2].

**Utility** – The scale upon which a decision's payoff is measured [2].

The analysis of these games originally relied on John von Neumann's Minimax Theorem, which he derived using the Brouwer Fixed-Point Theorem. It was later shown that the Matrix Game can also be solved using linear programming together with the Duality Theorem [3]. This linear-programming solution to the Matrix Game is derived in the Theory and Algorithmic Discussion section below.

## Theory and Algorithmic Discussion

Consider a simple two-player zero-sum matrix game called Evens and Odds. In this game, two players each wager $1 before simultaneously showing either one or two fingers. If the sum of the fingers showing is even, player 1 wins the pot for that round ($2). If the sum of the fingers showing is odd, player 2 wins the pot for that round. As with all matrix games, the assumption that both players are rational and intelligent decision makers with the goal of maximizing their own total payoff in each round applies. The expected utility for each player can be defined using a payoff matrix, P. In this payoff matrix, the rows and columns represent the decisions of player 1 and player 2 respectively. The below payoff matrix represents the payoff to player 1 in this matrix game.

${\displaystyle P={\begin{bmatrix}2&-2\\-2&2\end{bmatrix}}}$

In this example, the optimal strategy for each player is to show one or two fingers with equal probability ½, and neither player has a distinct advantage. Consider now a less trivial game in which the payoff matrix is no longer evenly balanced, shown below.

${\displaystyle P={\begin{bmatrix}1&-2\\-3&2\end{bmatrix}}}$
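As a quick worked check (a computation added here for illustration, using the convention above that ${\displaystyle P}$ holds the payoffs to player 1): suppose player 1 plays the first row with probability ${\displaystyle p}$. Equalizing player 1's expected payoff against player 2's two pure strategies gives

${\displaystyle p(1)+(1-p)(-3)=p(-2)+(1-p)(2),}$

so ${\displaystyle 4p-3=2-4p}$ and ${\displaystyle p={\tfrac {5}{8}}}$, for an expected payoff of ${\displaystyle 4({\tfrac {5}{8}})-3=-{\tfrac {1}{2}}}$ per round. Even when playing optimally, player 1 loses an average of ½ per round, so it is player 2 who holds the edge in this game.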

While it may be intuitive that one player holds an edge in this new game (in fact, player 2 does), making this determination is far less clear for more complicated games. This is where the mathematics behind game theory comes into play. Consider a more general form of a two-person zero-sum game in which each player picks from a finite set of actions: the row player (player 1) chooses an action ${\displaystyle i\in \{1,2,...,n\}}$ and the column player (player 2) chooses an action ${\displaystyle j\in \{1,2,...,m\}}$. The general form of the payoff matrix for a matrix game is shown below. Note that in the derivation that follows, each entry ${\displaystyle p_{ij}}$ represents a payment made by the row player to the column player (a negative entry is a payment in the other direction).

${\displaystyle P=[p_{ij}]}$

Next, we assume that each player makes a random selection in accordance with a fixed probability distribution. For the row player, this distribution is described by a stochastic vector, ${\displaystyle y}$, whose component ${\displaystyle y_{i}}$ denotes the probability that the row player selects action ${\displaystyle i}$. A stochastic vector consists of nonnegative probabilities that sum to one, as any probability distribution must:

${\displaystyle y\geq 0{\text{ and }}e^{T}y=1,}$

where ${\displaystyle e}$ is a vector of all ones. Likewise, the stochastic vector for the column player is ${\displaystyle x}$, with the probability that this player selects action ${\displaystyle j}$ denoted by ${\displaystyle x_{j}}$. To compute the expected payoff to the column player, the payoff from each outcome ${\displaystyle (i,j)}$, for ${\displaystyle i\in \{1,2,...,n\}}$ and ${\displaystyle j\in \{1,2,...,m\}}$, is weighted by the probability of that outcome and summed. Thus, the column player's expected payoff is

${\displaystyle \sum _{i,j}y_{i}p_{ij}x_{j}=y^{T}Px}$.
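As a quick concrete check of this bilinear form (a sketch assuming NumPy, which the original text does not use), the Evens and Odds matrix with both players mixing uniformly yields an expected payoff of zero:

```python
# Expected payoff y^T P x for the Evens and Odds game with uniform mixing.
# Illustrative sketch: NumPy is an assumption, not part of the original text.
import numpy as np

P = np.array([[2.0, -2.0],
              [-2.0, 2.0]])
y = np.array([0.5, 0.5])  # row player's stochastic vector
x = np.array([0.5, 0.5])  # column player's stochastic vector

expected_payoff = y @ P @ x  # computes y^T P x
print(expected_payoff)  # 0.0: neither player gains on average
```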

Suppose the column player has adopted strategy ${\displaystyle x}$. Since the row player is rational, their best defense against strategy ${\displaystyle x}$ is to adopt the strategy ${\displaystyle y^{*}}$ that minimizes the column player's expected payout:

${\displaystyle {\begin{aligned}\min _{y}&~~y^{T}Px\\{\text{s.t.}}&~~e^{T}y=1\\&~~y\geq 0\\\end{aligned}}}$

Because the column player is also intelligent, they can anticipate the row player's minimizing response. The column player therefore employs the strategy ${\displaystyle x^{*}}$ that maximizes their payoff given the row player's best defense:

${\displaystyle \max _{x}\min _{y}y^{T}Px}$

The above problem can be solved by reformulating it as a linear program. The key observation is that the inner minimum of ${\displaystyle y^{T}Px}$ over all stochastic vectors ${\displaystyle y}$ is always attained at a pure (deterministic) strategy, i.e., at one of the unit vectors ${\displaystyle e_{i}}$. Taking the inner optimization over these pure strategies, the problem can be rewritten as:

${\displaystyle {\begin{aligned}\max _{x}&~~\min _{i}e_{i}^{T}Px\\{\text{s.t.}}&~~\sum _{j=1}^{m}x_{j}=1\\&~~x_{j}\geq 0,&~~j=1,2,...,m\\\end{aligned}}}$

To linearize the inner minimization, a new variable ${\displaystyle v}$ is introduced as a lower bound on its value. This gives the following linear program:

${\displaystyle {\begin{aligned}\max &~~v\\{\text{s.t.}}&~~v\leq e_{i}^{T}Px,&~~i=1,2,...,n\\&~~\sum _{j=1}^{m}x_{j}=1\\&~~x_{j}\geq 0,&~~j=1,2,...,m\\\end{aligned}}}$

or in vector notation,

${\displaystyle {\begin{aligned}\max &~~v\\{\text{s.t.}}&~~ve-Px\leq 0\\&~~e^{T}x=1\\&~~x\geq 0\\\end{aligned}}}$

The above max-min linear program governs the column player's strategy ${\displaystyle x^{*}}$. Taking its dual yields the min-max problem that determines the row player's strategy ${\displaystyle y^{*}}$:

${\displaystyle \min _{y}\max _{x}y^{T}Px}$

As with the column player's problem, the above can be reformulated as a linear program by taking the inner maximization over the pure strategies and introducing a new variable ${\displaystyle u}$ as an upper bound:

${\displaystyle {\begin{aligned}\min &~~u\\{\text{s.t.}}&~~ue-P^{T}y\geq 0\\&~~e^{T}y=1\\&~~y\geq 0\\\end{aligned}}}$

These linear programs can be solved to find the optimal strategies ${\displaystyle x^{*}}$ and ${\displaystyle y^{*}}$. The Minimax Theorem can now be used to verify that the two solutions are consistent with one another. The Minimax Theorem states that there exist stochastic vectors ${\displaystyle x^{*}}$ and ${\displaystyle y^{*}}$ for which

${\displaystyle \max _{x}y^{*T}Px=\min _{y}y^{T}Px^{*}}$

In order to prove the Minimax Theorem, we first consider the fact that

${\displaystyle v^{*}=\min _{i}e_{i}^{T}Px^{*}=\min _{y}y^{T}Px^{*},}$

and

${\displaystyle u^{*}=\max _{j}e_{j}^{T}P^{T}y^{*}=\max _{x}x^{T}P^{T}y^{*}=\max _{x}y^{*T}Px}$

Since the max-min linear program for ${\displaystyle x^{*}}$ and the min-max linear program for ${\displaystyle y^{*}}$ are duals of one another, the Strong Duality Theorem guarantees that ${\displaystyle v^{*}=u^{*}}$. Therefore,

${\displaystyle \max _{x}y^{*T}Px=\min _{y}y^{T}Px^{*}}$

Solving these programs for the optimal value ${\displaystyle v^{*}=u^{*}}$ yields what is called the value of the game, which shows how much utility each player can expect to gain or lose on average. In the event that ${\displaystyle v^{*}=u^{*}=0}$, the game is considered fair, meaning neither player has a distinct advantage. To illustrate the power of the Minimax Theorem in solving matrix games, a numerical example is provided in the section below.
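The duality relationship can also be checked numerically. The sketch below (an illustration assuming NumPy and SciPy, which are not part of the original text) solves both linear programs for the 2×2 matrix introduced earlier and confirms that the two optimal values coincide:

```python
# Check v* = u* (strong duality / minimax) for the 2x2 game P = [[1,-2],[-3,2]].
# Illustrative sketch: NumPy and SciPy are assumptions, not part of the text.
import numpy as np
from scipy.optimize import linprog

P = np.array([[1.0, -2.0],
              [-3.0, 2.0]])
n, m = P.shape

# Column player: max v  s.t.  v*e - P x <= 0,  e^T x = 1,  x >= 0
res_col = linprog(c=[0.0] * m + [-1.0],                     # maximize v
                  A_ub=np.hstack([-P, np.ones((n, 1))]),
                  b_ub=np.zeros(n),
                  A_eq=[[1.0] * m + [0.0]], b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
v_star = res_col.x[-1]

# Row player: min u  s.t.  u*e - P^T y >= 0,  e^T y = 1,  y >= 0
res_row = linprog(c=[0.0] * n + [1.0],                      # minimize u
                  A_ub=np.hstack([P.T, -np.ones((m, 1))]),  # P^T y - u*e <= 0
                  b_ub=np.zeros(m),
                  A_eq=[[1.0] * n + [0.0]], b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
u_star = res_row.x[-1]

print(v_star, u_star)  # the two optimal values coincide: the value of the game
```

For this matrix the common optimal value is −½, so the game is not fair.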

## Numerical Example

Many decisions made in sports can be modeled as finite two-person zero-sum games. Take, for example, a common dilemma seen in American football. The offense has driven down the field and is just a few short yards from scoring. The team has four plays, or downs, to score. On the third down, the team is stopped by the defense and is unable to score, leaving only one more play to make it happen. There are two options for scoring. The first is a field goal, in which the team kicks the ball through the uprights for 3 points. The second is a passing or running play for a touchdown, worth 7 points. This is often referred to as a “Fourth and Goal” situation, a dilemma that play-callers face in most football games. While scoring a touchdown yields a higher payoff, it is a much riskier option, as running and passing plays are easier to defend against than a field goal. For this reason, football coaches often settle for kicking a field goal on 4th down instead of going for it. This anticlimactic end to a long and exciting drive often leaves fans unsatisfied, knowing that their team was only a few yards from a touchdown. While kicking the field goal nearly guarantees 3 points, is it smarter to employ a more aggressive strategy and go for the touchdown? Game theory can help determine the strategy that will yield the highest number of points on average over time.

A few assumptions must be made in order to model this 4th Down Dilemma. The first is that both teams are ideal: if the offense chooses a run play and the defense chooses to defend a run play, the run is stopped with zero yards gained; if the offense chooses a run play and the defense incorrectly defends a passing play, the play succeeds and a touchdown is scored. We also assume that if the offense chooses to kick a field goal, it is guaranteed to succeed, since field goals from just a few yards out are very rarely missed. The final assumption is that all other factors contributing to play-calling are neglected. This excludes situations such as the offense being down 2 points with only a few seconds on the clock, when a field goal for 3 points would be the obvious best strategy. With these assumptions in mind, the payoff to the offense can be outlined as follows:

**Payoff to Offense – 4th Down Dilemma**

| Offense \ Defense | Run | Pass | FG |
|-------------------|-----|------|----|
| Run               | -7  | 7    | 3  |
| Pass              | 7   | -7   | 3  |
| FG                | 3   | 3    | 3  |

The above payoff table can also be depicted by the following payoff matrix, ${\displaystyle P}$, where the columns represent the defensive team's actions and the rows represent the offensive team's actions.

${\displaystyle P={\begin{bmatrix}-7&7&3\\7&-7&3\\3&3&3\end{bmatrix}}}$

To determine its optimal strategy, the offense must solve the linear program below, in which ${\displaystyle x_{1},x_{2},x_{3}}$ form the offense's mixed strategy over Run, Pass, and FG, and ${\displaystyle w}$ is the value being maximized. Because this ${\displaystyle P}$ is symmetric, the offense's program has the same matrix form as the max-min program derived above (constraints ${\displaystyle we-Px\leq 0}$ and ${\displaystyle e^{T}x=1}$):

${\displaystyle {\begin{aligned}{\text{max}}&~~w\\{\text{s.t.}}&~~{\begin{bmatrix}7&-7&-3&1\\-7&7&-3&1\\-3&-3&-3&1\\1&1&1&0\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\\w\end{bmatrix}}{\begin{matrix}\leq \\\leq \\\leq \\=\end{matrix}}{\begin{bmatrix}0\\0\\0\\1\end{bmatrix}}\\\end{aligned}}}$
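This program can be handed to any LP solver. The sketch below (an illustration assuming NumPy and SciPy, which the original does not specify) sets up and solves the offense's problem:

```python
# Solve the offense's 4th Down Dilemma LP.
# Illustrative sketch: NumPy and SciPy are assumptions, not part of the text.
import numpy as np
from scipy.optimize import linprog

# Payoff matrix to the offense (rows: Run, Pass, FG; columns: defense's choices)
P = np.array([[-7.0,  7.0, 3.0],
              [ 7.0, -7.0, 3.0],
              [ 3.0,  3.0, 3.0]])

# Variables are [x1, x2, x3, w]; linprog minimizes, so maximize w via c = -w.
c = np.array([0.0, 0.0, 0.0, -1.0])
# Constraints w*e - P^T x <= 0 (here P is symmetric, so P^T = P)
A_ub = np.hstack([-P.T, np.ones((3, 1))])
b_ub = np.zeros(3)
# Probabilities must sum to one: x1 + x2 + x3 = 1
A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])
b_eq = np.array([1.0])
bounds = [(0, None)] * 3 + [(None, None)]  # x >= 0, w unrestricted

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_star, value = res.x[:3], res.x[3]
print(x_star, value)  # optimal mix over (Run, Pass, FG) and the game value
```

Under the stated idealized assumptions, the solver returns the strategy ${\displaystyle x^{*}=(0,0,1)}$ with a game value of 3: the guaranteed field goal dominates the risky run and pass options, so in this model the conservative coaches are playing optimally.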

## Other Applications of the Matrix Game

The rise of game theory spanned the period of both World Wars, so naturally one of its earliest applications was in developing winning military strategies. Game theory was used to make high-pressure decisions on attack and defense strategies that optimized their impact within a set of constraints. The Battle of the Bismarck Sea, fought between Japanese and Allied forces in 1943, is one of the most historic examples of game theory in this context: Allied air forces analyzed the attack as a two-person zero-sum game to maximize the time available to bomb a Japanese naval convoy, given the limited information they had about the convoy's route. This demonstrates that the word “game” in “game theory” can be misleading; not all applications of game theory are games in the recreational sense. Another early application of game theory was in economics. This grew into one of the theory's most significant application areas and has shaped modern economics as we know it today, playing a major role in the development of many sub-disciplines such as industrial organization, international trade, labor economics, and macroeconomics [1]. As game theory matured, its applications expanded into various fields of social science, including political science, international relations, philosophy, sociology, and anthropology, as well as biology and computer science. To this day, economics remains the most prominent application of game theory.
