
Markov Chains and the Theory of Games


linnea




Presentation Transcript


  1. 9 Markov Chains and the Theory of Games • Markov Chains • Regular Markov Chains • Absorbing Markov Chains • Game Theory and Strictly Determined Games • Games with Mixed Strategies

  2. Any square matrix T that satisfies the following properties can be referred to as a stochastic matrix: 1. All of the entries are nonnegative (each entry aij satisfies 0 ≤ aij ≤ 1). 2. The sum of the entries in each column of T is 1. Ex. A matrix each of whose columns adds to 1 is stochastic; a matrix whose first column sums to 1.3 is not stochastic.
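Both defining properties can be checked mechanically. A minimal sketch in Python (the matrices below are illustrative stand-ins, since the slide's own example matrices are not reproduced in the transcript):

```python
def is_stochastic(matrix):
    """Check that a square matrix has nonnegative entries
    and that each of its columns sums to 1."""
    n = len(matrix)
    if any(len(row) != n for row in matrix):
        return False  # not square
    if any(entry < 0 for row in matrix for entry in row):
        return False  # property 1 fails
    # Property 2: each column sums to 1 (allow tiny float error).
    return all(abs(sum(matrix[i][j] for i in range(n)) - 1) < 1e-9
               for j in range(n))

# Columns add to 1 -> stochastic
print(is_stochastic([[0.3, 0.4], [0.7, 0.6]]))   # True
# First column sums to 1.3 -> not stochastic
print(is_stochastic([[0.6, 0.4], [0.7, 0.6]]))   # False
```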

  3. Markov Process (Chain) A stochastic process in which the outcome at any stage of the experiment depends only on the outcome of the preceding stage. The outcome at any stage is called the state; the outcome at the current stage is called the current state.

  4. Ex. We are given a process with 2 choices: A and B. If a person chooses A, that person has a 30% probability of choosing A the next time (and therefore a 70% probability of switching to B). If a person chooses B, that person has a 60% probability of choosing B the next time (and a 40% probability of switching to A). This can be represented by a transition matrix:
          A     B
  A  [  0.3   0.4  ]
  B  [  0.7   0.6  ]

  5. The columns are indexed by the current state and the rows by the next state. The probability that an object in state 1 (A) will be in state 1 in the next step is a11 = 0.3. The probability that an object in state 2 (B) will be in state 1 in the next step is a12 = 0.4.
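With columns indexed by the current state and rows by the next state, one step of the chain is a matrix–vector product. A sketch using the A/B matrix described above:

```python
T = [[0.3, 0.4],   # row: next state A; columns: current state A, B
     [0.7, 0.6]]   # row: next state B

def step(T, x):
    """Apply one transition: x_next[i] = sum_j T[i][j] * x[j]."""
    return [sum(T[i][j] * x[j] for j in range(len(x)))
            for i in range(len(T))]

# Someone who just chose A (state vector [1, 0]):
print(step(T, [1.0, 0.0]))  # [0.3, 0.7]
```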

  6. Transition Matrix A transition matrix associated with a Markov chain with n states is an n X n matrix T with entries aij, where aij is the probability of moving from current state j to next state i (columns are labeled by the current state, rows by the next state). It satisfies: 1. All entries are nonnegative. 2. The sum of the entries in each column of T is 1.

  7. Ex. It has been found that of the people that eat brand X cereal, 85% will eat brand X again the next time and the rest (15%) will switch to brand Y. Also, 90% of the people that eat brand Y will eat brand Y the next time, with the rest (10%) switching to brand X. At present, 70% of the people eat brand X and the rest eat brand Y. What percent will eat brand X after 1 cycle? The transition matrix (states ordered X, Y) and initial distribution are
  T = [ 0.85  0.10 ]      X0 = [ 0.70 ]
      [ 0.15  0.90 ]           [ 0.30 ]
  One cycle: TX0 = [ 0.85(0.70) + 0.10(0.30), 0.15(0.70) + 0.90(0.30) ] = [ 0.625, 0.375 ]. So 62.5% will eat brand X.
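The one-cycle computation TX0 is a single matrix–vector product. A sketch, using the transition matrix built from the percentages in the slide's prose:

```python
T = [[0.85, 0.10],   # rows/columns ordered (brand X, brand Y)
     [0.15, 0.90]]
x0 = [0.70, 0.30]    # initial distribution: 70% eat brand X

# One cycle: x1 = T x0
x1 = [sum(T[i][j] * x0[j] for j in range(2)) for i in range(2)]
print(x1)  # ≈ [0.625, 0.375] -> 62.5% eat brand X after one cycle
```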

  8. Notice from the previous example: the vector X0 = [0.70, 0.30] is called a distribution vector. In general, the probability distribution of the system after n observations is given by Xn = TnX0.
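The distribution after n observations, Xn = TnX0, can be computed by applying T to the distribution vector n times. Continuing the cereal example:

```python
def distribution_after(T, x0, n):
    """Return X_n = T^n X_0 by applying T to the vector n times."""
    x = list(x0)
    for _ in range(n):
        x = [sum(T[i][j] * x[j] for j in range(len(x)))
             for i in range(len(T))]
    return x

T = [[0.85, 0.10], [0.15, 0.90]]
x0 = [0.70, 0.30]
print(distribution_after(T, x0, 1))  # ≈ [0.625, 0.375]
print(distribution_after(T, x0, 2))  # ≈ [0.56875, 0.43125]
```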

  9. Regular Markov Chain A stochastic matrix T is the transition matrix of a regular Markov chain if the sequence T, T2, T3,… approaches a steady-state matrix in which the columns of the limiting matrix are all equal and all the entries are positive. Equivalently, T is regular if some power of T has all positive entries. Ex. A matrix T all of whose entries are positive is regular.

  10. Ex. T is regular: although T itself has a zero entry, all entries of T2 are positive. Ex. Not regular: the entries of T raised to any power will never all be positive.
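Regularity can be tested by raising T to successive powers and looking for an all-positive power. A sketch (the example matrices are hypothetical stand-ins, since the slide's matrices are not reproduced in the transcript):

```python
def mat_mul(A, B):
    """Multiply two n x n matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_regular(T, max_power=50):
    """A stochastic matrix is regular if some power of it
    has all positive entries (search up to max_power)."""
    P = T
    for _ in range(max_power):
        if all(entry > 0 for row in P for entry in row):
            return True
        P = mat_mul(P, T)
    return False

print(is_regular([[0.0, 0.5], [1.0, 0.5]]))  # True: T^2 is all positive
print(is_regular([[1.0, 0.5], [0.0, 0.5]]))  # False: state 1 is never left
```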

  11. Steady–State Distribution Vector Ex. Given a distribution vector X0 and a regular transition matrix T, notice (after some work) that the sequence X0, TX0, T2X0, … tends toward a fixed vector. The steady-state distribution vector is the limiting vector obtained from the repeated application of the transition matrix to the distribution vector.

  12. Finding the Steady-State Distribution Vector Let T be a regular stochastic matrix. Then the steady-state distribution vector X may be found by solving the vector equation TX = X together with the condition that the sum of the elements of the vector X be equal to 1.

  13. Ex. Find the steady-state vector for a given transition matrix: writing out TX = X yields two equations that are equivalent, so one of them is combined with the condition x1 + x2 = 1; solving the resulting system gives the steady-state vector X.
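For a 2 × 2 regular chain the system TX = X, x1 + x2 = 1 can be solved in closed form. A sketch using the cereal matrix from the earlier example (the matrix in this slide is not reproduced in the transcript):

```python
def steady_state_2x2(T):
    """Solve T X = X with x1 + x2 = 1 for a regular 2x2
    column-stochastic T.  From x1 = T[0][0]*x1 + T[0][1]*x2,
    we get (1 - T[0][0]) * x1 = T[0][1] * x2."""
    a, b = T[0][0], T[0][1]
    x1 = b / (b + (1 - a))
    return [x1, 1 - x1]

T = [[0.85, 0.10], [0.15, 0.90]]
print(steady_state_2x2(T))  # ≈ [0.4, 0.6]: in the long run 40% eat brand X
```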

  14. Absorbing Stochastic Matrix An absorbing state is a state that, once entered, cannot be left (aii = 1). An absorbing stochastic matrix has the properties: • There is at least one absorbing state. • It is possible to go from any non-absorbing state to an absorbing state in one or more stages. Ex. In an absorbing matrix where state 3 is an absorbing state, an object may go from state 1 or state 2 (non-absorbing states) to state 3.

  15. Given an absorbing stochastic matrix, it is possible to reorder the states (absorbing states first) and rewrite it in the partitioned form
  [ I  S ]
  [ O  R ]
  I: identity matrix O: zero matrix S: transitions from nonabsorbing states into absorbing states R: transitions among the nonabsorbing states. Ex. A 4-state matrix is put into this form by reordering its rows and columns so that the absorbing states come first.

  16. Finding the Steady-State Matrix for an Absorbing Stochastic Matrix Suppose an absorbing stochastic matrix A has been partitioned into submatrices
  A = [ I  S ]
      [ O  R ]
  Then the steady-state matrix of A is given by
  [ I  S(I – R)–1 ]
  [ O      O      ]
  where the order of the identity matrix in (I – R) is chosen to have the same order as R.

  17. Ex. Compute the steady-state matrix for the matrix from the previous example by evaluating S(I – R)–1.

  18. The steady-state matrix, with its rows and columns labeled by the original (reordered) state numbers, gives the long-term behavior. For example, an object starting in state 3 will have a probability of 0.366 of being in state 2 in the long term.
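The block S(I – R)–1 from the steady-state formula can be computed directly. The 3-state chain below is a hypothetical stand-in (the slide's 4-state matrix is not reproduced in the transcript); state 1 is absorbing, and S and R come from the partition:

```python
# Hypothetical absorbing chain, states ordered absorbing-first:
# A = [[1, 0.4, 0.6],
#      [0, 0.2, 0.3],
#      [0, 0.4, 0.1]]
S = [[0.4, 0.6]]                 # nonabsorbing -> absorbing transitions
R = [[0.2, 0.3], [0.4, 0.1]]     # transitions among nonabsorbing states

# Invert the 2x2 matrix (I - R) by the adjugate formula.
a, b = 1 - R[0][0], -R[0][1]
c, d = -R[1][0], 1 - R[1][1]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]

# S (I - R)^{-1}: long-run probability of absorption from each start state.
absorb = [sum(S[0][k] * inv[k][j] for k in range(2)) for j in range(2)]
print(absorb)  # ≈ [1.0, 1.0]: with one absorbing state, absorption is certain
```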

  19. Game Theory A combination of matrix methods and the theory of probability, used to determine the optimal strategies when opponents compete to maximize gains (or minimize losses).

  20. Ex. Rafe (row player) and Carley (column player) are playing a game where each holds out a red or black chip simultaneously (neither knows the other’s choice). The betting is summarized in the payoff matrix on the next slide.

  21. We can summarize the game as a payoff matrix for Rafe, with rows R1 and R2 for Rafe’s choices and columns C1 and C2 for Carley’s. The game is a zero-sum game, since one person’s payoff is the other person’s loss. Rafe picks a row and Carley picks a column. Since the matrix is a payoff for Rafe, he wants to maximize the entry while Carley wants to minimize it.

  22. Rafe should look at the minimum of each row, then pick the row with the larger of the minima; this is called the Maximin strategy. Here the row minima are –10 (row 1) and 2 (row 2). Carley should look at the maximum of each column, then pick the column with the smaller of the maxima; this is called the Minimax strategy. Here the column maxima are 5 (column 1) and 3 (column 2). From this we see that Rafe should pick row 2 while Carley should pick column 2.

  23. Maximin Strategy (R’s move) • For each row of the payoff matrix, find the smallest entry in that row. • Choose the row for which the entry found in step 1 is as large as possible. Minimax Strategy (C’s move) • For each column of the payoff matrix, find the largest entry in that column. • Choose the column for which the entry found in step 1 is as small as possible.
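Both strategies are easy to mechanize. A sketch, using a hypothetical 2 × 3 payoff matrix consistent with the next slide's row minima (–2, –4) and column maxima (4, 3, –2), since the slide's own matrix is not reproduced:

```python
def maximin_row(A):
    """Row player's maximin: the row whose smallest entry is largest."""
    return max(range(len(A)), key=lambda i: min(A[i]))

def minimax_col(A):
    """Column player's minimax: the column whose largest entry is smallest."""
    cols = range(len(A[0]))
    return min(cols, key=lambda j: max(A[i][j] for i in range(len(A))))

A = [[4, 3, -2],     # hypothetical payoff matrix for the row player
     [-4, 1, -3]]
r, c = maximin_row(A), minimax_col(A)
print(r + 1, c + 1, A[r][c])  # row 1, column 3, entry -2
```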

  24. Ex. Determine the maximin and minimax strategies for each player in a game whose 2 X 3 payoff matrix has row minima –2 and –4 and column maxima 4, 3, and –2. The row player should pick row 1 (maximin –2). The column player should pick column 3 (minimax –2). *Note: the column player is favored under these strategies (wins 2).

  25. Optimal Strategy The optimal strategy in a game is the strategy that is most profitable to a particular player.

  26. Strictly Determined Game A strictly determined game is characterized by the following properties: • There is an entry in the payoff matrix that is simultaneously the smallest entry in its row and the largest entry in its column. This entry is called the saddle point for the game. • The optimal strategy for the row (column) player is precisely the maximin (minimax) strategy, namely the row (column) containing the saddle point.

  27. The Value of the Game The saddle point of a strictly determined game is also referred to as the value of the game. If the value of a strictly determined game is positive, then the game favors the row player. If the value is negative, it favors the column player. If the value is zero, the game is called a fair game.

  28. From the previous example: this is a strictly determined game with saddle point –2. The optimal strategies are for the row player to pick row 1 and the column player to pick column 3. The value of the game is –2, so it favors the column player.

  29. Example A two-person, zero-sum game is defined by the payoff matrix a. Show that the game is strictly determined and find the saddle point(s) for the game. The entry –2 is simultaneously the smallest entry in its row and the largest entry in its column. Therefore the game is strictly determined, with that entry as its saddle point.

  30. Example (Cont.) b. What is the optimal strategy for each player? Recall The optimal strategy for the row player is to make the move represented by the second row of the matrix, and the optimal strategy for the column player is to make the move represented by the third column.

  31. Example (Cont.) c. What is the value of the game? Does the game favor one player over the other? Recall The value of the game is -2, which implies that if both players adopt their best strategy, the column player will win 2 units in a play. Consequently, the game favors the column player.

  32. Mixed Strategies Making different moves during a game: a row (column) player may choose different rows (columns) over the course of the game. Ex. The game below has no saddle point. From a minimax/maximin strategy the row player should pick row 2 and the column player should pick column 3.

  33. For a mixed strategy, let the row player pick row 2 80% of the time and row 1 20% of the time, and let the column player pick columns 1, 2, and 3 10%, 20%, and 70% of the time, respectively. The row player’s strategy is the row vector P = [0.2 0.8], and the column player’s is the column vector Q = [0.1, 0.2, 0.7]T. To find the expected value of the game we compute PAQ, where A is the payoff matrix. Here the expected value is 1.1.

  34. Expected Value of a Game Let P (a row vector) and Q (a column vector) be the mixed strategies for the row player R and the column player C, respectively, and let A be the payoff matrix. The expected value, E, of the game is given by E = PAQ.
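The formula E = PAQ is a row-vector × matrix × column-vector product. A sketch with the mixed strategies from the slide above and a hypothetical payoff matrix (the slide's actual matrix is not reproduced in the transcript, so the expected value below is illustrative rather than the slide's 1.1):

```python
P = [0.2, 0.8]          # row player's mixed strategy
Q = [0.1, 0.2, 0.7]     # column player's mixed strategy
A = [[2, -1, 3],        # hypothetical 2x3 payoff matrix for the row player
     [1, 4, -2]]

# E = P A Q = sum over i, j of P[i] * A[i][j] * Q[j]
E = sum(P[i] * A[i][j] * Q[j]
        for i in range(len(P)) for j in range(len(Q)))
print(E)  # expected payoff to the row player
```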

  35. Example The payoff matrix for a certain game is given by Find the expected payoff to the row player if the row player R uses her maximin strategy 50% of the time and chooses each of the other two rows 25% of the time, while C chooses each column 50% of the time.

  36. Solution In this case, R’s mixed strategy may be represented by a row vector P with entry 0.5 in the maximin row and 0.25 in each of the other two rows, and C’s mixed strategy may be represented by the column vector Q = [0.5, 0.5]T. The expected payoff to the row player is then given by E = PAQ.

  37. Optimal Strategies for Nonstrictly Determined Games For a nonstrictly determined game with 2 X 2 payoff matrix
  A = [ a  b ]
      [ c  d ]
  Row player: P = [ p  1 – p ], where p = (d – c)/(a + d – b – c)
  Column player: Q = [ q, 1 – q ]T, where q = (d – b)/(a + d – b – c)
  Value of the game: v = (ad – bc)/(a + d – b – c)

  38. Ex. Given the payoff matrix, find the optimal strategies and then the value of the game. The row player should pick each row 50% of the time. The column player should pick column 1 90% of the time (and column 2 the remaining 10%). The value follows from v = (ad – bc)/(a + d – b – c).
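The closed-form optimal strategies for a nonstrictly determined 2 × 2 game can be applied mechanically. A sketch (the payoff matrix below is a hypothetical stand-in, since the slide's matrix is not reproduced in the transcript):

```python
def optimal_2x2(A):
    """Optimal mixed strategies and value for a nonstrictly determined
    2x2 game with payoff matrix A = [[a, b], [c, d]]."""
    (a, b), (c, d) = A
    D = a + d - b - c          # assumed nonzero for such games
    p = (d - c) / D            # row player plays row 1 with probability p
    q = (d - b) / D            # column player plays column 1 with probability q
    v = (a * d - b * c) / D    # value of the game
    return p, q, v

# Hypothetical matrix with no saddle point (maximin -1 != minimax 2):
p, q, v = optimal_2x2([[2, -1], [-3, 4]])
print(p, q, v)  # 0.7 0.5 0.5
```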
