
Markov Chains Regular Markov Chains Absorbing Markov Chains


Presentation Transcript


  1. 9 Markov Chains and the Theory of Games • Markov Chains • Regular Markov Chains • Absorbing Markov Chains • Game Theory and Strictly Determined Games • Games with Mixed Strategies

  2. 9.1 Markov Chains

  3. Transitional Probabilities • In this chapter we will be concerned with a special class of stochastic processes in which the probabilities associated with the outcomes at any stage of the experiment depend only on the outcomes of the preceding stage. • Such a process is called a Markov process, or Markov chain. • The outcome at any stage of the experiment in a Markov process is called the state of the experiment. • In particular, the outcome at the current stage of the experiment is called the current state of the process.

  4. Applied Example: Common Stocks • An analyst at Weaver and Kline, a stock brokerage firm, observes that the closing price of the preferred stock of an airline company over a short span of time depends only on its previous closing price. • At the end of each trading day, he makes a note of the stock’s performance for that day, recording the closing price as “higher,” “unchanged,” or “lower” according to whether the stock closes higher, unchanged, or lower than the previous day’s closing price. • This sequence of observations may be viewed as a Markov chain. Applied Example 1, page 484

  5. Applied Example: Common Stocks • If on a certain day the stock’s closing price is higher than that of the previous day, then the probability that it closes higher, unchanged, or lower on the next trading day is .2, .3, and .5, respectively. • Next, if the stock’s closing price is unchanged from the previous day, then the probability that it closes higher, unchanged, or lower on the next trading day is .5, .2, and .3, respectively. • Finally, if the stock’s closing price is lower than that of the previous day, then the probability that it closes higher, unchanged, or lower on the next trading day is .4, .4, and .2, respectively. • With the aid of tree diagrams, describe the transition between states and the probabilities associated with these transitions. Applied Example 1, page 484

  6. Applied Example: Common Stocks Solution • The Markov chain being described has three states, each of which may be displayed by constructing a tree diagram in which the associated probabilities are shown on the appropriate limbs. • If the current state is higher, the tree diagram branches from Higher to Higher (.2), Unchanged (.3), and Lower (.5). Applied Example 1, page 484

  7. Applied Example: Common Stocks Solution • The Markov chain being described has three states, each of which may be displayed by constructing a tree diagram in which the associated probabilities are shown on the appropriate limbs. • If the current state is unchanged, the tree diagram branches from Unchanged to Higher (.5), Unchanged (.2), and Lower (.3). Applied Example 1, page 484

  8. Applied Example: Common Stocks Solution • The Markov chain being described has three states, each of which may be displayed by constructing a tree diagram in which the associated probabilities are shown on the appropriate limbs. • If the current state is lower, the tree diagram branches from Lower to Higher (.4), Unchanged (.4), and Lower (.2). Applied Example 1, page 484

  9. Transition Probabilities • The probabilities encountered in the last example are called transition probabilities because they are associated with the transition from one state to the next in the Markov process. • These transition probabilities may be conveniently represented in the form of a matrix. • Suppose for simplicity that we have a Markov chain with three possible outcomes at each stage of the experiment. • Let’s refer to these outcomes as state 1, state 2, and state 3. • Then the transition probabilities associated with the transition from state 1 to each of the states 1, 2, and 3 in the next phase of the experiment are precisely the respective conditional probabilities that the outcome is state 1, state 2, and state 3, given that the outcome state 1 has occurred.

  10. Transition Probabilities • In short, the desired transition probabilities are respectively P(state 1 | state 1), P(state 2 | state 1), and P(state 3 | state 1). • Thus, we can write a11 = P(state 1 | state 1), a21 = P(state 2 | state 1), and a31 = P(state 3 | state 1). • In a tree diagram, the current state State 1 branches to the next states State 1, State 2, and State 3 with probabilities a11, a21, and a31, respectively.

  11. Transition Probabilities • Similarly, the transition probabilities associated with the transition from state 2 can be presented as conditional probabilities: a12 = P(state 1 | state 2), a22 = P(state 2 | state 2), and a32 = P(state 3 | state 2). • In a tree diagram, the current state State 2 branches to the next states State 1, State 2, and State 3 with probabilities a12, a22, and a32, respectively.

  12. Transition Probabilities • Finally, the transition probabilities associated with the transition from state 3 can be presented as conditional probabilities: a13 = P(state 1 | state 3), a23 = P(state 2 | state 3), and a33 = P(state 3 | state 3). • In a tree diagram, the current state State 3 branches to the next states State 1, State 2, and State 3 with probabilities a13, a23, and a33, respectively.

  13. Transition Probabilities • These observations lead to the following matrix representation of the transition probabilities, in which each column corresponds to the current state and each row to the next state:

T = [ a11  a12  a13 ]
    [ a21  a22  a23 ]
    [ a31  a32  a33 ]

  14. Applied Example: Common Stocks • Use a matrix to represent the transition probabilities obtained earlier. Solution • There are three states at each stage of the Markov chain under consideration. • Letting state 1, state 2, and state 3 denote the states “higher,” “unchanged,” and “lower,” respectively, we find that a11 = .2, a21 = .3, a31 = .5; a12 = .5, a22 = .2, a32 = .3; and a13 = .4, a23 = .4, a33 = .2. Applied Example 2, page 484

  15. Applied Example: Common Stocks • Use a matrix to represent the transition probabilities obtained earlier. Solution • Thus, the required matrix representation is given by

T = [ .2  .5  .4 ]
    [ .3  .2  .4 ]
    [ .5  .3  .2 ]

where the columns correspond to the current state (“higher,” “unchanged,” “lower”) and the rows to the next state. Applied Example 2, page 484

  16. Transition Matrix • A transition matrix associated with a Markov chain with n states is an n × n matrix T with entries aij (1 ≤ i ≤ n; 1 ≤ j ≤ n). • The transition matrix has the following properties: • aij ≥ 0 for all i and j. • The sum of the entries in each column of T is 1.
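As a quick numerical check, here is a minimal Python/NumPy sketch that assembles the stock-price transition matrix from the example above and verifies both properties:

```python
import numpy as np

# Stock-price transition matrix: columns are the current state
# (higher, unchanged, lower); rows are the next day's state.
T = np.array([[0.2, 0.5, 0.4],
              [0.3, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

# Property 1: every entry a_ij is nonnegative.
assert np.all(T >= 0)

# Property 2: the entries in each column of T sum to 1.
assert np.allclose(T.sum(axis=0), 1.0)

print("T is a valid transition matrix")
```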

  17. Applied Example: Urban-Suburban Population Flow • Because of the continued successful implementation of an urban renewal program, it is expected that each year 3% of the population currently residing in the city will move to the suburbs and 6% of the population currently residing in the suburbs will move into the city. • At present, 65% of the total population of the metropolitan area lives in the city itself, while the remaining 35% lives in the suburbs. • Assuming that the total population of the metropolitan area remains constant, what will be the distribution of the population one year from now? Applied Example 4, page 487

  18. Applied Example: Urban-Suburban Population Flow Solution • We can use a tree diagram to see the Markov process under consideration: the current population splits into City (.65) and Suburb (.35); one year later, city dwellers stay in the City (.97) or move to the Suburb (.03), and suburb dwellers move to the City (.06) or stay in the Suburb (.94). • Thus, the probability that a person selected at random will be a city dweller one year from now is given by (.65)(.97) + (.35)(.06) = .6515. Applied Example 4, page 487

  19. Applied Example: Urban-Suburban Population Flow Solution • Using the same tree diagram, the probability that a person selected at random will be a suburb dweller one year from now is given by (.65)(.03) + (.35)(.94) = .3485. Applied Example 4, page 487

  20. Applied Example: Urban-Suburban Population Flow Solution • The process under consideration may be viewed as a Markov chain with two possible states at each stage of the experiment: State 1: “living in the city” State 2: “living in the suburbs” • The transition matrix associated with this Markov chain is

T = [ .97  .06 ]
    [ .03  .94 ]

where the columns correspond to the current state and the rows to the state one year later. Applied Example 4, page 487

  21. Applied Example: Urban-Suburban Population Flow Solution • Next, observe that the initial (current) probability distribution of the population may be summarized in the form of the column vector X0 = [.65, .35]^T (state 1, state 2). • Using the probabilities obtained with the tree diagram, we may write the population distribution one year later as X1 = [.6515, .3485]^T. Applied Example 4, page 487

  22. Applied Example: Urban-Suburban Population Flow Solution • We can now verify that X1 = TX0, so this problem may be solved using matrix multiplication. Applied Example 4, page 487
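A minimal NumPy sketch of this verification, using the urban-suburban transition matrix and initial distribution assembled above:

```python
import numpy as np

# Columns of T are the current state (city, suburb);
# rows are the state one year later.
T = np.array([[0.97, 0.06],
              [0.03, 0.94]])

X0 = np.array([0.65, 0.35])  # 65% city, 35% suburbs today

X1 = T @ X0
print(X1)  # [0.6515 0.3485], matching the tree-diagram computation
```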

  23. Applied Example: Urban-Suburban Population Flow • Now, find the population distribution of the city after two years and three years. Solution • Let X1, X2, and X3 be the column vectors representing the population distribution of the metropolitan area after one year, two years, and three years, respectively. • To find X2, we take X1 to represent the “initial” probability distribution in this part of the calculation; thus X2 = TX1 ≈ [.6529, .3471]^T. • Similarly, for X3 we have X3 = TX2 ≈ [.6541, .3459]^T. Applied Example 4, page 487

  24. Applied Example: Urban-Suburban Population Flow • Now, find the population distribution of the city after two years and three years. Solution • Observe that we have X2 = TX1 = T(TX0) = T^2 X0 and, similarly, X3 = TX2 = T^3 X0. • These results are easily generalized. Applied Example 4, page 487

  25. Distribution Vectors • Let there be a Markov process in which there are n possible states at each stage of the experiment. • Let the probability of the system being in state 1, state 2, … , state n, initially, be given by p1, p2, … , pn, respectively. • Such a distribution may be represented as an n-dimensional distribution vector X0 = [p1, p2, … , pn]^T, and the probability distribution of the system after m observations is given by Xm = T^m X0.
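A short NumPy sketch of this formula, reusing the urban-suburban data to reproduce the one-, two-, and three-year distributions computed above:

```python
import numpy as np
from numpy.linalg import matrix_power

T = np.array([[0.97, 0.06],
              [0.03, 0.94]])
X0 = np.array([0.65, 0.35])

# Distribution after m observations: X_m = T^m X_0
for m in (1, 2, 3):
    Xm = matrix_power(T, m) @ X0
    print(m, Xm.round(4))
# 1 [0.6515 0.3485]
# 2 [0.6529 0.3471]
# 3 [0.6541 0.3459]
```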

  26. 9.2 Regular Markov Chains

  27. Steady-State Distribution Vectors • In the last section, we derived a formula for computing the likelihood that a physical system will be in any one of the possible states associated with each stage of a Markov process describing the system. • In this section we use this formula to help us investigate the long-term trends of certain Markov processes.

  28. Applied Example: Educational Status of Women • A survey conducted by the National Commission on the Educational Status of Women reveals that 70% of the daughters of women who have completed 2 or more years of college have also completed 2 or more years of college, whereas 20% of the daughters of women who have had less than 2 years of college have completed 2 or more years of college. • If this trend continues, determine, in the long run, the percentage of women in the population who will have completed at least 2 years of college, given that currently only 20% of the women have completed at least 2 years of college. Applied Example 1, page 494

  29. Applied Example: Educational Status of Women Solution • This problem may be viewed as a Markov process with two possible states: State 1: “completed 2 or more years of college” State 2: “completed less than 2 years of college” • The transition matrix associated with this Markov chain is given by

T = [ .7  .2 ]
    [ .3  .8 ]

where the columns correspond to the mother’s state and the rows to the daughter’s state, and the initial distribution vector is given by X0 = [.2, .8]^T. Applied Example 1, page 494

  30. Applied Example: Educational Status of Women Solution • To study the long-term trend, let’s compute X1, X2, … • These vectors give the proportion of women with 2 or more years of college and that of women with less than 2 years of college after each generation: X1 = TX0 = [.30, .70]^T after one generation, X2 = TX1 = [.35, .65]^T after two generations, and X3 = TX2 = [.375, .625]^T after three generations. Applied Example 1, page 494

  31. Applied Example: Educational Status of Women Solution • Proceeding further, we obtain the following sequence of vectors: after nine generations, for example, X9 ≈ [.3996, .6004]^T. • From the result of these computations, we see that as m increases, the probability distribution vector Xm approaches the probability distribution vector X = [.4, .6]^T. • Such a vector is called the limiting, or steady-state, distribution vector for the system. Applied Example 1, page 494

  32. Applied Example: Educational Status of Women Solution • We interpret these results as follows: • Initially, 20% of all women have completed 2 or more years of college, whereas 80% have completed less than 2 years of college. • We see that generation after generation, the proportion of the first group increases while the proportion of the second group decreases. • In the long run, the proportions stabilize, so that 40% of all women will have completed 2 or more years of college, whereas 60% will have completed less than 2 years of college. Applied Example 1, page 494

  33. Regular Markov Chain • Continuing with the last example, if we calculate T, T^2, T^3, … we can see that the powers T^m of the transition matrix T tend toward a fixed matrix as m gets larger and larger. • The larger the value of m, the closer the resulting matrix approaches the matrix

[ .4  .4 ]
[ .6  .6 ]

• Such a matrix is called the steady-state matrix for the system.
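This convergence of the powers of T can be observed directly; a small NumPy sketch using the educational-status transition matrix from Example 1:

```python
import numpy as np
from numpy.linalg import matrix_power

# Educational-status transition matrix: columns are the mother's state,
# rows are the daughter's state (state 1 = completed 2 or more years of college).
T = np.array([[0.7, 0.2],
              [0.3, 0.8]])

for m in (1, 5, 10, 20):
    print(m)
    print(matrix_power(T, m).round(4))
# As m grows, T^m approaches the steady-state matrix whose columns are both [.4, .6].
```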

  34. Regular Markov Chain • Note that the steady-state matrix from our last example has columns that are all equal and all the entries are positive. • A Markov chain whose transition matrix T has this property is called a regular Markov chain.

  35. Regular Markov Chain • A stochastic matrix T is the transition matrix of a regular Markov chain if the sequence T, T^2, T^3, … approaches a steady-state matrix in which the columns of the limiting matrix are all equal and all the entries are positive. • A stochastic matrix T is regular if and only if some power of T has entries that are all positive.
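The second characterization suggests a simple numerical test: multiply out successive powers of T and stop as soon as one of them has all positive entries. A sketch of such a check (the function name is_regular and the cutoff max_power are choices made for this illustration, not part of the text):

```python
import numpy as np

def is_regular(T, max_power=50):
    """Return True if some power of T up to max_power has all positive entries.

    max_power is an arbitrary cutoff for this sketch; a regular stochastic
    matrix of modest size shows all-positive entries long before the cutoff.
    """
    T = np.asarray(T, dtype=float)
    P = T.copy()
    for _ in range(max_power):
        if np.all(P > 0):
            return True
        P = P @ T
    return False

print(is_regular([[0.7, 0.2], [0.3, 0.8]]))  # True: all entries are already positive
print(is_regular([[0.0, 1.0], [1.0, 0.0]]))  # False: the powers alternate between this matrix and I
```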

  36. Example • Determine whether the matrix is regular: Solution • Since all the entries of the matrix are positive, the given matrix is regular. Example 2, page 497

  37. Example • Determine whether the matrix is regular: Solution • One of the entries is equal to zero, so let’s compute the second power of the matrix: • Since the second power of the matrix has entries that are all positive, we conclude that the given matrix is in fact regular. Example 2, page 497

  38. Example • Determine whether the matrix is regular: Solution • Denote the given matrix by A. Then, since A^3 = A, it follows that A^4 = A^2, and so on. • Therefore, A and A^2 are the only two matrices that arise for any power of A. Example 2, page 497

  39. Example • Determine whether the matrix is regular: Solution • Denote the given matrix by A. • Some of the entries of A and A^2 are not positive, so any power of A will have entries that are not positive. • Thus, we conclude the matrix is not regular. Example 2, page 497

  40. Finding the Steady-State Distribution Vector • Let T be a regular stochastic matrix. • Then the steady-state distribution vector X may be found by solving the vector equation TX = X together with the condition that the sum of the elements of the vector X be equal to 1.

  41. Example • Find the steady-state distribution vector for the regular Markov chain whose transition matrix is

T = [ .7  .2 ]
    [ .3  .8 ]

Example 3, page 498

  42. Example Solution • Let X = [x, y]^T be the steady-state distribution vector, where the numbers x and y are to be determined. • The condition TX = X translates into a matrix equation or, equivalently, a system of two linear equations in x and y. Example 3, page 498

  43. Example Solution • But each equation that makes up the system is equivalent to the same single equation in x and y. • Next, the condition that the sum of the elements of X add up to 1 gives x + y = 1. • To find the values of x and y that meet both conditions, we solve the resulting system of two equations. Example 3, page 498

  44. Example Solution • The solution to the system is x = .4 and y = .6. • So, the required steady-state distribution vector is given by X = [.4, .6]^T, which agrees with the result obtained earlier. Example 3, page 498
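The same procedure is easy to carry out numerically: append the normalization condition to the system (T - I)X = 0 and solve. A sketch applied to the educational-status transition matrix from Example 1, whose steady-state vector was found above to be [.4, .6] (the least-squares call is just a convenient way to solve this small, consistent system):

```python
import numpy as np

# Educational-status transition matrix from Example 1.
T = np.array([[0.7, 0.2],
              [0.3, 0.8]])
n = T.shape[0]

# Solve (T - I) X = 0 together with the condition that the entries of X sum to 1:
# stack the normalization row [1, 1, ..., 1] under (T - I).
A = np.vstack([T - np.eye(n), np.ones((1, n))])
b = np.zeros(n + 1)
b[-1] = 1.0

X, *_ = np.linalg.lstsq(A, b, rcond=None)  # consistent system, so this yields the exact solution
print(X)  # [0.4 0.6]
```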

  45. 9.3 Absorbing Markov Chains

  46. Absorbing Markov Chains • In this section we investigate the long-term trends of a certain class of Markov chains that involve transition matrices that are not regular. • In particular, we study Markov chains in which the transition matrices, known as absorbing matrices, have special properties that we will describe.

  47. Absorbing Markov Chains • Consider the stochastic matrix associated with a Markov process:

T = [ 1   0   .2   0 ]
    [ 0   1   .3   1 ]
    [ 0   0   .5   0 ]
    [ 0   0   0    0 ]

where the columns correspond to the current states 1 through 4 and the rows to the next state. • We see that after one observation, the probability is 1 that an object previously in state 1 will remain in state 1.

  48. Absorbing Markov Chains • Consider the stochastic matrix associated with a Markov process: • Similarly, we see that an object previously in state 2 must remain in state 2.

  49. Absorbing Markov Chains • Consider the stochastic matrix associated with a Markov process: • Next, we find that an object previously in state 3 has a probability of • .2 of going to state 1. • .3 of going to state 2. • .5 of remaining in state 3. • 0 (no chance) of going to state 4.

  50. Absorbing Markov Chains • Consider the stochastic matrix associated with a Markov process: • Finally, we see that an object previously in state 4 must go to state 2.
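A short sketch that assembles the matrix described on these slides and examines a high power of it, which previews the long-term behavior studied in the rest of this section (the choice of the 50th power is arbitrary for the illustration):

```python
import numpy as np
from numpy.linalg import matrix_power

# Columns are the current states 1-4; rows are the next state.
# States 1 and 2 are absorbing: once entered, they are never left.
T = np.array([[1.0, 0.0, 0.2, 0.0],
              [0.0, 1.0, 0.3, 1.0],
              [0.0, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.0]])

print(matrix_power(T, 50).round(4))
# In the long run, all of the probability starting in states 3 and 4
# ends up in the absorbing states 1 and 2.
```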
