
Markov Chains




  1. Presented by Dajiang Zhu 11/1/2011 Markov Chains Michael Mitzenmacher & Eli Upfal

  2. OUTLINE • Introduction to Markov chains • Definition • One example • Two problems as examples • 2-SAT Algorithm (briefly introducing random walks) • 3-SAT Algorithm

  3. Introduction • A stochastic process X = { X(t) : t ∈ T }, where X(t) is the state of the process at time t. • If, for all t, X(t) assumes values from a countably infinite set, then X is a discrete space process. • If X(t) assumes values from a finite set, then the process is finite. • If T is a countably infinite set, then X is a discrete time process.

  4. Introduction • Definition: A discrete time stochastic process X_0, X_1, X_2, … is a Markov chain if Pr(X_t = a_t | X_{t-1} = a_{t-1}, X_{t-2} = a_{t-2}, …, X_0 = a_0) = Pr(X_t = a_t | X_{t-1} = a_{t-1}) = P_{a_{t-1}, a_t}

  5. Introduction • Note: • The above definition expresses that the state X_t depends on the previous state X_{t-1}, but it is independent of the particular history of how the process arrived at state X_{t-1}. (Markov property or memoryless property) • The Markov property DOES NOT imply that X_t is independent of the random variables X_0, X_1, …, X_{t-2}; it just implies that any dependency of X_t on the past is captured in the value of X_{t-1}.

  6. Introduction P_{i,j} = Pr( X_t = j | X_{t-1} = i ) is the probability that the process moves from i to j in one step. The Markov property implies that the Markov chain is uniquely defined by the one-step transition matrix P = (P_{i,j}), whose entry in row i and column j is P_{i,j}.

  7. Introduction • Let p̄(t) = ( p_0(t), p_1(t), p_2(t), … ) be the vector giving the distribution of the state of the chain at time t. We have: p_i(t) = Σ_j p_j(t-1) P_{j,i}, or in matrix form, p̄(t) = p̄(t-1) P

  8. Introduction • For any n ≥ 0 we define the n-step transition probability P^n_{i,j} = Pr( X_{t+n} = j | X_t = i ) as the probability that the chain moves from state i to state j in exactly n steps. P^(n) = P ⋅ P^(n-1) = P^n. Thus, for any t ≥ 0 and n ≥ 1, p̄(t+n) = p̄(t) P^n

  9. Introduction-example A four-state chain (states 0, 1, 2, 3) with transitions 0→1 (prob 1/4), 0→3 (3/4), 1→0 (1/2), 1→2 (1/3), 1→3 (1/6), 2→2 (1), 3→1 (1/2), 3→2 (1/4), 3→3 (1/4), giving the transition matrix
  P = ( 0 1/4 0 3/4 ; 1/2 0 1/3 1/6 ; 0 0 1 0 ; 0 1/2 1/4 1/4 )
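As a quick sanity check of the update rule p̄(t) = p̄(t-1) P, the sketch below evolves a distribution under the four-state transition matrix as read off this example; exact fractions avoid floating-point drift:

```python
from fractions import Fraction as F

# One-step transition matrix of the four-state example chain.
P = [
    [F(0),    F(1, 4), F(0),    F(3, 4)],
    [F(1, 2), F(0),    F(1, 3), F(1, 6)],
    [F(0),    F(0),    F(1),    F(0)],
    [F(0),    F(1, 2), F(1, 4), F(1, 4)],
]

def step(p, P):
    """One update p(t) = p(t-1) * P (row vector times matrix)."""
    n = len(P)
    return [sum(p[j] * P[j][i] for j in range(n)) for i in range(n)]

# Start deterministically in state 0 and evolve for a few steps.
p = [F(1), F(0), F(0), F(0)]
for _ in range(3):
    p = step(p, P)

print(p)        # exact distribution after 3 steps
print(sum(p))   # a distribution always sums to 1
```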

  10. 2-SAT • (x1 ∨ x̄2) ∧ (x̄1 ∨ x̄3) ∧ (x1 ∨ x2) ∧ (x4 ∨ x̄3) ∧ (x4 ∨ x̄1) • How about assigning all of them the value 0? • (x1 ∨ x2) is not satisfied. Choose this clause and select x1 to be set to 1. • Now (x4 ∨ x̄1) is not satisfied. Continue…

  11. 2-SAT • 2-SAT Algorithm 1. Start with an arbitrary truth assignment. 2. Repeat up to 2mn² times, terminating if all clauses are satisfied: (a) Choose an arbitrary clause that is not satisfied. (b) Choose uniformly at random one of the literals in the clause and switch the value of its variable. 3. If a valid truth assignment has been found, return it. 4. Otherwise, return that the formula is unsatisfiable. (Here m is an integer parameter that controls the failure probability.)
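The algorithm above can be sketched in a few lines. The clause encoding (signed integers, DIMACS-style) and the small satisfiable test formula are illustrative choices:

```python
import random

def solve_2sat(n, clauses, m=50, rng=None):
    """Randomized 2-SAT via random walk: repeatedly pick an unsatisfied
    clause and flip one of its two variables, chosen uniformly at random.
    Literal encoding: integer k means x_k, -k means NOT x_k."""
    rng = rng or random.Random()
    assign = {v: rng.random() < 0.5 for v in range(1, n + 1)}

    def sat(lit):
        return assign[abs(lit)] == (lit > 0)

    for _ in range(2 * m * n * n):       # up to 2*m*n^2 flip steps
        unsat = [c for c in clauses if not (sat(c[0]) or sat(c[1]))]
        if not unsat:
            return assign
        lit = rng.choice(unsat[0])       # any unsatisfied clause works
        assign[abs(lit)] = not assign[abs(lit)]
    if not any(not (sat(c[0]) or sat(c[1])) for c in clauses):
        return assign
    return None                          # report "unsatisfiable"

# A small satisfiable example formula (an assumption for illustration).
formula = [(1, -2), (-1, -3), (1, 2), (4, -3), (4, -1)]
result = solve_2sat(4, formula, rng=random.Random(0))
print(result)
```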

  12. 2-SAT • Let us define the following: n: the number of variables in the formula, e.g. n = 4. A 2-SAT formula has O(n²) clauses, so each step can be executed in O(n²) time. S: a satisfying assignment for the n variables. A_i: the variable assignment after the i-th step of the algorithm. X_i: the number of variables in the current assignment A_i that have the same value as in the satisfying assignment S.

  13. 2-SAT • When X_i = n, the algorithm terminates with a satisfying assignment. • When X_i < n, how does X_i evolve over time? How long does it take before X_i reaches n?

  14. 2-SAT • If X_i = 0, then any change in variable value on the next step gives X_{i+1} = 1: Pr(X_{i+1} = 1 | X_i = 0) = 1 • If 1 ≤ j ≤ n-1, the chosen unsatisfied clause must disagree with S in at least one of its two literals, so a uniformly random flip matches S with probability at least 1/2: Pr(X_{i+1} = j+1 | X_i = j) ≥ 1/2 Pr(X_{i+1} = j-1 | X_i = j) ≤ 1/2

  15. 2-SAT • We build the following Markov chain Y_0, Y_1, Y_2, …: Y_0 = X_0; Pr(Y_{i+1} = 1 | Y_i = 0) = 1; Pr(Y_{i+1} = j+1 | Y_i = j) = 1/2 Pr(Y_{i+1} = j-1 | Y_i = j) = 1/2

  16. 2-SAT • Random walk A random walk on G is a Markov chain defined by the movement of a particle between vertices of G. In this process, the position of the particle at a given time step is the state of the system. If the particle is at vertex i, and i has d(i) outgoing edges, then the probability that the particle follows the edge (i, j) and moves to a neighbor j is 1/d(i). http://vlab.infotech.monash.edu.au/simulations/swarms/random-walk/
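The random-walk rule (move to each neighbor with probability 1/d(i)) can be sketched directly; the small undirected graph here is an assumed toy example, not one from the slides:

```python
import random

# A small undirected graph as an adjacency list (an assumed toy example).
graph = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}

def random_walk(graph, start, steps, rng):
    """From vertex i, move to a uniformly random neighbor, i.e. follow
    each of the d(i) outgoing edges with probability 1/d(i)."""
    v = start
    path = [v]
    for _ in range(steps):
        v = rng.choice(graph[v])
        path.append(v)
    return path

path = random_walk(graph, 0, 10, random.Random(1))
print(path)   # the visited vertices; consecutive entries are always adjacent
```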

  17. 2-SAT • A few observations • The Markov chain Y_0, Y_1, Y_2, … is a pessimistic version of the stochastic process X_0, X_1, X_2, … • Y_0, Y_1, Y_2, … models a random walk on an undirected graph G (the path over the integers 0, 1, …, n). • Let h_j be the expected number of steps to reach n when starting from j. For the 2-SAT process, h_j is an upper bound on the expected number of steps to fully match S when starting from a truth assignment that matches S in j locations.

  18. 2-SAT • Lemma Assume that a 2-SAT formula with n variables has a satisfying assignment, and that the 2-SAT algorithm is allowed to run until it finds a satisfying assignment. Then the algorithm finds a satisfying assignment in at most n² expected steps.

  19. 2-SAT Clearly h_n = 0 and h_0 = h_1 + 1. Let Z_j be a random variable representing the number of steps to reach n from state j. Then E[Z_j] = (1/2)E[1 + Z_{j-1}] + (1/2)E[1 + Z_{j+1}], i.e. h_j = (1/2)h_{j-1} + (1/2)h_{j+1} + 1

  20. 2-SAT • We have the following system of equations: h_n = 0 h_j = (1/2)h_{j-1} + (1/2)h_{j+1} + 1, 1 ≤ j ≤ n-1 h_0 = h_1 + 1 => h_j = h_{j+1} + 2j + 1 => h_0 = Σ_{j=0}^{n-1} (2j + 1) = n²
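The system above is easy to check numerically: solving h_j = h_{j+1} + 2j + 1 backward from h_n = 0 reproduces both the original averaged equations and h_0 = n²:

```python
def hitting_times(n):
    """Solve h_j = h_{j+1} + 2j + 1 with h_n = 0, the closed form of the
    2-SAT hitting-time system, walking down from j = n-1 to j = 0."""
    h = [0] * (n + 1)
    for j in range(n - 1, -1, -1):
        h[j] = h[j + 1] + 2 * j + 1
    return h

print(hitting_times(5))   # h_0 = 25 = 5^2
```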

  21. 2-SAT • Theorem The 2-SAT algorithm always returns a correct answer if the formula is unsatisfiable. If the formula is satisfiable, then with probability at least 1 - 2^{-m} the algorithm returns a satisfying assignment. Otherwise, it incorrectly returns that the formula is unsatisfiable.

  22. 2-SAT • Proof • If there is no satisfying assignment, then the algorithm correctly returns that the formula is unsatisfiable. • If the formula is satisfiable, divide the execution of the algorithm into m segments of 2n² steps each. • Let Z be the number of steps, from the start of segment i, until the algorithm finds a satisfying assignment. • By Markov's inequality, Pr( Z > 2n² ) ≤ E[Z]/(2n²) ≤ n²/(2n²) = 1/2 • So each segment fails to find a satisfying assignment with probability at most 1/2, and the probability that all m segments fail is at most 2^{-m}.

  23. 3-SAT • 3-SAT Algorithm 1. Start with an arbitrary truth assignment. 2. Repeat until all clauses are satisfied: (a) Choose an arbitrary clause that is not satisfied. (b) Choose one of the literals uniformly at random, and change the value of the variable in the current truth assignment.

  24. 3-SAT • Following the same reasoning as for the 2-SAT algorithm: Pr(X_{i+1} = j+1 | X_i = j) ≥ 1/3 Pr(X_{i+1} = j-1 | X_i = j) ≤ 2/3 Thus the pessimistic chain is: Pr(Y_{i+1} = 1 | Y_i = 0) = 1; Pr(Y_{i+1} = j+1 | Y_i = j) = 1/3 Pr(Y_{i+1} = j-1 | Y_i = j) = 2/3

  25. 3-SAT • We have the following system of equations: h_n = 0 h_j = (2/3)h_{j-1} + (1/3)h_{j+1} + 1, 1 ≤ j ≤ n-1 h_0 = h_1 + 1 => h_j = 2^{n+2} - 2^{j+2} - 3(n - j)
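The closed form can be verified against the system with exact arithmetic, confirming the expected number of steps is exponential in n:

```python
from fractions import Fraction as F

def h_closed(n, j):
    """Closed form from the slide: h_j = 2^(n+2) - 2^(j+2) - 3(n - j)."""
    return F(2 ** (n + 2) - 2 ** (j + 2) - 3 * (n - j))

n = 10
h = [h_closed(n, j) for j in range(n + 1)]
print(h[0])   # 2^12 - 4 - 30 = 4062, roughly 2^(n+2)
```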

  26. 3-SAT • Two key observations: • If we choose an initial truth assignment uniformly at random, the number of variables that match S has a binomial distribution with expectation n/2. • Once the algorithm starts, it is more likely to move toward 0 than toward n. The longer we run the process, the more likely it has moved toward 0. Therefore we are better off re-starting the process with many randomly chosen initial assignments and running the process each time for a small number of steps, rather than running the process for many steps on the same initial assignment.

  27. 3-SAT • Modified 3-SAT Algorithm • Repeat until all clauses are satisfied: (a) Start with a truth assignment chosen uniformly at random. (b) Repeat the following up to 3n times, terminating if a satisfying assignment is found: i) Choose an arbitrary clause that is not satisfied. ii) Choose one of the literals uniformly at random, and change the value of the variable in the current truth assignment.
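A minimal sketch of the modified algorithm (random restarts of up to 3n flips each). The cap on restarts and the small satisfiable test formula are illustrative assumptions:

```python
import random

def solve_3sat(n, clauses, max_restarts=200, rng=None):
    """Restarting random walk for 3-SAT, following the modified algorithm:
    pick a uniformly random assignment, then do up to 3n random flips.
    Literal encoding: integer k means x_k, -k means NOT x_k."""
    rng = rng or random.Random()

    def sat(assign, lit):
        return assign[abs(lit)] == (lit > 0)

    def unsatisfied(assign):
        return [c for c in clauses if not any(sat(assign, l) for l in c)]

    for _ in range(max_restarts):
        assign = {v: rng.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(3 * n):
            bad = unsatisfied(assign)
            if not bad:
                return assign
            lit = rng.choice(bad[0])      # random literal of one bad clause
            assign[abs(lit)] = not assign[abs(lit)]
        if not unsatisfied(assign):
            return assign
    return None

# A small satisfiable example formula (an assumption for illustration).
formula = [(1, 2, 3), (-1, -2, 4), (2, -3, 5), (-4, -5, 1), (3, 4, -5)]
result = solve_3sat(5, formula, rng=random.Random(0))
print(result)
```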

  28. 3-SAT • How many times does the process need to restart before it reaches a satisfying assignment?

  29. 3-SAT Let q represent the probability that the modified process reaches S in 3n steps starting with a truth assignment chosen uniformly at random. Let q_j represent the probability that our modified algorithm reaches S when it starts with a truth assignment that includes exactly j variables that do not agree with S.

  30. 3-SAT • Imagine: • A particle moving on the integer line, with probability 1/3 of moving up by one and probability 2/3 of moving down by one. • The probability of exactly k moves down and k+j moves up in a sequence of j+2k moves is: C(j+2k, k) (2/3)^k (1/3)^{k+j}
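This probability can be cross-checked by brute-force enumeration over all move sequences of the given length:

```python
from fractions import Fraction as F
from itertools import product
from math import comb

def walk_prob(j, k):
    """C(j+2k, k) * (2/3)^k * (1/3)^(k+j): probability of exactly k
    down-moves (prob 2/3 each) and k+j up-moves (prob 1/3 each)
    in a sequence of j+2k moves."""
    return comb(j + 2 * k, k) * F(2, 3) ** k * F(1, 3) ** (k + j)

def brute_force(j, k):
    """Enumerate all move sequences of length j+2k and sum the
    probability of those with exactly k down-moves."""
    total = F(0)
    for seq in product(("up", "down"), repeat=j + 2 * k):
        if seq.count("down") == k:
            p = F(1)
            for move in seq:
                p *= F(2, 3) if move == "down" else F(1, 3)
            total += p
    return total

print(walk_prob(2, 3), brute_force(2, 3))  # the two agree exactly
```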

  31. 3-SAT • This gives a lower bound on the probability that the algorithm reaches a satisfying assignment within j+2k ≤ 3n steps, starting with an assignment that has exactly j variables that did not agree with S: q_j ≥ C(j+2k, k) (2/3)^k (1/3)^{k+j} for each such k

  32. 3-SAT • In particular, taking k = j: q_j ≥ C(3j, j) (2/3)^j (1/3)^{2j} ≥ (c/√j) (27/4)^j (2/3)^j (1/3)^{2j} = (c/√j) (1/2)^j • for some constant c > 0, obtained by bounding C(3j, j) with • Stirling's formula: √(2πm) (m/e)^m ≤ m! ≤ 2√(2πm) (m/e)^m

  33. 3-SAT • q = Σ_{j=0}^{n} C(n, j) (1/2)^n q_j ≥ (c/√n) Σ_{j=0}^{n} C(n, j) (1/2)^n (1/2)^j = (c/√n) (1/2)^n (3/2)^n = (c/√n) (3/4)^n • Assuming a satisfying assignment exists, the number of random assignments the process tries before finding a satisfying assignment is a geometric random variable with parameter q. • The expected number of assignments tried is 1/q. • For each assignment the algorithm uses at most 3n steps, so the expected number of steps is at most 3n/q = O(n^{3/2} (4/3)^n)

  34. Questions and discussion • Questions? Thanks!
