
Lecture 26


Presentation Transcript


  1. Lecture 26 Game Theoretic Techniques

  2. Randomized gaming strategy • In the randomized technique, choosing a strategy amounts to choosing a probability distribution vector from the set of all probability distributions. For a payoff matrix M, R chooses a distribution p over the rows and C chooses a distribution q over the columns, and the expected payoff is p^T M q. The optimal strategies for R and C are p* = arg max_p min_q p^T M q and q* = arg min_q max_p p^T M q.

  3. Von Neumann’s Minimax Theorem • For any two-person zero-sum game specified by M, max_p min_q p^T M q = min_q max_p p^T M q, i.e. all games have solutions when randomized strategies are used. • Corollary: The largest expected value that R can guarantee using a mixed strategy is equal to the smallest expected value that C can guarantee using a mixed strategy.
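A small numerical sketch of the theorem on a hypothetical 2x2 zero-sum game (matching pennies), checking max_p min_q against min_q max_p by brute-force grid search over mixed strategies. The matrix and grid resolution are illustrative choices, not from the lecture.

```python
# Numerical check of von Neumann's minimax theorem on matching pennies.
# R maximizes and C minimizes the expected payoff p^T M q.
M = [[1.0, -1.0],
     [-1.0, 1.0]]

def expected_payoff(p, q, M):
    """p^T M q for a 2x2 game with p = (p, 1-p) and q = (q, 1-q)."""
    pv, qv = (p, 1 - p), (q, 1 - q)
    return sum(pv[i] * M[i][j] * qv[j] for i in range(2) for j in range(2))

grid = [k / 100 for k in range(101)]

# max_p min_q p^T M q : the value R can guarantee
lower = max(min(expected_payoff(p, q, M) for q in grid) for p in grid)
# min_q max_p p^T M q : the value C can guarantee
upper = min(max(expected_payoff(p, q, M) for p in grid) for q in grid)

print(round(lower, 6), round(upper, 6))  # both are (approximately) 0, the value of the game
```

Both guaranteed values coincide at p = q = (1/2, 1/2), as the theorem promises.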

  4. Loomis’ Theorem • If p is fixed, then p^T M q reduces to a linear expression in q of the form sum_j q_j (p^T M)_j. • The opponent (column player) can minimize the payoff by setting to 1 the q_j whose coefficient (p^T M)_j is smallest. (Exercise: prove it!) • So, if C knows the probability distribution of R, C’s mixed strategy reduces to a pure strategy. • The same can be shown in the opposite direction as well.
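The reduction to a pure strategy can be sketched numerically: with R's distribution p fixed, p^T M q is a convex combination of the coefficients (p^T M)_j, so no mixed q can beat the single best column. The matrix and p below are arbitrary illustrative values.

```python
import random

# With p fixed, p^T M q is linear in q; C's best response is pure.
random.seed(1)
M = [[random.uniform(0, 10) for _ in range(4)] for _ in range(3)]
p = [0.2, 0.5, 0.3]  # R's fixed mixed strategy

# Coefficients of the linear form: (p^T M)_j for each column j
coeff = [sum(p[i] * M[i][j] for i in range(3)) for j in range(4)]
best_pure = min(coeff)  # payoff of C's best pure strategy

# Any mixed q yields a convex combination of the coefficients,
# which can never fall below the smallest coefficient.
for _ in range(1000):
    w = [random.random() for _ in range(4)]
    s = sum(w)
    q = [x / s for x in w]
    mixed = sum(coeff[j] * q[j] for j in range(4))
    assert mixed >= best_pure - 1e-12

print(round(best_pure, 4))
```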

  5. Loomis’ Theorem • For any two-person zero-sum game specified by a matrix M, max_p min_j p^T M e_j = min_q max_i e_i^T M q, where e_i and e_j denote pure strategies for the row and column players.

  6. Applying it to Algorithm Design: Yao’s Technique • Let C be an algorithm designer and R an adversary responsible for designing inputs that will thwart C’s algorithms. In the payoff matrix: • Columns (A) = all possible correct, deterministic, terminating algorithms for inputs of a fixed size. • Rows (I) = all possible inputs of that fixed size.

  7. Yao’s technique • As a randomized algorithm is a probability distribution over the deterministic algorithms, we can regard it as a probability distribution q on A; call the resulting algorithm A_q. • Let E[C(i, A_q)] be the expected cost of A_q on input i. • The intrinsic cost of A_q is its running time on the worst input, i.e. max_i E[C(i, A_q)].

  8. Yao’s technique • A pure strategy for C corresponds to the choice of a deterministic algorithm. • A pure strategy for R corresponds to a specific input. • Notice that an optimal pure strategy for C corresponds to an optimal deterministic algorithm.

  9. Deterministic complexity • Minimizing the worst-case cost over algorithms gives the optimal deterministic algorithm. This is min_{a in A} max_{i in I} C(i, a).
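A minimal sketch of this min-max on a hypothetical cost matrix, with rows as inputs and columns as deterministic algorithms (the numbers are invented for illustration).

```python
# Toy cost matrix cost[i][a]: rows = inputs, columns = deterministic
# algorithms. Deterministic complexity = min over algorithms of the
# worst-case (max over inputs) cost.
cost = [
    [3, 7, 5],   # input 0
    [9, 4, 6],   # input 1
    [2, 8, 6],   # input 2
]

worst_case = [max(cost[i][a] for i in range(3)) for a in range(3)]
det_complexity = min(worst_case)
print(worst_case, det_complexity)  # [9, 8, 6] 6: algorithm 2 is the optimal deterministic one
```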

  10. Mixed strategy • We are interested in the mixed strategy. • A mixed strategy for C is a probability distribution over the space of (always correct) deterministic algorithms, so it is a Las Vegas randomized algorithm. • An optimal mixed strategy for C is an optimal Las Vegas algorithm. • A mixed strategy for R is a distribution over the space of all inputs.

  11. Distributional Complexity • For a distribution p on the inputs, the expected running time of the j-th algorithm is E[C(I_p, a_j)] = sum_i p_i C(i, a_j). • The expected running time of the best deterministic algorithm is min_{a in A} E[C(I_p, a)]. • So the worst distribution gives expected running time max_p min_{a in A} E[C(I_p, a)]. • Note that this is the left-hand side of Loomis’ theorem.
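The left-hand side can be sketched on the same style of toy cost matrix: for each input distribution p on a coarse grid, take the best deterministic expected cost, then maximize over p. Matrix and grid are illustrative assumptions.

```python
# LHS of Loomis' theorem: max over input distributions p of the best
# deterministic expected cost min_a E[cost(I_p, a)], on a toy matrix.
cost = [
    [3, 7, 5],
    [9, 4, 6],
    [2, 8, 6],
]

def best_det(p):
    """min over algorithms a of E_{i~p}[cost(i, a)]."""
    return min(sum(p[i] * cost[i][a] for i in range(3)) for a in range(3))

grid = [k * 0.05 for k in range(21)]  # coarse grid over the simplex
dist_complexity = 0.0
for p0 in grid:
    for p1 in grid:
        if p0 + p1 <= 1 + 1e-9:
            p = (p0, p1, 1 - p0 - p1)
            dist_complexity = max(dist_complexity, best_det(p))

print(round(dist_complexity, 3))
```

By weak duality, this value can never exceed the deterministic complexity (6 for this matrix).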

  12. R.H.S. of Loomis’ theorem • The expected running time of a randomized algorithm A_q on input i is E[C(i, A_q)] = sum_j q_j C(i, a_j). • Thus the worst-case expected running time of A_q is max_{i in I} E[C(i, A_q)]. • The least worst-case expected running time achievable by any randomized algorithm is min_q max_{i in I} E[C(i, A_q)]. • This is the right-hand side of Loomis’ theorem.
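The right-hand side can be approximated on the same toy matrix by sampling many distributions q over the algorithms and keeping the smallest worst-case expected cost; the sampling scheme is just an illustration.

```python
import random

# RHS of Loomis' theorem: min over randomized algorithms q of the
# worst-case (max over inputs) expected cost, on a toy matrix.
random.seed(7)
cost = [
    [3, 7, 5],
    [9, 4, 6],
    [2, 8, 6],
]

def worst_expected(q):
    """max over inputs i of E_{a~q}[cost(i, a)]."""
    return max(sum(q[a] * cost[i][a] for a in range(3)) for i in range(3))

best = worst_expected((0, 0, 1))  # start from the best pure strategy
for _ in range(20000):
    w = [random.random() for _ in range(3)]
    s = sum(w)
    best = min(best, worst_expected([x / s for x in w]))

# Randomization can only help: the result is at most the deterministic
# complexity (6 here), and Loomis' theorem says its exact optimum equals
# the distributional complexity.
print(round(best, 3))
```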

  13. Loomis’ Theorem • Loomis’ theorem implies that the distributional complexity equals the least possible worst-case expected running time achievable by any randomized algorithm.

  14. So we have … max_p min_{a in A} E[C(I_p, a)] = min_q max_{i in I} E[C(i, A_q)].

  15. Yao’s Minimax Principle • For any distribution p on the inputs and any distribution q on the algorithms, min_{a in A} E[C(I_p, a)] <= max_{i in I} E[C(i, A_q)].

  16. Observation • Observation: The expected running time of the optimal deterministic algorithm for an arbitrarily chosen input distribution p is a lower bound on the expected running time of the optimal randomized algorithm for the problem. • So, one can choose any distribution p on the inputs and prove a lower bound on the expected running time of all deterministic algorithms for that distribution.
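A quick sanity check of the observation on a random toy cost matrix: for every sampled pair (p, q), the best deterministic expected cost under p never exceeds the randomized algorithm's worst-case expected cost. The matrix dimensions and sampling are assumptions for illustration.

```python
import random

# For any p and q: min_a E_p[cost] <= E_{p,q}[cost] <= max_i E_q[cost],
# which is the usable direction of Yao's minimax principle.
random.seed(42)
cost = [[random.randint(1, 20) for _ in range(4)] for _ in range(5)]

def rand_dist(n):
    """A random probability distribution on n points."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

for _ in range(500):
    p = rand_dist(5)   # distribution over inputs
    q = rand_dist(4)   # distribution over algorithms
    lhs = min(sum(p[i] * cost[i][a] for i in range(5)) for a in range(4))
    rhs = max(sum(q[a] * cost[i][a] for a in range(4)) for i in range(5))
    assert lhs <= rhs + 1e-9

print("lower bound holds for all sampled (p, q)")
```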

  17. Back to AND-OR Trees • The tree T_{2,k} is equivalent to a balanced binary tree all of whose leaves are at distance 2k from the root, and all of whose internal nodes compute the NOR function.

  18. Proof by Induction • Base case: • AND-OR tree: (a ∨ b) ∧ (c ∨ d). • NOR tree: Z = ((a ∨ b)’ ∨ (c ∨ d)’)’ = ((a ∨ b)’)’ ∧ ((c ∨ d)’)’ = (a ∨ b) ∧ (c ∨ d). The rest is an exercise…
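The base case can be verified exhaustively by truth table, since there are only 16 input combinations:

```python
from itertools import product

def nor(x, y):
    """Two-input NOR gate."""
    return not (x or y)

# Check that the two-level NOR tree NOR(NOR(a,b), NOR(c,d)) computes
# exactly the AND-OR tree (a OR b) AND (c OR d) on all 16 inputs.
for a, b, c, d in product([False, True], repeat=4):
    nor_tree = nor(nor(a, b), nor(c, d))
    and_or_tree = (a or b) and (c or d)
    assert nor_tree == and_or_tree

print("NOR tree matches AND-OR tree on all 16 inputs")
```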

  19. Applying the strategy • We will choose a convenient distribution on the inputs and prove a lower bound on the expected running time of all deterministic algorithms for that distribution. The input in this case corresponds to the values of the leaves of the tree we are given to evaluate.

  20. Contd. • Let each leaf be set to 1 independently with probability p = (3 − √5)/2, the root of p = (1 − p)^2. Then the probability that an input to a NOR node is 0 is 1 − p. • Thus the probability that the output of a NOR node is 1 equals the probability that both its inputs are 0, which is (1 − p)^2 = p. So a NOR node with two leaves returns 1 with probability p. • Likewise, two sibling NOR nodes each returning 1 with probability p will cause their parent NOR node to return 1 with probability (1 − p)^2 = p. • Ultimately every NOR node returns 1 with probability p.
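This fixed-point property can be checked directly: p = (3 − √5)/2 ≈ 0.382 satisfies (1 − p)² = p, and a simulation of a two-level NOR gadget (trial count is an arbitrary choice) shows the root is 1 with roughly that probability.

```python
import math
import random

# The fixed-point probability: if each leaf is 1 with probability p,
# every NOR node in the tree also outputs 1 with probability p.
p = (3 - math.sqrt(5)) / 2
assert abs((1 - p) ** 2 - p) < 1e-12  # the defining equation

def nor(x, y):
    return 0 if (x or y) else 1

random.seed(0)
trials = 200_000
ones = 0
for _ in range(trials):
    leaves = [1 if random.random() < p else 0 for _ in range(4)]
    ones += nor(nor(leaves[0], leaves[1]), nor(leaves[2], leaves[3]))

print(round(p, 4), round(ones / trials, 4))  # both close to 0.382
```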

  21. Depth-first pruning • Consider any deterministic algorithm trying to evaluate a NOR tree. • In order to evaluate the fewest leaves possible, the algorithm should determine the value of one entire subtree of a node before touching the other. • If the value of the first subtree is 1, then the other subtree can be pruned immediately, since the NOR output is 0 regardless. • This is known as depth-first pruning.
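A minimal depth-first pruning evaluator, as a sketch rather than the textbook's code: the tree is represented by its list of leaf values, and the evaluator skips the second subtree whenever the first one returns 1.

```python
import random

def eval_nor(leaves, lo, hi):
    """Evaluate the NOR tree over leaves[lo:hi] depth-first.
    Returns (value, number_of_leaves_read)."""
    if hi - lo == 1:
        return leaves[lo], 1
    mid = (lo + hi) // 2
    left, read = eval_nor(leaves, lo, mid)
    if left == 1:                       # prune: NOR(1, x) = 0 for any x
        return 0, read
    right, read2 = eval_nor(leaves, mid, hi)
    return 1 - right, read + read2      # left is 0, so output = NOT right

random.seed(3)
leaves = [random.randint(0, 1) for _ in range(8)]
value, read = eval_nor(leaves, 0, len(leaves))
print(leaves, value, read)  # read <= 8: pruning may skip whole subtrees
```

For example, on an all-ones input the evaluator reads only 2 of the 8 leaves, since every first subtree returns 1 and triggers a prune.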

  22. Proposition • Let T be a NOR tree each of whose leaves is independently set to 1 with probability p, for a fixed value p in [0, 1]. Let WT denote the minimum, over all deterministic algorithms, of the expected number of steps to evaluate T. Then there is a depth-first pruning algorithm that evaluates T in the same number of expected steps, which is WT. • (Proof is out of scope)

  23. Let W(h) be the expected number of steps to evaluate a tree of height h, each of whose leaves is set to 1 with probability p. A depth-first pruning algorithm must evaluate the second subtree only when the first returns 0, which happens with probability 1 − p, so W(h) = W(h − 1) + (1 − p)W(h − 1) = (2 − p)W(h − 1), giving W(h) = (2 − p)^h with W(0) = 1.
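The recurrence above can be unrolled numerically: with p = (3 − √5)/2 we get 2 − p = (1 + √5)/2 (the golden ratio), and a tree with n = 2^h leaves costs (2 − p)^h = n^{log2(2 − p)} ≈ n^0.694 expected steps.

```python
import math

# Unroll W(h) = (2 - p) * W(h - 1), W(0) = 1, and check it equals
# n^{log2(2 - p)} for n = 2^h leaves.
p = (3 - math.sqrt(5)) / 2
W = [1.0]
for h in range(1, 11):
    W.append((2 - p) * W[-1])

exponent = math.log2(2 - p)         # log2 of the golden ratio
print(round(exponent, 3))           # approximately 0.694
for h in range(11):
    n = 2 ** h
    assert abs(W[h] - n ** exponent) < 1e-6 * max(1.0, W[h])
```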

  24. Theorem • The expected running time of any randomized algorithm that always evaluates an instance of T_{2,k} correctly is at least n^0.694, where n = 2^{2k} is the number of leaves.

  25. We have not proved what we wanted! • Our lower bound of n^0.694 is less than the expected running time of our randomized algorithm, which was at most n^0.793. • This means we have not accomplished our second goal. The reason, however, is that the probability distribution we chose for our analysis was merely convenient, not the worst one. • Since no reasonable algorithm would evaluate both children of a NOR node if both of them return 1, we need to construct a distribution in which the values assigned to the leaves are not independent of each other — thereby precluding the possibility of both inputs to a NOR node being 1. • Only then can we prove the best lower bound. Such an analysis exists and shows that our algorithm is optimal.
