Dynamic Programming • Discrete time frame • Multi-stage decision problem • Solves backwards
Dynamic Programming • All multistage decision problems can be formulated in terms of dynamic programming • Not all multistage decision processes can be solved by DP • Not all DP problems are multistage decision problems (a dynamic problem may contain only one decision stage)
Multistage Decision Process ... characterized by the task of finding a sequence of decisions (or path) which maximizes (or minimizes) an appropriately defined objective function
Stage ... the discrete point in time at which a decision can be made. State ... Condition of the process at a particular stage ... Defined by the value of all state variables and other qualitative characteristics
State Variables (S_t) ... variables that describe the condition or state of the system at each stage • Usually the hardest part of a DP model to develop • Must describe the system completely enough to give “good” decision rules, yet remain small enough to keep the decision rule and computer program manageable
Decision Variables (X_t) ... variables which the decision maker controls at each stage – these variables determine the state of the system at the next stage (state transition) Planning Horizon (T) ... finite or infinite
Return Function ... gives the immediate return given the state, stage, and decision made Policy ... defines the sequence of decisions to be made for a given state. In DP a decision is specified for every possible state at each stage
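In symbols (a standard textbook formulation rather than anything verbatim from the slides): the decision X_t moves the system from state S_t to the next state through a transition function g_t, and each stage yields an immediate return r_t:

$$ S_{t+1} = g_t(S_t, X_t), \qquad \text{return at stage } t:\; r_t(S_t, X_t) $$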
Optimal Policy • The sequence of decisions (policy) that optimizes (maximizes or minimizes) the objective function • If decisions are separated by large time intervals, future returns may be discounted
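One common way to write the resulting objective over the planning horizon, with a discount factor $\beta$ ($0 < \beta \le 1$, our notation) applied when stages are separated by long time intervals:

$$ \max_{X_1,\dots,X_T} \; \sum_{t=1}^{T} \beta^{\,t-1}\, r_t(S_t, X_t) $$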
Bellman's Principle of Optimality • Fundamental concept forming the basis for DP formulation • An optimal policy has the property that, whatever the initial state and decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision
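In the notation above, the principle translates into the standard backward recursion: let $f_t(S_t)$ be the best value obtainable from stage $t$ onward; then

$$ f_t(S_t) = \max_{X_t} \Big\{ r_t(S_t, X_t) + f_{t+1}\big(g_t(S_t, X_t)\big) \Big\}, \qquad f_{T+1}(\cdot) = \text{terminal value}, $$

solved for $t = T, T-1, \dots, 1$.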
Markovian Requirement • An optimal policy starting in a given state depends only on the state of the process at that stage and not on the state at preceding stages. The path is of no consequence, only the present stage and state • The state variables fully describe the state of the system at each stage and capture all past decisions
Multi-Stage Decision Process [Diagram: state S1 enters Stage 1, where decision x1 yields return r1 and leads to state S2; Stage 2 takes decision x2 with return r2; and so on through state ST and decision xT at the final stage, which is followed by a terminal value]
Traveling Salesman [Network diagram: Start at node 1 in the West, End at node 10 in the East; intermediate nodes arranged in stages 2–3–4, 5–6–7, and 8–9, with arcs running west to east]
Minimize the cost for each run? Taking the cheapest arc at each step gives 1 – 2 – 6 – 9 – 10, total cost = 14. However, 1 – 4 – 6 – 9 – 10 has total cost = 12!
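The slides do not give the individual arc costs, so the sketch below uses hypothetical costs (chosen only to reproduce the totals above: 14 for the myopic route, 12 for the better one). It demonstrates the "minimize the cost for each run" rule as a greedy forward pass:

```python
# Hypothetical arc costs for the network in the figure (nodes 1..10).
# NOT from the slides: chosen so the greedy route 1-2-6-9-10 costs 14
# and the route 1-4-6-9-10 costs 12.
COST = {
    1: {2: 1, 3: 3, 4: 2},
    2: {5: 5, 6: 4, 7: 6},
    3: {5: 6, 6: 5, 7: 4},
    4: {5: 5, 6: 1, 7: 6},
    5: {8: 4, 9: 6},
    6: {8: 7, 9: 5},
    7: {8: 5, 9: 5},
    8: {10: 3},
    9: {10: 4},
}

def greedy_route(start=1, end=10):
    """Always take the cheapest outgoing arc ("minimize each run")."""
    path, total, node = [start], 0, start
    while node != end:
        node, step = min(COST[node].items(), key=lambda nc: nc[1])
        path.append(node)
        total += step
    return path, total

print(greedy_route())  # ([1, 2, 6, 9, 10], 14) -- myopic, not optimal
```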
Minimize the cost for each run? • Ignores a basic tenet of dynamic optimization • Basic Tenet – by taking into account the future consequences of present acts, one is led to make choices that, though possibly sacrificing some present rewards, will lead to a preferred sequence of events
Enumerate all possible routes? • 18 in this example (3 × 3 × 2 × 1 = 18 possible routes) • Will give an optimal solution • Becomes harder as the problem gets larger • Curse of dimensionality
Curse of Dimensionality • Computational Curse • Formulation Curse • Large Decision Rule Curse • Counterintuitive Decision Rule Curse • Acceptance Curse (by Analyst and Decision Maker)
Dynamic Programming • Reduces an n-period optimization problem to n one-period optimization problems • More efficient than total enumeration • Usually works backwards
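A minimal backward-recursion sketch for the network example above, using the same hypothetical arc costs (the names f and policy are ours): at each node s, the value-to-go is f[s] = min over outgoing arcs of cost(s, x) + f[x], evaluated from the end of the network back to the start.

```python
# Same hypothetical arc costs as in the greedy sketch above.
COST = {
    1: {2: 1, 3: 3, 4: 2},
    2: {5: 5, 6: 4, 7: 6},
    3: {5: 6, 6: 5, 7: 4},
    4: {5: 5, 6: 1, 7: 6},
    5: {8: 4, 9: 6},
    6: {8: 7, 9: 5},
    7: {8: 5, 9: 5},
    8: {10: 3},
    9: {10: 4},
}

def solve_backward(end=10):
    """Backward recursion: f[s] = min over x of cost(s, x) + f[x]."""
    f = {end: 0}   # value-to-go; the terminal value at the end node is 0
    policy = {}    # best next node from each state
    for s in sorted(COST, reverse=True):   # nodes 9, 8, ..., 1
        policy[s], f[s] = min(
            ((nxt, c + f[nxt]) for nxt, c in COST[s].items()),
            key=lambda nv: nv[1],
        )
    return f, policy

def best_path(start=1, end=10):
    f, policy = solve_backward(end)
    path, node = [start], start
    while node != end:
        node = policy[node]
        path.append(node)
    return path, f[start]

print(best_path())  # ([1, 4, 6, 9, 10], 12)
```

Each pass through the loop is a one-period optimization; performing one per stage, from the last stage back to the first, is exactly the reduction described above.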