
Non-linearities, catastrophic risk and thresholds in resource economics






Presentation Transcript


  1. Non-linearities, catastrophic risk and thresholds in resource economics Eric Nævdal eric.navdal@econ.uio.no

  2. Purpose of class • Teach some advanced methods in optimal control theory • Familiarize students with some applications of these methods to natural resource management. • Enable students to solve simple problems numerically. • A very applied course. No theorems; just methods!

  3. Prerequisites • A decent understanding of ordinary differential equations. • A decent understanding of deterministic optimal control theory. • E.g. max_c ∫₀ᵀ U(c,x)e^(–rt) dt subject to x(0) = x₀ and dx/dt = f(x,c). Here 0 < T ≤ ∞.

  4. Cook book solution in easy steps 1) Define the Hamiltonian: H(c,x,λ) = U(c,x) + λf(x,c). 2) Find optimality conditions: • c = argmax H → c = c(x,λ) • dλ/dt = rλ – ∂H/∂x = rλ – ∂U/∂x – λ∂f/∂x 3) Insert c(x,λ) into dx/dt and dλ/dt. You now have two differential equations.

  5. What to do with the differential equations • Compute the steady states x* and λ* from: f(c(x*,λ*),x*) = 0 and rλ* – U’x(c(x*,λ*),x*) – λ*f’x(c(x*,λ*),x*) = 0 • Draw a phase diagram: the dλ/dt = 0 and dx/dt = 0 isoclines intersect at (x*, λ*). • Paths converging to steady states are candidates for the optimal solution.

  6. Optimal Control cont’d • The co-state λ(t) has an interesting economic interpretation. • If somebody at time t gave you a present of 1 unit of x so that x(t) jumps to x(t) + 1, then λ(t) is (roughly) the value of that present at time t. • Entirely analogous to shadow prices in static theory. • If T < ∞, then we have the transversality condition λ(T) = 0 if x(T) is free.

  7. Transversality conditions for infinite horizon problems • If T = ∞, then we really have a hard time pinning down good transversality conditions. See Seierstad and Sydsæter (1987) for details. • The economic literature is full of flawed transversality conditions. • The reason is that it is hard to find general conditions without using machinery like lim sup. • For the purposes of this class we are satisfied if we can find at least one path where both the (current value) shadow price and the state variable converge to finite numbers.

  8. Alternative approach that will (once) be used later • Take the c(x,λ) function and differentiate it with respect to time. • Gives dc/dt = c’x dx/dt + c’λ dλ/dt = c’x dx/dt + c’λ(rλ – ∂U/∂x – λ∂f/∂x) • Then solve c(x,λ) with respect to λ. Gives λ(c,x). • Use dc/dt and dx/dt, after inserting λ(c,x), as alternative differential equations.

  9. Example • A firm has benefits from pollution given by –(u0 – u)². • The pollution accumulates in nature according to dx/dt = u – δx. • Damages from accumulated pollution are given by –x². • The problem: max_u ∫₀∞ (–x² – (u0 – u)²)e^(–rt) dt subject to: dx/dt = u – δx, x(0) given. Solution: Define the Hamiltonian: H = –x² – (u0 – u)² + λ(u – δx) • The value of u that maximizes the Hamiltonian is u = u0 + ½λ for u0 + ½λ > 0. Else u = 0. • dλ/dt = rλ + 2x + δλ • dx/dt = u0 + ½λ – δx
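The alternative formulation from slide 8 can be checked on this example: inverting u = u0 + ½λ gives λ = 2(u – u0), so the (x, λ) system above is equivalent to du/dt = (r+δ)(u – u0) + x, dx/dt = u – δx. A minimal Python sketch (parameter values and the starting shadow price are illustrative, not from the slides) integrates both systems and confirms they trace the same path:

```python
# Check that the (x, lam) system and the transformed (x, u) system agree.
# dlam/dt = (r+d)lam + 2x, dx/dt = u0 + lam/2 - d*x
# du/dt = (r+d)(u - u0) + x, dx/dt = u - d*x   (since lam = 2(u - u0))
u0, r, delta = 1.0, 0.05, 0.1   # illustrative parameter values
h, steps = 0.001, 2000          # simple Euler steps up to t = 2

x1, lam = 0.0, -1.7             # arbitrary illustrative starting shadow price
x2, u = 0.0, u0 + 0.5 * lam     # matching start for the (u, x) system

for _ in range(steps):
    x1, lam = (x1 + h * (u0 + 0.5 * lam - delta * x1),
               lam + h * ((r + delta) * lam + 2 * x1))
    x2, u = (x2 + h * (u - delta * x2),
             u + h * ((r + delta) * (u - u0) + x2))

# Both formulations should give the same state path and be linked by
# u = u0 + lam/2 throughout.
print(abs(x1 - x2) < 1e-9, abs(u - (u0 + 0.5 * lam)) < 1e-9)
```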

  10. Computing steady states gives • x* = u0(r+δ)/(1+δ(r+δ)) • λ* = –2u0/(1+δ(r+δ)) • From these expressions we see (for example) • dx*/dr > 0 • dx*/dδ < 0 whenever r + δ > 1 (the derivative has the sign of 1 – (r+δ)²)
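The steady-state formulas can be verified numerically by plugging them back into the two differential equations; a short sketch with illustrative parameter values:

```python
# Plug the closed-form steady state back into the canonical equations
# of the pollution example and check that both time derivatives vanish.
u0, r, delta = 1.0, 0.05, 0.1   # illustrative parameter values

x_star = u0 * (r + delta) / (1 + delta * (r + delta))
lam_star = -2 * u0 / (1 + delta * (r + delta))

dldt = r * lam_star + 2 * x_star + delta * lam_star   # dlam/dt at steady state
dxdt = u0 + 0.5 * lam_star - delta * x_star           # dx/dt at steady state

print(abs(dldt) < 1e-12, abs(dxdt) < 1e-12)
```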

  11. Phase diagram

  12. Phase diagram with paths

  13. Phase diagram with paths and optimal paths for infinite horizon

  14. The same solution seen as a function of time

  15. Phase diagrams with finite vs infinite time horizon

  16. Optimal Solution for various time horizons

  17. Crucial insight • If T is chosen sufficiently large, there will be some value t* such that the optimal solution for the infinite horizon problem and the optimal solution for the finite horizon problem are numerically indistinguishable over the interval [0, t*]. This allows us to solve infinite horizon problems on the computer.

  18. Getting a deeper understanding of Optimal Control • Some mathematical background: boundary value problems, a general class of differential equations. • Consider the problem dL/dt = λL, L(0) = L0. You should all know that the solution is: L0e^(λt). • But what about this problem?: dL/dt = λL, L(T) = LT

  19. Solving a boundary value problem dL/dt = λL implies that L(t) = Ce^(λt) for some constant C. This constant is found by using the boundary value condition: Ce^(λT) = LT gives C = LTe^(–λT). Therefore L(t) = LTe^(λ(t–T)). Important: we cannot independently specify both L(0) and L(T). There is only one constant!

  20. More boundary value problems dL/dt = λL, L(0) = 1. dN/dt = N – L, N(1) = 1. Solution: L(t) = Ce^(λt) and N(t) = (λ – 1)⁻¹(e^t – e^(λt))C + e^t K. We have two constants C and K, determined by: Ce^(λ·0) = 1 and (λ – 1)⁻¹(e – e^λ)C + eK = 1 • C = 1 and K = (e(λ – 1))⁻¹(e^λ + λ – 1 – e) • Solution is L(t) = e^(λt) and • N(t) = (λ – 1)⁻¹((e^t – e^(λt)) + e^(t–1)(e^λ + λ – 1 – e))
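A quick numerical sanity check of this two-equation boundary value problem: pin down C from L(0) = 1 and K from N(1) = 1, then verify that N satisfies dN/dt = N – L. The value of λ is an illustrative choice:

```python
import math

lam = 2.0  # illustrative value of the parameter lambda

# L(t) = C e^{lam t}; the condition L(0) = 1 pins down C.
C = 1.0
L = lambda t: C * math.exp(lam * t)

# General solution N(t) = C (e^t - e^{lam t})/(lam - 1) + K e^t;
# the condition N(1) = 1 pins down K.
K = (1.0 - C * (math.e - math.exp(lam)) / (lam - 1)) / math.e
N = lambda t: C * (math.exp(t) - math.exp(lam * t)) / (lam - 1) + K * math.exp(t)

# Boundary conditions hold:
print(abs(L(0) - 1) < 1e-12, abs(N(1) - 1) < 1e-9)

# dN/dt = N - L holds; check at t = 0.5 by central finite differences.
h = 1e-6
dN = (N(0.5 + h) - N(0.5 - h)) / (2 * h)
print(abs(dN - (N(0.5) - L(0.5))) < 1e-5)
```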

  21. Why boundary value problems? • The solution to an optimal control problem may be written as a boundary value problem. Best seen in finite time problems: • max ∫₀ᵀ U(c,x)e^(–rt) dt subject to x(0) = x₀ and dx/dt = f(x,c). Here 0 < T < ∞, x(T) free. • The maximum principle we know, but look at the transversality condition: λ(T) = 0.

  22. An even simpler example Define the Hamiltonian: H = –ax – ½(u0 – u)² + λ(u – δx) • The value of u that maximizes the Hamiltonian is u = u0 + λ for u0 + λ > 0. Else u = 0. • dλ/dt = rλ + a + δλ • dx/dt = u0 + λ – δx

  23. A philosophical digression • The difference between human ecology (AKA economics) and ecology. • An ecosystem and its inhabitants are unencumbered by precognition. Humans are not. • In order to understand an ecosystem we need differential equations and initial values. • In order to understand human behaviour we need transversality conditions. Humans operate by backwards induction.

  24. Numerical methods • Standard optimal control problems in the literature are handled as follows: • Find explicit solutions. Works for very few problems. • Phase diagrams. Only work for problems with one state variable. • Steady state analysis. May be hard to do for some problems. Sometimes steady states are uninteresting. • Alternative: numerical analysis.

  25. Numerical methods in finite time – shooting • The basic problem: the Maximum Principle gives us a set of differential equations. • An optimal solution must start with the known and correct initial value of x(0). It must also start with an unknown correct value of λ(0) such that λ(T) = 0. • Alternatively, if x(T) is given, λ(0) must start from a value such that that constraint holds. • The fundamental problem: solve a rather complicated equation to find λ(0).

  26. A solution in two steps • First write computer code that solves the differential equations for arbitrary values of x(0) and λ(0). (As if we were solving an initial value problem.) • Then write a routine that finds the value of λ(0) that sets λ(T) = 0 (or sets x(T) to the required value). • Luckily, there are ways of doing this without writing much code: • solver functions in Excel • the bvp4c routine in Matlab • Both methods work well, but may have to be tweaked.
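The two steps can be sketched in a few lines of code (here in Python rather than the Excel/Matlab tools used in class), applied to the pollution example of slide 9: integrate the canonical equations forward from a guessed λ(0), then bisect on λ(0) until λ(T) = 0. The parameter values, the horizon, and the bracket for λ(0) are illustrative, and the constraint u ≥ 0 is ignored for simplicity:

```python
# Shooting for the pollution example: dx/dt = u0 + lam/2 - d*x,
# dlam/dt = (r+d)lam + 2x, x(0) = x0, find lam(0) so that lam(T) = 0.
u0, r, delta = 1.0, 0.05, 0.1   # illustrative parameter values
x0, T, n = 0.0, 20.0, 20000
h = T / n

def lam_T(lam0):
    """Step 1: solve the ODEs as an initial value problem, return lam(T)."""
    x, lam = x0, lam0
    for _ in range(n):
        dx = u0 + 0.5 * lam - delta * x
        dlam = (r + delta) * lam + 2 * x
        x, lam = x + h * dx, lam + h * dlam
    return lam

# Step 2: lam(T) is increasing in lam(0) for this problem, so a simple
# bisection on an illustrative bracket finds the root of lam_T.
lo, hi = -10.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if lam_T(mid) < 0:
        lo = mid
    else:
        hi = mid
lam0 = 0.5 * (lo + hi)

print(lam0 < 0, abs(lam_T(lam0)) < 1e-4)
```

This is exactly what Goal Seek or the Solver does in the spreadsheet version: it adjusts the λ(0) cell until the λ(T) cell is zero.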

  27. Step 1: the 4th order Runge–Kutta method • General formulation. For an OC problem the vector is y = [x, λ]. • Let dy/dt = f(t, y) and let h be a small number. Then y(t) is usually well approximated by the following sequence: • y(t+h) = y(t) + (h/6)(k1 + 2k2 + 2k3 + k4), where k1 = f(t, y(t)), k2 = f(t + h/2, y(t) + hk1/2), k3 = f(t + h/2, y(t) + hk2/2), k4 = f(t + h, y(t) + hk3)

  28. Starting example • Let dy/dt = y(1 – y), y(0) = ½. The solution to this differential equation is: • y(t) = e^t/(1 + e^t) • Plotting the Runge–Kutta approximation against the exact solution: no difference whatsoever!
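The slides implement this in an Excel VBA module; the same check can be sketched in Python. The step size is an illustrative choice; with the fourth-order method the error is O(h⁴), which is why the two curves are visually indistinguishable:

```python
import math

def rk4_step(f, t, y, h):
    """One 4th-order Runge-Kutta step for dy/dt = f(t, y), as on slide 27."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# The test problem of slide 28: dy/dt = y(1 - y), y(0) = 1/2.
f = lambda t, y: y * (1 - y)
h, t, y = 0.01, 0.0, 0.5
for _ in range(500):          # integrate to t = 5
    y = rk4_step(f, t, y, h)
    t += h

exact = math.exp(5) / (1 + math.exp(5))   # y(t) = e^t / (1 + e^t)
print(abs(y - exact) < 1e-8)
```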

  29. Setting up the differential equations for a control problem • We will return to our previous example. The differential equations are: • dλ/dt = rλ + 2x + δλ • dx/dt = u0 + ½λ – δx • Start Excel. • Click on a sheet tab to view code. • Open a module (not a class module!) • Write the code.

  30. May look like this:

  31. Implement Runge–Kutta in a spreadsheet • Time to load the spreadsheet Small Optimal Control Example.

  32. Step 2 – Finding λ(0) • May in principle be done by programming some suitable search algorithm. • We are going to let Excel take care of it. • Two ways of doing this: • The Goal Seek function. Slow, robust, only handles problems with one state variable. • The Solver. Fast, handles a large number of state variables, but will stop if the algorithm encounters errors. • Let’s do it.

  33. Alternative • Use Matlab’s bvp4c function. • Not really better or more robust. • Good when a large number of problems must be solved. • Also, if the initial guess is far off, all solvers crash. bvp4c is good for generating a sequence of solutions that converges to the problem one actually wants to solve.

  34. Using numerical analysis: optimal vaccination against non-persistent epidemics • Very policy relevant • Economists have made very limited contributions • Shows the power of numerical analysis when our standard tool kit breaks down • Solutions programmed in Matlab

  35. Typical trajectory after outbreak – no vaccination • Kermack–McKendrick model • Susceptibles x, infected y and recovered/dead z: dx/dt = –βxy, x(0) = N – ε dy/dt = βxy – γy, y(0) = ε = initial infected population dz/dt = γy, z(0) = 0 The essential parameter is γ/β.
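A minimal simulation of these equations (the slides use Matlab; this is an equivalent Python sketch with illustrative values for β, γ, N and ε) shows the key property of the model: the epidemic burns out once the stock of susceptibles falls below γ/β, leaving some susceptibles never infected:

```python
# Kermack-McKendrick model without vaccination, simple Euler integration.
beta, gamma = 0.5, 0.25        # illustrative; threshold gamma/beta = 0.5
N, eps = 1.0, 1e-3             # illustrative population and initial infected
x, y, z = N - eps, eps, 0.0    # susceptible, infected, recovered/dead
h = 0.01

for _ in range(20000):         # integrate to t = 200
    dx = -beta * x * y
    dy = beta * x * y - gamma * y
    dz = gamma * y
    x, y, z = x + h * dx, y + h * dy, z + h * dz

# The epidemic dies out (y -> 0), population size is conserved, and the
# outbreak stops with x strictly between 0 and gamma/beta.
print(y < 1e-6, 0 < x < gamma / beta, abs(x + y + z - N) < 1e-9)
```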

  36. Trajectory without vaccination Note the effect of reducing susceptibles before the outbreak: – More susceptibles at the end of an epidemic episode. – Fewer infected.

  37. Model with vaccination • Individuals may be vaccinated at rate u: dx/dt = –βxy – u, x(0) = N – ε dy/dt = βxy – γy, y(0) = ε = initial infected dz/dt = γy + u, z(0) = 0 Objective function: ∫(–wy – ½cu²)e^(–rt) dt. w is the cost of disease; ½cu² is the cost of vaccination. Must be solved numerically; the standard tools of optimal control are useless here.

  38. Optimal vaccination - Low cost of disease (w)

  39. Optimal vaccination - High cost of disease (w)

  40. The value of reducing the stock of susceptibles • The shadow price of x multiplied by –1 is the value of vaccinating one ”population unit”. • The graph shows the shadow price on x, plotted against susceptibles at outbreak, for different values of w. • What does it mean that some of the curves are not monotonic?

  41. Explaining “increasing returns” • ”Brush fire” effect: at high levels of x, the disease spreads so rapidly that the return on vaccination prior to the outbreak is reduced. Rolling with the punch (relatively speaking) becomes the optimal strategy. • High disease costs reduce the brush fire effect.

  42. New section: Multiple equilibria • Many systems exhibit non-linear dynamics. This may or may not represent a technical challenge. • Here we look at convexo-concave differential equations. • Important to note: systems that naturally exhibit multiple equilibria may not do so when controlled optimally.

  43. Example – eutrophication • Let x be the nutrient (phosphorus and nitrogen) loading in a lake. • Let u be the deposition of nutrients. • Ecologists claim that the dynamics of the lake may be reasonably modeled by: [equation shown as a figure] • Analysis taken from W.A. Brock, D. Starrett, K-G. Mäler, A. Xepapadeas and A. de Zeeuw.
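The lake dynamics on this slide appear only as an image. In the shallow-lake literature cited here the equation is typically dx/dt = u – bx + x²/(1 + x²); treating that as an assumption, a short Python sketch (with illustrative b and u) locates the equilibria for constant loading and shows how the clean equilibrium disappears as loading rises:

```python
# Assumed shallow-lake dynamics (Brock, Maler, de Zeeuw et al. literature):
# dx/dt = u - b*x + x^2/(1 + x^2). Illustrative parameter values below.
def f(x, u, b):
    return u - b * x + x * x / (1 + x * x)

def equilibria(u, b, lo=0.0, hi=5.0, n=50000):
    """Steady states of dx/dt = f: scan for sign changes, then bisect."""
    roots, h = [], (hi - lo) / n
    for i in range(n):
        a, c = lo + i * h, lo + (i + 1) * h
        if f(a, u, b) * f(c, u, b) < 0:
            for _ in range(60):
                m = 0.5 * (a + c)
                if f(a, u, b) * f(m, u, b) < 0:
                    c = m
                else:
                    a = m
            roots.append(0.5 * (a + c))
    return roots

# Low loading: three equilibria (clean, unstable threshold, eutrophic).
low = equilibria(u=0.05, b=0.52)
# Higher loading: the clean equilibrium disappears -- the lake "flips".
high = equilibria(u=0.15, b=0.52)
print(len(low), len(high))
```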

  44. Dynamics with low loading (u)

  45. The effect of increased (constant) loading

  46. The flip is irreversible even if u is set to zero!

  47. Management • For economic analysis we need some evaluation of consequences. • Instantaneous benefits from nutrient use given by ½ln(u). • Instantaneous damages from eutrophication given by –cx².

  48. The optimal management problem

  49. Optimality Conditions

  50. Transforming into equations in (u,x) space
