
Stochastic Differentiation






  1. Stochastic Differentiation. Lecture 3. Leonidas Sakalauskas, Institute of Mathematics and Informatics, Vilnius, Lithuania. EURO Working Group on Continuous Optimization.

  2. Content • Concept of stochastic gradient • Analytical differentiation of expectation • Differentiation of the objective function of two-stage SLP • Finite difference approach • Simultaneous perturbation stochastic approximation • Likelihood ratio approach • Differentiation of integrals given by inclusion • Simulation of stochastic gradient • Projection of stochastic gradient

  3. Expected objective function. Stochastic programming deals with objective and/or constraint functions defined as the expectation of a random function: F(x) = E f(x, ξ) = ∫ f(x, y) p(y) dy, where ξ is an elementary event in the probability space (Ω, Σ, P), and P is the measure defined by the probability density function p(·).

  4. Concept of stochastic gradient. The methods of nonlinear stochastic programming are built using the concept of stochastic gradient. The stochastic gradient of the function F is a random vector g(x, ξ) such that E g(x, ξ) = ∇F(x).
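The definition can be checked numerically. The sketch below uses an assumed quadratic random function f(x, ξ) = (x − ξ)² with ξ ~ N(0, 1), chosen purely for illustration (it is not from the lecture); averaging the stochastic gradient over a Monte-Carlo sample recovers ∇F(x) = 2x.

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x, xi):
    # Pathwise gradient of f(x, xi) = (x - xi)^2 with respect to x.
    # It is an unbiased stochastic gradient: E g(x, xi) = 2x = dF/dx.
    return 2.0 * (x - xi)

x = 1.5
xi = rng.standard_normal(100_000)      # Monte-Carlo sample of xi
estimate = stoch_grad(x, xi).mean()
# E g(x, xi) = 2x = 3.0; the sample mean is close for large N.
```

The sample mean converges to the true gradient at the usual Monte-Carlo rate O(1/√N).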

  5. Methods of stochastic differentiation. Several estimators have been examined for the stochastic gradient: • analytical approach (AA); • finite difference approach (FD); • likelihood ratio approach (LR); • simultaneous perturbation stochastic approximation (SPSA).

  6. Stochastic gradient: an analytical approach

  7. Analytical approach (AA). Assume the density of the random variable does not depend on the decision variable x. Then the analytical stochastic gradient coincides with the gradient of the integrand: g(x, ξ) = ∇ₓ f(x, ξ), so that E ∇ₓ f(x, ξ) = ∇ₓ ∫ f(x, y) p(y) dy = ∇F(x).
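A minimal sketch of the analytical approach, assuming an illustrative random function f(x, ξ) = (ξᵀx)² with ξ ~ N(0, I) (my choice, not the lecture's): since the density is free of x, averaging ∇ₓf over the sample estimates ∇F(x) = 2x.

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.array([1.0, -0.5])
xi = rng.standard_normal((200_000, 2))  # xi ~ N(0, I); density free of x

# Analytical stochastic gradient: grad_x f(x, xi) = 2 (xi @ x) xi.
s = xi @ x
g = 2.0 * xi * s[:, None]
g_hat = g.mean(axis=0)
# F(x) = E[(xi @ x)^2] = ||x||^2, so grad F(x) = 2x = [2.0, -1.0].
```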

  8. Analytical approach (AA). Consider the two-stage SLP with recourse: minimise F(x) = cᵀx + E Q(x, ξ) over x, where the second-stage value is Q(x, ξ) = min { qᵀy : Wy = h − Tx, y ≥ 0 }; the vectors q, h and the matrices W, T can be random in general.

  9. Analytical approach (AA). The analytical stochastic gradient is then defined through the set of solutions of the dual problem: g(x, ξ) = c − Tᵀu*, where u* solves the second-stage dual max { uᵀ(h − Tx) : Wᵀu ≤ q }.
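The dual-based gradient can be sketched on a deliberately tiny second-stage problem with one recourse variable (a hypothetical instance with my own cost q and coefficient t, chosen so the dual solution is explicit in closed form):

```python
import numpy as np

rng = np.random.default_rng(2)

q, t = 2.0, 1.0    # hypothetical second-stage cost and technology coefficient

def dual_solution(x, h):
    # Second stage: Q(x, h) = min{ q*y : y >= h - t*x, y >= 0 } = q*max(h - t*x, 0).
    # The dual variable is u = q when the recourse constraint binds, else u = 0.
    return np.where(h - t * x > 0.0, q, 0.0)

x = 0.5
h = rng.standard_normal(100_000)             # random right-hand side h(xi)
g_hat = np.mean(-t * dual_solution(x, h))    # averaged stochastic gradient -T'u*
# d/dx E[Q(x, h)] = -t * q * P(h > t*x) = -2 * P(h > 0.5) ≈ -0.617 for h ~ N(0, 1).
```

For a full two-stage SLP the same recipe applies with an LP solver returning the dual vector u* of the second-stage problem.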

  10. Finite difference (FD) approach. Let us approximate the gradient of the random function by finite differences. Each ith component of the stochastic gradient is then computed as gᵢ(x, ξ) = (f(x + δeᵢ, ξ) − f(x, ξ)) / δ, where eᵢ is the vector with zero components except the ith one, which equals 1, and δ > 0 is some small value.
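A sketch of the FD estimator (the quadratic test function and the fixed realisation of ξ below are illustrative assumptions):

```python
import numpy as np

def fd_stoch_grad(f, x, xi, delta=1e-4):
    # Forward differences: one extra evaluation of f per coordinate of x.
    base = f(x, xi)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta                        # delta times the ith unit vector
        g[i] = (f(x + e, xi) - base) / delta
    return g

f = lambda x, xi: (x @ xi) ** 2             # illustrative random function
x = np.array([1.0, 2.0])
xi = np.array([0.5, -1.0])                  # one fixed realisation of xi
g = fd_stoch_grad(f, x, xi)
# The exact pathwise gradient is 2*(x @ xi)*xi = [-1.5, 3.0].
```

Note the cost: n + 1 evaluations of f per realisation of ξ, where n is the dimension of x.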

  11. Simultaneous perturbation stochastic approximation (SPSA). Each component of the stochastic gradient is computed as gᵢ(x, ξ) = (f(x + δΔ, ξ) − f(x − δΔ, ξ)) / (2δΔᵢ), where Δ is a random vector whose components take the values 1 or −1, each with probability p = 0.5, and δ > 0 is some small value (Spall 1992).
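A sketch of SPSA on the same illustrative quadratic as above (function and realisation of ξ are assumptions for demonstration). The key point is that only two evaluations of f are needed per estimate, whatever the dimension of x:

```python
import numpy as np

rng = np.random.default_rng(3)

def spsa_grad(f, x, xi, delta=1e-3):
    # Two evaluations of f per estimate, regardless of the dimension of x.
    d = rng.choice([-1.0, 1.0], size=x.size)        # Bernoulli +-1 perturbation
    diff = f(x + delta * d, xi) - f(x - delta * d, xi)
    return diff / (2.0 * delta * d)                 # divide elementwise by delta*d_i

f = lambda x, xi: (x @ xi) ** 2
x = np.array([1.0, 2.0])
xi = np.array([0.5, -1.0])
g_hat = np.mean([spsa_grad(f, x, xi) for _ in range(2000)], axis=0)
# Averaged over perturbations, the estimate approaches the pathwise gradient
# 2*(x @ xi)*xi = [-1.5, 3.0].
```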

  12. Likelihood ratio (LR) approach. When the density p(y; x) itself depends on the decision variable, differentiation can be shifted onto the density: ∇F(x) = ∫ (∇ₓ f(x, y) p(y; x) + f(x, y) ∇ₓ p(y; x)) dy = E [∇ₓ f(x, ξ) + f(x, ξ) ∇ₓ ln p(ξ; x)].
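A minimal LR (score-function) sketch, assuming for illustration that the decision variable is the mean μ of a normal density and f(ξ) = ξ² does not depend on μ directly:

```python
import numpy as np

rng = np.random.default_rng(4)

mu = 1.0                                  # decision variable enters the density
xi = rng.normal(mu, 1.0, size=500_000)    # xi ~ N(mu, 1)

# Score function of N(mu, 1): d/dmu ln p(xi; mu) = xi - mu.
g_hat = np.mean(xi**2 * (xi - mu))        # LR estimator of d/dmu E[xi^2]
# E[xi^2] = mu^2 + 1, so the true derivative is 2*mu = 2.0.
```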

  13. Stochastic differentiation of integrals given by inclusion. Let us consider an integral over a set given by an inclusion: F(x) = ∫_{D(x)} f(x, y) p(y) dy, where the integration set D(x) depends on the decision variable x.

  14. Stochastic differentiation of integrals given by inclusion. The gradient of this function is again an integral, whose integrand is defined through the derivatives of p and f (see Uryasev (1994), (2002)).

  15. Simulation of stochastic gradient. We assume here that a Monte-Carlo sample Y = (y^1, y^2, …, y^N) of a certain size N is provided for any x, where y^1, …, y^N are independent random copies of ξ, i.e., distributed according to the density p(·).

  16. Sampling estimators of the objective function. The sampling estimator of the objective function, F̂(x) = (1/N) Σⱼ f(x, y^j), and the sampling variance, D²(x) = (1/(N − 1)) Σⱼ (f(x, y^j) − F̂(x))², are computed from the same sample.
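These two estimators can be sketched directly (the quadratic f below is an illustrative assumption, as before):

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x, xi):
    return (x - xi) ** 2                 # illustrative random function

x, N = 1.5, 10_000
xi = rng.standard_normal(N)              # Monte-Carlo sample y^1, ..., y^N
vals = f(x, xi)

F_hat = vals.mean()                      # sampling estimator of F(x)
var_hat = vals.var(ddof=1)               # sampling variance
se = np.sqrt(var_hat / N)                # standard error of F_hat
# For xi ~ N(0, 1), F(x) = x^2 + 1 = 3.25.
```

The standard error se is what sample-size rules in stochastic optimisation monitor when deciding whether N is large enough.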

  17. Sampling estimator of the gradient. The gradient is evaluated using the same random sample: Ĝ(x) = (1/N) Σⱼ g(x, y^j).

  18. Sampling estimator of the gradient. The sampling covariance matrix A(x) = (1/N) Σⱼ (g(x, y^j) − Ĝ(x))(g(x, y^j) − Ĝ(x))ᵀ is applied later on for normalising the gradient estimator. Say, the Hotelling statistic T² = N Ĝ(x)ᵀ A(x)⁻¹ Ĝ(x) can be used for testing whether the gradient vanishes.
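A sketch of the Hotelling test on simulated gradient samples (the synthetic data below is an assumption for illustration; the statistic stays small when the mean gradient is zero and grows when it is not):

```python
import numpy as np

rng = np.random.default_rng(6)

def hotelling_T2(G):
    # G is an N x n matrix whose rows are sampled stochastic gradients g(x, y^j).
    N, n = G.shape
    g_bar = G.mean(axis=0)              # gradient estimator
    S = np.cov(G, rowvar=False)         # sampling covariance matrix
    T2 = N * g_bar @ np.linalg.solve(S, g_bar)
    # Under H0: grad F(x) = 0, (N - n) / (n * (N - 1)) * T2 follows F(n, N - n).
    return T2

G_null = rng.standard_normal((500, 3))  # gradients scattered around zero
G_shifted = G_null + 1.0                # gradients with a clearly nonzero mean
```

In optimisation this test serves as a stopping criterion: a small T² means the gradient estimate is statistically indistinguishable from zero.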

  19. Computer simulation • Two-stage stochastic linear optimisation problem. • Dimensions of the task are as follows: • the first stage has 10 rows and 20 variables; • the second stage has 20 rows and 30 variables. • Test data: http://www.math.bme.hu/~deak/twostage/l1/20x20.1/ (2006-01-20).

  20. Wrap-up and conclusions • The methods of nonlinear stochastic programming are built using the concept of stochastic gradient. • Several methods exist to obtain the stochastic gradient; the objective function and the stochastic gradient can be evaluated using the same random sample.
