
Semi-Stochastic Gradient Descent Methods





Presentation Transcript


  1. Semi-Stochastic Gradient Descent Methods Jakub Konečný (joint work with Peter Richtárik), University of Edinburgh

  2. Introduction

  3. Large scale problem setting • Problems are often structured • Frequently arising in machine learning • Structure – a sum of functions: minimize f(x) = (1/n) Σ_{i=1}^{n} f_i(x), where the number of functions n is BIG

  4. Examples • Linear regression (least squares): f_i(x) = (a_i^T x − b_i)^2 • Logistic regression (classification): f_i(x) = log(1 + exp(−b_i a_i^T x))
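As a concrete illustration of this finite-sum structure, here is a minimal Python sketch of the two component losses. The names (A for the data matrix, b for the targets/labels, least_squares_fi, logistic_fi, objective) are assumptions introduced for this example, not part of the original slides.

```python
import numpy as np

def least_squares_fi(x, a_i, b_i):
    """One least-squares term: f_i(x) = (a_i^T x - b_i)^2."""
    return (a_i @ x - b_i) ** 2

def logistic_fi(x, a_i, b_i):
    """One logistic-regression term: f_i(x) = log(1 + exp(-b_i * a_i^T x))."""
    return np.log1p(np.exp(-b_i * (a_i @ x)))

def objective(x, A, b, fi=least_squares_fi):
    """Finite-sum objective f(x) = (1/n) * sum_i f_i(x)."""
    n = A.shape[0]
    return sum(fi(x, A[i], b[i]) for i in range(n)) / n
```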

  5. Assumptions • Lipschitz continuity of the derivative of each f_i: ‖∇f_i(x) − ∇f_i(y)‖ ≤ L ‖x − y‖ for all x, y • Strong convexity of f: f(y) ≥ f(x) + ∇f(x)^T (y − x) + (μ/2) ‖y − x‖^2 for all x, y

  6. Gradient Descent (GD) • Update rule: x_{k+1} = x_k − h ∇f(x_k) • Fast (linear) convergence rate • Alternatively, for accuracy ε we need O(κ log(1/ε)) iterations, where κ = L/μ is the condition number • Complexity of a single iteration – n (measured in gradient evaluations)
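A minimal sketch of this update on the finite-sum objective, assuming a user-supplied component-gradient oracle grad_fi(x, i) (a hypothetical helper, not defined on the slides); it makes the per-iteration cost of n gradient evaluations explicit.

```python
import numpy as np

def gradient_descent(grad_fi, x0, n, h=0.1, iters=100):
    """Plain GD: x_{k+1} = x_k - h * grad f(x_k).
    Each iteration evaluates all n component gradients."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        full_grad = sum(grad_fi(x, i) for i in range(n)) / n  # n gradient evaluations
        x = x - h * full_grad
    return x
```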

  7. Stochastic Gradient Descent (SGD) • Update rule: x_{k+1} = x_k − h_k ∇f_i(x_k), with i sampled uniformly at random and h_k a step-size parameter • Why it works: E[∇f_i(x)] = ∇f(x) • Slow (sublinear) convergence • Complexity of a single iteration – 1 (measured in gradient evaluations)
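A matching sketch of SGD using the same hypothetical grad_fi oracle; the decaying step-size schedule h0/(k+1) is one common illustrative choice, not necessarily the one used on the slides.

```python
import numpy as np

def sgd(grad_fi, x0, n, h0=1.0, iters=10000, seed=0):
    """SGD: x_{k+1} = x_k - h_k * grad f_i(x_k), with i uniform at random.
    Unbiasedness, E[grad f_i(x)] = grad f(x), is why the method works;
    each iteration costs only 1 gradient evaluation."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(iters):
        i = rng.integers(n)                      # single sampled component
        x = x - (h0 / (k + 1)) * grad_fi(x, i)
    return x
```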

  8. Goal • GD: fast convergence, but n gradient evaluations in each iteration • SGD: slow convergence, but complexity of each iteration is independent of n • Combine the two in a single algorithm

  9. Semi-Stochastic Gradient Descent (S2GD)

  10. Intuition • The gradient does not change drastically between nearby iterates • We could reuse the information from an “old” gradient

  11. Modifying the “old” gradient • Imagine someone gives us a “good” point y and the full gradient ∇f(y) • The gradient at a point x, near y, can be expressed as ∇f(x) = ∇f(y) + [∇f(x) − ∇f(y)]: an already computed gradient plus a gradient change that we can try to estimate • Approximation of the gradient: ∇f(x) ≈ ∇f(y) + ∇f_i(x) − ∇f_i(y), with i sampled uniformly at random
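The approximation above can be written as a one-line helper; full_grad_y and grad_fi are hypothetical names carried over from the earlier sketches.

```python
def s2gd_direction(grad_fi, full_grad_y, x, y, i):
    """Estimate of grad f(x): reuse grad f(y) (already computed at the 'old' point y)
    and estimate the gradient change grad f(x) - grad f(y) from a single sampled
    component, grad f_i(x) - grad f_i(y). Costs 2 component-gradient evaluations."""
    return full_grad_y + grad_fi(x, i) - grad_fi(y, i)
```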

  12. The S2GD Algorithm • [Algorithm pseudocode displayed on the slide] • Simplification: the size of the inner loop is random, following a geometric rule (see the sketch below)
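A minimal sketch of the epoch structure this slide describes, under the simplification mentioned there: the inner-loop length is drawn from a geometric distribution truncated at m. The parameter values and the exact distribution constants are illustrative assumptions, not the paper's precise parameterization, which ties them to the stepsize and the problem constants.

```python
import numpy as np

def s2gd(grad_fi, x0, n, h=0.05, epochs=20, m=1000, q=0.99, seed=0):
    """Illustrative S2GD-style loop (a sketch, not the paper's exact parameterization).
    Each epoch: one full gradient at y (n evaluations), then a random number of
    cheap inner steps using the direction g + grad f_i(x) - grad f_i(y)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        g = sum(grad_fi(y, i) for i in range(n)) / n   # full gradient evaluation
        t = min(rng.geometric(1.0 - q), m)             # random inner-loop size, geometric rule truncated at m
        x = y.copy()
        for _ in range(t):
            i = rng.integers(n)                        # 2 cheap gradient evaluations per inner step
            x = x - h * (g + grad_fi(x, i) - grad_fi(y, i))
        y = x
    return y
```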

  13. Theorem • [Theorem statement displayed on the slide]

  14. Convergence rate • How to set the parameters (the stepsize h and the inner-loop size m)? • The rate is a sum of two terms: for any fixed stepsize h, the first term can be made arbitrarily small by increasing m; the second term can be made arbitrarily small by decreasing h

  15. Setting the parameters • Fix a target accuracy ε; accuracy ε is achieved by a suitable choice of the # of epochs, the stepsize, and the # of inner iterations • Total complexity (in gradient evaluations) = # of epochs × (one full gradient evaluation + the cheap inner iterations)

  16. Complexity • S2GD complexity: O((n + κ) log(1/ε)) gradient evaluations • GD complexity: O(κ log(1/ε)) iterations, complexity of a single iteration is n, total O(nκ log(1/ε))
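For a concrete sense of the gap, a worked comparison in LaTeX; the values n = 10^6 and κ = 10^3 below are hypothetical, chosen only for illustration.

```latex
\[
\underbrace{\mathcal{O}\!\left(n\,\kappa\,\log(1/\varepsilon)\right)}_{\text{GD total}}
\qquad \text{vs.} \qquad
\underbrace{\mathcal{O}\!\left((n+\kappa)\,\log(1/\varepsilon)\right)}_{\text{S2GD total}}
\]
% Hypothetical example: n = 10^6, kappa = 10^3:
% GD   ~ 10^9 gradient evaluations per log(1/eps) factor,
% S2GD ~ 10^6 gradient evaluations per log(1/eps) factor (about 1000x fewer).
```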

  17. Related Methods • SAG – Stochastic Average Gradient (Mark Schmidt, Nicolas Le Roux, Francis Bach, 2013) • Refreshes a single stochastic gradient in each iteration • Needs to store n gradients • Similar convergence rate • Cumbersome analysis • MISO – Minimization by Incremental Surrogate Optimization (Julien Mairal, 2014) • Similar to SAG, slightly worse performance • Elegant analysis

  18. Related Methods • SVRG – Stochastic Variance Reduced Gradient (Rie Johnson, Tong Zhang, 2013) • Arises as a special case of S2GD • Prox-SVRG (Lin Xiao, Tong Zhang, 2014) • Extends SVRG to the proximal setting • EMGD – Epoch Mixed Gradient Descent (Lijun Zhang, Mehrdad Mahdavi, Rong Jin, 2013) • Handles simple constraints • Worse convergence rate

  19. Experiment • Example problem, with the problem data and results shown in figures on the slide
