The Rate of Convergence of AdaBoost

Presentation Transcript


  1. The Rate of Convergence of AdaBoost. Indraneel Mukherjee, Cynthia Rudin, Rob Schapire

  2. AdaBoost (Freund and Schapire 97)
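  The algorithm box on this slide is not reproduced in the transcript. As a reference point, here is a minimal sketch of the AdaBoost of Freund and Schapire (1997) with binary labels y_i in {−1, +1}; the decision-stump weak learner, the data layout, and the names best_stump/adaboost/predict are illustrative assumptions for this sketch, not part of the slides.

```python
# A minimal sketch of AdaBoost (Freund and Schapire, 1997).  The weak learner
# (exhaustive decision stumps) and all names here are illustrative assumptions;
# the slide's own pseudocode is not in the transcript.
import numpy as np

def best_stump(X, y, w):
    """Return (weighted error, stump) for the threshold stump with smallest weighted error."""
    best_err, best_params = np.inf, None
    for j in range(X.shape[1]):                      # feature to split on
        for thresh in np.unique(X[:, j]):            # candidate threshold
            for sign in (+1, -1):                    # orientation of the stump
                pred = np.where(X[:, j] <= thresh, sign, -sign)
                err = w[pred != y].sum()             # weighted training error
                if err < best_err:
                    best_err, best_params = err, (j, thresh, sign)
    return best_err, best_params

def stump_predict(X, stump):
    j, thresh, sign = stump
    return np.where(X[:, j] <= thresh, sign, -sign)

def adaboost(X, y, T=100):
    """Run T rounds of AdaBoost; return the list of (alpha_t, stump_t) pairs."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    m = len(y)
    w = np.full(m, 1.0 / m)                          # D_1(i) = 1/m
    ensemble = []
    for _ in range(T):
        err, stump = best_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)         # guard the log below
        alpha = 0.5 * np.log((1.0 - err) / err)      # AdaBoost step size
        pred = stump_predict(X, stump)
        w *= np.exp(-alpha * y * pred)               # exponential reweighting
        w /= w.sum()                                 # renormalize to get D_{t+1}
        ensemble.append((alpha, stump))
    return ensemble

def predict(X, ensemble):
    """Sign of the weighted vote of the selected weak hypotheses."""
    X = np.asarray(X, dtype=float)
    score = sum(alpha * stump_predict(X, stump) for alpha, stump in ensemble)
    return np.sign(score)
```

  The exponential loss of the running combination (defined a few slides below) is the quantity whose convergence rate the talk analyzes.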

  5. Basic properties of AdaBoost’s convergence are still not fully understood. We address one of these basic properties: convergence rates with no assumptions.

  10. AdaBoost is known for its ability to combine “weak classifiers” into a “strong” classifier
  • AdaBoost iteratively minimizes the “exponential loss” (Breiman, 1999; Frean and Downs, 1998; Friedman et al., 2000; Friedman, 2001; Mason et al., 2000; Onoda et al., 1998; Rätsch et al., 2001; Schapire and Singer, 1999)
  Exponential loss:
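  The formula itself appears on the slide as an image and is not in the transcript. In the standard setup assumed here (m training examples (x_i, y_i) with y_i in {−1, +1}, weak hypotheses h_1, …, h_N, and a combination vector λ), the exponential loss AdaBoost minimizes is

  \[
  L(\lambda) \;=\; \frac{1}{m}\sum_{i=1}^{m} \exp\!\Big(-y_i \sum_{j=1}^{N} \lambda_j h_j(x_i)\Big),
  \]

  and each round of AdaBoost can be read as a greedy coordinate-descent step on L, the view behind several of the works cited above.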

  13. Known:
  • AdaBoost converges asymptotically to the minimum of the exponential loss (Collins et al., 2002; Zhang and Yu, 2005)
  • Convergence rates under strong assumptions:
    • the “weak learning” assumption holds, i.e., hypotheses are better than random guessing (Freund and Schapire, 1997; Schapire and Singer, 1999)
    • a finite minimizer is assumed to exist (Rätsch et al., 2002, and many classic results)
  • Schapire (2010) conjectured that fast convergence rates hold without any assumptions.
  • The convergence rate is relevant for the consistency of AdaBoost (Bartlett and Traskin, 2007).

  17. Outline
  • Convergence Rate 1: Convergence to a target loss: “Can we get within ε of a ‘reference’ solution?”
  • Convergence Rate 2: Convergence to the optimal loss: “Can we get within ε of an optimal solution?”

  18. Main Messages
  • Usual approaches assume a finite minimizer
    • Much more challenging not to assume this!
  • Separated two different modes of analysis
    • comparison to a reference, comparison to the optimum
    • different rates of convergence are possible in each
  • Analyses of convergence rates often ignore the “constants”
    • we show they can be extremely large in the worst case

  19. • Convergence Rate 1: Convergence to a target loss: “Can we get within ε of a ‘reference’ solution?”
  • Convergence Rate 2: Convergence to the optimal loss: “Can we get within ε of an optimal solution?”
  Based on a conjecture that says... (sketched below)
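  The conjecture itself appears on the next slides only as a figure. Read together with the talk's setup, it can be sketched as follows (the choice of the ℓ1-norm ball and the exact quantifiers are assumptions here): for every B > 0 and ε > 0, AdaBoost should satisfy

  \[
  L(\lambda_T) \;\le\; \inf_{\|\lambda\|_1 \le B} L(\lambda) \;+\; \varepsilon
  \]

  within a number of rounds T that is polynomial in B and 1/ε.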

  20-26. [Figure slides: a ball of radius B, illustrating the reference solutions the conjecture compares against; the later slides add “This happens at: ...”, with the bound on the number of rounds shown as a formula that is not in the transcript.]

  27. Best known previous result is that it takes at most order [rate not in the transcript] rounds (Bickel et al.).

  28. Intuition behind proof of Theorem 1
  • Old fact: if AdaBoost takes a large step, it makes a lot of progress:
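  The inequality on the slide is not in the transcript. The standard version of this “old fact”, for weak hypotheses taking values in {−1, +1}, with edge γ_t = Σ_i D_t(i) y_i h_t(x_i) and step size α_t = ½ ln((1 + γ_t)/(1 − γ_t)), is

  \[
  L(\lambda_{t+1}) \;=\; L(\lambda_t)\,\sqrt{1-\gamma_t^{2}} \;\le\; L(\lambda_t)\, e^{-\gamma_t^{2}/2},
  \]

  so a large step α_t corresponds to a large edge |γ_t| and hence to a large multiplicative drop in the loss.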

  29. [Figure: the quantities from the Old Fact, labeled “measures progress” and “measures distance”, together with the ball of radius B.]

  31. Intuition behind proof of Theorem 1
  • Old Fact:
  • First lemma says:
  • Second lemma says:
  • Combining:

  32. Lemma: There are simple datasets for which the number of rounds required to achieve loss L* is at least (roughly) the norm of the smallest solution achieving loss L*.

  33. Lemma: There are simple datasets for which the norm of the smallest solution achieving loss L* is exponential in the number of examples.
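  Written schematically (the exact expressions on the slides are not in the transcript), with T(L*) the number of rounds AdaBoost needs to reach loss L* and m the number of examples, the two lemmas together say that there are datasets on which

  \[
  T(L^*) \;\gtrsim\; \min\{\|\lambda\| : L(\lambda)\le L^*\}
  \qquad\text{and}\qquad
  \min\{\|\lambda\| : L(\lambda)\le L^*\} \;=\; 2^{\Omega(m)},
  \]

  so matching a reference solution can require a number of rounds exponential in the number of training examples.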

  34. Rate on a Simple Dataset [log-log plot; y-axis: Loss − (Optimal Loss), from 3e-06 to 3e-02; x-axis: Number of rounds, from 10 to 1e+05]

  35. Outline
  • Convergence Rate 1: Convergence to a target loss: “Can we get within ε of a ‘reference’ solution?”
  • Convergence Rate 2: Convergence to the optimal loss: “Can we get within ε of an optimal solution?”

  36. • Better dependence on ε than Theorem 1; in fact, the dependence on ε is optimal.
  • Doesn’t depend on the size of the best solution within a ball
  • Can’t be used to prove the conjecture, because in some cases C > 2^m (mostly it is much smaller); the form of the bound is sketched below.
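  The bound itself is shown on the slide as a formula that is not in the transcript. Based on the slide's description of the constant C and of the ε-dependence being optimal, the second convergence result has, as an assumption about the missing formula and up to constants, the form

  \[
  L(\lambda_t) \;\le\; \inf_{\lambda} L(\lambda) \;+\; \frac{C}{t},
  \]

  i.e., AdaBoost is within ε of the optimal loss after at most C/ε rounds, where C depends on the dataset.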

  37. Main tool is the “decomposition lemma”
  • Says that examples fall into 2 categories:
    • the zero loss set Z
    • the finite margin set F
  • A similar approach was taken independently by Telgarsky (2011)

  38-40. [Figures: a dataset of positive and negative examples; the second and third figures mark the finite margin set F and the zero loss set Z, respectively.]

  41. Decomposition Lemma
  • For any dataset, there exists a partition of the training examples into Z and F such that these hold simultaneously:

  42. [Figure: a dataset of positive and negative examples.]
