
Introduction to Stereo Vision and Gradient Descent in Probability and Machine Learning

This brief introduction covers key concepts in stereo vision, probability, and machine learning as presented by Lam Tran (San Diego, CA; University of Rochester, Class of 2009). The material explores the dynamics of gradient descent, local minima, and energy changes, including how random exploration can climb out of local minima, and closes with an experimental analysis of a sinusoidal function under varying alpha values, reporting mean performance over stochastic trials.


Presentation Transcript


  1. CFU REU: Week 1 Lam Tran

  2. Brief Introduction • Hometown: San Diego, CA • University of Rochester, Rochester, NY • Class of 2009 • Research interest: Probability and Machine Learning

  3. Stereo Vision (Big Picture) Learn θ

  4. Stereo Vision 2 (Big Picture) θ

  5. Gradient Descent • How to get out of local minima • Δenergy = old energy − new energy (negative when moving uphill) • Climb out of the hill when Rand < exp(beta · Δenergy) • Rand ~ Gaussian distribution, and exp(beta · Δenergy) lies in [0, 1] • When beta = 0, always move to the new state. • When beta is large, move only when new energy < old energy (stuck in a local minimum, similar to plain gradient descent).
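A minimal sketch of this acceptance rule in Python. Note that the slide draws Rand from a Gaussian; the conventional Metropolis-style rule compares against a uniform draw on [0, 1], and that is what this sketch assumes. The function name accept_move is hypothetical.

```python
import math
import random

def accept_move(old_energy, new_energy, beta):
    """Decide whether to jump to a proposed state.

    delta_energy = old_energy - new_energy is negative when the proposed
    state is uphill; exp(beta * delta_energy) then lies in (0, 1], and the
    climb is taken only when a random draw falls below it.
    """
    delta_energy = old_energy - new_energy
    if delta_energy >= 0:
        return True  # downhill (or flat): always move
    # uphill: move with probability exp(beta * delta_energy)
    return random.random() < math.exp(beta * delta_energy)
```

With beta = 0 the threshold is exp(0) = 1, so every proposal is accepted; as beta grows, the uphill probability collapses toward 0 and the rule degenerates into plain gradient descent, matching the last two bullets above.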

  6. Gradient Descent • Worst-case scenario • Energy gained ~ height of the hill climbed • Avoid climbing hills with a large energy change • exp(beta · Δenergy) shrinks as the climb's |Δenergy| grows • Hills with a higher Δenergy have a smaller probability of being climbed.
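A few concrete numbers make the last bullet vivid. This short sketch (assuming beta = 3, as in the experiment on the next slide) prints the climb probability exp(beta · Δenergy) for hills of increasing height.

```python
import math

beta = 3.0
# A hill of a given height corresponds to delta_energy = -height.
for height in (0.1, 0.5, 1.0, 2.0):
    p = math.exp(beta * -height)
    print(f"hill height {height:.1f}: climb probability {p:.4f}")
```

The probability falls off exponentially rather than merely inversely, so tall hills are climbed only very rarely.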

  7. Experimentation • F(x) = alpha·sin(x) with beta = 3 • 1000 trials of 100 samples each • Alpha = 1, mean = 99.70% • Alpha = 2, mean = 88.45% • Alpha = 4, mean = 45.11%
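The slide does not spell out how the reported mean is computed, the step size, or the starting point, so everything here beyond F(x) = alpha·sin(x), beta = 3, 1000 trials, and 100 samples per trial is an assumption. One reading consistent with the trend (higher alpha, lower percentage) is the fraction of trials that climb out of the starting basin; this hypothetical reconstruction measures exactly that.

```python
import math
import random

def climbed_out(alpha, beta=3.0, steps=100, step_size=0.5):
    """One trial: start at the minimum of f(x) = alpha * sin(x)
    (x = -pi/2) and report whether the walker ever crosses a
    neighboring maximum, i.e. climbs out of the starting basin."""
    def f(x):
        return alpha * math.sin(x)

    x = -math.pi / 2
    for _ in range(steps):
        x_new = x + random.uniform(-step_size, step_size)
        delta_energy = f(x) - f(x_new)  # negative when the step is uphill
        if delta_energy >= 0 or random.random() < math.exp(beta * delta_energy):
            x = x_new
        if abs(x + math.pi / 2) >= math.pi:  # reached a maximum: escaped
            return True
    return False

# 1000 trials per alpha with beta = 3, mirroring the slide's setup.
for alpha in (1, 2, 4):
    escapes = sum(climbed_out(alpha) for _ in range(1000))
    print(f"alpha = {alpha}: {100 * escapes / 1000:.2f}% of trials climbed out")
```

Under these assumed settings the percentages will not reproduce the slide's numbers exactly, but the qualitative pattern holds: larger alpha means taller barriers and a smaller escape rate.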
