

Robust Inversion Using the Biweight Norm
Jun Ji, Hansung University (visiting the University of Texas at Austin)
SEG 2011, San Antonio





Presentation Transcript


  1. Robust Inversion Using the Biweight Norm
Jun Ji, Hansung University (visiting the University of Texas at Austin), SEG 2011, San Antonio

Introduction
- Least-squares (l2) inversion: sensitive to outliers.
- Least-absolute (l1) inversion: resistant to outliers (i.e., robust).
- Variants of l1: Huber norm, hybrid norm, etc.

IRLS Review
- A linear system, its least-squares (LS) solution, and the weighted LS (WLS) solution.
- Robust weights depend on the residuals, so they require a nonlinear inversion such as IRLS.
- IRLS (Iteratively Reweighted LS), implemented with a nonlinear Conjugate Gradient (NCG) method (Claerbout, 1991):
  1. Compute the residual.
  2. Compute the weighting.
  3. Solve the WLS problem to find the model.
  4. Iterate until satisfied. (A minimal sketch of this loop follows below.)
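A minimal IRLS sketch in Python (not from the slides): the forward operator is assumed to be a dense matrix G with data d, weight_fn stands for any of the robust weightings discussed on the next slide, and the inner weighted LS solve uses numpy's lstsq rather than the nonlinear conjugate-gradient scheme of Claerbout (1991) used in the presentation.

```python
import numpy as np

def irls(G, d, weight_fn, n_iter=10):
    """Iteratively reweighted least squares (IRLS), minimal sketch.

    G         : (n_data, n_model) forward operator as a dense matrix
    d         : (n_data,) observed data
    weight_fn : maps residuals r = d - G m to nonnegative weights
    n_iter    : number of reweighting iterations
    """
    # Start from the ordinary least-squares (l2) solution.
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    for _ in range(n_iter):
        r = d - G @ m                      # 1. compute the residual
        w = weight_fn(r)                   # 2. compute the robust weights
        sw = np.sqrt(w)
        # 3. solve the weighted LS problem by row-scaling G and d
        m, *_ = np.linalg.lstsq(G * sw[:, None], d * sw, rcond=None)
    return m                               # 4. after iterating until satisfied
```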

  2. Robust norm: l1 norm
- l1 norm function and its weighting.

Robust norm: Huber norm (Huber, 1981)
- Huber norm function and its weighting.
- ε = 1.345 × MAD/0.6745 (~95% efficiency for Gaussian noise) (Holland & Welsch, 1977).

Robust norm: Hybrid norm (Bube & Langan, 1997)
- Hybrid l1/l2 norm function and its weighting.
- ε ≈ 0.6 × σ (Bube & Langan, 1997).

Robust norm: Biweight norm (Beaton & Tukey, 1974)
- Tukey's biweight (bisquare weight) norm function and its weighting.
- ε = 4.685 × MAD/0.6745 (~95% efficiency for Gaussian noise) (Holland & Welsch, 1977).

Problems for biweight-norm IRLS
- Local minima (due to the nonconvex measure): a good initial guess (e.g., the Huber norm solution) is helpful.
- Carefully choose the threshold (ε) and do not change it during iteration. (Hedged sketches of these weightings follow below.)
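The slide images with the exact norm and weighting formulas are not reproduced in this transcript; the sketch below assumes their standard forms from the cited references. The scale σ is estimated as MAD/0.6745, and the threshold ε is intended to be computed once (e.g., from an initial residual) and held fixed during the IRLS iterations, as the slide recommends.

```python
import numpy as np

def mad_sigma(r):
    """Robust scale estimate: MAD / 0.6745 approximates sigma for Gaussian noise."""
    return np.median(np.abs(r - np.median(r))) / 0.6745

def w_l1(r, small=1e-8):
    """l1 weighting: w = 1 / |r| (small floor avoids division by zero)."""
    return 1.0 / np.maximum(np.abs(r), small)

def w_huber(r, eps):
    """Huber weighting: w = 1 for |r| <= eps, eps / |r| otherwise (eps = 1.345 * sigma)."""
    a = np.maximum(np.abs(r), 1e-12)
    return np.where(a <= eps, 1.0, eps / a)

def w_hybrid(r, eps):
    """Hybrid l1/l2 weighting: w = 1 / sqrt(1 + (r/eps)^2) (eps ~ 0.6 * sigma)."""
    return 1.0 / np.sqrt(1.0 + (r / eps) ** 2)

def w_biweight(r, eps):
    """Tukey biweight: w = (1 - (r/eps)^2)^2 for |r| <= eps, 0 otherwise (eps = 4.685 * sigma)."""
    u = r / eps
    return np.where(np.abs(u) <= 1.0, (1.0 - u ** 2) ** 2, 0.0)
```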

  3. Properties of different norms
- Single-parameter estimation problem with N observations d_i.
- Minimizing the sum of squared errors (l2 norm) gives the mean.
- Minimizing the sum of absolute errors (l1 norm) gives the median.
- Example data: (2, 3, 4, 5, 66) → mean: 16, median: 4, a more robust estimate: ~3.5. (An illustrative computation is sketched after this slide.)

Example: Line fitting
- Background noise: N(µ, σ) = (0, 0.02).
- Outliers (20% of the data): 2 spikes (4.5, 5) + 8 points with N(3, 0.1).

Example: Hyperbola fitting
- Background noise: N(0, 0.4).
- Outliers: 1) three spikes at 10 times the signal amplitude, 2) a bad trace with N(0, 1), 3) 12 bad traces with U(10, 2), ~10% of the data.
- A second case with background noise N(0, 0.4) and outliers 1) and 2) only.
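An illustrative check of the single-parameter example, reusing the functions sketched above (values other than the slide's mean and median are approximate). Starting the biweight iteration from the l2 solution (the mean, 16) would zero out every weight with this fixed threshold, so the loop starts from the median, echoing the slide's point about needing a good initial guess.

```python
import numpy as np

# Example data from the slide: one large outlier (66) among small values.
d = np.array([2.0, 3.0, 4.0, 5.0, 66.0])

print(d.mean())        # l2 estimate (mean): 16.0
print(np.median(d))    # l1 estimate (median): 4.0

# Biweight IRLS for a single constant parameter: threshold fixed once from
# the median-centered residuals, median used as the initial guess.
eps = 4.685 * mad_sigma(d - np.median(d))
m = np.median(d)
for _ in range(10):
    w = w_biweight(d - m, eps)     # downweights the outlier to zero
    m = np.sum(w * d) / np.sum(w)  # weighted LS solution for a constant model
print(m)                           # roughly 3.5, the "more robust" estimate
```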

  4. Real data example

Conclusions
- IRLS using the biweight norm provides a robust inversion method, like the l1-norm variants such as the l1, Huber, and hybrid norms.
- Biweight-norm inversion sometimes gives better estimates than the l1-norm variants when the outliers are not simple.
- For optimum performance:
  - A good initial guess (e.g., the Huber norm solution) is needed to converge to the global minimum.
  - Carefully choose the threshold (ε) based on the noise distribution and do not change it during iteration.
