
On the Interpolation Algorithm Ranking


Presentation Transcript


  1. 10th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, 10th–13th July 2012, Florianópolis, SC, Brazil. On the Interpolation Algorithm Ranking. Carlos López-Vázquez, LatinGEO – Lab SGM + Universidad ORT del Uruguay

  2. What is algorithm ranking?
  • There exist many interpolation algorithms
  • Which is the best?
  • Is there a general answer?
  • Is there an answer for my particular dataset?
  • How should the better-than relation between two given methods be defined?
  • How confident should I be in such an answer?

  3. What has been done?
  • Many papers so far; permanent interest
  • What does a typical paper look like?
    • Takes a dataset as an example: N points sampled somewhere
    • Subdivides the N points into two disjoint sets: Training Set {A} and Test Set {B}
    • A∩B=Ø; N=#{A}+#{B}
    • Repeats for all available algorithms:
      • Define the interpolant using {A}; blindly interpolate at the locations of {B}
      • Compare the known values at {B} with the interpolated ones
    • Compare how? Typically through RMSE/MAD
    • Better-than is then equivalent to lower-RMSE (see the sketch below)
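A minimal Python sketch of this train/test workflow. It uses scipy's griddata as a stand-in for the interpolation algorithms under comparison (the slides do not name specific implementations), and all variable names and the synthetic surface are illustrative:

```python
import numpy as np
from scipy.interpolate import griddata

def rmse(z_true, z_pred):
    """Root mean square error, skipping points the interpolant could not reach."""
    ok = ~np.isnan(z_pred)
    return float(np.sqrt(np.mean((z_true[ok] - z_pred[ok]) ** 2)))

# N points sampled somewhere: coordinates xy (N x 2) and values z (N,)
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(200, 2))
z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1])   # synthetic surface

# Subdivide into disjoint Training Set {A} and Test Set {B}
idx = rng.permutation(len(z))
A, B = idx[:150], idx[150:]

# For each algorithm: fit on {A}, blindly interpolate at {B}, score by RMSE
for method in ("nearest", "linear", "cubic"):
    z_hat = griddata(xy[A], z[A], xy[B], method=method)
    print(f"{method:8s} RMSE = {rmse(z[B], z_hat):.4f}")
```

Under this workflow, "better-than" simply means a lower printed RMSE on the held-out set {B}.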

  4. Is RMSE/MAD/etc. suitable as a metric?
  • Different interpolation algorithms lead to different-looking surfaces
  • RMSE might not be representative. Why? (see the sketch below)
  • Let's consider spectral properties
  Images from www.spatialanalysisonline.com
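One way to see why RMSE alone can be misleading: two error fields can have identical RMSE yet completely different spatial texture. A small illustrative sketch with synthetic data (not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n)

rough = rng.standard_normal((n, n))                # white-noise error field
smooth = np.sin(x)[:, None] * np.cos(x)[None, :]   # one broad undulation
smooth *= np.sqrt(np.mean(rough ** 2) / np.mean(smooth ** 2))  # match RMS

for name, e in (("rough", rough), ("smooth", smooth)):
    print(name, float(np.sqrt(np.mean(e ** 2))))   # same RMSE, different look
```

RMSE cannot separate these two fields, but their spectra can, which motivates the spectral metrics on the next slide.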

  5. Some spectral metric of agreement
  • For example, the ESAM metric:
    • U = fft2d(measured error field), U(i,j) ≥ 0
    • V = fft2d(interpolated error field), V(i,j) ≥ 0
    • Ideally, U = V
    • 0 ≤ ESAM(U,V) ≤ 1
    • ESAM(W,W) = 1
  • Hint: there might be better options than ESAM (a stand-in sketch follows below)
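The slides state only ESAM's properties (non-negative FFT magnitudes, a score in [0, 1], and a perfect score for identical spectra), not its exact formula. A normalized inner product of the two magnitude spectra satisfies all three stated properties and can serve as a hedged stand-in:

```python
import numpy as np

def spectral_agreement(f_measured, f_interp):
    """Cosine similarity between 2-D FFT magnitude spectra.

    Stand-in for ESAM: not the published formula, just a metric that
    satisfies the properties quoted on the slide.
    """
    U = np.abs(np.fft.fft2(f_measured))   # U(i, j) >= 0
    V = np.abs(np.fft.fft2(f_interp))     # V(i, j) >= 0
    # Cauchy-Schwarz bounds this in [0, 1] for non-negative spectra
    return float((U * V).sum() / (np.linalg.norm(U) * np.linalg.norm(V)))
```

Calling spectral_agreement(field, field) returns 1.0, matching the ESAM(W, W) = 1 property above.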

  6. How confident should I be in such an answer?
  • Given {A} and {B}, the answer is deterministic
  • How to attach a confidence level, or at least some measure of uncertainty?
  • Perform cross-validation (Falivene et al., 2010):
    • Set #{B}=1 and assign the rest to {A}
    • N possible choices (events) for selecting {B}
    • Evaluate the RMSE for each method and event
    • Average for each method over the N cases
    • Better-than is now average-run-better-than
  • Or simulate:
    • Sample {A} from the N points, with #{A}=m, m<N
    • Evaluate the RMSE for each method and event, and create rank(i)
    • Select a confidence level and apply Friedman's Test to all rank(i), as n judges each ranking k different wines (see the sketch below)
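A sketch of the ranking-plus-Friedman step, assuming an RMSE table has already been computed (one row per simulated event, one column per method; the numbers below are placeholders, with small per-method offsets so the test has something to detect). scipy.stats.friedmanchisquare expects one sample per method:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# rmse_table[i, k]: RMSE of method k on simulated event i.
# Here: placeholder scores for 250 events x 6 methods, with an
# artificial offset per method so the methods actually differ.
rng = np.random.default_rng(2)
rmse_table = rng.gamma(2.0, 1.0, size=(250, 6)) + np.linspace(0.0, 0.5, 6)

# Rank methods within each event (rank 1 = lowest RMSE), like
# n judges each ranking k wines
ranks = rankdata(rmse_table, axis=1)
print("mean rank per method:", ranks.mean(axis=0))

# Friedman's test: do the per-event rankings differ significantly?
stat, p = friedmanchisquare(*rmse_table.T)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")
```

A small p-value says the methods' rankings differ beyond what chance would produce at the chosen confidence level.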

  7. The experiment
  • DEM of Montagne Sainte-Victoire (France)
  • Sample {B}, 20 points, held fixed
  • Do 250 times:
    • Sample {A} points
    • Apply six algorithms
    • Evaluate RMSE, MAD, ESAM, etc.
    • Evaluate ranking(i)
  • Evaluate the ranking of means over i
  • Apply Friedman's Test and compare (a sketch of the loop follows below)
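Putting the protocol together, a hedged sketch of the simulation loop. dem_xy, dem_z, the methods mapping, and the interpolator call signature are hypothetical placeholders, and rmse is as defined earlier; the slides do not specify these interfaces:

```python
import numpy as np

def run_experiment(dem_xy, dem_z, methods, n_runs=250, n_test=20,
                   n_train=500, seed=0):
    """methods: dict mapping a name to a callable
    interpolate(xy_train, z_train, xy_test) -> z_hat  (assumed signature).
    Requires len(dem_z) >= n_test + n_train."""
    rng = np.random.default_rng(seed)
    # Sample {B} once (20 points) and hold it fixed across all runs
    B = rng.choice(len(dem_z), size=n_test, replace=False)
    rest = np.setdiff1d(np.arange(len(dem_z)), B)
    scores = np.empty((n_runs, len(methods)))
    for i in range(n_runs):                                # "Do 250 times"
        A = rng.choice(rest, size=n_train, replace=False)  # sample {A}
        for k, interp in enumerate(methods.values()):      # six algorithms
            z_hat = interp(dem_xy[A], dem_z[A], dem_xy[B])
            scores[i, k] = rmse(dem_z[B], z_hat)
    return scores  # feed into the ranking / Friedman step sketched above
```

The returned scores array plays the role of rmse_table in the previous sketch, so the ranking of means over i and Friedman's Test can be compared directly.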

  8. Results
  • Ranking using the mean of simulated values might differ from Friedman's Test ranking
  • Ranking using spectral properties might disagree with that of RMSE/MAD
  • Friedman's Test has a sound statistical basis
  • Spectral properties of the interpolated field might be important for some applications

  9. Thank you! Questions?

  10. Results
  • Other results, valid for this particular dataset:
    • Ranking using ESAM varies with #{A}
    • According to the ESAM criterion, Inverse Distance Weighting (IDW) quality degrades as #{A} increases
    • According to the RMSE criterion, IDW is the best:
      • With a significant difference w.r.t. the second best
      • At the 95% confidence level
      • Irrespective of #{A}
    • According to the ESAM criterion, IDW is NOT the best

  11. Other possible spectral metrics (to be developed)
