
Fuzzy Verification toolbox: definitions and results






Presentation Transcript


  1. Fuzzy Verification toolbox: definitions and results. Felix Ament, MeteoSwiss, Switzerland

  2. Motivation for new scores: Which rain forecast would you rather use? [Figure: observed 24 h rain around Sydney, 21 Mar 2004, next to a mesoscale model (5 km, RMS = 13.0) and a global model (100 km, RMS = 4.6).]

  3. Fine-scale verification: fuzzy methods "… do not evaluate a point-by-point match!" General recipe: (1) choose a threshold to define event and non-event; (2) define the scales of interest; (3) compute box statistics at these scales for verification. The resulting score depends on both spatial scale and intensity. [Figure: forecast and observation box statistics evaluated along scale and intensity axes.]
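To make the recipe concrete, here is a minimal NumPy sketch (not the toolbox code; the synthetic fields, the threshold of 5.0 and the box_fractions helper are illustrative assumptions). It compares event fractions inside non-overlapping boxes instead of point values, so the mismatch shrinks as the scale grows:

```python
import numpy as np

def box_fractions(field, threshold, window):
    """Fraction of grid points at or above `threshold` in each
    non-overlapping window x window box."""
    events = (field >= threshold).astype(float)
    ny, nx = events.shape
    ny, nx = ny - ny % window, nx - nx % window   # trim to whole boxes
    boxes = events[:ny, :nx].reshape(ny // window, window,
                                     nx // window, window)
    return boxes.mean(axis=(1, 3))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, size=(100, 100))   # synthetic "observed" rain
fcst = np.roll(obs, 3, axis=1)               # forecast with a pure shift error
for scale in (1, 5, 25):                     # scales of interest (grid points)
    diff = box_fractions(fcst, 5.0, scale) - box_fractions(obs, 5.0, scale)
    print(f"scale {scale:2d}: RMS of event-fraction difference = "
          f"{np.sqrt((diff ** 2).mean()):.3f}")
```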

  4. A Fuzzy Verification Toolbox. Ebert, E.E., 2007: Fuzzy verification of high resolution gridded forecasts: a review and proposed framework. Meteorol. Appl., submitted. Toolbox available at http://www.bom.gov.au/bmrc/wefor/staff/eee/fuzzy_verification.zip

  5. Applying fuzzy scores. Fuzzy scores provide a wealth of information, but: the results can appear contradictory; their interpretation is sometimes difficult; and they contain too many numbers. [Figure: example score table, shaded from poor to good.]

  6. Application versus testbed. [Figure: schematic contrasting the application workflow with the testbed workflow.]

  7. A Fuzzy Verification testbed. A virtual truth (radar data, model data, or a synthetic field) is fed through a perturbation generator to produce realizations of virtual erroneous model forecasts; these are scored with the Fuzzy Verification Toolbox, and an analyzer condenses the realizations of verification results into an assessment of sensitivity (mean) [and reliability (STD)]. Two ingredients: reference fields (hourly radar-derived rain fields, August 2005 flood event, 19 time stamps; Frei et al., 2005) and perturbations (next slide). A minimal sketch of this loop follows below.
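A sketch of the testbed loop with stand-in components (the real testbed uses radar reference fields, Ebert's toolbox scores and a richer perturbation generator; the names and the hit-rate score below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 2.0, size=(64, 64))        # "virtual truth"

def perturb(field, dx):
    """Perturbation generator: here just a horizontal shift (XSHIFT)."""
    return np.roll(field, dx, axis=1)

def score(fcst, obs, threshold=4.0):
    """Stand-in score: fraction of observed events that are also forecast."""
    f, o = fcst >= threshold, obs >= threshold
    return (f & o).sum() / max(o.sum(), 1)

# Realizations of virtual erroneous forecasts, one per random displacement
displacements = rng.integers(1, 20, size=30)
results = [score(perturb(truth, dx), truth) for dx in displacements]

# Analyzer: the mean quantifies sensitivity, the STD the reliability
print(f"mean = {np.mean(results):.3f}, std = {np.std(results):.3f}")
```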

  8. Perturbations

  9. Perfect forecast. All scores should attain their perfect value, but in fact 5 out of 12 do not!

  10. Effect of "leaking" scores. Problem: some methods assume no skill at scales below the window size! An example, the joint probability method: within a window, pobs = 0.5 and pforecast = 0.5; assuming random ordering within the window, even a forecast identical to the observations is not scored as perfect.
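The leak can be shown numerically. A sketch (not the toolbox's exact joint-probability code): both fields have event fraction about 0.5 inside the window, and under the random-ordering assumption even a forecast identical to the observations is credited only pforecast x pobs of roughly 0.25 joint hits instead of the true 0.5:

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.random((8, 8)) < 0.5            # observed events in one window
fcst = obs.copy()                         # perfect forecast: identical field

true_joint = (fcst & obs).mean()          # actual joint occurrence (~0.5)
assumed_joint = fcst.mean() * obs.mean()  # credited under random ordering
print(f"true joint hit fraction   : {true_joint:.2f}")
print(f"assumed (random ordering) : {assumed_joint:.2f}")
```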

  11. Expected response to perturbations. For each perturbation, sensitivity of a score is either expected (ideal score response 0.0) or not expected (ideal response 1.0), across the scale axis (coarse to fine) and the intensity axis (low to high). Summary in terms of contrast: contrast := mean(scores where sensitivity is not expected) - mean(scores where sensitivity is expected).
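A plausible reading of the contrast definition, as a sketch (the score values and sensitivity mask below are invented for illustration):

```python
import numpy as np

# Hypothetical score values on a (scale x intensity) grid for one perturbation
scores = np.array([[0.2, 0.4, 0.9],
                   [0.3, 0.8, 1.0]])
# Cells where sensitivity is expected (ideal response 0.0); elsewhere 1.0
expected = np.array([[True,  True,  False],
                     [True,  False, False]])

contrast = scores[~expected].mean() - scores[expected].mean()
print(f"contrast = {contrast:.2f}")  # the larger, the better the score
                                     # separates the two regimes
```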

  12. Summary. [Figure: contrast (good when high) versus STD (good when low) for all scores on the real reference fields.] • Leaking scores show an overall poor performance. • "Intensity scale" and "Practically Perfect Hindcast" perform well in general, but … • … many scores have problems detecting large-scale noise (LS_NOISE); "Upscaling" and "50% coverage" are beneficial in this respect.

  13. Spatial detection versus filtering. Horizontal translation (XSHIFT) with variable displacement Dx (5 km, 10 km, 25 km): the "Intensity scale" method can detect the spatial scale of the perturbation, whereas all other methods, like the "Fraction Skill Score", merely filter out small-scale errors.
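The filtering behaviour is easy to reproduce with a simplified fraction skill score over non-overlapping boxes (a sketch, not the toolbox implementation; the field, threshold and window sizes are assumptions): the score recovers as soon as the window exceeds the displacement.

```python
import numpy as np

def fractions(field, thr, w):
    e = (field >= thr).astype(float)
    ny, nx = (s - s % w for s in e.shape)          # trim to whole boxes
    return e[:ny, :nx].reshape(ny // w, w, nx // w, w).mean(axis=(1, 3))

def fss(fcst, obs, thr, w):
    """Fraction skill score on non-overlapping w x w boxes."""
    pf, po = fractions(fcst, thr, w), fractions(obs, thr, w)
    return 1.0 - ((pf - po) ** 2).sum() / ((pf ** 2).sum() + (po ** 2).sum())

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 2.0, size=(120, 120))
for dx in (5, 10, 25):                             # XSHIFT displacement
    fcst = np.roll(obs, dx, axis=1)
    row = [f"{fss(fcst, obs, 5.0, w):.2f}" for w in (1, 4, 12, 60)]
    print(f"dx = {dx:2d}: FSS at windows 1/4/12/60 ->", row)
```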

  14. Redundancy of scores. Correlating the resulting scores of all score pairs over all thresholds and window sizes, averaged over all types of perturbation (correlation in %), reveals groups of similar scores: • UP, YN, MC, FB, PP • FZ, JP • FB, PP, (IS)
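The grouping can be found mechanically: stack each score's values over all thresholds, window sizes and perturbations into a vector and correlate the vectors. A sketch with invented data (the vectors below are random stand-ins, not real toolbox output):

```python
import numpy as np

rng = np.random.default_rng(4)
base = rng.random(200)                    # a shared response pattern
vectors = {                               # hypothetical score vectors
    "UP": base + 0.05 * rng.random(200),  # two scores tracking the pattern
    "YN": base + 0.05 * rng.random(200),
    "FZ": rng.random(200),                # one behaving independently
}
names = list(vectors)
corr = np.corrcoef([vectors[n] for n in names]) * 100  # in percent
for i, n in enumerate(names):
    print(n, " ".join(f"{corr[i, j]:6.1f}" for j in range(len(names))))
```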

  15. August 2005 flood event. Precipitation sum 18.8.-23.8.2005, from hourly radar data calibrated with rain gauges (Frei et al., 2005). [Figure: four panels with area means of 106.2 mm, 73.1 mm, 62.8 mm and 43.2 mm.]

  16. Fuzzy Verification of the August 2005 flood, based on 3-hourly accumulations during the flood period (18.8.-23.8.2005). [Figure: score matrices for COSMO-2 and COSMO-7, scale (in 7 km grid points) versus intensity threshold (mm/3h), shaded from bad to good.]
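How such a matrix can be assembled, as a sketch (reusing the simplified FSS idea from the XSHIFT example; the synthetic fields, thresholds and window sizes are assumptions, while the real axes are mm/3h and multiples of the 7 km grid):

```python
import numpy as np

def fss(fcst, obs, thr, w):
    def frac(f):
        e = (f >= thr).astype(float)
        ny, nx = (s - s % w for s in e.shape)
        return e[:ny, :nx].reshape(ny // w, w, nx // w, w).mean(axis=(1, 3))
    pf, po = frac(fcst), frac(obs)
    return 1.0 - ((pf - po) ** 2).sum() / ((pf ** 2).sum() + (po ** 2).sum())

rng = np.random.default_rng(6)
obs = rng.gamma(2.0, 2.0, size=(128, 128))
fcst = np.roll(obs, 4, axis=0) + rng.normal(0.0, 0.5, obs.shape)

matrix = [[fss(fcst, obs, thr, w) for thr in (1, 5, 10)]   # intensity axis
          for w in (1, 8, 32)]                             # scale axis
print(np.round(matrix, 2))  # comparing two models means subtracting
                            # two such matrices cell by cell
```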

  17. Fuzzy Verification of the August 2005 flood: difference of fuzzy scores. [Figure: COSMO-2 minus COSMO-7 score matrix, scale (7 km grid points) versus intensity threshold (mm/3h), shaded from "COSMO-7 better" through neutral to "COSMO-2 better".]

  18. D-PHASE: Demonstration of Probabilistic Hydrological and Atmospheric Simulation of flood Events in the Alpine region. • The operational phase (June until November 2007) is running. • 33 atmospheric models take part … • … and store their output in a common format in one data archive. Besides standard verification (see poster), let's apply the fuzzy toolbox: • Models: COSMO-2, -7, -DE, -EU • Period: August 2007 • Lead times: most recent forecast, starting at forecast hour +03 • Observations: Swiss radar data aggregated onto each model grid (sketched below) • To be verified: 3 h accumulation of precipitation.
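A sketch of the observation step (the grid sizes and the simple block average are assumptions for illustration; the real aggregation maps Swiss radar data onto each model's own grid):

```python
import numpy as np

rng = np.random.default_rng(5)
radar = rng.gamma(2.0, 1.5, size=(600, 600))    # fine-scale radar field

factor = 7                                      # coarsening factor, e.g. to
ny, nx = (s - s % factor for s in radar.shape)  # a ~7 km model grid
on_model_grid = radar[:ny, :nx].reshape(
    ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))
print(radar.shape, "->", on_model_grid.shape)   # (600, 600) -> (85, 85)
```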

  19. D-PHASE: August 2007. Intensity Scale score (preliminary), 3 h accumulation. [Figure: panels for COSMO-2, COSMO-7, COSMO-DE and COSMO-EU.]

  20. Conclusions • Fuzzy verification scores are a promising framework for the verification of high-resolution precipitation forecasts. • The testbed is a useful tool to evaluate the wealth of scores (not necessarily fuzzy ones): • Not all scores indicate a perfect forecast by perfect score values (leaking scores). • The "intensity scale" method is able to detect the specific scale of a spatial error. • MeteoSwiss goes for the Upscaling, Intensity scale, Fraction skill score (and Practically perfect hindcast) methods. • The first long-term application, for D-PHASE, has just started.

  21. Summary. [Figure: as slide 12, contrast (good when high) versus STD (good when low) for all scores, here for the ideal case; leaking scores stand out.]

  22. D-PHASE: August 2007

  23. D-PHASE: August 2007

  24. D-PHASE: August 2007 – cosmoch7

  25. D-PHASE: August 2007 – cosmoch2

  26. D-PHASE: August 2007 - LME

  27. D-PHASE: August 2007 - LMK

  28. August 2005 flood event. Precipitation sum 18.8.-23.8.2005. [Figure: repeat of the four panels with area means of 106.2 mm, 73.1 mm, 62.8 mm and 43.2 mm.]

  29. August 2005 flood event. Fuzzy verification (hourly accumulations). [Figure: score matrices for COSMO-2 and COSMO-7.]

  30. August 2005 flood event. Fuzzy verification, COSMO-2 minus COSMO-7: • Surprisingly small differences. • However, COSMO-2 seems to be slightly better.
