NWP precipitation forecasts: Validation and Value


Presentation Transcript


  1. NWP precipitation forecasts: Validation and Value Deterministic Forecasts Probabilities of Precipitation Value Extreme Events François Lalaurette, ECMWF

  2. Deterministic Verification • Deterministic: • one cause (the weather today - the analysis), • one effect (the weather in n days - the forecast) • Verification of the forecast using observations • categorical (e.g. verify events when daily rainfall > 50 mm) • continuous (needs a definition or norm for errors) - e.g. the Root Mean Square Error, RMSE = √(mean((RR_forecast − RR_observed)²))
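A minimal numpy sketch of the two verification modes; the rainfall arrays and the 50 mm event threshold are purely illustrative, not ECMWF or SYNOP data:

```python
import numpy as np

# Illustrative collocated 24h rainfall totals (mm)
rr_forecast = np.array([0.0, 3.2, 12.5, 55.0, 0.4])
rr_observed = np.array([0.0, 1.8, 20.1, 62.3, 0.0])

# Continuous verification: a norm for errors
bias = np.mean(rr_observed - rr_forecast)                # mean(observation - forecast)
rmse = np.sqrt(np.mean((rr_forecast - rr_observed)**2))  # root mean square error

# Categorical verification: daily rainfall > 50 mm
event_forecast = rr_forecast > 50.0
event_observed = rr_observed > 50.0
hits = np.sum(event_forecast & event_observed)

print(f"bias = {bias:+.2f} mm, RMSE = {rmse:.2f} mm, hits = {hits}")
```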

  3. Deterministic Verification: Biases • Bias = mean(observation − forecast) • Diurnal cycle (too much convective rain by 12h, too little by 00h - local time) [Figure: bias time series annotated with successive model changes: T319 → T511, 3D-Var → 4D-Var, 60 levels + new precipitation scheme, new physics, new microphysics]

  4. Deterministic Verification: Bias maps • (DJF 2001) Overestimation of orographic precipitation

  5. Deterministic Verification: scatter plots • Error distribution

  6. Deterministic Verification: Frequency Distribution • Small amounts of precipitation are much more frequent in the forecast than in SYNOP observations [Figure: frequency distributions of 24h totals; fraction of days < 0.1 mm: 58% vs 39%]

  7. Deterministic Verification: Heavy rainfall • Higher resolution has brought more realistic distributions of heavy rainfall

  8. Deterministic Verification: Does all this make sense? • SYNOP observation catchment area (raingauge) = O(10⁻¹ m²) • Model grid catchment area = O(1000 km²) • a large number of independent SYNOP observations per model grid box is therefore required to assess the precipitation fluxes in a grid box • high-resolution climatological data - O(10 per model grid box) - are not exchanged in real time, but can be used for a-posteriori verification • two studies recently explored the sensitivity of ECMWF verification to the upscaling of observations (Ghelli and Lalaurette, 2000, used data from Météo-France, while Cherubini et al. used data from MAP)
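A sketch of the upscaling idea behind such studies, assuming hypothetical gauge reports already tagged with the model grid box they fall in:

```python
import numpy as np

# Hypothetical high-resolution gauge reports: grid-box id and 24h rainfall (mm).
# Averaging all gauges inside a grid box yields one "super-observation"
# comparable in scale to the model's grid-box precipitation.
gauge_box_id = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])
gauge_rain   = np.array([0.0, 0.2, 0.0, 14.5, 9.8, 3.1, 2.7, 4.0, 2.2])

n_boxes = gauge_box_id.max() + 1
box_sum = np.bincount(gauge_box_id, weights=gauge_rain, minlength=n_boxes)
box_count = np.bincount(gauge_box_id, minlength=n_boxes)
super_obs = box_sum / box_count  # grid-box mean rainfall, one value per box

print(super_obs)
```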

  9. Deterministic verification: Super-observations [Figure: station maps - climatological network (Météo-France) vs. SYNOP data collected from the GTS]

  10. Deterministic verification: Super-observations (2) • The bias towards too many light rain events is to a large extent a representativeness artifact

  11. Probabilities of Precipitation • PoP can be derived following two strategies: • deriving the PDF from past (conditional) error statistics (MOS, Kalman filter), e.g. using scatter diagrams • transporting a prescribed PDF for initial errors into the future (dynamical or “ensemble” approach) • ECMWF runs 50 perturbed forecasts at T255L40 (+ 1 control)
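Under the ensemble strategy, the PoP is simply the fraction of members forecasting the event; a toy sketch with synthetic member values standing in for a real EPS distribution:

```python
import numpy as np

# Synthetic 51-member ensemble of 24h rainfall (mm) at one location
# (50 perturbed members + 1 control, as in the EPS described above)
rng = np.random.default_rng(0)
members = rng.gamma(shape=0.5, scale=4.0, size=51)

threshold_mm = 1.0
pop = np.mean(members > threshold_mm)  # fraction of members above threshold
print(f"PoP(rain > {threshold_mm} mm) = {pop:.0%}")
```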

  12. Probabilities of Precipitation (EPSgram) • Forecast for Prague, Base time 10/5/2001 12UTC

  13. Probabilistic Verification • What do we want to verify? • Whether probabilities are biased… • e.g., when an event is forecast with probability 60%, it should verify 6 times out of 10 (no more, no less!) • but by that measure alone, a forecast always giving the climate frequency as its probability would be “perfect” • … or whether the probabilistic product is useful • compared, for example, with a single, deterministic forecast

  14. Probabilistic Verification: 1) Reliability Diagrams
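A reliability diagram plots, for each forecast-probability bin, the observed frequency of occurrence against the mean forecast probability; points on the diagonal indicate reliable probabilities. A sketch of the binning (the 10 equally spaced bins are an assumption):

```python
import numpy as np

def reliability_curve(p_forecast, o_observed, n_bins=10):
    """Return (mean forecast probability, observed frequency) per non-empty bin."""
    p = np.asarray(p_forecast)
    o = np.asarray(o_observed)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    bins = [k for k in range(n_bins) if np.any(idx == k)]
    mean_p = np.array([p[idx == k].mean() for k in bins])
    obs_freq = np.array([o[idx == k].mean() for k in bins])
    return mean_p, obs_freq

# Toy, perfectly reliable system: events occur with the forecast probability
rng = np.random.default_rng(0)
p = rng.random(10000)
o = (rng.random(10000) < p).astype(float)
mean_p, obs_freq = reliability_curve(p, o)  # pairs should hug the diagonal
```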

  15. Probabilistic Verification: 4) Brier Scores • BS = (1/N) Σ (p − o)² • p is the probability forecast (relative number of EPS members forecasting the event) • o is the verification (= 1 if the event occurred, = 0 otherwise) • the Brier score varies from 0 (perfect, deterministic forecast) to 1 (perfectly wrong, deterministic forecast) • the Brier Skill Score measures the relative performance with respect to climate (for which p = pc, the relative frequency of occurrence in the long-term climate): BSS = 1 − BS/BSc
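These definitions transcribe directly; the probabilities and outcomes below are toy values:

```python
import numpy as np

def brier_score(p, o):
    """BS = (1/N) * sum((p - o)^2): 0 is perfect, 1 perfectly wrong."""
    return np.mean((np.asarray(p) - np.asarray(o)) ** 2)

p = np.array([0.9, 0.1, 0.6, 0.2])   # forecast probabilities
o = np.array([1.0, 0.0, 1.0, 1.0])   # 1 if the event occurred, 0 otherwise

pc = np.mean(o)                                   # sample climate frequency
bs = brier_score(p, o)
bs_climate = brier_score(np.full_like(o, pc), o)  # constant-climate forecast
bss = 1.0 - bs / bs_climate                       # > 0: better than climate
print(f"BS = {bs:.3f}, BSS = {bss:+.3f}")
```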

  16. Probabilistic Verification: Brier Skill Score Time Series [Figure: BSS time series annotated with model changes: T255, 60 levels + new precipitation, bugfix, Rnorm+, Rnorm−, stochastic physics]

  17. Forecast Value: Brier Score partition • The BS can be split into the forecast reliability (BS_REL), the forecast resolution (BS_RSL) and the sample climate uncertainty (UNC): BS = BS_REL − BS_RSL + UNC • resolution tells how informative the probabilistic forecast is; it varies from zero, for a system in which all forecast probabilities verify with the same frequency of occurrence, to the sample uncertainty, for a system in which the frequency of verifying occurrences takes only the values 0 or 100% (such a system resolves perfectly between occurring and non-occurring events) • reliability tells how close the frequencies of observed occurrences are to the forecast probabilities (on average, when an event is forecast with probability p, it should occur with the same frequency p) • uncertainty varies from 0 to 0.25 and indicates how close to 50% the occurrence of the event was during the sample period (uncertainty is 0.25 when the sample is split equally between occurrence and non-occurrence)
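This is the standard Murphy (1973) partition; a sketch computing the three terms from binned probabilities (the 10 equally spaced bins are an assumption):

```python
import numpy as np

def brier_partition(p, o, n_bins=10):
    """BS = reliability - resolution + uncertainty (Murphy, 1973)."""
    p, o = np.asarray(p, dtype=float), np.asarray(o, dtype=float)
    n, o_bar = len(p), o.mean()
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    rel = res = 0.0
    for k in range(n_bins):
        sel = idx == k
        if not sel.any():
            continue
        p_k, o_k = p[sel].mean(), o[sel].mean()
        rel += sel.sum() * (p_k - o_k) ** 2    # distance from the diagonal
        res += sel.sum() * (o_k - o_bar) ** 2  # spread of observed frequencies
    unc = o_bar * (1.0 - o_bar)                # 0 to 0.25, max when o_bar = 0.5
    return rel / n, res / n, unc
```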

  18. Forecast Value: Categorical Forecasts • Categorical forecast - Step 1: event definition • e.g.: will rain exceed 10 mm over the 24 h period H+72/H+96? • Step 2: gather verification data • H = number of hits (event forecast and observed) • M = number of misses (event observed but not forecast) • F = number of false alarms (yes-forecast of a no-event) • Z = number of correct forecasts of a no-event • False Alarm Rate = F/(F+Z) • Hit Rate = H/(H+M)
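A direct transcription of the contingency counts and the two rates (array names are illustrative):

```python
import numpy as np

def categorical_scores(forecast_yes, observed_yes):
    """2x2 contingency counts and rates for a categorical event."""
    f = np.asarray(forecast_yes, dtype=bool)
    x = np.asarray(observed_yes, dtype=bool)
    H = np.sum(f & x)     # hits
    M = np.sum(~f & x)    # misses
    F = np.sum(f & ~x)    # false alarms
    Z = np.sum(~f & ~x)   # correct no-event forecasts
    hit_rate = H / (H + M)          # assumes the event occurred at least once
    false_alarm_rate = F / (F + Z)  # assumes at least one non-event
    return H, M, F, Z, hit_rate, false_alarm_rate
```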

  19. Value of Probabilistic Categorical Forecasts: Relative Operating Characteristic (ROC) • Forecasts of the event can be made at different probability levels (10%, 20%, etc…) [Figure: ROC points for thresholds P>0, P>10%, P>20%]
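A sketch producing one (False Alarm Rate, Hit Rate) point per probability level; joining the points, plus the (0,0) and (1,1) corners, traces the ROC curve. The level set below is illustrative:

```python
import numpy as np

def roc_points(p_forecast, o_observed, levels=(0.0, 0.1, 0.2, 0.35, 0.65)):
    """The event is 'forecast' whenever the EPS probability exceeds a level.
    Assumes both events and non-events occur in the sample."""
    p = np.asarray(p_forecast)
    o = np.asarray(o_observed, dtype=bool)
    points = []
    for level in levels:
        yes = p > level
        hr = np.sum(yes & o) / np.sum(o)     # hit rate at this level
        far = np.sum(yes & ~o) / np.sum(~o)  # false alarm rate at this level
        points.append((far, hr))
    return points
```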

  20. Categorical Forecast Economic Value (Richardson, 2000) • A cost/loss ratio (C/L) decision model can compare several decision-making strategies: • take preventive action (with cost C) on a systematic basis • never take action (and therefore face the loss L whenever the event occurs) • take action when the event is forecast by the meteorological model • take action when the event occurs (a strategy that assumes a perfect forecast is available)

  21. Categorical Forecast Economic Value (Richardson, 2000) • Strategies 1 and 2 can be combined: always take action if the cost/loss ratio is smaller than the climatological frequency of occurrence of the event, and never take action otherwise • The economic value of the meteorological forecast is then computed as the fraction of the expense reduction achievable with a perfect forecast that the actual forecast delivers: V = (E_climate − E_forecast) / (E_climate − E_perfect)
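A sketch of this relative value, assuming 0 < C < L and re-using the contingency counts H, M, F, Z from slide 18:

```python
def relative_value(H, M, F, Z, cost, loss):
    """V = (E_climate - E_forecast) / (E_climate - E_perfect):
    0 matches the best climate strategy, 1 matches a perfect forecast."""
    n = H + M + F + Z
    p_clim = (H + M) / n                  # climatological event frequency
    e_climate = min(cost, p_clim * loss)  # best of 'always act' / 'never act'
    e_perfect = p_clim * cost             # act exactly when the event occurs
    e_forecast = ((H + F) * cost + M * loss) / n
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# e.g. relative_value(H=30, M=10, F=20, Z=940, cost=1.0, loss=10.0)
```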

  22. Categorical Forecast Economic Value (Richardson, 2000)

  23. Refinements of the EPS verification procedures • Address the skill over smaller areas (need to gather several event categories - CRPS) • Specifically target extreme events (needs climatological data) • Refine the references (“Poor Man's Ensembles”) • Studies show the ensemble forecast of Z500 is no more skilful than cheaper alternatives (distributions of errors over the previous year and/or multi-model ensembles) (Atger, 1999; Ziemann, 2000) • The ensemble's maximum skill seems to be achieved in abnormal situations

  24. Extreme Events: Recent examples A) November French floods (12-13/11/1999) [Figure: observed precipitation map, colour scale 5-25 inches, scale bar 150 km]

  25. Extreme Events: November floods [Figure: TL319 precipitation accumulated 72-96 h ([40, 80] mm and >80 mm) alongside TL159 EPS probabilities of precipitation >20 mm (0.8”) at the >5%, >35% and >65% levels; scale bar 1100 km]

  26. Extreme Events: November floods • Verification against SYNOP data [Figure: verification for probability thresholds 0% < p, 10% < p, 20% < p]

  27. Extreme Events: An EPS Climate • 3 years (January 1997 to December 1999) • constant horizontal resolution (TL159) • monthly basis, valid at 12 UTC • European lat/lon grid (0.5° × 0.5° - oversampling) • T2m, precipitation (24, 120, 240 h accumulations), 10 m wind speed • 50 members (D5+D10) + control (D0, D5+D10) → around 10,000 events per month → post-processing is fully non-parametric (archived values are all 100 percentiles plus the 1‰ and 999‰ quantiles)
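A sketch of such a non-parametric climate, with a synthetic archive standing in for the real TL159 EPS data:

```python
import numpy as np

# Synthetic stand-in for one grid point and one month of EPS archive:
# ~30 days x 51 members x 3 years of 24h precipitation (mm), of the order
# of the "10,000 events per month" quoted above
rng = np.random.default_rng(0)
archive = rng.gamma(shape=0.6, scale=5.0, size=30 * 51 * 3)

# Fully non-parametric: archive the percentiles plus the 1 and 999 per-mille quantiles
levels = np.concatenate(([0.001], np.arange(1, 100) / 100.0, [0.999]))
eps_climate = np.quantile(archive, levels)

p99 = eps_climate[-2]  # the 99th percentile, just before the 999 per-mille value
print(f"24h rain rate exceeded with frequency 1%: {p99:.1f} mm")
```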

  28. Extreme Events: An EPS Climate (2)

  29. Extreme Events: EPS Climate (November) • 24 h rain rates exceeded with frequencies 1% and 1‰ [maps]

  30. Extreme Events: Proposals • A better definition of events worth plotting • e.g.: number of EPS members forecasting 10 m wind speeds exceeding the 99% threshold of the “EPS Climate” • A non-parametric “Extreme Forecast Index”? • based on how far the EPS distribution lies from the climate distribution

  31. Extreme Events: Extreme Forecast Index • A CRPS-like distance between the forecast and climate distributions • By re-scaling this distance using the climate distribution, we can create a dimensionless, signed measure • The Extreme Forecast Index is: • 0% when forecasting the climate distribution • 25% for a deterministic forecast of the median • 100% for a deterministic forecast of an extreme
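As one concrete sketch, here is a signed, re-scaled index of this kind in the form later published for the ECMWF EPS (Lalaurette, 2003), EFI = (2/π) ∫₀¹ (p − F(p))/√(p(1−p)) dp, where F(p) is the fraction of EPS members below the climate quantile of probability p; the exact re-scaling behind the figures on this slide is an assumption and may differ:

```python
import numpy as np

def extreme_forecast_index(members, climate_sample):
    """Signed, dimensionless EFI: ~0 when the EPS matches the climate
    distribution, approaching +/-1 when all members sit beyond the climate
    record. The square-root weighting emphasises the distribution tails."""
    p = np.linspace(0.01, 0.99, 99)          # probability levels (avoid endpoints)
    clim_q = np.quantile(climate_sample, p)  # climate quantiles at those levels
    m = np.asarray(members)
    F = np.array([np.mean(m < q) for q in clim_q])
    integrand = (p - F) / np.sqrt(p * (1.0 - p))
    # trapezoidal integration over p
    return (2.0 / np.pi) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p))
```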

  32. Extreme Events: EFI Maps for November Floods

  33. Extreme Events: Verification issues • The proposal is to extend the products from physical parameters • (e.g. amounts of precipitation) • to forecasts of climatological quantiles • (e.g. today's forecast is for a precipitation event that occurred no more than once in 100 times in our February climatology) • Need local climatologies to rescale the observed values • What to do with major model changes?

  34. Summary • Data currently exchanged on the GTS (SYNOP) can only address very crude measures of precipitation forecast performance (biases), or performance on scales much broader than those resolved by the model (e.g. hydrological basins) • High-resolution networks are needed to upscale the data from local points to model grids • Ensemble forecasts have shown some skill in assessing probabilities of occurrence in the medium range; an optimal combination of dynamical and statistical PoP remains to be achieved

  35. Summary (2) • The value of probability forecasts compared to purely deterministic precipitation forecasts is easy to establish • Some indication of extreme events can be found in the direct model output... provided it is interpreted from a model perspective • A framework for verifying these extreme-event forecasts has been established, but it requires gathering long climatological records from a range of stations
