Presentation to ECMWF Forecast Product User Meeting

Presentation Transcript


  1. Presentation to ECMWF Forecast Product User Meeting 16th June 2005

  2. Met Office use and verification of the monthly forecasting system Bernd Becker, Richard Graham. • Met Office monthly forecast suite • Products from the Monthly Outlook • Standard Verification System

  3. Monthly Forecasting System
  • Coupled ocean-atmosphere integrations: a 51-member ensemble is integrated for 32 days every week.
  • Atmospheric component: IFS at the latest operational cycle (29r1), TL159L40 resolution (320 × 161 grid points, 40 levels)
  • Oceanic component: HOPE (from the Max Planck Institute), 1.4 degree zonal resolution and 29 vertical levels
  • Coupling: OASIS (CERFACS), coupled every ocean time step (1 hour)
  • Perturbations:
    • Atmosphere: singular vectors + stochastic physics
    • Ocean: SST perturbations in the initial conditions + wind-stress perturbations during data assimilation
  • Hindcast statistics: a 5-member ensemble integrated over 32 days for each of the past 12 years, representing a 60-member ensemble, run every week

  4. Properties of the quintile PDF (FORMOST global PDF data)
  [Schematic: hindcast quintile boundaries qb1-qb4 with HCmin and HCmax; forecast ensemble FXmin, FXmean and FXmax; per-quintile forecast means qm1-qm5; categories: well below, below, normal, above, well above.]
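
  As an illustration (not from the presentation; a minimal Python sketch in which the function name, array names and synthetic data are assumptions), the quintile boundaries qb1-qb4 come from the hindcast ensemble, while FXmin/FXmean/FXmax and the per-quintile means qm1-qm5 come from the forecast ensemble:

```python
import numpy as np

def quintile_pdf_properties(hindcast, forecast):
    """Illustrative sketch: quintile boundaries from the hindcast ensemble,
    summary statistics and per-quintile means from the forecast ensemble."""
    # Quintile boundaries qb1..qb4 and range of the hindcast distribution
    qb = np.percentile(hindcast, [20, 40, 60, 80])
    hc_min, hc_max = hindcast.min(), hindcast.max()

    # Forecast ensemble summary statistics
    fx_min, fx_mean, fx_max = forecast.min(), forecast.mean(), forecast.max()

    # Category of each forecast member:
    # 0 = well below, 1 = below, 2 = normal, 3 = above, 4 = well above
    cat = np.digitize(forecast, qb)

    # Mean of the forecast members in each quintile (qm1..qm5); NaN if empty
    qm = np.array([forecast[cat == k].mean() if np.any(cat == k) else np.nan
                   for k in range(5)])
    return {"qb": qb, "hc_range": (hc_min, hc_max),
            "fx": (fx_min, fx_mean, fx_max), "qm": qm}

# Synthetic example: a 60-member hindcast and a 51-member forecast ensemble
rng = np.random.default_rng(0)
print(quintile_pdf_properties(rng.normal(0, 1, 60), rng.normal(0.5, 1, 51)))
```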

  5. FORMOST Monthly Outlook

  6. Example UK 12-18 day temperature forecast for 10 climate districts
  [Panels: probability forecast, deterministic forecast (based on most probable category or ensemble mean), and verification.]
  "…a sudden change to below or well-below average temperatures is expected…" (forecast text issued 11th Feb for the week 21-27 Feb)

  7. Example global capability tercile probability forecast, Europe, days 12-18
  [Panels: P(above), P(average), P(below) for Tmean and Precip, with verification from ECMWF operations.]

  8. [Maps: observed temperature anomaly for 24-30 Jan alongside forecasts for week 1, week 2 and weeks 3&4.]

  9. Verification May 2002 - April 2005
  • 93 forecast/observation pairs of temperature and precipitation
  • Verifying observations: ECMWF short-range (12-36 hr) forecasts over the period
  • Global forecasts: relative operating characteristics for the quintile forecasts, reliability diagrams, Brier skill score decomposition

  10. Relative Operating Characteristics
  POD = H/(H+M), POFD = FA/(FA+CR)
  The ROC measures the ability of the forecast to discriminate between two alternative outcomes, i.e. it measures resolution. It is not sensitive to bias in the forecast, so it says nothing about reliability: a biased forecast may still have good resolution and produce a good ROC curve, which means it may be possible to improve the forecast through calibration. The ROC can thus be considered a measure of potential usefulness. The ROC is conditioned on the observations (i.e. given that bin X occurred, what was the corresponding forecast?). An area under the ROC curve greater than 0.5 indicates skill above climatology.
  [ROC curves shown for the cold and warm categories.]
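
  As a hedged illustration of these quantities (not code from the presentation; the function name, array names and synthetic example are assumptions), the sketch below computes POD and POFD over a set of probability thresholds and the area under the resulting ROC curve:

```python
import numpy as np

def roc_curve(event_occurred, forecast_prob, thresholds=np.linspace(0.0, 1.0, 11)):
    """Sketch: hit rate and false-alarm rate over a range of probability
    thresholds, plus the area under the resulting ROC curve."""
    pod, pofd = [], []
    for t in thresholds:
        warned = forecast_prob >= t                  # forecast "yes" at this threshold
        h = np.sum(warned & event_occurred)          # hits
        m = np.sum(~warned & event_occurred)         # misses
        fa = np.sum(warned & ~event_occurred)        # false alarms
        cr = np.sum(~warned & ~event_occurred)       # correct rejections
        pod.append(h / max(h + m, 1))                # POD = H/(H+M)
        pofd.append(fa / max(fa + cr, 1))            # POFD = FA/(FA+CR)
    pod, pofd = np.array(pod), np.array(pofd)
    # Area under the ROC curve (trapezoidal rule); > 0.5 indicates skill above climatology
    order = np.argsort(pofd)
    x, y = pofd[order], pod[order]
    area = np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0)
    return pofd, pod, area

# Synthetic example: a binary "cold" event and forecast probabilities for 93 cases
rng = np.random.default_rng(0)
obs = rng.random(93) < 0.2                 # event occurs with climatological frequency 0.2
prob = np.clip(0.2 + 0.3 * obs + 0.2 * rng.standard_normal(93), 0, 1)
print(roc_curve(obs, prob)[2])
```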

  11. ROC Map
  [Map: ROC score for temperature "well below normal", 19 to 32 days ahead.]

  12. Brier Score
  BS = (1/N) Σ_k (P_k − O_k)²  (mean squared difference between forecast probability P_k and observed outcome O_k)
  BS = Reliability − Resolution + Uncertainty
  • Reliability: conditional bias, the weighted vertical distance of the reliability curve from the diagonal; the flatter the curve, the less resolution.
  • Resolution: the difference between the forecast PDF and the climatological expectation (variance of the conditional hit rates). The larger the better: the forecast PDF is then more often steeper and narrower than the climatological PDF (flat sharpness plot).
  • Uncertainty: the variance of the observed frequency, ō(1 − ō); for a quintile event, 0.2 × (1 − 0.2) = 0.16. Always ≥ Resolution.
  Brier skill score: BSS = 1 − BS/BS_clim, with BS_clim = Reliability(=0) − Resolution(=0) + Uncertainty.
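
  A minimal sketch of the Brier score and the decomposition written above (illustrative only; the probability binning, function name and synthetic inputs are assumptions, not Met Office code):

```python
import numpy as np

def brier_decomposition(p, o, n_bins=10):
    """Sketch of BS = Reliability - Resolution + Uncertainty.
    p: forecast probabilities; o: binary outcomes (1 if the event occurred)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    n = p.size
    bs = np.mean((p - o) ** 2)                 # BS = (1/N) * sum_k (P_k - O_k)^2
    o_bar = o.mean()                           # climatological frequency (0.2 for a quintile)

    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    rel = res = 0.0
    for b in range(n_bins):
        sel = bins == b
        if not sel.any():
            continue
        p_b, o_b = p[sel].mean(), o[sel].mean()
        rel += sel.sum() * (p_b - o_b) ** 2    # reliability: conditional bias
        res += sel.sum() * (o_b - o_bar) ** 2  # resolution: departure from climatology
    rel, res = rel / n, res / n
    unc = o_bar * (1 - o_bar)                  # uncertainty, e.g. 0.2 * (1 - 0.2) = 0.16

    bs_clim = unc                              # climatology forecast: Reliability = Resolution = 0
    bss = 1 - bs / bs_clim
    return {"BS": bs, "REL": rel, "RES": res, "UNC": unc, "BSS": bss}

# Synthetic example: a quintile event with climatological frequency 0.2
rng = np.random.default_rng(0)
o = (rng.random(93) < 0.2).astype(float)
p = np.clip(0.2 + 0.3 * o + 0.2 * rng.standard_normal(93), 0, 1)
print(brier_decomposition(p, o))
```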

  13. Monthly Verification (Europe): Tmean ROC & reliability, all seasons, days 12-18
  [ROC (POD = H/(H+M)) and reliability (HR = H/(H+FA)) diagrams.]
  • Based on 93 operational forecasts (all seasons)
  • Verification data = ECMWF T+12-36

  14. Monthly Verification (Europe): Precip ROC & reliability, all seasons, days 12-18
  [ROC (POD = H/(H+M), POFD = FA/(FA+CR)) and reliability (HR = H/(H+FA)) diagrams.]

  15. Conclusion
  • The monthly forecast model runs are produced at ECMWF; the products are derived operationally at the Met Office.
  • A Standardised Verification System (SVS) for Long-Range Forecasts (LRF) is beginning to take shape.
  • Forecasts for days 19-32 are as useful as climatology.
  • Predictions of quintiles 1 and 5 are more skilful than those of quintiles 2 to 4.
  • Europe is a difficult region to predict at long range: weather uncertainty imposes a fundamental limit on the sharpness of the probabilistic forecast.
  • The Monthly Outlook is a powerful tool for providing forecast guidance up to a month ahead in many areas.

  16. THANK YOU! • Richard Graham • Margaret Gordon • Andrew Colman

  17. The Met Office We are one of the world's leading providers of environmental and weather-related services. Our solutions and services meet the needs of many communities of interest…from the general public, government and schools, to civil aviation and almost every industry sector around the world.

  18. Desirable Attributes of Scores
  A score should be equitable, without dependence on the forecast distribution:
  1. Equitable
  2. Rewards for correct forecasts inversely proportional to their event frequencies
  3. Penalties for incorrect forecasts directly proportional to their event frequencies
  4. Random forecasts and constant forecasts have the same (0) skill.
  Note: 2 & 3 imply that all information in the contingency table is taken into account.
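
  As an illustration of these attributes (not from the slide), the Peirce (Hanssen-Kuipers) skill score, POD minus POFD, is one contingency-table score that is equitable: random and constant forecasts both score zero. A minimal sketch:

```python
def peirce_skill_score(h, m, fa, cr):
    """Peirce (Hanssen-Kuipers) skill score for a 2x2 contingency table:
    PSS = POD - POFD. Random and constant forecasts both score 0."""
    pod = h / (h + m) if (h + m) else 0.0        # hit rate, conditioned on observations
    pofd = fa / (fa + cr) if (fa + cr) else 0.0  # false-alarm rate
    return pod - pofd

# A constant "always yes" forecast turns every event into a hit and every
# non-event into a false alarm: POD = POFD = 1, so PSS = 0 as required.
print(peirce_skill_score(h=20, m=0, fa=80, cr=0))   # 0.0
```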

  19. Recap: Post-processing and Products
  • Data volume reduction before transfer to the Met Office. Calculate:
    • Tercile/quintile boundaries from the hindcast ensemble
    • Tercile/quintile populations from the forecast ensemble
    • Maximum, mean and minimum from the forecast and from the hindcast
    • Forecast tercile/quintile averages
    • Average in time to week 1, week 2 and weeks 3&4
  • UK forecast (http://www.bbc.co.uk/weather/ukweather/monthly_outlook.shtml):
    • Interpolation to points representing UK climate regions
    • Calibration with historical UK climate-region observations
    • Interpretation of the histogram (ensemble mean, or mode in cases with large spread) to derive the deterministic forecast tercile/quintile
    • Mapping of the tercile/quintile average onto the calibration PDF to derive the deterministic forecast value
  • Global forecast:
    • Tercile/quintile probabilities
    • Calibration by overlaying tercile/quintile boundaries derived from 1989-1998 ERA-40 data (illustrated in the sketch below)
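
  A minimal sketch of the "tercile/quintile populations" step referred to above (illustrative; the function name, synthetic ensembles and use of NumPy percentiles are assumptions). Category probabilities are simply the fraction of forecast members falling between boundaries taken from the hindcast ensemble, or from ERA-40 for the global calibration:

```python
import numpy as np

def category_probabilities(forecast, boundaries):
    """Illustrative sketch of the 'populations' step: the fraction of forecast
    members falling in each tercile/quintile category whose boundaries come
    from the hindcast ensemble (or from ERA-40 for the global calibration)."""
    cat = np.digitize(forecast, boundaries)              # 0 .. len(boundaries)
    counts = np.bincount(cat, minlength=len(boundaries) + 1)
    return counts / forecast.size

# Synthetic example: quintile probabilities for a 51-member forecast
rng = np.random.default_rng(1)
hindcast = rng.normal(0, 1, 60)
forecast = rng.normal(0.8, 0.7, 51)
qb = np.percentile(hindcast, [20, 40, 60, 80])
print(category_probabilities(forecast, qb))              # P(well below) .. P(well above)
```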

  20. Key Points • Production of the Monthly Outlook is split between ECMWF and the Met Office • The Monthly Outlook is operational • The Standardised Verification System is taking shape

  21. Most probable quintile
  • Low spread: the ensemble mean is a good "best estimate"; high confidence.
  • High spread: the ensemble mean is wrong in 80% of all cases.
  • High spread with a probability margin > 5%: the most probable quintile is the "best estimate"; hedge by use of the quintile above/below.
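
  A hedged sketch of the interpretation rule described on this slide (the 5% probability margin is from the slide; the spread threshold, function name and return structure are placeholders, not Met Office values):

```python
def best_estimate(quint_probs, ens_mean, spread, spread_threshold=1.0):
    """Hedged sketch of the interpretation rule on this slide. The 5% margin is
    from the slide; spread_threshold is a placeholder, not a Met Office value."""
    ranked = sorted(enumerate(quint_probs, start=1), key=lambda kv: kv[1], reverse=True)
    (q1, p1), (_, p2) = ranked[0], ranked[1]
    if spread < spread_threshold:
        # Low spread: the ensemble mean is a good "best estimate", high confidence
        return {"type": "ensemble_mean", "value": ens_mean}
    if p1 - p2 > 0.05:
        # High spread but a clear margin: most probable quintile as "best estimate",
        # hedged by also quoting the quintile(s) directly above/below it
        hedge = [q for q in (q1 - 1, q1 + 1) if 1 <= q <= 5]
        return {"type": "most_probable_quintile", "quintile": q1, "hedge": hedge}
    # High spread and no clear margin: low confidence
    return {"type": "low_confidence", "quintile": q1}

# Example: a clear shift towards the upper quintiles with a large ensemble spread
print(best_estimate([0.05, 0.10, 0.15, 0.25, 0.45], ens_mean=1.2, spread=1.5))
```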

  22. FORMOST Verification, Week 2
  [Time series of temperature: quintile boundaries, observed values and deterministic forecast.]

  23. FORMOST Verification, Weeks 3&4, UK-average temperature
  [Time series as in the previous slide.]

  24. FORMOST Verification, Weeks 3&4, UK-average temperature (continued)
  [Time series as in the previous slide.]

  25. Monthly forecast: 3-category probabilities
  [Maps of temperature and precipitation, issued 9th June 2005, valid 20th to 26th June: prob. of above, prob. of average, prob. of below.]

  26. Grid point diagnostics
  Stratify by the magnitude of the probability at each grid point. Properties of a contingency table per grid point:
  • Hit: Q = Qobs & P(Q) >= Pthresh
  • Miss: Q = Qobs & P(Q) < Pthresh
  • False alarm: Q != Qobs & P(Q) >= Pthresh
  • Correct rejection: Q != Qobs & P(Q) < Pthresh
  Scores: POD = H/(H+M) and POFD = FA/(FA+CR), conditioned on observations; Hit Rate = H/(H+FA), conditioned on forecasts.
  [Table of counts per probability bin (10-100%) and quintile category (1 = well below to 5 = well above) omitted; a contingency-table sketch follows below.]
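
  A minimal sketch of the per-grid-point contingency counts defined above (illustrative; the function and array names are assumptions):

```python
import numpy as np

def gridpoint_contingency(fx_quint, obs_quint, fx_prob, p_thresh):
    """Illustrative sketch of the per-grid-point contingency rules above.
    fx_quint / obs_quint: forecast and observed quintile at each grid point;
    fx_prob: forecast probability P(Q) of the forecast quintile."""
    correct = fx_quint == obs_quint
    confident = fx_prob >= p_thresh
    h = np.sum(correct & confident)     # Hit:  Q == Qobs and P(Q) >= Pthresh
    m = np.sum(correct & ~confident)    # Miss: Q == Qobs and P(Q) <  Pthresh
    fa = np.sum(~correct & confident)   # False alarm: Q != Qobs and P(Q) >= Pthresh
    cr = np.sum(~correct & ~confident)  # Correct rejection: Q != Qobs and P(Q) < Pthresh
    return h, m, fa, cr

# From these counts: POD = H/(H+M) and POFD = FA/(FA+CR) (conditioned on the
# observations), and Hit Rate = H/(H+FA) (conditioned on the forecasts).
```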

  27. [ROC (POD = H/(H+M)) and reliability (HR = H/(H+FA)) diagrams for Europe, temperature: 19-day lead time, 2-week period.]

  28. [ROC (POD = H/(H+M)) and reliability (HR = H/(H+FA)) diagrams for Europe, precipitation: 19-day lead time, 2-week period.]
