
An Overview of Forecast Verification Procedures






Presentation Transcript


  1. An Overview of Forecast Verification Procedures • Alan F. Hamlet, Andy Wood, Dennis P. Lettenmaier • JISAO/CSES Climate Impacts Group • Dept. of Civil and Environmental Engineering • University of Washington

  2. Why Create a Set of Retrospective Forecast Verification Procedures? 1) Creates a well-defined, objective, and consistent framework for comparing the retrospective performance of different forecasting procedures. 2) Provides quantitative data for potential users to assess the performance of a particular forecasting procedure in comparison with current practices or other alternate approaches.

  3. Definitions: Forecast Value: Economic (or intangible) value of the forecast in the context of some management decision process or other systematic response by a particular forecast user. Metrics of this type are designed to show the value to a particular forecast user. Forecast Skill: Forecast performance as measured by a quantitative skill metric relative to some standard (e.g. climatology). Skill metrics are designed to objectively compare different forecasts.

  4. [Figure: October 1 PDO/ENSO forecast and January 1 ESP forecast; red = observed, blue = ensemble mean]

  5. A Simple Skill Metric for Deterministic Forecasts Take the central tendency of the forecast (the ensemble mean) and compare it with the long-term mean of observations (climatology), using the MSE of each relative to the observed value being forecast. Skill_1 = 1 - MSE(ensemble mean) / MSE(climatology), or, when evaluating over the historic period, Skill_1 = 1 - MSE(ensemble mean) / [variance of observations], where MSE is the mean squared error relative to the observed value being forecast. Note that a skill of 1.0 is a perfect forecast, a skill of 0.0 is equivalent to climatology, and a negative value means that climatology is a better forecast.
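As a concrete illustration, here is a minimal Python sketch of Skill_1 evaluated over a historic period; the function name skill_deterministic and the use of numpy arrays are illustrative choices, not part of the procedure described in the slides. The inputs would hold one ensemble-mean value per forecast year and the corresponding observed values.

```python
import numpy as np

def skill_deterministic(ensemble_means, observations):
    """Skill_1: MSE of the ensemble-mean forecast divided by the MSE of a
    climatological forecast (the long-term mean of the observations)."""
    fcst = np.asarray(ensemble_means, dtype=float)
    obs = np.asarray(observations, dtype=float)
    mse_forecast = np.mean((fcst - obs) ** 2)
    # Over the historic period, the MSE of climatology equals the
    # variance of the observations.
    mse_climatology = np.mean((obs - obs.mean()) ** 2)
    return 1.0 - mse_forecast / mse_climatology
```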

  6. What’s wrong with the simple metric in the previous example? [Figure: four forecast PDFs (red, green, aqua, purple) plotted against the observed value] Metrics that use only the central tendency of each forecast PDF will fail to distinguish between the red, green, and aqua forecasts, but will identify the purple forecast as inferior. Example metric: MSE of the ensemble mean compared with the MSE of the long-term mean of observations (the variance of the obs.).

  7. A More Sophisticated Skill Metric for ESP Forecasts In this case the MSE of the forecast relative to the observation is calculated for each of the ensemble members, and the average of these squared errors over all ensemble members is taken. Note that in this case both the forecast and the climatology are treated as ensembles, as opposed to single deterministic traces. Skill_2 = 1 - [ Σ(forecast - obs)² / N ] / [ Σ(climatology - obs)² / M ], where N is the number of forecast ensemble members and M is the number of climatological observations. The difference between this skill metric and the simple one is that this metric rewards accuracy but also punishes spread.
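A minimal sketch of Skill_2 for a single forecast follows; the function name skill_ensemble and the argument layout are assumptions made for illustration, not part of the original procedure.

```python
import numpy as np

def skill_ensemble(forecast_members, climatology_members, obs):
    """Skill_2 for one forecast: both the ESP forecast and the climatology
    are treated as ensembles, so every member's squared error counts rather
    than only the error of the ensemble mean."""
    fcst = np.asarray(forecast_members, dtype=float)      # N forecast members
    clim = np.asarray(climatology_members, dtype=float)   # M climatological traces
    mse_forecast = np.mean((fcst - obs) ** 2)             # sum over members / N
    mse_climatology = np.mean((clim - obs) ** 2)          # sum over traces / M
    return 1.0 - mse_forecast / mse_climatology
```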

  8. [Figure: the same four forecast PDFs (red, green, aqua, purple) with their skill ranks, plotted against the observed value] More sophisticated metrics that reward accuracy but punish spread will rank the forecast skill from highest to lowest as aqua, green, red, purple. Example metric: average RMSE of ALL ensemble members compared with the average RMSE of ALL climatological observations.
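To make this ranking behaviour concrete, a small synthetic example (the observed value, spreads, and ensemble sizes below are invented for illustration) shows that the ensemble-based metric penalizes spread even when two forecasts share the same central tendency:

```python
import numpy as np

rng = np.random.default_rng(0)
obs = 100.0                                     # synthetic observed value
climatology = rng.normal(100.0, 20.0, size=50)  # stand-in climatological traces

# Two synthetic ensembles with (nearly) the same mean but different spread;
# a central-tendency metric like Skill_1 would rate them almost identically.
tight = obs + rng.normal(0.0, 5.0, size=30)
wide = obs + rng.normal(0.0, 20.0, size=30)

mse_clim = np.mean((climatology - obs) ** 2)
for name, members in (("tight", tight), ("wide", wide)):
    skill_2 = 1.0 - np.mean((members - obs) ** 2) / mse_clim
    print(f"{name:5s} ensemble: Skill_2 = {skill_2:.2f}")
# The tight ensemble scores markedly higher: every member's error is
# counted, so extra spread is punished even when the mean is unchanged.
```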

  9. An Example Case Study 1) A Comparison of Forecast Skill between: a) January 1 ESP forecasts (no climate forecast) b) ENSO-Based Forecasts for October 1, November 1, and December 1 (hydrologic initial conditions plus climate forecast)

  10. [Figure: comparison of the October 1, November 1, and December 1 forecasts with the January 1 ESP forecast]

  11. [Figure: comparison of the October 1, November 1, and December 1 forecasts with the January 1 ESP forecast]

  12. [Figure: comparison of the October 1, November 1, and December 1 forecasts with the January 1 ESP forecast]

  13. Comparison of Skill For Warm ENSO Years

  14. Comparison of Skill For Cool ENSO Years

  15. Comparison of Skill For ENSO Neutral Years

  16. Some Studies of Potential Interest: • Weber, F.W., L. Perreault, and V. Fortin, Measuring the performance of hydrological forecasts for hydropower production at BC Hydro and Hydro-Quebec • Tom Pagano’s (NRCS) retrospective skill analysis for the NRCS streamflow forecasting equations • Kevin Werner (NWS WRSC) and Andy Wood (UW) are currently embarking on a multi-model retrospective ESP evaluation for several hydrologic forecasting models. Participation by other groups is encouraged as well.

  17. Summary: • Establishing a forecast verification procedure facilitates an objective and consistent comparison between alternate forecasting procedures over a retrospective period. • Forecast skill metrics that reward accuracy and punish spread are more appropriate for ESP forecasts than simple metrics based on the central tendency of the forecasts, because ESP forecasts may have different variability than the climatology and (at least in the PNW) there are relatively small shifts in the central tendency of the forecast from year to year. • Fall and early-winter ENSO-based long-lead forecasts for the Columbia basin (based on resampling methods) typically have lower skill than January 1 ESP forecasts based on persistence of hydrologic state (soil moisture and snow), but frequently have higher skill than climatology. • Prior to December 1 the ENSO-based forecasts are considerably less robust than the January 1 ESP forecasts. • For ENSO-neutral years a climatological forecast is generally superior until December 1.
