
Multi-Model Ensembling for Seasonal-to-Interannual Prediction: From Simple to Complex


Presentation Transcript


  1. Multi-Model Ensembling for Seasonal-to-Interannual Prediction: From Simple to Complex. Lisa Goddard and Simon Mason, International Research Institute for Climate & Society, The Earth Institute of Columbia University

  2. Benefit of Using Multiple Models: RPSS for 2m Temperature (JFM 1950-1995). Combining models reduces the deficiencies of individual models.
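RPSS (the ranked probability skill score) is the verification measure used throughout. As a point of reference, here is a minimal sketch of how RPSS for categorical (e.g. tercile) forecasts is typically computed relative to climatology; the function and variable names are illustrative, not from the presentation, and conventions vary slightly (e.g. averaging individual skill scores versus taking a ratio of mean scores, as done here).

```python
import numpy as np

def rpss(fcst_probs, obs_category, clim_probs=(1/3, 1/3, 1/3)):
    """Ranked Probability Skill Score for categorical (e.g. tercile) forecasts.

    fcst_probs   : (n_forecasts, n_categories) forecast probabilities
    obs_category : (n_forecasts,) index of the observed category
    clim_probs   : reference (climatological) probabilities
    """
    fcst_probs = np.asarray(fcst_probs, dtype=float)
    n, k = fcst_probs.shape
    # One-hot encode the observed category, then compare cumulative distributions.
    obs = np.eye(k)[np.asarray(obs_category)]
    rps_fcst = np.sum((np.cumsum(fcst_probs, axis=1) - np.cumsum(obs, axis=1)) ** 2, axis=1)
    clim = np.tile(clim_probs, (n, 1))
    rps_clim = np.sum((np.cumsum(clim, axis=1) - np.cumsum(obs, axis=1)) ** 2, axis=1)
    # 1 is perfect, 0 matches climatology, negative is worse than climatology.
    return 1.0 - rps_fcst.mean() / rps_clim.mean()

# Example: three tercile forecasts verified against observed categories 0, 1, 2.
probs = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]]
print(rpss(probs, [0, 1, 2]))
```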

  3. Varying Complexity in Building MM: Refining
     (1) Raw model probabilities (simple): tercile thresholds determined by model history; probabilities by counting ensemble members.
     (2) Recalibrated PDF probabilities (less simple):
         - Contingency-table (CT) recalibration: categorical probabilities determined by the category of the ensemble mean.
         - Uncertainty in forecast PDFs based on the ensemble-mean MSE.
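A minimal sketch of the two probability treatments above, assuming a single model's hindcast history is available for setting tercile thresholds. The Gaussian recalibration shown (forecast PDF centred on a regression-corrected ensemble mean, with spread taken from the hindcast error of that mean) is one plausible reading of "uncertainty based on ensemble-mean MSE", not the exact IRI implementation; all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def raw_tercile_probs(members, model_hist):
    """(1) Raw probabilities: count ensemble members in each tercile category,
    with tercile thresholds taken from the model's own history."""
    lo, hi = np.percentile(model_hist, [100 / 3, 200 / 3])
    members = np.asarray(members, dtype=float)
    p_below = np.mean(members < lo)
    p_above = np.mean(members > hi)
    return np.array([p_below, 1.0 - p_below - p_above, p_above])

def gaussian_recal_probs(ens_mean, hist_ens_mean, hist_obs):
    """(2) Recalibrated PDF probabilities: forecast PDF centred on the
    regression-corrected ensemble mean, with spread set by the error of the
    corrected ensemble mean over the hindcast period (its MSE)."""
    hist_obs = np.asarray(hist_obs, dtype=float)
    slope, intercept = np.polyfit(hist_ens_mean, hist_obs, 1)
    corrected_hist = intercept + slope * np.asarray(hist_ens_mean)
    sigma = np.sqrt(np.mean((hist_obs - corrected_hist) ** 2))
    lo, hi = np.percentile(hist_obs, [100 / 3, 200 / 3])  # observed tercile thresholds
    mu = intercept + slope * ens_mean
    p_below = norm.cdf(lo, mu, sigma)
    p_above = 1.0 - norm.cdf(hi, mu, sigma)
    return np.array([p_below, 1.0 - p_below - p_above, p_above])
```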

  4. Varying Complexity in Building MM: Combining
     (1) Pooled MM ensemble (simple): each model weighted equally.
     (2) Performance-based MM ensemble (less simple):
         - Bayesian: determine optimal weights for the AGCMs and climatology by maximizing the likelihood.
         - Multiple linear regression (MLR): obtain probabilities from the prediction error variance, using the first few moments of the ensemble distributions.
         - Canonical variate (CV) analysis: maximize discrimination between categories, using the first few moments of the ensemble distributions.
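Of the performance-based combinations listed above, the multiple linear regression variant is perhaps the easiest to sketch: regress the observations on the models' ensemble means over the hindcast, then convert the regression prediction and its error variance into categorical probabilities. This is a hedged illustration of the general idea under those assumptions, not the presenters' code.

```python
import numpy as np
from scipy.stats import norm

def mlr_combination_probs(model_means, hist_model_means, hist_obs):
    """Performance-based combination via multiple linear regression (sketch).

    model_means      : (n_models,) ensemble means for the target forecast
    hist_model_means : (n_years, n_models) hindcast ensemble means
    hist_obs         : (n_years,) verifying observations
    """
    hist_obs = np.asarray(hist_obs, dtype=float)
    X = np.column_stack([np.ones(len(hist_obs)), hist_model_means])
    coefs, *_ = np.linalg.lstsq(X, hist_obs, rcond=None)
    resid = hist_obs - X @ coefs
    sigma = np.sqrt(np.mean(resid ** 2))            # prediction error spread
    mu = coefs[0] + np.dot(coefs[1:], model_means)  # combined forecast mean
    lo, hi = np.percentile(hist_obs, [100 / 3, 200 / 3])
    p_below = norm.cdf(lo, mu, sigma)
    p_above = 1.0 - norm.cdf(hi, mu, sigma)
    return np.array([p_below, 1.0 - p_below - p_above, p_above])
```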

  5. Bayesian Model Combination (combining based on model performance). The climatological forecast serves as the "prior" and the GCM forecasts as the "evidence"; the two are combined into weighted "posterior" forecast probabilities by maximizing the likelihood.
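A rough sketch of likelihood-maximizing weights in the spirit of the Bayesian combination described above: climatology and each AGCM contribute categorical probabilities, and a single set of non-negative weights summing to one is chosen to maximize the likelihood of the observed categories over the training period. The actual Bayesian scheme may differ in detail; everything below is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def bayesian_weights(model_probs, obs_cat):
    """Find non-negative weights (summing to 1) for climatology plus each model
    by maximizing the likelihood of the observed categories over the hindcast.

    model_probs : (n_models, n_years, 3) categorical probabilities per model
    obs_cat     : (n_years,) observed category indices (0, 1, 2)
    """
    model_probs = np.asarray(model_probs, dtype=float)
    n_models, n_years, n_cat = model_probs.shape
    clim = np.full((1, n_years, n_cat), 1.0 / n_cat)       # climatological "prior"
    sources = np.concatenate([clim, model_probs], axis=0)  # (n_models + 1, n_years, 3)
    obs_idx = np.asarray(obs_cat)

    def neg_log_likelihood(w):
        w = np.abs(w) / (np.abs(w).sum() + 1e-12)           # keep weights on the simplex
        combined = np.tensordot(w, sources, axes=1)         # (n_years, 3) posterior probs
        p_obs = combined[np.arange(n_years), obs_idx]
        return -np.sum(np.log(np.clip(p_obs, 1e-12, None)))

    w0 = np.full(n_models + 1, 1.0 / (n_models + 1))
    res = minimize(neg_log_likelihood, w0, method="Nelder-Mead")
    w = np.abs(res.x) / np.abs(res.x).sum()
    return w  # first entry is the weight on climatology
```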

  6. Canonical Variate Analysis (combining based on observations relative to the models). The canonical variates are defined to maximize the ratio of between-category to within-category variance.
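The between/within-category variance ratio is the same objective as Fisher discriminant analysis, so the canonical variates can be sketched as a generalized eigenproblem on the between- and within-category scatter matrices. This is a generic illustration with assumed inputs (e.g. leading moments of the ensemble distributions as features), not the presentation's implementation.

```python
import numpy as np
from scipy.linalg import eigh

def canonical_variates(X, categories):
    """Weights that maximize the ratio of between-category to within-category
    variance of the projected data (Fisher-style discrimination).

    X          : (n_samples, n_features), e.g. leading moments of the ensembles
    categories : (n_samples,) observed category labels
    """
    X = np.asarray(X, dtype=float)
    categories = np.asarray(categories)
    grand_mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-category scatter
    Sb = np.zeros_like(Sw)                    # between-category scatter
    for c in np.unique(categories):
        Xc = X[categories == c]
        diff = Xc - Xc.mean(axis=0)
        Sw += diff.T @ diff
        d = (Xc.mean(axis=0) - grand_mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # Generalized eigenproblem; directions ordered by discrimination ratio.
    eigvals, eigvecs = eigh(Sb, Sw + 1e-9 * np.eye(Sw.shape[0]))
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order], eigvals[order]
```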

  7. NINO3.4

  8. DEMETER Data

  9. Equal Weighting Probabilistic forecasts were obtained by counting the number of ensemble members beyond the outer quartiles, and then averaging across the three models. The pooled ensemble is thus an equally-weighted combination of predictions uncorrected for model skill (although corrected for model drift). Reliability is good for all three categories.
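A minimal sketch of the equal-weighting procedure just described: per-model categorical probabilities from counting ensemble members beyond the outer quartiles of that model's own hindcast, followed by a straight average across models. Function names and the 25/50/25 category split are assumptions based on the slide text.

```python
import numpy as np

def quartile_count_probs(members, model_hist):
    """Per-model probabilities by counting members beyond the outer quartiles
    (below the 25th and above the 75th percentile of the model's hindcast)."""
    q25, q75 = np.percentile(model_hist, [25, 75])
    members = np.asarray(members, dtype=float)
    p_below = np.mean(members < q25)
    p_above = np.mean(members > q75)
    return np.array([p_below, 1.0 - p_below - p_above, p_above])

def pooled_probs(per_model_members, per_model_hist):
    """Equal-weight pooled ensemble: average the categorical probabilities
    of each model, with no weighting for model skill."""
    probs = [quartile_count_probs(m, h)
             for m, h in zip(per_model_members, per_model_hist)]
    return np.mean(probs, axis=0)
```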

  10. Canonical Variate Analysis

  11. Multiple Linear Regression

  12. Conclusions I

  13. Terrestrial Climate

  14. Data
      AGCMs: simulations for 1950-2000
      * CCM3 (NCAR) – 24 runs
      * ECHAM4.5 (MPI) – 24 runs
      * ECPC (Scripps) – 10 runs
      * GFDL AM2p12b – 10 runs
      * NCEP-MRF9 (NCEP/QDNR) – 10 runs
      * NSIPP1 (NASA-GSFC) – 9 runs
      Observations: 2m air temperature and precipitation from CRU-UEA (v2.0)

  15. Effect of Probability Treatment: JFM 2m air temperature over land

  16. Effect of Probability Treatment

  17. Effect of Combination Method

  18. Effect of Combination Method: Raw Probabilities

  19. Effect of Combination Method: PDF Probabilities

  20. Conclusions II
      • Reliability of N models pooled together, with uncalibrated PDFs, is better than any individual AGCM.
      • Gaussian (PDF) recalibration gives some improvement, but Bayesian recalibration gives the greatest benefit.
      • Reliability is typically gained at the expense of resolution.

  21. ISSUES
      • Number of models
      • Length of training period
      • When is simple complex enough?

  22. Effect of # of Models: 3 vs 6 AGCMs; 45-year training period. Different approaches are more similar with more models. (Robertson et al., 2004, MWR)

  23. RPSS for 2m Temperature: Bayesian MM from Raw Probs. – 6 models, 45-yr training (Jan-Feb-Mar and Jul-Aug-Sep panels)

  24. RPSS for Precipitation: Bayesian & Pooled MM from Raw Probs. – 6 models, 45-yr training (Jan-Feb-Mar and Jul-Aug-Sep panels)

  25. Reliability Diagrams
      * Several methods yield similar results over the United States.
      * MMs are remarkably reliable over the US, even though the accuracy is not high.

  26. CONCLUSIONS III
      • MM simulations over the US are remarkably reliable, even if they're not terribly accurate.
      • Simple pooling of the AGCMs, with uncalibrated probabilities, is equivalent to any of our techniques over the U.S.
      • A long history is not required, but a large number of models (>5?) is desirable.

  27. GRAND CONCLUSIONS
      • Overall, we find that recalibrating individual models gives better results than putting models together in a complex combination algorithm.
      • In comparing different recalibration/combination methods, we find that a gain in reliability is generally countered by a loss in resolution.
      • More complicated approaches are not necessarily better. This needs to be investigated for different forecast situations (i.e., variable, region, season).

  28. Ranked Probability Skill Scores: Temperature, Jan-Feb-Mar (1950-1995)

  29. Ranked Probability Skill Scores: Precipitation, Jul-Aug-Sep (1950-1999)
      Comparing treatment of probability (Recal vs. Raw):
      • Even with 6 models, there are regions of large negative RPSS, suggesting common model errors.
      • Recalibration reduces, but does not eliminate, large errors.
      • Some improvement of positive skill.

  30. Ranked Probability Skill Scores: Precipitation, Jul-Aug-Sep (1950-1999)
      Comparing combination methods:
      • Performance-based combination eliminates large errors.
      • More improvement of positive skill.
      • More cases of negative skill turned to positive skill.

  31. Canonical Variate Analysis
      A number of statistical techniques involve calculating linear combinations (weighted sums) of variables. The weights are defined to achieve specific objectives:
      • PCA – weighted sums maximize variance
      • CCA – weighted sums maximize correlation
      • CVA – weighted sums maximize discrimination

  32. Canonical Variate Analysis

  33. Effect of Probability Treatment
