
Judging the credibility of climate projections



Presentation Transcript


  1. Judging the credibility of climate projections Chris Ferro (University of Exeter) Tom Fricker, Fredi Otto, Emma Suckling Uncertainty in Weather, Climate and Impacts (13 March 2013, Royal Society, London)

  2. Credibility and performance Many factors may influence credibility judgments, but should do so if and only if they affect our expectations about the performance of the predictions. Identify credibility with predicted performance. We must be able to justify and quantify (roughly) our predictions of performance if they are to be useful.

  3. Performance-based arguments Extrapolate past performance on the basis of knowledge of the climate model and the real climate (Parker 2010). Define a reference class of predictions (including the prediction in question) whose performances you cannot reasonably order in advance, measure the performance of some members of the class, and infer the performance of the prediction in question. Popular for weather forecasts (many similar forecasts) but of less use for climate predictions (Frame et al. 2007).

  4. Climate predictions Few past predictions are similar to future predictions, so performance-based arguments are weak for climate. Other data may still be useful: short-range predictions, in-sample hindcasts, imperfect model experiments etc. These data are used by climate scientists, but typically to make qualitative judgments about performance. We propose to use these data explicitly to make quantitative judgments about future performance.

  5. Bounding arguments 1. Form a reference class of predictions that does not contain the prediction in question. 2. Judge if the prediction problem in question is harder or easier than those in the reference class. 3. Measure the performance of some members of the reference class. This provides a bound for your expectations about the performance of the prediction in question.

  6. Bounding arguments S = performance of a prediction from reference class C S′ = performance of the prediction in question, from C′ Let performance be positive with smaller values better. Infer probabilities Pr(S > s) from a sample from class C. If C′ is harder than C then Pr(S′ > s) > Pr(S > s) for all s. If C′ is easier than C then Pr(S′ > s) < Pr(S > s) for all s.
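The bounding inequality on slide 6 can be sketched numerically. The performance distribution, sample size, and threshold below are illustrative assumptions, not values from the talk; only the logic (estimate Pr(S > s) empirically from the reference class, then read it as a bound for the harder problem) comes from the slide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical performances (absolute errors, smaller is better)
# measured for members of the reference class C.
s_ref = rng.gamma(shape=2.0, scale=0.1, size=40)

def exceedance_prob(sample, s):
    """Empirical estimate of Pr(S > s) from a sample of performances."""
    return np.mean(sample > s)

# If the prediction in question (class C') is judged harder than C,
# the slide's inequality Pr(S' > s) > Pr(S > s) turns this estimate
# into a lower bound on the probability of a large error.
threshold = 0.3  # illustrative value of s
lower_bound = exceedance_prob(s_ref, threshold)
print(f"Pr(S > {threshold}) ~ {lower_bound:.2f} (lower bound on Pr(S' > {threshold}))")
```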

  7. Hindcast example Global mean, annual mean surface air temperatures. Initial-condition ensembles of HadCM3 launched every year from 1960 to 2000. Measure performance by the absolute errors and consider a lead time of 10 years. 1. Perfect model: try to predict ensemble member 1 2. Imperfect model: try to predict the CNRM-CM5 model 3. Reality: try to predict the HadCRUT3 observations
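The hindcast set-up on slide 7 can be mimicked with a small sketch. The arrays below are synthetic stand-ins (real HadCM3 ensemble output and HadCRUT3 observations would replace them); the sketch only shows the performance measure from the slide, the absolute error at a 10-year lead:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensembles launched each year from 1960 to 2000, verified at a
# 10-year lead time; member count and noise levels are assumptions.
launch_years = np.arange(1960, 2001)
n_members = 9

# Synthetic "truth" (global/annual mean temperature anomaly at the
# 10-year lead) and synthetic ensemble predictions around it.
truth = 0.01 * (launch_years + 10 - 1960) + rng.normal(0.0, 0.05, launch_years.size)
ensemble = truth[:, None] + rng.normal(0.0, 0.1, (launch_years.size, n_members))

# Performance measure from the slide: absolute error of the ensemble mean.
abs_errors = np.abs(ensemble.mean(axis=1) - truth)
print(f"mean |error| at 10-year lead: {abs_errors.mean():.3f}")
```

One array of errors like this per target (ensemble member 1, CNRM-CM5, HadCRUT3) gives the three samples compared in the plots that follow.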

  8. Hindcast example

  9. 1. Errors when predict HadCM3

  10. 2. Errors when predict CNRM-CM5

  11. 3. Errors when predict reality

  12. Recommendations Use existing data explicitly to justify quantitative predictions of the performance of climate projections. Collect data on more predictions, covering a range of physical processes and conditions, to tighten bounds. Design hindcasts and imperfect model experiments to be as similar as possible to future prediction problems. Train ourselves to be better judges of relative performance, especially to avoid over-confidence.

  13. References
  Otto FEL, Ferro CAT, Fricker TE, Suckling EB (2012) On judging the credibility of climate projections. Climatic Change, submitted
  Allen M, Frame D, Kettleborough J, Stainforth D (2006) Model error in weather and climate forecasting. In Predictability of Weather and Climate (eds T Palmer, R Hagedorn), Cambridge University Press, 391-427
  Frame DJ, Faull NE, Joshi MM, Allen MR (2007) Probabilistic climate forecasts and inductive problems. Phil. Trans. Roy. Soc. A 365, 1971-1992
  Knutti R (2008) Should we believe model predictions of future climate change? Phil. Trans. Roy. Soc. A 366, 4647-4664
  Parker WS (2010) Predicting weather and climate: uncertainty, ensembles and probability. Stud. Hist. Philos. Mod. Phys. 41, 263-272
  Parker WS (2011) When climate models agree: the significance of robust model predictions. Philos. Sci. 78, 579-600
  Smith LA (2002) What might we learn from climate forecasts? Proc. Natl. Acad. Sci. 99, 2487-2492
  Stainforth DA, Allen MR, Tredger ER, Smith LA (2007) Confidence, uncertainty and decision-support relevance in climate predictions. Phil. Trans. Roy. Soc. A 365, 2145-2161

  14. Future developments Bounding arguments may help us to form fully probabilistic judgments about performance. Let s = (s1, ..., sn) be a sample from S ~ F(∙|p). Let S′ ~ F(∙|cp) with priors p ~ g(∙) and c ~ h(∙). Then Pr(S′ ≤ s|s) = ∫∫F(s|cp)h(c)g(p|s)dcdp. Bounding arguments refer to prior beliefs about S′ directly rather than indirectly through beliefs about c.
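The double integral on slide 14 is straightforward to approximate by Monte Carlo. The distributional choices below are purely illustrative assumptions (exponential F, lognormal stand-ins for the posterior g(p|s) and the prior h(c)); the slide specifies only the structure Pr(S' <= s | s) = ∫∫ F(s|cp) h(c) g(p|s) dc dp:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed performance model: F(. | theta) is the exponential CDF with
# mean theta, so larger theta means larger (worse) errors.
def F(s, theta):
    return 1.0 - np.exp(-s / theta)

n_draws = 100_000
p_draws = rng.lognormal(mean=np.log(0.2), sigma=0.2, size=n_draws)  # stand-in for g(p|s)
c_draws = rng.lognormal(mean=np.log(1.5), sigma=0.3, size=n_draws)  # stand-in for h(c); mass above 1 = "harder"

s = 0.3  # illustrative performance threshold
# Monte Carlo estimate of the double integral: average F(s | c p)
# over joint draws of (c, p).
prob = F(s, c_draws * p_draws).mean()
print(f"Pr(S' <= {s} | data) ~ {prob:.3f}")
```

Because the assumed h(c) puts most mass on c > 1, the estimate comes out below the corresponding probability for the reference class itself, which is the bounding behaviour the slides describe.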

  15. Predicting performance We might try to predict performance by forming our own prediction of the predictand. If we incorporate information about the prediction in question then we must already have judged its credibility; if not then we ignore relevant information. Consider predicting a coin toss. Our own prediction is Pr(head) = 0.5. Then our prediction of the performance of another prediction is bound to be Pr(correct) = 0.5 regardless of other information about that prediction.
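The coin-toss point on slide 15 can be spelled out in a couple of lines. Under the stated set-up (our own belief is Pr(head) = 0.5), our probability that any other categorical forecast is correct is forced to 0.5, whatever else we know about that forecast:

```python
# Our own prediction of the toss, as on the slide.
p_head = 0.5

# Probability that another forecaster's categorical prediction is
# correct, given only our own belief about the outcome.
p_correct = {
    forecast: (p_head if forecast == "head" else 1.0 - p_head)
    for forecast in ("head", "tail")
}
print(p_correct)  # 0.5 for both forecasts
```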

  16. 1. Perfect model errors

  17. 1. Perfect model errors
