
Presentation Transcript


1. Climate Forecasting: what is required for scientific forecasting of climate change? Professor Robert Fildes, Nikos Kourentzes, Lancaster Centre for Forecasting

2. A Scientific Forecasting Procedure – the minimum requirements?
• Define the problem
  • Decision problem/context: stakeholders, time horizon
  • Variable(s) of interest
• Define the set of models under consideration
  • A benchmark method should be included (e.g. naïve no change)
• Define the data set to be used
  • In model estimation
  • In model selection
  • In validation
• Where judgment is used, its use should be explicit and structured
  • Affecting choice of data, initial conditions, parameter estimates

3. To be effective a model should be (Little, 1970):
• Complete on ‘important’ dimensions
• Comprehensible to the stakeholders
• Robust
• Controllable – “quick answers to scenarios”
Climate modellers have only focussed on ‘completeness’.

4. Judging the Forecasts – and judging the models
[Slide diagram: Inputs, Initial Conditions, Outputs]
• Define the criteria to be used in choosing between models/procedures
• Do the models reproduce:
  • Stylised facts
  • Historical trends (surface and tropospheric)
  • Extreme events, cyclical ‘anomalies’, e.g. El Niño
  • Correspondence with established sub-models
• Input-output correspondence
• Forecast encompassing (a sketch of one standard encompassing check follows this slide)
• Ex ante forecasting accuracy over the chosen horizons
  • Better than simpler alternatives
Many multivariate observations imply many anomalies.
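One way to make the ‘forecast encompassing’ criterion concrete is the standard encompassing regression, in which model 1’s forecast errors are regressed on the difference between the two competing forecasts; a coefficient near zero suggests model 2 adds no useful information. The sketch below is illustrative only – the function name and the numbers are made up for the example – and is not part of the presentation.

```python
import numpy as np

def encompassing_lambda(actuals, f1, f2):
    """Estimate lambda in: e1_t = lambda * (f2_t - f1_t) + error,
    where e1_t are model 1's forecast errors. A lambda near zero
    suggests model 1's forecasts already encompass model 2's."""
    e1 = np.asarray(actuals, dtype=float) - np.asarray(f1, dtype=float)
    d = (np.asarray(f2, dtype=float) - np.asarray(f1, dtype=float)).reshape(-1, 1)
    lam, *_ = np.linalg.lstsq(d, e1, rcond=None)
    return float(lam[0])

# Hypothetical, made-up forecast series purely for illustration
actuals  = np.array([0.10, 0.14, 0.12, 0.18, 0.20])
f_model1 = np.array([0.09, 0.13, 0.13, 0.16, 0.19])
f_model2 = np.array([0.11, 0.15, 0.11, 0.19, 0.21])
print(encompassing_lambda(actuals, f_model1, f_model2))
```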

5. Global Warming Forecasts: Green & Armstrong’s Critique
The objective: to develop the ‘best’ policies to deal with future climate. Forecasts are therefore needed of:
• Long-term climate (mean global temperature)
• Effects of temperature change on humans
• Estimated costs and benefits of alternative policies
They claim: “A policy should only be implemented if valid and reliable forecasts of the effects [..] and the forecasts show net benefit.”
Green, K. C. & Armstrong, J. S. (2007). Global warming: forecasts by scientists versus scientific forecasts. Energy & Environment.

6. Green & Armstrong’s Critique II
• To be regarded as ‘based on what is known about scientific forecasting’, a forecasting procedure should adhere to the established Principles of Forecasting (Armstrong, ed., 2001)
• 140 principles across a wide range of forecasting problems/situations, e.g.:
  • Use all important variables
  • Select simple methods unless empirical evidence calls for a more complex approach
  • Use ex ante error measures
• Focus on average global temperature predictions

7. Forecasting Global Temperature – the basis of fears of global warming
• The IPCC (Intergovernmental Panel on Climate Change) provides the most authoritative forecasts
• The 4th report, released in 2007, made the following prediction: for scenarios B1 & A1T, growth per decade = 0.21°C
• It doesn’t sound like much, but that is roughly 2.1°C over a century!

8. Basis of IPCC Forecasts: The Models
Based on:
• Coupled atmosphere-ocean General Circulation Models (AOGCM: Wikipedia)
• Climate models are systems of differential equations based on the basic laws of physics, fluid motion, and chemistry
• To “run” a model, scientists divide the planet into a 3-dimensional grid (plus time), apply the basic equations, and evaluate the results
• Atmospheric models calculate winds, heat transfer, radiation, relative humidity, and surface hydrology within each grid cell and evaluate interactions with neighbouring points
• ‘Prognostic’ equations roll the current system state forward over time (an illustrative toy example follows this slide)
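The ‘prognostic equations roll the state forward’ idea can be illustrated, at a vastly smaller scale than an AOGCM, with a zero-dimensional energy-balance model stepped forward by explicit Euler integration. This is a toy sketch with assumed parameter values, not any model the IPCC uses.

```python
import numpy as np

# Toy zero-dimensional energy-balance model (assumed, illustrative values):
#   C dT/dt = S (1 - albedo) / 4 - eps * sigma * T^4
S = 1361.0       # solar constant, W/m^2
ALBEDO = 0.30    # planetary albedo (assumed)
EPS = 0.61       # effective emissivity (assumed; stands in for the greenhouse effect)
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4
C = 4.0e8        # effective heat capacity, J/m^2/K (assumed: ~100 m ocean mixed layer)

def step(T, dt):
    """One explicit Euler step of the prognostic equation:
    roll the current state T forward by dt seconds."""
    net_flux = S * (1.0 - ALBEDO) / 4.0 - EPS * SIGMA * T**4
    return T + dt * net_flux / C

T = 288.0                  # initial condition, K
dt = 86400.0               # one-day time step
for _ in range(365 * 50):  # integrate 50 years forward
    T = step(T, dt)
print(f"Temperature after 50 simulated years: {T:.2f} K")
```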

9. Solving the Models
• Database selection
• Deterministic solutions to non-linear differential equations
  • Computationally intensive
  • Reliability issues (both coding and numerical)
• Parameterisation is based on experimentally established constants
  • Models are overparameterised
  • Varying degrees of precision in parameter estimates
  • Fitting used within constraints: some parameters pre-fixed
• Initial conditions to be specified

10. Validation Claims by the IPCC (Ch. 8, Working Group 1: “Climate Models and their Evaluation”)
• Combined models “encompassing a variety of perspectives makes it significantly less likely that significant model errors are being overlooked”
• Claim to test models’ ability to simulate ‘present climate’
• Apparently support forecast validation tests (p. 595)
• Claim forecast evaluation supports (some of the) models’ ability to represent key input-output (forcing) relationships
  • But this may be less relevant to long-term climate response
• Notes that tuning a model to give a good representation “cannot be used to build confidence in the model”

11. Green and Armstrong – their critique of the modelling
• “Climate models are mathematical ways for the experts to express an opinion” – and experts have limited forecasting abilities!
• Many ‘principles’ are ignored in their construction and presentation
But:
• What (forecasting) models pass these tests?
The issue:
• What elements are validated?
• If we accept G&A, what ‘principles’ are followed?

12. The Key Principles Transgressed!
• Biased choice of data
• No clear forecast horizon(s)
• No clearly defined estimation/fitting procedure
• No discussion of judgmental elements
• No benchmark
• No forecast tests
But G&A do not propose an alternative. At the same time:
• The issue of bias is identified (current ‘best’ data)
• Long term (20 years?)
• Captures many stylised facts: El Niño, forcings due to volcanic eruptions, recent trends
• But misses others: tropospheric temperatures increasing
• Beats G&A’s benchmark (probably) in forecasting tests! But this is not established

13. Data – dealing with diverse measures: HadCRUT3 and NASA [charts of the two annual temperature series in the original slide]

14. A ‘contrarian’ controversy!
But for forecasting purposes the relationship between the two series is strong (correlation 0.98):
• Both show the same stylised facts
• The supposed ‘cessation of global warming’ in 1998 says little (at this point) about 20-year forecasts
• For forecasting purposes, use both; ‘experts’ should be able to identify the most appropriate series
• Either average the forecasts or average the series (a sketch of both options follows this slide)
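A minimal sketch of the two combination options mentioned above, using short made-up arrays standing in for the two annual anomaly series; it is illustrative only, not the authors’ code.

```python
import numpy as np

# Hypothetical stand-ins for two annual temperature-anomaly series
series_a = np.array([0.32, 0.35, 0.41, 0.38, 0.45])
series_b = np.array([0.30, 0.37, 0.43, 0.40, 0.47])

# Check the series agree closely enough for forecasting purposes
corr = np.corrcoef(series_a, series_b)[0, 1]
print(f"Correlation between the two series: {corr:.2f}")

# Option 1: average the series, then forecast the combined series
combined_series = (series_a + series_b) / 2.0

# Option 2: forecast each series separately, then average the forecasts
# (the forecasts could come from any of the benchmark methods)
forecast_a = np.array([0.46, 0.47])  # hypothetical 2-step-ahead forecasts
forecast_b = np.array([0.48, 0.49])
combined_forecast = (forecast_a + forecast_b) / 2.0
print(combined_series, combined_forecast)
```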

15. Benchmark tests
• G&A propose a random walk benchmark
  • Effective in conditions of the high uncertainty that characterises climate change
• And argue that the choice of method should be based on ex ante forecast tests
  • Undermined, however, by structural change (Clements & Hendry)
Accepting G&A’s argument, we propose alternative benchmarks.

16. Choosing an Appropriate Benchmark
• Models should have the potential to capture stylised facts, e.g. a possible trend
  • G&A quote Carter as doubting we can estimate the current trend. Nonsense – see e.g. Garcia-Ferrer.
• Should be simple
• And well-validated in a wide range of circumstances
• The random walk provides an implausible benchmark
• We compare (a minimal sketch of these four benchmarks follows this slide):
  • Random walk
  • Simple exponential smoothing
  • Holt’s linear trend
  • Gardner’s damped trend
Note that none of the benchmarks would pass many of the principles either.
Key Forecasting Principle: select the model based on ex ante forecast accuracy.
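A minimal sketch of the four benchmarks, written as a single damped-Holt recursion: phi = 1 gives Holt’s linear trend, beta = 0 with a zero initial trend reduces it to simple exponential smoothing, and the naïve random walk simply repeats the last observation. The smoothing parameters and the example series are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def damped_holt(y, h, alpha=0.3, beta=0.1, phi=0.95, init_trend=None):
    """Exponential smoothing with a (possibly damped) linear trend.
    phi = 1                       -> Holt's linear trend
    beta = 0 with init_trend = 0  -> simple exponential smoothing
    Smoothing parameters are fixed illustrative values, not optimised.
    Returns the h-step-ahead forecast path."""
    y = np.asarray(y, dtype=float)
    level = y[0]
    trend = (y[1] - y[0]) if init_trend is None else init_trend
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    # Forecasts: level + (phi + phi^2 + ... + phi^k) * trend, for k = 1..h
    damp = np.cumsum(phi ** np.arange(1, h + 1))
    return level + damp * trend

def random_walk(y, h):
    """Naive no-change benchmark: repeat the last observation h times."""
    return np.full(h, float(np.asarray(y)[-1]))

# Hypothetical annual temperature anomalies (made-up stand-in data)
y = [-0.10, -0.05, 0.00, 0.05, 0.02, 0.10, 0.12, 0.15, 0.20, 0.22]
h = 20
print("Random walk :", random_walk(y, h)[:3])
print("SES         :", damped_holt(y, h, beta=0.0, phi=1.0, init_trend=0.0)[:3])
print("Holt        :", damped_holt(y, h, phi=1.0)[:3])
print("Damped trend:", damped_holt(y, h)[:3])
```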

17. The Forecast Comparisons
• Using global annual average temperature
• Error measures: MAE and MdAE (computed as in the sketch below)
  • It is upside error that causes the problem
• Choice of fitting period: 1850 to 1947
• Forecast horizons: 10 and 20 years
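Both error measures are simple to compute; a short sketch with hypothetical actuals and forecasts:

```python
import numpy as np

def mae(actuals, forecasts):
    """Mean Absolute Error."""
    return float(np.mean(np.abs(np.asarray(actuals) - np.asarray(forecasts))))

def mdae(actuals, forecasts):
    """Median Absolute Error - less sensitive to a few very bad forecasts."""
    return float(np.median(np.abs(np.asarray(actuals) - np.asarray(forecasts))))

# Hypothetical actuals and 10-year-ahead forecasts, for illustration only
actuals   = [0.41, 0.43, 0.40, 0.48, 0.52]
forecasts = [0.38, 0.40, 0.44, 0.45, 0.47]
print(mae(actuals, forecasts), mdae(actuals, forecasts))
```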

18. The Forecast Comparisons II
[The original slide shows: 10-year-ahead forecasting accuracy; 10-year-ahead forecasts; and the probability of a trend model producing more accurate forecasts than the random walk]
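The probability reported on the slide can be approximated by a rolling-origin evaluation: at each origin, fit both methods, forecast h years ahead, and count how often the trend model has the lower MAE. The sketch below is an assumed reconstruction of that idea, using statsmodels’ Holt implementation and a made-up trending series; it is not the authors’ actual experiment.

```python
import numpy as np
from statsmodels.tsa.holtwinters import Holt

def prob_trend_beats_rw(y, h=10, min_obs=30):
    """Share of rolling origins at which Holt's linear trend gives a
    lower mean absolute error than the random walk over the next h years."""
    y = np.asarray(y, dtype=float)
    wins, origins = 0, 0
    for origin in range(min_obs, len(y) - h):
        train, test = y[:origin], y[origin:origin + h]
        holt_fc = Holt(train).fit().forecast(h)   # fitted trend model
        rw_fc = np.full(h, train[-1])             # no-change benchmark
        if np.mean(np.abs(test - holt_fc)) < np.mean(np.abs(test - rw_fc)):
            wins += 1
        origins += 1
    return wins / origins

# Hypothetical trending series standing in for annual temperature anomalies
rng = np.random.default_rng(0)
y = 0.005 * np.arange(120) + rng.normal(0, 0.08, 120)
print(prob_trend_beats_rw(y, h=10))
```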

19. The Forecast Comparisons III
• Holt’s trend model produces more accurate forecasts than the random walk
• Using the Principles, it therefore should be used as the benchmark comparison for climate models
• The twenty-year forecast gives an increase of 0.55°C per decade
• This corresponds (unsurprisingly) to the IPCC forecast from the business-as-usual scenario
• Using another principle, we could combine forecasts from different methods – to produce a forecast disaster!
• The only good news is that the uncertainty in the ex ante error measures is high: there is some small possibility that there will be no increase at all.

20. Multivariate Models
[Slide diagram: Emissions, Cumulative CO2, Temperature]
• ‘Greenhouse’ mechanisms are well validated, both theoretically and empirically

21. Neural Networks
• Fitted to 1947
• Both annual emissions and cumulative CO2 included
• Lags to 5 years; lags to 30; trend pre-processing
• Stepwise regression of the lags used in model selection
• Multivariate forecasts are conditional (on an assumed emissions path)
(a minimal sketch of this kind of lagged-input network follows this slide)
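The slide gives only the outline of the network set-up. The sketch below shows the general idea – lagged emissions inputs feeding a small neural network, with simple linear detrending standing in for the ‘trend pre-processing’ – using scikit-learn’s MLPRegressor and made-up series, and omitting the stepwise lag selection. It is an illustration of the approach, not the authors’ model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged_features(x, lags):
    """Stack x_{t-1}, ..., x_{t-lags} as input columns for predicting y_t."""
    rows = [x[t - lags:t][::-1] for t in range(lags, len(x))]
    return np.array(rows)

# Hypothetical annual series standing in for emissions and temperature
rng = np.random.default_rng(1)
n = 150
emissions = np.cumsum(rng.normal(0.5, 0.2, n))                    # made-up input series
temperature = 0.002 * np.cumsum(emissions) / n + rng.normal(0, 0.05, n)

lags = 5
X = make_lagged_features(emissions, lags)
y = temperature[lags:]

# Simple trend pre-processing: remove a linear trend, model the residual
t = np.arange(len(y))
trend_coef = np.polyfit(t, y, 1)
y_detrended = y - np.polyval(trend_coef, t)

nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
nn.fit(X, y_detrended)

# Conditional one-step forecast: requires an assumed emissions path for the inputs
next_inputs = emissions[-lags:][::-1].reshape(1, -1)
next_trend = np.polyval(trend_coef, len(y))
print(nn.predict(next_inputs)[0] + next_trend)
```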

22. Multivariate Results – Ranks
• Univariate trend models for 10-year-ahead forecasts
• Multivariate for 20-year-ahead??
  • Trend dominating noise?

  23. The Scenarios compared

  24. The Scenarios compared

25. The Scenarios compared
• Scenario I: CO2 (and cumulative CO2) constant
• Scenario II: CO2 (and cumulative CO2) increase until 2010 and then constant

26. The Scenarios compared
• Scenario I: CO2 (and cumulative CO2) constant
• Scenario II: CO2 (and cumulative CO2) increase until 2010 and then constant
• Scenario III: CO2 (and cumulative CO2) increase
(a sketch of how such input paths could be constructed follows this slide)
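A sketch of how the three CO2 input paths could be constructed for conditional forecasting; the growth rate, horizon, and starting level are assumed for illustration and are not the values used in the presentation.

```python
import numpy as np

def build_scenarios(last_emission, horizon=20, growth=0.02, cutoff=2010, start_year=2008):
    """Return three hypothetical CO2 emission paths matching the slides:
    I   - emissions held constant,
    II  - emissions grow until the cutoff year, then held constant,
    III - emissions keep growing over the whole horizon."""
    years = np.arange(start_year + 1, start_year + 1 + horizon)
    scenario_1 = np.full(horizon, float(last_emission))
    scenario_3 = last_emission * (1 + growth) ** np.arange(1, horizon + 1)
    scenario_2 = np.where(years <= cutoff, scenario_3, scenario_3[years <= cutoff][-1])
    paths = {"I": scenario_1, "II": scenario_2, "III": scenario_3}
    # Cumulative CO2 input for each scenario (to be added to the historical total)
    cumulative = {name: np.cumsum(path) for name, path in paths.items()}
    return paths, cumulative

paths, cumulative = build_scenarios(last_emission=8.0)  # made-up emissions level
for name, path in paths.items():
    print(name, path[:5].round(2))
```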

27. What we’ve shown
• Climate models (AOGCMs) are built and validated with no regard to forecasting principles (supporting G&A)
  • We can infer that their forecasting performance is unlikely to be good
  • Little emphasis on validation
  • No forecasting accuracy tests
• A good benchmark for judging their accuracy for 20-year-ahead forecasts is Holt’s trend model
  • Over the next 20 years a benchmark forecast is an increase of 1°C
  • Low probability of no increase at all
And we offer:
• Limited evidence on the causality of CO2 increases causing warming
• If a causal input-output relationship can be established, linked to a control path, then cost-effective policies should be developed to mitigate the effects
• Current policy depends on current knowledge
• A no-change forecast is a forecast without support in the data or theory
• Input-output control relationships need establishing
