Time Series Forecasting: The Case for the Single Source of Error State Space Model


Presentation Transcript


  1. Time Series Forecasting: The Case for the Single Source of Error State Space Model J. Keith Ord, Georgetown University Ralph D. Snyder, Monash University Anne B. Koehler, Miami University Rob J. Hyndman, Monash University Mark Leeds, The Kellogg Group http://www.buseco.monash.edu.au/depts/ebs/pubs/wpapers/2005

  2. Outline of Talk • Background • General SSOE model • Linear and nonlinear examples • Estimation and model selection • General linear state space model • MSOE and SSOE forms • Parameter spaces • Convergence • Equivalent Models • Explanatory variables • ARCH and GARCH models • Advantages of SSOE

  3. Review Paper A New Look at Models for Exponential Smoothing (2001). JRSS, Series D [The Statistician], 50, 147-159. Chris Chatfield, Anne Koehler, Keith Ord & Ralph Snyder

  4. Framework Paper A State Space Framework for Automatic Forecasting Using Exponential Smoothing (2002). International Journal of Forecasting, 18, 439-454. Rob Hyndman, Anne Koehler, Ralph Snyder & Simone Grose

  5. Some background • The Kalman filter: Kalman (1960), Kalman & Bucy (1961) • Engineering: Jazwinski (1970), Anderson & Moore (1979) • Regression approach: Duncan and Horn (JASA, 1972) • Bayesian Forecasting & Dynamic Linear Model: Harrison & Stevens (1976, JRSS B); West & Harrison (1997) • Structural models: Harvey (1989) • State Space Methods: Durbin & Koopman (2001)

  6. Single Source of Error (SSOE) State Space Model • Developed by Snyder (1985) among others • Also known as the Innovations Representation • Any Gaussian time series has an innovations representation [SSOE looks restrictive but it is not!]

  7. Why a structural model? • Structural models enable us to formulate a model in terms of unobserved components and to decompose the model into those components • They also let us build schemes with non-linear error structures yet familiar forecast functions

  8. General Framework: Notation

  9. Single Source of Error (SSOE) State Space Model
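
In the notation of slide 8 (state vector x_t, single innovation ε_t), the general SSOE model of the framework paper can be written as

    y_t = w(x_{t-1}) + r(x_{t-1}) ε_t
    x_t = f(x_{t-1}) + g(x_{t-1}) ε_t,   ε_t ~ NID(0, σ²)

The same innovation ε_t drives both the measurement and the state transition, hence "single source of error".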

  10. Simple Exponential Smoothing (SES)
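
For SES the state is the scalar level ℓ_t, and the SSOE model is

    y_t = ℓ_{t-1} + ε_t
    ℓ_t = ℓ_{t-1} + α ε_t

with smoothing parameter α.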

  11. Another Form for State Equation
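
Substituting ε_t = y_t − ℓ_{t-1} from the measurement equation into the state equation yields the familiar smoothing recurrence

    ℓ_t = α y_t + (1 − α) ℓ_{t-1}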

  12. Reduced ARIMA Form ARIMA(0,1,1):
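
Differencing the measurement equation and eliminating the state gives

    y_t = y_{t-1} + ε_t − (1 − α) ε_{t-1}

i.e. an ARIMA(0,1,1) with moving-average coefficient θ = 1 − α (see slide 25).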

  13. Another SES Model
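
A standard second SES model in this literature replaces the additive error with a multiplicative (relative) one:

    y_t = ℓ_{t-1}(1 + ε_t)
    ℓ_t = ℓ_{t-1}(1 + α ε_t)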

  14. Same State Equation for Second Model
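
Since ε_t = (y_t − ℓ_{t-1})/ℓ_{t-1}, the multiplicative state equation collapses to the same recurrence as before:

    ℓ_t = ℓ_{t-1} + α(y_t − ℓ_{t-1}) = α y_t + (1 − α) ℓ_{t-1}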

  15. Reduced ARIMA Model for Second SES Model: NONE (the multiplicative-error model has no ARIMA reduced form)

  16. Point Forecasts for Both Models
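
Because E(ε_t) = 0 in both models, the point forecasts coincide:

    ŷ_{t+h|t} = ℓ_t,   h = 1, 2, …

The models differ only in their prediction intervals: the multiplicative form gives intervals whose width scales with the level.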

  17. SSOE Model for Holt-Winters Method
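
A sketch of the additive seasonal version (seasonal period m); the multiplicative variants fit the same single-innovation pattern:

    y_t = ℓ_{t-1} + b_{t-1} + s_{t-m} + ε_t
    ℓ_t = ℓ_{t-1} + b_{t-1} + α ε_t
    b_t = b_{t-1} + β ε_t
    s_t = s_{t-m} + γ ε_t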

  18. Likelihood, Exponential Smoothing, and Estimation
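
In the linear homoscedastic case the one-step forecast errors are the innovations ε_t themselves, so conditional on the initial state the Gaussian likelihood reduces to least squares on those errors:

    σ̂² = (1/n) Σ ε̂_t²,   −2 log L = n log σ̂² + constant

A minimal Python sketch of this idea for SES (hypothetical data; the initial level is fixed at y[0] for simplicity, though in practice it is estimated jointly with α):

    import numpy as np

    def ses_sse(alpha, y, level0):
        # Sum of squared one-step errors for the SSOE SES model.
        level = level0
        sse = 0.0
        for obs in y:
            e = obs - level        # innovation: e_t = y_t - l_{t-1}
            sse += e * e
            level += alpha * e     # state update: l_t = l_{t-1} + alpha * e_t
        return sse

    # Hypothetical series; alpha searched over the SSOE space (0, 2)
    y = np.array([10.1, 10.4, 9.8, 10.9, 11.2, 10.7, 11.5])
    alphas = np.linspace(0.01, 1.99, 199)
    best = min(alphas, key=lambda a: ses_sse(a, y, y[0]))
    print(f"alpha minimizing SSE (Gaussian ML given l_0): {best:.2f}")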

  19. Model Selection • p is the number of free states plus the number of parameters
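
With p so defined, the criteria take their usual forms:

    AIC = −2 log L + 2p
    BIC = −2 log L + p log n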

  20. General Linear State Space Model
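
A sketch of the general linear model (the working paper's exact parameterization may differ in detail):

    y_t = w′ x_{t-1} + ε_t
    x_t = F x_{t-1} + η_t

with ρ the correlation between the measurement disturbance ε_t and the state disturbances η_t. The special cases of slide 21 follow: ρ = 0 gives the MSOE form (independent disturbances), ρ = 1 the SSOE form (η_t = g ε_t).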

  21. Special Cases

  22. Linear SSOE Model
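
Perfect correlation of the disturbances gives the linear SSOE (innovations) form:

    y_t = w′ x_{t-1} + ε_t
    x_t = F x_{t-1} + g ε_t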

  23. SSOE for Holt’s Linear Trend Exponential Smoothing
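
In SSOE form, Holt's linear trend method is (level ℓ_t, trend b_t):

    y_t = ℓ_{t-1} + b_{t-1} + ε_t
    ℓ_t = ℓ_{t-1} + b_{t-1} + α ε_t
    b_t = b_{t-1} + β ε_t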

  24. MSOE Model for Holt's Linear Trend Exponential Smoothing
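
The MSOE counterpart is the local linear trend model in the style of Harvey (1989), with three mutually independent disturbances (timing conventions vary slightly across authors):

    y_t = ℓ_{t-1} + b_{t-1} + ε_t
    ℓ_t = ℓ_{t-1} + b_{t-1} + ξ_t
    b_t = b_{t-1} + ζ_t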

  25. Parameter Space 1 • Both correspond to the same ARIMA model in the steady state BUT parameter spaces differ • SSOE has same space as ARIMA • MSOE space is subset of ARIMA • Example: for ARIMA(0,1,1), θ = 1 − α • MSOE has 0 < α < 1 • SSOE has 0 < α < 2, equivalent to −1 < θ < 1

  26. Parameter Space 2 • In general, ρ = 1 (SSOE) yields the same parameter space as ARIMA, while ρ = 0 (MSOE) yields a smaller space • No other value of ρ yields a larger parameter space than does ρ = 1 [Theorems 5.1 and 5.2] • Restricted parameter spaces may lead to poor model choices [e.g. Morley et al., 2002]

  27. Convergence of the Covariance Matrix for Linear SSOE
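
In the linear SSOE model, once the initial state is pinned down the filtered state is an exact function of the data,

    x_t = F x_{t-1} + g(y_t − w′ x_{t-1})

and when x_0 must be estimated, the conditional covariance of x_t given the observations converges to zero as t grows (see slide 33).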

  28. Convergence 2 • The practical import of this result is that, provided t is not too small, we can approximate the state variable by its estimate • That is, heuristic forecasting procedures such as exponential smoothing, which update forecasts in the same form as the state equations, are thereby validated

  29. Equivalence • Equivalent linear state space models (West and Harrison) will give rise to the same forecast distribution. • For the MSOE model the equivalence transformation H of the state vector typically produces a non-diagonal covariance matrix. • For the SSOE model the equivalence transformation H preserves the perfect correlation of the state vectors.

  30. Explanatory Variables
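
A sketch of one natural formulation (the paper's exact treatment may differ): put the regressors z_t in the measurement equation and keep the innovations structure,

    y_t = w′ x_{t-1} + z_t′ β + ε_t,   x_t = F x_{t-1} + g ε_t

where β can be handled as extra, constant states.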

  31. ARCH Effects
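
Because a single innovation drives the model, conditional heteroscedasticity can be grafted on directly; e.g. a GARCH(1,1) recursion for the innovation variance (a sketch, not necessarily the talk's parameterization):

    ε_t = σ_t z_t,   z_t ~ NID(0, 1)
    σ_t² = a₀ + a₁ ε_{t-1}² + b₁ σ_{t-1}²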

  32. Advantages of SSOE Models • The mapping from model to forecasting equations is direct and easy to see • ML estimation can be applied directly, without the need for the Kalman updating procedure • Nonlinear models are readily incorporated into the model framework

  33. Further Advantages of SSOE Models • Akaike and Schwarz information criteria can be used to choose models, including choices among models with different numbers of unit roots in the reduced form • Largest parameter space among state space models. • In Kalman filter, the covariance matrix of the state vector converges to 0.
