
  1. The Economic Sentiment Indicator

  2. ESI (EFN Report)

  3. Comments • Forecasting model: integrated process • Forecast intervals spread out rapidly • Point forecasts do not converge to the mean • Series is bounded: integration = misspecification • The 40% interval (and a fortiori higher-confidence intervals) contains both trend directions • Impossible to infer the occurrence of a turning point • Forecasts are uninformative

  4. An Artificial Example (Dynamics Close to Business Surveys): Model Misspecification and Multi-Step-Ahead Forecasting

  5. Artificial Time Series (Close to the KOF Economic Barometer)

  6. Series Dynamics and Characteristics • Bounded time series • As are many important economic time series, e.g. rates (GDP growth rate, unemployment rate, interest rates, log-returns, …) • Best forecast is known • Identify an ARIMA forecasting model • TRAMO, X-12-ARIMA

  7. Forecasting-Model and Diagnostics

  8. Problems • In applications TRAMO and/or X-12-ARIMA often identify airline models • Interesting series (for example rates) are often bounded, see the examples below • Here: the model is an I(2) process • Misspecification cannot be detected • One-step-ahead forecasts are good • σ = 1.16 (true innovations are N(0,1)) • What are the consequences? • Multi-step-ahead perspective
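To make the multi-step-ahead consequences concrete, here is a minimal numpy sketch of the point-forecast function of a pure I(2) model, ARIMA(0,2,0). (This is a simplified stand-in for the airline models the slides discuss, not the presenter's actual code; the sinusoidal series is an illustrative stand-in for a bounded business-survey indicator.) The forecast function is a straight line through the last two observations, so it can never bend back after a turning point:

```python
import numpy as np

def i2_point_forecasts(y, h_max):
    """Multi-step point forecasts of a pure I(2) model (ARIMA(0,2,0)):
    setting all future second differences to zero gives
    y_hat(t+h) = y_t + h * (y_t - y_{t-1}),
    a straight-line extrapolation of the last local slope."""
    slope = y[-1] - y[-2]
    return np.array([y[-1] + h * slope for h in range(1, h_max + 1)])

# Bounded cyclical series (illustrative stand-in for a survey indicator);
# the forecast origin sits just after a peak (a turning point).
t = np.arange(0, 61)
y = 100 + 10 * np.sin(2 * np.pi * t / 60)     # peak at t = 15
fc = i2_point_forecasts(y[:17], h_max=12)     # forecast from just past the peak
# The true series curves back down, but the forecast keeps the last
# local slope forever: it is exactly linear in the horizon h.
```

Because the forecast is linear in h, it cannot capture the curvature of the cycle near a turning point, which is the failure mode the following slides illustrate.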

  9. Multi-step ahead Forecasts: 0 months after TP1 of cycle

  10. Multi-step ahead Forecasts: 6 months after TP1 of cycle

  11. Multi-step ahead Forecasts: 1 year after TP1 of cycle

  12. Multi-step ahead Forecasts: 20 months after TP1 and 0 months after TP2

  13. Multi-step ahead Forecasts: 3 months after TP2

  14. Multi-step ahead Forecasts: 6 months after TP2

  15. Comments • One-step-ahead forecasts are good • σ = 1.16 • Poor multi-step-ahead performance • The first turning point TP1 is 'detected' only after 20 months • A false positive trend slope is suggested when the second (downturn) turning point TP2 occurs • The down-slope after TP2 is detected with a 6-month delay • The low-frequency part (the cycle) is completely misspecified • The model assumes the spectral mass lies at frequency zero

  16. Multi-step ahead 95% Interval-Forecasts: 6 months after TP2

  17. Multi-step ahead 50% Interval-Forecasts: 6 months after TP2

  18. Comments • Forecast intervals spread out much too rapidly • The true intervals are of constant width • The width of the misspecified intervals is O(h^{3/2}), where h is the forecasting horizon • It is impossible to assert the occurrence of TPs • Even 50% intervals are completely uninformative (they spread out too fast)
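The growth rate of the interval width follows directly from the psi-weights of the I(2) model. A small numpy sketch (with illustrative sigma = 1 and a 95% normal quantile, not the slides' exact settings) computes the closed form and shows where the h^{3/2} comes from:

```python
import numpy as np

def i2_interval_width(h_max, sigma=1.0, z=1.96):
    """h-step forecast-error interval width for a pure I(2) model
    (ARIMA(0,2,0)). The psi-weights of (1-B)^{-2} are psi_j = j + 1, so
        Var(e_h) = sigma^2 * sum_{k=1}^{h} k^2
                 = sigma^2 * h(h+1)(2h+1)/6  ~  sigma^2 * h^3 / 3,
    hence the interval width (2 * z * sd) grows like h^{3/2}."""
    h = np.arange(1, h_max + 1)
    var = sigma**2 * h * (h + 1) * (2 * h + 1) / 6.0
    return 2 * z * np.sqrt(var)

widths = i2_interval_width(18)
# widths[0] is the one-step width; widths[17] is already an order of
# magnitude larger, which is why even 50% intervals become uninformative.
```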

  19. Conclusions • Misspecification cannot be detected • Statistics based on one-step-ahead performance are not well suited to most practically relevant forecasting applications • One-step-ahead performance is good • The mean reversion of the time series cannot be captured by the misspecified model • Turning points are detected much too late • Performance at turning points is particularly poor • A linear forecast cannot capture curvature • Forecast intervals spread out much too rapidly • Completely uninformative (even at 50%)

  20. NN3

  21. Receive updates: • www.neural-forecasting-competition.com

  22. Competitors • Theta-model (winner of M3) • Forecast-Pro (best commercial package M3) • Autobox (ARIMA-based high-performer) • X-12-ARIMA • Latest neural net designs • …

  23. Data

  24. Data/Criterion • Lengths between 50 and 110 observations • Real monthly economic data (no artificial simulation context) • Finance • Macroeconomic data • With/without seasonality • MAPE on 1-18 step-ahead forecasts
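The competition criterion is the standard MAPE, which can be sketched in a few lines of numpy (the exact aggregation across the 111 series is not specified here, so this is the per-series version only):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error (in %), as used to score the
    1-18 step-ahead forecasts. Assumes the actuals are bounded away
    from zero, which holds for the level series used in NN3."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))
```

Usage: score one forecast origin by `mape(y[origin + 1 : origin + 19], y_hat)` with an 18-element forecast path `y_hat`.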

  25. NN3 Results. Results on the complete dataset of 111 time series: this represents the actual benchmark of the NN3 competition, as the reduced dataset of 11 series is included in the 111. Congratulations to all of you who were able to forecast this many time series automatically! Please find the results for the top 50% of submissions released below by name and description. All other participants must contact the competition organisers via email to agree the disclosure of their name and method with their rank.


  27. Method for NN3. Starting Point: Standard Approach (Flexible and Adaptive)

  28. Component- and State-Space-Models

  29. Interpretation • With season: SARMA(1,0,0)(1,0,0) • Without season: AR(2) (possible cycle) • Noise terms in the state equation • Variability (adaptivity) of the trend • Variability (adaptivity) of the trend growth • Stability of cycle or season • The model allows for changing levels, slopes and seasonals • Adaptivity is controlled by the variances of the noise terms in the state equation: the hyperparameters

  30. Model and Hyperparameters • State noise variances: • Adaptivity/stability of the components: 3 hyperparameters • AR(2) or SARMA(1,0,0)(1,0,0): • 2 model parameters • Initial states: • 2 parameters for trend and trend growth • Variances of the initial states: • 2 hyperparameters for trend and trend growth • Interpretation: shrinkage towards the initial solution
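The trend part of this component model can be sketched as a local linear trend in state-space form with a plain Kalman filter. This is a simplified illustration, not the presenter's implementation: the seasonal / AR(2) cycle component is omitted, and the variance values below are arbitrary. The state noise variances play the role of the adaptivity hyperparameters, and the initial-state variances implement the shrinkage toward the initial solution:

```python
import numpy as np

def llt_kalman_filter(y, var_level, var_slope, var_obs,
                      m0=None, P0_diag=(1e2, 1e2)):
    """Kalman filter for a local linear trend model:
        y_t     = level_t + obs noise           (variance var_obs)
        level_t = level_{t-1} + slope_{t-1} + level noise  (var_level)
        slope_t = slope_{t-1}               + slope noise  (var_slope)
    var_level / var_slope are adaptivity hyperparameters: larger values
    let level and slope change faster. P0_diag holds the initial-state
    variances (small values shrink harder toward m0).
    Returns one-step-ahead predictions and the last filtered state."""
    T = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    Z = np.array([1.0, 0.0])                 # observation vector
    Q = np.diag([var_level, var_slope])      # state noise covariance
    m = np.array([y[0], 0.0]) if m0 is None else np.asarray(m0, float)
    P = np.diag(P0_diag)
    preds = []
    for obs in y:
        m, P = T @ m, T @ P @ T.T + Q        # predict step
        preds.append(Z @ m)                  # one-step-ahead prediction
        S = Z @ P @ Z + var_obs              # innovation variance
        K = P @ Z / S                        # Kalman gain
        m = m + K * (obs - Z @ m)            # update step
        P = P - np.outer(K, Z @ P)
    return np.array(preds), m

# Illustrative data: a trending series with slope about 0.5.
y = np.cumsum(0.5 + 0.1 * np.random.default_rng(0).standard_normal(50))
one_step, last_state = llt_kalman_filter(y, 0.01, 0.001, 1.0)
```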

  31. Modifications of the Traditional Approach: Customization. Experience from Past Competitions, Own Experience

  32. Experience ↔ 6 Modifications • Past competitions • Fit the model according to the relevant criterion #3 • Performance depends on the forecasting horizon #4 • A combination of forecasts often improves over individual forecasts #5 • Own experience • Out-of-sample performance #1 • Robustification of the MSE #2 • Speed of the trend-slope estimate #6

  33. Estimation • Traditionally: the Kalman filter leads to ML estimates under Gaussianity • In-sample full-ML estimates • Modifications #1, #2 and #3 • Estimates are computed based on true out-of-sample performances • The criterion is robustified • The ML criterion is modified • MAPE • The last observations are more important than the first ones • Account for pure multi-step-ahead forecasting as well as the model structure (one-step-ahead ML criterion)

  34. Modifications: out-of-sample, robustification, Criterion

  35. Discussion: Robustification • Does not make sense for the traditional in-sample criterion • Outliers can be masked by parameter distortions • In the out-of-sample perspective outliers can be detected easily • Parameters are not distorted by the outlier • Using a robust scale estimate for the decision makes sense • Outliers are down-weighted (the psi-function vanishes) • Cost: extent (speed) of adaptivity
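One simple way to realize this down-weighting, sketched here as an illustration rather than the presenter's exact rule, is a hard-rejection (redescending) weight using the robust 2.5·median scale that a later slide mentions. The weight vanishes for flagged out-of-sample errors, and the median scale itself cannot be distorted by a single outlier:

```python
import numpy as np

def robust_weights(errors, c=2.5):
    """Zero-one weights mimicking a vanishing (redescending)
    psi-function: an out-of-sample forecast error is rejected when its
    magnitude exceeds c times the median absolute error (the 2.5*median
    rule mentioned on the slides; c is otherwise a tuning parameter)."""
    e = np.abs(np.asarray(errors, dtype=float))
    scale = np.median(e)          # robust scale estimate
    return np.where(e > c * scale, 0.0, 1.0)

def robust_mse(errors, c=2.5):
    """Mean-square criterion with outliers removed by robust_weights."""
    e = np.asarray(errors, dtype=float)
    w = robust_weights(e, c)
    return np.sum(w * e**2) / np.sum(w)
```

The cost noted on the slide shows up directly: a genuine fast level change is initially indistinguishable from an outlier and gets down-weighted too, slowing adaptation.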

  36. Discussion: Criterion • The criterion is ad hoc • First term: • Pure absolute multi-step-ahead out-of-sample forecasting performance • Absolute errors because of the MAPE • Accounts for the forecasting horizon • Down-weights the past

  37. Discussion: Criterion • Second term: • Traditional likelihood (up to robustification) • One-step ahead • Stabilizes the model parameters and the updating equations • Mean-square criterion ↔ errors are bounded • Locally mean-square through robustification • Avoids out-of-sample overfitting by the hyperparameters • Down-weights the past
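The two-term structure can be sketched as follows. Everything here is illustrative: the slides call the criterion ad hoc and do not give its exact form, so the exponential down-weighting, the weight alpha, and the function names are assumptions, not the presenter's specification:

```python
import numpy as np

def combined_criterion(multi_step_ape, one_step_sq, alpha=0.5, decay=0.99):
    """Illustrative two-term estimation criterion:
      term 1: absolute multi-step out-of-sample errors (MAPE-like),
      term 2: one-step squared errors (likelihood-like stabilizer),
    both with geometrically down-weighted past (recent errors count
    more). alpha sets the relative weighting of the two terms; the
    slides note this weighting is arbitrary."""
    ape = np.asarray(multi_step_ape, dtype=float)
    sq = np.asarray(one_step_sq, dtype=float)
    w1 = decay ** np.arange(len(ape))[::-1]   # newest error gets weight 1
    w2 = decay ** np.arange(len(sq))[::-1]
    return (np.sum(w1 * ape) / np.sum(w1)
            + alpha * np.sum(w2 * sq) / np.sum(w2))
```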

  38. Modifications #4 and #5: Forecast Horizon and Forecast Combination • Optimize the parameters specifically for each forecasting horizon • Robustified • Out-of-sample • 18 models • Combine these 18 forecast functions • Median • Accounts for numerical problems
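The combination step itself is a one-liner; this sketch only assumes the 18 horizon-specific models each output a full forecast path, which the slides do not state explicitly:

```python
import numpy as np

def combine_forecasts(paths):
    """Pointwise median of the forecast paths produced by the 18
    horizon-specific models (paths: n_models x n_horizons). The median
    is robust, so a single model that fails numerically and emits an
    absurd path cannot drag the combined forecast with it."""
    return np.median(np.asarray(paths, dtype=float), axis=0)
```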

  39. Modification #6: 'Speed' and 'Reliability' through a TP-Filter • A fast and reliable turning-point (TP) filter is computed • DFA • If the sign of the state-space trend slope differs from the sign of the real-time TP estimate, then the sign of the former is changed • The TP-filter is faster and more reliable
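The sign-correction rule reduces to a few lines (the function name and the zero-estimate convention are assumptions of this sketch; the DFA filter that produces `tp_estimate` is not reproduced here):

```python
import numpy as np

def correct_slope_sign(state_slope, tp_estimate):
    """Modification #6: if the sign of the state-space trend slope
    disagrees with the sign of the real-time TP-filter estimate, flip
    the slope's sign, trusting the faster and more reliable TP-filter.
    A zero TP estimate is treated as uninformative and leaves the
    slope unchanged."""
    if tp_estimate != 0 and np.sign(state_slope) != np.sign(tp_estimate):
        return -state_slope
    return state_slope
```

A later slide flags exactly this rule as arbitrary, which matches how blunt the correction is: the magnitude of the slope is kept and only its direction is overridden.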

  40. Open Issues/Problems

  41. Open Issues/Problems • Numerical optimization • Hyperparameters • Non-linearity due to the robustification • The median of 18 forecasting functions alleviates the problems (but is not optimal) • The choice of α (in the modified ML criterion) and of the robustification rule (2.5·median) is arbitrary • No experience before (and after) NN3 • Tuning parameters

  42. Open Issues/Problems • The optimization criterion is ad hoc • Term 1 accounts for pure forecasting • Term 2 accounts for the likelihood • Stability, overfitting • The relative weighting of the two terms is arbitrary

  43. Open Issues/Problems • Changing the sign of the trend slope if it disagrees with the TP-filter is arbitrary • The choice of model is to some extent arbitrary • AR(2) and SARMA(1,0,0)(1,0,0) • Should try ARMA for controlling the stability • No formal identification routine

  44. Open Issues/Problems • No treatment of irregular observations • Outliers • Level shifts • Transitory changes • No intervention variables • Difficult to evaluate the partial and/or overall contribution(s) of the proposed modifications • Multidimensional problem • Analysis on the NN3 data when released

  45. New Evidence/Principles

  46. Simplicity vs. Complexity • Goodrich (2003): "Perhaps the success of the Theta method depends upon its use of the global trend rather than the local… It strengthens the conviction that, ceteris paribus, simple methods outperform more complex ones." • Here: the trend slope of a local trend • The constraints of the TP-filter imply an 'immediate' local trend • Vanishing time delay in the pass-band • The method is complex • 9 parameters for the state space, 16 parameters for the TP-filter • Numerically difficult, computationally intensive

  47. Unusual Observations • Outliers: treatment of unusual observations • May be useful ex post (to improve parameter estimates) • Difficult to use ex ante at the current boundary (in forecasting) • Is an 'unusual' current observation an outlier (transitory) or a shift (permanent)? • Adaptive robust models based on out-of-sample performances are less sensitive

  48. Comparison with the Traditional BSM (Basic Structural Model): First 10 Series of NN3

  49. Traditional BSM • Estimation • In-sample mean-square full ML • Past performance is as important as present performance in the likelihood • No robustification • One-step-ahead criterion • No forecast combination • No TP-filter • Simpler models for cycle/seasonal

  50. Series 1