
Quantitative Analysis






    1. Quantitative Analysis BEO6501

    2. Themes These lectures will deal with time-series analysis and forecasting. In time-series analysis we will deal with: The classical decomposition of time-series data into its component parts. Trend. Cyclical factors. Seasonal factors. Random factors. In forecasting we will deal with: Naïve forecasts. Using regression analysis as a means of forecasting.

    4. Decomposition Components: Long-term trend (T). I.e. The long-term general movement in the data. It could be linear (straight line). It could be non-linear (a smooth curve). Cyclical effect (C). I.e. The movement in the data that can be associated with the business cycle. Possibilities: Rising in the boom and falling during recession. E.g. New motor car sales. Falling in the boom and rising during recession. E.g. Unemployment. Regularity? Some cycles are short (a few years) and others much longer. Some recessions are very deep and others not. Some booms are very strong and others not.

    5. Decomposition (cont.) Components: Seasonal effects (S). I.e. Fairly regular variation that repeats itself every year. Regularity? Peaks and troughs occurring in the same month or quarter each year. Upswings and downswings of approximately the same amount each year. Random variation. I.e. Variation that is not one of the other three kinds. By definition, it is not predictable.

    12. Trends An equation. If an inspection of a graph suggests a straight-line trend, we can use regression. Dependent: Y = original data. Independent: X or t = 1, 2, 3, 4, 5, 6, … The independent variable is the number of the observation. Model: Y = β0 + β1t + ε. Error ε is the variation in the data that is not due to the trend. Use regression. Interpret the estimated value of β1 as the average increase (or decrease) in the trend per period. This output has been generated by SPSS.
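The trend regression on this slide can be sketched in a few lines. This is a minimal illustration, not the SPSS procedure the slide refers to; the annual sales figures below are made up for the example.

```python
# Hedged sketch: fitting the straight-line trend Y = b0 + b1*t by ordinary
# least squares. The y values are hypothetical; t is just 1, 2, 3, ...
y = [102.0, 108.0, 115.0, 119.0, 127.0, 131.0]   # hypothetical annual data
t = list(range(1, len(y) + 1))                    # observation numbers

n = len(y)
t_bar = sum(t) / n
y_bar = sum(y) / n

# slope b1: average increase in the trend per period
b1 = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y)) \
     / sum((ti - t_bar) ** 2 for ti in t)
b0 = y_bar - b1 * t_bar                           # intercept

trend = [b0 + b1 * ti for ti in t]                # fitted trend values
```

Here b1 plays the role of the slide's β1: each extra period adds b1 units to the trend on average.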

    18. Cyclical factors Estimation. First, we need to remove seasonal and random variation from the data. We can de-seasonalize the data using seasonal indices. Coming soon. We can apply a smoothing technique to remove random variation. In the data we have just used, seasonal effects are not an issue because the data is annual data. Assume too that the random variation is negligible. Second, we estimate the trend equation. We calculate the trend values corresponding to each actual observation. Third, we calculate (actual ÷ trend) × 100. The result is an index that shows the percentage above or below trend (100.0). 105.0 would mean 5% greater than trend.
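The third step above is a one-line ratio. A minimal sketch, using hypothetical actual values and fitted trend values:

```python
# Hedged sketch: cyclical index = (actual / trend) * 100.
# Both series below are hypothetical illustrations.
actual = [102.0, 108.0, 115.0, 119.0, 127.0, 131.0]
trend  = [102.3, 108.2, 114.1, 120.0, 125.9, 131.7]   # fitted trend values

cyclical_index = [round(a / t * 100, 1) for a, t in zip(actual, trend)]
# values above 100.0 are above trend; below 100.0 are below trend
```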

    21. Seasonal variation Aim. To construct seasonal indices. To measure average seasonal variation for each quarter or month. E.g. To aid forecasting. To de-seasonalize data. I.e. So that we can see underlying patterns in the data.

    22. Using Excel to de-seasonalize quarterly data. Any four consecutive quarters make up a full year, so the seasonal factors cancel out. Dividing by 4 returns the total to a quarterly scale (minus seasonal effects). Unfortunately, the dates are also averaged in this process and become misaligned with the original dates. We can fix the dates by averaging the results pair-wise. It is easy in a spreadsheet. Say the original data is in A2 to A121 (heading in A1). Remember the process. Add the first bunch of 4 and divide by 4. A2 to A5. Move the window down 1 and add the next bunch of 4 and divide by 4. A3 to A6. Add the results and divide by 2. A2 and A6 have been counted once and A3, A4 and A5 twice. The formula for this in cell B4? =(A2 + 2*A3 + 2*A4 + 2*A5 + A6)/8. Copy down.
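The same centered moving average can be sketched outside Excel. This mirrors the spreadsheet weights (A2 + 2·A3 + 2·A4 + 2·A5 + A6)/8; the quarterly data below is made up for illustration.

```python
def centered_moving_average(y):
    """Centered 4-quarter moving average:
    (y[i-2] + 2*y[i-1] + 2*y[i] + 2*y[i+1] + y[i+2]) / 8,
    matching the spreadsheet formula =(A2 + 2*A3 + 2*A4 + 2*A5 + A6)/8.
    The first two and last two quarters have no centered average."""
    return [(y[i-2] + 2*y[i-1] + 2*y[i] + 2*y[i+1] + y[i+2]) / 8
            for i in range(2, len(y) - 2)]

# hypothetical quarterly series (two years)
quarters = [10.0, 20.0, 15.0, 30.0, 12.0, 22.0, 17.0, 32.0]
cma = centered_moving_average(quarters)
```

Note that, as on the slide, the result is aligned with the original quarters: the first CMA value corresponds to the third observation.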

    23. Seasonal (cont.) Model: Y = Trend × Cyclical × Seasonal × Random. Or Y = T × C × S × R. The CMA process removes seasonality from the data. So CMA = T × C × R. Using division, it follows that: S = Y ÷ CMA. If there are strong seasonal patterns in the data: The S values for any given quarter (say December) will be similar. The S values for different quarters will vary. We average them for each quarter using the median (to avoid problems with outliers). The quarterly averages are seasonal indices. They can be multiplied by 100.
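The averaging step can be sketched directly. The S = Y ÷ CMA ratios below are hypothetical; the point is taking the median per quarter and scaling to index form, as the slide describes.

```python
from statistics import median

# Hedged sketch: median of the S = Y / CMA ratios for each quarter,
# scaled by 100 to give a seasonal index. Ratios are hypothetical.
ratios_by_quarter = {
    "Mar": [0.92, 0.90, 0.95],
    "Jun": [0.88, 0.93, 0.91],
    "Sep": [1.02, 0.99, 1.04],
    "Dec": [1.20, 1.18, 1.23],
}

# median (not mean) avoids problems with outliers, as on the slide
seasonal_index = {q: round(median(r) * 100, 1)
                  for q, r in ratios_by_quarter.items()}
```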

    24. Seasonal (cont.) Uses. Average seasonal effects. Say I(Dec) = 122.7. Seasonal factors are associated with a 22.7% increase in the December quarter. Say I(June) = 92.5. Seasonal factors are associated with a 7.5% decrease in the June quarter. De-seasonalizing. De-seasonalized = Observed ÷ Seasonal Index × 100. Look for SA or sa in a table or graph to indicate seasonal adjustment. Check out the next slide. What do they tell us about beer?
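De-seasonalizing is one division. A minimal sketch using the slide's December index of 122.7 and a made-up observed value:

```python
# Hedged sketch: de-seasonalized = observed / seasonal index * 100.
observed = 1350.0      # hypothetical December-quarter sales ($m)
index_dec = 122.7      # December seasonal index from the slide

deseasonalized = observed / index_dec * 100
# the December seasonal lift has been removed from the observed value
```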

    26. Forecasting The benefits of good forecasts. These ought to be obvious. If we have good information about the future we can plan for it. We might be able to take advantage of it. The approach. We try to identify the underlying patterns in time-series data. If these patterns seem to have been stable in the past, there is a reasonable chance (perhaps a very good chance) that they will continue. Sometimes they don't. Check out the next two slides.

    32. Forecasting (cont.) Regression one more time. We can construct a regression model that has a trend and seasonal variables in it. The trend component is as before. I.e. Y = β0 + β1t where t = 1, 2, 3, … We use seasonal dummy variables as follows: Q1 = 1 if it is the March quarter, 0 otherwise. Q2 = 1 if it is the June quarter, 0 otherwise. Q3 = 1 if it is the September quarter, 0 otherwise. Q4 = 1 if it is the December quarter, 0 otherwise. We have to omit one of these from the model. The full model is: Y = β0 + β1t + β2Q2 + β3Q3 + β4Q4. Note: It makes no difference to the forecasts which one is omitted. The output is generated here by SPSS.
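Building the regressors for this model is mechanical. A minimal sketch, assuming the series starts in a March quarter; only t and the Q2–Q4 dummies are constructed (Q1 is the omitted category):

```python
# Hedged sketch: trend and seasonal-dummy regressors for
# Y = b0 + b1*t + b2*Q2 + b3*Q3 + b4*Q4 (March quarter omitted).
# Assumes the first observation is a March quarter.
n_quarters = 8
rows = []
for i in range(n_quarters):
    t = i + 1
    q = i % 4   # 0 = March, 1 = June, 2 = September, 3 = December
    rows.append((t, int(q == 1), int(q == 2), int(q == 3)))
# each row is (t, Q2, Q3, Q4), ready for any regression routine
```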

    36. Forecasting (cont.) Using SPSS. Result: Forecast = 1397.2 + 36.3 × Time + 66.2 × Q2 + 96.1 × Q3 + 642.1 × Q4. Interpretation: 36.3 × Time. The trend increases by $36.3 m each quarter. 66.2 × Q2. In the June quarter, seasonal factors lift sales by $66.2 m on average compared with March (omitted dummy). 96.1 × Q3. In the September quarter, seasonal factors lift sales by $96.1 m on average compared with March (omitted dummy). 642.1 × Q4. In the December quarter, seasonal factors lift sales by $642.1 m on average compared with March (omitted dummy).

    37. Forecasting (cont.) Predicting 2002 (and we shouldn't expect exactly the same results as before). Calculations: March 2002 → t = 80, Q2 = 0, Q3 = 0 and Q4 = 0. Forecast = 1397.2 + 36.3 × 80 + 66.2 × 0 + 96.1 × 0 + 642.1 × 0. I.e. Forecast = $4301.2 m. June 2002 → t = 81, Q2 = 1, Q3 = 0 and Q4 = 0. Forecast = 1397.2 + 36.3 × 81 + 66.2 × 1 + 96.1 × 0 + 642.1 × 0. I.e. Forecast = $4403.7 m.
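These two calculations can be checked with a small helper that plugs values into the fitted equation from the slides:

```python
def forecast(t, q2, q3, q4):
    """Fitted equation from the slides:
    Forecast = 1397.2 + 36.3*t + 66.2*Q2 + 96.1*Q3 + 642.1*Q4 ($m)."""
    return 1397.2 + 36.3 * t + 66.2 * q2 + 96.1 * q3 + 642.1 * q4

march_2002 = forecast(80, 0, 0, 0)   # March: all dummies zero
june_2002 = forecast(81, 1, 0, 0)    # June: Q2 = 1
```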

    44. Other methods There are many forecasting techniques. There is no technique that out-performs all others. Sophistication does not necessarily mean that predictions will be more accurate than those produced with less sophisticated methods. We now look briefly at two other approaches. The first is difficult using Excel software, and the second cannot be done with it. Forecast Pro will do both easily. The following slides outline the models.

    67. Evaluation Recognition of a good model. Good models produce forecasts that are reasonably close to the true values. To measure this we use the MAPE statistic. Mean Absolute Percentage Error.

    68. Evaluation (cont.) MAPE. A small MAPE tells us that the forecasts fit the true data closely. A good fit with data that already exists does not necessarily mean that the model will produce good forecasts of future data. MAPE and similar statistics that measure goodness of fit are often good even when the forecasts of unseen data are not so good. The problem is that we cannot measure how well a model predicts until after the predicted events actually happen. By then, forecasts are irrelevant.

    69. Evaluation (cont.) Approach. We estimate our models using most of our data. We refer to this as historical data. We then forecast beyond the historical period. We refer to this as held-out data or forecast period data. We then calculate MAPE in the forecast period only. If MAPE is small, the model would have made good forecasts of unknown data at the end of the historical period. Whether the model would continue to make good forecasts is uncertain; however, there is then a case that the model has worked well in the past and might do so in the future.
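The evaluation step above amounts to computing MAPE over the held-out period only. A minimal sketch; the held-out values and forecasts below are hypothetical:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return sum(abs((a - f) / a) for a, f in zip(actual, forecast)) \
        / len(actual) * 100

# hypothetical held-out quarters and the model's forecasts for them
held_out = [4300.0, 4400.0, 4450.0, 5000.0]
forecasts = [4301.2, 4403.7, 4433.8, 5075.9]

error = mape(held_out, forecasts)   # small value => close fit in the forecast period
```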
