
A comparison on GARCH parameter estimation: SVR versus ML





Presentation Transcript


  1. A comparison on GARCH parameter estimation: SVR versus ML
  Ramya Ramakrishnan
  Advanced Machine Learning

  2. Overview
  • GARCH is a well-known method in the financial community for modeling and predicting the conditional volatility of market returns
  • It assumes the data are normally distributed, and its parameters are estimated by maximum likelihood (ML) procedures
  • Financial data are rarely normally distributed: return distributions are typically leptokurtic
  • We use Support Vector Regression (SVR) to estimate the parameters of GARCH
  • SVR does not assume a probability density function over the return series; it adjusts the parameters based on empirical risk minimization
  • SVR defines an insensitivity zone, which allows it to deal with any underlying pdf
  • Results on simulated and empirical data show that
  • GARCH models can be accurately estimated using SVR
  • SVR estimates have higher predictive ability than those obtained using ML methods

  3. Methodology
  • Implement SVR using the IRWLS methodology
  • Compare estimation results for SVR and ML using simulated data
  • Compare estimation results for SVR and ML using empirical data

  4. Understanding GARCH(1,1): Generalized Autoregressive Conditional Heteroskedasticity
  • "Heteroskedasticity": the variances of the error terms are not equal; the errors may be expected to be larger for some points or ranges of the data than for others
  • "Conditional heteroskedasticity": heteroskedasticity that is not random and exhibits autocorrelation, i.e. time-varying volatility or volatility clustering
  • A process y_t follows a GARCH(1,1) model if
  y_t = μ + σ_t ε_t
  σ_t² = ω + α y_{t-1}² + β σ_{t-1}²
  where ε_t is an uncorrelated process with zero mean and unit variance, and μ ≈ 0 can be assumed without affecting model performance
  • Forecast: y²_{t,predict} = ω_predict + (α_predict + β_predict) · y²_{t-1,actual}
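  As a concrete illustration of the recursion and the one-step forecast above, a minimal Python sketch might look like the following. The helper names (simulate_garch11, forecast_y2) and the parameter values are illustrative placeholders, not estimates from the slides.

  ```python
  import numpy as np

  def simulate_garch11(n, omega, alpha, beta, mu=0.0, seed=0):
      """Simulate y_t = mu + sigma_t * eps_t with
      sigma_t^2 = omega + alpha * y_{t-1}^2 + beta * sigma_{t-1}^2."""
      rng = np.random.default_rng(seed)
      y = np.zeros(n)
      sigma2 = np.zeros(n)
      # start from the unconditional variance omega / (1 - alpha - beta)
      sigma2[0] = omega / (1.0 - alpha - beta)
      y[0] = mu + np.sqrt(sigma2[0]) * rng.standard_normal()
      for t in range(1, n):
          sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
          y[t] = mu + np.sqrt(sigma2[t]) * rng.standard_normal()
      return y, sigma2

  def forecast_y2(omega_hat, alpha_hat, beta_hat, y_prev):
      """One-step forecast of y_t^2 from the previous observed return,
      matching the forecast formula on the slide."""
      return omega_hat + (alpha_hat + beta_hat) * y_prev ** 2

  # Example: simulate a volatility-clustered return series, then forecast the next y^2
  y, sigma2 = simulate_garch11(n=1000, omega=0.1, alpha=0.1, beta=0.8)
  print(forecast_y2(0.1, 0.1, 0.8, y[-1]))
  ```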

  5. Importance of GARCH in Finance
  • Financial return series often clearly exhibit conditional heteroskedasticity (volatility clustering)
  • Being able to accurately forecast volatility is especially important in finance for risk analysis, portfolio selection, and derivative pricing
  • The goal of GARCH models is to provide such a volatility measure

  6. Understanding SVR: Iterated Re-Weighted Least Squares (IRWLS)
  • Problem formulation
  min L_P = 0.5 ||w||² + 0.5 Σ_i [a_i e_i² + a_i* (e_i*)²]
  where:
  e_i = ε − y_i + Φ(x_i)·w + b,  e_i* = ε + y_i − Φ(x_i)·w − b
  a_i = 2α_i / e_i,  a_i* = 2α_i* / e_i*
  • Basic procedure
  1. Fixing a_i and a_i*, minimize L_P
  2. Recalculate a_i and a_i* from the solution of step 1
  3. Repeat until convergence
  • Project parameters
  RBF kernel: K(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²))
  σ and ε are selected by cross-validation
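  The slides solve this problem with a custom IRWLS solver. As a hedged illustration of the model setup only (an RBF-kernel ε-SVR with σ and ε chosen by cross-validation), and not of the IRWLS solver itself, a sketch using scikit-learn could look like the code below. The use of scikit-learn, the regression design (squared return on lagged squared return), the file name, and the grid values are all assumptions introduced for illustration.

  ```python
  import numpy as np
  from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
  from sklearn.svm import SVR

  # Assumed regression setup (a sketch, not necessarily the authors' exact design):
  # predict the squared return y_t^2 from the lagged squared return y_{t-1}^2.
  y = np.loadtxt("returns.txt")          # hypothetical file of de-meaned returns
  X = (y[:-1] ** 2).reshape(-1, 1)       # y_{t-1}^2
  target = y[1:] ** 2                    # y_t^2

  # scikit-learn's RBF kernel is exp(-gamma * ||x_i - x_j||^2),
  # so the slide's sigma corresponds to gamma = 1 / (2 * sigma^2).
  param_grid = {
      "gamma": [1.0 / (2.0 * s ** 2) for s in (0.1, 1.0, 10.0)],  # sigma grid (placeholders)
      "epsilon": [0.01, 0.1, 1.0],                                # insensitivity zone (placeholders)
  }
  search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=TimeSeriesSplit(n_splits=5))
  search.fit(X, target)
  print(search.best_params_)             # cross-validated choice of gamma (i.e. sigma) and epsilon
  ```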

  7. Simulation Results
  • Relative R² = (R²_SVR − R²_ML) / R²_ML
  • As kurtosis increases, SVR estimates provide better predictive results
  • Performance for the normal distribution varies by sample size:
  1000 samples: ML does marginally better
  500 samples: SVR does better than ML
  • Results are based on 10 independent trials
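  For clarity, the relative R² comparison above amounts to the following (a trivial sketch; the function name and the numeric values are illustrative, not results from the slides):

  ```python
  def relative_r2(r2_svr, r2_ml):
      """Relative improvement of the SVR fit over the ML fit."""
      return (r2_svr - r2_ml) / r2_ml

  print(relative_r2(0.42, 0.40))  # placeholder values: 0.05 means a 5% relative improvement
  ```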

  8. Empirical Results on S&P 100 Returns

  9. Conclusions
  • SVR-based estimates of GARCH parameters produce more accurate predictions of financial volatility than ML estimates
  • ML fits the residuals to a Gaussian distribution; when the residuals are not Gaussian, forcing this assumption increases the error
  • SVR seeks the best fit to the data without relying on prior distributional knowledge, and focuses on minimizing the prediction error for a given machine complexity
