
Prediction of Rare Volatility Surface Patches using Artificial Neural Network



Presentation Transcript


  1. Prediction of Rare Volatility Surface Patches using Artificial Neural Network

  2. Regression-based model vs. neural-network-based model
  • A regression-based model does not treat infrequently available patches as less reliable than frequent ones
  • A neural-network-based model
    • can account for the frequency of the data by varying the learning rate between observations
    • is easy to implement on today's fast computers
    • can model the underlying relationship between the input and output data sets better than a regression-based model

  3. Outline of the Algorithm
  • Generate data with missing values.
  • Fill the missing values with a regression algorithm to construct the data set.
  • Train the neural network on this data set with a modified learning rate, using the incremental (online) approach.
  • Simulate the trained network on a test data set and evaluate the results.

  4. Generation of data
  Data format: V = (v1, v2, v3, …, v27)
  1. v1 … v20 represent frequently available surface patches
  2. v21 … v22 represent macro-economic factors
  3. v23 … v27 represent infrequently available surface patches

  5. Generation of data (contd.)
  • Facts:
    • no real-life data is available
    • the patches and macro-economic factors correspond to a volatility surface
  • Data generation using simulated surfaces:
    • generate surfaces (with random parameters) to simulate volatility surfaces, and
    • obtain data (a set of Vs) from these generated surfaces
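The data-generation step above can be sketched as follows. The surface model (a quadratic smile in moneyness plus a maturity term), the 5×4 grid behind the 20 frequent patches, and the wing strikes used for the 5 infrequent patches are all illustrative assumptions; the slides only state that surfaces with random parameters are generated and then sampled.

```python
import numpy as np

def generate_observations(n_obs, rng):
    """Simulate observations V = (v1, ..., v27) from random surfaces.

    The surface model (quadratic smile plus a maturity term) and the
    patch layout are illustrative assumptions, not the authors' spec.
    """
    # 5 moneyness levels x 4 maturities = 20 frequently available patches
    M, T = np.meshgrid(np.linspace(0.8, 1.2, 5), np.linspace(0.25, 2.0, 4))
    wing_m = np.array([0.6, 0.7, 1.3, 1.4, 1.5])   # strikes of the 5 rare patches
    data = np.empty((n_obs, 27))
    for i in range(n_obs):
        a, b, c = rng.uniform([0.1, 0.2, 0.05], [0.3, 0.6, 0.15])
        data[i, :20] = (a + b * (M - 1.0) ** 2 + c * np.sqrt(T)).ravel()  # v1..v20
        data[i, 20:22] = rng.normal(0.0, 1.0, 2)                          # v21..v22: macro factors
        data[i, 22:] = a + b * (wing_m - 1.0) ** 2 + c * np.sqrt(2.0)     # v23..v27: wings
    return data

V = generate_observations(100, np.random.default_rng(0))
```

Because the rare patches are read off the same simulated surface as the frequent ones, a relationship between v1..v22 and v23..v27 exists for the network to learn.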

  6. Randomized data generation (2D view)

  7. Randomized data generation (3D view)

  8. Generated data

  9. Generating Data with Missing Values
  • Randomly remove values from v23 … v27 at the per-patch rates listed on the slide
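A minimal sketch of the masking step. The actual per-patch rates come from a table on the slide that is not reproduced in this transcript, so the rates below are placeholders.

```python
import numpy as np

def mask_rare_patches(data, miss_rates, rng):
    """Set v23..v27 (columns 22-26) to NaN independently at per-patch rates.

    The miss_rates values used in the demo below are placeholders for
    the slide's (omitted) rate table.
    """
    out = data.copy()
    for col, rate in zip(range(22, 27), miss_rates):
        out[rng.random(len(out)) < rate, col] = np.nan
    return out

rng = np.random.default_rng(1)
full = rng.normal(size=(1000, 27))
sparse = mask_rare_patches(full, miss_rates=[0.5, 0.6, 0.7, 0.8, 0.9], rng=rng)
```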

  10. Data after removing values

  11. Fill the missing values with a regression algorithm

  12. Regularized EM Algorithm
  • Steps:
    1. Estimate the mean values and covariance matrices of the incomplete data sets.
    2. Impute the missing values in the incomplete data sets.
  • Based on iterated analyses of linear regressions of variables with missing values on variables with available values
  • Regression coefficients are estimated by ridge regression
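A simplified sketch of the imputation idea: each column with gaps is regressed on the other columns with ridge-regularized coefficients, and the fit is iterated so that imputed values feed back into later regressions. Full regularized-EM implementations also re-estimate the mean and covariance at each iteration (step 1 on the slide); that refinement is omitted here.

```python
import numpy as np

def ridge_em_impute(X, alpha=1.0, n_iter=10):
    """Iteratively impute NaNs by ridge regression of each incomplete
    column on all other columns (a simplified regularized-EM sketch)."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    # initialise gaps with column means
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])
    gap_cols = np.where(miss.any(axis=0))[0]
    for _ in range(n_iter):
        for j in gap_cols:
            rows = miss[:, j]
            others = np.delete(np.arange(X.shape[1]), j)
            A_c = X[:, others] - X[:, others].mean(axis=0)
            y = X[~rows, j]
            # ridge coefficients: (A'A + alpha I)^-1 A'(y - mean(y))
            beta = np.linalg.solve(
                A_c[~rows].T @ A_c[~rows] + alpha * np.eye(A_c.shape[1]),
                A_c[~rows].T @ (y - y.mean()))
            X[rows, j] = y.mean() + A_c[rows] @ beta
    return X

rng = np.random.default_rng(2)
z = rng.normal(size=(200, 3))
X_true = np.column_stack([z[:, 0], z[:, 0] + 0.1 * z[:, 1], z[:, 0] + 0.1 * z[:, 2]])
X_miss = X_true.copy()
gaps = rng.random(200) < 0.3
X_miss[gaps, 2] = np.nan
X_filled = ridge_em_impute(X_miss)
```

The ridge penalty `alpha` keeps the regressions stable when the available columns are strongly correlated, which is exactly the situation for neighbouring surface patches.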

  13. Data after missing values filled by Regularized EM Algorithm

  14. Train neural network with modified learning rate

  15. Structure of the Neural Network

  16. Weight, Gradient & Learning Rate (LR): error plotted against the weight of a single link

  17. Effect of small and large LR

  18. Effect of change in Learning rate

  19. Learning rate of an observation
  • Learning rate of an observation = lr × normalized multiplier of the observation, where lr is a constant.
  • Two ways to calculate the multiplier of an observation:
    1. Frequency norm
    2. Log frequency norm

  20. Learning rate of an observation (contd.): Frequency norm
  • Multiplier of an observation, mfn = (formula not captured in the transcript)
  • The frequency of an output node is the number of observations in which a value for that output node is available (i.e., not missing)
  • mfn lies in [0, 1]

  21. Learning rate of an observation (contd.): Log frequency norm
  • Map mfn into the interval [lr/2, 2·lr] using logarithmic scaling to obtain mlfn
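One way to read the two norms in code. Since the exact mfn formula is cut off in the transcript, normalizing each output node's observation count by the total number of observations is an assumption, as is interpreting "logarithmic scaling" as geometric interpolation between lr/2 and 2·lr.

```python
import numpy as np

def frequency_norm(miss_mask):
    """Frequency-norm multiplier per output node.

    miss_mask[i, j] is True where output node j is missing in observation i.
    Dividing each node's availability count by the number of observations
    (an assumption; the slide's formula is not in the transcript) keeps
    mfn in [0, 1].
    """
    freq = (~miss_mask).sum(axis=0)   # observations where node j is available
    return freq / miss_mask.shape[0]

def log_frequency_norm(mfn, lr):
    """Map mfn into [lr/2, 2*lr] by logarithmic (geometric) interpolation."""
    lo, hi = np.log(lr / 2), np.log(2 * lr)
    return np.exp(lo + mfn * (hi - lo))

rng = np.random.default_rng(3)
mask = rng.random((100, 5)) < np.array([0.1, 0.3, 0.5, 0.7, 0.9])
mfn = frequency_norm(mask)
mlfn = log_frequency_norm(mfn, lr=0.01)
```

Under this reading, a node that is always observed gets the full 2·lr, a never-observed node gets lr/2, and intermediate frequencies interpolate multiplicatively rather than linearly.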

  22. Incorporating mfn and mlfn into the training of the NN using incremental mode
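The incremental-mode update can be sketched with a single linear output layer standing in for the slides' multilayer network (an illustrative simplification): scaling each output node's error term by its multiplier scales the whole backpropagated update for that node in the same way.

```python
import numpy as np

def train_incremental(X, Y, multipliers, lr=0.01, epochs=5, rng=None):
    """Incremental (online) training where the effective learning rate
    of output node j is lr * multipliers[j].

    A linear layer stands in for the multilayer network of the slides;
    the per-node scaling of the update is the point being illustrated.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    W = np.zeros((X.shape[1], Y.shape[1]))
    b = np.zeros(Y.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(X.shape[0]):   # one observation at a time
            err = X[i] @ W + b - Y[i]           # per-output-node error
            step = lr * multipliers * err       # per-node effective LR
            W -= np.outer(X[i], step)
            b -= step
    return W, b

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 2))
W_true = np.array([[2.0, 0.5], [-1.0, 1.0]])
Y = X @ W_true
W, b = train_incremental(X, Y, multipliers=np.array([1.0, 1.0]),
                         lr=0.05, epochs=20, rng=rng)
```

Passing the mfn or mlfn vector from the previous slides as `multipliers` gives the two frequency-weighted variants; an all-ones vector recovers the constant-LR baseline of slide 23.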

  23. Constant LR (unmodified learning rate)

  24. Modified LR with frequency norm

  25. Modified LR with log frequency norm

  26. Simulation Result
  • Three different methods were tested:
    1. Modified learning rate with frequency norm
    2. Modified learning rate with log frequency norm
    3. ANN with a constant learning rate
  • Each method was run 10 times for 100 epochs
  • The results show that the modified learning rate with the frequency norm consistently outperforms the other two methods.

  27. Simulation result (Graph)
