
Data Mining: Neural Network Applications
by Louise Francis
CAS Convention, Nov 13, 2001
Francis Analytics and Actuarial Data Mining, Inc.
louise_francis@msn.com | www.francisanalytics.com


Presentation Transcript


  1. Data Mining: Neural Network Applications, by Louise Francis, CAS Convention, Nov 13, 2001. Francis Analytics and Actuarial Data Mining, Inc. louise_francis@msn.com, www.francisanalytics.com

  2. Objectives of Presentation • Introduce insurance professionals to neural networks • Show that neural networks are a lot like some conventional statistics • Indicate where use of neural networks might be helpful • Show practical examples of using neural networks • Show how to interpret neural network models

  3. Conventional Statistics: Regression • One of the most common methods is linear regression • Models a relationship between two variables by fitting a straight line through points.

  4. A Common Actuarial Example: Trend Estimation

  5. A Common Actuarial Example:Trend Estimation

  6. The Severity Trend Model • Severity_t = Constant * (1 + trend)^(t - t0) • log(Severity_t) = log(Constant) + (t - t0) * log(1 + trend) + error
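The log form of the trend model above is an ordinary linear regression, so the trend can be recovered with a least-squares fit. A minimal illustrative sketch (not from the presentation), assuming numpy and made-up severity data with a known trend:

```python
import numpy as np

def fit_severity_trend(years, severities):
    """Least-squares fit of log(severity) = log(Constant) + (t - t0) * log(1 + trend)."""
    t0 = years[0]
    x = np.asarray(years, dtype=float) - t0
    y = np.log(np.asarray(severities, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)      # slope = log(1 + trend)
    return np.exp(intercept), np.exp(slope) - 1.0  # (Constant, trend)

# Illustrative data with a known 5% annual severity trend
years = np.arange(1990, 2000)
severities = 1000.0 * 1.05 ** (years - 1990)
constant, trend = fit_severity_trend(years, severities)
```

On noiseless data the fit recovers the 5% trend exactly; with the error term of the model, the estimate is approximate.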

  7. Solution to Regression Problem

  8. Neural Networks • Also minimizes squared deviation between fitted and actual values • Can be viewed as a non-parametric, non-linear regression

  9. The MLP Neural Network

  10. The Activation Function • The sigmoid logistic function

  11. The Logistic Function

  12. Simple Trend Example: One Hidden Node

  13. Logistic Function of Simple Trend Example

  14. Fitting the Curve • Typically use a procedure which minimizes the squared error – like regression does

  15. Trend Example: 1 Hidden Node

  16. Trend Example: 2 Hidden Nodes

  17. Trend Example: 3 Hidden Nodes

  18. Universal Function Approximator • The backpropagation neural network with one hidden layer is a universal function approximator • Theoretically, with a sufficient number of nodes in the hidden layer, any continuous nonlinear function can be approximated

  19. How Many Hidden Nodes? • Too few nodes: don’t fit the curve very well • Too many nodes: overparameterization • May fit noise as well as pattern

  20. How Do We Determine the Number of Hidden Nodes? • Hold out part of the sample • Cross-validation • Resampling • Bootstrapping • Jackknifing • Algebraic formula

  21. Hold Out Part of Sample • Fit model on 1/2 to 2/3 of data • Test fit of model on remaining data • Need a large sample

  22. Cross-Validation • Hold out 1/n (say 1/10) of the data • Fit the model to the remaining data • Test on the portion of the sample held out • Do this n (say 10) times and average the results • Used for moderate sample sizes • Jackknifing is similar to cross-validation
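The cross-validation recipe above can be sketched as follows (illustrative; the fold count, model, and data are placeholders, with simple linear regression standing in for the neural network):

```python
import numpy as np

def cross_validate_mse(fit, predict, X, y, n_folds=10, seed=0):
    """Average mean squared error over n held-out folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    fold_errors = []
    for hold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, hold)          # fit on everything but the fold
        model = fit(X[train], y[train])
        resid = y[hold] - predict(model, X[hold])  # test on the held-out fold
        fold_errors.append(np.mean(resid ** 2))
    return float(np.mean(fold_errors))

# Illustrative use with a linear model on noiseless data
X = np.linspace(0.0, 10.0, 50)
y = 2.0 * X + 1.0
cv_mse = cross_validate_mse(lambda a, b: np.polyfit(a, b, 1),
                            lambda m, a: np.polyval(m, a), X, y)
```

Running the same loop for networks with different numbers of hidden nodes, and keeping the node count with the lowest average held-out error, is the model-selection use described above.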

  23. Bootstrapping • Create many samples by drawing samples, with replacement, from the original data • Fit the model to each of the samples • Measure overall goodness of fit and create a distribution of results • Used for small and moderate sample sizes
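A minimal sketch of the bootstrap procedure described above, assuming numpy; the sample and the statistic (here the mean, standing in for a goodness-of-fit measure) are placeholders:

```python
import numpy as np

def bootstrap_distribution(data, statistic, n_boot=1000, seed=0):
    """Distribution of a statistic over samples drawn with replacement."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    return np.array([statistic(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])

# Bootstrap distribution of the mean of a small sample
boot_means = bootstrap_distribution(np.arange(1.0, 11.0), np.mean, n_boot=500)
```

The spread of the resulting distribution gives an estimate of the statistic's sampling variability without a large sample.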

  24. Jackknife Result

  25. Result for Sample Hold Out

  26. Interpreting Complex Multi-Variable Model • How many hidden nodes? • Which variables should the analyst keep?

  27. Measuring Variable Importance • Look at the weights to the hidden layer • Compute sensitivities: • a measure of how much the prediction error increases when the variables are excluded from the model one at a time
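The presentation does not spell out how the sensitivities are computed; one common version, sketched here, holds each variable at its mean and measures the resulting increase in mean squared error. The model and data below are made up for illustration:

```python
import numpy as np

def sensitivities(predict, X, y):
    """MSE increase when each variable is neutralized (held at its mean)."""
    base = np.mean((y - predict(X)) ** 2)
    result = []
    for j in range(X.shape[1]):
        X_j = X.copy()
        X_j[:, j] = X[:, j].mean()   # variable j now carries no information
        result.append(np.mean((y - predict(X_j)) ** 2) - base)
    return np.array(result)

# A toy model that uses only the first variable, so it should rank highest
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0]
sens = sensitivities(lambda X: 3.0 * X[:, 0], X, y)
```

Ranking variables by these sensitivities produces importance lists like the one on slide 36.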

  28. Technical Predictors of Stock Price A Complex Multivariate Example

  29. Stock Prediction: Which Indicator is Best? • Moving Averages • Measures of Volatility • Seasonal Indicators • The January effect • Oscillators

  30. The Data • S&P Index since 1930 • Close Only • S&P 500 since 1962 • Open • High • Low • Close

  31. Moving Averages • A very commonly used technical indicator • 1 week MA of returns • 2 week MA of returns • 1 month MA of returns • These are trend-following indicators • A more complicated time-series smoother based on running medians, called T4253H
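A trailing moving average of returns, as used above, can be sketched as follows (illustrative, assuming numpy; the T4253H median-based smoother is more involved and not shown):

```python
import numpy as np

def moving_average(returns, window):
    """Trailing moving average; the first window-1 entries are undefined (NaN)."""
    r = np.asarray(returns, dtype=float)
    out = np.full(len(r), np.nan)
    out[window - 1:] = np.convolve(r, np.ones(window) / window, mode="valid")
    return out

# Illustrative weekly returns smoothed over a 2-period window
weekly_ma = moving_average([0.01, -0.02, 0.03, 0.00, 0.01], window=2)
```

The 1-week, 2-week, and 1-month indicators differ only in the window length.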

  32. Volatility Measures • The finance literature suggests that market volatility changes over time • More turbulent market -> higher volatility • Measures • Standard deviation of returns • Range of returns • Moving averages of the above

  33. Seasonal Effects

  34. Oscillators • May indicate that market is overbought or oversold • May indicate that a trend is nearing completion • Some oscillators • Moving average differences • Stochastic

  35. Stochastic • Based on the observation that as prices increase, closing prices tend to be closer to the upper end of the range • In downtrends, closing prices are near the lower end of the range • %K = (C – L5)/(H5 – L5) • C is the closing price, L5 is the 5-day low, H5 is the 5-day high • %D = 3-day moving average of %K
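The %K and %D formulas above can be sketched directly (illustrative; the price series is made up):

```python
import numpy as np

def percent_k(close, low, high, n=5):
    """%K = (C - Ln) / (Hn - Ln), with Ln/Hn the trailing n-day low/high."""
    c, lo, hi = (np.asarray(a, dtype=float) for a in (close, low, high))
    k = np.full(len(c), np.nan)
    for i in range(n - 1, len(c)):
        ln = lo[i - n + 1:i + 1].min()
        hn = hi[i - n + 1:i + 1].max()
        k[i] = (c[i] - ln) / (hn - ln)
    return k

def percent_d(k, window=3):
    """%D: a moving average of %K."""
    d = np.full(len(k), np.nan)
    for i in range(window - 1, len(k)):
        d[i] = np.mean(k[i - window + 1:i + 1])
    return d

# A steadily rising market closes near the top of its 5-day range
close = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = percent_k(close, close - 0.5, close + 0.5, n=5)
```

High %K values (near 1) flag a possibly overbought market, low values (near 0) a possibly oversold one.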

  36. Neural Network Result • Variable Importance • Month • %K (from stochastic) • Smoothed standard deviation • Smoothed return • 2 week %D (from stochastic) • 1 week range of returns • Smoothed %K • R² was 0.15, i.e., 15% of variance explained

  37. What are the Relationships between the Variables?

  38. Neural Network Result for Seasonality

  39. Neural Network Result for Oscillator

  40. Neural Network Result for Seasonality and Oscillator

  41. Neural Network Result for Seasonality and Standard Deviation

  42. Neural Network Result for Seasonality and Standard Deviation

  43. How Many Nodes?

  44. Conclusions • Neural networks are a lot like conventional statistics • They address some problems of conventional statistics: nonlinear relationships, correlated variables and interactions • Despite their black box aspect, we now can interpret them • Further information, including the paper, can be found at www.casact.org/aboutcas/mdiprize.htm • Paper and presentation can be found at www.francisanalytics.com
