
Approximation of Aggregate Losses



Presentation Transcript


  1. Approximation of Aggregate Losses Dmitry Papush Commercial Risk Reinsurance Company CAS Seminar on Reinsurance June 7, 1999 Baltimore, MD

  2. Approximation of an Aggregate Loss Distribution Usual Frequency - Severity Approach: Analyze the Number of Claims Distribution and the Claim Size Distribution separately, then convolve them.
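In collective risk model terms (standard notation, not from the slide), the aggregate loss is the random sum S = X_1 + ... + X_N, where N is the number of claims and the X_i are individual claim sizes; "convolving" the two distributions means

F_S(s) = Σ_{n≥0} Pr(N = n) · F_X^{*n}(s),

where F_X^{*n} denotes the n-fold convolution of the severity distribution F_X.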

  3. The Problem • How to approximate an Aggregate Loss Distribution when no individual claim data is available? • What type of distribution to use?

  4. Method Used • Choose severity and frequency distributions • Simulate the number of claims and individual claim amounts, and the corresponding aggregate loss • Repeat many times (5,000) to obtain a sample of Aggregate Losses • For different distributions, calculate Method of Moments parameter estimators using the simulated sample of Aggregate Losses • Test the goodness of fit
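A minimal sketch of the simulation step described above, in Python. The frequency is Negative Binomial with mean 50 and the severity is a Lognormal placeholder (the presentation's five-parameter Pareto parameters are not given in the transcript); the function name simulate_aggregate and all numeric values are illustrative only.

```python
import numpy as np

def simulate_aggregate(n_sims=5000, seed=1):
    """Draw a claim count, then that many claim amounts, and sum them;
    repeat n_sims times to build a sample of aggregate losses."""
    rng = np.random.default_rng(seed)
    # Negative Binomial frequency with mean 50 (r = 10, p chosen so r*(1-p)/p = 50).
    counts = rng.negative_binomial(n=10, p=10 / 60, size=n_sims)
    aggregate = np.empty(n_sims)
    for i, n in enumerate(counts):
        # Lognormal severities as a stand-in for the five-parameter Pareto.
        aggregate[i] = rng.lognormal(mean=10.0, sigma=1.5, size=n).sum()
    return aggregate

agg_sample = simulate_aggregate()
print(agg_sample.mean(), agg_sample.std())
```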

  5. Assumptions for Frequency and Severity Used • Frequency: Negative Binomial • Severity: Five-parameter Pareto, Lognormal • Layers: 0 - $250K (Low Retention), 0 - $1,000K (High Retention), $750K xs $250K (Working Excess), $4M xs $1M (High Excess)
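The layers above can be applied to ground-up claim amounts before aggregation. A small helper, assuming the usual reinsurance convention that "$L xs $A" keeps the part of each claim above the attachment A, capped at the limit L; the helper name and the sample claim values are illustrative.

```python
import numpy as np

def to_layer(ground_up, attachment, limit):
    """Loss to the layer 'limit xs attachment' for each ground-up claim."""
    return np.clip(np.asarray(ground_up, dtype=float) - attachment, 0.0, limit)

# The four layers from the slide, as (attachment, limit) in dollars.
layers = {
    "0 - $250K":      (0.0,         250_000.0),
    "0 - $1,000K":    (0.0,       1_000_000.0),
    "$750K xs $250K": (250_000.0,   750_000.0),
    "$4M xs $1M":     (1_000_000.0, 4_000_000.0),
}

claims = [100_000.0, 600_000.0, 2_500_000.0]   # illustrative ground-up claims
for name, (attachment, limit) in layers.items():
    print(name, to_layer(claims, attachment, limit))
```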

  6. Distributions Used to Approximate Aggregate Losses • Lognormal • Normal • Gamma
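A sketch of the method-of-moments step for these three candidates, matching the sample mean and variance of the simulated aggregate losses. These are standard moment formulas; the function name is illustrative.

```python
import numpy as np

def method_of_moments(sample):
    """Method-of-moments parameters for Normal, Gamma and Lognormal fits."""
    m, v = np.mean(sample), np.var(sample)
    # Gamma: mean = a*b, variance = a*b^2.
    gamma_b = v / m
    gamma_a = m / gamma_b
    # Lognormal: mean = exp(mu + s^2/2), variance = (exp(s^2) - 1) * mean^2.
    s2 = np.log(1.0 + v / m**2)
    return {
        "normal":    {"mu": m, "sigma": np.sqrt(v)},
        "gamma":     {"a": gamma_a, "b": gamma_b},
        "lognormal": {"mu": np.log(m) - s2 / 2.0, "sigma": np.sqrt(s2)},
    }
```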

  7. Gamma Distribution f(x) = b^(-a) · x^(a-1) · exp(-x/b) / Γ(a), Mean = a·b, Variance = a·b^2
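A quick numerical check of the mean and variance formulas using scipy's gamma distribution, whose shape/scale parameterization matches the density above; the parameter values here are arbitrary, chosen only for illustration.

```python
from scipy.stats import gamma

a, b = 2.5, 40_000.0            # illustrative shape and scale
dist = gamma(a, scale=b)
print(dist.mean(), a * b)       # both equal a * b
print(dist.var(), a * b**2)     # both equal a * b^2
```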

  8. Goodness of Fit Tests • Percentile Matching • Limited Expected Loss Costs
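Both tests compare the simulated sample against a fitted distribution. A sketch, assuming the fitted distribution is a scipy "frozen" distribution object (for example, the Gamma fit from the method-of-moments sketch above); the probe percentiles and limits are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def percentile_matching(sample, fitted, probs=(0.5, 0.75, 0.9, 0.95, 0.99)):
    """Empirical vs. fitted percentiles of the aggregate loss."""
    return {p: (np.quantile(sample, p), fitted.ppf(p)) for p in probs}

def limited_expected_losses(sample, fitted, limits):
    """Empirical vs. fitted limited expected losses E[min(S, limit)],
    using E[min(S, L)] = integral of the survival function over [0, L]."""
    out = {}
    for limit in limits:
        empirical = np.minimum(sample, limit).mean()
        fitted_lev = quad(fitted.sf, 0.0, limit)[0]
        out[limit] = (empirical, fitted_lev)
    return out
```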

  9. Example 1. Small Book of Business, Low Retention: Expected Number of Claims = 50, Layer: 0 - $250K, Severity - 5 Parameter Pareto
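Pulling the sketches above together for the Example 1 setup (about 50 expected claims, losses capped at the 0 - $250K layer). The transcript does not give the five-parameter Pareto parameters, so a Lognormal severity is used here purely to make the sketch run; all numbers are illustrative.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)

# 5,000 simulated aggregate losses: Negative Binomial counts with mean 50,
# Lognormal severities capped at $250K (stand-in for the 5-parameter Pareto).
counts = rng.negative_binomial(n=10, p=10 / 60, size=5_000)
agg = np.array([np.minimum(rng.lognormal(10.0, 1.5, size=n), 250_000.0).sum()
                for n in counts])

# Method-of-moments Gamma fit, then a quick percentile comparison.
m, v = agg.mean(), agg.var()
a, b = m**2 / v, v / m
probs = [0.75, 0.90, 0.95, 0.99]
print(np.round(np.quantile(agg, probs), 0))
print(np.round(gamma(a, scale=b).ppf(probs), 0))
```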

  10. Example 1.

  11. Example 2. Large Book of Business, Low Retention: Expected Number of Claims = 500, Layer: 0 - $250K, Severity - 5 Parameter Pareto

  12. Example 2.

  13. Example 3. Small Book of Business, High Retention: Expected Number of Claims = 50, Layer: 0 - $1,000K, Severity - 5 Parameter Pareto

  14. Example 3.

  15. Example 4. Large Book of Business, High Retention: Expected Number of Claims = 500, Layer: 0 - $1,000K, Severity - 5 Parameter Pareto

  16. Example 4.

  17. Example 5. Working Excess Layer: Layer: $750K xs $250K, Expected Number of Claims = 20, Severity - 5 Parameter Pareto

  18. Example 5.

  19. Example 6. High Excess Layer: Layer: $4M xs $1M, Expected Number of Claims = 10, Severity - 5 Parameter Pareto

  20. Example 6.

  21. Example 7. High Excess Layer: Layer: $4M xs $1M, Expected Number of Claims = 10, Severity - Lognormal

  22. Example 7.

  23. Results • Normal works only for a very large number of claims • Lognormal is too severe in the tail • Gamma is the best approximation of the three
