
Regulatory Review of Predictive Models





Presentation Transcript


  1. Regulatory Review of Predictive Models Mike Woods, FCAS, CSPA

  2. Agenda • Current State of Modeling Regulation • Future State of Modeling Regulation • NAIC CASTF White Paper – Overview • Feedback from Industry • Closing Remarks

  3. Current State of Predictive Modeling Regulation • Overview • Varying levels of modeling expertise by state • Varying levels of modeling governance by state • Few state-level laws or regulations related to predictive modeling • Modeling ASOP is still a work in progress • Credential related to modeling in the actuarial field (CSPA) is still in its infancy • Bottom Line • A framework exists for reviewing GLMs, but it is not consistent from state to state • Regulation and standard-setting of predictive modeling are only beginning to be etched in stone

  4. Future State of Predictive Modeling Regulation • NAIC has released a white paper to help guide states on how to review predictive models and to create a consistent framework across states • Although we will probably continue to see varying regulatory strategies across states, the white paper should promote consistency

  5. Example of Feedback on NAIC Paper • “…the paper is comprehensive in its scope. The need for a set of best practices in the review process is well noted...” • “…there is potential for the process to become unmanageable for both the modelers and the reviewers...” • “…best practices could foster comprehensive upfront dialogue between the filing company and regulator that supports an efficient and effective review appropriately…” • “…it is imperative that any best practice not create a one-size-fits-all prescriptive checklist that may unduly restrict the use of advanced mathematical and actuarial techniques…”

  6. NAIC Paper on Predictive Models – Introduction
  • Goal of Paper
    • “Identify best practices to serve as a guide to state insurance departments in their review of predictive models underlying rating plans.”
  • Key Regulatory Principles
    • State insurance regulators will maintain their current rate regulatory authority.
    • Share information to aid companies in getting insurance products to market more quickly.
    • Share expertise and discuss technical issues regarding predictive models.
    • Maintain confidentiality, where appropriate, regarding predictive models.
  • Are best practices needed?
    • Modeling methods are rapidly evolving and growing in complexity
    • Many state insurance departments do not have in-house actuarial support or have limited resources
    • Need to provide states with guidance and assistance when reviewing predictive models
  • Scope of Paper
    • The paper focuses on GLMs in Personal Auto and Homeowners
    • The paper indicates that the best practices discussed are largely transferable to other lines and methods, but some commenters disagree with this point

  7. NAIC Paper on Predictive Models – Overview
  92 Items Requested
  “Though the list seems long, the insurer should already have internal documentation on the model for more than half of the information listed. The remaining items on the list require either minimal analysis (approximately 25%) or deeper analysis to generate the information for a regulator (approximately 25%).”
  • Model Input
    • Data [7]
    • Sub-Models [5]
    • Adjustments and Scrubbing [7]
    • Data Organization [4]
    • Final Data Information [1]
  • Building the Model
    • High-level Narrative for Building the Model [9]
    • Medium-level Narrative for Building the Model [6]
    • Predictor Variables [4]
    • Massaging Data, Model Validation and Goodness-of-Fit Measures [14]
    • “Old Model” Versus “New Model” [5]
    • Modeler/Software [3]
  • The Filed Rating Plan
    • General Impact of Model on Rating Algorithm [5]
    • Relevance of Variables / Relationship to Risk of Loss [1]
    • Comparison of Model Outputs to Current and Selected Rating Factors [3]
    • Responses to Data, Credibility and Granularity Issues [3]
    • Definitions of Rating Variables [2]
    • Supporting Data [2]
    • Consumer Impacts [10]
    • Accurate Translation of Model into a Rating Plan [1]
  Comments
  • The majority of items requested are straightforward and essential to model review.
  • It is obvious that a lot of thought went into creating the framework.
  • However, today we are going to discuss the more contentious items.
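One of the larger request groups above is “Massaging Data, Model Validation and Goodness-of-Fit Measures” [14]. As a rough illustration of what one such item might look like in practice, here is a minimal Python sketch of an actual-versus-expected check on a holdout set, bucketed by predicted value. The data, column names, and quintile bucketing are all hypothetical, not taken from the white paper.

```python
# Sketch of one common validation exhibit: actual vs. expected on a holdout
# set, bucketed into quintiles of the model's prediction. Data is simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 10000
predicted = rng.gamma(shape=2.0, scale=0.05, size=n)  # model's expected frequency
actual = rng.poisson(predicted)                        # observed claim counts

holdout = pd.DataFrame({"predicted": predicted, "actual": actual})
holdout["quintile"] = pd.qcut(holdout["predicted"], 5, labels=False)

# A well-calibrated model shows actual tracking predicted across quintiles.
lift = holdout.groupby("quintile")[["predicted", "actual"]].mean()
print(lift)
```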

  8. NAIC Paper on Predictive Models – Excerpts
  • A.5.a
    • If the raw data selected to build the model is in a format that can be made available to the regulator, provide it.
  • B.2.c
    • Describe the univariate testing and balancing that was performed during the model-building process, including a verbal summary of the thought processes involved.
  • B.3.c
    • Provide an intuitive argument for why an increase in each predictor variable should increase or decrease frequency, severity, loss costs, expenses, or whatever is being predicted.
  • B.4.f
    • Identify the threshold for statistical significance and explain why it was selected. Provide a verbal defense for keeping the variable for each discrete variable level where the p-values were not less than the chosen threshold.
  • B.4.g
    • For overall discrete variables, provide type 3 chi-square tests, p-values, F tests and any other relevant and material test.
  • B.4.l
    • Provide support demonstrating that the GLM assumptions are appropriate (for example, the choice of error distribution).
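To make items B.4.f, B.4.g, and B.4.l more concrete, below is a minimal Python sketch (using statsmodels) of the kind of support a filer might assemble: per-level p-values from a claim-frequency GLM, an overall chi-square (likelihood-ratio) test for a discrete variable, and an explicitly stated error-distribution choice. The data and variable names are hypothetical, and this is one illustrative approach, not the paper's prescribed method.

```python
# Minimal sketch: per-level p-values, an overall chi-square test for a
# discrete variable, and an explicit error-distribution choice, in the
# spirit of items B.4.f, B.4.g and B.4.l. Data is simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "territory": rng.choice(["A", "B", "C"], size=n),
    "vehicle_age": rng.integers(0, 15, size=n),
    "exposure": rng.uniform(0.5, 1.0, size=n),
})
lam = 0.08 * np.where(df["territory"] == "C", 1.4, 1.0) * df["exposure"]
df["claim_count"] = rng.poisson(lam)

# B.4.l: error distribution chosen to match the target; Poisson with a log
# link is a conventional choice for claim counts.
full = smf.glm("claim_count ~ C(territory) + vehicle_age", data=df,
               family=sm.families.Poisson(),
               offset=np.log(df["exposure"])).fit()
print(full.summary())  # B.4.f: per-level p-values to compare to a threshold

# B.4.g: overall significance of the discrete variable via a likelihood-
# ratio (chi-square) test against the model with territory dropped.
reduced = smf.glm("claim_count ~ vehicle_age", data=df,
                  family=sm.families.Poisson(),
                  offset=np.log(df["exposure"])).fit()
lr_stat = reduced.deviance - full.deviance
df_diff = full.df_model - reduced.df_model
print(f"territory: chi2 = {lr_stat:.2f}, "
      f"p = {stats.chi2.sf(lr_stat, df_diff):.4f}")
```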

  9. NAIC Paper on Predictive Models – Excerpts
  • B.6.a
    • Provide the names, contact emails, phone numbers and qualifications of the key persons who:
      • Led the project
      • Compiled the data
      • Built the model
      • Performed peer review
  • C.2.a
    • Provide an explanation of how the characteristics/rating variables included in the filed rating plan logically and intuitively relate to the risk of insurance loss (or expense) for the type of insurance product being priced. Include a discussion of the relevance each characteristic/rating variable has on consumer behavior that would lead to a difference in risk of loss (or expense).
  • C.6.a
    • Provide state-specific, book-of-business-specific univariate historical experience data consisting of, at minimum, earned premiums, incurred losses, loss ratios and loss ratio relativities for each category of model output(s) proposed to be used within the rating plan.
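As a concrete reading of C.6.a, the sketch below builds the requested univariate exhibit (earned premium, incurred loss, loss ratio, and loss-ratio relativity by model output tier) with pandas. The figures and tier labels are invented purely for illustration.

```python
# Sketch of a C.6.a exhibit: univariate earned premium, incurred loss,
# loss ratio, and loss-ratio relativity by model output tier.
# The DataFrame and its columns are hypothetical.
import pandas as pd

policies = pd.DataFrame({
    "model_tier":     ["1", "1", "2", "2", "3", "3"],
    "earned_premium": [1200.0, 900.0, 1100.0, 950.0, 1300.0, 1250.0],
    "incurred_loss":  [600.0, 500.0, 700.0, 650.0, 1100.0, 1000.0],
})

exhibit = policies.groupby("model_tier")[["earned_premium", "incurred_loss"]].sum()
exhibit["loss_ratio"] = exhibit["incurred_loss"] / exhibit["earned_premium"]

# Relativity: each tier's loss ratio against the all-tier loss ratio.
total_lr = exhibit["incurred_loss"].sum() / exhibit["earned_premium"].sum()
exhibit["lr_relativity"] = exhibit["loss_ratio"] / total_lr
print(exhibit)
```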

  10. NAIC Paper on Predictive Models – Excerpts
  • C.6.b
    • Provide an explanation of any material (especially directional) differences between model indications and state-specific univariate indications.
  • C.7.h
    • Identify rating variables that remain static over a consumer’s lifetime versus those that will be updated periodically. Document guidelines for variables that are listed as static yet for which the underlying consumer attributes may change over time.
  • C.7.i
    • Provide the regulator with a description of how the company will respond to consumers’ inquiries about how their premium was calculated.
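For C.6.b, a filer might reconcile the model's factors against state univariate indications and flag material directional differences for explanation. The sketch below shows one way to do that; all factors shown are hypothetical.

```python
# Sketch of a C.6.b reconciliation: flag tiers where the filed model factor
# and the state univariate indication move in opposite directions relative
# to the base tier. All values are hypothetical.
import pandas as pd

comp = pd.DataFrame({
    "tier":             ["1", "2", "3"],
    "model_factor":     [0.90, 1.00, 1.25],  # from the countrywide GLM
    "univariate_indic": [1.05, 1.00, 1.18],  # state loss-ratio relativity
}).set_index("tier")

# A directional conflict: one side indicates a discount, the other a surcharge.
comp["directional_conflict"] = (
    (comp["model_factor"] - 1.0) * (comp["univariate_indic"] - 1.0) < 0
)
print(comp)  # tier 1 conflicts: model says discount, state data says surcharge
```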

  11. Responses to NAIC Paper • Nebraska DOI • Missouri DOI • Nevada DOI • CAS Machine Learning Task Force • Utah DOI • California DOI

  12. Response Highlights – AAA Summary • Pare down the framework to make it more manageable • Submitting raw data could be problematic • Security risk • Could be difficult for regulators to receive and read due to size • May violate contractual obligations to third parties • Framework should consider the ASOPs • ASOP 12: Risk Classification • ASOP 41: Actuarial Communications • Framework asks for predictor variables to be intuitive or causal, but ASOP 12 does not require this • ASOP 12 says this is preferred, but not essential

  13. Response Highlights – Allstate Summary • White paper asks companies to provide evidence of a causal relationship between risk characteristics and expected cost • However, this is in contrast to ASOP 12 (Risk Classification), which has no such requirement • The white paper asks companies to provide information and data such that a regulator would be reproducing, rather than reviewing, the filed model • However, ASOP 41 does not consider this a requirement of actuarial communication • Raw data is also considered proprietary by Allstate • Framework places too large an emphasis on state-specific results, but state-specific results are usually not credible • Framework places too large an emphasis on univariate indications, but multivariate indications are generally considered more accurate (see the sketch below) • Did not feel comfortable providing the names, emails, and phone numbers of everyone who worked on the model, as requested in the white paper
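Allstate's univariate-versus-multivariate point is easy to demonstrate with simulated data: when two rating variables are correlated, one-way relativities double-count the shared signal, while a GLM separates the effects. The data and effect sizes below are hypothetical and exist only to illustrate the mechanism.

```python
# Sketch: why multivariate indications are generally preferred. Young
# drivers are concentrated in urban territories, so a one-way view wrongly
# indicates a territory effect; the GLM attributes it to driver age.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20000
urban = rng.random(n) < 0.5
# Correlated variable: young drivers are more common in urban territories.
young = rng.random(n) < np.where(urban, 0.5, 0.2)
# True model: only youth raises frequency; territory has no real effect.
claims = rng.poisson(0.05 * np.where(young, 2.0, 1.0))
df = pd.DataFrame({"urban": urban.astype(int),
                   "young": young.astype(int),
                   "claims": claims})

# One-way view wrongly indicates a territory surcharge...
print(df.groupby("urban")["claims"].mean())

# ...while the multivariate GLM assigns the effect to driver age.
fit = smf.glm("claims ~ urban + young", data=df,
              family=sm.families.Poisson()).fit()
print(np.exp(fit.params))  # urban factor near 1.0, young near 2.0
```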

  14. Response Highlights – P&C Insurance Association Summary • Believes the creation of review best practices should not be an initiative to create new rating standards that extend the statutory scope of the rate review process • The Actuarial Standards Board develops professional standards for actuaries • Current framework requires information beyond what is currently asked for in state department reviews, as well as beyond the standards set in ASOPs • Cannot create a one-size-fits-all checklist for reviewing models • Modeling space is still evolving • Insurers leverage predictive analytics differently based on their goals and books of business • Mentions ASOP 12 (Risk Classification) conflicting with the framework • Concerned with filing timelines • Suggests specific questions to remove from the white paper

  15. Response Highlights – ISO Summary • Proposed framework is very extensive, and ISO has not encountered a DOI that requested this amount of information in 10+ years of filing GLMs • Feels the review process around the use of GLMs has been well established for years • New requirements may stifle innovation • Some information may be proprietary or confidential • Some information is outside the scope of the actuarial review of a filing with a GLM • Certain questions would be equally applicable to “traditional” rate filings, yet are not asked in those filings • For example, raw data is not generally asked for in a rate filing • The filed rating plan should be evaluated rather than other potential rating plans • Development of a countrywide model would become burdensome if a filer had to evaluate the results for all 50 states separately

  16. Response Highlights – CAS Machine Learning Task Force Summary • Scope of white paper may be too broad: initially the paper discusses GLMs, but it also contains statements that imply an expansion of scope to all model types • Concerned with confidentiality of models • Paper should contain new confidentiality protections to balance against the additional information requirements • Mentioned unease with the amount of information requested, which may lead to regulators trying to build their own version of a model rather than reviewing the model presented • Suggests revisiting which questions are considered “essential” in the framework • Suggests regulatory focus should be on the resultant rates rather than the models producing them

  17. Response Highlights – Cincinnati Summary • Review framework should be principles-based, rather than the current rules-based approach • A principles-based approach would use guiding principles, such as: • Ensure that a predictive model does not promote, encourage or permit unfair/improper discrimination. • Ensure that a predictive model does not promote, encourage or permit improper strategy (price optimization, for example). • Ensure that the covariates and model predictions included in a model bear a reasonable resemblance to the subject matter being modeled. • Require adoption of internal company controls designed to periodically review all predictive models for violations of the core principles described above. • NAIC embraced a principles-based approach in life insurance in 2017 and could do the same with P&C insurance

  18. Response Highlights – FICO Summary • The value in FICO’s scoring trade secrets and proprietary methods for scoring would be put at risk if the company were required to disclose information without full and continuing confidentiality. • The proposal leaves the decision about confidentiality of a company’s intellectual property and trade secrets entirely within the discretion of each state regulator. • Nearly a decade ago, FICO was forced to withdraw its credit-based insurance scoring models from new or amended filings in the State of Florida over a very similar issue – lack of appropriate confidentiality protections. • Since FICO cannot be left in a precarious position with respect to the protection of its intellectual property, if the drafted white paper is adopted as written by any state without necessary trade secret and other intellectual property protections in place, FICO may be forced to remove its FICO Insurance Score models from use by its insurance clients in that state, creating wholly unnecessary market disruption.

  19. Response Highlights – Others • LexisNexis • Similar to FICO’s response, LexisNexis states concerns with providing the amount of information requested and how it would affect its proprietary algorithms • NAMIC • Framework calls for an inordinate amount of compliance expenditure • Too prescriptive • Concerns with confidentiality, proprietary information, and contractual terms • Missouri DOI • Data mining will dramatically increase occurrences of false positives, and the framework does not address this • Nebraska DOI • Regulators should be able to challenge variables for which the rate filer provides no explanation that rings true intuitively and logically • Nevada DOI • Crucial to engage in discussion of the intuitive, logical, or plausible relationships of individual risk attributes to the risk of insurance loss • Utah DOI • Framework should not include a requirement for intuitive or logical support for the selected explanatory variables • Uses gender as an example of a historical variable without a logical explanation

  20. Closing Remarks • Summary of Comments • Many different opinions on the matter of regulating predictive modeling • One entity wanted to get rid of the checklist completely • Some wanted the checklist to be more robust • Most wanted the checklist to be shorter • Some common themes from industry respondents • Concern with proprietary algorithms and data • Concern that the framework conflicts with ASOPs • Concern that the white paper is too extensive or prescriptive • What happens next? • NAIC is using comments to create a new draft • Targeting the NAIC National Meeting in December to vote on and adopt a framework for reviewing predictive model filings

  21. Questions?

  22. Appendix • NAIC CASTF Home Page • https://www.naic.org/cmte_c_catf.htm • Predictive Model White Paper • https://www.naic.org/documents/cmte_c_catf_exposure_predictive_model_white_paper.pdf • White Paper Comments • https://www.naic.org/documents/cmte_c_catf_190212_predictive_analytics.pdf
