Ethical Use of AI in Insurance Modeling and Decision-Making – FTI Consulting

With the increased availability of next-generation technology and data mining tools, insurance company use of external consumer data sets and artificial intelligence (AI) and machine learning (ML)-enabled analytical models is rapidly expanding and accelerating. Insurers have initially targeted key business areas such as underwriting, pricing, fraud detection, marketing, distribution and claims management, leveraging technical innovations to realize enhanced risk management, revenue growth and improved profitability. At the same time, regulators worldwide are intensifying their focus on the governance and fairness challenges presented by these complex, highly innovative tools, specifically the potential for unintended bias against protected classes of people.

In the United States, the Colorado Division of Insurance recently issued a first-in-the-nation draft regulation to support the implementation of a 2021 law passed by the state's legislature.1 This law (SB21-169) prohibits life insurers from using external consumer data and information sources (ECDIS), or employing algorithms and models that use ECDIS, where the resulting impact of such use is unfair discrimination against consumers on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.2 In pre-release public meetings with industry stakeholders, the Colorado Division of Insurance also indicated that similar rules should be expected in the not-too-distant future for property & casualty insurers. In the same vein, UK and EU regulators are now drafting new policies and legal frameworks to prevent AI model-driven consumer bias, ensure transparency and explainability of model-based decisions for customers and other stakeholders, and impose accountability on insurers who leverage these capabilities.3

Clearly, regulators around the globe believe that well-defined guardrails are needed to ensure the ethical use of external data and AI-powered analytics in insurance decision-making. Moreover, in some jurisdictions, public oversight and enablement bodies such as the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) are also actively working to define cross-industry guidelines and rules for the acceptable use of external data to train AI/ML-powered decision support models without resulting in discrimination against protected classes of consumers.4 Examples of potentially disfavored data may include credit scores, social media habits, purchasing habits, home ownership, educational attainment, civil judgments, court records and occupation data that has no direct relationship to mortality, morbidity or longevity risk.

Based on the recently published Colorado draft regulation, the expected breadth of pending new AI and external data set rules could mean potentially onerous execution challenges for insurers, who must balance proactive risk management, market penetration and profitability objectives with principles of consumer fairness. For many insurers, internal data science and technology resources already swamped with their day jobs will be insufficient to meet expected reporting and model testing obligations across the multiple jurisdictions in which their companies do business. In other cases, insurers may lack the appropriate test data and skill sets to assess potential model bias. Either way, model testing and disclosure obligations will continue to mount, and support will be needed to satisfy regulator demands and avoid the significant business ramifications of non-compliance.
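To make the model-testing obligation concrete, the sketch below shows one common screening approach: an adverse impact ratio comparing favorable decision rates across demographic groups. The group labels, sample data and the "four-fifths" threshold are purely illustrative assumptions, not requirements drawn from the Colorado draft regulation or any other rule.

```python
# Minimal sketch of a disparate-impact screen on model decisions.
# Hypothetical data and threshold; actual regulatory tests and
# protected-class definitions will differ by jurisdiction.

from collections import defaultdict

def approval_rates(records):
    """Share of favorable decisions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: fav / tot for g, (fav, tot) in counts.items()}

def adverse_impact_ratios(rates, reference_group):
    """Ratio of each group's approval rate to the reference group's."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical model outcomes: (group label, favorable decision?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = approval_rates(decisions)
ratios = adverse_impact_ratios(rates, reference_group="A")
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # common "four-fifths" screen
print(rates, ratios, flagged)
```

In practice, such a screen would be run on representative test data, for each protected class and each model output, with results documented for regulators; a simple ratio check is usually only a first pass, followed by more rigorous statistical testing of any flagged disparity.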

So how can insurance companies and their data science and technology teams best address the operational challenges that evolving data privacy and model ethics regulations will certainly present? Leading companies that want to get ahead of the curve may opt to partner with skilled experts who understand the data and processing complexities of non-linear AI/ML-enabled models. The best of these external partners will also bring deep insurance domain knowledge to provide context for testing, along with reliable, independent and market-proven test data and testing methodologies that can be easily demonstrated and explained to insurers and regulators alike.

The burden of regulatory compliance in the insurance industry should not be underestimated, and it can undermine a company's ability to attain target business benefits if the two seemingly opposing objectives, compliance and profitability, are not managed in a proactive and strategic way. With appropriate guidance and execution, insurers who comply with new and emerging regulations for the use of AI-powered decision-support models and external data sets may actually realize a number of tangible benefits beyond compliance, including more stable analytic models, improved new business profitability and operational scalability, and a better customer experience that enhances brand loyalty and drives customer retention and enhanced lifetime value.
