Canadian Institute of Actuaries Invited Sessions
Session I – Data-driven Solutions in Insurance
A population sampling framework for claim reserving in general insurance
Speaker: Sebastián Calcetero Vanegas, University of Toronto
This talk introduces a novel perspective on claim reserving by viewing it as a population sampling problem. It highlights techniques from this field that can be applied to claim reserving, showing how they mirror existing approaches while identifying areas for improvement. In particular, we demonstrate that population sampling provides a statistical framework for claim reserving by introducing augmented inverse probability weighting (AIPW) estimators of the reserve and showing that macro- and micro-level reserving models emerge as extreme yet natural cases. This formulation seamlessly integrates principles from both aggregate and individual models into a single estimation, enabling more accurate and flexible predictions. Moreover, this talk addresses sampling bias arising from partially observed claims data—an often overlooked challenge in insurance—and illustrates how advanced statistical methods from the sampling literature can be adapted to improve predictive accuracy and expand actuaries’ analytical tools. We showcase the framework using insurance data from a Canadian company.
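The abstract does not spell out the estimator, but the general shape of an AIPW estimator of a population total is standard. The sketch below is an illustrative implementation under assumed inputs (reporting indicators, inclusion probabilities, and model predictions); the function name and interface are hypothetical, not the speaker's formulation.

```python
import numpy as np

def aipw_total(y, observed, pi, m_hat):
    """Illustrative AIPW estimate of a population total (e.g. a reserve).

    y        : outcomes, only meaningful where observed is True
    observed : boolean mask of reported claims (the "sample")
    pi       : inclusion (reporting) probabilities for every claim
    m_hat    : model predictions m(x_i) for every claim, observed or not
    """
    y = np.where(observed, y, 0.0)            # unobserved outcomes play no role
    ipw_term = observed * y / pi              # design-based (weighting) part
    aug_term = (1.0 - observed / pi) * m_hat  # model-based augmentation part
    return float(np.sum(ipw_term + aug_term))
```

Setting `m_hat` to zero recovers a pure inverse-probability-weighted (macro-style) estimate, while `pi` close to one makes the model predictions dominate for unreported claims, which loosely mirrors the macro/micro spectrum described in the talk.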
Predicting classification errors using NLP-based machine learning algorithms and expert opinions
Speaker: Peiheng Gao, Western University
Various intentional and unintentional human biases manifest in classification tasks, such as those related to risk management. In this paper we demonstrate the role of ML algorithms in accomplishing these tasks and highlight the role of expert know-how both in training staff and, very importantly, in training and fine-tuning ML algorithms. Facing the well-known inefficiencies of the traditional F1 score, especially on unbalanced datasets, we propose a modification of the score that incorporates human-experience-trained algorithms: expert-trained algorithms (trained with the involvement of expert classification experience) and staff-trained algorithms (trained with the experience of staff who have themselves been trained by experts). Our findings reveal that the modified F1 score diverges from the traditional staff F1 score when staff labels are only weakly correlated with expert labels, indicating insufficient staff training. Furthermore, the Long Short-Term Memory (LSTM) model outperforms other classifiers in terms of the modified F1 score when applied to the classification of textual narratives in consumer complaints.
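The paper's modified score is not reproduced here, but the traditional F1 score it builds on is standard: the harmonic mean of precision and recall. A minimal reference implementation for binary labels, against which either staff or expert labels can serve as ground truth:

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Traditional F1 score for binary labels: harmonic mean of
    precision and recall. Returns 0.0 when undefined."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Computing this score once against staff labels and once against expert labels makes the divergence the authors study directly observable.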
Integration of traditional and telematics data for efficient insurance claims prediction
Speaker: Hashan Peiris, Simon Fraser University
For risk classification in auto insurance, companies need to deal with two types of claim count data sets: traditional data sets with fewer features and a larger number of observations, and telematics data sets with more features and a smaller number of observations. As driver telematics reflects the driver’s habits on the road, it has gained attention for better risk classification in auto insurance. However, the scarcity of observations with telematics features has been problematic, which may stem from privacy concerns or from favorable selection relative to the data points with only traditional features.
To handle this issue, we propose a data integration technique based on calibration weights for usage-based insurance with multiple sources of data. The proposed framework can efficiently integrate traditional data and telematics data by effectively utilizing multiple sources of data for better risk classification and tariffication. It can also deal with possible favorable selection issues related to telematics data availability.
This conclusion is supported by a simulation study and an empirical analysis using a synthetic telematics dataset under four selection-bias scenarios: random selection, age selection, mileage selection, and favorable selection. The proposed approach achieves satisfactory performance in both in-sample estimation and out-of-sample prediction compared with existing benchmarks for automobile insurance ratemaking. Thus, the proposed approach can potentially improve risk classification in auto insurance and assist insurers in making informed decisions. Further, the proposed approach extends to data integration for mixed-effects models in which a policyholder is observed over time, so that the framework can account for both fixed effects and the random effects needed for experience ratemaking.
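The abstract does not specify the calibration scheme, but a common linear (GREG-type) construction adjusts initial weights so that the weighted sample totals of features shared by both sources match known totals from the larger traditional data set. The sketch below is illustrative, with hypothetical names, and is not the authors' exact method:

```python
import numpy as np

def calibrate_weights(d, X, totals):
    """Linear (GREG-type) calibration of sampling weights.

    d      : (n,) initial design weights for the telematics sample
    X      : (n, p) auxiliary features observed in both data sources
    totals : (p,) known population totals of those features
             (e.g. computed from the traditional data set)

    Returns weights w = d * (1 + X @ lam) chosen so that X.T @ w == totals,
    the chi-square-distance-minimizing adjustment.
    """
    T = X.T @ (d[:, None] * X)                 # p x p weighted Gram matrix
    lam = np.linalg.solve(T, totals - X.T @ d)  # Lagrange multipliers
    return d * (1.0 + X @ lam)
```

The calibration constraint holds exactly after adjustment, which is what lets the small telematics sample "borrow" representativeness from the traditional data.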
Session II – Innovative Tools for Pension and Life Insurance
Optimal hurdle rate and investment policies in lifetime pension pools
Speaker: Yingfei Sun, Simon Fraser University
Lifetime pension pools provide retirees with lifelong income by pooling mortality risk and dynamically adjusting benefits based on investment performance and demographic changes within the pool. Their benefit structure depends on two key design elements: the investment policy and the hurdle rate. The investment policy determines the allocation of assets, while the hurdle rate represents the assumed interest rate used to value the pool’s cash flows and guide benefit adjustments.
Existing research on asset allocation in lifetime pension pools is limited, with most studies relying on simplistic investment strategies, such as constant or static allocations, or portfolios composed solely of risk-free assets. Moreover, the optimal hurdle rate has been largely overlooked in the literature. This study addresses both gaps by simultaneously exploring the optimal hurdle rate and investment strategies, employing dynamic programming to account for varying levels of risk aversion.
We propose a framework that jointly determines the optimal fixed hurdle rate and asset allocations over time using a hyperbolic absolute risk aversion utility function to model the members’ risk preferences. Our results demonstrate that the investment policy adjusts dynamically in response to the pool’s assets and the number of survivors. Higher risk aversion leads to more conservative allocations and lower hurdle rates, whereas lower risk aversion results in greater allocations to risky assets and higher hurdle rates. We then conduct robustness tests on key factors—pool size, financial market dynamics, mortality assumptions, and subjective discount factors—and demonstrate that our results remain robust to these variations.
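For concreteness, one common parameterization of the hyperbolic absolute risk aversion (HARA) family is sketched below; the paper's exact parameterization may differ, and the parameter names here are illustrative:

```python
def hara_utility(c, gamma, a=1.0, b=0.0):
    """HARA utility in a common parameterization:
        U(c) = (1 - gamma) / gamma * (a * c / (1 - gamma) + b) ** gamma
    Absolute risk aversion A(c) = a / (a*c/(1-gamma) + b) is hyperbolic in c,
    and b = 0 recovers the CRRA (power utility) special case.
    Requires a * c / (1 - gamma) + b > 0 and gamma not in {0, 1}.
    """
    base = a * c / (1.0 - gamma) + b
    return (1.0 - gamma) / gamma * base ** gamma
```

The flexibility of this family (CRRA, exponential, and quadratic utility arise as special or limiting cases) is what makes it convenient for modelling a range of member risk preferences in the dynamic program.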
Hedging targeted risks with reinforcement learning: application to life insurance contracts with embedded guarantees
Speaker: Carlos Octavio Pérez Mendoza, Concordia University
We propose a deep reinforcement learning (RL) framework for optimizing the hedging of targeted risks in financial instruments exposed to multiple risk factors, with a particular focus on life insurance contracts such as variable annuities. Our methodology integrates Shapley decompositions to quantify the contribution of each risk source to liability cash flows, enabling precise profit and loss attribution. By isolating and hedging only the risks designated for mitigation, our approach ensures that non-targeted risks remain unaffected. Through numerical experiments based on Monte Carlo simulations, we demonstrate that our RL-based strategy outperforms traditional methods like delta hedging. Specifically, it effectively reduces targeted risks without increasing exposure to other risk factors and proportionally decreases overall risk exposure. This framework provides life insurers with a robust and adaptable tool for comprehensive risk management.
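The Shapley decomposition underlying the profit-and-loss attribution can be computed exactly when the number of risk factors is small. The sketch below is a generic exact Shapley computation over a user-supplied coalition value function (e.g. the P&L when only the factors in a coalition are shocked); the interface is illustrative, not the authors' implementation:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a list of players (risk factors).

    value : callable mapping a frozenset of players to a float,
            e.g. the liability P&L when only those factors move.
    Cost is exponential in len(players), which is fine for the handful
    of risk factors in a typical liability decomposition.
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(S | {p}) - value(S))
        phi[p] = total
    return phi
```

By the efficiency property, the attributions sum to the total P&L, which is what makes it possible to isolate and hedge only the targeted risk components.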
Mortality prediction via transfer learning-based approaches
Speaker: Yechao Meng, University of Prince Edward Island
Borrowing information from populations with similar mortality patterns is a well-established strategy for improving mortality predictions in a target population. This approach is closely aligned with the concept of Transfer Learning, a rapidly evolving field in modern data mining and machine learning. Transfer Learning aims to enhance the performance of predictive models by leveraging knowledge from related source domains and applying it to target domains.
This project is focused on developing parameter-transfer-based methods for mortality prediction in actuarial applications. We investigate how data from other mortality datasets can be effectively integrated into a parameter transfer learning framework to improve predictions for a target population. Our methodology involves incorporating classic mortality prediction models into a regularization framework, utilizing various penalty forms, and applying an iterative updating algorithm. We will also explore alternative transfer learning approaches through extensive numerical studies to assess their applicability with real-world mortality data.
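A minimal instance of the parameter-transfer idea in a regularization framework is ridge-style shrinkage of the target-population coefficients toward estimates from a source population. The sketch below uses a quadratic penalty with a closed-form solution; the penalty form, names, and interface are illustrative assumptions, not the project's actual methodology:

```python
import numpy as np

def transfer_ridge(X, y, beta_src, lam):
    """Parameter-transfer regression: shrink target coefficients toward
    source-population estimates beta_src. Closed-form solution of
        argmin_beta ||y - X @ beta||^2 + lam * ||beta - beta_src||^2,
    i.e. beta = (X'X + lam*I)^{-1} (X'y + lam * beta_src).
    """
    p = X.shape[1]
    A = X.T @ X + lam * np.eye(p)
    return np.linalg.solve(A, X.T @ y + lam * beta_src)
```

As `lam` grows the fit collapses onto the source parameters, and `lam = 0` recovers ordinary least squares on the target data alone; iterating such fits with updated penalties matches the iterative-updating flavour described in the abstract.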
Session III – Research Opportunities with the Canadian Institute of Actuaries
Speaker: Michael Bean, Actuary, Research Program, Canadian Institute of Actuaries
The Canadian Institute of Actuaries (CIA) has a long history of serving the research needs of the Canadian actuarial community. During the past three years, the CIA’s research program was reimagined and now focuses on three key areas: core research, which serves the needs of industry practitioners; academic research, which serves the needs of the academic actuarial community; and contributed research, which provides a forum for disseminating research on topics of current interest to Canadian actuaries.
Attend this session to find out about the reimagined research program, including a new partnership with the Actuarial Foundation of Canada that provides funding for academic research involving collaboration with industry.