I spent a summer interning at the Institute for Defense Analyses (IDA), where I worked in the Operational Evaluation Division, which is responsible for rigorously evaluating and providing recommendations on military systems the Department of Defense plans to buy. Time-to-event data is common in these evaluation settings, but the standard right-censoring assumptions often do not hold. For example, suppose interest is in the time to detection for a chemical agent detector, and a test is run to evaluate it. There will likely be trials in which the agent would never have been detected no matter how long the observation period, an obvious violation of the right-censoring assumption that every censored subject would eventually experience the event.
Mixture cure models, which separately model a "cure rate" and the survival times of the uncured, have been developed in the biomedical literature for exactly these scenarios, often to analyze data from cancer patients where some individuals are expected to go fully into remission. However, mixture cure models have seen little use in engineering or other non-biomedical applications.
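For context, the standard textbook formulation of a mixture cure model (written in my own notation, not taken from any particular source above) expresses the overall survival function as

$$
S(t) = \pi + (1 - \pi)\, S_u(t),
$$

where \(\pi\) is the cure fraction and \(S_u(t)\) is the survival function of the uncured subjects. In the semi-parametric variants discussed below, \(\pi\) is typically modeled with logistic regression on covariates, while \(S_u\) takes a proportional hazards or accelerated failure time form.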
Part of my summer project involved determining whether mixture cure models should be added to the statistical toolbox for IDA statisticians. I implemented a Monte Carlo simulation study evaluating standard survival models and comparing them to mixture cure models under data-generating processes with a varying cure fraction (the proportion of subjects that never experience the event of interest). The models considered were the Cox proportional hazards model; Weibull, lognormal, and generalized gamma parametric accelerated failure time (AFT) models; a semi-parametric proportional hazards mixture cure model; and a semi-parametric AFT mixture cure model. Results indicated that standard survival models are sufficient for most scenarios, and that mixture cure models are needed only when the cure fraction is very high (> 0.7).
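To make the data-generating process concrete, here is a minimal Python sketch of what one replication of such a simulation might look like. It uses the lifelines package; the function name, covariate structure, and parameter values are my own illustrative assumptions, not the actual study code.

```python
# A minimal sketch of one replication: simulate data with a cure fraction,
# then fit a standard survival model. Illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # assumes lifelines is installed

rng = np.random.default_rng(42)

def simulate_cure_data(n=500, cure_fraction=0.3, tau=10.0):
    """Simulate right-censored data where a `cure_fraction` of subjects
    never experience the event, violating the usual censoring assumption."""
    x = rng.normal(size=n)                  # one covariate
    cured = rng.random(n) < cure_fraction   # cured subjects never fail
    # Uncured subjects get Weibull event times whose scale depends on x.
    latent = rng.weibull(1.5, size=n) * np.exp(0.5 * x)
    event_time = np.where(cured, np.inf, latent)
    observed = np.minimum(event_time, tau)  # administrative censoring at tau
    return pd.DataFrame({
        "T": observed,
        "E": (event_time <= tau).astype(int),  # 1 = event, 0 = censored
        "x": x,
    })

# Fit a standard Cox model to one simulated data set; the full study
# repeats this many times across cure fractions and across models.
df = simulate_cure_data(cure_fraction=0.7)
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
cph.print_summary()
```

The key feature is that cured subjects are censored at the end of the observation period just like slow-to-fail subjects, so a standard model cannot distinguish the two; as the cure fraction grows, more of the censored observations come from subjects who would never fail.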
Slides from a research talk I gave on this topic can be found below, and code implementing the Monte Carlo simulation study is available on my GitHub page.