Aim 1: Evaluate the effects of each identified nondrug ADRD dyadic intervention on four family and health policy outcomes (family hours caregiving, days in a nursing home, costs to families/Medicaid/Medicare, and quality-adjusted life-years of persons with ADRD and their caregivers). Effects on outcomes will be evaluated for the population at large. We will also evaluate how differential access to interventions by race (African American, Asian, and White) and ethnicity (Hispanic) affects outcomes.
- Identify/estimate quality of life (EJ, RI)
- Validate and calibrate model findings (EJ, SC, RI, FE)
- We will validate the model-predicted outcomes of time in the nursing home and place of death (community or nursing home).
- In the ADRD-MM, the probability of entering a nursing home was based on a Weibull model estimated using data from the NACC. The advantage of using NACC data is that it allows the ADRD-MM to connect ADRD clinical features (i.e., cognition, function, and behavior) to the risk of entering a nursing home. In the NACC, there were few observed transitions from the nursing home back to the community, so in the ADRD-MM we estimated the probability of exiting the nursing home using published data from the State of Arkansas. Therefore, the data used to estimate nursing home admissions and discharges in the ADRD-MM may not be generalizable.
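- As an illustration of the nursing home entry component, a minimal sketch follows; the Weibull shape, scale, and covariate weights are placeholders, not the NACC-estimated ADRD-MM coefficients:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_months_to_nursing_home(cognition, function, behavior,
                                  shape=1.3, base_scale=60.0):
    """Sample months until nursing home entry from a Weibull model.

    Illustrative only: the shape, scale, and covariate weights are
    placeholders. Worse cognition/function/behavior (higher scores)
    shrinks the Weibull scale and shortens expected time to entry.
    """
    log_scale = (np.log(base_scale)
                 - 0.02 * cognition - 0.03 * function - 0.01 * behavior)
    return np.exp(log_scale) * rng.weibull(shape)

# Example: one simulated person with moderate impairment
print(f"{sample_months_to_nursing_home(15, 20, 10):.1f} months to entry")
```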
- We will perform an external validation of the target outcomes of time in the nursing home and place of death using the Minimum Data Set (MDS 3.0, years 2011-2016), a data source not used to estimate model parameters. The MDS is a federally mandated clinical assessment for nearly all nursing home residents in the US, and it captures the point of nursing home admission and, if applicable, the time of discharge. Because the MDS uses measures of cognition, function, and behavior that differ from those in the ADRD-MM, it cannot be used to directly estimate transition probabilities in the simulation. Rather, we will identify a cohort of newly admitted ADRD nursing home residents in the MDS. ADRD status will be defined based on a Cognitive Function Scale score of severe and an active diagnosis of ADRD.134 Preliminary evaluation of the MDS identified 200,000 unique residents with ADRD. Using the identified MDS ADRD cohort, we will determine the mean length of time in a nursing home and the proportion of the cohort that dies in the nursing home, distinguishing between short-stay and long-stay residents (long stay defined as residence for >1 year).
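- A minimal sketch of how the MDS validation targets could be computed is shown below; the column names and the four example records are hypothetical stand-ins for the MDS 3.0 extract:

```python
import pandas as pd

# Hypothetical columns/records standing in for the MDS 3.0 ADRD cohort.
mds = pd.DataFrame({
    "resident_id": [1, 2, 3, 4],
    "admit_date": pd.to_datetime(["2012-01-10", "2013-05-02",
                                  "2014-07-20", "2015-03-15"]),
    "discharge_date": pd.to_datetime(["2012-06-01", "2015-08-30",
                                      "2014-12-01", "2016-02-10"]),
    "died_in_facility": [True, True, False, True],
})

mds["los_days"] = (mds["discharge_date"] - mds["admit_date"]).dt.days
mds["long_stay"] = mds["los_days"] > 365  # long stay: resident for >1 year

validation_targets = {
    "mean_los_days": mds["los_days"].mean(),
    "prop_died_in_nursing_home": mds["died_in_facility"].mean(),
    "prop_long_stay": mds["long_stay"].mean(),
}
print(validation_targets)
```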
- We will use outcomes from the control arms of identified RCTs (e.g., change in cognition) as calibration targets. Using an approach similar to that described above, we will calibrate ADRD-MM parameters to match trial outcomes. Because treatment effects will be modeled on the domain each intervention targets, the simulation will directly match reported findings on function or behavior. However, we will calibrate simulated treatment cases (i.e., group 2) as needed to ensure our simulated treatment population matches other measures (e.g., cognition) observed in the treatment arms of the source RCTs.
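- A minimal sketch of the calibration step is shown below; the decline-rate parameter, the stand-in simulator, and the target value are illustrative assumptions, not ADRD-MM components or trial results:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulated_mean_cognition_change(annual_decline, n=5000, months=12):
    """Stand-in for an ADRD-MM usual-care run: mean 12-month cognition
    change given an annual decline-rate parameter (illustrative)."""
    person_rates = rng.normal(annual_decline, 0.5, size=n)  # between-person variation
    return float((person_rates * months / 12).mean())

target_change = 2.4  # placeholder control-arm calibration target

# Grid-search calibration: pick the decline rate that best matches the target
candidates = np.linspace(1.0, 4.0, 61)
errors = [abs(simulated_mean_cognition_change(r) - target_change) for r in candidates]
print(f"Calibrated annual decline rate: {candidates[int(np.argmin(errors))]:.2f}")
```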
- Expert advisors will review modeling assumptions and results for face validity.
- Model intervention effects (EJ, SC, RI, FE)
- Using the ADRD-MM, we will evaluate what happens to an average person with ADRD if they participate in one of the identified interventions compared with receiving usual care. Usual care is defined as care consistent with clinical practice as observed in the epidemiologic data used to construct the ADRD-MM.
- To conduct the counterfactual analysis, we will generate two identical simulated incident ADRD cases (group 1 = usual care; group 2 = intervention). Simulated ADRD cases in group 1 will experience usual care, the associated declines in cognition, function, and behavior, and the corresponding costs. Simulated cases in group 2 will experience an intervention effect (e.g., slower functional decline for a period of time) and, as a result, different transitions and costs. A key assumption of our modeling is that the clinical benefits observed in the RCTs (e.g., reduction in behaviors) will also translate into changes in transitions and costs, as identified by associations in the observational data. This counterfactual analysis is similar to a matched observational study in which individuals are identical in all demographic characteristics except for the exposure (i.e., the intervention). To model treatment effects in the ADRD-MM, we will impose a difference in function or behavior between the simulated usual care cases (group 1) and intervention cases (group 2) that replicates the effect size observed in the original RCTs.
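- The sketch below illustrates the paired (counterfactual) simulation under stated assumptions: a single hypothetical cost function, placeholder decline rates, and an effect applied only to function; common random numbers keep the two copies of each case identical apart from the intervention:

```python
import numpy as np

def simulate_pair(seed, effect_on_decline=0.1, months=24):
    """Simulate one incident ADRD case twice (usual care vs. intervention)
    with identical random draws; all rates and costs are placeholders."""
    rng = np.random.default_rng(seed)            # common random numbers
    baseline_function = rng.normal(20, 5)
    monthly_decline = rng.normal(0.4, 0.1)

    usual_final = baseline_function + monthly_decline * months
    treated_final = baseline_function + (monthly_decline - effect_on_decline) * months

    monthly_cost = lambda score: 500 + 40 * score  # worse function -> higher cost
    return monthly_cost(usual_final) * months - monthly_cost(treated_final) * months

savings = [simulate_pair(seed) for seed in range(1000)]
print(f"Mean simulated cost difference per case: ${np.mean(savings):,.0f}")
```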
- RCTs measure function and/or behavior using instruments that differ from those in the ADRD-MM. These RCTs measure the same underlying construct (e.g., behavior) but do so with different instruments. In such cases, we will convert effect sizes from RCTs to a standardized mean difference (SMD, namely the difference in means between intervention and control divided by the pooled standard deviation). Standardizing transforms these measures to a uniform scale. We will then simulate the SMD in the ADRD-MM. In an exploratory analysis, we will use data harmonization and item response theory (IRT) to synthesize effect sizes from studies that use different measures and convert all measures to a uniform scale (Co-I Dr. Gross is an expert in IRT). IRT uses a probabilistic model to relate individual item responses to an underlying construct.
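- For reference, the SMD described above is the difference in arm means divided by the pooled standard deviation (Cohen's d form):

```latex
\mathrm{SMD} = \frac{\bar{x}_{\mathrm{intervention}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_{I}-1)\,s_{I}^{2} + (n_{C}-1)\,s_{C}^{2}}{n_{I}+n_{C}-2}}
```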
- Modeling differential access to interventions by race/ethnicity (EJ, SC, RI, FE)
- We will model treatment effects by race/ethnicity when such data are available from RCTs.
- Most RCTs do not report effects by race/ethnicity, and when such data are unavailable, we will assume that intervention effects are universal regardless of race/ethnicity. Nevertheless, racial/ethnic groups may not have universal access to interventions. In our study, the ability to access community-based interventions will be based on living in the community and having a family structure similar to those tested in the RCTs. For example, many of the source RCTs required the person with ADRD to have a spouse or adult child caregiver. However, family structures differ by race/ethnicity (e.g., African Americans have younger caregivers and more fictive kin caregivers than Whites). With a different family structure (which the source RCTs did not evaluate), it is unclear whether many of the dyadic interventions can be offered as a method of care. As another example, African Americans, Asians, Hispanics, and Whites enter nursing homes at different rates. Consequently, individuals who enter nursing homes earlier in the disease process will have less time in the community and less opportunity to be exposed to community-based nondrug ADRD interventions. On a population level, differences in access to interventions can result in differential effectiveness of nondrug ADRD interventions.
- To model differential access, we will simulate persons with ADRD using population characteristics representative of incident African American, Asian, Hispanic, and White cases. We will also use published race/ethnicity-specific nursing home entry/exit rates and family characteristics (e.g., probability of having a spouse caregiver). These characteristics will be used to determine whether a simulated individual in the ADRD-MM can access the selected nondrug interventions. For example, a simulated African American would not access a proven intervention in the ADRD-MM if they do not have a spouse caregiver and the intervention was only tested with spouse caregivers. We will extrapolate results to a population level using US Census data and published race/ethnicity-specific incidence rates. We will compare expected outcomes between racial/ethnic groups. Finally, we will evaluate implementation strategies (e.g., offering nondrug interventions earlier in the disease process to groups with higher nursing home entry rates) that could potentially improve access to proven programs.
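- A minimal sketch of the access logic is shown below; the race/ethnicity-specific probabilities are illustrative placeholders for the published family-structure and nursing home entry estimates that will actually be used:

```python
import numpy as np

rng = np.random.default_rng(11)

# Placeholder inputs; real values will come from published estimates.
PROB_SPOUSE_CAREGIVER = {"African American": 0.35, "Asian": 0.55,
                         "Hispanic": 0.45, "White": 0.60}
ANNUAL_PROB_NH_ENTRY = {"African American": 0.10, "Asian": 0.07,
                        "Hispanic": 0.08, "White": 0.12}

def can_access_intervention(race_ethnicity, requires_spouse_caregiver=True):
    """Determine whether a simulated incident ADRD case could access a
    community-based dyadic intervention in a given year (sketch)."""
    in_community = rng.random() > ANNUAL_PROB_NH_ENTRY[race_ethnicity]
    has_spouse = rng.random() < PROB_SPOUSE_CAREGIVER[race_ethnicity]
    return in_community and (has_spouse or not requires_spouse_caregiver)

rate = np.mean([can_access_intervention("African American") for _ in range(10_000)])
print(f"Simulated annual access rate: {rate:.1%}")
```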
- Structural sensitivity analysis (EJ, SC, RI, FE)
- We will conduct structural sensitivity analyses in which we will evaluate treatment effects under five assumptions: 1) the effect is immediate (base-case); 2) the effect is delayed; 3) the effect ends immediately after the last reported follow-up in the source RCT (base-case); 4) the effect slowly ends after the last follow-up in the source RCT (not shown in Figure 1); and 5) the effect is indefinite.
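- The sketch below encodes the five effect-duration assumptions as an onset rule (assumptions 1-2) and an offset rule (assumptions 3-5); the month values and taper length are illustrative placeholders:

```python
def effect_at_month(month, onset="immediate", offset="stops_at_followup",
                    full_effect=1.0, onset_month=6, last_followup=12, taper_months=6):
    """Scale the intervention effect over time (all durations are placeholders)."""
    # Onset: 1) immediate (base-case) or 2) delayed until onset_month
    if onset == "delayed" and month < onset_month:
        return 0.0
    # Offset: 3) ends at last follow-up (base-case), 4) tapers, 5) indefinite
    if offset == "stops_at_followup" and month > last_followup:
        return 0.0
    if offset == "tapers" and month > last_followup:
        return full_effect * max(0.0, 1 - (month - last_followup) / taper_months)
    return full_effect

# Base-case (assumptions 1 and 3) vs. tapered offset (assumption 4)
print([effect_at_month(m) for m in (1, 12, 15)])
print([effect_at_month(m, offset="tapers") for m in (1, 12, 15)])
```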
- Uncertainty analyses (EJ, SC, RI, FE)
- Our core modeling approach accounts for stochastic uncertainty. That is, we account for the randomness in the model-predicted outcomes by modeling variation between individuals (e.g., treatment effects vary randomly based on data from the source RCT). To evaluate the importance of the ADRD-MM inputs, we will conduct one-way and two-way sensitivity analyses.
- In one-way sensitivity analyses, we will independently vary each ADRD-MM input (e.g., effectiveness of treatment) across its 95% confidence interval or across ranges recommended by the expert advisors. In two-way sensitivity analyses, we will jointly vary two ADRD-MM inputs. One-way and two-way sensitivity analyses provide insight into the influence of individual model inputs on results.
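- A minimal sketch of the one-way analysis is shown below; run_model, the parameter names, and the confidence-interval bounds are hypothetical stand-ins for the ADRD-MM and its inputs (a two-way analysis would vary a pair of inputs over a grid in the same way):

```python
def run_model(params):
    """Stand-in for an ADRD-MM run returning cost per person (placeholder formula)."""
    return (100_000
            - 50_000 * params["treatment_effect"]
            + 20_000 * params["nh_cost_multiplier"])

base = {"treatment_effect": 0.30, "nh_cost_multiplier": 1.0}
ci_bounds = {"treatment_effect": (0.10, 0.50), "nh_cost_multiplier": (0.8, 1.2)}

# One-way sensitivity: vary each input across its 95% CI, holding the others fixed
for name, (lo, hi) in ci_bounds.items():
    outcomes = [run_model({**base, name: value}) for value in (lo, hi)]
    print(f"{name}: cost ranges {min(outcomes):,.0f} to {max(outcomes):,.0f}")
```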
- To evaluate the effect of uncertainty in the ADRD-MM inputs (i.e., uncertainty in the estimation of the parameters), we will conduct a probabilistic sensitivity analysis (PSA). The PSA simultaneously varies all ADRD-MM inputs across a series of simulations. Fixed inputs (values for model parameters) will be replaced with data-informed distributions (e.g., probabilities following a beta distribution) to propagate parameter uncertainty through the model and generate distributions and 95% credible intervals around the model outcomes (e.g., cost).
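- A minimal sketch of the PSA is shown below; the distributions, their parameters, and the stand-in cost function are illustrative assumptions rather than ADRD-MM inputs:

```python
import numpy as np

rng = np.random.default_rng(123)

def run_model(treatment_effect, annual_nh_prob):
    """Stand-in for an ADRD-MM run; returns cost per person (placeholder formula)."""
    return 100_000 - 60_000 * treatment_effect + 150_000 * annual_nh_prob

costs = []
for _ in range(2000):
    # Replace fixed inputs with draws from data-informed distributions
    treatment_effect = rng.normal(0.30, 0.08)  # effect size, e.g., from a source RCT
    annual_nh_prob = rng.beta(12, 88)          # probability-type input ~ beta
    costs.append(run_model(treatment_effect, annual_nh_prob))

lo, hi = np.percentile(costs, [2.5, 97.5])
print(f"Mean cost {np.mean(costs):,.0f}; 95% credible interval ({lo:,.0f}, {hi:,.0f})")
```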
- Sensitivity analyses for real-world effectiveness (EJ, SC, RI, FE)
- Most of the identified interventions were tested in efficacy trials (i.e., Stage II of the NIH Stage Model), and our base-case modeling (described above) reflects these estimates. We will conduct a set of analyses to account for the dilution of an intervention effect that can occur when programs are adopted in the real world. We will simulate treatments assuming a 5%-15% loss of effect relative to the efficacy trials. We will engage the expert advisors to provide insight into implementation challenges that can reduce intervention effects, and when possible, we will directly model these barriers.
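- A minimal sketch of the dilution adjustment is shown below; the function name and example effect size are hypothetical, while the 5%-15% loss range follows the planned analysis:

```python
import numpy as np

rng = np.random.default_rng(5)

def diluted_effect(efficacy_effect_size, loss_low=0.05, loss_high=0.15):
    """Apply a randomly drawn 5%-15% real-world loss to an efficacy-trial effect size."""
    return efficacy_effect_size * (1 - rng.uniform(loss_low, loss_high))

print(round(diluted_effect(0.40), 3))
```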