National HIV programs funded under the President's Emergency Plan for AIDS Relief (PEPFAR), where I worked for six years, operate under strict performance accountability frameworks. Facilities are measured on a defined cascade:
HIV diagnosis
Linkage to care
Retention in treatment
Viral suppression
Funding and technical support are directly tied to measurable performance across these cascade stages. Small breakdowns — missed linkage, loss to follow-up, incomplete documentation — translate into measurable declines in population-level outcomes.
The system challenge is not simply clinical. It is structural.
Performance depends on:
Accurate and timely documentation
Clear patient attribution to facilities
Workforce capacity
Operational follow-up processes
Data visibility across sites
Inconsistent documentation and limited data harmonization across facilities created blind spots in identifying where cascade attrition occurred and why.
I led the implementation of READI (Rapid, Efficient, and Data-Driven Implementation) across multiple countries where IntraHealth International implemented HIV programs under PEPFAR.
The intervention included:
Standardized cascade definitions across heterogeneous data systems
Patient-level longitudinal tracking to identify drop-off points
Facility- and geography-level stratification
Integration of workforce and service availability indicators
Operational dashboards accessible to clinicians and program managers
Rather than reporting static aggregate performance, the dashboards highlighted dynamic attrition points — where patients were lost between stages of care.
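The core mechanics can be sketched in a few lines. This is a minimal illustration, not READI's actual schema: the status codes, stage names, and record format below are hypothetical stand-ins for the heterogeneous site-level data the dashboards harmonized.

```python
from collections import Counter

# Hypothetical mapping from heterogeneous site-level status codes to
# standardized cascade stages (illustrative values only).
STATUS_TO_STAGE = {
    "hiv_pos": "diagnosed", "dx": "diagnosed",
    "enrolled": "linked", "in_care": "linked",
    "on_art_6mo": "retained", "active_tx": "retained",
    "vl_suppressed": "suppressed",
}

CASCADE = ["diagnosed", "linked", "retained", "suppressed"]

def cascade_counts(patient_records):
    """Count patients reaching each stage, using the furthest stage
    recorded for each patient across any source system."""
    furthest = {}
    for pid, raw_status in patient_records:
        stage = STATUS_TO_STAGE.get(raw_status)
        if stage is None:
            continue  # unmapped status: itself a documentation blind spot
        idx = CASCADE.index(stage)
        furthest[pid] = max(furthest.get(pid, -1), idx)
    counts = Counter()
    for idx in furthest.values():
        for i in range(idx + 1):  # reaching a stage implies the earlier ones
            counts[CASCADE[i]] += 1
    return [(stage, counts[stage]) for stage in CASCADE]

def attrition_points(counts):
    """Conversion rate between consecutive stages; low values flag drop-off."""
    return [
        (f"{s1}->{s2}", n2 / n1 if n1 else 0.0)
        for (s1, n1), (s2, n2) in zip(counts, counts[1:])
    ]
```

Run over patient-level records, the output is exactly the dynamic view described above: not one aggregate number, but a conversion rate for each stage transition, so the weakest link is immediately visible.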
We built tools that allowed teams to ask:
Are losses occurring at diagnosis-to-linkage or retention-to-suppression?
Are specific facilities underperforming due to staffing constraints?
Is documentation inconsistency masking true performance?
The goal was not only to measure outcomes, but to connect measurement directly to operational response.
The analytic framework enabled:
Targeted technical assistance to underperforming facilities
Focused follow-up for patients at highest risk of loss to care
More efficient allocation of limited supervisory resources
Improved documentation completeness
By making cascade attrition visible and actionable, programs were able to prioritize high-leverage intervention points rather than broadly distributing resources.
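Prioritizing high-leverage points over broad distribution is itself a simple computation. The sketch below, with hypothetical facility names and counts, ranks facility-transition pairs by absolute patients lost rather than by percentage, since a modest rate drop at a high-volume facility can outweigh a steep drop at a small one.

```python
def high_leverage_points(facility_stage_counts, top_n=3):
    """Rank (facility, transition) pairs by absolute patients lost,
    so limited supervisory resources go where recoverable volume is largest.

    facility_stage_counts: {facility: [(stage, count), ...] in cascade order}
    """
    losses = []
    for facility, counts in facility_stage_counts.items():
        for (s1, n1), (s2, n2) in zip(counts, counts[1:]):
            losses.append((n1 - n2, facility, f"{s1}->{s2}"))
    losses.sort(reverse=True)  # largest absolute loss first
    return losses[:top_n]

# Illustrative data: Facility A leaks at linkage, Facility B at retention.
data = {
    "Facility A": [("diagnosed", 200), ("linked", 150),
                   ("retained", 140), ("suppressed", 120)],
    "Facility B": [("diagnosed", 80), ("linked", 78),
                   ("retained", 40), ("suppressed", 38)],
}
```

Here the top-ranked gap is Facility A's diagnosis-to-linkage transition (50 patients), followed by Facility B's retention drop (38), which is how targeted technical assistance gets sequenced.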
Most importantly, the dashboards shifted performance conversations from reactive reporting to proactive system adjustment.
Although the funding mechanisms differ, the structural logic of the HIV cascade closely mirrors U.S. value-based care programs — particularly Medicare Advantage Stars and HEDIS (Healthcare Effectiveness Data and Information Set) gap closure.
In both systems:
Performance is measured against defined endpoints
Incentives are tied to measurable quality metrics
Attrition between stages drives financial consequences
Documentation integrity directly affects reimbursement
For example:
Viral suppression parallels diabetes A1c control.
Retention in care parallels medication adherence measures.
Linkage to care parallels post-discharge follow-up metrics.
Accurate HIV staging parallels accurate risk adjustment coding.
In Medicare Advantage, quality bonus payments and benchmark adjustments are sensitive to relatively small percentage shifts in performance. Similarly, in PEPFAR programs, incremental changes in cascade outcomes can significantly alter funding allocation and strategic prioritization.
The core systems insight is consistent:
Performance improvement requires identifying where attrition occurs, stratifying risk, and aligning operational resources to close measurable gaps.
Risk stratification logic in MA — identifying patients at highest risk of readmission or care gaps — mirrors the identification of patients at highest risk of loss to follow-up in HIV programs.
Similarly, RAF (Risk Adjustment Factor) capture accuracy in Medicare Advantage depends on documentation integrity and workforce engagement — just as cascade measurement depends on complete and timely documentation in global programs.
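The shared stratification logic reduces to the same shape in both settings: score each patient on a handful of risk flags, then work the list from the top. The flags and weights below are hypothetical and hand-set for illustration; a production version would be fit to program or claims data.

```python
# Hypothetical loss-to-follow-up risk flags and weights (illustrative only;
# a real model would be estimated from program data, not hand-weighted).
WEIGHTS = {
    "missed_last_appointment": 3.0,
    "new_on_treatment": 2.0,     # e.g., first six months on ART
    "distance_over_10km": 1.5,
    "documentation_gap": 1.0,    # incomplete recent records
}

def ltfu_risk_score(patient_flags):
    """Sum the weights of the risk flags present for one patient."""
    return sum(w for flag, w in WEIGHTS.items() if patient_flags.get(flag))

def prioritize_follow_up(patients, threshold=3.0):
    """Return patient IDs at or above threshold, highest risk first —
    the same triage shape as an MA readmission or care-gap outreach list."""
    scored = [(ltfu_risk_score(flags), pid) for pid, flags in patients.items()]
    return [pid for score, pid in sorted(scored, reverse=True)
            if score >= threshold]
```

Swap the flags for readmission predictors or open HEDIS gaps and the function is unchanged: the incentives differ, the triage logic does not.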
The incentives change. The systems logic does not.
Both environments require:
Clear attribution of patients to responsible entities
Harmonized data definitions across sites
Visibility into stage-based progression
Workforce alignment with measurement frameworks
Feedback loops between analytics and operations
My experience building cascade optimization systems in resource-constrained environments translates directly to designing analytic tools for Medicare Advantage gap closure, HEDIS performance, and shared savings frameworks.
The context shifts. The underlying analytic architecture remains fundamentally similar across:
Incentive alignment under performance-based reimbursement
Dashboard design for actionable attrition insights
Cross-system data harmonization
Translation of systems logic across contexts