Our workshop will introduce students to the complexities of predictive modeling using an engaging, competitive gaming approach. This dynamic and collaborative environment will challenge teams to use diverse modeling approaches—including spatial models, ordinary differential equations (ODEs), and cutting-edge AI/ML techniques—to predict system behaviors and outcomes based on limited and strategically manipulated datasets. Teams are expected to develop a model (or multiple models) prior to the workshop.
We recommend that participants form teams of 1 to 4 people prior to the workshop.
A mentor is recommended but not required. The mentor does not need to attend, but it is helpful if they can prepare the student(s) before the workshop and be available for the team to consult during it.
We plan three challenges, each simulating one year's influenza epidemic:
In the first challenge, the data given to workshop participants will include severe cases (hospitalizations) as well as total cases over the course of the season. Note that in the real world, total-case data are typically not available. For this challenge, the released data will be accurate to two significant figures, with no added uncertainty.
In the second challenge, the data release will include only severe cases. This scenario is closer to the real-world situation, where the number of infected individuals is not well known. As in the first challenge, the released data will be accurate model output without added noise.
In the third challenge, which represents another simulated year of influenza epidemic, only data on severe cases will be released. In addition, the released data will contain noise to represent the intrinsic uncertainty in the weekly case counts encountered in the real world.
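For concreteness, a team taking a mechanism-based route could start from something like the minimal single-season SIR sketch below. The parameter values, population size, severe-case fraction, and Poisson reporting noise are all illustrative assumptions, not the organizers' simulation, which (per the evaluation criteria) may involve vaccination and two interacting strains.

```python
# Minimal single-season SIR sketch with a constant severe-case fraction.
# All parameter values, the severe-case fraction, and the noise model are
# illustrative assumptions, not the organizers' simulation.
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, y, beta, gamma):
    """Standard SIR equations; R0 = beta / gamma."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

beta, gamma = 0.42, 0.24          # per-day rates (illustrative); R0 ~ 1.75
pop = 1_000_000                   # population size (illustrative)
weeks = np.arange(0, 53)          # one flu season, weekly reporting
sol = solve_ivp(sir_rhs, (0, 52 * 7), [1 - 1e-4, 1e-4, 0.0],
                args=(beta, gamma), t_eval=weeks * 7, rtol=1e-8)

# Weekly total cases = drop in the susceptible fraction between report times.
total_cases = -np.diff(sol.y[0]) * pop
severe_frac = 0.02                # assumed hospitalization fraction
severe_cases = severe_frac * total_cases

# Challenge 3 adds reporting noise; Poisson counts are one plausible stand-in.
rng = np.random.default_rng(1)
noisy_severe = rng.poisson(severe_cases)
```

A statistical or AI/ML team could instead fit the weekly curves directly; the choice of approach is left to the participants, as described in the rules below.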
There are no restrictions on the computational model, programming language, or computing platform used.
Mechanism-based, statistical, or artificial intelligence/machine learning (AI/ML) approaches, or any combination of these, are allowed.
Any historical data set can be used.
No hacking into organizers' computers is allowed. 😀
tbd
Competitors' predictions will be evaluated for forecasting accuracy using the following criteria:
Season Total: Prediction of the total number of severe cases for the entire flu season.
Root Mean Square Error (RMSE): Measuring the deviation of predictions from the SIR curve for severe cases throughout the entire flu season (see the sketch after this list).
Early Prediction of the Final State: Assessing how quickly competitors can accurately forecast the final state during the data release cycles.
Parameter Estimation: Evaluating the ability to accurately predict key parameters such as R0, vaccination effectiveness, or duration of cross-protection between the two viral strains.
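As a rough illustration of the first two criteria, the sketch below computes the season-wide RMSE and the error in the season total from hypothetical weekly arrays `predicted` and `observed`. The organizers' actual scoring code is not reproduced here, and the remaining criteria (timeliness and parameter accuracy) are judged separately.

```python
# Sketch of the season-total and RMSE criteria, using hypothetical arrays of
# weekly severe-case counts (same length). The numbers are placeholders.
import numpy as np

predicted = np.array([120., 340., 610., 480., 260., 90.])   # placeholder forecast
observed  = np.array([100., 300., 650., 500., 240., 110.])  # placeholder truth

rmse = np.sqrt(np.mean((predicted - observed) ** 2))
season_total_error = abs(predicted.sum() - observed.sum())

print(f"RMSE over the season: {rmse:.1f} severe cases per week")
print(f"Absolute error in season total: {season_total_error:.0f} severe cases")
```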
Three winning teams will be announced on August 1, 2025.
Members of each winning team will be awarded a certificate and a prize.