Part 1
9:30 - 10:15
[Miriam Cobo Cano, Lara Lloret Iglesias]
Effectively evaluating AI models requires careful attention to data quality, feature selection, and performance assessment. This session explores common pitfalls in both data preprocessing and model evaluation, covering techniques to ensure data integrity, robust validation, and meaningful performance metrics across different learning paradigms.
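As a concrete illustration of robust validation with multiple metrics, a minimal sketch is given below; it assumes scikit-learn and a hypothetical toy binary-classification dataset, and is not material from the session itself.

```python
# Minimal sketch: multi-metric evaluation with cross-validation.
# Assumes scikit-learn; the dataset and model are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced toy data, where accuracy alone would be misleading.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)

# Keep preprocessing inside the pipeline so it is fitted on the training folds
# only, avoiding leakage into the validation folds.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

scores = cross_validate(model, X, y, cv=5,
                        scoring=["accuracy", "balanced_accuracy", "roc_auc", "f1"])
for metric in ["accuracy", "balanced_accuracy", "roc_auc", "f1"]:
    values = scores[f"test_{metric}"]
    print(f"{metric}: {values.mean():.3f} +/- {values.std():.3f}")
```

Reporting several metrics side by side, rather than a single score, is what makes the assessment meaningful for imbalanced or otherwise non-trivial datasets.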
10:15 - 11:00
[Miriam Cobo Cano, Lara Lloret Iglesias]
Participants will engage in practical exercises and experiment with various methods for evaluating deep learning models. This includes:
Exploring data preprocessing methods for quality control.
Understanding how feature selection impacts model outcomes (see the sketch after this list).
Analyzing model performance through diverse assessment methods with multiple metrics.
Interpreting results to refine model evaluation strategies.
Applying evaluation principles in real-world scenarios.
Using continuous assessment to guide model improvements and decision-making.
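The sketch below illustrates a basic data-quality check and a comparison of model performance with and without feature selection; it assumes scikit-learn and pandas, and the DataFrame `df` with a binary "target" column is a hypothetical placeholder.

```python
# Minimal sketch: quality control and the impact of feature selection.
# Assumes scikit-learn and pandas; `df` and its "target" column are hypothetical.
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def quality_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise missing values, duplicated rows and constant columns."""
    return pd.DataFrame({
        "missing_frac": df.isna().mean(),
        "n_unique": df.nunique(),
        "is_constant": df.nunique() <= 1,
    }).assign(n_duplicated_rows=df.duplicated().sum())

def compare_feature_selection(df: pd.DataFrame, k: int = 10) -> None:
    """Compare cross-validated ROC AUC with all features vs. the top-k features."""
    X = df.drop(columns=["target"])
    X = X.fillna(X.median(numeric_only=True))
    y = df["target"]
    candidates = {
        "all features": make_pipeline(StandardScaler(),
                                      LogisticRegression(max_iter=1000)),
        f"top-{k} features": make_pipeline(StandardScaler(),
                                           SelectKBest(mutual_info_classif, k=k),
                                           LogisticRegression(max_iter=1000)),
    }
    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name}: ROC AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```

Keeping the selector inside the cross-validated pipeline ensures the features are chosen from the training folds only, so the comparison reflects the selection's true effect on generalisation.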
Coffee break 11:00 - 11:30
Part 2
12:00 - 12:30
[Pietro Vischia]
This session delves into uncertainty quantification and conformal prediction. In addition, it explores the effects of regularization and out-of-distribution extrapolation.
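To make the idea concrete, here is a minimal sketch of split conformal prediction for regression; it assumes scikit-learn, NumPy, a toy dataset, and a 90% target coverage, and is not the session material itself.

```python
# Minimal sketch: split conformal prediction for regression.
# Assumes scikit-learn and NumPy; data and target coverage are illustrative.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)

# Proper three-way split: fit, calibration, and held-out test data.
X_fit, X_rest, y_fit, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5,
                                                random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_fit, y_fit)

# Calibration residuals give a quantile that defines prediction intervals
# with (approximately) 90% coverage on exchangeable data.
alpha = 0.1
residuals = np.abs(y_cal - model.predict(X_cal))
n_cal = len(residuals)
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q_hat = np.quantile(residuals, q_level, method="higher")

preds = model.predict(X_test)
lower, upper = preds - q_hat, preds + q_hat
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"Empirical coverage: {coverage:.2f} (target {1 - alpha:.2f})")
```

The guarantee holds only when the calibration and test data are exchangeable, which is exactly why out-of-distribution extrapolation and careless data splits undermine it.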
12:30 - 13:00
[Pietro Vischia]
Participants will practice uncertainty quantification techniques and regularization methods, including:
Understanding the effects of data distribution and external factors on model robustness, and how to estimate the uncertainties derived from these factors.
Understanding the effects of improperly splitting the available data into training/test/application sets, and how to obtain robust estimates using conformal prediction.
Learning how to dimension a model correctly while avoiding the introduction of unnecessary biases via regularization (see the sketch after this list).
Developing expertise in introducing prior knowledge into a model in the form of inductive bias.
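For the regularization exercise referenced above, the sketch below shows how the strength of an L2 (ridge) penalty changes the cross-validated error of a deliberately over-parameterised polynomial model; the data, polynomial degree, and alpha values are illustrative assumptions.

```python
# Minimal sketch: the effect of L2 regularization strength on an
# over-parameterised model. Data and hyperparameters are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=200)

# A degree-15 polynomial has far more capacity than the underlying sine curve;
# the ridge penalty alpha controls how strongly that extra capacity is damped.
for alpha in [1e-6, 1e-2, 1.0, 100.0]:
    model = make_pipeline(PolynomialFeatures(degree=15, include_bias=False),
                          StandardScaler(),
                          Ridge(alpha=alpha))
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_mean_squared_error")
    print(f"alpha={alpha:g}: CV MSE = {-scores.mean():.3f}")
```

Too little regularization lets the model chase noise, while too much suppresses genuine structure; choosing alpha on validation data, not the test set, keeps the final estimate unbiased.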