Registration 8:00 - 9:00
9:00 - 9:45
Abstract:
The CERN Large Hadron Collider (LHC) hosts some of the most ambitious experiments in fundamental physics, such as ATLAS and CMS, which together produce tens of petabytes of data per year. Extracting rare physics signals from this overwhelming data stream presents immense computational and analytical challenges.
Artificial Intelligence, particularly modern machine learning techniques, is shaping how data analysis is performed in these experiments. From improving event reconstruction to enhancing signal-to-background discrimination, AI is significantly boosting the sensitivity of searches for new phenomena and precision measurements, for instance by deepening our understanding of the Higgs boson.
In this talk, I will present recent developments in the use of AI at the LHC, highlight specific physics applications, and discuss ongoing efforts to ensure robust and semi-interpretable ML models, especially in the presence of domain shift—as models trained on large-scale simulations are ultimately applied to real detector data.
9:45 - 10:30
Abstract: coming soon
10:30 - 11:00
Break 11:00 - 11:30
11:30 - 12:15
Abstract: coming soon
12:15 - 13:00
Abstract:
Geospatial Foundation Models (GFMs) are cutting-edge tools for analyzing Earth observation data. However, current evaluations often fall short, focusing on simplistic tasks and datasets that don’t reflect real-world complexities. These evaluations also lack diversity in image resolutions, sensor types, and global coverage. To tackle these challenges, we introduce PANGAEA, a groundbreaking evaluation protocol that spans various datasets, tasks, and sensor types. PANGAEA sets a standardized benchmark for GFMs, openly comparing them with traditional supervised models like UNet and vanilla ViT. Our findings reveal that GFMs don’t consistently outperform supervised models under diverse conditions, highlighting their current limitations and pointing to directions for improving them.
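As a rough, hypothetical illustration of what a standardized evaluation protocol involves (the names, the metric choice, and the toy data below are our own assumptions, not taken from PANGAEA itself), this sketch runs every model on every dataset through one shared interface and metric:

import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union, a common segmentation metric."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

def evaluate(models, datasets):
    """Score every model on every dataset under one shared metric."""
    for dname, (images, targets, num_classes) in datasets.items():
        for mname, predict in models.items():
            preds = predict(images)  # every model exposes the same interface
            score = mean_iou(preds, targets, num_classes)
            print(f"{dname} | {mname} | mIoU = {score:.3f}")

# Toy usage with random labels and a trivial constant "model".
rng = np.random.default_rng(0)
datasets = {"toy_scene": (None, rng.integers(0, 3, size=(4, 32, 32)), 3)}
models = {"constant": lambda imgs: np.zeros((4, 32, 32), dtype=int)}
evaluate(models, datasets)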
Lunch Break 13:00 - 14:00
14:30 - 15:15
Abstract:
Imagine we have a model that predicts whether or not a CT scan contains a tumour: traditional approaches tend to provide binary predictions without any information on the model’s confidence in each prediction. Conformal Prediction (CP) is a framework for uncertainty quantification that offers an estimate of the confidence in the model’s predictions: instead of providing just a point estimate, it provides a set of possible outcomes (a prediction set), together with a measure of confidence in each outcome. These prediction sets come with a (mathematical!) guarantee of coverage of the true outcome, ensuring that they capture the true value at least a pre-specified percentage of the time. CP is a model-agnostic paradigm, requiring no retraining of the model and making no major assumptions about the distribution of the data. We humans, when faced with uncertainty, tend to express indecision and offer alternatives. We will see that CP can be a key tool for including a human in the decision-making loop, once the ‘humanised’ machine is able to express its uncertainty. CP therefore offers a robust framework that allows stakeholders to make more informed decisions, especially in high-risk sectors such as healthcare, finance and autonomous systems.
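To make the mechanics concrete, here is a minimal sketch of split conformal prediction for classification, under our own illustrative assumptions (the function name, the particular nonconformity score, and the toy data are not from the talk): a held-out calibration set yields a score threshold that turns any classifier’s probabilities into prediction sets with the promised coverage level.

import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction: build sets with >= (1 - alpha) coverage.

    cal_probs  : (n, K) class probabilities on a held-out calibration set
                 (the underlying model is never retrained).
    cal_labels : (n,) true labels for the calibration set.
    test_probs : (m, K) class probabilities for new inputs.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the (n + 1) finite-sample correction.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # Prediction set: every class whose score falls below the threshold.
    return [np.flatnonzero(1.0 - p <= qhat) for p in test_probs]

# Toy usage: larger sets signal lower confidence in any single outcome,
# the machine "expressing indecision and offering alternatives".
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=500)
cal_labels = np.array([rng.choice(3, p=p) for p in cal_probs])
test_probs = rng.dirichlet(np.ones(3), size=5)
print(conformal_prediction_sets(cal_probs, cal_labels, test_probs))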
14:45 - 15:15
Rapid Estimation of Earthquake Location and Magnitude Using a Large Language Model
Towards Operational Earthquake and Marsquake Data Denoising
15:15 - 17:00
17:00 - 18:00