Room: Loyola – Friday, July 04, 14:00–18:30
Reliable forecasting of the Earth system is essential for mitigating natural disasters and supporting human progress. Traditional numerical models, although powerful, are extremely computationally expensive [1]. Recent advances in artificial intelligence (AI) have shown promise in improving both predictive performance and efficiency [2,3], yet their potential remains underexplored in many Earth system domains. Here we introduce Aurora, a large-scale foundation model trained on more than one million hours of diverse geophysical data. Aurora outperforms operational forecasts in predicting air quality, ocean waves, tropical cyclone tracks and high-resolution weather, all at orders of magnitude lower computational cost. With the ability to be fine-tuned for diverse applications at modest expense, Aurora represents a notable step towards democratizing accurate and efficient Earth system predictions. These results highlight the transformative potential of AI in environmental forecasting and pave the way for broader accessibility to high-quality climate and weather information.
Understanding when and where climate change signals emerge from natural variability is a major challenge. Traditionally, analysts rely on physical intuition to examine specific regions or variables where changes are expected, an approach that may overlook subtle or unexpected signals. We introduce a data-driven framework that uses neural networks to assess climate distinguishability: how well an AI model can differentiate a forced climate (e.g., with elevated CO₂) from a control scenario. High classification accuracy implies a strong, detectable signal; low accuracy suggests changes are weak or uncertain. We further apply explainable AI to identify which regions and variables drive this distinguishability, offering insight into the physical mechanisms of change. We demonstrate this framework on climate model simulations with and without Stratospheric Aerosol Injection (SAI). Results show that SAI decelerates future climatic change, leading to a less novel climate than the no-SAI scenario. This approach provides a general, interpretable method for detecting climate signals and informing policy.
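To make the framework concrete, below is a minimal Python sketch of the classify-then-attribute loop the abstract describes, using synthetic stand-ins for the forced and control ensembles. The field shapes, the small network, and the simple gradient saliency are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of the distinguishability framework: train a classifier to
# separate forced from control climate fields, read test accuracy as the
# strength of the signal, then attribute the decision back to grid cells.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder data: annual-mean maps (lat x lon) from two scenarios. In
# practice these would be ensemble members from forced (e.g. elevated-CO2
# or SAI) and control simulations.
n_per_class, n_lat, n_lon = 200, 32, 64
control = torch.randn(n_per_class, n_lat * n_lon)
forced = torch.randn(n_per_class, n_lat * n_lon) + 0.3  # weak imposed signal
x = torch.cat([control, forced])
y = torch.cat([torch.zeros(n_per_class), torch.ones(n_per_class)])

# Simple classifier: high held-out accuracy => detectable forced signal.
model = nn.Sequential(nn.Linear(n_lat * n_lon, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

perm = torch.randperm(len(y))
train, test = perm[:300], perm[300:]
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x[train]).squeeze(1), y[train])
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = ((model(x[test]).squeeze(1) > 0) == y[test].bool()).float().mean()
print(f"distinguishability (test accuracy): {acc:.2f}")

# Gradient saliency as a stand-in for the XAI step: which grid cells drive
# the forced/control decision.
x_attr = x[test].clone().requires_grad_(True)
model(x_attr).sum().backward()
saliency = x_attr.grad.abs().mean(0).reshape(n_lat, n_lon)  # lat x lon map
```

In this reading, accuracy near 0.5 means the forced climate is indistinguishable from internal variability, while accuracy near 1.0 means the signal has clearly emerged; the saliency map then points to where it emerged.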
Explainability plays a pivotal role in building trust and fostering the adoption of artificial intelligence (AI) in healthcare and beyond, particularly in high-stakes domains such as neuroscience and environmental science, where decisions directly affect patient and societal outcomes. While progress in AI interpretability has been substantial, there remains a lack of clear, domain-specific guidelines for constructing meaningful and contextually relevant explanations.
In this talk, I will explore how explainable AI (XAI) can be effectively integrated into scientific domains. I will outline practical strategies for leveraging interpretability methods to uncover novel patterns in neural data and discuss how these insights can inform the identification of emerging biomarkers. Drawing on recent developments, I will highlight adaptable XAI frameworks that enhance transparency and support data-driven discovery. Lastly, I will demonstrate how these concepts can be extended beyond healthcare to fields such as environmental science.
To validate these concepts, I will present illustrative case studies involving large language models (LLMs) and vision transformers applied to neuroscience. These examples serve as proof of concept, showcasing how explainable AI can not only translate complex model behavior into human-understandable insights, but also support the discovery of novel patterns and potential biomarkers relevant to both clinical and broader research applications.
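As a concrete illustration of the attribution workflow such case studies rely on, the sketch below applies Captum's IntegratedGradients to a toy CNN on placeholder neuroimaging-like input. The model and data are assumptions for illustration, and Integrated Gradients is only one of many XAI methods that could be substituted.

```python
# Hedged sketch: explain which input features drive a model's prediction on
# neuroimaging-like data, using Captum's Integrated Gradients.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class TinyCNN(nn.Module):
    """Stand-in for a trained model on single-channel neural images."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8)
        )
        self.head = nn.Linear(8 * 8 * 8, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyCNN().eval()
scan = torch.randn(1, 1, 64, 64)  # placeholder for a preprocessed scan

# Attribute the predicted class back to the input pixels/voxels.
ig = IntegratedGradients(model)
pred = model(scan).argmax(dim=1)
attr = ig.attribute(scan, target=pred, n_steps=32)

# Candidate "biomarker" regions: locations with largest attribution magnitude.
print(attr.abs().squeeze().topk(5, dim=-1).indices)
```

The same pattern carries over to the talk's broader point: once an attribution map exists, domain experts can check whether the highlighted regions correspond to known anatomy or to genuinely novel, biomarker-worthy patterns.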
As a climate scientist who has been involved in neural modelling since the early 1990s, I will briefly reconstruct the applications of this modelling in the climate field. This history reveals different rationales for its use, from serving as an alternative to classical dynamical models (GCMs) to working in synergy with them. In the first case, neural-network models (and data-driven models more generally) can examine and discover new characteristics of the climate system or, when applied to a problem already studied by other means, can "measure" the robustness of our results. In the second case, they can refine our understanding and yield more local results, as in the downscaling of dynamical-model output with neural networks. Finally, new deep-learning and generative-AI models are beginning to be applied to climate problems and show interesting prospects for future advances in our knowledge.
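As one concrete example of the downscaling use case mentioned above, here is a minimal sketch of learning a coarse-to-fine mapping with a small convolutional network; the resolutions, data, and architecture are illustrative assumptions rather than any specific published configuration.

```python
# Minimal sketch of neural downscaling: learn a mapping from a coarse GCM
# field to a finer local grid. All shapes and data are placeholders.
import torch
import torch.nn as nn

coarse_res, scale = 16, 4  # e.g. a ~2-degree GCM grid refined 4x

model = nn.Sequential(
    nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),  # learned correction to the interpolation
)

# Placeholder training pair: coarse input vs. high-resolution "truth"
# (in practice, reanalysis or regional-model output over the study area).
coarse = torch.randn(8, 1, coarse_res, coarse_res)
fine_truth = torch.randn(8, 1, coarse_res * scale, coarse_res * scale)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coarse), fine_truth)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.3f}")
```

The design choice here reflects the synergy rationale: the network does not replace the dynamical model but learns a correction on top of a simple interpolation of its output.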
Carbon removals are an integral part of achieving net-zero targets and limiting warming to 2°C to avoid dangerous climate change. But what makes a carbon removal high-quality, and how can machine learning (ML) help? This talk will explore practical applications of ML for carbon removals. I will discuss key applications such as satellite-based biomass estimation, fire risk monitoring, and ML-enhanced models for assessing the permanence of carbon removals. I will also highlight emerging uses of ML in voluntary carbon markets (VCMs). The talk will focus on practical challenges of ML implementation for carbon removals, such as model transparency, resolution, and data availability, as well as opportunities for future development.
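To illustrate the satellite-based biomass estimation mentioned in the talk, the sketch below fits a gradient-boosting regressor to synthetic per-pixel features; the features, targets, and model choice are assumptions for illustration, not the speaker's pipeline.

```python
# Hedged sketch of satellite-based biomass estimation: regress above-ground
# biomass on per-pixel satellite features. All data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder features per pixel: e.g. optical bands / NDVI and radar
# backscatter; in practice targets would come from field plots or lidar.
n_pixels = 2000
X = rng.normal(size=(n_pixels, 5))
biomass = 50 + 20 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=5, size=n_pixels)

X_tr, X_te, y_tr, y_te = train_test_split(X, biomass, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"R^2 on held-out pixels: {model.score(X_te, y_te):.2f}")
```

Held-out accuracy of this kind feeds directly into the quality questions the talk raises: a biomass map is only as credible for carbon-credit purposes as its validation against independent ground truth.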