Hannah Kerner, Arizona State University
Meet: https://meet.google.com/niy-gtpk-sro
YouTube Stream: https://youtube.com/live/7UFqkUbRgmI
Join group to receive calendar invite: https://groups.google.com/a/modelingtalks.org/g/talks
Abstract:
Remote sensing foundation models must learn from instruments that differ in physics, spatial scale, coverage, temporal cadence, and information content. These characteristics challenge assumptions underlying mainstream natural image and language models, demanding new architectural and training strategies. In this talk, I will discuss foundation models designed for the unique opportunities and challenges of Earth and Mars remote sensing data. These models adopt different approaches to multimodal pretraining shaped by their distinct data regimes and downstream application objectives. Through this comparison, I will examine how data heterogeneity impacts representation learning approaches and suggest new directions for multimodal foundation models that go beyond natural images and language.
Bio:
Hannah Kerner is an Assistant Professor in the School of Computing and Augmented Intelligence at Arizona State University. Her research focuses on advancing the foundations and applications of machine learning to foster a more sustainable, responsible, and fair future for all. Her lab conducts research in machine learning for remote sensing, algorithmic bias, and machine learning theory. She translates research advances into real-world impact through her roles as the AI/Machine Learning Lead for NASA Harvest and NASA Acres, Center Faculty for the ASU Center for Global Discovery and Conservation Science (GDCS), and Research Director for Taylor Geospatial. She has been recognized with multiple prestigious research awards, including the NSF CAREER award (2025), the Schmidt Sciences AI2050 Early Career Fellowship (2025), and Forbes 30 Under 30 in Science (2021).