Our lab advances machine learning and image processing for challenging applications such as geospatial imaging and biomedicine. Our current work centers on developing techniques for robust analysis of multi-sensor, multi-scale, high-dimensional data. Our lab is supported in part by the following sponsors: NASA, NSF, NIH, DoD, and Amazon AWS.
Below are a few highlights of our recent work:
Active Learning for Large Vision Models
Active Learning with LVMs: We present an end-to-end framework for adapting Large Vision Models (LVMs) to geospatial semantic segmentation tasks. Our acquisition function combines traditional uncertainty measures with model gradients to identify the pixels whose labels will most effectively improve the model's performance (a simplified sketch follows below). The approach outperforms existing methods across a range of labeling budgets when transferring models to previously unseen global regions.
Preprint (accepted as WACV 2026 workshop paper): TBA
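For illustration, here is a minimal sketch of a hybrid acquisition score in PyTorch, assuming a standard segmentation model that outputs per-pixel class logits. The entropy term, the gradient-norm term, and the mixing weight `alpha` are illustrative stand-ins, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def score_pixels(model, image, alpha=0.5):
    """Hypothetical per-pixel acquisition score: blend predictive
    entropy with the norm of its gradient w.r.t. the logits."""
    logits = model(image.unsqueeze(0))        # (1, K, H, W) class logits
    probs = F.softmax(logits, dim=1)

    # Uncertainty term: per-pixel predictive entropy.
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)  # (1, H, W)

    # Gradient term: how strongly each pixel's logits would move under
    # a self-supervised proxy objective (here, the entropy itself).
    grads = torch.autograd.grad(entropy.sum(), logits)[0]    # (1, K, H, W)
    grad_norm = grads.norm(dim=1)                             # (1, H, W)

    # Mix the two signals; alpha is an illustrative weighting.
    return (alpha * entropy + (1.0 - alpha) * grad_norm).squeeze(0).detach()
```

Pixels with the highest scores would then be queued for annotation under the available labeling budget.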
A Sensor Agnostic Domain Generalization Framework for Leveraging Geospatial Foundation Models: Enhancing Semantic Segmentation via Synergistic Pseudo-Labeling and Generative Learning
Going beyond simple fine-tuning: Knowledge transfer in the context of Geospatial Foundation Models. This work addresses domain shifts that arise from variations in sensors and sensing conditions, which can otherwise lead to sub-optimal performance when deploying foundation models; a simplified sketch of the pseudo-labeling component follows below.
Preprint (accepted as CVPR 2025 workshop paper): https://arxiv.org/pdf/2505.01558
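As a rough illustration of the pseudo-labeling half of this framework, the sketch below generates confidence-filtered pseudo-labels on unlabeled target-sensor imagery. The confidence threshold and the ignore-index convention are assumptions for illustration, not the paper's exact recipe, and the generative-learning component is omitted.

```python
import torch
import torch.nn.functional as F

def pseudo_label(model, target_images, threshold=0.9):
    """Generate confidence-filtered pseudo-labels on a batch of
    unlabeled target-domain images. Illustrative sketch only."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(target_images), dim=1)  # (B, K, H, W)
        conf, labels = probs.max(dim=1)                 # each (B, H, W)

    # Keep only high-confidence pixels; mark the rest with an
    # ignore index so they do not contribute to the loss.
    labels[conf < threshold] = 255
    return labels
```

These labels can then supervise fine-tuning on the target domain with a cross-entropy loss configured to ignore index 255.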
Vision Transformers for Multi-Channel Earth Observations
Layer-Optimized Spatial-Spectral Transformers for Hyperspectral Imagery
2025 WACV Workshop paper: Layer-Optimized Spatial-Spectral Masked Autoencoder for Semantic Segmentation of Hyperspectral Imagery
This work investigates design choices for hierarchical spectral vision transformers and how they affect performance on hyperspectral semantic segmentation.
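As context for what "spatial-spectral" means here, the minimal sketch below tokenizes a hyperspectral cube into 3-D patches spanning both the spatial and spectral dimensions, the kind of token sequence such transformers operate on. All patch sizes and dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SpatialSpectralPatchEmbed(nn.Module):
    """Tokenize a hyperspectral cube (B, C, H, W) into 3-D patches
    spanning spatial and spectral dimensions. Illustrative sketch."""
    def __init__(self, spectral_patch=10, spatial_patch=8, embed_dim=256):
        super().__init__()
        # A 3-D convolution whose kernel and stride equal the patch size
        # performs patch extraction and linear projection in one step.
        self.proj = nn.Conv3d(
            1, embed_dim,
            kernel_size=(spectral_patch, spatial_patch, spatial_patch),
            stride=(spectral_patch, spatial_patch, spatial_patch),
        )

    def forward(self, x):                    # x: (B, C, H, W)
        x = x.unsqueeze(1)                   # (B, 1, C, H, W)
        x = self.proj(x)                     # (B, D, C', H', W')
        return x.flatten(2).transpose(1, 2)  # (B, N, D) token sequence

tokens = SpatialSpectralPatchEmbed()(torch.randn(2, 200, 64, 64))
print(tokens.shape)  # torch.Size([2, 1280, 256])
```

Because each token covers a contiguous band group at one spatial location, masking and reconstructing tokens forces the autoencoder to learn both spatial context and spectral structure.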