AAAI 2025 Bridge on
Knowledge-guided ML
Bridging Scientific Knowledge and AI
(KGML-Bridge-AAAI-25)
Held as part of the Bridge Program at AAAI 2025
February 25 to 26, 2025
Pennsylvania Convention Center (Room: 121B) | Philadelphia, PA, USA
Scientific knowledge-guided machine learning (KGML) is an emerging field of research in which scientific knowledge is deeply integrated into ML frameworks to produce solutions that are scientifically grounded, explainable, and likely to generalize to out-of-distribution samples even with limited training data. By using scientific knowledge and data as complementary sources of information in the design, training, and evaluation of ML models, KGML marks a distinct departure from black-box, data-only methods and holds great potential for accelerating scientific discovery across a number of disciplines.
The goal of this bridge is to nurture the community of researchers working at the intersection of ML and scientific areas and shape the vision of the rapidly growing field of KGML. This bridge is a continuation of the KGML 2024 Bridge organized at AAAI 2024, the KGML 2024 Workshop organized at the University of Minnesota, the AAAI Fall Symposium Series organized in 2020, 2021, and 2022, and two previous NSF-funded workshops (KGML2020 and KGML2021). See the KGML book and a recent perspective article for coverage of topics in KGML.
Day 1: Feb 25
9.45 am to 10.15 am
Invited Talk by: Jigyasa Nigam
Title: Unpacking the synergy of physics and data: machine learning for atomistic systems
Abstract: Machine learning (ML)-driven computational modeling of molecules and materials has become a cornerstone of scientific inquiry, particularly in the atomic-scale search for compounds with distinct properties. Unlike many other domains, ML in this context benefits from a wealth of physical laws that govern the relationships between inputs and outputs. For example, atomic configurations, naturally represented by a set of position coordinates in 3D space, transform reliably under rotations, translations, and inversions. Most recent ML approaches, which act as surrogate models of macroscopic properties such as the potential energy surface or dipole moments, leverage domain knowledge by incorporating techniques that reflect these geometric symmetries between atomic structures and their corresponding target properties. By embedding physical priors in intricate end-to-end architectures or designing symmetry-adapted input structural descriptors, ML has drastically accelerated the prediction, simulation, design, and characterization of diverse material systems from input geometries. More recently, there has been growing interest in modeling intermediate quantum mechanical (QM) components, such as electron densities and effective single-particle Hamiltonians, which underlie these structure-property relationships. With the emergence of these approaches, it has become possible to simultaneously obtain multiple output properties through established relationships or physics-based operations on these intermediate ML predictions. Given the vast design space of ML, we face a crucial question: should ML be applied directly to predict target properties, bypassing the need for QM calculations, or is its potential better realized by integrating it within a workflow that emulates QM calculations? In this talk, I will present strategies employed by several ML frameworks to incorporate geometric symmetries and physics-based constraints. I will highlight how the integration of fundamental physical principles with data-driven methods impacts accuracy and extends the modeling capabilities to complex targets, including the self-consistent QM Hamiltonians.
Bio: Jigyasa Nigam is a postdoc at MIT, supported by the Postdoctoral Fellowship for Excellence in Engineering. In 2024, she completed her PhD in Physics at EPFL under the mentorship of Prof. Michele Ceriotti. Her research centers on modeling molecular and material properties and the quantum mechanical workflows that underlie structure-property relationships. In particular, Jigyasa’s work emphasizes the integration of physical symmetries and constraints into machine learning models, enhancing their interpretability and reliability. Currently, she is working with Prof. Tess Smidt on investigating the robustness of equivariant models in the presence of approximate symmetries and phenomena driven by broken symmetries.
10.15 am to 10.30 am
10.30 am to 11.00 am
11.00 am to 11.30 am
Invited Talk by: Nat Trask
Title: Structure preserving digital twins via conditional neural Whitney forms
Abstract: Motivated by the ever-increasing success of machine learning in language and vision models, many aim to build AI-driven tools for scientific simulation and discovery. However, contemporary techniques drastically lag behind their comparatively mature counterparts in modeling and simulation, lacking rigorous notions of convergence, physical realizability, uncertainty quantification, and the verification and validation that underpin prediction in high-consequence engineering settings. One reason for this is the use of "off-the-shelf" ML architectures designed for language/vision without specialization to scientific computing tasks. In this work, we establish connections between graph neural networks and the finite element exterior calculus (FEEC). FEEC forms the backbone of modern mixed finite element methods, tying the discrete topology of geometric descriptions of space (cells, faces, edges, nodes, and their connectivity) to the algebraic structure of conservation laws (the div/grad/curl theorems of vector calculus). By building a differentiable learning architecture mirroring the construction of Whitney forms, we are able to learn models combining the robustness and UQ of traditional FEM with the drastic speedups and data assimilation capabilities of ML. We present an architecture we have recently developed which admits conditional generative modeling, allowing one to sample from the space of finite element models consistent with given observational data in near real time.
Bio: Dr. Trask is an associate professor in Mechanical Engineering and Applied Mechanics at the University of Pennsylvania, holds joint listings in Material Science and Applied Mathematics and Computational Science, and maintains a joint faculty appointment with Sandia National Laboratories. Prior to coming to UPenn in 2023, he was Principal Member of Technical Staff at Sandia for 8 years. He obtained his PhD from the Division of Applied Mathematics in 2016, working with Dr. Martin Maxey. Dr. Trask is the recipient of the Department of Energy Early Career Award and the NSF MSPRF award, and is deputy director of the multi-institutional DOE MMICCs center SEA-CROGS, developing ML-enabled digital twins for earth and embedded systems. His research spans a broad range of multiphysics and multiscale problems, including: fusion power, shock physics, weather and climate systems, semiconductor physics, and energy storage.
11.30 am to 12.00 pm
Invited Talk by: Syrine Belakaria [Slides]
Title: Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes
Abstract: Many real-world design and analysis challenges call for identifying which input variables most strongly influence key outputs, but such evaluations often require expensive experiments. For example, in vehicle safety experimentation, we investigate how the thickness of various components affects critical safety objectives. The fundamental challenge is to allocate limited experimental resources effectively while maintaining accurate sensitivity insights. In this talk, I will describe our novel framework for active learning for global sensitivity analysis of expensive black-box functions. Specifically, we focus on derivative-based global sensitivity measures (DGSMs) under Gaussian process surrogate models. I will explain how we derive new acquisition functions, based on uncertainty reduction and information gain, that directly target DGSMs to maximize the value of each experiment. I will also showcase the sample-efficiency benefits of our approach on a variety of real-world engineering problems. Overall, this work paves the way for more efficient and accurate sensitivity analyses in science and engineering applications where each experiment carries a high resource cost.
Bio: Syrine Belakaria is a Data Science Postdoctoral fellow in Computer Science at Stanford University working with Professor Stefano Ermon and Professor Barbara Engelhardt. She obtained her PhD in Computer Science from Washington State University, where she was advised by Professor Jana Doppa; an MS in Electrical Engineering from the University of Idaho; and an Engineering degree in Information Technology from the Higher School of Communication of Tunis, Tunisia. She is the recipient of the IBM PhD Fellowship (2021-2023), was selected for the MIT Rising Stars in EECS (2021), and received the WSU Harriet Rigas Outstanding Woman in Doctoral Studies Award (2023). She has spent time as a research intern at Microsoft Research and Meta Research. Her general research interests lie in the broad area of AI for science and engineering, with a current focus on adaptive experiment design and active learning. She aims to enhance both the quality and efficiency of generative model alignment and to accelerate the discovery of novel drugs, by developing practical, resource-efficient methods that bridge fundamental algorithms with real-world scientific and engineering challenges.
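As background for the talk above: the derivative-based global sensitivity measure (DGSM) of input i is commonly defined as nu_i = E[(df/dx_i)^2] over the input distribution. The following is a minimal, self-contained Monte Carlo illustration on a toy function. The function and sampling are illustrative assumptions only; the speaker's framework instead estimates these quantities with Gaussian process surrogates and active learning to limit expensive evaluations.

```python
import numpy as np

# Toy DGSM estimate: nu_i = E[(df/dx_i)^2] over uniform inputs, via Monte
# Carlo sampling and central finite-difference gradients.

def f(x):
    # x has shape (n, 3); x0 dominates, x1 matters some, x2 barely matters
    return 3.0 * x[:, 0] + np.sin(2.0 * x[:, 1]) + 0.01 * x[:, 2] ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20000, 3))
h = 1e-5

dgsm = []
for i in range(3):
    Xp, Xm = X.copy(), X.copy()
    Xp[:, i] += h
    Xm[:, i] -= h
    grad_i = (f(Xp) - f(Xm)) / (2.0 * h)  # central difference along x_i
    dgsm.append(np.mean(grad_i ** 2))

print(np.argsort(dgsm))  # inputs ordered least to most influential: x2, x1, x0
```

In practice each call to f is an expensive experiment, which is exactly why the talk replaces this brute-force sampling with a surrogate model and carefully chosen acquisition functions.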
12.00 pm to 12.30 pm
Invited Talk by: Olivera Kotevska [Slides]
Title: Enhancing Privacy and Communication Efficiency in Federated Learning: Dynamic Sketching and Shuffling Mechanisms
Abstract: Federated Learning (FL) has transformed distributed machine learning by enabling collaborative model training without requiring raw data exchange. However, achieving both communication efficiency and strong privacy guarantees remains a key challenge. This presentation explores advanced techniques for privacy-preserving and communication-efficient FL, introducing a dynamic sketching mechanism optimized through Bayesian optimization. This approach significantly reduces communication overhead, achieving compression ratios of up to 62x while maintaining privacy guarantees. Additionally, we will discuss a shuffling mechanism as an extra layer of privacy protection, further strengthening differential privacy in FL settings. Our findings demonstrate the robustness and adaptability of these techniques, making them valuable solutions for scalable, secure, and efficient federated learning in real-world applications.
Bio: Dr. Olivera Kotevska is a Research Scientist in the Computer Science and Mathematics Division (CSMD) at Oak Ridge National Laboratory (ORNL), Tennessee, USA. Her research focuses on trustworthy AI and deep learning, with a particular interest in the intersection of privacy preservation, control, and efficiency of deep learning for scientific applications. Prior to joining ORNL in 2019, Dr. Kotevska was an International Guest Researcher at the National Institute of Standards and Technology (NIST), Maryland, USA. Before pursuing her Ph.D., she worked in software development for telecommunications companies. She has taken on various leadership and organizational roles, including chairing IEEE subcommittees, serving on the editorial boards of ACM Transactions on Internet Technology and Sensors' IoT Data Analytics journals, and organizing numerous workshops and programs. She is also the founder and chair of the IEEE Women in Engineering East Tennessee Affinity Group. Dr. Kotevska has been recognized with IEEE Senior Membership and has received ORNL CSMD Outstanding Mentorship and Outreach Awards for her contributions to research and professional development.
12.30 pm to 2.00 pm
2.00 pm to 2.45 pm
Keynote Talk by: Chandan Reddy [Slides]
Title: Scientific Equation Discovery via Programming with Large Language Models
Abstract: Equation discovery is a crucial aspect of computational scientific discovery, traditionally approached through symbolic regression (SR) methods that focus mainly on data-driven equation search. Current approaches often struggle to fully leverage the rich domain-specific knowledge that scientists typically rely on. We present LLM-SR, an iterative approach that combines the power of large language models (LLMs) with evolutionary program search and data-driven optimization to discover scientific equations more effectively and efficiently while incorporating scientific prior knowledge. LLM-SR integrates several key aspects of the discovery process, namely, scientific knowledge representation and reasoning (via LLMs’ prompting and prior knowledge), hypothesis generation (equation skeleton proposals produced by LLMs), data-driven evaluation and optimization, and evolutionary search for iterative refinement. Through this integration, our approach discovers interpretable and physically meaningful equations while ensuring efficient exploration of the equation search space and generalization to out-of-domain data. We will demonstrate LLM-SR’s effectiveness across various scientific domains: nonlinear oscillators, bacterial growth, and material stress behavior. This work not only improves the accuracy and interpretability of discovered equations but also enhances the efficiency of the search process by leveraging scientific prior knowledge.
Bio: Chandan Reddy is a Professor in the Department of Computer Science at Virginia Tech. He received his Ph.D. from Cornell University and his M.S. from Michigan State University. His primary research interests include Machine Learning and Natural Language Processing, with applications in Healthcare, Software, E-commerce, and Human Resource Management. Dr. Reddy's research has been funded by organizations such as the NSF, NIH, DOE, DOT, and various industries. He has authored over 190 peer-reviewed articles in leading conferences and journals. He has received several awards for his research work, including the Best Application Paper Award at the ACM SIGKDD conference in 2010, the Best Poster Award at the IEEE VAST conference in 2014, and the Best Student Paper Award at the IEEE ICDM conference in 2016. He was also a finalist in the INFORMS Franz Edelman Award Competition in 2011. Dr. Reddy serves (or has served) on the editorial boards of journals such as ACM TKDD, ACM TIST, NPJ AI, and IEEE Big Data. He is a Senior Member of the IEEE and a Distinguished Member of the ACM. More information about his work is available at https://creddy.net.
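The inner loop described in the abstract above alternates between LLM-proposed equation skeletons and data-driven parameter optimization. As a hedged sketch of the fitting-and-selection step, the example below hard-codes three linear-in-parameters skeletons standing in for LLM proposals; the skeletons, data, and fitting procedure are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of skeleton fitting in an LLM-SR-style loop: each candidate
# skeleton is a parametric form; fit its free parameters to data by least
# squares and keep the skeleton with the lowest error.

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 3.0, size=200)
y = 2.0 * x + 0.5 * np.sin(3.0 * x)  # "unknown" generating process

# Each skeleton maps x to a feature matrix; parameters enter linearly here
# purely to keep the fitting step a one-line least-squares solve.
skeletons = {
    "a*x + b":           lambda x: np.stack([x, np.ones_like(x)], axis=1),
    "a*x + b*sin(3x)":   lambda x: np.stack([x, np.sin(3.0 * x)], axis=1),
    "a*x**2 + b*log(x)": lambda x: np.stack([x**2, np.log(x)], axis=1),
}

def fit_error(features):
    A = features(x)
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean((A @ params - y) ** 2)

best = min(skeletons, key=lambda name: fit_error(skeletons[name]))
print(best)  # the sinusoidal skeleton matches the generating process
```

In LLM-SR the candidate set is not fixed: the LLM proposes new skeletons conditioned on prior knowledge and on the scores of earlier candidates, and evolutionary search refines the population over iterations.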
2.45 pm to 3.15 pm
Invited Talk by: Alina Peluso
Title: Enhancing Veterans’ Health through Clinical Knowledge Integration and Advanced Modeling
Abstract:
Bio: Alina Peluso is a research scientist in Biostatistics in the Advanced Computing for Health Sciences Section, part of the Computational Sciences and Engineering Division at Oak Ridge National Laboratory (ORNL). She received her B.S. and M.S. degrees in Statistics from the University of Milan-Bicocca (Italy) and a Ph.D. in Statistics from Brunel University London (UK). Her Ph.D. work advances the methodology and application of regression models with discrete response, including approaches to model a binary response in a health policy evaluation framework, as well as flexible discrete Weibull-based regression models (zero-inflated, generalized linear mixed, and generalized additive models) for count response variables, leading to applications in many fields. Prior to joining ORNL, she worked as a lecturer in Statistics at Brunel University London (UK) and as a postdoctoral research associate in the School of Medicine at Imperial College London (UK) and at the Francis Crick Institute (UK), where she applied machine learning and statistical modeling to the analysis of omics data to enhance biomedical discoveries and to predict pathway dynamics for precision medicine. Her current research interests include causal inference in longitudinal data, regression models for count data, environmental and disease epidemiology, computational methods for statistical genomics and bioinformatics, Bayesian learning, and spatio-temporal modeling.
3.15 pm to 3.30 pm
3.30 pm to 4.00 pm
4.00 pm to 5.00 pm
5.00 pm to 5.30 pm
Poster Session 1
Poster Titles:
Building a wildlife Knowledge Graph
Evaluating Hybrid Modeling Methods
FREE: The Foundational Semantic Recognition for Modeling Environmental Ecosystems
HyPER: Knowledge Guided Correction for Improved Neural Surrogate Rollout
Improving Prediction Performance In Physics-Informed Machine Learning With Pre-Training and Adaptation
Tensor Completion for Surrogate Modeling of Material Property Prediction
Day 2: Feb 26
9.00 am to 9.45 am
Invited Talk by: Paris Perdikaris
Abstract: Reliable forecasts of the Earth system are crucial for human progress and safety from natural disasters. Artificial intelligence offers substantial potential to improve prediction accuracy and computational efficiency in this field; however, this potential remains underexplored in many domains. Here we introduce Aurora, a large-scale foundation model for the Earth system trained on over a million hours of diverse data. Aurora outperforms operational forecasts for air quality, ocean waves, tropical cyclone tracks, and high-resolution weather forecasting at orders of magnitude lower computational cost than existing dedicated systems. With the ability to fine-tune Aurora to diverse application domains at only modest computational cost, Aurora represents significant progress in making actionable Earth system predictions accessible to anyone.
Bio: Paris Perdikaris is an Associate Professor of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. He received his Ph.D. in Applied Mathematics from Brown University (2015), and worked as a postdoctoral researcher at the Massachusetts Institute of Technology (2015-2017). His research interests span a range of topics at the interface of computational science and machine learning, including the development of foundation models for Earth system modeling, physics-informed neural networks and neural operators, generative models, and uncertainty quantification for sequential decision making in scientific and engineering applications.
9.45 am to 10.15 am
Invited Talk by: Licheng Liu [Slides]
Title: Bridging Agroecosystem Science and AI: Advances in Knowledge-Guided Machine Learning
Abstract: Agricultural production drives nearly a quarter of global greenhouse gas (GHG) emissions, highlighting a critical need for next-generation agroecosystem models that enhance both predictive accuracy and mechanistic understanding. Traditional process-based models, despite their strong theoretical underpinnings, can be limited by computational complexity and parameter uncertainty. Meanwhile, purely data-driven methods leverage large datasets effectively but often lack interpretability—making it challenging to incorporate domain-specific knowledge and fully capture complex biophysical processes.
Knowledge-Guided Machine Learning (KGML) bridges these gaps by integrating scientific knowledge (e.g., physiological and biochemical principles) with modern machine learning algorithms, creating hybrid models that deliver both accuracy and explainability. In this presentation, we detail how KGML-based approaches advance GHG emission estimates in agroecosystems, improve crop and soil process simulations, and empower data-driven decision-making. By showcasing real-world applications, we illustrate how KGML addresses uncertainties in carbon-nitrogen-water fluxes while preserving mechanistic insights essential for sustainable agriculture. Ultimately, next-generation agroecosystem modeling grounded in KGML offers a powerful framework for tackling climate change, advancing AI-driven discoveries, and guiding climate-resilient agricultural practices.
Bio: Dr. Licheng Liu is a senior research scientist in the Department of Bioproducts and Biosystems Engineering at the University of Minnesota, where he also leads the Knowledge-Guided Machine Learning (KGML) division in the AI-CLIMATE Institute. His research centers on understanding greenhouse gas (GHG) sources and sinks in agricultural and natural ecosystems—explaining their roles in climate change and providing actionable insights for mitigation. To achieve this, Dr. Liu integrates advanced analytical tools such as process-based models and KGML, multi-source data from modern sensing techniques, and AI-accelerated optimization algorithms in decision-making.
10.30 am to 11.00 am
11.00 am to 11.30 am
Invited Talk by: Alison Appling [Slides]
Title: Knowledge-guided evolution of a multi-scale, past-and-future river temperature prediction capability
Abstract: Building on several years of collaboration with academic colleagues in the knowledge-guided machine learning community, the U.S. Geological Survey is developing a national deep-learning-based model for daily river temperatures and their uncertainties covering the period 1980-2060. This model can support national water resources assessments and provide equitable access to water temperature predictions for citizens across the contiguous U.S. (865,000 km total stream length), but the current version has a spatial resolution that is too coarse (median discrete reach length 13.7 km) to support local management of fish and other natural resources. Therefore, we are also collaborating on multi-scale methods that transfer knowledge from the coarse national model to local watersheds at higher spatial resolution (initially for median reach length 1.6 km). Known gaps remain in these models’ ability to capture thermal effects of processes such as groundwater discharge and reservoir management, which we have begun to address with a variety of data-driven, knowledge-guided, and model-coupling enhancements. With such additions, we are evolving from a basic machine learning prediction capability toward the ability to meaningfully support both national and local water resource management.
Bio: Dr. Alison Appling is a water data scientist with the U.S. Geological Survey (USGS). She uses a combination of statistical, theoretical, and knowledge-guided machine learning methods to model water quality in rivers and lakes. She is especially passionate about developing new, improved, and integrated modeling methods to understand patterns and drivers of water quality. She also manages multi-method water prediction projects at USGS, including data-driven, process-based, and integrated approaches that span the water cycle.
11.30 am to 12.00 pm
Invited Talk by: B. Aditya Prakash
Abstract:
Bio: B. Aditya Prakash is an Associate Professor in the College of Computing at the Georgia Institute of Technology (“Georgia Tech”). He received a Ph.D. from the Computer Science Department at Carnegie Mellon University in 2012, and a B.Tech (in CS) from the Indian Institute of Technology (IIT) -- Bombay in 2007. He has published one book, more than 95 papers in major venues, holds two U.S. patents and has given several tutorials at leading conferences. His work has also received multiple best-of-conference, best paper and travel awards. His research interests include Data Science, Machine Learning and AI, with emphasis on big-data problems in large real-world networks and time-series, with applications to computational epidemiology/public health, urban computing, security and the Web. Tools developed by his group have been in use in many places including ORNL and Walmart. He has received several awards such as Facebook Faculty Awards (2015 and 2021), the NSF CAREER award and was named as one of ‘AI Ten to Watch’ by IEEE. His work has also won awards in multiple data science challenges (e.g., the Catalyst COVID-19 Symptom Challenge) and been highlighted by several media outlets/popular press such as FiveThirtyEight.com. He is also a member of the infectious diseases modeling MIDAS network and core faculty at the Center for Machine Learning (ML@GT) and the Institute for Data Engineering and Science (IDEaS) at Georgia Tech.
12.00 pm to 12.30 pm
Invited Talk by: Alexander Rodriguez
Title: Bridging AI and Scientific Models in Epidemiology: Harnessing the Best of Both Worlds
Abstract: With the increasing availability of real-time multimodal data, a new opportunity has emerged for capturing previously unobservable facets of the spatiotemporal dynamics of epidemics. Epidemic forecasting is a crucial tool for public health decision making and planning. However, our comprehension of how epidemics spread remains limited, primarily due to the intricate interplay of various dynamics, particularly social and pathogen-related complexities. In this talk, I will present our research at the intersection of time series, scientific machine learning, and multi-agent systems. Our work focuses on integrating data, representation learning, and theoretical knowledge from mechanistic epidemiological models to enhance capabilities across multiple downstream tasks in public health.
Bio: Alexander Rodríguez is an Assistant Professor of Computer Science at the University of Michigan, Ann Arbor. His research spans the intersection of machine learning, time series, and scientific modeling, with a focus on applications in public health and community resilience. His work has garnered recognition through publications at premier AI conferences and multiple awards, including the 2024 ACM SIGKDD Dissertation Award Runner Up, the 2024 Outstanding Dissertation Award from the College of Computing at Georgia Tech, and a best paper award. His homepage is alrodri.engin.umich.edu.
12.30 pm to 2.00 pm
2.00 pm to 2.30 pm
Invited Talk by: Steve Waiching Sun
Title: Discovering high-precision mathematical models from data with projected neural additive method
Abstract:
Bio:
2.30 pm to 3.00 pm
Tutorial by: Avik Pal
Abstract: Physics Informed Machine Learning (PIML) represents an approach to problem-solving in scientific and engineering domains by integrating fundamental physics principles and constraints with data-driven machine learning approaches. This synergy not only enhances predictive accuracy and model interpretability but also ensures that solutions remain grounded in the underlying physics, even when extrapolating to out-of-distribution scenarios. Within the Julia programming ecosystem, Reactant and Lux offer robust tools that leverage advanced compiler optimizations to accelerate both machine learning workflows and scientific computing tasks. This tutorial will delve into the practical application of these tools for constructing Physics-Informed Machine Learning models, demonstrating their efficacy in solving complex problems, such as modeling hypersonic flows, while adhering to the underlying physical laws. We aim to empower researchers at the intersection of machine learning and scientific domains. Attendees will gain insights into how tools like Reactant and Lux can advance the field of Knowledge-Guided Machine Learning, fostering solutions that are not only data-driven but also scientifically interpretable and generalizable.
Bio: Avik Pal is a Ph.D. Candidate in Electrical Engineering and Computer Science at MIT, working in the Julia Lab under the supervision of Dr. Alan Edelman and Dr. Christopher Rackauckas. Previously, he obtained his S.M. from MIT and a B.Tech. in Computer Science from the Indian Institute of Technology Kanpur. His research on scientific machine learning entails developing efficient algorithms for physics-informed machine learning with a focus on efficient numerical solvers and compiler optimizations. He is a core contributor to several open-source software frameworks, including NonlinearSolve.jl, a high-performance nonlinear solver suite; Reactant & Enzyme-JAX, a tensor compiler for scaling scientific computations; and Lux.jl, a neural network framework for scientific machine learning. His software has thousands of monthly downloads and is widely used in the research community. Beyond academia, he has also worked at Google AI on differentiable wildfire simulators and at Intel Labs on parameter-efficient deep learning.
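The tutorial above builds physics-informed models with Julia's Lux and Reactant. As a language-agnostic sketch of the core PIML idea, jointly penalizing a physics residual and data misfit, here is a minimal NumPy example; the ODE u'' + u = 0, the polynomial model, and the weighting are illustrative assumptions, not taken from the tutorial.

```python
import numpy as np

# "Physics-informed" least squares: find a degree-7 polynomial u(x) that
# approximately satisfies u'' + u = 0 with u(0) = 0, u'(0) = 1 (exact
# solution: sin x), by minimizing physics residual + data misfit jointly.

xs = np.linspace(0.0, np.pi, 50)
deg = 7

# Values and second derivatives of the monomial basis x^k at the points xs
P  = np.stack([xs**k for k in range(deg + 1)], axis=1)
P2 = np.stack([k * (k - 1) * xs**(k - 2) if k >= 2 else np.zeros_like(xs)
               for k in range(deg + 1)], axis=1)

# Physics rows: (P2 + P) c ≈ 0 at collocation points; data rows: the ICs
A_data = np.zeros((2, deg + 1))
A_data[0, 0] = 1.0   # u(0)  = 0
A_data[1, 1] = 1.0   # u'(0) = 1
b_data = np.array([0.0, 1.0])

w = 10.0  # weight balancing the two data rows against 50 physics rows
A = np.vstack([P2 + P, w * A_data])
b = np.concatenate([np.zeros(len(xs)), w * b_data])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

err = np.max(np.abs(P @ c - np.sin(xs)))
print(err)  # the fit tracks sin(x) closely across [0, pi]
```

With a neural network in place of the polynomial, the residual is no longer linear in the parameters, which is where automatic differentiation and the compiler optimizations discussed in the tutorial come in.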
3.00 pm to 3.30 pm
Invited Talk by: Sean Current
Abstract: The Fourier Neural Operator (FNO) is one of the most popular model architectures in physics-informed machine learning and has demonstrated strong performance on a number of benchmark tasks. However, the model continues to struggle to predict compressible flow solutions or solutions over long durations. In this work, we investigate the performance of FNO in the existing literature and identify two primary limitations of the FNO model. We then test the identified limitations on a synthetic toy problem and discuss how they appear in current work. In doing so, we aim to provide evidence for future directions of research that may improve the performance and efficacy of neural operator models.
Bio: Sean Current is a PhD Candidate in Computer Science at the Ohio State University under the advisement of Dr. Srinivasan Parthasarathy. His research focuses on methods of knowledge-guided machine learning for scientific applications in physics and chemistry. Sean previously graduated with a Bachelor of Science from the University of Arizona in Spring 2020, majoring in Mathematics and Information Science. During his studies, Sean has had the opportunity to intern with a variety of aerospace research organizations, including Boeing, NASA, and Sandia National Laboratories.
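For readers unfamiliar with the architecture discussed above: the heart of an FNO is a spectral convolution that keeps only the lowest Fourier modes, which gives one intuition for why the model struggles on sharp features such as shocks in compressible flow. Below is a minimal one-dimensional sketch of that layer (an illustration of the mode-truncation idea, not the authors' implementation or the full FNO).

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    """Scale the lowest n_modes Fourier modes of u by (learned) weights
    and discard the rest, as in an FNO spectral convolution."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]
    return np.fft.irfft(out_hat, n=len(u))

n, n_modes = 256, 8
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
ident = np.ones(n_modes, dtype=complex)  # identity weights: a pure low-pass

smooth = np.sin(x)            # spectrum fits entirely within the kept modes
step = np.sign(np.sin(x))     # discontinuous: slowly decaying spectrum

e_smooth = np.max(np.abs(fourier_layer(smooth, ident, n_modes) - smooth))
e_step = np.max(np.abs(fourier_layer(step, ident, n_modes) - step))
print(e_smooth, e_step)  # near machine precision vs. a large truncation error
```

The smooth signal passes through essentially unchanged, while the discontinuity is badly distorted by truncation: the same mechanism that makes FNO efficient on smooth PDE solutions hurts it on shock-dominated ones.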
3.30 pm to 4.00 pm
4.00 pm to 5.00 pm
5.00 pm to 5.30 pm
Poster Session 2
Poster Titles:
Active Symbolic Discovery of Ordinary Differential Equations via Phase Portrait Sketching
Atmospheric Super-Resolution with Neural Operators
Geoinformatics-Guided Machine Learning for Power Plant Classification
Improved deep learning of chaotic dynamical systems with multistep penalty losses
Iterative Knowledge-Guided Validation: Enhancing Factoid QA Reliability
Reduced complexity modeling of fluidic oscillators with data-driven boundary conditions
Reinforcement Learning Stability Analysis in the Latent Space of Actions
Scalable, adaptive, and explainable scientific machine learning with applications to surrogate models of partial differential equations
Spatial Distribution-Shift Aware Knowledge-Guided Machine Learning
Arka Daw
Oak Ridge National Laboratory
dawa@ornl.gov
Nikhil Muralidhar
Stevens Institute of Technology
nmurali1@stevens.edu
Taniya Kapoor
TU Delft
t.kapoor@tudelft.nl
Kai-Hendrik Cohrs
Universitat de València
kai.cohrs@uv.es
Anuj Karpatne
Virginia Tech
karpatne@vt.edu
Xiaowei Jia
University of Pittsburgh
xiaowei@pitt.edu
Ramakrishnan Kannan
Oak Ridge National Laboratory
kannanr@ornl.gov
Vipin Kumar
University of Minnesota
kumar001@umn.edu