Structure-Preserving Scientific
Computing and Machine Learning
Summer School and Hackathon
June 16 - 25, 2025
This Summer School and Hackathon event on Structure-Preserving Scientific Computing and Machine Learning will take place June 16-25, 2025, at the University of Washington, Seattle.
We invite talented and motivated graduate students from Canada and the United States to participate in a unique Summer School and Hackathon. The goal of this event is to expose graduate students to the exciting and emergent field of Structure-Preserving Scientific Computing and Machine Learning through four mini-courses taught by world-leading experts in computational mathematics, and through practical experience in Hackathon projects led by scientists and researchers from academia, government agencies, industry, and national laboratories.
Interested graduate students are encouraged to apply early to secure travel funding support and accommodation.
Application Submission
To apply, submit your application and supporting documents via the Google form here.
Application Deadline
April 15, 2025, or until 40 graduate students have been accepted.
Speaker biography: Lukas Einkemmer is a full professor at the University of Innsbruck, from which he also obtained his PhD in 2014. His research focuses on the efficient solution of partial differential equations using modern high-performance computer systems, in particular high-dimensional problems such as those arising in plasma physics. He has authored some of the pioneering work on using dynamical low-rank approximation for kinetic problems and formulating such methods in a continuous setting. He has also made significant contributions to structure-preserving algorithms, both for classical and low-rank methods. For his research, he has been awarded the SciCADE New Talent award in 2015 and later the prize of the state capital Innsbruck for scientific research.
Course description: Gaining a better understanding of plasma systems is essential in applications ranging from nuclear fusion to astrophysics. Many such problems require a kinetic model. However, performing computer simulations in the up to six-dimensional phase space is extremely expensive due to the curse of dimensionality. Recently, dynamical low-rank methods have been developed into an efficient approach to tackle such problems. The drastic reduction in computational cost that can often be achieved also opens the possibility of using kinetic simulations in a multi-query context (i.e., for optimization, uncertainty quantification, or as input to machine learning algorithms).
In this minicourse, we will introduce the basics of dynamical low-rank methods and show how such techniques can be applied to kinetic models. We show how to implement a simple Python based dynamical low-rank solver and discuss how to use our low-rank framework Ensign (https://github.com/leinkemmer/Ensign) to easily and efficiently implement such algorithms in practice. Similar to many other complexity reduction techniques, naive dynamical low-rank algorithms do not preserve the underlying physical structure of the problem. We thus report on recent advances that allow us to obtain mass, momentum, and energy conservative as well as asymptotic preserving dynamical low-rank schemes and show some examples of problems where this is crucial to obtain physically relevant results.
Speaker biography: Raymond Spiteri is a professor in the Department of Computer Science at the University of Saskatchewan. His areas of research are numerical analysis, scientific computing, and high-performance computing. His specialty is designing efficient methods for the time integration of ordinary and partial differential equations.
As a numerical analyst, he has been afforded the luxury of being able to actively collaborate with scientists and engineers of various flavors. He also has a long record of industry collaboration with companies, both large and small. His current applications include simulation of electrical activity in the heart, large-scale hydrologic flows, numerical weather prediction, plasmas, and fluidized-bed gasifiers. He has organized numerous conferences, including the International Conference on High-Performance Computing Systems and Applications (2007), the Computational Fluid Dynamics Society of Canada Annual Meeting (2008), the Canadian Applied and Industrial Mathematics Society Annual Meeting (2014), and, annually since 2023, the Go20 Conference on Scientific Computing and Software. He has also co-organized many others, including joint SIAM-CAIMS Annual Meetings and Math to Power Industry workshops.
Course description: A common method for the numerical solution of partial differential equations is the method of lines, where space-like variables are discretized to yield a large set of ordinary differential equations. In many practical cases, such systems may benefit from treatment by multiple numerical methods (e.g., specialized to distinct physical operators), or they are simply too large to be handled monolithically by a single numerical method. In this minicourse, we survey the basics of the many flavors of operator-splitting methods for differential equations, including implicit-explicit (IMEX) methods, fractional-step methods, and alternating direction implicit methods. We begin with the theoretical motivation and foundations of operator-splitting methods, including common strategies for when and how to split. From there, we specialize to the representation and stability analysis of fractional-step methods based on Runge-Kutta sub-integrators and general principles for method and software design. We conclude with some hands-on exercises in designing methods and solving problems with the pythOS operator-splitting software.
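To make splitting concrete (a toy sketch of our own, not pythOS itself): for du/dt = (A + B)u with two non-commuting matrices, first-order Lie-Trotter splitting applies the sub-flows sequentially, while second-order Strang splitting symmetrizes with a half-step. The matrix exponential below is a simple truncated Taylor series, adequate for these small, mild matrices.

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small norms)."""
    E = np.eye(len(M))
    T = np.eye(len(M))
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # two operators that do not
B = np.array([[0.0, 0.0], [1.0, 0.0]])   # commute, so splitting has error

u0 = np.array([1.0, 0.0])
T_final, n = 1.0, 50
h = T_final / n
exact = expm((A + B) * T_final) @ u0

lie, strang = u0.copy(), u0.copy()
eA, eB, eA2 = expm(A * h), expm(B * h), expm(A * h / 2)
for _ in range(n):
    lie = eB @ (eA @ lie)                 # Lie-Trotter: O(h) accurate
    strang = eA2 @ (eB @ (eA2 @ strang))  # Strang: O(h^2) accurate
lie_err = np.linalg.norm(lie - exact)
strang_err = np.linalg.norm(strang - exact)
print(f"Lie error: {lie_err:.2e}, Strang error: {strang_err:.2e}")
```

Halving h should roughly halve the Lie error but quarter the Strang error, which is the kind of order verification the hands-on exercises deal with.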
Speaker biography: Chris Budd OBE is a Professor of Applied Mathematics at the University of Bath (since 1995) and Director of Knowledge Exchange at the Bath Institute for Mathematical Innovation (IMI), which has a primary mission of taking mathematics out to the broader community. He has been involved in knowledge exchange all of his career, both in academia and industry. He has particular interests in mathematics, scientific computing, and machine learning, and works closely with many industries, especially the environmental, electronic, power generating, and food industries. He is currently the director of Maths for Deep Learning, a large three-university programme doing research into Scientific Machine Learning.
He is on the Council for the IMA and chairs the IMA Modelling and Algorithms group. A passionate advocate for outreach in mathematics, he is also a Professor of Mathematics at the Royal Institution and a fellow of Gresham College.
He has written over 120 papers, supervised over 40 PhD students, and has been principal investigator for grants totalling over £6M. He was awarded the 2020 JPBM Communications Award, the British government award for Modelling and Data Support (SAMDS) for his ‘exceptional contribution’ to epidemiological and modelling work during the Covid-19 pandemic, and the 2022 University of Bath Research Medal, and is a National Teaching Fellow (the highest UK award for HE teaching quality).
Course description: The first part of the course introduces the fundamental concepts behind scientific machine learning and how it can be used to model and learn complex dynamical systems. After a short introduction to the field and its many applications to dynamical systems in physics, biology, and other domains, the course moves to the basics of deep learning, showing how neural networks can be integrated with numerical ODE solvers and variational methods. Key techniques such as physics-informed neural networks (PINNs) and the finite element method will be covered, alongside practical Python implementations using PyTorch. By the end of the course, students will be able to apply these techniques to simple problems, such as the solution of differential equations.
The second part of the course delves deeper into the concepts from the first part, expanding on Neural ODEs and introducing Neural Operators, a more general framework for learning the dynamics of complex systems, including partial differential equations (PDEs). A major focus will be on Neural Operators, including their theoretical basis and variants such as Fourier Neural Operators (FNOs) and DeepONets, which provide powerful tools for learning high-dimensional, spatiotemporal dynamics. The course will also feature real-world applications in fields such as fluid dynamics and engineering. Through case studies and projects, students will apply these techniques to complex systems, gaining hands-on experience with cutting-edge research tools.
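The central Neural ODE idea can be sketched schematically (our own NumPy sketch with untrained, made-up weights; the course uses PyTorch): a small network plays the role of the right-hand side dz/dt = f_theta(z), and the forward pass is just a numerical integrator pushing the state from t = 0 to t = T.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.5, (8, 2)); b1 = np.zeros(8)  # hypothetical weights;
W2 = rng.normal(0, 0.5, (2, 8)); b2 = np.zeros(2)  # in practice, trained

def f_theta(z):
    """A tiny MLP acting as the learned vector field dz/dt = f_theta(z)."""
    return W2 @ np.tanh(W1 @ z + b1) + b2

def neural_ode_forward(z0, T=1.0, n_steps=100):
    """Integrate the learned ODE with forward Euler (the simplest choice;
    libraries like torchdiffeq use adaptive Runge-Kutta schemes instead)."""
    z, h = np.asarray(z0, dtype=float), T / n_steps
    for _ in range(n_steps):
        z = z + h * f_theta(z)
    return z

z_T = neural_ode_forward([1.0, -1.0])
print("state at t = T:", z_T)
```

Training then backpropagates a loss on z_T through the integrator (or through an adjoint ODE), which is what ties the deep-learning and numerical-analysis halves of the course together.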
Project leader biography: Jack Coughlin is currently a Research Scientist at Pasteur Labs. He received his PhD in Applied Mathematics from the University of Washington, where he was co-advised by Professors Jingwei Hu and Uri Shumlak. He has industrial experience as a software engineer, where he served as a technical lead on a variety of projects and mentored junior engineers.
Project description: In this project, summer school attendees will optimize a simplified whole-device model of a Z Pinch for maximum fusion gain by combining continuum kinetic plasma modeling, dynamical low-rank methods, and end-to-end differentiable programming with JAX. The Z Pinch is a fusion concept currently being pursued by Zap Energy which relies on the "pinch effect" to compress a plasma between two high-voltage electrodes. The total current driven across the plasma gap is the key parameter determining performance, with fusion gain scaling with the 11th (!!) power of current. However, kinetic plasma physics is known to impact Z Pinch current on-axis through sheath effects at the electrodes. An accurate picture of Z Pinch compression therefore requires coupling a kinetic model of the sheath to a model of the pulsed power driver. Once all components are in place, an optimization problem emerges: how may the pulsed power and plasma parameters be tuned to maximize fusion gain for a given input power budget?
Attendees will be provided with basic differentiable implementations of three components: a continuum kinetic solver, an RLC circuit solver, and an optimization driver. From there, they will be challenged to improve the performance and accuracy of the whole pipeline, using techniques such as:
- dynamical low-rank approximation for the kinetic solver
- training a machine-learning surrogate to learn the mapping of voltage, temperature, and density to current
- implementing additional physics, including bremsstrahlung radiative cooling and Dougherty-Fokker-Planck collisions, that critically impact Z Pinch performance
- deploying compute-heavy components to GPU-accelerated machines inside of tesseracts (portable, reusable containers for differentiable software).
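To give a flavor of the circuit component (a plain-NumPy sketch with made-up dimensionless parameters; the project's actual solver is differentiable JAX code coupled to the plasma): a series RLC model of a pulsed-power driver, advanced with classical fourth-order Runge-Kutta, with resistive losses draining the stored energy.

```python
import numpy as np

# Hypothetical, dimensionless driver parameters (illustrative only).
R, L, C = 0.5, 1.0, 1.0

def rhs(y):
    """State y = (I, V): inductor current and capacitor voltage.
    Series RLC:  L dI/dt = V - R I,   C dV/dt = -I."""
    I, V = y
    return np.array([(V - R * I) / L, -I / C])

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([0.0, 1.0])   # capacitor charged, no current flowing yet
h, energies = 0.01, []
for _ in range(2000):      # integrate to t = 20
    y = rk4_step(y, h)
    energies.append(0.5 * L * y[0]**2 + 0.5 * C * y[1]**2)
print("final state:", y, " final stored energy:", energies[-1])
```

In the hackathon pipeline the resistive load is replaced by the kinetic plasma model, and making every step differentiable is what lets the optimizer tune drive parameters end to end.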
Project leader biography: Terry Haut is a staff scientist in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory (LLNL). He received his Ph.D. in Applied Mathematics at the University of Colorado, Boulder in 2008 and is a current member of LLNL’s Deterministic Transport Team, with expertise in thermal radiative transfer (TRT). His research interests include developing efficient numerical methods for TRT, with an emphasis on asymptotic-preserving methods and dimension-reduction tensor techniques such as the Dynamic Low Rank method.
Project description: Thermal Radiative Transfer (TRT) describes how energy is exchanged between radiation (such as X-rays) and materials. TRT is essential for modeling Inertial Confinement Fusion and can account for up to 90% of the computational time and memory in multiphysics simulations.
Simulating TRT involves solving time-dependent integro-differential equations that represent the distribution of radiation energy. These equations capture the energy density of radiation at each point in space, for every direction and frequency. The problem is high-dimensional and features time scales that can differ by many orders of magnitude, requiring implicit time-stepping methods. A significant challenge in TRT simulations is efficiently solving these equations, especially in regimes where photons interact strongly with the material. In these cases, standard iterative methods may require an extremely large number of iterations to converge.
This hackathon will focus on developing physics-informed preconditioners for a simplified TRT model. The simplified model ignores nonlinear feedback between radiation and material and assumes symmetry in two spatial dimensions, reducing the problem to one spatial and one angular dimension. Students will write a finite element code in Python to solve these linear transport equations. They will also design preconditioners based on an asymptotic analysis of the discretized system in the limit of strong photon-background interactions. They will demonstrate that, with an appropriately designed preconditioner, iterative solvers for the steady-state equations converge rapidly, regardless of the photon mean free path.
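The payoff can be previewed with a toy fixed-point problem (our own construction, not the hackathon's transport model): solving (I - cK)x = b by plain source iteration converges at rate c, so it stalls as the scattering-like ratio c approaches 1, while a coarse "synthetic" correction that solves the slow averaged mode exactly restores fast convergence.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 200, 0.99                       # c near 1 mimics strong interaction
b = rng.random(n)
K = lambda x: np.full(n, x.mean())     # toy rank-one averaging operator
x_exact = b + c * b.mean() / (1 - c)   # closed form for x = c*K(x) + b

def solve(accelerate, tol=1e-10, max_it=10_000):
    """Return the iteration count to reach tol, with/without acceleration."""
    x = np.zeros(n)
    for it in range(1, max_it + 1):
        x_new = c * K(x) + b                       # one source-iteration sweep
        if accelerate:
            # coarse correction: solve the averaged equation exactly,
            # eliminating the slowly converging mean component
            x_new = x_new + c * (x_new.mean() - x.mean()) / (1 - c)
        if np.linalg.norm(x_new - x_exact) < tol:
            return it
        x = x_new
    return max_it

print("plain:", solve(False), "iterations;  accelerated:", solve(True))
```

The real hackathon problem replaces the averaging operator with a discretized transport sweep and the scalar coarse solve with an asymptotically derived preconditioner, but the mechanism, killing the slow near-equilibrium mode with a cheap coarse problem, is the same.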
By the end of the hackathon, students will gain experience with Discontinuous Galerkin (DG) finite element methods, learn the basics of photon transport modeling in a simplified context, and understand how to design efficient, physics-informed iterative methods.
Project leader biography: Oliver J. Bear Don't Walk IV is a citizen of the Apsáalooke Nation and is a Postdoctoral Scholar at the University of Washington in Biomedical Informatics and Medical Education. Oliver's research lies at the intersection of clinical natural language processing (NLP), fairness, and ethics. Their thesis focused on the technical and ethical aspects of extracting socio-demographic information from clinical notes. Dr. Bear Don't Walk's current work focuses on applying intersectionality to fairness audits of machine learning systems used in the care of patients with HIV. Additionally, he collaborates with Indigenous communities to describe Indigenous social drivers of health and to incorporate this information into biomedical informatics, thereby enhancing the relevance and effectiveness of healthcare technologies for Indigenous populations. Oliver is thankful for the community support which has brought him this far, and as such Oliver pays it forward through teaching and mentorship positions such as serving as an organizer for IndigiData and a Director on the American Medical Informatics Association’s Board of Directors.
Project description: Deep learning (DL) approaches in the clinical domain have benefitted from the explosion of data brought on by the adoption of electronic health records (EHRs) through the Health Information Technology for Economic and Clinical Health Act. This plethora of data has led to DL models being used for clinical risk prediction, healthcare disparities identification, and clustering. However, supervised DL techniques still require labels to learn well, which can be difficult to create while ensuring high quality. On the other hand, unsupervised techniques can leverage the large amount of data without the need for gold-standard labels and still support knowledge generation, such as subtype discovery. We leverage a DL approach that can work with large amounts of unlabeled EHR data to identify subgroups of patients.
The approach will leverage autoencoders, prior distribution constraints, and EHR data on patient groups likely to have clinically relevant subgroups. Autoencoders have been shown to successfully encode patient information that can be useful for a variety of downstream tasks while greatly reducing the dimensionality of patient representations. One approach that trains an autoencoder to learn useful patient representations while also learning patient subgroups in an unsupervised manner is the Dirichlet Variational Autoencoder.
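The role of the Dirichlet prior can be seen in a few lines of NumPy (an illustration of the prior alone, not of the autoencoder): small concentration parameters push samples toward the corners of the probability simplex, i.e., near one-hot subgroup assignments, while large ones spread mass evenly across components.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 5, 1000   # 5 hypothetical patient subgroups, 1000 latent samples

# Each sample is a point on the simplex: nonnegative weights summing to 1.
sparse = rng.dirichlet(0.1 * np.ones(k), size=n)   # small alpha: near one-hot
smooth = rng.dirichlet(10.0 * np.ones(k), size=n)  # large alpha: near uniform

print("mean max weight, alpha=0.1:", sparse.max(axis=1).mean())
print("mean max weight, alpha=10 :", smooth.max(axis=1).mean())
```

In a Dirichlet VAE the encoder outputs such simplex-valued latents, so a small-alpha prior encourages each patient to be explained by one dominant subgroup, which is what makes the latent space interpretable as subtypes.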
Building a Dirichlet Variational Autoencoder will be informative for participants to learn the basics of deep learning approaches in addition to more advanced techniques such as the stochastic gradient variational Bayes estimator. We will use the Modified National Institute of Standards and Technology dataset as a proof of concept for participants to ensure that their models are able to learn correctly. While we would ideally leverage EHR data for patient groups with multiple subtypes (e.g., diabetes, cancer), there are multiple privacy and security concerns with using such data. As a contingency, we plan to use the publicly accessible dataset MIMIC-IV to learn subtypes for patients with sepsis.
Project leader biography: Stéphane Gaudreault is a research manager and the head of the Scientific Machine Learning Research Group at Environment Canada, where he leads a team focused on developing next-generation weather prediction models. His research areas lie at the intersection of numerical analysis, high-performance scientific computing, and machine learning. He holds a master's degree in applied mathematics from École Polytechnique de Montréal and a bachelor's degree in computer science from Université de Montréal. He is an associate editor for Monthly Weather Review and a professional affiliate with the University of Saskatchewan and the University of California, Merced.
Project description: This hackathon project will investigate the performance and accuracy of Neural Ordinary Differential Equations (Neural ODEs) across diverse applications, from classification tasks to complex dynamical systems. Participants will conduct systematic numerical experiments using Python and the torchdiffeq package, comparing different time integration methods and training approaches (adjoint method vs. backpropagation through time). The project will progress from standard benchmarks like MNIST classification and the Van der Pol oscillator to more challenging problems including stiff systems and partial differential equations from PDEBench, such as the 1D Burgers equation. The hands-on experience will provide participants with a deep understanding of when and how to apply Neural ODEs effectively in computational science applications.
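As a warm-up for the benchmark problems (our own minimal version, using fixed-step RK4 in NumPy rather than torchdiffeq): the Van der Pol oscillator settles onto a limit cycle of amplitude roughly 2, making it a convenient reference trajectory against which a trained Neural ODE can be checked.

```python
import numpy as np

mu = 1.0  # nonlinearity parameter; large mu makes the problem stiff

def vdp(y):
    """Van der Pol: x'' - mu (1 - x^2) x' + x = 0 as a first-order system."""
    x, v = y
    return np.array([v, mu * (1 - x**2) * v - x])

def rk4(f, y0, h, n):
    """Classical fixed-step RK4, returning the whole trajectory."""
    ys = [np.asarray(y0, dtype=float)]
    for _ in range(n):
        y = ys[-1]
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        ys.append(y + (h/6) * (k1 + 2*k2 + 2*k3 + k4))
    return np.array(ys)

traj = rk4(vdp, [2.0, 0.0], h=0.01, n=2000)   # integrate to t = 20
print("max |x| over the run:", np.abs(traj[:, 0]).max())
```

Increasing mu to, say, 1000 makes the dynamics stiff, and fixed-step explicit schemes like this one then need impractically small steps; that regime motivates the project's comparison of implicit and adaptive integrators within Neural ODE training.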