Program

SON 2020+2

Monday, Sep 26, 2022


Sina Ober-Blöbaum, University of Paderborn

Mixed order and multirate variational integrators for the simulation of dynamics on different time scales


Variational principles are powerful tools for the modelling and simulation of conservative mechanical and electrical systems. As is well known, the fulfilment of a variational principle leads to the Euler-Lagrange equations of motion describing the dynamics of such systems. A discretisation of the variational principle leads to unified numerical schemes called variational integrators with powerful structure-preserving properties such as symplecticity, momentum preservation and excellent long-time behaviour.

After a broad introduction to variational integrators we will focus on different recent research aspects. These include high and mixed order construction and convergence analysis of variational integrators, a multirate version for the efficient simulation of dynamics on different time scales, as well as their use in solving optimal control problems. The theoretical results will be demonstrated numerically by means of several applications.
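
As a small illustration of the construction (the pendulum example, midpoint quadrature, step size and use of finite differences are assumed choices, not taken from the talk), a variational integrator is obtained by discretising the action and solving the discrete Euler-Lagrange equations step by step:

```python
# Minimal variational integrator sketch: midpoint discrete Lagrangian for a
# planar pendulum, stepping via the discrete Euler-Lagrange equations.
import numpy as np
from scipy.optimize import brentq

h = 0.01            # step size (assumed)
g, l = 9.81, 1.0    # gravity and pendulum length (assumed)

def L(q, qdot):
    # Continuous Lagrangian: kinetic minus potential energy.
    return 0.5 * (l * qdot) ** 2 + g * l * np.cos(q)

def Ld(q0, q1):
    # Midpoint discrete Lagrangian approximating the action over one step.
    return h * L(0.5 * (q0 + q1), (q1 - q0) / h)

def D1(q0, q1, eps=1e-6):
    # Partial derivative of Ld w.r.t. its first argument (finite differences).
    return (Ld(q0 + eps, q1) - Ld(q0 - eps, q1)) / (2 * eps)

def D2(q0, q1, eps=1e-6):
    # Partial derivative of Ld w.r.t. its second argument.
    return (Ld(q0, q1 + eps) - Ld(q0, q1 - eps)) / (2 * eps)

# Discrete Euler-Lagrange equations: D2 Ld(q_{k-1}, q_k) + D1 Ld(q_k, q_{k+1}) = 0.
q = [0.5, 0.5]      # two initial positions encode position and (zero) velocity
for k in range(1, 1000):
    f = lambda qn: D2(q[k - 1], q[k]) + D1(q[k], qn)
    q.append(brentq(f, q[k] - 1.0, q[k] + 1.0))
```

The resulting scheme is symplectic by construction, which is the structure-preservation property emphasised above.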



Kathrin Flaßkamp, Saarland University

Data-based motion primitives for dynamic control systems


Continuous-time, controlled dynamical system behavior can be encoded into motion primitives, such that an abstraction as a finite automaton is obtained. Motion primitives exploit system structures such as symmetry and relative equilibria. They have classically been derived either analytically or numerically, in both cases based on a system model. In this talk, however, we propose to base the automaton design on data. Focusing on autonomous driving, we derive a motion primitive library that represents typical human driving behavior.



Thomas Berger, University of Paderborn

Funnel control and applications


The control of dynamical systems with prescribed performance requirements is an active research area of systems and control theory. In view of necessary safety guarantees, suitable control techniques are of high practical relevance. Concurrently, a certain robustness is necessary because a precise measurement of the full state is often not available and many system parameters are unknown or uncertain. In this talk, the method of funnel control will be presented as a control technique which exhibits these requisite properties and only requires the knowledge of some structural invariants of the system class. This class can contain both finite and infinite dimensional systems, as the dimension of the state does not need to be known. For infinite dimensional systems we distinguish between systems which have a well-defined relative degree (and are hence amenable to funnel control by straightforward arguments) and systems for which this is not the case, and hence individual methods are required.
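
To fix ideas (in my notation and for the simplest relative-degree-one case; not a statement of the results of the talk), a prototypical funnel controller has the form
\[
e(t) = y(t) - y_{\mathrm{ref}}(t), \qquad u(t) = -k(t)\,e(t), \qquad k(t) = \frac{1}{1 - \varphi(t)^2 \|e(t)\|^2},
\]
where \(1/\varphi(t)\) describes the prescribed performance funnel: the gain \(k(t)\) grows as the error approaches the funnel boundary, and this mechanism keeps the error strictly inside the funnel without requiring knowledge of the system parameters.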



Dimitris Giannakis, New York University

Learning closures of dynamical systems with quantum mechanics


We present a data-driven scheme for learning closures of dynamical systems based on the mathematical framework of quantum mechanics and Koopman/transfer operator techniques. Given a system in which some components of the state are unknown, this method models the unresolved degrees of freedom as being in a time-dependent “quantum state”, which determines their influence on the resolved variables. The quantum state is an operator on a space of observables and evolves over time under the action of the transfer operator. The quantum state representing the unresolved degrees of freedom is updated at each timestep by the values of the resolved variables according to a quantum Bayes’ law. Moreover, kernel functions are utilized to allow the quantum Bayes’ law to be implemented numerically. We present applications of this methodology to the Lorenz 63 and multiscale Lorenz 96 systems, and show how this approach preserves important statistical and qualitative properties of the underlying chaotic systems.
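
For orientation, a generic quantum Bayes (Lüders-type) update of a density operator \(\rho\) given a measurement effect \(E_y\) associated with the observed value \(y\) reads (the notation and normalisation here are generic assumptions, not necessarily those used in the talk)
\[
\rho \;\longmapsto\; \frac{E_y^{1/2}\,\rho\,E_y^{1/2}}{\operatorname{tr}(\rho E_y)},
\]
with the state propagated by the transfer operator between such updates.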


Tuesday, Sep 27, 2022


Ian Melbourne, University of Warwick

Simulation of stochastic differential equations driven by a Lévy process


There are numerous difficulties related to simulating an SDE driven by a Lévy process, and viable methods for realistic examples seem hard to find in the numerical analysis literature. This is exacerbated by the fact that in many situations the appropriate stochastic integral is of Marcus type, which is not related to Itô or Stratonovich by a simple transformation.

Here we describe an approach to simulating Marcus SDEs based on deterministic homogenisation. (Joint work with Georg Gottwald.)



Kathrin Padberg-Gehle, Leuphana University Lüneburg

Data-based analysis of Lagrangian transport


Transport and mixing processes in fluid flows are crucially influenced by coherent structures and the characterisation of these Lagrangian objects is a topic of intense current research. While established mathematical approaches such as variational or transfer operator based schemes require full knowledge of the flow field or at least high resolution trajectory data, this information may not be available in applications. In this talk, we review different spatio-temporal clustering approaches and show how these can be used to identify coherent behaviour in flows directly from Lagrangian trajectory data. We demonstrate the applicability of these methods in a number of example systems, including turbulent convection.
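
As a purely illustrative toy example of clustering Lagrangian trajectory data (standard k-means applied to whole trajectories; the methods reviewed in the talk are more sophisticated, and all names and parameters below are assumptions):

```python
# Toy sketch: cluster Lagrangian trajectories directly from trajectory data.
import numpy as np
from sklearn.cluster import KMeans

# X has shape (n_trajectories, n_timesteps, dim): tracer particle positions.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50, 2)).cumsum(axis=1)   # placeholder random walks

# View each trajectory as one point in R^(n_timesteps*dim) and cluster.
features = X.reshape(X.shape[0], -1)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
# Trajectories sharing a label are candidates for coherent behaviour; in practice
# tailored trajectory distances or network-based clusterings are used instead.
```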



Stefan Klus, Heriot-Watt University

Koopman-based spectral clustering of directed and time-evolving graphs


While spectral clustering algorithms for undirected graphs are well established and have been successfully applied to unsupervised machine learning problems ranging from image segmentation and genome sequencing to signal processing and social network analysis, clustering directed graphs remains notoriously difficult. We will first exploit relationships between the graph Laplacian and transfer operators, in particular between clusters in undirected graphs and metastable sets in stochastic dynamical systems, and then use a generalization of the notion of metastability to derive clustering algorithms for directed and time-evolving graphs. The resulting clusters can be interpreted as coherent sets, which play an important role in the analysis of transport and mixing processes in fluid flows. We will illustrate the results with the aid of guiding examples and simple benchmark problems.
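
One simplified way to make the coherent-set viewpoint concrete (a sketch under my own assumptions, not necessarily the algorithms presented in the talk) is to cluster the leading eigenvectors of a forward-backward random-walk operator built from the directed adjacency matrix:

```python
# Sketch: clustering a directed graph via a forward-backward random-walk operator.
import numpy as np
from sklearn.cluster import KMeans

def coherent_clusters(A, n_clusters=2):
    # A: adjacency matrix of a directed graph in which every node has at least
    # one outgoing and one incoming edge.
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)      # forward random walk
    q = P.mean(axis=0)                        # distribution after one step from uniform
    Pb = (P / n).T / q[:, None]               # backward walk via Bayes' rule
    F = P @ Pb                                # forward-backward operator (symmetric, PSD)
    vals, vecs = np.linalg.eigh((F + F.T) / 2)
    V = vecs[:, -n_clusters:]                 # leading eigenvectors
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(V)

# Usage: labels = coherent_clusters(A) assigns each node to a coherent cluster.
```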



Yannis Kevrekidis, Johns Hopkins University

No equations, no variables, no parameters, no space and no time: Data and the modeling of complex systems


Obtaining predictive dynamical equations from data lies at the heart of science and engineering modeling, and is the linchpin of our technology. In mathematical modeling one typically progresses from observations of the world (and some serious thinking!) first to equations for a model, and then to the analysis of the model to make predictions.

Good mathematical models give good predictions (and inaccurate ones do not), but the computational tools for analyzing them are the same: algorithms that are typically based on closed-form equations.

While the skeleton of the process remains the same, today we witness the development of mathematical techniques that operate directly on observations (data) and appear to circumvent the serious thinking that goes into selecting variables and parameters and deriving accurate equations. The process then may appear to the user a little like making predictions by "looking in a crystal ball". Yet the "serious thinking" is still there and uses the same (and some new) mathematics: it goes into building algorithms that jump directly from data to the analysis of the model (which is now not available in closed form) so as to make predictions. Our work here presents a couple of efforts that illustrate this "new" path from data to predictions. It really is the same old path, but it is travelled by new means.



Konstantin Mischaikow, Rutgers University

Identifying Nonlinear Dynamics with High Confidence from Sparse Data


There are a variety of statistical techniques that, given sufficient time series data, identify explicit models, e.g. differential equations or maps, which are then evaluated to predict dynamics. However, chaotic dynamics and bifurcation theory imply sensitivity with respect to small errors in data and parameters, respectively. This suggests a potential inherent instability in going directly from data to models. We propose a novel method, combining Conley theory and Gaussian Process surrogate modeling with uncertainty quantification, through which it is possible to characterize local and global dynamics, e.g., existence of fixed points, periodic orbits, connecting orbits, bistability, and chaotic dynamics, with lower bounds on the confidence that this characterization of the dynamics is correct.



Felix Nüske, MPI Magdeburg

Data-driven Approximation of the Koopman Generator


In the context of Koopman operator based analysis of dynamical systems, the generator of the Koopman semigroup is of central importance. Models for the Koopman generator can be used, among others, for system identification, coarse graining, and control of the system at hand.

Bounds for the approximation and estimation error in this context are paramount to a better understanding of the method. In this talk, I will first discuss recent results on estimating the finite-data estimation error for Koopman generator models based on ergodic simulations. I will then present recent advances allowing for the approximation of the generator on tensor-structured subspaces by means of low-rank representations. This approach allows modelers to employ high-dimensional approximation spaces, while controlling the computational effort at the same time. Model applications to molecular dynamics simulation datasets will conclude the talk.
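
A minimal generator-EDMD-style example (my own toy setup with a known drift and diffusion and a monomial dictionary; the tensor-structured, low-rank approach of the talk goes well beyond this) illustrates how a generator matrix is obtained by regression:

```python
# Toy generator approximation for a 1D Ornstein-Uhlenbeck process
# dX = -alpha*X dt + sigma dW, generator (Lf)(x) = -alpha*x f'(x) + 0.5*sigma^2 f''(x).
import numpy as np

alpha, sigma = 1.0, 0.5
rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)          # sample points (stand-in for simulation data)

# Dictionary psi = (1, x, x^2) and its image under the generator at the data points.
Psi  = np.vstack([np.ones_like(x), x, x**2])
dPsi = np.vstack([np.zeros_like(x),                                 # L(1)   = 0
                  -alpha * x,                                       # L(x)   = -alpha*x
                  -2 * alpha * x**2 + sigma**2 * np.ones_like(x)])  # L(x^2) = -2*alpha*x^2 + sigma^2

# Least-squares fit of a matrix L with dPsi ≈ L @ Psi (empirical generator matrix).
L = dPsi @ Psi.T @ np.linalg.inv(Psi @ Psi.T)
print(np.sort(np.linalg.eigvals(L).real))   # approximately 0, -alpha, -2*alpha
```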



Cecilia González-Tokman, The University of Queensland

Quenched results for open and closed random dynamical systems


We will present recent results on ergodic properties and thermodynamic formalism for random open and closed dynamical systems. The focus will be on the so-called quenched perspective, which aims at describing the long-term behavior of the system for fixed (but generic) realizations of the environment, noise or forcing. Examples include non-transitive systems and random intermittent maps with geometric potentials. (Joint work with Jason Atnip, Gary Froyland and Sandro Vaienti).


Wednesday, Sep 28, 2022


Péter Koltai, Free University of Berlin

Space-time methods for coarse-graining of non-autonomous systems


The decomposition of the state space of a dynamical system into almost invariant sets is important for understanding its essential macroscopic behavior. Thanks to the works of Michael Dellnitz and others, the concept is reasonably well understood for autonomous dynamical systems. It has been generalized for non-autonomous systems, leading to the notion of coherent sets. Aiming at a unified theory, in this talk we will first present connections between the measure-theoretic autonomous and non-autonomous concepts. We shall do this by considering the augmented state space. Second, we will extend the framework to finite-time systems, and show that it is especially well-suited for manipulating the mixing properties of the dynamics. Third, we will show how this framework can be used to identify the birth and death of coherent sets.
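
As a reminder of the standard construction behind the space-time viewpoint (notation mine), a non-autonomous system is made autonomous by augmenting the state with time,
\[
\dot x = f(x,t) \qquad\Longleftrightarrow\qquad \frac{d}{ds}\begin{pmatrix} x \\ t \end{pmatrix} = \begin{pmatrix} f(x,t) \\ 1 \end{pmatrix},
\]
so that concepts defined for autonomous systems, such as almost invariant sets, can be transported to the augmented space and compared with coherent sets.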



Martin Golubitsky, The Ohio State University

Homeostasis and Input-Output Networks


A prototypical example of homeostasis occurs in warm-blooded mammals, where the internal body temperature is held approximately constant under variation of the external ambient temperature. Our study of homeostasis focuses on biochemical networks and abstracts these networks in three ways. First, we assume that an input node and an output node are designated. Second, an input-output function is defined by how the output varies under change of the input. Third, infinitesimal homeostasis (the derivative of the output with respect to the input vanishes) replaces homeostasis (the output is approximately constant under variation of the input). In this talk we use graph theoretic methods to classify infinitesimal homeostasis motifs, including feedforward loops, substrate inhibition, and negative feedback loops.
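
In formulas (my notation, consistent with the abstract): writing the network as \(\dot x = F(x, \mathcal I)\) with input \(\mathcal I\) and steady state \(X(\mathcal I)\), implicit differentiation of \(F(X(\mathcal I), \mathcal I) = 0\) gives
\[
J\,X'(\mathcal I) = -\,\frac{\partial F}{\partial \mathcal I}, \qquad J = D_x F\big(X(\mathcal I), \mathcal I\big),
\]
and infinitesimal homeostasis at \(\mathcal I_0\) is the condition \(x_o'(\mathcal I_0) = 0\) on the output component; by Cramer's rule this becomes a determinant condition on a submatrix of \(J\), which is what makes a graph-theoretic classification of motifs possible.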



Andrzej Banaszuk, Lockheed Martin

From fixing problems to the design of dynamics: lessons learned the hard way.


We will summarize lessons learned in applied dynamical systems research that made an impact on the design of aerospace systems. Technical topics will include the use of ergodic theory and operator-theoretic methods (Koopman and Perron-Frobenius). Application areas will include jet engines and unmanned systems.



Tuhin Sahai, Raytheon Technologies

At the Intersection of Dynamical Systems Theory, NP-hard problems, and Computational Complexity


Traditionally, computational complexity and the analysis of non-deterministic polynomial-time hard (NP-hard) problems have fallen under the purview of computer science and discrete optimization. However, dynamical systems theory has increasingly been used to construct new algorithms and to shed light on the hardness of problem instances.

We explore the use of dynamical systems for approximating the solutions of (and analyzing) NP-hard problems that arise in a wide variety of applications. In particular, we start by considering decentralized graph clustering. By evolving waves in the graph (governed by a local update equation at each node) followed by a decentralized frequency detection step, one can accurately compute the cluster assignment for each node. We explore the use of operator theoretic methods for improving the frequency estimation step in this framework.
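
A rough sketch of the wave-based idea (simplified, with assumed parameters and a centrally computed stability constant; the actual algorithm is fully decentralized and recovers the cluster assignment from the local frequency data):

```python
# Sketch: propagate a discretized wave equation on a graph using only local
# updates, then read spectral information off the FFT of the signal at one node.
import numpy as np
import networkx as nx

G = nx.planted_partition_graph(2, 20, 0.8, 0.05, seed=0)   # two-cluster test graph
Lap = nx.laplacian_matrix(G).toarray().astype(float)
n = Lap.shape[0]
c2 = 1.0 / np.linalg.eigvalsh(Lap).max()   # stability constant (a decentralized version
                                           # would use e.g. a max-degree bound instead)
T = 4096
rng = np.random.default_rng(0)
u = np.zeros((T, n))
u[0] = u[1] = rng.random(n)                # random initial condition
for t in range(1, T - 1):                  # local update: each node only needs the
    u[t + 1] = 2 * u[t] - u[t - 1] - c2 * (Lap @ u[t])   # values of its neighbours

# The dominant frequencies of the recorded signal encode Laplacian eigenvalues;
# in the decentralized algorithm, cluster labels are then recovered from the
# signs/phases of the FFT coefficients at these frequencies.
spectrum = np.abs(np.fft.rfft(u[:, 0]))
print(np.argsort(spectrum)[-5:])           # indices of the dominant frequency peaks
```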

Next, we develop novel relaxations for the iconic traveling salesman problem (TSP) to the manifold of orthogonal matrices. We then construct flows on the manifold such that the system equilibria correspond to tours of the TSP. We apply tools from dynamical systems theory to compute subsets of the stable manifold of the globally optimal solution to shed light on the search space complexity. We also use these relaxations to construct a new heuristic for computing candidate sets for the popular Lin-Kernighan-Helsgaun (LKH) approach.

Finally, we construct dynamical systems for the celebrated unique games conjecture (UGC) and use these systems to study the hardness of approximation of UGC instances. We conclude with a survey of other efforts at the intersection of dynamical systems theory and combinatorial optimization.


Thursday, Sep 29, 2022


Prashant Mehta, University of Illinois

Poincaré Inequality for Stability of Markov and Conditioned Processes

The Poincaré (or spectral gap) inequality (PI) is central to the subject of stochastic stability of Markov processes. The PI is the simplest condition which quantifies ergodicity and convergence to stationarity: the Poincaré constant gives the rate of exponential decay. Apart from stochastic stability, the PI has a rich history. It is the fundamental inequality in the study of elliptic PDEs.
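
In one standard form (notation mine): for a Markov semigroup \((P_t)\) with stationary measure \(\pi\) and Dirichlet form \(\mathcal E\), the PI with constant \(\lambda > 0\) reads
\[
\lambda\,\operatorname{Var}_\pi(f) \;\le\; \mathcal E(f,f) \quad \text{for all suitable } f,
\]
and it implies exponential convergence to stationarity, \(\operatorname{Var}_\pi(P_t f) \le e^{-2\lambda t}\,\operatorname{Var}_\pi(f)\).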

My talk is on the problem of nonlinear filter stability when the hidden Markov process is ergodic. The main contribution is the conditional PI, which is shown to yield filter stability. The proof is based upon a recently discovered duality which is used to transform the nonlinear filtering problem into a stochastic optimal control problem for a backward stochastic differential equation (BSDE). Based on these dual formalisms, a comparison is drawn between the stochastic stability of a Markov process and the filter stability. The latter relies on the conditional PI described in our work, whereas the former relies on the standard form of PI.

This is joint work with Jin Won Kim and Sean Meyn.



Oliver Schütze, Cinvestav-IPN; Oliver Cuate Gonzalez, ESFM-IPN

Continuation Methods for the Numerical Treatment of Multi- and Many Objective Optimization Problems


In many applications the problem arises that several objectives have to be optimized concurrently, leading to multi-objective optimization problems (MOPs). One important characteristic of MOPs is that their solution sets -- the so-called Pareto sets -- do not consist of a single solution. Instead, one can expect that these sets form, at least locally and under certain (mild) conditions on the problem, (k-1)-dimensional manifolds, where k is the number of objectives involved in the problem. In this talk, we will present several continuation methods for the treatment of MOPs, addressing both multi-objective problems (k less than or equal to 4) and many-objective problems (k greater than 4). The applicability and usefulness of all methods will be demonstrated on benchmark problems as well as on MOPs arising from real-world applications.
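
For background (my notation): under smoothness assumptions, a Pareto-optimal point x of \(\min (f_1,\dots,f_k)\) satisfies the Kuhn-Tucker condition
\[
\sum_{i=1}^{k} \alpha_i \nabla f_i(x) = 0, \qquad \alpha_i \ge 0, \qquad \sum_{i=1}^{k} \alpha_i = 1,
\]
and, under suitable rank conditions, the set of such KKT points forms a (k-1)-dimensional manifold; continuation methods trace this manifold numerically by predictor-corrector steps.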



Christof Schütte, Zuse Institute Berlin/Free University of Berlin

Transition manifolds and effective molecular dynamics


The use of transfer operator related techniques for understanding complex systems in the life sciences started from a joint paper with Michael Dellnitz at the end of the last millennium. Since then, it has developed into a broad field of research with more than 1500 research articles on themes directly related to transfer operators and metastability in molecular dynamics alone. This talk will briefly review this development and then discuss some of the most recent advances regarding transition manifolds and effective dynamics of molecular systems, as well as their utilization for finding new pain relief drugs.



Gary Froyland, University of New South Wales

On the influence of Michael Dellnitz: a personal retrospectrum


I will review some of the joint papers with Michael, as well as papers Michael has influenced in one way or another. Toward the end, I will discuss some recent work.


Friday, Sep 30, 2022


Kathrin Klamroth, University of Wuppertal

Training Neural Networks using Multiobjective Optimization: Physics Informed Neural Networks for COVID-Predictions


Co-authors: Fabian Heldmann, Matthias Ehrhardt, Malena Reiners, Michael Stiglmayr, Sarah Treibert


The description of data by appropriate physical systems is a challenging task, especially when the data are subject to uncertainties and the underlying dynamics can only be approximated. A recent example is the prediction of SARS-CoV-2 transmission rates. Physics informed neural networks (PINNs) are an efficient tool in such situations because they can represent problems for which measured data are available and for which the dynamics in the data are expected to follow some physical laws. Rather than representing the data using a purely physical model on one hand, or using a purely data-driven model on the other hand, PINNs have the potential to compromise between data accuracy and physical plausibility by combining a data loss term with a residual loss term that is computed based on the governing differential equations.

We suggest a multiobjective perspective on the training of PINNs by treating the data loss and the residual loss as individual objective functions in a truly biobjective optimization approach. We argue that multiobjective optimization techniques are specifically designed to handle trade-offs and to compute compromises, and are thus a perfect tool for PINN training.

As a showcase example, we use COVID-19 predictions in Germany and build an extended susceptible-infected-recovered (SIR) model, expressed as an ODE system, to model the transition rates and to predict future infections. We discuss different techniques for multiobjective PINN training and present numerical results. As an outlook, we briefly discuss the multiobjective nature of neural network training in general and present some results from the field of image classification.
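
A minimal sketch of the two competing loss terms for an SIR-type PINN (assuming PyTorch, made-up rates and placeholder data, and a simple weighted-sum scalarization as the most basic multiobjective device; the techniques discussed in the talk go beyond this):

```python
# Toy PINN losses for an SIR model dS/dt = -b*S*I, dI/dt = b*S*I - g*I, dR/dt = g*I.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 3))         # maps t to (S, I, R)
b, g = 0.3, 0.1                                           # assumed transmission/recovery rates

def residual_loss(t):
    # ODE residual of the SIR system at collocation times t (t requires gradients).
    out = net(t)
    S, I, R = out[:, 0:1], out[:, 1:2], out[:, 2:3]
    dS = torch.autograd.grad(S, t, torch.ones_like(S), create_graph=True)[0]
    dI = torch.autograd.grad(I, t, torch.ones_like(I), create_graph=True)[0]
    dR = torch.autograd.grad(R, t, torch.ones_like(R), create_graph=True)[0]
    return ((dS + b * S * I) ** 2 + (dI - b * S * I + g * I) ** 2 + (dR - g * I) ** 2).mean()

def data_loss(t_obs, I_obs):
    # Misfit to observed infection numbers.
    return ((net(t_obs)[:, 1:2] - I_obs) ** 2).mean()

# Weighted-sum training; varying the weight w traces different compromises between
# data fidelity and physical plausibility (each a point of the Pareto front).
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t_col = torch.linspace(0, 1, 100).reshape(-1, 1).requires_grad_(True)
t_obs = torch.rand(20, 1)
I_obs = 0.1 * torch.rand(20, 1)                           # placeholder observations
w = 0.5
for step in range(1000):
    opt.zero_grad()
    loss = w * data_loss(t_obs, I_obs) + (1 - w) * residual_loss(t_col)
    loss.backward()
    opt.step()
```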



Oliver Junge, Technical University of Munich

GAIO – 25 years later


GAIO (Global Analysis of Invariant Objects) is a software package for set-oriented computations, originating from early work of Michael. It provides efficient implementations of algorithms for problems from dynamical systems as well as from (multiobjective) optimization. We review the development of the package over the last 25 years and talk about recent advances.
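
To convey the flavour of the underlying set-oriented subdivision algorithm, here is a toy Python reimplementation (not the GAIO package itself; the Hénon map, depth and test-point numbers are my own choices):

```python
# Toy subdivision algorithm: cover the relative global attractor of the Henon map
# in Q = [-2,2]^2 by boxes, alternating subdivision and selection steps.
import numpy as np

henon = lambda p: np.array([1.0 - 1.4 * p[0] ** 2 + p[1], 0.3 * p[0]])
rng = np.random.default_rng(0)

# A box is (center, radius), radius being half the edge lengths.
boxes = [(np.array([0.0, 0.0]), np.array([2.0, 2.0]))]

for depth in range(12):
    j = depth % 2                      # subdivision: bisect along alternating coordinates
    new = []
    for c, r in boxes:
        rj = r.copy()
        rj[j] /= 2
        for s in (-1, 1):
            cj = c.copy()
            cj[j] += s * rj[j]
            new.append((cj, rj))
    # Selection: keep a box if the image of some test point of some box hits it.
    pts = np.concatenate([c + (2 * rng.random((10, 2)) - 1) * r for c, r in new])
    images = np.array([henon(p) for p in pts])
    boxes = [(c, r) for c, r in new
             if np.any(np.all(np.abs(images - c) <= r, axis=1))]

print(len(boxes), "boxes cover the relative global attractor at depth 12")
```

GAIO implements this subdivision scheme, among many other set-oriented algorithms, with efficient box data structures.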



Sebastian Peitz, University of Paderborn

Sample efficiency in model-based and model-free data-driven control


As in almost every other branch of science, the advances in data science and machine learning have also resulted in improved modeling, simulation and control of nonlinear dynamical systems, prominent examples being autonomous driving or the control of complex chemical processes. However, many of these approaches face the issues that they (1) do not come with strong performance guarantees and (2) often tend to be very data-hungry. In this presentation, we discuss different approaches, both model-based and model-free, to improve the sample efficiency in data-driven control. We also briefly address the question of error bounds in the model-based setting, and we demonstrate the performance using several example systems governed by partial differential equations.