Bilateral Workshop on Applied Mathematics

Bayreuth -- Tel Aviv

12-14 March 2019, Tel Aviv University

Abstracts

Haim Avron

Reconstructing Signals with Simple Fourier Transforms

Reconstructing continuous signals based on a small number of discrete samples is a fundamental problem across science and engineering. In practice, we are often interested in signals with "simple" Fourier structure -- e.g., those involving frequencies within a bounded range, a small number of frequencies, or a few blocks of frequencies. More broadly, any prior knowledge about a signal's Fourier power spectrum can constrain its complexity. Intuitively, signals with more highly constrained Fourier structure require fewer samples to reconstruct.

We formalize this intuition by showing that, roughly speaking, a continuous signal from a given class can be approximately reconstructed using a number of samples equal to the statistical dimension of the allowed power spectrum of that class. We prove that, in nearly all settings, this natural measure tightly characterizes the sample complexity of signal reconstruction.

Surprisingly, we also show that, up to logarithmic factors, a universal non-uniform sampling strategy can achieve this optimal complexity for any class of signals. We present a simple, efficient, and general algorithm for recovering a signal from the samples taken. For bandlimited and sparse signals, our method matches the state-of-the-art. At the same time, it gives the first computationally and sample efficient solution to a broad range of problems, including multiband signal reconstruction and common kriging and Gaussian process regression tasks.

Our work is based on a novel connection between randomized linear algebra and the problem of reconstructing signals with constrained Fourier structure. We extend tools based on statistical leverage score sampling and column-based matrix reconstruction to the approximation of continuous linear operators that arise in the signal fitting problem. We believe that these extensions are of independent interest and serve as a foundation for tackling a broad range of continuous-time problems using randomized methods.
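As a toy illustration of the fitting step (not the authors' algorithm; the frequencies, sample count, and test point below are hypothetical choices): when the candidate frequencies are known, a signal lying in their span can be recovered from non-uniformly scattered samples by ordinary least squares.

```python
import math, random

def solve(A, b):
    """Solve the normal equations (A^T A) c = A^T b by Gaussian elimination."""
    n = len(A[0])
    M = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(n)]
         + [sum(A[r][i] * b[r] for r in range(len(A)))] for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # partial pivoting
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# A signal using two frequencies from a known candidate set (hypothetical example).
freqs = [1.0, 3.0, 5.0]
signal = lambda t: 2.0 * math.cos(2 * math.pi * 1.0 * t) - 0.5 * math.sin(2 * math.pi * 5.0 * t)

random.seed(0)
samples = sorted(random.random() for _ in range(40))   # non-uniform sample times in [0, 1]
features = lambda t: [g(2 * math.pi * f * t) for f in freqs for g in (math.cos, math.sin)]
A = [features(t) for t in samples]
b = [signal(t) for t in samples]
coef = solve(A, b)

# Reconstruct at an unseen time and compare with the true signal.
t0 = 0.123
approx = sum(c * v for c, v in zip(coef, features(t0)))
print(abs(approx - signal(t0)))
```

Since the signal lies exactly in the span of the six features, the least-squares fit reproduces it up to round-off at any unseen time.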

This is joint work with Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker and Amir Zandieh.

Robert Baier

Adaptive Computation of Reachable Sets by Distance Functions and OCP Solvers

Several applications, e.g. collision avoidance for cars, require not only knowledge of one optimal trajectory but knowledge of the behaviour of all trajectories and a good approximation of the reachable set, i.e., the set of end points of all feasible trajectories at a given end time. Reachable sets of nonlinear state-constrained control problems with bounded controls can be calculated by various approaches, e.g., by solving a PDE, by evolving level sets, by set-valued Runge-Kutta methods with a box/pixel approach, or by overestimating methods.

This talk suggests an adaptive method based on solvers for direct discretization of optimal control problems (OCP) or optimization problems. The objective is motivated by the distance function from some grid point to the unknown (usually nonconvex) reachable set. By varying a grid point in a bounding box covering the reachable set, the OCP solver generates a point inside or at the boundary of the reachable set.
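The distance-function idea can be sketched in a toy setting. In the sketch below, a brute-force enumeration of piecewise-constant controls for a double integrator stands in for the OCP solver (the system, horizon and query points are hypothetical choices, not from the talk):

```python
import itertools, math

# Toy double integrator: x1' = x2, x2' = u, |u| <= 1, x(0) = (0, 0).
N, T = 6, 1.0
dt = T / N

def endpoint(controls):
    """Explicit Euler endpoint of the trajectory driven by a piecewise-constant control."""
    x1 = x2 = 0.0
    for u in controls:
        x1, x2 = x1 + dt * x2, x2 + dt * u
    return (x1, x2)

# Enumerate endpoints over bang/off/bang control sequences
# (a crude stand-in for minimizing the distance with an OCP solver).
cloud = [endpoint(c) for c in itertools.product((-1.0, 0.0, 1.0), repeat=N)]

def dist_to_reachable(g):
    """Distance from a grid point g to the sampled reachable set at time T."""
    return min(math.hypot(p[0] - g[0], p[1] - g[1]) for p in cloud)

# Grid points with small distance approximate the reachable set;
# large distances flag points of the bounding box that can be discarded.
print(dist_to_reachable((0.0, 0.0)))   # the origin is reachable with u = 0
print(dist_to_reachable((5.0, 5.0)))   # far outside the reachable set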

Adapting a subdivision technique originally used for the approximation of global attractors yields rather simple convergence proofs, a refining overestimation of the reachable set by a collection of boxes, and an adaptive implementation that outperforms the same algorithm applied to a uniform state-space discretization.

As applications, lower-dimensional projected reachable sets of a robot model and of a single-track model for collision avoidance, with up to six states and three controls, are computed. Features of the algorithm and possible speedups by parallelization are demonstrated.

Tuvi Etzion

Thermal-Management Coding for High-Performance Interconnects

High temperatures have dramatic negative effects on interconnect performance and, hence, numerous techniques have been proposed to reduce the power consumption of on-chip buses. However, existing methods fall short of fully addressing the thermal challenges posed by high-performance interconnects. In this paper, we introduce new efficient coding schemes that make it possible to directly control the peak temperature of a bus by effectively cooling its hottest wires. This is achieved by avoiding state transitions on the hottest wires for as long as necessary until their temperature drops off. We also reduce the average power consumption by making sure that the total number of state transitions on all the wires is below a prescribed threshold. We show how each of these two features can be coded for separately or, alternatively, how both can be achieved at the same time. In addition, error-correction for the transmitted information can be provided while controlling the peak temperature and/or the average power consumption.

In general, our cooling codes use $n > k$ wires to encode a given $k$-bit bus. One of our goals herein is to determine the minimum possible number of wires $n$ needed to encode $k$ bits while satisfying any combination of the three desired properties. We provide full theoretical analysis in each case. In particular, we show that $n = k+t+1$ suffices to cool the $t$ hottest wires, and this is the best possible. Moreover, although the proposed coding schemes make use of sophisticated tools from combinatorics, discrete geometry, linear algebra, and coding theory, the resulting encoders and decoders are fully practical. They do not require significant computational overhead and can be implemented without a large circuit area.
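The combinatorial core can be checked by brute force for tiny parameters. The sketch below (a toy coset-based scheme, not the construction of the paper) verifies that with $n = k+t+1 = 4$ wires, $k = 2$ data bits and $t = 1$ hot wire, every message can be encoded without a state transition on the hot wire:

```python
from itertools import product

n, k, t = 4, 2, 1                      # n = k + t + 1 wires for k bits, t = 1 hot wire
D = {0b0000, 0b1111, 0b1010, 0b0101}   # toy linear code of dimension t + 1 = 2

# Coset representatives of D index the 2^k messages; decoding maps x to its coset.
reps, seen = [], set()
for v in range(2 ** n):
    if v not in seen:
        reps.append(v)
        seen |= {v ^ d for d in D}
assert len(reps) == 2 ** k

def encode(msg, prev, hot):
    """Pick a vector in msg's coset that agrees with prev on the hot wire."""
    for d in D:
        x = reps[msg] ^ d
        if (x >> hot) & 1 == (prev >> hot) & 1:   # no transition on the hot wire
            return x
    raise ValueError("cooling failed")

def decode(x):
    return next(m for m, r in enumerate(reps) if x ^ r in D)

# Exhaustive check: every message, previous state, and hot wire is handled.
for msg, prev, hot in product(range(2 ** k), range(2 ** n), range(n)):
    x = encode(msg, prev, hot)
    assert decode(x) == msg
    assert (x >> hot) & 1 == (prev >> hot) & 1
print("cooling property verified for n =", n)
```

The check succeeds because every coordinate of the toy code D takes both bit values, so each coset contains a vector agreeing with any previous state on any single wire.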

Elza Farkhi

Filippov-type stability theorems for differential and difference inclusions

Filippov-type theorems provide estimates of the error in the solution set of an ordinary differential equation (ODE) or a differential inclusion (DI) under perturbations of the initial condition and the right-hand side. They lie at the basis of stability and convergence analysis of dynamical systems. In the classical setting, Lipschitz continuity of the right-hand side (RHS) of the ODE or DI with respect to the state variable is required (Filippov, '67). Filippov also showed certain stability for ODEs with right-hand sides satisfying a condition called today the one-sided Kamke/Lipschitz (OSK/L) condition.

The one-sided Lipschitz condition for set-valued maps that we use relaxes both classical Lipschitz continuity and dissipativity. While allowing certain types of discontinuities, the OSL/OSK condition still preserves stability of the solution and reachable sets under perturbations and discretizations, and this stability is quantified by Filippov-type theorems. For a negative OSL constant, estimates are derived on an infinite time horizon that express set-valued asymptotic stability.

We also state Filippov-type stability theorems for Euler-type difference inclusions, with respect to perturbations in the initial set, in the right-hand side as a set (outer perturbation) and in the state variable entering the right-hand side (inner perturbation). In the special case of strengthened one-sided Lipschitz (SOSL) set-valued maps, better estimates are obtained.

As applications, the distance between the discrete reachable sets of the (set-valued) explicit and implicit Euler methods, and the distance between the reachable sets of nonconvex difference inclusions and their relaxed (convexified) counterparts, are estimated.

The talk is based on joint works with R. Baier (Bayreuth), Tz. Donchev (Sofia) and others.

Lars Grüne

Computation of nonsmooth control Lyapunov functions on a grid

Control Lyapunov functions play an important role for controller design for nonlinear control systems. They can be interpreted as a road map, which for each point of the state space indicates a set of good directions towards the target from which the value of the controller can be computed. Lyapunov functions can be represented as sub- or supersolutions of certain partial differential equations (PDEs) and due to this fact various numerical schemes for computing Lyapunov functions have been designed based on discretization schemes for PDEs.

However, most of these schemes require appropriate smoothness of the function to be computed. It is known from seminal works of Artstein, Clarke, Sontag and others in the 1980s and 1990s that control Lyapunov functions are in general not smooth. The talk explains a new way to cope with this problem. It is based on joint work with Robert Baier (Bayreuth), Philipp Braun, and Chris Kellett (both Newcastle, Australia).

Thomas Kriecherbauer

On the universal distributions of random matrix theory

It was already conjectured by Wigner and Dyson almost 60 years ago that the fluctuations of eigenvalues of random matrices follow universal limiting laws as the matrix size tends to infinity. This conjecture has been established rigorously for a large class of matrix ensembles during the past two decades. As a by-product it was discovered that, somewhat surprisingly, these universal laws also appear in a number of exactly solvable models of combinatorics and statistical mechanics. In this talk we review some of these developments. Moreover, we introduce Riemann-Hilbert problems as one of the key analytic tools in proving the above mentioned limit laws.

Arie Levant

Homogeneous differential inclusions in exact robust differentiation and output regulation

This is a joint work with Adam Jbara and Avi Hanan, Applied Mathematics Department, Tel-Aviv University.

A control problem is the problem of regulating some process in order to achieve prescribed control goals. Most real-life control problems involve uncertainties. One natural way of dealing with such uncertainties is to describe the process by a controlled differential inclusion (DI).

The most basic, and still one of the most challenging, control problems is the output regulation of a black box with a single (numeric) control input and a single (numeric) output. The mathematical model of the process is not available, and only some basic assumptions are postulated. Such an output-regulation problem is naturally described as the stabilization problem of a controlled DI describing the uncertain dynamics of the output. The idea is to develop a feedback control which turns the controlled DI into a finite-time stable homogeneous DI.

The presented strategy produces a feedback control based on a number of the real-time-calculated derivatives of the output. Both the feedback function and the differentiator are developed using the theory of homogeneous DIs.

Finite-time-exact arbitrary-order real-time differentiation is demonstrated that is robust to unmodeled input noises, delays and digital errors. The differentiator is exact on a large class of inputs in the absence of noises. It is proved to have optimal asymptotics in the presence of bounded noises and is capable of rejecting unbounded noises of small average values.
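A minimal sketch of such a differentiator is the classical first-order super-twisting scheme from the sliding-mode literature, shown here with the standard textbook gains for a Lipschitz bound L on the second derivative (an illustration only, not the arbitrary-order differentiator of the talk):

```python
import math

# First-order robust exact (super-twisting) differentiator, Euler-discretized.
L = 1.0                              # bound on |f''|; for f(t) = sin(t), L = 1
lam1, lam2 = 1.5 * math.sqrt(L), 1.1 * L   # standard gain choices

f = math.sin                         # input signal; its derivative cos(t) is recovered
dt, T = 1e-3, 10.0
z0, z1 = 0.0, 0.0                    # estimates of f and of f'

t = 0.0
while t < T:
    e = z0 - f(t)
    sgn = (e > 0) - (e < 0)
    z0 += dt * (-lam1 * math.sqrt(abs(e)) * sgn + z1)
    z1 += dt * (-lam2 * sgn)
    t += dt

err = abs(z1 - math.cos(T))          # derivative estimation error after convergence
print(err)
```

After a finite-time transient, z1 tracks f' up to a small discretization-induced error, and no model of f beyond the bound on f'' is used.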

The same control is demonstrated to be efficient in keeping a car on a given route and in regulating the blood glucose level of a living rat.

The proposed universal control algorithms are ready to use, do not require heavy calculations, and can be easily applied without understanding the underlying mathematical theory.

Michael Margaliot

Revisiting Totally Positive Differential Systems

A matrix is called totally nonnegative (TN) if all its minors are nonnegative, and totally positive (TP) if all its minors are positive. Multiplying a vector by a TN matrix does not increase the number of sign variations in the vector. In a largely forgotten paper, Schwarz (1970) considered matrices whose exponentials are TN or TP. He also analyzed the evolution of the number of sign changes in the vector solutions of the corresponding linear system.

In a seemingly different line of research, Smillie (1984), Smith (1991), and others analyzed the stability of nonlinear tridiagonal cooperative systems by using the number of sign variations in the derivative vector as an integer-valued Lyapunov function.

We provide a tutorial on these fascinating research topics and show that they are intimately related. This allows us to derive generalizations of the results by Smillie (1984) and Smith (1991) while simplifying the proofs. This also opens the door to many new and interesting research directions.
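The integer-valued Lyapunov function can be illustrated numerically. The sketch below (a toy example, not from the talk) integrates a linear tridiagonal cooperative system with explicit Euler and checks that the number of sign variations along the solution never increases:

```python
# x' = A x with a tridiagonal matrix having positive off-diagonal entries;
# the Euler map I + dt*A is then totally nonnegative for small dt.
A = [[-2.0, 1.0, 0.0],
     [1.0, -2.0, 1.0],
     [0.0, 1.0, -2.0]]

def sign_variations(x):
    """Number of sign changes in x, ignoring zero entries."""
    signs = [1 if v > 0 else -1 for v in x if v != 0.0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

x = [1.0, -1.0, 1.0]                 # initial condition with 2 sign variations
dt, steps = 0.01, 500
history = [sign_variations(x)]
for _ in range(steps):               # explicit Euler integration
    x = [xi + dt * sum(A[i][j] * x[j] for j in range(3))
         for i, xi in enumerate(x)]
    history.append(sign_variations(x))

print(history[0], history[-1])       # the count decays to 0 as the solution
                                     # aligns with the positive dominant eigenvector
```

The run starts with 2 sign variations and ends with 0, and the count is non-increasing at every step, matching the variation-diminishing property of TN matrices stated above.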

Joint work with Eduardo D. Sontag, Northeastern University.

Alona Mokhov

Approximation of Set-Valued Functions -- an Overview

This is a joint work with Nira Dyn, Elza Farkhi and Elena Berdysheva. The talk surveys our work on the approximation of set-valued functions (multifunctions), i.e., functions mapping the points of a closed real interval to general compact sets in R^n. The approach is to adapt approximation methods for real-valued functions to set-valued functions by replacing operations between numbers with operations between sets.

For multifunctions with compact convex images, adaptation based on Minkowski convex combinations of sets yields approximating operators in the Hausdorff metric. Yet, if the images of the set-valued functions are not necessarily convex, these approximation methods may fail. To avoid convexification, approximation methods for set-valued functions with general compact images are adapted using metric linear combinations of sets. This adaptation method is not restricted to positive operators. As examples we study metric Bernstein operators, metric Schoenberg operators and metric polynomial interpolants. Error estimates are given with respect to the Hausdorff distance in terms of the regularity properties of the approximated set-valued function.

Using metric linear combinations of sets, the metric integral of set-valued functions is defined. The metric integral is free of the convexification drawback which is typical of the commonly used Aumann integral of SVFs. We use the metric integral to smooth a multifunction by defining its metric Steklov set-valued function. The error is measured by the averaged Hausdorff distance.
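The convexification drawback and its metric remedy can be seen already for finite subsets of the real line. The sketch below (a simplified one-dimensional illustration; the general definitions are more involved) compares a Minkowski convex combination with a combination taken only along metric pairs:

```python
def hausdorff(A, B):
    """Hausdorff distance between finite subsets of R."""
    d = lambda x, S: min(abs(x - s) for s in S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

def minkowski_comb(t, A, B):
    """Minkowski convex combination t*A + (1-t)*B."""
    return {t * a + (1 - t) * b for a in A for b in B}

def metric_pairs(A, B):
    """Pairs (a, b) in which one point is a nearest point of the other set."""
    proj = lambda x, S: {s for s in S if abs(x - s) == min(abs(x - r) for r in S)}
    return ({(a, b) for a in A for b in proj(a, B)}
            | {(a, b) for b in B for a in proj(b, A)})

def metric_comb(t, A, B):
    """Metric convex combination: combine only along metric pairs."""
    return {t * a + (1 - t) * b for a, b in metric_pairs(A, B)}

A = {0.0, 1.0}                               # a nonconvex (two-point) set
print(sorted(minkowski_comb(0.5, A, A)))     # the midpoint 0.5 creeps in
print(sorted(metric_comb(0.5, A, A)))        # the set A is reproduced exactly
print(hausdorff(A, minkowski_comb(0.5, A, A)))
```

Averaging A with itself in the Minkowski sense already introduces the spurious point 0.5 (a Hausdorff error of 0.5), while the metric combination reproduces the nonconvex set A.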

Nir Sharon

Recent progress in nonlinear subdivision schemes

Recent years gave rise to exciting developments in methods for approximating manifolds and manifold-valued objects. In this talk, we focus on the case of one-dimensional data series over manifolds and the use of univariate subdivision schemes for generating smooth curves on the manifold, starting from the data series. In particular, we concentrate on two fundamental questions: how to construct such operators and how to analyse them.

Zachi (Itzhak) Tamo

Optimal repair of polynomials: Achieving the cut-set bound

It is a well-known fact that any k evaluation points of a polynomial P(x) of degree less than k suffice to calculate (recover) its evaluation at any other point. Motivated by applications to error-correcting codes for storage systems, we consider the following question. Is it possible to recover P(\alpha) for some \alpha by observing some partial information of d >= k evaluation points P(\alpha_i)? More precisely, the observed partial information is f_i(P(\alpha_i)) for some functions f_i which are not necessarily injective, and the goal is to perform the recovery while minimizing the total amount of observed partial information.
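The full-information baseline that the question starts from is classical Lagrange interpolation: k complete evaluations determine P(alpha) exactly. A minimal sketch over a prime field (polynomial, prime, and evaluation points are hypothetical choices):

```python
# Recovering P(alpha) from k full evaluations by Lagrange interpolation mod p.
p = 97                               # a prime; all arithmetic is modulo p

def recover(points, alpha):
    """Evaluate the unique polynomial of degree < k through `points` at alpha (mod p)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (alpha - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p   # modular inverse of den
    return total

# P(x) = 3x^2 + 5x + 7 has degree < k = 3, so k evaluations suffice.
P = lambda x: (3 * x * x + 5 * x + 7) % p
points = [(x, P(x)) for x in (1, 2, 3)]
print(recover(points, 10), P(10))    # both equal 357 mod 97 = 66
```

The repair problem of the talk asks how much of this information is really needed: each node reveals only f_i(P(\alpha_i)) rather than the full evaluation, and the total revealed information is to be minimized.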

Alfred Wassermann

q-analogs of combinatorial structures

The classical theory of q-analogs of mathematical objects and functions has its beginnings in the work of Euler. In 1957, Tits further suggested that combinatorics of sets could be regarded as the limiting case q -> 1 of combinatorics of vector spaces over the finite field GF(q). While combinatorial designs have been studied since the 1830s, it was not until the early 1970s that Ray-Chaudhuri, Cameron and Delsarte independently introduced q-analogs of designs.

It turns out that a subclass -- namely q-analogs of Steiner systems -- yields the best possible constant-dimension subspace codes for random network coding. This is analogous to the situation in "classical" coding theory, where (combinatorial) Steiner systems are the best possible constant-weight codes for a given length and minimum distance.

In this talk we will give an introduction to the subject and survey recent developments in q-analogs of designs and subspace codes.

A special focus will be on q-analogs of "group divisible designs".

This is joint work with Marco Buratti (Perugia), Daniel Heinlein, Michael Kiermaier, Sascha Kurz (Bayreuth) and Anamari Nakić (Zagreb).

Holger Wendland

Kernel-based reconstructions for parametric PDEs

In uncertainty quantification, an unknown quantity has to be reconstructed which typically depends on the solution of a partial differential equation. This partial differential equation itself may depend on parameters, some of which are deterministic and some random. To approximate the unknown quantity one thus has to solve the partial differential equation (usually numerically) for several instances of the parameters and then reconstruct the quantity from these simulations. As the number of parameters may be large, this becomes a high-dimensional reconstruction problem. In this talk, I will address the topic of reconstructing such unknown quantities using kernel-based reconstruction methods on sparse grids. I will introduce the topic, explain the reconstruction process and provide error estimates. This talk is based upon joint work with C. Rieger (Bonn) and R. Kempf (Bayreuth).