08:30-09:00 Gathering & Coffee
09:00-09:15 Greetings
09:15-09:45 Eitan Tadmor, University of Maryland, USA
Swarm-Based Gradient Descent Method for Non-Convex Optimization
We introduce a new swarm-based gradient descent (SBGD) method for non-convex optimization. The swarm consists of agents, identified with positions x and masses m. The key to their dynamics is the transition of mass from higher to lower ground, together with a time-stepping protocol, h(x,m), which decreases with m. The interplay between positions and masses leads to a dynamic distinction between 'leaders' and 'explorers': heavier agents lead the swarm near local minima with small time steps, while lighter agents explore the landscape with large time steps and are expected to encounter improved positions for the swarm; if they do, they assume the role of heavy swarm leaders, and so on. Convergence analysis and numerical simulations demonstrate the effectiveness of the SBGD method as a global optimizer.
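The following minimal Python sketch illustrates the mechanism described above on a 1-D landscape; the particular step-size rule h(x,m) and the mass-transfer schedule are simplified assumptions for illustration, not the protocol analyzed in the talk.

```python
import numpy as np

def sbgd_sketch(f, grad_f, x0, n_steps=200, h_max=0.5):
    """Toy swarm-based gradient descent in 1-D: heavier agents take smaller
    steps (local refinement near minima), lighter agents take larger steps
    (exploration), and mass flows from agents on higher ground toward the
    current best agent.  The step-size and mass-transfer rules below are
    simplified assumptions, not the authors' exact protocol."""
    x = np.array(x0, dtype=float)          # agent positions
    m = np.full(len(x), 1.0 / len(x))      # equal initial masses, summing to 1
    for _ in range(n_steps):
        fx = np.array([f(xi) for xi in x])
        best = np.argmin(fx)
        # time-step protocol h(x, m): decreases with mass, so light agents explore
        h = h_max * (1.0 - 0.9 * m / m.max())
        x = x - h * np.array([grad_f(xi) for xi in x])
        # transfer part of the mass of agents on high ground to the leader
        loss = 0.1 * m * (fx - fx[best]) / (fx.max() - fx[best] + 1e-12)
        m = m - loss
        m[best] += loss.sum()
    return x[np.argmax(m)]                 # the heaviest agent marks the minimizer

# usage: a 1-D double-well landscape with a spurious local minimum
f = lambda z: (z**2 - 1.0)**2 + 0.3 * z
df = lambda z: 4.0 * z * (z**2 - 1.0) + 0.3
x_star = sbgd_sketch(f, df, x0=np.linspace(-2.0, 2.0, 10))
```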
09:45-10:15 Haim Avron, Tel Aviv University, Israel
Modern Tensor Factorizations and Applications to Longitudinal 'Omics Data Analysis
Many real-world datasets are inherently multi-way. Nevertheless, multidimensional data are often processed as two-dimensional arrays (matrices), ignoring the inherent higher-order structure. Arguably, matricization of higher-order data is such a common practice because of the ubiquity and strong theoretical foundations of matrix algebra. In this talk, I will discuss recent developments in multilinear algebra that establish an Eckart-Young-like best low-rank approximation result for the tensor tubal singular value decomposition (tSVDM), providing, for the first time, theoretical justification for the superiority of tensor-based approximations over matricization. I will then explain how we use the tSVDM to construct a principal component analysis (PCA) analog for 3rd-order tensors, which we refer to as the M-product based Tensor Component Analysis (TCAM). We derive optimality guarantees for TCAM, namely maximization of variance and minimization of distortion of the TCAM embedding. These guarantees position TCAM as a promising tool for multi-way data analysis tasks. We explore this utility in the context of analyzing high-dimensional, so-called "omics" data collected in longitudinal biological experiments.
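As a minimal illustration of the tubal decomposition underlying the talk, the sketch below computes a truncated t-SVD of a third-order tensor using the DFT along the third mode; the tSVDM discussed in the talk allows a general (unitary) transform M in place of the DFT, so this is only a special case shown for illustration.

```python
import numpy as np

def tsvd_truncate(A, k):
    """Truncated tubal SVD (t-SVD) of a 3rd-order tensor A (n1 x n2 x n3),
    here with the DFT as the transform along mode 3; the tSVDM allows a
    general unitary M in place of the DFT.  Returns a tubal-rank-k
    approximation of A."""
    A_hat = np.fft.fft(A, axis=2)                  # move to the transform domain
    B_hat = np.empty_like(A_hat)
    for i in range(A.shape[2]):                    # SVD of each frontal slice
        U, s, Vh = np.linalg.svd(A_hat[:, :, i], full_matrices=False)
        B_hat[:, :, i] = (U[:, :k] * s[:k]) @ Vh[:k, :]
    return np.real(np.fft.ifft(B_hat, axis=2))     # back to the original domain

# usage: a random tensor of low tubal rank is recovered almost exactly
rng = np.random.default_rng(0)
A = np.einsum('ir,rj,kr->ijk', rng.normal(size=(30, 3)),
              rng.normal(size=(3, 20)), rng.normal(size=(25, 3)))
err = np.linalg.norm(A - tsvd_truncate(A, 3)) / np.linalg.norm(A)
print(f"relative error of tubal-rank-3 approximation: {err:.2e}")
```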
10:15-10:45 Coffee
10:45-11:15 Alfred Bruckstein, Technion, Israel
On Distributed and Decentralized Spatial Task Allocation Problems
We consider a problem of geometric task allocation, in which a large swarm of simple mobile agents must detect the locations of targets in the plane and position themselves nearby according to local demand. The target locations are represented by a demand profile Φ(x,y), which also determines how many agents are needed at each location. The demand profile is a priori unknown to the agents of the swarm and is uncovered as they spread over the region of interest. The agents are identical, autonomous, oblivious and indistinguishable; they have a finite sensing range, cannot explicitly communicate or share information, and must all act according to the same local, sensing-based rule of motion, using only local information about Φ and about the positions of nearby agents. We consider an optimization-based approach to the problem, which results in attraction-repulsion type dynamics: repulsion encourages agents to spread out and explore the region so as to find the tasks, while attraction causes them to accumulate at task locations. The dynamics is simply an implementation of gradient descent minimizing a very natural "error" functional, and was tested extensively through numerical simulations. Snapshots of simulations with this approach will be presented and discussed.
Joint work with Michael Amir, Ariel Barel, Yaakov Bloch and Yigal Koifman.
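A minimal sketch of attraction-repulsion dynamics of this flavor is given below; the demand profile, kernels and weights are illustrative assumptions and do not reproduce the error functional from the talk.

```python
import numpy as np

def spread_step(P, phi, dt=0.05, r_sense=1.0):
    """One step of toy attraction-repulsion dynamics for agent positions P
    (n x 2): agents are attracted toward high values of the demand profile
    phi(x, y) and repelled from neighbours within their finite sensing range.
    The kernels and weights are illustrative choices only."""
    n = len(P)
    # attraction: finite-difference gradient ascent on the demand profile
    eps = 1e-3
    gx = (phi(P[:, 0] + eps, P[:, 1]) - phi(P[:, 0] - eps, P[:, 1])) / (2 * eps)
    gy = (phi(P[:, 0], P[:, 1] + eps) - phi(P[:, 0], P[:, 1] - eps)) / (2 * eps)
    force = np.stack([gx, gy], axis=1)
    # repulsion: push away from visible neighbours (short-range kernel)
    diff = P[:, None, :] - P[None, :, :]
    dist = np.linalg.norm(diff, axis=2) + np.eye(n)   # avoid division by zero
    visible = (dist < r_sense) & ~np.eye(n, dtype=bool)
    force += (diff / dist[:, :, None]**2 * visible[:, :, None]).sum(axis=1)
    return P + dt * force

# usage: agents spreading from the origin toward two demand peaks
phi = lambda x, y: np.exp(-((x - 2)**2 + y**2)) + np.exp(-((x + 2)**2 + y**2))
P = np.random.default_rng(1).normal(scale=0.3, size=(50, 2))
for _ in range(200):
    P = spread_step(P, phi)
```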
11:15-11:45 Yonina Eldar, Weizmann Institute of Science, Israel
Model Based Deep Learning: Applications to Imaging and Communications
Deep neural networks provide unprecedented performance gains in many real-world problems in signal and image processing. Despite these gains, the future development and practical deployment of deep networks are hindered by their black-box nature, i.e., their lack of interpretability, and by the need for very large training sets.
On the other hand, signal processing and communications have traditionally relied on classical statistical modeling techniques that utilize mathematical formulations representing the underlying physics, prior information and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies, and may lead to poor performance when real systems display complex or dynamic behavior. Here we introduce various approaches to model-based learning, which merge parametric models with optimization tools and classical algorithms, leading to efficient, interpretable networks trained from reasonably sized training sets. We will consider applications of such model-based deep networks to image deblurring, image separation, super-resolution in ultrasound and microscopy, and efficient communication systems, and finally we will see how model-based methods can also be used for efficient diagnosis of COVID-19 from X-ray and ultrasound images.
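A standard instance of this model-based paradigm is algorithm unrolling; the sketch below unrolls ISTA for sparse recovery with learned step sizes and thresholds (PyTorch). It is a generic illustration under these assumptions, not one of the specific networks presented in the talk.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """Generic model-based network: K iterations of ISTA for y = A x + noise,
    unrolled into layers with learned step sizes and thresholds."""
    def __init__(self, A, n_layers=10):
        super().__init__()
        self.register_buffer("A", A)                      # known forward model
        self.step = nn.Parameter(torch.full((n_layers,), 0.1))
        self.thresh = nn.Parameter(torch.full((n_layers,), 0.05))

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for t, lam in zip(self.step, self.thresh):
            # gradient step on the data-fit term, then soft-thresholding
            x = x - t * (x @ self.A.T - y) @ self.A
            x = torch.sign(x) * torch.relu(x.abs() - lam)
        return x

# usage: one training step on synthetic sparse-recovery data
A = torch.randn(30, 100) / 30**0.5
net = UnrolledISTA(A)
x_true = torch.randn(64, 100) * (torch.rand(64, 100) < 0.1)
y = x_true @ A.T + 0.01 * torch.randn(64, 30)
loss = nn.functional.mse_loss(net(y), x_true)
loss.backward()
```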
11:45-12:00 Abarbanel Prize Ceremony
12:00-12:30 Prize lecture: Yoel Shkolnisky, Tel Aviv University, Israel
Electron Microscopy – Introduction for Mathematicians
Cryo-electron microscopy is a method for recovering the three-dimensional structure of molecules, and it is revolutionizing our biological understanding. I will give an introduction to the method, present its current key mathematical challenges, and give some examples of the mathematical ideas underlying its data-processing algorithms.
12:30-14:00 Lunch
14:00-14:30 Amir Boag, Tel Aviv University, Israel
Fast Direct Solution of Scattering Problems Using Generalized Source Integral Equations
Recent years have seen increasing interest in the development of fast direct integral-equation solvers. These do not rely on the convergence of iterative procedures to obtain the solution; instead, they compute a compressed factorized form of the impedance matrix resulting from the discretization of the underlying integral equation. The compressed form can then be applied to multiple right-hand sides at a relatively low additional cost. The most common class of direct integral-equation solvers exploits the rank deficiency of off-diagonal blocks of the impedance matrix in order to express them in compressed form. However, such rank deficiency is inherent only to problems that are small compared to the wavelength, or of reduced dimensionality, e.g., elongated and quasi-planar problems.
The present work proposes a class of Generalized Source Integral Equation (GSIE) formulations that aim to extend the range of problems exhibiting such inherent rank deficiency. The new formulation effectively reduces the problem's dimensionality and thus allows for efficient low-rank matrix compression. When the formulation is used with a hierarchical matrix compression and factorization algorithm, a fast direct solver is obtained. The computational bottlenecks introduced by the proposed generalized formulation are reduced by using non-uniform grid (NG) sampling techniques. The NG techniques, originally developed for fast iterative solvers, are employed for efficient computation of the fields produced by the generalized sources. The formulation's properties and limitations are studied, and its use for the development of fast direct solvers is showcased.
Joint work with Arkadi Sharshevsky, Dor Zvulun, and Yaniv Brick.
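The rank deficiency exploited by such solvers can be seen in a few lines: the sketch below (an illustration only, not the GSIE construction) compresses the off-diagonal block of a Helmholtz-kernel matrix between two well-separated point clusters by a truncated SVD.

```python
import numpy as np

def compress_block(Z, tol=1e-6):
    """Low-rank compression of an impedance-matrix block by truncated SVD,
    the basic building block of hierarchical (H-matrix type) direct solvers."""
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    k = int(np.searchsorted(-s, -tol * s[0]))      # rank at relative tolerance
    return U[:, :k] * s[:k], Vh[:k, :]             # Z ~ (U S) @ Vh, rank k

# usage: an off-diagonal block between two well-separated clusters of points
# for a 2-D Helmholtz-type kernel is strongly rank-deficient
rng = np.random.default_rng(0)
src = rng.uniform(0, 1, size=(400, 2))             # source cluster
obs = rng.uniform(0, 1, size=(400, 2)) + [10, 0]   # distant observer cluster
k_wave = 2 * np.pi                                  # about one wavelength per unit
R = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=2)
Z = np.exp(1j * k_wave * R) / R                     # off-diagonal kernel block
L, Rfac = compress_block(Z)
print(L.shape[1], "columns suffice out of", Z.shape[1])
```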
TALK CANCELLED! Eran Treister, Ben-Gurion University of the Negev, Israel
A Hybrid Shifted Laplacian Multigrid and Domain Decomposition Preconditioner for the Elastic Helmholtz Equations
The shifted Laplacian multigrid method is a well-known approach for preconditioning the indefinite linear system arising from the discretization of the acoustic Helmholtz equation, which is used to model wave propagation in the frequency domain. However, in some cases the acoustic equation is not sufficient for modeling the physics of wave propagation, and one has to consider the elastic Helmholtz equation. Such a case arises in geophysical seismic imaging applications, where the earth's subsurface is the elastic medium. The elastic Helmholtz equation is much harder to solve than its acoustic counterpart, partly because it is three times larger and partly because it models more complicated physics. In particular, the elasticity operator contains a grad-div term whose large null space makes the equation harder to solve with methods designed for the acoustic Helmholtz equation. Despite this, very few solvers are available for the elastic equation compared to the array of solvers available for the acoustic one. In this work, we extend the shifted Laplacian approach to the elastic Helmholtz equation by combining the complex-shift idea with approaches for linear elasticity. We provide local Fourier analysis and numerical evidence that the convergence rate of our method is independent of Poisson's ratio. Moreover, to better handle the problem size, we present a hybrid multigrid and domain decomposition preconditioner, in which the local nature of the shifted Laplacian works in synergy with the decomposition of the problem into local sub-domain problems. We demonstrate the efficiency and properties of our solver through numerical experiments for problems with heterogeneous media in two and three dimensions.
Joint work with Rachel Yovel.
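Although the talk is cancelled, the central idea of the abstract is easy to illustrate: the sketch below builds a 1-D acoustic Helmholtz matrix and uses its complex-shifted counterpart, factorized once, as a preconditioner for GMRES (SciPy). The acoustic 1-D setting and the direct factorization standing in for a multigrid cycle are simplifying assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def helmholtz_1d(n, k, shift=0.0):
    """1-D acoustic Helmholtz operator -u'' - (1 + i*shift) k^2 u on the unit
    interval with Dirichlet ends.  With shift > 0 this is the complex-shifted
    Laplacian used as a preconditioner."""
    h = 1.0 / (n + 1)
    lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
    return (lap - (1 + 1j * shift) * k**2 * sp.eye(n)).tocsc()

n, k = 400, 40.0
A = helmholtz_1d(n, k)                        # indefinite system to solve
M = spla.splu(helmholtz_1d(n, k, shift=0.5))  # shifted operator, factorized once
b = np.zeros(n, dtype=complex); b[n // 2] = 1.0
x, info = spla.gmres(A, b, M=spla.LinearOperator((n, n), matvec=M.solve, dtype=complex))
```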
14:30-15:00 Barak Sober, Hebrew University of Jerusalem, Israel
Exploring Self-Similarity in Neural Network Approximation Spaces
In the effort to quantify the success of neural networks in deep learning and other applications, there is great interest in understanding which functions are efficiently approximated by the outputs of neural networks. By now, a variety of results show that a wide range of functions can be approximated, sometimes with exponential accuracy (in terms of the number of parameters used), by such outputs. In this talk, I will present two recent results showing that self-similarity in the approximated function can be captured very efficiently by neural networks. Specifically, we will discuss the class of iterated function systems and the indicators of their resulting (fractal) sets, as well as refinable functions, which are prevalent in the construction of wavelet bases.
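For readers unfamiliar with iterated function systems, the sketch below samples the attractor of a simple IFS (the Sierpinski triangle) by the chaos game; it illustrates the kind of self-similar set whose indicator function the talk considers, and is not tied to the neural-network constructions themselves.

```python
import numpy as np

def ifs_attractor(maps, n_points=100_000, rng=None):
    """Sample the attractor of an iterated function system (IFS) by the
    "chaos game": repeatedly apply a randomly chosen contraction.  The
    indicator of the resulting fractal set is the kind of self-similar
    function discussed in the talk."""
    rng = np.random.default_rng(rng)
    x = np.zeros(2)
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        A, b = maps[rng.integers(len(maps))]
        x = A @ x + b
        pts[i] = x
    return pts

# usage: the three contractions whose attractor is the Sierpinski triangle
half = 0.5 * np.eye(2)
sierpinski = [(half, np.zeros(2)),
              (half, np.array([0.5, 0.0])),
              (half, np.array([0.25, 0.5]))]
pts = ifs_attractor(sierpinski, n_points=20_000)
```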
15:00-15:30 Coffee
15:30-16:00 Gil Ariel, Bar-Ilan University, Israel
Self-Organized Criticality in Stochastic Networks
Many complex systems in nature seem to be poised at criticality, i.e., close to a phase transition, bifurcation point, or other singularity. Examples include earthquakes, financial markets, forest fires, flocks of birds, and more. Typically, criticality occurs only at a particular set of parameters, so the ubiquity of critical systems in nature is surprising. One possible explanation is self-organized criticality, which suggests a mechanism by which the dynamics of a system intrinsically evolve, or self-organize, into the critical state. Thus, there is no need to tune parameters to precise values; in other words, the critical point is an attractor.
The talk will describe the concept of self-organized criticality and present a new route to it, obtained through temporal fluctuations in stochastic networks. As an example, we present applications to epidemic models over dynamic scale-free graphs, offering a possible explanation for why some epidemics that we would expect to die out do not.
Joint work with Yoram Louzoun and Inbar Seroussi.
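As a point of reference for the epidemic application, the sketch below simulates a discrete-time SIS epidemic on a static Barabasi-Albert scale-free graph (networkx); the temporal network fluctuations that drive the self-organized criticality discussed in the talk are not modeled here.

```python
import numpy as np
import networkx as nx

def sis_on_graph(G, beta=0.05, gamma=0.1, steps=500, rng=None):
    """Discrete-time SIS epidemic on a graph: each infected node infects each
    susceptible neighbour with probability beta and recovers with probability
    gamma per step.  A static scale-free graph is used for illustration."""
    rng = np.random.default_rng(rng)
    A = nx.to_numpy_array(G)
    infected = rng.random(len(G)) < 0.05            # small initial outbreak
    history = []
    for _ in range(steps):
        pressure = A @ infected                     # number of infected neighbours
        p_infect = 1 - (1 - beta) ** pressure
        new_inf = (~infected) & (rng.random(len(G)) < p_infect)
        recover = infected & (rng.random(len(G)) < gamma)
        infected = (infected | new_inf) & ~recover
        history.append(infected.mean())
    return np.array(history)

# usage: infection prevalence over time on a scale-free graph
prevalence = sis_on_graph(nx.barabasi_albert_graph(2000, 3, seed=0), rng=1)
```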
16:00-16:30 Gilad Lerman, University of Minnesota, USA
An Unpooling Layer for Graph Generation
We propose a novel and trainable graph unpooling layer for effective graph generation. The unpooling layer receives an input graph with features and outputs an enlarged graph with the desired structure and features. We prove that the output graph of the unpooling layer remains connected and that, for any connected graph, there exists a sequence of unpooling layers that can produce it from a 3-node graph. We apply the unpooling layer within the generator of a generative adversarial network, as well as within the decoder of a variational autoencoder. We give extensive experimental evidence demonstrating the competitive performance of our proposed method on synthetic and real data. This is joint work with Yinglong Guo and Dongmian Zou.
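The connectivity-preserving structure of such an unpooling step can be sketched without any learning: the toy function below splits a node into two connected copies and redistributes its edges; the actual layer learns how to assign features and edges, which this sketch replaces with simple heuristics.

```python
import networkx as nx

def unpool_once(G, node, new_node):
    """Toy, non-learned version of one unpooling step: split `node` into
    (`node`, `new_node`), hand half of its neighbours to the new node, and
    keep an edge between the two so the graph stays connected."""
    H = G.copy()
    nbrs = list(H.neighbors(node))
    H.add_node(new_node)
    for v in nbrs[len(nbrs) // 2:]:             # move half of the edges
        H.remove_edge(node, v)
        H.add_edge(new_node, v)
    H.add_edge(node, new_node)                  # parent-child edge keeps H connected
    return H

# usage: grow a graph from a 3-node seed by repeated unpooling
G = nx.path_graph(3)
for t in range(5):
    G = unpool_once(G, node=t % G.number_of_nodes(), new_node=G.number_of_nodes())
assert nx.is_connected(G)
```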
16:30-17:00 Wine & Cheese