Keynotes

We are pleased to announce two outstanding keynote speakers:

  • Michael Unser, EPFL

Representer theorems for the design of deep neural networks and the resolution of continuous-domain inverse problems

Our purpose in this talk is to reinforce the (deep) connection between splines and learning techniques. To that end, we first describe a recent representer theorem which states that the extremal points of a broad class of linear inverse problems with generalized total-variation regularization are adaptive splines whose type is tied to the underlying regularization operator L. For instance, when L is the n-th derivative (resp., the Laplacian) operator, the optimal reconstruction is a non-uniform polynomial (resp., polyharmonic) spline with the smallest possible number of adaptive knots. The crucial observation is that such continuous-domain solutions are intrinsically sparse, and hence compatible with the kind of formulations (and algorithms) used in compressed sensing. We then make the link with current learning techniques by applying the theorem to optimize the shape of the individual activations in a deep neural network. By selecting the regularization functional to be the second-order total variation, we obtain an “optimal” deep-spline network whose activations are piecewise-linear splines with a few adaptive knots. Since each spline knot can be encoded with a ReLU unit, this provides a variational justification of the popular ReLU architecture. It also suggests new computational challenges for determining the optimal activations, which involve linear combinations of ReLUs.
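Schematically, and with the precise hypotheses deferred to Ref. 1 below, the representer theorem concerns problems of the form
\[
\min_{f}\;\sum_{m=1}^{M} E\big(y_m,\langle h_m, f\rangle\big) \;+\; \lambda\,\|\mathrm{L}f\|_{\mathcal{M}},
\]
whose extremal-point solutions are adaptive splines
\[
f(x) \;=\; \sum_{k=1}^{K} a_k\,\rho_{\mathrm{L}}(x-x_k) \;+\; p(x), \qquad K \le M,
\]
where \(\rho_{\mathrm{L}}\) is a Green's function of the regularization operator L, \(p\) lies in its finite-dimensional null space, and \(\|\mathrm{L}f\|_{\mathcal{M}} = \sum_k |a_k|\) is the sparsity-promoting gTV term.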

References

1. M. Unser, J. Fageot, J.P. Ward, "Splines Are Universal Solutions of Linear Inverse Problems with Generalized TV Regularization," SIAM Review, vol. 59, no. 4, pp. 769-793, December 2017.

2. M. Unser, "A Representer Theorem for Deep Neural Networks," arXiv:1802.09210 [stat.ML], 2018.
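As a concrete illustration of the last point of the abstract above, the following NumPy sketch (an independent illustration, not material from the talk; the helper names are placeholders) encodes a piecewise-linear spline with a few adaptive knots as a linear combination of ReLU units and checks numerically that its second-order total variation equals the l1-norm of the ReLU weights.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def spline_from_relus(x, b0, b1, knots, weights):
    """Piecewise-linear spline: an affine term plus one ReLU unit per adaptive knot."""
    y = b0 + b1 * x
    for t, a in zip(knots, weights):
        y += a * relu(x - t)
    return y

# Example: 3 adaptive knots -> 3 ReLU units.
knots   = np.array([-1.0, 0.5, 2.0])
weights = np.array([ 2.0, -3.0, 1.5])   # jumps of the first derivative at the knots
x = np.linspace(-3, 4, 2001)
y = spline_from_relus(x, b0=0.2, b1=1.0, knots=knots, weights=weights)

# Second-order total variation = sum of |derivative jumps| = ||weights||_1.
slopes = np.diff(y) / np.diff(x)
tv2_numeric = np.abs(np.diff(slopes)).sum()
print(tv2_numeric, np.abs(weights).sum())   # the two values agree up to grid/rounding error
```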

  • Jong Chul Ye, KAIST

Deep Convolutional Framelets for the Analysis of Deep Learning Approaches in Biomedical Image Reconstruction

Recently, deep learning approaches with various network architectures have achieved significant performance improvements over existing iterative reconstruction methods in various imaging problems. However, it is still unclear why these deep learning architectures work for specific inverse problems. Moreover, in contrast to the usual evolution of signal processing around classical theories, the link between deep learning and classical signal processing approaches, such as wavelets, nonlocal processing, and compressed sensing, is not yet well understood. To address these issues, here we show that the long-sought missing link is provided by convolution framelets, which represent a signal by convolving local and nonlocal bases. This discovery reveals the limitations of many existing deep learning architectures for inverse problems, and leads us to propose a novel theory for a deep convolutional framelet neural network. Using numerical experiments with various inverse problems such as x-ray CT, MRI, and ultrasound imaging, we demonstrate that the deep convolutional framelet neural network shows consistent improvements over existing architectures and has advantages in terms of the optimization landscape. This discovery suggests that the success of deep learning stems not from a magical black box, but rather from the power of a novel signal representation using a nonlocal basis combined with a data-driven local basis, which is indeed a natural extension of classical signal processing theory.
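To make the "local basis / nonlocal basis" picture concrete, here is a minimal NumPy sketch (an independent illustration, not the speaker's code; the function names are placeholders) of the Hankel-matrix structure underlying the framelet view: the rows of a Hankel matrix are local patches of the signal, the left singular vectors play the role of a nonlocal basis, the right singular vectors the role of a local (filter) basis, and a few such basis pairs suffice to reconstruct a structured signal.

```python
import numpy as np

def hankel(f, d):
    """(n-d+1) x d Hankel matrix whose rows are length-d patches of f
    (multiplying it on the right by a length-d filter correlates the filter with f)."""
    n = len(f)
    return np.stack([f[i:i + d] for i in range(n - d + 1)])

def dehankel(H, n):
    """Map a Hankel-like matrix back to a length-n signal by averaging anti-diagonals."""
    d = H.shape[1]
    acc, cnt = np.zeros(n), np.zeros(n)
    for i in range(H.shape[0]):
        acc[i:i + d] += H[i]
        cnt[i:i + d] += 1
    return acc / cnt

t = np.linspace(0, 1, 256)
f = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 9 * t)   # structured test signal

d = 16
H = hankel(f, d)
U, s, Vt = np.linalg.svd(H, full_matrices=False)   # U: nonlocal basis, Vt: local (filter) basis

r = 4                                              # keep a few (nonlocal, local) basis pairs
H_r = (U[:, :r] * s[:r]) @ Vt[:r]                  # the coefficients coupling the two bases sit in between
f_r = dehankel(H_r, len(f))
print(np.linalg.norm(f - f_r) / np.linalg.norm(f)) # tiny: the signal is low-rank in this factorization
```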