Below you can find links to the code for some of my projects. Feel free to download and use it.

Please contact me if you find bugs or struggle with applying the code to your data: leon.bungert@hcm.uni-bonn.de

Convergence Rates for Lipschitz Learning on Graphs

The code that computes empirical convergence rates for Lipschitz learning on graphs can be found in this GitHub repository. It is written in Python and uses the GraphLearning toolbox. Below we show some impressions of our results; please see our paper for additional details.
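For a flavor of what Lipschitz learning computes, here is a minimal toy sketch (illustrative only, not the repository code): on a path graph with the two endpoint labels fixed, iterating the graph infinity-Laplacian fixed point yields the absolutely minimal Lipschitz extension of the labels.

```python
import numpy as np

# Toy illustration (not the repository code): Lipschitz learning on a path
# graph with the two endpoint labels fixed. Iterating the fixed point
#     u(x) = ( max_{y~x} u(y) + min_{y~x} u(y) ) / 2
# on the unlabeled nodes yields the absolutely minimal Lipschitz extension
# of the labels, which on a path graph is linear interpolation.

n = 11
u = np.zeros(n)
u[0], u[-1] = 0.0, 1.0          # boundary labels at the endpoints

for _ in range(500):            # simple fixed-point (Gauss-Seidel) sweeps
    for i in range(1, n - 1):
        lo, hi = min(u[i - 1], u[i + 1]), max(u[i - 1], u[i + 1])
        u[i] = 0.5 * (lo + hi)

print(np.round(u, 3))           # approximately 0.0, 0.1, ..., 1.0
```

On general graphs the same fixed point is solved on k-nearest-neighbor neighborhoods instead of two path neighbors.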

In this GitHub repository we perform additional numerical experiments on percolation distances for Poisson point clouds, which relate to Lipschitz learning as explained in our second paper on this topic.

A Bregman Training Framework for Sparse Neural Networks

The code that implements our algorithms for training sparse neural networks in an inverse scale space manner can be found in this GitHub repository. It is written in Python and uses PyTorch. Below we show some impressions of our algorithms; please see our paper for additional details.
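The inverse scale space idea can be sketched with a linearized Bregman iteration on a sparse linear regression problem (illustrative only, not the repository code; all parameters are made up): a subgradient variable takes plain gradient steps, and the sparse weights are read off via soft shrinkage, so coefficients enter the model one by one, largest first.

```python
import numpy as np

# Linearized Bregman sketch for sparse regression (illustrative only).
# The subgradient variable v takes plain gradient steps; the sparse
# weights w are obtained from v by soft shrinkage.

rng = np.random.default_rng(0)

def shrink(v, lam):
    """Soft shrinkage, the proximal map of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Sparse ground truth: 3 of 20 coefficients are nonzero.
n, d = 100, 20
w_true = np.zeros(d)
w_true[[2, 7, 11]] = [3.0, -2.0, 1.5]
A = rng.standard_normal((n, d))
y = A @ w_true

lam, lr = 1.0, 0.5
v = np.zeros(d)                       # subgradient variable
for _ in range(1000):
    w = shrink(v, lam)                # sparse weights
    grad = A.T @ (A @ w - y) / n      # gradient of half the mean squared error
    v -= lr * grad                    # linearized Bregman update

w = shrink(v, lam)
print("recovered support:", np.flatnonzero(np.abs(w) > 0.1))
```

In the neural network setting the same update is applied groupwise to the weights of each neuron or convolutional kernel, which is what makes kernels appear one after another during training.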

Inverse scale space of convolutional filters

Our algorithm gradually adds descriptive kernels during training.

Convolutional features at epochs 0, 5, 20, and 100.

Unveiling an autoencoder

Our inverse scale space algorithm automatically discovers an autoencoder architecture for a denoising task.

CLIP: Cheap Lipschitz Training of Neural Networks

The code that implements our Cheap Lipschitz Training (CLIP) algorithm from our paper can be found in this GitHub repository. It is written in Python and uses PyTorch.

CLIP on a toy regression problem: Lipschitz regularization yields stable networks, whereas without regularization (right) one obtains large oscillations and large Lipschitz constants.
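The idea behind the regularization can be sketched as follows (a simplified stand-in, not the CLIP algorithm itself): the Lipschitz constant of a network f is estimated from the difference quotient |f(x) - f(y)| / ||x - y|| over pairs of inputs and penalized during training. CLIP maximizes this quotient by gradient ascent; the toy code below merely estimates it for a fixed random ReLU network by random search and compares it with the layer-norm upper bound.

```python
import numpy as np

# Simplified stand-in for the idea behind Lipschitz training (not the CLIP
# code): estimate the Lipschitz constant of a fixed random two-layer ReLU
# network from difference quotients on random pairs of nearby inputs.

rng = np.random.default_rng(1)

W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)
w2, b2 = rng.standard_normal(16), rng.standard_normal()

def f(x):
    """Two-layer ReLU network R^2 -> R."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def lipschitz_estimate(n_pairs=10000, radius=1.0):
    best = 0.0
    for _ in range(n_pairs):
        x = rng.uniform(-radius, radius, 2)
        y = x + 1e-2 * rng.standard_normal(2)     # nearby partner point
        best = max(best, abs(f(x) - f(y)) / np.linalg.norm(x - y))
    return best

L_hat = lipschitz_estimate()
L_bound = np.linalg.norm(w2) * np.linalg.norm(W1, 2)  # product of layer norms
print(f"estimated Lipschitz constant {L_hat:.2f} <= upper bound {L_bound:.2f}")
```

During training, an estimate of this kind is added to the loss as a penalty term, which keeps the learned network from developing large oscillations.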

Nonlinear Power Method for Proximal Operators and Neural Networks

The code that reproduces the results from the paper Nonlinear Power Method for Computing Eigenvectors of Proximal Operators and Neural Networks can be found in this GitHub repository. It is written in Python and MATLAB and partially requires the Operator Discretization Library (ODL).
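The power method itself is simple to state: repeatedly apply the proximal operator and normalize. A toy sketch (illustrative only, not the paper's code) with the proximal operator of lam*||.||_1, i.e. soft shrinkage:

```python
import numpy as np

# Nonlinear power iteration u <- prox(u) / ||prox(u)|| for the proximal
# operator of lam * ||.||_1, i.e. soft shrinkage (toy sketch, not the
# paper's code). A vector u is an eigenvector of the prox if
# prox(u) = mu * u for some eigenvalue mu.

def prox_l1(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(2)
lam = 0.1
u = rng.standard_normal(50)
u /= np.linalg.norm(u)              # normalized random initial vector

for _ in range(200):
    p = prox_l1(u, lam)
    u = p / np.linalg.norm(p)       # power step: apply prox, renormalize

# Check the eigenvector relation prox(u) = mu * u.
p = prox_l1(u, lam)
mu = float(u @ p)
print("eigenvalue:", round(mu, 4), "residual:", np.linalg.norm(p - mu * u))
```

In the paper the same iteration is run with proximal operators of regularizers such as total variation, and with trained denoising networks in place of the prox.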

Initial image, power iterations 100 and 600, and the converged eigenvector.

Robust Image Reconstruction with Misaligned Structural Information

The code that reproduces the results from the paper Robust Image Reconstruction with Misaligned Structural Information (IEEE Access) can be found in this GitHub repository. It is written in Python and requires the Operator Discretization Library (ODL).

Blind image fusion for hyperspectral imaging with the directional total variation

The code that reproduces the results from the paper Blind image fusion for hyperspectral imaging with the directional total variation (Inverse Problems) can be found in this GitHub repository. It is written in MATLAB.