Jeremias Sulam

For thousands of years, explorers were inspired by the sight of uncharted shores, or by the defiant look of new and higher peaks right after conquering the last. Others rejoiced at the discovery of a new star cruising across the sky, and thrived upon realizing that they could predict where the bright dot would be with the passing of time.

Me? I'm fascinated by our understanding of the information contained in signals: from the image of a mountain peak to immunohistochemistry images in digital pathology. This understanding is often formalized through the construction of models that capture the information contained in these different data sources. When successful, these constructions can be deployed to tackle inverse problems of different kinds, prediction, clustering, and other machine learning tasks, and more.

I am an Assistant Professor in the Biomedical Engineering Department at Johns Hopkins University, with a secondary appointment in the Computer Science Department. I'm also affiliated with the Mathematical Institute for Data Science (MINDS), the Center for Imaging Science (CIS), and Kavli. I received my Bioengineering degree from UNER (Argentina) in 2013, and my PhD in Computer Science from the Technion in 2018, advised by Miki Elad. My research focuses on signal and image processing, sparsity-inspired modeling, machine learning, and their applications to the biomedical sciences.

Contact: Office 320B, Clark Hall, Homewood Campus (Baltimore, MD)


Phone: 410-516-9776 / Fax: 410-516-4594


  • October 2020 - Kuo-Wei's paper was presented at MICCAI, including an open-source implementation of our learned proximal networks for QSM, as well as a new open dataset.

  • September 2020 - Three papers accepted to NeurIPS!

  • June 2020 - Interested in over-parametrization in machine learning? In a recent preprint, we show how over-realization can help recovery in dictionary learning.

  • June 2020 - Our joint grant with Zhihui Zhu on Deep Sparse Models was awarded by the NSF.

  • May 2019 - Presented an invited talk on sparse modeling and deep learning at IPAM's workshop on Deep Geometric Learning of Big Data and Applications. Slides and video of the talk can be found here!

  • Feb 2019 - Thanks to Demba Ba for the invitation and opportunity to talk at Harvard EE Seminar!

  • Thanks to Rich Baraniuk and Tan Nguyen for the invitation to give a talk at the workshop on Integrating Deep Learning Theories @ NeurIPS '18. Check the slides in TALKS.

  • A PyTorch implementation of Multi-Layer ISTA and FISTA networks for CNNs is now available @ GitHub.

  • October 2018: I have joined the BME Department and the MINDS institute at Johns Hopkins University!

  • July 2018 – Our new review paper, Theoretical Foundations of Deep Learning via Sparse Representations: A Multilayer Sparse Model and Its Connection to Convolutional Neural Networks, was just featured in the IEEE Signal Processing Magazine.

  • June 2018 – Our paper on Multi-Layer Sparse Modelling was just accepted to IEEE-TSP!

  • April 2018 – Travelling to ICASSP to present our work Projecting onto the Multi-Layer Convolutional Sparse Coding Model at the Special Session on Learning Signal Representation using Deep Learning!

  • November 2017 – Thanks to Gitta Kutyniok and all the organizers for the invitation to present our work at the CoSIP Intense Course on Deep Learning! Here are the slides of my talk.