Jeremias Sulam
For thousands of years, explorers were inspired by the sight of uncharted shores, or by the defiant look of new and higher peaks, right after having conquered the last. Others rejoiced at the discovery of a new star cruising across the sky, and delighted in realizing that they could predict where the bright dot would be with the passing of time.
I'm fascinated by our understanding of the information contained in data, from the image of a mountain peak to immunohistochemistry images in digital pathology. This understanding is often formalized through the construction of models that capture the information in these different data sources. When successful, these constructions can be deployed to tackle inverse problems of different kinds, prediction, clustering, and other machine learning tasks, and more. I'm particularly interested in the responsible use of machine learning, studying aspects of robustness and interpretability.
I am an Assistant Professor in the Biomedical Engineering Department at Johns Hopkins University, with secondary appointments in the Computer Science Department and the Department of Applied Mathematics and Statistics. I'm also affiliated with the Mathematical Institute for Data Science (MINDS), the Center for Imaging Science (CIS), and the Kavli Neuroscience Discovery Institute. I received my Bioengineering degree from UNER (Argentina) in 2013, and my PhD in Computer Science from the Technion in 2018, advised by Miki Elad. I am a recipient of the National Science Foundation's CAREER Award. My research interests focus on signal and image processing, sparsity-inspired modeling, machine learning, and their applications to the biomedical sciences.
Contact: Office 320B, Clark Hall, Homewood Campus (Baltimore, MD)
Email: jsulam at jhu dot edu
Funding
I am very grateful to the NSF, NIH, DARPA, Cisco Research, Canon Medical Research, and the Toffler Charitable Trust for sponsoring part of our research.
News!
September 2024 - Honored to be invited to participate in the Computational Harmonic Analysis in Data Science and Machine Learning workshop at Casa Matemática Oaxaca.
July 2024 - Ambar defends his PhD thesis!
June 2024 - Fun to participate and speak at the Mathematics of Deep Learning workshop at Casa Matemática Oaxaca.
June 2024 - Honored to receive both a Discovery Award and a Catalyst Award from Hopkins' Office of Research!
April 2024 - Our paper on Learned Proximal Networks was presented at ICLR. Check out the project website for this paper!
December 2023 - Two papers presented at NeurIPS 2023, on estimating and controlling for fairness under missing data, and on rethinking impossibility results for adversarial robustness.
September 2023 - Check out my talk on Estimating and Controlling for Fairness via Sensitive Attribute Predictors at the workshop on Algorithms, Fairness and Equity at SLMath Institute @ Berkeley.
January 2023 - Thrilled to receive the NSF CAREER award on Interpretable and Robust Machine Learning Models. Read more here and here.
December 2022 - Presented our recent work on Over-Realized Dictionary Learning, published in JMLR this year, in the Journal Track of NeurIPS '22.
November 2022 - Jacopo's abstract wins the Trainee Research Prize at the 2022 RSNA meeting! Read more about his work here.
September 2022 - Had a blast organizing and presenting at our mini symposium on Mathematics of Interpretable Machine Learning hosted at SIAM's meeting on Mathematics of Data Science.
September 2022 - Gave an invited talk at the International Conference on Computational Harmonic Analysis on Overparametrized and Robust Sparse Models (slides here).
New paper with Joshua Agterberg presented at AISTATS: Entrywise Recovery Guarantees for Sparse PCA via Sparsistent Algorithms.
Our paper, led by Zhihui Zhu, on the global optimization landscape of Neural Collapse was presented as a spotlight at NeurIPS 2021.
Zhenzhen's short paper on Label Cleaning MIL in Digital Pathology gets a talk at Medical Imaging Meets NeurIPS 2021.
Jeff Ruffolo's short paper on language models and weakly supervised learning gets a talk at the Machine Learning in Structural Biology workshop at NeurIPS 2021.
July 2021 - Jacopo and Alex's paper on Hierarchical Games for Image Explanations wins the best paper award at the ICML Workshop on Interpretable Machine Learning for Healthcare. You can find the 5-minute presentation here (and our repo here too).
October 2020 - Kuo-Wei's paper was presented at MICCAI, including an open-source implementation of our learned proximal networks for QSM, as well as a new open dataset.
September 2020 - 3 papers accepted to NeurIPS.
June 2020 - Interested in over-parametrization in machine learning? In a recent preprint, we show how over-realization can help recovery in dictionary learning.
June 2020 - Our joint grant with Zhihui Zhu on Deep Sparse Models was awarded by the NSF.
May 2019 - Presented an invited talk on sparse modeling and deep learning at IPAM's Deep Geometric Learning of Big Data and Applications workshop. Slides and video of the talk can be found here!
Feb 2019 - Thanks to Demba Ba for the invitation and opportunity to talk at Harvard EE Seminar!
Thanks to Rich Baraniuk and Tan Nguyen for the invitation to give a talk at the workshop on Integrating Deep Learning Theories @ NeurIPS '18. Check out the slides in TALKS.
A PyTorch implementation of Multi-Layer ISTA and FISTA networks for CNNs is now available @ GitHub.
October 2018: I have joined the BME Department and the MINDS institute at Johns Hopkins University!
July 2018 - Our new review paper, Theoretical Foundations of Deep Learning via Sparse Representations: A Multilayer Sparse Model and Its Connection to Convolutional Neural Networks, has just been featured in Signal Processing Magazine.
June 2018 - Our paper on Multi-Layer Sparse Modeling was just accepted to IEEE-TSP!
April 2018 - Traveling to ICASSP to present our work Projecting onto the Multi-Layer Convolutional Sparse Coding Model at the Special Session on Learning Signal Representation using Deep Learning!
November 2017 - Thanks to Gitta Kutyniok and all the organizers for the invitation to present our work at the CoSIP Intense Course on Deep Learning! Here are the slides of my talk.