As AI percolates through diverse areas of society, new problems arise at the intersection of this powerful technology and different dimensions of social concern. As with any new tool, the use of AI in sensitive domains must be approached with caution: first, to ensure that the fundamental and theoretical principles behind these methods are thoroughly understood, and second, to safeguard transparency, moral values, and adherence to regulatory constraints. Unfortunately, neither of these desiderata is fully satisfied today.
My research lies at the intersection of computer science, applied mathematics, and the biomedical sciences, and I work on problems in machine learning (ML) and biomedical imaging. I focus on creating frameworks and algorithms that provide theoretical guarantees and practical tools for robust and interpretable ML, ensuring that models are not only accurate but also reliable and explainable. In a nutshell, my research agenda focuses on i) analyzing state-of-the-art systems, typically given by deep learning models, to theoretically understand their capabilities, limitations, and vulnerabilities; ii) designing new methods to address limitations of off-the-shelf systems, chiefly on aspects of interpretability, robustness, and fairness; and iii) designing new methodology to address challenges in the biomedical sciences, from imaging problems in MRI and microscopy, to prognosis analysis and prediction in computational cancer biology, to antibody docking in protein folding.
I am the William R. Brody Faculty Scholar and an Assistant Professor in the Biomedical Engineering Department at Johns Hopkins University, and hold secondary appointments in the Computer Science Department and the Department of Applied Mathematics and Statistics. I'm also a core faculty member with the Mathematical Institute for Data Science (MINDS), the Center for Imaging Science (CIS), the Kavli Neuroscience Discovery Institute, and the Data Science and AI (DSAI) Institute. I received my Bioengineering degree from UNER (Argentina) in 2013, and my PhD in Computer Science from the Technion in 2018 under the supervision of Miki Elad.
I am a recipient of the National Science Foundation’s Early CAREER Award and the Best Graduates award from the Argentinean Academy of Engineering.
Contact: Office 320B, Clark Hall, Homewood Campus (Baltimore, MD)
Email: jsulam at jhu dot edu
I am very grateful to NSF, NIH, DARPA, CISCO Research, CANON Medical Research, the Toffler Charitable Trust, and the Chan Zuckerberg Initiative for sponsoring part of our research.
You can find my complete Curriculum Vitae here.
News!
July 2025 - I'm honored to be appointed as the William R. Brody Faculty Scholar in the School of Engineering.
October 2024 - Honored to receive the Johns Hopkins Catalyst Award this year!
September 2024 - Honored to be invited to participate in the Computational Harmonic Analysis in Data Science and Machine Learning workshop at Casa Matemática Oaxaca.
July 2024 - Ambar defends his PhD thesis!
June 2024 - Fun to participate in and speak at the Mathematics of Deep Learning workshop at Casa Matemática Oaxaca.
June 2024 - Honored to receive both a Discovery Award and a Catalyst Award from Hopkins' Office for Research!
April 2024 - Our paper on Learned Proximal Networks was presented at ICLR. Check out the project website for this paper!
December 2023 - Two papers presented at NeurIPS 2023, on estimating and controlling for fairness under missing data, and on rethinking impossibility results for adversarial robustness.
September 2023 - Check out my talk on Estimating and Controlling for Fairness via Sensitive Attribute Predictors at the workshop on Algorithms, Fairness and Equity at SLMath Institute @ Berkeley.
January 2023 - Thrilled to receive the NSF CAREER award on Interpretable and Robust Machine Learning Models. Read more here and here.
December 2022 - Presented our recent work on Over-Realized Dictionary Learning, published in JMLR this year, in the Journal Track of NeurIPS '22.
November 2022 - Jacopo's abstract wins the Trainee Research Prize at the RSNA meeting 2022! Read more about his work here.
September 2022 - Had a blast organizing and presenting at our mini symposium on Mathematics of Interpretable Machine Learning hosted at SIAM's meeting on Mathematics of Data Science.
September 2022 - Gave an invited talk at the International Conference on Computational Harmonic Analysis on Overparametrized and Robust Sparse Models (slides here).
New paper with Joshua Agterberg presented at AISTATS: Entrywise Recovery Guarantees for Sparse PCA via Sparsistent Algorithms.
Our paper, led by Zhihui Zhu, on the global optimization landscape of Neural Collapse was presented as a spotlight at NeurIPS 2021.
Zhenzhen's short paper on Label Cleaning MIL in Digital Pathology gets a talk at Medical Imaging Meets NeurIPS 2021.
Jeff Ruffolo's short paper on language models and weakly supervised learning gets a talk at Machine Learning in Structural Biology, at NeurIPS 2021.
July 2021 - Jacopo's and Alex's paper on Hierarchical Games for Image Explanations wins the best paper award at the ICML Workshop on Interpretable Machine Learning for Healthcare. You can find the 5-minute presentation here (and our repo here too).
October 2020 - Kuo-Wei's paper was presented at MICCAI, including an open-source implementation of our learned proximal networks for QSM, as well as a new open dataset.
September 2020 - 3 papers accepted to NeurIPS.
June 2020 - Interested in over-parametrization in machine learning? In a recent pre-print, we show how over-realization can help recovery in dictionary learning.
June 2020 - Our joint grant with Zhihui Zhu on Deep Sparse Models was awarded by the NSF.
May 2019 - Presented an invited talk on sparse modeling and deep learning at IPAM's Deep Geometric Learning of Big Data and Applications. Slides and Video of the talk can be found here!
Feb 2019 - Thanks to Demba Ba for the invitation and opportunity to talk at Harvard EE Seminar!
Thanks to Rich Baraniuk and Tan Nguyen for the invitation to give a talk at the workshop on Integrating Deep Learning Theories @ NeurIPS '18. Check the slides in TALKS.
A PyTorch implementation of Multi-Layer ISTA and FISTA networks for CNNs is made available @ GitHub.
October 2018 - I have joined the BME Department and the MINDS Institute at Johns Hopkins University!
July 2018 – Our new review paper, Theoretical Foundations of Deep Learning via Sparse Representations: A Multilayer Sparse Model and Its Connection to Convolutional Neural Networks, has just been featured in Signal Processing Magazine.
June 2018 – Our paper on Multi-Layer Sparse Modeling was just accepted to IEEE-TSP!
April 2018 – Travelling to ICASSP to present our work Projecting onto the Multi-Layer Convolutional Sparse Coding Model at the Special Session on Learning Signal Representation using Deep Learning!
November 2017 – Thanks to Gitta Kutyniok and all the organizers for the invitation to present our work at the CoSIP Intense Course on Deep Learning! Here are the slides of my talk.