Welcome
Hi, I'm Alex Lin! I am a computer science PhD candidate at the School of Engineering and Applied Sciences at Harvard University. My advisor is Prof. Demba Ba. The goal of my research is to develop viable machine learning methods for biomedical applications. You can learn more about my focus areas here and my publications here.
I believe that impactful research requires interdisciplinary perspectives. Therefore, my work often blends ideas from different fields, such as biology/medicine, statistics, and computer science. Some central themes of my research include:
Biomedical inverse problems: Many important questions in biology and medicine can be cast as "inverse problems", in which observable data are used to infer hidden signals of interest. For example, clinicians use a patient's observed symptoms to determine their unknown disease state, while radiologists collect light/sound/radio measurements from medical scanners (e.g. x-rays/ultrasounds/MRIs) to construct anatomical images for diagnosis. I work on creating novel methodology to help solve inverse problems across diverse areas of biomedicine -- from medical imaging to computational neuroscience to drug discovery to population genetics.
Probabilistic modeling & uncertainty quantification: Probabilistic models provide an elegant framework for translating real-world problems into the language of statistics. When tackling inverse problems in high-risk biomedical applications, it is important not only to find a solution, but also to quantify a model's uncertainty (or trustworthiness) about that solution. I develop novel techniques for Bayesian inference, a powerful suite of statistical tools that combine domain knowledge with observed data to provide uncertainty quantification within probabilistic models.
Scalable computational algorithms: In recent years, we have witnessed an explosion of new technology for data collection across many areas of biology and medicine. Inference algorithms that incorporate these data sources must be able to scale to both high dimensions and large dataset sizes. A central focus of my research is to design new and efficient algorithms that can leverage advances in hardware (e.g. parallel computing, GPUs). One of the most exciting modern tools is deep learning, which has brought unprecedented scale to many domains.
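To make these themes concrete, here is a minimal sketch of how they fit together on a toy linear inverse problem. It is purely illustrative (the forward operator, noise levels, and variable names are invented for this example, not taken from any of the projects above): we observe noisy measurements y = A x + noise, place a Gaussian prior on the hidden signal x, and compute the exact Gaussian posterior, whose mean is the point estimate and whose covariance quantifies uncertainty.

```python
import numpy as np

# Toy linear inverse problem: recover a hidden signal x from
# observations y = A @ x + noise. All quantities are synthetic.
rng = np.random.default_rng(0)
n_obs, n_hidden = 20, 5
A = rng.normal(size=(n_obs, n_hidden))   # forward (measurement) operator
x_true = rng.normal(size=n_hidden)       # hidden signal to be inferred
sigma = 0.1                              # observation noise std
y = A @ x_true + sigma * rng.normal(size=n_obs)

# With a Gaussian prior x ~ N(0, tau^2 I) and Gaussian likelihood,
# the posterior p(x | y) is Gaussian with closed-form mean/covariance.
tau = 1.0
Sigma = np.linalg.inv(A.T @ A / sigma**2 + np.eye(n_hidden) / tau**2)
mu = Sigma @ A.T @ y / sigma**2

# mu is the Bayesian point estimate; the diagonal of Sigma gives
# a per-coordinate uncertainty (posterior standard deviation).
std = np.sqrt(np.diag(Sigma))
```

In real biomedical settings the closed-form inverse above is unavailable or too expensive at high dimension, which is where approximate inference and scalable, hardware-aware algorithms come in.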
I can be contacted at the following email address:
alin [at] seas.harvard.edu
Recent News
Oct 2023: I gave an invited talk (along with Alex Lu and Stanley Hua) on Disentangling Meaningful Signal from Experimental Noise within Deep Learning Models at the Broad Institute's Models Inference & Algorithms (MIA) seminar.
Jun 2023: Our paper on using energy-based neural networks to learn log-concave densities has been accepted at AABI 2023! Another paper on understanding the bias of text-to-image models (e.g. Stable Diffusion) has been accepted at the ICML 2023 Workshop on Challenges of Deploying Generative AI!
Apr 2023: Our paper on probabilistic unrolling for scalable learning of latent Gaussian models has been accepted at ICML 2023!
Nov 2022: Our paper on improving generalization of deep learning for microscopy images was accepted as an oral presentation at MLCB 2022!
Sep 2022: I attended the 9th Heidelberg Laureate Forum in Germany.
Jun 2022: Our journal paper Covariance-Free Sparse Bayesian Learning was accepted by IEEE Transactions on Signal Processing!
Jun 2022: I began my internship at Microsoft Research with Dr. Alex Lu -- looking forward to a great summer!
May 2022: I presented our two accepted papers in-person at IEEE ICASSP 2022 in Singapore.
May 2022: I passed my qualifying exam and am now a PhD candidate! Thank you to my committee members Prof. Finale Doshi-Velez, Prof. Yves Atchadé, Prof. Sham Kakade, and Prof. Demba Ba.
Mar 2022: I was awarded an NDSEG Fellowship!
Feb 2022: Our abstract on Bayesian Sensitivity Encoding Enables Parameter-Free, Highly Accelerated Joint Multi-Contrast Reconstruction was accepted as an oral presentation at ISMRM 2022.
Jan 2022: We had two papers accepted at IEEE ICASSP 2022 on (1) High-Dimensional Sparse Bayesian Learning without Covariance Matrices and (2) Mixture Model Auto-Encoders: Deep Clustering through Dictionary Learning.