I am a senior lecturer (assistant professor) of Computer Science at Ben-Gurion University, working in the areas of machine learning and deep learning.
I hold a BSc in Computer Engineering, an MSc in Electrical Engineering, and a PhD in Computer Science, all from the Technion — Israel Institute of Technology. I conducted my PhD research under the supervision of Prof. Alfred Bruckstein and Prof. Michael Elad. I was a postdoctoral fellow in the Computer Science Department at the Technion and a postdoctoral research associate in the Department of Electrical and Computer Engineering at Rice University, where I worked with Prof. Richard Baraniuk.
I had the great pleasure of co-organizing the 2021 and 2022 Workshops on the Theory of Overparameterized Machine Learning (TOPML). More recently, I served as an area chair at NeurIPS 2024 and 2025 and at ICML 2025.
My current research with my group has two main themes:
Foundational understanding of deep learning and overparameterized machine learning: Despite the practical success of deep learning, much of its inner workings and the properties of learned models remain insufficiently understood. Specifically, many deep neural networks are overparameterized models, i.e., they are highly complex, with many more parameters than training data examples, and they are trained to overfit or perfectly fit their training data. Such overparameterization contradicts classical machine learning guidelines and poses important new questions on the merits and limitations of modern common practice (a small numerical sketch of this regime follows this list).
Mathematical optimization algorithms for deep learning, machine learning and computer vision.
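As a brief illustration of the overparameterized regime described above, here is a minimal sketch (my own illustrative example, not code from any paper listed below; the dimensions and noise level are arbitrary assumptions) of minimum-norm linear regression with many more parameters than training samples. The fitted model interpolates the training data, i.e., attains essentially zero training error, yet its test error remains finite; this is the kind of behavior studied in the overparameterization and double descent works referenced below.

# Illustrative sketch: an overparameterized (d >> n) linear regression whose
# minimum-norm solution perfectly fits the training data. All settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 200                                  # n training samples, d >> n parameters
beta_true = rng.normal(size=d) / np.sqrt(d)     # ground-truth linear model

X_train = rng.normal(size=(n, d))
y_train = X_train @ beta_true + 0.1 * rng.normal(size=n)

# Minimum-norm interpolating solution via the Moore-Penrose pseudoinverse.
beta_hat = np.linalg.pinv(X_train) @ y_train

train_mse = np.mean((X_train @ beta_hat - y_train) ** 2)

X_test = rng.normal(size=(1000, d))
y_test = X_test @ beta_true
test_mse = np.mean((X_test @ beta_hat - y_test) ** 2)

print(f"training MSE: {train_mse:.2e}  (essentially zero: the model interpolates)")
print(f"test MSE:     {test_mse:.3f}  (finite, despite the perfect fit of the training data)")

The minimum-norm (pseudoinverse) solution used here is the standard estimator analyzed in linear-regression studies of overparameterization.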
Papers from recent years:
Y. Dar, V. Muthukumar, R. G. Baraniuk, "A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning", arXiv:2109.02355.
Y. Dar, "Mixture of Many Zero-Compute Experts: A High-Rate Quantization Theory Perspective", arXiv:2510.03151.
G. Alon and Y. Dar, "How Does Overparameterization Affect Machine Unlearning of Deep Neural Networks?", arXiv:2503.08633.
S. Hendy and Y. Dar, "TL-PCA: Transfer Learning of Principal Component Analysis", arXiv:2410.10805.
Y. Sharon and Y. Dar, "How Do the Architecture and Optimizer Affect Representation Learning? On the Training Dynamics of Representations in Deep Neural Networks", arXiv:2405.17377.
K. Abitbul and Y. Dar, "How Much Training Data is Memorized in Overparameterized Autoencoders? An Inverse Problem Perspective on Memorization Evaluation", in ECML 2024.
Y. Dar, D. LeJeune, and R. G. Baraniuk, "The Common Intuition to Transfer Learning Can Win or Lose: Case Studies for Linear Regression", SIAM Journal on Mathematics of Data Science, vol. 6, issue 2, 2024.
Y. Dar and R. G. Baraniuk, "Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks", SIAM Journal on Mathematics of Data Science, vol. 4, issue 4, 2022.
Y. Dar, P. Mayer, L. Luzi, and R. G. Baraniuk, "Subspace Fitting Meets Regression: The Effects of Supervision and Orthonormality Constraints on Double Descent of Generalization Errors", in ICML 2020.
G. Somepalli, L. Fowl, A. Bansal, P. Yeh-Chiang, Y. Dar, R. Baraniuk, M. Goldblum, T. Goldstein, "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective", in CVPR 2022.
Y. Dar, L. Luzi, and R. G. Baraniuk, "Frozen Overparameterization: A Double Descent Perspective on Transfer Learning of Deep Neural Networks", arXiv:2211.11074.
L. Luzi, Y. Dar, and R. G. Baraniuk, "Double Descent and Other Interpolation Phenomena in GANs", arXiv:2106.04003.
For more details on my research, see the publications page.
E-mail: ydar@bgu.ac.il