Siyuan Ma
I received my Ph.D. from The Ohio State University in 2019. During my time at OSU, I was fortunate to work with Mikhail Belkin on machine learning, in particular the double descent risk curve and large-batch training.
I am now working at Google on scaling and automating Ads modeling. I believe that new models and algorithms emerging from recent advances in learning theory will further transform influential products like Google Ads.
Email: siyuan.ma9@gmail.com
Like Hui, Siyuan Ma, Mikhail Belkin, Kernel Machines Beat Deep Neural Networks on Mask-based Single-channel Speech Enhancement, Interspeech 2019.
Siyuan Ma, Mikhail Belkin, Kernel Machines that Adapt to GPUs for Effective Large Batch Training, SysML 2019.
Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal, Reconciling Modern Machine-learning Practice and the Classical Bias–variance Trade-off, PNAS 2019.
Siyuan Ma, Raef Bassily, Mikhail Belkin, The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning, ICML 2018.
Mikhail Belkin, Siyuan Ma, Soumik Mandal, To Understand Deep Learning We Need to Understand Kernel Learning, ICML 2018.
Siyuan Ma, Mikhail Belkin, Diving into the Shallows: a Computational Perspective on Large-scale Shallow Learning, NIPS 2017.
Dejun Teng, Lei Guo, Rubao Lee, Feng Chen, Siyuan Ma, Yanfeng Zhang, Xiaodong Zhang, LSbM-tree: Re-enabling Buffer Caching in Data Management for Mixed Reads and Writes, ICDCS 2017.
Kaibo Wang, Kai Zhang, Yuan Yuan, Siyuan Ma, Rubao Lee, Xiaoning Ding, Xiaodong Zhang, Concurrent Analytical Query Processing with GPUs, VLDB 2014.
Yin Huai, Siyuan Ma, Rubao Lee, Owen O’Malley, Xiaodong Zhang, Understanding Insights into the Basic Structure and Essential Issues of Table Placement Methods in Clusters, VLDB 2013.
Tian Luo, Siyuan Ma, Rubao Lee, Xiaodong Zhang, Deng Liu, Li Zhou, S-CAVE: Effective SSD Caching to Improve Virtual Machine Storage Performance, PACT 2013.