Berivan Isik

I am a PhD student in the Electrical Engineering Department at Stanford University, advised by Tsachy Weissman and Sanmi Koyejo, and affiliated with the SAIL and StatsML groups. My research is supported by a Stanford Graduate Fellowship (2019-2023), a Google PhD Fellowship (2023-2026), and a Meta research grant. I received my MS degree from Stanford University in June 2021 and my BS degree from Middle East Technical University in June 2019, both in Electrical Engineering.

Previously, I visited Stanford in 2018 as an undergraduate researcher under the supervision of Ayfer Ozgur and Peter Kairouz. In the summer of 2021, I worked at Google as a research intern hosted by Philip Chou and George Toderici. From February to September 2022, I did a second research internship at Google under the supervision of Philip Chou, Onur Guleryuz, and Danhang Tang. From October 2022 to February 2023, I worked at Amazon as an applied scientist intern hosted by Dmitri Pavlichin. In the summer of 2023, I was a visiting researcher in Nicolas Papernot's group at the Vector Institute for Artificial Intelligence.

Since October 2023, I have been working at Google Research as a student researcher under the supervision of Natalia Ponomareva and Sergei Vassilvitskii.

Collaborators (past & present): Nicolas Papernot (UofT & Vector Institute), Ce Zhang (ETH Zurich), Peter Kairouz (Google), Ahmad Beirami (Google), Peter Richtarik (KAUST), Ayfer Ozgur (Stanford), Arindam Banerjee (UIUC), Deniz Gunduz (Imperial College London), Philip Chou (Google), George Toderici (Google).

berivan.isik [at] stanford [dot] edu 


Interests

My research interests span scalable and trustworthy machine learning, privacy, fairness, robustness, compression, and information theory. Recently, I have been working on differential privacy, model compression, federated learning, transfer learning, data validation for foundation models, efficient training and finetuning of large language models, and learned compression.

Publications

Exact Optimality of Communication-Privacy-Utility Tradeoffs in Distributed Mean Estimation [PDF] [code]

Berivan Isik, Wei-Ning Chen, Ayfer Ozgur, Tsachy Weissman, Albert No

Conference on Neural Information Processing Systems (NeurIPS), 2023.

Preliminary version presented at the ICML 2023 Federated Learning Workshop and the Theory and Practice of Differential Privacy (TPDP) Workshop in 2023.



Sparse Random Networks for Communication-Efficient Federated Learning  [PDF] [code] 

Berivan Isik*, Francesco Pase*, Deniz Gunduz, Tsachy Weissman, Michele Zorzi 

International Conference on Learning Representations (ICLR), 2023.

Preliminary version presented at the International Workshop on Federated Learning: Recent Advances and New Challenges, NeurIPS, 2022. [Oral]



Communication-Efficient Federated Learning through Importance Sampling [PDF]

Berivan Isik*, Francesco Pase*, Deniz Gunduz, Sanmi Koyejo, Tsachy Weissman, Michele Zorzi 

Preprint, 2023.

Preliminary version presented at the Workshop on Federated Learning and Analytics, ICML, 2023.



An Information-Theoretic Justification for Model Pruning [PDF] [code] 

Berivan Isik, Tsachy Weissman, Albert No

International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.

Preliminary version presented at the Sparsity in Neural Networks Workshop, 2021. [Spotlight talk]

An Information-Theoretic Understanding of Maximum Manifold Capacity Representations [PDF]

Berivan Isik*, Rylan Schaeffer*, Victor Lecomte*, Mikail Khona, Yann LeCun, Andrey Gromov, Ravid Shwartz-Ziv, Sanmi Koyejo

Preliminary version accepted at the Workshop on Unifying Representations in Neural Models (UniReps), NeurIPS, 2023.  [Oral]

Preliminary version accepted at the Workshop on Information-Theoretic Principles in Cognitive Systems (InfoCog), NeurIPS, 2023.  [Spotlight talk]

Preliminary version accepted at two other NeurIPS 2023 workshops: Self-Supervised Learning - Theory and Practice (SSL) and Symmetry and Geometry in Neural Representations (NeurReps).



GPT-Zip: Deep Compression of Finetuned Large Language Models [PDF]

Berivan Isik*,  Hermann Kumbong*, Wanyi Ning*, Xiaozhe Yao*, Sanmi Koyejo, Ce Zhang

Preliminary version presented at the Workshop on Efficient Systems for Foundation Models, ICML, 2023.



Lossy Compression of Noisy Data for Private and Data-Efficient Learning [PDF]

Berivan Isik, Tsachy Weissman

IEEE Journal on Selected Areas in Information Theory (JSAIT), 2023.



Sandwiched Video Compression: Efficiently Extending the Reach of Standard Codecs with Neural Wrappers [PDF]

Berivan Isik, Onur Guleryuz, Danhang Tang, Jonathan Taylor, Philip A. Chou

IEEE International Conference on Image Processing (ICIP), 2023.



Neural Network Compression for Noisy Storage Devices [PDF]

Berivan Isik, Kristy Choi, Xin Zheng, Tsachy Weissman, Stefano Ermon, H.-S. Philip Wong, Armin Alaghi

ACM Transactions on Embedded Computing Systems (TECS), 2023.

Preliminary version presented at the Deep Learning through Information Geometry Workshop,  NeurIPS, 2020.



Learning under Storage and Privacy Constraints [PDF1] [PDF2 (longer)]

Berivan Isik, Tsachy Weissman

IEEE International Symposium on Information Theory (ISIT), 2022.



LVAC: Learned Volumetric Attribute Compression for Point Clouds using Coordinate Based Networks [PDF] [code] 

Berivan Isik, Philip A. Chou, Sung Jin Hwang, Nick Johnston, George Toderici

Frontiers in Signal Processing (FSP), 2022.



Upper Bounds on the Rate of Uniformly-Random Codes for the Deletion Channel [PDF]

Berivan Isik, Francisco Pernice, Tsachy Weissman

Submitted to IEEE Transactions on Information Theory (TIT), 2023.



Neural 3D Compression via Model Compression  [PDF]

Berivan Isik

WiCV, CVPR, 2021.



rTop-k: A Statistical Estimation Approach to Distributed SGD [PDF] 

Berivan Isik*, Leighton P. Barnes*, Huseyin A. Inan*, Ayfer Ozgur

IEEE Journal on Selected Areas in Information Theory (JSAIT), 2020. 

Preliminary version presented at the Workshop on Federated Learning for User Privacy and Data Confidentiality, ICML, 2020. [Long Oral]


Patents

"Learned Volumetric Attribute Compression Using Coordinate-Based Networks", US Patent 17/708,628.

Philip A. Chou, Berivan Isik, Sung Jin Hwang, Nick Johnston, George Toderici 

Invited Talks

Workshops

Involvements

Hobbies