Ahmad Beirami

Research Data Scientist

EA Digital Platform - Data & AI

Email: ahmad(dot)beirami AT gmail(dot)com

Biography

Ahmad Beirami is a research scientist at Electronic Arts (EA), where he leads fundamental research and development on training artificial agents in multi-agent systems. His research interests broadly span AI, machine learning, statistics, information theory, and networks. Prior to joining EA in 2018, he held postdoctoral positions at Duke, MIT, and Harvard. He received the 2015 Sigma Xi Best Ph.D. Thesis Award from Georgia Tech.

Education & Professional Experience

  • Research Scientist in EA Digital Platform - Data & AI, Electronic Arts (2018-present)
  • Postdoctoral Fellow in EE, Harvard University (Mentor: Vahid Tarokh, 2016-2017)
  • Postdoctoral Associate in EECS, MIT (Mentor: Muriel Médard, 2015-2017)
  • Postdoctoral Associate in ECE, Duke University (Mentor: Robert Calderbank, 2014-2016)
  • Ph.D. in ECE, Georgia Tech (Advisor: Faramarz Fekri, 2014)
  • M.Sc. in ECE, Georgia Tech (Advisor: Faramarz Fekri, 2011)
  • B.Sc. in EE, Sharif University of Technology (2007)

Honors & Awards

  • Distinction in Teaching Award, Harvard University (2017)
  • Exemplary Reviewer, IEEE Transactions on Communications (2016)
  • Sigma Xi Best Ph.D. Thesis Award, Georgia Tech (2015)
  • 2013-2014 Graduate Research Excellence Award, School of ECE, Georgia Tech (2014)
  • Outstanding Research Award, Center for Signal and Information Processing, Georgia Tech (2014)
  • Outstanding Service Award, Center for Signal and Information Processing, Georgia Tech (2014)
  • Best Student Paper Nomination, 51st IEEE International Midwest Symposium on Circuits and Systems (2008)
  • Bronze Medal, 20th Iranian National Mathematics Olympiad (2002)

Selected Publications

Data-dependent randomized features for generalizability in large-scale supervised learning:

  • S. Shahrampour, A. Beirami, and V. Tarokh, "On data-dependent random features for improved generalization in supervised learning," in Proc. of The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018), pp. 4026-4033. [arXiv]

Computationally efficient approximate cross validation and parameter tuning in supervised/unsupervised learning:

  • A. Beirami, M. Razaviyayn, S. Shahrampour, and V. Tarokh, "On optimal generalizability in parametric learning," in Advances in Neural Information Processing Systems (NIPS 2017), pp. 3455-3465. [arXiv]

A generalization of weak typicality using tilted information measures:

  • A. Beirami, R. Calderbank, M. Christiansen, K. Duffy, and M. Médard, "A characterization of guesswork on swiftly tilting curves," accepted for publication in IEEE Transactions on Information Theory, 2018 (Short version appeared in Allerton 2015). [arXiv]

Information-theoretic techniques for converse sample complexity bounds in supervised learning:

  • M. Nokleby, A. Beirami, and R. Calderbank, "Rate-distortion bounds on Bayes risk in supervised learning," submitted to IEEE Transactions on Information Theory, 2016 (Short version appeared in ISIT 2016). [arXiv]

Fast data compression in large-scale networks using unsupervised learning:

  • A. Beirami, M. Sardari, and F. Fekri, "Packet-level network compression: realization and scaling of the network-wide benefits," IEEE/ACM Transactions on Networking, vol. 24, no. 3, pp. 1588-1604, June 2016 (Short version appeared in INFOCOM 2012). [arXiv]

See my Google Scholar page for a complete list of publications.

Teaching Experience

  • Section instructor for Harvard ES 156 — Signals and Systems (Spring 2017) (Rating: 4.79/5.00)
  • Recitation instructor for MIT EECS 6.02 — Intro to EECS II: Digital Communication Systems (Fall 2015) (Rating: 6.00/7.00)
  • Principal instructor for Duke ECE 587/STA 563 — Information Theory (Spring 2015) (Rating: 4.86/5.00)