Milad Hashemi
miladhashemi@utexas.edu
I am currently a Research Scientist on the ML, Systems, and Cloud AI team at Google. I completed my PhD in 2016 as a member of the HPS research group at UT Austin, advised by Professor Yale Patt.
Publications and Patents
Full Overview: Google Scholar, dblp.
Amir Yazdanbakhsh, Aviral Kumar, Kevin Swersky, Milad Hashemi, Sergey Levine, "Data-Driven Offline Optimization for Architecting Hardware Accelerators," The International Conference on Learning Representations (ICLR), April 2022.
Will Grathwohl, Kevin Swersky, Milad Hashemi, David Duvenaud, Chris J. Maddison, "Oops I Took A Gradient: Scalable Sampling for Discrete Distributions," The International Conference on Machine Learning (ICML), July 2021. Outstanding Paper Award Honorable Mention.
Will Grathwohl, Jacob Jin Kelly, Milad Hashemi, Mohammad Norouzi, Kevin Swersky, David Duvenaud, "No MCMC for Me: Amortized Sampling for Fast and Stable Training of Energy-Based Models," The International Conference on Learning Representations (ICLR), May 2021.
Zhan Shi, Akanksha Jain, Kevin Swersky, Milad Hashemi, Parthasarathy Ranganathan, Calvin Lin, "A Hierarchical Neural Model of Data Prefetching," Architectural Support for Programming Languages and Operating Systems (ASPLOS), April 2021. IEEE MICRO Top Picks Honorable Mention.
Zhan Shi, Chirag Sakhuja, Milad Hashemi, Kevin Swersky, Calvin Lin, "Learned Hardware/Software Co-Design of Neural Accelerators."
Yujun Yan, Kevin Swersky, Danai Koutra, Parthasarathy Ranganathan, Milad Hashemi, "Neural Execution Engines: Learning to Execute Subroutines," Neural Information Processing Systems (NeurIPS), December 2020.
Evan Liu, Milad Hashemi, Kevin Swersky, Parthasarathy Ranganathan, Junwhan Ahn, "An Imitation Learning Approach to Cache Replacement," The International Conference on Machine Learning (ICML), July 2020.
Zhan Shi, Kevin Swersky, Parthasarathy Ranganathan, and Milad Hashemi, "Learning Execution through Neural Code Fusion," The International Conference on Learning Representations (ICLR), April 2020. An earlier version appeared at the ML for Systems Workshop at ISCA, June 2019.
Milad Hashemi, Kevin Swersky, Jamie A. Smith, Grant Ayers, Heiner Litz, Jichuan Chang, Christos Kozyrakis, and Parthasarathy Ranganathan, "Learning Memory Access Patterns," The International Conference on Machine Learning (ICML), July 2018. Covered by MIT Technology Review.
Milad Hashemi, Onur Mutlu, and Yale N. Patt, "Continuous Runahead: Transparent Hardware Acceleration for Memory Intensive Workloads," The 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), October 2016. Nominated for the Best Paper Award.
Milad Hashemi, Khubaib, Eiman Ebrahimi, Onur Mutlu, and Yale N. Patt, "Accelerating Dependent Cache Misses with an Enhanced Memory Controller," The 43rd ACM/IEEE International Symposium on Computer Architecture (ISCA), June 2016.
Milad Hashemi, Debbie Marr, Doug Carmean, and Yale N. Patt, "Efficient Execution of Bursty Applications," IEEE Computer Architecture Letters (CAL), July 2015. Best of CAL 2016.
Milad Hashemi and Yale N. Patt, "Filtered Runahead Execution with a Runahead Buffer," The 48th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), December 2015.
Khubaib, M. Aater Suleman, Milad Hashemi, Chris Wilkerson, and Yale N. Patt, "MorphCore: An Energy-Efficient Microarchitecture for High Performance ILP and High Throughput TLP," The 45th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), December 2012. Best Paper Award.
Professional Service
Co-Editor, IEEE MICRO Special Issue on Machine Learning for Systems, September 2020.
Co-organizer, Graph Representation Learning and Beyond, co-located with ICML 2020.
Co-founder and steering committee member, ML for Systems Workshop, co-located with NeurIPS, 2018-2023.
Co-founder and organizer, ML for Computer Architecture and Systems Workshop, co-located with ISCA, 2019-2022.
Dissertation
Milad Hashemi "On-Chip Mechanisms to Reduce Effective Memory Access Latency," The University of Texas at Austin, August 2016.
Education
2011-2016: Ph.D. in Electrical and Computer Engineering; The University of Texas at Austin
2009-2011: M.S. in Electrical and Computer Engineering; The University of Texas at Austin
2005-2009: B.S. in Electrical Engineering; University of Washington