Hanqiu Chen
Sharc Lab Advisor: Prof. Cong (Callie) Hao
Georgia Institute of Technology Electrical and Computer Engineering
Email: hanqiu.chen@gatech.edu
I am a Ph.D. student in Georgia Tech ECE, starting in Fall 2022. I received my B.S. degree in Physics from Nanjing University in 2022. I am doing research under the supervision of Prof. Callie Hao in Sharc Lab. My research interests include high-level synthesis, FPGA accelerator design for GNNs and DNNs, and hardware-efficient computing systems and architecture design for machine learning and robotic computing. I am open to summer internship opportunities in Summer 2025. I have summarized my current Ph.D. research progress and future research plan here [slides]. I welcome discussions and future collaborations!
2024/7: My paper: Residual-INR: Communication Efficient On-Device Learning Using Implicit Neural Representation was accepted by ICCAD 2024 [paper][slides]!
2024/6: I attended DAC 2024 and presented our exploration of CXL memory in collaboration with Samsung: ICGMM: CXL-enabled Memory Expansion with Intelligent Caching Using Gaussian Mixture Model [paper][slides][poster]. I was also selected as DAC Young Fellow! [poster]
2024/6: I attended 1st RAS Data Center Summit and presented our current progress on GNN resilience analysis: ResGNN: A Generic Framework for Measuring Graph Neural Network Resilience Against Faults and Attacks in Hardware Systems [paper][slides]
2024/5: I joined Samsung Semiconductor Memory Solution Lab as a returning summer intern to research the benefits of CXL in LLM training.
2024/3: My paper: ICGMM: CXL-enabled Memory Expansion with Intelligent Caching Using Gaussian Mixture Model was accepted by DAC 2024!
2023/11: I attended ICCAD 2023 and presented my work: Rapid-INR: Storage Efficient CPU-free DNN Training Using Implicit Neural Representation [paper][slides][poster]. I also received ICCAD 2023 student travel grant!
2023/8: My paper: Rapid-INR: Storage Efficient CPU-free DNN Training Using Implicit Neural Representation was accepted by ICCAD 2023!
2023/5: I joined Samsung Semiconductor Memory Solution Lab as a summer intern, working on the Memory Semantic SSD FPGA (MS-FPGA) tiered memory cache controller interface and policy engine design.
2023/5: I attended FCCM 2023 and presented my paper: DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph Neural Network Inference. I also presented a poster at the FCCM PhD Forum. [paper][slides][poster]
2023/3: My paper: DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph Neural Network Inference was accepted by FCCM 2023!
2022/11: I attended IISWC 2022 and presented my paper: Bottleneck Analysis of Dynamic Graph Neural Network Inference on CPU and GPU. [paper] [slides]
2022/9: My paper: Bottleneck Analysis of Dynamic Graph Neural Network Inference on CPU and GPU was accepted by IISWC 2022!
2022/8: I joined Georgia Tech ECE Sharc Lab as a Ph.D. student under the supervision of Prof. Cong (Callie) Hao.
2022/6: I received my B.S. degree in Physics from Nanjing University and was recognized as an outstanding graduate!
2022/3: I attended the virtual ASAP 2022 and presented my paper: Mask-Net: A Hardware-efficient Object Detection Network with Masked Region Proposals. [paper][slides]
2022/1: My paper: Mask-Net: A Hardware-efficient Object Detection Network with Masked Region Proposals was accepted by ASAP 2022!
2021/11: I attended SRC@ICCAD21 and presented my work: Mask-Net: A Hardware-efficient Object Detection Network with Masked Region Proposals. I won 4th place in the undergraduate group of this student research competition. [slides][certificate]
Software-hardware Co-design for efficient architecture and systems: hardware-efficient machine learning, CXL system design for ML workloads
High-performance Reconfigurable Computing: FPGA, embedded systems, IoT, edge computing
Electronic Design Automation (EDA): ML-assisted EDA tools optimization, High-level Synthesis (HLS)
Stefan Abi-Karam, Rishov Sarkar, Allison Seigler, Sean Lowe, Zhigang Wei, Hanqiu Chen, Nanditha Rao, Lizy John, Aman Arora and Cong Hao: HLSFactory: A Framework Empowering High-Level Synthesis Datasets for Machine Learning and Beyond (MLCAD 2024) [paper]
Hanqiu Chen, Xuebin Yao, Pradeep Subedi and Cong Hao: Residual-INR: Communication Efficient On-Device Learning Using Implicit Neural Representation (ICCAD 2024) [paper][slides]
Nan Wu, Yingjie Li, Hang Yang, Hanqiu Chen, Steve Dai, Cong Hao, Cunxi Yu and Yuan Xie: Survey of Machine Learning for Software-assisted Hardware Design Verification: Past, Present, and Prospect (ACM Trans. Des. Autom. Electron. Syst. 2024) [paper]
Hanqiu Chen, Zishen Wan and Cong Hao: ResGNN: A Generic Framework for Measuring Graph Neural Network Resilience Against Faults and Attacks in Hardware Systems (RAS Data Center Summit 2024) [paper][slides]
Hanqiu Chen, Yitu Wang, Vitorio Cargnini, Mohammadreza Soltaniyeh, Dongyang Li, Gongjin Sun, Pradeep Subedi, Andrew Chang, Yiran Chen and Cong Hao: ICGMM: CXL-enabled Memory Expansion with Intelligent Caching Using Gaussian Mixture Model (DAC 2024) [paper][slides][poster]
Hanqiu Chen, Hang Yang, Stephen Fitzmeyer, Cong Hao: Rapid-INR: Storage Efficient CPU-free DNN Training Using Implicit Neural Representation (ICCAD 2023) [paper][slides][poster]
Hanqiu Chen, Cong Hao: DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph Neural Network Inference (FCCM 2023) [paper][slides][poster]
Hanqiu Chen, Yihan Jiang, Yahya Alhinai, Eunjee Na, Cong Hao: Bottleneck Analysis of Dynamic Graph Neural Network Inference on CPU and GPU (IISWC 2022) [paper] [slides]
Hanqiu Chen and Cong Hao: Mask-Net: A Hardware-efficient Object Detection Network with Masked Region Proposals (ASAP 2022) [paper][slides]
Ph.D. in Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA (August 2022 - Present)
B.S. in Physics, Nanjing University, Nanjing, Jiangsu, China (August 2018 - June 2022)
Samsung Semiconductor, Memory Solution Lab (Manager: Andrew Chang), San Jose, CA, USA (May 2023 - August 2023, May 2024 - August 2024)
Machine learning frameworks and libraries: PyTorch, PyTorch Geometric
Programming Languages: C/C++, Python, Verilog HDL, Chisel
High-Level Synthesis: Xilinx Vitis HLS, FPGA programming with HLS, PYNQ
Circuit and Device Design: NI Multisim, Silvaco TCAD, Cadence
Numerical Computation: MATLAB, Mathematica
Text Editing and Data Processing: LaTeX, Origin
Computer Architecture Simulator: gem5