Han Xu
Tenure-Track Assistant Professor (From Fall 2024)
Department of Electrical and Computer Engineering
The University of Arizona
Email: xuhan2@arizona.edu
Publications | Teaching | Talks and Tutorials | Google Scholar | CV
About Me
I obtained my PhD from the Department of Computer Science and Engineering at Michigan State University, where I was fortunate to be advised by Dr. Jiliang Tang. Before joining MSU, I received my master's degree in Statistics from the University of Michigan, Ann Arbor, and my bachelor's degree in Mathematics from Nankai University, China.
I have broad research interests in Trustworthy ML/AI, including robustness, fairness, privacy, and copyright issues, as well as related problems in real-world applications involving image, graph, text, and financial data. In addition, I have a growing research interest in the trustworthiness of generative models, including Large Language Models (LLMs) and Diffusion Models (DMs).
I am joining the ECE department at the University of Arizona this Fall (2024), and I am actively recruiting students to work with me. If you are interested, please feel free to contact me.
Open-Source Projects
DeepRobust: A Platform for Adversarial Attacks and Defenses, in AAAI 2021.
A PyTorch-based GitHub repository covering adversarial attack and defense algorithms for image and graph data.
[Website] [paper] [code]
HC-Var: Human and ChatGPT Texts with Variety, 2023
A comprehensive dataset of human and ChatGPT texts to foster studies of ChatGPT-generated text detection. It includes text from various language tasks, topics, and prompts.
[Dataset] [paper] [code]
News
03/2024 Our paper is accepted by NAACL Findings 2024.
03/2024 I am thrilled to receive the Outstanding Graduate Student award from the Department of Computer Science and Engineering at MSU.
01/2024 I give a guest lecture at the University of Illinois Chicago (UIC), introducing the trustworthiness of LLMs.
01/2024 Our paper Sharpness-Aware Data Poisoning Attack is accepted by ICLR 2024 as a spotlight (top 5%).
12/2023 I am invited as a reviewer for ICML 2024.
11/2023 I give a guest lecture at the University of Nevada, Las Vegas (UNLV).
11/2023 I give an Early Career Research Talk at the MSU Data Science Student Conference, introducing our recent work on poisoning attacks as data protection techniques.
10/2023 My co-authored paper is accepted by WACV 2024.
10/2023 I give a guest lecture at Emory University, introducing ML security in Generative AI.
10/2023 I am invited as a keynote speaker at the AAAI 2024 workshop Addressing Socioethical Effects of AI.
10/2023 I design a short-term PhD course on Trustworthy AI; together with my advisor Jiliang Tang, I co-deliver it (virtually) at the Department of Computer Science, Aalborg University, Denmark.
09/2023 Check our new preprints on: (1) ChatGPT text detection; (2) memorization effects of LLMs; (3) backdoor attacks; (4) watermarks on generative models.
09/2023 We build a dataset, HC-Var (available on HuggingFace), for ChatGPT-generated text detection, along with the corresponding code. This project is a collaboration with Amy Liu, a high school student from Okemos High School. I am really proud of her inspiring ideas for our project.
08/2023 I am invited as a reviewer for ICLR 2024.
08/2023 I am invited as a Program Committee member for AAAI 2024 in the Safe, Robust and Responsible AI Track (SRRAI).
06/2023 Amy Liu from Okemos High School joins our research group through the HSHSP Program (High School Honors Science, Math and Engineering Program). We work together on collecting ChatGPT outputs to foster ChatGPT detection techniques.
06/2023 I am excited about the recent advances in Large Language Models (LLMs) and start investigating how to distinguish LLM-generated text from human text. I also guide and collaborate with Shenglai Zeng and Yaxin Li on studying the privacy concerns of LLMs.
05/2023 I give an "academic lightning talk" at the 2023 Ethical AI Forum, held by the University of Michigan, Ann Arbor (Michigan Institute for Data Science).
05/2023 I pass my comprehensive exam.
05/2023 Check our new preprints on: (1) data poisoning attacks; (2) watermark techniques to protect users' data against generative models.
05/2023 Our paper on the memorization of adversarially robust models is accepted by KDD 2023.
05/2023 Our paper on categorical attacks and defenses is accepted by ICML 2023.
04/2023 I participate in a field trip to Elektrobit and Continental Automotive in Detroit, where I see how Trustworthy AI techniques promote autonomous-vehicle safety in car manufacturing.
03/2023 I am invited as a reviewer for NeurIPS 2023.
01/2023 Our paper about unlearnable examples is accepted by ICLR 2023.
01/2023 Our paper about GNN's robustness and explainability is accepted by ICDE 2023.
Older News
12/2022 I collaborate with Dr. Wenqi Fan on a survey paper about trustworthy AI in recommender systems.
09/2022 A paper on imbalanced adversarial training, written in collaboration with Wentao Wang (a student I mentor), is accepted by ICDM 2022.
08/2022 I give a short talk on "Fairness in Adversarial Robust DNNs" as a Junior Researcher Spotlight at KDD 2022 in Washington, DC.
08/2022 We organize and present a KDD 2022 tutorial on adversarial attacks and defenses. After two previous virtual editions, we finally hold the tutorial in person in Washington, DC.
06/2022 I start my internship at VISA Research under the supervision of Menghai Pan. I learn a lot about trustworthy AI in industry, such as anomaly detection.
09/2021 Our collaborative paper on robust GNN models is published in NeurIPS 2021.
08/2021 We organize and present a KDD 2021 tutorial (virtually) on adversarial attacks and defenses.
04/2021 My paper on fairness in adversarially robust models is published in ICML 2021. I am so grateful for the help from my mentor Xiaorui Liu and my advisor Jiliang Tang.
12/2020 My paper on adversarial attacks against meta learning is published in SDM 2021. It is my first accepted paper.
10/2020 Our repository DeepRobust is accepted by AAAI 2021 as a demo.
08/2020 I organize and present a KDD 2020 tutorial (virtually) on adversarial attacks and defenses.
07/2020 I participate in the KDD Cup competition on adversarial attacks on graph data. We are a top-10 winner!
03/2020 We release the first version of DeepRobust, a Python platform built to facilitate research on attacks and defenses.
03/2020 I start "working from home" due to the pandemic.
12/2019 I pass my qualifying exam.
11/2019 My survey paper is published in the International Journal of Automation and Computing.
06/2019 I write a survey paper on adversarial attacks and defenses to summarize the papers I have read.
05/2019 I attend the SDM Doctoral Forum in Calgary, Canada, and present my research on adversarial training against spatial attacks.
01/2019 I start working on my first research project: adversarial training against spatially transformed adversarial attacks.
08/2018 I start my PhD journey, reading papers on adversarial attacks and defenses and learning a lot from them.