Pin-Yu Chen (陳品諭)
Principal Research Staff Member, IBM Research AI; MIT-IBM Watson AI Lab; RPI-IBM AIRC
IBM Thomas J. Watson Research Center, NY, USA
Links: Twitter, Google Scholar profile, CV, Bio
Contact: pinyuchen.tw at gmail.com (primary reviewer account), pin-yu.chen at ibm.com
I am a Principal Research Scientist in the Trusted AI Group and a PI of the MIT-IBM Watson AI Lab at the IBM Thomas J. Watson Research Center. I am also the Chief Scientist of the RPI-IBM AI Research Collaboration (AIRC) program. My recent research focuses on AI safety and robustness and, more broadly, on making AI trustworthy. Here is my <bio>. Check out my research vision and portfolio.
- My research contributes to IBM Watsonx.governance, Adversarial Robustness Toolbox, AI Explainability 360, AI Factsheets 360, and Watson Studio
- I am open to collaboration with highly motivated researchers!
- I wrote a book on "Adversarial Robustness for Machine Learning" with Cho-Jui Hsieh
- Workshop organizer (selected): ICML('22,'23), KDD('19-'22), MLSys'22, NeurIPS('21,'24)
- Tutorial presenter (selected): NeurIPS'22, AAAI('24,'23,'22), CVPR('23,'21,'20), IJCAI'21, MLSS'21, ECCV'20
- Area Chair/Senior PC: NeurIPS, ICLR, ICML, AAAI, IJCAI, AISTATS, PAKDD, ICASSP, ACML
- Technical Program Committee: IEEE S&P, ACM CCS, IEEE Signal Processing Society (MLSP & ASPS)
- Editor: TMLR, IEEE Transactions on Signal Processing
Featured Talks
Featured Media Coverage
AI Safety, Robustness, and Trustworthy ML: <New York Times_finetune_safety> <TechTalks_book> <Forbes_AI_resilience> <NYAS_2024> <VentureBeat_BadDiffusion> <Technology_Networks_AIvsCar> <EETimes_adversarialAI> <Portswigger_interview> <Techerati_interview> <Venturebeat_ML_security> <TheRegister_Adv_Tweet> <TheNextWeb_BAR> <Analytics_India_Magazine_ZO_opt> <AItrends_interview> <TheNextWeb_sanitization> <TechTalks_Robust_AI> <Nature_News> <EE_TIMES_adv_robustness> <PHYS.ORG_AutoZOOM> <TechTalks_Paraphrasing> <SiliconANGLE> <Venturebeat_Adv_T-Shirt> <Quartz_Adv_T-Shirt> <WIRED_Adv_T-Shirt> <TechTalks_temporal_dependency> <VB_Paraphrasing> <Forbes_CEM>
Machine Learning for Scientific Discovery: <Communications_ACM_AI_creativity> <Academic_Times_MNNN> <MIT_News_Covid-19> <VentureBeat_CLASS> <New_Atlas_CLASS> <Axios_CLASS> <WRAL_TechWire_CLASS> <Psychology_Today_CLASS> <ACS_CLASS> <Chemistry_World_CLASS> <Technicity_CLASS> <VOX_CLASS> <AZO_Robotics_CURE> <OSU_CURE>
Cyber Security: <IEEE COMSOC Technology News> <IEEE Xplore Spotlight> <PNNL research highlight>
IBM: <IBM_Safe_LoRA> <IBM_Larimar> <IBM_ICL> <IBM_Innovation_Robustness> <IBM_AI_Red_Teaming> <IBM_AI_Time_Series> <IBM_Blog_AI_Forensics> <IBM_Blog_Adversartial_Robustness> <IBM_blog_AI_Drug_Discovery> <IBM_blog_BadDiffusion> <IBM_QMO> <IBM_CLASS> <IBM_Blog_Certification> <IBM_Research_AI_Review_2019> <IBM Response to NIST RFI on AI>
Funded Research Projects
Notre Dame–IBM Technology Ethics Lab Collaboration [2024-present]
DARPA CDAO Joint AI Test Infrastructure Capability (JATIC) [2023-present]
IARPA program on Microelectronics in Support of Artificial Intelligence (MicroE4AI) [2022]
IBM PI of the Department of Energy project "A Robust Event Diagnostics Platform: Integrating Tensor Analytics and Machine Learning into Real-time Grid Monitoring" [2019-2021]
RPI-IBM AI Research Collaboration (AIRC): PI of ongoing RPI-IBM research projects [2019 - present]
MIT-IBM Watson AI Lab: PI of ongoing MIT-IBM research projects. Two of them are featured in <MIT Quest for Intelligence Research> and <MIT_News> [2018 - present]
UIUC-IBM Center for Cognitive Computing System Research (C3SR) [2019 - 2022]
Selected Awards and Honors
IBM Corporate Technical Recognition Award (Significant External Honors) (2024)
Distinguished Speaker of the ACM (2024-2027)
Best Paper Award at ICLR 2023 BANDS Workshop
Best Paper Runner-Up Award at UAI 2022
Best Paper Award at ECCV 2022 AROW Workshop
IBM Corporate Technical Award: Trustworthy AI (2021)
IBM Outstanding Research Accomplishment Awards: Foundation Models for Science (2023), Federated Learning (2022), Adversarial Robustness (2020), Deep Learning on Graphs (2020), Trustworthy AI (2020)
IBM Accomplishment Awards: Dynamical Systems and Machine Learning (2022), Robust AI (2021), Generative AI (2021), Optimization for AI (2021), Adversarial Robustness (2019), Deep Learning on Graphs (2019), Trustworthy AI (2019)
Special IBM Research Division Team Award for COVID-19 Research (2020)
IBM Master Inventor (2020-current)
Listed as “Top Subject Matter Experts in AI & ML” by Onalytica (2020) <Link>
NeurIPS Best Reviewer Award (2017) <Link>
Best Paper Finalist, ACM Workshop on Artificial Intelligence and Security (2017)
Outstanding Performance Award at Pacific Northwest National Laboratory (2015)
Univ. Michigan Rackham International Student Fellowship (Chia-Lun Lo Fellowship) (2013-2014) <Link>
EE:Systems Fellowship, University of Michigan, Ann Arbor (2012-2013)
Best Master Thesis Award of Graduate Institute of Communications Engineering, National Taiwan Univ. (2011)
Second Best Master Thesis Award of Chinese Institute of Electrical Engineering (2011)
IEEE GLOBECOM GOLD Best Paper Award (2010) <Link>
Ranked 1st Place (Full Scores) in Taiwan National College Entrance Exam (2005)
New Preprints
Ming-Chang Chiu, Shicheng Wen, Pin-Yu Chen, and Xuezhe Ma, “MegaCOIN: Enhancing Medium-Grained Color Perception for Vision-Language Models,”
Chung-Ting Tsai, Ching-Yun Ko, I-Hsin Chung, Yu-Chiang Frank Wang, and Pin-Yu Chen, “Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models,”
Zhi-Yi Chin, Kuan-Chen Mu, Mario Fritz, Pin-Yu Chen, and Wei-Chen Chiu, “In-Context Experience Replay Facilitates Safety Red-Teaming of Text-to-Image Diffusion Models,”
Jun Qi, Chao-Han Yang, Samuel Yen-Chi Chen, and Pin-Yu Chen, “Quantum Machine Learning: An Interplay Between Quantum Computing and Machine Learning,”
Jun Qi, Chao-Han Yang, Samuel Yen-Chi Chen, Pin-Yu Chen, Hector Zenil, and Jesper Tegner, “Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits,”
Yung-Chen Tang, Pin-Yu Chen, and Tsung-Yi Ho, “Defining and Evaluating Physical Safety for Large Language Models,” <LLM_Physical_Safety_page>
Kuo-Han Hung, Ching-Yun Ko, Ambrish Rawat, I-Hsin Chung, Winston H. Hsu, and Pin-Yu Chen, “Attention Tracker: Detecting Prompt Injection Attacks in LLMs,” <Attention_Tracker_page>
Yujun Zhou, Jingdong Yang, Kehan Guo, Pin-Yu Chen, Tian Gao, Werner Geyer, Nuno Moniz, Nitesh V Chawla, and Xiangliang Zhang, “LabSafety Bench: Benchmarking LLMs on Safety Issues in Scientific Labs,” <LabSafety_project_page> <LabSafety_code> <LabSafety_data>
Han Shen, Pin-Yu Chen, Payel Das, and Tianyi Chen, “SEAL: Safety-enhanced Aligned LLM Fine-tuning via Bilevel Data Selection,”
Xiang Li, Pin-Yu Chen, and Wenqi Wei, “SONAR: A Synthetic AI-Audio Detection Framework and Benchmark,”
Ching-Yun Ko, Pin-Yu Chen, Payel Das, Youssef Mroueh, Soham Dan, Georgios Kollias, Subhajit Chaudhury, Tejaswini Pedapati, and Luca Daniel, “Large Language Models can be Strong Self-Detoxifiers,”
Hongkang Li, Meng Wang, Songtao Lu, Xiaodong Cui, and Pin-Yu Chen, “Training Nonlinear Transformers for Chain-of-Thought Inference: A Theoretical Generalization Analysis,”
Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, Nitesh V Chawla, and Xiangliang Zhang, “Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge,” <LLM_Judge_bias_page>
Ambrish Rawat, Stefan Schoepf, Giulio Zizzo, Giandomenico Cornacchia, Muhammad Zaid Hameed, Kieran Fraser, Erik Miehling, Beat Buesser, Elizabeth M. Daly, Mark Purcell, Prasanna Sattigeri, Pin-Yu Chen, and Kush R. Varshney, “Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI,”
Youngsun Lim, Hojun Choi, Pin-Yu Chen, and Hyunjung Shim, “Evaluating Image Hallucination in Text-to-Image Generation with Question-Answering,”
Sarwan Ali, Taslim Murad, Prakash Chourasia, Haris Mansoor, Imdad Ullah Khan, Pin-Yu Chen, and Murray Patterson, “Position Specific Scoring Is All You Need? Revisiting Protein Sequence Classification Tasks,”
Hsi-Ai Tsao, Lei Hsiung, Pin-Yu Chen, and Tsung-Yi Ho, “When Does Visual Prompting Outperform Linear Probing for Vision-Language Models? A Likelihood Perspective,”
Shashank Kotyan, Pin-Yu Chen, and Danilo Vasconcellos Vargas, “Linking Robustness and Generalization: A k* Distribution Analysis of Concept Clustering in Latent Space for Vision Models,”
Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, and Che-Rung Lee, “Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness,”
Zhiyuan He, Pin-Yu Chen, and Tsung-Yi Ho, “RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection,”
Wei Li, Pin-Yu Chen, Sijia Liu, and Ren Wang, “PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection,”
Lin Lu, Hai Yan, Zenghui Yuan, Jiawen Shi, Wenqi Wei, Pin-Yu Chen, and Pan Zhou, “AutoJailbreak: Exploring Jailbreak Attacks and Defenses through a Dependency Lens,”
Chen Xiong, Xiangyu Qi, Pin-Yu Chen, and Tsung-Yi Ho, “Defensive Prompt Patch: A Robust and Interpretable Defense of LLMs against Jailbreak Attacks,” <DefensivePromptPatch_page>
Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, Luxi He, Kaixuan Huang, Udari Madhushani, Vikash Sehwag, Weijia Shi, Boyi Wei, Tinghao Xie, Danqi Chen, Pin-Yu Chen, Jeffrey Ding, Ruoxi Jia, Jiaqi Ma, Arvind Narayanan, Weijie J Su, Mengdi Wang, Chaowei Xiao, Bo Li, Dawn Song, Peter Henderson, and Prateek Mittal, “AI Risk Management Should Incorporate Both Safety and Security,”
Shashank Kotyan, Po-Yuan Mao, Pin-Yu Chen, and Danilo Vasconcellos Vargas, “Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models!,”
Zhenhan Huang, Tejaswini Pedapati, Pin-Yu Chen, Chunheng Jiang, and Jianxi Gao, “Graph is all you need? Lightweight data-agnostic neural architecture search without training,”
Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, and Che-Rung Lee, “Steal Now and Attack Later: Evaluating Robustness of Object Detection against Black-box Adversarial Attacks,” [CDAO; ART]
Yi-Shan Lan, Pin-Yu Chen, and Tsung-Yi Ho, “NaNa and MiGu: Semantic Data Augmentation Techniques to Enhance Protein Classification in Graph Neural Networks,”
Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, Bishwaranjan Bhattacharjee, Djallel Bouneffouf, Subhajit Chaudhury, Pin-Yu Chen, Lamogha Chiazor, Elizabeth M. Daly, Rogério Abreu de Paula, Pierre Dognin, Eitan Farchi, Soumya Ghosh, Michael Hind, Raya Horesh, George Kour, Ja Young Lee, Erik Miehling, Keerthiram Murugesan, Manish Nagireddy, Inkit Padhi, David Piorkowski, Ambrish Rawat, Orna Raz, Prasanna Sattigeri, Hendrik Strobelt, Sarathkrishna Swaminathan, Christoph Tillmann, Aashka Trivedi, Kush R. Varshney, Dennis Wei, Shalisha Witherspoon, and Marcel Zalmanovici, “Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations,”
Shyam Marjit, Harshit Singh, Nityanand Mathur, Sayak Paul, Chia-Mu Yu, and Pin-Yu Chen, “DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models,” <DiffuseKronA_project_page> [WACV 2025]
Bharat Runwal, Tejaswini Pedapati, and Pin-Yu Chen, “From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers,” <DEFT_code>
Jia-Hong Huang, Chao-Han Huck Yang, Pin-Yu Chen, Min-Hung Chen, and Marcel Worring, “Conditional Modeling Based Automatic Video Summarization,”
Xilong Wang, Chia-Mu Yu, and Pin-Yu Chen, “Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers,” <DP-TabTransformer_code>
Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen, Min-Hsiu Hsieh, Hector Zenil, and Jesper Tegner, “Classical-to-Quantum Transfer Learning Facilitates Machine Learning with Variational Quantum Circuit,”
Ria Vinod, Pin-Yu Chen, and Payel Das, “Reprogramming Pretrained Language Models for Protein Sequence Representation Learning,”
Ming-Chang Chiu, Yingfei Wang, Derrick Eui Gyu Kim, Pin-Yu Chen, and Xuezhe Ma, “On Human Visual Contrast Sensitivity and Machine Vision Robustness: A Comparative Study,”
Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm, “On Certifying and Improving Generalization to Unseen Domains,”
Selected Publications
I. Adversarial Machine Learning and Robustness of Neural Networks
-Attack & Defense
Zhiyuan He, Yijun Yang, Pin-Yu Chen, Qiang Xu, and Tsung-Yi Ho
“Overload: Latency Attacks on Object Detection for Edge Devices,” CVPR 2024
Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, and Che-Rung Lee
“Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective,” ICLR 2024
Ming-Yu Chung, Sheng-Yen Chou, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo, and Tsung-Yi Ho
“Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift,” AAAI 2024
Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shiqing Ma, Pin-Yu Chen, Tsung-Yi Ho, and Xiangyu Zhang
“VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models,” NeurIPS 2023
Sheng-Yen Chou, Pin-Yu Chen, and Tsung-Yi Ho
“Robust Mixture-of-Expert Training for Convolutional Neural Networks,” ICCV 2023
Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, and Sijia Liu
“How to Backdoor Diffusion Models?,” CVPR 2023
Sheng-Yen Chou, Pin-Yu Chen, and Tsung-Yi Ho
<Best Paper Award at ICLR 2023 BANDS Workshop> <BadDiffusion_code> <VentureBeat_BadDiffusion> <IBM_blog_BadDiffusion> <TEXAL_BadDiffusion> <Threat_Prompt_BadDiffusion>
“Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations,” CVPR 2023
Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, and Tsung-Yi Ho
“FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning,” ICLR 2023
Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Siyuan Cheng, Shengwei An, Yingqi Liu, Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, and Xiangyu Zhang
“Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks,” SaTML 2023
Washington Garcia, Pin-Yu Chen, Somesh Jha, Scott Clouse, and Kevin R. B. Butler
“Distributed Adversarial Training to Robustify Deep Neural Networks at Scale,” UAI 2022 (*equal contribution)
Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, Lior Horesh, Mingyi Hong, and Sijia Liu
<Best paper runner-up award at UAI 2022> <DAT_code>
“CAT: Customized Adversarial Training for Improved Robustness,” IJCAI 2022
Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, and Cho-Jui Hsieh
“A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction,” NAACL 2022
Yong Xie, Dakuo Wang, Pin-Yu Chen, Jinjun Xiong, Sijia Liu, and Sanmi Koyejo
<AdvTweet_code> <TheRegister_Adv_Tweet> <IBM_Blog_Adv_Tweet>
“CAFE: Catastrophic Data Leakage in Vertical Federated Learning,” NeurIPS 2021
Xiao Jin, Pin-Yu Chen, Chia-Yi Hsu, Chia-Mu Yu, and Tianyi Chen
“Adversarial Attack Generation Empowered by Min-Max Optimization,” NeurIPS 2021
Jingkang Wang*, Tianyun Zhang*, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, and Bo Li (*equal contribution)
“How Robust are Randomized Smoothing based Defenses to Data Poisoning?” CVPR 2021
Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm
“On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning,” ICLR 2021
Ren Wang, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Chuang Gan, and Meng Wang
“Self-Progressing Robust Training,” AAAI 2021
Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, and Payel Das
“Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning,” AAAI 2021
Syed Zawad, Ahsan Ali, Pin-Yu Chen, Ali Anwar, Yi Zhou, Nathalie Baracaldo, Yuan Tian, Feng Yan
“Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases,” ECCV 2020
Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong, and Meng Wang
“Adversarial T-shirt! Evading Person Detectors in A Physical World,” ECCV 2020
Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, and Xue Lin
<Venturebeat_Adv_T-Shirt> <Import_AI_Adv_T-Shirt> <The_Register_Adv_T-Shirt> <NEU_News_Adv_T-Shirt> <Boston Globe_Adv_T-Shirt> <VICE_Adv_T-Shirt> <ODSC_Adv_T-Shirt> <Quartz_Adv_T-Shirt> <WIRED_Adv_T-Shirt> <Comm_ACM_Adv_T-Shirt> <機器之心_Adv_T-Shirt>
“Proper Network Interpretability Helps Adversarial Robustness in Classification,” ICML 2020
Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, and Luca Daniel
“Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness,” ICLR 2020
Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan Natesan Ramamurthy, and Xue Lin
<Model_Sanitization_code> <TechTalks_sanitization> <TheNextWeb_sanitization>
“DBA: Distributed Backdoor Attacks against Federated Learning,” ICLR 2020
Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li
“Sign-OPT: A Query-Efficient Hard-label Adversarial Attack,” ICLR 2020
Minhao Cheng*, Simranjit Singh*, Patrick H. Chen, Pin-Yu Chen, Sijia Liu, and Cho-Jui Hsieh (*equal contribution)
“Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples,” AAAI 2020
Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, and Cho-Jui Hsieh
“Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent,” AAAI 2020
Pu Zhao, Pin-Yu Chen, Siyue Wang, and Xue Lin
Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, Bhavya Kailkhura, and Xue Lin
Xiao Wang*, Siyue Wang*, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, and Sang Chin (*equal contribution)
<HRS_code> <TechTalks_HRS> <Medium_HRS> <IBM_Research_Blog_GNN_HRS>
“Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective,” IJCAI 2019
Kaidi Xu*, Hongge Chen*, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin (*equal contribution)
“Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification,” SysML 2019
Qi Lei*, Lingfei Wu*, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, and Michael Witbrock (*equal contribution)
<Paraphrasing_attack_code> <VB_Paraphrasing> <TechTalks_Paraphrasing> <Jiqizhixin_Paraphasing> <Nature_News>
“Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach,” ICLR 2019
Minhao Cheng, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, and Cho-Jui Hsieh
“Structured Adversarial Attack: Towards General Implementation and Better Interpretability,” ICLR 2019
Kaidi Xu*, Sijia Liu*, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, and Xue Lin (*equal contribution)
“Characterizing Audio Adversarial Examples Using Temporal Dependency,” ICLR 2019
Zhuolin Yang, Bo Li, Pin-Yu Chen, Dawn Song
<TD_code> <poster> <TechTalks_temporal_dependency> <IBM_Research_Blog_Temporal_Dependency> <Nature_News>
“AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks,” AAAI 2019 (oral presentation)
Chun-Chen Tu*, Paishun Ting*, Pin-Yu Chen*, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, and Shin-Ming Cheng (*equal contribution)
<AutoZOOM_code> <slides> <poster> <EE_TIMES> <TechTalks_1> <TechTalks_2> <IBM_Research_Blog_AutoZOOM> <PHYS.ORG_AutoZOOM> <IBM_Research_AI_Review_2019> <MC.AI_AutoZOOM>
“Is Ordered Weighted $\ell_1$ Regularized Regression Robust to Adversarial Perturbation? A Case Study on OSCAR,” IEEE GlobalSIP 2018
Pin-Yu Chen*, Bhanukiran Vinzamuri*, and Sijia Liu (*equal contribution)
<poster>
“Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning,” ACL 2018
Hongge Chen*, Huan Zhang*, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh (*equal contribution)
“EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples,” AAAI 2018
Pin-Yu Chen*, Yash Sharma*, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh (*equal contribution)
<EAD_code> <cleverhans> <adversarial_robustness_toolbox> <Foolbox> <slides>
“ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models,” ACM CCS Workshop on AI-Security, 2017
Pin-Yu Chen*, Huan Zhang*, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh (*equal contribution)
<ZOO_code> <adversarial_robustness_toolbox> <slides> (best paper award finalist)
-Robustness Evaluation & Verification & Certification
“GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models,” NeurIPS 2024
Zaitang Li, Pin-Yu Chen, and Tsung-Yi Ho
“Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks,” ACM CCS 2023
Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Sanmi Koyejo, and Bo Li
“MultiRobustBench: Benchmarking Robustness Against Multiple Attacks,” ICML 2023
Sihui Dai, Saeed Mahloujifar, Chong Xiang, Vikash Sehwag, Pin-Yu Chen, and Prateek Mittal
“Convex Bounds on the Softmax Function with Applications to Robustness Verification,” AISTATS 2023
Dennis Wei, Haoze Wu, Min Wu, Pin-Yu Chen, Clark Barrett, and Eitan Farchi
“AI Maintenance: A Robustness Perspective,” IEEE Computer Magazine
Pin-Yu Chen and Payel Das
“Holistic Adversarial Robustness of Deep Learning Models,” AAAI 2023 (senior member presentation track)
Pin-Yu Chen and Sijia Liu
“On the Adversarial Robustness of Vision Transformers,” Transactions on Machine Learning Research, 2022
Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh
“A Spectral View of Randomized Smoothing under Common Corruptions: Benchmarking and Improving Certified Robustness,” ECCV 2022
Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, and Z. Morley Mao
“Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness,” ICML 2022
Tianlong Chen*, Huan Zhang*, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, and Zhangyang Wang (*equal contribution)
“Vision Transformers are Robust Learners,” AAAI 2022
Sayak Paul* and Pin-Yu Chen* (*equal contribution)
“Training a Resilient Q-Network against Observational Interference,” AAAI 2022
Chao-Han Huck Yang, I-Te Danny Hung, Yi Ouyang, and Pin-Yu Chen
<CIQ_code>
“When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?,” NeurIPS 2021
Lijie Fan, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, and Chuang Gan
“Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning,” NeurIPS 2021
Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm
“CRFL: Certifiably Robust Federated Learning against Backdoor Attacks,” ICML 2021
Chulin Xie, Minghao Chen, Pin-Yu Chen, and Bo Li
“Non-Singular Adversarial Robustness of Neural Networks,” ICASSP 2021
Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, and Pin-Yu Chen
“Hidden Cost of Randomized Smoothing,” AISTATS 2021
Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei (Lily) Weng, Sijia Liu, Pin-Yu Chen, and Luca Daniel
“Fast Training of Provably Robust Neural Networks by SingleProp,” AAAI 2021
Akhilan Boopathy, Lily Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, and Luca Daniel
“Higher-Order Certification For Randomized Smoothing,” NeurIPS 2020 (spotlight)
Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei (Lily) Weng, Pin-Yu Chen, Sijia Liu, and Luca Daniel
“Towards Verifying Robustness of Neural Networks Against A Family of Semantic Perturbations,” CVPR 2020 (oral presentation)
Jeet Mohapatra, Tsui-Wei (Lily) Weng, Pin-Yu Chen, Sijia Liu, and Luca Daniel
“Towards Certificated Model Robustness Against Weight Perturbations,” AAAI 2020
“PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach,” ICML 2019
Tsui-Wei Weng, Pin-Yu Chen, Lam M. Nguyen, Mark S. Squillante, Ivan Oseledets, Akhilan Boopathy, and Luca Daniel
<PROVEN_code> <slides>
“CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks,” AAAI 2019 (oral presentation)
Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, and Luca Daniel
<CNN-Cert_code> <slides> <poster> <EE_TIMES> <TechTalks_Robust_AI> <IBM_Research_Blog_CNN-Cert> <MIT_IBM_Medium_CNN-Cert> <IBM Response to NIST RFI on AI> <MC.AI_CNN-Cert>
“Efficient Neural Network Robustness Certification with General Activation Functions,” NeurIPS 2018
Huan Zhang*, Tsui-Wei Weng*, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel (*equal contribution)
“Is Robustness the Cost of Accuracy? A Comprehensive Study on the Robustness of 18 Deep Image Classification Models,” ECCV 2018
Dong Su*, Huan Zhang*, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao (*equal contribution)
<Tradeoff_code> <slides>
“Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach,” ICLR 2018
Tsui-Wei Weng*, Huan Zhang*, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, and Luca Daniel (*equal contribution)
<CLEVER_code> <adversarial_robustness_toolbox> <IBM_Research_Blog> <SiliconANGLE> <MIT_IBM_Medium> <IBM Response to NIST RFI on AI> <Fool_the_Bank_demo>
“NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes,” NeurIPS 2024
Hao-Lun Sun, Lei Hsiung, Nandhini Chandramoorthy, Pin-Yu Chen, and Tsung-Yi Ho
“Time-LLM: Time Series Forecasting by Reprogramming Large Language Models,” ICLR 2024
Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, and Qingsong Wen
“Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data in Text-Image Encoders,” SaTML 2024
Andrew Geng and Pin-Yu Chen
“Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning,” AAAI 2024
Pin-Yu Chen
“RADAR: Robust AI-Text Detection via Adversarial Learning,” NeurIPS 2023
Xiaomeng Hu, Pin-Yu Chen, and Tsung-Yi Ho
“Reprogramming Pretrained Language Models for Antibody Sequence Infilling,” ICML 2023
Igor Melnyk, Vijil Chenthamarakshan, Pin-Yu Chen, Payel Das, Amit Dhurandhar, Inkit Padhi, and Devleena Das
“Identification of the Adversary from a Single Adversarial Example,” ICML 2023
Minhao Cheng, Rui Min, Haochen Sun, and Pin-Yu Chen
“Benchmarking Machine Learning Robustness in Covid-19 Genome Sequence Classification,” Nature Scientific Reports
Sarwan Ali, Bikram Sahoo, Alexander Zelikovskiy, Pin-Yu Chen, and Murray Patterson
“Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming,” SaTML 2023
Huzaifa Arif, Alex Gittens, and Pin-Yu Chen
“Robust Text CAPTCHAs Using Adversarial Examples,” IEEE Big Data 2022
Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh
“Adversarial Examples can be Effective Data Augmentation for Unsupervised Machine Learning,” AAAI 2022
Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, and Chia-Mu Yu
<UAE_code>
“Optimizing Molecules using Efficient Queries from Property Evaluations,” Nature Machine Intelligence, 2021
“Voice2Series: Reprogramming Acoustic Models for Time Series Classification,” ICML 2021
Chao-Han Huck Yang, Yun-Yun Tsai, and Pin-Yu Chen
<V2S_code>
“Characteristic Examples: High-Robustness, Low-Transferability Fingerprinting of Neural Networks,” IJCAI 2021
Siyue Wang, Xiao Wang, Pin-Yu Chen, Pu Zhao, and Xue Lin
“AID: Attesting the Integrity of Deep Neural Networks,” DAC 2021
Omid Aramoon, Pin-Yu Chen, and Gang Qu
“Don't Forget to Sign the Gradients!,” MLSys 2021
Omid Aramoon, Pin-Yu Chen, and Gang Qu
“Fake it Till You Make it: Self-Supervised Semantic Shifts for Monolingual Word Embedding Tasks,” AAAI 2021
Maurício Gruppi, Sibel Adali, and Pin-Yu Chen
<S4_code> <Sense_demo>
Yun-Yun Tsai, Pin-Yu Chen, and Tsung-Yi Ho
<BAR_code>
II. Foundation Models and Generative AI (e.g., PEFT, Prompting, Safety, Red-Teaming)
“Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models,” NeurIPS 2024
ShengYun Peng, Pin-Yu Chen, Matthew Hull, and Duen Horng Chau
“Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes,” NeurIPS 2024
Xiaomeng Hu, Pin-Yu Chen, and Tsung-Yi Ho
“Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models,” NeurIPS 2024
Chia-Yi Hsu, Yu-Lin Tsai, Chih-Hsun Lin, Pin-Yu Chen, Chia-Mu Yu, and Chun-Ying Huang
“Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models,” NeurIPS 2024
Yuchen Hu, Chen Chen, Chao-Han Huck Yang, Chengwei Qin, Pin-Yu Chen, Eng Siong Chng, and Chao Zhang
“A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques,” ACL 2024
Megh Thakkar, Quentin Fournier, Matthew D Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, and Sarath Chandar
“Duwak: Dual Watermarks in Large Language Models,” ACL 2024 (Findings)
Chaoyi Zhu, Jeroen Galjaard, Pin-Yu Chen, and Lydia Y. Chen
“Prompting4Debugging: Red-Teaming Text-to-Image Diffusion Models by Finding Problematic Prompts,” ICML 2024
“What Would Gauss Say About Representations? Probing Pretrained Image Models using Synthetic Gaussian Benchmarks,” ICML 2024
Ching-Yun Ko, Pin-Yu Chen, Payel Das, Jeet Mohapatra, and Luca Daniel
“Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark,” ICML 2024
Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, and Tianlong Chen
“TrustLLM: Trustworthiness in Large Language Models,” ICML 2024 (position paper)
Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chao Zhang, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, Willian Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, and Yue Zhao
“Larimar: Large Language Models with Episodic Memory Control,” ICML 2024
Payel Das, Subhajit Chaudhury, Elliot Nelson, Igor Melnyk, Sarath Swaminathan, Sihui Dai, Aurélie Lozano, Georgios Kollias, Vijil Chenthamarakshan, Jiří Navrátil, Soham Dan, and Pin-Yu Chen
“Language Agnostic Code Embeddings,” NAACL 2024
Saiteja Utpala, Alex Gu, and Pin-Yu Chen
“Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!,” ICLR 2024 (oral, top 1.2%)
Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson
<LLM-Tuning-Safety_Page> <Stanford_HAI_policy_brief> <New York Times_finetune_safety> <VentureBeat_finetune_safety> <WinBuzzer_finetune_safety> <TS2_finetune_safety> <TheRegister_finetune_safety> <Info_Lopare_finetune_safety> <PCMag_safety> <Silicon_Sonnets_safety> <Stanford_HAI_finetune_safety> <NewScientist_finetune_safety>
“Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?” ICLR 2024
Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, and Chun-Ying Huang
“AutoVP: An Automated Visual Prompting Framework and Benchmark,” ICLR 2024
Hsi-Ai Tsao, Lei Hsiung, Pin-Yu Chen, Sijia Liu, and Tsung-Yi Ho
Chen Chen, Ruizhe Li, Yuchen Hu, Sabato Marco Siniscalchi, Pin-Yu Chen, Eng Siong Chng, and Chao-Han Huck Yang
“Large Language Models are Efficient Learners of Noise-Robust Speech Recognition,” ICLR 2024 (spotlight, top 5%)
Yuchen Hu, Chen Chen, Chao-Han Huck Yang, Ruizhe Li, Chao Zhang, Pin-Yu Chen, and Eng Siong Chng
“Locally Differentially Private Document Generation Using Zero Shot Prompting,” EMNLP 2023 (Findings)
Saiteja Utpala, Sara Hooker, and Pin-Yu Chen
“HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models,” NeurIPS 2023
Chen Chen, Yuchen Hu, Chao-Han Huck Yang, Sabato Marco Siniscalchi, Pin-Yu Chen, and Eng Siong Chng
“Exploring the Benefits of Visual Prompting in Differential Privacy,” ICCV 2023
Yizhe Li, Yu-Lin Tsai, Chia-Mu Yu, Pin-Yu Chen, and Xuebin Ren
“Understanding and Improving Visual Prompting: A Label-Mapping Perspective,” CVPR 2023
Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, and Sijia Liu
III. Cyber Security & Network Resilience
“Traffic-aware Patching for Cyber Security in Mobile IoT,” IEEE Communications Magazine, 2017
S.-M. Cheng, Pin-Yu Chen, C.-C. Lin, and H.-C. Hsiao
“Decapitation via Digital Epidemics: A Bio-Inspired Transmissive Attack,” IEEE Communications Magazine, 2016
Pin-Yu Chen, C.-C. Lin, S.-M. Cheng, C.-Y. Huang, and H.-C. Hsiao
“Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection,” IEEE ICASSP, 2016
“Action Recommendation for Cyber Resilience,” ACM CCS Workshop, 2015
S. Choudhury, Pin-Yu Chen, L. Rodriguez, D. Curtis, P. Nordquist, I. Ray, and K. Oler
“Sequential Defense against Random and Intentional Attacks in Complex Networks,” Physical Review E, 2015
Pin-Yu Chen and S.-M. Cheng
“Assessing and Safeguarding Network Resilience to Centrality Attacks,” IEEE Communications Magazine, 2014
Pin-Yu Chen and A. O. Hero
“Information Fusion to Defend Intentional Attack in Internet of Things,” IEEE Internet of Things Journal, 2014
Pin-Yu Chen, S.-M. Cheng, and K.-C. Chen
“Smart Attacks in Smart Grid Communication Networks,” IEEE Communications Magazine, 2012
Pin-Yu Chen, S.-M. Cheng, and K.-C. Chen
IV. Graph Learning and Network Data Analytics
“Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks,” ICLR 2023
Shuai Zhang, Meng Wang, Pin-Yu Chen, Sijia Liu, Songtao Lu, and Miao Liu
“Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling,” ICML 2022
Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong
“Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case,” ICML 2020
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong
“hpGAT: High-order Proximity Informed Graph Attention Network,” IEEE Access, 2019
Zhining Liu, Weiyi Liu, Pin-Yu Chen, Chenyi Zhuang, and Chengyun Song
“Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications,” ICML 2019 (long oral presentation)
Pin-Yu Chen, Lingfei Wu, Sijia Liu, and Indika Rajapakse
<FINGER_code> <slides>
“Neural-Brane: Neural Bayesian Personalized Ranking for Attributed Network Embedding,” Data Science and Engineering & ASONAM, 2019
Vachik S. Dave, Baichuan Zhang, Pin-Yu Chen, and Mohammad Al Hasan
“Learning Graph Topological Features via GAN,” IEEE Access, 2019
Weiyi Liu, Hal Cooper, Min-Hwan Oh, Pin-Yu Chen, Sailung Yeung, Fucai Yu, Toyotaro Suzumura, and Guangmin Hu
“Scalable Spectral Clustering Using Random Binning Features,” ACM KDD, 2018 (oral presentation)
Lingfei Wu, Pin-Yu Chen, Ian En-Hsu Yen, Fangli Xu, Yinglong Xia, and Charu Aggarwal
<IBM_Research_Blog> <poster> <slides> <SC-RB_Code>
“Phase Transitions and a Model Order Selection Criterion for Spectral Graph Clustering,” IEEE Transactions on Signal Processing, 2018
“On the Supermodularity of Active Graph-based Semi-Supervised Learning with Stieltjes Matrix Regularization,” IEEE ICASSP, 2018
Pin-Yu Chen* and Dennis Wei* (*equal contribution)
<poster>
“Revisiting Spectral Graph Clustering with Generative Community Models,” IEEE ICDM, 2017
Pin-Yu Chen and L. Wu
<slides>
“Multilayer Spectral Graph Clustering via Convex Layer Aggregation: Theory and Algorithms,” IEEE Transactions on Signal and Information Processing over Networks, 2017
Pin-Yu Chen and A. O. Hero
(awarded IEEE GlobalSIP Student Travel Grant) <slides> <MIMOSA_code>
“Bias-Variance Tradeoff of Graph Laplacian Regularizer,” IEEE Signal Processing Letters, 2017
Pin-Yu Chen and S. Liu
“Incremental Eigenpair Computation for Graph Laplacian Matrices: Theory and Applications,” Social Network Analysis and Mining, 2018
“When Crowdsourcing Meets Mobile Sensing: A Social Network Perspective,” IEEE Communications Magazine, 2015
Pin-Yu Chen, S.-M. Cheng, P.-S. Ting, C.-W. Lien, and F.-J. Chu
“Deep Community Detection,” IEEE Transactions on Signal Processing, 2015
Pin-Yu Chen and A. O. Hero
<DCD_code>
“Phase Transitions in Spectral Community Detection,” IEEE Transactions on Signal Processing, 2015
Pin-Yu Chen and A. O. Hero
“Universal Phase Transition in Community Detectability under a Stochastic Block Model,” Physical Review E, 2015
Pin-Yu Chen and A. O. Hero
“Local Fiedler Vector Centrality for Detection of Deep and Overlapping Communities in Networks,” IEEE ICASSP, 2014
V. Event Propagation Models in Networks
“Identifying Influential Links for Event Propagation on Twitter: A Network of Networks Approach,” IEEE Transactions on Signal and Information Processing over Networks, 2018
Pin-Yu Chen, Chun-Chen Tu, Paishun Ting, Ya-Yun Luo, Danai Koutra, and Alfred Hero
“Analysis of Data Dissemination and Control in Social Internet of Vehicles,” IEEE Internet of Things Journal, 2018
Pin-Yu Chen, Shin-Ming Cheng and Meng-Hsuan Sung
“Analysis of Information Delivery Dynamics in Cognitive Sensor Networks Using Epidemic Models,” IEEE Internet of Things Journal, 2017
Pin-Yu Chen, S.-M. Cheng, and H.-Y. Hsu
“Optimal Control of Epidemic Information Dissemination over Networks,” IEEE Transactions on Cybernetics, 2014
Pin-Yu Chen, S.-M. Cheng, and K.-C. Chen
“On Modeling Malware Propagation in Generalized Social Networks,” IEEE Communications Letters, 2011
S.-M. Cheng, W. C. Ao, Pin-Yu Chen, and K.-C. Chen
“Information Epidemics in Complex Networks with Opportunistic Links and Dynamic Topology,” IEEE GLOBECOM, 2010
Pin-Yu Chen and K.-C. Chen
VI. Optimization Methods and Algorithms for Machine Learning and Signal Processing
“Compressed Decentralized Proximal Stochastic Gradient Method for Nonconvex Composite Problems with Heterogeneous Data,” ICML 2023
Yonggui Yan, Jie Chen, Pin-Yu Chen, Xiaodong Cui, Songtao Lu, and Yangyang Xu
“Zeroth-order Optimization for Composite Problems with Functional Constraints,” AAAI 2022 (oral presentation)
Zichong Li, Pin-Yu Chen*, Sijia Liu*, Songtao Lu*, and Yangyang Xu* (*alphabetical order)
“Mean-based Best Arm Identification in Stochastic Bandits under Reward Contamination,” NeurIPS 2021
Arpan Mukherjee, Ali Tajer, Pin-Yu Chen, and Payel Das
“Rate-improved Inexact Augmented Lagrangian Method for Constrained Nonconvex Optimization,” AISTATS 2021
Zichong Li, Pin-Yu Chen*, Sijia Liu*, Songtao Lu*, and Yangyang Xu* (*alphabetical order)
“Optimizing Mode Connectivity via Neuron Alignment,” NeurIPS 2020
“ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training,” NeurIPS 2020
Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, Xiao Sun, Naigang Wang, Swagath Venkataramani, Vijayalakshmi (Viji) Srinivasan, Wei Zhang, and Kailash Gopalakrishnan
“A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning,” IEEE Signal Processing Magazine, 2020
Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, and Pramod K. Varshney
“SignSGD via Zeroth-Order Oracle,” ICLR 2019
Sijia Liu, Pin-Yu Chen, Xiangyi Chen, and Mingyi Hong
“Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization,” NeurIPS 2018
Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Pai-Shun Ting, Shiyu Chang, and Lisa Amini
<poster>
“Accelerated Distributed Dual Averaging over Evolving Networks of Growing Connectivity,” IEEE Transactions on Signal Processing, 2018
Sijia Liu, Pin-Yu Chen, and Alfred Hero
“Zeroth-Order Online Alternating Direction Method of Multipliers: Convergence Analysis and Applications,” AISTATS 2018
Sijia Liu, Jie Chen, Pin-Yu Chen, and Alfred Hero
<poster>
VII. Interpretability, Explainability, Fairness, and Causality for Machine Learning
“The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Language Models,” ICLR 2024
Yan Liu, Yu Liu, Xiaokang Chen, Pin-Yu Chen, Daoguang Zan, Min-Yen Kan, and Tsung-Yi Ho
“Uncovering and Quantifying Social Biases in Code Generation,” NeurIPS 2023
Yan Liu, Xiaokang Chen, Yan Gao, Zhe Su, Fengji Zhang, Daoguang Zan, Jian-Guang Lou, Pin-Yu Chen, and Tsung-Yi Ho
“Better May Not Be Fairer: A Study on Subgroup Discrepancy in Image Classification,” ICCV 2023
Ming-Chang Chiu, Pin-Yu Chen, and Xuezhe Ma
“Treatment Learning Causal Transformer for Noisy Image Classification,” WACV 2023
Chao-Han Huck Yang, I-Te Danny Hung, Yi-Chieh Liu, and Pin-Yu Chen
<TLT_code>
“Training a Resilient Q-Network against Observational Interference,” AAAI 2022
Chao-Han Huck Yang, I-Te Danny Hung, Yi Ouyang, and Pin-Yu Chen
<CIQ_code>
“AI Explainability 360: Impact and Design,” IAAI 2022
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang
“Leveraging Latent Features for Local Explanations,” KDD 2021
Ronny Luss*, Pin-Yu Chen*, Amit Dhurandhar*, Prasanna Sattigeri*, Yunfeng Zhang*, Karthikeyan Shanmugam, and Chun-Chen Tu (*equal contribution)
“AI Explainability 360: An Extensible Toolkit for Understanding Data and Machine Learning Models,” Journal of Machine Learning Research, 2020
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang (alphabetical order)
“An Information-Theoretic Perspective on the Relationship Between Fairness and Accuracy,” ICML 2020
Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, and Kush R. Varshney
“When Causal Intervention Meets Adversarial Perturbation and Image Masking for Deep Neural Networks,” IEEE ICIP 2019
Chao-Han Huck Yang*, Yi-Chieh Liu*, Pin-Yu Chen, Xiaoli Ma, and Yi-Chang James Tsai (*equal contribution)
<Code>
“Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives,” NeurIPS 2018
Amit Dhurandhar*, Pin-Yu Chen*, Ronny Luss, Chun-Chen Tu, Paishun Ting, Karthikeyan Shanmugam, and Payel Das (*equal contribution)
VIII. Deep Learning and Generalization
“How Do Nonlinear Transformers Learn and Generalize in In-Context Learning?,” ICML 2024
Hongkang Li, Meng Wang, Songtao Lu, Xiaodong Cui, and Pin-Yu Chen
Hongkang Li, Meng Wang, Tengfei Ma, Sijia Liu, Zaixi Zhang, and Pin-Yu Chen
“A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts,” ICML 2024
Mohammed Nowaz Rabbani Chowdhury, Meng Wang, Kaoutar El Maghraoui, Naigang Wang, Pin-Yu Chen, and Christopher Carothers
“SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning,” ICML 2024
Shuai Zhang, Heshan Devaka Fernando, Miao Liu, Keerthiram Murugesan, Songtao Lu, Pin-Yu Chen, Tianyi Chen, and Meng Wang
“On the Convergence and Sample Complexity Analysis of Deep Q-Networks with $\epsilon$-Greedy Exploration,” NeurIPS 2023
Shuai Zhang, Hongkang Li, Meng Wang, Miao Liu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Keerthiram Murugesan, and Subhajit Chaudhury
“Pessimistic Model Selection for Offline Deep Reinforcement Learning,” UAI 2023
Chao-Han Huck Yang, Zhengling Qi, Yifan Cui, and Pin-Yu Chen
“Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression,” ICML 2023 (oral presentation)
Yihao Xue, Siddharth Joshi, Eric Gan, Pin-Yu Chen, and Baharan Mirzasoleiman
“Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks,” ICML 2023 (oral presentation)
Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, and Pin-Yu Chen
“A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity,” ICLR 2023
Hongkang Li, Meng Wang, Sijia Liu, and Pin-Yu Chen
“When Neural Networks Fail to Generalize? A Model Sensitivity Perspective,” AAAI 2023
Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, and Pingkun Yan
“Make an Omelette with Breaking Eggs: Zero-Shot Learning for Novel Attribute Synthesis,” NeurIPS 2022
Yu Hsuan Li, Tzu-Yin Chao, Ching-Chun Huang, Pin-Yu Chen, and Wei-Chen Chiu
“Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning,” ICML 2022
Momin Abbas, Quan Xiao, Lisha Chen, Pin-Yu Chen, and Tianyi Chen
“Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework,” ICML 2022
Ching-Yun Ko, Jeet Mohapatra, Sijia Liu, Pin-Yu Chen, Luca Daniel, and Lily Weng
“MAML is a Noisy Contrastive Learner in Classification,” ICLR 2022
Chia Hsiang Kao, Wei-Chen Chiu, and Pin-Yu Chen
“Auto-Transfer: Learning to Route Transferable Representations,” ICLR 2022
Keerthiram Murugesan*, Vijay Sadashivaiah*, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, and Amit Dhurandhar (*equal contribution)
“How Unlabeled Data Improve Generalization in Self-training? A One-hidden-layer Theoretical Analysis,” ICLR 2022
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong
“Predicting Deep Neural Network Generalization with Perturbation Response Curves,” NeurIPS 2021
Yair Schiff, Brian Quanz, Payel Das, and Pin-Yu Chen
“Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations,” NeurIPS 2021
Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, and Pin-Yu Chen
“Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks,” NeurIPS 2021
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong
Technical Reports
[T11] Vijay Arya, Rachel KE Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C Hoffman, Stephanie Houde, Q Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R Varshney, Dennis Wei, and Yunfeng Zhang. “One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques,”
[T10] Rise Ooi, Chao-Han Huck Yang, Pin-Yu Chen, Víctor Eguíluz, Narsis Kiani, Hector Zenil, David Gomez-Cabrero, and Jesper Tegnér, “Controllability, Multiplexing, and Transfer Learning in Networks using Evolutionary Learning”
[T9] Sijia Liu, Pin-Yu Chen, Alfred Hero, and Indika Rajapakse, “Dynamic Network Analysis of the 4D Nucleome”
[T8] Sheng-Chun Kao*, Chao-Han Huck Yang*, Pin-Yu Chen, Xiaoli Ma, and Tushar Krishna, “Reinforcement Learning based Interconnection Routing for Adaptive Traffic Optimization,” poster paper at IEEE/ACM International Symposium on Networks-on-Chip (NOCS), 2019 (*equal contribution)
[T7] Chia-Yi Hsu, Pin-Yu Chen, and Chia-Mu Yu, “Characterizing Adversarial Subspaces by Mutual Information,” poster paper at AsiaCCS, 2019
[T6] Pin-Yu Chen, Sutanay Choudhury, Luke Rodriguez, Alfred O. Hero, and Indrajit Ray, “Enterprise Cyber Resiliency Against Lateral Movement: A Graph Theoretic Approach,” technical report for a book chapter in “Industrial Control Systems Security and Resiliency: Practice and Theory,” Springer, 2019
[T5] Sijia Liu and Pin-Yu Chen, “Zeroth-Order Optimization and Its Application to Adversarial Machine Learning,” IEEE Intelligent Informatics Bulletin (invited paper)
[T4] Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh, “Show-and-Fool: Crafting Adversarial Examples for Neural Image Captioning”
[T3] Yash Sharma and Pin-Yu Chen, “Bypassing Feature Squeezing by Increasing Adversary Strength”
[T2] Zhuolin Yang, Bo Li, Pin-Yu Chen, and Dawn Song, “Towards Mitigating Audio Adversarial Perturbations”
[T1] Pin-Yu Chen, Meng-Hsuan Sung, and Shin-Ming Cheng, “Buffer Occupancy and Delivery Reliability Tradeoffs for Epidemic Routing”
Patents
[PA46] Amino Acid Sequence Infilling
[PA45] Detecting Actions in Video using Machine Learning and based on Bidirectional Feedback between Predicted Type and Predicted Extent
[PA44] Reprogrammable Federated Learning
[PA43] Certification-based Robust Training by Refining Decision Boundary
[PA42] Temporal Action Localization with Mutual Task Guidance
[PA41] Self-supervised semantic shift detection and alignment
[PA40] Neural capacitance: neural network selection via edge dynamics
[PA39] Protein Structure Prediction using Machine Learning
[PA38] Testing adversarial robustness of systems with limited access
[PA37] Counterfactual Debiasing Inference for Compositional Action Recognition
[PA36] Image Grounding with Modularized Graph Attention Networks
[PA35] Compositional Action Machine Learning Mechanisms
[PA34] Determining Analytical Model Accuracy with Perturbation Response
[PA33] Model-Agnostic Input Transformation for Neural Networks
[PA32] Decentralized Policy Gradient Descent and Ascent for Safe Multi-agent Reinforcement Learning
[PA31] Embedding-Based Generative Model for Protein Design
[PA30] Distributed Adversarial Training for Robust Deep Neural Networks
[PA29] Generating Unsupervised Adversarial Examples for Machine Learning
[PA28] Self-supervised semantic shift detection and alignment
[PA27] Transfer learning with machine learning systems
[PA26] Summarizing Videos Via Side Information
[PA25] Detecting Trojan Neural Networks
[PA24] State-augmented Reinforcement Learning
[PA23] Query-based Molecule Optimization and Applications to Functional Molecule Discovery
[PA22] Efficient Search of Robust Accurate Neural Networks
[PA21] Arranging content on a user interface of a computing device
[PA20] Filtering artificial intelligence designed molecules for laboratory testing
[PA19] Training robust machine learning models
[PA18] Robustness-aware quantization for neural networks against weight perturbations
[PA17] Inducing Creativity in an Artificial Neural Network
[PA16] Interpretability-Aware Adversarial Attack and Defense Method for Deep Learning
[PA15] Mitigating adversarial effects in machine learning systems
[PA14] Designing and folding structural proteins from the primary amino acid sequence
[PA13] Contrastive explanations for images with monotonic attribute functions
[PA12] Efficient and secure gradient-free black box optimization
[PA11] Explainable machine learning based on heterogeneous data
[PA10] Computational creativity based on a tunable creativity control function of a model
[PA9] Integrated noise generation for adversarial training
[PA8] Framework for Certifying a lower bound on a robustness level of convolutional neural networks
[PA7] Adversarial input identification using reduced precision deep neural networks
[PA6] Model agnostic contrastive explanations for structured data
[PA5] Contrastive explanations for interpreting deep neural networks
[PA4] Computational Efficiency in Symbolic Sequence Analytics Using Random Sequence Embeddings
[PA3] Graph similarity analytics
[PA2] Testing adversarial robustness of systems with limited access
[PA1] System and methods for automated detection, reasoning and recommendations for resilient cyber systems
Conference/Workshop Organizer
[NeurIPS 2024] New Frontiers in Adversarial Machine Learning
[QCE 2024] International Workshop on Quantum Computing and Reinforcement Learning
[ICML 2022 & 2023] New Frontiers in Adversarial Machine Learning
[MLSyS 2022] Cross-Community Federated Learning: Algorithms, Systems and Co-designs
[ICASSP 2022 & 2023] Special Session on Quantum Machine Learning for Speech and Language Processing
[NeurIPS 2021] New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership
[KDD 2019-2022] Adversarial Learning Methods for Machine Learning and Data Mining
[IEEE GLOBECOM 2020] Industrial Forum Co-Chair
[IEEE GlobalSIP 2018] Signal Processing for Adversarial Machine Learning
[IEEE ICME 2018] Machine Learning and Artificial Intelligence for Multimedia Creation
Tutorial Presenter
[ICASSP 2024] Parameter-Efficient and Prompt Learning for Speech and Language Foundation Models
[ICASSP 2024 & AAAI 2024] Zeroth-Order Machine Learning: Fundamental Principles and Emerging Applications in Foundation Models
[INTERSPEECH 2023] Resource-Efficient and Cross-Modal Learning Toward Foundation Models
[ICASSP 2023] Parameter-Efficient Learning for Speech and Language Processing: Adapters, Prompts, and Reprogramming
[OAMLS 2022] Holistic Adversarial Robustness for Deep Learning
[NeurIPS 2022] Foundational Robustness of Foundation Models
[ICASSP 2022] Adversarial Robustness and Reprogramming for Speech and Language Processing: Challenges and New Opportunities
[MLSS 2021] Holistic Adversarial Robustness for Deep Learning
[IJCAI 2021] Quantum Neural Networks for Speech and Natural Language Processing
[CVPR 2021] Practical Adversarial Robustness in Deep Learning: Problems and Solutions
[IEEE HOST 2020] Security Issues in AI and Their Impacts on Hardware Security
[ECCV 2020] Adversarial Robustness of Deep Learning Models: Attack, Defense, Verification, and Beyond
[CVPR 2020] Zeroth Order Optimization: Theory and Applications to Deep Learning
[ICASSP 2020] Adversarial Robustness of Deep Learning Models: Attack, Defense and Verification
[IEEE BigData 2018] Recent Progress in Zeroth Order Optimization and Its Applications to Adversarial Robustness in Deep Learning
Service
Editorial Board
- Transactions on Machine Learning Research (TMLR): 2022-current
- IEEE Transactions on Signal Processing: 2024-current
- PLOS ONE: 2017-2022
- KSII-TIIS (Area Editor): 2020-2021
- IEEE J-IoT (Guest Editor): 2019
Senior Memberships
IEEE and ACM: 2023-current
Technical Committee
IEEE Signal Processing Society (Machine Learning for Signal Processing & Applied Signal Processing Systems): 2022-current
Area Chair/Senior PC
NeurIPS (AC), ICLR (AC), ICML (AC), AAAI (SAC), IJCAI (Senior PC), PAKDD (SPC), ICASSP (AC), AISTATS (AC), ACML (AC)
Featured conference reviewers
NeurIPS, ICML, AAAI, ICLR, IJCAI, CVPR, SaTML, ICDM, WWW, INFOCOM, GLOBECOM, ICC, WCNC, ACC, ICASSP, ICME, ACMMM, ACM CCS
Featured journal reviewers
JMLR; Proc. IEEE, IEEE T-SP, T-IP, J-STSP, T-SIPN, T-KDE, T-PAMI, J-SAC, ToN, T-WC, T-VT, CL, SPL, T-PDS, T-IFS, T-NNLS, WCM, WCL, J-IoT, T-CNS, ACCESS, J-ETCAS, Netw. Mag., Comm. Mag.; PLOS ONE; Communications of the ACM
Mentorship
Students with me on their PhD thesis committee:
Erh-Chung Chen (National Tsing Hua University)
Amirmasoud Ghiassi (TU Delft)
Shashank Kotyan (Kyushu University)
Yihao Xue (UCLA)
Ching-Yun (Irene) Ko (MIT)
Hongkang Li (Rensselaer Polytechnic Institute)
Arpan Mukherjee (Rensselaer Polytechnic Institute)
Ivánkay Ádám Dániel (EPFL)
Aniruddha Saha (University of Maryland, Baltimore County)
Maurício Gruppi (Rensselaer Polytechnic Institute)
Chao-Han Huck Yang (Georgia Institute of Technology)
Joey Tatro (Rensselaer Polytechnic Institute)
Chun-Chen Tu (University of Michigan, Ann Arbor)
Internship
Pacific Northwest National Laboratory (PNNL) - Data Science PhD Intern
- Action recommendations for real-time service degradation attacks
- User segmentation and host hardening against lateral movement attacks
Fun and Proud Fact: My Erdős number is 4 (through two distinct paths)!
Me -> Alfred Hero -> Wayne Stark -> Robert McEliece -> Paul Erdős
Me -> Pai-Shun Ting -> John P. Hayes -> Frank Harary -> Paul Erdős