IEEE Workshop on Trustworthy and Privacy-Preserving Human-AI Collaboration
Co-located with the IEEE International Conferences CIC/TPS/CogMI
November 11, 2025, at the Wyndham Grand Pittsburgh Downtown, Pittsburgh, PA
This workshop explores the evolving relationship between humans and AI systems, with a focus on fostering trustworthy and privacy-preserving collaboration. As AI capabilities grow and its presence in daily life expands, it is essential that these systems align with human values to remain responsible, effective, and secure. Although human-AI collaboration offers significant potential for enhanced decision-making and societal benefit, it also raises critical challenges, such as privacy risks, trust and safety concerns, and cybersecurity threats across diverse domains.
Our goal is to foster interdisciplinary dialogue and shape a roadmap for effective and trustworthy human-AI collaboration. We invite contributions that bridge the gap between machine intelligence and human understanding, particularly in shared decision-making scenarios. The workshop promotes the development of adaptive, hybrid, and emerging AI systems that respond to dynamic contexts while respecting human agency and enhancing human capabilities. We welcome insights from user studies and the design of collaborative frameworks that strengthen trust, transparency, privacy, and security. We also encourage discussions addressing key questions such as: What methods and metrics are needed to evaluate human-AI teams effectively? What factors influence trust, performance, and responsible AI deployment?
Topics of interest include, but are not limited to:
Human-AI collaborative paradigms across domains, such as transportation, healthcare, manufacturing, and education
Fairness, transparency, ethics, and accessibility in AI
Trust, privacy, and security in AI
Cognitive, affective, and social aspects of safe human-AI collaboration
Methods and metrics for assessing human-AI teamwork and its trustworthiness
Multi-modal sensing and social signal processing in human-AI interaction to enhance trust and safety
Trusted human-centered/interactive machine learning
Approaches for privacy-, security-, and trustworthiness-by-design human-AI collaboration and teaming
Breakfast (provided by conference)
7:15 AM – 8:30 AM
Welcome and Opening Remarks
8:30 AM – 8:45 AM
Keynote 1
8:45 AM – 9:45 AM
Jason Hong, Professor, Human-Computer Interaction Institute, Carnegie Mellon University
Title: Auditing AI Systems for Bias and Fairness
Coffee Break & Informal Networking
9:45 AM – 10:00 AM
Keynote 2
10:00 AM – 11:00 AM
Dr. Weisong Shi, Alumni Distinguished Professor & Chair, Department of Computer & Information Sciences, University of Delaware
Title: Vehicle Computing: A New Computing Paradigm in the Era of Autonomous Driving
Panel — Trustworthy- and Privacy-by-Design Human–AI Collaboration Systems
11:00 AM – 12:15 PM
Panelists: Qiaoning Zhang (Arizona State University); Sauvik Das (Carnegie Mellon University); Jason Hong (Carnegie Mellon University); Shandong Wu (University of Pittsburgh); Na Du (University of Pittsburgh)
Moderator: Danda B. Rawat, Howard University
Lunch Break (provided by conference)
12:15 PM – 1:30 PM
Paper Session 1: Trust & Human–AI Collaboration
1:30 PM – 2:50 PM (Each talk: ~15 min + 5 min Q&A / transitions)
1:30 – 1:50 — Quantifying Trust in Human–AI Teams: A Statistical Framework for Task-Based Calibration of AI Autonomy in Compliance Auditing
Priya Mohan (Independent Researcher), Yugandhar Suthari (University of the Cumberlands), Sahil Dhir (Independent Researcher)
1:50 – 2:10 — Voice Design and Trust in Automated Vehicles: Findings and a Research Agenda
Jiongyu Chen (Arizona State University), Qiaoning Zhang (Arizona State University)
2:10 – 2:30 — Uncertainty Quantification for Deep Learning-based Medical Imaging Classification Model Evaluation and Individualized Risk Estimation
Jiren Li (University of Pittsburgh), Dooman Arefan (University of Pittsburgh), Shandong Wu (University of Pittsburgh)
2:30 – 2:50 — The Role of Perceived Social Identity in Human–AI Collaboration
Jessica Barfield (University of Kentucky)
Coffee Break & Informal Networking
2:50 PM – 3:20 PM
Paper Session 2: Privacy, Security & Human–AI Collaboration
3:20 PM – 5:00 PM (Each talk: ~15 min + 5 min Q&A / transitions)
3:20 – 3:40 — RegEase: Simplifying Insurance Compliance
Samhitha Poreddy (Verisk Analytics)
3:40 – 4:00 — Backdoor-Aware Adaptive Aggregation for Wireless Ad Hoc Federated Learning
Atsuya Muramatsu (The University of Tokyo), Hideya Ochiai (The University of Tokyo)
4:00 – 4:20 — Certified Attribute Privacy in CAN Latent Space
Jamil Arbas (Toronto Metropolitan University), Shadan Ghaffaripour (Toronto Metropolitan University), Ali Miri (Toronto Metropolitan University)
4:20 – 4:40 — MidgleyNet — A Proof of Concept: How Learned Fingerprint Injections Can Hijack Deepfake Detectors
Michele Porzio (Yuan Ze University), James Sutton (Yuan Ze University), Naeem Ul Islam (Yuan Ze University)
4:40 – 5:00 — Multimodal Deep Fusion Architecture for Human Activity and Fall Detection in Elderly Care
Debashis Das (Meharry Medical College), Laure Bien Aime (Meharry Medical College), Pushpita Chatterjee (Meharry Medical College), Uttam Ghosh (Meharry Medical College)
Closing Remarks & Adjournment
5:00 PM – 5:15 PM
Auditing AI Systems for Bias and Fairness
Abstract: Auditing is an underexamined but powerful tool for finding systematic problems in AI systems, fostering trust in those systems, and holding companies and organizations that use AI accountable. Interestingly, many people are already organically coming together to audit many kinds of algorithmic systems. In this talk, I'll give an overview of our team's research in this space over the past six years, focusing on:
- How do people do auditing of algorithmic systems today?
- How can we help everyday people audit AI systems?
- How can we use AI to help crowds of people with auditing?
- How can auditing help with AI literacy?
Biography: Jason Hong is a professor in the Human-Computer Interaction Institute, part of the School of Computer Science at Carnegie Mellon University. His research spans mobile computing, usable privacy and security, and responsible AI. He was also a co-founder of Wombat Security Technologies, which was acquired by Proofpoint in March 2018.
Vehicle Computing: A New Computing Paradigm in the Era of Autonomous Driving
Abstract: Vehicles were used mainly for transportation in the last century. With the proliferation of onboard computing and communication capabilities, we envision that future connected and autonomous vehicles (CAVs) will serve as mobile computing platforms in the next century, in addition to their conventional transportation role. In this presentation, Dr. Shi will present the vision of Vehicle Computing, a new era for the automotive industry, followed by two vital enabling technologies: autonomous driving and edge computing. Finally, he will talk about the recent development of D-STAR, a live and evolving testbed for vehicle computing on the STAR campus at the University of Delaware.
Biography: Dr. Weisong Shi is an Alumni Distinguished Professor and Department Chair of Computer and Information Sciences at the University of Delaware (UD), where he leads the Connected and Autonomous Research (CAR) Laboratory. Dr. Shi is the Honorary Center Director of a recently funded NSF eCAT Industry-University Cooperative Research Center (IUCRC) (2023-2028), focusing on Electric, Connected, and Autonomous Technology for Mobility. He is an internationally renowned expert in edge computing, autonomous driving, and connected health. His pioneering paper, “Edge Computing: Vision and Challenges,” has been cited over 9,300 times. He is the Editor-in-Chief of IEEE Internet Computing Magazine and the founding steering committee chair of several conferences, including the ACM/IEEE Symposium on Edge Computing (SEC), the IEEE/ACM International Conference on Connected Health (CHASE), and the IEEE International Conference on Mobility (MOST). He is a fellow of IEEE and a member of CRA’s Computing Community Consortium (CCC) Council.
Workshop papers should follow the same submission guidelines and instructions as the main conference (IEEE TPS). Papers should not exceed 10 pages, including references, and should use the standard IEEE two-column conference format; the template is available from the IEEE website. For questions, please contact the workshop organizers.
Submit your paper through EasyChair and select the "IEEE Workshop on Trustworthy and Privacy-Preserving Human-AI Collaboration" Track.
Submission deadline: Sept. 8, 2025 (extended to Sept. 30, 2025)
Acceptance notification: Sept. 30, 2025 (extended to Oct. 17, 2025)
Final version due: Oct. 24, 2025
Co-chair: Na Du, University of Pittsburgh, na.du@pitt.edu
Co-chair: James B. D. Joshi, University of Pittsburgh, jjoshi@pitt.edu
Co-chair: Danda B. Rawat, Howard University, danda.rawat@howard.edu
Imtiaz Ahmed, Howard University
Shih-Yi Chien, National Sun Yat-sen University
Yiheng Feng, Purdue University
Bimal Ghimire, Penn State University
Helge Janicke, Edith Cowan University
Muslum Ozgur Ozmen, Arizona State University
Houbing Herbert Song, University of Maryland, Baltimore County
Yuba Siwakoti, Central Washington University
Pingbo Tang, Carnegie Mellon University
Shandong Wu, University of Pittsburgh
Qiaoning Zhang, Arizona State University