IEEE Workshop on Trustworthy and Privacy-Preserving Human-AI Collaboration
Co-located with the IEEE CIC/TPS/CogMI International Conferences
November 2025 in Pittsburgh, PA
This workshop explores the evolving relationship between humans and AI systems, with a focus on fostering trustworthy and privacy-preserving collaboration. As AI systems grow more capable and more present in daily life, it is essential that they align with human values so that they remain responsible, effective, and secure. Although human-AI collaboration offers significant potential for enhanced decision-making and societal benefit, it also raises critical challenges across diverse domains, such as privacy risks, trust and safety concerns, and cybersecurity threats.
Our goal is to foster interdisciplinary dialogue and shape a roadmap for effective and trustworthy human-AI collaboration. We invite contributions that bridge the gap between machine intelligence and human understanding, particularly in shared decision-making scenarios. The workshop promotes the development of adaptive, hybrid, and emerging AI systems that respond to dynamic contexts while respecting human agency and enhancing human capabilities. We welcome insights from user studies and the design of collaborative frameworks that strengthen trust, transparency, privacy, and security. We also encourage discussions addressing key questions such as: What methods and metrics are needed to evaluate human-AI teams effectively? What factors influence trust, performance, and responsible AI deployment?
Topics of interest include, but are not limited to:
Human-AI collaborative paradigms across domains such as transportation, healthcare, manufacturing, and education
Fairness, transparency, ethics, and accessibility in AI
Trust, privacy, and security in AI
Cognitive, affective, and social aspects of safe human-AI collaboration
Methods and metrics for assessing human-AI teamwork and its trustworthiness
Multi-modal sensing and social signal processing in human-AI interaction to enhance trust and safety
Trusted human-centered/interactive machine learning
Privacy-, security-, and trustworthiness-by-design approaches for human-AI collaboration and teaming
Workshop papers should follow the same submission guidelines and instructions as the main conference (IEEE TPS). Papers should not exceed 10 pages, including references, and must use the standard IEEE two-column conference template. For questions, please contact the workshop organizers.
Submit your paper through EasyChair and select the "IEEE Workshop on Trustworthy and Privacy-Preserving Human-AI Collaboration" Track.
Submission deadline: Aug 31, 2025 (extended to Sep 8, 2025)
Acceptance notification: Sep 25, 2025 (extended to Sep 30, 2025)
Final version due: Oct 10, 2025
Co-chair: Na Du, University of Pittsburgh, na.du@pitt.edu
Co-chair: James B. D. Joshi, University of Pittsburgh, jjoshi@pitt.edu
Co-chair: Danda B. Rawat, Howard University, danda.rawat@howard.edu
Program Committee:
Imtiaz Ahmed, Howard University
Shih-Yi Chien, National Sun Yat-sen University
Yiheng Feng, Purdue University
Bimal Ghimire, Penn State University
Helge Janicke, Edith Cowan University
Muslum Ozgur Ozmen, Arizona State University
Houbing Herbert Song, University of Maryland, Baltimore County
Yuba Siwakoti, Howard University
Pingbo Tang, Carnegie Mellon University
Shandong Wu, University of Pittsburgh
Qiaoning Zhang, Arizona State University