TrustedAI
Together with my students and collaborators, I am working on creating Trusted AI systems by engaging with stakeholders and pursuing a few novel ideas. For me, this means covering: (1) explainable AI (XAI), (2) fairness and ethics, (3) data management and privacy, (4) human-AI interaction, and (5) testing practices.
We have a few ideas. The first is to rate AI systems for trust (or, conversely, the lack of it) as a third-party service, where ratings convey behavior discovered by testing the system without access to its training data. The second is to secure deep-learning-based AI services against adversarial attacks. The third is to verify AI machine learning (ML) models using Blockchain. Watch this space for more!
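To make the first idea concrete, here is a minimal, hypothetical sketch of third-party rating: probe a black-box sentiment system with paired inputs that differ only in a protected attribute (here, gender) and assign a rating from the observed output disparity. The `toy_sentiment` stand-in system, the test pairs, and the rating thresholds are all illustrative assumptions, not the method from our papers.

```python
# Sketch: rate a black-box sentiment system for gender bias by testing it
# with paired inputs, without any access to its training data.

def toy_sentiment(text: str) -> float:
    """Illustrative stand-in for a black-box system; score in [-1, 1]."""
    score = 0.0
    for word, weight in {"great": 0.8, "terrible": -0.8, "he": 0.1}.items():
        if word in text.lower().split():
            score += weight
    return max(-1.0, min(1.0, score))

def rate_for_bias(system, paired_tests):
    """Rate a system by the largest output gap across paired inputs."""
    gap = max(abs(system(a) - system(b)) for a, b in paired_tests)
    if gap < 0.05:           # thresholds are illustrative assumptions
        return "unbiased"
    return "biased" if gap < 0.3 else "strongly biased"

pairs = [
    ("He is a great doctor", "She is a great doctor"),
    ("He gave a terrible talk", "She gave a terrible talk"),
]
print(rate_for_bias(toy_sentiment, pairs))  # → biased
```

Because only input-output behavior is used, the same rating procedure applies to any hosted service the rater can query.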
Technical Community Engagement and Standardization
University of South Carolina's Principal Investigator (PI)/lead for participation in the National Institute of Standards and Technology (NIST) Artificial Intelligence Safety Institute Consortium (AISIC), where we are contributing to many task groups [announcement, members]
Contributed to the World Economic Forum (WEF) framework for building ethical chatbots for health, called Chatbot Reset, in 2020. The process convened discussions among a variety of stakeholders around the globe on issues of relevance to developers, medical administrators, and regulators, among others. Four pilot projects, including one by the Apollo group in India, then used the framework to build and deploy chatbots. A report came out in 2021 on how the four projects used it and what they found. [2020-2021]
Mentored development of the "Handbook on Data Protection and Privacy for Developers of Artificial Intelligence (AI) in India: Practical Guidelines for Responsible Development of AI" by the Data Security Council of India (DSCI) in collaboration with the German Development Cooperation (GIZ), Digital India Foundation, and Koan Advisory Group. Available online [2020-2021]
Represented IBM in the Partnership on AI's (PAI's) working group on “Fair, Transparent, and Accountable AI" [2018-2019]. Contributed to PAI's report on “Minimum Requirements for the Responsible Deployment of AI Systems” [April 2019]
Funding
JP Morgan Chase research award (gift), 2023-2024, Automatically Generating Composable Trust Certificates for Blackbox AI Systems in Finance and Assessing their Impact on End Users [AI Ethics, Multi-modal Causal Rating, Finance]
CISCO research award (gift), 2022-2023, Rating AI for Trust [AI Ethics, Testing]
VAJRA visiting faculty fellowship (India), 2021-2023, Rating AI and Indian perspective [AI Ethics, Testing, Recommendation - Teaming, Food]
Activities and Teaching
Graduate courses
Trusted AI, CSCE 590-1 at the University of South Carolina, from Fall 2021.
[Regular from Fall 2023] CSCE 581 - Trusted Artificial Intelligence (3 Credits)
AI Trust – responsible/ethical technology, fairness/lack of bias, explanations (XAI), machine learning, reasoning, software testing, data quality and provenance, tools and projects.
Prerequisites: C or better in CSCE 240 and CSCE 350.
Prerequisite or Corequisite: D or better in CSCE 330.
Tutorial on Trusting AI by Testing and Rating Third Party Offerings, in conjunction with the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020), Biplav Srivastava, Francesca Rossi, Yokohama, Japan, January 2021
Recognized as VAJRA Faculty by the Department of Science and Technology, Government of India, to work with faculty and students at IIT Roorkee, India, in 2022 on AI Trust
Idea 1: Rating AI Systems
Papers (8)
Kausik Lakkaraju, Biplav Srivastava, Marco Valtorta, Rating Sentiment Analysis Systems for Bias Through a Causal Lens, IEEE Transactions on Technology and Society, doi: 10.1109/TTS.2024.3375519, 2024. On Arxiv at: https://arxiv.org/abs/2302.02038 [AI;Bias;Causality;Rating;Sentiment Analysis Systems;User Trust, Journal]
Biplav Srivastava, Kausik Lakkaraju, Mariana Bernagozzi, Marco Valtorta, Advances in Automatically Rating the Trustworthiness of Text Processing Services, AI Ethics (Springer Nature), https://doi.org/10.1007/s43681-023-00391-5, 2023; also in AAAI Spring Symposium on AI Trustworthiness Assessment, San Francisco, 2023. [AI Rating, Trust, Journal]
Kausik Lakkaraju, Aniket Gupta, Biplav Srivastava, Marco Valtorta, Dezhi Wu, The Effect of Human v/s Synthetic Test Data and Round-tripping on Assessment of Sentiment Analysis Systems for Bias, Fifth IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, Atlanta, USA, 2023. On Arxiv at: https://arxiv.org/abs/2401.12985 [Sentiment Rating, AI Trust, Conference]
Mariana Bernagozzi, Biplav Srivastava, Francesca Rossi and Sheema Usmani, Gender Bias in Online Language Translators: Visualization, Human Perception, and Bias/Accuracy Trade-offs, IEEE Internet Computing, Special Issue on Sociotechnical Perspectives, Nov/Dec 2021 [Visualizing Ethics Rating, User Survey, Journal]
Mariana Bernagozzi, Biplav Srivastava, Francesca Rossi and Sheema Usmani, VEGA: a Virtual Environment for Exploring Gender Bias vs. Accuracy Trade-offs in AI Translation Services, AAAI 2021. [Visualizing Ethics Rating, Demonstration paper]
Biplav Srivastava, Francesca Rossi, Sheema Usmani, and Mariana Bernagozzi, Personalized Chatbot Trustworthiness Ratings, IEEE Transactions on Technology and Society, 2020. Pre-publication version on Arxiv: https://arxiv.org/abs/2005.10067. [Chatbot, Trust Rating, Ethics, Journal]
Biplav Srivastava, Francesca Rossi, Rating AI Systems for Bias to Promote Trustable Applications, IBM Journal of Research and Development, 2019. [AI Service Rating, Ethics, Journal]
Biplav Srivastava, Francesca Rossi, Towards Composable Bias Rating of AI Systems, 2018 AI Ethics and Society Conference (AIES 2018), New Orleans, Louisiana, USA, Feb 2-3, 2018. [AI Service Rating, Ethics, Conference]
Tools/ Resources (1)
ROSE: tool and data ResOurces to explore the instability of SEntiment analysis systems (video, pre-print paper), Aug 2021.
Patents/ Applications (3)
System and Method for Chatbot Trust Rating, 2020. [Application Publication: 20210097085]
Generating Representative Unstructured Data to Test Artificial Intelligence Services for Bias, 2019. [US Patent: US 10,783,068]
Assigning bias ratings to Services, 2018. [US Patent: US 11,301,909]
Idea 2: Securing AI Systems
Papers (1)
Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, Biplav Srivastava, Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering, Artificial Intelligence Safety Workshop (SafeAI) at AAAI-2019, Honolulu, USA, Jan 2019. Recognized as Best Paper at SafeAI 2019 Workshop. [AI-NN, Safety]
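The activation-clustering result above can be illustrated with a heavily simplified sketch: for each class, cluster the network's last-hidden-layer activations into two groups and flag the class if one cluster is unusually small and well separated, which is the signature of backdoor-poisoned samples. The one-dimensional two-means routine, the synthetic activations, and the 30% size threshold below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: detect a backdoor by clustering per-class activations and
# flagging a class whose activations split into one large and one small
# cluster (clean vs. trigger-carrying samples).
import random

def two_means(xs, iters=20):
    """Minimal 1-D 2-means; returns the two clusters of values."""
    c0, c1 = min(xs), max(xs)          # start centers at the extremes
    for _ in range(iters):
        a = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        b = [x for x in xs if abs(x - c0) > abs(x - c1)]
        if a:
            c0 = sum(a) / len(a)
        if b:
            c1 = sum(b) / len(b)
    return a, b

def flag_poison(activations, small_frac=0.3):
    """Flag a class if one activation cluster is unusually small."""
    a, b = two_means(activations)
    frac = min(len(a), len(b)) / len(activations)
    return 0 < frac <= small_frac      # threshold is an assumption

random.seed(0)
clean = [random.gauss(0.0, 0.1) for _ in range(90)]    # clean activations
poison = [random.gauss(2.0, 0.1) for _ in range(10)]   # trigger activations
print(flag_poison(clean + poison))  # → True
print(flag_poison(clean))
```

In the paper, the clustering runs on dimensionality-reduced activations of the last hidden layer, class by class, with relative cluster size and silhouette-style scores deciding whether a cluster is poisoned.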
Patents/ Applications (2)
Automatically Determining Whether an Activation Cluster Contains Poisonous Data
Automatically Determining Poisonous Attacks on Neural Networks
Idea 3: Verifying AI ML Models using Blockchain
Papers (2)
Ravi Kiran Raman, Roman Vaculin, Michael Hind, Sekou L. Remy, Eleftheria K. Pissadaki, Nelson Kibichii Bore, Roozbeh Daneshvar, Biplav Srivastava, Kush R. Varshney, A Scalable Blockchain Approach for Trusted Computation and Verifiable Simulation in Multi-Party Collaborations, IEEE International Conference on Blockchain and Cryptocurrency, (ICBC 2019), Seoul, 2019. [AI Verification, Blockchain]
Nelson Kibichii Bore, Ravi Kiran Raman, Isaac M. Markus, Sekou L. Remy, Oliver Bent, Michael Hind, Eleftheria K. Pissadaki, Biplav Srivastava, Roman Vaculin, Promoting Distributed Trust in Machine Learning and Computational Simulation, IEEE International Conference on Blockchain and Cryptocurrency, (ICBC 2019), Seoul, 2019. [AI Verification, Blockchain]
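The lineage-verification idea behind these papers can be sketched, under simplifying assumptions, as a hash-chained ledger of training artifacts: each party records a fingerprint of its data, code, or model, and anyone can later check that the recorded chain is internally consistent. The `LineageLedger` class and payload fields are hypothetical; real deployments use a distributed blockchain platform, not an in-memory list.

```python
# Sketch: verify ML training lineage by chaining SHA-256 hashes of
# recorded artifacts, so tampering with any entry breaks verification.
import hashlib
import json

def block_hash(prev_hash: str, payload: dict) -> str:
    """Deterministic hash of a payload chained to the previous block."""
    record = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

class LineageLedger:
    def __init__(self):
        self.chain = []  # list of (hash, payload) pairs

    def append(self, payload: dict):
        prev = self.chain[-1][0] if self.chain else "genesis"
        self.chain.append((block_hash(prev, payload), payload))

    def verify(self) -> bool:
        """Recompute every hash; any tampered entry breaks the chain."""
        prev = "genesis"
        for h, payload in self.chain:
            if h != block_hash(prev, payload):
                return False
            prev = h
        return True

ledger = LineageLedger()
ledger.append({"step": "data", "sha256": "ab12..."})   # dataset fingerprint
ledger.append({"step": "model", "sha256": "cd34..."})  # model fingerprint
print(ledger.verify())  # → True
ledger.chain[0] = (ledger.chain[0][0], {"step": "data", "sha256": "other"})
print(ledger.verify())  # → False
```

Because each hash commits to the previous one, a verifier who trusts only the final hash can audit the entire computation history, which is the property the multi-party simulation and ML settings in the papers rely on.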