Trusting AI by Testing and Rating Third Party Offerings
By Biplav Srivastava, AI Institute, University of South Carolina and Francesca Rossi, IBM
at IJCAI-PRICAI 2020, the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence, held virtually (originally planned for Yokohama, Japan), January 7-15, 2021
For any technology to succeed in society, people have to trust its safety and performance. This applies as much to planes, medicines, and children's toys as to Artificial Intelligence (AI) offerings (services and applications). As new AI applications have been deployed for mass-scale use, trust issues with AI have grown, concerning properties such as fairness and explainability.
The aim of the tutorial is to help newcomers to AI, as well as experienced researchers, understand issues related to trust, summarize methods to tackle them, become familiar with available open source tools, and get pointers for taking up research in this area. For illustration, the tutorial will consider a wide variety of popular AI services, many based on Natural Language Processing, such as machine translation, entity extraction, object identification, and conversational agents (chatbots).
There are three points of view from which to look at a technology: producer (i.e., developer), consumer (e.g., lay user), and third party. This tutorial will uniquely focus on an independent third-party view of AI technologies published by producers for use by consumers, who may be end users or developers building more complex AI services.
The topics we will cover are:
3. Assessing/Testing AI Services
4. Rating AI services for trust
5. AI Trust Ecosystem
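To make the assessing and rating topics above concrete, here is a minimal, hypothetical sketch of third-party black-box testing: the tester probes an opaque service with paired inputs that differ only in a protected attribute and turns the observed gap into a coarse rating. The service function, probe template, and rating thresholds below are all illustrative assumptions, not part of the tutorial's material.

```python
# Hypothetical sketch of third-party, black-box bias testing of an AI service.
# The "service" here is a local stand-in; a real third-party tester would call
# a remote API it does not control. All names/thresholds are assumptions.

def toy_sentiment_service(text: str) -> float:
    """Stand-in for an opaque sentiment API returning a score in [0, 1]."""
    score = 0.5
    if "excellent" in text:
        score += 0.4
    if "Alice" in text:  # bias deliberately injected for this demo
        score += 0.05
    return min(score, 1.0)

def paired_bias_probe(service, template: str, group_a: str, group_b: str) -> float:
    """Query the service with inputs differing only in a protected attribute
    and return the absolute score gap (0.0 = no observed difference)."""
    return abs(service(template.format(group_a)) - service(template.format(group_b)))

def rate_service(gap: float, tolerance: float = 0.01) -> str:
    """Map an observed gap to a coarse trust rating (threshold is arbitrary)."""
    return "biased" if gap > tolerance else "unbiased (for this probe)"

gap = paired_bias_probe(toy_sentiment_service,
                        "{} gave an excellent talk.", "Alice", "Bob")
print(rate_service(gap))
```

In practice a third-party rater would run many such probes across attributes and input distributions and aggregate the gaps into a published rating, rather than relying on a single template.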
Related prior tutorials:
AI Ethics, IJCAI 2019 - That tutorial focused on understanding trust issues and general approaches for solving them.
Fairness and Bias in Peer Review and other Sociotechnical Intelligent Systems, AAAI 2020 - That tutorial focused on understanding fairness ideas and exploring their application to paper reviewing.