Trusting AI by Testing and Rating Third Party Offerings

By Biplav Srivastava, AI Institute, University of South Carolina and Francesca Rossi, IBM

at IJCAI-PRICAI 2020, the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence, held virtually (originally planned for Yokohama, Japan), January 7-15, 2021

See: [Slides] (~13MB), [Video] (~120MB)

Scheduled time: 7th January 2021 (8:40 pm EST US) / 8th January 2021 (10:40 am Japan); [Ref: Tutorial Schedule]

For any technology to succeed in society, people have to trust it for safety and performance. This applies as much to planes, medicines, and children's toys as to Artificial Intelligence (AI) offerings (services and applications). As new AI applications have been deployed for mass-scale use, trust issues with AI have grown, concerning properties such as fairness and explainability.

The aim of the tutorial is to help newcomers to AI, as well as experienced researchers, understand issues related to trust, learn methods to tackle them, become familiar with available open-source tools, and get pointers for taking up research in this area. For illustration, the tutorial will consider a wide variety of popular AI services, many based on Natural Language Processing, such as machine translation, entity extraction, object identification, and conversation agents (chatbots).

There are three points of view from which to look at a technology: producer (i.e., developer), consumer (e.g., lay user), and third party. This tutorial uniquely focuses on the independent third-party view of AI technologies that producers make available and consumers use, where consumers may be end users or developers who build more complex AI services.

The topics we will cover are:

1. Motivation

a. AI today

b. The need for trust with technology

c. Trust by design vs. post-development bootstrapping

2. Background

a. AI Systems

i. data types: text, image, audio, structured

ii. invocation: stateless, stateful

iii. interaction with people: single user, multiple

b. Trust issues

i. Fairness

ii. Bias

iii. Abuse

iv. Historical perspective and law

c. (Post-facto) Explanation and Interpretable models

d. AI as a 3rd party offering

3. Assessing/Testing AI Services

a. Definitions

b. Data preparation

c. Assessing AI

d. Using and testing models

e. Bias mitigation methods

f. Working session with AIF360, Themes

4. Rating AI services for trust

a. Desiderata for rating

b. Stateless services: translator

c. Stateful service: conversation agent

d. Rating for AI systems with mixed data types

5. AI Trust Ecosystem

a. Activities of multi-stakeholder bodies – Partnership on AI, WEF, OECD

b. Government Regulations: Europe, US direction
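To make the testing topics in Section 3 concrete, the sketch below computes two standard group-fairness metrics over a toy set of binary model decisions: statistical parity difference and disparate impact. This is an illustrative example in plain Python, not code from the tutorial materials; toolkits such as AIF360 (used in the working session) expose these same metrics over real datasets, and the toy decision lists here are made up for demonstration.

```python
# Illustrative group-fairness metrics over 0/1 model decisions,
# where 1 denotes a favorable outcome (e.g., a loan approved).

def favorable_rate(decisions):
    """Fraction of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def statistical_parity_difference(privileged, unprivileged):
    """P(favorable | unprivileged) - P(favorable | privileged).
    Zero means parity; negative values disadvantage the unprivileged group."""
    return favorable_rate(unprivileged) - favorable_rate(privileged)

def disparate_impact(privileged, unprivileged):
    """Ratio of the two favorable rates; values below 0.8 are commonly
    flagged under the '80% rule' used in employment-discrimination law."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical decisions for two demographic groups.
priv = [1, 1, 1, 0, 1, 1, 0, 1]    # favorable rate 0.75
unpriv = [1, 0, 0, 1, 0, 0, 1, 0]  # favorable rate 0.375

print(statistical_parity_difference(priv, unpriv))  # -0.375
print(disparate_impact(priv, unpriv))               # 0.5
```

A third-party tester can compute such metrics with only black-box access to a service's decisions, which is exactly the assessment setting the tutorial emphasizes; mitigation methods (Section 3e) then aim to move these values toward 0 and 1, respectively.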

Related tutorials

  1. AI Ethics, IJCAI 2019 - Focused on understanding trust issues and general approaches for solving them.

  2. Fairness and Bias in Peer Review and other Sociotechnical Intelligent Systems, AAAI 2020 - Focused on understanding fairness ideas and exploring their application to paper reviewing.

  3. Tutorials on Explainable AI: From Theory to Motivation, Applications and Limitations, at AAAI 2019 and AAAI 2020