Deployable AI (DAI)
Workshop at AAAI 2023
Speakers
Talk Title: AI for Social Good
Talk Title: Reward (Mis)design for Autonomous Driving and Accumulating Safety Rules from Catastrophic Action Effects
Abstract: This talk highlights two recent findings pertaining to the safety of autonomous driving. The first is that the reward functions used by most RL researchers encode driving behavior riskier than that of drunk teenage drivers; the second is a method for monotonically improving the safety of a fleet of vehicles over time.
Talk Title: Bringing Order to Chaos: Probing the Disagreement Problem in XAI
Talk Title: Robustness in the era of large pretrained models
Abstract: Machine learning systems often fail catastrophically in the presence of distribution shift, when the test distribution differs in some systematic way from the training distribution. Robustness to such shifts has remained an open challenge. The past few years have seen the rise of large models trained on broad data at scale that can be adapted to several downstream tasks (e.g., BERT, GPT, DALL-E). In this talk, via theory and experiments, we will discuss how such models open up new avenues for robustness, but also require new techniques to achieve it.
Talk Title: Knowing when you don’t know: Training a classifier with an abstain option
Abstract: Many real-world applications allow a classifier to abstain from making a prediction. For example, the classifier may be allowed to defer on "hard" samples to a human expert, at some monetary cost, or to a larger model, at additional computational cost. In some cases, the classifier may also be allowed to give up on samples that it deems to be "outliers", i.e. significantly different from the standard population. The goal in these settings is to learn both a classifier and an abstention mechanism that optimize a suitable cost-accuracy trade-off.
A classical approach to solving these problems is Chow's rule, which thresholds the maximum softmax probability from a standard classifier and abstains on samples with low probability. Although a competitive baseline, recent works have shown that this simple approach can significantly underperform in practice. In this talk, we reaffirm that Chow's rule is suboptimal in two important practical settings, and describe how one can learn sample-dependent versions of this rule, which we show are theoretically optimal and empirically effective.
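For readers unfamiliar with the baseline named above, the following is a minimal sketch of Chow's rule, assuming a fixed global confidence threshold; the probabilities and the threshold value below are hypothetical placeholders, not the speaker's implementation.

```python
import numpy as np

def chow_rule(softmax_probs: np.ndarray, threshold: float) -> np.ndarray:
    """Chow's rule: predict the argmax class when the maximum softmax
    probability clears the threshold; otherwise abstain.

    softmax_probs: (n_samples, n_classes) array of class probabilities.
    threshold: confidence level below which the classifier abstains.
    Returns predicted class indices, with -1 marking abstention.
    """
    confidence = softmax_probs.max(axis=1)      # max softmax probability
    predictions = softmax_probs.argmax(axis=1)  # most likely class
    predictions[confidence < threshold] = -1    # abstain on low confidence
    return predictions

# Hypothetical usage: three samples over four classes.
probs = np.array([
    [0.90, 0.05, 0.03, 0.02],  # confident -> predict class 0
    [0.40, 0.30, 0.20, 0.10],  # uncertain -> abstain
    [0.10, 0.15, 0.05, 0.70],  # confident -> predict class 3
])
print(chow_rule(probs, threshold=0.6))  # [ 0 -1  3]
```

The talk's observation is that a single global threshold like this can be suboptimal; the sample-dependent versions of the rule discussed above replace the fixed threshold with one learned per sample.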
Talk Title: AI-facilitated Human Decision Making
Abstract: This talk will discuss results from a set of human-subject experiments in which human decision makers are provided with algorithmic advice. We observe that human decision makers exhibit bias in their interactions with the algorithm, and that the algorithm can alter their decision-making process. We then demonstrate that an interactive advising approach, which learns when to provide advice and only provides it at times of need, can improve human decision making.