The Speakers

Active Learning in Realistic Human-AI Settings

Active learning literature has historically investigated the selection of optimal queries by a learning agent, focused primarily on acquiring labels for uncertainty or error reduction. In contrast, proficient learners such as humans both (1) integrate diverse forms of information during learning and (2) incorporate learning context into their reasoning about what questions to ask a teacher. This talk presents research on active learning in problem settings that capture important characteristics of realistic human-AI environments but have been underexplored in prior literature on interactive learning. In particular, the artificial agent is assumed to (a) exist in a changing (nonstationary) environment, (b) be situated with a human teacher for extended durations of time, and (c) be subject to teacher bandwidth constraints. This work is inspired by settings where a robotic agent co-exists with and must learn from its human partners. We contribute a general decision-theoretic active learning framework that enables a learner to autonomously manage interaction with human partners. The agent infers both when to request help and what type of information to query, based on its expectation of learning progress. We extend this framework to additionally optimize for the teacher's time and cognitive availability constraints (i.e., the learning context) within the agent's objective function. Overall, this body of work gives rise to richer communication and a more flexible learning mechanism, in which an agent can both (a) initiate different types of communication actions with a teacher and (b) adapt its action selection to the teacher's bandwidth constraints, inspired by the expressive decision-making capabilities of human learners.
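To make the decision-theoretic idea concrete, here is a toy sketch of how such an agent might choose among communication actions. The query types, learning-progress estimates, and cost model below are invented for illustration and are not the talk's actual formulation.

```python
# Toy sketch: decision-theoretic selection of a query to a human teacher.
# All query types, gains, and costs below are hypothetical illustrations.

QUERY_TYPES = ["label", "demonstration", "feature_relevance", "no_query"]

def expected_progress(query_type, uncertainty):
    """Stand-in for the agent's expectation of learning progress."""
    gain = {"label": 0.4, "demonstration": 0.9,
            "feature_relevance": 0.6, "no_query": 0.0}
    return gain[query_type] * uncertainty

def teacher_cost(query_type, availability):
    """Stand-in cost model: richer queries demand more teacher bandwidth,
    and cost grows as the teacher's availability shrinks."""
    effort = {"label": 0.1, "demonstration": 0.7,
              "feature_relevance": 0.3, "no_query": 0.0}
    return effort[query_type] / max(availability, 1e-6)

def select_action(uncertainty, availability, trade_off=0.5):
    """Choose the action maximizing expected progress minus teacher cost."""
    scores = {q: expected_progress(q, uncertainty)
                 - trade_off * teacher_cost(q, availability)
              for q in QUERY_TYPES}
    return max(scores, key=scores.get)

print(select_action(uncertainty=0.8, availability=1.0))  # -> demonstration
print(select_action(uncertainty=0.8, availability=0.3))  # -> label
```

Lowering `availability` further drives the score of every query negative, so the agent elects not to interrupt the teacher at all, which is the kind of bandwidth-aware behavior the framework is designed to produce.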

Beyond ranking: Building robust time-to-event models

Survival analysis, or time-to-event modeling, focuses on the time of a future event, such as death or failure, and investigates its relationship with covariates or predictors of interest. Specifically, we may be interested in the causal effect of a given intervention or treatment on survival time. A typical question may be: will a given therapy increase the chances of survival of an individual or population? Such causal inquiries on survival outcomes are common in the fields of epidemiology and medicine. In this talk, I will introduce our recently proposed counterfactual inference framework for survival analysis, which adjusts for bias from two sources, namely, confounding (from covariates influencing both the treatment assignment and the outcome) and censoring (informative or non-informative). To account for censoring biases, I will argue for flexible and nonparametric generative modeling of event times and propose neural time-to-event models that account for calibration and uncertainty while predicting accurate absolute event times. Moreover, I will formulate a model-free nonparametric hazard ratio metric for comparing treatment effects or leveraging prior randomized real-world experiments in longitudinal studies. Further, I will use the proposed model-free hazard-ratio estimator to identify or stratify heterogeneous treatment effects. Finally, I will present extensive results on challenging datasets, such as the Framingham Heart Study and the AIDS Clinical Trials Group (ACTG).
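As background for the hazard-ratio discussion, the sketch below computes a simple life-table (discrete-time) hazard per interval and the resulting interval-wise hazard ratio between two arms. This is a textbook construction shown for illustration only, not the estimator proposed in the talk.

```python
import numpy as np

def discrete_hazard(event_times, censor_times, grid):
    """Life-table hazard: events / at-risk within each interval of `grid`.
    `event_times` are observed event times; `censor_times` are censored
    follow-up times where no event was observed."""
    times = np.concatenate([event_times, censor_times])
    observed = np.concatenate([np.ones_like(event_times),
                               np.zeros_like(censor_times)])
    hazards = []
    for lo, hi in zip(grid[:-1], grid[1:]):
        at_risk = np.sum(times >= lo)
        events = np.sum((times >= lo) & (times < hi) & (observed == 1))
        hazards.append(events / at_risk if at_risk > 0 else np.nan)
    return np.array(hazards)

# Toy two-arm study: treated patients tend to have later events.
rng = np.random.default_rng(0)
grid = np.linspace(0, 10, 6)
h_control = discrete_hazard(rng.exponential(3.0, 200),
                            rng.uniform(0, 10, 50), grid)
h_treated = discrete_hazard(rng.exponential(5.0, 200),
                            rng.uniform(0, 10, 50), grid)
print("interval hazard ratios:", h_treated / h_control)
```

Because the estimated ratio can vary across intervals, a single proportional-hazards coefficient may obscure exactly the kind of heterogeneity a model-free estimator is designed to expose.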

Facing an Adult Problem: New Data Sources for Fair Machine Learning

The research community investigating fairness and machine learning has recognized the importance of good data. When it comes to tabular data, however, most research papers on the topic continue to involve only a fairly limited collection of datasets, chief among them the UCI Adult data. Derived from 1994 US Census data, this small dataset has made an appearance in hundreds of research papers and served as the basis for the development and comparison of many algorithmic fairness interventions.

In this talk, I will share some archaeology of the UCI Adult dataset with you, discuss its impact on the fairness community and highlight some of its limitations. I will then introduce you to a new collection of datasets derived from US Census data sources that vastly extend the existing data ecosystem for research on fair machine learning. These new datasets surface a range of empirical insights relevant to ongoing debates about statistical fairness criteria and algorithmic fairness interventions.

Joint work with Frances Ding, John Miller, and Ludwig Schmidt.
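For readers who want to experiment, a minimal sketch follows. It assumes these datasets are the ones distributed through the folktables Python package; the package name and the exact API calls are our assumption, not something stated in the abstract.

```python
# pip install folktables  (assumed distribution of the new Census-derived datasets)
from folktables import ACSDataSource, ACSIncome

# Pull one year of American Community Survey person records for California.
data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)

# ACSIncome mirrors the UCI Adult task: predict whether income exceeds a threshold.
features, label, group = ACSIncome.df_to_numpy(acs_data)
print(features.shape, label.mean())
```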

On Complementing & Extending Human Intellect

Abstract TBD

In the Pursuit of Responsible AI: Developing AI Systems for People with People

Widespread adoption of AI systems in the real world has brought to light concerns around biases hidden in these systems, as well as reliability and safety risks. Addressing these concerns in real-world applications through tools, guidance and processes is of paramount importance. This endeavor also introduces new research directions for our community. In this talk, I'll discuss why taking a human-centric view of the development and deployment of AI systems can help to overcome their shortcomings and lead to better outcomes in the real world. I'll share several directions of research we are pursuing towards effective human-AI partnership through combining the complementary strengths of human and machine reasoning.

Mechanism Design for Social Good: Maximising the utility of a limited number of COVID tests

As the world struggles to limit the rising infection rate and mounting death toll of the ongoing COVID-19 pandemic, one of the key challenges for developing nations has been their limited access to testing resources.

Testing is an essential tool for combating epidemics, as it gives crucial estimates of virus prevalence and allows for the identification of infected individuals, thereby forming the basis of containment policies. However, testing resources are often scarce due to factors such as limited access to reagents, shortages of trained lab technicians, and deficient logistics.

For this reason, a novel strategy is needed to maximise the benefit of each available test. Additionally, any proposed solution must account for an ideal balance between virus containment and the socioeconomic welfare of the population.

In this session, I will talk about Test & Contain, a proposed strategy for increasing the utility of a limited number of COVID tests. Our strategy, which will soon be piloted in research institutions in San Luis Potosi, Mexico, is based on techniques gleaned from optimization and algorithmic mechanism design. Our goal is to assist policy-makers in understanding inherent tradeoffs stemming from limited testing resources, while providing feasible testing strategies that exemplify their desired tradeoffs.
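To illustrate the kind of tradeoff such a strategy must navigate, the sketch below allocates a fixed test budget across population groups by blending expected detections (containment) against the economic value of clearing workers to return to work (welfare). The utility model, groups, and numbers are invented for exposition and are not part of Test & Contain.

```python
import numpy as np

def allocate_tests(prevalence, econ_value, group_size, budget, alpha=0.6):
    """Fractional-knapsack allocation: rank groups by per-test utility,
    a blend of expected detections (containment) and the economic value
    of clearing a likely-negative worker (welfare), then fill greedily."""
    utility = alpha * prevalence + (1 - alpha) * econ_value * (1 - prevalence)
    alloc = np.zeros_like(group_size)
    for g in np.argsort(-utility):
        alloc[g] = min(group_size[g], budget)
        budget -= alloc[g]
        if budget <= 0:
            break
    return alloc

# Three hypothetical groups: healthcare workers, market vendors, office staff.
prevalence = np.array([0.10, 0.25, 0.05])
econ_value = np.array([0.9, 0.6, 0.3])   # value of returning one worker to work
group_size = np.array([300, 500, 800])
print(allocate_tests(prevalence, econ_value, group_size, budget=600))
```

Sweeping the blend parameter `alpha` from 0 to 1 traces out a containment-versus-welfare frontier, which is the kind of tradeoff curve policy-makers can inspect when choosing a testing policy.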

At a broader level, I will also touch upon Mechanism Design For Social Good (MD4SG), a multidisciplinary initiative that brings together leading researchers, domain experts, non-profits, and companies from around the world to apply techniques from algorithms, optimization, and mechanism design to help improve access to opportunity.

Optimizing for Human-centric AI in the Global South

Recent advances in Artificial Intelligence (AI) suggest that AI applications could transform fields such as agriculture, education, and healthcare in the Global South. However, as researchers and technology companies rush to develop AI-enabled technologies that improve the lives of marginalized communities, it is critical to consider the needs and perceptions of the workers who will have to integrate these tools into the essential services they provide to rural communities. This talk will reflect on the state of AI within the Global South, examine the perceptions and use of AI in low-resource contexts, and highlight factors impacting unanticipated deployments. Our goal is to provide actionable steps to embrace inclusive design methodologies within AI development that prioritize the needs of marginalized communities, elevating their status from beneficiaries of AI systems to engaged stakeholders.

Machine-assisted Teaching for Open-ended Problem Solving

Abstract TBD

Towards Human-Centered AI: Harnessing Explanations to Improve Human Decision Making in Discovery Tasks

AI plays an increasingly prominent role in decision making for societally critical domains such as criminal justice, healthcare, and fake news detection. It is crucial that AI systems be able to explain the basis for the decisions they recommend in ways that humans can easily comprehend, thus serving as a bridge between humans and AI. To build explanations for humans, I will first highlight the distinction between emulation and discovery in building AI and its implications for the role of human explanations. I will then demonstrate the effectiveness of explanations as real-time assistance in improving human decision making and as model-driven tutorials that help humans understand model behavior. I will conclude with a discussion of the promise of explanations and the limits of current explanations.

Empowering end-users through AI transparency and interpretability

Machine learning is playing an increasingly influential role in the world due to dramatic technical leaps in recent years. Yet it can often be challenging for these systems to meet end-user needs in practice. For instance, establishing user trust is one of the hardest challenges in the adoption of AI systems in high-stakes domains such as healthcare. In this presentation, I'll discuss how AI interpretability and transparency techniques aimed at end-users, rather than at ML experts, can help put people in control of ML systems, empowering them to probe these systems for trust calibration.