Human-Centric Machine Learning
NeurIPS 2019 Workshop, Vancouver
Machine learning (ML) tools are increasingly employed to inform and automate consequential decisions for humans, in areas such as criminal justice, medicine, employment, welfare programs, and beyond. ML has already established its tremendous potential not only to improve the accuracy and cost-efficiency of such decisions but also to minimize the impact of certain human biases and prejudices. The technology, however, comes with significant challenges, risks, and potential harms. Examples include (but are not limited to) exacerbating discrimination against historically disadvantaged social groups, threatening democracy, and violating people's privacy. This workshop aims to bring together experts from a diverse set of backgrounds (ML, human-computer interaction, psychology, sociology, ethics, law, and beyond) to better understand the risks and burdens of big data technologies on society, and to identify approaches and best practices that maximize the societal benefits of machine learning.
The workshop takes a broad perspective on human-centric ML and addresses a wide range of challenges from diverse, multi-disciplinary viewpoints. We strongly believe that for society to trust and accept ML technology, we need to ensure the interpretability and fairness of data-driven decisions. We must have reliable mechanisms to guarantee the privacy and security of people's data. We should demand transparency, not just in terms of the disclosure of algorithms, but also in terms of how they are used and for what purposes. And last but not least, we need a modern legal framework that provides accountability and allows subjects to dispute and overturn algorithmic decisions when warranted. The workshop particularly encourages papers that take a multi-disciplinary approach to these challenges.
One of the main goals of this workshop is to help the community understand where it stands after a few years of rapid development and identify promising research directions to pursue in the years to come. We, therefore, encourage authors to think carefully about the practical implications of their work, identify directions for future work, and discuss the challenges ahead.
This workshop is part of the ELLIS “Human-centric Machine Learning” program.
Call for papers and important dates
Topics of interest include but are not limited to:
- Fairness: algorithmic fairness, human perceptions of fairness, cultural dependencies
- Transparency & Interpretability: interpretable algorithms, explanations of ML systems, human usability of explanation methods
- Privacy: relationships between fairness, security, and privacy, alignment between mathematical privacy and people’s perception of privacy
- Accountability & Governance: existing legal frameworks, compliance of state-of-the-art fairness and interpretability methods with regulations such as the GDPR, governance examples for human-centric ML decision-making
We accept submissions in the form of extended abstracts. Submissions must adhere to the NeurIPS format and be limited to 4 pages, including figures and tables. We allow an unlimited number of pages for references and supplementary material, but reviewers are not required to review the supplementary material.
We only accept papers that have not yet been published in an indexed journal or conference. We do, however, accept submissions currently under review at another venue. All papers must be anonymized for double-blind reviewing as described in the submission instructions and submitted via EasyChair (link below).
The workshop will not have formal proceedings, but accepted papers will be posted on the workshop website. We emphasize that the workshop is non-archival, so authors can later publish their work in archival venues. Accepted papers will be presented either as a talk or as a poster (to be determined by the workshop organizers).
Submission deadline: 15 Sep 2019, 23:59 Anywhere on Earth (AoE)
Author notification: 30 Sep 2019, 23:59 Anywhere on Earth (AoE)
Camera-ready deadline: 30 Oct 2019, 23:59 Anywhere on Earth (AoE)
🚧 coming soon 🚧
planned: invited talks, contributed talks, poster session, panel discussions
Organizers
- Plamen Angelov (Lancaster University)
- Silvia Chiappa (DeepMind)
- Manuel Gomez Rodriguez (Max Planck Institute for Software Systems)
- Hoda Heidari (ETH Zürich)
- Niki Kilbertus (MPI for Intelligent Systems, University of Cambridge)
- Nuria Oliver (Data-Pop Alliance, Vodafone Institute)
- Isabel Valera (Max Planck Institute for Intelligent Systems)
- Adrian Weller (The Alan Turing Institute, University of Cambridge)
Related workshops @ NeurIPS 2019
- Privacy in Machine Learning (PriML)
- Minding the Gap: Between Fairness and Ethics
- “Do the right thing”: machine learning and causal inference for improved decision making
- Joint Workshop on AI for Social Good
- Workshop on Federated Learning for Data Privacy and Confidentiality
- Fair ML in Healthcare
- AI for Humanitarian Assistance and Disaster Response
- Safety and Robustness in Decision-making
- Robust AI in Financial Services: Data, Fairness, Explainability, Trustworthiness, and Privacy