Bias and Fairness in AI

Workshop at ECMLPKDD 2020, Ghent, Belgium, September 18, 2020

News! The SIGKDD Explorations special issues on Fairness and Bias in AI have been published!

News! The 2nd BIAS workshop will take place at ECMLPKDD 2021
News! Call for papers: special issue on Bias and Fairness in AI in the Data Mining and Knowledge Discovery journal

AI techniques based on big data and algorithmic processing are increasingly used to guide decisions in important societal spheres, including hiring, university admissions, loan granting, and crime prediction. They are applied by search engines, Internet recommendation systems, and social media bots, influencing our perceptions of political developments and even of scientific findings. However, there are growing concerns about the epistemic and normative quality of AI evaluations and predictions. In particular, there is strong evidence that algorithms may sometimes amplify rather than eliminate existing bias and discrimination, thereby harming social cohesion and democratic institutions.

Scholarly reflection on these issues has begun, but despite the recent surge of related research, much work remains to be done. In particular, we still lack a comprehensive understanding of how pertinent concepts of bias and discrimination should be interpreted in the context of AI, and of which technical options for combating bias and discrimination are both realistically possible and normatively justified. The workshop will discuss these issues based on the shared research question: How can standards of unbiased attitudes and non-discriminatory practices be met in (big) data analysis and algorithm-based decision-making?


Topics of Interest

The workshop will focus on (but is not limited to) the following topics:

  • Fairness measures and statistical fairness (see the illustrative sketch after this list)

  • Methods for detecting algorithmic discrimination

  • Debiasing strategies

  • “Interaction” between fairness and other learning challenges like imbalanced data or rare classes

  • Explainability, traceability, data and model lineage

  • Benchmark datasets

  • Formalization, measurement and mitigation of unfairness in machine learning, including construction of training data sets, model induction/selection and model outputs

  • New or reconciled fairness impossibility results

  • Fairness, equity and justice by design

  • Fairness in predictive modeling used for decision making and decision support

  • Fairness in non-i.i.d. data, including network, text, time series, and other complex evolving data

  • Fairness in unsupervised learning (clustering, PCA) and network embeddings

  • Fairness in federated learning

  • Fairness in matchmaking, recommenders and search engines

  • Fairness in resource allocation

  • Fairness in personalized interventions

  • Counterfactual reasoning for fairness

  • Visual analytics for studying / auditing fairness

  • HCI for studying / auditing fairness

  • Auditing machine learning with respect to fairness

  • Case studies of fairness-aware machine learning

  • Interdisciplinary studies (law, social sciences) on fairness in machine learning

  • New benchmarks for fairness research

  • Software and demonstrations for studying fairness
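
As a concrete illustration of the first topic, fairness measures and statistical fairness, the following minimal Python sketch computes two widely used group fairness metrics: statistical parity difference and equal opportunity difference. The function names and toy data here are illustrative assumptions, not part of the workshop material or any particular library.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(Y_hat = 1 | group = 0) - P(Y_hat = 1 | group = 1).

    Zero means both groups receive positive predictions at the
    same rate (demographic parity).
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the groups:
    P(Y_hat = 1 | Y = 1, group = 0) - P(Y_hat = 1 | Y = 1, group = 1).
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(y_true == 1) & (group == g)].mean()
    return tpr(0) - tpr(1)

# Toy data: labels and predictions for 8 individuals in two groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_difference(y_pred, group))          # 0.5 - 0.75 = -0.25
print(equal_opportunity_difference(y_true, y_pred, group))   # 2/3 - 1.0 ≈ -0.33
```

Values near zero indicate that the two groups are treated similarly under the respective criterion; the sign indicates which group is favored.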


FAccT network

The BIAS 2020 workshop is proudly part of the FAccT network, which researches and engages with fairness, accountability, and transparency scholars across connected disciplines.