Socially Responsible AI for Data Mining: Theory and Practice

A Virtual SDM'22 Tutorial

Date & Time: Friday, April 29, 2022, 3:15 PM-5:15 PM (ET)

Tutorial Description

People and society have grown increasingly reliant on artificial intelligence (AI) technologies. AI has the potential to drive us toward a future in which all of humanity flourishes, but it also carries substantial risks of oppression and unfairness. Technologists and AI researchers have a responsibility to develop AI systems that satisfy fairness, accountability, transparency, and ethical requirements, and they have responded with great effort to design more responsible AI algorithms. However, existing technical solutions are often narrow in scope and directed primarily at decision-making algorithms for scoring or classification tasks, with an emphasis on fairness, discrimination, and unwanted bias. This tutorial provides a holistic understanding of socially responsible AI (SRAI), a novel, systematic framework that encompasses both the foundations and theories of responsible AI (e.g., fairness, privacy, and interpretability) and three human-centered actions: Protecting users, Informing users, and Preventing the potential harms of AI systems. We use emerging data mining tasks with social impact to illustrate each action, and we conclude with open problems and key challenges.

More details can be found in the tutorial proposal.

Presenters

Arizona State University

University of Southern California

University of Michigan

Arizona State University

Tutorial Outline (Slides)

1. Background and Motivation (10 min)

(a) Motivating Examples, (b) Challenges, and (c) Why Now


2. Theories in SRAI (50 min; illustrative code sketches for these topics follow the outline)

(a) Bias and Fairness (10 min)

i. Coverage in Training Data, ii. Diversity versus Fairness, and iii. Equity versus Fairness

(b) Privacy (10 min)

i. Anonymity Is Virtually Impossible, ii. Differential Privacy, and iii. Circles of Privacy

(c) Causal Interpretability/Explainability (15 min)

i. Model-based Causal Interpretability, ii. Counterfactual Explanation, and iii. Visualization

(d) Adversarial Attacks (15 min)

i. Attacks that Harm Fairness Measures, and ii. Countering Attacks


3. Break (5 min)


4. SRAI for Social Good (50 min)

(a) Protecting – Cyberbullying Detection (15 min)

(b) Informing – Fake News Detection (15 min)

(c) Preventing – Quantifying Representation Harms (20 min)


5. Open Problems and Frontiers (10 min)

(a) Challenges, (b) Need for Interdisciplinary Research, and (c) Future Work
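
Illustrative Code Sketches

To make the theory topics in Part 2 concrete, the sketches below are minimal, self-contained Python illustrations written for this page; they are not code from the tutorial, and all data, variable names, and thresholds in them are made up. The first sketch measures one common fairness notion, demographic parity: the gap in positive-decision rates between two groups defined by a protected attribute.

    # Minimal sketch: demographic parity difference for a binary classifier.
    # The predictions and group labels below are illustrative toy data.
    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute gap in positive-prediction rates between two groups."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        rate_a = y_pred[group == 0].mean()  # P(y_hat = 1 | group A)
        rate_b = y_pred[group == 1].mean()  # P(y_hat = 1 | group B)
        return abs(rate_a - rate_b)

    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model decisions
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute
    print(demographic_parity_difference(y_pred, group))  # prints 0.5, a large gap

A value of 0 would mean both groups receive positive decisions at the same rate; how much disparity is tolerable is a policy question rather than a statistical one.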
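
The second sketch illustrates differential privacy with the Laplace mechanism: a counting query has sensitivity 1 (adding or removing one person changes the count by at most 1), so adding Laplace noise with scale 1/eps yields an eps-differentially-private answer. The dataset and the epsilon value are invented for illustration.

    # Minimal sketch: an eps-differentially-private count via the Laplace mechanism.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    def private_count(values, predicate, eps):
        true_count = sum(predicate(v) for v in values)
        noise = rng.laplace(loc=0.0, scale=1.0 / eps)  # scale = sensitivity / eps
        return true_count + noise

    ages = [23, 35, 52, 61, 44, 29]
    print(private_count(ages, lambda a: a >= 40, eps=0.5))  # noisy answer near 3

Smaller eps means stronger privacy but noisier answers; the true count (3 here) is never released directly.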
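
The third sketch shows the idea behind counterfactual explanations: the smallest change to an input that flips the model's decision. For a linear scoring model the minimal L2 change has a closed form (a step along the weight vector); the weights, instance, and threshold below are made up.

    # Minimal sketch: a counterfactual explanation for a linear scoring model.
    import numpy as np

    w, b = np.array([0.8, -0.5]), -0.2  # score(x) = w @ x + b; decide 1 if score >= 0
    x = np.array([0.1, 0.6])            # rejected instance: score(x) = -0.42
    # The smallest L2 change that moves the score to the boundary lies along w.
    delta = -(w @ x + b) / (w @ w) * w
    x_cf = x + delta
    print(x_cf, w @ x_cf + b)           # counterfactual sits on the decision boundary

Reading off delta tells the user which features to change, and by how much, to obtain a different decision.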
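
The last sketch gives the flavor of an adversarial attack in the FGSM style: perturb each feature by a small step eps in the direction that most decreases the score, and a correctly classified point can cross the decision boundary. The model and point reuse the invented linear setup above.

    # Minimal sketch: an FGSM-style attack on a linear classifier.
    import numpy as np

    w, b = np.array([0.8, -0.5]), -0.2
    x = np.array([1.0, 0.2])            # correctly classified: score = 0.5
    eps = 0.5
    x_adv = x - eps * np.sign(w)        # step against the score gradient (which is w)
    print(w @ x + b, w @ x_adv + b)     # 0.5 -> -0.15: the decision flips

Countering such attacks, the second half of the adversarial-attacks topic in the outline, typically means detecting perturbed inputs or making models robust to them, e.g., via adversarial training.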

Relevant Material

  1. Cheng, L., Varshney, K. R., & Liu, H. (2021). Socially responsible AI algorithms: issues, purposes, and challenges. Journal of Artificial Intelligence Research, 71, 1137-1181.

  2. Jagadish, H. V. (2019). Responsible data science. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (WSDM) (pp. 1-1). ACM.

  3. Cheng, L., Mosallanezhad, A., Sheth, P., & Liu, H. (2021). Causal learning for socially responsible AI. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), Survey Track.

  4. Moraffah, R., Karami, M., Guo, R., Raglin, A., & Liu, H. (2020). Causal interpretability for machine learning: Problems, methods and evaluation. ACM SIGKDD Explorations Newsletter, 22(1), 18-33.

  5. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6), 1-35.

  6. Zafarani, R., Zhou, X., Shu, K., & Liu, H. (2019, July). Fake news research: Theories, detection strategies, and open problems. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 3207-3208).

  7. Cheng, L., Silva, Y. N., Hall, D., & Liu, H. (2021). Session-based cyberbullying detection: Problems and challenges. IEEE Internet Computing, 25(2), 66-72.