1st Workshop on Trustable and Ethical Machine Learning

During Transdisciplinary AI, Laguna Hills, CA, USA

19-21 September 2022

Remote Participation Supported

Machine learning is often seen as a "magical" technique for solving data-related pattern recognition and generation problems. This general utility has led to ML's incorporation into a wide variety of fields. However, simply incorporating ML into a new field is not sufficient to guarantee useful or responsible results.

The first hurdle is whether the ML results can be trusted. Differences as small as the pseudo-random number seed or the train/test split ratio can radically affect model accuracy [1]. Other factors, such as overfitting, properly selected training data, and sound model validation techniques, must also be addressed to understand how well the ML model corresponds to the system being modeled and what the limitations of that model might be. The Trustworthy ML Initiative's existence is evidence of the importance and broad reach of this topic.
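The sensitivity to the random split can be seen even with a toy classifier. The sketch below is purely illustrative (not from the workshop or [1]): it builds a synthetic two-class dataset, trains a trivial nearest-class-mean classifier, and reports how test accuracy varies when only the shuffle seed of the train/test split changes. All names and parameters are hypothetical choices for the demonstration.

```python
import random
import statistics

def make_data(n=200, seed=0):
    """Two noisy 1-D clusters: class 0 centered at 0.0, class 1 at 1.0."""
    rng = random.Random(seed)
    return [(rng.gauss(label, 0.6), label)
            for label in (0, 1) for _ in range(n // 2)]

def split(data, test_ratio, seed):
    """Shuffle with the given seed, then hold out test_ratio of the data."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

def centroid_accuracy(train, test):
    """Nearest-class-mean classifier; returns accuracy on the test set."""
    means = {c: statistics.mean(x for x, y in train if y == c) for c in (0, 1)}
    hits = sum(1 for x, y in test
               if min(means, key=lambda c: abs(x - means[c])) == y)
    return hits / len(test)

# Same data, same model, same split ratio; only the split seed varies.
data = make_data()
accs = []
for seed in range(10):
    train, test = split(data, test_ratio=0.3, seed=seed)
    accs.append(centroid_accuracy(train, test))

print(f"min={min(accs):.2f} max={max(accs):.2f} spread={max(accs) - min(accs):.2f}")
```

The spread between the best and worst seed is accuracy variation attributable to nothing but the split, which is why reporting a single run's accuracy can be misleading.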

The second hurdle is the ethical consideration of the model's output and the impacts using ML may have on society and the world. Data considerations, such as including a full spectrum of skin tones, hair types and styles, and other physical characteristics, are crucial to building a person-recognition model that does not discriminate against people underrepresented in the training data set. Further, training a model on inherently discriminatory data, such as house loan approvals that do not properly account for redlining, can end up reinforcing that practice even though the goal was to remove human bias. Other examples, such as predictive policing, can become self-reinforcing systems that target particular groups and areas rather than genuinely helping to reduce crime.

This workshop seeks to explore how to trust ML both from the "are the results something I can rely on?" perspective and from the "are these results fair and lawful for everyone?" perspective. Solutions to both problems can be technological and/or social, ideally broad-based approaches that use tools to identify human-influenced features that could reduce trust or raise ethical questions about how ML was incorporated into an application or process.

This workshop contributes by sharing experiences and exploring the extent and boundaries of the problem spaces, as well as solutions and experiences that work within those bounds. The problem domain is not limited by the type of system nor by the data or application domain. Instead, this workshop focuses on how best to ensure that ML is working as intended and that it is not reinforcing bias or generating ethical concerns about its results.

Topics of Interest:

  • Position, research, and experience papers related to trustworthy ML and ethics in ML (particularly the topics listed below)

  • Explainable ML

  • FAIR data principles for ML

  • Ethical uses for ML

  • Ethical data uses for ML model generation

  • Evaluating the ethical standard for an ML model

  • Privacy preserving ML

  • And other topics related to trustworthy ML and ethical ML

Submissions accepted in EasyChair:

https://easychair.org/conferences/?conf=teml22

Papers should be formatted in IEEE format following eScience formatting rules and may be up to 5 pages, not including references.

Important Dates:

  • Submission Deadline (firm): 19 August, 2022 AoE

  • Responses to Authors: 7 September, 2022

  • Camera Ready due: 9 September, 2022

Proposed Program Committee:

  • Whit Schoenbrun (Sandia)

  • Jakob Luettgau (UT Knoxville)

  • Margaret Lawson (Google)

  • Randy Rannow

  • Shadi Ibrahim (INRIA)

  • Teresa Porton (Sandia)


Organizing Committee:

  • Jay Lofstead (Sandia)

  • Roselyne Tchoua (DePaul University)

Agenda:

Wednesday, 21 September (all times PDT)

9:00-9:05 Introduction by Jay Lofstead

9:05-9:25 Edward Xu, Thiruvarangan Ramaraj, Roselyne Tchoua, Jacob Furst and Daniela Raicu, “Contextualizing Lung Nodule Malignancy Predictions with Easy vs. Hard Image Classification”

9:25-9:45 Pei-Hung Lin, Chunhua Liao, Winson Chen, Tristan Vanderbruggen, Murali Emani and Hailu Xu, “Making Machine Learning Datasets and Models FAIR for HPC: A Methodology and Case Study”

9:45-10:20 Trustworthy and Ethical ML open discussion led by Randy Rannow

Paper presentations are 15 minutes plus 5 minutes for questions.