2nd Workshop on Trustable and Ethical Machine Learning

During Transdisciplinary AI, Laguna Hills, CA, USA

https://www.TransAI.org

25-27 September 2023

Remote Participation Supported

Machine learning is often seen as a "magical" technique for solving data-related pattern recognition and generation problems. This general utility has led to ML's incorporation into a wide variety of fields. However, simply incorporating ML into a new field is not sufficient on its own.

The first hurdle is whether the ML results can be trusted at all. Differences as small as the pseudo-random number seed or the train/test split ratio can radically affect model accuracy. Other considerations, such as overfitting, properly selected training data, and proper model validation techniques, must also be addressed to understand how well the ML model corresponds to the system being modeled and what the model's limitations might be. The Trustworthy ML Initiative's existence is evidence of the importance and broad reach of this topic.
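The effect of the split seed alone can be seen in a toy sketch (standard library only; the data, the one-parameter "model," and all names here are illustrative, not from any particular submission): the same data and the same training procedure yield different reported accuracies depending solely on how the train/test split was shuffled.

```python
import random

def make_data(n=200, seed=0):
    """Synthetic binary-classification data: one noisy feature per point."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.randint(0, 1)
        x = label + rng.gauss(0, 0.8)  # overlapping classes => imperfect accuracy
        data.append((x, label))
    return data

def split_train_test(data, seed, test_ratio=0.3):
    """Shuffle with the given seed, then split off a held-out test set."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

def train_threshold(train):
    """'Train' a one-parameter model: the midpoint of the two class means."""
    means = []
    for label in (0, 1):
        xs = [x for x, y in train if y == label]
        means.append(sum(xs) / len(xs))
    return sum(means) / 2

def accuracy(threshold, test):
    correct = sum((x > threshold) == (y == 1) for x, y in test)
    return correct / len(test)

data = make_data()
accs = []
for seed in range(5):  # identical data and procedure, only the split seed changes
    train, test = split_train_test(data, seed)
    accs.append(accuracy(train_threshold(train), test))
print([round(a, 3) for a in accs])
```

Reporting a single accuracy number from one such split hides this variance; cross-validation or repeated splits with reported spread are the usual remedies.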

The second hurdle is the ethical consideration of the model's output and the impact that using ML may have on society and the world. Data considerations, such as covering a full spectrum of skin tones, hair types and styles, and other physical characteristics, are crucial to building a person-recognition model that does not discriminate against people underrepresented in the training data set. Further, training a model on inherently discriminatory data, such as house loan approvals that do not properly account for redlining, can end up reinforcing that practice even though the goal was to remove human bias. Other examples, such as predictive policing, can become self-reinforcing systems that target particular groups and areas rather than genuinely helping to reduce crime.

With the rise of Large Language Models, the ethical considerations for their use and their societal impacts have become an important conversation topic, as have the ethical principles that can help guide the use of these and other AI tools.

This workshop seeks to explore how to trust ML from both the "can I rely on these results?" perspective and the "are these results fair and legal for everyone?" perspective. Solutions to both problems can be technological and/or social, including broad-based approaches that use tools to identify human-introduced features that could reduce trust or raise ethical questions about how ML was incorporated into an application or process.

This workshop contributes by sharing experiences and exploring the extent and boundaries of these problem spaces, as well as solutions and experiences that work within those bounds. The problem domain is not limited by the type of system, nor by the data or application domain. Instead, the workshop focuses on how best to ensure that ML is working as intended and that it is neither reinforcing biases nor raising ethical questions about its results.

Topics of Interest:

Submissions accepted in EasyChair:

https://easychair.org/conferences/?conf=teml23

Papers should be formatted in IEEE format, following the eScience formatting rules, and may be up to 5 pages, not including references.

Important Dates:

Proposed Program Committee:


Organizing Committee:

Agenda:

TBD

Paper presentations (15 minutes + 5 minutes for questions)