August 15th, 2021
Measures and Best Practices for Responsible AI
at KDD 2021
About
Machine learning (ML)-based systems have become ubiquitous, including in critical applications such as medicine and assistive technologies. It is therefore important to ascertain the trustworthiness of these models and the tasks they serve. A key component of this determination is the development of task-specific interventions, in the form of measures and test datasets, together with guidelines for best practices that ensure the various aspects of responsible model development and deployment, including robustness, interpretability and fairness. Further, severe imbalances and inadequate representation in datasets are known to be reinforced and exacerbated by models, leading to harmful downstream consequences. Common examples include coreference resolution systems in natural language understanding (NLU) that are often not gender-inclusive, and discrepancies in measuring how robust and trustworthy machine predictions are in domains where the selective labels problem is prevalent. Developing interventions that detect and quantify such problems, or guidelines that prevent them, is vital. In this workshop, we will discuss, curate and highlight work that focuses on these practical aspects of Responsible AI.
Through papers, invited talks, panel discussions and demos of industry tools, the workshop will focus on the different aspects of deploying responsible AI models: the development of measures and methods, their limitations, case studies, and trade-offs.