AAAI 2022 Spring Symposia Series

Approaches to Ethical Computing

Metrics for Measuring AI’s Proficiency and Competency for Ethical Reasoning

Virtual Meeting

Stanford University, Palo Alto, California

March 21-23, 2022

Pre-recording Submission


Description:

The prolific deployment of Artificial Intelligence (AI) across different applications has introduced novel challenges for AI developers and researchers. AI is permeating decision making for the masses: from self-driving automobiles to financial loan approval to military applications. Ethical decisions have largely been made by humans with a vested interest in, and close temporal and geographical proximity to, the decision points. With AI making decisions, those ethical responsibilities are now being pushed to AI designers, who may be far removed from how, where, and when an ethical dilemma occurs. Such systems may deploy global “ethical” rules with unanticipated or unintended local effects, or vice versa.

There has been important work in recognizing the ethical impact of AI, such as the identification of unintended biases that training data introduce into the accuracy of facial recognition algorithms. Further, there has been increasing interest in explainable AI. Because of the complexity of recent AI advances, the logic of how and why the analytics respond in certain ways may not be transparent to humans. Especially for methods that are not grounded in symbolic approaches, the internal logic may not be interpretable, predictable, or even constrainable by the algorithm designers.

While explainability is desirable, it is likely not sufficient for creating “ethical AI”, i.e., machines that can make ethical decisions. Similar to a human’s natural development from newborn to adult, we posit that it is possible for an AI to evaluate its own decisions and evolve to advance its own ethical reasoning capabilities. However, this will require the invention of new techniques for evaluating the AI’s proficiency and competency in its own ethical reasoning. Traditional software and system testing methods may not be feasible for ethical AI algorithms because what is considered “ethical” often consists of judgments made within situational contexts. The question of what is ethical has been studied for centuries. This symposium invites interdisciplinary methods for characterizing and measuring ethical decisions as applied to ethical AI.


Call for Submissions

We cordially invite authors to submit papers of 2–6 pages, which will be reviewed by the organizing committee. We welcome prior work and work-in-progress papers that describe new methods and present a challenge or opportunity for developing metrics to evaluate ethical AI, where authors wish to use the symposium as an opportunity to leverage the diverse perspectives and disciplines of other participants to solidify problem definition and solution ideation. Papers should clearly state the limitation(s) of current methods and potential ways these could be overcome. Submissions should be formatted according to the AAAI template and submitted through the AAAI Spring Symposium EasyChair site here. The author kit is available here.


Due date: November 30, 2021 (closed).


Sample Topics:

  • What are the dependencies and requirements for developing metrics of a system’s “ethical proficiency”?

  • What constitutes a sufficiently “ethical” system? What design and measurement considerations are necessary to engineer an “ethical standard” for AI systems?

  • How can ethical principles be operationalized?

  • How does ethical AI impact the modeling and simulation of large-scale systems of systems?

  • What are methods for authoring rules of behavior for ethical AI? What measures need to be captured and what are the acceptable boundaries of those measures?

  • How can we address performance concerns around deployment of ethical AI?

  • Measuring for dangerous adaptations: How can we measure for this effect in ethical AI? If human behavior is changing in a way that “we” don’t like, how do we realize this before it’s too late?


Sample Domains/Applications:

  • Education

  • Healthcare

  • Law enforcement and judiciary

  • News and social media

  • Military operations and applications

  • Transportation

  • Others (please describe)



Organizing Committee

Peggy Wu

(Corresponding Organizer)

Associate Director and Principal Investigator, Human Machine Interactions

Raytheon Technologies Research Center

Brett Israelsen

Research Scientist

Raytheon Technologies Research Center

Michael Salpukas, PhD

Engineering Fellow

Raytheon Technologies - Raytheon Missiles & Defense

Shannon Ellsworth

Senior Principal Systems Engineer

Raytheon Technologies - Raytheon Missiles & Defense

Hsin-Fu “Sinker” Wu

LCDR, USN (Ret)

Senior Manager, Systems Engineering, Systems of Systems Modeling & Architecture (SMA)

Raytheon Technologies - Raytheon Missiles & Defense


Joseph Williams, PhD

Seattle Director

Pacific Northwest National Laboratory (PNNL)

John Basl, PhD

Associate Professor of Philosophy

Northeastern University