The AI Ethics Tool

AI ethics has gained significant momentum, evidenced by a growing body of published literature, policy guidelines, and public discourse. However, the practical implementation and adoption of AI ethics principles among practitioners have not kept pace with this theoretical development. Common barriers to adoption include overly abstract language, poor accessibility, and insufficient practical guidance for implementation. Through participatory design with industry practitioners, we (my colleagues at Tufts, Sarah Hladikova and Winnie Wang, and I) developed an open-source tool that bridges this gap.

The AI Ethics Tool is inspired by the Morley Typology [1]. This theoretical framework matches N = 106 available methods and tools reported in the literature to the core AI ethics principles (Beneficence, Non-Maleficence, Justice, Autonomy, and Explainability) and to phases in the algorithm development pipeline (Business Development; Design; Training and Test; Prototyping; Testing; Deployment; Monitoring). We first reviewed the literature associated with each phase and excluded the methods and tools that are no longer available. We used this compressed Morley Typology as the starting point for developing the AI Ethics Tool, but we also reviewed and included methods and tools beyond the Typology that align with the same principles. We believe the current version of the Tool is representative of the diverse normative work in the AI ethics literature.
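The typology's structure, with methods indexed by principle and pipeline phase and unavailable entries filtered out, can be sketched as a small data model. This is a minimal illustration, not code from the Tool itself; the method names and their principle/phase labels below are hypothetical placeholders:

```python
from dataclasses import dataclass

# The five core principles and seven pipeline phases described above.
PRINCIPLES = ["Beneficence", "Non-Maleficence", "Justice", "Autonomy", "Explainability"]
PHASES = ["Business Development", "Design", "Training and Test", "Prototyping",
          "Testing", "Deployment", "Monitoring"]

@dataclass
class Method:
    """One method or tool from the literature, tagged with where it applies."""
    name: str
    principles: list
    phases: list
    available: bool = True  # methods no longer available are excluded

def build_typology(methods):
    """Index available methods into a (principle, phase) grid, as in the Typology."""
    typology = {(pr, ph): [] for pr in PRINCIPLES for ph in PHASES}
    for m in methods:
        if not m.available:
            continue  # compressed Typology: drop defunct methods
        for pr in m.principles:
            for ph in m.phases:
                typology[(pr, ph)].append(m.name)
    return typology

# Placeholder entries for illustration only.
methods = [
    Method("Example Documentation Template", ["Explainability"], ["Training and Test"]),
    Method("Defunct Checklist", ["Justice"], ["Design"], available=False),
]
typology = build_typology(methods)
```

A lookup such as `typology[("Explainability", "Training and Test")]` then returns the available methods for that cell of the grid, while cells whose only methods were excluded come back empty.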

Read the paper about this tool on arXiv: https://arxiv.org/abs/2501.16954

Check the tool here! https://ai-ethics-tool.gitbook.io/ai-ethics-tool

References:

[1] Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168.