The core challenge of embedding ethics into AI workflows is the belief that it will slow innovation. This challenge can be overcome through a process we call TOME (Top of Mind Ethics), which helps designers and developers see how ethical vetting actually makes them more creative and their solutions more robust.
With a proliferation of AI ethics principles and limited demonstration of their efficacy in practice, this talk looks at organizational structures, governance, and incentives as levers for improving AI ethics. In particular, it focuses on how public audits of an organization's ethical engagement can help rectify power imbalances and increase the likelihood that AI ethics principles and tools will achieve their stated goals. The ethical engagement audit we propose centers on two fundamental questions: how well AI ethics tools and principles are integrated into the organization's workflows, and how well the organization solicits and responds to ethical critiques. Both can help guide an organization to meaningfully address the societal impacts of its design, development, and deployment of AI. Further, we break down each of these fundamental questions into concrete organizational aspects for auditors to focus on. Auditing the integration of AI ethics principles and tools into workflows includes examining an organization's day-to-day practices and incentives. Auditing an organization's critical practices includes examining its uptake of criticism, solicitation of critiques, cultivation of relevant literacies, and access to diverse critiques.
Optional reading/resource: State of AI Ethics Report - Volume 6
Research from the FAccT community is becoming integrated into responsible AI tools and processes and used by AI practitioners in their work. In this talk, I will discuss how organizational incentives and business logics affect practitioners' responsible AI work practices, including how organizational priorities for speed, scale, and profit may impact processes for identifying and assessing fairness-related harms. More broadly, this talk will call for FAccT research that engages with the organizational contexts of responsible AI work in practice.
For the last few years, Yoav Schlesinger and I have thought a lot about how to grow and mature our AI ethics practice at Salesforce. We've spent time in self-reflection and in conversation with our peers at other large U.S. enterprise tech companies that have built their own teams and practices. We've also examined maturity models in other fields, from safety to privacy to security. From this, we've identified a maturity model for building an ethical (or "trusted" or "responsible," choose your own word) AI practice, and we will share our lessons learned with others interested in building a culture of responsible innovation.