Bridging the Gap from AI Ethics Research to Practice

FAT* 2020 CRAFT Session

About the Workshop

This 90-minute workshop will take place in Room MR8 on January 29 at 3:00 pm (15:00). It will focus on efforts by AI ethics practitioners in technology companies to evaluate and ensure fairness in machine learning applications. Six industry practitioners (Amazon, Yoti, Microsoft, Pymetrics, Facebook, Salesforce) will briefly share insights from their work on fairness in machine learning applications: what has worked and what has not, lessons learned, and the best practices instituted as a result. After that set of lightning talks, and for the remainder of the workshop, attendees will discuss insights gleaned from the talks. There will be an opportunity to brainstorm ways to build upon the practitioners' work through further research or collaboration. The goal is to develop a shared understanding of the experiences and needs of AI ethics practitioners in order to identify areas for deeper research into fairness in AI.

Maximum 40 Participants

Agenda

  • 40 minutes: Six “lightning talks,” in which presenters will share their experiences and lessons learned.
  • 10 minutes: Participants will have an opportunity to ask questions directed at any of the presenters.
  • 5 minutes: Themes from the talks will be identified and grouped for breakout discussions.
  • 20 minutes: Attendees will join a facilitated group conversation on the topic of most interest to them, brainstorm ideas for further implementing fairness research in practice, and identify open questions and topics for future research.
  • 15 minutes: The groups will reconvene to share a summary of their discussions with the larger group.

Organizers

Kathy Baxter is Architect of Ethical AI Practice at Salesforce with over 20 years of experience in the tech industry. She develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of ethical AI. You can read about her research at Einstein.ai/ethics. She received her MS in Engineering Psychology and her BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, "Understanding Your Users," was published in May 2015.

Yoav Schlesinger is Principal of Ethical AI Practice at Salesforce. He helps instantiate, embed, and scale industry-leading best practices for the responsible development, use, and deployment of AI. Prior to joining Salesforce, Yoav worked at Omidyar Network, where he led the Responsible Computer Science Challenge and helped develop EthicalOS, a risk mitigation toolkit for product managers. Before that, he leveraged his undergraduate studies in Religious Studies and Political Science as a leader of mission-driven social impact organizations.

Lightning Talks

Joaquin Quinonero Candela, Director of Engineering, Facebook (with Isabel Kloumann, Research Science Manager): The presenter will focus on Facebook's approach to fairness in product development, with an emphasis on fairness as a process that considers the holistic impact of, and interaction between, people and systems, rather than as a specific state or property of data or an algorithm. Facebook's approach will be illustrated through a case study of fairness in a content moderation system.

Krishnaram Kenthapadi, Principal Scientist, Machine Learning Services, Amazon AI [formerly LinkedIn]: The presenter will share LinkedIn's experience deploying fairness-aware reranking for talent search. The key lesson from LinkedIn's experience will be highlighted: building consensus and achieving collaboration across key stakeholders (such as product, legal, PR, engineering, and AI/ML teams) is a prerequisite for the successful adoption of fairness-aware approaches in practice.
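
To make the technique concrete, below is a minimal sketch of fairness-aware re-ranking under simple assumptions: hypothetical candidate scores, a single group attribute, and per-group target shares. LinkedIn has described a related deterministic re-ranking method (Geyik, Ambler, and Kenthapadi, KDD 2019); the greedy rule here is a simplified illustration, not that paper's algorithm or LinkedIn's production system.

```python
# Sketch: greedily rebuild a ranked list so that each group's share of
# every top-k prefix does not fall below its target share.
from collections import defaultdict

def fairness_aware_rerank(candidates, targets):
    """candidates: list of (id, group, score); targets: {group: desired share}."""
    # Per-group queues, each sorted by descending score.
    queues = defaultdict(list)
    for c in sorted(candidates, key=lambda c: -c[2]):
        queues[c[1]].append(c)

    reranked, counts = [], defaultdict(int)
    for k in range(1, len(candidates) + 1):
        # Groups whose representation in the top-k would dip below target.
        needy = [g for g, share in targets.items()
                 if queues[g] and counts[g] < share * k]
        if needy:
            # Fill the deficit with the highest-scoring under-represented candidate.
            g = max(needy, key=lambda g: queues[g][0][2])
        else:
            # No constraint binding: take the best remaining candidate overall.
            g = max((g for g in queues if queues[g]),
                    key=lambda g: queues[g][0][2])
        counts[g] += 1
        reranked.append(queues[g].pop(0))
    return reranked

ranked = fairness_aware_rerank(
    [("a", "M", 0.9), ("b", "M", 0.8), ("c", "F", 0.7), ("d", "M", 0.6)],
    targets={"M": 0.5, "F": 0.5},
)
print([c[0] for c in ranked])  # ['a', 'c', 'b', 'd']
```

Note that this greedy rule enforces only a minimum per-group share at each prefix; a production system would typically also cap over-representation and handle multiple, possibly intersecting, group attributes.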

Lewis Baker, Director of Data Science, Pymetrics: Pymetrics builds predictive models of job success that are tailored to specific roles at a company. The model's inputs are checked and curated to contain minimal differences between protected groups (e.g., by age, ethnicity, or gender). However, in a real-life example of Simpson's Paradox, it is possible to find bias in a success model for salespeople in Ohio even if the same model shows no bias on the global population. The presenter will discuss how this is addressed at Pymetrics.
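
A toy calculation shows how Simpson's Paradox can hide subgroup bias; the numbers below are invented for illustration and are not Pymetrics data. Globally, the model's positive rates for two groups are identical, yet within each region the rates diverge.

```python
# Hypothetical audit counts: (region, group) -> (positive predictions, total).
pass_counts = {
    ("Ohio",  "A"): (20, 100),
    ("Ohio",  "B"): (10, 100),
    ("Other", "A"): (30, 100),
    ("Other", "B"): (40, 100),
}

def rate(pairs):
    """Pooled positive rate over a list of (positives, total) pairs."""
    return sum(p for p, n in pairs) / sum(n for p, n in pairs)

for group in ("A", "B"):
    global_rate = rate([v for (r, g), v in pass_counts.items() if g == group])
    print(f"group {group} global rate: {global_rate:.0%}")   # both 25%

for region in ("Ohio", "Other"):
    for group in ("A", "B"):
        r = rate([pass_counts[(region, group)]])
        print(f"{region}, group {group}: {r:.0%}")
# Ohio favors A (20% vs 10%); Other favors B (30% vs 40%).
# An audit run only on the pooled population would miss both gaps.
```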

Julie Dawson, Director of Regulatory & Policy, Yoti: Yoti, a digital identity platform and a B Corp, provides secure AI-based identity verification and age estimation in a way that protects users' privacy and is transparent about accuracy across gender, age, and skin tone. The presenter will share how the platform has applied ML research to age estimation and the steps being taken to address bias and to be open with relying parties and regulators about the levels of bias that remain.
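
The kind of disaggregated accuracy reporting described here can be sketched in a few lines. The field names, the Fitzpatrick-style skin-tone buckets, and the sample records below are hypothetical, not Yoti's actual schema or results.

```python
# Sketch: mean absolute error of an age estimator, broken out by subgroup.
from collections import defaultdict

def disaggregated_mae(records):
    """records: dicts with true_age, predicted_age, gender, skin_tone."""
    errors = defaultdict(list)
    for r in records:
        key = (r["gender"], r["skin_tone"])
        errors[key].append(abs(r["predicted_age"] - r["true_age"]))
    return {key: sum(e) / len(e) for key, e in errors.items()}

report = disaggregated_mae([
    {"true_age": 25, "predicted_age": 27, "gender": "F", "skin_tone": "V"},
    {"true_age": 31, "predicted_age": 30, "gender": "M", "skin_tone": "II"},
    {"true_age": 19, "predicted_age": 24, "gender": "F", "skin_tone": "V"},
])
for (gender, tone), mae in sorted(report.items()):
    print(f"gender={gender} skin_tone={tone}: MAE {mae:.1f} years")
```

Publishing such a per-subgroup breakdown, rather than a single aggregate accuracy figure, is what allows relying parties and regulators to see where the estimator performs worse.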

Luke Stark, Postdoctoral Researcher, Microsoft (with Michael A. Madaio, Jennifer Wortman Vaughan, & Hanna Wallach): Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI. Many organizations have published principles and even checklists intended to guide the ethical development and deployment of AI systems; however, unless checklists are grounded in practitioners' needs, they may be misused. To understand the role of checklists in AI ethics, Microsoft conducted an iterative design process with practitioners to co-design an AI fairness checklist and to identify desiderata and concerns for AI checklists in general. The study found that AI checklists could provide organizational infrastructure for formalizing ad-hoc processes and empowering individual advocates, and it highlights future research directions in this space.

Sarah Aerni, Director of Data Science and Engineering, Salesforce: Salesforce Einstein democratizes AI by putting it in the hands of business experts with little experience in or understanding of machine learning. The presenter will cover how Salesforce provides an appropriate level of friction in the setup experience, automates the modeling process, builds guardrails to protect users from common mistakes, and provides scorecards that offer broadly consumable explainability to help users decide on their next steps.
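
As one illustration of what such a guardrail might look like, here is a hypothetical sketch (not Salesforce Einstein's implementation) that warns when a candidate feature either is a protected attribute or correlates strongly with one. The column names, threshold, and sample data are all assumptions.

```python
# Sketch: flag features that are, or strongly track, protected attributes.
import pandas as pd

PROTECTED = {"age", "gender", "ethnicity"}

def guardrail_check(df, features, threshold=0.8):
    warnings = []
    for feat in features:
        if feat in PROTECTED:
            warnings.append(f"'{feat}' is a protected attribute and was excluded.")
            continue
        for prot in PROTECTED & set(df.columns):
            # Encode the protected column as integer codes for a crude correlation probe.
            codes = pd.Series(pd.factorize(df[prot])[0], index=df.index)
            corr = df[feat].corr(codes)
            if pd.notna(corr) and abs(corr) >= threshold:
                warnings.append(
                    f"'{feat}' correlates with '{prot}' (r={corr:.2f}); review before training."
                )
    return warnings

df = pd.DataFrame({
    "tenure_months": [3, 30, 6, 24],
    "zip_prefix":    [1, 1, 2, 2],
    "ethnicity":     ["a", "a", "b", "b"],
})
print(guardrail_check(df, ["tenure_months", "zip_prefix"]))
# ["'zip_prefix' correlates with 'ethnicity' (r=1.00); review before training."]
```

A simple correlation probe like this is only a first line of defense against proxy features; it surfaces a warning for human review rather than silently dropping the column.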