This 3.5-hour workshop at the ACM FAccT Conference 2022 took place virtually on Wednesday, June 22nd at 04:30 (4:30am) Korean Standard Time / Tuesday, June 21st at 12:30 (12:30pm) Pacific Daylight Time / 15:30 (3:30pm) Eastern Daylight Time. The session consisted of lightning talks by 16 experts from industry, academia, and NGOs, organized into four breakout groups by theme. The goal of this workshop was to give participants a deeper view into the emerging problems within each of these spaces through talks from and discussions with leading experts across academia, industry, corporate governance, and public policy, so that they come away with a more thorough understanding of new challenges in the dynamics, institutions, concepts, and approaches unique to domains across the FAccT space. It is our hope that this enhanced awareness will equip attendees to recognize and confront these challenges more quickly and confidently, and give them a baseline framework they can use to begin thinking about solutions.
The first theme, Concepts of Fairness and Transparency, explored problems and paradoxes of fairness and transparency, including how to balance privacy and security needs with documentation requirements, how to address moral values in documentation methods, and the implications of a lack of representative data. This session also discussed tensions between these concepts. Understanding new issues in FAccT concepts and research will give participants insight into deeper theoretical and moral frameworks through which to think about all areas of FAccT work.
Applied FAccT Practices delved into issues arising in current FAccT methods and practice. This session covered processes for debugging, model assessment, the pros and cons of existing fairness assessments, as well as the problems and impacts related to how practitioners are thinking about classifying or labeling individuals’ data. Discussions about emerging issues in applied FAccT practices will keep practitioners up to speed on current discussions and the most relevant problems they may need to consider in the context of their own work.
Organizational Approaches to FAccT and Cultural Change centered on issues that block or encourage the advancement of FAccT practices within organizations and the organizational or cultural dynamics at play. Topics included the tensions between key components of technology production, such as speed and efficiency, and the need to address fairness-related harms, as well as challenges with organizational incentives and cultural attitudes toward FAccT. Because shifting organizational attitudes and cultures in support of FAccT is critical to advancing this work, familiarity with the novel problems that arise while building adoption, acceptance, and support for FAccT practices is key to success in this field.
Finally, Public Policy and Regulation discussed the problem spaces surfacing in legislative and regulatory approaches to emerging technology oversight. The goal was for participants to get a sense of the main friction points between regulators and companies and to further examine issues with proposed policy solutions. While practitioners and researchers drive industry and academic work on FAccT concepts and practices, governments are examining how to incorporate FAccT topics into emerging regulations, often without access to those living and breathing the work. It is critical for the FAccT community to understand trends and themes in policymaking for emerging tech so that regulation, concepts, and practice can more closely align.
All of these domains are essential parts of the FAccT space, yet each comes with unique problem spaces and challenges. In this workshop, participants had the opportunity to dig deeply into emerging issues with experts working in each domain. Within the topic-specific breakout groups, participants could ask questions, share their expertise, ideate, and engage with the presenters, as well as hear summaries and key insights from the groups they did not attend. A paper summarizing the presentations, discussions, and insights will be published after the workshop.
Maximum 56 participants (14 participants per break out group)
Introduction to the four emerging problem spaces (25 minutes)
Break Out Discussions: Each speaker will give a 7-10-minute talk on their topic within the emerging problem space theme. A Documenter will then facilitate the discussion and take notes. There will be a 10-minute break at the halfway point. (100 minutes)
Break: Participants will take a break while Documenters summarize points for final readout. (15 minutes)
Read Out: Documenters will read out a summary of the key takeaways from their breakout group. (35 minutes)
Closing Remarks: Final remarks from organizers and participants. (5 minutes)
Kathy Baxter is Principal Architect of Ethical AI Practice at Salesforce with over 20 years of experience in the tech industry. Kathy develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. She is also a member of Singapore’s Advisory Council on the Ethical Use of AI and Data.
Prior to Salesforce, she worked at Google, eBay, and Oracle in User Experience Research. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. She is the co-author of "Understanding Your Users: A Practical Guide to User Research Methodologies." You can read about the Ethics AI Practice Team's current research at salesforceairesearch.com/trusted-ai.
Twitter: @baxterkb
Chloe Autio's work bridges the gap between technology and public policy, specifically in the area of artificial intelligence (AI) governance. She is a connector, communicator, and problem solver who has developed and operationalized corporate programs for Responsible AI and technology oversight. She brings this experience and perspective to the Cantellus Group where she provides custom services to help companies embed governance into their technology strategy.
Chloe has worked across global teams of data scientists, business and sales leaders, lawyers, and researchers to shape governance and policy for artificial intelligence and other emerging technologies. This work extends to partnerships and collaboration with key stakeholders to shape and influence public policy, including government officials, consumer advocacy and civil society organizations, think tanks, and academia. A proud Montanan, Chloe now resides in the Washington DC Metro Area. Chloe holds an Economics degree from UC Berkeley.
Twitter: @ChloeAutio
Dr. Rumman Chowdhury’s passion lies at the intersection of artificial intelligence and humanity. She is a pioneer in the field of applied algorithmic ethics, creating cutting-edge socio-technical solutions for ethical, explainable and transparent AI.
She is currently the Director of the META (ML Ethics, Transparency, and Accountability) team at Twitter, leading a team of applied researchers and engineers to identify and mitigate algorithmic harms on the platform. Previously, she was CEO and founder of Parity, an enterprise algorithmic audit platform company. She formerly served as Global Lead for Responsible AI at Accenture Applied Intelligence, where she led the design of the Fairness Tool, a first-in-industry algorithmic tool to identify and mitigate bias in AI systems. Dr. Chowdhury co-authored a Harvard Business Review piece on its influence and impact.
Twitter: @ruchowdh
Dr. Kush Varshney is a distinguished research staff member and manager with IBM Research at the Thomas J. Watson Research Center, Yorktown Heights, NY, where he leads the machine learning group in the Foundations of Trustworthy AI department. He was a visiting scientist at IBM Research - Africa, Nairobi, Kenya in 2019. He is the founding co-director of the IBM Science for Social Good initiative. He applies data science and predictive analytics to human capital management, healthcare, olfaction, computational creativity, public affairs, international development, and algorithmic fairness, which has led to recognitions such as the 2013 Gerstner Award for Client Excellence for contributions to the WellPoint team and the Extraordinary IBM Research Technical Accomplishment for contributions to workforce innovation and enterprise transformation. He conducts academic research on the theory and methods of trustworthy machine learning. His work has been recognized through best paper awards at the Fusion 2009, SOLI 2013, KDD 2014, and SDM 2015 conferences and the 2019 Computing Community Consortium / Schmidt Futures Computer Science for Social Good White Paper Competition. He self-published a book entitled 'Trustworthy Machine Learning' in 2022, available at http://www.trustworthymachinelearning.com and in paperback at https://www.amazon.com/dp/B09SL5GPCD. He is a senior member of the IEEE.
Twitter: @krvarshney
Anna Bethke is a Principal Data Scientist in the Ethical AI Practice team at Salesforce. They have experience in several facets of machine learning, deep learning, human factors engineering, and AI Ethics. They particularly enjoy working with stakeholders to determine what type of algorithmic system or insights may be most valuable to them, and then finding the most intuitive way to accomplish those goals. To support this, they have strong knowledge of a wide range of algorithms as well as Python, Spark, and SQL. In addition, they enjoy teaching others about technology's potential and helping them understand its strengths and weaknesses.
They previously managed the AI for Good Data Science team at Meta, served as Head of AI for Social Good at Intel, and worked as a Data Scientist at Lab 41, Argonne National Laboratory, and MIT Lincoln Laboratory. They received their M.S. and B.S. in Aerospace, Aeronautical, & Astronautical Engineering from MIT and their MBA from the Quantic School of Business and Technology.
Twitter: @data_beth
Dr. Rachel Thomas is a professor of practice at Queensland University of Technology and co-founder of fast.ai, which created the most popular deep learning course in the world. Previously, she was the founding director of the University of San Francisco Center for Applied Data Ethics. Rachel earned her mathematics PhD at Duke, was selected by Forbes as one of 20 Incredible Women in AI, and was an early engineer at Uber.
Twitter: @math_rachel
Website: https://rachel.fast.ai/
Dr. Krishnaram Kenthapadi is the Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and ML monitoring platform. Previously, he was a Principal Scientist at Amazon AWS AI, where he led the fairness, explainability, privacy, and model understanding initiatives in the Amazon AI platform. Prior to joining Amazon, he led similar efforts on the LinkedIn AI team and served as LinkedIn’s representative on Microsoft’s AI and Ethics in Engineering and Research (AETHER) Advisory Board. Previously, he was a Researcher at Microsoft Research Silicon Valley Lab. Krishnaram received his Ph.D. in Computer Science from Stanford University in 2006. He serves regularly on the program committees of KDD, WWW, WSDM, and related conferences, and co-chaired the 2014 ACM Symposium on Computing for Development. His work has been recognized through awards at NAACL, WWW, SODA, CIKM, the ICML AutoML workshop, and Microsoft’s AI/ML conference (MLADS). He has published 50+ papers, with 4500+ citations, and filed 150+ patents (70 granted). He has presented tutorials on privacy, fairness, explainable AI, and responsible AI at forums such as KDD ’18 ’19, WSDM ’19, WWW ’19 ’20 '21, FAccT ’20 '21, AAAI ’20 '21, and ICML '21.
Twitter: @kkenthapadi
Tulsee Doshi is the Head of Product for Google's Responsible AI & Human-Centered Technology Organization, where she leads efforts to build more equitable, intentional, and accountable user experiences into Google's products. She is also an AI Ethics Advisor to Lemonade Insurance.
Tulsee has expertise in product management for machine-learning products, with a particular focus on AI Ethics in policy & product development. She was recognized on the Forbes 30 Under 30 list for 2022. Tulsee received her M.S. in Computer Science, with a focus on Artificial Intelligence, and B.S. in Symbolic Systems from Stanford University.
Twitter: @tulseedoshi
Dr. Su Lin Blodgett is a researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research Montréal. Her research focuses on the ethical and social implications of language technologies, examining the complexities of language and language technologies in their social contexts and supporting NLP practitioners in their ethical work. She completed her Ph.D. in computer science at the University of Massachusetts Amherst, where she was supported by the NSF Graduate Research Fellowship, and has been named one of the 2022 100 Brilliant Women in AI Ethics.
Twitter: @sulin_blodgett
Dr. Diptikalyan Saha is a Senior Technical Staff Member at the IBM Research Lab in India. His current research focuses on AI Testing, where he works to ensure Trustworthy AI by developing novel techniques to test, debug, and repair AI models and by applying AI techniques to improve traditional software testing. More broadly, he contributes to the intersection of Software Engineering and Artificial Intelligence. His varied expertise across Computer Science is evident from his publications at conferences such as ICSE, FSE, VLDB, SIGMOD, ICLP, AAAI, and TACAS. Dipti holds a Ph.D. in Computer Science from the State University of New York at Stony Brook.
Will Griffin is Chief Ethics Officer of Hypergiant, an enterprise A.I. company based in Austin, Texas. Will is the recipient of the 2020 IEEE Award for Distinguished Ethical Practices and the creator of Hypergiant's Top of Mind Ethics (TOME) framework which won the Communitas Award for Excellence in A.I. Ethics. He is a graduate of Dartmouth College and Harvard Law School. To learn more about his work visit https://www.hypergiant.com/ethics/
Twitter: @WillGriffin1of1
Abhishek Gupta is the Senior Responsible AI Leader & Expert with the Boston Consulting Group (BCG) where he works with BCG's Chief AI Ethics Officer to advise clients and build end-to-end Responsible AI programs. He is also the Founder & Principal Researcher at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy. Through his work as the Chair of the Standards Working Group at the Green Software Foundation, he is leading the development of a Software Carbon Intensity standard towards the comparable and interoperable measurement of the environmental impacts of AI systems.
His work focuses on applied technical, policy, and organizational measures for building ethical, safe, and inclusive AI systems and organizations, specializing in the operationalization of Responsible AI and its deployment in organizations and in assessing and mitigating the environmental impact of these systems. He has advised national governments, multilateral organizations, academic institutions, and corporations across the globe. His work on community building has been recognized by governments across North America, Europe, Asia, and Oceania. He is a highly sought-after speaker with talks at the United Nations, European Parliament, G7 AI Summit, TEDx, Harvard Business School, and Kellogg School of Management, amongst others. His writing on Responsible AI has been featured by The Wall Street Journal, MIT Technology Review, Protocol, Fortune, and VentureBeat, amongst others. For more information on his work, check out https://atg-abhishek.github.io.
Twitter: @atg_abhishek
Dr. Michael Madaio is a postdoctoral researcher at Microsoft Research working with the FATE research group (Fairness, Accountability, Transparency, and Ethics in AI). He works at the intersection of human-computer interaction and AI, focusing on enabling more fair and responsible AI through research with AI practitioners and stakeholders impacted by AI systems. Michael received his PhD in Human-Computer Interaction from Carnegie Mellon University. His research has received multiple best paper awards and nominations, including from the CHI, KDD, and COMPASS conferences.
Twitter: @mmadaio
Dr. David Hardoon is the Chief Data and AI Officer at Union Bank of the Philippines, Chief Data Officer at UnionDigital, Chief Data and Innovation Officer at Aboitiz Group, and Managing Director at Aboitiz Data Innovation. Concurrently, David is an external advisor to Singapore's Corrupt Practices Investigation Bureau (CPIB) in the capacity of Senior Advisor (Artificial Intelligence) and to Singapore's Central Provident Fund Board (CPF) in the capacity of Senior Advisor (Data Science).
Prior to his current roles, David was the Monetary Authority of Singapore's (MAS) first appointed Chief Data Officer and Head of the Data Analytics Group, reporting to the agency's Deputy Managing Director for Financial Supervision, and subsequently Special Advisor (Artificial Intelligence), reporting to the Deputy Managing Director for Markets and Development. In these roles, he led the development of AI strategy for both MAS and Singapore's financial sector and drove efforts to promote open cross-border data flows.
David holds a PhD in Computer Science in the field of Machine Learning from the University of Southampton and graduated from Royal Holloway, University of London with a First Class Honors B.Sc. in Computer Science and Artificial Intelligence.
Twitter: @DavidHardoon
Dr. Casey Fiesler is an assistant professor in Information Science (and Computer Science by courtesy) at the University of Colorado Boulder. She researches and teaches in the areas of technology ethics, internet law and policy, and online communities. Her work on research ethics for data science, ethics education in computing, and broadening participation in computing is supported by the National Science Foundation, and she is the recipient of an NSF CAREER Award. Also a public scholar, she is a frequent commentator and speaker on topics of technology ethics and policy, and her research has been covered everywhere from The New York Times to Teen Vogue, but she's most proud of her TikToks. She holds a PhD in Human-Centered Computing from Georgia Tech and a JD from Vanderbilt Law School.
Twitter: @cfiesler
Karen Silverman is a leading global expert in practical governance strategies for AI and other frontier technologies. As the CEO and Founder of The Cantellus Group, she advises Fortune 50 companies, startups, consortia, and governments on how to govern cutting-edge technologies in a rapidly changing policy environment. Her expertise is informed by more than 20 years of practice and management leadership at Latham & Watkins, LLP where she advised global businesses in complex antitrust matters, M&A, governance, ESG, and crisis management.
Karen is a WEF Global Innovator and sits on its Global AI Council where she has contributed to the Board Toolkit for AI and ongoing work on AI, data, and cybersecurity issues. She has been named one of the Top Ten Legal Innovators (The Financial Times, 2019), a Top AI Lawyer in California (California Daily Journal, 2019), and one of the 100 Brilliant Women in AI Ethics (Women in AI Ethics, 2020).
As a leading voice in the governance of AI and other frontier technologies, she is a regular speaker at conferences and forums including CogX, RSA, HIMSS, the Athens Roundtable, National Judicial College, Aspen Institute Technology Fellows, and Berkeley Law. Her thoughts on the governance, oversight and real-world applications of AI, AR, VR, and other nascent technologies are featured in The WEF’s Agenda and the AI Journal.
Twitter: @KESilverman
Reva Schwartz is a research scientist in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST). She currently serves as Principal Investigator on Bias in Artificial Intelligence for NIST’s Trustworthy and Responsible AI program. Her research focuses on the role of context in human language and behavior and the nature of expertise and expert judgment in socio-technical systems. Reva has advised federal agencies about how experts interact with automation to make sense of information under high-risk and high-uncertainty operational conditions.
Yoav Schlesinger is Principal of Ethical AI Practice at Salesforce, where he helps instantiate, embed, and scale industry-leading best practices for the responsible development, use, and deployment of AI. Prior to joining this team, Yoav worked at Omidyar Network, where he led the Responsible Computer Science Challenge and helped develop EthicalOS, a risk mitigation toolkit for product managers. Before that, he leveraged his undergraduate studies in Religious Studies and Political Science as a leader of mission-driven, social impact organizations.
Twitter: @yschlesinger
Michelle Carney is a Computational Neuroscientist turned UX Researcher whose practice focuses on the intersection of Data Science and UX. Currently a Senior UX Researcher on Google’s TensorFlow team, Michelle works on combining Machine Learning and UX; her work includes Magenta’s Tone Transfer project and the People + AI Research team. Outside of work, Michelle organizes the Machine Learning and UX Meetup and teaches Designing Machine Learning at the Stanford d.school.
Twitter: @michelleRcarney
Danielle Cass is a serial networker who has built communities across Silicon Valley, the tech industry, and the Global South. She is Activation Lead on the Microsoft Ethics & Society team, where she helps drive Responsible Innovation across the Cloud + Artificial Intelligence engineering organization. She co-founded an AI ethics / responsible tech community of enterprise and consumer tech companies, civil society, and academia to share resources for ethics training and implementation. She is a member of the World Economic Forum Steering Committee for the Ethical Design and Deployment of Technology Project and was named on the "100 Brilliant Women in AI Ethics" 2020 list by Lighthouse3. Based on her community-building in Silicon Valley, she was recruited by the Obama Administration to drive tech sector engagement for the U.S. Agency for International Development, where she was a lead organizer of President Obama’s 2016 Global Entrepreneurship Summit at Stanford University.
Twitter: @daniellecass
Brinson Elliott is an Analyst at the Cantellus Group, a boutique advisory firm focused on the strategy, oversight, and governance of artificial intelligence and other frontier technologies. She previously clerked at the San Francisco District Attorney's Office and interned at the Santa Barbara Public Defender.
Brin recently graduated from Queen’s University with a BAH in political studies and philosophy, with Distinction. While there, she studied the socio-political implications of AI and technology governance through a global lens.