Welcome to Responsible AI in Practice! The Privacy and Responsible AI course educates participants on ethical considerations, regulatory frameworks, and best practices for developing and deploying AI systems with a focus on protecting user privacy and ensuring responsible use of AI technologies. This course is offered as part of Stanford University’s Institute for Computational and Mathematical Engineering (ICME) Summer Workshops Series.
How do we develop machine learning models and systems that take fairness, accuracy, explainability, robustness, and privacy into account? How do we operationalize models in production and address their governance, management, and monitoring? Given the increasing role of artificial intelligence (AI) applications in shaping our day-to-day experiences, and the rapid adoption of AI systems in high-stakes domains such as hiring, lending, and healthcare, it has become imperative to build and deploy AI systems responsibly.
In this course, we will first motivate the need for a "responsible AI by design" approach when developing AI / machine learning (ML) models and systems for consumer and enterprise applications. We will present an overview of responsible AI and associated techniques and tools, with a focus on model explainability, fairness, and privacy. We will then turn to the application of explainability, fairness assessment and unfairness mitigation, and privacy techniques in industry, presenting practical challenges and guidelines for using such techniques effectively, along with lessons learned from deploying models for several web-scale machine learning and data mining applications. Throughout, we will emphasize that responsible AI is socio-technical: its challenges sit at the intersection of society and technology and cannot be addressed by technologists alone. Finally, we will motivate the need for responsible AI principles when developing and deploying large language models (LLMs) and other generative AI models, and provide a roadmap for thinking about responsible AI for generative AI in practice.
In this course, you’ll get a first-hand feel for responsible AI through industry case studies, and discover the challenges underlying responsible AI in practice! Join us for an interactive learning experience that includes engaging presentations, collaborative group exercises, polls and quizzes, and even a "homework" to probe deeper between the two sessions.
Instructor: Krishnaram Kenthapadi (Chief AI Officer & Chief Scientist, Fiddler AI)
Course Assistant: Farnaz “Naz” Ghaedipour (Postdoctoral Scholar, Department of Management Science and Engineering, Stanford University)
Course Slides (Google Slides)
Zoom Video Recording
After this class, you will:
Understand the need for, and the challenges of, ethical considerations in AI/ML development.
Develop a working knowledge of responsible AI principles and techniques, with a focus on model explainability, fairness, and privacy in AI.
Build a practical awareness of risk assessment, harm reduction approaches, and emerging trends and best practices.
This course is aimed at attendees with a wide range of interests and backgrounds: researchers interested in techniques and tools for model explainability, fairness, and privacy in AI, as well as practitioners interested in implementing responsible AI for web-scale machine learning and data mining applications. We will not assume any prerequisite knowledge; we present the intuition underlying the various explainability, fairness, and privacy notions and techniques so that the material is accessible to all attendees. That said, to knowledgeably appreciate what can go wrong when developing AI/ML systems and how such problems are addressed, it helps to have a conceptual understanding of machine learning and how it differs from software developed by explicit programming. Previous exposure to machine learning at the level of the companion "Introduction to Machine Learning" workshop in this series should suffice. Beginners to ML will do well to read the transcript from this link: The Ethical Algorithm, with Michael Kearns. Fair warning: a careful read will take at least 15 minutes, and possibly 30 or more!
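If the distinction between explicit programming and machine learning is new to you, the following minimal sketch may help. It is purely illustrative and not part of the course materials; the loan scenario, the hand-coded rule, and the toy data are all invented for this page. The first function's behavior is fully specified by its author, while the second model's behavior is learned from labeled examples, so its mistakes and biases trace back to the data it was trained on.

```python
from sklearn.linear_model import LogisticRegression

# Explicit programming: a human writes the decision rule directly.
def approve_by_rule(income, debt):
    """Hand-coded rule: approve when income exceeds three times debt."""
    return income > 3 * debt

# Machine learning: the decision rule is learned from past examples,
# so its behavior (including any unfairness) depends on the training data.
X = [[60, 10], [30, 25], [80, 5], [20, 30]]  # [income, debt] in $1000s (toy data)
y = [1, 0, 1, 0]                             # historical approve/deny labels (toy)

model = LogisticRegression().fit(X, y)
print(approve_by_rule(50, 15))      # True: follows the hand-written rule
print(model.predict([[50, 15]]))    # learned behavior, not hand-written
```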
To join the workshop, you’ll need a device with a recent web browser and two-way audio and video access to Zoom. This could be a laptop or desktop computer running any operating system, such as Windows, Mac, or Linux. Participative activities benefit from a larger screen, so joining via a smartphone or tablet may not provide the best learning experience.
The course will consist of three parts: (1) responsible AI foundations including motivation, definitions, models, algorithms, and tools for explainability, fairness, and privacy in AI/ML systems (about 2 hours), (2) case studies across different companies, spanning different application domains, along with practical challenges and opportunities (about 2 hours), and (3) trustworthy generative AI (about 2 hours).
Motivation from regulatory, business, and data science perspectives
Fairness-aware ML techniques/tools
Explainable AI techniques/tools
Privacy-preserving ML techniques/tools
Open-source and commercial tools for AI explainability, fairness, privacy, and ML model monitoring (e.g., Amazon SageMaker Clarify and Debugger; Fiddler's AI Observability Platform; Google's Explainable AI, Fairness Indicators, and What-If Tool; IBM's AI 360 toolkits, such as AI Fairness 360 and AI Explainability 360; the LinkedIn Fairness Toolkit (LiFT); and Microsoft's Fairlearn and InterpretML)
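To give a concrete flavor of the techniques and tools listed above, here are two minimal sketches; both are illustrative only, with toy data invented for this page. The first uses Fairlearn, one of the open-source toolkits mentioned, to compute a standard group fairness metric: the demographic parity difference, i.e., the gap in selection rates between groups.

```python
import numpy as np
from fairlearn.metrics import demographic_parity_difference

# Toy binary predictions with a sensitive attribute (group membership).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Demographic parity depends only on predictions; y_true is required by the
# API but not used by this particular metric. Group A's selection rate is
# 0.5 and group B's is 0.25, so this prints 0.25 (0.0 would mean parity).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```

The second sketch implements the Laplace mechanism, a basic building block of differential privacy: adding noise calibrated to a query's sensitivity and a privacy budget epsilon before releasing a statistic.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon,
    which satisfies epsilon-differential privacy for a single query."""
    rng = np.random.default_rng() if rng is None else rng
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Example: a counting query ("how many users clicked?") has sensitivity 1,
# since adding or removing one person changes the count by at most 1.
print(laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5))
```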
In the second part, we will present case studies across different companies, spanning different application domains. We will emphasize that topics related to responsible AI are socio-technical, that is, they sit at the intersection of society and technology. The underlying challenges cannot be addressed by technologists alone; we need to work together with all key stakeholders (such as customers of a technology, those impacted by a technology, and people with backgrounds in ethics and related disciplines) and take their input into account when designing these systems. Finally, we will identify open problems and research directions for the data mining/machine learning community.
In the third part, we motivate the need for adopting responsible AI principles when developing and deploying large language models (LLMs) and other generative AI models, and provide a roadmap for thinking about responsible AI for generative AI in practice. We provide a brief technical overview of text and image generation models, highlight the key responsible AI desiderata associated with these models, and describe the technical considerations and challenges in realizing those desiderata in practice.
Instructor: Krishnaram Kenthapadi is the Chief AI Officer & Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and ML monitoring platform. Previously, he was a Principal Scientist at Amazon AWS AI, where he led the fairness, explainability, privacy, and model understanding initiatives in the Amazon AI platform. Prior to joining Amazon, he led similar efforts at the LinkedIn AI team and served as LinkedIn’s representative in Microsoft’s AI and Ethics in Engineering and Research (AETHER) Advisory Board. Before that, he was a Researcher at Microsoft Research Silicon Valley Lab. Krishnaram received his Ph.D. in Computer Science from Stanford University in 2006. He serves regularly on the senior program committees of FAccT, KDD, WWW, WSDM, and related conferences, and co-chaired the 2014 ACM Symposium on Computing for Development. His work has been recognized through awards at NAACL, WWW, SODA, CIKM, the ICML AutoML workshop, and Microsoft’s AI/ML conference (MLADS). He has published 50+ papers with 4500+ citations, and filed 150+ patents (70 granted). He has presented tutorials on privacy, fairness, explainable AI, ML model monitoring, responsible AI, and trustworthy generative AI at forums such as KDD (’18, ’19, ’22, ’23), WSDM (’19), WWW (’19, ’20, ’21, ’23), FAccT (’20, ’21, ’22, ’23), AAAI (’20, ’21), and ICML (’21, ’23), and has instructed a course on AI at Stanford.
Course Assistant: Farnaz “Naz” Ghaedipour is a Postdoctoral Scholar at the Department of Management Science and Engineering at Stanford University. Her research lies at the intersection of work, technology, and organizations, with a particular emphasis on understanding the platform economy, AI-driven technology, and the future of work. She employs both qualitative and quantitative research methods to understand how people adopt and integrate advanced technology into their workflow. Her dissertation explored the use of algorithms and persuasive technology design in managing and directing Instagram content creators. Currently, she is collecting data on the adoption and integration of generative AI technology in various occupational groups. She frequently presents her research at the Academy of Management’s annual conference. Prior to her postdoctoral appointment, she completed her PhD in business, an MBA, and a B.Sc. in Engineering. In her free time, she enjoys Muay Thai, yoga, podcasts, and audiobooks.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES)
Sara Hajian, Francesco Bonchi, and Carlos Castillo, Algorithmic bias: From discrimination discovery to fairness-aware data mining, KDD Tutorial, 2016.
Solon Barocas and Moritz Hardt, Fairness in machine learning, NeurIPS Tutorial, 2017.
Kate Crawford, The Trouble with Bias, NeurIPS Keynote, 2017.
Arvind Narayanan, 21 fairness definitions and their politics, FAccT Tutorial, 2018.
Sam Corbett-Davies and Sharad Goel, Defining and Designing Fair Algorithms, Tutorials at EC 2018 and ICML 2018.
Ben Hutchinson and Margaret Mitchell, Translation Tutorial: A History of Quantitative Fairness in Testing, FAccT Tutorial, 2019.
Henriette Cramer, Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miroslav Dudík, Hanna Wallach, Sravana Reddy, and Jean Garcia-Gathright, Translation Tutorial: Challenges of incorporating algorithmic fairness into industry practice, FAccT Tutorial, 2019.
Sarah Bird, Ben Hutchinson, Krishnaram Kenthapadi, Emre Kiciman, and Margaret Mitchell, Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned, Tutorials at WSDM 2019, WWW 2019, and KDD 2019.
Krishna Gade, Sahin Cem Geyik, Krishnaram Kenthapadi, Varun Mithal, and Ankur Taly, Explainable AI in Industry, Tutorials at KDD 2019, FAccT 2020, and WWW 2020.
Freddy Lecue, Krishna Gade, Fosca Giannotti, Sahin Geyik, Riccardo Guidotti, Krishnaram Kenthapadi, Pasquale Minervini, Varun Mithal, and Ankur Taly, Explainable AI: Foundations, Industrial Applications, Practical Challenges, and Lessons Learned, AAAI 2020 Tutorial.
Himabindu Lakkaraju, Julius Adebayo, and Sameer Singh, Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities, Tutorials at NeurIPS 2020 and AAAI 2021.
Freddy Lecue, Pasquale Minervini, Fosca Giannotti and Riccardo Guidotti, On Explainable AI: From Theory to Motivation, Industrial Applications and Coding Practices, AAAI 2021 Tutorial.
Kamalika Chaudhuri and Anand D. Sarwate, Differentially Private Machine Learning: Theory, Algorithms, and Applications, NeurIPS 2017 Tutorial.
Krishnaram Kenthapadi, Ilya Mironov, and Abhradeep Guha Thakurta, Privacy-preserving Data Mining in Industry, Tutorials at KDD 2018, WSDM 2019, and WWW 2019.
Krishnaram Kenthapadi, Ben Packer, Mehrnoosh Sameki, and Nashlie Sephus, Responsible AI in Industry, Tutorials at AAAI 2021, FAccT 2021, WWW 2021, and ICML 2021.
Krishnaram Kenthapadi, Himabindu Lakkaraju, Pradeep Natarajan, and Mehrnoosh Sameki, Model Monitoring in Practice, Tutorials at FAccT 2022, KDD 2022, and WWW 2023.
Krishnaram Kenthapadi, Himabindu Lakkaraju, and Nazneen Rajani, Generative AI meets Responsible AI, Tutorials at FAccT 2023, ICML 2023, and KDD 2023.