This page contains the most up-to-date schedule for HAII, taught by Prof. Iris Howley during the Fall 2022 semester. You can find materials from previous versions of the course at other institutions on the Alternatives page, or previous versions of this course here.
Note: For this 2022 schedule I've reused the links for the Conference Sections from Spring 2021. The activities were mostly the same, except: "Breakout" was changed to "Activity", the Upcoming slides were updated with appropriate deadlines & reminders to students leading discussions, and many of the post-it slides that had two sections on them were split out into separate slides to accommodate the entire class of 20 students (rather than the several groups of 8 in the remote versions).
Below is a listing of some of the (approximate) regular deadlines for this class, mostly for the release of new modules, which include the readings, Lectures 1 & 2, and Comprehension Quizzes.
Thursdays
Discussion Section Class (finish up the previous discussion; student-led discussion of the readings)
Posted: New module with Readings, Lecture/Listening/Watching #1 & #2, and Comprehension Quiz
Due: The student pair assigned to lead that module's discussion does so in class
Fridays
Due: Assignments and pass/fail check-ins (if assigned)
Tuesdays
Due: Comprehension Quiz
Conference Sections meet (small group activities)
Posted: Assignments (if assigned)
Wednesdays
Due: Discussion Outline from student pair assigned to lead that module's discussion
Note: For the first week of class, Module 0 & Module 1 happen simultaneously!
Learning goals: (1) Find where course materials are in order to be successful in this course. (2) Identify where in the Syllabus to find answers to course logistics questions.
Part 1
READ: Syllabus
COMPREHENSION QUIZ: Syllabus
Learning goals: (1) Explain why Human-AI interaction is important. (2) Connect Human-AI Interaction to our lives & communities. (3) Summarize the machine learning "pipeline"
Part 2
READ: Shneiderman & Maes (1997). "Direct Manipulation vs. Interface Agents," ACM Interactions.
Optional Reading: Bigham (2019) "The Coming AI Autumn," Personal Blog.
ASSIGNMENT 1 released (A1 Check-in)
Learning goals: (1) Summarize AI history in order to better explain AI/ML's current trajectory. (2) Compare/Contrast AI to Intelligence Augmentation (IA).
READ: Horvitz (1999). "Principles of Mixed-Initiative User Interfaces," Proceedings of ACM CHI Conference.
READ: Licklider (1960). "Man-Computer Symbiosis," IRE Transactions on Human Factors in Electronics.
WATCH Lecture 1: History of AI
Optional Podcast: Judge John Hodgman Podcast, "Are machine guns robots?" (2010).
COMPREHENSION QUIZ
ASSIGNMENT 1 CHECK-IN due
Learning goals: (1) Describe the pros & cons of the user-centered design process as it applies to AI/ML. (2) Apply the user-centered design process to building AI/ML technologies.
READ: Cristina (2017). "The Designer's Guide to Machine Learning," Digitalist company blog.
READ: Amershi (2019). "Guidelines for Human-AI Interaction," Proceedings of ACM CHI Conference.
READ: Colyer (2019). "Software Engineering for Machine Learning: A Case Study," The Morning Paper [personal blog].
Optional Reading: Amershi (2019). "Software Engineering for Machine Learning: A Case Study" [full article], International Conference on Software Engineering, Software Engineering in Practice track (ICSE-SEIP).
Optional Reading: Kay (2015). "How good is 85%? A survey tool to connect classifier evaluation to acceptability of accuracy," Proceedings of ACM CHI Conference.
WATCH Lecture 1: Matchmaking AI/ML
Worksheets: Mental Models & Stakes
WATCH Lecture 2: Radical AI's "Checklists and Principles and Values Oh My! Practices for Co-Designing Ethical Tech with Michael Madaio" (2020).
Optional Lecture: Google I/O 2019. "Designing Human-Centered AI Products," (2019).
COMPREHENSION QUIZ
Worksheets: Co-Adaptation & Heuristics
ASSIGNMENT 1 due
Learning goals: (1) Describe and anticipate common problems/failures occurring at the User Experience and AI/ML overlap. (2) Incorporate user feedback into the design of UX of AI/ML systems. (3) Apply service design methods for designing user experiences in AI/ML systems.
READ: Zimmerman et al (2021). "UX Designers pushing AI in the enterprise: A case for adaptive UIs," ACM Interactions.
READ: Kocielnik et al (2019). "Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems," Proceedings of ACM CHI Conference.
READ: Transit (2019). "Can we make Montreal’s buses more predictable? No. But machines can," Transit company blog [medium].
Optional Reading: Cai (2019). ""Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making," Proceedings of ACM Conference on Computer-Supported Cooperative Work (CSCW).
WATCH Lecture 2: Avoiding AI/ML Failures with People
Optional Lecture: Joy Buolamwini's "AI, Ain't I a Woman," (2018).
COMPREHENSION QUIZ
Worksheets: Feedback
ASSIGNMENT 2 released (A2 Check-in)
October 10 & 11 - Reading Period (no class) - Module 5 is Thursday (Micro-ideation) + Tuesday + Thursday
In the real world, data for the problems you care about won't usually be prepackaged into nice, already-existing datasets; you'll have to create new datasets, and most of the data you'll be interested in comes from people. This week is about where data comes from, and how you can go about finding, collecting, and managing data. We'll pay special attention to the humans involved.
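The readings below touch on CAPTCHAs and Mechanical Turk as sources of human-generated data. One recurring chore when collecting labels from people is reconciling disagreement between annotators; here is a toy sketch (not part of the course materials) of the simplest approach, majority vote, with entirely made-up items and labels:

```python
# Toy sketch: majority-vote aggregation of noisy crowd labels.
from collections import Counter

# worker labels per item: item_id -> labels from different annotators (fabricated)
crowd_labels = {
    "img_1": ["cat", "cat", "dog"],
    "img_2": ["dog", "dog", "dog"],
    "img_3": ["cat", "bird", "cat"],
}

def majority_vote(labels):
    """Return the most common label and the fraction of workers who agreed."""
    (label, count), = Counter(labels).most_common(1)
    return label, count / len(labels)

for item, labels in crowd_labels.items():
    label, agreement = majority_vote(labels)
    print(f"{item}: {label} (agreement={agreement:.2f})")
```

Low agreement scores are a cue that an item is ambiguous and may need more annotators, clearer instructions, or removal from the dataset.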
READ: Etlinger & Groopman (2015). "The Trust Imperative: A Framework for Ethical Data Use," Altimeter Market Definition Report.
READ: O'Malley (2018). "Captcha if you can: how you’ve been training AI for years without realising it," TechRadar.
READ: Madrigal (2012). "Why Google Maps Is Better Than Apple Maps," The Atlantic.
READ: Bigham et al (2014). "Human-computer interaction and collective intelligence," Collective Intelligence Handbook.
READ: Naylor (2021). "Underpaid Workers Are Being Forced to Train Biased AI on Mechanical Turk," Motherboard by Vice.
Optional Reading: Moore (2021). "If Big Tech has our data, why are targeted ads so terrible?," Financial Times.
WATCH Lecture 2: People & Public Data for AI
Optional Lecture: Radical AI's "Data as Protest: Data for BLM with Yeshi Milner," (2020).
WATCH Lecture 3: Collecting human data for AI/ML
Optional Lecture: Microsoft Research's "Reinforcement learning in Minecraft: Challenges & opportunities in multiplayer games," (2021).
COMPREHENSION QUIZ
ASSIGNMENT 2 CHECK-IN due
Learning Goals: (1) Apply data visualization principles to the design of complex data visualizations. (2) Explain how AI can be used to improve the ML modeling user experience. (3) Implement info viz design principles to visualize complex data.
READ: Cai et al (2019). "Human-centered tools for coping with imperfect algorithms during medical decision-making," Proceedings of ACM CHI Conference.
READ: Kay et al (2016). "When (ish) is my bus? user-centered visualizations of uncertainty in everyday, mobile predictive systems," Proceedings of ACM CHI Conference.
Optional Reading: Strobelt et al (2018). "Seq2Seq-Vis: A visual debugging tool for sequence-to-sequence models," IEEE Transactions on Visualization and Computer Graphics.
Optional Reading: Samek et al (2017). "Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models."
WATCH Lecture 1: Visualization to improve HAII
Optional Lecture: MSR "Data Visualization: Bridging the Gap Between Users & Information," (2020).
Optional Podcast: Data Stories "Visualizing Uncertainty with Jessica Hullman and Matthew Kay," (2019).
WATCH Lecture 2: Info Viz for Users of AI/ML Systems
Optional Podcast: Data Stories "Human-Driven Machine Learning with Saleema Amershi," (2018).
COMPREHENSION QUIZ
ASSIGNMENT 3 released (A3 Check-in)/ Assignment 2 due
AI systems are being deployed in increasingly diverse and complicated situations, and the machine learning models underlying these systems are often incredibly difficult to understand. How can we build AI systems that allow us to explore why the algorithm is doing what it's doing?
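As a companion to that question, here is a minimal, hypothetical sketch (not the course's code) of the local-surrogate idea behind tools like LIME, from the optional Ribeiro et al. reading below: perturb an input, query the black-box model, and fit a proximity-weighted linear model whose coefficients act as an explanation of that single prediction. The black-box function and all numbers are invented for illustration.

```python
# Hypothetical local-surrogate explanation (LIME-style), invented for illustration.
import numpy as np
from sklearn.linear_model import Ridge

def black_box(X):
    # Stand-in for an opaque model: a nonlinear function of three features.
    return (2 * X[:, 0] + np.sin(X[:, 1]) > 1).astype(float)

rng = np.random.default_rng(0)
x = np.array([0.8, 0.1, 0.5])  # the single instance we want to explain

# Sample perturbations near x and weight them by proximity to x.
Z = x + rng.normal(scale=0.3, size=(500, 3))
weights = np.exp(-np.sum((Z - x) ** 2, axis=1))

# Fit a simple weighted linear model to mimic the black box locally;
# its coefficients approximate each feature's local influence.
surrogate = Ridge(alpha=1.0).fit(Z, black_box(Z), sample_weight=weights)
print(surrogate.coef_)  # feature 2, unused by black_box, should be near 0
```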
READ: Weld & Bansal (2019) "The challenge of crafting intelligible intelligence," Communications of the ACM.
READ: Lim, Yang, & Wang (2019). "Why these Explanations? Selecting Intelligibility Types for Explanation Goals," Conference on Intelligent User Interfaces Workshops.
Optional Reading: Lipton (2018). "The mythos of model interpretability," Queue.
Optional Reading: Ribeiro et al (2016). ""Why should I trust you?" Explaining the predictions of any classifier," Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
WATCH Lecture 1: Interpreting & Explaining Algorithms
Optional Lecture: Jenn Wortman Vaughan "Transparency and Intelligibility Throughout the Machine Learning Life Cycle," (2020).
COMPREHENSION QUIZ
Ethics are the moral principles that govern our behavior in our lives. The ethics by which we choose to develop technologies (including AI technologies) can determine in very real ways how those technologies will benefit (or not) the humans who provide the data to train models, who use the systems we develop directly, or who may face the consequences of AI systems deployed for them or around them.

Furthermore, despite our best efforts, all datasets have bias. In some especially bad cases, data might be biased against a marginalized subpopulation (e.g., racial bias, gender bias, etc.), but sometimes bias can be harder to spot. Once we recognize that our data will have bias, we can work toward datasets that limit undesirable bias and seek to mitigate potential negative effects of remaining bias in the models we create. As we build models with data that make their way into user interfaces, we will work toward understanding how people might appropriately decide when and how to trust the underlying models that result.
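As one tiny, concrete illustration (a made-up example, not from the course materials), comparing a model's positive-prediction rate across subgroups is a first, imperfect screen for the kind of disparate impact the readings below discuss:

```python
# Made-up example: demographic parity check across two groups.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # fabricated model decisions
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in np.unique(group):
    rate = preds[group == g].mean()
    print(f"group {g}: positive rate = {rate:.2f}")
# A large gap between groups is one (imperfect) signal of disparate impact.
```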
READ: Hill (2021). "Your face is not your own," The New York Times.
Optional Reading: Hartzog (2018). "Facial Recognition Is the Perfect Tool for Oppression," Medium blog.
Optional Reading: Crawford (2021). "Time to regulate AI that interprets human emotions," Nature.
READ: Hao (2021). "How Facebook got addicted to spreading misinformation," MIT Technology Review.
Optional Reading: Hoffmann (2017). "Facebook doesn't need a chief ethics officer," Slate.
Optional Reading: Naughton (2021). "Google might ask questions about AI ethics, but it doesn't want answers," The Guardian.
READ: Green & Chen (2019). "Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments," Conference on Fairness, Accountability, and Transparency.
Optional Reading: Winner (1980). "Do artifacts have politics?" Daedalus.
Optional Reading: Kallus & Zhou (2018). "Residual unfairness in fair machine learning from prejudiced data," International Conference on Machine Learning.
Optional Reading: Caruana et al (2015). "Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission," ACM SIGKDD international conference on knowledge discovery and data mining.
WATCH Lecture 1: AI Ethics, Fairness, Bias
Optional Lecture: Alex Hanna "Data, Transparency, and AI Ethics (Ethics of AI in Context)," (2020).
WATCH Lecture 2: Kate Crawford's 2017 NeurIPS Keynote, "The Trouble with Bias" (2017).
Optional Lecture: Ruha Benjamin's 2021 ACM SIGCHI Keynote, "Which Humans? Innovation, Equity, and Imagination in Human-Centered Design" (2021).
COMPREHENSION QUIZ
In machine learning classes, we sometimes treat datasets as fixed. In the real world, data continues to grow, and a big challenge is thus how to continue to incorporate new data into models as it arrives. This is challenging because we need continued access to ground-truth labels, we don't want to accidentally make performance worse, and most machine learning algorithms don't allow new examples to be easily added without retraining the whole model. Additionally, both humans and machines struggle to complete many types of tasks well, and oftentimes they have complementary strengths. Constructing fruitful human-machine partnerships is thus promising in a lot of hard domains, but non-trivial in most of them. Humans can struggle to understand and fix the sometimes nonsensical errors made by AI, and rigid AI systems can struggle to incorporate the decontextualized and slow input of people.
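To make the retraining point concrete, here is a minimal sketch, assuming scikit-learn is available, of incremental updates via partial_fit, which some (but far from all) estimators support; the data is synthetic and the model choice is arbitrary:

```python
# Minimal sketch of incremental learning with scikit-learn's partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)

# Initial batch of labeled data. `classes` must be declared up front,
# since the model never sees the whole dataset at once.
X0 = rng.normal(size=(100, 5))
y0 = (X0[:, 0] > 0).astype(int)
clf.partial_fit(X0, y0, classes=[0, 1])

# Later, as new ground-truth labels arrive, fold them in without
# retraining on the full history.
X_new = rng.normal(size=(10, 5))
y_new = (X_new[:, 0] > 0).astype(int)
clf.partial_fit(X_new, y_new)
print(clf.score(X_new, y_new))
```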
READ: Wang (2019). "Humans in the Loop: The Design of Interactive AI Systems," Stanford HAI blog.
READ: Oppenheimer (2017). "Machine learning with humans in the loop," Algorithmia blog.
READ: Huang, Chang, & Bigham (2018). "Evorus: A crowd-powered conversational assistant built to automate itself over time," Proceedings of ACM CHI Conference.
Optional Reading: Bansal et al (2019). "Updates in human-ai teams: Understanding and addressing the performance/compatibility tradeoff," Proceedings of AAAI Conference on Artificial Intelligence.
Optional Reading: Zhou, Valentine, & Bernstein (2018). "In search of the dream team: temporally constrained multi-armed bandits for identifying effective team structures," Proceedings of ACM CHI Conference.
WATCH Lecture 2: Humans in the Loop for Human-AI Partnerships
Optional Lecture: StitchFix "Machine Learning with Humans in the Loop," (2017).
COMPREHENSION QUIZ
ASSIGNMENT 4 released (A4 Check-in)/ Assignment 3 due
November 24 - Thanksgiving Break (no class) - Discussion sections moved to Tuesdays, Conference Sections on Thursdays
Chatbots, dialog systems, and conversational agents are being used by people at a huge scale. Twitter bots are interacting with unsuspecting people, companies continue to rely on increasingly sophisticated dialog systems to reduce costs, and listening devices are entering our homes in the form of speech-controlled devices. How are these systems built, what are their limitations, and what might they look like in the future?
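For a sense of how the earliest of these systems were built, here is a toy, ELIZA-style pattern-matching responder in the spirit of the optional Weizenbaum (1966) reading below; the rules are invented, and modern neural dialog systems are vastly more complex, but the match-transform-respond loop is the same shape:

```python
# Toy ELIZA-style responder; rules are invented for illustration.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please, tell me more."),
]

def respond(utterance: str) -> str:
    """Return the response template of the first matching rule."""
    text = utterance.lower().strip()
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Go on."  # unreachable given the catch-all rule, kept for safety

print(respond("I feel anxious about AI"))  # -> Why do you feel anxious about ai?
```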
READ: Miller (2015). "Mind: How to Build a Neural Network (Part One)," Personal website.
Optional Lecture: Rohrer "Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)," (2017).
READ: Bender et al (2021). "On the dangers of stochastic parrots: Can language models be too big?" Proceedings of the ACM Conference on Fairness, Accountability, and Transparency.
READ: Seering et al (2019). "Beyond dyadic interactions: considering chatbots as community members," Proceedings of ACM CHI Conference.
Optional Reading: Druga et al (2017). ""Hey Google is it OK if I eat you?" Initial Explorations in Child-Agent Interaction," Proceedings of ACM Conference on Interaction Design and Children.
Optional Reading: Weizenbaum (1966). "ELIZA—a computer program for the study of natural language communication between man and machine," Communications of the ACM.
Optional Reading: Porcheron et al (2018). "Voice interfaces in everyday life," Proceedings of ACM CHI Conference.
Optional Reading: Newton (2016). "Speak, Memory: When her best friend died, she rebuilt him using artificial intelligence," The Verge.
WATCH Lecture 2: Radical AI's "The Power of Linguistics: Unpacking Natural Language Processing Ethics with Emily Bender," (can start at 13:23) (2020).
Optional Lecture: MSR "Failures of imagination: Discovering and measuring harms in language technologies," (2021).
COMPREHENSION QUIZ
Computer vision has long been a dream of computer science, and here we explore how applications of computer vision to human-facing topics such as facial recognition impact society. In addition to identifying objects in images, machines have been making great strides in producing images creatively. In this module, we discuss all things human, machine, and image.
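As a taste of the machinery behind the CNN readings below, here is a small, self-contained sketch (not from the course materials) of the sliding-kernel operation that convolutional layers are built from; the image and kernel values are arbitrary toy numbers:

```python
# Hand-written 2D convolution (cross-correlation), the core op in CNNs.
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode sliding-window dot products of kernel against image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)  # crude vertical-edge detector
print(convolve2d(image, edge_kernel))
```

A CNN learns the kernel values rather than hand-picking them, and stacks many such layers, but each one is doing this same sliding-window computation.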
READ: Bowman (2021). "Slick Tom Cruise Deepfakes Signal That Near Flawless Forgeries May Be Here," NPR.
READ: Andrews (2020). "Using AI to Detect Seemingly Perfect Deep-Fake Videos," Stanford HAI Blog.
READ: Saha (2018). "A Comprehensive Guide to Convolutional Neural Networks — the ELI5 way," Towards Data Science Blog.
Optional Reading: Powell (2018). "Image Kernels Explained Visually," Setosa Visual Explanations.
READ: Machine's Creativity (2019). "Artificial Art: How GANs are making machines creative," Heartbeat by Fritz.ai Medium.
Optional Reading: Hui (2018). "GAN — Some cool applications of GAN," Jonathan Hui Personal Blog.
Optional Reading: Google Developers (2019). "Crash Course on GANs," Google.
WATCH Lecture 1: AI & Images (Processing & Generation)
Optional Lecture: Rohrer "How Convolutional Neural Networks Work," (2016).
Optional Lecture: Serrano "A Friendly Introduction to GANs," (2020).
WATCH Lecture 2: Timnit Gebru's "Computer vision in practice: who is benefiting and who is being harmed?" (2020).
Optional Lectures: Parts 2 & 3 of the FATE/CV 2020 workshop
COMPREHENSION QUIZ
FINAL PROJECT released (Project Check-in; Project Pitch)/ Assignment 4 due
Algorithms that predict user preferences are everywhere, delighting users with recommendations in which they might be interested (a relevant book, a similar movie, ...) and online advertisers with increased relevant traffic to their products. How do they work, how can they be gamed, and what do recommendation algorithms do to people? More broadly, what impact can we expect AI (in general) to have on people and the world?
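To ground the "how do they work" question, here is a toy sketch (not from the course readings) of item-based collaborative filtering, one classic recommendation approach: score an unseen item for a user by how similar it is to items they already rated. The ratings matrix is fabricated:

```python
# Toy item-based collaborative filtering over a fabricated ratings matrix.
import numpy as np

# rows = users, columns = items; 0 means "not rated"
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Item-item cosine similarity computed over the rating columns.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)

def predict(user, item):
    """Similarity-weighted average of the user's existing ratings."""
    rated = R[user] > 0
    weights = sim[item, rated]
    return weights @ R[user, rated] / weights.sum()

# Score item 2 for user 0, who hasn't rated it yet.
print(round(predict(0, 2), 2))
```

Note how everything here is driven by behavioral similarity: the same mechanism that surfaces a relevant book can also keep recommending ever-more-extreme content, which is exactly the dynamic the Tufekci reading examines.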
READ: Tufekci (2018). "YouTube, the Great Radicalizer," The New York Times.
Optional Reading: Naughton (2018). "Extremism pays. That’s why Silicon Valley isn’t shutting it down," The Guardian.
Optional Reading: Covington et al (2016). "Deep neural networks for Youtube recommendations," Proceedings of the ACM Conference on Recommender Systems.
Optional Reading: Nguyen et al (2014). "Exploring the filter bubble: the effect of using recommender systems on content diversity," Proceedings of the ACM Conference on the World Wide Web.
Optional Reading: Willemsen et al (2016). "Understanding the role of latent feature diversification on choice difficulty and satisfaction," Journal of User Modeling and User-adapted Interaction.
Optional Reading: Koebler & Cox (2018). "The Impossible Job: Inside Facebook's Struggle to Moderate Two Billion People," Motherboard by Vice.
Content Warning: Mention of suicide, eating disorder, anti-semitism, and sexual acts. Contact Iris if you'd like an alternate reading.
READ: Citron (2021). "Fix Section 230 and Hold Tech Companies to Account," Wired UK.
Optional Reading: Kelly (2021). "Congress is way behind on algorithmic misinformation," The Verge.
READ: Vincent (2021). "Google is Poisoning Its Reputation with AI Researchers," The Verge.
Optional Reading: Mickle (2021). "Google Plans to Double Its AI Ethics Research Staff," The Wall Street Journal.
Optional Reading: Field (2021). ""What on Earth are they doing?": AI ethics experts react to Google doubling embattled ethics team," Emerging Tech Brew.
WATCH Lecture 1: Recommender Systems + Continuing in HAII
Optional Podcast: NYTimes "Rabbit Hole: What is the internet doing to us?" (2020).
Optional Podcast: Radical AI's "What is AI for social good?" (2020).
WATCH Lecture 2: AI and the World + Course Wrap-Up
Optional Podcast: Radical AI's "Industry AI Ethics 101 with Kathy Baxter," (2020).
COMPREHENSION QUIZ
FINAL PROJECT (Check-in + Pitch + Submission) Due
Student Course Survey Forms Due