This page contains the out-of-date schedule of topics from the Fall 2020 semester. A more up-to-date version is available on the Main Schedule page.
Below is a listing of the (approximate) regular deadlines for this class, mostly for the release of new modules, each of which includes the readings, Lectures 1 & 2, Engagement Activities, and Comprehension Checkpoints.
Thursdays
Posted: New module with: Reading & Lecture/Listening/Watching #1
Due: Previous Module Engagement Activity (Discussion forum post)
Sundays
Due: Discussion forum responses to peers & Assignments (if assigned)
Mondays
Posted: Lecture/Listening/Watching #2
Posted: Checkpoint, Engagement Activities
Tuesdays/Wednesdays
Due: Comprehension Checkpoint
Recitation/Conference Sections meet
Posted: Assignments (if assigned)
Note: For the first week of class, Module 0 & Module 1 happen simultaneously!
Learning goals: (1) Find where course materials are in order to be successful in this course. (2) Identify where in the Syllabus to find answers to questions about course logistics.
Part 1
Start of Semester Survey
READ: Syllabus
READ: How To Sign-up for Student Help Hours
CHECKPOINT: Syllabus
Learning goals: (1) Explain why Human-AI interaction is important. (2) Connect Human-AI Interaction to our lives & communities. (3) Summarize the machine learning "pipeline"
Part 2
READ: Shneiderman & Maes "Direct Manipulation vs. Interface Agents."
Optional Reading: Bigham, "The Coming AI Autumn."
Engagement Activity: Share & Tell: Find an AI technology and describe its potential impact
For Module #1's Engagement Activity:
Post a link to an artificial intelligence technology not yet discussed in course materials.
Summarize what the tech is in your own words.
State how it will positively/negatively impact society.
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts!
Having trouble finding an AI technology to talk about? Ars Technica, Wired, ProPublica, The Markup, and other popular news outlets' tech sections are all good places to start.
ASSIGNMENT 1 released
Learning goals: (1) Summarize AI history in order to better explain AI/ML's current trajectory. (2) Compare/Contrast AI to Intelligence Augmentation (IA).
READ: Horvitz "Principles of Mixed-Initiative User Interfaces"
READ: Licklider "Man-Computer Symbiosis"
WATCH Lecture 1: History of AI
Optional Lecture: Judge John Hodgman Podcast, "Are machine guns robots?"
MODULE CHECKPOINT
Engagement Activity: Direct Manipulation → Intelligence Augmentation
For Module #2's Engagement Activity:
Briefly describe a technology that you use that would be considered an example of direct manipulation. Include links, screenshots, etc. as needed.
Why is this direct manipulation?
Afterward, respond to at least two peers' Discussion Forum posts from this week's [Module #2] Engagement Activity, by:
Suggesting & describing an interface agent or intelligence augmentation for the original direct manipulation interface described in the originating post. Be sure to include examples (e.g., videos, links) of comparable technology, if any! It can be a technology you invented on your own, or one that already exists.
Describing whether this suggested addition aligns more with Maes's, Shneiderman's, Horvitz's, or Licklider's views.
Learning goals: (1) Describe the pros & cons of the user-centered design process as it applies to AI/ML. (2) Apply the user-centered design process to building AI/ML technologies.
READ: Amershi "Guidelines for Human-AI Interaction."
READ: Amershi "Software Engineering for Machine Learning: A Case Study"
Optional Reading: Kay "How good is 85%? A survey tool to connect classifier evaluation to acceptability of accuracy."
WATCH Lecture 1: Matchmaking AI/ML
Worksheets: Mental Models & Stakes
WATCH Lecture 2: Radical AI's "Checklists and Principles and Values Oh My! Practices for Co-Designing Ethical Tech with Michael Madaio."
Optional Lecture: Google I/O 2019 "Designing Human-Centered AI Products"
MODULE CHECKPOINT
Worksheets: Co-Adaptation & Heuristics
ASSIGNMENT 2 released / Assignment 1 due
Engagement Activity: SE for ML
For Module #3's Engagement Activity:
Imagine you are a software engineer on a team building the latest & greatest in interactive AI: HikeFinder, which, in conversation with the user, will identify the best hike fitting that user's preferences. It relies on a generic natural language processing model of English from one of your company's other projects, and a newly developed language model for describing hiking treks.
One of your newest teammates, Taylor, makes the following claim: "Software teams should organize themselves similarly to their project's architecture, so it's a good thing that the English language model we have was built by a separate team. This way we can add on the hiking model using proper software interfaces for our libraries to work together."
Do you agree or disagree with Taylor's statement? Use evidence from readings in the course to support your position. (For concreteness, a rough sketch of the kind of composition Taylor describes appears below this prompt.)
Afterward, respond to at least two peers' Discussion Forum posts from this week's [Module #3] Engagement Activity by Sunday.
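To make the scenario above concrete, here is a minimal, hypothetical sketch of the kind of composition Taylor is advocating: two separately built models hidden behind a shared interface. All class and method names (LanguageModel, EnglishModel, HikeModel) are invented for this prompt, not part of any real library.

```python
# Hypothetical sketch only: composing two separately built models behind a
# shared interface. Every name here is invented for this discussion prompt.
from abc import ABC, abstractmethod

class LanguageModel(ABC):
    """The 'proper software interface' Taylor has in mind."""
    @abstractmethod
    def parse(self, utterance: str) -> dict:
        """Return a structured interpretation of the utterance."""

class EnglishModel(LanguageModel):
    """Generic English understanding, built by the other team."""
    def parse(self, utterance: str) -> dict:
        return {"intent": "find_hike", "text": utterance}  # stub

class HikeModel(LanguageModel):
    """Domain-specific model of hiking-trek descriptions."""
    def parse(self, utterance: str) -> dict:
        return {"difficulty": "moderate", "length_km": 10}  # stub

class HikeFinder:
    """Composes the two models; knows nothing about their internals."""
    def __init__(self, general: LanguageModel, domain: LanguageModel):
        self.general, self.domain = general, domain

    def recommend(self, utterance: str) -> dict:
        intent = self.general.parse(utterance)
        prefs = self.domain.parse(utterance)
        return {**intent, **prefs}

print(HikeFinder(EnglishModel(), HikeModel()).recommend("an easy day hike"))
```

Whether such a clean software boundary actually holds for ML components, which entangle data, features, and learned behavior, is exactly what the readings ask you to weigh.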
Learning goals: (1) Describe and anticipate common problems/failures occurring at the intersection of user experience and AI/ML. (2) Incorporate user feedback into the UX design of AI/ML systems. (3) Apply service design methods for designing user experiences in AI/ML systems.
READ: Kocielnik "Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems."
READ: Cai "'Hello AI': Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making."
READ: Transit "Can we make Montreal’s buses more predictable? No. But machines can."
WATCH Lecture 2: Janelle Shane's "Machine learning mistakes - for art."
Optional Lecture: Joy Buolamwini's "AI, Ain't I a Woman"
MODULE CHECKPOINT
Worksheets: Feedback
Engagement Activity: Designing for Failure
For Module #4's Engagement Activity:
Post a link to an AI/ML + user experience failure not yet discussed in course materials.
Summarize the technology: is it Critical or Complementary? Proactive or Reactive? Visible or Invisible?
Describe what the failure is in your own words.
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts! In your responses, you will propose how the designers of the technology might have mitigated the failure during onboarding, during the failure itself, or after the failure.
Having trouble finding an AI fail to talk about? Ars Technica, Wired, ProPublica, The Markup, and other popular news outlets' tech sections are all good places to start.
October 12 & 13 - Reading Period (no class) - Module 5 is Thursday + Monday + Thursday
Optional Conference Section (Design Workshop)
In the real world, data for the problems you care about won’t usually be prepackaged into nice, already-existing datasets -- you’ll have to create new datasets, and most of the data you’ll be interested in comes from people. This week is about where data comes from, and how you can go about finding, collecting, and managing data. We’ll pay special attention to the humans involved.
READ: Etlinger "The Trust Imperative: A Framework for Ethical Data Use."
READ: O'Malley "Captcha if you can: how you’ve been training AI for years without realising it."
READ: Madrigal "Why Google Maps Is Better Than Apple Maps"
READ: Bigham "Human-computer interaction and collective intelligence."
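A recurring micro-task when collecting data from people (see the Bigham reading above) is aggregating redundant labels from multiple contributors into a single label. A minimal sketch of simple majority voting, with made-up labels:

```python
# Minimal sketch: aggregate redundant crowd labels by majority vote.
# The example labels are made up for illustration.
from collections import Counter

crowd_labels = {
    "img_001": ["cat", "cat", "dog"],
    "img_002": ["dog", "dog", "dog"],
    "img_003": ["cat", "dog", "bird"],  # no clear majority
}

for item, labels in crowd_labels.items():
    label, count = Counter(labels).most_common(1)[0]
    agreement = count / len(labels)
    flag = "" if agreement > 0.5 else "  <- low agreement, maybe re-collect"
    print(f"{item}: {label} (agreement {agreement:.0%}){flag}")
```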
WATCH Lecture 2: People & Data for AI
Optional Lecture: Radical AI's "Data as Protest: Data for BLM with Yeshi Milner"
MODULE CHECKPOINT
Engagement Activity: Crowdsourcing in the Wild
For Module #5's Engagement Activity:
Answer: Is Facebook directed, collaborative, or passive crowdsourcing? Consider user browsing behavior, video watching, likes, comments, clicks, posts, photo uploads, etc.
Explain your reasoning with evidence from this week's readings & lectures.
By Sunday night, check this forum again to leave a counterpoint to at least 2 of your peers' posts! In your responses, make sure to disagree (respectfully!) with your peers and provide counter-evidence!
Learning Goals: (1) Apply data visualization principles to the design of complex data visualizations. (2) Explain how AI can be used to improve the ML modeling user experience. (3) Implement information visualization design principles to visualize complex data.
READ: Cai "Human-centered tools for coping with imperfect algorithms during medical decision-making."
READ: Kay "When (ish) is my bus? User-centered visualizations of uncertainty in everyday, mobile predictive systems." (a small sketch of this paper's key idea follows the optional readings below)
Optional Reading: Strobelt "Seq2Seq-Vis: A visual debugging tool for sequence-to-sequence models."
Optional Reading: Samek "Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models."
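Kay's bus paper argues for showing a prediction's uncertainty as a small set of equally likely outcomes (a "quantile dotplot") rather than a single point estimate. A rough matplotlib sketch of the idea; the lognormal arrival-time distribution here is made up for illustration:

```python
# Rough sketch of a quantile dotplot (Kay et al.'s "When (ish) is my bus?").
# The lognormal arrival-time distribution is made up for illustration.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Pretend the model predicts bus arrival time (minutes) as a lognormal.
dist = stats.lognorm(s=0.4, scale=8)

# Represent the distribution as 20 equally likely quantiles...
quantiles = dist.ppf((np.arange(20) + 0.5) / 20)
# ...then round each to a 1-minute bin and stack dots within bins.
bins = np.round(quantiles)
ys = [np.sum(bins[:i] == b) for i, b in enumerate(bins)]

plt.scatter(bins, ys, s=200)
plt.xlabel("Predicted arrival (minutes)")
plt.yticks([])
plt.title("Each dot = a 1-in-20 chance")
plt.show()
```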
WATCH Lecture 1: Visualization to improve HAII
Optional Podcast: Data Stories "Visualizing Uncertainty with Jessica Hullman and Matthew Kay"
WATCH Lecture 2: Adam Perer's "What is the role of visualization in prediction?" & "Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models"
Optional Podcast: Data Stories "Human-Driven Machine Learning with Saleema Amershi"
MODULE CHECKPOINT
ASSIGNMENT 3 released / Assignment 2 due
Engagement Activity: Visualizing Complex Information in the Real World
For Module #6's Engagement Activity, we're going to get an opportunity to interact with a lot of information visualizations with the following tasks:
Select and share the link to a visualization from Alberto Cairo's blog, The Functional Art, that visualizes complex information (ideally predictive in nature, such as from a probabilistic or ML model of some sort); you may have to scroll through a few pages to find a good one! You may also find that the link Professor Cairo provides in the blog post is more appropriate for the purposes of this activity. Try not to pick the first visualization you find; it'd be nice to have some variety in the discussions!
Summarize the content you think the visualization is portraying (i.e., what are the main takeaways).
Explain why you selected this visualization to share.
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts! In your responses, you will (1) visit your peer's selected visualization, (2) describe how well the visualization conveys its main takeaways, and (3) explain how the visual features and perceptual tasks leveraged by the visualization help it convey information effectively (these are concepts from lecture).
AI systems are being deployed in increasingly diverse and complicated situations, and the machine learning models underlying these systems are often incredibly difficult to understand. How can we build AI systems that allow us to explore why the algorithm is doing what it's doing?
READ: Weld "The challenge of crafting intelligible intelligence."
READ: Lim "Why these Explanations? Selecting Intelligibility Types for Explanation Goals."
Optional Reading: Lipton "The mythos of model interpretability."
Optional Reading: Ribeiro "'Why should I trust you?' Explaining the predictions of any classifier."
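For a hands-on taste of post-hoc explanation in the spirit of Ribeiro's LIME paper above, here is a minimal sketch using the open-source `lime` package (`pip install lime`); the dataset and model choices are ours, purely for illustration:

```python
# Minimal sketch of post-hoc explanation with the open-source `lime` package.
# The dataset (iris) and model (random forest) are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode="classification",
)

# Explain one prediction: which features pushed the model toward its answer?
exp = explainer.explain_instance(iris.data[0], model.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```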
WATCH Lecture 1: Interpreting & Explaining Algorithms
Optional Lecture: Jenn Wortman Vaughan "Transparency and Intelligibility Throughout the Machine Learning Life Cycle"
MODULE CHECKPOINT
Engagement Activity: AI Explainability for User Reasoning
For Module #7's Engagement Activity:
Select an interactive visualization article from the IEEE VISxAI workshop on visualization for AI Explainability (look through the whole website; there are visualizations from past years' workshops, too!): https://visxai.io/
Try not to pick the first explainable you find; it'd be nice to have some variety in the discussions!
Summarize what the interactive article / explainable is and why you selected it.
What is an example of each of the types of knowledge the interactive article is explaining: (a) a fact, (b) a rule, and (c) a rationale?
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts! In your responses, you will visit your peer's selected interactive article, choose one of the interactive visualizations therein, and describe which Intelligibility Type it represents, and how that promotes the designers' desired reasoning from viewers.
Ethics are the moral principles that govern our behavior. The ethics by which we choose to develop technologies (including AI technologies) can determine in very real ways how those technologies will benefit (or not) the humans who provide the data to train models, who use the systems we develop directly, or who may face the consequences of AI systems deployed for or around them. Furthermore, despite our best efforts, all datasets have bias. In some especially bad cases, data might be biased against a marginalized subpopulation (e.g., racial or gender bias), but sometimes bias can be harder to spot. Once we recognize that our data will have bias, we can work toward datasets that limit undesirable bias and seek to mitigate the potential negative effects of remaining bias in the models we create. As we build models with data that make it into user interfaces, we will work toward understanding how people might appropriately decide when and how to trust the underlying models that result.
READ: Hartzog "Facial Recognition Is the Perfect Tool for Oppression"
READ: Hoffmann "Facebook doesn't need a chief ethics officer"
READ: Green "Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments."
Optional Reading: Winner "Do artifacts have politics?"
Optional Reading: Kallus "Residual unfairness in fair machine learning from prejudiced data."
Optional Reading: Caruana "Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission."
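Several of these readings (Green's risk-assessment study, Kallus's residual unfairness paper) hinge on comparing a model's behavior across groups. A toy sketch of one such check, the demographic-parity gap; all numbers are fabricated purely to show the arithmetic:

```python
# Toy sketch of a group fairness check: the demographic-parity gap is the
# difference in positive-prediction rates across groups. All numbers below
# are fabricated for illustration only.
import numpy as np

group = np.array(["A"] * 6 + ["B"] * 6)
pred  = np.array([1, 1, 1, 0, 1, 0,   # group A: 4/6 approved
                  1, 0, 0, 0, 1, 0])  # group B: 2/6 approved

rate_a = pred[group == "A"].mean()
rate_b = pred[group == "B"].mean()
print(f"P(approve | A) = {rate_a:.2f}, P(approve | B) = {rate_b:.2f}")
print(f"demographic-parity gap = {abs(rate_a - rate_b):.2f}")
# A nonzero gap isn't automatically unfair (and a zero gap isn't automatically
# fair); which metric matters is a value judgment the readings dig into.
```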
WATCH Lecture 1: AI Ethics, Fairness, Bias
Optional Lecture: Alex Hanna "Data, Transparency, and AI Ethics (Ethics of AI in Context)"
WATCH Lecture 2: Kate Crawford NeurIPS Keynote 2017 "The Trouble with Bias"
MODULE CHECKPOINT
Engagement Activity: AI Personal Ethics
For Module #8's Engagement Activity: Consider each of the six tasks below:
Implement a GPS system for export to an autocratic state where it will be used to keep track of political dissidents.
Modify financial software so that selected bank accounts cannot be traced by tax authorities.
Install an on-street facial recognition system that can identify people who should be self-isolating following travel to a COVID high-risk country.
Develop a system that helps deter bots by showing images to users - but also uses the user responses to train an AI system without informing the user.
Develop an ML system that calculates and assigns credit scores for a bank, which optimizes profit but is likely to deny loans to the poor.
Develop a system that assigns a "unionization risk score" for a big retail chain by creating an interactive heat map that monitors warehouse workers.
Imagine your employer is asking you to complete each task. For each task, state which option most reflects your stance (and why):
A. Quite happy to do it
B. Reluctant, but would do it
C. Object to doing it and ask for an alternative task - but do it if I must
D. Resign from my job rather than do it
E. Resign from the job and launch a public protest campaign
What rules are you using to make your decisions?
(Some version of many of these systems exists in some form: 1, 2, 3, 4, 5, 6)
-=-=-=-=-=-=-
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts! In your responses, you will select at least 2 of the tasks on which you disagree with the peer, and try to persuade them to your perspective.
(Many thanks to Abeba Birhane for this activity)
In machine learning classes, we sometimes treat datasets as fixed. In the real world, data continues to grow, and a big challenge is thus how to keep incorporating new data into models as it accumulates. This is challenging because we need continued access to ground-truth labels, we don't want to accidentally make performance worse, and most machine learning algorithms don't allow new examples to be easily added without retraining the whole model. Additionally, both humans and machines struggle to complete many types of tasks well, and oftentimes they have complementary strengths. Constructing fruitful human-machine partnerships is thus promising in a lot of hard domains, but non-trivial in most of them. Humans can struggle to understand and fix the sometimes nonsensical errors made by AI, and rigid AI systems can struggle to incorporate the decontextualized and slow input of people.
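A few learners are exceptions to the retrain-everything problem described above: for example, scikit-learn estimators that expose `partial_fit` can fold in new labeled examples as they arrive. A minimal sketch on synthetic data:

```python
# Minimal sketch of incremental ("out-of-core") learning with scikit-learn's
# partial_fit, one of the few interfaces that lets new examples be added
# without retraining from scratch. The data is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

# Simulate labeled data arriving in weekly batches (e.g., from human raters).
for week in range(5):
    X = rng.normal(size=(100, 3)) + np.array([0.5, 0, 0])
    y = (X[:, 0] > 0.5).astype(int)
    model.partial_fit(X, y, classes=classes)  # classes required on first call
    print(f"week {week}: accuracy on this batch = {model.score(X, y):.2f}")
```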
READ: Wang "Humans in the Loop: The Design of Interactive AI Systems"
READ: Oppenheimer "Machine learning with humans in the loop"
READ: Huang "Evorus: A crowd-powered conversational assistant built to automate itself over time."
WATCH Lecture 2: Humans in the Loop for Human-AI Partnerships
Optional Lecture: StitchFix "Machine Learning with Humans in the Loop"
MODULE CHECKPOINT
ASSIGNMENT 4 released / Assignment 3 due
Engagement Activity: Humans in the Real World Loop
For Module #9's Engagement Activity:
Summarize a Human-in-the-Loop system we haven't covered in this module (this may be tricky; consider the apps you use that incorporate AI, follow citations in articles we've read, or visit ProPublica, The Markup, Ars Technica, Wired, and other popular news outlets' tech sections for possible ideas).
Would you describe it as a human-in-the-loop system or an algorithm-in-the-loop system?
Identify the roles that the human and the AI agents are fulfilling.
By Monday night, check this forum again to leave a response to at least 1 of your peers' posts! In your responses, you will do one of two options:
If you feel the original poster missed a role of the AI agent or human agent in their described system, suggest that role and explain why you think it exists.
Otherwise, describe how the system might be flipped from algorithm-in-the-loop to human-in-the-loop (or vice versa), and describe what the new human & AI roles would be.
People are beginning to use and interact with chatbots, dialog systems, and conversational agents at a huge scale. Twitter bots are interacting with unsuspecting people, companies continue to rely on increasingly sophisticated dialog systems to reduce costs, and listening devices are entering our homes in the form of speech-controlled devices. How are these systems built, what are their limitations, and what might they look like in the future?
READ: Miller "Mind: How to Build a Neural Network (Part One)"
Optional Lecture: Rohrer "Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)"
READ: Seering "Beyond dyadic interactions: considering chatbots as community members."
READ: Druga "'Hey Google is it OK if I eat you?' Initial Explorations in Child-Agent Interaction."
Optional Reading: Weizenbaum "ELIZA—a computer program for the study of natural language communication between man and machine."
Optional Reading: Porcheron "Voice interfaces in everyday life."
Optional Reading: Newton "Speak, Memory, When her best friend died, she rebuilt him using artificial intelligence"
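To appreciate how far modern conversational agents have come, it helps to see how little machinery Weizenbaum's ELIZA (optional reading above) needed. A tiny pattern-matching chatbot in ELIZA's spirit; these few rules are our own abridged illustration, not Weizenbaum's original DOCTOR script:

```python
# A tiny chatbot in the spirit of Weizenbaum's ELIZA: match regex patterns
# and reflect the user's words back. These rules are an abridged illustration,
# not the original DOCTOR script.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # catch-all fallback
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I feel anxious about the checkpoint"))
# -> Why do you feel anxious about the checkpoint?
```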
WATCH Lecture 2: Radical AI's "The Power of Linguistics: Unpacking Natural Language Processing Ethics with Emily Bender"
MODULE CHECKPOINT
Engagement Activity: Optional: Share one AI/ML technology that you are thankful for!
Module 11 is a Monday + Thursday + Monday due to break!
Computer vision has long been a dream of computer science, and here we explore how applications of computer vision to human-facing topics such as facial recognition impact society. In addition to identifying objects in images, machines have been making great strides in producing images creatively. In this module, we discuss all things human, machine, and image.
READ: Andrews "Using AI to Detect Seemingly Perfect Deep-Fake Videos"
READ: Saha "A Comprehensive Guide to Convolutional Neural Networks — the ELI5 way"
Optional Reading: Powell "Image Kernels Explained Visually"
READ: Machine's Creativity "Artificial Art: How GANs are making machines creative"
READ: Hui "GAN — Some cool applications of GAN"
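Powell's kernel explainer and Saha's CNN guide both center on one operation: sliding a small kernel over an image. A minimal numpy/scipy sketch with a classic edge-detection kernel; the tiny "image" is made up:

```python
# Minimal sketch of the convolution at the heart of CNNs: slide a 3x3 kernel
# over an image. The tiny "image" (a bright square) is made up.
import numpy as np
from scipy.signal import convolve2d

image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0  # a bright square on a dark background

# Classic edge-detection kernel: responds where intensity changes.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

edges = convolve2d(image, kernel, mode="same")
print(np.round(edges, 1))  # large values trace the square's outline
```

In a CNN, the network learns the kernel weights from data instead of using hand-picked values like these.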
WATCH Lecture 1: AI & Image Processing
Optional Lecture: Rohrer "How Convolutional Neural Networks Work"
WATCH Lecture 2: Timnit Gebru's "Computer vision in practice: who is benefiting and who is being harmed?"
Optional Lectures: Parts 2 & 3 of the FATE/CV 2020 workshop
WATCH Lecture 3: AI-generated Imagery
Optional Podcast: Radical AI "Industry AI Ethics 101 with Kathy Baxter"
Optional Lecture: Serrano "A Friendly Introduction to GANs"
Optional Reading: Google Developers "Crash Course on GANs"
MODULE CHECKPOINT
ASSIGNMENT 5 released / Assignment 4 due
Engagement Activity: AI Can See Clearly Now
For Module #11's Engagement Activity:
Find a publicly available algorithmic image generation technology (likely a website). There are a few options shared in Lecture 11.3, but feel free to find additional ones!
Use it to construct an image that is meaningful to you (or at the very least, interesting to you).
Do a little research on the technology to try to figure out what the algorithm is (sometimes websites are completely open about it; sometimes they obfuscate it!).
Share the link to the technology, a description of what the algorithmic approach might be, and a screenshot of the image you created, along with an explanation of why it's important to you.
By Sunday night, check this forum again to leave a response to at least 1 of your peers' posts! In your responses, you will:
Use the original poster's artwork as a starting point to inspire/generate a new image of your own, using the same image generation technology (the website).
Share the image you created.
Comment on whether you agree with the original poster's assessment of the algorithm being used (might require a little research).
Algorithms that predict user preferences are everywhere, delighting users with recommendations in which they might be interested (a relevant book, a similar movie, ...) and online advertisers with increased relevant traffic to their products. How do they work, how can they be gamed, and what do recommendation algorithms do to people?
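Before reading about what recommenders do to people, it's worth seeing how simple the core mechanism can be. A toy item-item collaborative-filtering sketch; the rating matrix is made up for illustration:

```python
# Toy item-item collaborative filtering: recommend items similar (by cosine
# similarity of rating columns) to what a user already liked. Ratings are
# made up; 0 means "unrated".
import numpy as np

# rows = users, columns = items A..D
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)  # item-item cosine similarity

user = R[0]                   # user 0 rated items A, B, D
scores = sim @ user           # weight item similarities by the user's ratings
scores[user > 0] = -np.inf    # don't re-recommend already-rated items
print("recommend item index:", int(np.argmax(scores)))
```

Real systems (like the YouTube recommender in the optional Covington reading) replace this similarity step with learned embeddings and deep networks, but the shape of the problem, predicting unobserved preferences from observed ones, is the same.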
READ: Tufekci "YouTube, the Great Radicalizer."
READ: Naughton "Extremism pays. That’s why Silicon Valley isn’t shutting it down"
Optional Reading: Covington "Deep neural networks for YouTube recommendations."
Optional Reading: Nguyen "Exploring the filter bubble: the effect of using recommender systems on content diversity."
Optional Reading: Willemsen "Understanding the role of latent feature diversification on choice difficulty and satisfaction."
WATCH Lecture 1: Recommender Systems
Optional Podcast: NYTimes "Rabbit Hole: What is the internet doing to us?"
Optional Lecture: Radical AI "What is AI for social good?"
MODULE CHECKPOINT
Assignment 5 due
Engagement Activity: None! It's the end of the semester!