Howley 2020/1 largely follows the course structure of Kulkarni & Kery (2019), with some added content from Bigham & Seering (2018), but is taught fully remotely. Many of the research papers assigned in Kulkarni & Kery (2019) were replaced with readings from Bigham & Seering (2018) to make the course more approachable for undergraduates. Lectures from previous versions were condensed from two lectures down to one video lecture (likely because in-class discussion previously took place in lecture rather than in a conference section). Additionally, Howley 2020/1 runs on a 12-week semester, whereas the 2018 and 2019 courses ran on a 14-week semester.
This page documents links to previous course materials that informed the development of Howley 2020/1. More specifics from previous versions can be found on those course websites: Kulkarni & Kery (2019) and Bigham & Seering (2018). The most up-to-date version of the CMU HAII class is available at http://humanaiclass.org
While not an exact match, Blodgett, Handler, and Keith (2018), "Ethical Issues Surrounding Artificial Intelligence Systems and Big Data," also contains a wealth of relevant links and topics, about half of which appear in HAII, mostly as topics covered in lecture.
In Howley 2020/1, lectures are released twice per week (typically Thursday and Monday). The slides for Howley 2020/1 are available on the Schedule page. Some alternative lecture slides are available from other instructors:
0.1 Course Introduction
1.1 Intro to HAII
2.1 A History of AI
2.2 History of Humans Interacting with AI and AI vs. IA
3.1 Matchmaking Needs and Risks for Adding AI/ML
Bigham & Seering (2018): How does ML surface in UX?
4.1 Failure & Feedback with Users
5.1 Why would people give you data anyway? Data ethics and laws
5.2 Human-centric data in an ML pipeline (wait, should it even be a pipeline??)
Bigham & Seering (2018): Crowdsourcing Data Collection
6.1 Visualizations to improve human-AI interaction
6.2 Visualization for ML
Kulkarni & Kery (2019): Guest lecture by Adam Perer, Visualization in AI
7.1 Interpreting and Explaining Algorithms
7.2 How does telling people how an algorithm works change their experience?
8.2 AI Ethics, Fairness, Social Acceptability, and Trust
Kulkarni & Kery (2019): Confidence is not Accuracy
Kulkarni & Kery (2019): Ethics Theories
Bigham & Seering (2018): Data, Bias, and Trust
Bigham & Seering (2018): Data, Ethics, and AI
Bigham & Seering (2018): Ethics of AI, Examples + AI Cameras
9.1 Human in the loop with AI/ML & Recommendations
11.1 Natural Language & Speech Applications
12.1 Vision, Images & Art
Kulkarni & Kery (2019): Computer Vision
Kulkarni & Kery (2019): CNNs
12.3 Generating Images (GANs)
In this class, we will have weekly readings (2-3 per week), twice-weekly pre-recorded lectures, weekly comprehension quizzes, weekly synchronous class meetings, weekly engagement activities, and regular homework assignments with pass/fail check-ins (every 1-3 weeks). These are all mapped out in detail on the Schedule page.
Howley 2021 homeworks are available below:
Assignment 1: The Machine Learning Development Cycle (mid-point check-in)
Assignment 2: Designing AI/ML Systems (mid-point check-in)
Assignment 3: Info Viz for Predictive Data (using speed dating data) (mid-point check-in)
Assignment 4: Chatbots (using fastai version 2) (mid-point check-in)
Final Project: Choose one of 4 options (mid-point check-in; Project Pitch)
The following assignment was used in Howley 2020 but was folded into the Final Project options for Howley 2021:
Assignment 5: Generating Images (mid-point check-in) --> See the Final Project for an updated version of this assignment, which uses only Google Colab (the Jupyter notebook here was too complex) and adds the mid-point check-in.
Some alternative assignments are available from other instructors:
Kulkarni & Kery (2019) Assignment 3: Info Viz for Predictive Data (using CDC death data)
Kulkarni & Kery (2019) Assignment 4: Chatbots (using fastai version 1)
Bigham & Seering (2018) Assignment 0: AI in your World
Bigham & Seering (2018) Assignment 1: The Machine Learning Development Cycle (different data, questions)
Bigham & Seering (2018) Assignment 2: Gesture Recognition
Bigham & Seering (2018) Assignment 3: Image Labeling & Biases
Bigham & Seering (2018) Assignment 4: Chatbots
Bigham & Seering (2018) Assignment 5: Recommender Systems
Where content from previous courses has been dropped, the materials are available below:
8.1 Computer Ethics
Reading: Medical devices: the Therac-25 by Nancy Leveson
(This is the only reading for the week because (a) it is somewhat longer, and (b) I want you to think about it carefully.)
[Warning: this reading contains graphic, though textual, descriptions of the results of accidental radiation exposure during clinical therapy.]
This reading is ostensibly not about AI, but it allows you to draw many parallels to our AI space. As you read the paper, consider these questions:
What parallels can you draw from the reading to the design of human-AI systems? (For instance, the Tyler incident was caused by a race condition: a hard-to-find bug that results from the exact timing of operations, which is largely determined by chance and therefore hard to inspect ahead of time; see the sketch after these questions. AI systems can similarly rely on things that are hard to inspect ahead of time.)
What are the roles that people played in this story in making the error, diagnosing it, and fixing it?
Who are the heroes of this story? Who are the villains? Is it useful to think of them this way?
What roles did users play in this story?
Was the response from the manufacturer ethical? What about the government regulatory agencies, and the attending doctors?
If something similar were to happen today with an AI-infused system, what would you expect to go differently?
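To make the race condition idea concrete, here is a minimal, hypothetical Python sketch (it has nothing to do with the actual Therac-25 software, which was written in assembly) of how the result of two unsynchronized threads can depend on scheduling timing:

    import threading

    # A shared counter updated without a lock. The read-modify-write
    # sequence below is not atomic, so the two threads can interleave
    # and overwrite each other's updates.
    counter = 0

    def increment_many(n):
        global counter
        for _ in range(n):
            current = counter      # read
            counter = current + 1  # write; the other thread may have run in between

    threads = [threading.Thread(target=increment_many, args=(100_000,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # We "expect" 200000, but the printed value typically falls short and
    # varies from run to run, depending on how the OS scheduled the threads.
    print(counter)

As with the Therac-25, the bug only manifests under particular timings, so the same code can pass testing many times before failing in the field.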
IML.1 Interactive Machine Learning
Kapoor, A., Lee, B., Tan, D., and Horvitz, E. 2010. Interactive optimization for steering machine classification. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1343-1352.
Fiebrink, R., Trueman, D., and Cook, P. R. A meta-instrument for interactive, on-the-fly machine learning. NIME 2009.
HRI.1 Human Robot Interaction
No slides, Guest speaker Henny Admoni
Masahiro Mori. The Uncanny Valley. June 2012.
Leyzberg et al. The Physical Presence of a Robot Tutor Increases Cognitive Learning Gains.
Tapus et al. Socially Assistive Robotics. March 2007.
Wainer et al. Embodiment and Human-Robot Interaction: A Task-Based Perspective. 2007.
This topic was briefly addressed in the first half of Howley 2021 Module 12.1. More resources from the full module:
Goal: None listed.
RM.1 Introduction to Recommender Systems
No slides, Grading Assignment 4
Jack Nicas. How YouTube Drives People to the Internet’s Darkest Corners. The Wall Street Journal.
Paul Covington, Jay Adams, Emre Sargin. “Deep Neural Networks for YouTube Recommendations.”
Zeynep Tufekci. YouTube, the Great Radicalizer. The New York Times.
Martijn C. Willemsen, Mark P. Graus, and Bart P. Knijnenburg. Understanding the role of latent feature diversification on choice difficulty and satisfaction. User Modeling and User-Adapted Interaction (UMUAI 2016).
(Grad) Ricci, F., Rokach, L., & Shapira, B. (2015). Recommender systems: introduction and challenges. In Recommender systems handbook (pp. 1-35). Springer, Boston, MA.
RM.2 Effects of Recommender Systems on People
No slides, Guest Speaker Michael Ekstrand
“Extremism pays. That’s why Silicon Valley isn’t shutting it down.” The Guardian.
Nguyen, T. T., Hui, P. M., Harper, F. M., Terveen, L., & Konstan, J. A. (2014, April). Exploring the filter bubble: the effect of using recommender systems on content diversity. In Proceedings of the 23rd International Conference on World Wide Web (pp. 677-686). ACM.
This topic was just barely touched upon in Howley 2021 Lecture 12.1.
Goal: The rise of the social web was once heralded as a liberating, democratizing force that would only lead to positive changes in the world. In the past few years, we’ve learned that social web technologies are amplifiers, and they can amplify both good and bad aspects of humanity. What is the role of AI in helping users deal with content? This week, we’ll cover how people are working with AI to moderate content and how content algorithms drive engagement and seek to influence human behavior, and we’ll survey the technology tasked with identifying hateful, fake, or otherwise harmful online content (and the very real difficulty of solving these problems automatically).
CM.1 The Role of AI in Content Moderation
No slides
Jason Koebler and Joseph Cox. The Impossible Job: Inside Facebook’s Struggle
Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 1461444818773059.
Joseph Seering, Tony Wang, Jina Yoon, and Geoff Kaufman. Moderator Engagement and Community Development in the Age of Algorithms. New Media & Society.
CM.2 Content Moderation by Amplifying Human Moderators
No slides, Student Panel
Gillespie, “Improving Moderation.”
“The Moderators” - https://www.youtube.com/watch?v=k9m0axUDpro (CONTAINS SOME GRAPHIC CONTENT. If you would prefer an alternate reading, please contact the instructor).
Goal: None listed.
AHW.1 Jobs? (This topic was folded into Howley 2021 Lecture 12.2)
No slides, Grading Assignment 5
President Barack Obama on How Artificial Intelligence Will Affect Jobs | WIRED
Nilsson, N. J. (1984). Artificial intelligence, employment, and income. AI Magazine, 5(2), 5.
(Grad) Brynjolfsson, E., Rock, D., & Syverson, C. (2017). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics. In Economics of Artificial Intelligence. University of Chicago Press.
AHW.2 How to make AI better for humans, and how not to break the world
No slides, Instructor Discussion/Response