This is a dumping ground for lots of resources that could be useful for teaching HAII. Emphasis on could.
Stommel (2018). "How to Ungrade," Personal blog.
Calarco (2020). Teaching Equitably, Twitter. (Example Syllabus is very good)
Darby (2020). "The Secret Weapon of Good Online Teaching: Discussion Forums."
Lee & Hu (CRA-WP). "Professors in a Pandemic: Tips and Tricks for Teaching Online."
Wray (2011). "RISE Model" for giving peer feedback.
"CATME" for forming teams.
Topaz (2020). "MATH200 Syllabus."
Politz (2020). "Reflections on Emergency Remote Teaching," UCSD CSE.
Loom (for recording one-on-one video feedback)
Computer Vision FATE Workshop talks
Particularly: Gebru (2020). "FATE/CV 2020 | Computer vision in practice: who is benefiting and who is being harmed?"
Iris' Human-AI Interaction Playlist on YouTube has many options
Landay (2019). "Smart Interfaces for Human-Centered AI: HCII Special Seminar." (and associated blog post)
Jen Wortman Vaughan webinar: https://www.microsoft.com/en-us/research/video/transparency-and-intelligibility-throughout-the-machine-learning-life-cycle/
How to make a difference in tech: Engelberg Center Live (podcast) "The Leak."
Perer (2016). "Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models."
Shneiderman. "Educational Efforts on AI and HCI" (a list of AI+HCI courses).
Kulkarni & Kery (2019). "Human-AI Interaction Class," CMU. And the Bigham & Seering version (Fall 2018)
Williams (Spring 2021). "COSC494: Human-AI Interaction," UTennessee.
Blodgett, Handler, Keith (2018). "Ethical Issues Surrounding Artificial Intelligence Systems and Big Data."
Carney & Callaghan (2021). "Designing Machine Learning: A Multidisciplinary Approach."
Keliher (2015). "Design Fictions Class," CMU.
Cosley (2020). "Intelligent User Interfaces class," Cornell.
Bigham. The Coming AI Autumn (blog).
Rachel Tatman's Twitter thread of Fairness, Accountability, and Transparency research papers.
Frank Pasquale's Second Wave of Algorithmic Accountability.
Evan Peck's Ethical Reflection Modules for CS1
Papers
Friedman (2021). "Why Is Facebook Rejecting These Fashion Ads?" NYTimes.
Murad (2021). "The computers rejecting your job applications," BBC.
Schwab (2021). "‘This is bigger than just Timnit’: How Google tried to silence a critic and ignited a movement," Fast Company.
Wodinsky (2020). "Anonymized Data is Meaningless Bullshit," Gizmodo.
Paullada et al. (2020). "Data and its (dis)contents: A survey of dataset development and use in machine learning research," NeurIPS workshop.
Tuckey et al. (2020). "A general framework for scientifically inspired explanations in AI."
Mohamed et al. (2020). "Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence" (email from Faris).
Friedman & Nissenbaum (1996). "Bias in Computer Systems."
AI Now Institute "How to interview a tech company"
Bass, Banjo & Bergen (2020). "Google’s Co-Head of Ethical AI Says She Was Fired for Email," Bloomberg.
The emails (from Casey Newton)
Schiffer (2020). "Google illegally spied on workers before firing them, US labor board alleges," The Verge.
Tweets
Overfitting - Roverfitting
GANs - GANs "Aha!" moment
Viz - Are your summary statistics hiding something interesting? (see the sketch after this list)
Viz - Data viz principles
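The "summary statistics" tweet is making the classic Anscombe's-quartet point: very different datasets can share nearly identical descriptive statistics. A minimal Python sketch of that idea (not from any of the linked resources; it just hardcodes two of Anscombe's quartet datasets):

# Demo of how summary statistics can hide structure (Anscombe's quartet, sets I and II).
import statistics

x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]                                # shared x values
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]  # roughly linear
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]   # a clean parabola

for name, y in [("I", y1), ("II", y2)]:
    print(f"Dataset {name}: mean={statistics.mean(y):.2f}, "
          f"stdev={statistics.stdev(y):.2f}, "
          f"corr(x, y)={statistics.correlation(x, y):.3f}")  # correlation() needs Python 3.10+

# Both datasets print mean 7.50, stdev 2.03, corr 0.816, yet they look nothing alike
# when plotted -- which is the point of the tweet: always plot your data.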
Examples
Explainability: The Pain Was Unbearable. So Why Did Doctors Turn Her Away?
Google Teachable Machine.
(2020) "The Guardian view on A-level algorithms: failing the test of fairness," The Guardian.
“Recreating Historical Streetscapes Using Deep Learning and Crowdsourcing”
Google's "Black-owned Businesses Near You" feature used to target Black-owned businesses with negative reviews.
The best approach for finding out about these events appears to be following certain Twitter accounts: AINowInstitute, AJLUnited, Black_in_AI, Women_AI_Ethics, mtlaiethics, UofTEthics, TeachAccess, TechWorkersCoalition, as well as Timnit Gebru, Joy Buolamwini, Ruha Benjamin, Rediet Abebe, Deb Raji, Alex Hanna, Safiya Noble, Kate Crawford, Meredith Whittaker, Emily Bender, Sasha Costanza-Chock, Meredith Broussard, Alberto Cairo, Jeffrey Bigham, Nazanin Andalibi, Zeynep Tufekci, among others.
Montreal AI Ethics Institute has quarterly "State of AI Ethics" panels (like this one from December 2020)
Ruha Benjamin posts her talk schedule
Women in AI Ethics occasionally has a summit or other event
Some relevant MSR talks, viewable on demand: "In pursuit of responsible AI: Bringing principles to practice", "Transparency & Intelligibility Throughout the ML Life Cycle", "Fairness-related harms in AI systems: Examples, assessment, and mitigation".
"Coded Bias" documentary frequently has free film screening [online] tickets available
Accompanying text: If you've seen The Social Dilemma on Netflix, then you should definitely read these critiques [1] and [2]. While The Social Dilemma makes great strides in introducing the harms of social media to a mass audience, it largely ignores the voices that have been doing the heavy lifting of examining, publicizing, and rectifying those harms, and instead centers the majority voices that built their wealth by introducing the problematic technologies in the first place. Coded Bias is highly recommended for a more well-rounded view of technological harms.
Machine Learning for Good (open source course). Website.
@Williams:
CSCI375. Natural Language Processing (may not be offered soon...)
CSCI378. Human-AI Interaction (this course!)
Browse relevant Williams College Science & Technology Studies Courses. Here are some!
"How does this help people not like us?"
End-of-semester: https://gather.town/
During orientation yesterday I asked my students this: "What is your favorite thing you have ever written? Why is it your favorite?" Twitter friends, how would you answer?
Scrimba: Python tutorial | JavaScript tutorial
w3schools: Python tutorial | JavaScript tutorial