This page contains the most up-to-date schedule for HAII, taught by Prof. Iris Howley during the Spring 2021 semester. You can find materials from versions of the course taught at other institutions on the Alternatives page, or previous versions of this course here.
Below is a listing of some of the (approximate) regular deadlines for this class, mostly for the release of new modules, each of which includes the readings, Lectures 1 & 2, Engagement Activities, and Comprehension Quizzes.
Thursdays
Posted: New module, with Reading & Lecture/Listening/Watching #1
Due: Previous Module Engagement Activity (Discussion forum post)
Sundays
Due: Discussion forum responses to peers
Due: Assignments and pass/fail check-ins (if assigned)
Mondays
Posted: Lecture/Listening/Watching #2
Posted: Comprehension Quiz, Engagement Activities
Tuesdays/Wednesdays
Due: Comprehension Quiz
Conference Sections meet (like an 8-person Recitation)
Posted: Assignments (if assigned)
Note: For the first week of class, Module 0 & Module 1 happen simultaneously!
Learning goals: (1) Find where course materials are in order to be successful in this course. (2) Identify where in the Syllabus to find answers to course logistics questions.
Part 1
READ: Syllabus
COMPREHENSION QUIZ: Syllabus
Learning goals: (1) Explain why Human-AI interaction is important. (2) Connect Human-AI Interaction to our lives & communities. (3) Summarize the machine learning "pipeline"
Part 2
READ: Shneiderman & Maes (1997). "Direct Manipulation vs. Interface Agents," ACM Interactions.
Optional Reading: Bigham (2019) "The Coming AI Autumn," Personal Blog.
Engagement Activity: Share & Tell: Find an AI technology and describe its potential impact
For Module #1's Engagement Activity:
Post a link to an artificial intelligence technology not yet discussed in course materials. Authoritative/Reliable sources only!
Summarize what the tech is in your own words.
State how it will positively/negatively impact society.
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts!
Having trouble finding an AI technology to talk about? Ars Technica, Wired, ProPublica, The Markup, and the tech sections of other popular news outlets are all good places to start looking.
ASSIGNMENT 1 released (A1 Check-in)
Learning goals: (1) Summarize AI history in order to better explain AI/ML's current trajectory. (2) Compare/Contrast AI to Intelligence Augmentation (IA).
READ: Horvitz (1999). "Principles of Mixed-Initiative User Interfaces," Proceedings of ACM CHI Conference.
READ: Licklider (1960). "Man-Computer Symbiosis," IRE Transactions on Human Factors in Electronics.
WATCH Lecture 1: History of AI
Optional Podcast: Judge John Hodgman Podcast, "Are machine guns robots?" (2010).
COMPREHENSION QUIZ
Engagement Activity: Direct Manipulation --> Intelligence Augmentation
For Module #2's Engagement Activity:
Briefly describe a technology that you use that would be considered an example of direct manipulation. Include links, screenshots, etc. as needed.
Why is this direct manipulation?
Afterward, respond to at least two peers' Discussion Forum posts from this week's [Module #2] Engagement Activity, by:
Suggesting & describing an interface agent or intelligence augmentation for the original direct manipulation interface described in the originating post. Be sure to include examples (e.g., videos, links) of comparable technology, if any! It can be a technology you invented on your own, or one that already exists.
Describing whether this suggested addition aligns more with Maes, Shneiderman, Horvitz, or Licklider's views.
ASSIGNMENT 1 CHECK-IN due
Learning goals: (1) Describe the pros & cons of the user-centered design process as it applies to AI/ML. (2) Apply the user-centered design process to building AI/ML technologies.
READ: Cristina (2017). "The Designer's Guide to Machine Learning," Digitalist company blog.
READ: Amershi (2019). "Guidelines for Human-AI Interaction," Proceedings of ACM CHI Conference.
READ: Colyer (2019). "Software Engineering for Machine Learning: A Case Study," The Morning Paper [personal blog].
Optional Reading: Amershi (2019). "Software Engineering for Machine Learning: A Case Study" [full article], ACM Conference on Software Engineering in Practice.
Optional Reading: Kay (2015). "How good is 85%? A survey tool to connect classifier evaluation to acceptability of accuracy," Proceedings of ACM CHI Conference.
WATCH Lecture 1: Matchmaking AI/ML
Worksheets: Mental Models & Stakes
WATCH Lecture 2: Radical AI's "Checklists and Principles and Values Oh My! Practices for Co-Designing Ethical Tech w Michael Madaio" (2020).
Optional Lecture: Google I/O 2019. "Designing Human-Centered AI Products," (2019).
COMPREHENSION QUIZ
Worksheets: Co-Adaptation & Heuristics
Engagement Activity: SE for ML
For Module #3's Engagement Activity:
Imagine you are a Software Engineer on a team building the latest & greatest in interactive AI, the HikeFinder, which, in conversation with the user, will identify the best hiking trail fitting that user's preferences. It relies on a generic natural language processing model of English from one of your company's other projects, and a newly developed language model for describing hiking treks.
One of your newest teammates, Taylor, makes the following claim: "Software teams should organize themselves similarly to their project's architecture, as separate modules that communicate through predetermined methods. So, it's a good thing that the English language model we have was built by a separate team. This way we can add on our new hiking language model using predetermined software interfaces for our two language libraries to work together."
Do you agree or disagree with Taylor's statement? Use evidence from readings in the course to support your position.
Afterward, respond to at least one peer's Discussion Forum post from this week's [Module #3] Engagement Activity, by Sunday. In your response, you may want to include details of how the best approach to designing HikeFinder would adapt the general ML process, or data handling.
ASSIGNMENT 1 due
Learning goals: (1) Describe and anticipate common problems/failures occurring at the User Experience and AI/ML overlap. (2) Incorporate user feedback into the design of UX of AI/ML systems. (3) Apply service design methods for designing user experiences in AI/ML systems.
READ: Zimmerman et al (2021). "UX Designers pushing AI in the enterprise: A case for adaptive UIs," ACM Interactions.
READ: Kocielnik et al (2019). "Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems," Proceedings of ACM CHI Conference.
READ: Transit (2019). "Can we make Montreal’s buses more predictable? No. But machines can," Transit company blog [medium].
Optional Reading: Cai (2019). ""Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making," Proceedings of ACM Conference on Computer-Supported Cooperative Work (CSCW).
WATCH Lecture 2: Avoiding AI/ML Failures with People
Optional Lecture: Joy Buolamwini's "AI, Ain't I a Woman," (2018).
COMPREHENSION QUIZ
Worksheets: Feedback
Engagement Activity: Designing for Failure
For Module #4's Engagement Activity:
Post a link to an AI/ML + user experience failure not yet discussed in course materials. Use a reputable/authoritative source, such as The Markup, ProPublica, Motherboard, Wired, Ars Technica, or the tech sections of other popular news outlets.
Summarize the technology.
Is it Critical or Complementary? Proactive or Reactive? Visible or Invisible?
Describe what the failure is in your own words.
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts! In your responses, you will propose how the designers of the technology might have mitigated the failure during onboarding, during the failure itself, or after the failure.
ASSIGNMENT 2 released (A2 Check-in)
March 22 & 23 - Reading Period (no class) - Module 5 is Thursday + Thursday + Monday
CONFERENCE SECTION for Wednesday Section (Finishing up Module 4 Micro-ideation)
In the real world, data for the problems you care about won’t usually be prepackaged into nice, already-existing datasets -- you’ll have to create new datasets, and most of the data you’ll be interested in comes from people. This week is about where data comes from, and how you can go about finding, collecting, and managing data. We’ll pay special attention to the humans involved.
READ: Etlinger & Groopman (2015). "The Trust Imperative: A Framework for Ethical Data Use," Altimeter Market Definition Report.
READ: O'Malley (2018). "Captcha if you can: how you’ve been training AI for years without realising it," TechRadar.
READ: Madrigal (2012). "Why Google Maps Is Better Than Apple Maps," The Atlantic.
READ: Bigham et al (2014). "Human-computer interaction and collective intelligence," Collective Intelligence Handbook.
READ: Naylor (2021). "Underpaid Workers Are Being Forced to Train Biased AI on Mechanical Turk," Motherboard by Vice.
Optional Reading: Moore (2021). "If Big Tech has our data, why are targeted ads so terrible?," Financial Times.
WATCH Lecture 2: People & Public Data for AI
Optional Lecture: Radical AI's "Data as Protest: Data for BLM with Yeshi Milner," (2020).
WATCH Lecture 3: Collecting human data for AI/ML
Optional Lecture: Microsoft Research's "Reinforcement learning in Minecraft: Challenges & opportunities in multiplayer games," (2021).
COMPREHENSION QUIZ
Engagement Activity: Crowd Sourcing in the Wild
For Module #5's Engagement Activity:
Answer: Is Facebook directed, collaborative, or passive crowd sourcing? Consider user browsing behavior, video watching, likes, comments, clicks, posts, photo uploads, etc.
Explain your reasoning with evidence from this week's readings & lectures.
By Sunday night, check this forum again to leave a comment on at least 1 of your peers' posts, providing an example of the same type of crowd sourcing on a different social media platform! (I.e., if the original poster selected "directed crowd sourcing," describe "directed crowd sourcing" you've identified on a different platform such as Reddit, TikTok, etc.) Be sure to explain, with evidence from this week's readings & lectures, why you think the features/platform you chose are the type of crowd sourcing you've identified.
ASSIGNMENT 2 CHECK-IN due
Learning Goals: (1) Apply data visualization principles to the design of complex data visualizations. (2) Explain how AI can be used to improve the ML modeling user experience. (3) Implement info viz design principles to visualize complex data.
READ: Cai et al (2019). "Human-centered tools for coping with imperfect algorithms during medical decision-making," Proceedings of ACM CHI Conference.
READ: Kay et al (2016). "When (ish) is my bus? user-centered visualizations of uncertainty in everyday, mobile predictive systems," Proceedings of ACM CHI Conference.
Optional Reading: Strobelt et al (2018). "Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models," IEEE Transactions on Visualization and Computer Graphics.
Optional Reading: Samek et al (2017). "Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models."
WATCH Lecture 1: Visualization to improve HAII
Optional Lecture: MSR "Data Visualization: Bridging the Gap Between Users & Information," (2020).
Optional Podcast: Data Stories "Visualizing Uncertainty with Jessica Hullman and Matthew Kay," (2019).
WATCH Lecture 2: Info Viz for Users of AI/ML Systems
Optional Podcast: Data Stories "Human-Driven Machine Learning with Saleema Amershi," (2018).
COMPREHENSION QUIZ
ASSIGNMENT 3 released (A3 Check-in) / Assignment 2 due
Engagement Activity: Visualizing Complex Information in the Real World
For Module #6's Engagement Activity, we'll get a chance to interact with a variety of information visualizations via the following tasks:
Select and share the link to a visualization from Alberto Cairo's blog, The Functional Art, that visualizes complex information (ideally predictive in nature, such as from a probabilistic or ML model of some sort). You may have to scroll through a few pages to find a good one! You may also find that following the link Professor Cairo provides in the blog post is more appropriate for the purposes of this activity. Do not pick the first visualization you find; it'd be nice to have some variety to talk about in the discussions!
Summarize the content you think the visualization is portraying (i.e., what are the main takeaways).
Explain why you selected this visualization to share.
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts! In your responses, you will (1) visit your peers' selected visualization, (2) describe how well the visualization conveys its main takeaways, and (3) explain how the visual features and perceptual tasks leveraged by the visualization contribute to its conveying of information effectively (these are concepts from lecture).
AI systems are being deployed in increasingly diverse and complicated situations, and the machine learning models underlying these systems are often incredibly difficult to understand. How can we build AI systems that allow us to explore why the algorithm is doing what it's doing?
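To make this concrete, below is a toy sketch (not from the course materials) of the local-surrogate idea behind LIME, from the optional Ribeiro et al (2016) reading below: perturb an input, query the black-box model, and fit a small weighted linear model whose coefficients act as a local explanation. The black-box function and feature names are invented for illustration.

```python
# Toy sketch of a LIME-style local explanation (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: its output rises with feature 0 and
    # falls with feature 2; feature 1 is ignored.
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 2])))

x = np.array([0.5, 0.1, 0.8])                        # the instance to explain
X_perturbed = x + rng.normal(0, 0.3, size=(500, 3))  # neighborhood samples
y = black_box(X_perturbed)                           # query the black box

# Weight samples by closeness to x, then fit an interpretable surrogate.
weights = np.exp(-np.sum((X_perturbed - x) ** 2, axis=1))
surrogate = Ridge(alpha=1.0).fit(X_perturbed, y, sample_weight=weights)

for name, coef in zip(["feature_0", "feature_1", "feature_2"], surrogate.coef_):
    print(f"{name}: {coef:+.3f}")  # sign & size indicate local importance
```

Running this should recover a positive weight on feature_0, a negative weight on feature_2, and a near-zero weight on feature_1, mirroring the hidden behavior of the black box.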
READ: Weld & Bansal (2019) "The challenge of crafting intelligible intelligence," Communications of the ACM.
READ: Lim, Yang, & Wang (2019). "Why these Explanations? Selecting Intelligibility Types for Explanation Goals," Conference on Intelligent User Interfaces Workshops.
Optional Reading: Lipton (2018). "The mythos of model interpretability," Queue.
Optional Reading: Ribeiro et al (2016). ""Why should I trust you?": Explaining the predictions of any classifier," Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
WATCH Lecture 1: Interpreting & Explaining Algorithms
Optional Lecture: Jenn Wortman Vaughan "Transparency and Intelligibility Throughout the Machine Learning Life Cycle," (2020).
COMPREHENSION QUIZ
Engagement Activity: AI Explainability for User Reasoning
For Module #7's Engagement Activity:
Select an interactive visualization article from the IEEE VISxAI workshop on visualization for AI Explainability (look through the whole website; there are visualizations from past years' workshops, too!): https://visxai.io/
Do not pick the first explainable you find; it'd be nice to have some variety to talk about in the discussions!
Summarize what the interactive article / explainable is and why you selected it.
What is an example of each of the types of knowledge the interactive article is explaining: (a) a fact, (b) a rule, and (c) a rationale?
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts! In your responses, you will visit your peer's selected interactive article, choose one of the interactive visualizations therein, and describe which Intelligibility Type it represents, and how that promotes the designers' desired reasoning from viewers.
Ethics are the moral principles that govern our behavior. The ethics by which we choose to develop technologies (including AI technologies) can determine in very real ways how those technologies will benefit (or not) the humans who provide the data to train models, who use the systems we develop directly, or who may face the consequences of AI systems deployed for them or around them.

Furthermore, despite our best efforts, all datasets have bias. In some especially bad cases, data might be biased against a marginalized subpopulation (e.g., racial bias, gender bias, etc.), but sometimes bias can be harder to spot. Once we recognize that our data will have bias, we can work toward datasets that limit undesirable bias and seek to mitigate potential negative effects of remaining bias in the models we create. As we build models with data that make it into user interfaces, we will work toward understanding how people might appropriately decide when and how to trust the underlying models that result.
READ: Hill (2021). "Your face is not your own," The New York Times.
Optional Reading: Hartzog (2018). "Facial Recognition Is the Perfect Tool for Oppression," Medium blog.
Optional Reading: Crawford (2021). "Time to regulate AI that interprets human emotions," Nature.
READ: Hao (2021). "How Facebook got addicted to spreading misinformation," MIT Technology Review.
Optional Reading: Hoffmann (2017). "Facebook doesn't need a chief ethics officer," Slate.
Optional Reading: Naughton (2021). "Google might ask questions about AI ethics, but it doesn't want answers," The Guardian.
READ: Green & Chen (2019). "Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments," Conference on Fairness, Accountability, and Transparency.
Optional Reading: Winner (1980). "Do artifacts have politics?" Daedalus.
Optional Reading: Kallus & Zhou (2018). "Residual unfairness in fair machine learning from prejudiced data," International Conference on Machine Learning.
Optional Reading: Caruana et al (2015). "Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission," ACM SIGKDD international conference on knowledge discovery and data mining.
WATCH Lecture 1: AI Ethics, Fairness, Bias
Optional Lecture: Alex Hanna "Data, Transparency, and AI Ethics (Ethics of AI in Context)," (2020).
WATCH Lecture 2: Kate Crawford's 2017 NeurIPS Keynote, "The Trouble with Bias" (2017).
Optional Lecture: Ruha Benjamin's 2021 ACM SIGCHI Keynote, "Which Humans? Innovation, Equity, and Imagination in Human-Centered Design" (2021).
COMPREHENSION QUIZ
Engagement Activity: AI Personal Ethics
For Module #8's Engagement Activity: Consider each of the six tasks below:
Implement a GPS system for export to an autocratic state where it will be used to keep track of political dissidents.
Modify financial software so that selected bank accounts cannot be traced by tax authorities.
Install an on-street facial recognition system that can identify people who should be self-isolating following travel to a COVID high-risk country.
Develop a system that helps deter bots by showing images to users, but also uses the user responses to train an AI system without informing the user.
Develop a ML system that calculates and assigns credit scores for a bank which optimizes profit but is likely to deny loans to the poor.
Develop a system that assigns a "unionization risk score" for a big retail chain by creating an interactive heat map that monitors warehouse workers.
Imagine your employer is asking you to complete each task. For each task, state which option most reflects your stance (and why):
A. Quite happy to do it
B. Reluctant, but would do it
C. Object to doing it and ask for alternative task - but do it if I must
D. Resign from my job rather than do it
E. Resign from the job and launch a public protest campaign
What higher-level rules/principles are you using to make your decisions?
(Some version of many of these systems exists in some form: 1, 2, 3, 4, 5, 6)
-=-=-=-=-=-=-
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts! In your responses, you will select at least 2 of the tasks on which you disagree with the peer, and try to persuade them to your perspective.
(Many thanks to Abeba Birhane for this activity)
April 21 & 22 - Health Days (no class) - Wednesday Conference is canceled, Wed & Thurs deadlines moved to Friday.
In machine learning classes, we sometimes treat datasets as fixed. In the real world, data continues to grow, and a big challenge is thus how to continue to incorporate new data into models as they grow. This is challenging because we need to have continued access to ground truth labels, don’t want to accidentally make performance worse, and most machine learning algorithms don’t allow new examples to be easily added without retraining the whole model. Additionally, both humans and machines struggle to complete many types of tasks well, and oftentimes they have complementary strengths. Constructing fruitful human-machine partnerships is thus promising in a lot of hard domains, but non-trivial in most of them. Humans can struggle to understand and fix the sometimes nonsensical errors made by AI, and rigid AI systems can struggle to incorporate the decontextualized and slow input of people.
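As a concrete illustration of the retraining problem (a minimal sketch under assumptions, not from the course materials), here is what incremental updating might look like with scikit-learn's partial_fit interface, one of the few standard APIs that accepts new examples without retraining from scratch. The weekly stream of newly labeled data is simulated.

```python
# Minimal sketch: incrementally updating a model as new labeled data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier()

classes = np.array([0, 1])  # all labels must be declared on the first call
for week in range(10):
    # Stand-in for a week of newly labeled data from users or annotators.
    X_new = rng.normal(size=(100, 5))
    y_new = (X_new[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(int)
    clf.partial_fit(X_new, y_new, classes=classes)

    # In practice you would evaluate on held-out ground truth here, since
    # incremental updates can silently degrade performance on older data.
    print(f"week {week}: accuracy on this batch = {clf.score(X_new, y_new):.2f}")
```

Most estimators offer no such interface, which is exactly why growing datasets and human-supplied labels make deployed systems hard to maintain.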
READ: Wang (2019). "Humans in the Loop: The Design of Interactive AI Systems," Stanford HAI blog.
READ: Oppenheimer (2017). "Machine learning with humans in the loop," Algorithmia blog.
READ: Huang, Chang, & Bigham (2018). "Evorus: A crowd-powered conversational assistant built to automate itself over time," Proceedings of ACM CHI Conference.
Optional Reading: Bansal et al (2019). "Updates in human-ai teams: Understanding and addressing the performance/compatibility tradeoff," Proceedings of AAAI Conference on Artificial Intelligence.
Optional Reading: Zhou, Valentine, & Bernstein (2018). "In search of the dream team: temporally constrained multi-armed bandits for identifying effective team structures," Proceedings of ACM CHI Conference.
WATCH Lecture 2: Humans in the Loop for Human-AI Partnerships
Optional Lecture: StitchFix "Machine Learning with Humans in the Loop," (2017).
COMPREHENSION QUIZ
ASSIGNMENT 4 released (A4 Check-in) / Assignment 3 due
Engagement Activity: Humans in the Real World Loop
For Module #9's Engagement Activity:
Summarize a Human-in-the-Loop system we haven't covered in this module (this may be tricky; consider the apps you use that incorporate AI, follow citations in articles we've read, or visit ProPublica, The Markup, Ars Technica, Wired, or the tech sections of other popular news outlets for possible ideas).
Would you describe it as a human-in-the-loop system or an algorithm-in-the-loop system?
Identify the roles (from lecture) that the human and the AI agents are fulfilling.
Roles: data feeders, backup, triagers, appeals judge, worker, advisor, orchestrator, questioner, manager, lifeguard
By Monday night, check this forum again to leave a response to at least 2 of your peers' posts! In your responses, you will do one of the following:
If you feel the original poster missed a role of the AI agent or human agent in their described system, suggest that role and explain why you think it exists.
Otherwise, describe how the system might be flipped from algorithm-in-the-loop to human-in-the-loop (or vice versa), and describe what the new human & AI roles would be.
Chatbots, dialog systems, and conversational agents are being used by people at a huge scale. Twitter bots are interacting with unsuspecting people, companies continue to rely on increasingly sophisticated dialog systems to reduce costs, and listening devices are entering our homes in the form of speech-controlled devices. How are these systems built, what are their limitations, and what might they look like in the future?
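As a taste of how the earliest such systems worked, below is a toy, ELIZA-style sketch in the spirit of the optional Weizenbaum (1966) reading: match keyword patterns and reflect the user's words back, with no real understanding. The rules are invented stand-ins, not Weizenbaum's actual script.

```python
# Toy ELIZA-style chatbot: pattern matching plus reflection (illustrative rules).
import re

RULES = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i am (.+)",   "How long have you been {0}?"),
    (r"my (.+)",     "Tell me more about your {0}."),
    (r"(.*)",        "Please, go on."),  # fallback when nothing matches
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I feel anxious about my quiz"))  # Why do you feel anxious about my quiz?
print(respond("My laptop keeps dying"))         # Tell me more about your laptop keeps dying.
```

The second reply shows the core limitation: the bot echoes syntax without grasping meaning, which is one reason modern systems turned to the learned language models discussed in this module.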
READ: Miller (2015). "Mind: How to Build a Neural Network (Part One)," Personal website.
Optional Lecture: Rohrer "Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)," (2017).
READ: Bender et al (2021). "On the dangers of stochastic parrots: Can language models be too big?" Proceedings of the ACM Conference on Fairness, Accountability, and Transparency.
READ: Seering et al (2019). "Beyond dyadic interactions: considering chatbots as community members," Proceedings of ACM CHI Conference.
Optional Reading: Druga et al (2017). ""Hey Google is it OK if I eat you?" Initial Explorations in Child-Agent Interaction," Proceedings of ACM Conference on Interaction Design and Children.
Optional Reading: Weizenbaum (1966). "ELIZA—a computer program for the study of natural language communication between man and machine," Communications of the ACM.
Optional Reading: Porcheron et al (2018). "Voice interfaces in everyday life," Proceedings of ACM CHI Conference.
Optional Reading: Newton (2016). "Speak, Memory: When her best friend died, she rebuilt him using artificial intelligence," The Verge.
WATCH Lecture 2: Radical AI's "The Power of Linguistics: Unpacking Natural Language Processing Ethics with Emily Bender," (can start at 13:23) (2020).
Optional Lecture: MSR "Failures of imagination: Discovering and measuring harms in language technologies," (2021).
COMPREHENSION QUIZ
Engagement Activity: TAIk to Me
For Module #10's Engagement Activity:
Interact with a chatbot! Provide a link to the chatbot (or interaction).
Some possible options: this list of Twitter bots, some Reddit bots, the ELIZA psychotherapist bot (introduced in Lecture 1-1), or other chatbots you may be aware of! Don't use the one from this week's Conference Section.
What technologies might be going on behind the scenes with this chatbot? This might require a little digging and experimentation!
Summarize your interaction with the chatbot.
What role(s) from this week's reading on Beyond Dyadic Interactions: Considering Chatbots as Community Members might this chatbot fulfill?
What benefit might it provide to communities?
What harms might it produce?
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts!
Computer vision has long been a dream of computer science, and here we explore how applications of computer vision to human-facing topics such as facial recognition impact society. In addition to identifying objects in images, machines have been making great strides in producing images creatively. In this module, we discuss all things humans, machines, and images.
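As a concrete starting point (a minimal sketch, not from the course materials), the convolution operation at the heart of this week's CNN readings can be written in a few lines: slide a small kernel across an image and sum the element-wise products at each position. The toy image and classic vertical-edge kernel below are illustrative.

```python
# Minimal sketch of the convolution used in CNNs (no padding, stride 1).
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0                     # left half dark, right half bright

kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])        # a classic vertical-edge detector

h, w = image.shape
kh, kw = kernel.shape
out = np.zeros((h - kh + 1, w - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        # Response = sum of element-wise products under the sliding window.
        out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)

print(out)  # large values only where the window straddles the dark/bright edge
```

A CNN learns the values of many such kernels from data instead of hand-designing them, stacking layers of these responses to recognize increasingly complex patterns.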
READ: Bowman (2021). "Slick Tom Cruise Deepfakes Signal That Near Flawless Forgeries May Be Here," NPR.
READ: Andrews (2020). "Using AI to Detect Seemingly Perfect Deep-Fake Videos," Stanford HAI Blog.
READ: Saha (2018). "A Comprehensive Guide to Convolutional Neural Networks — the ELI5 way," Towards Data Science Blog.
Optional Reading: Powell (2018). "Image Kernels Explained Visually," Setosa Visual Explanations.
READ: Machine's Creativity (2019). "Artificial Art: How GANs are making machines creative," Heartbeat by Fritz.ai Medium.
Optional Reading: Hui (2018). "GAN — Some cool applications of GAN," Jonathan Hui Personal Blog.
Optional Reading: Google Developers (2019). "Crash Course on GANs," Google.
WATCH Lecture 1: AI & Images (Processing & Generation)
Optional Lecture: Rohrer "How Convolutional Neural Networks Work," (2016).
Optional Lecture: Serrano "A Friendly Introduction to GANs," (2020).
WATCH Lecture 2: Timnit Gebru's "Computer vision in practice: who is benefiting and who is being harmed?"
Optional Lectures: Parts 2 & 3 of the FATE/CV 2020 workshop
COMPREHENSION QUIZ
FINAL PROJECT released (Project Check-in; Project Pitch) / Assignment 4 due
Engagement Activity: AI Can See Clearly Now
For Module #11's Engagement Activity:
Choose an available image generation technology from this Medium post on GAN -- Some Cool Applications of GAN. If you can't find one you like on that website, there are a couple more described in Lecture 11.3, but you may also find different ones on your own.
Use the image generation technology to construct an image that is meaningful to you (or at the very least, interesting to you).
Do a little research on the technology to try to figure out what the algorithm is (sometimes websites are completely open about it, sometimes they obfuscate it!).
Share the link to the technology, a description of what the algorithmic approach might be, and a screenshot of the image you created, and explain why it's important to you.
By Sunday night, check this forum again to leave a response to at least 2 of your peers' posts! In your responses, you will:
Use the original poster's artwork as a starting point to inspire/generate a new image of your own, using the same image generation technology (the website).
Share the image you created.
Comment on whether you agree with the original poster's assessment of the algorithm being used (might require a little research).
Algorithms that predict user preferences are everywhere, delighting users with recommendations in which they might be interested (a relevant book, a similar movie, ...) and online advertisers with increased relevant traffic to their products. How do they work, how can they be gamed, and what do recommendation algorithms do to people? More broadly, what impact can we expect AI (in general) to have on people and the world?
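To ground the "how do they work" question, here is a minimal sketch of one classic approach, item-item collaborative filtering: score a user's unseen items by their similarity to items the user already rated highly. The ratings matrix is invented; production systems (e.g., the YouTube recommender in the optional Covington et al reading) use far richer models.

```python
# Minimal sketch of item-item collaborative filtering (toy ratings matrix).
import numpy as np

# Rows = users, columns = items; 0 means "not yet rated".
R = np.array([[5, 4, 0, 0],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)

user = R[0]                    # recommend something for user 0
scores = sim @ user            # similarity-weighted preference for each item
scores[user > 0] = -np.inf     # never re-recommend already-rated items

print("recommend item:", int(np.argmax(scores)))  # item 2 for this toy data
```

Even this toy version hints at the feedback loops discussed this week: recommendations are driven entirely by past behavior, so what users see next is shaped by what similar users already clicked.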
READ: Tufekci (2018). "YouTube, the Great Radicalizer," The New York Times.
Optional Reading: Naughton (2018). "Extremism pays. That’s why Silicon Valley isn’t shutting it down," The Guardian.
Optional Reading: Covington et al (2016). "Deep neural networks for Youtube recommendations," Proceedings of the ACM Conference on Recommender Systems.
Optional Reading: Nguyen et al (2014). "Exploring the filter bubble: the effect of using recommender systems on content diversity," Proceedings of the ACM Conference on the World Wide Web.
Optional Reading: Willemsen et al (2016). "Understanding the role of latent feature diversification on choice difficulty and satisfaction," Journal of User Modeling and User-adapted Interaction.
READ: Koebler & Cox (2018). "The Impossible Job: Inside Facebook's Struggle to Moderate Two Billion People," Motherboard by Vice.
Content Warning: Mention of suicide, eating disorder, anti-semitism, and sexual acts. Contact Iris if you'd like an alternate reading.
READ: Citron (2021). "Fix Section 230 and Hold Tech Companies to Account," Wired UK.
Optional Reading: Kelly (2021). "Congress is way behind on algorithmic misinformation," The Verge.
READ: Vincent (2021). "Google is Poisoning Its Reputation with AI Researchers," The Verge.
Optional Reading: Simonite (2021). "What Really Happened When Google Ousted Timnit Gebru," Wired.
Optional Reading: Mickle (2021). "Google Plans to Double Its AI Ethics Research Staff," The Wall Street Journal.
Optional Reading: Field (2021). ""What on Earth are they doing?": AI ethics experts react to Google doubling embattled ethics team," Emerging Tech Brew.
WATCH Lecture 1: Recommender Systems + Continuing in HAII
Optional Podcast: NYTimes "Rabbit Hole: What is the internet doing to us?" (2020).
Optional Podcast: Radical AI's "What is AI for social good?" (2020).
WATCH: Lecture 2: AI and the World + Course Wrap-Up
Optional Podcast: Radical AI's "Industry AI Ethics 101 with Kathy Baxter," (2020).
COMPREHENSION QUIZ
FINAL PROJECT (Check-in + Pitch + Submission) Due
Student Course Survey Forms Due
Engagement Activity: None! It's the end of the semester!