Dates: Aug 28, 2025 to Dec 11, 2025
NMGM 5208, A
CRN 18407
FACULTY INSTRUCTOR
Prof. R. Trebor Scholz
scholzt@newschool.edu
Office Hours
For students residing in New York City, I offer “walk-and-talk” office hours—meaning we can meet outdoors and walk from my office at 79 Fifth Avenue to around Washington Square Park. It’s a great way to discuss course topics in a relaxed setting while getting some fresh air. For all other students, please email me to arrange a meeting time, and we’ll schedule a virtual session that works for both of us.
Course Description
As artificial intelligence (AI) reshapes societies and economies in many countries, who controls its direction—and whose interests does it serve? This course explores alternatives that are both practical and within reach, moving beyond legislative solutions to consider how AI can be developed and governed in ways that promote equity and collective well-being.
In many parts of the Global South, AI holds genuine promise for supporting local priorities—but it can also reinforce inequality and extract resources without consent. This course focuses on grounded, community-based alternatives to the dominant Big Tech model—approaches that expand the conversation beyond policy fixes and the familiar perspectives often centered in U.S.-based academic and activist tech spaces. We actively work to include voices from the Global South, even when language barriers, limited internet access, or cultural differences pose challenges to participation. We’ll explore how people are already building more inclusive, sustainable, and locally rooted ways to shape AI that reflect the lived realities and values of their communities. These themes unfold through a three-part structure that grounds the course: critique, alternatives, and action.
While this course stands on its own, it is closely connected to the upcoming Cooperative AI Conference in Istanbul this November. Students will engage directly with some of the researchers and practitioners featured at the conference, gaining early access to their ideas and joining this international conversation.
In this way, the course not only explores cooperative approaches to AI—it actively links participants to the global network of thinkers and doers shaping this field. Through interactive discussions, case studies, and hands-on projects, students will examine real-world applications of AI in sectors such as labor, governance, finance, and community development. The course will also explore policy incubation efforts, highlighting strategies for AI governance that integrate cooperative principles into both local and global frameworks.
This is an interdisciplinary course and collective learning journey, bringing together an international group of students and leading voices in AI—from cultural critics and technical experts to cooperative leaders actively implementing AI in real-world cooperative settings. It integrates perspectives from sociology, law, political economy, technology studies, ethics, and environmental humanities. Drawing on scholarly, journalistic, artistic, and policy sources, students will expand their epistemic horizons. Through close engagement with books, papers, podcasts, videos, and first-hand accounts from cooperatives using AI, we investigate current practices and collaboratively envision future applications across the full supply chain—from mineral sourcing to data centers, algorithmic development, cross-border coordination, and democratic governance.
Course Learning Objectives
By the end of the course, participants will:
– Understand the historical and current debates around AI, ownership, and equity.
– Grasp how extractive AI systems entrench global inequalities.
– Examine the role of digital infrastructure in maintaining technopower.
– Analyze cooperative alternatives emerging globally, with case studies from Latin America, Sub-Saharan Africa, South and Southeast Asia, and Indigenous contexts.
– Assess AI’s impact on labor rights, platform governance, and the solidarity economy.
– Explore alternative AI governance models centered on democratic decision-making.
– Engage with open-source AI tools and cooperative-led digital initiatives.
Remote Learning Guidelines
– Video on: Required during sessions to support meaningful interaction.
– No cellphone use: Please join from a stable setup (laptop/desktop).
– No multitasking: This seminar demands your full attention—do not join while in transit, walking, or engaged in other activities.
– Use a headset if possible to minimize background noise and improve audio quality.
– Be on time: Arriving late disrupts the flow of discussion and shows a lack of respect for your peers.
– Mute when not speaking: To avoid unintended interruptions.
– Engage fully: Be prepared to participate actively through voice and/or chat, just as you would in person.
– Respect the space: Treat the virtual classroom as you would a physical one—be present, prepared, and respectful.
Policy on the Use of Generative Artificial Intelligence
Generative AI can be a helpful tool for learning—but only when used skillfully. While the temptation to pass off assignments to AI tools is understandable—especially for those for whom English is not a first language, or for whom academic writing is not part of their everyday practice—it limits the opportunity for meaningful learning that this course is designed to offer.
Engaging with lectures, readings, and peer perspectives is central to building the critical thinking and analytical skills needed to articulate your ideas in your own voice.
When used thoughtfully, AI can function as a conversational partner for feedback—helping you test arguments, pose questions, clarify complex ideas, and improve drafts of your writing. It can support your learning, not replace it—and it should push you to engage more deeply with your own writing, prompting more frequent and rigorous revision. It should make you think more critically about your drafts.
Generative AI can be a useful tool for exploring ideas, refining your understanding, and clarifying course material—for example, by helping you examine a topic from different angles, identify relevant examples, or revisit points raised in discussion.
However, you may not submit work that is primarily generated by AI. Submitting AI-written material, especially without meaningful revision or critical engagement, is considered academic misconduct, akin to plagiarism. You are responsible for ensuring that all submitted work reflects accuracy, independent thought, and principled academic conduct. Because AI can generate content that sounds plausible but is false or misleading, you must verify anything it produces; if you are unsure whether a given use is appropriate, consult me before relying on it in an assignment.
Course Participants + the Cross-Connection Document
This class may be different from any online course you’ve taken before.
It brings together a wide range of participants—graduate students, cooperative leaders in agriculture and energy, computer scientists, PhD researchers, postdocs, junior faculty, municipal officials, and others—with a shared interest in the critical discourse surrounding artificial intelligence. We examine the intersection of AI—particularly generative AI and large language models—with the solidarity economy and cooperatives.
This is the syllabus for graduate students at The New School.
Add yourself to the cross-connection document
The cross-connection document is designed to help you get to know your classmates and build connections. You’ll use it to share information about yourself, and it will also serve as a resource for contacting other students for peer-to-peer learning, including exchanging feedback on your writing.
Grading and Assignments
Assignment | Due Date | Weight
Participation & Discussion (in class + Hylo) | Ongoing | 20%
Reading Response 1 | Sept 14 | 10%
Reading Response 2 | Oct 12 | 10%
Reading Response 3 | Nov 9 | 10%
Seminar Leadership | Weeks 6–13 | 10%
Final Research Paper | Dec 11 | 40%
Participation & Discussion (ongoing, 20%) – Lead by example in class and on Hylo; pose critical questions, link readings to broader scholarship, and build on peers’ contributions.
Reading Responses (3 × 10% = 30%) – 1,000–1,200 words each; due Sunday 11:59 PM ET on Sept 14, Oct 12, and Nov 9. Show critical engagement with multiple readings, integrate external scholarly sources, and offer original analysis that connects to broader course themes.
Seminar Leadership (Weeks 6–13, 10%) – Co-lead part of one class session; prepare discussion prompts and facilitate engagement with readings.
Final Research Paper/Project (due Dec 11, 40%) – Research paper: 4,000–5,000 words with an original argument supported by at least 10 scholarly sources. Applied project: a deliverable (e.g., policy proposal, toolkit, prototype) plus a 1,500-word analytical essay.
Readings
We’re aiming to broaden the conversation beyond the often U.S.-centered networks of scholars and tech activists, where many voices tend to circulate within the same references, by including a wider range of perspectives—especially from regions commonly referred to as the Global South or majority world. Expect 1–2 required readings or resources per week, totaling 10–30 pages.
For some key readings, we’ve created audio recordings that, while not professionally produced, are meant to support different learning styles.
Access:
This document contains hyperlinks to access the readings, week by week.
Discussion
For this class, discussions will take place not on Canvas, but on a cooperative platform called Hylo, which is value-aligned with our course. The private group we’ve created there will remain active beyond the end of the semester, allowing students to stay in touch after the course concludes. The discussion space is designed for starting conversation threads, sharing materials (such as articles, papers, and other resources), and posting your own writing—responses to course materials that others can engage with. Students are encouraged to respond thoughtfully to one another’s posts. Participation in these discussions is mandatory for New School undergraduate and graduate students, and voluntary for community participants. However, all students—including those in the community section—must join the Hylo group.
Join the group here.
Pedagogy
This course follows a participatory, co-learning model inspired by Jacques Rancière’s concept of the “ignorant schoolmaster,” which holds that knowledge emerges collectively through dialogue rather than being delivered from a single authority. The aim is for participants to make this knowledge their own—through reflection, discussion, and application—rather than receiving it through top-down transmission.
The course brings together a uniquely diverse cohort—from cooperative leaders and municipal officials to technologists, researchers, and educators across Africa, Asia, Europe, and the Americas—fostering a global, cross-sectoral learning environment. A significant benefit of this structure is the opportunity to connect across geographies, sectors, backgrounds, and generations.
Rather than centering any one voice, participants read core texts together, pose questions, and co-produce insights. Occasional guest speakers, often affiliated with the concurrent conference, contribute to key thematic sessions.
For working professionals and community participants, the expected time commitment is approximately 2–4 hours per week, including reading, forum engagement, and compulsory live sessions. While the course is flexible and open, it maintains a shared rhythm that supports consistent engagement. Those seeking a certificate should plan to participate meaningfully throughout the term.
Community Norms
This course is built on mutual respect, curiosity, and trust. Please engage with others kindly, challenge ideas without attacking people, and keep what is said in class private unless you have permission to share it. Course materials may not be shared beyond the class. We follow the Chatham House Rule, which means participants are free to use the information shared, but may not attribute it to course participants without their consent. This helps us create a space where everyone can participate openly and safely. Discussion will rotate among learners, and we ask that no one dominate the conversation.
Course Flow
The class will meet 15 times between August 28 and December 11, every Thursday from 12:10 to 2:00 PM Eastern Time (New York). In these sessions, we discuss the texts, reflect on the readings, analyze films or podcasts, and hear presentations from students as well as from guests in the second half of class. The Zoom URL for all but one meeting is https://NewSchool.zoom.us/j/8986200210.
For the community syllabus—intended for participants not enrolled for credit—there is no writing requirement. However, you are expected to attend each session, engage actively, and contribute meaningfully to class discussions.
Each class begins with a brief check-in, offering participants a chance to connect and share their contexts. This is followed by a 10–15 minute lecture by Professor Trebor Scholz introducing key concepts and guiding the learning journey. A rotating note-taker will document important ideas, terms, and historical references in a shared glossary. This collaboratively generated resource will serve as a record of what we’ve learned throughout the course.
PDFs of lecture slides will be made available for download.
Note the following schedule:
Please note that on September 11, we will meet at a different time than usual: 10:00–11:30 a.m. This is a public event, held in connection with this course and as part of the lead-up to the Istanbul conference. To attend, you must register separately using the link that follows, as the Zoom URL will only be sent to registered participants. https://NewSchool.zoom.us/meeting/register/wSMGyUsWQiKFfnzGuBuW3g
On Thursday, November 13, there will be no synchronous class. Instead, you are asked to complete an asynchronous viewing assignment: watch Sleep Dealer by Alex Rivera (available on YouTube) and, if you have access to Netflix, “Joan Is Awful” (Black Mirror, Season 6, Episode 1).
On Thursday, November 27, there will be no class due to the Thanksgiving holiday in the United States.
Class does not meet on December 4. However, those interested in writing a manifesto for people-centered, solidarity economy–aligned AI are encouraged to connect individually for peer exchange. These meetings are self-organized using the contact information in the cross-connection document shared in Week 1.
Please be sure to pencil these dates into your calendar—whatever form your “pencil” takes these days.
Essential Links
Lecture Slides (PDFs) – here
Class Website – here (select the appropriate syllabus from the pulldown menu in the upper right)
Weekly Class Zoom – https://NewSchool.zoom.us/j/8986200210
September 11 Special Event (10:00–11:30 AM ET) – Register here
Cross-Connection Document – here
Hylo Discussion Group – here
Week-by-Week Readings – see below
KEY DATES
All sessions take place on Thursdays, 12:10–2:00 PM ET unless otherwise noted.
Aug 28 Foundations
Sept 4 Unpacking the Myths of AI
Sept 11 Beyond Democracy Theater — 10:00–11:30 AM ET (special event, separate registration required)
Sept 18 Colonialism Reloaded
Sept 25 The Environmental Cost of AI
Oct 2 AI and the Appropriation of Creative Labor
Oct 9 Automation’s Backbone
Oct 16 What Co-ops Are Already Doing
Oct 23 A Cooperative AI Stack
Oct 30 Applying Cooperative Principles to AI Ethics
Nov 6 Regulate, Enforce, Build Worker Power
Nov 13 Asynchronous Viewing — No synchronous class; complete the viewing assignment
Nov 20 Solidarity Across Struggles
Nov 27 Thanksgiving Holiday — No class
Dec 4 No Regular Class — Individual writing meetings (peer-to-peer writing exchange; instructor meets individually with students)
Dec 11 What Stays With Us — Closing Circle
COURSE OUTLINE
This course unfolds in three interconnected phases: critique, alternatives, and action. We begin by examining the power dynamics and systemic harms embedded in contemporary AI—from labor exploitation and bias to environmental degradation and data colonialism.
Through foundational readings and case studies, we explore how AI systems entrench inequality, with special attention to lived realities in the Global South and how extractive digital models replicate colonial patterns (Weeks 1–5).
In the second section (Weeks 6–10), we turn to alternatives—studying cooperative responses to AI’s concentration of power, including examples from worker-led initiatives, data co-ops, and shared infrastructure projects across multiple regions.
The final weeks (Weeks 11–15) center on collective imagination and action: participants engage with movement-builders from around the world, explore legal and policy pathways, and co-develop manifestos for a people-centered, solidarity economy–aligned AI.
Throughout, the course aims to resist a solely EU–US-centric lens, foregrounding, wherever possible, plural perspectives, especially from communities historically excluded from these debates.
Syllabus (version of August 26, 2025)
Week 1 (Thursday, August 28, 2025)
FOUNDATIONS
This opening session orients students to the course structure, digital workspace (Hylo), and discussion norms. After a walkthrough of the syllabus and expectations, students will participate in breakouts to surface perceived harms of AI and begin co-creating a shared glossary of key terms. These early reflections help map how cooperative principles might shift power within AI systems.
Beyond logistics, the session introduces AI as a socio-technical system shaped by political, economic, and infrastructural forces. Through discussion and foundational readings—including Kate Crawford’s Atlas of AI and a policy reflection from the Digital Watch Observatory—students begin to examine how AI’s impacts vary across contexts and why people-centered alternatives are urgently needed.
Intro to the Logistics of the Course
Introductions
Walk through the syllabus, course website
Join Hylo workspace.
Sign the Terms of Agreement: Community Values and Learning Culture Agreement
Explain readings, assignments, expectations, and discussion norms.
Register for September 11 here
Break-out workshop
“Impact Sprint”: In small groups, prompt ChatGPT to surface the most significant harms, blind spots, joys, and conveniences in today’s AI ecosystem, then critically reflect on those AI-identified issues and benefits, rank them, and briefly justify your ordering.
In small groups, draft a ChatGPT prompt asking: “What practices, technologies, and conceptual frameworks fall under the umbrella of ‘AI,’ and how do they span technical methods, socio-technical systems, and governance or rights frameworks?”
Class discussion
Invite reflections on how AI and digital infrastructures operate differently across economic, political, and infrastructural contexts.
In-class Assignment
Using your notes on harms, blind spots, joys, and conveniences in today’s AI ecosystem—along with your notes on the introduction to Atlas of AI—feed them into NotebookLM’s text-to-podcast feature and generate a conversational episode that we can play in class.
Required Readings: Zip file download (version of August 26, 2025)
Kate Crawford, “Introduction,” in Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven, CT: Yale University Press, 2021), 1–22. (MP3 version available for download, 36 MB)
Digital Watch, “AI That Serves Communities, Not the Other Way Round,” Digital Watch Observatory, July 10, 2025, https://dig.watch/updates/ai-that-serves-communities-not-the-other-way-round.
Week 2 (Thursday, September 4, 2025)
UNPACKING THE MYTHS OF AI
This session challenges the myth of AI as a singular, autonomous intelligence. Instead, we frame it as a global infrastructure rooted in resource extraction, labor exploitation, and algorithmic control. Drawing from Kate Crawford’s Atlas of AI, we examine how these systems function across governance, labor, and surveillance—amplifying inequality through racial, cultural, and geographic bias in hiring, policing, healthcare, and border control.
We also explore how AI undermines creative industries through mass copyright infringement and unconsented data use. From Gaza to Ukraine, Sudan, and Myanmar, we trace its militarization and role in reinforcing power. Finally, we confront the limits of regulation, given Big Tech’s dominance, weak U.S. oversight, and Europe’s failure to enforce existing laws.
Guiding questions:
– What are the foundational myths that sustain current AI development—and who do they serve?
– How do algorithmic, cultural, and environmental harms show up in real-world applications of AI?
– Why aren’t existing laws enough—and what stands in the way of meaningful regulation?
Suggested Activity:
Debrief from Week 1’s "Impact Sprint" and start glossary contributions
(crowdsourced across class).
Required Readings: Zip file download (version of August 26, 2025)
Listen to Podcast
The Radical AI Podcast. 2020. “Episode 8: Love, Challenge, and Hope: Building a Movement to Dismantle the New Jim Code with Ruha Benjamin.” Radical AI Podcast, May 6, 2020. https://www.radicalai.org/episode-8-ruha-benjamin.
Video
Buolamwini, Joy. 2019. “The Coded Gaze: Bias in Artificial Intelligence.” Talk presented at the Bloomberg Equality Summit, New York. Bloomberg Live. Published on YouTube, March 28, 2019. https://www.youtube.com/watch?v=eRUEVYndh9c
Week 3 (Thursday, September 11, 2025, 10:00–11:30 AM ET; Register)
BEYOND DEMOCRACY THEATER
This public online forum—Beyond Democracy Theater: From Participation to Worker Power in AI—offers a critical look at how those most affected by artificial intelligence are pushing to reshape its design, governance, and ownership. Rather than a regular class, this ICDE event gathers organizers, researchers, and practitioners to examine three grounded scenarios: participatory design, union bargaining, and cooperative ownership of AI. It builds on themes from a recent article on community ownership of AI, connects to a related course at The New School, and prepares the ground for the Cooperative AI conference in Istanbul (November 2025). Participants will hear firsthand from those challenging extractive systems and explore what real worker power could look like in an algorithmically governed world.
See the event here: https://platform.coop/blog/beyonddemocracytheater/
You must register: https://NewSchool.zoom.us/meeting/register/wSMGyUsWQiKFfnzGuBuW3g
Week 4 (Thursday, September 18, 2025)
COLONIALISM RELOADED
This session examines how AI is entangled with extractive systems that echo colonial legacies—particularly in the Global South. Through regional critiques, lived experiences, and artistic interventions, we explore how digital technologies intersect with labor, sovereignty, and cultural power. From Kriangsak Teerakowitkajorn’s call for new labor imaginaries in Asia to Shav Vimalendiran’s analysis of cultural bias in large language models, we begin to see how global inequalities are reproduced in AI systems.
Supplemental materials deepen this inquiry—covering data colonialism (Couldry & Mejias), governance gaps (Gurumurthy & Chami), and regional case studies from Turkey (Genç), South America (Rikap), and the Global South’s digital users (Arora). We also explore artistic and geopolitical perspectives, from the Ghana Think Tank’s inversion of North-South problem-solving to Timnit Gebru’s podcast on AI geopolitics. Guest speaker Nicholas Bequelin (Yale) will offer insight into China’s strategic approach to AI development.
Guiding Questions
– How does AI reinforce or reconfigure existing colonial and extractive power structures?
– Whose perspectives on the future of work are centered in mainstream AI and labor reports, and how does Teerakowitkajorn’s framing challenge or expand those narratives?
– Who benefits—and who is made invisible—when AI systems are deployed globally?
Required Readings: Zip file download (version of August 26, 2025)
Teerakowitkajorn, Kriangsak. “Toward Asian Labor Futures.” Medium, June 26, 2025. https://kriangsakth.medium.com/toward-asian-labor-futures-623df78149b1.
Singh, Deepa. “AI Ethics in and for the Global South: Universal Vs. Situated Approaches.” Medium, March 27, 2025. https://medium.com/@deepasingh/ai-ethics-in-and-for-the-global-south-universal-vs-situated-approaches-58d3a6b6d07c.
Vimalendiran, Shav. “Cultural Bias in LLMs.” Blog, July 20, 2024. https://www.howandwhyai.com/blog/cultural-bias-in-llms.
Nick Couldry and Ulises A. Mejias, “The Coloniality of Data Relations,” in The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism (Stanford, CA: Stanford University Press, 2019), chap. 3.
Gurumurthy, Anita, and Nandini Chami. 2019. The Wicked Problem of AI Governance. New Delhi: Friedrich-Ebert-Stiftung India Office. https://www.researchgate.net/publication/344604252_The_Wicked_Problem_of_AI_Governance.
Genç, Kaya. “Desperate for Work, Translators Train the AI That’s Putting Them Out of Work.” Rest of World, February 20, 2025. https://restofworld.org/2025/turkeys-translators-training-ai-replacements/.
Rikap, Cecilia. “South America’s Sovereignty Is Being Lost in Big Tech’s Clouds.” openDemocracy, July 30, 2025. https://www.opendemocracy.net/en/south-america-sovereignty-big-tech-clouds-cecilia-rikap/.
Arora, Payal. 2019. “The Biggest Myths About the Next Billion Internet Users.” Quartz, November. https://qz.com/1669754/tech-companies-misunderstand-the-next-billion-internet-users
Art Project:
Ghana Think Tank—the collective that flips conventional power dynamics by having think tanks in the global South tackle problems from the global North: https://www.ghanathinktank.org/
Podcast
Marx, Paris, host. 2025. “AI Hype Enters Its Geopolitics Era w/ Timnit Gebru.” Tech Won’t Save Us (podcast), March 13, 2025. https://techwontsave.us/episode/267_ai_hype_enters_its_geopolitics_era_w_timnit_gebru.
Guest Speakers:
Nicholas Bequelin — Senior Fellow and human rights expert at Yale Law School, examining the global risks of unregulated AI, with a focus on China.
Kriangsak Teerakowitkajorn — Labor geographer and Just Tech Fellow at the Social Science Research Council, advancing worker-led advocacy and organizing in Southeast Asia, with a focus on platform labor.
Week 5 (Thursday, September 25, 2025)
THE ENVIRONMENTAL COST OF AI
We retrace every stage of AI’s material life—lithium and rare-earth extraction, megawatt-hungry data centers, water-cooled GPUs, and the mounting e-waste that follows each model upgrade. Readings from Atlas of AI and up-to-the-minute journalism ground a data-driven audit of energy, water, and mineral inputs for today’s leading systems.
In break-outs, students apply cooperative principles to draft “eco-manifestos” that set concrete targets for frugality (smaller models, data localization, renewable compute), shared ownership of infrastructure, and transparent impact reporting. We close by stress-testing each manifesto against real-world constraints—funding, governance, and user demand—to surface actionable next steps for the cooperative AI movement.
Guiding questions:
– What are the full ecological costs of AI, from extraction to e-waste?
– How can cooperative principles reshape the environmental footprint of AI infrastructures?
– What would it take to build truly sustainable, low-carbon AI systems—and who should be responsible for that transition?
Required Readings: Zip file download (version of Aug 26, 2025)
Hao, Karen. “Plundered Earth.” In Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. New York: Penguin Press, 2025.
Bhat, Divsha. “Big Tech is building AI in the desert. The water may not last.” Rest of World, August 4, 2025. https://restofworld.org/2025/gulf-ai-water-crisis.
David Berreby, “As Use of A.I. Soars, So Does the Energy and Water It Requires,” Yale Environment 360, February 6, 2024, https://e360.yale.edu/features/artificial-intelligence-climate-energy-emissions.
Parshley, Lois. “The Hidden Environmental Impact of AI.” Jacobin, June 20, 2024. https://jacobin.com/2024/06/ai-data-center-energy-usage-environment/.
Guest speaker: tbd
Week 6 (Thursday, October 2, 2025)
AI AND THE APPROPRIATION OF CREATIVE LABOR
This session centers on the massive infringement of intellectual property by AI companies—focusing on the unlicensed scraping and replication of artistic, literary, and musical works to train generative models. We examine how tools like ChatGPT and DALL·E have been trained on millions of copyrighted materials without consent, undermining the livelihoods and rights of creators.
From Studio Ghibli–style image generators to lawsuits by authors, musicians, and visual artists, we’ll explore the growing wave of legal challenges and the ethical breakdown in current AI development practices. We engage critically with frameworks from the World Economic Forum and the Brookings Institution to understand how copyright protections, fair attribution, and equitable compensation models are being tested—and often ignored—in this rapidly evolving landscape. We conclude the session by drafting policy memos or fair-compensation proposals that respond to the realities of large-scale intellectual property theft and propose enforceable protections for human creators in the age of generative AI.
Guiding Questions
— To what extent should AI companies be held legally or financially accountable for using copyrighted material without consent, and what models of compensation (if any) are viable at scale?
— Is “style” a form of intellectual property that deserves protection in the era of generative AI? How do current laws fall short in addressing this question?
— What would a fair and enforceable system look like for protecting artists, musicians, and writers in a world where their work can be scraped, mimicked, and monetized by machines?
Required Readings: Zip file download (version of Aug 26, 2025)
World Economic Forum. Technology Convergence Report 2025. Geneva: World Economic Forum, 2025. https://www.weforum.org/publications/technology-convergence-report-2025/
Wang, Judy, and Nicol Turner Lee. “AI and the Visual Arts: The Case for Copyright Protection.” Brookings Institution – TechTank, April 18, 2025. https://www.brookings.edu/articles/ai-and-the-visual-arts-the-case-for-copyright-protection/.
Rocha, André, et al. “Worker-led AI Governance: Hollywood Writers' Strikes and the Worker Power.” Information, Communication & Society, June 2025. https://doi.org/10.1080/1369118X.2025.2521375.
O’Brien, Matt, and Sarah Parvini. “ChatGPT’s Viral Studio Ghibli-Style Images Highlight AI Copyright Concerns.” Associated Press, March 28, 2025. https://apnews.com/article/0f4cb487ec3042dd5b43ad47879b91f4.
Guest speaker:
Mark Esposito (Harvard University & Hult International Business School): Economist and complexity scholar investigating sustainable governance and AI ethics.
Week 7 (Thursday, October 9, 2025)
AUTOMATION’S BACKBONE
This session exposes the illusion of autonomous AI by spotlighting the hidden human labor behind it—content moderation, annotation, data labeling, and flagging. Drawing on the myth of the 18th-century Mechanical Turk, a supposed chess-playing machine secretly controlled by a person, we explore how modern AI similarly masks the human effort embedded in its operations.
Despite the image of machine intelligence, today’s systems depend on vast, underpaid workforces—often in the Global South—whose contributions are erased from view. Through personal accounts, cooperative alternatives, and critical artworks, we investigate how this invisibility sustains algorithmic injustice and explore pathways toward recognition and accountability.
Guiding Questions
— How does the myth of autonomous AI—echoing the Mechanical Turk—serve to obscure the human labor behind machine learning systems?
— What are the ethical and economic consequences of hiding the labor of content moderators, annotators, and data workers, particularly those in the Global South?
— What would it take to make this labor visible, valued, and collectively owned—and how can cooperative models challenge the extractive logics of current AI development?
Required Readings: Zip file download (version of Aug 26, 2025)
James Muldoon, Callum Cant, and Mark Graham. “The Annotator.” In Feeding the Machine: The Hidden Human Labor Powering A.I., chap. 1. New York: Bloomsbury Publishing, 2024.
Smith, Robert, and Alexandra Heal. “Spies, Spinners, Solicitors: Builder.ai’s ‘Perfectly Normal’ Creditor List in Full.” Financial Times, June 9, 2025. https://www.ft.com/content/16ee837a-1d89-448b-8a33-9741025334d6
Mary L. Gray and Siddharth Suri. “Algorithmic Cruelty and the Hidden Costs of Ghost Work” and “Working Hard for [More Than] the Money.” In Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Cambridge, MA: Houghton Mifflin Harcourt, 2019)
Prabhu, Vinay Uday, and Abeba Birhane. “Large Image Datasets: A Pyrrhic Win for Computer Vision?” arXiv (preprint), July 23, 2020.
Kate Crawford, “Labor,” in Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven: Yale University Press, 2021), chap. 2.
Joy Buolamwini, “Daughter of Art and Science,” in Unmasking AI: My Mission to Protect What Is Human in a World of Machines (New York: Random House, 2023), chap. 1.
Guest Speaker
Kauna Malgwi – Co-founder & Executive Director of the African Tech Workers Cooperative, driving worker-owned digital justice across Africa.
Abeba Birhane – Cognitive scientist and Senior Advisor in AI Accountability at Mozilla Foundation, specializing in ethics, bias, and the societal impact of AI.
Week 8 (Thursday, October 16, 2025)
WHAT COOPS ARE ALREADY DOING
AI is reshaping economies, institutions, and everyday life—but its development remains tightly controlled by firms like OpenAI, Microsoft, and Meta. These companies dominate through massive infrastructure, extractive data practices, and profit-driven design, often reinforcing global inequality and ecological harm. Even open-source challengers frequently depend on corporate-scale resources. This session asks: What alternatives are not just imaginable, but already underway?
We examine how cooperatives—and, more broadly, the solidarity economy—are beginning to intervene in AI’s development. Cooperatives alone already employ nearly 10% of the global workforce and represent a global movement with real economic and political weight. While acknowledging the enormous barriers, including centralized infrastructure and capital, we consider how connections to public digital infrastructure efforts in India (with all their limitations) and emerging frameworks in Europe might offer viable pathways forward. Through case studies like IFFCO, FrieslandCampina, MIDATA, Pescadata, READ-COOP, and the Gamayyar African Tech Workers’ Cooperative, we explore how these models are reclaiming AI for equity, sustainability, and collective ownership.
Guiding Questions
— Where are cooperatives and solidarity economy actors already intervening in the AI space—and what is the scope and impact of their work?
— What structural or legal shifts would enable cooperatives to do more in shaping AI infrastructure, design, and governance?
— Beyond cooperatives, how might broader solidarity economy networks provide the resources, alliances, and scale needed to challenge corporate AI dominance?
Activity:
In small groups, students draft concrete proposals for cooperative AI initiatives—balancing idealism with the constraints of compute access, funding, and public policy.
Required Readings: Zip file download (version of Aug 26, 2025)
“5 Ways Cooperatives Can Shape the Future of AI.” Harvard Business Review, June 2025. https://hbr.org/2025/06/5-ways-cooperatives-can-shape-the-future-of-ai
Sarah Hubbard, “Cooperative Paradigms for Artificial Intelligence,” Ash Center for Democratic Governance and Innovation, Harvard Kennedy School, November 20, 2024. https://ash.harvard.edu/resources/cooperative-paradigms-for-artificial-intelligence/.
Co-operative Councils Innovation Network, Wigan Council, and Mutual Ventures. Co-operative Values-Driven AI: A Guidance Framework. October 2024.
REWE Group. “AI Manifesto.” Press release, Digital Day 2020, Cologne, Germany, 2020.
Explore Cooperative Case Study:
Research MIDATA.coop, a Swiss health data cooperative that enables member-owned, consent-based data sharing for ethical AI development. In particular, focus on how MIDATA uses artificial intelligence through AiDAVA, its virtual assistant that helps individuals curate, integrate, and manage their health records with human-in-the-loop support.
Guest speakers:
Stefano Tortorici is a PhD student at Scuola Normale Superiore researching platform cooperatives and digital labor.
Sarah Hubbard is a Senior Fellow at the Harvard Kennedy School researching cooperative governance, emerging technology, and democratic AI.
Week 9 (Thursday, October 23, 2025)
A COOPERATIVE AI STACK
This week, participants examine the foundations of digital infrastructure through a cooperative lens. We explore how member-owned data centers differ from corporate models by emphasizing democratic control, equitable benefit sharing, and long-term community stewardship. Through case studies, students will engage with technical, organizational, and ethical design principles that support data sovereignty and resilience. Lectures, guest presentations, and collaborative design exercises will introduce server architectures, network configurations, and storage strategies rooted in cooperative values.
The session challenges participants to imagine what it would mean for the entire digital stack of AI—from mineral extraction and chip manufacturing to data centers, model training, deployment, and governance—to be cooperatively run. Cooperatively owned mines already exist in countries like Peru, Colombia, Zambia, and Mongolia, while cooperative data centers have emerged in Germany, the Netherlands, and beyond. Drawing on the 1880s vision of the cooperative commonwealth—and historic U.S. cooperative ecosystems such as the Co-operative Central Exchange in Superior, Wisconsin—we explore how solidarity-based infrastructure might be built at every layer of AI’s global supply chain. Students will analyze how cooperative practices could scale to create AI systems that are technically robust, socially just, and democratically governed.
Guiding Questions
— How do cooperative digital infrastructures—such as member-owned data centers—redefine control, ownership, and accountability compared to corporate platforms?
— What technical and organizational design choices support data sovereignty, democratic governance, and long-term resilience in digital systems?
— How can we practically design and implement infrastructure that serves collective needs while remaining adaptable, secure, and ethically grounded?
Required Readings: Zip file download (version of Aug 26, 2025)
Wilson, David, and Andrew E. G. Jonas. “The People as Infrastructure Concept: Appraisal and New Directions.” Urban Geography, May 2021. https://doi.org/10.1080/02723638.2021.1931752.
This is a collection of short articles exploring how AI is unfolding across different contexts in the Global South. Choose one article from the collection:
Nicola Nosengo, Subhra Priyadarshini, Sahana Ghosh, Akin Jimoh, Lynne Smit, and Saad Lotfey, eds. AI Models in the Global South. Nature Collection. May 21, 2025. https://www.nature.com/collections/bgficabbgc.
Bria, Francesca, Paul Timmers, and Fausto Gernone. EuroStack – A European Alternative for Digital Sovereignty. Gütersloh: Bertelsmann Stiftung, February 2025. https://doi.org/10.11586/2025006.
Explore Cooperative Case Study:
Rocha, V. A. “Ohio Co-op Sponsors Top College Project Using AI to Track Rolling Outages.” Electric.coop, September 24, 2024. Accessed August 3, 2025. https://www.electric.coop/ohio-co-op-sponsors-top-college-project-using-ai-to-track-rolling-outages
Skim this:
Research Northeastern’s AI for Impact program, focusing on how students co-develop civic AI tools with public agencies to address social challenges.
https://burnes.northeastern.edu/ai-for-impact-coop/
Guest speaker:
Tara Merk – ICDE Research Fellow at The New School and PhD candidate in Political Science at CNRS/Panthéon-Assas University Paris II, investigating cooperative data centers.
Week 10 (Thursday, October 30, 2025)
APPLYING COOPERATIVE PRINCIPLES TO AI ETHICS
Building on Week 9’s exploration of the cooperative AI stack, this session asks what it would concretely mean to apply the seven cooperative principles—first articulated in 1844—to the design, governance, and ethics of AI systems. Rather than treating cooperative values as abstract ideals, we examine how they can guide real-world alternatives to extractive, corporate-led tech development.
We begin with an in-depth case study of READ-COOP, the European cooperative behind the Transkribus handwriting recognition platform. Transkribus enables scholars, archivists, and communities to transcribe historical documents at scale—while retaining democratic governance, equitable data policies, and a federated ownership model. We will engage directly with Professor Melissa Terras, one of READ-COOP’s founding members. The conversation is expanded through the participation of Jose Mari Luzarraga Monasterio, co-founder of the Mondragon Team Academy.
Guiding Questions
— How can the seven cooperative principles be concretely applied to the design, governance, and deployment of AI systems?
— What makes READ-COOP and Transkribus a viable model for ethical, democratically governed AI infrastructure?
— How can federated cooperative systems challenge extractive AI models and scale solidarity-based alternatives?
Required Readings: Zip file download (version of Aug 26, 2025)
Terras, Melissa, et al. “The Artificial Intelligence Cooperative: READ‑COOP, Transkribus, and the Benefits of Shared Community Infrastructure for Automated Text Recognition.” Open Research Europe 5, no. 16 (2025): 1–44. https://open-research-europe.ec.europa.eu/articles/5-16.
Srivastava, Lina. "Building Community Governance for AI: How Cooperatives and Collectives Can Build the AI Sector toward Justice, Equity, and Shared Prosperity." Stanford Social Innovation Review, March 4, 2024. https://ssir.org/articles/entry/ai-building-community-governance
Scholz, R. Trebor. “The Coming Data Democracy.” In Own This! How Platform Cooperatives Help Workers Build a Democratic Internet, Verso, 2023.
Project Liberty Institute and Decentralization Research Center. How Can Data Cooperatives Help Build a Fair Data Economy? Laying the Groundwork for a Scalable Alternative to the Centralized Digital Economy. July 2025.
Activity: Analyze Transkribus' cooperative structure.
Explore Cooperative Case Study:
Explore how each organization applies AI across different layers of the digital stack while aligning with cooperative values or challenging Big Tech models.
Investigate how LESTAC AI—a cooperative initiative in Perpignan—challenges Big Tech by building local, ethical AI infrastructure. How does it engage different layers of the digital stack? (Note: The article is in French and may need translation.) https://www.actuia.com/actualite/pyrenees-orientales-naissance-dune-ia-souveraine-au-service-des-entreprises-locales/
Explore Friesland Campina’s multi-layered AI strategy—from real-time factory monitoring and centralized analytics to machine learning for research insights and personalized nutrition. How does this illustrate AI integration across the stack?
https://www.symphonyai.com/resources/case-study/industrial/frieslandcampina/
https://www.agconnect.nl/partner/databricks/frieslandcampinas-ai-journey
https://www.stravito.com/resources/frieslandcampina-makes-insights-more-relevant-with-stravito
https://150years.frieslandcampina.com/en/stories/the-one-and-most-important-to-innovate-is-ourselves
Explore how Coop Sweden uses AI to transform grocery shopping and customer service across 800+ stores—what layers of the AI stack are at play?
https://ebi.ai/blog/coop-retail-ai-case-study/
Cooperative Councils’ Innovation Network. Co‑operative Values‑Driven AI – a Guidance Framework. Policy Lab, Wigan Council (lead): Brent Council; Bury Council; Cheshire West and Chester Council; Glasgow City Council; Kirklees Council; Manchester City Council; Oldham Council; Mutual Ventures. October 2024. Published on Co‑operative Councils’ Innovation Network website. https://www.councils.coop/project/coop-values-driven-ai/
Guest Speakers (invited):
Melissa Terras, Chair in Digital Cultural Heritage at the University of Edinburgh, uses AI for cultural preservation while probing the environmental impact of digital infrastructures.
Jose Mari Luzarraga Monasterio, Co-founder, Mondragon Team Academy
Week 11 (Thursday, November 6, 2025)
REGULATE, ENFORCE, BUILD WORKER POWER
NOTE: The class always meets at 12:10 PM New York time. If you're joining from a different time zone, this may shift by one hour around November 2, depending on whether and when your country changes clocks. Please check a time converter (like timeanddate.com) to make sure you join at the correct local time each week.
This session examines how legislation and policy could support a more democratic, people-centered AI ecosystem—one designed not for profit or control, but for public good. At stake is whether laws can meaningfully restrain the power of corporate monopolies and state surveillance, or if they merely formalize existing inequalities under the banner of “ethics.” We’ll look closely at initiatives like the EU AI Act, with its risk-based regulatory approach, and the U.S. Executive Order on Safe, Secure, and Trustworthy AI (2023), which sketches out principles for ethical deployment. Both reflect a growing consensus that AI requires guardrails. Yet these efforts often sidestep the core issue: the vast asymmetry of power between those who design AI systems and those most affected by them.
Against this backdrop, the session asks what more ambitious, structurally aware legislation might look like—laws that don’t just manage risk, but redistribute power. How can regulation confront the root causes of AI-driven inequality without succumbing to technocratic idealism? And in an era increasingly shaped by authoritarian leaders who use AI to consolidate control, can policy alone ever be enough?
Guiding Questions
— What kinds of legislative frameworks could truly enable people-centered AI?
— How do current efforts like the EU AI Act and U.S. Executive Order fall short?
— What are the risks of relying on policy alone in an era of techno-authoritarianism?
Required Readings: Zip file download (version of Aug 26, 2025)
Jozak, Tom. “2024 EU AI Act: A Detailed Analysis.” SSRN, March 26, 2025. https://doi.org/10.2139/ssrn.5196856.
Cuéllar, Mariano-Florentino, and Aziz Z. Huq. The Democratic Regulation of Artificial Intelligence. New York: Knight First Amendment Institute at Columbia University, January 31, 2022.
https://knightcolumbia.org/content/the-democratic-regulation-of-artificial-intelligence
Ramos, Maria Elisabete, Ana Azevedo, Deolinda Meira, and Mariana Curado Malta. 2023. “Cooperatives and the Use of Artificial Intelligence: A Critical View.” Sustainability 15, no. 1: 329. https://doi.org/10.3390/su15010329.
Panel of Guest Speakers:
Dorleta Urrutia Onate – PhD student at the University of Deusto and researcher at the Platform Cooperativism Consortium, specializing in cooperative digital economies and ethical AI governance.
Stefan Ivanovski – Founder of Lifestyle Democracy and PhD candidate at Cornell University’s ILR School, studying the democratization of ownership, cooperative models, and AI-powered labor infrastructures.
Morshed Mannan – Lecturer in Global Law and Digital Technology at Edinburgh Law School, researching cooperative governance, blockchain, and democratic forms in platform economies, with a recent focus on legal frameworks for digital labor platforms.
Week 12 (Thursday, November 13, 2025)
ASYNCHRONOUS VIEWING
During this conference week in Istanbul, class will not meet. Instead, you will watch two works of speculative media that interrogate technology’s human costs. First, watch Alex Rivera’s Sleep Dealer (available for rent here: https://www.sleepdealer.com/checkout/sleep-dealer-the-movie/purchase) and reflect on its portrayal of cross-border digital labor, networked surveillance, and worker exploitation. Then view “Joan Is Awful,” Season 6 Episode 1 of Black Mirror (Netflix), to explore themes of algorithmic identity, corporate control over personal data, and the commodification of self.
Suggested:
Prepare a brief written reflection comparing how each piece surfaces hidden labor, reshapes notions of agency, and invites cooperative alternatives to the tech-driven status quo.
Week 13 (Thursday, November 20, 2025)
SOLIDARITY ACROSS STRUGGLES
This session explores why cooperatives alone cannot meaningfully confront the systemic challenges of AI. They will not succeed if they act in isolation—or cling to the illusion of being apolitical. To make a real impact, cooperatives must align with other movements fighting for justice in technology, labor, climate, and data governance.
We bring together tech-worker unions, climate justice networks, and Indigenous data-sovereignty campaigns to explore how cooperative strategies can amplify solidarity across struggles. The session investigates what cross-movement collaboration could look like in practice, and how shared principles—such as democratic governance, mutual aid, and collective ownership—can lay the groundwork for durable, people-centered alternatives to extractive AI systems.
Guiding Questions
— Why can’t cooperatives alone confront the structural harms of AI—and what are the limits of working in isolation?
— What does meaningful cross-movement collaboration look like between co-ops, unions, climate activists, and Indigenous groups?
— How can shared governance, mutual aid, and collective ownership shape AI development beyond capitalist or state-centered models?
Required Readings: Zip file download (version of Aug 26, 2025)
Hunt-Hendrix, Leah, and Astra Taylor. “Solidarity Beyond Borders.” In Solidarity: The Past, Present, and Future of a World-Changing Idea. New York: Pantheon Books, 2024.
Kawano, Emily, and Julie Matthaei. 2020. "System Change: A Basic Primer to the Solidarity Economy." Nonprofit Quarterly, July 8, 2020. https://nonprofitquarterly.org/system-change-a-basic-primer-to-the-solidarity-economy/.
Vieta, Marcelo, et al. 2010. “Editorial: The New Cooperativism.” Affinities: A Journal of Radical Theory, Culture, and Action 4, no. 1 (August 25): 1–11. https://ojs.library.queensu.ca/index.php/affinities/article/view/6145.
Guest Speakers:
Andi Argast – Co-designing a Rubric for Cooperative AI (Hypha)
Felix Weth – Setting Up Local AI Assistants for Cooperatives
Week 14 (Thursday, December 4, 2025)
PEER-TO-PEER EXCHANGE
Please note: This week, the class will not meet online. Instead, you are required to arrange—at least one week in advance—a meeting with at least one fellow student for a peer-to-peer exchange, using the cross-connection document to coordinate.
Before your meeting:
Confirm the date and time with your peer at least one week ahead.
Prepare to share and discuss your final paper.
During your meeting, focus on:
Clarity of argument – Is your central idea clear and well-supported? Your main argument should be expressible in one concise sentence, and you should bold your argument in the paper draft to make it easy to identify.
Use of evidence – Are your sources relevant, well-integrated, and properly cited?
Structure, flow, and spelling – Does the paper progress logically, read smoothly, and avoid spelling errors?
Properly formatted citations – Are all references and the bibliography formatted according to Chicago Manual of Style, 17th edition? (For guidance, see the Purdue OWL Chicago Manual of Style resources).
After your meeting, continue working on and refining your final paper based on the feedback you received.
Week 15 (Thursday, December 11, 2025)
What Stays With Us — Closing Circle and Collective Commitments
For our final session, we’ll gather to reflect on what has already shifted—in our thinking, our networks, and our sense of what’s possible. We’ll open with a short reflective prompt:
What’s one idea, encounter, or insight from this course that changed how you think about AI and cooperation?
From there, we’ll move into a closing circle, where each participant is invited (but not required) to share a brief reflection—something they learned, struggled with, or will carry forward. This is not about a polished performance, but a chance to honor the journey of the past 15 weeks, across backgrounds, sectors, and geographies.
Next, we’ll surface collective intentions: What do we want to do with what we’ve learned? Are there projects to support, research to continue, ideas to carry into our workplaces or communities? We’ll map these on a shared digital board, inviting ongoing connection beyond the course.
We’ll close with a few final words—not as conclusions, but as invitations to continue the learning and practice. This session is not just the end of a class; it’s a moment to recognize that we are already part of a growing network of people working to shape AI otherwise.
Required Materials and Expenses
All required materials, including PDF readings and audio resources such as MP3s, are provided free of charge. There are no costs associated with this course.
Title IX Policy
The New School is committed to creating and sustaining an environment where students, faculty, and staff can study, work, and thrive unhampered by discrimination or harassment. The university’s Title IX website raises awareness and provides information, support, and resources to anyone in the community seeking assistance and education.
The Title IX office can offer assistance to community members affected by sexual harassment or any other form of sexual violence, regardless of whether any formal administrative or criminal process is initiated.
Academic Integrity Policy
What is Academic Dishonesty?
The standards of academic integrity apply to all forms of academic work and circumstances, including, but not limited to, presentations, performances, examinations, submissions of papers (including drafts), projects, academic records, etc. Academic integrity includes accurate use of quotations, as well as appropriate and explicit citation of sources in instances of paraphrasing, describing ideas, and reporting on research findings and work of others (including that of faculty members and other students). Only expressly authorized use of artificial intelligence tools is permitted. Students unsure about acceptable use of any source, including generative artificial intelligence tools, in the context of a particular assignment, should consult the syllabus for a course or speak with the instructor.
Academic dishonesty results from violations of academic integrity guidelines. Academic dishonesty includes, but is not limited to:
cheating on examinations, either by copying another student’s work or by utilizing unauthorized materials
using work of others as one’s own original work and submitting such work to the university or to scholarly journals, magazines, or similar publications
copying or appropriating someone else’s work in visual or performing arts
submission of someone else’s work downloaded from paid or unpaid sources on the internet as one’s own original work, or including the information in a submitted work without proper citation
submission of another student’s work obtained by theft, purchase or other means as one’s own original work
submitting the same work for more than one course without the knowledge and explicit approval of all of the faculty members involved, including applicable faculty from prior semester(s)
unauthorized use of artificial intelligence tools to generate ideas, images, art/design, audio, video, code, or text for any portions of work
destruction or defacement of the work of others
aiding or abetting any act of academic dishonesty
any attempt to gain academic advantage by presenting misleading information, making deceptive statements, or falsifying documents, including documents related to admission applications, academic records, portfolios, and internships
any form of distributing one’s work with the intent to enable students to use this work as their own, including, but not limited to, posting quizzes, papers, projects etc. on websites
engaging in other forms of academic dishonesty that violate principles of integrity
Image Credits
Unless otherwise noted, all images in this syllabus are from Better Images of AI and are licensed under CC BY 4.0. Image creators:
Yutong Liu & Digit
Elise Racine & Digit
Dominika Čupková & Archival Images of AI + AIxDESIGN
Shady Sharify
Hanna Barakat & Cambridge Diversity Fund
Hanna Barakat & Archival Images of AI + AIxDESIGN
Kathryn Conrad & Rose Willis
Clarote & AI4Media
Anton Grabolle
Catherine Breslin & Team and Adobe Firefly
Fritzchens Fritz
Alexa Steinbrück
ACKNOWLEDGMENT
The generative AI policy reflects my own thinking and draws on course policies from the Harvard Kennedy School Executive Education program.