Reimagining Possibilities for Creative AI Literacy:
Generating Ideas for Future Programs through Participatory Design
Introduction to Participatory Design Sessions
The design sessions were intended to bring together people from different disciplines and sectors to critically consider how different communities might be drawn to learning about AI, and to creatively iterate on ideas for how to promote agency with AI through different program formats in a variety of informal learning contexts.
The conference attendees were divided into eight groups. Each group was given a set of cards spanning three categories:
Two AI literacy learning goals connected to promoting agency in using AI, interrogating AI, and designing with AI,
A particular audience, and
A program format (e.g., an exhibit or afterschool program).
The sessions were not intended to generate full-fledged program ideas that could be implemented. Instead, the goal was to provide evocative examples of how one might approach the design of AI literacy experiences in informal settings that center humans first, rather than the technology.
What follows are summaries of the design ideas that came out of this creative process. We hope these examples provide a process and a way of thinking about where informal educators might go next in developing programs, and offer fodder for the AI education community to think outside the box about programming. As you read through, consider: “What unique needs do these ideas fill?”, “What concerns remain?”, and “Would this work with our communities?”
Given the recent rapid development of AI technology, we are eager to share these exploratory ideas with the informal learning and AI fields, and we are seeking feedback and insight from colleagues, experts, and community members to form a community of practice that ensures a human-centered approach plays a critical role in future AI education.
List of the Program Ideas
Project Title: Barrio Broker
Team Members
Tracy Allison Altman, Museum of AI
Phaedra Boinodiris, IBM/Trustworthy AI
Andres Henriquez, Education Development Center
Sarah Ketani, New York Hall of Science
Stephen Uzzo, Museum of Mathematics
Learning Goals: Awareness of bias in AI and how to interrogate it; reimagining how AI can be used to meet human needs
Audience: Families/Intergenerational learning groups
Program Format: Flexible pop-up exhibit in a science museum or kiosk for libraries where trusted intermediaries can facilitate
Program Idea and How it Works
This team came up with an interactive kiosk where all family members get to input their preferences for what they would want in the neighborhood where they would like to live. The AI decision-making system would be trained exclusively on BuzzFeed lists to recommend places, resulting in a high likelihood that the output will be biased and a poor match. The kiosk could include multimodal features that show how the AI is being trained on the data, to make this process transparent. The kiosk may also include less emotionally charged examples to orient family members to notice how inputs into the system influence the decisions the AI system makes. Facilitators can discuss the importance of AI transparency and opt-out options, discuss where AI influences decisions in daily life, and suggest actions families can take, such as writing to Congress or going to websites to learn more (e.g., AI for All, or a website version of Barrio Broker for hands-on exploration after the visit). To make this activity accessible to multiple age groups in a family, the team explored the idea of using object recognition to present categories to families (e.g., holding up a dolphin would signify that living by the ocean is an important criterion).
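To make the bias mechanism concrete, here is a minimal sketch (entirely hypothetical neighborhoods, features, and scoring, written in Python; not the team’s actual design) of a recommender whose “training data” comes only from listicle-style sources, so it simply cannot match the preferences those lists never mention:

```python
# Hypothetical "training data": neighborhoods scraped from listicles, tagged
# only with the features those listicles happen to celebrate.
LISTICLE_NEIGHBORHOODS = {
    "Trendyville": {"nightlife", "coffee shops", "coworking"},
    "Brunch Heights": {"nightlife", "brunch spots", "boutiques"},
    "Loft District": {"coworking", "art galleries", "coffee shops"},
}

def recommend(family_preferences):
    """Rank neighborhoods by overlap with the family's stated preferences.

    Because the data only describes listicle-friendly features, preferences
    like 'good schools' or 'near grandparents' can never be matched -- the
    bias lives in what the data leaves out.
    """
    scores = []
    for name, features in LISTICLE_NEIGHBORHOODS.items():
        overlap = len(family_preferences & features)
        scores.append((name, overlap / max(len(family_preferences), 1)))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    family = {"good schools", "near grandparents", "quiet streets", "coffee shops"}
    for name, score in recommend(family):
        print(f"{name}: matched {score:.0%} of what this family asked for")
```

A facilitator could run a toy like this on the kiosk screen, then invite families to add their own neighborhood descriptions and watch the rankings change.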
What is Interesting?
This design idea is an example of how AI may be used to help address a problem worth solving for intergenerational families (weighing the factors that matter in figuring out the best place to live) and how bias can enter the equation. The team was inspired by a real-world Canadian example of an AI-automated decision-making system for the placement and integration of immigrant families into neighborhoods, which raised critical issues about how beneficial the system was for the people it was designed to help: from the data employed, to the motivation behind using a given system, to the actions triggered by the models. This idea illustrates the importance of crafting experiences where families can question how underlying values and perceptions shape the decisions AI systems make, based on the kinds of data entered and the rules one assigns to analyzing that data. As one participant said, “garbage in gets garbage out.”
What Success Might Look Like
Families have a better understanding of AI and bias and how that can affect them.
Families are interested in engaging with information about AI after leaving the kiosk.
The project gets featured on WNYC or other relevant dissemination channels.
Canada changes its policies based on lessons learned from diverse communities.
Concerns
How can we explain data bias to families that may not be in the practice of examining bias or working with such tools?
Can we demonstrate what a proper AI system would look like? Proper for whom and by whom?
Project Title: ChatGPT Town Hall
Team Members
Kevin Dilley, Explora
Errol King, Regenerative Labs
Clifford Lee, Northeastern University
Rafi Santo, Telos Learning
Trevor Taylor, New York Hall of Science
Learning Goals: Explore tensions and paradoxes in AI – Ask what if?
Audience: Middle school learners
Program Format: Community town hall
Program Idea and How It Works
This team’s program idea was based on a real-life experience of one of the members who had an interaction with a parent and teacher about the use of ChatGPT in school and its role in “cheating”. The team was curious to explore what the role of education is in AI, and what AI's role is in education. The program was designed for students and teachers encountering tensions around the use of generative AI in school assessments.
In the program, students and adults would be assigned to separate affinity groups to discuss whether the use of ChatGPT is cheating, so they can be open and honest without fear of repercussions. Students would be asked to explore how, why, and when to use ChatGPT within current pedagogy at school. Teachers and other adults would be asked to discuss the larger macro and historical forces that may inhibit rapid change in applying AI in the curriculum (school bureaucracy, traditional ideologies of learning, etc.). The hope is that these separate affinity groups would eventually be brought together to reflect on their experiences and perspectives, creating an intergenerational space for all stakeholders (students, teachers, administrators, parents/guardians) to more critically surface the kind of learning that is possible and AI’s place in it. The long-term goal is to develop transformative learning experiences that integrate the conscientious and appropriate use of ChatGPT/AI. The key talking points for discussion may include the following:
Youth & adults need safe, trusting, affinity spaces to process, explore, and reflect on the use of ChatGPT in learning environments
Grounding in Mindfulness
Understand why students are using it
Understand teachers’ fears, worries, concerns, feelings of inadequacy, burnout, stress, and constraints
What’s working/not working with current classroom curriculum, pedagogy, and assessments?
Prioritize the whole student/teacher (feelings, agency, body response, heart, spirit)
Meta-reflection on ChatGPT - affordances and limitations
Engage in honest, reflective, inquiry-centered dialogue to transform traditional schooling structures and systems.
Why school? What are we learning? Why? How are students demonstrating their learning?
Developing meta-technological and meta-educational reflexive fluency, using that fluency to change what education looks like locally
Sticky Problem
How do we connect different stakeholders (students, teachers, administrators, parents/guardians) around the use(s) of ChatGPT in learning?
What is Interesting?
The program addresses the elephant in the room in the current education system and asks the core question of what education looks like and means in a world of AI.
What Success Might Look Like
Students have opportunities to reflect on both the conditions under which they’re ‘educated’, and why, how, and when they look at technology like AI within that education process.
Students would be able to voice what they’d want their education to look like, and what role AI would play in it.
Teachers engage in similar reflection on the above, from their own perspective.
Students and teachers come together to voice and solve problems related to AI’s role in assessment, and transform what assessment/curriculum looks like in their school.
Concerns
Larger macro and historical sociopolitical forces that inhibit change (disciplinary silos, school bureaucracies, and traditional ideologies of learning, curriculum, pedagogy, and assessment among students and teachers).
Distrust between teachers/students
Project Title: From the Past to the Future -- Using AI for Critical Storytelling for Families of Emergent Bilinguals
Team Members
Stefania Druga, Hackidemia
Azadeh Jamalian, The Giant Room
Jasmine Maldonado, New York Hall of Science
Peggy Monahan, Oakland Museum of California
Minerva Tantoco, New York University
Learning Goals: Foster Data Literacy/Explore Tensions and Paradoxes in AI
Audience: Mixed Age Groups/ Emergent Bilinguals
Program Format: Afterschool Program
Program Idea and How It Works
This team wanted to explore the uncertainties of AI. Participants (ideally whole families) bring their own memories into the program and create stories together, from “there to where/past to futures,” in collaboration with generative AI that produces text and images.
This is an afterschool program in a community center. Participants will use AI to predict the future in a playful way (“What if” scenarios). Potential challenges include language barriers (communication can be tricky with a bilingual audience) and exploring tensions and paradoxes in AI (can AI be trusted?). Families will tell the AI about their family, for example, “We are a family of four (e.g., two parents and two siblings)” or “I’m one of the siblings, and we’re these ages,” and then provide prompts related to their future: “Tell us our story 20 years from now and imagine we’re super rich.”
The AI tool will generate a story in multiple languages and create visual representations of their projected futures. The children can play with the AI tool and try to refine the story to reflect how they feel, and, at the end, create a time capsule of their story (e.g., a book). Participants can evaluate the story, update the prompts and inputs to create a book they connect with, and then come back to the book in five years to see how it compares. Participants can bring in their own pictures to enhance the story. If parents don’t attend, children can share these stories with their parents at the end of the program and then bring the time capsule home together.
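As a rough illustration of the story-generation step, here is a minimal sketch that assumes the OpenAI Python SDK (v1+); the model name, prompt wording, and languages are placeholders that facilitators would adapt, and the actual program could just as easily use a different generative tool:

```python
# Minimal sketch of the "tell us our story 20 years from now" step.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def future_story(family_description, what_if, languages):
    """Ask a generative model for a 'what if' future story, told in each language."""
    prompt = (
        f"Our family: {family_description}\n"
        f"What if: {what_if}\n"
        "Tell a short, hopeful story about our family 20 years from now. "
        f"Write it once in each of these languages: {', '.join(languages)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(future_story(
    "We are a family of four: two parents and two siblings, ages 9 and 12.",
    "we become super rich",
    ["English", "Spanish"],
))
```

Families would then critique and re-prompt the output until the story feels like theirs, which is where the conversation about data, bias, and trust can happen.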
What is Interesting?
This is a playful use of generative AI for storytelling and future visioning. It offers participants the chance to explore the creative potential AI affords through building representations while also offering opportunities for agency, questioning the outputs, and tweaking the inputs.
What Success Might Look Like
The participants share their memories, plan and envision different futures, and engage with generative AI text, images, and translations to create stories. The families learn about prompt engineering in the process and examine AI biases, tensions, and paradoxes. As they engage with the AI tool, they ask many “what ifs” (e.g., What if our family gets rich? What if our family goes to Mars? What if we input this memory into the prompt?). The program is about going beyond AI and leading people to imagine what AI can be for them.
Concerns
Families need to make the time to join the program together, which is especially challenging for families with two working parents
How might we facilitate family conversations across age gaps?
Cultural differences and language barriers as families reflect and share their memories and future aspirations
Project Title: Promoting Data Literacy Through Youth-Led Community Events
Team Members
Tara Chklovski, Technovation Families
Peta-Gay Clarke, Google NYC
Katherine Culp, New York Hall of Science
Delia Meza, New York Hall of Science
Michael Uffer, Pictgen.ai
Learning Goals: Promoting agency about how data can be shared with trust and mined without trust
Audience: High school students on a date
Program Format: Museum-based Program
Program Idea and How It Works
This design idea focused on promoting agency, activism, and joy with all types of technology, not just AI. The program would take place in a community space, much like a dance or community party, run with the help of a trusted advisor. The goal would be to create events or party experiences that address how AI surveillance and data are being used to make decisions in the surrounding environment. This program would be a safe space where participants are invited to really experience the range of choices they have made, to notice how instrumented they are as a data source, and to consider how much they are “shut off.” One potential way this might be operationalized is through a dance club environment where the data that youth are generating (on their phones, or through their own dance?) impacts what images appear on the wall or how the music is played. Youth are invited to make a series of choices about all of their data assets while in that space.
What is Interesting?
What is interesting about this idea is the notion of making the invisible visible: the choices you make can influence and shape what you see and the information you access.
Concerns
How to pull this off in a low-tech manner that could take place in settings where youth like to gather.
How to successfully balance having youth explore important issues underlying surveillance while at the same time promoting agency by having them make choices about the data they share.
Project Title: Date Night Career Adventure with AI
Team Members
Diana Ballesteros, New York Hall of Science
Xaq Pitkow, Carnegie Mellon University
Margaret Honey, New York Hall of Science
Lisa Soep, Vox Media/YR Media
Esme Tovar, Tech Interactive
Learning Goals: Learn How Machines Perceive the World and How Humans Think Differently
Audience: High School Students on a Date
Program Format: STEM Career Night/Digital Tool
Program Idea and How It Works
This team’s idea focuses on high school students who want to flirt and explore career opportunities. The problems it addresses include demystifying career pathways and looking at what the job market will be like in the future (and how AI is going to impact it). It is the antidote to the usual boring career exploration events. The group wanted to explore the larger idea of how humans and machines understand the world through entropy (the number of possibilities that can be experienced or produced). Both humans and machines learn from a lot of input, but machines need a lot more entropy to understand the world. Entropy was included in this project to reflect what happens when machines don’t have enough data.
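For readers unfamiliar with the term, the short sketch below (an illustration added for context, not part of the team’s design) uses Shannon entropy to make “the number of possibilities” concrete: the more equally likely options a choice has, the more bits of entropy it carries, and a model trained on too little data effectively sees far fewer possibilities than exist in the world.

```python
# Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0.
from math import log2

def entropy_bits(probabilities):
    """Entropy of a discrete distribution, in bits."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))                # 1.0 bit  (a fair coin flip)
print(entropy_bits([0.25] * 4))                # 2.0 bits (four equally likely careers)
print(entropy_bits([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits (one near-certain outcome)
```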
The program will consist of a self-guided (with your date) immersive experience or a portable pop-up (it can be set up at various stations). The AI tool will combine tactile human experiences with a digital interface and will be future-thinking.
The experience itself includes:
Choosing objects representing what you love and want
AI takes a picture of the objects and the user describes what the objects mean to them
ChatGPT generates a career narrative based on these choices and descriptions
DALL-E generates images of that narrative
Output shows people what that actual job would be
Shows how a whole life would be created by ChatGPT (prompted by the question: Is this a good life?)
An Example of Career Exploration AI Output: You’re a Green Travel and Wellness Consultant!
interact with sustainable farming initiatives
arrange farm-to-table experiences
collaborate with local hair salons for eco-friendly personal care
curate eco-conscious travel itineraries
Real Lives & Real Jobs
What is Interesting?
It’s playful
Experience has an emotional arc
Immediate youth agency
Both whimsical and attainable
Concerns
Co-design required (i.e., it is being built for teens, so it should involve teens)
Existing AI tools can reinforce bias
What Success Might Look Like
Learners understand what AI generates and how it operates
It helps youth break the ice on dates
It offers opportunities for teens to imagine a career they would love
It reveals differences in how humans and machines understand the world
Where to Go Next?
This can have applications in all kinds of contexts. Novel uses of objects that have meaning to teens make this fun and expressive. Thinking about how AI can be enlisted to help people reimagine their futures is an interesting underlying theme in many of these design ideas, but this one offers a playful approach to centering what youth care about most as the criteria an AI uses to suggest a potential job.
Project Title: Jeopardy for Prompt Engineering
Team Members
Clarissa Buettner, Tech Interactive
Maria Janelli, Scratch Foundation
Priya Mohabir, New York Hall of Science
Craig S. Watkins, University of Texas at Austin
Learning Goals: Develop prompt engineering skills (skills needed to tell AI what to do)
Audience: Middle school learners
Program Format: Pop-up community experience
Program Idea and How It Works
How can we develop prompt engineering skills among middle school students at NYSCI during a pop-up community experience? This team translated the original card, “prompt engineering skills,” into a game of Jeopardy. Topics in the game would be relevant to the local community, and the game would be offered in multiple languages.
Participants first pick a category and clue (e.g., “books for a hundred dollars,” where the answer is the “Harry Potter series”) and then try to figure out what prompt leads to this answer (e.g., participants may guess that the prompt was “Children's books adapted into 8 films,” “Who is the greatest youth wizard?,” or “Which book generated the most box office revenue?”). Participants would then put these guesses into the chat to see the actual prompt that was used (in this case: “What is the greatest book of all time for middle schoolers in the United States?”).
Questions worth more points are more complicated than those worth fewer (e.g., 200-point questions are harder than 100-point questions). For example, a 400-point clue could be “The Night Diary,” and potential guesses could be “What was the greatest young adult science fiction novel in the last 10 years?,” “What is a good book to explore the night sky?,” or “What is a book that middle schoolers would find scary?” The actual prompt, in this case, was: “The greatest book of all time where both main characters aren’t white.”
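As one low-tech possibility (an assumption for illustration, not the team’s specification), guesses could be scored against the hidden prompt with simple word overlap; a richer version could instead feed each guess back to the chatbot and compare its answers to the clue:

```python
# Score guessed prompts against the hidden prompt by word overlap (Jaccard).
def word_overlap(guess, hidden_prompt):
    stop = {"what", "is", "the", "a", "an", "of", "for", "in", "to", "was"}
    g = {w.strip("?.,'\"").lower() for w in guess.split()} - stop
    h = {w.strip("?.,'\"").lower() for w in hidden_prompt.split()} - stop
    return len(g & h) / len(g | h) if g | h else 0.0

HIDDEN = "What is the greatest book of all time for middle schoolers in the United States?"
GUESSES = [
    "Children's books adapted into 8 films",
    "Who is the greatest youth wizard?",
    "Which book generated the most box office revenue?",
]
for guess in sorted(GUESSES, key=lambda g: word_overlap(g, HIDDEN), reverse=True):
    print(f"{word_overlap(guess, HIDDEN):.2f}  {guess}")
```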
What is Interesting?
This idea is a good example of robust form and function. Using the game convention of Jeopardy, which is all about questioning, invites learners into what is clearly becoming an essential skill with large language models and AI in general: what are you asking, and what are you learning from the response? It’s a fun and creative way to introduce the concepts of prompt engineering and bias in LLMs.
What Success Might Look Like
Success looks like laughter, a friendly competitive spirit, and kids talking to each other and wanting to keep playing. Participants understand the concept of bias in LLM-based AI.
Concerns
Challenges include picking the right topics, making sure that there is a winner, getting this age group into the museum, and the attention span of middle schoolers.
Where to Go Next?
This is something that can be done in multiple contexts beyond the museum. Thinking through how to draft the right prompts and answers is an interesting challenge to explore further, one that can inform what exactly it takes to get good at prompt engineering.
Project Title: Family Stories as Forays into Prompt Engineering for Seniors
Team Members
Emily Reardon, Sesame Workshop / New York University
Susan Letourneau, New York Hall of Science
Pam Davis, Wellbotics
Jeffrey Lambert, Queens Public Library
Kate Mashak, New York Hall of Science
Learning Goals: Developing prompt engineering skills (skills needed to tell an AI what to do)
Audience: Older adults
Program Format: Digital games
Program Idea and How It Works
The program aims to use digital tools to help intergenerational families engage with AI in informal education settings. The goal is to create a learning experience that invites and welcomes older adults to better understand the processes of creating AI models. To achieve this goal, the tools will build on older adults’ life experiences and will allow families to share and feel connected. The initial idea is to use AI to reflect on families’ stories, such as using photos of family reunions, old songs grandparents love, recipes of traditional dishes, and old books or movies that families enjoyed together before.
Families will provide shared input and play with the AI tool; the more they put into the tool, the more things might come up that generate family interest.
What is Interesting?
This takes into account a very different audience and is very human-centered in its approach to using AI to gather and reflect on families’ stories and those things older adults cherish. This project begins to answer the question: "What if AI loved us, what would it find?"
What Success Might Look Like
Older adults feel more comfortable and familiar with AI tools
Younger generations are given resources/prompts for meaningful interactions with elders
Family history / untold stories are captured and recorded for future generations
Concerns
Balancing privacy, safety, and trust when interacting with AI tools
Integrating the user experience of older people into a technology built by a younger generation (perspective taking, empathy, etc.)
Where to Go Next?
This design idea raises interesting questions about genealogical studies and what AI’s role can be in them, as well as how stories can serve as data. It would be interesting to think about what families from different cultures might do with a tool like this.
Project Title: Drawing for Diversity--Exploring Why Diversity in Data, People, and Problems Matters in How AI Does Its Work
Team Members
Dorothy Bennett, New York Hall of Science
Peta-Gay Clarke, Google NYC
David Ping, Amazon
Amanda O’Donnell, Cornell Tech
Theo Watson, Design I/O
Learning Goals: Learning why diversity in AI matters
Audience: International high school students
Program Format: Game or digital tools
Program Idea and How It Works
This team designed a program for international high school students in informal learning environments to explore why diversity in AI matters. Since AI is going to affect high school students more than any other generation, the team wanted the students to feel in control (how they affect the outcomes rather than being affected by the outcomes of AI) and also to see how their own cultural perspectives and experiences can affect the kinds of outputs and decisions AI systems provide. The core idea being explored is the need for diversity and the different layers of diversity needed for AI to work: diversity in the creators and users of AI, as well as how culture, race, gender, accessibility, etc., affect how data and information get represented and analyzed through AI.
When brainstorming, the group members considered students’ interests in future jobs (what job fields will look like), the global markets students would be interacting with (and may even be familiar with from their points of origin), and how AI would impact them. To keep high school students’ interest, the group designed a model tool using Teachable Machine in which students would input drawn data and train the model on that data. The tool could be applied to a range of data they create on subject matter young people care about or are familiar with (e.g., their houses, video games). The tool would be a social game in order to make it relevant and personal to the students.
The digital tool would run on an iPad or in a web browser and would consist of challenges based on drawings from different people (the user interface), and youth could see how the model changes when users add more data (more pictures). For example, the group came up with the idea of building a digital house classifier. The model would be trained to determine “what a house is,” and with more data, the definition of a house could change (this also helps students understand the concept of exclusion in AI). The tool could be made into a game where each team’s model is ranked and participants have to collect more data to improve their rank and make the model more accurate (showing the need for more data diversity in AI). The team liked this idea because it can show how scale and diverse inputs influence results (e.g., international students may have very different ideas of what a typical house looks like based on their cultures and countries), differences that show up in the data. In the group’s mock-up of this idea, the Teachable Machine model in fact did not recognize one team member’s drawing of her house in Jamaica, which varied from the drawings of houses included from other cultures.
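Because Teachable Machine is a browser tool trained on images, the sketch below is only a toy stand-in (hypothetical hand-coded drawing features and a one-nearest-neighbor rule) that reproduces the dynamic the team observed: a narrow training set rejects the flat-roofed Jamaican house drawing, and a more diverse one accepts it.

```python
# Toy stand-in for the house classifier. Each drawing is hand-coded as
# binary features: (has_pitched_roof, has_chimney, has_flat_roof).

def nearest_label(example, training_data):
    """Label of the closest training example by Hamming distance."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(training_data, key=lambda item: distance(item[0], example))[1]

# Round 1: drawings from a single cultural context.
narrow_training = [
    ((1, 1, 0), "house"),        # pitched roof with a chimney
    ((1, 0, 0), "house"),
    ((0, 0, 0), "not a house"),  # e.g., a drawing of a tree
]

jamaican_house = (0, 0, 1)       # a flat-roofed house
print(nearest_label(jamaican_house, narrow_training))   # -> not a house

# Round 2: add drawings from more cultures; the definition of "house" widens.
diverse_training = narrow_training + [((0, 0, 1), "house"), ((0, 1, 1), "house")]
print(nearest_label(jamaican_house, diverse_training))  # -> house
```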
Extensions of This Idea:
Using data-training tools on music or slang to show differences among countries: a slang translator, or a classifier that lets you look at slang similarity (how slang terms get shortened, generational changes)
Writing chatbots using different rules
Career placement tool where students write the rules and see the outcomes
What is Interesting?
Explores diversity on two levels--the need for diversity in creators and users, and how culture, race, gender, accessibility, and other intersectionalities affect how people get represented in AI and how that affects outcomes
See how scale and diverse input affect results.
Understand exclusion in AI in a simple and direct way.
Empower students to be designers and thinkers
Leverage cultural experience and diversity as a superpower
Demystify AI and make it feel like something they can be a part of.
What Success Might Look Like
Students understand AI from a quality of data perspective
Students understand how data can affect bias and outcomes.
Students actually start using it.
Students iterate over time and improve their models
Students see how their lived experiences make a difference in the outcomes they get out of AI
Concerns
Motivating high school students: what subject matter would hold their interest?
Ease of use for facilitators
Access to technology
Time commitment
Possible oversimplification of the topic
Might need a variety of tools to address different aspects of diversity in AI