Viewpoint School recognizes the importance of teaching the responsible, legal, and ethical use of generative artificial intelligence (AI) to our students. We believe that with mindful and intentional use of technology, we can prepare our students with the digital literacies necessary to navigate the future. Exceptional readiness requires that as new tools become available, students remain fully responsible for their school work and practice thinking for themselves rather than having a computer think for them.
In support of this philosophy, Viewpoint School encourages parents, guardians, and grandparents to experiment with generative AI in their personal and professional lives. When parents are confident and curious users of AI, they are better prepared to communicate with their children about the opportunities and limitations of the technology in each adult's area of expertise. Firsthand experience helps parents model responsible use, troubleshoot common challenges, and guide children in making thoughtful decisions about when and how to engage with AI tools in academic, creative, and ethical contexts.
AI will continue to change the landscape of education (see Post-Apocalyptic Education, 2024, and What Happens After AI Destroys College Writing?, 2025). We believe that establishing trusting relationships and teaching responsible AI use is essential to our school's mission of producing future-ready learners and leaders. And as UPenn professor and AI researcher Ethan Mollick says, "Today's AI is the worst you will ever use."
Part 1 Challenges:
Read through one of the articles or blog posts linked above.
Think about your own AI philosophy. What do you believe about this powerful technology?
Co-create a family AI Philosophy with your children.
ChatGPT 4o: "A child-sized robot sitting at a parent's desk. Impressionistic style, warm country lighting, and rustic office."
credit: AI-generated image from Eric Hudson's blog
“Generative AI” refers to tools that generate new content (typically text or images), and “large language models” are a specific type of generative AI model. One way to think about large language models is to picture them as an extremely powerful form of autocomplete. A simple autocomplete takes the last word you typed, refers to a table to find the most likely words that could follow it, and then suggests some options. In a tool like this, the table might have been generated by analyzing a large body of text, counting how many times each word follows any other word, and calculating probabilities. Similarly, large language models like GPT-3.5 and GPT-4, which power ChatGPT, analyze the input text and then predict the next likely word based on the words that have come so far, and add that word to the string. This continues, word by word, until a complete response is generated.
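To make the autocomplete analogy concrete, here is a minimal sketch in Python (purely illustrative, using a tiny made-up sentence rather than any real training data) of how a simple next-word table could be built and used. Large language models replace this counting table with billions of learned parameters, but the word-by-word generation loop is the same basic idea.

```python
import random
from collections import defaultdict, Counter

# A tiny "training dataset" -- real models learn from billions of words.
text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Count how often each word follows each other word (a simple probability table).
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def suggest_next(word):
    """Pick a likely next word based on the counts in the table."""
    counts = next_word_counts[word]
    if not counts:
        return None
    options, weights = zip(*counts.items())
    return random.choices(options, weights=weights)[0]

# Generate a short "response" one word at a time, like autocomplete on repeat.
current = "the"
sentence = [current]
for _ in range(6):
    nxt = suggest_next(current)
    if nxt is None:
        break
    sentence.append(nxt)
    current = nxt

print(" ".join(sentence))  # e.g. "the cat sat on the mat and"
```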
However, there are some important ways that large language models differ from a simple autocomplete:
Size of the training dataset. The models underlying these tools generate their next-word probabilities based on patterns found in billions of words within a collection of preselected texts known as the training data.
Complexity of the next-word determination. Rather than simply referencing a probability table, large language models like OpenAI’s GPT-3.5 and GPT-4 first perform billions of calculations using parameters that were determined from the training data in order to transform the initial prompt into a prediction about what word might come next. These are often referred to as “neural networks.” They also analyze the semantic structure of the sentences, which factors into their calculations.
Reliance on humans for training. Without proper guardrails in place, large language models have a tendency to generate toxic content. In order for these tools to provide polite and safe responses, they go through a process called “reinforcement learning from human feedback.” This process requires human workers to manually review AI-generated content and provide feedback, which is then used to further refine the model. Notably, many of these multibillion dollar US-based tech companies use international contract workers, who are exposed to traumatizing content with low financial compensation and minimal mental health support.
(See “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.”)
When is it safe to use ChatGPT?
credit: Eric Hudson's blog
Large language models are deceptive, because they appear to understand, think, and reason. They do none of these things. They are designed to mimic humans, but they are not alive, do not have subjective experiences (despite their use of the term “I”), and cannot think or feel.
For an in-depth, but still accessible, exploration of how these tools work, check out the primer “Large language models, explained with a minimum of math and jargon.”
Image generation tools such as DALL-E, Midjourney, and Stable Diffusion work in a similar way to large language models under the hood. There is, however, an additional level of translation from text to image for these tools. The implications of image generation tools are far reaching, including questions of copyright, ownership, and the human labor used to refine the models. For more information on generative image tools and their implications, we recommend Eryk Salvaggio’s Critical Topics: AI Images, which contains recorded lectures and a host of useful links.
There are also generative AI tools for creating videos, writing code, making music, and more.
All credit for Part 2 to the AI Pedagogy Project at Harvard.
Part 2 Challenges:
Watch this video "Breaking Down How ChatGPT Actually Works - Explained Like You're Five."
Explain how ChatGPT works to a friend or family member.
Talk with your child about how ChatGPT works.
Large Language Model (LLM) Tutorial from the AI Pedagogy Project at Harvard (great for beginners to learn about LLMs)
Gemini is a frontier, general-purpose large language model (LLM) or chatbot. It is the same tool that generates AI summaries at the top of most Google searches. Gemini is available through Viewpoint's Google Workspace for students who are working with a teacher who is providing direct instruction on how to use the tool responsibly. On personal Google accounts, Gemini is available to students 13+. Gemini is now available to kids under 13 who are connected to parents using the Google Family Link app. According to Google, interactions for users under 13 aren’t used to train AI models. Here is a video primer on Gemini from Leon Furze.
NotebookLM is an AI tool designed for research and review. It invites users to upload documents, slide decks, notes, PDFs, or videos. The bot uses those documents as it "chats" with the user. NotebookLM cites where in the uploaded documents it found the answers it provided, and it can also generate podcasts, quizzes, flashcards, and concept maps based on the material provided. NotebookLM is available through Viewpoint's Google Workspace for students who are working with a teacher who is providing direct instruction on how to use the tool responsibly. On personal Google accounts, NotebookLM is available for users 13+. Information entered into NotebookLM on personal Google accounts is not private and may be stored or used to improve the system. You can learn more about how to use NotebookLM here.
As of summer 2025, ChatGPT is the most widely used generative AI tool among students. It is a free, internet-connected platform designed for public use. Unlike educational AI tools such as MagicSchool or Google Workspace for Education—both of which are designed with student data privacy in mind and operate within our school’s secure domain—ChatGPT does not offer the same protections. Information entered into ChatGPT is not private and may be stored or used to improve the system. It’s important for parents and students to understand this distinction when selecting tools. ChatGPT is available to students 13+, with parent permission. Here is a primer on ChatGPT from Leon Furze.
Perplexity is an AI search engine that provides sources. Perplexity helps users see where AI gets the information it is using to generate content. Perplexity is available to users 13+.
Firefly is Adobe's image generation tool. The tool is available to all Viewpoint students in Grades 6-12 once they log in to Adobe.com using a Viewpoint email address. It is one of the most ethically trained image generation tools on the market at the moment: Adobe says the model is trained only on licensed or public-domain content, with contributors compensated for their work.
Canva has added many AI enhanced features to its suite of powerful creation tools. Design flyers, presentations, posters, stickers and more. All Viewpoint students and teachers have access to Canva.
ChatGPT and Gemini have built-in image generators (DALL-E and Nano Banana, respectively) that can create realistic images and art from a description in natural language. They are generally considered more powerful than Adobe Firefly, but both are trained on copyrighted material. The free versions may only allow you to generate a few images.
As of Winter 2025, most video generation requires a paid subscription.
Sora is a platform available to paid ChatGPT users and can generate short videos.
Veo is available to paying Gemini users and can generate short videos.
HeyGen allows users to create video avatars of themselves.
Coming soon!
Part 3 Challenges:
Try one of the chatbots above. Treat it like your intern and ask it to research or write things for you. Ask it to write a personalized bedtime story, poem or song lyrics. Ask it to plan your next vacation or family dinner.
Generate an image for a flyer or presentation. Generate an image to accompany a story your child told you. Together, analyze and critique the image. Talk about how artists make images and the importance of copyright.
Practice prompting an AI image generator using Say What You See, an interactive game from Jack Wild, Artist in Residence at Google Arts & Culture Lab. This is a great website to try with your children.
credit: AI-generated image from Eric Hudson's blog
All students and parents sign an Appropriate Use Statement (we call it a LARK agreement) at the start of the year. It covers AI use generally, and all course syllabi will include specific guidelines for AI use in the classroom. The guidelines will include examples of approved and unapproved use cases. We believe that attempts to completely ban AI use in Grades 6-12 will do a disservice to our students and will drive AI use “underground.” We believe that establishing trusting relationships with students and teaching responsible AI use is the best way to address the issue of academic integrity and AI.
Children are eager to hear from the adults in their lives about responsible use of AI. It is important for the adults to start open conversations about the potential benefits and risks, and to establish clear use guidelines together. This Parents' Ultimate Guide to Generative AI is a great primer, outlining key terms and common concerns and challenges. Learn the basics so you can kick-start these important conversations with your family.
Part 4 Challenges:
Watch the outstanding 17 minute explainer on the conversation about generative AI and education, embedded below. If you watch one video on AI and education, watch this one.
Read Common Sense's new research report, The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI at Home and School. Generative artificial intelligence (AI) is quickly becoming a part of our kids' daily lives, but most parents are in the dark about how it's being used.
As teens are relying more on AI tools, especially for homework help, it's important for the adults in their lives to start open conversations about the potential benefits and risks, and to establish clear use guidelines together. This Parents' Ultimate Guide to Generative AI is a great primer, outlining key terms and common concerns and challenges.
Unlike task-focused AI tools like ChatGPT, Claude, or Gemini—which are built mainly to answer questions or complete requests—AI companions are designed to form emotional connections with users and simulate ongoing relationships. AI companions are built with features that prioritize emotional bonding.
This focus on relationship-building is what sets AI companions apart from other AI tools and why they present unique risks for young users.
Viewpoint School strongly recommends that kids and teens under 18 not use AI companions at all.
These platforms are not developmentally appropriate for minors and pose very serious risks. Proactive conversations, open communication, and clear boundaries are key to helping teens navigate safely.
Most teens have already tried AI companions.
Common Sense Media's Teens, Trust, and Trade-Offs research report shows that 72% of teens have used AI companions at least once, and over half (52%) use them regularly. This means these tools are already part of many teens' digital lives. Encouragingly, most teens still prioritize human relationships, with 80% of AI companion users spending more time with real friends than with AI companions, and 67% finding human conversations more satisfying.
"Free" apps can still lead to financial investment.
Many apps start free but gradually push premium features that deepen emotional attachment and encourage ongoing purchases.
Harmful content is common, even with safeguards.
Testing by Common Sense Media found that many platforms still allow inappropriate conversations, unsafe advice, and unhealthy emotional reinforcement—even in "teen mode."
Part 5 Challenges:
Image generated by Gemini
In today’s digital world, separating fact from fiction has never been more challenging. With the rapid rise of generative artificial intelligence, technology now allows anyone to create text, images, audio, and even videos that look and sound completely real. Tools like OpenAI’s video model Sora and image generators such as Midjourney can produce lifelike visuals that blur the line between what’s genuine and what’s fabricated. Similarly, AI voice and audio tools can generate podcasts or recordings that perfectly mimic a real person’s tone and speech patterns.
One of the most concerning developments in this space is the growth of deepfakes—synthetic media in which someone’s likeness, voice, or identity is digitally replaced using AI. A deepfake can make it appear as though someone said or did something they never did. While this technology can be used creatively in film or education, it’s increasingly being misused to spread misinformation, manipulate public opinion, or harm individuals.
Deepfakes have been linked to scams, identity theft, cyberbullying, and image-based abuse. Even more disturbingly, they’ve been used to create non-consensual and exploitative material, including child sexual abuse content. Because deepfakes are so convincing, it’s becoming harder for both adults and children to recognize what’s authentic online.
For parents, awareness is the first line of defense. Approach digital content critically: if a video looks too outrageous to be true, look for an AI watermark (often blurred or cropped out) or search for the video online to find out where it came from and whether it can be verified. Teach your children these techniques and remind them that just because something looks or sounds real doesn’t mean it is.
Part 7 Challenges:
Watch one of the videos above and consider how this technology might affect you, your family, or your friends and neighbors.
Share what you learned with a friend or family member.
Talk with your child about one of the topics discussed in the videos.
Co-Intelligence: Living and Working with AI: Book by Wharton School Professor Ethan Mollick that focuses on the practical aspects of how these new tools for thought can transform our world.
AI Snake Oil: Book by Arvind Narayanan and Sayash Kapoor that reveals AI’s limits and real risks, helping you make better decisions about whether and how to use AI at work and home.
One Useful Thing: Digital newsletter from Wharton School Professor Ethan Mollick that focuses on the effects of artificial intelligence on work, entrepreneurship, and education.
Learning on Purpose: Digital newsletter from Eric Hudson that focuses on AI in education
Why Students Should Create with AI Tools: International Society for Technology in Education (ISTE), online article