This is a very important question because AI-powered tools can be wrong. They sometimes make up facts—AI developers call these falsehoods “hallucinations.” They sometimes give wrong math answers and might even tell you your answer is right when it isn’t. How should you deal with this?
Never assume the answer the tool gives you is 100% accurate: Don’t rely on it when it is really important for you to be right.
Trust your instincts: If you’re saying to yourself, “that doesn’t seem right,” it may not be! It is easy to convince yourself the AI must be right and you must be wrong. Instead, investigate further.
Consult other sources: It’s usually a good idea when learning new things to do this anyway, but it’s especially important when learning with AI-based tools. Think like a fact checker. What are the most essential facts to verify? Where else could you verify information?
Remember the situations where the tools are more likely to hallucinate: They won’t know about current events, because models take time to train and are not always up-to-date. As language models, they aren’t always reliable at math. And they aren’t always able to link to other web sources unless they have direct access to those sites.
There are many different ways AI tools can help you learn. Here are a few examples:
Get unstuck on a math problem: If you don’t know what to do first, what the next step is, or just want to check if you are thinking the right way, you can interact with the AI to help.
Find out how or why something works the way it does: “Why is the sky blue?” “How does a radio work?” You may not be able to explain, but an AI tool can help you find the information and engage you in a fascinating conversation.
Ask it to simplify a confusing concept, and then quiz you on it: With a little practice with your prompting, you can start using an AI-powered chatbot to prepare you for a test.
Ask it to brainstorm activities to help you learn: LLMs can provide great ideas for making learning fun and relevant—just ask.
Experts agree that it’s important for educators to experiment with AI technologies before introducing them into the classroom, so they can get a sense of the tools’ strengths and weaknesses. Some teachers are already using AI to create lesson plans, provide feedback to students, and communicate with parents.
Students need to learn how to use AI technologies as assistants or advisers. One way to do that is for teachers to allow the use of Gemini/ChatGPT for class assignments but require students to acknowledge and document how it was used. For example, a student who used Gemini/ChatGPT to get feedback on a draft essay can explain which of the tool’s suggestions she agreed with and which she didn’t. With this approach, students learn to use the tool as a partner instead of having it do all the work for them, and it can also strengthen their ability to evaluate and analyze writing.
What are the planning tasks that take you the longest? Could an AI-powered tool that is really good at summarizing and writing help you?
Here are a few ideas to get your creative juices flowing:
Generate ideas for lesson hooks: Need an engaging introductory question, or a quick activity to grab attention and spark your students’ curiosity? Ask the bot for some ideas.
Compile ideas for hands-on classroom activities: AI-powered tools are incredible idea generators. You won’t like all of their ideas, but it can be fun to pick and choose the ones that are the best match for your classroom, and improve them with your imagination, insight, and expertise.
Generate feedback and suggestions for improvement on drafts of student writing: This use case might sound controversial, but it doesn’t have to be. Any feedback the tool offers is yours to consider. You can keep it to yourself, or review and revise it before sending it to the student.
Some teachers have found uses for Gemini/ChatGPT and similar tools that help diverse learners with their assignments. For example, teachers can use ChatGPT to reduce Lexile levels—the measure of how difficult a text is—for English learners. Special education teachers are already using AI to minimize paperwork and generate individualized education programs, or IEPs.
However, teachers need to remember that generative AI tools can spit out biased or incorrect information. Researchers also found that some AI cheating-detector tools incorrectly flagged writing by non-native English learners as being AI-generated.
In addition, special education services often involve sensitive information that would be risky, or potentially even illegal, to share on a publicly accessible AI platform that absorbs all the data it receives.
Perhaps the biggest potential learning benefit of this technology is the personalized, differentiated, tutor-like interactions AI can generate.
With so many different levels of learning in your classroom, you know how difficult it is to reach all students at their level. There are too many students and not enough of you.
Here are some ways AI-powered tools might be able to help:
Get your students unstuck: You simply can’t be next to every student as they work through questions and solve problems. This type of technology can help coach students through a question.
Enable deeper conversations: You likely have engaging, deep discussions in your classroom, but does every student participate? With AI tools, each student can engage with deeper questions (How? Why? What if?), respond, and receive encouragement and validation.
Help your students see relevance, which we know increases motivation: How many times have you been asked, “When will we need to know this?” AI tools can help your students make those personalized connections.
Improve student writing by providing feedback on drafts: What will AI’s role be in the future of writing instruction? What writing skills do students really need in an AI age? These are provocative open questions—we don’t know the answers, but there is much room for experimentation.
This is definitely a hot topic. Given that a core principle of academic integrity is the honest attribution of authorship, there are a number of paths:
You can forbid all use of these models: This might sound straightforward, but then you have to figure out a way to detect when AI is being used. This will become exceedingly difficult as the models become more advanced. Our view is that detection technologies won't be able to keep up.
You can allow use of the models but require students to acknowledge their use: In addition, you could ask students to document each prompt they crafted and/or describe how they used the AI tool, then reflect on that experience.
You can design assignments that are difficult or impossible to use these tools with: For example: Assignments that rely heavily on in-class discussions or activities of which the model would have no knowledge.
You can have students complete assignments in class, where you can monitor which tools are used.
AI tools are trained at a certain time, and the data sets they are trained on are not updated regularly, so these tools can provide outdated information or can fabricate facts when asked about events that occurred after they were trained.
And because the data sets on which these AI tools are trained contain biased information, that bias also shows up in the responses the tools generate. Left unchecked, these tools could amplify harmful stereotypes and fuel misinformation and disinformation.
There are various professional development opportunities available for teachers to learn about AI, including workshops, online courses, webinars, conferences, and educational resources provided by reputable organizations or institutions focused on AI in education.
Reach out to your TiC for more information.
AI models, like the ones we use in our school, are tools designed to process information and assist with tasks in ways that mimic human intelligence.
These models can analyze text, solve mathematical problems, generate creative content, and even interact in conversational language. To understand these models, it's helpful to think of them as advanced computer programs that learn from large amounts of data to make predictions or decisions. At our school, we use AI to enhance learning, offering personalized support and enabling innovative educational methods. We ensure that the use of AI is always aligned with educational goals and safeguarded by ethical guidelines to benefit our students' learning experiences.
The best way to understand these tools is to try them yourself: ChatGPT, Google Gemini, or Microsoft’s Bing.
It is important to recognize that AI tools like LLM-based chatbots are widely accessible to the public. Monitoring your child's interaction with these technologies should be part of a broader strategy for overseeing their internet and technology use.
It is important to discuss responsible technology use with your child and explain that using AI tools when prohibited by a teacher goes against the academic integrity of the school.
Encourage them to share their experiences and what they learn using AI tools. This can help you understand their level of engagement and any potential issues.
You should think about using these tools responsibly in the same ways you guide your children to use the internet and other technologies. Teach them the WHYs and the HOWs of what responsible use looks like.
Guidelines might include:
Avoid conversations that could be categorized as hateful, violent, or sexual in nature
Acknowledge using it to help with drafts or feedback on schoolwork
Ask their teachers what their policies are about using it, and then comply with the policy
You can also start to help your child see how these tools can serve as helpful collaborators. To get started, you might try writing a story together with the tool, or ask it to answer your child’s questions about the world and how it works, and then discuss the answers together.