What does it actually mean for something to be “intelligent”? A computer can look intelligent, but is it really thinking or understanding, or is it just following instructions exactly? This idea helps us explore how computers can appear smart without thinking like humans. In this unit, we move from rule-based systems to machine learning, focusing on Large Language Models (LLMs), which learn patterns from large amounts of text and use those patterns to predict the next word.
In pairs, you are going to play Noughts and Crosses. One of you will play normally, and the other will only do what the piece of paper says.
Swap over each time and play a few rounds.
You can also watch a chicken play the game.
*This is a simplified explanation of how AI works.
Artificial Intelligence is a complex and rapidly changing field. For this lesson, we are only looking at the basic ideas behind machine learning and how Large Language Models recognise patterns in data.
If you are interested, there are more detailed explanations in Level 2 Computer Science.
Kiwis are confusing, so we are going to make a machine learning model to tell them apart.
Large Language Models (LLMs) like ChatGPT and Gemini are computer programs that generate text by predicting one word at a time. They are trained on large amounts of text written by people and learn patterns about which words usually come next. When you type a prompt, the model looks at the words so far and chooses a word that is likely to follow. It repeats this process again and again to create sentences. LLMs do not understand meaning like humans do; they recognise patterns in language and use those patterns to produce text that sounds natural.
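The idea of "choosing a likely next word" can be sketched in a few lines of code. This is only a toy illustration, not how a real LLM works: real models use neural networks trained on billions of words, while this sketch just counts which word follows which in a tiny made-up training text.

```python
from collections import defaultdict, Counter

# A tiny made-up "training text" (an assumption for this example).
training_text = (
    "the cat sat on the mat the cat ate the fish the cat chased the dog"
)

# Count, for every word, which words come after it and how often.
counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    counts[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often
print(predict_next("sat"))  # "on" — the only word ever seen after "sat"
```

Notice that the program has no idea what a cat is; it only knows which words tended to come next in the text it saw. That is the same basic idea behind an LLM, just on an enormously bigger scale.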
As a class, make a one-word-at-a-time story to see how you can add words to build up a sentence or story.
In this worksheet, you are acting like a Large Language Model. You are choosing one word at a time based on what could reasonably come next, rather than planning the whole sentence in advance.
Contexto is a word-guessing game where your guesses are ranked by how similar they are in meaning to a hidden word. This is similar to how Large Language Models work, as they learn patterns and relationships between words based on how they appear together in large amounts of text.
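One simple way to see what "similar in meaning" looks like to a computer is to give each word a list of numbers (a vector) and compare the vectors. The vectors below are made up for this example; real models learn them automatically from huge amounts of text.

```python
import math

# Made-up vectors for illustration only; real models learn these.
vectors = {
    "cat":   [0.9, 0.8, 0.1],
    "dog":   [0.8, 0.9, 0.2],
    "piano": [0.1, 0.2, 0.9],
}

def similarity(a, b):
    """Cosine similarity: closer to 1.0 means more similar."""
    va, vb = vectors[a], vectors[b]
    dot = sum(x * y for x, y in zip(va, vb))
    size_a = math.sqrt(sum(x * x for x in va))
    size_b = math.sqrt(sum(x * x for x in vb))
    return dot / (size_a * size_b)

print(similarity("cat", "dog"))    # high: the vectors point the same way
print(similarity("cat", "piano"))  # lower: the vectors point apart
```

A game like Contexto ranks your guesses with a comparison like this: words that appear in similar contexts end up with similar vectors, so "cat" scores closer to "dog" than to "piano".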