In this part of the training, we aim to build a foundational understanding of common language and definitions. What is AI? How long has it been around? How are we already using it? How does it actually work?
AI describes computer programs that complete cognitive tasks typically associated with human intelligence. A cognitive task is any mental activity, such as thinking, understanding, learning, and remembering. Cognitive abilities enable people to make effective choices and thoughtfully solve problems; yet, there are limits to how much information humans can process. This is where AI comes in — extending our information-processing skills, supporting creativity and innovation, and expediting routine tasks.
Machine learning refers to the process of a machine “learning.” Think of it like training a pet. You toss your dog a ball and say, "Fetch." Then you reward her when she brings it back. Over time, she learns to associate “fetch” with the action, even if you use a different ball or toy.
Machine learning (ML) algorithms are similar: They learn from examples and improve their ability to identify patterns as they're exposed to more information. Put simply, an algorithm is a set of rules that a computer follows to solve problems. So, much as a dog learns fetch through practice, ML is used to develop computer programs that can analyze data and make decisions or predictions without needing explicit instructions for every single situation.
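To make that distinction concrete, here is a minimal sketch in Python of an explicit algorithm: a fixed set of hand-written rules that labels quiz questions by difficulty. The keywords and threshold are invented purely for illustration; the point is that a person has to anticipate every case in advance.

```python
# A hand-written algorithm: explicit rules the computer follows step by step.
# The keywords and length threshold below are invented for illustration.

def label_difficulty(question: str) -> str:
    """Label a quiz question as 'easy' or 'hard' using fixed, explicit rules."""
    hard_keywords = {"integral", "derivative", "proof"}  # rules chosen by a person, not learned
    words = question.lower().split()
    if any(word.strip("?.,") in hard_keywords for word in words):
        return "hard"
    if len(words) > 20:  # another hand-written rule: long questions count as hard
        return "hard"
    return "easy"

print(label_difficulty("What is 2 + 2?"))                     # -> easy
print(label_difficulty("Prove the derivative of x^2 is 2x"))  # -> hard
```

An ML approach, by contrast, infers patterns like these from examples rather than relying on a person to write out every rule.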
Supervised Learning: This type of ML trains programs using labeled data, similar to using flashcards where one side has a question (data) and the other has the answer (label). For example, a machine learning model might use a dataset of quiz questions labeled by difficulty to analyze a student's math performance over time (a brief code sketch of this idea follows these three definitions).
Unsupervised Learning: This ML method uses unlabeled data to let a program identify patterns on its own, without predefined answers. It can be used for tasks like grouping news articles by their content, categorizing images into landscapes or portraits, or summarizing documents to highlight key points.
Reinforcement Learning: In this approach, a program learns to make better decisions through feedback; positive feedback encourages it to repeat successful actions. It's often used where decisions build on each other, such as an AI tool that learns to provide more effective writing feedback based on prior interactions.
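To ground the supervised learning definition above, here is a minimal sketch in Python, assuming scikit-learn is available and using a tiny, invented dataset of quiz questions labeled by difficulty; the questions, labels, and choice of model are illustrative assumptions rather than a prescribed approach.

```python
# Supervised learning sketch: the model learns from labeled examples
# (question text -> difficulty label). The data below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

questions = [
    "What is 3 + 4?",
    "Add 10 and 7",
    "Prove that the square root of 2 is irrational",
    "Find the derivative of x squared times sin x",
]
labels = ["easy", "easy", "hard", "hard"]  # the 'answer side' of each flashcard

# Turn the text into word-count features, then fit a simple classifier.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(questions)
model = LogisticRegression()
model.fit(features, labels)

# The trained model can now predict a label for a question it has never seen.
new_question = ["Prove the triangle inequality"]
print(model.predict(vectorizer.transform(new_question)))
```

The key point is that the difficulty rules are never written out by hand: the model infers them from the labeled examples, which mirrors the flashcard analogy above.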
Generative AI often uses a combination of supervised, unsupervised, and reinforcement learning. All three approaches play distinct roles in conversational AI tools, which interpret human language requests, adapt to conversational context, engage in natural dialogue, and generate meaningful responses. Here’s how the three types of ML support conversational AI:
Supervised learning equips the tools with foundational dialogue data, enabling them to respond appropriately to common communicative cues.
Unsupervised learning enables them to interpret nuances in language, such as colloquialisms, that occur naturally in conversation.
Reinforcement learning further strengthens them by allowing the AI tools to improve their responses based on user feedback.
Advancements in machine learning helped pave the way for generative AI (genAI): AI that can generate new content, such as text, images, or other media. The influence of genAI extends to a wide range of sectors, including drug research and discovery, industrial design, architecture, and fashion. For instance, by generating novel molecular structures, genAI can accelerate the development of life-saving medications. It also enables the creation of unique product variations and visionary building concepts, and designers can tap into genAI to inspire one-of-a-kind patterns and personalize clothing for the perfect fit.
Many conversational AI tools are based on large language models (LLMs). These are AI models trained on large amounts of text, which enables them to identify patterns between words, concepts, and phrases in order to generate effective responses to prompts. LLMs work by predicting the next-most-likely word or words in a sequence, using context clues.
If you type in a prompt field, "Debate the pros and cons of …"
Depending on the patterns the LLM has learned from its training data, some predictions could be …
cats versus dogs
tennis versus pickleball
TV shows versus movies
Or perhaps you type, "The difference between …"
In this case, the LLM could predict that you want to learn about the difference between …
mean and median
mitosis and meiosis
affect and effect
Understanding how LLMs predict word sequences will help you navigate conversational AI tools more effectively because you can adjust your prompts to get the most relevant responses.
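To make the prediction idea concrete, here is a toy Python sketch that ranks likely continuations of a prompt using a small, hand-made frequency table; the phrases and counts are invented for illustration, and a real LLM computes these probabilities with a neural network trained on vast amounts of text rather than a lookup table.

```python
# Toy next-word prediction: given the end of a prompt, rank likely continuations.
# The counts are invented; a real LLM learns probabilities from huge text corpora.

continuation_counts = {
    "the difference between": {
        "mean and median": 42,
        "mitosis and meiosis": 35,
        "affect and effect": 23,
    },
    "debate the pros and cons of": {
        "cats versus dogs": 18,
        "tennis versus pickleball": 9,
        "TV shows versus movies": 7,
    },
}

def predict_continuations(prompt: str, top_k: int = 3) -> list[tuple[str, float]]:
    """Return the top_k continuations for a prompt with their (toy) probabilities."""
    counts = continuation_counts.get(prompt.lower().rstrip(" ."), {})
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda item: item[1], reverse=True)
    return [(phrase, count / total) for phrase, count in ranked[:top_k]]

print(predict_continuations("The difference between"))  # 'mean and median' ranks highest
```

Because the predictions depend entirely on the context the model is given, small changes to the wording of your prompt can shift which continuations the model considers most likely.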
Take a few minutes to reflect on the prompt below individually, using the "Stop & Jot" form to gather your thoughts, and then share out with the group.
Share examples of how you've used AI, just within the last week, in your personal or professional life. How did these interactions with AI impact your activities or decisions? Did you notice any particular benefits or challenges during these experiences? Feel free to discuss both the expected and unexpected aspects of using AI. This will help us understand the diverse ways AI is integrated into our daily lives and how we might further leverage it in our educational practices.