I was surprised to see in the paper how interested and invested the students seemed to be in the topic. The discussions appeared to be a much more valuable asset to the study than any of the assignments or individual work the students were given. The authors mentioned that participation varied significantly between activities, ranging from 30% to 80%. I think one of the most important things in a study involving youth participants is keeping them actively engaged throughout the entire process. A 30-hour course at 3 hours per weekday is a lot of time to spend each day during a summer session. To keep students engaged and actively participating, there needs to be a careful balance between in-class engagement and out-of-class exercises. The study was also run fully virtually, so I wonder whether participation would have gone up or down if it had been in person. Would students be more likely to participate in person, or does being connected through a screen improve participation rates? Lastly, I liked the activity in the study where the students took a quiz, were given a variety of jobs they might like, and were then able to see how AI impacted each of those jobs. A major part of early developmental learning is being able to see yourself in the position you want to hold as you grow up. If students were given a list of jobs that involved AI but had no interest in pursuing any of them, I would assume their interest in AI would decline.
I found it interesting how the paper described the approach to developing AI as incremental: taking baby steps toward the human-level intelligence that is desired. I think this is important for our research because we are going to need an approach that lets us start with something small and then continually build on it until we've reached our desired outcome. While on the topic of our project, I think it would be beneficial to incorporate something hands-on that allows the participants to better understand what is going on. The paper discussed how "one way of dispelling student misconceptions about ML is to engage in embodied interaction." If we consider this early in our planning, we could see better results at the end of our research. I found it surprising that there was no way to hold anyone accountable for errors that could negatively affect people's lives. I had never really thought about what happens, or who the blame falls on, when an AI makes a mistake. Should it be the developers, those who built the dataset, or the company as a whole? I think this is another problem where we haven't had enough practical experience with AI in our day-to-day lives to see how it can really affect us. With the creation and widespread use of ChatGPT, I would assume we will start to see a lot more problems arise with AI. One we already know a lot about is using AI to cheat. When teaching AI to those who may not know much about it, there should always be a conversation about the ethics around AI in an attempt to dissuade users from abusing it.
Prompt: Write my professor an apology email because I cheated on an exam using chatGPT
Subject: Apology for Cheating on Exam
Dear Professor [Name],
I am writing this email to express my sincerest apologies for my behavior during the recent exam. I fully understand that cheating goes against the principles of academic integrity, and I take full responsibility for my actions.
By using ChatGPT to cheat on the exam, I failed to demonstrate the knowledge and understanding that I was supposed to have gained from the course material. I understand that cheating not only undermines my own education, but also undermines the value of the education received by other students.
I would like to assure you that this was a one-time mistake, and I am committed to working harder and doing better in the future. I understand that cheating can have serious consequences, and I am willing to accept any consequences that you may deem appropriate for my actions.
Once again, I apologize for my behavior and for any inconvenience it may have caused. I hope that you will consider my apology and grant me the opportunity to make things right.
Sincerely,
[Your Name]
I think the idea of having physical interaction with AI and being able to observe how inputs change its outcome will be important for us to keep in mind while we come up with ideas for our study. The paper noted the value of "taking concepts out of an abstract, mathematical realm and placing them into hands-on activities that rely more on social cognition." We saw something similar in the essay "Developing Middle School Students' AI Literacy," where participants were much more engaged in the social interaction with AI. I do not believe the only way to teach AI to students is to restrict it solely to physical and verbal interaction and exclude theory, but I think participants will get a much better understanding of how AI works behind the scenes if we let them interact through hands-on activities. I also found it interesting that the participants in the study were thinking about how the robots were thinking. They would try to interpret what the robot was "thinking," which I believe is good for both the participants and the researchers. It's good for the participants because understanding how AI may be interpreting its inputs is critical to being able to use AI, and it's good for the researchers because they can see what the participants are thinking and use that to gauge how well the participants understand AI.
The start of this paper really connected with me because my roommates and I have been playing poker together at our apartment for a while, and the more we play with each other, the better we get at reading one another. However, we only find it easy to read each other when it's just us roommates; when we play with other friends, our habits change and we develop a different play style. This is similar to how AI works: when you change the inputs, the results also change. I found the essay's first hypothesis interesting, which suggests that the larger the grid gets, the more complex the task becomes and thus the less likely the user will be able to correctly predict the robot's movements. When we increase the scale of the grid, we are also increasing the number of possible routes to the end goal. Users would have a much easier time on smaller grids simply because there are fewer options to choose from; a rough sketch of how quickly the number of routes grows is shown below. I think this is a good, simple visual explanation of why AI can be so powerful. It would be good to show our participants that one of the benefits of AI is that it can work with large quantities of data. A human is good at making a couple of decisions at a time, but as the number of decisions grows, a human gets significantly worse, while the AI handles the larger space just as consistently.
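As a back-of-the-envelope illustration of the grid hypothesis (this is my own sketch, not from the paper, and it assumes the robot only moves right or down toward a goal in the opposite corner), the number of distinct routes explodes as the grid grows:

```python
from math import comb

# Number of monotone paths (only right/down moves) from one corner of an
# n x n grid to the opposite corner: C(2n, n).
for n in (2, 3, 5, 8, 12):
    paths = comb(2 * n, n)
    print(f"{n}x{n} grid: {paths:,} possible routes")
```

On a 2x2 grid there are only 6 routes to keep track of, but on a 12x12 grid there are over 2.7 million, which is why predicting the robot becomes so much harder for a person while the program is unfazed.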
It seemed like a lot of the students were more focused on whether or not the program accurately predicted the object than on how the program interpreted what it was given. I think it would be beneficial for participants to work through, on their own, why the program failed to recognize what they drew. For example, if the image was too small, why would drawing something bigger let the program more accurately predict the object? Obviously the program was meant to be kept as simple as possible in order to better explain it to the participants, but I think this could have been a way to show how AI can constantly get better. Maybe implementing the three-layer convolution model and letting students switch between the two could show the participants that there are ways to make it better; a rough sketch of what that comparison might look like follows. That said, I agree it was best to use just the one layer for explanation purposes. I liked the matching exercise, and I think it was a good way to ensure participants understood what they were looking at. Just looking at the filters is one part, but being able to identify where they come from and see how they are used is much more important.
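To make that "switchable" idea concrete, here is a minimal sketch (my own, not from the paper) of what a one-layer and a three-layer convolutional model could look like in PyTorch; the class names, channel sizes, and 28x28 grayscale input are assumptions for illustration:

```python
import torch
import torch.nn as nn

class OneLayerCNN(nn.Module):
    """Single convolution layer followed by a linear classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2)
        )
        self.fc = nn.Linear(8 * 14 * 14, num_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class ThreeLayerCNN(nn.Module):
    """Three stacked convolution layers: more filters, same interface."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

# Both models accept the same 28x28 grayscale drawing, so an activity
# could swap one for the other with a single toggle.
drawing = torch.rand(1, 1, 28, 28)
for model in (OneLayerCNN(), ThreeLayerCNN()):
    print(model.__class__.__name__, model(drawing).shape)
```

Because the two classes share the same input and output shapes, participants could flip between them and compare the filters and predictions without any other part of the activity changing.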
The goal of this study was to support students' data agency, giving them a better understanding of AI so that they can become contributing members of an increasingly data-driven society. The students worked in small groups, allowing them to pool their ideas, experience, and social skills in the hope that they would learn more together than they would alone. The participants were sixth graders, roughly 12 to 13 years old. The study included an introductory task where the students drew and wrote anything they knew about AI. The participants then worked through three workshops in which their groups came up with ideas about something that would impact their everyday lives. In the first workshop, where the students discussed issues that could be solved using AI, there was a focus on using it to complete homework. This makes sense given the age range of the kids and that the homework they are doing is probably tedious and time consuming. All the groups shared the common theme of using AI to improve their own lives specifically, rather than looking at how it could improve the lives of the people around them. Overall, the study allowed me to see how students think about developing their own applications for AI and what goals they hope to accomplish with them.
Link to paper: https://www.sciencedirect.com/science/article/pii/S2212868921000222
When I think of ways to teach about AI, one of the best is a hands-on approach where the participants are directly involved with the AI. I would also like to emphasize how much more powerful AI can be when given a larger dataset. I'm thinking of creating a teachable identifier application where students input their own dataset one example at a time and see how the model gets more accurate as they add more data. For example, say they want to be able to identify a car. They would start by putting in a few images of a car, and the program wouldn't be able to tell what a car is from just those few images, but as they continue to add more, the more likely the program will be able to determine that it is a car; a rough sketch of this accuracy-versus-data idea is below. This would fit best under the Perception big idea.
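A minimal sketch of the "more data makes it better" demo, using scikit-learn's built-in digits dataset as a stand-in for the students' car photos (the dataset, classifier, and sample sizes are my assumptions, not part of any of the papers):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Small 8x8 digit images stand in for the pictures students would upload.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Train on progressively larger slices of the data, as if students were
# adding examples one batch at a time, and watch accuracy climb.
for n in (10, 50, 200, 800, len(X_train)):
    clf = KNeighborsClassifier(n_neighbors=3).fit(X_train[:n], y_train[:n])
    print(f"{n:>4} training examples -> accuracy {clf.score(X_test, y_test):.2f}")
```

The printed accuracy rising with each larger slice is exactly the effect I would want the students to see for themselves with their own images.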
For this idea I would like to work with ChatGPT. I think ChatGPT would be a great tool to aid discussion and let students be as creative as they would like. It would start with a few example prompts so students can become familiar with ChatGPT; then, as they get comfortable, they can start asking the questions they actually want answered. We could give the students quizzes about AI or other topics and allow them to use ChatGPT while tracking all the questions they ask it; a rough sketch of how that logging could work is below. This will give us an idea of how students are using AI to formulate their answers. We will make it clear that they are not allowed to directly copy and paste, but rather should use it as a tool to aid their answers. I think this best falls under the Natural Interaction big idea.
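A minimal sketch of how the question tracking could work, assuming the openai Python package (v1.x) with an API key in the environment; the model name, log file, and ask() helper are my own placeholders:

```python
import csv
import datetime
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
LOG_FILE = "student_questions.csv"

def ask(student_id: str, question: str) -> str:
    """Send a student's question to the model and log what was asked."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(), student_id, question]
        )
    return answer

# Example: a student probing a quiz question.
print(ask("student_07", "In your own words, what does it mean for AI to be biased?"))
```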
Another interesting AI tool is DALL-E 2, which allows you to enter a prompt and then produces multiple images related to that prompt. First we would show the participants how the program works and give a few examples, highlighting how the prompt relates to what was output. Then we would have the participants write a prompt in small groups and draw what they think the output of the program will be. Was the output what they expected, or was it completely wrong? How could they alter their prompt to receive an image more like what they were expecting? We could then talk about prompt engineering and why it's useful; a rough sketch of the generation step is below. I think this also falls under the Natural Interaction big idea.
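A minimal sketch of the generation step, again assuming the openai Python package (v1.x); the prompt, image count, and size are placeholders for whatever the small groups come up with:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One group's prompt; they sketch their guesses first, then we generate.
prompt = "a robot teaching a middle school science class, colorful cartoon style"

result = client.images.generate(
    model="dall-e-2",
    prompt=prompt,
    n=3,            # several images so the group can compare variations
    size="512x512",
)

for i, image in enumerate(result.data, start=1):
    print(f"Image {i}: {image.url}")
```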
The three papers that I read were "Introducing the Fundamentals of Artificial Intelligence to K-12 Classrooms According to Educational Neuroscience Principles," "Teaching Artificial Intelligence to K-12 Through a Role-Playing Game: Questioning the Intelligence Concept," and "Machine Learning for Middle Schoolers: Learning Through Data-Driven Design." I noticed that the studies each take their own approach to how best to get the participants to interact with AI, but all of them involved getting the participants to communicate in groups and interact with one another. I think the best method of the three was the role-playing game, because it let the students be creative while following a set structure of contextualization, testing, optimization, and debriefing. I believe it would be best to have a similar structure in our own project, with contextualization, testing, and debriefing being the most important. All of the studies also had a visual aspect that reinforced what the instructors were saying, and feedback from one of the papers suggested a need for even more visual elements to provide a better understanding. One quote I found interesting discussed different approaches to teaching and how there should be both open-ended and closed-ended problems. This allows some solutions to match exactly, but also allows others to take a more creative approach, letting participants think more openly without the idea in the back of their heads that there is only one way to solve the problem.

One thing I noted in the response write-up of one of the papers was that a lack of computational knowledge hindered critical development within the groups. This is something I noticed in all of the papers: they had to be careful not to make ideas too complex or get too far into specific details, while still going deep enough that the participants weren't missing required basic knowledge. In the brainstorming and idea-creation phase of one of the studies, the researchers cut the ideas down to just nine, choosing the ones they considered most feasible and best suited to the goal of the study, and then split the participants into groups based on those nine ideas. If we allow the students to be creative and choose their own ideas, we will need a phase where we look over what they came up with and filter out the ones we don't think will work. Lastly, I think it's important that the information we provide the participants can be used in their lives outside of our study. We want to leave a lasting effect on the participants and get them thinking about the technology they use in their daily lives and how AI affects it. One of the studies we previously read described how participants went home and talked about what they had learned at family dinner, wondering whether AI will eventually take over their parents' jobs. This is the type of interaction we want our participants to have, so it is important to keep these concepts in mind when developing our projects.
A lot of companies use AI to help with their hiring process, using it to give interviews and analyze resumes. One article noted that an algorithm Amazon used to evaluate resumes favored applicants who used words such as "captured" or "executed," which were more commonly seen on male resumes. The problem with bias in AI is that it will not give everyone involved a fair evaluation: it will rate some people better than others, and it's not always easy to tell. Other biases in AI can come up based on where you live. At my work we used software called "Claritas," which comes up with a generalization of someone based on their address. The output is intended to be used when first meeting with a patient, to get a better idea of what they like and don't like. We called it the stereotype generator because it wasn't very accurate and left a lot of room for misinterpretation. This is just one example of a tool being used today that has the potential to cause problems solely because of bias; a toy sketch of how this kind of bias creeps in from training data is below. I can't think of a way to remove the bias from this particular system, but from what I read in its outputs, it has a long way to go.
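As a toy illustration of the mechanism (entirely made-up data, not Amazon's system or Claritas), a classifier trained on past hiring decisions that happened to favor resumes containing certain words will learn to reward those words:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny, fabricated "past hiring decisions": resumes containing "executed"
# or "captured" were historically labeled as hires more often.
resumes = [
    "executed product launch and captured new market share",
    "executed migration plan for backend services",
    "captured requirements and executed delivery roadmap",
    "organized volunteer outreach and mentored new staff",
    "coordinated community programs and supported students",
    "mentored interns and organized training workshops",
]
hired = [1, 1, 1, 0, 0, 0]  # biased historical labels

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weights show the model now rewards the correlated words,
# regardless of whether they say anything about actual job performance.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
for word in ("executed", "captured", "mentored", "organized"):
    print(f"{word:>9}: {weights[word]:+.2f}")
```

The point of the toy example is that the bias comes from the labels the system was trained on, not from anything the words themselves mean, which is why it is so hard to spot from the outside.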
For this idea, students will be given a test with questions they wouldn't normally be able to answer. Alongside the test will be a search box that allows them to type in questions, along with a response box to see the results. The search box will be linked to the ChatGPT API, but we will not tell the participants it is ChatGPT. We will instruct the participants that they cannot copy directly from the response, but can use it to help formulate their own answers. The goal of this idea is to understand how students use ChatGPT to come up with answers to questions. We will save their prompts so we can see what the participants are typing in to arrive at their answers. We will also run their responses through AI-detection software to determine whether they copied straight from ChatGPT; a rough sketch of a simple copy check is below. We can incentivize the participants to try with some form of prize if they reach a certain score on the exam.
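A minimal sketch of the copy check, using a plain text-similarity comparison as a stand-in for commercial AI-detection software (the threshold and the looks_copied() helper are my own placeholders):

```python
from difflib import SequenceMatcher

def looks_copied(student_answer: str, chatgpt_response: str, threshold: float = 0.8) -> bool:
    """Flag answers that are nearly identical to the response the student saw."""
    similarity = SequenceMatcher(
        None, student_answer.lower().strip(), chatgpt_response.lower().strip()
    ).ratio()
    return similarity >= threshold

chatgpt_response = "Machine learning is a way for computers to find patterns in data."
print(looks_copied("Machine learning is a way for computers to find patterns in data.", chatgpt_response))  # True
print(looks_copied("It means a computer improves at a task by studying examples.", chatgpt_response))       # False
```

Because we save every prompt and every response the search box returns, this check only needs to compare each student's final answer against the responses they actually saw.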
I think this idea leaves a lot of room for discussing AI ethics, because it will help participants understand how using AI can lead to potential plagiarism issues they should be aware of. It can also serve as a lesson in how to properly use AI for learning instead of cheating. The exam will be a mix of challenging, thought-provoking questions and simpler ones where we are just looking for a correct response. The questions we ask will be important because they will determine the quality of the responses we get. We want questions that require participants to interpret what they receive from ChatGPT and understand the response well enough to put it into their own words.