I was surprised to see in the paper how interested and invested the students seemed in the topic. The discussions appeared to be a much more valuable asset to the study than any of the assignments or individual work the students were given. The authors mentioned that participation varied significantly between activities, ranging from 30% to 80%. I think one of the most important things in a study involving youth participants is keeping them actively engaged throughout the entire process. A 30-hour course at 3 hours per weekday is a lot of time to spend each day during a summer session. To keep students engaged and actively participating, I think there needs to be a careful balance between in-class engagement and out-of-class exercises. The study was also conducted fully virtually, so I wonder whether participation would have gone up or down in person. Would students be more likely to participate in person, or does being connected virtually through a screen improve participation rates? Lastly, I liked the activity in the study where the students took a quiz and were given a variety of jobs they might like, and were then able to see how AI impacted each of those jobs. I think a major part of early developmental learning is seeing yourself in the position you want to be in as you grow up. If students were shown a set of jobs that involved AI but had no interest in pursuing any of them, I would expect their interest in AI to decline.
I found it interesting how the paper described the approach to developing AI as incremental: essentially taking baby steps toward the human-level intelligence that is desired. I think this is important for our research, because we will need an approach that lets us start with something small and continually build on it until we've reached our desired outcome. While on the topic of our project, I think it would be beneficial to incorporate something hands-on that allows participants to better understand what is going on. The paper discussed how "one way of dispelling student misconceptions about ML is to engage in embodied interaction." If we consider this early in our planning, we could see better results at the end of our research. I also found it surprising that there was no way to hold anyone accountable for errors that could negatively affect people's lives. I had never really thought about what happens, or who the blame falls on, when an AI system makes a mistake. Should it be the developers, those who built the dataset, or the company as a whole? I think this is another problem where we haven't had enough practical experience with AI in our day-to-day lives to see how it can really affect us. With the creation and widespread use of ChatGPT, I assume we will start to see many more problems arise with AI; one we already know a lot about is using AI to cheat. When teaching about AI to people who may not know much about it, there should always be a conversation about the ethics surrounding AI, in an attempt to dissuade users from abusing it.
I haven't been able to use ChatGPT in a while because it has been at capacity. I already have an account and will update once I get access.