*Before the AI research experiment, we had submitted consent forms and a form about our prior knowledge of AI and our previous engagement with Generative AI.
Phase 1: Group Formation
First, we gathered into small groups at tables where cameras had been placed for data collection. It was explained to us that it is easier to distinguish who is speaking when there is video material in addition to audio. In our groups, we talked about how we had used AI so far in general, and about challenges related to our studies that AI could help us with.
Phase 2: Scenario-based talk and experimentation with "Annie"
We were given a sheet of paper with five scenarios describing situations where a student might need assistance.
The scenarios are shown on the right:
After everyone had formed a general idea of how they wanted to approach their conversation, we started interacting individually with the chatbot through the WhatsApp API. We went through the scenarios in reverse order, starting from the last one.
Reflecting on this now, using WhatsApp subconsciously created a familiar environment, which made my writing style more informal and "friendly".
Phase 3: Discussion between the scenarios
After everyone was satisfied with the bot's answers to a scenario, we discussed them all together. We shared the prompts we had given, compared the bot's answers, and looked for any differences.
The researcher also asked us to discuss the following within the group after each scenario:
Were the bot’s suggestions helpful in your opinion?
What challenges did you encounter during the conversation with the bot?
What topics came up during the conversation with the bot?
How did the conversation with the AI affect the solutions you created?
After talking these points through, we combined our thoughts into a common answer that we wrote down on paper.
Phase 4: Group discussion with the researcher / Reflection
At the end of the experiment we were asked:
How did it feel to discuss these themes with the bot?
In what kinds of situations did you consider the chatbot's input reliable?
Could you imagine asking AI for help in situations like those in the scenarios? Why or why not?
We first discussed how we felt when talking with the chatbot and asking it to assist us with various problems, since some questions were more personal than others.
I replied that it initially felt strange to discuss personal problems such as loneliness with the chatbot, because it felt like an intimate conversation I would have with a friend. That feeling gradually went away, though, and I was able to chat comfortably with the chatbot.
Someone else mentioned that they didn't like the chatbot using emojis in the conversation, which gave them a creepy feeling.
Regarding the cases in which we consider AI reliable and trust it to help us, one member of our team answered, "I trust it with things that I can then cross-check myself", which I found myself agreeing with. When conversing with an AI system, it's important to already have some knowledge of the topic you are asking about, so that you stay in control of the situation instead of relying solely on the AI's output.
Someone else mentioned that it's not very reliable with references and links, because it often hallucinates them or provides broken links.
Regarding whether we could imagine asking for help in scenarios like the ones on paper, I think most of us agreed that we would ask for help with practical issues, but we wouldn't treat a chatbot as a friend by expressing deeper thoughts or asking for emotional support. None of us felt comfortable doing that.
These screenshots are evidence that I did find one part of my conversation with the chatbot useful!
The fourth scenario aligned closely with issues I myself have with limiting distractions in my digital environment, so the chatbot's answers gave me some insights into setting up my learning environment in a way that better suits my needs, along with multiple tool suggestions.
*Note from the future:
Coming back to this reflection a few months later, I am now actively using Forest to help me stay focused. I first learned about this app through my conversation with the chatbot, so it really did prove useful to me over time.
At the end of the workshop, after Anni Silvola's presentation on Artificial Intelligence, we had a general discussion about our thoughts on AI, the fears surrounding it, and its ethical implications.
What I found interesting about this discussion is that even though the term "AI" has been clearly defined scientifically multiple times, I believe not everyone treats it as a scientific term; rather, many treat it as a separate entity.
For example, when someone asks "What do you think AI will be able to do in the future?", using "AI" as the subject of the sentence attributes to it an ability to act on its own. I think this separates AI from every other technology in people's minds, because they don't necessarily think of it as a technology that we humans control, but as an entity that can act, make decisions, and advance on its own (which is obviously a misconception).
I thought afterwards that this is really interesting, because the way we talk about technology can contribute to people's misconceptions and shape their fears, especially around AI, a topic that has been "butchered" across all kinds of industries. A typical example is the film industry, where dystopian futuristic movies have presented AI in various forms over the years: cyborgs, androids, robots, hyperrealistic creatures, and so on. Even though those sci-fi constructions are blatantly unreal, they have a huge impact, because the storylines around them are crafted to resonate with people's hopes for the future, even with the existential questions people have about where humanity is heading. When someone connects to a story emotionally, it becomes easier for them to form a distorted version of reality and, in a way, to transmute their fears into this black-box umbrella term (for the general public) called AI.
Having said the above, I believe a combination of things has shaped the general public's consciousness, which is understandable given that not everyone can have a technical background and be aware of the mechanics and algorithms running behind AI systems. I believe we have now reached a point where it doesn't matter much whether someone agrees or disagrees; it will become more and more of a necessity, in education as well as in professional life, to be aware of AI, of how it can support humans, and of the ethical dilemmas it raises.
Merikko, J., & Silvola, A. (2024). An AI agent facilitating student help-seeking: Producing data on student support needs. In M. Hlosta, I. Moser, & B. Flanagan et al. (Eds.), Joint Proceedings of LAK 2024 Workshops, co-located with the 14th International Conference on Learning Analytics and Knowledge (LAK 2024) (pp. 185–194). CEUR Workshop Proceedings, Vol. 3667. CEUR-WS.org.