Oh… AI is such a boon and a bane. According to Touretzky's research, the number of students using AI increased significantly in 2019. These programs are meant to streamline our learning tasks, but they can also harm us in a variety of ways: reducing our ability to think critically, replacing actual consultations with our teachers, enabling cheating, and so on. So, where do we draw the line?


Step one: it all lies in the programming. We don't know the reasoning behind the answers these programs provide, do we? Seo's research from 2021 confirms this lack of explainability. Accordingly, developers can build a feature into these programs that gives a more detailed explanation of each answer, so that students can gain a deeper understanding of the information presented.


In addition, developers must guarantee that AI systems draw only upon credible sources when generating answers.


Furthermore, technology truly needs more humanity. Designers should program it to filter out potentially offensive material that targets specific groups based on race, gender, sexual orientation, and the like.


We've made it through the first step; brace yourself for the second! We should acknowledge that learner-instructor interaction is critical. To begin, as the saying goes, "experience is the best teacher": educators should give their students hands-on experience with AI programs so they understand the potential issues, as backed up by Cukorova's study in 2019. A concern, however, is that learners can use AI to cheat, since it offers anonymity and fast processing of information. Therefore, educators must take a close look at which programs their students will access, and when and how. Next, considering that these tools affect the quality of students' output, and thus play a role in grades and college admissions, teachers need to ensure that the grading system is fair and that their lessons are tailored to their students' needs. Thirdly, AI tends to collect data from its users, such as personal information, to allow for personalization, which raises security and privacy concerns. Accordingly, it is on educators to have students use the school's own computers and internet, which can be closely monitored to block access to potentially dangerous websites, apps, and extensions. Finally, they should advise students that the best way to protect their privacy when using online programs is to log in as a guest rather than creating a personal profile.


Phew! Down to the last step. Third, and most importantly, keep an eye on yourself. The choice is in your hands. The key is to find the right balance between AI-driven technology and human interaction. We can work effectively with AI by using it to compile data from scholarly sources, check grammar, and so on. On the other hand, we should not blindly trust AI-generated summaries without reading the original text in its context. And to avoid plagiarism, we should always cite our sources.


Let’s recall! Step one: encourage developers to design these programs ethically. Step two: assistance from teachers is a must. Step three: do not misuse AI for malicious purposes. AI in academics should not be used to reduce learning to a set of standardized procedures that limit student agency, but rather to stimulate human creativity and improve the learning process.


You're all set! Together, let's make it ethic-AI!