The Word
Yoonseo Kim
Recent advances in artificial intelligence (AI) have made it possible for anyone to ask questions in plain language at low cost. As a result, these tools are now widely used in language teaching and learning contexts. In particular, generative AI shows promise in encouraging self-editing, as it can provide guidance that helps students revise their own writing. In writing instruction, immediate and individualized feedback is ideal, but teachers cannot always provide it; generative AI, by contrast, is not limited by time or location and can be consulted repeatedly. Although critical perspectives on generative AI certainly exist, many students are already using these tools extensively. Guiding students to use generative AI wisely for writing practice can therefore be both realistic and pedagogically beneficial. This section presents practical tips for guiding students to use generative AI effectively and critically in second language writing.
First, it is recommended that instructors hold an open discussion with students at the beginning of the semester about the use of AI. Through this discussion, instructors and students can negotiate clear boundaries regarding how and to what extent AI tools may be used. When completing writing assignments, students should be asked to explicitly state whether they used AI and to specify which parts of their work were supported by AI. It is also advisable to have students compose their texts in Google Docs so that revision histories can be reviewed when necessary.
Next, before receiving teacher feedback, students may benefit from obtaining AI-based feedback. Earlier automated essay scoring systems relied primarily on linguistic features such as grammar and vocabulary (e.g., Burstein et al., 2013). In contrast, generative AI models exhibit high rating consistency and can predict human judgments accurately, including on content-related aspects (e.g., Koraishi, 2024; Kim, 2025). To promote learning rather than simple correction, students can be encouraged to ask the AI to organize its feedback in a table that identifies areas for improvement in the current draft, explains why each area may be problematic, and suggests possible revision strategies. This approach shifts the focus from correction to reflection and understanding.
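A request along the following lines (one possible wording, not a prescribed template) could elicit this kind of table-formatted, explanatory feedback:

```
Please review my essay below. Organize your feedback in a table with
three columns: (1) an area for improvement in my current draft,
(2) why it may be a problem for the reader, and (3) one or two
possible revision strategies. Please do not rewrite the essay for me.
```

Students can then use the table as a revision checklist rather than copying corrected text directly.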
It is also important to teach students how to use appropriate prompts when interacting with AI. A prompt is a set of instructions or a question that steers generative AI toward a particular response. Several prompting strategies can be introduced:
One effective strategy is assigning a specific role to the AI, such as prompting “You are an experienced English language teacher grading student essays.” This often results in more targeted and useful feedback (Shanahan et al., 2023).
Another strategy is to request a rationale for the feedback rather than feedback alone. For example, students can ask, “Explain why the essay received that feedback.” This type of request elicits a chain-of-thought process in which complex judgments are broken down into intermediate steps, encouraging deeper reasoning (Wei et al., 2022).
If a rubric is available for the assignment, students can also ask the AI to refer explicitly to the rubric. Rubrics provide clear evaluation criteria and help ensure that feedback aligns with specific grading standards.
Combining all the prompting strategies discussed above often produces more accurate and useful feedback.
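As one illustration, the three strategies can be combined into a single prompt. The sketch below (hypothetical helper names; no particular AI service is assumed) assembles such a prompt as plain text that a student could paste into a chatbot:

```python
# A minimal sketch of combining the three prompting strategies.
# The function name and wording are illustrative, not a fixed template.

def build_feedback_prompt(essay: str, rubric: str) -> str:
    """Assemble a combined feedback prompt from the three strategies."""
    # Strategy 1: assign the AI a specific role.
    role = ("You are an experienced English language teacher "
            "grading student essays.")
    # Strategy 2: request a rationale, eliciting chain-of-thought reasoning.
    rationale = ("For each point of feedback, explain step by step "
                 "why the essay received that feedback.")
    # Strategy 3: tie the feedback explicitly to the assignment rubric.
    rubric_ref = "Base your feedback explicitly on this rubric:\n" + rubric
    return "\n\n".join([role, rationale, rubric_ref,
                        "Student essay:\n" + essay])

# Example use: the resulting text is pasted into (or sent to) a chatbot.
prompt = build_feedback_prompt(
    essay="My hometown is famous for it's beautiful beaches...",
    rubric="Content (5 pts), Organization (5 pts), Language use (5 pts)",
)
```

Presenting the combined prompt this way also makes each strategy visible to students, which can support a class discussion about why each component is there.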
Despite this promise, it is essential to recognize the limitations of generative AI. These tools often rely too heavily on surface-level features when assessing written texts and may struggle to evaluate aspects such as creativity and critical thinking. Ethical and pedagogical concerns also remain, including the risk that students may accept AI feedback uncritically, the potential for plagiarism and academic dishonesty, and possible negative effects on student motivation and engagement in the writing process. In addition, AI systems may exhibit bias or misinterpret cultural contexts. Rather than simply banning generative AI or ignoring these issues, writing teachers are encouraged to acknowledge these limitations and engage students in open discussions to make informed and wise decisions about AI use.
References
Burstein, J., Tetreault, J., & Madnani, N. (2013). The e-rater® automated essay scoring system. In Handbook of automated essay evaluation (pp. 55–67). Routledge.
Kim, Y. (2025). Automated essay scoring with GPT-4 for a local placement test: Investigating prompting strategies, intra-rater reliability, and alignment with human scores. TESOL Quarterly. https://doi.org/10.1002/tesq.3405
Koraishi, O. (2024). The intersection of AI and language assessment: A study on the reliability of ChatGPT in grading IELTS Writing Task 2. Language Teaching Research Quarterly, 43, 22–42.
Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role play with large language models. Nature, 623(7987), 493–498.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35, 24824–24837.
Author bio:
Yoonseo Kim is a PhD candidate in the Department of Second Language Studies at the University of Hawaiʻi at Mānoa. Her research focuses on second language writing assessment and AI-assisted assessment, and her work has been published in leading applied linguistics journals, including TESOL Quarterly. Her teaching experience includes teaching statistics for language research to undergraduates and working with undergraduate second language writers and Korean high school students.