Research
ORCID: 0009-0001-8268-1828
My research explores the pedagogical and ethical dimensions of integrating large language models (LLMs) into English language and literacy education.
A central aim of my work is to develop evidence-based frameworks that support both learners and teachers, with a particular focus on empowering multilingual students to achieve academic success in English writing.
Keywords: LLM-assisted teaching and learning, prompt engineering, human-centered AI integration
Just as you use specific strategies when starting a conversation with someone new, prompting is the art of communicating effectively with LLMs.
AI art generated by Copilot (Microsoft Copilot, 2025).
Poorly structured prompts can result in robotic, unclear, and ineffective LLM-generated content.
Even subtle changes in how you word a prompt can significantly impact the quality of the output.
You can craft a prompt from scratch and use it as a template for similar tasks.
It's important to remember that LLM-assisted writing still requires careful proofreading, fact-checking, and plagiarism screening to ensure accuracy and originality.
The context: Support learning through LLM-mediated writing.
The overarching goal: By focusing on strategic prompt design, the goal is to harness the power of LLMs in writing instruction, empowering both learners and teachers throughout every stage of the writing process.
Examples of effective strategies used in high-quality prompts.
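The idea of crafting a prompt once and reusing it as a template for similar tasks can be sketched in a few lines. The function and template wording below are my own illustration, not a tool or template from the research:

```python
# Minimal sketch: a reusable prompt template for similar writing tasks.
# The template text and names (build_prompt, etc.) are illustrative assumptions.

TEMPLATE = (
    "You are an experienced writing tutor for multilingual students.\n"
    "Task: give feedback on the {genre} draft below.\n"
    "Focus on: {focus}.\n"
    "Audience: {audience}.\n"
    "Draft:\n{draft}"
)

def build_prompt(genre, focus, audience, draft):
    """Fill the template so the same structure can be reused across tasks."""
    return TEMPLATE.format(genre=genre, focus=focus, audience=audience, draft=draft)

prompt = build_prompt(
    genre="argumentative essay",
    focus="thesis clarity and paragraph organization",
    audience="a first-year composition instructor",
    draft="Social media changes how students write...",
)
```

Because the slots (genre, focus, audience) are explicit, a writer can swap in new values for each task instead of rewording the whole prompt, which also makes it easier to notice how small wording changes affect the output.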
To address the question, “What makes a good prompt?” I turned to the theoretical framework of the rhetorical situation (Bitzer, 1968).
The core idea behind a rhetorical situation is that it “invites a fitting response,” which closely parallels the process of crafting prompts to elicit meaningful outputs from language models.
The theory of rhetorical appeals—ethos (credibility), logos (logic), and pathos (emotion)—can be strategically embedded into LLM prompts to enhance LLM-generated responses.
Together, these rhetorical strategies enable more human-centered LLM interactions.
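One way to operationalize the three appeals is to give each its own slot in the prompt. This is a hedged sketch of that idea; the component labels and wording are my assumptions, not a published prompt format:

```python
# Illustrative sketch: embedding ethos, logos, and pathos cues in one prompt.
# The slot names and example text are assumptions for demonstration only.

def rhetorical_prompt(role, evidence_request, tone, question):
    """Combine the three rhetorical appeals into one structured prompt string."""
    parts = [
        f"Role (ethos): Act as {role}.",           # credibility of the persona
        f"Reasoning (logos): {evidence_request}",  # demand for logic and evidence
        f"Tone (pathos): Respond in a {tone} tone.",  # emotional register
        f"Question: {question}",
    ]
    return "\n".join(parts)

p = rhetorical_prompt(
    role="a peer reviewer for an undergraduate journal",
    evidence_request="Support each suggestion with a concrete example from the draft.",
    tone="encouraging but candid",
    question="How can the introduction better signal the essay's main claim?",
)
```

Separating the appeals this way makes each rhetorical move visible and revisable, which mirrors the kind of audience-aware drafting the framework asks of human writers.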
The AI-integrated Self-Directed Writing (AI-SDW) framework, as proposed in the research, reimagines LLM-assisted writing not as an isolated activity but as a vibrant ecosystem where personal growth and collective insight come together.
The framework calls for the intentional integration of support systems, both human (instructors, tutors, peers) and non-human (LLMs, shared prompt databases).
An AI tutor acts like a sparring partner: it offers structured practice, delivers personalized feedback, and serves as the writer's first audience for their fresh ideas.
A human teacher monitors LLM-student interactions by reviewing chatbot transcripts, brings real-world experience to the table, and guides students in developing AI literacy.
Human teachers and LLMs play different roles, but ultimately, they share the same goal—to support the learner.
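This division of labor might be encoded in a chatbot's system message, with the transcript logged for the teacher's review. The wording below is a hypothetical sketch, not the framework's actual configuration:

```python
# Hypothetical sketch of a system message reflecting the AI-tutor role
# described above (sparring partner, first audience, personalized feedback).
# Logging every turn supports the human teacher's transcript-review role.

SYSTEM_MESSAGE = (
    "You are a writing sparring partner, not a ghostwriter. "
    "Act as the student's first audience: react to their ideas, "
    "ask clarifying questions, and give personalized feedback. "
    "Never write whole paragraphs for the student."
)

transcript = []  # saved so a human teacher can review the interaction later

def log_turn(speaker, text):
    """Record each conversational turn for later teacher review."""
    transcript.append({"speaker": speaker, "text": text})

log_turn("system", SYSTEM_MESSAGE)
log_turn("student", "Here is my thesis: ...")
```

The explicit "not a ghostwriter" constraint and the reviewable transcript are two concrete ways to keep the AI tutor's role complementary to, rather than a replacement for, the human teacher's.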
A flowchart showing the iterative process of LLM-mediated writing: how writers can actually incorporate LLM tools and prompting strategies into real-world writing scenarios.
Aiming to bridge the learner experience gap in educational technology design, my research explores the pedagogical implications of strategic prompting with large language models (LLMs). Specifically, I focus on developing and testing practical prompting strategies that empower multilingual learners to use advanced LLM tools in ways that enhance both their writing proficiency and critical thinking skills. Prompting LLMs through the user interface is the most common and accessible way for everyday users to interact directly with these powerful technologies, making it a central point of human-computer interaction (HCI). My research investigates how to skillfully phrase prompts—queries that elicit more effective model responses—and foster more meaningful interactions.
The act of writing, at its core, is a transformative process. It’s the cognitive bridge that connects emerging ideas to clear, structured expression. For novice and second-language (L2) writers, it’s a process through which they not only learn how to write but also write to learn. This iterative practice of turning thought into text, guided by the conventions of academic discourse, is especially critical in the context of language development. Academic English writing, in particular, holds significant weight. It is a key determinant of academic achievement, playing a pivotal role in students' successes both within and beyond the classroom, even extending to their future professional lives. Moreover, as English continues its reign as the lingua franca of global scholarship, the ability to effectively communicate diverse writing goals and engage with international audiences becomes paramount.
The path to academic writing proficiency is often more challenging for multilingual learners. Beyond the inherent cognitive demands of code-switching, they face the dual challenge of mastering both sophisticated academic language and complex, discipline-specific content—often while working in environments that may not fully support their linguistic needs. These hurdles are often compounded by systemic opportunity gaps in education, where access to resources, quality instruction, and tailored support varies greatly. These disparities, often overlooked in favor of focusing on achievement gaps, contribute to a significant divide in academic performance between underserved communities and their better-resourced peers.
The advent of disruptive AI writing tools has dramatically reshaped the academic landscape, blurring the lines between who is the author and who is the audience to an unprecedented degree. This shift necessitates a re-examination of the principles that underpin effective human-machine interaction. While the achievement gap continues to garner attention, the more deeply rooted issue of opportunity gaps often remains understudied. This disparity is further exacerbated by the potential biases embedded in algorithms that are increasingly used as learning tools inside and outside the classroom, particularly with the rise of generative AI and LLM tools. The challenge now is to maximize the technological advantages while mitigating the potential harms, fostering learner agency, and safeguarding the student's original voice through thoughtful instructional design, particularly through experience-informed, practice-tested prompting pedagogy.
Considering their ability to process natural language and function as learning partners for data-driven learning and metacognitive self-reflection, the technological affordances of language models indeed offer unparalleled resources for language learners. While debates about the appropriate role of AI in education continue, the inevitable integration of these tools into learning environments calls for a shift in both pedagogy and academic discourse. Rather than banning the use of LLMs entirely or simply teaching students to extract answers from them, our pedagogical approach should prioritize the development of prompting skills: the ability to ask insightful, well-informed questions and critically evaluate the quality of the generated content.
The prompt serves as a critical site of HCI, and the ability to formulate effective questions that guide the model toward meaningful responses is essential in LLM-integrated learning and teaching. Prompting competency is therefore a key component of critical AI literacy, empowering learners while preserving their authorial agency. From a pedagogical standpoint, equipping front-end users (both educators and learners) with strategic prompting skills offers far greater immediate value than the technical endeavor of fine-tuning pre-trained models through reinforcement learning from human feedback (RLHF). The latter approach risks creating new, exclusive gatekeepers and may limit access to educational technology due to its inherent technical demands. However, empirical research on the impact of prompting strategies in authentic classroom settings remains scarce; most current investigations lean toward technical perspectives such as system prompting and prompt engineering. To ensure that everyday students and instructors can harness the potential of LLMs while safeguarding original thinking, developing teachable skills for effective AI communication is an urgent priority.
To better understand pedagogically valuable HCI, I turn to the foundational principles of rhetorical genre studies, which emphasize the importance of understanding the rhetorical situation—a constellation of communicative purposes, audiences, genres, and contexts that collectively drive the writing process. While these theories have traditionally been applied to human communication, I believe they remain profoundly relevant in our interactions with language models today. At its core, rhetoric is about mutual understanding, built on shared conventions, clear goals, and active engagement. In this new reality, prompting a language model becomes analogous to engaging with an immediate audience, which is distinct from a distant human audience, while the writer simultaneously co-authors with the model.
My own journey as a researcher is inextricably linked to my identity as an academic writer. Even the most compelling research arguably remains incomplete until it is communicated effectively. The ultimate goal is to engage meaningfully with audiences across various settings, from fellow scholars to the general public. In this endeavor, the pen, or, in our digital age, the keyboard, is our most powerful tool. In the context of LLM-integrated writing, the overarching objective is not to perfect the art of crafting prompts, but to cultivate the ability to ask better questions. It is essential for learners to critically engage with AI-generated output, remaining independent writers who understand their communicative purpose, their audience, and their underlying writing needs.
Language models, by their nature as "black boxes," operate as echo chambers that can amplify existing human biases. The principle of “garbage in, garbage out” applies to the training process of these models, where human ideologies may be further entrenched—whether intentionally or not. This is especially relevant given that the training datasets are dominated by English-language texts from web data sources like Common Crawl. From a micro perspective, multilingual learners might perceive these models as infallible authorities, leading to uncritical acceptance of generated content and a resulting surrender to algorithmic conformity. From a macro perspective, this vicious cycle can be detrimental to learning communities and ecosystems. Algorithmic homogeneity not only reinforces harmful ideologies such as native speakerism—which can undermine learners’ confidence—but also flattens diverse voices, resulting in writing that is robotic and formulaic. Educators and researchers must therefore critically examine the roles these tools play in education, ensuring they are pedagogically aligned and do not inadvertently reinforce harmful ideologies.
The stakes are undeniably high. Why does this matter? Because the very fabric of academic discourse, and, by extension, the future of knowledge production and dissemination, hangs in the balance. What we do now as educators, researchers, learners, and, most importantly, writers will shape the trajectory of AI’s influence on the writing process. The “So What?” is a resounding call to deliberate action: the transformative power of technology, when wielded thoughtfully and equitably, can unlock unprecedented opportunities for learning, creativity, and critical engagement. Conversely, a careless embrace of these tools risks exacerbating existing inequalities, silencing marginalized voices, and ultimately diminishing the richness and diversity of our intellectual landscape. To empower a new generation of thinkers who understand the rhetoric of technology, we must carry forward the wisdom of Octavia E. Butler: “Any change may bear seeds of benefit. Seek them out. Any change may bear seeds of harm. Beware.”
Take a look at the presentations and publications listed!