ABSTRACT: The integration of Large Language Models (LLMs) into English for Academic Purposes (EAP) instruction offers personalized support but risks fostering cognitive dependency and "algorithmic native-speakerism." This study investigates how writer-centered prompt design, grounded in the rhetorical situation, can empower L2 writers to maintain agency. Employing a sequential mixed-methods design, the research first analyzed a corpus of 215 prompts, revealing a "pedagogical inversion" in which 84.6% of baseline interactions focused on lower-order surface corrections. Subsequently, comparative A/B testing across three LLMs (Claude, Gemini, and ChatGPT) demonstrated that "rhetorically informed" prompts, which explicitly define audience, purpose, and role, significantly enhanced feedback quality, shifting the focus to higher-order concerns such as argumentation and structure. The findings quantify the "Gulf of Envisioning" that confronts novice users and validate "Rhetorical Scaffolding" as a mechanism for bridging this gap, transforming the LLM from a passive corrector into a Socratic collaborator. The study concludes with a pedagogical framework for "Rhetorical Prompting," advocating a shift from functional tool proficiency to critical AI literacy that preserves the writer's voice.
Keywords: Large Language Models, EAP writing, prompt engineering, rhetorical situation, algorithmic native-speakerism, AI literacy
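To illustrate the contrast the abstract draws between baseline and rhetorically informed prompting, the sketch below shows one possible way to assemble a prompt from the three elements named above (audience, purpose, and role). This is a minimal, hypothetical illustration, not the instrument used in the study; the function name build_rhetorical_prompt and all example values are the editor's assumptions.

    # A minimal sketch (hypothetical; not the study's instrument) contrasting a
    # rhetorically informed prompt with a baseline "fix my grammar" request.

    def build_rhetorical_prompt(draft: str, audience: str, purpose: str, role: str) -> str:
        """Embed the rhetorical situation (audience, purpose, role) in the prompt,
        steering the LLM toward higher-order feedback rather than surface correction."""
        return (
            f"You are {role}.\n"
            f"My intended audience is {audience}, and my purpose is {purpose}.\n"
            "Do not rewrite my text. Instead, ask Socratic questions about my "
            "argumentation and structure, and point out where the draft may not "
            "serve this audience and purpose.\n\n"
            f"Draft:\n{draft}"
        )

    # Typical lower-order baseline request (the pattern dominating the corpus).
    baseline_prompt = "Fix the grammar in my essay: ..."

    # Rhetorically informed alternative: the writer's text stays the writer's own.
    rhetorical_prompt = build_rhetorical_prompt(
        draft="...",
        audience="reviewers at an applied-linguistics journal",
        purpose="to argue that prompt design shapes feedback quality",
        role="a writing tutor who preserves the author's voice",
    )
    print(rhetorical_prompt)

The design choice reflected here mirrors the abstract's claim: the prompt asks for questions rather than corrections, casting the model as a Socratic collaborator instead of a passive corrector.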