As Generative Artificial Intelligence (GenAI) becomes more integrated into academic contexts, it is essential to understand how and when its use is appropriate. Ethical writing involves transparency, critical thinking, and an awareness of institutional expectations; when GenAI enters the mix, you must also weigh your own ethical standards alongside these considerations. The following resources raise questions to help you draw your own informed conclusions about the ethical use of GenAI in academic writing.
Artificial Intelligence in Education (National Education Association, 2024): “Our report and statement cover ideas that are essential to the question of AI in education, namely:
Students and educators must remain at the center of education
Evidence-based AI technology must enhance the educational experience
Ethical development/use of AI technology and strong data protection practices
Equitable access to and use of AI tools is ensured
Ongoing education with and about AI: AI literacy and agency”
Effective and responsible use of AI in research (University of Washington Graduate School, 2025): “Creating new knowledge and performing research is at the highest level of the educational experience of students. Novice researchers must learn essential critical thinking skills needed in formulating a research idea, determining appropriate methods and approaches for the research plan, collecting data, summarizing results, and drawing conclusions. AI can be a valuable tool for assistance but is not an accountable entity for the research outcomes since the ultimate responsibility of research lies with the human.”
Guidance for performing graduate research and in writing dissertations, theses, and manuscripts for publications (Georgia Tech Graduate School, 2025): “Don’t short circuit the learning process: For a PhD student, an important part of their learning processes is to gain skills on analyzing, summarizing, and discussing their research results. Inputting data into a generative AI platform and asking it to write this type of content has two disadvantages: it does not give the student the experience to gain those skills and it may produce content that sounds good but would not withstand scrutiny by experts.”
Generative AI in Academic Writing (University of North Carolina at Chapel Hill): This page addresses how the ethical use of generative AI technology (like ChatGPT or Claude) depends on the context in which it is being used and the expectations of the individuals or organizations involved. It is important for users to understand the potential implications of using generative AI and to approach its use with honesty and integrity.
Fang, X., Che, S., Mao, M., Zhang, H., Zhao, M., & Zhao, X. (2024). Bias of AI-generated content: An examination of news produced by large language models. Scientific Reports, 14(1), 5224. https://doi.org/10.1038/s41598-024-55686-2
“Our study reveals that the AIGC produced by each examined LLM demonstrates substantial gender and racial biases.”
Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
Guidry, K. R. (2024). AI Literacy Literature Summary. CSU AI Commons. https://genai.calstate.edu/communities/faculty/ethical-and-responsible-use-ai/ai-literacy-literature-summary
Larson, B. Z., Moser, C., Caza, A., Muehlfeld, K., & Colombo, L. A. (2024). Critical Thinking in the Age of Generative AI. Academy of Management Learning and Education, 23(3). https://doi.org/10.5465/amle.2024.0338
Lee, H., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers. https://doi.org/10.1145/3706598.3713778
“Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship.”
Li, P., Yang, J., Islam, M., & Ren, S. (2025). Making AI less “thirsty”: Uncovering and addressing the secret water footprint of AI models. https://arxiv.org/pdf/2304.03271
“Additionally, GPT-3 needs to “drink” (i.e., consume) a 500ml bottle of water for roughly 10 – 50 medium-length responses, depending on when and where it is deployed.”
Lopatto, E. (2025, March 5). The Questions ChatGPT Shouldn’t Answer. The Verge. https://www.theverge.com/openai/624209/chatgpt-ethics-specs-humanism
“ChatGPT, just so we’re clear, can’t reliably answer a factual history question. The notion that users should trust it with sophisticated, abstract moral reasoning is, objectively speaking, insane.”
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Shen, M. Q. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2(1). https://doi.org/10.1016/j.caeai.2021.100041
University of Kansas. (2024, March). Addressing bias in AI. Cte.ku.edu. https://cte.ku.edu/addressing-bias-ai
Walter, Y. (2024). Embracing the Future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1). https://doi.org/10.1186/s41239-024-00448-3
Zewe, A. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News; Massachusetts Institute of Technology. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
“By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia).”