Professional Writing Project

AI used to be seen as something futuristic reserved for programmers, but today almost every student has access to these tools. Students are using tools like ChatGPT and Copilot to generate text, solve equations, or answer questions in an instant. AI offers new opportunities for innovation and learning, but it also opens the door to academic dishonesty. While some students use AI to brainstorm or generate outlines, others use it to complete whole assignments while putting in little of their own effort. This raises long-term concerns about learning and integrity. The main issue isn’t AI in general but the growing number of students who use it to avoid doing the thinking themselves. For educators, this is a challenge that could reshape how students are taught, assessed, and valued in higher education.

To understand the scale of the issue, it helps to look at the numbers. A 2023 report from the Pew Research Center found that nearly 30% of college students admitted to using ChatGPT or similar AI tools for school assignments. A large portion of these students admitted to using it as more than a study aid: it was used to generate essays, discussion posts, and solutions to technical problems with little personal input.

This isn’t just a passing trend; it’s becoming the norm in some classrooms. Professors have noted sudden shifts in writing styles, unusually perfect grammar, and a lack of engagement in follow-up discussions or exams. Even students who previously struggled are turning in suspiciously flawless work. The challenge is that AI-generated content often passes plagiarism detection software, making it harder to catch.

The appeal of using AI to “get the job done” without putting in the effort is understandable, especially for students balancing heavy course loads, jobs, and personal responsibilities. But the long-term consequences are severe. When students use AI as a crutch rather than a tool, they bypass essential learning moments, which can hurt them in future courses. Academic dishonesty through AI is not just about cheating; it’s about learning loss. Students miss out on valuable critical thinking exercises, which in turn weakens their problem-solving abilities. These are fundamental skills, not just for passing exams and graduating but for succeeding in life after graduation.

There’s also growing concern from employers. Hiring managers are starting to question the reliability of degrees when graduates can’t demonstrate the basic skills their transcripts claim they’ve mastered. In fields like writing, coding, marketing, and even customer service, real-world performance matters more than grades. If students graduate without the ability to think independently, solve problems, or communicate effectively without AI assistance, their job prospects and the reputation of their institutions will suffer.

In my own experience as a student, I’ve seen classmates submit ChatGPT-written assignments with minimal editing. When asked to discuss or defend their work in class, they stumble or avoid the conversation. It’s clear that many students are developing a habit of cutting corners, and that habit can stick long after college. One of the main reasons this issue persists is that many schools haven’t caught up. Academic integrity policies were written before generative AI tools were widely available. Instructors are left guessing whether a student’s work is AI-generated, and even if they suspect it, there’s often no clear process to address it.

Some instructors ban AI entirely, while others allow limited use, like brainstorming or citation generation. Without clear and consistent guidelines, students don’t always know what’s allowed and what’s not. Worse, some choose to interpret the gray area in their favor. Students might be penalized in one class for using AI but praised in another for being innovative. There are also deeper ethical concerns. Is it fair for some students to use AI while others work honestly? Is AI-generated work a form of theft, or is it just using the tools available? And if employers find out a graduate faked their way through college using AI, how will that reflect on the institution? Schools should take hold of the situation and update their policies rather than simply handing down punishments for using these tools. Because AI is still relatively new, institutions are actively searching for ways to regulate it.

There’s no single fix for this issue, but there are steps institutions can take to reduce AI abuse while promoting responsible use. Below are a few best practices that some institutions have begun to adopt. First, policies should clearly define what constitutes academic dishonesty in the age of AI. That means stating what types of AI use are permitted and what’s off-limits, and requiring students to disclose when and how they use AI tools in their work. Second, rather than banning AI completely, schools should teach students how to use it ethically and effectively. Courses or workshops on digital tools, critical evaluation, and AI responsibility can help students use technology to enhance, not replace, their learning.

Instructors can also create assignments that are less vulnerable to AI misuse. Oral presentations, in-class writing, project-based assessments, and real-time collaboration are harder to fake. Some faculty now require students to explain their process or defend their work during office hours or in follow-up questions.

AI-detection tools exist, but they’re not perfect. Schools should use them carefully, alongside human judgment, and be transparent with students about how these tools are used. One example worth highlighting is the University of Michigan, which created a task force to study AI’s impact and now offers guidance for faculty on integrating it into coursework. Their balanced approach, neither fully embracing nor banning AI, has helped reduce misuse while encouraging thoughtful engagement.

In conclusion, AI is here to stay, and that isn’t necessarily a bad thing. But as with any new technology, how we use it matters. When students rely too heavily on AI to complete academic work, they weaken the foundation of their own academic success. Institutions need to act now by updating policies, teaching digital responsibility, and redesigning coursework and assignments to promote integrity. The goal isn’t to reject AI or punish students but to strike a balance, so that AI supports growth rather than replacing effort. If schools take this seriously, we can preserve the value of degrees and the integrity of learning in the era of AI.

References

Kernohan, D. (2023). What AI tells us about learning. Wonkhe.
Pew Research Center. (2023). How teens and college students are using ChatGPT. https://www.pewresearch.org

OpenAI. (2024). Usage policies. https://openai.com/policies/usage-policies

University of Michigan Center for Academic Innovation. (2023). AI and academic integrity guidance.