“If I write an essay without ChatGPT now, it’s going to suck, but if I use Chat, it does it for me.”
That’s how one Quinnipiac communications student summed up his semester, half-joking, but not entirely. He asked to stay anonymous, wary that his honesty could raise academic integrity issues.
For many students, when they sit down to work on assignments, opening ChatGPT has become automatic.
[Survey conducted among Quinnipiac University students: 96% reported using AI in some capacity; only 4% said they never use it.]
[Timeline of AI:
2011: IBM’s Watson computer wins “Jeopardy!”; Apple launches Siri, making language models common and accessible.
2015: OpenAI is founded.
2018: Google’s BERT language model lets machines understand context rather than just keywords.
2022: OpenAI launches ChatGPT, which gains mass public adoption; Microsoft, Google and Meta launch competing models.
2024: AI is integrated into everyday apps and websites (Microsoft Copilot, Google Gemini, Adobe), and university academic integrity policies incorporate AI guidelines.]
AI has quickly turned into the invisible classmate sitting inside laptops in classrooms across campus. For some, it is a breakthrough that makes learning more efficient. For others, it is a shortcut that risks replacing the learning process itself.
“There’s definitely a hindrance on what I’m learning because of it,” the anonymous student said. “Using ChatGPT is going to lower the quality of my personal writing, for sure.”
Across campus, that experience is becoming the norm. Students in writing-intensive majors are turning to AI not just to help with assignments, but to start them. Joel Vanner, assistant director of Learning, Media & Emerging Technologies, has seen the issue unfold on both sides of the equation: he was once a QU undergrad, and he now teaches a class on the shifting norms of higher education in the age of AI.
That shift has raised an important question at Quinnipiac for both students and staff alike. What happens to learning when AI becomes the tool used right alongside every assignment?
This has begun to worry Quinnipiac’s academic leadership. Adam Nemeroff, assistant provost for innovations in learning, teaching and technology, says there is “early evidence that shows AI is beginning to alter critical thinking,” a sign that AI’s influence is not limited to students’ prompts; it may be shaping how they think in the first place. Nemeroff is no stranger to AI: he helped write Quinnipiac’s AI policy and guidelines. The policy states that the university “is working to facilitate the use of cutting edge AI while addressing the potential risks,” and the guidelines require that all use of generative AI be disclosed clearly when work is submitted.
Students are also concerned about its impact on learning. So is Nemeroff, and his answer leaves little doubt. To him, AI is not temporary or just an option for students. It is a complete transition of the academic system, a change that will take effort from every program on campus and alter what higher education looks like.
“Each program has to think about how the content and skills needed are changing as a result,” he said.
That process is already underway on campus. Nemeroff and his team, which includes Vanner, have hosted dozens of introductory workshops to help professors understand what AI can and cannot do, and to give them space to experiment with the tools themselves.
“All tech is disruptive to tradition,” he said. “Zoom didn’t exist when I started in higher ed, and AI is part of that journey.”
Nemeroff is also working to connect Quinnipiac with national organizations studying the impact of AI on higher education.
“We’re trying to create spaces for faculty to come in and connect them with external opportunities like the American Association of Colleges and Universities, who’ve been doing great AI research,” he said.
In the spring 2026 semester, Quinnipiac is introducing a new AI course designed to give students a foundation in AI and help them connect it directly to their area of study.
Nemeroff and his team see AI as a keystone of the future, but not one without risks. They want to prepare students for it, because students themselves worry about the implications of AI in their career fields.
In one of Vanner’s classes, “Is AI Taking Over,” he asked ChatGPT about a federal bill passed in May 2025, only for the system to insist the legislation was “hypothetical.” The link from Congress.gov was open right in front of him.
“If you didn’t check the answer, you would have submitted that, and you would have been totally incorrect,” he told his students.
AI is not a fully reliable, fully tested source of truth, and it may never be. Its answers are shaped by training data that can be biased, outdated or, in some cases, fake, and the model hides those faults behind fluent, professional language.
As Vanner explained,
“Students don’t actually understand why AI responds to you the way it does, how it weighs what it thinks you want to know, and then formulates the sentence based on its training data. It can be extremely biased.”
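To make the mechanism Vanner describes concrete, here is a toy sketch in Python. The probabilities are invented for illustration, not taken from any real model: a language model scores candidate next words and samples from those scores, and nothing in that step checks the claim against a source.

import random

# Toy next-word distribution (hypothetical numbers, not a real model's weights).
next_word = {"passed": 0.45, "hypothetical": 0.35, "proposed": 0.20}

# The model samples in proportion to these scores; either pick reads fluently.
words, weights = zip(*next_word.items())
print(random.choices(words, weights=weights, k=1)[0])

That is why the answer in Vanner’s class could call a real bill “hypothetical” without hesitation: fluency, not verification, is what the sampling step produces.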
This is a problem that may follow students long after graduation. If AI is completely integrated into students' writing and research, what happens if they encounter AI regulations in the workforce?
Julie Hill, vice president at Peak Systems, a national tech services company working at the forefront of technology installation, data center builds and cloud services, sees the consequences every day and is beginning to worry about students’ futures.
“You must proof it, because AI lies. They do lie,” she said.
But despite the inaccuracies, neither Hill nor Quinnipiac’s faculty see AI as something students should avoid. In fact, Hill strongly opposes that idea.
“I don’t think leveraging tools available to you is cheating. I think that is being smart and developing. You just have to be extremely careful and smart with your usage.”
Students also believe AI is the technology of the future. Data science major Mia Holtz is one of them: while many students use AI just to get through assignments, she uses it as a tool with a direct connection to the skills she will need in the field.
“When I have questions about the platform we use to gather data, ChatGPT can give me pointers on exact sections of my code,” she said. “It helps identify what’s wrong with a function or suggest how to clean a dataset.”
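A minimal sketch of the kind of fix Holtz describes, assuming pandas; the file name, column names and cleanup steps here are hypothetical, not taken from her coursework:

import pandas as pd

# Load a raw data export (hypothetical file and columns).
df = pd.read_csv("survey_responses.csv")

# Typical cleanup an assistant might suggest for a messy dataset:
df = df.dropna(how="all")                          # drop fully empty rows
df["major"] = df["major"].str.strip().str.lower()  # normalize text fields
df["uses_ai"] = df["uses_ai"].map({"yes": True, "no": False})

# Flag rows that did not map cleanly for human review rather than blind trust.
print(f"{df['uses_ai'].isna().sum()} responses need manual review")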
But even as someone preparing to enter the AI industry, she draws a clear line.
"i'm worried that people are going to use it in a way where it takes over their learning completely,” Holtz said. “Not using it as a bridge, but just for copy and paste.”
For Holtz, the rapid evolution of AI is not something to fear; it is something to prepare for.
“It’s already grown so much since I’ve been here,” she said. “I need to be prepared... it’s becoming its own industry, and to keep up I’m going to need to be part of that industry.”
Junior nursing student Madi Beyers brings up a valuable point that is not often covered: in the School of Nursing, AI has a very different role, a helpful study tool, but one with serious ethical boundaries.
“It’s a good practice tool, but if it’s overused, your future healthcare workers don’t know what they’re talking about,” she said.
It is a concern Beyers feels strongly about.
“If you plug a chart into ChatGPT and ask it to analyze it, you won’t know what you’re doing in the field. You don’t have access to Chat during your clinical.”
Nursing, one of the most intensive and hands-on fields of study at Quinnipiac, raises a crucial question about AI technology: how adaptable is it in the field?
“Nobody can replace a nurse because there is an empathetic part of it,” Beyers said.
“Human-to-human contact can’t get replaced. You can’t have a robot say, ‘Sorry for your loss.’”
Faculty see these limits beyond healthcare, too. Vanner, who also teaches as an adjunct professor, notes that in many industries, AI use is tightly regulated.
“From those I know in the field, their use of AI is monitored,” Vanner said.
“The information that they can put into the AI tool is limited.”
He stresses that students entering those careers must know how to work without AI, even as technology becomes more present everywhere else.
Quinnipiac is not trying to outrun AI; it is trying to meet it where students already are. Workshops, pilot courses and evolving guidelines show the university knows this is not a fad. The next step is less about whether students should use AI and more about how the university teaches them to use it responsibly.
AI is already in the classroom. The question now is how education adapts around it.