Whether you're a new or seasoned instructor, our Semester Resource Pack includes thoughtful articles, tools, and more to help you build a student-centered classroom experience.
(Chronicle of Higher Ed, April 20, 2026)
With Panelists:
Flower Darby, Associate Director, Teaching for Learning Center, University of Missouri
Chris Hakala, Executive Director, Center for Excellence on Teaching, Learning and Scholarship, Springfield College
Susan Purrington, Harold F. Wiley Generative AI Teaching and Learning Fellow, Connecticut College
Evan Silberman, Senior University Dean of Academic Innovation, Office of Academic Affairs, CUNY
(Holly Wheeler, League for Innovation, May 2026)
Writing professors routinely utilize a variety of methods to engage students, facilitate skills development, and create opportunities for achievement. Peer review has a long history in college writing courses, where evaluating one another’s work is often treated as a core competency or an explicit learning outcome. Yet, what students actually do during peer review, and how that activity contributes to their development as writers, varies widely.
Students’ learning depends on factors such as their understanding of the purpose of peer review, whether they have been taught how to conduct one, and their attitudes toward participating in the process. Peer review instruction in my writing classes includes a range of practices, such as asking students to provide general feedback; respond to specific questions; or complete reviews assessed for thoroughness, accuracy, or effort. The design of these tasks has continued to evolve in response to lessons learned, but in the past year, a significant finding has emerged: Although students often prefer giving and receiving written feedback, the quality of their feedback tends to be stronger when they engage in oral feedback.
With its roots in collaborative learning, peer review, or peer assessment, is a process in which students evaluate the work of their classmates, offering feedback and/or grades on a designated assignment according to class-established standards (Falchikov, 2007). These reviews may function in a summative capacity, emphasizing evaluative judgments that contribute to grading, or in a formative capacity, prioritizing detailed feedback intended to enhance students’ learning. As such, this process can take multiple forms. Students in a college writing class, for example, may be asked to focus on higher‑order concerns such as argument, organization, and analysis, or on lower‑order concerns such as grammar and sentence‑level clarity. Instructors may structure peer review through rubrics, open‑ended comments, directed questions, content‑based grading, or checklists.
Research consistently demonstrates that peer review benefits students in two distinct, but complementary, ways: as recipients of feedback and as reviewers of others’ work. When students receive feedback, their revision efforts are supported and enhanced via concrete suggestions and multiple perspectives. This feedback assists students in understanding the assignment; clarifying expectations; and strengthening their own evaluative judgment as they learn to recognize, articulate, and apply criteria for quality writing to improve the quality of their revisions (see meta-analysis by Stančić, 2021). Peer review functions as a learning-centered practice that positions students as active participants in one another’s learning, fostering critical judgment, confidence, and a sense of ownership over academic work (Black et al., 2004; Shen et al., 2020). Peer review also benefits students in their role as reviewers. By engaging in the analysis and evaluation of peers’ texts, students develop self-assessment skills, internalize assessment criteria, and refine their own writing through reflective application of those criteria (Dochy et al., 1999; Nielsen, 2021). Evaluating the work of others has been shown to cultivate transferable evaluative skills and noncognitive capacities, such as time management and self-discipline (Chaktsiris & Southworth, 2019). Moreover, Cho and MacArthur (2011) demonstrate that the act of composing feedback itself contributes to learning by deepening students’ reflection on writing quality and disciplinary expectations.
Despite considerable evidence that students learn from the peer review process (e.g., Huisman et al., 2019), many find engaging in it challenging. Students often report feeling unsure or uncomfortable when providing feedback, expressing concerns that their peers might react negatively or that their advice will be unhelpful (e.g., Kaufman & Schunn, 2011; Zhou et al., 2020). These challenges are especially pronounced in first-year writing courses, where students must navigate the demands of academic discourse while managing unfamiliar rhetorical expectations; increased independence; and the need to develop more sophisticated reading, writing, and critical thinking skills. In order for the process to work, professors also need to devote considerable class time to teaching students about peer review by introducing principles of effective feedback; modeling how to apply evaluative criteria in thoughtful, constructive ways; and providing examples of expectations of the assignment, including guided prompts, rubrics, or low‑stakes practice activities (e.g., Zong et al., 2021).
Research on student peer feedback suggests that the mode of feedback influences its focus and depth. Van den Berg et al. (2006) found that written feedback tended to be product-oriented, with students evaluating the final product rather than asking analytical questions, explaining their comments, or proposing revisions, and generally focusing on content and style rather than structure. Reynolds and Russell (2008) reached similar conclusions when comparing written and audio feedback, observing that written comments were often general and less helpful, whereas oral feedback—although less frequent—typically included deeper analysis, explanation, and revision-oriented suggestions. Across courses, written feedback emphasized evaluative judgments and structural issues, while oral feedback was more process-oriented, addressing the writing process and providing guidance for revision.
As a long-time proponent of peer reviews in writing and literature classes, I often struggled with the quality of the reviews that students produced. Regardless of the amount of class time devoted to this exercise; the number of examples analyzed together; or the use of open-ended questions, checklists, or rubrics, the quality of students’ peer reviews remained uneven, with some producing thorough, insightful feedback and others struggling to provide meaningful guidance. At the same time, I was considering how to integrate multiple short oral assignments alongside existing coursework in a way that would allow students to meet all learning outcomes. To address this, I designed a combined approach that aligned the assignment goals while streamlining the workload.
Students completed two separate peer review assignments, which provided them with opportunities to develop feedback literacy through both written and oral modalities. The first required students to produce a traditional written peer review, supported by explicit instructions; instructor modeling; exemplars; and guided questions addressing assignment expectations, areas for improvement, and strengths. The second assignment asked students to respond to the same evaluative prompts, but to deliver their feedback orally through a narrated PowerPoint. The goal was simple: Ensure that students developed peer review skills and practiced oral communication skills.
The results were surprising. The first peer review yielded a common result: Students answered the questions, provided some general feedback, and offered generic ideas for improvement, even when explaining their written feedback orally. For example, one student wrote, “Your thesis seems too much like a statement rather than an argument,” but didn’t offer a suggestion to improve it. Another student said, “One of the things I really enjoyed . . . [was] your topic sentences. I really think they sounded very sophisticated and set up what you were going to talk about in the upcoming paragraph.” Since this is exactly what topic sentences are supposed to do, the feedback was polite and provided confirmation to the writer, but did not offer any further ideas or instructions. Overall, students’ feedback was product-oriented, focusing mostly on content.
On the other hand, the oral feedback was significantly stronger. Students offered feedback on the content of the writing, but provided explanations and analyses often absent in their written assessments. For example, students said things like, “Your thesis’s main point is clear, but it seems broader than it should be. Which aspects of student motivation will you talk about? That would make it easier to understand the direction of your paper.” This feedback goes beyond simple evaluation, offering concrete analysis, clarification, and an actionable suggestion for improving the work. Another student said,
Your topic sentences are clear and make an argument as they are supposed to. What you could work on, though, is making them more specific. The second one sounds a little…um…boring? I’m sorry. It’s just that I have no real idea how it fits with the rest of the paper. Maybe you could add a detail that shows why it matters. That would help the topic sentence feel like it is doing something.
This feedback also moves beyond praise or general comments, identifying a specific aspect of the writing that could be improved, and provides a concrete suggestion for revision. By highlighting both what works and what could be strengthened, the reviewer helps the writer understand the purpose of the sentence within the larger argument and offers guidance that can directly inform improvements.
The best part of this experience was when students indicated that they were going to change something in their own paper when they saw what another student did, as in this example:
I liked how you worked your quotes into your sentences. It made the argument easier to follow and helped explain why the quote mattered. I realized that I just sort of stuck mine in there using the same words and just left them there. Seeing the way you did them helped me realize what I need to be doing.
This feedback is particularly effective because it demonstrates the reviewer reflecting on their own writing in response to a peer’s example. By articulating what worked well in the peer’s paper and comparing it to their own approach, the student identifies a specific area for improvement and outlines a clear plan for revision. This reflective engagement encourages metacognition and supports the transfer of strategies from one text to another, making the feedback both practical and instructional. The oral feedback also made feedback more natural and conversational, freeing students from worrying about formal language in their discussions with peers. In this setting, students proposed new ideas, elaborated on their reasoning, made connections to their own writing, and offered actionable suggestions. They demonstrated deeper engagement with the assignment, stronger understanding of both their own work and the peer review process, and increased self-reflection, making the feedback especially meaningful and instructional.
Students reported that they preferred giving written feedback because it allowed them greater control over their responses and more time to organize their ideas. This aligns with the findings of Reynolds and Russell (2008), who observed that 72 percent of students favored providing written feedback for similar reasons, and 73 percent preferred receiving written feedback because it was easier to process. My own students’ experiences reflect a similar pattern: Although they recognized the oral feedback as higher in quality and more detailed, engaging with it required more time to listen, reflect, and apply the suggestions to their own work. At the same time, the oral format appeared to reduce anxiety about language accuracy, allowing students to focus more on ideas and analysis. Taken together, these results suggest that written and oral peer feedback each offer distinct advantages: Written feedback supports careful planning and clarity, while oral feedback fosters deeper engagement, reflective thinking, and attention to process—highlighting the value of incorporating multiple modalities into peer review practice.
More research needs to be conducted, particularly regarding oral versus written feedback, and especially with first-year and community college students. In this experience, students benefited in distinct ways from both modalities: Written feedback supported careful planning and clarity, giving students time to organize their ideas, while oral feedback encouraged deeper engagement, reflective thinking, and attention to process. These observations highlight the importance of helping students understand the purpose of peer review and develop the skills necessary to participate meaningfully. As the quote from writer and artist Ann Marie Houghtailing (2020) reminds us, “Feedback is a free education to excellence. Seek it with sincerity and receive it with grace.” When students approach peer review with intention and openness, they not only improve their own work but also contribute to the learning of their peers, demonstrating that the value of feedback extends far beyond the immediate assignment. Overall, these experiences reinforce the benefit of integrating multiple feedback modalities and providing structured support to help students engage productively in the peer review process.
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1), 8-21.
Chaktsiris, M. G., & Southworth, J. (2019). Thinking beyond writing development in peer review. The Canadian Journal for the Scholarship of Teaching and Learning, 10(1). https://doi.org/10.5206/cjsotl-rcacea.2019.1.8005
Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer, and co-assessment in higher education: A review. Studies in Higher Education, 24(3), 331-350.
Falchikov, N. (2007). The place of peers in learning and assessment. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education: Learning for the longer term (pp. 128-143). Routledge.
Houghtailing, A. M. (2020, May 18). Feedback is a free education to excellence [Video]. YouTube. https://www.youtube.com/watch?v=e7mHEqE8TMM
Huisman, B., Saab, N., van den Broek, P., & van Driel, J. (2019). The impact of formative peer feedback on higher education students’ academic writing: A meta‑analysis. Assessment & Evaluation in Higher Education, 44(6), 863-880. https://doi.org/10.1080/02602938.2018.1545896
Kaufman, J. H., & Schunn, C. D. (2011). Students’ perceptions about peer assessment for writing: Their origin and impact on revision work. Instructional Science, 39(3), 387-406. https://doi.org/10.1007/s11251‑010‑9133‑6
Nielsen, K. (2021). Peer and self-assessment practices for writing across the curriculum: Learner-differentiated effects on writing achievement. Educational Review, 73(6), 753-774.
Reynolds, J. A., & Russell, V. (2008). Can you hear us now? A comparison of peer review quality when students give audio versus written feedback. The WAC Journal, 19(1), 29-44. https://wacclearinghouse.org/docs/journal/vol19/reynolds_russell.pdf
Shen, B., Bai, B., & Xue, W. (2020). The effects of peer assessment on learner autonomy: An empirical study in a Chinese college English writing class. Studies in Educational Evaluation, 64, Article 100821.
Stančić, M. (2021). Peer assessment as a learning and self-assessment tool: A look inside the black box. Assessment & Evaluation in Higher Education, 46(6), 852-864.
van den Berg, I., Admiraal, W., & Pilot, A. (2006). Designing student peer assessment in higher education: Analysis of written and oral peer feedback. Teaching in Higher Education, 11(2), 135-147.
Zhou, J., Zheng, Y., & Tai, J. H.‑M. (2020). Grudges and gratitude: the social‑affective impacts of peer assessment. Assessment & Evaluation in Higher Education, 45(3), 345-358.
Zong, Z., Schunn, C. D., & Wang, Y. (2021). Learning to improve the quality of peer feedback through experience with peer feedback. Assessment & Evaluation in Higher Education, 46(6), 973-992.
Holly Wheeler is Professor, English and Philosophy, at Monroe Community College in Rochester, New York.
Opinions expressed in Learning Abstracts are those of the author(s) and do not necessarily reflect those of the League for Innovation in the Community College.
(Jimmy Licon, Faculty Focus, April 20, 2026)
When ChatGPT first arrived, many faculty reacted with horror. If an algorithm could write a plausible essay in seconds, what would become of higher education and philosophy, those disciplines that, at their best, are built around slow thought and crafted prose? But over the past two years, I’ve arrived at a different conclusion: the problem is not that AI can write for our students better than most of them could have on their own, but that it forces us to clarify what learning is actually for and how to instill it in our students.
In my philosophy classes at ASU, I’ve developed several AI-integrated assignments designed to do precisely that. They do not attempt to detect or prevent AI use. Instead, these assignments require students to use AI under conditions that make cheating (nearly) pointless and learning inevitable. Each assignment treats AI as a foil rather than a threat: an endlessly patient interlocutor that can absorb reputational risk, provoke explanation, and invite students to deepen their learning and understanding.
The approach builds on two converging insights. First, as David Duran’s review of decades of research shows, students learn more deeply when they teach by explaining, questioning, and clarifying for others. Second, as Andrew Vonasch and colleagues demonstrate, people are so sensitive to their reputation that they would endure pain, disgust, or even death rather than be seen as highly immoral. These insights reveal both the problem and the opportunity: students stay silent or disengaged not because they don’t care, but because they care too much about looking bad and about the social costs that can be, and sometimes are, visited on the offending student. AI, being incapable of judgment or memory, offers students a lower-risk avenue for participating and sharing their views.
Philosophy is the rigorous and systematic practice of inquiry. That practice requires vulnerability and the willingness to be wrong in front of others. When students risk public embarrassment, they risk the social currency that allows them to cooperate, belong, and be respected. In a reputation-driven classroom, silence is often the rational choice. Large language models, in contrast, have no ego and so can absorb the cost of students’ trial and error. The LLM becomes a reputational buffer that allows students to speak their minds and explore positions more freely, with less risk of social retribution, turning AI into a form of pedagogical prosthetic for intellectual risk-taking.
Below are the cornerstone assignments that make use of AI as a catalyst. Each integrates reputational psychology and the learning-by-teaching principle into its design.
In Clap Back, students spar with a customized AI trained to defend a specific philosophical position—skepticism, utilitarianism, nihilism, theism, you name it. The role of AI is to argue tirelessly and consistently while the student is tasked with exposing errors in its reasoning and defending a counter position. Students work individually or in small teams, annotate their transcripts, and submit a meta-reflection analyzing where the debate turned. And we sometimes hold in-class tournaments, where groups compete to see who “defeated” their AI most convincingly, judged by peers and professor.
This assignment lowers reputational stakes while sharpening philosophical skills. Students who might hesitate to challenge a classmate will freely push back against a machine that cannot feel embarrassment or condescension. The social threat of “looking stupid” evaporates, replaced by the intellectual pleasure of winning an argument that matters; the learning benefit comes from having to explain and defend their views to other students and to the AI. Using this assignment experimentally across several classes and semesters has been revealing: discussion participation has nearly doubled. It is clear so far that students very much enjoy the discussion when it is less socially risky.
This assignment is built directly on the research tradition Duran reviews: that students learn most effectively when they teach. In Teaching for Botmerica, students are paired with a deliberately “ignorant” LLM pre-trained to misunderstand a key philosophical concept—say, moral luck, validity, or the mind-body problem. The student’s job is to teach the AI step by step, using examples, counterexamples, and clarifications. They document the exchange and furnish a link for the professor to review.
The genius of this task, pedagogically, is that it forces metacognition. Students cannot teach what they do not understand. When the model parrots back a half-correct definition, it forces students to learn enough about the material to teach the bot by revising, rephrasing, and testing until the AI understands (so to speak). This dynamic mirrors Duran’s finding that learning-by-teaching is strongest when the “learner” (the AI) is interactive rather than passive. The questioning and feedback cycles consolidate understanding.
In this exercise, students take over office hours—but the professor becomes the questioner. Each week, a group meets with me to “teach” that week’s material. They know I’ll press them Socratically: Why does Mill think that? What problem is Kant solving here? Where does the argument fail? Their performance shapes the next lecture, where I incorporate their insights (and missteps) into the discussion. They receive public credit for strong contributions and constructive feedback when they falter. This signals that it is acceptable to get something wrong and how to fix it.
The reputational dynamics here are powerful but healthy. Reputation motivates effort, but because the setting is small and dialogical, failure instructs. Students report that they prepare harder for “their” week than for any exam because they don’t want to let their group down. They experience reputation not as a threat but as a spur to excellence, with the social pressure motivating students to be their best selves. This exercise captures the essence of learning-by-teaching: students must organize material, anticipate questions, and explain ideas coherently enough to guide another person’s understanding. The reverse structure also demonstrates to students that philosophy is about participating in a shared search for truth.
This is the most elaborate assignment, to be used sparingly. Groups of students train and coach an LLM to defend an opposing side of a philosophical issue, such as whether free will is compatible with determinism or whether euthanasia can be morally justified. Each team gathers sources, crafts prompts, and adjusts training data to shape its model into a better arguer. They then stage a public debate in class between the two models, projected on-screen, followed by a debrief in which the rest of the class, or the students in the tournament, analyze the debate and offer praise and feedback.
When students see their carefully trained model spout a contradiction or a cliché, they feel the weight of responsibility that comes with teaching—echoing Duran’s observation that teaching demands reflective self-assessment and repair. It also engages reputational psychology in a productive way for the simple reason that teams want their models to perform well, so they dig deep into the content. Getting a good grade helps too.
To counterbalance all this digital immersion, I assign occasional low-tech exercises. Students read a passage from Aristotle or Mill and write their reflections by hand, in blue or black ink only. They then photograph and upload the pages. Here the slowness is deliberate: writing manually engages a different part of the brain. It restores the temporal dimension of thought, benefiting students as they pause, consider, and rephrase. That pace and visibility make thought itself tangible again.
By juxtaposing Write-n-Snap with the AI-based assignments, I emphasize that the point of the class is to learn, by whatever means works best, even if it isn’t the latest technology. Paper books and LLMs can peacefully and productively coexist. Students see that good thinking can take many forms, whether dialogue, teaching, debate, or pen-and-paper reflection, and that what matters most is the engagement and the process, not the polish of the product.
If there’s a through-line in all these exercises, it is that reputation and learning are entwined. Reputation gives moral and motivational force to our efforts, but excessive concern for reputation can paralyze learning. Vonasch’s studies suggest that people guard their reputations with near-mortal zeal because cooperation itself depends on being seen as competent and moral. In the classroom, that same instinct can suppress risk-taking: students would rather avoid controversy than learn something.
These assignments only work because they are framed as a partnership. I tell my students at the outset: if you use AI to replace your mind, you’re training the model; if you use it to extend your mind, you’re training yourself. The point I aim to convey to students, many of whom seem to get it, is that copying and pasting assignments in college will not make them more employable. However, if AI and LLMs make them more productive than their competitors, they can write their own ticket. So their goal shouldn’t be to avoid detection, but to use AI and LLMs to improve and extend their cognition, the way tools like math and language do, thereby making them relatively more employable than their thoughtless competitors.
The results have been consistently positive. Participation rates have risen; students attend “reverse office hours” eagerly; their writing shows more reflection and less posturing. The conversations are richer, less rehearsed. Many of my students have discovered that doing philosophy doesn’t mean having the right answers but instead exploring deep, fundamental questions about reality and the human condition.
Socrates called himself a midwife of ideas. In my classes, the AI serves as a synthetic midwife that can help professors elicit understanding without replacing the human labor of thought. It lets students fail safely, reflect honestly, and teach boldly. It lowers the reputational cost of inquiry so that curiosity can do its work. When students debate and teach a machine, they take an active role in their development as students and budding philosophers.
Jimmy Licon is a philosophy professor at Arizona State University, where he teaches courses in epistemology, ethics, and philosophy of law. His research and public writing focus on how incentives and information shape moral and intellectual life in venues like AIER, Mises Wire, and The Pamphlet, alongside peer-reviewed work.
References
Andrew J. Vonasch et al., “Death Before Dishonor: Incurring Costs to Protect Moral Reputation,” Social Psychological and Personality Science (2017).
David Duran, “Learning-by-Teaching: Evidence and Implications as a Pedagogical Mechanism,” Innovations in Education and Teaching International (2016).
“Synthetic Socrates” Assignment Set, Teaching Notes (2025).
(Dr. Brian Rempel, Faculty Focus)
It is that time of year again. I am staring longingly out the window while I sit indoors, on my computer, tweaking syllabi in anticipation of teaching again in the coming semester. This ritual feels different this year because I am coming back from a sabbatical, part of which I spent as a student (again). At this time last year, I was preparing to learn basic statistics for my research as a student, because my chemistry undergraduate degree was curiously bereft of training in statistics. So, I (again) registered for a first-year class as a student. My Day 1 experience as a student (again) was enlightening. My brief foray into the freshman experience, while not quite a full experience including living in a dorm room and eating in the cafeteria (Nathan 2006), gave me a chance to critically reflect on what students experience in my own classes. I would like to share a few of my experiences, what I learned from them, and what I aim to change in my own Day 1 routine in the coming year, with the hope that it helps some of you too.
One of the most profound shocks of being a student (again) on Day 1 was the sense of anxiety I experienced before the class started. I knew I was very rusty on the prerequisite material; it has been almost 25 years since I took high school math! I was not feeling at all confident that I still “had it” in terms of my ability to succeed as a student. Looking back now, I realized that my own students must also feel the same way. How many of them have not retained all the prerequisite knowledge they learned a while ago? While probably not a quarter of a century ago, I am certain they do feel like their high school education was a long time ago by the time they are in my class. Similarly, how many of them were always near the top of their classes, and now suddenly realize that might no longer be true?
My experience reminded me that there is no such thing as too much review of prerequisite material, either in the first few classes at the start of my course or as informal reminders when I am introducing a brand-new concept. My students probably feel it has been a long time since they had to activate the prior knowledge that I am trying to build upon, and I suspect some don’t feel anywhere near as confident in their ability to succeed as they did even a year ago. Little reminders and lots of reassurance will go a long way this coming fall when I am back in the class as an instructor.
But my anxiety did not only manifest before the class started. On my first day back as a student, I fully panicked when a ‘fellow student’ asked a question of our instructor (“Professor, is that variable nominal, or ordinal?”). I panicked because I realized I had absolutely no idea what the question even meant, and I realized how hard it was being a novice again (Mulnix 2023)! I also missed the answer to the question because I was so busy trying not to get visibly upset in the middle of a crowded lecture hall. I am not somebody prone to panic attacks or severe anxiety, but the sudden realization of how unmoored I felt taught me a valuable lesson.
This coming semester, I will be more attentive to how simple questions or off-the-cuff remarks could easily hit my students in unexpected ways. For students already feeling unprepared or uncertain in my class, innocent queries or comments might subtly reinforce underlying feelings that my class will be beyond their capabilities. Phrases such as “this should be easy” or “something you should all know already” shall be completely stricken from my vocabulary. Especially as we seek to educate a generation suffering from higher rates of diagnoses for anxiety and panic, I need to find ways to make sure my students remember that they probably know more than they think. Prerequisite classes, while perhaps feeling long past for them, have provided them with everything they need to succeed in my class. Part of my job is to encourage them to engage in the hard work and remind them that they do fit in my class. Whether this involves an easy content-related quiz on the first day of classes or some similar confidence-building exercise, I now realize how important confidence-building is for the first day of class.
The last key experience from Day 1 of my second time as a first-year student was linked to my discomfort when I entered the lecture hall. I suspect most of us still remember the shock of walking into our first big lecture hall and being stunned by the sheer size of the room and the number of other bewildered students. But what I felt was discomfort disconnected from the size of the lecture hall. Rather, I walked in and instantly felt like I did not belong. The grey in my beard was a clear signal to other students that I was different, and they subtly found ways to avoid sitting near me or meeting my eye. I believe these behaviors were unconscious reactions on the part of others, but the effect felt very real. As a White male living in Western Canada, I have not often felt the sensation of walking into a room and realizing I don’t automatically fit in with other people.
I realized some of my students must feel the same way when they walk into my classroom and are visibly different from their peers or their instructor. That sense of alienation is subtle, but it is very real when you are the one who stands out. In my classes, I have always talked about, and tried to put into action, the message that everybody is welcome. But this semester, I will try much harder to convey that message, both implicitly and explicitly. My students who are part of traditionally underrepresented groups, whether based on race, gender, sexual orientation, disability status, age, or some other characteristic, need to feel welcome in my class. My syllabus language will be tweaked to be more directly inclusive. My friendly greetings on Day 1 will be clearly directed to all my students to counteract whatever unconscious biases I may harbor. I will endeavor to arrange the physical classroom to make it easier for everybody to sit next to somebody else, and I will make sure my first-day icebreaker gives people a chance to talk to each other (Weimer 2017). Even if it is just to exchange pleasantries, getting my students talking to each other is the best thing I can do to help them realize they all share similar anxieties, along with excitement about how that first class will go.
As the semester continued and I settled in to learn about things such as the differences between a z-test and a t-test, my Day 1 experience as a first-year student (the second time around) stood out as particularly important. What I experienced as a student in statistics gave me a great deal to think about regarding how my students might feel when they enter my first-year general chemistry classroom this fall. So instead of staring out the window, I had best get back to preparing that course syllabus to incorporate all those non-statistics lessons I learned this past year.
Brian Rempel, PhD, is an Associate Professor at the University of Alberta’s Augustana campus, where he primarily teaches general and organic chemistry. Brian developed a love for teaching chemistry during his PhD work on enzymology at the University of British Columbia and brought that passion to his teaching-focused role at Augustana in 2009. His research examines the impact, and student perceptions, of novel assessment methods, with a particular focus on equitable ways of evaluating student knowledge that reduce student anxiety.
References
Mulnix, Amy B. 2023. “Being a Novice Again.” The Teaching Professor, February 27. https://www.teachingprofessor.com/topics/professional-growth/being-a-novice-again/
Nathan, Rebekah. 2006. My Freshman Year: What a Professor Learned by Becoming a Student. Penguin Books.
Weimer, Maryellen. 2017. “First Day of Class Activities That Create a Climate for Learning.” The Teaching Professor, July 19. https://www.teachingprofessor.com/topics/for-those-who-teach/first-day-of-class-activities-that-create-a-climate-for-learning/
(Dr. Sarah Forbes at Faculty Focus)
Rubrics are valuable tools that support student growth and facilitate instructor grading and feedback (Suskie, 2018). As instructors, we see this value; unfortunately, many of our students, especially first-year students, are unfamiliar with the concept. This presents an opportunity to raise their awareness of a tool that will benefit them as they master concepts and seek course success.
During a two-week mathematics-focused summer bridge program, I teach a segment called Behind the Scenes, which highlights various success strategies students will need to leverage to be successful in college. One lesson introduces these students to the concept of rubrics, covering everything from their purpose and structure to their application.
As noted by Doyle and Zakrajsek (2018), “The human brain is constantly looking for connections. Connections help you use prior knowledge to build bridges to the new material, creating a more meaningful understanding of the new material” (p. 15). Building off this idea, the lesson begins with an illustration about desserts, specifically cake, a concept which will resonate with most if not all students. The following instructions are given:
I took a vote and decided that math is out, cake is in. Cake has much more bearing on your future than math. Your assignment is to design the cake I want to eat for lunch today. You will be graded on this, and it will be worth a lot of points. Take a couple of minutes to sketch out your design, making note of the details. You will not be graded on your artistic ability or lack of materials.
As students are working, I’m walking around the room. Sometimes, they will ask me questions about my preferences. Much to their dismay, I tell them I am not answering questions at that time, but that they will understand why in a few minutes.
I ask for volunteers to share their designs with me. As the student shares, I find a reason why I do not want to eat their cake. Perhaps there are not enough layers, or it’s not my favorite flavor. I will then subjectively give them a grade, loosely based on the rubric I will eventually show them, but with an element of randomness to aid the discussion. Finally, I’ll show them a picture of the right answer: a three-layer lemon cake decorated with pale yellow frosting, lemon slices, and a few small flowers.
I then pose a series of questions for discussion:
What did you think of my grading?
Was it subjective or objective? Why?
If we repeated this assignment, what would you want to know to meet my expectations?
Understandably, students are not fans of how I graded, correctly identifying its subjective nature. Based on the grading and feedback they heard me give the volunteers, they can identify several criteria that would have been helpful to know in advance. As we talk through these ideas, I make the connection that in higher education, the tool we use to be transparent about expectations and to grade assessments objectively is called a rubric. Of course, I also backtrack and tell them, “FYI, math is back in – turns out it is useful.”
Once we have defined a rubric, we talk about the structure. I present to them a sample cake rubric, and talk about the grid format, with rows representing the criteria we will be evaluating (layers, flavor, decoration); columns representing the performance rating (needs improvement, proficient, advanced); and the cells containing descriptions of performance and point values (0, 5, 10).
After we establish what a rubric is and how instructors use it, we turn toward the application of rubrics as a way for students to pre-grade their assignments before submitting. Each student in the program will be enrolled in a first-year seminar their first term, which becomes the example with which they can practice. I pass out the first-year seminar rubric along with the assignment prompts, which are consistent throughout the course. I also provide two sample responses, one that clearly does not meet expectations (43 words total) and one that does meet expectations (270 words total). I ask them to work in pairs to review the submissions and grade them using the given rubric, which includes four criteria and two performance levels.
Pairs are then asked to report out their score for the first submission, and we talk through their decisions. I share with them my grade as well. This process is repeated for the second submission. We also discuss what they liked about the experience and what they found challenging. In all, this lesson takes about thirty minutes but could be shortened or lengthened as needed.
Doyle and Zakrajsek (2018) further identified that “when learning something new, it helps to be interested in it, see a value to it, pay attention to it, associate it with something you already know, and practice it a lot” (p. 100). The cake illustration is something they already know, and let’s be honest, if it relates to food, college students are very interested. Through the absurd example, they see its value, which piques their curiosity and attention. The hands-on practice, while maybe not sufficient in and of itself, at least provides some experience with a new idea from which they can build their confidence.
This rubric lesson not only raises awareness of a grading technique they will encounter over their time in college, but they get practice with an actual rubric that will be used in their first term. Further, their review of poor and exemplary submissions can help frame the direction they need to take when it comes to their own assignments. As McGuire (2015) pointed out, one way “to help your students gain competence is giving them targeted feedback, rubrics, and exemplars” (p. 88).
When students understand the purpose of the rubric, they are better positioned to achieve success in their courses. Proactively, it allows them to assess whether they are meeting assignment expectations before the due date. Reactively, it allows them to understand why they missed points, leveraging the feedback as formative assessment and making corrections for future assignments.
Sarah A. Forbes, PhD, is the Student Academic Success Director and a first-year seminar instructor at Rose-Hulman Institute of Technology. In these roles, she helps students learn new strategies for academic success. Sarah also serves as a first-year seminar instructional designer, summer bridge program director, and academic advising program administrator.
References
Doyle, T., & Zakrajsek, T. D. (2018). The new science of learning: How to learn in harmony with your brain. Stylus Publishing.
McGuire, S. Y. (2015). Teach students how to learn. Stylus Publishing.
Suskie, L. (2018). Assessing student learning: A common sense guide. John Wiley & Sons, Incorporated.
Scholar Maryellen Weimer noted in her work "It's a Myth: Nobody Knows What Makes Teaching Good" in Teaching College, Collected Readings for the New Instructor that the consistent elements of effective teaching include:
personal enthusiasm (which she notes tends to be contagious to students),
clarity of discourse and presentation,
an ability to stimulate and arouse interest from listeners,
knowledge (both competence with respect to the actual content of instruction and an evident love of the subject matter).
Weimer stressed that these characteristics emphasize "thorough, up-to-date knowledge of the subject matter; clearly defined instructional objectives; and a genuine commitment to teaching" (p. 43).
What are your thoughts? What do you think contributes to an excellent teacher?
(Adapted from New Faculty: A Practical Guide for Academic Beginners by Christopher J. Lucas and John W. Murry, Jr.)
(from Community College Research Center, November 2025 by Padilla, Baker, Lahr, and Minaya).
Many community college students begin their journey with high educational aspirations. Unfortunately, a large percentage of students leave college before they achieve their academic goals. In a recent survey of entering community college students, 83% reported plans to earn a bachelor’s degree (Center for Community College Student Engagement, 2023). Yet, nearly 40% of students in public two-year colleges do not return for their second year and have not earned a certificate or degree (National Student Clearinghouse [NSC], 2025a). Students who leave without earning a credential are unlikely to return (NSC, 2025b). As a result, many students miss out on the socioeconomic benefits associated with completing a postsecondary credential. Given the gap between students’ initial goals and actual attainment rates, understanding why students depart college is a critical step in helping to prevent attrition and its negative implications.
Past research points to several key factors that shape students’ decisions to remain enrolled. Students enter college with goals and expectations that are modified by their experiences during their journey, both inside and outside the institution (Bean & Metzner, 1985; Tinto, 1993). Interpersonal relationships and interactions within the college, such as those with peers, faculty, and staff, can foster a stronger sense of belonging and provide support in navigating college, which encourages persistence (Deil-Amen, 2011; Karp, 2011; Schudde, 2019; Tinto, 1993). Additionally, psychological factors like satisfaction and the perceived value of a college degree, which evolve over the course of the college experience, can influence students’ decision to remain enrolled (Bean & Metzner, 1985). Finally, self-efficacy, the confidence in one’s ability to accomplish a goal, can positively influence academic achievement (Eccles & Wigfield, 2002).
(from Community College Daily, November 17, 2025 by Matthew Dembicki)
Economic contributions by international students at community colleges increased for the third consecutive year, with nearly 64,000 students contributing $2.2 billion to the U.S. economy and supporting 9,099 jobs during the 2024-2025 academic year, according to a new analysis released today.
That’s a 10.5% increase, but significantly less than last year’s growth of 33%. The number of jobs they support also increased, though also at a slower rate — 7.4% this year, compared to 28% last year.
According to the analysis by NAFSA: Association of International Educators and JB International, for every seven international students enrolled in a community college, one U.S. job is created and supported by spending occurring in the higher education, accommodation, dining, retail, transportation, telecommunications and health insurance sectors.
Texas had the most international community college students at 14,253, contributing more than $419.7 million to the economy, followed by California with 13,788 students ($615.7 million) and Washington with 6,236 students ($192.6 million), according to the analysis. Next on the list were Florida, New York, Maryland, and Virginia. The report provides information on all 50 states, where available.
Broader analysis
The analysis is part of a larger analysis of international students at all U.S. colleges and universities, which sheds light on the expected loss of revenue and jobs resulting from a drop in international students. Overall, this fall has seen a 17% decline in new student enrollment, contributing to a 7% drop in total enrollment, driven mainly by dips in graduate and non-degree students. This translates to more than $1.1 billion of lost revenue and nearly 23,000 fewer jobs, according to analysts, who based their projections on the Fall 2025 Snapshot on International Student Enrollment report published by the Institute of International Education, along with data from the departments of Education, Commerce, Homeland Security and State.
There was, however, a 2% increase in undergraduate enrollment, which the report says may have been due to earlier student decisions and visa appointments occurring before various administrative actions took place this summer.
Fanta Aw, NAFSA executive director and CEO, warned that the U.S. must adopt more proactive policies to attract and retain the world’s best and brightest, especially in post-study work.
“Otherwise, international students will increasingly choose to go elsewhere — to the detriment of our economy, excellence in research and innovation, and global competitiveness and engagement,” she said in a release.
The broader analysis also examines data by state and congressional district, as well as industry sectors.
(from Faculty Focus, November 10, 2025 by Fahad Ameen)
One of my quietest students once came up to me after class and said, “I’ve never felt comfortable speaking in English before this course.” That single sentence reminded me that what we build in the classroom goes far beyond lectures or grading. It’s the atmosphere we create that allows learning to happen. For this student, the turning point wasn’t grammar drills or vocabulary tests. It was trust.
As educators, we often focus on the what and how of teaching. But who the student in front of us matters just as much. In my experience, building genuine rapport is one of the most overlooked yet powerful strategies for helping students feel safe enough to participate, take risks, and grow.
Rapport is not about being the “fun” professor or trying to be everyone’s favorite. It’s about creating a space where students feel respected, seen, and supported not just academically, but as people.
In my classrooms, especially with my work with adult ESL learners in Kuwait, rapport means:
Greeting students by name and with warmth
Encouraging participation without pressure
Acknowledging their challenges as second-language users
Listening actively to their concerns and ideas
When students feel this kind of connection, they are far more willing to ask questions, attempt difficult tasks, and take ownership of their learning.
Here are five practical habits I’ve developed that have made a noticeable difference in student engagement and classroom climate.
I make time before and after class for informal conversations, even brief ones. A simple, “How’s your week going?” can open doors. Students need to know we are not just grading machines. We are humans too.
It seems like a small detail, but using students’ names early in the semester changes everything. When I call on “Fatima” instead of “you in the third row,” I signal that her presence matters.
When a student takes a risk, especially with speaking, I make sure to acknowledge the effort. Saying, “That was a great attempt,” helps build confidence and normalizes the learning process.
Learning is full of errors. I often point out my own slips and laugh with the class. This sets the tone that mistakes are part of the process, not something to fear.
I regularly ask students what’s helping and what’s not. If I change something based on their feedback, I let them know. This builds trust and shows them that their voices shape the learning experience too.
I remember one student, Yousef, who barely spoke during the first few weeks of class. He sat near the back, avoided eye contact, and never volunteered. I made a point to greet him by name each class, ask simple follow-up questions, and check in privately after group work. Slowly, he started opening up. First, he answered yes-no questions. Then, short phrases. By the end of the semester, he stood up and gave a short presentation in English. It wasn’t perfect, but it was powerful. Afterward, he told me, “You made me feel like I could do it.” That comment stays with me to this day.
One area where rapport makes a real difference is in how students receive feedback. Constructive feedback is essential for improvement, but it only works if students feel it comes from a place of support.
Once, I had to correct a student’s repeated grammatical mistake. It could have felt embarrassing, but because we had already built trust, she laughed and said, “I knew you would catch that.” She didn’t feel attacked. She knew the correction was about helping her grow.
This kind of response isn’t automatic. It comes from creating a consistent environment where feedback is expected, respected, and grounded in care.
The impact of strong rapport is not limited to one assignment or one semester. I have seen students who once hesitated to speak now take initiative in group discussions, volunteer for peer mentoring, or continue English practice long after the course ends.
Rapport also builds community. When students see the teacher modeling kindness, encouragement, and open communication, they begin to do the same with each other. This shifts the classroom from a silent space to one that is collaborative and supportive.
If you’re looking to build rapport in your own classroom, here are three simple practices you can try immediately:
Learn and use student names within the first two weeks. Use name tents if needed. Even at mid-semester, consistently using student names shows you care about and value their presence, which helps strengthen classroom connections.
Ask for anonymous feedback midway through the term. Just two questions: “What’s helping you learn?” and “What would you change?”
Set aside two minutes at the end of class to praise a risk taken, a great question asked, or a quiet win. This reinforces the kind of behavior you want to see more of.
These small actions compound over time. They send a clear message to students that they matter and that their growth is the shared goal of the classroom.
As educators, we hope students walk away from our courses remembering the material. But what they often remember most is how we made them feel. Did they feel respected? Encouraged? Safe enough to take a risk?
If the answer is yes, then we have done more than teach. We have helped them build confidence, resilience, and the courage to use their voice.
That is the kind of learning that stays with them long after the final exam.
Fahad Ameen is a PhD researcher in Applied Linguistics at the University of Nottingham and an English language instructor at Kuwait University’s College of Education. His work focuses on student motivation, gamified learning, and building meaningful teacher–student relationships in Arab ESL contexts.
(from American Association of Community Colleges, November 11, 2025 by Matthew Dembicki)
Undergraduate enrollments for this fall are up again, with community colleges again seeing the largest rate increase, according to preliminary reporting to the National Student Clearinghouse (NSC) Research Center.
Overall undergraduate enrollment is up 2.4% so far, with increases in all sectors, though community colleges are leading with a 4.0% boost, compared to 1.9% at public four-year institutions and 0.9% at private, nonprofit four-year institutions. And, once again this fall, certificates appear to be a driving force — a 6.6% increase, compared to a 3.1% bump for associate degrees and 1.2% for bachelor’s degrees.
While enrollment in programs like computer and information science is seeing a large decrease this fall — down 5.8% at two-year institutions — trade majors continue to grow in areas such as engineering technologies (8.3%), mechanic and repair technologies (10.4%) and health professions (10.1%), according to the NSC Research Center report. National discussions about AI replacing workers, coupled with technology companies scaling back their workforces, could be a driver of the drop in computer and information sciences, which other higher education sectors have also experienced — declines of 7.7% among baccalaureate institutions and 15% at the graduate level.
“It’s a truly eye-opening decline,” Matthew Holsapple, NSC Research Center’s senior director of research, said in a call with reporters on Monday.
The center’s preliminary data is based on 8.5 million enrollments reported as of September 25 by 49.4% of colleges that send information to the NSC Research Center. A final report based on information from all the colleges will be released in January. It will include more detailed information as well as data on dual enrollments, for-profits and online institutions, which were not in the preliminary report.
The NSC Research Center also looked at enrollment trends among sectors since fall 2023. Over the span, undergraduate enrollment has increased 5.7%, with community colleges leading the way with a 9.6% increase, followed by public four-years at 4.1%, according to the report.
While the report includes data on community colleges, it also breaks down numbers within the sector into public two-year colleges and primarily associate degree-granting baccalaureate institutions (PABs), which continue to expand. PABs saw a higher enrollment growth this fall than two-year colleges, a 4.1% increase compared to 3.9%. Since fall 2023, PABs have seen a 10.2% jump, compared to 9.4% at public two-years.
Only White students experienced a decline in community college enrollment this fall, sliding 3.0%. Multiracial students had the highest increase at 5.3%, followed by Hispanic (4.4%), Black (4.3%), and Asian (3.8%) students.
But the center cautioned that the declines could be overestimated due to an increasing number of students choosing not to report their race/ethnicity. It observed that students with missing/unknown race comprise 17% of undergraduate enrollment, a 21% increase compared to last fall. Among community colleges, the increase is 22%. Since 2023, there has been a 64.9% increase in students whose race/ethnicity is recorded as missing/unknown.
While the center does not include dual enrollment in its preliminary report, it does have an enrollment breakout based on age. For students ages 17 and younger, enrollment increased 5.5%. Older learners also saw increases again, up 4.5%. Traditional college-age students (18 and 19-20 for community colleges) also saw increases of 5.2% and 3.8%, respectively.
However, the rates of increase are lower than last fall. In fall 2024, students 17 or younger at community colleges saw an 8.7% increase, and adults 25 to 29 saw a 7.4% jump. But traditional-age students saw higher increases than last fall. Holsapple cautioned about depending too much on the early findings, noting that not all colleges have reported their data. Findings in the final report in January could be different, he noted.
(from Faculty Focus, November 7, 2025 by Juli S. Charkes, PhD)
Wrapping up a recent course, one of my students approached and asked to talk. It turns out she wasn’t there to review an assignment or clarify a grade. Instead, she was seeking my advice on her future career: What were my thoughts on job prospects for her major? What professional pathways made sense, given our rapidly changing world?
As the conversation touched on search strategies, market forces, and even the shifting nature of work itself, I was struck by how the moment represented a growing phenomenon in higher education: helping students succeed beyond the academic setting is a shared responsibility, particularly as the call for post-academic success continues to grow.
Dedicated career centers lead the way for student success beyond campus, but when students seek guidance on their future professional roles, chances are it will be an instructor to whom they turn. That’s good news for those of us committed to academic success. When students reach out to faculty for career advice, it’s an opportunity to deepen trust and strengthen learner confidence, principles that correlate with learning success. It also allows us to link learning with purpose, while helping us more deeply understand the evolution of our disciplines.
Integrating career readiness into our courses benefits us all. Here are some simple steps to get started:
Classroom instructors are the most consistent professional mentors that students encounter throughout their college years. A passing comment about your own career trajectory, or a few minutes spent discussing potential paths in your field, can expand a student’s sense of what’s possible, particularly for those whose backgrounds lack the types of professional networks that can shape professional success. Inviting professionals into the classroom, whether through alumni networks or local industry, is an opportunity to provide students with professional roadmaps. Sharing examples of the different ways your discipline shows up in the world can likewise orient students toward a meaningful future in which they will likely change careers multiple times. To help guide conversations, invite students to explore so-called “clusters” of careers using tools such as the U.S. Department of Labor’s Occupational Information Network, which provides data on growing fields and industries.
When students understand how their academic work connects to real-world applications, their engagement deepens. They’re more likely to push through a complex assignment when they see how it builds toward the type of skills they can apply to future employment. Not every lesson needs to turn into a job-training session, but connecting the “what” of our teaching to the “why” that students are so often seeking strengthens outcomes and can also improve student satisfaction. Studies indicate that students gain motivation when they can see how the skills they’re developing serve a purpose beyond the classroom, so canvassing students about their future career goals and integrating conversations and activities that help them map content to careers can be highly effective.
Supporting students with future career goals means guiding them to recognize key competencies that develop across disciplines and that are prioritized across professional fields. Key among these are the skills of critical thinking, communication, teamwork, cultural fluency, and ethical decision-making. Students won’t always recognize these as in-demand skills unless we name them, so consistently referencing their utility is an impactful step. That might mean pointing out that a history paper builds research skills; a biology lab fosters analytical reasoning; and a group project in any discipline develops collaboration and leadership skills that are prized in the workplaces of today. By drawing attention to these connections, we help students value the breadth of what they’re learning while helping them understand how to showcase these skills to future employers.
Today’s employers seek graduates who bring both depth and versatility, meaning team members who know the specifics of their field, but can also communicate, adapt, and think creatively. These hybrid profiles are in demand across sectors and instructors can support their students by embedding assignments that mirror real-world demands. Case studies, presentations, simulations, along with reflective writing, all offer chances to practice skills that matter in professional life. When we give students opportunities to apply knowledge in dynamic ways, we prepare them not only for jobs, but for the type of lifelong learning these professional positions will demand.
Helping students prepare for their careers doesn’t dilute academic rigor; it strengthens it by affirming that education matters in the world beyond academics. That day in the classroom, as I listened to my student’s questions about her future, I realized she wasn’t looking for certainty, but rather the opportunity to engage in the very skills we’ve always valued in teaching: critical questioning, reflection, and the ability to envision new ways to advance. Faculty are in a unique position to offer this type of guidance. Embracing and integrating career readiness into our teaching supports the pedagogical goals of the classroom while helping students succeed well beyond them.
Juli S. Charkes, EdD, is a former Director of a Center for Teaching and Learning where she led faculty development across 100 academic programs. She has been a classroom instructor for the past 14 years, teaching organizational leadership, communications, and media studies at both the undergraduate and graduate levels.
References
Kaplan. (n.d.). Universities address workforce and career readiness. https://kaplan.com/about/trends-insights/universities-address-workforce-and-career-readiness
Pleschová, G., Sutherland, K. A., Felten, P., Forsyth, R., & Wright, M. C. (2025). Trust-building as inherent to academic development practice. International Journal for Academic Development, 30(1), 1–13. https://doi.org/10.1080/1360144X.2025.2454704
World Economic Forum. (n.d.). The future of work is non-linear (or why you’ll have more than one career in your lifetime). https://www.weforum.org/stories/2023/05/workers-multiple-careers-jobs-skills/
Fish, N., Bertone, S., & van Gramberg, B. (2025). Improving student engagement in employability development: recognising and reducing affective and behavioural barriers. Studies in Higher Education, 1–16. https://doi.org/10.1080/03075079.2025.2461271
Bauer-Wolf, J. (2023, November 30). Employers value a college degree but think students lack some skills, survey says. Higher Ed Dive. https://www.highereddive.com/news/employers-value-a-college-degree-but-think-students-lack-some-skills-surve/701051/
Mashek, D. (2022, June 23). Collaboration is a key skill. So why aren’t we teaching it? MIT Sloan Management Review. https://sloanreview.mit.edu/article/collaboration-is-a-key-skill-so-why-arent-we-teaching-it/
(from Faculty Focus, October 29, 2025 by Sybil Prince Nelson, PhD)
Parents who grew up in the ’80s and ’90s know the feeling: you’re listening to your kid’s playlist, and suddenly a song hits you with a wave of uncanny familiarity. Despite the claims by your teen that it is the latest and greatest, you know that it is just a repackaging of one of your favorite tunes from the past. I am noticing a similar trend with generative AI. It is inherently regurgitative: reshaping and repackaging ideas and thoughts that are already out there.
Fears abound as to the future of higher education due to the rise of generative AI. Articles from professors in many different fields predict that AI is going to destroy the college essay or even eliminate the need for professors altogether. Their fears are well founded. Seeing the advances that generative AI has made in just the past few months, I am constantly teetering between immense admiration and abject terror. My chatbot does everything for me, from scheduling how to get my revise and resubmit done in three months to planning my wardrobe for the fall semester. I fear becoming too reliant on it. Am I losing myself? Am I turning my ChatGPT into a psychological crutch? And if I am having these thoughts, what effect is generative AI having on my students?
Grappling with the strengths and weaknesses of my own AI usage, I feel I have discovered what might be the saving grace of humanity (feel free to nominate me for the Nobel Peace Prize if you wish). As I hinted earlier, AI is more like a DJ remixing the greatest hits of society than an innovative game changer. My ChatGPT is more like Girl Talk (who you have probably never heard of; just ask your AI) than Beyonce (who you most definitely have heard of). Not that there’s anything wrong with Girl Talk. Their mashups are amazing and require a special kind of talent, just as navigating AI usage requires a certain balance of skills to create a usable final product. But no matter how many pieces of music from other artists you mash together, you will not eventually turn into a groundbreaking, innovative musician. Think Pat Boone vs. The Beatles, Sha Na Na vs. David Bowie, Milli Vanilli vs. Prince, MC Hammer vs. Lauryn Hill.
As a mathematician and a novelist, I see this glaring weakness in both of these very different disciplines. I’ll start with writing. ChatGPT is especially helpful in coming up with strange character or planet names for my science fiction novels. It will also help me create a disease or something else I need to drive the plot further. And, of course, it can help me find an errant comma or fix a fragmented sentence. But that is about it. If I ask it to write an entire chapter, for example, it will come up with the most boring, derivative, and bland excuse for prose I have ever seen. It will attempt my humor but fail miserably. It sometimes makes my stomach turn, it’s so bad.
A study from the Wharton School found that ChatGPT reduces the diversity of ideas in a brainstorming pool, narrowing the scope of novel output. Beyond that, I find that when I use ChatGPT to brainstorm, I typically don’t use its suggestions. Those suggestions just spark new ideas and help me come up with something different and more me.
For example, I asked ChatGPT to write a joke about its bad brainstorming habit of recycling the same core ideas over and over again. It said:
Joke: That’s not brainstorming—it’s a lazy mime troupe echoing each other.
That’s lame. I would never say that. But another joke it gave me sparked the music sampling analogy I opened this article with.
In any case, because of generative AI’s inability to actually generate anything new, I have hope that the college essay, like the novel, will not die. Over-reliance on AI may indeed debilitate the essay, perhaps putting it on life support and forcing students and faculty to drag its lifeless body across the finish line of graduation. But there is still hope.
I remember one of my favorite English teachers in middle school required that we keep a journal. Each day she asked us to write something, anything, in our journals, even if it was only a paragraph or just a sentence. Something about putting pen to paper sparked my creativity. It also sparked a lifelong notebook addiction. And even though I consider myself somewhat of a techie and a huge AI enthusiast, to this day I still use notebooks for the first draft of my novels.
It is clear to me that ChatGPT will never be able to write my novels in my voice. I don’t claim to be a great novelist. I just feel that some of my greatest work hasn’t been written yet. While ChatGPT may be able to write a poem about aardvarks in the style of Robert Frost or a ballad about Evariste Galois in the style of Carole King, it can’t write my next novel, because it doesn’t yet exist. And even when it tries to imitate my voice and my style, predicting what I will write next, it does a poor job.
A research paper is inherently different from a creative work of fiction, however. ChatGPT does do a pretty good job of gathering information on a topic from several sources and synthesizing it into a coherent paper. You just have to make sure to check for the errant hallucinated reference. And honestly, when are our students ever going to be asked to write a 15-page research paper on Chaucer without any resources? And if they are, ChatGPT can probably produce that product better than an undergraduate student can. But the process, I would argue, is more important than the final product.
In his Inside Higher Ed essay “Writing the Research Paper Slowly,” JT Torres recommends a scaffolded approach to the research paper. This method focuses on the process of writing: exploring and reading sources, taking notes, organizing those notes into a “scientific story,” and creating an outline. Teaching students the process of writing the paper instead of focusing on the end product leaves them more confident that they can not only complete the task at hand but also transfer those skills to another subject. Recognizing these limitations pushed me to rethink how I design assignments.
Knowing that generative AI can do some things (but not all things) better than a human has made me a more intentional professor. Now when I create assignments, I ask: Can ChatGPT do this better than an undergraduate student? If so, then what am I really trying to teach? Here are a few strategies I use:
When designing an assignment, ask yourself whether it is testing a skill that AI already performs well. If so, consider shifting your focus to why that skill matters, or how students can go beyond AI’s capabilities.
In some cases, it makes sense to integrate AI directly into the assignment (e.g., generating code, automating data analysis). In others, the objective may be to build a human-only skill like personal expression or creative voice. I decide case by case whether AI should be a part of the process or explicitly excluded.
When I am teaching statistical tests, I have to ask myself: Am I assessing whether students understand the theory behind the test or whether they can run one using software? If it’s the latter, using AI to generate code might be appropriate. But if it’s the former, I’ll require manual calculations or a written explanation.
For any assignment where students are allowed to use AI, they must also write a reflection on how they used it and whether or not it was helpful. This encourages metacognition and reduces overreliance.
Having students share the prompts they used in completing the assignment teaches them about transparency and the need for iteration in their interaction with an AI. Students should not just be cutting and pasting the first response from ChatGPT. They need to learn how to take a response, analyze it, then refine their prompt to get a better result. This helps them develop prompt engineering skills and realize that ChatGPT is not just a magic answer machine.
What about academic research in general? How is AI helping or hindering? Given that generative AI merely remixes the greatest hits of human history rather than creating anything new, I think its role in academic research is limited. Academic breakthroughs start with unasked questions. Generative AI works within the confines of existing data. It can’t sense the frontier because it doesn’t know there is a frontier. It can’t sample past answers to a question that hasn’t been asked yet. About a year ago, I was trying to get my AI to write a section of code for my research, and it kept failing. I spent a week trying to get it to do what I wanted before realizing it was struggling because I was asking it to do something that hadn’t been done before. Finally, I gave up and wrote the piece of code myself; it took me about half an hour. Sure, the coding capabilities have gotten better over the past year, but the core principle remains the same: AI still struggles to innovate. It can’t do what hasn’t already been done. And because of “creative flattery,” it wants to make you happy, so it will try to do what you tell it to do even if it can’t. The product will be super convincing, but it can still be wrong.
I recently asked AI to write a theoretical proof that polygonal numbers are Benford distributed (spoiler: they are not). Then I had it help me write a convincing, journal-ready article. The only problem is that it also wrote me a theoretical proof that polygonal numbers are NOT Benford distributed. I submitted the former to a leading mathematics journal to see what would happen. Guess what: they caught it. A human was able to detect the “AI slop.” This shows me that (1) there will always be a need for human gatekeepers and (2) “creative flattery” is extremely dangerous in a research setting. The chatbot tries too hard to please, reinforcing what the user already thinks, even if that means proving and disproving the exact same thing. Academic research thrives on novel questions and unpredictable answers, which AI cannot supply, since it inherently regurgitates what is already out there.
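The Benford question is easy to probe numerically, which is exactly the kind of check a human reviewer (or a student) can run. Below is a minimal sketch in Python; this is my own illustration, not the author’s actual experiment, and the helper names are invented. It tallies the leading digits of the first 100,000 triangular numbers and compares the observed frequencies with Benford’s expected P(d) = log10(1 + 1/d).

```python
import math
from collections import Counter

def polygonal(s, n):
    """n-th s-gonal number: ((s-2)n^2 - (s-4)n) / 2 (e.g., s=3 gives triangular numbers)."""
    return ((s - 2) * n * n - (s - 4) * n) // 2

def leading_digit_freqs(values):
    """Empirical frequency of each leading digit 1-9 among positive integers."""
    digits = [int(str(v)[0]) for v in values if v > 0]
    counts = Counter(digits)
    total = len(digits)
    return {d: counts[d] / total for d in range(1, 10)}

def benford(d):
    """Benford's law expected probability for leading digit d."""
    return math.log10(1 + 1 / d)

# Compare triangular numbers (s = 3) against Benford's expected frequencies.
triangular = [polygonal(3, n) for n in range(1, 100_001)]
freqs = leading_digit_freqs(triangular)
for d in range(1, 10):
    print(f"digit {d}: observed {freqs[d]:.3f}  benford {benford(d):.3f}")
```

A chi-squared test on these counts would make the comparison formal; the point is that the empirical check takes minutes, whereas a plausible-sounding "proof" in either direction takes a chatbot seconds.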
The Benford polygonal numbers experiment is an important example of why we need to educate our students about AI usage in an academic setting. The Time article “Why A.I. Is Getting Less Reliable, Not More” states that, despite its progress over the years, AI can still resemble a sophisticated misinformation machine. Students need to know how to navigate this.
One of my favorite assignments in my Statistics course works like this: students must craft a statistics question that the chatbot gets wrong, explain why the chatbot got it wrong, and then provide the correct answer. A tweak of this activity would be to take AI-generated and human-written content, then compare and critique tone, clarity, or originality.
AI-generated content is like a song built entirely from remixed samples. Sampling has its place in music (and in writing), but when everything starts to sound the same, our ears and brains begin to tune out. A great remix can breathe new life into a classic, but we still crave the shock of the new. This is why people lost their minds the first time they heard Beyonce’s Lemonade or Kendrick Lamar’s To Pimp a Butterfly: not because they followed a formula, but because they bent the rules and made something we’d never heard before. AI, for all its value, doesn’t break the rules. It follows them. That is the difference between innovation and imitation. It is also the reason why AI, in its current capacity, will not kill original thought.
Sybil Prince Nelson, PhD, is an assistant professor of mathematics and data science at Washington and Lee University, where she also serves as the institution’s inaugural AI Fellow. She holds a PhD in Biostatistics and has over two decades of teaching experience at both the high school and college levels. She is also a published fiction author under the names Sybil Nelson and Leslie DuBois.
References
Hsu, Hua. 2025. “The End of the English Paper.” The New Yorker, July 7, 2025. https://www.newyorker.com/magazine/2025/07/07/the-end-of-the-english-paper.
Warner, John. 2024. “Get Ready for Faculty Bot-ification.” Inside Higher Ed, December 11, 2024. https://www.insidehighered.com/opinion/columns/just-visiting/2024/12/11/great-ready-faculty-bot-ification.
Meincke, Lea, Gideon Nave, and Christian Terwiesch. 2025. “ChatGPT Decreases Idea Diversity in Brainstorming.” Nature Human Behaviour 9: 1107–1109. https://doi.org/10.1038/s41562-025-02173-x.
Torres, J. T. 2021. “Writing the Research Paper Slowly.” Inside Higher Ed, May 5, 2021. https://www.insidehighered.com/advice/2021/05/05/benefits-new-approach-student-research-papers-opinion.
Sonnenfeld, Jeffrey, and Joanne Lipman. 2024. “Why A.I. Is Getting Less Reliable, Not More.” Time, June 20, 2024. https://time.com/7302830/why-ai-is-getting-less-reliable/.
In recent weeks I’ve been focusing on the problem of students coming to class less prepared. When that happens, discussions fall flat and frustrations rise. How can you meaningfully teach anyone if they’re not doing much work in between classes? (Read more by Beth McMurtrie from The Chronicle of Higher Education).
Mary Shelley wrote Frankenstein in 1818 when she was just 18 years old. In doing so, she not only created a gothic masterpiece that continues to influence and inspire literature but also broke barriers as a woman writer, pioneered science fiction, and created a cautionary tale that is relevant for AI, STEM, and tech researchers today (keep reading this article by Erik Ofgang at Tech & Learning).
At Rogue Community College (RCC), recent initiatives have focused on identifying systemic barriers to industry engagement and implementing targeted strategies to enhance agility in workforce development. Over the past year, RCC has taken deliberate steps to cultivate stronger partnerships with industry stakeholders and to align curricular offerings more closely with labor market needs to increase responsiveness and relevance in service of both students and workforce partners (keep reading this article by Lisa Parks at League for Innovation).
As an educator with years of experience in community colleges, I have often reflected on what drives career success today. One moment that reshaped my perspective came at a conference, where I met a senior executive from a leading tech company. Given his work in artificial intelligence (AI) and digital innovation, I assumed he held advanced degrees in computer science or a related field. To my surprise, he shared that his background was in philosophy (keep reading this article by Dr. Muddassir Siddiqi at Community College Daily).
Advice for New Faculty: Start with the Syllabus. Focusing on the syllabus at the front end helps the teacher focus his or her ideas and bring all of his or her learning philosophies together in one place (keep reading this article by Jennifer Patterson Lorenzetti).
Possible AI Syllabus Statements (from TBR AI Learning Collaborative): A continually updated document outlining various syllabus policies for generative AI tools, providing guidance. See more from the TBR AI Learning Collaborative. Also refer to TBR Policy: 1.08.10.00 Use of Artificial Intelligence
Mindset GPS Syllabus Checklist: What is a Mindset GPS syllabus messaging checklist? This checklist is designed to help you evaluate and reflect on the messages conveyed in your course syllabus so you can embed motivationally-supportive language throughout it. We encourage you to review your syllabus, focusing each time on one of the three learning mindsets.
AI Syllabus Statement Template: A customizable syllabus statement template to help instructors transparently communicate the role, use, and ethical considerations of AI technologies in their courses, from the TBR AI Learning Collaborative.
Bringing C.H.A.O.S. to Chaos: Syllabi with an A.I. Usage Policy: It is no secret that Artificial Intelligence (AI) technology is transforming college classrooms. AI tools can easily and quickly assist students in various tasks such as essay writing, literature reviews, analyzing data, formulating code, solving equations, image generation, music composition, and so much more. With minimal or no effort, within minutes, students have most assignments, test questions, or discussion problems figured out and done…enter the chaos! (continue reading the article)
Beyond Syllabus Week: Creative Strategies to Engage Students from Day One.
Ever wonder why students don't read the syllabus, despite the time and effort we put into creating it? It serves as a contract . . . yet many students simply aren't motivated to read it (continue reading this article by Dr. Joanne Ricevuto).
Start-Up Anxiety: Professor Shares His Fears as a New Semester Begins.
I have often said to my friends who don't teach that the week before fall classes begin is a tough time for me. The students are coming back and the campus is abuzz (continue reading this article by Dr. Peter Kakela).
Support GPS on the First Day. This activity provides example scenarios/activities, based on the article How to Teach a Good First Day of Class by James Lang, that you can implement during your first day of class to support your students’ learning mindsets. For Sense of Belonging, actions include arriving 10 minutes early and asking students to introduce themselves in small groups.
1st Day Student Survey: On the first day of the course, ask students key questions to learn more about them. Surveying your students was also suggested as an active learning technique to gauge their current skill levels, but additional questions can be added to surface student concerns around belongingness (e.g., What concerns or questions do you currently have about the course?). Here is an example used in one of our courses. Then, once responses are collected, reach out to students individually over email (or create a single class email summarizing and addressing the major concerns/questions raised). (Sense of Belonging, number 1).
Mindset Supportive Welcome Message: A welcome message can be an initial strategy to promote a better Sense of Belonging, but you also can consider adding elements of Growth Mindset and Purpose & Relevance as well. This includes a sample template to adapt.
Check out how Walters State prepped for Week 1 of the Fall 2025 semester!
Visit the extensive OER resources from Pellissippi State Community College which includes:
from the Tennessee Board of Regents (TBR): Tennessee Open Education flyer
from the Tennessee Higher Education Commission (THEC): Tennessee Open Education
Review this sample checklist from fellow faculty at Columbia State: Pre-term checklist
Review these sample checklists below to construct your own: Start of Semester Checklist from ETSU
The first day of class represents an opportunity to get your course off to a good start. Don't just tell students your name and hand out the syllabus (important though that document is). Keep in mind that the opening session sets the tone for the entire semester. It should be a time to anticipate students' unspoken questions and to address them directly:
Who's the teacher of the course and are they any good? Introduce yourself and briefly share your credentials, degrees, and professional background. But also include your areas of interest, and share something about yourself as an individual human being.
Who else is taking this course with me? Take the time to have students introduce themselves to one another and, time permitting, to the class as a whole. No one should leave class without learning the names of at least three other students.
What's this course about? Refer to the syllabus and share an overview of the class. Express your enthusiasm for your topic and show your students you're interested in the course and in their success. If you are not interested in your own course, do not expect your students to become involved either.
Will I enjoy this class? Explain what you require of your students and the course objectives. Indicate when, where, and how students can get help when they need it.
How do I get a good grade in this course? Explain in detail what evaluation procedures will be used. Explain the basis for your grading. Offer concrete, specific suggestions on studying, preparing for class, and reviewing the material. Include test-taking tips.
(Adapted from New Faculty: A Practical Guide for Academic Beginners by Christopher J. Lucas and John W. Murry, Jr.)
See below for Term Start activities to explore; click on activity titles below for more information.
Mindset GPS Syllabus Checklist: What is a Mindset GPS syllabus messaging checklist? This checklist is designed to help you evaluate and reflect on the messages conveyed in your course syllabus so you can embed motivationally-supportive language throughout it. We encourage you to review your syllabus, focusing each time on one of the three learning mindsets.
Mindset Supportive Welcome Message: A welcome message can be an initial strategy to promote a better Sense of Belonging, but you also can consider adding elements of Growth Mindset and Purpose & Relevance as well. This includes a sample template to adapt.
Support Mindset GPS on the First Day. This activity provides example scenarios/activities, based on the article How to Teach a Good First Day of Class by James Lang, that you can implement during your first day of class to support your students’ learning mindsets. For Sense of Belonging, actions include arriving 10 minutes early and asking students to introduce themselves in small groups.
Current Confidence and Prior Experience Survey: At the start of a course or new unit, have students reflect on their current confidence and prior experience in the skills that they are about to learn. Check out an example used in one of our courses. Then, at the end of the course (or unit), have students respond again to reflect on how their confidence and experience with those skills improved. As a variation on this activity, you also can have students reflect on what’s helped (or not helped) them learn similar skills in the past. (Growth Mindset, number 1).
Student Interest Survey: At the start of the course, collect a survey to assess students' (a) interest and prior knowledge in your course topic, (b) interest in pursuing particular majors/minors, (c) interest in future careers, and/or (d) general interests and hobbies in life. (Purpose & Relevance, number 1).
1st Day Student Survey: On the first day of the course, ask students key questions to learn more about them. Surveying your students was also suggested as an active learning technique to gauge their current skill levels, but additional questions can be added to surface student concerns around belongingness (e.g., What concerns or questions do you currently have about the course?). Here is an example used in one of our courses. Then, once responses are collected, reach out to students individually over email (or create a single class email summarizing and addressing the major concerns/questions raised). (Sense of Belonging, number 1).
Value Writing Interventions: This is a writing exercise that students can complete one or more times during the semester. In this activity, students reflect on beliefs that help them stay motivated. Specifically, this activity asks students to focus on reasons for learning that go beyond the typical motives of making money or making family proud.
Sense of Belonging Interventions. This is a writing exercise that students can complete during the semester. In this activity, students reflect on how their experiences in college may change over time. Specifically, this activity asks students to read example quotes from other college students and reflect on how their own feelings of belonging on campus may be similar to their peers.