ChatGPT & AI

ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI and launched in November 2022. ChatGPT has implications for higher education, both in and out of the classroom.

New! Five Tips for Writing Academic Integrity Statements in the Age of AI

(from Faculty Focus)


https://www.facultyfocus.com/articles/teaching-with-technology-articles/five-tips-for-writing-academic-integrity-statements-in-the-age-of-ai/ 

Author Rie Kudan received a prestigious Japanese literary award for her book, The Tokyo Tower of Sympathy, and then disclosed that 5% of her book was written word-for-word by ChatGPT (Choi & Annio, 2024).  

Would you let your students submit a paper where 5% of the text was written by ChatGPT?  

What about if they disclosed their use of ChatGPT ahead of time?  

Or, if they cited ChatGPT as a source?  

The rise of generative AI tools like ChatGPT, Copilot, Claude, Gemini, DALL-E, and Meta AI has created a pressing new challenge for educators: defining academic integrity in the age of AI.  

As educators and students grapple with what is allowed when using generative AI (GenAI) tools, I have compiled five tips to help you design or redesign academic integrity statements for your syllabus, assignments, exams, and course activities.  

1. Banning GenAI tools is not the solution 

Many students use GenAI tools to aid their learning. In a meta-analysis on the use of AI chatbots in education, Wu and Yu (2023) found that AI chatbots can significantly improve learning outcomes, specifically in the areas of learning performance, motivation, self-efficacy, interest, and perceived value of learning. Additionally, non-native English speakers and students with language and learning disabilities often turn to these tools to support their thinking, communication, and learning.  

Students also need opportunities to learn how to use, and critically analyze, GenAI tools in order to prepare for their future careers. The number of US job postings on LinkedIn that mention “GPT” has increased 79% year over year, and the majority of employers believe that employees will need new skills, including analytical judgment about AI outputs and AI prompt engineering, to be prepared for the future (Microsoft, 2023). The Modern Language Association of America and Conference on College Composition and Communication (MLA-CCCC) Joint Task Force on Writing and AI recently noted that: 

Refusing to engage with GAI helps neither students nor the academic enterprise, as research, writing, reading, and other companion thinking tools are developing at a whirlwind rate and being integrated into students’ future workplaces from tech firms to K–12 education to government offices. We simply cannot afford to adopt a stance of complete hostility to GAI: such a stance incurs the risk of GAI tools being integrated into the fabric of intellectual life without the benefit of humanistic and rhetorical expertise. (pp. 8-9) 

Ultimately, banning GenAI tools in a course could negatively impact student learning and exacerbate the digital divide between students who have opportunities to learn how to use these tools and those who do not (Trust, 2023). And banning these tools won’t stop students from using them: when universities tried to ban TikTok, students just used cellular data and VPNs to circumvent the ban (Alonso, 2023).  

However, you do not have to allow students to use GenAI tools all the time in your courses. Students might benefit from using these tools on some assignments, but not others. Or for some class activities, but not others. It is up to you to decide when these tools might be allowed in your courses and to make that clear to your students. Which leads to my next point… 

2. Tell your students what YOU allow  

Every college and university has an academic integrity/honesty or academic dishonesty statement. However, these statements are either written so broadly that there can be different interpretations of the language, or these statements indicate that the responsibility of determining what is allowable depends on the instructor, or both!  

Take a look at UC San Diego’s Academic Honesty Policy (2023) (highlights were added for emphasis).  

While this policy is detailed and specific, there is still room for interpretation of the text, and the responsibility of determining whether students can use GenAI tools as a learning aid (section “e”) falls solely on the instructor.  

Keeping the UC San Diego academic honesty policy in mind, consider the following: 

Which of these examples is a violation of the policy? This is up to you to determine based on your interpretation of the policy.  

Now, think about your students. Some students take three, four, five, and even six classes a semester. Each class is taught by a different instructor who might have their own unique interpretation of the university’s academic integrity policy and a different perspective regarding what is allowable when it comes to GenAI tools and what is not.  

Unfortunately, most instructors do not make their perspectives regarding GenAI tools clear to students. This leaves students guessing what is allowed in each course they take, and if they guess wrong, they could fail an assignment, fail a course, or even be suspended. These are devastating consequences for a student who is unsure about what is allowed when it comes to using GenAI tools for their learning simply because their instructors have not made it clear to them.  

Ultimately, it is up to you, as the instructor, to determine what you allow, and then to let your students know! Write your own GenAI policy to include in your syllabus. Write your own GenAI use policies for assignments, exams, or even class activities. And then, talk with students about these policies and clarify any confusion they might have. 

3. Use the three W’s to tell students what you allow  

The three W’s can be used as a model for writing your academic integrity statement in a clear and concise manner.  

Let’s start with the first W: What GenAI tools are allowed?  

Will you allow your students to use AI text generators? AI image creators? AI video, speech, and audio producers? What about Grammarly? Khanmigo? Or GenAI tools embedded into Google Workspace?

If you do not clarify what GenAI tools are allowed, students might end up using an AI-enhanced tool, like Grammarly, and be accused of using AI to cheat because they did not know that when you said “No GenAI tools” you meant “No Grammarly, either” (read: She used Grammarly to proofread her paper. Now she’s accused of ‘unintentionally cheating.’).  

Please do not put students in a situation of guessing what GenAI tools are allowed or not. The consequences can be dire and students deserve the transparency.  

The next W is When are GenAI tools allowed (or not allowed)? 

If you simply list what GenAI tools you allow, students might think it is okay to use the tools you listed for every assignment, learning activity, and learning experience in your class.  

Students need specific directions for when GenAI tools can be used and when they cannot be used. Do you allow students to use GenAI tools on only one assignment? Every assignment? One part of an assignment? Or, what about for one aspect of learning (e.g., brainstorming) but not another (e.g., writing)? Or, for one class activity (e.g., simulating a virtual debate) but not another (e.g., practicing public speaking by engaging in a class debate)?  

To determine when GenAI tools could be used in your classes, you might start with the learning outcomes for an activity or assessment and then identify how GenAI tools might support or subvert these outcomes (MLA-CCCC, 2024). When GenAI tools support, enhance, or enrich learning, it might be worthwhile to allow students to use these tools. When GenAI tools take away from or replace learning, you might tell students not to use these tools.  

Making it clear when students can and cannot use GenAI tools will eliminate any guesswork from students and reduce instances of students using GenAI tools when you did not want them to.  

The final W is Why are GenAI tools allowed (or not allowed)? 

Being transparent about why GenAI tools are allowed or are not allowed helps students understand your reasoning and creates a learning environment where students are more likely to do what you ask them to do.  

In the case of writing, for example, you might allow students to use GenAI tools to help with brainstorming ideas, but not with writing or rewriting their work because you believe that the process of putting pen to paper (or fingers to keyboards) is essential for deepening understanding of the course content. Telling students this will give them a clearer sense of why you are asking them to do what you are asking them to do.  

If you simply state: “Do not use GenAI tools during your writing process” students might wonder, Why? and might very well use these tools exactly how you asked them not to because they were not given a reason why not to.  

To sum up, the three W’s model brings transparency into teaching and learning and makes it clear and easy for students to understand what, when, and why they can use GenAI tools. This eliminates the guesswork for students and reduces potential fears, anxieties, and stressors about the use of these tools in your courses.  

You can use the three W’s as a model for crafting your academic integrity statement for your syllabus and also as a model for clarifying AI use in an assignment (see the image below), on an exam, or during a class activity. 

4. Clarify how you will identify AI-generated work and what you will do about it 

Even when you provide a detailed AI academic integrity policy and increase transparency around the use of GenAI tools in your courses, students may still use these tools in ways that you do not allow.  

It is important to let students know how you plan to identify AI-generated work.  

Will you use an AI text detector? (Note that these tools are notoriously unreliable, inaccurate, and biased against non-native English speakers and students with disabilities; Liang et al., 2023; Perkins et al., 2024; Weber-Wulff et al., 2023) 

Will you simply be on the lookout for text that looks AI-generated? If so, what will you look for? A change in writing voice and tone? Overuse of certain phrases like “delve”? A Google Docs version history where it appears as though text was copied and pasted in all at once? (see Detecting AI-Generated Text: 12 Things to Watch For)

Keep in mind that your own assumptions and biases might negatively impact certain groups of students as you seek to identify AI-generated work. The MLA-CCCC Joint Task Force (2024) noted that “literature across a number of disciplines has shown that international students and multilingual students who are writing in English are more likely to be accused of GAI-related academic misconduct” both because “GAI detectors are more likely to flag English prose written by nonnative speakers” and “suspicions of misuse of GAI are often due to complex factors, including culture, context, and unconscious ‘native-speakerism’ rather than actual misconduct” (p. 9).  

Also, consider what happens if a student submits content that looks or is identified by a detector as AI-generated. Will they automatically fail the assignment? Need to have a conversation with you? Need to prove their knowledge to you in another way (e.g., oral exam)? Be referred to the Dean of Students? 

Whatever you decide, being upfront about your expectations can foster a culture of trust between you and your students, and it might even deter students from using the tools in ways that you do not allow them to.  

5. Consider whether you will allow students to cite GenAI tools as a source 

One final point to consider as you are writing your academic integrity statement is whether students should be allowed to cite GenAI tools as a source. 

Many college and university academic integrity/honesty statements indicate that as long as the student cites their sources, including GenAI tools, they are not violating academic integrity. AI syllabus policies, too, often state that students can use GenAI tools as long as they cite them. 

But, should students really be encouraged to cite GenAI tools as a source?  

Consider, for example, that many of the popular GenAI tools were designed by stealing tons of copyrighted data from the Internet. The companies that created these tools “received billions of dollars of investment while using copyrighted work taken without permission or compensation. This is not fair” (Syed, 2023).  

While several companies are currently being sued for using copyrighted data to make their GenAI tools, in many cases, artists, authors, and other individuals whose work has been used without their permission to train these tools are losing their cases because of US copyright law and fair use. GenAI companies are arguing that their tools transform the copyrighted data they scraped from the Internet in a way that falls under fair use protections. However, a study found that large language models, like ChatGPT, sometimes generate text over 1,000 words long that has been copied word-for-word from the original training data (McCoy et al., 2023) – making ChatGPT a plagiarism machine! 

Consider also that GenAI tools can make up (“hallucinate”) content and present harmful and biased information. Do you want students to cite information from a tool that is not designed with the intent of providing factual information?  

In a recent class activity, I asked my students (future educators) to write their own AI policy statements. Before they wrote their statements, I explained how GenAI tools were designed by scraping copyrighted data from the Internet and then they interrogated a GenAI tool by asking it at least 10 questions about whether it violated intellectual property rights. Across the board, my students decided that citing GenAI tools is not allowed and that they want their future students to cite an original source instead.  

It is up to you whether you allow students to cite GenAI as a source or not. The most important thing is to be transparent with your students about whether you allow them to cite GenAI tools as a source; and if you do, let them know how much text you would allow them to copy word-for-word into their work as long as they cite a GenAI tool. 

So, this returns us back to the question at the start: Would you let your students submit a paper where 5% of the text was written by ChatGPT…as long as they cited ChatGPT as a source?  


GenAI Disclosure: The author used Gemini and ChatGPT 3.5 to assist with revising text to improve the quality of the writing. All text was originally written by the author, but some of the text was revised based on suggestions from Gemini and ChatGPT 3.5.  

Author Bio 

Torrey Trust, PhD, is a professor of learning technology in the Department of Teacher Education and Curriculum Studies in the College of Education at the University of Massachusetts Amherst. Her work centers on the critical examination of the relationship between teaching, learning, and technology; and how technology can enhance teacher and student learning. Dr. Trust has received the University of Massachusetts Amherst Distinguished Teaching Award (2023), the College of Education Outstanding Teaching Award (2020), and the ISTE Making IT Happen Award (2018), which “honors outstanding educators and leaders who demonstrate extraordinary commitment, leadership, courage and persistence in improving digital learning opportunities for students.”  

References 

Alonso, J. (2023, January 19). Students and experts agree: TikTok bans are useless. Inside Higher Ed. https://www.insidehighered.com/news/2023/01/20/university-tiktok-bans-cause-concern-and-confusion  

Choi, C. & Annio, F. (2024, January 19). The winner of a prestigious Japanese literary award has confirmed AI helped write her book. CNN. https://www.cnn.com/2024/01/19/style/rie-kudan-akutagawa-prize-chatgpt/index.html  

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7).  

McCoy, R. T., Smolensky, P., Linzen, T., Gao, J., & Celikyilmaz, A. (2023). How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN. Transactions of the Association for Computational Linguistics, 11, 652-670. 

Microsoft. (2023). Work Trend Index annual report. https://www.microsoft.com/en-us/worklab/work-trend-index/will-ai-fix-work/  

Modern Language Association of America and Conference on College Composition and Communication. (2024). Generative AI and policy development: Guidance from the MLA-CCCC task force. https://cccc.ncte.org/mla-cccc-joint-task-force-on-writing-and-ai  

Perkins, M., Roe, J., Vu, B. H., Postma, D., Hickerson, D., McGaughran, J., & Khuat, H. Q. (2024). GenAI detection tools, adversarial techniques and implications for inclusivity in higher education. arXiv preprint. https://arxiv.org/ftp/arxiv/papers/2403/2403.19148.pdf  

Syed, N. (2023, November 18). ‘Unmasking AI’ and the fight for algorithmic justice. The Markup. https://themarkup.org/hello-world/2023/11/18/unmasking-ai-and-the-fight-for-algorithmic-justice  

Trust, T. (2023, August 2). Essential considerations for addressing the possibility of AI-driven cheating, part 1. Faculty Focus. https://www.facultyfocus.com/articles/teaching-with-technology-articles/essential-considerations-for-addressing-the-possibility-of-ai-driven-cheating-part-1/  

UC San Diego. (2023). UC San Diego academic integrity policy. https://senate.ucsd.edu/Operating-Procedures/Senate-Manual/Appendices/2  

Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., … & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), 26. 

Wu, R. & Yu, Z. (2023). Do AI chatbots improve students learning outcomes? Evidence from a meta-analysis. British Journal of Educational Technology, 55(1), 10-33.  



New! Can "Linguistic Fingerprinting" Guard Against AI Cheating?

(from EdSurge)



Since the sudden rise of ChatGPT and other AI chatbots, many teachers and professors have started using AI detectors to check their students’ work. The idea is that the detectors will catch if a student has had a robot do their work for them.

The approach is controversial, though, since these AI detectors have been shown to return false positives — asserting in some cases that text is AI-generated even when the student did all the work themselves without any chatbot assistance. The false positives seem to happen more frequently with students who don’t speak English as their first language.

So some instructors are trying a different approach to guard against AI cheating — one that borrows a page out of criminal investigations.

It’s called “linguistic fingerprinting,” where linguistic techniques are used to determine whether a text has been written by a specific person based on analysis of their previous writings. The technology, which is sometimes called “authorship identification,” helped catch Ted Kaczynski, the terrorist known as the Unabomber for his deadly series of mail bombs, when an analysis of Kaczynski’s 35,000-word anti-technology manifesto was matched to his previous writings to help identify him.

Mike Kentz is an early adopter of the idea of bringing this fingerprinting technique to the classroom, and he argues that the approach “flips the script” on the usual way to check for plagiarism or AI. He’s an English teacher at Benedictine Military School in Savannah, Georgia, and he also writes a newsletter about the issues AI raises in education.

Kentz shares his experience with the approach — and talks about the pros and cons — in this week’s EdSurge Podcast.

Hear the full story on this week’s episode. Listen on Apple Podcasts, Overcast, Spotify, or wherever you listen to podcasts. Or read a partial transcript below, lightly edited for clarity.

EdSurge: What is linguistic fingerprinting?

Mike Kentz: It's a lot like a regular fingerprint, except it has to do with the way that we write. And it's the idea that we each have a unique way of communicating that can be patterned, it can be tracked, it can be identified. If you have a known document written by somebody, you can kind of pattern their written fingerprint.

How is it being used in education?

If you have a document known to be written by a student, you can run a newer essay they turn in against the original fingerprint, and see whether or not the linguistic style matches the syntax, the word choice, and the lexical density. …

And there are tools that produce a report. And it's not saying, ‘Yes, this kid wrote this,’ or ‘No, the student did not write it.’ It's on a spectrum, and there's tons of vectors inside the system that are on a sort of pendulum. It's going to give you a percentage likelihood that the author of the first paper also wrote the second paper.
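For readers curious about the mechanics, here is a minimal illustrative sketch in Python. It is not the tool Kentz describes, just a toy comparison: the features (average sentence length, average word length, type-token ratio) are crude stand-ins for the syntax, word-choice, and lexical-density measures he mentions, and the score is only a rough "likeness" percentage, not a validated authorship judgment.

# Illustrative sketch only, not any specific commercial tool: compare a
# "known" writing sample with a new submission using a few crude style
# features and report a rough likeness score.
import math
import re

def style_features(text):
    """Crude stand-ins for the syntax, word-choice, and lexical-density
    measures that real authorship-identification tools analyze."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    if not sentences or not words:
        return {"avg_sentence_len": 0.0, "avg_word_len": 0.0, "type_token_ratio": 0.0}
    return {
        "avg_sentence_len": len(words) / len(sentences),
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "type_token_ratio": len(set(words)) / len(words),
    }

def style_likeness(known_text, new_text):
    """Cosine similarity between the two feature vectors, from 0.0 to 1.0."""
    a, b = style_features(known_text), style_features(new_text)
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

known_essay = "Paste an essay known to be written by the student here."
new_essay = "Paste the newly submitted essay here."
print(f"Style likeness: {style_likeness(known_essay, new_essay):.0%}")

A real system would track far more features and calibrate them statistically, which is why the tools described above report a percentage likelihood rather than a yes-or-no verdict.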

I understand that there was recently a time at your school when this approach came in handy. Can you share that?

The freshman science teacher came to me and said, ‘Hey, we got a student who produced a piece of writing that really doesn't sound like him. Do you have any other pieces of writing, so that I can compare and make sure that I'm not accusing him of something when he doesn't deserve it?’ And I said, ‘Yeah, sure.’

And we ran it through a [linguistic fingerprint tool] and it produced a report. The report confirmed what we thought: that it was unlikely to have been written by that student.

The biology teacher went to the mother — and she didn’t even have to use the report — and said that it doesn’t seem like the student wrote it. And it turned out his mom wrote it for him, more or less. And so in this case it wasn’t AI, but the truth was just that he didn't write it.

Some critics of the idea have noted that a student’s writing should change as they learn, and therefore the fingerprint based on an earlier writing sample might no longer be accurate. Shouldn’t students’ writing change?

If you've ever taught middle school writing, which I have, or if you taught early high school writing, their writing does not change that much in eight months. Yes, it improves, hopefully. Yes, it gets better. But we are talking about a very sophisticated algorithm and so even though there are some great writing teachers out there, it's not going to change that much in eight months. And you can always run a new assignment to get a fresh “known document” of their writing later in the term.

Some people might worry that since this technique came from law enforcement, it has a kind of criminal justice vibe.

If I have a situation next year where I think a kid may have used AI, I am not going to immediately go do the fingerprinting process. That's not gonna be the first thing I do. I'll have a conversation with them first. Hopefully, there's enough trust there, and we can kind of figure it out. But this, I think, is just a nice sort of backup, just in case.

We do have a system of rewards and consequences in a school, and you have to have a system for enforcing rules and disciplining kids if they step out of line. For example, [many schools] have cameras in the hallways. I mean, we do that to make sure that we have documented evidence in case something goes down. We have all kinds of disciplinary measures that are backed up by mechanisms to make sure that that actually gets held up.

How optimistic are you that this and other approaches that you're experimenting with can work?

I think we're in for a very bumpy next five years or so, maybe even longer. I think the Department of Education or local governments need to establish AI literacy as a core competency in schools.

And we need to change our assessment strategies and change what we care about kids producing, and acknowledge that written work really isn't going to be it anymore. You know my new thing also is verbal communication. So when a kid finishes an essay, I'm doing it a lot more now where I'm saying, all right. Everybody's going to go up without their paper and just talk about their argument for three to five minutes, or whatever it may be, and your job is to verbally communicate what you were trying to argue and how you went about proving it. Because that's something AI can't do. So my optimism lies in rethinking assessment strategies.

My bigger fear is that there is going to be a breakdown of trust in the classroom.

I think schools are gonna have a big problem next year, where there's lots of conflicts between students and teachers where a student says, ‘Yeah, I used [AI], but it's still my work,’ and the teacher goes, ‘Any use is too much.’

Or what's too much and what's too little?

Because any teacher can tell you that it's a delicate balance. Classroom management is a delicate balance. You're always managing kids' emotions, and where they're at that day, and your own emotions, too. And you're trying to develop trust, and maintain trust and foster trust. We have to make sure this very delicate, beautiful, important thing doesn't fall to the ground and smash into a million pieces.

Listen to the full conversation on the EdSurge Podcast.

Jeffrey R. Young (@jryoung) is an editor and reporter at EdSurge and host of the EdSurge Podcast. He can be reached at jeff [at] edsurge [dot] com



Spring 2024 TL Topic Talks > Full Schedule

For the Spring 2024 semester, the CTL is presenting a bi-weekly series of topics focused on various aspects of AI and ChatGPT and how they impact students, faculty, and staff. See how you can participate below:

Spring 2024 CTL Topic Talk_Overview

Ultimate Guide to Authentic Assessment in an AI-Enabled World (recording)

8 ways to create AI-Proof Writing Prompts

Recordings from Spring 2024 Convocation

Copyleaks: AI Content Detector

AI Resources for CoSCC faculty & staff


(Thanks to Glenna Winters for continuously collecting and curating these items!)

Review this assembled collection of syllabi policies created by Lance Eaton and openly shared.

AAC&U Project Kaleidoscope (PKAL) call for proposals

League for Innovation in the Community College

Click to view the recorded sessions:





Pew Research: 52% of Americans feel "more concerned than excited" about AI

By Kate Lucariello

09/20/23

According to recent research conducted by the Pew Research Center in 2023, 52% of Americans feel "more concerned than excited" about the use of AI in daily life, compared to 37% in 2021 — an increase of 15 percentage points.

The research team surveyed 11,201 U.S. adults from July 31 to Aug. 6, 2023. Participants were contacted randomly through the American Trends Panel, Pew's national online survey panel.

In 2021, 45% of participants were equally excited and concerned about the use of AI, with 18% more excited than concerned, and 37% were more concerned than excited. That number did not change significantly in 2022 (46% equally concerned/excited, 15% more excited, and 38% more concerned), but this year those percentages have drastically changed, with 36% equally concerned/excited, 10% excited, and 52% concerned.

Pew said concern about AI outweighs excitement across the major demographic groups polled: gender, race, ethnicity, partisan affiliation, education, and others. Perhaps predictably, 61% of older adults (65+) are more concerned than excited, with the gap being smaller among 18 to 29-year-olds, but still significant: 42% more concerned, and 17% more excited.

The research also shows that growing public awareness about AI is keeping pace with rising concerns.

"Those who have heard a lot about AI are 16 points more likely now than they were in December 2022 to express greater concern than excitement about it," the report noted. "Among this most aware group, concern now outweighs excitement by 47% to 15%. In December, this margin was 31% to 23%." Levels of concern seem to be about equal whether people have heard a lot about AI (16%) or not much (19%).

Americans' concerns focus heavily on maintaining control of AI, doubts about whether it will improve our lives, and use of it in certain fields, such as medicine, the report said. A large concern is also data privacy and safety with the use of AI, with 53% of survey respondents feeling their information is not being kept safe and private.

Visit the report page to read more about the results and follow links to the methodology.


ABOUT THE AUTHOR

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.


More Educators Look to Oral Exams

By Jeffrey Young

10/5/2023


Since the release of ChatGPT late last year, the essay has been declared dead as an effective way to measure learning. After all, students can now enter any assigned question into an AI chatbot and get a perfectly formatted, five-paragraph essay back ready to turn in (well, after a little massaging to take out any AI “hallucinations”).

As educators have looked to alternatives to assigning essays, one idea that has bubbled up is to bring back oral exams.

It’s a classic idea: In the 1600s it was the basic model of evaluation at Oxford and Cambridge (with the grilling by professors done in Latin), and it was pretty much what Socrates did to his students. And oral evaluations of student learning do still happen occasionally — like when graduate students defend their theses and dissertations. Or in K-12 settings, where the International Baccalaureate (IB) curriculum used by many high schools has an oral component.

But even fans of administering oral exams admit a major drawback: They’re time-consuming, and take a lot out of educators.

“They’re exhausting,” says Beth Carlson, an English teacher at Kennebunk High School, in Maine, who says she occasionally does 15-minute-each oral assessments for students in the school’s IB program. “I can only really do four at a time, and then I need a brain break. I have a colleague who can do six at a time, and I am in awe of her.”

Even so, some educators have been giving the oral exam a try. And they say the key is to use technology to make the approach more convenient and less draining.

Can oral exams be delivered at the scale needed for today’s class sizes?

Fighting AI With AI

Two undergraduate students who are researchers at Stanford University’s Piech Lab, which focuses on “using computational techniques to transform fundamental areas of society,” believe one way to bring back oral exams may be to harness artificial intelligence.

"I think where the future is going is, it's going to become even more important for students to be able to have those soft skills and be able to talk and communicate their ideas."

—Joseph Tey, a Stanford University undergraduate developing an AI tool for delivering oral exams.

The students, Joseph Tey and Shaurya Sinha, have built a tool called Sherpa that is designed to help educators hear students talk through an assigned reading to determine how well they understood it.

To use Sherpa, an instructor first uploads the reading they’ve assigned, or they can have the student upload a paper they’ve written. Then the tool asks a series of questions about the text (either questions input by the instructor or generated by the AI) to test the student’s grasp of key concepts. The software gives the instructor the choice of whether they want the tool to record audio and video of the conversation, or just audio.

The tool then uses AI to transcribe the audio from each student’s recording and flags areas where the student answer seemed off point. Teachers can review the recording or transcript of the conversation and look at what Sherpa flagged as trouble to evaluate the student’s response.

“I think something that's overlooked in a lot of educational systems is your ability to have a discussion and hold an argument about your work,” says Tey. “And I think where the future is going is, it's going to become even more important for students to be able to have those soft skills and be able to talk and communicate their ideas.”

The student developers have visited local high schools and put the word out on social media to get teachers to try their tool.

Carlson, the English teacher in Maine who has tried oral exams in IB classes, has used Sherpa to have students answer questions about an assigned portion of the science fiction novel “The Power,” by Naomi Alderman, via their laptop webcams.

“I wanted the students to speak on the novel as a way for them to understand what they understood,” she says. “I did not watch their videos, but I read their transcript and I looked at how Sherpa scored it,” she says. “For the most part, it was spot on.”

She says Sherpa “verified” that, according to its calculation, all but four of the students understood the reading adequately. “The four students who got 'warnings' on several questions spoke too generally or answered something different than what was asked,” says Carlson. “Despite their promises that they read, I'm guessing they skimmed more than read carefully.”

Compared to a traditional essay assignment, Carlson believes that the approach makes it harder for students to cheat using ChatGPT or other AI tools. But she adds that some students did have notes in front of them as they went through Sherpa’s questions, and in theory those notes could have come from a chatbot.

One expert on traditional oral exams, Stephen Dobson, dean of education and the arts at Central Queensland University in Australia, worries that it will be difficult for an AI system like Sherpa to achieve a key benefit of oral exams — making up new questions on the fly based on how the students respond.

“It’s all about the interactions,” says Dobson, who has written a book about oral exams. “If you’ve got five set questions, are you probing students — are you looking for the weaknesses?”

Tey, one of the Stanford students who built Sherpa, says that if the instructor chooses to let the AI ask questions, the system does so in a way that is meant to mimic how an oral exam is structured. Specifically, Sherpa uses an educational theory called the Depth of Knowledge framework that asks questions of various types depending on a student’s answer. “If the student struggled a little with the previous response, the follow-up will resemble more of a ‘hey, take a step back’, and ask a broader, more simplified question,” says Tey. “Alternatively, if they answered well previously, the follow-up will be designed to probe for deeper understanding, drawing upon specific phrases and quotes from the student’s previous response.”
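As a rough illustration of that adaptive pattern, the sketch below steps a follow-up question's Depth of Knowledge level down after a weak answer and up after a strong one. It is not Sherpa's actual code; the question bank, scoring scale, and 0.5 threshold are invented placeholders.

# Illustrative sketch only, not Sherpa's implementation: a toy version of the
# adaptive follow-up logic described above, with hypothetical questions.
QUESTION_BANK = {
    1: "In your own words, what is this passage about?",                 # recall
    2: "How do the main ideas in the passage relate to each other?",     # skills and concepts
    3: "What evidence from the text supports the author's argument?",    # strategic thinking
    4: "How would you apply the author's argument to a new situation?",  # extended thinking
}

def next_question(current_level, answer_score):
    """Step back to a broader question after a weak answer (score below 0.5),
    or probe deeper after a strong one."""
    if answer_score < 0.5:
        level = max(1, current_level - 1)
    else:
        level = min(4, current_level + 1)
    return level, QUESTION_BANK[level]

# Example: a shaky answer at level 2 drops back to a level-1 recall question.
level, question = next_question(current_level=2, answer_score=0.3)
print(level, question)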

Scheduling and Breaks

For some professors, the key technology to update the oral exam is a tool that has become commonplace since the pandemic: Zoom, or other video software.

That’s been the case for Huihui Qi, an associate teaching professor of mechanical and aerospace engineering at the University of California at San Diego. During the height of the pandemic, she won a nearly $300,000 National Science Foundation grant to experiment with oral exams in engineering classes at the university. The concern at the time was to preserve academic integrity when students were suddenly taking classes remotely — though she believes the approach can also safeguard against cheating using AI chatbots that have emerged since she started the project.

She typically teaches mechanical engineering courses with 100 to 150 students. With the help of three or four teaching assistants, she now gives 15-minute-each oral exams between one and three times each semester. To make that work, students schedule an appointment for a Zoom meeting with her or a TA, so that each grader can do the grading from a comfortable spot and also schedule breaks in between to recharge.

“The remote aspect helps in that we don’t have to spend lots of time scheduling locations and waiting outside in long lines,” she says.

What Qi has come to value most from oral exams is that she feels it can be a powerful opportunity to teach students how to think like an engineer.

“I’m trying to promote excellence and teach students critical thinking,” she says. “Over the years of teaching I have seen students struggle to decide what equation to apply to a particular problem. Through this dialogue, my role is to prompt them so they can better form this question themselves.”

Oral exams, she adds, give professors a window into how students think through problems — a concept called metacognition.

One challenge of the project for Qi has been researching and experimenting with how to design oral exams that test key points and that can be fairly and consistently administered by a group of TAs. As part of their grant, the researchers plan to publish a checklist of tips for developing oral exams that other educators can use.

Dobson, the professor in Australia, notes that while oral exams are time-consuming to deliver, they often take less time to grade than student essays. And he says that the approach gives students instant feedback on how well they understand the material, instead of having to wait days or weeks for the instructor to return a graded paper.

“You’re on the spot,” he says. “It’s like being in a job interview.”

Jeffrey R. Young (@jryoung) is the editor of EdSurge and producer and host of the EdSurge Podcast. He can be reached at jeff [at] edsurge [dot] com


Using AI to Design PBL Projects

Welcome to a new era of project-based learning, where the power of artificial intelligence meets your creativity and expertise as an educator. This guide is designed to assist teachers like you in harnessing the potential of ChatGPT, a cutting-edge language model, to develop engaging Project-Based Learning (PBL) projects for your students.

ChatGPT is a powerful tool that can help generate project ideas, prompts, and resources, but it doesn't replace the critical role you play as an educator. Your knowledge, experience, and insight are essential in crafting meaningful learning experiences. ChatGPT is here to augment your abilities and provide inspiration, not to substitute them.

In the world of education, PBL is an invaluable approach to foster deeper learning, critical thinking, and problem-solving skills. It empowers students to engage with real-world challenges and actively construct knowledge, making learning a dynamic and immersive journey.

This guide will walk you through the steps of developing a PBL project, from identifying relevant problems to creating thought-provoking driving questions, designing final products, and engaging authentic audiences. ChatGPT will assist you at various stages, but I can’t say this enough, it is you who will bring life to these projects and adapt them to your unique classroom environment.

I encourage you to embrace the creative potential that ChatGPT offers while also considering its limitations. Remember that AI is not infallible, and the responsibility for crafting an effective and engaging learning experience lies with you.

The process of designing PBL projects should be a collaborative and reflective one. Engage with your fellow educators, seek input and feedback, and continuously refine your projects. Together, you can create transformative, epic learning experiences for your students.

-Trevor Muir, www.epicpbl.com



Professional Development: Facing the Future: Educators Discuss Teaching in the Era of ChatGPT > Recording Available

Thank you for registering for the Magna Online Seminar, Facing the Future: Educators Discuss Teaching in the Era of ChatGPT. We hope you enjoyed the presentations and gained useful information that you can apply at your institution!

 

The final transcripts and handouts are available and can now be viewed. To access the presentation: 

WICHE, "The Promise and Challenges of AI in Higher Education"

Lindsey Downs

Aug 31

The biggest topic in higher education right now, at least in my opinion, is artificial intelligence. There are various stories in higher education news about what impact AI will have on students, instructors, and the education field at large. This week, we welcome Marc Watkins, Academic Innovation Fellow from the University of Mississippi. Marc joins us to discuss the potential of AI in digital learning and to highlight the amazing work happening at his institution to prepare faculty and students for a future that includes working with AI. I really enjoyed learning about these initiatives and student reflections on AI in higher education.

Enjoy the read,

Lindsey Downs, WCET

Preparing Faculty to Work with AI

The University of Mississippi is pioneering new approaches to prepare faculty for emerging technologies, like generative AI, in the classroom. A year ago, in August 2022, several UM faculty within the Department of Writing and Rhetoric began piloting AI-powered assistants in first-year writing courses. Doing so helped us develop the capacity needed to train others in generative AI literacy.

This summer, we hosted an innovative cross-disciplinary AI Institute for Teachers. Participants were generously funded by the Institute of Data Science and nearly two dozen faculty members received stipends to attend an immersive two-day workshop in early June, where they explored how to responsibly integrate generative AI tools in support of student learning.

The goal of this training was to help prepare faculty for the fall semester. Participants returned to their departments with enhanced capacity in AI literacy to teach students AI fundamentals, evaluate AI-generated content, and develop educational applications and policies that explored proactive approaches to generative AI. The AI Institute for Teachers demonstrates the University of Mississippi’s commitment to equipping educators with leading-edge capabilities to enrich their pedagogy and serve the needs of their students in this new technological era.

Feedback from participants was overwhelmingly positive. The hands-on agenda encouraged active experimentation with tools like ChatGPT to imagine curricular integration in disciplines from sciences to humanities. Colleagues appreciated the practical insights tailored to higher ed contexts, noting that this training filled an important gap in institutional preparedness. UM remains committed to spearheading such initiatives to position our educators at the forefront of AI innovation in service of student learning.

With rapid AI advances, we anticipate the need for training educators about generative AI to grow. In July, I developed an online asynchronous training course for all University of Mississippi faculty. Faculty at dozens of other institutions have been granted access to the course through course licenses and scholarships. I’ve released assignments from this course under a CC BY-SA 4.0 license, so any educator can use and remix them for teaching: Generative AI in Education Assignments.

Our Ethos—Explore, Don’t Panic

We developed this capacity through forward-thinking approaches to technology and teaching, with campus-wide leadership and support from UM Provost Noel Wilkin.

Stephen Monroe, the Chair of the Department of Writing and Rhetoric, recognized the impact generative AI technology would have on writing in the spring of 2022 and his approach was to “explore, don’t panic.”

Our department developed an AI Working Group to explore what affordances GPT-3-powered writing assistants, research assistants, and reading assistants could offer students. Together, we built assignments that asked students to engage generative technologies at different points in their writing process.

We partnered with Dr. Robert Cummings, the Executive Director of the Academic Innovation Group, who helped frame this process. Eventually, we defined our teaching approaches by the acronym DEER:

The Introduction to AI-Powered Assistants video provides an overview of the tools we adopted using the DEER framework. Drawing on student and faculty feedback, including more than 80,000 words of student reflections, allowed us to explore the pedagogical affordances and limitations of generative AI in the writing classroom. As Stephen Monroe noted, “Our early pilots indicate the panicked presumptions about student adoption/abuse seen in the media were not accurate.” We observed tentative and responsible exploration from our students during these first semesters, leading us to believe that many maturing writers may not always be eager adopters of AI tools.

Some Key Takeaways Based on Student Reflections

It is clear that generative AI technologies will soon be implemented within virtually every web-based interface we interact with daily. This will pose challenges to authenticity and authorship that academia, like nearly every industry, is currently struggling with. AI detection is proving too unreliable, as evidenced by OpenAI shutting down its Text Classifier tool and universities turning off or opting out of Turnitin’s AI detection feature. It is important to help students understand the pros and cons of working with this innovation and those to come. Instead of surveillance, we should foster a culture of trust with our students by modeling ethical usage of this technology, and such usage will only be possible if faculty and students become AI literate. Training all stakeholders in such literacy is our best pathway forward.

Further Valuable Resources about Generative AI





New! TED Talk from Sal Khan, founder of Khan Academy

 APA provides guidance on how to cite ChatGPT. Click to read.

Respondus and ChatGPT - What is Respondus doing about ChatGPT? Click to read.

Report: More than half of students will use AI writing tools even if prohibited by their institution

Article from Campus Technology (April 26, 2023): click to read

ChatGPT Whitepaper from the League for Innovation in the Community College

Whitepaper-How-Community-Colleges-are-Adapting-to-Generative-AI.pdf

The League for Innovation in the Community College & Packback present: 

ChatGPT and AI's Effect on Community Colleges

 (recording below)

ChatGPT Detection Tools:

ChatGPT & Open AI overview


OpenAI ChatGPT.pdf