Artificial Intelligence

Artificial Intelligence, and specifically tools like ChatGPT, continues to exert a major impact on higher education. ChatGPT is an AI chatbot developed by OpenAI and launched in November 2022. ChatGPT and similar AI tools have implications for higher education, both in and out of the classroom.

Your AI Resource Repository from the TBR AI Training Collective

Welcome to our dedicated resource hub for TBR educators exploring the integration of Artificial Intelligence (AI) in teaching and learning. On this site, you will find recaps of our Tech Talk Tuesday sessions, including links to meeting recordings. Additionally, we have included a vetted repository of examples, articles, videos, and tools. Finally, there is an AI Playbook, which contains best practices collected from leaders in the field as well as from educational institutions, organizations, and industry. Because AI is changing rapidly, expect frequent updates to this site.

Learn more here.

Free eBook: AI Syllabus Statement and Project Ideas

What’s inside:

•  Best practices for implementing GenAI in teaching and learning, including prompt engineering tips and tricks.

•  Principles for appropriate AI use, including what constitutes misuse that you can communicate to students.

•  Practical activities and applications of AI across disciplines to enhance your curriculum.

Get the eBook

AI Is Getting Better at Grading. Should Teachers Use It to Grade?

By Erik Ofgang

AI grading tools can now match human accuracy and consistency in certain topics and situations, putting the spotlight on the ethics of machine grading.

An anonymous high school teacher recently wrote to The New York Times ethicist column, asking whether a teacher could ethically use AI to grade student work while prohibiting students from using AI to produce the work they submit. To this teacher, doing so felt hypocritical.

The column’s author, Kwame Anthony Appiah, a philosophy professor at NYU, replied that this mixed policy toward AI was ethical because the students needed to practice writing while the teacher already knew how to grade. The real question, Appiah wrote, was whether AI grading tools could grade students fairly and in a way that helped them improve on the next assignment, as a skilled teacher would.

As AI improves and more tools reach the market, that becomes an increasingly important question, as does a follow-up: if AI tools can grade fairly and effectively, will students and teachers accept them?

AI Can Already Help With Assessments 

Deirdre Quarnstrom, Vice President of Education at Microsoft, says there’s a lot of interest in the question. “As I look across the industry, I think any potential task that an educator can do, there is organizations working on how to improve that, and how to make that better,” she says.

AI is already skilled enough at summarization to help teachers in the grading and assessment process, she says, by performing initial evaluations based on a set of instructions, prompts, or criteria.
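Quarnstrom’s description suggests a simple workflow: supply a model with the grading criteria and the student’s work, then ask for a first-pass evaluation that the teacher reviews and finalizes. The article names no specific tool, so the sketch below is only an illustration; it assumes the OpenAI Python SDK, a placeholder model name, and a made-up three-criterion rubric.

```python
# Illustrative sketch only: the article does not name a specific tool or API.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# environment; the rubric, criteria weights, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """\
Criterion 1 (0-4): Thesis is clear and arguable.
Criterion 2 (0-4): Claims are supported with evidence.
Criterion 3 (0-2): Writing is organized and mechanically sound.
"""

def initial_evaluation(student_response: str) -> str:
    """Request a first-pass evaluation that the teacher reviews and finalizes."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a teaching assistant. Score the student response "
                        "against the rubric, citing specific passages, and suggest "
                        "one concrete improvement. Do not assign a final grade."},
            {"role": "user",
             "content": f"Rubric:\n{RUBRIC}\nStudent response:\n{student_response}"},
        ],
    )
    return completion.choices[0].message.content

# Example use:
# print(initial_evaluation("Photosynthesis converts light energy into chemical energy..."))
```

Keeping the model’s output advisory, with the teacher assigning the final grade, mirrors the “initial evaluation” framing above.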

Michael Klymkowsky, a biology professor at the University of Colorado Boulder, is developing an AI tool to help assess biology students' progress. Rather than grading, it is designed to help teachers understand where students are with the material. However, he says it might already do a better job at grading than humans in some instances, such as a large course section where grading is done by time-strapped graduate assistants.

“Graduate students don't always have time to grade everything equally rigorously,” Klymkowsky says. He adds that adopting such a system might make teachers less reluctant to assign short-answer questions, which traditionally require additional grading time.

Whether school leaders would openly permit such a use, and whether students would accept it, is another question altogether.

AI Grading Obstacles and Concerns  

A recent pair of studies examining AI assessment of writing found that AI was not as good as skilled teachers at providing feedback, but it came close and was probably better than overworked or inexperienced teachers.

Steve Graham, a co-author on both studies and a professor at Arizona State University, says that a key to whether students and teachers become comfortable with AI-graded work is the perception of its accuracy. “If you trust feedback from AI, then I think you're more likely to use it,” Graham says. One way to build that trust is through further research, so that AI grading's efficacy can be evaluated and best practices can be developed.

Still, it’s easy to imagine some teachers remaining reluctant and students pushing back against poor AI-generated grades, even though there is precedent for accepting machine-generated assessments. For example, state writing assessments are increasingly scored by computer programs less advanced than generative AI, Graham says. These tools don’t truly analyze the writing; they examine its semantic and syntactic markers and assess whether those match the markers of good, quality writing.

“People were initially leery about that, but you're seeing that used more and more,” he says.

Although Graham believes AI grading and assessments can eventually help students learn and ease time constraints for teachers, he also stresses that we will always need to keep the human element in mind.

“The only reason I'd want feedback from AI is to make a paper better that I'm either writing for myself to explore something about me, or that I'm writing for other people to read,” he says. “The fear I have is that if we turn to algorithm-based feedback or feedback from ChatGPT, and we don't have other people reading our papers, well, what's the purpose here? And ultimately, we write for a purpose.”

Department of Education: Designing for Education with Artificial Intelligence - An Essential Guide for Developers

Today and in the future, a growing array of Artificial Intelligence (AI) models and capabilities will be incorporated into the products that specifically serve educational settings. The U.S. Department of Education is committed to encouraging innovative advances in educational technology that improve teaching and learning across the nation’s education systems and to supporting developers as they create products and services using AI for the educational market.

Building on the Department’s prior report, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, this guide seeks to inform product leads and their teams of innovators, designers, developers, customer-facing staff, and legal teams as they work toward safety, security, and trust while creating AI products and services for use in education. This landscape is broader than those building large language models (LLMs) or deploying chatbots; it includes all the ways existing and emerging AI capabilities can be used to further shared educational goals.

The inclusion of non-Federal resources or examples on this website is not intended to reflect their importance, nor is it intended to endorse any views expressed, or products or services offered.

Download the Guide