You can find NYU’s official policy on the use of generative AI, along with additional resources, here.
Discussing academic integrity with students requires that you first define what plagiarism and cheating mean in the context of your course, based on your learning goals. You will also need to clearly define what unauthorized use of generative AI looks like. Below are a few suggestions to help start that conversation.
Have an open conversation with your students as early as possible. This gives students the opportunity to understand what academic dishonesty and appropriate use of generative tools look like in your classroom. Students may assume another course’s policies are the same as yours or may have a different cultural understanding of academic integrity.
Emphasize how generative tools may support or interfere with learning goals and be explicit about the value you expect students to get out of the course and specific assignments.
Consider giving students a live demonstration using a generative tool to show students what appropriate usage looks like and highlight the shortcomings of the technology.
Give students multiple opportunities and ways to ask clarifying questions. Be available to continue conversations, in or outside of class, on an ongoing basis; new developments may prompt further discussion.
Including clear language on when and how students can and cannot use generative AI is critical.
It is increasingly difficult to distinguish AI-generated text from human writing. Detection tools that claim to identify LLM-generated language have proven easy to defeat and are prone to false positives.
The NYU Office of the Provost, the Learning Science Lab, and the educational technology community largely recommend that instructors not rely on third-party detection tools, such as Turnitin, to detect the use of generative tools in student work. These tools can inadvertently target specific populations of students and lead to false accusations.
There is no easy answer here. Short-term and long-term solutions will involve developing courses, assignments, and assessments with generative tool use as a given.
In the arms race between generative AI tools and detectors, advances in AI will very likely continue to outpace the capabilities of detectors. Detectors' lack of rigorous review and their tendency to produce false positives make them unreliable as evidence.