Data Privacy & PII Misuse:
Risky AI: The reality of AI in higher education – T&CMedia (Jan 2025)
Recounts multiple real incidents, including a case in which a college uploaded a student's assignment to an AI/automation system without consent, potentially violating FERPA by sharing identifiable student work with a third-party tool.
Also describes a case at the University of Iowa where a student used a Zoom AI assistant that covertly recorded and summarized class sessions involving confidential patient information, leading to a campus ban on the tool.
Data, privacy, and cybersecurity in schools: A 2025 wake-up call – eSchoolNews (2025)
Aimed at both K–12 and higher ed, this privacy “wake-up call” explains that many AI tools are not inherently FERPA-compliant and notes that teachers, including college instructors, may unknowingly paste student essays or identifiers into prompts, effectively releasing that data into commercial models’ training pipelines.
Academic Integrity & Misuse of AI Detectors:
Moving Beyond Plagiarism and AI Detection: Academic Integrity in 2025 – Packback
Discusses confirmed AI-related misconduct cases and the growing use of AI-detection tools, including examples where overreliance on detectors produced false positives and adversarial faculty-student relationships rather than supporting integrity.
Offers narrative accounts from faculty of situations where using Turnitin-style AI detection as the primary source of evidence backfired, underscoring why “ask the AI if it wrote this” is not a sound practice.
Academic Integrity and Artificial Intelligence Use by Medical Students – open-access article (2025)
Reports real patterns of misconduct (students using AI to generate essays or cheat on assessments) and institutional responses, including the need for clearer rules on what constitutes undisclosed AI use.
Though focused on medical education, the study documents concrete behaviors and response scenarios that map directly onto broader higher-ed contexts.
Texas A&M Instructor Uses ChatGPT to Catch Cheaters... and Fails Spectacularly (2023)
A Texas A&M instructor in 2023 used ChatGPT itself to check whether student essays were AI-written; when the model responded that it had written the work, the instructor accused the entire class of cheating and withheld grades. The story, widely covered and later revisited in 2025 commentary, is a classic example of what not to do: asking a generative model if it “wrote” a text and treating the answer as evidence.
Teachers are using software to see if students used AI. What happens when it's wrong? – NPR (2025)
An NPR report documents teachers relying on AI detection tools that mislabel human writing as AI-generated, leading to repeated false accusations against a student based solely on detector scores.
Institutional AI Use:
Academic Integrity in the Age of AI – Digital Education Council
Discusses emerging cases and patterns of AI misuse across the student lifecycle, including assessment, admissions, and data storage, and highlights institutional tensions between adopting AI and preventing abuse.
Provides scenario-style descriptions that can be easily adapted into case discussions on legal and ethical risks.
Northeastern college student demanded her tuition fees back after catching her professor using OpenAI's ChatGPT – Tech Reporter (2025)
A 2025 feature recounts a case at Northeastern University where a business professor forbade students from using AI but quietly used ChatGPT to generate his lecture slides; students uncovered the practice through leftover prompts, distorted images, and other errors in the obviously AI-generated materials.
The student filed a formal complaint and requested a tuition refund, highlighting ethical issues of undisclosed AI use, low-quality instruction, and a double standard between faculty and students.
AI is Destroying the University and Learning Itself – Current Affairs (2025)
Commentaries on university practice point to instructors offloading key teaching tasks (writing lectures, grading essays, and redesigning syllabi) to ChatGPT, sometimes with minimal human review, which can degrade instructional quality and fairness.
These patterns are cited as institutionalized misuse: universities encourage faculty to use AI for efficiency, but without safeguards this can lead to biased grading, superficial feedback, and misalignment with course outcomes.
Campus Innovation: How Students Use AI in Coursework and Research – Scholaro Database (2026)
This survey-based study notes that faculty reports of AI misuse incidents have surged, with investigations and appeals putting growing strain on academic integrity offices.
The article warns that when faculty respond only with stricter surveillance and automated tools, rather than pedagogical redesign and clear communication, they may inadvertently escalate misuse and mistrust.