Generative AI is no longer an experiment; it is embedded in the digital infrastructure of higher education. Conversations that once centered almost entirely on academic integrity now focus on a more complex question:
How do we protect student privacy, institutional data, and intellectual property while using AI to enhance teaching and learning?
As faculty, you play a dual role:
Using AI responsibly in your own teaching, and
Modeling ethical, privacy-conscious practices for students.
This guide distills the latest guidance from higher-education associations, universities, and 2025 policy updates to help you make informed, secure decisions when integrating AI into coursework.
Most free AI tools rely on a “data-for-service” model. According to the University of Pittsburgh Center for Teaching and Learning, free chatbots are not FERPA-, HIPAA-, or GDPR-compliant, and user inputs may be reused for training or analysis.
Risk: Students who enter personal reflections, peer names, draft essays, or case information may unintentionally release that data permanently into a public large language model (LLM).
Reality: These models can store, analyze, and learn from inputs submitted in free accounts—constituting a form of public disclosure.
The Harvard University Information Technology office warns that, by default, information submitted into external generative AI tools is not private and may be used to improve the model.
Risk: Uploading unpublished research, exam banks, grant proposals, or proprietary course materials into a public chatbot can jeopardize:
Patentability of research
Copyright over instructional materials
Confidentiality of sensitive departmental documents
Whenever available, use institutionally supported AI tools (e.g., Microsoft Copilot with Data Protection, institutionally vetted ChatGPT Enterprise, LMS-integrated AI).
Enterprise tools typically include:
Data encryption
No training on user inputs
FERPA-aligned contractual protections
Vendor accountability through procurement agreements
University of Houston Context: UHS faculty and staff have access to Microsoft 365 Copilot with commercial data protection when logged in with their UHS credentials. This ensures user data is not saved or used to train the models.
Multiple institutions, such as Albany State University, explicitly prohibit entering confidential or student-identifying data into public AI tools.
Source: https://www.asurams.edu/docs/ai/AISafetyGuidelines.pdf
Rule of thumb:
If IT has not vetted it → assume it is insecure for student or institutional data.
University of Houston Context: UHS SAM 07.A.08 explicitly prohibits inputting Level 1 (Confidential/Mission Critical) or Level 2 (Protected) data into public AI tools. This includes student records (FERPA), health information (HIPAA), and critical research data.
Your syllabus is your first line of defense.
Define "Open" vs. "Closed" Use: Clearly state whether students are allowed to use AI, and specifically which tools.
Open-use AI: students may use any AI with guardrails
Restricted AI: only approved tools
Closed-use AI: assignments that prohibit AI entirely
University of Houston Context: UH Downtown and UH Clear Lake have published specific syllabus decision trees and sample language ranging from "No AI Allowed" to "AI Encouraged with Attribution."
Opt-Out Options: Following guidance from the Yale Poorvu Center, if you wish to use a tool not supported by the university, a best practice is to provide an alternative assignment for students who are uncomfortable creating external accounts, respecting their privacy rights.
Teach students (and model yourself) how to “clean” data before submitting prompts:
Remove any personally identifiable information (PII): Strip names, IDs, addresses, specific locations, and health or financial details
Generalize context: Instead of "Write a critique of John Doe's performance in Bio 101," use "Write a critique of a hypothetical student's performance in an introductory biology course."
This protects student privacy and reduces institutional risk.
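The "cleaning" step above can be partially automated. The sketch below (a minimal illustration, not an institutional tool) uses simple regular expressions to replace obvious identifiers with generic placeholders before text is pasted into an AI prompt; the 7-digit student-ID format is a hypothetical example, and pattern matching is no substitute for manually reviewing each prompt.

```python
import re

# Illustrative patterns only: these catch common, obvious identifiers.
# Real prompts still need a human read-through before submission.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "STUDENT_ID": re.compile(r"\b\d{7}\b"),  # hypothetical 7-digit ID format
}

def scrub(text: str) -> str:
    """Replace obvious PII with labeled placeholders before prompting an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jdoe@uh.edu or call 713-555-0100 about student 1234567."
print(scrub(prompt))
# -> Email [EMAIL] or call [PHONE] about student [STUDENT_ID].
```

Even a lightweight filter like this reinforces the habit of generalizing context (placeholders instead of real names and IDs) that the guidance above describes.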
AI literacy is now inseparable from privacy literacy. Embedding even short activities into your course helps students enter a workforce where AI and data ethics are fundamental skills.
Task: Students identify the “Data Use” section of a popular AI tool’s Terms of Service (e.g., Google Gemini, ChatGPT, Grammarly).
Discussion Question:
What rights does the company gain over your content?
Is your data used for training?
Can you delete your history?
Goal: Transform students from passive users into critical evaluators of technology.
Task: Have students attempt to prompt an AI for information about a public figure versus a private individual (themselves or a peer).
Goal: Explore how much scraped data may exist about individuals and discuss topics like:
The “Right to Be Forgotten”
Consent
Data persistence and model training
Before launching your course, conduct this quick 5-point review:
Is this an enterprise/university-approved tool? (e.g., UHS Microsoft Copilot)
Note: Vet any new third-party tools with IT before assigning them
Am I (or my students) entering sensitive reflections, assignments, research data, or PII?
Do I have a clear, current policy on AI use and data privacy?
Note: AI platforms and policies evolve rapidly; revisit your syllabus language and tool choices each semester to ensure compliance with updated institutional guidelines.
Do I have a backup plan for students who decline to sign up for third-party tools?
Have I ensured that my research, exam banks, and original teaching materials are not uploaded to public models?
Increased enterprise integration (AI assistance embedded directly inside LMS, Microsoft 365, Google Workspace, and publisher tools)
More regulations about educational data and AI (state-level AI bills, federal FERPA modernization proposals, DOE guidance)
Campus-wide AI governance models (AI committees, tool-approval workflows, mandatory risk assessments)
Growing student expectations for AI literacy and data privacy
AI hallucination and misinformation safeguards (required verification steps for any AI-assigned work)
Ethical and equity implications of AI-driven personalization
U.S. Department of Education – Student Privacy (FERPA)
https://studentprivacy.ed.gov/
University of Houston System – Usage of Artificial Intelligence (AI) at UHS
https://uhsystem.edu/offices/information-security/resources/artificial-intelligence/index.php
University of Houston System – Data Classification and Protection (SAM 07.A.08)
University of Houston-Downtown – Generative AI Faculty Guide
https://www.uhd.edu/provost/teaching-learning-excellence/generative-ai-faculty-guide.aspx
EDUCAUSE – Generative AI & Data Privacy Resources
https://library.educause.edu/topics/infrastructure-and-research-technologies/generative-ai
University of Pittsburgh – AI and Data Privacy and Security
https://teaching.pitt.edu/resources/ai-and-data-privacy-and-security/
Harvard University Information Technology – Generative AI Guidelines
https://it.huit.harvard.edu/book/export/html/3301861
Yale Poorvu Center – Protecting Student Privacy and Your Data
Albany State University – AI Integration & Ethical Use Guidelines (2025)
https://www.asurams.edu/docs/ai/AISafetyGuidelines.pdf
Future of Privacy Forum – Vetting Generative AI Tools (Legal Compliance Checklist)
https://fpf.org/wp-content/uploads/2024/10/Ed_AI_legal_compliance.pdf_FInal_OCT24.pdf
National Education Association – Student & Educator Data Privacy
https://www.nea.org/resource-library/student-and-educator-data-privacy
This guide was developed with the assistance of Copilot, Google Gemini, and ChatGPT to synthesize current 2025 higher education policies.