What is our campus AI policy?
Our Academic Policy on the Use of Generative AI Tools was adopted in October 2024 and provides guidance for faculty, staff, and students. The policy can be found in the Files section of the Campus Faculty Teams.
A broad summary of the policy is as follows:
This university policy outlines the ethical and responsible use of generative AI tools like ChatGPT and Copilot by faculty, students, and staff. It prohibits the use of AI for completing assignments, exams, or any academic assessments without instructor approval and emphasizes adherence to plagiarism rules. Acceptable uses for students include brainstorming, outlining, and using tools like Grammarly for grammar support. Faculty may use AI to enhance course materials, assist with research, and streamline administrative tasks, but not for grading, generating feedback without review, or inputting confidential data. Staff may use AI for operational efficiency, provided data privacy is maintained.
The policy also stresses the importance of privacy, stating that no sensitive or FERPA-protected information should be entered into AI tools unless the platform is compliant with those privacy requirements. Violations must be reported and will be handled through existing disciplinary channels. The Provost’s Council is charged with enforcing the policy and ensuring compliance, while CETL, IT, and the Academic Affairs Committee will review it annually. The overarching goal is to promote innovation while preserving academic integrity and data security.
Note: As of summer 2025, TXWES does not provide an institutional license for any AI model or software.
What about course AI policies?
At Texas Wesleyan, each faculty member is responsible for crafting a custom AI policy for their course syllabus that reflects their own instructional goals and academic standards. While faculty have full autonomy in deciding how permissive or restrictive they wish to be with AI use, they are encouraged to explore some level of integration to support student learning and engagement. Course-specific AI policies should clearly outline whether AI tools are allowed for assignments, how and when such tools must be cited, and what process will be followed if academic integrity concerns arise. Faculty may also wish to recommend specific tools (e.g., Grammarly, ChatGPT) and describe acceptable uses such as brainstorming ideas, organizing outlines, or receiving feedback on drafts. Including these details helps set clear expectations and fosters responsible, ethical use of AI in the classroom.
Click the "Crafting a Course AI Policy" button above for more guidance on creating these. If you have questions about crafting your policies, speak with your chair, dean or the CETL. It's encouraged to review your course AI policies every semester as AI is still rapidly evolving.
Note: As of summer 2025, TXWES does not provide an institutional license for any AI model or software.
AI Ethics
As educators, we have a responsibility to model the same ethical standards we expect from our students. That means if you use AI to support your teaching—whether it’s generating quiz questions, drafting assignment feedback, or using AI features in Turnitin—your students deserve transparency. Just as we expect them to cite AI assistance in their academic work, we should disclose when and how we’ve used these tools in course materials. This doesn’t diminish your expertise—it reinforces your integrity and builds student trust. Remember: students are watching how we engage with these technologies, and our example shapes how they learn to use AI responsibly.
Equally important is remembering why you were hired in the first place: for your disciplinary insight, pedagogical skill, and professional judgment. Generative AI tools can support those strengths, but they cannot replicate your critical lens, cultural context, or personal voice. Avoid over-reliance on AI to build lectures, modules, or entire courses—your students benefit most from content shaped by your unique perspective.
Additional ethical considerations go beyond transparency and authorship. Faculty should never input student names, grades, or assignment content into public AI platforms, as this may violate FERPA and compromise student privacy. When using AI-generated content, it's also essential to evaluate it for accuracy, bias, and alignment with your course goals—AI can produce plausible-sounding but incorrect or inappropriate responses. Furthermore, faculty should ensure that any AI-assisted materials are accessible to all learners, including those using screen readers or other assistive technologies. Ethical AI use also means critically considering how these tools support or hinder learning outcomes, and intentionally selecting use cases that reinforce—not replace—human-centered teaching and feedback.
Try it! Faculty AI Exploration Course
Click "Commons" and search "TXWES" to find this resource and more!
Institutional AI Tools
At this time, Texas Wesleyan does not offer employees or students an institutional license for AI tools or models such as Copilot, ChatGPT, or Gemini. However, TXWES does provide Turnitin, which includes an AI writing detection feature, and Grammarly, a writing support tool.
Turnitin AI Detection Tool
Turnitin (TII) is a university-provided tool that scans student submissions for potentially AI-generated work. The TII tool has several important limitations, drawn directly from Turnitin's FAQ article:
TII itself has a disclaimer that they "do not make a determination of misconduct even in the space of text similarity". They state that you still need to apply your professional judgment, personal knowledge of your students, and the assignment context to make the best determination. "The final decision on whether any misconduct has occurred rests with the reviewer/instructor."
Roughly 1 out of every 100 fully human-written documents may be falsely flagged as containing AI-generated content.
"There is a chance we might miss 15% of AI-written text in a document."
"Our AI writing detection scores under 20% have a higher incidence of false positives. This is inconsistent behavior, and we will continue to test to understand the root cause."
"Minimum word requirement [of] 300 words for a document to be evaluated by our AI writing detector."
"In shorter documents where there are only a few hundred words, the prediction will be mostly 'all or nothing' .."
"We observed a higher incidence of false positives in the first few or last few sentences of a document." (intro/conclusion)
"..the AI writing percentage does not necessarily correlate to the amount of text in the submission" (i.e. poetry, scripts, bullet point lists, short-form and annotated bibliographies are not assessed by the AI checker tool)
"We no longer show an AI score for documents where we detect less than 20% of AI writing."
"Sometimes false positives... can include... text that has been paraphrased without developing new ideas."
TII is trained to detect text generated by models "including GPT-3, GPT-3.5, and variants", as well as GPT-4, ChatGPT Plus, GPT-4o, Gemini Pro, LLaMA, and more.
"Our detector is not tuned to target Grammarly-generated spelling, grammar, and punctuation modifications to content but rather, other AI content written by LLMs..". "...this excluded content generated by Grammarly's generative AI-powered features, including draft generation, paraphrasing, summarizing, and other features. Content produced using these features will likely be flagged as AI-generated by our detector."
Faculty members may not use Turnitin as the sole determinant of potential unethical AI use by a student, nor take punitive action based on its score alone. As Turnitin's own guidance notes, faculty must also draw on professional judgment, personal knowledge of the student, and the assignment context. TII is a university-provided tool, and TXWES has no current plans to discontinue access to it. All faculty are urged to understand the limitations and scope of the TII AI Checker tool and to use it appropriately.
Grammarly
Grammarly is a university-provided tool for all employees and students. Be sure to state in your course AI policy whether this tool is allowed or disallowed for student use on course tasks and assignments. Note: While spelling corrections are unlikely to be flagged by AI detection tools, other features such as "clarity", "tone", and "paraphrase" are highly likely to be flagged as AI-generated content by a detection tool.
Have questions about using AI within your course, talking with students about AI use, or other AI-related topics?