This course includes a clear policy on the use of Generative Artificial Intelligence (GAI) tools (such as ChatGPT, Copilot, Claude, and DALL-E). This policy is designed to help you understand when and how GAI tools may be used appropriately to support your learning, and when their use is not permitted.
We aim to promote ethical, transparent, and effective use of AI in your academic work.
The CSIT Department recognizes that GAI technologies are increasingly part of professional computing practice. In this course, GAI use is permitted within defined limits, so that students can develop real-world skills while upholding academic integrity.
This approach balances:
· Encouraging responsible AI use aligned with industry standards.
· Preserving your opportunity to build foundational skills.
· Maintaining fairness and transparency in assessment.
Generative Artificial Intelligence (GAI) refers to AI tools that can generate text, code, images, or other content based on user prompts. Examples include (but are not limited to):
· ChatGPT
· GitHub Copilot
· Bing Chat
· Claude
· Google Bard
· DALL-E, Midjourney, and other image-generation tools
In this course, GAI includes any tool that generates content for you automatically, beyond simple spellchecking or grammar correction.
When GAI use is permitted, students should learn to use these tools ethically and effectively. Recommended resources include:
· Microsoft Responsible AI Principles
· GitHub Copilot Documentation
· Austin Community College - Artificial Intelligence Policies
These resources can help you understand responsible, transparent use of AI tools in software development and writing.
When GAI use is allowed or required:
· You must disclose any GAI tool used in your submission (e.g., in code comments or in an acknowledgment section of written work).
· You remain responsible for verifying the accuracy, quality, and originality of all work you submit.
· Instructors may specifically require or prohibit GAI use in particular assignments or projects, as described in assignment instructions.
· Failure to disclose GAI use when required is considered academic dishonesty.
Violations of this policy are considered breaches of academic integrity and may result in:
· A reduced grade or zero on the assignment.
· Failing the course for serious or repeated offenses.
· Referral to the College’s Academic Integrity process.
Instructors may grant explicit written exceptions to this policy for specific assignments, projects, or research explorations. If you wish to request an exception, you must do so in advance and receive written approval.
Examples of prohibited GAI use include:
· Submitting AI-generated content as your own work without disclosure.
· Using GAI to automatically complete assessments that require individual skill demonstration (e.g., exams, quizzes, coding challenges marked as “no AI assistance”).
· Using AI to bypass learning objectives (e.g., generating entire essays or programs without understanding them).
· Misrepresenting AI output as your original analysis or problem-solving.
Examples of acceptable GAI use include:
· Using GAI for brainstorming or idea generation (with disclosure).
· Getting help with general concepts or explanations.
· Drafting content or code snippets with proper attribution and verification.
· Proofreading or refining writing.
· Learning from AI explanations (while still submitting your work).
· Example of disclosure in a code comment (Python):
    # Partial function inspired by ChatGPT prompt on binary search (May 2024)
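For a fuller picture, here is a minimal sketch of what a disclosed, verified submission might look like. The assignment, function name, prompt wording, and tests are hypothetical examples, not a required format; the point is the disclosure comment plus your own verification of the code.

    # GAI disclosure: the initial structure of binary_search was drafted with
    # ChatGPT (prompt about an iterative binary search in Python, May 2024).
    # I reviewed the logic, renamed variables, and wrote the tests below myself.

    def binary_search(values, target):
        """Return the index of target in the sorted list values, or -1 if absent."""
        low, high = 0, len(values) - 1
        while low <= high:
            mid = (low + high) // 2
            if values[mid] == target:
                return mid
            if values[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1

    # Verification performed by the student before submitting.
    assert binary_search([1, 3, 5, 7, 9], 7) == 3
    assert binary_search([1, 3, 5, 7, 9], 4) == -1
    assert binary_search([], 1) == -1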
Certain assignments or labs may specifically require the use of GAI tools to simulate industry practice (e.g., using GitHub Copilot in coding exercises).
In these cases, instructions will indicate how to use GAI tools and how to document that use.
If you're unsure whether your intended use of a GAI tool is allowed, ask your instructor before you submit your work.
This policy aims to prepare you for responsible, real-world AI use while maintaining the integrity of your learning. Use AI as a tool to enhance your skills, not to replace them.