The following statement from the Provost's office has been shared with students:
Faculty members determine the use of artificial intelligence (AI) tools, both those provided by the College and third-party tools not licensed by the College, in their courses and communicate their expectations regarding the use of AI tools in course syllabi. Students should also talk with their faculty/instructors to understand course-specific expectations and policies regarding AI use and non-use. For both course and non-course-related work (e.g., personal learning, career preparation, creative projects, or extracurricular activities), student use of AI must adhere to Colby’s policies, including the Acceptable Use Policy, and applicable laws. Whenever possible, Colby advises that students, as well as faculty and staff, use Colby-provided AI tools in place of third-party AI resources due to data security concerns.
AI literacy is emerging as an important aspect of higher education and professional work, though its definition has yet to be standardized. While definitions vary across sources, they consistently emphasize a shared set of core competencies. Ng et al. (2021) offer a particularly clear framework, which defines AI literacy as having three key components:
(1) learning about AI
(2) learning how AI works
(3) learning for life with AI
Together, these components address knowledge, functional use, and social impact.
AI literacy must also include developing awareness of AI’s ethical, societal, and environmental impacts. Generative AI tools have the potential to assist humans in areas like data analysis, idea generation, task automation, and planning. But widespread adoption of AI tools has also brought emerging problems, such as misinformation, bias, lack of transparency, data privacy risks, addiction, atrophy of human skills, unsustainable water use, and mining pollution. Part of AI literacy, then, is acknowledging the complex issues surrounding the use of AI, including its potential to reinforce bias, compromise privacy, and reduce transparency in decision-making. AI tools can also widen disparities in access, particularly in rural or under-resourced communities, because they rely on high-speed internet, advanced hardware, and digital fluency. The environmental impact of AI, including the significant energy demands of training large models, raises serious sustainability concerns. AI’s influence on labor is equally profound, from automating jobs to reshaping entire sectors and deepening inequality. Moreover, large language models (LLMs) are often trained on the labor, ideas, and cultural contributions of writers, artists, and online communities, frequently without acknowledgment or compensation.
Lastly, AI literacy involves being able to critically evaluate AI-generated content for accuracy and appropriateness and to assess the quality, bias, and reliability of AI outputs. In educational settings, AI literacy includes knowing when and how to use different AI tools responsibly to support learning while maintaining academic integrity.
For more context on these questions, see also Sheriff, “Who Cares About AI and Writing?”
What knowledge, skills, or habits of mind do I want students to develop through this course?
Consider whether AI risks replacing the learning process with an output shortcut.
Example: “Make an outline for my paper” → risks outsourcing idea development and learning how to scope an argument
When/where should students struggle productively as part of learning?
Generative AI tools are designed to reduce “friction” and please users.
However, friction is a normal part of the learning process that can build competence. Learning requires productive struggle; smooth assistance can undercut growth.
Example: Prompt engineering vs. drafting and revising to work through confusion and develop ideas.
What role does, or should, critical thinking play in the context of generative AI?
AI tends to offer neat, simplified, or over-generalized answers. Overuse may reduce students' skepticism or willingness to interrogate claims (Lee, 2025).
Example: “Summarize this article” or “Give me both sides of an issue…”
How might using generative AI tools for a given assignment or task privilege the product over the learning process?
Students may turn in polished work produced by AI without deep engagement.
Example: “Fix my grammar” → helpful for polishing final text, but doesn't teach revision strategies or why a student’s language was changed.
AI can mask shallow understanding or bypass key learning moments.
Example: “Summarize this article in four key points” or “Write a first draft of 800 words in response to this assignment (attached)”
Note: Some models, like Gemini, can “show thinking,” which gives some context for how they developed answers and links to some external resources that informed the results.
Are all students equally equipped to use AI well?
Students vary in AI access, prompting skills, and disciplinary knowledge.
Students may over-trust or misuse AI, assuming that its responses are better than their own writing or thinking.
Increasingly, students have become accustomed to using AI tools daily for school and personal questions (e.g., ChatGPT, Gemini, Claude, and phone apps). This use may or may not be critical and intentionally limited.
Content-area and/or skills expertise is essential for using generative AI effectively and evaluating the results.
Am I preparing students for “real-world” AI use, outside the educational context at Colby?
AI is widely used across many industries, but with expectations for human judgment and oversight.
Teaching ethical use, documentation, and critical evaluation is essential.
Example: GitHub Copilot for coding → can assist, but encourages dependency if unexamined.
When copyrighted materials are uploaded to a secure large language model (LLM) licensed by the institution, such as Google Gemini and NotebookLM, the user agreement between Colby and Google states that any uploaded content is not used to retrain or improve Google’s models. This agreement is designed to protect intellectual property and user privacy, meaning that course readings, proprietary research data, or student work remain confined to Colby’s secure computing environment.
However, if the same materials are uploaded to a public or consumer-facing LLM without such protections (e.g., the free versions of ChatGPT, Bard, or Claude), the provider’s terms of service typically allow the content to be stored and used to train or improve the model. This poses a risk for copyrighted or sensitive materials. Importantly, uploading copyrighted material to an unsecured or unlicensed platform without permission may constitute a violation of copyright law, potentially exposing the user and the institution to legal liability. Lawsuits filed by authors, artists, and publishers have challenged the legality of this practice, raising important ethical and legal concerns about how training data is sourced and how intellectual property is respected in the development of AI. Understanding the data usage policies of AI platforms is essential for making informed, responsible decisions about what content is appropriate to share. At Colby, anyone with questions about copyright should consult with Natalie Hill or Kara Kugelmeyer. For intellectual property or legal concerns, they should contact the Office of General Counsel.
Faculty may find resources, example policies, and FAQs on Colby's Academic Integrity website. For more guidance on academic integrity and AI, faculty should contact the Academic Integrity Committee Chair/Coordinator.
Additionally, faculty are encouraged to collaborate with campus support networks to create the course experience they want for their students (no AI, restricted AI use, etc.) and to craft course policies, assignment design, and assessment. These resources include consulting with colleagues and drawing on the expertise of the Writing Department, the Center for Teaching and Learning (CTL), Academic ITS, Academic Integrity committee/staff, Colby Libraries, and Davis AI. These partners can assist in reviewing both formative (e.g., quizzes, homework) and summative (e.g., papers, projects, exams) assessments to ensure they promote authentic learning and reduce the likelihood of unacknowledged AI use.
When sensitive research data, such as human subjects data, confidential or licensed information, or datasets owned or governed by third parties, is uploaded to a secure large language model (LLM) licensed by the institution, such as Google Gemini or NotebookLM, the user agreement between Colby and Google explicitly states that uploaded content is not used to train or improve Google’s models. This contractual protection helps safeguard data privacy, research integrity, and compliance with institutional and regulatory requirements.
However, uploading the same data to a public or consumer-facing LLM without institutional protections (e.g., the free versions of ChatGPT, Bard, or Claude) can be both unethical and illegal. In most cases, the provider’s terms of service allow uploaded content to be stored and potentially used for training, raising serious concerns about data security, privacy, and unauthorized redistribution. Sharing sensitive or restricted data on unsecured or unlicensed platforms may violate data use agreements, IRB protocols, or applicable laws (such as HIPAA, FERPA, or data protection regulations), and could expose the researcher and the institution to compliance violations or legal consequences.
Researchers must review the data sharing and usage terms of any AI platform before uploading course data, research-related data, or sensitive data they have access to as part of their employment at Colby. In particular, any data that is confidential, governed by a Data Use Agreement (DUA), part of a grant or sponsored research contract, or that includes personally identifiable information (PII) should not be shared with tools outside of Colby’s secure computing environment.
Faculty and students with questions about data privacy or third-party data related to research should consult with Colby’s Data Services Librarian (Kara Kugelmeyer) or the Chair of Colby’s Institutional Review Board (Kara Kugelmeyer). For legal or contractual concerns, contact the Office of General Counsel.
SUPPORT
Troubleshooting, access issues, or information security:
ITS Support Center or support@colby.edu
Academic setup, operational guidance, or consultation:
Academic Technology Services or teched@colby.edu
Advanced academic or research-related support:
Davis Institute for Artificial Intelligence or davisai@colby.edu
Research, data management, copyright, or intellectual property:
Colby College Libraries, liaisons@colby.edu, or kmkugelm@colby.edu
Generative AI and writing assignments, writing assessment, supporting multilingual students:
Writing Department, Farnham Writers' Center, or ssheriff@colby.edu
Teaching consultation, course design, or effective teaching methods:
Center for Teaching and Learning or ctl@colby.edu
If you have any questions, feedback, or corrections regarding the information provided, please contact Academic Technology Services at teched@colby.edu.