You may have heard of ChatGPT (Chat Generative Pre-Trained Transformer), a cutting-edge AI tool making waves in classrooms and among students. It's part of the generative AI family, designed to assist with writing, editing, and organizing information. Generative AI is artificial intelligence that creates new content based on what it has learned from existing materials. This learning process, known as "training," results in a statistical model that predicts likely responses to given prompts, enabling the generation of fresh content. Developed by OpenAI and launched for public beta testing in November 2022, ChatGPT utilizes a pre-trained language model. It takes user prompts and crafts responses that mimic human dialogue. Its standout features include answering questions, conversing, summarizing information, and even writing computer code.
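For readers curious about what "taking a prompt and crafting a response" looks like under the hood, below is a minimal sketch using OpenAI's official Python library. The model name, prompt, and setup are illustrative assumptions, and most students will simply use the web chat interface rather than code.

```python
# Minimal sketch of sending a prompt to a language model and reading its reply.
# Assumes the official `openai` Python package is installed and an API key is
# set in the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Summarize the causes of the French Revolution in two sentences."}
    ],
)

# The model returns a statistically likely continuation of the conversation,
# not a verified answer, so treat this text as a draft to fact-check.
print(response.choices[0].message.content)
```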
At UMPI, and across the University of Maine System (UMS), we also have access to Google's Gemini AI product. Gemini is trained on vast datasets drawn from across Google's ecosystem, encompassing websites, books, and various texts, estimated to include nearly one trillion words across multiple languages. This training involves both labeled data, which humans annotate to help the model categorize information, and unlabeled data, or "raw data," which lacks such annotations. Google has improved the model's accuracy by having human evaluators refine its analysis of unlabeled data, boosting its proficiency in producing coherent written language.
As we continue to explore generative AI, our understanding will evolve. At UMPI, we are committed to collaborating with Programs, Faculty, Student Support Services, and local community leaders to help faculty and students use this innovative learning tool effectively, ethically, and in ways that support learning and skill development.
At this time, ChatGPT has not published an accessibility policy or statement.
Let's look at how we can use these tools in the context of Academic Integrity...
Some students have already started using ChatGPT, Gemini, or similar tools to support their studies; however, it's crucial to remember that these emerging AI tools cannot replace independent thinking and scholarship. Each of your instructors may have their own stance on what constitutes acceptable use of these tools, but most advice across disciplines cautions against directly copying responses from an AI tool or submitting an essay prompt and claiming the generated response as your own.
Instructors may also factor in participation, group work, and peer engagement as part of your final grade, emphasizing the collaborative nature of learning in the course. Relying solely on Gemini or other AI tools as your main source of interaction could be viewed as a lack of participation if other forms of collaboration and peer engagement are expected.
UMPI encourages you to review your individual class syllabus and discuss your professors' policies if you need more clarification. Most academic citation styles, such as MLA, Chicago, and APA, have issued guidelines on how to cite AI tools in academic writing; citing these tools is almost always required when you use them.
Remember, while Gemini can deliver quick and seemingly comprehensive responses to exploratory questions compared to search engines like Google, there are significant limitations to consider. Any generated responses should be fact-checked. When in doubt, consult reliable sources to build the background knowledge necessary to critically analyze the content produced by AI.
Understanding the limitations of generative AI, particularly language models, is crucial when using them for learning purposes. AI tools are only as effective as the prediction models and training data they are built upon. This means that an AI tool's knowledge is confined to what it was trained on, and an algorithm fed inaccurate training data will yield poor results. The initial training data for these AI tools was sourced from a wide array of resources across the internet, comprising a vast amount of text from websites, articles, books, and other written materials. Additionally, examples of human conversations and dialogues in over 95 languages were also included in the learning model.
Despite the extensive data available, it's important to recognize that there are still limitations to using AI tools that you should be aware of. Some limitations are specific to certain AI tools; let's look at some common and widespread limitations:
One significant limitation of generative AI tools is their susceptibility to biases and stereotypes present in the data used to train the model. Because these machine-learning tools draw on whatever sources appear in their training data, tools like Gemini in many ways reflect humanity's prejudices and conscious and unconscious biases. For this reason, anyone using this technology should be aware that generative AI cannot necessarily recognize bias and can therefore amplify inaccuracies and biases as fact, without disclosure.
Another serious limitation of ChatGPT and similar technology is that it often "hallucinates," or makes up information. This has enormous ethical implications, and in the research and writing process it is crucial that students understand that sources produced by ChatGPT can be entirely fabricated and may not correspond to real sources.
Although ChatGPT's responses are generated in real time, they may not always be up to date. The GPT-3.5 model's training data ends in September 2021, meaning it may give outdated responses unless its training data is updated (the GPT-4 model, which requires a paid subscription, was last updated in April 2023); Gemini was last trained in December 2023. Unlike search engines, which can provide the latest information, most generative AI models are not trained to search the internet and pull in information while being used.
While you may have heard tools like ChatGPT and Gemini compared to the calculator in the history of technological innovation, these tools are not calculators and cannot reliably perform calculations. If you ask ChatGPT to calculate something, you may notice that its responses are inconsistent. This is because it treats numbers as text (strings) and only makes an educated guess at the correct answer. If you do ask an AI tool for such an educated guess, we recommend checking the result with a calculator or a tool designed to evaluate equations, as in the sketch below.
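As a concrete illustration of that advice, here is a minimal Python sketch for checking an AI-supplied arithmetic answer against an exact computation; the expression and the chatbot "answer" below are hypothetical examples, not output from any particular tool.

```python
# Minimal sketch: double-check an AI-supplied arithmetic answer with an exact
# computation. The expression and the chatbot "answer" below are hypothetical.
ai_reported_answer = 41_138        # what a chatbot might claim 318 * 127 + 852 equals

actual_answer = 318 * 127 + 852    # Python evaluates the arithmetic exactly (41,238)

if ai_reported_answer == actual_answer:
    print("The AI's answer checks out:", actual_answer)
else:
    print(f"Mismatch: the AI said {ai_reported_answer}, "
          f"but the correct value is {actual_answer}.")
```

The habit matters more than the specific tool: recompute any number an AI gives you with a calculator, spreadsheet, or a few lines of code before relying on it.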
Do I understand my professor's class policy on acceptable use of AI?
Did I cross-reference the information I’m using?
Did I directly copy and paste AI-generated content, when that is not explicitly part of the assignment?
Does how I am using this generative AI tool allow me to still reach my own conclusions?
Did I develop my understanding of the topic before or after using generative AI?
Have I appropriately cited my AI session?
Coming Soon: An AI Exploratory Toolkit!