Navigating A.I.: Creating Healthy Habits
Welcome to our site!
AI is becoming a huge part of our everyday lives. It shows up in every online sphere, and even in our schools. It can be tempting to use it for anything and everything: it's fast, convenient, and most of the time free. But AI use can slide from harmless to harmful when it's applied in the wrong settings. That's why this site exists. We're a group of psychology students here to show you when AI use in school becomes harmful to you and your learning. If you're interested in using AI in a way that helps you without hurting you in the long run, this site is for you. At the bottom of the page you can download a digital 'zine about responsible AI use!
What is AI (like really)?
When people hear the word “AI,” a lot of them instantly picture something out of a sci-fi movie—a robot that can think for itself, knows everything, or might even take over the world. But that version of AI doesn’t exist. A huge part of truly understanding AI is letting go of the idea that it’s some super-intelligent, self-aware being. The kind of technology people imagine, called Artificial General Intelligence (AGI), isn’t real today and might not ever be. What we do have is a tool that mimics certain parts of human intelligence, not a creature with a mind of its own. As Lanier (2023) puts it, “The most pragmatic position is to think of A.I. as a tool, not a creature” (p. 4). When we see AI as technology instead of a personality, we can understand it better and use it in healthier ways. But when people start treating AI like it has feelings, intentions, or human-like intelligence, it only leads to more confusion and encourages irresponsible use.
AI is really just a tool built from math, code, and pattern recognition. It can do a lot of things really well, like turning long readings into short summaries or searching through tons of information and pulling out exactly what you need. But even with all of that, AI doesn't actually think or feel. It's not trying to replace human intelligence; it's just copying small pieces of how we think, and even then, it doesn't always get it right. A fear people have is that AI “thinks” on its own, but it doesn't. It doesn't understand meaning, form opinions, or interpret things the way humans do. It works by using probability: basically guessing what the next word should be based on patterns it has seen before. When ChatGPT, the first major generative language model to go viral, was released, even its own creators didn't expect it to blow up. According to The Atlantic, staff members at OpenAI were betting on how many users it would get in its first week, and the highest guess was only around 100,000. No one expected a predictive text model to have such a huge impact or spark such a massive conversation about AI. That shows how quickly public interest grew, not because AI suddenly became alive or intelligent, but because people didn't fully understand how these tools actually worked.
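To make "guessing the next word from patterns" concrete, here is a minimal sketch using simple word-pair counts. This is our own toy illustration, not how ChatGPT is actually built (real models use neural networks trained on enormous datasets), but the underlying move of picking a statistically likely next word is the same:

```python
# A toy next-word predictor built from word-pair (bigram) counts.
# The mini-corpus is invented; real models learn from billions of examples,
# but the core move is the same: guess a likely next word from seen patterns.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = following[word]
    if not counts:
        return None  # never seen this word: there is no 'understanding' to fall back on
    total = sum(counts.values())
    best, freq = counts.most_common(1)[0]
    return best, freq / total

print(predict_next("the"))  # ('cat', 0.25): a statistical guess, not a thought
print(predict_next("sat"))  # ('on', 1.0): it has only ever seen 'sat on'
```

A model like this never decides what is true or sensible; it only reports what usually came next in its training data, which is exactly why fluent output can still be wrong.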
Korteling and colleagues (2021) make a really important point when they describe AI as "silicon, not biological intelligence." In other words, our brains and AI are nothing alike, and AI is actually at a big disadvantage compared to us. Humans can take past experiences, memories, emotions, and context and instantly make sense of what's happening around us. We can look at something new and immediately connect it to things we've seen before. AI can't do that. It doesn't have lived experience. Every time it looks at something (an image, a question, a sentence) it starts from scratch with no real understanding behind it. That's why things that seem obvious to us, like telling the difference between a chihuahua and a blueberry muffin, can completely confuse an AI program. It doesn't "see" the world the way we do; it just notices patterns in pixels.
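As a toy illustration of "patterns in pixels," here is a naive similarity check between two tiny made-up grids of brightness values. The grids and the pixel-distance measure are our invented simplification (real vision models are far more sophisticated), but it shows how a program that only compares pixel patterns can call two very different things "alike":

```python
# Naive sketch of why pixel patterns can mislead (illustrative only).
# Two tiny 'images' as grids of brightness values: round tan blobs with
# dark spots could be a blueberry muffin... or a chihuahua's face.
image_a = [[200, 60, 200], [200, 200, 200], [60, 200, 60]]   # "muffin"
image_b = [[200, 55, 205], [195, 200, 210], [65, 205, 58]]   # "chihuahua"

def flatten(img):
    return [px for row in img for px in row]

# Pixel-by-pixel distance: a small number means "these look the same."
distance = sum(abs(a - b) for a, b in zip(flatten(image_a), flatten(image_b)))
print(distance)  # tiny distance, so a naive pattern-matcher treats them as
                 # the same thing, while a human instantly knows the difference
```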
To show what this looks like in practice, consider ELIZA, one of the earliest computer programs to act like a therapist. Even though people in the 1960s felt emotionally connected to it, ELIZA wasn't thinking or understanding them at all; it was simply matching patterns and repeating phrases in a way that felt human. This example helps us understand how easy it is to mistake pattern-matching for real intelligence, and why understanding AI's limits matters so much today.
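To get a feel for how little machinery that kind of conversation requires, here is a minimal ELIZA-style pattern matcher. The rules and phrasings are our own invented examples, not Weizenbaum's original script; the point is that a handful of templates can feel surprisingly human:

```python
# Minimal ELIZA-style sketch: match a keyword pattern, echo a template.
# No memory, no empathy, no understanding - just string substitution.
import re

rules = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(user_input):
    text = user_input.lower().strip(".!?")
    for pattern, template in rules:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default reply when nothing matches

print(respond("I feel anxious about exams"))
# -> "Why do you feel anxious about exams?"  (pure pattern-matching)
```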
The same thing happens today with large language models, including ChatGPT. Their intelligence has been compared to that of young kids: AI might get simple things right, but it often produces answers that are totally wrong or nonsensical when faced with novel problems. And unlike kids, who can come up with creative new ideas, AI cannot.
In short, AI is not a brain. It makes mistakes, sometimes gives outdated info, and can sound confident even when it's wrong. It’s powerful, but it’s still only a tool, and it works best when humans use it thoughtfully and responsibly.
Should you use AI?
There is controversy surrounding AI, both for its environmental impact and for the origins of the data it is trained on. On the environmental side, modern AI models consume vast amounts of power and water: first to train the model, then to run and cool the data centers that serve its queries. The energy used to train GPT-3 alone was estimated to be enough to power 120 U.S. homes for a year.
And the consumption doesn't end there. It has been estimated that every ChatGPT query uses about ten times as much energy as a web search, and a single complex prompt to a large language model can carry a carbon footprint comparable to everyday activities like charging a phone. Classroom energy-use calculators make the scale concrete: activities such as streaming video can use as much or more energy than a single AI query. In other words, AI's footprint is one piece of a much larger pattern of compounding energy demands from modern technology, and those costs are predicted to grow as AI models become more complex and add parameters.
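To put the "ten times a web search" figure in perspective, here is a back-of-the-envelope calculation. The per-search energy figure and the usage numbers are assumptions we picked for illustration, not values from the sources cited on this page:

```python
# Back-of-the-envelope energy math (all figures are rough assumptions for
# illustration; real values vary by model, hardware, and data center).
WEB_SEARCH_WH = 0.3                  # assumed watt-hours per web search
AI_QUERY_WH = WEB_SEARCH_WH * 10     # "10x a web search," per the estimate above

users = 100_000_000       # hypothetical daily users
queries_each = 10         # hypothetical queries per user per day

daily_kwh = users * queries_each * AI_QUERY_WH / 1000
print(f"One query: ~{AI_QUERY_WH} Wh")
print(f"Fleet-wide: ~{daily_kwh:,.0f} kWh per day")
# ~3,000,000 kWh/day under these assumptions: tiny per person, huge at scale.
```

The takeaway is less about any single number and more about the multiplication: a cost that is trivial per query becomes enormous once millions of people make it a daily habit.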
Another potential reason to avoid using AI models is the way they are trained. There have been countless lawsuits against AI companies for using individuals' work without permission or compensation, including photos, videos, music, literature, art, blog posts, and much more. The vast amount of data it takes to train an AI model pushes companies to scrape every corner of the internet in hopes of building a model that generalizes and generates better than its competitors. That reality, combined with a lack of transparency from AI companies, makes this an ongoing issue that few companies have begun to remedy. Some AI models handle these ethical qualms better than others. For example, Adobe claims its AI model Firefly is trained only on content it has officially licensed, and the company has stated it will not use customers' data without their permission. It does use content uploaded to its stock photo website, but has agreed to pay an annual bonus to creators whose work is used to train the model. This is an application of a concept called "data dignity," in which the companies that develop these models would both credit and pay the people whose work trains them. Not all AI models follow such ethical boundaries, so it's important to do your own research on the model you choose to use.
While many people are choosing to avoid AI when possible because of these issues (as well as other concerns), this isn't always realistic. In today's culture, artificial intelligence is integrated into nearly every part of our lives. Even somebody who thinks they have never used AI has likely encountered it without realizing it: scrolling social media, using search engines, accepting the suggested text in an email or Google Doc, or writing documents with grammar correctors.
In all of these situations, you are almost certainly using some form of built-in artificial intelligence. Because AI is so woven into our lives and culture, it inevitably influences how we think, learn, create, and communicate. That reality makes it essential to understand how AI works, what it can and cannot do, and how to use it responsibly.
Do you use AI irresponsibly?
What does irresponsible AI use actually look like? Research has found something pretty interesting: AI can help you see new perspectives and even boost your critical thinking, but if you start relying on it too much, your motivation and real level of learning can drop fast. In other words, AI isn't automatically "good" or "bad." It really depends on how you use it.
If you want AI to actually help you, not hurt your learning, you need to know what irresponsible use looks like. Once you know the signs, it's easier to catch yourself and use AI as a tool, not a way to avoid thinking, making decisions, or doing your own work.
Here are a few examples of irresponsible AI use:
Letting AI create finished ideas or assignments for you
Using AI answers without questioning them
Relying on AI to do the entire task
Using AI to avoid learning or practicing skills
Letting AI record, summarize, or transcribe lectures so you don't have to engage
80% of adults believe young people are losing critical thinking skills. While that concern makes sense, AI hasn't actually caused a major spike in cheating since tools like ChatGPT came out. Still, because AI makes things easier, students who trust it more than their own judgment can run into problems.
AI can't replace real human reasoning, and if you rely on it too heavily, you may end up with wrong or misleading information. That's why it's so important to build healthier habits and use AI in intentional, ethical ways.
How Can You Be More Responsible With AI?
Responsible AI use isn’t about avoiding AI—it’s about using it in ways that help you learn. Some examples include:
Using AI to brainstorm, not replace your creativity
Fact-checking everything AI generates, even summaries
Using AI to support your learning, not do the work for you
Combining AI’s ideas with your own thinking
AI is becoming super accessible, which is great—but it also means a lot of people are picking up bad habits without realizing it. Because AI feels fast, convenient, and “smart,” it’s easy to overuse it. But remember: AI doesn’t think the way humans do. It mimics patterns we create. So if you rely on it too much, it can actually hold you back from developing your own skills.
One major issue is overdependence. That’s when AI starts doing the thinking for you. While AI can help students explore topics from different angles, using it too often can make you less motivated and weaken your deep-thinking skills.
Thinking is like a muscle. If you don’t use it, it gets weaker. And since so many adults already worry that young people are losing critical thinking skills, relying on AI to think for you will only make that fear feel more real.
Another problem? A lot of people don’t question AI’s answers. AI can sound super confident, but it still makes mistakes—way more often than most users realize. And because it doesn’t understand information the way humans do, it can easily get things wrong.
So how do we build responsible AI habits? Start with everyday examples of responsible use, such as:
Summarizing text you have read (and fact-checking the summary to deepen your understanding)
Generating ideas (without letting AI write for you)
Correcting grammar or rephrasing sentences you have already written
AI can be great for helping you brainstorm ideas or understand something better, but you should still be the one thinking, deciding, and creating. Think of AI like a calculator: a tool that can help you out, but one that shouldn't be doing the whole assignment for you.
1. Question AI's answers
AI can sound confident, which leads many users to assume its responses are always correct. But AI sometimes gives made-up, outdated, or false information.
That’s why it's important to:
Fact-check
Look up information from real sources
Use your own judgement
2. Set boundaries for when you use AI
Boundaries might include:
Using AI for suggestions, not answers
Using AI for feedback on something you already wrote
Not using AI during tests, personal reflections, or skill-building activities
3. Use AI to assist the work, not escape it
Ask questions such as:
“Explain this to me more simply.”
“Show me the steps to solve this problem.”
“Help me understand this concept.”
AI is powerful, but it’s important to continue practicing writing, thinking, and problem-solving on your own. Your brain needs exercise, just like your muscles do.
Responsible AI usage doesn't mean avoiding it completely, but instead understanding its limits and using it in ways that support your growth. By building healthier habits and staying aware of how AI affects your thinking, teens and young adults can take advantage of technology while still developing strong, independent skills for the future.
About The Authors:
Why this project is important to us:
We acknowledge the ethical issues surrounding AI use, and some of us avoid using it when realistic. But we also acknowledge that AI has become inextricably woven into our everyday lives. Since AI is always changing and evolving and is so widely used, we think it's incredibly important that people have up-to-date information on what AI is and how to use it. This applies especially to teens and young adults in school, who are actively growing up with this technology. Growing up immersed in AI could lead to an unhealthy or damaging level of use, which we hope to help prevent.
Atzimba Alfaro:
Major(s): Psychology
Minor(s): Psych Health & Well-Being
Interests: Cartooning, Painting, and Photography
Trinity Millier:
Major(s): Psychology/ Mathematics
Minor(s): None
Interests: Crochet/ Knitting, Yoga, Reading, Baking, Painting
Makayla Ogo:
Major(s): Dental Hygiene
Minor(s): None
Interests: Crafting, violin, piano, video design
Bella Renner:
Major(s): Psychology
Minor(s): Spanish
Interests: Hiking, Baking, Olympic Lifting, Traveling
Resources:
Bearne, S. (2025, May 5). The people refusing to use AI. BBC. https://www.bbc.com/news/articles/c15q5qzdjqxo
Carson, D. (2025, April 21). Theft is not fair use. Medium. https://jskfellows.stanford.edu/theft-is-not-fair-use-474e11f0d063
Fried, I. (2024, November 12). Adobe's pledge: We won't train AI with your data. Axios. http://axios.com/2024/11/12/adobe-data-ai-training
Jiang, H., et al. (2023). AI art and its impact on artists. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES '23). https://dl.acm.org/doi/pdf/10.1145/3600211.3604681
Kassan, P. (2006). AI Gone Awry. Skeptic, 12(2).
Korteling, J. E., van de Boer-Visschedijk, G. C., Blankendaal, R. A., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human- versus Artificial Intelligence. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.622364
Lanier, J. (2023, April 20). There Is No A.I. The New Yorker. https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
Partovi, H., & Yongpradit, P. (2024). 7 principles on responsible AI use in education. World Economic Forum. https://www.weforum.org/stories/2024/01/ai-guidance-school-responsible-use-in-education/
Prothero, A. (2024). New data reveal how many students are using AI to cheat. Education Week. www.edweek.org/technology/new-data-reveal-how-many-students-are-using-ai-to-cheat/2024/04
Spector, J. M., & Ma, S. (2019). Inquiry and critical thinking skills for the next generation: From artificial intelligence back to human intelligence. Smart Learning Environments, 6. https://link.springer.com/article/10.1186/s40561-019-0088-z
Steele, J.L. (2023). To GPT or not GPT? Empowering our students to learn with AI. Computers and Education: Artificial Intelligence, 5. https://www.sciencedirect.com/science/article/pii/S2666920X23000395?via%3Dihub
Warzel, C. (2023). One Year In, ChatGPT’s Legacy Is Clear. The Atlantic. https://www.theatlantic.com/technology/archive/2023/11/chatgpt-impact-one-year-later/676188/
Zewe, A. (2025, January 16). Explained: Generative AI's environmental impact. MIT News. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-011
AI Disclosure Statement:
We used AI to better our grammar and make a few of our sentences more precise.