Artificial Intelligence (AI) is transforming our world in profound ways. It's a branch of computer science focused on designing systems capable of problem-solving and performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. At its core, AI operates through algorithms and models that process large amounts of data to identify patterns and make predictions (Russell & Norvig, 2016).
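A minimal sketch can make that core loop concrete for students: learn a pattern from data, then use it to make a prediction. Here the "model" is just a least-squares line fit to hypothetical study-time data (the numbers are invented for illustration):

```python
# Sketch of the core AI idea above: find a pattern in data, then predict.
# The "model" here is a simple least-squares line.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training data: hours studied -> quiz score
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 71, 78]

slope, intercept = fit_line(hours, scores)
predicted = slope * 6 + intercept  # predict a score for 6 hours of study
print(round(predicted, 1))
```

Real AI systems use far larger datasets and far more complex models, but the pattern-then-prediction cycle is the same, and it is exactly that appetite for data that drives the privacy concerns discussed below.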
Apprehension about protecting privacy and ensuring online security in educational settings is rising. As schools integrate new technologies and rely more on data analytics to guide academic decisions, these privacy concerns become more pronounced (Bogardus Cortez, 2016). Educators need to understand, and teach students, how AI and other computing innovations impact our lives, both positively and negatively.
Training AI depends on large-scale data collection, and this fundamental aspect of AI poses a threat to privacy and security. Even data gathered for benign purposes can be used in ways that jeopardize privacy (Reese, 2020).
Byron Reese's podcast conversation with Nir Bar-Lev delves deeper into the broader implications of AI and privacy. As Bar-Lev notes, AI's ability to collect and analyze data at scale means "privacy as we know it no longer exists" (Reese, 2020). Beyond this alarming statement, AI systems can inherit biases from their training data, leading to discriminatory outcomes such as racial misidentification in facial recognition technology. Data breaches are another significant concern: the large datasets collected for AI can become targets for cyberattacks, exposing individuals' private information and leading to identity theft. Moreover, AI technologies can create deepfakes—realistic but fake audio or video content—that can be used for misinformation, defamation, or blackmail. These examples highlight the potential for AI misuse and underscore the importance of ethical guidelines to protect privacy and ensure responsible use.
As educators, we can teach students about privacy and responsible use. The ISTE Standards for Educators emphasize teaching students to manage their digital identities, understand data collection, and maintain privacy. October, Cybersecurity Awareness Month, provides an excellent opportunity to engage students in these discussions. In my classroom, I focus on teaching students practical ways to protect their data, including a guided tour of multifactor authentication and the importance of regular software updates. Multifactor authentication adds an extra layer of security by requiring multiple forms of verification, while software updates patch vulnerabilities, keeping systems secure. Following these discussions, students are introduced to a stimulus question where they apply their knowledge to analyze a technology's data collection, its benefits and harms, and its security or privacy concerns.
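The guided tour of multifactor authentication pairs well with a peek under the hood. The six-digit codes from authenticator apps come from a time-based one-time password (TOTP) scheme standardized in RFC 6238: the server and the phone share a secret key, and both derive the same short code from the current 30-second time window. A classroom-style sketch, using only the Python standard library (the secret here is an invented demo value):

```python
# Sketch of a time-based one-time password (TOTP, RFC 6238) -- the
# mechanism behind many multifactor authentication apps. Both sides
# share a secret and compute the same code for the current time window.

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: float, step: int = 30, digits: int = 6) -> str:
    counter = int(for_time // step)              # current 30-second window
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = b"classroom-demo-secret"         # hypothetical shared key
now = time.time()
phone_code = totp(shared_secret, now)            # what the authenticator app shows
server_code = totp(shared_secret, now)           # what the server expects
print(phone_code == server_code)                 # both sides agree -> login succeeds
```

Because the code changes every 30 seconds and is derived from a secret an attacker does not have, a stolen password alone is no longer enough, which is precisely the point of the second factor.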
By engaging in inquiry-based activities, students deepen their understanding of the complex landscape of technology, honing their ability to weigh a technology's advantages against its privacy implications. This approach empowers students to critically evaluate modern technologies and understand the significance of data protection. Additionally, I have students participate in activities that encourage them to develop opinions on the privacy tradeoffs of today's AI tools. For instance, they watch a video on facial recognition technology, exploring the balance between convenience and privacy. Facial recognition technology offers numerous benefits, such as increased security and ease of access. Yet even as it simplifies everyday processes, it raises significant privacy concerns. As students explore these complex issues, they become more adept at recognizing the nuanced relationship between AI's conveniences and its privacy challenges.
In conclusion, AI and privacy are intricately linked. While AI offers significant benefits, it also poses substantial privacy challenges. As educators, we must equip ourselves and our students with the knowledge to navigate this landscape. By understanding the security risks of AI, we can make informed decisions about our digital footprints and privacy. Additionally, ongoing discussions and activities, especially during Cybersecurity Awareness Month, can foster a deeper understanding of these critical issues. As educators and students, we must take on the responsibility of becoming empowered digital citizens, fully aware of the tradeoffs inherent in modern technologies.
References:
Above the Noise. (2017, December 6). Is facial recognition invading your privacy? [Video]. YouTube. https://www.youtube.com/watch?v=f5qgOqNQ7zY
Amell, S. (2023, June 16). How to train a generative AI model. Medium. https://medium.com/@iamamellstephen/how-to-train-a-generative-ai-model-1ab605615acd
Bogardus Cortez, M. (2016, October 24). 4 tips to help schools with privacy and security. EdTech Focus on K-12. https://edtechmagazine.com/k12/article/2016/10/4-tips-help-schools-privacy-and-security-compliance
Code.org. (2020, December 1). Ethics & AI: Privacy [Video]. YouTube. https://www.youtube.com/watch?v=zNxw5gJtHLc
Erb, R. (2024). Privacy implications of AI [Image]. Created using Adobe Firefly.
Leffer, L. (2023, October 19). Your personal information is probably being used to train generative AI models. Scientific American. https://www.scientificamerican.com/article/your-personal-information-is-probably-being-used-to-train-generative-ai-models/
Reese, B. (Host). (2020, February 19). A conversation with Nir Bar-Lev (No. 107) [Audio podcast episode]. In Voices in AI. GigaOm. https://voicesinai.com/episode/episode-107-a-conversation-with-nir-bar-lev/
Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach (3rd ed.). Pearson.