This information was created using Google Gemini. The first prompt was: "I want to create a glossary of terms for teachers which explains cyber attacks, cyber incidents and malicious use of the internet. This is to provide them with an idea of the digital world which children are entering; a description of social media applications and their dangers would also be useful." A follow-up prompt asked Gemini to create a glossary of terms for school teachers who might be using AI, covering terms such as hallucination and deepfake, and other ways AI can be misused.
Cyber Attack: An attempt by an individual or organisation to gain unauthorised access to a computer system, network, or digital device with the intent to cause damage, steal data, disrupt services, or extort money. Cyber attacks can range from simple phishing attempts to sophisticated nation-state-sponsored espionage.
Cyber Incident: A broader term referring to any event that compromises the confidentiality, integrity, or availability of an information system, or the information it processes, stores, or transmits. This includes successful cyber attacks, but also covers accidental data breaches, hardware failures, or even natural disasters that impact digital systems. Not all cyber incidents are malicious.
Malicious Use of the Internet: Refers to any activity conducted online with harmful intent. This encompasses a wide range of behaviours, including but not limited to cyber attacks, online harassment, distribution of illegal content, scams, and the spreading of misinformation or disinformation.
Phishing: A type of social engineering attack where an attacker attempts to trick individuals into revealing sensitive information (like usernames, passwords, credit card details) by disguising themselves as a trustworthy entity in an electronic communication (e.g., email, text message, social media message).
Smishing: Similar to phishing, but specifically uses SMS (text messages) to deliver the malicious link or request for information.
Vishing: Phishing conducted over the phone (voice phishing), where the attacker, often posing as a bank, IT support, or another trusted organisation, tries to trick the victim into revealing information or taking a harmful action, such as transferring money.
Malware (Malicious Software): A broad term for any software intentionally designed to cause damage to a computer, server, client, or computer network, or to gain unauthorised access to data. Common types include:
Virus: A type of malware that attaches itself to legitimate programs and replicates itself, spreading from one computer to another, often causing damage to files or the system.
Worm: A standalone malware program that replicates itself to spread to other computers, often without user intervention, consuming system resources or bandwidth.
Ransomware: A type of malware that encrypts a victim's files or locks their computer, demanding a ransom payment (usually in cryptocurrency) in exchange for decryption or unlocking.
Spyware: Software that secretly gathers information about a user and their computer activity without their knowledge or consent, often transmitting this data to a third party.
Adware: Software that automatically displays or downloads unwanted advertisements when a user is online. While often annoying, some adware can also be a security risk.
Trojan (Horse): A type of malware that disguises itself as legitimate software, tricking users into installing it. Once installed, it can perform malicious activities, such as granting remote access to the attacker or stealing data.
Social Engineering: The psychological manipulation of people into performing actions or divulging confidential information. It relies on human error rather than technical vulnerabilities, often used in phishing, pretexting, and baiting attacks.
Data Breach: A security incident in which sensitive, protected, or confidential data is copied, transmitted, viewed, stolen, or used by an unauthorised individual.
Identity Theft: The fraudulent acquisition and use of a person's private identifying information (such as name, social security number, or bank account numbers) for financial gain.
Cyberbullying: The use of electronic communication to bully a person, typically by sending messages of an intimidating or threatening nature.
Online Grooming: The process by which an individual builds an emotional connection and trust with a child online, often with the intention of sexual abuse or exploitation.
DDoS (Distributed Denial of Service) Attack: An attempt to disrupt the normal traffic of a targeted server, service, or network by overwhelming the target or its surrounding infrastructure with a flood of internet traffic from multiple compromised computer systems.
Vulnerability: A weakness or flaw in a system, application, or network that an attacker could exploit.
Exploit: A piece of software, data, or sequence of commands that takes advantage of a vulnerability in software or hardware to cause unintended or unanticipated behaviour, usually in order to gain control of a computer system or to steal data.
Dark Web: A part of the internet that is not indexed by traditional search engines and requires specific software (like Tor) to access. It is often associated with illegal activities, but also serves as a platform for privacy and anonymity.
Social Media Application: Digital platforms that allow users to create and share content, interact with others, and participate in virtual communities. They facilitate networking, communication, and the exchange of information on a global scale.
TikTok: A short-form video hosting service where users can create and share short, looping videos set to music or other audio.
Dangers: Exposure to inappropriate content, privacy concerns (data collection), cyberbullying, addiction, pressure to take part in viral trends and challenges (some of which can be physically risky), and algorithmic echo chambers.
Instagram: A photo and video-sharing social networking service where users can upload, edit, and share media, follow other users, and use direct messaging.
Dangers: Body image issues and low self-esteem (due to curated content and filters), cyberbullying, privacy risks, exposure to predatory individuals, oversharing, and online scams.
Snapchat: A multimedia messaging app that allows users to send "snaps" (photos or videos) that disappear after a short time. It also features filters, lenses, and "stories."
Dangers: The ephemeral nature of snaps can create a false sense of security around risky content; sexting; privacy issues (Snap Map location sharing); cyberbullying; and addiction to maintaining "streaks".
YouTube: A video-sharing platform where users can upload, view, rate, share, comment on, and report videos. It hosts a wide variety of content, from educational videos to entertainment.
Dangers: Exposure to inappropriate or harmful content, misinformation/disinformation, excessive screen time, exposure to online predators via comments/messages, commercialism (ads and product placement).
Discord: A voice, video, and text communication service used by millions to chat and hang out with communities and friends. Popular among gamers.
Dangers: Exposure to inappropriate content in unmoderated servers, cyberbullying, online grooming, scams, privacy concerns (personal information sharing in servers).
Roblox: An online platform and storefront where users can play games created by other users. It combines gaming with social interaction.
Dangers: Exposure to inappropriate user-generated content, in-app purchases and predatory monetisation, online grooming within games, cyberbullying, exposure to scams.
Minecraft: A sandbox video game where players explore a blocky, procedurally generated 3D world, and can discover and extract raw materials, craft tools, build structures or earthworks, and play with others online.
Dangers: Similar to Roblox regarding user-generated content, online grooming in multiplayer servers, cyberbullying, and potential for excessive screen time.
Facebook: A social networking service where users can create profiles, connect with friends and family, share updates, photos, and videos, and join groups. While perhaps less popular with very young children, it's still widely used by older teens and adults.
Dangers: Privacy concerns (extensive data collection, potential for oversharing), cyberbullying, misinformation and disinformation, exposure to scams, online predators.
X (formerly Twitter): A microblogging and social networking service where users post and interact with "posts" (short text messages, images, videos).
Dangers: Exposure to hate speech and extremist content, cyberbullying, misinformation and disinformation, addiction to news cycles and instant updates.
Artificial Intelligence (AI): The overarching field of computer science dedicated to creating machines that can perform tasks typically requiring human intelligence. This includes learning, problem-solving, understanding language, and decision-making. AI is a broad term encompassing various subfields.
Machine Learning (ML): A subset of AI where computers learn from data without being explicitly programmed. Instead of following a fixed set of instructions, ML algorithms identify patterns in large datasets and use those patterns to make predictions or decisions.
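For teachers curious about what "learning from data" looks like in practice, the short Python sketch below is a minimal illustration only. It assumes the widely used scikit-learn library is installed, and the study-time figures are invented purely for the example: the model works out the relationship between hours studied and test scores from examples, rather than being given a formula.

    # A minimal sketch of machine learning, assuming Python with the
    # scikit-learn library installed. The figures are invented for illustration.
    from sklearn.linear_model import LinearRegression

    # Example data: hours studied, and the test score each student achieved.
    hours_studied = [[1], [2], [3], [4], [5]]
    test_scores = [52, 58, 65, 71, 78]

    # The model is never given a formula; it finds the pattern in the examples.
    model = LinearRegression()
    model.fit(hours_studied, test_scores)

    # It can then make a prediction for a value it has never seen.
    predicted = model.predict([[6]])
    print(f"Predicted score after 6 hours of study: {predicted[0]:.0f}")

The point of the sketch is the contrast with ordinary programming: nobody writes a rule such as "each extra hour adds about six or seven marks"; the model infers that pattern from the examples it is shown.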
Generative AI (GenAI): A type of AI that can create new content, such as text, images, audio, or video, by learning from vast amounts of existing data. Examples include tools that write essays, generate artwork, or compose music.
Large Language Model (LLM): A specific type of Generative AI, typically trained on enormous datasets of text and code. LLMs are designed to understand, generate, and predict human language, making them capable of tasks like writing essays, answering questions, summarising texts, and translating languages (e.g., ChatGPT, Gemini).
Algorithm: A step-by-step set of rules or instructions that a computer follows to solve a problem or perform a specific task. Algorithms are the "brains" behind AI systems, determining how they process information and make decisions.
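To make the idea of a step-by-step recipe concrete, here is a very small illustrative algorithm written in Python (the marks are invented) that finds the highest mark in a list of test results:

    # A simple algorithm: find the highest mark in a list of test results.
    # The marks below are invented purely for illustration.
    marks = [67, 82, 54, 91, 75]

    highest = marks[0]       # Step 1: start by assuming the first mark is highest.
    for mark in marks:       # Step 2: look at each mark in turn.
        if mark > highest:   # Step 3: if this mark beats the best seen so far...
            highest = mark   #         ...remember it instead.

    print(f"The highest mark is {highest}")  # Step 4: report the result.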
Prompt: The input or instruction given to an AI model to initiate a response or action. For generative AI, this is often a text command or question (e.g., "Write an essay about the causes of World War I").
Prompt Engineering: The skill of crafting effective and precise prompts to elicit desired and accurate responses from an AI model. It involves understanding how AI interprets instructions and iterating to refine outputs.
Bias (in AI): Unfair favouritism towards certain ideas, groups, or outcomes within an AI system. This often arises because the data used to train the AI was unbalanced, incomplete, or reflected existing societal prejudices, leading the AI to perpetuate or even amplify these biases.
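As a small, hedged illustration of how bias gets "baked in", the sketch below (Python with scikit-learn; the hiring data is entirely invented) trains a simple model on historical decisions that favoured one group. Because the past decisions were prejudiced, the model learns to treat group membership itself as a reason to reject an applicant.

    # A minimal sketch of learned bias, assuming Python with scikit-learn.
    # Each training row is [years_of_experience, group], where group 0 and 1
    # stand for two demographic groups. The labels reflect an invented,
    # prejudiced history in which group 1 applicants were not hired.
    from sklearn.tree import DecisionTreeClassifier

    X_train = [[5, 0], [3, 0], [6, 0], [4, 0], [5, 1], [6, 1]]
    y_hired = [1,      1,      1,      1,      0,      0]

    model = DecisionTreeClassifier()
    model.fit(X_train, y_hired)

    # Two applicants with identical experience, differing only in group:
    print(model.predict([[5, 0], [5, 1]]))  # expected output: [1 0]

The model has not been told to discriminate; it has simply reproduced the pattern in its training data, which is exactly the problem described above.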
Dataset: A collection of organised information (e.g., text, images, numbers) used to train, test, or evaluate an AI system's performance. The quality, diversity, and representativeness of a dataset are crucial for an AI's accuracy and fairness.
Hallucination (AI Hallucination): When an AI model generates information that is factually incorrect, nonsensical, or made-up, presenting it as if it were true. This happens when the AI "predicts" plausible-sounding but false information based on its training data, rather than having a true understanding of facts. Example for teachers: A student uses an AI to research a historical event, and the AI generates a detailed but entirely fabricated account of a battle that never happened, complete with fictional characters and dates.
Deepfake: A video, image, or audio recording that has been manipulated or entirely generated using AI to appear authentic, often depicting someone saying or doing something they never did. Deepfakes use sophisticated AI techniques to swap faces, alter expressions, or synthesise voices with high realism. Example for teachers: A deepfake video circulating online shows a public figure (e.g., a politician or celebrity) making a controversial statement they never uttered, potentially causing confusion or spreading misinformation among students.
Algorithmic Discrimination: When an AI system's decisions or outputs unfairly disadvantage certain individuals or groups due to biases embedded in its algorithms or training data. Example for teachers: An AI-powered grading tool, if not carefully designed and monitored, could inadvertently give lower scores to students whose writing styles differ from the dominant language patterns in its training data, disproportionately affecting non-native English speakers.
Data Poisoning: A malicious attack where false, misleading, or biased data is intentionally fed into an AI model's training dataset, causing the AI to learn incorrect patterns or produce harmful outputs. Example for teachers: An attacker could inject malicious data into a publicly accessible dataset used to train an educational AI chatbot, leading the chatbot to generate inappropriate or dangerous responses.
Plagiarism (AI-Assisted): Using AI tools to generate text, code, or other content for academic assignments and presenting it as one's own original work without proper attribution or permission from the teacher. Example for teachers: A student submits an essay entirely written by an LLM, attempting to pass it off as their own work, thus bypassing the learning process and demonstrating a lack of original thought.
Academic Dishonesty (AI-related): A broader term encompassing various forms of cheating or dishonest behaviour involving AI, including plagiarism, using AI to solve problems during exams, or relying on AI to complete assignments without engaging in the learning process. Example for teachers: A student uses an AI math solver during a take-home test that explicitly prohibits AI assistance, or uses an AI coding assistant to complete a programming assignment without understanding the underlying concepts.
Over-reliance/Dependency: Becoming overly dependent on AI tools to complete tasks, which can hinder the development of critical thinking, problem-solving, and creative skills, particularly in students. Example for teachers: Students habitually turn to AI to generate ideas for projects or write summaries, rather than engaging in brainstorming, research, or analytical reading themselves, leading to underdeveloped intellectual "muscles."
Privacy Concerns (AI and Data): Risks associated with AI systems collecting, storing, and processing sensitive personal or student data without adequate transparency, consent, or security measures, potentially leading to data breaches or misuse. Example for teachers: An educational AI platform used by the school collects detailed student usage data, including learning patterns and personal identifiers, without clear consent from parents or sufficient safeguards, raising concerns about who has access to this information and how it might be used.
Misinformation/Disinformation (AI-generated): The spread of false or inaccurate information, often created or amplified by AI, which can be difficult to distinguish from credible sources. Misinformation is unintentionally false, while disinformation is deliberately deceptive. Example for teachers: An AI-generated article or social media post containing false historical claims or unproven scientific theories spreads rapidly online, and students may encounter it and struggle to verify its accuracy.
Echo Chamber (AI-enhanced): A situation where an AI's algorithms reinforce a user's existing beliefs by primarily showing them content that aligns with their past interactions, limiting exposure to diverse perspectives and potentially narrowing their worldview. Example for teachers: An AI-powered news aggregator or social media feed used by students consistently shows them content from only one political viewpoint, making it harder for them to understand different perspectives on complex issues.
Beyond the risks tied to particular apps, social media use in general presents a number of dangers for children:
Cyberbullying: The pervasive nature of online interactions can make children vulnerable to relentless bullying, leading to significant emotional distress, anxiety, and depression.
Online Grooming and Exploitation: Predators use social media to build relationships with children, often leading to sexual exploitation. They may manipulate children into sharing inappropriate images or meeting in person.
Exposure to Inappropriate Content: Children can inadvertently or intentionally encounter age-inappropriate content, including violence, pornography, hate speech, or self-harm promotion.
Privacy Risks: Children often overshare personal information online (location, school, photos, future plans), making them vulnerable to identity theft, stalking, or other forms of exploitation. Privacy settings are often complex and not fully understood.
Mental Health Impact:
Anxiety and Depression: Pressure to present a perfect image, fear of missing out (FOMO), and constant comparison with others' curated lives can lead to feelings of inadequacy, anxiety, and depression.
Body Image Issues: Exposure to highly filtered and edited images can contribute to unrealistic beauty standards and negative body image.
Sleep Deprivation: Excessive screen time, especially before bed, can disrupt sleep patterns.
Addiction: The design of social media platforms can be highly addictive, leading to compulsive use and withdrawal from other activities.
Misinformation and Disinformation: Children may struggle to discern credible information from false or misleading content, potentially impacting their worldview and decision-making.
Scams and Fraud: Children can be targeted by various online scams, including fake giveaways, phishing attempts, or requests for money.
Loss of Critical Thinking and Attention Span: The constant stream of short-form content can hinder the development of deeper critical thinking skills and shorten attention spans.
Algorithmic Bias and Echo Chambers: Algorithms can curate content that reinforces existing beliefs, limiting exposure to diverse perspectives and potentially leading to radicalisation.
Digital Footprint: Everything posted online, even if deleted, can leave a lasting "digital footprint" that could impact future opportunities (e.g., college admissions, job applications).
By understanding these terms and the associated risks, teachers can better support children in navigating the complex and ever-evolving digital world safely and responsibly. Open communication with students, fostering digital literacy, and collaboration with parents are crucial in this endeavour.