Student data privacy laws are designed to safeguard students' sensitive personal information. These laws establish parents' and eligible students' rights to access and control their educational records [U.S. Department of Education, Student Privacy Policy Office].
Data Sharing Restrictions: Student data may be shared with individuals, organizations, or vendors only with explicit parental or eligible-student consent, except under specific legal exceptions. This includes refraining from posting student information on public platforms or using unapproved third-party applications that could compromise data privacy [U.S. Department of Education, Student Privacy Policy Office].
AI models learn from the data they are trained on. If that data is skewed, incomplete, or reflects existing societal inequalities, the AI system will likely perpetuate and even amplify those biases. The result can be distorted outputs that cause harm and trauma. Understanding these types of bias is crucial for developing and using AI ethically, especially in education.
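To make that mechanism concrete, here is a minimal sketch (entirely hypothetical, synthetic data; assumes Python with NumPy and scikit-learn) of a simple classifier trained on records where one group is underrepresented and was historically given fewer positive outcomes. The trained model does not just repeat that gap in its predictions, it widens it.

```python
# A minimal sketch using synthetic data: a classifier trained on a skewed
# dataset reproduces (and here amplifies) the skew in its predictions.
# Assumes NumPy and scikit-learn; the numbers below are hypothetical
# illustrations, not a real student dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two groups drawn from the SAME underlying ability distribution,
# but the minority group is underrepresented and was historically
# recorded with far fewer positive outcomes.
n_majority, n_minority = 900, 100
group = np.concatenate([np.zeros(n_majority), np.ones(n_minority)])
scores = rng.normal(0.0, 1.0, n_majority + n_minority)  # standardized prior scores, same for both groups
labels = np.concatenate([
    rng.random(n_majority) < 0.6,   # 60% positive outcomes recorded for the majority group
    rng.random(n_minority) < 0.2,   # only 20% for the minority group, reflecting historical inequity
]).astype(int)

# The model is given group membership as a feature and learns the inequity.
X = np.column_stack([scores, group])
model = LogisticRegression().fit(X, labels)

# Predicted positive rates diverge even more sharply than the training data:
# the historical gap is not just repeated, it is amplified.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted positive rate = {rate:.2f}")
```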
Mitigating bias in AI is crucial because it ensures fairness, accuracy, and trust, preventing harm and promoting equitable outcomes across all aspects of society, especially in critical areas like education.
Did you know that if your AI tool doesn't know the answer, it may confidently make something up?
Distorted images, plausible but incorrect answers to math questions, and fabricated sources or historical "facts" are all examples of AI hallucinations (errors). The consequences of AI hallucinations are that they can spread false information, waste resources, or cause harm. To avoid these pitfalls, it is important to critically evaluate all output from an AI tool.
ALWAYS KEEP A HUMAN IN THE LOOP!