This page outlines our commitment to using AI technologies, including Large Language Models (LLMs), responsibly. Our goal is to ensure that our AI-supported resources enhance learning effectiveness while upholding the highest ethical standards, serving as a model for responsible digital practice in education.
Core Principles Guiding Our AI Use
We ground all AI integration in these three foundational ethical principles:
Fairness and Equity: We actively work to prevent and mitigate biases—including those related to gender, culture, and socioeconomic background—that may be present in AI outputs, ensuring equitable and relevant experiences for all Indonesian students.
Transparency and Accountability: We clearly disclose where AI tools are used to design and structure educational materials. Our educators retain final oversight and responsibility for all content generated or suggested by AI.
Data Privacy and Security: We adhere strictly to data protection standards, ensuring that student and educator personal data are never used to train or enhance public AI models. All interactions are handled securely and confidentially.
Ethical Risks and Mitigation Strategies
Integrating AI carries inherent risks. We have identified key potential pitfalls and established the following strategies to mitigate them:
Ethical Risk: Algorithmic Bias
Description: AI models may perpetuate or amplify cultural, regional, or gender stereotypes present in their training data (e.g., generating only male scientist examples, or contexts irrelevant to local students).
Mitigation Strategy: Prompt Engineering Review. All core content-generation prompts (especially for scenarios and examples) undergo a bias review by a human educator before deployment. We explicitly instruct the AI to use diverse, localized names and contexts.

Ethical Risk: Lack of Contextual Relevance
Description: AI may generate content that is technically correct but lacks the pedagogical or cultural nuance needed for Indonesian 4th-grade students (e.g., analogies involving snow or American sports).
Mitigation Strategy: Localization Mandate. All AI tools are prompted with an explicit "Indonesian 4th Grade Context" mandate, and human review verifies that the output aligns with the 2025 Curriculum's conceptual goals.

Ethical Risk: Intellectual Dependence
Description: Students or educators may rely too heavily on AI outputs without critical thinking or conceptual justification (e.g., using AI to solve problems without understanding the process).
Mitigation Strategy: Focus on Justification (Assessment as Learning). Our curriculum design intentionally limits AI use for direct answers; instead, AI generates reflection prompts and scaffolding hints that require students to articulate why an answer is correct.

Ethical Risk: Data Security and Privacy
Description: Proprietary lesson plans or student interaction data could be exposed or used without consent.
Mitigation Strategy: Input Protocol. We use AI tools only with anonymized, non-personal data, and educators are trained never to input sensitive student information or confidential institutional documents into public-facing AI models.
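The input protocol can be made concrete as a pre-processing step that redacts identifying tokens before any text reaches an external AI tool. The patterns and redaction tags below are an illustrative sketch, not an exhaustive or production PII filter:

```python
import re

# Illustrative redaction rules (assumed patterns for this sketch):
# 10-digit student IDs (NISN-style), email addresses, and Indonesian
# mobile numbers are replaced with neutral placeholder tags.
REDACTIONS = [
    (re.compile(r"\b\d{10}\b"), "[STUDENT_ID]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\+62|0)8\d{8,11}\b"), "[PHONE]"),
]

def anonymize(text: str) -> str:
    """Strip personally identifying tokens before text is sent to any AI tool."""
    for pattern, tag in REDACTIONS:
        text = pattern.sub(tag, text)
    return text
```

A rule-based filter like this is a first line of defense only; educators still review inputs manually, since regular expressions cannot catch names or free-text identifiers.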
Example of Bias Detection and Revision
Modeling ethical practice means demonstrating the process of refinement. Here is how we detect and revise potential bias in a teaching prompt:
Original prompt: "Generate a word problem about profit and debt. Use a scenario involving a single character starting a new business."
Analysis: This prompt is neutral, but historically, AI often defaults to generating scenarios with male names, certain professions, and generic cultural settings.
Revised prompt: "Generate a complex word problem about profit and debt for an Indonesian student. The main character must be female (e.g., named Siti or Dewi) and the scenario must relate to a locally common small business (e.g., selling es campur or snacks at the market). Ensure the numbers are positive and negative integers."
Result: The revised prompt forces gender diversity and ensures cultural/economic relevance, yielding a fairer, more contextualized educational resource.
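The same revision pattern can be applied mechanically by prepending the localization mandate to every content-generation prompt. The wrapper function and the mandate wording below are a hypothetical sketch of this approach, not our exact production prompt:

```python
# Hypothetical wording of the "Indonesian 4th Grade Context" mandate.
CONTEXT_MANDATE = (
    "Context: Indonesian 4th-grade students. "
    "Use diverse, localized names (e.g., Siti, Dewi, Budi) and locally "
    "common settings (e.g., a market stall selling es campur or snacks). "
    "Avoid analogies unfamiliar in Indonesia, such as snow or American sports."
)

def localize_prompt(base_prompt: str) -> str:
    """Prepend the localization mandate to a content-generation prompt."""
    return f"{CONTEXT_MANDATE}\n\n{base_prompt}"
```

Wrapping prompts this way ensures the mandate is applied consistently, while the human bias review still checks each generated output before classroom use.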