Rethinking Policy: Who Governs Algorithmic Knowledge?
Before you propose your own ideas on how to balance AI and human authority in knowledge curation, it is worth understanding the current landscape of policy efforts and ethical frameworks shaping this global conversation.
Across the world, governments, institutions, and advocacy groups are beginning to recognise the need for greater oversight of how algorithms influence public life. In education—where algorithmic systems quietly shape what is seen, valued, or excluded—this conversation becomes urgent.
Below are some key developments and frameworks offering insight, caution, and possibilities for future governance:
🇪🇺 The Digital Services Act
Adopted by the European Union in 2022, the DSA establishes new rules for transparency and accountability on very large online platforms. It requires these platforms to explain how their recommender systems work, to offer users at least one recommendation option not based on profiling, and to let users challenge content-moderation decisions.
Although not aimed solely at education, the DSA has significant implications for how learners encounter knowledge—especially through search engines, learning portals, and social platforms. It invites a rethinking of agency and choice in AI-curated educational environments.
🇺🇸 The Algorithmic Accountability Act
In the United States, this proposed legislation would require companies to conduct impact assessments of their automated decision systems, evaluating them for bias, discrimination, and harmful outcomes, particularly in sensitive areas such as education, employment, and healthcare.
Though not yet enacted, the Act reflects increasing concern about the social impact of AI. It signals a future where algorithms must not only function but also explain and justify their decisions—especially where human wellbeing and opportunity are concerned.
🌍 UNESCO’s Recommendation on the Ethics of Artificial Intelligence
Adopted by UNESCO's 193 member states in 2021, this is the first global framework to guide the ethical development and use of AI. It upholds principles of human rights, transparency, and democratic governance.
Crucially, it names education as a key domain in which ethical oversight is essential, urging that AI systems be designed to expand, not restrict, access to knowledge—and that they do not reinforce inequality, surveillance, or epistemic harm.
🔒 Indigenous Data Governance: CARE and OCAP Principles
Beyond government policy, Indigenous-led frameworks such as CARE (Collective benefit, Authority to control, Responsibility, Ethics) and OCAP (Ownership, Control, Access, Possession) offer alternative visions for data and knowledge governance.
These models centre community agency and resist extractive or exploitative uses of data. They remind us that knowledge infrastructures must be accountable not only to institutions or users, but also to the communities and cultures they affect—especially those historically marginalised.
Together, these initiatives point toward a broader goal: ensuring that the future of knowledge is not governed solely by efficiency or engagement metrics, but by ethics, equity, and collective agency.